Column summary for the records below (reconstructed from the flattened dataset-viewer header):

id            int64           580 to 79M
url           stringlengths   31 to 175
text          stringlengths   9 to 245k
source        stringlengths   1 to 109
categories    stringclasses   160 values
token_count   int64           3 to 51.8k
492,445
https://en.wikipedia.org/wiki/Compass%20%28drawing%20tool%29
A compass, also commonly known as a pair of compasses, is a technical drawing instrument that can be used for inscribing circles or arcs. As dividers, it can also be used as a tool to mark out distances, in particular, on maps. Compasses can be used for mathematics, drafting, navigation and other purposes. Prior to computerization, compasses and other tools for manual drafting were often packaged as a set with interchangeable parts. By the mid-twentieth century, circle templates supplemented the use of compasses. Today those facilities are more often provided by computer-aided design programs, so the physical tools serve mainly a didactic purpose in teaching geometry, technical drawing, etc. Construction and parts Compasses are usually made of metal or plastic, and consist of two "legs" connected by a hinge which can be adjusted to allow changing of the radius of the circle drawn. Typically one leg has a spike at its end for anchoring, and the other leg holds a drawing tool, such as a pencil, a short length of just pencil lead or sometimes a pen. Handle The handle, a small knurled rod above the hinge, is usually about half an inch long. Users can grip it between their pointer finger and thumb. Legs There are two types of leg in a pair of compasses: the straight or the steady leg and the adjustable one. Each has a separate purpose; the steady leg serves as the basis or support for the needle point, while the adjustable leg can be altered in order to draw different sizes of circles. Hinge The screw through the hinge holds the two legs in position. The hinge can be adjusted, depending on desired stiffness; the tighter the hinge-screw, the more accurate the compass's performance. The better quality compass, made of plated metal, is able to be finely adjusted via a small, serrated wheel usually set between the legs (see the "using a compass" animation shown above) and it has a (dangerously powerful) spring encompassing the hinge. This sort of compass is often known as a "pair of Spring-Bow Compasses". Needle point The needle point is located on the steady leg, and serves as the center point of the circle that is about to be drawn. Pencil lead The pencil lead draws the circle on a particular paper or material. Alternatively, an ink nib or attachment with a technical pen may be used. The better quality compass, made of metal, has its piece of pencil lead specially sharpened to a "chisel edge" shape, rather than to a point. Adjusting nut This holds the pencil lead or pen in place. Uses Circles can be made by pushing one leg of the compasses into the paper with the spike, putting the pencil on the paper, and moving the pencil around while keeping the legs at the same angle. Some people who find this action difficult often hold the compasses still and move the paper round instead. The radius of the intended circle can be changed by adjusting the initial angle between the two legs. Distances can be measured on a map using compasses with two spikes, also called a dividing compass (or just "dividers"). The hinge is set in such a way that the distance between the spikes on the map represents a certain distance in reality, and by measuring how many times the compasses fit between two points on the map the distance between those points can be calculated. Compasses and straightedge Compasses-and-straightedge constructions are used to illustrate principles of plane geometry. 
Although a real pair of compasses is used to draft visible illustrations, the ideal compass used in proofs is an abstract creator of perfect circles. The most rigorous definition of this abstract tool is the "collapsing compass"; having drawn a circle from a given point with a given radius, it disappears; it cannot simply be moved to another point and used to draw another circle of equal radius (unlike a real pair of compasses). Euclid showed in his second proposition (Book I of the Elements) that such a collapsing compass could be used to transfer a distance, proving that a collapsing compass could do anything a real compass can do. Variants A beam compass is an instrument, with a wooden or brass beam and sliding sockets, cursors or trammels, for drawing and dividing circles larger than those made by a regular pair of compasses. A scribe-compass is an instrument used by carpenters and other tradesmen. Some compasses can be used to draw circles, bisect angles and, in this case, to trace a line. It is the compass in its simplest form. Both branches are crimped metal. One branch has a pencil sleeve while the other branch is crimped with a fine point protruding from the end. A wing nut on the hinge serves two purposes: first, it tightens the pencil and, second, it locks in the desired distance when the wing nut is turned clockwise. Loose-leg wing dividers are made entirely of forged steel. The pencil holder, thumb screws, brass pivot and branches are all well built. They are used for scribing circles and stepping off repetitive measurements with some accuracy. A reduction compass, or proportional dividers, is used to reduce or enlarge patterns while conserving angles. Ellipse-drawing compasses are used to draw ellipses. As a symbol A pair of compasses is often used as a symbol of precision and discernment. As such, it finds a place in logos and symbols such as the Freemasons' Square and Compasses and in various computer icons. English poet John Donne used the compass as a conceit in "A Valediction: Forbidding Mourning" (1611). See also Dividers Circle Geometrography Masonic Square and Compasses Technical drawing tools References External links Beam or trammel compass (variant form) Mathematical tools Navigational equipment Stonemasonry tools Technical drawing tools
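The map-measuring use of dividers described in the article above amounts to simple scale arithmetic: the span of the divider points times the number of steps gives the distance on the map, and the map scale converts that to a ground distance. The sketch below is illustrative only; the function name, parameters and the 1:50,000 example are assumptions, not taken from the article.

    # Illustrative sketch only: ground distance from divider steps on a map.
    def ground_distance_km(steps, divider_span_cm, scale_denominator):
        # steps: how many times the dividers fit between the two points
        # divider_span_cm: spacing of the divider points, measured on the map, in cm
        # scale_denominator: e.g. 50000 for a 1:50,000 map
        map_cm = steps * divider_span_cm            # distance measured on the map
        ground_cm = map_cm * scale_denominator      # the same distance on the ground
        return ground_cm / 100_000                  # centimetres to kilometres

    # Example: dividers set to 2 cm fit 7 times between two towns on a 1:50,000 map.
    print(ground_distance_km(7, 2.0, 50_000))       # 7.0 km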
Compass (drawing tool)
Mathematics,Technology
1,208
64,570,030
https://en.wikipedia.org/wiki/HAT-P-18
HAT-P-18 is a K-type main-sequence star about 530 light-years away. The star is very old and has a concentration of heavy elements similar to the solar abundance. A survey in 2015 detected very strong starspot activity on HAT-P-18. Planetary system In 2010 a transiting hot Saturn-sized planet, HAT-P-18b, was detected. Its equilibrium temperature is 841 K. In 2014, observations utilizing the Rossiter–McLaughlin effect showed that HAT-P-18b is on a retrograde orbit, with an angle of 132° between the planet's orbital plane and the equatorial plane of the parent star. Transit-timing variation measurements in 2015 did not detect additional planets in the system. In 2016, optical transmission spectra of the planet indicated that the atmosphere lacks detectable clouds or hazes and is blue in color due to Rayleigh scattering of light. The atmosphere seems to be gradually evaporating, but at a slow rate: less than 2% of the planetary mass is lost per billion years. By contrast, spectra taken in 2022 showed extensive hazes and clear evidence of water vapour, along with a tail of escaping helium. The dayside temperature of HAT-P-18b was measured in 2019 to be 1004 K. References Hercules (constellation) K-type main-sequence stars Planetary systems with one confirmed planet Planetary transit variables J17052315+3300450
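The equilibrium temperature quoted above (841 K) is the standard radiative-balance estimate for an irradiated planet. As a hedged reminder of where such a number comes from (the formula is textbook radiative equilibrium, not something stated in the article, and the stellar radius R_*, effective temperature T_eff, orbital distance a and Bond albedo A_B are not given in the text above):

    T_\mathrm{eq} = T_\mathrm{eff}\,(1 - A_B)^{1/4}\,\sqrt{R_*/(2a)}

This form assumes full day-night heat redistribution; with less redistribution the dayside runs hotter, which is one reason the measured dayside temperature of 1004 K can exceed the global equilibrium value.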
HAT-P-18
Astronomy
295
14,881,584
https://en.wikipedia.org/wiki/TPBG
Trophoblast glycoprotein, also known as TPBG, 5T4, Wnt-Activated Inhibitory Factor 1 or WAIF1, is a human protein encoded by the TPBG gene. TPBG is an antagonist of the Wnt/β-catenin signalling pathway. Clinical significance 5T4 is an antigen expressed in a number of carcinomas. It is an N-glycosylated, transmembrane, 72 kDa glycoprotein containing eight leucine-rich repeats. 5T4 is often referred to as an oncofetal antigen, because of its expression in the foetal trophoblast (where it was first discovered), or as trophoblast glycoprotein (TPBG). 5T4 is found in tumours including colorectal, ovarian, and gastric carcinomas, and its expression is used as a prognostic aid in these cases. It has very limited expression in normal tissue but is widespread in malignant tumours throughout their development. One study found that 5T4 was present in 85% of a cohort of 72 colorectal carcinomas and in 81% of a cohort of 27 gastric carcinomas. Its confined expression appears to give 5T4 the potential to be a target for T cells in cancer immunotherapy. There has been extensive research into its role in antibody-directed immunotherapy through the use of the high-affinity murine monoclonal antibody mAb5T4 to deliver response modifiers (such as Staphylococcus aureus superantigen) accurately to a tumour. 5T4 is also the target of the cancer vaccine TroVax, which is in clinical trials for the treatment of a range of different solid tumour types. Interactions TPBG has been shown to interact with GIPC1. References Further reading Immunology
TPBG
Biology
405
30,560,133
https://en.wikipedia.org/wiki/Run-around%20coil
A run-around coil is a type of energy recovery heat exchanger most often positioned within the supply and exhaust air streams of an air handling system, or in the exhaust gases of an industrial process, to recover the heat energy. Generally, it refers to any intermediate stream used to transfer heat between two streams that are not directly connected for reasons of safety or practicality. It may also be referred to as a run-around loop, a pump-around coil or a liquid coupled heat exchanger. Description A typical run-around coil system comprises two or more multi-row finned tube coils connected to each other by a pumped pipework circuit. The pipework is charged with a heat exchange fluid, normally water, which picks up heat from the exhaust air coil and gives up heat to the supply air coil before returning again. Thus heat from the exhaust air stream is transferred through the pipework coil to the circulating fluid, and then from the fluid through the pipework coil to the supply air stream. The use of this system is generally limited to situations where the air streams are separated and no other type of device can be utilised since the heat recovery efficiency is lower than other forms of air-to-air heat recovery. Gross efficiencies are usually in the range of 40 to 50%, but more significantly seasonal efficiencies of this system can be very low, due to the extra electrical energy used by the pumped fluid circuit. The fluid circuit containing the circulating pump also contains an expansion vessel, to accommodate changes in fluid pressure. In addition, there is a fill device to ensure the system remains charged. There are also controls to bypass and shut down the system when not required, and other safety devices. Pipework runs should be as short as possible, and should be sized for low velocities to minimize frictional losses, hence reducing pump energy consumption. It is possible to recover some of this energy in the form of heat given off by the motor if a glandless pump is used, where a water jacket surrounds the motor stator, thus picking up some of its heat. The pumped fluid will have to be protected from freezing, and is normally treated with a glycol based anti-freeze. This also reduces the specific heat capacity of the fluid and increases the viscosity, increasing pump power consumption, further reducing the seasonal efficiency of the device. For example, a 20% glycol mixture will provide protection down to , but will increase system resistance by 15%. For the finned tube coil design, there is a performance maximum corresponding to an eight- or ten-row coil, above this the fan and pump motor energy consumption increases substantially and seasonal efficiency starts to decrease. The main cause of increased energy consumption lies with the fan, for the same face velocity, fewer coil rows will decrease air pressure drop and increase water pressure drop. The total energy consumption will usually be less than that for a greater number of coil rows with higher air pressure drops and lower water pressure drops. Energy transfer process Normally the heat transfer between airstreams provided by the device is termed as 'sensible', which is the exchange of energy, or enthalpy, resulting in a change in temperature of the medium (air in this case), but with no change in moisture content. 
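A minimal numerical sketch of the sensible heat recovery described above, using a simple effectiveness model: the recovered duty is the product of the effectiveness, the air-stream capacity rate and the exhaust-to-supply temperature difference. The 45% effectiveness, flow rate and temperatures below are assumed example values (the article only gives a typical gross efficiency range of 40 to 50%), and the glycol note is a rough indication of why antifreeze lowers the fluid's specific heat capacity, not data for any particular mixture.

    # Illustrative sketch only; all numerical inputs are assumptions, not article data.
    def recovered_duty_kw(effectiveness, mass_flow_kg_s, cp_kj_per_kg_k, t_exhaust_c, t_supply_c):
        # Sensible heat recovered by the run-around loop:
        # Q = effectiveness * m_dot * cp * (T_exhaust - T_supply)
        return effectiveness * mass_flow_kg_s * cp_kj_per_kg_k * (t_exhaust_c - t_supply_c)

    # Assumed example: 2 kg/s of air (cp about 1.005 kJ/kg.K), exhaust at 22 C,
    # outdoor supply at -5 C, 45% gross effectiveness (middle of the 40-50% range above).
    q_kw = recovered_duty_kw(0.45, 2.0, 1.005, 22.0, -5.0)
    print(round(q_kw, 1), "kW recovered")            # about 24.4 kW

    # The circulating water-glycol mixture has a lower specific heat capacity than plain
    # water (roughly 3.9 versus 4.19 kJ/kg.K for a ~20% glycol charge), so more fluid must
    # be pumped for the same duty; this is one reason seasonal efficiency falls, as noted above.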
Other types of air-to-air heat exchangers Thermal wheel, or rotary heat exchanger (including enthalpy wheel and desiccant wheel) Recuperator, or cross plate heat exchanger Heat pipe See also HVAC Energy recovery ventilation Heat recovery ventilation Regenerative heat exchanger Air handler Thermal comfort Indoor air quality CCSI References Heating, ventilation, and air conditioning Mechanical engineering Low-energy building Energy recovery Heating Sustainable building Energy conservation Industrial equipment Thermodynamics Heat transfer
Run-around coil
Physics,Chemistry,Mathematics,Engineering
767
24,009,569
https://en.wikipedia.org/wiki/Cyclic%20nucleotide-binding%20domain
Proteins that bind cyclic nucleotides (cAMP or cGMP) share a structural domain of about 120 residues. The best studied of these proteins is the prokaryotic catabolite gene activator (also known as the cAMP receptor protein; gene crp), in which the domain is composed of three alpha-helices and a distinctive eight-stranded, antiparallel beta-barrel structure. There are six invariant amino acids in this domain, three of which are glycine residues that are thought to be essential for maintenance of the structural integrity of the beta-barrel. cAMP- and cGMP-dependent protein kinases (cAPK and cGPK) contain two tandem copies of the cyclic nucleotide-binding domain. The cAPKs are composed of two different subunits, a catalytic chain and a regulatory chain, the latter containing both copies of the domain. The cGPKs are single-chain enzymes that include the two copies of the domain in their N-terminal section. Vertebrate cyclic nucleotide-gated ion channels also contain this domain. Two such cation channels have been fully characterized; one is found in rod cells, where it plays a role in visual signal transduction. Human proteins containing this domain CNBD1; CNGA1; CNGA2; CNGA3; CNGB1; CNGB3; HCN1; HCN2; HCN3; HCN4; KCNH1; KCNH2; KCNH3; KCNH4; KCNH5; KCNH6; KCNH7; KCNH8; PNPLA6; PNPLA7; PRKAR1A; PRKAR1B; PRKAR2A; PRKAR2B; PRKG1; PRKG2; RAPGEF2; RAPGEF3; RAPGEF4; RAPGEF6; RCNC2; SLC9A10; SLC9A11 References Further reading Protein domains Single-pass transmembrane proteins
Cyclic nucleotide-binding domain
Biology
426
48,633,927
https://en.wikipedia.org/wiki/Oat%20beta-glucan
Oat β-glucans are water-soluble β-glucans derived from the endosperm of oat kernels known for their dietary contribution as components of soluble fiber. Due to their property to lower serum total cholesterol and low-density lipoprotein cholesterol, and potentially reduce the risk of cardiovascular diseases, oat β-glucans have been assigned a qualified health claim by the European Food Safety Authority and the US Food and Drug Administration. History Oat products have been used for centuries for medicinal and cosmetic purposes; however, the specific role of β-glucan was not explored until the 20th century. β-glucans were first discovered in lichens, and shortly thereafter in barley. After joining Agriculture and Agri-Food Canada in 1969, Peter J Wood played an instrumental role in isolating and characterizing the structure and bioactive properties of oat β-glucan. A public interest in oat β-glucan arose after its cholesterol lowering effect was reported in 1984. In 1997, after reviewing 33 clinical studies performed over the previous decades, the FDA approved the claim that intake of at least 3 g of β-glucan from oats per day "as part of a diet low in saturated fat and cholesterol, may reduce the risk of heart disease." This marked the first time a public health agency claimed dietary intervention can actually help prevent disease. This health claim mobilized a dietary movement as physicians and dietitians for the first time could recommend intake of a specific food to directly combat disease. Since then, oat consumption has continued to gain traction in disease prevention with noted effects on ischemic heart disease and stroke prevention, but also in other areas like BMI reduction, blood pressure lowering and highly corroborated evidence for reduced blood serum cholesterol. Structural properties Cereal β-glucans – including β-glucan from oat, barley and wheat – are linear polysaccharides joined by 1,3 and 1,4 carbon linkages. The majority of cereal β-glucan bonds consist of 3 or 4 beta-1,4 glycosidic bonds (trimers and tetramers) interconnected by 1,3 linkages. In β-glucan, these trimers and tetramers are known as cellotriosyl and cellotetraosyl. Oats and barley differ in the ratio of cellotriosyl to cellotetraosyl, and barley has more 1-4 linkages with a degree of polymerization higher than 4. In oats, β-glucan is found mainly in the endosperm of the oat kernel, especially in the outer layers of that endosperm (a marked difference from barley, which contains β-glucan uniformly throughout the endosperm). Most oats contain 3–6% β-glucan by weight. Oats can be selectively bred based on favourable β-glucan levels. Often millers only process oat cultivars with at least 4% by weight β-glucan. Oat β-glucans are linear and linked at the 1,3 and 1,4 carbon sites. Oat β-glucans can form into a random coil structure and flow with Newtonian behaviour until they reach a critical concentration at which point they become pseudoplastic. The gelling ability of oat β-glucan correlates to the percentage of trimers. Extraction β-glucan extraction from oat can be difficult due to tendency of depolymerization – which often occurs in high pH. Thus β-glucan extraction is usually performed under a more neutral pH and generally at temperatures of 60-100 degrees Celsius. Usually β-glucan is solubilized in the extraction process with residual starch, which is then removed by hydrolysis with alpha-amylase. 
The residual solution usually contains coextracts of hemicelluloses and proteins which can then be separated through selective precipitation. Through wet milling, sieving, and solvent-extraction, oat beta-glucans can achieve up to 95% extraction purity. Viscosity of oat β-glucan In oats, β-glucan makes up the majority of the soluble fibre; however, oat β-glucans do become insoluble above a certain concentration. The total viscosity is determined by the level of solubility, the molecular weight, and the trimer-to-tetramer ratio. The lower the trimer-tetramer ratio, the higher the β-glucan viscosity in solution. A more viscous internal β-glucan solution generally leads to beneficial physiological effects – including a more pronounced hypoglycemic effect and lowered cholesterol levels, and a decrease in postprandial blood glucose levels. Physiological effects As fermentable fiber In the diet, β-glucans are a source of soluble, fermentable fiber – also called prebiotic fiber – which provides a substrate for microbiota within the large intestine, increasing fecal bulk and producing short-chain fatty acids as byproducts with wide-ranging physiological activities. This fermentation impacts the expression of many genes within the large intestine, which further affects digestive function and cholesterol and glucose metabolism, as well as the immune system and other systemic functions. Cholesterol In 1997, the FDA recognized the cholesterol lowering effect of oat β-glucan. In Europe, several health claim requests were submitted to the EFSA NDA Panel (Dietetic Products, Nutrition and Allergies), related to the role of β-glucans in maintenance of normal blood cholesterol concentrations and maintenance or achievement of a normal body weight. In July 2009, the Scientific Committee issued the following statements: On the basis of the data available, the Panel concludes that a cause-and-effect relationship has been established between the consumption of beta-glucans and the "reduction of blood cholesterol concentrations." The following wording reflects the scientific evidence: "Regular consumption of beta-glucans contributes to maintenance of normal blood cholesterol concentrations." In order to bear the claim, foods should provide at least 3 g/d of beta-glucans from oats, oat bran, barley, barley bran, or mixtures of non-processed or minimally processed beta-glucans in one or more servings. The target population is adults with normal or mildly elevated blood cholesterol concentrations. In November 2011, the EU Commission published its decision in favour of oat beta-glucans with regard to Article 14 of the EC Regulation on the labelling of foodstuffs with nutrition and health claim statements permitting oat beta-glucan to be described as beneficial to health. Following the opinion of the Panel on Dietetic Products, Nutrition and Allergies (NDA) the EFSA and the Regulation (EU) no. 1160/2011 of the Commission, foodstuffs through which 3 g/day of oat beta-glucan are consumed (1 g of oat beta-glucan per portion) are allowed to display the following health claim: "Oat beta-glucan reduces the cholesterol level in the blood. The lowering of the blood cholesterol level can reduce the risk of coronary heart disease." β-glucan lowers cholesterol in part by increasing the viscosity of digesta in the small intestine, although cholesterol reduction is greater in those with higher total cholesterol and LDL cholesterol in their blood. 
Additionally, studies suggest that it increases the activity of CYP7A1, a key enzyme in the synthesis of bile acids, thus increasing the excretion of cholesterol, and that it may have additional anti-atherogenic mechanisms. The degree of cholesterol reduction depends upon the particular strain of β-glucan, in a range of molecular weights between 26.8 and 3,000 kDa. Although more viscous β-glucans result in a more viscous solution of intestinal digesta, and thus more cholesterol uptake, after a certain molecular weight, β-glucans become less soluble and thus contribute less to solution viscosity. The intake of β-glucan in liquid form generally results in greater solubilization, and oat β-glucan is more effective at lowering cholesterol in juices than in hard foods like bread and cookies. Despite the recognized impact of viscosity on serum cholesterol levels, no current data exist comparing internal solution viscosity and serum cholesterol. Intake of oat β-glucan at daily amounts of at least 3 grams lowers total and low-density lipoprotein cholesterol levels by 5 to 10% in people with normal or elevated blood cholesterol levels. Digestion Throughout digestion, β-glucan alters the physical properties of digesta while chemicals in the digestive tract break down β-glucan, changing its composition. Fermentation of β-glucans by the gut microbiota results in the production of short-chain fatty acids and changes to gut microbes, as well as the depolymerization and structural change of the original β-glucan. In the stomach, β-glucans swell and cause gastric distension, which is associated with the signalling pathway of satiation (the feeling of fullness), leading to a decreased appetite. Studies demonstrating β-glucan's effect on delayed gastric emptying may differ owing to variations in food combination, β-glucan dosage, molecular weight, and food source. In the small intestine, β-glucan may reduce starch digestibility and glucose uptake, which is significant in the reduction of postprandial glucose levels. Oat β-glucans have a prebiotic effect in that they selectively stimulate the growth of specific strains of microbes in the colon, where the particular microbe stimulated depends on the degree of polymerization of the β-glucan. Specifically, Lactobacillus and Enterococcus are stimulated by all oat β-glucan, while Bifidobacterium bacteria are also stimulated by oat β-glucan oligosaccharides. Soluble β-glucan increases stool weight through the increase in microbial cells in the colon. Blood glucose Postprandial blood glucose levels become lower after consumption of a meal containing β-glucan as a result of increased gut viscosity, which delays gastric emptying and lengthens travel through the small intestine. In one review, the net decrease in blood glucose absorption reduced postprandial blood insulin concentrations, improving insulin sensitivity. A 2021 meta-analysis of clinical trials concluded that oat beta-glucan with molecular weights greater than 300 kg/mol reduced incremental area-under-the-curve by 23%, peak blood glucose by 28%, and insulin by 22% in a dose-responsive fashion, with similar results in participants with or without diabetes. Diabetic people who increased their daily consumption of beta-glucans by more than 3 grams per day for months also lost body weight. Cosmetics β-glucan is used in a variety of creams, ointments and powders with potential to affect collagen production and skin disorders.
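As a back-of-the-envelope check on the dose figures above (the 3 g/day threshold in the FDA and EFSA claims, and the 3–6% β-glucan content of most oats), the sketch below estimates how many grams of oats would supply the qualifying daily amount. It is purely illustrative arithmetic, ignores losses during processing and differences in solubility, and is not dietary guidance.

    # Rough illustration only: grams of oats needed to supply a target beta-glucan dose.
    def oats_needed_g(target_beta_glucan_g, beta_glucan_fraction):
        # beta_glucan_fraction: beta-glucan content by weight (most oats: 0.03 to 0.06)
        return target_beta_glucan_g / beta_glucan_fraction

    print(oats_needed_g(3.0, 0.03))   # 100 g of oats at 3% beta-glucan
    print(oats_needed_g(3.0, 0.06))   # 50 g of oats at 6% beta-glucan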
Wound healing and immunomodulation In preliminary research, oat β-glucan is being studied for its potential immunomodulatory effects, antitumour properties, and stimulation of collagen deposition, tissue granulation, reepithelization, and macrophage infiltration in the wound healing process. References Polysaccharides Immunomodulating drugs Oats
Oat beta-glucan
Chemistry
2,485
5,893,800
https://en.wikipedia.org/wiki/Prism%20compressor
A prism compressor is an optical device used to shorten the duration of a positively chirped ultrashort laser pulse by giving different wavelength components a different time delay. It typically consists of two prisms and a mirror. Figure 1 shows the construction of such a compressor. Although the dispersion of the prism material causes different wavelength components to travel along different paths, the compressor is built such that all wavelength components leave the compressor at different times, but in the same direction. If the different wavelength components of a laser pulse were already separated in time, the prism compressor can make them overlap with each other, thus causing a shorter pulse. Prism compressors are typically used to compensate for dispersion inside Ti:sapphire modelocked lasers. Each time the laser pulse inside travels through the optical components inside the laser cavity, it becomes stretched. A prism compressor inside the cavity can be designed such that it exactly compensates this intra-cavity dispersion. It can also be used to compensate for dispersion of ultrashort pulses outside laser cavities. Prismatic pulse compression was first introduced, using a single prism, in 1983 by Dietel et al. and a four-prism pulse compressor was demonstrated in 1984 by Fork et al. Additional experimental developments include a prism-pair pulse compressor and a six-prism pulse compressor for semiconductor lasers. The multiple-prism dispersion theory, for pulse compression, was introduced in 1982 by Duarte and Piper, extended to second derivatives in 1987, and further extended to higher order phase derivatives in 2009. An additional compressor, using a large prism with lateral reflectors to enable a multi-pass arrangement at the prism, was introduced in 2006. Principle of operation Almost all optical materials that are transparent for visible light have a normal, or positive, dispersion: the refractive index decreases with increasing wavelength. This means that longer wavelengths travel faster through these materials. The same is true for the prisms in a prism compressor. However, the positive dispersion of the prisms is offset by the extra distance that the longer wavelength components have to travel through the second prism. This is a rather delicate balance, since the shorter wavelengths travel a larger distance through air. However, with a careful choice of the geometry, it is possible to create a negative dispersion that can compensate positive dispersion from other optical components. This is shown in Figure 3. By shifting prism P2 up and down, the dispersion of the compressor can be both negative around refractive index n = 1.6 (red curve) and positive (blue curve). The range with a negative dispersion is relatively short since prism P2 can only be moved upwards over a short distance before the light ray misses it altogether. In principle, the α angle can be varied to tune the dispersion properties of a prism compressor. In practice, however, the geometry is chosen such that the incident and refracted beam have the same angle at the central wavelength of the spectrum to be compressed. This configuration is known as the "angle of minimum deviation", and is easier to align than arbitrary angles. The refractive index of typical materials such as BK7 glass changes only a small amount (0.01 – 0.02) within the few tens of nanometers that are covered by an ultrashort pulse. 
Within a practical size, a prism compressor can only compensate a few hundred μm of path length differences between the wavelength components. However, by using a large refractive index material (such as SF10, SF11, etc.) the compensation distance can be extended to mm level. This technology has been used successfully inside femtosecond laser cavity for compensation of the Ti:sapphire crystal, and outside for the compensation of dispersion introduced by other elements. However, high-order dispersion will be introduced by the prism compressor itself, as well as other optical elements. It can be corrected with careful measurement of the ultrashort pulse and compensate the phase distortion. MIIPS is one of the pulse shaping techniques which can measure and compensate high-order dispersion automatically. As a muddled version of pulse shaping the end mirror is sometimes tilted or even deformed, accepting that the rays do not travel back the same path or become divergent. In Figure 4, the characteristics of the dispersion orders of a prism-pair compressor made of fused silica are depicted as a function of the insertion depth of the first prism, denoted as , for laser pulses with a central wavelength of and spectral bandwidth . The assessment employs the Lah-Laguerre optical formalism — a generalized formulation of the high orders of dispersion. The compressor is evaluated at near the Brewster angle for a separation of between the prisms, an insertion depth for the second prism at the minimum wavelength , and an apex angle of for the fused silica prisms. Dispersion theory The angular dispersion for generalized prismatic arrays, applicable to laser pulse compression, can be calculated exactly using the multiple-prism dispersion theory. In particular, the dispersion, its first derivative, and its second derivative, are given by where Angular quantities are defined in the article for the multiple-prism dispersion theory and higher derivatives are given by Duarte. Comparison with other pulse compressors The most common other pulse compressor is based on gratings (see Chirped pulse amplification), which can easily create a much larger negative dispersion than a prism compressor (centimeters rather than tenths of millimeters). However, a grating compressor has losses of at least 30% due to higher-order diffraction and absorption losses in the metallic coating of the gratings. A prism compressor with an appropriate anti-reflection coating can have less than 2% loss, which makes it a feasible option inside a laser cavity. Moreover, a prism compressor is cheaper than a grating compressor. Another pulse compression technique uses chirped mirrors, which are dielectric mirrors that are designed such that the reflection has a negative dispersion. Chirped mirrors are difficult to manufacture; moreover the amount of dispersion is rather small, which means that the laser beam must be reflected a number of times in order to achieve the same amount of dispersion as with a single prism compressor. This means that it is hard to tune. On the other hand, the dispersion of a chirped-mirror compressor can be manufactured to have a specific dispersion curve, whereas a prism compressor offers much less freedom. Chirped-mirror compressors are used in applications where pulses with a very large bandwidth have to be compressed. See also Chirped pulse amplification Ti:sapphire laser Modelocking Ultrashort pulse MIIPS, a technique to calibrate and correct the high-order distortion of femtosecond laser pulse. 
Multiple-prism dispersion theory References Optical devices Nonlinear optics Laser science
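The dispersion expressions referred to in the "Dispersion theory" paragraph above are not reproduced in this text, so the sketch below falls back on the widely quoted textbook approximation for a Brewster-angle prism pair rather than the article's multiple-prism formulas: the angular-dispersion contribution to the optical path satisfies d²P/dλ² ≈ -4·L·(dn/dλ)², and the group delay dispersion follows as GDD = (λ³ / 2πc²)·d²P/dλ². The Sellmeier fit for fused silica, the 800 nm wavelength and the 50 cm tip-to-tip separation are assumed example inputs, and the positive material dispersion of the glass path is ignored.

    # Hedged numerical sketch; approximation and inputs are assumptions, not from the article.
    import math

    def n_fused_silica(lam_um):
        # Standard three-term Sellmeier fit for fused silica (assumed material model).
        l2 = lam_um ** 2
        n2 = (1.0
              + 0.6961663 * l2 / (l2 - 0.0684043 ** 2)
              + 0.4079426 * l2 / (l2 - 0.1162414 ** 2)
              + 0.8974794 * l2 / (l2 - 9.896161 ** 2))
        return math.sqrt(n2)

    def prism_pair_gdd_fs2(lam_um, separation_m):
        # Angular-dispersion contribution to the group delay dispersion, in fs^2.
        c = 2.99792458e8                                   # speed of light, m/s
        h = 1e-4                                           # finite-difference step, um
        dndl_per_um = (n_fused_silica(lam_um + h) - n_fused_silica(lam_um - h)) / (2 * h)
        dndl_per_m = dndl_per_um * 1e6                     # convert to 1/m
        d2P_dlam2 = -4.0 * separation_m * dndl_per_m ** 2  # textbook prism-pair approximation
        lam_m = lam_um * 1e-6
        gdd_s2 = lam_m ** 3 / (2.0 * math.pi * c ** 2) * d2P_dlam2
        return gdd_s2 * 1e30                               # seconds^2 to fs^2

    # Assumed example: 800 nm pulse, 50 cm tip-to-tip prism separation.
    print(round(prism_pair_gdd_fs2(0.8, 0.5)))             # roughly -500 fs^2

The output is of the order of a few hundred fs² of anomalous dispersion, consistent with the statement above that prism compressors compensate far less dispersion than grating compressors but enough to offset the material dispersion inside a laser cavity.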
Prism compressor
Materials_science,Engineering
1,419
634,463
https://en.wikipedia.org/wiki/Claude%20%C3%89mile%20Jean-Baptiste%20Litre
Claude Émile Jean-Baptiste Litre (1716-1778) is a fictional character created in 1978 by Kenneth Woolner of the University of Waterloo to justify the use of a capital L to denote litres. The International System of Units usually only permits the use of a capital letter when a unit is named after a person. The lower-case character l might be difficult to distinguish from the upper-case character I or the digit 1 in certain fonts and styles, and therefore both the lower-case (l) and the upper-case (L) are allowed as the symbol for litre. The United States National Institute of Standards and Technology now recommends the use of the uppercase letter L, a practice that is also widely followed in Canada and Australia. Woolner perpetrated the April Fools' Day hoax in the April 1978 issue of "CHEM 13 News", a newsletter concerned with chemistry for school teachers. According to the hoax, Claude Litre was born on 12 February 1716, the son of a manufacturer of wine bottles. During Litre's extremely distinguished fictional scientific career, he purportedly proposed a unit of volume measurement that was incorporated into the International System of Units after his death in 1778. The hoax was mistakenly printed as fact in the IUPAC journal Chemistry International and subsequently retracted. In reality, the litre derives its name from the litron, an old French unit of dry volume. See also Etiological myth False etymology References External links Reprints of articles about the Litre hoax. Hoaxes in science Fictional scientists 1978 in science Hoaxes in Canada 1978 in Canada 1978 hoaxes Non-SI metric units Fictional characters introduced in 1978 Fictitious entries April Fools' Day jokes Nonexistent people used in hoaxes
Claude Émile Jean-Baptiste Litre
Mathematics
346
23,619,441
https://en.wikipedia.org/wiki/W.%20E.%20S.%20Turner
William Ernest Stephen Turner (22 September 1881 – 27 October 1963) was a British chemist and pioneer of scientific glass technology. Biography Turner was born in Wednesbury, Staffordshire on 22 September 1881. He went to King Edward VI Grammar School, Five Ways, Birmingham, and achieved a BSc (1902) and MSc (1904) in chemistry at the University of Birmingham. He married Mary Isobell Marshall (died 1939) and they had 4 children. In 1904, he joined the University College of Sheffield as a lecturer, and, in 1915, established the Department of Glass Manufacture, becoming in 1916 the Department of Glass Technology. He remained as its head until his retirement in 1945. In 1943, he married Helen Nairn Munro, an artist noted for her glass engraving, and a teacher of glass decoration at the Edinburgh College of Art. She was provided with a blue dress and shoes in glass fibre cloth (which was then an unusual industrial material). This has been selected as one of the items in the BBC's A History of the World in 100 Objects. The same year, he established a collection of historical and modern glass which became the Turner Museum of Glass from his extensive collection, and the wedding dress is on display there. He died on 27 October 1963. Work Publications From 1904 to 1914, he published 21 papers on physical chemistry, mainly on molecular weights in solution. However, the bulk of his work from 1917 to 1954 was on the chemistry and technology of glass. Following his retirement, he produced an extensive series on the history of glass technology and on glass in archeology. Apart from this, in 1909, he wrote a series of articles in the Sheffield Daily Telegraph about the scientist in industry, in which cooperation with universities was urged. Research His early career was strictly academic, largely dealing with the associations of molecules in the liquid state. However, as his articles in the local newspaper showed, he was interested in the application of science to practical industrial problems, and this became the main theme of his work. The beginning of the First World War cut off metallurgical supplies from Germany and Austria, and Turner proposed that the University should help British industry. The work in metallurgy led to enquiries about glass, and in 1915 Turner produced a 'Report on the glass industry of Yorkshire', noting that this was largely unscientific and rule of thumb in nature. He thereby persuaded the University to set up a Department of Glass Manufacture in 1915 for research and teaching where he remained for the rest of his career, becoming internationally known. The main thrust of his research was on a fundamental understanding of the relationship between the chemical composition and the working properties of glasses. In 1916, he founded the Society of Glass Technology, becoming its first secretary. It published a Journal, which he edited until 1951. He was also involved in the formation of the International Commission on Glass. Teaching Turner initially taught physical chemistry, and in 1905 started specific courses for metallurgists. This involvement led him to become President of the Sheffield Society of Applied Metallurgy in 1914. In 1915, the Department of Glass Manufacture began an outreach programme, providing short courses to industry in Mexborough, Barnsley, Castleford and Knottingley in addition to Saturday classes in Sheffield. These were extended to glass making centres in Derby, Alloa, Glasgow and London. 
From 1917, full-time day students entered for what became a Bachelor of Technical Science degree. During the Second World War, Turner and other staff of the department provided technical lectures to industries such as those making glass electronic vacuum tubes. Honours He was appointed an Officer of the Order of the British Empire in the 1919 New Year Honours for application of science to the glass industry, and in 1938 was appointed a Fellow of the Royal Society. He was the only person outside Germany to receive the Otto Schott Medal. References External links Fellows of the Royal Society Officers of the Order of the British Empire 1881 births 1963 deaths English chemists Glass engineering and science Glass makers Alumni of the University of Birmingham
W. E. S. Turner
Materials_science,Engineering
812
75,564,128
https://en.wikipedia.org/wiki/Gliese%20414
Gliese 414, also known as GJ 414, is a binary system made up of an orange dwarf and a red dwarf, located about 39 light years from Earth, in the constellation Ursa Major. With an apparent magnitude of 8.31, it is not visible to the naked eye. The primary component of the system has two known exoplanets. Characteristics The main component of the system, Gliese 414 A, is a relatively active orange dwarf, about 68% the size of the Sun and 65% its mass. Its age is estimated at 12.4 billion years, about two and a half times the age of the Solar System. It is orbited by two known exoplanets, called Gliese 414 Ab and Gliese 414 Ac. The secondary component, Gliese 414 B, is a red dwarf of type M2V, that is 55% the size of the Sun and 54% its mass. Unlike its companion star, Gliese 414 B is not orbited by any known planets. The binary star system is located in the northern hemisphere, approximately 38.8 light years from Earth, in the direction of the constellation Ursa Major. The closest star to the star system is CW Ursae Majoris, at a distance of 5.3 light-years. Planetary system The primary star, Gliese 414 A, is orbited by two exoplanets. They were discovered in 2020 by analyzing radial velocity data from Keck's HIRES instrument and the Automated Planet Finder at Lick Observatory, as well as photometric data from KELT. The innermost planet, Gliese 414 Ab, orbits its star at an average distance of 0.23 astronomical units, making it close to the optimistic habitable zone. Its orbit is eccentric (e = 0.45), which causes the distance from its star to vary from 0.13 to 0.34 AU, and its equilibrium temperature is calculated at 36°C. With a minimum mass of 7.6 , it is likely to have a significant volatile-rich envelope, thus being a poor candidate for habitability. The outermost planet, Gliese 414 Ac, is a super-Neptune that orbits its star at a greater distance of 1.4 astronomical units, which makes it a frigid planet, having an equilibrium temperature of about -150 °C. It is a good candidate for future direct imaging missions. See also List of star systems within 35–40 light-years Notes and references 414 Binary systems Ursa Major 97101 54646 9001920 J11110509+3026459 J111105.67+302643.6
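A quick arithmetic check on the orbital figures quoted above for Gliese 414 Ab: with a semi-major axis of 0.23 AU and eccentricity e = 0.45, the periastron and apastron distances a(1 - e) and a(1 + e) come out essentially at the stated 0.13 to 0.34 AU range (small differences are rounding in a and e). The snippet simply evaluates those two expressions.

    # Periastron / apastron from the semi-major axis and eccentricity given above.
    a_au, e = 0.23, 0.45
    periastron = a_au * (1 - e)   # closest approach to the star
    apastron = a_au * (1 + e)     # farthest distance from the star
    print(round(periastron, 2), round(apastron, 2))   # 0.13 0.33 (AU)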
Gliese 414
Astronomy
561
323,737
https://en.wikipedia.org/wiki/Superfund
Superfund is a United States federal environmental remediation program established by the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA). The program is administered by the Environmental Protection Agency (EPA) and is designed to pay for investigating and cleaning up sites contaminated with hazardous substances. Sites managed under this program are referred to as Superfund sites. Of the tens of thousands of sites selected for possible action under the Superfund program, 1178 (as of 2024) remain on the National Priorities List (NPL) that makes them eligible for cleanup under the Superfund program. Sites on the NPL are considered the most highly contaminated and undergo longer-term remedial investigation and remedial action (cleanups). The state of New Jersey, the fifth smallest state in the U.S., is the location of about ten percent of the priority Superfund sites, a disproportionate amount. The EPA seeks to identify parties responsible for hazardous substances released to the environment (polluters) and either compel them to clean up the sites, or it may undertake the cleanup on its own using the Superfund (a trust fund) and seek to recover those costs from the responsible parties through settlements or other legal means. Approximately 70% of Superfund cleanup activities historically have been paid for by the potentially responsible parties (PRPs), reflecting the polluter pays principle. However, 30% of the time the responsible party either cannot be found or is unable to pay for the cleanup. In these circumstances, taxpayers had been paying for the cleanup operations. Through the 1980s, most of the funding came from an excise tax on petroleum and chemical manufacturers. However, in 1995, Congress chose not to renew this tax and the burden of the cost was shifted to taxpayers in the general public. Since 2001, most of the cleanup of hazardous waste sites has been funded through taxpayers generally. Despite its name, the program suffered from under-funding, and by 2014 Superfund NPL cleanups had decreased to only 8 sites, out of over 1,200. In November 2021, the Infrastructure Investment and Jobs Act reauthorized an excise tax on chemical manufacturers, for ten years starting in July 2022. The EPA and state agencies use the Hazard Ranking System (HRS) to calculate a site score (ranging from 0 to 100) based on the actual or potential release of hazardous substances from a site. A score of 28.5 places a site on the National Priorities List, eligible for long-term, remedial action (i.e., cleanup) under the Superfund program. , there were 1,333 sites listed; an additional 448 had been delisted, and 43 new sites have been proposed. Superfund also authorizes natural resource trustees, which may be federal, state, and/or tribal, to perform a Natural Resource Damage Assessment (NRDA). Natural resource trustees determine and quantify injuries caused to natural resources through either releases of hazardous substances or cleanup actions and then seek to restore ecosystem services to the public through conservation, restoration, and/or acquisition of equivalent habitat. Responsible parties are assessed damages for the cost of the assessment and the restoration of ecosystem services. For the federal government, EPA, US Fish and Wildlife Service, or the National Oceanic and Atmospheric Administration may act as natural resource trustees. The US Department of Interior keeps a list of the natural resource trustees appointed by state's governors. 
Federally recognized Tribes may act as trustees for natural resources, including natural resources related to Tribal subsistence, cultural uses, spiritual values, and uses that are preserved by treaties. Tribal natural resource trustees are appointed by tribal governments. Some states have their own versions of a state Superfund law and may perform NRDA either through state laws or through other federal authorities such as the Oil Pollution Act. CERCLA created the Agency for Toxic Substances and Disease Registry (ATSDR). The primary goal of a Superfund cleanup is to reduce the risks to human health through a combination of cleanup, engineered controls like caps and site restrictions such as groundwater use restrictions. A secondary goal is to return the site to productive use as a business, recreation or as a natural ecosystem. Identifying the intended reuse early in the cleanup often results in faster and less expensive cleanups. EPA's Superfund Redevelopment Program provides tools and support for site redevelopment. History CERCLA was enacted by Congress in 1980 in response to the threat of hazardous waste sites, typified by the Love Canal disaster in New York, and the Valley of the Drums in Kentucky. It was recognized that funding would be difficult, since the responsible parties were not easily found, and so the Superfund was established to provide funding through a taxing mechanism on certain industries and to create a comprehensive liability framework to be able to hold a broader range of parties responsible. The initial Superfund trust fund to clean up sites where a polluter could not be identified, could not or would not pay (bankruptcy or refusal), consisted of about $1.6 billion and then increased to $8.5 billion. Initially, the framework for implementing the program came from the oil and hazardous substances National Contingency Plan. The EPA published the first Hazard Ranking System in 1981, and the first National Priorities List in 1983. Implementation of the program in early years, during the Ronald Reagan administration, was ineffective, with only 16 of the 799 Superfund sites cleaned up and only $40 million of $700 million in recoverable funds from responsible parties collected. The mismanagement of the program under Anne Gorsuch Burford, Reagan's first chosen Administrator of the agency, led to a congressional investigation and the reauthorization of the program in 1986 through an act amending CERCLA. 1986 amendments The Superfund Amendments and Reauthorization Act of 1986 (SARA) added minimum cleanup requirements in Section 121 and required that most cleanup agreements with polluters be entered in federal court as a consent decree subject to public comment (section 122). This was to address sweetheart deals between industry and the Reagan-era EPA that Congress had discovered. Environmental justice initiative In 1994 President Bill Clinton issued Executive Order 12898, which called for federal agencies to make achieving environmental justice a requirement by addressing low income populations and minority populations that have experienced disproportionate adverse health and environmental effects as a result of their programs, policies, and activities. The EPA regional offices had to apply required guidelines for its Superfund managers to take into consideration data analysis, managed public participation, and economic opportunity when considering the geography of toxic waste site remediation. 
Some environmentalists and industry lobbyists saw the Clinton administration's environmental justice policy as an improvement, but the order did not receive bipartisan support. The newly elected Republican Congress made numerous unsuccessful efforts to significantly weaken the program. The Clinton administration then adopted some industry favored reforms as policy and blocked most major changes. Decline of excise tax Until the mid-1990s, most of the funding came from an excise tax on the petroleum and chemical industries, reflecting the polluter pays principle. Even though by 1995 the Superfund balance had decreased to about $4 billion, Congress chose not to reauthorize collection of the tax, and by 2003 the fund was empty. Since 2001, most of the funding for cleanups of hazardous waste sites has come from taxpayers. State governments pay 10 percent of cleanup costs in general, and at least 50 percent of cleanup costs if the state operated the facility responsible for contamination. By 2013 federal funding for the program had decreased from $2 billion in 1999 to less than $1.1 billion (in constant dollars). In 2001, the EPA used funds from the Superfund program to institute the cleanup of anthrax on Capitol Hill after the 2001 anthrax attacks. It was the first time the agency dealt with a biological release rather than a chemical or oil spill. From 2000 to 2015, Congress allocated about $1.26 billion of general revenue to the Superfund program each year. Consequently, less than half the number of sites were cleaned up from 2001 to 2008, compared to before. The decrease continued during the Obama administration, and since under the direction of EPA Administrator Gina McCarthy Superfund cleanups decreased even more from 20 in 2009 to a mere 8 in 2014. Reauthorization of excise tax In November 2021, Congress reauthorized an excise tax on chemical manufacturers, under the Infrastructure Investment and Jobs Act. The new chemical excise tax is effective July 1, 2022, and is double the rate of the previous Superfund tax. The 2021 law also authorized $3.5 billion in emergency appropriations from the U.S. government general fund for hazardous site cleanups in the immediate future. Provisions CERCLA authorizes two kinds of response actions: Removal actions. These are typically short-term response actions, where actions may be taken to address releases or threatened releases requiring prompt response. Removal actions are classified as: (1) emergency; (2) time-critical; and (3) non-time critical. Removal responses are generally used to address localized risks such as abandoned drums containing hazardous substances, and contaminated surface soils posing acute risks to human health or the environment. Remedial actions. These are usually long-term response actions. Remedial actions seek to permanently and significantly reduce the risks associated with releases or threats of releases of hazardous substances, and are generally larger, more expensive actions. They can include measures such as using containment to prevent pollutants from migrating, and combinations of removing, treating, or neutralizing toxic substances. These actions can be conducted with federal funding only at sites listed on the EPA National Priorities List (NPL) in the United States and the territories. 
Remedial action by responsible parties under consent decrees or unilateral administrative orders with EPA oversight may be performed at both NPL and non-NPL sites, commonly called Superfund Alternative Sites in published EPA guidance and policy documents. A potentially responsible party (PRP) is a possible polluter who may eventually be held liable under CERCLA for the contamination or misuse of a particular property or resource. Four classes of PRPs may be liable for contamination at a Superfund site: the current owner or operator of the site; the owner or operator of a site at the time that disposal of a hazardous substance, pollutant or contaminant occurred; a person who arranged for the disposal of a hazardous substance, pollutant or contaminant at a site; and a person who transported a hazardous substance, pollutant or contaminant to a site and also selected that site for the disposal of the hazardous substances, pollutants or contaminants. The liability scheme of CERCLA changed commercial and industrial real estate: sellers are liable for contamination from past activities and cannot escape that liability by passing contaminated property on to unknowing buyers, and buyers also have to be aware of potential future liabilities. CERCLA also required the revision of the National Oil and Hazardous Substances Pollution Contingency Plan (NCP; 42 U.S.C. § 9605(a)). The NCP guides how to respond to releases and threatened releases of hazardous substances, pollutants, or contaminants. The NCP established the National Priorities List, which appears as Appendix B to the NCP, and serves as EPA's information and management tool. The NPL is updated periodically by federal rulemaking. The identification of a site for the NPL is intended primarily to guide the EPA in: determining which sites warrant further investigation to assess the nature and extent of risks to human health and the environment; identifying what CERCLA-financed remedial actions may be appropriate; notifying the public of sites the EPA believes warrant further investigation; and notifying PRPs that the EPA may initiate CERCLA-financed remedial action. Despite the name, the Superfund trust fund has lacked sufficient funds to clean up even a small number of the sites on the NPL. As a result, the EPA typically negotiates consent orders with PRPs to study sites and develop cleanup alternatives, subject to EPA oversight and approval of all such activities. The EPA then issues a Proposed Plan for remedial action at a site, on which it takes public comment, after which it makes a cleanup decision in a Record of Decision (ROD). RODs are typically implemented under consent decrees by PRPs or under unilateral orders if consent cannot be reached. If a party fails to comply with such an order, it may be fined up to $37,500 for each day that non-compliance continues. A party that spends money to clean up a site may sue other PRPs in a contribution action under CERCLA. CERCLA liability has generally been judicially established as joint and several among PRPs to the government for cleanup costs (i.e., each PRP is hypothetically responsible for all costs subject to contribution), but CERCLA liability is allocable among PRPs in contribution based on comparative fault. An "orphan share" is the share of costs at a Superfund site that is attributable to a PRP that is either unidentifiable or insolvent. The EPA tries to treat all PRPs equitably and fairly. Budgetary cuts and constraints can make more equitable treatment of PRPs more difficult.
Procedures Upon notification of a potentially hazardous waste site, the EPA conducts a Preliminary Assessment/Site Inspection (PA/SI), which involves records reviews, interviews, visual inspections, and limited field sampling. Information from the PA/SI is used by the EPA to develop a Hazard Ranking System (HRS) score to determine the CERCLA status of the site. Sites that score high enough to be listed typically proceed to a Remedial Investigation/Feasibility Study (RI/FS). The RI includes an extensive sampling program and risk assessment that defines the nature and extent of the site contamination and risks. The FS is used to develop and evaluate various remediation alternatives. The preferred alternative is presented in a Proposed Plan for public review and comment, followed by a selected alternative in a ROD. The site then enters into a Remedial Design phase and then the Remedial Action phase. Many sites include long-term monitoring. Once the Remedial Action has been completed, reviews are required every five years, whenever hazardous substances are left onsite above levels safe for unrestricted use. The CERCLA information system (CERCLIS) is a database maintained by the EPA and the states that lists sites where releases may have occurred, must be addressed, or have been addressed. CERCLIS consists of three inventories: the CERCLIS Removal Inventory, the CERCLIS Remedial Inventory, and the CERCLIS Enforcement Inventory. The Superfund Innovative Technology Evaluation (SITE) program supports development of technologies for assessing and treating waste at Superfund sites. The EPA evaluates the technology and provides an assessment of its potential for future use in Superfund remediation actions. The SITE program consists of four related components: the Demonstration Program, the Emerging Technologies Program, the Monitoring and Measurement Technologies Program, and Technology Transfer activities. A reportable quantity (RQ) is the minimum quantity of a hazardous substance which, if released, must be reported. A source control action represents the construction or installation and start-up of those actions necessary to prevent the continued release of hazardous substances (primarily from a source on top of or within the ground, or in buildings or other structures) into the environment. A section 104(e) letter is a request by the government for information about a site. It may include general notice to a potentially responsible party that CERCLA-related action may be undertaken at a site for which the recipient may be responsible. This section also authorizes the EPA to enter facilities and obtain information relating to PRPs, hazardous substances releases, and liability, and to order access for CERCLA activities. The 104(e) letter information-gathering resembles written interrogatories in civil litigation. A section 106 order is a unilateral administrative order issued by EPA to PRP(s) to perform remedial actions at a Superfund site when the EPA determines there may be an imminent and substantial endangerment to the public health or welfare or the environment because of an actual or threatened release of a hazardous substance from a facility, subject to treble damages and daily fines if the order is not obeyed. A remedial response is a long-term action that stops or substantially reduces a release of a hazardous substance that could affect public health or the environment. 
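A minimal sketch of the screening-and-cleanup sequence described in the Procedures paragraph above: the Hazard Ranking System score (0 to 100) is compared with the 28.5 listing threshold, and listed sites then move through the remedial pipeline. The function and list below are illustrative only; the real HRS model weighs exposure pathways and is far more detailed than a single threshold check.

    # Illustrative only: the listing threshold and remedial sequence described above.
    HRS_NPL_THRESHOLD = 28.5

    REMEDIAL_PIPELINE = [
        "Preliminary Assessment / Site Inspection (PA/SI)",
        "Hazard Ranking System (HRS) scoring",
        "National Priorities List (NPL) listing",
        "Remedial Investigation / Feasibility Study (RI/FS)",
        "Proposed Plan and public comment",
        "Record of Decision (ROD)",
        "Remedial Design and Remedial Action",
        "Five-year reviews while hazardous substances remain above unrestricted-use levels",
    ]

    def eligible_for_npl(hrs_score):
        # A score of 28.5 or more makes a site eligible for long-term remedial action.
        return hrs_score >= HRS_NPL_THRESHOLD

    print(eligible_for_npl(28.5))   # True
    print(eligible_for_npl(20.0))   # False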
The term remediation, or cleanup, is sometimes used interchangeably with the terms remedial action, removal action, response action, remedy, or corrective action. A nonbinding allocation of responsibility (NBAR) is a device, established in the Superfund Amendments and Reauthorization Act, that allows the EPA to make a nonbinding estimate of the proportional share that each of the various responsible parties at a Superfund site should pay toward the costs of cleanup. Relevant and appropriate requirements are those United States federal or state cleanup requirements that, while not "applicable," address problems sufficiently similar to those encountered at the CERCLA site that their use is appropriate. Requirements may be relevant and appropriate if they would be "applicable" except for jurisdictional restrictions associated with the requirement. Implementation As of the date of the source's figures, there were 1,322 sites listed; an additional 447 had been delisted, and 51 new sites had been proposed. Historically, about 70 percent of Superfund cleanup activities have been paid for by potentially responsible parties (PRPs). When a responsible party either cannot be found or is unable to pay for the cleanup, the Superfund law originally paid for site cleanups through an excise tax on petroleum and chemical manufacturers. The last full fiscal year (FY) in which the Department of the Treasury collected the excise tax was 1995. At the end of FY 1996, the invested trust fund balance was $6.0 billion. This fund was exhausted by the end of FY 2003; since that time, Superfund sites for which the PRPs could not pay have been paid for from the general fund. Under the 2021 authorization by Congress, collection of excise taxes from chemical manufacturers will resume in 2022. Hazard Ranking System The Hazard Ranking System is a scoring system used to evaluate potential relative risks to public health and the environment from releases or threatened releases of hazardous wastes at uncontrolled waste sites. Under the Superfund program, the EPA and state agencies use the HRS to calculate a site score (ranging from 0 to 100) based on the actual or potential release of hazardous substances from a site through air, surface water or groundwater. A score of 28.5 or above qualifies the site for the National Priorities List, making the site eligible for long-term remedial action (i.e., cleanup) under the Superfund program. Environmental discrimination Executive Order 12898, a federal action to address the disproportionate health and environmental burdens faced by minority and low-income populations, required federal agencies to make environmental justice central to their programs and policies. Superfund sites have been shown to impact minority communities the most. Despite legislation specifically designed to ensure equity in Superfund listing, marginalized populations still have a lower chance of successful listing and cleanup than areas with higher income levels. Even after the executive order was put in place, a discrepancy persisted between the demographics of the communities living near toxic waste sites and the listing of those sites as Superfund sites, which would otherwise grant them federally funded cleanup projects. Communities with larger minority and low-income populations were found to have lower chances of site listing after the executive order, while increases in income were associated with greater chances of site listing.
Of the population living within a 1-mile radius of a Superfund site, 44% are minorities, even though minorities make up only around 37% of the nation's population. As of January 2021, more than 9,000 federally subsidized properties, including ones with hundreds of dwellings, were less than a mile from a Superfund site. Case studies in African American communities In 1978, residents of the rural black community of Triana, Alabama were found to be contaminated with DDT and PCBs, some of whom had the highest levels of DDT ever recorded in humans. The DDT was found in high levels in Indian Creek, which many residents relied on for sustenance fishing. Although this major health threat to residents of Triana was discovered in 1978, the federal government did not act until five years later, after the mayor of Triana filed a class-action lawsuit in 1980. In West Dallas, Texas, a mostly African American and Latino community, a lead smelter poisoned the surrounding neighborhood, elementary school, and day cares for more than five decades. Dallas city officials were informed in 1972 that children in the proximity of the smelter were being exposed to lead contamination. The city sued the lead smelters in 1974, then reduced its lead regulations in 1976. It was not until 1981 that the EPA commissioned a study of the lead contamination in the neighborhood, which found the same results that had been reported a decade earlier. In 1983, the surrounding day cares had to close due to the lead exposure while the lead smelter remained operating. It was later revealed that EPA Deputy Administrator John Hernandez had deliberately stalled the cleanup of the lead-contaminated hot spots. The site was not declared a Superfund site until 1993, at which time it was one of the largest, and the EPA did not complete the cleanup and eliminate the lead pollutant sources from the site until 2004. The Afton community of Warren County, North Carolina is one of the most prominent environmental injustice cases and is often pointed to as the root of the environmental justice movement. PCBs were illegally dumped in the community, which then became the site of a PCB landfill. Community leaders pressed the state for an entire decade to clean up the site before it was finally detoxified; even then, the decontamination did not return the site to its pre-1982 condition. Calls for reparations to the community have not yet been met. Bayview-Hunters Point, San Francisco, a historically African American community, has faced persistent environmental discrimination due to poor remediation of the San Francisco Naval Shipyard, a federally declared Superfund site. The failure of multiple agencies to adequately clean the site has exposed Bayview residents to high levels of pollution, which has been tied to elevated rates of cancer, asthma, and other health hazards relative to other regions of San Francisco. Case studies in Native American communities One example is the Church Rock uranium mill spill on the Navajo Nation. It was the largest radioactive spill in the US, but government response and cleanup were long delayed after it was designated a lower-priority site. Two sets of five-year cleanup plans have been put in place by the US Congress, but contamination from the Church Rock incident has still not been completely cleaned up.
Today, uranium contamination from mining during the Cold War era remains throughout the Navajo Nation, posing health risks to the Navajo community. Accessing data The data in the Superfund Program are available to the public. Superfund Site Search Superfund Policy, Reports and Other Documents TOXMAP was a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) that was deprecated on December 16, 2019. The application used maps of the United States to help users visually explore data from the EPA Toxics Release Inventory (TRI) and Superfund programs. TOXMAP was a resource funded by the US Federal Government. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET), PubMed, and other authoritative sources. Future challenges While the simple and relatively easy sites have been cleaned up, EPA is now addressing a residual number of difficult and massive sites such as large-area mining and sediment sites, which is tying up a significant amount of funding. Also, while the federal government has reserved funding for cleanup of federal facility sites, this clean-up is going much more slowly. The delay is due to a number of reasons, including EPA's limited ability to require performance, difficulty of dealing with Department of Energy radioactive wastes, and the sheer number of federal facility sites. See also Brownfield land Formerly Used Defense Sites - Environmental restoration program Hazardous Materials Transportation Act National Oil and Hazardous Substances Contingency Plan Phase I Environmental Site Assessment Pollution Toxin References Further reading "High Court Limits Liability in Superfund Cases." – New York Times, 2009-05-05 External links Common Chemicals found at Superfund Sites, August 1994 Superfund Program – EPA Superfund sites by state – EPA Superfund: A Half Century of Progress, a report by the EPA Alumni Association Agency for Toxic Substances and Disease Registry National Priorities List of Hazardous Substances 42 U.S.C. chapter 103 (CERCLA) of the United States Code from the LII 42 U.S.C. chapter 103 (CERCLA) of the United States Code from the US House of Representatives CERCLA (PDF/details) as amended in the GPO Statute Compilations collection Hazardous Substance Superfund account on USAspending.gov Pollution in the United States United States Environmental Protection Agency United States federal environmental legislation 1980 in the environment 1980 in American law 96th United States Congress Environmental issues in the United States Love Canal
Superfund
Technology
5,285
15,133,233
https://en.wikipedia.org/wiki/2%CE%B2-Propanoyl-3%CE%B2-%284-tolyl%29-tropane
2β-Propanoyl-3β-(4-tolyl)tropane, also known as WF-11 or 2-PTT, is a cocaine analogue that is 20 times more potent than cocaine at binding to the dopamine transporter, with increased selectivity for the norepinephrine transporter. It also shows a marked increase in metabolic stability. In contrast to findings with cocaine, WF-11 has been shown to produce a uniform downregulation of tyrosine hydroxylase protein, activity, and gene expression with a regimen of use. See also List of cocaine analogues RTI-32 References Tropanes Stimulants Norepinephrine–dopamine reuptake inhibitors Ketones
2β-Propanoyl-3β-(4-tolyl)-tropane
Chemistry
153
4,124,553
https://en.wikipedia.org/wiki/Bruno%20Grollo
Bruno Gordano Grollo (born 1942, Melbourne, Victoria) is an Australian businessman, property developer and former Director of Grocon, and is noted for the controversy surrounding the Swanston Street wall collapse of 28 March 2013. Bruno is the son of Luigi Grollo, who founded Grocon, one of Australia's largest construction companies, in 1948 after immigrating to Australia from Italy. Bruno retained a role in the company after handing the titles of chief executive and chairman to his son Daniel Grollo in 1999. Following public disputes with Infrastructure NSW in 2020, Grocon announced that it and 86 of its subsidiaries had entered voluntary administration. Early life Bruno Grollo was born in Melbourne in 1942 and is the son of accountant Emma Girardi (1913-1986) and builder Luigi Arturo Grollo (1909-1994). His grandfather, Giovanni Grollo, was a farmer. Luigi Grollo emigrated to Australia at 18 years old, after an adolescence marked by war, drought, storms and the death of his mother at 52 years old. He said of the experience growing up in Italy, 'The following year, 1928, I saw that things were still going bad there. There was another storm that carried off everything. It left only the soles of our feet! Here were some new debts to pay off.' Luigi Grollo and his family left their hometown of Arcade, Treviso, Italy after it became a World War I battleground and was no longer habitable. At 18 years old, with his older brother sponsoring him, he boarded the passenger ship Principe d'Udine and arrived in Melbourne on 24 July 1928 to start a new life in Australia. His cousin Carlo Zanatta was awaiting his arrival but did not recognise Luigi, as they had not been together since Luigi was a young boy. Luigi said of Carlo, 'He was a good man to me. Zanatta took me to a boarding house in Russell Street, Melbourne. There we stayed all one day and one night. The next morning we left for Healesville to go to work.' In 1938, Luigi settled in Carlton and began concreting work, building his construction business, originally known as L Grollo & Sons, on the weekends, while his wife, Emma, helped with bookkeeping and accounts. Luigi's one-man company began with residential paths, gutters, fireplace foundations and swimming pools before rapidly expanding in the 1950s to become the Grollo Group, moving into the construction of multiple high-rises in Melbourne. Bruno had a substantial role in his father's company whilst growing up; he and his brother would help out, gaining trade experience whilst still at school. He had minimal formal education, recalling his attendance as a 'series of Catholic schools' before beginning his career as a labourer. In 1958, at 15 years old, Bruno left school and began his career in construction when he joined his father's company, which at the time had almost 130 employees. His brother, Rino Grollo, joined the company in 1965. In 1968, after suffering a heart attack, the patriarch and Director of Grocon, Luigi Grollo, retired, leaving his sons Bruno and Rino as co-Directors of the company. Following the stressful period after their mother's death in 2001, Bruno and Rino divided the company and its assets into two. Bruno headed Grocon Constructions and multiple building assets and in 2003 made his two sons, Adam and Daniel, joint managing directors. Controversy Bruno Grollo has been involved in several media controversies concerning himself and his company, Grocon.
On Trial In 1997, Bruno Grollo and co-accused John William Flanagan and Robert Charles Howard were acquitted of conspiracy charges. They were accused of bribing a Federal Police officer, Superintendent Lloyd Farrell, and of conspiring to pervert the course of justice. The alleged conspiracy arose from fears surrounding the taxation office; it was alleged that Grollo had failed to declare $59 million in the process of building the Rialto Towers. Recorded as one of the longest trials in Victorian history, running for 13 months, this investigation into the taxation affairs of the Grollo Group ended on 26 June 1997 with not-guilty verdicts on all charges for all three men: Grollo, Flanagan and Howard. Swanston Street collapse On 28 March 2013, during wind gusts of up to 102 kilometres per hour (63 mph), a wall at a Grocon construction site on Swanston Street, Melbourne collapsed, killing three passing pedestrians: Bridget Jones, Alexander Jones and Marie-Faith Fiawoo. The fatal incident, in which promotional hoarding had been incorrectly fastened to a Grocon brick wall, resulted in a court case in which Grocon Victoria Street Pty Ltd pleaded guilty to a charge of failing to ensure a safe workplace. Grollo stated about the incident, 'I personally, along with all of the directors and employees of Grocon, reiterate our deep regret at the tragic and untimely loss'. The case, brought by WorkSafe Victoria and concluded in 2014, resulted in a guilty verdict and a $250,000 fine for Grollo's company, Grocon. Grocon Constructions As co-director of Grocon, Bruno and his company were involved in many of the projects that created Melbourne's skyline. His projects included the Rialto Towers, the Hyatt Hotel and, in 2006, the Eureka Tower, which was one of the world's tallest residential towers at the time. Continuing the expansion into Sydney with the Governor Phillip Tower, the Macquarie Towers and 1 Bligh Street, the two brothers led Australia's construction industry to new heights. Grollo Tower The Grollo Tower proposal was a $1.7 billion, 500 m skyscraper for the Melbourne Docklands, proposed by Bruno as a gift to Melburnians in 1995, but also partially funded by the Victorian public. Bruno stated of the tower, 'It would be a golden building for a golden city for the golden times to come ... it has to put the city on the world map'. His ambitious ideals underpinned many aspects of the company; Grollo stated he wanted 'To do something for Melbourne that did what the pyramids did for Egypt, or the Colosseum did for Rome, or the Opera House and Harbour Bridge did for Sydney'. The Grollo Tower never came to fruition, but would have been the tallest building in the world at the time. The proposal was reviewed again in 2003 for construction to begin in Dubai, commissioned by The Grollo Corporation and Emaar Properties, the largest development company in the United Arab Emirates. The $3 billion deal was proposed as an exact replica of the original Grollo Tower; however, the project was ultimately cancelled and Bruno's ambitious skyscraper was never built. Cyclone Tracy restoration On Christmas Day in 1974, Cyclone Tracy destroyed more than 70 percent of Darwin's buildings, including 80 percent of its houses; this led to the Northern Territory Government signing a contract with the Grollo Group to help with restorations.
Both Rino and Bruno were involved in the restoration of the cyclone-torn city, building 400 cyclone-proof houses of various designs for the government. This contract substantially grew their business, and by the 1980s Grocon had a total workforce of over 1,000 employees. Voluntary administration In 2020, following public disputes with Infrastructure NSW, Grocon announced that it and 86 of its subsidiaries had been placed in voluntary administration. Grocon's predicament began in November 2020, when Daniel Grollo experienced troubles with the latest Grocon projects in Barangaroo, Sydney and inner-city Melbourne. In January 2018, Grocon had been awarded construction rights for a project in Central Barangaroo, Sydney, in a deal with Aqualand and Scentre Group. In 2019, during a court battle with Dexus over a $28 million lease claim, the company put two subsidiaries into voluntary administration. In 2020, during the COVID-19 pandemic, construction stopped on Grocon's only Melbourne-based project, a $111 million office development, with subcontractors, employees and creditors said to be owed more than $100 million. Grocon is suing the Government of New South Wales, claiming it lost $270 million in the sale of the Barangaroo Central project to Aqualand for $73 million in 2020; the case is due to be heard in the Supreme Court of New South Wales in 2022. Personal life Bruno Grollo married Dina Bettiol in 1965 and they had three children together: Daniel, Leanna and Adam. They were married for 26 years before Dina suffered a stroke which left her severely paralysed until her death, aged 58, in December 2001; Bruno keeps a room in her honour at his house. On 14 February 2004, Grollo married Pierina Biondo at St Patrick's Cathedral, Melbourne. In 2014, he revealed in an interview with Melbourne journalist Ruth Ostrow his ongoing struggles with leukemia, melanoma and prostate cancer. Grollo stated, 'My biggest goal now is staying alive. I'm trying to live long enough to see the success of gene, nano and stem-cell therapies which will keep us alive.' He now employs a professional team at his Melbourne home in Thornbury, 'Casa Del Matto' (which translates to House of the Madman in English), to research products on the market and new science on anti-ageing and longevity. Grollo stated, 'This is cutting-edge biology and those young and healthy enough to be around will be able to live indefinitely.' Grollo takes up to 100 tablets per day, exercises regularly and every day hangs upside down on a backwards-tilting machine in an effort to increase longevity. Since retiring from the construction industry ('buildings are hard work, they're stressful, they are draining. They're hard to put up. I'd had enough. I got out.'), Bruno has found a passion for meditation and Maharishi yoga and has since invested $3 million in a transcendental meditation college in Watsonia, stating, 'The Maharishi said consciousness is everything. It's the closest thing to what God might be, your consciousness, mine, the dog, the cat, the flowers, the trees… transcendental meditation was the closest thing to euphoria and youth I've ever discovered.' In 1991, Grollo was appointed an Officer of the Order of Australia for service to building and construction and to the community. Net worth In 2006, Grollo was listed in Forbes' top 40 richest people in Australia and New Zealand. Bruno Grollo and family were listed on the Financial Review Rich List 2018 with an assessed net worth of $702 million.
Bruno Grollo and family did not appear on the 2019 Rich List, although Rino Grollo and his family were independently assessed with a net worth of $583 million. Bruno, Rino, and/or their father Luigi (whilst living) are among the thirteen living Australians who have appeared on every Financial Review Rich List since it was first published in 1984. Rich list rankings by year (blank entries in the source shown as "–"):
Year | Financial Review Rich List rank | Financial Review Rich List net worth ($bn) | Forbes rank | Forbes net worth ($bn)
2006 | – | – | – | –
2014 | – | – | – | –
2015 | – | – | not listed | –
2016 | – | – | not listed | –
2017 | – | 0.720 | not listed | –
2018 | 113 | 0.702 | – | –
2019 | not listed | – | not listed | –
Philanthropy Bruno and his brother, Rino, along with their wives, Dina Bettiol and Diana Ruzzene, became well known in the Melbourne community as generous philanthropists. They often donated to community groups, charities, educational organisations and sporting institutions. After their mother's death in December 2001, they established The Emma Grollo Memorial Scholarship in her memory, funded by Bruno, Rino and the Grollo Group. The scholarship seeks to provide financial support to students studying Italian language or literature at the University of Melbourne. Bruno remembers his mother with these words, 'My mother had a unique ability to keep us united. She managed to keep us united right up until the very end ... and sometimes this was not easy ... Of all her merits, this for me was the greatest.' References External links Official website Grocon website Eureka Tower website 1942 births Australian businesspeople Australian people of Italian descent Living people Construction and civil engineering companies Cyclone Tracy Italian-Australian culture Transcendental Meditation Officers of the Order of Australia
Bruno Grollo
Engineering
2,896
22,984,567
https://en.wikipedia.org/wiki/Venturi%20flume
In hydrology, a Venturi flume is a device used for measuring the rate of flow of a liquid in situations with large flow rates, such as a river. It is based on the Venturi effect, for which it is named. It was first developed by V.M. Cone in Fort Collins, Colorado. The Venturi flume consists of a flume with a constricted section in the center. By the Venturi effect, this causes a drop in the fluid pressure at the center of the constriction. By comparing the fluid pressure at the center of the flume with that earlier in the device, the rate of flow can be measured. References Fluid mechanics Fluid dynamics
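As a rough illustration of the principle described above, the sketch below uses the idealized closed-conduit form of the Venturi relation (Bernoulli's equation plus continuity, no losses) to recover a flow rate from the pressure drop at a constriction. Real Venturi flumes are open channels and are normally rated from upstream and throat water depths rather than from this formula; the areas, pressure drop, and fluid density below are illustrative assumptions.

```python
import math

def venturi_flow_rate(area_upstream, area_throat, pressure_drop, density=1000.0):
    """Ideal volumetric flow rate (m^3/s) from the inlet-to-throat pressure drop (Pa)."""
    ratio = area_throat / area_upstream
    return area_throat * math.sqrt(2 * pressure_drop / (density * (1.0 - ratio ** 2)))

# Illustrative numbers: a 0.5 m^2 section narrowing to 0.2 m^2, 1.2 kPa drop, water.
q = venturi_flow_rate(area_upstream=0.5, area_throat=0.2, pressure_drop=1200.0)
print(f"Q = {q:.3f} m^3/s")  # about 0.34 m^3/s for these assumed values
```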
Venturi flume
Chemistry,Engineering
143
42,758,904
https://en.wikipedia.org/wiki/United%20States%20Aeronautical%20Reserve
The United States Aeronautical Reserve (U.S.A.R.) was an early aviation organization created by Harvard University's Aero Club on September 8, 1910. The founder was John H. Ryan, and the General Secretary was Richard R. Sinclair. The group's recruiting stations were at Harvard University, Mineola, and Belmont Park. Members of the United States Aeronautical Reserve's General Board included Clifford Harmon, Chief of Staff; John Barry Ryan, Commodore; Herbert I. Satterlee, NY; John Barry Ryan, NY; Wilbur Wright, Dayton, Ohio; Glenn Curtiss, Hammondsport, NY; Cortland Field Bishop, NY; Hon. John F. Fitzgerald, Boston, MA; Charles H. Allen (Treasurer), NY; and Richard R. Sinclair (Assistant Treasurer), NY. The United States Aeronautical Reserve's military contacts were the Army's Brigadier General James Allen, Chief Signal Officer, Aeronautical Division, U.S. Signal Corps; Captain W. Irving Chambers of the Navy; and Major General Leonard Wood, Chief of Staff and U.S.A.R. member. "With offices not far from those of the Aero Club of America in New York City, the U.S.A.R. by November 1910 claimed no less than 3,200 members, including William Howard Taft." Notable Members The following are some of the more notable members of the organization, many of whom were early aviators and automobilists. U.S.A.R. was officially recognized by the United States Department of War and United States Department of the Navy The United States Aeronautical Reserve was officially recognized by the "War and Navy Departments," and was "organized along strictly military lines, with a view of advancing the science as a means of supplementing the national defense . . . And they are anxious that the U.S.A.R. shall not be confused with other aero clubs in New York and other cities, which appear to be striving for existence along lines made famous by certain characteristics peculiar to the female inhabitants of Kilkenny." Target practice bombing competition In 1910, United States Aeronautical Reserve founder John H. Ryan also started the Commodore John Barry International Target Practice Cup through the Aeronautical Society, offering a $10,000 prize for a winning "bomb throwing" contest from an airplane; the bronze trophy statue was of "Commodore Barry who was the first Commodore in the American Navy." The Washington Post reported in a social article in October 1910 that Harmon and Grahame-White split the prize. Ryan's plan in 1910 to create an airplane landing strip on the roof of the U.S.A.R.'s main headquarters at its 53rd Fifth Avenue address in New York City was covered by the media. Ryan figured that by combining several rooftops, he could create a landing strip approximately 250 feet long by 17 feet wide. The Air-Scout, U.S.A.R.'s official publication In 1910, the United States Aeronautical Reserve's General Board produced its official monthly publication, The Air-Scout, which later merged into Town & Country magazine. The Air-Scout was an upscale glossy magazine, approximately 14 inches long and 17 inches wide, filled with U.S. aviation and foreign news. It also contained social pages (featuring socialite aeroplane supporters such as Mr. and Mrs. Cornelius Vanderbilt, Mrs. Harry Payne Whitney, Miss Vivien Gould, Mrs. August Belmont, Mr. Allan A. Ryan, Colonel John Jacob Astor, Mrs. Mortimer Schiff, Mrs. Charles Gibson, Miss Lilla B. Gilbert and Miss Hannah Randolph, among others);
a woman's aviator page in several issues (Baroness Raymond de La Roche of France was said to be the first woman to obtain a pilot license and operate an airplane); wireless technology news; airship news; airplane contests; military aviation news, including where the U.S.A.R. might be needed; and more. There were plenty of photos from war correspondents and other professional photographers and agencies. Many of the feature writers were U.S.A.R. members, including Harry M. Horton, credited with "creating the earliest longest distance wireless apparatus that was first used on an airplane in flight", as well as military aviators and similar figures. There were many advertisements in the publication. Industrial airplane shows In 1911, the First International Industrial Airplane Show was held in conjunction with the 11th U.S. International Auto Show at Manhattan's Grand Central Palace, in New York City. The aviation show was the invention of the Aero Club of New York, and the event had the largest Palace attendance recorded up to that time. The United States Aeronautical Reserve had an exhibition booth with interesting airplane displays and, on January 5, 1911, a demonstration of early wireless communication technology utilizing the "Wilcox aeroplane equipped with Horton [Harry M. Horton] wireless apparatus" to communicate from the airplane to the land-based news media and to test distance with steamships out at sea. The Aeronautical Society and the United States Aeronautical Reserve had their full-size airplane displays in the second gallery of the Grand Central Palace among other full-size airplanes. Charles W. Chappelle, a member of the United States Aeronautical Reserve, exhibited a full-size airplane, which won him a medal for being the only African-American to invent and display an airplane. Military Airplane display in Washington, D.C., U.S.A.R. requests Grahame-White as pilot Both the Boston Daily Globe and the United States Aeronautical Reserve's (U.S.A.R.'s) The Air-Scout covered Grahame-White landing an airplane near the War Office in Washington, D.C., in October 1910. It was a distance and speed demonstration, with the U.S.A.R. requesting Grahame-White to perform the test in front of hundreds of military personnel, who stood outside and watched as he successfully landed his airplane in a narrow street within a few minutes from a satisfactory distance. According to the Boston Daily Globe, ". . . and within 10 minutes, had landed lightly on the narrow roadway between the White House and the war department, at the feet of General Leonard Wood and within a few yards of the window of President Taft's office." The Boston Daily Globe quoted General Nelson A. Miles as stating, "I am convinced that one aeroplane would annihilate an entire fleet by dropping bombs upon the deck, or the more vital spot--their engine rooms by way of the funnels . . .", and Major General Leonard Wood, commander of the army, spoke on how advancing airplane technology and the desired airplane capabilities would be "fulfilled" in the future. First use of airplane by military in war is from the United States Aeronautical Reserve Although the U.S.A.R. had much bigger plans for many of their airplanes to be used by the U.S. military, the U.S. military did utilize at least one of their airplanes in a peacekeeping effort with two of the U.S.A.R. members, according to The Air-Scout's March 1911 issue: "On February 16 [1911], the General Staff of the United States Army accepted the service of Mr.
Collier’s biplane offered by the U.S.A.R. On the same day, Major General Leonard Wood publicly announced that the craft would be ordered to the Mexican frontier. On the next day, for the first time in the history of man, an airplane was ordered to the scene of the battle, with instructions to patrol the Mexican border in order to preserve neutrality laws. Lieutenant Foulios, a trained United States Army aviator officer, stationed at Fort Sam Houston near San Antonio, Texas, was commanded to report for service on board the airplane. Phillip O. Parmalee, one of the Wright aviators, a lieutenant of the U.S.A.R., native of Michigan, volunteered his services to the government through the reserves which were accepted. He was also commanded to proceed to Texas.” Photos of this were published in The Air-Scout. References History of aviation Aviation in the United States History of science and technology in the United States 20th-century military history of the United States Harvard University History of Boston History of New York City History of Manhattan Mineola, New York Wireless
United States Aeronautical Reserve
Engineering
1,719
2,958,127
https://en.wikipedia.org/wiki/Uttarayana
The term Uttarāyaṇa (commonly Uttarayanam) is derived from two different Sanskrit words – "uttaram" (North) and "ayanam" (movement) – thus indicating the northward movement of the Sun. In the Gregorian calendar, this pertains to the "actual movement of the sun with respect to the earth", and is also known as the six-month period between the winter solstice and the summer solstice (approximately 20 December – 20 June). According to the Indian solar calendar, it instead refers to the movement of the Sun through the zodiac. The difference arises because the solstitial points continually precess at a rate of about 50 arcseconds per year due to the precession of the equinoxes; in other words, it is the difference between the sidereal and tropical zodiacs. The Surya Siddhanta bridges this difference by juxtaposing the four solstitial and equinoctial points with four of the twelve boundaries of the rashis. The complement of Uttarayana is Dakshinayana (the southward movement of the Sun). It is the period between Karka Sankranti and Makara Sankranti as per the sidereal zodiac and between the summer solstice and winter solstice as per the tropical zodiac. Difference between Uttarayana and Makara Sankranti There is a common misconception that Makara Sankranti marks the beginning of Uttarayana. This is because at one point in time the Sayana (tropical) and Nirayana (sidereal) zodiacs coincided. Every year the sidereal and tropical equinoxes slide apart by about 50 arcseconds due to axial precession, giving rise to the ayanamsha and causing Makara Sankranti to slide further. As the equinox slides, the ayanamsha increases and Makara Sankranti slides with it. The misconception persists because there is currently not much difference between the actual start of Uttarayana, which occurs a day after the winter solstice (about 21 December) when the Sun begins its northward journey, and 14 January. However, the difference will become significant as the equinoxes slide further. In 272 CE, Makara Sankranti fell on 21 December; in 1000 CE it fell on 31 December, and it now falls on 14 January. After about 9,000 years, Makara Sankranti will fall in June, at which point it would mark the beginning of Dakshinayana rather than Uttarayana. Makara Sankranti nonetheless still holds importance in Hindu rituals. Drika Panchanga makers such as mypanchang.com, datepanchang, janmabhumi panchang, rashtriya panchang and the Vishuddha Siddhanta Panjika use the position of the tropical Sun to determine Uttarayana and Dakshinayana. Uttarayana in various treatises Surya Siddhanta Mayasura, the composer of the Surya Siddhanta, defines Uttarayana, at the time of composition, as the period between Makara Sankranti (which currently occurs around January 14) and Karka Sankranti (which currently occurs around July 16). Lātadeva describes these as the half-revolutions of the Sun, using the terms Uttarayana and Dakshinayana to describe the "northern and southern progress" respectively. Bal Gangadhar Tilak, a scholar and mathematician, proposes an alternative, early Vedic definition of Uttarayana as starting from the vernal equinox and ending with the autumnal equinox. This definition interprets the term "Uttara Ayana" as "northern movement" instead of "northward movement", i.e. as the movement of the Sun in the region north of the equator. In support of this proposal, he points to another tradition in which Uttarayana is considered the daytime of the Gods residing at the North Pole, a tradition which makes sense only if Uttarayana is defined as the period between the vernal and autumnal equinoxes (when there is midnight sun at the North Pole).
Conversely, Dakshinayana is defined as the period between the autumnal and vernal equinoxes, when there is midnight sun at the South Pole. This period is also referred to as Pitrayana (with the Pitrus, i.e. the ancestors, being placed at the South Pole). Drik Siddhanta This festival is currently celebrated on 14 or 15 January, but due to the axial precession of the Earth it will continue to shift away from the actual season. The seasons themselves depend on the tropical sun (without ayanamsha). The Earth revolves around the Sun with an axial tilt of 23.44 degrees. When a hemisphere is tilted towards the Sun it experiences summer, and when it is tilted away it experiences winter; this is why it is winter south of the equator when it is summer north of it. Because of this tilt, the Sun appears to travel north and south of the equator. The Sun's transition from south to north is called Uttarayana (the Sun moving northwards). Once the Sun reaches its northernmost point, it begins moving south, which is called Dakshinayana (the Sun moving southwards). This gives rise to the seasons, which depend on the equinoxes and solstices. Hindu Scriptures Uttarayana is referred to as a time of new, good, healthy and prosperous beginnings. In the Mahabharata, this period marks the death of Bhishma. Bhishma had the ability to choose the time of his death and, although mortally wounded in war, he chose to delay his death until Uttarayana. According to the Bhagavad Gita, a Hindu scripture, those who die when the Sun is on its northward course (from south to north) attain nirvana. This explains the choice made by Bhishma to wait until Uttarayana to die. According to Hindu tradition, the six-month period of Uttarayana is equivalent to a single day of the Gods, while the six-month period of Dakshinayana is equal to a single night of the Gods; thus a year of twelve months is a single day and night of the Gods. This refers to the six months of continuous daylight at the North Pole and the concurrent six months of night at the South Pole. Rituals During Uttarayana, devotees often undertake certain rituals to benefit from the auspicious time. Devotees often take part in pilgrimages to bathe at Prayag, where the Yamuna, Ganga and Saraswati rivers meet. Pongal is celebrated as a harvest festival in southern Indian states such as Tamil Nadu. Although rituals and customs may vary, it is generally celebrated as a four-day festival. On the first day, unwanted household items are discarded and burned in bonfires to symbolize starting anew. On the second day, people dress in new clothes and prepare pongal, a sweet dish made of rice, milk and jaggery, and offer it to Surya, the Hindu sun deity. On the third day, cattle are worshipped because they are seen as a symbol of prosperity. On the last day, some regions host bull-taming events and farmers offer prayers for the new, fresh harvest. Known as Lohri in the northern states, the festival sees children go door-to-door asking for sweets and money, and in the evening people gather around huge bonfires to sing, dance, and make offerings to Agni, the fire deity, for future prosperity. Traditional dishes made from flatbread and mustard leaves are shared with offerings of sesame brittle, peanuts, popcorn, and jaggery. It is celebrated in other North Indian states such as Haryana, Delhi, and Himachal Pradesh. References External links Animated illustration of Uttarayana and Dakshinayana Hindu astronomy Hindu calendar Articles containing video clips Summer solstice Winter solstice
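As a rough consistency check on the precession figures quoted earlier in the article (about 50 arcseconds of drift per year, with Makara Sankranti falling on 21 December in 272 CE and on 14 January today), the following Python sketch estimates the drift rate. The exact precession rate used (50.3 arcseconds per year) and the choice of 2024 as "today" are assumptions.

```python
ARCSEC_PER_YEAR = 50.3            # assumed precession rate (arcseconds per year)
ARCSEC_PER_CIRCLE = 360 * 3600
DAYS_PER_YEAR = 365.25

drift_days_per_year = ARCSEC_PER_YEAR / ARCSEC_PER_CIRCLE * DAYS_PER_YEAR
print(f"about 1 day of drift every {1 / drift_days_per_year:.0f} years")

# Drift accumulated since 272 CE, when Makara Sankranti fell on 21 December.
drift_days = (2024 - 272) * drift_days_per_year
print(f"roughly {drift_days:.0f} days, i.e. 21 December shifted to mid-January")
```

The estimate of roughly one day of drift per 71 years, or about 25 days since 272 CE, is consistent with the dates given in the article.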
Uttarayana
Astronomy
1,602
35,976,368
https://en.wikipedia.org/wiki/Galbulus
A galbulus is a fleshy cone (megastrobilus); chiefly relating to those borne by junipers and cypresses, and often mistakenly called a berry. These cones (galbuli) are formed by fleshy cone scales which accrete into a single mass under a unified epidermis. Although originally used for the cypresses, the term is more applicable to the junipers. See also Aril, fleshy modified cone-scales also found in some species of gymnosperms References Plant morphology
Galbulus
Biology
109
27,129,223
https://en.wikipedia.org/wiki/Mycena%20californiensis
Mycena californiensis is a species of fungus in the family Mycenaceae. It is a common and abundant species in the coastal oak woodlands of California, where it grows saprobically, feeding on the fallen leaves and acorns of various oak species. First described in 1860 by Berkeley and Curtis, the species was collected four years earlier during an exploring and surveying expedition. It was subsequently considered a doubtful species by later Mycena researchers, until a 1999 publication validated the taxon. Mycena elegantula is considered a synonym. Making their appearance in late autumn to early winter, the small and fragile fruit bodies are characterized by reddish-brown tones in the cap, stem, and the edges of the gills. If cut, the mushroom tissue will "bleed" a deep reddish to orangish latex. As is typical of the genus Mycena, caps of M. californiensis are bluntly conical, becoming bell-shaped to convex, and eventually flatten out when old. They measure up to in diameter, and are attached to thin, hollow stems that are up to long. History and taxonomy The species was originally collected for science purposes by the American botanist Charles Wright during the North Pacific Exploring and Surveying Expedition of 1853–56. The single collection was found growing on fallen oak leaves at Mare Island Naval Shipyard, in Solano County, California in January 1856. The specimen was sent by American mycologist Moses Ashley Curtis to his British colleague Miles Joseph Berkeley, who published a brief description of the species in 1860, calling it Agaricus californiensis, in what was then the subgenus Mycena. Berkeley and Curtis noted that it differed from A. aurantio-marginatus (known today as Mycena aurantiomarginata) in the nature of the gills, and they called it "a more graceful species." In his 1887 Sylloge Fungorum, Pier Andrea Saccardo raised the subgenus Mycena to generic status, so the species became known as Mycena californiensis. In his 1947 monograph of North American Mycena, Alexander H. Smith included it as an "excluded or doubtful species", saying that the species "cannot be recognized until the microscopic characters of the type are known." Researching his 1982 monograph of Mycena, Maas Geesteranus examined the holotype material—the particular specimen designated by Berkeley and Curtis to represent the type of the species. Because of its deteriorated condition, however, he was unable to corroborate the distinguishing features proposed by Berkeley and Curtis, and he agreed with Smith's assessment of the species. In the late 1990s, as part of his studies on the Mycena of California, Brian Perry noted that a common species in California, usually referred to as Mycena elegantula or , presented characteristics not congruent with either (in particular, M. elegantula had not previously been reported to contain latex). He compared isotype material (material collected at the same time and place as the holotype) of M. californiensis with Californian specimens and the type of M. elegantula and found all of them to represent the same species, publishing the results with Dennis Desjardin in their 1999 Mycotaxon article "Mycena californiensis resurrected". Part of the confusion, they noted, was apparently due to Smith's concept of M. elegantula not agreeing with the species' type (something also noticed by Geesteranus). Because M. californiensis is the earlier name (published in 1860 vs. 1895 for Mycena elegantula), it has priority over the later name M. 
elegantula, according to the rules of botanical nomenclature. Description The cap of M. californiensis is initially conic or bell-shaped, but flattens out in maturity, and typically reaches dimensions of up to . The cap margins (edges) are curved inwards when young, but as they age they become wavy or crenate (with rounded scallops), develop striations (radial grooves) and may even split. The surface of the cap is dull and smooth. Its color ranges from reddish brown to brownish orange in young specimens, with the color fading as the mushroom matures; the center of the cap is usually darker than the margins. The flesh is thin, and either the same color as the cap or lighter; it may stain a dark red color when bruised. The gills have an adnate attachment to the stem – broadly attached slightly above the bottom of the gill, with most of the gill fused to the stem. They are not closely spaced together, and there are about 15–20 of them. Some of the gills do not extend the full distance from the edge of the cap to the stem. These short gills, called lamellulae, form one to two groups of roughly equal length. All of the gills have a white to pinkish-buff color, with the gill edges ranging from reddish orange to reddish brown to brownish orange. The hollow stem is long by thick, and roughly the same thickness throughout. The top of the stem may be either pruinose (appearing to be covered with a very fine whitish powder on a surface) or smooth, while the stem base is covered with "hairs" that may be strigose (large, coarse, and bristle-like) to downy (soft and fuzzy). The stem is some shade of brown. The mushroom tissue will "bleed" a brownish-orange to reddish-brown latex when it is cut. The edibility of M. californiensis is unknown. Microscopic characteristics In deposit, such as with a spore print, the spores appear white. Further details are revealed with a light microscope: the spores are ellipsoid to almond-shaped, smooth, thin-walled, and measure 8–12 by 4–6 μm. The basidia (the spore-bearing cells) are club-shaped, four-spored, and typically have dimensions of 26–37.5 by 7–10.5 μm. M. californiensis has cheilocystidia (cystidia on the gill edges) that measure 16–50 by 6.5–20 μm. These cells have irregular projections that can range in size from 1.5 to 18.8 by 1.5–6.5 μm and are variously shaped, from knob-like to cylindrical. The cells contain brownish contents that will stain darkly with Melzer's reagent, a common chemical reagent used in mushroom identification. With the exception of the medullary hyphae of the stem (longitudinally-arranged hyphae making up the stem surface), all hyphae contain clamp connections. Similar species Mycena californiensis may be distinguished from the closely related M. atromarginata by its smaller size and the purplish tint to the edge of the gills, and from M. purpureofusca by its differently shaped, longer spores. Another Mycena commonly confused with M. californiensis is M. sanguinolenta, a species that also exudes reddish latex. It can be distinguished from M. californiensis by the fusiform (tapering at each end) cheilocystidia that do not have outgrowths. An additional difference between the two is that M. sanguinolenta is associated with conifer wood and debris. Habitat and distribution The fruit bodies grow in clusters or scattered on the decomposing leaves and acorns of oak trees, such as Coast Live Oak, Valley Oak and Black Oak. It is common in the coastal oak woodlands of California, where it appears from late autumn to early winter.
References External links Several photos at Mushroomhobby.com californiensis Fungi described in 1860 Fungi of California Taxa named by Miles Joseph Berkeley Taxa named by Moses Ashley Curtis Fungi without expected TNC conservation status Fungus species
Mycena californiensis
Biology
1,649
22,259,847
https://en.wikipedia.org/wiki/Barry%20R.%20Bickmore
Barry Robert Bickmore is a professor in the department of geological sciences at Brigham Young University (BYU). He is also a devout Mormon, having written Restoring the Ancient Church: Joseph Smith and Early Christianity (Ben Lomond: FAIR, 1999) as well as several articles that have been published in the FARMS Review. Bickmore was born in Redwood City, California, and raised in California and Utah. He served as a missionary for the Church of Jesus Christ of Latter-day Saints (LDS Church) in Iowa. He obtained a degree in geology with minors in philosophy and chemistry from BYU. He then received a Ph.D. in geochemistry from Virginia Polytechnic Institute and State University, where his advisor was Michael F. Hochella. He then was a postdoctoral research assistant at the University of Colorado for about a year and a half prior to joining the BYU faculty in August 2001. Bickmore, a conservative Republican, is known for his activism in support of action to combat global warming, such as when he criticized a proposed bill in Utah that described climate change as a hoax. The bill passed in spite of Bickmore's efforts to defeat it. Among other callings in the LDS Church, Bickmore has served as a seminary teacher. In geochemistry and related fields, Bickmore has focused on the study of low-temperature geochemical reactions and the development of geoscience curricula as part of the curriculum of elementary education majors. References External links Bickmore's blog, Anti-Climate Change Extremism in Utah Maxwell Institute bio BYU faculty bio Mineralogical study with Bickmore as the lead author American Latter Day Saint writers American Mormon missionaries in the United States 20th-century Mormon missionaries Living people Brigham Young University alumni Virginia Tech alumni Church Educational System instructors Brigham Young University faculty American geochemists Mormon apologists Latter Day Saints from California Latter Day Saints from Virginia Latter Day Saints from Colorado Latter Day Saints from Utah Year of birth missing (living people) Utah Republicans
Barry R. Bickmore
Chemistry
407
56,059,359
https://en.wikipedia.org/wiki/Postia%20cylindrica
Postia cylindrica is a species of poroid fungus in the family Fomitopsidaceae. Found in Southern China, it was described as a new species in 2017 by Hai-Sheng Yuan. The type collection was found growing on a dead pine tree in Jiangxi. The fungus is characterized macroscopically by crust-like to effused-reflexed fruit bodies with a cream to buff coloured cap surface and a reddish-brown margin that curves inward. There are gloeoplerous (oily) hyphal cells in the cuticular layer, and an absence of cystidia in the hymenium. The fungus produces smooth, cylindrical, thin-walled spores measuring 4.7–5.2 by 1.3–1.5 μm. References Fungi described in 2017 Fungi of China Fomitopsidaceae Fungus species
Postia cylindrica
Biology
174
38,846,481
https://en.wikipedia.org/wiki/The%20Magic%20%28book%29
The Magic is a 2012 self-help and spirituality book written by Rhonda Byrne. It is the third book in The Secret series. The book was released on March 6, 2012, as a paperback and e-book. The book is available in 41 languages. See also The Hero The Power The Secret References Further reading 2012 non-fiction books Atria Publishing Group books Australian non-fiction books New Thought literature Self-help books Quantum mysticism
The Magic (book)
Physics
92
3,329,157
https://en.wikipedia.org/wiki/Higher-order%20differential%20cryptanalysis
In cryptography, higher-order differential cryptanalysis is a generalization of differential cryptanalysis, an attack used against block ciphers. While in standard differential cryptanalysis the difference between only two texts is used, higher-order differential cryptanalysis studies the propagation of a set of differences between a larger set of texts. Xuejia Lai, in 1994, laid the groundwork by showing that differentials are a special case of the more general case of higher-order derivatives. Lars Knudsen, in the same year, was able to show how the concept of higher-order derivatives can be used to mount attacks on block ciphers. These attacks can be superior to standard differential cryptanalysis. Higher-order differential cryptanalysis has notably been used to break the KN-Cipher, a cipher which had previously been proved to be immune against standard differential cryptanalysis. Higher-order derivatives A block cipher which maps n-bit strings to n-bit strings can, for a fixed key, be thought of as a function f from GF(2)^n to GF(2)^n. In standard differential cryptanalysis, one is interested in finding a pair of an input difference α and an output difference β such that two input texts with difference α are likely to result in output texts with difference β, i.e., that f(x ⊕ α) ⊕ f(x) = β is true for many x. Note that the difference used here is the XOR, which is the usual case, though other definitions of difference are possible. This motivates defining the derivative of a function f at a point α as Δ_α f(x) = f(x ⊕ α) ⊕ f(x). Using this definition, the i-th derivative at (α_1, ..., α_i) can recursively be defined as Δ_{α_1, ..., α_i} f(x) = Δ_{α_i}(Δ_{α_1, ..., α_{i-1}} f(x)). Thus for example Δ_{α_1, α_2} f(x) = f(x) ⊕ f(x ⊕ α_1) ⊕ f(x ⊕ α_2) ⊕ f(x ⊕ α_1 ⊕ α_2). Higher-order derivatives as defined here have many properties in common with the ordinary derivative, such as the sum rule and the product rule. Importantly also, taking the derivative reduces the algebraic degree of the function. Higher-order differential attacks To implement an attack using higher-order derivatives, knowledge about the probability distribution of the derivative of the cipher is needed. Calculating or estimating this distribution is generally a hard problem, but if the cipher in question is known to have a low algebraic degree, the fact that derivatives reduce this degree can be used. For example, if a cipher (or the S-box function under analysis) is known to only have an algebraic degree of 8, any 9th-order derivative must be 0. Therefore, it is important for any cipher, and for its S-box functions in particular, to have a maximal (or close to maximal) algebraic degree to defy this attack. Cube attacks have been considered a variant of higher-order differential attacks. Resistance against higher-order differential attacks Limitations of higher-order differential attacks The attack is most effective against small S-boxes or S-box functions of low algebraic degree, and against constructions built primarily from AND and XOR operations. See also Differential Cryptanalysis KN-Cipher Cube attack References Cryptographic attacks
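The derivative definitions above can be exercised on a toy function. The following Python sketch is illustrative only: the example Boolean function f and the chosen difference vectors are assumptions, not taken from any real cipher, but they demonstrate the degree-reduction property the attack relies on (the i-th derivative of a function of algebraic degree less than i vanishes).

```python
from functools import reduce

def derivative(f, a):
    """First-order derivative of f at a: (D_a f)(x) = f(x XOR a) XOR f(x)."""
    return lambda x: f(x ^ a) ^ f(x)

def higher_derivative(f, points):
    """i-th order derivative: successive first-order derivatives at a_1, ..., a_i."""
    return reduce(derivative, points, f)

def f(x):
    """Toy 4-bit function of algebraic degree 3: f(x) = x0*x1*x2 XOR x3."""
    b = [(x >> i) & 1 for i in range(4)]
    return (b[0] & b[1] & b[2]) ^ b[3]

d3 = higher_derivative(f, [1, 2, 4])     # 3rd-order derivative
d4 = higher_derivative(f, [1, 2, 4, 8])  # 4th-order derivative

# Each derivative lowers the algebraic degree, so for this degree-3 function the
# 3rd-order derivative is constant and the 4th-order derivative vanishes everywhere.
assert all(d3(x) == 1 for x in range(16))
assert all(d4(x) == 0 for x in range(16))
```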
Higher-order differential cryptanalysis
Technology
548
1,503,668
https://en.wikipedia.org/wiki/Risk%20management%20plan
A risk management plan is a document to foresee risks, estimate impacts, and define responses to risks. It also contains a risk assessment matrix. According to the Project Management Institute, a risk management plan is a "component of the project, program, or portfolio management plan that describes how risk management activities will be structured and performed". Moreover, according to the Project Management Institute, a risk is "an uncertain event or condition that, if it occurs, has a positive or negative effect on a project's objectives". Risk is inherent with any project, and project managers should assess risks continually and develop plans to address them. The risk management plan contains an analysis of likely risks with both high and low impact, as well as mitigation strategies to help the project avoid being derailed should common problems arise. Risk management plans should be periodically reviewed by the project team to avoid having the analysis become stale and not reflective of actual potential project risks. Risk response Broadly, there are four potential responses to risk, with numerous variations on the specific terms used to name these response options: Avoid – Change plans to circumvent the problem; Control / mitigate / modify / reduce – Reduce threat impact or likelihood (or both) through intermediate steps; Accept / retain – Assume the chance of the negative impact (or self-insurance), possibly budgeting the cost (e.g. via a contingency budget line); or Transfer / share – Outsource risk (or a portion of the risk) to a third party or parties that can manage the outcome. This is done financially through insurance contracts or hedging transactions, or operationally through outsourcing an activity. (Mnemonic: SARA, for Share Avoid Reduce Accept, or A-CAT, for "Avoid, Control, Accept, or Transfer") Risk management plans often include matrices. Examples The United States Department of Defense, as part of acquisition, uses risk management planning that may have a Risk Management Plan document for the specific project. The general intent of the RMP in this context is to define the scope of risks to be tracked and the means of documenting reports. It is also desired that there be an integrated relationship to other processes: an example would be stating in the test and evaluation master plan which developmental tests verify that design-type risks have been minimized. A further example would be the instruction from 5000.2D that, for programs that are part of a system of systems, the risk management strategy shall specifically address integration and interoperability as a risk area. The RMP-specific process and templates shift over time (e.g. the disappearance of the 2002 documents Defense Finance and Accounting Service / System Risk Management Plan, and the SPAWAR Risk Management Process).
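As a concrete illustration of the risk assessment matrix and the four response types described above, the following Python sketch scores each risk as likelihood times impact and records a planned response. The 1-5 scales, the example risks, and the review ordering are assumptions for illustration, not part of any standard or of the DoD templates mentioned above.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int  # assumed scale: 1 (rare) .. 5 (almost certain)
    impact: int      # assumed scale: 1 (negligible) .. 5 (severe)
    response: str    # "avoid", "reduce", "accept", or "transfer"

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Key supplier fails to deliver", likelihood=2, impact=5, response="transfer"),
    Risk("Scope creep from stakeholders", likelihood=4, impact=3, response="reduce"),
    Risk("Minor schedule slip in testing", likelihood=3, impact=1, response="accept"),
]

# Periodic review: look at the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.response:<8}  {risk.description}")
```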
See also Event chain methodology Project management Project Management Professional Risk evaluation and mitigation strategy (REMS) Risk management Risk management tools Risk management framework Gordon–Loeb model for cyber security investments Citations References External links Creating The Risk Management Plan (template included) EPA RMP Rule page Risk Management Guide for DoD Acquisition (ver 6 - ver 5.2 more detailed but obsolete) Defense Acquisition University, System Engineering Fundamentals (see ch 15) US DoD extension to PMBOK Guide, June 2003 (see ch 11) US DoD extension to PMBOK Guide (see ch 11) US Defense Acquisition Guidebook (DAG) - ch8 testing DAU Risk Management Plan template Risk management Systems engineering Project management
Risk management plan
Engineering
701
14,573
https://en.wikipedia.org/wiki/Isaac%20Asimov
Isaac Asimov (c. January 2, 1920 – April 6, 1992) was an American writer and professor of biochemistry at Boston University. During his lifetime, Asimov was considered one of the "Big Three" science fiction writers, along with Robert A. Heinlein and Arthur C. Clarke. A prolific writer, he wrote or edited more than 500 books. He also wrote an estimated 90,000 letters and postcards. Best known for his hard science fiction, Asimov also wrote mysteries and fantasy, as well as popular science and other non-fiction. Asimov's most famous work is the Foundation series, the first three books of which won the one-time Hugo Award for "Best All-Time Series" in 1966. His other major series are the Galactic Empire series and the Robot series. The Galactic Empire novels are set in the much earlier history of the same fictional universe as the Foundation series. Later, with Foundation and Earth (1986), he linked this distant future to the Robot series, creating a unified "future history" for his works. He also wrote more than 380 short stories, including the social science fiction novelette "Nightfall", which in 1964 was voted the best short science fiction story of all time by the Science Fiction Writers of America. Asimov wrote the Lucky Starr series of juvenile science-fiction novels using the pen name Paul French. Most of his popular science books explain concepts in a historical way, going as far back as possible to a time when the science in question was at its simplest stage. Examples include Guide to Science, the three-volume Understanding Physics, and Asimov's Chronology of Science and Discovery. He wrote on numerous other scientific and non-scientific topics, such as chemistry, astronomy, mathematics, history, biblical exegesis, and literary criticism. He was the president of the American Humanist Association. Several entities have been named in his honor, including the asteroid (5020) Asimov, a crater on Mars, a Brooklyn elementary school, Honda's humanoid robot ASIMO, and four literary awards. Surname Asimov's family name derives from the first part of a Russian word meaning 'winter grain' (specifically rye), in which his great-great-great-grandfather dealt, with the Russian surname ending -ov added. In the Cyrillic alphabet, the name begins with a letter transliterated as Z (Azimov). When the family arrived in the United States in 1923 and their name had to be spelled in the Latin alphabet, Asimov's father spelled it with an S, believing this letter to be pronounced like Z (as in German), and so it became Asimov. This later inspired one of Asimov's short stories, "Spell My Name with an S". Asimov refused early suggestions of using a more common name as a pseudonym, believing that its recognizability helped his career. After becoming famous, he often met readers who believed that "Isaac Asimov" was a distinctive pseudonym created by an author with a common name. Life Early life Asimov was born in Petrovichi, Russian SFSR, on an unknown date between October 4, 1919, and January 2, 1920, inclusive. Asimov celebrated his birthday on January 2. Asimov's parents were Russian Jews, Anna Rachel (née Berman) and Judah Asimov, the son of a miller. He was named Isaac after his mother's father, Isaac Berman. Asimov wrote of his father, "My father, for all his education as an Orthodox Jew, was not Orthodox in his heart", noting that "he didn't recite the myriad prayers prescribed for every action, and he never made any attempt to teach them to me." In 1921, Asimov and 16 other children in Petrovichi developed double pneumonia. Only Asimov survived.
He had two younger siblings: a sister, Marcia (born Manya; June 17, 1922 – April 2, 2011), and a brother, Stanley (July 25, 1929 – August 16, 1995), who would become vice-president of Newsday. Asimov's family travelled to the United States via Liverpool on the RMS Baltic, arriving on February 3, 1923, when he was three years old. His parents spoke Yiddish and English to him; he never learned Russian, his parents using it as a secret language "when they wanted to discuss something privately that my big ears were not to hear". Growing up in Brooklyn, New York, Asimov taught himself to read at the age of five (and later taught his sister to read as well, enabling her to enter school in the second grade). His mother got him into first grade a year early by claiming he was born on September 7, 1919. In third grade he learned about the "error" and insisted on an official correction of the date to January 2. He became a naturalized U.S. citizen in 1928 at the age of eight. After becoming established in the U.S., his parents owned a succession of candy stores in which everyone in the family was expected to work. The candy stores sold newspapers and magazines, which Asimov credited as a major influence in his lifelong love of the written word, as it presented him as a child with an unending supply of new reading material (including pulp science fiction magazines) that he could not have otherwise afforded. Asimov began reading science fiction at age nine, at the time that the genre was becoming more science-centered. Asimov was also a frequent patron of the Brooklyn Public Library during his formative years. Education and career Asimov attended New York City public schools from age five, including Boys High School in Brooklyn. Graduating at 15, he attended the City College of New York for several days before accepting a scholarship at Seth Low Junior College. This was a branch of Columbia University in Downtown Brooklyn designed to absorb some of the academically qualified Jewish and Italian-American students who applied to the more prestigious Columbia College but exceeded the unwritten ethnic admission quotas which were common at the time. Originally a zoology major, Asimov switched to chemistry after his first semester because he disapproved of "dissecting an alley cat". After Seth Low Junior College closed in 1936, Asimov finished his Bachelor of Science degree at Columbia's Morningside Heights campus (later the Columbia University School of General Studies) in 1939. (In 1983, Dr. Robert Pollack (dean of Columbia College, 1982–1989) granted Asimov an honorary doctorate from Columbia College after requiring that Asimov place his foot in a bucket of water to pass the college's swimming requirement.) After two rounds of rejections by medical schools, Asimov applied to the graduate program in chemistry at Columbia in 1939; he was initially rejected and then accepted only on a probationary basis. He completed his Master of Arts degree in chemistry in 1941 and earned a Doctor of Philosophy degree in chemistry in 1948. During his chemistry studies, he also learned French and German. From 1942 to 1945 during World War II, between his master's and doctoral studies, Asimov worked as a civilian chemist at the Philadelphia Navy Yard's Naval Air Experimental Station and lived in the Walnut Hill section of West Philadelphia. In September 1945, he was conscripted into the post-war U.S. Army; if he had not had his birth date corrected while at school, he would have been officially 26 years old and ineligible.
In 1946, a bureaucratic error caused his military allotment to be stopped, and he was removed from a task force days before it sailed to participate in Operation Crossroads nuclear weapons tests at Bikini Atoll. He was promoted to corporal on July 11 before receiving an honorable discharge on July 26, 1946. After completing his doctorate and a postdoctoral year with Robert Elderfield, Asimov was offered the position of associate professor of biochemistry at the Boston University School of Medicine. This was in large part due to his years-long correspondence with William Boyd, a former associate professor of biochemistry at Boston University, who initially contacted Asimov to compliment him on his story "Nightfall". Upon receiving a promotion to professor of immunochemistry, Boyd reached out to Asimov, asking him to be his replacement. The initial offer of professorship was withdrawn and Asimov was offered the position of instructor of biochemistry instead, which he accepted. He began work in 1949 with a $5,000 salary, maintaining this position for several years. By 1952, however, he was making more money as a writer than from the university, and he eventually stopped doing research, confining his university role to lecturing students. In 1955, he was promoted to tenured associate professor. In December 1957, Asimov was dismissed from his teaching post, with effect from June 30, 1958, due to his lack of research. After a struggle over two years, he reached an agreement with the university that he would keep his title and give the opening lecture each year for a biochemistry class. On October 18, 1979, the university honored his writing by promoting him to full professor of biochemistry. Asimov's personal papers from 1965 onward are archived at the university's Mugar Memorial Library, to which he donated them at the request of curator Howard Gotlieb. In 1959, after a recommendation from Arthur Obermayer, Asimov's friend and a scientist on the U.S. missile defense project, Asimov was approached by DARPA to join Obermayer's team. Asimov declined on the grounds that his ability to write freely would be impaired should he receive classified information, but submitted a paper to DARPA titled "On Creativity" containing ideas on how government-based science projects could encourage team members to think more creatively. Personal life Asimov met his first wife, Gertrude Blugerman (May 16, 1917, Toronto, Canada – October 17, 1990, Boston, U.S.), on a blind date on February 14, 1942, and married her on July 26. The couple lived in an apartment in West Philadelphia while Asimov was employed at the Philadelphia Navy Yard (where two of his co-workers were L. Sprague de Camp and Robert A. Heinlein). Gertrude returned to Brooklyn while he was in the army, and they both lived there from July 1946 before moving to Stuyvesant Town, Manhattan, in July 1948. They moved to Boston in May 1949, then to nearby suburbs Somerville in July 1949, Waltham in May 1951, and, finally, West Newton in 1956. They had two children, David (born 1951) and Robyn Joan (born 1955). In 1970, they separated and Asimov moved back to New York, this time to the Upper West Side of Manhattan where he lived for the rest of his life. He began seeing Janet O. Jeppson, a psychiatrist and science-fiction writer, and married her on November 30, 1973, two weeks after his divorce from Gertrude. Asimov was a claustrophile: he enjoyed small, enclosed spaces.
In the third volume of his autobiography, he recalls a childhood desire to own a magazine stand in a New York City Subway station, within which he could enclose himself and listen to the rumble of passing trains while reading. Asimov was afraid of flying, doing so only twice: once in the course of his work at the Naval Air Experimental Station and once returning home from Oʻahu in 1946. Consequently, he seldom traveled great distances. This phobia influenced several of his fiction works, such as the Wendell Urth mystery stories and the Robot novels featuring Elijah Baley. In his later years, Asimov found enjoyment traveling on cruise ships, beginning in 1972 when he viewed the Apollo 17 launch from a cruise ship. On several cruises, he was part of the entertainment program, giving science-themed talks aboard ships such as the Queen Elizabeth 2. He sailed to England in June 1974 for a trip mostly devoted to lectures in London and Birmingham, though he also found time to visit Stonehenge and Shakespeare's birthplace. Asimov was a teetotaler. He was an able public speaker and was regularly invited to give talks about science in his distinct New York accent. He participated in many science fiction conventions, where he was friendly and approachable. He patiently answered tens of thousands of questions and other mail with postcards and was pleased to give autographs. He was of medium height and stocky build. In his later years, he adopted a signature style of "mutton-chop" sideburns. He took to wearing bolo ties after his wife Janet objected to his clip-on bow ties. He never learned to swim or ride a bicycle, but did learn to drive a car after he moved to Boston. In his humor book Asimov Laughs Again, he describes Boston driving as "anarchy on wheels". Asimov's wide interests included his participation in later years in organizations devoted to the comic operas of Gilbert and Sullivan. Many of his short stories mention or quote Gilbert and Sullivan. He was a prominent member of The Baker Street Irregulars, the leading Sherlock Holmes society, for whom he wrote an essay arguing that Professor Moriarty's work "The Dynamics of An Asteroid" involved the willful destruction of an ancient, civilized planet. He was also a member of the male-only literary banqueting club the Trap Door Spiders, which served as the basis of his fictional group of mystery solvers, the Black Widowers. He later used his essay on Moriarty's work as the basis for a Black Widowers story, "The Ultimate Crime", which appeared in More Tales of the Black Widowers. In 1984, the American Humanist Association (AHA) named him the Humanist of the Year. He was one of the signers of the Humanist Manifesto. From 1985 until his death in 1992, he served as honorary president of the AHA, and was succeeded by his friend and fellow writer Kurt Vonnegut. He was also a close friend of Star Trek creator Gene Roddenberry, and earned a screen credit as "special science consultant" on Star Trek: The Motion Picture for his advice during production. Asimov was a founding member of the Committee for the Scientific Investigation of Claims of the Paranormal, CSICOP (now the Committee for Skeptical Inquiry) and is listed in its Pantheon of Skeptics. In a discussion with James Randi at CSICon 2016 regarding the founding of CSICOP, Kendrick Frazier said that Asimov was "a key figure in the Skeptical movement who is less well known and appreciated today, but was very much in the public eye back then."
He said that Asimov's being associated with CSICOP "gave it immense status and authority" in his eyes. Asimov described Carl Sagan as one of only two people he ever met whose intellect surpassed his own. The other, he claimed, was the computer scientist and artificial intelligence expert Marvin Minsky. Asimov was an on-and-off member and honorary vice president of Mensa International, albeit reluctantly; he described some members of that organization as "brain-proud and aggressive about their IQs". After his father died in 1969, Asimov annually contributed to a Judah Asimov Scholarship Fund at Brandeis University. In 2006, he was named by Carnegie Corporation of New York to the inaugural class of winners of the Great Immigrants Award. Illness and death In 1977, Asimov had a heart attack. In December 1983, he had triple bypass surgery at NYU Medical Center, during which he contracted HIV from a blood transfusion. His HIV status was kept secret out of concern that the anti-AIDS prejudice might extend to his family members. He died in Manhattan on April 6, 1992, and was cremated. The cause of death was reported as heart and kidney failure. Ten years following Asimov's death, Janet and Robyn Asimov agreed that the HIV story should be made public; Janet revealed it in her edition of his autobiography, It's Been a Good Life. Writings Overview Asimov's career can be divided into several periods. His early career, dominated by science fiction, began with short stories in 1939 and novels in 1950. This lasted until about 1958, all but ending after publication of The Naked Sun (1957). He began publishing nonfiction as co-author of a college-level textbook called Biochemistry and Human Metabolism. Following the brief orbit of the first human-made satellite Sputnik I by the USSR in 1957, he wrote more nonfiction, particularly popular science books, and less science fiction. Over the next quarter-century, he wrote only four science fiction novels, and 120 nonfiction books. Starting in 1982, the second half of his science fiction career began with the publication of Foundation's Edge. From then until his death, Asimov published several more sequels and prequels to his existing novels, tying them together in a way he had not originally anticipated, making a unified series. There are many inconsistencies in this unification, especially in his earlier stories. Doubleday and Houghton Mifflin published about 60% of his work up to 1969, Asimov stating that "both represent a father image". Asimov believed his most enduring contributions would be his "Three Laws of Robotics" and the Foundation series. The Oxford English Dictionary credits his science fiction for introducing into the English language the words "robotics", "positronic" (an entirely fictional technology), and "psychohistory" (which is also used for a different study on historical motivations). Asimov coined the term "robotics" without suspecting that it might be an original word; at the time, he believed it was simply the natural analogue of words such as mechanics and hydraulics, but for robots. Unlike his word "psychohistory", the word "robotics" continues in mainstream technical use with Asimov's original definition. Star Trek: The Next Generation featured androids with "positronic brains" and the first-season episode "Datalore" called the positronic brain "Asimov's dream". 
Asimov was so prolific and diverse in his writing that his books span all major categories of the Dewey Decimal Classification except for category 100, philosophy and psychology. However, he wrote several essays about psychology, and forewords for the books The Humanist Way (1988) and In Pursuit of Truth (1982), which were classified in the 100s category, but none of his own books were classified in that category. According to UNESCO's Index Translationum database, Asimov is the world's 24th-most-translated author. Science fiction Asimov became a science fiction fan in 1929, when he began reading the pulp magazines sold in his family's candy store. At first his father forbade reading pulps, until Asimov persuaded him that because the science fiction magazines had "Science" in the title, they must be educational. At age 18 he joined the Futurians science fiction fan club, where he made friends who went on to become science fiction writers or editors. Asimov began writing at the age of 11, imitating The Rover Boys with eight chapters of The Greenville Chums at College. His father bought him a used typewriter at age 16. His first published work was a humorous item on the birth of his brother for Boys High School's literary journal in 1934. In May 1937 he first thought of writing professionally, and began writing his first science fiction story, "Cosmic Corkscrew" (now lost), that year. On May 17, 1938, puzzled by a change in the schedule of Astounding Science Fiction, Asimov visited its publisher Street & Smith Publications. Inspired by the visit, he finished the story on June 19, 1938, and personally submitted it to Astounding editor John W. Campbell two days later. Campbell met with Asimov for more than an hour and promised to read the story himself. Two days later he received a detailed rejection letter. This was the first of what became almost weekly meetings with the editor while Asimov lived in New York, until moving to Boston in 1949; Campbell had a strong formative influence on Asimov and became a personal friend. By the end of the month, Asimov completed a second story, "Stowaway". Campbell rejected it on July 22 but—in "the nicest possible letter you could imagine"—encouraged him to continue writing, promising that Asimov might sell his work after another year and a dozen stories of practice. On October 21, 1938, he sold the third story he finished, "Marooned Off Vesta", to Amazing Stories, edited by Raymond A. Palmer, and it appeared in the March 1939 issue. Asimov was paid $64, or one cent a word. Two more stories appeared that year, "The Weapon Too Dreadful to Use" in the May Amazing and "Trends" in the July Astounding, the issue fans later selected as the start of the Golden Age of Science Fiction. For 1940, ISFDB catalogs seven stories in four different pulp magazines, including one in Astounding. His earnings became enough to pay for his education, but not yet enough for him to become a full-time writer. He later said that unlike other Golden Age writers Heinlein and A. E. van Vogt—also first published in 1939, and whose talent and stardom were immediately obvious—Asimov "(this is not false modesty) came up only gradually". Through July 29, 1940, Asimov wrote 22 stories in 25 months, of which 13 were published; he wrote in 1972 that from that date he never wrote a science fiction story that was not published (except for two "special cases").
By 1941 Asimov was famous enough that Donald Wollheim told him that he purchased "The Secret Sense" for a new magazine only because of his name, and the December 1940 issue of Astonishing—featuring Asimov's name in bold—was the first magazine to base cover art on his work, but Asimov later said that neither he nor anyone else—except perhaps Campbell—considered him better than an often published "third rater". Based on a conversation with Campbell, Asimov wrote "Nightfall", his 32nd story, in March and April 1941, and Astounding published it in September 1941. In 1968 the Science Fiction Writers of America voted "Nightfall" the best science fiction short story ever written. In Nightfall and Other Stories Asimov wrote, "The writing of 'Nightfall' was a watershed in my professional career ... I was suddenly taken seriously and the world of science fiction became aware that I existed. As the years passed, in fact, it became evident that I had written a 'classic'." "Nightfall" is an archetypal example of social science fiction, a term he created to describe a new trend in the 1940s, led by authors including him and Heinlein, away from gadgets and space opera and toward speculation about the human condition. After writing "Victory Unintentional" in January and February 1942, Asimov did not write another story for a year. He expected to make chemistry his career, and was paid $2,600 annually at the Philadelphia Navy Yard, enough to marry his girlfriend; he did not expect to make much more from writing than the $1,788.50 he had earned from the 28 stories he had already sold over four years. Asimov left science fiction fandom and no longer read new magazines, and might have left the writing profession had Heinlein and de Camp not been his coworkers at the Navy Yard, and had his previously sold stories not continued to appear. In 1942, Asimov published the first of his Foundation stories—later collected in the Foundation trilogy: Foundation (1951), Foundation and Empire (1952), and Second Foundation (1953). The books describe the fall of a vast interstellar empire and the establishment of its eventual successor. They feature his fictional science of psychohistory, whose theories could predict the future course of history according to dynamical laws regarding the statistical analysis of mass human actions. Campbell raised his rate per word, Orson Welles purchased rights to "Evidence", and anthologies reprinted his stories. By the end of the war Asimov was earning as a writer an amount equal to half of his Navy Yard salary, even after a raise, but Asimov still did not believe that writing could support him, his wife, and future children. His "positronic" robot stories—many of which were collected in I, Robot (1950)—were begun at about the same time. They promulgated a set of rules of ethics for robots (see Three Laws of Robotics) and intelligent machines that greatly influenced other writers and thinkers in their treatment of the subject. Asimov notes in his introduction to the short story collection The Complete Robot (1982) that he was largely inspired by the tendency of robots up to that time to fall consistently into a Frankenstein plot in which they destroyed their creators. The Robot series has led to film adaptations. With Asimov's collaboration, in about 1977, Harlan Ellison wrote a screenplay of I, Robot that Asimov hoped would lead to "the first really adult, complex, worthwhile science fiction film ever made". The screenplay has never been filmed and was eventually published in book form in 1994.
The 2004 movie I, Robot, starring Will Smith, was based on an unrelated script by Jeff Vintar titled Hardwired, with Asimov's ideas incorporated later after the rights to Asimov's title were acquired. (The title was not original to Asimov but had previously been used for a story by Eando Binder.) Also, one of Asimov's robot short stories, "The Bicentennial Man", was expanded into a novel The Positronic Man by Asimov and Robert Silverberg, and this was adapted into the 1999 movie Bicentennial Man, starring Robin Williams. In 1966 the Foundation trilogy won the Hugo Award for the all-time best series of science fiction and fantasy novels, and they along with the Robot series are his most famous science fiction. Besides movies, his Foundation and Robot stories have inspired other derivative works of science fiction literature, many by well-known and established authors such as Roger MacBride Allen, Greg Bear, Gregory Benford, David Brin, and Donald Kingsbury. At least some of these appear to have been done with the blessing of, or at the request of, Asimov's widow, Janet Asimov. In 1948, he also wrote a spoof chemistry article, "The Endochronic Properties of Resublimated Thiotimoline". At the time, Asimov was preparing his own doctoral dissertation, which would include an oral examination. Fearing a prejudicial reaction from his graduate school evaluation board at Columbia University, Asimov asked his editor that it be released under a pseudonym. When it nevertheless appeared under his own name, Asimov grew concerned that his doctoral examiners might think he wasn't taking science seriously. At the end of the examination, one evaluator turned to him, smiling, and said, "What can you tell us, Mr. Asimov, about the thermodynamic properties of the compound known as thiotimoline". Laughing hysterically with relief, Asimov had to be led out of the room. After a five-minute wait, he was summoned back into the room and congratulated as "Dr. Asimov". Demand for science fiction greatly increased during the 1950s, making it possible for a genre author to write full-time. In 1949, book publisher Doubleday's science fiction editor Walter I. Bradbury accepted Asimov's unpublished "Grow Old with Me" (40,000 words), but requested that it be extended to a full novel of 70,000 words. The book appeared under the Doubleday imprint in January 1950 with the title of Pebble in the Sky. Doubleday published five more original science fiction novels by Asimov in the 1950s, along with the six juvenile Lucky Starr novels, the latter under the pseudonym "Paul French". Doubleday also published collections of Asimov's short stories, beginning with The Martian Way and Other Stories in 1955. The early 1950s also saw Gnome Press publish one collection of Asimov's positronic robot stories as I, Robot and his Foundation stories and novelettes as the three books of the Foundation trilogy. More positronic robot stories were republished in book form as The Rest of the Robots. Book publishers and the magazines Galaxy and Fantasy & Science Fiction ended Asimov's dependence on Astounding. He later described the era as his "'mature' period". Asimov's "The Last Question" (1956), on the ability of humankind to cope with and potentially reverse the process of entropy, was his personal favorite story. In 1972, his stand-alone novel The Gods Themselves was published to general acclaim, winning Best Novel in the Hugo, Nebula, and Locus Awards. 
In December 1974, former Beatle Paul McCartney approached Asimov and asked him to write the screenplay for a science-fiction movie musical. McCartney had a vague idea for the plot and a small scrap of dialogue, about a rock band whose members discover they are being impersonated by extraterrestrials. The band and their impostors would likely be played by McCartney's group Wings, then at the height of their career. Though not generally a fan of rock music, Asimov was intrigued by the idea and quickly produced a treatment outline of the story adhering to McCartney's overall idea but omitting McCartney's scrap of dialogue. McCartney rejected it, and the treatment now exists only in the Boston University archives. Asimov said in 1969 that he had "the happiest of all my associations with science fiction magazines" with Fantasy & Science Fiction; "I have no complaints about Astounding, Galaxy, or any of the rest, heaven knows, but F&SF has become something special to me". Beginning in 1977, Asimov lent his name to Isaac Asimov's Science Fiction Magazine (now Asimov's Science Fiction) and wrote an editorial for each issue. There was also a short-lived Asimov's SF Adventure Magazine and a companion Asimov's Science Fiction Anthology reprint series, published as magazines (in the same manner as the stablemates Ellery Queen's Mystery Magazines and Alfred Hitchcock's Mystery Magazines "anthologies"). Due to pressure by fans on Asimov to write another book in his Foundation series, he did so with Foundation's Edge (1982) and Foundation and Earth (1986), and then went back to before the original trilogy with Prelude to Foundation (1988) and Forward the Foundation (1992), his last novel. Popular science Asimov and two colleagues published a textbook in 1949, with two more editions by 1969. During the late 1950s and 1960s, Asimov substantially decreased his fiction output (he published only four adult novels between 1957's The Naked Sun and 1982's Foundation's Edge, two of which were mysteries). He greatly increased his nonfiction production, writing mostly on science topics; the launch of Sputnik in 1957 engendered public concern over a "science gap". Asimov explained in The Rest of the Robots that he had been unable to write substantial fiction since the summer of 1958, and observers understood him as saying that his fiction career had ended, or was permanently interrupted. Asimov recalled in 1969 that "the United States went into a kind of tizzy, and so did I. I was overcome by the ardent desire to write popular science for an America that might be in great danger through its neglect of science, and a number of publishers got an equally ardent desire to publish popular science for the same reason". Fantasy and Science Fiction invited Asimov to continue his regular nonfiction column, begun in the now-folded bimonthly companion magazine Venture Science Fiction Magazine. The first of 399 monthly F&SF columns appeared in November 1958 and they continued until his terminal illness. These columns, periodically collected into books by Doubleday, gave Asimov a reputation as a "Great Explainer" of science; he described them as his only popular science writing in which he never had to assume complete ignorance of the subjects on the part of his readers. The column was ostensibly dedicated to popular science but Asimov had complete editorial freedom, and wrote about contemporary social issues in essays such as "Thinking About Thinking" and "Knock Plastic!". 
In 1975 he wrote of these essays: "I get more pleasure out of them than out of any other writing assignment." Asimov's first wide-ranging reference work, The Intelligent Man's Guide to Science (1960), was nominated for a National Book Award, and in 1963 he won a Hugo Award—his first—for his essays for F&SF. The popularity of his science books and the income he derived from them allowed him to give up most academic responsibilities and become a full-time freelance writer. He encouraged other science fiction writers to write popular science, stating in 1967 that "the knowledgeable, skillful science writer is worth his weight in contracts", with "twice as much work as he can possibly handle". The great variety of information covered in Asimov's writings prompted Kurt Vonnegut to ask, "How does it feel to know everything?" Asimov replied that he only knew how it felt to have the 'reputation' of omniscience: "Uneasy". Floyd C. Gale said that "Asimov has a rare talent. He can make your mental mouth water over dry facts", and "science fiction's loss has been science popularization's gain". Asimov said that "Of all the writing I do, fiction, non-fiction, adult, or juvenile, these F & SF articles are by far the most fun". He regretted, however, that he had less time for fiction—causing dissatisfied readers to send him letters of complaint—stating in 1969 that "In the last ten years, I've done a couple of novels, some collections, a dozen or so stories, but that's nothing". In his essay "To Tell a Chemist" (1965), Asimov proposed a simple shibboleth for distinguishing chemists from non-chemists: ask the person to read the word "unionized". Chemists, he noted, will read un-ionized (electrically neutral), while non-chemists will read union-ized (belonging to a trade union). Coined terms Asimov coined the term "robotics" in his 1941 story "Liar!", though he later remarked that he believed then that he was merely using an existing word, as he stated in Gold ("The Robot Chronicles"). While acknowledging the Oxford Dictionary reference, he incorrectly states that the word was first printed about one third of the way down the first column of page 100 in the March 1942 issue of Astounding Science Fiction – the printing of his short story "Runaround". In the same story, Asimov also coined the term "positronic" (the counterpart to "electronic" for positrons). Asimov coined the term "psychohistory" in his Foundation stories to name a fictional branch of science which combines history, sociology, and mathematical statistics to make general predictions about the future behavior of very large groups of people, such as the Galactic Empire. Asimov said later that he should have called it psychosociology. It was first introduced in the five short stories (1942–1944) which would later be collected as the 1951 fix-up novel Foundation. Somewhat later, the term "psychohistory" was applied by others to research of the effects of psychology on history. Other writings In addition to his interest in science, Asimov was interested in history. Starting in the 1960s, he wrote 14 popular history books, including The Greeks: A Great Adventure (1965), The Roman Republic (1966), The Roman Empire (1967), The Egyptians (1967) The Near East: 10,000 Years of History (1968), and Asimov's Chronology of the World (1991). He published Asimov's Guide to the Bible in two volumes—covering the Old Testament in 1967 and the New Testament in 1969—and then combined them into one 1,300-page volume in 1981. 
Complete with maps and tables, the guide goes through the books of the Bible in order, explaining the history of each one and the political influences that affected it, as well as biographical information about the important characters. His interest in literature manifested itself in several annotations of literary works, including Asimov's Guide to Shakespeare (1970), Asimov's Annotated Don Juan (1972), Asimov's Annotated Paradise Lost (1974), and The Annotated Gulliver's Travels (1980). Asimov was also a noted mystery author and a frequent contributor to Ellery Queen's Mystery Magazine. He began by writing science fiction mysteries such as his Wendell Urth stories, but soon moved on to writing "pure" mysteries. He published two full-length mystery novels, and wrote 66 stories about the Black Widowers, a group of men who met monthly for dinner, conversation, and a puzzle. He got the idea for the Widowers from his own association in a stag group called the Trap Door Spiders, and all of the main characters (with the exception of the waiter, Henry, who he admitted resembled Wodehouse's Jeeves) were modeled after his closest friends. A parody of the Black Widowers, "An Evening with the White Divorcés," was written by author, critic, and librarian Jon L. Breen. Asimov joked, "all I can do ... is to wait until I catch him in a dark alley, someday." Toward the end of his life, Asimov published a series of collections of limericks, mostly written by himself, starting with Lecherous Limericks, which appeared in 1975. Limericks: Too Gross, whose title displays Asimov's love of puns, contains 144 limericks by Asimov and an equal number by John Ciardi. He even created a slim volume of Sherlockian limericks. Asimov featured Yiddish humor in Azazel, The Two Centimeter Demon. The two main characters, both Jewish, talk over dinner, or lunch, or breakfast, about anecdotes of "George" and his friend Azazel. Asimov's Treasury of Humor is both a working joke book and a treatise propounding his views on humor theory. According to Asimov, the most essential element of humor is an abrupt change in point of view, one that suddenly shifts focus from the important to the trivial, or from the sublime to the ridiculous. Particularly in his later years, Asimov to some extent cultivated an image of himself as an amiable lecher. In 1971, as a response to the popularity of sexual guidebooks such as The Sensuous Woman (by "J") and The Sensuous Man (by "M"), Asimov published The Sensuous Dirty Old Man under the byline "Dr. 'A'" (although his full name was printed on the paperback edition, first published in 1972). However, by 2016, Asimov's habit of groping women was seen as sexual harassment and came under criticism, and was cited as an early example of inappropriate behavior that can occur at science fiction conventions. Asimov published three volumes of autobiography. In Memory Yet Green (1979) and In Joy Still Felt (1980) cover his life up to 1978. The third volume, I. Asimov: A Memoir (1994), covered his whole life (rather than following on from where the second volume left off). The epilogue was written by his widow Janet Asimov after his death. The book won a Hugo Award in 1995. Janet Asimov edited It's Been a Good Life (2002), a condensed version of his three autobiographies. He also published three volumes of retrospectives of his writing, Opus 100 (1969), Opus 200 (1979), and Opus 300 (1984). In 1987, the Asimovs co-wrote How to Enjoy Writing: A Book of Aid and Comfort.
In it they offer advice on how to maintain a positive attitude and stay productive when dealing with discouragement, distractions, rejection, and thick-headed editors. The book includes many quotations, essays, anecdotes, and husband-wife dialogues about the ups and downs of being an author. Asimov and Star Trek creator Gene Roddenberry developed a unique relationship during Star Trek's initial launch in the late 1960s. Asimov wrote a critical essay on Star Trek's scientific accuracy for TV Guide magazine. Roddenberry retorted respectfully with a personal letter explaining the limitations of accuracy when writing a weekly series. Asimov corrected himself with a follow-up essay to TV Guide claiming that despite its inaccuracies, Star Trek was a fresh and intellectually challenging science fiction television show. The two remained friends to the point where Asimov even served as an advisor on a number of Star Trek projects. In 1973, Asimov published a proposal for calendar reform, called the World Season Calendar. It divides the year into four seasons (named A–D) of 13 weeks (91 days) each. This allows days to be named, e.g., "D-73" instead of December 1 (due to December 1 being the 73rd day of the 4th quarter). An extra 'year day' is added for a total of 365 days (a brief conversion sketch of this naming scheme appears below, after the awards list). Awards and recognition Asimov won more than a dozen annual awards for particular works of science fiction and a half-dozen lifetime awards. He also received 14 honorary doctorate degrees from universities. 1955 – Guest of Honor at the 13th World Science Fiction Convention 1957 – Thomas Alva Edison Foundation Award for best science book for youth, for Building Blocks of the Universe 1960 – Howard W. Blakeslee Award from the American Heart Association for The Living River 1962 – Boston University's Publication Merit Award 1963 – A special Hugo Award for "adding science to science fiction," for essays published in The Magazine of Fantasy and Science Fiction 1963 – Fellow of the American Academy of Arts and Sciences 1964 – The Science Fiction Writers of America voted "Nightfall" (1941) the all-time best science fiction short story 1965 – James T. Grady Award of the American Chemical Society (now called the James T. Grady-James H. Stack Award for Interpreting Chemistry) 1966 – Best All-time Novel Series Hugo Award for the Foundation trilogy 1967 – Edward E.
Smith Memorial Award 1967 – AAAS-Westinghouse Science Writing Award for Magazine Writing, for essay "Over the Edge of the Universe" (in the March 1967 Harper's Magazine) 1972 – Nebula Award for Best Novel for The Gods Themselves 1973 – Hugo Award for Best Novel for The Gods Themselves 1973 – Locus Award for Best Novel for The Gods Themselves 1975 – Golden Plate Award of the American Academy of Achievement 1975 – Klumpke-Roberts Award "for outstanding contributions to the public understanding and appreciation of astronomy" 1975 – Locus Award for Best Reprint Anthology for Before the Golden Age 1977 – Hugo Award for Best Novelette for The Bicentennial Man 1977 – Nebula Award for Best Novelette for The Bicentennial Man 1977 – Locus Award for Best Novelette for The Bicentennial Man 1981 – An asteroid, 5020 Asimov, was named in his honor 1981 – Locus Award for Best Non-Fiction Book for In Joy Still Felt: The Autobiography of Isaac Asimov, 1954–1978 1983 – Hugo Award for Best Novel for Foundation's Edge 1983 – Locus Award for Best Science Fiction Novel for Foundation's Edge 1984 – Humanist of the Year 1986 – The Science Fiction and Fantasy Writers of America named him its 8th SFWA Grand Master (presented in 1987). 1987 – Locus Award for Best Short Story for "Robot Dreams" 1992 – Hugo Award for Best Novelette for "Gold" 1995 – Hugo Award for Best Non-Fiction Book for I. Asimov: A Memoir 1995 – Locus Award for Best Non-Fiction Book for I. Asimov: A Memoir 1996 – A 1946 Retro-Hugo for Best Novel of 1945 was given at the 1996 WorldCon for "The Mule", the 7th Foundation story, published in Astounding Science Fiction 1997 – The Science Fiction and Fantasy Hall of Fame inducted Asimov in its second class of two deceased and two living persons, along with H. G. Wells. 2000 – Asimov was featured on a stamp in Israel 2001 – The Isaac Asimov Memorial Debates at the Hayden Planetarium in New York were inaugurated 2009 – A crater on the planet Mars, Asimov, was named in his honor 2010 – In the US Congress bill about the designation of the National Robotics Week as an annual event, a tribute to Isaac Asimov is as follows: "Whereas the second week in April each year is designated as 'National Robotics Week', recognizing the accomplishments of Isaac Asimov, who immigrated to America, taught science, wrote science books for children and adults, first used the term robotics, developed the Three Laws of Robotics, and died in April 1992: Now, therefore, be it resolved ..." 2015 – Selected as a member of the New York State Writers Hall of Fame. 2016 – A 1941 Retro-Hugo for Best Short Story of 1940 was given at the 2016 WorldCon for Robbie, his first positronic robot story, published in Super Science Stories, September 1940 2018 – A 1943 Retro-Hugo for Best Short Story of 1942 was given at the 2018 WorldCon for Foundation, published in Astounding Science-Fiction, May 1942 Writing style Asimov was his own secretary, typist, indexer, proofreader, and literary agent. He wrote a typed first draft composed at the keyboard at 90 words per minute; he imagined an ending first, then a beginning, then "let everything in-between work itself out as I come to it". (Asimov used an outline only once, later describing it as "like trying to play the piano from inside a straitjacket".) After correcting a draft by hand, he retyped the document as the final copy and only made one revision with minor editor-requested changes; a word processor did not save him much time, Asimov said, because 95% of the first draft was unchanged. 
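A minimal sketch in Python of the day-naming scheme of the World Season Calendar described above; how the calendar year aligns with the Gregorian calendar (and hence which Gregorian date corresponds to, say, D-73) is left out here as an assumption of the sketch.

def season_day_name(day_of_year):
    # Name a day of Asimov's World Season Calendar year.
    # day_of_year runs from 1 to 365; day 365 is the extra "year day"
    # that sits outside the four 91-day (13-week) seasons A-D.
    if day_of_year == 365:
        return "Year Day"
    season = "ABCD"[(day_of_year - 1) // 91]
    day_in_season = (day_of_year - 1) % 91 + 1
    return f"{season}-{day_in_season}"

# The 73rd day of the fourth season:
print(season_day_name(3 * 91 + 73))  # -> "D-73"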
After disliking making multiple revisions of "Black Friar of the Flame", Asimov refused to make major, second, or non-editorial revisions ("like chewing used gum"), stating that "too large a revision, or too many revisions, indicate that the piece of writing is a failure. In the time it would take to salvage such a failure, I could write a new piece altogether and have infinitely more fun in the process". He submitted "failures" to another editor. Asimov's fiction style is extremely unornamented. In 1980, science fiction scholar James Gunn commented on this plainness in writing about I, Robot; Asimov addressed such criticism in 1989 at the beginning of Nemesis. Gunn cited examples of a more complex style, such as the climax of "Liar!". Sharply drawn characters occur at key junctures of his storylines: Susan Calvin in "Liar!" and "Evidence", Arkady Darell in Second Foundation, Elijah Baley in The Caves of Steel, and Hari Seldon in the Foundation prequels. Other than books by Gunn and Joseph Patrouch, there is relatively little literary criticism on Asimov (particularly when compared to the sheer volume of his output). Cowart and Wymer's Dictionary of Literary Biography (1981) gives a possible reason for this. Gunn's and Patrouch's studies of Asimov both state that a clear, direct prose style is still a style. Gunn's 1982 book comments in detail on each of Asimov's novels. He does not praise all of Asimov's fiction (nor does Patrouch), but calls some passages in The Caves of Steel "reminiscent of Proust". When discussing how that novel depicts night falling over futuristic New York City, Gunn says that Asimov's prose "need not be ashamed anywhere in literary society". Although he prided himself on his unornamented prose style (for which he credited Clifford D. Simak as an early influence), and said in 1973 that his style had not changed, Asimov also enjoyed giving his longer stories complicated narrative structures, often by arranging chapters in nonchronological ways. Some readers have been put off by this, complaining that the nonlinearity is not worth the trouble and adversely affects the clarity of the story. For example, the first third of The Gods Themselves begins with Chapter 6, then backtracks to fill in earlier material. (John Campbell advised Asimov to begin his stories as late in the plot as possible. This advice helped Asimov create "Reason", one of the early Robot stories.) Patrouch found that the interwoven and nested flashbacks of The Currents of Space did serious harm to that novel, to such an extent that only a "dyed-in-the-kyrt Asimov fan" could enjoy it. In his later novel Nemesis one group of characters lives in the "present" and another group starts in the "past", beginning 15 years earlier and gradually moving toward the time of the first group. Alien life Asimov once explained that his reluctance to write about aliens came from an incident early in his career when Astounding's editor John Campbell rejected one of his science fiction stories because the alien characters were portrayed as superior to the humans. The nature of the rejection led him to believe that Campbell may have based his bias towards humans in stories on a real-world racial bias. Unwilling to write only weak alien races, and concerned that a confrontation would jeopardize his and Campbell's friendship, he decided he would not write about aliens at all. Nevertheless, in response to these criticisms, he wrote The Gods Themselves, which contains aliens and alien sex.
The book won the Nebula Award for Best Novel in 1972, and the Hugo Award for Best Novel in 1973. Asimov said that of all his writings, he was most proud of the middle section of The Gods Themselves, the part that deals with those themes. In the Hugo Award–winning novelette "Gold", Asimov describes an author, based on himself, who has one of his books (The Gods Themselves) adapted into a "compu-drama", essentially photo-realistic computer animation. The director criticizes the fictionalized Asimov ("Gregory Laborian") for having an extremely nonvisual style, making it difficult to adapt his work, and the author explains that he relies on ideas and dialogue rather than description to get his points across. Romance and women In the early days of science fiction, some authors and critics felt that romantic elements were inappropriate in science fiction stories, which were supposed to focus on science and technology. Isaac Asimov was a supporter of this point of view, which he expressed in his 1938–1939 letters to Astounding, where he described such elements as "mush" and "slop". To his dismay, these letters were met with strong opposition. Asimov attributed the lack of romance and sex in his fiction to the "early imprinting" from starting his writing career when he had never been on a date and "didn't know anything about girls". He was sometimes criticized for the general absence of sex (and of extraterrestrial life) in his science fiction. He claimed he wrote The Gods Themselves (1972) to respond to these criticisms, which often came from New Wave science fiction (and often British) writers. The second part (of three) of the novel is set on an alien world with three sexes, and the sexual behavior of these creatures is extensively depicted. Views Religion Asimov was an atheist and a humanist. He did not oppose religious conviction in others, but he frequently railed against superstitious and pseudoscientific beliefs that tried to pass themselves off as genuine science. During his childhood, his parents observed the traditions of Orthodox Judaism less stringently than they had in Petrovichi; they did not force their beliefs upon young Isaac, and he grew up without strong religious influences, coming to believe that the Torah represented Hebrew mythology in the same way that the Iliad recorded Greek mythology. When he was 13, he chose not to have a bar mitzvah. As his books Treasury of Humor and Asimov Laughs Again record, Asimov was willing to tell jokes involving God, Satan, the Garden of Eden, Jerusalem, and other religious topics, expressing the viewpoint that a good joke can do more to provoke thought than hours of philosophical discussion. For a brief while, his father worked in the local synagogue to enjoy the familiar surroundings and, as Isaac put it, "shine as a learned scholar" versed in the sacred writings. This scholarship was a seed for his later authorship and publication of Asimov's Guide to the Bible, an analysis of the historic foundations for the Old and New Testaments. For many years, Asimov called himself an atheist; he considered the term somewhat inadequate, as it described what he did not believe rather than what he did. Eventually, he described himself as a "humanist" and considered that term more practical.
Asimov continued to identify himself as a secular Jew, as stated in his introduction to Jack Dann's anthology of Jewish science fiction, Wandering Stars: "I attend no services and follow no ritual and have never undergone that curious puberty rite, the Bar Mitzvah. It doesn't matter. I am Jewish." When asked in an interview in 1982 if he was an atheist, Asimov affirmed that he was. Likewise, he said about religious education: "I would not be satisfied to have my kids choose to be religious without trying to argue them out of it, just as I would not be satisfied to have them decide to smoke regularly or engage in any other practice I consider detrimental to mind or body." Asimov returned to the subject in his last volume of autobiography; the same memoir states his belief that Hell is "the drooling dream of a sadist" crudely affixed to an all-merciful God; if even human governments were willing to curtail cruel and unusual punishments, wondered Asimov, why would punishment in the afterlife not be restricted to a limited term? Asimov rejected the idea that a human belief or action could merit infinite punishment. If an afterlife existed, he claimed, the longest and most severe punishment would be reserved for those who "slandered God by inventing Hell". Asimov also commented on his use of religious motifs in his writing. Politics Asimov became a staunch supporter of the Democratic Party during the New Deal, and thereafter remained a political liberal. He was a vocal opponent of the Vietnam War in the 1960s and in a television interview during the early 1970s he publicly endorsed George McGovern. He was unhappy about what he considered an "irrationalist" viewpoint taken by many radical political activists from the late 1960s and onwards. In his second volume of autobiography, In Joy Still Felt, Asimov recalled meeting the counterculture figure Abbie Hoffman. Asimov's impression was that the 1960s' counterculture heroes had ridden an emotional wave which, in the end, left them stranded in a "no-man's land of the spirit" from which he wondered if they would ever return. Asimov vehemently opposed Richard Nixon, considering him "a crook and a liar". He closely followed Watergate, and was pleased when the president was forced to resign. Asimov was dismayed over the pardon extended to Nixon by his successor Gerald Ford: "I was not impressed by the argument that it has spared the nation an ordeal. To my way of thinking, the ordeal was necessary to make certain it would never happen again." After Asimov's name appeared in the mid-1960s on a list of people the Communist Party USA "considered amenable" to its goals, the FBI investigated him. Because of his academic background, the bureau briefly considered Asimov as a possible candidate for known Soviet spy ROBPROF, but found nothing suspicious in his life or background. Asimov appeared to hold an equivocal attitude towards Israel. In his first autobiography, he indicates his support for the safety of Israel, though insisting that he was not a Zionist. In his third autobiography, Asimov stated his opposition to the creation of a Jewish state, on the grounds that he was opposed to having nation-states in general, and supported the notion of a single humanity. Asimov especially worried about the safety of Israel given that it had been created among Muslim neighbors "who will never forgive, never forget and never go away", and said that Jews had merely created for themselves another "Jewish ghetto". Social issues Asimov believed that "science fiction ... serve[s] the good of humanity".
He considered himself a feminist even before women's liberation became a widespread movement; he argued that the issue of women's rights was closely connected to that of population control. Furthermore, he believed that homosexuality must be considered a "moral right" on population grounds, as must all consenting adult sexual activity that does not lead to reproduction. He issued many appeals for population control, reflecting a perspective articulated by people from Thomas Malthus through Paul R. Ehrlich. In a 1988 interview by Bill Moyers, Asimov proposed computer-aided learning, where people would use computers to find information on subjects in which they were interested. He thought this would make learning more interesting, since people would have the freedom to choose what to learn, and would help spread knowledge around the world. Also, the one-to-one model would let students learn at their own pace. Asimov thought that people would live in space by 2019, a prediction he made in a 1983 essay that also discussed the future of education. Sexual harassment Asimov would often fondle, kiss and pinch women at conventions and elsewhere without regard for their consent. According to Alec Nevala-Lee, author of an Asimov biography and writer on the history of science fiction, he often defended himself by saying that far from showing objections, these women cooperated. In a 1971 satirical piece, The Sensuous Dirty Old Man, Asimov wrote: "The question then is not whether or not a girl should be touched. The question is merely where, when, and how she should be touched." According to Nevala-Lee, however, "many of these encounters were clearly nonconsensual." He wrote that Asimov's behaviour, as a leading science-fiction author and personality, contributed to an undesirable atmosphere for women in the male-dominated science fiction community. In support of this, he quoted some of Asimov's contemporary fellow-authors such as Judith Merril, Harlan Ellison and Frederik Pohl, as well as editors such as Timothy Seldes. Additional specific incidents were reported by other people including Edward L. Ferman, long-time editor of The Magazine of Fantasy & Science Fiction, who wrote "...instead of shaking my date's hand, he shook her left breast." Environment and population Asimov's defense of civil applications of nuclear power, even after the Three Mile Island nuclear power plant incident, damaged his relations with some of his fellow liberals. In a letter reprinted in Yours, Isaac Asimov, he states that although he would prefer living in "no danger whatsoever" to living near a nuclear reactor, he would still prefer a home near a nuclear power plant to a slum on Love Canal or near "a Union Carbide plant producing methyl isocyanate", the latter being a reference to the Bhopal disaster. In the closing years of his life, Asimov blamed the deterioration of the quality of life that he perceived in New York City on the shrinking tax base caused by the middle-class flight to the suburbs, though he continued to support high taxes on the middle class to pay for social programs. His last nonfiction book, Our Angry Earth (1991, co-written with his long-time friend, science fiction author Frederik Pohl), deals with elements of the environmental crisis such as overpopulation, oil dependence, war, global warming, and the destruction of the ozone layer.
In response to being presented by Bill Moyers with the question "What do you see happening to the idea of dignity to human species if this population growth continues at its present rate?", Asimov responded: Other authors Asimov enjoyed the writings of J. R. R. Tolkien, and used The Lord of the Rings as a plot point in a Black Widowers story, titled Nothing like Murder. In the essay "All or Nothing" (for The Magazine of Fantasy and Science Fiction, Jan 1981), Asimov said that he admired Tolkien and that he had read The Lord of the Rings five times. (The feelings were mutual, with Tolkien saying that he had enjoyed Asimov's science fiction. This would make Asimov an exception to Tolkien's earlier claim that he rarely found "any modern books" that were interesting to him.) He acknowledged other writers as superior to himself in talent, saying of Harlan Ellison, "He is (in my opinion) one of the best writers in the world, far more skilled at the art than I am." Asimov disapproved of the New Wave's growing influence, stating in 1967 "I want science fiction. I think science fiction isn't really science fiction if it lacks science. And I think the better and truer the science, the better and truer the science fiction". The feelings of friendship and respect between Asimov and Arthur C. Clarke were demonstrated by the so-called "Clarke–Asimov Treaty of Park Avenue", negotiated as they shared a cab in New York. This stated that Asimov was required to insist that Clarke was the best science fiction writer in the world (reserving second-best for himself), while Clarke was required to insist that Asimov was the best science writer in the world (reserving second-best for himself). Thus, the dedication in Clarke's book Report on Planet Three (1972) reads: "In accordance with the terms of the Clarke–Asimov treaty, the second-best science writer dedicates this book to the second-best science-fiction writer." In 1980, Asimov wrote a highly critical review of George Orwell's 1984. Though dismissive of his attacks, James Machell has stated that they "are easier to understand when you consider that Asimov viewed 1984 as dangerous literature. He opines that if communism were to spread across the globe, it would come in a completely different form to the one in 1984, and by looking to Orwell as an authority on totalitarianism, 'we will be defending ourselves against assaults from the wrong direction and we will lose'." Asimov became a fan of mystery stories at the same time as science fiction. He preferred to read the former because "I read every [science fiction] story keenly aware that it might be worse than mine, in which case I had no patience with it, or that it might be better, in which case I felt miserable". Asimov wrote "I make no secret of the fact that in my mysteries I use Agatha Christie as my model. In my opinion, her mysteries are the best ever written, far better than the Sherlock Holmes stories, and Hercule Poirot is the best detective fiction has seen. Why should I not use as my model what I consider the best?" He enjoyed Sherlock Holmes, but considered Arthur Conan Doyle to be "a slapdash and sloppy writer." Asimov also enjoyed humorous stories, particularly those of P. G. Wodehouse. In non-fiction writing, Asimov particularly admired the writing style of Martin Gardner, and tried to emulate it in his own science books. On meeting Gardner for the first time in 1965, Asimov told him this, to which Gardner answered that he had based his own style on Asimov's. 
Influence Paul Krugman, holder of a Nobel Prize in Economics, stated that Asimov's concept of psychohistory inspired him to become an economist. John Jenkins, who has reviewed the vast majority of Asimov's written output, once observed, "It has been pointed out that most science fiction writers since the 1950s have been affected by Asimov, either modeling their style on his or deliberately avoiding anything like his style." Along with such figures as Bertrand Russell and Karl Popper, Asimov left his mark as one of the most distinguished interdisciplinarians of the 20th century. "Few individuals", writes James L. Christian, "understood better than Isaac Asimov what synoptic thinking is all about. His almost 500 books—which he wrote as a specialist, a knowledgeable authority, or just an excited layman—range over almost all conceivable subjects: the sciences, history, literature, religion, and of course, science fiction." Bibliography Depending on the counting convention used, and including all titles, charts, and edited collections, there may currently be over 500 books in Asimov's bibliography—as well as his individual short stories, individual essays, and criticism. For his 100th, 200th, and 300th books (based on his personal count), Asimov published Opus 100 (1969), Opus 200 (1979), and Opus 300 (1984), celebrating his writing. An extensive bibliography of Isaac Asimov's works has been compiled by Ed Seiler. His book writing rate was analysed, showing that he wrote faster as he wrote more. An online exhibit in West Virginia University Libraries' virtually complete Asimov Collection displays features, visuals, and descriptions of some of his more than 600 books, games, audio recordings, videos, and wall charts. Many first, rare, and autographed editions are in the Libraries' Rare Book Room. Book jackets and autographs are presented online along with descriptions and images of children's books, science fiction art, multimedia, and other materials in the collection. Science fiction "Greater Foundation" series The Robot series was originally separate from the Foundation series. The Galactic Empire novels were published as independent stories, set earlier in the same future as Foundation. Later in life, Asimov synthesized the Robot series into a single coherent "history" that appeared in the extension of the Foundation series. All of these books were published by Doubleday & Co, except the original Foundation trilogy, which was originally published by Gnome Press before being bought and republished by Doubleday.
The Robot series: The Caves of Steel (1954) (first Elijah Baley SF-crime novel) The Naked Sun (1957) (second Elijah Baley SF-crime novel) The Robots of Dawn (1983) (third Elijah Baley SF-crime novel) Robots and Empire (1985) (sequel to the Elijah Baley trilogy) Galactic Empire novels: Pebble in the Sky (1950) (early Galactic Empire) The Stars, Like Dust (1951) (long before the Empire) The Currents of Space (1952) (Republic of Trantor still expanding) Foundation prequels: Prelude to Foundation (1988) Forward the Foundation (1993) Original Foundation trilogy: Foundation (1951) Foundation and Empire (1952) (also published with the title 'The Man Who Upset the Universe' as a 35¢ Ace paperback, D-125, in about 1952) Second Foundation (1953) Extended Foundation series: Foundation's Edge (1982) Foundation and Earth (1986) Lucky Starr series (as Paul French) All published by Doubleday & Co David Starr, Space Ranger (1952) Lucky Starr and the Pirates of the Asteroids (1953) Lucky Starr and the Oceans of Venus (1954) Lucky Starr and the Big Sun of Mercury (1956) Lucky Starr and the Moons of Jupiter (1957) Lucky Starr and the Rings of Saturn (1958) Norby Chronicles (with Janet Asimov) All published by Walker & Company Norby, the Mixed-Up Robot (1983) Norby's Other Secret (1984) Norby and the Lost Princess (1985) Norby and the Invaders (1985) Norby and the Queen's Necklace (1986) Norby Finds a Villain (1987) Norby Down to Earth (1988) Norby and Yobo's Great Adventure (1989) Norby and the Oldest Dragon (1990) Norby and the Court Jester (1991) Novels not part of a series Novels marked with an asterisk (*) have minor connections to the Foundation universe. The End of Eternity (1955), Doubleday (*) Fantastic Voyage (1966), Bantam Books (paperback) and Houghton Mifflin (hardback) (a novelization of the movie) The Gods Themselves (1972), Doubleday Fantastic Voyage II: Destination Brain (1987), Doubleday (not a sequel to Fantastic Voyage, but a similar, independent story) Nemesis (1989), Bantam Doubleday Dell (*) Nightfall (1990), Doubleday, with Robert Silverberg (based on "Nightfall", a 1941 short story written by Asimov) Child of Time (1992), Bantam Doubleday Dell, with Robert Silverberg (based on "The Ugly Little Boy", a 1958 short story written by Asimov) The Positronic Man (1992), Bantam Doubleday Dell, with Robert Silverberg (*) (based on The Bicentennial Man, a 1976 novella written by Asimov) Short-story collections Mysteries Novels The Death Dealers (1958), Avon Books, republished as A Whiff of Death by Walker & Company Murder at the ABA (1976), Doubleday, also published as Authorized Murder Short-story collections Black Widowers series Tales of the Black Widowers (1974), Doubleday More Tales of the Black Widowers (1976), Doubleday Casebook of the Black Widowers (1980), Doubleday Banquets of the Black Widowers (1984), Doubleday Puzzles of the Black Widowers (1990), Doubleday The Return of the Black Widowers (2003), Carroll & Graf Other mysteries Asimov's Mysteries (1968), Doubleday The Key Word and Other Mysteries (1977), Walker The Union Club Mysteries (1983), Doubleday The Disappearing Man and Other Mysteries (1985), Walker The Best Mysteries of Isaac Asimov (1986), Doubleday Nonfiction Popular science Collections of Asimov's essays for F&SF The following books collected essays which were originally published as monthly columns in The Magazine of Fantasy and Science Fiction and collected by Doubleday & Co Fact and Fancy (1962) View from a Height (1963) Adding a Dimension (1964) Of Time and Space and Other Things (1965) From Earth to Heaven (1966) Science, Numbers, and I (1968) The Solar System and Back (1970) The Stars in Their Courses (1971) The Left Hand of the Electron (1972) The Tragedy of the Moon (1973) Asimov On Astronomy (updated version of essays in previous collections) (1974) Asimov On Chemistry (updated version of essays in previous collections)
(1974) Of Matters Great and Small (1975) Asimov On Physics (updated version of essays in previous collections) (1976) The Planet That Wasn't (1976) Asimov On Numbers (updated version of essays in previous collections) (1976) Quasar, Quasar, Burning Bright (1977) The Road to Infinity (1979) The Sun Shines Bright (1981) Counting the Eons (1983) X Stands for Unknown (1984) The Subatomic Monster (1985) Far as Human Eye Could See (1987) The Relativity of Wrong (1988) Asimov on Science: A 30 Year Retrospective 1959–1989 (1989) (features the first essay in the introduction) Out of the Everywhere (1990) The Secret of the Universe (1991) Other general science essay collections Only a Trillion (1957), Abelard-Schuman, ; (1976) revised and updated ed. Is Anyone There? (1967), Doubleday, (which includes the article in which he coined the term "spome") Today and Tomorrow and— (1973), Doubleday Science Past, Science Future (1975), Doubleday, Please Explain (1975), Houghton Mifflin, Life and Time (1978), Doubleday The Roving Mind (1983), Prometheus Books, new edition 1997, The Dangers of Intelligence (1986), Houghton Mifflin Past, Present and Future (1987), Prometheus Books, The Tyrannosaurus Prescription (1989), Prometheus Books Frontiers (1990), Dutton Frontiers II (1993), Dutton Other science books by Asimov The Chemicals of Life (1954), Abelard-Schuman Inside the Atom (1956), Abelard-Schuman, Building Blocks of the Universe (1957; revised 1974), Abelard-Schuman, The World of Carbon (1958), Abelard-Schuman, The World of Nitrogen (1958), Abelard-Schuman, Words of Science and the History Behind Them (1959), Houghton Mifflin The Clock We Live On (1959), Abelard-Schuman, Breakthroughs in Science (1959), Houghton Mifflin, Realm of Numbers (1959), Houghton Mifflin, Realm of Measure (1960), Houghton Mifflin The Wellsprings of Life (1960), Abelard-Schuman, Life and Energy (1962), Doubleday, The Genetic Code (1962), The Orion Press The Human Body: Its Structure and Operation (1963), Houghton Mifflin, , (revised) The Human Brain: Its Capacities and Functions (1963), Houghton Mifflin, Planets for Man (with Stephen H. Dole) (1964), Random House, reprinted by RAND in 2007 An Easy Introduction to the Slide Rule (1965), Houghton Mifflin, The Intelligent Man's Guide to Science (1965), Basic Books The title varied with each of the four editions, the last being Asimov's New Guide to Science (1984) The Universe: From Flat Earth to Quasar (1966), Walker, The Neutrino (1966), Doubleday, ASIN B002JK525W Understanding Physics Vol. I, Motion, Sound, and Heat (1966), Walker, Understanding Physics Vol. II, Light, Magnetism, and Electricity (1966), Walker, Understanding Physics Vol. III, The Electron, Proton, and Neutron (1966), Walker, Photosynthesis (1969), Basic Books, Our World in Space (1974), New York Graphic, Eyes on the Universe: A History of the Telescope (1976), Andre Deutsch Limited, The Collapsing Universe (1977), Walker, Extraterrestrial Civilizations (1979), Crown, A Choice of Catastrophes (1979), Simon & Schuster, Visions of the Universe with illustrations by Kazuaki Iwasaki (1981), Cosmos Store, Exploring the Earth and the Cosmos (1982), Crown, The Measure of the Universe (1983), Harper & Row Think About Space: Where Have We Been and Where Are We Going? 
with co-author Frank White (1989), Walker Asimov's Chronology of Science and Discovery (1989), Harper & Row, second edition adds content thru 1993, Beginnings: The Story of Origins (1989), Walker Isaac Asimov's Guide to Earth and Space (1991), Random House, Atom: Journey Across the Subatomic Cosmos (1991), Dutton, Mysteries of Deep Space: Quasars, Pulsars and Black Holes (1994) Earth's Moon (1988), Gareth Stevens, revised in 2003 by Richard Hantula The Sun (1988), Gareth Stevens, revised in 2003 by Richard Hantula The Earth (1988), Gareth Stevens, revised in 2004 by Richard Hantula Jupiter (1989), Gareth Stevens, revised in 2004 by Richard Hantula Venus (1990), Gareth Stevens, revised in 2004 by Richard Hantula Literary works All published by Doubleday Asimov's Guide to Shakespeare, vols I and II (1970), Asimov's Annotated "Don Juan" (1972) Asimov's Annotated "Paradise Lost" (1974) Familiar Poems, Annotated (1976) Asimov's The Annotated "Gulliver's Travels" (1980) Asimov's Annotated "Gilbert and Sullivan" (1988) The Bible Words from Genesis (1962), Houghton Mifflin Words from the Exodus (1963), Houghton Mifflin Asimov's Guide to the Bible, vols I and II (1967 and 1969, one-volume ed. 1981), Doubleday, The Story of Ruth (1972), Doubleday, In the Beginning (1981), Crown Autobiography In Memory Yet Green: The Autobiography of Isaac Asimov, 1920–1954 (1979, Doubleday) In Joy Still Felt: The Autobiography of Isaac Asimov, 1954–1978 (1980, Doubleday) I. Asimov: A Memoir (1994, Doubleday) It's Been a Good Life (2002, Prometheus Books), condensation of Asimov's three volumes of autobiography, edited by his widow, Janet Jeppson Asimov History All published by Houghton Mifflin except where otherwise stated The Kite That Won the Revolution (1963), The Greeks: A Great Adventure (1965) The Roman Republic (1966) The Roman Empire (1967) The Egyptians (1967) The Near East (1968) The Dark Ages (1968) Words from History (1968) The Shaping of England (1969) Constantinople: The Forgotten Empire (1970) The Land of Canaan (1971) The Shaping of France (1972) The Shaping of North America: From Earliest Times to 1763 (1973) The Birth of the United States: 1763–1816 (1974) Our Federal Union: The United States from 1816 to 1865 (1975), The Golden Door: The United States from 1865 to 1918 (1977) Asimov's Chronology of the World (1991), HarperCollins, The March of the Millennia (1991), with co-author Frank White, Walker & Company, Humor The Sensuous Dirty Old Man (1971) (As Dr. A), Walker & Company, Isaac Asimov's Treasury of Humor (1971), Houghton Mifflin, Lecherous Limericks (1975), Walker, More Lecherous Limericks (1976), Walker, Still More Lecherous Limericks (1977), Walker, Limericks, Two Gross, with John Ciardi (1978), Norton, A Grossery of Limericks, with John Ciardi (1981), Norton, Limericks for Children (1984), Caedmon Asimov Laughs Again (1992), HarperCollins On writing science fiction Asimov on Science Fiction (1981), Doubleday Asimov's Galaxy (1989), Doubleday Other nonfiction Opus 100 (1969), Houghton Mifflin, Asimov's Biographical Encyclopedia of Science and Technology (1964), Doubleday (revised edition 1972, ) Opus 200 (1979), Houghton Mifflin, Isaac Asimov's Book of Facts (1979), Grosset & Dunlap, Opus 300 (1984), Houghton Mifflin, Our Angry Earth: A Ticking Ecological Bomb (1991), with co-author Frederik Pohl, Tor, . 
Television, music, and film appearances I Robot, a concept album by the Alan Parsons Project that examined some of Asimov's work The Last Word (1959) The Dick Cavett Show, four appearances 1968–71 The Nature of Things (1969) ABC News coverage of Apollo 11, 1969, with Fred Pohl, interviewed by Rod Serling David Frost interview program, August 1969. Frost asked Asimov if he had ever tried to find God and, after some initial evasion, Asimov answered, "God is much more intelligent than I am—let him try to find me." BBC Horizon "It's About Time" (1979), show hosted by Dudley Moore Target ... Earth? (1980) The David Letterman Show (1980) NBC TV Speaking Freely, interviewed by Edwin Newman (1982) ARTS Network talk show hosted by Studs Terkel and Calvin Trillin, approximately (1982) Oltre New York (1986) Voyage to the Outer Planets and Beyond (1986) Gandahar (1987), a French animated science-fiction film by René Laloux. Asimov wrote the English translation for the film. Bill Moyers interview (1988) Stranieri in America (1988) Adaptations Several of his stories ("The Dead Past", "Sucker Bait", "Satisfaction Guaranteed", "Reason", "Liar!", and "The Naked Sun") were adapted as television plays for the first three series of the science-fiction (later horror) anthology series Out of the Unknown between 1965 and 1969. Only "The Dead Past" and "Sucker Bait" are known to still exist entirely as 16mm telerecordings. Tele-snaps, brief audio recordings and video clips exist for "Satisfaction Guaranteed" and "The Prophet" (adapted from "Reason"), while only production stills, brief audio recordings and video clips exist for "Liar!". Production stills and an almost complete audio recording exist for "The Naked Sun". El robot embustero (1966), short film directed by Antonio Lara de Gavilán, based on short story "Liar!" A halhatatlanság halála (1977), TV movie directed by András Rajnai, based on novel The End of Eternity The Ugly Little Boy (1977), short film directed by Barry Morse and Donald W. Thompson, based on novelette The Ugly Little Boy The End of Eternity (1987), film directed by Andrei Yermash, based on novel The End of Eternity Nightfall (1988), film directed by Paul Mayersberg, based on novelette "Nightfall" Robots (1988), film directed by Doug Smith and Kim Takal, based on the Robot series Robot City (1995), an adventure game released for Windows and Mac OS, based on the book series of the same name that consists of science fiction novels written by multiple authors, inspired by the Robot series. Bicentennial Man (1999), film directed by Chris Columbus, based on novelette "The Bicentennial Man" and on novel The Positronic Man Nightfall (2000), film directed by Gwyneth Gibby, based on novelette "Nightfall" I, Robot (2004), film directed by Alex Proyas, with very tenuous connections with the short stories of the Robot series Eagle Eye (2008), film directed by D. J. Caruso, loosely based on short story "All the Troubles of the World" Formula of Death (2012), TV movie directed by Behdad Avand Amini, based on novel The Death Dealers Spell My Name with an S (2014), short film directed by Samuel Ali, based on short story "Spell My Name with an S" Foundation (2021), series created by David S. Goyer and Josh Friedman, based on the Foundation series References Explanatory footnotes Citations General and cited sources Asimov, Isaac. Isaac Asimov's Treasury of Humor (1971), Boston: Houghton Mifflin, . In Memory Yet Green (1979), New York: Avon, . In Joy Still Felt (1980), New York: Avon . I. 
Asimov: A Memoir (1994), (hc), (pb). Yours, Isaac Asimov (1996), edited by Stanley Asimov. New York: Doubleday . It's Been a Good Life (2002), edited by Janet Asimov. . Goldman, Stephen H., "Isaac Asimov", in Dictionary of Literary Biography, Vol. 8, Cowart and Wymer eds. (Gale Research, 1981), pp. 15–29. Gunn, James. "On Variations on a Robot", IASFM, July 1980, pp. 56–81. Isaac Asimov: The Foundations of Science Fiction (1982). . The Science of Science-Fiction Writing (2000). . Further reading External links Asimov Online, a vast repository of information about Asimov, maintained by Asimov enthusiast Edward Seiler Jenkins' Spoiler-Laden Guide to Isaac Asimov, reviews of all of Asimov's books 1920 births 1992 deaths 20th-century American essayists 20th-century American male writers 20th-century American memoirists 20th-century American novelists 20th-century American short story writers 20th-century atheists AIDS-related deaths in New York (state) American alternate history writers American atheists American biochemists American critics of religions American historians of science American humanists American humorists American male essayists American male feminists American male non-fiction writers American male novelists American male short story writers American mystery writers American people of Russian-Jewish descent American science fiction writers American science writers American skeptics American writers of Russian descent Analog Science Fiction and Fact people Asimov's Science Fiction people Atheist feminists Bible commentators Boston University faculty Boys High School (Brooklyn) alumni Columbia Graduate School of Arts and Sciences alumni Columbia University School of General Studies alumni Date of birth unknown Fellows of the American Academy of Arts and Sciences Futurians Historians of astronomy Hugo Award–winning writers Humor researchers Jewish American atheists Jewish American essayists Jewish American memoirists Jewish American military personnel Jewish American non-fiction writers Jewish American novelists Jewish American short story writers Jewish American feminists American feminists Jewish skeptics Mensans Military personnel from New York City Naturalized citizens of the United States Nebula Award winners New York (state) Democrats Novelists from Massachusetts Novelists from New York (state) People from Smolensk Oblast People from the Upper West Side Pulp fiction writers SFWA Grand Masters Science Fiction Hall of Fame inductees Scientists from New York City Soviet emigrants to the United States United States Army non-commissioned officers United States Navy civilians Writers about religion and science Writers from Brooklyn Yiddish-speaking people 20th-century American Jews
Isaac Asimov
Astronomy
17,619
55,719,898
https://en.wikipedia.org/wiki/Thioacyl%20chloride
In organic chemistry, a thioacyl chloride is an organic compound containing the functional group −C(S)Cl. The general formula is usually written RC(S)Cl, where R is a side chain. Thioacyl chlorides are analogous to acyl chlorides, but much rarer and less robust. The best studied is thiobenzoyl chloride, a purple oil first prepared by chlorination of dithiobenzoic acid with a combination of chlorine and thionyl chloride. A more modern preparation employs phosgene as the chlorinating agent; this also generates carbonyl sulfide as a by-product: PhCS2H + COCl2 → PhC(S)Cl + HCl + COS The most common thioacyl chloride is thiophosgene. References Functional groups Organosulfur compounds
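The phosgene route above is a simple stoichiometric equation, and a quick atom- and mass-balance check makes it concrete. The following sketch is purely illustrative and not part of the source article: the molecular formulas (dithiobenzoic acid C7H6S2, phosgene CCl2O, thiobenzoyl chloride C7H5ClS, HCl, and carbonyl sulfide COS) and the rounded atomic weights are standard reference values, and the code only verifies that PhCS2H + COCl2 → PhC(S)Cl + HCl + COS is balanced.

```python
from collections import Counter

# Approximate standard atomic weights (g/mol); illustrative values only.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999, "S": 32.06, "Cl": 35.45}

# Molecular formulas for each species in PhCS2H + COCl2 -> PhC(S)Cl + HCl + COS
REACTANTS = [Counter({"C": 7, "H": 6, "S": 2}),           # dithiobenzoic acid, PhCS2H
             Counter({"C": 1, "O": 1, "Cl": 2})]          # phosgene, COCl2
PRODUCTS = [Counter({"C": 7, "H": 5, "Cl": 1, "S": 1}),   # thiobenzoyl chloride, PhC(S)Cl
            Counter({"H": 1, "Cl": 1}),                   # hydrogen chloride
            Counter({"C": 1, "O": 1, "S": 1})]            # carbonyl sulfide, COS

def total_atoms(side):
    """Sum the element counts over every species on one side of the equation."""
    totals = Counter()
    for formula in side:
        totals.update(formula)
    return totals

def molar_mass(formula):
    return sum(ATOMIC_WEIGHT[element] * n for element, n in formula.items())

assert total_atoms(REACTANTS) == total_atoms(PRODUCTS)  # atom balance holds
print(f"reactant mass: {sum(map(molar_mass, REACTANTS)):.2f} g/mol")
print(f"product mass:  {sum(map(molar_mass, PRODUCTS)):.2f} g/mol")
```

Both sides come out to about 253 g/mol, as expected for a balanced equation.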
Thioacyl chloride
Chemistry
170
4,049,168
https://en.wikipedia.org/wiki/Glass%20cloth
Glass cloth is a textile material woven from glass fiber yarn. Home and garden Glass cloth was originally developed to be used in greenhouse paneling, allowing sunlight's ultraviolet rays to be filtered out, while still allowing visible light through to plants. Glass cloth is also a term for a type of tea towel suited for polishing glass. The cloth is usually woven with the plain weave, and may be patterned in various ways, though checked cloths are the most common. The original cloth was made from linen, but a large quantity is made with cotton warp and tow weft, and in some cases they are composed entirely of cotton. Short fibres of the cheaper kind are easily detached from the cloth. In the Southern Plains during the Dust Bowl, states' health officials recommended attaching translucent glass cloth to the inside frames of windows to help in keeping the dust out of buildings, although people also used paperboard, canvas or blankets. Eyewitness accounts indicate they were not completely successful. Use in technology Given the properties of glass - in particular, its heat resistance and inability to ignite - glass is often used to create fire barriers in hazardous environments, such as those inside racecars. Due to its poor flexibility and ability to cause skin irritation, glass fibers are typically inadequate for use in apparel. However, the bi-directional strength of glass cloth has found utility in some fiberglass reinforced plastics. The Rutan VariEze homebuilt aircraft uses a moldless glass-cloth/epoxy composite, which acts as a protective skin. Glass cloth is also commonly used as a reinforcing lattice for pre-pregs. See also G-10 (material) Glass fiber References Woven fabrics Linens Fiberglass Composite materials Fibre-reinforced polymers Glass applications
Glass cloth
Physics,Chemistry,Materials_science
355
244,050
https://en.wikipedia.org/wiki/Moorea%20sandpiper
The Moorea Sandpiper (Prosobonia ellisi) is an extinct member of the large wader family Scolopacidae that was endemic to Mo'orea in French Polynesia, where the locals called it te-te in the Tahitian language. Two specimens were collected by Georg Forster and William Anderson between September 30 and October 11, 1777, during Captain Cook's third voyage, but both have since disappeared and the bird became extinct in the nineteenth century. Several drawings of the bird were made by those accompanying Cook on his voyage; William Ellis and John Webber both illustrated the sandpiper between August and December 1777. These illustrations show a somewhat lighter brown bird than the Tahiti Sandpiper, with no white spot behind the eye, a more conspicuous light rusty eye-ring, two white wing-bars and rusty secondary and primary coverts; one of Latham's specimens had yellow legs and feet. The exact relationships between the Moorea and Tahiti specimens are still not fully resolved, and some authorities remain unsure whether they represent separate species. The Moorea Sandpiper was said to be found "close to small brooks" and it was still at least moderately common around 1776–1779, during Cook's last voyage. Invasive rats may have been a contributing factor in its extinction. References Further reading Greenway, James C. (1967): Tahitian Sandpiper. In: Extinct and Vanishing Birds of the World (2nd ed.): 263–264. Dover Publications, New York. Latham, John (1785): "White-winged Sandpiper": In: A general synopsis of birds 3: 172, plate 82. London. Latham, John (1824): "White-winged Sandpiper": In: A general history of birds 9: 296. External links BirdLife species factsheet. Retrieved 11-SEP-2006. Prosobonia Bird extinctions since 1500 Birds described in 1906 Birds of the Society Islands Extinct birds of Oceania Controversial bird taxa Taxa named by Richard Bowdler Sharpe Mo'orea
Moorea sandpiper
Biology
425
378,938
https://en.wikipedia.org/wiki/Magnesium%20in%20biology
Magnesium is an essential element in biological systems. Magnesium occurs typically as the Mg2+ ion. It is an essential mineral nutrient (i.e., element) for life and is present in every cell type in every organism. For example, adenosine triphosphate (ATP), the main source of energy in cells, must bind to a magnesium ion in order to be biologically active. What is called ATP is often actually Mg-ATP. As such, magnesium plays a role in the stability of all polyphosphate compounds in the cells, including those associated with the synthesis of DNA and RNA. Over 300 enzymes require the presence of magnesium ions for their catalytic action, including all enzymes utilizing or synthesizing ATP, or those that use other nucleotides to synthesize DNA and RNA. In plants, magnesium is necessary for synthesis of chlorophyll and photosynthesis. Function A balance of magnesium is vital to the well-being of all organisms. Magnesium is a relatively abundant ion in Earth's crust and mantle and is highly bioavailable in the hydrosphere. This availability, in combination with a useful and very unusual chemistry, may have led to its utilization in evolution as an ion for signaling, enzyme activation, and catalysis. However, the unusual nature of ionic magnesium has also led to a major challenge in the use of the ion in biological systems. Biological membranes are impermeable to magnesium (and other ions), so transport proteins must facilitate the flow of magnesium, both into and out of cells and intracellular compartments. Human health Inadequate magnesium intake frequently causes muscle spasms, and has been associated with cardiovascular disease, diabetes, high blood pressure, anxiety disorders, migraines, osteoporosis, and cerebral infarction. Acute deficiency (see hypomagnesemia) is rare, and is more common as a drug side-effect (such as chronic alcohol or diuretic use) than from low food intake per se, but it can occur in people fed intravenously for extended periods of time. The most common symptom of excess oral magnesium intake is diarrhea. Supplements based on amino acid chelates (such as glycinate, lysinate etc.) are much better-tolerated by the digestive system and do not have the side-effects of the older compounds used, while sustained-release dietary supplements prevent the occurrence of diarrhea. Since the kidneys of adult humans excrete excess magnesium efficiently, oral magnesium poisoning in adults with normal renal function is very rare. Infants, which have less ability to excrete excess magnesium even when healthy, should not be given magnesium supplements, except under a physician's care. Pharmaceutical preparations with magnesium are used to treat conditions including magnesium deficiency and hypomagnesemia, as well as eclampsia. Such preparations are usually in the form of magnesium sulfate or chloride when given parenterally. Magnesium is absorbed with reasonable efficiency (30% to 40%) by the body from any soluble magnesium salt, such as the chloride or citrate. Magnesium is similarly absorbed from Epsom salts, although the sulfate in these salts adds to their laxative effect at higher doses. Magnesium absorption from the insoluble oxide and hydroxide salts (milk of magnesia) is erratic and of poorer efficiency, since it depends on the neutralization and solution of the salt by the acid of the stomach, which may not be (and usually is not) complete. 
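The absorption figures above lend themselves to a quick back-of-envelope estimate. The sketch below is illustrative only: the roughly 16% elemental-magnesium content of magnesium citrate and the 500 mg example dose are assumed values that do not appear in the text, while the 30% to 40% range is the efficiency quoted above for soluble salts.

```python
# Rough illustration of how much elemental magnesium might be absorbed from a
# soluble salt. The elemental fraction and the dose are assumed example values.
CITRATE_ELEMENTAL_FRACTION = 0.16   # approx. mass fraction of Mg in magnesium citrate
ABSORPTION_RANGE = (0.30, 0.40)     # efficiency quoted above for soluble magnesium salts

def absorbed_mg(salt_dose_mg: float, elemental_fraction: float) -> tuple:
    """Return a (low, high) estimate of absorbed elemental magnesium in mg."""
    elemental = salt_dose_mg * elemental_fraction
    return tuple(round(elemental * f, 1) for f in ABSORPTION_RANGE)

low, high = absorbed_mg(500, CITRATE_ELEMENTAL_FRACTION)  # hypothetical 500 mg of the salt
print(f"500 mg magnesium citrate -> roughly {low}-{high} mg elemental Mg absorbed")
```

On these assumptions, a 500 mg dose of the salt supplies about 80 mg of elemental magnesium, of which only a few tens of milligrams are actually absorbed.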
Magnesium orotate may be used as adjuvant therapy in patients on optimal treatment for severe congestive heart failure, increasing survival rate and improving clinical symptoms and patients' quality of life. In 2022, magnesium salts were the 207th most commonly prescribed medication in the United States, with more than 1 million prescriptions. Nerve conduction Magnesium can affect muscle relaxation through direct action on cell membranes. Mg2+ ions close certain types of calcium channels, which conduct positively charged calcium ions into neurons. With an excess of magnesium, more channels will be blocked and nerve cell activity will decrease. Hypertension Intravenous magnesium sulphate is used in treating pre-eclampsia. For other than pregnancy-related hypertension, a meta-analysis of 22 clinical trials with dose ranges of 120 to 973 mg/day and a mean dose of 410 mg concluded that magnesium supplementation had a small but statistically significant effect, lowering systolic blood pressure by 3–4 mm Hg and diastolic blood pressure by 2–3 mm Hg. The effect was larger when the dose was more than 370 mg/day. Diabetes and glucose tolerance Higher dietary intakes of magnesium correspond to lower diabetes incidence. For people with diabetes or at high risk of diabetes, magnesium supplementation lowers fasting glucose. Mitochondria Magnesium is essential as part of the process that generates adenosine triphosphate. Mitochondria are often referred to as the "powerhouses of the cell" because their primary role is generating energy for cellular processes. They achieve this by breaking down nutrients, primarily glucose, through a series of chemical reactions known as cellular respiration. This process ultimately produces adenosine triphosphate (ATP), the cell's main energy currency. Vitamin D Magnesium and vitamin D have a synergistic relationship in the body, meaning they work together to optimize each other's functions: magnesium activates vitamin D, and vitamin D influences magnesium absorption. Bone health: They play crucial roles in calcium absorption and bone metabolism. Muscle function: They contribute to muscle contraction and relaxation, impacting physical performance and overall well-being. Immune function: They support a healthy immune system and may help reduce inflammation. Overall, maintaining adequate levels of both magnesium and vitamin D is essential for optimal health and well-being. Testosterone It is theorized that the process of making testosterone from cholesterol needs magnesium to function properly. Studies have shown that significant gains in testosterone occur after taking 10 mg magnesium/kg body weight/day. Dietary recommendations The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for magnesium in 1997. If there is not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) is used instead. The current EARs for magnesium for women and men ages 31 and up are 265 mg/day and 350 mg/day, respectively. The RDAs are 320 and 420 mg/day. RDAs are higher than EARs so as to identify amounts that will cover people with higher than average requirements. The RDA for pregnancy is 350 to 400 mg/day depending on the age of the woman. The RDA for lactation ranges from 310 to 360 mg/day for the same reason. For children ages 1–13 years, the RDA increases with age from 65 to 200 mg/day.
As for safety, the IOM also sets Tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of magnesium, the UL is set at 350 mg/day. The UL is specific to magnesium consumed as a dietary supplement, the reason being that too much magnesium consumed at one time can cause diarrhea. The UL does not apply to food-sourced magnesium. Collectively the EARs, RDAs and ULs are referred to as Dietary Reference Intakes. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women and men ages 18 and older, the AIs are set at 300 and 350 mg/day, respectively. AIs for pregnancy and lactation are also 300 mg/day. For children ages 1–17 years, the AIs increase with age from 170 to 250 mg/day. These AIs are lower than the U.S. RDAs. The European Food Safety Authority reviewed the same safety question and set its UL at 250 mg/day, lower than the U.S. value. The magnesium UL is unique in that it is lower than some of the RDAs. It applies to intake from a pharmacological agent or dietary supplement only and does not include intake from food and water. Labeling For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of daily value (%DV). For magnesium labeling purposes, 100% of the daily value was 400 mg, but as of May 27, 2016, it was revised to 420 mg to bring it into agreement with the RDA. A table of the old and new adult Daily Values is provided at Reference Daily Intake. Food sources Green vegetables such as spinach provide magnesium because of the abundance of chlorophyll molecules, which contain the ion. Nuts (especially Brazil nuts, cashews and almonds), seeds (e.g., pumpkin seeds), dark chocolate, roasted soybeans, bran, and some whole grains are also good sources of magnesium. Although many foods contain magnesium, it is usually found in low levels. As with most nutrients, daily needs for magnesium are unlikely to be met by one serving of any single food. Eating a wide variety of fruits, vegetables, and grains will help ensure adequate intake of magnesium. Because magnesium readily dissolves in water, refined foods, which are often processed or cooked in water and dried, in general, are poor sources of the nutrient. For example, whole-wheat bread has twice as much magnesium as white bread because the magnesium-rich germ and bran are removed when white flour is processed. The table of food sources of magnesium suggests many dietary sources of magnesium. "Hard" water can also provide magnesium, but "soft" water contains less of the ion. Dietary surveys do not assess magnesium intake from water, which may lead to underestimating total magnesium intake and its variability. Too much magnesium may make it difficult for the body to absorb calcium. Not enough magnesium can lead to hypomagnesemia as described above, with irregular heartbeats, high blood pressure (a sign in humans but not some experimental animals such as rodents), insomnia, and muscle spasms (fasciculation). However, as noted, symptoms of low magnesium from pure dietary deficiency are thought to be rarely encountered.
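The labeling arithmetic is simple enough to show directly. This sketch is illustrative only: it uses the 420 mg Daily Value quoted above (and the older 400 mg value for comparison), and the 79 mg example figure for boiled spinach is taken from the food list that follows.

```python
# Percent Daily Value (%DV) arithmetic for magnesium on U.S. labels.
# Daily Values from the text: 400 mg before the 2016 revision, 420 mg after.
OLD_DAILY_VALUE_MG = 400
NEW_DAILY_VALUE_MG = 420

def percent_dv(mg_per_serving: float, daily_value_mg: float = NEW_DAILY_VALUE_MG) -> float:
    """Express a serving's magnesium content as a percent of the Daily Value."""
    return round(100 * mg_per_serving / daily_value_mg, 1)

# Example: boiled spinach at 79 mg per serving (figure from the food list below).
spinach_mg = 79
print(percent_dv(spinach_mg))                      # about 18.8 %DV under the 420 mg DV
print(percent_dv(spinach_mg, OLD_DAILY_VALUE_MG))  # about 19.8 %DV under the older 400 mg DV
```

Because the Daily Value was raised, the same serving now reports a slightly lower %DV than it did before the 2016 revision.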
Following are some foods and the amount of magnesium in them: Pumpkin seeds, no hulls ( cup) = 303 mg Chia seeds, ( cup) = 162 mg Buckwheat flour ( cup) = 151 mg Brazil nuts ( cup) = 125 mg Oat bran, raw ( cup) = 110 mg Cocoa powder ( cup) = 107 mg Halibut (3 oz) = 103 mg Almonds ( cup) = 99 mg Cashews ( cup) = 89 mg Whole wheat flour ( cup) = 83 mg Spinach, boiled ( cup) = 79 mg Swiss chard, boiled ( cup) = 75 mg Chocolate, 70% cocoa (1 oz) = 73 mg Tofu, firm ( cup) = 73 mg Black beans, boiled ( cup) = 60 mg Quinoa, cooked ( cup) = 59 mg Peanut butter (2 tablespoons) = 50 mg Walnuts ( cup) = 46 mg Sunflower seeds, hulled ( cup) = 41 mg Chickpeas, boiled ( cup) = 39 mg Kale, boiled ( cup) = 37 mg Lentils, boiled ( cup) = 36 mg Oatmeal, cooked ( cup) = 32 mg Fish sauce (1 Tbsp) = 32 mg Milk, non fat (1 cup) = 27 mg Coffee, espresso (1 oz) = 24 mg Whole wheat bread (1 slice) = 23 mg Biological range, distribution, and regulation In animals, it has been shown that different cell types maintain different concentrations of magnesium. It seems likely that the same is true for plants. This suggests that different cell types may regulate influx and efflux of magnesium in different ways based on their unique metabolic needs. Interstitial and systemic concentrations of free magnesium must be delicately maintained by the combined processes of buffering (binding of ions to proteins and other molecules) and muffling (the transport of ions to storage or extracellular spaces). In plants, and more recently in animals, magnesium has been recognized as an important signaling ion, both activating and mediating many biochemical reactions. The best example of this is perhaps the regulation of carbon fixation in chloroplasts in the Calvin cycle. Magnesium is very important in cellular function. Deficiency of the nutrient causes disease of the affected organism. In single-cell organisms such as bacteria and yeast, low levels of magnesium manifests in greatly reduced growth rates. In magnesium transport knockout strains of bacteria, healthy rates are maintained only with exposure to very high external concentrations of the ion. In yeast, mitochondrial magnesium deficiency also leads to disease. Plants deficient in magnesium show stress responses. The first observable signs of both magnesium starvation and overexposure in plants is a decrease in the rate of photosynthesis. This is due to the central position of the Mg2+ ion in the chlorophyll molecule. The later effects of magnesium deficiency on plants are a significant reduction in growth and reproductive viability. Magnesium can also be toxic to plants, although this is typically seen only in drought conditions. In animals, magnesium deficiency (hypomagnesemia) is seen when the environmental availability of magnesium is low. In ruminant animals, particularly vulnerable to magnesium availability in pasture grasses, the condition is known as 'grass tetany'. Hypomagnesemia is identified by a loss of balance due to muscle weakness. A number of genetically attributable hypomagnesemia disorders have also been identified in humans. Overexposure to magnesium may be toxic to individual cells, though these effects have been difficult to show experimentally. Hypermagnesemia, an overabundance of magnesium in the blood, is usually caused by loss of kidney function. Healthy animals rapidly excrete excess magnesium in the urine and stool. Urinary magnesium is called magnesuria. Characteristic concentrations of magnesium in model organisms are: in E. 
coli 30-100mM (bound), 0.01-1mM (free), in budding yeast 50mM, in mammalian cell 10mM (bound), 0.5mM (free) and in blood plasma 1mM. Biological chemistry Mg2+ is the fourth-most-abundant metal ion in cells (per moles) and the most abundant free divalent cation — as a result, it is deeply and intrinsically woven into cellular metabolism. Indeed, Mg2+-dependent enzymes appear in virtually every metabolic pathway: Specific binding of Mg2+ to biological membranes is frequently observed, Mg2+ is also used as a signalling molecule, and much of nucleic acid biochemistry requires Mg2+, including all reactions that require release of energy from ATP. In nucleotides, the triple-phosphate moiety of the compound is invariably stabilized by association with Mg2+ in all enzymatic processes. Chlorophyll In photosynthetic organisms, Mg2+ has the additional vital role of being the coordinating ion in the chlorophyll molecule. This role was discovered by Richard Willstätter, who received the Nobel Prize in Chemistry 1915 for the purification and structure of chlorophyll binding with sixth number of carbon Enzymes The chemistry of the Mg2+ ion, as applied to enzymes, uses the full range of this ion's unusual reaction chemistry to fulfill a range of functions. Mg2+ interacts with substrates, enzymes, and occasionally both (Mg2+ may form part of the active site). In general, Mg2+ interacts with substrates through inner sphere coordination, stabilising anions or reactive intermediates, also including binding to ATP and activating the molecule to nucleophilic attack. When interacting with enzymes and other proteins, Mg2+ may bind using inner or outer sphere coordination, to either alter the conformation of the enzyme or take part in the chemistry of the catalytic reaction. In either case, because Mg2+ is only rarely fully dehydrated during ligand binding, it may be a water molecule associated with the Mg2+ that is important rather than the ion itself. The Lewis acidity of Mg2+ (pKa 11.4) is used to allow both hydrolysis and condensation reactions (most common ones being phosphate ester hydrolysis and phosphoryl transfer) that would otherwise require pH values greatly removed from physiological values. Essential role in the biological activity of ATP ATP (adenosine triphosphate), the main source of energy in cells, must be bound to a magnesium ion in order to be biologically active. What is called ATP is often actually Mg-ATP. Nucleic acids Nucleic acids have an important range of interactions with Mg2+. The binding of Mg2+ to DNA and RNA stabilises structure; this can be observed in the increased melting temperature (Tm) of double-stranded DNA in the presence of Mg2+. In addition, ribosomes contain large amounts of Mg2+ and the stabilisation provided is essential to the complexation of this ribo-protein. A large number of enzymes involved in the biochemistry of nucleic acids bind Mg2+ for activity, using the ion for both activation and catalysis. Finally, the autocatalysis of many ribozymes (enzymes containing only RNA) is Mg2+ dependent (e.g. the yeast mitochondrial group II self splicing introns). Magnesium ions can be critical in maintaining the positional integrity of closely clustered phosphate groups. These clusters appear in numerous and distinct parts of the cell nucleus and cytoplasm. For instance, hexahydrated Mg2+ ions bind in the deep major groove and at the outer mouth of A-form nucleic acid duplexes. Cell membranes and walls Biological cell membranes and cell walls are polyanionic surfaces. 
This has important implications for the transport of ions, in particular because it has been shown that different membranes preferentially bind different ions. Both Mg2+ and Ca2+ regularly stabilize membranes by the cross-linking of carboxylated and phosphorylated head groups of lipids. However, the envelope membrane of E. coli has also been shown to bind Na+, K+, Mn2+ and Fe3+. The transport of ions is dependent on both the concentration gradient of the ion and the electric potential (ΔΨ) across the membrane, which will be affected by the charge on the membrane surface. For example, the specific binding of Mg2+ to the chloroplast envelope has been implicated in a loss of photosynthetic efficiency by the blockage of K+ uptake and the subsequent acidification of the chloroplast stroma. Proteins The Mg2+ ion tends to bind only weakly to proteins (Ka ≤ 10^5) and this can be exploited by the cell to switch enzymatic activity on and off by changes in the local concentration of Mg2+. Although the concentration of free cytoplasmic Mg2+ is on the order of 1 mmol/L, the total Mg2+ content of animal cells is 30 mmol/L and in plants the content of leaf endodermal cells has been measured at values as high as 100 mmol/L (Stelzer et al., 1990), much of which is buffered in storage compartments. The cytoplasmic concentration of free Mg2+ is buffered by binding to chelators (e.g., ATP), but also, what is more important, it is buffered by storage of Mg2+ in intracellular compartments. The transport of Mg2+ between intracellular compartments may be a major part of regulating enzyme activity. The interaction of Mg2+ with proteins must also be considered for the transport of the ion across biological membranes. Manganese In biological systems, only manganese (Mn2+) is readily capable of replacing Mg2+, but only in a limited set of circumstances. Mn2+ is very similar to Mg2+ in terms of its chemical properties, including inner and outer shell complexation. Mn2+ effectively binds ATP and allows hydrolysis of the energy molecule by most ATPases. Mn2+ can also replace Mg2+ as the activating ion for a number of Mg2+-dependent enzymes, although some enzyme activity is usually lost. Sometimes such enzyme metal preferences vary among closely related species: for example, the reverse transcriptase enzyme of lentiviruses like HIV, SIV and FIV is typically dependent on Mg2+, whereas the analogous enzyme for other retroviruses prefers Mn2+. Measuring magnesium in biological samples By radioactive isotopes The use of radioactive tracer elements in ion uptake assays allows the calculation of Km, Ki and Vmax and determines the initial change in the ion content of the cells. 28Mg decays by the emission of a high-energy beta or gamma particle, which can be measured using a scintillation counter. However, the radioactive half-life of 28Mg, the most stable of the radioactive magnesium isotopes, is only 21 hours. This severely restricts the experiments involving the nuclide. Also, since 1990, no facility has routinely produced 28Mg, and the price per mCi is now predicted to be approximately US$30,000. The chemical nature of Mg2+ is such that it is closely approximated by few other cations. However, Co2+, Mn2+ and Ni2+ have been used successfully to mimic the properties of Mg2+ in some enzyme reactions, and radioactive forms of these elements have been employed successfully in cation transport studies.
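Returning to the 21-hour half-life of 28Mg mentioned above, a simple decay calculation makes the practical limitation concrete. The sketch below is illustrative only; it applies the standard relation N(t) = N0 * (1/2)^(t / t_half), and the chosen time points are arbitrary examples rather than values from the text.

```python
# How quickly a 28Mg tracer decays, given its ~21 h half-life (value from the text).
HALF_LIFE_HOURS = 21.0

def fraction_remaining(elapsed_hours: float) -> float:
    """Fraction of the original 28Mg activity left after the given time."""
    return 0.5 ** (elapsed_hours / HALF_LIFE_HOURS)

for hours in (12, 24, 48, 72):
    print(f"after {hours:3d} h: {fraction_remaining(hours) * 100:5.1f}% of the activity remains")
```

Less than a tenth of the starting activity survives a three-day experiment, which is one reason uptake assays with 28Mg must be kept short.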
The difficulty of using metal ion replacement in the study of enzyme function is that the relationship between the enzyme activities with the replacement ion compared to the original is very difficult to ascertain. By fluorescent indicators A number of chelators of divalent cations have different fluorescence spectra in the bound and unbound states. Chelators for Ca2+ are well established, have high affinity for the cation, and low interference from other ions. Mg2+ chelators lag behind and the major fluorescence dye for Mg2+ (mag-fura 2) actually has a higher affinity for Ca2+. This limits the application of this dye to cell types where the resting level of Ca2+ is < 1 μM and does not vary with the experimental conditions under which Mg2+ is to be measured. Recently, Otten et al. (2001) have described work into a new class of compounds that may prove more useful, having significantly better binding affinities for Mg2+. The use of the fluorescent dyes is limited to measuring the free Mg2+. If the ion concentration is buffered by the cell by chelation or removal to subcellular compartments, the measured rate of uptake will give only minimum values of km and Vmax. By electrophysiology First, ion-specific microelectrodes can be used to measure the internal free ion concentration of cells and organelles. The major advantages are that readings can be made from cells over relatively long periods of time, and that unlike dyes very little extra ion buffering capacity is added to the cells. Second, the technique of two-electrode voltage-clamp allows the direct measurement of the ion flux across the membrane of a cell. The membrane is held at an electric potential and the responding current is measured. All ions passing across the membrane contribute to the measured current. Third, the technique of patch-clamp uses isolated sections of natural or artificial membrane in much the same manner as voltage-clamp but without the secondary effects of a cellular system. Under ideal conditions the conductance of individual channels can be quantified. This methodology gives the most direct measurement of the action of ion channels. By absorption spectroscopy Flame atomic absorption spectroscopy (AAS) determines the total magnesium content of a biological sample. This method is destructive; biological samples must be broken down in concentrated acids to avoid clogging the fine nebulising apparatus. Beyond this, the only limitation is that samples must be in a volume of approximately 2 mL and at a concentration range of 0.1 – 0.4 μmol/L for optimum accuracy. As this technique cannot distinguish between Mg2+ already present in the cell and that taken up during the experiment, only content not uptaken can be quantified. Inductively coupled plasma (ICP) using either the mass spectrometry (MS) or atomic emission spectroscopy (AES) modifications also allows the determination of the total ion content of biological samples. Magnesium transport The chemical and biochemical properties of Mg2+ present the cellular system with a significant challenge when transporting the ion across biological membranes. The dogma of ion transport states that the transporter recognises the ion then progressively removes the water of hydration, removing most or all of the water at a selective pore before releasing the ion on the far side of the membrane. 
Due to the properties of Mg2+ (the large volume change from hydrated to bare ion, the high energy of hydration and the very low rate of ligand exchange in the inner coordination sphere), these steps are probably more difficult than for most other ions. To date, only the ZntA protein of Paramecium has been shown to be a Mg2+ channel. The mechanisms of Mg2+ transport by the remaining proteins are beginning to be uncovered, with the first three-dimensional structure of a Mg2+ transport complex being solved in 2004. The hydration shell of the Mg2+ ion has a very tightly bound inner shell of six water molecules and a relatively tightly bound second shell containing 12–14 water molecules (Markham et al., 2002). Thus, it is presumed that recognition of the Mg2+ ion requires some mechanism to interact initially with the hydration shell of Mg2+, followed by a direct recognition/binding of the ion to the protein. In spite of the mechanistic difficulty, Mg2+ must be transported across membranes, and a large number of Mg2+ fluxes across membranes from a variety of systems have been described. However, only a small selection of Mg2+ transporters has been characterised at the molecular level. Ligand ion channel blockade Magnesium ions (Mg2+) in cellular biology are usually in almost all senses opposite to Ca2+ ions, because they are bivalent too, but have greater electronegativity and thus exert greater pull on water molecules, preventing passage through the channel (even though the magnesium itself is smaller). Thus, Mg2+ ions block Ca2+ channels such as NMDA channels and have been shown to affect gap junction channels forming electrical synapses. Plant physiology of magnesium The previous sections have dealt in detail with the chemical and biochemical aspects of Mg2+ and its transport across cellular membranes. This section will apply this knowledge to aspects of whole plant physiology, in an attempt to show how these processes interact with the larger and more complex environment of the multicellular organism. Nutritional requirements and interactions Mg2+ is essential for plant growth and is present in higher plants in amounts on the order of 80 μmol g−1 dry weight. The amounts of Mg2+ vary in different parts of the plant and are dependent upon nutritional status. In times of plenty, excess Mg2+ may be stored in vascular cells (Stelzer et al., 1990), and in times of starvation Mg2+ is redistributed, in many plants, from older to newer leaves. Mg2+ is taken up into plants via the roots. Interactions with other cations in the rhizosphere can have a significant effect on the uptake of the ion (Kurvits and Kirkby, 1980). The structure of root cell walls is highly permeable to water and ions, and hence ion uptake into root cells can occur anywhere from the root hairs to cells located almost in the centre of the root (limited only by the Casparian strip). Plant cell walls and membranes carry a great number of negative charges, and the interactions of cations with these charges are key to the uptake of cations by root cells, allowing a local concentrating effect. Mg2+ binds relatively weakly to these charges, and can be displaced by other cations, impeding uptake and causing deficiency in the plant. Within individual plant cells, the Mg2+ requirements are largely the same as for all cellular life; Mg2+ is used to stabilise membranes, is vital to the utilisation of ATP, is extensively involved in the nucleic acid biochemistry, and is a cofactor for many enzymes (including the ribosome).
Also, Mg2+ is the coordinating ion in the chlorophyll molecule. It is the intracellular compartmentalisation of Mg2+ in plant cells that leads to additional complexity. Four compartments within the plant cell have reported interactions with Mg2+. Initially, Mg2+ will enter the cell into the cytoplasm (by an as yet unidentified system), but free Mg2+ concentrations in this compartment are tightly regulated at relatively low levels (≈2 mmol/L) and so any excess Mg2+ is either quickly exported or stored in the second intracellular compartment, the vacuole. The requirement for Mg2+ in mitochondria has been demonstrated in yeast and it seems highly likely that the same will apply in plants. The chloroplasts also require significant amounts of internal Mg2+, and low concentrations of cytoplasmic Mg2+. In addition, it seems likely that the other subcellular organelles (e.g., Golgi, endoplasmic reticulum, etc.) also require Mg2+. Distributing magnesium ions within the plant Once in the cytoplasmic space of root cells Mg2+, along with the other cations, is probably transported radially into the stele and the vascular tissue. From the cells surrounding the xylem the ions are released or pumped into the xylem and carried up through the plant. In the case of Mg2+, which is highly mobile in both the xylem and phloem, the ions will be transported to the top of the plant and back down again in a continuous cycle of replenishment. Hence, uptake and release from vascular cells is probably a key part of whole plant Mg2+ homeostasis. Figure 1 shows how few processes have been connected to their molecular mechanisms (only vacuolar uptake has been associated with a transport protein, AtMHX). The diagram shows a schematic of a plant and the putative processes of Mg2+ transport at the root and leaf where Mg2+ is loaded and unloaded from the vascular tissues. Mg2+ is taken up into the root cell wall space (1) and interacts with the negative charges associated with the cell walls and membranes. Mg2+ may be taken up into cells immediately (symplastic pathway) or may travel as far as the Casparian band (4) before being absorbed into cells (apoplastic pathway; 2). The concentration of Mg2+ in the root cells is probably buffered by storage in root cell vacuoles (3). Note that cells in the root tip do not contain vacuoles. Once in the root cell cytoplasm, Mg2+ travels toward the centre of the root by plasmodesmata, where it is loaded into the xylem (5) for transport to the upper parts of the plant. When the Mg2+ reaches the leaves it is unloaded from the xylem into cells (6) and again is buffered in vacuoles (7). Whether cycling of Mg2+ into the phloem occurs via general cells in the leaf (8) or directly from xylem to phloem via transfer cells (9) is unknown. Mg2+ may return to the roots in the phloem sap. When a Mg2+ ion has been absorbed by a cell requiring it for metabolic processes, it is generally assumed that the ion stays in that cell for as long as the cell is active. In vascular cells, this is not always the case; in times of plenty, Mg2+ is stored in the vacuole, takes no part in the day-to-day metabolic processes of the cell (Stelzer et al., 1990), and is released at need. But for most cells it is death by senescence or injury that releases Mg2+ and many of the other ionic constituents, recycling them into healthy parts of the plant. In addition, when Mg2+ in the environment is limiting, some species are able to mobilise Mg2+ from older tissues. 
These processes involve the release of Mg2+ from its bound and stored states and its transport back into the vascular tissue, where it can be distributed to the rest of the plant. In times of growth and development, Mg2+ is also remobilised within the plant as source and sink relationships change. The homeostasis of Mg2+ within single plant cells is maintained by processes occurring at the plasma membrane and at the vacuole membrane (see Figure 2). The major driving force for the translocation of ions in plant cells is ΔpH. H+-ATPases pump H+ ions against their concentration gradient to maintain the pH differential that can be used for the transport of other ions and molecules. H+ ions are pumped out of the cytoplasm into the extracellular space or into the vacuole. The entry of Mg2+ into cells may occur through one of two pathways, via channels using the ΔΨ (negative inside) across this membrane or by symport with H+ ions. To transport the Mg2+ ion into the vacuole requires a Mg2+/H+ antiport transporter (such as AtMHX). The H+-ATPases are dependent on Mg2+ (bound to ATP) for activity, so that Mg2+ is required to maintain its own homeostasis. A schematic of a plant cell is shown including the four major compartments currently recognised as interacting with Mg2+. H+-ATPases maintain a constant ΔpH across the plasma membrane and the vacuole membrane. Mg2+ is transported into the vacuole using the energy of ΔpH (in A. thaliana by AtMHX). Transport of Mg2+ into cells may use either the negative ΔΨ or the ΔpH. The transport of Mg2+ into mitochondria probably uses ΔΨ as in the mitochondria of yeast, and it is likely that chloroplasts take Mg2+ by a similar system. The mechanism and the molecular basis for the release of Mg2+ from vacuoles and from the cell is not known. Likewise, the light-regulated Mg2+ concentration changes in chloroplasts are not fully understood, but do require the transport of H+ ions across the thylakoid membrane. Magnesium, chloroplasts and photosynthesis Mg2+ is the coordinating metal ion in the chlorophyll molecule, and in plants where the ion is in high supply about 6% of the total Mg2+ is bound to chlorophyll. Thylakoid stacking is stabilised by Mg2+ and is important for the efficiency of photosynthesis, allowing phase transitions to occur. Mg2+ is probably taken up into chloroplasts to the greatest extent during the light-induced development from proplastid to chloroplast or etioplast to chloroplast. At these times, the synthesis of chlorophyll and the biogenesis of the thylakoid membrane stacks absolutely require the divalent cation. Whether Mg2+ is able to move into and out of chloroplasts after this initial developmental phase has been the subject of several conflicting reports. Deshaies et al. (1984) found that Mg2+ did move in and out of isolated chloroplasts from young pea plants, but Gupta and Berkowitz (1989) were unable to reproduce the result using older spinach chloroplasts. Deshaies et al. had stated in their paper that older pea chloroplasts showed less significant changes in Mg2+ content than those used to form their conclusions. The relative proportion of immature chloroplasts present in the preparations may explain these observations. The metabolic state of the chloroplast changes considerably between night and day. During the day, the chloroplast is actively harvesting the energy of light and converting it into chemical energy. 
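As a rough, illustrative aside (not from the source article), the ΔΨ-driven channel pathway described above can be put into numbers with the standard electrochemical-equilibrium (Nernst) relation. The membrane potential used here is an assumed, typical value for a plant plasma membrane, chosen only for illustration:

```python
import math

# Physical constants
R = 8.314    # J mol^-1 K^-1
F = 96485    # C mol^-1
T = 298.15   # K
z = 2        # charge of Mg2+

# Assumed (typical, not measured) plant plasma-membrane potential, inside negative
delta_psi = -0.120  # volts

# At electrochemical equilibrium, a passive channel could sustain an
# internal/external concentration ratio of exp(-z*F*delta_psi / (R*T)).
ratio = math.exp(-z * F * delta_psi / (R * T))
print(f"Equilibrium [Mg2+]in/[Mg2+]out ≈ {ratio:.0f}")  # on the order of 10^4
```

The resulting ratio (on the order of 10^4) is only an equilibrium upper bound, but it illustrates why a channel alone, with no pumping, could in principle drive Mg2+ accumulation across a membrane with a negative ΔΨ, and why the low cytoplasmic free concentration described above must be maintained by export and vacuolar storage.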
The activation of the metabolic pathways involved comes from the changes in the chemical nature of the stroma on the addition of light. H+ is pumped out of the stroma (into both the cytoplasm and the lumen) leading to an alkaline pH. Mg2+ (along with K+) is released from the lumen into the stroma, in an electroneutralisation process to balance the flow of H+. Finally, thiol groups on enzymes are reduced by a change in the redox state of the stroma. Examples of enzymes activated in response to these changes are fructose 1,6-bisphosphatase, sedoheptulose bisphosphatase and ribulose-1,5-bisphosphate carboxylase. During the dark period, if these enzymes were active, a wasteful cycling of products and substrates would occur. Two major classes of the enzymes that interact with Mg2+ in the stroma during the light phase can be identified. Firstly, enzymes in the glycolytic pathway most often interact with two atoms of Mg2+. The first atom acts as an allosteric modulator of the enzymes' activity, while the second forms part of the active site and is directly involved in the catalytic reaction. The second class of enzymes includes those where the Mg2+ is complexed to nucleotide di- and tri-phosphates (ADP and ATP), and the chemical change involves phosphoryl transfer. Mg2+ may also serve in a structural maintenance role in these enzymes (e.g., enolase). Magnesium stress Plant stress responses can be observed in plants that are under- or over-supplied with Mg2+. The first observable sign of Mg2+ stress in plants, for both starvation and toxicity, is a depression of the rate of photosynthesis, presumably because of the strong relationships between Mg2+ and chloroplasts/chlorophyll. In pine trees, even before the visible appearance of yellowing and necrotic spots, the photosynthetic efficiency of the needles drops markedly. In Mg2+ deficiency, reported secondary effects include carbohydrate immobility, loss of RNA transcription and loss of protein synthesis. However, due to the mobility of Mg2+ within the plant, the deficiency phenotype may be present only in the older parts of the plant. For example, in Pinus radiata starved of Mg2+, one of the earliest identifying signs is chlorosis in the needles on the lower branches of the tree. This is because Mg2+ has been recovered from these tissues and moved to growing (green) needles higher in the tree. A Mg2+ deficit can be caused by the lack of the ion in the media (soil), but more commonly comes from inhibition of its uptake. Mg2+ binds quite weakly to the negatively charged groups in the root cell walls, so that excesses of other cations such as K+, NH4+, Ca2+, and Mn2+ can all impede uptake (Kurvits and Kirkby, 1980). In acid soils, Al3+ is a particularly strong inhibitor of Mg2+ uptake. The inhibition by Al3+ and Mn2+ is more severe than can be explained by simple displacement, hence it is possible that these ions bind to the Mg2+ uptake system directly. In bacteria and yeast, such binding by Mn2+ has already been observed. Stress responses in the plant develop as cellular processes halt due to a lack of Mg2+ (e.g. maintenance of ΔpH across the plasma and vacuole membranes). In Mg2+-starved plants under low light conditions, the percentage of Mg2+ bound to chlorophyll has been recorded at 50%. Presumably, this imbalance has detrimental effects on other cellular processes. Mg2+ toxicity stress is more difficult to develop. When Mg2+ is plentiful, in general the plants take up the ion and store it (Stelzer et al., 1990). 
However, if this is followed by drought, then ionic concentrations within the cell can increase dramatically. High cytoplasmic Mg2+ concentrations block a K+ channel in the inner envelope membrane of the chloroplast, in turn inhibiting the removal of H+ ions from the chloroplast stroma. This leads to an acidification of the stroma that inactivates key enzymes in carbon fixation, which in turn leads to the production of oxygen free radicals in the chloroplast that then cause oxidative damage. See also Biology and pharmacology of chemical elements Magnesium deficiency (agriculture) Notes References External links Magnesium Deficiency List of foods rich in Magnesium The Magnesium Website – includes full text papers and textbook chapters by leading magnesium authorities Mildred Seelig, Jean Durlach, Burton M. Altura and Bella T. Altura. Links to over 300 articles discussing magnesium and magnesium deficiency. Dietary Reference Intake Physiology Plant physiology Magnesium Biology and pharmacology of chemical elements Biological systems
Magnesium in biology
Chemistry,Biology
8,696
1,286,023
https://en.wikipedia.org/wiki/Fazlur%20Rahman%20Khan
Fazlur Rahman Khan (Fazlur Rôhman Khan; 3 April 1929 – 27 March 1982) was a Bangladeshi-American structural engineer and architect who initiated important structural systems for skyscrapers. Considered the "father of tubular designs" for high-rises, Khan was also a pioneer in computer-aided design (CAD). He was the designer of the Sears Tower, since renamed Willis Tower, the tallest building in the world from 1973 until 1998, and the 100-story John Hancock Center. A partner in the firm Skidmore, Owings & Merrill in Chicago, Khan, more than any other individual, ushered in a renaissance in skyscraper construction during the second half of the 20th century. He has been called the "Einstein of structural engineering" and the "Greatest Structural Engineer of the 20th Century" for his innovative use of structural systems that remain fundamental to modern skyscraper design and construction. In his honor, the Council on Tall Buildings and Urban Habitat established the Fazlur Khan Lifetime Achievement Medal as one of their CTBUH Skyscraper Awards. Although best known for skyscrapers, Khan was also an active designer of other kinds of structures, including the Hajj airport terminal, the McMath–Pierce solar telescope and several stadium structures. Family and background Fazlur Rahman Khan was born on 3 April 1929 to a Bengali Muslim family in Dhaka, Bengal Presidency (present-day Bangladesh). He came from, and was brought up in, the Khan Bari of Bhandarikandi in Madaripur, Faridpur District. His father, Khan Bahadur Abdur Rahman Khan, was a high school mathematics teacher and textbook author who eventually became the Director of Public Instruction in Bengal and, after retirement, served as the first Principal of Jagannath College. His mother, Khadijah Khatun, was the daughter of Abdul Basit Chowdhury, the Zamindar (aristocratic landowner) of Dulai in Pabna, who traced his ancestry to a migrant from Samarkand in Turkestan. Khan's paternal uncle, Abdul Hakim Khan, was the son-in-law of Syed Abdul Jabbar, a zamindar based in Comilla. Early life and education Khan attended Armanitola Government High School in Dhaka. After that, he studied civil engineering at Bengal Engineering and Science University, Shibpur (present-day Indian Institute of Engineering Science and Technology, Shibpur), Kolkata, India, and then received his Bachelor of Civil Engineering degree from Ahsanullah Engineering College (now Bangladesh University of Engineering and Technology). He received a Fulbright Scholarship and a government scholarship, which enabled him to travel to the United States in 1952. There he studied at the University of Illinois at Urbana–Champaign. In three years Khan earned two master's degrees – one in structural engineering and one in theoretical and applied mechanics – and a PhD in structural engineering with a thesis titled Analytical Study of Relations Among Various Design Criteria for Rectangular Prestressed Concrete Beams. His hometown in Dhaka did not have any buildings taller than three stories. He did not see his first skyscraper in person until he was 21 years old, and he had not stepped inside a mid-rise building until he moved to the United States for graduate school. Despite this, the environment of his hometown in Dhaka later influenced his tube building concept, which was inspired by the bamboo that sprouted around Dhaka. He found that a hollow tube, like the bamboo in Dhaka, lent a high-rise vertical durability. 
Career In 1955, employed by the architectural firm Skidmore, Owings & Merrill (SOM), he began working in Chicago. He was made a partner in 1966. He worked the rest of his life side by side with fellow architect Bruce Graham. Khan introduced design methods and concepts for efficient use of material in building architecture. His first building to employ the tube structure was the Chestnut De-Witt apartment building. During the 1960s and 1970s, he became noted for his designs for Chicago's 100-story John Hancock Center and 110-story Sears Tower, since renamed Willis Tower, the tallest building in the world from 1973 until 1998. He believed that engineers needed a broader perspective on life, saying, "The technical man must not be lost in his own technology; he must be able to appreciate life, and life is art, drama, music, and most importantly, people." Khan's personal papers, most of which were in his office at the time of his death, are held by the Ryerson & Burnham Libraries at the Art Institute of Chicago. The Fazlur Khan Collection includes manuscripts, sketches, audio cassette tapes, slides and other materials regarding his work. Personal life For enjoyment, Khan loved singing Rabindranath Tagore's poetic songs in Bengali. He and his wife, Liselotte, an immigrant from Austria, had one daughter who was born in 1960. In 1967, he elected to become a United States citizen. Khan was a Muslim at the time of his death. Innovations Khan discovered that the rigid steel frame structure that had long dominated tall building design was not the only system suitable for tall buildings, marking the start of a new era of skyscraper construction. Tube structural systems Khan's central innovation in skyscraper design and construction was the idea of the "tube" structural system for tall buildings, including the framed tube, trussed tube, and bundled tube variants. His "tube concept", using all the exterior wall perimeter structure of a building to simulate a thin-walled tube, revolutionized tall building design. Most buildings over 40 stories constructed since the 1960s now use a tube design derived from Khan's structural engineering principles. Lateral loads (horizontal forces) such as wind and seismic forces begin to dominate the structural system and take on increasing importance in the overall building system as the building height increases. Wind forces become very substantial, and forces caused by earthquakes are important as well. The tubular designs resist such forces for tall buildings. Tube structures are stiff and have significant advantages over other framing systems. They not only make the buildings structurally stronger and more efficient, but also significantly reduce the structural material requirements. The reduction of material makes the buildings economically more efficient and reduces environmental impact. The tubular designs enable buildings to reach even greater heights. Tubular systems allow greater interior space and further enable buildings to take on various shapes, offering added freedom to architects. These new designs opened an economic door for contractors, engineers, architects, and investors, providing vast amounts of real estate space on minimal plots of land. Khan was among a group of engineers who encouraged a rebirth in skyscraper construction after a hiatus of over thirty years. The tubular systems have yet to reach their limit when it comes to height. 
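To make the point about lateral loads more concrete, here is a minimal sketch (not one of Khan's own calculations) using the simplest textbook idealisation: treat the building as a vertical cantilever under a uniform wind pressure, so the base overturning moment grows with the square of the height. The wind pressure and building width below are assumed round numbers, chosen only for illustration:

```python
# Idealised cantilever model: uniform wind load on a building facade.
# Assumed round numbers, for illustration only (not actual design values).
wind_pressure = 1.5e3   # N/m^2, assumed design wind pressure
width = 50.0            # m, assumed building width facing the wind

def base_moment(height_m: float) -> float:
    """Overturning moment at the base of a uniformly loaded cantilever.

    Line load w = pressure * width; base moment = w * H^2 / 2.
    """
    w = wind_pressure * width        # N per metre of height
    return w * height_m ** 2 / 2.0   # N*m

for h in (100, 200, 400):
    print(f"H = {h:3d} m  ->  base moment ≈ {base_moment(h):.2e} N·m")
# Doubling the height quadruples the base overturning moment, which is why
# lateral (wind/seismic) demands, rather than gravity, come to govern
# tall-building design as height increases.
```

Real wind loading varies with height and gust response, so this is only a scaling argument, but it matches the qualitative claim above that lateral forces dominate as buildings grow taller.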
Another important feature of the tubular systems is that buildings can be constructed using steel or reinforced concrete, or a composite of the two, to reach greater heights. Khan pioneered the use of lightweight concrete for high-rise buildings, at a time when reinforced concrete was used mostly for low-rise construction of only a few stories in height. Most of Khan's designs were conceived considering pre-fabrication and repetition of components so projects could be quickly built with minimal errors. The population explosion, starting with the baby boom of the 1950s, created widespread concern about the amount of available living space, which Khan solved by building upward. More than any other 20th-century engineer, Fazlur Rahman Khan made it possible for people to live and work in "cities in the sky". Mark Sarkisian (Director of Structural and Seismic Engineering at Skidmore, Owings & Merrill) said, "Khan was a visionary who transformed skyscrapers into sky cities while staying firmly grounded in the fundamentals of engineering." Framed tube From 1963 onward, the new structural system of framed tubes became highly influential in skyscraper design and construction. Khan defined the framed tube structure as "a three dimensional space structure composed of three, four, or possibly more frames, braced frames, or shear walls, joined at or near their edges to form a vertical tube-like structural system capable of resisting lateral forces in any direction by cantilevering from the foundation." Closely spaced interconnected exterior columns form the tube. Horizontal loads, for example from wind and earthquakes, are supported by the structure as a whole. About half the exterior surface is available for windows. Framed tubes allow fewer interior columns, and so create more usable floor space. The bundled tube structure is more efficient for tall buildings, lessening the penalty for height. The structural system also allows the interior columns to be smaller and the core of the building to be free of braced frames or shear walls that use valuable floor space. Where larger openings like garage doors are required, the tube frame must be interrupted, with transfer girders used to maintain structural integrity. The first building to apply tube-frame construction was the DeWitt-Chestnut Apartment Building (since renamed Plaza on DeWitt), which Bruce Graham designed and Khan engineered; it was completed in Chicago in 1963. This laid the foundations for the framed tube structure used in the construction of the World Trade Center. Trussed tube and X-bracing Khan pioneered several other variants of the tube structure design. One of these was the concept of applying X-bracing to the exterior of the tube to form a trussed tube. X-bracing reduces the lateral load on a building by transferring the load into the exterior columns, and the reduced need for interior columns provides a greater usable floor space. Khan first employed exterior X-bracing on his engineering of the John Hancock Center in 1965, and this can be clearly seen on the building's exterior, making it an architectural icon. In contrast to earlier steel frame structures, such as the Empire State Building (1931), which required about 206 kilograms of steel per square meter, and One Chase Manhattan Plaza (1961), which required around 275 kilograms of steel per square meter, the John Hancock Center was far more efficient, requiring only 145 kilograms of steel per square meter. 
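Taking the steel quantities just quoted at face value, a quick back-of-the-envelope comparison (not in the source) shows the scale of the saving the trussed tube achieved:

```python
# Steel usage per square metre of floor area, as quoted in the article (kg/m^2).
steel_per_m2 = {
    "Empire State Building (1931)": 206,
    "One Chase Manhattan Plaza (1961)": 275,
    "John Hancock Center": 145,
}

hancock = steel_per_m2["John Hancock Center"]
for name, kg in steel_per_m2.items():
    if name == "John Hancock Center":
        continue
    saving = 100 * (kg - hancock) / kg
    print(f"vs {name}: about {saving:.0f}% less steel per square metre")
# Roughly a 30% saving relative to the Empire State Building and about
# 47% relative to One Chase Manhattan Plaza.
```

This is only a relative comparison of the per-area figures given above; it says nothing about total tonnage, which also depends on floor area and height.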
The trussed tube concept was applied to many later skyscrapers, including the Onterie Center, Citigroup Center and Bank of China Tower. Bundled tube One of Khan's most important variants of the tube structure concept was the bundled tube, which was used for the Willis Tower and One Magnificent Mile. The bundled tube design was not only the most efficient in economic terms, but it was also "innovative in its potential for versatile formulation of architectural space. Efficient towers no longer had to be box-like; the tube-units could take on various shapes and could be bundled together in different sorts of groupings." Tube in tube The tube-in-tube system takes advantage of core shear wall tubes in addition to exterior tubes. The inner tube and outer tube work together to resist gravity loads and lateral loads and to provide additional rigidity to the structure to prevent significant deflections at the top. This design was first used in One Shell Plaza. Later buildings to use this structural system include the Petronas Towers. Outrigger and belt truss The outrigger and belt truss system is a lateral load resisting system in which the tube structure is connected to the central core wall with very stiff outriggers and belt trusses at one or more levels. BHP House was the first building to use this structural system, followed by the First Wisconsin Center, since renamed U.S. Bank Center, in Milwaukee. The center rises 601 feet, with three belt trusses at the bottom, middle and top of the building. The exposed belt trusses serve aesthetic and structural purposes. Later buildings to use this system include the Shanghai World Financial Center. Concrete tube structures The last major buildings engineered by Khan were the One Magnificent Mile and Onterie Center in Chicago, which employed his bundled tube and trussed tube system designs respectively. In contrast to his earlier buildings, which were mainly steel, his last two buildings were concrete. His earlier DeWitt-Chestnut Apartments building, built in 1963 in Chicago, was also a concrete building with a tube structure. Trump Tower in New York City is another example that adapted this system. Shear wall frame interaction system Khan developed the shear wall frame interaction system for mid- to high-rise buildings. This structural system uses combinations of shear walls and frames designed to resist lateral forces. The first building to use this structural system was the 35-story Brunswick Building. The Brunswick Building was completed in 1965 and became the tallest reinforced concrete structure of its time. The structural system of the Brunswick Building consists of a concrete shear wall core surrounded by an outer concrete frame of columns and spandrels. Apartment buildings up to 70 stories high have successfully used this concept. Legacy Khan's seminal work on developing tall building structural systems is still used today as the starting point when considering design options for tall buildings. Tube structures have since been used in many skyscrapers, including the construction of the World Trade Center, Aon Center, Petronas Towers, Jin Mao Building, Bank of China Tower and most other buildings in excess of 40 stories constructed since the 1960s. The strong influence of tube structure design is also evident in the world's current tallest skyscraper, the Burj Khalifa in Dubai. 
According to Stephen Bayley of The Daily Telegraph: Life cycle civil engineering Khan and Mark Fintel conceived ideas of shock absorbing soft-stories, for protecting structures from abnormal loading, particularly strong earthquakes, over a long period of time. This concept was a precursor to modern seismic isolation systems. The structures are designed to behave naturally during earthquakes where traditional concepts of material ductility are replaced by mechanisms that allow for movement during ground shaking while protecting material elasticity. The IALCCE established the Fazlur R. Khan Life-Cycle Civil Engineering Medal. Other architectural work Khan designed several notable structures that are not skyscrapers. Examples include the Hajj terminal of King Abdulaziz International Airport, completed in 1981, which consists of tent-like roofs that are folded up when not in use. The project received several awards, including the Aga Khan Award for Architecture, which described it as an "outstanding contribution to architecture for Muslims". The tent-like tensile structures advanced the theory and technology of fabric as a structural material and led the way to its use for other types of terminals and large spaces. Khan also designed the King Abdulaziz University, the United States Air Force Academy in Colorado Springs and the Hubert H. Humphrey Metrodome in Minneapolis. With Bruce Graham, Khan developed a cable-stayed roof system for the Baxter Travenol Laboratories in Deerfield, Illinois. Computers for structural engineering and architecture In the 1970s, engineers were just starting to use computer structural analysis on a large scale. SOM was at the center of these new developments, with undeniable contributions from Khan. Graham and Khan lobbied SOM partners to purchase a mainframe computer, a risky investment at a time, when new technologies were just starting to form. The partners agreed, and Khan began programming the system to calculate structural engineering equations, and later, to develop architectural drawings. Professional milestones List of buildings Buildings on which Khan was structural engineer include: McMath–Pierce solar telescope, Kitt Peak National Observatory, Arizona, 1962 DeWitt-Chestnut Apartments, Chicago, 1963 Brunswick Building, Chicago, 1965 John Hancock Center, Chicago, 1965–1969 One Shell Square, New Orleans, Louisiana, 1972 140 William Street (formerly BHP House), Melbourne, 1972 Sears Tower, renamed Willis Tower, Chicago, 1970–1973 First Wisconsin Center, renamed U.S. Bank Center, Milwaukee, 1973 Hajj Terminal, King Abdulaziz International Airport, Jeddah, 1974–1980 King Abdulaziz University, Jeddah, 1977–1978 Hubert H. 
Humphrey Metrodome, Minneapolis, Minnesota, 1982 One Magnificent Mile, Chicago, completed 1983 Onterie Center, Chicago, completed 1986 United States Air Force Academy, Colorado Springs, Colorado Awards and chair Among Khan's other accomplishments, he received the Wason Medal (1971) and Alfred Lindau Award (1973) from the American Concrete Institute (ACI); the Thomas Middlebrooks Award (1972) and the Ernest Howard Award (1977) from ASCE; the Kimbrough Medal (1973) from the American Institute of Steel Construction; the Oscar Faber medal (1973) from the Institution of Structural Engineers, London; the International Award of Merit in Structural Engineering (1983) from the International Association for Bridge and Structural Engineering IABSE; the AIA Institute Honor for Distinguished Achievement (1983) from the American Institute of Architects; and the John Parmer Award (1987) from Structural Engineers Association of Illinois and Illinois Engineering Hall of Fame from Illinois Engineering Council (2006). Khan was cited five times by Engineering News-Record as among those who served the best interests of the construction industry, and in 1972 he was honored with ENR Man of the Year award. In 1973 he was elected to the National Academy of Engineering. He received honorary doctorates from Northwestern University, Lehigh University, and the Swiss Federal Institute of Technology Zürich (ETH Zurich). The Council on Tall Buildings and Urban Habitat named one of their CTBUH Skyscraper Awards the Fazlur Khan Lifetime Achievement Medal after him, and other awards have been established in his honor, along with a chair at Lehigh University. Promoting educational activities and research, the Fazlur Rahman Khan Endowed Chair of Structural Engineering and Architecture honors Khan's legacy of engineering advancement and architectural sensibility. Dan Frangopol is the first holder of the chair. Khan was mentioned by President Obama in 2009 in his speech in Cairo, Egypt when he cited the achievements of America's Muslim citizens. Khan was the subject of the Google Doodle on 3 April 2017, marking what would have been his 88th birthday. Documentary film In 2021, director Laila Kazmi began production on a feature-length documentary film to be called Reaching New Heights: Fazlur Rahman Khan and the Skyscraper on the life and legacy of Khan. The film is produced by Kazmi's production company Kazbar Media, with development support from ITVS, which provides co-production support to independent documentaries on PBS. The film is helmed by director and producer Laila Kazmi, with associate producer Arnila Guha, and New York-based art director Begoña Lopez. It is fiscally sponsored by Film Independent. Charity In 1971 the Bangladesh Liberation War broke out. Khan was heavily involved with creating public opinion and garnering emergency funding for Bengali people during the war. He created the Chicago-based Bangladesh Emergency Welfare Appeal organization. Death Khan died of a heart attack on 27 March 1982 while on a trip in Jeddah, Saudi Arabia, at the age of 52, at which time he was a general partner in SOM. His body was returned to the United States and was buried in Graceland Cemetery in Chicago. See also Chicago school Engineering Legends, a 2005 book List of Bangladeshi architects References Further reading Ali, Mir M. (2001). Art of the Skyscraper: The Genius of Fazlur Khan. 
Rizzoli International Publications, Inc., New York, NY, External links Fazlur Rahman Khan Collection in the South Asian American Digital Archive (SAADA) Fazlur Rahman Khan Documentary Project Fazlur Khan Lifetime Achievement Medal Letter from Bill Clinton Exhibition at Princeton University 1929 births 1982 deaths 20th-century American architects Muslims from Illinois American people of Bangladeshi descent Bangladesh University of Engineering and Technology alumni 20th-century Bengalis Burials at Graceland Cemetery (Chicago) Bangladeshi civil engineers Pakistani emigrants to the United States Recipients of the Independence Day Award Structural engineers University of Dhaka alumni Grainger College of Engineering alumni 20th-century American engineers Skidmore, Owings & Merrill people People from Madaripur District Bangladeshi people of Central Asian descent
Fazlur Rahman Khan
Engineering
4,009
35,217,524
https://en.wikipedia.org/wiki/%CE%91-Vetivone
α-Vetivone is an organic compound that is classified as a sesquiterpene (derived from three isoprene units). It is a major component of the oil of vetiver, which is used to prepare certain high-value perfumes. α-Vetivone is isolated by steam distillation of the roots of the grass Vetiveria zizanioides. Two other components of this distillate are the sesquiterpenes khusimol and β-vetivone. References Perfume ingredients Ketones Sesquiterpenes Bicyclic compounds
Α-Vetivone
Chemistry
127
35,219,521
https://en.wikipedia.org/wiki/Corrado%20de%20Concini
Corrado de Concini (born 28 July 1949, in Rome) is an Italian mathematician and professor at the Sapienza University of Rome. He studies algebraic geometry, quantum groups, invariant theory, and mathematical physics. Life and work He was born in Rome in 1949, the son of Ennio de Concini, a noted screenwriter and film director. Corrado de Concini received his mathematics degree from the Sapienza University of Rome in 1971 and a Ph.D. from the University of Warwick in 1975 under the supervision of George Lusztig, with a thesis titled The mod-2 cohomology of the orthogonal groups over a finite field. In 1975 he was a lecturer (Professore Incaricato) at the University of Salerno, and in 1976 he became associate professor at the University of Pisa. In 1981 he went to the University of Rome, where in 1983 he became a professor of higher algebra. From 1988 to 1996 he was professor at the Scuola Normale Superiore in Pisa, and from 1996 to 2019 professor at the Sapienza University of Rome. Since 2020 he has been emeritus professor at the Sapienza University of Rome. De Concini was also a visiting scientist at Brandeis University, the Mittag-Leffler Institute (1981), the Tata Institute of Fundamental Research (1982), Harvard University (1987), the Massachusetts Institute of Technology (1989), the University of Paris VI, the Institut des Hautes Études Scientifiques (1992, 1996), the École Normale Supérieure (2004, Lagrange Michelet Chair), and the Mathematical Sciences Research Institute (2000, 2002). From 2003 to 2007 he was president of the Istituto Nazionale di Alta Matematica Francesco Severi. In 1986 he was an invited speaker at the International Congress of Mathematicians in Berkeley (Equivariant embeddings of homogeneous spaces). In 1992, he gave a plenary lecture at the first European Congress of Mathematicians in Paris (Representations of quantum groups at roots of 1). In 1986 he was awarded the Caccioppoli Prize. He has been a corresponding member of the Accademia dei Lincei since 1993 and a full member since 2009, and a corresponding member of the Istituto Lombardo since 2005. Since 2021, de Concini has been president of the Accademia delle Scienze detta dei XL (whose gold medal he won in 1990). Writings With Claudio Procesi: Topics in Hyperplane Arrangements, Polytopes and Box-Splines, Springer, 2010. With Claudio Procesi: Quantum groups, in: D-modules, representation theory, and quantum groups (Venice, 1992), 31–140, Lecture Notes in Math., vol. 1565, Springer, Berlin, 1993. See also Wonderful compactification References External links 1949 births Living people Scientists from Rome 20th-century Italian mathematicians 21st-century Italian mathematicians Group theorists Algebraic geometers Topologists Sapienza University of Rome alumni Alumni of the University of Warwick Academic staff of the University of Salerno Academic staff of the University of Pisa Academic staff of the Scuola Normale Superiore di Pisa Academic staff of the Sapienza University of Rome
Corrado de Concini
Mathematics
658
11,552,241
https://en.wikipedia.org/wiki/Wallace%20D.%20Hayes
Wallace Dean Hayes (September 4, 1918 – March 2, 2001) was a professor of mechanical and aerospace engineering at Princeton University and one of the world's leading theoretical aerodynamicists, whose numerous and fundamental contributions to the theories of supersonic and hypersonic flow and wave motion strongly influenced the design of aircraft at supersonic speeds and missiles at hypersonic speeds. This greatly enhanced the development of supersonic flight and supersonic aircraft design. In a series of publications beginning in 1947 with his Ph.D. thesis under Theodore von Kármán at the California Institute of Technology, he developed a theory of supersonic flow called the area rule which strongly influenced the design of high-speed aircraft. His work also provided the first understanding of the behavior of delta wing aircraft flying just above the speed of sound. He followed his work in supersonic flow with groundbreaking studies in the late 1940s and early 1950s in hypersonic flow, which is considered to begin at about five times the speed of sound, or Mach 5. He developed the Hayes similitude principle, which enabled designers to take the results of one series of tests or calculations and apply them to the design of an entire family of similar configurations where neither tests nor detailed calculations are available. Many of his developments appeared in his book Hypersonic Flow Theory, co-written with Ronald Probstein and first published in 1959. He made important contributions to the understanding of sonic booms and served on numerous NASA advisory committees on the subject. Hayes was born in Beijing, China and educated in California, where he received his B.S. in physics in 1941 and his Ph.D. in physics, magna cum laude, in 1947 from the California Institute of Technology. His work in the aircraft industry began in 1939 with Consolidated Aircraft and continued during World War II as an aerodynamicist with North American Aviation. From 1952 to 1954 he was scientific liaison officer with the Office of Naval Research in London. In 1954, he came to Princeton University, where he taught until 1989. He also taught at the California Institute of Technology, Brown University, Delft Technical University, and the University of New Mexico at Holloman Air Force Base. He was elected to the National Academy of Engineering, the American Academy of Arts and Sciences, the American Physical Society (Fellow, 1986) and the American Institute of Aeronautics and Astronautics, which honored him in 1965 with its Research Award. Hayes had been an active member of the Sierra Club since 1942 and was an avid outdoor sports enthusiast who enjoyed rock-climbing, hiking, water sports, and skiing. He was also a glider and small airplane flight instructor. He died on March 2, 2001, in Hightstown, New Jersey at age 82. Select publications Physics of Shock Waves (1967) Inviscid Flows (1967) Inviscid Flows (1966) Hypersonic Flow Theory (1966) Hypersonic Flow Theory (1959) Gasdynamic Discontinuities (1960) Linearized Supersonic Flows (1947) On Supersonic Similitude (1947) Linear Supersonic Flow (1947) Gasdynamic Discontinuities (1947) Notes External links Dr. Wallace D. Hayes, National Academy of Engineering Wallace Hayes, Pioneer of Supersonic Flight, Princeton University obituary Wallace Hayes, 82, Aeronautics Expert, Dies, The New York Times obituary Wallace D. Hayes Memorial Tributes: National Academy of Engineering, Volume 1, pp. 151-156. 
1918 births 2001 deaths Princeton University faculty California Institute of Technology alumni Aerodynamicists Fluid dynamicists Fellows of the American Physical Society Academic staff of the Delft University of Technology Brown University faculty University of New Mexico faculty
Wallace D. Hayes
Chemistry
728
7,338,545
https://en.wikipedia.org/wiki/Quincha
Quincha is a traditional construction system that uses, fundamentally, wood and cane or giant reed to form an earthquake-resistant framework that is covered in mud and plaster. History Quincha is a Spanish term widely known in Latin America, borrowed from Quechua qincha (kincha in Kichwa). Even though Spanish and Portuguese are closely related languages, in this case the Portuguese equivalent is completely different: Pau-a-pique. Historically, quincha has been utilized in the Spanish and Portuguese colonies throughout the different regions of the Americas. The construction technology is said to have existed for at least 8,000 years. In Peru, it is a popular construction design in the coastal regions. It was also adopted in urban centers after earthquakes, as in the rebuilding of the city of Trujillo after the 1759 earthquake. Construction The framework or wattle is a main feature of traditional quincha. It is constructed by interweaving pieces of wood, cane, or bamboo and is covered with a mixture of mud and straw (or daub). It is then covered on both sides with a thin lime plaster finish, which serves as a sort of wall or ceiling panel. Quincha is known for its flexibility since it can be shaped into different designs. For example, the builders of the church at San Jose at Ingenio, Nazca modified quincha to construct its ornate twin-towered facade. Its resistance to earthquakes is attributed to the combination of heavy mass (used for thermal insulation) and a timber-frame structure. The lattice design of its framework also provides the quincha building with stability, allowing it to shake during an earthquake without damage. A modern iteration of quincha is called quincha metallica, a method developed by the Chilean architect Marcelo Cortés. In this system, steel and welded wire mesh are used instead of bamboo or cane to create the matrix that holds the mud, which is also improved through the addition of lime to control the clay's expansion and improve water impermeability. See also Wattle and daub References Soil-based building materials
Quincha
Engineering
435
1,962,530
https://en.wikipedia.org/wiki/Holy%20anointing%20oil
In the ancient Israelite religion, the holy anointing oil () formed an integral part of the ordination of the priesthood and the High Priest as well as in the consecration of the articles of the Tabernacle (Exodus 30:26) and subsequent temples in Jerusalem. The primary purpose of anointing with the holy anointing oil was to sanctify, to set the anointed person or object apart as , or "holy" (Exodus 30:29). Originally, the oil was used exclusively for the priests and the Tabernacle articles, but its use was later extended to include kings (1 Samuel 10:1). It was forbidden to be used on an outsider (Exodus 30:33) or to be used on the body of any common person (Exodus 30:32a) and the Israelites were forbidden to duplicate any like it for themselves (Exodus 30:32b). Some segments of Christianity have continued the practice of using holy anointing oil as a devotional practice, as well as in various liturgies. A variant form, known as oil of Abramelin, is used in Ecclesia Gnostica Catholica, the ecclesiastical arm of Ordo Templi Orientis (O.T.O.), an international fraternal initiatory organization devoted to promulgating the Law of Thelema. A number of religious groups have traditions of continuity of the holy anointing oil, with part of the original oil prepared by Moses remaining to this day. These groups include rabbinical Judaism, the Armenian Church, the Assyrian Church of the East, The Church of Jesus Christ of Latter-day Saints, the Coptic Church, the Saint Thomas Nazrani churches, and others. Biblical recipe The holy anointing oil described in Exodus 30:22–25 was created from: Pure myrrh () 500 shekels (about ) Sweet cinnamon () 250 shekels (about ) "Fragrant cane" (, sometimes translated as calamus) 250 shekels (about ) Cassia () 500 shekels (about ) Olive oil () one (about , or ) Identification of While sources agree about the identity of four of the five ingredients of anointing oil, the identity of the fifth, , has been a matter of debate. The Bible indicates that it was an aromatic cane or grass, which was imported from a distant land by way of the spice routes, and that a related plant grows in Israel (kaneh bosem is referenced as a cultivated plant in the Song of Songs 4:14. Several different plants have been named as possibly being the . Acorus calamus Most lexicographers, botanists, and biblical commentators translate as "cane balsam". The Aramaic Targum Onkelos renders the Hebrew in Aramaic as . Ancient translations and sources identify this with the plant variously referred to as sweet cane, or sweet flag (the Septuagint, the Rambam on Kerithoth 1:1, Saadia Gaon and Jonah ibn Janah). This plant is known to botanists as Acorus calamus. According to Aryeh Kaplan in The Living Torah, "It appears that a similar species grew in the Holy Land, in the Hula region in ancient times (Theophrastus, History of Plants 9:7)." Cymbopogon Maimonides, in contrast, indicates that it was the Indian plant, rosha grass (Cymbopogon martinii), which resembles red straw. Many standard reference works on Bible plants by Michael Zohary (University of Jerusalem, Cambridge, 1985), James A. Duke (2010), and Hans Arne Jensen (Danish 2004, English translation 2012) support this conclusion, arguing that the plant was a variety of Cymbopogon. James A. Duke, quoting Zohary, notes that it is "hopeless to speculate" about the exact species, but that Cymbopogon citratus (Indian lemon-grass) and Cymbopogon schoenanthus are also possibilities. 
Kaplan follows Maimonides in identifying it as the Cymbopogon martinii or palmarosa plant. Cannabis Sula Benet, in Early Diffusion and Folk Uses of Hemp (1967), identified it as cannabis. Rabbi Aryeh Kaplan notes that "On the basis of cognate pronunciation and Septuagint readings, some identify Keneh bosem with the English and Greek cannabis, the hemp plant." Benet argued that equating Keneh Bosem with sweet cane could be traced to a mistranslation in the Septuagint, which mistook Keneh Bosem, later referred to as "cannabos" in the Talmud, as "kalabos", a common Egyptian marsh cane plant. In Judaism In the ancient Near East Customs varied in the cultures of the Middle East. However, anointing with special oil in Israel was either a strictly priestly or kingly right. When a prophet was anointed, it was because he was first a priest. When a non-king was anointed, such as Elijah's anointing of Hazael and Jehu, it was a sign that Hazael was to become king of Aram (Syria) and Jehu was to become king of Israel. Extra-biblical sources show that it was common to anoint kings in many ancient Near Eastern monarchies. Therefore, in Israel, anointing was not only a sacred act but also a socio-political one. In the Hebrew Bible, bad smells appear as indications of the presence of disease, decay, rotting processes and death (Exodus 7:18), while pleasant aromas suggest places that were biologically clean and conducive to habitation and/or food production and harvesting. Spices and oils were chosen which assisted mankind in orienting themselves and in creating a sense of safety as well as a sense of elevation above the physical world of decay. The sense of smell was also considered highly esteemed by deity. In Deuteronomy 4:28 and Psalms 115:5–6, the sense of smell is included in connection with the polemics against idols. In the Hebrew Bible God takes pleasure in inhaling the "soothing odor" () of offerings (Genesis 8:21; the phrase is also seen in other verses). To the ancient Israelite there was no oil or fat with more symbolic meaning than olive oil. It was used as an emollient, a fuel for lighting lamps, for nutrition, and for many other purposes. It was scented olive oil that was chosen to be a holy anointing oil for the Israelites. In Rabbinic Judaism The Talmud asserts that the original anointing oil prepared by Moses remained miraculously intact and was used by future generations without replacement, including in the future Third Temple when it is rebuilt. This suggests that, following ancient customs, new oil was added to the old thus continuing the original oil for all time. In Christianity Anointing oil is used in Christian communities for various reasons. Anointing of the sick is prescribed in this passage in the New Testament: The epithet "Christ" as a title for Jesus refers to "the anointed one". In the Armenian Church The holy anointing oil of the Armenian Church is called the holy muron ('muron' means myrrh). The church holds a special reverence for the continuity factor of the oil. According to tradition, a portion of the holy anointing oil of Exodus 30, which Moses and Aaron had blessed, still remained in Jesus' time. Jesus Christ blessed this oil and then gave some of it to Thaddeus, who took the holy oil to Armenia and healed King Abkar of a terrible skin disease by anointing him with the holy oil. Thaddeus is said to have buried a bottle of the holy anointing oil in Daron under an evergreen tree. 
Gregory the Illuminator discovered the hidden treasure and mixed it with muron that he had blessed. It is said that "To this day, whenever a new batch of muron is prepared and blessed, a few drops of the old one go into it, so that the Armenian muron always contains a small amount of the original oil blessed by Moses, Jesus Christ, and Gregory the Illuminator." The holy muron is composed of olive oil and 48 aromas and flowers. The remaining portion of the previous blessed holy oil is poured into the newly prepared oil during the blessing ceremony and passes the blessing from generation to generation. It is said that this procedure has been followed for nearly 1700 years. The Catholicos of all Armenians in Etchmiadzin combines a new mixture of holy muron in the cauldron every seven years using a portion of the holy muron from the previous blend. This is distributed to all of the Armenian churches throughout the world. Before Christianity, muron was reserved solely for the enthroning of royalty and for very special events. In later years, it was used with extreme unction and to heal the sick, and to anoint ordained clergy. In the Assyrian Church of the East It is said by the Assyrian Church that the holy anointing oil "was given and handed down to us by our holy fathers Mar Addai and Mar Mari and Mar Tuma." The holy anointing oil of the Assyrian Church is variously referred to as the Oil of the Holy Horn, the Oil of the Qarna, or the Oil of Unction. This holy oil is an apostolic tradition, believed to have originated from the oil consecrated by the apostles themselves, and which by succession has been handed down in the Church into the modern day. The original oil which the disciples blessed began to run low and more oil was added to it. The Assyrian Church believes that this has continued to this very day with new oil being added as the oil level lowers. This succession of holy oil is believed to be a continuity of the blessings placed upon the oil from the beginning. Both the Oil of Unction and the Holy Leaven are referred to as "leaven", although there is no actual leavening agent present in the oil. Yohanan bar Abgareh referred to the oil in 905, as did Shlemon d-Basra in the 13th century. Yohanan bar Zo'bee in the 14th century integrated the Holy Oil of unction with baptism and other rites. Isaaq Eshbadhnaya in the 15th century wrote the Scholion which is a commentary on specific theological topics, stating that John the Baptist gave John the Evangelist a baptismal vessel of water from Christ's baptism, which was collected by John the Baptist from water dripping from Christ after his baptism in Jordan River. Jesus gave each disciple a "loaf," at the Last Supper, but the Scholion states that to John he gave two loaves, with the instructions to eat only one and to save the other. At the crucifixion, John collected the water from Jesus's side in the vessel and the blood he collected on the loaf from the Last Supper. After the descent of the Holy Spirit on Pentecost the disciples took the vessel and mixed it with oil and each took a horn of it. The loaf they ground up and added flour and salt to it. Each took a portion of the holy oil and the holy bread which were distributed in every land by the hand of those who missionized there. The Assyrian Church has two types of holy oils; the one is ordinary olive oil, blessed or not blessed, the other is the oil of the Holy Horn which is believed to have been handed down from the apostles. 
The Holy Horn is constantly renewed by the addition of oil blessed by a bishop on Maundy Thursday. While almost anyone can by tradition be anointed with the regular oil, the oil of the Holy Horn is restricted for ordination and sanctification purposes. In the Coptic Church The holy anointing oil of the Coptic Church is referred to as the holy myron ('myron' means myrrh). The laying on of hands for the dwelling of the Holy Spirit is believed to have been a specific rite of the apostles and their successors the bishops, and as the regions of mission increased, consequently numbers of Christian believers and converts increased. It was not possible for the apostles to wander through all the countries and cities to lay hands on all of those baptized, so they established anointment by the holy myron as an alternative, it is believed, for the laying on of the hands for the Holy Spirit's indwelling. The first who made the myron were the apostles who had kept the fragrant oils which were on the body of Jesus Christ during his burial, and they added the spices which were brought by those women who prepared them to anoint Christ, but had discovered he had been resurrected. They melted all these spices in pure olive oil, prayed on it in the upper room in Zion, and made it a holy anointing oil. They decided that their successors, the bishops, must renew the making of the myron whenever it is nearly used up, by incorporating the original oil with the new. Today the Coptic Church uses it for ordination, in the sanctification of baptismal water, and in the consecration of churches and church altars and vessels. It is said that when Mark the Evangelist went to Alexandria, he took with him some of the holy myron oil made by the apostles and that he used it in the sacrament of Chrism, as did the patriarchs who succeeded him. This continued until the era of Athanasius the Apostolic, the 20th patriarch, who then decided to remake the myron in Alexandria. Hence, it is reported, he prepared all of the needed perfumes and spices, with pure olive oil, from which God ordered Moses to make the holy anointing oil as specified in the recipe in the thirtieth chapter of the book of Exodus. Then the sanctification of the holy myron was fulfilled in Alexandria, and Athanasius was entrusted with the holy oil, which contained spices which touched Jesus's body while it was in the tomb, as well as the original oil which had been prepared by the apostles and brought to Egypt by Mark. He distributed the oil to the churches abroad: to the See of Rome, Antioch and Constantinople, together with a document of its authenticity, and all of the patriarchs are said to have rejoiced in receiving it. The Coptic Church informs that the fathers of the Church and scholars like Justin Martyr, Tertullian, Hippolytus, Origen, Ambrose, and Cyril of Jerusalem, spoke about the holy myron and how they received its use in anointing by tradition. For example, Hippolytus, in his Apostolic Tradition, speaks of the holy oil "according to ancient custom" Origen writes about the holy oil "according to the tradition of the church" Cyril of Jerusalem goes into further detail in speaking about the grace of the Holy Spirit in the holy myron: "this oil is not just any oil: after the epiclesis of the Spirit, it becomes charism of Christ and power of the Holy Spirit through the presence of the deity". 
The early fathers and scholars mention the use of the holy myron, as well as a documentation by Abu'l-Barakat Ibn Kabar, a 14th-century Coptic priest and scholar, in his book Misbah az-Zulmah fi idah al-khidmah (The Lamp of Darkness in Clarifying the Service). According to his account, the holy apostles took from the spices that were used to anoint the body of Jesus Christ when he was buried, added pure olive oil to it, and prayed over it in Upper Zion, the first church where the Holy Spirit fell in the upper room. This holy oil was then distributed among all of the apostles so that wherever they preached, new converts would be anointed with it as a seal. They also commanded that whenever a new batch of Holy Myron was made, they add to it the old holy myron to keep the first holy myron continually with all that would ever be made afterwards. According to the available resources, the holy myron in the Church of Egypt has been made 34 times. Among the Saint Thomas Christians and Nasranis According to tradition, Thomas the Apostle laid the original foundation for Christianity in India. It is reported that Jewish communities already present in India enticed Thomas to make his missionary journey there. It is said that he brought holy anointing oil with him and that the St. Thomas Christians still have this oil to this day. Patriarch Ya'qub, of the Syrian Malabar Nasrani Church, is remembered for his celebration of the liturgy and his humble encouragement to accept the simple way of life. After he consecrated sacred myron in the Mor Gabriel monastery in 1964, holy myron flowed from the glass container the following day and many people were said to have been healed by it. In the Baptist, Methodist and Pentecostal churches In many evangelical denominations, such as those of the Baptist, Methodist and Pentecostal traditions, holy anointing oil is often used in the anointing of the sick and in deliverance ministry. It is additionally used "anoint babies as a sign of blessing and protection for the new life ahead" and to "anoint clergy as they begin a new assignment in ministry". Bottles of holy anointing oil are often sold at Christian religious goods stores, being purchased by both clergy and laity for use in prayer or house blessings. In Mandaeism In Mandaeism, anointing sesame oil, called () in Mandaic, is used during rituals such as the (baptism) and (death mass), both of which are performed by Mandaean priests. In Western esotericism and Thelema Abramelin oil Abramelin oil, also called oil of Abramelin, is an anointing oil used in Western esotericism, especially in ceremonial magic. It is blended from aromatic plant materials. Its name came about due to its having been described in a medieval grimoire called The Book of the Sacred Magic of Abramelin the Mage (1897) written by Abraham the Jew (presumed to have lived from c. 1362 – c. 1458). The recipe is adapted from that of the biblical holy anointing oil described in the Book of Exodus (30:22-25) and attributed to Moses. In the English translation The Book of Abramelin: A New Translation (2006) by Steven Guth of Georg Dehn, which was compiled from all the known German manuscript sources, the formula reads as follows: In the first printed edition, Peter Hammer, 1725, the recipe reads: Note that the proportions in this edition conform with the recipe for holy anointing oil from the Bible (Exodus 30:22-25). 
The original popularity of Abramelin oil rested on the importance magicians place upon Jewish traditions of holy oils and, more recently, upon S. L. MacGregor Mathers' translation of The Book of Abramelin and the resurgence of 20th-century occultism, such as found in the works of the Hermetic Order of the Golden Dawn and Aleister Crowley, the founder of Thelema, who used a similar version of the oil in his system of Magick, and has since spread into other modern occult traditions. There are multiple recipes in use today. This oil is currently used in several ceremonies of the Thelemic church, Ecclesia Gnostica Catholica, including the rites of confirmation and ordination. It is also commonly used to consecrate magical implements and temple furniture. The eucharistic host of the Gnostic Mass—called the Cake of Light—includes this oil as an important ingredient. Recipes Samuel Mathers' recipe According to the S. L. MacGregor Mathers English translation from 1897, which derives from an incomplete French manuscript copy of The Book of Abramelin, the recipe is: Crowley's recipe using essential oils Early in the 20th century, the Aleister Crowley created his own version of Abramelin oil, which is called "oil of Abramelin" in The Book of the Law. It was based on S. L. MacGregor Mathers' substitution of galangal for calamus. Crowley also abandoned the book's method of preparation—which specifies blending myrrh "tears" (resin) and "fine" (finely ground) cinnamon—instead opting for using distilled essential oils in a base of olive oil. His recipe (from his commentary to The Book of the Law) reads as follows: 8 parts cinnamon essential oil 4 parts myrrh essential oil 2 parts galangal essential oil 7 parts olive oil Crowley weighed out his proportions of essential oils according to the recipe specified by Mathers' translation for weighing out raw materials. The result is to give the cinnamon a strong presence, so that when it is placed upon the skin "it should burn and thrill through the body with an intensity as of fire". This formula is unlike the grimoire recipe and it cannot be used for practices that require the oil to be poured over the head. Rather, Crowley intended it to be applied in small amounts, usually to the top of the head or the forehead, and to be used for anointing of magical equipment as an act of consecration. Symbolism Oil of Abramelin was seen as highly important by Crowley, and he used his version of it throughout his life. In Crowley's magical system, the oil came to symbolize the aspiration to what he called the Great Work—"The oil consecrates everything that is touched with it; it is his aspiration; all acts performed in accordance with that are holy". Crowley went on to say: Crowley also had a symbolic view of the ingredients: Effects Mathers' use of the ingredient galangal instead of calamus and/or Crowley's innovative use of essential oils rather than raw ingredients has resulted in some changes from the original recipe: Symbolism: In Jewish, Greek, and European magical botanic symbolism, the ascription given to sweet flag or calamus is generally that of fertility, due to the shape of the plant's fruiting body. Crowley gave the following Qabalistic meaning for galangal: "Galangal represents both Kether and Malkuth, the First and the Last, the One and the Many." Thus Crowley's substitution therefore shifts the symbolism to microcosm/macrocosm unity, which is reflective of Thelema's mystical aim—the union of the adept with the Absolute. 
Skin sensation: The original recipe for Abramelin oil does not irritate the skin and can be applied according to traditional Jewish and Christian religious and magical practices. Crowley's recipe has a much higher concentration of cinnamon than the original recipe. This results in an oil which can be noticeably hot on the skin and can cause skin rashes if applied too liberally. Digestive toxicity: Galangal is edible, calamus is not, as it has some toxicity. This is certainly relevant to those who use Crowley's oil of Abramelin as a core ingredient for the eucharistic Cake of Light, giving it a mild opiated taste (from the myrrh) and a spicy tang (from the cinnamon and the ginger-like galangal). Heavy use of calamus in such a recipe would render the host inedible. See also Holy water Shemen Afarsimon, oil of persimmon, in the Mishnah Washing and anointing References Works cited External links The Anal-retentive's Guide to Oil of Abramelin by Frater RIKB Recipe for Mathers-style Macerated Oil of Abramelin by Alchemy Works Thelemic Consecration of the Oil, by T. Apiryon Safety Guidelines for Essential Oils Ceremonial magic Christian terminology Judaism terminology Magic substances Myrrh Oils Religious objects Tabernacle and Temples in Jerusalem Thelema
Holy anointing oil
Physics,Chemistry
4,955
18,629,636
https://en.wikipedia.org/wiki/Maffei%201
Maffei 1 is a massive elliptical galaxy in the constellation Cassiopeia. Once believed to be a member of the Local Group of galaxies, it is now known to belong to a separate group, the IC 342/Maffei Group. It was named after Paolo Maffei, who discovered it and the neighboring Maffei 2 in 1967 via their infrared emissions. Maffei 1 is a slightly flattened core type elliptical galaxy. It has a boxy shape and is made mainly of old metal-rich stars. It has a tiny blue nucleus in which stars continue to form. Like all large ellipticals it contains a significant population of globular clusters. Maffei 1 is situated at an estimated distance of 3–4 Mpc from the Milky Way. It may be the closest giant elliptical galaxy. Maffei 1 lies in the Zone of Avoidance and is heavily obscured by the Milky Way's stars and dust. If it were not obscured, it would be one of the largest (about the size of the full moon), brightest, and best-known galaxies in the sky. It can be observed visually, using a 30–35 cm or bigger telescope under a very dark sky. Discovery The Italian astronomer Paolo Maffei was one of the pioneers of infrared astronomy. In the 1950s and 60s, in order to obtain high quality images of celestial objects in the very near infrared part of the spectrum (the I-band, 680–880 nm), he used chemically hyper-sensitized standard Eastman emulsions I-N. To achieve the hyper-sensitization he immersed them in 5% ammonia solution for 3–5 minutes. This procedure increased their sensitivity by an order of magnitude. Between 1957 and 1967 Maffei observed many different objects using this technique, including globular clusters and planetary nebulae. Some of those objects were not visible at all on blue light (250–500 nm) sensitive plates. The galaxy Maffei 1 was discovered on a hyper-sensitized I-N photographic plate exposed on 29 September 1967 with the Schmidt telescope at Asiago Observatory. Maffei found Maffei 1, together with its companion spiral galaxy Maffei 2, while searching for diffuse nebulae and T Tauri stars. The object had an apparent size up to 50″ in the near infrared but was not visible on the corresponding blue light sensitive plate. Its spectrum lacked any emission or absorption lines. Later it was shown to be radio-quiet as well. In 1970 Hyron Spinrad suggested that Maffei 1 is a nearby heavily obscured giant elliptical galaxy. Maffei 1 would be among the ten brightest galaxies in the northern sky if not situated behind the Milky Way. Distance Maffei 1 is located only 0.55° from the galactic plane in the middle of the zone of avoidance and suffers from about 4.7 magnitudes of extinction (a factor of about 1/70) in visible light. In addition to extinction, observation of Maffei 1 is further hindered by the fact that it is covered by myriads of faint Milky Way stars, which can easily be confused with its own. As a result, determining its distance has been particularly difficult. In 1971, soon after its discovery, Hyron Spinrad estimated the distance to Maffei 1 at about 1 Mpc, which would place it within the Local Group of galaxies. In 1983 this estimate was revised up to 2.1 Mpc by Ronald Buta and Marshall McCall using the general relation between the luminosity and velocity dispersion for elliptical galaxies. That distance puts Maffei 1 well outside the Local Group, but close enough to have influenced it in the past. In 1993 Gerard Luppino and John Tonry used surface brightness fluctuations to derive a new distance estimate to Maffei 1 of . 
Later in 2001, Tim Davidge and Sidney van den Bergh used adaptive optics to observe the brightest asymptotic giant branch stars in Maffei 1 and concluded that it is located at the distance 4.4 Mpc from the Sun. The latest determination of the distance to Maffei 1, which is based on the re-calibrated luminosity/velocity dispersion relation for the elliptical galaxies and the updated extinction, is , or over 9 million light years away. For perspective, the nearby Andromeda Galaxy is estimated to be about 2.5 million light years away. The larger (≥3 Mpc) distances reported in the past 20 years would imply that Maffei 1 has never been close enough to the Local Group to significantly influence its dynamics. Maffei 1 moves away from the Sun at the speed of about 66 km/s. Its velocity relative to the Local Group's center of mass is, however, 297 km/s away. That means that Maffei 1 participates in the general expansion of the Universe. Physical properties Size and shape Maffei 1 is a massive elliptical galaxy classified as type E3 in the Hubble classification scheme. This means that it is slightly flattened, its semi-minor axis being 70% of its semi-major axis. Maffei 1 has also a boxy shape (E(b)3 type), while its central region (radius ≈ 34 pc) is deficient in light emission as compared to the r1/4 law, meaning that Maffei 1 is a core type elliptical. Both the boxy shape and the presence of an underluminous core are typical of intermediate to massive ellipticals. The apparent dimensions of Maffei 1 depend strongly on the wavelength of light because of the heavy obscuration by the Milky Way. In blue light it is 1–2′ across while in the near infrared its major axis reaches 23′—more than 3/4 of the Moon's diameter. At a distance of 3 Mpc this corresponds to approximately 23 kpc. The total visible absolute magnitude of Maffei 1, MV=−20.8, is comparable to that of the Milky Way. Nucleus Maffei 1 possesses a tiny blue nucleus at its center approximately 1.2 pc across. It contains about 29 solar masses of ionized hydrogen. This implies that it has undergone recent star formation. There are no signs of an active galactic nucleus (AGN) in the center of Maffei 1. The X-ray emission from the center is extended and likely comes from a number of stellar sources. Stars and stellar clusters Maffei 1 is mainly made of old metal-rich stars more than 10 billion years in age. As a large elliptical galaxy, Maffei 1 is expected to host a significant population of globular clusters (about 1100). However, due to heavy intervening absorption, ground-based observations for a long time failed to identify any of them. Observations by the Hubble Space Telescope in 2000 revealed about 20 globular cluster candidates in the central region of the galaxy. Later infrared observations from telescopes on the ground also found a population of bright globular cluster candidates. Group membership Maffei 1 is a principal member of a nearby group of galaxies. The group's other members are the giant spiral galaxies IC 342 and Maffei 2. Maffei 1 has also a small satellite spiral galaxy, Dwingeloo 1, as well as a number of dwarf satellites like MB1. The Group is one of the closest galaxy groups to the Milky Way galaxy. Notes References External links Maffei 1 Galaxies Beyond the Heart: Maffei 1 and 2—Astronomy Picture of the Day 2010 March 9 Elliptical galaxies Peculiar galaxies IC 342/Maffei Group Cassiopeia (constellation) 09892 Sharpless objects UGCA objects
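The figures quoted above (about 4.7 magnitudes of visual extinction and a near-infrared extent of roughly 23 arcminutes at a distance of 3–4 Mpc) follow from two short calculations. The Python sketch below is illustrative and not part of the article; the 3.4 Mpc distance is an assumed value inside the quoted range, and the rounded inputs reproduce the article's approximate numbers (a dimming factor of order 70–80 and a diameter of roughly 23 kpc).

```python
import math

def extinction_factor(a_mag):
    """Flux suppression corresponding to A magnitudes of extinction."""
    return 10 ** (a_mag / 2.5)

def physical_size_kpc(angular_size_arcmin, distance_mpc):
    """Small-angle conversion of an angular size to a physical size."""
    angle_rad = math.radians(angular_size_arcmin / 60.0)
    return angle_rad * distance_mpc * 1000.0   # Mpc -> kpc

# 4.7 mag of extinction and a 23' extent at an assumed 3.4 Mpc distance.
print(f"4.7 mag of extinction dims light by a factor of ~{extinction_factor(4.7):.0f}")
print(f"23 arcmin at 3.4 Mpc corresponds to ~{physical_size_kpc(23, 3.4):.0f} kpc")
```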
Maffei 1
Astronomy
1,575
144,553
https://en.wikipedia.org/wiki/Projectile
A projectile is an object that is propelled by the application of an external force and then moves freely under the influence of gravity and air resistance. Although any object in motion through space is a projectile, projectiles are most commonly discussed in warfare and sports (for example, a thrown baseball, a kicked football, a fired bullet, a shot arrow, or a stone released from a catapult). In ballistics, mathematical equations of motion are used to analyze projectile trajectories through launch, flight, and impact. Motive force Blowguns and pneumatic rifles use compressed gases, while most other guns and cannons utilize expanding gases liberated by sudden chemical reactions of propellants such as smokeless powder. Light-gas guns use a combination of these mechanisms. Railguns utilize electromagnetic fields to provide a constant acceleration along the entire length of the device, greatly increasing the muzzle velocity. Some projectiles provide propulsion during flight by means of a rocket engine or jet engine. In military terminology, a rocket is unguided, while a missile is guided. Note the two meanings of "rocket" (weapon and engine): an ICBM is a guided missile with a rocket engine. An explosion, whether or not caused by a weapon, turns the resulting debris into multiple high-velocity projectiles. An explosive weapon or device may also be designed to produce many high-velocity projectiles by the break-up of its casing; these are correctly termed fragments. In sports In projectile motion the most important force applied to the projectile is the propelling force; in sport the propelling force is supplied by the muscles acting on the ball, and the greater the applied force, the farther the projectile (the ball) travels. See pitching, bowling. As a weapon Delivery projectiles Many projectiles, e.g. shells, may carry an explosive charge or another chemical or biological substance. Aside from an explosive payload, a projectile can be designed to cause special damage, e.g. fire (see also early thermal weapons) or poisoning (see also arrow poison). Kinetic projectiles Wired projectiles Some projectiles stay connected by a cable to the launch equipment after launching it: for guidance: wire-guided missile (range up to ) to administer an electric shock, as in the case of a Taser (range up to ); two projectiles are shot simultaneously, each with a cable. to make a connection with the target, either to tow it towards the launcher, as with a whaling harpoon, or to draw the launcher to the target, as a grappling hook does. Typical projectile speeds Equations of motion An object projected at an angle θ to the horizontal with initial speed u has both vertical and horizontal components of velocity. The vertical component of the velocity (along the y-axis) is u sin θ, while the horizontal component is u cos θ. There are various standard results for a projectile launched at a specific angle θ: 1. Time to reach maximum height, t = (u sin θ)/g, the time taken for the projectile to reach the maximum height from the plane of projection, where g = acceleration due to gravity (approximately 9.81 m/s²), u = initial speed (m/s) and θ = angle made by the projectile with the horizontal axis. 2. Time of flight, T = (2u sin θ)/g: the total time taken for the projectile to fall back to the same plane from which it was projected. 3.
Maximum height, H = (u² sin²θ)/(2g): the maximum height attained by the projectile, i.e. the maximum displacement on the vertical axis (y-axis) covered by the projectile. 4. Range, R = (u² sin 2θ)/g: the horizontal distance covered (on the x-axis) by the projectile. The range is maximum when θ = 45°, i.e. R_max = u²/g (a short numerical sketch of these formulas follows below). See also Atlatl Ballistics Gunpowder Bullet Impact depth Kinetic bombardment Shell (projectile) Projectile point Projectile use by animals Arrow Dart Missile Sling ammunition Spear Torpedo Range of a projectile Space debris Trajectory of a projectile Notes References External links Open Source Physics computer model Projectile Motion Applet Another projectile Motion Applet Ammunition Ballistics
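The kinematic relations above lend themselves to a quick numerical check. The following is a minimal Python sketch, not from the source text, that evaluates time of flight, maximum height, and range for an assumed launch speed and angle; air resistance is neglected, as in the formulas themselves, and the 20 m/s launch speed is an arbitrary illustrative value.

```python
import math

def projectile_stats(u, theta_deg, g=9.81):
    """Return time of flight, maximum height and range for an ideal projectile
    launched at speed u (m/s) and angle theta_deg (degrees), ignoring drag."""
    theta = math.radians(theta_deg)
    t_peak = u * math.sin(theta) / g               # time to reach maximum height
    t_flight = 2 * t_peak                          # time of flight back to launch plane
    h_max = (u * math.sin(theta)) ** 2 / (2 * g)   # maximum height
    r = u ** 2 * math.sin(2 * theta) / g           # horizontal range
    return t_flight, h_max, r

# Example with an arbitrary 20 m/s launch speed: range is largest at 45 degrees.
for angle in (30, 45, 60):
    T, H, R = projectile_stats(20.0, angle)
    print(f"{angle:>2} deg: flight {T:.2f} s, height {H:.2f} m, range {R:.2f} m")
```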
Projectile
Physics
850
60,332,587
https://en.wikipedia.org/wiki/USB3%20Vision
USB3 Vision is an interface standard introduced in 2013 for industrial cameras. It describes a specification on top of the USB standard, with a particular focus on supporting high-performance cameras based on USB 3.0. It is recognized as one of the fastest growing machine vision camera standards. As of October 2019, version 1.1 is the latest version of the standard. The standard is hosted by the AIA; a company developing a product that implements it must pass compliance tests and obtain a license. As of late 2019, there are 42 companies that license this standard. The standard itself may be requested free of charge for reference or evaluation. The standard is built upon many of the same pieces as GigE Vision, being based on GenICam, but utilizes USB ports instead of Ethernet. Some of the benefits of this standard include simple plug and play usability, power over the cable, and high bandwidth. Additionally, it defines locking connectors that modify the standard USB connectors with additional screw-locks for industrial purposes. Technology The standard covers four major areas: Device Detection Register Access Streaming Data Event Handling The standard defines a specific USB Class ID (Class 0xEF, Subclass 0x05) for identifying the device. As the standard is defined at a protocol layer, the software vendor providing the driver may be a different entity from the company designing the camera. Register Access includes mandatory USB3 Vision registers as well as camera-specific registers which may control parameters such as shutter speed or integration time, gamma correction, white balance, etc. The latter register type varies widely across cameras. The camera-specific registers can be queried via an XML schema file which is part of the GenICam standard. The GenICam standard has a Standard Feature Naming Convention so that vendor-agnostic software can be created. The GenICam standard is independent of the transfer protocol. This standard and GigE Vision are examples of wire protocols which pair with the GenICam standard. This contrasts with the Camera Serial Interface, where the Camera Command Set (CCS) for controlling camera parameters is part of the standard itself. For many real devices, the vendors provide alternate methods such as I2C to access the full set of parameters that a specific device may support. These can include lighting synchronization and separate motor controls for optical focusing elements. Implementations A complete list of companies offering products complying with this standard is available from the AIA: Companies that license USB3 Vision Open Source implementations: Linux kernel driver (NOTE: Basic register access and image streaming only. Significant application logic outside of this kernel module is needed to incorporate GenICam and be fully compatible with the USB3 Vision specification) Aravis uses libusb to implement the USB3 Vision protocol. Supports GenICam interface for register introspection. Basler Linux kernel modifications - allow USB3 zero-copy streaming. Linux 4.9+ zero-copy usbfs is supported by newer versions of libusb. References Cameras
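Because the standard identifies devices through the USB class/subclass pair 0xEF/0x05, a host can detect candidate cameras with ordinary USB enumeration. The sketch below uses pyusb, which is an assumption on my part (the article names no host library); it only flags candidate devices by their interface descriptors and does none of the GenICam or streaming work a real implementation needs.

```python
import usb.core  # pyusb; an assumed choice - any USB enumeration library would do

USB3V_CLASS = 0xEF      # Miscellaneous device class used by USB3 Vision
USB3V_SUBCLASS = 0x05   # subclass assigned to USB3 Vision

def _has_u3v_interface(dev):
    """True if any interface on the device advertises the USB3 Vision class codes."""
    for cfg in dev:
        for intf in cfg:
            if (intf.bInterfaceClass == USB3V_CLASS and
                    intf.bInterfaceSubClass == USB3V_SUBCLASS):
                return True
    return False

def find_usb3_vision_candidates():
    """Coarse device detection only; a GenICam stack is still needed on top."""
    candidates = []
    for dev in usb.core.find(find_all=True):
        try:
            if _has_u3v_interface(dev):
                candidates.append(dev)
        except usb.core.USBError:
            continue  # device not accessible (permissions, detached, ...)
    return candidates

if __name__ == "__main__":
    for dev in find_usb3_vision_candidates():
        print(f"candidate camera: vid=0x{dev.idVendor:04x} pid=0x{dev.idProduct:04x}")
```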
USB3 Vision
Technology
590
2,133,700
https://en.wikipedia.org/wiki/Coulomb%20blockade
In mesoscopic physics, a Coulomb blockade (CB), named after Charles-Augustin de Coulomb's electrical force, is the decrease in electrical conductance at small bias voltages of a small electronic device comprising at least one low-capacitance tunnel junction. Because of the CB, the conductance of a device may not be constant at low bias voltages, but disappear for biases under a certain threshold, i.e. no current flows. Coulomb blockade can be observed by making a device very small, like a quantum dot. When the device is small enough, electrons inside the device will create a strong Coulomb repulsion preventing other electrons from flowing. Thus, the device will no longer follow Ohm's law and the current-voltage relation of the Coulomb blockade looks like a staircase. Even though the Coulomb blockade can be used to demonstrate the quantization of the electric charge, it remains a classical effect and its main description does not require quantum mechanics. However, when few electrons are involved and an external static magnetic field is applied, Coulomb blockade provides the ground for a spin blockade (like Pauli spin blockade) and valley blockade, which include quantum mechanical effects due to spin and orbital interactions respectively between the electrons. The devices can comprise either metallic or superconducting electrodes. If the electrodes are superconducting, Cooper pairs (with a charge of minus two elementary charges, −2e) carry the current. In the case that the electrodes are metallic or normal-conducting, i.e. neither superconducting nor semiconducting, electrons (with a charge of −e) carry the current. In a tunnel junction The following section is for the case of tunnel junctions with an insulating barrier between two normal conducting electrodes (NIN junctions). The tunnel junction is, in its simplest form, a thin insulating barrier between two conducting electrodes. According to the laws of classical electrodynamics, no current can flow through an insulating barrier. According to the laws of quantum mechanics, however, there is a nonvanishing (larger than zero) probability for an electron on one side of the barrier to reach the other side (see quantum tunnelling). When a bias voltage is applied, this means that there will be a current, and, neglecting additional effects, the tunnelling current will be proportional to the bias voltage. In electrical terms, the tunnel junction behaves as a resistor with a constant resistance, also known as an ohmic resistor. The resistance depends exponentially on the barrier thickness. Typically, the barrier thickness is on the order of one to several nanometers. An arrangement of two conductors with an insulating layer in between not only has a resistance, but also a finite capacitance. The insulator is also called dielectric in this context; the tunnel junction behaves as a capacitor. Due to the discreteness of electrical charge, current through a tunnel junction is a series of events in which exactly one electron passes (tunnels) through the tunnel barrier (we neglect cotunneling, in which two electrons tunnel simultaneously). The tunnel junction capacitor is charged with one elementary charge by the tunnelling electron, causing a voltage buildup V = e/C, where C is the capacitance of the junction. If the capacitance is very small, the voltage buildup can be large enough to prevent another electron from tunnelling. The electric current is then suppressed at low bias voltages and the resistance of the device is no longer constant.
The increase of the differential resistance around zero bias is called the Coulomb blockade. Observation In order for the Coulomb blockade to be observable, the temperature has to be low enough so that the characteristic charging energy (the energy that is required to charge the junction with one elementary charge) is larger than the thermal energy of the charge carriers. In the past, for capacitances above 1 femtofarad (10−15 farad), this implied that the temperature had to be below about 1 kelvin. This temperature range is routinely reached, for example, by Helium-3 refrigerators. Thanks to small quantum dots of only a few nanometers, Coulomb blockade has been observed even above liquid-helium temperature, up to room temperature. To make a tunnel junction in plate-condenser geometry with a capacitance of 1 femtofarad, using an oxide layer of electric permittivity 10 and thickness one nanometer, one has to create electrodes with dimensions of approximately 100 by 100 nanometers. This range of dimensions is routinely reached, for example, by electron beam lithography and appropriate pattern transfer technologies, like the Niemeyer–Dolan technique, also known as the shadow evaporation technique. The integration of quantum dot fabrication with standard industrial technology has been achieved for silicon. A CMOS process for obtaining massive production of single-electron quantum dot transistors with channel size down to 20 nm x 20 nm has been implemented. Single-electron transistor The simplest device in which the effect of Coulomb blockade can be observed is the so-called single-electron transistor. It consists of two electrodes known as the drain and the source, connected through tunnel junctions to one common electrode with a low self-capacitance, known as the island. The electrical potential of the island can be tuned by a third electrode, known as the gate, which is capacitively coupled to the island. In the blocking state no accessible energy levels are within tunneling range of an electron (in red) on the source contact. All energy levels on the island electrode with lower energies are occupied. When a positive voltage is applied to the gate electrode the energy levels of the island electrode are lowered. The electron (green 1.) can tunnel onto the island (2.), occupying a previously vacant energy level. From there it can tunnel onto the drain electrode (3.) where it inelastically scatters and reaches the drain electrode Fermi level (4.). The energy levels of the island electrode are evenly spaced with a separation of ΔE. This gives rise to a self-capacitance C of the island, defined as C = e²/ΔE. To achieve the Coulomb blockade, three criteria have to be met: The bias voltage must be lower than the elementary charge divided by the self-capacitance of the island: V_bias < e/C; The thermal energy in the source contact plus the thermal energy in the island, i.e. k_B T, must be below the charging energy: k_B T < E_C = e²/(2C), or else the electron will be able to pass the QD via thermal excitation; and The tunneling resistance R_t should be greater than h/e², which is derived from Heisenberg's uncertainty principle (a short numerical check of these criteria appears below). Coulomb blockade thermometer A typical Coulomb blockade thermometer (CBT) is made from an array of metallic islands, connected to each other through a thin insulating layer. A tunnel junction forms between the islands, and as voltage is applied, electrons may tunnel across this junction. The tunneling rates and hence the conductance vary according to the charging energy of the islands as well as the thermal energy of the system.
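Since the three blockade criteria are simple inequalities, they can be checked numerically. Below is a minimal Python sketch, not part of the original text; the 1 aF island capacitance, 4.2 K temperature, and 100 kΩ junction resistance are illustrative assumptions used only to show how the criteria are evaluated.

```python
# Minimal check of the single-electron-transistor blockade criteria.
# Capacitance, temperature and resistance below are illustrative assumptions.
E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23           # Boltzmann constant, J/K
H_PLANCK = 6.62607015e-34    # Planck constant, J s

def blockade_report(capacitance_farad, temperature_kelvin, tunnel_resistance_ohm):
    e_c = E_CHARGE**2 / (2 * capacitance_farad)   # charging energy E_C = e^2 / (2C)
    v_max = E_CHARGE / capacitance_farad          # bias must stay below e/C
    t_max = e_c / K_B                             # thermal energy must stay below E_C
    r_quantum = H_PLANCK / E_CHARGE**2            # resistance quantum h/e^2, ~25.8 kOhm
    print(f"charging energy E_C = {e_c / E_CHARGE * 1e3:.1f} meV")
    print(f"bias voltage must satisfy V < {v_max * 1e3:.2f} mV")
    print(f"temperature must satisfy k_B T < E_C, i.e. T < {t_max:.1f} K "
          f"(requested T = {temperature_kelvin} K)")
    print(f"tunnel resistance must exceed h/e^2 = {r_quantum / 1e3:.1f} kOhm "
          f"(given R_t = {tunnel_resistance_ohm / 1e3:.0f} kOhm)")

# Example: a 1 aF island at liquid-helium temperature with a 100 kOhm junction.
blockade_report(1e-18, 4.2, 100e3)
```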
Coulomb blockade thermometer is a primary thermometer based on electric conductance characteristics of tunnel junction arrays. The parameter V_½ ≈ 5.439 N k_B T/e, the full width at half minimum of the measured differential conductance dip over an array of N junctions, together with the physical constants provides the absolute temperature. Ionic Coulomb blockade Ionic Coulomb blockade (ICB) is a special case of CB, appearing in the electro-diffusive transport of charged ions through sub-nanometer artificial nanopores or biological ion channels. ICB is broadly similar to its electronic counterpart in quantum dots, but presents some specific features defined by the possibly different valence z of the charge carriers (permeating ions vs electrons) and by the different origin of the transport mechanism (classical electrodiffusion vs quantum tunnelling). In the case of ICB, the Coulomb gap is defined by the dielectric self-energy of the incoming ion inside the pore/channel and hence depends on the ion valence z. ICB appears strongly, even at room temperature, for ions with z ≥ 2, e.g. for Ca²⁺ ions. ICB has recently been observed experimentally in sub-nanometer MoS2 pores. In biological ion channels ICB typically manifests itself in such valence selectivity phenomena as conduction bands (vs fixed charge) and concentration-dependent divalent blockade of sodium current. See also Ionic Coulomb blockade Quantisation of charge Elementary charge References General Single Charge Tunneling: Coulomb Blockade Phenomena in Nanostructures, eds. H. Grabert and M. H. Devoret (Plenum Press, New York, 1992) D. V. Averin and K. K. Likharev, in Mesoscopic Phenomena in Solids, eds. B. L. Altshuler, P. A. Lee, and R. A. Webb (Elsevier, Amsterdam, 1991) External links Computational Single-Electronics book Coulomb blockade online lecture Nanoelectronics Quantum electronics Mesoscopic physics
Coulomb blockade
Physics,Materials_science
1,890
1,238,920
https://en.wikipedia.org/wiki/Permutation%20automaton
In automata theory, a permutation automaton, or pure-group automaton, is a deterministic finite automaton such that each input symbol permutes the set of states. Formally, a deterministic finite automaton may be defined by the tuple (Q, Σ, δ, q0, F), where Q is the set of states of the automaton, Σ is the set of input symbols, δ is the transition function that takes a state q and an input symbol x to a new state δ(q,x), q0 is the initial state of the automaton, and F is the set of accepting states (also: final states) of the automaton. is a permutation automaton if and only if, for every two distinct states and in Q and every input symbol in Σ, δ(qi,x) ≠ δ(qj,x). A formal language is p-regular (also: a pure-group language) if it is accepted by a permutation automaton. For example, the set of strings of even length forms a p-regular language: it may be accepted by a permutation automaton with two states in which every transition replaces one state by the other. Applications The pure-group languages were the first interesting family of regular languages for which the star height problem was proved to be computable. Another mathematical problem on regular languages is the separating words problem, which asks for the size of a smallest deterministic finite automaton that distinguishes between two given words of length at most n – by accepting one word and rejecting the other. The known upper bound in the general case is . The problem was later studied for the restriction to permutation automata. In this case, the known upper bound changes to . References Permutations Finite automata
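The even-length example in the article is easy to make concrete. The sketch below is an illustrative Python rendering, not from the source: it builds a two-state automaton in which every input symbol swaps the states (so each symbol acts as a permutation of Q) and checks the defining property that no symbol maps two distinct states to the same state.

```python
from itertools import product

# Two-state permutation automaton accepting strings of even length:
# every symbol swaps the two states, i.e. acts as the permutation (0 1) on Q.
Q = {0, 1}
SIGMA = {"a", "b"}
DELTA = {(q, x): 1 - q for q, x in product(Q, SIGMA)}
Q0, FINAL = 0, {0}

def is_permutation_automaton(states, alphabet, delta):
    """True iff every input symbol induces a bijection on the state set."""
    return all(len({delta[(q, x)] for q in states}) == len(states) for x in alphabet)

def accepts(word):
    q = Q0
    for x in word:
        q = DELTA[(q, x)]
    return q in FINAL

assert is_permutation_automaton(Q, SIGMA, DELTA)
print(accepts("abab"))  # True: length 4 is even
print(accepts("aba"))   # False: length 3 is odd
```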
Permutation automaton
Mathematics
388
21,726,554
https://en.wikipedia.org/wiki/Laves%20phase
Laves phases are intermetallic phases that have composition AB2 and are named for Fritz Laves, who first described them. The phases are classified on the basis of geometry alone. While the problem of packing spheres of equal size has been well studied since Gauss, Laves phases are the result of Laves's investigations into packing spheres of two sizes. Laves phases fall into three Strukturbericht types: cubic MgCu2 (C15), hexagonal MgZn2 (C14), and hexagonal MgNi2 (C36). The latter two classes are unique forms of the hexagonal arrangement, but share the same basic structure. In general, the A atoms are ordered as in diamond, hexagonal diamond, or a related structure, and the B atoms form tetrahedra around the A atoms for the AB2 structure. Laves phases are of particular interest in modern metallurgy research because of their abnormal physical and chemical properties. Many hypothetical or primitive applications have been developed. However, little practical knowledge has been elucidated from Laves phase study so far. A characteristic feature is the almost perfect electrical conductivity, but they are not plastically deformable at room temperature. In each of the three classes of Laves phase, if the two types of atoms were perfect spheres with a size ratio of √(3/2) ≈ 1.225, the structure would be topologically tetrahedrally close-packed. At this size ratio, the structure has an overall packing volume density of 0.710. Compounds found in Laves phases typically have an atomic size ratio between 1.05 and 1.67. Analogues of Laves phases can be formed by the self-assembly of a colloidal dispersion of two sizes of sphere. Laves phases are instances of the more general Frank–Kasper phases. References Intermetallics Crystal structure types
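The size-ratio criterion can be checked for candidate AB2 compounds with a few lines of code. The Python sketch below is illustrative only; the metallic radii are rounded values I have assumed for the three prototype compounds and are not taken from the article.

```python
import math

IDEAL_RATIO = math.sqrt(3 / 2)       # ~1.225, ideal hard-sphere ratio r_A / r_B
EMPIRICAL_WINDOW = (1.05, 1.67)      # range quoted for real Laves-phase compounds

# Approximate metallic radii in angstroms (illustrative values, not from the article).
RADII = {"Mg": 1.60, "Cu": 1.28, "Zn": 1.34, "Ni": 1.24}

def laves_ratio(a_atom, b_atom):
    """Return r_A / r_B for an AB2 candidate and whether it lies in the window."""
    ratio = RADII[a_atom] / RADII[b_atom]
    lo, hi = EMPIRICAL_WINDOW
    return ratio, lo <= ratio <= hi

for a, b in [("Mg", "Cu"), ("Mg", "Zn"), ("Mg", "Ni")]:
    ratio, ok = laves_ratio(a, b)
    print(f"{a}{b}2: r_A/r_B = {ratio:.3f} "
          f"(ideal {IDEAL_RATIO:.3f}, within empirical window: {ok})")
```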
Laves phase
Physics,Chemistry,Materials_science
387
12,499,955
https://en.wikipedia.org/wiki/Jodrell%20Bank%20Centre%20for%20Astrophysics
The Jodrell Bank Centre for Astrophysics at the University of Manchester, is among the largest astrophysics groups in the UK. It includes the Jodrell Bank Observatory, the MERLIN/VLBI National Facility, and the Jodrell Bank Visitor Centre. The centre was formed after the merger of the Victoria University of Manchester and UMIST which brought two astronomy groups together. The Jodrell Bank site also hosts the headquarters of the SKA Observatory (SKAO) - the International Governmental Organisation (IGO) tasked with the delivery and operation of the Square Kilometre Array, created on the signing of the Rome Convention in 2019. The SKA will be the largest telescope in the world - construction is expected to start at the end of this decade. The JBCA is part of the School of Physics and Astronomy. The current director is Professor Michael Garrett. Research The research at the Centre focuses on: Astrochemistry Astrophysical masers The Cosmic Microwave Background Galaxy formation and evolution Gravitational lenses Theoretical astrophysics and cosmology Planetary nebulae Pulsars Stellar physics (including star formation and solar plasmas) Development of telescope receivers Jodrell Bank Observatory The Jodrell Bank Observatory, located near Goostrey and Holmes Chapel in Cheshire, has played an important role in the research of meteors, quasars, pulsars, masers and gravitational lenses, and was heavily involved with the tracking of space probes at the start of the Space Age. The main telescope at the observatory is the Lovell Telescope, which is the third largest steerable radio telescope in the world. There are three other active telescopes located at the observatory; the Mark II, as well as 42 ft and 7m-diameter radio telescopes. Jodrell Bank Observatory is also the base of the Multi-Element Radio Linked Interferometer Network (MERLIN), a National Facility run by the University of Manchester on behalf of UK Research and Innovation. References External links Jodrell Bank Observatory Astronomy institutes and departments Square Kilometre Array Departments of the University of Manchester
Jodrell Bank Centre for Astrophysics
Astronomy
410
60,558,638
https://en.wikipedia.org/wiki/Monoallelic%20gene%20expression
Monoallelic gene expression (MAE) is the phenomenon of the gene expression, when only one of the two gene copies (alleles) is actively expressed (transcribed), while the other is silent. Diploid organisms bear two homologous copies of each chromosome (one from each parent), a gene can be expressed from both chromosomes (biallelic expression) or from only one (monoallelic expression). MAE can be Random monoallelic expression (RME) or Constitutive monoallelic expression (constitutive). Constitutive monoallelic expression occurs from the same specific allele throughout the whole organism or tissue, as a result of genomic imprinting. RME is a broader class of monoallelic expression, which is defined by random allelic choice in somatic cells, so that different cells of the multi-cellular organism express different alleles. Constitutive monoallelic gene expression Random monoallelic gene expression (RME) X-chromosome inactivation (XCI), is the most striking and well-studied example of RME. XCI leads to the transcriptional silencing of one of the X chromosomes in female cells, which results in expression of the genes that located on the other, remaining active X chromosome. XCI is critical for balanced gene expression in female mammals. The allelic choice of XCI by individual cells takes place randomly in epiblasts of the preimplantation embryo, which leads to mosaic gene expression of the paternal and maternal X chromosome in female tissues. XCI is a chromosome-wide monoallelic expression, that includes expression of all genes that are located on X chromosome, in contrast to autosomal RME (aRME) that relates to single genes that are interspersed over the genome. aRME's can be fixed or dynamic, depending whether or not the allele-specific expression is conserved in daughter cells after mitotic cell division. Types of aRME Fixed aRME are established either by silencing of one allele that previously has been biallelically expressed, or by activation of a single allele from previously silent gene. Expression activation of the silent allele is coupled with a feedback mechanism that prevents expression of the second allele. Another scenario is also possible due to limited time-window of low-probability initiation, that could lead to high frequencies of cells with single-allele expression. It is estimated that 2-10% of all genes are fixed aRME. Studies of fixed aRME require either expansion of monoclonal cultures or lineage-traced in vivo or in vitro cells that are mitotically. Dynamic aRME occurs as a consequence of stochastic allelic expression. Transcription happens in bursts, which results in RNA molecules being synthesized from each allele separately. So over time, both alleles have a probability to initiate transcription. Transcriptional bursts are allelically stochastic, and lead to either maternal or paternal allele being accumulated in the cell. The gene transcription burst frequency and intensity combined with RNA-degradation rate form the shape of RNA distribution at the moment of observation and thus whether the gene is bi- or monoallelic. Studies that distinguish fixed and dynamic aRME require single-cell analyses of clonally related cells. Mechanisms of aRME Allelic exclusion is a process of gene expression when one allele is expressed and the other one kept silent. Two most studied cases of allelic exclusion are monoallelic expression of immunoglobulins in B and T cells and olfactory receptors in sensory neurons. 
Allelic exclusion is cell-type specific (as opposed to organism-wide XCI), which increases intercellular diversity, thus specificity towards certain antigens or odors. Allele-biased expression is skewed expression level of one allele over the other, but both alleles are still expressed (in contrast to allelic exclusion). This phenomenon is often observed in cells of immune function Methods of detection Methods of MAE detection are based on the difference between alleles, which can be distinguished either by the sequence of expressed mRNA or protein structure. Methods of MAE detection can be divided into single gene or whole genome MAE analysis. Whole genome MAE analysis cannot be performed based on protein structure yet, so these are completely NGS based techniques. Single-gene analysis Genome-wide analysis References External links Gene expression
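Dynamic aRME, as described above, arises purely from stochastic, independent transcriptional bursting of the two alleles. The toy Monte Carlo sketch below is an illustrative model, not a method from the article: burst probability, burst size, decay rate, and detection threshold are made-up parameters. It shows how independent low-frequency bursts combined with RNA degradation naturally yield a mixture of biallelic, monoallelic, and silent cells at any sampling instant.

```python
import random

def simulate_cell(steps=200, burst_prob=0.02, burst_size=8, decay=0.2, rng=random):
    """Simulate RNA counts from two independently bursting alleles in one cell.
    All parameters are arbitrary toy values chosen for illustration."""
    rna = [0.0, 0.0]
    for _ in range(steps):
        for allele in (0, 1):
            if rng.random() < burst_prob:    # stochastic transcriptional burst
                rna[allele] += burst_size
            rna[allele] *= (1.0 - decay)     # first-order RNA degradation
    return rna

def classify(rna, detection_threshold=1.0):
    expressed = [x >= detection_threshold for x in rna]
    if all(expressed):
        return "biallelic"
    if any(expressed):
        return "monoallelic"
    return "silent"

random.seed(0)
counts = {"biallelic": 0, "monoallelic": 0, "silent": 0}
for _ in range(1000):
    counts[classify(simulate_cell())] += 1
print(counts)  # a sizeable monoallelic fraction emerges from chance alone
```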
Monoallelic gene expression
Chemistry,Biology
903
2,519,869
https://en.wikipedia.org/wiki/Angewandte%20Chemie
Angewandte Chemie (, meaning "Applied Chemistry") is a weekly peer-reviewed scientific journal that is published by Wiley-VCH on behalf of the German Chemical Society (Gesellschaft Deutscher Chemiker). Publishing formats include feature-length reviews, short highlights, research communications, minireviews, essays, book reviews, meeting reviews, correspondences, corrections, and obituaries. This journal contains review articles covering all aspects of chemistry. According to the Journal Citation Reports, the journal had a 2023 impact factor of 16.1. Editions The journal appears in two editions with separate volume and page numbering: a German edition, Angewandte Chemie, and a fully English-language edition, Angewandte Chemie International Edition. The editions are identical in content with the exception of occasional reviews of German-language books or German translations of IUPAC recommendations. Publication history In 1887, Ferdinand Fischer established the Zeitschrift für die Chemische Industrie (Journal for the Chemical Industry). In 1888, the title was changed to Zeitschrift für Angewandte Chemie (Journal of Applied Chemistry), and volume numbering started over. This title was kept until the end of 1941 when it was changed to Die Chemie. Until 1920, the journal was published by Springer Verlag and by Verlag Chemie starting in 1921. Due to World War II, the journal did not publish from April 1945 to December 1946. In 1947, publication was resumed under the current title, Angewandte Chemie. In 1962, the English-language edition was launched as Angewandte Chemie International Edition in English, which has a separate volume counting. With the beginning of Vol. 37 (1998) "in English" was dropped from the journal name. Several journals have merged into Angewandte Chemie, including Chemische Technik/Chemische Apparatur in 1947 and Zeitschrift für Chemie in 1990. 2020 controversy In June 2020, the journal withdrew a paper by Tomas Hudlicky (Brock University), "Organic synthesis—Where now?" is thirty years old. A reflection on the current state of affairs, stating that it was "accepted after peer review and appears as an accepted article online prior to editing, proofing, and formal publication of the final Version of Record". The paper drew opprobrium for criticizing the alleged "preferential status" of women and minorities in chemistry. The journal withdrew the paper within hours, stating that the "paper contains opinions that don't reflect our values and has been removed. [...] Something went very wrong here and we're committed to do better." Additionally, 16 members of the journal's advisory board resigned on 8 June. On the same day it was reported that two editors had been suspended for passing the article. As a consequence, shaping of a new version of the journal begun, with diversity, equity, and inclusion, transparency, and a continued commitment to scientific excellence as the guiding principles. A new editorial team was formed additionally. Hudlicky responded to the backlash and retraction stating "I stand by the views I wished to express in the essay, some of which are common knowledge, while others were duly cited from primary and secondary sources". Following a condemnation by Brock University's former vice-president, he was defended by the Canadian Association of University Teachers and Brock University Faculty Association. Subsequently, he edited and republished the article on his own website. 
Impact factor While it has been suggested that the journal's impact factor is as high as it is in comparison to other chemistry journals because the journal contains reviews, the editors claim this effect is too small to explain the difference or affect the ranking of the journal in its subject group. References Chemistry journals Publications established in 1887 Society of German Chemists Wiley-VCH academic journals Weekly journals English-language journals
Angewandte Chemie
Chemistry
800
35,642,366
https://en.wikipedia.org/wiki/Walter%20Polakov
Walter Nicholas Polakov (July 18, 1879 – December 20, 1948 ) was a mechanical engineer, consulting engineer, and pioneer of scientific management. Biography Early years Walter Polakov was born in Luga, Russian Empire, and attended High School in Moscow before studying for a mechanical engineering degree at the Royal Institute of Technology Dresden in 1902. Returning to Moscow, he studied psychology and industrial hygiene before being employed at the Tula Locomotive Works, Moscow. In the USA In 1906 he emigrated with his family to the United States, where he was employed by the American Locomotive Company. There he met Henry Gantt, who was a consultant for the company at that time. Polakov joined Gantt's consulting company in 1910 and got to know Frederick Taylor, Frank Gilbreth and Harrington Emerson. However, by 1912 he was working for Wallace Clark before launching his own consulting company in 1915. Polakov joined the Taylor Society at this time and supported a Marxist view of capitalism in their bulletin. He also joined the American Society of Mechanical Engineers (ASME) and was part of a faction led by Gantt that broke from the ASME conference to hold their own meeting of the New Machine, an organization which sought political as well as an economic power. About fifty people listened to Gantt's call for industrial reform and Polakov's analysis of inefficiency in the industrial context. Little came of their initiative despite lobbying Woodrow Wilson to give more power to managers. However, responding to the war needs of the US Navy, Gantt and Polakov were employed as consultants by the Emergency Fleet Corporation where Gantt finalized the development of his Gantt charts. Having helped the US shipbuilders keep up with losses due to German submarine action, the Gantt charts were then applied to managing fleet movements at the U. S. Shipping Board. In the USSR Polakov returned to his native Russia—by then the Soviet Union—in 1929, staying until 1931. Whilst there he worked for the Supreme Soviet of the National Economy to develop the First Five Year Plan. Here he introduced the Gantt chart, supplying Russian translations of explanations. Publications (1912) "Power Plant Betterment by Scientific Management", Engineering Magazine, (NY) Vol 41, pp. 101–12, 278–92, 448–56, 577–82, 798–809, 970–75 (1916) "Discussion of Robert Valentine, The Progressive Relations Between Efficiency and Consent", Bulletin of Taylor Society, November, pp. 7–17 (1921) Man and His Affairs from an Engineering Point of View Baltimore: Williams (1921) Mastering Power Production: The Industrial, Economic and Social Problems Involved and Their Solution New York: The Engineering Magazine Company (1921) "Making Work Fascinating" ASME Journal, December (1922) (1931) "The Gantt Chart in Russia", American Machinist, 75, pp. 261–4 (1933)The Power Age: Its Quest and Challenge New York: Covici Friede Publishers References 1879 births 1948 deaths Engineers from the Russian Empire Emigrants from the Russian Empire to the United States People from Luga, Leningrad Oblast Russian Marxists American Marxists Industrial engineers American expatriates in the Soviet Union
Walter Polakov
Engineering
665
55,846,595
https://en.wikipedia.org/wiki/Shadow%20board
A shadow board is a type of tool board for organizing a set of tools; the board defines where particular tools should be placed when they are not in use. Shadow boards have the outlines of a work station's tools marked on them, allowing operators to identify quickly which tools are in use or missing. The boards are commonly located near the work station where the tools are used. Shadow boards are often used in the manufacturing environment to improve a facility's lean six sigma capabilities. Shadow boards reduce time spent looking for tools and also reduce losses. They improve work station safety because tools are replaced safely after use, rather than becoming potential hazards. See also Knolling 5S (methodology) Peg board References Tools Industrial equipment Containers Ordering Lean manufacturing
Shadow board
Engineering
150
76,380,163
https://en.wikipedia.org/wiki/Taylor%E2%80%93Maccoll%20flow
Taylor–Maccoll flow refers to the steady flow behind a conical shock wave that is attached to a solid cone. The flow is named after G. I. Taylor and J. W. Maccoll, who described the flow in 1933, guided by an earlier work of Theodore von Kármán. Mathematical description Consider a steady supersonic flow past a solid cone with a given semi-vertical angle. A conical shock wave can form in this situation, with the vertex of the shock wave lying at the vertex of the solid cone. If it were a two-dimensional problem, i.e., a supersonic flow past a wedge, then the incoming stream would simply be deflected through the wedge angle upon crossing the shock wave, so that streamlines behind the shock wave would be parallel to the wedge sides. Such a simple turning of the streamlines is not possible in the three-dimensional case. After passing through the shock wave, the streamlines are curved and approach the generators of the cone only asymptotically. The curving of the streamlines is accompanied by a gradual increase in density and decrease in velocity, in addition to the jumps that occur at the shock wave itself. The direction and magnitude of the velocity immediately behind the oblique shock wave are given by the weak branch of the shock polar. In particular, for each value of the incoming Mach number there exists a maximum semi-vertical angle beyond which the shock polar provides no solution; in that case the conical shock wave detaches from the solid surface (see Mach reflection). These detached cases are not considered here. The flow immediately behind the oblique conical shock wave is typically supersonic, although when the semi-vertical angle is close to its maximum value it can be subsonic. The supersonic flow behind the shock wave becomes subsonic as it evolves downstream. Since all incident streamlines intersect the conical shock wave at the same angle, the intensity of the shock wave is constant. This in particular means that the entropy jump across the shock wave is constant throughout. The flow behind the shock wave is therefore a potential flow, and we can introduce a velocity potential φ such that the velocity is v = ∇φ. Since the problem has no length scale and is clearly axisymmetric, the velocity field and the pressure field turn out to be functions of the polar angle θ only (the origin of the spherical coordinates is taken to be located at the vertex), i.e., v_r = v_r(θ) and v_θ = v_θ(θ). The steady potential flow is governed by the compressible potential-flow equation, in which the sound speed is expressed as a function of the velocity magnitude only. Substituting the assumed form of the velocity field into the governing equation gives the general Taylor–Maccoll equation. The equation simplifies greatly for a polytropic gas, for which h = c²/(γ−1), i.e. c² = (γ−1)(h₀ − v²/2), where γ is the specific heat ratio and h₀ is the stagnation enthalpy. Introducing this relation into the general Taylor–Maccoll equation and introducing the non-dimensional velocity V = v/v_max, where v_max = √(2h₀) (the speed the flow would reach if it expanded into a vacuum), we obtain, for the polytropic gas, the Taylor–Maccoll equation. The solution must satisfy the no-penetration condition on the solid surface (the polar component of the velocity vanishes there) and must also match the conditions behind the shock wave at the half-angle of the shock cone, which must be determined as part of the solution for a given incoming flow Mach number and specific heat ratio γ. The Taylor–Maccoll equation has no known explicit solution and is integrated numerically.
Kármán–Moore solution When the cone angle is very small, the flow is nearly parallel everywhere, in which case an exact solution can be found, as shown by Theodore von Kármán and Norton B. Moore in 1932. The solution is more apparent in cylindrical coordinates (the radial coordinate here is the distance from the axis of symmetry, not the density). If the speed of the incoming flow is given, the velocity potential is written as that of the uniform stream plus a small correction, which satisfies the linearized supersonic potential equation involving the Mach number of the incoming flow. Since the problem has no length scale, the correction depends only on a single self-similar coordinate, and the governing equation reduces to an ordinary differential equation. On the surface of the cone the no-penetration condition must hold, and consequently the correction is fixed there. In the small-angle approximation, the weak shock cone coincides with the Mach cone of the free stream. The trivial solution describes the uniform flow upstream of the shock cone, whereas the non-trivial solution satisfying the boundary condition on the solid surface behind the shock wave exhibits a logarithmic singularity as the axis is approached. From it the velocity components and the pressure on the surface of the cone follow (the pressure formula involves the density of the incoming gas). See also Kármán–Moore theory References Fluid dynamics
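For the full (non-linearized) problem, the Taylor–Maccoll equation is usually integrated by marching inward from an assumed shock angle until the polar velocity component vanishes, which locates the cone surface. The Python sketch below is a minimal, assumption-laden illustration rather than a definitive solver: it uses the standard non-dimensional form of the equation found in compressible-flow texts (velocities scaled by v_max), γ = 1.4 for air, a Mach 2 free stream, a 40-degree shock angle as an arbitrary example, and simple first-order marching for brevity.

```python
import math

GAMMA = 1.4  # ratio of specific heats; an assumed value for air

def taylor_maccoll_rhs(theta, v_r, v_t):
    """d(v_r)/dtheta and d(v_theta)/dtheta, velocities normalised by v_max."""
    a = 0.5 * (GAMMA - 1.0) * (1.0 - v_r**2 - v_t**2)
    dv_t = (v_r * v_t**2 - a * (2.0 * v_r + v_t / math.tan(theta))) / (a - v_t**2)
    return v_t, dv_t

def cone_angle_from_shock(mach1, shock_angle_deg, dtheta=1e-4):
    """March inward from an assumed conical shock angle until the flow is
    parallel to a cone surface (v_theta = 0); return that cone angle in degrees.
    The shock angle must lie between the Mach angle and the detachment limit."""
    beta = math.radians(shock_angle_deg)
    mn1 = mach1 * math.sin(beta)
    # Flow deflection just behind the oblique shock (theta-beta-M relation).
    delta = math.atan(2.0 / math.tan(beta) * (mn1**2 - 1.0)
                      / (mach1**2 * (GAMMA + math.cos(2.0 * beta)) + 2.0))
    mn2 = math.sqrt((1.0 + 0.5 * (GAMMA - 1.0) * mn1**2)
                    / (GAMMA * mn1**2 - 0.5 * (GAMMA - 1.0)))
    m2 = mn2 / math.sin(beta - delta)
    v_prime = (2.0 / ((GAMMA - 1.0) * m2**2) + 1.0) ** -0.5   # speed / v_max
    v_r, v_t = v_prime * math.cos(beta - delta), -v_prime * math.sin(beta - delta)
    theta = beta
    while v_t < 0.0:                     # v_theta vanishes on the cone surface
        dv_r, dv_t = taylor_maccoll_rhs(theta, v_r, v_t)
        v_r -= dv_r * dtheta             # first-order step towards smaller theta
        v_t -= dv_t * dtheta
        theta -= dtheta
    return math.degrees(theta)

# Example: a Mach 2 stream with an assumed 40-degree conical shock angle.
print(f"cone semi-vertex angle ~ {cone_angle_from_shock(2.0, 40.0):.1f} degrees")
```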
Taylor–Maccoll flow
Chemistry,Engineering
957
2,339,955
https://en.wikipedia.org/wiki/Xenon%20tetrafluoride
Xenon tetrafluoride is a chemical compound with the chemical formula XeF4. It was the first discovered binary compound of a noble gas. It is produced by the chemical reaction of xenon with fluorine: Xe + 2 F2 → XeF4 This reaction is exothermic, releasing an energy of 251 kJ/mol. Xenon tetrafluoride is a colorless crystalline solid that sublimes at 117 °C. Its structure was determined by both NMR spectroscopy and X-ray crystallography in 1963. The structure is square planar, as has been confirmed by neutron diffraction studies. According to VSEPR theory, in addition to four fluoride ligands, the xenon center has two lone pairs of electrons. These lone pairs are mutually trans. Synthesis The original synthesis of xenon tetrafluoride occurred through direct 1:5-molar-ratio combination of the elements in a nickel (Monel) vessel at 400 °C. The nickel does not catalyze the reaction, but rather protects the container surfaces against fluoride corrosion. Controlling the process against impurities is difficult, as xenon difluoride (XeF2), tetrafluoride, and hexafluoride (XeF6) are all in chemical equilibrium, the difluoride favored at low temperatures and little fluorine and the hexafluoride favored at high temperatures and excess fluorine. Fractional sublimation (xenon tetrafluoride is particularly involatile) or other equilibria generally allow purification of the product mixture. The elements combine more selectively when γ- or UV-irradiated in a nickel container or dissolved in anhydrous hydrogen fluoride with catalytic oxygen. That reaction is believed selective because dioxygen difluoride at standard conditions is too weak an oxidant to generate xenon(VI) species. Alternatively, fluoroxenonium perfluorometallate salts pyrolyze to XeF4. Reactions Xenon tetrafluoride hydrolyzes at low temperatures to form elemental xenon, oxygen, hydrofluoric acid, and aqueous xenon trioxide: 6 XeF4 + 12 H2O → 4 Xe + 2 XeO3 + 24 HF + 3 O2 It is used as a precursor for synthesis of all tetravalent Xe compounds. Reaction with tetramethylammonium fluoride gives tetramethylammonium pentafluoroxenate, which contains the pentagonal XeF5− anion. The XeF5− anion is also formed by reaction with cesium fluoride: CsF + XeF4 → CsXeF5 Reaction with bismuth pentafluoride (BiF5) forms the XeF3+ cation: XeF4 + BiF5 → XeF3BiF6 The XeF3+ cation in the salt XeF3Sb2F11 has been characterized by NMR spectroscopy. At 400 °C, XeF4 reacts with xenon to form XeF2: XeF4 + Xe → 2 XeF2 The reaction of xenon tetrafluoride with platinum yields platinum tetrafluoride and xenon: XeF4 + Pt → PtF4 + Xe Applications Xenon tetrafluoride has few applications. It has been shown to degrade silicone rubber for analyzing trace metal impurities in the rubber: XeF4 reacts with the silicone to form simple gaseous products, leaving a residue of metal impurities. References External links WebBook page for XeF4 Fluorides Nonmetal halides Xenon(IV) compounds
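The synthesis equation and the quoted reaction enthalpy allow a quick back-of-the-envelope calculation. The Python sketch below is illustrative and not from the article; it computes the fluorine mass consumed and the heat released when a given mass of xenon is converted to XeF4, using rounded standard molar masses.

```python
# Rough mass/energy bookkeeping for Xe + 2 F2 -> XeF4.
M_XE = 131.29   # g/mol, xenon (rounded standard value)
M_F2 = 38.00    # g/mol, difluorine (rounded standard value)
DELTA_H = -251  # kJ per mol of XeF4 formed (exothermic)

def synthesis_budget(mass_xe_g):
    """Return fluorine mass required, XeF4 mass produced and heat released
    for complete conversion of the given mass of xenon."""
    n_xe = mass_xe_g / M_XE            # moles of Xe = moles of XeF4
    mass_f2 = 2 * n_xe * M_F2          # two moles of F2 per mole of Xe
    mass_xef4 = mass_xe_g + mass_f2    # mass conservation
    heat_kj = -DELTA_H * n_xe          # heat released, as a positive number
    return mass_f2, mass_xef4, heat_kj

f2, xef4, q = synthesis_budget(10.0)
print(f"10.0 g Xe needs {f2:.2f} g F2, yields {xef4:.2f} g XeF4, releasing {q:.1f} kJ")
```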
Xenon tetrafluoride
Chemistry
720
2,517,899
https://en.wikipedia.org/wiki/Glauber
Glauber is a scientific discovery method written in the context of computational philosophy of science. It is related to machine learning in artificial intelligence. Glauber was written, among other programs, by Pat Langley, Herbert A. Simon, G. Bradshaw and J. Zytkow to demonstrate how scientific discovery may be obtained by problem solving methods, in their book Scientific Discovery, Computational Explorations on the Creative Mind. Their programs simulate historical scientific discoveries based on the empirical evidence known at the time of discovery. Glauber was named after Johann Rudolph Glauber, a 17th-century alchemist whose work helped to develop acid-base theory. Glauber (the method) rediscovers the law of acid-alkali reactions producing salts, given the qualities of substances and observed facts, the result of mixing substances. From that knowledge Glauber discovers that substances that taste bitter react with substances tasting sour, producing substances tasting salty. In few words, the law: Acid + Alkali --> Salt Glauber was designed by Pat Langley as part of his work on discovery heuristics in an attempt to have a computer automatically review a host of values and characteristics and make independent analyses from them. In the case of Glauber, the goal was to have an autonomous application that could estimate, even perfectly describe, the nature of a given chemical compound by comparing it to related substances. Langley formalized and compiled Glauber in 1983. The software were supplied with information about a variety of materials as they had been described by 17-18th century chemists, before most of modern chemical knowledge had been uncovered or invented. Qualitative descriptions like taste, rather than numerical data such as molecular weight, were programmed into the application. Chemical reactions that were known in that era and the distinction between reactants and products were also provided. From this knowledge, Glauber was to figure out which substances were acids, bases, and salts without any quantitative information. The system examined chemical substances and all of their most likely reactions and correlates the expected taste and related acidity or saltiness according to the rule that acids and bases produce salts. Glauber was a very successful advance in theoretical chemistry as performed by computer and it, along with similar systems developed by Herbert A. Simon including Stahl (which examines oxidation) and DALTON (which calculates atomic weight), helped form the groundwork of all current automated chemical analysis. 
The Glauber method Information representation (data structures) Glauber uses two predicates: Reacts and Has-Quality, represented in Lisp lists as follows: (Reacts Inputs {reactant1 reactant2 ...} Outputs {product1 product2 ...}) (Has-Quality Object {substance} quality {value}) For their experiment the authors used the following facts: (Reacts Inputs {HCl NaOH} Outputs {NaCl}) (Reacts Inputs {HCl KOH} Outputs {KCl}) (Reacts Inputs {HNO3 NaOH} Outputs {NaNO3}) (Reacts Inputs {HNO3 KOH} Outputs {KNO3 }) (Has-Quality Object {HCl} Tastes {Sour}) (Has-Quality Object {HNO3} Tastes {Sour }) (Has-Quality Object {NaOH} Tastes {Bitter}) (Has-Quality Object {KOH} Tastes {Bitter}) (Has-Quality Object {NaCl} Tastes {Salty}) (Has-Quality Object {NaNO3} Tastes {Salty}) (Has-Quality Object {KCl} Tastes {Salty}) (Has-Quality Object {KNO3} Tastes {Salty}) Discovering the following law and equivalence classes: Salts: {KNO3, KCl, NaNO3, NaCl} Acids: {HCl, HNO3} Alkalis: {NaOH, KOH} ∀ alkali ∀ acid ∃ salt (Reacts Inputs {acid, alkali} Outputs {salt}) ∀ salt (Has-Quality Object {salt} Tastes {Salty}) ∀ acid (Has-Quality Object {acid} Tastes {Sour}) ∀ alkali (Has-Quality Object {alkali} Tastes {Bitter}) The modern notation with strings like: NaOH, HCl, etc., is used just as short substance names. Here they do not mean the chemical structure of the substances, which was not known at the time of the discovery; the program works with any name used in the 17th century like aqua regia, muriatic acid, etc. Procedures Glauber is based in two procedures: Form-Class and Determine-Quantifier. The procedure Form-Class generalize the Reacts predicates by replacing the substance names by variables ranging on equivalence classes determined by a quality whose value distinguishes the substances in each class. In the experiment designed by its authors, the substances are partitioned in three classes based in the value of the taste quality based on their values: acids (sour), alkalis (bitter) and salts (salty). 
Glauber main procedure Input: Reacts and Has-Quality predicate sets Output: On success returns a generalized version of the Reacts predicate whose variables range over the equivalence classes and a new Class predicate which is like Has-Quality having a name-class instead of substance name: (Has-Quality {class-name} quality {value}) If there are no more substance names in the Reacts predicates then finish process the Reacts predicates with the Form-Class procedure process the result of the previous step with Determine-Quantifier go to step 3 Form-Class Input: the Reacts and Has-Quality predicate sets Output: a new substances class, a new Has-Quality and a new Reacts predicate set Count the number of occurrences of each quality {value} in the Has-Quality predicates Select the quality value with the largest number of occurrences, which substances are in the Reacts predicates Create a name for the class Generate a new Has-Quality predicate set removing all the predicates in Has-Quality with the selected quality {value} and adding the predicate (Has-Quality {class-name} quality {value}) to the Class predicates where class-name is the name obtained in step 3 Generate a new Reacts predicate set by replacing the name of the substance in the class formed in the step 2 by the name created in step 3 Create a new class extension by associating the name generated on step 3 with the set of all substances on the class selected on step 2 Determine-Quantifier Input: the Reacts, Has-Quality and Class (generated by Form-Class) predicate sets Output: An intentional quantified class corresponding to the extensional class generated by Form-Class, a new Reacts predicate set extended with the appropriate quantifier of the last discovered class received from Form-Class Universally quantify the rule to determine the class (Has-Quality {class-name} quality {value}) => (∀ class-name (Has-Quality {class-name} quality {value})) Generate Reacts predicates replacing each substance in the new class for its class-name in the Reacts predicates if all the predicates generated in the previous step are contained in the original set then quantify universally else quantify existentially References Chemistry software
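The Form-Class step can be paraphrased in a few lines of ordinary code. The sketch below is a loose Python rendering, not the original Lisp implementation: it groups the substances of the example facts by their most frequent taste value and rewrites the reaction facts in terms of the new class, which is the core of Form-Class as described above. The data structures and function names are my own simplifications.

```python
from collections import Counter

# The example facts, reduced to Python structures (reactants, products) and tastes.
reacts = [({"HCl", "NaOH"}, {"NaCl"}), ({"HCl", "KOH"}, {"KCl"}),
          ({"HNO3", "NaOH"}, {"NaNO3"}), ({"HNO3", "KOH"}, {"KNO3"})]
tastes = {"HCl": "Sour", "HNO3": "Sour", "NaOH": "Bitter", "KOH": "Bitter",
          "NaCl": "Salty", "NaNO3": "Salty", "KCl": "Salty", "KNO3": "Salty"}

def form_class(reacts, tastes, class_name):
    """One Form-Class pass: pick the most common taste among substances that
    appear in the reaction facts, collect those substances into a class, and
    substitute the class name for its members in the reaction facts."""
    in_reacts = {s for inputs, outputs in reacts for s in inputs | outputs}
    taste_counts = Counter(tastes[s] for s in in_reacts)
    chosen_taste, _ = taste_counts.most_common(1)[0]
    members = {s for s in in_reacts if tastes[s] == chosen_taste}
    substitute = lambda group: {class_name if s in members else s for s in group}
    new_reacts = [(substitute(i), substitute(o)) for i, o in reacts]
    return chosen_taste, members, new_reacts

taste, members, new_reacts = form_class(reacts, tastes, "salt")
print(taste, members)   # Salty, with the four salts collected into one class
print(new_reacts[0])    # ({'HCl', 'NaOH'}, {'salt'})
```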
Glauber
Chemistry
1,520
14,907,107
https://en.wikipedia.org/wiki/Single-subject%20design
In design of experiments, single-subject design or single-case research design is a research design most often used in applied fields of psychology, education, and human behaviour, in which the subject serves as their own control rather than being compared with another individual or group. Researchers use single-subject designs because they are sensitive to individual differences, whereas group designs are sensitive to group averages. The logic behind single-subject designs rests on 1) prediction, 2) verification, and 3) replication. The baseline data predict behaviour by affirming the consequent. Verification refers to demonstrating that the baseline responding would have continued had no intervention been implemented. Replication occurs when a previously observed behaviour change is reproduced. A study using a single-subject design can include a large number of subjects; because each subject serves as their own control, it is still a single-subject design. These designs are used primarily to evaluate the effect of a variety of interventions in applied research.

Design standards

Effect size
Although there are no standards on the specific statistics required for effect size calculation, it is best practice to include an effect size estimate.

Reporting standards
When reporting findings obtained through single-subject designs, specific guidelines are used for standardization and to ensure completeness and transparency.

Types of single-subject designs

Reversal design
Reversal design involves repeated measurement of behaviour in a given setting during three consecutive phases (ABA): baseline, intervention, and return to baseline. Variations include extending the ABA design with repeated reversals (ABAB) and including multiple treatments (ABCABC). AB designs, or reversal designs with no return to baseline, are not considered experimental. Functional control cannot be determined in AB designs because there is no replication.

Alternating treatments design
Alternating treatments design (ATD) compares the effects of two or more independent variables on the dependent variable. Variations include a no-treatment control condition and a final best-treatment verification phase.

Multiple baseline design
Multiple baseline design involves beginning simultaneous baseline measurement on two or more behaviours, settings, or participants. The independent variable (IV) is implemented on one behaviour, setting, or participant, while baseline continues for all others. Variations include the multiple probe design and the delayed multiple baseline design.

Changing criterion design
Changing criterion designs are used to evaluate the effects of an IV on the gradual improvement of a behaviour already in the participant's repertoire.

Interpretation of data
In order to determine the effect of the independent variable on the dependent variable, the researcher graphs the collected data and visually inspects the differences between phases. If there is a clear distinction between baseline and intervention, and the data then return to the same trend/level during reversal, a functional relation between the variables is inferred. Sometimes, visual inspection of the data demonstrates results that statistical tests fail to find. Features assessed during visual analysis include:

Level. The overall average (mean) of the outcome measures within a phase.
Trend. The slope of the best-fitting straight line for the outcome measures within a phase.
Variability. The range, variance, or standard deviation of the outcome measures about the best-fitting line.
Immediacy of Effect. The change in level between the last three data points in one phase and the first three data points of the next.
Overlap. The proportion of data from one phase that overlaps with data from the previous phase.
Consistency of Data Patterns. The extent to which there is consistency in the data patterns from phases with the same conditions.

Limitations
Research designs are traditionally preplanned, so that most of the details about to whom and when the intervention will be introduced are decided before the study begins. In single-subject designs, however, these decisions are often made as the data are collected. In addition, there are no widely agreed-upon rules for altering phases, so conflicting ideas can emerge as to how a single-subject experiment should be conducted. The major criticisms of single-subject designs are:

Carry-over effects: Results from the previous phase carry over into the next phase.
Order effects: The ordering (sequence) of the interventions or treatments affects the results.
Irreversibility: In some withdrawal designs, once a change in the independent variable occurs, the dependent variable is affected; this cannot be undone by simply removing the independent variable.
Ethical problems: Withdrawal of treatment in the withdrawal design can at times present ethical and feasibility problems.

History
Historically, single-subject designs have been closely tied to the experimental analysis of behavior and applied behavior analysis.

See also
N of 1 trial
Single-subject research
Segmented regression
Meta-analysis

References

Further reading
Ledford, Jennifer R. & Gast, David L. (2018). Single Subject Research Methodology in Behavioral Sciences: Applications in Special Education and Behavioral Sciences. Routledge.

Design of experiments Science experiments Behaviorism
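A minimal sketch, in Python and with hypothetical session data, of how the visual-analysis quantities described above (level, trend, and overlap) might be computed for a baseline and an intervention phase; the function names are illustrative and not part of any standard package.

import statistics

def level(phase):
    # Level: the mean of the outcome measures within a phase.
    return statistics.mean(phase)

def trend(phase):
    # Trend: slope of the least-squares line fitted to the phase data over session index.
    n = len(phase)
    x_bar, y_bar = (n - 1) / 2, statistics.mean(phase)
    num = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(phase))
    den = sum((x - x_bar) ** 2 for x in range(n))
    return num / den

def overlap(previous, current):
    # Overlap: proportion of points in the current phase falling within the range of the previous phase.
    lo, hi = min(previous), max(previous)
    return sum(lo <= y <= hi for y in current) / len(current)

baseline = [12, 14, 13, 15, 14]        # hypothetical baseline scores
intervention = [18, 21, 22, 24, 25]    # hypothetical intervention scores

print(level(baseline), level(intervention))   # 13.6 and 22.0
print(trend(intervention))                    # 1.7 (rising trend)
print(overlap(baseline, intervention))        # 0.0, i.e. no overlap with the baseline range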
Single-subject design
Biology
965
53,571,864
https://en.wikipedia.org/wiki/Mary%20Garson
Mary Jean Garson is a British-Australian organic chemist and academic. She is an Emerita Professor in the School of Chemistry and Molecular Biosciences at the University of Queensland.

Early life
Garson was born in Rugby, England, the daughter of an engineer and a botanist. She took her B.A. with Honours from Newnham College, University of Cambridge, in 1974, focusing on the natural sciences and specializing in chemistry. She obtained an MA in Natural Sciences and took her PhD in organic chemistry from Cambridge in 1977.

Career
Garson won a Royal Society postdoctoral fellowship after her PhD, undertaking research in Rome, Italy, from 1977 to 1978. She continued her research at New Hall, Cambridge, on a college research fellowship from 1978 to 1981. She worked as a medicinal chemist from 1981 to 1983 at Smith Kline and French Research Ltd in Welwyn, England. Garson won a Queen Elizabeth II Research Fellowship from James Cook University (1983–1986), based in the Townsville region, to research the bioactive organic chemicals in marine organisms. In Townsville, she undertook dive training to study on the Great Barrier Reef. Garson then took a teaching/research position as the first female academic in chemistry at the University of Wollongong, before moving to the University of Queensland as a lecturer in 1990. She was promoted to Senior Lecturer in 1992 and Reader in 1998. She researches and publishes on the structure, biosynthesis and function of natural products, especially those from marine invertebrates and microorganisms. She also researches the chemistry of South East Asian medicinal plants. Garson was promoted to Professor in the School of Chemistry and Molecular Biosciences in 2006 and served as Deputy Head of the School from 2005 to 2009. Since 2021, she has been an Emerita Professor of Chemistry at the university.

Awards and honours
2009 – Our Women, Our State (Queensland Government) – Highly Commended
2011 – Leighton Medal of the Royal Australian Chemical Institute, in recognition of her contributions and leadership to the chemistry community, within Australia and overseas
2013 – Distinguished Woman in Chemistry or Chemical Engineering award of the International Union of Pure and Applied Chemistry
2014 – named as one of "175 Faces of Chemistry" by the Royal Society of Chemistry, UK
2017 – inaugural Margaret Sheil Women in Chemistry Leadership award of the Royal Australian Chemical Institute
2018 – Royal Society of Chemistry, Australasian lecturer (by invitation)
2019 – Member of the Order of Australia (AM) in the Australia Day Honours for "significant service to education, particularly to organic chemistry, and as an advocate for women in science"
2023 – named as a Distinguished Fellow of the Royal Australian Chemical Institute
2024 – elected Fellow of the Australian Academy of Science

A species of marine flatworm discovered at Heron Island, Maritigrella marygarsonae, is named for her.
Memberships
President, Royal Australian Chemical Institute (Queensland Division)
Chair, International Relations Committee of RACI
Member, National Committee for Chemistry
Executive Secretary, World Chemistry Congress/IUPAC General Assembly (2001)
Chair, Board of Australian Science Innovations
Organiser, Chemistry-Biotechnology Symposium at World Chemistry Congress (Torino, 2007); 27th International Symposium on the Chemistry of Natural Products (Brisbane, 2011)
Organiser, Women Sharing a Chemical Moment in Time, International Year of Chemistry (2011)
Leadership roles in Division III (organic and biomolecular) of the International Union of Pure and Applied Chemistry (IUPAC) as Titular Member (2006–2007), Secretary (2008–2011), President-elect (2012–2013), Division President and Bureau Member (2014–2015), then as Past-President (2016–2017)
Elected to membership of the Bureau of the International Union of Pure and Applied Chemistry (2018–2021)
Co-chair, IUPAC100 (centennial) Management Committee (2016–2019)
Co-convenor of the Women's Global Breakfast networking event, held in over 100 countries since 2019; the theme for the 2024 breakfast event on February 27 is "Catalyzing Change in Chemistry"
Incoming Vice-President/President-elect, and Chair of the Science Board, of the International Union of Pure and Applied Chemistry for 2024–2025

References

1953 births Living people 21st-century Australian chemists Australian women chemists Organic chemists Alumni of Newnham College, Cambridge Academic staff of the University of Queensland Members of the Order of Australia English emigrants to Australia Fellows of the Australian Academy of Science
Mary Garson
Chemistry
902
5,598,607
https://en.wikipedia.org/wiki/Chemical%20%26%20Engineering%20News
Chemical & Engineering News (C&EN) is a weekly news magazine published by the American Chemical Society (ACS), providing professional and technical news and analysis in the fields of chemistry and chemical engineering. It includes information on recent news and research in these fields, career and employment information, business and industry news, government and policy news, funding in these fields, and special reports. The magazine is available to all members of the American Chemical Society. The ACS also publishes C&EN Global Enterprise (), an online resource that republishes articles from C&EN for easier online access to content. History The magazine was established in 1923, and has been on the internet since 1998. The editor-in-chief is Nick Ishmael Perkins. Abstracting and indexing The magazine is abstracted and indexed in Chemical Abstracts Service, Science Citation Index, and Scopus. References External links American Chemical Society academic journals Chemical engineering journals Engineering magazines Magazines established in 1923 Magazines published in Washington, D.C. Professional and trade magazines Science and technology magazines published in the United States Weekly magazines published in the United States
Chemical & Engineering News
Chemistry,Engineering
226
2,302,765
https://en.wikipedia.org/wiki/Cicutoxin
Cicutoxin is a naturally-occurring poisonous chemical compound produced by several plants from the family Apiaceae including water hemlock (Cicuta species) and water dropwort (Oenanthe crocata). The compound contains polyene, polyyne, and alcohol functional groups and is a structural isomer of oenanthotoxin, also found in water dropwort. Both of these belong to the C17-polyacetylenes chemical class. It causes death by respiratory paralysis resulting from disruption of the central nervous system. It is a potent, noncompetitive antagonist of the gamma-aminobutyric acid (GABA) receptor. In humans, cicutoxin rapidly produces symptoms of nausea, emesis and abdominal pain, typically within 60 minutes of ingestion. This can lead to tremors, seizures, and death. LD50(mouse; i.p.) ~9 mg/kg History Johann Jakob Wepfer's book Cicutae Aquaticae Historia Et Noxae Commentario Illustrata was published in 1679; it contains the earliest published report of toxicity associated with Cicuta plants. The name cicutoxin was coined by Boehm in 1876 for the toxic compound arising from the plant Cicuta virosa, and he also extracted and named the isomeric toxin oenanthotoxin from Oenanthe crocata. A review published in 1911 examined 27 cases of cicutoxin poisoning, 21 of which had resulted in death – though some of these cases involved deliberate poisoning. This review included a case where a family of five used Cicuta extracts as a topical treatment for itching, resulting in the deaths of two children, a report that suggests that cicutoxin may be absorbed through the skin. A review from 1962 examined 78 cases, 33 of which resulted in death, and cases of cicutoxin poisoning continue to occur: A child used the stem of a plant as a toy whistle and died of cicutoxin poisoning A 14-year-old boy died 20 hours after consuming a 'wild carrot' in 2001 In 1992, two brothers were foraging for wild ginseng and found a hemlock root. One of them ate three bites of the supposed ginseng root and the other one ingested one bite. The first brother died three hours later while the second made a full recovery with supportive medical care after experiencing seizures and delirium. All plants from the genus Cicuta contain cicutoxin. These plants are found in swampy, wet habitats in North America and parts of Europe. The Cicuta plants are often mistaken for edible roots such as parsnip, wild carrot or wild ginseng. All parts of the Cicuta plants are poisonous, though the root is the most toxic part of the plant and toxin levels are highest in spring – ingestion of a 2–3 cm portion of root can be fatal to adults. In one reported incident, 17 boys ingested parts of the plant, with only those who consumed the root experiencing seizures whilst those who consumed only leaves and flowers merely became unwell. The toxicity of the plants depends on various factors, such as seasonal variation, temperature, geographical location and soil conditions. The roots remain toxic even after drying. Plants containing cicutoxin Cicutoxin is found in five species of water hemlock, all belonging to the family Apiaceae. These include all four species in the genus Cicuta and one species from the genus Oenanthe: the bulblet-bearing water hemlock, C. bulbifera; the Douglas water hemlock, C. douglasii; the spotted water hemlock or spotted cowbane, C. maculata; Mackenzie's water hemlock, C. virosa; and, the water dropwort, O. crocata. Cicutoxin is found in all parts of these plants, along with several other C17 polyacetylenes. C. 
virosa, for example, produces isocicutoxin, a geometric isomer of cicutoxin, while O. crocata contains the toxin oenanthotoxin, a structural isomer of cicutoxin. Cicuta plants also produce multiple congeners of cicutoxin, such as virol A and virol C.

Chemistry
Building on Boehm's work, Jacobsen reported the first isolation of pure cicutoxin, as a yellowish oil, in 1915. Its chemical structure was not determined until 1953, however, when it was shown to have the molecular formula C17H22O2 and to be an aliphatic, highly unsaturated alcohol with two triple bonds conjugated with three double bonds, and two hydroxyl groups. The first synthesis of cicutoxin was reported in 1955. Though the overall yield was only 4% and the product was the racemic mixture, the synthesis has been described as "a significant accomplishment" given that it was achieved "without the benefit of modern coupling reactions". The absolute configuration of the naturally-occurring form of cicutoxin was reported in 1999 to be (R)-(−)-cicutoxin, systematically named (8E,10E,12E,14R)-heptadeca-8,10,12-triene-4,6-diyne-1,14-diol. Outside of a plant, cicutoxin breaks down when exposed to air, light, or heat, making it difficult to handle. Cicutoxin has a long carbon skeleton and few hydrophilic substituents, which gives it hydrophobic characteristics. Hydrophobic and/or small molecules can be absorbed through the skin. Research has shown that cicutoxin will pass through the skin of frogs, and the experience of the family who used a Cicuta plant as a topical antipruritic strongly suggests that the compound is able to pass through human skin.

Laboratory synthesis
The first total synthesis of racemic cicutoxin was published in 1955 and reported that this racemate was about twice as active as the naturally-occurring enantiomer. A complete synthesis of the natural product, (R)-(–)-cicutoxin, in four linear steps was reported in 1999, starting from three key fragments: (R)-(–)-1-hexyn-3-ol (8), 1,4-diiodo-1,3-butadiene (9), and THP-protected 4,6-heptadiyn-1-ol (6). (R)-(–)-1-Hexyn-3-ol (8) is a known compound and was obtained by Corey–Bakshi–Shibata reduction of 1-hexyn-3-one. 1,4-Diiodo-1,3-butadiene (9) is also a known compound and is readily available by dimerization of acetylene accompanied by addition of iodine in the presence of a platinum(IV) catalyst and sodium iodide. The last key fragment, THP-protected 4,6-heptadiyn-1-ol (6), is likewise a known compound. The first step is the Sonogashira coupling of compounds 8 and 9, which gave the dienynol (10) in 63 percent yield. The second step is a palladium-catalyzed coupling reaction: coupling of compounds 6 and 10 gives the 17-carbon framework (11) in 74 percent yield. Compound 11 already has the stereocentre in place and needs only a few structural changes, accomplished in the third and fourth steps. The third step is the reduction of the C5 triple bond in compound 11, which was accomplished using the reagent Red-Al. The last step is the removal of the THP protecting group; when the THP group is removed and a hydrogen is bound to the oxygen, (R)-(–)-cicutoxin is formed. These four steps constitute the full synthesis of cicutoxin and give an overall yield of 18 percent.

Biochemistry
Cicutoxin is known to interact with the GABAA receptor, and it has also been shown to block the potassium channel in T lymphocytes. A similar effect in which potassium channels in neurons are blocked could account for the toxic effect on the nervous system. The interactions are explained in Mechanism of action.
Mechanism of action
The exact mechanism of action of cicutoxin is not known, even though it is well established as a violent toxin; the mechanism has remained unclear largely because of the chemical instability of cicutoxin, but studies have provided some evidence for one. Cicutoxin is a noncompetitive gamma-aminobutyric acid (GABA) antagonist in the central nervous system (CNS). GABA normally binds to the beta domain of the GABAA receptor and activates the receptor, which causes a flow of chloride across the membrane. Cicutoxin binds to the same site as GABA; because of this, the receptor is not activated by GABA, the pore of the receptor does not open and chloride cannot flow across the membrane. Binding of cicutoxin to the beta domain also blocks the chloride channel. Both effects of cicutoxin on the GABAA receptor cause a constant depolarization. This causes hyperactivity in cells, which leads to seizures. Some studies also suggest that cicutoxin increases the duration of neuronal repolarization in a dose-dependent manner; the toxin could increase the duration of the repolarization up to sixfold at 100 μmol/L. The prolonged action potentials may cause higher excitatory activity. It has been demonstrated that cicutoxin also blocks potassium channels in T-lymphocytes and inhibits the proliferation of the lymphocytes. This has made it a substance of interest in research into a medicine against leukemia.

Metabolism
It is unknown how the body eliminates cicutoxin. There is evidence that it has a long half-life in the body: a patient who was admitted to a hospital after eating a root of a Cicuta plant was hospitalised for two days and still had a fuzzy feeling in his head two days after leaving the hospital. There is also the case of a sheep (discussed in Effects on animals) that fully recovered after seven days.

Poisoning

Symptoms
The first signs of cicutoxin poisoning start 15–60 minutes after ingestion and are vomiting, convulsions, widened pupils, salivation and excess sweating; it may also cause coma. Other described symptoms are cyanosis, amnesia, absence of muscle reflexes, metabolic acidosis, cardiovascular changes that may cause heart problems, and central nervous system effects that manifest as convulsions and either an overactive or underactive heart. Because of the overactive nervous system, respiratory failure occurs, which may cause suffocation and accounts for most of the deaths. Dehydration from water loss due to vomiting can also occur. If untreated, the kidneys can also fail, causing death.

Treatment
The adverse effects of cicutoxin poisoning are of a gastrointestinal or cardiac nature. With no antidote known, only symptomatic treatments are available, though supportive treatments do substantially improve survival rates. Treatments used include the administration of activated charcoal within 30 minutes of ingestion to reduce the uptake of poison, maintaining open airways to prevent suffocation, rehydration to address the dehydration caused by vomiting, and administration of benzodiazepines (which enhance the effect of GABA on the GABAA receptor) or barbiturates to reduce seizures.

Effects on animals
The LD50 of cicutoxin for mice is 2.8 mg/kg (10.8 μmol/kg). In comparison, the LD50 of virol A is 28.0 mg/kg (109 μmol/kg) and that of isocicutoxin is 38.5 mg/kg (149 μmol/kg). Cattle usually ingest parts of Cicuta plants in spring, while grazing on new growth around ditches and rivers where these plants grow.
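(As a quick check on the unit conversions quoted above, a short Python sketch converting the LD50 mass doses into molar doses, using the molecular formula C17H22O2 given earlier for cicutoxin, which its geometric isomer isocicutoxin shares; virol A is omitted because its formula is not given in this article.)

# Approximate molar mass of C17H22O2 (cicutoxin and its geometric isomer isocicutoxin).
MOLAR_MASS = 17 * 12.011 + 22 * 1.008 + 2 * 15.999   # about 258.4 g/mol

def mg_per_kg_to_umol_per_kg(dose_mg_per_kg, molar_mass_g_per_mol=MOLAR_MASS):
    # Convert an LD50 quoted in mg/kg into micromoles per kilogram.
    return dose_mg_per_kg / molar_mass_g_per_mol * 1000.0

print(mg_per_kg_to_umol_per_kg(2.8))    # cicutoxin: about 10.8 umol/kg
print(mg_per_kg_to_umol_per_kg(38.5))   # isocicutoxin: about 149 umol/kg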
Animals display effects of cicutoxin poisoning similar to those in humans, but without vomiting (which can lead to increased lethality) – recorded symptoms include salivation, seizures, frequent urination and defecation, and degeneration of skeletal and cardiac muscles. Seizures are usually short, less than a minute per seizure, and occur at intervals of 15 to 30 minutes for around two hours. Ewes recover more slowly after eating cicutoxin-containing tubers, taking up to seven days to recover fully. Research studies on ewes have shown that skeletal and cardiac myodegeneration (damage of muscle tissues) only occurs after a dose sufficient to induce symptoms of intoxication is administered. Analysis of the animals' blood showed elevated serum enzymes that indicate muscle damage (LDH, AST and CK values). At necropsy, the ewe's heart had multifocal pale areas and pallor of the long digital extensor muscle groups; by contrast, a ewe given a lethal dose of cicutoxin-containing tubers had only microscopic lesions. The number and duration of seizures had a direct effect on the skeletal and cardiac myodegeneration and the amount of serum change. Ewes given up to 2.5 times the lethal dose along with medications to treat symptoms of cicutoxin poisoning recovered, demonstrating that symptomatic treatment can be life-saving. Medications administered included sodium pentobarbital (at 20–77 mg/kg intravenously) at the first seizure to control seizure activity, atropine (75–150 mg) to reduce salivary excretion during anesthesia, and Ringer's lactate solution until the ewes recovered.

Medical use
Cicutoxin has been shown to have anti-leukemia properties, as it inhibits the proliferation of lymphocytes. It has also been investigated for antitumor activity, where it was shown that a methanolic extract of C. maculata demonstrated significant cytotoxicity in the 9 KB (human nasopharyngeal carcinoma) cell culture assay.

References

Additional references

Neurotoxins Plant toxins Conjugated enynes Primary alcohols Secondary alcohols GABAA receptor negative allosteric modulators Convulsants Conjugated diynes Potassium channel blockers
Cicutoxin
Chemistry
3,033
24,979,660
https://en.wikipedia.org/wiki/MAREC
The MAtrixware REsearch Collection (MAREC) is a standardised patent data corpus available for research purposes. MAREC seeks to represent patent documents of several languages in order to answer specific research questions. It consists of 19 million patent documents in different languages, normalised to a highly specific XML schema. MAREC is intended as raw material for research in areas such as information retrieval, natural language processing or machine translation, which require large amounts of complex documents. The collection contains documents in 19 languages, the majority being English, German and French, and about half of the documents include full text. In MAREC, the documents from different countries and sources are normalised to a common XML format with a uniform patent numbering scheme and citation format. The standardised fields include dates, countries, languages, references, person names, and companies as well as subject classifications such as IPC codes. MAREC is a comparable corpus, where many documents are available in similar versions in other languages. A comparable corpus can be defined as consisting of texts that share similar topics – news text from the same time period in different countries, while a parallel corpus is defined as a collection of documents with aligned translations from the source to the target language. Since the patent document refers to the same “invention” or “concept of idea” the text is a translation of the invention, but it does not have to be a direct translation of the text itself – text parts could have been removed or added for clarification reasons. The 19,386,697 XML files measure a total of 621 GB and are hosted by the Information Retrieval Facility. Access and support are free of charge for research purposes. Use Cases MAREC is used in the Patent Language Translations Online (PLuTO) project. References External links User guide and statistics Information Retrieval Facility Corpora Information retrieval systems Machine translation Natural language processing XML
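As an illustration of how such a normalised corpus might be processed, the following Python sketch walks a directory of XML files and counts documents per language. The element and attribute names assumed here (a directory called "marec-sample/" and a "lang" attribute on each document's root element) are hypothetical stand-ins; the real MAREC schema defines its own field names, which are not reproduced in this article, so the lookup would need to be adapted.

import xml.etree.ElementTree as ET
from collections import Counter
from pathlib import Path

def count_languages(corpus_dir):
    # Count patent documents per language in a directory of XML files.
    # Assumes, hypothetically, that each root element carries a 'lang' attribute.
    counts = Counter()
    for path in Path(corpus_dir).rglob("*.xml"):
        root = ET.parse(path).getroot()
        counts[root.get("lang", "unknown")] += 1
    return counts

if __name__ == "__main__":
    print(count_languages("marec-sample/"))   # hypothetical sample directory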
MAREC
Technology
375
4,786,254
https://en.wikipedia.org/wiki/Ogden%20tables
The Ogden tables are a set of statistical tables and other information for use in court cases in the UK. Their purpose is to make it easier to calculate future losses in personal injury and fatal accident cases. The tables take into account life expectancy and provide a range of discount rates from -2.0% to 3.0% in steps of 0.5%. The discount rate is fixed by the Lord Chancellor under section 1 of the Damages Act 1996; as of 15 July 2019, this rate is -0.25%. The discount rate in Northern Ireland is -1.5%. The full and official name of the tables is Actuarial Tables with explanatory notes for use in Personal Injury and Fatal Accident Cases, but the unofficial name became common parlance following the Civil Evidence Act 1995, where this shorthand name was used as a subheading – Sir Michael Ogden QC having been the chairman of the Working Party for the first four editions.

History
The tables were first published in 1984. Section 10 of the Civil Evidence Act 1995 authorised their use in evidence in the UK "for the purpose of assessing, in an action for personal injury, the sum to be awarded as general damages for future pecuniary loss". They were first used by the House of Lords in Wells v. Wells in July 1999. The 7th edition of the tables changed the discount rate range (previously 0.0% to 5.0%, revised to -2.0% to 3.0%) to allow for a revision of the rate by the Lord Chancellor (under consideration as at 24 October 2011) and to provide for the implications of the case of Helmot v. Simon. The 8th edition was published in 2020 and updated in August 2022.

Using the Ogden tables
There are 28 tables of data in the Ogden tables. Table 1 (Males) and Table 2 (Females) are for life expectancy and loss for life. Tables 3 to 14 are for loss of earnings up to various retirement ages. Tables 15 to 26 are for loss of pension from various retirement ages. Table 27 is for discounting a sum due at a time in the future, and Table 28 is for a recurring loss over a period of time.

How to calculate life expectancy
To calculate life expectancy, use Table 1 (for males) or Table 2 (for females) and take the data in the 0% column. So for a 45-year-old female, using Table 2 you would look down the first column to find 45 and then across to the 0% column, which gives a figure of 43.93. Where the age is not a whole number, e.g. a female aged 45.75 years, you take the figure for 45 years (43.93) and the figure for 46 years (42.87) and interpolate between the two: (46 − 45.75) × 43.93 + (45.75 − 45) × 42.87 = 43.14 years.

How to calculate the multiplier for a lifetime loss
If the claimant is to suffer a loss that will last their entire life, use Table 1 (for males) or Table 2 (for females) and take the data in the 2.5% column. So for a 50-year-old male, using Table 1 you would look down the first column to find 50 and then across to the 2.5% column, which gives a figure of 22.69.

How to calculate the value of a single loss in the future
If the claimant needs to pay for something in the future, the present value can be worked out using Table 27. Look up the period in the future in the first column and then across to the 2.5% column for the multiplier. For example, a purchase required in 10 years' time would need to be multiplied by 0.7812.

How to calculate the multiplier for a loss over a period
If the claimant has a recurring loss over a period of, say, 15 years, use Table 28, looking up 15 in the first column and then across to the 2.5% column, which gives a multiplier of 12.54.
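A short Python sketch of the linear interpolation and table look-ups described above. The table values used are only the handful quoted in the text, so this is illustrative rather than a reproduction of the actual Ogden tables.

# Table 2 (females), 0% column: the two entries quoted above.
TABLE_2_0PCT = {45: 43.93, 46: 42.87}

def interpolate(table, age):
    # Linear interpolation between whole-year entries of an Ogden table.
    lo = int(age)              # e.g. 45 for an age of 45.75
    hi = lo + 1
    frac = age - lo
    return (1 - frac) * table[lo] + frac * table[hi]

print(interpolate(TABLE_2_0PCT, 45.75))   # about 43.14 years, as in the worked example

# Table 27 discounting: a payment of 10,000 due in 10 years, 2.5% column.
table_27_factor = 0.7812                  # factor quoted above for 10 years
print(10_000 * table_27_factor)           # present value of about 7,812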
If the loss does not start until some time in the future, then you can combine Table 27 and Table 28 to give an overall multiplier. For example a loss over a period of 15 years that starts in 10 years time would have a Table 27 multiplier of 0.7812 and a Table 28 multiplier of 12.54 giving an overall multiplier of 9.80. References External links Compensation for injury and death (Ogden tables) - from UK Government Actuary's Department piCalculator - Complete Schedule of Loss and Reserving Tools for Personal Injury Claims Frenkels Forensics, Frenkels Calculator - Frenkels Calculator incorporating Ogden Calculator and Loss of Earnings Calculator Actuarial science Forensic statistics Medical malpractice
Ogden tables
Mathematics
1,014
1,955,842
https://en.wikipedia.org/wiki/Four-fermion%20interactions
In quantum field theory, fermions are described by anticommuting spinor fields. A four-fermion interaction describes a local interaction between four fermionic fields at a point. Local here means that it all happens at the same spacetime point. This might be an effective field theory or it might be fundamental. Relativistic models Some examples are the following: Fermi's theory of the weak interaction. The interaction term has a (vector minus axial) form. The Gross–Neveu model. This is a four-fermi theory of Dirac fermions without chiral symmetry and as such, it may or may not be massive. The Thirring model. This is a four-fermi theory of fermions with a vector coupling. The Nambu–Jona-Lasinio model. This is a four-fermi theory of Dirac fermions with chiral symmetry and as such, it has no bare mass. Nonrelativistic models A nonrelativistic example is the BCS theory at large length scales with the phonons integrated out so that the force between two dressed electrons is approximated by a contact term. In four space-time dimensions, such theories are not renormalisable. See also Oblique correction Peskin–Takeuchi parameter Quantum field theory
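For concreteness, the interaction terms mentioned above can be written schematically as follows (standard notation, shown as LaTeX; normalisation and sign conventions vary between references, so these expressions are indicative rather than quoted from a particular source).

% Fermi's current-current interaction with V - A (vector minus axial) currents:
\mathcal{L}_{\mathrm{Fermi}} = -\frac{G_F}{\sqrt{2}}\, J^{\mu} J_{\mu}^{\dagger},
\qquad J^{\mu} = \bar{\psi}\,\gamma^{\mu}\left(1 - \gamma^{5}\right)\psi

% A four-fermion contact term of Gross-Neveu type (scalar-scalar coupling):
\mathcal{L}_{\mathrm{GN}} = \bar{\psi}\, i\gamma^{\mu}\partial_{\mu}\psi
  + \frac{g^{2}}{2}\left(\bar{\psi}\psi\right)^{2}

% The Thirring interaction couples vector currents:
\mathcal{L}_{\mathrm{int}}^{\mathrm{Thirring}} = -\frac{g}{2}\left(\bar{\psi}\gamma^{\mu}\psi\right)\left(\bar{\psi}\gamma_{\mu}\psi\right)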
Four-fermion interactions
Physics
277
11,750,971
https://en.wikipedia.org/wiki/Geoportal
A geoportal is a type of web portal used to find and access geographic information (geospatial information) and associated geographic services (display, editing, analysis, etc.) via the Internet. Geoportals are important for effective use of geographic information systems (GIS) and a key element of a spatial data infrastructure (SDI). Geographic information providers, including government agencies and commercial sources, use geoportals to publish descriptions (geospatial metadata) of their geographic information. Geographic information consumers, professional or casual, use geoportals to search and access the information they need. Thus geoportals serve an increasingly important role in the sharing of geographic information and can help avoid duplicated effort, inconsistencies, delays, confusion, and wasted resources.

Background
The U.S. National Spatial Data Infrastructure (NSDI), started in 1994 (see OMB Circular A-16), is considered the earliest geoportal concept. The U.S. Federal Geographic Data Committee (FGDC) coordinated development of the Federal Geographic Data Clearinghouse (or NSDI Clearinghouse Network), the first large geoportal. It has many distributed catalogs that can be searched via a client interface. First released in 2003, the Geospatial One-Stop (GOS) geoportal was developed as part of a U.S. e-government initiative. Unlike the NSDI Clearinghouse Network, GOS was built around a centralized metadata catalog database, with an architecture that links users to data providers through a Web-based geoportal. The user of GOS may employ a simple Web browser (thin client) or may interface directly with a GIS (thick client). In September 2011, GOS was retired and the content it included by then became part of the broader open data site (Geo.)Data.gov. At the same time, the United States federal government launched the Geospatial Platform, which represents a shift from focusing on cataloging references to resources, to providing shared web services for nationally significant datasets, APIs for developers, and end-user applications (built on those web services and APIs). More recently, there has been a proliferation of geoportals for sharing geographic information based on region or theme. Examples include the INSPIRE geoportal (Infrastructure for Spatial Information in the European Community, established in 2007), the NatCarb geoportal, which provides geographic information concerning carbon sequestration in the United States, and UNSDI, the United Nations Spatial Data Infrastructure. Modern web-based geoportals include direct access to raw data in multiple formats, complete metadata, online visualization tools so users can create maps with data in the portal, automated provenance linkages across users, datasets and created maps, commenting mechanisms to discuss data quality and interpretation, and sharing or exporting created maps in various formats. Open portals allow user contribution of datasets as well. Geoportals also form a key component of commercial cloud-based GIS platforms, providing a library of geographic data that users can employ with online GIS tools or desktop GIS software. Google Earth Engine is a cloud-based image processing platform that includes a portal to several petabytes of satellite imagery. Esri's ArcGIS Online, with its Living Atlas Geoportal, provides a large volume of data covering a wide variety of topics. Esri also sells Portal for ArcGIS as part of its ArcGIS Enterprise server software, which enables institutions to create their own geoportals.
See also Georeference List of GIS data sources National Mapping Agency#List of national mapping agencies Spatial Data Infrastructure References Sources Fu, P., and J. Sun. 2010. Web GIS: Principles and Applications. ESRI Press. Redlands, CA. . Goodchild, M.F., P. Fu, and P.M. Rich. 2007. Geographic information sharing: the case of the Geospatial One-Stop portal. Annals of the Association of American Geographers 97(2):250-266. Maguire, D.J., and P.A. Longley. 2005. The emergence of geoportals and their role in spatial data infrastructures. Computers, Environment and Urban Systems 29: 3-14. Tang, W. and Selwood, J. 2005. Spatial Portals: Gateways to Spatial Information. ESRI Press, Redlands, CA. Geographic data and information Web portals
Geoportal
Technology
921
3,375,398
https://en.wikipedia.org/wiki/Fatsuit
A fatsuit, also known as a fat suit or a fat-suit, is a bodysuit-like undergarment used to thicken the appearance of an actress or actor of light to medium build into an overweight or obese character, in conjunction with prosthetic makeup. Fatsuits worn by characters are either deliberately visible or mainly concealed. Most are intended as unseen body padding beneath a costume (e.g., Rosemary Shanahan in Shallow Hal, and Sherman Klump in The Nutty Professor), others appear as realistic flesh and are viewed directly (e.g., Fat Bastard in Austin Powers, and Les Grossman's hands in Tropic Thunder). A fatsuit is often used to provide comedic effect, as in music videos for "Fat" by "Weird Al" Yankovic, "Marblehead Johnson" by The Bluetones, "Keine Lust" by Rammstein and "Way 2 Sexy" by Drake, and the episode "The Cooper Extraction" of The Big Bang Theory. Experience of obesity Fatsuits may also be used to impart the experience of being obese to the wearer, not just the appearance of obesity to their audience. The suit in this case is weighted, as well as padded. Where the intention is to impart the experience of being seen as overweight in a community, its appearance must also be realistic and so a fatsuit rather than just a weight belt is needed. Several celebrities noted for their slimness have worn such garments and recorded their, and others', reactions as documentary of social attitudes to weight. See also Bodysuit Prosthetic makeup References External links Costume design Undergarments
Fatsuit
Engineering
345
34,377,958
https://en.wikipedia.org/wiki/Petri%20TTL
Petri TTL was a manual 35 mm SLR camera with TTL metering. It was built by Petri Camera Company, Japan, from 1974. It is unknown when the production stopped. Features The Petri TTL was a no-frills and very conservative camera. It was quite big and of heavy, all-metal construction. The only 'luxury' item found on the camera was a self-timer. The camera was fully manual, with a built-in CdS light meter. The battery was only for the metering circuit. The user needed to push a button on the front of the camera to close the aperture, and then set the aperture ring on the lens to a value where the meter needle would fit inside a marker ring. After this, the user could let go of the button, and have full light in the viewfinder to compose the picture. On release of the shutter, the aperture would close to the correct setting. As soon as the film was wound forwards, the light meter would switch on. It was not possible to switch it off manually, so the only way to conserve battery would be to delay advancing the film until the next exposure. It was not possible to attach a winder or motor to the camera. The shutter was a horizontal cloth-curtain focal-plane shutter with a speed range of 1/1 to 1/1000 second. As it was fully mechanical, the camera could be used even if the battery was dead. Flash sync was set for 1/60 second. The release button was placed in an uncommon spot, halfway down the front of the camera. If the user used the middle finger for the shutter release, it was possible to have an unusually solid grip on the housing. For reasons unknown, it did not activate the self-timer: the timer had a separate release button that became available when the self-timer arm was cocked. Even with the self-timer ready, the camera could be used in the normal mode. There were a wide range of lenses, bellows and other accessories available, both from Petri and from third-party producers. References Anonymous. "Petri TTL instruction book" ©Petri Camera Company, inc. Cameras by type Single-lens reflex cameras Products introduced in 1974
Petri TTL
Technology
459
46,306,761
https://en.wikipedia.org/wiki/Signal%20overspill
Signal overspill is the receiving of a broadcast signal outside of its geographical target area. Radio frequencies have no way of obeying geographical borders and licensing arrangements, and the extent of overspill depends on where broadcast transmitters are sited and their power. In addition to traditional transmitters, overspill occurs when the footprint of a satellite is greater than that needed to serve its target audience. Transmitters located near to international borders may overspill into a large part of a neighbouring country, for example the signal from Republic of Ireland broadcaster 2RN's Clermont Carn site can be picked up in a large swathe of Northern Ireland, and vice versa BBC broadcasts can be picked up in the Republic. Another example is signal overspill within the Indonesia–Malaysia–Singapore growth triangle. Overspill is usually welcomed by listeners and viewers as it gives them additional choices, when for example the Republic of Ireland began to migrate to a digital platform measures were put in place so that viewers in Northern Ireland could continue to receive the channels they had become used to. However, legally and often politically overspill can be unwelcome. Broadcast rights are sold on a per territory basis, and overspill can be seen as harmful to the commercial and intellectual property rights of creators. Politically some governments may be wary of their own populace becoming too familiar with the culture of a neighbouring country or territory and feel threatened by it. For example, in China prior to its reforms, television dramas from Hong Kong could be easily picked up in neighbouring Guangdong, and helped spread the desire for greater liberty and material goods in Guangdong. Cross border radio and television reception was an important influence on political developments in Germany during the Cold War. Overspill may have an accidental soft power effect, for example for many years listeners in the Netherlands were able to pick up BBC radio signals, listeners wanting to learn English would tune into the BBC leading to a British cultural influence on the Netherlands. Some nations will purposefully site transmitters and broadcast at a higher power than strictly necessary as a purposeful exercise in soft power. With regards to television, countries wishing to prevent this will choose a television encoding system incompatible to that of its neighbours. Overspill is used as a cover by stations, such as those known as border blaster and those of the radio périphérique, where the audience supposedly accidentally receiving a broadcast is actually the intended audience. The transmitters used are positioned and are very much more powerful than that needed to serve their licensed audience. See also Rimshot (broadcasting) References Broadcast engineering International relations Broadcast transmitters
Signal overspill
Engineering
517
57,253,073
https://en.wikipedia.org/wiki/Future%20Affordable%20Turbine%20Engine
The Future Affordable Turbine Engine (FATE) is a US Army program for a 5,000-10,000-shp class turboshaft/turboprop for Future Vertical Lift aircraft and its Joint Multi Role precursor. Design To extend range and endurance and to increase hot-and-high payload and performance, it should reduce BSFC by 35%, reduce production/maintenance costs by 45%, improve power-to-weight by 80% and design life by 20% to more than 6,000 hours. Development In November 2011, GE was selected for $45 million over five years, to develop technologies including advanced aerodynamics, cooling configurations and improved materials; and rig tests to validate innovative components, leading up to a full system demonstration. In 2017, following the successful tests of the engine’s compressor with the highest single-spool pressure ratio recorded, combustor with GE's most extensive use of CMCs allowing unprecedented high-temperature capability and weight reduction, and turbine rig tests, the first assembled engine completed testing after running 40 hours, reaching the program goals, before a second prototype began testing in 2018. See also Adaptive Versatile Engine Technology (ADVENT) Improved Turbine Engine Program List of aircraft engines Comparable engines Lycoming T55 (Boeing CH-47 Chinook) Rolls-Royce T406 (Bell Boeing V-22 Osprey) General Electric GE38/T408 (Sikorsky CH-53K) References Aircraft engines Turboshaft engines
Future Affordable Turbine Engine
Technology
300
61,677,522
https://en.wikipedia.org/wiki/Buffalo%20Lithia%20Water
Buffalo Lithia Water (later Buffalo Mineral Springs Water) was a brand of lithia water bottled in Buffalo Lithia Springs, Virginia. It was advertised with outsize medical claims, including the ability to treat fevers and nervous disorders. One ad promised a "Marvelous Efficiency in Gout, Rheumatism, [and] Gastrointestinal Dyspepsia." It was sold from the late 19th century to the 1950s. At the height of its popularity, it was available in approximately 20,000 groceries and pharmacies in Europe, Canada, and the United States. In 1910, the United States Attorney for the District of Columbia filed suit against the company for misbranding and false advertising, alleging that there was too little lithium in the water to qualify as a lithia water. Giving testimony in the case in 1912, a Dr. Collins testified that "for a person to obtain a therapeutic dose of lithium by drinking Buffalo Lithia Water, he would have to drink from 150,000 to 225,000 gallons of water per day." In 1917, the case was finally decided against Buffalo Lithia Water. The company was forced to change its name, rebranding itself as Buffalo Mineral Springs Water. Subsequent to the case, the company was sold. In the 1950s, the United States Army Corps of Engineers took possession of the property containing the original spring. During the building of the John H. Kerr Dam and the creation of Kerr Lake, the grounds were flooded. The bottling business was never re-opened. References Bottled water brands Soft drinks Patent medicines Lithia water
Buffalo Lithia Water
Chemistry
323
21,046,621
https://en.wikipedia.org/wiki/Display%20case
A display case (also called a showcase, display cabinet, shadow box, or vitrine) is a cabinet with one or often more transparent tempered glass (or plastic, normally acrylic for strength) surfaces, used to display objects for viewing. A display case may appear in an exhibition, museum, retail store, restaurant, or house. Often, labels are included with the displayed objects, providing information such as descriptions or prices. In a museum, the displayed cultural artifacts are normally part of the museum's collection, or are part of a temporary exhibition. In retail or a restaurant, the items are normally being offered for sale. A trophy case is used to display sports trophies or other awards. Description A display case may be freestanding on the floor, or built-in (usually a custom installation). Built-in displays may be mounted on the wall, may act as room partitions, or may be hung from the ceiling. On occasion, display cases are built into the floor, such as at the Museum of Sydney (in Sydney, Australia), where the remains of drains and privies are shown in their original context, along with other archeological artifacts. There are three types of freestanding showcases: counter, middle floor (mid-floor), and wall. Counter showcases are designed to display objects through one side (the "customer side") and have them accessible through the other (the "clerk side"). For this reason, the counter displays are most relevant for retail stores. The middle floor cases are built to display objects from all sides, and are meant to be placed in the middle of the room. Wall showcases are meant to be placed against a wall, where the products are displayed and accessed from the same side. These last two types are used heavily – not only by stores – but also by museums, schools, and especially in homes to showcase valuable items or collections. Display cases are typically made by specialist companies with a background in woodworking or welding, and come in standard sizes or often are custom order. Display cases are often designed with security in mind and are normally lockable. They also are made in variety of styles, shapes, and materials as available at a store fixture supplier. Conservation grade cases are used to display valuable artifacts in museums, libraries, and archives. These cases are designed to provide a tightly controlled environment free from chemical pollutants. They can ship pre-assembled or knockdown (in pieces to be assembled by the customer). Pre-assembled showcases are assembled (and usually tested) by the manufacturer, and are shipped ready-to-use. Knockdown showcases are usually lower in price and cheaper to ship, but may be of poorer quality than pre-assembled, and may arrive missing pieces. American artist Joseph Cornell constructed many shadow boxes during his career, with the works evoking a strong sense of nostalgia, decay, or loss. Use in the United States military By tradition, shadow boxes are typically presented to members of the United States Armed Forces upon retirement. These shadow boxes will usually contain the various medals and awards a person has earned through a military career, the flags of both their country and their military service branch, and their final badge of rank. A similar case, called a uniform display case, displays an entire military uniform with correct insignia placement. Gallery See also References External links Cabinets (furniture) Collections care Glass applications Museology Museum design
Display case
Engineering
690
1,126,990
https://en.wikipedia.org/wiki/Bruce%20Eckel
Bruce Eckel (born 1957) is a computer programmer, author, and consultant. Eckel's best known works are Thinking in Java and the two-volume series Thinking in C++, aimed at programmers wanting to learn the Java or C++ programming languages, respectively, particularly those with little experience of object-oriented programming. Eckel was a founding member of the ANSI/ISO C++ standard committee.

Views on computing
In 2011, Eckel extolled the virtues of the Go programming language as a natural successor to C++.

See also
ISO/IEC 14882 – C++ standard

References

Bibliography
Computer Interfacing with Pascal & C, Bruce Eckel. Eisys, 1988.
Using C++, Bruce Eckel. Osborne/McGraw-Hill, 1989.
C++ Inside & Out, Bruce Eckel. Osborne/McGraw-Hill, 1993.
Blackbelt C++: The Masters Collection, edited by Bruce Eckel. M&T/Holt, 1994.
Thinking in C++: Introduction to Standard C++, Volume One (2nd Edition), Bruce Eckel. Prentice-Hall PTR, 2000. Available for free download.
Thinking in C++, Vol. 2: Practical Programming, 2nd Edition, Bruce Eckel and Chuck Allison. Prentice-Hall PTR, 2003. Available for free download.
Thinking in Java, 4th Edition, Bruce Eckel. Prentice-Hall PTR, 2006.
"First Steps in Flex", Bruce Eckel and James Ward. MindView, Inc., 2008.
"Atomic Scala", Bruce Eckel and Dianne Marsh. MindView, LLC, 2013.
"On Java 8", Bruce Eckel. MindView LLC, 2017.
"Atomic Kotlin", Bruce Eckel & Svetlana Isakova. MindView LLC, 2021.

External links
Computing Thoughts – Eckel's blog
Interview with Bruce Eckel by Clay Shannon
List of other interviews with him
Latest news on upcoming book Atomic Kotlin
MindView – Eckel's company
Eckel on building corporate cultures that increase employee happiness and thus employee productivity, O'Reilly Open Source Convention, 2013

People in information technology American technology writers 1957 births Living people
Bruce Eckel
Technology
461
2,022,808
https://en.wikipedia.org/wiki/%C5%BDin%C4%8Dica
Žinčica (Slovak; also known by cognate names in Czech, Polish, and Ukrainian, and as zyntyca in Goralic) is a drink made of sheep milk whey, similar to kefir, consumed mostly in Slovakia and Poland. It is a by-product in the process of making bryndza cheese. Žinčica is fermented by the following lactic acid bacteria: Lactobacillus casei, Lactobacillus plantarum, Lactococcus lactis and Leuconostoc mesenteroides. Traditionally, this drink is served in a črpák, a wooden cup with a pastoral scene carved into the handle. Bryndzové halušky are typically served with žinčica. The word derives from the Romanian jintiță; the drink was carried by Vlach shepherds instead of water. See also Bryndza References Fermented dairy products Fermented drinks Polish cuisine Slovak drinks
Žinčica
Biology
195
14,762,060
https://en.wikipedia.org/wiki/Lumican
Lumican, also known as LUM, is an extracellular matrix protein that, in humans, is encoded by the LUM gene on chromosome 12.

Structure
Lumican is a proteoglycan Class II member of the small leucine-rich proteoglycan (SLRP) family that includes decorin, biglycan, fibromodulin, keratocan, epiphycan, and osteoglycin. Like the other SLRPs, lumican has a molecular weight of about 40 kilodaltons and has four major intramolecular domains: a signal peptide of 16 amino acid residues; a negatively-charged N-terminal domain containing sulfated tyrosine and disulfide bond(s); ten tandem leucine-rich repeats allowing lumican to bind to other extracellular components such as collagen; and a carboxyl terminal domain of 50 amino acid residues containing two conserved cysteines 32 residues apart. There are four N-linked sites within the leucine-rich repeat domain of the protein core that can be substituted with keratan sulfate. The core protein of lumican (like decorin and fibromodulin) is horseshoe-shaped. This enables it to bind to collagen molecules within a collagen fibril, thus helping keep adjacent fibrils apart.

Function
Lumican is a major keratan sulfate proteoglycan of the cornea but is ubiquitously distributed in most mesenchymal tissues throughout the body. Lumican is involved in collagen fibril organization and circumferential growth, corneal transparency, and epithelial cell migration and tissue repair. Corneal transparency is possible due to the exact alignment of collagen fibers by lumican (and keratocan) in the intrafibrillar space.

Clinical significance
Mice that have the lumican gene knocked out (Lum-/-) develop opacities of the cornea in both eyes and fragile skin. The lumican (LUM) gene was thought to be a candidate susceptibility gene for high myopia; however, a meta-analysis showed no association between LUM polymorphism and high myopia susceptibility in any of the genetic models studied. Lum knockout mice also have abnormal collagen in their heart tissue, with fewer and thicker fibrils. Mice deficient in both lumican and fibromodulin develop severe tendinopathy (tendon pathology), revealing the importance of these SLRPs in the development of correctly sized and aligned collagen fibers in tendon. Along with other extracellular matrix components, lumican expression was increased in equine flexor tendons six weeks after an injury. Lumican is present in the extracellular matrix of uterine tissues in fertile women. There is an increase of lumican during the transition from the proliferative to the secretory phase of the endometrium. In menopausal endometrial tissue, the level of lumican expression decreases; it is also low in pathological compared to normal endometrium. Lumican is highly expressed in pleural effusions (lung fluid) of patients with adenocarcinoma. Its expression was low in cancer cells but high in the extracellular matrix surrounding the tumor. Lumican expression was not associated with tumor grade or stage. In about half the patients with pancreatic ductal adenocarcinoma tested, lumican in the extracellular matrix around the tumor was associated with a reduction in metastatic recurrence after surgery and with a three-fold longer survival than patients without stromal lumican. As lumican can directly bind to and inhibit matrix metalloproteinase-14 (MMP14), lumican may limit tumor progression by preventing extracellular matrix collagen proteolysis by this enzyme. References Further reading Proteins
Lumican
Chemistry
825
38,112,346
https://en.wikipedia.org/wiki/49%20Orionis
49 Orionis is a single star in the equatorial constellation of Orion. It has the Bayer designation d Orionis, while 49 Orionis is the Flamsteed designation. This object is visible to the naked eye as a faint, white-hued star with an apparent visual magnitude of 4.80. It is located 141 light years away from the Sun based on parallax, but is drifting closer with a radial velocity of −5 km/s. In the past 49 Orionis was reported as a spectroscopic binary and an orbit was computed with a period of 445.74 days and an eccentricity of 0.549. But it was later determined to be single. This object is an A-type main-sequence star with a stellar classification of A4Vn, where the 'n' suffix indicates broadened "nebulous" lines caused by rapid rotation. It is around 284 million years old with a projected rotational velocity of 186 km/s. This spin is giving the star an oblate shape with an equatorial bulge that is an estimated 8% larger than the polar radius. The star has 1.8 times the mass of the Sun and double the Sun's radius. It is radiating 22 times the Sun's luminosity from its photosphere at an effective temperature of 8,416 K. References A-type main-sequence stars Orion (constellation) Orionis, d BD-07 1142 Orionis, 49 9187 037507 026563 1937
49 Orionis
Astronomy
312
27,865
https://en.wikipedia.org/wiki/Surface%20%28topology%29
In the part of mathematics referred to as topology, a surface is a two-dimensional manifold. Some surfaces arise as the boundaries of three-dimensional solid figures; for example, the sphere is the boundary of the solid ball. Other surfaces arise as graphs of functions of two variables; see the figure at right. However, surfaces can also be defined abstractly, without reference to any ambient space. For example, the Klein bottle is a surface that cannot be embedded in three-dimensional Euclidean space. Topological surfaces are sometimes equipped with additional information, such as a Riemannian metric or a complex structure, that connects them to other disciplines within mathematics, such as differential geometry and complex analysis. The various mathematical notions of surface can be used to model surfaces in the physical world. In general In mathematics, a surface is a geometrical shape that resembles a deformed plane. The most familiar examples arise as boundaries of solid objects in ordinary three-dimensional Euclidean space R3, such as spheres. The exact definition of a surface may depend on the context. Typically, in algebraic geometry, a surface may cross itself (and may have other singularities), while, in topology and differential geometry, it may not. A surface is a two-dimensional space; this means that a moving point on a surface may move in two directions (it has two degrees of freedom). In other words, around almost every point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles (ideally) a two-dimensional sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian). The concept of surface is widely used in physics, engineering, computer graphics, and many other disciplines, primarily in representing the surfaces of physical objects. For example, in analyzing the aerodynamic properties of an airplane, the central consideration is the flow of air along its surface. Definitions and first examples A (topological) surface is a topological space in which every point has an open neighbourhood homeomorphic to some open subset of the Euclidean plane E2. Such a neighborhood, together with the corresponding homeomorphism, is known as a (coordinate) chart. It is through this chart that the neighborhood inherits the standard coordinates on the Euclidean plane. These coordinates are known as local coordinates and these homeomorphisms lead us to describe surfaces as being locally Euclidean. In most writings on the subject, it is often assumed, explicitly or implicitly, that as a topological space a surface is also nonempty, second-countable, and Hausdorff. It is also often assumed that the surfaces under consideration are connected. The rest of this article will assume, unless specified otherwise, that a surface is nonempty, Hausdorff, second-countable, and connected. More generally, a (topological) surface with boundary is a Hausdorff topological space in which every point has an open neighbourhood homeomorphic to some open subset of the closure of the upper half-plane H2 in C. These homeomorphisms are also known as (coordinate) charts. The boundary of the upper half-plane is the x-axis. A point on the surface mapped via a chart to the x-axis is termed a boundary point. The collection of such points is known as the boundary of the surface which is necessarily a one-manifold, that is, the union of closed curves. 
On the other hand, a point mapped to above the x-axis is an interior point. The collection of interior points is the interior of the surface which is always non-empty. The closed disk is a simple example of a surface with boundary. The boundary of the disc is a circle. The term surface used without qualification refers to surfaces without boundary. In particular, a surface with empty boundary is a surface in the usual sense. A surface with empty boundary which is compact is known as a 'closed' surface. The two-dimensional sphere, the two-dimensional torus, and the real projective plane are examples of closed surfaces. The Möbius strip is a surface on which the distinction between clockwise and counterclockwise can be defined locally, but not globally. In general, a surface is said to be orientable if it does not contain a homeomorphic copy of the Möbius strip; intuitively, it has two distinct "sides". For example, the sphere and torus are orientable, while the real projective plane is not (because the real projective plane with one point removed is homeomorphic to the open Möbius strip). In differential and algebraic geometry, extra structure is added upon the topology of the surface. This added structure can be a smoothness structure (making it possible to define differentiable maps to and from the surface), a Riemannian metric (making it possible to define length and angles on the surface), a complex structure (making it possible to define holomorphic maps to and from the surface—in which case the surface is called a Riemann surface), or an algebraic structure (making it possible to detect singularities, such as self-intersections and cusps, that cannot be described solely in terms of the underlying topology). Extrinsically defined surfaces and embeddings Historically, surfaces were initially defined as subspaces of Euclidean spaces. Often, these surfaces were the locus of zeros of certain functions, usually polynomial functions. Such a definition considered the surface as part of a larger (Euclidean) space, and as such was termed extrinsic. In the previous section, a surface is defined as a topological space with certain properties, namely Hausdorff and locally Euclidean. This topological space is not considered a subspace of another space. In this sense, the definition given above, which is the definition that mathematicians use at present, is intrinsic. A surface defined as intrinsic is not required to satisfy the added constraint of being a subspace of Euclidean space. It may seem possible for some surfaces defined intrinsically to not be surfaces in the extrinsic sense. However, the Whitney embedding theorem asserts every surface can in fact be embedded homeomorphically into Euclidean space, in fact into E4: The extrinsic and intrinsic approaches turn out to be equivalent. In fact, any compact surface that is either orientable or has a boundary can be embedded in E3; on the other hand, the real projective plane, which is compact, non-orientable and without boundary, cannot be embedded into E3 (see Gramain). Steiner surfaces, including Boy's surface, the Roman surface and the cross-cap, are models of the real projective plane in E3, but only the Boy surface is an immersed surface. All these models are singular at points where they intersect themselves. The Alexander horned sphere is a well-known pathological embedding of the two-sphere into the three-sphere. 
The chosen embedding (if any) of a surface into another space is regarded as extrinsic information; it is not essential to the surface itself. For example, a torus can be embedded into E3 in the "standard" manner (which looks like a bagel) or in a knotted manner (see figure). The two embedded tori are homeomorphic, but not isotopic: They are topologically equivalent, but their embeddings are not. The image of a continuous, injective function from R2 to higher-dimensional Rn is said to be a parametric surface. Such an image is so-called because the x- and y-directions of the domain R2 are 2 variables that parametrize the image. A parametric surface need not be a topological surface. A surface of revolution can be viewed as a special kind of parametric surface. If f is a smooth function from R3 to R whose gradient is nowhere zero, then the locus of zeros of f does define a surface, known as an implicit surface. If the condition of non-vanishing gradient is dropped, then the zero locus may develop singularities. Construction from polygons Each closed surface can be constructed from an oriented polygon with an even number of sides, called a fundamental polygon of the surface, by pairwise identification of its edges. For example, in each polygon below, attaching the sides with matching labels (A with A, B with B), so that the arrows point in the same direction, yields the indicated surface. Any fundamental polygon can be written symbolically as follows. Begin at any vertex, and proceed around the perimeter of the polygon in either direction until returning to the starting vertex. During this traversal, record the label on each edge in order, with an exponent of −1 if the edge points opposite to the direction of traversal. The four models above, when traversed clockwise starting at the upper left, yield: sphere: ABB⁻¹A⁻¹; real projective plane: ABAB; torus: ABA⁻¹B⁻¹; Klein bottle: ABAB⁻¹. Note that the sphere and the projective plane can both be realized as quotients of the 2-gon, while the torus and Klein bottle require a 4-gon (square). The expression thus derived from a fundamental polygon of a surface turns out to be the sole relation in a presentation of the fundamental group of the surface with the polygon edge labels as generators. This is a consequence of the Seifert–van Kampen theorem. Gluing edges of polygons is a special kind of quotient space process. The quotient concept can be applied in greater generality to produce new or alternative constructions of surfaces. For example, the real projective plane can be obtained as the quotient of the sphere by identifying all pairs of opposite points on the sphere. Another example of a quotient is the connected sum. Connected sums The connected sum of two surfaces M and N, denoted M # N, is obtained by removing a disk from each of them and gluing them along the boundary components that result. The boundary of a disk is a circle, so these boundary components are circles. The Euler characteristic of M # N is the sum of the Euler characteristics of the summands, minus two: χ(M # N) = χ(M) + χ(N) − 2. The sphere S is an identity element for the connected sum, meaning that S # M = M. This is because deleting a disk from the sphere leaves a disk, which simply replaces the disk deleted from M upon gluing. Connected summation with the torus T is also described as attaching a "handle" to the other summand M. If M is orientable, then so is T # M. The connected sum is associative, so the connected sum of a finite collection of surfaces is well-defined. The connected sum of two real projective planes, P # P, is the Klein bottle K.
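The Euler characteristic bookkeeping above can be checked on a small worked example; the computations below are standard consequences of the formula just stated, not additional claims from the article.

```latex
% chi is additive under connected sum, up to the -2 correction:
%   chi(M # N) = chi(M) + chi(N) - 2.
% Two tori give the genus-2 surface:
\[
  \chi(T \# T) = \chi(T) + \chi(T) - 2 = 0 + 0 - 2 = -2,
\]
% consistent with chi = 2 - 2g for an orientable surface of genus g = 2.
% Two real projective planes give the Klein bottle:
\[
  \chi(P \# P) = \chi(P) + \chi(P) - 2 = 1 + 1 - 2 = 0 = \chi(K).
\]
```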
The connected sum of the real projective plane and the Klein bottle is homeomorphic to the connected sum of the real projective plane with the torus; in a formula, P # K = P # T. Thus, the connected sum of three real projective planes is homeomorphic to the connected sum of the real projective plane with the torus. Any connected sum involving a real projective plane is nonorientable. Closed surfaces A closed surface is a surface that is compact and without boundary. Examples of closed surfaces include the sphere, the torus and the Klein bottle. Examples of non-closed surfaces include an open disk (which is a sphere with a puncture), a cylinder (which is a sphere with two punctures), and the Möbius strip. A surface embedded in three-dimensional space is closed if and only if it is the boundary of a solid. As with any closed manifold, a surface embedded in Euclidean space that is closed with respect to the inherited Euclidean topology is not necessarily a closed surface; for example, a disk embedded in R3 that contains its boundary is a surface that is topologically closed but not a closed surface. Classification of closed surfaces The classification theorem of closed surfaces states that any connected closed surface is homeomorphic to some member of one of these three families: the sphere, the connected sum of g tori for g ≥ 1, the connected sum of k real projective planes for k ≥ 1. The surfaces in the first two families are orientable. It is convenient to combine the two families by regarding the sphere as the connected sum of 0 tori. The number g of tori involved is called the genus of the surface. The sphere and the torus have Euler characteristics 2 and 0, respectively, and in general the Euler characteristic of the connected sum of g tori is 2 − 2g. The surfaces in the third family are nonorientable. The Euler characteristic of the real projective plane is 1, and in general the Euler characteristic of the connected sum of k of them is 2 − k. It follows that a closed surface is determined, up to homeomorphism, by two pieces of information: its Euler characteristic, and whether it is orientable or not. In other words, Euler characteristic and orientability completely classify closed surfaces up to homeomorphism. Closed surfaces with multiple connected components are classified by the class of each of their connected components, and thus one generally assumes that the surface is connected. Monoid structure Relating this classification to connected sums, the closed surfaces up to homeomorphism form a commutative monoid under the operation of connected sum, as indeed do manifolds of any fixed dimension. The identity is the sphere, while the real projective plane and the torus generate this monoid, with a single relation P # P # P = P # T, which may also be written P # K = P # T, since K = P # P. This relation is sometimes known as Dyck's theorem after Walther von Dyck, who proved it in 1888, and the triple cross surface P # P # P is accordingly called Dyck's surface. Geometrically, connect-sum with a torus (# T) adds a handle with both ends attached to the same side of the surface, while connect-sum with a Klein bottle (# K) adds a handle with the two ends attached to opposite sides of an orientable surface; in the presence of a projective plane (# P), the surface is not orientable (there is no notion of side), so there is no difference between attaching a torus and attaching a Klein bottle, which explains the relation. Proof The classification of closed surfaces has been known since the 1860s, and today a number of proofs exist.
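The classification just stated can be phrased as a small decision procedure: from the Euler characteristic and orientability of a connected closed surface, recover its standard form. The sketch below is purely illustrative; the function name and error handling are assumptions made here, not something from the article.

```python
def classify_closed_surface(chi: int, orientable: bool) -> str:
    """Identify a connected closed surface from its Euler characteristic and orientability.

    Orientable:     chi = 2 - 2g  (sphere when g = 0, connected sum of g tori otherwise).
    Non-orientable: chi = 2 - k   (connected sum of k real projective planes, k >= 1).
    """
    if orientable:
        if chi > 2 or chi % 2 != 0:
            raise ValueError("no orientable closed surface has this Euler characteristic")
        g = (2 - chi) // 2
        return "sphere" if g == 0 else f"orientable surface of genus {g}"
    k = 2 - chi
    if k < 1:
        raise ValueError("a non-orientable closed surface has Euler characteristic at most 1")
    return f"connected sum of {k} real projective planes"

print(classify_closed_surface(2, True))    # sphere
print(classify_closed_surface(0, True))    # orientable surface of genus 1
print(classify_closed_surface(0, False))   # connected sum of 2 real projective planes, i.e. the Klein bottle
```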
Topological and combinatorial proofs in general rely on the difficult result that every compact 2-manifold is homeomorphic to a simplicial complex, which is of interest in its own right. The most common proof of the classification is , which brings every triangulated surface to a standard form. A simplified proof, which avoids a standard form, was discovered by John H. Conway circa 1992, which he called the "Zero Irrelevancy Proof" or "ZIP proof" and is presented in . A geometric proof, which yields a stronger geometric result, is the uniformization theorem. This was originally proven only for Riemann surfaces in the 1880s and 1900s by Felix Klein, Paul Koebe, and Henri Poincaré. Surfaces with boundary Compact surfaces, possibly with boundary, are simply closed surfaces with a finite number of holes (open discs that have been removed). Thus, a connected compact surface is classified by the number of boundary components and the genus of the corresponding closed surface – equivalently, by the number of boundary components, the orientability, and Euler characteristic. The genus of a compact surface is defined as the genus of the corresponding closed surface. This classification follows almost immediately from the classification of closed surfaces: removing an open disc from a closed surface yields a compact surface with a circle for boundary component, and removing k open discs yields a compact surface with k disjoint circles for boundary components. The precise locations of the holes are irrelevant, because the homeomorphism group acts k-transitively on any connected manifold of dimension at least 2. Conversely, the boundary of a compact surface is a closed 1-manifold, and is therefore the disjoint union of a finite number of circles; filling these circles with disks (formally, taking the cone) yields a closed surface. The unique compact orientable surface of genus g and with k boundary components is often denoted for example in the study of the mapping class group. Non-compact surfaces Non-compact surfaces are more difficult to classify. As a simple example, a non-compact surface can be obtained by puncturing (removing a finite set of points from) a closed manifold. On the other hand, any open subset of a compact surface is itself a non-compact surface; consider, for example, the complement of a Cantor set in the sphere, otherwise known as the Cantor tree surface. However, not every non-compact surface is a subset of a compact surface; two canonical counterexamples are the Jacob's ladder and the Loch Ness monster, which are non-compact surfaces with infinite genus. A non-compact surface M has a non-empty space of ends E(M), which informally speaking describes the ways that the surface "goes off to infinity". The space E(M) is always topologically equivalent to a closed subspace of the Cantor set. M may have a finite or countably infinite number Nh of handles, as well as a finite or countably infinite number Np of projective planes. If both Nh and Np are finite, then these two numbers, and the topological type of space of ends, classify the surface M up to topological equivalence. If either or both of Nh and Np is infinite, then the topological type of M depends not only on these two numbers but also on how the infinite one(s) approach the space of ends. 
In general the topological type of M is determined by the four subspaces of E(M) that are limit points of infinitely many handles and infinitely many projective planes, limit points of only handles, limit points of only projective planes, and limit points of neither. Assumption of second-countability If one removes the assumption of second-countability from the definition of a surface, there exist (necessarily non-compact) topological surfaces having no countable base for their topology. Perhaps the simplest example is the Cartesian product of the long line with the space of real numbers. Another surface having no countable base for its topology, but not requiring the Axiom of Choice to prove its existence, is the Prüfer manifold, which can be described by simple equations that show it to be a real-analytic surface. The Prüfer manifold may be thought of as the upper half-plane together with one additional "tongue" Tx hanging down from it directly below the point (x,0), for each real x. In 1925, Tibor Radó proved that all Riemann surfaces (i.e., one-dimensional complex manifolds) are necessarily second-countable (Radó's theorem). By contrast, if one replaces the real numbers in the construction of the Prüfer surface by the complex numbers, one obtains a two-dimensional complex manifold (which is necessarily a 4-dimensional real manifold) with no countable base. Surfaces in geometry Polyhedra, such as the boundary of a cube, are among the first surfaces encountered in geometry. It is also possible to define smooth surfaces, in which each point has a neighborhood diffeomorphic to some open set in E2. This elaboration allows calculus to be applied to surfaces to prove many results. Two smooth surfaces are diffeomorphic if and only if they are homeomorphic. (The analogous result does not hold for higher-dimensional manifolds.) Thus closed surfaces are classified up to diffeomorphism by their Euler characteristic and orientability. Smooth surfaces equipped with Riemannian metrics are of foundational importance in differential geometry. A Riemannian metric endows a surface with notions of geodesic, distance, angle, and area. It also gives rise to Gaussian curvature, which describes how curved or bent the surface is at each point. Curvature is a rigid, geometric property, in that it is not preserved by general diffeomorphisms of the surface. However, the famous Gauss–Bonnet theorem for closed surfaces states that the integral of the Gaussian curvature K over the entire surface S is determined by the Euler characteristic: ∫S K dA = 2πχ(S). This result exemplifies the deep relationship between the geometry and topology of surfaces (and, to a lesser extent, higher-dimensional manifolds). Another way in which surfaces arise in geometry is by passing into the complex domain. A complex one-manifold is a smooth oriented surface, also called a Riemann surface. Any complex nonsingular algebraic curve viewed as a complex manifold is a Riemann surface. In fact, every compact orientable surface is realizable as a Riemann surface. Thus compact Riemann surfaces are characterized topologically by their genus: 0, 1, 2, .... On the other hand, the genus does not characterize the complex structure. For example, there are uncountably many non-isomorphic compact Riemann surfaces of genus 1 (the elliptic curves). Complex structures on a closed oriented surface correspond to conformal equivalence classes of Riemannian metrics on the surface.
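As a concrete check of the Gauss–Bonnet formula quoted above, consider the round sphere; this is a standard computation, not an additional claim of the article.

```latex
% Gauss-Bonnet for a closed surface S:  \int_S K \, dA = 2\pi \chi(S).
% Round sphere of radius r: K = 1/r^2 everywhere and the area is 4 pi r^2, so
\[
  \int_{S^2} K \, dA \;=\; \frac{1}{r^2}\cdot 4\pi r^2 \;=\; 4\pi \;=\; 2\pi\cdot 2 \;=\; 2\pi\,\chi(S^2),
\]
% independent of the radius, as the theorem requires.
% For a flat torus, K = 0 everywhere and indeed chi(T) = 0.
```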
One version of the uniformization theorem (due to Poincaré) states that any Riemannian metric on an oriented, closed surface is conformally equivalent to an essentially unique metric of constant curvature. This provides a starting point for one of the approaches to Teichmüller theory, which provides a finer classification of Riemann surfaces than the topological one by Euler characteristic alone. A complex surface is a complex two-manifold and thus a real four-manifold; it is not a surface in the sense of this article. Neither are algebraic curves defined over fields other than the complex numbers, nor are algebraic surfaces defined over fields other than the real numbers. See also Boundary (topology) Volume form, for volumes of surfaces in En Poincaré metric, for metric properties of Riemann surfaces Roman surface Boy's surface Tetrahemihexahedron Crumpled surface, a non-differentiable surface obtained by deforming (crumpling) a differentiable surface Notes References Simplicial proofs of classification up to homeomorphism , English translation of 1934 classic German textbook , Chapter I , Cambridge undergraduate course , for closed oriented Riemannian manifolds Morse theoretic proofs of classification up to diffeomorphism , careful proof aimed at undergraduates (Original 1969-70 Orsay course notes in French for "Topologie des Surfaces") Other proofs , similar to Morse theoretic proof using sliding of attached handles ; page discussing the paper: On Conway's ZIP Proof , short elementary proof using spanning graphs , contains short account of Thomassen's proof External links Classification of Compact Surfaces in Mathifold Project The Classification of Surfaces and the Jordan Curve Theorem in Home page of Andrew Ranicki Math Surfaces Gallery, with 60 ~surfaces and Java Applet for live rotation viewing Math Surfaces Animation, with JavaScript (Canvas HTML) for tens surfaces rotation viewing The Classification of Surfaces Lecture Notes by Z.Fiedorowicz History and Art of Surfaces and their Mathematical Models 2-manifolds at the Manifold Atlas Geometric topology Differential geometry of surfaces Analytic geometry
Surface (topology)
Mathematics
4,737
21,743,669
https://en.wikipedia.org/wiki/Bare%20particle
In theoretical physics, a bare particle is an excitation of an elementary quantum field. In solid-state physics, quasiparticles are dressed particles that also include additional particles surrounding the bare one. See also Quasiparticle References Quantum field theory
Bare particle
Physics
53
1,724,825
https://en.wikipedia.org/wiki/Doron%20Zeilberger
Doron Zeilberger (דורון ציילברגר; born 2 July 1950) is an Israeli-American mathematician, known for his work in combinatorics. Education and career He received his doctorate from the Weizmann Institute of Science in 1976, under the direction of Harry Dym, with the thesis "New Approaches and Results in the Theory of Discrete Analytic Functions." He is a Board of Governors Professor of Mathematics at Rutgers University. Contributions Zeilberger has made contributions to combinatorics, hypergeometric identities, and q-series. Zeilberger gave the first proof of the alternating sign matrix conjecture, noteworthy not only for its mathematical content, but also for the fact that Zeilberger recruited nearly a hundred volunteer checkers to "pre-referee" the paper. In 2011, together with Manuel Kauers and Christoph Koutschan, Zeilberger proved the q-TSPP conjecture, which was independently stated in 1983 by George Andrews and David P. Robbins. Zeilberger is an ultrafinitist. He is also known for crediting his computer "Shalosh B. Ekhad" as a co-author ("Shalosh" and "Ekhad" mean "Three" and "One" in Hebrew respectively, referring to his first computer, an AT&T 3B1), and for his provocative opinions. Awards and honors Zeilberger received a Lester R. Ford Award in 1990. Together with Herbert Wilf, Zeilberger was awarded the American Mathematical Society's Leroy P. Steele Prize for Seminal Contributions to Research in 1998 for their development of WZ theory, which has revolutionized the field of hypergeometric summation. In 2004, Zeilberger was awarded the Euler Medal; the citation refers to him as "a champion of using computers and algorithms to do mathematics quickly and efficiently". In 2016 he received, together with Manuel Kauers and Christoph Koutschan, the David P. Robbins Prize of the American Mathematical Society. Zeilberger was a member of the inaugural 2013 class of fellows of the American Mathematical Society. See also MacMahon Master theorem Wilf–Zeilberger pair References External links Doron Zeilberger's homepage Biography from ScienceWorld From A = B to Z = 60, a conference in honor of Doron Zeilberger's 60th birthday, 27 and 28 May 2010 1950 births Living people 20th-century Israeli mathematicians 21st-century Israeli mathematicians Combinatorialists Fellows of the American Mathematical Society Israeli Jews Jewish scientists People from Haifa Rutgers University faculty Weizmann Institute of Science alumni
Doron Zeilberger
Mathematics
535
38,001
https://en.wikipedia.org/wiki/Phenylalanine
Phenylalanine (symbol Phe or F) is an essential α-amino acid with the formula C9H11NO2. It can be viewed as a benzyl group substituted for the methyl group of alanine, or a phenyl group in place of a terminal hydrogen of alanine. This essential amino acid is classified as neutral and nonpolar because of the inert and hydrophobic nature of the benzyl side chain. The L-isomer is used to biochemically form proteins coded for by DNA. Phenylalanine is a precursor for tyrosine, the monoamine neurotransmitters dopamine, norepinephrine (noradrenaline), and epinephrine (adrenaline), and the biological pigment melanin. It is encoded by the messenger RNA codons UUU and UUC. Phenylalanine is found naturally in the milk of mammals. It is used in the manufacture of food and drink products and sold as a nutritional supplement, as it is a direct precursor to the neuromodulator phenethylamine. As an essential amino acid, phenylalanine is not synthesized de novo in humans and other animals, who must ingest phenylalanine or phenylalanine-containing proteins. The one-letter symbol F was assigned to phenylalanine for its phonetic similarity. History The first description of phenylalanine was made in 1879, when Schulze and Barbieri identified a compound with the empirical formula C9H11NO2 in yellow lupine (Lupinus luteus) seedlings. In 1882, Erlenmeyer and Lipp first synthesized phenylalanine from phenylacetaldehyde, hydrogen cyanide, and ammonia. The genetic codon for phenylalanine was first discovered by J. Heinrich Matthaei and Marshall W. Nirenberg in 1961. They showed that by using mRNA to insert multiple uracil repeats into the genome of the bacterium E. coli, they could cause the bacterium to produce a polypeptide consisting solely of repeated phenylalanine amino acids. This discovery helped to establish the nature of the coding relationship that links information stored in genomic nucleic acid with protein expression in the living cell. Dietary sources Good sources of phenylalanine are eggs, chicken, liver, beef, milk, and soybeans. Another common source of phenylalanine is anything sweetened with the artificial sweetener aspartame, such as diet drinks, diet foods and medication; the metabolism of aspartame produces phenylalanine as one of the compound's metabolites. Dietary recommendations The Food and Nutrition Board (FNB) of the U.S. Institute of Medicine set Recommended Dietary Allowances (RDAs) for essential amino acids in 2002: for phenylalanine plus tyrosine, for adults 19 years and older, 33 mg/kg body weight/day. In 2005 the DRI was set to 27 mg/kg per day (with no tyrosine), and the FAO/WHO/UNU recommendation of 2007 is 25 mg/kg per day (with no tyrosine). Other biological roles L-Phenylalanine is biologically converted into L-tyrosine, another one of the DNA-encoded amino acids. L-Tyrosine in turn is converted into L-DOPA, which is further converted into dopamine, norepinephrine (noradrenaline), and epinephrine (adrenaline). The latter three are known as the catecholamines. Phenylalanine uses the same active transport channel as tryptophan to cross the blood–brain barrier. In excessive quantities, supplementation can interfere with the production of serotonin and other aromatic amino acids as well as nitric oxide because of the overuse (eventually, limited availability) of the associated cofactors, iron or tetrahydrobiopterin. The corresponding enzymes for those compounds are the aromatic amino acid hydroxylase family and nitric oxide synthase.
In plants Phenylalanine is the starting compound used in the synthesis of flavonoids. Lignan is derived from phenylalanine and from tyrosine. Phenylalanine is converted to cinnamic acid by the enzyme phenylalanine ammonia-lyase. Biosynthesis Phenylalanine is biosynthesized via the shikimate pathway. Phenylketonuria The genetic disorder phenylketonuria (PKU) is the inability to metabolize phenylalanine because of a lack of the enzyme phenylalanine hydroxylase. Individuals with this disorder are known as "phenylketonurics" and must regulate their intake of phenylalanine. Phenylketonurics often use blood tests to monitor the amount of phenylalanine in their blood. Lab results may report phenylalanine levels in either mg/dL or μmol/L. One mg/dL of phenylalanine is approximately equivalent to 60 μmol/L. A (rare) "variant form" of phenylketonuria called hyperphenylalaninemia is caused by the inability to synthesize a cofactor called tetrahydrobiopterin, which can be supplemented. Pregnant women with hyperphenylalaninemia may show similar symptoms of the disorder (high levels of phenylalanine in blood), but these indicators usually disappear at the end of gestation. Pregnant women with PKU must control their blood phenylalanine levels even if the fetus is heterozygous for the defective gene, because the fetus could be adversely affected owing to hepatic immaturity. A non-food source of phenylalanine is the artificial sweetener aspartame. This compound is metabolized by the body into several chemical byproducts including phenylalanine. The breakdown problems phenylketonurics have with the buildup of phenylalanine in the body also occur with the ingestion of aspartame, although to a lesser degree. Accordingly, all products in Australia, the U.S. and Canada that contain aspartame must be labeled: "Phenylketonurics: Contains phenylalanine." In the UK, foods containing aspartame must carry ingredient panels that refer to the presence of "aspartame or E951" and must be labeled with the warning "Contains a source of phenylalanine." In Brazil, the label "Contém Fenilalanina" (Portuguese for "Contains Phenylalanine") is also mandatory in products which contain it. These warnings are placed to help individuals avoid such foods. D-, L- and DL-phenylalanine The stereoisomer D-phenylalanine (DPA) can be produced by conventional organic synthesis, either as a single enantiomer or as a component of the racemic mixture. It does not participate in protein biosynthesis, although it is found in proteins in small amounts, particularly aged proteins and food proteins that have been processed. The biological functions of D-amino acids remain unclear, although D-phenylalanine has pharmacological activity at niacin receptor 2. DL-Phenylalanine (DLPA) is marketed as a nutritional supplement for its purported analgesic and antidepressant activities, which have been supported by clinical trials. DL-Phenylalanine is a mixture of D-phenylalanine and L-phenylalanine. The reputed analgesic activity of DL-phenylalanine may be explained by the possible blockage by D-phenylalanine of enkephalin degradation by the enzyme carboxypeptidase A. Enkephalins act as agonists of the mu and delta opioid receptors, and agonists of these receptors are known to produce antidepressant effects.
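The mg/dL to μmol/L rule of thumb quoted in the phenylketonuria section follows from the molar mass of phenylalanine (about 165.19 g/mol for C9H11NO2); the conversion below is a quick check of that figure, not an additional claim from the article.

```latex
% 1 mg/dL = 10 mg/L. Dividing by the molar mass of phenylalanine (~165.19 g/mol):
\[
  \frac{10\ \text{mg/L}}{165.19\ \text{g/mol}} \;\approx\; 6.05\times10^{-5}\ \text{mol/L} \;\approx\; 60.5\ \mu\text{mol/L},
\]
% consistent with the stated approximation that 1 mg/dL of phenylalanine is about 60 umol/L.
```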
The mechanism of DL-phenylalanine's supposed antidepressant activity may also be accounted for in part by the precursor role of L-phenylalanine in the synthesis of the neurotransmitters norepinephrine and dopamine, though clinical trials have not found an antidepressant effect from L-phenylalanine alone. Elevated brain levels of norepinephrine and dopamine are thought to have an antidepressant effect. D-Phenylalanine is absorbed from the small intestine and transported to the liver via the portal circulation. A small amount of D-phenylalanine appears to be converted to L-phenylalanine. D-Phenylalanine is distributed to the various tissues of the body via the systemic circulation. It appears to cross the blood–brain barrier less efficiently than L-phenylalanine, and so a small amount of an ingested dose of D-phenylalanine is excreted in the urine without penetrating the central nervous system. L-Phenylalanine is an antagonist at α2δ Ca2+ calcium channels with a Ki of 980 nM. In the brain, L-phenylalanine is a competitive antagonist at the glycine binding site of NMDA receptor and at the glutamate binding site of AMPA receptor. At the glycine binding site of NMDA receptor L-phenylalanine has an apparent equilibrium dissociation constant (KB) of 573 μM estimated by Schild regression which is considerably lower than brain L-phenylalanine concentration observed in untreated human phenylketonuria. L-Phenylalanine also inhibits neurotransmitter release at glutamatergic synapses in hippocampus and cortex with IC50 of 980 μM, a brain concentration seen in classical phenylketonuria, whereas D-phenylalanine has a significantly smaller effect. Commercial synthesis L-Phenylalanine is produced for medical, feed, and nutritional applications, such as aspartame, in large quantities by utilizing the bacterium Escherichia coli, which naturally produces aromatic amino acids like phenylalanine. The quantity of L-phenylalanine produced commercially has been increased by genetically engineering E. coli, such as by altering the regulatory promoters or amplifying the number of genes controlling enzymes responsible for the synthesis of the amino acid. Derivatives Boronophenylalanine (BPA) is a dihydroxyboryl derivative of phenylalanine, used in neutron capture therapy. 4-Azido-L-phenylalanine is a protein-incorporated unnatural amino acid used as a tool for bioconjugation in the field of chemical biology. See also Phenylalaninol References External links Phenylalanine mass spectrum Phenylalanine at ChemSynthesis Alpha-Amino acids Animal products Proteinogenic amino acids Glucogenic amino acids Ketogenic amino acids Aromatic amino acids Essential amino acids Enkephalinase inhibitors Phenethylamines Dopamine agonists Carbonic anhydrase activators Monoamine precursors
Phenylalanine
Chemistry
2,402
71,619,423
https://en.wikipedia.org/wiki/PGC%202933
PGC 2933, also catalogued as LEDA 2933, is a faint dwarf irregular galaxy in the Sculptor Group. It can be seen in the southern constellation Phoenix. Distance measurements place the galaxy about 11.15 million light-years away. Because it is situated in the Sculptor Group, it is one of the closest galaxies to the Milky Way. It is obscured by a few brighter foreground stars and galaxies (the brightest of these stars lies only 1,425 light-years from the Solar System). The galaxy has a diameter of 2,000 light years. References Dwarf irregular galaxies Sculptor Group Phoenix (constellation) 002933 540-032
PGC 2933
Astronomy
132
3,351,683
https://en.wikipedia.org/wiki/Verify%20in%20field
Verify in field is a construction document notation indicating that the dimensions on a drawing (including architectural, structural, plumbing, mechanical, and electrical plans or miscellaneous vendor shop drawings) require additional verification on the actual site or field. This is commonly shown on drawings as "VIF". Generally, the dimensions to be verified will be highlighted by "bubbling" around them or through some other method to indicate that verification is required. Construction documents
Verify in field
Engineering
88
59,142,827
https://en.wikipedia.org/wiki/NGC%20705
NGC 705 is a lenticular galaxy located 240 million light-years away in the constellation Andromeda. The galaxy was discovered by astronomer William Herschel on September 21, 1786, and is a member of Abell 262. Although NGC 705 is an early-type galaxy, it has a dust lane that is concentrated toward its central region. In projection, it lies close to the cD galaxy NGC 708. See also List of NGC objects (1–1000) References External links 705 6958 Andromeda (constellation) Astronomical objects discovered in 1786 Lenticular galaxies Abell 262 1345
NGC 705
Astronomy
123
9,465,380
https://en.wikipedia.org/wiki/Watchclock
A watchclock is a mechanical clock used by security guards as part of a guard tour patrol system that requires regular patrols. The most commonly used form was the mechanical clock system, which required a key for manually punching a number onto a strip of paper inside, with the time pre-printed on it. Recently, electronic systems have increased in popularity due to their light weight, ease of use, and downloadable logging capabilities. The rise of electronic systems led the largest U.S. manufacturer of watchclocks, Detex, to discontinue all of their mechanical watchclocks on December 31, 2011, including the Detex Newman, which had been manufactured for 130 years. Watchclocks often had a paper or light cardboard disk or paper tape placed inside for a set period of time, usually 24 hours for disk models and 96 hours for tape models. The user would carry the clock to each checkpoint where a numbered key was mounted (typically chained in place, ensuring that the user was present). That key was then inserted into the clock and turned, which would imprint the disk with the key number. The paper disk or tape had the times pre-printed, and the key impressed the key number on the corresponding time. After the shift (or a specified time period, up to 96 hours in the case of the Detex Guardsman clocks), an authorized person (usually a supervisor) would unlock the watchclock, retrieve the disk or tape, and insert a new one. In the case of Detex brand clocks, each time the cover was opened or closed, a mechanical device would puncture the disk or tape at the current time; if a disk had more than two perforations on it, it proved that the clock had been opened and possibly tampered with, or records forged. The approximately five-pound circular watchclock was enclosed in a black leather pouch attached to a leather strap and carried over the shoulder. Inside buildings, mounted near doors, were watchclock stations consisting of a small metal box with a hinged lid, which contained a numbered key affixed by a twelve-inch chain. The watchman would insert the key into the clock and rotate it, and a numeric stamp would be pressed onto a roll or disk of paper locked inside the clock. Gallery References External links Detex Corporation official page Watchclocks at Watchcloks.org Watchclocks Automatic identification and data capture Recording devices
Watchclock
Technology
491
10,789,250
https://en.wikipedia.org/wiki/Television%20standards%20conversion
Television standards conversion is the process of changing a television transmission or recording from one video system to another. Converting video between different numbers of lines, frame rates, and color models in video pictures is a complex technical problem. However, the international exchange of television programming makes standards conversion necessary so that video may be viewed in another nation with a differing standard. Typically, video is fed into a video standards converter, which produces a copy according to a different video standard. One of the most common conversions is between the NTSC and PAL standards. History The first known case of television systems conversion was in Europe a few years after World War II, mainly with the RTF (France) and the BBC (UK) trying to exchange their black and white 441-line and 405-line programming. The problem got worse with the introduction of the color standards PAL and SECAM (both 625 lines), and the French black and white 819-line service. Until the 1980s, standards conversion was so difficult that 24 frame/s 16 mm or 35 mm film was the preferred medium of programming interchange. Overview Perhaps the most technically challenging conversion to make is the PAL and SÉCAM to NTSC conversion. PAL and SÉCAM use 625 lines at 50 fields/s (25 frames/s), while NTSC uses 525 lines at 59.94 fields/s (60000/1001), or approximately 30 frames/s. The NTSC standard is temporally and spatially incompatible with both PAL and SÉCAM. Aside from the line count being different, converting to a format that requires 60 fields every second from a format that has only 50 fields poses difficulty. Every second, an additional 10 fields must be generated—the converter has to create new frames (from the existing input) in real time. Conversion between PAL and SÉCAM does not require similar timing changes, but still requires color encoding and sound conversion. Hidden signals: not always transferred TV contains many hidden signals. One signal type that is not transferred, except on some very expensive converters, is the closed captioning signal. Teletext signals do not need to be transferred, but the captioning data stream should be if it is technologically possible to do so. With HDTV broadcasting, this is less of an issue, for the most part meaning only passing the captioning datastream on to the new source material. However, DVB and ATSC have significantly different captioning datastream types. Role of information theory Theory behind systems conversion Information theory and the Nyquist–Shannon sampling theorem imply that conversion from one television standard to another will be easier if the conversion is from a higher framerate to a lower framerate (NTSC to PAL or SECAM, for example) is from a higher resolution to a lower resolution (HDTV to NTSC) is from one progressive-scan source to another progressive-scan source (interlaced PAL and NTSC are temporally and spatially incompatible with each other) has relatively little interframe motion, which reduces temporal or spatial judder is from a source whose signal-to-noise ratio is not detrimentally low is from a source that has no continuous (or periodic) signal defect that would inhibit translation. Sampling systems and ratios The subsampling in a video system is usually expressed as a three-part ratio. The three terms of the ratio are the number of brightness ("luminance", "luma", "Y") samples and the numbers of samples of the two color ("chroma") components (U/Cb then V/Cr) for each complete sample area.
For quality comparison, only the ratio between those values is important, so 4:4:4 could easily be called 1:1:1; but traditionally the value for brightness is always 4, with the rest of the values scaled accordingly. The sampling principles above apply to both digital and analog television. Telecine judder The "3:2 pulldown" conversion process for 24 frame/s film to television (telecine) creates a slight error in the video signal compared to the original film frames. This is one reason why motion in 24-fps films viewed on typical NTSC home equipment may not appear as smooth as when viewed in a cinema. The phenomenon is particularly apparent during slow, steady camera movements, which appear slightly jerky when telecined. This process is commonly called telecine judder. PAL material to which 2:2:2:2:2:2:2:2:2:2:2:3 pulldown has been applied suffers from a similar lack of smoothness, though this effect is not usually called telecine judder. Every 12th film frame is displayed for the duration of 3 PAL fields (60 milliseconds), whereas each of the 11 other frames is displayed for the duration of 2 PAL fields (40 milliseconds). This causes a slight "hiccup" in the video about twice a second. Television systems converters must avoid creating telecine judder effects during the conversion process. Avoiding this judder is economically important, because much NTSC-resolution (60 Hz, technically 29.97 frame/s) material that originates from film will have this problem when converted to PAL or SECAM (both 50 Hz, 25 frame/s). Historical standards conversion techniques Orthicon to orthicon This method was used by Ireland to convert 625 line service to 405 line service. It is perhaps the most basic television standard conversion technique. RTÉ used this method during the latter years of its use of the 405 line system. A standards converter was used to provide the 405 line service, but according to more than one former RTÉ engineering source the converter blew up, and afterwards the 405 line service was provided by a 405 line camera pointing at a monitor. This is not the best conversion technique, but it can work if one is going from a higher resolution to a lower one – at the same frame rate. Slow phosphors are required on both orthicons. The first video standards converters were analog. That is, a special professional video camera that used a video camera tube would be pointed at a cathode ray tube video monitor. Both the camera and the monitor could be switched to either NTSC or PAL, to convert both ways. Robert Bosch GmbH's Fernseh division made a large three-rack analog video standards converter. These were the high-end converters of the 1960s and 1970s. Image Transform in Universal City, California, used the Fernseh converter and in the 1980s made their own custom digital converter. This was also a large three-rack device. As digital memory size became larger in smaller packages, converters became the size of a microwave oven. Today one can buy a very small consumer converter for home use. SSTV to PAL and NTSC The Apollo Moon missions (late 1960s, early 1970s) used slow-scan television (SSTV) as opposed to normal bandwidth television; this was mostly done to save battery power (and transmission bandwidth, since the SSTV video from the Apollo missions was multiplexed with all other voice and telemetry communications from the spacecraft). The camera used only 7 watts of power.
SSTV was used to transmit images from inside Apollo 7, Apollo 8, and Apollo 9, as well as the Apollo 11 Lunar Module television from the Moon; see Apollo TV camera. The SSTV system used in NASA's early Apollo missions transferred ten frames per second with a resolution of 320 frame lines using less bandwidth than a normal TV transmission. The early SSTV systems used by NASA differ significantly from the SSTV systems currently in use by amateur radio enthusiasts today. Standards conversion was necessary so that the missions could be seen by a worldwide audience in both PAL/SECAM (625 lines, 50 Hz) and NTSC (525 lines, 60 Hz) resolutions. Later Apollo missions featured color field sequential cameras that output 60-frame/s video. Each frame corresponded to one of the RGB primary colors. This method is compatible with black and white NTSC, but incompatible with color NTSC. In fact, even NTSC monochrome TV compatibility is marginal. A monochrome set could have reproduced the pictures, but the pictures would have flickered terribly. The camera color video ran at only 10  frame/s. Also, Doppler shift in the lunar signal would have caused pictures to tear and flip. For these reasons, the Apollo Moon pictures required special conversion techniques. The conversion steps were completely electromechanical, and they took place in nearly real time. First, the downlink station corrected the pictures for Doppler shift. Next, in an analog disc recorder, the downlink station recorded and replayed every video field six times. On the six-track recorder, recording and playback took place simultaneously. After the recorder, analog video processors added the missing components of the NTSC color signal: These components included: The 3.58-MHz color burst, The high-resolution monochrome signal, The sound, The I and Q color signals. The conversion delay lasted only some 10 seconds. Then color Moon pictures left the downlink station for world distribution. Standards conversion methods in common use Nyquist subsampling This conversion technique may become popular with manufacturers of HDTV --> NTSC and HDTV --> PAL converter boxes for the ongoing global conversion to HDTV. Multiple Nyquist subsampling was used by the defunct MUSE HDTV system that was used in Japan. MUSE chipsets that can be used for systems conversion do exist, or can be revised for the needs of HDTV --> Analog TV converter boxes. How it works In a typical image transmission setup, all stationary images are transmitted at full resolution. Moving pictures possess a lower resolution visually, based on complexity of interframe image content. When one uses Nyquist subsampling as a standards conversion technique, the horizontal and vertical resolution of the material are reduced – this is an excellent method for converting HDTV to standard definition television, but it works very poorly in reverse. As the horizontal and vertical content change from frame to frame, moving images will be blurred (in a manner similar to using 16 mm movie film for HDTV projection). In fact, whole-camera pans would result in a loss of 50% of the horizontal resolution. The Nyquist subsampling method of systems conversion only works for HDTV to Standard Definition Television, so as a standards conversion technology it has a very limited use. Phase Correlation is usually preferred for HDTV to standard definition conversion. Framerate conversion There is a large difference in frame rate between film (24.0 frames per second) and NTSC (approximately 29.97 frames per second). 
Unlike with the two other most common video formats, PAL and SECAM, this difference cannot be overcome by a simple speed-up, because the required 25% speed-up would be clearly noticeable. To convert 24 frame/s film to 29.97 frame/s (presented as 59.94 interlaced fields per second) NTSC, a process called "3:2 pulldown" is used: the film is first slowed imperceptibly to 23.976 frame/s (with the audio slowed from the 24 frame/s source to match), and every other film frame is then held for an additional interlaced field, yielding 59.94 fields per second. This produces irregularities in the sequence of images which some people can perceive as a stutter during slow and steady pans of the camera in the source material. See telecine for more details. For viewing native PAL or SECAM material (such as European television series and some European movies) on NTSC equipment, a standards conversion has to take place. There are basically two ways to accomplish this: the framerate can be slowed from 25 to 23.976 frames per second (a slowdown of about 4%) in order to subsequently apply 3:2 pulldown, or the contents of adjacent frames can be interpolated to produce new intermediate frames; the latter introduces artifacts, and even the most modestly trained of eyes can quickly spot video that has been converted between formats. Linear interpolation When converting PAL (625 lines @ 25 frame/s) to NTSC (525 lines @ 30 frame/s), the converter must eliminate 100 lines per frame. The converter must also create five frames per second. To reduce the 625-line signal to 525, less expensive converters drop 100 lines. These converters maintain picture fidelity by evenly spacing the removed lines. (For example, the system might discard every sixth line from each PAL field. After the 50th discard, this process would stop. By then the system would have passed the viewable area of the field. In the following field, the process would repeat, completing one frame.) To create the five additional frames, the converter repeats every fifth frame. If there is little inter-frame motion, this conversion algorithm is fast, inexpensive and effective. Many inexpensive consumer television system converters have employed this technique. Yet in practice, most video features significant inter-frame motion. To reduce conversion artefacts, more modern or expensive equipment may use sophisticated techniques. Doubler The most basic and literal way to double lines is to repeat each scanline, though the results of this are generally very crude. Linear interpolation instead recreates the missing lines of an interlaced signal digitally, and the resulting quality depends on the technique used. Generally the "bob" version of a linear deinterlacer will only interpolate within a single field, rather than merging information from adjacent fields, to preserve the smoothness of motion, resulting in a frame rate equal to the field rate (i.e., a 60i signal would be converted to 60p). A motion-adaptive deinterlacer applies the former technique in moving areas and the latter (merging adjacent fields) in static areas, which improves overall sharpness. Interfield interpolation Interfield interpolation is a technique in which new frames are created by blending adjacent frames, rather than repeating a single frame. This is more complex and computationally expensive than linear interpolation, because it requires the interpolator to have knowledge of the preceding and the following frames to produce an intermediate blended frame. Deinterlacing may also be required in order to produce images which can be interpolated smoothly.
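The inexpensive line-dropping and frame-repeating scheme described under Linear interpolation above can be sketched in a few lines of code. This is an illustrative toy only; the function name, array shapes, and the exact choice of dropped lines are assumptions made here, not taken from any real converter.

```python
import numpy as np

def pal_to_ntsc_naive(frames_625: np.ndarray) -> np.ndarray:
    """Toy PAL -> NTSC conversion by line decimation and frame repetition.

    frames_625: one second of 625-line, 25 frame/s video, shape (25, 625, width).
    Returns 30 frames of 525 lines, shape (30, 525, width).
    """
    # Keep 525 of the 625 lines, spaced as evenly as possible (drops 100 lines).
    keep = np.round(np.linspace(0, 624, 525)).astype(int)
    out_frames = []
    for i, frame in enumerate(frames_625):
        frame_525 = frame[keep, :]
        out_frames.append(frame_525)
        if i % 5 == 4:                 # repeat every fifth frame: 25 -> 30 frames
            out_frames.append(frame_525)
    return np.stack(out_frames)

# One second of synthetic 625-line video, 720 samples per line.
src = np.random.randint(0, 256, size=(25, 625, 720), dtype=np.uint8)
print(pal_to_ntsc_naive(src).shape)    # (30, 525, 720)
```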
Interpolation can also be used to reduce the number of scanlines in the image by averaging the colour and intensity of pixels on neighbouring lines, a technique similar to bilinear filtering but applied to only one axis. There are simple 2-line and 4-line converters. The 2-line converter creates a new line by comparing two adjacent lines, whereas a 4-line model compares four lines to average the fifth. Interfield interpolation reduces judder, but at the expense of picture smearing. The greater the blending applied to smooth out the judder, the greater the smear caused by blending. Adaptive motion interpolation Some more advanced techniques measure the nature and degree of inter-frame motion in the source, and use adaptive algorithms to blend the image based on the results. Some such techniques are known as motion compensation algorithms, and are computationally much more expensive than the simpler techniques, thus requiring more powerful hardware to be effective in real-time conversion. Adaptive motion algorithms capitalize on the way the human eye and brain process moving images – in particular, detail is perceived less clearly on moving objects. Adaptive interpolation requires the converter to analyze multiple successive fields and to detect the amount and type of motion in different areas of the picture. Where little motion is detected, the converter can use linear interpolation. Where greater motion is detected, the converter can switch to an inter-field technique which sacrifices detail for smoother motion. Adaptive motion interpolation has many variations and is commonly found in midrange converters. The quality and cost depend upon the accuracy in analyzing the type and amount of motion, and on the selection of the most appropriate algorithm for processing that motion. Adaptive motion interpolation + block matching Block matching involves dividing the image into mosaic blocks – say, for the sake of explanation, 8x8 pixels. The blocks are then stored in memory. The next field read out is also divided up into the same number and size of mosaic blocks. The converter's computer then goes to work and starts matching up blocks. The blocks that stayed in the same relative position (that is, there was no motion in this part of the image) receive relatively little processing. For each block that changed, the converter searches in every direction through its memory, looking for a match to find out where the block went (if there is motion, the block must have gone somewhere). The search starts at the immediately surrounding blocks (assuming little motion). If a match is not found, the converter searches further and further out until it finds one. When the matching block is found, the converter then knows how far the block moved and in which direction. This data is then stored as a motion vector for the block. Since interframe motion is often predictable, owing to Newton's laws of motion in the real world, the motion vector can then be used to calculate where the block will probably be in the next field. This Newtonian method saves a lot of search and processing time. When panning from left to right is taking place (over, say, 10 fields), it is safe to assume that the 11th field will be similar or very close. Block matching can be seen as the "cutting and pasting" of image blocks. The technique is highly effective, but it does require a tremendous amount of computing power. Consider a block of only 8x8 pixels.
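The block search just described can be written out as a short sketch; the exhaustive sum-of-absolute-differences search below is illustrative only, and the function name, block size, and search radius are assumptions made here rather than details from the article.

```python
import numpy as np

def find_motion_vector(prev_field: np.ndarray, next_field: np.ndarray,
                       top: int, left: int, block: int = 8, radius: int = 8):
    """Find where an 8x8 block of prev_field went in next_field.

    Exhaustively tries every displacement within +/- radius and returns the
    (dy, dx) pair with the smallest sum of absolute differences (SAD),
    i.e. the motion vector for that block.
    """
    ref = prev_field[top:top + block, left:left + block].astype(np.int32)
    best_sad, best_vec = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > next_field.shape[0] or x + block > next_field.shape[1]:
                continue  # candidate block would fall outside the field
            cand = next_field[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(cand - ref).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_vec = sad, (dy, dx)
    return best_vec

# Toy check: a field and a copy of it shifted two pixels to the right.
prev = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
nxt = np.roll(prev, shift=2, axis=1)
print(find_motion_vector(prev, nxt, top=24, left=24))   # expected: (0, 2)
```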
For each block, the computer has 64 possible directions to search and 64 pixels to be matched to the block in the next field. Also consider that the greater the motion, the further out the search must be conducted. Just to find an adjacent block in the next field entails a search of 9 blocks; two blocks out requires a search and match of 25 blocks; three blocks out and it grows to 49, and so on. The type of motion can rapidly compound the computing power required: consider a rotating object, where a simple straight-line motion vector is of little help in predicting where the next block should match. It can quickly be seen that the more inter-frame motion is introduced, the greater the processing power required. This is the general concept of block matching. Block-match converters can vary widely in price and performance depending on the attention to detail and complexity. A peculiar artifact of block matching stems from the size of the block itself. If a moving object is smaller than the mosaic block, it is nevertheless the entire block that gets moved. In most cases this is not an issue, but consider a thrown baseball: the ball itself has a high motion vector, but the background that makes up the rest of the block might not have any motion. The background gets transported in the moved block as well, based on the motion vector of the baseball. What might be seen is the ball, with a small patch of background, tagging along. Because it is in motion, the block may be "soft", depending upon what additional techniques were used, and barely noticeable unless one is looking for it. Block matching requires a staggering amount of processing horsepower, but today's microprocessors are making it a viable solution. Phase correlation Phase correlation is perhaps the most computationally complex of the general algorithms. Phase correlation's success lies in the fact that it is effective at coping with rapid and random motion. Phase correlation does not easily get confused by rotating or twirling objects that confuse most other kinds of converters. Phase correlation is elegant as well as technically and conceptually complex. Its operation is based on applying a Fourier transform to each field of video. A fast Fourier transform (FFT) is an algorithm which deals with the transformation of discrete values (in this case image pixels). When applied to a sample of finite values, a fast Fourier transform expresses any changes (motion) in terms of frequency components. Since the result of the FFT represents only the inter-frame changes in terms of frequency distribution, there is far less data that has to be processed in order to calculate the motion vectors (a small illustrative sketch of this idea is given below). DTV to analog converters for consumers A digital television adapter (DTA), commonly known as a converter box or decoder box, is a device that receives, by means of an antenna, a digital television (DTV) transmission, and converts that signal into an analog signal that can be received and displayed on an analog television. These boxes cheaply convert HDTV (16:9 at 720 or 1080 lines) to analog NTSC or PAL at 4:3. Very little is known about the specific conversion technologies used by these converter boxes in the PAL and NTSC regions. Downconversion is usually required, and because the output resolution is lower, very little image quality loss is perceived by viewers at the recommended viewing distance with most television sets. Offline conversion A lot of cross-format television conversion is done offline.
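Returning to the motion-estimation techniques above, the following is a minimal sketch (not from the article) of the phase-correlation idea: two fields are Fourier-transformed, only the phase difference is kept, and the inverse transform yields a correlation surface whose peak gives the displacement. The function name and the use of NumPy are illustrative assumptions; practical converters estimate motion per region, handle sub-pixel shifts, and must cope with noise and rotation.

```python
import numpy as np

def phase_correlation_shift(field_a, field_b):
    """Estimate the global (dy, dx) translation taking field_b to field_a."""
    A = np.fft.fft2(field_a)
    B = np.fft.fft2(field_b)
    cross_power = A * np.conj(B)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only phase information
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Map large indices to negative shifts (FFT wrap-around).
    h, w = field_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

# Example: a synthetic field shifted by (3, -5) pixels.
rng = np.random.default_rng(0)
f1 = rng.random((64, 64))
f2 = np.roll(f1, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(f2, f1))   # expected: (3, -5)
```

Because the displacement appears as a single sharp peak regardless of image content, this approach copes well with the rapid or erratic motion mentioned above.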
There are several DVD packages that offer offline PAL ↔ NTSC conversion – including cross conversion (technically MPEG ↔ DTV) from the myriad of MPEG-based web video formats. Cross conversion can use any method commonly in use for TV system format conversion, but typically (in order to reduce complexity and memory use) it is left up to the codec to do the conversion. Most modern DVDs are converted from 525 <--> 625 lines in this way, as it is very economical for most programming that originates at EDTV resolution. See also Three-two pull down Reverse Standards Conversion IEEE papers on systems conversion AES/EBU papers on systems conversions ATSC tuner Digital television Digital television adapter DTV transition in the United States Set-top box References External links http://www.hawestv.com/moon_cam/moonctel.htm Film and video technology Television transmission standards Video hardware
Television standards conversion
Engineering
4,495
51,277,173
https://en.wikipedia.org/wiki/Cellular%20agriculture
Cellular agriculture focuses on the production of agricultural products from cell cultures using a combination of biotechnology, tissue engineering, molecular biology, and synthetic biology to create and design new methods of producing proteins, fats, and tissues that would otherwise come from traditional agriculture. Most of the industry is focused on animal products such as meat, milk, and eggs, produced in cell culture rather than raising and slaughtering farmed livestock which is associated with substantial global problems of detrimental environmental impacts (e.g. of meat production), animal welfare, food security and human health. Cellular agriculture is a field of the biobased economy. The most well known cellular agriculture concept is cultured meat. History Although cellular agriculture is a nascent scientific discipline, cellular agriculture products were first commercialized in the late 20th century with insulin and rennet. On March 24, 1990, the FDA approved a bacterium that had been genetically engineered to produce rennet, making it the first genetically engineered product for food. Rennet is a mixture of enzymes that turns milk into curds and whey in cheese making. Traditionally, rennet is extracted from the inner lining of the fourth stomach of calves. Today, cheese making processes use rennet enzymes from genetically engineered bacteria, fungi, or yeasts because they are unadulterated, more consistent, and less expensive than animal-derived rennet. In 2004, Jason Matheny founded New Harvest, whose mission is to "accelerate breakthroughs in cellular agriculture". New Harvest is the only organization focused exclusively on advancing the field of cellular agriculture and provided the first PhD funding specifically for cellular agriculture, at Tufts University. By 2014, IndieBio, a synthetic biology accelerator in San Francisco, has incubated several cellular agriculture startups, hosting Muufri (making milk from cell culture, now Perfect Day Foods), The EVERY Company (making egg whites from cell culture), Gelzen (making gelatin from bacteria and yeast, now Geltor), Afineur (making cultured coffee beans) and Pembient (making rhino horn). Muufri and The EVERY Company were both initially sponsored by New Harvest. In 2015, Mercy for Animals created The Good Food Institute, which promotes plant-based and cellular agriculture. Also in 2015, Isha Datar coined the term "cellular agriculture" (often shortened to "cell ag") in a New Harvest Facebook group. On July 13, 2016, New Harvest hosted the world's first international conference on cellular agriculture in San Francisco, California. The day after the conference, New Harvest hosted the first closed-door workshop for industry, academic, and government stakeholders in cellular agriculture. Research tools Several key research tools are at the foundation of research in cellular agriculture. These include: Cell lines A fundamental missing piece in the advancement of cultured meat is the availability of the appropriate cellular materials. While some methods and protocols from human and mouse cell culture may apply to agricultural cellular materials, it has become clear that most do not. This is evidenced by the fact that established protocols for creating human and mouse embryonic stem cells have not succeeded in establishing ungulate embryonic stem cell lines. 
The ideal criteria for cell lines for the purpose of cultured meat production include immortality, high proliferative ability, surface independence, serum independence, and tissue-forming ability. The specific cell types most suitable for cellular agriculture are likely to differ from species to species. Growth media Conventional methods for growing animal tissue in culture involve the use of fetal bovine serum (FBS). FBS is a blood product extracted from fetal calves. This product supplies cells with nutrients and stimulating growth factors, but is unsustainable and resource-heavy to produce, with large batch-to-batch variation. Cultured meat companies have been putting significant resources into alternative growth media. After the creation of the cell lines, efforts to remove serum from the growth media are key to the advancement of cellular agriculture as fetal bovine serum has been the target of most criticisms of cellular agriculture and cultured meat production. It is likely that two different media formulations will be required for each cell type: a proliferation media, for growth, and a differentiation media, for maturation. Scaling technologies As biotechnological processes are scaled, experiments start to become increasingly expensive, as bioreactors of increasing volume will have to be created. Each increase in size will require a re-optimization of various parameters such as unit operations, fluid dynamics, mass transfer, and reaction kinetics. Scaffold materials For cells to form tissue, it is helpful for a material scaffold to be added to provide structure. Scaffolds are crucial for cells to form tissues larger than 100 μm across. An ideal scaffold must be non-toxic for the cells, edible, and allow for the flow of nutrients and oxygen. It must also be cheap and easy to produce on a large scale without the need for animals. 3D tissue systems The final phase for creating cultured meat involves bringing together all the previous pieces of research to create large (>100 μm in diameter) pieces of tissue that can be made of mass-produced cells without the need for serum, where the scaffold is suitable for cells and humans. Applications While the majority of the discussion has been around food applications, particular cultured meat, cellular agriculture can be used to create any kind of agricultural product, including those that never involved animals to begin with, like Ginkgo Biowork's fragrances. Meat Cultured meat (also known by other names) is a meat produced by in vitro cell cultures of animal cells. It is a form of cellular agriculture, with such agricultural methods being explored in the context of increased consumer demand for protein. Cultured meat is produced using tissue engineering techniques traditionally used in regenerative medicines. The concept of cultured meat was introduced to wider audiences by Jason Matheny in the early 2000s after he co-authored a paper on cultured meat production and created New Harvest, the world's first nonprofit organization dedicated to in-vitro meat research. Cultured meat may have the potential to address substantial global problems of the environmental impact of meat production, animal welfare, food security and human health. Specifically, it can be thought of in the context of the mitigation of climate change. In 2013, professor Mark Post at Maastricht University pioneered a proof-of-concept for cultured meat by creating the first hamburger patty grown directly from cells. 
Since then, other cultured meat prototypes have gained media attention: SuperMeat opened a farm-to-fork restaurant called "The Chicken" in Tel Aviv to test consumer reaction to its "Chicken" burger, while the "world's first commercial sale of cell-cultured meat" occurred in December 2020 at the Singapore restaurant "1880", where cultured meat manufactured by the US firm Eat Just was sold. While most efforts in the space focus on common meats such as pork, beef, and chicken which comprise the bulk of consumption in developed countries, some new companies such as Orbillion Bio have focused on high end or unusual meats including Elk, Lamb, Bison, and the prized Wagyu strain of beef. Avant Meats has brought cultured grouper fish to market as other companies have started to pursue cultivating additional fish species and other seafood. The production process is constantly evolving, driven by multiple companies and research institutions. The applications of cultured meat have led to ethical, health, environmental, cultural, and economic discussions. In terms of market strength, data published by the non-governmental organization Good Food Institute found that in 2021 cultivated meat companies attracted $140 million in Europe alone. Currently cultured meat is served at special events and few high end restaurants, mass production of cultured meat has not started yet. In 2020, the world's first regulatory approval for a cultivated meat product was awarded by the Government of Singapore. The chicken meat was grown in a bioreactor in a fluid of amino acids, sugar, and salt. The chicken nuggets food products are ~70% lab-grown meat, while the remainder is made from mung bean proteins and other ingredients. The company pledged to strive for price parity with premium "restaurant" chicken servings. Dairy Perfect Day is a San Francisco-based startup that started as the New Harvest Dairy Project and was incubated by IndieBio in 2014. Perfect Day is making dairy from yeast instead of cows. The company changed its name from Muufri to Perfect Day in August 2016. New Culture is a San Francisco-based startup that was incubated by IndieBio in 2019. New Culture makes mozzarella cheese using casein protein (dairy protein) made by microbes instead of cows. Real Vegan Cheese based in the San Francisco Bay-area is a grass-roots, non-profit Open Science collective working out of two open community labs and was spun out of the International Genetically Engineered Machine (iGEM) competition in 2014. Real Vegan Cheese are making cheese using casein protein (dairy protein) made by microbes instead of cows. Formo, based in Germany, is a startup making dairy products using microbial precision fermentation. Imagindairy, based in Israel, is a startup attempting to create milk proteins from bioengineered yeast. In 2024 it had received FDA and Israeli Ministry of Health approval for its products. Remilk, based in Israel, is a startup attempting to create milk proteins from bioengineered yeast. In 2022 it had received FDA approval for its products. Wilk, based in Israel, is a startup attempting to produce human mother milk ingredients using cells from breast reduction surgeries, to supplement infant formulas. NewMoo, based in Israel, is a startup attempting to create casein protein within the seeds of genetically modified plants. Real Deal Milk, based in Spain, is a startup attempting to create milk proteins from bioengineered microbes. Opalia, based in Canada, is a startup attempting to produce milk from cows' mammary cells. 
De Novo Dairy, based in South-Africa, is a startup attempting to produce human mother milk ingredients using cells from breast reduction surgeries, to supplement infant formulas. Cultivated Biosciences, based in Switzerland, is a startup attempting to produce fats from non-GMO yeast to make plant based milk more creamy. Naturopy, based in France, is a startup attempting to create milk proteins from bioengineered yeast. Eggs The EVERY Company is a San Francisco-based startup that started as the New Harvest Egg Project and was incubated by IndieBio in 2015. The EVERY Company is making egg whites from yeast instead of eggs. Gelatin Geltor is a San Francisco-based startup that was incubated by IndieBio in 2015. Geltor is developing a proprietary protein production platform that uses bacteria and yeast to produce gelatin. Coffee In 2021, media outlets reported that the world's first synthetic coffee products have been created by two biotechnology companies, still awaiting regulatory approvals for near-term commercialization. Such products – which can be produced via cellular agriculture in bioreactors and for which multiple companies' R&D have acquired substantial funding – may have equal or highly similar effects, composition and taste as natural products but use less water, generate less carbon emissions, require less labor and cause no deforestation. Cell-cultured coffee is a much more radical approach to the multiple challenges that traditional coffee is facing. While 100% coffee, cell-cultured coffee is cultivated in the lab from coffee cells to deliver, after drying, a powder that can be roasted and extracted. Horseshoe crab blood Sothic Bioscience is a Cork-based startup incubated by IndieBio in 2015. Sothic Bioscience is building a platform for biosynthetic horseshoe crab blood production. Horseshoe crab blood contains limulus amebocyte lysate (LAL), which is the gold standard in validating medical equipment and medication. Fish Cellular agriculture could be used for commercial fish feed. Finless Foods is working to develop and mass manufacture marine animal food products. Wild Type is a San Francisco-based startup focused on creating cultured meat to address items such as climate change, food security, and health. Fragrances Ginkgo Bioworks is a Boston-based organism design company culturing fragrances and designing custom microbes. Silk Spiber is a Japan-based company decoding the gene responsible for the production of fibroin in spiders and then bioengineering bacteria with recombinant DNA to produce the protein, which they then spin into their artificial silk. Bolt Threads is a California-based company creating engineered silk fibers based on proteins found in spider silk that can be produced at commercial scale. Bolt examines the DNA of spiders and then replicates those genetic sequences in other ingredients to create a similar silk fiber. Bolt's silk is made primarily of sugar, water, salts, and yeast. Through a process called wet spinning, this liquid is spun into fiber, similar to the way fibers like acrylic and rayon are made. Leather Modern Meadow is a Brooklyn-based startup growing collagen, a protein found in animal skin, to make biofabricated leather. Pet food Clean Meat cluster lists Because Animals, Wild Earth and Bond Pet Foods as participants in developing pet foods that use cultured meat. Wood In 2022, scientists reported the first 3D-printed lab-grown wood. It is unclear if it could ever be used on a commercial scale (e.g. 
with sufficient production efficiency and quality). Issues Academic programs New Harvest Cultured Tissue Fellowship at Tufts University A joint program between New Harvest and the Tissue Engineering Research Center (TERC), an NIH-supported initiative established in 2004 to advance tissue engineering. The fellowship program offers funding for Masters and PhD students at Tufts university who are interested in bioengineering tunable structures, mechanics, and biology into 3D tissue systems related to their utility as foods. Conferences New Harvest Conference New Harvest brings together pioneers in the cellular agriculture and new, interested parties from industry and academia to share relevant learnings for cellular agriculture's path moving forward. The Conference has been held in San Francisco, California, Brooklyn, New York, and is currently held in Cambridge, Massachusetts. Industrializing Cell-Based Meats & Seafood Summit The 3rd Annual Industrializing Cell-Based Meats & Seafood Summit is the only industry-led forum uniting key decision-makers from biotech and food tech, leading food and meat companies, and investors to discuss key operational and technical challenges for the development of cell-based meats and seafood. International Scientific Conference on Cultured Meat The International Scientific Conference on Cultured Meat began in collaboration with Maastricht University in 2015, and brings together an international group of scientists and industry experts to present the latest research and developments in cultured meat. It takes place annually in Maastricht, The Netherlands. Good Food Conference The GFI conference is an event focused on accelerating the commercialization of plant-based and clean meat. Cultured Meat Symposium The Cultured Meat Symposium is a conference held in Silicon Valley highlighting top industry insights of the clean meat revolution. Alternative Protein Show The Alternative Protein Show is a "networking event" to facilitate collaboration in the "New Protein Landscape", which includes plant-based and cellular agriculture. New Food Conference The New Food Conference is an industry-oriented event that aims to accelerate and empower innovative alternatives to animal products by bringing together key stakeholders. It is Europe's first and biggest conference on new-protein solutions. In the media Books Clean Meat: How Growing Meat Without Animals Will Revolutionize Dinner and the World is a book about cellular agriculture written by animal activist Paul Shapiro (author). The book reviews startup companies that are currently working towards mass-producing cellular agriculture products. Meat Planet: Artificial Flesh and the Future of Food by Benjamin Aldes Wurgaft is the result of five years researching cellular agriculture, and explores the quest to generate meat in the lab, asking what it means to imagine that this is the future of food. It is published by the University of California Press. Where do hot dogs come from? A Children's Book about Cellular Agriculture by Anita Broellochs, Alex Shirazi and Illustrated by Gabriel Gonzalez turns a family BBQ into a scientific story explaining how hot dogs are made with cellular agriculture technologies. The book was launched on Kickstarter on July 20, 2021. Podcasts Cultured Meat and Future Food is a podcast about clean meat and future food technologies hosted by Alex Shirazi, a mobile User Experience Designer based in Menlo Park, California, whose current projects focus on retail technology. 
The podcast features interviews with industry professionals from startups, investors, and non-profits working on cellular agriculture. Similar fields of research and production Microbial food cultures and genetically engineered microbial production (e.g. of spider silk or solar-energy-based protein powder) Controlled self-assembly of plant proteins (e.g. of spider silk similar plant-proteins-based plastics alternatives) Cell-free artificial synthesis (see Biobased economy#Agriculture) Imitation foods (e.g. meat analogues and milk substitutes) References External links Overview of relevant bibliography New Harvest Cellular Agriculture Society Further reading Clean meat, consumer attitudes and the transition to a cellular agriculture food economy A Closer Look at Cellular Agriculture and the Processes Defining It As lab-grown meat advances, U.S. lawmakers call for regulation CELLULAR AGRICULTURE: A WAY TO FEED TOMORROW'S SMART CITY? Cellular Agriculture, Intentional Imperfection And 'Post Truth': The Transformative Food Trends Of 2017 The 4 Key Biotechnologies Needed to Get Cellular Agriculture to Commercialization Cellular agriculture: Growing meat in a lab setting How Might Cellular Agriculture Impact the Livestock, Dairy, and Poultry Industries? Biological engineering Meat
Cellular agriculture
Engineering,Biology
3,628
58,787
https://en.wikipedia.org/wiki/Timeline%20of%20black%20hole%20physics
Timeline of black hole physics Pre-20th century 1640 — Ismaël Bullialdus suggests an inverse-square gravitational force law 1676 — Ole Rømer demonstrates that light has a finite speed 1684 — Isaac Newton writes down his inverse-square law of universal gravitation 1758 — Rudjer Josip Boscovich develops his theory of forces, where gravity can be repulsive on small distances. So according to him strange classical bodies, such as white holes, can exist, which won't allow other bodies to reach their surfaces 1784 — John Michell discusses classical bodies which have escape velocities greater than the speed of light 1795 — Pierre Laplace discusses classical bodies which have escape velocities greater than the speed of light 1798 — Henry Cavendish measures the gravitational constant G 1876 — William Kingdon Clifford suggests that the motion of matter may be due to changes in the geometry of space 20th century Before 1960s 1909 — Albert Einstein, together with Marcel Grossmann, starts to develop a theory which would bind metric tensor gik, which defines a space geometry, with a source of gravity, that is with mass 1910 — Hans Reissner and Gunnar Nordström define Reissner–Nordström singularity, Hermann Weyl solves special case for a point-body source 1915 — Albert Einstein presents (David Hilbert presented this independently five days earlier in Göttingen) the complete Einstein field equations at the Prussian Academy meeting in Berlin on 25 November 1915 1916 — Karl Schwarzschild solves the Einstein vacuum field equations for uncharged spherically symmetric non-rotating systems 1917 — Paul Ehrenfest gives conditional principle a three-dimensional space 1918 — Hans Reissner and Gunnar Nordström solve the Einstein–Maxwell field equations for charged spherically symmetric non-rotating systems 1918 — Friedrich Kottler gets Schwarzschild solution without Einstein vacuum field equations 1923 — George David Birkhoff proves that the Schwarzschild spacetime geometry is the unique spherically symmetric solution of the Einstein vacuum field equations 1931 — Subrahmanyan Chandrasekhar calculates, using special relativity, that a non-rotating body of electron-degenerate matter above a certain limiting mass (at 1.4 solar masses) has no stable solutions 1939 — Robert Oppenheimer and Hartland Snyder calculate the gravitational collapse of a pressure-free homogeneous fluid sphere into a black hole 1958 — David Finkelstein theorises that the Schwarzschild radius is a causality barrier: an event horizon of a black hole 1960s 1963 — Roy Kerr solves the Einstein vacuum field equations for uncharged symmetric rotating systems, deriving the Kerr metric for a rotating black hole 1963 — Maarten Schmidt discovers and analyzes the first quasar, 3C 273, as a highly red-shifted active galactic nucleus, a billion light years away 1964 — Roger Penrose proves that an imploding star will necessarily produce a singularity once it has formed an event horizon 1964 — Yakov Zel’dovich and independently Edwin Salpeter propose that accretion discs around supermassive black holes are responsible for the huge amounts of energy radiated by quasars 1964 — Hong-Yee Chiu coins the word quasar for a 'quasi-stellar radio source' in his article in Physics Today 1964 — The first recorded use of the term "black hole", by journalist Ann Ewing 1965 — Ezra T. Newman, E. Couch, K. Chinnapared, A. Exton, A. 
Prakash, and Robert Torrence solve the Einstein–Maxwell field equations for charged rotating systems 1966 — Yakov Zel’dovich and Igor Novikov propose searching for black hole candidates among binary systems in which one star is optically bright and X-ray dark and the other optically dark but X-ray bright (the black hole candidate) 1967 — Jocelyn Bell discovers and analyzes the first radio pulsar, direct evidence for a neutron star 1967 — Werner Israel presents the proof of the no-hair theorem at King's College London 1967 — John Wheeler introduces the term "black hole" in his lecture to the American Association for the Advancement of Science 1968 — Brandon Carter uses Hamilton–Jacobi theory to derive first-order equations of motion for a charged particle moving in the external fields of a Kerr–Newman black hole 1969 — Roger Penrose discusses the Penrose process for the extraction of the spin energy from a Kerr black hole 1969 — Roger Penrose proposes the cosmic censorship hypothesis After 1960s 1972 — Identification of Cygnus X-1/HDE 226868 from dynamic observations as the first binary with a stellar black hole candidate 1972 — Stephen Hawking proves that the area of a classical black hole's event horizon cannot decrease 1972 — James Bardeen, Brandon Carter, and Stephen Hawking propose four laws of black hole mechanics in analogy with the laws of thermodynamics 1972 — Jacob Bekenstein suggests that black holes have an entropy proportional to their surface area due to information loss effects 1974 — Stephen Hawking applies quantum field theory to black hole spacetimes and shows that black holes will radiate particles with a black-body spectrum which can cause black hole evaporation 1975 — James Bardeen and Jacobus Petterson show that the swirl of spacetime around a spinning black hole can act as a gyroscope stabilizing the orientation of the accretion disc and jets 1989 — Identification of microquasar V404 Cygni as a binary black hole candidate system 1994 — Charles Townes and colleagues observe ionized neon gas swirling around the center of our Galaxy at such high velocities that a possible black hole mass at the very center must be approximately equal to that of 3 million suns 21st century 2002 — Astronomers at the Max Planck Institute for Extraterrestrial Physics present evidence for the hypothesis that Sagittarius A* is a supermassive black hole at the center of the Milky Way galaxy 2002 — Physicists at The Ohio State University publish fuzzball theory, which is a quantum description of black holes positing that they are extended objects composed of strings and don't have singularities. 
2002 — NASA's Chandra X-ray Observatory identifies double galactic black holes system in merging galaxies NGC 6240 2004 — Further observations by a team from UCLA present even stronger evidence supporting Sagittarius A* as a black hole 2006 — The Event Horizon Telescope begins capturing data 2012 — First visual evidence of black-holes: Suvi Gezari's team in Johns Hopkins University, using the Hawaiian telescope Pan-STARRS 1, publish images of a supermassive black hole 2.7 million light-years away swallowing a red giant 2015 — LIGO Scientific Collaboration detects the distinctive gravitational waveforms from a binary black hole merging into a final black hole, yielding the basic parameters (e.g., distance, mass, and spin) of the three spinning black holes involved 2019 — Event Horizon Telescope collaboration releases the first direct photo of a black hole, the supermassive M87* at the core of the Messier 87 galaxy References See also Timeline of gravitational physics and relativity Schwarzschild radius Black holes Black hole physics Black hole physics
Timeline of black hole physics
Physics,Astronomy
1,457
3,224,640
https://en.wikipedia.org/wiki/Iguanomorpha
Iguania is an infraorder of squamate reptiles that includes iguanas, chameleons, agamids, and New World lizards like anoles and phrynosomatids. Using morphological features as a guide to evolutionary relationships, the Iguania are believed to form the sister group to the remainder of the Squamata, which comprise nearly 11,000 named species, roughly 2000 of which are iguanians. However, molecular information has placed Iguania well within the Squamata as sister taxa to the Anguimorpha and closely related to snakes. The group has been subject to debate and revision since it was classified by Charles Lewis Camp in 1923, owing to difficulties in finding adequate synapomorphic morphological characteristics. Most iguanians are arboreal, but there are several terrestrial groups. They usually have primitive fleshy, non-prehensile tongues, although the tongue is highly modified in chameleons. Today they occur in scattered locations: Madagascar, the Fiji and Friendly Islands, and the Western Hemisphere. Classification The Iguania currently include these extant families: Clade Acrodonta Family Agamidae – agamid lizards, Old World arboreal lizards Family Chamaeleonidae – chameleons Clade Pleurodonta – American arboreal lizards, chuckwallas, iguanas Family Leiocephalidae Genus Leiocephalus: curly-tailed lizards Family Corytophanidae – helmet lizards Family Crotaphytidae – collared lizards, leopard lizards Family Hoplocercidae – dwarf and spinytail iguanas Family Iguanidae – marine, Fijian, Galapagos land, spinytail, rock, desert, green, and chuckwalla iguanas Family Tropiduridae – tropidurine lizards subclade of Tropiduridae Tropidurini – neotropical ground lizards Family Dactyloidae – anoles Family Polychrotidae subclade of Polychrotidae Polychrus Family Phrynosomatidae – North American spiny lizards Family Liolaemidae – South American swifts Family Opluridae – Malagasy iguanas Family Leiosauridae – leiosaurs subclade of Leiosaurini Leiosaurae subclade of Leiosaurini Anisolepae Phylogeny The phylogenetic analysis of Daza et al. (2012), a morphological analysis, resolved the interrelationships of extinct and living iguanians. The extinct Arretosauridae (Paleogene iguanians from Central Asia) are alternatively classified in either the Acrodonta with other Old World iguanians, or in Pleurodonta as a sister group to the Crotaphytidae. Conservation status As of 2020, the IUCN Red List of endangered species lists 63.3% of the species as least concern, 6.7% near threatened, 8.2% vulnerable, 9.1% endangered, 3.1% critically endangered, 0.3% extinct, and 9.2% data deficient. The major threats include agriculture and residential and commercial development. References Further reading Early Jurassic first appearances Toxicofera
Iguanomorpha
Biology,Environmental_science
662
7,844,595
https://en.wikipedia.org/wiki/Center-of-momentum%20frame
In physics, the center-of-momentum frame (COM frame), also known as the zero-momentum frame, is the inertial frame in which the total momentum of the system vanishes. It is unique up to velocity, but not origin. The center of momentum of a system is not a location, but a collection of relative momenta/velocities: a reference frame. Thus "center of momentum" is short for "center-of-momentum frame". A special case of the center-of-momentum frame is the center-of-mass frame: an inertial frame in which the center of mass (which is a single point) remains at the origin. In all center-of-momentum frames, the center of mass is at rest, but it is not necessarily at the origin of the coordinate system. In special relativity, the COM frame is necessarily unique only when the system is isolated. Properties General The center of momentum frame is defined as the inertial frame in which the sum of the linear momenta of all particles is equal to 0. Let S denote the laboratory reference system and S′ denote the center-of-momentum reference frame. Using a Galilean transformation, the particle velocity in S′ is v′ = v − Vc, where Vc is the velocity of the mass center. The total momentum in the center-of-momentum system then vanishes: Σ mi vi′ = Σ mi (vi − Vc) = 0. Also, the total energy of the system is the minimal energy as seen from all inertial reference frames. Special relativity In relativity, the COM frame exists for an isolated massive system. This is a consequence of Noether's theorem. In the COM frame the total energy of the system is the rest energy, and this quantity (when divided by the factor c², where c is the speed of light) gives the invariant mass (rest mass) of the system: m0 = E0/c². The invariant mass of the system is given in any inertial frame by the relativistic invariant relation m0² = (E/c²)² − (p/c)², but for zero momentum the momentum term (p/c)² vanishes and thus the total energy coincides with the rest energy. Systems that have nonzero energy but zero rest mass (such as photons moving in a single direction, or, equivalently, plane electromagnetic waves) do not have COM frames, because there is no frame in which they have zero net momentum. Due to the invariance of the speed of light, a massless system must travel at the speed of light in any frame, and always possesses a net momentum. Its energy is – for each reference frame – equal to the magnitude of momentum multiplied by the speed of light: E = pc. Two-body problem An example of the usage of this frame is given below – in a two-body collision, not necessarily elastic (an elastic collision is one in which kinetic energy is conserved). The COM frame can be used to find the momentum of the particles much more easily than in a lab frame: the frame where the measurement or calculation is done. The situation is analyzed using Galilean transformations and conservation of momentum (for generality, rather than kinetic energies alone), for two particles of mass m1 and m2, moving at initial velocities (before collision) u1 and u2 respectively. The transformations are applied to the velocity of each particle, taking it from the lab frame (unprimed quantities) to the COM frame (primed quantities): u1′ = u1 − V, u2′ = u2 − V, where V is the velocity of the COM frame. Since V is the velocity of the COM, i.e.
the time derivative of the COM location R (position of the center of mass of the system): V = dR/dt = (m1 u1 + m2 u2)/(m1 + m2), so at the origin of the COM frame, R′ = 0, this implies m1 u1′ + m2 u2′ = 0. The same results can be obtained by applying momentum conservation in the lab frame, where the momenta are p1 = m1 u1 and p2 = m2 u2: p1 + p2 = (m1 + m2)V, and in the COM frame, where it is asserted definitively that the total momentum of the particles, p1′ + p2′, vanishes: p1′ + p2′ = m1 u1′ + m2 u2′ = 0. Using the COM frame equation to solve for V returns the lab frame equation above, demonstrating that any frame (including the COM frame) may be used to calculate the momenta of the particles. It has been established that the velocity of the COM frame can be removed from the calculation using the above frame, so the momenta of the particles in the COM frame can be expressed in terms of the quantities in the lab frame (i.e. the given initial values): p1′ = m1 u1′ = m1 (u1 − V) = −p2′. Notice that the relative velocity in the lab frame of particle 1 to 2 is Δv = u1 − u2, and the 2-body reduced mass is μ = m1 m2 / (m1 + m2), so the momenta of the particles compactly reduce to p1′ = −p2′ = μ Δv. This is a substantially simpler calculation of the momenta of both particles; the reduced mass and relative velocity can be calculated from the initial velocities in the lab frame and the masses, and the momentum of one particle is simply the negative of the other. The calculation can be repeated for final velocities v1 and v2 in place of the initial velocities u1 and u2, since after the collision the velocities still satisfy the above equations: v1′ = v1 − V, v2′ = v2 − V, so at the origin of the COM frame, R′ = 0, this implies after the collision m1 v1′ + m2 v2′ = 0. In the lab frame, the conservation of momentum fully reads: m1 u1 + m2 u2 = m1 v1 + m2 v2 = (m1 + m2)V. This equation does not imply that m1 u1 = m1 v1 and m2 u2 = m2 v2; instead, it simply indicates that the total mass M multiplied by the velocity of the centre of mass V is the total momentum P of the system: P = MV. Similar analysis to the above obtains p1′ = −p2′ = μ Δv′, where the final relative velocity in the lab frame of particle 1 to 2 is Δv′ = v1 − v2. See also Laboratory frame of reference Breit frame References Classical mechanics Coordinate systems Frames of reference Geometric centers Kinematics Momentum
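As a numerical illustration of the two-body relations derived above, the short sketch below (with arbitrarily chosen masses and velocities) checks that the momenta in the COM frame sum to zero and equal ±μΔv. All names and numbers are illustrative assumptions, not part of the article.

```python
import numpy as np

# Two-body example of the centre-of-momentum (COM) frame relations.
m1, m2 = 2.0, 3.0                                        # masses (kg)
u1 = np.array([4.0, 0.0])                                # lab-frame velocities (m/s)
u2 = np.array([-1.0, 2.0])

V = (m1 * u1 + m2 * u2) / (m1 + m2)                      # velocity of the COM frame
u1_p, u2_p = u1 - V, u2 - V                              # velocities in the COM frame

p1_p = m1 * u1_p                                         # momenta in the COM frame
p2_p = m2 * u2_p
print(p1_p + p2_p)                                       # ~ [0, 0]: total momentum vanishes

mu = m1 * m2 / (m1 + m2)                                 # reduced mass
dv = u1 - u2                                             # relative velocity in the lab frame
print(np.allclose(p1_p, mu * dv))                        # True: p1' =  mu * (u1 - u2)
print(np.allclose(p2_p, -mu * dv))                       # True: p2' = -p1'
```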
Center-of-momentum frame
Physics,Mathematics,Technology
1,130
27,642,888
https://en.wikipedia.org/wiki/Electric%20utility
An electric utility, or a power company, is a company in the electric power industry (often a public utility) that engages in electricity generation and distribution of electricity for sale generally in a regulated market. The electrical utility industry is a major provider of energy in most countries. Electric utilities include investor owned, publicly owned, cooperatives, and nationalized entities. They may be engaged in all or only some aspects of the industry. Electricity markets are also considered electric utilities—these entities buy and sell electricity, acting as brokers, but usually do not own or operate generation, transmission, or distribution facilities. Utilities are regulated by local and national authorities. Electric utilities are facing increasing demands including aging infrastructure, reliability, and regulation. In 2009, the French company EDF was the world's largest producer of electricity. Organization Power transactions An electric power system is a group of generation, transmission, distribution, communication, and other facilities that are physically connected. The flow of electricity within the system is maintained and controlled by dispatch centers which can buy and sell electricity based on system requirements. Executive compensation The executive compensation received by the executives in utility companies often receives the most scrutiny in the review of operating expenses. Just as regulated utilities and their governing bodies struggle to maintain a balance between keeping consumer costs reasonable and being profitable enough to attract investors, they must also compete with private companies for talented executives and then be able to retain those executives. Regulated companies are less likely to use incentive-based remuneration in addition to base salaries. Executives in regulated electric utilities are less likely to be paid for their performance in bonuses or stock options. They are less likely to approve compensation policies that include incentive-based pay. The compensation for electric utility executives will be the lowest in regulated utilities that have an unfavorable regulatory environment. These companies have more political constraints than those in a favorable regulatory environment and are less likely to have a positive response to requests for rate increases. Just as increased constraints from regulation drive compensation down for executives in electric utilities, deregulation has been shown to increase remuneration. The need to encourage risk-taking behavior in seeking new investment opportunities while keeping costs under control requires deregulated companies to offer performance-based incentives to their executives. It has been found that increased compensation is also more likely to attract executives experienced in working in competitive environments. In the United States, the Energy Policy Act of 1992 removed previous barriers to wholesale competition in the electric utility industry. Currently 24 states allow for deregulated electric utilities: Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, Texas, Virginia, Arizona, Arkansas, California, Connecticut, Delaware, Illinois, Maine, Maryland, Massachusetts, Michigan, Montana, New Hampshire, New Jersey, New Mexico, New York, and Washington D.C. As electric utility monopolies have been increasingly broken up into deregulated businesses, executive compensation has risen; particularly incentive compensation. 
Oversight Oversight is typically carried out at the national level; however, it varies depending on financial support and external influences. No influential international energy oversight organization exists. The World Energy Council does exist, but its mission is mostly to advise and share new information; it does not hold any kind of legislative or executive power. Alternative energy promotion Alternative energy has become more and more prevalent in recent times, and as it is inherently independent of more traditional sources of energy, the market seems to have a very different structure. In the United States, to promote the production and development of alternative energies, there are many subsidies, rewards, and incentives that encourage companies to take up the challenge themselves. There is precedent for such a system working in countries like Nicaragua. In 2005, Nicaragua gave renewable energy companies tax and duty exemptions, which spurred a great deal of private investment. The success in Nicaragua may not be easily replicated, however. Germany's transition effort, known as the Energiewende, is generally considered a failure for many reasons; a primary reason was that it was poorly timed, having been proposed during a period in which the country's energy economy was under more competition. Globally, the transition of electric utilities to renewables remains slow, hindered by concurrent continued investment in the expansion of fossil fuel capacity. Nuclear energy Nuclear energy may be classified as a green source depending on the country. Although there used to be much more privatization in this energy sector, after the 2011 Fukushima Daiichi nuclear power plant disaster in Japan there has been a move away from nuclear energy, especially from privately owned nuclear power plants. The criticism is that privatized companies tend to cut corners and costs in pursuit of profit, which has proven disastrous in the worst-case scenarios. This placed a strain on many other countries, as many foreign governments felt pressured to close nuclear power plants in response to public concerns. Nuclear energy, however, still plays a major part in many communities around the world. Customer expectations Utilities have found that it is not simple to meet the unique needs of individual customers, whether residential, corporate, industrial, government, military, or otherwise. Customers in the twenty-first century have new and urgent expectations that demand a transformation of the electric grid. They want a system that gives them new tools, better data to help manage energy usage, advanced protections against cyberattacks, and a system that minimizes outage times and quickens power restoration. See also Consumer Advocate for Customers of Public Utilities Electricity transmission Rate Case Rate base (energy) References Electric power Public utilities Economics of transport and utility industries
Electric utility
Physics,Engineering
1,105
709,999
https://en.wikipedia.org/wiki/Clifford%20module
In mathematics, a Clifford module is a representation of a Clifford algebra. In general a Clifford algebra C is a central simple algebra over some field extension L of the field K over which the quadratic form Q defining C is defined. The abstract theory of Clifford modules was founded by a paper of M. F. Atiyah, R. Bott and Arnold S. Shapiro. A fundamental result on Clifford modules is that the Morita equivalence class of a Clifford algebra (the equivalence class of the category of Clifford modules over it) depends only on the signature p − q (mod 8). This is an algebraic form of Bott periodicity. Matrix representations of real Clifford algebras We will need to study anticommuting matrices (AB = −BA), because in Clifford algebras orthogonal vectors anticommute. For the real Clifford algebra Rp,q, we need p + q mutually anticommuting matrices, of which p have +1 as square and q have −1 as square. Such a basis of gamma matrices is not unique. One can always obtain another set of gamma matrices satisfying the same Clifford algebra by means of a similarity transformation γa′ = S γa S⁻¹, where S is a non-singular matrix. The sets γa′ and γa belong to the same equivalence class. Real Clifford algebra R3,1 Developed by Ettore Majorana, this Clifford module enables the construction of a Dirac-like equation without complex numbers, and its elements are called Majorana spinors. The four basis vectors are the three Pauli matrices and a fourth antihermitian matrix. The signature is (+++−). For the signatures (+−−−) and (−−−+) often used in physics, 4×4 complex matrices or 8×8 real matrices are needed. See also Weyl–Brauer matrices Higher-dimensional gamma matrices Clifford module bundle References Atiyah, M. F.; Bott, R.; Shapiro, A. (1964), "Clifford modules", Topology, 3 (Suppl. 1): 3–38. Representation theory Clifford algebras
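To make the gamma-matrix discussion above concrete, here is a minimal numerical check (not from the article) that a standard set of Dirac gamma matrices, built from the Pauli matrices, mutually anticommutes and squares to the signature (+,−,−,−). This is the familiar complex Dirac representation used as an illustrative assumption; the real Majorana representation for R3,1 mentioned above differs in form but satisfies the analogous Clifford relations.

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

def block(a, b, c, d):
    """Assemble a 4x4 matrix from four 2x2 blocks."""
    return np.block([[a, b], [c, d]])

# Dirac representation: gamma^0 squares to +1, gamma^1..3 square to -1.
g0 = block(I2, Z2, Z2, -I2)
gammas = [g0] + [block(Z2, s, -s, Z2) for s in (s1, s2, s3)]
eta = np.diag([1, -1, -1, -1])

# Check the Clifford relation {gamma_mu, gamma_nu} = 2 * eta_{mu nu} * I.
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Clifford relations verified for signature (+,-,-,-)")
```

Any other basis obtained by a similarity transformation γa′ = S γa S⁻¹ with invertible S would pass the same check, illustrating the equivalence-class statement above.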
Clifford module
Mathematics
380
1,300,874
https://en.wikipedia.org/wiki/Actinic%20keratosis
Actinic keratosis (AK), sometimes called solar keratosis or senile keratosis, is a pre-cancerous area of thick, scaly, or crusty skin. Actinic keratosis is a disorder (-osis) of epidermal keratinocytes that is induced by ultraviolet (UV) light exposure (actin-). These growths are more common in fair-skinned people and those who are frequently in the sun. They are believed to form when skin gets damaged by UV radiation from the sun or indoor tanning beds, usually over the course of decades. Given their pre-cancerous nature, if left untreated, they may turn into a type of skin cancer called squamous cell carcinoma. Untreated lesions have up to a 20% risk of progression to squamous cell carcinoma, so treatment by a dermatologist is recommended. Actinic keratoses characteristically appear as thick, scaly, or crusty areas that often feel dry or rough. Size commonly ranges between 2 and 6 millimeters, but they can grow to be several centimeters in diameter. AKs are often felt before they are seen, and the texture is sometimes compared to sandpaper. They may be dark, light, tan, pink, red, a combination of all these, or have the same color as the surrounding skin. Given the causal relationship between sun exposure and AK growth, they often appear on a background of sun-damaged skin and in areas that are commonly sun-exposed, such as the face, ears, neck, scalp, chest, backs of hands, forearms, or lips. Because sun exposure is rarely limited to a small area, most people who have an AK have more than one. If clinical examination findings are not typical of AK and the possibility of in situ or invasive squamous cell carcinoma (SCC) cannot be excluded based on clinical examination alone, a biopsy or excision can be considered for definitive diagnosis by histologic examination of the lesional tissue. Multiple treatment options for AK are available. Photodynamic therapy (PDT) is one option for the treatment of numerous AK lesions in a region of the skin, termed field cancerization. It involves the application of a photosensitizer to the skin followed by illumination with a strong light source. Topical creams, such as 5-fluorouracil or imiquimod, may require daily application to affected skin areas over a typical time course of weeks. Cryotherapy is frequently used for few and well-defined lesions, but undesired skin lightening, or hypopigmentation, may occur at the treatment site. By following up with a dermatologist, AKs can be treated before they progress to skin cancer. If cancer does develop from an AK lesion, it can be caught early with close monitoring, at a time when treatment is likely to have a high cure rate. Signs and symptoms Actinic keratoses (AKs) most commonly present as a white, scaly plaque of variable thickness with surrounding redness; they have a sandpaper-like texture when felt with a gloved hand. Skin nearby the lesion often shows evidence of solar damage characterized by pigmentary alterations, being yellow or pale in color with areas of hyperpigmentation; deep wrinkles, coarse texture, purpura and ecchymoses, dry skin, and scattered telangiectasias are also characteristic. Photoaging leads to an accumulation of oncogenic changes, resulting in a proliferation of mutated keratinocytes that can manifest as AKs or other neoplastic growths. With years of sun damage, it is possible to develop multiple AKs in a single area on the skin. This condition is termed field cancerization. 
The lesions are usually asymptomatic, but can be tender, itch, bleed, or produce a stinging or burning sensation. AKs are typically graded in accordance with their clinical presentation: Grade I (easily visible, slightly palpable), Grade II (easily visible, palpable), and Grade III (frankly visible and hyperkeratotic). Variants Actinic keratoses can have various clinical presentations, often characterized as follows: Classic (or common): Classic AKs present as white, scaly macules, papules or plaques of various thickness, often with surrounding erythema. They are usually 2–6mm in diameter but can sometimes reach several centimeters in diameter. Hypertrophic (or hyperkeratotic): Hypertrophic AKs (HAKs) appears as a thicker scale or rough papule or plaque, often adherent to an erythematous base. Classic AKs can progress to become HAKs, and HAKs themselves can be difficult to distinguish from malignant lesions. Atrophic: Atrophic AKs lack an overlying scale, and therefore appear as a nonpalpable change in color (or macule). They are often smooth and red and are less than 10mm in diameter. AK with cutaneous horn: A cutaneous horn is a keratinic projection with its height at least one-half of its diameter, often conical in shape. They can be seen in the setting of actinic keratosis as a progression of an HAK, but are also present in other skin conditions. 38–40% of cutaneous horns represent AKs. Pigmented AK: Pigmented AKs are rare variants that often present as macules or plaques that are tan to brown in color. They can be difficult to distinguish from a solar lentigo or lentigo maligna. Actinic cheilitis: When an AK forms on the lip, it is called actinic cheilitis. This usually presents as a rough, scaly patch on the lip, often accompanied by the sensation of dry mouth and symptomatic splitting of the lips. Bowenoid AK: Usually presents as a solitary, erythematous, scaly patch or plaque with well-defined borders. Bowenoid AKs are differentiated from Bowen's disease by degree of epithelial involvement as seen on histology. The presence of ulceration, nodularity, or bleeding should raise concern for malignancy. Specifically, clinical findings suggesting an increased risk of progression to squamous cell carcinoma can be recognized as "IDRBEU": I (induration/inflammation), D (diameter > 1 cm), R (rapid enlargement), B (bleeding), E (erythema), and U (ulceration). AKs are usually diagnosed clinically, but because they are difficult to clinically differentiate from squamous cell carcinoma, any concerning features warrant biopsy for diagnostic confirmation. Causes The most important cause of AK formation is solar radiation, through a variety of mechanisms. Mutation of the p53 tumor suppressor gene, induced by UV radiation, has been identified as a crucial step in AK formation. This tumor suppressor gene, located on chromosome 17p132, allows for cell cycle arrest when DNA or RNA is damaged. Dysregulation of the p53 pathway can thus result in unchecked replication of dysplastic keratinocytes, thereby serving as a source of neoplastic growth and the development of AK, as well as possible progression from AK to skin cancer. Other molecular markers that have been associated with the development of AK include the expression of p16ink4, p14, the CD95 ligand, TNF-related apoptosis-inducing ligand (TRAIL) and TRAIL receptors, and loss of heterozygosity. Evidence also suggests that the human papillomavirus (HPV) plays a role in the development of AKs. 
The HPV virus has been detected in AKs, with measurable HPV viral loads (one HPV-DNA copy per less than 50 cells) measured in 40% of AKs. Similar to UV radiation, higher levels of HPV found in AKs reflect enhanced viral DNA replication. This is suspected to be related to the abnormal keratinocyte proliferation and differentiation in AKs, which facilitate an environment for HPV replication. This in turn may further stimulate the abnormal proliferation that contributes to the development of AKs and carcinogenesis. Ultraviolet radiation It is thought that ultraviolet (UV) radiation induces mutations in the keratinocytes of the epidermis, promoting the survival and proliferation of these atypical cells. Both UV-A and UV-B radiation have been implicated as causes of AKs. UV-A radiation (wavelength 320–400 nm) reaches more deeply into the skin and can lead to the generation of reactive oxygen species, which in turn can damage cell membranes, signaling proteins, and nucleic acids. UV-B radiation (wavelength 290–320 nm) causes thymidine dimer formation in DNA and RNA, leading to significant cellular mutations. In particular, mutations in the p53 tumor suppressor gene have been found in 30–50% of AK lesion skin samples. UV radiation has also been shown to cause elevated inflammatory markers such as arachidonic acid, as well as other molecules associated with inflammation. Eventually, over time these changes lead to the formation of AKs. Several predictors for increased AK risk from UV radiation have been identified: Extent of sun exposure: Cumulative sun exposure leads to an increased risk for development of AKs. In one U.S. study, AKs were found in 55% of fair-skinned men with high cumulative sun exposure, and in only 19% of fair-skinned men with low cumulative sun exposure in an age-matched cohort (the percents for women in this same study were 37% and 12% respectively). Furthermore, the use of sunscreen (SPF 17 or higher) has been found to significantly reduce the development of AK lesions, and also promotes the regression of existing lesions. History of sunburn: Studies show that even a single episode of painful sunburn as a child can increase an individual's risk of developing AK as an adult. Six or more painful sunburns over the course of a lifetime was found to be significantly associated with the likelihood of developing AK. Skin pigmentation Melanin is a pigment in the epidermis that functions to protect keratinocytes from the damage caused by UV radiation; it is found in higher concentrations in the epidermis of darker-skinned individuals, affording them protection against the development of AKs. Fair-skinned individuals have a significantly increased risk of developing AKs when compared to olive-skinned individuals (odds ratios of 14.1 and 6.5, respectively), and AKs are uncommon in dark-skinned people of African descent. Other phenotypic features seen in fair-skinned individuals that are associated with an increased propensity to develop AKs include: Freckling Light hair and eye color Propensity to sunburn Inability to tan Other risk factors Immunosuppression: People with a compromised immune system from medical conditions (such as AIDS) or immunosuppressive therapy (such as chronic immunosuppression after organ transplantation, or chemotherapy for cancer) are at increased risk for developing AKs. They may develop AK at an earlier age or have an increased number of AK lesions compared to immunocompetent people. 
Human papillomavirus (HPV): The role of HPV in the development of AK remains unclear, but evidence suggests that infection with the betapapillomavirus type of HPV may be associated with an increased likelihood of AK. Genodermatoses: Certain genetic disorders interfere with DNA repair after sun exposure, thereby putting these individuals at higher risk for the development of AKs. Examples of such genetic disorders include xeroderma pigmentosum and Bloom syndrome. Balding: AKs are commonly found on the scalps of balding men. The degree of baldness seems to be a risk factor for lesion development, as men with severe baldness were found to be seven times more likely to have 10 or more AKs when compared to men with minimal or no baldness. This observation can be explained by an absence of hair causing a larger proportion of scalp to be exposed to UV radiation if other sun protection measures are not taken. Diagnosis Physicians usually diagnose actinic keratosis by doing a thorough physical examination, through a combination of visual observation and touch. However a biopsy may be necessary when the keratosis is large in diameter, thick, or bleeding, in order to make sure that the lesion is not a skin cancer. Actinic keratosis may progress to invasive squamous cell carcinoma (SCC) but both diseases can present similarly upon physical exam and can be difficult to distinguish clinically. Histological examination of the lesion from a biopsy or excision may be necessary to definitively distinguish AK from in situ or invasive SCC. In addition to SCCs, AKs can be mistaken for other cutaneous lesions including seborrheic keratoses, basal cell carcinoma, lichenoid keratosis, porokeratosis, viral warts, erosive pustular dermatosis of the scalp, pemphigus foliaceus, inflammatory dermatoses like psoriasis, or melanoma. Biopsy A lesion biopsy is performed if the diagnosis remains uncertain after a clinical physical exam, or if there is suspicion that the AK might have progressed to squamous cell carcinoma. The most common tissue sampling techniques include shave or punch biopsy. When only a portion of the lesion can be removed due to its size or location, the biopsy should sample tissue from the thickest area of the lesion, as SCCs are most likely to be detected in that area. If a shave biopsy is performed, it should extend through to the level of the dermis in order to provide sufficient tissue for diagnosis; ideally, it would extend to the mid-reticular dermis. Punch biopsy usually extends to the subcutaneous fat when the entire length of the punch blade is utilized. Histopathology On histologic examination, actinic keratoses usually show a collection of atypical keratinocytes with hyperpigmented or pleomorphic nuclei, extending to the basal layer of the epidermis. A "flag sign" is often described, referring to alternating areas of orthokeratosis and parakeratosis. Epidermal thickening and surrounding areas of sun-damaged skin are often seen. The normal ordered maturation of the keratinocytes is disordered to varying degrees: there may be widening of the intracellular spaces, cytologic atypia such as abnormally large nuclei, and a mild chronic inflammatory infiltrate. Specific findings depend on the clinical variant and particular lesion characteristics. 
The seven major histopathologic variants are all characterized by atypical keratinocytic proliferation beginning in the basal layer and confined to the epidermis; they include: Hypertrophic: Notable for marked hyperkeratosis, often with evident parakeratosis. Keratinocytes in the stratum malpighii may show a loss of polarity, pleomorphism, and anaplasia. Some irregular downward proliferation into the uppermost dermis may be observed, but does not represent frank invasion. Atrophic: With slight hyperkeratosis and overall atrophic changes to the epidermis; the basal layer shows cells with large, hyperchromatic nuclei in close proximity to each other. These cells have been observed to proliferate into the dermis as buds and duct-like structures. Lichenoid: Demonstrate a band-like lymphocytic infiltrate in the papillary dermis, directly beneath the dermal-epidermal junction. Acantholytic: Intercellular clefts or lacunae in the lowermost epidermal layer that result from anaplastic changes; these produce dyskeratotic cells with disrupted intercellular bridges. Bowenoid: This term is controversial and usually refers to full-thickness atypia, microscopically indistinguishable from Bowen's disease. However, most dermatologists and pathologists will use it in reference to tissue samples that are notable for small foci of atypia that involve the full thickness of the epidermis, in the background of a lesion that is otherwise consistent with an AK. Epidermolytic: With granular degeneration. Pigmented: Show pigmentation in the basal layer of the epidermis, similar to a solar lentigo. Dermoscopy Dermoscopy is a noninvasive technique utilizing a handheld magnifying device coupled with a transilluminating light. It is often used in the evaluation of cutaneous lesions but lacks the definitive diagnostic ability of biopsy-based tissue diagnosis. Histopathologic exam remains the gold standard. Polarized contact dermoscopy of AKs occasionally reveals a "rosette sign," described as four white points arranged in a clover pattern, often localized to within a follicular opening. It is hypothesized that the "rosette sign" corresponds histologically to the changes of orthokeratosis and parakeratosis known as the "flag sign." Non-pigmented AKs: linear or wavy vascular patterning, or a "strawberry pattern," described as unfocused vessels between hair follicles, with white-haloed follicular openings. Pigmented AKs: gray to brown dots or globules surrounding follicular openings, and annular-granular rhomboidal structures; often difficult to differentiate from lentigo maligna. Prevention Ultraviolet radiation is believed to contribute to the development of actinic keratoses by inducing mutations in epidermal keratinocytes, leading to proliferation of atypical cells. Therefore, preventive measures for AKs are targeted at limiting exposure to solar radiation, including: Limiting extent of sun exposure Avoid sun exposure during noontime hours between 10:00 AM and 2:00 PM when UV light is most powerful Minimize all time in the sun, since UV exposure occurs even in the winter and on cloudy days Using sun protection Applying sunscreens with SPF ratings 30 or greater that also block both UVA and UVB light, at least every 2 hours and after swimming or sweating Applying sunscreen at least 15 minutes before going outside, as this allows time for the sunscreen to be absorbed appropriately by the skin Wearing sun protective clothing such as hats, sunglasses, long-sleeved shirts, long skirts, or trousers. 
“Consider taking 10 micrograms of vitamin D a day if you always cover up outdoors. This is because you may not get enough vitamin D from sunlight.” Recent research implicating human papillomavirus (HPV) in the development of AKs suggests that HPV prevention might in turn help prevent development of AKs, as UV-induced mutations and oncogenic transformation are likely facilitated in cases of active HPV infection. A key component of HPV prevention includes vaccination, and the CDC currently recommends routine vaccination in all children at age 11 or 12. There are some data that in individuals with a history of non-melanoma skin cancer, a low-fat diet can serve as a preventative measure against future actinic keratoses. Management There are a variety of treatment options for AK depending on the patient and the clinical characteristics of the lesion. AKs show a wide range of features, which guide decision-making in choosing treatment. As there are multiple effective treatments, patient preference and lifestyle are also factors that physicians consider when determining the management plan for actinic keratosis. Regular follow-up is advisable after any treatment to make sure no new lesions have developed and that old ones are not progressing. Adding topical treatment after a procedure may improve outcomes. Medication Topical medications are often recommended for areas where multiple or ill-defined AKs are present, as the medication can easily be used to treat a relatively large area. Fluorouracil cream Topical fluorouracil (5-FU) destroys AKs by blocking methylation of thymidylate synthetase, thereby interrupting DNA and RNA synthesis. This in turn prevents the proliferation of dysplastic cells in AK. Topical 5-FU is the most utilized treatment for AK, and often results in effective removal of the lesion. Overall, there is a 50% efficacy rate resulting in 100% clearance of AKs treated with topical 5-FU. 5-FU may be up to 90% effective in treating non-hyperkeratotic lesions. While topical 5-FU is a widely used and cost-effective treatment for AKs and is generally well tolerated, its potential side-effects can include: pain, crusting, redness, and local swelling. These adverse effects can be mitigated or minimized by reducing the frequency of application or taking breaks between uses. The most commonly used application regimen consists of applying a layer of topical cream to the lesion twice a day after washing; duration of treatment is typically 2–4 weeks to thinner skin like the cheeks and up to 8 weeks for the arms; treatment of up to 8 weeks has demonstrated a higher cure rate. Imiquimod cream Imiquimod is a topical immune-enhancing agent licensed for the treatment of genital warts. Imiquimod stimulates the immune system through the release and up-regulation of cytokines. Treatment with Imiquimod cream applied 2–3 times per week for 12 to 16 weeks was found to result in complete resolution of AKs in 50% of people, compared to 5% of controls. The Imiquimod 3.75% cream has been validated in a treatment regimen consisting of daily application to entire face and scalp for two 2-week treatment cycles, with a complete clearance rate of 36%. 
While the clearance rate observed with the Imiquimod 3.75% cream was lower than that observed with the 5% cream (36 and 50 percent, respectively), there are lower reported rates of adverse reactions with the 3.75% cream: 19% of individuals using Imiquimod 3.75% cream reported adverse reactions including local erythema, scabbing, and flaking at the application site, while nearly a third of individuals using the 5% cream reported the same types of reactions with Imiquimod treatment. However, it is ultimately difficult to compare the efficacy of the different-strength creams directly, as current study data vary in methodology (e.g. duration and frequency of treatment, and amount of skin surface area covered). Ingenol mebutate gel Ingenol mebutate is a newer treatment for AK used in Europe and the United States. It works in two ways, first by disrupting cell membranes and mitochondria, resulting in cell death, and then by inducing antibody-dependent cellular cytotoxicity to eliminate remaining tumor cells. A 3-day treatment course with the 0.015% gel is recommended for the scalp and face, while a 2-day treatment course with the 0.05% gel is recommended for the trunk and extremities. Treatment with the 0.015% gel was found to completely clear 57% of AKs, while the 0.05% gel had a 34% clearance rate. Advantages of ingenol mebutate treatment include the short duration of therapy and a low recurrence rate. Local skin reactions including pain, itching and redness can be expected during treatment with ingenol mebutate. This treatment was derived from the petty spurge, Euphorbia peplus, which has been used as a traditional remedy for keratosis. Diclofenac sodium gel Topical diclofenac sodium gel is a nonsteroidal anti-inflammatory drug that is thought to work in the treatment of AK through its inhibition of the arachidonic acid pathway, thereby limiting the production of prostaglandins, which are thought to be involved in the development of UVB-induced skin cancers. Recommended duration of therapy is 60 to 90 days with twice daily application. Treatment of facial AK with diclofenac gel led to complete lesion resolution in 40% of cases. Common side effects include dryness, itching, redness, and rash at the site of application. Retinoids Topical retinoids have been studied in the treatment of AK with modest results, and the American Academy of Dermatology does not currently recommend this as first-line therapy. Treatment with adapalene gel daily for 4 weeks, and then twice daily thereafter for a total of nine months, led to a significant but modest reduction in the number of AKs compared to placebo; it demonstrated the additional advantage of improving the appearance of photodamaged skin. Topical tretinoin is ineffective as treatment for reducing the number of AKs. For secondary prevention of AK, systemic, low-dose acitretin was found to be safe, well tolerated and moderately effective in chemoprophylaxis for skin cancers in kidney transplant patients. Acitretin is a viable treatment option for organ transplant patients according to expert opinion. Tirbanibulin Tirbanibulin (Klisyri) was approved for medical use in the United States in December 2020, for the treatment of actinic keratosis on the face or scalp. Procedures Cryotherapy Liquid nitrogen (−195.8 °C) is the most commonly used destructive therapy for the treatment of AK in the United States. It is a well-tolerated office procedure that does not require anesthesia. 
Cryotherapy is particularly indicated for cases where there are fewer than 15 thin, well-demarcated lesions. Caution is encouraged for thicker, more hyperkeratotic lesions, as dysplastic cells may evade treatment. Treatment with both cryotherapy and field treatment can be considered for these more advanced lesions. Cryotherapy is generally performed using an open-spray technique, wherein the AK is sprayed for several seconds. The process can be repeated multiple times in one office visit, as tolerated. Cure rates from 67 to 99 percent have been reported, depending on freeze time and lesion characteristics. Disadvantages include discomfort during and after the procedure; blistering, scarring and redness; hypo- or hyperpigmentation; and destruction of healthy tissue. Photodynamic therapy AKs are one of the most common dermatologic lesions for which photodynamic therapy, including topical methyl aminolevulinate (MAL) or 5-aminolevulinic acid (5-ALA), is indicated. Treatment begins with preparation of the lesion, which includes scraping away scales and crusts using a dermal curette. A thick layer of topical MAL or 5-ALA cream is applied to the lesion and a small area surrounding the lesion, which is then covered with an occlusive dressing and left for a period of time. During this time the photosensitizer accumulates in the target cells within the AK lesion. The dressings are then removed and the lesion is treated with light at a specified wavelength. Multiple treatment regimens using different photosensitizers, incubation times, light sources, and pretreatment regimens have been studied and suggest that longer incubation times lead to higher rates of lesion clearance. Photodynamic therapy is gaining in popularity. It has been found to have a 14% higher likelihood of achieving complete lesion clearance at 3 months compared to cryotherapy, and seems to result in superior cosmetic outcomes when compared to cryotherapy or 5-FU treatment. Photodynamic therapy can be particularly effective in treating areas with multiple AK lesions. Surgical techniques Surgical excision: Excision should be reserved for cases when the AK is a thick, horny papule, or when deeper invasion is suspected and histopathologic diagnosis is necessary. It is a rarely utilized technique for AK treatment. Shave excision and curettage (sometimes followed by electrodesiccation when deemed appropriate by the physician): This technique is often used for treatment of AKs, and particularly for lesions appearing more similar to squamous cell carcinoma, or those that are unresponsive to other treatments. The surface of the lesion can be scraped away using a scalpel, or the base can be removed with a curette. Tissue can be evaluated histopathologically under the microscope, but specimens acquired using this technique are not often adequate to determine whether a lesion is invasive or intraepidermal. Dermabrasion: Dermabrasion is useful in the treatment of large areas with multiple AK lesions. The process involves using a hand-held instrument to "sand" the skin, removing the stratum corneum layer of the epidermis. Diamond fraises or wire brushes revolving at high speeds are used. The procedure can be quite painful and requires procedural sedation and anesthetic, necessitating a hospital stay. One-year clearance rates with dermabrasion treatment are as high as 96%, but diminish drastically to 54% at five years. 
Laser therapy Laser therapy using carbon dioxide (CO2) or erbium:yttrium aluminum garnet (Er:YAG) lasers is a treatment approach being utilized with increased frequency, and sometimes in conjunction with computer scanning technology. Laser therapy has not been extensively studied, but current evidence suggests it may be effective in cases involving multiple AKs refractory to medical therapy, or AKs located in cosmetically sensitive locations such as the face. The CO2 laser has been recommended for extensive actinic cheilitis that has not responded to 5-FU. Chemical peels A chemical peel is a topically applied agent that wounds the outermost layer of the skin, promoting organized repair, exfoliation, and eventually the development of smooth and rejuvenated skin. Multiple therapies have been studied. A medium-depth peel may effectively treat multiple non-hyperkeratotic AKs. It can be achieved with 35% to 50% trichloroacetic acid (TCA) alone or at 35% in combination with Jessner's solution in a once-daily application for a minimum of 3 weeks; 70% glycolic acid (α-hydroxy acid); or solid CO2. When compared to treatment with 5-FU, chemical peels have demonstrated similar efficacy and increased ease of use with similar morbidity. Chemical peels must be performed in a controlled clinic environment and are recommended only for individuals who are able to comply with follow-up precautions, including avoidance of sun exposure. Furthermore, they should be avoided in individuals with a history of HSV infection or keloids, and in those who are immunosuppressed or who are taking photosensitizing medications. Prognosis Untreated AKs follow one of three paths: they can either persist as AKs, regress, or progress to invasive skin cancer, as AK lesions are considered to be on the same continuum with squamous cell carcinoma (SCC). AK lesions that regress also have the potential to recur. Progression: The overall risk of an AK turning into invasive cancer is low. In average-risk individuals, the likelihood of an AK lesion progressing to SCC is less than 1% per year. Despite this low rate of progression, studies suggest that a full 60% of SCCs arise from pre-existing AKs, reinforcing the idea that these lesions are closely related. Regression: Reported regression rates for single AK lesions have ranged between 15 and 63% after one year. Recurrence: Recurrence rates after 1 year for single AK lesions that have regressed range between 15 and 53%. Clinical course Given the aforementioned differing clinical outcomes, it is difficult to predict the clinical course of any given actinic keratosis. AK lesions may also come and go, in a cycle of appearing on the skin, remaining for months, and then disappearing. Often they will reappear in a few weeks or months, particularly after unprotected sun exposure. Left untreated, there is a chance that the lesion will advance to become invasive. Although it is difficult to predict whether an AK will advance to become squamous cell carcinoma, it has been noted that squamous cell carcinomas originate in lesions formerly diagnosed as AKs with frequencies reported between 65 and 97%. Epidemiology Actinic keratosis is very common, with an estimated 14% of dermatology visits related to AKs. It is seen more often in fair-skinned individuals, and rates vary with geographical location and age. Other factors such as exposure to ultraviolet (UV) radiation, certain phenotypic features, and immunosuppression can also contribute to the development of AKs. 
Men are more likely to develop AK than women, and the risk of developing AK lesions increases with age. These findings have been observed in multiple studies, with numbers from one study suggesting that approximately 5% of women ages 20–29 develop AK compared to 68% of women ages 60–69, and 10% of men ages 20–29 develop AK compared to 79% of men ages 60–69. Geography seems to play a role in the sense that individuals living in locations where they are exposed to more UV radiation throughout their lifetime have a significantly higher risk of developing AK. Much of the literature on AK comes from Australia, where the prevalence of AK is estimated at 40–50% in adults over 40, as compared to the United States and Europe, where prevalence is estimated at 11–38% in adults. One study found that those who immigrated to Australia after age 20 had fewer AKs than native Australians in all age groups. Research Diagnostically, researchers are investigating the role of novel biomarkers to assist in determining which AKs are more likely to develop into cutaneous or metastatic SCC. Upregulation of matrix metalloproteinases (MMP) is seen in many different types of cancers, and the expression and production of MMP-7 in particular have been found to be elevated in SCC specifically. The role of serine peptidase inhibitors (Serpins) is also being investigated. SerpinA1 was found to be elevated in the keratinocytes of SCC cell lines, and SerpinA1 upregulation was correlated with SCC tumor progression in vivo. Further investigation into specific biomarkers could help providers better assess prognosis and determine the best treatment approaches for particular lesions. In terms of treatment, a number of medications are being studied. Resiquimod is a TLR 7/8 agonist that works similarly to imiquimod, but is 10 to 100 times more potent; when used to treat AK lesions, complete response rates have ranged from 40 to 74%. Afamelanotide is a drug that induces the production of melanin by melanocytes to act as a protective factor against UVB radiation. It is being studied to determine its efficacy in preventing AKs in organ transplant patients who are on immunosuppressive therapy. Epidermal growth factor receptor (EGFR) inhibitors such as gefitinib, and anti-EGFR antibodies such as cetuximab, are used in the treatment of various types of cancers, and are currently being investigated for potential use in the treatment and prevention of AKs. References Dermatology
Actinic keratosis
Chemistry
7,379
8,419,626
https://en.wikipedia.org/wiki/Linear%20response%20function
A linear response function describes the input-output relationship of a signal transducer, such as a radio turning electromagnetic waves into music or a neuron turning synaptic input into a response. Because of its many applications in information theory, physics and engineering there exist alternative names for specific linear response functions such as susceptibility, impulse response or impedance; see also transfer function. The concept of a Green's function or fundamental solution of an ordinary differential equation is closely related. Mathematical definition Denote the input of a system by h(t) (e.g. a force), and the response of the system by x(t) (e.g. a position). Generally, the value of x(t) will depend not only on the present value of h(t), but also on past values. Approximately, x(t) is a weighted sum of the previous values of h(t′), with the weights given by the linear response function χ(t − t′): x(t) ≈ x₀ + ∫ χ(t − t′) h(t′) dt′ + ..., where the integral runs over all past times t′ ≤ t. The explicit term on the right-hand side is the leading order term of a Volterra expansion for the full nonlinear response. If the system in question is highly non-linear, higher order terms in the expansion, denoted by the dots, become important and the signal transducer cannot adequately be described just by its linear response function. The complex-valued Fourier transform χ(ω) of the linear response function is very useful as it describes the output of the system if the input is a sine wave h(t) = h₀ sin(ωt) with frequency ω. The output reads x(t) = |χ(ω)| h₀ sin(ωt + arg χ(ω)), with amplitude gain |χ(ω)| and phase shift arg χ(ω). Example Consider a damped harmonic oscillator with input given by an external driving force h(t), x″(t) + γ x′(t) + ω₀² x(t) = h(t). The complex-valued Fourier transform of the linear response function is given by χ(ω) = 1/(ω₀² − ω² + iγω). The amplitude gain is given by the magnitude of the complex number χ(ω), and the phase shift by the arctan of the imaginary part of the function divided by the real one. From this representation, we see that for small damping γ the Fourier transform χ(ω) of the linear response function yields a pronounced maximum ("Resonance") at the frequency ω ≈ ω₀. The linear response function for a harmonic oscillator is mathematically identical to that of an RLC circuit. The width of the maximum, Δω ≈ γ, typically is much smaller than ω₀, so that the quality factor Q = ω₀/Δω can be extremely large. Kubo formula The exposition of linear response theory, in the context of quantum statistics, can be found in a paper by Ryogo Kubo. This defines particularly the Kubo formula, which considers the general case that the "force" h(t) is a perturbation of the basic operator of the system, the Hamiltonian, where the perturbing operator corresponds to a measurable quantity as input, while the output x(t) is the perturbation of the thermal expectation of another measurable quantity. The Kubo formula then defines the quantum-statistical calculation of the susceptibility χ(t − t′) by a general formula involving only the mentioned operators. As a consequence of the principle of causality the complex-valued function χ(ω) has poles only in the lower half-plane. This leads to the Kramers–Kronig relations, which relate the real and the imaginary parts of χ(ω) by integration. The simplest example is once more the damped harmonic oscillator. See also Convolution Green–Kubo relations Fluctuation theorem Dispersion (optics) Lindblad equation Semilinear response Green's function Impulse response Resolvent formalism Propagator References External links Linear Response Functions in Eva Pavarini, Erik Koch, Dieter Vollhardt, and Alexander Lichtenstein (eds.): DMFT at 25: Infinite Dimensions, Verlag des Forschungszentrum Jülich, 2014 Equations of physics
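As a quick numerical illustration of the oscillator example above (a sketch, not part of the original article; the values of ω₀ and γ are arbitrary), the following Python snippet evaluates χ(ω) = 1/(ω₀² − ω² + iγω) on a frequency grid and reads off the amplitude gain, the phase shift, and the location of the resonance.

import numpy as np

def chi(omega, omega0=1.0, gamma=0.1):
    # Fourier transform of the damped oscillator's linear response function
    return 1.0 / (omega0**2 - omega**2 + 1j * gamma * omega)

omega = np.linspace(0.01, 2.0, 2000)
gain = np.abs(chi(omega))     # amplitude gain |chi(omega)|
phase = np.angle(chi(omega))  # phase shift arg chi(omega)
print("resonance near omega =", omega[np.argmax(gain)])

For small γ the printed resonance frequency is close to ω₀ and the peak has a width of order γ, in line with the qualitative discussion above.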
Linear response function
Physics,Mathematics
705
38,244,165
https://en.wikipedia.org/wiki/Marius%20Jeuken
Marius Jeuken (26 January 1916 - 24 March 1983) was professor of theoretical biology at the Institute of Theoretical Biology at Leiden University in the Netherlands, from 1968 until his death. Jeuken was also a member of the Society of Jesus; he joined the Dutch Jesuits in 1934 and was ordained a Catholic priest in 1946 in Maastricht. Background In 1953 Leiden University started a section Theoretical Biology within the department of Zoology under the direction of the mathematical biologist H.R. van der Vaart. In 1957 this section was transformed into an Institute with Van der Vaart as its Professor. But in 1961 he moved to Raleigh, North Carolina. In 1966, the institute attracted Evert Meelis as a statistical consultant and invited Marius Jeuken to direct the institute, after he had been teaching biology at the Gadjah Mada University of Yogyakarta (Indonesia). He had been trained as an animal physiologist, and had defended his thesis under supervision of C. J. van der Klaauw at Leiden University. In the same year, 1966, Jeuken became Professor by special appointment to teach Philosophy at the Landbouwhogeschool in Wageningen, a position he held until 1971. In 1968, Jeuken was given a Professorship in Philosophical Biology. For at least half of his time he was dedicated to do experimental research so his theoretical biology would not lose contact with reality. One of his specific interests was how biology could benefit from the philosophy developed by the mathematician and philosopher Alfred North Whitehead at Harvard University. It inspired one of his students, Gerard Verschuuren, to further expand on this issue (on Hemostatic Regulation). His institute has always had strong historical ties with the Prof. Dr. Jan van der Hoeven Foundation, which is still the publisher of Acta Biotheoretica, the oldest journal of theoretical biology in the world. For years, Jeuken acted as its Editor-in-Chief, until he died in 1983. Articles in English The biological and philosophical definitions of life, Acta biotheoretica, 24 (1975), 14-21. Remarks on the Is-Ought problem. In Science and absolute values. London, 1974, 1059-1062. Commentary on G. Stent's paper: structuralism and biology. In Science and absolute values. London, 1974, 858-862. A note on models and explanation in biology, Acta biotheoretica, 18 (1969), 284-290. The study of animal behaviour, Medan ilmu pengetahuan, (1961), 247-259. Philosophy and theoretical science, Laporan kongres ilmu pengetahuan nasional pertama. Djakarta, 1958, 95-114. Function in biology, Acta biotheoretica, (1958), 29-46. References Extended links A listing of his publications Acta Biotheoretica A History of the Institute by J.A.J. Metz Prof. Dr. Jan van der Hoeven Foundation for Theoretical Biology of Animal and Man Announcement of the Foundation for Theoretical Biology at Leyden, in Nature, 136, 99-99 (20 July 1935) 1916 births 1983 deaths 20th-century Dutch biologists 20th-century Dutch Jesuits Leiden University alumni Academic staff of Leiden University Scientists from The Hague Philosophers of science Theoretical biologists
Marius Jeuken
Biology
694
65,005,031
https://en.wikipedia.org/wiki/Glyceryl%20octyl%20ascorbic%20acid
Glyceryl octyl ascorbic acid (GO-VC) is an amphipathic derivative of vitamin C consisting of two ether linkages: a 1-octyl at position 2 and a glycerin at position 3. The chemical name is 2-glyceryl-3-octyl ascorbic acid. The isomer in which these two groups are swapped (2-octyl-3-glyceryl ascorbic acid, OG-VC) is also known. It is regarded as a new, stable amphipathic vitamin C derivative in the field of aesthetic medicine. Overview Vitamin C is rapidly converted to ascorbic acid radicals by UV rays, which causes cytotoxicity and sunburn, but GO-VC improves the stability of conventional vitamin C derivatives, and thus eliminates the problems of these prooxidants. Water-soluble vitamin C derivatives, such as sodium ascorbyl phosphate (APS), which have been used since the 1990s, tend to dry the skin because of their sebum-suppressing effect. On the other hand, GO-VC has a high moisturizing power due to its bound glycerin and can prevent dryness of the skin. In addition, GO-VC carries the sterilizing activity of octanol, so it is active against many bacteria. GO-VC is also used for wound healing and wrinkle prevention because it has a proliferative effect on fibroblasts and a promoting effect on type I collagen production. GO-VC has a stronger melanin production inhibitory effect than arbutin, which is used as a whitening agent, and it was confirmed in clinical trials that even low concentrations of 0.01 to 0.1% (by weight) are effective against acne redness and pigmentation. Water-soluble vitamin C derivatives such as ascorbic acid 2-glucoside and APPS (trisodium ascorbyl palmitate) cannot be added to the water-soluble polymer gels commonly used in cosmetics, such as carboxyvinyl polymer and sodium polyacrylate, because the viscosity changes and precipitation occurs. On the other hand, GO-VC can be dispersed in a water-soluble polymer gel transparently and uniformly, or can be stably dissolved for a long time. Fat-soluble vitamin C derivatives such as ascorbyl tetrahexyl decanoate (VC-IP) are almost insoluble in water, making it difficult to mix them into water-soluble formulations such as lotions without the use of surfactants. Fat-soluble vitamin C derivatives also cause lipid oxidation problems when lipids are released, and the color of the formulation tends to change. GO-VC can solve these problems almost completely. GO-VC is well absorbed percutaneously due to its amphiphilic nature, and because it is negatively charged rather than completely non-ionic, its percutaneous absorption can be enhanced with an iontophoresis device. In addition, GO-VC is amphipathic but does not have a lipid group, so there are few skin toxicity problems due to lipid peroxidation, and it does not have the sticky feeling of conventional vitamin C derivatives and has a good feel. Stability When an aqueous solution containing vitamin C and GO-VC was stored at 50 °C for 90 days, the residual amount of vitamin C decreased to less than 30% within 30 days, whereas the residual amount of GO-VC was 90% or more. Moreover, after 90 days, 80% or more of the GO-VC was confirmed to remain. This high stability is thought to be due to the two most reactive hydroxyl groups of vitamin C being capped by glycerin and octanol at the same time. The viscosity of preparations combining GO-VC with a polymer gel is also stable, and they can be kept in a transparent state for a long period of time. GO-VC can therefore be added to many preparations such as lotions, creams, serums and gels. 
Acne It has been reported that GO-VC is effective against post-inflammatory hyperpigmentation (PIH), post-inflammatory erythema (PIE), and atrophic scars (AS), which are important complications of acne. In one study, a complex vitamin C derivative lotion containing GO-VC was applied to the right side of the face twice a day for 3 months in each of 10 patients with acne, with the untreated left side serving as a control. After 3 months, a marked improvement in PIH, PIE, and AS was reported only on the right side, where the lotion containing GO-VC had been applied. Pigmentation Many phenolic compounds, which are conventional whitening agents, react with tyrosinase to induce melanocyte-specific cytotoxicity, and thus carry a risk of causing vitiligo. GO-VC reduced the intracellular melanin content of B16 melanoma cells. GO-VC's pigmentation inhibitory mechanism has been shown to act through a novel melanogenesis inhibitory system that does not depend on inhibition of tyrosinase activity, indicating that it is a safe and effective pigmentation inhibitor with a low risk of vitiligo. GO-VC showed a remarkable effect in a clinical study of pigmentation suppression, in which a gel preparation containing 0.1% GO-VC was applied to the entire face twice a day, in the morning and evening, by 13 female subjects with an average age of 39.8 years. Over 1 to 5 months of treatment, GO-VC significantly improved post-inflammatory pigmentation. GO-VC has also been reported to clearly improve pigmentation caused by metal allergy, for which hydroquinone had not been very effective. Skin pore related diseases Since conventional water-soluble vitamin C does not easily penetrate the skin barrier, amphipathic vitamin C derivatives were developed to improve this. However, since lipids such as palmitic acid were chemically attached to vitamin C in earlier derivatives, exposure to ultraviolet light generated free fatty acids, raising concerns about lipid peroxidation. GO-VC was expected to avoid the problem of lipid peroxidation because its amphipathicity comes from octanol rather than a lipid. The effect of a 0.05% GO-VC gel on skin pore related diseases was also investigated. It was confirmed that there were no side effects and that the number of abnormal pores decreased to 70% or less within 1 to 2 months after application. References Organic acids 3-Hydroxypropenals Glycerol ethers Lactones
Glyceryl octyl ascorbic acid
Chemistry
1,394
39,592,843
https://en.wikipedia.org/wiki/Next-generation%20matrix
In epidemiology, the next-generation matrix is used to derive the basic reproduction number, for a compartmental model of the spread of infectious diseases. In population dynamics it is used to compute the basic reproduction number for structured population models. It is also used in multi-type branching models for analogous computations. The method to compute the basic reproduction ratio using the next-generation matrix is given by Diekmann et al. (1990) and van den Driessche and Watmough (2002). To calculate the basic reproduction number by using a next-generation matrix, the whole population is divided into n compartments, of which m < n are infected compartments. Let x_i, for i = 1, 2, ..., m, be the number of infected individuals in the i-th infected compartment at time t. Now, the epidemic model is dx_i/dt = F_i(x) − V_i(x), where V_i(x) = V_i^−(x) − V_i^+(x). In the above equations, F_i(x) represents the rate of appearance of new infections in compartment i, V_i^+(x) represents the rate of transfer of individuals into compartment i by all other means, and V_i^−(x) represents the rate of transfer of individuals out of compartment i. The above model can also be written as dx/dt = F(x) − V(x), where F(x) = (F_1(x), ..., F_m(x))^T and V(x) = (V_1(x), ..., V_m(x))^T. Let x_0 be the disease-free equilibrium. The Jacobian matrices of F and V evaluated at x_0, restricted to the infected compartments, are F = [∂F_i/∂x_j (x_0)] and V = [∂V_i/∂x_j (x_0)], respectively. Here, F and V are m × m matrices, with 1 ≤ i, j ≤ m. Now, the matrix FV⁻¹ is known as the next-generation matrix. The basic reproduction number of the model is then given by the eigenvalue of FV⁻¹ with the largest absolute value (the spectral radius of FV⁻¹). Next generation matrices can be computationally evaluated from observational data, which is often the most productive approach where there are large numbers of compartments. See also Mathematical modelling of infectious disease References Sources Matrices Epidemiology
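As a worked illustration (an example chosen here, not taken from the article): in a standard SEIR model with transmission rate beta, incubation rate sigma, recovery rate gamma and background mortality mu, the infected compartments are E and I, and the recipe above yields R_0 = beta*sigma/((sigma+mu)(gamma+mu)). The Python sketch below builds F and V for that model, forms FV⁻¹ and takes its spectral radius; the parameter values are arbitrary.

import numpy as np

beta, sigma, gamma, mu = 0.5, 0.2, 0.1, 0.01  # assumed example rates

F = np.array([[0.0, beta],            # new infections enter E at rate beta*I near the disease-free equilibrium
              [0.0, 0.0]])
V = np.array([[sigma + mu, 0.0],      # outflow from E (progression and death)
              [-sigma, gamma + mu]])  # inflow E -> I, outflow from I (recovery and death)

K = F @ np.linalg.inv(V)              # next-generation matrix F V^-1
R0 = max(abs(np.linalg.eigvals(K)))   # spectral radius
print(R0, beta * sigma / ((sigma + mu) * (gamma + mu)))  # the two numbers agree

The same recipe extends to models with more infected compartments; only the F and V matrices change.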
Next-generation matrix
Mathematics,Environmental_science
330
63,326,139
https://en.wikipedia.org/wiki/Integrally%20convex%20set
An integrally convex set is the discrete geometry analogue of the concept of convex set in geometry. A subset X of the integer grid is integrally convex if any point y in the convex hull of X can be expressed as a convex combination of the points of X that are "near" y, where "near" means that each coordinate differs by less than 1. Definitions Let X be a subset of Z^n, the n-dimensional integer grid. Denote by ch(X) the convex hull of X. Note that ch(X) is a subset of R^n, since it contains all the real points that are convex combinations of the integer points in X. For any point y in R^n, denote near(y) := {z in Z^n | |zi − yi| < 1 for all i in {1,...,n} }. These are the integer points that are considered "nearby" to the real point y. A subset X of Z^n is called integrally convex if every point y in ch(X) is also in ch(X ∩ near(y)). Example Let n = 2 and let X = { (0,0), (1,0), (2,0), (2,1) }. Its convex hull ch(X) contains, for example, the point y = (1.2, 0.5). The integer points nearby y are near(y) = { (1,0), (2,0), (1,1), (2,1) }. So X ∩ near(y) = { (1,0), (2,0), (2,1) }. But y is not in ch(X ∩ near(y)). Therefore X is not integrally convex. In contrast, the set Y = { (0,0), (1,0), (2,0), (1,1), (2,1) } is integrally convex. Properties Iimura, Murota and Tamura have shown the following property of integrally convex sets. Let X be a finite integrally convex set. There exists a triangulation of ch(X) that is integral, i.e.: The vertices of the triangulation are the vertices of X; The vertices of every simplex of the triangulation lie in the same "cell" (hypercube of side-length 1) of the integer grid Z^n. The example set X is not integrally convex, and indeed ch(X) does not admit an integral triangulation: every triangulation of ch(X) either has to add vertices not in X, or has to include simplices that are not contained in a single cell. In contrast, the set Y = { (0,0), (1,0), (2,0), (1,1), (2,1) } is integrally convex, and indeed admits an integral triangulation, e.g. with the three simplices {(0,0),(1,0),(1,1)} and {(1,0),(2,0),(2,1)} and {(1,0),(1,1),(2,1)}. References Discrete geometry
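A small computational check of the example above makes the definition concrete. The sketch below is illustrative only (it assumes NumPy and SciPy are available, and the helper name in_convex_hull is ours): it tests whether y = (1.2, 0.5) is a convex combination of the points of X near y by solving a small linear feasibility problem for the weights.

import numpy as np
from scipy.optimize import linprog

def in_convex_hull(points, y):
    # y lies in conv(points) iff there are weights lam >= 0 with sum(lam) = 1 and P^T lam = y
    P = np.asarray(points, dtype=float)       # shape (k, n)
    k = P.shape[0]
    A_eq = np.vstack([P.T, np.ones((1, k))])  # coordinate constraints plus the sum-to-one constraint
    b_eq = np.append(np.asarray(y, dtype=float), 1.0)
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * k, method="highs")
    return res.status == 0                    # feasible <=> y is in the convex hull

X = [(0, 0), (1, 0), (2, 0), (2, 1)]
y = (1.2, 0.5)
near_y = [p for p in X if all(abs(p[i] - y[i]) < 1 for i in range(len(y)))]  # X ∩ near(y)
print(near_y, in_convex_hull(near_y, y))      # [(1, 0), (2, 0), (2, 1)] False

Running the same check with the set Y from the example returns True for this particular y, consistent with the discussion above.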
Integrally convex set
Mathematics
700
41,561,194
https://en.wikipedia.org/wiki/Rail-Veyor
Railveyor is a remote controlled, electrically powered light-rail haulage solution for surface and underground applications in the mining and aggregate industries. Railveyor Technologies Global Inc. is a private Sudbury, Canada-based industrial bulk material handling and material haulage company that manufactures and installs Railveyor systems. History Railveyor's light-rail system was first demonstrated by its inventor, Mike Dibble, in conjunction with the Florida Institute of Phosphate Research from 1999-2001. Since then it has been installed commercially by Harmony Gold at its Phakisa Gold Mine in Free State, South Africa. Canadian entrepreneur Risto Laamanen incorporated the business, secured the global distribution rights, and set up a second demonstration and test site with Vale S.A. at their Frood Stobie mine in Sudbury, Ontario, Canada in 2008. Following successful testing of the system at the Frood Stobie test site, a Railveyor system was installed at Vale's Copper Cliff Mine 114 Ore Body Mine and became operational in 2012, with the intention of using the Railveyor system as an enabling technology for rapid mine development and high speed production. Risto Laamanen died on July 7, 2009, but the Laamanen family continue to be large investors in the private company, Railveyor Technologies Global Inc., along with investors from Canada and the United States of America. About The Railveyor system incorporates a remotely operated electrically powered series of two wheeled railcars driven by power stations located along on a light-rail track. Because the cars are remotely operated and compact in size, they can be used as an enabling technology for rapid development and high speed production at the working face. The Railveyor system can reduce capital costs and infrastructure, travelling below shafts and in spaces as small as 10 by 12 feet or 3.05 m by 3.66 m. Using multiple train systems in tandem optimizes continuous material haulage. The railcars can travel at variable speeds up to 18 mph, or 8 metres/second, and climb grades of 20%. The company claims that the system combines the best features of conveyors, rail, and truck haulage, including travelling on 20% inclines, increased capacity and availability, reduced installation time, a small profile, and a short turning radius of 95 feet or 30 m. The system is used for underground and surface applications in the mining and aggregate industries. Awards 2013 Bell Canada Business Excellence Award for Innovation 2020 Mining Cleantech Challenge winner References External links Manufacturing companies established in 1999 Industrial equipment Material-handling equipment Companies based in Greater Sudbury 1999 establishments in Ontario
Rail-Veyor
Engineering
529
14,075,504
https://en.wikipedia.org/wiki/VEGFR1
Vascular endothelial growth factor receptor 1 is a protein that in humans is encoded by the FLT1 gene. Function FLT1 is a member of VEGF receptor gene family. It encodes a receptor tyrosine kinase which is activated by VEGF-A, VEGF-B, and placental growth factor. The sequence structure of the FLT1 gene resembles that of the FMS (now CSF1R) gene; hence, Yoshida et al. (1987) proposed the name FLT as an acronym for FMS-like tyrosine kinase. The ablation of VEGFR1 by chemical and genetic means has also recently been found to augment the conversion of white adipose tissue to brown adipose tissue as well as increase brown adipose angiogenesis in mice. Functional genetic variation in FLT1 (rs9582036) has been found to affect non-small cell lung cancer survival. Interactions FLT1 has been shown to interact with PLCG1 and vascular endothelial growth factor B (VEGF-B). See also VEGF receptors References Further reading Tyrosine kinase receptors
VEGFR1
Chemistry
234
10,554,103
https://en.wikipedia.org/wiki/Chronometer%20watch
A chronometer (, khronómetron, "time measurer") is an extraordinarily accurate mechanical timepiece, with an original focus on the needs of maritime navigation. In Switzerland, timepieces certified by the Contrôle Officiel Suisse des Chronomètres (COSC) may be marked as Certified Chronometer or Officially Certified Chronometer. Outside Switzerland, equivalent bodies, such as the Japan Chronometer Inspection Institute, have in the past certified timepieces to similar standards, although use of the term has not always been strictly controlled. History The term chronometer was coined by Jeremy Thacker of Beverley, England in 1714, referring to his invention of a clock ensconced in a vacuum chamber. The term chronometer is also used to describe a marine chronometer used for celestial navigation and determination of longitude. The marine chronometer was invented by John Harrison in 1730. This was the first of a series of chronometers that enabled accurate marine navigation. From then on, an accurate chronometer was essential to open-ocean marine or air navigation out of sight of land. Early in the 20th century the advent of radiotelegraphy time signals supplemented the onboard marine chronometer for marine and air navigation, and various radio navigation systems were invented, developed, and implemented during and following the Second World War (e.g., Gee, Sonne (a.k.a. Consol), LORAN(-A and -C), Decca Navigator System and Omega Navigation System) that significantly reduced the need for positioning using an onboard marine chronometer. These culminated in the development and implementation of global satellite navigation systems (GSN-GPS) in the last quarter of the 20th century. The marine chronometer is no longer used as the primary means for navigation at sea, although it is still required as a backup, since radio systems and their associated electronics can fail for a variety of reasons. Once mechanical timepiece movements developed sufficient precision to allow for accurate marine navigation, there eventually developed what became known as "chronometer competitions" at astronomical observatories located in Europe. The Neuchâtel Observatory, Geneva Observatory, Besancon Observatory, and Kew Observatory are prominent examples of observatories that certified the accuracy of mechanical timepieces. The observatory testing regime typically lasted for 30 to 50 days and contained accuracy standards that were far more stringent and difficult than modern standards such as those set by COSC. When a movement passed the observatory, it became certified as an observatory chronometer and received a Bulletin de Marche from the Observatory, stipulating the performance of the movement. Because only very few movements were ever given the attention and manufacturing level necessary to pass the Observatory standards, there are very few observatory chronometers in existence. Most observatory chronometers had movements so specialized to accuracy that they could never withstand being used as wristwatches in normal usage. They were useful only for accuracy competitions, and so never were sold to the public for usage. However, in 1966 and 1967, Girard Perregaux manufactured approximately 670 wristwatches with the Calibre 32A movement, which became Observatory Chronometers certified by the Neuchatel Observatory, while in 1968, 1969 and 1970 Seiko had 226 wristwatches with its 4520 and 4580 Calibres certified. 
These observatory chronometers were then sold to the public for normal usage as wristwatches, and some examples may still be found today. The observatory competitions ended with the advent of the quartz watch movement, in the late 1960s and early 1970s, which generally has superior accuracy at far lesser costs. In 2009, the Watch Museum of Le Locle renewed the tradition and launched a new chronometry contest based on ISO 3159 certification. In 2017 the Observatory Chronometer Database (OCD) went online, which contains all mechanical timepieces ("chronometres-mecaniques") certified as observatory chronometers by the observatory in Neuchatel from 1945 to 1967, due to a successful participation in the competition which resulted in the issuance of a Bulletin de Marche. All database entries are submissions to the wristwatch category ("chronometres-bracelet") at the observatory competition. The term chronometer is often wrongly used by the general public to refer to timekeeping instruments fitted with an additional mechanism that may be set in motion by pushbuttons to enable measurement of the duration of an event. Such an instrument, typically called a stopwatch, is in fact a chronograph or chronoscope. It may be chronometer certified, provided it meets the criteria set for the standard. Mechanical chronometers A mechanical chronometer is a spring-driven escapement timekeeper, like a watch, but its parts are more massively built. Changes in the elasticity of the balance spring caused by variations in temperature are compensated for by devices built into it. Chronometers often included other innovations to increase their efficiency and precision. Hard stones such as diamond, ruby, and sapphire were often used as jewel bearings to decrease friction and wear of the pivots and escapement. Chronometer makers also took advantage of the physical properties of rare metals such as gold, platinum, and palladium. Complications In horological terms, a complication in a mechanical watch is a special feature that causes the design of the watch movement to become more complicated. Examples of complications include: Tourbillon Perpetual calendar Minute repeater Equation of time Power reserve Moon phases Chronograph Rattrapante Grande sonnerie More recent times Quartz and atomic timepieces have made mechanical chronometers obsolete for time standards used scientifically and/or industrially. Most watchmakers do still produce them. However, they are mostly considered status symbols promoted by luxury watchmakers as a symbol of fine craftmanship and aesthetics. Certified chronometers More than 1.8 million officially-certified chronometer certificates, mostly for mechanical wristwatch chronometers (wristwatches) with sprung balance oscillators, are being delivered each year, after passing the COSC's most extreme tests and being singly identified by an officially-recorded individual serial number. According to COSC, an officially-certified chronometer is a high-precision watch capable of displaying the seconds and housing a movement that has been tested over several days, in different positions, and at different temperatures, by an official, neutral body (COSC). Each movement is individually tested for several consecutive days, in five positions and at three temperatures. Any watch with denominations "certified chronometer" or "officially-certified chronometer" contains a certified movement and matches the criteria in ISO 3159 Timekeeping instruments—wristwatch chronometers with spring balance oscillator. 
See also References External links American Watchmakers-Clockmakers Institute Federation of the Swiss Watch Industry Contrôle Officiel Suisse des Chronomètres - COSC Accuracy of wristwatches Observatory Chronometer Database (OCD) Chronometer certification chronometer Cronosurf - The online chronometer watch - Web Chronograph Chronometer web version Clocks Watches
Chronometer watch
Physics,Technology,Engineering
1,493
31,341,292
https://en.wikipedia.org/wiki/When%20Technology%20Fails
When Technology Fails, edited by Neil Schlager, is a collection of 103 case studies about significant technological disasters, accidents, and failures of the 20th century. It was published in 1994 by Gale Research, Inc. It was one of the top referenced books in the New York Public Library in 1995. The book was updated and re-released in 2005. The book consists of 1,000- to 1,500-word entries, arranged by subject, that discuss the background, timeline, and impact of each event. Each entry is written by journalists, engineers, and researchers, and provides a cursory overview, rather than in-depth technological analysis. Entries are supplemented by bibliographies, black-and-white photographs, charts, and other print media. See also Normal Accidents Megaprojects and Risk Northeast Blackout of 2003 Brittle Power Fukushima nuclear disaster References External links https://openlibrary.org/books/OL1430475M/When_technology_fails Accidents Engineering failures Books about nuclear issues
When Technology Fails
Technology,Engineering
213
17,509,460
https://en.wikipedia.org/wiki/Philips%20Pavilion
The Philips Pavilion (; ) was a modernist pavilion in Brussels, Belgium, constructed for the 1958 Brussels World's Fair (Expo 58). Commissioned by electronics manufacturer Philips and designed by the office of Le Corbusier, it was built to house a multimedia spectacle that celebrated postwar technological progress. Because Le Corbusier was busy with the planning of Chandigarh, much of the project management was assigned to Iannis Xenakis, who was also an experimental composer and was influenced in the design by his composition Metastaseis. The reinforced concrete pavilion is a cluster of nine hyperbolic paraboloids in which Edgard Varèse's Poème électronique was spatialized by sound projectionists using telephone dials. The speakers were set into the walls, which were coated in asbestos, giving a textured look to the walls. Varèse drew up a detailed spatialization scheme for the entire piece, which made great use of the pavilion's physical layout, especially its height. The asbestos hardened the walls, which created a cavernous acoustic. As audiences entered and exited the building, Xenakis's musique concrète composition Concret PH was heard. The building was demolished on 30 January 1959. The European Union funded a virtual recreation of the Philips Pavilion, which was chaired by Vincenzo Lombardi from the University of Turin. Arseniusz Romanowicz's Warszawa Ochota train station in Poland is supposedly inspired by the Philips Pavilion. Construction References Further reading Marc Treib, Space Calculated in Seconds: The Philips Pavilion, Le Corbusier, Edgard Varèse, Princeton: Princeton Architectural Press, 1996 James Harley, Xenakis: his life in music, London: Taylor & Francis Books, 2004 Richard Jarvis, Music to my Eyes: The design of the Philips Pavilion by Ianis Xenakis, Boston: Boston Architectural Center, 2002 "The Architectural Design of Le Corbusier and Xenakis" in Philips Technical Review v. 20 n. 1 (1958/1959) Joe Drew, "Recreating the Philips Pavilion", ANABlog. January 16, 2010. Jan de Heer and Kees Tazelaar, From Harmony to Chaos: Le Corbusier, Varèse, Xenakis and Le poème électronique, Amsterdam: 1001 Publishers, 2017 External links Film De Bouw van het Philips Paviljoen (Building the Philips Pavilion), a Dutch documentary about the construction project. Virtual Electronic Poem Project, a site about a virtual reconstruction of the Philips Pavilion with extensive information about the original site. Le Corbusier buildings Expo 58 1958 in Belgium Spatial music Philips World's fair architecture in Belgium Former buildings and structures in Belgium Hyperboloid structures
Philips Pavilion
Technology
555
984,629
https://en.wikipedia.org/wiki/Social%20complexity
In sociology, social complexity is a conceptual framework used in the analysis of society. In the sciences, contemporary definitions of complexity are found in systems theory, wherein the phenomenon being studied has many parts and many possible arrangements of the parts; simultaneously, what is complex and what is simple are relative and change in time. Contemporary usage of the term complexity specifically refers to sociologic theories of society as a complex adaptive system; however, social complexity and its emergent properties are recurring subjects throughout the historical development of social philosophy and the study of social change. Early theoreticians of sociology, such as Ferdinand Tönnies, Émile Durkheim, Max Weber, Vilfredo Pareto and Georg Simmel, examined the exponential growth and interrelatedness of social encounters and social exchanges. The emphases on the interconnectivity among social relationships, and on the emergence of new properties within society, are found in the social theory produced in the subfields of sociology. Social complexity is a basis for the connection of the phenomena reported in microsociology and macrosociology, and thus provides an intellectual middle-range for sociologists to formulate and develop hypotheses. Methodologically, social complexity is theory-neutral, and includes the phenomena studied in microsociology and the phenomena studied in macrosociology. Theoretic background In 1937, the sociologist Talcott Parsons continued the work of the early theoreticians of sociology with his work on action theory; and by 1951, Parsons had developed action theory into formal systems theory in The Social System (1951). In the following decades, the synergy between general systems thinking and the development of social system theories was carried forward by Robert K. Merton in discussions of theories of the middle range and of social structure and agency. From the late 1970s until the early 1990s, sociological investigation concerned the properties of systems in which the strong correlation of sub-parts leads to the observation of autopoietic, self-organizing, dynamical, turbulent, and chaotic behaviours that arise from mathematical complexity, such as the work of Niklas Luhmann. One of the earliest usages of the term "complexity" in the social and behavioral sciences to refer specifically to a complex system is found in the study of modern organizations and management studies. However, particularly in management studies, the term often has been used in a metaphorical rather than in a qualitative or quantitative theoretical manner. By the mid-1990s, the "complexity turn" in the social sciences began as some of the same tools generally used in complexity science were incorporated into the social sciences. By 1998, the international electronic periodical, the Journal of Artificial Societies and Social Simulation, had been created. In the last several years, many publications have presented overviews of complexity theory within the field of sociology. Within this body of work, connections are also drawn to yet other theoretical traditions, including constructivist epistemology and the philosophical positions of phenomenology, postmodernism and critical realism. Methodologies Methodologically, social complexity is theory-neutral, meaning that it accommodates both local and global approaches to sociological research. 
The very idea of social complexity arises out of the historical-comparative methods of early sociologists; obviously, this method is important in developing, defining, and refining the theoretical construct of social complexity. As complex social systems have many parts and there are many possible relationships between those parts, appropriate methodologies are typically determined to some degree by the research level of analysis differentiated by the researcher according to the level of description or explanation demanded by the research hypotheses. At the most localized level of analysis, ethnographic, participant- or non-participant observation, content analysis and other qualitative research methods may be appropriate. More recently, highly sophisticated quantitative research methodologies are being developed and used in sociology at both local and global levels of analysis. Such methods include (but are not limited to) bifurcation diagrams, network analysis, non-linear modeling, and computational models including cellular automata programming, sociocybernetics and other methods of social simulation. Complex social network analysis Complex social network analysis is used to study the dynamics of large, complex social networks. Dynamic network analysis brings together traditional social network analysis, link analysis and multi-agent systems within network science and network theory. Through the use of key concepts and methods in social network analysis, agent-based modeling, theoretical physics, and modern mathematics (particularly graph theory and fractal geometry), this method of inquiry brought insights into the dynamics and structure of social systems. New computational methods of localized social network analysis are coming out of the work of Duncan Watts, Albert-László Barabási, Nicholas A. Christakis, Kathleen Carley and others. New methods of global network analysis are emerging from the work of John Urry and the sociological study of globalization, linked to the work of Manuel Castells and the later work of Immanuel Wallerstein. Since the late 1990s, Wallerstein increasingly makes use of complexity theory, particularly the work of Ilya Prigogine. Dynamic social network analysis is linked to a variety of methodological traditions, above and beyond systems thinking, including graph theory, traditional social network analysis in sociology, and mathematical sociology. It also links to mathematical chaos and complex dynamics through the work of Duncan Watts and Steven Strogatz, as well as fractal geometry through Albert-László Barabási and his work on scale-free networks. Computational sociology The development of computational sociology involves such scholars as Nigel Gilbert, Klaus G. Troitzsch, Joshua M. Epstein, and others. The foci of methods in this field include social simulation and data-mining, both of which are sub-areas of computational sociology. Social simulation uses computers to create an artificial laboratory for the study of complex social systems; data-mining uses machine intelligence to search for non-trivial patterns of relations in large, complex, real-world databases. The emerging methods of socionics are a variant of computational sociology. Computational sociology is influenced by a number of micro-sociological areas as well as the macro-level traditions of systems science and systems thinking. 
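As a purely illustrative sketch (not taken from this article or its sources), the kind of scale-free, preferential-attachment structure mentioned above in connection with Barabási's work can be generated and inspected in a few lines of Python, assuming the networkx library is available; the parameter choices here are arbitrary:

import collections
import networkx as nx

# Barabási–Albert preferential-attachment graph: 10,000 nodes, each new node
# attaching to 3 existing nodes with probability proportional to their degree.
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)

# Tally how many nodes have each degree; in a scale-free network the tail of
# this distribution follows an approximate power law, with a few highly
# connected "hubs" and many weakly connected nodes.
degree_counts = collections.Counter(d for _, d in G.degree())
for degree in sorted(degree_counts):
    print(degree, degree_counts[degree])

Sketches of this kind are one way network and agent-based models give researchers a bottom-up laboratory for hypotheses about social structure; empirical studies, of course, calibrate such models against observed data rather than synthetic graphs.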
The micro-level influences of symbolic interaction, exchange, and rational choice, along with the micro-level focus of computational political scientists, such as Robert Axelrod, helped to develop computational sociology's bottom-up, agent-based approach to modeling complex systems. This is what Joshua M. Epstein calls generative science. Other important areas of influence include statistics, mathematical modeling and computer simulation. Sociocybernetics Sociocybernetics integrates sociology with second-order cybernetics and the work of Niklas Luhmann, along with the latest advances in complexity science. In terms of scholarly work, the focus of sociocybernetics has been primarily conceptual and only slightly methodological or empirical. Sociocybernetics is directly tied to systems thought inside and outside of sociology, specifically in the area of second-order cybernetics. Areas of application In the first decade of the 21st century, the diversity of areas of application has grown as more sophisticated methods have developed. Social complexity theory is applied in studies of social cooperation and public goods; altruism; education; global civil society collective action and social movements; social inequality; workforce and unemployment; policy analysis; health care systems; and innovation and social change, to name a few. A current international scientific research project, the Seshat: Global History Databank, was explicitly designed to analyze changes in social complexity from the Neolithic Revolution until the Industrial Revolution. As a middle-range theoretical platform, social complexity can be applied to any research in which social interaction or the outcomes of such interactions can be observed, but particularly where they can be measured and expressed as continuous or discrete data points. One common criticism often cited regarding the usefulness of complexity science in sociology is the difficulty of obtaining adequate data. Nonetheless, application of the concept of social complexity and the analysis of such complexity has begun and continues to be an ongoing field of inquiry in sociology. From childhood friendships and teen pregnancy to criminology and counter-terrorism, theories of social complexity are being applied in almost all areas of sociological research. In the area of communications research and informetrics, the concept of self-organizing systems appears in mid-1990s research related to scientific communications. Scientometrics and bibliometrics are areas of research in which discrete data are available, as are several other areas of social communications research such as sociolinguistics. Social complexity is also a concept used in semiotics. See also Social science Complex society Complexity economics Complexity theory and organizations Differentiation (sociology) Econophysics Engaged theory Network Analysis and Ethnographic Problems Personal information management General Aggregate data Artificial neural network Cognitive complexity Computational complexity theory Dual-phase evolution Evolutionary programming Game theory Generic-case complexity Multi-agent system Systemography References Further reading Byrne, David (1998). Complexity Theory and the Social Sciences. London: Routledge. Byrne, D., & Callaghan, G. (2013). Complexity theory and the social sciences: The state of the art. Routledge. Castellani, Brian and Frederic William Hafferty (2009). Sociology and Complexity Science: A New Area of Inquiry (Series: Understanding Complex Systems XV). 
Berlin, Heidelberg: Springer-Verlag. Eve, Raymond, Sara Horsfall and Mary E. Lee (1997). Chaos, Complexity and Sociology: Myths, Models, and Theories. Thousand Oaks, CA: Sage Publications. Jenks, Chris and John Smith (2006). Qualitative Complexity: Ecology, Cognitive Processes and the Re-Emergence of Structures in Post-Humanist Social Theory. New York, NY: Routledge. Kiel, L. Douglas (ed.) (2008). Knowledge Management, Organizational Intelligence, Learning and Complexity. UNESCO (EOLSS): Paris, France. Kiel, L. Douglas and Euel Elliott (eds.) (1997). Chaos Theory in the Social Sciences: Foundations and Applications. The University of Michigan Press: Ann Arbor, MI. Leydesdorff, Loet (2001). A Sociological Theory of Communication: The Self-Organization of the Knowledge-Based Society. Parkland, FL: Universal Publishers. Urry, John (2005). "The Complexity Turn." Theory, Culture and Society, 22(5): 1–14. Complex systems theory Self-organization Nonlinear systems Sociological theories Sociological terminology
Social complexity
Mathematics
2,131
13,079,354
https://en.wikipedia.org/wiki/Drazin%20inverse
In mathematics, the Drazin inverse, named after Michael P. Drazin, is a kind of generalized inverse of a matrix. Let A be a square matrix. The index of A is the least nonnegative integer k such that rank(A^(k+1)) = rank(A^k). The Drazin inverse of A is the unique matrix A^D that satisfies A^(k+1) A^D = A^k, A^D A A^D = A^D, and A A^D = A^D A. It is not a generalized inverse in the classical sense, since A A^D A ≠ A in general. If A is invertible with inverse A^(-1), then A^D = A^(-1). If A is a block diagonal matrix A = diag(B, N), where B is invertible with inverse B^(-1) and N is a nilpotent matrix, then A^D = diag(B^(-1), 0). Drazin inversion is invariant under conjugation: if A^D is the Drazin inverse of A, then P A^D P^(-1) is the Drazin inverse of P A P^(-1). The Drazin inverse of a matrix of index 0 or 1 is called the group inverse or {1,2,5}-inverse and denoted A^#. The group inverse can be defined, equivalently, by the properties A A^# A = A, A^# A A^# = A^#, and A A^# = A^# A. A projection matrix P, defined as a matrix such that P^2 = P, has index 1 (or 0) and has Drazin inverse P^D = P. If A is a nilpotent matrix (for example a shift matrix), then A^D = 0. The hyper-power sequence x_(i+1) := x_i (2 - A x_i) can be used to compute the Drazin inverse iteratively: for a suitably chosen starting matrix x_0 that commutes with A, the sequence tends to the Drazin inverse, x_i → A^D. Drazin inverses in categories A study of Drazin inverses via category-theoretic techniques, and a notion of Drazin inverse for a morphism of a category, has recently been initiated by Cockett, Pacaud Lemay and Srinivasan. This notion is a generalization of the linear algebraic one, as there is a suitably defined category whose morphisms are matrices with complex entries; a Drazin inverse for the matrix M amounts to a Drazin inverse for the corresponding morphism in that category. Jordan normal form and Jordan–Chevalley decomposition As the definition of the Drazin inverse is invariant under matrix conjugation, writing A = P J P^(-1), where J is in Jordan normal form, implies that A^D = P J^D P^(-1). The Drazin inverse is then the operation that maps invertible Jordan blocks to their inverses, and nilpotent Jordan blocks to zero. More generally, we may define the Drazin inverse over any perfect field by using the Jordan–Chevalley decomposition A = A_s + A_n, where A_s is semisimple, A_n is nilpotent, and the two operators commute. The two terms can be block diagonalized with blocks corresponding to the kernel and the cokernel of A_s. The Drazin inverse in the same basis is then defined to be zero on the kernel of A_s, and equal to the inverse of A on the cokernel of A_s. See also Constrained generalized inverse Inverse element Moore–Penrose inverse Jordan normal form Generalized eigenvector References External links Drazin inverse on Planet Math Group inverse on Planet Math Matrices
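As an illustrative numerical sketch (not part of the article above), the Drazin inverse can be computed with NumPy using the identity A^D = A^l (A^(2l+1))^+ A^l for any l at least the index of A, where "+" denotes the Moore–Penrose pseudoinverse. That identity comes from the generalized-inverse literature rather than from the text above, so treat this as an assumption-laden sketch, not a reference implementation; the function names are this sketch's own.

import numpy as np

def matrix_index(A, tol=1e-10):
    """Smallest k with rank(A^(k+1)) == rank(A^k), i.e. the index of A."""
    power = np.eye(len(A))
    rank = np.linalg.matrix_rank(power, tol=tol)
    k = 0
    while True:
        power = power @ A
        next_rank = np.linalg.matrix_rank(power, tol=tol)
        if next_rank == rank:
            return k
        k, rank = k + 1, next_rank

def drazin_inverse(A, tol=1e-10):
    """Drazin inverse via A^D = A^l (A^(2l+1))^+ A^l with l >= ind(A)."""
    A = np.asarray(A, dtype=float)
    l = max(matrix_index(A, tol), 1)
    Al = np.linalg.matrix_power(A, l)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * l + 1)) @ Al

# Block-diagonal example: an invertible 1x1 block and a nilpotent 2x2 shift block.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
AD = drazin_inverse(A)
k = matrix_index(A)
# Numerically verify the three defining equations from the article.
assert np.allclose(np.linalg.matrix_power(A, k + 1) @ AD, np.linalg.matrix_power(A, k))
assert np.allclose(AD @ A @ AD, AD)
assert np.allclose(A @ AD, AD @ A)

For this example the result is diag(1/2, 0, 0), matching the block-diagonal rule stated above: the invertible block is inverted and the nilpotent block is sent to zero.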
Drazin inverse
Mathematics
645
30,714,583
https://en.wikipedia.org/wiki/Kim%20McKay
Kim Coral McKay is an Australian environmentalist, author, entrepreneur, and businesswoman. She co-founded the Clean Up Australia campaign in 1989, and the Clean Up the World campaign in 1992, and also co-created The National Geographic Society's The Genographic Project, the world's largest DNA population study. Early life and education Kim Coral McKay was born in Sydney. She attended Mackellar Girls’ High School in Manly Vale. She attended the University of Technology, Sydney (UTS), graduating with a BA in Communications. Career From 1983 to 1987, McKay was a consultant and then partner at Harfield McKay Communications, where she specialised in major events sponsorship and tourism activities, including The BOC Challenge solo around-the-world yacht race in 1982–83 and 1986–87. She also worked on Australian Professional Surfing Association events and Law Week for the Law Society of New South Wales. McKay co-founded the Clean Up Australia campaign in 1989, and the Clean Up the World campaign in 1992, with the support of the United Nations Environment Programme (UNEP). She worked in partnership for 10 years with 1994 Australian of the Year Ian Kiernan AO to develop Clean Up into a leading non-profit community environmental organisation. She served as Deputy Chairwoman from 1989 to 2009 and developed the "Rubbish Report" volunteer program for data collection and analysis, one of the first citizen-science community initiatives. Clean Up Australia became one of the largest community environmental projects in Australia, operating in more than 900 cities and towns across the country and involving more than half a million volunteers at its peak. In 1992, McKay co-founded Clean Up the World, securing a partnership with UNEP to endorse and support the activity, along with international corporate support. The program operates in more than 125 countries and involves many millions of volunteers. McKay, who was also Deputy Chairwoman of Clean Up the World from 1993 to 2009, told ABC radio that: "The belief that everyone can make a difference is a driving theme behind my actions." In 1989, McKay established Profile Communications Pty Ltd, a Sydney-based event marketing and communications consultancy. The company focused on special event creation and marketing communications programs, with clients including Discovery Communications and Discovery Channel Eco-Challenge in Cairns. McKay was Managing Director of Profile Communications until 1998, when she closed the company to move to the United States to work for Discovery Communications. In 1998, McKay joined Discovery Communications, based in Washington D.C., where she oversaw the marketing and communications for Discovery Channel's largest event and documentary production, the Emmy Award-winning Discovery Channel Eco-Challenge, the world's leading adventure sports race. She worked in close collaboration with the Executive Producer and the Race Director Mark Burnett to ensure the successful operation of the extreme sports event, held annually in remote locations around the world, including Morocco and Patagonia. In 2000, McKay joined National Geographic Channels International, where she was responsible for the global marketing and communications activities for the world's fastest-growing cable channel. In 2004, McKay returned to Australia to start Momentum2, a Sydney-based marketing and communications agency specialising in major events, corporate sustainability and social responsibility programs. 
Key clients included National Geographic, Qantas and Harpo Productions (Oprah's Ultimate Australian Adventure). McKay co-created The Genographic Project with population geneticist Dr Spencer Wells for the National Geographic Society, in partnership with IBM. The multiyear research project, the world's largest DNA population study, uses cutting-edge genetic and computational technologies to analyze historical patterns in DNA from around the world, to better understand humanity's shared genetic roots. In February 2014, McKay was appointed as the 17th director/curator of the Australian Museum, succeeding Frank Howarth. She had previously served from 2012 to 2014 as a Trustee of the museum. Since starting in the role of Director and CEO in April 2014, McKay has initiated a transformation program at the museum, including enshrining free general admission for children in government policy, constructing the new award-winning entrance pavilion, Crystal Hall, establishing the Australian Museum Centre for Citizen Science (part of the Australian Museum Research Institute), creating new galleries and programs, and restoring the Westpac Long Gallery, Australia's first museum gallery, which reopened in October 2017 and now houses the permanent exhibition "200 Treasures of the Australian Museum". Academic career McKay was an honorary adjunct professor at the Macquarie Graduate School of Management. Media McKay is a media commentator on practical environmental action and a public speaker addressing business and not-for-profit conferences, as well as schools and community groups. She presented a series of on-camera "True Green Tips" for Sky News in Australia (as part of the weekly Eco Report in 2009), co-authored a weekly "True Green" column for The Sunday Telegraph (Sydney) Body and Soul section in 2007 and was named one of G Magazine's Top 20 Australian Eco Heroes in November 2009. McKay has also presented a weekly "True Green" radio spot for ABC regional radio in NSW. She was the master of ceremonies at the media conference for "Oprah's Ultimate Australian Adventure" at the Sydney Opera House in December 2010. Recognition and awards In 2008, McKay was appointed an Officer of the Order of Australia (AO) for distinguished service to the environment and the community. In 2010, she was awarded the UTS Chancellor's Award for Excellence. In 2011, McKay was included in the book "The Power of 100...One Hundred Women who have Shaped Australia". In 2013, McKay was named in The Australian Financial Review's 100 Women of Influence list. 
Other honours and awards include: Fellow of the Royal Society of New South Wales, 2021 Australian Geographic's Lifetime of Conservation Award for Excellence "for a life dedicated to the protection of Australia’s environment" – 2013 Australian Financial Review/Westpac 100 Women of Influence Awards (Social Enterprise/Not-for-profit sector) – 2013 Awarded AICD Board Director's women's leadership scholarship – 2013 Inducted as a UTS “Luminary” – 2011 Inclusion in The Power of 100...One Hundred Women Who Have Shaped Australia book for 100th Anniversary of International Women's Day — 2011 UTS Chancellor's Award for Excellence (UTS Alumni top award) — 2010 Co-creator, The Genographic Project for National Geographic and IBM – global DNA study focusing on population migratory history, 2005–2011 Co-founder Clean Up the World, a project held in conjunction with the United Nations Environment Program operating in 120+ countries – 1993; Deputy Chairwoman – 1993–2009 G Magazine top 20 Australian Eco Heroes – published 2009 International Panel Judge for the MacArthur Foundation 100 and Change Grant, awarding US$100 million to a project that will change the world — 2017 Appointed Fellow, Public Relations Institute of Australia (NSW) — 1997 Winner, Environment Category, Avon Spirit of Achievement Award — 1994 Winner, International Public Relations Association (IPRA) Golden World Award for Excellence in Environmental Communication — 1994 (presented in Uruguay) Member, Community Relations Committee, Sydney 2000 Olympic Bid — 1995 United Nations Honorary Mention for Excellence in Communication for Clean Up the World — 1994 (presented at the UN, New York) Public Relations Institute of Australia — Golden Target Awards: Winner — Special Event (NSW) – 1998; Winner — Community Event (NSW) – 1994; Highly Commended — Program Category — 1992; Winner — Project of the Year — 1990 Publications McKay is the co-author with Jenny Bonnin of the "True Green" series of books, published in Australia by ABC Books and by National Geographic Books in the United States. True Green: 100 Everyday Ways You Can Contribute to a Healthier Planet (2007) True Green @ Work: 100 Ways You Can Make the Environment Your Business (with Tim Wallace) (2008) True Green Kids: 100 Things You Can Do to Save the Planet (2008) True Green Home: 100 Inspirational Ideas for Creating a Green Environment at Home (2009) True Green Life: In 100 Everyday Ways (2010) True Green Kids won the AAAS/Subaru Book Prize for Best Hands On Science Book in 2009 and was featured in the journal Science Magazine. 
Not-for-profit roles Australian Museum Trust secretary and board member Australian Museum Foundation board member One Million Women Advisory Board Sydney Institute of Marine Science Foundation Board UNSW Sydney Science Advisory Council MGSM Reference Panel Somerville Museum Board Member Chief Executive Women (CEW) member Public Relations Institute of Australia (NSW) Fellow Sydney Salon co-creator and advisory board member Lizard Island Reef Research Foundation Board Council of Australasian Museum Directors (CAMD) McKay's previous roles have included: Clean Up Australia/World Deputy Chairwoman; Fairtrade Australia and New Zealand Board Member; RANZCO Eye Foundation Board Member; National Business Leaders Forum on Sustainable Development Steering Committee; CSIRO's ECOS Magazine Chairwoman of the Advisory Board; Center for Australian and New Zealand Studies, Georgetown University, Board Member; National Breast Cancer Foundation Marketing Advisory Committee; Sydney Olympic Games Bid Community Relations Committee; Short-handed Sailing Association of Australia Co-founder. In 2008, she travelled to Antarctica as part of the Expedition Team, lecturing on board the Australian-based cruise ship ORION. References Australian environmentalists Australian women environmentalists 1959 births Living people Environmental communication University of Technology Sydney alumni Directors of museums in Australia Women museum directors Officers of the Order of Australia Fellows of the Royal Society of New South Wales
Kim McKay
Environmental_science
1,917
3,024,813
https://en.wikipedia.org/wiki/Dihydroxylation
Dihydroxylation is the process by which an alkene is converted into a vicinal diol. Although there are many routes to accomplish this oxidation, the most common and direct processes use a high-oxidation-state transition metal (typically osmium or manganese). The metal is often used as a catalyst, with some other stoichiometric oxidant present. In addition, other transition-metal and non-transition-metal methods have been developed and used to catalyze the reaction. Osmium catalyzed reactions Osmium tetroxide (OsO4) is a popular oxidant used in the dihydroxylation of alkenes because of its reliability and efficiency in producing syn-diols. Since it is expensive and toxic, catalytic amounts of OsO4 are used in conjunction with a stoichiometric oxidizing agent. The Milas hydroxylation, Upjohn dihydroxylation, and Sharpless asymmetric dihydroxylation reactions all use osmium as the catalyst as well as varying secondary oxidizing agents. The Milas hydroxylation was introduced in 1930, and uses hydrogen peroxide as the stoichiometric oxidizing agent. Although the method can produce diols, overoxidation to the dicarbonyl compound has led to difficulties isolating the vicinal diol. Therefore, the Milas protocol has been replaced by the Upjohn and Sharpless asymmetric dihydroxylations. The Upjohn dihydroxylation was reported in 1973 and uses OsO4 as the active catalyst in the dihydroxylation procedure. It also employs N-methylmorpholine N-oxide (NMO) as the stoichiometric oxidant to regenerate the osmium catalyst, allowing catalytic amounts of osmium to be used. The Upjohn protocol gives high conversions to the vicinal diol and tolerates many substrates; however, it cannot dihydroxylate tetrasubstituted alkenes. The Upjohn conditions can be used for synthesizing anti-diols from allylic alcohols, as demonstrated by Kishi and coworkers. Sharpless asymmetric The Sharpless asymmetric dihydroxylation was developed by K. Barry Sharpless to use catalytic amounts of OsO4 along with the stoichiometric oxidant K3[Fe(CN)6]. The reaction is performed in the presence of a chiral auxiliary. The selection of dihydroquinidine (DHQD) or dihydroquinine (DHQ) as the chiral auxiliary dictates the facial selectivity on the olefin, since the absolute configurations of the two ligands are opposite. The catalyst, oxidant, and chiral auxiliary can be purchased premixed for selective dihydroxylation. AD-mix-α contains the chiral auxiliary (DHQ)2PHAL, which positions OsO4 on the alpha face of the olefin; AD-mix-β contains (DHQD)2PHAL and delivers the hydroxyl groups to the beta face. The Sharpless asymmetric dihydroxylation tolerates a wide range of substrates, and its selectivity can be tuned by changing the class of chiral auxiliary. Applications of Sharpless methods The synthesis of highly substituted and stereospecific sugars has been achieved by Sharpless-based methods. Kakelokelose is one specific example. Mechanism In the dihydroxylation mechanism, a ligand first coordinates to the metal catalyst (typically osmium), which dictates the stereoselectivity of the addition to the olefin. The alkene then coordinates to the metal through a (3+2) cycloaddition, and the ligand dissociates from the metal catalyst. Hydrolysis of the resulting osmate ester then yields the vicinal diol, and oxidation of the catalyst by a stoichiometric oxidant regenerates the metal catalyst to repeat the cycle. 
The concentration of the olefin is crucial to the enantiomeric excess of the diol, since at higher concentrations the alkene can associate with the other catalytic site to produce the other enantiomer. More variants As mentioned above, anti-diols can be synthesized from allylic alcohols with the use of NMO as a stoichiometric oxidant. The use of tetramethylethylenediamine (TMEDA) as a ligand produces syn-diols with a favorable diastereomeric ratio compared to Kishi's protocol; however, stoichiometric osmium is employed. The syn-selectivity is due to the hydrogen-bond-donor ability of the allylic alcohol and the acceptor ability of the diamine. This approach has since been applied to homoallylic systems. Alternatives to Os-based reagents Ruthenium-based dihydroxylations are rapid. Typically, the ruthenium tetroxide is generated in situ from ruthenium trichloride and the oxidant NaIO4. The turnover-limiting step of the reaction is the hydrolysis step; therefore, sulfuric acid is added to increase the rate of this step. Manganese is also used in dihydroxylation and is often chosen when osmium tetroxide methods give poor results. Similar to ruthenium, the oxidation potential of manganese is high, which can lead to over-oxidation of substrates. Potassium permanganate is often used as the oxidant for dihydroxylation; however, because of its poor solubility in organic solvents, a phase-transfer catalyst (such as benzyltriethylammonium chloride, TEBACl) is also added to broaden the range of substrates that can be dihydroxylated. Mild conditions are required to avoid over-oxidation; in particular, a solution that is too warm, acidic, or concentrated will lead to cleavage of the glycol. Arene dihydroxylations The dihydroxylation of aromatic compounds gives dihydrocatechols and related derivatives. The conversions are catalyzed by several enzymes, notably toluene dioxygenases (TDs) and benzene 1,2-dioxygenase. cis-1,2-Dihydrocatechol is a versatile synthetic intermediate. Prévost and Woodward dihydroxylation Unlike the other methods described, which use transition metals as catalysts, the Prévost and Woodward methods use iodine and a silver salt. The presence or absence of water in the reaction directs whether the hydroxyl groups add trans or cis. The Prévost reaction typically uses silver benzoate to produce trans-diols; the Woodward modification of the Prévost reaction uses silver acetate to produce cis-diols. In both the Prévost and Woodward reactions, iodine first adds to the alkene to produce a cyclic iodonium ion. The anion from the corresponding silver salt is then added by nucleophilic substitution to the iodonium ion. In the Prévost reaction, the iodonium ion undergoes nucleophilic attack by the benzoate anion. The benzoate anion acts as a nucleophile again to displace iodide through a neighboring-group participation mechanism. A second benzoate anion reacts with the intermediate to produce the anti-substituted dibenzoate product, which can then undergo hydrolysis to yield the trans-diol. The Woodward modification of the Prévost reaction yields cis-diols: the acetate anion reacts with the cyclic iodonium ion to yield an oxonium ion intermediate, which can then readily react with water to give the monoacetate, which in turn can be hydrolyzed to give the cis-diol. To eliminate the need for silver salts, Sudalai and coworkers modified the Prévost–Woodward reaction; the reaction is catalyzed with LiBr and uses NaIO4 and PhI(OAc)2 as oxidants. 
LiBr reacts with NaIO4 and acetic acid to produce lithium acetate, which can then proceed through the reaction as previously described. The protocol produced a high diastereomeric ratio (dr) for the corresponding diol, depending on the oxidant chosen. Application of both Woodward and Sharpless methods Dihydroxylation methods have been investigated for the synthesis of steroids. Brassinosteroids, which are potential insecticides, have a stereochemically rich array of hydroxy substituents. The hydroxyl groups on the A ring of the steroid can be installed using Woodward conditions to yield a cis-diol. The alkene chain on the D ring can then be dihydroxylated to yield the second cis-diol, using OsO4 with NMO as the stoichiometric oxidant. References Chemical processes Alkenes Diols
Dihydroxylation
Chemistry
1,859
41,902,696
https://en.wikipedia.org/wiki/Penicillium%20allii
Penicillium allii is an anamorphic fungus species of the genus Penicillium. Penicillium allii is a pathogen of garlic (Allium sativum). Further reading "Penicillium allii, a New Species from Egyptian Garlic", Michael A. Vincent and John I. Pitt, Mycologia, Vol. 81, No. 2 (Mar.–Apr. 1989), pp. 300–303, published by the Mycological Society of America. See also List of Penicillium species References allii Fungi described in 1989 Fungus species
Penicillium allii
Biology
121
41,945,536
https://en.wikipedia.org/wiki/Data%20center%20network%20architectures
A data center is a pool of resources (computational, storage, network) interconnected using a communication network. A data center network (DCN) holds a pivotal role in a data center, as it interconnects all of the data center resources together. DCNs need to be scalable and efficient to connect tens or even hundreds of thousands of servers to handle the growing demands of cloud computing. Today's data centers are constrained by the interconnection network. Types of data center network topologies Data center networks can be divided into multiple separate categories. Fixed topology Tree-based Basic tree Clos network VL2 Fat-tree Al-Fares et al. Portland Hedera Recursive DCell BCube MDCube FiConn Flexible topology Fully optical OSA (Optical switching architecture) Hybrid c-Through Helios Types of data center network architectures Three-tier The legacy three-tier DCN architecture follows a multi-rooted tree-based network topology composed of three layers of network switches, namely the access, aggregate, and core layers. The servers in the lowest layer are connected directly to one of the access layer switches. The aggregate layer switches interconnect multiple access layer switches. All of the aggregate layer switches are connected to each other by core layer switches. Core layer switches are also responsible for connecting the data center to the Internet. The three-tier architecture is the common network architecture used in data centers. However, it is unable to handle the growing demands of cloud computing: the higher layers of the three-tier DCN are highly oversubscribed, and scalability is another major issue. Major problems faced by the three-tier architecture include scalability, fault tolerance, energy efficiency, and cross-sectional bandwidth. The three-tier architecture uses enterprise-level network devices at the higher layers of the topology that are very expensive and power hungry. Fat tree The fat tree DCN architecture reduces the oversubscription and cross-section bandwidth problems faced by the legacy three-tier DCN architecture. The fat tree DCN employs an architecture of commodity network switches arranged in a Clos topology. The network elements in the fat tree topology also follow a hierarchical organization of network switches in access, aggregate, and core layers; however, the number of network switches is much larger than in the three-tier DCN. The architecture is composed of k pods, where each pod contains (k/2)^2 servers, k/2 access layer switches, and k/2 aggregate layer switches; the core layer contains (k/2)^2 core switches, each of which is connected to one aggregate layer switch in each of the pods (the resulting pod, switch, and server counts are illustrated in the sketch at the end of this article). The fat tree topology can offer up to a 1:1 oversubscription ratio and full bisection bandwidth, depending on each rack's total bandwidth versus the bandwidth available at the tree's highest levels. Higher tree branches are typically oversubscribed to their lower branches by a ratio of 1:5, with the problem compounding at the highest tree levels, where ratios of up to 1:80 or 1:240 occur. The fat tree architecture uses a customized addressing scheme and routing algorithm. Scalability is one of the major issues in the fat tree DCN architecture, as the maximum number of pods is equal to the number of ports in each switch. DCell DCell is a server-centric hybrid DCN architecture in which servers are directly connected to other servers. A server in the DCell architecture is equipped with multiple network interface cards (NICs). 
The DCell follows a recursively built hierarchy of cells. A cell0 is the basic unit and building block of the DCell topology, arranged in multiple levels, where a higher level cell contains multiple lower level cells. A cell0 contains n servers and one commodity network switch, which is used only to connect the servers within that cell0. A cell1 contains k = n + 1 cell0 cells, and similarly a cell2 contains k * n + 1 cell1 cells. The DCell is a highly scalable architecture, where a four-level DCell with only six servers in each cell0 can accommodate around 3.26 million servers (see the sketch at the end of this article). Besides very high scalability, the DCell architecture exhibits very high structural robustness. However, cross-section bandwidth and network latency are major issues in the DCell DCN architecture. Others Some of the other well-known DCNs include BCube, CamCube, FiConn, Jellyfish, and Scafida. A qualitative discussion of different DCNs, along with the benefits and drawbacks associated with each one, has been made available. Challenges Scalability is one of the foremost challenges to DCNs. With the advent of the cloud paradigm, data centers are required to scale up to hundreds of thousands of nodes. Besides offering immense scalability, DCNs are also required to deliver high cross-section bandwidth. Current DCN architectures, such as the three-tier DCN, offer poor cross-section bandwidth and possess a very high oversubscription ratio near the root. The fat tree DCN architecture delivers a 1:1 oversubscription ratio and high cross-section bandwidth, but it suffers from low scalability, limited by k, the total number of ports in a switch. DCell offers immense scalability, but it delivers very poor performance under heavy network load and one-to-many traffic patterns. Performance analysis of DCNs A quantitative analysis of the three-tier, fat tree, and DCell architectures for performance comparison (based on throughput and latency) has been performed for different network traffic patterns. The fat tree DCN delivers high throughput and low latency compared to the three-tier and DCell architectures. DCell suffers from very low throughput under high network load and one-to-many traffic patterns; one of the major reasons is the very high oversubscription ratio on the links that interconnect the highest level cells. Structural robustness and connectivity of DCNs The DCell exhibits very high robustness against random and targeted attacks, retaining most of its nodes in the giant cluster even after 10% of the nodes fail, whether the failures are targeted or random, as compared to the fat tree and three-tier DCNs. One of the major reasons for the high robustness and connectivity of the DCell is that each node has multiple connections to other nodes, which is not the case in the fat tree or three-tier architectures. Energy efficiency of DCNs Concerns about the energy needs and environmental impacts of data centers are intensifying. Energy efficiency is one of the major challenges of today's information and communications technology (ICT) sector. The networking portion of a data center is estimated to consume around 15% of overall energy usage. Around 15.6 billion kWh of energy was used solely by the communication infrastructure within data centers worldwide in 2010. The energy consumption of the network infrastructure within a data center is expected to increase to around 50%. The IEEE 802.3az standard, ratified in 2011, makes use of an adaptive link rate technique for energy efficiency. 
Moreover, the fat tree and DCell architectures use commodity network equipment that is inherently energy efficient. Workload consolidation is also used for energy efficiency: the workload is consolidated onto a few devices so that idle devices can be powered off or put to sleep. References Data centers Networks
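As a purely illustrative sketch (not drawn from the article's references), the sizing arithmetic quoted above for the fat tree and DCell topologies can be written out in a few lines of Python; the function names and example parameters are this sketch's own:

def fat_tree_sizes(k):
    """Pod, switch and server counts for a fat tree built from k-port switches."""
    pods = k
    access_switches = pods * (k // 2)       # k/2 access switches per pod
    aggregate_switches = pods * (k // 2)    # k/2 aggregate switches per pod
    core_switches = (k // 2) ** 2
    servers = pods * (k // 2) ** 2          # (k/2)^2 servers per pod, i.e. k^3/4 in total
    return pods, access_switches, aggregate_switches, core_switches, servers

def dcell_servers(n, levels):
    """Number of servers in a DCell whose cell0 holds n servers."""
    t = n                                   # t_0: servers in one cell0
    for _ in range(levels):
        t = t * (t + 1)                     # a higher-level cell is built from (t + 1) lower-level cells
    return t

print(fat_tree_sizes(48))                   # 48-port switches give 48 pods and 27,648 servers
print(dcell_servers(6, 3))                  # 3,263,442 servers: the "around 3.26 million" cited above

The DCell server count grows roughly doubly exponentially in the number of levels, which is why a handful of levels already reaches millions of servers, while the fat tree grows only cubically in the switch port count k.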
Data center network architectures
Technology
1,505
2,671,210
https://en.wikipedia.org/wiki/Nitryl
Nitryl is the nitrogen dioxide (NO2) moiety when it occurs in a larger compound as a univalent fragment. Examples include nitryl fluoride (NO2F) and nitryl chloride (NO2Cl). Like nitrogen dioxide, the nitryl moiety contains a nitrogen atom with two bonds to the two oxygen atoms, and a third bond shared equally between the nitrogen and the two oxygen atoms. The nitrogen-centred radical is then free to form a bond with another univalent fragment (X) to produce an N−X bond, where X can be F, Cl, OH, etc. In organic nomenclature, the nitryl moiety is known as the nitro group. For instance, nitryl benzene is normally called nitrobenzene (PhNO2). See also Dinitrogen tetroxide Nitro compound Nitrosyl (R−N=O) Isocyanide (R−N≡C) Nitryl fluoride Nitrate References Inorganic nitrogen compounds Oxides Free radicals Nitrogen–oxygen compounds
Nitryl
Chemistry,Biology
224
5,624,725
https://en.wikipedia.org/wiki/Wine%20cellar
A wine cellar is a storage room for wine in bottles or barrels, or more rarely in carboys, amphorae, or plastic containers. In an active wine cellar, important factors such as temperature and humidity are maintained by a climate control system. In contrast, passive wine cellars are not climate-controlled, and are usually built underground to reduce temperature swings. An aboveground wine cellar is often called a wine room, while a small wine cellar (fewer than 500 bottles) is sometimes termed a wine closet. The household department responsible for the storage, care and service of wine in a great mediaeval house was termed the buttery. Large wine cellars date back over 3,700 years. Purpose Wine cellars protect alcoholic beverages from potentially harmful external influences, providing darkness, constant temperature, and constant humidity. Wine is a natural, perishable food product produced by the fermentation of fruit. Left exposed to heat, light, vibration or fluctuations in temperature and humidity, all types of wine can spoil. When properly stored, wines not only maintain their quality but many actually improve in aroma, flavor, and complexity as they mature. Depending on their levels of sugar and alcohol, wines are more or less sensitive to temperature variations; wine with higher alcohol and/or sugar content will be less sensitive to temperature variation. Conditions Wine can be stored satisfactorily over a range of cool temperatures as long as any variations are gradual. A constant, cool temperature, much like that found in the caves used to store wine in France, is ideal for both short-term storage and long-term aging of wine. Wine generally matures differently and more slowly at a lower temperature than it does at a higher temperature. Significant temperature swings, of 14 degrees or more, cause the wine to breathe through the cork, which significantly speeds up the aging process. Within such a stable range, wines will age normally. Active versus passive Wine cellars can be either actively or passively cooled. Active wine cellars are highly insulated and need to be properly constructed. They require specialized wine cellar conditioning and cooling systems to maintain the desired temperature and humidity. In a very dry climate, it may be necessary to actively humidify the air, but in most areas this is not necessary. Passive wine cellars must be located in naturally cool and damp areas with minor seasonal and diurnal temperature variations, for example, a basement in a temperate climate. Passive cellars may be less predictable, but cost nothing to operate and are not affected by power outages. Humidity Some wine experts debate the importance of humidity for proper wine storage. In the Wine Spectator, writer Matt Kramer noted a French study which claimed that the relative humidity within a bottle is maintained at 100% regardless of the closure used or the orientation of the bottle. However, Alexis Lichine says that low humidity can be a problem because it may cause organic corks to dry prematurely. A layer of gravel covering the floor, periodically sprinkled with a little water, was recommended to retain the desired humidity. Gallery See also Storage of wine Aging of wine CellarTracker (database) References Cellar Rooms
Wine cellar
Engineering
625