Dataset columns:
id: int64 (values from 580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (values from 3 to 51.8k)
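The six columns above describe one record per Wikipedia article. As a quick illustration of how a dataset with this schema could be inspected, the sketch below uses the Hugging Face datasets library as one plausible loader (an assumption; the extract does not say where the data is hosted), and the path "user/wikipedia-subset" is a placeholder rather than the real repository name. Only the column names are taken from the schema above.

```python
# Minimal sketch: inspecting a dataset with the schema listed above.
# "user/wikipedia-subset" is a hypothetical placeholder path, not the actual dataset name.
from datasets import load_dataset

ds = load_dataset("user/wikipedia-subset", split="train")

# Column names should match the schema above.
print(ds.column_names)  # ['id', 'url', 'text', 'source', 'categories', 'token_count']

# Look at the first record: title, category label, token count, and a snippet of the text.
row = ds[0]
print(row["source"], "|", row["categories"], "|", row["token_count"], "tokens")
print(row["text"][:200])
```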
795,058
https://en.wikipedia.org/wiki/Aplite
Aplite is an intrusive igneous rock that has a granitic composition. Aplites are fine-grained to aphanitic (without grains visible to the naked eye) and may consist of only quartz and feldspar, or the term may refer to any leucocratic (pale-coloured) minor intrusion of that grain size. They are associated with the later stages of many larger intermediate to felsic intrusions. Occurrence Aplites have a global distribution and have been described from most areas where there are significant granitic intrusions. They occur in the form of intrusive sheets, both within the associated granitic bodies and in the surrounding country rock. Trace element analysis shows that there are no volcanic equivalents to aplites. Some aplites form in close association with pegmatites, which are otherwise uncommon in granites. These aplite-pegmatite sheet complexes may show fine-scale banding with alternations of aplite and pegmatite. Syenite-aplites consist mainly of alkali feldspar; the diorite-aplites of plagioclase; there are nepheline-bearing aplites, including those containing the elaeolite variety of nepheline. In all cases, they bear the same relation to the parent masses. Formation Aplites are intruded at a late stage in the history of the associated intrusion, when crystallisation is at an advanced stage. In this state, the parent intrusion is mechanically coherent enough to support tensile stresses, leading to fractures. These fractures may then be filled by the remaining uncrystallised parts of the original magma (the residual fluids) to form the aplites. There may be repeated phases of intrusion, solidification, fracturing and aplite dyke intrusion. This has been observed in the Half Dome granodiorite in Yosemite National Park, California, where multiple stages of aplite intrusion have been recognised; the earlier aplite sheets were deformed by later intrusions into the main granodiorite body but remain recognisable from their consistent chemistry. References Felsic rocks Phaneritic rocks Plutonic rocks Subvolcanic rocks
Aplite
Chemistry
460
11,296,918
https://en.wikipedia.org/wiki/Discovery%20shopping
Discovery shopping (also known as discovery shopping search) is a type of online shopping that emphasizes the browsing aspects of the shopping experience. Discovery shopping search offers shoppers guided queries for more personalized results. The goal is to recreate the experience of live shopping as a leisure activity, where the items are selected by sampling or viewing a variety of similar or related goods. This is sometimes referred to as window shopping. Unlike a comparison shopping engine, which evaluates prices and feature sets for identical or very closely related products, discovery shopping enables users to tailor product results to suit their preferences. To achieve this experience online, discovery shopping sites offer features such as specifying styles, colors and brands, showing similar items, and displaying results in a visually engaging format. Such tools allow shoppers to narrow down from a large number of choices to a set of products that they find appealing. Comprehensiveness and relevancy are also critical factors, since choice and accuracy increase a shopper's chances of finding a product they wish to purchase. Discovery search was pegged as a hot trend for 2007 in a report from Forrester Research. According to Silicon Valley strategy consultant Sramana Mitra, examples of discovery shopping sites include TheFind.com, Listar, and ShopStyle.com. References See also Online shop Price comparison service E-commerce
Discovery shopping
Technology
274
11,456,236
https://en.wikipedia.org/wiki/Michael%20Mingos
David Michael Patrick Mingos (born 6 August 1944) is a British chemist and academic. He was Principal of St Edmund Hall, Oxford from 1999 to 2009, and Professor of Inorganic Chemistry at the University of Oxford. Education Mingos attended the Harvey Grammar School, King Edward VII School Lytham St Anne's, University of Manchester Institute of Science and Technology (Chemistry Department Prize 1963, BSc First Class 1965, Hon DSc 2000), and the University of Sussex (DPhil 1968, and Hon DSc 2001). Career Mingos undertook postdoctoral research at Northwestern University (Fulbright Fellow 1968–70) and at the University of Sussex (ICI Fellow 1970–71). From 1971 until 1976 he was a Lecturer at Queen Mary, University of London. He then moved to the University of Oxford as Fellow and Tutor at Keble College and University Lecturer. From 1977 until 1992 he was also Lecturer at Pembroke College, Oxford. In 1978, Mingos, Stephen G. Davies and Malcolm Green compiled a set of rules that summarise where nucleophilic additions will occur on pi ligands. Mingos' 1984 paper on the polyhedral skeletal electron pair theory develops Wade's electron counting rules for predicting the molecular geometry of cluster compounds. In 1990 he was appointed Reader in Inorganic Chemistry and for the academic year 1991/92 he served as Assessor. From 1992 until 1999 he worked at Imperial College London as Sir Edward Frankland British Petroleum Professor of Inorganic Chemistry (1992–99) and Dean of the Royal College of Science (1996–99). In 1999 Mingos was appointed Principal of St Edmund Hall, Oxford, and at the same time he became a visiting professor at Imperial College London. In 2000 he received, as a Title of Distinction, the title of professor of inorganic chemistry at the University of Oxford. He was succeeded as principal by Professor Keith Gull on 1 October 2009. With David J. Wales he is the co-author of the textbook Introduction to Cluster Chemistry. Honours and awards In 1980, Mingos was awarded the Corday-Morgan Medal and Prize of the Royal Society of Chemistry. He was elected a Fellow of the Royal Society (FRS) in 1992. Personal life Michael Mingos is the son of Vasso Mingos, of Athens, and Rose Enid Billie Mingos née Griffiths. References 1944 births English chemists Inorganic chemists English science writers English people of Greek descent Fellows of the Royal Society Living people Scientists from Lancashire Academics of Queen Mary University of London Academics of Imperial College London Northwestern University faculty Fellows of Keble College, Oxford Principals of St Edmund Hall, Oxford People educated at King Edward VII and Queen Mary School People educated at The Harvey Grammar School Deans of the Royal College of Science Alumni of the University of Sussex
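The article above mentions Wade's electron counting rules, which Mingos's polyhedral skeletal electron pair theory generalizes. As a rough, self-contained illustration of the counting idea for the simplest case (boranes), here is a sketch; the function names and the example molecules are chosen here for illustration and are not taken from Mingos's papers.

```python
# Minimal sketch of Wade's electron-counting rules for simple boranes [BnHm]^(charge-).
# Illustrative only; the treatment is reduced to the borane case.
def skeletal_electron_pairs(n_boron: int, n_hydrogen: int, charge: int = 0) -> int:
    """Skeletal electron pairs for [BnHm]^(charge-), charge given as the anionic charge."""
    valence_electrons = 3 * n_boron + n_hydrogen + charge
    # Each of the n vertices uses 2 electrons for its exo (terminal) B-H bond.
    skeletal_electrons = valence_electrons - 2 * n_boron
    return skeletal_electrons // 2

def classify(n_boron: int, pairs: int) -> str:
    """closo (n+1 pairs), nido (n+2), arachno (n+3) according to Wade's rules."""
    return {n_boron + 1: "closo", n_boron + 2: "nido", n_boron + 3: "arachno"}.get(pairs, "other")

for name, n, m, q in [("B6H6(2-)", 6, 6, 2), ("B5H9", 5, 9, 0), ("B4H10", 4, 10, 0)]:
    p = skeletal_electron_pairs(n, m, q)
    print(name, p, "pairs ->", classify(n, p))
# Expected classification: B6H6(2-) closo, B5H9 nido, B4H10 arachno
```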
Michael Mingos
Chemistry
556
2,531,152
https://en.wikipedia.org/wiki/Diazonium%20compound
Diazonium compounds or diazonium salts are a group of organic compounds sharing a common functional group, R−N2+ X−, where R can be any organic group, such as an alkyl or an aryl, and X is an inorganic or organic anion, such as a halide. The parent compound, where R is hydrogen, is diazenylium. Structure and general properties Arenediazonium cations and related species According to X-ray crystallography, the C–N–N linkage is linear in typical diazonium salts. The N–N bond distance in benzenediazonium tetrafluoroborate is 1.083(3) Å, which is almost identical to that of the dinitrogen molecule (N≡N). The linear free energy constants σm and σp indicate that the diazonium group is strongly electron-withdrawing. Thus, diazonio-substituted phenols and benzoic acids have greatly reduced pKa values compared to their unsubstituted counterparts. The pKa of the phenolic proton of 4-hydroxybenzenediazonium is 3.4, versus 9.9 for phenol itself. In other words, the diazonium group raises the ionization constant Ka (enhances the acidity) by a million-fold. This also causes arenediazonium salts to have decreased reactivity when electron-donating groups are present on the aromatic ring. The stability of arenediazonium salts is highly sensitive to the counterion. Benzenediazonium chloride is dangerously explosive, but benzenediazonium tetrafluoroborate is easily handled on the bench. Alkanediazonium cations and related species Alkanediazonium salts are synthetically unimportant due to their extreme and uncontrolled reactivity in SN2/SN1/E1 reactions. These cations are, however, of theoretical interest. Furthermore, methyldiazonium carboxylate is believed to be an intermediate in the methylation of carboxylic acids by diazomethane, a common transformation. Loss of N2 is both enthalpically and entropically favorable (calculated ΔH values of −43 and −11 kcal/mol have been reported for these decompositions). For secondary and tertiary alkanediazonium species, the enthalpic change is calculated to be close to zero or negative, with minimal activation barrier. Hence, secondary and (especially) tertiary alkanediazonium species are either unbound, nonexistent species or, at best, extremely fleeting intermediates. The aqueous pKa of methyldiazonium (CH3N2+) is estimated to be <10. Preparation The process of forming diazonium compounds is called "diazotation", "diazoniation", or "diazotization". The reaction was first reported by Peter Griess in 1858, who subsequently discovered several reactions of this new class of compounds. Most commonly, diazonium salts are prepared by treatment of aromatic amines with nitrous acid and additional acid. Usually the nitrous acid is generated in situ (in the same flask) from sodium nitrite and the excess mineral acid (usually aqueous hydrochloric acid, sulfuric acid, or p-toluenesulfonic acid). Chloride salts of the diazonium cation, traditionally prepared from the aniline, sodium nitrite, and hydrochloric acid, are unstable at room temperature and are classically prepared at 0–5 °C. However, one can isolate diazonium compounds as tetrafluoroborate or tosylate salts, which are stable solids at room temperature. It is often preferred that the diazonium salt remain in solution, but such solutions do tend to supersaturate. Operators have been injured or even killed by an unexpected crystallization of the salt followed by its detonation. Due to these hazards, diazonium compounds are often not isolated; instead, they are used in situ.
This approach is illustrated in the preparation of an arenesulfonyl compound. Reactions Diazo coupling reactions The first and still main use of diazonium salts is azo coupling, which is exploited in the production of azo dyes. In some cases, water-fast dyed fabrics are produced by simply immersing the fabric in an aqueous solution of the diazonium compound, followed by immersion in a solution of the coupler (the electron-rich ring that undergoes electrophilic substitution). In this process, the diazonium compound is attacked by, i.e., coupled to, electron-rich substrates. When the coupling partners are arenes such as anilines and phenols, the process is an example of electrophilic aromatic substitution. The deep colors of the dyes reflect their extended conjugation. A popular azo dye is aniline yellow, produced from aniline. Naphthalen-2-ol (beta-naphthol) gives an intensely orange-red dye. Methyl orange is an example of an azo dye that is used in the laboratory as a pH indicator. Another commercially important class of coupling partners are acetoacetic amides, as illustrated by the preparation of Pigment Yellow 12, a diarylide pigment. Displacement of the N2 group Arenediazonium cations undergo several reactions in which the N2 group is replaced by another group or ion. Sandmeyer reaction Benzenediazonium chloride heated with cuprous chloride dissolved in HCl, or with cuprous bromide dissolved in HBr, yields chlorobenzene or bromobenzene, respectively. In the Gattermann reaction (there are other "Gattermann reactions"), benzenediazonium chloride is warmed with copper powder and HCl or HBr to produce chlorobenzene or bromobenzene, respectively. Replacement by iodide Arenediazonium cations react with potassium iodide to give the aryl iodide. Replacement by fluoride Fluorobenzene is produced by thermal decomposition of benzenediazonium tetrafluoroborate. The conversion is called the Balz–Schiemann reaction. The traditional Balz–Schiemann reaction has been the subject of many variations, e.g. using hexafluorophosphate(V) (PF6−) and hexafluoroantimonate(V) (SbF6−) in place of tetrafluoroborate (BF4−). The diazotization can be effected with nitrosonium salts such as nitrosonium hexafluoroantimonate(V) (NOSbF6). Biaryl coupling A pair of diazonium cations can be coupled to give biaryls. This conversion is illustrated by the coupling of the diazonium salt derived from anthranilic acid to give diphenic acid. In a related reaction, the same diazonium salt undergoes loss of N2 and CO2 to give benzyne. Replacement by hydrogen Arenediazonium cations reduced by hypophosphorous acid, ethanol, sodium stannite or alkaline sodium thiosulphate give benzene. An alternative way, suggested by Baeyer and Pfitzinger, to replace the diazo group with H is first to convert it into a hydrazine and then to oxidize that to the hydrocarbon by boiling with cupric sulphate solution. Replacement by a hydroxyl group Phenols are produced by heating aqueous solutions of arenediazonium salts. This reaction goes by the German name Phenolverkochung ("cooking down to yield phenols"). The phenol formed may react with the diazonium salt, and hence the reaction is carried out in the presence of an acid, which suppresses this further reaction. A Sandmeyer-type hydroxylation is also possible using copper(I) oxide and a copper(II) salt in water. Replacement by a nitro group Nitrobenzene can be obtained by treating benzenediazonium fluoroborate with sodium nitrite in the presence of copper.
Alternatively, the diazotisation of the aniline can be conducted in the presence of cuprous oxide, which generates cuprous nitrite in situ. Replacement by a cyano group The cyano group usually cannot be introduced by nucleophilic substitution of haloarenes, but such compounds can be easily prepared from diazonium salts. Illustrative is the preparation of benzonitrile using the reagent cuprous cyanide. This reaction is a special type of Sandmeyer reaction. Replacement by a trifluoromethyl group Two research groups reported trifluoromethylations of diazonium salts in 2013. Goossen reported the preparation of a trifluoromethylcopper complex from CuSCN. In contrast, Fu reported the trifluoromethylation using Umemoto's reagent (S-trifluoromethyldibenzothiophenium tetrafluoroborate) and Cu powder (Gattermann-type conditions). In the equations describing these reactions, bracketed copper indicates that other ligands on copper are likely present but are omitted. Replacement by a thiol group Diazonium salts can be converted to thiols in a two-step procedure. Treatment of benzenediazonium chloride with potassium ethylxanthate followed by hydrolysis of the intermediate xanthate ester gives thiophenol. Replacement by an aryl group One aryl group can be coupled to another using arenediazonium salts. For example, treatment of benzenediazonium chloride with benzene (an aromatic compound) in the presence of sodium hydroxide gives diphenyl. This reaction is known as the Gomberg–Bachmann reaction. A similar conversion is also achieved by treating benzenediazonium chloride with ethanol and copper powder. Replacement by a boronate ester group A Bpin (pinacolatoboron) group, of use in Suzuki–Miyaura cross coupling reactions, can be installed by reaction of a diazonium salt with bis(pinacolato)diboron in the presence of benzoyl peroxide (2 mol %) as an initiator. Alternatively, a similar borylation can be achieved using transition metal carbonyl complexes, including dimanganese decacarbonyl. Replacement by a formyl group A formyl group, –CHO, can be introduced by treating the aryl diazonium salt with formaldoxime (CH2=NOH), followed by hydrolysis of the aryl aldoxime to give the aryl aldehyde. This reaction is known as the Beech reaction. Other dediazotizations Dediazotization can also be effected by organic reduction at an electrode; by mild reducing agents such as ascorbic acid (vitamin C); by gamma radiation, via solvated electrons generated in water; by photoinduced electron transfer; by reduction by metal cations, most commonly a cuprous salt; by anion-induced dediazoniation, in which a counterion such as iodide transfers an electron to the diazonium cation, forming an aryl radical and an iodine radical; and by solvent-induced dediazoniation, with the solvent serving as electron donor. Meerwein reaction Benzenediazonium chloride reacts with compounds containing activated double bonds to produce phenylated products. The reaction is called the Meerwein arylation. Metal complexation In their reactions with metal complexes, diazonium cations behave similarly to the nitrosonium cation, NO+. For example, low-valent metal complexes form adducts with diazonium salts. Such adducts include chiral-at-metal complexes. Grafting reactions In a potential application in nanotechnology, the diazonium salt 4-chlorobenzenediazonium tetrafluoroborate very efficiently functionalizes single-wall nanotubes. In order to exfoliate the nanotubes, they are mixed with an ionic liquid in a mortar and pestle.
The diazonium salt is added together with potassium carbonate, and after grinding the mixture at room temperature the surface of the nanotubes is covered with chlorophenyl groups with an efficiency of 1 in 44 carbon atoms. These added substituents prevent the tubes from forming intimate bundles due to large cohesive forces between them, which is a recurring problem in nanotube technology. It is also possible to functionalize silicon wafers with diazonium salts, forming an aryl monolayer. In one study, the silicon surface is washed with ammonium hydrogen fluoride, leaving it covered with silicon–hydrogen bonds (hydride passivation). The reaction of the surface with a solution of diazonium salt in acetonitrile for 2 hours in the dark is a spontaneous process that proceeds through a free radical mechanism. So far, grafting of diazonium salts on metals has been accomplished on iron, cobalt, nickel, platinum, palladium, zinc, copper and gold surfaces. Grafting to diamond surfaces has also been reported. One interesting question raised is the actual positioning of the aryl group on the surface. An in silico study demonstrates that, in the period 4 elements from titanium to copper, the binding energy decreases from left to right because the number of d-electrons increases. The metals to the left of iron are positioned tilted towards or flat on the surface, favoring metal-to-carbon pi bond formation, and those to the right of iron are positioned in an upright position, favoring metal-to-carbon sigma bond formation. This also explains why diazonium salt grafting has thus far been possible with those metals to the right of iron in the periodic table. Reduction to a hydrazine group Diazonium salts can be reduced with stannous chloride (SnCl2) to the corresponding hydrazine derivatives. This reaction is particularly useful in the Fischer indole synthesis of triptan compounds and indometacin. The use of sodium dithionite is an improvement over stannous chloride, since it is a cheaper reducing agent with fewer environmental problems. Biochemistry Alkanediazonium ions, otherwise rarely encountered in organic chemistry, are implicated as the causative agents in the carcinogenicity of nitrosamines. Specifically, nitrosamines are thought to undergo metabolic activation to produce alkanediazonium species. Safety Solid diazonium halides are often dangerously explosive, and fatalities and injuries have been reported. The nature of the anion affects the stability of the salt. Arenediazonium perchlorates, such as nitrobenzenediazonium perchlorate, have been used to initiate explosives. See also Diazo Diazo printing process Benzenediazonium chloride Triazene cleavage Dinitrogen complex References External links Organic compounds Carbon-heteroatom bond forming reactions Functional groups Organonitrogen compounds
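The structure section of the article above notes that the diazonio group lowers the pKa of the phenolic proton from 9.9 (phenol) to 3.4 (4-hydroxybenzenediazonium) and describes this as a million-fold increase in Ka. A quick check of that arithmetic, as a minimal sketch using only the numbers quoted above:

```python
# Quick arithmetic check of the acidity enhancement quoted in the diazonium article above.
# pKa = -log10(Ka), so the Ka ratio is 10**(pKa_phenol - pKa_diazonium_phenol).
pKa_phenol = 9.9               # phenol (value quoted in the article)
pKa_diazonium_phenol = 3.4     # 4-hydroxybenzenediazonium (value quoted in the article)

ka_ratio = 10 ** (pKa_phenol - pKa_diazonium_phenol)
print(f"Ka is enhanced by a factor of about {ka_ratio:.2e}")
# ~3.2e6, i.e. a few million-fold, consistent with the article's order-of-magnitude claim.
```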
Diazonium compound
Chemistry
3,030
62,456,017
https://en.wikipedia.org/wiki/Heyde%20theorem
In the mathematical theory of probability, the Heyde theorem is a characterization theorem for the normal distribution (the Gaussian distribution) based on the symmetry of one linear form given another. The theorem was proved by C. C. Heyde. Formulation Let ξ1, …, ξn be independent random variables, and let β1, …, βn be nonzero constants satisfying the non-degeneracy condition of the theorem. If the conditional distribution of one linear form in the ξj given a second linear form is symmetric, then all of the random variables ξj have normal distributions (Gaussian distributions). References C. C. Heyde, "Characterization of the normal law by the symmetry of a certain conditional distribution," Sankhya, Ser. A, 32, No. 1, 115–118 (1970). A. M. Kagan, Yu. V. Linnik, and C. R. Rao, Characterization Problems in Mathematical Statistics, Wiley, New York (1973). Probability theorems
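The constants, the two linear forms, and the non-degeneracy condition are missing from the extract above. The block below restates what is commonly quoted as Heyde's theorem; it is a reconstruction supplied here under that assumption, not text recovered from the extract, and the symbols L1, L2, alpha_j, beta_j are introduced only for this restatement.

```latex
% Commonly quoted formulation of Heyde's theorem (reconstruction; supplied as an assumption).
Let $\xi_1, \dots, \xi_n$, $n \ge 2$, be independent random variables, and let
$\alpha_j, \beta_j$ be nonzero constants such that
\[
  \beta_i \alpha_i^{-1} + \beta_j \alpha_j^{-1} \neq 0 \quad \text{for all } i \neq j .
\]
If the conditional distribution of $L_2 = \beta_1 \xi_1 + \dots + \beta_n \xi_n$
given $L_1 = \alpha_1 \xi_1 + \dots + \alpha_n \xi_n$ is symmetric, then all of the
$\xi_j$ have normal distributions.
```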
Heyde theorem
Mathematics
176
38,049,962
https://en.wikipedia.org/wiki/Iota%20Delphini
Iota Delphini (ι Del, ι Delphini) is a star in the constellation Delphinus. It has an apparent magnitude of about 5.4, meaning that it is just barely visible to the naked eye. Based upon parallax measurements made by the Gaia spacecraft, this star is located at a distance of 196 light years. Iota Delphini's spectral type is A1IV, meaning it is an A-type subgiant. Observations of the star's spectrum reveal a periodic Doppler shift. This means that Iota Delphini is a spectroscopic binary with a period of 11 days and an eccentricity of 0.23. However, almost nothing is known about the companion star. Iota Delphini appears to be an Am star, also known as a metallic-line star. These types of stars have spectra indicating varying amounts of metals, like iron. Observations of Iota Delphini's spectrum have shown lower amounts of calcium and higher amounts of iron than usual. References Delphini, Iota Delphinus 101800 196544 7883 BD+10 4339 A-type subgiants Delphini, 05 Am stars
Iota Delphini
Astronomy
251
17,459,933
https://en.wikipedia.org/wiki/Open%20Core%20Protocol
The Open Core Protocol (OCP) is a protocol for on-chip subsystem communications. It is an openly licensed, core-centric protocol and defines a bus-independent, configurable interface. OCP International Partnership (OCP-IP) produces OCP specifications. OCP data transfer models range from simple request-grant handshaking through pipelined request-response to complex out-of-order operations. Legacy IP cores can be adapted to OCP, while new implementations may take advantage of advanced features: designers select only those features and signals encompassing a core's specific data, control and test configuration. The Open Core Protocol (OCP) is one of several FPGA processor interconnects used to connect soft FPGA peripherals to FPGA CPUs—both soft microprocessor and hard-macro processor. Other such interconnects include Advanced eXtensible Interface (AXI), Avalon, and the Wishbone bus. FPGA vendor Altera joined the Open Core Protocol International Partnership in 2010. Advantages Eliminates the ongoing task of interface protocol (re)definition, verification, documentation and support Readily adapts to support new core capabilities Test bench portability simplifies (re)verification Limits test suite modifications for core enhancements Interfaces to any bus structure or on-chip network Delivers industry-standard flexibility and reuse Point-to-point protocol can directly interface two cores Disadvantages Neither Altera nor Xilinx, the two largest FPGA vendors, supports this protocol. References External links Computer peripherals
Open Core Protocol
Technology
323
3,273,682
https://en.wikipedia.org/wiki/Muscone
Muscone is a macrocyclic ketone, an organic compound that is the primary contributor to the odor of musk. Natural muscone is obtained from musk, a glandular secretion of the musk deer, which has been used in perfumery and medicine for thousands of years. Since obtaining natural musk requires killing the endangered animal, nearly all muscone used in perfumery and for scenting consumer products today is synthetic. It has a characteristic musky smell. Chemical structure and synthesis The chemical structure of muscone was first elucidated by Leopold Ružička. It is a 15-membered ring ketone with one methyl substituent in the 3-position. It is an oily liquid that is found naturally as the (−)-enantiomer, (R)-3-methylcyclopentadecanone. Muscone has been synthesized as the pure (−)-enantiomer as well as the racemate. It is very slightly soluble in water and miscible with alcohol. One asymmetric synthesis of (−)-muscone begins with commercially available (+)-citronellal, and forms the 15-membered ring via ring-closing metathesis. A more recent enantioselective synthesis involves an intramolecular aldol addition/dehydration reaction of a macrocyclic diketone. Isotopologs Isotopologs of muscone have been used in a study of the mechanism of olfaction. Global replacement of all hydrogen atoms in muscone by deuterium was achieved by heating muscone in heavy water (D2O) at 150 °C in the presence of a rhodium on carbon catalyst. It was found that the human musk-recognizing receptor, OR5AN1, identified using a heterologous olfactory receptor expression system and robustly responding to muscone, fails to distinguish between muscone and the so-prepared isotopolog in vitro. OR5AN1 is reported to bind to muscone and related musks such as civetone through hydrogen-bond formation from tyrosine-258 along with hydrophobic interactions with surrounding aromatic residues in the receptor. References Flavors Perfume ingredients Macrocycles Ketones
Muscone
Chemistry
479
14,373,561
https://en.wikipedia.org/wiki/Shinya%20Yamanaka
Shinya Yamanaka is a Japanese stem cell researcher and a Nobel Prize laureate. He is a professor and the director emeritus of the Center for iPS Cell (induced Pluripotent Stem Cell) Research and Application at Kyoto University, a senior investigator at the UCSF-affiliated Gladstone Institutes in San Francisco, California, and a professor of anatomy at the University of California, San Francisco (UCSF). Yamanaka is also a past president of the International Society for Stem Cell Research (ISSCR). He received the 2010 BBVA Foundation Frontiers of Knowledge Award in the biomedicine category, the 2011 Wolf Prize in Medicine with Rudolf Jaenisch, and the 2012 Millennium Technology Prize together with Linus Torvalds. In 2012, he and John Gurdon were awarded the Nobel Prize in Physiology or Medicine for the discovery that mature cells can be converted to stem cells. In 2013, he was awarded the $3 million Breakthrough Prize in Life Sciences for his work. Education Yamanaka was born in Higashiōsaka, Japan, in 1962. After graduating from Tennōji High School attached to Osaka Kyoiku University, he received his M.D. degree at Kobe University in 1987 and his Ph.D. degree at Osaka City University, Graduate School of Medicine in 1993. After this, he went through a residency in orthopedic surgery at National Osaka Hospital and a postdoctoral fellowship at the Gladstone Institute of Cardiovascular Disease, San Francisco. Afterwards, he worked at the Gladstone Institutes in San Francisco, US, and the Nara Institute of Science and Technology in Japan. Yamanaka is currently a professor and the director emeritus of the Center for iPS Cell Research and Application (CiRA), Kyoto University. He is also a senior investigator at the Gladstone Institutes. Professional career Between 1987 and 1989, Yamanaka was a resident in orthopedic surgery at the National Osaka Hospital. His first operation was to remove a benign tumor from his friend Shuichi Hirata, a task he could not complete after one hour, when a skilled surgeon would have taken ten minutes or so. Some seniors referred to him as "Jamanaka", a pun on the Japanese word for obstacle. From 1993 to 1996, he was at the Gladstone Institute of Cardiovascular Disease. Between 1996 and 1999, he was an assistant professor at Osaka City University Medical School, but found himself mostly looking after mice in the laboratory, not doing actual research. His wife advised him to become a practicing doctor, but instead he applied for a position at the Nara Institute of Science and Technology. He stated that he could and would clarify the characteristics of embryonic stem cells, and this can-do attitude won him the job. From 1999 to 2003, he was an associate professor there, and started the research that would later win him the 2012 Nobel Prize. He became a full professor and remained at the institute in that position from 2003 to 2005. Between 2004 and 2010, Yamanaka was a professor at the Institute for Frontier Medical Sciences, Kyoto University. Between 2010 and 2022, Yamanaka was the director and a professor at the Center for iPS Cell Research and Application (CiRA), Kyoto University. In April 2022, he stepped down as director and became the director emeritus of CiRA, while keeping his professorship. In 2006, he and his team generated induced pluripotent stem cells (iPS cells) from adult mouse fibroblasts. iPS cells closely resemble embryonic stem cells, the in vitro equivalent of the part of the blastocyst (the embryo a few days after fertilization) which grows to become the embryo proper.
They were able to show that these iPS cells were pluripotent, i.e. capable of generating all cell lineages of the body. Later he and his team generated iPS cells from human adult fibroblasts, again as the first group to do so. A key difference from previous attempts by the field was his team's use of multiple transcription factors, instead of transfecting one transcription factor per experiment. They started with 24 transcription factors known to be important in the early embryo, but could in the end reduce the set to four transcription factors – Sox2, Oct4, Klf4 and c-Myc. Yamanaka's Nobel Prize–winning research in iPS cells The 2012 Nobel Prize in Physiology or Medicine was awarded jointly to Sir John B. Gurdon and Shinya Yamanaka "for the discovery that mature cells can be reprogrammed to become pluripotent." Background: different cell types and techniques There are different types of stem cells, and different techniques for obtaining them. Historical background The prevalent view during the early 20th century was that mature cells were permanently locked into the differentiated state and cannot return to a fully immature, pluripotent stem cell state. It was thought that cellular differentiation could only be a unidirectional process. Therefore, non-differentiated egg/early embryo cells can only develop into specialized cells. However, stem cells with limited potency (adult stem cells) remain in bone marrow, intestine, skin etc. to act as a source of cell replacement. The fact that differentiated cell types had specific patterns of proteins suggested that irreversible epigenetic modifications or genetic alterations were the cause of unidirectional cell differentiation. On this view, cells progressively become more restricted in their differentiation potential and eventually lose pluripotency. In 1962, John B. Gurdon demonstrated that the nucleus from a differentiated frog intestinal epithelial cell can generate a fully functional tadpole via transplantation to an enucleated egg. Gurdon used somatic cell nuclear transfer (SCNT) as a method to understand reprogramming and how cells change in specialization. He concluded that differentiated somatic cell nuclei had the potential to revert to pluripotency. This was a paradigm shift at the time. It showed that a differentiated cell nucleus has retained the capacity to successfully revert to an undifferentiated state, with the potential to restart development (pluripotent capacity). However, the question still remained whether an intact differentiated cell could be fully reprogrammed to become pluripotent. Yamanaka's research Shinya Yamanaka proved that introduction of a small set of transcription factors into a differentiated cell was sufficient to revert the cell to a pluripotent state. Yamanaka focused on factors that are important for maintaining pluripotency in embryonic stem (ES) cells. This was the first time an intact differentiated somatic cell had been reprogrammed to become pluripotent. Knowing that transcription factors were involved in the maintenance of the pluripotent state, he selected a set of 24 ES cell transcription factors as candidates to reinstate pluripotency in somatic cells. First, he collected the 24 candidate factors. When all 24 genes encoding these transcription factors were introduced into skin fibroblasts, a few colonies that were remarkably similar to ES cells were in fact generated.
Secondly, further experiments were conducted with smaller numbers of transcription factors added, to identify the key factors, through a very simple and yet sensitive assay system. Lastly, he identified the four key genes. They found that 4 transcription factors (Myc, Oct3/4, Sox2 and Klf4) were sufficient to convert mouse embryonic or adult fibroblasts to pluripotent stem cells (capable of producing teratomas in vivo and contributing to chimeric mice). These pluripotent cells are called iPS (induced pluripotent stem) cells; they appeared with very low frequency. iPS cells can be selected by inserting the b-geo gene into the Fbx15 locus. The Fbx15 promoter is active in pluripotent stem cells, which induces b-geo expression and in turn gives rise to G418 resistance; this resistance helps identify iPS cells in culture. Moreover, in 2007, Yamanaka and his colleagues found iPS cells with germline transmission (via selection for the Oct4 or Nanog gene). Also in 2007, they were the first to produce human iPS cells. Some issues that current methods of induced pluripotency face are the very low production rate of iPS cells and the fact that the 4 transcription factors have been shown to be oncogenic. In July 2014, during a scandal involving Japanese stem cell researcher Haruko Obokata fabricating data, doctoring images, and plagiarizing the work of others, Yamanaka faced public scrutiny because his associated work lacked full documentation. Yamanaka denied manipulating images in his papers on embryonic mouse stem cells, but he could not find lab notes to confirm that the raw data were consistent with the published results. Further research and future prospects Since the original discovery by Yamanaka, much further research has been done in this field, and many improvements have been made to the technology. Improvements made to Yamanaka's research as well as future prospects of his findings are as follows: The delivery mechanism of pluripotency factors has been improved. At first, retroviral vectors, which integrate randomly in the genome and cause deregulation of genes that contribute to tumor formation, were used. Now, however, non-integrating viruses, stabilised RNAs or proteins, or episomal plasmids (integration-free delivery mechanisms) are used. Transcription factors required for inducing pluripotency in different cell types have been identified (e.g. neural stem cells). Small molecules were identified that can substitute for the function of the transcription factors. Transdifferentiation experiments were carried out, in which researchers tried to change the cell fate without proceeding through a pluripotent state. They were able to systematically identify genes that carry out transdifferentiation using combinations of transcription factors that induce cell fate switches. They found transdifferentiation within a germ layer and between germ layers, e.g., exocrine cells to endocrine cells, fibroblast cells to myoblast cells, fibroblast cells to cardiomyocyte cells, and fibroblast cells to neurons. Cell replacement therapy with iPS cells is a possibility. Stem cells can replace diseased or lost cells in degenerative disorders, and they are less prone to immune rejection. However, there is a danger that the process may introduce mutations or other genomic abnormalities that render the cells unsuitable for cell therapy. So, there are still many challenges, but it is a very exciting and promising research area. Further work is required to guarantee safety for patients.
iPS cells from patients with genetic and other disorders can be used medically to gain insights into the disease process, for example: amyotrophic lateral sclerosis (ALS), Rett syndrome, spinal muscular atrophy (SMA), α1-antitrypsin deficiency, familial hypercholesterolemia and glycogen storage disease type 1A; for cardiovascular disease, Timothy syndrome, LEOPARD syndrome, and type 1 and 2 long QT syndrome; and Alzheimer's disease, spinocerebellar ataxia, Huntington's disease, etc. iPS cells provide screening platforms for development and validation of therapeutic compounds. For example, kinetin was a novel compound found using iPS cells from familial dysautonomia, and beta blockers and ion channel blockers for long QT syndrome were identified with iPS cells. Yamanaka's research has "opened a new door and the world's scientists have set forth on a long journey of exploration, hoping to find our cells' true potential." In 2013, iPS cells were used to generate a human vascularized and functional liver in mice in Japan. Multiple stem cells were used to differentiate the component parts of the liver, which then self-organized into the complex structure. When placed into a mouse host, the liver vessels connected to the host's vessels and the liver performed normal functions, including breaking down drugs and producing liver secretions. In 2022, Yamanaka factors were shown to affect age-related measures in aged mice. Recognition In 2007, Yamanaka was recognized as a "Person Who Mattered" in the Time Person of the Year edition of Time magazine. Yamanaka was also nominated as a 2008 Time 100 Finalist. In June 2010, Yamanaka was awarded the Kyoto Prize for reprogramming adult skin cells to pluripotential precursors. Yamanaka developed the method as an alternative to embryonic stem cells, thus circumventing an approach in which embryos would be destroyed. In May 2010, Yamanaka was given an honorary Doctor of Science degree by Mount Sinai School of Medicine. In September 2010, he was awarded the Balzan Prize for his work on biology and stem cells. Yamanaka has been listed as one of the 15 Asian Scientists To Watch by Asian Scientist magazine on May 15, 2011. In June 2011, he was awarded the inaugural McEwen Award for Innovation; he shared the $100,000 prize with Kazutoshi Takahashi, who was the lead author on the paper describing the generation of induced pluripotent stem cells. In June 2012, he was awarded the Millennium Technology Prize for his work in stem cells. He shared the 1.2 million euro prize with Linus Torvalds, the creator of the Linux kernel. In October 2012, he and fellow stem cell researcher John Gurdon were awarded the Nobel Prize in Physiology or Medicine "for the discovery that mature cells can be reprogrammed to become pluripotent." 2007 – Osaka Science Prize 2007 – Inoue Prize for Science 2007 – Asahi Prize 2007 – Meyenburg Cancer Research Award 2008 – Yamazaki-Teiichi Prize in Biological Science & Technology 2008 – Robert Koch Prize 2008 – Medals of Honor (Japan) (with purple ribbon) 2008 – Shaw Prize in Life Science & Medicine 2008 – Sankyo Takamine Memorial Award 2008 – Massry Prize from the Keck School of Medicine, University of Southern California 2008 – Golden Plate Award of the American Academy of Achievement 2009 – Lewis S.
Rosenstiel Award for Distinguished Work in Basic Medical Research 2009 – Gairdner Foundation International Award 2009 – Albert Lasker Award for Basic Medical Research 2010 – Balzan Prize for Stem Cells: Biology and potential applications 2010 – March of Dimes Prize in Developmental Biology 2010 – Kyoto Prize in Biotechnology and medical technology 2010 – Person of Cultural Merit 2010 – BBVA Foundation Frontiers of Knowledge Award in the Biomedicine Category 2011 – Albany Medical Center Prize in biomedicine 2011 – Wolf Prize in Medicine 2011 – King Faisal International Prize for Medicine 2011 – McEwen Award for Innovation 2012 – Millennium Technology Prize 2012 – Fellow of the National Academy of Sciences 2012 – Nobel Prize in Physiology or Medicine 2012 – Order of Culture 2013 – Breakthrough Prize in Life Sciences 2013 – Member of the Pontifical Academy of Sciences 2014 – UCSF 150th Anniversary Alumni Excellence Awards 2016 – Honorable Emeritus Professor, Hiroshima University Interest in sports Yamanaka practiced judo (2nd Dan black belt) and played rugby as a university student. He also has a history of running marathons. After a 20-year gap, he competed in the inaugural Osaka Marathon in 2011 as a charity runner with a time of 4:29:53. He took part in Kyoto Marathon to raise money for iPS research since 2012. His personal best is 3:25:20 at 2018 Beppu-Ōita Marathon. See also Catherine Verfaillie List of Japanese Nobel laureates List of Nobel laureates affiliated with Kyoto University Tasuku Honjo References General references: The Discovery and Future of Induced Pluripotent Stem (iPS) Cloning and Stem Cell Discoveries Earn Nobel in Medicine (New York Times, October 8, 2012) Specific citations: External links Shinya Yamanaka, Center for iPS Cell Research and Application (CiRA), Kyoto University International Society for Stem Cell Research (ISSCR) 1962 births Living people 21st-century Japanese biologists Japanese Nobel laureates Academic staff of Kyoto University People from Higashiōsaka Cell biologists Stem cell researchers Biogerontologists Wolf Prize in Medicine laureates Laureates of the Imperial Prize Nobel laureates in Physiology or Medicine Foreign associates of the National Academy of Sciences Members of the French Academy of Sciences Recipients of the Order of Culture Recipients of the Albert Lasker Award for Basic Medical Research Members of the Pontifical Academy of Sciences Kobe University alumni Articles containing video clips Academic staff of Nara Institute of Science and Technology University of California, San Francisco faculty University of California, San Francisco alumni Members of the National Academy of Medicine Kyoto laureates in Advanced Technology
Shinya Yamanaka
Biology
3,418
16,079,328
https://en.wikipedia.org/wiki/NGC%20559
NGC 559 (also known as Caldwell 8) is an open cluster and Caldwell object in the constellation Cassiopeia. It shines at magnitude +9.5. It is located near the open cluster NGC 637 and the bright magnitude +2.2 irregular variable star Gamma Cassiopeiae. The cluster is 7 arcmins across. The object is also called Ghost's Goblet. This name was coined by astronomer Stephen J. O'Meara, as the center of the star cluster, with a little imagination, is reminiscent of a still photograph of a jeweled goblet that is about to vanish in a ghostly manner. O'Meara attributes the impression of fading to the low brightness (about +12) of many stars in the center as well as to the great age of the star cluster, which is about 1.8 billion years old. References External links Open clusters 0559 008b Cassiopeia (constellation) 17871109
NGC 559
Astronomy
210
28,476,152
https://en.wikipedia.org/wiki/Coital%20incontinence
Coital incontinence (CI) is urinary leakage that occurs during either penetration or orgasm and can occur with a sexual partner or with masturbation. It has been reported to occur in 10% to 27% of sexually active women with urinary continence problems. There is evidence to suggest links between urinary leakage at penetration and urodynamic stress incontinence, and between urinary leakage at orgasm and detrusor overactivity. Coital incontinence is physiologically distinct from female ejaculation, with which it is sometimes confused. References Urinary incontinence Sexual health
Coital incontinence
Biology
138
1,733,595
https://en.wikipedia.org/wiki/Spencer%20Wells
Spencer Wells (born April 6, 1969) is an American geneticist, anthropologist, author and entrepreneur. He co-hosts The Insight podcast with Razib Khan. Wells led The Genographic Project from 2005 to 2015, as an Explorer-in-Residence at the National Geographic Society, and is the founder and executive director of personal genomics nonprofit The Insitome Institute. Biography Youth and education Wells was born in Marietta, Georgia and grew up in Lubbock, Texas. He attended both All Saints School and Lubbock High School, and received a National Merit Scholarship. He obtained a Bachelor of Science in biology from the University of Texas at Austin in 1988 and a Ph.D. in biology from Harvard University in 1994. He was a postdoctoral fellow at Stanford University between 1994 and 1998, and a research fellow at the University of Oxford from 1999 to 2000. Career Wells did his Ph.D. work under Richard Lewontin, and later did postdoctoral research with Luigi Luca Cavalli-Sforza and Sir Walter Bodmer. His work, which has helped to establish the critical role played by Central Asia in the peopling of the world, has been published in journals such as Science, American Journal of Human Genetics, and the Proceedings of the National Academy of Sciences. Wells is renowned for his logistically complex sample-collecting expeditions in remote parts of the world. EurAsia98, which in 1998 took him and his team from London to the Altai Mountains on the Mongolian border, via an overland route through the Caucasus, Iran and the -stans of Central Asia, was sponsored by Land Rover. In 2005 he led a team of Genographic scientists on the first modern expedition to the Tibesti Mountains in northern Chad, and in 2006 he led a team to the Wakhan Corridor on the Tajik-Afghan border. His work has taken him to more than 100 countries. He wrote the book The Journey of Man: A Genetic Odyssey (2002), which explains how genetic data has been used to trace human migrations over the past 50,000 years, when modern humans first migrated outside of Africa. According to Wells, one group took a southern route and populated southern India and southeast Asia, then Australia. The other group, accounting for 90% of the world's non-African population (some 5.4 billion people as of 2014), took a northern route, eventually peopling most of Eurasia (largely displacing the aboriginals in southern India, Sri Lanka and Southeast Asia in the process), North Africa and the Americas. Wells also wrote and presented the 2003 PBS/National Geographic documentary of the same name. Wells has contributed to efforts to determine the date of Y-chromosomal Adam. From 2005-2015, Wells led the Genographic Project, undertaken by the National Geographic Society, IBM, and the Waitt Foundation, which aimed to create a picture of how our ancestors populated the planet by analyzing DNA samples from around the world. The project is credited with creating the personal genomics industry. He has presented the results of his work around the world, including at the 2007 TED conference, where he spoke specifically about human diversity. Wells was a keynote speaker at the Science & Technology Summit in The Hague on November 18, 2010. He also gave the keynote address at the University of Texas College of Natural Sciences commencement exercises on May 21, 2011. Wells was one of the keynote speakers at the Southern California Genealogical Society Jamboree that was co-sponsored by the International Society of Genetic Genealogy on June 3, 2013. 
The focus was on Family History and DNA: Genetic Genealogy in 2013, where he was quoted as saying: Since 2005, the Genographic Project has used the latest genetic technology to expand our knowledge of the human story, and its pioneering use of DNA testing to engage and involve the public in the research effort has helped to create a new breed of "citizen scientist." Geno 2.0 expands the scope for citizen science, harnessing the power of the crowd to discover new details of human population history. Allegations of anti-Semitism In July 2020, Wells attracted criticism for tweeting that Israel should be bombed "until the sand turns to glass", as reported by the online edition of the Algemeiner Journal. The University of Texas at Austin subsequently distanced itself from Wells, stating, "Spencer Wells is no longer a faculty or advisory council member at UT. He previously had a courtesy, unpaid appointment as a part-time adjunct that did not involve teaching. That ended in May and was not renewed. We do not have any association with the views held by Mr. Wells." Personal life Wells is married to Holly Morse, and the two have lived in Lombok, Indonesia since 2020. He was previously married to Trendell Thompson (1998-2005), with whom he has two children, Sasha Thompson-Wells and Margot Thompson-Wells; and Pamela Caragol Wells (2005-2015). Awards and honors National Merit Scholar Phi Beta Kappa Fellow of the Explorers Club National Geographic Explorer-in-Residence Kistler Prize Outstanding Young Texas Ex Frank H.T. Rhodes Class of '56 Professorship, Cornell University Director of the Texas Lyceum Distinguished Alumnus, College of Natural Sciences, University of Texas at Austin Books The Journey of Man: A Genetic Odyssey, 2002 (Penguin, UK; Princeton University Press and Random House, US; Fischer Verlag, Germany; Longanesi, Italy; Oceano, Spain/Latin America; Ucila International, Slovenia; Dokoran, Czech Republic; Akkord, Hungary; Oriental Press, China; Basilico, Japan; ScienceBooks, Korea; Yurt, Turkey; CD Press, Romania; Alpina, Russia) Deep Ancestry: Inside the Genographic Project, 2006 (National Geographic) Pandora's Seed: The Unforeseen Cost of Civilization, 2010 (Random House, US; Penguin, UK; Contact, Netherlands; Codice, Italy; Eksmo, Russia; Nika Center, Ukraine; Commonwealth, Taiwan; Eulyoo, Korea; Kagaku-Dojin, Japan; Shanghai BBT, China) Films 2000 – The Difference (Channel Four, UK) 2002 – The Real King and Queen (Discovery Channel) 2003 – Journey of Man (PBS/National Geographic Channel) – CINE Golden Eagle award 2004 – Quest for the Phoenicians (PBS) 2005 – Search for Adam (National Geographic Channel) 2007 – China's Secret Mummies (National Geographic Channel) – nominated for Outstanding Historical Programming Emmy 2009 – The Human Family Tree (National Geographic Channel) – nominated for Outstanding Science and Technology Programming Emmy See also Recent single-origin hypothesis Y-chromosomal Adam The Genographic Project References External links The Genographic Project Cover article from the December 2004 issue of Discover Interview about Genghis Khan's Y-chromosome on Radiolab Interview in PLoS Genetics Interview on The Colbert Report Interview on The Daily Show Talk on personal genomics at the Frontiers Forum 2019 1969 births Living people Population geneticists American geneticists University of Texas at Austin College of Natural Sciences alumni Harvard Graduate School of Arts and Sciences alumni Recent African origin of modern humans People from Washington, D.C. Genographic Project
Spencer Wells
Biology
1,468
25,229,818
https://en.wikipedia.org/wiki/African%20Renaissance%20Monument
The African Renaissance Monument (French: Monument de la Renaissance Africaine) is a tall bronze statue located on top of one of the twin hills known as Collines des Mamelles, outside Dakar, Senegal. Built overlooking the Atlantic Ocean in the Ouakam suburb, the statue was designed by the Senegalese architect Pierre Goudiaby after an idea presented by President Abdoulaye Wade, and built by Mansudae Overseas Projects, a monument construction company from North Korea. Site preparation atop the hill began in 2006, and construction of the bronze statue began in 2008. The statue was originally scheduled for completion in December 2009, but delays stretched into early 2010, and the formal dedication occurred on 4 April 2010, Senegal's "National Day", commemorating the 50th anniversary of the country's independence from France. It is the tallest statue in Africa. Construction The project was launched by then Senegalese President Abdoulaye Wade, who considered it part of Senegal's prestige projects, aimed at providing monuments to herald a new era of African Renaissance. It shows a family drawn up towards the sky, the man carrying his child on his biceps and holding his wife by the waist, "an Africa emerging from the bowels of the earth, leaving obscurantism to go towards the light". The monument represents an African family resolutely turned towards the North-West. The design of the monument was entrusted to the Senegalese architect Pierre Goudiaby Atepa, whose earlier works include the Door of the Third Millennium, which overlooks the Corniche road. The work was sketched by President Wade, who owns 35% of the copyright, but it was initiated by the Senegalese artist Ousmane Sow, who withdrew from the project following a disagreement with Abdoulaye Wade. Unveiling On 3 April 2010, the African Renaissance Monument was unveiled in Dakar in front of 19 African heads of state, including the President of Malawi and the African Union, Bingu wa Mutharika, Jean Ping of the African Union Commission, and the Presidents of Benin, Cape Verde, Republic of the Congo, Ivory Coast, The Gambia, Liberia, Mali, Mauritania and Zimbabwe, as well as representatives from North Korea, and Jesse Jackson and musician Akon, both from the United States, all of whom were given a tour. President Wade said, "It brings to life our common destiny. Africa has arrived in the 21st century standing tall and more ready than ever to take its destiny into its hands." President Bingu said, "This monument does not belong to Senegal. It belongs to the African people wherever we are." Controversies Expense Thousands of people protested against "all the failures of President Wade's regime, the least of which is this horrible statue" on the city's streets beforehand, with riot police deployed to maintain control. Deputy leader of the opposition Ndeye Fatou Toure described the monument as an "economic monster and a financial scandal in the context of the current [economic] crisis". The colossal statue has been criticized for its cost of US$27 million (£16.6m). The payment was made in kind, with 30 to 40 hectares of land that has variously been reported as having been provided by a Senegalese businessman or as state-owned land. Style The statue was built by Mansudae Overseas Projects, a North Korean sculpting company famous for various projects and large statues throughout Africa since the 1970s.
The statue was poorly received by art critics around the world after its much-delayed unveiling in 2010 and was compared by some to the (once-abandoned) Christopher Columbus statue project that was unveiled in Arecibo, Puerto Rico in 2016. Local imams argued that a statue depicting a human figure is idolatrous and objected to the perceived immodesty of the semi-nude male and female figures. Revenue The project has also attracted controversy due to Wade's claim to the intellectual property rights of the statue and his insistence that he is entitled to 35 percent of the profits raised. Opposition figures have sharply criticised Wade's plan to claim intellectual property rights, insisting that the president cannot claim copyright over ideas conceived as a function of his public office. Local artists Ousmane Sow, a world-renowned Senegalese sculptor, also objected to the use of foreign builders, saying it was anything but a symbol of African Renaissance and had nothing to do with art. Gallery of images See also African Renaissance Mansudae Overseas Projects List of tallest statues Sungbo's Eredo References National monuments and memorials Buildings and structures in Dakar Buildings and structures completed in 2010 Mansudae Overseas Projects Colossal statues Sculptures of Black people Sculptures of children in Senegal 2010 sculptures Monuments and memorials completed in the 2010s
African Renaissance Monument
Physics,Mathematics
967
1,794,902
https://en.wikipedia.org/wiki/Pin%20group
In mathematics, the pin group is a certain subgroup of the Clifford algebra associated to a quadratic space. It maps 2-to-1 to the orthogonal group, just as the spin group maps 2-to-1 to the special orthogonal group. In general the map from the Pin group to the orthogonal group is not surjective or a universal covering space, but if the quadratic form is definite (and the dimension is greater than 2), it is both. The non-trivial element of the kernel is denoted −1, which should not be confused with the orthogonal transform of reflection through the origin, generally denoted −I. General definition Let V be a vector space with a non-degenerate quadratic form Q. The pin group Pin(V) is the subset of the Clifford algebra Cl(V) consisting of elements of the form v1v2⋯vk, where the vi are vectors such that Q(vi) = ±1. The spin group is defined similarly, but with k restricted to be even; it is a subgroup of the pin group. In this article, V is always a real vector space. When V has p basis vectors squaring to +1 and q basis vectors squaring to −1, the pin group is denoted Pin(p, q). Geometrically, for a vector v with Q(v) ≠ 0, the map w ↦ −v w v^−1 is the reflection of the vector w across the hyperplane orthogonal to v. More generally, an element v1v2⋯vk of the pin group acts on vectors by transforming w to (−1)^k v1v2⋯vk w vk^−1⋯v1^−1, which is the composition of k reflections. Since every orthogonal transformation can be expressed as a composition of reflections (the Cartan–Dieudonné theorem), it follows that this representation of the pin group is a homomorphism from the pin group onto the orthogonal group. This is often called the twisted adjoint representation. The elements ±1 of the pin group are the elements which map to the identity of O(V), and every element of O(V) corresponds to exactly two elements of Pin(V). Definite form The pin group of a definite form maps onto the orthogonal group, and each component is simply connected (in dimension 3 and higher): it double covers the orthogonal group. The pin groups for a positive definite quadratic form Q and for its negative −Q are not isomorphic, but the orthogonal groups are. In terms of the standard forms, O(n, 0) = O(0, n), but Pin(n, 0) and Pin(0, n) are in general not isomorphic. Using the "+" sign convention for Clifford algebras (where v^2 = Q(v)), one writes Pin+(n) = Pin(n, 0) and Pin−(n) = Pin(0, n), and these both map onto O(n) = O(n, 0) = O(0, n). By contrast, we have the natural isomorphism Spin(n, 0) ≅ Spin(0, n), and they are both the (unique) non-trivial double cover of the special orthogonal group SO(n), which is the (unique) universal cover for n ≥ 3. Indefinite form There are as many as eight different double covers of O(p, q), for p, q ≠ 0, which correspond to the extensions of the center (which is either C2 × C2 or C4) by C2. Only two of them are pin groups—those that admit the Clifford algebra as a representation. They are called Pin(p, q) and Pin(q, p) respectively. As topological group Every connected topological group has a unique universal cover as a topological space, which has a unique group structure as a central extension by the fundamental group. For a disconnected topological group, there is a unique universal cover of the identity component of the group, and one can take the same cover as topological spaces on the other components (which are principal homogeneous spaces for the identity component), but the group structure on other components is not uniquely determined in general.
The Pin and Spin groups are particular topological groups associated to the orthogonal and special orthogonal groups, coming from Clifford algebras: there are other similar groups, corresponding to other double covers or to other group structures on the other components, but they are not referred to as Pin or Spin groups, nor studied much. In 2001, Andrzej Trautman found the set of all 32 inequivalent double covers of O(p) × O(q), the maximal compact subgroup of O(p, q), and an explicit construction of 8 double covers of the same group O(p, q). Construction The two pin groups correspond to the two central extensions 1 → {±1} → Pin±(V) → O(V) → 1. The group structure on Spin(V) (the connected component of determinant 1) is already determined; the group structure on the other component is determined up to the center, and thus has a ±1 ambiguity. The two extensions are distinguished by whether the preimage of a reflection squares to ±1 ∈ Ker (Spin(V) → SO(V)), and the two pin groups are named accordingly. Explicitly, a reflection has order 2 in O(V), r² = 1, so the square of the preimage of a reflection (which has determinant one) must be in the kernel of Spin±(V) → SO(V), so the square is ±1, and either choice determines a pin group (since all reflections are conjugate by an element of SO(V), which is connected, all reflections must square to the same value). Concretely, in Pin+, the preimage of a reflection has order 2, and the preimage of a subgroup {1, r} is C2 × C2: if one repeats the same reflection twice, one gets the identity. In Pin−, the preimage of a reflection has order 4, and the preimage of a subgroup {1, r} is C4: if one repeats the same reflection twice, one gets "a rotation by 2π"—the non-trivial element of the kernel of Spin(V) → SO(V) can be interpreted as "rotation by 2π" (every axis yields the same element). Low dimensions In 1 dimension, the pin groups are congruent to the first dihedral and dicyclic groups: Pin+(1) ≅ C2 × C2 and Pin−(1) ≅ C4. In 2 dimensions, the distinction between Pin+ and Pin− mirrors the distinction between the dihedral group of a 2n-gon and the dicyclic group of the cyclic group C2n. In Pin+, the preimage of the dihedral group of an n-gon, considered as a subgroup Dihn < O(2), is the dihedral group of a 2n-gon, Dih2n < Pin+(2), while in Pin−, the preimage of the dihedral group is the dicyclic group Dicn < Pin−(2). The resulting commutative square of subgroups for Spin(2), Pin+(2), SO(2), O(2) – namely C2n, Dih2n, Cn, Dihn – is also obtained using the projective orthogonal group (going down from O by a 2-fold quotient, instead of up by a 2-fold cover) in the square SO(2), O(2), PSO(2), PO(2), though in this case it is also realized geometrically, as "the projectivization of a 2n-gon in the circle is an n-gon in the projective line". In 3 dimensions the situation is as follows. The Clifford algebra generated by 3 anticommuting square roots of +1 is the algebra of 2×2 complex matrices, and Pin+(3) is isomorphic to . The Clifford algebra generated by 3 anticommuting square roots of −1 is the algebra H ⊕ H (two copies of the quaternions), and Pin−(3) is isomorphic to SU(2) × C2. These groups are nonisomorphic because the center of Pin+(3) is C4 while the center of Pin−(3) is C2 × C2. Center Suppose . The center of is when , and when . The center of is when , and when . Name The name was introduced in Atiyah, Bott & Shapiro (1964), where they state "This joke is due to J-P. Serre". It is a back-formation from Spin: "Pin is to O(n) as Spin is to SO(n)", hence dropping the "S" from "Spin" yields "Pin". Notes References Lie groups
Pin group
Mathematics
1,655
1,009,291
https://en.wikipedia.org/wiki/Pyranometer
A pyranometer () is a type of actinometer used for measuring solar irradiance on a planar surface and it is designed to measure the solar radiation flux density (W/m2) from the hemisphere above within a wavelength range 0.3 μm to 3 μm. A typical pyranometer does not require any power to operate. However, recent technical development includes use of electronics in pyranometers, which do require (low) external power (see heat flux sensor). Explanation The solar radiation spectrum that reaches Earth's surface extends its wavelength approximately from 300 nm to 2800 nm. Depending on the type of pyranometer used, irradiance measurements with different degrees of spectral sensitivity will be obtained. To make a measurement of irradiance, it is required by definition that the response to "beam" radiation varies with the cosine of the angle of incidence. This ensures a full response when the solar radiation hits the sensor perpendicularly (normal to the surface, sun at zenith, 0° angle of incidence), zero response when the sun is at the horizon (90° angle of incidence, 90° zenith angle), and 0.5 at a 60° angle of incidence. It follows that a pyranometer should have a so-called "directional response" or "cosine response" that is as close as possible to the ideal cosine characteristic. Types Following the definitions noted in the ISO 9060, three types of pyranometer can be recognized and grouped in two different technologies: thermopile technology and silicon semiconductor technology. The light sensitivity, known as 'spectral response', depends on the type of pyranometer. The figure here above shows the spectral responses of the three types of pyranometer in relation to the solar radiation spectrum. The solar radiation spectrum represents the spectrum of sunlight that reaches the Earth's surface at sea level, at midday with A.M. (air mass) = 1.5. The latitude and altitude influence this spectrum. The spectrum is influenced also by aerosol and pollution. Thermopile pyranometers A thermopile pyranometer (also called thermo-electric pyranometer) is a sensor based on thermopiles designed to measure the broad band of the solar radiation flux density from a 180° field of view angle. A thermopile pyranometer thus usually measures from 300 to 2800 nm with a largely flat spectral sensitivity (see the spectral response graph) The first generation of thermopile pyranometers had the active part of the sensor equally divided in black and white sectors. Irradiation was calculated from the differential measure between the temperature of the black sectors, exposed to the sun, and the temperature of the white sectors, sectors not exposed to the sun or better said in the shades. In all thermopile technology, irradiation is proportional to the difference between the temperature of the sun exposed area and the temperature of the shadow area. Design In order to attain the proper directional and spectral characteristics, a thermopile pyranometer is constructed with the following main components: A thermopile sensor with a black coating. It absorbs all solar radiation, has a flat spectrum covering the 300 to 50,000 nanometer range, and has a near-perfect cosine response. A glass dome. It limits the spectral response from 300 to 2,800 nanometers (cutting off the part above 2,800 nm), while preserving the 180° field of view. It also shields the thermopile sensor from convection. 
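As a simple illustration of the ideal directional ("cosine") response described above, the following sketch computes the reading a perfect pyranometer would give for beam radiation at a given angle of incidence. It is illustrative only and not tied to any particular instrument; real sensors are characterised by how closely they approach this ideal curve:

```python
import math

def ideal_beam_response(dni_w_m2, incidence_deg):
    """Irradiance a perfect pyranometer would report for beam (direct) radiation only."""
    if incidence_deg >= 90:
        return 0.0                                  # sun at or below the sensor plane
    return dni_w_m2 * math.cos(math.radians(incidence_deg))

print(ideal_beam_response(1000, 0))                 # 1000.0 (full response at normal incidence)
print(round(ideal_beam_response(1000, 60), 1))      # 500.0  (half response at 60 degrees)
print(ideal_beam_response(1000, 90))                # 0.0    (sun at the horizon)
```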
Many, but not all, first-class and secondary standard pyranometers (see ISO 9060 classification of thermopile pyranometers) include a second glass dome as an additional "radiation shield", resulting in a better thermal equilibrium between the sensor and inner dome, compared to some single dome models by the same manufacturer. The effect of having a second dome, in these cases, is a strong reduction of instrument offsets. Class A, single dome models, with low zero-offset (+/- 1 W/m2) are available. In the modern thermopile pyranometers the active (hot) junctions of the thermopile are located beneath the black coating surface and are heated by the radiation absorbed from the black coating. The passive (cold) junctions of the thermopile are fully protected from solar radiation and in thermal contact with the pyranometer housing, which serves as a heat-sink. This prevents any alteration from yellowing or decay when measuring the temperature in the shade, thus impairing the measure of the solar irradiance. The thermopile generates a small voltage in proportion to the temperature difference between the black coating surface and the instrument housing. This is of the order of 10 μV (microvolts) per W/m2, so on a sunny day the output will be around 10 mV (millivolts). Each pyranometer has a unique sensitivity, unless otherwise equipped with electronics for signal calibration. Usage Thermopile pyranometers are frequently used in meteorology, climatology, climate change research, building engineering physics, photovoltaic systems, and monitoring of photovoltaic power stations. The solar energy industry, in a 2017 standard, IEC 61724-1:2017, has defined the type and number of pyranometers that should be used depending on the size and category of solar power plant. That norm advises to install thermopile pyranometers horizontally (GHI, Global Horizontal Irradiation), and to install photovoltaic pyranometers in the plane of PV modules (POA, Plane Of Array) to enhance accuracy in Performance Ratio calculation. To use the data measured by a pyranometer (horizontal or in-plane), quality assessment (QA) of the raw measured data is necessary. This is because the pyranometer measurements typically suffer from environment-induced errors but also handling and neglect errors, such as: Pollution of the glass dome (e.g. deposition of atmospheric dust, bird droppings, snowfall), which reduces the measured irradiance Issues with positioning, resulting in measurements in a different plane (i.e. not horizontal or in-plane with PV modules) than expected Data logger errors resulting in e.g. static values, oscillations, or data capped to a certain value Reflections and shading from the surrounding objects resulting in inaccurate measurements (i.e. not corresponding to solar irradiance) Calibration issues of the instrument, leading to measurement errors, offset, or drift over time Dew, snow, or frost on the glass dome on lower-end pyranometers not equipped with heating units Each of the above issues appears as a specific pattern in the measured time series. Thanks to this, the issues can be identified, the erroneous records flagged, and excluded from the dataset. The methods employed for data QA can be either manual, relying on an expert to identify the patterns, or automated, where an algorithm does the job. As many of the patterns are complex, not easily described, and require a particular context, manual QA is very common. A specialist software with suitable tools is required to perform the QA. 
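The voltage-to-irradiance conversion described above is a single division by the instrument's calibrated sensitivity. Below is a minimal sketch using the roughly 10 μV per W/m2 figure quoted above as an assumed sensitivity; a real instrument uses the value from its individual calibration certificate:

```python
# Assumed sensitivity for illustration; each pyranometer has its own calibration value.
SENSITIVITY_UV_PER_W_M2 = 10.0

def irradiance_w_m2(voltage_uv):
    """Convert the thermopile output voltage (in microvolts) to irradiance in W/m2."""
    return voltage_uv / SENSITIVITY_UV_PER_W_M2

# On a sunny day the output is around 10 mV (10,000 uV), i.e. about 1000 W/m2.
print(irradiance_w_m2(10_000))    # 1000.0
```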
After the QA procedure, the remaining ‘clean’ dataset reflects the solar irradiance at the measurement site to within the uncertainty of measurement of the instrument. The ‘clean’ measured dataset can be optionally enhanced with data from a satellite-based solar irradiance model. This data is available globally for a much longer time period (typically decades into the past) than the data measured by the pyranometer. The satellite model data can be correlated (or site adapted) to the pyranometer-measured data to produce a dataset with a long time period of data accurate for the specific site, with a defined uncertainty. Such data can be used to perform bankable solar resource studies or produce solar potential maps. For monitoring of operational PV power plants, pyranometers play an essential role in verifying the solar irradiance available at any given time or over a certain time period. Due to weather variability and the spatial scale of contemporary solar plants (above 100 MWp), and for redundancy, multiple pyranometers are installed to provide accurate solar irradiation for each section of the PV power plant. The IEC 61724-1:2017 international standard, for example, calls for at least 4 Class A thermopile pyranometers to be installed at a 100 MWp PV power plant at all times. Solar measurements that have passed QA can be used to derive Key Performance Indicators (KPIs) such as the Performance Ratio - metrics used in asset health monitoring or various contractual scenarios relating to energy produced (billing) or asset management (i.e. O&M). In these calculations, the measured sum of in-plane irradiation over a certain period is used as the reference against which the normalized PV electricity produced is compared. Due to the difficulty of obtaining reliable in-plane measurements, especially in operational power plants, the Energy Performance Index is increasingly being used instead of the older Performance Ratio metric. Photovoltaic pyranometer – silicon photodiode Also known as a photoelectric pyranometer in ISO 9060, a photodiode-based pyranometer can detect the portion of the solar spectrum between 400 nm and 1100 nm. The photodiode converts the aforementioned solar spectrum frequencies into current at high speed, thanks to the photoelectric effect. The conversion is influenced by temperature, with the current increasing as the temperature rises (by about 0.1% per °C). Design A photodiode-based pyranometer is composed of a housing dome, a photodiode, and a diffuser or optical filters. The photodiode has a small surface area and acts as a sensor. The current generated by the photodiode is proportional to irradiance; an output circuit, such as a transimpedance amplifier, generates a voltage directly proportional to the photocurrent. The output is usually on the order of millivolts, the same order of magnitude as thermopile-type pyranometers. Usage Photodiode-based pyranometers are implemented where the quantity of irradiation of the visible solar spectrum, or of certain portions such as UV, IR or PAR (photosynthetically active radiation), needs to be measured. This is done by using diodes with specific spectral responses. Photodiode-based pyranometers are at the core of the luxmeters used in photography, cinema and lighting engineering. Sometimes they are also installed close to modules of photovoltaic systems. Photovoltaic pyranometer – photovoltaic cell Built around the 2000s concurrently with the spread of photovoltaic systems, the photovoltaic pyranometer is an evolution of the photodiode pyranometer.
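The Performance Ratio calculation mentioned above boils down to dividing the normalized energy yield by the normalized in-plane irradiation. Below is a minimal sketch of that arithmetic, with hypothetical variable names and example numbers; the exact formulation in IEC 61724-1 includes further corrections not shown here:

```python
def performance_ratio(e_ac_kwh, p_rated_kwp, h_poa_kwh_per_m2, g_stc_kw_per_m2=1.0):
    """PR = (E_AC / P_rated) / (H_POA / G_STC), using measured in-plane irradiation."""
    reference_yield = h_poa_kwh_per_m2 / g_stc_kw_per_m2   # equivalent hours of full sun
    final_yield = e_ac_kwh / p_rated_kwp                   # kWh produced per kWp installed
    return final_yield / reference_yield

# Hypothetical example: 148 MWh produced by a 1 MWp plant in a month
# with 185 kWh/m2 of measured in-plane irradiation.
print(round(performance_ratio(148_000, 1_000, 185), 3))    # ~0.8
```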
It answered the need for a single reference photovoltaic cell when measuring the power of photovoltaic cells and modules. Specifically, each cell and module is tested through flash tests by their respective manufacturers, and thermopile pyranometers do not possess an adequate speed of response nor the same spectral response as a cell. This would create an obvious mismatch when measuring power, which would need to be quantified. In technical documents, this pyranometer is also known as a "reference cell". The active part of the sensor is composed of a photovoltaic cell working in near short-circuit condition. As such, the generated current is directly proportional to the solar radiation hitting the cell in a range between 350 nm and 1150 nm. When struck by light in this range, it produces current as a consequence of the photovoltaic effect. Its sensitivity is not flat, but is the same as that of a silicon photovoltaic cell. See the Spectral Response graph. Design A photovoltaic pyranometer is essentially assembled with the following parts: A metallic container with a mounting bracket A small photovoltaic cell Signal conditioning electronics Silicon sensors such as the photodiode and the photovoltaic cell vary their output as a function of temperature. In more recent models, the electronics compensate the signal for temperature, thereby removing the influence of temperature from the solar irradiance values. In several models, the case houses a board for the amplification and conditioning of the signal. Usage Photovoltaic pyranometers are used in solar simulators and alongside photovoltaic systems for the calculation of photovoltaic module effective power and system performance. Because the spectral response of a photovoltaic pyranometer is similar to that of a photovoltaic module, it may also be used for preliminary diagnosis of malfunctions in photovoltaic systems. A reference PV cell or solar irradiance sensor may have external inputs for connecting a module temperature sensor, an ambient temperature sensor and a wind speed sensor, with a single Modbus RTU output connected directly to the data logger. These data are suitable for monitoring solar PV plants. Standardization and calibration Both thermopile-type and photovoltaic pyranometers are manufactured according to standards. Thermopile pyranometers Thermopile pyranometers follow the ISO 9060 standard, which is also adopted by the World Meteorological Organization (WMO). This standard distinguishes three classes. The latest version of ISO 9060, from 2018, uses the following classification: Class A for the best performing, followed by Class B and Class C, while the older ISO 9060 standard from 1990 used more ambiguous terms such as "secondary standard", "first class" and "second class". Differences between classes are due to a certain number of properties of the sensors: response time, thermal offsets, temperature dependence, directional error, non-stability, non-linearity, spectral selectivity and tilt response. These are all defined in ISO 9060. For a sensor to be classified in a certain category, it needs to fulfill all the minimum requirements for these properties. ‘Fast response’ and ‘spectrally flat’ are two sub-classifications included in ISO 9060:2018. They help to further distinguish and categorise sensors.
To gain the ‘fast response’ classification, the response time for 95% of readings must be less than 0.5 seconds, while ‘spectrally flat’ can apply to sensors with a spectral selectivity of less than 3% in the 0.35 to 1.5 μm spectral range. While most Class A pyranometers are ‘spectrally flat’, sensors in the ‘fast response’ sub-classification are much rarer. Most Class A pyranometers have a response time of 5 seconds or more. Calibration is typically done with the World Radiometric Reference (WRR) as an absolute reference; it is maintained by PMOD in Davos, Switzerland. In addition to the World Radiometric Reference, there are private laboratories such as ISO-Cal North America which have acquired accreditation for these unique calibrations. For Class A pyranometers, calibration is done following ASTM G167, ISO 9847 or ISO 9846. Class B and Class C pyranometers are usually calibrated according to ASTM E824 and ISO 9847. Photovoltaic pyranometer Photovoltaic pyranometers are standardized and calibrated under IEC 60904-4 for primary reference samples and under IEC 60904-2 for secondary reference samples and the instruments intended for sale. In both standards, the traceability chain starts with the primary standard, the group of cavity radiometers that realizes the World Radiometric Reference (WRR). Signal conditioning The natural output of these pyranometers does not usually exceed tens of millivolts (mV). It is considered a "weak" signal and, as such, is rather vulnerable to electromagnetic interference, especially where the cable runs over distances of tens of metres or lies within photovoltaic systems. Thus, these sensors are frequently equipped with signal conditioning electronics, giving an output of 4-20 mA or 0-1 V. Other solutions offer greater noise immunity, such as Modbus over RS-485, suitable for environments with the electromagnetic interference typical of medium-to-large-scale photovoltaic power stations, or SDI-12 output, where sensors are part of a low-power weather station. The built-in electronics often ease integration into the system's SCADA. Additional information, such as the calibration history and serial number, can also be stored in the sensor's electronics. See also Actinometer Photodiode Heat flux sensor Net radiometer Pyrgeometer Pyrheliometer Radiometer Sunlight Solar constant Sun path References External links Meteo-Technology instrumentation website Website showing measured data using a thermopile pyranometer north of the Arctic Circle Measuring instruments Meteorological instrumentation and equipment
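For the signal-conditioned outputs described above, reading the sensor reduces to a linear mapping from the electrical output range onto the configured irradiance range. Below is a minimal sketch for a 4-20 mA loop output; the full-scale value is a hypothetical example, since the actual scaling is set by the manufacturer or the data logger configuration:

```python
def irradiance_from_current(i_ma, full_scale_w_m2=1600.0):
    """Map a 4-20 mA loop signal to W/m2: 4 mA -> 0 W/m2, 20 mA -> full scale."""
    if not 3.8 <= i_ma <= 20.5:
        # Outside normal loop limits usually indicates a sensor or wiring fault.
        raise ValueError(f"loop current {i_ma} mA out of range")
    return (i_ma - 4.0) / 16.0 * full_scale_w_m2

print(irradiance_from_current(12.0))   # 800.0 W/m2 at mid-scale
```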
Pyranometer
Technology,Engineering
3,586
1,133,651
https://en.wikipedia.org/wiki/Ternus%20illusion
The Ternus illusion, also commonly referred to as the Ternus Effect, is an illusion related to human visual perception involving apparent motion. In a simplified explanation of one form of the illusion, two discs, (referred to here as L for left and C for centre) are shown side by side as the first frame in a sequence of three frames. Next a blank frame is presented for a very short, variable duration. In the final frame, two similar discs (C for centre and R for right) are then shown in a shifted position. Depending on various factors including the time intervals between frames as well as spacing and layout, observers perceive either element motion, in which L appears to move to R while C remains stationary or they report experiencing group motion, in which L and C appear to move together to C and R. Both element motion and group motion can be observed in animated examples to the right in Figures 1 and 2. Overview In 1926 and then again in 1938, the Gestalt psychologist Joseph Ternus observed and defined the "problem of phenomenal identity". Ternus' research was based around earlier undertakings in the domain by Pikler in 1917. This problem of phenomenal identity that Ternus had discovered occurs due to the human visual system's natural ability to establish and then preserve the entities of objects even when the defining attributes of those objects have undergone drastic changes and no longer resemble what they once did. The effect that Ternus had observed was in fact a bistable percept or perception of apparent motion which he found using a display consisting of three frames presented sequentially. When observers are presented with two immobile stimuli, that are presented in a sequential fashion at two differing locations, the stimuli will often be perceived as a solitary object that is simply moving from a starting location to another position. This apparent motion or apparent movement is of great interest to researchers because the perceived movement does not derive strictly from the physical aspect of vision such as the stimulation caused by impingement on the retina. Instead, apparent motion appears to arise from the visual system's processing of the physical properties of the percept. It is for this reason that apparent motion is a key area of research in the domain of vision research. The Ternus illusion is perhaps one of the best examples of such an effect. In order to observe the Ternus illusion/effect, observers view an apparent motion display known as a Ternus display. The Ternus display features a series of frames that are separated by what is known as a blank interstimulus interval (ISI). A standard Ternus display consists of three frames, sequentially presented to the observer. As can be seen in Figure 3, Frame 1 consists of three equally spread out discs that are laid out in a horizontal fashion. Frame 2 is the blank ISI which separates Frame 1 and 3 for a variable duration. Frame 3, is simply the reverse of Frame 1 with the discs on the right hand side instead. This means that the disc on the outside of Frame 1 will now appear to be in the location that the centre disc was originally in as part of Frame 1. When these three frames are quickly presented in a sequential manner, individuals report perceiving two discrete types of motion from observing the Ternus display. These different perceptions are dependent on the duration of the ISI. 
Numerous studies have demonstrated that short ISIs cause the observer to perceive the central elements as immobile with one outside element jumping across those elements, known as element motion. These studies also support the finding that longer ISIs create the perception that the elements are all moving as one from left to right, known as group motion and that these percepts are not capable of occurring simultaneously. Research suggests that these variations in apparent motion are achieved by grouping the visual elements in such a way that there is an intertwining of the perception of motion and the perception of the objects identity. At intermediate ISIs, perceived motion is bistable, meaning that for the observer the perceptual experience interchanges between element motion and group motion in a spontaneous manner. While the bistability is present, observers are still capable of judging which percept leaves more of a lasting impression. As aforementioned the two percepts are never experienced simultaneously. This occurs due to intermediate ISIs yielding different percentages of group movement and element movement that are each dependent upon their exact values. Element motion Element motion can be observed when ISI’s are presented for less than 50 milliseconds. Though the most common time frame used to achieve this illusion is usually around 20 milliseconds. Element motion is characterized as the outer disc in the Ternus display being seen as "jumping over" the other two discs in the display, which are then considered to be the (inner) discs; placing itself in the right hand side location. This effect can be seen in motion in the example at the top of the page in Figure 1. According to Braddick from his research in 1980, element motion can be attributed to the low-level short range motion process, signalling a null or no-movement for the two elements in the middle of the display between Frame 1 and Frame 3 when short ISIs are shown. As a response to this the higher level long-range motion process passes a signal of element movement. This means that the outer element appears to jump across to the opposite side of the display and back again. Group motion When the ISI (Frame 2) in the Ternus motion display is shown for the relatively long interval of at least 50 milliseconds, group motion can be observed. The longer the inter-frame interval or ISI the more likely a group motion effect is to occur. Group motion gives the perceiver the illusion that all of the discs within the display are moving simultaneously to the right and then back again. As with element motion this effect can be seen in Figure 3 as well as demonstrated in Figure 2. Braddick in 1980 posited that the occurrence of group motion at longer ISIs can be attributed to the short-range motion process signalling motion in the central elements of the motion display, which in turn leads to the long-range process to signal that the three elements are moving in unison. Factors Since the discovery of the Ternus illusion there has been an abundance of research into how and why it occurs. As can be deemed from research above, one of the most critical factors appears to be the length of the ISI, as it seems to be a heavy determinant in which percept becomes apparent to the observer however there are many other factors implicated. 
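As a rough illustration of how the ISI determines the reported percept, the sketch below encodes the three-frame structure of the display and the approximate 50 ms boundary quoted above. It is illustrative only: real observers show a gradual, bistable transition rather than a hard threshold, and the frame duration used here is an arbitrary assumption rather than a value from the literature:

```python
def ternus_schedule(isi_ms, frame_ms=180):
    """Return the frame sequence of a three-frame Ternus display and the
    percept typically dominant at the given interstimulus interval (ISI)."""
    frames = [("frame1_LC", frame_ms), ("blank_ISI", isi_ms), ("frame3_CR", frame_ms)]
    percept = "element motion" if isi_ms < 50 else "group motion"
    return frames, percept

print(ternus_schedule(20)[1])    # element motion (short ISI)
print(ternus_schedule(100)[1])   # group motion (long ISI)
```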
A reasonable amount of the research in this area appears to be well empirically supported, such as the idea that lower level (short range processes) and higher level (long range processes) are involved in determining which illusion is perceived. A study by Scott-Samuel & Hess found that the perception of element motion is influenced by changes in the spatial appearance within the Ternus display which suggests that apparent motion is mediated entirely by a long-range motion process. Research undertaken by Kramer and Yantis in 1997 found that perceptual grouping of the elements also appears to have important implications. Kramer and his colleagues found an increase in observers perceiving group motion when the elements in the display seemed to form a logical group in contrast to when they were independently arranged. Yantis found that the perceived continuity of a briefly interrupted element in perception depends on early neural mechanisms in the visual system such as visible persistence as well as on a representation of a three-dimensional surface layout. As previously mentioned, studies have alluded to the idea that high level motion mechanisms determine the final decision in which percept shows through, however recent research by He & Ooi suggests that this final decision is also influenced by accounting for numerous grouping factors such as proximity, similarity and common surface amongst the elements in the scene. Though there are many ideas relating to causative factors, even current research seems to be lacking in a conclusive explanation for why the Ternus effect occurs and has not yet discovered exactly which mechanisms are responsible. Petersik and his team in 2006 suggested that intensive brain-imaging research on each percept is the way forward and is the most likely way to discover the answer. On the other hand, Grossberg and Rudd (1992, Psychol. Rev., 99, 78–121) have developed a neural model of motion perception that simulates many examples of long-range apparent motion, including both the Ternus and reverse-contrast Ternus illusions. See also Beta movement Stroboscopic effect Apparent motion Persistence of vision Gestalt psychology References Optical illusions
Ternus illusion
Physics
1,745
48,407,339
https://en.wikipedia.org/wiki/Nikolay%20Zheludev
Nikolay Zheludev (born 23 April 1955) is a British scientist specializing in nanophotonics, metamaterials, nanotechnology, electrodynamics, and nonlinear optics. Nikolay Zheludev is one of the founding members of the closely interlinked fields of metamaterials and nanophotonics that emerged at the dawn of the 21st century at the crossroads of optics and nanotechnology. His work focuses on developing new concepts in which nanoscale structuring of matter enhances and radically changes its optical properties. Career and research Zheludev started his academic career at the International Laser Centre at Moscow State University, where he also obtained his MSc, PhD and DSc degrees. He moved to the UK in 1991, becoming in 2007 the director of the Centre for Photonic Metamaterials and deputy director of the Optoelectronics Research Centre of the University of Southampton, one of the world's leading institutes for photonics research and the largest photonics group in the UK. Zheludev also works in Singapore, where in 2012 he founded the Centre for Disruptive Photonic Technologies at Nanyang Technological University, which he has directed since. Since 2014 he has been the founding co-director of The Photonics Institute, Singapore, Asia's leading research organization in photonics, uniting 250 faculty and researchers. Zheludev has led some major multi-million research programmes in the UK and Singapore, including the UK EPSRC NanoPhotonics Portfolio Partnership (2004–2009), the Basic Technology Programme “Nanoscope” (2008–2013), the Programme on “Nanostructured Photonic Metamaterials” (2010–2016), the Programme on “The Physics and Technology of Photonic Metadevices and Metasystems” (2015–2021), and the Singapore Ministry of Education Tier 3 Programmes on “Disruptive Photonic Technologies” (2012–2017) and “Quantum and Topological Nanophotonics” (2016–2022). He served as the editor-in-chief of the Journal of Optics from 2010 until 2020, and he is currently an Advisory Board Member for Nanophotonics and ACS Photonics. In 2007, he established the European Physical Society international biennial meeting for nanophotonics and metamaterials, the NANOMETA conference. Awards and honours Zheludev was awarded the Thomas Young Medal and Prize in 2015 for “Global Leadership and Pioneering, Seminal Work in Optical Metamaterials and Nanophotonics”. In 2022 he was awarded the Michael Faraday Medal and Prize “for international leadership, discoveries and in-depth studies of new phenomena and functionalities in photonic nanostructures and nanostructured matter”. In 2020 he was awarded the President's Science and Technology Award, the highest honour bestowed on research scientists in Singapore. Zheludev has also been awarded the Leverhulme Trust Senior Research Fellowship (2000); Senior Research Professorship of the EPSRC (2002); and The Royal Society Wolfson Research Merit Award & Fellowship (2009). He is a Fellow of the European Physical Society (EPS), The Optical Society (OSA), The Institute of Physics (IOP) and the American Physical Society (APS). In 2018 he was elected as a Fellow of the Royal Society, a fellowship of many of the world's most eminent scientists and the oldest scientific academy in continuous existence. In 2019 he was elected as a foreign member of the United States National Academy of Engineering. Personal life Nikolay was born in Moscow, Russia. His father, physicist and crystallographer Prof. Ivan S.
Zheludev, worked at the Institute of Crystallography of the Russian Academy of Sciences in Moscow and combined his academic work with the post of Deputy Director General of the International Atomic Energy Agency in Vienna, Austria. His mother, Dr. Galina Zheludeva, was a faculty member at Moscow State University. Nikolay's sister, Prof. Svetlana Zheludeva, worked at the Russian Academy of Sciences, and his brother Andrey is a professor at ETH Zurich. Nikolay is married to linguist Tanya Nousinova, daughter of playwright Ilya Nousinov. They have two sons, Ilya and Ivan. References External links 1955 births Living people Academics of the University of Southampton Optical physicists Academic staff of Nanyang Technological University Fellows of the American Physical Society Fellows of the Institute of Physics Fellows of Optica (society) Fellows of the Royal Society Foreign associates of the National Academy of Engineering Royal Society Wolfson Research Merit Award holders Metamaterials scientists Academic staff of Moscow State University
Nikolay Zheludev
Materials_science
936
2,129,346
https://en.wikipedia.org/wiki/TILLING%20%28molecular%20biology%29
TILLING (Targeting Induced Local Lesions in Genomes) is a method in molecular biology that allows directed identification of mutations in a specific gene. TILLING was introduced in 2000, using the model plant Arabidopsis thaliana, and expanded on into other uses and methodologies by a small group of scientists including Luca Comai. TILLING has since been used as a reverse genetics method in other organisms such as zebrafish, maize, wheat, rice, soybean, tomato and lettuce. Overview The method combines a standard and efficient technique of mutagenesis using a chemical mutagen such as ethyl methanesulfonate (EMS) with a sensitive DNA screening-technique that identifies single base mutations (also called point mutations) in a target gene. The TILLING method relies on the formation of DNA heteroduplexes that are formed when multiple alleles are amplified by PCR and are then heated and slowly cooled. A “bubble” forms at the mismatch of the two DNA strands, which is then cleaved by a single stranded nuclease. The products are then separated by size on several different platforms (see below). Mismatches may be due to induced mutation, heterozygosity within an individual, or natural variation between individuals. EcoTILLING is a method that uses TILLING techniques to look for natural mutations in individuals, usually for population genetics analysis. DEcoTILLING is a modification of TILLING and EcoTILLING which uses an inexpensive method to identify fragments. Since the advent of NGS sequencing technologies, TILLING-by-sequencing has been developed based on Illumina sequencing of target genes amplified from multidimensionally pooled templates to identify possible single-nucleotide changes. Single strand cleavage enzymes There are several sources for single strand nucleases. The first widely used enzyme was mung bean nuclease, but this nuclease has been shown to have high non-specific activity, and only works at low pH, which can degrade PCR products and dye-labeled primers. The original source for single strand nuclease was from CEL1, or CJE (celery juice extract), but other products have entered the market including Frontier Genomics’ SNiPerase enzymes, which have been optimized for use on platforms that use labeled and unlabeled PCR products (see next section). Transgenomic isolated the single strand nuclease protein and sells it as a recombinant form. The advantage of the recombinant form is that unlike the enzyme mixtures, it does not contain non-specific nuclease activity, which can degrade the dyes on the PCR primers. The disadvantage is a substantially higher cost. Separation of cleaved products The first paper describing TILLING used HPLC to identify mutations (McCallum et al., 2000a). The method was made more high throughput by using the restriction enzyme Cel-I combined with the LICOR gel based system to identify mutations (Colbert et al., 2001). Advantages to using this system are that mutation sites can be easily confirmed and differentiated from noise. This is because different colored dyes can be used for the forward and reverse primers. Once the cleavage products have been run on a gel, it can be viewed in separate channels, and much like an RFLP, the fragment sizes within a lane in each channel should add up to the full length product size. Advantages to the LICOR system are separation of large fragments (~ 2kb), high sample throughput (96 samples loaded on paper combs), and freeware to identify the mutations (GelBuddy). 
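The multidimensional pooling mentioned above for TILLING-by-sequencing can be illustrated with a toy two-dimensional example: individuals are arrayed on a grid, row and column pools are screened for the variant, and a hit in exactly one row pool and one column pool identifies the mutant individual. The sketch below is illustrative of the pooling logic only, not any facility's actual protocol, and all names are made up:

```python
def make_pools(grid):
    """Build row and column pools from a 2D grid of single-individual sets."""
    rows = [set().union(*row) for row in grid]
    cols = [set().union(*col) for col in zip(*grid)]
    return rows, cols

def locate_mutant(rows, cols, has_variant):
    """Return (row, col) of the mutant if exactly one row and one column pool score positive."""
    hit_rows = [i for i, pool in enumerate(rows) if has_variant(pool)]
    hit_cols = [j for j, pool in enumerate(cols) if has_variant(pool)]
    if len(hit_rows) == 1 and len(hit_cols) == 1:
        return hit_rows[0], hit_cols[0]
    return None    # ambiguous (several mutants): further deconvolution needed

# Toy example: 3 x 4 grid of individuals; individual "P7" carries the induced mutation.
grid = [[{f"P{4 * r + c}"} for c in range(4)] for r in range(3)]
rows, cols = make_pools(grid)
print(locate_mutant(rows, cols, lambda pool: "P7" in pool))    # (1, 3)
```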
Drawbacks of the LICOR system are having to pour slab gels and long run times (~4 hours). TILLING and EcoTILLING methods are now being used on capillary systems from Advanced Analytical Technologies, ABI and Beckman. Several systems can be used to separate PCR products that are not labeled with dyes. Simple agarose electrophoresis systems will separate cleavage products inexpensively and with standard lab equipment. This was used to discover SNPs in chum salmon and was referred to as DEcoTILLING. The disadvantage of this system is reduced resolution compared to polyacrylamide systems. Elchrom Scientific sells Spreadex gels, which are precast, can be high throughput and are more sensitive than standard polyacrylamide gels. Advanced Analytical Technologies Inc sells the AdvanCE FS96 dsDNA Fluorescent System, a 96-capillary electrophoresis system that has several advantages over traditional methods, including the ability to separate large fragments (up to 40 kb), no desalting or precipitation step, short run times (~30 minutes), sensitivity down to 5 pg/μl and no need for fluorescently labeled primers. TILLING centers Several TILLING centers exist around the world that focus on agriculturally important species: Rice – UC Davis (USA) Maize – Purdue University (USA) Brassica napus – University of British Columbia (CA) Brassica rapa – John Innes Centre (UK) Arabidopsis – Fred Hutchinson Cancer Research Soybean – Southern Illinois University (USA) Lotus and Medicago – John Innes Centre (UK) Wheat – UC Davis (USA) Pea, Tomato - INRA (France) Tomato - RTGR, University of Hyderabad (India) References Scientific literature External links The Arabidopsis Tilling project Introduction to TILLING Zebrafish TILLING project Sanger Institute Zebrafish Mutation Project Biochemistry detection methods Genetics techniques Molecular biology
TILLING (molecular biology)
Chemistry,Engineering,Biology
1,153
11,763,375
https://en.wikipedia.org/wiki/Concatenated%20error%20correction%20code
In coding theory, concatenated codes form a class of error-correcting codes that are derived by combining an inner code and an outer code. They were conceived in 1966 by Dave Forney as a solution to the problem of finding a code that has both exponentially decreasing error probability with increasing block length and polynomial-time decoding complexity. Concatenated codes became widely used in space communications in the 1970s. Background The field of channel coding is concerned with sending a stream of data at the highest possible rate over a given communications channel, and then decoding the original data reliably at the receiver, using encoding and decoding algorithms that are feasible to implement in a given technology. Shannon's channel coding theorem shows that over many common channels there exist channel coding schemes that are able to transmit data reliably at all rates less than a certain threshold C, called the channel capacity of the given channel. In fact, the probability of decoding error can be made to decrease exponentially as the block length of the coding scheme goes to infinity. However, the complexity of a naive optimum decoding scheme that simply computes the likelihood of every possible transmitted codeword increases exponentially with the block length, so such an optimum decoder rapidly becomes infeasible. In his doctoral thesis, Dave Forney showed that concatenated codes could be used to achieve exponentially decreasing error probabilities at all data rates less than capacity, with decoding complexity that increases only polynomially with the code block length. Description Let Cin be an [n, k, d] code, that is, a block code of length n, dimension k, minimum Hamming distance d, and rate r = k/n, over an alphabet A: Cin : A^k → A^n. Let Cout be an [N, K, D] code over an alphabet B with |B| = |A|^k symbols: Cout : B^K → B^N. The inner code Cin takes one of |A|^k = |B| possible inputs, encodes into an n-tuple over A, transmits, and decodes into one of |B| possible outputs. We regard this as a (super) channel which can transmit one symbol from the alphabet B. We use this channel N times to transmit each of the N symbols in a codeword of Cout. The concatenation of Cout (as outer code) with Cin (as inner code), denoted Cout∘Cin, is thus a code of length Nn over the alphabet A: Cout∘Cin : A^(kK) → A^(nN). It maps each input message m = (m1, m2, ..., mK) to a codeword (Cin(m'1), Cin(m'2), ..., Cin(m'N)), where (m'1, m'2, ..., m'N) = Cout(m1, m2, ..., mK). The key insight in this approach is that if Cin is decoded using a maximum-likelihood approach (thus showing an exponentially decreasing error probability with increasing length), and Cout is a code with length N = 2^(nr) that can be decoded in polynomial time of N, then the concatenated code can be decoded in polynomial time of its combined length n⋅2^(nr) = O(N⋅log(N)) and shows an exponentially decreasing error probability, even if Cin has exponential decoding complexity. This is discussed in more detail in section Decoding concatenated codes. In a generalization of the above concatenation, there are N possible inner codes Cin,i and the i-th symbol in a codeword of Cout is transmitted across the inner channel using the i-th inner code. The Justesen codes are examples of generalized concatenated codes, where the outer code is a Reed–Solomon code. Properties 1. The distance of the concatenated code Cout∘Cin is at least dD, that is, it is an [nN, kK, D'] code with D' ≥ dD. Proof: Consider two different messages m1 ≠ m2 ∈ B^K. Let Δ denote the distance between two codewords.
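The encoding structure just described (outer encoding over the alphabet B, followed by inner encoding of each outer symbol) can be made concrete with toy codes. The sketch below is illustrative rather than a practical construction: it concatenates a length-5 repetition code (outer) with a [3, 1, 3] binary repetition code (inner), so one message bit maps to 15 channel bits with distance at least dD = 15:

```python
# Toy inner code: [3, 1, 3] binary repetition (k = 1, so the outer alphabet B = {0, 1}).
# Toy outer code: [5, 1, 5] repetition over B.

def outer_encode(message_bits):
    """C_out: repeat the single message symbol N = 5 times."""
    return [message_bits[0]] * 5

def inner_encode(symbol):
    """C_in: repeat each outer symbol n = 3 times."""
    return [symbol] * 3

def concatenated_encode(message_bits):
    """Encode with the outer code, then encode each outer symbol with the inner code."""
    outer_codeword = outer_encode(message_bits)
    return [bit for sym in outer_codeword for bit in inner_encode(sym)]

print(concatenated_encode([1]))    # fifteen 1s: a codeword of length Nn = 15
```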
Then Δ(Cout(m1), Cout(m2)) ≥ D. Thus, there are at least D positions in which the sequence of N symbols of the codewords Cout(m1) and Cout(m2) differ. For these positions, denoted i, we have Δ(Cin(Cout(m1)i), Cin(Cout(m2)i)) ≥ d, where Cout(m)i denotes the i-th symbol of Cout(m). Consequently, there are at least d⋅D positions in the sequence of n⋅N symbols taken from the alphabet A in which the two codewords differ, and hence Δ((Cout∘Cin)(m1), (Cout∘Cin)(m2)) ≥ d⋅D. 2. If Cout and Cin are linear block codes, then Cout∘Cin is also a linear block code. This property can be easily shown based on the idea of defining a generator matrix for the concatenated code in terms of the generator matrices of Cout and Cin. Decoding concatenated codes A natural concept for a decoding algorithm for concatenated codes is to first decode the inner code and then the outer code. For the algorithm to be practical it must be polynomial-time in the final block length. Consider that there is a polynomial-time unique decoding algorithm for the outer code. Now we have to find a polynomial-time decoding algorithm for the inner code. It is understood that polynomial running time here means that running time is polynomial in the final block length. The main idea is that if the inner block length is selected to be logarithmic in the size of the outer code then the decoding algorithm for the inner code may run in exponential time of the inner block length, and we can thus use an exponential-time but optimal maximum likelihood decoder (MLD) for the inner code. In detail, let the input to the decoder be the vector y = (y1, ..., yN) ∈ (A^n)^N. Then the decoding algorithm is a two-step process: Use the MLD of the inner code Cin to reconstruct a set of inner code words y' = (y'1, ..., y'N), with y'i = MLD_Cin(yi), 1 ≤ i ≤ N. Run the unique decoding algorithm for Cout on y'. Now, the time complexity of the first step is O(N⋅exp(n)), where n = O(log(N)) is the inner block length. In other words, it is N^O(1) (i.e., polynomial-time) in terms of the outer block length N. As the outer decoding algorithm in step two is assumed to run in polynomial time the complexity of the overall decoding algorithm is polynomial-time as well. Remarks The decoding algorithm described above can be used to correct all errors up to less than dD/4 in number. Using minimum distance decoding, the outer decoder can correct all inputs y' with less than D/2 symbols y'i in error. Similarly, the inner code can reliably correct an input yi if less than d/2 inner symbols are erroneous. Thus, for an outer symbol y'i to be incorrect after inner decoding at least d/2 inner symbols must have been in error, and for the outer code to fail this must have happened for at least D/2 outer symbols. Consequently, the total number of inner symbols that must be received incorrectly for the concatenated code to fail must be at least d/2⋅D/2 = dD/4. The algorithm also works if the inner codes are different, e.g., for Justesen codes. The generalized minimum distance algorithm, developed by Forney, can be used to correct up to dD/2 errors. It uses erasure information from the inner code to improve performance of the outer code, and was the first example of an algorithm using soft-decision decoding. Applications Although a simple concatenation scheme was implemented already for the 1971 Mariner Mars orbiter mission, concatenated codes were starting to be regularly used for deep space communication with the Voyager program, which launched two space probes in 1977. Since then, concatenated codes became the workhorse for efficient error correction coding, and stayed so at least until the invention of turbo codes and LDPC codes.
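The two-step decoder just described can likewise be sketched for the toy repetition codes used in the previous example: exhaustive maximum-likelihood decoding of each inner block (feasible because the inner block length is small), followed by the outer code's unique decoder, which for a repetition code is a simple majority vote. This illustrates the structure only and is not an implementation of any production decoder:

```python
from collections import Counter

INNER_CODEBOOK = {0: (0, 0, 0), 1: (1, 1, 1)}   # message symbol -> inner codeword

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def inner_mld(block):
    """Brute-force MLD over the inner codebook: pick the nearest codeword's message."""
    return min(INNER_CODEBOOK, key=lambda s: hamming(INNER_CODEBOOK[s], block))

def outer_decode(symbols):
    """Unique decoder for the length-5 repetition code: majority vote."""
    return [Counter(symbols).most_common(1)[0][0]]

def concatenated_decode(received):
    """Decode 15 received bits: inner MLD per 3-bit block, then outer decoding."""
    blocks = [tuple(received[i:i + 3]) for i in range(0, len(received), 3)]
    return outer_decode([inner_mld(b) for b in blocks])

# Three bit errors (fewer than dD/4 = 3.75) are corrected back to the message [1].
print(concatenated_decode([1, 0, 1,  1, 1, 1,  0, 0, 0,  1, 1, 1,  1, 1, 0]))   # [1]
```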
Typically, the inner code is not a block code but a soft-decision convolutional Viterbi-decoded code with a short constraint length. For the outer code, a longer hard-decision block code, frequently a Reed-Solomon code with eight-bit symbols, is used. The larger symbol size makes the outer code more robust to error bursts that can occur due to channel impairments, and also because erroneous output of the convolutional code itself is bursty. An interleaving layer is usually added between the two codes to spread error bursts across a wider range. The combination of an inner Viterbi convolutional code with an outer Reed–Solomon code (known as an RSV code) was first used in Voyager 2, and it became a popular construction both within and outside of the space sector. It is still notably used today for satellite communications, such as the DVB-S digital television broadcast standard. In a looser sense, any (serial) combination of two or more codes may be referred to as a concatenated code. For example, within the DVB-S2 standard, a highly efficient LDPC code is combined with an algebraic outer code in order to remove any resilient errors left over from the inner LDPC code due to its inherent error floor. A simple concatenation scheme is also used on the compact disc (CD), where an interleaving layer between two Reed–Solomon codes of different sizes spreads errors across various blocks. Turbo codes: A parallel concatenation approach The description above is given for what is now called a serially concatenated code. Turbo codes, as described first in 1993, implemented a parallel concatenation of two convolutional codes, with an interleaver between the two codes and an iterative decoder that passes information forth and back between the codes. This design has a better performance than any previously conceived concatenated codes. However, a key aspect of turbo codes is their iterated decoding approach. Iterated decoding is now also applied to serial concatenations in order to achieve higher coding gains, such as within serially concatenated convolutional codes (SCCCs). An early form of iterated decoding was implemented with two to five iterations in the "Galileo code" of the Galileo space probe. See also Gilbert–Varshamov bound Justesen code Singleton bound Zyablov bound References Further reading External links University at Buffalo Lecture Notes on Coding Theory – Dr. Atri Rudra Error detection and correction Coding theory Finite fields Information theory
Concatenated error correction code
Mathematics,Technology,Engineering
2,225
19,649,338
https://en.wikipedia.org/wiki/Biological%20motion
Biological motion is motion that comes from actions of a biological organism. Humans and animals are able to understand those actions through experience, identification, and higher level neural processing. Humans use biological motion to identify and understand familiar actions, which is involved in the neural processes for empathy, communication, and understanding other's intentions. The neural network for biological motion is highly sensitive to the observer's prior experience with the action's biological motions, allowing for embodied learning. This is related to a research field that is broadly known as embodied cognitive science, along with research on mirror neurons. For instance, a well known example of sensitiveness to a specific type of biological motion is expert dancers observing others dancing. Compared to people who do not know how to dance, expert dancers show more sensitiveness to the biological motion from the dance style of their expertise. The same expert dancer would also show similar but less sensitivity to dance styles outside of their expertise. The differences in perception of dance motions suggests that the ability to perceive and understand biological motion is strongly influenced by the observer's experience with the action. A similar expertise effect has been observed in different types of action, such as music making, language, scientific thinking, basketball, and walking. History The phenomenon of human sensitivity to biological motion was first documented by Swedish perceptual psychologist, Gunnar Johansson, in 1973. He is best known for his experiments that used point light displays (PLDs). Johansson attached light bulbs to body parts and joints of actors performing various actions in the dark. He filmed these actions, yielding point lights from each bulb moving on a black background. Johansson found that people were able to recognize what the actors were doing when the PLD was moving, but not when it was stationary. Johansson's invention of PLDs inspired a new field of research into human perception. Modern technology to make PLDs involves the same principles, except that film has been replaced by multiple cameras attached to computers that construct a 3D representation of actors' movements, allowing for considerable control of the PLDs. Interest in biological motion was renewed with the publication of a 1996 article on mirror neurons. Mirror neurons were found to be active in an animal's brain both when that animal observed another animal making a movement and when that animal made the same movement. The mirror neurons were initially observed in the premotor cortex, however they were also found in supramarginal gyrus and temporoparietal junction, areas of the brain that is associated with biological motion processing. The coding of both visual and motor actions within same set of neurons suggests that biological motion understanding and perception is influenced by not only the visual information of the motion but also by the observer's experience with the biological motion. Today, the discovery of mirror neurons has led to an explosion of research on biological motion and action perception and understanding in research fields such as social and affective neuroscience, language, action, motion capture technology, and artificial intelligence such as androids and virtual embodied agents, and the uncanny valley phenomenon. 
Research on Biological Motion Findings from research on biological motion have shown that humans are highly sensitive to the biological motions of actions, and those observations have developed into studies on different possible factors in the perception and understanding of the biological motions of bodily actions. Through studies with point-light displays (PLDs), findings in psychology and neuroscience have grown into a sizable body of research that stretches across different fields. General Observations on Biological Motion In a PLD experiment, participants are presented with static, dynamic, or randomized dynamic white dots that correspond to light sources or motion capture markers placed on the joints involved in the actions of biological organisms. Even though individual dots in a PLD do not show any overt visual connection with other dots, observers are able to perceive cohesive biological motion of actions in dynamic PLDs.[4] Studies using PLD methods have found that people are better at identifying PLDs of their own gaits than those of others.[3] People are also able to recognize different emotions in PLDs. With special attention to body language, an observer can identify anger, sadness, and happiness. Observers can also identify the actors' gender from some actions in PLDs. Lesion Damage In a large study with stroke patients, significant regions found to be associated with deficient biological motion perception included the superior temporal sulcus and premotor cortex. The cerebellum is also involved in biological motion processing. A recent study on a patient with developmental agnosia, an impairment in recognizing objects, found that the ability to recognize the form of biological organisms through biological motion remains intact, despite a deficiency in the perception of non-biological form through motion. Neuroimaging Recent cognitive neuroscience research has begun to focus on the brain structures and neural networks that are involved in biological motion processing. The use of transcranial magnetic stimulation methods provided evidence suggesting that biological motion processing occurs outside of the MT+/V5 area, which can include both visual form and motion. The posterior superior temporal sulcus has been shown to be active during biological motion perception. Also, the premotor cortex has been shown to be active during biological motion processing, showing that the mirror neuron system is recruited for the perception and understanding of PLDs. Further evidence from another study shows that the default mode network is essential in distinguishing between biological and non-biological motion. The findings of the aforementioned studies show that biological motion perception is a process that draws on several different neural systems outside of the networks involved in the visual processing of non-biological motions and objects. Development in Children The human perception and understanding of biological motion in animal actions develops with age, usually capping at approximately five years of age. In an experiment with three-year-old, four-year-old, and five-year-old children and adults, participants were asked to identify PLDs of animal actions such as a walking human, a running or walking dog, and a flying bird. Results showed that adults and five-year-olds were able to accurately identify animal actions. However, four-year-olds and three-year-olds struggled, although four-year-olds were significantly better at identifying animal actions than three-year-olds.
This suggests that our perception and understanding of biological motion and actions goes through developmental process in human children, arriving at a performance ceiling for identifying animal actions at five years. While most animals, for example cats, tend to recognize their own species' point-light displays over others species and scrambled PLDs, the three-year-olds had the most success at identifying a walking dog PLD and had the least success with a walking human PLD. A possible explanation of this contradictory results might be because of the children's small physical stature and their resulting experiences with visual perspectives: dogs are closer in height to smaller kids, while the experience of observing and performing similar biological motions of walking human are harder to come by due to height of adults, along with their low amount of experience with walking. In the next part of the experiment, different participants were asked to identify the same point-light display animals but with static images instead of moving dots. Five-year-olds and adults gave results of chance performance, while the younger participants were omitted due to the higher error rates from the harder nature of the task. Therefore, this experiment suggests that at five years old, we are adept at identifying animal actions and visual forms with point-light displays. This study also shows that experience with biological motion is critical for our perception and understanding of actions. Language Humans seem to use similar cognitive functions in identifying real verbs and biologically possible motions. In another experiment, researchers gave participants a lexical and action decision tasks to measure how long it took them to identify whether the words were real or the scrambled PLDs an action. Participants took much longer to identify pseudo words and scrambled PLDs. The correlation in reaction time between verb words and PLD actions was found to be rather strong (r= 0.56), while the correlation between nouns and PLD actions was much lower (r= 0.31). Those findings suggest that humans use similar cognitive functions to identify biological motion and words, whether it is presented through written language or point-light displays. The researcher suggests that these findings supports a theoretical framework called embodied cognition, which suggests that the cognition of actions and words can be supported by the motor system. Psychophysics Some research looks into the differences between global and local processing of biological motion; how the whole PLD figure is processed compared to how individual dots in the PLD are processed. One study investigated both types of processing in a PLD of human walking in different directions by replacing individual dots with human images or stick figures facing in different directions. The results showed that the humans struggle to perceive the walking direction of the global PLD when the local dots does not face in the same direction, indicating that the brain uses a similar form-based mechanism for the recognition of both global and local stimuli during processing. The results also show that processing local images is an automatic process that can interfere with the subsequent processing of the global form of the walking PLD. Perception of biological motion in PLD depends both on the motions of individual dots and the configuration/orientation of the body as a whole, as well as interactions between these local and global cues. 
Similar to the Thatcher Effect in face perception, inversion of individual points is easy to detect when the entire figure is presented normally, but difficult to detect when the entire display is presented upside-down. However, recent electrophysiological work suggests that the configuration and orientation of the PLD might influence the processing of the PLD's motion in the early stages of neural processing. See also Biological motion perception Motion perception Theory of mind Mirror neurons Motor Cognition Uncanny Valley Cognitive neuroscience Embodied Cognitive Science Embodied cognition Empathy Social neuroscience Affective neuroscience Temporoparietal junction Premotor cortex Common coding theory Language processing in the brain Motion capture References Neuroscience Visual perception
Biological motion
Biology
2,006
13,660,531
https://en.wikipedia.org/wiki/Heinz%20Gerischer
Heinz Gerischer (31 March 1919 – 14 September 1994) was a German chemist who specialized in electrochemistry. He was the thesis advisor of future Nobel laureate Gerhard Ertl. The Heinz Gerischer Award of the European section of The Electrochemical Society is named in his honour. Academic career Gerischer studied chemistry at the University of Leipzig between 1937 and 1944 with a two-year interruption because of military service. In 1942, he was expelled from the German army because his mother was born Jewish; he was thus found “undeserving to have a part in the great victories of the German Army.” The war years were difficult for Gerischer, and his mother committed suicide on the eve of her 65th birthday in 1943. His only sister, Ruth (born in 1913), lived underground after escaping from a Gestapo prison and was subsequently killed in an air raid in 1944. In Leipzig, Gerischer joined the group of Karl-Friedrich Bonhöffer, a member of a distinguished family, whose members were persecuted and murdered because of their opposition to Nazi ideology. Bonhöffer descended from an illustrious chemical lineage of Wilhelm Ostwald (1853–1932) and Walther Hermann Nernst (1864–1941). He kindled Gerischer’s interest in electrochemistry, supervising his doctoral work on oscillating reactions on electrode surfaces. Gerischer completed his doctoral thesis in 1946. Gerischer followed Bonhöffer to Berlin where his Ph.D. supervisor had accepted the directorship of the Institute of Physical Chemistry at the Humboldt University of Berlin. There he also became the department head at the Kaiser Wilhelm Institute for Physical Chemistry in Berlin-Dahlem (later the Fritz Haber Institute of the Max Planck Society). Gerischer himself was appointed as an “Assistänt”; in 1970 he would return to the Fritz Haber Institute as its director. With the Berlin Blockade and the prevailing economic conditions, the post-war research was carried out under extremely difficult conditions. Gerischer met his future wife, Renate Gersdorf, at the University of Leipzig where she was doing her diploma work with Conrad Weygand. They were married in Berlin in October 1948. In 1949 Gerischer moved his young family to Göttingen to join Bonhöffer as a research associate at the newly established Max Planck Institute for Physical Chemistry. In Berlin and Göttingen and especially during the period from 1949 to 1955, Gerischer was interested in electrode kinetics and developed instruments and techniques for their study. It was he who developed the electronic potentiostat, the most widely used instrument of electrochemists. He also monitored fast electrode processes by double potential step and AC modulation. This work laid the foundation for a mechanistic interpretation of electrode reactions and had a lasting impact on our understanding of electrode kinetics. It was recognized by the newly minted Bodenstein Prize of the Deutsche Bunsen-Gesellschaft, which Gerischer and Klaus Vetter jointly received in 1953. Gerischer was appointed in 1954 to the position of Department Head and Senior Research Fellow at the Max Planck Institute for Metal Research in Stuttgart. A year later, he received the Habilitation from the University of Stuttgart for his comprehensive study of the discharge of metal ions in corrosion. The years 1954–1961 in Stuttgart were prolific and it was here that Gerischer began his work on semiconductor electrochemistry. 
It began with a short note on the electrochemistry of n-type and p-type germanium; a study that grew out of a seminar on solid state physics at the university, where the recent results of Brattain and Garrett on germanium were discussed. Gerischer recognized the theoretical implications of semiconductor electrochemistry in charge transfer and its potential applications in photochemistry and photovoltaic devices. His papers considered the differentiation between Faradaic reactions of electrons and holes (1959), the theory of electron tunneling at semiconductor-electrolyte interfaces, solution Fermi levels, and densities of states. He extended his studies to metal electrodes which he studied with his electronic potentiostat (1957), to stress corrosion (1957), to hydrogen evolution and hydrogen adatom formation (1957), to fast electrode processes (1960) and to the reaction kinetics of water dissociation, which he probed by the microwave pulse method (1961). His work was recognized by his appointment as Associate Professor (“Extraordinariat”) in Electrochemistry at the Technical University Munich in 1962–63 followed by his promotion to full professor in 1964 and his appointment as Director of the Institute of Physical Chemistry and Electrochemistry. The 1964–1968 period witnessed a flurry of studies from his group on photoelectrochemistry and photosensitization on electrode materials such as ZnO, CdS, GaAs, silver halides, anthracene, and perylene. In 1969–1970 he was named Dean of Natural Sciences at the Technical University Munich. Gerischer returned to Berlin in 1970 to assume the directorship of the Fritz Haber Institute of the Max Planck Society, where he continued his studies of electrode kinetics, semiconductor electrochemistry, and photoelectrochemistry. After becoming Emeritus Director of the Institute, he worked with Adam Heller in 1990–1991 at the University of Texas at Austin on the rate-controlling role of adsorbed oxygen in titania-assisted photocatalytic processes. His honors and awards included the Olin Palladium Award of the Electrochemical Society (1977), Centenary Lectureship, the Chemical Society, London (1979), DECHEMA Medal, Frankfurt (1982), Electrochemistry Group Medal, The Royal Society of Chemistry, London (1987), Galvani Medal, The Italian Chemical Society (1988), and the Bruno Breyer Medal, The Royal Australian Chemistry Institute (1992). Selected contributions Relating Concentration Polarizations and Electrode Potentials (Kaiser Wilhelm Inst. Berlin, 1951) “Concentration polarization due to the initial chemical reaction in electrolytes and its contribution to the stationary polarization resistance corresponding to the equilibrium potential.” Gerischer, Heinz; Vetter, Klaus J.; Z. physik. Chem.(1951)197, 92–104. Theory of AC Electrochemistry (Max Planck Inst. Phys. Chem. Göttingen, 1951) “Alternating-current polarization of electrodes with a potential-determining step for equilibrium potential.” Gerischer, H., Z. physik. Chem. (1951) 198, 286–313 Discovery of Radicals on Electrodes (Max Planck Inst. Phys. Chem., Göttingen, 1956) “Catalytic decomposition of hydrogen peroxide on metallic platinum.” Gerischer, R; Gerischer, H.; Z. physik. Chem. (1956) 6, 178–200 Observation of the Different Electrochemical Etching Rates of p and n Type Semiconductors (Max Planck Inst. Metallforsch., Stuttgart, 1957) “Solution of n- and p-germanium in aqueous electrolyte solution under the action of oxidizing agents.” Gerischer, H.; Beck, F.; Z. physik. Chem. (1957) 13, 389-95. 
Invention of the Potentiostat (Max Planck Inst. Metallforsch., Stuttgart, 1957) “The electronic potentiostat and its application in the investigation of fast electrode reactions” Gerischer, H.; Staubach, K. E.; Z. Electrochem.(1957)61, 789-94. Explanation of Stress Corrosion (Max-Planck-Inst. Metallforschung, Stuttgart, 1957) “Electrochemical processes in stress corrosion” Gerischer, H.; Werkstoffe u. Korrosion (1957)8, 394-401. Discovery of Adatoms, the Existence of Adsorbed Atoms on Electrodes (Max-Planck-Inst. Metallforschung, Stuttgart, 1958) “Mechanism of electrolytic discharge of hydrogen and adsorption energy of atomic hydrogen” Gerischer, H.; Bull. soc. chim. Belges (1958) 67, 506-27. Observation of Differently Reacting Valence and Conduction Band Carriers (Max-Planck-Inst. Metallforschung, Stuttgart, 1959) “Oxidation-reduction processes in germanium electrodes.”Beck, F.; Gerischer, H.; Z. Elektrochem.(1959) 63, 943-50. Relating Band Positions to Electrode Kinetics (Max-Planck-Inst. Metallforsch., Stuttgart, 1960) “Kinetics of oxidation-reduction reactions on metals and semiconductors. I &II General remarks on the electron transition between a solid body and a reduction-oxidation electrolyte.” Gerischer, H.; Z. physik. Chem. (1960) 26, 223-47; 325-38; (1961) 27, 48-79. On the use of single crystal electrodes (Techn. Hochsch. Munich, 1963) “Preparation of spherical single crystal electrodes for use in electrocrystallization studies." Roe, D.K., Gerischer H.; J. Electrochem. Soc.(1963) 110, 350-352. Role of Surface States in Electron Transfer at Semiconductor-Solution Interfaces (Tech. Hochsch., Munich, 1967) “Surface activity in redox reactions on semiconductors.” Gerischer, H.; Wallem Mattes; I. Zeitschrift für Physikalische Chemie (1967) 52,60-72. Dye Photosensitization of Zinc Oxide (Tech. Hochsch., Munich,1969) “Electrochemical studies on the mechanism of sensitization and supersensitization of zinc oxide single crystals.” Tributsch, H.; Gerischer, H.; Berichte der Bunsen-Gesellschaft (1969) 73,251-60. “Use of semiconductor electrodes in the study of photochemical reactions.” Tributsch, H.; Gerischer, H.; Berichte der Bunsen-Gesellschaft(1969)73,850-4. Electrochemistry of electronically excited states (Fritz-Haber-Institut der MPG, 1973) "Elektrodenreaktionen mit angeregten elektronischen Zuständen.“ Gerischer, H.; Ber. Bunsenges. Phys. Chem. (1973) 77, 284-288. Semiconductor Photodecomposition (Fritz-Haber-Institut der MPG, 1977 “On the stability of semiconductor electrodes against photodecomposition”. Gerischer H. J. Electroanal. Chem. (1977) 82, 133-143. Relating Fermi Levels to Redox Potentials (Fritz-Haber-Inst., Max-Planck-Ges., Berlin, 1983)“Fermi levels in electrolytes and the absolute scale of redox potentials.“ Gerischer, H.; Ekardt, W.; Appl. Phy.s Lett.(1983) 43, 393-5. References External links 1919 births 1994 deaths 20th-century German chemists Scientists from Wittenberg Electrochemists Academic staff of the Technical University of Munich Max Planck Institute directors Leipzig University alumni Max Planck Society people Academic staff of the University of Stuttgart
Heinz Gerischer
Chemistry
2,393
18,586,236
https://en.wikipedia.org/wiki/CommScope
CommScope Holding Company, Inc. is an American network infrastructure provider based in Claremont, North Carolina. CommScope has more than 22,000 employees. The company joined the Nasdaq stock exchange on October 25, 2013. CommScope designs and manufactures network infrastructure products. It has the following business segments: broadband networks, venue and campus networks, and outdoor wireless networks. History CommScope was originally a product line of Superior Continental Cable, which was founded in 1953 in Hickory, North Carolina. In 1961, Superior created a division called Comm/Scope, which developed CATV systems and sold a coaxial cable named CommScope. In 1967, Superior was acquired by Continental Telephone Company, with CommScope becoming a division of Continental. In 1975, Frank Drendel headed a team charged with selling the product line. Drendel and Jearld Leonhardt founded CommScope in August 1976 after raising $5.1 million to purchase the CommScope product line. Two years later, CommScope and Valtech merged under the Valtech name. In 1979, Valtech donated fiber-optic line and equipment to link the U.S. House of Representatives to the C-SPAN studios, enabling live broadcasting of U.S. Congressional proceedings for the first time. In the 1980s, Valtech was sold to M/A-COM, and CommScope became part of M/A-COM's Cable Home Group. In 1983, CommScope formed the Network Cable division for the local area network, data communications, television-receive-only, and specialized wire markets. In 1986, M/A-COM sold the Cable Home Group to General Instrument Corporation, and CommScope became a division of General Instrument. In 1997, General Instrument split into three independent, publicly traded companies, with its cable operation spun off as CommScope. At the time, CommScope had annual revenues of $560 million and was the largest provider of coaxial cable to cable TV operators. In 2000, CommScope opened its new global headquarters in Hickory, North Carolina. In 2004, CommScope acquired Avaya's Connectivity Solutions cabling unit, inheriting the SYSTIMAX brand, which was perhaps best known for its enterprise cabling systems. Avaya's Carrier Solutions, which offered products designed for switching and transmission applications in telephone central offices and secure environmental enclosures, also became part of CommScope. This acquisition doubled CommScope's size. In 2007, CommScope acquired the global wireless infrastructure provider Andrew Corporation, which would help CommScope meet demand from mobile phone companies. In 2008, CommScope was chosen to provide the Dallas Cowboys with the connectivity for their new stadium starting with the 2009 NFL season, using over 5 million feet of copper and fiber-optic cabling. In 2011, The Carlyle Group acquired CommScope. This acquisition made CommScope privately owned by The Carlyle Group and removed it from the New York Stock Exchange. Eddie Edwards was appointed president and chief executive officer, succeeding Frank Drendel, who had served as CommScope's CEO since the company's founding in 1976. Drendel continued as the chairman of the board. On October 25, 2013, CommScope had its initial public offering on the NASDAQ, being listed as COMM. In February 2016, it was announced that the Daytona International Speedway had a new wiring infrastructure from CommScope. In June 2016, CommScope was signed by the Carolina Panthers to upgrade the wireless and wired communications at the team's Bank of America Stadium.
In November 2016, the Carlyle Group announced the sale of its remaining shares of CommScope. In 2019, for the Hong Kong-Zhuhai-Macao Bridge, a 55 kilometer bridge-tunnel system, CommScope supplied over 110 multiband antennas supporting 2G, 3G, and 4G network bands. On October 1, 2020, CommScope announced that Charles Treadway would succeed Eddie Edwards as the company's new president and CEO. The company also announced that Bud Watts would replace Frank Drendel as chairman, with Drendel being named chairman emeritus. In October 2023, CommScope sold its home networks division to Vantiva for a 25% stake in Vantiva. Acquisitions In 2004, the company acquired Avaya's connectivity business, including the legacy intellectual property and patents from Western Electric, AT&T, Lucent Technologies, and Avaya. In June 2007, CommScope acquired Andrew Corporation for $2.6 billion. Andrew's products included antennas, cables, amplifiers, repeaters, transceivers, as well as software and training for the broadband and cellular industries. In 2015, CommScope acquired TE Connectivity's Broadband Network Solutions (BNS) division. Later in 2015 CommScope acquired Airvana, a privately held company specializing in small cell solutions for wireless networks. On April 4, 2019, CommScope completed the acquisition of Arris International, a telecommunications equipment manufacturing company and owner of Ruckus Networks. Both Arris and Ruckus were made brands of CommScope. In October 2020, CommScope acquired the patent portfolio for virtual radio access networks (vRAN) from Phluido, a company specializing in RAN virtualization and disaggregation. References External links Telecommunications companies of the United States Telecommunications companies established in 1976 Telecommunications equipment vendors Computer companies of the United States Computer hardware companies Electronics companies established in 1976 American companies established in 1976 Manufacturing companies based in North Carolina Companies listed on the Nasdaq 1976 establishments in North Carolina 2013 initial public offerings Hickory, North Carolina Corporate spin-offs The Carlyle Group companies
CommScope
Technology
1,161
22,971,835
https://en.wikipedia.org/wiki/Just%20and%20Unjust%20Wars
Just and Unjust Wars: A Moral Argument with Historical Illustrations is a 1977 book by the philosopher Michael Walzer. Published by Basic Books, it is still in print, now as part of the Basic Books Classics Series. A second edition was published in 1992, a third edition in 2000, a fourth edition in 2006, and a fifth edition in 2015. The book resulted from Walzer's reflections on the Vietnam War. Summary Walzer draws on medieval just war theory to explore the reasons that can justify war (jus ad bellum) and the ethical limits on the conduct of war (jus in bello) in an attempt to work out a modern, secular theory of just war. Walzer argues that in war not all action is equal: a just war exists, and it must be conducted according to a strict set of rules. Reception Just and Unjust Wars has, together with Spheres of Justice (1983) and Interpretation and Social Criticism (1987), been identified as one of Walzer's most important works by the philosopher Will Kymlicka in The Oxford Companion to Philosophy (2005). The work is considered a standard in the philosophical literature on the ethics of warfare, with the Stanford Encyclopedia of Philosophy calling Just and Unjust Wars "extraordinarily influential" and "the major contemporary statement of just war theory." The book has been translated into French by Simone Chambon and Anne Wicke and published in the collection edited by Claude Lefort at Belin. References 1977 non-fiction books Basic Books books Books by Michael Walzer English-language non-fiction books Ethics books Political books Just war theory
Just and Unjust Wars
Biology
325
4,169,622
https://en.wikipedia.org/wiki/Dynamic%20data
In data management, dynamic data or transactional data is information that is periodically updated, meaning it changes asynchronously over time as new information becomes available. The concept is important in data management, since the time scale of the data determines how it is processed and stored. Data that is not dynamic is considered either static (unchanging) or persistent, which is data that is infrequently accessed and not likely to be modified. Dynamic data is also different from streaming data, which is a constant flow of information. Dynamic data may be updated at any time, with periods of inactivity in between. Examples In enterprise data management, dynamic data is likely to be transactional, but it is not limited to financial or business transactions. It may also include engineering transactions, such as a revised schematic diagram or architectural document. In this context, static data is either unchanged or so rarely changed that it can be kept in remote ("basement" or far) storage, whereas dynamic data is reused or changed frequently and therefore requires online ("office" or near) storage. An original copy of a wiring schematic will change from dynamic to static as new versions make it obsolete. It is still possible to reuse the original, but in the normal course of business there is rarely a need to access obsoleted data. The current version of the wiring schematic is considered dynamic, or changeable. These two contexts for "dynamic" are similar, but differ in their time scale. Dynamic data can become static. Persistent data is defined, or is likely to be defined, in the context of the execution of a program, whereas static data is defined in the context of the business's historical data, regardless of any one application or program. The "dynamic" data is the new, updated, revised, or deleted data in both cases, but over different time horizons. A paycheck stub, for example, is dynamic data for a day or a week; it then becomes read-only and rarely read, which makes it static, persistent, or both. See also Transaction data Computer data
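A rough illustration of the near versus far storage decision described above is sketched below in Python; the thresholds, field names, and classification rule are invented for the example and are not part of any standard or product.

```python
from datetime import datetime, timedelta

NOW = datetime(2024, 1, 1)

def classify(last_modified, last_accessed,
             dynamic_window=timedelta(days=30),
             access_window=timedelta(days=365)):
    """Toy rule: recently modified data is dynamic (near/online storage);
    data that is neither modified nor accessed for a long time is
    static/persistent (far/basement storage)."""
    if NOW - last_modified <= dynamic_window:
        return "dynamic: keep in near (online) storage"
    if NOW - last_accessed > access_window:
        return "static/persistent: move to far (basement) storage"
    return "static: read-only but still read occasionally"

# A paycheck stub: edited once, then only read now and again.
print(classify(last_modified=datetime(2022, 6, 1),
               last_accessed=datetime(2023, 12, 20)))
```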
Dynamic data
Technology
419
14,404,325
https://en.wikipedia.org/wiki/Delco%20Carousel
The Delco Carousel — proper name Carousel IV — was an inertial navigation system (INS) for aircraft developed by Delco Electronics. Before the advent of sophisticated flight management systems, Carousel IV allowed pilots to automate navigation of an aircraft along a series of waypoints that they entered via a control console in the cockpit. Carousel IV consisted of an inertial measurement unit (IMU) as its position reference, a digital computer to compute the navigation solution, and a control panel mounted in an aircraft's cockpit. It was used for long overwater flights and for navigation over the North Pole. Many aircraft were equipped with dual or triple Carousels for redundancy. Operation was relatively simple: a pilot or flight engineer would enter the individual waypoints as latitude and longitude coordinates, and then enter the starting location in latitude and longitude. The system used spinning-mass gyroscopes and proof-mass accelerometers to measure movement from the start point. An involved calculation, based on repeated sampling of those sensors, determined the current position relative to the surface of the Earth. The Carousel IV system derives its name from the fact that the inertial reference platform was rotated 360° every 60 seconds as a technique to reduce drift and increase accuracy by countering systematic errors. Low-drift operation was aided by maintaining the gyroscopes and accelerometers at a constant temperature of 60 °C. The elevated temperature was maintained whenever the system was switched on in the 'Standby', 'Align', 'Navigate' or 'Attitude' mode, as selected on the Control Display Unit (CDU). During the 1982 Falklands War, RAF Avro Vulcans were fitted with Carousels from RAF Vickers VC10s to enable Operation Black Buck. Applications Military: C-5A/B, KC-135 and its derivatives, C-141 Missiles: Thor IRBM, Titan II ICBM, Titan III heavy-lift launch system Spacecraft: Apollo Command Module (IMU only) Commercial: Boeing 747 (early variants), Airbus A300 (early variants), Concorde, McDonnell Douglas DC-10, Vickers VC-10 References Avionics
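The dead-reckoning principle described above, integrating gyroscope and accelerometer measurements forward from a known starting fix, can be sketched in simplified planar form. The following Python example is a generic illustration only: the real Carousel IV computation also handled platform rotation, Earth rate, gravity compensation, and Schuler tuning, none of which are modelled here, and the sample data are invented.

```python
import math

def dead_reckon(samples, dt, lat0, lon0, heading0):
    """Integrate body-frame accelerations (m/s^2) and yaw rate (rad/s)
    into a position estimate, starting from a known fix.
    samples: iterable of (accel_forward, accel_right, yaw_rate) tuples."""
    earth_radius = 6_371_000.0           # metres, spherical-Earth shortcut
    heading = heading0                   # radians, 0 = north, pi/2 = east
    vn = ve = 0.0                        # velocity north / east (m/s)
    north = east = 0.0                   # displacement from the fix (m)
    for a_fwd, a_rgt, yaw_rate in samples:
        heading += yaw_rate * dt                          # gyro integration
        an = a_fwd * math.cos(heading) - a_rgt * math.sin(heading)
        ae = a_fwd * math.sin(heading) + a_rgt * math.cos(heading)
        vn += an * dt                                     # first integration
        ve += ae * dt
        north += vn * dt                                  # second integration
        east += ve * dt
    lat = lat0 + math.degrees(north / earth_radius)
    lon = lon0 + math.degrees(east / (earth_radius * math.cos(math.radians(lat0))))
    return lat, lon

# 10 seconds of gentle forward acceleration while heading due east.
samples = [(0.5, 0.0, 0.0)] * 100
print(dead_reckon(samples, dt=0.1, lat0=52.0, lon0=0.0, heading0=math.radians(90)))
```

Open-loop integration of this kind accumulates sensor drift, which is one reason the Carousel rotated its platform and held the sensors at a constant temperature to cancel systematic errors.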
Delco Carousel
Technology
445
50,783,862
https://en.wikipedia.org/wiki/Energy%20Conversion%20and%20Management
Energy Conversion and Management is a biweekly peer-reviewed scientific journal covering research on energy generation, utilization, conversion, storage, transmission, conservation, management, and sustainability that was established in 1979. It is published by Elsevier and the editor-in-chief is Moh'd Ahmad Al-Nimr (Jordan University of Science and Technology). Abstracting and indexing The journal is abstracted and indexed in Current Contents/Engineering, Computing & Technology, Science Citation Index Expanded, and Scopus. According to the Journal Citation Reports, the journal has a 2021 impact factor of 11.533. References External links Elsevier academic journals Academic journals established in 1979 English-language journals Energy and fuel journals Biweekly journals
Energy Conversion and Management
Environmental_science
150
19,170,396
https://en.wikipedia.org/wiki/IT%20baseline%20protection
The IT baseline protection (German: IT-Grundschutz) approach from the German Federal Office for Information Security (BSI) is a methodology to identify and implement computer security measures in an organization. The aim is the achievement of an adequate and appropriate level of security for IT systems. To reach this goal the BSI recommends "well-proven technical, organizational, personnel, and infrastructural safeguards". Organizations and federal agencies demonstrate their systematic approach to securing their IT systems (e.g. an information security management system) by obtaining an ISO/IEC 27001 certificate on the basis of IT-Grundschutz. Overview baseline security The term baseline security signifies standard security measures for typical IT systems. It is used in various contexts with somewhat different meanings. For example: Microsoft Baseline Security Analyzer: software tool focused on Microsoft operating system and services security Cisco security baseline: vendor recommendation focused on network and network device security controls Nortel baseline security: set of requirements and best practices with a focus on network operators ISO/IEC 13335-3 defines a baseline approach to risk management; this standard has been replaced by ISO/IEC 27005, but the baseline approach has not yet been carried over into the 2700x series. There are numerous internal baseline security policies for organizations. The German BSI has a comprehensive baseline security standard that is compliant with the ISO/IEC 27000 series. BSI IT baseline protection The foundation of an IT baseline protection concept is initially not a detailed risk analysis. It proceeds from overall hazards. Consequently, sophisticated classification according to damage extent and probability of occurrence is dispensed with. Three protection needs categories are established. With their help, the protection needs of the object under investigation can be determined. Based on these, appropriate personnel, technical, organizational and infrastructural security measures are selected from the IT Baseline Protection Catalogs. The BSI's IT Baseline Protection Catalogs offer a "cookbook recipe" for a normal level of protection. Besides probability of occurrence and potential damage extents, implementation costs are also considered. By using the Baseline Protection Catalogs, costly security analyses requiring expert knowledge can be dispensed with, since overall hazards are worked with from the beginning. It is possible for the relative layperson to identify the measures to be taken and to implement them in cooperation with professionals. The BSI grants a baseline protection certificate as confirmation of the successful implementation of baseline protection. In stages 1 and 2, this is based on self-declaration; in stage 3, an independent, BSI-licensed auditor completes an audit. Internationalization of the certification process has been possible since 2006. ISO/IEC 27001 certification can occur simultaneously with IT baseline protection certification (the ISO/IEC 27001 standard is the successor of BS 7799-2). This process is based on the new BSI security standards and reflects a development that has been under way for some time. Corporations having themselves certified under the BS 7799-2 standard are obliged to carry out a risk assessment. To make this easier, most resort to the protection needs analysis pursuant to the IT Baseline Protection Catalogs.
The advantage is not only conformity with the strict BSI standard, but also the attainment of BS 7799-2 certification. Beyond this, the BSI offers a few aids such as the policy template and the GSTOOL. One data protection component is available; it was produced in cooperation with the German Federal Commissioner for Data Protection and Freedom of Information and the state data protection authorities and integrated into the IT Baseline Protection Catalog. This component is not, however, considered in the certification process. Baseline protection process The following steps are taken pursuant to the baseline protection process during structure analysis and protection needs analysis: the IT network is defined; an IT structure analysis is carried out; a protection needs determination is carried out; a baseline security check is carried out; IT baseline protection measures are implemented. Creation occurs in the following steps: IT structure analysis (survey), assessment of protection needs, selection of actions, and comparison of nominal and actual states. IT structure analysis An IT network includes the totality of infrastructural, organizational, personnel, and technical components serving the fulfillment of a task in a particular information processing application area. An IT network can encompass the entire IT of an institution, or an individual division that is partitioned by organizational structures (for example, a departmental network) or by shared IT applications (for example, a personnel information system). It is necessary to analyze and document the information technology structure in question in order to generate an IT security concept and especially to apply the IT Baseline Protection Catalogs. Because IT systems today are usually heavily networked, a network topology plan offers a starting point for the analysis. The following aspects must be taken into consideration: the available infrastructure; the organizational and personnel framework for the IT network; networked and non-networked IT systems employed in the IT network; the communications connections between IT systems and to the outside; and the IT applications run within the IT network. Protection needs determination The purpose of the protection needs determination is to investigate what protection is sufficient and appropriate for the information and information technology in use. In this connection, the damage to each application and to the processed information that could result from a breach of confidentiality, integrity or availability is considered. Important in this context is a realistic assessment of the possible follow-on damage. A division into the three protection needs categories "low to medium", "high" and "very high" has proved itself of value. "Public", "internal" and "secret" are often used for confidentiality. Modelling Heavily networked IT systems typically characterize information technology in government and business these days. As a rule, therefore, it is advantageous to consider the entire IT system, and not just individual systems, within the scope of an IT security analysis and concept. To be able to manage this task, it makes sense to logically partition the entire IT system into parts and to consider each part, or even each IT network, separately. Detailed documentation about its structure is a prerequisite for the use of the IT Baseline Protection Catalogs on an IT network. This can be achieved, for example, via the IT structure analysis described above.
The IT Baseline Protection Catalogs' components must ultimately be mapped onto the components of the IT network in question in a modelling step. Baseline security check The baseline security check is an organisational instrument offering a quick overview of the prevailing IT security level. With the help of interviews, the status quo of an existing IT network (as modelled by IT baseline protection) relative to the security measures implemented from the IT Baseline Protection Catalogs is investigated. The result is a catalog in which the implementation status "dispensable", "yes", "partly", or "no" is entered for each relevant measure. By identifying measures that are not yet, or only partially, implemented, improvement options for the security of the information technology in question are highlighted. The baseline security check gives information about measures which are still missing (nominal vs. actual comparison). From this follows what remains to be done to achieve baseline protection. Not all measures suggested by this baseline check need to be implemented; peculiarities are to be taken into account. It could be that several more or less unimportant applications with lesser protection needs are running on a server. In their totality, however, these applications are to be provided with a higher level of protection. This is called the cumulation effect. The applications running on a server determine its need for protection. Several IT applications can run on an IT system. When this occurs, the application with the greatest need for protection determines the IT system's protection category. Conversely, it is conceivable that an IT application with great protection needs does not automatically transfer this to the IT system. This may happen because the IT system is configured redundantly, or because only an inconsequential part is running on it. This is called the distribution effect; it is the case, for example, with clusters. The baseline security check maps baseline protection measures. This level suffices for low to medium protection needs, which comprise about 80% of all IT systems according to BSI estimates. For systems with high to very high protection needs, risk analysis-based information security concepts, such as those based on the ISO/IEC 27000-series standards, are usually used. IT Baseline Protection Catalog and standards During its 2005 restructuring and expansion of the IT Baseline Protection Catalogs, the BSI separated the methodology from the IT Baseline Protection Catalog. The BSI 100-1, BSI 100-2, and BSI 100-3 standards contain information about the construction of an information security management system (ISMS), the methodology or basic protection approach, and the creation of a security analysis for elevated and very elevated protection needs building on a completed baseline protection investigation. BSI 100-4, the "Emergency management" standard, is currently in preparation. It contains elements from BS 25999 and ITIL Service Continuity Management combined with the relevant IT Baseline Protection Catalog components, as well as essential aspects of appropriate business continuity management (BCM). Implementing these standards makes certification possible pursuant to BS 25999-2. The BSI has submitted the BSI 100-4 standard's design for online commentary. In this way the BSI brings its standards into line with international norms such as ISO/IEC 27001.
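The maximum principle and the nominal-versus-actual comparison described above lend themselves to a compact illustration. The following Python sketch is not a BSI tool; the application names, measures, and statuses are invented, and it only shows how an IT system's protection category can be derived from the applications running on it and how missing safeguards can be surfaced.

```python
# Protection needs categories in ascending order, as named in the text above.
CATEGORIES = ["low to medium", "high", "very high"]

def system_protection_need(app_needs):
    """Maximum principle: the application with the greatest protection need
    determines the protection category of the IT system hosting it."""
    return max(app_needs, key=CATEGORIES.index)

def baseline_check(required_measures, implemented):
    """Nominal-versus-actual comparison: report the implementation status of
    every relevant measure from a (hypothetical) catalog; unknown means 'no'."""
    return {m: implemented.get(m, "no") for m in required_measures}

# Invented example data.
server_apps = {"payroll system": "high", "intranet wiki": "low to medium"}
print(system_protection_need(server_apps.values()))      # -> 'high'

measures = ["secure server room", "patch management", "backup concept"]
status = {"secure server room": "yes", "backup concept": "partly"}
print(baseline_check(measures, status))
```

The cumulation and distribution effects described above are the documented exceptions to this simple maximum rule: many low-need applications taken together may justify a higher category, while redundancy may justify a lower one.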
Literature BSI:IT Baseline Protection Guidelines (pdf, 420 kB) BSI: IT Baseline Protection Cataloge 2007 (pdf) BSI: BSI IT Security Management and IT Baseline Protection Standards Frederik Humpert: IT-Grundschutz umsetzen mit GSTOOL. Anleitungen und Praxistipps für den erfolgreichen Einsatz des BSI-Standards, Carl Hanser Verlag München, 2005. () Norbert Pohlmann, Hartmut Blumberg: Der IT-Sicherheitsleitfaden. Das Pflichtenheft zur Implementierung von IT-Sicherheitsstandards im Unternehmen, References External links Federal Office for Information Security IT Security Yellow Pages IT Baseline protection tools Open Security Architecture- Controls and patterns to secure IT systems Information technology management Computer security
IT baseline protection
Technology
2,087
168,340
https://en.wikipedia.org/wiki/Sewage%20sludge
Sewage sludge is the residual, semi-solid material that is produced as a by-product during sewage treatment of industrial or municipal wastewater. The term "septage" also refers to sludge from simple wastewater treatment but is connected to simple on-site sanitation systems, such as septic tanks. After treatment, and dependent upon the quality of sludge produced (for example with regard to heavy metal content), sewage sludge is most commonly either disposed of in landfills, dumped in the ocean or applied to land for its fertilizing properties, as pioneered by the product Milorganite. The term "biosolids" is often used as an alternative to the term sewage sludge in the United States, particularly in conjunction with reuse of sewage sludge as fertilizer after sewage sludge treatment. Biosolids can be defined as organic wastewater solids that can be reused after stabilization processes such as anaerobic digestion and composting. Opponents of sewage sludge reuse reject this term as a public relations term. Treatment process Sewage sludge treatment describes the processes used to manage and dispose of the sludge produced during the treatment of wastewater. Sewage sludge is produced from the treatment of wastewater in sewage treatment plants and consists of two basic forms — raw primary sludge and secondary sludge, also known as activated sludge in the case of the activated sludge process. Sewage sludge is usually treated by one or several of the following treatment steps: lime stabilization, thickening, dewatering, drying, anaerobic digestion or composting. Some treatment processes, such as composting and alkaline stabilization, that involve significant amendments may affect contaminant strength and concentration: depending on the process and the contaminant in question, treatment may decrease or in some cases increase the bioavailability and/or solubility of contaminants. Among sludge stabilization processes, anaerobic and aerobic digestion appear to be the most commonly used methods in the EU-27. When fresh sewage or wastewater enters a primary settling tank, approximately 50% of the suspended solid matter will settle out in an hour and a half. This collection of solids is known as raw sludge or primary solids and is said to be "fresh" before anaerobic processes become active. The sludge will become putrescent in a short time once anaerobic bacteria take over, and must be removed from the sedimentation tank before this happens. This is accomplished in one of two ways. Most commonly, the fresh sludge is continuously extracted from the bottom of a hopper-shaped tank by mechanical scrapers and passed to separate sludge-digestion tanks. In some treatment plants an Imhoff tank is used: sludge settles through a slot into the lower story or digestion chamber, where it is decomposed by anaerobic bacteria, resulting in liquefaction and reduced volume of the sludge. The secondary treatment process also generates a sludge largely composed of bacteria and protozoa with entrained fine solids, and this is removed by settlement in secondary settlement tanks. Both sludge streams are typically combined and are processed by anaerobic or aerobic treatment processes at either elevated or ambient temperatures. After digesting for an extended period, the result is called "digested" sludge and may be disposed of by drying and then landfilling. Following treatment, sewage sludge is either landfilled, dumped in the ocean, incinerated, applied to agricultural land or, in some cases, retailed or given away for free to the general public.
According to a review article published in 2012, sludge reuse (including direct agricultural application and composting) was the predominant choice for sludge management in the EU-15 (53% of produced sludge), followed by incineration (21% of produced sludge). On the other hand, the most common disposal method in the EU-12 countries was landfilling. Quantities produced The amount of sewage sludge produced is proportional to the amount and concentration of wastewater treated, and it also depends on the type of wastewater treatment process used. It can be expressed as kg dry solids per cubic metre of wastewater treated. The total sludge production from a wastewater treatment process is the sum of sludge from primary settling tanks (if they are part of the process configuration) plus excess sludge from the biological treatment step. For example, primary sedimentation produces about 110–170 kg/ML of so-called primary sludge, with a value of 150 kg/ML regarded as being typical for municipal wastewater in the U.S. or Europe. The sludge production is expressed as kg of dry solids produced per ML of wastewater treated; one megalitre (ML) is 10³ m³. Of the biological treatment processes, the activated sludge process produces about 70–100 kg/ML of waste activated sludge, and a trickling filter process produces slightly less sludge from the biological part of the process: 60–100 kg/ML. This means that the total sludge production of an activated sludge process that uses primary sedimentation tanks is in the range of 180–270 kg/ML, being the sum of primary sludge and waste activated sludge. United States municipal wastewater treatment plants produced about 7.7 million dry tons of sewage sludge in 1997, and about 6.8 million dry tons in 1998, according to EPA estimates. As of 2004, about 60% of all sewage sludge was applied to land as a soil amendment and fertilizer for growing crops. A review article published in 2012 reported that a total of about 10.1 million tonnes of dry solids (DS) per year was produced in the EU-27 countries. As of 2023, the EU produced 2 to 3 million tons of sludge each year. Worldwide, it is estimated that as much as 75 million Mg of dry sewage sludge is produced per year. Production of sewage sludge can be reduced by conversion from flush toilets to dry toilets such as urine-diverting dry toilets and composting toilets. Disposal Landfill Sewage sludge deposition in landfills can circulate human-virulent species of Cryptosporidium and Giardia pathogens. Sonication and quicklime stabilization are the most effective at inactivating these pathogens; microwave energy disintegration and top-soil stabilization are less effective. A Texas county has launched a first-of-its-kind criminal investigation into waste management giant Synagro over PFAS-contaminated sewage sludge it is selling to Texas farmers as a cheap alternative to fertilizer. As of 2023, 11% of sludge produced in the EU was disposed of in landfills. The EU is attempting to phase out the disposal of sludge in landfills. Ocean dumping It used to be common practice to dump sewage sludge into the ocean; however, this practice has stopped in many nations due to environmental concerns as well as domestic and international laws and treaties. Ronald Reagan signed the law that prohibited ocean dumping as a means of disposal of sewage sludge in the US in 1988. Incineration Sludge can also be incinerated in sludge incineration plants, which comes with its own set of environmental concerns (air pollution, disposal of the ash).
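The per-volume figures quoted under "Quantities produced" above can be turned into a small worked example. The Python sketch below reuses only the ranges stated in the text (primary sludge 110–170 kg/ML and waste activated sludge 70–100 kg/ML); the plant flow of 50 ML/day is an arbitrary assumption for illustration.

```python
def daily_sludge(flow_ml_per_day, primary_kg_per_ml, was_kg_per_ml):
    """Total dry solids produced per day (kg DS/day) by a plant with primary
    settling followed by an activated sludge stage."""
    return flow_ml_per_day * (primary_kg_per_ml + was_kg_per_ml)

flow = 50                                  # ML/day, an arbitrary mid-sized plant
low = daily_sludge(flow, 110, 70)          # lower ends of the quoted ranges
high = daily_sludge(flow, 170, 100)        # upper ends of the quoted ranges
print(f"{low:.0f} to {high:.0f} kg dry solids per day")   # 9000 to 13500
```

This is consistent with the 180–270 kg/ML total quoted above for an activated sludge plant that uses primary sedimentation.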
Pyrolysis of the sludge to create syngas and potentially biochar is possible, as is combustion of biofuel produced from drying sewage sludge, or incineration in a waste-to-energy facility for direct production of electricity and steam for district heating or industrial uses. Thermal processes can greatly reduce the volume of the sludge, as well as achieve remediation of all or some of the biological concerns. Direct waste-to-energy incineration and complete combustion systems (such as the Gate 5 Energy System) require multi-step cleaning of the exhaust gas to ensure no hazardous substances are released. In addition, the ash produced by incineration or incomplete combustion processes (such as fluidized-bed dryers) may be difficult to use without subsequent treatment because of its high heavy metal content. Solutions include leaching the ashes to remove heavy metals; alternatively, in ash produced by a complete-combustion process, or in biochar produced by a pyrolytic process, the heavy metals may be fixed in place and the material may be readily usable as a LEED-preferred additive to concrete or asphalt. Examples of other ways to use dried sewage sludge as an energy resource include the Gate 5 Energy System, a process that powers a steam turbine using heat from burning milled and dried sewage sludge, and the combining of dried sewage sludge with coal in coal-fired power stations. In both cases this allows electricity to be produced with lower carbon dioxide emissions than conventional coal-fired power stations. As of 2023, 27% of sludge produced in the EU was incinerated. Use Land application Biosolids is a term widely used to denote the byproduct of domestic and commercial sewage and wastewater treatment that is to be used in agriculture. National regulations governing the land application of treated sewage sludge differ widely; in the US, for example, there are widespread disputes about the practice. Depending on their level of treatment and resultant pollutant content, biosolids can be used in regulated applications for non-food agriculture, food agriculture, or distribution for unlimited use. Treated biosolids can be produced in cake, granular, pellet, or liquid form and are spread over land before being incorporated into the soil or injected directly into the soil by specialist contractors. Such use was pioneered by the production of Milorganite in 1926. Use of sewage sludge has been shown to increase the levels of soil-available phosphorus and soil salinity. The findings of a 20-year field study of air, land, and water in Arizona concluded that the use of biosolids is sustainable and improves the soil and crops. Other studies report that plants take up large quantities of heavy metals and toxic pollutants, which are retained in produce that is then consumed by humans. A PhD thesis studying the addition of sludge to neutralize soil acidity concluded that the practice is not recommended when large amounts are used, because the sludge produces acids as it oxidizes. Studies have indicated that pharmaceuticals and personal care products, which often adsorb to sludge during wastewater treatment, can persist in agricultural soils following biosolid application. Some of these chemicals, including the potential endocrine disruptor triclosan, can also travel through the soil column and leach into agricultural tile drainage at detectable levels.
Other studies, however, have shown that these chemicals remain adsorbed to surface soil particles, making them more susceptible to surface erosion than infiltration. These studies are also mixed in their findings regarding the persistence of chemicals such as triclosan, triclocarban, and other pharmaceuticals. The impact of this persistence in soils is unknown, but the link to human and land animal health is likely tied to the capacity for plants to absorb and accumulate these chemicals in their consumed tissues. Studies of this kind are in early stages, but evidence of root uptake and translocation to leaves did occur for both triclosan and triclocarban in soybeans. This effect was not present in corn when tested in a different study. A cautionary approach to land application of biosolids has been advocated by some for regions where soils have lower capacities for toxics absorption or due to the presence of unknowns in sewage biosolids. In 2007 the Northeast Regional Multi-State Research Committee (NEC 1001) issued conservative guidelines tailored to the soils and conditions typical of the northeastern US. Use of sewage sludge is prohibited for produce to be labeled USDA-certified organic. In 2014 the United States grocery chain Whole Foods banned produce grown in sewage sludge. Treated sewage sludge has been used in the UK, Europe and China agriculturally for more than 80 years, though there is increasing pressure in some countries to stop the practice of land application due to farm land contamination and negative public opinion. In the 1990s, there was pressure in some European countries to ban the use of sewage sludge as a fertilizer. Switzerland, Sweden, Austria, and others introduced a ban. Still, the dominant method for disposal of sewage sludge in the EU is via application to agricultural lands. As of 2023, 40% of sludge produced in the EU was used on agricultural land. Since the 1960s there has been cooperative activity with industry to reduce the inputs of persistent substances from factories. This has been very successful and, for example, the content of cadmium in sewage sludge in major European cities is now only 1% of what it was in 1970. Transformation into products Sewage sludge is an agglomeration of concentrated wastes, and therefore it contains many potentially extractable and useable components. These can include using sludge to produce energy, create carbon-based components, extract phosphorus and nitrogen, or make bricks or other construction materials. Recycling of phosphate is regarded as especially important because the phosphate industry predicts that at the current rate of extraction the economic reserves will be exhausted in 100 or at most 250 years. Phosphate can be recovered with minimal capital expenditure as technology currently exists, but municipalities have little political will to attempt nutrient extraction, instead opting for a "take all the other stuff" mentality. One potential drawback of extracting products from sludge — as opposed to land application — is that only some of the sludge is used and the rest still needs disposal. It can also be very expensive to develop and use appropriate technologies for extracting resources. Contaminants The specific content of sewage sludge is affected by what enters the sewage stream, and how the sewage is treated and processed. As wastewater treatment policies are passed or amended to allow or regulate potential contaminants into the sewage stream, the content of the sewage sludge reflects those changes. 
For example, the EU's Urban Waste Water Treatment Directive shapes the types of contaminants that enter the EU's sewage treatment stream. Pathogens Bacteria in treated sludge products can regrow under certain environmental conditions. Pathogens could easily remain undetected in untreated sewage sludge. Pathogens are not a significant health issue if sewage sludge is properly treated and site-specific management practices are followed. Heavy metals One of the main concerns with treated sludge is its concentrated metal content (lead, arsenic, cadmium, thallium, etc.); certain metals are regulated while others are not. Leaching methods can be used to reduce the metal content and meet the regulatory limits. In 2009, the EPA released the Targeted National Sewage Sludge Study, which reports on the levels of metals, chemicals, hormones, and other materials present in a statistical sample of sewage sludges. Some highlights include: Lead, arsenic, chromium, and cadmium are estimated by the EPA to be present in detectable quantities in 100% of national sewage sludges in the US, while thallium is estimated to be present in only 94.1% of sludges. Silver is present at about 20 mg/kg of sludge on average, while some sludges have up to 200 milligrams of silver per kilogram of sludge; one outlier showed 800–900 mg of silver per kg of sludge. Barium is present at the rate of 500 mg/kg, while manganese is present at the rate of 1 g/kg of sludge. Micro-pollutants Micro-pollutants are compounds which are normally found at concentrations up to micrograms per liter and milligrams per kilogram in the aquatic and terrestrial environment, respectively, and they are considered to be potential threats to environmental ecosystems. They can become concentrated in sewage sludge. Each of the disposal options described above comes with myriad potential—and in some cases proven—human health and environmental impacts. Several organic micro-pollutants such as endocrine-disrupting compounds, pharmaceuticals and per-fluorinated compounds have been detected in sewage sludge samples around the world at concentrations ranging up to some hundreds of mg/kg of dried sludge. Sterols and other hormones have also been detected. Other hazardous substances Sewage treatment plants receive various forms of hazardous waste from hospitals, nursing homes, industry and households. Low levels of constituents such as PCBs, dioxin, and brominated flame retardants may remain in treated sludge. There are potentially thousands of other components disposed of by modern society that remain untested and undetected in sludge (pharmaceuticals, nanoparticles, etc.) and that have been proven to be hazardous to both human and ecological health. In 2013, PCBs were discovered at very high levels in wastewater sludge in South Carolina. The problem was not discovered until thousands of acres of farmland in South Carolina were found to be contaminated by this hazardous material. SCDHEC issued an emergency regulatory order banning all PCB-laden sewage sludge from being land-applied on farm fields or deposited into landfills in South Carolina. Also in 2013, at DHEC's request, the city of Charlotte decided to stop land-applying sewage sludge in South Carolina while authorities investigated the source of the PCB contamination. In February 2014, the city of Charlotte admitted that PCBs had entered its sewage treatment centers as well.
Contaminants of concern in sewage sludge are plasticizers, PBDEs, PFASs ("forever chemicals"), and others generated by human activities, including personal care products and medicines. Synthetic fibers from fabrics persist in treated sewage sludge as well as in biosolids-treated soils and may thus serve as an indicator of past biosolids application. Pollutant ceiling concentration The term "pollutant" is defined as part of the EPA 503 rule. The components of sludge have pollutant limits defined by the EPA. "A Pollutant is an organic substance, an inorganic substance, a combination of organic and inorganic substances, or a pathogenic organism that, after discharge and upon exposure, ingestion, inhalation, or assimilation into an organism either directly from the environment or indirectly by ingestion through the food chain, could, on the basis of information available to the Administrator of EPA, cause death, disease, behavioral abnormalities, cancer, genetic mutations, physiological malfunctions (including malfunction in reproduction), or physical deformations in either organisms or offspring of the organisms." The maximum pollutant concentrations permitted by the US EPA are set out in the Part 503 rule. Health risks The EPA commissioned a study by the United States National Research Council (NRC) to determine the health risks of sludge. The NRC published the resulting report, "Biosolids Applied to Land: Advancing Standards and Practices", in July 2002, pointing out that many of the dangers of sludge are unknown and unassessed. The NRC concluded that while there is no documented scientific evidence that sewage sludge regulations have failed to protect public health, there is persistent uncertainty about possible adverse health effects. The NRC noted that further research is needed and made about 60 recommendations for addressing public health concerns, scientific uncertainties, and data gaps in the science underlying the sewage sludge standards. The EPA responded with a commitment to conduct research addressing the NRC recommendations. Residents living near Class B sludge processing sites may experience asthma or pulmonary distress due to bioaerosols released from sludge fields. A 2004 survey of 48 individuals near affected sites found that most reported irritation symptoms, about half reported an infection within a month of the application, and about a fourth were affected by Staphylococcus aureus, including two deaths. The number of reported S. aureus infections was 25 times as high as in hospitalized patients, a high-risk group. The authors point out that regulations call for protective gear when handling Class B biosolids and that similar protections could be considered for residents in nearby areas given the wind conditions. In 2007, a health survey was conducted of persons living in close proximity to land on which Class B sludge had been applied. A sample of 437 people exposed to Class B sludge (living near sludged land) was compared with a control group of 176 people not exposed to sludge (not living near sludged land); the exposed group reported symptoms at higher rates than the control group. Although correlation does not imply causation, such extensive correlations may lead reasonable people to conclude that precaution is necessary in dealing with sludge and sludged farmlands. Harrison and Oakes suggest that, in particular, "until investigations are carried out that answer these questions (...about the safety of Class B sludge...), land application of Class B sludges should be viewed as a practice that subjects neighbors and workers to substantial risk of disease."
They further suggest that even Class A treated sludge may have chemical contaminants (including heavy metals, such as lead) or endotoxins present, and a precautionary approach may be justified on this basis, though the vast majority of incidents reported by Lewis, et al. have been correlated with exposure to Class B untreated sludge and not Class A treated sludge. A 2005 report by the state of North Carolina concluded that "a surveillance program of humans living near application sites should be developed to determine if there are adverse health effects in humans and animals as a result of biosolids application." Studies of the potential uses of sewage sludge around homes, such as covering lead-contaminated soil in Baltimore, have created debates over whether participants should have been informed about potential risks, when there remains uncertainty about those risks. The chain of sewage sludge to biosolids to fertilizers has resulted in PFAS ("forever chemicals") contamination of farm produce in Maine in 2021 and of beef raised in Michigan in 2022. The EPA PFAS Strategic Roadmap initiative, running from 2021 to 2024, will consider the full lifecycle of PFAS, including the health risks of PFAS in wastewater sludge. Regulation and guidelines European Union The EC encourages the use of sewage sludge in agriculture because it conserves organic matter and completes nutrient cycles. European countries that joined the EU after 2004 favor landfills as a means of disposal for sewage sludge. In 2006, sewage sludge production was predicted to grow to about 10 million tons per year. This increase in the amount of sewage sludge accumulating in the EU can be attributed to the increase in the number of households that are connected to the sewage system. The EU has directives in place to encourage the use of sewage sludge in agriculture in a way that does not harm the soil, humans, or the environment. One guideline the EU has put into place is that sewage sludge should not be applied to fruit and vegetable crops that are in season. In Austria, sewage sludge must first be treated in a way that reduces its biological reactivity before it can be disposed of in a landfill. Sweden no longer allows sewage sludge to be disposed of in landfills. Within the EU, rules on sewage sludge disposal differ among member states, because landfill disposal is not governed by EU-wide regulation. Sewage Sludge Directive The EU's Sewage Sludge Directive (86/278/EEC) sets out regulations with the dual purpose of promoting the use of sewage sludge as an agricultural fertilizer while protecting the environment and human health. These rules include sludge treatment requirements, as well as limits on the time and place of sewage sludge applications, depending on the type of food crop. This is intended to protect human health while maintaining the ecological health of the soil and water. The directive explicitly regulates the allowable levels of seven heavy metals (cadmium, copper, nickel, lead, zinc, mercury, and chromium) in soil and sludge, and regulates any application of sewage sludge that would cause levels of these heavy metals in soil to exceed those limits. EU member states are tasked with implementing and enforcing the Directive within their borders, as well as monitoring and reporting on sludge production, treatment, characteristics, and use.
Member states are allowed to set more stringent limits for heavy metals than those set out in the Sewage Sludge Directive, and can set limits for other pollutants. As of 2021, more than half of the EU member states had stricter limits for mercury and cadmium than required under the Directive. Member states are also allowed to limit or promote the use of sewage sludge for agriculture as they choose, meaning that some countries prohibit the use of sludge in agriculture, while some use up to 50% of the sludge they generate in agriculture. Spain, France, Italy, and the United Kingdom (while it was still part of the EU) have particularly promoted the use of sludge in agriculture. Each of Austria's federal states has its own regulations for the use of sewage sludge in agriculture, including different limits for heavy metals. For example, Tyrol has banned the use of sludge on agricultural lands, while in Salzburg it is only allowed under certain conditions. Since the Directive's passage, there has been a substantial decrease in heavy metal residues in agricultural soils over time (well below the limits set), though it is not possible to determine what proportion of the decrease is due to the Directive itself, as opposed to other national and EU legislation. The Sewage Sludge Directive has been evaluated several times under EU proposals to build a circular economy through the reduction and reuse of wastes. In 2014, a European Commission evaluation of the Sewage Sludge Directive suggested it was appropriate for its goals, and did not need revision. In 2023, as part of the European Green Deal and Circular Economy Action Plan, the EU re-evaluated the Sewage Sludge Directive, and found that it should be maintained – as the use of sewage sludge as fertilizer aligns with circular economy goals and potentially reduces the EU's carbon emissions – but that the potential pollutants and contaminants regulated under the Directive should be reviewed and potentially revised. This evaluation noted that, as of 2023, the Directive had not been substantially updated since its original passage in 1986, even though in the intervening decades there had been many developments in environmental policy, expectations, and research, as well as in member states' national policies around sewage sludge. The evaluation particularly emphasized concerns about methane emissions, microplastic contamination, and antibiotic resistance. The Sewage Sludge Directive has not yet set limits for other contaminants, such as organic pollutants, pathogens, microplastics, pharmaceutical residues, and personal care product residues. With the identification of these new contaminants in sludge since the Sewage Sludge Directive was originally passed, several researchers have suggested that the EU should consider revising the Directive to address their potential risks to health and the environment. United States After the 1991 Congressional ban on ocean dumping, the U.S. Environmental Protection Agency (EPA) instituted a policy of digested sludge reuse on agricultural land. The US EPA promulgated regulations – 40 CFR Part 503 – that continued to allow the use of biosolids on land as fertilizers and soil amendments, which had previously been allowed under Part 257. The EPA promoted biosolids recycling throughout the 1990s.
The EPA's Part 503 regulations were developed with input from university, EPA, and USDA researchers from around the country and involved an extensive review of the scientific literature and the largest risk assessment the agency had conducted up to that time. The Part 503 regulations became effective in 1993. According to the EPA, biosolids that meet treatment and pollutant content criteria of Part 503.13 "can be safely recycled and applied as fertilizer to sustainably improve and maintain productive soils and stimulate plant growth." However, they cannot be disposed of in a sludge-only landfill under Part 503.23 because of high chromium levels and boundary restrictions. Under the Obama Administration, the Biosolids Center of Excellence (headquartered in EPA Region 7) was created to monitor and enforce compliance with biosolids regulation. The Center receives and reviews annual reports from the major producers of biosolids. Eight U.S. states oversee their own biosolids programs: Arizona, Michigan, Ohio, Oklahoma, South Dakota, Texas, Utah, and Wisconsin; other states' programs are overseen by the EPA. Classes of sewage sludge in the United States In the United States, two classes of sewage sludge are defined by the amount of pathogens (e.g. bacteria, viruses) remaining in the sludge, which in turn determines the types of uses allowed by law. Both classes of sludge may still contain radioactive or pharmaceutical wastes. Class A sludge must be treated so that specific pathogens (like Salmonella) are no longer detected. This class of sludge can be used for all land applications, including where the public may come into contact with it (e.g. agricultural land, home use, public sale). Biosolids that meet Class A pathogen reduction requirements or equivalent treatment by a "Process to Further Reduce Pathogens" (PFRP) have the least restrictions on use. PFRPs include pasteurization, heat drying, thermophilic composting (aerobic digestion, the most common method), and beta or gamma ray irradiation. Class B sludge also requires treatment to reduce pathogens, but pathogens are still detectable in the sludge (such as some parasitic worm eggs). This class of sludge has much stricter restrictions on its use. Biosolids that meet the Class B pathogen treatment and pollutant criteria, in accordance with the EPA "Standards for the use or disposal of sewage sludge" (40 CFR Part 503), can be land applied with formal site restrictions and strict record keeping. Evaluation of the U.S. sewage sludge program The EPA Office of the Inspector General (OIG) completed two assessments of the EPA sewage sludge program, in 2000 and 2002. The follow-up report in 2002 documented that "the EPA cannot assure the public that current land application practices are protective of human health and the environment." The report also documented that there had been an almost 100% reduction in EPA enforcement resources since the earlier assessment. This is probably the greatest issue with the practice: under both the federal program operated by the EPA and those of the several states, there is limited inspection and oversight by agencies charged with regulating these practices. To some degree, this lack of oversight is a function of the regulatory agencies' perception that the practice is benign. However, a greater underlying issue is funding. Few states and the US EPA have the discretionary funds necessary to establish and implement a full enforcement program for biosolids.
As detailed in the 1995 Plain English Guide to the Part 503 Risk Assessment, the risk assessment completed for biosolids was the EPA's most comprehensive to that time. Court cases in the United States In 2009, James Rosendall of Grand Rapids, MI, was sentenced by United States District Judge Avern Cohn to 11 months in prison followed by three years of supervised release for conspiring to commit bribery. Rosendall was the former president of Synagro of Michigan, a subsidiary of Synagro Technologies. His duties included obtaining the approval of the City of Detroit to process and dispose of the city's wastewater. In 2011, Travis County Commissioners declared that Synagro's solid waste disposal activities would be an inappropriate and prohibited land use under the county's already established ordinances. A battle between local-government home rule and states' rights/commerce rights has been waged between Kern County, California, and the city of Los Angeles. Kern County voters passed the "Keep Kern Clean" ballot initiative, an ordinance which banned sludge from being applied in the county. Los Angeles sued and, after protracted litigation, won the case in 2016. In 2012, two families won a $225,000 tort lawsuit against a sludge company that contaminated their properties. In 2013, in the Pennsylvania case Gilbert v. Synagro, a judge barred a nuisance, negligence, and trespass lawsuit under Pennsylvania's Right to Farm Act. History of sewage sludge disposal in New York City Since 1884, when sewage was first treated, the amount of sludge has increased along with population growth and more advanced treatment technology (secondary treatment in addition to primary treatment). In the case of New York City, at first the sludge was discharged directly along the banks of rivers surrounding the city, then later piped further into the rivers, and then further still out into the harbor. In 1924, to relieve a dismal condition in New York Harbor, New York City began dumping sludge at sea at a location in the New York Bight called the 12-Mile Site. This was deemed a successful public health measure, and not until the late 1960s was there any examination of its consequences for marine life or humans. There was accumulation of sludge particles on the seafloor and consequent changes in the numbers and types of benthic organisms. In 1970, a large area around the site was closed to shellfishing. From then until 1986, the practice of dumping at the 12-Mile Site came under increasing pressure stemming from a series of untoward environmental crises in the New York Bight that were attributed partly to sludge dumping. In 1986, sludge dumping was moved still further seaward to a site over the deep ocean called the 106-Mile Site. Then, again in response to political pressure arising from events unrelated to ocean dumping, the practice ended entirely in 1992. Since 1992, New York City sludge has been applied to land (outside of New York state). The wider question is whether or not changes on the sea floor caused by the portion of sludge that settles are severe enough to justify the added operational cost and human health concerns of applying sludge to land. See also Milorganite References Further reading "Biosolids Applied to Land: Advancing Standards and Practices", National Research Council, July 2002 Biogas substrates Sewerage Sanitation
Sewage sludge
Chemistry,Engineering,Environmental_science
6,983
52,490,761
https://en.wikipedia.org/wiki/Gangadhar%20J.%20Sanjayan
Gangadhar J. Sanjayan (born 1968) is an Indian bioorganic chemist, scientist and the head of The Sanjayan Lab at the National Chemical Laboratory, Pune. He is known for his research on the synthesis of designer peptide/protein mimetics and hetero-foldamers and is a recipient of the Bronze Medal of the Chemical Research Society of India. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, in 2012, for his contributions to chemical sciences. Biography Born on 1 June 1968 in the south Indian state of Kerala, G. J. Sanjayan completed his graduate studies in chemistry at the University of Kerala in 1988 and obtained a master's degree from Banaras Hindu University in 1990. He continued his studies at BHU under the guidance of Arya K. Mukerjee and, after securing a PhD in 1994, did his postdoctoral studies at the National Chemical Laboratory (NCL) during 1995–98 under the supervision of Krishna N. Ganesh, a Shanti Swarup Bhatnagar laureate. On completion of his studies, he joined NCL as a scientist, but had a second stint of postdoctoral studies at the University of Oxford in the laboratory of G. W. J. Fleet during 2000–01. At NCL, he serves as a senior scientist and as the head of the Sanjayan Lab. Legacy Sanjayan and his team of scientists at the Sanjayan Lab are involved in studies of molecular architectures with designer characteristics, as well as the design and synthesis of molecules that have relevance in medicinal chemistry, especially in therapeutic uses for treating cancer and cardiac diseases. He is known to have synthesized designer peptide and protein mimetics, hetero-foldamers and tubulin-binding agents for treating cancer. His work also covers the development of new organic dyes. His research has been documented in a number of peer-reviewed articles; ResearchGate, an online repository of scientific articles, lists 74 of them. Awards and honors Sanjayan, who held a research grant of the International Foundation for Science in 2007, received the Scientist of the Year Award of the National Chemical Laboratory in 2008, the same year he received the fellowship of the Indo-US Science and Technology Forum. Two years later, he received the Ranbaxy Research Award, followed by the Award for Excellence in Drug Research of the Central Drug Research Institute in 2011. The Council of Scientific and Industrial Research awarded him the Shanti Swarup Bhatnagar Prize, one of the highest Indian science awards, in 2012. The year 2013 brought him two awards, viz. the annual award of the Organisation of Pharmaceutical Producers of India and the Bronze Medal of the Chemical Research Society of India. Panjab University honored him with the Bhagyatara Award in 2014. See also Krishna N. Ganesh References Recipients of the Shanti Swarup Bhatnagar Award in Chemical Science 1968 births Indian organic chemists Living people Scientists from Kerala University of Kerala alumni Banaras Hindu University alumni Alumni of the University of Oxford Malayali people
Gangadhar J. Sanjayan
Chemistry
653
7,975,294
https://en.wikipedia.org/wiki/Complex%20line
In mathematics, a complex line is a one-dimensional affine subspace of a vector space over the complex numbers. A common point of confusion is that while a complex line has complex dimension one over C (hence the term "line"), it has ordinary dimension two over the real numbers R, and is topologically equivalent to a real plane, not a real line. The "complex plane" commonly refers to the graphical representation of the complex line on the real plane, and is thus generally synonymous with the complex line, not the complex coordinate plane. See also Algebraic geometry Complex vector Riemann sphere References Complex geometry Complex analysis
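For illustration, a complex line through a point p in the direction of a nonzero vector v can be written in standard notation as
\[
\ell = \{\, p + t v : t \in \mathbb{C} \,\},
\]
and writing the complex parameter as t = x + iy with real x and y exhibits the same set as \(\{\, p + x v + y (iv) : x, y \in \mathbb{R} \,\}\), a real plane spanned by v and iv. The two real parameters are what make the complex line two-dimensional over R while it remains one-dimensional over C.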
Complex line
Mathematics
128
2,396,669
https://en.wikipedia.org/wiki/Puzz%203D
Puzz 3D is the brand name of three-dimensional jigsaw puzzles, manufactured by Hasbro and formerly by Wrebbit, Inc. Unlike traditional puzzles, which are composed of a series of flat pieces that, when put together, create a single unified image, the Puzz 3D series of puzzles is composed of plastic foam, with part of an image graphed on a stiff paper facade glued to the underlying foam piece and cut to match the piece's dimensions. When the pieces are put together, they create a standing structure. History Puzz 3D puzzles, invented by Paul Gallant, were first made in 1991 under the Quebec-based company Wrebbit. Throughout the 1990s, three-dimensional puzzles were made, leading to rapid growth for the company. In 1993, Hasbro's Milton Bradley Company bought Wrebbit's Puzz 3D line; in 2005 Hasbro bought Wrebbit outright and, in 2006, moved the manufacture of Wrebbit's puzzles to its East Longmeadow, Massachusetts facility. The last series made under Hasbro was Towers Made to Scale. The series consisted of 13 skyscrapers from around the world. All of the structures were made at 1:585 scale, and all of the towers glowed in the dark. By 2006, all of the Puzz 3D puzzles had been discontinued, but in 2011, the Puzz 3D line was revitalized by Winning Solutions, Inc. Winning Solutions first released the Eiffel Tower and Empire State Building, and released a model of Anif Palace in 2012. As of 2014, Hasbro has brought some of the old Puzz 3D line back into production (made in China) in the same boxes. A separate company, Wrebbit3D, making new products along with some of the old line, has been created by some of the former Wrebbit staff. Puzzles Typically, the structures released were famous landmarks, including the White House, Big Ben, the CN Tower, and Neuschwanstein Castle. In addition, Puzz 3D has also released science-fiction-themed puzzles, such as the Millennium Falcon spacecraft from the Star Wars franchise, and structures from legends such as King Arthur's Castle at Camelot. Puzz 3D also released buildings of notable time periods, including a Victorian mansion. In addition to these structures, they have also produced classic cars, such as the 1956 Ford Thunderbird. Puzz 3D also released a puzzle based on Monopoly, which allows for putting the puzzle together and then playing the game. The puzzles released also include a New York City puzzle, which includes the area around the World Trade Center and the Empire State Building. This New York puzzle is set before the September 11 attacks, so the Twin Towers are included. This puzzle is Hasbro's largest, at 3141 pieces. Games Puzz 3D games were also made for Microsoft Windows and Macintosh computers. Users first build the puzzle, which is a digital version of an existing 3D puzzle, by clicking and dragging the pieces. When the puzzle is completed, a fictional mystery set in that landmark must be solved. Four games were made for Windows and Mac. These are Puzz 3D: Neuschwanstein Bavarian Castle, Puzz 3D: Notre Dame Cathedral, Puzz 3D: The Orient Express, and Puzz 3D: Victorian Mansion. A fifth game, The Lamplight Manor, was also made, featuring a 3D tour when completed. References External links Winning Solutions website Hasbro products Tiling puzzles Jigsaw puzzle manufacturers Products introduced in 1991
Puzz 3D
Physics,Mathematics
712
37,530
https://en.wikipedia.org/wiki/Tornado
A tornado is a violently rotating column of air that is in contact with both the surface of the Earth and a cumulonimbus cloud or, in rare cases, the base of a cumulus cloud. It is often referred to as a twister, whirlwind or cyclone, although the word cyclone is used in meteorology to name a weather system with a low-pressure area in the center around which, from an observer looking down toward the surface of the Earth, winds blow counterclockwise in the Northern Hemisphere and clockwise in the Southern. Tornadoes come in many shapes and sizes, and they are often (but not always) visible in the form of a condensation funnel originating from the base of a cumulonimbus cloud, with a cloud of rotating debris and dust beneath it. Most tornadoes have wind speeds less than , are about across, and travel several kilometers (a few miles) before dissipating. The most extreme tornadoes can attain wind speeds of more than , can be more than in diameter, and can stay on the ground for more than . Various types of tornadoes include the multiple-vortex tornado, landspout, and waterspout. Waterspouts are characterized by a spiraling funnel-shaped wind current, connecting to a large cumulus or cumulonimbus cloud. They are generally classified as non-supercellular tornadoes that develop over bodies of water, but there is disagreement over whether to classify them as true tornadoes. These spiraling columns of air frequently develop in tropical areas close to the equator and are less common at high latitudes. Other tornado-like phenomena that exist in nature include the gustnado, dust devil, fire whirl, and steam devil. Tornadoes occur most frequently in North America (particularly in central and southeastern regions of the United States colloquially known as Tornado Alley; the United States has by far the most tornadoes of any country in the world). Tornadoes also occur in South Africa, much of Europe (except most of the Alps), western and eastern Australia, New Zealand, Bangladesh and adjacent eastern India, Japan, the Philippines, and southeastern South America (Uruguay and Argentina). Tornadoes can be detected before or as they occur through the use of pulse-Doppler radar by recognizing patterns in velocity and reflectivity data, such as hook echoes or debris balls, as well as through the efforts of storm spotters. Tornado rating scales There are several scales for rating the strength of tornadoes. The Fujita scale rates tornadoes by damage caused and has been replaced in some countries by the updated Enhanced Fujita Scale. An F0 or EF0 tornado, the weakest category, damages trees, but not substantial structures. An F5 or EF5 tornado, the strongest category, rips buildings off their foundations and can deform large skyscrapers. The similar TORRO scale ranges from T0 for extremely weak tornadoes to T11 for the most powerful known tornadoes. The International Fujita scale is also used to rate the intensity of tornadoes and other wind events based on the severity of the damage they cause. Doppler radar data, photogrammetry, and ground swirl patterns (trochoidal marks) may also be analyzed to determine intensity and assign a rating. Etymology The word tornado comes from the Spanish (meaning 'thunderstorm', past participle of tronar 'to thunder', itself in turn from the Latin tonāre 'to thunder'). The metathesis of the r and o in the English spelling was influenced by the Spanish tornado (past participle of tornar 'to twist, turn,', from Latin tornō 'to turn'). 
The English word has been reborrowed into Spanish, referring to the same weather phenomenon. Tornadoes' opposite phenomena are the widespread, straight-line derechos (, from , 'straight'). A tornado is also commonly referred to as a "twister" or the old-fashioned colloquial term cyclone. Definitions A tornado is a violently rotating column of air, in contact with the ground, either pendant from a cumuliform cloud or underneath a cumuliform cloud, and often (but not always) visible as a funnel cloud. For a vortex to be classified as a tornado, it must be in contact with both the ground and the cloud base. The term is not precisely defined; for example, there is disagreement as to whether separate touchdowns of the same funnel constitute separate tornadoes. Tornado refers to the vortex of wind, not the condensation cloud. Funnel cloud A tornado is not necessarily visible; however, the intense low pressure caused by the high wind speeds (as described by Bernoulli's principle) and rapid rotation (due to cyclostrophic balance) usually cause water vapor in the air to condense into cloud droplets due to adiabatic cooling. This results in the formation of a visible funnel cloud or condensation funnel. There is some disagreement over the definition of a funnel cloud and a condensation funnel. According to the Glossary of Meteorology, a funnel cloud is any rotating cloud pendant from a cumulus or cumulonimbus, and thus most tornadoes are included under this definition. Among many meteorologists, the "funnel cloud" term is strictly defined as a rotating cloud which is not associated with strong winds at the surface, and condensation funnel is a broad term for any rotating cloud below a cumuliform cloud. Tornadoes often begin as funnel clouds with no associated strong winds at the surface, and not all funnel clouds evolve into tornadoes. Most tornadoes produce strong winds at the surface while the visible funnel is still above the ground, so it is difficult to discern the difference between a funnel cloud and a tornado from a distance. Outbreaks and families Occasionally, a single storm will produce more than one tornado, either simultaneously or in succession. Multiple tornadoes produced by the same storm cell are referred to as a "tornado family". Several tornadoes are sometimes spawned from the same large-scale storm system. If there is no break in activity, this is considered a tornado outbreak (although the term "tornado outbreak" has various definitions). A period of several successive days with tornado outbreaks in the same general area (spawned by multiple weather systems) is a tornado outbreak sequence, occasionally called an extended tornado outbreak. Characteristics Size and shape Most tornadoes take on the appearance of a narrow funnel, a few hundred meters (yards) across, with a small cloud of debris near the ground. Tornadoes may be obscured completely by rain or dust. These tornadoes are especially dangerous, as even experienced meteorologists might not see them. Small, relatively weak landspouts may be visible only as a small swirl of dust on the ground. Although the condensation funnel may not extend all the way to the ground, if associated surface winds are greater than , the circulation is considered a tornado. A tornado with a nearly cylindrical profile and relatively low height is sometimes referred to as a "stovepipe" tornado. 
Large tornadoes which appear at least as wide as their cloud-to-ground height can look like large wedges stuck into the ground, and so are known as "wedge tornadoes" or "wedges". The "stovepipe" classification is also used for this type of tornado if it otherwise fits that profile. A wedge can be so wide that it appears to be a block of dark clouds, wider than the distance from the cloud base to the ground. Even experienced storm observers may not be able to tell the difference between a low-hanging cloud and a wedge tornado from a distance. Many, but not all major tornadoes are wedges. Tornadoes in the dissipating stage can resemble narrow tubes or ropes, and often curl or twist into complex shapes. These tornadoes are said to be "roping out", or becoming a "rope tornado". When they rope out, the length of their funnel increases, which forces the winds within the funnel to weaken due to conservation of angular momentum. Multiple-vortex tornadoes can appear as a family of swirls circling a common center, or they may be completely obscured by condensation, dust, and debris, appearing to be a single funnel. In the United States, tornadoes are around across on average. However, there is a wide range of tornado sizes. Weak tornadoes, or strong yet dissipating tornadoes, can be exceedingly narrow, sometimes only a few feet or couple meters across. One tornado was reported to have a damage path only long. On the other end of the spectrum, wedge tornadoes can have a damage path a mile (1.6 km) wide or more. A tornado that affected Hallam, Nebraska on May 22, 2004, was up to wide at the ground, and a tornado in El Reno, Oklahoma on May 31, 2013, was approximately wide, the widest on record. Track length In the United States, the average tornado travels on the ground for . However, tornadoes are capable of both much shorter and much longer damage paths: one tornado was reported to have a damage path only long, while the record-holding tornado for path length—the Tri-State Tornado, which affected parts of Missouri, Illinois, and Indiana on March 18, 1925—was on the ground continuously for . Many tornadoes which appear to have path lengths of or longer are composed of a family of tornadoes which have formed in quick succession; however, there is no substantial evidence that this occurred in the case of the Tri-State Tornado. A 2007 reanalysis of the path suggests that the tornado may have begun further west than previously thought. Appearance Tornadoes can have a wide range of colors, depending on the environment in which they form. Those that form in dry environments can be nearly invisible, marked only by swirling debris at the base of the funnel. Condensation funnels that pick up little or no debris can be gray to white. While traveling over a body of water (as a waterspout), tornadoes can turn white or even blue. Slow-moving funnels, which ingest a considerable amount of debris and dirt, are usually darker, taking on the color of debris. Tornadoes in the Great Plains can turn red because of the reddish tint of the soil, and tornadoes in mountainous areas can travel over snow-covered ground, turning white. Lighting conditions are a major factor in the appearance of a tornado. A tornado which is "back-lit" (viewed with the sun behind it) appears very dark. The same tornado, viewed with the sun at the observer's back, may appear gray or brilliant white. Tornadoes which occur near the time of sunset can be many different colors, appearing in hues of yellow, orange, and pink. 
Dust kicked up by the winds of the parent thunderstorm, heavy rain and hail, and the darkness of night are all factors that can reduce the visibility of tornadoes. Tornadoes occurring in these conditions are especially dangerous, since only weather radar observations, or possibly the sound of an approaching tornado, serve as any warning to those in the storm's path. Most significant tornadoes form under the storm's updraft base, which is rain-free, making them visible. Also, most tornadoes occur in the late afternoon, when the bright sun can penetrate even the thickest clouds. There is mounting evidence, including Doppler on Wheels mobile radar images and eyewitness accounts, that most tornadoes have a clear, calm center with extremely low pressure, akin to the eye of tropical cyclones. Lightning is said to be the source of illumination for those who claim to have seen the interior of a tornado. Rotation Tornadoes normally rotate cyclonically (when viewed from above, this is counterclockwise in the northern hemisphere and clockwise in the southern). While large-scale storms always rotate cyclonically due to the Coriolis effect, thunderstorms and tornadoes are so small that the direct influence of the Coriolis effect is negligible, as indicated by their large Rossby numbers. Supercells and tornadoes rotate cyclonically in numerical simulations even when the Coriolis effect is neglected. Low-level mesocyclones and tornadoes owe their rotation to complex processes within the supercell and ambient environment. Approximately 1 percent of tornadoes rotate in an anticyclonic direction in the northern hemisphere. Typically, systems as weak as landspouts and gustnadoes can rotate anticyclonically, and usually only those which form on the anticyclonic shear side of the descending rear flank downdraft (RFD) in a cyclonic supercell. On rare occasions, anticyclonic tornadoes form in association with the mesoanticyclone of an anticyclonic supercell, in the same manner as the typical cyclonic tornado, or as a companion tornado either as a satellite tornado or associated with anticyclonic eddies within a supercell. Sound and seismology Tornadoes emit widely on the acoustics spectrum and the sounds are caused by multiple mechanisms. Various sounds of tornadoes have been reported, mostly related to familiar sounds for the witness and generally some variation of a whooshing roar. Popularly reported sounds include a freight train, rushing rapids or waterfall, a nearby jet engine, or combinations of these. Many tornadoes are not audible from much distance; the nature of and the propagation distance of the audible sound depends on atmospheric conditions and topography. The winds of the tornado vortex and of constituent turbulent eddies, as well as airflow interaction with the surface and debris, contribute to the sounds. Funnel clouds also produce sounds. Funnel clouds and small tornadoes are reported as whistling, whining, humming, or the buzzing of innumerable bees or electricity, or more or less harmonic, whereas many tornadoes are reported as a continuous, deep rumbling, or an irregular sound of "noise". Since many tornadoes are audible only when very near, sound is not to be thought of as a reliable warning signal for a tornado. Tornadoes are also not the only source of such sounds in severe thunderstorms; any strong, damaging wind, a severe hail volley, or continuous thunder in a thunderstorm may produce a roaring sound. Tornadoes also produce identifiable inaudible infrasonic signatures. 
Unlike audible signatures, tornadic signatures have been isolated; due to the long-distance propagation of low-frequency sound, efforts are ongoing to develop tornado prediction and detection devices with additional value in understanding tornado morphology, dynamics, and creation. Tornadoes also produce a detectable seismic signature, and research continues on isolating it and understanding the process. Electromagnetic, lightning, and other effects Tornadoes emit on the electromagnetic spectrum, with sferics and E-field effects detected. There are observed correlations between tornadoes and patterns of lightning. Tornadic storms do not produce more lightning than other storms and some tornadic cells never produce lightning at all. More often than not, overall cloud-to-ground (CG) lightning activity decreases as a tornado touches the surface and returns to the baseline level when the tornado dissipates. In many cases, intense tornadoes and thunderstorms exhibit an increased and anomalous dominance of positive polarity CG discharges. Luminosity has been reported in the past and is probably due to misidentification of external light sources such as lightning, city lights, and power flashes from broken lines, as internal sources are now uncommonly reported and are not known to ever have been recorded. In addition to winds, tornadoes also exhibit changes in atmospheric variables such as temperature, moisture, and atmospheric pressure. For example, on June 24, 2003, near Manchester, South Dakota, a probe measured a pressure decrease. The pressure dropped gradually as the vortex approached then dropped extremely rapidly to in the core of the violent tornado before rising rapidly as the vortex moved away, resulting in a V-shape pressure trace. Temperature tends to decrease and moisture content to increase in the immediate vicinity of a tornado. Life cycle Supercell relationship Tornadoes often develop from a class of thunderstorms known as supercells. Supercells contain mesocyclones, an area of organized rotation a few kilometers/miles up in the atmosphere, usually across. Most intense tornadoes (EF3 to EF5 on the Enhanced Fujita Scale) develop from supercells. In addition to tornadoes, very heavy rain, frequent lightning, strong wind gusts, and hail are common in such storms. Most tornadoes from supercells follow a recognizable life cycle which begins when increasing rainfall drags with it an area of quickly descending air known as the rear flank downdraft (RFD). This downdraft accelerates as it approaches the ground, and drags the supercell's rotating mesocyclone towards the ground with it. Formation As the mesocyclone lowers below the cloud base, it begins to take in cool, moist air from the downdraft region of the storm. The convergence of warm air in the updraft and cool air causes a rotating wall cloud to form. The RFD also focuses the mesocyclone's base, causing it to draw air from a smaller and smaller area on the ground. As the updraft intensifies, it creates an area of low pressure at the surface. This pulls the focused mesocyclone down, in the form of a visible condensation funnel. As the funnel descends, the RFD also reaches the ground, fanning outward and creating a gust front that can cause severe damage a considerable distance from the tornado. Usually, the funnel cloud begins causing damage on the ground (becoming a tornado) within a few minutes of the RFD reaching the ground. 
Many other aspects of tornado formation (such as why some storms form tornadoes while others do not, or what precise role downdrafts, temperature, and moisture play in tornado formation) are still poorly understood. Maturity Initially, the tornado has a good source of warm, moist air flowing inward to power it, and it grows until it reaches the "mature stage". This can last from a few minutes to more than an hour, and during that time a tornado often causes the most damage, and in rare cases can be more than across. The low pressured atmosphere at the base of the tornado is essential to the endurance of the system. Meanwhile, the RFD, now an area of cool surface winds, begins to wrap around the tornado, cutting off the inflow of warm air which previously fed the tornado. The flow inside the funnel of the tornado is downward, supplying water vapor from the cloud above. This is contrary to the upward flow inside hurricanes, supplying water vapor from the warm ocean below. Therefore, the energy of the tornado is supplied from the cloud above. Dissipation As the RFD completely wraps around and chokes off the tornado's air supply, the vortex begins to weaken, becoming thin and rope-like. This is the "dissipating stage", often lasting no more than a few minutes, after which the tornado ends. During this stage, the shape of the tornado becomes highly influenced by the winds of the parent storm, and can be blown into fantastic patterns. Even though the tornado is dissipating, it is still capable of causing damage. The storm is contracting into a rope-like tube and, due to conservation of angular momentum, winds can increase at this point. As the tornado enters the dissipating stage, its associated mesocyclone often weakens as well, as the rear flank downdraft cuts off the inflow powering it. Sometimes, in intense supercells, tornadoes can develop cyclically. As the first mesocyclone and associated tornado dissipate, the storm's inflow may be concentrated into a new area closer to the center of the storm and possibly feed a new mesocyclone. If a new mesocyclone develops, the cycle may start again, producing one or more new tornadoes. Occasionally, the old (occluded) mesocyclone and the new mesocyclone produce a tornado at the same time. Although this is a widely accepted theory for how most tornadoes form, live, and die, it does not explain the formation of smaller tornadoes, such as landspouts, long-lived tornadoes, or tornadoes with multiple vortices. These each have different mechanisms which influence their development—however, most tornadoes follow a pattern similar to this one. Types Multiple vortex A multiple-vortex tornado is a type of tornado in which two or more columns of spinning air rotate about their own axes and at the same time revolve around a common center. A multi-vortex structure can occur in almost any circulation, but is very often observed in intense tornadoes. These vortices often create small areas of heavier damage along the main tornado path. This is a phenomenon that is distinct from a satellite tornado, which is a smaller tornado that forms very near a large, strong tornado contained within the same mesocyclone. The satellite tornado may appear to "orbit" the larger tornado (hence the name), giving the appearance of one, large multi-vortex tornado. However, a satellite tornado is a distinct circulation, and is much smaller than the main funnel. Waterspout A waterspout is defined by the National Weather Service as a tornado over water. 
However, researchers typically distinguish "fair weather" waterspouts from tornadic (i.e. associated with a mesocyclone) waterspouts. Fair weather waterspouts are less severe but far more common, and are similar to dust devils and landspouts. They form at the bases of cumulus congestus clouds over tropical and subtropical waters. They have relatively weak winds, smooth laminar walls, and typically travel very slowly. They occur most commonly in the Florida Keys and in the northern Adriatic Sea. In contrast, tornadic waterspouts are stronger tornadoes over water. They form over water similarly to mesocyclonic tornadoes, or are stronger tornadoes which cross over water. Since they form from severe thunderstorms and can be far more intense, faster, and longer-lived than fair weather waterspouts, they are more dangerous. In official tornado statistics, waterspouts are generally not counted unless they affect land, though some European weather agencies count waterspouts and tornadoes together. Landspout A landspout, or dust-tube tornado, is a tornado not associated with a mesocyclone. The name stems from their characterization as a "fair weather waterspout on land". Waterspouts and landspouts share many defining characteristics, including relative weakness, short lifespan, and a small, smooth condensation funnel that often does not reach the surface. Landspouts also create a distinctively laminar cloud of dust when they make contact with the ground, due to their differing mechanics from true mesoform tornadoes. Though usually weaker than classic tornadoes, they can produce strong winds which could cause serious damage. Similar circulations Gustnado A gustnado, or gust front tornado, is a small, vertical swirl associated with a gust front or downburst. Because they are not connected with a cloud base, there is some debate as to whether or not gustnadoes are tornadoes. They are formed when fast-moving cold, dry outflow air from a thunderstorm is blown through a mass of stationary, warm, moist air near the outflow boundary, resulting in a "rolling" effect (often exemplified through a roll cloud). If low level wind shear is strong enough, the rotation can be turned vertically or diagonally and make contact with the ground. The result is a gustnado. They usually cause small areas of heavier rotational wind damage among areas of straight-line wind damage. Dust devil A dust devil (also known as a whirlwind) resembles a tornado in that it is a vertical swirling column of air. However, they form under clear skies and are no stronger than the weakest tornadoes. They form when a strong convective updraft is formed near the ground on a hot day. If there is enough low-level wind shear, the column of hot, rising air can develop a small cyclonic motion that can be seen near the ground. They are not considered tornadoes because they form during fair weather and are not associated with any clouds. However, they can, on occasion, result in major damage. Fire whirls Small-scale, tornado-like circulations can occur near any intense surface heat source. Those that occur near intense wildfires are called fire whirls. They are not considered tornadoes, except in the rare case where they connect to a pyrocumulus or other cumuliform cloud above. Fire whirls usually are not as strong as tornadoes associated with thunderstorms. They can, however, produce significant damage. Steam devils A steam devil is a rotating updraft between that involves steam or smoke. 
These formations do not involve high wind speeds, only completing a few rotations per minute. Steam devils are very rare. They most often form from smoke issuing from a power plant's smokestack. Hot springs and deserts may also be suitable locations for a tighter, faster-rotating steam devil to form. The phenomenon can occur over water, when cold arctic air passes over relatively warm water. Intensity and damage The Fujita scale, Enhanced Fujita scale (EF), and International Fujita scale rate tornadoes by damage caused. The EF scale was an update to the older Fujita scale, by expert elicitation, using engineered wind estimates and better damage descriptions. The EF scale was designed so that a tornado rated on the Fujita scale would receive the same numerical rating, and was implemented starting in the United States in 2007. An EF0 tornado will probably damage trees but not substantial structures, whereas an EF5 tornado can rip buildings off their foundations leaving them bare and even deform large skyscrapers. The similar TORRO scale ranges from a T0 for extremely weak tornadoes to T11 for the most powerful known tornadoes. Doppler weather radar data, photogrammetry, and ground swirl patterns (cycloidal marks) may also be analyzed to determine intensity and award a rating. Tornadoes vary in intensity regardless of shape, size, and location, though strong tornadoes are typically larger than weak tornadoes. The association with track length and duration also varies, although longer track tornadoes tend to be stronger. In the case of violent tornadoes, only a small portion of the path is of violent intensity, most of the higher intensity from subvortices. In the United States, 80% of tornadoes are EF0 and EF1 (T0 through T3) tornadoes. The rate of occurrence drops off quickly with increasing strength—less than 1% are violent tornadoes (EF4, T8 or stronger). Current records may significantly underestimate the frequency of strong (EF2-EF3) and violent (EF4-EF5) tornadoes, as damage-based intensity estimates are limited to structures and vegetation that a tornado impacts. A tornado may be much stronger than its damage-based rating indicates if its strongest winds occur away from suitable damage indicators, such as in an open field. Outside Tornado Alley, and North America in general, violent tornadoes are extremely rare. This is apparently mostly due to the lesser number of tornadoes overall, as research shows that tornado intensity distributions are fairly similar worldwide. A few significant tornadoes occur annually in Europe, Asia, southern Africa, and southeastern South America. Climatology The United States has the most tornadoes of any country, nearly four times more than estimated in all of Europe, excluding waterspouts. This is mostly due to the unique geography of the continent. North America is a large continent that extends from the tropics north into arctic areas, and has no major east–west mountain range to block air flow between these two areas. In the middle latitudes, where most tornadoes of the world occur, the Rocky Mountains block moisture and buckle the atmospheric flow, forcing drier air at mid-levels of the troposphere due to downsloped winds, and causing the formation of a low pressure area downwind to the east of the mountains. Increased westerly flow off the Rockies force the formation of a dry line when the flow aloft is strong, while the Gulf of Mexico fuels abundant low-level moisture in the southerly flow to its east. 
This unique topography allows for frequent collisions of warm and cold air, the conditions that breed strong, long-lived storms throughout the year. A large portion of these tornadoes form in an area of the central United States known as Tornado Alley. This area extends into Canada, particularly Ontario and the Prairie Provinces, although southeast Quebec, the interior of British Columbia, and western New Brunswick are also tornado-prone. Tornadoes also occur across northeastern Mexico. The United States averages about 1,200 tornadoes per year, followed by Canada, averaging 62 reported per year; NOAA reports a higher average of 100 per year for Canada. The Netherlands has the highest average number of recorded tornadoes per area of any country (more than 20 annually), followed by the UK (around 33 per year), although those are of lower intensity, briefer, and cause minor damage. Tornadoes kill an average of 179 people per year in Bangladesh, the most in the world. Reasons for this include the region's high population density, poor construction quality, and lack of tornado safety knowledge. Other areas of the world that have frequent tornadoes include South Africa, the La Plata Basin area, portions of Europe, Australia and New Zealand, and far eastern Asia. Tornadoes are most common in spring and least common in winter, but tornadoes can occur any time of year that favorable conditions occur. Spring and fall experience peaks of activity as those are the seasons when stronger winds, wind shear, and atmospheric instability are present. Tornadoes are focused in the right front quadrant of landfalling tropical cyclones, which tend to occur in the late summer and autumn. Tornadoes can also be spawned as a result of eyewall mesovortices, which persist until landfall. Tornadoes can even form during snow squall events with no rain present. Tornado occurrence is highly dependent on the time of day, because of solar heating. Worldwide, most tornadoes occur in the late afternoon, between 15:00 (3 pm) and 19:00 (7 pm) local time, with a peak near 17:00 (5 pm). Destructive tornadoes can occur at any time of day. The Gainesville Tornado of 1936, one of the deadliest tornadoes in history, occurred at 8:30 am local time. The United Kingdom has the highest incidence of tornadoes per unit area of land in the world. Unsettled conditions and weather fronts traverse the British Isles at all times of the year, and are responsible for spawning the tornadoes, which consequently form at all times of the year. The United Kingdom has at least 34 tornadoes per year and possibly as many as 50. Most tornadoes in the United Kingdom are weak, but they are occasionally destructive. For example, the Birmingham tornado of 2005 and the London tornado of 2006 both registered F2 on the Fujita scale and both caused significant damage and injury. Associations with climate and climate change Associations with various climate and environmental trends exist. For example, an increase in the sea surface temperature of a source region (e.g. Gulf of Mexico and Mediterranean Sea) increases atmospheric moisture content. Increased moisture can fuel an increase in severe weather and tornado activity, particularly in the cool season. Some evidence does suggest that the Southern Oscillation is weakly correlated with changes in tornado activity, which vary by season and region, as well as whether the ENSO phase is that of El Niño or La Niña. Research has found that fewer tornadoes and hailstorms occur in winter and spring in the U.S.
central and southern plains during El Niño, and more occur during La Niña, than in years when temperatures in the Pacific are relatively stable. Ocean conditions could be used to forecast extreme spring storm events several months in advance. Climatic shifts may affect tornadoes via teleconnections in shifting the jet stream and the larger weather patterns. The climate-tornado link is confounded by the forces affecting larger patterns and by the local, nuanced nature of tornadoes. Although it is reasonable to suspect that global warming may affect trends in tornado activity, any such effect is not yet identifiable due to the complexity, local nature of the storms, and database quality issues. Any effect would vary by region. Detection Rigorous attempts to warn of tornadoes began in the United States in the mid-20th century. Before the 1950s, the only method of detecting a tornado was by someone seeing it on the ground. Often, news of a tornado would reach a local weather office after the storm. However, with the advent of weather radar, areas near a local office could get advance warning of severe weather. The first public tornado warnings were issued in 1950 and the first tornado watches and convective outlooks came about in 1952. In 1953, it was confirmed that hook echoes were associated with tornadoes. By recognizing these radar signatures, meteorologists could detect thunderstorms probably producing tornadoes from several miles away. Radar Today most developed countries have a network of weather radars, which serves as the primary method of detecting hook signatures that are likely associated with tornadoes. In the United States and a few other countries, Doppler weather radar stations are used. These devices measure the velocity and radial direction (towards or away from the radar) of the winds within a storm, and so can spot evidence of rotation in storms from over away. When storms are distant from a radar, only areas high within the storm are observed and the important areas below are not sampled. Data resolution also decreases with distance from the radar. Some meteorological situations leading to tornadogenesis are not readily detectable by radar and tornado development may occasionally take place more quickly than radar can complete a scan and send the batch of data. Doppler weather radar systems can detect mesocyclones within a supercell thunderstorm. This allows meteorologists to predict tornado formations throughout thunderstorms. Storm spotting Spotters usually are trained by the NWS on behalf of their respective organizations, and report to them. The organizations activate public warning systems such as sirens and the Emergency Alert System (EAS), and they forward the report to the NWS. There are more than 230,000 trained Skywarn weather spotters across the United States. In Canada, a similar network of volunteer weather watchers, called Canwarn, helps spot severe weather, with more than 1,000 volunteers. In Europe, several nations are organizing spotter networks under the auspices of Skywarn Europe and the Tornado and Storm Research Organisation (TORRO) has maintained a network of spotters in the United Kingdom since 1974. Storm spotters are required because radar systems such as NEXRAD detect signatures that suggest the presence of tornadoes, rather than tornadoes as such. Radar may give a warning before there is any visual evidence of a tornado or an imminent one, but ground truth from an observer can give definitive information. 
The spotter's ability to see what radar cannot is especially important as distance from the radar site increases, because the radar beam becomes progressively higher in altitude further away from the radar, chiefly due to curvature of Earth, and the beam also spreads out. Visual evidence Storm spotters are trained to discern whether or not a storm seen from a distance is a supercell. They typically look to its rear, the main region of updraft and inflow. Under that updraft is a rain-free base, and the next step of tornadogenesis is the formation of a rotating wall cloud. The vast majority of intense tornadoes occur with a wall cloud on the backside of a supercell. Evidence of a supercell is based on the storm's shape and structure, and cloud tower features such as a hard and vigorous updraft tower, a persistent, large overshooting top, a hard anvil (especially when backsheared against strong upper level winds), and a corkscrew look or striations. Under the storm and closer to where most tornadoes are found, evidence of a supercell and the likelihood of a tornado includes inflow bands (particularly when curved) such as a "beaver tail", and other clues such as strength of inflow, warmth and moistness of inflow air, how outflow- or inflow-dominant a storm appears, and how far is the front flank precipitation core from the wall cloud. Tornadogenesis is most likely at the interface of the updraft and rear flank downdraft, and requires a balance between the outflow and inflow. Only wall clouds that rotate spawn tornadoes, and they usually precede the tornado between five and thirty minutes. Rotating wall clouds may be a visual manifestation of a low-level mesocyclone. Barring a low-level boundary, tornadogenesis is highly unlikely unless a rear flank downdraft occurs, which is usually visibly evidenced by evaporation of cloud adjacent to a corner of a wall cloud. A tornado often occurs as this happens or shortly afterwards; first, a funnel cloud dips and in nearly all cases by the time it reaches halfway down, a surface swirl has already developed, signifying a tornado is on the ground before condensation connects the surface circulation to the storm. Tornadoes may also develop without wall clouds, under flanking lines and on the leading edge. Spotters watch all areas of a storm, and the cloud base and surface. Extremes The tornado which holds most records in history was the Tri-State Tornado, which roared through parts of Missouri, Illinois, and Indiana on March 18, 1925. It was likely an F5, though tornadoes were not ranked on any scale in that era. It holds records for longest path length (), longest duration (about 3.5 hours), and fastest forward speed for a significant tornado () anywhere on Earth. In addition, it is the deadliest single tornado in United States history (695 dead). The tornado was also the costliest tornado in history at the time (unadjusted for inflation), but in the years since has been surpassed by several others if population changes over time are not considered. When costs are normalized for wealth and inflation, it ranks third today. The deadliest tornado in world history was the Daultipur-Salturia Tornado in Bangladesh on April 26, 1989, which killed approximately 1,300 people. One of the most extensive tornado outbreaks on record was the 1974 Super Outbreak, which affected a large area of the central United States and extreme southern Ontario on April 3 and 4, 1974. 
The outbreak featured 148 tornadoes in 18 hours, many of which were violent; seven were of F5 intensity, and twenty-three peaked at F4 strength. Sixteen tornadoes were on the ground at the same time during its peak. More than 300 people, possibly as many as 330, were killed. While direct measurement of the most violent tornado wind speeds is nearly impossible, since conventional anemometers would be destroyed by the intense winds and flying debris, some tornadoes have been scanned by mobile Doppler radar units, which can provide a good estimate of the tornado's winds. The highest wind speed ever measured in a tornado, which is also the highest wind speed ever recorded on the planet, is 301 ± 20 mph (484 ± 32 km/h) in the F5 Bridge Creek-Moore, Oklahoma, tornado which killed 36 people. The reading was taken about above the ground. Storms that produce tornadoes can feature intense updrafts, sometimes exceeding . Debris from a tornado can be lofted into the parent storm and carried a very long distance. A tornado which affected Great Bend, Kansas, in November 1915, was an extreme case, where a "rain of debris" occurred from the town, a sack of flour was found away, and a cancelled check from the Great Bend bank was found in a field outside of Palmyra, Nebraska, to the northeast. Waterspouts and tornadoes have been advanced as an explanation for instances of raining fish and other animals. Safety Though tornadoes can strike in an instant, there are precautions and preventative measures that can be taken to increase the chances of survival. Authorities such as the Storm Prediction Center in the United States advise having a pre-determined plan should a tornado warning be issued. When a warning is issued, going to a basement or an interior first-floor room of a sturdy building greatly increases chances of survival. In tornado-prone areas, many buildings have underground storm cellars, which have saved thousands of lives. Some countries have meteorological agencies which distribute tornado forecasts and increase levels of alert of a possible tornado (such as tornado watches and warnings in the United States and Canada). Weather radios provide an alarm when a severe weather advisory is issued for the local area, mainly available only in the United States. Unless the tornado is far away and highly visible, meteorologists advise that drivers park their vehicles far to the side of the road (so as not to block emergency traffic), and find a sturdy shelter. If no sturdy shelter is nearby, getting low in a ditch is the next best option. Highway overpasses are one of the worst places to take shelter during tornadoes, as the constricted space can be subject to increased wind speed and funneling of debris underneath the overpass. Myths and misconceptions Folklore often identifies a green sky with tornadoes, and though the phenomenon may be associated with severe weather, there is no evidence linking it specifically with tornadoes. It is often thought that opening windows will lessen the damage caused by the tornado. While there is a large drop in atmospheric pressure inside a strong tornado, the pressure difference is unlikely to cause significant damage. Opening windows may instead increase the severity of the tornado's damage. A violent tornado can destroy a house whether its windows are open or closed. Another commonly held misconception is that highway overpasses provide adequate shelter from tornadoes. 
This belief is partly inspired by widely circulated video captured during the 1991 tornado outbreak near Andover, Kansas, where a news crew and several other people took shelter under an overpass on the Kansas Turnpike and safely rode out a tornado as it passed nearby. However, a highway overpass is a dangerous place during a tornado, and the subjects of the video remained safe due to an unlikely combination of events: the storm in question was a weak tornado, the tornado did not directly strike the overpass, and the overpass itself was of a unique design. Due to the Venturi effect, tornadic winds are accelerated in the confined space of an overpass. Indeed, in the 1999 Oklahoma tornado outbreak of May 3, 1999, three highway overpasses were directly struck by tornadoes, and at each of the three locations there was a fatality, along with many life-threatening injuries. By comparison, during the same tornado outbreak, more than 2,000 homes were completely destroyed and another 7,000 damaged, and yet only a few dozen people died in their homes. An old belief is that the southwest corner of a basement provides the most protection during a tornado. The safest place is the side or corner of an underground room opposite the tornado's direction of approach (usually the northeast corner), or the central-most room on the lowest floor. Taking shelter in a basement, under a staircase, or under a sturdy piece of furniture such as a workbench further increases the chances of survival. There are areas which people believe to be protected from tornadoes, whether by being in a city, near a major river, hill, or mountain, or even protected by supernatural forces. Tornadoes have been known to cross major rivers, climb mountains, affect valleys, and have damaged several city centers. As a general rule, no area is safe from tornadoes, though some areas are more susceptible than others. Ongoing research Meteorology is a relatively young science and the study of tornadoes is newer still. Although researched for about 140 years and intensively so for around 60 years, there are still aspects of tornadoes which remain a mystery. Meteorologists have a fairly good understanding of the development of thunderstorms and mesocyclones, and the meteorological conditions conducive to their formation. However, the step from supercell, or other respective formative processes, to tornadogenesis and the prediction of tornadic vs. non-tornadic mesocyclones is not yet well known and is the focus of much research. Also under study are the low-level mesocyclone and the stretching of low-level vorticity which tightens into a tornado, in particular, what are the processes and what is the relationship of the environment and the convective storm. Intense tornadoes have been observed forming simultaneously with a mesocyclone aloft (rather than succeeding mesocyclogenesis) and some intense tornadoes have occurred without a mid-level mesocyclone. In particular, the role of downdrafts, particularly the rear-flank downdraft, and the role of baroclinic boundaries, are intense areas of study. Reliably predicting tornado intensity and longevity remains a problem, as do details affecting characteristics of a tornado during its life cycle and tornadolysis. Other rich areas of research are tornadoes associated with mesovortices within linear thunderstorm structures and within tropical cyclones. 
Meteorologists still do not know the exact mechanisms by which most tornadoes form, and occasional tornadoes still strike without a tornado warning being issued. Analysis of observations including both stationary and mobile (surface and aerial) in-situ and remote sensing (passive and active) instruments generates new ideas and refines existing notions. Numerical modeling also provides new insights as observations and new discoveries are integrated into our physical understanding and then tested in computer simulations which validate new notions as well as produce entirely new theoretical findings, many of which are otherwise unattainable. Importantly, development of new observation technologies and installation of finer spatial and temporal resolution observation networks have aided increased understanding and better predictions. Research programs, including field projects such as the VORTEX projects (Verification of the Origins of Rotation in Tornadoes Experiment), deployment of TOTO (the TOtable Tornado Observatory), Doppler on Wheels (DOW), and dozens of other programs, hope to solve many questions that still plague meteorologists. Universities, government agencies such as the National Severe Storms Laboratory, private-sector meteorologists, and the National Center for Atmospheric Research are some of the organizations very active in research; with various sources of funding, both private and public, a chief entity being the National Science Foundation. The pace of research is partly constrained by the number of observations that can be taken; gaps in information about the wind, pressure, and moisture content throughout the local atmosphere; and the computing power available for simulation. Solar storms similar to tornadoes have been recorded, but it is unknown how closely related they are to their terrestrial counterparts. See also Cultural significance of tornadoes Cyclone Derecho List of tornadoes and tornado outbreaks List of F5 and EF5 tornadoes List of F4 and EF4 tornadoes List of F4 and EF4 tornadoes (2020–present) List of tropical cyclone-spawned tornadoes List of tornadoes with confirmed satellite tornadoes Secondary flow Skipping tornado Space tornado Tornado preparedness Tornadoes of Tropical cyclone Hypercane Typhoon Vortex Whirlwind References Further reading Heavily illustrated. External links NOAA Storm Events Database 1950–present European Severe Weather Database Tornado Detection and Warnings Electronic Journal of Severe Storms Meteorology NOAA Tornado Preparedness Guide Tornado History Project – Maps and statistics from 1950 to present "What we know and don’t know about tornado formation", Physics Today, September 2014 U.S. Billion-dollar Weather and Climate Disasters Weather hazards Severe weather and convection Types of cyclone Wind Storm Spanish words and phrases Natural disasters Articles containing video clips Hazards of outdoor recreation
Tornado
Physics
9,786
25,069,329
https://en.wikipedia.org/wiki/Journal%20of%20Borderlands%20Studies
The Journal of Borderlands Studies is a peer-reviewed academic journal covering all aspects of borderlands studies. The journal was established in 1986 and is published by Routledge on behalf of the Association for Borderlands Studies. It appears five times a year and the editors-in-chief are Sergio Peña (El Colegio de la Frontera Norte) and Christophe Sohn (Luxembourg Institute of Socio-Economic Research). Abstracting and indexing The journal is abstracted and indexed in the Emerging Sources Citation Index and Scopus. References External links Academic journals established in 1986 English-language journals Routledge academic journals 5 times per year journals Area studies journals Borders
Journal of Borderlands Studies
Physics
130
31,013,910
https://en.wikipedia.org/wiki/Tyndall%20Medal
The Tyndall Medal is a prize from the Institute of Acoustics awarded every two years to a citizen of the UK, preferably under the age of 40, for "achievement and services in the field of acoustics". The prize is named after John Tyndall (1820–1893), who preceded Rayleigh as the Professor of Natural Philosophy at the Royal Institution. He investigated the acoustic properties of the atmosphere and, though a distinguished experimental physicist, he is remembered primarily as one of the world's most brilliant scientific lecturers. List of recipients Source: Institute of Acoustics See also List of physics awards References Physics awards British science and technology awards
Tyndall Medal
Technology
132
25,050,762
https://en.wikipedia.org/wiki/Suparnostic
suPARnostic is a simplified double monoclonal antibody sandwich enzyme-linked immunosorbent assay (ELISA) that measures the amount of soluble urokinase plasminogen activator receptor (suPAR) in blood. Elevated plasma suPAR levels have been observed in various infectious, inflammatory and autoimmune diseases. suPAR concentration positively correlates with the activation level of the immune system. suPARnostic can be used as a prognostic tool to determine the severity of a disease within a patient, but is not used as a reliable diagnostic tool, as it can detect the severity of the immune response in a patient, but does not reveal the specific disease from which the patient may be suffering. Recently, increased suPAR levels were shown to be associated with increased risk of systemic inflammatory response syndrome (SIRS)/sepsis, cardiovascular disease, type 2 diabetes, infectious diseases, HIV, cancer, tuberculosis, malaria, bacterial and viral CNS infections, rheumatoid arthritis, multiple sclerosis and mortality in the general population. Performing the suPARnostic ELISA Performing the suPARnostic ELISA requires two antibodies with high specificity for suPAR. The blood plasma sample from the patient that contains an unknown amount of suPAR is immobilized in the microwells of the clear microtiter plate and a detection antibody forms a complex with suPAR. Between each step the plate is rinsed with a wash buffer to dispose of any proteins that do not specifically bind to any of the wells on the plate. After the final wash step, the plate is developed by adding the TMB substrate to produce a visible signal, which indicates the quantity of suPAR in the sample. The measured absorbance can, based on the values from the standard curve, be converted to the concentration (ng/mL) of suPAR in the sample. This level can then suggest whether or not the patient is experiencing challenges to their immune system. Principles The suPARnostic ELISA is a simplified double monoclonal antibody sandwich assay that measures the level of suPAR and suPARII-III in the body. The suPARnostic ELISA utilizes monoclonal mouse and rat antibodies against human suPAR. The advantages of using monoclonal antibodies compared to using polyclonal antibodies include: high homogeneity, absence of nonspecific antibodies and no batch-to-batch or lot-to-lot variability. This results in a very robust and reliable assay. A 'sandwich' is formed of solid-phase antibody, suPAR and peroxidase-conjugated antibody. The concentration (ng/mL plasma) of suPAR in the patient sample is determined via interpolation, based on a calibration curve prepared from seven suPAR standards. Recombinant suPAR standards are calibrated against healthy human blood donor samples. Absorbance is measured using a microtiter plate reader, at 450 nm with a 650 nm reference filter. Measurement of suPAR levels from blood samples provides greater accuracy and precision than measurement from urine or cerebral spinal fluid. suPAR level is not changed by transient illness such as a cold. It also remains stable in storage after a blood sample is taken. suPARnostic measurements between 0.1 and 4.0 ng/mL suggest that a patient is healthy, with no challenges to their immune system and no signs or symptoms of an opportunistic infection or inflammation; the average level among the population is 3.4 ng/mL.
However, a patient's immune system can be considered 'negatively activated' at suPAR levels above 4.0 and up to 6.0 ng/mL, indicating a potential infection or high level of inflammation. In this case, a patient's health is likely to worsen and he or she should be referred for further testing. suPARnostic measurements from 6.0 ng/mL to double-digit levels can indicate a serious illness that is progressing rapidly to a critical situation. Patients in the intensive care unit average a level of 10.0 ng/mL. There is no difference in suPAR levels intrinsic to various races; however, the scale varies between males and females. There are two suPARnostic tests available. The suPARnostic Standard ELISA (Code No. A001) is for research use and large trials, one batch consisting of 41 samples in doublets. The suPARnostic Flex ELISA (Code No. A002) has been developed for clinical applications consisting of 93 samples, is modular and flexible, and gives fully quantitative results in 2 hours. Practical Considerations The suPARnostic kit has a refrigerated shelf life of several years and, when frozen, may be kept for longer. The kit should sit at room temperature for half an hour before use but it may be held at room temperature for as long as three to four hours. The suPARnostic Flex ELISA (Code No. A002) is able to provide fully quantitative results in 2 hours. suPARnostic is run as a large batch test with up to 41 samples in doublets for research purposes or 93 samples for clinical use at one time. Although suPARnostic currently does not have FDA approval, it is CE/IVD marked for distribution throughout Europe. suPAR is a prognostic marker of general health, and it cannot be used as a diagnostic tool to suggest a particular illness. suPAR cannot be used in the detection of brain tumors because the suPAR molecule cannot migrate through the blood brain barrier. References Blood tests
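The quantification and interpretation steps described above (reading a sample's absorbance against a standard curve, then comparing the resulting concentration with the reference ranges) can be sketched as follows. The standard-curve points, the sample absorbance, and the use of simple linear interpolation are illustrative assumptions only; they are not values or methods taken from the kit insert, which specifies its own seven-point calibration and curve-fitting procedure.

```python
import numpy as np

# Hypothetical seven-point standard curve: suPAR concentration (ng/mL) vs. OD450
# (650 nm reference subtracted). Placeholder values for illustration only.
std_conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])      # ng/mL
std_abs = np.array([0.05, 0.12, 0.21, 0.38, 0.70, 1.25, 2.10])  # absorbance

def absorbance_to_conc(sample_abs: float) -> float:
    """Interpolate a sample absorbance on the standard curve (simple linear interpolation)."""
    return float(np.interp(sample_abs, std_abs, std_conc))

def interpret(conc_ng_ml: float) -> str:
    """Map a suPAR concentration onto the reference ranges quoted in the text."""
    if conc_ng_ml <= 4.0:
        return "within the healthy reference range (0.1-4.0 ng/mL)"
    if conc_ng_ml <= 6.0:
        return "elevated (4.0-6.0 ng/mL): possible infection or inflammation, refer for further testing"
    return "strongly elevated (>6.0 ng/mL): consistent with serious, rapidly progressing illness"

sample_od = 0.55  # illustrative measured OD450 for one patient plasma sample
conc = absorbance_to_conc(sample_od)
print(f"suPAR approx. {conc:.1f} ng/mL -> {interpret(conc)}")
```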
Suparnostic
Chemistry
1,123
35,099,255
https://en.wikipedia.org/wiki/Fenchel%E2%80%93Moreau%20theorem
In convex analysis, the Fenchel–Moreau theorem (named after Werner Fenchel and Jean Jacques Moreau) or Fenchel biconjugation theorem (or just biconjugation theorem) is a theorem which gives necessary and sufficient conditions for a function to be equal to its biconjugate. This is in contrast to the general property that $f^{**} \leq f$ for any function $f$. This can be seen as a generalization of the bipolar theorem. It is used in duality theory to prove strong duality (via the perturbation function). Statement Let $(X, \tau)$ be a Hausdorff locally convex space. For any extended real-valued function $f : X \to \mathbb{R} \cup \{\pm\infty\}$, it follows that $f = f^{**}$ if and only if one of the following is true: $f$ is a proper, lower semi-continuous, and convex function; $f \equiv +\infty$; or $f \equiv -\infty$. References Convex analysis Theorems in analysis Theorems involving convexity
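A standard worked example on the real line (not part of the original article, included only for illustration) shows both sides of the theorem: a proper, convex, lower semi-continuous function equals its biconjugate, while a function failing those conditions satisfies the inequality $f^{**} \leq f$ strictly somewhere.

```latex
% Worked illustration of the Fenchel-Moreau theorem on X = \mathbb{R}.
% (Standard textbook example; not taken from the article itself.)

% 1. f(x) = |x| is proper, convex, and lower semi-continuous, so f** = f:
f^{*}(y) = \sup_{x \in \mathbb{R}} \bigl( xy - |x| \bigr)
         = \begin{cases} 0 & \text{if } |y| \le 1,\\ +\infty & \text{if } |y| > 1, \end{cases}
\qquad
f^{**}(x) = \sup_{|y| \le 1} xy = |x| = f(x).

% 2. g(0) = 1 and g(x) = 0 for x \neq 0 is neither convex nor lower semi-continuous:
g^{*}(y) = \begin{cases} 0 & \text{if } y = 0,\\ +\infty & \text{if } y \neq 0, \end{cases}
\qquad
g^{**} \equiv 0, \quad \text{so } g^{**}(0) = 0 < 1 = g(0) \text{ and } g \neq g^{**}.
```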
Fenchel–Moreau theorem
Mathematics
169
21,912,073
https://en.wikipedia.org/wiki/Neutron%20capture%20nucleosynthesis
Neutron capture nucleosynthesis describes two nucleosynthesis pathways: the r-process and the s-process, for rapid and slow neutron captures, respectively. R-process describes neutron capture in a region of high neutron flux, such as during supernova nucleosynthesis after core-collapse, and yields neutron-rich nuclides. S-process describes neutron capture that is slow relative to the rate of beta decay, as for stellar nucleosynthesis in some stars, and yields nuclei with stable nuclear shells. Each process is responsible for roughly half of the observed abundances of elements heavier than iron. The importance of neutron capture to the observed abundance of the chemical elements was first described in 1957 in the B2FH paper. References Further reading capture nucleosynthesis Nucleosynthesis
Neutron capture nucleosynthesis
Physics,Chemistry
170
45,079,586
https://en.wikipedia.org/wiki/Siderocalin
Siderocalin (Scn), lipocalin-2, NGAL, 24p3 is a mammalian lipocalin-type protein that can prevent iron acquisition by pathogenic bacteria by binding siderophores, which are iron-binding chelators made by microorganisms. Iron serves as a key nutrient in host-pathogen interactions, and pathogens can acquire iron from the host organism via the synthesis and release of siderophores such as enterobactin. Siderocalin is a part of the mammalian defence mechanism and acts as an antibacterial agent. Crystallographic studies of Scn demonstrated that it includes a calyx, a ligand-binding domain that is lined with polar cationic groups. Central to the siderophore/siderocalin recognition mechanism are hybrid electrostatic/cation-pi interactions. To evade the host defences, pathogens evolved to produce structurally varied siderophores that would not be recognized by siderocalin, allowing the bacteria to acquire iron. Iron requirements of host organisms Organisms require iron for a variety of chemical reactions. Although iron can be found throughout the biosphere, free ferric iron forms insoluble hydroxides at physiological pH, limiting its accessibility in aerobic conditions to living organisms. In order to preserve homeostasis, organisms have evolved specific protein networks, with proteins and receptors translated in accordance with intracellular iron levels. Export and import are supplemented by a cycling process between the ferrous Fe(II) available in the reducing environment of the cell, and ferric Fe(III) found primarily under aerobic conditions. The iron acquisition mechanisms of pathogenic bacteria demonstrate the role of iron as a key component at the interface between pathogens and hosts. Lipocalin family of iron binding proteins The lipocalin family of binding proteins is produced by the immune system and sequesters ferric siderophore complexes from the siderophore receptors of bacteria. Members of the lipocalin family typically have a conserved eight-stranded β-barrel fold with a calyx binding site, which is lined with positively charged amino acid residues, allowing for binding interactions with siderophores. Clinical significance Mycobacterial infections The lipocalin siderocalin is found in neutrophil granules, uterine secretions, and at particularly high levels in serum during bacterial infection. Upon infection, pathogens use siderophores to capture iron from the host organism. This strategy is, however, complicated by the human protein siderocalin, which can sequester siderophores, and prevent their use by pathogenic bacteria as iron delivery agents. This effect has been demonstrated by studies with siderocalin-knockout mice, which are more sensitive to infections under iron-limiting conditions. Mycobacterial virulence Siderophores are iron chelators, allowing organisms to acquire iron from their environment. In the case of pathogens, iron can be acquired from the host organism. Siderophores and ferric iron can associate to form stable complexes. Siderophores bind iron using a variety of ligands, most commonly as α-hydroxycarboxylates (e.g. citrate), catecholates, and hydroxamates. As a defence mechanism, siderocalin can supplement ferric bis-catechol complexes (formed under physiological conditions) with a third catechol, in order to achieve a hexacoordinate ferric complex, resulting in higher affinity binding.
As a mediator of mammalian iron transport Mammalian siderophores, specifically catechols, can be found in the human gut and, like bacterial siderophores such as enterobactin, serve as iron-binding moieties. Catechol-like molecules can act as iron ligands in the cell and in the systemic circulation, allowing siderocalin to bind to the iron-catechol complex. Catechols can be bound by siderocalin, in the form of free ligands, or in the iron complex. The vertebrate lipocalin-2 receptor (24p3R) allows for import of the ferric siderophore complex into mammalian cells. During kidney embryogenesis, siderocalin-mediated iron transport occurs, as iron concentration has to be highly controlled in order to restrict inflammation. Following secretion by neutrophils, siderocalin can bind to pathogenic siderophores, such as bacillibactin, and prevent siderophore trafficking. Siderocalin has been linked with various cellular processes apart from iron transport, including apoptosis, cellular differentiation, tumorigenesis, and metastasis. Structure The avian orthologs of siderocalin (Q83 and Ex-FABP) and NGAL (neutrophil gelatinase-associated lipocalin-2) contain calyces with positively charged lysine and arginine side chains. These side chains interact via cation-pi and coulombic interactions with the negatively charged siderophores that contain aromatic catecholate groups. Crystallographic studies of siderocalin have shown that the ligand binding domain of Scn, known as the calyx, is shallow and broad, and is lined with polar cationic groups from the three positively charged residues of Arg81, Lys125, and Lys134. Scn can also bind non-ferric complexes and has been identified as a potential transporter for heavy actinide ions. Scn crystal structures containing heavy metals (thorium, plutonium, americium, curium, and californium) have been obtained. Scn has been found as a monomer, homo-dimer, or trimer in human plasma. The siderocalin fold is exceptionally stable. The calyx is structurally stable and rigid, and conformational change does not typically occur upon a change in pH, ionic strength, or ligand binding. Binding pocket The structural stability of the calyx has been attributed to the three binding pockets within the calyx that sterically limit which ligands are compatible with siderocalin. The Scn calyx can accommodate three aromatic rings of the catecholate moieties, in the three available binding pockets. Solid-state and solution structural results demonstrated that bacteria-derived enterobactin is bound to the binding pocket of Scn, allowing for Scn to be involved in the acute immune response to bacterial infection. One method by which pathogens can circumvent immunity mechanisms is by modifying the siderophore chemical structure to prevent interaction with Scn. One example is the addition of glucose molecules to the enterobactin backbone of salmochelin (C-glucosylated enterobactin) in order to increase the hydrophilicity and bulkiness of a siderophore and inhibit binding to Scn. Binding interactions Siderophores are typically bound to siderocalin with subnanomolar affinities, and interact with siderocalin specifically. The Kd value of the siderocalin/siderophore interaction, measured by fluorescence quenching (Kd = 0.4 nM), indicates that siderocalin can capture siderophores with high affinity. This Kd value is similar to that of the FepA bacterial receptor (Kd = 0.3 nM). Siderophore/siderocalin binding is directed by electrostatic interactions.
Specifically, the mechanism involves hybrid electrostatic and cation-pi interactions in the positively charged protein calyx. The siderophore is positioned in the centre of the siderocalin calyx, and is associated with multiple direct polar interactions. Structural analysis of the siderocalin/siderophore interaction has shown that the siderophore is associated with poor and diffuse electron density, with the majority of the ligand exposed to the solvent when the siderophore is fitted in the calyx. Siderocalin typically does not bind hydroxamate-based siderophores because these substrates do not have the necessary aromatic electronic structure for cation-pi interactions. In order to acquire iron in the presence of siderocalin, pathogenic bacteria utilize several siderophores that do not bind to siderocalin, or structurally modify siderophores to inhibit siderocalin binding. Siderocalin can bind soluble siderophores of mycobacteria, including carboxymycobactins. In vivo studies have shown that the binding interactions between carboxymycobactin and siderocalin serve to protect the host organism from mycobacterial infections, with siderocalin inhibiting mycobacterial iron acquisition. Siderocalin can sequester ferric carboxymycobactins by employing a polyspecific recognition mechanism. The siderophore/siderocalin recognition mechanism primarily involves hybrid electrostatic/cation-pi interactions. The fatty acid tails of carboxymycobactin reside in a 'tail-in' or 'tail-out' conformation within pocket 2. The 'tail-in' conformation of the fatty acid chain introduces a significant interaction between the calyx and the ligand, increasing the affinity between the siderocalin calyx and carboxymycobactin. Fatty acid tails of shorter lengths bind siderocalin correspondingly less favorably and cannot maintain the necessary interaction with the binding pocket. Since lipocalin-2 cannot bind the long fatty acid chain carboxymycobactins of mycobacteria, it is apparent that a number of pathogens have evolved to avoid the activity of lipocalin-2. Recognition mechanism Electrostatic interactions play a key role in the recognition mechanism of siderophores by siderocalin. The binding of the siderophore and the siderocalin binding pocket is primarily directed by cation-pi interactions, with the positively charged binding pocket of siderocalin attracting the negatively charged complex. A structural factor involved in the siderocalin-mediated recognition mechanism of phenolate/catecholate-type siderophores includes a backbone linker which allows siderocalin to interact with different phenolate/catecholate siderophores. While siderocalin recognition is minimally affected by the substitution of different metals, methylating the three catecholate rings of enterobactin can impede the recognition of siderocalin. A strategy used by pathogens to overcome the immune response is the production of siderophores that will not be recognized by siderocalin. For example, siderocalin cannot recognize the siderophores of the C-glucosylated analog of enterobactin, as the donor groups are glycosylated, introducing steric interactions at the 5-position carbons of the catechol groups. History The requirement for iron by humans and pathogens has been known for many years. The link between iron and mycobactins, iron-chelating growth factors from mycobacteria, was first made in the 1960s.
At the time, interest was growing in developing mycobactins as target molecules for a rational anti-tuberculosis agent. Experiments in the 1960s and 1970s demonstrated that iron deficiency in mycobacteria was the cause of 'anaemic' cells. The majority of the genes and systems necessary for high affinity iron acquisition have been identified in pathogenic and saprophytic mycobacteria. These genes encode proteins for iron storage and for the uptake of ferric siderophores and heme. Humans have evolved a defense against siderophore-mediated iron acquisition by developing siderocalin. To combat this, various pathogens have evolved siderophores that can evade siderocalin recognition. Siderocalin has been shown to bind to siderophores and inhibit iron acquisition, and prevent the growth of Mycobacterium tuberculosis in extracellular cultures; however, the effect of siderocalin on this pathogen within macrophages remains unclear. See also LCN2 LCN1 Animal pathogens Mycobacteria References Proteins Human genes
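To give a quantitative feel for the sub-nanomolar affinity quoted above (Kd of about 0.4 nM for the siderocalin/siderophore complex), the sketch below evaluates the simple single-site binding relation, fraction bound = [L] / (Kd + [L]), at a few free-siderophore concentrations. The concentrations looped over are arbitrary illustrative values, not measurements from the cited studies, and the relation assumes the ligand is in excess over the protein.

```python
# Single-site equilibrium binding: fraction of siderocalin occupied as a function
# of free siderophore concentration [L], using the Kd quoted in the text (0.4 nM).
KD_NM = 0.4  # dissociation constant, nM

def fraction_bound(ligand_nm: float, kd_nm: float = KD_NM) -> float:
    """Fraction of protein present as the protein-ligand complex at equilibrium."""
    return ligand_nm / (kd_nm + ligand_nm)

# Arbitrary illustrative free-siderophore concentrations (nM).
for ligand in (0.1, 0.4, 1.0, 10.0, 100.0):
    print(f"[siderophore] = {ligand:6.1f} nM -> {fraction_bound(ligand):.1%} bound")
```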
Siderocalin
Chemistry
2,567
1,030,860
https://en.wikipedia.org/wiki/Buffer%20state
A buffer state is a country geographically lying between two rival or potentially hostile great powers. Its existence can sometimes be thought to prevent conflict between them. A buffer state is sometimes a mutually agreed upon area lying between two greater powers, which is demilitarised in the sense of not hosting the armed forces of either power (though it will usually have its own military forces). The invasion of a buffer state by one of the powers surrounding it will often result in war between the powers. Research shows that buffer states are significantly more likely to be conquered and occupied than are nonbuffer states. This is because "states that great powers have an interest in preserving—buffer states—are in fact in a high-risk group for death. Regional or great powers surrounding buffer states face a strategic imperative to take over buffer states: if these powers fail to act against the buffer, they fear that their opponent will take it over instead. By contrast, these concerns do not apply to nonbuffer states, where powers face no competition for influence or control." Buffer states, when authentically independent, typically pursue a neutralist foreign policy, which distinguishes them from satellite states. The concept of buffer states is part of a theory of the balance of power that entered European strategic and diplomatic thinking in the 18th century. After the First World War, notable examples of buffer states were Poland and Czechoslovakia, situated between major powers such as Germany and the Soviet Union. Lebanon is another significant example, positioned between Syria and Israel, thereby experiencing challenges as a result. Examples Americas Bolivia, created by Gran Colombia as a buffer between Peru and Argentina during the Upper Peru question Uruguay, served as a demilitarised buffer between Argentina and the Empire of Brazil during the early independence period in South America Paraguay, maintained after the end of the Paraguayan War in 1870, as a buffer separating Argentina and Brazil Georgia, a colony established by Great Britain in 1732 as a buffer between its other colonies along the Atlantic coast of North America and Spanish Florida Ecuador, served as a "cushion state" between Colombia and Peru, which had a bigger extension and military force and fought a war in the 1820s. Asia Kingdom of Judah was a buffer state between Egyptian Empire and Neo-Babylonian Empire. Multiple buffer states played major roles during the Roman–Persian Wars (66 BC – 628 AD). Armenia was a frequently contested buffer between the Roman Empire (as well as the later Byzantine Empire) and the various Persian and Muslim states. North Korea, during and after the Cold War, has been seen by some analysts as a buffer state between the military forces of China, the Soviet Union and those of South Korea, Japan, and the United States (stationed in South Korea, Japan, and Taiwan from 1954 to 1979). Manchukuo was a pro-Japanese buffer state between the Empire of Japan, the Soviet Union, and the Republic of China during World War II. Thailand, historically known as Siam, was an independent buffer state between the British Raj, British Malaya, French Indochina, and their competing colonial interests in Laos and Cambodia. Korea acted as a buffer zone between the growing superpowers of Imperial Japan and the Russian Empire. The Far Eastern Republic was a formally independent state created to act as a buffer between Bolshevik Russia and the Empire of Japan. 
Afghanistan was a buffer state between the British Empire, which ruled much of South Asia, and the Russian Empire, which ruled much of Central Asia, during the Anglo–Russian conflicts of the 19th century. Later, the Wakhan Corridor extended the buffer eastwards to the Chinese border. The Himalayan nations of Tibet, Nepal, Bhutan, and Sikkim were buffer states between the British Empire and China. Later, during the Sino-Indian War of 1962, they became buffers between China and India as the two powers fought along their borders. Mongolia acted as a buffer between the Soviet Union and China until 1991. It currently serves as a buffer between Russia and China. Lebanon is a buffer state between Israel and Syria. Several states of the Persian Gulf region have also been described as buffer states between Iran and Saudi Arabia. Africa Morocco served as a buffer state between the Ottoman Empire, Spain, and Portugal in the 16th century. The Bechuanaland Protectorate (present-day Botswana) was initially created as a buffer between the British Empire and the two Boer republics of the Orange Free State and the Transvaal Republic until the Second Boer War. Europe The Principality of Transylvania was a buffer state between the Ottoman Empire and the Habsburg Empire until the Treaty of Karlowitz was signed. Switzerland has been a buffer state between Italy, Austria, France, Germany, and other state powers in medieval and modern Europe. The United Kingdom of the Netherlands, composed of today's Belgium and Netherlands, was created by the Congress of Vienna in 1815 to maintain peace between France, Prussia, and the United Kingdom. The kingdom existed for 15 years until the Belgian Revolution. Belgium acted as a buffer state between France, the German Empire, the Netherlands, and the British Empire before the First World War. The Rhineland served as a demilitarised zone between France and Germany during the interwar years of the 1920s and early 1930s. There were early French attempts at creating a Rhenish Republic. The Socialist Soviet Republic of Byelorussia was founded as a buffer state between Soviet Russia and the European powers. The Qasim Khanate (1452–1681) may have served as a buffer between Muscovy and the Kazan Khanate. Austria acted as a buffer state between Germany and Italy during the interwar period. Poland and other states between Germany and the Soviet Union have sometimes been described as buffer states, both as non-communist states before World War II and later as communist states of the Eastern Bloc. Yugoslavia, which broke with the Soviet Union before the formation of the Warsaw Pact, became a buffer state between NATO and the Eastern Bloc during the Cold War. West Germany and East Germany were also regarded as buffer states between NATO and the Warsaw Pact during the Cold War in Europe. During the Cold War, Sweden and Finland were sometimes regarded as buffer states between NATO and the Soviet Union. More recently, the Russo-Ukrainian War has helped push both countries into joining NATO. Oceania The New Hebrides served as a buffer between the United Kingdom and France in Oceania during the New Imperialism period. Papua New Guinea served as a buffer state between Indonesia, the Solomon Islands, and Vanuatu. Indonesia accused both the Solomon Islands and Vanuatu of supporting the Free Papua Movement during the Papua conflict.
See also Indian barrier state, a British proposal to establish a Native American buffer state in the Great Lakes region of North America during the 18th and early 19th centuries Limitrophe states Neutral and Non-Aligned European States Puppet state Satellite state References Former countries Types of countries Independence Sovereignty Borders Geopolitics
Buffer state
Physics
1,357
11,187,755
https://en.wikipedia.org/wiki/Insect%20pins
Insect pins are used by entomologists for mounting collected insects. They can also be used in dressmaking for very fine silk or antique fabrics. Insect pins are made to a standard length and come in sizes from 000 (the smallest diameter), through 00, 0, and 1, to 8 (the largest diameter). The most generally useful size in entomology is size 2, with sizes 1 and 3 being the next most useful. They were once commonly made from brass or silver, but these would corrode from contact with insect bodies and are no longer commonly used. Instead they are made from nickel-plated brass, with "white" or "black" enameling, or even from stainless steel. Similarly, the smallest sizes from 000 to 1 used to be impractical for mounting until plastic and polyethylene became commonly used for pinning bases. There are also micro-pins, which are much shorter. Minutens are headless micropins that are generally only made of stainless steel, and used for double-mounting. The insect is mounted on the minuten, which is pinned to a small block of soft material, which is in turn mounted on a standard, larger, insect pin. References Cross-reference Sources Entomology equipment Fasteners Collecting
Insect pins
Engineering,Biology
261
1,027,046
https://en.wikipedia.org/wiki/Psychic%20vampire
A psychic vampire is a creature in folklore said to feed off the "life force" of other living creatures. The term can also be used to describe a person who gets increased energy around other people, but leaves those other people exhausted or "drained" of energy. Psychic vampires are represented in the occult beliefs of various cultures and in fiction. Psychic energy Terms used to describe the substance or essence that psychic vampires take or receive from others include: energy, qi (or ch'i), life force, prana, and vitality. There is no scientific or medical evidence supporting the existence of the bodily or psychic energy they allegedly drain. Emotional vampires American author Albert Bernstein uses the phrase "emotional vampire" for people with various personality disorders who are often considered to drain emotional energy from others. Energy vampires The term "energy vampire" is also used metaphorically to refer to people whose influence leaves a person feeling exhausted, unfocused, and depressed, without ascribing the phenomenon to psychic interference. Dion Fortune wrote of psychic parasitism in relation to vampirism as early as 1930 in her book, Psychic Self-Defense. Fortune considered psychic vampirism a combination of psychic and psychological pathology, and distinguished between what she considered to be true psychic vampirism and mental conditions that produce similar symptoms. For the latter, she named folie à deux and similar phenomena. The term "psychic vampire" was popularized in the 1960s by Anton LaVey and his Church of Satan. LaVey wrote on the topic in his book, The Satanic Bible, and claimed to have coined the term. LaVey used psychic vampire to mean a spiritually or emotionally weak person who drains vital energy from other people. Adam Parfrey likewise attributed the term to LaVey in an introduction to The Devil's Notebook. The English singer-songwriter Peter Hammill credits his erstwhile Van der Graaf Generator colleague, violinist Graham Smith, with coining the term "energy vampires" in the 1970s in order to describe intrusive, over-zealous fans. Hammill included a song of the same name on his 1978 album The Future Now. In the 1982 horror movie One Dark Night, Karl “Raymar” Raymarseivich is the name of a Russian psychic vampire who gains power from the lifeforce of young victims by frightening them to death. This is done by demonstrations of telekinesis which emanates as visible electrical currents of bioenergy. How he dies is unclear, but his malevolence posthumously remains in his body. Effectively, Raymar is a poltergeist in the mausoleum he is interred in, opening crypts (including his own), sliding out the caskets to the floor and randomly exhuming his fellow corpses to terrify unfortunate teenagers who have chosen the wrong place to have an overnight initiation. The terms "energy vampire" and "psychic vampire" have been used as synonyms in Russia since the fall of the Soviet Union as part of an occult revival. The 2019 American comedy horror television series What We Do in the Shadows includes the character Colin Robinson, a metaphorical and literal "energy vampire" who drains people's life forces by being boring or frustrating. Vampire subculture Sociologists such as Mark Benecke and A. Asbjørn Jøn have identified a subculture of people who present themselves as vampires. Jon has noted that enthusiasts of the vampire subculture emulate traditional psychic vampires in that they describe 'prey[ing] upon life-force or 'pranic' energy'. 
Prominent figures in the subculture include Michelle Belanger, a self-described psychic vampire, who wrote a book titled The Psychic Vampire Codex: A Manual of Magick and Energy Work, published in 2004 by Weiser Books. Belanger details a vampiric approach to energy work which she believes psychic vampires can use to heal others, representing an attempt to disassociate the psychic vampire subculture from negative connotations of vampirism. Sexual vampires A related mythological creature is a sexual vampire, which is supposed to feed off sexual energy. Sexual vampires include succubi or incubi. See also Asura Huli Jing Hungry ghost Lifeforce (film) Doctor Sleep (2019 film) Obake Odic force Pranayama Rakshasa What We Do in the Shadows (TV series) References Further reading External links Energy Vampires(Band): Energy Vampires Llewellyn (Bookstore): Psychic Vampires Article on Identifying Energy Vampires In Our Life By Divya Toshniwal Church of Satan Magical terminology Psychics Vampires Vampirism Vitalism
Psychic vampire
Biology
940
77,366,328
https://en.wikipedia.org/wiki/Infostealer
In computing, infostealers are a form of malicious software created to breach computer systems to steal sensitive information, including login details, financial information, and other personally identifiable information. The stolen information is then packaged, sent to the attacker, and often traded on illicit markets to other cybercriminals. Infostealers usually consist of a bot framework that allows the attacker to configure the behaviour of the infostealer, and a management panel that takes the form of a server to which the infostealer sends data. Infostealers infiltrate devices through phishing attacks, infected websites, and malicious software downloads, including video game mods and pirated software, among other methods. Once downloaded, the infostealers gather sensitive information about the user's device and send the data back to the server. Infostealers are usually distributed under the malware-as-a-service (MaaS) model, where developers allow other parties to use their infostealers for subscription fees. The functionality of infostealers can vary, with some focused on data harvesting, while others offer remote access that allows additional malware to be executed. Stolen data may then be used in spearphishing campaigns for other cyber-attacks, such as the deployment of ransomware. The number of stolen data logs being sold on the Russian Market, a cybercrime forum, has increased significantly since 2022. According to Kaspersky's research in mid-2023, 24% of malware offered as a service are infostealers. Overview In cybercrime, credential theft is a well-known mechanism through which malicious individuals steal personal information such as usernames, passwords, or cookies to illegitimately gain access to a victim's online accounts and computer. This crime typically unfolds in four stages, with the first being the acquisition of the stolen credentials. Infostealers are a specific type of malware that are designed for this initial stage. They usually consist of two distinct parts: the bot framework and a command and control server, often known as the management panel or interface. The bot framework includes a builder that allows the attacker to configure how the infostealer will behave on a user's computer and what kind of information it will steal. The management interface, usually written in traditional web development languages like PHP, HTML, and JavaScript, is typically hosted on commercial cloud infrastructure. The management interface primarily functions as a web server to which the infostealer sends confidential information. The interface also provides the attacker with information about the status of deployed infostealers and allows the attacker to control their behaviour. Distribution and use Infostealers are commonly distributed through the malware-as-a-service (MaaS) model, enabling individuals with varying technical knowledge to deploy these malicious programs. Under this model, three distinct groups typically emerge: developers, malware service providers, and operators. Developers, the most technically skilled, write the infostealer code. Malware service providers purchase licenses for the malware and offer it as a service to other cybercriminals. The operators, who can be developers or service providers themselves depending on their skill level, use these services to perform credential theft. Once the malware is purchased, it is spread to target victim machines using various social engineering techniques.
Phishing, including spear phishing campaigns that target specific victims, is commonly employed. Infostealers are commonly embedded in email attachments or malicious links that link to websites performing drive-by downloads. Additionally, they are often bundled with compromised or malicious browser extensions, infected game cheating packages, and pirated or otherwise compromised software. After the stealer is downloaded and run by a victim, it communicates with the attacker's command-and-control servers, allowing the attacker to steal information from the user's computer. While most infostealers primarily target credentials, some also enable attackers to remotely introduce and execute other malware, such as ransomware, on the victim's computer. Credentials obtained from infostealer attacks are often distributed as logs or credential dumps, typically shared on paste sites like Pastebin, where cybercriminals may offer free samples, or sold in bulk on underground hacking forums, often for amounts as low as $10. Buyers of these stolen credentials usually log in to assess their value, particularly looking for credentials associated with financial services or linked to other credentials with similar patterns, as these are especially valuable. High-value credentials are often sold to other cybercriminals at higher prices, who may then use them for various crimes, including financial fraud, integrating the credentials into zombie networks and reputation-boosting operations, or as springboards for more sophisticated attacks such as scamming businesses, distributing ransomware, or conducting state-sponsored espionage. Additionally, some cybercriminals use stolen credentials for social engineering attacks, impersonating the original owner to claim they have been a victim of a crime and soliciting money from the victim's contacts. Many buyers of these stolen credentials take precautions to maintain access for longer periods, such as changing passwords and using Tor networks to obscure their locations, which helps avoid detection by services that might otherwise identify and shut down the stolen credentials. Features An infostealer's primary function is to exfiltrate sensitive information about the victim to an attacker's command-and-control servers. The exact type of data that is exfiltrated will depend on the data-stealing features enabled by the operator and the specific variant of infostealer used. Most infostealers, however, do contain functionality to harvest a variety of information about the host operating system, as well as system settings and user profiles. Some more advanced infostealers include the capability to introduce secondary malware like remote access trojans and ransomware. In 2009, researchers at the Symantec Rapid Response team released a technical analysis of the Zeus infostealer, one of the first infostealers to be created. They found that the malware automatically exfiltrated all data stored in a computer's protected storage service (which was usually used by Internet Explorer to store passwords) and tries to capture any passwords sent to the computer using the POP3 and FTP protocols. In addition to this, the malware allowed the researchers to define a set of configuration files to specify a list of web injections to perform on a user's computer as well as another configuration file that controlled which web URLs the malware would monitor. 
Another configuration also allowed the researchers to define a set of rules that could be used to test if additional HTTP requests contained passwords or other sensitive information. More recently, in 2020, researchers at the Eindhoven University of Technology conducted a study analysing the information available for sale on the underground credential black market impaas.ru. As part of their study, they were able to replicate the workings of a version of the AZORult infostealer. Amongst the functions discovered by the researchers was a builder, which allowed operators to define what kind of data would be stolen. The researchers also found evidence of plugins that stole a user's browsing history, a customisable regex-based mechanism that allows the attacker to retrieve arbitrary files from a user's computer, a browser password extractor module, a module to extract Skype history, and a module to find and exfiltrate cryptocurrency wallet files. The researchers also found that the data most frequently stolen using the AZORult infostealers and sold on the black market could be broadly categorised into three main types: fingerprints, cookies, and resources. Fingerprints consisted of identifiers that were constructed by probing a variety of features made available by the browser. These were not tied to a specific service but were considered to be an accurate unique identifier for a user's browser. Cookies allowed buyers to hijack a victim's browser session by injecting it into a browser environment. Resources refer to browser-related files found on a user's operating system, such as password storage files. Economics and impact Setting up an infostealer operation has become increasingly accessible due to the proliferation of stealer-as-a-service enterprises, significantly lowering financial and technical barriers. This makes it feasible for even less sophisticated cybercriminals to engage in such activities. In a 2023 paper, researchers from the Georgia Institute of Technology noted that the hosted stealer market is extremely mature and highly competitive, with some operators offering to set up infostealers for as low as $12. For the service providers running these stealer operations, the researchers estimated that a typical infostealer operator incurs only a few one-off costs: the license to use the infostealer, which is obtained from a malware developer, and the registration fee for the domain used to host the command-and-control server. The primary ongoing cost incurred by these operators is the cost associated with hosting the servers. Based on these calculations, the researchers concluded that the stealer-as-a-service business model is extremely profitable, with many operators achieving profit margins of over 90% with revenues in the high thousands. Due to their extreme profitability and accessibility, the number of cybersecurity incidents that involve infostealers has risen. The COVID-19 post-pandemic shift towards remote and hybrid work, where companies give employees access to enterprise services on their home machines, has also been cited as one of the reasons behind the increase in the effectiveness of infostealers. In 2023, research by Secureworks discovered that the number of infostealer logs (data exfiltrated from each computer) being sold on the Russian Market, the biggest underground market, increased from 2 million to 5 million logs from June 2022 to February 2023. According to Kaspersky's research in mid-2023, 24% of malware offered as a service are infostealers.
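As a rough illustration of the cost structure described above (one-off license and domain registration plus ongoing hosting) set against operator revenue, the following sketch computes a profit margin. Every figure in it is a hypothetical placeholder; the cited research reports only that margins can exceed 90%, not these specific numbers.

```python
# Toy profit-margin calculation for a hosted stealer-as-a-service operation.
# All figures are hypothetical placeholders used only to illustrate the cost
# structure named in the text (license + domain as one-off costs, hosting as ongoing).
one_off_costs_usd = {
    "malware license": 200.0,      # assumed
    "domain registration": 12.0,   # assumed
}
monthly_hosting_usd = 30.0         # assumed ongoing cost
months = 6
monthly_revenue_usd = 1500.0       # assumed revenue over the same period

total_cost = sum(one_off_costs_usd.values()) + monthly_hosting_usd * months
total_revenue = monthly_revenue_usd * months
margin = (total_revenue - total_cost) / total_revenue

print(f"total cost ${total_cost:,.0f}, total revenue ${total_revenue:,.0f}, margin {margin:.0%}")
```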
References Citations Sources Security breaches Cybercrime Types of malware Malware by type Social engineering (security) Cyberwarfare
Infostealer
Technology
2,105
34,628,742
https://en.wikipedia.org/wiki/SCRIPDB
SCRIPDB is a database of chemical structures associated to patents. References External links http://dcv.uhnres.utoronto.ca/SCRIPDB. Biological databases Patent law Public records
SCRIPDB
Biology
45
24,978,923
https://en.wikipedia.org/wiki/Gene%20theft
In bioethics and law, gene theft or DNA theft is the act of acquiring the genetic material of another individual, usually from public places, without his or her permission. The DNA may be harvested from a wide variety of common objects such as discarded cigarettes, used condoms, coffee cups, and hairbrushes. In addition, a variety of people can be interested in collecting someone's genetic material. This includes the police, political parties, historians, professional sports teams, personal enemies, etc. DNA contains a substantial amount of information about someone and it can be used for many purposes such as establishing paternity, proving genealogical connections or even unmasking private medical conditions. Criminal law Currently, there are not many laws pertaining to the punishment that one may receive for obtaining the genetic material of others without their consent. However, due to the Health Insurance Portability and Accountability Act (HIPAA), one's genetic material cannot be given to his or her school or employer as the genome is a part of one's personal health data, but law enforcement can have access to it without consent. This only occurs when a person is either a victim or a suspect of a criminal investigation. Great Britain criminalized the acquisition of DNA without consent in 2006 at the urging of the Human Genetics Commission. Australia's legislature debated a two-year jail sentence for such theft in 2008. In the United States, eight states currently have criminal or civil prohibitions on such non-consensual appropriation of genetic materials. In Alaska, Florida, New Jersey, New York and Oregon, individuals caught swiping DNA face fines or short jail sentences. Lawsuits against "gene snatchers" are permitted in Minnesota, New Hampshire and New Mexico. In jurisdictions where such non-consensual taking of DNA is illegal, exceptions are generally made for law enforcement. Ethics Many bioethicists believe that such conduct is an unethical invasion of human privacy. Professor Jacob Appel has warned that criminals may acquire the capability to copy DNA of innocent people and deposit it at crime scenes, "endangering the blameless and undermining a key tool of forensic investigation." In addition, there have been ethical concerns about law enforcement using the DNA of the family members of criminals to catch them. This concept was used in the Golden State Killer case in California; the killer was connected to at least 50 rapes and 12 murders between 1976 and 1986. After the case went cold, investigators used a website that compared the genetic information of those who had uploaded their information and found a relative of the killer. However, others defend the appropriation of genetic material on the grounds that doing so may further human knowledge in productive ways. One particularly controversial case which received widespread attention in the media was that of Derrell Teat, a wastewater coordinator, who sought to acquire without consent the DNA of a man who was allegedly the last male descendant of her great-great-great-grandfather's brother. Another prominent case was a United States paternity suit involving film producer Steve Bing and billionaire investor Kirk Kerkorian. See also Genetic testing Genetic privacy Bioethics Forensic testing References Genetics Crime Identity theft
Gene theft
Biology
641
4,379,833
https://en.wikipedia.org/wiki/Chromate%20conversion%20coating
Chromate conversion coating or alodine coating is a type of conversion coating used to passivate steel, aluminium, zinc, cadmium, copper, silver, titanium, magnesium, and tin alloys. The coating serves as a corrosion inhibitor, as a primer to improve the adherence of paints and adhesives, as a decorative finish, or to preserve electrical conductivity. It also provides some resistance to abrasion and light chemical attack (such as dirty fingers) on soft metals. Chromate conversion coatings are commonly applied to items such as screws, hardware and tools. They usually impart a distinctively iridescent, greenish-yellow color to otherwise white or gray metals. The coating has a complex composition including chromium salts, and a complex structure. The process is sometimes called alodine coating, a term used specifically in reference to the trademarked Alodine process of Henkel Surface Technologies. Process Chromate conversion coatings are usually applied by immersing the part in a chemical bath until a film of the desired thickness has formed, removing the part, rinsing it and letting it dry. The process is usually carried out at room temperature, with a few minutes of immersion. Alternatively, the solution can be sprayed, or the part can be briefly dipped in the bath, in which case the coating reactions take place while the part is still wet. The coating is soft and gelatinous when first applied, but hardens and becomes hydrophobic as it dries, typically in 24 hours or less. Curing can be accelerated by mild heating, but excessive temperature will gradually damage the coating on steel. Bath composition The composition of the bath varies greatly, depending on the material to be coated and the desired effect. Most bath formulae are proprietary. The formulations typically contain hexavalent chromium compounds, such as chromates and dichromates. The widely used Cronak process for zinc and cadmium consists of 5–10 seconds of immersion in a room-temperature solution consisting of 182 g/L sodium dichromate (Na2Cr2O7 · 2H2O) and 6 mL/L concentrated sulfuric acid. Chemistry The chromate coating process starts with a redox reaction between the hexavalent chromium and the metal. In the case of aluminum, for example, the dichromate anion oxidizes the metal: Cr2O7^2− + 2 Al + 14 H^+ → 2 Cr^3+ + 2 Al^3+ + 7 H2O. The resulting trivalent cations react with hydroxide ions in water to form the corresponding hydroxides, or a solid solution of both hydroxides: Cr^3+ + 3 OH^− → Cr(OH)3 and Al^3+ + 3 OH^− → Al(OH)3. Under appropriate conditions, these hydroxides condense with elimination of water to form a colloidal sol of very small particles, that are deposited as a hydrogel on the metal's surface. The gel consists of a three-dimensional solid skeleton of oxides and hydroxides, with nanoscale elements and voids, enclosing a liquid phase. The structure of the gel depends on metal ion concentration, pH, and other ingredients of the solution, such as chelating agents and counterions. The gel film contracts as it dries, compressing the skeleton and causing it to stiffen. Eventually shrinkage stops, and further drying leaves the pores open but dry, turning the film into a xerogel. In the case of aluminum, the dry coating consists mostly of chromium(III) oxide (Cr2O3) or a mixed chromium(III)/(VI) oxide, with only small amounts of other constituents. Typically the process variables are adjusted to give a dry coating that is 200-300 nm thick. The coating contracts as it dries, which causes it to crack into many microscopic scales, described as a "dried mud" pattern.
The trapped solution keeps reacting with any metal that gets exposed in the cracks, so that the final coating is continuous and covers the entire surface. Although the main reactions turn most of the chromium(VI) anions (chromates and dichromates) in the deposited gel into insoluble chromium(III) compounds, a small quantity of them remains un-reacted in the dried-out coating. For example, in the coating formed on aluminum by a commercial bath, about 23% of the chromium atoms were found to be hexavalent, except in a region close to the metal. These chromium(VI) residues can migrate when the coating is wetted, and are believed to play a role in preventing corrosion in the finished part, specifically by restoring the coating in any new microscopic cracks where corrosion could start. Substrates Zinc Chromating is often performed on galvanized parts to make them more durable. The chromate coating acts as paint does, protecting the zinc from white corrosion, thus making the part considerably more durable, depending on the chromate layer's thickness. The protective effect of chromate coatings on zinc is indicated by color, progressing from clear/blue to yellow, gold, olive drab and black. Darker coatings generally provide more corrosion resistance. The coating color can also be changed with dyes, so color is not a complete indicator of the process used. ISO 4520 specifies chromate conversion coatings on electroplated zinc and cadmium coatings. ASTM B633 Type II and III specify zinc plating plus chromate conversion on iron and steel parts. Recent revisions of ASTM B633 defer to ASTM F1941 for zinc plating of mechanical fasteners, such as bolts and nuts. The current revision of ASTM B633 is from 2019 (superseding the 2015 revision); it raised the tensile-strength thresholds at which hydrogen embrittlement must be confronted and addressed embrittlement concerns in a new appendix. Aluminium and its alloys For aluminum, the chromate conversion bath can be simply a solution of chromic acid. The process is rapid (1–5 min), requires a single ambient temperature process tank and associated rinse, and is relatively trouble free. As of 1995, Henkel's Alodine 1200s commercial formula for aluminum consisted of 50–60% chromic anhydride (CrO3), 20–30% potassium tetrafluoroborate (KBF4), 10–15% potassium ferricyanide (K3Fe(CN)6), 5–10% potassium hexafluorozirconate (K2ZrF6), and 5–10% sodium fluoride (NaF) by weight. The formula was meant to be dissolved in water at a concentration of 9.0 g/L, giving a bath with pH = 1.5. It yielded a light gold color after 1 min, and a golden-brown film after 3 min. The average thickness ranged between 200 and 1000 nm. Iridite 14-2 is a chromate conversion bath for aluminum. Its ingredients include chromium(VI) oxide, barium nitrate, sodium silicofluoride and ferricyanide. In the aluminum industry, the process is also called chemical film or yellow iridite. Commercial trademarked names include Iridite and Bonderite (formerly known as Alodine, or Alocrom in the UK). The main standards for chromate conversion coating of aluminium are MIL-DTL-5541 in the US, and Def Stan 03/18 in the UK. Magnesium Alodine may also refer to chromate-coating magnesium alloys. Steel Steel and iron cannot be chromated directly. Steel plated with zinc or zinc-aluminum alloy may be chromated. Chromating zinc-plated steel does not enhance zinc's cathodic protection of the underlying steel from rust. Phosphate coatings Chromate conversion coatings can be applied over the phosphate conversion coatings often used on ferrous substrates. 
The process is used to enhance the phosphate coating. Safety Hexavalent chromium compounds have been the topic of intense workplace and public health concern for their carcinogenicity, and have become highly regulated. In particular, concerns about the exposure of workers to chromates and dichromates while handling the immersion bath and the wet parts, as well as the small residues of those anions that remain trapped in the coating, have motivated the development of alternative commercial bath formulations that do not contain hexavalent chromium; for instance, by replacing the chromates with trivalent chromium salts, which are considerably less toxic and provide corrosion resistance as good as or better than traditional hexavalent chromate conversion. In Europe, the RoHS Directive and the REACH Regulation encourage elimination of hexavalent chromium in a broad range of industrial applications and products, including chromate conversion coating processes. References External links Yellow and green chromating chemistry on aluminium Coatings Corrosion prevention Chromium
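For readers who want to turn the bath recipes quoted above into make-up quantities, the short sketch below scales the published Cronak figures (182 g/L sodium dichromate dihydrate, 6 mL/L concentrated sulfuric acid) and the Alodine 1200s figure (9.0 g/L of dry formula) to a tank volume. It only illustrates the arithmetic; the tank volume and the function names are invented for the example, and real baths are prepared to the supplier's data sheet.

```python
# Illustrative make-up arithmetic for the bath recipes quoted in the text.
# The 250 L tank volume is an example value, not part of the source.

def cronak_makeup(volume_l: float) -> dict:
    """Quantities for the Cronak bath: 182 g/L Na2Cr2O7·2H2O, 6 mL/L conc. H2SO4."""
    return {
        "sodium dichromate dihydrate (g)": 182 * volume_l,
        "concentrated sulfuric acid (mL)": 6 * volume_l,
    }

def alodine_1200s_makeup(volume_l: float) -> float:
    """Mass of dry Alodine 1200s formula (g) for a bath made up at 9.0 g/L."""
    return 9.0 * volume_l

if __name__ == "__main__":
    tank = 250.0  # litres, example only
    print(cronak_makeup(tank))         # {'sodium dichromate dihydrate (g)': 45500.0, ...}
    print(alodine_1200s_makeup(tank))  # 2250.0 g of dry formula
```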
Chromate conversion coating
Chemistry
1,783
76,014,273
https://en.wikipedia.org/wiki/Ferdinand%20Giese
Ferdinand Giese (Johann Emanuel Ferdinand Giese; 13 January 1781 – 22 May 1821) was a Baltic German pharmacologist. From 1817 to 1818 he was the rector of Tartu University. He graduated from Erfurt University. From 1804 to 1814 he worked at Kharkiv University, and from 1814 he taught at Tartu University. References 1781 births 1821 deaths Academic staff of the University of Tartu Rectors of the University of Tartu Pharmacologists Baltic-German people from the Russian Empire Scientists from the Russian Empire
Ferdinand Giese
Chemistry
107
73,170,392
https://en.wikipedia.org/wiki/Anne-Christine%20Hladky
Anne-Christine Hladky-Hennion (born 1965) is a French researcher in acoustic metamaterials. She is a director of research for the French National Centre for Scientific Research (CNRS), and scientific deputy director of the CNRS (INSIS). Education and career Hladky is originally from Lille, where she was born in 1965. After earning a diploma in 1987 from the Institut supérieur de l'électronique et du numérique in Lille, she continued her education at the Lille University of Science and Technology, where she earned a doctorate in 1990, in materials science. Her doctoral dissertation, Application de la méthode des éléments finis à la modélisation de structures périodiques utilisées en acoustique, was supervised by Jean-Noël Decarpigny. She joined CNRS in 1992, and became a director of research in 2015. Recognition Hladky was the 1990 winner of the Young Researcher Prize of the French Acoustical Society. In 2018 she received the CNRS Silver Medal. References 1965 births Living people Scientists from Lille French materials scientists Women materials scientists and engineers Metamaterials scientists Research directors of the French National Centre for Scientific Research Acousticians
Anne-Christine Hladky
Materials_science,Technology
246
36,074,045
https://en.wikipedia.org/wiki/Alan%20Turing%20Centenary%20Conference
The Alan Turing Centenary Conference was an academic conference celebrating the life and research of Alan Turing by bringing together distinguished scientists to understand and analyse the history and development of Computer Science and Artificial intelligence. The conference was organised by Andrei Voronkov and hosted by the School of Computer Science, University of Manchester where Turing worked from 1948 until 1954. It ran from June 22 to June 25, 2012 as part of Alan Turing Year in Manchester Town Hall. Keynote speakers Several of the keynote speakers for the conference were distinguished Turing Award winners including: Rodney Brooks, Massachusetts Institute of Technology Fred Brooks, University of North Carolina Turing Award winner Vint Cerf, Google, Turing Award winner Edmund M. Clarke, Carnegie Mellon University, Turing Award winner Jack Copeland, University of Canterbury George Ellis, University of Cape Town, Templeton Prize winner David Ferrucci, IBM TJ Watson Research Center Principal Investigator of the Watson/Jeopardy! project Tony Hoare, Microsoft Research, Turing Award winner Garry Kasparov, Kasparov Chess Foundation Samuel Klein, Trustee of the Wikimedia Foundation and a Director of the One Laptop per Child Foundation. Donald Knuth, Stanford University, Turing Award winner Yuri Matiyasevich, Institute of Mathematics, St. Petersburgh Hans Meinhardt, Max Planck Institute for Developmental Biology Roger Penrose, University of Oxford, Wolf Prize winner Michael O. Rabin, Harvard University, Turing Award winner Adi Shamir, Weizmann Institute of Science, Turing Award winner Leslie Valiant, Harvard University, Turing Award winner Manuela M. Veloso, Carnegie Mellon University Andrew Yao, Tsinghua University, Turing Award winner Panelists There were a wide range of panels during the conference chaired by: Samson Abramsky, University of Oxford Ronald J. Brachman, Yahoo! Labs Martin Davis, New York University Steve Furber, University of Manchester Carole Goble, University of Manchester Pat Hayes, Florida Institute for Human and Machine Cognition Bertrand Meyer, ETH Zurich Moshe Y. Vardi, Rice University Sponsors The conference was sponsored by the Kurt Gödel Society, the John Templeton Foundation, the Artificial Intelligence (journal), Google, the Office of Naval Research, Microsoft and IOS Press. References 2012 in England 2012 conferences Computer science conferences Department of Computer Science, University of Manchester Alan Turing
Alan Turing Centenary Conference
Technology
460
49,253,469
https://en.wikipedia.org/wiki/Holmium%E2%80%93magnesium%E2%80%93zinc%20quasicrystal
A holmium–magnesium–zinc (Ho–Mg–Zn) quasicrystal is a quasicrystal made of an alloy of the three metals holmium, magnesium and zinc that has the shape of a regular dodecahedron, a Platonic solid with 12 five-sided faces. Unlike the similar pyritohedron shape of some cubic-system crystals such as pyrite, this quasicrystal has faces that are true regular pentagons. The crystal is part of the R–Mg–Zn family of crystals, where R=Y, Gd, Tb, Dy, Ho or Er. They were first discovered in 1994. These form quasicrystals in the stoichiometry around . Magnetically, they form a spin glass at cryogenic temperatures. While the experimental discovery of quasicrystals dates back to the 1980s, the relatively large, single grain nature of some Ho–Mg–Zn quasicrystals has made them a popular way to illustrate the concept. See also Complex metallic alloys References Quasicrystals Tessellation Magnesium alloys Zinc alloys Rare earth alloys Holmium
Holmium–magnesium–zinc quasicrystal
Physics,Chemistry,Materials_science,Mathematics
232
14,161,069
https://en.wikipedia.org/wiki/Gallium%20manganese%20arsenide
Gallium manganese arsenide, chemical formula (Ga,Mn)As, is a magnetic semiconductor. It is based on the world's second most commonly used semiconductor, gallium arsenide (chemical formula GaAs), and is readily compatible with existing semiconductor technologies. Differently from other dilute magnetic semiconductors, such as the majority of those based on II-VI semiconductors, it is not paramagnetic but ferromagnetic, and hence exhibits hysteretic magnetization behavior. This memory effect is of importance for the creation of persistent devices. In (Ga,Mn)As, the manganese atoms provide a magnetic moment, and each also acts as an acceptor, making it a p-type material. The presence of carriers allows the material to be used for spin-polarized currents. In contrast, many other ferromagnetic magnetic semiconductors are strongly insulating and so do not possess free carriers. (Ga,Mn)As is therefore a candidate material for spintronic devices, but it is likely to remain only a testbed for basic research as its Curie temperature could only be raised up to approximately 200 K. Growth Like other magnetic semiconductors, (Ga,Mn)As is formed by doping a standard semiconductor with magnetic elements. This is done using the growth technique molecular beam epitaxy, whereby crystal structures can be grown with atom layer precision. In (Ga,Mn)As the manganese atoms substitute onto gallium sites in the GaAs crystal and provide a magnetic moment. Because manganese has a low solubility in GaAs, incorporating a sufficiently high concentration for ferromagnetism to be achieved proves challenging. In standard molecular beam epitaxy growth, to ensure that a good structural quality is obtained, the temperature the substrate is heated to, known as the growth temperature, is normally high, typically ~600 °C. However, if a large flux of manganese is used in these conditions, instead of being incorporated, segregation occurs where the manganese accumulates on the surface and forms complexes with elemental arsenic atoms. This problem was overcome using the technique of low-temperature molecular beam epitaxy. It was found, first in (In,Mn)As and later also for (Ga,Mn)As, that by utilising non-equilibrium crystal growth techniques larger dopant concentrations could be successfully incorporated. At lower temperatures, around 250 °C, there is insufficient thermal energy for surface segregation to occur but still sufficient for a good quality single crystal alloy to form. In addition to the substitutional incorporation of manganese, low-temperature molecular beam epitaxy also causes the inclusion of other impurities. The two other common impurities are interstitial manganese and arsenic antisites. The former is where the manganese atom sits between the other atoms in the zinc-blende lattice structure and the latter is where an arsenic atom occupies a gallium site. Both impurities act as double donors, removing the holes provided by the substitutional manganese, and as such they are known as compensating defects. The interstitial manganese also bonds antiferromagnetically to substitutional manganese, removing the magnetic moment. Both these defects are detrimental to the ferromagnetic properties of (Ga,Mn)As, and so are undesired. The temperature below which the transition from paramagnetism to ferromagnetism occurs is known as the Curie temperature, TC. Theoretical predictions based on the Zener model suggest that the Curie temperature scales with the quantity of manganese, so TC above 300 K is possible if manganese doping levels as high as 10% can be achieved. 
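The scaling argument in the preceding sentence can be illustrated with a back-of-the-envelope extrapolation. The sketch below assumes, purely for illustration, that TC grows linearly with the manganese fraction and calibrates that line to a single reference point; the reference values are assumptions chosen so that 10% doping lands near 300 K, and the full mean-field Zener model also depends on the hole concentration, which is ignored here.

```python
# Toy extrapolation of the "TC scales with Mn content" statement.
# Assumes a purely linear dependence calibrated to one assumed reference
# point; the real mean-field Zener model also depends on hole density.

def curie_temperature(x_mn: float, x_ref: float = 0.05, tc_ref: float = 150.0) -> float:
    """Linear estimate of TC (kelvin) for Mn fraction x_mn.

    x_ref and tc_ref are assumed calibration values (5% Mn -> 150 K),
    chosen only so that 10% Mn extrapolates to roughly 300 K.
    """
    return tc_ref * x_mn / x_ref

for x in (0.02, 0.05, 0.08, 0.10):
    print(f"x = {x:.2f}  ->  TC ~ {curie_temperature(x):.0f} K")
# x = 0.10 gives about 300 K, the often-quoted room-temperature target.
```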
After its discovery by Ohno et al., the highest reported Curie temperatures in (Ga,Mn)As rose from 60 K to 110 K. However, despite the predictions of room-temperature ferromagnetism, no improvements in TC were made for several years. As a result of this lack of progress, predictions started to be made that 110 K was a fundamental limit for (Ga,Mn)As. The self-compensating nature of the defects would limit the possible hole concentrations, preventing further gains in TC. The major breakthrough came from improvements in post-growth annealing. By using annealing temperatures comparable to the growth temperature it was possible to pass the 110 K barrier. These improvements have been attributed to the removal of the highly mobile interstitial manganese. Currently, the highest reported values of TC in (Ga,Mn)As are around 173 K, still well below the much sought room temperature. As a result, measurements on this material must be done at cryogenic temperatures, currently precluding any application outside of the laboratory. Naturally, considerable effort is being spent in the search for alternative magnetic semiconductors that do not share this limitation. In addition to this, as molecular beam epitaxy techniques and equipment are refined and improved it is hoped that greater control over growth conditions will allow further incremental advances in the Curie temperature of (Ga,Mn)As. Properties Although room-temperature ferromagnetism has not yet been achieved, magnetic semiconductor materials such as (Ga,Mn)As have shown considerable success. Thanks to the rich interplay of physics inherent to magnetic semiconductors, a variety of novel phenomena and device structures have been demonstrated. It is therefore instructive to make a critical review of these main developments. A key result in magnetic semiconductor technology is gateable ferromagnetism, where an electric field is used to control the ferromagnetic properties. This was achieved by Ohno et al. using an insulating-gate field-effect transistor with (Ga,Mn)As as the magnetic channel. The magnetic properties were inferred from magnetization-dependent Hall measurements of the channel. Using the gate action to either deplete or accumulate holes in the channel it was possible to change the characteristic of the Hall response to be either that of a paramagnet or of a ferromagnet. When the temperature of the sample was close to its TC it was possible to turn the ferromagnetism on or off by applying a gate voltage which could change the TC by ±1 K. A similar transistor device was used to provide further examples of gateable ferromagnetism. In this experiment the electric field was used to modify the coercive field at which magnetization reversal occurs. As a result of the dependence of the magnetic hysteresis on the gate bias, the electric field could be used to assist magnetization reversal or even demagnetize the ferromagnetic material. The combining of magnetic and electronic functionality demonstrated by this experiment is one of the goals of spintronics and may be expected to have a great technological impact. Another important spintronic functionality that has been demonstrated in magnetic semiconductors is that of spin injection. This is where the high spin polarization inherent to these magnetic materials is used to transfer spin polarized carriers into a non-magnetic material. 
In this example, a fully epitaxial heterostructure was used where spin polarized holes were injected from a (Ga,Mn)As layer to an (In,Ga)As quantum well where they combine with unpolarized electrons from an n-type substrate. A polarization of 8% was measured in the resulting electroluminescence. This is again of potential technological interest as it shows the possibility that the spin states in non-magnetic semiconductors can be manipulated without the application of a magnetic field. (Ga,Mn)As offers an excellent material to study domain wall mechanics because the domains can have a size of the order of 100 μm. Several studies have been done in which lithographically defined lateral constrictions or other pinning points are used to manipulate domain walls. These experiments are crucial to understanding domain wall nucleation and propagation, which would be necessary for the creation of complex logic circuits based on domain wall mechanics. Many properties of domain walls are still not fully understood, and one particularly outstanding issue is the magnitude and sign of the resistance associated with current passing through domain walls. Both positive and negative values of domain wall resistance have been reported, leaving this an open area for future research. An example of a simple device that utilizes pinned domain walls consisted of a lithographically defined narrow island connected to the leads via a pair of nanoconstrictions. While the device operated in a diffusive regime, the constrictions would pin domain walls, resulting in a giant magnetoresistance signal. When the device operates in a tunnelling regime another magnetoresistance effect is observed, discussed below. A further property of domain walls is that of current-induced domain wall motion. This reversal is believed to occur as a result of the spin-transfer torque exerted by a spin polarized current. It was demonstrated using a lateral device containing three regions which had been patterned to have different coercive fields, allowing the easy formation of a domain wall. The central region was designed to have the lowest coercivity so that the application of current pulses could cause the orientation of the magnetization to be switched. This experiment showed that the current required to achieve this reversal in (Ga,Mn)As was two orders of magnitude lower than that of metal systems. It has also been demonstrated that current-induced magnetization reversal can occur across a vertical tunnel junction. Another novel spintronic effect, which was first observed in (Ga,Mn)As-based tunnel devices, is tunnelling anisotropic magnetoresistance. This effect arises from the intricate dependence of the tunnelling density of states on the magnetization, and can result in magnetoresistance of several orders of magnitude. This was demonstrated first in vertical tunnelling structures and then later in lateral devices. This has established tunnelling anisotropic magnetoresistance as a generic property of ferromagnetic tunnel structures. Similarly, the dependence of the single-electron charging energy on the magnetization has resulted in the observation of another dramatic magnetoresistance effect in a (Ga,Mn)As device, the so-called Coulomb blockade anisotropic magnetoresistance. References Semiconductor materials Ferromagnetic materials Gallium compounds Arsenides Manganese(III) compounds Zincblende crystal structure
Gallium manganese arsenide
Physics,Chemistry
2,022
51,757,526
https://en.wikipedia.org/wiki/NGC%20258
NGC 258 is a lenticular galaxy located in the Andromeda constellation. It was discovered by George Stoner in 1848. References External links Andromeda (constellation) Lenticular galaxies Astronomical objects discovered in 1848 0258 002829
NGC 258
Astronomy
49
7,499
https://en.wikipedia.org/wiki/RDX
RDX (abbreviation of "Research Department eXplosive" or Royal Demolition eXplosive) or hexogen, among other names, is an organic compound with the formula (CH2N2O2)3. It is white, odorless, and tasteless, widely used as an explosive. Chemically, it is classified as a nitroamine alongside HMX, which is a more energetic explosive than TNT. It was used widely in World War II and remains common in military applications. RDX is often used in mixtures with other explosives and plasticizers or phlegmatizers (desensitizers); it is the explosive agent in C-4 plastic explosive and a key ingredient in Semtex. It is stable in storage and is considered one of the most energetic and brisant of the military high explosives, with a relative effectiveness factor of 1.60. Name RDX is also less commonly known as cyclonite, hexogen (particularly in Russian, French and German-influenced languages), T4, and, chemically, as cyclotrimethylene trinitramine. In the 1930s, the Royal Arsenal, Woolwich, started investigating cyclonite to use against German U-boats that were being built with thicker hulls. The goal was to develop an explosive more energetic than TNT. For security reasons, Britain termed cyclonite "Research Department Explosive" (R.D.X.). The term RDX appeared in the United States in 1946. The first public reference in the United Kingdom to the name RDX, or R.D.X., to use the official title, appeared in 1948; its authors were the managing chemist, ROF Bridgwater, the chemical research and development department, Woolwich, and the director of Royal Ordnance Factories, Explosives. Usage RDX was widely used during World War II, often in explosive mixtures with TNT such as Torpex, Composition B, Cyclotols, and H6. RDX was used in one of the first plastic explosives. The bouncing bomb depth charges used in the "Dambusters Raid" each contained of Torpex; The Tallboy and Grand Slam bombs designed by Barnes Wallis also used Torpex. RDX is believed to have been used in many bomb plots, including terrorist plots. RDX is the base for a number of common military explosives: Composition A: Granular explosive consisting of RDX and plasticizing wax, such as composition A-3 (91% RDX coated with 9% wax) and composition A-5 (98.5 to 99.1% RDX coated with 0.95 to 1.54% stearic acid). Composition B: Castable mixtures of 59.5% RDX and 39.4% TNT with 1% wax as desensitizer. Composition C: The original composition C was used in World War II, but there have been subsequent variations including C-2, C-3, and C-4. C-4 consists of RDX (91%); a plasticizer, dioctyl sebacate (5.3%); and a binder, which is usually polyisobutylene (2.1%); and oil (1.6%). 
Composition CH-6: 97.5% RDX, 1.5% calcium stearate, 0.5% polyisobutylene, and 0.5% graphite DBX (Depth Bomb Explosive): Castable mixture consisting of 21% RDX, 21% ammonium nitrate, 40% TNT, and 18% powdered aluminium, developed during World War II, it was to be used in underwater munitions as a substitute for Torpex employing only half the amount of then-scarce RDX, as the supply of RDX became more adequate, however, the mixture was shelved Cyclotol: Castable mixture of RDX (50–80%) with TNT (20–50%) designated by the amount of RDX/TNT, such as Cyclotol 70/30 HBX: Castable mixtures of RDX, TNT, powdered aluminium, and D-2 wax with calcium chloride H-6: Castable mixture of RDX, TNT, powdered aluminum, and paraffin wax (used as a phlegmatizing agent) PBX: RDX is also used as a major component of many polymer-bonded explosives (PBX); RDX-based PBXs typically consist of RDX and at least thirteen different polymer/co-polymer binders. Examples of RDX-based PBX formulations include, but are not limited to: PBX-9007, PBX-9010, PBX-9205, PBX-9407, PBX-9604, PBXN-106, PBXN-3, PBXN-6, PBXN-10, PBXN-201, PBX-0280, PBX Type I, PBXC-116, PBXAF-108, etc. Semtex (trade name): Plastic demolition explosive containing RDX and PETN as major energetic components Torpex: 42% RDX, 40% TNT, and 18% powdered aluminium; the mixture was designed during World War II and used mainly in underwater ordnance Outside military applications, RDX is also used in controlled demolition to raze structures. The demolition of the Jamestown Bridge in the U.S. state of Rhode Island was one instance where RDX shaped charges were used to remove the span. Synthesis RDX is classified by chemists as a hexahydro-1,3,5-triazine derivative. In laboratory settings (industrial routes are described below separately) it is obtained by treating hexamine with white fuming nitric acid. This nitrolysis reaction also produces methylene dinitrate, ammonium nitrate, and water as by-products. The overall reaction is: C6H12N4 + 10 HNO3 → C3H6N6O6 + 3 CH2(ONO2)2 + NH4NO3 + 3 H2O The conventional cheap nitration agent, called "mixed acid", cannot be used for RDX synthesis because concentrated sulfuric acid conventionally used to stimulate the nitronium ion formation decomposes hexamine into formaldehyde and ammonia. Modern syntheses employ hexahydro triacyl triazine as it avoids formation of HMX. History RDX was used by both sides in World War II. The US produced about per month during WWII and Germany about per month. RDX had the major advantages of possessing greater explosive force than TNT and required no additional raw materials for its manufacture. Thus, it was also extensively used in World War I Germany RDX was reported in 1898 by Georg Friedrich Henning (1863-1945), who obtained a German patent for its manufacture by nitrolysis of hexamine (hexamethylenetetramine) with concentrated nitric acid. In this patent, only the medical properties of RDX were mentioned. During WWI, Heinrich Brunswig (1865-1946) at the private military-industrial laboratory (Center for Scientific-Technical Research) in Neubabelsberg studied the compound more closely and in June 1916 filed two patent applications, one for its use in smokeless propellants and another for its use as an explosive, noting its excellent characteristics. The German military hadn't considered its adoption during the war due to the expense of production but started investigating its use in 1920, referring to it as hexogen. 
Research and development findings were not published further until Edmund von Herz, described as an Austrian and later a German citizen, rediscovered the explosive properties of RDX and applied for an Austrian patent in 1919, obtaining a British one in 1921 and an American one in 1922. All patents described the synthesis of the compound by nitrating hexamethylenetetramine. The British patent claims included the manufacture of RDX by nitration, its use with or without other explosives, its use as a bursting charge and as an initiator. The US patent claim was for the use of a hollow explosive device containing RDX and a detonator cap containing it. Herz was also the first to identify the cyclic nature of the molecule. In the 1930s, Germany developed improved production methods. During World War II, Germany used the code names W Salt, SH Salt, K-method, the E-method, and the KA-method. These names represented the identities of the developers of the various chemical routes to RDX. The W-method was developed by Wolfram in 1934 and gave RDX the code name "W-Salz". It used sulfamic acid, formaldehyde, and nitric acid. SH-Salz (SH salt) was from Schnurr, who developed a batch-process in 1937–38 based on nitrolysis of hexamine. The K-method, from Knöffler, involved addition of ammonium nitrate to the hexamine/nitric acid process. The E-method, developed by Ebele, proved to be identical to the Ross and Schiessler process described below. The KA-method, also developed by Knöffler, turned out to be identical to the Bachmann process described below. The explosive shells fired by the MK 108 cannon and the warhead of the R4M rocket, both used in Luftwaffe fighter aircraft as offensive armament, both used hexogen as their explosive base. UK In the United Kingdom (UK), RDX was manufactured from 1933 by the research department in a pilot plant at the Royal Arsenal in Woolwich, London, a larger pilot plant being built at the RGPF Waltham Abbey just outside London in 1939. In 1939 a twin-unit industrial-scale plant was designed to be installed at a new site, ROF Bridgwater, away from London and production of RDX started at Bridgwater on one unit in August 1941. The ROF Bridgwater plant brought in ammonia and methanol as raw materials: the methanol was converted to formaldehyde and some of the ammonia converted to nitric acid, which was concentrated for RDX production. The rest of the ammonia was reacted with formaldehyde to produce hexamine. The hexamine plant was supplied by Imperial Chemical Industries. It incorporated some features based on data obtained from the United States (US). RDX was produced by continually adding hexamine and concentrated nitric acid to a cooled mixture of hexamine and nitric acid in the nitrator. The RDX was purified and processed for its intended use; recovery and reuse of some methanol and nitric acid also was carried out. The hexamine-nitration and RDX purification plants were duplicated (i.e. twin-unit) to provide some insurance against loss of production due to fire, explosion, or air attack. The United Kingdom and British Empire were fighting without allies against Nazi Germany until the middle of 1941 and had to be self-sufficient. At that time (1941), the UK had the capacity to produce (160,000 lb) of RDX per week; both Canada, an allied country and self-governing dominion within the British Empire, and the US were looked upon to supply ammunition and explosives, including RDX. 
By 1942 the Royal Air Force's annual requirement was forecast to be of RDX, much of which came from North America (Canada and the US). Canada A different method of production to the Woolwich process was found and used in Canada, possibly at the McGill University department of chemistry. This was based on reacting paraformaldehyde and ammonium nitrate in acetic anhydride. A UK patent application was made by Robert Walter Schiessler (Pennsylvania State University) and James Hamilton Ross (McGill, Canada) in May 1942; the UK patent was issued in December 1947. Gilman states that the same method of production had been independently discovered by Ebele in Germany prior to Schiessler and Ross, but that this was not known by the Allies. Urbański provides details of five methods of production, and he refers to this method as the (German) E-method. UK, US, and Canadian production and development At the beginning of the 1940s, the major US explosive manufacturers, E. I. du Pont de Nemours & Company and Hercules, had several decades of experience of manufacturing trinitrotoluene (TNT) and had no wish to experiment with new explosives. US Army Ordnance held the same viewpoint and wanted to continue using TNT. RDX had been tested by Picatinny Arsenal in 1929, and it was regarded as too expensive and too sensitive. The Navy proposed to continue using ammonium picrate. In contrast, the National Defense Research Committee (NDRC), who had visited The Royal Arsenal, Woolwich, thought new explosives were necessary. James B. Conant, chairman of Division B, wished to involve academic research into this area. Conant therefore set up an experimental explosives research laboratory at the Bureau of Mines, Bruceton, Pennsylvania, using Office of Scientific Research and Development (OSRD) funding. Woolwich method In 1941, the UK's Tizard Mission visited the US Army and Navy departments and part of the information handed over included details of the "Woolwich" method of manufacture of RDX and its stabilisation by mixing it with beeswax. The UK was asking that the US and Canada, combined, supply (440,000 lb) of RDX per day. A decision was taken by William H. P. Blandy, chief of the Bureau of Ordnance, to adopt RDX for use in mines and torpedoes. Given the immediate need for RDX, the US Army Ordnance, at Blandy's request, built a plant that copied the equipment and process used at Woolwich. The result was the Wabash River Ordnance Works run by E. I. du Pont de Nemours & Company. At that time, this works had the largest nitric acid plant in the world. The Woolwich process was expensive: it needed of strong nitric acid for every pound of RDX. By early 1941, the NDRC was researching new processes. The Woolwich or direct nitration process has at least two serious disadvantages: (1) it used large amounts of nitric acid and (2) at least one-half of the formaldehyde is lost. One mole of hexamethylenetetramine could produce at most one mole of RDX. At least three laboratories with no previous explosive experience were instructed to develop better production methods for RDX; they were based at Cornell, Michigan, and Pennsylvania State universities. Werner Emmanuel Bachmann, from Michigan, successfully developed the "combination process" by combining the Ross and Schiessler process used in Canada (aka the German E-method) with direct nitration. The combination process required large quantities of acetic anhydride instead of nitric acid in the old British "Woolwich process". 
Ideally, the combination process could produce two moles of RDX from each mole of hexamethylenetetramine. The expanded production of RDX could not continue to rely on the use of natural beeswax to desensitize the explosive as in the original British composition (RDX/BWK-91/9). A substitute stabilizer based on petroleum was developed at the Bruceton Explosives Research Laboratory in Pennsylvania, with the resulting explosive designated Composition A-3. Bachmann process The National Defence Research Committee (NDRC) instructed three companies to develop pilot plants. They were the Western Cartridge Company, E. I. du Pont de Nemours & Company, and Tennessee Eastman Company, part of Eastman Kodak. At the Eastman Chemical Company (TEC), a leading manufacturer of acetic anhydride, Werner Emmanuel Bachmann developed a continuous-flow process for RDX utilizing an ammonium nitrate/nitric acid mixture as a nitrating agent in a medium of acetic acid and acetic anhydride. RDX was crucial to the war effort and the current batch-production process was too slow. In February 1942, TEC began producing small amounts of RDX at its Wexler Bend pilot plant, which led to the US government authorizing TEC to design and build Holston Ordnance Works (H.O.W.) in June 1942. By April 1943, RDX was being manufactured there. At the end of 1944, the Holston plant and the Wabash River Ordnance Works, which used the Woolwich process, were producing (50 million pounds) of Composition B per month. The Bachmann process yields both RDX and HMX, with the major product determined by the specific reaction conditions. Military compositions The United Kingdom's intention in World War II was to use "desensitised" RDX. In the original Woolwich process, RDX was phlegmatized with beeswax, but later paraffin wax was used, based on the work carried out at Bruceton. In the event the UK was unable to obtain sufficient RDX to meet its needs, some of the shortfall was met by substituting amatol, a mixture of ammonium nitrate and TNT. Karl Dönitz was reputed to have claimed that "an aircraft can no more kill a U-boat than a crow can kill a mole". Nonetheless, by May 1942 Wellington bombers began to deploy depth charges containing Torpex, a mixture of RDX, TNT, and aluminium, which had up to 50 percent more destructive power than TNT-filled depth charges. Considerable quantities of the RDX–TNT mixture were produced at the Holston Ordnance Works, with Tennessee Eastman developing an automated mixing and cooling process based around the use of stainless steel conveyor belts. Terrorism A Semtex bomb was used in the Pan Am Flight 103 (known also as the Lockerbie) bombing in 1988. A belt laden with of RDX explosives tucked under the dress of the assassin was used in the assassination of former Indian prime minister Rajiv Gandhi in 1991. The 1993 Bombay bombings used RDX placed into several vehicles as bombs. RDX was the main component used for the 2006 Mumbai train bombings and the Jaipur bombings in 2008. It also is believed to be the explosive used in the 2010 Moscow Metro bombings. Traces of RDX were found on pieces of wreckage from 1999 Russian apartment bombings and 2004 Russian aircraft bombings. FSB reports on the bombs used in the 1999 apartment bombings indicated that while RDX was not a part of the main charge, each bomb contained plastic explosive used as a booster charge. 
Ahmed Ressam, the al-Qaeda Millennium Bomber, used a small quantity of RDX as one of the components in the bomb that he prepared to detonate in Los Angeles International Airport on New Year's Eve 1999–2000; the bomb could have produced a blast forty times greater than that of a devastating car bomb. In July 2012, the Kenyan government arrested two Iranian nationals and charged them with illegal possession of of RDX. According to the Kenyan Police, the Iranians planned to use the RDX for "attacks on Israeli, US, UK and Saudi Arabian targets". RDX was used in the assassination of Lebanese Prime Minister Rafic Hariri on February 14, 2005. In the 2019 Pulwama attack in India, 250 kg of high-grade RDX was used by Jaish-e-Mohammed. The attack resulted in the deaths of 44 Central Reserve Police Force (CRPF) personnel as well as the attacker. Two letter bombs sent to journalists in Ecuador were disguised as USB flash drives which contained RDX that would detonate when plugged in. Stability RDX has a high nitrogen content and a high oxygen to carbon ratio, (O:C ratio), both of which indicate its explosive potential for formation of N2 and CO2. RDX undergoes a deflagration to detonation transition (DDT) in confinement and certain circumstances. The velocity of detonation of RDX at a density of 1.80 g/cm3 is 8750 m/s. It starts to decompose at approximately 170 °C and melts at 204 °C. At room temperature, it is very stable. It burns rather than explodes. It detonates only with a detonator, being unaffected even by small arms fire. This property makes it a useful military explosive. It is less sensitive than pentaerythritol tetranitrate (PETN). Under normal conditions, RDX has a Figure of Insensitivity of exactly 80 (RDX defines the reference point). RDX sublimes in vacuum, which restricts or prevents its use in some applications. RDX, when exploded in air, has about 1.5 times the explosive energy of TNT per unit weight and about 2.0 times per unit volume. RDX is insoluble in water, with solubility 0.05975 g/L at temperature of 25 °C. Toxicity The substance's toxicity has been studied for many years. RDX has caused convulsions (seizures) in military field personnel ingesting it, and in munition workers inhaling its dust during manufacture. At least one fatality was attributed to RDX toxicity in a European munitions manufacturing plant. During the Vietnam War, at least 40 American soldiers were hospitalized with composition C-4 (which is 91% RDX) intoxication from December 1968 to December 1969. C-4 was frequently used by soldiers as a fuel to heat food, and the food was generally mixed by the same knife that was used to cut C-4 into small pieces prior to burning. Soldiers were exposed to C-4 either due to inhaling the fumes, or due to ingestion, made possible by many small particles adhering to the knife having been deposited into the cooked food. The symptom complex involved nausea, vomiting, generalized seizures, and prolonged postictal confusion and amnesia; which indicated toxic encephalopathy. Oral toxicity of RDX depends on its physical form; in rats, the LD50 was found to be 100 mg/kg for finely powdered RDX, and 300 mg/kg for coarse, granular RDX. A case has been reported of a human child hospitalized in status epilepticus following the ingestion of 84.82 mg/kg dose of RDX (or 1.23 g for the patient's body weight of 14.5 kg) in the "plastic explosive" form. The substance has low to moderate toxicity with a possible human carcinogen classification. 
Further research is ongoing, however, and this classification may be revised by the United States Environmental Protection Agency (EPA). Remediating RDX-contaminated water supplies has proven to be successful. It is known to be a kidney toxin in humans and highly toxic to earthworms and plants, thus army testing ranges where RDX was used heavily may need to undergo environmental remediation. Concerns have been raised by research published in late 2017 indicating that the issue has not been addressed correctly by U.S. officials. Civilian use RDX has been used as a rodenticide because of its toxicity. Biodegradation RDX is degraded by the organisms in sewage sludge as well as the fungus Phanaerocheate chrysosporium. Both wild and transgenic plants can phytoremediate explosives from soil and water. One by-product of the environmental decomposition is R-salt. Alternatives FOX-7 is considered to be approximately a 1-to-1 replacement for RDX in almost all applications. Notes References Bibliography . See also . Urbański translation openlibrary.org, Macmillan, NY, 1964, . Further reading External links ADI Limited (Australia). Archive.org leads to Thales group products page that shows some military specifications. NLM Hazardous Substances Databank (US) – Cyclonite (RDX) CDC – NIOSH Pocket Guide to Chemical Hazards nla.gov.au, Army News (Darwin, NT), October 2, 1943, p 3. "Britain's New Explosive: Experts Killed in Terrific Blast", uses "Research Department formula X" nla.gov.au, The Courier-Mail (Brisbane, Qld.), September 27, 1943, p 1. Explosive chemicals Nitroamines Triazines Convulsants GABAA receptor negative allosteric modulators Rodenticides Rocket propellants
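As a quick sanity check on the overall nitrolysis equation quoted in the Synthesis section above (C6H12N4 + 10 HNO3 → C3H6N6O6 + 3 CH2(ONO2)2 + NH4NO3 + 3 H2O), the short script below tallies the atoms on each side. The element counts per molecule are entered by hand rather than parsed from formulas, so the sketch only demonstrates that the published equation balances.

```python
# Atom-balance check for the hexamine nitrolysis equation given in the
# Synthesis section.  Element counts per molecule are entered by hand.
from collections import Counter

hexamine            = Counter(C=6, H=12, N=4)        # C6H12N4
nitric_acid         = Counter(H=1, N=1, O=3)          # HNO3
rdx                 = Counter(C=3, H=6, N=6, O=6)     # C3H6N6O6
methylene_dinitrate = Counter(C=1, H=2, N=2, O=6)     # CH2(ONO2)2
ammonium_nitrate    = Counter(N=2, H=4, O=3)          # NH4NO3
water               = Counter(H=2, O=1)               # H2O

def side(*terms):
    """Sum element counts over (coefficient, species) pairs."""
    total = Counter()
    for coeff, species in terms:
        for element, n in species.items():
            total[element] += coeff * n
    return total

left  = side((1, hexamine), (10, nitric_acid))
right = side((1, rdx), (3, methylene_dinitrate), (1, ammonium_nitrate), (3, water))
print(left == right)   # True -- the equation is balanced
print(dict(left))      # {'C': 6, 'H': 22, 'N': 14, 'O': 30}
```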
RDX
Chemistry,Biology
5,107
59,630,526
https://en.wikipedia.org/wiki/Transition%20metal%20imido%20complex
In coordination chemistry and organometallic chemistry, a transition metal imido complex is a coordination compound containing an imido ligand. Imido ligands can be terminal or bridging ligands. The parent imido ligand has the formula NH, but most imido ligands have alkyl or aryl groups in place of H. The imido ligand is generally viewed as a dianion, akin to oxide. Structural classes Complexes with terminal imido ligands In some terminal imido complexes, the M=N−C angle is 180° but often the angle is decidedly bent. Complexes of the type M=NH are assumed to be intermediates in nitrogen fixation by synthetic catalysts. Complexes with bridging imido ligands Imido ligands are observed as doubly and, less often, triply bridging ligands. Synthesis From metal oxo complexes Commonly, metal imido complexes are generated from metal oxo complexes. They arise by condensation of amines with metal oxides and metal halides: LnMO + H2NR → LnMNR + H2O This approach is illustrated by the conversion of MoO2Cl2 to the diimido derivative MoCl2(NAr)2(dimethoxyethane), a precursor to the Schrock carbenes of the type Mo(OR)2(NAr)(CH-t-Bu). LnMCl2 + 3 H2NR → LnMNR + 2 RNH3Cl Aryl isocyanates react with metal oxides concomitant with decarboxylation: LnMO + O=C=NR → LnMNR + CO2 Alternative routes Some are generated from the reaction of low-valence metal complexes with azides: LnM + N3R → LnMNR + N2 A few imido complexes have been generated by the alkylation of metal nitride complexes: LnMN− + RX → LnMNR + X− Utility Metal imido complexes are mainly of academic interest. They are however assumed to be intermediates in ammoxidation catalysis, in the Sharpless oxyamination, and in nitrogen fixation. In nitrogen fixation A molybdenum imido complex appears in a common nitrogen fixation cycle: Mo•NH3 (ammine); with the oxidation state of molybdenum varying to accommodate the number of bonds from nitrogen. References Coordination chemistry
Transition metal imido complex
Chemistry
520
77,981,664
https://en.wikipedia.org/wiki/IRAS%2010565%2B2448
IRAS 10565+2448, also known as IRAS F10565+2448, is a galaxy merger located in the constellation of Leo. It lies at a distance of 625 million light years from Earth. It is classified as an ultraluminous infrared galaxy with an infrared luminosity of 1.2 × 10^12 L☉. It has a star formation rate of 131.8 M☉ yr^−1. IRAS 10565+2448 has a disturbed morphology. The large galaxy in the system shows dust lanes running through its main body, while the smaller galaxy (the westernmost object) has a curved tidal tail pulled downwards from the object. A third galaxy is possibly indicated by a secondary smaller nucleus located northwest of the primary nucleus in the large galaxy. It is also a late-stage merger, as the east and west nuclei of the system have a projected separation of 6.7 kiloparsecs. It has obscured X-ray emission with luminosities of L_SX = 1.21 × 10^41 erg s^−1 and L_HX = 1.6 × 10^41 erg s^−1. The source appears to be a Compton-thin obscurer with an absorption column density of 0.05 (+0.07/−0.04) × 10^22 cm^−2. The large galaxy in IRAS 10565+2448 is found to be active. It is categorized as an H II galaxy and a starburst galaxy. It is more luminous than its smaller companion galaxy. It contains a shallow and broad blueshifted HI absorption feature interpreted as a molecular outflow with a mass outflow rate of 140 M☉ yr^−1, suggesting it is driven by a radio jet. The large galaxy also shows detections of dust continuum, the J = 4−3 ground rotational transition of carbon monoxide (CO), and atomic carbon. It has a compact radio source appearing structured at 8.44 GHz, with a rotating CO ring found nearly face-on but lesser inside an outer disk beyond the galaxy's nuclear ring. The smaller galaxy contains a source of CO(1–0) emission. It has blue- and redshifted CO(1–0) wings with approximate sizes of 2.15 ± 0.32 kiloparsecs and 2.22 ± 0.30 kiloparsecs based on a circular Gaussian fit. However, the emission from the CO(1–0) narrow core is more condensed than its wings. There is also evidence of a plume of CO(1–0) stretching southwest at blueshifted velocities of −150 km s^−1 and the systemic velocity. References 10565+2448 Leo (constellation) Galaxy mergers Interacting galaxies 033083 Luminous infrared galaxies 1709876 Active galaxies
IRAS 10565+2448
Astronomy
566
319,153
https://en.wikipedia.org/wiki/Messier%2083
Messier 83 or M83, also known as the Southern Pinwheel Galaxy and NGC 5236, is a barred spiral galaxy approximately 15 million light-years away in the constellation borders of Hydra and Centaurus. Nicolas-Louis de Lacaille discovered M83 on 17 February 1752 at the Cape of Good Hope. Charles Messier added it to his catalogue of nebulous objects (now known as the Messier Catalogue) in March 1781. It is one of the closest and brightest barred spiral galaxies in the sky, and is visible with binoculars. It has an isophotal diameter at about . Its nickname of the Southern Pinwheel derives from its resemblance to the Pinwheel Galaxy (M101). Characteristics M83 is a massive, grand design spiral galaxy. Its morphological classification in the De Vaucouleurs system is SAB(s)c, where the 'SAB' denotes a weak-barred spiral, '(s)' indicates a pure spiral structure with no ring, and 'c' means the spiral arms are loosely wound. The peculiar dwarf galaxy NGC 5253 lies near M83, and the two likely interacted within the last billion years resulting in starburst activity in their central regions. The star formation rate in M83 is higher along the leading edge of the spiral arms, as predicted by density wave theory. NASA's Galaxy Evolution Explorer project on 16 April 2008 reported finding large numbers of new stars in the outer reaches of the galaxy— from the center. It had been thought that these areas lacked the materials necessary for star formation. Supernovae Six supernovae have been observed in M83: SN 1923A (type unknown, mag. 14) was discovered by Carl Otto Lampland on 5 May 1923. SN 1945B (type unknown, mag. 14.2) was discovered by William Liller on 13 July 1945. SN 1950B (type unknown, mag. 14.5) was discovered by Guillermo Haro on 15 March 1950. SN 1957D (type unknown, mag. 15) was discovered by H. S. Gates on 28 December 1957. SN 1968L (type II-P, mag. 11.9) was discovered by J. C. Bennett on 17 July 1968. SN 1983N (type Ia, mag. 11.9) was discovered by Robert Evans from Australia on July 3, 1983. On July 6, it was observed with the Very Large Array and became the first type I supernova to have a radio emission detected. The supernova reached peak optical brightness on July 17, achieving an apparent visual magnitude of 11.54. Although identified as type I, the spectrum was considered peculiar. A year after the explosion, about of iron was discovered in the ejecta. This was the first time that such a large amount of iron was unambiguously detected from a supernova explosion. SN 1983N became the modern prototype of a hydrogen deficient type Ib supernova, with the progenitor being inferred as a Wolf–Rayet star. Environment M83 is at the center of one of two subgroups within the Centaurus A/M83 Group, a nearby galaxy group. Centaurus A is at the center of the other subgroup. These are sometimes identified as one group, and sometimes as two. However, the galaxies around Centaurus A and the galaxies around M83 are physically close to each other, and both subgroups appear not to be moving relative to each other. 
See also List of Messier objects M83 (band), the band named after the galaxy References External links ESO Photo Release eso0136, An Infrared Portrait of the Barred Spiral Galaxy Messier 83 M83, SEDS Messier pages Spiral Galaxy Messier 83 at the astro-photography site of Takayuki Yoshida M83 The Southern Pinwheel X-rays Discovered From Young Supernova Remnant (SN 1957D) Messier 83 (Southern Pinwheel Galaxy) at Constellation Guide 17520223 Barred spiral galaxies Centaurus A/M83 Group Hydra (constellation) Intermediate spiral galaxies 083 -05-32-050 NGC objects 048082 444-081 13341-2936 Starburst galaxies 366
Messier 83
Astronomy
876
9,193,086
https://en.wikipedia.org/wiki/Skip%20counting
Skip counting is a mathematics technique taught as a kind of multiplication in reform mathematics textbooks such as TERC. In older textbooks, this technique is called counting by twos (threes, fours, etc.). In skip counting by twos, a person can count to 10 by only naming every other even number: 2, 4, 6, 8, 10. Combining the base (two, in this example) with the number of groups (five, in this example) produces the standard multiplication equation: two multiplied by five equals ten. References Mathematics education Multiplication
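The link described above between counting by a fixed step and multiplication can be made concrete in a few lines of code. This is only an illustrative sketch; the function name is invented for the example.

```python
# Skip counting: name every "base"-th number, "groups" times.
def skip_count(base: int, groups: int) -> list[int]:
    return [base * k for k in range(1, groups + 1)]

counts = skip_count(2, 5)
print(counts)               # [2, 4, 6, 8, 10]
print(counts[-1])           # 10, i.e. two multiplied by five
print(2 * 5 == counts[-1])  # True
```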
Skip counting
Mathematics
115
13,342,698
https://en.wikipedia.org/wiki/Robbins%20lemma
In statistics, the Robbins lemma, named after Herbert Robbins, states that if X is a random variable having a Poisson distribution with parameter λ, and f is any function for which the expected value E(f(X)) exists, then E(X f(X − 1)) = λ E(f(X)). Robbins introduced this proposition while developing empirical Bayes methods. References Theorems in statistics Lemmas Poisson distribution
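The identity can be checked numerically. The sketch below draws Poisson samples and compares the two sides of the lemma for one choice of f; the value of λ, the sample size and the test function are arbitrary example values.

```python
# Monte Carlo check of the Robbins lemma, E[X f(X-1)] = lambda * E[f(X)],
# for a Poisson random variable X.  lambda, the sample size and the test
# function f below are arbitrary example choices.
import numpy as np

rng = np.random.default_rng(0)
lam = 3.5
x = rng.poisson(lam, size=1_000_000)

def f(t):
    return 0.5 ** t        # any f with E[f(X)] finite will do

lhs = np.mean(x * f(x - 1))
rhs = lam * np.mean(f(x))
print(lhs, rhs)            # the two estimates agree to sampling accuracy
```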
Robbins lemma
Mathematics
74
4,725,497
https://en.wikipedia.org/wiki/Slipform%20stonemasonry
Slipform stonemasonry is a method for making a reinforced concrete wall with stone facing in which stones and mortar are built up in courses within reusable slipforms. It is a cross between a traditional mortared stone wall and a veneered stone wall. Short forms, up to 60 cm high, are placed on both sides of the wall to serve as a guide for the stone work. The stones are placed inside the forms with the good faces against the form work. Concrete is poured in behind the rocks. Rebar is added for strength, to make a wall that is approximately half reinforced concrete and half stonework. The wall can be faced with stone on one side or both sides. After the concrete sets enough to hold the wall together, the forms are "slipped" up to pour the next level. With slipforms it is easy for a novice to build free-standing stone walls. History Slipform stonemasonry was developed by New York architect Ernest Flagg in 1920. Flagg built a vertical framework as tall as the wall, then inserted 2x6 or 2x8 planks as forms to guide the stonework. When the masonry work reached the top of a plank, Flagg inserted another one, adding more planks until he reached the top of the wall. Helen and Scott Nearing modified the technique in Vermont in the 1930s, using slipforms that were slipped up the wall. External links Slipform Stone Masonry Stonemasonry Construction Types of wall
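As a rough planning aid for the procedure described above, the sketch below counts the form "lifts" needed for a wall of a given height and estimates the concrete volume, taking at face value the statement that the finished wall is roughly half concrete and half stonework. The wall dimensions and the 50% concrete fraction are example assumptions, not construction guidance.

```python
# Rough slipform planning arithmetic.  The 0.60 m lift height comes from the
# text ("short forms, up to 60 cm high"); the 0.5 concrete fraction reflects
# the statement that the wall is roughly half concrete and half stonework.
# Wall dimensions below are example values only.
import math

def lifts_needed(wall_height_m: float, form_height_m: float = 0.60) -> int:
    """Number of form placements needed to reach the full wall height."""
    return math.ceil(wall_height_m / form_height_m)

def concrete_volume_m3(length_m: float, height_m: float, thickness_m: float,
                       concrete_fraction: float = 0.5) -> float:
    """Approximate concrete volume for the wall, the rest being stone."""
    return length_m * height_m * thickness_m * concrete_fraction

print(lifts_needed(2.4))                    # 4 lifts of 0.6 m
print(concrete_volume_m3(10.0, 2.4, 0.30))  # ~3.6 cubic metres of concrete
```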
Slipform stonemasonry
Engineering
319
1,419,520
https://en.wikipedia.org/wiki/Demetri%20Martin
Demetri Martin (Dimitrios Evangelos Martin; born May 25, 1973) is an American comedian, actor, writer, director, cartoonist and musician. He was a contributor on The Daily Show. In stand-up, he is known for his deadpan delivery, playing his guitar for jokes, and his satirical cartoons. He starred as Ice Bear in Cartoon Network's We Bare Bears. Early life Martin was born into a Greek-American family in New York City on May 25, 1973, the son of Lillian (1951–2019) and Greek Orthodox priest Dean C. Martin (1948–1994). His grandparents migrated from Sparta and Crete. He grew up in Toms River, New Jersey, and has a younger brother named Spyro and a younger sister named Christene. As a teenager, he worked at his family's diner in Beachwood near the Jersey Shore. He attended Toms River High School North and graduated in 1991. Martin graduated from Yale University in 1995 with a B.A. in History. During his time there, he wrote a 224-word palindromic poem as a project for a fractal geometry class, which became well known. He was also a member of the Anti-Gravity Society, whose members juggle objects on Sunday evenings on Yale's Old Campus. Although Martin was admitted to Harvard Law School, he instead decided to attend New York University School of Law upon receiving a full scholarship. Martin withdrew from law school before the start of his final year, opting to pursue comedy over obtaining his Juris Doctor degree. Career Shortly after leaving law school, Martin started performing stand-up in the summer of 1997. He was an intern for The Daily Show in 1997, and in 2001 he had a set featured on the NBC late-night stand-up comedy showcase Late Friday. In 2001, Martin caught his first big break in stand-up comedy when he appeared on Comedy Central's stand-up showcase Premium Blend. At the 2003 Edinburgh Festival Fringe he won the Perrier Award with his show If I.... The show was turned into a BBC television special in 2004. From 2003 to 2004, Martin wrote for Late Night with Conan O'Brien. In 2004, Martin had his own Comedy Central Presents stand-up special. His special was divided into three parts. In the first, he performed in a traditional stand-up comedy fashion. In the second segment, he used humorous drawings as visual aids, which would serve either as the punchline or a background. During the third segment, he played the guitar and put on a pseudo-play where he would strum his guitar while alternating between playing harmonica and talking; some of his comedian friends, wearing fairy and dragon costumes, acted according to the story he was telling, detailing the magical land from where his jokes came. Martin's mother and grandmother also appeared. Starting in late 2005, he was credited as a contributor on The Daily Show, on which he appeared as the show's "Senior Youth Correspondent" and hosted a segment called "Trendspotting". He used this segment to talk about so-called hip trends among youth such as hookahs, wine, guerrilla marketing and Xbox 360. A piece about social networking featured his profile on Myspace. On March 22, 2007, Martin made another appearance on The Daily Show, talking about the Viacom lawsuit against Google and YouTube. He is no longer a Daily Show contributor as of 2014. Before starting at The Daily Show, he was offered an audition for Saturday Night Live but turned it down due to the seven-year commitment. He has recorded a comedy album titled These Are Jokes, which was released on September 26, 2006. 
This album also features Saturday Night Live member Will Forte and stand-up comedian Leo Allen. Martin returned to The Daily Show on March 22, 2006, as the new Youth Correspondent, calling his segment "Professional Important News with Demetri Martin". In 2007, he starred with Faryl Millet, a comedian and actress better known for her show Fancy Nancy's Funny Hour, in a Fountains of Wayne music video for "Someone to Love" as Seth Shapiro, and Millet as Beth Mackenzie. Both of them are characters in the song. He also starred in the video for the Travis single "Selfish Jean", in which he wears multiple T-shirts with lyrics written on them. On September 2, 2007, Martin appeared on the season finale of the HBO series Flight of the Conchords. He appeared as a keytar player named Demetri. He also had a part in the movie The Rocker (2008) starring Rainn Wilson. Martin played the part of the videographer when the band in the movie was making their first music video. In 2009, he hosted and starred in his own television show called Important Things With Demetri Martin on Comedy Central. Later in June, it was announced his show had been renewed for a second season. The second season premiered, again on Comedy Central, on February 4, 2010. Martin has stated that Important Things will not return for a third season. Prior to completing work on his second season, Martin starred in the comedy-drama film Taking Woodstock (2009), directed by Ang Lee, which premiered at the 2009 Cannes Film Festival. In the film Martin plays Elliot Tiber, a closeted gay artist who has given up his ambitions in the city to move upstate and help his old-world Jewish family run their Catskill Mountains motel. The film is based on the book written by Tiber. On April 25, 2011, Martin released his first book, titled This Is a Book. Martin played a small role in the 2011 film Contagion. Martin sold his movie concept Will to DreamWorks, and is expected to play a key supporting role. He will play the lead in the film Moon People, a pitch that he sold to Columbia Pictures. He also signed a blind script deal with CBS in October 2010 to produce, write, and star in his own television series. After CBS was shown the pilot for the series, they decided not to air it. On October 2, 2012, Martin released his second comedy album entitled Demetri Martin. Standup Comedian. Martin voices Ice Bear in the Cartoon Network series We Bare Bears, and the narrator in its spin-off series We Baby Bears. He wrote, directed, edited, and starred in the 2016 film Dean. Comedic style Martin is known for being an unconventional stand-up comic. He uses one-liners and drawings on a "large pad", as well as accompanying his jokes with music on either guitar, harmonica, piano, keyboard, glockenspiel, toy bells, ukulele, or tambourine, sometimes all at once. His style is often compared to Mitch Hedberg. He has cited comedian Steven Wright as an important influence (both use deadpan one-liners in their acts) as well as The Far Side cartoonist Gary Larson. He has submitted cartoons to the New Yorker magazine at its invitation – and had them rejected. "You gotta get better at drawing. These aren't funny enough." Martin plays instruments on stage and has music playing in the background of his performances as a way of preventing any editing of his performances to better fit for television. However, Martin has also confessed a desire to evolve his comedic style. "I love one-liners, I love jokes...but I also want to talk about how I feel. 
I want to talk about below-the-neck stuff. It's hard, if that's not where your head goes, it's hard to get comedy out of that...[But] I want to dig deeper, I want to connect in a different way with the audience." Personal life According to a July 2011 interview on the podcast WTF with Marc Maron, Martin had a short-lived marriage with a former high school classmate named Jen. They began dating after high school and got married when he was at NYU Law School and she was attending NYU Medical School. This relationship was further analyzed in his one-man show Spiral Bound. On June 1, 2012, Martin married his long-time partner Rachael Beame in Santa Monica, California. They have a daughter named Eve and a son named Paul. They currently reside in Los Feliz, California. Martin has anaphylactic reactions to seafood, poultry, nuts and certain legumes. Works Albums & specials If I (2004) These Are Jokes (2006) Person (2007) Standup Comedian. (2012) Live (At The Time) (2015) The Overthinker (2018) Demetri Deconstructed (2024) Television shows Important Things with Demetri Martin (2009–2010) Books This Is a Book, April 2011, . 19 1/2 Stories, 2017 Art collections Point Your Face at This, March 2013, . If It's Not Funny It's Art, September 2017 . Films Dean (2016) Filmography Awards and nominations References External links Interview from November, 2006, in The A.V. Club Interview from October 2006, in The DePaulia Interview by Brian M. Palmer MP3 Audio Interview on The Sound of Young America public radio show and podcast Interview Believer Mag 1973 births Living people 21st-century American comedians 21st-century American male actors 21st-century American male writers 21st-century American screenwriters American cartoonists American comedy musicians American comedy writers American humorists American male comedians American male film actors American male television actors American male television writers American male voice actors American sketch comedians American stand-up comedians American television writers American writers of Greek descent Comedians from Los Angeles Comedians from New Jersey Comedians from New York City Male actors from Los Angeles Male actors from New Jersey Male actors from New York City New York University School of Law alumni People from Los Feliz, Los Angeles People from Toms River, New Jersey Screenwriters from New Jersey Screenwriters from New York (state) Toms River High School North alumni Writers from New York City Writers Guild of America Award winners Yale University alumni Palindromists Actors from Ocean County, New Jersey
Demetri Martin
Physics
2,060
53,369,661
https://en.wikipedia.org/wiki/Thermonema%20lapsum
Thermonema lapsum is a Gram-negative and thermophilic bacterium from the genus Thermonema which has been isolated from a hot spring in Rotorua in New Zealand. Homospermidine and homospermine are the major polyamines of Thermonema lapsum. References External links Type strain of Thermonema lapsum at BacDive - the Bacterial Diversity Metadatabase Sphingobacteriia Bacteria described in 1989 Thermophiles Biota of New Zealand
Thermonema lapsum
Biology
106
8,942,458
https://en.wikipedia.org/wiki/Hill%E2%80%93Robertson%20effect
In population genetics, the Hill–Robertson effect, or Hill–Robertson interference, is a phenomenon first identified by Bill Hill and Alan Robertson in 1966. It provides an explanation as to why there may be an evolutionary advantage to genetic recombination. Explanation In a population of finite effective size which is subject to natural selection, varying extents of linkage disequilibria (LD) will occur. These can be caused by genetic drift or by mutation, and they will tend to slow down the process of evolution by natural selection. This is most easily seen by considering the case of disequilibria caused by mutation: Consider a population of individuals whose genome has only two genes, a and b. If an advantageous mutant (A) of gene a arises in a given individual, that individual's genes will, through natural selection, become more frequent in the population over time. However, if a separate advantageous mutant (B) of gene b arises before A has gone to fixation, and happens to arise in an individual who does not carry A, then individuals carrying B and individuals carrying A will be in competition. If recombination is present, then individuals carrying both A and B (of genotype AB) will eventually arise. Provided there are no negative epistatic effects of carrying both, individuals of genotype AB will have a greater selective advantage than aB or Ab individuals, and AB will hence go to fixation. However, if there is no recombination, AB individuals can only occur if the latter mutation (B) happens to occur in an Ab individual. The chance of this happening depends on the frequency of new mutations, and on the size of the population, but is in general unlikely unless A is already fixed, or nearly fixed. Hence one should expect the time between the A mutation arising and the population becoming fixed for AB to be much longer in the absence of recombination. Hence recombination allows evolution to progress faster. [Note: This effect is often erroneously equated with "clonal interference", which happens when A and B mutations arise in different wild type (ab) individuals and describes the ensuing competition between Ab and aB lineages.] There tends to be a correlation between the rate of recombination and the likelihood that the preferred haplotype (labelled AB in the example above) goes to fixation in a population. Joe Felsenstein (1974) showed this effect to be mathematically identical to the Fisher–Muller model proposed by R. A. Fisher (1930) and H. J. Muller (1932), although the verbal arguments were substantially different. Although the Hill–Robertson effect is usually thought of as describing a disproportionate buildup of fitness-reducing (relative to fitness-increasing) LD over time, these effects also have immediate consequences for mean population fitness. See also Clonal interference Genetic hitchhiking References Genetics in the United Kingdom Population genetics Evolutionary biology
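The verbal argument above lends itself to a small simulation. The C sketch below is a minimal two-locus Wright–Fisher model written purely for illustration: the population size, selection coefficient, recombination rate and starting haplotype frequencies are arbitrary assumptions, and the A and B variants are simply seeded on different backgrounds rather than arising by recurrent mutation, so it simplifies the scenario described in the article. With recombination the AB haplotype can be assembled and driven to fixation; with complete linkage it never forms and the Ab and aB lineages compete instead.

#include <stdio.h>
#include <stdlib.h>

#define N    1000     /* haploid population size (assumed)               */
#define S    0.05     /* selective advantage per beneficial allele        */
#define GENS 20000    /* cap on the number of generations simulated       */

/* haplotype order: 0 = ab, 1 = Ab, 2 = aB, 3 = AB */

static double unif(void) { return (double)rand() / ((double)RAND_MAX + 1.0); }

/* One replicate: returns the generation in which AB fixed, or -1 if it
 * never can (a beneficial allele was lost, or the generation cap was hit). */
static int run(double r, unsigned seed)
{
    double p[4] = {0.8, 0.1, 0.1, 0.0};   /* A and B start on different backgrounds */
    const double w[4] = {1.0, 1.0 + S, 1.0 + S, (1.0 + S) * (1.0 + S)};
    srand(seed);

    for (int g = 1; g <= GENS; g++) {
        double q[4], wbar = 0.0;
        int counts[4] = {0, 0, 0, 0};

        for (int i = 0; i < 4; i++) wbar += p[i] * w[i];       /* selection */
        for (int i = 0; i < 4; i++) q[i] = p[i] * w[i] / wbar;

        /* recombination: D = freq(AB)*freq(ab) - freq(Ab)*freq(aB) */
        double D = q[3] * q[0] - q[1] * q[2];
        q[3] -= r * D;  q[0] -= r * D;
        q[1] += r * D;  q[2] += r * D;

        for (int n = 0; n < N; n++) {                          /* drift: resample N copies */
            double u = unif(), c = 0.0;
            int pick = 0;
            for (int i = 0; i < 4; i++) {
                if (q[i] <= 0.0) continue;     /* never create an absent haplotype */
                c += q[i];
                pick = i;
                if (u < c) break;
            }
            counts[pick]++;
        }
        for (int i = 0; i < 4; i++) p[i] = (double)counts[i] / N;

        if (p[3] == 1.0) return g;                               /* AB fixed        */
        if (p[1] + p[3] == 0.0 || p[2] + p[3] == 0.0) return -1; /* A or B was lost */
    }
    return -1;
}

int main(void)
{
    int with_r = run(0.01, 1);   /* loci linked with recombination rate 0.01 */
    int no_r   = run(0.00, 1);   /* complete linkage, no recombination       */

    if (with_r > 0) printf("r = 0.01: AB fixed in generation %d\n", with_r);
    else            printf("r = 0.01: AB did not fix within %d generations\n", GENS);
    if (no_r > 0)   printf("r = 0.00: AB fixed in generation %d\n", no_r);
    else            printf("r = 0.00: AB never assembled; Ab and aB competed instead\n");
    return 0;
}

Running both cases with the same seed makes the contrast easy to see: the r = 0 case reports that the AB haplotype never formed, which is exactly the waiting-time problem the article describes.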
Hill–Robertson effect
Biology
604
71,906,829
https://en.wikipedia.org/wiki/Monorackbahn
Monorackbahn is a small monorail rack railway manufactured by the Doppelmayr/Garaventa Group. Its style is derived from industrial monorails used in 1960s vineyards. There are more than 650 Monorackbahn systems installed across Switzerland, Germany and Italy. History The idea for the development of the Monorackbahn started in the 1960s and came from Japan in the form of slope cars which were used on orchards. The original manufacturer Yoneyama Industry named them "Monorack" (モノラック, Monorakku) by 1966. The first models were primarily used for transporting bags of fruit. Garaventa designed similar systems for usage in vineyards in the 1960s which could also carry workers. It picked up the brand name Monorack by 1976. The main difference between the Japanese and European systems was the type of rail used for the track, with the Japanese systems using 4 cm and the European systems 6 cm square tubing. The cooperation between Nikkari in Japan and Habegger in Switzerland started in 1975, so the Monorack tractors are mostly identical. The Garaventa system is designed for loads up to and 100% (45°) slopes. In the newest system (as of 2021) a 48-volt Li-ion battery pack is used with a 6 kW motor. The base size of 3.6 kWh allows for 60 min of operation. Connector pads for the charging stations can be attached to the rail so that recharging starts automatically at the end points. The system is so prevalent in vineyards along the Rhine that it is also named Vineyard rail (German ). This is ambiguous, as Feldbahn systems are also used for agricultural transportation, including vineyards. Apart from usage in vineyards, Monorackbahn systems are also found at complex construction sites in Europe. Types References External links https://www.doppelmayr.com/de/systeme/monorack/ - home site at the manufacturer https://www.vinitorum-quaterni.de/index.php?lang=de&page=monorack for a new vineyard Monorails in Germany Vertical transport devices
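As a rough cross-check of the battery figures quoted above: a 3.6 kWh pack feeding a 6 kW motor would last only about 3.6 kWh / 6 kW = 0.6 h, roughly 36 minutes, at full load. The stated 60 minutes of operation therefore implies an average draw on the order of 3.6 kW, i.e. the motor spending much of a typical duty cycle well below its rated power; this is an inference from the stated figures, not a value given by the manufacturer.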
Monorackbahn
Technology
449
38,900,265
https://en.wikipedia.org/wiki/Tricholoma%20colossus
Tricholoma colossus is a mushroom of the agaric genus Tricholoma. Due to its ringed stipe, it was previously classified as Armillaria, even though it is otherwise clearly a Tricholoma. The cap is 6–30 cm wide, brick red to chestnut brown, hemispherical when young and widening later on, uneven, always with very thick flesh, and with a slightly sticky surface that develops somewhat into scales. The stipe is stout, 2–5 cm thick (even up to 10 cm), and solid. There is a ring-like formation at the top of the stipe, above which it is white. It grows in pine forests from September to October; it is very rare. See also List of North American Tricholoma List of Tricholoma species References colossus Fungi described in 1836 Fungi of Europe Fungi of North America Fungus species
Tricholoma colossus
Biology
177
58,887,579
https://en.wikipedia.org/wiki/Coprothermobacterota
Coprothermobacterota is a phylum of nonmotile, rod-shaped bacteria. Its members are strictly anaerobic and thermophilic, growing at optimal temperatures between 55 °C and 70 °C. The name of this phylum is based on an early genus, dubbed "Coprothermobacter", a term whose etymology derives from the Greek words "kopros", meaning manure, and "thermos", warm, referring to the fact that these bacteria are capable of living at relatively high temperatures, with a maximum growth temperature of 75 °C. Notes In October 2021, the name of this phylum was accepted as validly published, according to the emendations of the rules of the International Code of Nomenclature of Prokaryotes proposed to include the rank of phylum. References Bacteria phyla
Coprothermobacterota
Biology
180
681,781
https://en.wikipedia.org/wiki/Magnetic%20particle%20inspection
Magnetic particle inspection (MPI) is a nondestructive testing process where a magnetic field is used for detecting surface, and shallow subsurface, discontinuities in ferromagnetic materials. Examples of ferromagnetic materials include iron, nickel, cobalt, and some of their alloys. The process puts a magnetic field into the part. The piece can be magnetized by direct or indirect magnetization. Direct magnetization occurs when the electric current is passed through the test object and a magnetic field is formed in the material. The magnetic lines of force are perpendicular to the direction of the electric current, which may be either alternating current (AC) or some form of direct current (DC) (rectified AC). Indirect magnetization occurs when no electric current is passed through the test object, but a magnetic field is applied from an outside source. The presence of a surface or subsurface discontinuity in the material allows the magnetic flux to leak, since air cannot support as much magnetic field per unit volume as metals. To identify a leak, ferrous particles, either dry or in a wet suspension, are applied to a part. These are attracted to an area of flux leakage and form what is known as an indication, which is evaluated to determine its nature, cause, and course of action, if any. Types of electrical currents used There are several types of electrical currents used in magnetic particle inspection. For a proper current to be selected, one needs to consider the part geometry and material, the type of discontinuity one is seeking, and how far the magnetic field needs to penetrate into the part. Alternating current (AC) is commonly used to detect surface discontinuities. Using AC to detect subsurface discontinuities is limited due to what is known as the skin effect, where the current runs along the surface of the part. Because the current alternates in polarity at 50 to 60 cycles per second, it does not penetrate much past the surface of the test object. This means the magnetic domains will only be aligned to the depth of AC current penetration into the part. The frequency of the alternating current determines the depth of penetration. Full wave DC (FWDC) is used to detect subsurface discontinuities where AC cannot penetrate deep enough to magnetize the part at the depth needed. The amount of magnetic penetration depends on the amount of current through the part. DC is also limited on very large cross-sectional parts in terms of how effectively it will magnetize the part. Half wave DC (HWDC, pulsating DC) works similarly to full wave DC, but allows for detection of surface-breaking indications and has more magnetic penetration into the part than FWDC. HWDC is advantageous for the inspection process as it actually helps move the magnetic particles during the bathing of the test object. The aid in particle mobility is caused by the half-wave pulsating current waveform. In a typical mag pulse of 0.5 seconds there are 15 pulses of current using HWDC. This gives the particles more of an opportunity to come into contact with areas of magnetic flux leakage. An AC electromagnet is the preferred method for finding surface-breaking indications. The use of an electromagnet to find subsurface indications is difficult. An AC electromagnet is a better means to detect a surface indication than HWDC, DC, or a permanent magnet, while some form of DC is better for subsurface defects. Equipment A wet horizontal MPI machine is the most commonly used mass-production inspection machine. 
The machine has a head and tail stock where the part is placed to magnetize it. In between the head and tail stock is typically an induction coil, which is used to change the orientation of the magnetic field by 90° from the head stock. Most of the equipment is built for a specific application. Mobile power packs are custom-built magnetizing power supplies used in wire wrapping applications. A magnetic yoke is a hand-held device that induces a magnetic field between two poles. Common applications are for outdoor use, remote locations, and weld inspection. The drawback of magnetic yokes is that they only induce a magnetic field between the poles, so large-scale inspections using the device can be time-consuming. For proper inspection the yoke needs to be rotated 90 degrees for every inspection area to detect horizontal and vertical discontinuities. Subsurface detection using a yoke is limited. These systems use dry magnetic powders, wet powders, or aerosols. Demagnetizing parts After the part has been magnetized it needs to be demagnetized. This requires special equipment that works the opposite way of the magnetizing equipment. The magnetization is normally done with a high current pulse that reaches a peak current very quickly and instantaneously turns off, leaving the part magnetized. To demagnetize a part, the current or magnetic field needed has to be equal to or greater than the current or magnetic field used to magnetize the part. The current or magnetic field is then slowly reduced to zero, leaving the part demagnetized. A popular method of checking for residual magnetism is to use a Gauss meter. AC demagnetizing Pull-through AC demagnetizing coils are AC-powered devices that generate a high magnetic field through which the part is slowly pulled by hand or on a conveyor. The act of pulling the part through and away from the coil's magnetic field slowly reduces the magnetic field in the part. Note that many AC demagnetizing coils have power cycles of several seconds, so the part must be passed through the coil and be several feet (meters) away before the demagnetizing cycle finishes, or the part will have residual magnetization. AC decaying demagnetizing: this is built into most single phase MPI equipment. During the process the part is subjected to an equal or greater AC current, after which the current is reduced over a fixed period of time (typically 18 seconds) until zero output current is reached. As AC alternates from a positive to a negative polarity, this will leave the magnetic domains of the part randomized. AC demag does have significant limitations on its ability to demag a part depending on the geometry and the alloys used. Reversing full wave DC demagnetizing: this is a demagnetizing method that must be built into the machine during manufacturing. It is similar to AC decaying except the DC current is stopped at intervals of half a second, during which the current is reduced by a set amount and its direction is reversed. Then current is passed through the part again. The process of stopping, reducing and reversing the current will leave the magnetic domains randomized. This process is continued until zero current is passed through the part. The normal reversing DC demag cycle on modern equipment should be 18 seconds or longer. This method of demag was developed to overcome the limitations presented by the AC demag method where part geometry and certain alloys prevented the AC demag method from working. 
Halfwave DC demagnetizing (HWDC): this process is identical to full-wave DC demagnetization, except the waveform is half-wave. This method of demagnetization is new to the industry and only available from a single manufacturer. It was developed to be a cost-effective method to demagnetize without needing a full-wave DC bridge design power supply. This method is only found on single-phase AC/HWDC power supplies. HWDC demagnetization is just as effective as full-wave DC, without the extra cost and added complexity. Of course, other limitations apply due to inductive losses when using an HWDC waveform on large-diameter parts. Also, HWDC effectiveness is limited past 410 mm (16 in) diameter using a 12-volt power supply. Magnetic particle powder A common particle used to detect cracks is iron oxide, for both dry and wet systems. Wet system particles range in size from less than 0.5 micrometres to 10 micrometres for use with water or oil carriers. Particles used in wet systems have pigments applied that fluoresce at 365 nm (ultraviolet A), requiring 1000 μW/cm2 (10 W/m2) at the surface of the part for proper inspection. If the correct light is not applied in a darkened area, the particles cannot be detected or seen. It is industry practice to use UV goggles/glasses to filter out the UV light so that the visible light (normally green and yellow) created by the fluorescing particles stands out. Green and yellow fluorescence was chosen because the human eye reacts best to these colors. Dry particle powders range in size from 5 to 170 micrometres and are designed to be seen in white light conditions. The particles are not designed to be used in wet environments. Dry powders are normally applied using hand-operated air powder applicators. Aerosol applied particles are similar to wet systems, sold in premixed aerosol cans similar to hair spray. Magnetic particle carriers It is common industry practice to use specifically designed oil and water-based carriers for magnetic particles. Deodorized kerosene and mineral spirits have not been commonly used in the industry for 40 years. It is dangerous to use kerosene or mineral spirits as a carrier due to the risk of fire. Inspection The following are general steps for inspecting on a wet horizontal machine: The workpiece is cleaned of oil and other contaminants. The necessary calculations are done to determine the amount of current required to magnetize the workpiece; refer to ASTM E1444/E1444M for the formulas. The magnetizing pulse is applied for 0.5 seconds, during which the operator washes the workpiece with the particle suspension, stopping before the magnetic pulse is completed. Failure to stop prior to the end of the magnetic pulse will wash away indications. UV light is applied while the operator looks for indications of defects that are 0 to ±45 degrees from the path the current flowed through the workpiece. Indications only appear 45 to 90 degrees from the applied magnetic field. The easiest way to quickly determine the direction the magnetic field is running is to grasp the workpiece with either hand between the head stocks, laying the thumb against the workpiece (do not wrap the thumb around the workpiece); this is called the left- or right-thumb rule, or the right-hand grip rule. The direction the thumb points reveals the direction current is flowing. The magnetic field will be running 90 degrees from the current path. On complex geometry, like a crankshaft, the operator needs to visualize the changing direction of the current and magnetic field created. 
The current starts at 0 degrees, then goes to 45 degrees, to 90 degrees, back to 45 degrees and to 0, then −45 to −90 to −45 and back to 0, and this is repeated for each crankpin. Thus, it can be time-consuming to find indications that are only 45 to 90 degrees from the magnetic field. The workpiece is either accepted or rejected, based on pre-defined criteria. The workpiece is demagnetized. Depending on requirements, the orientation of the magnetic field may need to be changed 90 degrees to inspect for indications that cannot be detected from steps 3 to 5. The most common way to change magnetic field orientation is to use a "coil shot" (typically a 36-inch coil), after which steps 4, 5, and 6 are repeated. Standards International Organization for Standardization (ISO) ISO 3059, Non-destructive testing - Penetrant testing and magnetic particle testing - Viewing conditions ISO 9934-1, Non-destructive testing - Magnetic particle testing - Part 1: General principles ISO 9934-2, Non-destructive testing - Magnetic particle testing - Part 2: Detection media ISO 9934-3, Non-destructive testing - Magnetic particle testing - Part 3: Equipment ISO 10893-5, Non-destructive testing of steel tubes. Magnetic particle inspection of seamless and welded ferromagnetic steel tubes for the detection of surface imperfections ISO 17638, Non-destructive testing of welds - Magnetic particle testing ISO 23278, Non-destructive testing of welds - Magnetic particle testing of welds - Acceptance levels European Committee for Standardization (CEN) EN 1330-7, Non-destructive testing - Terminology - Part 7: Terms used in magnetic particle testing EN 1369, Founding - Magnetic particle inspection EN 10228-1, Non-destructive testing of steel forgings - Part 1: Magnetic particle inspection American Society of Testing and Materials (ASTM) ASTM E1444/E1444M Standard Practice for Magnetic Particle Testing ASTM A 275/A 275M Test Method for Magnetic Particle Examination of Steel Forgings ASTM A456 Specification for Magnetic Particle Inspection of Large Crankshaft Forgings ASTM E543 Practice Standard Specification for Evaluating Agencies that Perform Nondestructive Testing ASTM E 709 Guide for Magnetic Particle Testing Examination ASTM E 1316 Terminology for Nondestructive Examinations ASTM E 2297 Standard Guide for Use of UV-A and Visible Light Sources and Meters used in the Liquid Penetrant and Magnetic Particle Methods Canadian Standards Association (CSA) CSA W59 Society of Automotive Engineers (SAE) AMS 2641 Magnetic Particle Inspection Vehicle AMS 3040 Magnetic Particles, Nonfluorescent, Dry Method AMS 3041 Magnetic Particles, Nonfluorescent, Wet Method, Oil Vehicle, Ready-To-Use AMS 3042 Magnetic Particles, Nonfluorescent, Wet Method, Dry Powder AMS 3043 Magnetic Particles, Nonfluorescent, Wet Method, Oil Vehicle, Aerosol Packaged AMS 3044 Magnetic Particles, Fluorescent, Wet Method, Dry Powder AMS 3045 Magnetic Particles, Fluorescent, Wet Method, Oil Vehicle, Ready-To-Use AMS 3046 Magnetic Particles, Fluorescent, Wet Method, Oil Vehicle, Aerosol Packaged AMS 5062 Steel, Low Carbon Bars, Forgings, Tubing, Sheet, Strip, and Plate 0.25 Carbon, Maximum AMS 5355 Investment Castings AMS I-83387 Inspection Process, Magnetic Rubber AMS-STD-2175 Castings, Classification and Inspection of AS 4792 Water Conditioning Agents for Aqueous Magnetic Particle Inspection AS 5282 Tool Steel Ring Standard for Magnetic Particle Inspection AS5371 Reference Standards Notched Shims for Magnetic Particle Inspection United States Military Standard A-A-59230 Fluid, 
Magnetic Particle Inspection, Suspension References Further reading External links Video on Magnetic Particle Inspection, Karlsruhe University of Applied Sciences Nondestructive testing Casting (manufacturing) Welding
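The inspection procedure above notes that the magnetizing current must be calculated, per ASTM E1444/E1444M, before the shot is applied. The authoritative formulas belong to the standard itself; the C sketch below only illustrates two rules of thumb that are commonly quoted in NDT training material (roughly 300 to 800 amperes per inch of part diameter for a direct head shot, and ampere-turns of about 45,000 divided by the length-to-diameter ratio for a low fill-factor coil shot), so the constants here are assumptions to be verified against the governing specification, not values taken from this article.

/* Rough magnetizing-current estimate for a wet horizontal MPI bench.
 * The constants are commonly quoted rules of thumb, NOT governing values;
 * always take the actual formula and limits from ASTM E1444/E1444M. */
#include <stdio.h>

/* Direct (head shot) circular magnetization: ~300-800 A per inch of diameter. */
static void head_shot(double diameter_in)
{
    printf("Head shot, %.1f in dia: %.0f - %.0f A\n",
           diameter_in, 300.0 * diameter_in, 800.0 * diameter_in);
}

/* Low fill-factor coil shot: ampere-turns ~ 45000 / (L/D); the required
 * current is that figure divided by the number of turns in the coil.   */
static void coil_shot(double length_in, double diameter_in, int turns)
{
    double ampere_turns = 45000.0 / (length_in / diameter_in);
    printf("Coil shot, L/D = %.1f, %d turns: ~%.0f A\n",
           length_in / diameter_in, turns, ampere_turns / turns);
}

int main(void)
{
    head_shot(2.0);          /* hypothetical 2 in diameter shaft          */
    coil_shot(10.0, 2.0, 5); /* hypothetical 10 in long part, 5-turn coil */
    return 0;
}

Either figure is only a starting point; field strength verification on the actual part (for example with shims or a Gauss meter) and the limits given in the specification take precedence over any rule of thumb.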
Magnetic particle inspection
Materials_science,Engineering
3,015
20,104,879
https://en.wikipedia.org/wiki/Intermittent%20fasting
Intermittent fasting is any of various meal timing schedules that cycle between voluntary fasting (or reduced calorie intake) and non-fasting over a given period. Methods of intermittent fasting include alternate-day fasting, periodic fasting, such as the 5:2 diet, and daily time-restricted eating. Intermittent fasting has been studied to find whether it can reduce the risk of diet-related diseases, such as metabolic syndrome. A 2019 review concluded that intermittent fasting may help with obesity, insulin resistance, dyslipidemia, hypertension, and inflammation. There is preliminary evidence that intermittent fasting is generally safe. Adverse effects of intermittent fasting have not been comprehensively studied, leading some academics to point out its risks as a dietary fad. The US National Institute on Aging states that there is insufficient evidence to recommend intermittent fasting, and encourages speaking to one's healthcare provider about the benefits and risks before making any significant changes to one's eating pattern. Fasting exists in various religious practices, including Buddhism, Christianity, Hinduism, Islam, Jainism, and Judaism. History Fasting is an ancient tradition, having been practiced by many cultures and religions over centuries. Therapeutic intermittent fasts for the treatment of obesity have been investigated since at least 1915, with a renewed interest in the medical community in the 1960s after Bloom and his colleagues published an "enthusiastic report". Intermittent fasts, or "short-term starvation periods", ranged from 1 to 14 days in these early studies. This enthusiasm spread to lay magazines, which prompted researchers and clinicians to caution about the use of intermittent fasts without medical monitoring. Types There are multiple methods of intermittent fasting. Time-restricted eating involves eating only during a certain number of hours each day, often establishing a consistent daily pattern of caloric intake within an 8–12-hour time window. This schedule may align food intake with circadian rhythms (establishing eating windows that begin after sunrise and end around sunset). One-meal-a-day fasting involves eating just one meal a day and nothing for the rest of the day. Alternate-day fasting involves alternating between a 24-hour "fast day", when the person eats less than 25% of usual energy needs, and a 24-hour non-fasting "feast day". There are two subtypes: Complete alternate-day fasting is total intermittent energy restriction (IER, as opposed to CER, continuous energy restriction), where no calories are consumed on fasting days. Modified alternate-day fasting involves partial intermittent energy restriction which allows the consumption of up to 25% of daily calorie needs on fasting days instead of complete fasting. This is akin to alternating days with normal eating and days with a very-low-calorie diet. The 5:2 diet is a type of periodic fasting (that does not follow a particular food pattern) which focuses entirely on calorie content. In other words, two days of the week are devoted to consumption of approximately 500 to 600 calories, or about 25% of regular daily caloric intake, with normal calorie intake during the other five days of the week. It was first documented in a 2011 article co-authored by Michelle Harvie, Mark Mattson, and 14 additional scientists. It was later popularized in the UK and Australia by Michael Mosley through the 2012 BBC documentary Eat, Fast and Live Longer (where he learned about the 5:2 diet from Mark Mattson). 
It also became common in Australia. Periodic fasting or whole-day fasting involves intermittent periods of water fasting longer than 24 hours. The science concerning intermittent fasting is preliminary and uncertain due to an absence of studies on its long-term effects. Preliminary evidence indicates that intermittent fasting may be effective for weight loss, may decrease insulin resistance and fasting insulin, and may improve cardiovascular and metabolic health, although the long term sustainability of these effects has not been studied. Research Body weight and metabolic disease risk There is limited evidence that intermittent fasting produces weight loss comparable to a calorie-restricted diet. Most studies on intermittent fasting in humans have observed weight loss, ranging from 2.5% to 9.9%. The reductions in body weight can be attributed to the loss of fat mass and some lean mass. For time restricted eating the ratio of weight loss is 4:1 for fat mass to lean mass, respectively. Alternate-day fasting does not affect lean body mass, although one review found a small decrease. Alternate-day fasting improves cardiovascular and metabolic biomarkers similarly to a calorie restriction diet in people who are overweight, obese or have metabolic syndrome. As of 2021, it remains uncertain whether intermittent fasting could prevent cardiovascular disease. Intermittent fasting has not been studied in children, elderly, or underweight people, and may be harmful in these populations. Intermittent fasting is not recommended for people who are not overweight, and the long-term sustainability of intermittent fasting is unknown . A 2021 review found that moderate alternate-day fasting for two to six months was associated with reductions of body weight, body mass index, and cardiometabolic risk factors in overweight or obese adults. Other effects Cancer and other diseases Intermittent fasting is not recommended to treat cancer in France, the United Kingdom, or the United States, although a few small-scale clinical studies suggest that it may reduce chemotherapy side effects. Periodic fasting may have a minor effect on chronic pain and mood disorders. Exercise Athletic performance does not benefit from intermittent fasting. Overnight fasting before exercise increases lipolysis, but reduces performance in prolonged exercise (more than 60 min). Side effects There is preliminary evidence that intermittent fasting appears safe for people without diabetes or eating disorders. Reviews of preliminary clinical studies found that short-term intermittent fasting may produce minor side effects, such as continuous feelings of hunger, irritability, dizziness, nausea, headaches, and impaired thinking, although these effects disappear within a month from beginning the fasting practice. A 2018 systematic review found no major adverse effects. Intermittent fasting is not recommended for pregnant or breastfeeding women, growing children and adolescents, the elderly, or individuals with or vulnerable to eating disorders. Tolerance Tolerance of a diet is a determinant of the potential effectiveness and maintenance of benefits obtained, such as weight loss or biomarker improvement. A 2019 review found that drop-out rates varied widely from 2% to 38% for intermittent fasting, and from 0% to 50% for calorie restriction diet. 
Possible mechanisms Preliminary research indicates that fasting may induce a transition through four states: The fed state or absorptive state during satiety, when the primary fuel source is glucose and body fat storage is active, lasting for about 4 hours; The postabsorptive state, lasting for up to 18 hours, when glucagon is secreted and the body uses liver glucose reserves as a fuel source; The fasted state, transitioning progressively to other reserves, such as fat, lactic acid, and alanine, as fuel sources, when the liver glucose reserves are depleted, occurring after 12 to 36 hours of continued fast; The shift from preferential lipid synthesis and fat storage, to the mobilization of fat (in the form of free fatty acids), metabolized into fatty acid-derived ketones to provide energy. Some authors call this transition the "metabolic switch". A 2019 review of weight-change interventions, including alternate day fasting, time-restricted feeding, exercise and overeating, found that body weight homeostasis could not precisely correct "energetic errors" – the loss or gain of calories – in the short-term. Another pathway for effects of meal timing on metabolism lies in the influence of the circadian rhythm over the endocrine system, especially on glucose metabolism and leptin. Preliminary studies found that eating when melatonin is secreted during darkness and commonly when sleeping at night is associated with increased glucose levels in young healthy adults, and obesity and cardiovascular disorders in less healthy individuals. Reviews on obesity prevention concluded that "meal timing appears as a new potential target in weight control strategies" and suggest that "timing and content of food intake, physical activity, and sleep may be modulated to counteract" circadian and metabolic genetic predispositions to obesity. Intermittent feeding Other feeding schemes, such as hypocaloric feeding and intermittent feeding, also called bolus feeding were under study. A 2019 meta-analysis found that intermittent feeding may be more beneficial for premature infants, although better designed studies are required to devise clinical practices. In adults, reviews have not found intermittent feeding to increase glucose variability or gastrointestinal intolerance. A meta-analysis found intermittent feeding had no influence on gastric residual volumes and aspiration, pneumonia, mortality nor morbidity in people with a trauma, but increased the risk of diarrhea. Food production Intermittent fasting, or "skip-a-day" feeding, is supposedly the most common feeding strategy for poultry in broiler breeder farms worldwide, as an alternative to adding bulky fibers to the diet to reduce growth. It is perceived as welfare-reducing and thus illegal in several European countries including Sweden. Intermittent fasting in poultry appears to increase food consumption but reduce appetitive behaviors such as foraging. Religious fasting Some different types of fastings exist in some religious practices. These include the Black Fast of Christianity (commonly practiced during Lent), Vrata (Hinduism), Ramadan (Islam), Yom Kippur (Judaism), Fast Sunday (The Church of Jesus Christ of Latter-day Saints), Jain fasting, and Buddhist fasting. Religious fasting practices may only require abstinence from certain foods or last for a short period of time and cause negligible effects. 
Hinduism A Vrata/Nombu is observed either as an independent private ritual at a date of one's choice, as part of a particular ceremony such as wedding, or as a part of a major festival such as Diwali (Lakshmi, festival of lights), Shivaratri (Shiva), Navratri (Durga or Rama), Kandasashti (Muruga), Ekadashi (Krishna, Vishnu avatars). Christianity In Christianity, many adherents of Christian denominations including Catholics, Lutherans, Methodists, Anglicans, and the Orthodox, often observe the Friday Fast throughout the year, which commonly includes abstinence from meat. Throughout the liturgical season of Lent (and especially on Ash Wednesday and Good Friday) in the Christian calendar, many Christians practice a form of intermittent fasting in which one can consume two collations and one full meal; others practice the Black Fast, in which no food or water is consumed until after sunset with prayer. Buddhism In Buddhism, fasting is undertaken as part of the monastic training of Theravada Buddhist monks, who fast daily from noon to sunrise of the next day. This daily fasting pattern may be undertaken by laypeople following the eight precepts. Islam During Ramadan, Islamic practices are similar to intermittent fasting by not eating or drinking from dawn until sunset, while permitting food intake in the morning before dawn and in the evening after dusk for 30 days. A meta-analysis on the health of Muslims during Ramadan shows significant weight loss during the fasting period of up to , but this weight was regained within about two weeks thereafter. The analysis concluded that "Ramadan provides an opportunity to lose weight, but structured and consistent lifestyle modifications are necessary to achieve lasting weight loss." One review found similarities between Ramadan and time-restricted feeding, with the main dissimilarity being the disallowance of water drinking with Islamic fasting. In a 2020 review, Ramadan fasting caused a significant decrease in LDL cholesterol levels, and a slight decline in total cholesterol. A review of the metabolic effects of fasting showed that religious fasting proved to be beneficial in terms of "body weight and glycemia, cardiometabolic risk markers, and oxidative stress parameters", where animals, in the study, that followed a diet regimen consistent with that of religious fasting, were observed to have weight loss in addition to "lowered plasma levels of glucose, triacylglycerols, and insulin growth factor-1". Negative effects of Ramadan fasting include increased risk of hypoglycemia in diabetics, as well as inadequate levels of certain nutrients. Ramadan disallows fluids during the fasting period. This type of fasting would be hazardous for pregnant women, as it is associated with risks of inducing labor and causing gestational diabetes, although it does not appear to affect the child's weight. For these reasons, pregnant women, as well as children who have not reached puberty, the elderly, those who are physically or mentally incapable of fasting, travelers, and breast-feeding mothers are often exempt from religious fasting – Ramadan being one example. Ramadan diurnal intermittent fasting is associated with healthier lifestyle behaviors and a reduction in smoking rate by more than 50% among university students. Guidelines United States The American Heart Association (AHA) says that as with other "popular or fad diets", there is no good evidence of heart health benefits from intermittent fasting. 
The American Diabetes Association "found limited evidence about the safety and/or effects of intermittent fasting on type 1 diabetes" and preliminary results of weight loss for type 2 diabetes, and so does not recommend any specific dietary pattern for the management of diabetes until more research is done, recommending instead that "health care providers should focus on the key factors that are common among the patterns". The National Institute on Aging states that although intermittent fasting showed weight loss success in several studies on obese or overweight individuals, it does not recommend intermittent fasting for non-overweight individuals because of uncertainties about its effectiveness and safety, especially for older adults. Europe Given the lack of advantage and the increased incidence of diarrhea, European guidelines do not recommend intermittent feeding for people in intensive care units. United Kingdom According to NHS Choices, people considering the 5:2 diet should first consult a physician, as fasting can sometimes be unsafe. New Zealand New Zealand's Ministry of Health considers that intermittent fasting can be advised by doctors to some people, except diabetics, stating that these "diets can be as effective as other energy-restricted diets, and some people may find them easier to stick to" but there are possible side effects during fasting days such as "hunger, low energy levels, light-headedness and poor mental functioning", and notes that healthy food must be chosen on non-fasting days. Usage trends Intermittent fasting was a common fad diet, attracting celebrity endorsements and public interest. UK and Australia Intermittent fasting (specifically the 5:2 diet) was popularized by Michael Mosley in the UK and Australia in 2012 after the BBC2 television Horizon documentary Eat, Fast and Live Longer. North America In the United States, intermittent fasting became a trend in Silicon Valley, California. It was the most popular diet in 2018, according to a survey by the International Food Information Council. Commercial activity Interest in intermittent fasting led some companies to commercialize diet coaching, dietary supplements, and full meal packages. These companies were criticized for offering expensive products or services that were not backed by science. See also List of diets References External links The benefits of intermittent fasting Jane E. Brody, The New York Times, 17 February 2020 Intermittent fasting Harriet Hall, Science-Based Medicine, December 2015 Does Intermittent Fasting Work? Steven Novella, Science-Based Medicine, June 2023 Diets Eating behaviors of humans Fad diets Fasting
Intermittent fasting
Biology
3,254
4,431,243
https://en.wikipedia.org/wiki/Gas%E2%80%93oil%20separation%20plant
In the upstream oil industry, a gas–oil separation plant (GOSP) is a temporary or permanent facility that separates wellhead fluids into their constituent vapor (gas) and liquid (oil and produced water) components. Temporary plant Temporary gas–oil separation facilities are associated with newly drilled or newly sidetracked wells where the production potential of the well is being assessed. The plant, comprising a test separator vessel, is connected to the wellhead after the choke valve. The separator allows the fluids to separate by gravity into their component phases: solids such as sand (the densest phase) settle to the bottom of the separator, then produced water and oil, which are drawn off separately from the base of the separator, and vapor or gas (the lightest phase) separates to the top of the separator vessel, from where it is withdrawn. Each of the three fluid phases is metered to determine the relative flow-rates of the components and the production potential of the well. In temporary facilities the vapor is generally flared; produced water is disposed of overboard after treatment to reduce its oil content to statutory levels; and the crude oil phase may be diverted to tote tanks for removal and treatment onshore. Alternatively, if the temporary GOSP plant is associated with a permanent production facility, the oil phase may be treated in the installation's permanent gas–oil separation plant. Permanent plant Permanent gas–oil separation plants are associated with permanent offshore production facilities. For a full description of such a plant, see Oil production plant. A gas–oil–and–water separator is called a 3-phase separator. The gas and oil or condensate are pumped through designated pipelines, while the sand and other solids are washed from the separator and disposed of overboard. Reasons for processing Multi-phase production Water need not be separated, and a single liquid (oil and water) phase may be produced together with a separate gas phase. Chemicals are added so that the crude and water emulsify. This process is then reversed at the storage and processing facility by adding demulsifiers that make the water separate out, so that it can be drawn from the bottom of the tank. After storage, the crude oil can be sold to refineries, which produce fuels, chemicals, and energy products. Pressure The well fluids at the wellhead are at high pressure. Production pressures of greater than are not uncommon, but typically are lower than this. The high pressure is reduced at the choke valve to typically 7 to 30 bar at the separator, although the first stage separator could operate at a higher pressure, c. 250 bar. Modern oil recovery practice may place a hydro-cyclone to replace the temporary GOSP, allowing the water to be removed immediately and re-injected into the reservoir. The hydro-cyclone will vary the flows according to the water content and can also separate condensate from the gas where separate storage and export can be provided for the products close to the production well (e.g. on offshore platforms). Contaminants Crude oil leaving the well may contain quantities of sulfur (e.g. hydrogen sulfide and thiols) and/or carbon dioxide, and is known as "sour" crude. The gas–oil separator will typically partition the hydrogen sulfide and carbon dioxide preferentially into the vapor or gas phase, where they may be further treated. The most usual "crude sweetening packages" use amines to remove the sulfur and CO2 content. 
Crude that contains water is called "wet", and the water can then be bound in an emulsion in the crude to allow pumping through a pipeline. The crude is processed and treated to make it acceptable for the entry and transportation specification of the pipeline, before it can be transported to a refinery for processing. Phase separation It is often appropriate to separate gases and liquids for separate processing. This also involves the separation of the oil and water liquid phases. Gas recovery In the past, and in some places today, the gas has been considered a waste product and flared off (burned). Collecting the gas reduces carbon emissions and produces a marketable commodity. See also Oil platform Oil production plant Oil refinery Petroleum Petroleum industry (Oil industry) Upstream (petroleum industry) References External links The Gas-Oil Separation Process Petroleum technology
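As a toy illustration of the 3-phase separator idea described above, the C sketch below splits an assumed wellhead stream into gas, oil and produced-water outlets. The inlet rate and phase fractions are invented numbers, and real separator design depends on fluid properties, residence time and operating pressure, none of which are modelled here.

/* Toy three-phase separator mass balance with made-up illustrative figures. */
#include <stdio.h>

int main(void)
{
    double inlet_mass  = 500.0;   /* total wellhead flow, tonnes/day (assumed) */
    double gas_frac    = 0.15;    /* mass fraction leaving as vapour (assumed) */
    double water_frac  = 0.25;    /* mass fraction drawn off as produced water */
    double oil_frac    = 1.0 - gas_frac - water_frac;

    printf("gas outlet   : %6.1f t/day\n", inlet_mass * gas_frac);
    printf("oil outlet   : %6.1f t/day\n", inlet_mass * oil_frac);
    printf("water outlet : %6.1f t/day\n", inlet_mass * water_frac);
    return 0;
}

Real plants also stage the pressure reduction, with the first-stage separator running far above the later stages, rather than performing a single split as this sketch does.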
Gas–oil separation plant
Chemistry,Engineering
874
6,665,926
https://en.wikipedia.org/wiki/Data%20Design%20System
Data Design System AS (DDS) supplies the construction industry with software tools for building information modelling (BIM). The company was founded in 1984 in Stavanger, Norway. In 2021, the company merged into Graphisoft in the Nemetschek Group. DDS is an active member of buildingSMART. DDS has its headquarters in Stavanger, Norway. Other locations include Oslo and Bergen (both in Norway). DDS has several subsidiaries, among them DDS Building Innovation AS and Data Design System GmbH. The main product line is tools for building services/MEP (mechanical, electrical, plumbing) engineers. The company distributes DDScad MEP, mainly in continental Europe, from its office in Ascheberg, Germany. The company also develops software tools for the design and production of timber-frame buildings, DDScad Architect & Construction, from its office in Stavanger. See also Comparison of CAD editors for AEC Comparison of CAD, CAM and CAE file viewers References External links Building information modeling Computer-aided design software Computer-aided design software for Windows Software companies of Norway Software companies established in 1984
Data Design System
Engineering
234
53,299,882
https://en.wikipedia.org/wiki/Sharp%20pocket%20computer%20character%20sets
The Sharp pocket computer character sets are a number of 8-bit character sets used by various Sharp pocket computers and calculators in the 1980s and mid 1990s. Character sets PC-12xx and PC-14xx series The Sharp PC-14xx series (like the Sharp PC-1403 (1986), PC-1403H or PC-1475) uses an 8-bit extended ASCII character set. With minor exceptions the lower half resembles the 7-bit ASCII character set. The upper half contains a full set of half-width Katakana glyphs as well as a number of graphical and mathematical symbols. The Japanese glyphs are not documented and are available only after enabling an undocumented Japanese mode. PC-150x series The Sharp PC-1500 series uses a 7-bit character set derived from ASCII. PC-160x series The Sharp PC-1600 supports two character sets. In "MODE 0", the character set resembles code page 437, whereas in "MODE 1" certain code points are changed to become compatible with the character set of the predecessor, the PC-1500. PC-E220 series The Sharp PC-E220 uses an 8-bit character set where the lower half resembles ASCII and the upper half contains various Greek letters, super- and subscript digits as well as various mathematical symbols. PC-E500 series The Sharp PC-E500 (1989) and PC-E500S (1995) use an 8-bit character set almost identical to the IBM PC code page 437. See also Calculator character sets Notes References Calculator character sets
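Since the PC-E500 set is described above as almost identical to IBM code page 437, a byte-to-Unicode converter can be sketched by treating the lower half as ASCII and the upper half as CP437. The C sketch below fills in only a handful of representative CP437 entries from the Greek-letter and maths-symbol region; the full 128-entry table is omitted, and a faithful PC-E500 converter would also have to handle the code points where the Sharp set deviates from CP437, which this sketch deliberately does not attempt.

/* Sketch: map a PC-E500-style byte to a Unicode code point by treating it
 * as code page 437.  Only a few upper-half entries are filled in here as
 * examples; the remaining entries return U+FFFD in this sketch.          */
#include <stdio.h>

static unsigned cp437_upper(unsigned char b)
{
    switch (b) {                      /* representative CP437 values only  */
    case 0xE0: return 0x03B1;         /* GREEK SMALL LETTER ALPHA           */
    case 0xE3: return 0x03C0;         /* GREEK SMALL LETTER PI              */
    case 0xF1: return 0x00B1;         /* PLUS-MINUS SIGN                    */
    case 0xF8: return 0x00B0;         /* DEGREE SIGN                        */
    case 0xFB: return 0x221A;         /* SQUARE ROOT                        */
    default:   return 0xFFFD;         /* entry not filled in in this sketch */
    }
}

static unsigned decode_byte(unsigned char b)
{
    return (b < 0x80) ? b : cp437_upper(b);   /* lower half is plain ASCII */
}

int main(void)
{
    unsigned char sample[] = {0x41, 0xE3, 0xF8};   /* 'A', pi, degree sign */
    for (size_t i = 0; i < sizeof sample; i++)
        printf("0x%02X -> U+%04X\n", sample[i], decode_byte(sample[i]));
    return 0;
}

The same table-driven approach could be reused for a PC-1600 "MODE 0" dump, since that mode also resembles code page 437 according to the article, but the PC-14xx, PC-150x and PC-E220 sets would each need their own tables.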
Sharp pocket computer character sets
Mathematics
356
15,514,744
https://en.wikipedia.org/wiki/Enabling
In psychotherapy and mental health, enabling is the encouragement of some behaviour, especially if said behaviour is either particularly positive or dysfunctional. Positive As a positive term, "enabling" describes patterns of interaction which allow individuals to develop and grow in a healthy direction. These patterns may be on any scale, for example within the family. Negative In a negative sense, "enabling" can describe dysfunctional behavior approaches that are intended to help resolve a specific problem but, in fact, may perpetuate or exacerbate the problem. A common theme of enabling in this latter sense is that third parties take responsibility or blame, or make accommodations for a person's ineffective or harmful conduct (often with the best of intentions, or from fear or insecurity which inhibits action). The practical effect is that the person themselves does not have to do so, and is shielded from awareness of the harm it may do, and the need or pressure to change. Codependency Codependency is a theory that attempts to explain imbalanced relationships in which one person enables another person's self-destructive behavior such as addiction, poor mental health, immaturity, irresponsibility, or under-achievement. Enabling may be observed in the relationship between a person with a substance use disorder and their partner, spouse or a parent. Enabling behaviors may include making excuses that prevent others from holding the person accountable, or cleaning up messes that occur in the wake of their impaired judgment. Enabling may prevent psychological growth in the person being enabled, and may contribute to negative symptoms in the enabler. Enabling may be driven by concern for retaliation, or fear of consequence to the person with the substance use disorder, such as job loss, injury or suicide. A parent may allow an addicted adult child to live at home without contributing to the household such as by helping with chores, and be manipulated by the child's excuses, emotional attacks, and threats of self-harm. Abuse In the context of abuse, enablers are distinct from flying monkeys (proxy abusers). Enablers allow or cover for the abuser's own bad behavior while flying monkeys actually perpetrate bad behavior to a third party on their behalf. Padilla et al. (2007), in analyzing destructive leadership, distinguished between conformers and colluders, in which the latter are those who actively participate in the destructive behavior. Emotional abuse is a brainwashing method that over time can turn someone into an enabler. While the abuser often plays the victim, it is quite common for the true victim to believe that he or she is responsible for the abuse and thus must adapt and adjust to it. Examples of enabling in an abusive context are as follows: Making excuses for another's violent rages. Cleaning up someone else's mess. Hiding an abuser's dysfunctional actions from public view. Absorbing the negative consequences of someone else's bad choices. Paying off another person's debts. Refusing to confront or protect oneself when exposed to physical, emotional or verbal assault. Regurgitating the abuser's 'facts' / version of reality to a third party without seeking evidence. Revictimising the abuser's other victims with behaviour such as gaslighting, denial, or scapegoating. Triangulation (playing the part in an abuse triangle as either victim or protector, but never seeing themselves as perpetrator). 
Keeping secrets for the abuser such as affairs, extramarital children, alcoholism, gambling, incest. Projecting / passing on their own shame (the shame projected on to them by the abuser) to third parties. Giving up/over knowledge of their finances to be taken care of by the abuser (oftentimes resulting in considerable debt). See also Personal boundaries Sycophancy References Motivation Counseling Behavior modification Behavioural syndromes associated with physiological disturbances and physical factors Interpersonal relationships Narcissism Abuse
Enabling
Biology
804
47,532,691
https://en.wikipedia.org/wiki/Tillie%20the%20All-Time%20Teller
Tillie the All-Time Teller was one of the first ATMs, run by the First National Bank of Atlanta and considered to be one of the most successful ATMs in the banking industry. Tillie the All-Time Teller had a picture of a smiling blonde girl on the front of the machine to suggest it was user-friendly, had an apparent personality, and could greet people by name. Many banks hired women dressed as this person to show their customers how to use Tillie the All-Time Teller. History It was introduced by the First National Bank of Atlanta on May 15, 1974. It started out at only eleven locations. They entered commercial service on May 20, 1974. Starting in 1977, other banks purchased rights to use Tillie the All-Time Teller as their ATM system. By March 21, 1981, they were available at 70 locations, including on a college campus. On October 15, 2013, Susan Bennett revealed that she played the voice for Tillie the All-Time Teller, noting that she "started [her] life as a machine quite young." Appearance Tillie the All-Time Teller machines were red and gold to make them look more attractive. On the bottom left was the place to enter an "access card," which featured a cartoon character. Above that was a place to enter a "secret code" that the customer chose. On the bottom center was a picture of a cartoon blonde girl with china-blue eyes and a red hat. Above that was the place it handed out cash and coins. On the top right was the place to enter a desired amount of money. How it worked Customers could use Tillie the All-Time Teller by following these steps: Inserting an "Alltime Tellercard" Following instructions presented on its TV screen Entering a "secret code" and entering a desired amount of money on the "money keyboard" ($200 was the limit) The machine would automatically hand out the desired amount of money. Entering a transaction envelope into the deposit slot Advertising There were a variety of advertisements made by the First National Bank of Atlanta in order to promote Tillie the All-Time Teller. These include: In one of the advertisements, a blonde woman wearing a red and white polka-dotted dress sang "I'm Tillie the All-Time Teller, I work for First National Bank" while standing beside the machine. In another advertisement, a balding, middle-aged man approached the machine singing "If You Knew Tillie" to the tune of "If You Knew Susie." For Tillie the All-Time Teller's third anniversary, the machine was featured in an advertisement in which "She's a Jolly Good Teller" was sung. It originally aired on KSEL-TV and KAMC. In popular culture The word "Tillie" has become slang for any ATM. References Further reading Advertisement: I'm Tillie : Florida National's Alltime Teller, Page 10A, Lakeland Ledger - Oct 17, 1979 "Tillie The All Time Teller" from Wells Fargo (from archive.org) Automated teller machines Banking equipment Banking technology Computer-related introductions in 1974 Embedded systems Wells Fargo
Tillie the All-Time Teller
Technology,Engineering
648
50,781,701
https://en.wikipedia.org/wiki/Cache%20control%20instruction
In computing, a cache control instruction is a hint embedded in the instruction stream of a processor intended to improve the performance of hardware caches, using foreknowledge of the memory access pattern supplied by the programmer or compiler. They may reduce cache pollution, reduce bandwidth requirement, bypass latencies, by providing better control over the working set. Most cache control instructions do not affect the semantics of a program, although some can. Examples Several such instructions, with variants, are supported by several processor instruction set architectures, such as ARM, MIPS, PowerPC, and x86. Prefetch Also termed data cache block touch, the effect is to request loading the cache line associated with a given address. This is performed by the PREFETCH instruction in the x86 instruction set. Some variants bypass higher levels of the cache hierarchy, which is useful in a 'streaming' context for data that is traversed once, rather than held in the working set. The prefetch should occur sufficiently far ahead in time to mitigate the latency of memory access, for example in a loop traversing memory linearly. The GNU Compiler Collection intrinsic function __builtin_prefetch can be used to invoke this in the programming languages C or C++. Instruction prefetch A variant of prefetch for the instruction cache. Data cache block allocate zero This hint is used to prepare cache lines before overwriting the contents completely. In this example, the CPU needn't load anything from main memory. The semantic effect is equivalent to an aligned memset of a cache-line sized block to zero, but the operation is effectively free. Data cache block invalidate This hint is used to discard cache lines, without committing their contents to main memory. Care is needed since incorrect results are possible. Unlike other cache hints, the semantics of the program are significantly modified. This is used in conjunction with allocate zero for managing temporary data. This saves unneeded main memory bandwidth and cache pollution. Data cache block flush This hint requests the immediate eviction of a cache line, making way for future allocations. It is used when it is known that data is no longer part of the working set. Other hints Some processors support a variant of load–store instructions that also imply cache hints. An example is load last in the PowerPC instruction set, which suggests that data will only be used once, i.e., the cache line in question may be pushed to the head of the eviction queue, whilst keeping it in use if still directly needed. Alternatives Automatic prefetch In recent times, cache control instructions have become less popular as increasingly advanced application processor designs from Intel and ARM devote more transistors to accelerating code written in traditional languages, e.g., performing automatic prefetch, with hardware to detect linear access patterns on the fly. However the techniques may remain valid for throughput-oriented processors, which have a different throughput vs latency tradeoff, and may prefer to devote more area to execution units. Scratchpad memory Some processors support scratchpad memory into which temporaries may be put, and direct memory access (DMA) to transfer data to and from main memory when needed. This approach is used by the Cell processor, and some embedded systems. 
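As a rough illustration of the scratchpad model, the following C sketch processes a large array in fixed-size tiles using two small local buffers; a plain array stands in for the local store and memcpy stands in for a DMA transfer, so the buffer size and the double-buffering scheme are illustrative assumptions rather than any particular processor's API.

```c
#include <stdio.h>
#include <string.h>

#define TILE 256                       /* size of the simulated local-store tile */

/* Process one tile held in the "scratchpad"; here, just sum it. */
static long sum_tile(const int *tile, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++)
        s += tile[i];
    return s;
}

int main(void)
{
    enum { N = 4096 };                 /* N is a multiple of TILE for simplicity  */
    static int big[N];                 /* stands in for main memory               */
    static int local[2][TILE];         /* stands in for two scratchpad buffers    */
    for (int i = 0; i < N; i++)
        big[i] = i % 7;

    long total = 0;
    int cur = 0;
    /* "DMA in" the first tile, then alternate buffers: while one tile is being
       processed, the next would be transferred in the background on real
       hardware; memcpy is a synchronous stand-in for that transfer. */
    memcpy(local[cur], &big[0], sizeof local[cur]);
    for (int off = 0; off < N; off += TILE) {
        int nxt = 1 - cur;
        if (off + TILE < N)
            memcpy(local[nxt], &big[off + TILE], sizeof local[nxt]);
        total += sum_tile(local[cur], TILE);
        cur = nxt;
    }
    printf("total = %ld\n", total);
    return 0;
}
```

On real scratchpad hardware the memcpy calls would be asynchronous DMA transfers that overlap with the computation on the other buffer.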
These allow greater control over memory traffic and locality (as the working set is managed by explicit transfers), and eliminates the need for expensive cache coherency in a manycore machine. The disadvantage is it requires significantly different programming techniques to use. It is very hard to adapt programs written in traditional languages such as C and C++ which present the programmer with a uniform view of a large address space (which is an illusion simulated by caches). A traditional microprocessor can more easily run legacy code, which may then be accelerated by cache control instructions, whilst a scratchpad based machine requires dedicated coding from the ground up to even function. Cache control instructions are specific to a certain cache line size, which in practice may vary between generations of processors in the same architectural family. Caches may also help coalescing reads and writes from less predictable access patterns (e.g., during texture mapping), whilst scratchpad DMA requires reworking algorithms for more predictable 'linear' traversals. As such scratchpads are generally harder to use with traditional programming models, although dataflow models (such as TensorFlow) might be more suitable. Vector fetch Vector processors (for example modern graphics processing unit (GPUs) and Xeon Phi) use massive parallelism to achieve high throughput whilst working around memory latency (reducing the need for prefetching). Many read operations are issued in parallel, for subsequent invocations of a compute kernel; calculations may be put on hold awaiting future data, whilst the execution units are devoted to working on data from past requests data that has already turned up. This is easier for programmers to leverage in conjunction with the appropriate programming models (compute kernels), but harder to apply to general purpose programming. The disadvantage is that many copies of temporary states may be held in the local memory of a processing element, awaiting data in flight. References Computer architecture
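To make the prefetch hint discussed earlier concrete, the C fragment below sums an array while requesting data a fixed distance ahead of the current index. __builtin_prefetch is the GCC/Clang intrinsic named above; the prefetch distance of 16 elements and the locality hint are arbitrary illustrative choices rather than tuned values, and on hardware with a capable automatic prefetcher the explicit hint may make no measurable difference.

```c
#include <stdio.h>
#include <stdlib.h>

#define N              (1 << 20)
#define PREFETCH_AHEAD 16   /* illustrative distance; the useful value depends on
                               memory latency and the cost of the loop body      */

/* Sum an array, requesting each element a few iterations before it is needed.
   The second argument of __builtin_prefetch (0 = read) and the third (temporal
   locality, 0..3) are optional. */
static long long sum_with_prefetch(const int *a, size_t n)
{
    long long total = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_AHEAD < n)
            __builtin_prefetch(&a[i + PREFETCH_AHEAD], /*rw=*/0, /*locality=*/1);
        total += a[i];
    }
    return total;
}

int main(void)
{
    int *a = malloc(N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < N; i++)
        a[i] = (int)(i & 0xff);
    printf("sum = %lld\n", sum_with_prefetch(a, N));
    free(a);
    return 0;
}
```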
Cache control instruction
Technology,Engineering
1,060
55,017
https://en.wikipedia.org/wiki/Fusion%20power
Fusion power is a proposed form of power generation that would generate electricity by using heat from nuclear fusion reactions. In a fusion process, two lighter atomic nuclei combine to form a heavier nucleus, while releasing energy. Devices designed to harness this energy are known as fusion reactors. Research into fusion reactors began in the 1940s, but as of 2024, no device has reached net power, although net positive reactions have been achieved. Fusion processes require fuel and a confined environment with sufficient temperature, pressure, and confinement time to create a plasma in which fusion can occur. The combination of these figures that results in a power-producing system is known as the Lawson criterion. In stars the most common fuel is hydrogen, and gravity provides extremely long confinement times that reach the conditions needed for fusion energy production. Proposed fusion reactors generally use heavy hydrogen isotopes such as deuterium and tritium (and especially a mixture of the two), which react more easily than protium (the most common hydrogen isotope) and produce a helium nucleus and an energized neutron, to allow them to reach the Lawson criterion requirements with less extreme conditions. Most designs aim to heat their fuel to around 100 million kelvins, which presents a major challenge in producing a successful design. Tritium is extremely rare on Earth, having a half life of only ~12.3 years. Consequently, during the operation of envisioned fusion reactors, known as breeder reactors, helium cooled pebble beds (HCPBs) are subjected to neutron fluxes to generate tritium to complete the fuel cycle. As a source of power, nuclear fusion has a number of potential advantages compared to fission. These include reduced radioactivity in operation, little high-level nuclear waste, ample fuel supplies (assuming tritium breeding or some forms of aneutronic fuels), and increased safety. However, the necessary combination of temperature, pressure, and duration has proven to be difficult to produce in a practical and economical manner. A second issue that affects common reactions is managing neutrons that are released during the reaction, which over time degrade many common materials used within the reaction chamber. Fusion researchers have investigated various confinement concepts. The early emphasis was on three main systems: z-pinch, stellarator, and magnetic mirror. The current leading designs are the tokamak and inertial confinement (ICF) by laser. Both designs are under research at very large scales, most notably the ITER tokamak in France and the National Ignition Facility (NIF) laser in the United States. Researchers are also studying other designs that may offer less expensive approaches. Among these alternatives, there is increasing interest in magnetized target fusion and inertial electrostatic confinement, and new variations of the stellarator. Background Mechanism Fusion reactions occur when two or more atomic nuclei come close enough for long enough that the nuclear force pulling them together exceeds the electrostatic force pushing them apart, fusing them into heavier nuclei. For nuclei heavier than iron-56, the reaction is endothermic, requiring an input of energy. The heavy nuclei bigger than iron have many more protons resulting in a greater repulsive force. For nuclei lighter than iron-56, the reaction is exothermic, releasing energy when they fuse. 
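The released or absorbed energy can be made precise with standard mass–energy bookkeeping; the block below is a generic statement of the reaction Q-value, with the deuterium–tritium figure quoted later in the article used only as a consistency check.

```latex
% Q-value of a nuclear reaction: positive (exothermic) for fusion of nuclei
% lighter than iron-56, negative (endothermic) beyond it.
Q = \Big( \sum_{\text{reactants}} m \;-\; \sum_{\text{products}} m \Big) c^{2}
% For deuterium-tritium fusion, Q is about 17.6 MeV per reaction, which appears
% later in the article as 3.5 MeV carried by the helium nucleus and 14.1 MeV by
% the neutron.
```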
Since hydrogen has a single proton in its nucleus, it requires the least effort to attain fusion, and yields the most net energy output. Also, since it has only one electron, hydrogen is the easiest fuel to fully ionize. The repulsive electrostatic interaction between nuclei operates across larger distances than the strong force, which has a range of roughly one femtometer—the diameter of a proton or neutron. The fuel atoms must be supplied enough kinetic energy to approach one another closely enough for the strong force to overcome the electrostatic repulsion in order to initiate fusion. The "Coulomb barrier" is the quantity of kinetic energy required to move the fuel atoms near enough. Atoms can be heated to extremely high temperatures or accelerated in a particle accelerator to produce this energy. An atom loses its electrons once it is heated past its ionization energy. An ion is the name for the resultant bare nucleus. The result of this ionization is plasma, which is a heated cloud of ions and free electrons that were formerly bound to them. Plasmas are electrically conducting and magnetically controlled because the charges are separated. This is used by several fusion devices to confine the hot particles. Cross section A reaction's cross section, denoted σ, measures the probability that a fusion reaction will happen. This depends on the relative velocity of the two nuclei. Higher relative velocities generally increase the probability, but the probability begins to decrease again at very high energies. In a plasma, particle velocity can be characterized using a probability distribution. If the plasma is thermalized, the distribution looks like a Gaussian curve, or Maxwell–Boltzmann distribution. In this case, it is useful to use the average particle cross section over the velocity distribution. This is entered into the volumetric fusion rate: P_fusion = n_A n_B ⟨σv⟩ E_fusion, where: P_fusion is the energy made by fusion, per time and volume; n is the number density of species A or B, of the particles in the volume; ⟨σv⟩ is the cross section of that reaction, averaged over all the velocities of the two species; E_fusion is the energy released by that fusion reaction. Lawson criterion The Lawson criterion considers the energy balance between the energy produced in fusion reactions and the energy being lost to the environment. In order to generate usable energy, a system would have to produce more energy than it loses. Lawson assumed an energy balance, shown below: P_net = η_capture (P_fusion − P_conduction − P_radiation), where: P_net is the net power from fusion; η_capture is the efficiency of capturing the output of the fusion; P_fusion is the rate of energy generated by the fusion reactions; P_conduction is the conduction losses as energetic mass leaves the plasma; P_radiation is the radiation losses as energy leaves as light. The rate of fusion, and thus P_fusion, depends on the temperature and density of the plasma. The plasma loses energy through conduction and radiation. Conduction occurs when ions, electrons, or neutrals impact other substances, typically a surface of the device, and transfer a portion of their kinetic energy to the other atoms. The rate of conduction is also based on the temperature and density. Radiation is energy that leaves the cloud as light. Radiation also increases with temperature as well as the mass of the ions. Fusion power systems must operate in a region where the rate of fusion is higher than the losses. Triple product: density, temperature, time The Lawson criterion argues that a machine holding a thermalized and quasi-neutral plasma has to generate enough energy to overcome its energy losses.
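As a numeric illustration of the volumetric rate expression above, the short C program below multiplies out round numbers; the densities and the ⟨σv⟩ value are order-of-magnitude assumptions (for D-T, ⟨σv⟩ is of order 10⁻²² m³/s near 10 keV), not parameters of any actual machine.

```c
#include <stdio.h>

/* Volumetric D-T fusion power density, P = n_D * n_T * <sigma v> * E_fusion,
   following the rate expression above. All numbers are illustrative assumptions. */
int main(void)
{
    const double n_d     = 5.0e19;   /* deuteron number density, m^-3 (assumed)     */
    const double n_t     = 5.0e19;   /* triton number density, m^-3 (assumed)       */
    const double sigma_v = 1.0e-22;  /* <sigma v> in m^3/s, order-of-magnitude value
                                        for D-T near 10 keV (assumed)               */
    const double e_fus   = 17.6e6 * 1.602e-19;  /* 17.6 MeV per reaction, in joules */

    double p = n_d * n_t * sigma_v * e_fus;     /* watts per cubic metre            */
    printf("fusion power density ~ %.2e W/m^3\n", p);
    return 0;
}
```

With these inputs the result is roughly 0.7 MW per cubic metre, which is why the discussion that follows turns to how high the density, temperature, and confinement time must be pushed.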
The amount of energy released in a given volume is a function of the temperature, and thus the reaction rate on a per-particle basis, the density of particles within that volume, and finally the confinement time, the length of time that energy stays within the volume. This is known as the "triple product": the plasma density, temperature, and confinement time. In magnetic confinement, the density is low, on the order of a "good vacuum". For instance, in the ITER device the fuel density is about , which is about one-millionth atmospheric density. This means that the temperature and/or confinement time must increase. Fusion-relevant temperatures have been achieved using a variety of heating methods that were developed in the early 1970s. In modern machines, , the major remaining issue was the confinement time. Plasmas in strong magnetic fields are subject to a number of inherent instabilities, which must be suppressed to reach useful durations. One way to do this is to simply make the reactor volume larger, which reduces the rate of leakage due to classical diffusion. This is why ITER is so large. In contrast, inertial confinement systems approach useful triple product values via higher density, and have short confinement intervals. In NIF, the initial frozen hydrogen fuel load has a density less than water that is increased to about 100 times the density of lead. In these conditions, the rate of fusion is so high that the fuel fuses in the microseconds it takes for the heat generated by the reactions to blow the fuel apart. Although NIF is also large, this is a function of its "driver" design, not inherent to the fusion process. Energy capture Multiple approaches have been proposed to capture the energy that fusion produces. The simplest is to heat a fluid. The commonly targeted D-T reaction releases much of its energy as fast-moving neutrons. Electrically neutral, the neutron is unaffected by the confinement scheme. In most designs, it is captured in a thick "blanket" of lithium surrounding the reactor core. When struck by a high-energy neutron, the blanket heats up. It is then actively cooled with a working fluid that drives a turbine to produce power. Another design proposed to use the neutrons to breed fission fuel in a blanket of nuclear waste, a concept known as a fission-fusion hybrid. In these systems, the power output is enhanced by the fission events, and power is extracted using systems like those in conventional fission reactors. Designs that use other fuels, notably the proton-boron aneutronic fusion reaction, release much more of their energy in the form of charged particles. In these cases, power extraction systems based on the movement of these charges are possible. Direct energy conversion was developed at Lawrence Livermore National Laboratory (LLNL) in the 1980s as a method to maintain a voltage directly using fusion reaction products. This has demonstrated energy capture efficiency of 48 percent. Plasma behavior Plasma is an ionized gas that conducts electricity. In bulk, it is modeled using magnetohydrodynamics, which is a combination of the Navier–Stokes equations governing fluids and Maxwell's equations governing how magnetic and electric fields behave. Fusion exploits several plasma properties, including: Self-organizing plasma conducts electric and magnetic fields. Its motions generate fields that can in turn contain it. Diamagnetic plasma can generate its own internal magnetic field. This can reject an externally applied magnetic field, making it diamagnetic. 
Magnetic mirrors can reflect plasma when it moves from a low to high density field.:24 Methods Magnetic confinement Tokamak: the most well-developed and well-funded approach. This method drives hot plasma around in a magnetically confined torus, with an internal current. When completed, ITER will become the world's largest tokamak. As of September 2018 an estimated 226 experimental tokamaks were either planned, decommissioned or operating (50) worldwide. Spherical tokamak: also known as spherical torus. A variation on the tokamak with a spherical shape. Stellarator: Twisted rings of hot plasma. The stellarator attempts to create a natural twisted plasma path, using external magnets. Stellarators were developed by Lyman Spitzer in 1950 and evolved into four designs: Torsatron, Heliotron, Heliac and Helias. One example is Wendelstein 7-X, a German device. It is the world's largest stellarator. Internal rings: Stellarators create a twisted plasma using external magnets, while tokamaks do so using a current induced in the plasma. Several classes of designs provide this twist using conductors inside the plasma. Early calculations showed that collisions between the plasma and the supports for the conductors would remove energy faster than fusion reactions could replace it. Modern variations, including the Levitated Dipole Experiment (LDX), use a solid superconducting torus that is magnetically levitated inside the reactor chamber. Magnetic mirror: Developed by Richard F. Post and teams at Lawrence Livermore National Laboratory (LLNL) in the 1960s. Magnetic mirrors reflect plasma back and forth in a line. Variations included the Tandem Mirror, magnetic bottle and the biconic cusp. A series of mirror machines were built by the US government in the 1970s and 1980s, principally at LLNL. However, calculations in the 1970s estimated it was unlikely these would ever be commercially useful. Bumpy torus: A number of magnetic mirrors are arranged end-to-end in a toroidal ring. Any fuel ions that leak out of one are confined in a neighboring mirror, permitting the plasma pressure to be raised arbitrarily high without loss. An experimental facility, the ELMO Bumpy Torus or EBT was built and tested at Oak Ridge National Laboratory (ORNL) in the 1970s. Field-reversed configuration: This device traps plasma in a self-organized quasi-stable structure; where the particle motion makes an internal magnetic field which then traps itself. Spheromak: Similar to a field-reversed configuration, a semi-stable plasma structure made by using the plasmas' self-generated magnetic field. A spheromak has both toroidal and poloidal fields, while a field-reversed configuration has no toroidal field. Dynomak is a spheromak that is formed and sustained using continuous magnetic flux injection. Reversed field pinch: Here the plasma moves inside a ring. It has an internal magnetic field. Moving out from the center of this ring, the magnetic field reverses direction. Inertial confinement Indirect drive: Lasers heat a structure known as a Hohlraum that becomes so hot it begins to radiate x-ray light. These x-rays heat a fuel pellet, causing it to collapse inward to compress the fuel. The largest system using this method is the National Ignition Facility, followed closely by Laser Mégajoule. Direct drive: Lasers directly heat the fuel pellet. Notable direct drive experiments have been conducted at the Laboratory for Laser Energetics (LLE) and the GEKKO XII facilities. 
Good implosions require fuel pellets with close to a perfect shape in order to generate a symmetrical inward shock wave that produces the high-density plasma. Fast ignition: This method uses two laser blasts. The first blast compresses the fusion fuel, while the second ignites it. this technique had lost favor for energy production. Magneto-inertial fusion or Magnetized Liner Inertial Fusion: This combines a laser pulse with a magnetic pinch. The pinch community refers to it as magnetized liner inertial fusion while the ICF community refers to it as magneto-inertial fusion. Ion Beams: Ion beams replace laser beams to heat the fuel. The main difference is that the beam has momentum due to mass, whereas lasers do not. As of 2019 it appears unlikely that ion beams can be sufficiently focused spatially and in time. Z-machine: Sends an electric current through thin tungsten wires, heating them sufficiently to generate x-rays. Like the indirect drive approach, these x-rays then compress a fuel capsule. Magnetic or electric pinches Z-pinch: A current travels in the z-direction through the plasma. The current generates a magnetic field that compresses the plasma. Pinches were the first method for human-made controlled fusion. The z-pinch has inherent instabilities that limit its compression and heating to values too low for practical fusion. The largest such machine, the UK's ZETA, was the last major experiment of the sort. The problems in z-pinch led to the tokamak design. The dense plasma focus is a possibly superior variation. Theta-pinch: A current circles around the outside of a plasma column, in the theta direction. This induces a magnetic field running down the center of the plasma, as opposed to around it. The early theta-pinch device Scylla was the first to conclusively demonstrate fusion, but later work demonstrated it had inherent limits that made it uninteresting for power production. Sheared Flow Stabilized Z-Pinch: Research at the University of Washington under Uri Shumlak investigated the use of sheared-flow stabilization to smooth out the instabilities of Z-pinch reactors. This involves accelerating neutral gas along the axis of the pinch. Experimental machines included the FuZE and Zap Flow Z-Pinch experimental reactors. In 2017, British technology investor and entrepreneur Benj Conway, together with physicists Brian Nelson and Uri Shumlak, co-founded Zap Energy to attempt to commercialize the technology for power production. Screw Pinch: This method combines a theta and z-pinch for improved stabilization. Inertial electrostatic confinement Fusor: An electric field heats ions to fusion conditions. The machine typically uses two spherical cages, a cathode inside the anode, inside a vacuum. These machines are not considered a viable approach to net power because of their high conduction and radiation losses. They are simple enough to build that amateurs have fused atoms using them. Polywell: Attempts to combine magnetic confinement with electrostatic fields, to avoid the conduction losses generated by the cage. Other Magnetized target fusion: Confines hot plasma using a magnetic field and squeezes it using inertia. Examples include LANL FRX-L machine, General Fusion (piston compression with liquid metal liner), HyperJet Fusion (plasma jet compression with plasma liner). Uncontrolled: Fusion has been initiated by man, using uncontrolled fission explosions to stimulate fusion. Early proposals for fusion power included using bombs to initiate reactions. See Project PACER. 
Colliding beam fusion: A beam of high energy particles fired at another beam or target can initiate fusion. This was used in the 1970s and 1980s to study the cross sections of fusion reactions. However beam systems cannot be used for power because keeping a beam coherent takes more energy than comes from fusion. Muon-catalyzed fusion: This approach replaces electrons in diatomic molecules of isotopes of hydrogen with muons—more massive particles with the same electric charge. Their greater mass compresses the nuclei enough such that the strong interaction can cause fusion. As of 2007 producing muons required more energy than can be obtained from muon-catalyzed fusion. Lattice confinement fusion: Lattice confinement fusion (LCF) is a type of nuclear fusion in which deuteron-saturated metals are exposed to gamma radiation or ion beams, such as in an IEC fusor, avoiding the confined high-temperature plasmas used in other methods of fusion. Common tools Many approaches, equipment, and mechanisms are employed across multiple projects to address fusion heating, measurement, and power production. Machine learning A deep reinforcement learning system has been used to control a tokamak-based reactor. The system was able to manipulate the magnetic coils to manage the plasma. The system was able to continuously adjust to maintain appropriate behavior (more complex than step-based systems). In 2014, Google began working with California-based fusion company TAE Technologies to control the Joint European Torus (JET) to predict plasma behavior. DeepMind has also developed a control scheme with TCV. Heating Electrostatic heating: an electric field can do work on charged ions or electrons, heating them. Neutral beam injection: hydrogen is ionized and accelerated by an electric field to form a charged beam that is shone through a source of neutral hydrogen gas towards the plasma which itself is ionized and contained by a magnetic field. Some of the intermediate hydrogen gas is accelerated towards the plasma by collisions with the charged beam while remaining neutral: this neutral beam is thus unaffected by the magnetic field and so reaches the plasma. Once inside the plasma the neutral beam transmits energy to the plasma by collisions which ionize it and allow it to be contained by the magnetic field, thereby both heating and refueling the reactor in one operation. The remainder of the charged beam is diverted by magnetic fields onto cooled beam dumps. Radio frequency heating: a radio wave causes the plasma to oscillate (i.e., microwave oven). This is also known as electron cyclotron resonance heating, using for example gyrotrons, or dielectric heating. Magnetic reconnection: when plasma gets dense, its electromagnetic properties can change, which can lead to magnetic reconnection. Reconnection helps fusion because it instantly dumps energy into a plasma, heating it quickly. Up to 45% of the magnetic field energy can heat the ions. Magnetic oscillations: varying electric currents can be supplied to magnetic coils that heat plasma confined within a magnetic wall. Antiproton annihilation: antiprotons injected into a mass of fusion fuel can induce thermonuclear reactions. This possibility as a method of spacecraft propulsion, known as antimatter-catalyzed nuclear pulse propulsion, was investigated at Pennsylvania State University in connection with the proposed AIMStar project. Measurement The diagnostics of a fusion scientific reactor are extremely complex and varied. 
The diagnostics required for a fusion power reactor will be various but less complicated than those of a scientific reactor as by the time of commercialization, many real-time feedback and control diagnostics will have been perfected. However, the operating environment of a commercial fusion reactor will be harsher for diagnostic systems than in a scientific reactor because continuous operations may involve higher plasma temperatures and higher levels of neutron irradiation. In many proposed approaches, commercialization will require the additional ability to measure and separate diverter gases, for example helium and impurities, and to monitor fuel breeding, for instance the state of a tritium breeding liquid lithium liner. The following are some basic techniques. Flux loop: a loop of wire is inserted into the magnetic field. As the field passes through the loop, a current is made. The current measures the total magnetic flux through that loop. This has been used on the National Compact Stellarator Experiment, the polywell, and the LDX machines. A Langmuir probe, a metal object placed in a plasma, can be employed. A potential is applied to it, giving it a voltage against the surrounding plasma. The metal collects charged particles, drawing a current. As the voltage changes, the current changes. This makes an IV Curve. The IV-curve can be used to determine the local plasma density, potential and temperature. Thomson scattering: "Light scatters" from plasma can be used to reconstruct plasma behavior, including density and temperature. It is common in Inertial confinement fusion, Tokamaks, and fusors. In ICF systems, firing a second beam into a gold foil adjacent to the target makes x-rays that traverse the plasma. In tokamaks, this can be done using mirrors and detectors to reflect light. Neutron detectors: Several types of neutron detectors can record the rate at which neutrons are produced. X-ray detectors Visible, IR, UV, and X-rays are emitted anytime a particle changes velocity. If the reason is deflection by a magnetic field, the radiation is cyclotron radiation at low speeds and synchrotron radiation at high speeds. If the reason is deflection by another particle, plasma radiates X-rays, known as Bremsstrahlung radiation. Power production Neutron blankets absorb neutrons, which heats the blanket. Power can be extracted from the blanket in various ways: Steam turbines can be driven by heat transferred into a working fluid that turns into steam, driving electric generators. Neutron blankets: These neutrons can regenerate spent fission fuel. Tritium can be produced using a breeder blanket of liquid lithium or a helium cooled pebble bed made of lithium-bearing ceramic pebbles. Direct conversion: The kinetic energy of a particle can be converted into voltage. It was first suggested by Richard F. Post in conjunction with magnetic mirrors, in the late 1960s. It has been proposed for Field-Reversed Configurations as well as Dense Plasma Focus devices. The process converts a large fraction of the random energy of the fusion products into directed motion. The particles are then collected on electrodes at various large electrical potentials. This method has demonstrated an experimental efficiency of 48 percent. Traveling-wave tubes pass charged helium atoms at several megavolts and just coming off the fusion reaction through a tube with a coil of wire around the outside. This passing charge at high voltage pulls electricity through the wire. 
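The megavolt figures in the last two power-production schemes follow from elementary electrostatics; the estimate below uses the 3.5 MeV alpha particle from D-T fusion purely as an example.

```latex
% Energy exchanged by a particle of charge q moving through a potential difference V:
\Delta E = qV
% An alpha particle carries charge 2e, so electrostatically stopping a 3.5 MeV alpha
% requires a potential difference of about
V \approx \frac{3.5\ \text{MeV}}{2e} \approx 1.75\ \text{MV}
% which is why direct-conversion collectors and the traveling-wave structures described
% above operate at megavolt-scale potentials.
```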
Confinement Confinement refers to all the conditions necessary to keep a plasma dense and hot long enough to undergo fusion. General principles: Equilibrium: The forces acting on the plasma must be balanced. One exception is inertial confinement, where the fusion must occur faster than the dispersal time. Stability: The plasma must be constructed so that disturbances will not lead to the plasma dispersing. Transport or conduction: The loss of material must be sufficiently slow. The plasma carries energy off with it, so rapid loss of material will disrupt fusion. Material can be lost by transport into different regions or conduction through a solid or liquid. To produce self-sustaining fusion, part of the energy released by the reaction must be used to heat new reactants and maintain the conditions for fusion. Magnetic confinement Magnetic Mirror Magnetic mirror effect. If a particle follows the field line and enters a region of higher field strength, the particles can be reflected. Several devices apply this effect. The most famous was the magnetic mirror machines, a series of devices built at LLNL from the 1960s to the 1980s. Other examples include magnetic bottles and Biconic cusp. Because the mirror machines were straight, they had some advantages over ring-shaped designs. The mirrors were easier to construct and maintain and direct conversion energy capture was easier to implement. Poor confinement has led this approach to be abandoned, except in the polywell design. Magnetic loops Magnetic loops bend the field lines back on themselves, either in circles or more commonly in nested toroidal surfaces. The most highly developed systems of this type are the tokamak, the stellarator, and the reversed field pinch. Compact toroids, especially the field-reversed configuration and the spheromak, attempt to combine the advantages of toroidal magnetic surfaces with those of a simply connected (non-toroidal) machine, resulting in a mechanically simpler and smaller confinement area. Inertial confinement Inertial confinement is the use of rapid implosion to heat and confine plasma. A shell surrounding the fuel is imploded using a direct laser blast (direct drive), a secondary x-ray blast (indirect drive), or heavy beams. The fuel must be compressed to about 30 times solid density with energetic beams. Direct drive can in principle be efficient, but insufficient uniformity has prevented success.:19–20 Indirect drive uses beams to heat a shell, driving the shell to radiate x-rays, which then implode the pellet. The beams are commonly laser beams, but ion and electron beams have been investigated.:182–193 Electrostatic confinement Electrostatic confinement fusion devices use electrostatic fields. The best known is the fusor. This device has a cathode inside an anode wire cage. Positive ions fly towards the negative inner cage, and are heated by the electric field in the process. If they miss the inner cage they can collide and fuse. Ions typically hit the cathode, however, creating prohibitory high conduction losses. Fusion rates in fusors are low because of competing physical effects, such as energy loss in the form of light radiation. Designs have been proposed to avoid the problems associated with the cage, by generating the field using a non-neutral cloud. These include a plasma oscillating device, a magnetically shielded-grid, a penning trap, the polywell, and the F1 cathode driver concept. 
Fuels The fuels considered for fusion power have all been light elements like the isotopes of hydrogen—protium, deuterium, and tritium. The deuterium and helium-3 reaction requires helium-3, an isotope of helium so scarce on Earth that it would have to be mined extraterrestrially or produced by other nuclear reactions. Ultimately, researchers hope to adopt the protium–boron-11 reaction, because it does not directly produce neutrons, although side reactions can. Deuterium, tritium The easiest nuclear reaction, at the lowest energy, is D+T: D + T → 4He (3.5 MeV) + n (14.1 MeV) This reaction is common in research, industrial and military applications, usually as a neutron source. Deuterium is a naturally occurring isotope of hydrogen and is commonly available. The large mass ratio of the hydrogen isotopes makes their separation easy compared to the uranium enrichment process. Tritium is a natural isotope of hydrogen, but because it has a short half-life of 12.32 years, it is hard to find, store, and produce, and it is expensive. Consequently, the deuterium-tritium fuel cycle requires the breeding of tritium from lithium using one of the following reactions: n + 6Li → T + 4He n + 7Li → T + 4He + n The reactant neutron is supplied by the D-T fusion reaction shown above, the reaction with the greatest energy yield. The reaction with 6Li is exothermic, providing a small energy gain for the reactor. The reaction with 7Li is endothermic, but does not consume the neutron. Neutron multiplication reactions are required to replace the neutrons lost to absorption by other elements. Leading candidate neutron multiplication materials are beryllium and lead, but the 7Li reaction helps to keep the neutron population high. Natural lithium is mainly 7Li, which has a low tritium production cross section compared to 6Li, so most reactor designs use breeding blankets with enriched 6Li. Drawbacks commonly attributed to D-T fusion power include: The supply of neutrons results in neutron activation of the reactor materials.:242 80% of the resultant energy is carried off by neutrons (the 14.1 MeV neutron's share of the 17.6 MeV released per reaction), which limits the use of direct energy conversion. It requires the radioisotope tritium. Tritium may leak from reactors. Some estimates suggest that this would represent a substantial environmental radioactivity release. The neutron flux expected in a commercial D-T fusion reactor is about 100 times that of fission power reactors, posing problems for material design. After a series of D-T tests at JET, the vacuum vessel was sufficiently radioactive that it required remote handling for the year following the tests. In a production setting, the neutrons would react with lithium in the breeding blanket composed of lithium ceramic pebbles or liquid lithium, yielding tritium. The energy of the neutrons ends up in the lithium, which would then be transferred to drive electrical production. The lithium blanket protects the outer portions of the reactor from the neutron flux. Newer designs, the advanced tokamak in particular, use lithium inside the reactor core as a design element. The plasma interacts directly with the lithium, preventing a problem known as "recycling". The advantage of this design was demonstrated in the Lithium Tokamak Experiment. Deuterium Fusing two deuterium nuclei is the second easiest fusion reaction. The reaction has two branches that occur with nearly equal probability: D + D → T + p D + D → 3He + n This reaction is also common in research. The optimum energy to initiate this reaction is 15 keV, only slightly higher than that for the D-T reaction.
The first branch produces tritium, so that a D-D reactor is not tritium-free, even though it does not require an input of tritium or lithium. Unless the tritons are quickly removed, most of the tritium produced is burned in the reactor, which reduces the handling of tritium, with the disadvantage of producing more, and higher-energy, neutrons. The neutron from the second branch of the D-D reaction has an energy of only 2.45 MeV, while the neutron from the D-T reaction has an energy of 14.1 MeV, resulting in greater isotope production and material damage. When the tritons are removed quickly while allowing the 3He to react, the fuel cycle is called "tritium suppressed fusion". The removed tritium decays to 3He with a 12.32-year half-life. By recycling the 3He from tritium decay into the reactor, the fusion reactor does not require materials resistant to fast neutrons. Assuming complete tritium burn-up, the reduction in the fraction of fusion energy carried by neutrons would be only about 18%, so that the primary advantage of the D-D fuel cycle is that tritium breeding is not required. Other advantages are independence from lithium resources and a somewhat softer neutron spectrum. The disadvantage of D-D compared to D-T is that the energy confinement time (at a given pressure) must be 30 times longer and the power produced (at a given pressure and volume) is 68 times less. Assuming complete removal of tritium and 3He recycling, only 6% of the fusion energy is carried by neutrons. The tritium-suppressed D-D fusion requires an energy confinement that is 10 times longer compared to D-T and double the plasma temperature. Deuterium, helium-3 A second-generation approach to controlled fusion power involves combining helium-3 (3He) and deuterium (2H): D + 3He → 4He + p This reaction produces 4He and a high-energy proton. As with the p-11B aneutronic fusion fuel cycle, most of the reaction energy is released as charged particles, reducing activation of the reactor housing and potentially allowing more efficient energy harvesting (via any of several pathways). In practice, D-D side reactions produce a significant number of neutrons, leaving p-11B as the preferred cycle for aneutronic fusion. Proton, boron-11 Both material science problems and non-proliferation concerns are greatly diminished by aneutronic fusion. Theoretically, the most reactive aneutronic fuel is 3He. However, obtaining reasonable quantities of 3He implies large scale extraterrestrial mining on the Moon or in the atmosphere of Uranus or Saturn. Therefore, the most promising candidate fuel for such fusion is fusing the readily available protium (i.e. a proton) and boron. Their fusion releases no neutrons, but produces energetic charged alpha (helium) particles whose energy can directly be converted to electrical power: p + 11B → 3 4He + 8.7 MeV Side reactions are likely to yield neutrons that carry only about 0.1% of the power,:177–182 which means that neutron scattering is not used for energy transfer and material activation is reduced several thousand-fold. The optimum temperature for this reaction, 123 keV, is nearly ten times higher than that for pure hydrogen reactions, and energy confinement must be 500 times better than that required for the D-T reaction. In addition, the power density is 2500 times lower than for D-T, although per unit mass of fuel, this is still considerably higher compared to fission reactors.
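A quick division shows why the proton–boron reaction pairs naturally with direct energy conversion; the per-alpha figure below is simply the 8.7 MeV reaction energy shared three ways, so it is an average rather than the actual, unequal spectrum.

```latex
% Average kinetic energy per alpha particle from p + 11B -> 3 alpha:
\bar{E}_{\alpha} \approx \frac{8.7\ \text{MeV}}{3} \approx 2.9\ \text{MeV}
% All of this energy is carried by charged particles, so in principle it can be
% collected electrostatically rather than through a thermal cycle.
```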
Because the confinement properties of the tokamak and laser pellet fusion are marginal, most proposals for aneutronic fusion are based on radically different confinement concepts, such as the Polywell and the Dense Plasma Focus. In 2013, a research team led by Christine Labaune at École Polytechnique, reported a new fusion rate record for proton-boron fusion, with an estimated 80 million fusion reactions during a 1.5 nanosecond laser fire, 100 times greater than reported in previous experiments. Material selection Structural material stability is a critical issue. Materials that can survive the high temperatures and neutron bombardment experienced in a fusion reactor are considered key to success. The principal issues are the conditions generated by the plasma, neutron degradation of wall surfaces, and the related issue of plasma-wall surface conditions. Reducing hydrogen permeability is seen as crucial to hydrogen recycling and control of the tritium inventory. Materials with the lowest bulk hydrogen solubility and diffusivity provide the optimal candidates for stable barriers. A few pure metals, including tungsten and beryllium, and compounds such as carbides, dense oxides, and nitrides have been investigated. Research has highlighted that coating techniques for preparing well-adhered and perfect barriers are of equivalent importance. The most attractive techniques are those in which an ad-layer is formed by oxidation alone. Alternative methods utilize specific gas environments with strong magnetic and electric fields. Assessment of barrier performance represents an additional challenge. Classical coated membranes gas permeation continues to be the most reliable method to determine hydrogen permeation barrier (HPB) efficiency. In 2021, in response to increasing numbers of designs for fusion power reactors for 2040, the United Kingdom Atomic Energy Authority published the UK Fusion Materials Roadmap 2021–2040, focusing on five priority areas, with a focus on tokamak family reactors: Novel materials to minimize the amount of activation in the structure of the fusion power plant; Compounds that can be used within the power plant to optimise breeding of tritium fuel to sustain the fusion process; Magnets and insulators that are resistant to irradiation from fusion reactions—especially under cryogenic conditions; Structural materials able to retain their strength under neutron bombardment at high operating temperatures (over 550 degrees C); Engineering assurance for fusion materials—providing irradiated sample data and modelled predictions such that plant designers, operators and regulators have confidence that materials are suitable for use in future commercial power stations. Superconducting materials In a plasma that is embedded in a magnetic field (known as a magnetized plasma) the fusion rate scales as the magnetic field strength to the 4th power. For this reason, many fusion companies that rely on magnetic fields to control their plasma are trying to develop high temperature superconducting devices. In 2021, SuperOx, a Russian and Japanese company, developed a new manufacturing process for making superconducting YBCO wire for fusion reactors. This new wire was shown to conduct between 700 and 2000 Amps per square millimeter. The company was able to produce 186 miles of wire in nine months. Containment considerations Even on smaller production scales, the containment apparatus is blasted with matter and energy. 
Designs for plasma containment must consider: A heating and cooling cycle, up to a 10 MW/m2 thermal load. Neutron radiation, which over time leads to neutron activation and embrittlement. High energy ions leaving at tens to hundreds of electronvolts. Alpha particles leaving at millions of electronvolts. Electrons leaving at high energy. Light radiation (IR, visible, UV, X-ray). Depending on the approach, these effects may be higher or lower than fission reactors. One estimate put the radiation at 100 times that of a typical pressurized water reactor. Depending on the approach, other considerations such as electrical conductivity, magnetic permeability, and mechanical strength matter. Materials must also not end up as long-lived radioactive waste. Plasma-wall surface conditions For long term use, each atom in the wall is expected to be hit by a neutron and displaced about 100 times before the material is replaced. These high-energy neutron collisions with the atoms in the wall result in the absorption of the neutrons, forming unstable isotopes of the atoms. When the isotope decays, it may emit alpha particles, protons, or gamma rays. Alpha particles, once stabilized by capturing electrons, form helium atoms which accumulate at grain boundaries and may result in swelling, blistering, or embrittlement of the material. Selection of materials Tungsten is widely regarded as the optimal material for plasma-facing components in next-generation fusion devices due to its unique properties and potential for enhancements. Its low sputtering rates and high melting point make it particularly suitable for the high-stress environments of fusion reactors, allowing it to withstand intense conditions without rapid degradation. Additionally, tungsten's low tritium retention through co-deposition and implantation is essential in fusion contexts, as it helps to minimize the accumulation of this radioactive isotope. Liquid metals (lithium, gallium, tin) have been proposed, e.g., by injection of 1–5 mm thick streams flowing at 10 m/s on solid substrates. Graphite features a gross erosion rate due to physical and chemical sputtering amounting to many meters per year, requiring redeposition of the sputtered material. The redeposition site generally does not exactly match the sputter site, allowing net erosion that may be prohibitive. An even larger problem is that tritium is redeposited with the redeposited graphite. The tritium inventory in the wall and dust could build up to many kilograms, representing a waste of resources and a radiological hazard in case of an accident. Graphite found favor as material for short-lived experiments, but appears unlikely to become the primary plasma-facing material (PFM) in a commercial reactor. Ceramic materials such as silicon carbide (SiC) have similar issues like graphite. Tritium retention in silicon carbide plasma-facing components is approximately 1.5-2 times higher than in graphite, resulting in reduced fuel efficiency and heightened safety risks in fusion reactors. SiC tends to trap more tritium, limiting its availability for fusion and increasing the risk of hazardous accumulation, complicating tritium management. Furthermore, the chemical and physical sputtering of SiC remains significant, contributing to tritium buildup through co-deposition over time and with increasing particle fluence. As a result, carbon-based materials have been excluded from ITER, DEMO, and similar devices. 
Tungsten's sputtering rate is orders of magnitude smaller than carbon's, and tritium is much less incorporated into redeposited tungsten. However, tungsten plasma impurities are much more damaging than carbon impurities, and self-sputtering can be high, requiring the plasma in contact with the tungsten not be too hot (a few tens of eV rather than hundreds of eV). Tungsten also has issues around eddy currents and melting in off-normal events, as well as some radiological issues. Recent advances in materials for containment apparatus materials have found that certain ceramics can actually improve the longevity of the material of the containment apparatus. Studies on MAX phases, such as titanium silicon carbide, show that under the high operating temperatures of nuclear fusion, the material undergoes a phase transformation from a hexagonal structure to a face-centered-cubic (FCC) structure, driven by helium bubble growth. Helium atoms preferentially accumulate in the Si layer of the hexagonal structure, as the Si atoms are more mobile than the Ti-C slabs. As more atoms are trapped, the Ti-C slab is peeled off, causing the Si atoms to become highly mobile interstitial atoms in the new FCC structure. Lattice strain induced by the He bubbles cause Si atoms to diffuse out of compressive areas, typically towards the surface of the material, forming a protective silicon dioxide layer. Doping vessel materials with iron silicate has emerged as a promising approach to enhance containment materials in fusion reactors, as well. This method targets helium embrittlement at grain boundaries, a common issue that arises as helium atoms accumulate and form bubbles. Over time, these bubbles coalesce at grain boundaries, causing them to expand and degrade the material's structural integrity. By contrast, introducing iron silicate creates nucleation sites within the metal matrix that are more thermodynamically favorable for helium aggregation. This localized congregation around iron silicate nanoparticles induces matrix strain rather than weakening grain boundaries, preserving the material’s strength and longevity. Safety and the environment Accident potential Accident potential and effect on the environment are critical to social acceptance of nuclear fusion, also known as a social license. Fusion reactors are not subject to catastrophic meltdown. It requires precise and controlled temperature, pressure and magnetic field parameters to produce net energy, and any damage or loss of required control would rapidly quench the reaction. Fusion reactors operate with seconds or even microseconds worth of fuel at any moment. Without active refueling, the reactions immediately quench. The same constraints prevent runaway reactions. Although the plasma is expected to have a volume of or more, the plasma typically contains only a few grams of fuel. By comparison, a fission reactor is typically loaded with enough fuel for months or years, and no additional fuel is necessary to continue the reaction. This large fuel supply is what offers the possibility of a meltdown. In magnetic containment, strong fields develop in coils that are mechanically held in place by the reactor structure. Failure of this structure could release this tension and allow the magnet to "explode" outward. The severity of this event would be similar to other industrial accidents or an MRI machine quench/explosion, and could be effectively contained within a containment building similar to those used in fission reactors. 
In laser-driven inertial containment the larger size of the reaction chamber reduces the stress on materials. Although failure of the reaction chamber is possible, stopping fuel delivery prevents catastrophic failure. Most reactor designs rely on liquid hydrogen as a coolant and to convert stray neutrons into tritium, which is fed back into the reactor as fuel. Hydrogen is flammable, and it is possible that hydrogen stored on-site could ignite. In this case, the tritium fraction of the hydrogen would enter the atmosphere, posing a radiation risk. Calculations suggest that about of tritium and other radioactive gases in a typical power station would be present. The amount is small enough that it would dilute to legally acceptable limits by the time they reached the station's perimeter fence. The likelihood of small industrial accidents, including the local release of radioactivity and injury to staff, are estimated to be minor compared to fission. They would include accidental releases of lithium or tritium or mishandling of radioactive reactor components. Magnet quench A magnet quench is an abnormal termination of magnet operation that occurs when part of the superconducting coil exits the superconducting state (becomes normal). This can occur because the field inside the magnet is too large, the rate of change of field is too large (causing eddy currents and resultant heating in the copper support matrix), or a combination of the two. More rarely a magnet defect can cause a quench. When this happens, that particular spot is subject to rapid Joule heating from the current, which raises the temperature of the surrounding regions. This pushes those regions into the normal state as well, which leads to more heating in a chain reaction. The entire magnet rapidly becomes normal over several seconds, depending on the size of the superconducting coil. This is accompanied by a loud bang as the energy in the magnetic field is converted to heat, and the cryogenic fluid boils away. The abrupt decrease of current can result in kilovolt inductive voltage spikes and arcing. Permanent damage to the magnet is rare, but components can be damaged by localized heating, high voltages, or large mechanical forces. In practice, magnets usually have safety devices to stop or limit the current when a quench is detected. If a large magnet undergoes a quench, the inert vapor formed by the evaporating cryogenic fluid can present a significant asphyxiation hazard to operators by displacing breathable air. A large section of the superconducting magnets in CERN's Large Hadron Collider unexpectedly quenched during start-up operations in 2008, destroying multiple magnets. In order to prevent a recurrence, the LHC's superconducting magnets are equipped with fast-ramping heaters that are activated when a quench event is detected. The dipole bending magnets are connected in series. Each power circuit includes 154 individual magnets, and should a quench event occur, the entire combined stored energy of these magnets must be dumped at once. This energy is transferred into massive blocks of metal that heat up to several hundred degrees Celsius—because of resistive heating—in seconds. A magnet quench is a "fairly routine event" during the operation of a particle accelerator. Effluents The natural product of the fusion reaction is a small amount of helium, which is harmless to life. Hazardous tritium is difficult to retain completely. 
Although tritium is volatile and biologically active, the health risk posed by a release is much lower than that of most radioactive contaminants, because of tritium's short half-life (12.32 years) and very low decay energy (~14.95 keV), and because it does not bioaccumulate (it cycles out of the body as water, with a biological half-life of 7 to 14 days). ITER incorporates total containment facilities for tritium. Radioactive waste Fusion reactors create far less radioactive material than fission reactors. Further, the material it creates is less damaging biologically, and the radioactivity dissipates within a time period that is well within existing engineering capabilities for safe long-term waste storage. In specific terms, except in the case of aneutronic fusion, the neutron flux turns the structural materials radioactive. The amount of radioactive material at shut-down may be comparable to that of a fission reactor, with important differences. The half-lives of fusion and neutron activation radioisotopes tend to be less than those from fission, so that the hazard decreases more rapidly. Whereas fission reactors produce waste that remains radioactive for thousands of years, the radioactive material in a fusion reactor (other than tritium) would be the reactor core itself and most of this would be radioactive for about 50 years, with other low-level waste being radioactive for another 100 years or so thereafter. The fusion waste's short half-life eliminates the challenge of long-term storage. By 500 years, the material would have the same radiotoxicity as coal ash. Nonetheless, classification as intermediate level waste rather than low-level waste may complicate safety discussions. The choice of materials is less constrained than in conventional fission, where many materials are required for their specific neutron cross-sections. Fusion reactors can be designed using "low activation", materials that do not easily become radioactive. Vanadium, for example, becomes much less radioactive than stainless steel. Carbon fiber materials are also low-activation, are strong and light, and are promising for laser-inertial reactors where a magnetic field is not required. Nuclear proliferation In some scenarios, fusion power technology could be adapted to produce materials for military purposes. A huge amount of tritium could be produced by a fusion power station; tritium is used in the trigger of hydrogen bombs and in modern boosted fission weapons, but it can be produced in other ways. The energetic neutrons from a fusion reactor could be used to breed weapons-grade plutonium or uranium for an atomic bomb (for example by transmutation of to , or to ). A study conducted in 2011 assessed three scenarios: Small-scale fusion station: As a result of much higher power consumption, heat dissipation and a more recognizable design compared to enrichment gas centrifuges, this choice would be much easier to detect and therefore implausible. Commercial facility: The production potential is significant. But no fertile or fissile substances necessary for the production of weapon-usable materials needs to be present at a civil fusion system at all. If not shielded, detection of these materials can be done by their characteristic gamma radiation. The underlying redesign could be detected by regular design information verification. 
In the (technically more feasible) case of solid breeder blanket modules, it would be necessary for incoming components to be inspected for the presence of fertile material, otherwise plutonium for several weapons could be produced each year. Prioritizing weapon-grade material regardless of secrecy: The fastest way to produce weapon-usable material was seen in modifying a civil fusion power station. No weapons-compatible material is required during civil use. Even without the need for covert action, such a modification would take about two months to start production and at least an additional week to generate a significant amount. This was considered to be enough time to detect a military use and to react with diplomatic or military means. To stop the production, a military destruction of parts of the facility while leaving out the reactor would be sufficient. Another study concluded "...large fusion reactors—even if not designed for fissile material breeding—could easily produce several hundred kg Pu per year with high weapon quality and very low source material requirements." It was emphasized that the implementation of features for intrinsic proliferation resistance might only be possible at an early phase of research and development. The theoretical and computational tools needed for hydrogen bomb design are closely related to those needed for inertial confinement fusion, but have very little in common with magnetic confinement fusion. Fuel reserves Fusion power commonly proposes the use of deuterium as fuel and many current designs also use lithium. Assuming a fusion energy output equal to the 1995 global power output of about 100 EJ/yr (= 1 × 10^20 J/yr), and that this does not increase in the future (which is unlikely), known current lithium reserves would last 3,000 years. Lithium from seawater, however, would last 60 million years, and a more complicated fusion process using only deuterium would have fuel for 150 billion years. To put this in context, 150 billion years is close to 30 times the remaining lifespan of the Sun, and more than 10 times the estimated age of the universe. Economics The EU invested heavily in fusion research through the 1990s. ITER represents an investment of over twenty billion dollars, and possibly tens of billions more, including in-kind contributions. Under the European Union's Sixth Framework Programme, nuclear fusion research received substantial funding (in addition to ITER funding); compared with funding for sustainable energy research, this put research into fusion power well ahead of that of any single rival technology. The United States Department of Energy has allocated US$367M–US$671M every year since 2010, peaking in 2020, with plans to reduce investment to US$425M in its FY2021 Budget Request. About a quarter of this budget is directed to support ITER. The size of the investments and timelines meant that fusion research was traditionally almost exclusively publicly funded. However, starting in the 2010s, the promise of commercializing a paradigm-changing low-carbon energy source began to attract a raft of companies and investors. Over two dozen start-up companies attracted over one billion dollars from roughly 2000 to 2020, mainly from 2015 onward, and a further three billion in funding and milestone-related commitments in 2021, with investors including Jeff Bezos, Peter Thiel and Bill Gates, as well as institutional investors including Legal & General, and energy companies including Equinor, Eni, Chevron, and the Chinese ENN Group. 
In 2021, Commonwealth Fusion Systems (CFS) obtained $1.8 billion in scale-up funding, and Helion Energy obtained a half-billion dollars with an additional $1.7 billion contingent on meeting milestones. Scenarios developed in the 2000s and early 2010s discussed the effects of the commercialization of fusion power on the future of human civilization. Using nuclear fission as a guide, these saw ITER and later DEMO as bringing online the first commercial reactors around 2050 and a rapid expansion after mid-century. Some scenarios emphasized "fusion nuclear science facilities" as a step beyond ITER. However, the economic obstacles to tokamak-based fusion power remain immense, requiring investment to fund prototype tokamak reactors and development of new supply chains, a problem which will affect any kind of fusion reactor. Tokamak designs appear to be labour-intensive, while the commercialization risk of alternatives like inertial fusion energy is high due to the lack of government resources. Scenarios since 2010 note computing and material science advances enabling multi-phase national or cost-sharing "Fusion Pilot Plants" (FPPs) along various technology pathways, such as the UK Spherical Tokamak for Energy Production, within the 2030–2040 time frame. Notably, in June 2021, General Fusion announced it would accept the UK government's offer to host the world's first substantial public-private partnership fusion demonstration plant, at Culham Centre for Fusion Energy. The plant will be constructed from 2022 to 2025 and is intended to lead the way for commercial pilot plants in the late 2020s. The plant will be 70% of full scale and is expected to attain a stable plasma of 150 million degrees. In the United States, cost-sharing public-private partnership FPPs appear likely, and in 2022 the DOE announced a new Milestone-Based Fusion Development Program as the centerpiece of its Bold Decadal Vision for Commercial Fusion Energy, which envisages private sector-led teams delivering FPP pre-conceptual designs, defining technology roadmaps, and pursuing the R&D necessary to resolve critical-path scientific and technical issues towards an FPP design. Compact reactor technology based on such demonstration plants may enable commercialization via a fleet approach from the 2030s if early markets can be located. The widespread adoption of non-nuclear renewable energy has transformed the energy landscape. Such renewables are projected to supply 74% of global energy by 2050. The steady fall of renewable energy prices challenges the economic competitiveness of fusion power. Some economists suggest fusion power is unlikely to match other renewable energy costs. Fusion plants are expected to face large start-up and capital costs. Moreover, operation and maintenance are likely to be costly. While the costs of the China Fusion Engineering Test Reactor are not well known, an EU DEMO fusion concept was projected to feature a levelized cost of energy (LCOE) of $121/MWh. Fuel costs are low, but economists suggest that the energy cost for a one-gigawatt plant would increase by $16.5 per MWh for every $1 billion increase in the capital investment in construction. There is also the risk that easily obtained lithium will be used up making batteries. Obtaining it from seawater would be very costly and might require more energy than would be generated. In contrast, renewable levelized cost of energy estimates are substantially lower. 
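The capital-cost sensitivity quoted above can be reproduced with a back-of-the-envelope annuity calculation. The sketch below is illustrative only: the discount rate, plant lifetime and capacity factor are assumptions chosen for the example, not values taken from the cited studies.

def lcoe_increment(extra_capex_usd=1e9, discount_rate=0.12,
                   lifetime_years=30, power_mw=1000, capacity_factor=0.85):
    # Capital recovery factor: converts a one-off capital sum into an equivalent annual payment.
    crf = discount_rate * (1 + discount_rate) ** lifetime_years / ((1 + discount_rate) ** lifetime_years - 1)
    annual_mwh = power_mw * 8760 * capacity_factor  # electricity generated per year, in MWh
    return crf * extra_capex_usd / annual_mwh       # added cost in $ per MWh

print(round(lcoe_increment(), 1))  # about 16.7 $/MWh per extra $1 billion under these assumptions

With these assumed parameters the result is close to the $16.5/MWh figure cited above; a lower discount rate or a higher capacity factor would reduce it correspondingly.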
For comparison, the 2019 levelized cost of energy of solar was estimated at $40-$46/MWh, onshore wind at $29-$56/MWh, and offshore wind at approximately $92/MWh. However, fusion power may still have a role filling energy gaps left by renewables, depending on how administration priorities for energy and environmental justice influence the market. In the 2020s, socioeconomic studies of fusion began to consider these factors, and in 2022 EUROfusion launched its Socio-Economic Studies and Prospective Research and Development strands to investigate how such factors might affect commercialization pathways and timetables. Similarly, in April 2023 Japan announced a national strategy to industrialise fusion. Thus, fusion power may work in tandem with other renewable energy sources rather than becoming the primary energy source. In some applications, fusion power could provide the base load, especially if it includes integrated thermal storage and cogeneration, and considering the potential for retrofitting coal plants. Regulation As fusion pilot plants move within reach, legal and regulatory issues must be addressed. In September 2020, the United States National Academy of Sciences consulted with private fusion companies to consider a national pilot plant. The following month, the United States Department of Energy, the Nuclear Regulatory Commission (NRC) and the Fusion Industry Association co-hosted a public forum to begin the process. In November 2020, the International Atomic Energy Agency (IAEA) began working with various nations to create safety standards such as dose regulations and radioactive waste handling. In January and March 2021, NRC hosted two public meetings on regulatory frameworks. A public-private cost-sharing approach was endorsed in the 27 December H.R.133 Consolidated Appropriations Act, 2021, which authorized $325 million over five years for a partnership program to build fusion demonstration facilities, with a 100% match from private industry. Subsequently, the UK Regulatory Horizons Council published a report calling for a fusion regulatory framework by early 2022 in order to position the UK as a global leader in commercializing fusion power. This call was met by the UK government publishing in October 2021 both its Fusion Green Paper and its Fusion Strategy, to regulate and commercialize fusion, respectively. In April 2023, in a decision likely to influence other nuclear regulators, the NRC announced in a unanimous vote that fusion energy would be regulated not as fission but under the same regulatory regime as particle accelerators. In October 2023, the UK government, in enacting the Energy Act 2023, made the UK the first country to legislate for fusion separately from fission, to support planning and investment, including the UK's planned prototype fusion power plant for 2040, STEP; the UK is working with Canada and Japan in this regard. Meanwhile, in February 2024 the US House of Representatives passed the Atomic Energy Advancement Act, which includes the Fusion Energy Act, establishing a regulatory framework for fusion energy systems. Geopolitics Given the potential of fusion to transform the world's energy industry and mitigate climate change, fusion science has traditionally been seen as an integral part of peace-building science diplomacy. 
However, technological developments and private sector involvement have raised concerns over intellectual property, regulatory administration, global leadership, equity, and potential weaponization. These challenge ITER's peace-building role and have led to calls for a global commission. Fusion power contributing significantly to climate change mitigation by 2050 seems unlikely without substantial breakthroughs and a space race mentality emerging, but a contribution by 2100 appears possible, with the extent depending on the type and particularly the cost of technology pathways. Developments from late 2020 onwards have led to talk of a "new space race" with multiple entrants, pitting the US against China and the UK's STEP FPP, with China now outspending the US and threatening to leapfrog US technology. On 24 September 2020, the United States House of Representatives approved a research and commercialization program. The Fusion Energy Research section incorporated a milestone-based, cost-sharing, public-private partnership program modeled on NASA's COTS program, which launched the commercial space industry. In February 2021, the National Academies published Bringing Fusion to the U.S. Grid, recommending a market-driven, cost-sharing plant for 2035–2040, and the launch of the Congressional Bipartisan Fusion Caucus followed. In December 2020, an independent expert panel reviewed EUROfusion's design and R&D work on DEMO, and EUROfusion confirmed it was proceeding with its Roadmap to Fusion Energy, beginning the conceptual design of DEMO in partnership with the European fusion community, suggesting an EU-backed machine had entered the race. In October 2023, the UK-oriented Agile Nations group announced a fusion working group. One month later, the UK and the US announced a bilateral partnership to accelerate fusion energy. In December 2023, at COP28, the US announced a global strategy to commercialize fusion energy. In April 2024, Japan and the US announced a similar partnership, and in May of the same year the G7 announced a G7 Working Group on Fusion Energy to promote international collaborations to accelerate the development of commercial fusion energy and promote R&D between countries, as well as to rationalize fusion regulation. Later the same year, the US partnered with the IAEA to launch the Fusion Energy Solutions Taskforce, to collaboratively crowdsource ideas to accelerate commercial fusion energy, in line with the US COP28 statement. Specifically to resolve the tritium supply problem, in February 2024 the UK (UKAEA) and Canada (Canadian Nuclear Laboratories) announced an agreement by which Canada could refurbish its CANDU deuterium-uranium tritium-generating heavy-water nuclear plants and even build new ones, guaranteeing a supply of tritium into the 2070s, while the UKAEA would test breeder materials and simulate how tritium could be captured, purified, and injected back into the fusion reaction. In 2024, both South Korea and Japan announced major initiatives to accelerate their national fusion strategies by building electricity-generating public-private fusion plants in the 2030s, aiming to begin operations in the 2040s and 2030s respectively. Advantages Fusion power promises to provide more energy for a given weight of fuel than any fuel-consuming energy source currently in use. The fuel (primarily deuterium) exists abundantly in the ocean: about 1 in 6500 hydrogen atoms in seawater is deuterium. 
Although this is only about 0.015%, seawater is plentiful and easy to access, implying that fusion could supply the world's energy needs for millions of years. First generation fusion plants are expected to use the deuterium-tritium fuel cycle. This will require the use of lithium for breeding of the tritium. It is not known for how long global lithium supplies will suffice to supply this need as well as those of the battery and metallurgical industries. It is expected that second generation plants will move on to the more formidable deuterium-deuterium reaction. The deuterium-helium-3 reaction is also of interest, but the light helium isotope is practically non-existent on Earth. It is thought to exist in useful quantities in the lunar regolith, and is abundant in the atmospheres of the gas giant planets. Fusion power could be used for so-called "deep space" propulsion within the solar system and for interstellar space exploration where solar energy is not available, including via antimatter-fusion hybrid drives. Helium production Deuterium–tritium fusion produces helium as a by-product. Disadvantages Fusion power has a number of disadvantages. Because 80 percent of the energy in any reactor fueled by deuterium and tritium appears in the form of neutron streams, such reactors share many of the drawbacks of fission reactors. This includes the production of large quantities of radioactive waste and serious radiation damage to reactor components. Additionally, naturally occurring tritium is extremely rare. While the hope is that fusion reactors can breed their own tritium, tritium self-sufficiency is extremely challenging, not least because tritium is difficult to contain (tritium has leaked from 48 of 65 nuclear sites in the US). In any case the reserve and start-up tritium inventory requirements are likely to be unacceptably large. If reactors can be made to operate using only deuterium fuel, then the tritium replenishment issue is eliminated and neutron radiation damage may be reduced. However, the probabilities of deuterium-deuterium reactions are about 20 times lower than for deuterium-tritium. Additionally, the temperature needed is about 3 times higher than for deuterium-tritium (see cross section). The higher temperatures and lower reaction rates thus significantly complicate the engineering challenges. In any case, other drawbacks remain, for instance reactors requiring only deuterium fueling will have greatly enhanced nuclear weapons proliferation potential. History Early experiments The first machine to achieve controlled thermonuclear fusion was a pinch machine at Los Alamos National Laboratory called Scylla I at the start of 1958. The team that achieved it was led by a British scientist named James Tuck and included a young Marshall Rosenbluth. Tuck had been involved in the Manhattan project, but had switched to working on fusion in the early 1950s. He applied for funding for the project as part of a White House sponsored contest to develop a fusion reactor along with Lyman Spitzer. The previous year, 1957, the British had claimed that they had achieved thermonuclear fusion reactions on the Zeta pinch machine. However, it turned out that the neutrons they had detected were from beam-target interactions, not fusion, and they withdrew the claim. Scylla I was a classified machine at the time, so the achievement was hidden from the public. 
A traditional Z-pinch passes a current down the center of a plasma, creating a magnetic field around the outside that squeezes the plasma to fusion conditions. Scylla I was a θ-pinch, which passed a current around the outside of its deuterium-filled cylinder to create a compressing magnetic field in the center. After the success of Scylla I, Los Alamos went on to build multiple pinch machines over the next few years. Spitzer continued his stellarator research at Princeton. While fusion did not immediately transpire, the effort led to the creation of the Princeton Plasma Physics Laboratory. First tokamak In the early 1950s, Soviet physicists I.E. Tamm and A.D. Sakharov developed the concept of the tokamak, combining a low-power pinch device with a low-power stellarator. A.D. Sakharov's group constructed the first tokamaks, achieving the first quasistationary fusion reaction. Over time, the "advanced tokamak" concept emerged, which included non-circular plasma, internal divertors and limiters, superconducting magnets, operation in the "H-mode" island of increased stability, and the compact tokamak, with the magnets on the inside of the vacuum chamber. First inertial confinement experiments Laser fusion was suggested in 1962 by scientists at Lawrence Livermore National Laboratory (LLNL), shortly after the invention of the laser in 1960. Inertial confinement fusion experiments using lasers began as early as 1965. Several laser systems were built at LLNL, including the Argus, the Cyclops, the Janus, the Long Path, the Shiva laser, and the Nova. Laser advances included frequency-tripling crystals that transformed infrared laser beams into ultraviolet beams and "chirping", which changed a single wavelength into a full spectrum that could be amplified and then reconstituted into one frequency. Laser research cost over one billion dollars in the 1980s. 1980s The Tore Supra, JET, T-15, and JT-60 tokamaks were built in the 1980s. In 1984, Martin Peng of ORNL proposed the spherical tokamak with a much smaller radius. It used a single large conductor in the center, with magnets as half-rings off this conductor. The aspect ratio fell to as low as 1.2. Peng's advocacy caught the interest of Derek Robinson, who built the Small Tight Aspect Ratio Tokamak (START). 1990s In 1991, the Preliminary Tritium Experiment at the Joint European Torus achieved the world's first controlled release of fusion power. In 1996, Tore Supra created a plasma for two minutes with a current of almost 1 million amperes, totaling 280 MJ of injected and extracted energy. In 1997, JET produced a peak of 16.1 MW of fusion power (65% of heat to plasma), with fusion power of over 10 MW sustained for over 0.5 sec. 2000s "Fast ignition" saved power and moved ICF into the race for energy production. In 2006, China's Experimental Advanced Superconducting Tokamak (EAST) test reactor was completed. It was the first tokamak to use superconducting magnets to generate both toroidal and poloidal fields. In March 2009, the laser-driven ICF NIF became operational. In the 2000s, privately backed fusion companies entered the race, including TAE Technologies, General Fusion, and Tokamak Energy. 2010s Private and public research accelerated in the 2010s. General Fusion developed plasma injector technology and Tri Alpha Energy tested its C-2U device. The French Laser Mégajoule began operation. NIF achieved net energy gain in 2013, defined in the very limited sense of the hot spot at the core of the collapsed target, rather than the whole target. 
In 2014, Phoenix Nuclear Labs sold a high-yield neutron generator that could sustain 5×10^11 deuterium fusion reactions per second over a 24-hour period. In 2015, MIT announced a tokamak it named the ARC fusion reactor, using rare-earth barium-copper oxide (REBCO) superconducting tapes to produce high-magnetic field coils that it claimed could produce comparable magnetic field strength in a smaller configuration than other designs. In October, researchers at the Max Planck Institute of Plasma Physics in Greifswald, Germany, completed building the largest stellarator to date, the Wendelstein 7-X (W7-X). The W7-X stellarator began Operational phase 1 (OP1.1) on 10 December 2015, successfully producing helium plasma. The objective was to test vital systems and understand the machine's physics. By February 2016, hydrogen plasma was achieved, with temperatures reaching up to 100 million kelvin. The initial tests used five graphite limiters. After over 2,000 pulses and achieving significant milestones, OP1.1 concluded on 10 March 2016. An upgrade followed, and OP1.2 in 2017 aimed to test an uncooled divertor. By June 2018, record temperatures were reached. W7-X concluded its first campaigns with limiter and island divertor tests, achieving notable advancements by the end of 2018. It soon produced helium and hydrogen plasmas lasting up to 30 minutes. In 2017, Helion Energy's fifth-generation plasma machine went into operation. The UK's Tokamak Energy's ST40 generated "first plasma". The next year, Eni announced a $50 million investment in Commonwealth Fusion Systems, to attempt to commercialize MIT's ARC technology. 2020s In January 2021, SuperOx announced the commercialization of a new superconducting wire with more than 700 A/mm^2 current capability. TAE Technologies announced results for its Norman device, holding a temperature of about 60 MK for 30 milliseconds, 8 and 10 times higher, respectively, than the company's previous devices. In October, Oxford-based First Light Fusion revealed its projectile fusion project, which fires an aluminum disc at a fusion target, accelerated by a 9 mega-amp electrical pulse. The resulting fusion generates neutrons whose energy is captured as heat. On November 8, in an invited talk to the 63rd Annual Meeting of the APS Division of Plasma Physics, the National Ignition Facility claimed to have triggered fusion ignition in the laboratory on August 8, 2021, for the first time in the 60+ year history of the ICF program. The shot yielded 1.3 MJ of fusion energy, an improvement of more than 8 times over tests done in spring 2021. NIF estimates that 230 kJ of energy reached the fuel capsule, which resulted in an almost 6-fold energy output from the capsule. A researcher from Imperial College London stated that the majority of the field agreed that ignition had been demonstrated. In November 2021, Helion Energy reported receiving $500 million in Series E funding for its seventh-generation Polaris device, designed to demonstrate net electricity production, with an additional $1.7 billion of commitments tied to specific milestones, while Commonwealth Fusion Systems raised an additional $1.8 billion in Series B funding to construct and operate its SPARC tokamak, the single largest investment in any private fusion company. In April 2022, First Light announced that their hypersonic projectile fusion prototype had produced neutrons compatible with fusion. Their technique electromagnetically fires projectiles at Mach 19 at a caged fuel pellet. 
The deuterium fuel is compressed at Mach 204, reaching pressure levels of 100 TPa. On December 13, 2022, the US Department of Energy reported that researchers at the National Ignition Facility had achieved a net energy gain from a fusion reaction. The reaction of hydrogen fuel at the facility produced about 3.15 MJ of energy while consuming 2.05 MJ of input. However, while the fusion reactions may have produced more than 3 megajoules of energy (more than was delivered to the target), NIF's 192 lasers consumed 322 MJ of grid energy in the conversion process. In May 2023, the United States Department of Energy (DOE) provided a grant of $46 million to eight companies across seven states to support fusion power plant design and research efforts. This funding, under the Milestone-Based Fusion Development Program, aligns with objectives to demonstrate pilot-scale fusion within a decade and to develop fusion as a carbon-neutral energy source by 2050. The granted companies are tasked with addressing the scientific and technical challenges to create viable fusion pilot plant designs in the next 5–10 years. The recipient firms include Commonwealth Fusion Systems, Focused Energy Inc., Princeton Stellarators Inc., Realta Fusion Inc., Tokamak Energy Inc., Type One Energy Group, Xcimer Energy Inc., and Zap Energy Inc. In December 2023, the largest and most advanced tokamak, JT-60SA, was inaugurated in Naka, Japan. The reactor is a joint project between Japan and the European Union. It had achieved its first plasma in October 2023. Subsequently, South Korea's fusion reactor project, the Korean Superconducting Tokamak Advanced Research, successfully operated for 102 seconds in high-confinement mode (H-mode), sustaining ion temperatures of more than 100 million degrees, in plasma tests conducted from December 2023 to February 2024. In January 2025, the EAST fusion reactor in China was reported to have maintained steady-state high-confinement plasma operation for 1,066 seconds. Future development Claims of commercially viable fusion power being relatively imminent have often attracted ridicule within the scientific community. A common joke is that human-engineered fusion has been promised as 30 years away ever since the concept was first discussed, or that it has been "20 years away for 50 years". In 2024, Commonwealth Fusion Systems announced plans to build the world's first grid-scale commercial nuclear fusion power plant at the James River Industrial Center in Chesterfield County, Virginia, which is part of the Greater Richmond Region; the plant is designed to produce about 400 MW of electric power, and is intended to come online in the early 2030s. Records Fusion records continue to advance. See also COLEX process, for production of Li-6 Fusion ignition High beta fusion reactor Inertial electrostatic confinement Levitated dipole List of fusion experiments Magnetic mirror Starship References Bibliography Nuttall, William J., Konishi, Satoshi, Takeda, Shutaro, and Webbe-Wood, David (2020). Commercialising Fusion Energy: How Small Businesses are Transforming Big Science. IOP Publishing. Further reading Oreskes, Naomi, "Fusion's False Promise: Despite a recent advance, nuclear fusion is not the solution to the climate crisis", Scientific American, vol. 328, no. 6 (June 2023), p. 86. External links Fusion Device Information System Fusion Energy Base Fusion Industry Association Princeton Satellite Systems News U.S. Fusion Energy Science Program Sustainable energy
Fusion power
Physics,Chemistry
16,024
478,185
https://en.wikipedia.org/wiki/Disinfectant
A disinfectant is a chemical substance or compound used to inactivate or destroy microorganisms on inert surfaces. Disinfection does not necessarily kill all microorganisms, especially resistant bacterial spores; it is less effective than sterilization, which is an extreme physical or chemical process that kills all types of life. Disinfectants are generally distinguished from other antimicrobial agents such as antibiotics, which destroy microorganisms within the body, and antiseptics, which destroy microorganisms on living tissue. Disinfectants are also different from biocides—the latter are intended to destroy all forms of life, not just microorganisms. Disinfectants work by destroying the cell wall of microbes or interfering with their metabolism. Disinfection is also a form of decontamination, and can be defined as the process whereby physical or chemical methods are used to reduce the amount of pathogenic microorganisms on a surface. Disinfectants can also be used to destroy microorganisms on the skin and mucous membranes; historically, medical dictionaries defined the word simply as an agent that destroys microbes. Sanitizers are substances that simultaneously clean and disinfect. Disinfectants kill more germs than sanitizers. Disinfectants are frequently used in hospitals, dental surgeries, kitchens, and bathrooms to kill infectious organisms. Sanitizers are mild compared to disinfectants and are used mainly to clean things that are in human contact, whereas disinfectants are concentrated and are used to clean surfaces such as floors and building premises. Bacterial endospores are most resistant to disinfectants, but some fungi, viruses and bacteria also possess some resistance. In wastewater treatment, a disinfection step with chlorine, ultra-violet (UV) radiation or ozonation can be included as tertiary treatment to remove pathogens from wastewater, for example if it is to be discharged to a river or the sea where body-contact immersion recreation is practiced (Europe), or reused to irrigate golf courses (US). An alternative term used in the sanitation sector for disinfection of waste streams, sewage sludge or fecal sludge is sanitisation or sanitization. Definitions The Australian Therapeutic Goods Order No. 54 defines several grades of disinfectant, as used below. Sterilant Sterilant means a chemical agent which is used to sterilize critical medical devices or medical instruments. A sterilant kills all micro-organisms, with the result that the sterility assurance level (the probability of a single viable microorganism surviving) is less than 10^-6. Sterilant gases are not within this scope. Low level disinfectant Low level disinfectant means a disinfectant that rapidly kills most vegetative bacteria as well as medium-sized lipid-containing viruses, when used according to labelling. It cannot be relied upon to destroy, within a practical period, bacterial endospores, mycobacteria, fungi, or all small nonlipid viruses. Intermediate level disinfectant Intermediate level disinfectant means a disinfectant that kills all microbial pathogens except bacterial endospores, when used as recommended by the manufacturer. It is bactericidal, tuberculocidal, fungicidal (against asexual spores but not necessarily dried chlamydospores or sexual spores), and virucidal. High level disinfectant High level disinfectant means a disinfectant that kills all microbial pathogens, except large numbers of bacterial endospores, when used as recommended by its manufacturer. 
Instrument grade Instrument grade disinfectant means: a disinfectant which is used to reprocess reusable therapeutic devices; and, when associated with the words "low", "intermediate" or "high", means "low", "intermediate" or "high" level disinfectant respectively. Hospital grade Hospital grade means a disinfectant that is suitable for general purpose disinfection of building and fitting surfaces, and purposes not involving instruments or surfaces likely to come into contact with broken skin: in premises used for: the investigation or treatment of a disease, ailment or injury; or procedures that are carried out involving the penetration of the human skin; or, in connection with: the business of beauty therapy or hairdressing; or the practice of podiatry; but does not include: instrument grade disinfectants; or sterilant; or an antibacterial clothes preparation; or a sanitary fluid; or a sanitary powder; or a sanitiser. Household/commercial grade Household/commercial grade disinfectant means a disinfectant that is suitable for general purpose disinfection of building or fitting surfaces, and for other purposes, in premises or involving procedures other than those specified for a hospital-grade disinfectant, but is not: an antibacterial clothes preparation; or a sanitary fluid; or a sanitary powder; or a sanitiser. Measurements of effectiveness One way to compare disinfectants is to compare how well they do against a known disinfectant and rate them accordingly. Phenol is the standard, and the corresponding rating system is called the "Phenol coefficient". The disinfectant to be tested is compared with phenol on a standard microbe (usually Salmonella typhi or Staphylococcus aureus). Disinfectants that are more effective than phenol have a coefficient > 1. Those that are less effective have a coefficient < 1. The standard European approach for disinfectant validation consists of a basic suspension test, a quantitative suspension test (with low and high levels of organic material added to act as 'interfering substances') and a two-part simulated-use surface test. A less specific measurement of effectiveness is the United States Environmental Protection Agency (EPA) classification into either high, intermediate or low levels of disinfection. "High-level disinfection kills all organisms, except high levels of bacterial spores" and is done with a chemical germicide marketed as a sterilant by the U.S. Food and Drug Administration (FDA). "Intermediate-level disinfection kills mycobacteria, most viruses, and bacteria with a chemical germicide registered as a 'tuberculocide' by the Environmental Protection Agency. Low-level disinfection kills some viruses and bacteria with a chemical germicide registered as a hospital disinfectant by the EPA." An alternative assessment is to measure the minimum inhibitory concentrations (MICs) of disinfectants against selected (and representative) microbial species, such as through microbroth dilution testing. However, such measurements are obtained at standard inoculum levels without considering the inoculum effect, and more informative methods that determine the minimum disinfectant dose as a function of the density of the target microbial species are in demand. Properties A perfect disinfectant would offer complete microbiological sterilisation without harming humans and useful forms of life, and would be inexpensive and noncorrosive. However, most disinfectants are also, by nature, potentially harmful (even toxic) to humans or animals. 
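As a worked illustration of the phenol coefficient described under Measurements of effectiveness above: the coefficient is the ratio of the greatest dilution of the test disinfectant that kills the test organism under standard conditions to the greatest dilution of phenol that does the same. The dilutions below are hypothetical values chosen only for the example.

def phenol_coefficient(test_dilution_factor, phenol_dilution_factor):
    # A 1:450 dilution is passed as 450; a result > 1 means more effective than phenol.
    return test_dilution_factor / phenol_dilution_factor

print(phenol_coefficient(450, 90))  # 5.0 for a disinfectant effective at 1:450 where phenol needs 1:90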
Most modern household disinfectants contain denatonium, an exceptionally bitter substance added to discourage ingestion, as a safety measure. Those that are used indoors should never be mixed with other cleaning products, as chemical reactions can occur. The choice of disinfectant to be used depends on the particular situation. Some disinfectants have a wide spectrum (kill many different types of microorganisms), while others kill a smaller range of disease-causing organisms but are preferred for other properties (they may be non-corrosive, non-toxic, or inexpensive). There are arguments for creating or maintaining conditions that are not conducive to bacterial survival and multiplication, rather than attempting to kill them with chemicals. Bacteria can increase in number very quickly, which enables them to evolve rapidly. Should some bacteria survive a chemical attack, they give rise to new generations composed completely of bacteria that have resistance to the particular chemical used. Under a sustained chemical attack, the surviving bacteria in successive generations are increasingly resistant to the chemical used, and ultimately the chemical is rendered ineffective. For this reason, some question the wisdom of impregnating cloths, cutting boards and worktops in the home with bactericidal chemicals. Types Air disinfectants Air disinfectants are typically chemical substances capable of disinfecting microorganisms suspended in the air. Disinfectants are generally assumed to be limited to use on surfaces, but that is not the case. In 1928, a study found that airborne microorganisms could be killed using mists of dilute bleach. An air disinfectant must be dispersed either as an aerosol or vapour at a sufficient concentration in the air to cause the number of viable infectious microorganisms to be significantly reduced. In the 1940s and early 1950s, further studies showed inactivation of diverse bacteria, influenza virus, and Penicillium chrysogenum (previously P. notatum) mold fungus using various glycols, principally propylene glycol and triethylene glycol. In principle, these chemical substances are ideal air disinfectants because they have both high lethality to microorganisms and low mammalian toxicity. Although glycols are effective air disinfectants in controlled laboratory environments, it is more difficult to use them effectively in real-world environments because the disinfection of air is sensitive to continuous action. Continuous action in real-world environments with outside air exchanges at door, HVAC, and window interfaces, and in the presence of materials that absorb and remove glycols from the air, poses engineering challenges that are not critical for surface disinfection. The engineering challenges associated with creating a sufficient concentration of glycol vapours in the air have not to date been sufficiently addressed. Alcohols Alcohol, alone or in combination with quaternary ammonium cations, comprises a class of proven surface sanitizers and disinfectants approved by the EPA and the Centers for Disease Control for use as hospital grade disinfectants. Alcohols are most effective when combined with distilled water to facilitate diffusion through the cell membrane; 100% alcohol typically denatures only external membrane proteins. A mixture of 70% ethanol or isopropanol diluted in water is effective against a wide spectrum of bacteria, though higher concentrations are often needed to disinfect wet surfaces. 
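Such working strengths are typically prepared by simple dilution of a stronger stock using the relation C1·V1 = C2·V2; the stock concentration and target volume below are illustrative assumptions, not figures from the text above.

# Volume of a 95% ethanol stock needed to prepare 100 mL of a nominal 70% solution
# (treating percentages as simple volume fractions and ignoring the small volume
#  contraction that occurs when ethanol and water are mixed).
stock_pct, target_pct, target_volume_ml = 95.0, 70.0, 100.0
stock_volume_ml = target_pct * target_volume_ml / stock_pct
print(round(stock_volume_ml, 1))  # about 73.7 mL of stock, made up to 100 mL with water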
Additionally, high-concentration mixtures (such as 80% ethanol + 5% isopropanol) are required to effectively inactivate lipid-enveloped viruses (such as HIV, hepatitis B, and hepatitis C). The efficacy of alcohol is enhanced when in solution with the wetting agent dodecanoic acid (coconut soap). The synergistic effect of 29.4% ethanol with dodecanoic acid is effective against a broad spectrum of bacteria, fungi, and viruses. Further testing is being performed against Clostridioides difficile (C. diff) spores with higher concentrations of ethanol and dodecanoic acid, which proved effective with a contact time of ten minutes. Aldehydes Aldehydes, such as formaldehyde and glutaraldehyde, have a wide microbicidal activity and are sporicidal and fungicidal. They are partly inactivated by organic matter and have slight residual activity. Some bacteria have developed resistance to glutaraldehyde, and it has been found that glutaraldehyde can cause asthma and other health hazards, hence ortho-phthalaldehyde is replacing glutaraldehyde. Oxidizing agents Oxidizing agents act by oxidizing the cell membrane of microorganisms, which results in a loss of structure and leads to cell lysis and death. A large number of disinfectants operate in this way. Chlorine and oxygen are strong oxidizers, so their compounds figure heavily here. Electrolyzed water or "Anolyte" is an oxidizing, acidic hypochlorite solution made by electrolysis of sodium chloride into sodium hypochlorite and hypochlorous acid. Anolyte has an oxidation-reduction potential of +600 to +1200 mV and a typical pH range of 3.5–8.5, but the most potent solution is produced at a controlled pH of 5.0–6.3, where the predominant oxychlorine species is hypochlorous acid. Hydrogen peroxide is used in hospitals to disinfect surfaces, and it is used in solution alone or in combination with other chemicals as a high level disinfectant. Hydrogen peroxide is sometimes mixed with colloidal silver. It is often preferred because it causes far fewer allergic reactions than alternative disinfectants. It is also used in the food packaging industry to disinfect foil containers. A 3% solution is also used as an antiseptic. Hydrogen peroxide vapor is used as a medical sterilant and as a room disinfectant. Hydrogen peroxide has the advantage that it decomposes to form oxygen and water, thus leaving no long-term residues, but hydrogen peroxide, as with most other strong oxidants, is hazardous, and solutions are a primary irritant. The vapor is hazardous to the respiratory system and eyes; consequently, the OSHA permissible exposure limit is 1 ppm (29 CFR 1910.1000 Table Z-1) calculated as an eight-hour time-weighted average, and the NIOSH immediately dangerous to life and health limit is 75 ppm. Therefore, engineering controls, personal protective equipment, gas monitoring, etc. should be employed where high concentrations of hydrogen peroxide are used in the workplace. Vaporized hydrogen peroxide is one of the chemicals approved for decontamination of anthrax spores from contaminated buildings, such as occurred during the 2001 anthrax attacks in the U.S. It has also been shown to be effective in removing exotic animal viruses, such as avian influenza and Newcastle disease viruses, from equipment and surfaces. The antimicrobial action of hydrogen peroxide can be enhanced by surfactants and organic acids. The resulting chemistry is known as Accelerated hydrogen peroxide. 
A 2% solution, stabilized for extended use, achieves high-level disinfection in 5 minutes, and is suitable for disinfecting medical equipment made from hard plastic, such as endoscopes. The evidence available suggests that products based on Accelerated Hydrogen Peroxide, apart from being good germicides, are safer for humans and benign to the environment. Ozone is a gas used for disinfecting water, laundry, foods, air, and surfaces. It is chemically aggressive and destroys many organic compounds, resulting in rapid decolorization and deodorization in addition to disinfection. Ozone decomposes relatively quickly. However, due to this characteristic, tap water chlorination cannot be entirely replaced by ozonation, as the ozone would decompose while still in the water piping. Instead, it is used to remove the bulk of oxidizable matter from the water, which would produce small amounts of organochlorides if treated with chlorine only. Regardless, ozone has a very wide range of applications from municipal to industrial water treatment due to its powerful reactivity. Potassium permanganate (KMnO4) is a purplish-black crystalline powder that colours everything it touches through its strong oxidising action. This includes staining "stainless" steel, which somewhat limits its use and makes it necessary to use plastic or glass containers. It is used to disinfect aquariums and is used in some community swimming pools as a foot disinfectant before entering the pool. Typically, a large shallow basin of KMnO4/water solution is kept near the pool ladder. Participants are required to step in the basin and then go into the pool. Additionally, it is widely used to disinfect community water ponds and wells in tropical countries, as well as to disinfect the mouth before pulling out teeth. It can be applied to wounds in dilute solution. Peroxy and peroxo acids Peroxycarboxylic acids and inorganic peroxo acids are strong oxidants and extremely effective disinfectants. Peroxyformic acid Peracetic acid Peroxypropionic acid Monoperoxyglutaric acid Monoperoxysuccinic acid Peroxybenzoic acid Peroxyanisic acid Chloroperbenzoic acid Monoperoxyphthalic acid Peroxymonosulfuric acid Phenolics Phenolics are active ingredients in some household disinfectants. They are also found in some mouthwashes and in disinfectant soap and handwashes. Phenols are toxic to cats and newborn humans. Phenol is probably the oldest known disinfectant, as it was first used by Lister, when it was called carbolic acid. It is rather corrosive to the skin and sometimes toxic to sensitive people. Impure preparations of phenol were originally made from coal tar, and these contained low concentrations of other aromatic hydrocarbons including benzene, which is an IARC Group 1 carcinogen. o-Phenylphenol is often used instead of phenol, since it is somewhat less corrosive. Chloroxylenol is the principal ingredient in Dettol, a household disinfectant and antiseptic. Hexachlorophene is a phenolic that was once used as a germicidal additive to some household products but was banned due to suspected harmful effects. Thymol, derived from the herb thyme, is the active ingredient in some "broad spectrum" disinfectants that often bear ecological claims. It is used as a stabilizer in pharmaceutic preparations. It has been used for its antiseptic, antibacterial, and antifungal actions, and was formerly used as a vermifuge. Amylmetacresol is found in Strepsils, a throat disinfectant. 
Although not a phenol, 2,4-dichlorobenzyl alcohol has effects similar to those of phenols, but it cannot inactivate viruses. Quaternary ammonium compounds Quaternary ammonium compounds ("quats"), such as benzalkonium chloride, are a large group of related compounds. Some concentrated formulations have been shown to be effective low-level disinfectants. Quaternary ammonium at or above 200 ppm, combined with alcohol, exhibits efficacy against difficult-to-kill non-enveloped viruses such as norovirus, rotavirus, or poliovirus. Newer synergistic, low-alcohol formulations are highly effective broad-spectrum disinfectants with quick contact times (3–5 minutes) against bacteria, enveloped viruses, pathogenic fungi, and mycobacteria. Quats are biocides that also kill algae and are used as an additive in large-scale industrial water systems to minimize undesired biological growth. Inorganic compounds Chlorine This group comprises aqueous solutions of chlorine, hypochlorite, or hypochlorous acid. Occasionally, chlorine-releasing compounds and their salts are included in this group. Frequently, a concentration of < 1 ppm of available chlorine is sufficient to kill bacteria and viruses, with spores and mycobacteria requiring higher concentrations. Chlorine has been used for applications such as the deactivation of pathogens in drinking water, swimming pool water and wastewater, for the disinfection of household areas and for textile bleaching. Sodium hypochlorite Calcium hypochlorite Monochloramine Chloramine-T Trichloroisocyanuric acid Chlorine dioxide Hypochlorous acid Iodine Iodine Iodophors Acids and bases Sodium hydroxide Potassium hydroxide Calcium hydroxide Magnesium hydroxide Sulfurous acid Sulfur dioxide Phosphoric acid Dodecylbenzenesulfonic acid Metals Most metals, especially those with high atomic weights, can inhibit the growth of pathogens by disrupting their metabolism. Terpenes Thymol Pine oil Other The biguanide polymer polyaminopropyl biguanide is specifically bactericidal at very low concentrations (10 mg/L). It has a unique method of action: the polymer strands are incorporated into the bacterial cell wall, which disrupts the membrane and reduces its permeability, which has a lethal effect on bacteria. It is also known to bind to bacterial DNA, alter its transcription, and cause lethal DNA damage. It has very low toxicity to higher organisms such as human cells, which have more complex and protective membranes. Common sodium bicarbonate (NaHCO3) has antifungal properties, and some antiviral and antibacterial properties, though those are too weak to be effective in a home environment. Non-chemical Ultraviolet germicidal irradiation is the use of high-intensity shortwave ultraviolet light for disinfecting smooth surfaces such as dental tools, but not porous materials that are opaque to the light, such as wood or foam. Ultraviolet light is also used for municipal water treatment. Ultraviolet light fixtures are often present in microbiology labs, and are activated only when there are no occupants in a room (e.g., at night). Heat treatment can be used for disinfection and sterilization. The phrase "sunlight is the best disinfectant" was popularized in 1913 by United States Supreme Court Justice Louis Brandeis and later advocates of government transparency. While sunlight's ultraviolet rays can act as a disinfectant, the Earth's ozone layer blocks the rays' most effective wavelengths. 
Ultraviolet light-emitting machines, such as those used to disinfect some hospital rooms, make for better disinfectants than sunlight. Since the mid-1990s, cold plasma has been shown to be an efficient sterilization/disinfection agent. Cold plasma is an ionized gas that remains at room temperature. It generates reactive oxygen and reactive nitrogen species that interact with the bacterial wall and membrane, cause oxidation of the lipids and proteins, and can also lyse the cells. Cold plasma can inactivate bacteria, viruses, and fungi. Electrostatic disinfection There has been a rise in the use of electrostatic disinfectants in recent years. Electrostatic disinfection is a process achieved by use of electrostatic sprayers, notable examples of which include the Vycel 4 and the Techtronics Ryobi. Electrostatic sprayers are a new technology for disinfecting surfaces. Unlike conventional spraying bottles or devices, electrostatic sprayers apply a positive ionic charge to liquid disinfectants as they pass through the nozzle of the device. The positively charged disinfectant distributed through the nozzle of an electrostatic sprayer is attracted to negatively charged surfaces, which allows for efficient coating of disinfectant solutions onto hard, nonporous surfaces. There are a number of specific disinfectants designed for use with electrostatic sprayers, and these are often dissolved in solution or diluted with water. Notable disinfectant sprays that are designed for use with electrostatic sprayers include Citrox Disinfectant Solution and Vital Oxide Disinfectant Solution. Health and safety concerns Production Individuals who work manufacturing disinfectants have higher exposure to the raw and harsh chemicals used in the production of disinfectants compared to the general population. This is due to the use of manual labor and automated machinery. However, the use of automated machinery does not eliminate direct contact with the chemicals during the production of disinfectants. Chemicals used in disinfectants come in various forms, such as gel, liquid, and powder. Minimal information is available about the health and safety of workers in other sectors of the production and manufacturing process of disinfectants. Inspection is a stage of disinfectant manufacturing that relies on human intervention. Many workers in the inspection phase of mass production of disinfectants have reported accidental inhalation of fumes, direct dermal contact, eye irritation, and accidental ingestion of disinfectant substances. Studies have shown reports of workers with short-term neurological impairments, dermal hypersensitivity, skin irritation, chemical burns, dermatitis, occupational asthma and work-related asthma, mucous membrane (nasal) and lung irritation, and some types of cancer after direct and consistent contact with disinfectants. The chemicals most associated with these health effects were quaternary ammonium compounds (QACs), phenolic compounds, iodophors, glutaraldehyde, alcohols, and chlorine. The evidence of dermal exposure was associated with the misuse or lack of personal protective equipment (PPE). Cancer has been shown to develop only with consistent exposure, along with a lack of use of PPE. Among these numerous health effects, evidence showed that dermal exposure was more hazardous than inhalation. These health effects can be minimized with the implementation of guidelines from the CDC, NPIC, OSHA, and NIOSH. 
Healthcare Settings There is evidence that exposure to cleaning and disinfectant products can cause acute health effects in healthcare workers. Observed effects include eye irritation and watery eyes, headaches, dizziness, throat irritation and wheezing, skin irritation, and work-related asthma. Most of these have a low severity. Some chemicals in cleaning products and disinfectants that have been associated with health impacts include chlorine, ammonia, ethanolamine, 2-butoxyethanol, quaternary ammonium compounds (QACs), and bleach. The adverse health impacts of disinfectants are still not well studied, which makes it difficult to develop guidelines for use in healthcare settings that take account of potential effects. There is also little information about the effectiveness and safety of alternative cleaning technologies, so-called "green cleaning". New guidelines would need to maintain high hygiene standards and prevent healthcare-associated infections. Professional Cleaning and Commercial Use Professional and industrial cleaners, despite being essential in maintaining hygiene and safety, are an understudied occupational group. Continuous exposure to cleaning agents containing ethanolamine, chloramine-T, and quaternary ammonium compounds (QACs) was found to cause occupational asthma (OA) in cleaners. QACs were also found to be involved in the development of antimicrobial resistance. Symptoms reported were dyspnea, cough, and wheezing. Women had a higher risk of acquiring OA due to greater exposure both at home and at work. Exposures occur through dermal contact, hand-to-mouth transfer, and inhalation of aerosolized quats. Researchers suggest continuous use of personal protective equipment (PPE), periodic medical examinations, and guidelines on how to handle chemicals. Dermal, respiratory, immune, reproductive, and developmental effects of exposure have been investigated, but the scope of such studies is currently limited. Other concerns include impacts on wastewater management, soil, and food, especially at dissolved concentrations. In the United States, the Environmental Protection Agency (EPA) and Food and Drug Administration (FDA) regulate QACs depending on their intended purposes. Stricter regulations and policies are warranted for safer use, along with a search for alternatives to limit exposures. See also Drug resistance Diethylene glycol - a raw material for air sanitation Hand sanitizer Hygiene List of cleaning products Sanitation Standard Operating Procedures Virucide References Further reading External links Ohio State University lecture on Sterilization and Disinfection What Germs Are We Killing? Testing and Classifying Disinfectants Disinfectant Selection Guide Disinfectant and Non-Chlorine Bleach —Office of DOE Science Education The Viennese Database for Disinfectants (WIDES Database) Cleaning and Custodial Services and Your Safety, by the National Institute for Occupational Safety and Health Hygiene Bactericides Occupational safety and health
Disinfectant
Biology
5,913
47,814,732
https://en.wikipedia.org/wiki/Amanita%20excelsa
Amanita excelsa, also known as the European false blushing amanita, is a species of agaric fungus in the family Amanitaceae. It is found in Asia, Europe, and North America, where it grows in deciduous forests. Toxicity Amanita excelsa var. alba is inedible. A. excelsa var. spissa is edible, but can easily be confused with the highly poisonous A. pantherina. References Fungi described in 1821 Fungi of Asia Fungi of Europe Fungi of North America excelsa Taxa named by Elias Magnus Fries Inedible fungi Fungus species
Amanita excelsa
Biology
122
78,699,108
https://en.wikipedia.org/wiki/Karl%20F.%20Lindman
Karl Ferdinand Lindman (7 June 1874 – 14 February 1952) was a Finnish physicist and educator. Best known for his work on chiral media, he performed the experimental demonstration of optical rotation of microwaves in an artificial chiral medium in 1914. For most of his career, he was a professor of physics at Åbo Akademi University. Biography Karl Ferdinand Lindman was born on 7 June 1874 in Ekenäs, Grand Duchy of Finland, to Karl Gustav and Lovisa Lindman. His father was a farmer with clerical duties. Receiving a degree in physics in 1895, Lindman obtained his PhD from the University of Helsinki in 1901. He briefly resided in Leipzig from 1899 to 1901; his thesis work was partially done at Leipzig University. Following his doctoral studies, Lindman served as a secondary school teacher and authored textbooks in physics, chemistry and astronomy in Swedish and Finnish. He was a lecturer at Svenska normallyceum i Helsingfors, where he introduced laboratory courses. In 1907, he took a sabbatical in England and Scotland to study teaching methods. Becoming a faculty member at Åbo Akademi University in 1918, he was appointed to the chair in physics in 1921, and served as vice rector from 1921 to 1929. He also served as the dean of the Faculty of Mathematics and Natural Sciences during his tenure. Despite retiring in 1942, he carried a full teaching load until 1945. Lindman was married to Hilma Lovisa Tallqvist. He died on 14 February 1952 and was survived by his son, Sven Lindman, who was a professor of political science at Åbo Akademi. A conference in honor of Lindman was organized in 1991 at Åbo Akademi by the Finnish chapters of URSI and IEEE. Electromagnetic Waves in Chiral and Bi-isotropic Media, a 1994 monograph on chiral and bi-isotropic media by Ismo Lindell and his colleagues, is dedicated in his honour. Research and contributions to chiral media Lindman was mainly an experimental physicist and his research work focused on electromagnetics: he is best known for his work on chiral media. In 1914, he demonstrated optical rotation in an artificial chiral medium experimentally. He constructed the artificial medium from left- and right-handed copper helices suspended in cotton, and observed that this composite material rotates a linearly polarized microwave signal in a circular waveguide apparatus. He also showed that equal numbers of left- and right-handed helices cause no polarization rotation. His observations were first reported in the same year in the proceedings of the Finnish Society of Sciences and Letters; these were subsequently published in 1920 and 1922 in the German-language journal Annalen der Physik. Even though this experiment came after Jagadish Chandra Bose's 1898 study on optical rotation of microwaves, it acted as a progenitor to artificial dielectrics and metamaterials. The experiment was repeated in the 1950s with more advanced apparatus and was subsequently adapted to terahertz waves in 2009. Following his publications from 1914 to the early 1920s, Lindman continued his experiments in chirality and proposed different configurations to induce optical activity. Lindman was also active in other areas of electromagnetics. His doctoral studies at the University of Leipzig focused on the resonances and standing waves in a dipole antenna. In addition to resonances of wire antennas, Lindman studied millimeter and infrared wave propagation, diffraction gratings, scattering and waveguides. 
In the 1940s, he studied wave propagation in circular waveguides and parallel plates; these studies coincided with the flurry of interest in microwave propagation in waveguides for radar applications stemming from World War II. Even though he did not publish any original research on the theory of relativity, he was critical of it and expressed his criticisms in his textbooks. Selected publications References 1874 births 1952 deaths 20th-century Finnish physicists Microwave engineers University of Helsinki alumni Academic staff of Åbo Akademi University People from Raseborg 20th-century Finnish educators 20th-century Finnish non-fiction writers Textbook writers Finnish schoolteachers Finnish writers in Swedish Relativity critics Finnish expatriates in Germany Experimental physicists
Karl F. Lindman
Physics
861
60,694,300
https://en.wikipedia.org/wiki/Lia%20Addadi
Lia Addadi (born 1950) is a professor of structural biology at the Weizmann Institute of Science. She works on crystallization in biology, including biomineralization, interactions with cells, and crystallization in cell membranes. She was elected a member of the National Academy of Sciences (NAS) in 2017 for “distinguished and continuing achievements in original research”, and of the American Philosophical Society in 2020. Early life and education Addadi was born in Padua. She earned her bachelor's and master's degrees in chemistry at the University of Padua and graduated in 1973. She moved to Rehovot for her PhD, supervised by Meir Lahav, on the synthesis of chiral polymers at the Weizmann Institute of Science, which she completed in 1979. Research and career After her PhD Addadi joined Jeremy R. Knowles at Harvard University. She started to work on crystal growth during her PhD, and, by chance, met Steve Weiner, who was working on biomineralization. Together they investigated many biominerals, including demonstrating the matrix sheets of crystals in nacre (mother of pearl). Addadi returned to the Weizmann Institute of Science as an associate professor in 1988. She was promoted to full professor in 1993 and became head of the Department of Structural Biology in 1994. She works on ordered crystal arrays and mineralized tissues. She has investigated the relationship between acidic proteins and biominerals including calcite and apatite. She demonstrated that macromolecules in the shells of mollusks determine the polymorphism of aragonite and calcite. She went on to establish the role of amorphous calcium carbonate in biomineralization. Addadi identified that mollusks build their shells using hydrophobic silk gels, aspartic acid, acid-rich proteins, and an amorphous precursor. Addadi is interested in how macromolecules nucleate oriented growth and how morphology changes through interactions with surfaces. Addadi looks at the structures of crystal-protein composites. She demonstrated that protein intercalation into the lattice can change the texture and mechanical properties of the material. She showed that immunoglobulins and serum albumins can selectively adhere to the surfaces of crystals and nucleate further crystal growth. This can help to understand how diseases such as gout, osteoarthritis, and atherosclerosis form crystals in body fluids. Addadi studies the formation pathways of these mineralized tissues in foraminifera and zebrafish bone. She was the first woman to win the ETH Zurich Prelog Prize, in 1989. She was appointed dean of the Faculty of Chemistry in 2001. Her work has considered molecular recognition at crystal interfaces. When introduced to an organism, crystals appear as highly structured, repetitive macromolecular substrates. She studies monoclonal antibodies that are sensitive to specific crystalline organisations. She also investigates cross-talk between crystals and the biological environments they exist in. Her inaugural-year article for Proceedings of the National Academy of Sciences of the United States of America (PNAS) considered the formation of cholesterol crystals in atherosclerosis. She demonstrated that in cell culture, crystals adopt a similar shape to the atherosclerotic plaque that forms in cells, because they are formed from the same cholesterol. The crystals adopt helical or tubular forms. Addadi used stochastic optical reconstruction microscopy and soft X-ray tomography to identify the cholesterol inside cells. 
Awards and honours 1986 Weizmann Institute of Science Ernst David Bergmann Prize in Chemistry 1989 Israel Chemical Society Annual Award 1996 NIDR Prize for Distinguished Scientists 1998 ETH Zurich Prelog Medal in Stereochemistry 2006 Technion – Israel Institute of Technology Kolthoff Prize 2009 Israel Chemical Society Prize for Excellence 2011 Royal Swedish Academy of Sciences Gregori Aminoff Prize for Crystallography 2017 Elected to the National Academy of Sciences 2018 ETH Zurich Honorary Doctorate References 1950 births Living people Italian women scientists Italian women chemists Academic staff of Weizmann Institute of Science Weizmann Institute of Science alumni University of Padua alumni Women biochemists Members of the American Philosophical Society Foreign associates of the National Academy of Sciences Date of birth missing (living people)
Lia Addadi
Chemistry
858
13,693,330
https://en.wikipedia.org/wiki/Thiazyl%20trifluoride
Thiazyl trifluoride is a chemical compound of nitrogen, sulfur, and fluorine, having the formula . It exists as a stable, colourless gas, and is an important precursor to other sulfur-nitrogen-fluorine compounds. It has tetrahedral molecular geometry around the sulfur atom, and is regarded as a prime example of a compound that has a sulfur-nitrogen triple bond. Preparation can be synthesised by the fluorination of thiazyl fluoride, NSF, with silver(II) fluoride, : or by the oxidative decomposition of by silver(II) fluoride: It is also a product of the oxidation of ammonia by . Direct fluorination of mercury difluorosulfinimide (Hg(NSF2)2) does not give thiazyl trifluoride, but instead the isomeric fluoriminosulfur difluoride (F2SNF). Reactions is much more stable than thiazyl fluoride, does not react with ammonia and hydrogen chloride, and only reacts with sodium at 400 °C. However, the fluoride ligands are labile, and can be displaced by secondary amines. Thiazyl trifluoride reacts with carbonyl fluoride () in the presence of hydrogen fluoride to form pentafluorosulfanyl isocyanate ().
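The silver(II) fluoride route described above can be illustrated with a balanced equation. The stoichiometry below is a plausible reconstruction, assuming AgF2 acts as both oxidant and fluorine source, rather than a formula quoted from the source:

\[
\mathrm{NSF} + 2\,\mathrm{AgF_2} \longrightarrow \mathrm{NSF_3} + 2\,\mathrm{AgF}
\]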
Thiazyl trifluoride
Chemistry
316
14,513,559
https://en.wikipedia.org/wiki/Environmental%20Research
Environmental Research is a peer-reviewed environmental science and environmental health journal published by Elsevier. The editor in chief is Jose L. Domingo. The journal's 2020 impact factor of 6.498 placed it 16th out of 203 journals in the category Public, Environmental, and Occupational Health; the 2021 impact factor increased to 8.431. References External links Environmental science journals Elsevier academic journals Academic journals established in 1967 Environmental health journals
Environmental Research
Environmental_science
89
54,426,651
https://en.wikipedia.org/wiki/Ackermann%27s%20formula
In control theory, Ackermann's formula is a control system design method for solving the pole allocation problem for time-invariant systems, developed by Jürgen Ackermann. One of the primary problems in control system design is the creation of controllers that will change the dynamics of a system by changing the eigenvalues of the matrix representing the dynamics of the closed-loop system. This is equivalent to changing the poles of the associated transfer function in the case that there is no cancellation of poles and zeros. State feedback control Consider a continuous-time linear time-invariant system with a state-space representation where is the state vector, is the input vector, and are matrices of compatible dimensions that represent the dynamics of the system. An input-output description of this system is given by the transfer function where is the determinant and is the adjugate. Since the denominator of the right equation is given by the characteristic polynomial of , the poles of are eigenvalues of (note that the converse is not necessarily true, since there may be cancellations between terms of the numerator and the denominator). If the system is unstable, or has a slow response or any other characteristic that does not meet the design criteria, it could be advantageous to make changes to it. The matrices , however, may represent physical parameters of a system that cannot be altered. Thus, one approach to this problem might be to create a feedback loop with a gain that will feed the state variable into the input . If the system is controllable, there is always an input such that any state can be transferred to any other state . With that in mind, a feedback loop can be added to the system with the control input , such that the new dynamics of the system will be In this new realization, the poles will be dependent on the characteristic polynomial of , that is Ackermann's formula Computing the characteristic polynomial and choosing a suitable feedback matrix can be a challenging task, especially in larger systems. One way to make computations easier is through Ackermann's formula. For simplicity's sake, consider a single input vector with no reference parameter , such as where is a feedback vector of compatible dimensions. Ackermann's formula states that the design process can be simplified by only computing the following equation: in which is the desired characteristic polynomial evaluated at matrix , and is the controllability matrix of the system. Proof This proof is based on the Encyclopedia of Life Support Systems entry on Pole Placement Control. Assume that the system is controllable. The characteristic polynomial of is given by Calculating the powers of results in Substituting the previous equations into yields Rewriting the above equation as a matrix product and omitting terms that do not appear isolated yields From the Cayley–Hamilton theorem, , thus Note that is the controllability matrix of the system. Since the system is controllable, is invertible. Thus, To find , both sides can be multiplied by the vector giving Thus, Example Consider We know from the characteristic polynomial of that the system is unstable since the matrix will only have positive eigenvalues. Thus, to stabilize the system we shall apply a feedback gain From Ackermann's formula, we can find a matrix that will change the system so that its characteristic equation will be equal to a desired polynomial. 
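The computation above lends itself to a few lines of code. The following is a minimal NumPy sketch of Ackermann's formula for single-input systems; the double-integrator matrices and the pole locations -2 and -3 used here are illustrative assumptions and are not the numerical example worked below.

```python
import numpy as np

def acker(A, B, desired_poles):
    """State-feedback gain K via Ackermann's formula (single-input systems)."""
    A = np.atleast_2d(A)
    B = np.atleast_2d(B).reshape(-1, 1)
    n = A.shape[0]

    # Controllability matrix  C = [B, AB, ..., A^(n-1) B]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

    # Desired characteristic polynomial evaluated at A
    coeffs = np.poly(desired_poles)  # [1, a_{n-1}, ..., a_0]
    Delta = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))

    # K = [0 ... 0 1] C^{-1} Delta(A)
    e_n = np.zeros((1, n))
    e_n[0, -1] = 1.0
    return e_n @ np.linalg.solve(C, Delta)

# Illustrative use: place the poles of a double integrator at -2 and -3
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = acker(A, B, [-2.0, -3.0])
print(K)                              # [[6. 5.]]
print(np.linalg.eigvals(A - B @ K))   # approximately [-2., -3.]
```

The result can be cross-checked against a dedicated routine such as scipy.signal.place_poles, which uses a numerically more robust algorithm; Ackermann's formula involves inverting the controllability matrix and is best reserved for small, well-conditioned systems.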
Suppose we want Thus, and computing the controllability matrix yields Also, we have that Finally, from Ackermann's formula State observer design Ackermann's formula can also be used for the design of state observers. Consider the linear discrete-time observed system with observer gain . Then Ackermann's formula for the design of state observers is noted as with observability matrix . Here it is important to note that the observability matrix and the system matrix are transposed: and . Ackermann's formula can also be applied to continuous-time observed systems. See also Full state feedback References External links Chapter about Ackermann's Formula on the Wikibook of Control Systems and Control Engineering Engineering concepts Control engineering Control theory
Ackermann's formula
Mathematics,Engineering
832
17,904,953
https://en.wikipedia.org/wiki/Rational%20dependence
In mathematics, a collection of real numbers is rationally independent if none of them can be written as a linear combination of the other numbers in the collection with rational coefficients. A collection of numbers which is not rationally independent is called rationally dependent. For instance, we have the following example: if we let , then . Formal definition The real numbers ω1, ω2, ... , ωn are said to be rationally dependent if there exist integers k1, k2, ... , kn, not all of which are zero, such that If such integers do not exist, then the numbers are said to be rationally independent. This condition can be reformulated as follows: ω1, ω2, ... , ωn are rationally independent if the only n-tuple of integers k1, k2, ... , kn such that is the trivial solution in which every ki is zero. The real numbers form a vector space over the rational numbers, and this is equivalent to the usual definition of linear independence in this vector space. See also Baker's theorem Dehn invariant Gelfond–Schneider theorem Hamel basis Hodge conjecture Lindemann–Weierstrass theorem Linear flow on the torus Schanuel's conjecture Bibliography Dynamical systems
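As an illustration of the definition (an example added here for clarity, not drawn from the source), consider

\[
1\cdot 1 + 1\cdot\sqrt{2} + (-1)\cdot\left(1+\sqrt{2}\right) = 0 ,
\]

so the numbers 1, \(\sqrt{2}\), and \(1+\sqrt{2}\) are rationally dependent. By contrast, 1 and \(\sqrt{2}\) are rationally independent: a relation \(k_1\cdot 1 + k_2\cdot\sqrt{2} = 0\) with integers \(k_1, k_2\) not both zero would force \(k_2 \neq 0\) and hence \(\sqrt{2} = -k_1/k_2\), contradicting the irrationality of \(\sqrt{2}\).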
Rational dependence
Physics,Mathematics
265
69,889,778
https://en.wikipedia.org/wiki/Laurent%20Charlet
Laurent Charlet (born 1955 in Paris, France) is a French environmental molecular geochemist working at the Institute of Earth Science within the University of Grenoble-Alpes (France). In 2007, he was appointed Distinguished Professor to reflect his major scientific achievements. He holds several adjunct or affiliated positions at the Lawrence Berkeley Laboratory (USA), the University of Swansea (Honorary Chair, UK) and the University of Waterloo (Canada). His research interests aim to advance our scientific knowledge for protecting natural resources such as healthy soils and clean water, using subsurface resources responsibly, and developing strategies for resilience in a changing world. Early life and education Laurent Charlet was born in Paris into a family of artists (painters, architects, sculptors). He attended a Montessori-like school (Ecole Alsacienne) where he was encouraged to develop independent thinking. Through his parents, Dr. Charlet cultivated a strong interest in the humanities, his father's favorite field, but also in the sciences thanks to his mother, who was among the first women to earn an architecture DPLG diploma in France. Dr. Charlet studied at the most prestigious French institutions for his high school and bachelor-level studies in mathematics and biology (Louis Le Grand, Saint Louis and Henri IV). Later on, he earned a Master in Agronomy Engineering at Agrocampus (Agronomy Institutes), with a specialization in Mediterranean and Desert Agronomy. Concomitantly, he kept practicing music and modern dance. Because of his engineering training, Dr. Charlet has always kept a keen interest in working on environmental issues of worldwide societal and health importance, applying a molecular approach to decipher processes occurring at the macro scale. In 1981, Dr. Charlet moved to California, USA, with a 6-month scholarship at the University of California – Riverside to work under the supervision of Dr. Garrison Sposito. Fascinated by his experience, he decided to stay within the research group of Dr. Sposito to earn an M.S. and a Ph.D. in Soil and Environmental Sciences. His mentor, Dr. Garrison Sposito, taught him how to conceptualize any natural process in a mathematical framework. His M.S. research project examined the sorption of the CaCl+ ion pair on clay and its detrimental impact on soil stability in desert agriculture. His Ph.D. project focused on the surface chemistry of Amazonian soils exposed to intensive agricultural stress. In 1986, Dr. Charlet moved back to Europe to work as a postdoctoral research scholar at the Swiss Federal Institute of Aquatic Science and Engineering (EAWAG) within Dr. Werner Stumm's research group. He completed a second postdoc at the University of Bern within Dr. Paul Schindler's research group. While living in Bern, Dr. Charlet met and married Dr. Barbara Dehn, a German biologist. They have two children, Alvaro and Anaïs. Scientific position and Awards Appointed full professor at the age of 35, while the European Synchrotron Radiation Facility (ESRF) was under construction in Grenoble, Laurent Charlet's work gained a global spotlight by spanning drastically different scales: (i) large-scale multidisciplinary field investigations and (ii) molecular-level investigations at synchrotron facilities, with a special focus on the redox surface chemistry of nanoparticles, particularly for oxyanion trace elements and their importance in human health, the origin of life and environmental safety. In 2007 Dr. 
Charlet was awarded the CNRS Silver Medal for Excellence in Research. He was given visiting professor positions at UC-Berkeley (US), EPFL-Lausanne (Switzerland), and Uni. Utrecht (The Netherlands). An honorary member of the Institut Universitaire de France, he was for 10 years both Editor-in-Chief of the Journal of Hydrology and International Research Advisor to the UGA Chancellor. Since 2018 he has headed the French Chapter of the International Medical Geology Association. He also interacts with archeologists and was until 2009 a member of the Lascaux UNESCO Heritage scientific committee. Research Avenues His research focuses on the chemical reactivity of (nano)particles, either natural (soil, sediment and water) or engineered (oncology, nanotechnology or environmental engineering). While continuing his life-long research on soil and deep underground storage issues regarding water quality, he is now collaborating with the medical community to investigate diseases induced by the presence of nanoparticles in the organism (e.g. podoconiosis) or to develop treatments. His approach consists of examining trace elements and their speciation at the molecular level in a large variety of biological and environmental media. Dr. Laurent Charlet's work is based on the development of advanced chemical concepts, methodology and instrumentation to investigate biological and geochemical processes governing the chemical speciation and the impact on mobility, bioavailability and toxicity of trace elements (Se, As, Sb, Re, Hg) or organic molecules (antibiotics and other contaminants not treated by wastewater treatment plants (WWTP)) in heterogeneous chemistry. By combining field and toxicological measurements using spectroscopic (μXAFS, ESR, Mössbauer), neutron and X-ray diffractometric techniques, he contributes to developing new concepts and new tools for the geochemical community working on water quality, paleoenvironmental reconstruction, environmental risk assessment, or geomedicine. • Mineral particle surface chemistry Natural nanomaterials such as clays and oxides can store large amounts of major and trace elements (either bioessential or toxic). After many years investigating clay, calcite and Fe and Mn oxide surface chemistry, Dr. Laurent Charlet shifted his research focus to redox-sensitive minerals such as magnetite, pyrite and mackinawite, whose structures could have participated in the emergence of life. Examining the surface reactivity of such iron and manganese nanoparticles and pyrite–greigite nanocomposites when exposed to highly toxic contaminants present in water (e.g., Se, As, and Sb oxyanions) contributes to the development of new remediation and filtration techniques, but also to our understanding of particle toxicity. • Cancer nanotherapeutics, nanotoxicity and trace element deficiency Dr. Laurent Charlet's research aims to study both the impact of trace element deficiency on human health (e.g., the impact of selenium deficiency on osteoarthrosis, thyroid cancer and Kashin–Beck disease), and, conversely, the use of nanoparticles as therapeutic agents (e.g., in fighting ovarian and prostate cancer), where selenium nanoparticles were shown to play a direct role in histone methylation. Other nanomaterials have a toxicity depending on their diameter (e.g., the silver nanowires to be used in display screens) or on the reactivity of surface iron atoms (e.g., in asbestos, used for many years). • Hydrogen, Water and Waste Geological Storage Hydrogen. 
Efficient, large-scale, and long-duration energy storage for intermittent renewable energy sources is a critical unsolved problem for the expansion of carbon-free electricity. Underground hydrogen storage (UHS) coupled to reversible water splitting and hydrogen oxidation has the potential to play a significant role in grid-scale energy storage. Dr. Charlet has proposed an innovative approach to UHS, through the study of hydrogen on clays with various water contents, which will provide new opportunities for the deployment of hydrogen storage and its integration into energy and electricity systems. Water. The Sponge City concept, popularized by Dr. Yu Kongjian (Peking University), corresponds to future cities that do not act like an impermeable system, but allow water to filter through the ground, like a sponge absorbing rain water, particularly flash flood water (in dry areas) or water outflow from treatment plants (e.g. after reverse osmosis treatment in Los Angeles after 2035). Dr. Charlet contributes to this research by exploring the impact of natural contamination, its removal by passive filtration systems, and the potential of bioremediation techniques that will allow the removal of emerging contaminants and the recharge of urban aquifers with high-quality waters. Dr. Charlet's research also contributes to the treatment of urban and industrial wastewater. Two major worldwide environmental issues are being investigated: (i) the immobilization by clays and biochars of antibiotics and other common organic micropollutants present in surface water that are not addressed by WWTPs and (ii) the decontamination of phosphogypsum stack effluents. Nuclear waste. The safety of radioactive waste geological storage remains a major challenge. Vitrified waste steel canisters will be stored either in a > 300 m thick clay rock, or in granite surrounded by a clay and concrete “near field” barrier. Dr. Charlet's team has shown that radioactive oxyanions are sorbed on the edge faces of clay minerals and on concrete components, leading via redox reactions to the immobilization of otherwise extremely mobile radionuclides such as 79Se and high-valence 235,238U. In addition, iron sulfides, which both exist in granitic and claystone host rocks and are formed during steel canister corrosion, are key actors in controlling the redox potential and inhibiting the transport of redox-sensitive radionuclides. • Paleoenvironments and Archeology Trace elements can be used as proxies to reconstruct environmental or human histories. Their geochemical signatures can be stored in natural archives or archeological artefacts. Dr. Charlet and his collaborators showed the impact of early Bronze Age and medieval metallurgy in the Alps and, later, the rise (and impact) of the cement industry. He also showed how, depending on climate, well-crystalline vs. granular calcite can respectively protect or obliterate prehistoric wall paintings. Please see research details: . Papers As of January 2022, Laurent Charlet is co-author of 213 peer-reviewed international journal papers, and another 200 publications of various types, with a total of 20,000 citations and an h-index of 75. They can be found at the ORCID . MOOC and conferences Laurent Charlet has emerged as a science communicator on the web, as he developed, together with Prof. R. Latmani at EPFL, an EdX MOOC entitled “Water Quality: the Bio-Geo-Chemical Engine”, given online since 2019, and he gives regular lectures in China and the USA. 
Laurent Charlet regularly presents his work at international geochemistry (Goldschmidt 2021), chemistry (ACS Fall 2021 and Spring 2022), ecological engineering (ACEER 21), food and nutrition (VirtualFood 22) and toxicology (Toxico 2022) conferences, in Lyon, Atlanta, San Diego, Beijing, London and Barcelona respectively. References External links Papers and publications 1955 births Living people French geochemists Scientists from Paris
Laurent Charlet
Chemistry
2,309
15,245,685
https://en.wikipedia.org/wiki/CLEC2D
C-type lectin domain family 2 member D is a protein that in humans is encoded by the CLEC2D gene. This gene encodes a member of the natural killer cell receptor C-type lectin family. The encoded protein inhibits osteoclast formation and contains a transmembrane domain near the N-terminus as well as the C-type lectin-like extracellular domain. Several alternatively spliced transcript variants have been identified, but the full-length nature of every transcript has not been defined. CLEC2D encodes the gene for the Lectin Like Transcript-1 (LLT1) protein which is a functional ligand for the human NKR-P1A receptor, encoded by the KLRB1 gene. In mice, there are many orthologs of the CLEC2D gene, and the presumed homolog is Clr-b/Ocil (Clec2d). Clr-b has been implicated in missing-self recognition by natural killer cells through engagement of the NKR-P1B receptor. References Further reading External links
CLEC2D
Chemistry
234
11,569,785
https://en.wikipedia.org/wiki/Thecaphora%20solani
Thecaphora solani, potato smut, is a fungal plant pathogen. It affects plants, primarily potatoes, in the Andean part of South America. The disease of potatoes that it causes is economically important (there have been reports of crop losses up to 80 percent in South America). References External links Index Fungorum USDA ARS Fungal Database Pictures Information Fungal plant pathogens and diseases Fungi of South America Ustilaginomycotina Fungi described in 1944 Fungus species
Thecaphora solani
Biology
97
11,460,668
https://en.wikipedia.org/wiki/Ophiostoma%20wageneri
Ophiostoma wageneri is a plant pathogen. Leptographium wageneri var. pseudotsugae develops on Douglas-fir. See also List of Douglas-fir diseases References Fungal conifer pathogens and diseases Ophiostomatales Fungi described in 1962 Fungus species
Ophiostoma wageneri
Biology
62
9,369,090
https://en.wikipedia.org/wiki/Department%20of%20Physics%20and%20Astronomy%2C%20University%20of%20Manchester
The Department of Physics and Astronomy at the University of Manchester is one of the largest and most active physics departments in the UK, taking around 330 new undergraduates and 50 postgraduates each year, and employing more than 80 members of academic staff and over 100 research fellows and associates. The department is based on two sites: the Schuster Laboratory on Brunswick Street and the Jodrell Bank Centre for Astrophysics in Cheshire, international headquarters of the Square Kilometre Array (SKA). According to the Academic Ranking of World Universities, the department is the 9th best physics department in the world and the best in Europe. It is ranked 2nd in the UK by grade point average (GPA) according to the 2021 Research Excellence Framework (REF), behind only the University of Sheffield. The University has a long history of physics dating back to 1874, which includes 12 Nobel Prize laureates, most recently Andre Geim and Konstantin Novoselov, who were awarded the Nobel Prize in Physics in 2010 for their discovery of graphene. Research groups The Department of Physics and Astronomy comprises eight research groups: Astronomy and Astrophysics Biological Physics Condensed Matter Physics Nonlinear Dynamics and Liquid Crystal Physics Photon Physics Particle Physics Nuclear Physics Theoretical Physics Research in the Department of Physics has been funded by the Particle Physics and Astronomy Research Council (PPARC), the Science and Technology Facilities Council (STFC) and the Royal Society. Notable faculty The department employs 53 professors, including emeritus professors. Teresa Anderson, Professor of Physics and co-founder of the Bluedot Festival Philippa Browning, Professor of Astrophysics Brian Cox, Professor of Particle Physics, working on the ATLAS experiment at the Large Hadron Collider Philip Diamond, Professor of Photon Physics and Director General of the Square Kilometre Array (SKA) Wendy Flavell, Vice Dean for Research and a Professor of Surface Physics Jeffrey Forshaw, Professor of Particle Physics and co-author of The Quantum Universe Sir Andre Geim, Regius Professor & Royal Society Research Professor Sir Konstantin Novoselov, Langworthy Professor of Physics Tim O'Brien, Professor of Astrophysics Terry Wyatt, Professor of Particle Physics Notable alumni and former staff Sarah Bridle, Professor of Food, Climate and Society at the University of York Neil Burgess, University College London Tamsin Edwards, King's College London Yvonne Elsworth, University of Birmingham Danielle George, Professor of Radiofrequency Engineering History The department has origins dating back to 1874, when Balfour Stewart was appointed the first Langworthy Professor of Physics at Owens College, Manchester. Stewart was the first to identify an electrified atmospheric layer (now known as the ionosphere) which could distort the Earth's magnetic field. The theory of the ionosphere was postulated by Carl Friedrich Gauss in 1839; Stewart published the first experimental confirmation of the theory in 1878. 
Since then, the department has hosted many award-winning scientists, including: Hans Bethe, awarded the Nobel Prize in Physics in 1967 Patrick Blackett, Baron Blackett, awarded the Nobel Prize in Physics in 1948 Niels Bohr, awarded the Nobel Prize in Physics in 1922 Sir William Lawrence Bragg, discovered Bragg's law and awarded the Nobel Prize in Physics in 1915 Sir James Chadwick, awarded the Nobel Prize in Physics in 1935 Sir John Cockcroft, awarded the Nobel Prize in Physics in 1951 Rod Davies, Professor of Radio Astronomy Richard Davis, Professor of Astrophysics Samuel Devons, Brian Flowers, Baron Flowers, Sir Francis Graham-Smith, Astronomer Royal from 1982 to 1990 Henry Hall, who built the first dilution refrigerator Sir Bernard Lovell, creator of the Lovell Telescope at the Jodrell Bank Observatory Henry Moseley, creator of Moseley's law Nevill Francis Mott, awarded the Nobel Prize in Physics in 1977 Ernest Rutherford, awarded the Nobel Prize in Chemistry in 1908 and later the first to split the atom Sir Arthur Schuster, Balfour Stewart, first Langworthy Professor of Physics Sir Joseph John "J. J." Thomson, studied physics at Owens College, Manchester aged 14, went on to run the Cavendish Laboratory in Cambridge and was awarded the 1906 Nobel Prize in Physics. In 2004, the two separate departments of physics at the Victoria University of Manchester and the University of Manchester Institute of Science and Technology (UMIST) were merged to form the current Department of Physics and Astronomy at the University of Manchester. The department was known as the School of Physics and Astronomy until a 2019 reshuffle. Emeritus professors The department is also home to several emeritus scientists pursuing their research interests after their formal retirement, including: Alexander Donnachie, Research Professor Andrew Lyne, Emeritus Professor and co-discoverer of the binary pulsar Robin Marshall, Professor of Physics and Biology Michael Moore, Emeritus Professor of Theoretical Physics References Physics Astronomy education Physics departments in the United Kingdom Astronomy in the United Kingdom Professional education in Manchester
Department of Physics and Astronomy, University of Manchester
Astronomy
966
5,440,905
https://en.wikipedia.org/wiki/Winlink
Winlink, or formally, Winlink Global Radio Email (registered US Service Mark), also known as the Winlink 2000 Network, is a worldwide radio messaging system that uses amateur-band radio frequencies and government frequencies to provide radio interconnection services that include email with attachments, position reporting, weather bulletins, emergency and relief communications, and message relay. The system is built and administered by volunteers and is financially supported by the Amateur Radio Safety Foundation. Network Winlink networking started by providing interconnection services for amateur radio (also known as ham radio). It is well known for its central role in emergency and contingency communications worldwide. The system used to employ multiple central message servers around the world for redundancy, but in 2017–2018 upgraded to Amazon Web Services that provides a geographically-redundant cluster of virtual servers with dynamic load balancers and global content-distribution. Gateway stations have operated on sub-bands of HF since 2013 as the Winlink Hybrid Network, offering message forwarding and delivery through a mesh-like smart network whenever Internet connections are damaged or inoperable. During the late 2000s, it increasingly became what is now the standard network system for radio email, worldwide. Additionally, in response to the need for better disaster response communications in the mid to later part of the 2000s, the network was expanded to provide separate parallel radio email networking systems for the US Department of Homeland Security SHARES Winlink Radio Email System, along with other governments (non-amateur radio) services, also to include Non-government Organizations such as the US American Red Cross, the Austrian International Red Cross, and other such critical infrastructure Non-Government Organizations. Although these services are separate, and for reasons of security may be unknown to each other, the capability to cross services with complete Interoperability is available. For example, a US ham using Winlink on the amateur radio spectrum may email a Winlink user on the DHS SHARES Winlink system (non-amateur) radio service, which may then be picked up on the DHS SHARES Winlink network system. Of course, the originator of any service must be familiar with the regulatory environment of the recipient's service should it be another Winlink service. Amateur radio HF e-mail E-mail via HF can be used nearly everywhere on the planet, and is made possible by connecting an HF single sideband (SSB) transceiver system to a computer, modem interface, and appropriate software. The HF modem technologies include PACTOR, Winmor (deprecated), ARDOP, Vara HF, and Automatic Link Establishment (ALE). VHF/UHF protocols include AX.25 Packet and Vara FM. Guidelines Operators in each country must, as a baseline, follow the appropriate regulatory guidelines for their license. Some countries may limit or regulate types of amateur messaging (such as e-mail) by content, origination location, end destination, or license class of the operator. Origination of third party messages (messages sent on behalf of, or sent to, an end destination who is not an amateur operator) may also be regulated in some countries; those that limit such third party messages normally have exceptions for emergency communications. 
In accordance with long-standing amateur radio tradition, international guidelines and FCC rules section 97.113, hams using the Winlink system are advised that it is not appropriate to use it for business communications. Users The Winlink system is open to properly licensed amateur radio operators, worldwide. The system primarily serves radio users without normal access to the internet, government and non-government public service organizations, medical and humanitarian non-profits, and emergency communications organizations. As of July 2008, there were approximately 12,000 radio users and approximately 100,000 internet correspondents. Monthly traffic volume averages over 100,000 messages. For offshore cruising yachts, Winlink is widely used as an alternative to, or alongside, Sailmail, which is an HF PACTOR-based email system using marine HF frequencies rather than amateur ones and which, unlike the amateur radio use of Winlink, allows business to be conducted over radio. In addition to email, Winlink uses a system called "Saildocs", and other file delivery methods, which allows properly licensed amateur radio cruisers to retrieve meteorological, maritime safety and other crucial files over Winlink email. As an example, Winlink was found to be more useful in and around South Africa, where the best weather information was provided by SAMNet (South African Mobile Maritime Net). Supported radio technologies 802.11 wifi ALE (Automatic Link Establishment) APRS (Automatic Packet Reporting System) AX.25 Packet Radio D-Star PACTOR PACTOR-II PACTOR-III PACTOR-IV WINMOR (deprecated) ARDOP Vara HF Vara FM TCP/IP (Telnet and other Wireless Technologies) Technical protocols PACTOR-I, WINMOR (deprecated), ARDOP, HSMM (WiFi), AX.25 packet, D-Star, TCP/IP, and ALE are non-proprietary protocols used in various RF applications to access the Winlink network systems. Later versions of PACTOR are proprietary and supported only by commercially available modems from Special Communications Systems GmbH. In the amateur radio service, AirMail, Winlink Express, and other email client programs used by the Winlink system disable the proprietary compression technology for PACTOR-II, PACTOR-III, and PACTOR-IV modems and instead rely on the open FBB protocol, also widely used worldwide by packet radio BBS forwarding systems. Controversies and US regulatory issues In May 1995, the American Radio Relay League (ARRL) privately asked the FCC to change Part 97.309(a) to allow fully documented G-TOR, Clover, and original open source PacTOR (Pactor I) modes. The FCC granted this request in DA-95-2106 based on the ARRL's representation that it had worked with developers to ensure complete technical documentation of these codes was available to all amateur radio operators. However, subsequent versions of Pactor contained proprietary compression algorithms that prevent over-the-air interception. As of July 9, 2024, the Winlink Development Team has stated that their software only uses an open compressed binary format called Open B2F, which is publicly listed on the Winlink website and replaces the proprietary compression used by some manufacturers of the protocols used. In 2007, a US amateur radio operator filed a formal petition with the Federal Communications Commission (FCC) aimed at reducing the signal bandwidth in automatic operation subbands; but, in May 2008, the FCC ruled against the petition. 
In the official Order, the FCC said, "Additionally, we believe that amending the amateur service rules to limit the ability of amateur stations to experiment with various communications technologies or otherwise impeding their ability to advance the radio art would be inconsistent with the definition and purpose of the amateur service. Moreover, we do not believe that changing the rules to prohibit a communications technology currently in use is in the public interest." In 2013, the FCC ruled in Report and Order 13-1918 against the use of encryption in the US amateur radio bands for any purpose, including emergency communications. The FCC cited the need for all amateur radio communications to be open and unobscured, to uphold the Commission's long-standing requirement that the service be able to police itself. Winlink itself uses point-to-point protocols that may be copied by a third party through methods provided by the authors of these protocols as well as from independent sources. Because the content of data is not obscured on the amateur spectrum, those government agencies that do use Winlink for Continuity of Government and public safety emergency communications requested (or in some cases, mandated) that they be allowed to encrypt their messages. On non-amateur radio frequencies worldwide, Winlink provides for encryption via AES-256 for its most used protocols, Pactor and VARA. Such transmission encryption, once set up properly, is seamless to the end-user and requires no additional effort, but is left up to the individual operator or government agency to set up. In addition to "readers" being made available for protocols used by the Winlink system, in the US all messages passing through licensed US amateur radio stations by radio are freely accessible by other licensed amateurs via the Winlink Open Message Viewer on the Winlink website. Amateurs concerned about encryption are encouraged to help the US amateur radio community police itself by searching and viewing such messages, and reporting messages if they spot a violation (https://winlink.org/content/us_amateur_radio_message_viewer). Deletion of the Symbol Rate Rule RM-11708 This change was requested in 2013 by the ARRL, and the FCC released a notice of proposed rulemaking in 2016. In November 2023, the FCC removed the symbol rate limit of 300 baud in favor of an occupied bandwidth limit of 2.8 kHz (WT Docket No. 16-239). In the Report and Order, the FCC stated, "The amateur radio community can and does play a vital role in emergency response communications, but is often unnecessarily hindered by the baud rate limitations in the rules." Supporting this change were a host of federal, state and local emergency management agencies, who continually wrote ex parte comments to the FCC regarding their concerns with the impact such a limitation had on emergency email communications via Winlink. In addition, the American Radio Relay League (ARRL) continued to push its efforts toward this change through Congressional pathways. Because Winlink is a worldwide service, similar issues are the concern of other countries, which are also pushing for innovative changes that will positively impact their ability to provide a “no infrastructure” resilient system to bridge SMTP mail over radio, both over the amateur radio spectrum and for government service uses as an emergency service option. 
See also Amateur radio emergency communications Automatic Link Establishment PACTOR Winmor Footnotes References External links The official Winlink Web Site Winlink Research Project Winlink Tutorial Winlink wide-area HF MESH network Introduction to RMS Express Winlink client program Guida italiana completa per l'uso di RMS Express /-/ Winlink 2000 The Wiki for Pat - a cross platform Winlink client Guia rápida en Español de introducción a la Red WL2K, Winmor y uso del RMS Express, (Spanish White Paper) Packet radio
Winlink
Technology
2,148
26,449,667
https://en.wikipedia.org/wiki/Kulabyte
Kulabyte was a private company headquartered in San Marcos, Texas, that developed live video encoding and video streaming software and provided streaming event services. KulaByte was acquired by Haivision in 2011 and is now part of Haivision's product line. Kulabyte's claimed advantage in video encoding is that it provides higher quality live HD H.264 video than any other encoder on the market while requiring lower delivery bandwidth. History Kulabyte was founded by Chris Gottschalk and Blake Wenzel in November 2004. In 2005, Kulabyte first unveiled its video encoding technology at the IBC show in Amsterdam. In 2006, Kulabyte announced a partnership with MainConcept to use the MainConcept video encoding codec. Kulabyte announced a partnership with On2 Technologies in 2007 to use KulaByte's TimeSlice technology with On2's VP6 for Flash video-based personal and professional-grade desktop encoding and publishing solutions. At the same time Kulabyte also announced support for H.264 in Adobe Flash Player using MainConcept's H.264 codec. By 2008, Kulabyte delivered its first major live streaming event using its video encoding technology. The event was a live concert webcast from Kuwait called "Operation MySpace" and was done in partnership with MySpace.com using Adobe Flash Media Server through Akamai's content delivery network. Kulabyte released its XStream Live version 2 Flash encoding software and iStream Live version 2 HTTP Streaming software for iPhone in 2009. Haivision, a Montreal-based encoding technology company, acquired KulaByte in July 2011. References External links Kulabyte (now redirects to Haivision's KulaByte page) Software companies based in Texas San Marcos, Texas Streaming media systems Software companies established in 2004 Defunct software companies of the United States American companies established in 2004 2004 establishments in Texas
Kulabyte
Technology
415
856,432
https://en.wikipedia.org/wiki/Appliance%20classes
Appliance classes (also known as protection classes) specify measures to prevent dangerous contact voltages on unenergized parts, such as the metallic casing, of an electronic device. In the electrical appliance manufacturing industry, the following appliance classes are defined in IEC 61140 and used to differentiate between the protective-earth connection requirements of devices. Class 0 These appliances have no protective-earth connection and feature only a single level of insulation between live parts and exposed metalwork. If permitted at all, Class 0 items are intended for use in dry areas only. A single fault could cause an electric shock or other dangerous occurrence, without triggering the automatic operation of any fuse or circuit breaker. Sales of such items have been prohibited in much of the world for safety reasons, for example in the UK by Section 8 of The Low Voltage Electrical Equipment (Safety) Regulations 1989 and in New Zealand by the Electricity Act. A typical example of a Class 0 appliance is the old style of Christmas fairy lights. However, equipment of this class is common in some 120 V countries, and in much of the 230 V developing world, whether permitted officially or not. These appliances do not have their chassis connected to electrical earth. In many countries the plug of Class 0 equipment is such that it cannot be inserted into a grounded outlet such as a Schuko socket. The failure of such equipment in a location where there is grounded equipment can cause a fatal shock if one touches both. Any Class I equipment will act like Class 0 equipment when connected to an ungrounded outlet. Class I Appliance Class I relies not only on basic insulation; the casing and other conductive parts are also connected to a low-resistance earth conductor. Hence, these appliances must have their chassis connected to electrical earth (US: ground) by a separate earth conductor (coloured green/yellow in most countries, green in India, USA, Canada and Japan). The earth connection is achieved with a three-conductor mains cable, typically ending with a three-prong AC connector which plugs into a corresponding AC outlet. Plugs are designed such that the connection to the protective earth conductor should be the first connection when plugged in. It should also be the last to be broken when the plug is removed. A fault in the appliance which causes a live conductor to contact the casing will cause a current to flow in the earth conductor. If large enough, this current will trip an over-current device (fuse or circuit breaker [CB]) and disconnect the supply. The disconnection time has to be fast enough not to allow fibrillation to start if a person is in contact with the casing at the time. This time and the current rating in turn set a maximum permissible earth resistance (a worked illustration follows below). To provide supplementary protection against high-impedance faults, it is common to recommend a residual-current device (RCD), also known as a residual current circuit breaker (RCCB), ground fault circuit interrupter (GFCI), or residual current operated circuit-breaker with integral over-current protection (RCBO), which will cut off the supply of electricity to the appliance if the currents in the two poles of the supply are not equal and opposite. Class 0I Electrical installations where the chassis is connected to earth with a separate terminal, instead of via the mains cable. In effect this provides the same automatic disconnection as Class I, for equipment that otherwise would be Class 0. 
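As a worked illustration of the earth-fault disconnection requirement described under Class I (the numbers here are illustrative assumptions, not values taken from the source or from any particular wiring code), the maximum permissible earth-fault loop impedance follows from the nominal line-to-earth voltage \(U_0\) and the current \(I_a\) needed to operate the protective device within the required time:

\[
Z_{\mathrm{earth}} \le \frac{U_0}{I_a} = \frac{230\ \mathrm{V}}{160\ \mathrm{A}} \approx 1.4\ \Omega ,
\]

where 160 A is taken as the instantaneous trip current of a hypothetical 32 A circuit breaker that operates at five times its rating.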
Class II A Class II or double insulated electrical appliance uses reinforced protective insulation in addition to basic insulation. Hence, it has been designed in such a way that it does not require a safety connection to electrical earth (ground). The basic requirement is that no single failure can result in dangerous voltage becoming exposed so that it might cause an electric shock, and that this is achieved without relying on an earthed metal casing. This is usually achieved at least in part by having at least two layers of insulating material between live parts and the user, or by using reinforced insulation. In Europe, a double insulated appliance must be labelled Class II or double insulated or bear the double insulation symbol: ⧈ (a square inside another square). As such, the appliance should not be connected to an earth conductor because the high-impedance casing will cause only low fault currents that are unable to trigger the fusible cut-out. Insulated AC/DC power supplies (such as cell-phone chargers) are typically designated as Class II, meaning that the DC output wires are isolated from the AC input. The designation "Class II" should not be confused with the designation "Class 2", as the latter is unrelated to insulation (it originates from standard UL 1310, setting limits on maximum output voltage/current/power). Class II FE These devices have a functional earth ("FE"). This differs from a protective earth ground in that it does not offer shock protection from a hazardous voltage. However, it does help to mitigate electromagnetic noise or EMI. This is often important in audio and medical design. Note that, as these devices also include double insulation, users will not be able to come into contact with any live parts. Class III A Class III appliance is designed to be supplied from a separated extra-low voltage (SELV) power source. The voltage from a SELV supply is low enough that under normal conditions a person can safely come into contact with an energized conductor without risk of electrical shock. The additional safety features required by Class I and Class II appliances are therefore not required. Specifically, Class III appliances are designed without an earth conductor and should not be connected to the earth grounding of the SELV power source. For medical devices, compliance with Class III is not considered sufficient protection, and furthermore, stringent regulations apply to such equipment. The Class III label does not guarantee a device's safety in any aspect other than electrical shock. Even at low voltages, abusive or unintended use (such as disassembly or incorrect installation) may still yield dangerous outcomes. Although there should be no risk of receiving an electrical shock from Class III appliances, other electrical hazards (such as overheating and fire) must still be considered. For instance, a laptop or mobile phone might qualify as a Class III appliance if it is charged via an external SELV adapter, even though the onboard battery could pose a fire risk. See also Double switching IP Code Mains power plug Portable appliance testing References Sources IEC 61140: Protection against electric shock — Common aspects for installation and equipment. International Electrotechnical Commission. 2001. (formerly: IEC 536-2: Classification of electrical and electronic equipment with regard to protection against electric shock, 1992) BS 2754 : 1976 (1999): Memorandum. Construction of electrical equipment for protection against electric shock. 
Electric power IEC standards
Appliance classes
Physics,Technology,Engineering
1,420
25,522
https://en.wikipedia.org/wiki/History%20of%20radio
The early history of radio is the history of the technology that produces and uses radio instruments using radio waves. Within the timeline of radio, many people contributed theory and inventions in what became radio. Radio development began as "wireless telegraphy". Later radio history increasingly involves matters of broadcasting. Discovery In an 1864 presentation, published in 1865, James Clerk Maxwell proposed theories of electromagnetism and mathematical proofs demonstrating that light, radio and x-rays were all types of electromagnetic waves propagating through free space. Between 1886 and 1888 Heinrich Rudolf Hertz published the results of experiments wherein he was able to transmit electromagnetic waves (radio waves) through the air, proving Maxwell's electromagnetic theory. Exploration of optical qualities After their discovery many scientists and inventors experimented with transmitting and detecting "Hertzian waves" (it would take almost 20 years for the term "radio" to be universally adopted for this type of electromagnetic radiation). Maxwell's theory showing that light and Hertzian electromagnetic waves were the same phenomenon at different wavelengths led "Maxwellian" scientists such as John Perry, Frederick Thomas Trouton and Alexander Trotter to assume they would be analogous to optical light. Following Hertz' untimely death in 1894, British physicist and writer Oliver Lodge presented a widely covered lecture on Hertzian waves at the Royal Institution on June 1 of the same year. Lodge focused on the optical qualities of the waves and demonstrated how to transmit and detect them (using an improved variation of French physicist Édouard Branly's detector Lodge named the "coherer"). Lodge further expanded on Hertz' experiments, showing how these new waves exhibited, like light, refraction, diffraction, polarization, interference and standing waves, confirming that Hertz' waves and light waves were both forms of Maxwell's electromagnetic waves. During part of the demonstration the waves were sent from the neighboring Clarendon Laboratory building, and received by apparatus in the lecture theater. After Lodge's demonstrations researchers pushed their experiments further down the electromagnetic spectrum towards visible light to further explore the quasioptical nature at these wavelengths. Oliver Lodge and Augusto Righi experimented with 1.5 and 12 GHz microwaves respectively, generated by small metal ball spark resonators. Russian physicist Pyotr Lebedev in 1895 conducted experiments in the 50 GHz (6 millimeter) range. Bengali Indian physicist Jagadish Chandra Bose conducted experiments at a frequency of 60 GHz (5 millimeter wavelength) and invented waveguides, horn antennas, and semiconductor crystal detectors for use in his experiments. He would later write an essay, "Adrisya Alok" ("Invisible Light"), on how in November 1895 he conducted a public demonstration at the Town Hall of Kolkata, India, using millimeter-range-wavelength microwaves to trigger detectors that ignited gunpowder and rang a bell at a distance. Proposed applications Between 1890 and 1892 physicists such as John Perry, Frederick Thomas Trouton and William Crookes proposed electromagnetic or Hertzian waves as a navigation aid or means of communication, with Crookes writing on the possibilities of wireless telegraphy based on Hertzian waves in 1892. 
Among physicists, what were perceived as technical limitations to using these new waves, such as delicate equipment, the need for large amounts of power to transmit over limited ranges, and their similarity to already existing optical light-transmitting devices, led to a belief that applications were very limited. The Serbian American engineer Nikola Tesla considered Hertzian waves relatively useless for long-range transmission since "light" could not transmit further than line of sight. There was speculation that this "invisible light", able to penetrate fog and stormy weather, could be used in maritime applications such as lighthouses, including the London journal The Electrician (December 1895) commenting on Bose's achievements, saying "we may in time see the whole system of coast lighting throughout the navigable world revolutionized by an Indian Bengali scientist working single handed[ly] in our Presidency College Laboratory." In 1895, adapting the techniques presented in Lodge's published lectures, Russian physicist Alexander Stepanovich Popov built a lightning detector that used a coherer-based radio receiver. He presented it to the Russian Physical and Chemical Society on May 7, 1895. Marconi and radio telegraphy In 1894, the young Italian inventor Guglielmo Marconi began working on the idea of building a long-distance wireless transmission system based on the use of Hertzian waves (radio waves), a line of inquiry that he noted other inventors did not seem to be pursuing. Marconi read through the literature and used the ideas of others who were experimenting with radio waves but did a great deal to develop devices such as portable transmitters and receiver systems that could work over long distances, turning what was essentially a laboratory experiment into a useful communication system. By August 1895, Marconi was field testing his system but even with improvements he was only able to transmit signals up to one-half mile, a distance Oliver Lodge had predicted in 1894 as the maximum transmission distance for radio waves. Marconi raised the height of his antenna and hit upon the idea of grounding his transmitter and receiver. With these improvements the system was capable of transmitting signals up to and over hills. This apparatus proved to be the first engineering-complete, commercially successful radio transmission system and Marconi went on to file British patent GB189612039A, Improvements in transmitting electrical impulses and signals and in apparatus there-for, in 1896. This patent was granted in the UK on 2 July 1897. Nautical and transatlantic transmissions In 1897, Marconi established a radio station on the Isle of Wight, England and opened his "wireless" factory in the former silk-works at Hall Street, Chelmsford, England, in 1898, employing around 60 people. On 12 December 1901, using a kite-supported antenna for reception, Marconi received at Signal Hill in St. John's, Newfoundland, signals transmitted across the Atlantic Ocean by the company's new high-power station at Poldhu, Cornwall. Marconi began to build high-powered stations on both sides of the Atlantic to communicate with ships at sea. In 1904, he established a commercial service to transmit nightly news summaries to subscribing ships, which could incorporate them into their on-board newspapers. 
A regular transatlantic radio-telegraph service was finally begun on 17 October 1907 between Clifden, Ireland, and Glace Bay, but even after this the company struggled for many years to provide reliable communication to others. Marconi's apparatus is also credited with saving the 700 people who survived the tragic Titanic disaster. Audio transmission In the late 1890s, Canadian-American inventor Reginald Fessenden came to the conclusion that he could develop a far more efficient system than the spark-gap transmitter and coherer receiver combination. To this end he worked on developing a high-speed alternator (referred to as "an alternating-current dynamo") that generated "pure sine waves" and produced "a continuous train of radiant waves of substantially uniform strength", or, in modern terminology, a continuous-wave (CW) transmitter. While working for the United States Weather Bureau on Cobb Island, Maryland, Fessenden researched using this setup for audio transmissions via radio. By fall of 1900, he successfully transmitted speech over a distance of about 1.6 kilometers (one mile), which appears to have been the first successful audio transmission using radio signals. Although successful, the sound transmitted was far too distorted to be commercially practical. According to some sources, notably Fessenden's wife Helen's biography, on Christmas Eve 1906, Reginald Fessenden used an Alexanderson alternator and rotary spark-gap transmitter to make the first radio audio broadcast, from Brant Rock, Massachusetts. Ships at sea heard a broadcast that included Fessenden playing O Holy Night on the violin and reading a passage from the Bible. Around the same time American inventor Lee de Forest experimented with an arc transmitter, which unlike the discontinuous pulses produced by spark transmitters, created steady "continuous wave" signal that could be used for amplitude modulated (AM) audio transmissions. In February 1907 he transmitted electronic telharmonium music from his laboratory station in New York City. This was followed by tests that included, in the fall, Eugenia Farrar singing "I Love You Truly". In July 1907 he made ship-to-shore transmissions by radiotelephone—race reports for the Annual Inter-Lakes Yachting Association (I-LYA) Regatta held on Lake Erie—which were sent from the steam yacht Thelma to his assistant, Frank E. Butler, located in the Fox's Dock Pavilion on South Bass Island. Broadcasting The Dutch company Nederlandsche Radio-Industrie and its owner-engineer, Hanso Idzerda, made its first regular entertainment radio broadcast over station PCGG from its workshop in The Hague on 6 November 1919. The company manufactured both transmitters and receivers. Its popular program was broadcast four nights per week using narrow-band FM transmissions on 670 metres (448 kHz), until 1924 when the company ran into financial trouble. Regular entertainment broadcasts began in Argentina, pioneered by Enrique Telémaco Susini and his associates. At 9 pm on August 27, 1920, Sociedad Radio Argentina aired a live performance of Richard Wagner's opera Parsifal from the Coliseo Theater in downtown Buenos Aires. Only about twenty homes in the city had receivers to tune in this program. On 31 August 1920 the Detroit News began publicized daily news and entertainment "Detroit News Radiophone" broadcasts, originally as licensed amateur station 8MK, then later as WBL and WWJ in Detroit, Michigan. 
Union College in Schenectady, New York began broadcasting on October 14, 1920, over 2ADD, an amateur station licensed to Wendell King, an African-American student at the school. Broadcasts included a series of Thursday night concerts initially heard within a radius and later for a radius. In 1922 regular audio broadcasts for entertainment began in the UK from the Marconi Research Centre 2MT at Writtle near Chelmsford, England. Wavelength and frequency In early radio, and to a limited extent much later, the transmission signal of the radio station was specified in meters, referring to the wavelength, the length of the radio wave. This is the origin of the terms long wave, medium wave, and short wave radio. Portions of the radio spectrum reserved for specific purposes were often referred to by wavelength: the 40-meter band, used for amateur radio, for example. The relation between wavelength and frequency is reciprocal: the higher the frequency, the shorter the wave, and vice versa. As equipment progressed, precise frequency control became possible; early stations often did not have a precise frequency, as it was affected by the temperature of the equipment, among other factors. Identifying a radio signal by its frequency rather than its length proved much more practical and useful, and starting in the 1920s this became the usual method of identifying a signal, especially in the United States. Frequencies specified in number of cycles per second (kilocycles, megacycles) were replaced by the more specific designation of hertz (cycles per second) about 1965. Radio companies British Marconi Using various patents, the British Marconi company was established in 1897 by Guglielmo Marconi and began communication between coast radio stations and ships at sea. A year after, in 1898, they successfully introduced their first radio station in Chelmsford. This company, along with its subsidiaries Canadian Marconi and American Marconi, had a stranglehold on ship-to-shore communication. It operated much the way American Telephone and Telegraph operated until 1983, owning all of its equipment and refusing to communicate with non-Marconi equipped ships. Many inventions improved the quality of radio, and amateurs experimented with uses of radio, thus planting the first seeds of broadcasting. Telefunken The company Telefunken was founded on May 27, 1903, as "Telefunken society for wireless telefon" of Siemens & Halske (S & H) and the Allgemeine Elektrizitäts-Gesellschaft (General Electricity Company) as joint undertakings for radio engineering in Berlin. It continued as a joint venture of AEG and Siemens AG, until Siemens left in 1941. In 1911, Kaiser Wilhelm II sent Telefunken engineers to West Sayville, New York to erect three 600-foot (180-m) radio towers there. Nikola Tesla assisted in the construction. A similar station was erected in Nauen, creating the only wireless communication between North America and Europe. Technological development Amplitude-modulated (AM) The invention of amplitude-modulated (AM) radio, which allows more closely spaced stations to simultaneously send signals (as opposed to spark-gap radio, where each transmission occupies a wide bandwidth) is attributed to Reginald Fessenden, Valdemar Poulsen and Lee de Forest. Crystal set receivers The most common type of receiver before vacuum tubes was the crystal set, although some early radios used some type of amplification through electric current or battery. 
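As an aside to the "Wavelength and frequency" discussion above, the reciprocal relation between the two ways of labelling a signal can be made concrete with a short, purely illustrative Python sketch (not part of the original article; the function names are invented for illustration):

```python
# Illustrative only: convert between the wavelength used to label early stations
# and the frequency used from the 1920s onward, via wavelength = c / frequency.
C = 299_792_458.0  # speed of light in metres per second

def wavelength_to_frequency_khz(wavelength_m: float) -> float:
    """Frequency in kHz of a radio wave with the given wavelength in metres."""
    return C / wavelength_m / 1_000.0

def frequency_to_wavelength_m(frequency_khz: float) -> float:
    """Wavelength in metres of a radio wave with the given frequency in kHz."""
    return C / (frequency_khz * 1_000.0)

if __name__ == "__main__":
    # PCGG's 670 metres corresponds to roughly 448 kHz, as quoted earlier.
    print(round(wavelength_to_frequency_khz(670.0)))  # ~447
    # The 40-meter amateur band lies near 7,500 kHz.
    print(round(wavelength_to_frequency_khz(40.0)))   # ~7495
```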
Inventions of the triode amplifier, motor-generator, and detector enabled audio radio. The use of amplitude modulation (AM), by which soundwaves can be transmitted over a continuous-wave radio signal of narrow bandwidth (as opposed to spark-gap radio, which sent rapid strings of damped-wave pulses that consumed much bandwidth and were only suitable for Morse-code telegraphy), was pioneered by Fessenden, Poulsen and Lee de Forest. The art and science of crystal sets is still pursued as a hobby in the form of simple un-amplified radios that 'run on nothing, forever'. They are used as a teaching tool by groups such as the Boy Scouts of America to introduce youngsters to electronics and radio. As the only energy available is that gathered by the antenna system, loudness is necessarily limited. Vacuum tubes During the mid-1920s, amplifying vacuum tubes revolutionized radio receivers and transmitters. John Ambrose Fleming developed a vacuum tube diode, and Lee de Forest added a third "grid" electrode, creating the triode. Early radios ran the entire power of the transmitter through a carbon microphone. In the 1920s, the Westinghouse company bought Lee de Forest's and Edwin Armstrong's patents. Westinghouse engineers developed a more modern vacuum tube. The first radios still required batteries, but in 1926 the "battery eliminator" was introduced to the market, allowing radios to be powered from the electrical supply mains instead. They still required batteries to heat the vacuum-tube filaments, but after the invention of indirectly heated vacuum tubes, the first completely battery-free radios became available in 1927. In 1929 a new screen-grid tube called the UY-224 was introduced, an amplifier designed to operate directly on alternating current. A problem with early radios was fading stations and fluctuating volume. The invention of the superheterodyne receiver solved this problem; the first radios with a superheterodyne receiver went on sale in 1924, but they were costly, and the technology was shelved until it matured, with the Radiola 66 and Radiola 67 going on sale in 1929. Loudspeakers In the early days one had to use headphones to listen to radio. Later, loudspeakers in the form of a horn of the type used by phonographs, driven by a telephone receiver, became available, but the sound quality was poor. In 1926 the first radios with electrodynamic loudspeakers went on sale, which improved the quality significantly. At first the loudspeakers were separate from the radio, but soon radios came with a built-in loudspeaker. Other inventions related to sound included automatic volume control (AVC), first commercially available in 1928. In 1930 a tone control knob was added to radios, allowing listeners to compensate for imperfect broadcast audio. The magnetic cartridge, introduced in the mid-1920s, greatly improved the broadcasting of music: before it, playing music from a phonograph required placing a microphone close to a horn loudspeaker, whereas the cartridge allowed the electric signals to be amplified and fed directly to the broadcast transmitter. Transistor technology Following the development of transistor technology, bipolar junction transistors led to the development of the transistor radio. In 1954, the Regency company introduced a pocket transistor radio, the TR-1, powered by a "standard 22.5 V Battery." 
In 1955, the newly formed Sony company introduced its first transistorized radio, the TR-55. It was small enough to fit in a vest pocket, powered by a small battery. It was durable, because it had no vacuum tubes to burn out. In 1957, Sony introduced the TR-63, the first mass-produced transistor radio, leading to the mass-market penetration of transistor radios. Over the next 20 years, transistors replaced tubes almost completely except for high-power transmitters. By the mid-1960s, the Radio Corporation of America (RCA) was using metal–oxide–semiconductor field-effect transistors (MOSFETs) in its consumer products, including FM radios, televisions and amplifiers. Metal–oxide–semiconductor (MOS) large-scale integration (LSI) provided a practical and economic solution for radio technology, and was used in mobile radio systems by the early 1970s. Integrated circuit The first integrated circuit (IC) radio, the P1740 by General Electric, became available in 1966. Car radio The first car radio was introduced in 1922, but it was so large that it took up too much space in the car. The first commercial car radio that could easily be installed in most cars went on sale in 1930. Radio telex Telegraphy did not go away on radio. Instead, the degree of automation increased. On land-lines in the 1930s, teletypewriters automated encoding, and were adapted to pulse-code dialing to automate routing, a service called telex. For thirty years, telex was the cheapest form of long-distance communication, because up to 25 telex channels could occupy the same bandwidth as one voice channel. For business and government, it was an advantage that telex directly produced written documents. Telex systems were adapted to short-wave radio by sending tones over single sideband. CCITT R.44 (the most advanced pure-telex standard) incorporated character-level error detection and retransmission as well as automated encoding and routing. For many years, telex-on-radio (TOR) was the only reliable way to reach some third-world countries. TOR remains reliable, though less-expensive forms of e-mail are displacing it. Many national telecom companies historically ran nearly pure telex networks for their governments, and they ran many of these links over short wave radio. Documents including maps and photographs went by radiofax, or wireless photoradiogram, invented in 1924 by Richard H. Ranger of the Radio Corporation of America (RCA). This method prospered in the mid-20th century and faded late in the century. Radio navigation One of the first developments in the early 20th century was the use of commercial AM radio stations by aircraft for navigation; AM stations are still marked on U.S. aviation charts. Radio navigation played an important role during wartime, especially in World War II. Before the introduction of the crystal oscillator, radio navigation had many limitations, but as radio technology advanced, navigation systems became easier to use and gave more accurate position fixes. Despite these advantages, radio navigation systems often required complex equipment, such as the radio compass receiver, compass indicator, or radar plan position indicator, all of which demanded specialized knowledge from their users. In the 1960s VOR systems became widespread. In the 1970s, LORAN became the premier radio navigation system. Soon, the US Navy experimented with satellite navigation. In 1987, the Global Positioning System (GPS) constellation of satellites was launched; it was followed by other GNSS systems like GLONASS, BeiDou and Galileo. 
FM In 1933, FM radio was patented by inventor Edwin H. Armstrong. FM uses frequency modulation of the radio wave to reduce static and interference from electrical equipment and the atmosphere. In 1937, W1XOJ, the first experimental FM radio station after Armstrong's W2XMN in Alpine, New Jersey, was granted a construction permit by the US Federal Communications Commission (FCC). FM in Europe After World War II, FM radio broadcasting was introduced in Germany. At a meeting in Copenhagen in 1948, a new wavelength plan was set up for Europe. Because of the recent war, Germany (which did not exist as a state and so was not invited) was only given a small number of medium-wave frequencies, which were not very good for broadcasting. For this reason Germany began broadcasting on UKW ("Ultrakurzwelle", i.e. ultra short wave, nowadays called VHF), which was not covered by the Copenhagen plan. After some experience with amplitude modulation on VHF, it was realized that FM was a much better alternative for VHF radio than AM. Because of this history, FM radio is still referred to as "UKW Radio" in Germany. Other European nations followed a bit later, when the superior sound quality of FM and the ability to run many more local stations, because of the more limited range of VHF broadcasts, were recognized. Television In the 1930s, regular analog television broadcasting began in some parts of Europe and North America. By the end of the decade there were roughly 25,000 all-electronic television receivers in existence worldwide, the majority of them in the UK. In the US, Armstrong's FM system was designated by the FCC to transmit and receive television sound. Color television 1953: NTSC compatible color television introduced in the US. 1962: Telstar 1, the first communications satellite, relayed the first publicly available live transatlantic television signal. Mid-1960s: Metal–oxide–semiconductor field-effect transistor (MOSFET) first used for television, by the Radio Corporation of America (RCA). The power MOSFET was later widely adopted for television receiver circuits. By 1963, color television was being broadcast commercially, though not all broadcasts or programs were in color. Mobile phones In 1947 AT&T commercialized the Mobile Telephone Service, which had started in St. Louis in 1946; AT&T then introduced the service to one hundred towns and highway corridors by 1948. Mobile Telephone Service was a rarity, with only 5,000 customers placing about 30,000 calls each week. Because only three radio channels were available, only three customers in any given city could make mobile telephone calls at one time. Mobile Telephone Service was expensive, costing US$15 per month, plus $0.30–0.40 per local call, equivalent to (in 2012 US dollars) about $176 per month and $3.50–4.75 per call. The development of metal–oxide–semiconductor (MOS) large-scale integration (LSI) technology, information theory and cellular networking led to the development of affordable mobile communications. The Advanced Mobile Phone System (AMPS) analog mobile phone system, developed by Bell Labs and introduced in the Americas in 1978, gave much more capacity. 
It was the primary analog mobile phone system in North America (and other locales) through the 1980s and into the 2000s. Broadcast and copyright The British government and the state-owned postal services found themselves under massive pressure from the wireless industry (including telegraphy) and early radio adopters to open up to the new medium. In an internal confidential report from February 25, 1924, the Imperial Wireless Telegraphy Committee stated: "We have been asked 'to consider and advise on the policy to be adopted as regards the Imperial Wireless Services so as to protect and facilitate public interest.' It was impressed upon us that the question was urgent. We did not feel called upon to explore the past or to comment on the delays which have occurred in the building of the Empire Wireless Chain. We concentrated our attention on essential matters, examining and considering the facts and circumstances which have a direct bearing on policy and the condition which safeguard public interests." When radio was introduced in the early 1920s, many predicted it would kill the phonograph record industry. Radio was a free medium for the public to hear music for which they would normally pay. While some companies saw radio as a new avenue for promotion, others feared it would cut into profits from record sales and live performances. Many record companies would not license their records to be played over the radio, and had their major stars sign agreements that they would not perform on radio broadcasts. Indeed, the music recording industry had a severe drop in profits after the introduction of the radio. For a while, it appeared as though radio was a definite threat to the record industry. Radio ownership grew from two out of five homes in 1931 to four out of five homes in 1938. Meanwhile, record sales fell from $75 million in 1929 to $26 million in 1938 (with a low point of $5 million in 1933), though the economics of the situation were also affected by the Great Depression. The copyright owners were concerned that they would see no gain from the popularity of radio and the 'free' music it provided. What they needed to make this new medium work for them already existed in previous copyright law. The copyright holder for a song had control over all public performances 'for profit.' The problem now was proving that the radio industry, which was just figuring out for itself how to make money from advertising and currently offered free music to anyone with a receiver, was making a profit from the songs. The test case was against Bamberger's Department Store in Newark, New Jersey in 1922. The store was broadcasting music from its store on the radio station WOR. No advertisements were heard, except at the beginning of the broadcast which announced "L. Bamberger and Co., One of America's Great Stores, Newark, New Jersey." It was determined through this and previous cases (such as the lawsuit against Shanley's Restaurant) that Bamberger was using the songs for commercial gain, thus making it a public performance for profit, which meant the copyright owners were due payment. With this ruling the American Society of Composers, Authors and Publishers (ASCAP) began collecting licensing fees from radio stations in 1923. The beginning sum was $250 for all music protected under ASCAP, but for larger stations the price soon ballooned to $5,000. 
Edward Samuels reports in his book The Illustrated Story of Copyright that "radio and TV licensing represents the single greatest source of revenue for ASCAP and its composers […] and [a]n average member of ASCAP gets about $150–$200 per work per year, or about $5,000-$6,000 for all of a member's compositions." Not long after the Bamberger ruling, in 1924, ASCAP had to once again defend its right to charge fees. The Dill Radio Bill would have allowed radio stations to play music without paying licensing fees to ASCAP or any other music-licensing corporation. The bill did not pass. Regulation of radio stations in the U.S. Wireless Ship Act of 1910 Radio technology was first used for ships to communicate at sea. To ensure safety, the Wireless Ship Act of 1910 marked the first time the U.S. government imposed regulations on shipboard radio systems. The act required ships to carry a radio system with a professional operator if they traveled more than 200 miles offshore or had more than 50 people on board. However, the act had many flaws, including unregulated competition between the radio operators of the two major companies (British and American Marconi), who tended to delay communications from ships using a competitor's system. This contributed to the tragic sinking of the Titanic in 1912. Radio Act of 1912 In 1912, distress calls to aid the sinking Titanic were met with a large amount of interfering radio traffic, severely hampering the rescue effort. Subsequently, the US government passed the Radio Act of 1912 to help prevent a repeat of such a tragedy. The act distinguished between normal radio traffic and (primarily maritime) emergency communication, and specified the role of government during such an emergency. The Radio Act of 1927 The Radio Act of 1927 gave the Federal Radio Commission the power to grant and deny licenses, and to assign frequencies and power levels for each licensee. In 1928 it began requiring licenses of existing stations and setting controls on who could broadcast from where, on what frequency, and at what power. Some stations could not obtain a license and ceased operations. In section 29, the Radio Act of 1927 provided that broadcast content was a matter of free expression with which the government could not interfere. The Communications Act of 1934 The introduction of the Communications Act of 1934 led to the establishment of the Federal Communications Commission (FCC). The FCC's responsibility was to regulate the industry, including "telephone, telegraph, and radio communications." Under this Act, all carriers had to keep records of authorized and unauthorized interference. The Act also supported the President in time of war: if the government needed to use communication facilities in wartime, it was permitted to do so. The Telecommunications Act of 1996 The Telecommunications Act of 1996 was the first significant overhaul of the Communications Act of 1934 in over 60 years. Coming only a dozen years after the breakup of AT&T, the act set out to open telecommunications markets and the networks they rely on to competition. Some of the intended effects of the Telecommunications Act of 1996 have since been seen, but some of the problems the Act set out to fix, such as the lack of a fully open competitive market, remain ongoing. Licensed commercial public radio stations The question of the 'first' publicly targeted licensed radio station in the U.S. 
has more than one answer and depends on semantics. Settlement of this 'first' question may hang largely upon what constitutes 'regular' programming. It is commonly attributed to KDKA in Pittsburgh, Pennsylvania, which in October 1920 received its license and went on the air as the first US licensed commercial broadcasting station on November 2, 1920, with the presidential election results as its inaugural show, but was not broadcasting daily until 1921. (Their engineer Frank Conrad had been broadcasting under the two call signs 8XK and 8YK since 1916.) Technically, KDKA was the first of several already-extant stations to receive a 'limited commercial' license. On February 17, 1919, station 9XM at the University of Wisconsin in Madison broadcast human speech to the public at large. 9XM was first experimentally licensed in 1914, began regular Morse code transmissions in 1916, and made its first music broadcast in 1917. Regularly scheduled broadcasts of voice and music began in January 1921. That station is still on the air today as WHA. On August 20, 1920, 8MK began broadcasting daily and was later claimed by famed inventor Lee de Forest as the first commercial station. 8MK was licensed to a teenager, Michael DeLisle Lyons, and financed by E. W. Scripps. In 1921 8MK changed to WBL and then to WWJ in 1922, in Detroit. It has carried a regular schedule of programming to the present and also broadcast the 1920 presidential election returns just as KDKA did. Inventor Lee de Forest claimed to have been present during 8MK's earliest broadcasts, since the station was using a transmitter sold by his company. The first station to receive a commercial license was WBZ, then in Springfield, Massachusetts. Lists provided to the Boston Globe by the U.S. Department of Commerce showed that WBZ received its commercial license on 15 September 1921; another Westinghouse station, WJZ, then in Newark, New Jersey, received its commercial license on November 7, the same day as KDKA did. What separates WJZ and WBZ from KDKA is the fact that neither of the former stations remains in its original city of license, whereas KDKA has remained in Pittsburgh for its entire existence. 2XG: Launched by Lee de Forest in the Highbridge section of New York City, that station began daily broadcasts in 1916. Like most experimental radio stations, however, it had to go off the air when the U.S. entered World War I in 1917, and did not return to the air. 1XE: Launched by Harold J. Power in Medford, Massachusetts, 1XE was an experimental station that started broadcasting in 1917. It had to go off the air during World War I, but started up again after the war, and began regular voice and music broadcasts in 1919. However, the station did not receive its commercial license, becoming WGI, until 1922. WWV, the U.S. Government time service, which was believed to have started 6 months before KDKA in Washington, D.C., but in 1966 was transferred to Ft. Collins, Colorado. WRUC, the Wireless Radio Union College, located at Union College in Schenectady, New York, was launched as W2XQ. KQV, one of Pittsburgh's five original AM stations, signed on as amateur station "8ZAE" on November 19, 1919, but did not receive a commercial license until January 9, 1922. See also History of electrical engineering History of electromagnetic theory History of amateur radio History of broadcasting History of podcasting History of radar History of radio receivers History of telecommunication History of television A.S. 
Popov Central Museum of Communications Digital audio broadcasting (DAB) Internet radio Spark-gap transmitter Timeline of the introduction of radio in countries Wireless Wireless LANs Footnotes References
Guglielmo Marconi Radio
History of radio
Technology
8,985
65,493,518
https://en.wikipedia.org/wiki/Batya%20Friedman
Batya Friedman is an American professor in the University of Washington Information School. She is also an adjunct professor in the Paul G. Allen School of Computer Science and Engineering and an adjunct professor in the Department of Human-Centered Design and Engineering, where she directs the Value Sensitive Design Research Lab. She received her PhD in learning sciences from the University of California, Berkeley School of Education in 1988, and has an undergraduate degree from Berkeley in computer science and mathematics. Work Friedman is known for pioneering value sensitive design (VSD), an approach to accounting for human values in the design of information systems. Friedman is currently Co-Director of the Value Sensitive Design Lab and was formerly Co-Director of the UW Technology Policy Lab. Awards 2021 ACM Fellow "Gilles Hondius Fellow" - Technical University of Delft, 2020 Honorary Doctorate - Technical University of Delft, 2020 ACM SIGCHI Academy - ACM SIGCHI, 2019 Induction into Membership - ACM SIGCHI Academy, 2019 Social Impact Award - ACM SIGCHI, 2012 Multi-disciplinary Privacy Paper Award, 2010 Multi-disciplinary Privacy Paper Award, Honorable Mention, 2010 Best Paper Award, Organizational Systems Track - HICSS, 2002 TAP: ACM list of notable female computer scientists, 1997 Selected publications Friedman, B., & Hendry, D. G. (2019). Value sensitive design: shaping technology with moral imagination. Cambridge, MA. MIT Press. Friedman, B. (2008). Value Sensitive Design. In D. Schuler, Liberating Voices: A Pattern Language for Communication Revolution (pp. 366–368). The MIT Press. Friedman, B., & Hendry, D. (2012). The envisioning cards: a toolkit for catalyzing humanistic and technical imaginations. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1145–1148. Friedman, B. (2004). Value Sensitive Design. In W. S. Bainbridge (Ed.), Encyclopedia of Human-computer Interaction (pp. 769–774). Berkshire Publishing Group. Friedman, B., & Kahn, P. H. (2003). Human values, ethics, and design. In A. Sears, J. A. Jacko, & S. Garfinkel, The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications (2nd ed., pp. 1241–1266). CRC Press. Friedman, B., & Kahn, P. H. (2000). New directions: a value-sensitive design approach to augmented reality. Proceedings of DARE 2000 on Designing Augmented Reality Environments, 163–164. Friedman, B. (1996, December 1). Value-sensitive design. ACM interactions, 3(6), 16–23. Friedman, B., & Kahn, P. H. (1992). Human agency and responsible computing: Implications for computer system design. Journal of Systems and Software, 17(1), 7–14. References External links Friedman's UW faculty profile page Year of birth missing (living people) Living people Information systems researchers American information theorists American women computer scientists 21st-century American women scientists 2021 fellows of the Association for Computing Machinery UC Berkeley Graduate School of Education alumni University of Washington faculty Artificial intelligence ethicists Information ethicists
Batya Friedman
Technology
694
18,428,835
https://en.wikipedia.org/wiki/Maxwell%20construction
In thermodynamic equilibrium, a necessary condition for stability is that pressure, $p$, does not increase with volume, or molar volume, $v$; this is expressed mathematically as $(\partial p/\partial v)_T \le 0$, where $T$ is the temperature. This basic stability requirement, and similar ones for other conjugate pairs of variables, is violated in analytic models of first order phase transitions. The most famous case is the van der Waals equation, $p = RT/(v - b) - a/v^2$, where $a$ and $b$ are dimensional constants. This violation is not a defect; rather, it is the origin of the observed discontinuity in properties that distinguish liquid from vapor, and defines a first order phase transition. Figure 1 shows an isotherm drawn, for a subcritical temperature $T < T_c$, as a continuously differentiable solid black, dotted black, and dashed gray curve. The decreasing part of the curve to the right of point C in Fig. 1 describes a gas, while the decreasing part to the left of point E describes a liquid. These two parts are separated by a region between the local minimum and local maximum on the curve with positive slope that violates the stability criterion. This mathematical criterion expresses a physical condition which Epstein described as follows: "It is obvious that this middle part, dotted in our curves [dashed in Fig.1 here], can have no physical reality. In fact, let us imagine the fluid in a state corresponding to this part of the curve contained in a heat conducting vertical cylinder whose top is formed by a piston. The piston can slide up and down in the cylinder, and we put on it a load exactly balancing the pressure of the gas. If we take a little weight off the piston, there will no longer be equilibrium and it will begin to move upward. However, as it moves the volume of the gas increases and with it its pressure. The resultant force on the piston gets larger, retaining its upward direction. The piston will, therefore, continue to move and the gas to expand until it reaches the state represented by the maximum of the isotherm. Vice versa, if we add ever so little to the load of the balanced piston, the gas will collapse to the state corresponding to the minimum of the isotherm." This situation is similar to a body exactly balanced at the top of a smooth surface that, with the slightest disturbance, will depart from its equilibrium position and continue until it reaches a local minimum. As they are described such states are dynamically unstable, and consequently they are not observed. The gap is a precursor of the actual phase change from liquid to vapor. The points E and C, where $(\partial p/\partial v)_T = 0$, that delimit the largest possible liquid and smallest possible vapor states are called spinodal points. Their locus forms a spinodal curve which bounds a region where no homogeneous stable states can exist. Experiments show that if the volume of a vessel containing a fixed amount of liquid is expanded at constant temperature (heat being supplied to maintain the temperature), then at a certain pressure, $p_s$, vapor bubbles (denoted by dots in Fig. 1) nucleate, so the fluid is no longer homogeneous, but rather it has become a heterogeneous mixture of boiling liquid and condensing vapor. Gravity separates the boiling (saturated) liquid, of molar volume $v_f$, from the less dense condensing (saturated) vapor, of molar volume $v_g$, that coexist at the same saturation temperature and pressure. As the heating continues the amount of vapor increases and that of the liquid decreases. All the while the pressure, $p_s$, and temperature, $T$, remain constant and the volume increases. 
In this situation the molar volume of the mixture is a weighted average of its components, $v = (1 - x)v_f + x v_g$, where $x$, the mole fraction of the vapor, increases continuously; however, the molar volume of the substance itself has only the largest possible stable value, $v_f$, for its liquid state, and smallest possible stable value, $v_g$, for its vapor state at the given $T$. To repeat, although the mixture molar volume passes continuously from $v_f$ to $v_g$ (denoted by the dashed line in Fig. 1), the underlying fluid has a discontinuity in this property, and others as well. This equation of state of the mixture is called the lever rule. The dotted parts of the curve in Fig. 1 are metastable states. For many years such states were an academic curiosity; Callen gave as an example, "water that has been cooled below 0°C at a pressure of 1 atm. A tap on a beaker of water in this condition precipitates a sudden dramatic crystallization of the system." However, studies of boiling heat transfer have made clear that metastable states occur routinely as an integral part of this process. In it the heating surface temperature is higher than the saturation temperature, often significantly so, hence the adjacent liquid must be superheated. Further the advent of devices that operate with very high heat fluxes has created interest in the metastable states, and the thermodynamic properties associated with them, in particular the superheated liquid states. Moreover, the fact that they are predicted by the van der Waals equation, and cubic equations in general, is compelling evidence of its efficacy in describing phase transitions; Sommerfeld described this as follows: It is very remarkable that the theory due to van der Waals is in a position to predict, at least qualitatively, the existence of the unstable [called metastable here] states along the branches AA` or BB` [BC and FE in Fig. 1 here]. Equal area rule The discontinuity in $v$, and other properties, e.g. internal energy, $u$, and entropy, $s$, of the substance, is called a first order phase transition. In order to specify the unique experimentally observed pressure, $p_s$, at which it occurs, another thermodynamic condition is required, for from Fig. 1 it could clearly occur for any pressure in the range spanned by the local minimum and local maximum of the isotherm. Such a condition was first enunciated in a clever thermodynamic argument by Maxwell at a lecture he delivered to the British Chemical Society on Feb 18, 1875 (Fig. 1, including the letters B C D E F, is the curve he described): The portion of the curve from C to E represents points which are essentially unstable, and which cannot therefore be realized. Now let us suppose the medium to pass from B to F along the hypothetical curve B C D E F in a state always homogeneous, and to return along the straight line path F B in the form of a mixture of liquid and vapor. Since the temperature has been constant throughout, no heat can have been transformed into work. Now the heat transformed into work is represented by the excess of the area F D E over B C D. Hence the condition which determines the maximum pressure of the vapor at given temperature is that the line B F cuts off equal areas from the curve above and below. The easiest way to understand Maxwell's argument is to consider the cycle he suggested on a temperature—molar entropy plane. 
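Before returning to Maxwell's cycle argument, which continues below, the lever rule just stated can be illustrated with a minimal numerical sketch (an invented example, not from the article; the saturated volumes used here are placeholder values):

```python
# Hypothetical illustration of the lever rule v = (1 - x) * v_f + x * v_g.
# The saturated molar volumes below are made-up placeholders, not article data.

def vapor_mole_fraction(v_mix: float, v_f: float, v_g: float) -> float:
    """Mole fraction of vapor, x, in a two-phase mixture of molar volume v_mix."""
    if not (v_f <= v_mix <= v_g):
        raise ValueError("v_mix must lie between the saturated liquid and vapor volumes")
    return (v_mix - v_f) / (v_g - v_f)

# With v_f = 0.05 L/mol and v_g = 5.00 L/mol (placeholders), a mixture molar
# volume of 2.525 L/mol corresponds to equal amounts of liquid and vapor, x = 0.5.
print(vapor_mole_fraction(2.525, 0.05, 5.00))  # 0.5
```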
Every introductory thermodynamics text presents the fact that on such a plane the area under any curve is the heat transfer to the substance per mole, positive going from left to right and negative from right to left; moreover, in a cyclic process the net heat transfer to the substance is the area enclosed by the cycle's closed curve. Since the cycle he considered is composed of the two gray dashed isothermals at the same temperature, one proceeding from B to F (through C D and E), and the other directly back from F to B, the two lines are identical, just traversed in reverse; there is zero area enclosed, and hence the net heat transfer is $q = 0$. Further the same texts describe the area under these curves when plotted on a pressure—molar volume plane, see Fig. 1, as being the work done by the substance, positive going from left to right, and negative from right to left. Likewise the net work done in a cycle is the area enclosed by the closed curve. Since the first law of thermodynamics yields, in the special case of a cycle, $w = q$, for the cycle envisioned by Maxwell $w = q = 0$; then since the area enclosed is I + II = 0, see Fig. 1, with I positive and II negative, the transition pressure must be such that the two areas are equal. Written as a mathematical equation in terms of the work done in each process this is $\int_{v_f}^{v_g} p(v, T)\,dv = p_s (v_g - v_f)$. This equation together with the equation of state written for each of the states $v_f$ and $v_g$, namely $p_s = p(v_f, T)$ and $p_s = p(v_g, T)$, are three equations for the four variables $p_s$, $T$, $v_f$, $v_g$, so given any one of them, say $T$, the other three are determined. In other words, there is a unique value of $p_s$, as well as of $v_f$ and $v_g$, at which the phase transition can occur. Gibbs criterion At the end of his lecture, after complimenting van der Waals by referring to his work as "an exceedingly ingenious thesis", Maxwell finished it by saying: I must not, however, omit to mention a most important American contribution to this part of thermodynamics by Prof. Willard Gibbs of Yale College U.S., who has given us a remarkably simple and thoroughly satisfactory method of representing the relations of the different states of matter by means of a model. By means of this model, problems which had long resisted the efforts of myself and others may be solved at once. This remark proved prescient because in 1876-1878 Gibbs published his definitive work on thermodynamics in which he showed that thermodynamic equilibrium of a heterogeneous substance requires that, in addition to mechanical equilibrium (the same pressure for each component) and thermal equilibrium (the same temperature for each component), there must also be material equilibrium (the same chemical potential for each component). In the present instance of one substance and two phases, in addition to $p_f = p_g$ and $T_f = T_g$, material equilibrium requires $\mu_f = \mu_g$ (for the special case of one substance its chemical potential is the molar Gibbs function, $\mu = g$, where $g = u + pv - Ts$). This condition can be deduced by a simple physical argument as follows: the energy required to vaporize a mole is, from the second law at constant temperature, $q = T(s_g - s_f)$, and from the first law at constant pressure, $q = (u_g - u_f) + p_s(v_g - v_f)$; then equating these two and rearranging produces the result $g_f = g_g$, since $g = u + pv - Ts$. The conditions of material equilibrium lead to the famous Gibbs phase rule, $F = C - P + 2$, where $C$ is the number of substances, $P$ the number of phases, and $F$ the number of independent intensive variables required to specify the state. In the case of one substance and two phases discussed here this gives $F = 1$, the experimentally observed number. 
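The three conditions summarized above (the two equation-of-state relations and the equal-area rule) can be solved numerically for a given temperature. The following sketch is an illustrative assumption rather than code from the article: it uses the van der Waals equation in reduced form, $p = 8T/(3v - 1) - 3/v^2$, with pressure, volume, and temperature scaled by their critical values, and SciPy's fsolve with ad hoc initial guesses:

```python
# Hypothetical sketch: Maxwell construction for the reduced van der Waals equation.
import numpy as np
from scipy.optimize import fsolve

def p_vdw(v, T):
    """Reduced van der Waals pressure p(v, T)."""
    return 8.0 * T / (3.0 * v - 1.0) - 3.0 / v**2

def int_p_dv(v, T):
    """Antiderivative of p with respect to v in reduced units."""
    return (8.0 * T / 3.0) * np.log(3.0 * v - 1.0) + 3.0 / v

def maxwell_system(unknowns, T):
    v_f, v_g, p_s = unknowns
    return [
        p_vdw(v_f, T) - p_s,                                       # equation of state at v_f
        p_vdw(v_g, T) - p_s,                                       # equation of state at v_g
        int_p_dv(v_g, T) - int_p_dv(v_f, T) - p_s * (v_g - v_f),   # equal-area (Maxwell) rule
    ]

T = 0.9                          # an arbitrary subcritical reduced temperature
v_f, v_g, p_s = fsolve(maxwell_system, [0.6, 2.5, 0.6], args=(T,))
print(v_f, v_g, p_s)             # approximate saturated volumes and pressure
```

The same three equations could equally be solved with any other root finder; the construction itself is independent of the numerical method.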
Now $g$ is a thermodynamic potential function; its differential is $dg = -s\,dT + v\,dp$. Integrating this at constant temperature produces $g = \int v\,dp + \phi(T)$; here $\phi$ is a constant of integration, but the constant is different for each isotherm, hence it is written as a function of $T$. In order to evaluate this integral one must invert the equation of state $p = p(v, T)$ to obtain $v = v(p, T)$. However, it is the nature of the phase transition phenomenon that this inversion is not unique; for example the van der Waals equation written for $v$ is $p v^3 - (pb + RT) v^2 + a v - ab = 0$, a cubic with either 1 or, in this case, 3 real roots. Thus there are three curves, as seen in Fig. 2, consisting of stable (shown solid black), metastable (shown dotted black), and unstable (shown dashed gray) states. Actually, the figure was not produced by solving the cubic and integrating; rather $g$ was obtained from its definition by first obtaining $g(v, T)$ and $p(v, T)$, which is easily done analytically for the van der Waals equation, and plotting it parametrically at constant $T$, using $v$ as the parameter. Considering only its stable states, $g$ is continuous with discontinuous partial derivatives, $(\partial g/\partial p)_T = v$ and $(\partial g/\partial T)_p = -s$, at the phase transition point. In the Ehrenfest classification, a first order phase transition refers to the discontinuity of the first partial derivatives of $g$ while a second order phase transition would involve discontinuities of the second partial derivatives. Relationship between the Gibbs and Maxwell criteria Evaluating the integral expression for $g$ given previously between the saturated liquid and vapor states and applying the Gibbs criterion of material equilibrium, $g_g - g_f = 0$, to this phase change process requires writing it as the sum of integrals over the three branches of the isotherm. Here the integral $\int v\,dp$ has been split into three parts using the three real roots of the cubic corresponding to the liquid, unstable, and vapor states respectively. These integrals can best be visualized by viewing Fig. 1 rotated counterclockwise in the paper plane and then about the vertical axis, so that $v$ appears on the left-side ordinate of the curve as shown in the accompanying graph. In this view the function $v(p, T)$ clearly is multi-valued; this is the reason it requires three real functions to describe its behavior between $v_f$ and $v_g$. Now on splitting the middle integral into two, the first two integrals here are area I while the second two are the negative of area II. The two areas add to zero, hence their magnitudes are equal according to this Gibbs criterion. This is again the equal area rule of Maxwell, the Maxwell construction, and it can also be shown analytically. Since $v\,dp = d(pv) - p\,dv$, integrating this for constant temperature from state $f$ to $g$ with the Gibbs condition $g_f = g_g$ produces $p_s(v_g - v_f) = \int_{v_f}^{v_g} p\,dv$, which is Maxwell's result. This equal area rule can also be derived by making use of the Helmholtz free energy. In any event the Maxwell construction derives from the Gibbs condition of material equilibrium. However, even though it is more fundamental it is more abstract than the equal area rule, which is understood geometrically. Common tangent construction Another method to determine the coexistence points is based on the Helmholtz potential minimum principle, which states that in a system in diathermal contact with a heat reservoir the temperature is fixed at the reservoir value and $dF \le 0$, namely at equilibrium the Helmholtz potential is a minimum. Since, like $g$, the molar Helmholtz function $f$ is also a potential function, whose differential is $df = -s\,dT - p\,dv$, this minimum principle leads to the stability condition $(\partial^2 f/\partial v^2)_T \ge 0$. This condition requires that at any stable state of the system the function $f(v, T)$ is strictly convex, namely that in its vicinity the curve lies on or above its tangent. Moreover, for those states the previous stability condition for the pressure is necessarily satisfied as well. 
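As a compact restatement of the common tangent construction just described, here is a sketch in LaTeX under the assumption that the molar Helmholtz function of the van der Waals fluid is written up to an additive function of temperature alone (which does not affect the construction):

```latex
% Molar Helmholtz function of the van der Waals fluid; the additive \psi(T)
% cancels out of the construction:
f(v,T) = -RT\ln(v-b) - \frac{a}{v} + \psi(T), \qquad
p = -\left(\frac{\partial f}{\partial v}\right)_T = \frac{RT}{v-b} - \frac{a}{v^2}.
% The double (common) tangent at the saturated states v_f and v_g requires equal
% slopes and a single shared tangent line:
-\left.\left(\frac{\partial f}{\partial v}\right)_T\right|_{v_f}
  = -\left.\left(\frac{\partial f}{\partial v}\right)_T\right|_{v_g} = p_s,
\qquad
f(v_f,T) - f(v_g,T) = p_s\,(v_g - v_f).
% Together with g = f + pv these reproduce g_f = g_g, i.e. the Gibbs condition,
% and hence Maxwell's equal-area rule.
```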
A plot of this function for the same subcritical isotherm of the vdW equation as Figs. 1 and 2 is shown in Fig. 3. Included in this figure is the (dashed/solid) straight line that has a double (common) tangent with the curve of the function at B and F. This straight line is, , with constant, which can be written as . The last equality follows from the relation , together with . All this means that every point on the line has the same values of , in particular the points B and F, which produces the Gibbs condition for material equilibrium as well as eqality of temperature and pressure. Therefore this construction is equivalent to both the Gibbs conditions and the Maxwell construction. This construction, based on defined earlier by Gibbs, was originally used by van der Waals (he called it both a double and common tangent), because it could be easily extended to include binary fluid mixtures for which an isotherm of , with a composition variable, forms a surface that can have a common tangent plane. It has subsequently become a popular way to treat phase change problems in mixtures. Application to the van der Waals equation From the van der Waals equation applied to the saturated liquid, , and vapor, , states These two equations specify 4 variables so they can be solved for in terms of . This results in where , , and are a characteristic pressure, molar volume, and temperature defined by the constants (note that ). Applying the Maxwell construction to the van der Waals equation gives These three equations can be solved numerically. This has been done given a value for either or , and tabular results presented; however, the equations also admit an analytic parametric solution that, according to Lenkner, was obtained by Gibbs. Lenkner himself devised a simple, elegant, method to obtain this solution, by eliminating and from the equations, and writing them in terms of a stretched dimensionless density, , that varies between and 0 as varies from to ; this produces Although transcendental, this equation has a simple analytic, parametric solution obtained by writing the left side of the equation, which is just , as Then and when used to eliminate from the right hand side a linear equation for is obtained, whose solution is Accordingly, the fundamental variable that specifies all the others in this phase transition process is . This solution to the saturation problem is easily extended to encompass all its variables where . The values of all other property discontinuities across the saturation curve also follow from this solution. These functions define the coexistence curve which is the locus of the saturated liquid and saturated vapor states of the van der Waals fluid. In Fig. 4 this curve is plotted in blue together with the spinodal curve in black, calculated from where is a parameter. The variables used in making these plots are the reduced (dimensionless) variables, , , and where the subscripted quantities are the critical point values. They are defined by, , and at the critical point, and are measurable quantities. The relations , , are used to convert the star quantities in the solution to the quantities used in the figures. The curve agrees completely with the numerical results referenced earlier. In the region inside the spinodal curve there are two states at each point, one stable and one metastable, either superheated liquid to the right of the blue curve, or subcooled vapor to the left, while outside the spinodal curve there is one stable state at each point. In Fig. 
5 the region under the (dot dash black) spinodal curve contains no homogeneous stable states while between the (dot dash red) coexistence and spinodal curves there is one metastable state at every point, and outside the coexistence curve there is one stable state at each point. The two blue and two green circles denote the saturated liquid and vapor states on their respective isotherms. There are also observed heterogeneous states everywhere under the coexistence curve that satisfy the lever rule; however, they are not homogeneous states of the van der Waals equation, so their existence, indicated by horizontal lines connecting the saturation points on each subcritical isotherm, is not displayed. Also the abscissa in this figure is logarithmic, not linear, in order to show more of the vapor region at large without excessively compressing the liquid and unstable regions at small ; however, this device distorts areas, so the two areas I and II in Fig.1 would not appear equal here. Over the parameter range , decreases monotonically from and approaches 0 as in the limit . Therefore and in the limit , and . The behavior of and follow from the equations. Both these properties also decrease monotonically from and , and approach 0 as and in the limit . Note from these that ; the van der Waals saturated vapor is an ideal gas in this limit. To paraphrase Sommerfeld, it is remarkable that the theory due to van der Waals is able to predict that when the saturated vapor behaves like an ideal gas; the saturated vapor of real gases behave exactly this way. In addition for the liquid spinodal point occurs at a negative pressure, and the isotherm is included in Fig. 4 to illustrate this point. This means that some part of those liquid metastable states are in tension, and the lower the temperature the greater the tensile stress. Although this seems counterintuitive it is known that under some circumstances liquids can support tension. Tien and Lienhard noted this and wrote: The van der Waals equation predicts that at low temperatures liquids sustain enormous tension---a fact that has led some authors to take the equation lightly. In recent years measurements have been made that reveal this to be entirely correct. Liquids that are clean and free of dissolved gas can be subjected to tensions greater in magnitude than . This is another interesting feature of the van der Waals theory. Notes References Thermodynamics
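As a supplementary note (not from the article text), the spinodal curve plotted in Figs. 4 and 5 follows directly from the reduced van der Waals equation by setting the isothermal volume derivative of the pressure to zero, which gives it in parametric form with the reduced volume as the parameter:

```latex
% Reduced van der Waals isotherm and its spinodal, where (dp_r/dv_r)_{T_r} = 0.
p_r=\frac{8T_r}{3v_r-1}-\frac{3}{v_r^{2}},\qquad
\left(\frac{\partial p_r}{\partial v_r}\right)_{T_r}
=-\frac{24T_r}{(3v_r-1)^{2}}+\frac{6}{v_r^{3}}=0
\;\Longrightarrow\;
T_r=\frac{(3v_r-1)^{2}}{4v_r^{3}},\quad
p_r=\frac{3v_r-2}{v_r^{3}} .
```

In particular, the spinodal pressure changes sign at v_r = 2/3 (i.e. at T_r = 27/32), consistent with the statement above that the liquid spinodal point sits at negative pressure for sufficiently low temperatures.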
Maxwell construction
Physics,Chemistry,Mathematics
4,015
67,507,369
https://en.wikipedia.org/wiki/Proprioception%20and%20motor%20control
Proprioception refers to the sensory information relayed from muscles, tendons, and skin that allows for the perception of the body in space. This feedback allows for finer control of movement. In the brain, proprioceptive integration occurs in the somatosensory cortex, and motor commands are generated in the motor cortex. In the spinal cord, sensory and motor signals are integrated and modulated by motor neuron pools called central pattern generators (CPGs). At the base level, sensory input is relayed by muscle spindles in the muscle and Golgi tendon organs (GTOs) in tendons, alongside cutaneous sensors in the skin. Physiology Central pattern generators Central pattern generators are groups of neurons in the spinal cord that are responsible for generating stereotyped movement. It has been shown that in cats, rhythmic activation patterns are still observed following removal of sensory afferents and removal of the brain, indicating that there is neural pattern generation in the spinal cord independent of descending signals from the brain and sensory information. It is currently understood that the spinal cord receives sensory input from proprioceptive organs and descending commands from the brain, integrates these signals, and sends activation signals to muscle through alpha motoneurons and fusimotor signals through gamma motoneurons in a coordinated and rhythmic fashion. Muscle spindles The muscle spindle is a proprioceptive organ that lies embedded in the muscle. It consists of bag- and chain-type fibers, which correspond to dynamic and static responses, respectively. Spindles relay information through primary (Group Ia) and secondary (Group II) sensory afferents, with the primary afferent attached at the nucleus of the spindle and the secondary afferent attached at the end of the spindle. Spindles are conventionally thought of as encoding muscle length, velocity, and acceleration; however, there is evidence to suggest that they respond to the force and yank (the first time-derivative of force) exerted on intrafusal muscle. Key features of muscle spindle firing responses include initial bursts, history dependence, and rate relaxation. Initial bursts occur at the onset of stretch and last only a very short time. History dependence refers to how the response of muscle spindles is affected by past stretch inputs. Rate relaxation refers to how the firing rate of muscle spindles decreases over time when held at a constant length. Golgi tendon organs The Golgi tendon organ (GTO) is a proprioceptive organ that lies at the muscle-tendon junction. GTOs relay information through group Ib afferents, and encode active muscle force. As they are connected at one end to motor units, individual GTOs only relay information on a few fibers. At the same time, GTOs exhibit self-adaptation, in which GTO response decreases after prior activation, and cross-adaptation, in which GTO activity is modulated by prior activation of another GTO. Similar to muscle spindles, GTO firing is characterized by a heightened response at the onset of activity (dynamic response) and gradual relaxation to a resting firing rate (static response). Fusimotor system While muscle spindles relay information via primary afferents, they receive descending efferent signals from the spinal cord via gamma motoneurons. This gamma innervation modulates the sensitivity of muscle spindle afferents to stretch. 
In cat studies, muscle spindle afferent firing rates with gamma fusimotor innervation were shown to be approximately equal to the sum of the gamma motoneuron firing rate and the muscle spindle firing rate with no gamma innervation. In these same studies, gamma activity was shown to be correlated with joint angle during locomotion, indicating that fusimotor activity is periodically modulated during locomotion. Similar to muscle spindles, gamma motoneurons are also categorized according to static and dynamic response properties. Motor control In motor control, proprioceptors provide critical feedback to the central nervous system. Muscle spindles relay information regarding muscle stretch, Golgi tendon organs relay information regarding tendon force, and gamma motoneurons modulate muscle spindle feedback. Afferent signals from spindles and tendon organs are integrated in the spinal cord, which then outputs muscle activation commands to muscle via alpha motoneurons. Because muscle spindles and tendon organs exhibit burst-like activity in response to rapid stretch, they play a vital role in reflexive perturbation responses. In a simulation study, it has been shown that the controllability of a limb in response to a perturbation is significantly increased when muscle spindle and tendon organ feedback are used in conjunction. However, proprioceptive feedback is also critical in controlling steady movements. In one study, de-afferented mice were unable to walk as quickly as the control group, and showed some reduced activity in extensor muscles. It has also been shown in cats that disruption of feedback from muscle spindles impairs inter-joint coordination during ramp descent tasks. In a study on people with amputations, those with a higher degree of proprioceptive feedback from muscle spindles were better able to control the movement of a virtual limb. Pathologies Proprioceptive feedback is also linked to motor deficits in Parkinson's disease and cerebral palsy. People with cerebral palsy often suffer from spasticity due to hyperreflexia. A common clinical test of spasticity is the pendulum test, in which the subject remains seated and the relaxed leg is dropped from horizontal. In individuals with spasticity, the leg comes to rest much more quickly due to increased reflexive muscle contraction. Computational models have shown that results from pendulum tests in children with spastic cerebral palsy are explained by increased muscle tone, short-range stiffness, and increased stretch reflex responses due to increased muscle force feedback. Pendulum test results are also dependent on prior motion, indicating that muscle spindle feedback is a large component of spastic movement due to the history-dependent behavior of muscle spindles. Increased proprioceptive feedback has also explained properties of gait in children with spastic cerebral palsy. In addition to functional impairments, proprioceptive deficits are linked to compensatory adaptations in the central nervous system. In the study on people with amputations mentioned previously, those with a lower degree of proprioception showed stronger connectivity between their visual and motor cortices, which is interpreted as a greater reliance on visual feedback to coordinate movement. Those with higher degrees of proprioception also showed higher connectivity between brain regions associated with sensorimotor feedback and sensory integration. References Proprioception Motor control
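The approximately additive effect of fusimotor drive on spindle afferent firing described above can be pictured with a toy model. The sketch below is not taken from the cited studies; the linear length-plus-velocity baseline, the gain values, and the function names are illustrative assumptions only.

```python
# Toy illustration (assumed, not from the cited literature): a muscle spindle
# afferent modeled as a baseline rate plus length and velocity terms, with
# gamma (fusimotor) drive adding approximately linearly to the firing rate.
import numpy as np

def spindle_rate(length, velocity, gamma_rate=0.0,
                 baseline=10.0, k_len=40.0, k_vel=15.0):
    """Firing rate (spikes/s) of a model Ia afferent.

    length     : muscle stretch beyond rest length (normalized units)
    velocity   : rate of stretch (normalized units / s)
    gamma_rate : static fusimotor drive (spikes/s), added linearly per the
                 approximate summation reported in cat studies
    """
    rate = baseline + k_len * length + k_vel * velocity + gamma_rate
    return max(rate, 0.0)   # firing rates cannot be negative

if __name__ == "__main__":
    # A slow ramp stretch, with and without fusimotor drive.
    t = np.linspace(0.0, 1.0, 6)
    stretch = 0.2 * t                      # ramp to 20% stretch
    velocity = np.gradient(stretch, t)
    for ti, L, v in zip(t, stretch, velocity):
        r0 = spindle_rate(L, v)                   # no gamma drive
        r1 = spindle_rate(L, v, gamma_rate=25.0)  # with gamma drive
        print(f"t={ti:.1f}s  no-gamma={r0:5.1f} Hz  with-gamma={r1:5.1f} Hz")
```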
Proprioception and motor control
Biology
1,386
70,141,217
https://en.wikipedia.org/wiki/Punctularia%20strigosozonata
Punctularia strigosozonata is a fungus species of the genus Punctularia. It was originally described in 1832 by Lewis David de Schweinitz as a member of genus Merulius. Patrick Talbot transferred it to genus Punctularia in 1958. Punctularia strigosozonata produces the antibiotic phlebiarubrone. References Punctulariaceae Fungi described in 1832 Taxa named by Lewis David de Schweinitz Fungus species
Punctularia strigosozonata
Biology
102
346,287
https://en.wikipedia.org/wiki/Mode%20X
Mode X is a 256-color graphics display mode of the VGA graphics hardware for IBM PC compatibles. It was first publicized by Michael Abrash in his July 1991 column in Dr. Dobb's Journal and then in chapters 47-49 of Abrash's Graphics Programming Black Book. The term "Mode X" was coined by Abrash. Mode X is a variant of Mode 13h with the resolution increased to 320×240, giving square pixels instead of the slightly elongated pixels of Mode 13h. It is enabled by entering Mode 13h via a BIOS system call, then changing the values of several VGA registers. Additionally, Abrash enabled the VGA's planar memory mode (also called "unchained mode"). Even though planar memory mode is a documented part of the VGA standard and was used in earlier commercial games, it was first widely publicized in the Mode X articles, leading many programmers to consider Mode X and planar memory synonymous. It is possible to enable planar memory in the standard 320×200 mode, which became known as Mode Y in the Usenet rec.games.programmer group. The planar memory arrangement splits the pixels horizontally into groups of four. For any given byte in video memory, four pixels on screen can be accessed depending on which plane(s) are enabled. This is more complicated for the programmer, but the advantages gained by this arrangement—primarily the ability to use all 256 KB of VGA memory for one or more display buffers, instead of only one quarter of that (64 KB)—were considered worthwhile by many. Variants In addition to unchained being called Mode Y, Mode Q (short for "cube") is sometimes used to refer to a 256-color 256×256 mode. The Y coordinate can simply be put in the high byte of the address, and the X coordinate in the low byte, forming the address of the pixel without a multiply. References External links Graphics Programming Black Book by Michael Abrash, chapters 47, 48, 49. Mode X tutorial at GameDev.net (archived copy) Tweaked VGA Modes by Robert C. Pendleton (archived copy) Introduction to Mode X by Robert Jambor (archived copy) Computer display standards
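The planar arrangement described above maps each pixel to a byte offset plus a plane select. The following sketch is illustrative and does not come from Abrash's articles: it only computes the offset and the Map Mask value that a real-mode program would write to the VGA Sequencer (index 02h at port 0x3C4); it performs no hardware access, and the 320-pixel-wide layout is assumed.

```python
# Illustrative sketch of Mode X planar addressing (no hardware access).
# In the unchained 320x240 layout, each byte of a plane holds every fourth
# pixel of a row, so a row occupies 320 / 4 = 80 bytes per plane.
BYTES_PER_ROW = 320 // 4   # 80

def modex_address(x: int, y: int):
    """Return (offset, plane, map_mask) for pixel (x, y).

    offset   : byte offset into the 64 KB window at segment 0xA000
    plane    : which of the four planes (0-3) holds the pixel
    map_mask : value a program would write to the VGA Sequencer's
               Map Mask register (index 02h at port 0x3C4) to enable
               writes to that single plane
    """
    offset = y * BYTES_PER_ROW + (x >> 2)
    plane = x & 3
    map_mask = 1 << plane
    return offset, plane, map_mask

if __name__ == "__main__":
    for x, y in [(0, 0), (1, 0), (4, 0), (319, 239)]:
        off, plane, mask = modex_address(x, y)
        print(f"pixel ({x:3},{y:3}) -> offset {off:5}, plane {plane}, mask {mask:04b}")
```

Setting more than one Map Mask bit makes a single byte write update the corresponding pixel in each enabled plane, which is the trick behind fast fills and page copies in Mode X.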
Mode X
Technology
459
11,555,840
https://en.wikipedia.org/wiki/Tau%20Puppis
Tau Puppis, Latinized from τ Puppis, is a star in the southern constellation of Puppis, near the southern constellation boundary with Carina. It is visible to the naked eye with an apparent visual magnitude of +2.95 and is located at a distance of about from Earth. The variable radial velocity of this system was detected by H. D. Curtis and H. K. Palmer in 1908, based on observations made at the D. O. Mills Observatory. It is a spectroscopic binary star system, with the presence of the secondary component being revealed by the shifts of absorption lines in the spectrum resulting from the Doppler effect. The two components orbit each other with a period of and a low eccentricity of 0.090. The primary component of this system has a stellar classification of K1 III. A luminosity class 'III' indicates that it has expanded into a giant star after exhausting the supply of hydrogen at its core and evolving away from the main sequence of stars like the Sun. The interferometry-measured angular diameter of this star, after correcting for limb darkening, is , which, at its estimated distance, equates to a physical radius of about 27 times the radius of the Sun. It appears to be rotating slowly, with a projected rotational velocity of . This gives a lower bound on the azimuthal velocity of rotation along the star's equator. Tau Puppis is radiating energy from its outer envelope at an effective temperature of around , giving it the orange hue of a cool, K-type star. References External links EAAS: Puppis K-type giants Spectroscopic binaries Puppis Puppis, Tau Durchmusterung objects 050310 032768 2553
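For reference, the step from a measured limb-darkened angular diameter to a physical radius quoted above is the small-angle relation below. This is a general formula added here for clarity; the article's specific (elided) numbers are deliberately not reconstructed.

```latex
% Small-angle relation between limb-darkened angular diameter, distance, and radius.
R=\frac{\theta_{\mathrm{LD}}}{2}\,d
\quad\Longleftrightarrow\quad
\frac{R}{R_{\odot}}\approx 0.1075\,
\left(\frac{\theta_{\mathrm{LD}}}{1\,\mathrm{mas}}\right)
\left(\frac{d}{1\,\mathrm{pc}}\right),
```

where the first form takes the angular diameter in radians and the second uses milliarcseconds and parsecs.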
Tau Puppis
Astronomy
358
32,325,621
https://en.wikipedia.org/wiki/Top%20cap
In vacuum tube technology, a top cap is a terminal at the top of the tube envelope that connects one of the electrodes, the other electrodes being connected via the tube socket. Top caps have most commonly been used for: Amplifier or similar tube control grid connection, to provide greater circuit stability by isolating the low-signal circuit from the rest of the tube connections. Anode connection, to isolate the high-tension circuit, allowing higher voltages to be used. Physical convenience. Grid top caps on frequency converters could be connected directly to an adjacent coil; Anode top caps on output tubes could be connected by flying leads directly to an output transformer. Shorter leads generally mean greater stability. A few amplifier tubes used two top caps, symmetrically placed, one for anode and the other for grid. In audio amplifier tube application, the top cap was originally used for the grid connection, and a serviceman could apply a moist finger to the terminal to confirm that the stage and subsequent circuits were working by listening for the hum this produced in the loudspeaker. This practice led to some nasty accidents when anode top caps were first introduced to amplifier stages (they had been used on rectifiers for some time). References Vacuum tubes
Top cap
Physics
255
4,243,417
https://en.wikipedia.org/wiki/NGC%204833
NGC 4833 (also known as Caldwell 105) is a globular cluster discovered by Abbe Lacaille during his 1751-1752 journey to South Africa, and catalogued in 1755. It was subsequently observed and catalogued by James Dunlop and Sir John Herschel whose instruments could resolve it into individual stars. The globular cluster is situated in the very southerly constellation Musca at a distance of 21,200 light years from Earth. It is partially obscured by a dusty region of the galactic plane. After corrections for the reddening by dust, evidence was obtained that it is in the order of 2 billion years older than globular clusters M5 or M92. See also New General Catalogue References CCD Photometry of the Globular Cluster NGC 4833 and Extinction Near the Galactic Plane, Melbourne et al., 25 September 2000, Astrophysical Journal External links Basic information and data Photographed by the Antilhue amateur astronomical observatory CCD Photometry of the Globular Cluster NGC 4833 and Extinction Near the Galactic Plane Position relative to nearby cluster NGC 4372 Globular clusters 4833 Musca 105b
NGC 4833
Astronomy
232
29,593,616
https://en.wikipedia.org/wiki/Arctic%20front
The Arctic front is the semipermanent, semi-continuous weather front between the cold Arctic air mass and the warmer air of the polar cell. It can also be defined as the southern boundary of the Arctic air mass. Mesoscale cyclones known as polar lows can form along the Arctic front in the wake of extratropical cyclones. Arctic air masses in their wake are shallow, with a deep layer of stable air above the shallow cold pool. Appearance in satellite images Arctic Fronts form in the Arctic region and move southwards in northerly flows. When they reach Northern Europe, they have usually travelled over open sea, and convective cloudiness has developed. The appearance of an Arctic Cold Front is then, essentially, that of a shallow Cold Front. Arctic Cold Fronts are usually so far north that Meteosat images alone are inadequate to recognize them. Also, the following conceptual models may look like Arctic Cold Fronts: Polar Cold Front, Polar Low and Comma. The final check is best made using a loop of AVHRR images with the help of numerical model parameter fields. Types of Arctic cold fronts Arctic Cold Fronts can be classified into two types: Baroclinic fronts These fronts resemble polar cold fronts, but are usually not so extensive. The frontal cloudiness becomes more convective with time. Ice/sea boundary fronts These fronts form over the ice/sea boundary and move southwards with the basic flow. There is only an isolated Cold Front. Often this type is so shallow and weak that it cannot be detected in Meteosat water vapour images. See also Polar climate References Atmospheric dynamics Environment of the Arctic Weather fronts
Arctic front
Chemistry
335
75,324,191
https://en.wikipedia.org/wiki/Epstein%20drag
In fluid dynamics, Epstein drag is a theoretical result for the drag force exerted on spheres in high Knudsen number flow (i.e., rarefied gas flow). This may apply, for example, to sub-micron droplets in air, or to larger spherical objects moving in gases more rarefied than air at standard temperature and pressure. Note that while they may be small by some criteria, the spheres must nevertheless be much more massive than the species (molecules, atoms) in the gas that are colliding with the sphere, in order for Epstein drag to apply. The reason for this is to ensure that the change in the sphere's momentum due to individual collisions with gas species is not large enough to substantially alter the sphere's motion, such as occurs in Brownian motion. The result was obtained by Paul Sophus Epstein in 1924. His result was used for high-precision measurements of the charge on the electron in the oil drop experiment performed by Robert A. Millikan, as cited by Millikan in his 1930 review paper on the subject. For the early work on that experiment, the drag was assumed to follow Stokes' law. However, for droplets substantially below the submicron scale, the drag approaches Epstein drag instead of Stokes drag, since the mean free path of air species (atoms and molecules) is roughly of the order of a tenth of a micron. Statement of the law The magnitude of the force on a sphere moving through a rarefied gas, in which the diameter of the sphere is of the order of or less than the collisional mean free path in the gas, is where is the radius of the spherical particle, is the number density of gas species, is their mass, is the arithmetic mean speed of gas species, and is the relative speed of the sphere with respect to the rest frame of the gas. The factor encompasses the microphysics of the gas-sphere interaction and the resultant distribution of velocities of the reflected particles, which is not a trivial problem. It is not uncommon to assume (see below), presumably in part because empirically is found to be close to 1 numerically, and in part because in many applications the uncertainty due to is dwarfed by other uncertainties in the problem. For this reason, one sometimes encounters Epstein drag written with the factor left absent. The force acts in a direction opposite to the direction of motion of the sphere. Forces acting normal to the direction of motion are known as "lift", not "drag", and in any case are not present in the stated problem when the sphere is not rotating. For mixtures of gases (e.g. air), the total force is simply the sum of the forces due to each component of the gas, noting with care that each component (species) will have a different , a different and a different . Note that where is the gas density, noting again, with care, that in the case of multiple species there are multiple different such densities contributing to the overall force. The net force is due both to momentum transfer to the sphere due to species impinging on it, and momentum transfer due to species leaving, due either to reflection, evaporation, or some combination of the two. Additionally, the force due to reflection depends upon whether the reflection is purely specular or, by contrast, partly or fully diffuse, and the force also depends upon whether the reflection is purely elastic, or inelastic, or some other assumption regarding the velocity distribution of the reflecting particles, since the particles are, after all, in thermal contact, albeit briefly, with the surface. 
All of these effects are combined in Epstein's work in an overall prefactor. Theoretically, for purely elastic specular reflection, but may be less than or greater than unity in other circumstances. For reference, note that kinetic theory gives For the specific cases considered by Epstein, ranges from a minimum value of 1 up to a maximum value of 1.444. For example, Epstein predicts for diffuse elastic collisions. One may sometimes encounter where is the accommodation coefficient, which appears in the Maxwell model for the interaction of gas species with surfaces, characterizing the fraction of reflection events that are diffuse (as opposed to specular). (There are other accommodation coefficients that describe thermal energy transfer as well, but they are beyond the scope of this article.) In line with theory, an empirical measurement, for example, for melamine-formaldehyde spheres in argon gas, gives as measured by one method, and by another method, as reported by the same authors in the same paper. According to Epstein himself, Millikan found for oil drops, whereas Knudsen found for glass spheres. In his paper, Epstein also considered modifications to allow for nontrivial . That is, he treated the leading terms in what happens if the flow is not fully in the rarefied regime. Also, he considered the effects due to rotation of the sphere. Normally, by "Epstein drag," one does not include such effects. As noted by Epstein himself, previous work on this problem had been performed by Langevin, by Cunningham, and by Lenard. These previous results were in error, however, as shown by Epstein; as such, Epstein's work is viewed as definitive, and the result goes by his name. Applications As mentioned above, the original practical application of Epstein drag was to refined estimates of the charge on the electron in the Millikan oil-drop experiment. Several substantive practical applications have ensued. One application among many in astrophysics is the problem of gas-dust coupling in protostellar disks. See also section 4.1.1, "Epstein drag," page 110-111 of. Another application is the drag on stellar dust in red giant atmospheres, which counteracts the acceleration due to radiation pressure. Another application is to dusty plasmas. References Drag (physics) Theoretical physics
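Because the displayed formula did not survive in this copy of the article, the sketch below uses the commonly quoted form of Epstein's result, F = (4π/3) δ a² n m c̄ U with c̄ = √(8k_BT/(πm)). This form and every number in the example are assumptions to be checked against Epstein's 1924 paper, not a reproduction of the article's own equation.

```python
# Hedged sketch of the Epstein drag force,
#   F = (4*pi/3) * delta * a**2 * n * m * cbar * U,
# with cbar = sqrt(8*kB*T / (pi*m)) the mean thermal speed of the gas species.
# The prefactor form and all numbers below are illustrative assumptions.
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K

def mean_speed(T, m):
    """Arithmetic mean (thermal) speed of gas species of mass m at temperature T."""
    return math.sqrt(8.0 * K_B * T / (math.pi * m))

def epstein_drag(a, n, m, T, U, delta=1.0):
    """Magnitude of the Epstein drag force on a sphere of radius a.

    a     : sphere radius (m), assumed much smaller than the mean free path
    n, m  : number density (1/m^3) and mass (kg) of the gas species
    T     : gas temperature (K)
    U     : sphere speed relative to the gas (m/s)
    delta : order-unity prefactor encoding the reflection microphysics
    """
    return (4.0 * math.pi / 3.0) * delta * a**2 * n * m * mean_speed(T, m) * U

if __name__ == "__main__":
    # Rough numbers for a 0.1 micron sphere drifting at 1 cm/s through
    # nitrogen at room temperature and 1 Pa (a rarefied-gas condition).
    m_n2 = 28.0 * 1.66054e-27          # kg
    T = 293.0                          # K
    n = 1.0 / (K_B * T)                # number density at p = 1 Pa (ideal gas)
    F = epstein_drag(a=1e-7, n=n, m=m_n2, T=T, U=0.01)
    print(f"Epstein drag force ~ {F:.3e} N")
```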
Epstein drag
Physics,Chemistry
1,204