Dataset schema: id (int64, 580 to 79M), url (string, 31–175 characters), text (string, 9–245k characters), source (string, 1–109 characters), categories (string, 160 classes), token_count (int64, 3 to 51.8k). Each record below gives id, url, text, source, categories and token_count in that order.
33,346,197
https://en.wikipedia.org/wiki/List%20of%20sequenced%20fungi%20genomes
This list of sequenced fungi genomes contains all the fungal species known to have publicly available complete genome sequences that have been assembled, annotated and published; draft genomes are not included, nor are organelle-only sequences. Ascomycota Dothideomycetes Aureobasidium pullulans, A. melanogenum, A. subglaciale and A. namibiae, polyextremotolerant (2014) Hortaea werneckii, extremely halotolerant (2013, 2017) Leptosphaeria maculans, plant pathogen (2011) Macrophomina phaseolina, plant pathogen (2012) Mycosphaerella fijiensis, plant pathogen (2007) Mycosphaerella graminicola IPO323, wheat pathogen (2008) Phaeosphaeria nodorum SN15, wheat pathogen (2005) Pyrenophora tritici-repentis Pt-1C-BFP, wheat pathogen (2007) Eurotiomycetes Ajellomyces capsulata several strains, Darling's disease (2009, unpubl.) Ajellomyces dermatitidis several strains (2009, unpubl.) Arthroderma benhamiae CBS 112371, skin infection (2010, unpubl.) Arthroderma gypseum CBS 118893, athlete's foot (2008) Arthroderma otae CBS 113480, athlete's foot (2008) Aspergillus aculeatus ATCC16872, industrial use (2010) Aspergillus carbonarius ITEM 5010, food pathogen (2009) Aspergillus clavatus Strain:NRRL1 (2008) Aspergillus fumigatus Strain:A1163, human pathogen (2008) Aspergillus fumigatus Strain:Af293, human pathogen (2005) Aspergillus kawachii IFO 4308, food industry (2011) Aspergillus nidulans Strain:FGSC A4, model organism (2005) Aspergillus niger Strain:ATCC 1015 (DOE Joint Genome Institute) Aspergillus niger Strain:CBS 513.88, industrial use (2007) Aspergillus oryzae Strain:RIB40, industrial use (2005) Aspergillus terreus NIH 2624, statin producer and pathogen (2005, unpubl.) Coccidioides immitis, human pathogen, Valley fever (2009) Coccidioides posadasii C735 delta SOWgp, human pathogen, Valley fever (2009) Neosartorya fischeri Strain:NRRL181 (2008) Paracoccidioides brasiliensis, several strains, human pathogen (2007) Penicillium chrysogenum Strain: Wisconsin54-1255, industrial use (2008) Penicillium digitatum Strain PHI26 (2012) Penicillium digitatum Strain Pd1 (2012) Talaromyces marneffei, human pathogen (2011) Uncinocarpus reesii (2009) Leotiomycetes Blumeria graminis f. sp. hordei Strain:DH14, plant pathogen (2010) Botrytis cinerea (Botryotinia fuckeliana) Strain:B05.10 and T4, plant pathogen (2011) Glarea lozoyensis (2012) Sclerotinia sclerotiorum Strain:1980 (2011) Ascocoryne sarcoides Strain: NRRL50072 (2012) Pezizomycetes Cladobotryum protrusum (2019) Tuber melanosporum Mel28, Périgord black truffle (2010) Saccharomycetes Ashbya gossypii Strain:ATCC 10895, plant pathogen (2004) Candida albicans Strain:SC5314, human pathogen (2004) Candida albicans Strain:WO-1, human pathogen (2009) Candida dubliniensis CD36, human pathogen (2009) Candida glabrata Strain:CBS138, human pathogen (2004) Candida guilliermondii, human pathogen (2009) Candida lusitaniae, human pathogen (2009) Candida parapsilosis, human pathogen (2009) Candida orthopsilosis, human pathogen (2012) Candida tropicalis, human pathogen (2009) Debaryomyces hansenii Strain:CBS767, industrial use (2004) Debaryomyces hansenii Strain:MTCC 234, salt-tolerant (2012) Dekkera bruxellensis Strain:CBS2499, wine yeast (2012) Hansenula polymorpha NCYC 495 leu1.1, industrial use (2010) Kluyveromyces aestuarii ATCC 18862 (2010, unpubl.) Kluyveromyces lactis Strain:CLIB210, industrial use (2004) Kluyveromyces wickerhamii UCD 54-210 (2010, unpubl.)
Lachancea kluyveri (Saccharomyces kluyveri) NRRL Y-12651, plant pathogen (2009) Lodderomyces elongisporus, human pathogen (2009) Naumovozyma castellii Strain:AS 2.2404, CBS 4309 (Saccharomyces castellii; 2003, 2011) Naumovozyma dairenensis Strain:CBS 421 (2011) Saccharomyces bayanus (2003, 2011) Saccharomyces arboricolus (2013) Saccharomyces cerevisiae Strain:JAY291, industrial/model (2009) Saccharomyces cerevisiae Strain:S288C, industrial/model (1996) Saccharomyces cerevisiae Strain:Sigma1278b, industrial/model (2010) Saccharomyces kudriavzevii (2003) Saccharomyces mikatae (2003, 2011) Saccharomyces paradoxus (2003, 2009) Saccharomyces pastorianus Weihenstephan 34/70, industrial, beer (2009) Scheffersomyces stipitis (Pichia stipitis) CBS 6054, lignin/xylose degrader (2007) Spathaspora passalidarum NRRL Y-27907, model xylose fermenter (2010) Tetrapisispora phaffii van der Walt Y 89, CBS 4417 (2011) Torulaspora delbrueckii Strain:Wallerstein 129, CBS 1146 (2011) Vanderwaltozyma polyspora DSM 70294 (2007) Yarrowia lipolytica Strain:CLIB99, industrial use (2004) Zygosaccharomyces rouxii strain CBS732, food spoiler (2009) Schizosaccharomycetes Schizosaccharomyces japonicus yFS275, model for invasive growth (2006) Schizosaccharomyces pombe Strain:972h, model eukaryote (2002) Sordariomycetes Colletotrichum graminicola, corn pathogen (2012) Colletotrichum higginsianum, Arabidopsis thaliana pathogen (2012) Chaetomium cochliodes Strain:CCM F-232, soil fungus (2016) Chaetomium globosum Strain:CBS 148.51, soil fungus (2005) Chaetomium thermophilum Strain:CBS 144.50, soil fungus (2011) Fusarium oxysporum f. sp. lycopersici 4287, human/plant pathogen (2010) Gibberella moniliformis 7600, plant pathogen (2010) Gibberella zeae PH-1, plant pathogen (2008) Gaeumannomyces graminis tritici R3-111a-1 (2010, unpubl.) Grosmannia clavigera kw1407, plant pathogen (2011) Magnaporthe grisea, plant pathogen (2005) Metarhizium acridum CQMa 102, and Metarhizium anisopliae ARSEF 23, insect pathogens (2011) Neurospora crassa, model eukaryote (2003) Neurospora tetrasperma FGSC 2508 mat A, model (2010) Nectria haematococca MPVI, plastic/pest./lignin degrader (2009) Podospora anserina Strain:S mat+ Sporotrichum thermophile, thermophilic cellulose degrader (2010) Thielavia terrestris, model thermophile/industrial (2010) Trichoderma atroviride, industrial/soil (2010) Trichoderma reesei QM6a, biomass-degrading (2008) Trichoderma virens Gv29-8, industrial/pathogen (2007) Verticillium albo-atrum VaMs.102, plant pathogen (2008, unpubl.) Basidiomycota Agaricomycetes Agaricus bisporus var.
bisporus Strain:H97, Champignon (2009) Agrocybe aegerita, Black Poplar or Sword-belt Mushroom (2018) Auricularia delicata (2012) Auricularia heimuer, Chinese Auricularia (2019) Coniophora puteana (2012) Coprinopsis cinerea (Coprinus cinereus), model organism for multicellular fungi (2010) Dichomitus squalens (2012) Fibroporia radiculosa Strain:TFFH 294 (2012) Fomitiporia mediterranea (2012) Fomitopsis pinicola (2012) Ganoderma leucocontextum strain:GL72 (2023) Gloeophyllum trabeum (2012) Hebeloma cylindrosporum http://genome.jgi.doe.gov/Hebcy2/Hebcy2.home.html Heterobasidion annosum, plant pathogen (2009) Laccaria bicolor Strain:S238N-H82, mycorrhiza (2008) Lentinula edodes, Shiitake mushroom (2016) Moniliophthora perniciosa, Witches' Broom Disease of cacao (2008) Oudemansiella raphanipes, edible mushroom "Changgengu" (2023) Phanerochaete chrysosporium Strain:RP78, mycoremediation (2004) Piriformospora indica, endophyte (2011) Pleurotus ostreatus, industrial/lignin degrader (2010) Pleurotus tuber-regium, white-rot fungus (2018) Postia placenta, cellulose degrader (2008) Punctularia strigosozonata (2012) Schizophyllum commune, mushroom (2010) Serpula lacrymans, plant pathogen (2011) Stereum hirsutum (2012) Trametes versicolor (2012) Wolfiporia cocos (2012) Dacrymycetes Dacryopinax spathularia, edible jelly fungus (2024) Pucciniomycetes (formerly Urediniomycetes) Melampsora laricis-populina, pathogen of poplars (2008) Puccinia graminis f. sp. tritici, plant pathogen (2011) Puccinia triticina 1-1 BBBD Race 1, pathogen of wheat Rhodotorula graminis strain WP1, plant symbiont (2010) Sporobolomyces roseus, associated with plants Tremellomycetes Cryptococcus (Filobasidiella) neoformans JEC21, human pathogen (2005, other strains unpubl.) Dacryopinax sp. (2012) Tremella mesenterica (2012) Ustilaginomycetes Malassezia globosa CBS 7966, dandruff-associated (2007) Malassezia restricta CBS 7877, dandruff-associated (2007) Sporisorium reilianum, plant pathogen (2010) Ustilago maydis, plant pathogen (2006) Wallemiomycetes Wallemia ichthyophaga, obligate halophile (2013) Wallemia sebi, xerophile (2012) Chytridiomycota Chytridiomycota includes fungi with spores that have flagella (zoospores) and are a sister group to more advanced land fungi that lack flagella. Several chytrid species are pathogens, but have not had their genomes sequenced yet. Batrachochytrium dendrobatidis JEL423, amphibian pathogen (2006) Batrachochytrium dendrobatidis JAM81, amphibian pathogen (2006) Spizellomyces punctatus DAOM BR117 (2009) Gonapodya prolifera JEL478 (Monoblepharidomycetes) (2011) Chytriomyces sp. MP 71 Entophlyctis helioformis JEL805 Gaertneriomyces semiglobifer Barr43 Globomyces pollinis-pini Rhizoclosmatium globosum Blastocladiomycota Allomyces macrogynus ATCC 38327 (Blastocladiomycota) (2009) Catenaria anguillulae PL171 (Blastocladiomycota) Neocallimastigomycota Piromyces sp. E2 (Neocallimastigomycota) (2011) Anaeromyces sp. S4 Neocallimastix sp. G1 Orpinomyces sp. C1A Microsporidia Encephalitozoon cuniculi, human pathogen (2001) Encephalitozoon intestinalis ATCC 50506, human pathogen (2010) Enterocytozoon bieneusi, human pathogen, particularly in the context of HIV infection (~60% of genome 2009, 2010) Nosema ceranae, honey bee pathogen (2009) Octosporea bayeri OER 3-3, Daphnia pathogen (2009) Mucoromycota Mucoromycotina Absidia padenii Absidia repens Backusella circina Circinella umbellata Cokeromyces recurvatus Cunninghamella echinulata Dichotomocladium elegans Fennellomyces sp. Gilbertella persicaria var.
persicaria Gongronella butleri Hesseltinella vesiculosa Lichtheimia corymbifera Lichtheimia hyalospora Mucor circinelloides Mucor cordense Mucor indicus Mucor heterogamus Mycotypha africana Parasitella parasitica Phascolomyces articulosus Phycomyces blakesleeanus Phycomyces nitens Pilaira anomala Pilobolus umbonatus Radiomyces spectabilis Rhizopus delemar Rhizopus oryzae, human pathogen (mucormycosis) (2009) Rhizopus microsporus Saksenaea vasiformis Spinellus fusiger Sporodiniella umbellata Syncephalastrum racemosum Thamnidium elegans Umbelopsis isabellina Umbelopsis ramanniana Zychaea mexicana Glomeromycotina Rhizophagus irregularis Mortierellomycotina Mortierella alpina Strain: ATCC 32222, commercial source of arachidonic acid (2011) Lobosporangium transversale Mortierella elongata Mortierella humilis Mortierella multidivaricata Mortierella verticillata Zoopagomycota Kickxellomycotina Coemansia reversa Coemansia spiralis Kickxella alabastrina Linderina pennispora Martensiomyces pterosporus Ramicandelaber brevisporus Smittium culicis Smittium mucronatum Zancudomyces culisetae Entomophthoromycotina Basidiobolus meristosporus Conidiobolus coronatus Conidiobolus thromboides Massospora cicadina Zoopagomycotina Syncephalis fuscata Syncephalis plumigaleata Syncephalis pseudoplumigaleata Piptocephalis cylindrospora See also List of sequenced animal genomes List of sequenced archaeal genomes List of sequenced bacterial genomes List of sequenced eukaryotic genomes List of sequenced plant genomes External links Fungal Genome Initiative (includes draft genomes) UniProt query (complete proteome and fungi) References Lists of fungi Biology-related lists Fungi
List of sequenced fungi genomes
Engineering,Biology
3,700
53,645,232
https://en.wikipedia.org/wiki/Zellballen
A zellballen is a small nest of chromaffin cells or chief cells with pale eosinophilic staining. Zellballen are separated into groups by segmenting bands of fibrovascular stroma, and are surrounded by supporting sustentacular cells. A zellballen pattern is diagnostic for paraganglioma or pheochromocytoma. Zellballen is German for "ball of cells". References Cell biology Human cells Neuroendocrine cells Adrenal gland
Zellballen
Chemistry,Biology
112
70,151,056
https://en.wikipedia.org/wiki/LQ%20Hydrae
LQ Hydrae is a single variable star in the equatorial constellation of Hydra. It is sometimes identified as Gl 355 from the Gliese Catalogue; LQ Hydrae is the variable star designation, which is abbreviated LQ Hya. The brightness of the star ranges from an apparent visual magnitude of 7.79 down to 7.86, which is too faint to be readily visible to the naked eye. Based on parallax measurements, this star is located at a distance of 59.6 light years from the Sun. It is drifting further away with a radial velocity of 7.6 km/s. During a 1981 survey of southern stars, W. P. Bidelman found that the H and K lines of ionized calcium for LQ Hya were filled in with emission. (W. D. Heintz independently made the same observation.) In 1986, F. C. Fekel and associates determined this is a young, rapidly rotating BY Draconis-type variable. A decade of photometry was used to determine a rotation period of 1.601136 days (about 1 day, 14 hours, and 26 minutes). The star spots on the surface showed significant evolution over time scales of a few months. Variations in rotational modulation of surface activity suggested the star is undergoing differential rotation. The high lithium abundance and rapid rotation of this star indicate it is a zero-age main-sequence star, or possibly even a pre-main-sequence star. A strong flare event was observed on December 22, 1993. Additional flares were detected thereafter, with ROSAT X-ray data from 1992 showing a strong flare during that time period. Observations from December 2000 and 2001 showed that the magnetic field of the star is dramatically changing its topology on a time frame of a year or less. The stellar classification of LQ Hya is K1Vp, indicating it is a K-type main-sequence star with some peculiar features in the spectrum. In some respects it is considered an analog of a young Sun around the age of 60 million years. It shows strong ultraviolet emission and has been detected in the X-ray band, indicating high levels of chromospheric activity. The star shows dual magnetic activity cycles with periods of 6.8 and 11.4 years, which are somewhat comparable to the solar cycle of the Sun. References Further reading K-type main-sequence stars BY Draconis variables Solar analogs Hydra (constellation) BD-10 2857 082558 046816 Hydrae, LQ
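As a quick check of the period conversion quoted above, this short, purely illustrative sketch converts the 1.601136-day rotation period into days, hours and minutes:

```python
# Convert the photometric rotation period of LQ Hya into days/hours/minutes.
period_days = 1.601136

days = int(period_days)
hours_float = (period_days - days) * 24
hours = int(hours_float)
minutes = round((hours_float - hours) * 60)

print(days, hours, minutes)  # 1 14 26 -> about 1 day, 14 hours, 26 minutes
```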
LQ Hydrae
Astronomy
527
37,019,370
https://en.wikipedia.org/wiki/List%20of%20electromagnetism%20equations
This article summarizes equations in the theory of electromagnetism. Definitions Here subscripts e and m are used to distinguish between electric and magnetic charges. The definitions for monopoles are of theoretical interest, although real magnetic dipoles can be described using pole strengths. There are two possible units for monopole strength, Wb (weber) and A·m (ampere metre). Dimensional analysis shows that magnetic charges relate by qm(Wb) = μ0 qm(A·m). Initial quantities Electric quantities Contrary to the strong analogy between (classical) gravitation and electrostatics, there are no "centre of charge" or "centre of electrostatic attraction" analogues. Electric transport Electric fields Magnetic quantities Magnetic transport Magnetic fields Electric circuits DC circuits, general definitions AC circuits Magnetic circuits Electromagnetism Electric fields General classical equations Magnetic fields and moments General classical equations Electric circuits and electronics Below, N = number of conductors or circuit components. Subscript net refers to the equivalent and resultant property value. See also Defining equation (physical chemistry) Fresnel equations List of equations in classical mechanics List of equations in fluid mechanics List of equations in gravitation List of equations in nuclear and particle physics List of equations in quantum mechanics List of equations in wave theory List of photonics equations List of relativistic equations SI electromagnetism units Table of thermodynamic equations Footnotes Sources Further reading Physical quantities SI units electromagnetism Electromagnetism
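A short dimensional check of the unit relation just quoted (standard SI symbols; this working is added for illustration and is not taken from the article's tables):

```latex
% Dimensional check of the monopole-strength relation q_m(Wb) = \mu_0 q_m(A m):
% \mu_0 carries units of V s A^{-1} m^{-1} = Wb A^{-1} m^{-1},
% so multiplying a pole strength expressed in A m by \mu_0 yields webers.
q_m(\mathrm{Wb}) = \mu_0 \, q_m(\mathrm{A\,m}),
\qquad
[\mu_0] = \frac{\mathrm{V\,s}}{\mathrm{A\,m}} = \frac{\mathrm{Wb}}{\mathrm{A\,m}}
```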
List of electromagnetism equations
Physics,Mathematics
300
2,185,680
https://en.wikipedia.org/wiki/Multi-configuration%20time-dependent%20Hartree
Multi-configuration time-dependent Hartree (MCTDH) is a general algorithm to solve the time-dependent Schrödinger equation for multidimensional dynamical systems consisting of distinguishable particles. MCTDH can thus determine the quantal motion of the nuclei of a molecular system evolving on one or several coupled electronic potential energy surfaces. MCTDH is by its very nature an approximate method; it can be made as accurate as any competing method, but its numerical efficiency deteriorates with growing accuracy. MCTDH is designed for multi-dimensional problems, in particular for problems that are difficult or even impossible to attack in a conventional way. There is little or no gain in treating systems with fewer than three degrees of freedom by MCTDH. MCTDH will in general be best suited for systems with 4 to 12 degrees of freedom. Because of hardware limitations it may in general not be possible to treat much larger systems. For a certain class of problems, however, one can go much further. The MCTDH program package has recently been generalised to enable the propagation of density operators. References External links The Heidelberg MCTDH Homepage Quantum chemistry Scattering
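For reference, the MCTDH wavefunction ansatz as it is conventionally written in the literature is sketched below (standard notation, added here for illustration; f is the number of degrees of freedom, and both the coefficients A and the single-particle functions φ are time-dependent and optimized variationally):

```latex
% MCTDH ansatz: the f-dimensional wavefunction is expanded in
% time-dependent single-particle functions (SPFs) with
% time-dependent expansion coefficients.
\Psi(q_1,\dots,q_f,t) =
  \sum_{j_1=1}^{n_1}\cdots\sum_{j_f=1}^{n_f}
  A_{j_1\dots j_f}(t)\,
  \prod_{\kappa=1}^{f}\varphi_{j_\kappa}^{(\kappa)}(q_\kappa,t)
```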
Multi-configuration time-dependent Hartree
Physics,Chemistry,Materials_science
241
34,814,274
https://en.wikipedia.org/wiki/Group%20I%20pyridoxal-dependent%20decarboxylases
In molecular biology, the group I pyridoxal-dependent decarboxylases, also known as glycine cleavage system P-proteins, are a family of enzymes consisting of glycine cleavage system P-proteins (glycine dehydrogenase (decarboxylating)) from bacterial, mammalian and plant sources. The P protein is part of the glycine decarboxylase multienzyme complex (GDC), also annotated as the glycine cleavage system or glycine synthase. The P protein binds the alpha-amino group of glycine through its pyridoxal phosphate cofactor; carbon dioxide is released, and the remaining methylamine moiety is then transferred to the lipoamide cofactor of the H protein. GDC consists of four proteins: P, H, L and T. Pyridoxal-5'-phosphate-dependent amino acid decarboxylases can be divided into four groups based on amino acid sequence. Group I comprises glycine decarboxylases. See also Group II pyridoxal-dependent decarboxylases Group III pyridoxal-dependent decarboxylases Group IV pyridoxal-dependent decarboxylases References Protein families
Group I pyridoxal-dependent decarboxylases
Biology
270
14,042,799
https://en.wikipedia.org/wiki/Fusarium%20gibbosum
Fusarium gibbosum (syn. Gibberella intricans) is a fungal plant pathogen. It is an opportunistic pathogen of durians such as Durio graveolens and Durio kutejensis. References External links USDA ARS Fungal Database Fungal plant pathogens and diseases gibbosum Fungi described in 1910 Fungus species
Fusarium gibbosum
Biology
75
7,284,042
https://en.wikipedia.org/wiki/Band%20cell
A band cell (also called band neutrophil, band form or stab cell) is a cell undergoing granulopoiesis, derived from a metamyelocyte, and leading to a mature granulocyte. It is characterized by having a curved but not lobular nucleus. The term "band cell" implies a granulocytic lineage (e.g., neutrophils). Clinical significance Band neutrophils are an intermediate step prior to the complete maturation of segmented neutrophils. Polymorphonuclear neutrophils are initially released from the bone marrow as band cells. As the immature neutrophils become activated or exposed to pathogens, their nucleus will take on a segmented appearance. An increase in the number of these immature neutrophils in circulation can be indicative of an infection that they are being recruited to fight, or of some inflammatory process. The increase of band cells in the circulation is called bandemia and represents a "left shift". Blood reference ranges for neutrophilic band cells in adults are 3 to 5% of white blood cells, or up to 0.7 × 10⁹/L. An excess may sometimes be referred to as bandemia. See also Pluripotential hemopoietic stem cell Additional images References External links - "Bone Marrow and Hemopoiesis: bone marrow smear, neutrophil series" Histology at okstate.edu Slide at hematologyatlas.com - "Neutrophil band" visible in second row Interactive diagram at lycos.es Histology
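As a small worked example of these reference values, the sketch below (hypothetical function names; thresholds taken from the figures quoted above) converts a total white-cell count and a band fraction into an absolute band count and flags values above the quoted range:

```python
# Illustrative only: thresholds are the adult reference values quoted above
# (bands about 3-5% of white cells, up to 0.7 x 10^9 per litre).
BAND_FRACTION_UPPER = 0.05
ABSOLUTE_BAND_LIMIT = 0.7e9  # cells per litre

def absolute_band_count(wbc_per_litre: float, band_fraction: float) -> float:
    """Absolute band-cell count from the total WBC count and the band fraction."""
    return wbc_per_litre * band_fraction

def exceeds_reference(wbc_per_litre: float, band_fraction: float) -> bool:
    """True if the band fraction or the absolute band count is above the range."""
    count = absolute_band_count(wbc_per_litre, band_fraction)
    return band_fraction > BAND_FRACTION_UPPER or count > ABSOLUTE_BAND_LIMIT

# Example: 9 x 10^9 WBC/L with 10% bands -> 0.9 x 10^9/L, above the limit.
print(exceeds_reference(9e9, 0.10))  # True
```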
Band cell
Chemistry
346
23,879,702
https://en.wikipedia.org/wiki/Aciculosporium%20monostipum
Aciculosporium monostipum is a species of fungi in the family Clavicipitaceae. It infects individual florets in the same way as species of the genus Claviceps but does not produce sclerotia. When grown in pure culture it shows dimorphism in production of a yeast-like ephelidial phase as well as a mycelial phase. References Clavicipitaceae Fungus species
Aciculosporium monostipum
Biology
96
3,836,385
https://en.wikipedia.org/wiki/Inside%20Mac%20Games
Inside Mac Games (IMG) started in 1993 as an electronic magazine about video games for the Mac. It was distributed on floppy disk, then CD-ROM, and eventually became a website. History In 1992, Tuncer Deniz, who was unemployed, decided to create a magazine called Inside Mac Games — he came up with the name after seeing a copy of Inside Sports at a newsstand — that would be dedicated to reviews of new and upcoming Macintosh computer games. Deniz interested a friend, Jon Blum, in the project, but neither of them had the capital or the expertise to publish a print magazine. Instead, they envisioned an electronic magazine. Using a shareware layout program, Deniz and Blum created the first issue, which contained reviews of four flight simulators — Parsoft Interactive's Hellcats Over the Pacific and Missions at Leyte Gulf, Spectrum HoloByte's Falcon MC, and Microsoft Flight Simulator 4.0 — as well as hints, Easter eggs and reviews about older games such as Maelstrom and Tom Landry Strategy Football, and most importantly, a playable preview of F/A-18 Hornet that Graphic Simulations planned to release in a few months. Deniz and Blum decided to offer two annual subscription plans: either $18 for a downloadable version of the magazine; or for $24, the subscriber would receive a monthly floppy disk in the mail that would not only contain the magazine, but also software patches and updates for popular games, as well as a shareware Game of the Month. In February 1993, they uploaded a promotional file to AOL that contained portions of Issue 1. Enough people downloaded the file and subsequently paid for a subscription that Deniz and Blum were able to produce Issue 2 the next month. Several months later, sales increased substantially when Graphic Simulations released F/A-18 Hornet and included a promotional flyer for IMG in the box. In 1995, IMG switched from floppy disks to CD-ROMs, allowing for much more high-quality content and games, and increased the annual subscription rate to $59. In August of that year, Paul Murphy reviewed one of their CD-ROMs for Dragon and called it "a great deal", although he noted that the magazine itself was "somewhat unexciting [...] IMG articles are competent and serviceable, with no distinctive voices, styles or viewpoints." It was the commercial software demos and shareware included on the CD-ROMs that Murphy called "the real charm and value of the IMG CD." Murphy concluded that in the absence of any other magazines dedicated to Mac games, "Mac game fans need Inside Mac Games to separate the wheat from the chaff. The demos and shareware [are] a barrel of fun and solid value." In 1996, Deniz left IMG to work for Bungie, but returned in 1999. The following year, the CD-ROM distribution of the magazine was dropped in favour of downloads from the IMG website. By 2005, Deniz had opened an on-line software store through the IMG website, using a subscription model of $29 per month for a monthly free game and discounts on other products. From 2005 to 2006, IMG produced a weekly podcast, hosted by game designers Justin Ficarrotta and Will Miller, and critic Blake Buck, that featured Mac game news, reviews and general discussion. After 33 episodes, the original hosts left to start a new podcast, and the IMG podcast was relaunched later the same year with a new host, running for a further 38 episodes. By 2010, interest in Mac-exclusive games had cooled, and by 2018, the IMG website was reduced to the user forums, with a link to Tuncer Deniz's on-line software store.
References Macintosh websites Video game news websites Video game platform websites
Inside Mac Games
Technology
789
2,673,983
https://en.wikipedia.org/wiki/J%C3%B8rg%20Tofte%20Jebsen
Jørg Tofte Jebsen (27 April 1888 – 7 January 1922) was a physicist from Norway, where he was the first to work on Einstein's general theory of relativity. In this connection he became known after his early death for what many now call the Jebsen–Birkhoff theorem for the metric tensor outside a general, spherical mass distribution. Biography Jebsen was born and grew up in Berger, Vestfold, where his father Jens Johannes Jebsen ran two large textile mills. His mother was Agnes Marie Tofte; they had married in 1884. After elementary school he went through middle school and gymnasium in Oslo, where he already showed a particular talent for mathematical topics. After the final examen artium in 1906, he did not continue his academic studies at a university, as would have been normal at that time. He was meant to enter his father's company and for that purpose spent two years in Aachen, Germany, studying textile manufacturing. After a short stay in England, he came back to Norway and started to work with his father. But his interest in natural science took over, and in 1909 he began studying that field at the University of Oslo. His work there was interrupted in the period 1911–12, when he was an assistant for Sem Sæland at the newly established Norwegian Institute of Technology (NTH) in Trondheim. Back in Oslo he took up investigations of X-ray crystallography with Lars Vegard. With Vegard's help he could pursue this work at the University of Berlin starting in the spring of 1914, at the same time as Einstein took up his new position there. Theory of relativity During the stay in Berlin it became clear that his main interests were in theoretical physics, and electrodynamics in particular. This is central to Einstein's special theory of relativity and would define his future work back in Norway. From 1916 he held a new job as an assistant in Trondheim, but had to resign after a year because of health problems. In the summer of 1917 he married Magnhild Andresen in Oslo, and they had a child a year later. They had then moved back to his parents' home in Berger, where he worked alone on a larger treatise with the title Versuch einer elektrodynamischen Systematik. It was finished a year later, in 1918, and he hoped that it could be used to obtain a doctor's degree at the university. In the fall of the same year he received treatment at a sanatorium for what turned out to be tuberculosis. The faculty at the University of Oslo sent Jebsen's thesis for evaluation to Carl Wilhelm Oseen at the University of Uppsala. Oseen had some critical comments, with the result that it was approved only for the more ordinary cand.real. degree. But Oseen had found the student so promising that shortly thereafter Jebsen was invited to work with him. Jebsen came to Uppsala in the fall of 1919, where he could follow lectures by Oseen on general relativity. Jebsen–Birkhoff theorem At that time it was natural to study the exact solution of Einstein's equations for the metric outside a static, spherical mass distribution, found by Karl Schwarzschild in 1916. Jebsen set out to extend this achievement to the more general case of a spherical mass distribution that varies with time, which would be of relevance for pulsating stars. After a relatively short time he came to the surprising result that the static Schwarzschild solution still gives the exact metric tensor outside the mass distribution. This means that such a spherical, pulsating star will not emit gravitational waves.
During the spring of 1920 he hoped to get the results published through the Royal Swedish Academy of Sciences. This met with some difficulties, but after Oseen's intervention the paper was accepted for publication in a Swedish journal for the natural sciences, where it appeared the following year. His work did not seem to generate much interest; one reason may be that the Swedish journal was not well known abroad. A couple of years later the result was rediscovered by George David Birkhoff, who included it in a popular science book he wrote, and it thus became known as "Birkhoff's theorem." Jebsen's original discovery was first pointed out in 2005, and his paper was translated into English. Since then the result has more often been called the Jebsen–Birkhoff theorem. Most modern proofs follow the lines of Jebsen's original derivation. Final years Einstein visited Oslo in June 1920 to give three public lectures about the theory of relativity at the invitation of the Student Society. Jebsen was also there, but it is not clear whether he met Einstein personally. In the fall of the same year Jebsen traveled with his family to Bolzano in northern Italy in order to find a milder climate and improve his deteriorating health. Here he wrote the first Norwegian presentation of the differential geometry used in general relativity. He also found time to write a popular book on Galileo Galilei and his struggle with the church. But his health did not improve, and he died there on January 7, 1922. A few weeks later he was buried near his home in Norway. References Norwegian physicists Relativity theorists 1888 births 1922 deaths People from Vestfold
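To make the statement of the theorem concrete, the Schwarzschild line element referred to above can be written in the usual form (standard notation: G the gravitational constant, M the enclosed mass, c the speed of light; this equation is added for illustration). The Jebsen–Birkhoff theorem says that this static metric is the unique spherically symmetric vacuum solution outside the mass distribution, even when the interior distribution varies in time:

```latex
% Schwarzschild exterior solution in standard Schwarzschild coordinates.
% Jebsen-Birkhoff: any spherically symmetric vacuum solution of Einstein's
% equations is static and takes this form outside the mass distribution.
ds^2 = -\left(1-\frac{2GM}{c^2 r}\right)c^2\,dt^2
       + \left(1-\frac{2GM}{c^2 r}\right)^{-1}dr^2
       + r^2\left(d\theta^2+\sin^2\theta\,d\varphi^2\right)
```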
Jørg Tofte Jebsen
Physics
1,074
998,941
https://en.wikipedia.org/wiki/BlueJ
BlueJ is an integrated development environment (IDE) for the Java programming language, developed mainly for educational purposes, but also suitable for small-scale software development. It runs with the help of Java Development Kit (JDK). BlueJ was developed to support the learning and teaching of object-oriented programming, and its design differs from other development environments as a result. The main screen graphically shows the class structure of an application under development (in a UML-like diagram), and objects can be interactively created and tested. This interaction facility, combined with a clean, simple user interface, allows easy experimentation with objects under development. Object-oriented concepts (classes, objects, communication through method calls) are represented visually and in its interaction design in the interface. History The development of BlueJ was started in 1999 by Michael Kölling and John Rosenberg at Monash University, as a successor to the Blue system. BlueJ is an IDE (Integrated Development Environment). Blue was an integrated system with its own programming language and environment, and was a relative of the Eiffel language. BlueJ implements the Blue environment design for the Java programming language. In March 2009, the BlueJ project became free and open source software, and licensed under GPL-2.0-or-later with the Classpath exception. BlueJ is currently being maintained by a team at King's College London, England, where Kölling works. Supported language BlueJ supports programming in Java and in Stride. Java support has been provided in BlueJ since its inception, while Stride support was added in 2017. See also Greenfoot DrJava Educational programming language References Bibliography External links BlueJ textbook Integrated development environments Free integrated development environments Cross-platform free software Free software programmed in Java (programming language) Java development tools Java platform Linux programming tools Software development kits MacOS programming tools Programming tools for Windows Linux software Educational programming languages Pedagogic integrated development environments
BlueJ
Technology
395
31,845,142
https://en.wikipedia.org/wiki/Luminex%20Corporation
Luminex Corporation is a biotechnology company which develops, manufactures and markets proprietary biological testing technologies with applications in life-sciences. Background Luminex's Multi-Analyte Profiling (xMAP) technology allows simultaneous analysis of up to 500 bioassays from a small sample volume, typically a single drop of fluid, by reading biological tests on the surface of microscopic polystyrene beads called microspheres. The xMAP technology combines this miniaturized liquid array bioassay capability with small lasers, light emitting diodes (LEDs), digital signal processors, photo detectors, charge-coupled device imaging and proprietary software to create a system offering advantages in speed, precision, flexibility and cost. The technology is currently being used within various segments of the life sciences industry, which includes the fields of drug discovery and development, and for clinical diagnostics, genetic analysis, bio-defense, food safety and biomedical research. The Luminex MultiCode technology is used for real-time polymerase chain reaction (PCR) and multiplexed PCR assays. Luminex Corporation owns 315 issued patents worldwide, including over 124 issued patents in the United States based on its multiplexing xMAP platform. References External links Immunology organizations Biological techniques and tools Companies based in Austin, Texas Life sciences industry Companies formerly listed on the Nasdaq Biotechnology companies of the United States 2021 mergers and acquisitions American subsidiaries of foreign companies
Luminex Corporation
Biology
292
19,196,788
https://en.wikipedia.org/wiki/HD%2036041
HD 36041 is a giant star in the northern constellation Auriga. It has an apparent magnitude of 6.37, making it faintly visible to the naked eye. References External links HR 1825 CCDM J05307+3950 Image HD 36041 Auriga 036041 025810 G-type giants 1825 Durchmusterung objects
HD 36041
Astronomy
74
11,722,993
https://en.wikipedia.org/wiki/Raster%20Document%20Object
The .RDO (Raster Document Object) file format is the native format used by Xerox's DocuTech range of hardware and software that underpins the company's "Xerox Document On Demand" (XDOD) systems. It is therefore a significant file format for the "print on demand" market sector, along with PostScript and PDF. RDO is a metafile format based on the Open Document Architecture (ODA) specifications. In Xerox's RDO implementation, description and control information is stored within the RDO file, while raster images are stored separately, usually in a separate folder, as TIFF files. The RDO file dictates which bitmap images will be used on each page of a document, and where they will be placed. Features and disadvantages This approach has advantages and disadvantages over the monolithic approach used by PDF. The disadvantages of RDO are that it is a largely proprietary format, and the multi-file approach means that file management and orphan control is more of an issue: one cannot tell from a computer's file system whether all the files required for a document to print are present and correct. In RDO's favor, the multi-file approach allows a networked device to load the small RDO file and then request the larger bitmap files only when necessary: this allows a full job specification to be loaded and installed over a network almost immediately, with the larger bitmap files only having to be transferred as and when needed, allowing more flexibility for managing network traffic loading. The TIFF file format is highly portable, and Xerox's MakeReady software, supplied with its XDOD systems, readily imports and exports PostScript files; however, the Xerox "on demand" systems typically require a document library to be stored as RDO / TIFF files, and most non-Xerox applications will not read RDO structures directly. See also Xerox DocuTech Print on demand Open Document Architecture Tag Image File Format Portable Document Format References "Document encoding formats for Phoenix: an example of on-demand publishing" - Summary Report prepared by South Bank University Oya Y. Rieger and Anne R. Kenney "Risk Management of Digital Information Case Study for Image File Format" Xerox Page description languages Digital press Computer file formats RDO
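Because the page images live outside the RDO file itself, a pre-flight check for missing (orphaned) TIFFs is a natural housekeeping step. The sketch below is purely illustrative: it assumes the list of referenced TIFF paths has already been extracted from the RDO by some other means (no public RDO parsing API is implied), and simply reports which referenced files are absent.

```python
from pathlib import Path
from typing import Iterable, List

def missing_page_images(referenced_tiffs: Iterable[str], image_dir: str) -> List[str]:
    """Return referenced TIFF filenames that are not present in image_dir.

    `referenced_tiffs` is assumed to come from an already-parsed RDO job;
    how that list is obtained is outside the scope of this sketch.
    """
    folder = Path(image_dir)
    return [name for name in referenced_tiffs if not (folder / name).is_file()]

# Hypothetical usage: page list extracted elsewhere, images stored beside the RDO.
pages = ["page001.tif", "page002.tif", "page003.tif"]
print(missing_page_images(pages, "job_images"))  # e.g. ['page003.tif'] if absent
```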
Raster Document Object
Technology
491
40,941,011
https://en.wikipedia.org/wiki/Belphegor%27s%20prime
Belphegor's prime is the palindromic prime number (10^30 + 666 × 10^14 + 1), a number which reads the same both backwards and forwards and is only divisible by itself and one. History Belphegor's prime was first discovered by Harvey Dubner, a mathematician known for his discoveries of many large prime numbers and prime number forms. For Belphegor's prime in particular, he discovered the prime while determining a sequence of primes it belongs to. The name "Belphegor's prime" was coined by author Clifford A. Pickover in 2012. Belphegor is one of the Seven Princes of Hell; specifically, "the demon of inventiveness." The number itself contains superstitious elements that have given it its name: the number 666 at the heart of Belphegor's prime is widely known as the number of the beast, used in symbolism to represent one of the creatures in the apocalypse or, more commonly, the devil. This number is surrounded on either side by thirteen zeroes and is 31 digits in length (thirteen reversed), with thirteen itself long regarded superstitiously as an unlucky number in Western culture. Mathematics A Belphegor number is a palindromic number of the form (10^(n+3) + 666) × 10^(n+1) + 1, that is, 666 flanked by n zeroes on each side and a 1 at each end. The sequence of the first four Belphegor numbers (n = 0 to 3) is 16661, 1066601, 100666001 and 10006660001. Dubner noticed that 16661 is a prime number. By adding zeroes directly on both sides of the 666, Dubner found more palindromic prime numbers, including the Belphegor prime, which is second in the sequence. This sequence eventually became the Belphegor primes, named after the number. The number of zeroes required to create each of the first few Belphegor primes begins 0, 13, ... Belphegor's prime contains 13 zeroes on either side of the central 666, and thus corresponds to the second number in this sequence. In the short scale, this number would be named "one nonillion, sixty-six quadrillion, six hundred trillion one." In the long scale, this number's name would be "one quintillion, sixty-six billiard, six hundred billion one." References External links Belphegor's Prime: 1000000000000066600000000000001 from Clifford Pickover Prime numbers Superstitions about numbers Palindromes Numbers Large numbers
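A minimal sketch of the construction described above (the formula is the one given in this section; the sympy library is assumed to be available for the primality check):

```python
# Belphegor numbers: a palindrome 1 0...0 666 0...0 1 with n zeroes on each side.
from sympy import isprime  # assumed available for the primality check

def belphegor_number(n: int) -> int:
    """(10**(n + 3) + 666) * 10**(n + 1) + 1, i.e. 666 flanked by n zeroes and 1s."""
    return (10 ** (n + 3) + 666) * 10 ** (n + 1) + 1

print(belphegor_number(0))            # 16661, the first Belphegor prime
print(belphegor_number(13))           # Belphegor's prime, 31 digits long
print(isprime(belphegor_number(13)))  # True
```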
Belphegor's prime
Physics,Mathematics
514
69,158,898
https://en.wikipedia.org/wiki/Sulli%20Deals
"Sulli Deals" was an open-source app which contained photographs and personal information of some 100 Muslim women online. An FIR was filed by the Delhi Police with National Commission for Women India taking suo moto cognisance of the matter on 8 July 2021. The creator of the app was a BCA Student from Indore, Madhya Pradesh. On 9 January 2022, Thakur, who created the app to "defame" Muslim women, was arrested by the Delhi Police. Thakur was granted bail on 29 March by the court. Incident On 4 July 2021, several pictures of Muslim women were posted on Twitter, where each was described as a "deal of the day". Several accounts spoke against the app, which was hosted on GitHub as an "open-source project." After multiple complaints, GitHub took the app down and suspended the "sullideals" account which hosted the app. This was not the first time Muslim women were harassed by right-wing social media users. In May 2021, a YouTuber named Liberal Doge "rated out of 10" Pakistani women in his livestream and some group of anonymous accounts harassed Hasiba Amin, the National Convenor of Congress IT Cell. According to analysts, the same group of people were behind the Sulli Deals app. However, Liberal Doge denied any connection with the Sulli Deals app. Reaction Commentators have described the app as targeted harassment against Muslim women, with NCW taking suo moto cognizance over the matter and Delhi Police Cyber Cell registering an FIR under section 354-A. One of the targeted women, pilot Hina Khan, filed a separate FIR with Delhi Police under section 509 and section 66,67 of the IT Act. Shiv Sena MP Priyanka Chaturvedi wrote to the IT minister demanding strict and urgent action against the creators of the app. In addition, 56 MPs across party lines signed a letter to the home minister Amit Shah seeking redressal. Congress MP Shashi Tharoor and AIMIM president Asaduddin Owaisi showed solidarity with the targeted women and assured that they would pursue the case to prevent further misuse of social media. More than 800 women-rights activists from all over India released a statement seeking action against the culprits. Both GitHub and Twitter were criticized for failing to prevent further harassment of Muslim women after the auction fiasco and not taking quick action against the sullideals account. As of November 2021, the identity of the creator of the app was still unknown, with GitHub refusing to share data to Indian authorities through the usual CrPC notice. On 6 January 2022, the Delhi Police said they were looking for the actual creator of the app through MLAT. On 11 January 2022, the United Nations Special Rapporteur on Minority Issues, Fernand de Varennes, said, "Minority Muslim women in India are harassed and ‘sold’ in social media apps, #SulliDeals, a form of hate speech, must be condemned and prosecuted as soon as they occur. All Human Rights of minorities need to be fully and equally protected". Arrest In January 2022, Delhi police arrested the creator of Sulli Deals, in Indore, Madhya Pradesh. Aftermath In 2022, a similar app named Bulli Bai was created to auction Muslim women, in which the same Trad group was involved. Police arrested Neeraj Bishnoi, an engineering student, in the case. In March 2022, Niraj Bishnoi was granted bail by the court. 
Notes References 2021 in India July 2021 crimes in Asia 2021 controversies Sexual harassment in India Cybercrime in India Cyberbullying Defunct websites 2021 in Internet culture Online sex crimes Islamophobia in India Indian websites Internet properties established in 2021 Internet properties disestablished in 2021 Internet trolling Online obscenity controversies Shock sites Internet forums Internet-related controversies Stalking Delisted applications Hindutva Sexism in India
Sulli Deals
Biology
812
37,178,161
https://en.wikipedia.org/wiki/Pharmaceuticals%20and%20Medical%20Devices%20Agency
The Pharmaceuticals and Medical Devices Agency (PhMDA) is an Independent Administrative Institution responsible for ensuring the safety, efficacy and quality of pharmaceuticals and medical devices in Japan. It is similar in function to the Food and Drug Administration in the United States, the Medicines and Healthcare products Regulatory Agency in the United Kingdom, the Spanish Agency of Medicines and Medical Devices in Spain, or the Food and Drug Administration in the Philippines. The PhMDA has been eCTD compliant at least since December 2017. Tasks Among other things, the agency is tasked with the following: Drug and medical device testing: Scientific review of market authorization applications based on Japanese pharmaceutical law Advice in clinical trials or in the preparation of dossiers for the registration procedure (New Drug Applications (NDA)) Inspection and conformity assessment of Good Clinical Practice (GCP), Good Laboratory Practice (GLP), and Good Practice Systems and Programs (GPSP) Auditing of manufacturers to ensure they conform to Good Manufacturing Practice (GMP) and have a suitable Quality Management System (QMS) Post-marketing drug safety: The collection, analysis and distribution of data on the quality, efficacy, and safety of medicines and medical devices Advising consumers on approved products Research on the development of industry standards Victim compensation: Payment of medical costs, lost wages, and pain and suffering for those who experience injury or disability resulting from the use of medical products Disbursement of funds to those infected with HIV as a result of blood transfusions Leadership The chief executive of the agency is Yasuhiro Fujiwara, former head of the National Cancer Center Japan. From 2008 to 2018, the chief executive of the agency was Tatsuya Kondo, a neurosurgeon and graduate of the University of Tokyo. References External links Medical and health organizations based in Japan National agencies for drug regulation Independent Administrative Institutions of Japan Government agencies established in 2004 2004 establishments in Japan
Pharmaceuticals and Medical Devices Agency
Chemistry
380
76,879,876
https://en.wikipedia.org/wiki/Brain%20rot
In internet culture, brain rot (or brainrot) describes internet content deemed to be of low quality or value, or the supposed negative psychological and cognitive effects caused by such material. The term also refers to the deleterious effects associated with excessive use of digital media, especially short-form entertainment and doomscrolling, which may affect cognitive and mental health. The term originated within the online cultures of Generation Z and Generation Alpha and has since become mainstream. Origin and usage According to Oxford University Press, the first recorded use of the term traces back to the 1854 book Walden by Henry David Thoreau. Thoreau was criticizing what he saw as a decline in intellectual standards, with complex ideas being less highly regarded, and compared this to the 1840s "potato rot" in Europe. In online settings, it was used as early as 2004. In 2007, the term "brain rot" was used by Twitter users to describe dating game shows, video games and "hanging out online". Usage of the phrase increased online in the 2010s before becoming rapidly more popular in 2020 on Discord, when it became an Internet meme. In 2024, it is most frequently used in the context of Generation Alpha's digital habits, by critics expressing that the generation is "excessively immersed in online culture". It is commonly associated with an individual's vocabulary consisting exclusively of internet references. From 2023 to 2024, Oxford reported the term's usage increased by 230% in frequency per million words. Linguist Brent Henderson predicted that the term will stay around, citing its memorability and relevance. The term is often linked with slang and trends popular among Generation Alpha and Generation Z, such as "skibidi" (a reference to the YouTube shorts series Skibidi Toilet), "rizz" (charm), "gyatt" (referring to the buttocks), "fanum tax" (stealing food), "sigma" (referring to a leader or alpha male), and "delulu" (truncation of delusional). Some online content are commonly labelled "brainrot", such as the web series Skibidi Toilet. Impact The term was named Oxford Word of the Year in 2024, beating other words like demure and romantasy. Its modern usage is defined by the Oxford University Press as "the supposed deterioration of a person's mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging". In the same year, millennial Australian senator Fatima Payman made headlines by making a short speech to the Australian parliament using Generation Alpha slang. She introduced the speech as addressing "an oft-forgotten section of our society", referring to Generations Z and Alpha, and said that she would "render the remainder of my statement using language they're familiar with". Using slang terms, Payman criticised the government's plans to ban under-14s from social media and closed by saying that, "Though some of you cannot yet vote, I hope that, when you do, it will be in a more goated Australia for a government with more aura. Skibidi!" The speech, written by a 21-year-old staff member, was labeled by some as an example of "brainrot" outside the online world. See also Elsagate, a YouTube controversy References Internet memes introduced in 2023 Internet terminology Generation Alpha Generation Z 2020s fads and trends
Brain rot
Technology
717
57,362,029
https://en.wikipedia.org/wiki/Pump%20as%20turbine
A pump as turbine (PAT), also known as a pump in reverse, is an unconventional type of reaction water turbine, which behaves in a similar manner to that of a Francis turbine. The function of a PAT is comparable to that of any turbine, to convert kinetic and pressure energy of the fluid into mechanical energy of the runner. They are commonly commercialized as composite pump and motor/generator units, coupled by a fixed shaft to an asynchronous induction type motor unit. Unlike other conventional machines which require being manufactured according to the client’s specifications, pumps are a very common piece of equipment widely available in different sizes and functionality anywhere around the globe. When used as a turbine, the rotor moves in the opposite direction, or in reverse, as to when it is operating as a pump. In this manner, it allows the motor to generate electrical power. History First mentions of the possibility of using pumps as turbines (PAT) dates back to the early 1930s and are associated to lab experiments performed by Thoma and Kittredge, who first identified the potential for a common pump to function quite efficiently as a turbine by reversing the flow. Subsequently, in the second half of the 20th century, a new impulse for research on this topic came from the pump manufacturing industry. During this time, established collaborations with several research institutes helped develop an in-depth understanding of the phenomena associated with PAT utilization. Efforts were made to develop methods to predict characteristic and efficiency curves. This helped determine the Best Efficiency Point (BEP) of these machines, in turbine mode, and related it to its specifications when used as a pump. The adoption of PATs has the potential to turn economically feasible even hydropower potentials in the "pico" scale (i.e. less than 5 kW of installed capacity), since they only cost a fraction of a conventional hydro turbine. Recent examples of such schemes are two pilot plants built in 2019 in Ireland and Wales. Pumped-storage hydroelectricity In micro Pumped-storage hydroelectricity (PSH), the same pump/PAT could be used for pumping and generating phases by changing rotational direction and speed. The best efficiency point in pumping usually differs from its reverse mode: a variable-frequency drive coupled to the motor/generator would be needed in order to change from pumping to generating mode and to react efficiently to the PSH load fluctuation. Types Among the existing designs of hydraulic pumps/PATs, "centrifugal" or "radial" units are the most used worldwide in a wide variety of application fields. The name is derived from the radial path followed by the fluid in the rotor: from the centre to the periphery when running as a pump and in the opposite direction when flow is reversed. To achieve a higher head drop across the machine, more impellers can be assembled in series to create a multistage unit. Conversely, a double flow radially split pump/PAT design involves a single radial open rotor fed by two symmetric inlets and enable processing a higher flow rate with respect to a standard radial unit. A second type of pump/PAT design is the axial one, in which the fluid interacts with a propeller following a trajectory parallel to the pump axis. Such units are particularly suitable to processing high flow rates with low head difference. 
Finally, mixed flow pumps/PATs stand in between the applicability range of radial and axial units and have an impeller shaped in a similar way as a Francis turbine. Another special pump/PAT design is that of submersible units, which can possibly be fitted inside a pipe connected to draft tube exploiting small head differences in flowing rivers. References Pumps Turbines
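As an illustration of how a pump's best efficiency point (BEP) data are often translated into expected turbine-mode operating conditions, the sketch below applies one widely cited empirical correlation (of the type attributed to Sharma, correcting head and flow by the pump's peak efficiency). The exponents are assumptions of that particular correlation, not values taken from this article, and real selections rely on manufacturer or test data.

```python
def pat_turbine_bep(q_pump_bep: float, h_pump_bep: float, eta_max: float) -> tuple:
    """Estimate turbine-mode BEP flow (m^3/s) and head (m) from pump-mode BEP data.

    Uses an empirical correlation of the Sharma type:
        Q_t = Q_p / eta_max**0.8,   H_t = H_p / eta_max**1.2
    These exponents are assumptions of this sketch, not article data.
    """
    q_turbine = q_pump_bep / eta_max ** 0.8
    h_turbine = h_pump_bep / eta_max ** 1.2
    return q_turbine, h_turbine

# Hypothetical pump: 0.05 m^3/s and 20 m at BEP, 75% peak efficiency.
print(pat_turbine_bep(0.05, 20.0, 0.75))  # roughly (0.063 m^3/s, 28.3 m)
```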
Pump as turbine
Physics,Chemistry
743
67,834,111
https://en.wikipedia.org/wiki/Ameristar%20Charters%20Flight%209363
Ameristar Charters Flight 9363 was a charter flight from Willow Run Airport to Washington Dulles Airport on March 8, 2017, which rejected takeoff and overran the runway. The crash was caused by a jammed elevator, which was damaged by high winds the day before the crash. All 116 passengers and crew survived the crash, with only one minor injury, but the aircraft was damaged beyond repair. The NTSB investigation found that the elevator was damaged while the aircraft was parked, and then was not noticed due to flaws in the aircraft's design and Ameristar's operating procedures. Accident The aircraft had been chartered to transport the Michigan Wolverines men's basketball team to the Big Ten tournament in Washington, D.C. for the following day's game against the Illinois Fighting Illini. Prior to the flight, the aircraft had been parked at Willow Run Airport since it arrived from Lincoln Airport in Lincoln, Nebraska on March 6. Hours before the accident, the air traffic control tower at Willow Run Airport had been evacuated due to high winds. The windstorm affected much of Southeast Michigan, and resulted in power outages for over 800,000 DTE customers. A power outage at Willow Run disabled most of the weather instrumentation in the airport's automated surface observing system (ASOS), and manual weather observations were also unavailable due to the evacuation of the control tower. As a result, the flight crew of Flight 9363 obtained weather information from alternate sources, contacting company operations personnel for a temperature setting, and calling the nearby Detroit Metropolitan Airport on one of the pilots' cell phones to get the current weather information at the latter airport. Lacking information from the ASOS, the crew used windsocks at the airport to determine the predominant wind direction and inform their choice of runway. The flight crew modified their planned takeoff to protect against the danger of wind shear, selecting a higher rotation speed than would otherwise be prescribed. The flight was delayed slightly, due to the communication difficulties caused by the power outages at the airport. Flight 9363 taxied uneventfully to runway 23L, and received its takeoff clearance from Detroit Metropolitan via cell phone due to the lack of ATC services at Willow Run. The check airman acting as pilot in command, 41-year-old Andreas Gruseus, directed the captain, 54-year-old Mark Radloff, to begin the takeoff roll, which began at 14:51:12 EST. The takeoff roll was normal until rotation speed (VR), at indicated airspeed (KIAS). At VR, when the captain pulled back on the control column to rotate the aircraft, the aircraft failed to respond, even after the captain applied additional back force to the control column. Judging the aircraft to be incapable of flight, the captain performed a rejected takeoff, immediately applying maximum braking followed by spoilers and reverse thrust. By the time the captain rejected the takeoff, the aircraft had accelerated to , over above the decision speed (V1), and was moving too fast to stop in the remaining runway distance. The aircraft ran off the end of the runway and across the grassy runway safety area (RSA), before striking the raised pavement of an access road along the airport perimeter. Upon striking the road pavement, the aircraft's landing gear collapsed, and the aircraft slid on its belly over the road and a ditch just beyond, causing substantial damage to the belly and underside of the nose. 
The aircraft came to a stop with its empennage on the road and its nose in a grassy field on the far side of the road and ditch. An orderly, rapid evacuation followed. The aircraft had 8 emergency exits, of which 4 were used. One emergency exit was rendered unusable by a faulty evacuation slide, and another was blocked by a seatbelt stuck in the door. All 110 passengers and 6 crew members survived the crash, with one injury, a passenger who suffered a laceration to the leg. Aircraft The aircraft involved was a McDonnell Douglas MD-83 (DC-9-83), registration N786TW, manufacturer serial number (MSN) 53123, line number 1987. Constructed at Long Beach Airport, it was first delivered to Avianca on 14 April 1992 on lease from GECAS with Irish registration EI-CEQ. Between 2005 and 2006, it was named Ciudad de Leticia. It was painted in the Juan Valdez special livery in December 2007. It was registered in Colombia as HK-4589X on 26 March 2010. It was purchased by Ameristar on 17 December 2010, registered in the United States as N786TW. It was damaged beyond repair in the accident and written off aged 25 years. Investigation Aircraft design The crash occurred after the aircraft failed to rotate upwards, and the investigation focused on the aircraft's elevator system as a cause of the failure. The elevators of the MD-80 series aircraft are controlled indirectly via a system of servo tabs, using a design similar to the MD-80's predecessor, the DC-9. During a normal takeoff in an MD-80 aircraft, the pilot rotates the aircraft off the runway by pulling the control column back (aft), which moves the elevator control tab into a trailing-edge-down (TED) position. The elevator control tab directs airflow around the elevator, and causes lift from forward airflow to move the elevators in the opposite direction of the tab. The elevator is in turn linked to two more servo tabs, including a geared tab that provides mechanical advantage to the pilot's control inputs. During takeoff, the pilot's commands through the control column, via the system of three servo tabs, ultimately moves the elevator into a trailing-edge-up (TEU) position. This affects the pitch angle of the aircraft, and rotates it up and off the runway. As a consequence of this design, the elevators are not able to be moved during a typical preflight inspection, when the aircraft is stationary and there is no airflow over the elevators. A more thorough inspection of the elevators involves moving them by hand, but it requires a scissor lift (or similar equipment) to reach the top of the T-tail in the air, and is not typically performed during a preflight inspection. Another consequence of the elevator system design is that when the aircraft is parked, the elevators move freely with the wind, within limits. The MD-80 is not equipped with a gust lock, which would prevent this motion. The range of motion of the elevator is constrained by stops, which are equipped with shock absorbers for protection. This system is designed to withstand high-speed airflow from straight ahead during flight, but strong forces from other directions can overcome the shock absorbers. If the linkages in the geared tab move too far, they can become "overcentered," jamming the elevator in place. The MD-80 was designed to withstand horizontal wind gusts of up to from any direction while on the ground. 
Postaccident condition of the control systems When the aircraft was inspected on site following the accident, the right elevator was found to be jammed in a full trailing-edge-down (TED) position slightly beyond its normal limit of motion, and could not be moved by hand. The inboard control linkage of the right elevator's geared tab was damaged, being locked in an overcenter position, beyond its normal limit of travel, and with portions of the control linkage bent and displaced outboard. When the damaged linkage was disconnected by investigators, the elevator could be freely moved by hand from stop to stop. The cockpit controls could be moved throughout their full range of motion, and the control tabs were observed to move properly in response to control column inputs. Company policies and maintenance Ameristar's procedures were intended to protect aircraft from damage to flight controls from high winds. Per company policy, aircraft stored outside in winds of over 60 knots had to be parked facing into the wind. If aircraft had been exposed to wind gusts in excess of 65 knots from other than straight ahead while parked, a physical inspection of all flight control surfaces would have been required, including a check confirming that the control surfaces were free to move. Measurement equipment at Willow Run recorded maximum wind gusts below both thresholds. A review of elevator-position data from the aircraft's flight data recorder (FDR) showed that the right elevator moved properly on the morning of March 6, during a maintenance check. By the next time the aircraft was powered up, at 12:38 on the day of the accident, the right elevator was already at the full trailing-edge-down position, and remained there in all elevator-position data recorded during the preparations for the flight to Dulles. In contrast, the left elevator moved several times throughout its full range of motion under the influence of ground winds. During the attempted takeoff, the left elevator followed the captain's commands, but the right elevator remained in the full trailing-edge-down position until partway through the attempted rotation, and then only moved slightly. Prior elevator jam incident (Munich, 1999) Prior to the Flight 9363 accident, the aircraft manufacturer had a record of only one wind-induced elevator jam on any DC-9-series aircraft, which occurred at Munich Airport, Germany, in December 1999, and involved exposure to winds exceeding the elevator system's design limits. In that incident, the airport had been subjected to a severe windstorm while the incident aircraft (another MD-83) was on the ground, with peak winds that exceeded the manufacturer's mandatory inspection limits for the DC-9/MD-80 flight control system, and the flight crew requested an inspection of the aircraft's flight control system. A full inspection of the aircraft's elevators, which would have involved moving the elevators by hand, was not conducted due to personnel-safety concerns in the continuing high winds. Instead, maintenance personnel had the flight crew perform a flight control check by moving the control column throughout its entire range of motion and checking for any abnormal resistance. No abnormalities were detected during this check, and the aircraft was released for flight. The aircraft was unable to rotate off the runway, and the flight crew were forced to reject the takeoff at very high speed. In this instance, the aircraft was safely brought to a stop on the runway. 
The German Federal Bureau of Aircraft Accident Investigation (BFU) found that the Munich aircraft's left elevator was jammed in a full trailing-edge-down position, having been forced into that position by the high winds experienced on the ground. Boeing, as recommended by the BFU, instituted new procedures for DC-9, MD-80, and Boeing 717 operators, requiring inspections of elevator systems after aircraft were exposed to high winds on the ground. The inspection requirement applied above a set wind-speed threshold for aircraft whose nose was not pointed into the wind, and the requirements following exposure to winds below this threshold remained unchanged. Wind field analysis and load testing of elevator system The aircraft in the Ameristar 9363 incident was damaged in the same way as the aircraft in Munich, but it was not subjected to winds nearly as strong. Investigators identified a hangar immediately upwind of the aircraft's parking position as a potential cause of wind conditions that could have affected the aircraft. The investigators performed computational fluid dynamics (CFD) modeling of the wind field downwind of the hangar and around the parked aircraft, using a detailed three-dimensional model of the hangar obtained via drone imagery. The CFD analysis showed that the hangar had a significant impact on the local winds at the parked aircraft. A horizontal gust passing over the hangar was found to produce a 58-knot gust at the aircraft itself. The hangar also introduced significant turbulence, which produced vertical forces. These forces could slam the aircraft's elevators forcefully between their stops, potentially resulting in flight-control damage. To determine whether this theory was possible, the NTSB performed a series of static and dynamic load tests on the accident aircraft's undamaged horizontal stabilizers and left elevator. The tests, conducted at a Boeing laboratory in Huntington Beach, California, simulated the wind conditions calculated by the CFD analysis. The static tests consisted of hanging weights from the elevator while in its trailing-edge-down position, simulating constant wind speeds. Static testing resulted in no damage to the geared tab linkage (the damaged component of the accident aircraft), even at the highest simulated wind speeds. The dynamic load tests simulated turbulence in the wind flow by lifting the elevator and dropping it. The investigators used the same quantity of weight as the static tests, simulating the same horizontal wind speeds with more fluctuation of vertical wind speed. A simulated 60-knot gust applied to the elevator in its full trailing-edge-up position, slamming down to its full TED position, was sufficient to overcenter the geared tab linkage. A simulated 70-knot gust was able to achieve similar effects from the elevator's neutral position. As a final test, with the inboard geared-tab linkage of the test elevator locked in an overcenter position, a TEU force was applied to the elevator using a forklift, simulating the conditions during the takeoff roll. The overcentered links failed and bent outboard, in the same manner as the right elevator's linkage had during the accident takeoff roll. Probable cause The NTSB released their final report in February 2019, which attributed the accident to the jammed right elevator. Pilots' actions The report praised the actions of the flight crew for contributing to the lack of serious injuries or fatalities in the accident. 
In a press release on March 7, NTSB chairman Robert Sumwalt stated "This is the kind of extreme scenario that most pilots never encounter – discovering that their plane won't fly only after they know they won't be able to stop it on the available runway. These two pilots did everything right after things started to go very wrong." Aftermath The morning after the crash, the Wolverines men's basketball team traveled to Washington on the Detroit Pistons team plane. The team arrived at the Verizon Center in Washington at 10:30 AM, in time for their noon game against Illinois. The Wolverines played the game in their practice uniforms, as the team's luggage was still on the crashed plane. The Wolverines won against Illinois, 75–55, and went on to win the Big Ten tournament. Following their Big Ten tournament victory, the Wolverines advanced in the NCAA tournament, reaching the Sweet Sixteen round before losing to Oregon on March 24. Legacy In response to the crash, Boeing developed a modification to the DC-9 elevator system, which would add a second stop to the elevator. This secondary stop would physically prevent the elevator from moving far enough past its limits to allow the geared-tab linkages to become locked in an overcenter configuration. For aircraft with tab-driven elevators not yet equipped with the secondary elevator stop, including DC-9s, MD-80s, and 717s, the maintenance manual was revised to decrease the wind strengths which would necessitate a physical inspection of the elevator system before further flight. The NTSB recommended that Boeing finalize and fully implement these changes, and also develop a means for DC-9 flight crews to detect an elevator jam before attempting to take off. See also 2021 Houston MD-87 crash, an MD-80-series runway excursion that resulted in the total destruction of the aircraft due to flight-control damage similar to Flight 9363 Notes References External links NTSB accident report (summary, PDF) NTSB investigation docket (archive) Accident description at the Aviation Safety Network (archive) Aviation accidents and incidents in 2017 Aviation accidents and incidents in Michigan Airliner accidents and incidents caused by mechanical failure Airliner accidents and incidents involving runway overruns Aviation accidents and incidents involving sports teams Accidents and incidents involving the McDonnell Douglas MD-83 2017 in Michigan Michigan
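The accident sequence above turns on the relationship between the reject speed and the remaining runway: past V1, the kinetic energy to be dissipated grows with the square of speed. The sketch below illustrates that arithmetic; the speeds, deceleration, and remaining runway length are assumed for illustration only and are not figures from the NTSB report.

```python
# Illustrative rejected-takeoff stopping-distance estimate (assumed numbers,
# not NTSB data): distance needed to stop from speed v at constant
# deceleration a is d = v**2 / (2 * a).

KT_TO_MS = 0.5144  # knots -> metres per second

def stopping_distance(speed_kt: float, decel_ms2: float) -> float:
    """Distance (m) needed to stop from speed_kt at constant deceleration."""
    v = speed_kt * KT_TO_MS
    return v * v / (2.0 * decel_ms2)

if __name__ == "__main__":
    decel = 3.0                # m/s^2, assumed average braking deceleration
    remaining_runway = 1200.0  # m of runway left at the reject point (assumed)
    for speed in (139.0, 160.0):  # hypothetical V1 and reject speeds, in knots
        d = stopping_distance(speed, decel)
        verdict = "stops on runway" if d <= remaining_runway else "overruns"
        print(f"{speed:5.0f} kt -> {d:6.0f} m needed ({verdict})")
```

Under these assumed numbers, a reject at V1 stops within the remaining distance while a reject well above V1 does not, which is the trade-off the crew faced.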
Ameristar Charters Flight 9363
Materials_science
3,241
77,413,647
https://en.wikipedia.org/wiki/South%20America%20Galaxy
The South America Galaxy, also known as LEDA 69877 and IRAS 22491-1808, is a merging pair of ultraluminous infrared galaxies located in the constellation Aquarius. It is estimated to be about 1.045 billion light-years from the Milky Way and about 90,000 light-years in diameter. The object is moving away from the Solar System with a calculated radial velocity of approximately 23,300 kilometers per second. The galaxy got its nickname due to its physical resemblance to the continent of South America. The galaxy was selected as ESA/Hubble's Picture of the Week on 10 June 2013. In the complex central region of the galaxy, scientists have been able to distinguish two nuclei, remains of the two different galaxies that are currently colliding. IRAS 22491-1808 is among the most luminous of these types of galaxies, and is considered to be mid-way through its merging stage. According to a study published in 2017, the mass of the hot molecular gas outflow in IRAS 22491-1808 is estimated to be M_H2(hot) ≈ 6–8 × 10³ M⊙. Notably, it also shows a lack of polarization. See also Lists of galaxies External links Image at ESA/HUBBLE References Aquarius (constellation) Galaxy mergers 069877 Luminous infrared galaxies IRAS catalogue objects 069877
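For context, the quoted recession velocity and distance are consistent with a simple Hubble-law estimate, d ≈ v/H0. The short sketch below shows the arithmetic; the value of the Hubble constant is an assumption chosen for illustration, and published distances depend on the adopted cosmology.

```python
# Rough Hubble-law distance estimate from the recession velocity quoted above.
# H0 is an assumed value for illustration; it is not taken from the article.

H0 = 70.0           # km/s per megaparsec (assumed)
MPC_TO_MLY = 3.262  # million light-years per megaparsec

v = 23_300.0                # km/s, radial velocity from the article
d_mpc = v / H0              # distance in megaparsecs
d_mly = d_mpc * MPC_TO_MLY  # distance in millions of light-years

print(f"~{d_mpc:.0f} Mpc, i.e. roughly {d_mly / 1000:.2f} billion light-years")
# Around 330 Mpc, or about 1.1 billion light-years, the same order of
# magnitude as the distance quoted in the article.
```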
South America Galaxy
Astronomy
283
54,834,563
https://en.wikipedia.org/wiki/ADA%20%28buffer%29
ADA is a zwitterionic organic chemical buffering agent; one of Good's buffers. It has a buffering range of pH 6.0–7.2, within the physiological range, making it useful for cell culture work. It has a pKa of 6.6 with a ΔpKa/°C of −0.011 and is most often prepared in 1 M NaOH, in which it has a solubility of 160 mg/mL. ADA has been used in protein-free media for chicken embryo fibroblasts, as a chelating agent for H+, Ca2+, and Mg2+, and for isoelectric focusing in immobilized pH gradients. Its effects on dog kidney Na+/K+-ATPase and rat brain GABA receptors have also been studied. ADA does, however, alter coloring in bicinchoninic acid assays. References Zwitterions Amines Dicarboxylic acids Buffer solutions
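As a worked illustration of how the pKa and temperature coefficient quoted above are used, the sketch below applies the Henderson–Hasselbalch equation with a linear temperature correction to the pKa. The base/acid ratio and working temperature are assumed values chosen only for illustration.

```python
# Henderson-Hasselbalch estimate for an ADA buffer, using the pKa (6.6) and
# temperature coefficient (-0.011 pKa units per degC) given above.  The ratio
# of conjugate base to acid and the working temperature are assumptions.
import math

PKA_25 = 6.6            # pKa at 25 degC, from the article
DPKA_PER_DEGC = -0.011  # temperature coefficient, from the article

def buffer_ph(base_conc: float, acid_conc: float, temp_c: float = 25.0) -> float:
    """pH = pKa(T) + log10([base]/[acid]), with a linear pKa(T) correction."""
    pka_t = PKA_25 + DPKA_PER_DEGC * (temp_c - 25.0)
    return pka_t + math.log10(base_conc / acid_conc)

if __name__ == "__main__":
    # e.g. 60 mM base form and 40 mM acid form at 37 degC (assumed values)
    print(f"pH ~= {buffer_ph(0.060, 0.040, 37.0):.2f}")
```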
ADA (buffer)
Physics,Chemistry
201
403,676
https://en.wikipedia.org/wiki/Gesture
A gesture is a form of non-verbal communication or non-vocal communication in which visible bodily actions communicate particular messages, either in place of, or in conjunction with, speech. Gestures include movement of the hands, face, or other parts of the body. Gestures differ from physical non-verbal communication that does not communicate specific messages, such as purely expressive displays, proxemics, or displays of joint attention. Gestures allow individuals to communicate a variety of feelings and thoughts, from contempt and hostility to approval and affection, often together with body language in addition to words when they speak. Gesticulation and speech work independently of each other, but join to provide emphasis and meaning. Gesture processing takes place in areas of the brain such as Broca's and Wernicke's areas, which are used by speech and sign language. In fact, language is thought by some scholars to have evolved in Homo sapiens from an earlier system consisting of manual gestures. The theory that language evolved from manual gestures, termed Gestural Theory, dates back to the work of 18th-century philosopher and priest Abbé de Condillac, and has been revived by contemporary anthropologist Gordon W. Hewes, in 1973, as part of a discussion on the origin of language. Research throughout the ages Gestures have been studied through the ages by a range of scholars and philosophers. Marcus Fabius Quintilianus was a Roman rhetorician who, in his Institutio Oratoria, studied how gesture may be used in rhetorical discourse. One of his greatest works, and a foundation for the study of communication, was the Institutio Oratoria, in which he explains his observations on the nature of different kinds of oratory. In 1644, John Bulwer, an English physician and early Baconian natural philosopher, wrote five works exploring human communication as it pertains to gesture. Bulwer analyzed dozens of gestures and provided a guide in his book Chirologia, which focused on hand gestures. In the 19th century, Andrea De Jorio, an Italian antiquarian and an early researcher of body language, published an extensive account of gestural expression. Andrew N. Meltzoff, an American psychologist internationally renowned for his work on infant and child development, conducted a study in 1977 on the imitation of facial and manual gestures by newborns. The study concluded that "infants between 12 and 21 days of age can imitate the facial and manual gestures of parents". In 1992, David McNeill, a professor of linguistics and psychology at the University of Chicago, wrote a book based on his ten years of research and concluded that "gestures do not simply form a part of what is said, but have an impact on thought itself." Meltzoff argues that gestures directly transfer thoughts into visible forms, showing that ideas cannot always be expressed through language alone. A peer-reviewed journal, Gesture, has been published since 2001; it was founded by Adam Kendon and Cornelia Müller. The International Society for Gesture Studies (ISGS) was founded in 2002. Gesture has frequently been taken up by researchers in the fields of dance studies and performance studies in ways that emphasize how gestures are culturally and contextually inflected. Performance scholar Carrie Noland describes gestures as "learned techniques of the body" and stresses the way gestures are embodied corporeal forms of cultural communication. 
But rather than just residing within one cultural context, she describes how gestures migrate across bodies and locations to create new cultural meanings and associations. She also posits how they might function as a form of "resistance to homogenization" because they are so dependent on the specification of the bodies that perform them. Gesture has also been taken up within queer theory, ethnic studies and their intersections in performance studies, as a way to think about how the moving body gains social meaning. José Esteban Muñoz uses the idea of gesture to mark a kind of refusal of finitude and certainty and links gesture to his ideas of ephemera. Muñoz specifically draws on the African-American dancer and drag queen performer Kevin Aviance to articulate his interest not in what queer gestures might mean, but in what they might perform. Juana María Rodríguez borrows ideas from phenomenology and draws on Noland and Muñoz to investigate how gesture functions in queer sexual practices as a way to rewrite gender and negotiate power relations. She also connects gesture to Giorgio Agamben's idea of "means without ends" to think about political projects of social justice that are incomplete, partial, and legible within culturally and socially defined spheres of meaning. Within the field of linguistics, the most hotly contested aspect of gesture revolves around the subcategory of Lexical or Iconic Co-Speech Gestures. Adam Kendon was the first to hypothesize on their purpose when he argued that Lexical gestures do work to amplify or modulate the lexico-semantic content of the verbal speech with which they co-occur. However, since the late 1990s, most research has revolved around the contrasting hypothesis that Lexical gestures serve a primarily cognitive purpose in aiding the process of speech production. As of 2012, there is research to suggest that Lexical Gesture does indeed serve a primarily communicative purpose, with a cognitive purpose only secondary, but in the realm of socio-pragmatic communication rather than lexico-semantic modification. Typology (categories) Humans have the ability to communicate through language, but they can also express themselves through gestures. In particular, gestures can be transmitted through movements of body parts and through facial and bodily expressions. Researchers Goldin-Meadow and Brentari conducted research in 2015 and concluded that communicating through sign language is no different from spoken language. Communicative vs. informative The first way to distinguish between categories of gesture is to differentiate between communicative gesture and informative gesture. While most gestures can be defined as possibly happening during the course of spoken utterances, the informative-communicative dichotomy focuses on intentionality of meaning and communication in co-speech gesture. Informative (passive) Informative gestures are passive gestures that provide information about the speaker as a person and not about what the speaker is trying to communicate. Some movements are not purely considered gestures; however, a person may perform these "adapters" in ways such as scratching, adjusting clothing, and tapping. These movements can occur during speech, but they may also occur independently of communication, as they are not a part of active communication. While informative gestures may communicate information about the person speaking (e.g. itchy, uncomfortable, etc.), this communication is not engaged with any language being produced by the person gesturing. 
Communicative (active) Communicative gestures are gestures that are produced intentionally and meaningfully by a person as a way of intensifying or modifying speech produced in the vocal tract (or with the hands in the case of sign languages), even though a speaker may not be actively aware that they are producing communicative gestures. For instance, on the U.S. Army recruitment poster of Uncle Sam, he is pointing at the viewer, a non-verbal gesture implying that he wants the viewer to join the U.S. Army. This is a form of symbolic gesture, usually used in the absence of speech. Body language relating to gestures Body language is a form of nonverbal communication that uses visual cues to transmit messages without speaking. Gestures are movements made with the body: the arms, the hands, the face, and so on. Barbara Pease and Allan Pease, authors of The Definitive Book of Body Language, concluded that everyone does the shoulder shrug, a gesture signifying that the person does not comprehend what they are supposed to understand. They also note that showing the palms of both hands signals that a person is not hiding anything, and that raising the eyebrows indicates a greeting. Finger gestures are commonly used in a variety of ways, from pointing at something one wants to show another person, to giving a thumbs up to show that everything is good. Some gestures are near universals, i.e., found all over the world with only some exceptions. An example is the head shake to signify "no". Also, in most cultures nodding the head signifies "yes", which The Definitive Book of Body Language describes as a submissive gesture representing that the conversation is going in the direction of the person speaking. The book explains that people who are born deaf can show a form of this submissive gesture to signify "yes". Manual vs. non-manual communicative gestures Within the realm of communicative gestures, the first distinction to be made is between gestures made with the hands and arms, and gestures made with other parts of the body. Examples of non-manual gestures include head nodding and shaking, shoulder shrugging, and facial expression, among others. Non-manual gestures are attested in languages all around the world, but have not been the primary focus of most research regarding co-speech gesture. Manual gestures Manual gestures are a form of communication in which bodily actions of the hands and arms communicate particular messages. They are most commonly broken down into four distinct categories: Symbolic (Emblematic), Deictic (Indexical), Motor (Beat), and Lexical (Iconic). Manual gesture in the sense of communicative co-speech gesture does not include the gesture-signs of sign languages, even though sign language is communicative and primarily produced using the hands, because the gestures in sign language are not used to intensify or modify the speech produced by the vocal tract; rather, they communicate fully productive language through a method alternative to the vocal tract. Symbolic (emblematic) The most familiar are the so-called emblems or quotable gestures. These are conventional, culture-specific gestures that can be used as replacement for words, such as the handwave used in the US for "hello" and "goodbye". A single emblematic gesture can have a very different significance in different cultural contexts, ranging from complimentary to highly offensive. The page List of gestures discusses emblematic gestures made with one hand, two hands, hand and other body parts, and body and facial gestures. 
Symbolic gestures can occur either concurrently or independently of vocal speech. Symbolic gestures are iconic gestures that are widely recognized, fixed, and have conventionalized meanings. Deictic (indexical) Deictic gestures can occur simultaneously with vocal speech or in place of it. Deictic gestures are gestures that consist of indicative or pointing motions. These gestures often work in the same way as demonstrative words and pronouns like "this" or "that". Deictic gestures can refer to concrete or intangible objects or people. Motor (beat) Motor or beat gestures usually consist of short, repetitive, rhythmic movements that are closely tied with prosody in verbal speech. Unlike symbolic and deictic gestures, beat gestures cannot occur independently of verbal speech and convey no semantic information. For example, some people wave their hands as they speak to emphasize a certain word or phrase. These gestures are closely coordinated with speech. The so-called beat gestures are used in conjunction with speech and keep time with the rhythm of speech to emphasize certain words or phrases. These types of gestures are integrally connected to speech and thought processes. Lexical (iconic) Other spontaneous gestures used during speech production known as iconic gestures are more full of content, and may echo, or elaborate, the meaning of the co-occurring speech. They depict aspects of spatial images, actions, people, or objects. For example, a gesture that depicts the act of throwing may be synchronous with the utterance, "He threw the ball right into the window." Such gestures that are used along with speech tend to be universal. For example, one describing that they are feeling cold due to a lack of proper clothing and/or a cold weather can accompany their verbal description with a visual one. This can be achieved through various gestures such as by demonstrating a shiver and/or by rubbing the hands together. In such cases, the language or verbal description of the person does not necessarily need to be understood as someone could at least take a hint at what's being communicated through the observation and interpretation of body language which serves as a gesture equivalent in meaning to what's being said through communicative speech. The elaboration of lexical gestures falls on a spectrum of iconic-metaphorical in how closely tied they are to the lexico-semantic content of the verbal speech they coordinate with. More iconic gesture very obviously mirrors the words being spoken (such as drawing a jagged horizontal line in the air to describe mountains) whereas more metaphorical gestures clearly contain some spatial relation to the semantic content of the co-occurring verbal speech, but the relationship between the gesture and the speech might be more ambiguous. Lexical gestures, like motor gestures, cannot occur independently of verbal speech. The purpose of lexical gestures is still widely contested in the literature with some linguists arguing that lexical gestures serve to amplify or modulate the semantic content of lexical speech, or that it serves a cognitive purpose in aiding in lexical access and retrieval or verbal working memory. Most recent research suggests that lexical gestures serve a primarily socio-pragmatic role. Language development Studies affirm a strong link between gesture typology and language development. Young children under the age of two seem to rely on pointing gestures to refer to objects that they do not know the names of. 
Once the words are learned, children eschew those referential (pointing) gestures. One would think that the use of gesture would decrease as the child develops spoken language, but results reveal that gesture frequency increased as speaking frequency increased with age. There is, however, a change in gesture typology at different ages, suggesting a connection between gestures and language development. Children most often use pointing and adults rely more on iconic and beat gestures. As children begin producing sentence-like utterances, they also begin producing new kinds of gestures that adults use when speaking (iconics and beats). Evidence of this systematic organization of gesture is indicative of its association with language development. Sign languages Gestural languages such as American Sign Language operate as complete natural languages that are gestural in modality. They should not be confused with finger spelling, in which a set of emblematic gestures are used to represent a written alphabet. Sign languages differ from gesturing in that concepts are modeled by particular hand motions or expressions within a specific, established structure, while gesturing is more malleable, has no specific structure, and instead supplements speech. Before an established sign language was created in Nicaragua after the 1970s, deaf Nicaraguans would use "home signs" in order to communicate with others. These home signs were not part of a unified language but were still used as familiar motions and expressions within their families, and remained more closely related to language than to gestures with no specific structure. Home signs are similar to the gestural actions of chimpanzees. Gestures are used by these animals in place of verbal language, which is restricted in animals due to their lacking certain physiological and articulation abilities that humans have for speech. Corballis (2010) asserts that "our hominid ancestors were better pre-adapted to acquire language-like competence using manual gestures than using vocal sounds." This leads to a debate about whether humans, too, looked to gestures first as their modality of language in the early existence of the species. The function of gestures may have been a significant player in the evolution of language. Social significance Gesturing is probably universal; there has been no report of a community that does not gesture. Gestures are a crucial part of everyday conversation such as chatting, describing a route, or negotiating prices at a market; they are ubiquitous. Gestures are learned embodied cultural practices that can function as a way to interpret ethnic, gender, and sexual identity. Gestures, commonly referred to as "body language", play an important role in industry. Proper body language etiquette in business dealings can be crucial for success. However, gestures can have different meanings according to the country in which they are expressed. In an age of global business, diplomatic cultural sensitivity has become a necessity. Gestures that we take as innocent may be seen by someone else as deeply insulting. The following gestures are examples of proper etiquette with respect to different countries' customs on salutations: In the United States, "a firm handshake, accompanied by direct eye contact, is the standard greeting. Direct eye contact in both social and business situations is very important." In the People's Republic of China, "the Western custom of shaking a person's hand upon introduction has become widespread throughout the country. 
However, oftentimes a nod of the head or a slight bow will suffice." In Japan, "the act of presenting business cards is very important. When presenting, one holds the business card with both hands, grasping it between the thumbs and forefingers. The presentation is to be accompanied by a slight bow. The print on the card should point towards the person to which one is giving the card." In Germany, "it is impolite to shake someone's hand with your other hand in your pocket. This is seen as a sign of disrespect". In France, "a light, quick handshake is common. To offer a strong, pumping handshake would be considered uncultured. When one enters a room, be sure to greet each person present. A woman in France will offer her hand first." Gestures are also a means to initiate a mating ritual. This may include elaborate dances and other movements. Gestures play a major role in many aspects of human life. Additionally, when people use gestures, there is a certain shared background knowledge. Different cultures use similar gestures when talking about a specific action such as how we gesture the idea of drinking out of a cup. When an individual makes a gesture, another person can understand because of recognition of the actions/shapes. Gestures have been documented in the arts such as in Greek vase paintings, Indian Miniatures or European paintings. In religion Gestures play a central role in religious or spiritual rituals. In Hinduism and Buddhism, a mudra (Sanskrit, literally "seal", "gesture" or "attitude") is a symbolic gesture made with the hand, body or mind. Each mudra has a specific meaning, and is associated with a specific spiritual quality or state. In Yoga Mudras are considered to be higher practices which lead to awakening of the pranas, chakras and kundalini, and which can bestow major siddhis, psychic powers, on the advanced practitioner In Hindu and Buddhist iconography mudras play a central role. For example, Vitarka Vicara, the gesture of discussion and transmission of Buddhist teaching, is done by joining the tips of the thumb and the index together, while keeping the other fingers straight. A common Christian religious gesture is crossing oneself as a sign of respect, also known as doing the sign of the cross, often accompanied by kneeling before a sacred object. Neurology Gestures are processed in the same areas of the brain as speech and sign language such as the left inferior frontal gyrus (Broca's area) and the posterior middle temporal gyrus, posterior superior temporal sulcus and superior temporal gyrus (Wernicke's area). It has been suggested that these parts of the brain originally supported the pairing of gesture and meaning and then were adapted in human evolution "for the comparable pairing of sound and meaning as voluntary control over the vocal apparatus was established and spoken language evolved". As a result, it underlies both symbolic gesture and spoken language in the present human brain. Their common neurological basis also supports the idea that symbolic gesture and spoken language are two parts of a single fundamental semiotic system that underlies human discourse. The linkage of hand and body gestures in conjunction with speech is further revealed by the nature of gesture use in blind individuals during conversation. This phenomenon uncovers a function of gesture that goes beyond portraying communicative content of language and extends David McNeill's view of the gesture-speech system. 
This suggests that gesture and speech work tightly together, and a disruption of one (speech or gesture) will cause a problem in the other. Studies have found strong evidence that speech and gesture are innately linked in the brain and work in an efficiently wired and choreographed system. McNeill's view of this linkage in the brain is just one of three currently up for debate; the others declaring gesture to be a "support system" of spoken language or a physical mechanism for lexical retrieval. Because of this connection of co-speech gestures—a form of manual action—in language in the brain, Roel Willems and Peter Hagoort conclude that both gestures and language contribute to the understanding and decoding of a speaker's encoded message. Willems and Hagoort's research suggest that "processing evoked by gestures is qualitatively similar to that of words at the level of semantic processing." This conclusion is supported through findings from experiments by Skipper where the use of gestures led to "a division of labor between areas related to language or action (Broca's area and premotor/primary motor cortex respectively)", The use of gestures in combination with speech allowed the brain to decrease the need for "semantic control". Because gestures aided in understanding the relayed message, there was not as great a need for semantic selection or control that would otherwise be required of the listener through Broca's area. Gestures are a way to represent the thoughts of an individual, which are prompted in working memory. The results of an experiment revealed that adults have increased accuracy when they used pointing gestures as opposed to simply counting in their heads (without the use of pointing gestures) Furthermore, the results of a study conducted by Marstaller and Burianová suggest that the use of gestures affect working memory. The researchers found that those with low capacity of working memory who were able to use gestures actually recalled more terms than those with low capacity who were not able to use gestures. Although there is an obvious connection in the aid of gestures in understanding a message, "the understanding of gestures is not the same as understanding spoken language." These two functions work together and gestures help facilitate understanding, but they only "partly drive the neural language system". Electronic interface The movement of gestures can be used to interact with technology like computers, using touch or multi-touch popularised by the iPhone, physical movement detection and visual motion capture, used in video game consoles. It can be recorded using kinematic methodology. Kendon's continuum In order to better understand the linguistic values that gestures hold, Adam Kendon, a pioneer in gesture research has proposed to look at it as a continuum from less linguistic to fully linguistic. Using the continuum, speech declines as "the language-like properties of gestural behaviors increase and idiosyncratic gestures are replaced by socially regulated signs". Gestures of different kinds fall within this continuum and include spontaneous gesticulations, language-like gestures, pantomime, emblems, and sign language. Spontaneous gesticulations are not evident without the presence of speech, assisting in the process of vocalization, whereas language-like gestures are "iconic and metaphoric, but lack consistency and are context-dependent". "Language-like gesture" implies that the gesture is assuming something linguistic (Loncke, 2013). 
Pantomime falls in the middle of the continuum and requires shared conventions. This kind of gesture helps convey information or describe an event. Following pantomime are emblems, which have specific meanings to denote "feelings, obscenities, and insults" and are not required to be used in conjunction with speech. The most linguistic gesture on Kendon's continuum is sign language, where "single manual signs have specific meanings and are combined with other manual signs according to specific rules". Philosophy Giorgio Agamben, in the book Karman, says gesture is a pure means without purpose, as an intermediate form between the doing of praxis and that of poiesis. In an opposite spirit, Giovanni Maddalena introduced The philosophy of gesture where gesture is defined as any performed act with a beginning and an end that carries on a meaning (from the latin gero = to bear, to carry on). According to this philosophy, gesture is our normal procedure to embody vague ideas in singular actions with a general meaning. Gesture is forged by a dense blending of icons, indices, and symbols and by a complexity of phenomenological characteristics, such as feelings, actual actions, general concepts, and habits (firstness, secondness, and thirdness in Charles S. Peirce's phenomenology). See also Chironomia Growth point Haptic communication Kinesics List of gestures Musical gesture Posture (psychology) Rock, Paper, Scissors Sign language Taunt Orans Salute Enactment effect References Further reading Bulwer, J (1644). Chirologia: or the Natural Language of the Hand. Goldin-Meadow, S (2003). Hearing gesture: How our hands help us think. Cambridge, Massachusetts: Harvard University Press. . Hoste, L. & Signer, B. (2014) "Criteria, Challenges and Opportunities for Gesture Programming Languages" In Proceedings of 1st International Workshop on Engineering Gestures for Multimodal Interfaces (EGMI 2014). Rome, Italy. Kendon, A (2004). Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press. . Kita, S (2003). Pointing: Where Language, Culture and Cognition Meet. Lawrence Erlbaum Associates. . Lippit, Akira Mizuta (2008). "Digesture: Gesture and Inscription in Experimental Cinema." Migration of Gesture. Ed. Carrie Noland and Sally Ann Ness. Minneapolis: University of Minnesota Press. Maddalena, Giovanni (2015). The Philosophy of Gesture, Montreal: McGill–Queen’s University Press. McNeill, D (2005). Gesture and Thought. Chicago: University of Chicago Press. . Muñoz, Jose Esteban (2001). "Gesture, Ephemera and Queer Feeling: Approaching Kevin Aviance." Dancing Desires: Choreographing Sexualities on and off Stage. Ed. Jane Desmond. Madison, WI: University of Wisconsin Press, 423–442. Muñoz, José Esteban (2009). Cruising Utopia: The Then and There of Queer Futurity. New York: New York University Press. Noland, Carrie (2009). Agency and Embodiment: Performing Gestures/Producing Culture. Cambridge, Massachusetts: Harvard University Press. Noland, Carrie, and Sally Ann Ness, editors (2008). Migration of Gesture. Minneapolis: University of Minnesota Press. Rodríguez, Juana María (2007). "Gesture and Utterance Fragments from a Butch-Femme Archive." A Companion to Lesbian, Gay, Bisexual, Transgender, and Queer Studies. Ed. George E. Haggerty and Molly McGarry. Blackwell Publishing Ltd, 2007. 282–291. Rodríguez, Juana María (2014). Sexual Futures, Queer Gestures, and Other Latina Longings. New York: NYU Press. External links International Society for Gesture Studies devoted to the study of human gesture
Gesture
Biology
5,565
50,311,265
https://en.wikipedia.org/wiki/Ionic%20liquids%20in%20carbon%20capture
The use of ionic liquids in carbon capture is a potential application of ionic liquids as absorbents for use in carbon capture and sequestration. Ionic liquids, which are salts that exist as liquids near room temperature, are polar, nonvolatile materials that have been considered for many applications. The urgency of climate change has spurred research into their use in energy-related applications such as carbon capture and storage. Carbon capture using absorption Ionic liquids as solvents Amines are the most prevalent absorbent in postcombustion carbon capture technology today. In particular, monoethanolamine (MEA) has been used at industrial scales in postcombustion carbon capture, as well as in other CO2 separations, such as "sweetening" of natural gas. However, amines are corrosive, degrade over time, and require large industrial facilities. Ionic liquids, on the other hand, have very low vapor pressures. This property results from their strong Coulombic attractive forces. Vapor pressure remains low up to the substance's thermal decomposition point (typically >300 °C). In principle, this low vapor pressure simplifies their use and makes them "green" alternatives. Additionally, it reduces the risk of contamination of the CO2 gas stream and of leakage into the environment. The solubility of CO2 in ionic liquids is governed primarily by the anion, less so by the cation. The hexafluorophosphate (PF6–) and tetrafluoroborate (BF4–) anions have been shown to be especially amenable to CO2 capture. Ionic liquids have been considered as solvents in a variety of liquid-liquid extraction processes, but never commercialized. Besides that, ionic liquids have replaced conventional volatile solvents in industrial processes such as gas absorption and extractive distillation. Additionally, ionic liquids are used as co-solutes for the generation of aqueous biphasic systems, or for purification of biomolecules. Process A typical CO2 absorption process consists of a feed gas, an absorption column, a stripper column, and output streams of CO2-rich gas to be sequestered and CO2-poor gas to be released to the atmosphere. Ionic liquids could follow a similar process to amine gas treating, where the CO2 is regenerated in the stripper using higher temperature. However, ionic liquids can also be stripped using pressure swings or inert gases, reducing the process energy requirement. A current issue with ionic liquids for carbon capture is that they have a lower working capacity than amines. Task-specific ionic liquids (TSILs) that employ both chemisorption and physisorption are being developed in an attempt to increase the working capacity. 1-Butyl-3-propylamineimidazolium tetrafluoroborate is one example of a TSIL. Research In 2023, a research team composed of Chuo University, Nihon University, Kanazawa University, and the Research Institute of Innovative Technology for the Earth utilized electronic state informatics to design and synthesize ionic liquids. Subsequently, they conducted precise measurements of CO2 solubility and successfully developed ionic liquids with the highest physical absorption capacity for CO2 to date. Drawbacks Selectivity In carbon capture an effective absorbent is one which demonstrates a high selectivity, meaning that CO2 will preferentially dissolve in the absorbent compared to other gaseous components. In post-combustion carbon capture the most salient separation is CO2 from N2, whereas in pre-combustion capture CO2 is primarily separated from H2. 
Other components and impurities may be present in the flue gas, such as hydrocarbons, SO2, or H2S. Before selecting the appropriate solvent to use for carbon capture, it is critical to ensure that, at the given process conditions and flue gas composition, CO2 maintains a much higher solubility in the solvent than the other species in the flue gas and thus has a high selectivity. The selectivity of CO2 in ionic liquids has been widely studied by researchers. Generally, polar molecules and molecules with an electric quadrupole moment are highly soluble in ionic liquids. It has been found that at high process temperatures the solubility of CO2 decreases, while the solubility of other species, such as CH4 and H2, may increase with increasing temperature, thereby reducing the effectiveness of the solvent. However, the solubility of N2 in ionic liquids is relatively low and does not increase with increasing temperature, so the use of ionic liquids in post-combustion carbon capture may be appropriate due to the consistently high CO2/N2 selectivity. The presence of common flue gas impurities such as H2S severely inhibits CO2 solubility in ionic liquids and should be carefully considered by engineers when choosing an appropriate solvent for a particular flue gas. Viscosity A primary concern with the use of ionic liquids for carbon capture is their high viscosity compared with that of commercial solvents. Ionic liquids which employ chemisorption depend on a chemical reaction between solute and solvent for CO2 separation. The rate of this reaction is dependent on the diffusivity of CO2 in the solvent and is thus inversely proportional to viscosity. The self-diffusivity of CO2 in ionic liquids is generally on the order of 10⁻¹⁰ m²/s, approximately an order of magnitude less than in similarly performing commercial solvents used for CO2 capture. The viscosity of an ionic liquid can vary significantly according to the type of anion and cation, the alkyl chain length, and the amount of water or other impurities in the solvent. Because these solvents can be "designed" and these properties chosen, developing ionic liquids with lowered viscosities is a current topic of research. Supported ionic liquid phases (SILPs) are one proposed solution to this problem. Tunability As required for all separation techniques, ionic liquids exhibit selectivity towards one or more of the phases of a mixture. 1-Butyl-3-methylimidazolium hexafluorophosphate (BMIM-PF6) is a room-temperature ionic liquid that was identified early on as a viable substitute for volatile organic solvents in liquid-liquid separations. Other [PF6]- and [BF4]- containing ionic liquids have been studied for their CO2 absorption properties, as have 1-ethyl-3-methylimidazolium (EMIM) and unconventional cations like trihexyl(tetradecyl)phosphonium ([P66614]). Selection of different anion and cation combinations in ionic liquids affects their selectivity and physical properties. Additionally, the organic cations in ionic liquids can be "tuned" by changing chain lengths or by substituting radicals. Finally, ionic liquids can be mixed with other ionic liquids, water, or amines to achieve different properties in terms of absorption capacity and heat of absorption. This tunability has led some to call ionic liquids "designer solvents." 1-Butyl-3-propylamineimidazolium tetrafluoroborate was specifically developed for CO2 capture; it is designed to employ chemisorption to absorb CO2 and maintain efficiency under repeated absorption/regeneration cycles. 
Other ionic liquids have been simulated or experimentally tested for potential use as CO2 absorbents. Proposed industrial applications Currently, CO2 capture uses mostly amine-based absorption technologies, which are energy intensive and solvent intensive. Volatile organic compounds alone in chemical processes represent a multibillion-dollar industry. Therefore, ionic liquids offer an alternative that proves attractive should their other deficiencies be addressed. During the capture process, the anion and cation play a crucial role in the dissolution of CO2. Spectroscopic results suggest a favorable interaction between the anion and CO2, wherein CO2 molecules preferentially attach to the anion. Furthermore, intermolecular forces, such as hydrogen bonds, van der Waals bonds, and electrostatic attraction, contribute to the solubility of CO2 in ionic liquids. This makes ionic liquids promising candidates for CO2 capture because the solubility of CO2 can be modeled accurately by regular solution theory (RST), which reduces operational costs by avoiding the development of more sophisticated models to monitor the capture process. References Further reading Carbon capture and storage Ions Ionic liquids
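To make the notion of selectivity discussed above concrete, the sketch below estimates an ideal CO2/N2 selectivity from Henry's law constants at low loading. The constants used are illustrative placeholders, not measured values for any particular ionic liquid.

```python
# Ideal-solubility selectivity estimate from Henry's law constants.
# At low loading the dissolved mole fraction of gas i is x_i ~= p_i / H_i,
# so the ideal CO2-over-N2 selectivity is S = H_N2 / H_CO2.  Both Henry's
# law constants below are assumed, illustrative values.

def ideal_selectivity(h_co2_bar: float, h_n2_bar: float) -> float:
    """Ideal CO2/N2 selectivity from Henry's law constants (in bar)."""
    return h_n2_bar / h_co2_bar

if __name__ == "__main__":
    h_co2 = 40.0    # bar, assumed Henry's constant for CO2 in the solvent
    h_n2 = 1600.0   # bar, assumed Henry's constant for N2 in the solvent
    print(f"Ideal CO2/N2 selectivity ~= {ideal_selectivity(h_co2, h_n2):.0f}")
```

A large ratio of the two constants corresponds to the "consistently high CO2/N2 selectivity" that makes these solvents attractive for post-combustion capture.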
Ionic liquids in carbon capture
Physics,Chemistry,Engineering
1,737
58,881,336
https://en.wikipedia.org/wiki/CICE%20%28sea%20ice%20model%29
CICE is a computer model that simulates the growth, melt and movement of sea ice. It has been integrated into many coupled climate system models as well as global ocean and weather forecasting models and is often used as a tool in Arctic and Southern Ocean research. CICE development began in the mid-1990s at the United States Department of Energy (DOE), and it is currently maintained and developed by a group of institutions in North America and Europe known as the CICE Consortium. Its widespread use in Earth system science owes in part to the importance of sea ice in determining Earth's planetary albedo, the strength of the global thermohaline circulation in the world's oceans, and in providing surface boundary conditions for atmospheric circulation models, since sea ice occupies a significant proportion (4-6%) of Earth's surface. CICE is a type of cryospheric model. Development Development of CICE was begun in 1994 by Elizabeth Hunke at Los Alamos National Laboratory (LANL). Since its initial release in 1998 following development of the Elastic-Viscous-Plastic (EVP) sea ice rheology within the model, it has been substantially developed by an international community of model users and developers. Enthalpy-conserving thermodynamics and improvements to the sea ice thickness distribution were added to the model between 1998 and 2005. The first institutional user outside of LANL was the Naval Postgraduate School in the late 1990s, where it was subsequently incorporated into the Regional Arctic System Model (RASM) in 2011. The National Center for Atmospheric Research (NCAR) was the first to incorporate CICE into a global climate model in 2002, and developers of the NCAR Community Earth System Model (CESM) have continued to contribute to CICE innovations and have used it to investigate polar variability in Earth's climate system. The United States Navy began using CICE shortly after 2000 for polar research and sea ice forecasting and it continues to do so today. Since 2000, CICE development or coupling to oceanic and atmospheric models for weather and climate prediction has occurred at the University of Reading, University College London, the U.K. Met Office Hadley Centre, Environment and Climate Change Canada, the Danish Meteorological Institute, the Commonwealth Scientific and Industrial Research Organisation, and Beijing Normal University, among other institutions. As a result of model development in the global community of CICE users, the model's computer code now includes a comprehensive saline ice physics and biogeochemistry library that incorporates mushy-layer thermodynamics, anisotropic continuum mechanics, Delta-Eddington radiative transfer, melt-pond physics and land-fast ice. CICE version 6 is open-source software and was released in 2018 on GitHub. Keystone Equations There are two main physics equations solved using numerical methods in CICE that underpin the model's predictions of sea ice thickness, concentration and velocity, as well as predictions made with many equations not shown here giving, for example, surface albedo, ice salinity, snow cover, divergence, and biogeochemical cycles. 
The first keystone equation is Newton's second law for sea ice:

m \frac{\partial \mathbf{u}}{\partial t} = -m f\,\hat{\mathbf{k}} \times \mathbf{u} + \boldsymbol{\tau}_a + \boldsymbol{\tau}_w - m g \nabla H + \nabla \cdot \boldsymbol{\sigma}

where m is the mass per unit area of saline ice on the sea surface, \mathbf{u} is the drift velocity of the ice, f is the Coriolis parameter, \hat{\mathbf{k}} is the upward unit vector normal to the sea surface, \boldsymbol{\tau}_a and \boldsymbol{\tau}_w are the wind and water stress on the ice, respectively, g is acceleration due to gravity, H is sea surface height and \boldsymbol{\sigma} is the two-dimensional internal ice stress tensor within the ice. Each of the terms requires information about the ice thickness, roughness, and concentration, as well as the state of the atmospheric and oceanic boundary layers. Ice mass per unit area is determined using the second keystone equation in CICE, which describes the evolution of the sea ice thickness distribution g(h) over ice thickness h across the area for which the sea ice velocity above is calculated:

\frac{\partial g}{\partial t} = -\frac{\partial (f g)}{\partial h} + \psi - \nabla \cdot (g\,\mathbf{u})

where -\partial(f g)/\partial h is the change in the thickness distribution due to thermodynamic growth and melt (here f denotes the thermodynamic growth rate dh/dt rather than the Coriolis parameter), \psi is the redistribution function due to sea ice mechanics and is associated with the internal ice stress \boldsymbol{\sigma}, and -\nabla \cdot (g\,\mathbf{u}) describes advection of sea ice, the transport that is followed explicitly when the equation is written in a Lagrangian reference frame. From this, ice mass is given by:

m = \rho_i \int_0^\infty h\, g(h)\, dh

for density \rho_i of sea ice. Code Design CICE version 6 is coded in FORTRAN90. It is organized into a dynamical core (dycore) and a separate column physics package called Icepack, which is maintained as a CICE submodule on GitHub. The momentum equation and thickness advection described above are time-stepped on a quadrilateral Arakawa B-grid within the dynamical core, while Icepack solves diagnostic and prognostic equations necessary for calculating radiation physics, hydrology, thermodynamics, and vertical biogeochemistry, including terms necessary to calculate quantities such as the thermodynamic growth rate and the redistribution function defined above. CICE can be run independently, as in the first figure on this page, but is frequently coupled with earth systems models through an external flux coupler, such as the CESM Flux Coupler from NCAR for which results are shown in the second figure for the CESM Large Ensemble. The column physics were separated into Icepack for the version 6 release to permit insertion into earth system models that use their own sea ice dynamical core, including the new DOE Energy Exascale Earth System Model (E3SM), which uses an unstructured grid in the sea ice component of the Model for Prediction Across Scales (MPAS), as demonstrated in the final figure. See also Sea ice Sea ice microbial communities Sea ice emissivity modeling Sea ice growth processes Sea ice concentration Sea ice thickness Sea ice physics and ecosystem experiment Arctic Ocean Southern Ocean Climate model Weather forecasting Northern Sea Route Northwest Passage Antarctica References External links CICE Consortium GitHub Information Page CICE Consortium Model for Sea-Ice Development Icepack: Essential Physics for Sea Ice Models Community-Driven Sea Ice Modeling with the CICE Consortium (Witness the Arctic) NOAA press release Oceans Deeply Pacific Standard phys.org: Arctic ice model upgrade to benefit polar research, industry and military Sea ice:  More than just frozen water (Santa Fe New Mexican) Energy Exascale Earth System Model (E3SM) Community Earth System Model (CESM) Sea ice Numerical climate and weather models Physical oceanography Physics software
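Referring back to the ice-mass integral in the Keystone Equations section above, the sketch below evaluates a discretized version of that integral over a handful of thickness categories, in the spirit of (though far simpler than) the category-based accounting CICE performs. The category thicknesses, areal fractions, and ice density are assumed, illustrative values, not CICE defaults.

```python
# Illustrative computation of ice mass per unit area from a discretized
# thickness distribution: m = rho_i * sum(h_n * a_n), where h_n is the mean
# thickness of category n and a_n its areal fraction.  All numbers are
# assumed for illustration.

RHO_ICE = 917.0  # kg/m^3, assumed sea ice density

def ice_mass_per_area(thickness_m, area_fraction):
    """Mass per unit area (kg/m^2) from category thicknesses and fractions."""
    return RHO_ICE * sum(h * a for h, a in zip(thickness_m, area_fraction))

if __name__ == "__main__":
    h_n = [0.3, 0.9, 1.8, 3.0, 5.0]       # m, category mean thicknesses
    a_n = [0.10, 0.25, 0.30, 0.20, 0.05]  # areal fractions; rest is open water
    print(f"m ~= {ice_mass_per_area(h_n, a_n):.0f} kg/m^2")
```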
CICE (sea ice model)
Physics
1,302
23,441,092
https://en.wikipedia.org/wiki/Rocket%20sled%20launch
A rocket sled launch, also known as ground-based launch assist, catapult launch assist, and sky-ramp launch, is a proposed method for launching space vehicles. With this concept the launch vehicle is supported by an eastward-pointing rail or maglev track that goes up the side of a mountain while an externally applied force is used to accelerate the launch vehicle to a given velocity. Using an externally applied force for the initial acceleration reduces the propellant the launch vehicle needs to carry to reach orbit. This allows the launch vehicle to carry a larger payload and reduces the cost of getting to orbit. When the amount of velocity added to the launch vehicle by the ground accelerator becomes great enough, single-stage-to-orbit flight with a reusable launch vehicle becomes possible. For hypersonic research in general, tracks at Holloman Air Force Base have, as of 2011, tested small rocket sleds moving at up to Mach 8.5. Effectively, a sky ramp would make the most expensive first stage of a rocket fully reusable, since the sled is returned to its starting position to be refueled and may be reused on the order of hours after use. Present launch vehicles have performance-driven costs of thousands of dollars per kilogram of dry weight; sled launch would aim to reduce performance requirements and amortize hardware expenses over frequent, repeated launches. Designs for mountain-based inclined-rail sleds often use jet engines or rockets to accelerate the spacecraft mounted on them. Electromagnetic methods (such as Bantam, Maglifter, and StarTram) are another technique investigated to accelerate a rocket before launch, potentially scalable to greater rocket masses and velocities than air launch. Overview of the problem Rockets carrying their own propellant use the vast majority of that propellant at the beginning of their journey to accelerate most of that very same propellant, as enshrined in the rocket equation. For example, the Space Shuttle used more than a third of its fuel during the earliest portion of its ascent. If some of that energy were provided externally, rather than from propellant the rocket carries, the rocket's propellant need would be much reduced, and its payload could be a larger fraction of its liftoff mass, increasing its efficiency. An example Due to factors including the exponential nature of the rocket equation and higher propulsive efficiency than if a rocket takes off stationary, a NASA Maglifter study estimated that a launch of an ELV rocket from a 3000-meter altitude mountain peak could increase payload to low Earth orbit by 80% compared to the same rocket from a conventional launch pad. Mountains of such height are available within the mainland U.S. for the easiest logistics, or nearer to the Equator for a little more gain from Earth's rotation. Among other possibilities, a larger single-stage-to-orbit (SSTO) vehicle could be reduced in liftoff mass by 35% with such launch assist, dropping to 4 instead of 6 engines in one case considered. At an anticipated efficiency close to 90%, the electrical energy consumed per launch of a 500-ton rocket would be comparatively inexpensive (each kilowatt-hour costing a few cents at the current cost of electricity in the United States), aside from any additional losses in energy storage. 
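The payload gains described in the example above follow from the Tsiolkovsky rocket equation: any velocity supplied by the ground accelerator is velocity the rocket's own propellant no longer has to provide, and the required propellant fraction shrinks exponentially with the remaining delta-v. The sketch below illustrates this with assumed values for exhaust velocity, total delta-v to orbit, and sled-supplied velocity; it is an order-of-magnitude illustration, not a reproduction of the Maglifter study.

```python
# Tsiolkovsky rocket equation illustration of sled launch assist.
# Propellant mass fraction needed for a given delta-v: 1 - exp(-dv / ve).
# All numbers below are assumed for illustration.
import math

def propellant_fraction(delta_v: float, v_exhaust: float) -> float:
    """Fraction of liftoff mass that must be propellant for a given delta-v."""
    return 1.0 - math.exp(-delta_v / v_exhaust)

if __name__ == "__main__":
    ve = 3500.0        # m/s, assumed effective exhaust velocity
    dv_total = 9300.0  # m/s, assumed delta-v to low Earth orbit incl. losses
    sled_dv = 600.0    # m/s, assumed velocity supplied by the ground accelerator

    for label, dv in (("pad launch", dv_total), ("sled-assisted", dv_total - sled_dv)):
        pf = propellant_fraction(dv, ve)
        print(f"{label:14s}: propellant fraction ~= {pf:.3f}, "
              f"non-propellant mass ~= {1 - pf:.3f}")
```

With these assumed numbers the non-propellant share of liftoff mass rises from about 7% to about 8%, which is why even a modest sled-supplied velocity translates into a disproportionately large payload gain.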
It is a system with low marginal costs dominated by initial capital costs. Although a fixed site, it was estimated to provide a substantial net payload increase for a high portion of the varying launch azimuths needed by different satellites, with rocket maneuvering during the early stage of post-launch ascent (an alternative to adding electric propulsion for later orbital inclination change). Maglev guideway costs were estimated as $10–20 million per mile in the 1994 study, which had anticipated annual maglev maintenance costs on the order of 1% of capital costs. Benefits of high altitude launches Rocket sled launch helps a vehicle gain altitude, and proposals commonly involve the track curving up a mountain. Advantages to any launch system that starts from high altitudes include reduced gravity drag (the cost of lifting fuel in a gravity well). The thinner air will reduce air resistance and allow more efficient engine geometries. Rocket nozzles have different shapes (expansion ratios) to maximize thrust at different air pressures. (Though NASA's aerospike engine for the Lockheed Martin X-33 was designed to change geometry to remain efficient at a variety of different pressures, the aerospike engine had added weight and complexity; X-33 funding was canceled in 2001; and other benefits from launch assist would remain even if aerospike engines reached flight testing). For example, the air is 39% thinner at 2500 meters. The more efficient rocket plume geometry and the reduced air friction allow the engine to be 5% more efficient per amount of fuel burned. Another advantage of high altitude launches is that they eliminate the need to throttle back the engine when the max Q limit is attained. Rockets launched in thick atmosphere can go so fast that air resistance may cause structural damage. Engines are throttled back when max Q is reached, until the rocket is high enough that they can resume full power. The Atlas V 551 gives an example of this. It reaches its max Q at 30,000 feet. Its engine is throttled back to 60% thrust for 30 seconds. This reduced acceleration adds to the gravity drag the rocket must overcome. Additionally, spacecraft engines concerned with max Q are more complex as they must be throttled during launch. A launch from high altitude need not throttle back at max Q as it starts above the thickest portion of the Earth's atmosphere. Debora A. Grant and James L. Rand, in "The Balloon Assisted Launch System – A Heavy Lift Balloon", wrote: "It was established some time ago that a ground launched rocket capable of reaching 20 km would be able to reach an altitude of almost 100km if it was launched from 20km." They suggest that small rockets are lifted above the majority of the atmosphere by balloon in order to avoid the problems discussed above. Compatibility with reusable launch vehicles A sled track that gave a Mach 2 or greater launch assist could reduce the fuel to orbit by 40% or more, while helping counter the weight penalty when aiming to make a fully reusable launch vehicle. Angled at 55° to vertical, a track on a tall mountain could allow a single-stage-to-orbit reusable vehicle with no new technology. Rocket sled launches in fiction Robert A. Heinlein used a lunar maglev launcher in his 1966 novel The Moon is a Harsh Mistress. One on Earth was built by the novel's end. Dean Ing used a similar system in his 1988 novel The Big Lifters. Fireball XL5 was launched on a sled from sea level.
Silver Tower has a rocket launch sled, used to assist in the takeoff of the hypersonic spaceplane America. The 1951 film version of When Worlds Collide used a rocket sled to launch the Ark, although the book did not. Borderlands: The Pre-Sequel has a rocket sled in its initial cut-scene. Permission To Die (graphic novel) in an original James Bond story by Mike Grell, a sled-propelled rocket plays a crucial element to the plot. Interstella 5555 has the Crescendolls band leaving Earth using a rocket sled to assist with takeoff. Ace Combat 5: The Unsung War is a video game that features a mission requiring a rocket sled site to be defended from an air raid during a launch. Mobile Suit Gundam: Char's Counterattack shows a rocket sled was used to assist in the takeoff of the civilian space shuttle. The 2015 film Tomorrowland depicts a vertical-launch rocket sled used in a civilian "starport" as the main character explores the titular city. See also Rocket sled Laser propulsion Non-rocket spacelaunch Silbervogel Spaceplane StarTram Inductrack Railgun Ram accelerator CAM ship Fighter catapult ship Coilgun Mass driver Hopper (spacecraft) Index of aviation articles References External links A website discussing "Skyramps": http://www.g2mil.com/skyramp.htm "A Light Gas Gun Approach To Achieving 'First Stage Acceleration' for the Highly Reusable Space Transportation System" 1997 M. Frank Rose, R .M. Jenkins, M. R. Brown, Space Power Institute, Auburn University, AL, 36849 Link to Lockheed Proposal for a sled based reusable launch vehicle. http://www.astronautix.com/lvs/recstics.htm Europe's Phoenix: Test Craft Sets Stage For Reusable Rocketry http://www.space.com/missionlaunches/europe_phoenix_020621.html Holloman Air Force Base: http://www.holloman.af.mil/photos/index.asp?galleryID=2718 NASA Closed End Launch Tube proposal for pneumatic rocket boosts: https://ntrs.nasa.gov/citations/20010027422 Describes rocket efficiency at various air pressures & aerospike engine: http://www.aerospaceweb.org/design/aerospike/compensation.shtml Exploratory engineering Hypothetical technology Single-stage-to-orbit Rocket propulsion Maglev Electrodynamics Sled launch Types of take-off and landing
Rocket sled launch
Mathematics,Technology
1,929
56,938,988
https://en.wikipedia.org/wiki/Verbenol
Verbenol (2-pinen-4-ol) is a group of stereoisomeric bicyclic monoterpene alcohols. These compounds have been found to be active components of insect pheromones and essential oils. Isomers Four stereoisomers of verbenol are known. For the cis isomer, the two methyl groups (-CH3) are on the same side of the carbon ring as the hydroxy group (-OH), and for the trans isomer, they are on opposite sides. In addition, there are enantiomers of each form that exhibit optical activity, that is, they rotate the plane of linearly polarized light as it passes through the substance or its solution. trans-Verbenol is a mountain pine beetle pheromone that attracts insects to a tree. cis-Verbenol is an aggregation pheromone of Ips typographus and Dendroctonus ponderosae Hopkins. Enantiomeric composition Typically, verbenol and related cyclic monoterpenes are available as non-racemic mixtures of their enantiomers. There are methods to increase the enantiomeric excess (optical purity) of verbenol and to isolate individual enantiomers. References Pheromones Bicyclic compounds Monoterpenes
Verbenol
Chemistry
274
20,268,049
https://en.wikipedia.org/wiki/Manifaxine
Manifaxine (developmental code name GW-320,659) is a norepinephrine–dopamine reuptake inhibitor developed by GlaxoSmithKline through structural modification of radafaxine, an isomer of hydroxybupropion and one of the active metabolites of bupropion. Manifaxine was researched for treatment of attention deficit hyperactivity disorder (ADHD) and obesity and was found to be safe, reasonably effective, and well-tolerated for both applications. However, no results were reported following these initial trials and development was discontinued. Synthesis The Grignard reaction between 3,5-difluorobenzonitrile [64248-63-1] (1) and ethylmagnesium bromide gives 3,5-difluoropropiophenone [135306-45-5] (2). Halogenation with molecular bromine occurs at the alpha-keto position providing 2-bromo-3',5'-difluoropropiophenone [135306-46-6] (3). Intermolecular ring formation with DL-Alaninol (2-Aminopropanol) [6168-72-5] completed the synthesis of Manifaxine (4). See also 3,5-Difluoromethcathinone 3-Fluorophenmetrazine References Abandoned drugs Beta-Hydroxyamphetamines Fluoroarenes Drugs developed by GSK plc Norepinephrine–dopamine reuptake inhibitors Phenylmorpholines Stimulants Tertiary alcohols
Manifaxine
Chemistry
359
60,819,517
https://en.wikipedia.org/wiki/Crabb%C3%A9%20reaction
The Crabbé reaction (or Crabbé allene synthesis, Crabbé–Ma allene synthesis) is an organic reaction that converts a terminal alkyne and aldehyde (or, sometimes, a ketone) into an allene in the presence of a soft Lewis acid catalyst (or stoichiometric promoter) and secondary amine. Given continued developments in scope and generality, it is a convenient and increasingly important method for the preparation of allenes, a class of compounds often viewed as exotic and synthetically challenging to access. Overview and scope The transformation was discovered in 1979 by Pierre Crabbé and coworkers at the Université Scientifique et Médicale (currently merged into Université Grenoble Alpes) in Grenoble, France. As initially discovered, the reaction was a one-carbon homologation reaction (the Crabbé homologation) of a terminal alkyne into a terminal allene using formaldehyde as the carbon source, with diisopropylamine as base and copper(I) bromide as catalyst. Despite the excellent result for the substrate shown, yields were highly dependent on substrate structure and the scope of the process was narrow. The author noted that iron salts were completely ineffective, while cupric and cuprous chloride and bromide, as well as silver nitrate provided the desired product, but in lower yield under the standard conditions. Shengming Ma (麻生明) and coworkers at the Shanghai Institute of Organic Chemistry (SIOC, Chinese Academy of Sciences) investigated the reaction in detail, including clarifying the critical role of the base, and developed conditions that exhibited superior functional-group compatibility and generally resulted in higher yields of the allene. One of the key changes was the use of dicyclohexylamine as the base. In another important advance, the Ma group found that the combination of zinc iodide and morpholine allowed aldehydes besides formaldehyde, including benzaldehyde derivatives and a more limited range of aliphatic aldehydes, to be used as coupling partners, furnishing 1,3-disubstituted allenes via an alkyne-aldehyde coupling method of substantial generality and utility. A separate protocol utilizing copper catalysis and a fine-tuned amine base was later developed to obtain better yields for aliphatic aldehydes. The Crabbé reaction is applicable to a limited range of ketone substrates for the synthesis of trisubstituted allenes; however, a near stoichiometric quantity (0.8 equiv) of cadmium iodide (CdI2) is needed to promote the reaction. Alternatively, the use of cuprous bromide and zinc iodide sequentially as catalysts is also effective, provided the copper catalyst is filtered before zinc iodide is added. Prevailing mechanism The reaction mechanism was first investigated by Scott Searles and coworkers at the University of Missouri. Overall, the reaction can be thought of as a reductive coupling of the carbonyl compound and the terminal alkyne. In the Crabbé reaction, the secondary amine serves as the hydride donor, which results in the formation of the corresponding imine as the byproduct. Thus, remarkably, the secondary amine serves as Brønsted base, ligand for the metal ion, iminium-forming carbonyl activator, and the aforementioned two-electron reductant in the same reaction. In broad strokes, the mechanism of the reaction is believed to first involve a Mannich-like addition of the species into the iminium ion formed by condensation of the aldehyde and the secondary amine. This first part of the process is a so-called A3 coupling reaction (A3 stands for aldehyde-alkyne-amine). 
In the second part, the α-amino alkyne then undergoes a formal retro-imino-ene reaction, an internal redox process, to deliver the desired allene and an imine as the oxidized byproduct of the secondary amine. These overall steps are supported by deuterium labeling and kinetic isotope effect studies. Density functional theory computations were performed to better understand the second part of the reaction. These computations indicate that the uncatalyzed process (either a concerted but highly asynchronous process or a stepwise process with a fleeting intermediate) involves a prohibitively high-energy barrier. The metal-catalyzed reaction, on the other hand, is energetically reasonable and probably occurs via a stepwise hydride transfer to the alkyne followed by C–N bond scission in a process similar to those proposed for formal [3,3]-sigmatropic rearrangements and hydride transfer reactions catalyzed by gold(I) complexes. A generic mechanism showing the main features of the reaction (under Crabbé's original conditions) is given below:(The copper catalyst is shown simply as "CuBr" or "Cu+", omitting any additional amine or halide ligands or the possibility of dinuclear interactions with other copper atoms. Condensation of formaldehyde and diisopropylamine to form the iminium ion and steps involving complexation and decomplexation of Cu+ are also omitted here for brevity.) Since 2012, Ma has reported several catalytic enantioselective versions of the Crabbé reaction in which chiral PINAP (aza-BINAP) based ligands for copper are employed. The stepwise application of copper and zinc catalysis was required: the copper promotes the Mannich-type condensation, while subsequent one-step addition of zinc iodide catalyzes the imino-retro-ene reaction. See also Mannich reaction Ene reaction Coupling reaction Alkynylation References Organic chemistry Name reactions
Crabbé reaction
Chemistry
1,224
26,861,142
https://en.wikipedia.org/wiki/Anatomical%20plane
An anatomical plane is a hypothetical plane used to transect the body, in order to describe the location of structures or the direction of movements. In human and non-human anatomy, three principal planes are used: The sagittal plane or lateral plane (longitudinal, anteroposterior) is a plane parallel to the sagittal suture. It divides the body into left and right. The coronal plane or frontal plane (vertical) divides the body into dorsal and ventral (back and front, or posterior and anterior) portions. The transverse plane or axial plane (horizontal) divides the body into cranial and caudal (head and tail) portions. Terminology There could be any number of sagittal planes, but only one cardinal sagittal plane exists. The term cardinal refers to the one plane that divides the body into equal segments, with exactly one half of the body on either side of the cardinal plane. The term cardinal plane appears in some texts as the principal plane. The terms are interchangeable. Human anatomy In human anatomy, the anatomical planes are defined in reference to a body in the upright or standing orientation. A transverse plane (also known as axial or horizontal plane) is parallel to the ground; it separates the superior from the inferior, or the head from the feet. The transverse planes identified in Terminologia Anatomica are the transpyloric plane, the subcostal plane, the transumbilical (or umbilical) plane, the supracristal plane, the intertubercular plane, and the interspinous plane. A coronal plane (also known as frontal plane) is perpendicular to the ground; it separates the anterior from the posterior, the front from the back, and the ventral from the dorsal. A sagittal plane (also known as anteroposterior plane) is perpendicular to the ground, separating left from right. The median (or midsagittal) plane is the sagittal plane in the middle of the body; it passes through midline structures such as the navel and the spine. All other sagittal planes (also known as parasagittal planes) are parallel to it. The axes and sagittal plane are the same for bipeds and quadrupeds, but the orientations of the coronal and transverse planes switch. The axes on particular pieces of equipment may or may not correspond to the axes of the body, especially since the body and the equipment may be in different relative orientations. Uses Motion When describing anatomical motion, these planes describe the axis along which an action is performed. So by moving through the transverse plane, movement travels from head to toe. For example, if a person jumped directly up and then down, their body would be moving through the transverse plane in the coronal and sagittal planes. A longitudinal plane is any plane perpendicular to the transverse plane. The coronal plane and the sagittal plane are examples of longitudinal planes. Medical imaging Sometimes the orientation of certain planes needs to be distinguished, for instance in medical imaging techniques such as sonography, CT scans, MRI scans, or PET scans. There are a variety of different standardized coordinate systems. For the DICOM format, the one imagines a human in the anatomical position, and an X-Y-Z coordinate system with the x-axis going from front to back, the y-axis going from right to left, and the z-axis going from toe to head. The right-hand rule applies. Finding anatomical landmarks In humans, reference may take origin from superficial anatomy, made to anatomical landmarks that are on the skin or visible underneath. 
As with planes, lines and points are imaginary. Examples include: The midaxillary line, a line running vertically down the surface of the body passing through the apex of the axilla (armpit). Parallel are the anterior axillary line, which passes through the anterior axillary skinfold, and the posterior axillary line, which passes through the posterior axillary skinfold. The mid-clavicular line, a line running vertically down the surface of the body passing through the midpoint of the clavicle. In addition, reference may be made to structures at specific levels of the spine (e.g. the 4th cervical vertebra, abbreviated "C4"), or the rib cage (e.g., the 5th intercostal space). Occasionally, in medicine, abdominal organs may be described with reference to the trans-pyloric plane, which is a transverse plane passing through the pylorus. Comparative embryology In discussing the neuroanatomy of animals, particularly rodents used in neuroscience research, a simplistic convention has been to name the sections of the brain according to the homologous human sections. Hence, what is technically a transverse (orthogonal) section with respect to the body length axis of a rat (dividing anterior from posterior) may often be referred to in rat neuroanatomical coordinates as a coronal section, and likewise a coronal section with respect to the body (i.e. dividing ventral from dorsal) in a rat brain is referred to as transverse. This preserves the comparison with the human brain, whose length axis in rough approximation is rotated with respect to the body axis by 90 degrees in the ventral direction. It implies that the planes of the brain are not necessarily the same as those of the body. However, the situation is more complex, since comparative embryology shows that the length axis of the neural tube (the primordium of the brain) has three internal bending points, namely two ventral bendings at the cervical and cephalic flexures (cervical flexure roughly between the medulla oblongata and the spinal cord, and cephalic flexure between the diencephalon and the midbrain), and a dorsal (pontine or rhombic flexure) at the midst of the hindbrain, behind the cerebellum. The latter flexure mainly appears in mammals and sauropsids (reptiles and birds), whereas the other two, and principally the cephalic flexure, appear in all vertebrates (the sum of the cervical and cephalic ventral flexures is the cause of the 90-degree angle mentioned above in humans between body axis and brain axis). This more realistic concept of the longitudinal structure of vertebrate brains implies that any section plane, except the sagittal plane, will intersect variably different parts of the same brain as the section series proceeds across it (relativity of actual sections with regard to topological morphological status in the ideal unbent neural tube). Any precise description of a brain section plane therefore has to make reference to the anteroposterior part of the brain to which the description refers (e.g., transverse to the midbrain, or horizontal to the diencephalon). A necessary note of caution is that modern embryologic orthodoxy indicates that the brain's true length axis finishes rostrally somewhere in the hypothalamus where basal and alar zones interconnect from left to right across the median line; therefore, the axis does not enter the telencephalic area, although various authors, both recent and classic, have assumed a telencephalic end of the axis. 
The causal argument for this lies in the end of the axial mesoderm -mainly the notochord, but also the prechordal plate- under the hypothalamus. Early inductive effects of the axial mesoderm upon the overlying neural ectoderm is the mechanism that establishes the length dimension upon the brain primordium, jointly with establishing what is ventral in the brain (close to the axial mesoderm) in contrast with what is dorsal (distant from the axial mesoderm). Apart from the lack of a causal argument for introducing the axis in the telencephalon, there is the obvious difficulty that there is a pair of telencephalic vesicles, so that a bifid axis is actually implied in these outdated versions. History Some of these terms come from Latin. Sagittal means "like an arrow", a reference to the position of the spine that naturally divides the body into right and left equal halves, the exact meaning of the term "midsagittal", or to the shape of the sagittal suture, which defines the sagittal plane and is shaped like an arrow. See also Anatomical terms of location Horizontal plane Radial plane References Anatomical planes planes plane Human surface anatomy
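As a small, hedged illustration of the three principal planes defined above, the sketch below names the plane associated with a given body axis; the axis ordering (left-right, front-back, head-toe) is an assumption made only for this example, not a statement of any imaging standard.

```python
# Toy illustration of the principal anatomical planes described above.
# Assumed body-axis convention for this sketch only:
#   index 0 = left-right axis, 1 = front-back axis, 2 = head-toe axis.
# A plane is named by the body axis along which its normal vector lies.

def name_plane(normal):
    """Return the anatomical plane whose normal is closest to a body axis."""
    axis = max(range(3), key=lambda i: abs(normal[i]))
    return {
        0: "sagittal (divides left from right)",
        1: "coronal (divides front from back)",
        2: "transverse (divides head from feet)",
    }[axis]

if __name__ == "__main__":
    print(name_plane((1, 0, 0)))   # normal along left-right axis -> sagittal
    print(name_plane((0, 1, 0)))   # normal along front-back axis -> coronal
    print(name_plane((0, 0, 1)))   # normal along head-toe axis  -> transverse
```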
Anatomical plane
Mathematics,Biology
1,769
76,090,272
https://en.wikipedia.org/wiki/Connie%20Roth
Connie Barbara Roth (born 1974) is a Canadian-American soft matter physicist and polymer scientist whose research concerns the glass transition and aging in polymer films. She is a professor of physics at Emory University. Education and career Roth became interested in physics as a teenager in Toronto through the MacGyver television show, and began her interest in polymer films through studying paper and toner in a summer internship at the Xerox Research Centre of Canada. She studied physics as an undergraduate at McMaster University in Ontario, graduating in 1997. She went to the University of Guelph, also in Ontario, for graduate study in physics, earning a master's degree in 1999 and completing her Ph.D. in 2004. After postdoctoral research at Simon Fraser University in British Columbia and Northwestern University in Chicago, Roth joined the Emory University faculty in 2007. She was promoted to associate professor in 2013 and full professor in 2021. Recognition Roth was named as a Fellow of the American Physical Society (APS) in 2019, after a nomination from the APS Division of Polymer Physics, "for exceptional contributions to the understanding of glass transition and aging phenomena in polymer films and blends". She was the 2019 recipient of the Fellows Award of the North American Thermal Analysis Society. References External links Home page 1974 births Living people Canadian physicists Canadian women physicists American physicists American women physicists Polymer scientists and engineers McMaster University alumni University of Guelph alumni Emory University faculty Fellows of the American Physical Society
Connie Roth
Chemistry,Materials_science
299
22,929,977
https://en.wikipedia.org/wiki/Alan%20J.%20Hoffman
Alan Jerome Hoffman (May 30, 1924 – January 18, 2021) was an American mathematician and IBM Fellow emeritus, T. J. Watson Research Center, IBM, in Yorktown Heights, New York. He was the founding editor of the journal Linear Algebra and its Applications, and held several patents. He contributed to combinatorial optimization and the eigenvalue theory of graphs. Hoffman and Robert Singleton constructed the Hoffman–Singleton graph, which is the unique Moore graph of degree 7 and diameter 2. Hoffman died on January 18, 2021, at the age of 96. Early life Alan Hoffman was born and raised in New York City, residing first in Bensonhurst, Brooklyn and then on the Upper West Side of Manhattan, with his sister Mildred and his parents Muriel and Jesse. Alan knew from an early age that he wanted a career in mathematics. He was a good student in all disciplines, finding inspiration in both the liberal arts and the sciences. But he was enthralled by the rigor of deductive reasoning found in mathematics. He graduated from the George Washington High School in 1940 and entered Columbia University that fall on a Pulitzer scholarship, at the age of 16. Education At Columbia, Hoffman joined the Debate Council, in part to overcome his fear of public speaking, and was active both in movements to increase American support for the Allies in the growing war against the Axis and in the movement to have America directly enter the war. Although his coursework consisted primarily of mathematics, including small classes with luminaries in the field, he also studied philosophy, literature, and the history of governments. World War II interrupted Hoffman's studies but not his interest in mathematics. He was called to service in February 1943 and served in the U.S. Army from 1943 to 1946, spending time in both Europe and the Pacific. Hoffman eloquently refers to these three years as "the climatic event of my life, with adventure magnified by the sensibilities of youth." While in basic training in the anti-aircraft artillery school he considered the possibility of developing axioms for the geometry of circles. Unable to draw, he carried in his head a vision of the configurations in space – points, circles and spheres – depicting phenomena analogous to the geometry of lines. These ideas would later become the genesis of his doctoral dissertation on the foundations of inversion geometry. The experience of developing ideas in the mind rather than on paper or the chalkboard remained a practice throughout his career – a practice he did not recommend to others but which served his unique mind remarkably well. After additional Army training, Hoffman became an instructor at the anti-aircraft meteorology school, teaching the basic trigonometry used to track balloons in order to deduce winds aloft. Following additional training in Electrical Engineering at the University of Maine and on the rudiments of long-lines telephony, Hoffman was assigned to the 3186th Signal Service Battalion and sent to the European theatre in December 1944, as the war there was nearing its end. He spent a brief period in the Pacific theatre before returning home in February 1946. During his time abroad he and others taught some mathematics in small self-organized courses and he recorded his forays into circular geometry to share with faculty back at Columbia. Upon returning to Columbia in the Fall of 1946, Hoffman was assigned to teach a mathematical survey course to the Columbia College of Pharmacy.
He viewed this as an opportunity to improve his pedagogical skills and determine whether the planned career in university teaching would be the most suitable choice. During that academic year, he gained confidence and skills in his teaching, crystallized his ideas on axioms for circular geometry, and proposed marriage to Esther Walker, the sister of an Army friend. Hoffman began graduate studies at Columbia in the Fall of 1947, "brimming with confidence." Early career Following successful completion of exams and defense of his doctoral dissertation on the foundations of inversion geometry in 1950, Hoffman spent a postdoctoral year at the Institute for Advanced Study in Princeton sponsored by the Office of Naval Research. During this year he established a rhythm for his work, based on the mantra "You are a mathematician, you do mathematics." At the end of the postdoctoral year, having not secured an academic appointment anywhere he would want to live, Hoffman joined the Applied Mathematics Division of the National Bureau of Standards (NBS, now the National Institute of Standards and Technology) in Washington DC. This choice, advocated against by friends and colleagues, was fortuitous. "The entire arc of my career is based on the experience of the five years I spent in Washington at NBS." Hoffman had been hired to help fulfill a contract (Project SCOOP) with the Office of the Air Comptroller of the United States Air Force to pursue a program of research and computing in an area he had never heard of: linear programming. Hoffman found the new (both to him and the world) subject "a delicious combination of challenge and fun." Hoffman learned linear programming from George Dantzig, who believed that their work would help organizations operate more efficiently through the use of mathematics – a concept that is now, 70 years later, continuing to be realized. Through this work Hoffman became exposed to business concepts from management consulting, manufacturing, and finance, areas he enjoyed, but never felt fully at home in. Through Project SCOOP Hoffman became acquainted with other operations research notables such as Richard Bellman and Harold Kuhn. Although the code he wrote in 1951 "just didn't run," an experience disheartening enough that he never wrote another program, Hoffman and coauthors published a paper showing, based on experiments, that the simplex method was computationally superior to its contemporary competitors. This paper contained the first computational experiments with the simplex method and serves as a model for doing computational experiments in mathematical programming. During these early years at NBS Hoffman developed the first example of cycling in the simplex method, an example which appears in numerous textbooks on the subject. A short NBS technical paper, apparently not widely circulated, showed that a point which "almost" satisfies a set of linear inequalities is "close" to some other point that does, under any reasonable definitions of "almost" and "close." The implications for linear programming algorithms that consider "lazy" or "soft" constraints, or for which the constraint data (matrix coefficients and right-hand side) are subject to noise, are worth considering. Hoffman was a key organizer of the influential Second Symposium in Linear Programming, held at the Bureau in January 1955. NBS's paper on the simplex method ("How to solve a linear programming problem," Proc.
Second Linear Programming Symposium, 1955) was widely distributed to other groups working on their own codes for the simplex algorithm. Read in 2020, this paper is a fascinating glimpse into the challenges of solving linear programs on tiny (by today's standards) computers. Hoffman's work at NBS included failed attempts to use linear programming to solve a combinatorial procurement auction problem. Combinatorial auctions remain challenging to this day, due to the overwhelming computational burden associated with computing optimal solutions. The NBS effort used an approach which resembles branch-and-bound, which is now the standard method for solving integer programming problems. With the German mathematician Helmut Wielandt, Hoffman used linear programming to estimate how distant the eigenvalues of one normal matrix were from the eigenvalues of another normal matrix, in terms of how distant the two matrices were from each other. The result relies on the observation that every doubly stochastic matrix is a convex combination of permutation matrices. For the Operations Research community, this result implies that for the subclass of linear programming problems which are called transportation problems, if the data (right hand side, or supply and demand values) consists of integers, then there is an optimal solution taking only integer values. The general result is known as the Hoffman-Wielandt Theorem and there are mathematicians who know Hoffman only through this result. At NBS Hoffman explored the connection between linear programming duality and other combinatorial problems. This led to a simple but elegant proof of the König-Egerváry Theorem, which states that for a 0-1 matrix, the maximum number of 1s that appear in different rows and columns is equal to the minimum number of rows and columns that in combination include all of the 1s in the matrix. This early work at NBS, and Hoffman's continued interest in using linear inequalities to prove combinatorial theorems, led to collaborations with Harold Kuhn, David Gale and Al Tucker and to the birth of a subfield that later became known as polyhedral combinatorics. Hoffman was influential in later bringing Jack Edmonds to NBS (1959-1969), where the subject flourished. While at NBS, Joe Kruskal and Hoffman showed that total unimodularity (the concept, not the name) provided an explanation of why some linear programs with integer data have integer solutions, and some do not. They also identified some sufficient conditions for a matrix to have the required property. Hoffman also wrote about Lipschitz conditions for systems of linear inequalities, bounds on eigenvalues of normal matrices and the properties of smooth patterns of production. In 1956, Hoffman left the Bureau and moved to England with Esther and two young daughters, Eleanor (then 2) and Elizabeth (then less than 6 months), for the glamorous role of Scientific Liaison Officer (mathematics) at the London branch of the Office of Naval Research, with the mission of reestablishing connections between American and European mathematicians. This was a year of listening and learning, establishing and renewing friendships, and of course, doing mathematics. He did mathematics across Europe, discovering on a train to Frankfurt a beautiful theorem (but a flawed proof, later corrected by Jeff Kahn) connecting a topic in algebra to his early work on the geometry of circles.
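As a concrete, hedged illustration of the Hoffman-Wielandt theorem described above, the snippet below numerically checks the inequality on small random symmetric (hence normal) matrices: some ordering of the eigenvalues matches within the Frobenius distance of the matrices. The matrix size and random seed are arbitrary choices for the demonstration.

```python
import itertools
import numpy as np

# Numerical spot-check of the Hoffman-Wielandt inequality: for normal A and B
# there is a matching of eigenvalues with
#   sum_i |lambda_i - mu_pi(i)|^2 <= ||A - B||_F^2.
# Symmetric matrices are used because they are easy to generate and are normal.

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)); A = (A + A.T) / 2
B = rng.normal(size=(n, n)); B = (B + B.T) / 2

lam = np.linalg.eigvalsh(A)
mu = np.linalg.eigvalsh(B)

# Brute-force the best eigenvalue matching (fine for small n)
best = min(
    sum(abs(l - m) ** 2 for l, m in zip(lam, perm))
    for perm in itertools.permutations(mu)
)
frob_sq = np.linalg.norm(A - B, "fro") ** 2

print(f"best eigenvalue matching: {best:.4f} <= ||A-B||_F^2 = {frob_sq:.4f}")
assert best <= frob_sq + 1e-9
```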
Another paper produced during this period further explores consequences of total unimodularity, and introduces the concept of a circulation in a directed graph as a generalization of the concept of an s-t flow, in which two of the graph's nodes play a special role. As the year abroad came to a close Hoffman investigated two industrial positions in New York, one in a tiny mathematical research group at the nascent IBM Research Lab in northern Westchester county and the other teaching and providing general operations research support for GE employees at the Manhattan headquarters of the company. Hoffman chose the role in the larger, more established organization due to the location, the salary, and the opportunity to see if he, and the field of operations research, could succeed in business. Hoffman found the job fascinating and, in many respects, satisfying. He was allowed by management to do mathematics, as long as it did not interfere with his assigned duties. Hoffman continued his research, most of which was orthogonal to the mission of the Management Consulting group, in an elegant office in the heart of Manhattan. In the summer of 1960, Hoffman participated in a summer workshop on Combinatorics hosted by the mathematics department at IBM Research. He was dazzled by the atmosphere and "people all around doing mathematics." In 1961 he accepted the invitation of Herman Goldstine, Herb Greenberg, and Ralph Gomory to join IBM Research, thinking that it would be a great place to work, but that it probably wouldn't last, and in a few years he would get a "real job" in academia. Although Hoffman served as a visiting or adjunct professor at Technion Israel Institute of Technology (which awarded him an honorary doctorate), Yale, Stanford (where he spent cold winters for almost a decade), Rutgers, the Georgia Institute of Technology, Yeshiva University, the New School, and the City University of New York and supervised Ph.D. dissertations at City University of New York, Stanford, Yale and Princeton, he remained a member of the Mathematics Department at IBM Research until his retirement as an IBM Fellow in 2002. IBM career Upon joining IBM, Hoffman was one of the oldest members of the department, which was composed primarily of new Ph.D.s. Despite being a mere 11 years post-PhD, Hoffman quickly assumed the role of mentor to these young researchers, discussing their work and interests and providing guidance. He served briefly as director of the department and was appointed IBM Fellow in 1977. Over the course of his career he published upwards of 200 academic papers, more than a third of them with coauthors. His mathematical range spanned numerous areas in both Algebra and Operations Research. He co-authored papers with many of his IBM colleagues, collaborating effectively with everyone from his fellow IBM Fellows to summer interns and postdocs. Hoffman's humor, enthusiasm for math, music and puns, kindness and generosity were appreciated by all who encountered him. Summary of Mathematical Contributions (from his notes in Selected Papers of Alan Hoffman) Hoffman's work in Geometry, beginning with his dissertation "On the foundations of inversion geometry," included proofs of properties of affine planes, and the study of points of correlation of finite projective planes, conditions on patterns of the union and intersection of cones (derived largely from his generalization of his earlier results on the rank of real matrices).
He produced an alternate proof, based on axioms for certain abstract systems of convex sets, of a result (by Scarf and others) on the number of inequalities required to specify a solution to an integer programming problem. A theorem about this abstract system appears closely related to antimatroids (also known as convex geometries), although the connection has not been fully explored. Hoffman's work in combinatorics extended our understanding of several classes of graphs. A 1956 lecture by G. Hajós on interval graphs led to Hoffman's characterization of comparability graphs, and later, through collaboration with Paul Gilmore, the GH theorem (also attributed to A. Ghouia-Houri). Motivated by Edmonds' matching algorithm, Hoffman collaborated with Ray Fulkerson and M. McAndrew to characterize sets of integers that could correspond to the degrees and bounds on each vertex-pair edge count of such a graph. They further considered which graphs in the class of all graphs having a prescribed set of degrees and edge count bounds could be transformed by a specific set of interchanges to any other graph in the class. The proofs relate intimately to the important concept of a Hilbert basis. The paper on self-orthogonal Latin squares, with IBM co-authors Don Coppersmith and R. Brayton, was inspired by a request to schedule a spouse-avoiding mixed doubles tournament for a local racquet club. It has the distinction of being the only paper Hoffman would discuss outside of the mathematics community. Partially ordered sets were a frequent topic of study for Hoffman. The 1977 paper with D. E. Schwartz uses linear programming duality to generalize Green and Kleitman's 1976 generalization of Dilworth's theorem on the decomposition of partially ordered sets, in yet another example of the unifying role duality plays in many combinatorial results. Throughout his career Hoffman sought simple elegant alternative proofs to established theorems. These alternate proofs often gave rise to generalizations and extensions. In the late 1990s he collaborated with Cao, Chvátal and Vince to develop an alternate proof, using elementary methods rather than linear algebra, of Ryser's theorem about square 0-1 matrices. Hoffman's work on matrix inequalities and eigenvalues is a staple in any course on matrix theory. Of particular charm, and in keeping with his fondness for unifying approaches, is his 1975 paper on Linear G-Functions. While the proof of the specified Gerschgorin Variation is longer and more complex than others, it covers all the Ostrowski variations and many additional variations as special cases. Hoffman was an encouraging elder but not an active participant in IBM's development of a series of linear and integer programming products. He did, however, continue research on combinatorial and algebraic aspects of linear programming and linear inequalities, including a delightful abstraction of linear programming duality (1963). He also continued to use properties of linear inequalities to prove (or re-prove, more elegantly) results in convexity. A collaboration with Shmuel Winograd, also an IBM Fellow in the Mathematics department, produced an efficient algorithm for finding all shortest distances in a directed network, using pseudo-multiplication of matrices. A series of papers on lattice polyhedra (some with Don Schwartz) introduced the concept of lattice polyhedra, which gave rise to yet another instance of combinatorial duality.
Following a collaboration with Ray Fulkerson and Rosa Oppenheim on balanced matrices, Hoffman generalized the Ford-Fulkerson Max Flow – Min Cut result to other cases (flow at nodes, undirected arcs, etc.) by providing a proof of which all previously known instances were special cases. This paper also introduced the concept (but again, not the name) of total dual integrality, an idea behind most uses of linear programming to prove extremal combinatorial theorems. Over his career Hoffman studied the class of integer programming problems that were solvable by successively maximizing the variables in some order. One such instance is the complete transportation problem, in the case where the cost coefficients exhibit a particular property discovered more than a century earlier by the French mathematician Gaspard Monge. This approach, called simply "simple" in the Hoffman paper, was later deemed "greedy" by Edmonds and Fulkerson. The Monge property gives rise to an antimatroid, and through the use of that antimatroid, Hoffman's result is easily extended to the case of incomplete transportation problems. Hoffman re-used the term "greedy" to describe a subclass of 0-1 matrices for which the dual linear program can be solved by the greedy algorithm for all right-hand sides and all objective functions with decreasing (in the variable index) coefficients. Together with Kolen and Sakarovitch he showed that for these matrices, the corresponding integer program has an integer optimal solution for integer data. The elegant and brief 1992 paper provides a characterization of 0-1 matrices for which packing and covering problems can be solved through a greedy approach. It provides a unification of results for shortest path and minimum spanning tree problems. His final paper on this topic, "On greedy algorithms, partially ordered sets and submodular functions," co-authored with Dietrich, appeared in 2003. Hoffman visited and revisited the topic of graph spectra, addressing the uniqueness of the triangular association scheme in a 1959 paper, Moore graphs with diameters 2 and 3 in 1960 (with R. Singleton), the polynomial of a graph in 1963, the line graph of a symmetric balanced incomplete block design (with Ray-Chaudhuri) in 1965, connections between eigenvalues and colorings of a graph (in 1970), connections between eigenvalues and partitionings of the edges in a graph in 1972, and many more, including exploring properties of the edge versus path incidence matrix of series parallel graphs (related to greedy packings) with Schieber in 2002. Recognition Hoffman was elected to the National Academy of Sciences in 1982, to the American Academy of Arts and Sciences in 1987, and to the inaugural class of INFORMS Fellows in 2002. Over his long career, Hoffman served on the editorial board of eleven journals and as the founding editor of Linear Algebra and its Applications. In 1992, together with Philip Wolfe (also of IBM) he was awarded the John von Neumann Theory Prize by ORSA and TIMS, predecessors of INFORMS. In presenting the award George Nemhauser recognized Hoffman and Wolfe as the intellectual leaders of the mathematical programming group at IBM. He cited Hoffman for his work in combinatorics and linear programming and for his early work on the computational efficiency of the simplex method during his time at NBS. In August 2000, Hoffman was honored by the Mathematical Programming Society as one of 10 recipients (3 from IBM) of the Founders Award.
In a biography published in an issue of Linear Algebra and its Applications dedicated to Hoffman on the occasion of his sixty-fifth birthday, Uriel Rothblum wrote that "Above and beyond his scholarly and professional contributions, Hoffman has unparalleled ability to enjoy everything he does. He enjoys singing, ping pong, puns, witty stories, and -- possibly as much as anything else -- doing mathematics." Esther Hoffman died of a blood disease in summer of 1988. Alan married Elinor Hershaft, an interior designer, in 1990. They divorced in 2014. Elinor died in October 2020. Hoffman spent his last years at The Osborn retirement community in Rye, New York. He is survived by his daughters, Eleanor and Elizabeth. Awards Alan Hoffman was a recipient of a number of awards. IBM Fellow, 1978– Member, National Academy of Sciences, 1982– Fellow, American Academy of Arts and Sciences, 1987– D. Sc. (Hon.) Technion – Israel Institute of Technology, 1986 1992 John von Neumann Theory Prize with Philip Wolfe 2002 class of Fellows of the Institute for Operations Research and the Management Sciences Select publications Hoffman A. J. & Jacobs W. (1954) Smooth patterns of production. In Management Science, 1(1): 86–91. Hoffman A. J. & Wolfe P. (1985) History. Lawler E. L., Lenstra J. K., Rinnooy Kan A. H. G., & Shmoys D. B., eds. In The Traveling Salesman Problem. John Wiley & Sons: New York. References 1924 births 2021 deaths 20th-century American mathematicians 21st-century American mathematicians Columbia College (New York) alumni Combinatorialists Fellows of the Institute for Operations Research and the Management Sciences John von Neumann Theory Prize winners
Alan J. Hoffman
Mathematics
4,492
40,802,552
https://en.wikipedia.org/wiki/Screen%20scroll%20centrifuge
Screen/Scroll centrifuge is a filtering or screen centrifuge which is also known as a worm screen or conveyor discharge centrifuge. This centrifuge was first introduced in the middle of the 19th century. After the development of new technologies over the decades, it is now one of the most widely used processes in many industries for the separation of crystalline, granular or fibrous materials from a solid-liquid mixture. This process is also used to dry the solid material. It is most frequently seen in the coal preparation industry. Moreover, it can be found in other fields such as the chemical, environmental, food and mining industries. Fundamentals A screen scroll centrifuge is a filtering centrifuge which separates solids and liquid from a solid-liquid mixture. This type of centrifuge is commonly used with a continuous process in which slurry containing both solid and liquid is continuously fed into and continuously discharged from the centrifuge. In a typical screen scroll centrifuge, the basic principle is that the entering feed is separated into liquid and solids as two products. The feed is transported from the small to the larger diameter end of the frustoconical basket by the inclination of the screen basket and the slightly different speed of the scraper worm. The solid material retained on the screen is moved along the cone via an internal screw conveyor, while the liquid output is obtained as centrifugal force causes the liquid in the feed slurry to pass through the screen openings. A screen scroll centrifuge may rotate in either a horizontal or a vertical position. Range of applications The use of the screen scroll centrifuge has been seen in numerous process engineering industries. One of the most noticeable applications is within the coal preparation industry. In addition, this centrifuge is also employed in the dewatering of potash and gilsonite, in salt processes and in dewatering various sands. Moreover, it is also designed for use in the food processing industry, for instance, dairy production, and cocoa butter equivalents and other confectionery fats. Designs available Screen scroll centrifuges, which are also known as worm screen or conveyor discharge centrifuges, cause the solids to move along the cone through an internal screw conveyor. The conveyor in the centrifuge spins at a speed slightly different from that of the conical screen, and centrifugal forces of approximately 1800 g – 2600 g facilitate reasonable throughputs. Some screen scroll centrifuges are available with up to four separate stages for improved performance. The first stage is used to de-liquor the feed, which is followed by a washing stage, with the final stage being used for drying. In an advanced screen scroll centrifuge with four stages, two separate washes are employed in order to segregate the wash liquors. The two most common types of screen/scroll centrifuge used in many industrial applications are the vertical screen/scroll centrifuge and the horizontal screen/scroll centrifuge. Vertical screen scroll centrifuge A vertical screen scroll centrifuge is built from the main components of a screen, scroll, basket, housing, and helical screw. Feed containing liquid and solid materials is introduced into the vertical screen scroll centrifuge from the top. It is sped up by the centrifugal acceleration produced by the rotating parts it contacts.
As such, centrifugal force slings liquids through the openings, while solids are held on the screen surface because the granular particles are larger than the screen pores or have agglomerated. Movement of solids across the screen surface is manipulated by flights. Liquids that have passed through the screen are collected and discharged through an effluent outlet at the side of the machine, while solids collected from the screen fall by gravity through the bottom discharge of the machine. Some of the available vertical screen scroll centrifuges are the CMI model EBR and CMI model EBW, which are manufactured by Centrifugal & Mechanical Industries (CMI). The former can dewater coarser particles with sizes ranging from 1.5 in to 28 mesh, whereas the latter can dewater finer particles with sizes ranging from 1 mm to 150 mesh. Horizontal screen scroll centrifuge Similar to a vertical screen scroll centrifuge, a horizontal screen scroll centrifuge is constructed of several main parts: screen, scroll, basket, housing, and helical screw. The screen and the basket with frustoconical geometry are assembled into the housing along a horizontal axis. Inside the frustoconical structure there is a tubular wall. Inside the tubular wall there is a cylinder along which the flights of the helical screw (the scroll) pass. The tubular wall has a slightly different angular speed from the helical screw. The solid-liquid mixture is fed into the closed rearward portion of the scroll. The rotational movement of the scroll, screen, and basket allows the liquid to pass through the openings on the screen (via centrifugal force). The remaining solids are separated according to size due to the difference in angular velocity between the helical screw and the basket. The helical screw pushes the solid material toward the forward end of the scroll, where it is discharged. The processing time depends on the helical screw pitch and the angular velocity difference. It may also be influenced by the design of the scroll feed opening. The solid particles exiting are usually collected via a conveyor in the collection unit. Main process characteristics and their assessment The performance and output efficiency of the screen scroll centrifuge can be affected by several factors, such as particle size and feed concentration, flow rate of the feed, and screen mesh size of the centrifuge. Particle size and feed solids Particle size in the feed is one of the most important parameters to be taken into account, since the choice of slot and screen hole size for a screen scroll centrifuge, or of a different type of process altogether, depends on the feed contents. Non-uniform particle size in the feed can cause partial blockage of the screen due to the smaller solids blocking the holes alongside the normal and larger particles. As a result, liquids flow over the screen instead of passing through it. A higher solids content in the feed is therefore required in order to obtain good and reasonable results, normally greater than 15% and up to 60% w/w. Nevertheless, the flow rate of the feed can be monitored to overcome this setback. Another possible method is to carry out pre-treatment on the feed to be used for the screen scroll centrifuge, for example by filtration. The particle size can then be analysed and the selection of a particular screen size determined; however, this increases the total operating cost. Typical operating ranges of particle size and feed concentration for screen scroll centrifuges are 100–20,000 μm and 3–90% solids by mass in the feed.
In general, slot and screen hole sizes range from 40 to 200 μm with open areas from 5 to 15%. Nevertheless, recent products are claimed to be able to handle particle sizes as low as 50 μm. Screens are generally metallic foil or wedge wire and, more recently, metallic and composite screens perforated with micro-waterjet cutting. Feed flow rate As mentioned in the previous section, feed flow rate is one of the crucial parameters to be controlled to achieve highly efficient output. Centrifuge performance is sensitive to feed flow rate. Even though increasing the feed flow rate can prevent the screens from blocking, wetter solids are obtained. This is due to the increase in hydraulic load on the centrifuge when a higher feed rate is applied, while the differential rotation speed between the cone and scroll, and the retention time within the dewatering zone of the basket, are fixed. In addition, a higher feed rate leads to an increase in the effective thickness of the bed as it is dragged down by the scroll. Basket geometry and its material The choice of materials and the design of the main components of the centrifuge, such as the screen plate, helical screw and basket, can improve the service life of the machine. Another important factor is the conical basket size and its angle within the centrifuge. Different basket sizes and angles between the basket and the helical screw vary the angular speed, and as a result the quality of the product is affected. Moreover, the shape of the helical screw is also important since it optimizes the transportation of cake. A selection of typical screen scroll centrifuges with different basket sizes found in the market is presented in Table 1. The helical scroll and conical basket sections are commonly built at angles of 10°, 15° and 20°. Table 1 A selection of screen scroll centrifuge sizes Advantages and limitations over competitive processes The screen scroll centrifuge has the advantage of a driven helical scroll conveyor which runs at a small differential speed relative to the conical basket. The helical conveyor is installed in the centrifuge to control the transport of the incoming feed, allowing the residence time of the solids in the basket to be increased, giving enhanced process performance. Moreover, the helical conveyor and conical basket sections are designed at a certain angle, with 10°, 15° and 20° being common, such that solid particles are dragged on the conveyor along the cone towards the discharge point. As a result, the solids do not form an even layer but pile up in triangular sections in front of the blades of the conveyor. The residence time within a screen scroll centrifuge is typically about 4 to 15 seconds, which is longer than in a simpler conical basket centrifuge. This permits a sufficient interaction time between wash liquids and cake. However, the presence of the conveyor causes crystal breakage and abrasion problems as well as the formation of an uneven solids layer, which can lead to poor washing. This can be controlled by the conveyor speed. TEMA, an engineering company specializing in centrifuges, claims that a horizontal screen scroll centrifuge can achieve an overall recovery of fines of up to 99%, combined with very low product moisture. Furthermore, it is recommended to operate with a feed containing more than 40% solids with a minimum particle size of 100 μm to achieve the best results.
The use of the screen scroll centrifuge with horizontal orientation is more economical, as its capacity is 40% more tonnage than that of a vertical machine of the same size for the same energy cost. In addition, maintenance of the horizontal screen scroll centrifuge can be carried out easily since total disassembly is not needed. Nowadays, screen scroll centrifuges are equipped with a CIP (clean-in-place) system for self-cleaning within the centrifuge. On the other hand, a downside is possible blockage of the screen when the feed slurry contains small crystals alongside large and normal-sized crystals. This causes the screen to become less permeable, so the liquids flow over the screen rather than passing through the mesh. This problem, however, can be overcome by reducing the flow rate of the feed. Possible heuristics to be used during design of the process The basket, helical screw, screen filter, and other parts are designed to match the process input and meet a required performance. Most of the parts are made from metal to withstand the separation process. A bigger bowl can contain more input but at the same time increases the processing and residence time. The helical screw is made to hold and move the particles in order to control the cake movement. The screen filter is made to sieve the particles from the water. The cleaning process for this type of machine can be difficult compared to other separation equipment. The design is mostly optimized for low maintenance and provided with good sealing to prevent leaking and breakup of the construction. Necessary post-treatment systems After removing liquids from the slurry to form a cake of solids in the centrifuge, further or post-treatment is required to completely dry the solids. Drying is the most common process used in the industry. Another post-treatment system is to treat the products with another stage of deliquoring. New development The modern screen/scroll centrifuge has been modified in several ways from the original design: The addition of a long-life parts package which reduces sliding abrasion in the feed zone by having a cone cap to deflect the feed input from the top. The mechanics of the process have also been optimized to achieve better products. New screens have become available that are perforated with a micro-waterjet process. These screens offer significantly greater product recovery in combination with drier output. This manufacturing process also allows screens to be made from extremely abrasion-resistant materials such as tungsten-carbide composites for very high wear applications such as coal. Ultrafine screening modifications allow screening down to 50 micrometres. The modification is made through the screen filter, which can produce higher solids recovery. Other developments made on the screen scroll centrifuge are tight sealing, the ability to work in continuous mode, minimal power consumption, low-friction gearing, and a low-maintenance design. All of these modifications are made to ensure the safety of the process with less power consumption and ease of maintenance. References Centrifuges
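For orientation, the sketch below back-calculates the centrifugal acceleration quoted earlier for screen scroll centrifuges (roughly 1800 g to 2600 g) from an assumed basket radius and rotational speed. The radius and rpm values are illustrative assumptions, not the specifications of any particular machine.

```python
import math

# Centrifugal acceleration ("g-force") at the basket wall for assumed
# example geometries; compare with the 1800 g - 2600 g range quoted above.

def relative_centrifugal_force(radius_m, rpm):
    """Centripetal acceleration at the basket wall, in multiples of g."""
    omega = 2 * math.pi * rpm / 60.0          # angular speed, rad/s
    return omega ** 2 * radius_m / 9.81

for radius_m, rpm in [(0.25, 2500), (0.30, 2600)]:
    g_level = relative_centrifugal_force(radius_m, rpm)
    print(f"radius {radius_m} m at {rpm} rpm -> about {g_level:,.0f} g")
```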
Screen scroll centrifuge
Chemistry,Engineering
2,745
65,807,245
https://en.wikipedia.org/wiki/Black-box%20obfuscation
In cryptography, black-box obfuscation was a proposed cryptographic primitive which would allow a computer program to be obfuscated in a way such that it was impossible to determine anything about it except its input and output behavior. Black-box obfuscation has been proven to be impossible, even in principle. Impossibility The unobfuscatable programs Barak et al. constructed a family of unobfuscatable programs, for which an efficient attacker can always learn more from any obfuscated code than from black-box access. Broadly, they start by engineering a special pair of programs that cannot be obfuscated together. For randomly selected strings α and β of a fixed, pre-determined length, define one program C to be one that outputs β on input α (and a string of zeroes on every other input), and the other program D to be one that, given the code of a program as input, outputs 1 if that program maps α to β within a bounded number of steps (and 0 otherwise). (Here, D interprets its input as the code for a Turing machine; the bound on the running time in the definition of D is to prevent the function from being uncomputable.) If an efficient attacker only has black-box access, Barak et al. argued, then the attacker only has an exponentially small chance of guessing the password α, and so cannot distinguish the pair of programs (C, D) from a pair (Z, D) in which C is replaced by a program Z that always outputs a string of zeroes. However, if the attacker has access to any obfuscated implementations of the pair, then by feeding the obfuscated code of C to D the attacker obtains 1 with probability 1, whereas feeding it the code of an obfuscated Z always yields 0 unless β happens to be the all-zero string (which should happen only with negligible probability). This means that the attacker can always distinguish the pair (C, D) from the pair (Z, D) with obfuscated code access, but not with black-box access. Since no obfuscator can prevent this attack, Barak et al. conclude that no black-box obfuscator for pairs of programs exists. To conclude the argument, Barak et al. define a third program that combines the functionality of the two previous ones, using an extra input bit to select whether to behave as C or as D. Since equivalently efficient implementations of C and D can be recovered from an implementation of the combined program by hardwiring the value of the selector bit, Barak et al. conclude that the combined program cannot be obfuscated either, which concludes their argument. Impossible variants of black-box obfuscation and other types of unobfuscatable programs In their paper, Barak et al. also prove the following (conditional on appropriate cryptographic assumptions): There are unobfuscatable circuits. There is no black-box approximate obfuscator. There are unobfuscatable, secure, probabilistic private-key cryptosystems. There are unobfuscatable, secure, deterministic digital signature schemes. There are unobfuscatable, secure, deterministic message authentication schemes. There are unobfuscatable, secure pseudorandom functions. For many protocols that are secure in the random oracle model, the protocol becomes insecure if the random oracle is replaced with an artificial cryptographic hash function; in particular, Fiat-Shamir schemes can be attacked. There are unobfuscatable circuits in TC0 (that is, constant-depth threshold circuits). There are unobfuscatable sampling algorithms (in fact, these cannot be obfuscated even approximately). There is no secure software watermarking scheme. Weaker variants In their original paper exploring black-box obfuscation, Barak et al. defined two weaker notions of cryptographic obfuscation which they did not rule out: indistinguishability obfuscation and extractability obfuscation (which they called "differing-inputs obfuscation"). 
Informally, an indistinguishability obfuscator should convert input programs with the same functionality into output programs such that the outputs cannot be efficiently related to the inputs by a bounded attacker, and an extractability obfuscator should be an obfuscator such that if an efficient attacker could relate the outputs to the inputs for any two programs, then the attacker could also produce an input on which the two programs being obfuscated produce different outputs. (Note that an extractability obfuscator is necessarily an indistinguishability obfuscator.) A candidate implementation of indistinguishability obfuscation is under investigation. In 2013, Boyle et al. explored several candidate implementations of extractability obfuscation. References Software obfuscation Cryptographic primitives Unsolvable puzzles
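The unobfuscatable pair from the impossibility argument above can be illustrated with a minimal sketch. The string length, the variable names, and the fact that D receives a Python callable rather than the code of a Turing machine are simplifications made for illustration only.

```python
import secrets

K = 16                             # length of the secret strings (illustrative)
alpha = secrets.token_bytes(K)     # the "password"
beta = secrets.token_bytes(K)

def C(x: bytes) -> bytes:
    """Outputs beta on the secret input alpha, and zeroes otherwise."""
    return beta if x == alpha else bytes(K)

def Z(x: bytes) -> bytes:
    """Always outputs zeroes; black-box queries cannot tell it apart from C
    without guessing alpha."""
    return bytes(K)

def D(program) -> int:
    """Outputs 1 if the supplied program maps alpha to beta, else 0.
    (In the real construction D takes the code of a Turing machine and runs it
    for a bounded number of steps; a callable is used here for brevity.)"""
    return 1 if program(alpha) == beta else 0

# An attacker holding code for the programs distinguishes the two pairs easily:
print(D(C))  # 1  -> identifies the pair (C, D)
print(D(Z))  # 0  -> identifies the pair (Z, D), except if beta is all zeroes
```

The point is that the result printed on the last two lines is available to anyone holding working code for the programs, however obfuscated, but not to an attacker limited to black-box queries.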
Black-box obfuscation
Mathematics,Technology,Engineering
875
46,492,478
https://en.wikipedia.org/wiki/Nexus%20for%20Exoplanet%20System%20Science
The Nexus for Exoplanet System Science (NExSS) initiative is a National Aeronautics and Space Administration (NASA) virtual institute designed to foster interdisciplinary collaboration in the search for life on exoplanets. Led by the Ames Research Center, the NASA Exoplanet Science Institute, and the Goddard Institute for Space Studies, NExSS will help organize the search for life on exoplanets from participating research teams and acquire new knowledge about exoplanets and extrasolar planetary systems. History In 1995, astronomers using ground-based observatories discovered 51 Pegasi b, the first exoplanet orbiting a Sun-like star. NASA launched the Kepler space telescope in 2009 to search for Earth-size exoplanets. By 2015, they had confirmed more than a thousand exoplanets, while several thousand additional candidates awaited confirmation. To help coordinate efforts to sift through and understand the data, NASA needed a way for researchers to collaborate across disciplines. The success of the Virtual Planetary Laboratory research network at the University of Washington led Mary A. Voytek, director of the NASA Astrobiology Program, to model its structure and create the Nexus for Exoplanet System Science (NExSS) initiative. Leaders from three NASA research centers will run the program: Natalie Batalha of NASA's Ames Research Center, Dawn Gelino of the NASA Exoplanet Science Institute, and Anthony Del Genio of NASA's Goddard Institute for Space Studies. Research Functioning as a virtual institute, NExSS is currently composed of sixteen interdisciplinary science teams from ten universities, three NASA centers and two research institutes, who will work together to search for habitable exoplanets that can support life. The US teams were initially selected from a total of about 200 proposals; however, the coalition is expected to expand nationally and internationally as the project gets underway. Teams will also work with amateur citizen scientists who will have the ability to access the public Kepler data and search for exoplanets. NExSS will draw from scientific expertise in each of the four divisions of the Science Mission Directorate: Earth science, planetary science, heliophysics and astrophysics. NExSS research will directly contribute to understanding and interpreting future exoplanet data from the upcoming launches of the Transiting Exoplanet Survey Satellite and James Webb Space Telescope, as well as the planned Nancy Grace Roman Space Telescope mission. Current NExSS research projects as of 2015: See also Notes References Research institutes in the United States NASA groups, organizations, and centers Astrobiology Astrochemistry Exoplanetology Exoplanet search projects
Nexus for Exoplanet System Science
Chemistry,Astronomy,Biology
534
19,298,923
https://en.wikipedia.org/wiki/Soldering%20station
A soldering station is a multipurpose power soldering device designed for soldering electronic components. This type of equipment is mostly used in electronics and electrical engineering. A soldering station consists of one or more soldering tools connected to a main unit, which includes the controls (temperature adjustment) and means of indication, and may be equipped with an electric transformer. Soldering stations may include accessories such as holders and stands, soldering tip cleaners, etc. Soldering stations are widely used in electronics repair workshops, electronics laboratories and in industry. Sometimes simple soldering stations are used for household applications and for hobbies. Soldering Station Components The main elements of a soldering station, which determine its capabilities, are its soldering tools. Different tools are used for different applications, and soldering stations may be equipped with more than one of them at a time. The main tools for soldering are: contact soldering irons; desoldering tweezers or SMD hot tweezers; desoldering gun; hot air gun; infrared heater. The soldering iron is the most common working tool of a soldering station. Some stations may use several soldering irons simultaneously to make the process quicker and more convenient, as there is no need to change the soldering tips or readjust the station or the soldering temperature. Some stations may use specialized soldering irons, such as ultrasonic soldering irons or induction soldering irons. Soldering Irons A soldering iron used as part of a soldering station has a number of advantages. Increased operability The operator may set the temperature according to the solder alloy in use. Stability of the preset temperature. Operation mode indication, including temperature display. Better heating element quality Main unit with power supply Galvanic isolation of the heating element from the electricity network. This increases the safety of the operator and protects the components that are being soldered. The heating element operates at low voltage (10-30 V), increasing safety. It also prolongs the lifetime of the heating element. Grounding of the whole unit. The station has a fuse. Increased user comfort The working part has a smaller size and weight. The station design integrates soldering aid accessories: soldering iron stands, soldering tip cleaners, etc. Some stations have an auto switch-off function. However, most soldering stations can only be used on a desk, and they usually cost more than a standalone soldering iron. Desoldering Tools Desoldering is a very important stage in PCB repair. It is often necessary to remove some components just to make sure they work or to check their condition. That is why it is important to detach the elements without damaging them. The desoldering means that may be integrated in soldering stations are: SMD hot tweezers, which heat up and may not only melt the solder alloy but also grip the component. They may have different types of tips for different applications. Desoldering iron, usually made in the shape of a gun. It is capable of taking in air (vacuum pickup) and solder alloy. Non-contact heating tools, which include hot air and infrared heaters. They are used for SMT disassembly. Hot Air Guns These use a stream of hot air to heat up the components. The hot air is focused on a certain area using special hot air nozzles. Soldering hot air guns are usually capable of providing temperatures from 100 to 480 °C. 
Infrared Heaters Soldering stations with infrared (IR) heaters are a separate type of soldering station and differ considerably from the others. Such stations provide high-precision soldering, and the process is more like that used in the electronics industry. The temperature profile may be set according to the components being soldered. This minimizes the risk of component deformation or damage due to temperature differences. Soldering Station Classification Contact Soldering Stations This type requires a soldering iron equipped with an electronic temperature adjustment unit. The main technical parameter of a contact soldering station is its power, which determines the convenience of operation and the soldering effectiveness. Modern stations have power ratings from 10 to 200 W and more; the most common are models with 50-80 W of power. The higher the power, the more heat can be transferred in the same time. This allows the temperature of the heating element to be reduced to the minimum value needed to melt the solder alloy. Conversely, the lower the power, the higher the temperature needed to melt the solder. A high temperature means a risk of overheating components; this especially concerns semiconductor components and electrolytic capacitors. According to the solder alloy used, these soldering stations may be divided into two subtypes: Tin/lead soldering stations Lead-free soldering stations The latter type is characterized by heating elements with power up to 160 W. Lead-free solder alloys need higher temperatures to melt, so the station needs more power. If the station is equipped with a temperature regulator, it may also be used with traditional lead-containing solder. Digital and analogue soldering stations Stations may be divided into digital and analogue according to the operation of the control unit. Analogue stations have a temperature stabilization that operates as follows: the heating element works until the soldering tip reaches a certain temperature, then the power switches off. When the temperature falls below a certain level, the heating element switches on again and the soldering tip is reheated. The operation is carried out by an electromagnetic relay controlled by electronics and a temperature sensor. The analogue control system has the advantage of low cost. The disadvantage is low precision, which results in overheating of the soldering tip. This leads to problems such as overheating of electronic components and frequent tip replacement. A digital soldering station operates using a PID regulator controlled by a microprocessor. The digital control method is more precise. Induction soldering stations Induction soldering stations are characterized by high power and excellent thermal stability. They use a technology of heating and thermal stabilization based on the Curie temperature. The American manufacturer Metcal is a leader in this market segment; however, there are other brands. Non-contact soldering stations Infrared soldering stations Hot air soldering stations Stations with hot air guns are used in cases where a soldering iron alone is not enough. Disassembling microchips requires a hot air gun, and soldering SMD components with hot air is much more convenient. Hot air guns usually come with special nozzles for regulating the hot air stream. Popular manufacturers: Hakko, Quick, Accta, Goot, etc. Rework systems For professional repair of laptops, game consoles and other electronics, special repair systems are used. 
These repair systems usually combine several components: hot air gun, soldering iron, desoldering gun, etc. This equipment allows large BGAs to be desoldered and soldered effectively. These operations require a special approach and a certain amount of process automation. The most popular manufacturers are Ersa, Martin, Jovy Systems, Quick and Scotle. See also Soldering Soldering iron Soldering gun Solder External links How to Use a Soldering Station Everything about Soldering Station, an Article by TMS Virdi Types of Smd Rework Station Soldering Power tools
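As a rough illustration of the on/off ("analogue") and PID ("digital") temperature control schemes described in the classification section above, the following sketch simulates both against a crude thermal model; the set point, gains and thermal constants are invented for illustration and do not describe any particular station.

```python
# Toy comparison of on/off ("analogue") and PID ("digital") tip-temperature control.
# The thermal model and every constant here are illustrative only.
SETPOINT = 350.0   # target tip temperature, degrees C
AMBIENT = 25.0
DT = 0.1           # simulation time step, seconds

def simulate(controller, steps=600):
    temp, state = AMBIENT, {}
    for _ in range(steps):
        power = controller(temp, state)  # heater drive as a fraction between 0 and 1
        temp += DT * (25.0 * power - 0.05 * (temp - AMBIENT))  # crude heating/cooling model
    return temp

def on_off(temp, state):
    """Relay-style control: full power below the set point, off above it."""
    return 1.0 if temp < SETPOINT else 0.0

def pid(temp, state, kp=0.05, ki=0.01, kd=0.05):
    """PID control with a simple anti-windup rule (integrate only when unsaturated)."""
    error = SETPOINT - temp
    derivative = (error - state.get("e", error)) / DT
    state["e"] = error
    u = kp * error + ki * state.get("i", 0.0) + kd * derivative
    if 0.0 < u < 1.0:
        state["i"] = state.get("i", 0.0) + error * DT
    return min(1.0, max(0.0, u))

print(round(simulate(on_off)), round(simulate(pid)))  # both end up near the 350 C set point
```

The on/off controller cycles around the set point, which is the overshoot behaviour attributed to analogue stations above, while the PID loop holds the tip closer to the target.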
Soldering station
Physics
1,477
66,483,113
https://en.wikipedia.org/wiki/Guinardia
Guinardia is a genus of diatoms belonging to the family Rhizosoleniaceae. The genus was first described by H. Peragallo in 1892. The genus has cosmopolitan distribution. Species: Guinardia delicatula Guinardia flaccida Guinardia pungens Guinardia striata References Diatoms Diatom genera
Guinardia
Biology
76
47,634,345
https://en.wikipedia.org/wiki/Ferro%20%28architecture%29
A ferro (plural ferri) is an item of functional wrought-iron work on the façade of an Italian building. Ferri are a common feature of Medieval and Renaissance architecture in Lazio, Tuscany and Umbria. They are of three main types: ferri da cavallo have a ring for tethering horses, and are set at a certain height from the ground; holders for standards and torches are placed higher on the façade and on the corners of the building; arpioni have a cup-shaped hook or hooks to support cloth for shade or to be dried, and are set near balconies. In Florence, ferri da cavallo and arpioni were often made to resemble the head of a lion, the symbolic marzocco of the Republic of Florence. Later, cats, dragons, horses and fantastic animals were also represented. References Further reading Assunta Maria Adorisio (1996). Per Uso e Per Decoro: L’arte del ferro a Firenze e in Toscana dal eta gotica al XX secolo. Florence: Maria Christina de Montemayor. Giulio Ferrari. ([1920?]) Il ferro nell'arte Italiana. Centosettanta tavole riproduzioni in parte inedite di 368 soggetti, del medio evo, del rinascimento, del periodo barocco e neo-classico raccolte e ordinate con testo esplicativo. Kraus Reprint, 1973. James Lindow (2007). The Renaissance Palace in Florence: magnificence and splendor in fifteenth-century Italy. Aldershot, England; Burlington, VT: Ashgate. Claudio Paolini. Repertorio delle architettura civili di Firenze. [Database] Palazzo Spinelli – Ente Cassa di Risparmio di Firenze. Augusto Pedrini (1929). Il ferro battuto, sbalzato e cesellato, nell-arte italiana, dal secolo undicesimo al secolo diciottesimo. Milan: Ulrico Hoepli. (Published in English: Decorative ironwork of Italy. Atglen PA: Schiffer Publishers, 2010.) Urbano Quinto (1998). Gli antichi segreti del fabbro. Galleria Urbano Quinto. Herbert Railton (1900). Pen drawings of Florence. Cleveland, Ohio: J.H. Jansen. John Superti (2014). I Cavalli di Firenze = The Horses of Florence. Florence: Polistampa. John Superti (2013) Florence's Ironworks - Ferri https://www.youtube.com/watch?v=zKQ5s9Lk1Bo Architectural elements
Ferro (architecture)
Technology,Engineering
586
4,462,117
https://en.wikipedia.org/wiki/Partnership%20for%20a%20New%20Generation%20of%20Vehicles
The Partnership for a New Generation of Vehicles was a co-operative research program between the US government and the three major domestic auto corporations that was aimed at bringing extremely fuel-efficient (up to 80 mpg) vehicles to market by 2003. The partnership, formed in 1993, involved eight federal agencies, the national laboratories, universities, and the United States Council for Automotive Research (USCAR), which comprises DaimlerChrysler, Ford Motor Company, and General Motors Corporation. "Supercar" was the unofficial description for the research-and-development program. On track to achieving its objectives, the program was canceled by the George W. Bush administration in 2001 at the request of the automakers, with some of its aspects shifted to the much more distant FreedomCAR program. Objectives The main purposes of the program were to develop technologies to reduce the impact of cars and light trucks on the environment and to decrease US dependence on imported petroleum. The program was to produce working vehicles achieving up to triple the contemporary fuel efficiency while further minimizing emissions, but without sacrificing affordability, performance, or safety. The common term for the vehicles was "supercar" because of the technological advances. The goal of achieving the target with a family-sized sedan included using new fuel sources, powerplants, aerodynamics, and lightweight materials. The program was established in 1993 to support the domestic US automakers (GM, Ford, and Chrysler) in developing prototype automobiles which would be safe, clean, and affordable; the target was a car the size of the Ford Taurus with triple its fuel efficiency. Results The program "overcame many challenges and has forged a useful and productive partnership of industry and government participants" by "resulting in three concept cars that demonstrate the feasibility of a variety of new automotive technologies" with diesel-electric hybrid drivetrains. The three domestic automakers (GM, Ford, and Chrysler) developed fully operational concept cars. They were full-sized five-passenger family cars and achieved at least 72 mpg. General Motors developed the 80 mpg Precept, Ford designed the 72 mpg Prodigy, and Chrysler built the 72 mpg ESX-3. They featured aerodynamic lightweight aluminum or thermoplastic construction and used a hybrid vehicle drivetrain, pairing 3- or 4-cylinder diesel engines with electric motors drawing from battery packs, including lithium ion batteries. Researchers for the PNGV identified a number of ways to reach 80 mpg, including reducing vehicle weight, increasing engine efficiency, combining gasoline engines and electric motors in hybrid vehicles, implementing regenerative braking, and switching to high-efficiency fuel cell powerplants. 
Specific new technology breakthroughs achieved under the program included the following: Development of carbon foam with extremely high heat conductivity (2000 R&D 100 Award) Near frictionless carbon coating, many times slicker than Teflon (1998 R&D 100 Award) Oxygen-rich air supplier for clean diesel technology (1999 R&D 100 Award) Development of a compact microchannel fuel vaporizer to convert gasoline to hydrogen for fuel cells (1999 R&D 100 Award) Development of aftertreatment devices to remove nitrogen oxides from diesel exhaust with efficiencies greater than 90 percent when used with diesel fuel containing 3 ppm of sulfur Improvement of the overall efficiency and power-to-weight ratios of power electronics to within 25 percent of targets while reducing the cost by 86 percent to $10/kW since 1995 Reduction in cost of lightweight aluminum, magnesium, and glass-fiber-reinforced polymer components to less than 50% of the cost of steel Reduction in the cost of fuel cells from $10,000/kW in 1994 to $300/kW in 2000 Substantial weight reduction to within 5-10% of the vehicle weight reduction goal Criticisms Ralph Nader called the program "an effort to coordinate the transfer of property rights for federally funded research and development to the automotive industry." The program was also criticized by some groups for a focus on diesel solutions; the fuel is seen by some as having inherently high air pollutant emissions. Elizabeth Kolbert, a staff writer at The New Yorker, described that renewable energy is the main problem: "If someone, somewhere, comes up with a source of power that is safe, inexpensive, and for all intents and purposes inexhaustible, then we, the Chinese, the Indians, and everyone else on the planet can keep on truckin'. Barring that, the car of the future may turn out to be no car at all." Notes External links DOE vehicle technologies homepage USCAR Website Energy policy Air pollution Diesel hybrid vehicles
Partnership for a New Generation of Vehicles
Environmental_science
928
3,059,064
https://en.wikipedia.org/wiki/Wood%20warping
Wood warping is a deviation from flatness in timber as a result of internal residual stress caused by uneven shrinkage. Warping primarily occurs due to uneven expansion or contraction caused by changes in moisture content. Warping can occur in wood considered "dry" (wood can take up and release moisture indefinitely) when it takes up moisture unevenly, or when it is allowed to return to its "dry" equilibrium state unevenly, too slowly, or too quickly. Many factors can contribute to wood warp susceptibility: wood species, grain orientation, air flow, sunlight, uneven finishing, temperature, and cutting season. The types of wood warping include: bow: a warp along the length of the face of the wood crook: a warp along the length of the edge of the wood kink: a localized crook, often due to a knot cup: a warp across the width of the face, in which the edges are higher or lower than the center of the wood twist or wind: a distortion in which the two ends do not lie on the same plane. Winding sticks assist in viewing this defect. curl: a warp in the center that creates a sort of bow Wood warping costs the wood industry in the U.S. millions of dollars per year. Straight wood boards that leave a cutting facility sometimes arrive at the store yard warped. Although wood warping has been studied for years, the warping control model for manufacturing composite wood hasn't been updated for about 40 years. Zhiyong Cai, researcher at Texas A&M University, has researched wood warping and was working on a computer software program in 2003 to help manufacturers make changes in the manufacturing process so that wood doesn't arrive at its destination warped after it leaves the mill or factory. See also Drunken trees Forest pathology Dancing Forest Crooked Forest References Further reading WoodWeb – Warp in Drying Society of American Foresters – Warped Wood Woodworking Timber industry Deformation (mechanics) Wood-related terminology
Wood warping
Materials_science,Engineering
399
16,798,439
https://en.wikipedia.org/wiki/HD%2024040%20b
HD 24040 b is a long-period exoplanet taking approximately 3500 days to orbit at 4.6 astronomical units in an almost circular orbit. It has a minimum mass 4 times that of Jupiter. Discovery HD 24040 b was discovered in 2006 based on observations made at the W. M. Keck Observatory in Hawaii. However, because the observations covered less than one complete orbit, there were only weak constraints on the period and mass. The first reliable orbit for HD 24040 b was obtained by astronomers at Haute-Provence Observatory in 2012, who combined the Keck measurements with ones from the SOPHIE and ELODIE spectrographs. The most recent orbit, published in 2015, added additional Keck measurements and refined the orbital parameters. References External links Giant planets Taurus (constellation) Exoplanets discovered in 2006 Exoplanets detected by radial velocity
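The quoted orbital distance and period are roughly consistent with Kepler's third law for a host star of about one solar mass; the stellar mass used below is an assumed round value for illustration only.

```python
import math

def orbital_period_days(a_au: float, star_mass_solar: float = 1.0) -> float:
    """Kepler's third law: P[yr]^2 = a[AU]^3 / M[solar masses]."""
    return math.sqrt(a_au ** 3 / star_mass_solar) * 365.25

# Assuming a roughly solar-mass host star:
print(round(orbital_period_days(4.6)))  # ~3600 days, close to the ~3500-day period quoted above
```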
HD 24040 b
Astronomy
174
5,828,178
https://en.wikipedia.org/wiki/List%20of%20gravitationally%20rounded%20objects%20of%20the%20Solar%20System
This is a list of most likely gravitationally rounded objects (GRO) of the Solar System, which are objects that have a rounded, ellipsoidal shape due to their own gravity (but are not necessarily in hydrostatic equilibrium). Apart from the Sun itself, these objects qualify as planets according to common geophysical definitions of that term. The radii of these objects range over three orders of magnitude, from planetary-mass objects like dwarf planets and some moons to the planets and the Sun. This list does not include small Solar System bodies, but it does include a sample of possible planetary-mass objects whose shapes have yet to be determined. The Sun's orbital characteristics are listed in relation to the Galactic Center, while all other objects are listed in order of their distance from the Sun. Star The Sun is a G-type main-sequence star. It contains almost 99.9% of all the mass in the Solar System. Planets In 2006, the International Astronomical Union (IAU) defined a planet as a body in orbit around the Sun that was large enough to have achieved hydrostatic equilibrium and to have "cleared the neighbourhood around its orbit". The practical meaning of "cleared the neighborhood" is that a planet is comparatively massive enough for its gravitation to control the orbits of all objects in its vicinity. In practice, the term "hydrostatic equilibrium" is interpreted loosely. Mercury is round but not actually in hydrostatic equilibrium, yet it is universally regarded as a planet nonetheless. According to the IAU's explicit count, there are eight planets in the Solar System; four terrestrial planets (Mercury, Venus, Earth, and Mars) and four giant planets, which can be divided further into two gas giants (Jupiter and Saturn) and two ice giants (Uranus and Neptune). When excluding the Sun, the four giant planets account for more than 99% of the mass of the Solar System. Dwarf planets Dwarf planets are bodies orbiting the Sun that are massive and warm enough to have achieved hydrostatic equilibrium, but have not cleared their neighbourhoods of similar objects. Since 2008, there have been five dwarf planets recognized by the IAU, although only Pluto has actually been confirmed to be in hydrostatic equilibrium (Ceres is close to equilibrium, though some anomalies remain unexplained). Ceres orbits in the asteroid belt, between Mars and Jupiter. The others all orbit beyond Neptune. Astronomers usually refer to solid bodies such as Ceres as dwarf planets, even if they are not strictly in hydrostatic equilibrium. They generally agree that several other trans-Neptunian objects (TNOs) may be large enough to be dwarf planets, given current uncertainties. However, there has been disagreement on the required size. Early speculations were based on the small moons of the giant planets, which attain roundness around a threshold of 200 km radius. However, these moons are at higher temperatures than TNOs and are icier than TNOs are likely to be. An IAU question-and-answer press release from 2006 gave estimates of 800 km radius and a corresponding mass as cut-offs that would normally be enough for hydrostatic equilibrium, while stating that observation would be needed to determine the status of borderline cases. Many TNOs in the 200–500 km radius range are dark and low-density bodies, which suggests that they retain internal porosity from their formation, and hence are not planetary bodies (as planetary bodies have sufficient gravitation to collapse out such porosity). In 2023, Emery et al. 
wrote that near-infrared spectroscopy by the James Webb Space Telescope (JWST) in 2022 suggests that Sedna, Gonggong, and Quaoar underwent internal melting, differentiation, and chemical evolution, like the larger dwarf planets Pluto, Eris, Haumea, and Makemake, but unlike "all smaller KBOs". This is because light hydrocarbons are present on their surfaces (e.g. ethane, acetylene, and ethylene), which implies that methane is continuously being resupplied, and that methane would likely come from internal geochemistry. On the other hand, the surfaces of Sedna, Gonggong, and Quaoar have low abundances of CO and CO2, similar to Pluto, Eris, and Makemake, but in contrast to smaller bodies. This suggests that the threshold for dwarf planethood in the trans-Neptunian region is around 500 km radius. In 2024, Kiss et al. found that Quaoar has an ellipsoidal shape incompatible with hydrostatic equilibrium for its current spin. They hypothesised that Quaoar originally had a rapid rotation and was in hydrostatic equilibrium, but that its shape became "frozen in" and did not change as it spun down due to tidal forces from its moon Weywot. If so, this would resemble the situation of Saturn's moon Iapetus, which is too oblate for its current spin. Iapetus is generally still considered a planetary-mass moon nonetheless, though not always. The table below gives Orcus, Quaoar, Gonggong, and Sedna as additional consensus dwarf planets; slightly smaller Salacia, which is larger than 400 km radius, has been included as a borderline case for comparison, (and is therefore italicized). As for objects in the asteroid belt, none are generally agreed as dwarf planets today among astronomers other than Ceres. The second- through fifth-largest asteroids have been discussed as candidates. Vesta (radius ), the second-largest asteroid, appears to have a differentiated interior and therefore likely was once a dwarf planet, but it is no longer very round today. Pallas (radius ), the third-largest asteroid, appears never to have completed differentiation and likewise has an irregular shape. Vesta and Pallas are nonetheless sometimes considered small terrestrial planets anyway by sources preferring a geophysical definition, because they do share similarities to the rocky planets of the inner solar system. The fourth-largest asteroid, Hygiea (radius ), is icy. The question remains open if it is currently in hydrostatic equilibrium: while Hygiea is round today, it was probably previously catastrophically disrupted and today might be just a gravitational aggregate of the pieces. The fifth-largest asteroid, Interamnia (radius ), is icy and has a shape consistent with hydrostatic equilibrium for a slightly shorter rotation period than it now has. Satellites There are at least 19 natural satellites in the Solar System that are known to be massive enough to be close to hydrostatic equilibrium: seven of Saturn, five of Uranus, four of Jupiter, and one each of Earth, Neptune, and Pluto. Alan Stern calls these satellite planets, although the term major moon is more common. The smallest natural satellite that is gravitationally rounded is Saturn I Mimas (radius ). This is smaller than the largest natural satellite that is known not to be gravitationally rounded, Neptune VIII Proteus (radius ). Several of these were once in equilibrium but are no longer: these include Earth's moon and all of the moons listed for Saturn apart from Titan and Rhea. 
The status of Callisto, Titan, and Rhea is uncertain, as is that of the moons of Uranus, Pluto and Eris. The other large moons (Io, Europa, Ganymede, and Triton) are generally believed to still be in equilibrium today. Other moons that were once in equilibrium but are no longer very round, such as Saturn IX Phoebe (radius ), are not included. In addition to not being in equilibrium, Mimas and Tethys have very low densities and it has been suggested that they may have non-negligible internal porosity, in which case they would not be satellite planets. The moons of the trans-Neptunian objects (other than Charon) have not been included, because they appear to follow the normal situation for TNOs rather than the moons of Saturn and Uranus, and become solid at a larger size (900–1000 km diameter, rather than 400 km as for the moons of Saturn and Uranus). Eris I Dysnomia and Orcus I Vanth, though larger than Mimas, are dark bodies in the size range that should allow for internal porosity, and in the case of Dysnomia a low density is known. Satellites are listed first in order from the Sun, and second in order from their parent body. For the round moons, this mostly matches the Roman numeral designations, with the exceptions of Iapetus and the Uranian system. This is because the Roman numeral designations originally reflected distance from the parent planet and were updated for each new discovery until 1851, but by 1892, the numbering system for the then-known satellites had become "frozen" and from then on followed order of discovery. Thus Miranda (discovered 1948) is Uranus V despite being the innermost of Uranus' five round satellites. The missing Saturn VII is Hyperion, which is not large enough to be round (mean radius ). See also List of Solar System objects by size Lists of astronomical objects List of former planets Planetary-mass object Notes Unless otherwise cited Manual calculations (unless otherwise cited) Individual calculations Other notes References Hydrostatic equilibrium Solar System
List of gravitationally rounded objects of the Solar System
Astronomy
1,902
15,227,236
https://en.wikipedia.org/wiki/MED26
Mediator of RNA polymerase II transcription subunit 26 is an enzyme that in humans is encoded by the MED26 gene. It forms part of the Mediator complex. The activation of gene transcription is a multistep process that is triggered by factors that recognize transcriptional enhancer sites in DNA. These factors work with co-activators to direct transcriptional initiation by the RNA polymerase II apparatus. The protein encoded by this gene is a subunit of the CRSP (cofactor required for SP1 activation) complex, which, along with TFIID, is required for efficient activation by SP1. This protein is also a component of other multisubunit complexes e.g. thyroid hormone receptor-(TR-) associated proteins which interact with TR and facilitate TR function on DNA templates in conjunction with initiation factors and cofactors. Activity MED26 is a transcription elongation factor that increases the overall transcription rate of RNA polymerase II by reactivating transcription elongation complexes that have arrested transcription. It does this through recruiting ELL/EAF- and P-TEFb- containing complexes to promoters via a direct interaction with the N-terminal domain (NTD). The MED26 NTD also binds TFIID, and TFIID and elongation complexes interact with MED26 through overlapping binding sites. MED26 NTD may function as a molecular switch contributing to the transition of Pol II into productive elongation. The three structural domains of TFIIS are conserved from yeast to human. The 80 or so N-terminal residues form a protein interaction domain containing a conserved motif, which has been called the LW motif because of the invariant leucine and tryptophan residues it contains. Although the N-terminal domain is not needed for transcriptional activity, a similar sequence has been identified in other transcription factors and proteins that are predominantly nuclear localized. Specific examples are listed below: MED26 (also known as CRSP70 and ARC70), a subunit of the Mediator complex, which is required for the activity of the enhancer-binding protein Sp1. Elongin A, a subunit of a transcription elongation factor previously known as SIII. It increases the rate of transcription by suppressing transient pausing of the elongation complex. PPP1R10, a nuclear regulatory subunit of protein phosphatase 1 that was previously known as p99, FB19 or PNUTS. PIBP, a small hypothetical protein that could be a phosphoinositide binding protein. IWS1, which is thought to function in both transcription initiation and elongation. TFIIS, which rescues RNA polymerase II from backtracked pause states. The N-terminal domain of MED26 is a protein fold known as a TFIIS N-terminal domain (or TND). It is a compact five-helix bundle. The hydrophobic core residues of helices 2, 3, and 4 are well conserved among TFIIS domains, although helix 1 is less conserved. Interactions MED26 has been shown to interact with MED8, Cyclin-dependent kinase 8, POLR2A, MED12 and MED28. It also acts synergistically to mediate the interaction between REST (a Kruppel-type zinc finger transcription factor that binds to a 21-bp RE1 silencing element present in over 900 human genes) and Mediator. References Further reading Protein domains
MED26
Biology
711
68,552,209
https://en.wikipedia.org/wiki/Oxalate%20phosphite
The oxalate phosphites are chemical compounds containing oxalate and phosphite anions. They are also called oxalatophosphites or phosphite oxalates. Oxalate phosphites can form metal-organic framework compounds. Related compounds include the nitrite oxalates, arsenite oxalates, phosphate oxalates and oxalatophosphonates. The oxalate ion is rectangular and planar. The phosphite ion is shaped as a triangular pyramid. Because of their high charge and rigid shape, they will bridge across more than one cation, in particular hard cations with a higher charge such as +3. Hydrogen can convert some of the oxygen atoms on the anions to OH groups and reduce the charge. Many oxalate phosphite compounds have microporous structures in which amines direct the structure formation. List References Oxalates Phosphites Mixed anion compounds
Oxalate phosphite
Physics,Chemistry
195
30,874,683
https://en.wikipedia.org/wiki/Stemming
In linguistic morphology and information retrieval, stemming is the process of reducing inflected (or sometimes derived) words to their word stem, base or root form—generally a written word form. The stem need not be identical to the morphological root of the word; it is usually sufficient that related words map to the same stem, even if this stem is not in itself a valid root. Algorithms for stemming have been studied in computer science since the 1960s. Many search engines treat words with the same stem as synonyms, as a kind of query expansion, a process called conflation. A computer program or subroutine that stems words may be called a stemming program, stemming algorithm, or stemmer. Examples A stemmer for English operating on the stem cat should identify such strings as cats, catlike, and catty. A stemming algorithm might also reduce the words fishing, fished, and fisher to the stem fish. The stem need not be a word, for example the Porter algorithm reduces argue, argued, argues, arguing, and argus to the stem argu. History The first published stemmer was written by Julie Beth Lovins in 1968. This paper was remarkable for its early date and had great influence on later work in this area. Her paper refers to three earlier major attempts at stemming algorithms, by Professor John W. Tukey of Princeton University, the algorithm developed at Harvard University by Michael Lesk, under the direction of Professor Gerard Salton, and a third algorithm developed by James L. Dolby of R and D Consultants, Los Altos, California. A later stemmer was written by Martin Porter and was published in the July 1980 issue of the journal Program. This stemmer was very widely used and became the de facto standard algorithm used for English stemming. Dr. Porter received the Tony Kent Strix award in 2000 for his work on stemming and information retrieval. Many implementations of the Porter stemming algorithm were written and freely distributed; however, many of these implementations contained subtle flaws. As a result, these stemmers did not match their potential. To eliminate this source of error, Martin Porter released an official free software (mostly BSD-licensed) implementation of the algorithm around the year 2000. He extended this work over the next few years by building Snowball, a framework for writing stemming algorithms, and implemented an improved English stemmer together with stemmers for several other languages. The Paice-Husk Stemmer was developed by Chris D. Paice at Lancaster University in the late 1980s; it is an iterative stemmer and features an externally stored set of stemming rules. The standard set of rules provides a 'strong' stemmer and may specify the removal or replacement of an ending. The replacement technique avoids the need for a separate stage in the process to recode or provide partial matching. Paice also developed a direct measurement for comparing stemmers based on counting the over-stemming and under-stemming errors. Algorithms There are several types of stemming algorithms which differ with respect to performance, accuracy and how certain stemming obstacles are overcome. A simple stemmer looks up the inflected form in a lookup table. The advantages of this approach are that it is simple, fast, and easily handles exceptions. The disadvantages are that all inflected forms must be explicitly listed in the table: new or unfamiliar words are not handled, even if they are perfectly regular (e.g. cats ~ cat), and the table may be large. 
For languages with simple morphology, like English, table sizes are modest, but highly inflected languages like Turkish may have hundreds of potential inflected forms for each root. A lookup approach may use preliminary part-of-speech tagging to avoid overstemming. The production technique The lookup table used by a stemmer is generally produced semi-automatically. For example, if the word is "run", then the inverted algorithm might automatically generate the forms "running", "runs", "runned", and "runly". The last two forms are valid constructions, but they are unlikely. Suffix-stripping algorithms Suffix stripping algorithms do not rely on a lookup table that consists of inflected forms and root form relations. Instead, a typically smaller list of "rules" is stored which provides a path for the algorithm, given an input word form, to find its root form. Some examples of the rules include: if the word ends in 'ed', remove the 'ed' if the word ends in 'ing', remove the 'ing' if the word ends in 'ly', remove the 'ly' Suffix stripping approaches enjoy the benefit of being much simpler to maintain than brute force algorithms, assuming the maintainer is sufficiently knowledgeable in the challenges of linguistics and morphology and in encoding suffix stripping rules. Suffix stripping algorithms are sometimes regarded as crude, given their poor performance when dealing with exceptional relations (like 'ran' and 'run'). The solutions produced by suffix stripping algorithms are limited to those lexical categories which have well-known suffixes with few exceptions. This, however, is a problem, as not all parts of speech have such a well formulated set of rules. Lemmatisation attempts to improve upon this challenge. Prefix stripping may also be implemented. Of course, not all languages use prefixing or suffixing. Additional algorithm criteria Suffix stripping algorithms may differ in results for a variety of reasons. One such reason is whether the algorithm constrains whether the output word must be a real word in the given language. Some approaches do not require the word to actually exist in the language lexicon (the set of all words in the language). Alternatively, some suffix stripping approaches maintain a database (a large list) of all known morphological word roots that exist as real words. These approaches check the list for the existence of the term prior to making a decision. Typically, if the term does not exist, alternate action is taken. This alternate action may involve several other criteria. The non-existence of an output term may serve to cause the algorithm to try alternate suffix stripping rules. It can be the case that two or more suffix stripping rules apply to the same input term, which creates an ambiguity as to which rule to apply. The algorithm may assign (by human hand or stochastically) a priority to one rule or another. Or the algorithm may reject one rule application because it results in a non-existent term whereas the other overlapping rule does not. For example, given the English term friendlies, the algorithm may identify the ies suffix and apply the appropriate rule and achieve the result of friendl. friendl is likely not found in the lexicon, and therefore the rule is rejected. One improvement upon basic suffix stripping is the use of suffix substitution. Similar to a stripping rule, a substitution rule replaces a suffix with an alternate suffix. For example, there could exist a rule that replaces ies with y. How this affects the algorithm varies with the algorithm's design. 
To illustrate, the algorithm may identify that both the ies suffix stripping rule and the suffix substitution rule apply. Since the stripping rule results in a non-existent term in the lexicon, but the substitution rule does not, the substitution rule is applied instead. In this example, friendlies becomes friendly instead of friendl. Diving further into the details, a common technique is to apply rules in a cyclical fashion (recursively, as computer scientists would say). After applying the suffix substitution rule in this example scenario, a second pass is made to identify matching rules on the term friendly, where the ly stripping rule is likely identified and accepted. In summary, friendlies becomes (via substitution) friendly, which becomes (via stripping) friend. This example also helps illustrate the difference between a rule-based approach and a brute force approach. In a brute force approach, the algorithm would search for friendlies in the set of hundreds of thousands of inflected word forms and ideally find the corresponding root form friend. In the rule-based approach, the three rules mentioned above would be applied in succession to converge on the same solution. Chances are that the brute force approach would be slower, as lookup algorithms have direct access to the solution, while the rule-based approach must try several options, and combinations of them, and then choose which result seems to be the best. Lemmatisation algorithms A more complex approach to the problem of determining a stem of a word is lemmatisation. This process involves first determining the part of speech of a word, and applying different normalization rules for each part of speech. The part of speech is detected prior to attempting to find the root since, for some languages, the stemming rules change depending on a word's part of speech. This approach is highly conditional upon obtaining the correct lexical category (part of speech). While there is overlap between the normalization rules for certain categories, identifying the wrong category or being unable to produce the right category limits the added benefit of this approach over suffix stripping algorithms. The basic idea is that, if the stemmer is able to grasp more information about the word being stemmed, then it can apply more accurate normalization rules (which, unlike suffix stripping rules, can also modify the stem). Stochastic algorithms Stochastic algorithms involve using probability to identify the root form of a word. Stochastic algorithms are trained (they "learn") on a table of root form to inflected form relations to develop a probabilistic model. This model is typically expressed in the form of complex linguistic rules, similar in nature to those in suffix stripping or lemmatisation. Stemming is performed by inputting an inflected form to the trained model and having the model produce the root form according to its internal ruleset, which again is similar to suffix stripping and lemmatisation, except that the decisions involved in applying the most appropriate rule, or whether or not to stem the word and just return the same word, or whether to apply two different rules sequentially, are taken on the grounds that the output word will have the highest probability of being correct (which is to say, the smallest probability of being incorrect, which is how it is typically measured). Some lemmatisation algorithms are stochastic in that, given a word which may belong to multiple parts of speech, a probability is assigned to each possible part. 
This may take into account the surrounding words, called the context, or not. Context-free grammars do not take into account any additional information. In either case, after assigning the probabilities to each possible part of speech, the most likely part of speech is chosen, and from there the appropriate normalization rules are applied to the input word to produce the normalized (root) form. n-gram analysis Some stemming techniques use the n-gram context of a word to choose the correct stem for a word. Hybrid approaches Hybrid approaches use two or more of the approaches described above in unison. A simple example is a suffix tree algorithm which first consults a lookup table using brute force. However, instead of trying to store the entire set of relations between words in a given language, the lookup table is kept small and is only used to store a minute amount of "frequent exceptions" like "ran => run". If the word is not in the exception list, apply suffix stripping or lemmatisation and output the result. Affix stemmers In linguistics, the term affix refers to either a prefix or a suffix. In addition to dealing with suffixes, several approaches also attempt to remove common prefixes. For example, given the word indefinitely, identify that the leading "in" is a prefix that can be removed. Many of the same approaches mentioned earlier apply, but go by the name affix stripping. A study of affix stemming for several European languages can be found here. Matching algorithms Such algorithms use a stem database (for example a set of documents that contain stem words). These stems, as mentioned above, are not necessarily valid words themselves (but rather common sub-strings, as the "brows" in "browse" and in "browsing"). In order to stem a word the algorithm tries to match it with stems from the database, applying various constraints, such as on the relative length of the candidate stem within the word (so that, for example, the short prefix "be", which is the stem of such words as "be", "been" and "being", would not be considered as the stem of the word "beside"). Language challenges While much of the early academic work in this area was focused on the English language (with significant use of the Porter Stemmer algorithm), many other languages have been investigated. Savoy, Jacques; Light Stemming Approaches for the French, Portuguese, German and Hungarian Languages, ACM Symposium on Applied Computing, SAC 2006, Stemming in Hungarian at CLEF 2005 Hebrew and Arabic are still considered difficult research languages for stemming. English stemmers are fairly trivial (with only occasional problems, such as "dries" being the third-person singular present form of the verb "dry", "axes" being the plural of "axe" as well as "axis"); but stemmers become harder to design as the morphology, orthography, and character encoding of the target language becomes more complex. For example, an Italian stemmer is more complex than an English one (because of a greater number of verb inflections), a Russian one is more complex (more noun declensions), a Hebrew one is even more complex (due to nonconcatenative morphology, a writing system without vowels, and the requirement of prefix stripping: Hebrew stems can be two, three or four characters, but not more), and so on. Multilingual stemming Multilingual stemming applies morphological rules of two or more languages simultaneously instead of rules for only a single language when interpreting a search query. Commercial systems using multilingual stemming exist. 
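As a small illustration of the rule-based and hybrid approaches described above, the sketch below consults a tiny table of frequent exceptions first (the brute-force element of a hybrid stemmer), then applies substitution and stripping rules cyclically, checking each candidate against a lexicon. The rules, lexicon and exception table are invented for this example and do not reproduce the Porter or any other published stemmer.

```python
# Toy hybrid, rule-based stemmer. Rules, lexicon, and exceptions are illustrative only.
LEXICON = {"friend", "friendly", "fish", "run", "cat"}
EXCEPTIONS = {"ran": "run", "geese": "goose"}   # small "frequent exceptions" lookup table
RULES = [
    ("ies", "y"),   # substitution: friendlies -> friendly
    ("ing", ""),    # stripping:    fishing    -> fish
    ("ed", ""),
    ("ly", ""),     # stripping:    friendly   -> friend
    ("s", ""),
]

def stem(word: str) -> str:
    if word in EXCEPTIONS:                      # hybrid step: irregular forms looked up directly
        return EXCEPTIONS[word]
    changed = True
    while changed:                              # apply rules cyclically until none fires
        changed = False
        for suffix, replacement in RULES:
            if word.endswith(suffix):
                candidate = word[: -len(suffix)] + replacement
                if candidate in LEXICON:        # reject rules that would produce non-words
                    word, changed = candidate, True
                    break
    return word

print(stem("friendlies"), stem("fishing"), stem("ran"))  # friend fish run
```

A production stemmer such as Porter's instead uses a larger, carefully ordered rule set with conditions on stem length rather than a full lexicon.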
Error metrics There are two error measurements in stemming algorithms, overstemming and understemming. Overstemming is an error where two separate inflected words are stemmed to the same root, but should not have been—a false positive. Understemming is an error where two separate inflected words should be stemmed to the same root, but are not—a false negative. Stemming algorithms attempt to minimize each type of error, although reducing one type can lead to increasing the other. For example, the widely used Porter stemmer stems "universal", "university", and "universe" to "univers". This is a case of overstemming: though these three words are etymologically related, their modern meanings are in widely different domains, so treating them as synonyms in a search engine will likely reduce the relevance of the search results. An example of understemming in the Porter stemmer is "alumnus" → "alumnu", "alumni" → "alumni", "alumna"/"alumnae" → "alumna". These English words keep their Latin morphology, and so these near-synonyms are not conflated. Applications Stemming is used as an approximate method for grouping words with a similar basic meaning together. For example, a text mentioning "daffodils" is probably closely related to a text mentioning "daffodil" (without the s). But in some cases, words with the same morphological stem have idiomatic meanings which are not closely related: a user searching for "marketing" will not be satisfied by most documents mentioning "markets" but not "marketing". Information retrieval Stemmers can be used as elements in query systems such as Web search engines. The effectiveness of stemming for English query systems was soon found to be rather limited, however, and this led early information retrieval researchers to deem stemming irrelevant in general. An alternative approach, based on searching for n-grams rather than stems, may be used instead. Also, stemmers may provide greater benefits in other languages than English. Airio, Eija (2006); Word Normalization and Decompounding in Mono- and Bilingual IR, Information Retrieval 9:249–271 Domain analysis Stemming is used to determine domain vocabularies in domain analysis. Use in commercial products Many commercial companies have been using stemming since at least the 1980s and have produced algorithmic and lexical stemmers in many languages. Building Multilingual Solutions by using Sharepoint Products and Technologies, Microsoft Technet The Snowball stemmers have been compared with commercial lexical stemmers with varying results. CLEF 2004: Stephen Tomlinson "Finnish, Portuguese and Russian Retrieval with Hummingbird SearchServer" Google Search adopted word stemming in 2003. Previously a search for "fish" would not have returned "fishing". Other software search algorithms vary in their use of word stemming. Programs that simply search for substrings will obviously find "fish" in "fishing" but when searching for "fishes" will not find occurrences of the word "fish". Text mining Stemming is used as a task in pre-processing texts before performing text mining analyses on them. See also — stemming is a form of reverse derivation — stemming is generally regarded as a form of NLP — implements several stemming algorithms in Python — designed for creating stemming algorithms References Further reading Dawson, J. L. (1974); Suffix Removal for Word Conflation, Bulletin of the Association for Literary and Linguistic Computing, 2(3): 33–46 Frakes, W. B. 
(1984); Term Conflation for Information Retrieval, Cambridge University Press Frakes, W. B. & Fox, C. J. (2003); Strength and Similarity of Affix Removal Stemming Algorithms, SIGIR Forum, 37: 26–30 Frakes, W. B. (1992); Stemming algorithms, Information retrieval: data structures and algorithms, Upper Saddle River, NJ: Prentice-Hall, Inc. Hafer, M. A. & Weiss, S. F. (1974); Word segmentation by letter successor varieties, Information Processing & Management 10 (11/12), 371–386 Harman, D. (1991); How Effective is Suffixing?, Journal of the American Society for Information Science 42 (1), 7–15 Hull, D. A. (1996); Stemming Algorithms – A Case Study for Detailed Evaluation, JASIS, 47(1): 70–84 Hull, D. A. & Grefenstette, G. (1996); A Detailed Analysis of English Stemming Algorithms, Xerox Technical Report Kraaij, W. & Pohlmann, R. (1996); Viewing Stemming as Recall Enhancement, in Frei, H.-P.; Harman, D.; Schauble, P.; and Wilkinson, R. (eds.); Proceedings of the 17th ACM SIGIR conference held at Zurich, August 18–22, pp. 40–48 Krovetz, R. (1993); Viewing Morphology as an Inference Process, in Proceedings of ACM-SIGIR93, pp. 191–203 Lennon, M.; Pierce, D. S.; Tarry, B. D.; & Willett, P. (1981); An Evaluation of some Conflation Algorithms for Information Retrieval, Journal of Information Science, 3: 177–183 Lovins, J. (1971); Error Evaluation for Stemming Algorithms as Clustering Algorithms, JASIS, 22: 28–40 Lovins, J. B. (1968); Development of a Stemming Algorithm, Mechanical Translation and Computational Linguistics, 11, 22—31 Jenkins, Marie-Claire; and Smith, Dan (2005); Conservative Stemming for Search and Indexing Paice, C. D. (1990); Another Stemmer , SIGIR Forum, 24: 56–61 Paice, C. D. (1996) Method for Evaluation of Stemming Algorithms based on Error Counting, JASIS, 47(8): 632–649 Popovič, Mirko; and Willett, Peter (1992); The Effectiveness of Stemming for Natural-Language Access to Slovene Textual Data, Journal of the American Society for Information Science, Volume 43, Issue 5 (June), pp. 384–390 Porter, Martin F. (1980); An Algorithm for Suffix Stripping, Program, 14(3): 130–137 Savoy, J. (1993); Stemming of French Words Based on Grammatical Categories Journal of the American Society for Information Science, 44(1), 1–9 Ulmschneider, John E.; & Doszkocs, Tamas (1983); A Practical Stemming Algorithm for Online Search Assistance, Online Review, 7(4), 301–318 Xu, J.; & Croft, W. B. 
(1998); Corpus-Based Stemming Using Cooccurrence of Word Variants, ACM Transactions on Information Systems, 16(1), 61–81 External links Apache OpenNLP—includes Porter and Snowball stemmers SMILE Stemmer—free online service, includes Porter and Paice/Husk Lancaster stemmers (Java API) Themis—open source IR framework, includes Porter stemmer implementation (PostgreSQL, Java API) Snowball—free stemming algorithms for many languages, includes source code, including stemmers for five Romance languages Snowball on C#—port of Snowball stemmers for C# (14 languages) Python bindings to Snowball API Ruby-Stemmer—Ruby extension to Snowball API PECL—PHP extension to the Snowball API Oleander Porter's algorithm—stemming library in C++ released under BSD Unofficial home page of the Lovins stemming algorithm—with source code in a couple of languages Official home page of the Porter stemming algorithm—including source code in several languages Official home page of the Lancaster stemming algorithm—Lancaster University, UK Official home page of the UEA-Lite Stemmer—University of East Anglia, UK Overview of stemming algorithms PTStemmer—A Java/Python/.Net stemming toolkit for the Portuguese language jsSnowball—open source JavaScript implementation of Snowball stemming algorithms for many languages Snowball Stemmer—implementation for Java hindi_stemmer—open source stemmer for Hindi czech_stemmer—open source stemmer for Czech Comparative Evaluation of Arabic Language Morphological Analysers and Stemmers Tamil Stemmer Linguistic morphology Natural language processing Tasks of natural language processing Computational linguistics Information retrieval techniques
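To make the overstemming and understemming examples in the Error metrics section concrete, here is a minimal Python sketch using NLTK's PorterStemmer. It assumes NLTK is installed; the outputs noted in the comments follow the article's examples and may differ slightly between Porter implementations.

```python
# Minimal sketch of the over-/understemming examples discussed above.
# Assumes the NLTK library is installed (pip install nltk); exact outputs
# can vary slightly between Porter stemmer implementations.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

# Overstemming: etymologically related but semantically distant words
# collapse to one stem ("univers"), a false positive for a search engine.
for word in ["universal", "university", "universe"]:
    print(word, "->", stemmer.stem(word))

# Understemming: Latin-derived near-synonyms keep distinct stems
# ("alumnu", "alumni", "alumna"), a false negative.
for word in ["alumnus", "alumni", "alumna", "alumnae"]:
    print(word, "->", stemmer.stem(word))
```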
Stemming
Technology
4,628
30,492,365
https://en.wikipedia.org/wiki/BigCouch
BigCouch is an open-source, highly available, fault-tolerant, clustered & API-compliant version of Apache CouchDB, which was maintained by Cloudant. On January 5, 2012, Cloudant announced they would contribute the BigCouch horizontal scaling framework into the CouchDB project. The merge was completed in July 2013. Cloudant announced in June 2015 that they were no longer supporting BigCouch. BigCouch allows users to create clusters of CouchDBs that are distributed over an arbitrary number of servers. While it appears to the end-user as one CouchDB instance, it is in fact one or more nodes in an elastic cluster, acting in concert to store and retrieve documents, index and serve views, and serve CouchApps. Clusters behave according to concepts outlined in Amazon's Dynamo paper, namely that each node can accept requests, data is placed on partitions based on a consistent hashing algorithm, and quorum protocols are used for read/write operations. It relies on Erlang and the Open Telecom Platform, although it uses its own RPC mechanism rather than OTP's own "rex" server. BigCouch was developed to address a common complaint raised by CouchDB skeptics: that "it doesn't scale", by which they mean it does not scale horizontally across many servers. This capability is necessary if CouchDB is to be used to address Big Data problems. References External links BigCouch Project at GitHub Cloudant Dynamo: Amazon's Highly Available Key-value Store, SOSP 2007 Cloud applications Cloud infrastructure Distributed file systems
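As a rough illustration of the Dynamo-style placement described above, the following Python sketch shows how a consistent-hashing ring can map document IDs to nodes. The node names, virtual-node count, and hashing details are illustrative assumptions, not BigCouch's actual implementation.

```python
# Illustrative consistent-hashing ring; not BigCouch's actual code.
# Node names and the number of virtual nodes are made-up examples.
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=16):
        self._ring = []  # sorted list of (hash, node) points on the ring
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((_hash(f"{node}-{i}"), node))
        self._ring.sort()

    def node_for(self, doc_id: str) -> str:
        """Return the node whose ring position follows the document's hash."""
        h = _hash(doc_id)
        idx = bisect.bisect(self._ring, (h,)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["node1", "node2", "node3"])
print(ring.node_for("doc-42"))  # the same doc id always maps to the same node
```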
BigCouch
Technology
321
8,331,945
https://en.wikipedia.org/wiki/Ultra-short%20baseline%20acoustic%20positioning%20system
USBL (ultra-short baseline, also known as SSBL for super short base line) is a method of underwater acoustic positioning. A USBL system consists of a transceiver, which is mounted on a pole under a ship, and a transponder or responder on the seafloor, on a towfish, or on an ROV. A computer, or "topside unit", is used to calculate a position from the ranges and bearings measured by the transceiver. Mechanism An acoustic pulse is transmitted by the transceiver and detected by the subsea transponder, which replies with its own acoustic pulse. This return pulse is detected by the shipboard transceiver. The time from the transmission of the initial acoustic pulse until the reply is detected is measured by the USBL system and is converted into a range. To calculate a subsea position, the USBL calculates both a range and an angle from the transceiver to the subsea beacon. Angles are measured by the transceiver, which contains an array of transducers. The transceiver head normally contains three or more transducers separated by a baseline of 10 cm or less, hence the "short baseline" name. A method called “phase-differencing” within this transducer array is used to calculate the direction to the subsea transponder. The presence of environmental noise reduces USBL positioning accuracy. Combining Kalman filtering with an element array has been used to filter the signals and improve accuracy, using the minimum mean-square error rule. Applications USBLs are used in "inverted" (iUSBL) configurations, with the transceiver mounted on an autonomous underwater vehicle, and the transponder on the ship/shore that launches it. In this case, the "topside" processing happens inside the vehicle to allow it to locate the transponder for applications such as automatic docking, target tracking, and the exchange of text messages. References External links Navigation Surveying
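As a rough numerical illustration of the mechanism described above, the sketch below converts a two-way travel time into a slant range and then, together with the measured angles, into a relative position. The nominal sound speed of 1500 m/s and the angle convention are simplifying assumptions; real systems also apply sound-velocity profiles, ray bending, and vessel attitude corrections.

```python
# Simplified USBL position calculation; assumed values, no ray bending,
# no attitude compensation, nominal sound speed of 1500 m/s.
import math

SOUND_SPEED = 1500.0  # metres per second (assumed nominal value)

def slant_range(two_way_travel_time_s: float) -> float:
    """Range from transceiver to transponder from the round-trip time."""
    return SOUND_SPEED * two_way_travel_time_s / 2.0

def relative_position(range_m: float, bearing_deg: float, depression_deg: float):
    """Convert range, horizontal bearing and depression angle to x, y, z offsets."""
    horizontal = range_m * math.cos(math.radians(depression_deg))
    x = horizontal * math.sin(math.radians(bearing_deg))   # east offset
    y = horizontal * math.cos(math.radians(bearing_deg))   # north offset
    z = range_m * math.sin(math.radians(depression_deg))   # depth below transceiver
    return x, y, z

r = slant_range(0.8)                      # 0.8 s round trip -> 600 m slant range
print(relative_position(r, 45.0, 30.0))   # example bearing and depression angles
```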
Ultra-short baseline acoustic positioning system
Engineering
410
71,520,462
https://en.wikipedia.org/wiki/NGC%205523
NGC 5523 is an unbarred spiral galaxy in the constellation of Boötes, registered in the New General Catalogue (NGC). The galaxy forms an equilateral triangle with NGC 5641 and NGC 5466 when observed using a telescope from the ground. Observation history NGC 5523 was discovered by William Herschel on 19 May 1784 using an 18.7-inch f/13 speculum telescope. John Louis Emil Dreyer, in the New General Catalogue, described it as "faint, pretty large, pretty much extended 90°, 10th magnitude star to northwest". It was described in Burnham's Celestial Handbook as "faint, pretty large (5.0'x0.8'), much elongated, nearly edge-on". Steve Coe, an American astronomer, described it as "faint, pretty large, much elongated (3 X 1) in PA 90 and brighter in the middle at 100X." General The galaxy was originally thought to be isolated due to its lack of interaction with other galaxies in the past 1 to 3 billion years. However, a 2016 study reported that some irregularities in the contour of the disc and the nucleated bulge at the center of the galaxy suggested that the galaxy had previously undergone soft collisions with other galaxies. Notes References Galaxies discovered in 1784 5523 17900519 Unbarred spiral galaxies NGC 5523 Discoveries by William Herschel
NGC 5523
Astronomy
278
4,932,111
https://en.wikipedia.org/wiki/Capacitor
In electrical engineering, a capacitor is a device that stores electrical energy by accumulating electric charges on two closely spaced surfaces that are insulated from each other. The capacitor was originally known as the condenser, a term still encountered in a few compound names, such as the condenser microphone. It is a passive electronic component with two terminals. The utility of a capacitor depends on its capacitance. While some capacitance exists between any two electrical conductors in proximity in a circuit, a capacitor is a component designed specifically to add capacitance to some part of the circuit. The physical form and construction of practical capacitors vary widely and many types of capacitor are in common use. Most capacitors contain at least two electrical conductors, often in the form of metallic plates or surfaces separated by a dielectric medium. A conductor may be a foil, thin film, sintered bead of metal, or an electrolyte. The nonconducting dielectric acts to increase the capacitor's charge capacity. Materials commonly used as dielectrics include glass, ceramic, plastic film, paper, mica, air, and oxide layers. When an electric potential difference (a voltage) is applied across the terminals of a capacitor, for example when a capacitor is connected across a battery, an electric field develops across the dielectric, causing a net positive charge to collect on one plate and net negative charge to collect on the other plate. No current actually flows through a perfect dielectric. However, there is a flow of charge through the source circuit. If the condition is maintained sufficiently long, the current through the source circuit ceases. If a time-varying voltage is applied across the leads of the capacitor, the source experiences an ongoing current due to the charging and discharging cycles of the capacitor. Capacitors are widely used as parts of electrical circuits in many common electrical devices. Unlike a resistor, an ideal capacitor does not dissipate energy, although real-life capacitors do dissipate a small amount (see Non-ideal behavior). The earliest forms of capacitors were created in the 1740s, when European experimenters discovered that electric charge could be stored in water-filled glass jars that came to be known as Leyden jars. Today, capacitors are widely used in electronic circuits for blocking direct current while allowing alternating current to pass. In analog filter networks, they smooth the output of power supplies. In resonant circuits they tune radios to particular frequencies. In electric power transmission systems, they stabilize voltage and power flow. The property of energy storage in capacitors was exploited as dynamic memory in early digital computers, and still is in modern DRAM. History Natural capacitors have existed since prehistoric times. The most common example of natural capacitance are the static charges accumulated between clouds in the sky and the surface of the Earth, where the air between them serves as the dielectric. This results in bolts of lightning when the breakdown voltage of the air is exceeded. In October 1745, Ewald Georg von Kleist of Pomerania, Germany, found that charge could be stored by connecting a high-voltage electrostatic generator by a wire to a volume of water in a hand-held glass jar. Von Kleist's hand and the water acted as conductors and the jar as a dielectric (although details of the mechanism were incorrectly identified at the time). 
Von Kleist found that touching the wire resulted in a powerful spark, much more painful than that obtained from an electrostatic machine. The following year, the Dutch physicist Pieter van Musschenbroek invented a similar capacitor, which was named the Leyden jar, after the University of Leiden where he worked. He also was impressed by the power of the shock he received, writing, "I would not take a second shock for the kingdom of France." Daniel Gralath was the first to combine several jars in parallel to increase the charge storage capacity. Benjamin Franklin investigated the Leyden jar and came to the conclusion that the charge was stored on the glass, not in the water as others had assumed. He also adopted the term "battery", (denoting the increase of power with a row of similar units as in a battery of cannon), subsequently applied to clusters of electrochemical cells. In 1747, Leyden jars were made by coating the inside and outside of jars with metal foil, leaving a space at the mouth to prevent arcing between the foils. The earliest unit of capacitance was the jar, equivalent to about 1.11 nanofarads. Leyden jars or more powerful devices employing flat glass plates alternating with foil conductors were used exclusively up until about 1900, when the invention of wireless (radio) created a demand for standard capacitors, and the steady move to higher frequencies required capacitors with lower inductance. More compact construction methods began to be used, such as a flexible dielectric sheet (like oiled paper) sandwiched between sheets of metal foil, rolled or folded into a small package. Early capacitors were known as condensers, a term that is still occasionally used today, particularly in high power applications, such as automotive systems. The term condensatore was used by Alessandro Volta in 1780 to refer to a device, similar to his electrophorus, he developed to measure electricity, and translated in 1782 as condenser, where the name referred to the device's ability to store a higher density of electric charge than was possible with an isolated conductor. The term became deprecated because of the ambiguous meaning of steam condenser, with capacitor becoming the recommended term in the UK from 1926, while the change occurred considerably later in the United States. Since the beginning of the study of electricity, non-conductive materials like glass, porcelain, paper and mica have been used as insulators. Decades later, these materials were also well-suited for use as the dielectric for the first capacitors. Paper capacitors, made by sandwiching a strip of impregnated paper between strips of metal and rolling the result into a cylinder, were commonly used in the late 19th century; their manufacture started in 1876, and they were used from the early 20th century as decoupling capacitors in telephony. Porcelain was used in the first ceramic capacitors. In the early years of Marconi's wireless transmitting apparatus, porcelain capacitors were used for high voltage and high frequency application in the transmitters. On the receiver side, smaller mica capacitors were used for resonant circuits. Mica capacitors were invented in 1909 by William Dubilier. Prior to World War II, mica was the most common dielectric for capacitors in the United States. Charles Pollak (born Karol Pollak), the inventor of the first electrolytic capacitors, found out that the oxide layer on an aluminum anode remained stable in a neutral or alkaline electrolyte, even when the power was switched off. 
In 1896 he was granted U.S. Patent No. 672,913 for an "Electric liquid capacitor with aluminum electrodes". Solid electrolyte tantalum capacitors were invented by Bell Laboratories in the early 1950s as a miniaturized and more reliable low-voltage support capacitor to complement their newly invented transistor. With the development of plastic materials by organic chemists during the Second World War, the capacitor industry began to replace paper with thinner polymer films. One very early development in film capacitors was described in British Patent 587,953 in 1944. Electric double-layer capacitors (now supercapacitors) were invented in 1957 when H. Becker developed a "Low voltage electrolytic capacitor with porous carbon electrodes". He believed that the energy was stored as a charge in the carbon pores used in his capacitor as in the pores of the etched foils of electrolytic capacitors. Because the double layer mechanism was not known by him at the time, he wrote in the patent: "It is not known exactly what is taking place in the component if it is used for energy storage, but it leads to an extremely high capacity." The MOS capacitor was later widely adopted as a storage capacitor in memory chips, and as the basic building block of the charge-coupled device (CCD) in image sensor technology. In 1966, Dr. Robert Dennard invented modern DRAM architecture, combining a single MOS transistor per capacitor. Theory of operation Overview A capacitor consists of two conductors separated by a non-conductive region. The non-conductive region can either be a vacuum or an electrical insulator material known as a dielectric. Examples of dielectric media are glass, air, paper, plastic, ceramic, and even a semiconductor depletion region chemically identical to the conductors. From Coulomb's law a charge on one conductor will exert a force on the charge carriers within the other conductor, attracting opposite polarity charge and repelling like polarity charges, thus an opposite polarity charge will be induced on the surface of the other conductor. The conductors thus hold equal and opposite charges on their facing surfaces, and the dielectric develops an electric field. An ideal capacitor is characterized by a constant capacitance C, in farads in the SI system of units, defined as the ratio of the positive or negative charge Q on each conductor to the voltage V between them: A capacitance of one farad (F) means that one coulomb of charge on each conductor causes a voltage of one volt across the device. Because the conductors (or plates) are close together, the opposite charges on the conductors attract one another due to their electric fields, allowing the capacitor to store more charge for a given voltage than when the conductors are separated, yielding a larger capacitance. In practical devices, charge build-up sometimes affects the capacitor mechanically, causing its capacitance to vary. In this case, capacitance is defined in terms of incremental changes: Hydraulic analogy In the hydraulic analogy, voltage is analogous to water pressure and electrical current through a wire is analogous to water flow through a pipe. A capacitor is like an elastic diaphragm within the pipe. Although water cannot pass through the diaphragm, it moves as the diaphragm stretches or un-stretches. Capacitance is analogous to diaphragm elasticity. 
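The defining relations referred to in this passage did not survive extraction; in standard notation they are (a reconstruction of the conventional formulas, not a quotation of the original article):

```latex
% Defining relation of an ideal capacitor, and the incremental form used
% when charge build-up alters the geometry (reconstructed standard formulas).
C = \frac{Q}{V}, \qquad 1\,\mathrm{F} = 1\,\mathrm{C}/\mathrm{V}, \qquad
C = \frac{\mathrm{d}Q}{\mathrm{d}V}
```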
In the same way that the ratio of charge differential to voltage would be greater for a larger capacitance value (), the ratio of water displacement to pressure would be greater for a diaphragm that flexes more readily. In an AC circuit, a capacitor behaves like a diaphragm in a pipe, allowing the charge to move on both sides of the dielectric while no electrons actually pass through. For DC circuits, a capacitor is analogous to a hydraulic accumulator, storing the energy until pressure is released. Similarly, they can be used to smooth the flow of electricity in rectified DC circuits in the same way an accumulator damps surges from a hydraulic pump. Charged capacitors and stretched diaphragms both store potential energy. The more a capacitor is charged, the higher the voltage across the plates (). Likewise, the greater the displaced water volume, the greater the elastic potential energy. Electrical current affects the charge differential across a capacitor just as the flow of water affects the volume differential across a diaphragm. Just as capacitors experience dielectric breakdown when subjected to high voltages, diaphragms burst under extreme pressures. Just as capacitors block DC while passing AC, diaphragms displace no water unless there is a change in pressure. Circuit equivalence at short-time limit and long-time limit In a circuit, a capacitor can behave differently at different time instants. However, it is usually easy to think about the short-time limit and long-time limit: In the long-time limit, after the charging/discharging current has saturated the capacitor, no current would come into (or get out of) either side of the capacitor; Therefore, the long-time equivalence of capacitor is an open circuit. In the short-time limit, if the capacitor starts with a certain voltage V, since the voltage drop on the capacitor is known at this instant, we can replace it with an ideal voltage source of voltage V. Specifically, if V=0 (capacitor is uncharged), the short-time equivalence of a capacitor is a short circuit. Parallel-plate capacitor The simplest model of a capacitor consists of two thin parallel conductive plates each with an area of separated by a uniform gap of thickness filled with a dielectric of permittivity . It is assumed the gap is much smaller than the dimensions of the plates. This model applies well to many practical capacitors which are constructed of metal sheets separated by a thin layer of insulating dielectric, since manufacturers try to keep the dielectric very uniform in thickness to avoid thin spots which can cause failure of the capacitor. Since the separation between the plates is uniform over the plate area, the electric field between the plates is constant, and directed perpendicularly to the plate surface, except for an area near the edges of the plates where the field decreases because the electric field lines "bulge" out of the sides of the capacitor. This "fringing field" area is approximately the same width as the plate separation, , and assuming is small compared to the plate dimensions, it is small enough to be ignored. Therefore, if a charge of is placed on one plate and on the other plate (the situation for unevenly charged plates is discussed below), the charge on each plate will be spread evenly in a surface charge layer of constant charge density coulombs per square meter, on the inside surface of each plate. From Gauss's law the magnitude of the electric field between the plates is . 
The voltage (difference) between the plates is defined as the line integral of the electric field over a line (in the z-direction) from one plate to another. The capacitance is defined as . Substituting above into this equation shows that, in a capacitor, the highest capacitance is achieved with a high permittivity dielectric material, large plate area, and small separation between the plates. Since the area of the plates increases with the square of the linear dimensions and the separation increases linearly, the capacitance scales with the linear dimension of a capacitor (), or as the cube root of the volume. A parallel plate capacitor can only store a finite amount of energy before dielectric breakdown occurs. The capacitor's dielectric material has a dielectric strength Ud which sets the capacitor's breakdown voltage at . The maximum energy that the capacitor can store is therefore The maximum energy is a function of dielectric volume, permittivity, and dielectric strength. Changing the plate area and the separation between the plates while maintaining the same volume causes no change of the maximum amount of energy that the capacitor can store, so long as the distance between plates remains much smaller than both the length and width of the plates. In addition, these equations assume that the electric field is entirely concentrated in the dielectric between the plates. In reality there are fringing fields outside the dielectric, for example between the sides of the capacitor plates, which increase the effective capacitance of the capacitor. This is sometimes called parasitic capacitance. For some simple capacitor geometries this additional capacitance term can be calculated analytically. It becomes negligibly small when the ratios of plate width to separation and length to separation are large. For unevenly charged plates: If one plate is charged with while the other is charged with , and if both plates are separated from other materials in the environment, then the inner surface of the first plate will have , and the inner surface of the second plate will have charge. Therefore, the voltage between the plates is . Note that the outer surface of both plates will have , but those charges do not affect the voltage between the plates. If one plate is charged with while the other is charged with , and if the second plate is connected to ground, then the inner surface of the first plate will have , and the inner surface of the second plate will have . Therefore, the voltage between the plates is . Note that the outer surface of both plates will have zero charge. Interleaved capacitor For number of plates in a capacitor, the total capacitance would be where is the capacitance for a single plate and is the number of interleaved plates. As shown in the figure on the right, the interleaved plates can be seen as parallel plates connected to each other. Every pair of adjacent plates acts as a separate capacitor; the number of pairs is always one less than the number of plates, hence the multiplier. Energy stored in a capacitor To increase the charge and voltage on a capacitor, work must be done by an external power source to move charge from the negative to the positive plate against the opposing force of the electric field. If the voltage on the capacitor is , the work required to move a small increment of charge from the negative to the positive plate is . The energy is stored in the increased electric field between the plates. 
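A small numerical sketch of the parallel-plate relations discussed above. The plate area, gap, relative permittivity, and dielectric strength are arbitrary example values; the formulas C = εA/d, V_bd = E_ds·d and U = ½CV² are the standard relations the text refers to.

```python
# Parallel-plate capacitor: capacitance, breakdown voltage and stored energy.
# Example values are arbitrary; formulas are the standard ones the text cites.
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2, gap_m, eps_r):
    """C = eps_r * eps0 * A / d, ignoring fringing fields."""
    return eps_r * EPS0 * area_m2 / gap_m

def breakdown_voltage(gap_m, dielectric_strength_v_per_m):
    """V_bd = E_ds * d."""
    return dielectric_strength_v_per_m * gap_m

def stored_energy(capacitance_f, voltage_v):
    """U = 1/2 * C * V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

C = parallel_plate_capacitance(area_m2=1e-2, gap_m=1e-5, eps_r=3.0)   # ~26.6 nF
V_bd = breakdown_voltage(1e-5, 20e6)                                  # 200 V at 20 MV/m
print(C, V_bd, stored_energy(C, V_bd))
```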
The total energy stored in a capacitor (expressed in joules) is equal to the total work done in establishing the electric field from an uncharged state. where is the charge stored in the capacitor, is the voltage across the capacitor, and is the capacitance. This potential energy will remain in the capacitor until the charge is removed. If charge is allowed to move back from the positive to the negative plate, for example by connecting a circuit with resistance between the plates, the charge moving under the influence of the electric field will do work on the external circuit. If the gap between the capacitor plates is constant, as in the parallel plate model above, the electric field between the plates will be uniform (neglecting fringing fields) and will have a constant value . In this case the stored energy can be calculated from the electric field strength The last formula above is equal to the energy density per unit volume in the electric field multiplied by the volume of field between the plates, confirming that the energy in the capacitor is stored in its electric field. Current–voltage relation The current I(t) through any component in an electric circuit is defined as the rate of flow of a charge Q(t) passing through it. Actual charges – electrons – cannot pass through the dielectric of an ideal capacitor. Rather, one electron accumulates on the negative plate for each one that leaves the positive plate, resulting in an electron depletion and consequent positive charge on one electrode that is equal and opposite to the accumulated negative charge on the other. Thus the charge on the electrodes is equal to the integral of the current as well as proportional to the voltage, as discussed above. As with any antiderivative, a constant of integration is added to represent the initial voltage V(t0). This is the integral form of the capacitor equation: Taking the derivative of this and multiplying by C yields the derivative form: for independent of time, voltage and electric charge. The dual of the capacitor is the inductor, which stores energy in a magnetic field rather than an electric field. Its current-voltage relation is obtained by exchanging current and voltage in the capacitor equations and replacing with the inductance . RC circuits A series circuit containing only a resistor, a capacitor, a switch and a constant DC source of voltage is known as a charging circuit. If the capacitor is initially uncharged while the switch is open, and the switch is closed at , it follows from Kirchhoff's voltage law that Taking the derivative and multiplying by C, gives a first-order differential equation: At , the voltage across the capacitor is zero and the voltage across the resistor is V0. The initial current is then . With this assumption, solving the differential equation yields where is the time constant of the system. As the capacitor reaches equilibrium with the source voltage, the voltages across the resistor and the current through the entire circuit decay exponentially. In the case of a discharging capacitor, the capacitor's initial voltage () replaces . The equations become AC circuits Impedance, the vector sum of reactance and resistance, describes the phase difference and the ratio of amplitudes between sinusoidally varying voltage and sinusoidally varying current at a given frequency. Fourier analysis allows any signal to be constructed from a spectrum of frequencies, whence the circuit's reaction to the various frequencies may be found. 
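The RC charging behaviour described above can be sketched numerically. The component values are arbitrary examples; the expressions v(t) = V0(1 − exp(−t/RC)) and τ = RC are the standard results the passage refers to.

```python
# RC charging: v(t) = V0 * (1 - exp(-t / (R*C))); example component values.
import math

V0 = 5.0       # source voltage, volts (example)
R = 10e3       # series resistance, ohms (example)
C = 100e-9     # capacitance, farads (example)

tau = R * C    # time constant, seconds (1 ms with these values)

def v_capacitor(t):
    """Voltage across an initially uncharged capacitor at time t."""
    return V0 * (1.0 - math.exp(-t / tau))

def i_circuit(t):
    """Charging current, which decays from V0/R toward zero."""
    return (V0 / R) * math.exp(-t / tau)

for t in (0.0, tau, 3 * tau, 5 * tau):
    print(f"t = {t:.4e} s  v = {v_capacitor(t):.3f} V  i = {i_circuit(t) * 1e3:.3f} mA")
```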
The reactance and impedance of a capacitor are respectively where is the imaginary unit and is the angular frequency of the sinusoidal signal. The phase indicates that the AC voltage lags the AC current by 90°: the positive current phase corresponds to increasing voltage as the capacitor charges; zero current corresponds to instantaneous constant voltage, etc. Impedance decreases with increasing capacitance and increasing frequency. This implies that a higher-frequency signal or a larger capacitor results in a lower voltage amplitude per current amplitude – an AC "short circuit" or AC coupling. Conversely, for very low frequencies, the reactance is high, so that a capacitor is nearly an open circuit in AC analysis – those frequencies have been "filtered out". Capacitors are different from resistors and inductors in that the impedance is inversely proportional to the defining characteristic; i.e., capacitance. A capacitor connected to an alternating voltage source has a displacement current flowing through it. In the case that the voltage source is V0cos(ωt), the displacement current can be expressed as: At , the capacitor has a maximum (or peak) current whereby . The ratio of peak voltage to peak current is due to capacitive reactance (denoted XC). XC approaches zero as ω approaches infinity. If XC approaches 0, the capacitor resembles a short wire that strongly passes current at high frequencies. XC approaches infinity as ω approaches zero. If XC approaches infinity, the capacitor resembles an open circuit that poorly passes low frequencies. The current of the capacitor may be expressed in the form of cosines to better compare with the voltage of the source: In this situation, the current is out of phase with the voltage by +π/2 radians or +90 degrees, i.e. the current leads the voltage by 90°. Laplace circuit analysis (s-domain) When using the Laplace transform in circuit analysis, the impedance of an ideal capacitor with no initial charge is represented in the domain by: where is the capacitance, and is the complex frequency. Circuit analysis Capacitors in parallel Capacitors in a parallel configuration each have the same applied voltage. Their capacitances add up. Charge is apportioned among them by size. Using the schematic diagram to visualize parallel plates, it is apparent that each capacitor contributes to the total surface area. For capacitors in series Connected in series, the schematic diagram reveals that the separation distance, not the plate area, adds up. The capacitors each store instantaneous charge build-up equal to that of every other capacitor in the series. The total voltage difference from end to end is apportioned to each capacitor according to the inverse of its capacitance. The entire series acts as a capacitor smaller than any of its components. Capacitors are combined in series to achieve a higher working voltage, for example for smoothing a high voltage power supply. The voltage ratings, which are based on plate separation, add up, if capacitance and leakage currents for each capacitor are identical. In such an application, on occasion, series strings are connected in parallel, forming a matrix. The goal is to maximize the energy storage of the network without overloading any capacitor. For high-energy storage with capacitors in series, some safety considerations must be applied to ensure one capacitor failing and leaking current does not apply too much voltage to the other series capacitors. 
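The combination rules and the reactance relation described above can be sketched as follows. The component values are assumed examples; capacitances add in parallel, reciprocals add in series, and |X_C| = 1/(ωC).

```python
# Series/parallel combination of capacitors and capacitive reactance.
import math

def parallel(*caps):
    """Capacitances in parallel simply add."""
    return sum(caps)

def series(*caps):
    """Capacitances in series combine by reciprocals: 1/C = sum(1/Ci)."""
    return 1.0 / sum(1.0 / c for c in caps)

def reactance(c_farads, freq_hz):
    """Magnitude of capacitive reactance, X_C = 1 / (2*pi*f*C)."""
    return 1.0 / (2.0 * math.pi * freq_hz * c_farads)

c1, c2 = 10e-6, 22e-6          # example capacitor values
print(parallel(c1, c2))        # 32 uF
print(series(c1, c2))          # ~6.9 uF, smaller than either part
print(reactance(c1, 50.0))     # ~318 ohms at 50 Hz
```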
Series connection is also sometimes used to adapt polarized electrolytic capacitors for bipolar AC use. Voltage distribution in parallel-to-series networks. To model the distribution of voltages from a single charged capacitor connected in parallel to a chain of capacitors in series : Note: This is only correct if all capacitance values are equal. The power transferred in this arrangement is: Non-ideal behavior In practice, capacitors deviate from the ideal capacitor equation in several aspects. Some of these, such as leakage current and parasitic effects are linear, or can be analyzed as nearly linear, and can be accounted for by adding virtual components to form an equivalent circuit. The usual methods of network analysis can then be applied. In other cases, such as with breakdown voltage, the effect is non-linear and ordinary (normal, e.g., linear) network analysis cannot be used, the effect must be considered separately. Yet another group of artifacts may exist, including temperature dependence, that may be linear but invalidates the assumption in the analysis that capacitance is a constant. Finally, combined parasitic effects such as inherent inductance, resistance, or dielectric losses can exhibit non-uniform behavior at varying frequencies of operation. Breakdown voltage Above a particular electric field strength, known as the dielectric strength Eds, the dielectric in a capacitor becomes conductive. The voltage at which this occurs is called the breakdown voltage of the device, and is given by the product of the dielectric strength and the separation between the conductors, The maximum energy that can be stored safely in a capacitor is limited by the breakdown voltage. Exceeding this voltage can result in a short circuit between the plates, which can often cause permanent damage to the dielectric, plates, or both. Due to the scaling of capacitance and breakdown voltage with dielectric thickness, all capacitors made with a particular dielectric have approximately equal maximum energy density, to the extent that the dielectric dominates their volume. For air dielectric capacitors the breakdown field strength is of the order 2–5 MV/m (or kV/mm); for mica the breakdown is 100–300 MV/m; for oil, 15–25 MV/m; it can be much less when other materials are used for the dielectric. The dielectric is used in very thin layers and so absolute breakdown voltage of capacitors is limited. Typical ratings for capacitors used for general electronics applications range from a few volts to 1 kV. As the voltage increases, the dielectric must be thicker, making high-voltage capacitors larger per capacitance than those rated for lower voltages. The breakdown voltage is critically affected by factors such as the geometry of the capacitor conductive parts; sharp edges or points increase the electric field strength at that point and can lead to a local breakdown. Once this starts to happen, the breakdown quickly tracks through the dielectric until it reaches the opposite plate, leaving carbon behind and causing a short (or relatively low resistance) circuit. The results can be explosive, as the short in the capacitor draws current from the surrounding circuitry and dissipates the energy. However, in capacitors with particular dielectrics and thin metal electrodes, shorts are not formed after breakdown. It happens because a metal melts or evaporates in a breakdown vicinity, isolating it from the rest of the capacitor. 
The usual breakdown route is that the field strength becomes large enough to pull electrons in the dielectric from their atoms thus causing conduction. Other scenarios are possible, such as impurities in the dielectric, and, if the dielectric is of a crystalline nature, imperfections in the crystal structure can result in an avalanche breakdown as seen in semi-conductor devices. Breakdown voltage is also affected by pressure, humidity and temperature. Equivalent circuit An ideal capacitor only stores and releases electrical energy, without dissipation. In practice, capacitors have imperfections within the capacitor's materials that result in the following parasitic components: , the equivalent series inductance, due to the leads. This is usually significant only at relatively high frequencies. Two resistances that add a real-valued component to the total impedance, which wastes power: , a small series resistance in the leads. Becomes more relevant as frequency increases. , a small conductance (or reciprocally, a large resistance) in parallel with the capacitance, to account for imperfect dielectric material. This causes a small leakage current across the dielectric (see ) that slowly discharges the capacitor over time. This conductance dominates the total resistance at very low frequencies. Its value varies greatly depending on the capacitor material and quality. Simplified RLC series model As frequency increases, the capacitive impedance (a negative reactance) reduces, so the dielectric's conductance becomes less important and the series components become more significant. Thus, a simplified RLC series model valid for a large frequency range simply treats the capacitor as being in series with an equivalent series inductance and a frequency-dependent equivalent series resistance , which varies little with frequency. Unlike the previous model, this model is not valid at DC and very low frequencies where is relevant. Inductive reactance increases with frequency. Because its sign is positive, it counteracts the capacitance. At the RLC circuit's natural frequency , the inductance perfectly cancels the capacitance, so total reactance is zero. Since the total impedance at is just the real-value of , average power dissipation reaches its maximum of , where V is the root mean square (RMS) voltage across the capacitor. At even higher frequencies, the inductive impedance dominates, so the capacitor undesirably behaves instead like an inductor. High-frequency engineering involves accounting for the inductance of all connections and components. Q factor For a simplified model of a capacitor as an ideal capacitor in series with an equivalent series resistance , the capacitor's quality factor (or Q) is the ratio of the magnitude of its capacitive reactance to its resistance at a given frequency : The Q factor is a measure of its efficiency: the higher the Q factor of the capacitor, the closer it approaches the behavior of an ideal capacitor. Dissipation factor is its reciprocal. Ripple current Ripple current is the AC component of an applied source (often a switched-mode power supply) whose frequency may be constant or varying. Ripple current causes heat to be generated within the capacitor due to the dielectric losses caused by the changing field strength together with the current flow across the slightly resistive supply lines or the electrolyte in the capacitor. The equivalent series resistance (ESR) is the amount of internal series resistance one would add to a perfect capacitor to model this. 
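A sketch of the simplified series model described above: the capacitor is treated as C in series with an assumed ESR and ESL, giving a self-resonant frequency f0 = 1/(2π√(LC)) and a quality factor Q = |X_C|/ESR at a chosen frequency. The component values are illustrative assumptions, not data for any particular part.

```python
# Simplified RLC series model of a real capacitor (illustrative values).
import math

C   = 100e-9   # nominal capacitance, F (example)
ESR = 0.05     # equivalent series resistance, ohms (assumed)
ESL = 2e-9     # equivalent series inductance, H (assumed)

def impedance(freq_hz):
    """Complex impedance of the series ESR + ESL + C model."""
    w = 2.0 * math.pi * freq_hz
    return complex(ESR, w * ESL - 1.0 / (w * C))

def self_resonant_frequency():
    """Frequency where inductive and capacitive reactances cancel."""
    return 1.0 / (2.0 * math.pi * math.sqrt(ESL * C))

def q_factor(freq_hz):
    """Q = |X_C| / ESR for the simplified model at a given frequency."""
    w = 2.0 * math.pi * freq_hz
    return 1.0 / (w * C * ESR)

f0 = self_resonant_frequency()               # ~11.3 MHz with these values
print(f0, abs(impedance(f0)), q_factor(1e5))
```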
Some types of capacitors, primarily tantalum and aluminum electrolytic capacitors, as well as some film capacitors have a specified rating value for maximum ripple current. Tantalum electrolytic capacitors with solid manganese dioxide electrolyte are limited by ripple current and generally have the highest ESR ratings in the capacitor family. Exceeding their ripple limits can lead to shorts and burning parts. Aluminum electrolytic capacitors, the most common type of electrolytic, suffer a shortening of life expectancy at higher ripple currents. If ripple current exceeds the rated value of the capacitor, it tends to result in explosive failure. Ceramic capacitors generally have no ripple current limitation and have some of the lowest ESR ratings. Film capacitors have very low ESR ratings but exceeding rated ripple current may cause degradation failures. Capacitance instability The capacitance of certain capacitors decreases as the component ages. In ceramic capacitors, this is caused by degradation of the dielectric. The type of dielectric, ambient operating and storage temperatures are the most significant aging factors, while the operating voltage usually has a smaller effect, i.e., usual capacitor design is to minimize voltage coefficient. The aging process may be reversed by heating the component above the Curie point. Aging is fastest near the beginning of life of the component, and the device stabilizes over time. Electrolytic capacitors age as the electrolyte evaporates. In contrast with ceramic capacitors, this occurs towards the end of life of the component. Temperature dependence of capacitance is usually expressed in parts per million (ppm) per °C. It can usually be taken as a broadly linear function but can be noticeably non-linear at the temperature extremes. The temperature coefficient may be positive or negative, depending mostly on the dielectric material. Some, designated C0G/NP0, but called NPO, have a somewhat negative coefficient at one temperature, positive at another, and zero in between. Such components may be specified for temperature-critical circuits. Capacitors, especially ceramic capacitors, and older designs such as paper capacitors, can absorb sound waves resulting in a microphonic effect. Vibration moves the plates, causing the capacitance to vary, in turn inducing AC current. Some dielectrics also generate piezoelectricity. The resulting interference is especially problematic in audio applications, potentially causing feedback or unintended recording. In the reverse microphonic effect, the varying electric field between the capacitor plates exerts a physical force, moving them as a speaker. This can generate audible sound, but drains energy and stresses the dielectric and the electrolyte, if any. Current and voltage reversal Current reversal occurs when the current changes direction. Voltage reversal is the change of polarity in a circuit. Reversal is generally described as the percentage of the maximum rated voltage that reverses polarity. In DC circuits, this is usually less than 100%, often in the range of 0 to 90%, whereas AC circuits experience 100% reversal. In DC circuits and pulsed circuits, current and voltage reversal are affected by the damping of the system. Voltage reversal is encountered in RLC circuits that are underdamped. The current and voltage reverse direction, forming a harmonic oscillator between the inductance and capacitance. 
The current and voltage tend to oscillate and may reverse direction several times, with each peak being lower than the previous, until the system reaches an equilibrium. This is often referred to as ringing. In comparison, critically damped or overdamped systems usually do not experience a voltage reversal. Reversal is also encountered in AC circuits, where the peak current is equal in each direction. For maximum life, capacitors usually need to be able to handle the maximum amount of reversal that a system may experience. An AC circuit experiences 100% voltage reversal, while underdamped DC circuits experience less than 100%. Reversal creates excess electric fields in the dielectric, causes excess heating of both the dielectric and the conductors, and can dramatically shorten the life expectancy of the capacitor. Reversal ratings often affect the design considerations for the capacitor, from the choice of dielectric materials and voltage ratings to the types of internal connections used. Dielectric absorption Capacitors made with any type of dielectric material show some level of "dielectric absorption" or "soakage". On discharging a capacitor and disconnecting it, after a short time it may develop a voltage due to hysteresis in the dielectric. This effect is objectionable in applications such as precision sample and hold circuits or timing circuits. The level of absorption depends on many factors, from design considerations to charging time, since the absorption is a time-dependent process. However, the primary factor is the type of dielectric material. Capacitors such as tantalum electrolytic or polysulfone film exhibit relatively high absorption, while polystyrene or Teflon allow very small levels of absorption. In some capacitors where dangerous voltages and energies exist, such as in flashtubes, television sets, microwave ovens and defibrillators, the dielectric absorption can recharge the capacitor to hazardous voltages after it has been shorted or discharged. Any capacitor containing over 10 joules of energy is generally considered hazardous, while 50 joules or higher is potentially lethal. A capacitor may regain anywhere from 0.01% to 20% of its original charge over a period of several minutes, allowing a seemingly safe capacitor to become surprisingly dangerous. Leakage No material is a perfect insulator, thus all dielectrics allow some small level of current to leak through, which can be measured with a megohmmeter (Robinson's Manual of Radio Telegraphy and Telephony by S. S. Robinson, US Naval Institute, 1924, p. 170). Leakage is equivalent to a resistor in parallel with the capacitor. Constant exposure to factors such as heat, mechanical stress, or humidity can cause the dielectric to deteriorate, resulting in excessive leakage, a problem often seen in older vacuum tube circuits, particularly where oiled paper and foil capacitors were used. In many vacuum tube circuits, interstage coupling capacitors are used to conduct a varying signal from the plate of one tube to the grid circuit of the next stage. A leaky capacitor can cause the grid circuit voltage to be raised from its normal bias setting, causing excessive current or signal distortion in the downstream tube. In power amplifiers this can cause the plates to glow red, or current limiting resistors to overheat, even fail. 
Similar considerations apply to solid-state (transistor) amplifiers built from discrete components, but, owing to lower heat production and the use of modern polyester dielectric barriers, this once-common problem has become relatively rare. Electrolytic failure from disuse Aluminum electrolytic capacitors are conditioned when manufactured by applying a voltage sufficient to initiate the proper internal chemical state. This state is maintained by regular use of the equipment. If a system using electrolytic capacitors is unused for a long period of time it can lose its conditioning. Sometimes they fail with a short circuit when next operated. Lifespan All capacitors have varying lifespans, depending upon their construction, operational conditions, and environmental conditions. Solid-state ceramic capacitors generally have very long lives under normal use, with little dependency on factors such as vibration or ambient temperature; factors like humidity, mechanical stress, and fatigue play the primary role in their failure. Failure modes may differ. Some capacitors may experience a gradual loss of capacitance, increased leakage or an increase in equivalent series resistance (ESR), while others may fail suddenly or even catastrophically. For example, metal-film capacitors are more prone to damage from stress and humidity, but will self-heal when a breakdown in the dielectric occurs. The formation of a glow discharge at the point of failure prevents arcing by vaporizing the metallic film in that spot, neutralizing any short circuit with minimal loss in capacitance. When enough pinholes accumulate in the film, a total failure occurs in a metal-film capacitor, generally happening suddenly without warning. Electrolytic capacitors generally have the shortest lifespans. Electrolytic capacitors are affected very little by vibration or humidity, but factors such as ambient and operational temperatures play a large role in their failure, which gradually occurs as an increase in ESR (up to 300%) and as much as a 20% decrease in capacitance. The capacitors contain electrolytes which will eventually diffuse through the seals and evaporate. An increase in temperature also increases internal pressure, and increases the reaction rate of the chemicals. Thus, the life of an electrolytic capacitor is generally defined by a modification of the Arrhenius equation, which is used to determine chemical-reaction rates: Manufacturers often use this equation to supply an expected lifespan, in hours, for electrolytic capacitors when used at their designed operating temperature, which is affected by ambient temperature, ESR, and ripple current. However, these ideal conditions may not exist in every use. The rule of thumb for predicting lifespan under different conditions of use is determined by: This says that the capacitor's life decreases by half for every 10 degrees Celsius that the temperature is increased, where: is the rated life under rated conditions (e.g. 2000 hours); is the rated max/min operational temperature; is the average operational temperature; and is the expected lifespan under given conditions. Capacitor types Practical capacitors are available commercially in many different forms. The type of internal dielectric, the structure of the plates and the device packaging all strongly affect the characteristics of the capacitor, and its applications. 
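A small sketch of the rule of thumb just described, written in the usual form L = L_rated × 2^((T_rated − T_operating)/10). The symbol names and the example figures (a 2000-hour rating at 105 °C) are illustrative assumptions, since the article's own symbols did not survive extraction.

```python
# Electrolytic capacitor lifespan rule of thumb: life halves per +10 degC.
# Symbol names and example ratings are illustrative assumptions.

def expected_lifespan_hours(rated_life_h, rated_temp_c, operating_temp_c):
    """L = L_rated * 2 ** ((T_rated - T_operating) / 10)."""
    return rated_life_h * 2.0 ** ((rated_temp_c - operating_temp_c) / 10.0)

# A part rated 2000 h at 105 degC, run at 65 degC:
print(expected_lifespan_hours(2000, 105, 65))   # 32000 hours (16x the rating)
```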
Values available range from very low (picofarad range; while arbitrarily low values are in principle possible, stray (parasitic) capacitance in any circuit is the limiting factor) to about 5 kF supercapacitors. Above approximately 1 microfarad electrolytic capacitors are usually used because of their small size and low cost compared with other types, unless their relatively poor stability, life and polarised nature make them unsuitable. Very high capacity supercapacitors use a porous carbon-based electrode material. Dielectric materials Most capacitors have a dielectric spacer, which increases their capacitance compared to air or a vacuum. In order to maximise the charge that a capacitor can hold, the dielectric material needs to have as high a permittivity as possible, while also having as high a breakdown voltage as possible. The dielectric also needs to have as low a loss with frequency as possible. However, low value capacitors are available with a high vacuum between their plates to allow extremely high voltage operation and low losses. Variable capacitors with their plates open to the atmosphere were commonly used in radio tuning circuits. Later designs use polymer foil dielectric between the moving and stationary plates, with no significant air space between the plates. Several solid dielectrics are available, including paper, plastic, glass, mica and ceramic. Paper was used extensively in older capacitors and offers relatively high voltage performance. However, paper absorbs moisture, and has been largely replaced by plastic film capacitors. Most of the plastic films now used offer better stability and ageing performance than such older dielectrics such as oiled paper, which makes them useful in timer circuits, although they may be limited to relatively low operating temperatures and frequencies, because of the limitations of the plastic film being used. Large plastic film capacitors are used extensively in suppression circuits, motor start circuits, and power-factor correction circuits. Ceramic capacitors are generally small, cheap and useful for high frequency applications, although their capacitance varies strongly with voltage and temperature and they age poorly. They can also suffer from the piezoelectric effect. Ceramic capacitors are broadly categorized as class 1 dielectrics, which have predictable variation of capacitance with temperature or class 2 dielectrics, which can operate at higher voltage. Modern multilayer ceramics are usually quite small, but some types have inherently wide value tolerances, microphonic issues, and are usually physically brittle. Glass and mica capacitors are extremely reliable, stable and tolerant to high temperatures and voltages, but are too expensive for most mainstream applications. Electrolytic capacitors and supercapacitors are used to store small and larger amounts of energy, respectively, ceramic capacitors are often used in resonators, and parasitic capacitance occurs in circuits wherever the simple conductor-insulator-conductor structure is formed unintentionally by the configuration of the circuit layout. Electrolytic capacitors use an aluminum or tantalum plate with an oxide dielectric layer. The second electrode is a liquid electrolyte, connected to the circuit by another foil plate. Electrolytic capacitors offer very high capacitance but suffer from poor tolerances, high instability, gradual loss of capacitance especially when subjected to heat, and high leakage current. 
Poor quality capacitors may leak electrolyte, which is harmful to printed circuit boards. The conductivity of the electrolyte drops at low temperatures, which increases equivalent series resistance. While widely used for power-supply conditioning, poor high-frequency characteristics make them unsuitable for many applications. Electrolytic capacitors suffer from self-degradation if unused for a period (around a year), and when full power is applied may short circuit, permanently damaging the capacitor and usually blowing a fuse or causing failure of rectifier diodes. For example, in older equipment, this may cause arcing in rectifier tubes. They can be restored before use by gradually applying the operating voltage, often performed on antique vacuum tube equipment over a period of thirty minutes by using a variable transformer to supply AC power. The use of this technique may be less satisfactory for some solid state equipment, which may be damaged by operation below its normal power range, requiring that the power supply first be isolated from the consuming circuits. Such remedies may not be applicable to modern high-frequency power supplies as these produce full output voltage even with reduced input. Tantalum capacitors offer better frequency and temperature characteristics than aluminum, but higher dielectric absorption and leakage. Polymer capacitors (OS-CON, OC-CON, KO, AO) use solid conductive polymer (or polymerized organic semiconductor) as electrolyte and offer longer life and lower ESR at higher cost than standard electrolytic capacitors. A feedthrough capacitor is a component that, while not serving as its main use, has capacitance and is used to conduct signals through a conductive sheet. Several other types of capacitor are available for specialist applications. Supercapacitors store large amounts of energy. Supercapacitors made from carbon aerogel, carbon nanotubes, or highly porous electrode materials, offer extremely high capacitance (up to 5 kF ) and can be used in some applications instead of rechargeable batteries. Alternating current capacitors are specifically designed to work on line (mains) voltage AC power circuits. They are commonly used in electric motor circuits and are often designed to handle large currents, so they tend to be physically large. They are usually ruggedly packaged, often in metal cases that can be easily grounded/earthed. They also are designed with direct current breakdown voltages of at least five times the maximum AC voltage. Voltage-dependent capacitors The dielectric constant for a number of very useful dielectrics changes as a function of the applied electrical field, for example ferroelectric materials, so the capacitance for these devices is more complex. For example, in charging such a capacitor the differential increase in voltage with charge is governed by: where the voltage dependence of capacitance, , suggests that the capacitance is a function of the electric field strength, which in a large area parallel plate device is given by . This field polarizes the dielectric, which polarization, in the case of a ferroelectric, is a nonlinear S-shaped function of the electric field, which, in the case of a large area parallel plate device, translates into a capacitance that is a nonlinear function of the voltage. Corresponding to the voltage-dependent capacitance, to charge the capacitor to voltage an integral relation is found: which agrees with only when does not depend on voltage . 
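The stripped relations in the voltage-dependent capacitance passage above are, in conventional notation (a reconstruction of the standard forms, not a quotation of the original article):

```latex
% Voltage-dependent capacitance C(V): differential and integral relations
% (reconstructed standard forms; symbols are assumptions).
\mathrm{d}V = \frac{\mathrm{d}Q}{C(V)}, \qquad
Q = \int_0^{V} C(V')\,\mathrm{d}V'
```

The integral reduces to Q = CV only when C does not depend on V, matching the statement in the text.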
By the same token, the energy stored in the capacitor now is given by Integrating: where interchange of the order of integration is used. The nonlinear capacitance of a microscope probe scanned along a ferroelectric surface is used to study the domain structure of ferroelectric materials. Another example of voltage dependent capacitance occurs in semiconductor devices such as semiconductor diodes, where the voltage dependence stems not from a change in dielectric constant but in a voltage dependence of the spacing between the charges on the two sides of the capacitor. This effect is intentionally exploited in diode-like devices known as varicaps. Frequency-dependent capacitors If a capacitor is driven with a time-varying voltage that changes rapidly enough, at some frequency the polarization of the dielectric cannot follow the voltage. As an example of the origin of this mechanism, the internal microscopic dipoles contributing to the dielectric constant cannot move instantly, and so as frequency of an applied alternating voltage increases, the dipole response is limited and the dielectric constant diminishes. A changing dielectric constant with frequency is referred to as dielectric dispersion, and is governed by dielectric relaxation processes, such as Debye relaxation. Under transient conditions, the displacement field can be expressed as (see electric susceptibility): indicating the lag in response by the time dependence of , calculated in principle from an underlying microscopic analysis, for example, of the dipole behavior in the dielectric. See, for example, linear response function. The integral extends over the entire past history up to the present time. A Fourier transform in time then results in: where εr(ω) is now a complex function, with an imaginary part related to absorption of energy from the field by the medium. See permittivity. The capacitance, being proportional to the dielectric constant, also exhibits this frequency behavior. Fourier transforming Gauss's law with this form for displacement field: where is the imaginary unit, is the voltage component at angular frequency , is the real part of the current, called the conductance, and determines the imaginary part of the current and is the capacitance. is the complex impedance. When a parallel-plate capacitor is filled with a dielectric, the measurement of dielectric properties of the medium is based upon the relation: where a single prime denotes the real part and a double prime the imaginary part, is the complex impedance with the dielectric present, is the so-called complex capacitance with the dielectric present, and is the capacitance without the dielectric. (Measurement "without the dielectric" in principle means measurement in free space, an unattainable goal inasmuch as even the quantum vacuum is predicted to exhibit nonideal behavior, such as dichroism. For practical purposes, when measurement errors are taken into account, often a measurement in terrestrial vacuum, or simply a calculation of C0, is sufficiently accurate.) Using this measurement method, the dielectric constant may exhibit a resonance at certain frequencies corresponding to characteristic response frequencies (excitation energies) of contributors to the dielectric constant. These resonances are the basis for a number of experimental techniques for detecting defects. The conductance method measures absorption as a function of frequency. 
Alternatively, the time response of the capacitance can be used directly, as in deep-level transient spectroscopy. Another example of frequency dependent capacitance occurs with MOS capacitors, where the slow generation of minority carriers means that at high frequencies the capacitance measures only the majority carrier response, while at low frequencies both types of carrier respond. At optical frequencies, in semiconductors the dielectric constant exhibits structure related to the band structure of the solid. Sophisticated modulation spectroscopy measurement methods based upon modulating the crystal structure by pressure or by other stresses and observing the related changes in absorption or reflection of light have advanced our knowledge of these materials. Styles The arrangement of plates and dielectric has many variations in different styles depending on the desired ratings of the capacitor. For small values of capacitance (microfarads and less), ceramic disks use metallic coatings, with wire leads bonded to the coating. Larger values can be made by multiple stacks of plates and disks. Larger value capacitors usually use a metal foil or metal film layer deposited on the surface of a dielectric film to make the plates, and a dielectric film of impregnated paper or plastic; these are rolled up to save space. To reduce the series resistance and inductance for long plates, the plates and dielectric are staggered so that connection is made at the common edge of the rolled-up plates, not at the ends of the foil or metalized film strips that comprise the plates. The assembly is encased to prevent moisture entering the dielectric; early radio equipment used a cardboard tube sealed with wax. Modern paper or film dielectric capacitors are dipped in a hard thermoplastic. Large capacitors for high-voltage use may have the roll form compressed to fit into a rectangular metal case, with bolted terminals and bushings for connections. The dielectric in larger capacitors is often impregnated with a liquid to improve its properties. Capacitors may have their connecting leads arranged in many configurations, for example axially or radially. "Axial" means that the leads are on a common axis, typically the axis of the capacitor's cylindrical body; the leads extend from opposite ends. Radial leads are rarely aligned along radii of the body's circle, so the term is conventional. The leads (until bent) are usually in planes parallel to that of the flat body of the capacitor, and extend in the same direction; they are often parallel as manufactured. Small, cheap discoidal ceramic capacitors have existed from the 1930s onward, and remain in widespread use. After the 1980s, surface mount packages for capacitors have been widely used. These packages are extremely small and lack connecting leads, allowing them to be soldered directly onto the surface of printed circuit boards. Surface mount components avoid undesirable high-frequency effects due to the leads and simplify automated assembly, although manual handling is made difficult due to their small size. Mechanically controlled variable capacitors allow the plate spacing to be adjusted, for example by rotating or sliding a set of movable plates into alignment with a set of stationary plates. Low cost variable capacitors squeeze together alternating layers of aluminum and plastic with a screw.
Electrical control of capacitance is achievable with varactors (or varicaps), which are reverse-biased semiconductor diodes whose depletion region width varies with applied voltage. They are used in phase-locked loops, amongst other applications. Capacitor markings Marking codes for larger parts Most capacitors have designations printed on their bodies to indicate their electrical characteristics. Larger capacitors, such as electrolytic types, usually display the capacitance as a value with an explicit unit, for example, 220 μF. For typographical reasons, some manufacturers print MF on capacitors to indicate microfarads (μF). Three-/four-character marking code for small capacitors Smaller capacitors, such as ceramic types, often use a shorthand notation consisting of three digits and an optional letter, where the digits (XYZ) denote the capacitance in picofarads (pF), calculated as XY × 10^Z, with the letter indicating the tolerance. Common tolerances are ±5%, ±10%, and ±20%, denoted by J, K, and M, respectively. A capacitor may also be labeled with its working voltage, temperature, and other relevant characteristics. Example: A capacitor labeled or designated as 473K 330V has a capacitance of 47 × 10^3 pF = 47 nF (±10%) with a maximum working voltage of 330 V. The working voltage of a capacitor is nominally the highest voltage that may be applied across it without undue risk of breaking down the dielectric layer. Two-character marking code for small capacitors For capacitances following the E3, E6, E12 or E24 series of preferred values, the former ANSI/EIA-198-D:1991, ANSI/EIA-198-1-E:1998 and ANSI/EIA-198-1-F:2002 as well as the amendment IEC 60062:2016/AMD1:2019 to IEC 60062 define a special two-character marking code for capacitors for very small parts which leave no room to print the above-mentioned three-/four-character code onto them. The code consists of an uppercase letter denoting the two significant digits of the value followed by a digit indicating the multiplier. The EIA standard also defines a number of lowercase letters to specify a number of values not found in E24. RKM code The RKM code following IEC 60062 and BS 1852 is a notation to state a capacitor's value in a circuit diagram. It avoids using a decimal separator and replaces the decimal separator with the SI prefix symbol for the particular value (and the letter F for weight 1). The code is also used for part markings. Example: 4n7 for 4.7 nF or 2F2 for 2.2 F. Historical In texts prior to the 1960s and on some capacitor packages until more recently, obsolete capacitance units were used in electronics books, magazines, and electronics catalogs. The old units "mfd" and "mf" meant microfarad (μF); and the old units "mmfd", "mmf", "uuf", "μμf", "pfd" meant picofarad (pF); but they are rarely used any more. Also, "micromicrofarad" or "micro-microfarad" is an obsolete unit found in some older texts that is equivalent to the picofarad (pF). Summary of obsolete capacitance units (upper/lower case variations are not shown): μF (microfarad) = mf, mfd; pF (picofarad) = mmf, mmfd, pfd, μμF. Applications Energy storage A capacitor can store electric energy when disconnected from its charging circuit, so it can be used like a temporary battery, or like other types of rechargeable energy storage system. Capacitors are commonly used in electronic devices to maintain power supply while batteries are being changed. (This prevents loss of information in volatile memory.)
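Returning to the three-digit marking code described earlier, a minimal parsing sketch in Python follows; the function name is arbitrary, and only the J/K/M tolerance letters quoted above are handled.

def parse_capacitor_marking(code):
    # Digits XYZ mean XY * 10**Z picofarads; an optional trailing letter
    # gives the tolerance (J = +/-5%, K = +/-10%, M = +/-20%), as described above.
    tolerances = {"J": "+/-5%", "K": "+/-10%", "M": "+/-20%"}
    tolerance = tolerances.get(code[-1].upper())
    digits = code[:-1] if tolerance else code
    picofarads = int(digits[:2]) * 10 ** int(digits[2])
    return picofarads * 1e-12, tolerance

print(parse_capacitor_marking("473K"))   # (4.7e-08, '+/-10%'), i.e. 47 nF at +/-10%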
A capacitor can facilitate conversion of kinetic energy of charged particles into electric energy and store it. There are tradeoffs between capacitors and batteries as storage devices. Without external resistors or inductors, capacitors can generally release their stored energy in a very short time compared to batteries. Conversely, batteries can hold a far greater charge for their size. Conventional capacitors provide less than 360 joules per kilogram of specific energy, whereas a conventional alkaline battery has a specific energy of 590 kJ/kg. There is an intermediate solution: supercapacitors, which can accept and deliver charge much faster than batteries, and tolerate many more charge and discharge cycles than rechargeable batteries. They are, however, 10 times larger than conventional batteries for a given charge. On the other hand, it has been shown that the amount of charge stored in the dielectric layer of the thin film capacitor can be equal to, or can even exceed, the amount of charge stored on its plates. In car audio systems, large capacitors store energy for the amplifier to use on demand. Also, for a flash tube, a capacitor is used to hold the high voltage. Digital memory In the 1930s, John Atanasoff applied the principle of energy storage in capacitors to construct dynamic digital memories for the first binary computers that used electron tubes for logic. Pulsed power and weapons Pulsed power is used in many applications to increase the power intensity (watts) of a volume of energy (joules) by releasing that volume within a very short time. Pulses in the nanosecond range and powers in the gigawatts are achievable. Short pulses often require specially constructed, low-inductance, high-voltage capacitors that are often used in large groups (capacitor banks) to supply huge pulses of current for many pulsed power applications. These include electromagnetic forming, Marx generators, pulsed lasers (especially TEA lasers), pulse forming networks, radar, fusion research, and particle accelerators. Large capacitor banks (reservoirs) are used as energy sources for the exploding-bridgewire detonators or slapper detonators in nuclear weapons and other specialty weapons. Experimental work is under way using banks of capacitors as power sources for electromagnetic armour and electromagnetic railguns and coilguns. Power conditioning Reservoir capacitors are used in power supplies where they smooth the output of a full or half wave rectifier. They can also be used in charge pump circuits as the energy storage element in the generation of higher voltages than the input voltage. Capacitors are connected in parallel with the power circuits of most electronic devices and larger systems (such as factories) to shunt away and conceal current fluctuations from the primary power source to provide a "clean" power supply for signal or control circuits. Audio equipment, for example, uses several capacitors in this way, to shunt away power line hum before it gets into the signal circuitry. The capacitors act as a local reserve for the DC power source, and bypass AC currents from the power supply. This is used in car audio applications, when a stiffening capacitor compensates for the inductance and resistance of the leads to the lead–acid car battery. Power-factor correction In electric power distribution, capacitors are used for power-factor correction. Such capacitors often come as three capacitors connected as a three phase load.
Usually, the values of these capacitors are not given in farads but rather as a reactive power in volt-amperes reactive (var). The purpose is to counteract inductive loading from devices like electric motors and transmission lines to make the load appear to be mostly resistive. Individual motor or lamp loads may have capacitors for power-factor correction, or larger sets of capacitors (usually with automatic switching devices) may be installed at a load center within a building or in a large utility substation. Suppression and coupling Signal coupling Because capacitors pass AC but block DC signals (when charged up to the applied DC voltage), they are often used to separate the AC and DC components of a signal. This method is known as AC coupling or "capacitive coupling". Here a capacitor of large value is employed; its exact value need not be accurately controlled, but its reactance must be small at the signal frequency. Decoupling A decoupling capacitor is a capacitor used to protect one part of a circuit from the effect of another, for instance to suppress noise or transients. Noise caused by other circuit elements is shunted through the capacitor, reducing the effect they have on the rest of the circuit. It is most commonly used between the power supply and ground. An alternative name is bypass capacitor as it is used to bypass the power supply or other high impedance component of a circuit. Decoupling capacitors need not always be discrete components. Capacitors used in these applications may be built into a printed circuit board, between the various layers. These are often referred to as embedded capacitors. The layers in the board contributing to the capacitive properties also function as power and ground planes, and have a dielectric in between them, enabling them to operate as a parallel plate capacitor. High-pass and low-pass filters Noise suppression, spikes, and snubbers When an inductive circuit is opened, the current through the inductance collapses quickly, creating a large voltage across the open circuit of the switch or relay. If the inductance is large enough, the energy may generate a spark, causing the contact points to oxidize, deteriorate, or sometimes weld together, or destroying a solid-state switch. A snubber capacitor across the newly opened circuit creates a path for this impulse to bypass the contact points, thereby preserving their life; these were commonly found in contact breaker ignition systems, for instance. Similarly, in smaller scale circuits, the spark may not be enough to damage the switch but may still radiate undesirable radio frequency interference (RFI), which a filter capacitor absorbs. Snubber capacitors are usually employed with a low-value resistor in series, to dissipate energy and minimize RFI. Such resistor-capacitor combinations are available in a single package. Capacitors are also used in parallel with interrupting units of a high-voltage circuit breaker to equally distribute the voltage between these units. These are called "grading capacitors". In schematic diagrams, a capacitor used primarily for DC charge storage is often drawn vertically with the lower, more negative, plate drawn as an arc. The straight plate indicates the positive terminal of the device, if it is polarized (see electrolytic capacitor). Motor starters In single phase squirrel cage motors, the primary winding within the motor housing is not capable of starting a rotational motion on the rotor, but is capable of sustaining one.
To start the motor, a secondary "start" winding has a series non-polarized starting capacitor to introduce a lead in the sinusoidal current. When the secondary (start) winding is placed at an angle with respect to the primary (run) winding, a rotating electric field is created. The force of the rotational field is not constant, but is sufficient to start the rotor spinning. When the rotor comes close to operating speed, a centrifugal switch (or current-sensitive relay in series with the main winding) disconnects the capacitor. The start capacitor is typically mounted to the side of the motor housing. These are called capacitor-start motors, which have relatively high starting torque. Typically they can have up to four times as much starting torque as a split-phase motor and are used on applications such as compressors, pressure washers and any small device requiring high starting torques. Capacitor-run induction motors have a permanently connected phase-shifting capacitor in series with a second winding. The motor is much like a two-phase induction motor. Motor-starting capacitors are typically non-polarized electrolytic types, while running capacitors are conventional paper or plastic film dielectric types. Signal processing The energy stored in a capacitor can be used to represent information, either in binary form, as in DRAMs, or in analogue form, as in analog sampled filters and CCDs. Capacitors can be used in analog circuits as components of integrators or more complex filters and in negative feedback loop stabilization. Signal processing circuits also use capacitors to integrate a current signal. Tuned circuits Capacitors and inductors are applied together in tuned circuits to select information in particular frequency bands. For example, radio receivers rely on variable capacitors to tune the station frequency. Speakers use passive analog crossovers, and analog equalizers use capacitors to select different audio bands. The resonant frequency f of a tuned circuit is a function of the inductance (L) and capacitance (C) in series, and is given by f = 1/(2π√(LC)), where L is in henries and C is in farads; a short numerical sketch of this relation is given at the end of this passage. Sensing Most capacitors are designed to maintain a fixed physical structure. However, various factors can change the structure of the capacitor, and the resulting change in capacitance can be used to sense those factors. Changing the dielectric The effects of varying the characteristics of the dielectric can be used for sensing purposes. Capacitors with an exposed and porous dielectric can be used to measure humidity in air. Capacitors are used to accurately measure the fuel level in airplanes; as the fuel covers more of a pair of plates, the circuit capacitance increases. Squeezing the dielectric at a pressure of a few tens of bar can change the capacitance sufficiently for the device to be used as a pressure sensor. A selected, but otherwise standard, polymer dielectric capacitor, when immersed in a compatible gas or liquid, can work usefully as a very low cost pressure sensor up to many hundreds of bar. Changing the distance between the plates Capacitors with a flexible plate can be used to measure strain or pressure. Industrial pressure transmitters used for process control use pressure-sensing diaphragms, which form a capacitor plate of an oscillator circuit. Capacitors are used as the sensor in condenser microphones, where one plate is moved by air pressure, relative to the fixed position of the other plate.
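As promised in the tuned-circuits paragraph above, here is a brief Python sketch of the resonance relation f = 1/(2π√(LC)); the inductor and capacitor values are arbitrary illustrative choices, roughly what a simple AM-band receiver front end might use.

import math

def resonant_frequency(l_henry, c_farad):
    # f = 1 / (2*pi*sqrt(L*C)) for an LC tuned circuit, as given above.
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

for c in (100e-12, 250e-12, 500e-12):       # sweep an assumed variable capacitor
    f = resonant_frequency(100e-6, c)       # with an assumed 100 uH inductor
    print(f"C = {c * 1e12:3.0f} pF  ->  f = {f / 1e3:6.0f} kHz")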
Some accelerometers use MEMS capacitors etched on a chip to measure the magnitude and direction of the acceleration vector. They are used to detect changes in acceleration, in tilt sensors, or to detect free fall, as sensors triggering airbag deployment, and in many other applications. Some fingerprint sensors use capacitors. Additionally, a user can adjust the pitch of a theremin musical instrument by moving their hand since this changes the effective capacitance between the user's hand and the antenna. Changing the effective area of the plates Capacitive touch switches are now used on many consumer electronic products. Oscillators A capacitor can possess spring-like qualities in an oscillator circuit. In a typical example, a capacitor acts to influence the biasing voltage at an npn transistor's base. The resistance values of the voltage-divider resistors and the capacitance value of the capacitor together control the oscillatory frequency. Producing light A light-emitting capacitor is made from a dielectric that uses phosphorescence to produce light. If one of the conductive plates is made with a transparent material, the light is visible. Light-emitting capacitors are used in the construction of electroluminescent panels, for applications such as backlighting for laptop computers. In this case, the entire panel is a capacitor used for the purpose of generating light. Hazards and safety The hazards posed by a capacitor are usually determined, foremost, by the amount of energy stored, which is what causes injuries such as electrical burns or heart fibrillation. Factors such as voltage and chassis material are of secondary consideration, which are more related to how easily a shock can be initiated rather than how much damage can occur. Under certain conditions, including conductivity of the surfaces, preexisting medical conditions, the humidity of the air, or the pathways it takes through the body (i.e., shocks that travel across the core of the body and, especially, the heart are more dangerous than those limited to the extremities), shocks as low as one joule have been reported to cause death, although in most instances they may not even leave a burn. Shocks over ten joules will generally damage skin, and are usually considered hazardous. Any capacitor that can store 50 joules or more should be considered potentially lethal. Capacitors may retain a charge long after power is removed from a circuit; this charge can cause dangerous or even potentially fatal shocks or damage connected equipment. For example, even a seemingly innocuous device such as a disposable camera flash has a photoflash capacitor which may contain over 15 joules of energy and be charged to over 300 volts. This is easily capable of delivering a shock. Service procedures for electronic devices usually include instructions to discharge large or high-voltage capacitors, for instance using a Brinkley stick. Larger capacitors, such as those used in microwave ovens, HVAC units and medical defibrillators may also have built-in discharge resistors to dissipate stored energy to a safe level within a few seconds after power is removed. High-voltage capacitors are stored with the terminals shorted, as protection from potentially dangerous voltages due to dielectric absorption or from transient voltages the capacitor may pick up from static charges or passing weather events. Some old, large oil-filled paper or plastic film capacitors contain polychlorinated biphenyls (PCBs).
It is known that waste PCBs can leak into groundwater under landfills. Capacitors containing PCBs were labelled as containing "Askarel" and several other trade names. PCB-filled paper capacitors are found in very old (pre-1975) fluorescent lamp ballasts, and other applications. Capacitors may catastrophically fail when subjected to voltages or currents beyond their rating, or in case of polarized capacitors, applied in a reverse polarity. Failures may create arcing that heats and vaporizes the dielectric fluid, causing a build up of pressurized gas that may result in swelling, rupture, or an explosion. Larger capacitors may have vents or similar mechanisms to allow the release of such pressures in the event of failure. Capacitors used in RF or sustained high-current applications can overheat, especially in the center of the capacitor rolls. Capacitors used within high-energy capacitor banks can violently explode when a short in one capacitor causes sudden dumping of energy stored in the rest of the bank into the failing unit. High voltage vacuum capacitors can generate soft X-rays even during normal operation. Proper containment, fusing, and preventive maintenance can help to minimize these hazards. High-voltage capacitors may benefit from a pre-charge to limit in-rush currents at power-up of high voltage direct current (HVDC) circuits. This extends the life of the component and may mitigate high-voltage hazards. See also Capacitance meter Capacitor plague Electric displacement field Electroluminescence List of capacitor manufacturers Notes References Bibliography Philosophical Transactions of the Royal Society LXXII, Appendix 8, 1782 (Volta coins the word condenser) Further reading Tantalum and Niobium-Based Capacitors – Science, Technology, and Applications; 1st Ed; Yuri Freeman; Springer; 120 pages; 2018. Capacitors; 1st Ed; R. P. Deshpande; McGraw-Hill; 342 pages; 2014. The Capacitor Handbook; 1st Ed; Cletus Kaiser; Van Nostrand Reinhold; 124 pages; 1993. Understanding Capacitors and their Uses; 1st Ed; William Mullin; Sams Publishing; 96 pages; 1964. (archive) Fixed and Variable Capacitors; 1st Ed; G. W. A. Dummer and Harold Nordenberg; Maple Press; 288 pages; 1960. (archive) The Electrolytic Capacitor; 1st Ed; Alexander Georgiev; Murray Hill Books; 191 pages; 1945. (archive) External links The First Condenser – A Beer Glass – SparkMuseum How Capacitors Work – Howstuffworks Capacitor Tutorial Electrical components Energy storage Science and technology in the Dutch Republic Dutch inventions 18th-century inventions German inventions
Capacitor
Physics,Technology,Engineering
15,951
43,264,803
https://en.wikipedia.org/wiki/Benzenesulfonyl%20chloride
Benzenesulfonyl chloride is an organosulfur compound with the formula C6H5SO2Cl. It is a colourless viscous oil that dissolves in organic solvents, but reacts with compounds containing reactive N-H and O-H bonds. It is mainly used to prepare sulfonamides and sulfonate esters by reactions with amines and alcohols, respectively. The closely related compound toluenesulfonyl chloride is often the preferred analogue because it is a solid at room temperature and easier to handle. Production The compound is prepared by the chlorosulfonation of benzene: C6H6 + 2 ClSO3H → C6H5SO2Cl + H2SO4 + HCl. Benzenesulfonic acid is an intermediate in this conversion. Diphenylsulfone is a side product. Benzenesulfonyl chloride can also be prepared by treating benzenesulfonate salts with phosphorus oxychloride. Reactions Benzenesulfonyl chloride is an electrophilic reagent. It hydrolyzes with heat but is stable toward cold water. Amines react to give sulfonamides. This reaction is the basis of the Hinsberg test for amines. References Reagents for organic chemistry Sulfonyl halides Phenyl compounds
Benzenesulfonyl chloride
Chemistry
246
14,215,906
https://en.wikipedia.org/wiki/L-fucose%20isomerase
In enzymology, an L-fucose isomerase is an enzyme that catalyzes the chemical reaction L-fucose ⇌ L-fuculose. Hence, this enzyme has one substrate, L-fucose, and one product, L-fuculose. This enzyme belongs to the family of isomerases, specifically those intramolecular oxidoreductases interconverting aldoses and ketoses. The systematic name of this enzyme class is L-fucose aldose-ketose-isomerase. This enzyme participates in fructose and mannose metabolism. The enzyme is a hexamer, forming the largest structurally known ketol isomerase, and has no sequence or structural similarity with other ketol isomerases. The structure was determined by X-ray crystallography at 2.5 Angstrom resolution. Each subunit of the hexameric enzyme is wedge-shaped and composed of three domains. Both domains 1 and 2 contain central parallel beta-sheets with surrounding alpha helices. The active centre is shared between pairs of subunits related along the molecular three-fold axis, with domains 2 and 3 from one subunit providing most of the substrate-contacting residues. References Further reading Protein domains EC 5.3.1 Enzymes of unknown structure
L-fucose isomerase
Biology
277
51,419,245
https://en.wikipedia.org/wiki/William%20Garrow%20Lettsom
William Garrow Lettsom FRAS (1805 – 14 December 1887) was a British diplomat and scientist. He was instrumental in revealing the text of the secret Treaty of the Triple Alliance between Argentina, the Empire of Brazil and Uruguay. Early life Lettsom was born into a Quaker family at Fulham in March 1805. His paternal grandfather John Coakley Lettsom was a famous physician, philanthropist and abolitionist who held that sea-bathing was good for public health. His maternal grandfather − with whom he lived in his youth − was Sir William Garrow the celebrated criminal defender, afterwards a judge, who introduced the phrase "presumed innocent until proven guilty" into the common law and whose life inspired the television drama series Garrow's Law. Lettsom was educated at Westminster School and Cambridge University. Literary acquaintance As an undergraduate at Cambridge University Lettsom befriended the author William Makepeace Thackeray and was the (or an) editor of The Snob in which some of Thackeray's earliest work appeared; Lettsom has been identified as the character Tapeworm in Thackeray's novel Vanity Fair, a diplomat who fancies himself as a ladies' man. Lettsom was well acquainted with the cartoonist George Cruikshank, illustrator of the early works of Charles Dickens. Lettsom was a contributor to various literary periodicals under the pseudonym Dr. Bulgardo. Scientist Lettsom was a competent scientist in an age when this was still possible for an amateur. He was best known as the joint author of Greg and Lettsom's Manual of the Mineralogy of Great Britain and Ireland, which was the most complete and accurate work that had appeared on the mineralogy of the British Isles. First published in 1858, a century later it was still the standard work on the subject, when a reprint was issued. The mineral lettsomite is named after him. But his scientific interests were wider, and he corresponded with the most eminent workers in spectroscopy. He was a member of the London Electrical Society and the author of several papers on geological, electrical and spectroscopic subjects. He was elected a Fellow of the Royal Astronomical Society in 1849. In that year he communicated an experiment in bioelectricity: by making a wound in a finger and inserting the electrode of a galvanometer, while placing the other electrode in contact with an unwounded finger, a current was observed to flow. Lettsom observed that the experiment was repeatable for he had tried it himself. In 1857 while on diplomatic service in Mexico he sent to the Royal Entomological Society of London some seeds which, when put in a warm place, became "very lively". The grub responsible had not been investigated scientifically before, wrote Lettsom, and he asked the Society to do so. These were the celebrated Mexican jumping beans. While on diplomatic service in Uruguay he brought a 9 inch Henry Fitz telescope for astronomical observations in the southern hemisphere. Owing to unknown problems he sent the telescope back to New York to be checked and adjusted by the telescope maker. The telescope was received by Lewis Rutherford, pioneer astrophotographer and spectroscopist and associate of the Royal Astronomical Society, who helped Henry Fitz on this task. The telescope was left in Uruguay and is in use to this day by the Uruguayan Amateur Astronomers' Association. Diplomat Having been called to the Bar by Lincoln's Inn he entered the diplomatic service. 
After postings in Berlin, Munich (1831), Washington (1840), Turin (1849) and Madrid (1850) he was appointed secretary to the Legation at Mexico (1854) and became the Chargé d'affaires. In the unreformed British diplomatic service there were no examinations; candidates were appointed by the influence of political friends. This caused criticism. In the House of Commons on 22 May 1855 the motion was: "That it is the opinion of this House that the complete Revision of our Diplomatic Establishment recommended in the Report of the Select Committee of 1850 on Official Salaries should be carried into effect." In this debate Lettsom was used as a case in point to illustrate the defects of the unreformed system. It has been noted that Lettsom, "who had invariably conducted himself to the satisfaction of those who employed him", received one of the slowest promotions in the diplomatic service. A diplomat was expected to be a gentleman and to have a private income whereby he could receive unpaid diplomatic appointments. Hence nine of the twenty-three years of Lettsom's service were unsalaried; promotion was slow. This glacial treatment did not apply, however, to those who had powerful political friends, for they were soon appointed to agreeable capitals at enormous salaries. The motion was carried by 112 votes to 57, Mr Otway MP remarking that "The person who had shown himself to be the fittest man, whether he was the son of a Peer or a tailor, should be chosen". While in Mexico the British government suspended relations with that country on Lettsom's representation, and he was the object of an attempted assassination. Between 1859 and 1869 Lettsom was appointed Consul-General and Chargé d'Affaires to the Republic of Uruguay. Treaty of the Triple Alliance In 1864 and early 1865 Paraguayan forces under the orders of Francisco Solano López seized Brazilian and Argentine shipping and invaded the provinces of the Mato Grosso and Rio Grande do Sul (Brazil) and Corrientes (Argentina). On 1 May 1865 Brazil, Argentina and Uruguay signed the Treaty of the Triple Alliance against Paraguay. By Article XVIII of the Treaty its provisions were to be kept secret until its "principal object" should be obtained. One of its provisions concerned the acquisition by Argentina of large tracts of territory then in dispute between it and Paraguay. Lettsom was not satisfied about this and surreptitiously obtained a copy of the Treaty from the Uruguayan diplomat Dr Carlos de Castro. He forwarded it to London and the British government ordered it to be translated into English and published to Parliament. When the text became available in South America there was outrage in several quarters, some because of the Treaty's content, others because it had been published at all. Lettsom has been cited as an exemplar of the nuance with which a substantial part of the British diplomatic corps saw the Paraguayan War. Later Lettsom retired from the diplomatic service in 1869. He never married. He died of acute bronchitis on 14 December 1887. Notes References 1805 births 1887 deaths Amateur scientists Spectroscopists British diplomats British mineralogists Paraguayan War People from Fulham
William Garrow Lettsom
Physics,Chemistry
1,347
2,039,690
https://en.wikipedia.org/wiki/Isotopes%20of%20hydrogen
Hydrogen (H) has three naturally occurring isotopes: ¹H, ²H, and ³H. ¹H and ²H are stable, while ³H has a half-life of about 12.3 years. Heavier isotopes also exist; all are synthetic and have a half-life of less than 1 zeptosecond (10⁻²¹ s). Of these, ⁵H is the least stable, while ⁷H is the most. Hydrogen is the only element whose isotopes have different names that remain in common use today: ²H is deuterium and ³H is tritium. The symbols D and T are sometimes used for deuterium and tritium; IUPAC (International Union of Pure and Applied Chemistry) accepts said symbols, but recommends the standard isotopic symbols ²H and ³H, to avoid confusion in alphabetic sorting of chemical formulas. ¹H, with no neutrons, may be called protium to disambiguate. (During the early study of radioactivity, some other heavy radioisotopes were given names, but such names are rarely used today.) List of isotopes Note: "y" means year, but "ys" means yoctosecond (10⁻²⁴ second). ¹H (protium): Z = 1, N = 0; stable; spin-parity 1/2+; the dominant naturally occurring isotope. ²H (deuterium, D): Z = 1, N = 1; stable; spin-parity 1+; naturally occurring. ³H (tritium, T): Z = 1, N = 2; decays by β− emission to ³He; spin-parity 1/2+; trace natural abundance. ⁴H: Z = 1, N = 3; decays by neutron emission to ³H; spin-parity 2−. ⁵H: Z = 1, N = 4; decays by two-neutron emission to ³H; spin-parity (1/2+). ⁶H: Z = 1, N = 5; spin-parity 2−#. ⁷H: Z = 1, N = 6; spin-parity 1/2+#. Hydrogen-1 (protium) ¹H is the most common hydrogen isotope, with an abundance of >99.98%. Its nucleus consists of only a single proton, so it has the formal name protium. The proton has never been observed to decay, so ¹H is considered stable. Some Grand Unified Theories proposed in the 1970s predict that proton decay can occur with a very long but finite half-life. If so, then ¹H (and all nuclei now believed to be stable) are only observationally stable. As of 2018, experiments have only set lower bounds on the mean lifetime of the proton. Hydrogen-2 (deuterium) Deuterium, ²H, the other stable hydrogen isotope, has one proton and one neutron in its nucleus, called a deuteron. ²H comprises 26–184 ppm (by population, not mass) of hydrogen on Earth; the lower number tends to be found in hydrogen gas and higher enrichment (150 ppm) is typical of seawater. Deuterium on Earth has been enriched with respect to its initial concentration in the Big Bang and outer solar system (≈27 ppm, atom fraction) and older parts of the Milky Way (≈23 ppm). Presumably the differential concentration of deuterium in the inner solar system is due to the lower volatility of deuterium gas and compounds, enriching deuterium fractions in comets and planets exposed to significant heat from the Sun over billions of years of solar system evolution. Deuterium is not radioactive, and is not a significant toxicity hazard. Water enriched in ²H is called heavy water. Deuterium and its compounds are used as a non-radioactive label in chemical experiments and in solvents for ¹H nuclear magnetic resonance spectroscopy. Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion. Hydrogen-3 (tritium) Tritium, ³H, has one proton and two neutrons in its nucleus (triton). It is radioactive, decaying by β− emission into helium-3 with a half-life of about 12.3 years. Traces of ³H occur naturally due to cosmic rays interacting with atmospheric gases. ³H has also been released in nuclear tests. It is used in fusion bombs, as a tracer in isotope geochemistry, and in self-powered lighting devices.
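Because the half-lives involved range from years down to yoctoseconds, a short Python sketch of simple exponential decay may help fix the scale for tritium; the 12.32-year half-life used here is the commonly quoted value and is an assumption of this example rather than a figure preserved in the list above.

import math

TRITIUM_HALF_LIFE_YEARS = 12.32            # commonly quoted value, assumed for this example

def fraction_remaining(years, half_life=TRITIUM_HALF_LIFE_YEARS):
    # Exponential radioactive decay: N(t)/N0 = exp(-ln(2) * t / t_half)
    return math.exp(-math.log(2) * years / half_life)

for t in (1.0, 12.32, 50.0):
    print(f"after {t:5.2f} years, {fraction_remaining(t) * 100:5.1f}% of the tritium remains")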
The most common way to produce ³H is to bombard a natural isotope of lithium, ⁶Li, with neutrons in a nuclear reactor. Tritium can be used in chemical and biological labeling experiments as a radioactive tracer. Deuterium–tritium fusion uses ²H and ³H as its main reactants, giving energy through the loss of mass when the two nuclei collide and fuse at high temperatures. Hydrogen-4 ⁴H, with one proton and three neutrons, is a highly unstable isotope. It has been synthesized in the laboratory by bombarding tritium with fast-moving deuterons; the triton captured a neutron from the deuteron. The presence of ⁴H was deduced by detecting the emitted protons. It decays by neutron emission into ³H with an extremely short half-life. In the 1955 satirical novel The Mouse That Roared, the name quadium was given to the ⁴H that powered the Q-bomb that the Duchy of Grand Fenwick captured from the United States. Hydrogen-5 ⁵H, with one proton and four neutrons, is highly unstable. It has been synthesized in the lab by bombarding tritium with fast-moving tritons; one triton captures two neutrons from the other, becoming a nucleus with one proton and four neutrons. The remaining proton may be detected, and the existence of ⁵H deduced. It decays by double neutron emission into ³H and has an extremely short half-life – the shortest of any known nuclide. Hydrogen-6 ⁶H has one proton and five neutrons; its half-life is likewise extremely short. Hydrogen-7 ⁷H has one proton and six neutrons. It was first synthesized in 2003 by a group of Russian, Japanese and French scientists at Riken's Radioactive Isotope Beam Factory by bombarding hydrogen with helium-8 atoms; all six of the helium-8's neutrons were donated to the hydrogen nucleus. The two remaining protons were detected by the "RIKEN telescope", a device made of several layers of sensors, positioned behind the target of the RI Beam cyclotron. ⁷H also has an extremely short half-life. Decay chains ⁴H and ⁵H decay directly to ³H, which then decays to stable ³He. Decay of the heaviest isotopes, ⁶H and ⁷H, has not been experimentally observed. Decay times are in yoctoseconds (10⁻²⁴ s) for all these isotopes except ³H, whose half-life is measured in years. See also Hydrogen atom Hydrogen isotope biogeochemistry Hydrogen-4.1 (Muonic helium) Muonium – acts like an exotic light isotope of hydrogen Notes References Further reading Hydrogen Hydrogen
Isotopes of hydrogen
Chemistry
1,491
18,382,696
https://en.wikipedia.org/wiki/Haul%20truck
Haul trucks are off-road, heavy-duty dump trucks specifically engineered for use in high-production mining and exceptionally demanding construction environments. Most are dual axle; at least two examples of tri-axles were made in the 1970s. Haul trucks are denominated by their payload capacity, by weight (variously in tons, tonnes, and kg). Description Most haul trucks have a two-axle design, but two well-known models from the 1970s, the 350T Terex Titan and 235T WABCO 3200/B, had three axles. Haul truck capacities cover a wide range, from relatively small models up to the largest ultra class machines. An example on the smaller end is the Caterpillar 775. Quarry operations (whose payloads have value) typically employ smaller trucks than mining operations (such as removal of undesirable overburden, which is an expense). Haul trucks can generally be distinguished from standard dump trucks by: Being far too large to travel legally on public roads Having a dump body made of exceptionally strong steel plate that extends over the cab to protect it, angled upright at its end (or entirely) to aid in dumping; some are heated by exhaust gases to prevent loads from sticking or freezing to the bed; Having a driver's cab narrower than its body; No axle suspension; Limited speed and operating range; Special off-road only tires; A ratio of dead weight to payload not exceeding 1:1.6 Most large haul trucks use some form of traction motors coupled to regenerative braking for power, braking, or both. Haul trucks are classified by: Type of unloading (dump or rear-eject); Direction of discharge (side, rear); Type of body (hopper, platform, sliding hopper, sliding platform). Ultra class The largest, highest-payload-capacity haul trucks are referred to as ultra class trucks. The ultra class comprises the haul trucks with the very largest payload capacities. The BelAZ 75710 currently has the highest payload capacity of any haul truck. Notable examples See also Articulated hauler Dumper Haul road Rear-eject haul truck bodies Unit Rig Notes References
Haul truck
Engineering
472
338,191
https://en.wikipedia.org/wiki/FLOX
FLOX is a flameless combustion process developed by WS Wärmeprozesstechnik GmbH. History In experiments with industrial gasoline engines conducted in April 1990, Joachim Alfred Wünning found that when combustion occurred at a temperature greater than 850 °C, the flames were blown away. Although this observation was initially thought to be an error, it turned out to be a discovery which led to the invention of what he called FLOX-Technology, a name derived from the German expression "flammenlose Oxidation" (flameless oxidation). The advantages of this technology attracted funding for a project at Stuttgart University called FloxCoal, a programme aiming to engineer a flameless atomizing coal burner. Because of its reduced pollutant emissions, FLOX combustion has been considered a promising candidate for coal pollution mitigation, and its higher combustion efficiency attracted increased interest as a result of the 1990 oil price shock. FLOX burners have since been used within furnaces in the steel and metallurgical industries. Technology FLOX requires the air and fuel components to be mixed in an environment in which exhaust gases are recirculated back into the combustion chamber. Flameless combustion also does not display the same high energy peaks as the traditional combustion observed within a swirl burner, resulting in a smoother and more stable combustion process. When combustion occurs, NOx is formed at the front of the flame: suppression of the peak flame temperature offers the theoretical possibility of reducing NOx production to zero. Experiments with FLOX-Technology have established that it can reduce the amount of NOx generated by 20% in the case of Rhenish brown coal, and by 65% in the case of Polish black coal. The role of combustion temperature in NOx formation has been understood for some time. Reduction of the combustion temperature in gasoline engines, by reducing the compression ratio, was among the first steps taken to comply with the U.S. Clean Air Act in the 1970s. This lowered the NOx emissions by lowering the temperature at the flame front. References External links list of articles at WS Wärmetechnik Combustion
FLOX
Chemistry
429
60,812,572
https://en.wikipedia.org/wiki/Wi-Fi%207
IEEE 802.11be, dubbed Extremely High Throughput (EHT), is a wireless networking standard in the IEEE 802.11 set of protocols which is designated Wi-Fi 7 by the Wi-Fi Alliance. It has built upon 802.11ax, focusing on WLAN indoor and outdoor operation with stationary and pedestrian speeds in the 2.4, 5, and 6 GHz frequency bands. Throughput is believed to reach a theoretical maximum of 46 Gbit/s, although actual results are much lower. Development of the 802.11be amendment is ongoing, with an initial draft in March 2021, and a final version expected by the end of 2024. Despite this, numerous products were announced in 2022 based on draft standards, with retail availability in early 2023. On 8 January 2024, the Wi-Fi Alliance introduced its Wi-Fi Certified 7 program to certify Wi-Fi 7 devices. While final ratification is not expected until the end of 2024, the technical requirements are essentially complete, and there are already products labeled as Wi‑Fi 7. The global Wi-Fi 7 market was estimated at US$1 billion in 2023, and is projected to reach US$24.2 billion by 2030. Core features The following are core features that have been approved as of Draft 3.0: 4096-QAM (4K-QAM) enables each symbol to carry 12 bits rather than 10 bits, resulting in 20% higher theoretical transmission rates than Wi-Fi 6's 1024-QAM. Contiguous and non-contiguous 320/160+160 MHz and 240/160+80 MHz bandwidth Multi-Link Operation (MLO), a feature that increases capacity by simultaneously sending and receiving data across different frequency bands and channels (2.4 GHz, 5 GHz, 6 GHz). 16 spatial streams and Multiple Input Multiple Output (MIMO) protocol enhancements Flexible Channel Utilization – Interference currently can negate an entire Wi-Fi channel. With preamble puncturing, a portion of the channel that is affected by interference can be blocked off while continuing to use the rest of the channel. Candidate features The main candidate features mentioned in the 802.11be Project Authorization Request (PAR) are: Multi-Access Point (AP) Coordination (e.g. coordinated and joint transmission), Enhanced link adaptation and retransmission protocol (e.g. Hybrid Automatic Repeat Request (HARQ)), If needed, adaptation to regulatory rules specific to 6 GHz spectrum, Integrating Time-Sensitive Networking (TSN) IEEE 802.1Q extensions for low-latency real-time traffic: IEEE 802.1AS timing and synchronization IEEE 802.11aa MAC Enhancements for Robust Audio Video Streaming (Stream Reservation Protocol over IEEE 802.11) IEEE 802.11ak Enhancements for Transit Links Within Bridged Networks (802.11 links in 802.1Q networks) Bounded latency: credit-based (IEEE 802.1Qav) and cyclic/time-aware traffic shaping (IEEE 802.1Qch/Qbv), asynchronous traffic scheduling (IEEE 802.1Qcr-2020) IEEE 802.11ax Scheduled Operation extensions for reduced jitter/latency Additional features Apart from the features mentioned in the PAR, there are newly introduced features: Newly introduced 4096-QAM (4K-QAM), Contiguous and non-contiguous 320/160+160 MHz and 240/160+80 MHz bandwidth, Frame formats with improved forward-compatibility, Enhanced resource allocation in OFDMA, Optimized channel sounding that requires less airtime, Implicit channel sounding, More flexible preamble puncturing scheme, Support of direct links, managed by an access point. Rate set Comparison 802.11be Task Group The 802.11be Task Group is led by individuals affiliated with Qualcomm, Intel, and Broadcom. Those affiliated with Huawei, Maxlinear, NXP, and Apple also have senior positions.
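The 20% figure quoted for 4096-QAM in the core-features list above follows directly from the bits carried per symbol; the short Python sketch below simply makes that arithmetic explicit (it ignores coding rate, channel width and spatial streams).

import math

def bits_per_symbol(qam_order):
    # An M-point QAM constellation carries log2(M) bits per symbol.
    return int(math.log2(qam_order))

b_wifi6 = bits_per_symbol(1024)            # Wi-Fi 6 top constellation
b_wifi7 = bits_per_symbol(4096)            # Wi-Fi 7 top constellation
print(f"1024-QAM: {b_wifi6} bits/symbol, 4096-QAM: {b_wifi7} bits/symbol")
print(f"per-symbol gain: {100 * (b_wifi7 - b_wifi6) / b_wifi6:.0f}%")   # 20%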
Commercial availability Qualcomm announced its FastConnect 7800 series on 28 Feb 2022 using 14 nm chips. As of March 2023, the company claims 175 devices will be using their Wi-Fi 7 chips, including smartphones, routers, and access points. Broadcom followed on 12 April 2022 with a series of 5 chips covering home, commercial, and enterprise uses. The company unveiled its second generation Wi-Fi 7 chips on 20 June 2023 featuring tri-band MLO support and lower costs. The TP-Link Archer BE900 wireless router was available to consumers in April 2023. The company's Deco BE95 mesh networking system was also available that month. Asus, Eero, Linksys and Netgear had Wi-Fi 7 wireless routers available by the end of 2023. The ARRIS SURFboard G54 is a DOCSIS 3.1 cable gateway featuring Wi-Fi 7. It became available in October 2023. Lumen's Quantum Fiber W1700K and W1701K are Wi-Fi 7 certified and provided with its 360 WiFi offering; they are the first devices made for a major telecommunications provider to be certified for Wi-Fi 7. Client devices Intel launched the BE200 and BE202 wireless adapters for desktop and laptop motherboards in September 2023. The Asus ROG Strix Z790 E II motherboard is among the first with built-in Wi-Fi 7. Software Android 13 and higher provide support for Wi-Fi 7. The Linux 6.2 kernel provides support for Wi-Fi 7 devices. The 6.4 kernel added Wi-Fi 7 mesh support. Linux 6.5 included significant driver support by Intel engineers, particularly support for MLO. Support for Wi-Fi 7 was added to Windows 11, as of build 26063.1. Notes References be Networking standards Wireless communication systems
Wi-Fi 7
Technology,Engineering
1,212
4,216,002
https://en.wikipedia.org/wiki/Near-field%20scanning%20optical%20microscope
Near-field scanning optical microscopy (NSOM) or scanning near-field optical microscopy (SNOM) is a microscopy technique for nanostructure investigation that breaks the far field resolution limit by exploiting the properties of evanescent waves. In SNOM, the excitation laser light is focused through an aperture with a diameter smaller than the excitation wavelength, resulting in an evanescent field (or near-field) on the far side of the aperture. When the sample is scanned at a small distance below the aperture, the optical resolution of transmitted or reflected light is limited only by the diameter of the aperture. In particular, lateral resolution of 6 nm and vertical resolution of 2–5 nm have been demonstrated. As in optical microscopy, the contrast mechanism can be easily adapted to study different properties, such as refractive index, chemical structure and local stress. Dynamic properties can also be studied at a sub-wavelength scale using this technique. NSOM/SNOM is a form of scanning probe microscopy. History Edward Hutchinson Synge is given credit for conceiving and developing the idea for an imaging instrument that would image by exciting and collecting diffraction in the near field. His original idea, proposed in 1928, was based upon the usage of intense nearly planar light from an arc under pressure behind a thin, opaque metal film with a small orifice of about 100 nm. The orifice was to remain within 100 nm of the surface, and information was to be collected by point-by-point scanning. He foresaw the illumination and the detector movement being the biggest technical difficulties. John A. O'Keefe also developed similar theories in 1956. He thought that moving the pinhole or the detector so close to the sample would be the most likely issue that could prevent the realization of such an instrument. It was Ash and Nicholls at University College London who, in 1972, first broke Abbe's diffraction limit using microwave radiation with a wavelength of 3 cm. A line grating was resolved with a resolution of λ0/60. A decade later, a patent on an optical near-field microscope was filed by Dieter Pohl, followed in 1984 by the first paper that used visible radiation for near field scanning. The near-field optical (NFO) microscope involved a sub-wavelength aperture at the apex of a metal coated sharply pointed transparent tip, and a feedback mechanism to maintain a constant distance of a few nanometers between the sample and the probe. Lewis et al. were also aware of the potential of an NFO microscope at this time. They reported first results in 1986 confirming super-resolution. In both experiments, details below 50 nm (about λ0/10) in size could be recognized. Theory According to Abbe's theory of image formation, developed in 1873, the resolving capability of an optical component is ultimately limited by the spreading out of each image point due to diffraction. Unless the aperture of the optical component is large enough to collect all the diffracted light, the finer aspects of the image will not correspond exactly to the object. The minimum resolution (d) for the optical component is thus limited by its aperture size, and expressed by the Rayleigh criterion: d = 0.61·λ0/NA. Here, λ0 is the wavelength in vacuum; NA is the numerical aperture for the optical component (maximum 1.3–1.4 for modern objectives with a very high magnification factor). Thus, the resolution limit is usually around λ0/2 for conventional optical microscopy.
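A quick numerical check of the far-field limit just quoted can be done in a few lines of Python; the wavelength and numerical aperture below are typical illustrative values, not figures from the article.

def rayleigh_limit_nm(wavelength_nm, numerical_aperture):
    # d = 0.61 * lambda0 / NA, the far-field resolution limit discussed above.
    return 0.61 * wavelength_nm / numerical_aperture

# Green light with a high-NA oil-immersion objective (assumed values)
print(f"d = {rayleigh_limit_nm(532, 1.4):.0f} nm")   # roughly 230 nm, i.e. around lambda0/2

By contrast, the near-field resolution discussed in the rest of the article is set by the sub-wavelength aperture (tens of nanometres), not by the illumination wavelength.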
This treatment takes into account only the light diffracted into the far-field that propagates without any restrictions. NSOM makes use of evanescent or non propagating fields that exist only near the surface of the object. These fields carry the high frequency spatial information about the object and have intensities that drop off exponentially with distance from the object. Because of this, the detector must be placed very close to the sample in the near field zone, typically a few nanometers. As a result, near field microscopy remains primarily a surface inspection technique. The detector is then rastered across the sample using a piezoelectric stage. The scanning can either be done at a constant height or with regulated height by using a feedback mechanism. Modes of operation Aperture and apertureless operation NSOM can be operated either in a so-called aperture mode or in a non-aperture (apertureless) mode. The tips used in the apertureless mode are very sharp and do not have a metal coating. Though there are many issues associated with the apertured tips (heating, artifacts, contrast, sensitivity, topology and interference among others), aperture mode remains more popular. This is primarily because apertureless mode is even more complex to set up and operate, and is not understood as well. There are five primary modes of apertured NSOM operation and four primary modes of apertureless NSOM operation. Some types of NSOM operation utilize a campanile probe, which has a square pyramid shape with two facets coated with a metal. Such a probe has a high signal collection efficiency (>90%) and no frequency cutoff. Another alternative is "active tip" schemes, where the tip is functionalized with active light sources such as a fluorescent dye or even a light emitting diode that enables fluorescence excitation. The merits of aperture and apertureless NSOM configurations can be merged in a hybrid probe design, which contains a metallic tip attached to the side of a tapered optical fiber. At visible range (400 nm to 900 nm), about 50% of the incident light can be focused to the tip apex, which is around 5 nm in radius. This hybrid probe can deliver the excitation light through the fiber to realize tip-enhanced Raman spectroscopy (TERS) at tip apex, and collect the Raman signals through the same fiber. The lens-free fiber-in-fiber-out STM-NSOM-TERS has been demonstrated. Feedback mechanisms Feedback mechanisms are usually used to achieve high resolution and artifact free images since the tip must be positioned within a few nanometers of the surfaces. Some of these mechanisms are constant force feedback and shear force feedback. Constant force feedback mode is similar to the feedback mechanism used in atomic force microscopy (AFM). Experiments can be performed in contact, intermittent contact, and non-contact modes. In shear force feedback mode, a tuning fork is mounted alongside the tip and made to oscillate at its resonance frequency. The amplitude is closely related to the tip-surface distance, and is thus used as a feedback signal. Contrast It is possible to take advantage of the various contrast techniques available to optical microscopy through NSOM but with much higher resolution.
By using the change in the polarization of light or the intensity of the light as a function of the incident wavelength, it is possible to make use of contrast enhancing techniques such as staining, fluorescence, phase contrast and differential interference contrast. It is also possible to provide contrast using the change in refractive index, reflectivity, local stress and magnetic properties amongst others. Instrumentation and standard setup The primary components of an NSOM setup are the light source, feedback mechanism, the scanning tip, the detector and the piezoelectric sample stage. The light source is usually a laser focused into an optical fiber through a polarizer, a beam splitter and a coupler. The polarizer and the beam splitter would serve to remove stray light from the returning reflected light. The scanning tip, depending upon the operation mode, is usually a pulled or stretched optical fiber coated with metal except at the tip or just a standard AFM cantilever with a hole in the center of the pyramidal tip. Standard optical detectors, such as avalanche photodiode, photomultiplier tube (PMT) or CCD, can be used. Highly specialized NSOM techniques, Raman NSOM for example, have much more stringent detector requirements. Near-field spectroscopy As the name implies, information is collected by spectroscopic means instead of imaging in the near field regime. Through near field spectroscopy (NFS), one can probe spectroscopically with sub-wavelength resolution. Raman SNOM and fluorescence SNOM are two of the most popular NFS techniques as they allow for the identification of nanosized features with chemical contrast. Some of the common near-field spectroscopic techniques are below. Direct local Raman NSOM is based on Raman spectroscopy. Aperture Raman NSOM is limited by very hot and blunt tips, and by long collection times. However, apertureless NSOM can be used to achieve high Raman scattering efficiency factors (around 40). Topological artifacts make it hard to implement this technique for rough surfaces. Tip-enhanced Raman spectroscopy (TERS) is an offshoot of surface enhanced Raman spectroscopy (SERS). This technique can be used in an apertureless shear-force NSOM setup, or by using an AFM tip coated with gold or silver. The Raman signal is found to be significantly enhanced under the AFM tip. This technique has been used to give local variations in the Raman spectra under a single-walled nanotube. A highly sensitive optoacoustic spectrometer must be used for the detection of the Raman signal. Fluorescence NSOM is a highly popular and sensitive technique which makes use of fluorescence for near field imaging, and is especially suited for biological applications. The technique of choice here is apertureless back to the fiber emission in constant shear force mode. This technique uses merocyanine-based dyes embedded in an appropriate resin. Edge filters are used for removal of all primary laser light. Resolution as low as 10 nm can be achieved using this technique. Near field infrared spectrometry and near-field dielectric microscopy use near-field probes to combine sub-micron microscopy with localized IR spectroscopy. The nano-FTIR method is a broadband nanoscale spectroscopy that combines apertureless NSOM with broadband illumination and FTIR detection to obtain a complete infrared spectrum at every spatial location. Sensitivity to a single molecular complex and nanoscale resolution up to 10 nm has been demonstrated with nano-FTIR. 
The nanofocusing technique can create a nanometer-scale "white" light source at the tip apex, which can be used to illuminate a sample in the near field for spectroscopic analysis. The interband optical transitions in individual single-walled carbon nanotubes have been imaged, and a spatial resolution of around 6 nm has been reported.

Artifacts

NSOM can be vulnerable to artifacts that are not from the intended contrast mode. The most common sources of artifacts in NSOM are tip breakage during scanning, striped contrast, displaced optical contrast, local far-field light concentration, and topographic artifacts. In apertureless NSOM, also known as scattering-type SNOM or s-SNOM, many of these artifacts are eliminated or can be avoided by proper technique application.

Limitations

One limitation is a very short working distance and extremely shallow depth of field. NSOM is normally limited to surface studies; however, it can be applied to subsurface investigations within the corresponding depth of field. Shear-force mode and other contact modes of operation are not well suited to studying soft materials. Scan times are long when large sample areas are imaged at high resolution. An additional limitation is the predominant orientation of the polarization state of the interrogating light in the near field of the scanning tip. Metallic scanning tips naturally orient the polarization state perpendicular to the sample surface. Other techniques, such as anisotropic terahertz microspectroscopy, utilize in-plane polarimetry to study physical properties inaccessible to near-field scanning optical microscopes, including the spatial dependence of intramolecular vibrations in anisotropic molecules.

See also

Fluorescence spectroscopy
Nano-optics
Near-field optics

References

External links

Scanning probe microscopy
Cell imaging
Laboratory equipment
Microscopy
Optical microscopy
Near-field scanning optical microscope
Chemistry,Materials_science,Biology
2,466
22,790,579
https://en.wikipedia.org/wiki/CoRoT-5
CoRoT-5 is a magnitude 14 star located in the constellation Monoceros.

Location and properties

The announcement materials identify this star as located within the LRa01 field of view of the CoRoT spacecraft. According to the project website, this field is in the constellation Monoceros. The announcement materials report that the star has a radius about 116% that of the Sun and a mass about 101% that of the Sun. The star is reported to be a main-sequence F-type star, a little larger and hotter than the Sun.

Planetary system

The announcement states that this parent star is orbited by one known extrasolar planet, identified as CoRoT-5b. The discovery was made using the astronomical transit method by the CoRoT program.

See also

CoRoT – a French-led planet-hunting space mission with ESA participation, launched in 2006

References

F-type main-sequence stars
Planetary transit variables
Planetary systems with one confirmed planet
Monoceros
CoRoT-5
Astronomy
189
32,170,500
https://en.wikipedia.org/wiki/BPS%20domain
In molecular biology, the BPS (Between PH and SH2) domain is a protein domain of approximately 45 amino acids found in the adaptor proteins Grb7, Grb10 and Grb14. It mediates inhibition of the tyrosine kinase domain of the insulin receptor: the N-terminal portion of the BPS domain binds to the substrate peptide groove of the kinase, acting as a pseudosubstrate inhibitor. The domain is composed of two beta strands and a C-terminal helix.

References

Protein domains
BPS domain
Chemistry,Biology
110
7,901,142
https://en.wikipedia.org/wiki/Fluorescence%20interference%20contrast%20microscopy
Fluorescence interference contrast (FLIC) microscopy is a microscopic technique developed to achieve z-resolution on the nanometer scale. FLIC occurs whenever fluorescent objects are in the vicinity of a reflecting surface (e.g. a Si wafer). The resulting interference between the direct and the reflected light leads to a double sin² modulation of the intensity, I, of a fluorescent object as a function of its distance, h, above the reflecting surface. This allows for height measurements with nanometer precision.

FLIC microscopy is well suited to measuring the topography of a membrane that contains fluorescent probes (e.g. an artificial lipid bilayer or a living cell membrane) or the structure of fluorescently labeled proteins on a surface.

FLIC optical theory

General two-layer system

The optical theory underlying FLIC was developed by Armin Lambacher and Peter Fromherz. They derived a relationship between the observed fluorescence intensity and the distance of the fluorophore from a reflective silicon surface. The observed fluorescence intensity is the product of the excitation probability per unit time and the probability per unit time of measuring an emitted photon. Both probabilities are functions of the fluorophore height above the silicon surface, so the observed intensity is also a function of the fluorophore height.

The simplest arrangement to consider is a fluorophore embedded in silicon dioxide (refractive index n) at a distance d from an interface with silicon. The fluorophore is excited by light at the excitation wavelength and emits light at the emission wavelength. A unit vector gives the orientation of the excitation transition dipole of the fluorophore. The excitation probability is proportional to the squared projection of the local electric field, which includes the effects of interference, onto the direction of this transition dipole.

The local electric field at the fluorophore is affected by interference between the direct incident light and the light reflected off the silicon surface. The interference is quantified by the phase difference between the two waves: for light incident at an angle θ with respect to the silicon plane normal, the reflected wave travels an extra optical path of 2·n·d·cos θ, giving a phase difference of 4π·n·d·cos θ divided by the wavelength. Not only does interference modulate the local field, but the silicon surface does not perfectly reflect the incident light. Fresnel coefficients give the change in amplitude between an incident and a reflected wave. The Fresnel coefficients depend on the angles of incidence and refraction, on the indices of refraction of the two media, and on the polarization direction; the two angles are related by Snell's law. Separate reflection coefficients are written for the TE and TM polarizations, where TE refers to the component of the electric field perpendicular to the plane of incidence and TM to the parallel component (the plane of incidence is defined by the surface normal and the propagation direction of the light).

In Cartesian coordinates, the local electric field is expressed in terms of these reflection coefficients, the phase difference and the polarization angle of the incident light with respect to the plane of incidence. The orientation of the excitation dipole is described by its angle to the surface normal and its azimuthal angle to the plane of incidence. The expressions for the local field and the dipole orientation can be combined to give the probability of exciting the fluorophore per unit time.

Many of the parameters used above would vary in a normal experiment. The variation in the five following parameters should be included in this theoretical description.
The coherence of the excitation light
The incident angle of the excitation light
The polarization angle of the excitation light
The angle of the transition dipole of the fluorophore
The wavelength of the excitation light

The squared projection must be averaged over these quantities to give the probability of excitation. Averaging over the first four parameters gives an expression for the excitation probability (normalization factors are not included) that is weighted by a distribution of the orientation angles of the fluorophore dipoles. The azimuthal angle and the polarization angle are integrated over analytically, so they no longer appear in the result. To finally obtain the probability of excitation per unit time, this expression is integrated over the spread in excitation wavelength, accounting for the intensity spectrum of the excitation light and the extinction coefficient of the fluorophore. The steps to calculate the emission probability are equivalent to those used for the excitation probability, except that the excitation parameter labels are replaced with emission labels and "in" is replaced with "out". The measured fluorescence intensity is proportional to the product of the excitation probability and the emission probability.

It is important to note that this theory determines a proportionality relation between the measured fluorescence intensity and the distance of the fluorophore above the reflective surface. The fact that it is not an equality will have a significant effect on the experimental procedure.

Experimental Setup

A silicon wafer is typically used as the reflective surface in a FLIC experiment. An oxide layer is then thermally grown on top of the silicon wafer to act as a spacer. On top of the oxide is placed the fluorescently labeled specimen, such as a lipid membrane, a cell or membrane-bound proteins. With the sample system built, all that is needed is an epifluorescence microscope and a CCD camera to make quantitative intensity measurements.

The silicon dioxide thickness is very important in making accurate FLIC measurements. As mentioned before, the theoretical model describes the relative measured fluorescence intensity versus the fluorophore height; the fluorophore position cannot simply be read off a single measured FLIC curve. The basic procedure is to manufacture the oxide layer with at least two known thicknesses (the layer can be made with photolithographic techniques and the thickness measured by ellipsometry). The thicknesses used depend on the sample being measured. For a sample with fluorophore heights in the range of 10 nm, an oxide thickness of around 50 nm would be best, because the FLIC intensity curve is steepest there and would produce the greatest contrast between fluorophore heights. Oxide thicknesses above a few hundred nanometers could be problematic because the curve begins to get smeared out by polychromatic light and a range of incident angles. A ratio of measured fluorescence intensities at different oxide thicknesses is compared with the predicted ratio to calculate the fluorophore height above the oxide; the resulting relation is solved numerically for the height.

Imperfections of the experiment, such as imperfect reflection, non-normal incidence of light and polychromatic light, tend to smear out the sharp fluorescence curves. The spread in incidence angle can be controlled by the numerical aperture (N.A.). However, depending on the numerical aperture used, the experiment will yield either good lateral resolution (x-y) or good vertical resolution (z), but not both. A high N.A. (~1.0) gives good lateral resolution, which is best if the goal is to determine long-range topography. A low N.A. (~0.001), on the other hand, provides accurate z-height measurement to determine the height of a fluorescently labeled molecule in a system.
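The height-from-intensity-ratio procedure just described can be made concrete with a small numerical sketch. The model below keeps only the double sin² modulation mentioned at the start of this entry, evaluated at normal incidence; it omits the Fresnel amplitudes, dipole orientation, finite numerical aperture and spectral averaging of the full Lambacher–Fromherz treatment, and the wavelengths, refractive index and terrace thicknesses are illustrative values rather than parameters taken from this text.

```python
import numpy as np
from scipy.optimize import brentq

def flic_intensity(h_nm, d_ox_nm, n_ox=1.46, lam_ex_nm=488.0, lam_em_nm=520.0):
    """Simplified FLIC intensity at normal incidence: the excitation and emission
    standing waves each contribute a sin^2 factor, evaluated at the optical height
    of the fluorophore above the silicon mirror (oxide thickness plus height h,
    both inside a medium of refractive index n_ox)."""
    z = n_ox * (d_ox_nm + h_nm)                      # optical path above the mirror
    return np.sin(2 * np.pi * z / lam_ex_nm) ** 2 * np.sin(2 * np.pi * z / lam_em_nm) ** 2

def height_from_ratio(measured_ratio, d1_nm, d2_nm, h_max_nm=30.0):
    """Recover the fluorophore height from the ratio of intensities measured on two
    oxide terraces of known thickness (the absolute intensity is only proportional
    to the model). Assumes the ratio brackets a single root over [0, h_max_nm]."""
    f = lambda h: flic_intensity(h, d1_nm) / flic_intensity(h, d2_nm) - measured_ratio
    return brentq(f, 0.0, h_max_nm)

# Example: predicted ratio for a fluorophore 8 nm above 50 nm and 70 nm terraces,
# then recovery of that height from the ratio alone.
r = flic_intensity(8.0, 50.0) / flic_intensity(8.0, 70.0)
print(round(height_from_ratio(r, 50.0, 70.0), 2))    # ~8.0
```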
Analysis

The basic analysis involves fitting the intensity data with the theoretical model, allowing the distance of the fluorophore above the oxide surface to be a free parameter. The FLIC curves shift to the left as the distance of the fluorophore above the oxide increases. This distance is usually the parameter of interest, but several other free parameters are often included to optimize the fit. Normally an amplitude factor (a) and a constant additive term for the background (b) are included. The amplitude factor scales the relative model intensity, and the constant background shifts the curve up or down to account for fluorescence coming from out-of-focus areas, such as the top side of a cell. Occasionally the numerical aperture (N.A.) of the microscope is allowed to be a free parameter in the fitting. The other parameters entering the optical theory, such as the indices of refraction, layer thicknesses and light wavelengths, are assumed constant, with some uncertainty.

A FLIC chip may be made with oxide terraces of 9 or 16 different heights arranged in blocks. After a fluorescence image is captured, each block of 9 or 16 terraces yields a separate FLIC curve that defines a unique fluorophore height. The average height is found by compiling all of the fitted values into a histogram. The statistical error in the calculated height comes from two sources: the error in fitting the optical theory to the data and the uncertainty in the thickness of the oxide layer. Systematic error comes from three sources: the measurement of the oxide thickness (usually by ellipsometer), the fluorescence intensity measurement with the CCD, and the uncertainty in the parameters used in the optical theory. An estimate of the magnitude of the systematic error has been reported.

References

Microscopy
Nanotechnology
Fluorescence interference contrast microscopy
Chemistry,Materials_science,Engineering
1,757
55,048,075
https://en.wikipedia.org/wiki/Nature%20Ecology%20and%20Evolution
Nature Ecology and Evolution is an online-only monthly peer-reviewed scientific journal published by Nature Portfolio covering all aspects of research on ecology and evolutionary biology. It was established in 2017. Its first and current editor-in-chief is Patrick Goymer. According to the Journal Citation Reports, Nature Ecology and Evolution has a 2020 impact factor of 15.46. References External links Nature Research academic journals Academic journals established in 2017 Ecology journals Monthly journals English-language journals
Nature Ecology and Evolution
Environmental_science
94
43,173,137
https://en.wikipedia.org/wiki/Alcohol%20%28drug%29
Alcohol (), sometimes referred to by the chemical name ethanol, is the second most consumed psychoactive drug globally behind caffeine. Alcohol is a central nervous system (CNS) depressant, decreasing electrical activity of neurons in the brain. The World Health Organization (WHO) classifies alcohol as a toxic, psychoactive, dependence-producing, and carcinogenic substance. Alcohol is found in fermented beverages such as beer, wine, and distilled spirit – in particular, rectified spirit, and serves various purposes; Certain religions integrate alcohol into their spiritual practices. For example, the Catholic Church requires alcoholic sacramental wine in the Eucharist, and permits moderate consumption of alcohol in daily life as a means of experiencing joy. Alcohol is also used as a recreational drug, for example by college students, for self-medication, and in warfare. It is also frequently involved in alcohol-related crimes such as drunk driving, public intoxication, and underage drinking. Short-term effects from moderate consumption include relaxation, decreased social inhibition, and euphoria, while binge drinking may result in cognitive impairment, blackout, and hangover. Excessive alcohol intake causes alcohol poisoning, characterized by unconsciousness or, in severe cases, death. Long-term effects are considered to be a major global public health issue and includes alcoholism, abuse, alcohol withdrawal, fetal alcohol spectrum disorder (FASD), liver disease, hepatitis, cardiovascular disease (e.g., cardiomyopathy), polyneuropathy, alcoholic hallucinosis, long-term impact on the brain (e.g., brain damage, dementia, and Marchiafava–Bignami disease), and cancers. For roughly two decades, the International Agency for Research on Cancer (IARC) of the WHO has classified alcohol as a Group 1 Carcinogen. Globally, alcohol use was the seventh leading risk factor for both deaths and DALY in 2016. According to WHO's Global status report on alcohol and health 2018, more than 200 health issues are associated with harmful alcohol consumption, ranging from liver diseases, road injuries and violence, to cancers, cardiovascular diseases, suicides, tuberculosis, and HIV/AIDS. Moreover, a 2024 WHO report indicates that these harmful consequences of alcohol use result in approximately 2.6 million deaths annually, accounting for 4.7% of all global deaths. In 2023, the WHO declared that 'there is no safe amount of alcohol consumption' and that 'the risk to the drinker's health starts from the first drop of any alcoholic beverage.' National agencies are aligning with the WHO's recommendations and increasingly advocating for abstinence from alcohol consumption. They highlight that even minimal alcohol intake is associated with elevated health risks, emphasizing that reducing alcohol intake is beneficial for everyone, regardless of their current drinking levels. Uses Dutch courage Dutch courage, also known as pot-valiance or liquid courage, refers to courage gained from intoxication with alcohol. Alcohol use among college students is often used as "liquid courage" in the hookup culture, for them to make a sexual advance in the first place. However, a recent trend called "dry dating" is gaining popularity to replace "liquid courage", which involves going on dates without consuming alcohol. Consuming alcohol prior to visiting female sex workers is a common practice among some men. Sex workers often resort to using drugs and alcohol to cope with stress. 
Alcohol, when consumed in high doses, is considered to be an anaphrodisiac.

Criminal

Albeit not a valid intoxication defense, the weakening of inhibitions by drunkenness is occasionally used as a tool to commit planned offenses, such as property crimes including theft and robbery, and violent crimes including assault, murder, or rape; the latter sometimes, but not always, occurs in alcohol-facilitated sexual assaults where the victim is also drugged.

Warfare

Alcohol has a long association with military use, and has been called "liquid courage" for its role in preparing troops for battle, anaesthetizing injured soldiers, and celebrating military victories. It has also served as a coping mechanism for combat stress reactions and a means of decompression from combat to everyday life. However, this reliance on alcohol can have negative consequences for physical and mental health. Military and veteran populations face significant challenges in addressing the co-occurrence of PTSD and alcohol use disorder. Military personnel who show symptoms of PTSD, major depressive disorder, alcohol use disorder, and generalized anxiety disorder show higher levels of suicidal ideation. Alcohol consumption in the US military is higher than in any other profession, according to CDC data from 2013–2017. The Department of Defense Survey of Health Related Behaviors among Active Duty Military Personnel reported that 47% of active-duty members engaged in binge drinking, and another 20% in heavy drinking, in the past 30 days.

Reports from the Russian invasion of Ukraine in 2022 and since have suggested that Russian soldiers are drinking significant amounts of alcohol (as well as consuming harder drugs), which increases their losses. Some reports suggest that, on occasion, alcohol and drugs have been provided to lower-quality troops by their commanders in order to facilitate their use as expendable cannon fodder.

Food energy

For calculating food energy, the USDA uses a fixed energy value per gram of alcohol (and a corresponding value per millilitre). For distilled spirits, a standard serving in the United States, at 40% ethanol (80 proof), corresponds to about 14 grams of alcohol and 98 calories. However, alcoholic drinks are considered empty-calorie foods because, other than food energy, they contribute no essential nutrients. Alcohol increases the insulin response to glucose, promoting fat storage and hindering the oxidation of carbohydrates and fats. The excess acetyl-CoA produced as the liver processes alcohol can lead to fatty liver disease and eventually alcoholic liver disease. This progression can lead to further complications: alcohol-related liver disease may cause exocrine pancreatic insufficiency, the inability to properly digest food due to a lack or reduction of digestive enzymes made by the pancreas.

The use of alcohol as a staple food source is considered impractical because it increases the blood alcohol content (BAC). However, alcohol is a significant source of food energy for individuals with alcoholism and those who engage in binge drinking. For example, individuals with drunkorexia engage in a combination of self-imposed malnutrition and binge drinking to avoid weight gain from alcohol, to save money for purchasing alcohol, and to facilitate alcohol intoxication. Also, in alcoholics who get most of their daily calories from alcohol, a deficiency of thiamine can produce Korsakoff's syndrome, which is associated with serious brain damage.
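The arithmetic behind the standard-serving figures above can be sketched in a few lines. The density of ethanol (about 0.789 g/ml) is a well-established physical constant; the value of roughly 7 kcal per gram of alcohol is a commonly cited approximation rather than a figure taken from this entry, and the 44 ml (1.5 US fl oz) pour is used only as an illustrative serving size.

```python
# Rough food-energy estimate for an alcoholic drink.
ETHANOL_DENSITY_G_PER_ML = 0.789   # physical constant
KCAL_PER_GRAM_ALCOHOL = 7.0        # commonly cited approximate energy value

def drink_energy(volume_ml: float, abv: float) -> tuple[float, float]:
    """Return (grams of ethanol, kcal from ethanol) for a drink of given volume and ABV."""
    grams = volume_ml * abv * ETHANOL_DENSITY_G_PER_ML
    return grams, grams * KCAL_PER_GRAM_ALCOHOL

# Illustrative 1.5 US fl oz (about 44 ml) pour of an 80-proof spirit (40% ABV):
grams, kcal = drink_energy(44.0, 0.40)
print(f"{grams:.0f} g of ethanol, about {kcal:.0f} kcal")   # ~14 g and ~97 kcal
```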
Medical

Spiritus fortis is a medical term for ethanol solutions with 95% ABV. When taken by mouth or injected into a vein, ethanol is used to treat methanol or ethylene glycol toxicity when fomepizole is not available. Ethanol, when used to treat or prevent methanol and/or ethylene glycol toxicity, competes with other alcohols for the alcohol dehydrogenase enzyme, lessening metabolism into toxic aldehyde and carboxylic acid derivatives, and reducing the more serious toxic effects of the glycols when crystallized in the kidneys.

Recreational

Drinking culture is the set of traditions and social behaviors that surround the consumption of alcoholic beverages as a recreational drug and social lubricant. Although alcoholic beverages and social attitudes toward drinking vary around the world, nearly every civilization has independently discovered the processes of brewing beer, fermenting wine and distilling spirits. Common drinking styles include moderate drinking, social drinking, and binge drinking.

Drinking styles

In today's society there is a growing awareness of the risks of drinking, reflected in a variety of approaches to alcohol use, each emphasizing responsible choices. "Sober curious" describes a mindset in which someone consciously chooses to reduce or eliminate alcohol consumption (not drinking and driving, staying aware of their surroundings, not pressuring others to drink, and being able to stop at any time) without necessarily committing to complete sobriety. A 2014 report in the National Survey on Drug Use and Health found that only 10% of either "heavy drinkers" or "binge drinkers" defined according to the above criteria also met the criteria for alcohol dependence, while only 1.3% of non-binge drinkers met the criteria. An inference drawn from this study is that evidence-based policy strategies and clinical preventive services may effectively reduce binge drinking without requiring addiction treatment in most cases.

Binge drinking

Binge drinking, or heavy episodic drinking, is drinking alcoholic beverages with an intention of becoming intoxicated by heavy consumption of alcohol over a short period of time, but definitions vary considerably. Binge drinking is a style of drinking that is popular in several countries worldwide, and it overlaps somewhat with social drinking since it is often done in groups. Drinking games involve consuming alcohol as part of the gameplay; they can be risky because they encourage people to drink more than they intended to. Recent studies link binge drinking habits to a decline in quality of life and a lifespan shortened by 3–6 years. Alcohol-based sugar-sweetened beverages are closely linked to episodic drinking in adolescents; sugar-infused alcoholic beverages include alcopops and liqueurs. Pregaming at heavy episodic levels (4+/5+ drinks for women/men) is linked to a higher likelihood of engaging in high-intensity drinking (8+/10+ drinks), according to a 2022 study. The study also found that students who pregame at this level report more negative consequences compared with days with moderate pregame drinking and days without any pregame drinking. Hazing has a long-standing presence in college fraternities, often involving alcohol as a form of punishment. This can lead to dangerous levels of intoxication and severe ethanol poisoning, sometimes resulting in fatalities; high serum ethanol levels are common among affected students.
Definition Binge drinking refers to the consumption of alcohol that takes place simultaneously or within a few hours of one another; The National Institute on Alcohol Abuse and Alcoholism (NIAAA) defines binge drinking as a pattern of alcohol consumption that brings a person's blood alcohol concentration (BAC) to 0.08 percent or above. This typically occurs when men consume five or more US standard drinks, or women consume four or more drinks, within about two hours. The Substance Abuse and Mental Health Services Administration (SAMHSA) defines binge drinking slightly differently, focusing on the number of drinks consumed on a single occasion. According to SAMHSA, binge drinking is consuming five or more drinks for men, or four or more drinks for women, on the same occasion on at least one day in the past month. Heavy drinking Alcohol in association football has long been a complex issue, with significant cultural and behavioral implications. Football is widely observed in various settings such as television broadcasts, sports bars, and arenas, contributing to the drinking culture surrounding the sport. A 2007 study at the University of Texas at Austin monitored the drinking habits of 541 students over two football seasons. It revealed that high-profile game days ranked among the heaviest drinking occasions, similar to New Year's Eve. Male students increased their consumption for all games, while socially active female students drank heavily during away games. Lighter drinkers also showed a higher likelihood of risky behaviors during away games as their intoxication increased. This research highlights specific drinking patterns linked to collegiate sports events. Heavy drinking significantly increases during December, particularly around Christmas and New Year's, leading to a rise in alcohol sales, consumption, and related harmful events and deaths. Because of increased alcohol consumption at festivities and poorer road conditions during the winter months, alcohol-related road traffic accidents increase over the Christmas and holiday season. According to a 2022 study, recreational heavy drinking and intoxication have become increasingly prevalent among Nigerian youth in Benin City. Traditionally, alcohol use was more accepted for men, while youth drinking was often taboo. Today, many young people engage in heavy drinking for pleasure and excitement. Peer networks encourage this behavior through rituals that promote intoxication and provide care for inebriated friends. The findings suggest a need to reconsider cultural prohibitions on youth drinking and advocate for public health interventions promoting low-risk drinking practices. Definition Heavy drinking should not be confused with heavy episodic drinking, commonly known as binge drinking, which takes place over a brief period of a few hours. However, multiple binge drinking sessions within a short timeframe can be classified as heavy drinking. Heavy alcohol use refers to consumption patterns that take place within a single day, week, or month, depending on the amount consumed: The Centers for Disease Control and Prevention defines heavy drinking as consuming more than 8 drinks per week for women and more than 15 drinks per week for men. NIAAA defines heavy alcohol use as the consumption of five or more standard drinks in a single day or 15 or more drinks within a week for men, while for women, it is defined as consuming four or more drinks in a day or eight or more drinks per week. 
SAMHSA considers heavy alcohol use to be engaging in binge drinking behaviors on five or more days within a month. Light, moderate, responsible, and social drinking In many cultures, good news is often celebrated by a group sharing alcoholic drinks. For example, sparkling wine may be used to toast the bride at a wedding, and alcoholic drinks may be served to celebrate a baby's birth. Buying someone an alcoholic drink is often considered a gesture of goodwill, an expression of gratitude, or to mark the resolution of a dispute. Definitions Light drinking, moderate drinking, responsible drinking, and social drinking are often used interchangeably, but with slightly different connotations: Light drinking - "Alcohol has been found to increase risk for cancer, and for some types of cancer, the risk increases even at low levels of alcohol consumption (less than 1 drink in a day). Caution, therefore, is recommended.", according to the Dietary Guidelines for Americans (DGA). "The Committee recommended that adults limit alcohol intake to no more than 1 drink per day for both women and men for better health" (DGA). Light alcohol consumption showed no connection to most cancers, but a slight rise in the likelihood of melanoma, breast cancer in females, and prostate cancer in males was observed. Moderate drinking - strictly focuses on the amount of alcohol consumed, following alcohol consumption recommendations. This is called "drinking in moderation". The CDC defines "Moderate drinking is having one drink or less in a day for women, or two drinks or less in a day for men." According to the WHO nearly half of all alcohol-attributable cancers in the WHO European Region are linked to alcohol consumption, even from "light" or "moderate" drinking – "less than 1.5 litres of wine or less than 3.5 litres of beer or less than 450 millilitres of spirits per week". However, moderate drinking is associated with a further slight increase in cancer risk. Also, moderate drinking may disrupt normal brain functioning. Responsible drinking - as defined by alcohol industry standards, often emphasizes personal choice and risk management, unlike terms like "social drinking" or "moderate drinking". Critics argue that the alcohol industry's definition does not always align with official recommendations for safe drinking limits. Social drinking - refers to casual drinking of alcoholic beverages in a social setting (for example bars, nightclubs, or parties) without an intent to become intoxicated. A social drinker is also defined as a person who only drinks alcohol during social events, such as parties, and does not drink while alone (e.g., at home). While social drinking often involves moderation, it does not strictly emphasize safety or specific quantities, unlike moderate drinking. Social settings can involve peer pressure to drink more than intended, which can be a risk factor for excessive alcohol consumption. Regularly socializing over drinks can lead to a higher tolerance for alcohol and potentially dependence, especially in groups where drinking is a central activity. Social drinking does not preclude the development of alcohol dependence. High-functioning alcoholism describes individuals who appear to function normally in daily life despite struggling with alcohol dependence. Self-medication The therapeutic index for ethanol is 10%. Alcohol can have analgesic (pain-relieving) effects, which is why some people with chronic pain turn to alcohol to self-medicate and try to alleviate their physical discomfort. 
People with social anxiety disorder commonly self-medicate with alcohol to overcome their highly set inhibitions. However, self-medicating excessively for prolonged periods of time with alcohol often makes the symptoms of anxiety or depression worse. This is believed to occur as a result of the changes in brain chemistry from long-term use. A 2023 systematic review highlights the non-addictive use of alcohol for managing developmental issues, personality traits, and psychiatric symptoms, emphasizing the need for informed, harm-controlled approaches to alcohol consumption within a personalized health policy framework. A 2023 study suggests that people who drink for both recreational enjoyment and therapeutic reasons, like relieving pain and anxiety/depression/stress, have a higher demand for alcohol compared to those who drink solely for recreation or self-medication. This finding raises concerns, as this group may be more likely to develop alcohol use disorder and experience negative consequences related to their drinking. A significant proportion of patients attending mental health services for conditions including anxiety disorders such as panic disorder or social phobia have developed these conditions as a result of recreational alcohol or sedative use. Self-medication or mental disorders may make people not decline their drinking despite negative consequences. This can create a cycle of dependence that is difficult to break without addressing the underlying mental health issue. Unscientific The American Heart Association warn that "We've all seen the headlines about studies associating light or moderate drinking with health benefits and reduced mortality. Some researchers have suggested there are health benefits from wine, especially red wine, and that a glass a day can be good for the heart. But there's more to the story. No research has proved a cause-and-effect link between drinking alcohol and better heart health." In folk medicine, consuming a nightcap is for the purpose of inducing sleep. However, alcohol is not recommended by many doctors as a sleep aid because it interferes with sleep quality. "Hair of the dog", short for "hair of the dog that bit you", is a colloquial expression in the English language predominantly used to refer to alcohol that is consumed as a hangover remedy (with the aim of lessening the effects of a hangover). Many other languages have their own phrase to describe the same concept. The idea may have some basis in science in the difference between ethanol and methanol metabolism. Instead of alcohol, rehydration before going to bed or during hangover may relieve dehydration-associated symptoms such as thirst, dizziness, dry mouth, and headache. Drinking alcohol may cause subclinical immunosuppression. Spiritual Christian views on alcohol encompass a range of perspectives regarding the consumption of alcoholic beverages, with significant emphasis on moderation rather than total abstinence. The moderationist position is held by Roman Catholics and Eastern Orthodox, and within Protestantism, it is accepted by Anglicans, Lutherans and many Reformed churches. Moderationism is also accepted by Jehovah's Witnesses. 
Spiritual use of moderate alcohol consumption is also found in some religions and schools with esoteric influences, including the Hindu tantra sect Aghori, in the Sufi Bektashi Order and Alevi Jem ceremonies, in the Rarámuri religion, in the Japanese religion Shinto, by the new religious movement Thelema, in Vajrayana Buddhism, and in Vodou faith of Haiti. Contraindication Pregnancy In the US, alcohol is subject to the FDA drug labeling Pregnancy Category X (Contraindicated in pregnancy). Minnesota, North Dakota, Oklahoma, South Dakota, and Wisconsin have laws that allow the state to involuntarily commit pregnant women to treatment if they abuse alcohol during pregnancy. Risks Fetal alcohol spectrum disorder Ethanol is classified as a teratogen—a substance known to cause birth defects; according to the U.S. Centers for Disease Control and Prevention (CDC), alcohol consumption by women who are not using birth control increases the risk of fetal alcohol spectrum disorders (FASDs). This group of conditions encompasses fetal alcohol syndrome, partial fetal alcohol syndrome, alcohol-related neurodevelopmental disorder, static encephalopathy, and alcohol-related birth defects. The CDC currently recommends complete abstinence from alcoholic beverages for women of child-bearing age who are pregnant, trying to become pregnant, or are sexually active and not using birth control. In South Africa, some populations have rates as high as 9%. Miscarriage Miscarriage, also known in medical terms as a spontaneous abortion, is the death and expulsion of an embryo or fetus before it can survive independently. Alcohol consumption is a risk factor for miscarriage. Sudden infant death syndrome Drinking of alcohol by parents is linked to sudden infant death syndrome (SIDS). One study found a positive correlation between the two during New Years celebrations and weekends. Another found that alcohol use disorder was linked to a more than doubling of risk. Adverse effects Alcohol has a variety of short-term and long-term adverse effects. Alcohol has both short-term, and long-term effects on the memory, and sleep. It also has reinforcement-related adverse effects, including alcoholism, dependence, and withdrawal; The most severe withdrawal symptoms, associated with physical dependence, can include seizures and delirium tremens, which in rare cases can be fatal. Alcohol use is directly related to considerable morbidity and mortality, for instance due to intoxication and alcohol-related health problems. The World Health Organization advises that there is no safe level of alcohol consumption. A study in 2015 found that alcohol and tobacco use combined resulted in a significant health burden, costing over a quarter of a billion disability-adjusted life years. Illicit drug use caused tens of millions more disability-adjusted life years. Drunkorexia is a colloquialism for anorexia or bulimia combined with an alcohol use disorder. Alcohol is a common cause of substance-induced psychosis or episodes, which may occur through acute intoxication, chronic alcoholism, withdrawal, exacerbation of existing disorders, or acute idiosyncratic reactions. Research has shown that excessive alcohol use causes an 8-fold increased risk of psychotic disorders in men and a 3-fold increased risk of psychotic disorders in women. While the vast majority of cases are acute and resolve fairly quickly upon treatment and/or abstinence, they can occasionally become chronic and persistent. 
Alcoholic psychosis is sometimes misdiagnosed as another mental illness such as schizophrenia. An inability to process or exhibit emotions in a proper manner has been shown to exist both in people who consume excessive amounts of alcohol and in those who were exposed to alcohol as fetuses (FAexp). Also, a significant portion (40–60%) of alcoholics experience emotional blindness. Impairments in theory of mind, as well as other social-cognitive deficits, are commonly found in people who have alcohol use disorders, due to the neurotoxic effects of alcohol on the brain, particularly on the prefrontal cortex.

Short-term effects

The amount of ethanol in the body is typically quantified by blood alcohol content (BAC), the weight of ethanol per unit volume of blood. Small doses of ethanol are, in general, stimulant-like and produce euphoria and relaxation; people experiencing these symptoms tend to become talkative and less inhibited, and may exhibit poor judgement. At higher dosages (BAC > 1 gram/liter), ethanol acts as a central nervous system (CNS) depressant, producing, at progressively higher dosages, impaired sensory and motor function, slowed cognition, stupefaction, unconsciousness, and possible death. Ethanol is commonly consumed as a recreational substance, especially while socializing, due to its psychoactive effects.

Central nervous system impairment

Alcohol causes generalized CNS depression, acts as a positive allosteric modulator of the GABAA receptor, and is associated with decreased anxiety, decreased social inhibition, sedation, and impairment of cognitive, memory, motor, and sensory function. It slows cognition and reaction time, impairs judgement, interferes with motor function (resulting in incoordination and numbness), impairs memory formation, and causes sensory impairment. Binge drinking can cause generalized impairment of neurocognitive function, dizziness, analgesia, amnesia, ataxia (loss of balance), confusion, sedation, slurred speech, general anaesthesia, decreased libido, nausea, vomiting, blackout, the spins, stupor, unconsciousness, and hangover. At very high concentrations, alcohol can cause anterograde amnesia, markedly decreased heart rate, pulmonary aspiration, positional alcohol nystagmus, respiratory depression, shock, and coma; in an alcohol overdose, death can result from profound suppression of CNS function and consequent dysautonomia.
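The BAC figures used above (grams per litre, or percent weight per volume) can be related to drinks consumed with the classic Widmark estimate. The sketch below is a population-average approximation, not a formula given in this entry; the distribution ratios and elimination rate are typical textbook values, and individual results vary widely.

```python
# Rough Widmark estimate of blood alcohol content (BAC). The distribution ratio r
# (about 0.68 for men, 0.55 for women) and the elimination rate (about 0.15 g/L per
# hour) are population-average textbook values, not figures from this entry.
GRAMS_PER_STANDARD_DRINK = 14.0   # US standard drink, as quoted earlier in this entry

def widmark_bac_g_per_l(drinks: float, weight_kg: float, r: float, hours: float,
                        elimination_g_per_l_per_h: float = 0.15) -> float:
    """Estimate BAC in grams of ethanol per litre of blood after `hours` of drinking."""
    grams = drinks * GRAMS_PER_STANDARD_DRINK
    bac = grams / (weight_kg * r) - elimination_g_per_l_per_h * hours
    return max(bac, 0.0)

# Example: 5 standard drinks over 2 hours for an 80 kg man (r = 0.68).
bac = widmark_bac_g_per_l(5, 80.0, 0.68, 2.0)
print(f"about {bac:.2f} g/L, i.e. {bac / 10:.2f} g per 100 ml ({bac / 10:.2f}% BAC)")
# about 0.99 g/L, i.e. 0.10 g per 100 ml (0.10% BAC)
```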
Gastrointestinal effects

Alcohol can cause nausea and vomiting in sufficiently high amounts (varying by person). Alcohol stimulates gastric juice production even when food is not present, and as a result its consumption stimulates acidic secretions normally intended to digest protein molecules. Consequently, the excess acidity may harm the inner lining of the stomach, which is normally protected by a mucosal layer that prevents the stomach from, essentially, digesting itself. Ingestion of alcohol can initiate systemic pro-inflammatory changes through two intestinal routes: (1) altering intestinal microbiota composition (dysbiosis), which increases lipopolysaccharide (LPS) release, and (2) degrading intestinal mucosal barrier integrity, thus allowing LPS to enter the circulatory system. The major portion of the blood supply to the liver is provided by the portal vein; therefore, while the liver is continuously fed nutrients from the intestine, it is also exposed to any bacteria and/or bacterial derivatives that breach the intestinal mucosal barrier. Consequently, LPS levels increase in the portal vein, liver and systemic circulation after alcohol intake. Immune cells in the liver respond to LPS with the production of reactive oxygen species, leukotrienes, chemokines and cytokines. These factors promote tissue inflammation and contribute to organ pathology.

Hangover

A hangover is the experience of various unpleasant physiological and psychological effects usually following the consumption of alcohol, such as wine, beer, and liquor. Hangovers can last for several hours or for more than 24 hours. Typical symptoms of a hangover may include headache, drowsiness, concentration problems, dry mouth, dizziness, fatigue, gastrointestinal distress (e.g., nausea, vomiting, diarrhea), absence of hunger, light sensitivity, depression, sweating, hyper-excitability, irritability, and anxiety (often referred to as "hangxiety"). Though many possible remedies and folk cures have been suggested, there is no compelling evidence that any are effective for preventing or treating hangovers; avoiding alcohol or drinking in moderation are the most effective ways to avoid one. The socioeconomic consequences of hangovers include workplace absenteeism, impaired job performance, reduced productivity and poor academic achievement. A hangover may also impair performance during potentially dangerous daily activities such as driving a car or operating heavy machinery.

Holiday heart syndrome

Holiday heart syndrome, also known as alcohol-induced atrial arrhythmias, is a syndrome defined by an irregular heartbeat and palpitations associated with high levels of ethanol consumption. It was first described in 1978, when Philip Ettinger identified the connection between arrhythmia and alcohol consumption, and it received its common name because it is associated with the binge drinking common during the holidays. It is unclear how common this syndrome is: 5–10% of cases of atrial fibrillation may be related to this condition, but the proportion could be as high as 63%.

Positional alcohol nystagmus

Positional alcohol nystagmus (PAN) is nystagmus (visible jerkiness in eye movement) produced when the head is placed in a sideways position. PAN occurs when the specific gravity of the membrane space of the semicircular canals in the ear differs from the specific gravity of the fluid in the canals because of the presence of alcohol.

Allergic-like reactions

Ethanol-containing beverages can cause alcohol flush reactions, exacerbations of rhinitis and, more seriously and commonly, bronchoconstriction in patients with a history of asthma, and in some cases urticarial skin eruptions and systemic dermatitis. Such reactions can occur within 1–60 minutes of ethanol ingestion, and may be caused by genetic abnormalities in the metabolism of ethanol, which can cause the ethanol metabolite acetaldehyde to accumulate in tissues and trigger the release of histamine; by true allergic reactions to allergens occurring naturally in, or contaminating, alcoholic beverages (particularly wine and beer); or by other, unknown causes. Alcohol flush reaction has also been associated with an increased risk of esophageal cancer in those who do drink.

Long-term effects

According to The Lancet, 'four industries (tobacco, unhealthy food, fossil fuel, and alcohol) are responsible for at least a third of global deaths per year'. In 2024, the World Health Organization published a report including these figures.
Due to the long term effects of alcohol abuse, binge drinking is considered to be a major public health issue. The impact of alcohol on aging is multifaceted. The relationship between alcohol consumption and body weight is the subject of inconclusive studies. Alcoholic lung disease is disease of the lungs caused by excessive alcohol. However, the term 'alcoholic lung disease' is not a generally accepted medical diagnosis. Alcohol's overall effect on health is uncertain. While some studies suggest moderate consumption might have some benefit, others find any amount increases health risks. This uncertainty is due to conflicting research methods and potential biases, including counting former drinkers as abstainers and the possibility of alcohol industry influence. Because of these issues, experts advise against using alcohol for health reasons. For example, reviews from 2016 found that the "risk of all-cause mortality, and of cancers specifically, rises with increasing levels of consumption, and the level of consumption that minimises health loss is zero". Additionally, in 2023, the World Health Organization (WHO) stated that there is currently no conclusive evidence from studies that the potential benefits of moderate alcohol consumption for cardiovascular disease and type 2 diabetes outweigh the increased cancer risk associated with these drinking levels for individual consumers. Despite being a widespread issue, social stigma around problematic alcohol use or alcoholism discourages over 80% from seeking help. Alcoholism Alcoholism or its medical diagnosis alcohol use disorder refers to alcohol addiction, alcohol dependence, dipsomania, and/or alcohol abuse. It is a major problem and many health problems as well as death can result from excessive alcohol use. Alcohol dependence is linked to a lifespan that is reduced by about 12 years relative to the average person. In 2004, it was estimated that 4% of deaths worldwide were attributable to alcohol use. Deaths from alcohol are split about evenly between acute causes (e.g., overdose, accidents) and chronic conditions. The leading chronic alcohol-related condition associated with death is alcoholic liver disease. Alcohol dependence is also associated with cognitive impairment and organic brain damage. Some researchers have found that even one alcoholic drink a day increases an individual's risk of health problems by 0.4%. Stigma surrounding alcohol use disorder is particularly strong and different from the stigma attached to other mental illnesses not caused by substances. People with this condition are seen less as truly ill, face greater blame and social rejection, and experience higher structural discrimination risks. Two or more consecutive alcohol-free days a week have been recommended to improve health and break dependence. Dry drunk is an expression coined by the founder of Alcoholics Anonymous that describes an alcoholic who no longer drinks but otherwise maintains the same behavior patterns of an alcoholic. A high-functioning alcoholic (HFA) is a person who maintains jobs and relationships while exhibiting alcoholism. Many Native Americans in the United States have been harmed by, or become addicted to, drinking alcohol. Brain damage While many people associate alcohol's effects with intoxication, the long-term impact of alcohol on the brain can be severe. Binge drinking, or heavy episodic drinking, can lead to alcohol-related brain damage that occurs after a relatively short period of time. 
This brain damage increases the risk of alcohol-related dementia, and abnormalities in mood and cognitive abilities. Alcohol can cause Wernicke encephalopathy and Korsakoff syndrome which frequently occur simultaneously, known as Wernicke–Korsakoff syndrome. Lesions, or brain abnormalities, are typically located in the diencephalon which result in anterograde and retrograde amnesia, or memory loss. Dementia Alcohol-related dementia (ARD) is a form of dementia caused by long-term, excessive consumption of alcohol, resulting in neurological damage and impaired cognitive function. Marchiafava–Bignami disease Marchiafava–Bignami disease is a progressive neurological disease of alcohol use disorder, characterized by corpus callosum demyelination and necrosis and subsequent atrophy. The disease was first described in 1903 by the Italian pathologists Amico Bignami and Ettore Marchiafava in an Italian Chianti drinker. Symptoms can include, but are not limited to lack of consciousness, aggression, seizures, depression, hemiparesis, ataxia, apraxia, coma, etc. There will also be lesions in the corpus callosum. Liver damage Consuming more than 30 grams of pure alcohol per day over an extended period can significantly increase the risk of developing alcoholic liver disease. During the metabolism of alcohol via the respective dehydrogenases, nicotinamide adenine dinucleotide (NAD) is converted into reduced NAD. Normally, NAD is used to metabolize fats in the liver, and as such alcohol competes with these fats for the use of NAD. Prolonged exposure to alcohol means that fats accumulate in the liver, leading to the term 'fatty liver'. Continued consumption (such as in alcohol use disorder) then leads to cell death in the hepatocytes as the fat stores reduce the function of the cell to the point of death. These cells are then replaced with scar tissue, leading to the condition called cirrhosis. Cancer Alcoholic beverages have been classified as carcinogenic by leading health organizations for more than two decades, including the WHO's IARC (Group 1 carcinogens) and the U.S. NTP, raising concerns about the potential cancer risk associated with alcohol consumption. In 2023 the WHO highlighted a statistic: nearly half of all alcohol-attributable cancers in the WHO European Region are linked to alcohol consumption, even from "light" or "moderate" drinking – "less than 1.5 litres of wine or less than 3.5 litres of beer or less than 450 millilitres of spirits per week". This new information suggests that these consumption levels should now be considered high-risk. Many countries exceed these levels by a significant margin. Echoing the WHO's view, a growing number of national public health agencies are prioritizing complete abstinence (teetotalism) and stricter drinking guidelines in their alcohol consumption recommendations. Alcohol is also a major cause for head and neck cancer, especially laryngeal cancer. This risk is even higher when alcohol is used together with tobacco. Qualitative analysis reveals that the alcohol industry likely misinforms the public about the alcohol-cancer link, similar to the tobacco industry. The alcohol industry influences alcohol policy and health messages, including those for schoolchildren. Cardiovascular disease Excessive daily alcohol consumption and binge drinking can cause a higher risk of stroke, coronary artery disease, heart failure, fatal hypertensive disease, and fatal aortic aneurysm. A 2010 study reviewed research on alcohol and heart disease. 
They found that moderate drinking did not seem to worsen things for people who already had heart problems. But importantly, the researchers did not say that people who do not drink should start in order to improve their heart health. Thus, the safety and potential positive effect of light drinking on the cardiovascular system has not yet been proven. Still alcohol is a major health risk, and even if moderate drinking lowers the risk of some cardiovascular diseases it might increase the risk of others. Therefore starting to drink alcohol in the hope of any benefit is not recommended. The World Heart Federation (2022) recommends against any alcohol intake for optimal heart health. It has also been pointed out that the studies suggesting a positive link between red wine consumption and heart health had flawed methodology in the form of comparing two sets of people which were not actually appropriately paired. Cardiomyopathy Alcoholic cardiomyopathy (ACM) is a disease in which the long-term consumption of alcohol leads to heart failure. ACM is a type of dilated cardiomyopathy. The heart is unable to pump blood efficiently, leading to heart failure. It can affect other parts of the body if the heart failure is severe. It is most common in males between the ages of 35 and 50. Hearing loss Alcohol, classified as an ototoxin (ear toxin), can contribute to hearing loss sometimes referred to as "cocktail deafness" after exposure to loud noises in drinking environments. Children with fetal alcohol spectrum disorder (FASD) are at an increased risk of having hearing difficulties. Withdrawal syndrome Discontinuation of alcohol after extended heavy use and associated tolerance development (resulting in dependence) can result in withdrawal. Alcohol withdrawal can cause confusion, paranoia, anxiety, insomnia, agitation, tremors, fever, nausea, vomiting, autonomic dysfunction, seizures, and hallucinations. In severe cases, death can result. Delirium tremens is a condition that requires people with a long history of heavy drinking to undertake an alcohol detoxification regimen. Alcohol is one of the more dangerous drugs to withdraw from. Drugs which help to re-stabilize the glutamate system such as N-acetylcysteine have been proposed for the treatment of addiction to cocaine, nicotine, and alcohol. Cohort studies have demonstrated that the combination of anticonvulsants and benzodiazepines is more effective than other treatments in reducing alcohol withdrawal scores and shortening the duration of intensive care unit stays. Nitrous oxide has been shown to be an effective and safe treatment for alcohol withdrawal. The gas therapy reduces the use of highly addictive sedative medications (like benzodiazepines and barbiturates). Cortisol Research has looked into the effects of alcohol on the amount of cortisol that is produced in the human body. Continuous consumption of alcohol over an extended period of time has been shown to raise cortisol levels in the body. Cortisol is released during periods of high stress, and can result in the temporary shut down of other physical processes, causing physical damage to the body. Gout There is a strong association between gout the consumption of alcohol, and sugar-sweetened beverages, with wine presenting somewhat less of a risk than beer or spirits. Ketoacidosis Alcoholic ketoacidosis (AKA) is a specific group of symptoms and metabolic state related to alcohol use. 
Symptoms often include abdominal pain, vomiting, agitation, a fast respiratory rate, and a specific "fruity" smell. Consciousness is generally normal. Complications may include sudden death. Mental disorders Alcohol misuse often coincides with mental health conditions. Many individuals struggling with psychiatric disorders also experience problematic drinking behaviors. For example, alcohol may play a role in depression, with up to 10% of male depression cases in some European countries linked to alcohol use. Psychiatric genetics research continues to explore the complex interplay between alcohol use, genetic factors, and mental health outcomes; A 2024 study found that excessive drinking and alcohol-related DNA methylation may directly contribute to the causes of mental disorders, possibly through the altered expression of affected genes. Austrian syndrome Austrian syndrome, also known as Osler's triad, is a medical condition that was named after Robert Austrian in 1957. The presentation of the condition consists of pneumonia, endocarditis, and meningitis, all caused by Streptococcus pneumoniae. It is associated with alcoholism due to hyposplenism (reduced splenic functioning) and can be seen in males between the ages of 40 and 60 years old. Robert Austrian was not the first one to describe the condition, but Richard Heschl (around 1860s) or William Osler were not able to link the signs to the bacteria because microbiology was not yet developed. The leading cause of Osler's triad (Austrian syndrome) is Streptococcus pneumoniae, which is usually associated with heavy alcohol use. Polyneuropathy Alcoholic polyneuropathy is a neurological disorder in which peripheral nerves throughout the body malfunction simultaneously. It is defined by axonal degeneration in neurons of both the sensory and motor systems and initially occurs at the distal ends of the longest axons in the body. This nerve damage causes an individual to experience pain and motor weakness, first in the feet and hands and then progressing centrally. Alcoholic polyneuropathy is caused primarily by chronic alcoholism; however, vitamin deficiencies are also known to contribute to its development. Specific population Women Breast cancer Drinking alcohol increases the risk for breast cancer. For women in Europe, breast cancer represents the most significant alcohol-related cancer burden. Breastfeeding difficulties Moderate alcohol consumption by breastfeeding mothers can significantly affect infants and cause breastfeeding difficulties. Even one or two drinks, including beer, may reduce milk intake by 20 to 23%, leading to increased agitation and poor sleep patterns. Regular heavy drinking (more than two drinks daily) can shorten breastfeeding duration and cause issues in infants, such as excessive sedation, fluid retention, and hormonal imbalances. Additionally, higher alcohol consumption may negatively impact children's academic achievement. Neonatal withdrawal Babies exposed to alcohol, benzodiazepines, barbiturates, and some antidepressants (SSRIs) during pregnancy may experience neonatal withdrawal. The onset of clinical presentation typically appears within 48 to 72 hours of birth but may take up to 8 days. Other effects Alcohol may negatively affect sleep. 
Alcohol consumption disrupts circadian rhythms, with acute intake causing dose-dependent alterations in melatonin and cortisol levels, as well as core body temperature, which normalize the following morning, while chronic alcohol use leads to more severe and persistent disruptions that are associated with alcohol use disorders (AUD) and withdrawal symptoms. Alcohol consumption may also increase the risk of sleep disorders, including insomnia, restless legs syndrome, and sleep apnea. Erosive gastritis is commonly caused by stress, alcohol, some drugs, such as aspirin and other nonsteroidal anti-inflammatory drugs (NSAIDs), and Crohn's disease. Excessive alcohol intake has been shown to cause immunodeficiency, compromising the body's ability to fight infections and diseases, as evidenced by research on people who regularly consume large amounts of alcohol. Alcohol is associated with instances of sudden death. Sudden arrhythmic death syndrome in alcohol misuse is a significant cause of death among heavy drinkers, characterized by older age and severe liver damage, highlighting the need for family screening for heritable channelopathies. Also, sudden unexpected death in epilepsy is associated with a twofold higher risk in individuals with a history of substance abuse or alcohol dependence. Alcohol consumption is associated with lower sperm concentration, percentage of normal morphology, and semen volume, but not sperm motility. Frequent drinking of alcoholic beverages is a major contributing factor in cases of hypertriglyceridemia. Alcoholism is the single most common cause of chronic pancreatitis. Excess alcohol use is frequently associated with porphyria cutanea tarda (PCT). Alcohol consumption is a risk factor for Dupuytren's contracture. The majority of those with aspirin-exacerbated respiratory disease experience respiratory reactions to alcohol. Interactions Disorders COVID-19 A 2023 study suggests a link between alcohol consumption and worse COVID-19 outcomes. Researchers analyzed data from over 1.6 million people and found that any level of alcohol consumption increased the risk of severe illness, intensive care unit admission, and needing ventilation compared to non-drinkers. Even a history of drinking was associated with a higher risk of severe COVID-19. These findings suggest that avoiding alcohol altogether might be beneficial during the pandemic. Diabetes See the insulin section. Hepatitis Alcohol consumption can be especially dangerous for those with pre-existing liver damage from hepatitis B or C. Even relatively low amounts of alcohol can be life-threatening in these cases, so strict adherence to abstinence is highly recommended. Histamine intolerance Alcohol may release histamine in individuals with histamine intolerance. Mental disorders Mental disorders can be a significant risk factor for alcohol abuse. Alcohol abuse, alcohol dependence, and alcoholism are comorbid with anxiety disorders. With dual diagnosis, the initial symptoms of mental illness tend to appear before those of substance abuse. Individuals with common mental health conditions, such as depression, anxiety, or phobias, are twice as likely to also report having an alcohol use disorder, compared to those without these mental health challenges. Alcohol is a major risk factor for self-harm. Individuals with anxiety disorders who self-medicate with drugs or alcohol may also have an increased likelihood of suicidal ideation. 
Peptic ulcer disease In patients who have peptic ulcer disease (PUD), the mucosal layer is broken down by ethanol. PUD is commonly associated with the bacterium Helicobacter pylori, which secretes a toxin that weakens the mucosal wall, allowing acid and protein enzymes to penetrate the weakened barrier. Because alcohol stimulates the stomach to secrete acid, a person with PUD should avoid drinking alcohol on an empty stomach. Drinking alcohol causes more acid release, which further damages the already-weakened stomach wall. Complications of this disease could include a burning pain in the abdomen, bloating, and, in severe cases, dark black stools indicating internal bleeding. A person who drinks alcohol regularly is strongly advised to reduce their intake to prevent PUD aggravation. Dosage forms Alcohol induced dose dumping (AIDD) Alcohol-induced dose dumping (AIDD) is, by definition, an unintended rapid release of large amounts of a given drug when it is administered in a modified-release dosage form while co-ingesting ethanol. This is considered a pharmaceutical disadvantage due to the high risk of causing drug-induced toxicity by increasing the absorption and serum concentration above the therapeutic window of the drug. The best way to prevent this interaction is by avoiding the co-ingestion of both substances or using specific controlled-release formulations that are resistant to AIDD. Drugs Alcohol can intensify the sedation caused by antipsychotics and certain antidepressants. Combining alcohol with cannabis — known as cross-fading (not to be confused with tincture of cannabis, which contains only minute quantities of alcohol) — may easily cause "the spins" in people who are drunk and smoke potent cannabis; ethanol increases plasma tetrahydrocannabinol levels, which suggests that ethanol may increase the absorption of tetrahydrocannabinol. TOMSO is a lesser-known psychedelic drug and a substituted amphetamine. TOMSO was first synthesized by Alexander Shulgin. According to Shulgin's book PiHKAL, TOMSO is inactive on its own and requires consumption of alcohol to become active. Hypnotics/sedatives Alcohol can intensify the sedation caused by hypnotics/sedatives such as barbiturates, benzodiazepines, sedative antihistamines, opioids, and nonbenzodiazepines/Z-drugs (such as zolpidem and zopiclone). Dextromethorphan Combining alcohol with dextromethorphan significantly increases the risk of overdose and other severe health complications, according to the NIAAA. Disulfiram-like drugs Disulfiram Disulfiram inhibits the enzyme acetaldehyde dehydrogenase, which in turn results in buildup of acetaldehyde, a toxic metabolite of ethanol with unpleasant effects. The medication is commonly used to treat alcohol use disorder and results in immediate hangover-like symptoms upon consumption of alcohol; this effect is widely known as the disulfiram effect. Metronidazole Metronidazole is an antibacterial agent that kills bacteria by damaging cellular DNA and hence cellular function. Metronidazole is usually given to people who have diarrhea caused by Clostridioides difficile bacteria. Patients who are taking metronidazole are sometimes advised to avoid alcohol, even after 1 hour following the last dose. Although older data suggested a possible disulfiram-like effect of metronidazole, newer data has challenged this and suggests it does not actually have this effect. 
Insulin Alcohol consumption can cause hypoglycemia in diabetics on certain medications, such as insulin or sulfonylurea, by blocking gluconeogenesis. NSAIDs The concomitant use of NSAIDs with alcohol and/or tobacco products significantly increases the already elevated risk of peptic ulcers during NSAID therapy. The risk of stomach bleeding is further increased when aspirin is taken with alcohol or warfarin. Stimulants Controlled animal and human studies have shown that caffeine (as in energy drinks) in combination with alcohol increases the craving for more alcohol more strongly than alcohol alone. These findings correspond to epidemiological data showing that people who consume energy drinks generally have an increased tendency to use alcohol and other substances. Ethanol interacts with cocaine in vivo to produce cocaethylene, another psychoactive substance which may be substantially more cardiotoxic than either cocaine or alcohol by themselves. Ethylphenidate formation appears to be more common when large quantities of methylphenidate and alcohol are consumed at the same time, such as in non-medical use or overdose scenarios. However, only a small percentage of the consumed methylphenidate is converted to ethylphenidate. Although "nicotinis" mimic the name of classic cocktails like the appletini (their names deriving from "martini"), combining nicotine with alcohol is inadvisable: tobacco and nicotine heighten cravings for alcohol, making this a risky combination. Methanol and ethylene glycol The rate-limiting steps for the elimination of ethanol are shared with certain other substances. As a result, the blood alcohol concentration can be used to modify the rate of metabolism of toxic alcohols, such as methanol and ethylene glycol. Methanol itself is not highly toxic, but its metabolites formaldehyde and formic acid are; therefore, to reduce the rate of production and concentration of these harmful metabolites, ethanol can be ingested. Ethylene glycol poisoning can be treated in the same way. Warfarin Excessive use of alcohol is also known to affect the metabolism of warfarin and can elevate the INR, and thus increase the risk of bleeding. The U.S. Food and Drug Administration (FDA) product insert on warfarin states that alcohol should be avoided. The Cleveland Clinic suggests that when taking warfarin one should not drink more than "one beer, 6 oz of wine, or one shot of alcohol per day". Special population Isoniazid Levels of liver enzymes in the bloodstream should be frequently checked in daily alcohol drinkers, pregnant women, IV drug users, people over 35, and those who have chronic liver disease, severe kidney dysfunction, peripheral neuropathy, or HIV infection, since they are more likely to develop hepatitis from INH. Pharmacology Alcohol works in the brain primarily by increasing the effects of γ-aminobutyric acid (GABA), the major inhibitory neurotransmitter in the brain; by facilitating GABA's actions, alcohol suppresses the activity of the CNS. The pharmacology of ethanol involves both pharmacodynamics (how it affects the body) and pharmacokinetics (how the body processes it). In the body, ethanol primarily affects the central nervous system, acting as a depressant and causing sedation, relaxation, and decreased anxiety. The exact mechanism remains elusive, but ethanol has been shown to affect ligand-gated ion channels, particularly the GABAA receptor. After oral ingestion, ethanol is absorbed via the stomach and intestines into the bloodstream. 
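To make the pharmacokinetics sketched above more concrete, the following minimal Python snippet simulates blood alcohol concentration using a one-compartment model with first-order absorption from the gut and saturable (Michaelis–Menten) hepatic elimination. It is an illustrative sketch only: the dose, volume of distribution, absorption rate, and elimination parameters below are assumed placeholder values, not clinical figures.

# Illustrative one-compartment sketch of ethanol pharmacokinetics.
# All parameter values are assumed for demonstration -- not clinical data.
dose_g = 14.0    # grams of ethanol, roughly one standard US drink
vd_l = 40.0      # assumed volume of distribution in litres
ka = 3.0         # assumed first-order absorption rate constant, 1/h
vmax = 0.20      # assumed maximal elimination rate, g/L per hour
km = 0.05        # assumed Michaelis constant, g/L

dt = 0.01        # time step in hours
gut = dose_g     # ethanol remaining in the gut, grams
c = 0.0          # blood concentration, g/L

for step in range(int(8 / dt)):           # simulate 8 hours
    absorbed = ka * gut * dt              # first-order absorption from the gut
    gut -= absorbed
    c += absorbed / vd_l                  # distribute into body water
    c -= vmax * c / (km + c) * dt         # saturable (near zero-order) elimination
    c = max(c, 0.0)
    if step % int(1 / dt) == 0:           # report once per simulated hour
        print(f"t = {step * dt:4.1f} h  concentration ~ {c:.3f} g/L")

Because the simulated concentration stays well above the assumed Michaelis constant for most of the curve, elimination proceeds at a nearly constant rate, which is the zero-order behaviour usually described for ethanol.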
Ethanol is highly water-soluble and diffuses passively throughout the entire body, including the brain. Soon after ingestion, it begins to be metabolized, 90% or more by the liver. One standard drink is sufficient to almost completely saturate the liver's capacity to metabolize alcohol. The main metabolite is acetaldehyde, a toxic carcinogen. Acetaldehyde is then further metabolized into ionic acetate by the enzyme aldehyde dehydrogenase (ALDH). Acetate is not carcinogenic and has low toxicity, but has been implicated in causing hangovers. Acetate is further broken down into carbon dioxide and water and eventually eliminated from the body through urine and breath. 5 to 10% of ethanol is excreted unchanged in the breath, urine, and sweat. Alcohol also directly affects a number of other neurotransmitter systems, including those of glutamate, glycine, acetylcholine, and serotonin. The pleasurable effects of alcohol ingestion are the result of increased levels of dopamine and endogenous opioids in the reward pathways of the brain. The average human digestive system produces approximately 3 g of ethanol per day through fermentation of its contents. Safety Symptoms of ethanol overdose may include nausea, vomiting, CNS depression, coma, acute respiratory failure, or death. Levels of even less than 0.1% can cause intoxication, with unconsciousness often occurring at 0.3–0.4%. Death from ethanol consumption is possible when blood alcohol levels reach 0.4%. A blood level of 0.5% or more is commonly fatal. The oral median lethal dose (LD50) of ethanol in rats is 5,628 mg/kg. Directly translated to human beings, this would mean that if a person who weighs drank a glass of pure ethanol, they would theoretically have a 50% risk of dying. The highest blood alcohol level ever recorded, in which the subject survived, is 1.41%. A retrospective case-control study conducted from 1990 to 2001 found that alcohol consumption was responsible for over half of all deaths among Russian adults aged 15–54, significantly impacting mortality rates related to causes such as accidents, violence, and various diseases. In the US, the DEA has claimed illegal drugs are more deadly than alcohol, citing CDC data from 2000 showing similar death counts despite alcohol's wider use. However, this comparison is disputed; a JAMA article reported alcohol-related deaths in 2000 as 85,000, significantly higher than the DEA's figure of 18,539. Toxicity The WHO classifies alcohol as a toxic substance. More specifically, ethanol is categorized as a cytotoxin, hepatotoxin, neurotoxin, and ototoxin, which has acute toxic effects on the cells, liver, nervous system, and ears, respectively. However, ethanol's acute effects on these organs are usually reversible. This means that even with a single episode of heavy drinking, the body can typically repair itself from the initial damage. Methanol-laced alcohol, on the other hand, can cause blindness even in small quantities. Ethanol is nutritious but highly intoxicating for most animals, which typically tolerate only up to 4% in their diet. However, a 2024 study found that Oriental hornets fed sugary solutions containing 1% to 80% ethanol for a week showed no adverse effects on behavior or lifespan. A risk assessment using the margin of exposure (MOE) approach evaluated drugs like alcohol and tobacco. Alcohol had a benchmark dose of 531 mg/kg, while heroin's was 2 mg/kg. Alcohol, nicotine, cocaine, and heroin were classified as "high risk" (MOE < 10), and most others as "risk" (MOE < 100). 
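The margin-of-exposure comparison above can be made concrete with a small sketch. MOE is simply the benchmark dose divided by the estimated human exposure; the benchmark doses below are the ones quoted above, while the daily-exposure figures are hypothetical placeholders chosen only to illustrate the arithmetic, not values from the cited study.

# Margin of exposure (MOE) = benchmark dose / estimated human exposure.
# Benchmark doses (mg/kg body weight) as quoted in the text above;
# exposure estimates are hypothetical placeholders for illustration.
benchmark_dose_mg_kg = {"alcohol": 531.0, "heroin": 2.0}
assumed_exposure_mg_kg_day = {"alcohol": 200.0, "heroin": 1.0}

for substance, bmd in benchmark_dose_mg_kg.items():
    moe = bmd / assumed_exposure_mg_kg_day[substance]
    label = "high risk" if moe < 10 else "risk" if moe < 100 else "low risk"
    print(f"{substance:8s} MOE = {moe:6.1f} -> {label}")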
Only alcohol was "high risk" on a population level, with cannabis showing an MOE over 10,000. This confirms alcohol and tobacco as high risk and cannabis as low risk. Chemistry Ethanol is also known chemically as alcohol, ethyl alcohol, or drinking alcohol. It is a simple alcohol with a molecular formula of C2H6O and a molecular weight of 46.0684 g/mol. The molecular formula of ethanol may also be written as CH3−CH2−OH or as C2H5−OH. The latter can also be thought of as an ethyl group linked to a hydroxyl (alcohol) group and can be abbreviated as EtOH. Ethanol is a volatile, flammable, colorless liquid with a slight characteristic odor. Aside from its use as a psychoactive and recreational substance, ethanol is also commonly used as an antiseptic and disinfectant, a chemical and medicinal solvent, and a fuel. Analogues Ethanol has a variety of analogues, many of which have similar actions and effects. In chemistry, "alcohol" encompasses other mind-altering alcohols besides the one found in alcoholic beverages. Some examples include synthetic drugs like ethchlorvynol and methylpentynol, once used in medicine. Ethanol is colloquially referred to as "alcohol" because it is the most prevalent alcohol in alcoholic beverages, but technically alcoholic beverages contain several types of psychoactive alcohols, which are categorized as primary, secondary, or tertiary. Primary and secondary alcohols are oxidized to aldehydes and ketones, respectively, while tertiary alcohols are generally resistant to oxidation. Ethanol is a primary alcohol that has unpleasant actions in the body, many of which are mediated by its toxic metabolite acetaldehyde. Less prevalent alcohols found in alcoholic beverages are secondary and tertiary alcohols. For example, the tertiary alcohol 2M2B, which is up to 50 times more potent than ethanol and is found in trace quantities in alcoholic beverages, has been synthesized and used as a designer drug. Alcoholic beverages are sometimes laced with toxic alcohols, such as methanol (the simplest alcohol) and isopropyl alcohol. A mild, brief exposure to isopropyl alcohol (which is only moderately more toxic than ethanol) is unlikely to cause any serious harm. But many methanol poisoning incidents have occurred throughout history, since methanol is lethal even in small quantities, as little as 10–15 milliliters (2–3 teaspoons). Ethanol is used to treat methanol and ethylene glycol toxicity. The Lucas test differentiates between primary, secondary, and tertiary alcohols. Production Ethanol is produced naturally as a byproduct of the metabolic processes of yeast and hence is present in any yeast habitat, including even endogenously in humans, although it does not normally cause raised blood alcohol content except in the rare medical condition auto-brewery syndrome (ABS). It is manufactured through hydration of ethylene or by brewing via fermentation of sugars with yeast (most commonly Saccharomyces cerevisiae). The sugars are commonly obtained from sources like steeped cereal grains (e.g., barley), grape juice, and sugarcane products (e.g., molasses, sugarcane juice). Fermentation yields an ethanol–water mixture, which can be further purified via distillation. Home-made alcoholic beverages Homebrewing Homebrewing is the brewing of beer or other alcoholic beverages on a small scale for personal, non-commercial purposes. Supplies, such as kits and fermentation tanks, can be purchased locally at specialty stores or online. 
Beer was brewed domestically for thousands of years before its commercial production, although its legality has varied according to local regulation. Homebrewing is closely related to the hobby of home distillation, the production of alcoholic spirits for personal consumption; however, home distillation is generally more tightly regulated. Moonshine Although methanol is not produced in toxic amounts by fermentation of sugars from grain starches, it can occur at significant levels in fruit spirits. In modern production, reducing the methanol content by adsorption onto a molecular sieve is a practical method. History Alcoholic beverages have been produced since the Neolithic period, as early as 7000 BC in China. Since antiquity, prior to the development of modern agents, alcohol was used as a general anaesthetic. In the history of wound care, beer and wine are recognized as substances used for healing wounds. Late Middle Ages Alcohol has been used as an antiseptic as early as 1363, with evidence to support its use becoming available in the late 1800s. Early modern period The popular story dates the etymology of the term Dutch courage to English soldiers fighting in the Anglo-Dutch Wars (1652–1674) and perhaps as early as the Thirty Years' War (1618–1648). One version states that jenever (or Dutch gin) was used by English soldiers for its calming effects before battle, and for its purported warming properties on the body in cold weather. Another version has it that English soldiers noted the bravery-inducing effects of jenever on Dutch soldiers. The Gin Craze was a period in the first half of the 18th century when the consumption of gin increased rapidly in Great Britain, especially in London. By 1743, England was drinking 2.2 gallons (10 litres) of gin per person per year. The Sale of Spirits Act 1750 (commonly known as the Gin Act 1751) was an Act of the Parliament of Great Britain (24 Geo. 2. c. 40) which was enacted to reduce the consumption of gin and other distilled spirits, a popular pastime that was regarded as one of the primary causes of crime in London. Modern period The rum ration (also called the tot) was a daily amount of rum given to sailors on Royal Navy ships. It started in 1866 and was abolished in 1970 after concerns that the intake of strong alcohol would lead to unsteady hands when working machinery. The Andrew Johnson alcoholism debate is the dispute, originally conducted among the general public, and now typically a question for historians, about whether or not Andrew Johnson, the 17th president of the United States (1865–1869), drank to excess. The Prohibition era in the United States was the period from 1920 to 1933 when the United States prohibited the production, importation, transportation, and sale of alcoholic beverages. The nationwide ban on alcoholic beverages was repealed by the passage of the Twenty-first Amendment to the United States Constitution on December 5, 1933. The Bratt System was a system that was used in Sweden (1919–1955) and similarly in Finland (1944–1970) to control alcohol consumption by rationing liquor. Every citizen allowed to consume alcohol was given a booklet called a motbok (viinakortti in Finland), in which a stamp was added each time a purchase was made at Systembolaget (in Sweden) or Alko (in Finland). A similar system also existed in Estonia between July 1, 1920, and December 31, 1925. The stamps were based on the amount of alcohol bought. 
When a certain amount of alcohol had been bought, the owner of the booklet had to wait until the next month to buy more. The Medicinal Liquor Prescriptions Act of 1933 was a law passed by Congress in response to the abuse of medicinal liquor prescriptions during Prohibition. Gilbert Paul Jordan (aka The Boozing Barber) was a Canadian serial killer who is believed to have committed the so-called "alcohol murders" between 1965– in Vancouver, British Columbia. Society and culture The consumption of alcohol has a long human history deeply embedded in social practices and rituals, often celebrated as a cornerstone of community gatherings and personal milestones. Drinking culture is the set of traditions and social behaviours that surround the consumption of alcoholic beverages as a recreational drug and social lubricant. Alcohol consumption recommendations vary from no intake at all to daily or weekly guideline limits provided by government health agencies. The WHO published a statement in The Lancet Public Health in April 2023 that "there is no safe amount that does not affect health." "The Alcohol Policy Playbook", which supports United Nations Sustainable Development Goal 3, is a resource for reaching the goals of the WHO European Framework for Action on Alcohol (2022–2025) and the WHO Global Alcohol Action Plan (2022–2030). In October 2024, the WHO Regional Office for Europe launched the "Redefine alcohol" campaign to address alcohol-related health risks, as alcohol causes nearly 1 in 11 deaths in the region. The campaign aims to raise awareness about alcohol's link to over 200 diseases, including several cancers, and to encourage healthier choices by sharing research and personal stories. It also calls for stricter regulation of alcohol to reduce its societal harm. This initiative is part of the WHO/EU Evidence into Action Alcohol Project, which seeks to reduce alcohol-related harm across Europe. Alcohol education is the practice of disseminating information about the effects of alcohol on health, as well as on society and the family unit. Alcohol as a gateway drug Alcohol and nicotine prime the brain for a heightened response to other drugs and are, like marijuana, also typically used before a person progresses to other, more harmful substances. A study of drug use of 14,577 U.S. 12th graders showed that alcohol consumption was associated with an increased probability of later use of tobacco, cannabis, and other illegal drugs. See also Alcohol myopia Cannabis (drug) Glossary of alcohol (drug) terms Lean (drug) Rum-running Responsible drug use GABAergics GABRD (δ subunit-containing receptors) Pigovian taxes, which are intended to pay for the damage to society caused by these goods. Speedball (drug) Sin taxes, which are used to increase the price of these goods in an effort to lower their use or, failing that, to raise new sources of revenue. References Further reading External links The National Institute on Alcohol Abuse and Alcoholism maintains a database of alcohol-related health effects. ETOH Archival Database (1972–2003) Alcohol and Alcohol Problems Science Database. 
WHO fact sheet on alcohol ChEBI – biology related Kyoto Encyclopedia of Genes and Genomes signal transduction pathway: KEGG – human alcohol addiction 5-HT3 agonists AMPA receptor antagonists Adenosine reuptake inhibitors Alcohol-related crimes Alcohol abuse Alcohol and health Alcohol dehydrogenase inhibitors Alcohol law Alcohols Analgesics Anaphrodisia Anxiolytics Calcium channel blockers Chemical hazards Depressogens Diuretics Drinking culture Drug culture Drugs acting on the nervous system Drugs with unknown mechanisms of action Emetics Ethanol Euphoriants GABAA receptor positive allosteric modulators General anesthetics Glycine reuptake inhibitors Hepatotoxins Human metabolites Hypnotics IARC Group 1 carcinogens Kainate receptor antagonists NMDA receptor antagonists Neurotoxins Nicotinic agonists Ototoxicity Psychoactive drugs Sedatives Teratogens
Alcohol (drug)
Chemistry,Biology
14,117
24,162,058
https://en.wikipedia.org/wiki/C21H22O11
The molecular formula C21H22O11 (molar mass: 450.39 g/mol, exact mass: 450.116212) may refer to: Astilbin, a flavanonol Marein, an aurone Smitilbin, a flavanonol
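As a quick arithmetic check of the molar mass quoted above, the following minimal Python snippet sums standard atomic weights for the formula C21H22O11; the small difference against the quoted figure comes only from the rounding of the atomic weights used.

# Recompute the molar mass of C21H22O11 from standard atomic weights (g/mol).
atomic_weight = {"C": 12.011, "H": 1.008, "O": 15.999}
formula = {"C": 21, "H": 22, "O": 11}

molar_mass = sum(atomic_weight[element] * count for element, count in formula.items())
print(f"Molar mass of C21H22O11 ~ {molar_mass:.2f} g/mol")  # ~450.40 g/mol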
C21H22O11
Chemistry
77
66,446,555
https://en.wikipedia.org/wiki/Fluorine%20cycle
The fluorine cycle is the series of biogeochemical processes through which fluorine moves through the lithosphere, hydrosphere, atmosphere, and biosphere. Fluorine originates from the Earth’s crust, and its cycling between various sources and sinks is modulated by a variety of natural and anthropogenic processes. Overview Fluorine is the 13th most abundant element on Earth and the 24th most abundant element in the universe. It is the most electronegative element and it is highly reactive. Thus, it is rarely found in its elemental state, although elemental fluorine has been identified in certain geochemical contexts. Instead, it is most frequently found in ionic compounds (e.g. HF, CaF2). The major mechanisms that mobilize fluorine are chemical and mechanical weathering of rocks. Major anthropogenic sources include industrial chemicals and fertilizers, brick manufacturing, and groundwater extraction. Fluorine is primarily carried by rivers to the oceans, where it has a residence time of about 500,000 years. Fluorine can be removed from the ocean by deposition of terrigenous or authigenic sediments, or subduction of the oceanic lithosphere. Lithosphere The vast majority of the Earth's fluorine is found in the crust, where it is primarily found in hydroxysilicate minerals. Levels of fluorine in igneous rocks vary greatly, and are influenced by the fluorine contents of magma. Likewise, altered oceanic crust exhibits large variability in fluorine; serpentinization zones contain elevated levels of fluorine. Many details concerning the exact mineralogy and distribution of fluorine in the crust are poorly understood, particularly fluorine's abundance in metamorphic rocks, in the mantle, and in the core. Fluorine can be liberated from its crustal reservoirs via natural processes (such as weathering, erosion, and volcanic activity) or anthropogenic processes, such as phosphate rock processing, coal combustion, and brick-making. Anthropogenic contributions to the fluorine cycle are significant, with anthropogenic emissions contributing about 55% of global fluorine inputs. Hydrosphere Fluorine can dissolve into natural waters as the anion fluoride, whose abundance depends on the fluorine content of the surrounding rocks. This is in contrast to other halogens, whose abundances tend to reflect the abundance of other local halogens rather than the local rock composition. Dissolved fluoride is found in low abundances in rainwater, surface runoff, and rivers, and in higher concentrations (74 micromolar) in seawater. Fluorine can also enter surface waters via volcanic plumes. Atmosphere Fluorine can enter the atmosphere via volcanic activity and other geothermal emissions, as well as via biomass burning and wind-blown dust plumes. Additionally, it can come from a wide variety of anthropogenic sources, including coal combustion, brick-making, uranium processing, chemical manufacturing, aluminum production, glass etching, and the microelectronics/semiconductor industry. Fluorine can also enter the atmosphere as a product of reactions between anthropogenically-generated atmospheric chemicals (for example, uranium fluoride). Furthermore, fluorine is a component in chlorofluorocarbon gases (CFCs), which were mass-produced throughout the 20th century until the detrimental effects associated with their breakdown into highly reactive chlorine and chlorine oxide species were better understood. 
The majority of contemporary studies on atmospheric fluorine focus on hydrogen fluoride (HF) in the troposphere, due to HF gas’s toxicity and high reactivity. Fluorine can be removed from the atmosphere via “wet” deposition, by precipitating out of rain, dew, fog, or cloud droplets, or via “dry” deposition, which refers to any processes that do not involve liquid water, such as adherence to surface materials as driven by atmospheric turbulence. HF can also be removed from the atmosphere via photochemical reactions in the stratosphere. Biosphere Fluorine is an important element for biological systems. From a mammalian health perspective, it is notable as a component of fluorapatite, a key mineral in the teeth of humans that have been exposed to fluorine, as well as shark and fish teeth. In soil, fluorine can act as a source for biological systems and a sink for atmospheric processes, as atmospheric fluorine can leach to considerable depths. References Fluorine Biogeochemical cycle
Fluorine cycle
Chemistry
945
76,734,839
https://en.wikipedia.org/wiki/Monkey%20Drug%20Trials
The Monkey Drug Trials of 1969 were a series of controversial animal testing experiments that were conducted on primates to study the effects of various psychoactive substances. The trials shed light on the profound effects of drug addiction and withdrawal in primates, pioneering critical insights into human substance abuse. Background The Monkey Drug Trials experiment was influenced by preceding research discussing related topics. Six notable research publications may be highlighted: “Factors regulating oral consumption of an opioid (etonitazene) by morphine-addicted rats”; “Experimental morphine addiction: Method for automatic intravenous injections in unrestrained rats”; “Morphine self-administration, food-reinforced, and avoidance behaviors in rhesus monkeys”; “Psychopharmacological elements of drug dependence”; “Drug addiction. I. Addiction by escape training”; “Morphine addiction in rats”. Experiments The study “Self-Administration of Psychoactive Substances by the Monkey” was conducted by G. Deneau, T. Yanagita and M.H. Seevers at the Department of Pharmacology at the University of Michigan. The monkey drug trials consisted of self-injection of intravenous drugs in monkeys, in which the primates were trained to operate the self-administration of cocaine, morphine, amphetamines, codeine, caffeine, mescaline, pentobarbital, and ethanol by using a lever in their cage. Their responses to the drugs over time were carefully analyzed to assess whether the monkeys, after initial exposure to a drug, would show a voluntary intake of it, indicating psychological dependence. The results suggested that some drugs elicited signs of dependency while others did not. They were compared with human dependency problems, aiming to find an explanation of physiological and psychological drug dependence in humans. The trials had drastic side effects such as tremors, hallucinations, convulsions, disorientation, and sudden death. Several other experiments that were highly criticised because of moral issues led to the development of the Guidelines for Ethical Conduct in the Care and Use of Animals provided by the American Psychological Association. Procedure Monkeys selected as experiment subjects were kept in specially-built cubicles. Inside the booths, monkeys were restrained by a harness attached to a restraining arm mounted on the wall. Upon acclimation of the monkey to the new environment, the harness was adjusted to fit the size of the animal. A silicone catheter was inserted into the monkey’s jugular vein under anesthesia and fixed in place. The other end of the catheter was attached to a tube running through the harness to an injector. After the monkey recovered from the catheter installation, two switches were placed inside the cubicle. Pressing one of the switches activated the injector and saline was injected into the vein of the monkey. Upon pressing the other switch, saline was transported back from the injector to the container. After the monkey learned to operate the self-administration mechanism, saline was switched to a drug solution. Drug injections could be administered by the monkey or a timer. If the drug had rewarding effects on the monkey, it increased the self-administration rate, as pressing the switch would be associated with a pleasant experience. If the experience was perceived negatively, the monkey avoided pressing the switch. 
In the event of the monkey not initiating drug injections, the solution was administered automatically at regular intervals to test if, upon further exposure, the monkey would begin to press the switch on its own, signaling psychological dependence on the drug. Results Morphine: Some monkeys expressed an initial reluctance to self-administer morphine at lower doses, but they eventually began and maintained a consistent intake of it, some even significantly increasing the dosage throughout the experiment. None of the monkeys voluntarily ceased their morphine intake during the study, and when this drug was taken away from them they exhibited symptoms of severe dependence. Some of the side effects observed during the monkeys’ morphine consumption were drowsiness, apathy, reduced food intake, and temporary weight loss. Codeine: Four out of five monkeys initiated lever-pressing for codeine, gradually increasing its intake until achieving a stable consumption between the fifth and sixth week of the experiment. One monkey experienced convulsions and died after reaching the highest observed daily dosage of 600 mg/kg, and the other four died between the sixth and eighth week of unrestricted codeine consumption. Nalorphine: Due to the monkeys’ experience with the first drug trial, they refused to administer the drug to themselves of their own free will, so they were injected every four hours for the whole period of the trial. During the trial the monkeys were less active, somewhat apprehensive and salivated mildly for 10-15 minutes. Once the trial ended and the monkeys were taken off the drugs they yawned excessively and scratched for 2 days. Morphine-Nalorphine mixture: Four monkeys were tested with a mixture of both drugs; these monkeys had shown psychological dependence but had not been allowed to self-administer morphine. None of the monkeys voluntarily self-administered the mixture. Cocaine: Two of the monkeys started self-administration with a dose of 0.25 mg/kg, and the other three started with a dose of 1.0 mg/kg that they maintained throughout the experiment. Once self-administration began, the cocaine consumption rapidly increased, leading to convulsions and death within 30 days. To extend the experiment, the self-administration dose was restricted to one dose per hour, and the consequent pattern was monkeys self-administering until exhaustion, after which they voluntarily ceased their cocaine intake for a period ranging from 12 hours to 5 days. During this period of voluntary restriction, monkeys slept intermittently and ate frequently. Cocaine consumption resulted in secondary effects such as hallucinations, muscle mass loss, and frequent grand mal convulsions. Morphine-cocaine: Four monkeys had 2 tubes implanted, where one supplied morphine and one supplied cocaine, with corresponding lever switches which could be pressed at will. Shortly thereafter, the monkeys developed dependency, primarily using cocaine during the day and morphine during the evening/night. Combined toxic effects of these drugs created disorientation, delirium, anorexia, motor impairment, emaciation and eventually death after 2-4 weeks. The Amphetamines: Five monkeys voluntarily self-administered a maximum dose of methamphetamine. The intake was infrequent, with periods of voluntary withdrawal and periods of high intake daily and nightly. D-amphetamine had effects similar to but milder than those of cocaine, notably lacking grand mal convulsions and the chewing of forearms and digits. 
One monkey plucked hair from its own body, leading the researchers to believe that hallucinations might have been present. As with cocaine, the monkeys became confused, and a catabolic effect was observed with both cocaine and the amphetamines. Caffeine: Four monkeys were placed in a caffeine trial; two failed to initiate self-administration at 1.0 mg/kg, one failed at 2.5 mg/kg while another did initiate self-administration at this level, and one monkey initiated self-administration at 5.0 mg/kg. One monkey self-administered voluntarily and two did so after priming. The pattern following self-administration was sporadic, with irregular intervals of administration and abstinence. No tendency to heighten the dose or to take the drug at night was shown, and once the drug was withdrawn no signs of withdrawal were visible in the monkeys. Mescaline: The monkeys either did not self-administer mescaline or started doing so after one month of programmed administration. Effects observed were salivation indicating nausea (although no monkeys vomited), mydriasis and piloerection. The monkeys were also very apprehensive of sounds. No abstinence signs were observed during programmed administration. Pentobarbital: Five monkeys initiated and maintained self-administration of 3 mg/kg doses of pentobarbital. They constantly self-administered as soon as the last dose enabled them to re-administer, reaching a tolerance plateau of 420 mg/kg per week. All monkeys abstained during meals, which were larger than average. The monkeys maintained good physical condition, gaining weight throughout the experiment. They never voluntarily abstained, and when abstinence was forced, an abstinence syndrome was observed with symptoms of extreme restlessness, tremors, grand mal convulsions and apparent hallucinations. Ethanol: Four out of five monkeys administered ethanol voluntarily, and one of the four completely stopped self-administering after one month. Despite severe abstinence syndrome, the monkeys voluntarily abstained for 2-4 days during the first 4 months. Afterwards these periods were usually no more than a day. Effects observed included severe motor incoordination and stupor, sometimes to the point of light anesthesia. Withdrawal periods also induced symptoms of tremor, vomiting, hallucinatory behavior and convulsions within 6 hours after the last dose. Food intake was also severely reduced, resulting in marked weight loss and cachexia. Two monkeys died because of respiratory obstruction during anesthesia. Chlorpromazine: None of the six monkeys willingly self-administered chlorpromazine, and all received programmed injections. After withdrawal, however, two monkeys willingly self-administered 2-5 times a day and then abstained completely after several weeks. Effects during administration included typical phenothiazine effects of reduced spontaneous activity and responsiveness, narrowed palpebral fissures and slight miosis, but no major dyskinesias were observed. No withdrawal signs were observed either. Saline: No attempt during the study to establish saline as a reinforcing agent was successful. Limitations Limitations mentioned by the experimenters Deneau et al. in their paper on Self-Administration of Psychoactive Substances by the Monkey include that the study was not able to test drugs that are not water-soluble. This limited research on substances such as the active ingredients of marijuana. The paper also noted that the individual variability in drug use among the individual monkeys may affect the reliability of the results. 
When using self-administration to infer abuse potential, it is essential to consider the toxicity of the drug when it is administered as well as the withdrawal effects once usage is stopped. The experiment mainly focused on self-administration and not on the withdrawal period or the detailed effects that the drugs had on the body. Abuse potential, as well as a drug's potential danger, is determined not only by self-administration but by several factors that were not taken into account when the study was conducted. An additional point of criticism raised by primatologists is the limitation of generalizing results from data obtained on non-human primates onto humans. Possible biases can emerge through, for example, the environment in which the monkeys are placed, or trauma resulting from the animals' removal from their previous environment to the laboratory. Ethics Experiments using non-human primates (NHPs) have been viewed more critically in the years following 1969, when the study Self-Administration of Psychoactive Substances by the Monkey was conducted. The bioethicist Peter Singer, for example, argues that there should be no use of any animal in biomedical research, as this would indicate speciesism. It is often argued that animals lack sentience, autonomy and self-consciousness, which is utilized to justify the use of animals in scientific experiments. Singer draws a comparison with humans who lack these traits and argues that if one is morally able to deprive an animal of its rights on this basis, one would also be entitled to deprive such humans of the same rights and privileges. More specific to the Self-Administration of Psychoactive Substances by the Monkey experiment, which utilized rhesus monkeys, the substantial phylogenetic proximity between non-human primates and humans suggests that the suffering endured by NHPs in these experiments is similar to what a human would experience under the same circumstances. Some argue that this phylogenetic proximity between NHPs and humans is precisely what benefits comparative psychology, making it easier to infer from the animal to humans. Drug self-administration procedures in animals have been found to provide valid and reliable results for assessing the potential for drug abuse in humans. The reliability of these studies is particularly high because of the phylogenetic similarity. Carl Cohen, a bioethicist, suggests that as long as animals are killed by humans daily, simply for consumption, even though this is unnecessary due to modern scientific developments, there would be no reason not to utilize animals in scientific experiments. A utilitarian argument to justify this further would be that there are relatively few NHPs used in research, compared to the relatively large number of people benefitting. Harvard Medical School, commenting on a different ethically questioned experiment using NHPs, stated: "As long as non-human primates are used in scientific experiments, we are morally obligated to provide them with sufficient social conditions that ensure their emotional well-being." Legacy and aftermath The monkey drug trials were not the first nor the last experiment of its kind. Using animals to assess the effects of addiction and withdrawal was a relatively common practice during Deneau’s time, with monkeys and rats being the most prevalent subjects. 
Notably, Charles Schuster’s studies on drug self-administration were critical in demonstrating the highly addictive nature of stimulant substances. The cruelty and disregard for the animals displayed by the experimenters during the 1969 drug trials were the key contributors to the controversy that followed. The experiment had a profound impact on the field of neuroscience and addiction research, leading to lasting changes in research practices, ethical considerations, and public perception. In spite of the criticism that accompanied the publication of the study, subsequent research did provide valuable insights into addiction-driven behaviors. In the following decades, articles focusing on the biological processes underlying drug addiction grew in popularity. Respected scientists, such as George Koob and Nora Volkow, were at the forefront of new discoveries in addiction neuroscience and the pharmacology of behavior. Research also explored the clinical aspects of addiction-related behaviors, with experiments aimed at reducing the likelihood of relapse in patients. While self-administered drug trials were not a novelty in the last century, the validity and efficiency of such procedures have since become a subject of debate in the scientific community. The use of animals in studies exploring the effects of various substances was still prevalent by the end of the 20th century. See also Cambridge University primates Silver Spring monkeys References Animal testing techniques Animal rights Toxicology tests Animal testing in the United States Animal testing on non-human primates Substance abuse
Monkey Drug Trials
Chemistry,Environmental_science
2,996
4,701,210
https://en.wikipedia.org/wiki/Prothallus
A prothallus, or prothallium, (from Latin pro = forwards and Greek θαλλος (thallos) = twig) is usually the gametophyte stage in the life of a fern or other pteridophyte. Occasionally the term is also used to describe the young gametophyte of a liverwort or peat moss as well. In lichens it refers to the region of the thallus that is free of algae. The prothallus develops from a germinating spore. It is a short-lived and inconspicuous heart-shaped structure typically 2–5 millimeters wide, with a number of rhizoids (root-like hairs) growing underneath, and the sex organs: archegonium (female) and antheridium (male). Appearance varies quite a lot between species. Some are green and conduct photosynthesis while others are colorless and nourish themselves underground as saprotrophs. Alternation of generations Spore-bearing plants, like all plants, go through a life-cycle of alternation of generations. The fully grown sporophyte, what is commonly referred to as the fern, produces genetically unique spores in the sori by meiosis. The haploid spores fall from the sporophyte and germinate by mitosis, given the right conditions, into the gametophyte stage, the prothallus. The prothallus develops independently for several weeks; it grows sex organs that produce ova (archegonia) and flagellated sperm (antheridia). The sperm are able to swim to the ova for fertilization to form a diploid zygote which divides by mitosis to form a multicellular sporophyte. In the early stages of growth, the sporophyte grows out of the prothallus, depending on it for water supply and nutrition, but develops into a new independent fern, which will produce new spores that will grow into new prothallia etc., thus completing the life cycle of the organism. Theoretical advantages of alternation of generations It has been argued that there are important evolutionary advantages to the alternation-of-generations plant life cycle. By forming a multicellular haploid gametophyte rather than limiting the haploid stage to gametes, there is often only one allele for any genetic trait. Thus, alleles are not masked by a dominant counterpart (there is no counterpart). One benefit of this is that a mutation that causes a lethal, or harmful, trait expression will cause the gametophyte to die; thus, the trait cannot be passed on to future generations, preserving the strength of the gene pool. Furthermore, if individual cells of the gametophyte compete with one another, somatic mutations that reduce cell vigour may prevent a cell lineage from reproducing. In lichens The region of the thallus in lichens that is free of algae (the photobiont partner) and contains only fungus (the mycobiont partner) is called the prothallus. It is typically white, brown, or black in colour. In crustose lichens, the prothallus is visible between areoles and on the growing thallus margin. In the large genus Cladonia, the prothallus may provide a mode of vegetative reproduction, and it may have a role in stabilising the soil. In some genera, such as Coenogonium, the presence or absence of prothalli is an important taxonomic character that is used to help classify species. The term prothallus was first used by German botanist Georg Meyer in 1825, who introduced it in a discussion of lichen growth. References External links Liverwort Sporophyte Fern Life-Cycle Ferns Fungal morphology and anatomy
Prothallus
Biology
792
25,141,419
https://en.wikipedia.org/wiki/Love%20number
The Love numbers (h, k, and l) are dimensionless parameters that measure the rigidity of a planetary body or other gravitating object, and the susceptibility of its shape to change in response to an external tidal potential. In 1909, Augustus Edward Hough Love introduced the values h and k which characterize the overall elastic response of the Earth to the tides—Earth tides or body tides. Later, in 1912, Toshi Shida added a third Love number, l, which was needed to obtain a complete overall description of the solid Earth's response to the tides. Definitions The Love number h is defined as the ratio of the body tide to the height of the static equilibrium tide; also defined as the vertical (radial) displacement or variation of the planet's elastic properties. In terms of the tide generating potential V(θ, φ), the displacement is h V(θ, φ)/g, where θ is latitude, φ is east longitude and g is the acceleration due to gravity. For a hypothetical solid Earth h = 0. For a liquid Earth, one would expect h = 1. However, the deformation of the sphere causes the potential field to change, and thereby deform the sphere even more. The theoretical maximum is h = 2.5. For the real Earth, h lies between 0 and 1. The Love number k is defined as the cubical dilation, or the ratio of the additional potential (self-reactive force) produced by the deformation to the deforming potential. It can be represented as k V(θ, φ)/g, where k = 0 for a rigid body. The Love number l represents the ratio of the horizontal (transverse) displacement of an element of mass of the planet's crust to that of the corresponding static ocean tide. In potential notation the transverse displacement is (l/g) ∇ V(θ, φ), where ∇ is the horizontal gradient operator. As with h and k, l = 0 for a rigid body. Values According to Cartwright, "An elastic solid spheroid will yield to an external tide potential U2 of spherical harmonic degree 2 by a surface tide h2U2/g and the self-attraction of this tide will increase the external potential by k2U2." The magnitudes of the Love numbers depend on the rigidity and mass distribution of the spheroid. Love numbers hn, kn, and ln can also be calculated for higher orders n of spherical harmonics. For the elastic Earth the degree-2 Love numbers lie in narrow ranges, approximately h ≈ 0.6, k ≈ 0.3, and l ≈ 0.08. For Earth's tides one can calculate the tilt factor as 1 + k − h and the gravimetric factor as 1 + h − (3/2)k, where the subscript two (degree 2) is assumed. Neutron stars are thought to have high rigidity in the crust, and thus a low Love number; isolated, nonrotating black holes in vacuum have vanishing Love numbers for all multipoles. Measuring the Love numbers of compact objects in binary mergers is a key goal of gravitational-wave astronomy. References Tides Elasticity (physics) Dimensionless numbers of mechanics Geodynamics
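As a small illustration of the tilt and gravimetric factors defined above, the following sketch evaluates them for representative degree-2 elastic-Earth Love numbers; the values of h and k below are typical approximations, and the exact numbers vary slightly between Earth models.

# Tilt (diminishing) factor and gravimetric factor from degree-2 Love numbers.
# Representative approximate values; exact figures depend on the Earth model.
h2 = 0.61   # vertical displacement Love number (approximate)
k2 = 0.30   # potential Love number (approximate)

tilt_factor = 1 + k2 - h2               # applied to the equilibrium tidal tilt
gravimetric_factor = 1 + h2 - 1.5 * k2  # applied to tidal gravity variations

print(f"tilt factor        = {tilt_factor:.2f}")         # about 0.69
print(f"gravimetric factor = {gravimetric_factor:.2f}")  # about 1.16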
Love number
Physics,Materials_science
553
62,528,999
https://en.wikipedia.org/wiki/Epichlo%C3%AB%20gansuensis
Epichloë gansuensis is a haploid species in the fungal genus Epichloë. The sexual phase has not been observed. A systemic and seed-transmissible grass symbiont first described in 2004, Epichloë gansuensis is a sister lineage to Epichloë sibirica and an early branching lineage on the Epichloë tree. Epichloë gansuensis is found in Asia, where it has been identified in the grass species Achnatherum inebrians, Achnatherum sibiricum and Achnatherum pekinense. Varieties Epichloë gansuensis has one variety. Epichloë gansuensis subsp. inebrians (C.D. Moon & Schardl) Schardl was first described in 2007. It is found in Asia in the grass species Achnatherum inebrians. References gansuensis Fungi described in 2004 Fungi of Asia Fungus species
Epichloë gansuensis
Biology
196
54,360,045
https://en.wikipedia.org/wiki/Megasporoporia%20minor
Megasporoporia minor is a species of crust fungus in the family Polyporaceae. Found in China, it was described as a new species in 2013 by mycologists Bao-Kai Cui and Hai-Jiao Li. The type collection was made in Daweishan Forest Park, Yunnan, where the fungus was found growing on a fallen angiosperm branch. It is distinguished from other species of Megasporoporia by its relatively small pores (numbering 6–7 per millimetre) and small spores (measuring 6–7.8 by 2.6–4 μm); it is these features for which the fungus is named. References Fungi of China Fungi described in 2013 Polyporaceae Taxa named by Bao-Kai Cui Fungus species
Megasporoporia minor
Biology
160
26,221,186
https://en.wikipedia.org/wiki/Friendship%20paradox
The friendship paradox is the phenomenon first observed by the sociologist Scott L. Feld in 1991 that on average, an individual's friends have more friends than that individual. It can be explained as a form of sampling bias in which people with more friends are more likely to be in one's own friend group. In other words, one is less likely to be friends with someone who has very few friends. In contradiction to this, most people believe that they have more friends than their friends have. The same observation can be applied more generally to social networks defined by other relations than friendship: for instance, most people's sexual partners have had (on the average) a greater number of sexual partners than they have. The friendship paradox is an example of how network structure can significantly distort an individual's local observations. Mathematical explanation In spite of its apparently paradoxical nature, the phenomenon is real, and can be explained as a consequence of the general mathematical properties of social networks. The mathematics behind this are directly related to the arithmetic–geometric mean inequality and the Cauchy–Schwarz inequality. Formally, Feld assumes that a social network is represented by an undirected graph G = (V, E), where the set V of vertices corresponds to the people in the social network, and the set E of edges corresponds to the friendship relation between pairs of people. That is, he assumes that friendship is a symmetric relation: if u is a friend of v, then v is a friend of u. The friendship between u and v is therefore modeled by the edge {u, v}, and the number of friends an individual has corresponds to a vertex's degree d(v). The average number of friends of a person in the social network is therefore given by the average of the degrees of the vertices in the graph. That is, if vertex v has d(v) edges touching it (representing a person who has d(v) friends), then the average number of friends μ of a random person in the graph is μ = (Σ_v d(v)) / |V| = 2|E| / |V|. The average number of friends that a typical friend has can be modeled by choosing a random person (who has at least one friend), and then calculating how many friends their friends have on average. This amounts to choosing, uniformly at random, an edge of the graph (representing a pair of friends) and an endpoint of that edge (one of the friends), and again calculating the degree of the selected endpoint. The probability of a certain vertex v to be chosen is (d(v) / |E|) × (1/2). The first factor corresponds to how likely it is that the chosen edge contains the vertex, which increases when the vertex has more friends. The halving factor simply comes from the fact that each edge has two vertices. So the expected value of the number of friends of a (randomly chosen) friend is Σ_v (d(v) / (2|E|)) d(v) = (Σ_v d(v)²) / (2|E|). We know from the definition of variance that (Σ_v d(v)²) / |V| = μ² + σ², where σ² is the variance of the degrees in the graph. This allows us to compute the desired expected value as (Σ_v d(v)²) / (2|E|) = (μ² + σ²) / μ = μ + σ²/μ. For a graph that has vertices of varying degrees (as is typical for social networks), σ² is strictly positive, which implies that the average degree of a friend is strictly greater than the average degree of a random node. Another way of understanding how the d(v)² term arises is as follows. For each friendship (u, v), the node u mentions that v is a friend, and v has d(v) friends. There are d(v) such friends who mention this. Hence the d(v)² term. We add this for all such friendships in the network from both u's and v's perspective, which gives the numerator. 
The denominator is the total number of such friendships, which is twice the total number of edges in the network (one count from u's perspective and the other from v's). After this analysis, Feld goes on to make some more qualitative assumptions about the statistical correlation between the number of friends that two friends have, based on theories of social networks such as assortative mixing, and he analyzes what these assumptions imply about the number of people whose friends have more friends than they do. Based on this analysis, he concludes that in real social networks, most people are likely to have fewer friends than the average of their friends' numbers of friends. However, this conclusion is not a mathematical certainty; there exist undirected graphs (such as the graph formed by removing a single edge from a large complete graph) that are unlikely to arise as social networks but in which most vertices have higher degree than the average of their neighbors' degrees. The Friendship Paradox may be restated in graph theory terms as "the average degree of a randomly selected node in a network is less than the average degree of neighbors of a randomly selected node", but this leaves unspecified the exact mechanism of averaging (i.e., macro vs micro averaging). Let G = (V, E) be an undirected graph with |V| = n nodes and |E| = m edges, having no isolated nodes. Let the set of neighbors of node u be denoted N(u). The average degree is then μ = 2m/n. Let the number of "friends of friends" of node u be denoted ff(u) = Σ_{v ∈ N(u)} d(v). Note that this can count 2-hop neighbors multiple times, but so does Feld's analysis. Feld considered the following "micro average" quantity: MicroAvg = (Σ_u ff(u)) / (Σ_u d(u)). However, there is also the (equally legitimate) "macro average" quantity, given by MacroAvg = (1/n) Σ_u ff(u)/d(u). The computation of MacroAvg can be expressed as the following pseudocode: for each node u, initialize Q(u) := 0; for each edge {u, v}, add d(v)/d(u) to Q(u) and add d(u)/d(v) to Q(v); return MacroAvg = (1/n) Σ_u Q(u). Each edge {u, v} contributes to MacroAvg the quantity (1/n)(d(v)/d(u) + d(u)/d(v)) ≥ 2/n, because x + 1/x ≥ 2 for any positive x. We thus get MacroAvg ≥ (1/n)(2m) = μ. Thus, we have both MicroAvg ≥ μ and MacroAvg ≥ μ, but no inequality holds between MicroAvg and MacroAvg in general. In a 2023 paper, a parallel paradox, but for negative, antagonistic, or animosity ties, termed the "enmity paradox," was defined and demonstrated by Ghasemian and Christakis. In brief, one's enemies have more enemies than one does, too. This paper also documented diverse phenomena in "mixed worlds" of both hostile and friendly ties. Applications The analysis of the friendship paradox implies that the friends of randomly selected individuals are likely to have higher than average centrality. This observation has been used as a way to forecast and slow the course of epidemics, by using this random selection process to choose individuals to immunize or monitor for infection while avoiding the need for a complex computation of the centrality of all nodes in the network. In a similar manner, in polling and election forecasting, friendship paradox has been exploited in order to reach and query well-connected individuals who may have knowledge about how numerous other individuals are going to vote. However, when utilized in such contexts, the friendship paradox inevitably introduces bias by over-representing individuals with many friends, potentially skewing resulting estimates. A study in 2010 by Christakis and Fowler showed that flu outbreaks can be detected almost two weeks before traditional surveillance measures would do so by using the friendship paradox in monitoring the infection in a social network. 
They found that using the friendship paradox to analyze the health of central friends is "an ideal way to predict outbreaks, but detailed information doesn't exist for most groups, and to produce it would be time-consuming and costly." This extends to the spread of ideas as well, with evidence that the friendship paradox can be used to track and predict the spread of ideas and misinformation through networks. This observation has been explained with the argument that individuals with more social connections may be the driving forces behind the spread of these ideas and beliefs, and as such can be used as early-warning signals. Friendship paradox based sampling (i.e., sampling random friends) has been theoretically and empirically shown to outperform classical uniform sampling for the purpose of estimating the power-law degree distributions of scale-free networks. The reason is that sampling the network uniformly will not collect enough samples from the characteristic heavy tail part of the power-law degree distribution to properly estimate it. However, sampling random friends incorporates more nodes from the tail of the degree distribution (i.e., more high degree nodes) into the sample. Hence, friendship paradox based sampling captures the characteristic heavy tail of a power-law degree distribution more accurately and reduces the bias and variance of the estimation. The "generalized friendship paradox" states that the friendship paradox applies to other characteristics as well. For example, one's co-authors are on average likely to be more prominent, with more publications, more citations and more collaborators, or one's followers on Twitter have more followers. The same effect has also been demonstrated for Subjective Well-Being by Bollen et al. (2017), who used a large-scale Twitter network and longitudinal data on subjective well-being for each individual in the network to demonstrate that both a Friendship and a "happiness" paradox can occur in online social networks. The friendship paradox has also been used as a means to identify structurally influential nodes within social networks, so as to magnify social contagion of diverse practices relevant to human welfare and public health. This has been shown to be possible in several large-scale randomized controlled field trials conducted by Christakis et al., with respect to the adoption of multivitamins or maternal and child health practices in Honduras, or of iron-fortified salt in India. This technique is valuable because, by exploiting the friendship paradox, one can identify such influential nodes without the expense and delay of actually mapping the whole network. See also References External links Statistical paradoxes Social networks Graph theory Friendship Probability theory paradoxes
Friendship paradox
Mathematics
1,871
50,112,980
https://en.wikipedia.org/wiki/Entoloma%20holoconiotum
Entoloma holoconiotum is a mushroom in the family Entolomataceae. It was originally described as Nolanea holoconiota by David Largent and Harry Thiers in 1972. Machiel Noordeloos and Co-David transferred it to the genus Entoloma in 2009. The species can be found in conifer forests in western North America. The cap is tan or orangish and ranges from 2–6 cm in diameter. The gills are white. The stalks are pale yellow, measuring 3–7 cm tall and 3–4 mm wide. The spores are brownish pink. Similar species include Entoloma cuneatum, E. propinquum, and E. vernum. See also List of Entoloma species References External links Entolomataceae Fungi of North America Fungi described in 1972 Taxa named by Harry Delbert Thiers Fungus species
Entoloma holoconiotum
Biology
190
19,344,245
https://en.wikipedia.org/wiki/Obstacles%20to%20troop%20movement
Obstacles to troop movement are natural, habitat-derived, constructed, concealed, or obstructive impediments to the movement of military troops and their vehicles, or to their visibility. By impeding strategic, operational or tactical manoeuvre, an obstacle represents an added barrier between opposing combat forces, and therefore prevents achievement of objectives and goals specified in the operational planning schedule. Constructed obstacles are used as an aid to defending a position or area as part of the general defensive plan of the commander. Obstacles that originate from the human habitat can be converted by troops into constructed obstacles by either performing additional construction or executing demolitions to obstruct movement over the transport network, to create a choke point, or to deny the enemy the ability to traverse an area. Natural obstacles can be used defensively to secure a position that is more difficult to breach, for example by anchoring a flank on terrain that is deemed impossible to traverse, thus denying the enemy the ability to close to within combat range of direct-fire weapons. Role of obstacles Obstacles are used in combat operations to create choke points, to deny mobility corridors and avenues of approach to positions, to enhance fields of fire for direct-fire weapons, or to deny key tactical terrain features to the enemy. Types of obstacles Natural obstacles Natural obstacles are those terrain features that few troops and their vehicles have the capability to traverse. They include water obstacles, or areas of poor drainage, such as lakes, rivers, swamps and marshes. The former two can be crossed by amphibious vehicles capable of swimming, by vehicles capable of deep wading after preparation, or by constructing a water crossing, which creates an easily targeted choke point. Soil and rock can also represent mobility obstacles if the soil is too soft and unable to support the weight of military vehicles, or if the terrain is fractured by cliffs or large boulders that make organised movement impossible. While soft soil is relative to vehicle ground pressure, there is little that can be done to negotiate very rocky terrain or cliffs except by using specially trained light infantry troops. Vegetation such as jungles or dense forests can also represent obstacles to movement, in some cases even to light infantry troops. Some natural obstacles can be a result of climatic or soil activity, such as deep snow that, by covering all terrain, makes safe traversing difficult and slow, or landslides that may suddenly create an obstruction despite a previously clear route reconnaissance report. Habitat obstacles While human habitation has, since the early construction of roads, sought to create ways of negotiating terrain faster, human activity on the landscape can create obstacles in its own right. Artificial lakes and ponds, canals, and areas of agricultural cultivation, particularly those that are water-intensive such as rice-paddy fields, create obstacles often more difficult than the natural equivalents. Mining activity creates quarries, and the building of roads, railroads and dams also involves the construction of cuts and fills. Seeded tree-line windbreaks, hedgerows, stone walls and plantation forests also disrupt mobility, particularly of vehicles. 
Lastly, urban areas in themselves represent obstacles, offering elevated firing positions and canyon-like choke points by forcing the opponent to advance through the streets. Constructed obstacles Constructed obstacles are those prepared by military engineering troops, often combat engineers, by either using materials to construct impediments to foot and vehicle-borne troops, or by using demolition methods, or excavation such as an abatis, to create obstacles from natural materials and terrain in a specific location in accordance with the overall plan of operations. Sometimes such obstacles can be created intentionally or unintentionally through the effects of artillery fire cratering. Buildings demolished due to combat or aerial bombing become very effective obstacles, as rubble forms irregular piles of building materials that are difficult to negotiate. Concealed obstacles Concealed obstacles are used with the intention of not only preventing movement of enemy troops, but also causing casualties during attempted movement. Although one of the oldest forms of obstacle use, this became far deadlier with the invention of mine warfare, and more so with air-delivered scatterable submunition minelets that can create an instant minefield. Obstructive obstacles Obstructive obstacles are used primarily to deny terrain visibility to the enemy, thus creating uncertainty in targeting friendly troops. Although ancient in use, in the form of tar smoke pots, modern smoke screens are temporary and are used as a tactical measure during manoeuvring, often when a unit is performing a position change. Obstacle negotiation Ground troops prefer to deal with physical obstacles by circumventing them as rapidly as possible, thus avoiding becoming stationary targets for enemy direct and indirect fire weapons and aircraft. Where this is not possible, in modern warfare the most expedient measures taken against constructed or urban obstacles are either to use armoured vehicles, preferably tanks, to remove the obstacle, or to demolish it by firing high-explosive munitions at it. Where combat engineers are present, they can perform this using their specialist skills and tools or vehicles. In the case of natural obstacles, specialist engineering equipment is usually required to negotiate the obstacle, commonly bridging or pontoons. The solution to obstacle bridging has, at the strategic level, created new forms of warfare and employment of troops in amphibious operations, and later airborne operations. At the operational level the use of helicopters in airmobile operations offers a vertical option for negotiating obstacles, often of considerable extent, such as mountain passes or extensive areas of impassable vegetation. See also Engineer reconnaissance Route reconnaissance Military tactics Types of military tactics Types of military operations Types of military strategies References Citations Bibliography Liddell Hart, Basil Henry, The Tanks: The history of the Royal Tank Regiment and its predecessors, Heavy Branch, Machine-Gun Corps, Tank Corps, and Royal Tank Corps, 1914-1945, Cassell, London, 1959 Land warfare Military engineering
Obstacles to troop movement
Engineering
1,158
22,714,344
https://en.wikipedia.org/wiki/International%20Council%20for%20Information%20Technology%20in%20Government%20Administration
The International Council for Information Technology in Government Administration (ICA) is a non-profit organisation which promotes the exchange of knowledge, ideas and experiences between central government information technology authorities. The ICA was established in 1968. References External links http://www.ica-it.org/ Information technology organizations Organizations established in 1968
International Council for Information Technology in Government Administration
Technology
71
58,003,379
https://en.wikipedia.org/wiki/Adesto%20Technologies
Adesto Technologies Corporation was an American corporation founded in 2006 and based in Santa Clara, California. The company provided application-specific integrated circuits (ASICs) and embedded systems for the Internet of Things (IoT), and sells its products directly to original equipment manufacturers (OEMs) and original design manufacturers (ODMs) that manufacture products for its end customers. In 2020, Adesto was bought by Dialog Semiconductor. History Adesto Technologies was founded by Narbeh Derhacobian, Shane Hollmer, and Ishai Naveh in 2006. Derhacobian formerly served in senior technical and managerial roles at AMD, Virage Logic, and Cswitch Corporations. The company developed a non-volatile memory based on the movement of copper ions in a programmable metallization cell technology licensed from Axon Technologies Corp., a spinoff of Arizona State University. In October 2010, Adesto acquired intellectual property and patents related to Conductive Bridging Random Access Memory (CBRAM) technology from Qimonda AG, and their first CBRAM product began production in 2011. In 2015, the company held an initial public offering under the symbol IOTS, which entered the market at $5 per share. Underwriters included Needham & Company, Oppenheimer & Co. Inc., and Roth Capital Partners. The entire offering was valued at $28.75 million. Between May and September 2018, Adesto completed two acquisitions of S3 Semiconductors and Echelon Corporation. In May, the company acquired S3 Semiconductors, a provider of analog and mixed-signal ASICs and Intellectual Property (IP) cores. In June, the company announced its intention to buy Echelon Corporation, a home and industrial automation company, for $45 million. The acquisition was completed three months later. The company's offerings were expanded to include ASICs and IP from S3 Semiconductors and embedded systems from Echelon Corporation, in addition to its original non-volatile memory (NVM) products. In 2018 Adesto started a cooperation with the University of California San Diego in order to explore the possibility for calculations to be made directly in the memory. In 2020, Adesto was acquired by Dialog Semiconductor, a company headquartered in Reading, United Kingdom, for $500 million. References External links http://www.adestotech.com Defunct semiconductor companies of the United States 2006 establishments in California American companies established in 2006 Computer companies established in 2006 Electronics companies established in 2006 Technology companies based in the San Francisco Bay Area Companies based in Santa Clara, California Companies formerly listed on the Nasdaq 2015 initial public offerings Embedded systems 2020 mergers and acquisitions Defunct computer companies of the United States Defunct computer hardware companies
Adesto Technologies
Technology,Engineering
564
20,644,107
https://en.wikipedia.org/wiki/International%20Award%20of%20Merit%20in%20Structural%20Engineering
The International Award of Merit in Structural Engineering is presented by the International Association for Bridge and Structural Engineering (IABSE) to people for outstanding contributions in the field of structural engineering, with special reference to usefulness for society. Fields of endeavour may include: planning, design, construction, materials, equipment, education, research, government, and management. The first Award was presented in 1976. Awardees Source IABSE 2020: Ahsan Kareem, USA 2019: Niels Jørgen Gimsing, Denmark 2018: Tristram Carfrae, UK 2017: Juan José Arenas, Spain 2016: no award 2015: Jose Calavera, Spain 2014: William F. Baker, USA 2013: Theodossios Tassios, Greece 2012: Hai-Fan Xiang, China 2011: Leslie E. Robertson, USA 2010: Man-Chung Tang, USA 2009: Christian Menn, Switzerland 2008: Tom Paulay, New Zealand 2007: Manabu Ito, Japan and Spain 2006: Javier Manterola, Spain 2005: Jean-Marie Cremer, Belgium 2004: Chander Alimchandani, India 2003: Michel Virlogeux, France 2002: Ian Liddell, UK 2001: John W. Fisher, USA 2000: John E. Breen, USA 1998: Peter Head, UK 1997: Bruno Thürlimann, Switzerland 1996: Alan G. Davenport, Canada 1995: Mamoru Kawaguchi, Japan 1994: T. N. Subbarao, India 1993: Jean Muller, France 1992: Leo Finzi, Italy 1991: Jörg Schlaich, Germany See also List of engineering awards References External links IABSE webpage Structural engineering awards International awards
International Award of Merit in Structural Engineering
Technology,Engineering
344
30,814,562
https://en.wikipedia.org/wiki/HD%20207129
HD 207129 is a G-type main-sequence star in the constellation of Grus. It has an apparent visual magnitude of approximately 5.58. This is a Sun-like star with the same stellar classification G2V and a similar mass. It is roughly the same age as the Sun, but has a lower abundance of elements other than hydrogen and helium (which astronomers refer to as the star's metallicity). A debris disk has been imaged around this star in visible light using the ACS instrument on the Hubble Space Telescope; it has also been imaged in the infrared (70 μm) using the MIPS instrument on the Spitzer Space Telescope. Based on the ACS image, the disk appears to have a radius of about 163 astronomical units, to be about 30 AU wide, and to be inclined at 60° to the plane of the sky. Another star, CCDM J21483-4718B (also designated CD−47 13929 or WDS J21483-4718B), of apparent visual magnitude 8.7, has been observed 55 arcseconds away from this star, but based on comparison of proper motions, it is believed to be an optical double and not physically related to its companion. References Grus (constellation) G-type main-sequence stars Pre-main-sequence stars Double stars 207129 0838 107649 8323 Durchmusterung objects
HD 207129
Astronomy
301
1,024,043
https://en.wikipedia.org/wiki/Paul%20Poberezny
Paul Howard Poberezny (September 14, 1921 – August 22, 2013) was an American aviator, entrepreneur, and aircraft designer. He founded the Experimental Aircraft Association (EAA) in 1953, and spent the greater part of his life promoting homebuilt aircraft. Poberezny is widely considered as the first person to have popularized the tradition of aircraft homebuilding in the United States. Through his work founding EAA and the organization's annual convention in Oshkosh, Wisconsin, he had the reputation of helping inspire millions of people to get involved in grassroots aviation. Many attribute his legacy with the growth and sustainment of the US general aviation industry in the later part of the 20th century and into the early 21st. For the last two decades of his tenure as chairman of the EAA from 1989–2009, he worked closely with his son, aerobatic pilot and EAA president Tom Poberezny, to expand the organization and create several new programs within it, including an aviation education program for youth and the EAA Museum, among other initiatives. In addition to his longtime experience as a military aviator (earning all seven types of pilot wings offered by the armed services), Poberezny was also an instructor, air show, air race and test pilot who frequently test flew his own homebuilt designs as well as various aircraft built by the EAA, such as the EAA Biplane. He flew for more than 70 years of his life in over 500 different types of aircraft, and was inducted into the National Aviation Hall of Fame in 1999. He also received the Wright Brothers Memorial Trophy in 2002 and was ranked fourth on Flying's list of the 51 Heroes of Aviation, the highest-ranked living person on the list at the time of its release. Poberezny died of cancer in 2013, at the age of 91. Early life Paul Poberezny was the oldest of three children born to Peter Poberezny, a Ukrainian migrant born and raised in Terebovlia, and Jettie Dowdy, who hailed from the southern United States. Born in Leavenworth County, Kansas, Paul grew up poor in a tar paper shack in Milwaukee, Wisconsin and never experienced indoor plumbing until he went to school. He became interested in aviation at an early age and built model airplanes as his first educational experience into aircraft design. He then learned how to fly and repair aircraft in high school, starting with a WACO Primary Glider and Porterfield 35 monoplane, and followed by an American Eagle biplane after high school. Having never attended college, Poberezny once described learning to fly and maintain the Eagle as the closest thing he ever had to a college education experience. Experimental Aircraft Association Poberezny founded the Experimental Aircraft Association out of his Hales Corners, Wisconsin home in 1953. It started as predominately an aircraft homebuilding organization in his basement, but later went on to capture all aspects of general aviation internationally. Poberezny retired as EAA President in 1989, remaining as chairman of the organization until 2009. As of 2017, the organization had approximately 200,000 members in more than 100 countries. In 1953, the EAA released a two-page newsletter named The Experimenter (later renamed Sport Aviation). The newsletter was first published and written by Paul and his wife Audrey Poberezny along with other volunteers. The now-monthly magazine focuses on experimental homebuilding and other general aviation topics, including antique, war, and classic aircraft. 
EAA's annual convention and fly-in (now known as EAA AirVenture Oshkosh) in Oshkosh, Wisconsin attracts a total attendance in excess of 600,000 people, 10,000 aircraft, and 1,000 different forums & workshops annually, making it the largest of its kind in the world. It was first held in 1953 at what is now Timmerman Field in Milwaukee, and attracted only a handful of airplanes. Towards the late '50s, the event outgrew Timmerman Field and was moved to the Rockford, Illinois Municipal Airport (now Chicago Rockford International Airport). There, attendance at the fly-in continued to grow until the Rockford airport was too small to accommodate the crowds, and so it was moved to Oshkosh's Wittman Regional Airport in 1970. Paul's son, aerobatic world champion Tom Poberezny, was the chairman of the annual EAA AirVenture Convention from 1977 to August 2011, and was president of EAA from 1989 to September 2010. In March 2009, Paul stepped down as Chairman of EAA and his son took on these duties as well. Tom had a large impact on the expansive growth of the organization and convention over the more than two decades that he led them with his father. The EAA spawned the creation of numerous aviation programs and activities within the organization, including a technical counselor program, flight advisor program, youth introduction-to-aviation program (the Young Eagles), National Cadet Special Activity program as part of the Civil Air Patrol (National Blue Beret), and more. In addition, AirVenture has nearly a $200 million annual economic impact on the surrounding region of Wisconsin and inspired the formation of other similar events such as Tannkosh in Germany and Sun 'n Fun in Florida, as well as similar organizations such as the Aircraft Kit Industry Association founded by pioneer homebuilder Richard VanGrunsven. Military career Poberezny served for 30 years in the Wisconsin Air National Guard and United States Air Force, including active duty during World War II and the Korean War. He retired with the rank of lieutenant colonel, and attained all seven aviation wings offered by the military: glider pilot, service pilot, rated pilot, liaison pilot, senior pilot, Army aviator, and command pilot. Aircraft experience Poberezny flew over 500 aircraft types, including over 170 home-built planes throughout his life. He was introduced to aviation in 1936 at the age of 16 with the gift of a donated damaged WACO Primary Glider that he rebuilt and taught himself to fly. A high school teacher owned the glider and offered to pay Poberezny to repair it. He hauled it to his father's garage, borrowed books on building/repairing airplanes, and completed the restoration soon after. A friend used his car to tow the glider into the air with Poberezny at the controls; it rose to around a hundred feet when he released the tow rope and coasted to a gentle landing in a bed of alfalfa. A year later, Poberezny soloed at age 17 in a 1935 Porterfield and soon co-owned an American Eagle biplane. After returning home from World War II, Poberezny could not afford to buy his own aircraft, so he decided to build one himself. In 1955, he wrote a series of articles for the publication Mechanix Illustrated, where he described how an individual could buy a set of plans and build an airplane at home. In the magazine were also photos of himself fabricating the Baby Ace, an amateur-built aircraft (and the first to be marketed as a "homebuilt") that he bought the rights to for US$200 a few years prior. 
The articles became extremely popular and gave the concept of homebuilding worldwide acclaim. He designed, modified, and built several home-built aircraft, and had more than 30,000 hours of flight time in his career. Aircraft that he designed and built include: Acro Sport I & II "Little Audrey" Poberezny P-5 Pober Sport Pober Jr Ace Pober Pixie Pober Super Ace Poberezny made the first test flight of the EAA Biplane example Parkside Eagle in 1971, which was constructed by students of Parkside High School in Michigan. His 1944 North American F-51D Mustang, dubbed Paul I, which he flew at air shows and air races from 1977–2003, is on display at the EAA Aviation Museum in Oshkosh. Personal life and death In 1996, Poberezny teamed with his daughter Bonnie, her husband Chuck Parnall, and Bill Blake to write Poberezny: The Story Begins, a recounting of the early years of Paul and Audrey, including the founding of EAA. Paul Poberezny died of cancer on August 22, 2013, in Oshkosh, Wisconsin, at age 91. His estate in Oshkosh is preserved by Aircraft Spruce & Specialty Co. and was opened to public tours beginning in the summer of 2017. Audrey Poberezny died on November 1, 2020, at age 95, and Tom Poberezny died on July 25, 2022, at age 75, severing the last direct link between EAA and the Poberezny family that founded it. Awards and legacy In 1971 Poberezny was the first recipient of the Duane and Judy Cole Award, presented to individuals that promote sport aviation. In 1978 he was named an honorary fellow of the Society of Experimental Test Pilots, in 1986 he was inducted into the Wisconsin Aviation Hall of Fame, and in 1987, the National Aeronautic Association (NAA) awarded him the Elder Statesman of Aviation. In 1997 he was inducted into the International Air & Space Hall of Fame and in 1999, the National Aviation Hall of Fame in Dayton, Ohio. He received the NBAA's 2001 Award for Meritorious Service to Aviation and the 2002 Wright Brothers Memorial Trophy. In 2008 the Wisconsin Historical Society named him as a "Wisconsin History Maker", recognizing his unique contributions to the state's history. Flying Magazine ranked Poberezny at number 4 on their 2013 list of the 51 Heroes of Aviation, putting him ahead of figures like Bob Hoover, Amelia Earhart, Jimmy Doolittle, and even Chuck Yeager. At the time of its release, just one month before his death, Poberezny was the highest-ranked living person on the list. Many prominent aviation figures have praised Poberezny's legacy as being crucial to the maturation of the general aviation industry and to aviation advocacy at large. Radio newscaster and pilot Paul Harvey said that the Poberezny family "militantly manned the ramparts against those who would fence off the sky", and airshow pilot Julie Clark noted Poberezny as inspiring her and "countless thousands of others to get involved in the promotion of aviation." 
The Klapmeier brothers, fellow Wisconsinites who founded Cirrus Aircraft in the mid-1980s with a homebuilt design, also credited Poberezny and the EAA as essential to their success: See also One Six Right (2005 documentary) Project Schoolflight Timothy Prince Burt Rutan Steve Wittman References External links EAA - The Spirit of Aviation official website Paul Poberezny official EAA biography Profile in the National Aviation Hall of Fame Biography in the Wisconsin Aviation Hall of Fame Biography at FirstFlight.org (verified 3/2006) Wright Award announcement (verified 3/2006) Poberezny obituary in The New York Times 1921 births 2013 deaths American aerospace engineers American aviation businesspeople American people of Ukrainian descent Aviators from Kansas Aviators from Wisconsin People from Leavenworth County, Kansas People from Oshkosh, Wisconsin National Aviation Hall of Fame inductees Deaths from cancer in Wisconsin Writers from Kansas Writers from Wisconsin American aviation pioneers Aircraft designers Military personnel from Wisconsin United States Army Air Forces pilots of World War II National Guard (United States) officers Experimental Aircraft Association People from Hales Corners, Wisconsin Engineers from Milwaukee
Paul Poberezny
Engineering
2,366
6,123
https://en.wikipedia.org/wiki/Curl%20%28mathematics%29
In vector calculus, the curl, also known as rotor, is a vector operator that describes the infinitesimal circulation of a vector field in three-dimensional Euclidean space. The curl at a point in the field is represented by a vector whose length and direction denote the magnitude and axis of the maximum circulation. The curl of a field is formally defined as the circulation density at each point of the field. A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The corresponding form of the fundamental theorem of calculus is Stokes' theorem, which relates the surface integral of the curl of a vector field to the line integral of the vector field around the boundary curve.

The notation \operatorname{curl} \mathbf{F} is more common in North America. In the rest of the world, particularly in 20th century scientific literature, the alternative notation \operatorname{rot} \mathbf{F} is traditionally used, which comes from the "rate of rotation" that it represents. To avoid confusion, modern authors tend to use the cross product notation with the del (nabla) operator, as in \nabla \times \mathbf{F}, which also reveals the relation between curl (rotor), divergence, and gradient operators.

Unlike the gradient and divergence, curl as formulated in vector calculus does not generalize simply to other dimensions; some generalizations are possible, but only in three dimensions is the geometrically defined curl of a vector field again a vector field. This deficiency is a direct consequence of the limitations of vector calculus; on the other hand, when expressed as an antisymmetric tensor field via the wedge operator of geometric calculus, the curl generalizes to all dimensions. The circumstance is similar to that attending the 3-dimensional cross product, and indeed the connection is reflected in the notation \nabla \times for the curl.

The name "curl" was first suggested by James Clerk Maxwell in 1871 but the concept was apparently first used in the construction of an optical field theory by James MacCullagh in 1839.

Definition

The curl of a vector field \mathbf{F}, denoted by \operatorname{curl} \mathbf{F}, or \nabla \times \mathbf{F}, or \operatorname{rot} \mathbf{F}, is an operator that maps C^k functions in \mathbb{R}^3 to C^{k-1} functions in \mathbb{R}^3, and in particular, it maps continuously differentiable functions \mathbb{R}^3 \to \mathbb{R}^3 to continuous functions \mathbb{R}^3 \to \mathbb{R}^3. It can be defined in several ways, to be mentioned below:

One way to define the curl of a vector field at a point is implicitly through its components along various axes passing through the point: if \hat{\mathbf{n}} is any unit vector, the component of the curl of \mathbf{F} along the direction \hat{\mathbf{n}} may be defined to be the limiting value of a closed line integral in a plane perpendicular to \hat{\mathbf{n}} divided by the area enclosed, as the path of integration is contracted indefinitely around the point. More specifically, the curl is defined at a point p as

(\nabla \times \mathbf{F})(p) \cdot \hat{\mathbf{n}} = \lim_{A \to 0} \frac{1}{|A|} \oint_{C} \mathbf{F} \cdot \mathrm{d}\mathbf{r},

where the line integral is calculated along the boundary C of the area A in question, |A| being the magnitude of the area. This equation defines the component of the curl of \mathbf{F} along the direction \hat{\mathbf{n}}. The infinitesimal surfaces bounded by C have \hat{\mathbf{n}} as their normal. C is oriented via the right-hand rule. The above formula means that the component of the curl of a vector field along a certain axis is the infinitesimal area density of the circulation of the field in a plane perpendicular to that axis. This formula does not a priori define a legitimate vector field, for the individual circulation densities with respect to various axes a priori need not relate to each other in the same way as the components of a vector do; that they do indeed relate to each other in this precise manner must be proven separately. 
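As a quick numerical sanity check of this limiting line-integral definition, the sketch below (Python with NumPy; the test field, evaluation point, and circle radius are arbitrary illustrative choices, not taken from the article) approximates the circulation of a field around a small circle perpendicular to a chosen unit vector and divides by the enclosed area.

import numpy as np

def circulation_density(F, p, n_hat, r=1e-3, samples=2000):
    """Approximate (curl F . n_hat)(p): circulation of F around a small circle of
    radius r centred at p in the plane perpendicular to n_hat, divided by the
    enclosed area pi*r^2 (the limiting definition given above)."""
    n_hat = np.asarray(n_hat, dtype=float)
    n_hat /= np.linalg.norm(n_hat)
    # Orthonormal basis (e1, e2) of the plane perpendicular to n_hat, chosen so that
    # e1 x e2 = n_hat, i.e. the circle is traversed according to the right-hand rule.
    a = np.array([1.0, 0.0, 0.0]) if abs(n_hat[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(n_hat, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(n_hat, e1)
    dt = 2.0 * np.pi / samples
    circulation = 0.0
    for k in range(samples):
        t = k * dt
        point = p + r * (np.cos(t) * e1 + np.sin(t) * e2)
        tangent = r * (-np.sin(t) * e1 + np.cos(t) * e2)   # d(position)/dt
        circulation += np.dot(F(point), tangent) * dt       # F . dr
    return circulation / (np.pi * r ** 2)

# F(x, y, z) = (-y, x, 0) rotates about the z-axis and has curl (0, 0, 2) everywhere.
F = lambda q: np.array([-q[1], q[0], 0.0])
print(circulation_density(F, p=np.array([0.3, -0.2, 1.0]), n_hat=[0.0, 0.0, 1.0]))  # ~2.0

Repeating this with n_hat along the x or y axis picks out the other components of the curl in exactly the same way.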
To this definition fits naturally the Kelvin–Stokes theorem, as a global formula corresponding to the definition. It equates the surface integral of the curl of a vector field to the above line integral taken around the boundary of the surface. Another way one can define the curl vector of a function at a point is explicitly as the limiting value of a vector-valued surface integral around a shell enclosing divided by the volume enclosed, as the shell is contracted indefinitely around . More specifically, the curl may be defined by the vector formula where the surface integral is calculated along the boundary of the volume , being the magnitude of the volume, and pointing outward from the surface perpendicularly at every point in . In this formula, the cross product in the integrand measures the tangential component of at each point on the surface , and points along the surface at right angles to the tangential projection of . Integrating this cross product over the whole surface results in a vector whose magnitude measures the overall circulation of around , and whose direction is at right angles to this circulation. The above formula says that the curl of a vector field at a point is the infinitesimal volume density of this "circulation vector" around the point. To this definition fits naturally another global formula (similar to the Kelvin-Stokes theorem) which equates the volume integral of the curl of a vector field to the above surface integral taken over the boundary of the volume. Whereas the above two definitions of the curl are coordinate free, there is another "easy to memorize" definition of the curl in curvilinear orthogonal coordinates, e.g. in Cartesian coordinates, spherical, cylindrical, or even elliptical or parabolic coordinates: The equation for each component can be obtained by exchanging each occurrence of a subscript 1, 2, 3 in cyclic permutation: 1 → 2, 2 → 3, and 3 → 1 (where the subscripts represent the relevant indices). If are the Cartesian coordinates and are the orthogonal coordinates, then is the length of the coordinate vector corresponding to . The remaining two components of curl result from cyclic permutation of indices: 3,1,2 → 1,2,3 → 2,3,1. Usage In practice, the two coordinate-free definitions described above are rarely used because in virtually all cases, the curl operator can be applied using some set of curvilinear coordinates, for which simpler representations have been derived. The notation has its origins in the similarities to the 3-dimensional cross product, and it is useful as a mnemonic in Cartesian coordinates if is taken as a vector differential operator del. Such notation involving operators is common in physics and algebra. Expanded in 3-dimensional Cartesian coordinates (see Del in cylindrical and spherical coordinates for spherical and cylindrical coordinate representations), is, for composed of (where the subscripts indicate the components of the vector, not partial derivatives): where , , and are the unit vectors for the -, -, and -axes, respectively. This expands as follows: Although expressed in terms of coordinates, the result is invariant under proper rotations of the coordinate axes but the result inverts under reflection. In a general coordinate system, the curl is given by where denotes the Levi-Civita tensor, the covariant derivative, is the determinant of the metric tensor and the Einstein summation convention implies that repeated indices are summed over. 
Due to the symmetry of the Christoffel symbols participating in the covariant derivative, this expression reduces to the partial derivative: where are the local basis vectors. Equivalently, using the exterior derivative, the curl can be expressed as: Here and are the musical isomorphisms, and is the Hodge star operator. This formula shows how to calculate the curl of in any coordinate system, and how to extend the curl to any oriented three-dimensional Riemannian manifold. Since this depends on a choice of orientation, curl is a chiral operation. In other words, if the orientation is reversed, then the direction of the curl is also reversed. Examples Example 1 Suppose the vector field describes the velocity field of a fluid flow (such as a large tank of liquid or gas) and a small ball is located within the fluid or gas (the center of the ball being fixed at a certain point). If the ball has a rough surface, the fluid flowing past it will make it rotate. The rotation axis (oriented according to the right hand rule) points in the direction of the curl of the field at the center of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point. The curl of the vector field at any point is given by the rotation of an infinitesimal area in the xy-plane (for z-axis component of the curl), zx-plane (for y-axis component of the curl) and yz-plane (for x-axis component of the curl vector). This can be seen in the examples below. Example 2 The vector field can be decomposed as Upon visual inspection, the field can be described as "rotating". If the vectors of the field were to represent a linear force acting on objects present at that point, and an object were to be placed inside the field, the object would start to rotate clockwise around itself. This is true regardless of where the object is placed. Calculating the curl: The resulting vector field describing the curl would at all points be pointing in the negative direction. The results of this equation align with what could have been predicted using the right-hand rule using a right-handed coordinate system. Being a uniform vector field, the object described before would have the same rotational intensity regardless of where it was placed. Example 3 For the vector field the curl is not as obvious from the graph. However, taking the object in the previous example, and placing it anywhere on the line , the force exerted on the right side would be slightly greater than the force exerted on the left, causing it to rotate clockwise. Using the right-hand rule, it can be predicted that the resulting curl would be straight in the negative direction. Inversely, if placed on , the object would rotate counterclockwise and the right-hand rule would result in a positive direction. Calculating the curl: The curl points in the negative direction when is positive and vice versa. In this field, the intensity of rotation would be greater as the object moves away from the plane . Further examples In a vector field describing the linear velocities of each part of a rotating disk in uniform circular motion, the curl has the same value at all points, and this value turns out to be exactly two times the vectorial angular velocity of the disk (oriented as usual by the right-hand rule). More generally, for any flowing mass, the linear velocity vector field at each point of the mass flow has a curl (the vorticity of the flow at that point) equal to exactly two times the local vectorial angular velocity of the mass about the point. 
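The rotational behaviour described in Examples 2 and 3 and in the rotating-disk example is easy to check symbolically. The exact field expressions are elided above, so the sketch below (Python with SymPy) uses standard textbook choices consistent with the described behaviour, a uniformly rotating field and a shear field whose rotation grows with distance from the plane x = 0; treat them as illustrative assumptions rather than the article's own formulas.

import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(F):
    """Curl of F = (F1, F2, F3) in Cartesian coordinates."""
    F1, F2, F3 = F
    return (sp.diff(F3, y) - sp.diff(F2, z),
            sp.diff(F1, z) - sp.diff(F3, x),
            sp.diff(F2, x) - sp.diff(F1, y))

# Example 2 (assumed field): clockwise rotation about the z-axis, F = (y, -x, 0).
print(curl((y, -x, sp.Integer(0))))                  # (0, 0, -2): uniform, pointing in -z

# Example 3 (assumed field): shear flow F = (0, -x**2, 0); the curl is -2x in the z direction,
# so it points in -z for x > 0, in +z for x < 0, and strengthens away from the plane x = 0.
print(curl((sp.Integer(0), -x**2, sp.Integer(0))))   # (0, 0, -2*x)

# Rotating disk: linear velocity for angular velocity (0, 0, w0) has curl equal to twice
# the angular velocity, matching the statement above.
w0 = sp.symbols('w0')
print(curl((-w0 * y, w0 * x, sp.Integer(0))))        # (0, 0, 2*w0)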
For any solid object subject to an external physical force (such as gravity or the electromagnetic force), one may consider the vector field representing the infinitesimal force-per-unit-volume contributions acting at each of the points of the object. This force field may create a net torque on the object about its center of mass, and this torque turns out to be directly proportional and vectorially parallel to the (vector-valued) integral of the curl of the force field over the whole volume. Of the four Maxwell's equations, two—Faraday's law and Ampère's law—can be compactly expressed using curl. Faraday's law states that the curl of an electric field is equal to the opposite of the time rate of change of the magnetic field, while Ampère's law relates the curl of the magnetic field to the current and the time rate of change of the electric field. Identities In general curvilinear coordinates (not only in Cartesian coordinates), the curl of a cross product of vector fields and can be shown to be Interchanging the vector field and operator, we arrive at the cross product of a vector field with curl of a vector field: where is the Feynman subscript notation, which considers only the variation due to the vector field (i.e., in this case, is treated as being constant in space). Another example is the curl of a curl of a vector field. It can be shown that in general coordinates and this identity defines the vector Laplacian of , symbolized as . The curl of the gradient of any scalar field is always the zero vector field which follows from the antisymmetry in the definition of the curl, and the symmetry of second derivatives. The divergence of the curl of any vector field is equal to zero: If is a scalar valued function and is a vector field, then Generalizations The vector calculus operations of grad, curl, and div are most easily generalized in the context of differential forms, which involves a number of steps. In short, they correspond to the derivatives of 0-forms, 1-forms, and 2-forms, respectively. The geometric interpretation of curl as rotation corresponds to identifying bivectors (2-vectors) in 3 dimensions with the special orthogonal Lie algebra of infinitesimal rotations (in coordinates, skew-symmetric 3 × 3 matrices), while representing rotations by vectors corresponds to identifying 1-vectors (equivalently, 2-vectors) and these all being 3-dimensional spaces. Differential forms In 3 dimensions, a differential 0-form is a real-valued function ; a differential 1-form is the following expression, where the coefficients are functions: a differential 2-form is the formal sum, again with function coefficients: and a differential 3-form is defined by a single term with one function as coefficient: (Here the -coefficients are real functions of three variables; the "wedge products", e.g. , can be interpreted as some kind of oriented area elements, , etc.) The exterior derivative of a -form in is defined as the -form from above—and in if, e.g., then the exterior derivative leads to The exterior derivative of a 1-form is therefore a 2-form, and that of a 2-form is a 3-form. On the other hand, because of the interchangeability of mixed derivatives, and antisymmetry, the twofold application of the exterior derivative yields (the zero -form). Thus, denoting the space of -forms by and the exterior derivative by one gets a sequence: Here is the space of sections of the exterior algebra vector bundle over Rn, whose dimension is the binomial coefficient ; note that for or . 
Writing only dimensions, one obtains a row of Pascal's triangle: the 1-dimensional fibers correspond to scalar fields, and the 3-dimensional fibers to vector fields, as described below. Modulo suitable identifications, the three nontrivial occurrences of the exterior derivative correspond to grad, curl, and div. Differential forms and the differential can be defined on any Euclidean space, or indeed any manifold, without any notion of a Riemannian metric. On a Riemannian manifold, or more generally pseudo-Riemannian manifold, -forms can be identified with -vector fields (-forms are -covector fields, and a pseudo-Riemannian metric gives an isomorphism between vectors and covectors), and on an oriented vector space with a nondegenerate form (an isomorphism between vectors and covectors), there is an isomorphism between -vectors and -vectors; in particular on (the tangent space of) an oriented pseudo-Riemannian manifold. Thus on an oriented pseudo-Riemannian manifold, one can interchange -forms, -vector fields, -forms, and -vector fields; this is known as Hodge duality. Concretely, on this is given by: 1-forms and 1-vector fields: the 1-form corresponds to the vector field . 1-forms and 2-forms: one replaces by the dual quantity (i.e., omit ), and likewise, taking care of orientation: corresponds to , and corresponds to . Thus the form corresponds to the "dual form" . Thus, identifying 0-forms and 3-forms with scalar fields, and 1-forms and 2-forms with vector fields: grad takes a scalar field (0-form) to a vector field (1-form); curl takes a vector field (1-form) to a pseudovector field (2-form); div takes a pseudovector field (2-form) to a pseudoscalar field (3-form) On the other hand, the fact that corresponds to the identities for any scalar field , and for any vector field . Grad and div generalize to all oriented pseudo-Riemannian manifolds, with the same geometric interpretation, because the spaces of 0-forms and -forms at each point are always 1-dimensional and can be identified with scalar fields, while the spaces of 1-forms and -forms are always fiberwise -dimensional and can be identified with vector fields. Curl does not generalize in this way to 4 or more dimensions (or down to 2 or fewer dimensions); in 4 dimensions the dimensions are so the curl of a 1-vector field (fiberwise 4-dimensional) is a 2-vector field, which at each point belongs to 6-dimensional vector space, and so one has which yields a sum of six independent terms, and cannot be identified with a 1-vector field. Nor can one meaningfully go from a 1-vector field to a 2-vector field to a 3-vector field (4 → 6 → 4), as taking the differential twice yields zero (). Thus there is no curl function from vector fields to vector fields in other dimensions arising in this way. However, one can define a curl of a vector field as a 2-vector field in general, as described below. Curl geometrically 2-vectors correspond to the exterior power ; in the presence of an inner product, in coordinates these are the skew-symmetric matrices, which are geometrically considered as the special orthogonal Lie algebra of infinitesimal rotations. This has dimensions, and allows one to interpret the differential of a 1-vector field as its infinitesimal rotations. Only in 3 dimensions (or trivially in 0 dimensions) we have , which is the most elegant and common case. 
In 2 dimensions the curl of a vector field is not a vector field but a function, as 2-dimensional rotations are given by an angle (a scalar – an orientation is required to choose whether one counts clockwise or counterclockwise rotations as positive); this is not the div, but is rather perpendicular to it. In 3 dimensions the curl of a vector field is a vector field as is familiar (in 1 and 0 dimensions the curl of a vector field is 0, because there are no non-trivial 2-vectors), while in 4 dimensions the curl of a vector field is, geometrically, at each point an element of the 6-dimensional Lie algebra The curl of a 3-dimensional vector field which only depends on 2 coordinates (say and ) is simply a vertical vector field (in the direction) whose magnitude is the curl of the 2-dimensional vector field, as in the examples on this page. Considering curl as a 2-vector field (an antisymmetric 2-tensor) has been used to generalize vector calculus and associated physics to higher dimensions. Inverse In the case where the divergence of a vector field is zero, a vector field exists such that . This is why the magnetic field, characterized by zero divergence, can be expressed as the curl of a magnetic vector potential. If is a vector field with , then adding any gradient vector field to will result in another vector field such that as well. This can be summarized by saying that the inverse curl of a three-dimensional vector field can be obtained up to an unknown irrotational field with the Biot–Savart law. See also Helmholtz decomposition Hiptmair–Xu preconditioner Del in cylindrical and spherical coordinates Vorticity References Further reading External links Differential operators Linear operators in calculus Vector calculus Analytic geometry
Curl (mathematics)
Mathematics
4,049
4,313,210
https://en.wikipedia.org/wiki/List%20of%20stoae
Stoas, in the context of ancient Greek architecture, are covered walkways or porticos, commonly for public usage. The following is a list of Greek and Hellenistic stoas sorted alphabetically by the stoa's city or location, with the name appearing in bold text, followed by a short description and/or location of the stoa: A Alexandria Doric Stoa: monumental Doric stoa built at a right angle to the ancient main street along the ancient street R4, dated to the Ptolemaic period Assos North Stoa (Lower Story): Two-storied Doric on the north side of the agora. South Stoa: Two-aisled on the south side of the agora. Athens Doric Stoa: near Theater of Dionysos in the Sanctuary of Dionysos Eleuthereus on the south slope of the Acropolis, sharing its north wall with the back wall of the stage building of the Theater of Dionysos. East stoa a small stoa in the south-east quadrant of the Agora. Middle Stoa: approximately in the middle of the Agora and dividing it into north and south areas. South Stoa I (of Athens): on the south side of the Agora, located between the Heliaia and the Enneakrounos. South Stoa II: on the southern edge of the Agora, on the approximate location of the South Stoa I, between the Heliaia, and the Middle Stoa. Southeast Stoa: near Library of Pantainos and the Eleusinion. Stoa Amphiaraion: on the east side of the Sanctuary of Amphiaraios, southeast of the Theater. Stoa Basileios (Royal Stoa): in the northeast corner of the Agora. Stoa of Hermes located to the north of the Agora. Stoa of Artemis Brauronia: Stoa with wings; the south boundary of the Sanctuary of Artemis Brauronia, on the Acropolis, southeast of the Propylaia, west of the Chalkotheke. Stoa of Attalos: Two-storied on the eastern side of the Agora. Stoa of Zeus (Eleutherios): Two-aisled in the northwest corner of the Agora. Stoa Poikile (Painted Porch): on the north side of the Ancient Agora of Athens, the stoa from which Stoicism takes its name Street Stoa: between Library of Pantainos and Roman Agora. D Delos South Stoa I: south of the Sanctuary of Apollo and west of the Oblique Stoa and the L-shaped Stoa of the Agora of the Delians. Stoa of Philip: Two-part south of the Sanctuary of Apollo, between the South Stoa and the harbour. Oblique Stoa: south of the Sanctuary of Apollo, south of the L-shaped Stoa of the Agora of the Delians. Stoa of Antigonos: Two-aisled the north boundary of the Sanctuary of Apollo. L-shaped Stoa of the Agora of the Delians: Stoa creating north and east sides of a court, south of the Sanctuary of Apollo. Stoa of the Naxians: L-shaped forming the southwest corner of the Sanctuary of Apollo. L-Shaped Stoa: L-shaped bounded the Sanctuary of Artemis (Artemision) on the eastern side of the Sanctuary of Apollo. Delphi Stoa of the Athenians: in the Sanctuary of Apollo, south of the Apollo Temple platform, with the southern, polygonal wall of the platform forming the north wall of the stoa. Stoa SD 108: At the south-East entrance of the Sanctuary of Apollo, on the north side of the so-called "Sacred Way". Offering from the Arcadians. West Stoa: projecting from the west wall of the Sanctuary of Apollo, southwest of the Theater. Stoa of Attalos I: in the Sanctuary of Apollo, east of theater and northeast of the Temple of Apollo, intersecting and projecting east from the peribolos wall. B Brauron Stoa at Artemision: Three-sided surrounding the northern end of the Sanctuary of Artemis. 
E Eleusis Stoa of the Great Forecourt: L-shaped stoa with rooms; northeast of the Greater Propylon, outside the Sanctuary of Demeter and Kore, bounding east and west sides of a court. Epidauros Stoa of Apollo Maleatas: on the north side of the Sanctuary of Apollo Maleatas. L Lindos Acropolis of Lindos Stoa: Hellenistic stoa. O Olympia South Stoa: T-shaped the southern boundary of the Sanctuary of Zeus (Altis). Echo Hall (Painted Stoa): on the east side of the Sanctuary of Zeus (Altis), forming an eastern boundary to the central sanctuary. P Philippi Stoa of Philippi : Roman-era stoa. Priene Sacred Stoa: Two-aisled stoa located in the north of the agora in the center of the city. Stoa of Athena Sanctuary: One-aisled stoa facing south, forming southern extremity of Sanctuary of Athena Polias. S Samos South Stoa: South Stoa, SW of the main altar in Sanctuary of Hera, Samos. North West Stoa: North West Stoa, NW of main altar and W of N gate at Sanctuary of Hera. Sounion West Hall: along western wall of the Sanctuary of Poseidon, at a right angle and adjacent to the North Hall. Sounion, North Hall: along the northern wall of the Sanctuary of Poseidon at the western end. T Thermon Middle Stoa: in the Sanctuary of Apollo Thermios, running north–south between the Temple of Apollo and the South Stoa. South Stoa: on the south side of the Sanctuary of Apollo Thermios, parallel to the southern sanctuary wall. East Stoa: at the southeast corner of the Sanctuary of Apollo Thermios. References Architecture lists Lists of buildings and structures in Greece Stoae
List of stoae
Engineering
1,268
37,121,257
https://en.wikipedia.org/wiki/CryptoParty
CryptoParty (Crypto-Party) is a grassroots global endeavour to introduce the basics of practical cryptography such as the Tor anonymity network, I2P, Freenet, key signing parties, disk encryption and virtual private networks to the general public. The project primarily consists of a series of free public workshops. History As a successor to the Cypherpunks of the 1990s, CryptoParty was conceived in late August 2012 by the Australian journalist Asher Wolf in a Twitter post, following the passing of the Cybercrime Legislation Amendment Bill 2011 and the proposal of a two-year data retention law in that country. The DIY, self-organizing movement immediately went viral, with a dozen autonomous CryptoParties being organized within hours in cities throughout Australia, the US, the UK, and Germany. Many more parties were soon organized or held in Chile, The Netherlands, Hawaii, Asia, etc. Tor usage in Australia itself spiked, and CryptoParty London with 130 attendees—some of whom were veterans of the Occupy London movement—had to be moved from London Hackspace to the Google campus in east London's Tech City. As of mid-October 2012, some 30 CryptoParties had been held globally, some on a continuing basis, and CryptoParties were held on the same day in Reykjavik, Brussels, and Manila. The first draft of the 442-page CryptoParty Handbook (the hard copy of which is available at cost) was pulled together in three days using the book sprint approach, and was released 2012-10-04 under a CC BY-SA license. Edward Snowden involvement In May 2014, Wired reported that Edward Snowden, while employed by Dell as an NSA contractor, organized a local CryptoParty at a small hackerspace in Honolulu, Hawaii on December 11, six months before becoming well known for leaking tens of thousands of secret U.S. government documents. During the CryptoParty, Snowden taught 20 Hawaii residents how to encrypt their hard drives and use the Internet anonymously. The event was filmed by Snowden's then-girlfriend, but the video has never been released online. In a follow-up post to the CryptoParty wiki, Snowden pronounced the event a "huge success." Media response CryptoParty has received early messages of support from the Electronic Frontier Foundation and (purportedly) AnonyOps, as well as the NSA whistleblower Thomas Drake, WikiLeaks central editor Heather Marsh, and Wired reporter Quinn Norton. Eric Hughes, the author of A Cypherpunk's Manifesto nearly two decades before, delivered the keynote address, Putting the Personal Back in Personal Computers, at the Amsterdam CryptoParty on 2012-09-27. Marcin de Kaminski, founding member of Piratbyrån which in turn founded The Pirate Bay, regards CryptoParty as the most important civic project in cryptography today, and Cory Doctorow has characterized a CryptoParty as being "like a Tupperware party for learning crypto." A December 2014 article about the NSA mentioned "crypto parties" in the wake of the Edward Snowden leaks. See also Cyber self-defense References External links CryptoParty Wiki An Australian crypto primer preso Beginning of CryptoParty London's slideshow Eric Hughes's keynote address at the Amsterdam CryptoParty Asher Wolf on privacy concerns and the origin and spread of CryptoParty Anarchism Cryptography Crypto-anarchism Cypherpunks Internet privacy 21st-century social movements Internet activism Mass surveillance
CryptoParty
Mathematics,Engineering
756
1,999,738
https://en.wikipedia.org/wiki/SIGCSE
SIGCSE is the Association for Computing Machinery's (ACM) Special Interest Group (SIG) on Computer Science Education (CSE), which provides a forum for educators to discuss issues related to the development, implementation, and/or evaluation of computing programs, curricula, and courses, as well as syllabi, laboratories, and other elements of teaching and pedagogy. SIGCSE is also the colloquial name for the SIGCSE Technical Symposium on Computer Science Education, which is the largest of the four conferences organized by SIGCSE. The main focus of SIGCSE is higher education, and discussions include improving computer science education at high school level and below. The membership level has held steady at around 3,300 members for several years. The chair of SIGCSE is Alison Clear, serving from July 1, 2022 to June 30, 2025. Conferences SIGCSE has four large annual conferences: The SIGCSE Technical Symposium on Computer Science Education is held in North America with an average annual attendance of approximately 1600 in recent years. The most recent conference was held March 15 through March 19, 2023 in Toronto, Canada. The annual conference on Innovation and Technology in Computer Science Education (ITiCSE). The next ITiCSE will be held July 8 – July 10, hosted by the Università degli Studi di Milano in Milan, Italy. This conference is attended by about 200–300 people and is mainly held in Europe, but has also been held in countries outside of Europe (Israel in 2012, and Peru in 2016). The International Computing Education Research (ICER) conference. This conference has about 70 attendees and is held in the United States every other year. On the alternate years it rotates between Europe and Australasia. The 2023 conference was held in Chicago, Illinois. The 2024 conference will be held in Melbourne, Australia. The ACM Global Computing Education (CompEd) conference. This conference will be held at locations outside of the typical North American and European locations. The first conference was held in Chengdu, China between the 17th and 19th of May 2019. The second was held in Hyderabad, India, from December 7 to December 9, 2023. The third is planned to be held in Botswana in 2025. It is planned for CompEd to be held every other year. Newsletter/Bulletin The SIGCSE Bulletin is a newsletter published once a quarter, started in 1969. Today, it is published electronically. Awards SIGCSE has two main awards that are given out annually. The Outstanding Contribution to Computer Science Education award has been given annually since 1981. The SIGCSE Lifetime Service to Computer Science Education has been awarded annually since 1997. SIGCSE Board The current SIGCSE Board for July 1, 2022 – June 30, 2025 is: Alison Clear, Chair Brett A. Becker, Vice-Chair Jill Denner, Treasurer Dan Garcia, Secretary Rodrigo Duran, at-large member Yolanda A. Rankin, at-large member Judy Sheard, at-large member Adrienne Decker, past chair SIGCSE Chairs over the years: Adrienne Decker, 2019-2022 Amber Settle, 2016–19 Susan H. Rodger, 2013–16 Renee McCauley, 2010-2013 Barbara Boucher Owens, 2007–10 Henry Walker, 2001-2007 Bruce Klein, 1997-01 , 1993–97 Nell B. Dale, 1991–93 References Association for Computing Machinery Special Interest Groups Computer science education
SIGCSE
Technology
719
42,297,424
https://en.wikipedia.org/wiki/Dual%20therapy%20stent
A dual therapy stent is a coronary artery stent that combines the technology of an antibody-coated stent and a drug-eluting stent. Currently, second-generation drug-eluting stents require long-term use of dual-antiplatelet therapy, which increases the risk of major bleeding occurrences in patients. Compared to drug-eluting stents, dual therapy stents have improved vessel regeneration and cell proliferation capabilities. As a result, dual therapy stents were developed to reduce the long-term need for dual-antiplatelet therapy. The COMBO stent is the first and only dual therapy stent that addresses the challenges of vessel healing in drug-eluting stents. This stent is an anti-CD34 antibody-coated stent with a bioresorbable, sirolimus-eluting coating. The COMBO stent combines the Genous stent's endothelial cell capture technology with an antiproliferative, biodegradable sirolimus drug elution. The COMBO stent has received CE Mark approval. History and Problems of Coronary Artery Stents The field of interventional cardiology began in the 20th century with the development of plain old balloon angioplasty. However, this procedure carried risks of promoting platelet aggregation, tearing, arterial recoil, and restenosis. Thus, coronary artery stents were created to prevent restenosis after balloon dilation. There are three types of stents: bare-metal stents (BMS), drug-eluting stents (DES), and bioresorbable vascular scaffolds (BRS). The first stents created were bare-metal stents, which were made from stainless steel and had poor flexibility. Despite their reduced rates of restenosis compared to plain old balloon angioplasty, they still had high rates of stent thrombosis and required a high dosage of blood thinners. This led to the development of drug-eluting stents to act as a local drug delivery and vascular scaffold platform to reduce in-stent restenosis. Antiproliferative drugs like sirolimus and paclitaxel were used in the first-generation drug-eluting stents to inhibit the migration of vascular smooth muscle cells and restenosis. The first drug-eluting stent implantation occurred in 1999, which revolutionized the course of interventional cardiology. However, despite the drug-eluting stent's superiority over the bare-metal stent, drug-eluting stent implantation raised concerns over platelet aggregation and significant localized blood clotting. As a result, improvements in the stent material, strut thickness, polymer, and drug choice led to the development of second-generation drug-eluting stents that showed overall clinical improvements over their predecessors. The new stents used more biocompatible molecules like zotarolimus and everolimus with quicker drug elution. However, despite these improvements, concerns persisted about the risk of stent thrombosis. Risks in Drug-Eluting Stents The development of dual therapy stents resulted from the health risks of long-term use of dual antiplatelet therapy with drug-eluting stents. Drug-eluting stents inhibit the growth of endothelial cells and vascular smooth muscle cells, which are essential for in-stent endothelialization. This inhibition of vital vascular cells creates a risk of stent thrombosis, and thus patients with drug-eluting stents are required to use dual antiplatelet therapy for approximately 12 months. Although research on long-term use of dual antiplatelet therapy shows a reduced risk of cardiovascular death, it also shows increased occurrences of major bleeding events, which poses challenges for patients with bleeding disorders.
Clinical Applications of Dual Therapy Stents The COMBO dual therapy stent is the first and only dual therapy stent presently developed. The COMBO dual therapy stent combines the anti-CD34 antibody coating of the Genous Stent with antiproliferative sirolimus elution. The sirolimus drug reduces the risk of stent restenosis by inhibiting the formation of neointima, while the anti-CD34 antibody coating counteracts the inhibition of local endothelial cells caused by the sirolimus elution. Genous Stent The predecessor of the biotechnology used to create a dual therapy stent was the development of the Genous stent. The Genous stent is a coronary artery stent coated with anti-CD34 monoclonal antibodies that bind circulating endothelial progenitor cells to the stent. The coated stent promotes the formation of an endothelial layer, which protects against thrombosis and reduces restenosis. Furthermore, the Genous stent promotes the coronary vascular repair response and reduces neointimal hyperplasia after stent implantation. Although the Genous stent promotes rapid vessel healing, it did not decrease the rate of target lesion failure compared to drug-eluting stents, which increases the risk of restenosis and stent failure. COMBO Dual Therapy Stent The COMBO stent is a pro-healing stent with sirolimus drug elution and anti-CD34 monoclonal antibodies that achieves an enhanced degree of endothelialization. The stent has an abluminal (vessel-wall-facing) bioabsorbable coating that continuously releases sirolimus and a luminal anti-CD34 antibody cell capture coating. The COMBO stent's enhanced endothelialization is due to the sirolimus drug, which reduces the risk of stent restenosis, and the Genous stent's anti-CD34 antibody cell capture technology. The COMBO stent reduces not only the rate of stent restenosis but also the need for dual antiplatelet therapy, which enables high-risk patient groups, such as patients on long-term anticoagulation regimens or patients with bleeding disorders, to use this type of stent. See also Coronary Artery Stent Drug-Eluting Stent Antiplatelet drug Endothelial Progenitor Cells Coronary Artery Disease Interventional Cardiology References External links The COMBO Dual Therapy Stent Interventional cardiology Drug delivery devices Biological engineering
Dual therapy stent
Chemistry,Engineering,Biology
1,292
222,937
https://en.wikipedia.org/wiki/List%20of%20diseases%20of%20the%20honey%20bee
Diseases of the honey bee or abnormal hive conditions include: Pests and parasites Varroa mites Varroa destructor and V. jacobsoni are parasitic mites that feed on the fat bodies of adult, pupal and larval bees. When the hive is very heavily infested, Varroa mites can be seen with the naked eye as a small red or brown spot on the bee's thorax. Varroa mites are carriers for many viruses that are damaging to bees. For example, bees infected during their development will often have visibly deformed wings. Varroa mites have led to the virtual elimination of feral bee colonies in many areas, and are a major problem for kept bees in apiaries. Some feral populations are now recovering—it appears they have been naturally selected for Varroa resistance. Varroa mites were first discovered in Southeast Asia in about 1904, but are now present on all continents, following their introduction to Australia in 2022. They were discovered in the United States in 1987, in the United Kingdom in 1992, and in New Zealand in 2000. To the untrained eye, these mites are generally not a very noticeable problem for a strongly growing hive, as the bees may appear strong in number and may even be very effective at foraging. However, the mite reproduction cycle occurs inside the capped pupae, and the mite population can surge as a result of colony growth. Careful observation of a colony can help identify signs of disease often spread by mites. When the hive population growth is reduced in preparation for winter or due to poor late summer forage, the mite population growth can overtake that of the bees and can then destroy the hive. It has been observed that diseased colonies may slowly die off and be unable to survive through winter even when adequate food stores are present. Often a colony will simply abscond (leave as in a swarm, but leaving no population behind) under such conditions. Varroa in combination with viral vectors and bacteria has been theoretically implicated in colony collapse disorder. It is known that thymol, a compound produced by thyme and naturally occurring in thyme honey, is a treatment for Varroa, though it may cause bee mortality at high concentrations. Provisioning active colonies with crops of thyme may provide the colony with a non-interventional chemical defense against Varroa. Treatment A variety of chemical and mechanical treatments are used to attempt to control Varroa mites. "Hard" chemicals "Hard" chemical treatments include amitraz (marketed as "Apivar"), fluvalinate (marketed as "Apistan"), coumaphos (marketed as CheckMite) and flumethrin (marketed as "Bayvarol" and "Polyvar Yellow"). "Soft" chemicals "Soft" chemical treatments include thymol (marketed as "ApiLife-VAR" and "Apiguard"), sucrose octanoate esters (marketed as "Sucrocide"), oxalic acid (marketed as "Api-bioxal") and formic acid (sold in liquid form or in gel strips as Mite Away Quick Strips and Formic Pro, but also used in other formulations). According to the U.S. Environmental Protection Agency, when used in beehives as directed, chemical treatments kill a large proportion of the mites while not substantially disrupting bee behavior or life span. Use of chemical controls is generally regulated and varies from country to country. With few exceptions, they are not intended for use during production of marketable honey. "Mechanical" treatments Common mechanical controls generally rely on disruption of some aspect of the mites' lifecycle. 
These controls are generally intended not to eliminate all mites, but merely to maintain the infestation at a level which the colony can tolerate. Examples of mechanical controls include drone brood sacrifice (Varroa mites are preferentially attracted to the drone brood), powdered sugar dusting (which encourages cleaning behavior and dislodges some mites), screened bottom boards (so any dislodged mites fall through the bottom and away from the colony), brood interruption and, perhaps, downsizing of the brood cell size. Acarine (tracheal) mites Acarapis woodi is a parasitic mite that infests the tracheae that lead from the first pair of thoracic spiracles. An unidentified bee illness was first reported on the Isle of Wight in England in 1904, becoming known as the 'Isle of Wight disease' (IoWD), which was initially thought to be caused by Acarapis woodi when it was identified in 1921 by Rennie. The IoWD quickly spread to the rest of Great Britain and Ireland, dealing a devastating blow to British and Irish beekeeping, being claimed as having wiped out the indigenous bee population of the British Isles. In 1991 Bailey and Ball stated "The final opinion of Rennie (1923), a co-discoverer of Acarapis woodi, who had much experience with bees said to have the Isle of Wight Disease, was that under the original and now quite properly discarded designation 'Isle of Wight Disease' were included several maladies having analogous superficial symptoms". The authors came to the firm conclusion that the IoWD was not caused by acarine (Acarapis woodi) mites solely, but primarily by chronic bee paralysis virus (CBPV), even though Acarapis woodi was always found to be present within the hive whenever CBPV symptoms were observed. Brother Adam at Buckfast Abbey developed a resistant bee breed known as the Buckfast bee, which is now available worldwide. Diagnosis for tracheal mites generally involves the dissection and microscopic examination of a sample of bees from the hive. Acarapis woodi is believed to have entered the U.S. in 1984, from Mexico. Mature female acarine mites leave the bee's airway and climb out on a hair of the bee, where they wait until they can transfer to a young bee. Once on the new bee, they move into the airways and begin laying eggs. Treatment Acarine mites are commonly controlled with grease patties (typically made from one part vegetable shortening mixed with three to four parts powdered sugar) placed on the top bars of the hive. The bees come to eat the sugar and pick up traces of shortening, which disrupts the mite's ability to identify a young bee. Some of the mites waiting to transfer to a new host remain on the original host. Others transfer to a random bee—a proportion of which will die of other causes before the mite can reproduce. Menthol, either allowed to vaporize from crystal form or mixed into the grease patties, is also often used to treat acarine mites. Nosema disease Nosema apis is a microsporidian that invades the intestinal tracts of adult bees and causes Nosema disease, also known as nosemosis. Nosema infection is also associated with black queen cell virus. It spreads via fecal-oral transmission among bees. It is normally only a problem when the bees cannot leave the hive to eliminate waste (for example, during an extended cold spell in winter or when the hives are enclosed in a wintering barn). When the bees are unable to void (cleansing flights), they can develop dysentery. Nosema disease is treated by increasing the ventilation through the hive. 
Some beekeepers treat hives with agents such as fumagillin. Nosemosis can also be prevented or minimized by removing much of the honey from the beehive, then feeding the bees on sugar water in the late fall. Sugar water made from refined sugar has lower ash content than flower nectar, reducing the risk of dysentery. Refined sugar, however, contains fewer nutrients than natural honey, which causes some controversy among beekeepers. In 1996, a similar type of organism to N. apis was discovered on the Asian honey bee Apis cerana and subsequently named N. ceranae. This parasite apparently also infects the western honey bee. Exposure to corn pollen containing genes for Bacillus thuringiensis (Bt) production may weaken the bees' defense against Nosema. In relation to feeding a group of bees with Bt corn pollen and a control group with non-Bt corn pollen: "in the first year, the bee colonies happened to be infested with parasites (microsporidia). This infestation led to a reduction in the number of bees and subsequently to reduced broods in the Bt-fed colonies, as well as in the colonies fed on Bt toxin-free pollen. The trial was then discontinued at an early stage. This effect was significantly more marked in the Bt-fed colonies. (The significant differences indicate an interaction of toxin and pathogen on the epithelial cells of the honeybee intestine. The underlying mechanism which causes this effect is unknown.)" This study should be interpreted with caution given that no repetition of the experiment nor any attempt to find confounding factors was made. In addition, Bt toxin and transgenic Bt pollen showed no acute toxicity to any of the life stages of the bees examined, even when the Bt toxin was fed at concentrations 100 times that found in transgenic Bt pollen from maize. Nosema disease is very common when bees get into winter clusters, as they spend an extensive time in their hives as they keep together for warmth and have little to no opportunities to eliminate waste. Small hive beetle Aethina tumida is a small, dark-colored beetle that lives in beehives. Originally from Africa, the first discovery of small hive beetles in the Western Hemisphere was made in St. Lucie County, Florida, in 1998. The next year, a specimen that had been collected from Charleston, South Carolina, in 1996 was identified, and is believed to be the index case for the United States. By December 1999, small hive beetles were reported in Iowa, Maine, Massachusetts, Minnesota, New Jersey, Ohio, Pennsylvania, Texas, and Wisconsin, and it was found in California by 2006. The lifecycle of this beetle includes pupation in the ground outside of the hive. Controls to prevent ants from climbing into the hive are believed to also be effective against the hive beetle. Several beekeepers are experimenting with the use of diatomaceous earth around the hive as a way to disrupt the beetle's lifecycle. The diatoms abrade the insects' surfaces, causing them to dehydrate and die. Treatment Several pesticides are currently used against the small hive beetle. The chemical fipronil (marketed as Combat Roach Gel) is commonly applied inside the corrugations of a piece of cardboard. Standard corrugations are large enough that a small hive beetle can enter the cardboard through the end, but small enough that honey bees cannot enter (thus are kept away from the pesticide). Alternative controls such as oil-based top-bar traps are also available, but they have had very little commercial success. 
Wax moths Galleria mellonella (greater wax moths) do not attack the bees directly, but feed on the shed exoskeletons of bee larvae and pollen that is found in dark brood comb, which was used by the bees to hold the developing bees. Their full development to adults requires access to used brood comb or brood cell cleanings—these contain protein essential for the larval development, in the form of brood cocoons. The destruction of the comb will spill or contaminate stored honey and may kill bee larvae. When honey supers are stored for the winter in a mild climate, or in heated storage, the wax moth larvae can destroy portions of the comb, though they will not fully develop. Damaged comb may be scraped out and replaced by the bees. Wax moth larvae and eggs are killed by freezing, so storage in unheated sheds or barns in higher latitudes is the only control necessary. Because wax moths cannot survive a cold winter, they are usually not a problem for beekeepers in the northern U.S. or Canada, unless they survive winter in heated storage, or are brought from the south by purchase or migration of beekeepers. They thrive and spread most rapidly with temperatures above 30 °C (90 °F), so some areas with only occasional days that are hot rarely have a problem with wax moths, unless the colony is already weak due to stress from other factors. Control and treatment A strong hive generally needs no treatment to control wax moths; the bees themselves kill and clean out the moth larvae and webs. Wax moth larvae may fully develop in cell cleanings when such cleanings accumulate thickly where they are not accessible to the bees. Wax moth development in comb is generally not a problem with top bar hives, as unused combs are usually left in the hive during the winter. Since this type of hive is not used in severe wintering conditions, the bees are able to patrol and inspect the unused comb. Wax moths can be controlled in stored comb by application of the aizawai variety of B. thuringiensis spores by spraying. It is a very effective biological control and has an excellent safety record. Wax moths can be controlled chemically with paradichlorobenzene (moth crystals or urinal disks). If chemical methods are used, the combs must be well-aired for several days before use. The use of naphthalene (mothballs) is discouraged because it accumulates in the wax, which can kill bees or contaminate honey stores. Control of wax moths by other means includes the freezing of the comb for a few hours. Langstroth found that placing a spider, such as a daddy-long-legs, with stored combs controlled wax moths and eliminated the need for harsh chemicals. This has been confirmed more recently by others, such as Bergqvist. Tropilaelaps Tropilaelaps mercedesae and T. clareae are considered serious threats to honeybees. Although they are not currently found outside Asia, these mites have the potential to inflict serious damage to colonies due to their rapid reproduction inside the hive. Bacterial diseases American foulbrood American foulbrood (AFB), caused by the spore-forming Paenibacillus larvae (formerly classified as Bacillus larvae, then P. larvae ssp. larvae/pulvifaciens), is the most widespread and destructive of the bee brood diseases. P. larvae is a rod-shaped bacterium. Larvae up to three days old become infected by ingesting spores present in their food. Young larvae less than 24 hours old are most susceptible to infection. 
Spores germinate in the gut of the larva and the vegetative bacteria begin to grow, taking nourishment from the larva. Spores will not germinate in larvae over three days old. Infected larvae normally die after their cell is sealed. The vegetative form of the bacterium will die, but not before it produces many millions of spores. American foulbrood spores are extremely resistant to desiccation and can remain viable for 80 years in honey and beekeeping equipment. Each dead larva may contain as many as 100 million spores. This disease only affects the bee larvae, but is highly infectious and deadly to bee brood. Infected larvae darken and die. As with European foulbrood, research has been conducted using the "shook swarm" method to control American foulbrood, "the advantage being that chemicals are not used". European foulbrood European foulbrood (EFB) is caused by the bacterium Melissococcus plutonius, which infects the midgut of bee larvae. European foulbrood is considered less serious than American foulbrood. M. plutonius is not a spore-forming bacterium, but bacterial cells can survive for several months on wax foundation. Symptoms include dead and dying larvae which can appear curled upwards, brown or yellow, melted or deflated with tracheal tubes more apparent, or dried out and rubbery. Scientific research showed that the spread of the disease is density dependent. The higher the density of apiaries, the higher the probability of disease transmission. European foulbrood is often considered a "stress" disease—dangerous only if the colony is already under stress for other reasons. An otherwise healthy colony can usually survive European foulbrood. Chemical treatment with oxytetracycline hydrochloride may control an outbreak of the disease, but honey from treated colonies could have chemical residues from the treatment, and prophylactic treatments are not recommended as they may lead to resistant bacteria. The "shook swarm" method of bee husbandry can also effectively control the disease, with the advantage of avoiding the use of chemicals. The Alexander-House-Miller treatment has also been shown to be effective against the disease. The method requires the hive to be strong and the queen to be prevented from laying for a week or so. A modified version of this method is given by Carr in his article. The queen is placed on frames of foundation below a queen excluder, and all of the brood frames are put above the excluder. Once all of the worker brood has emerged, these frames are removed from the hive and the old comb in them replaced with foundation ready for re-use. Fungal diseases Chalkbrood Ascosphaera apis causes a fungal disease that only affects bee brood, but adult bees can be carriers. It infests the gut of the larvae before the cell is sealed or soon after. The fungus competes with them for food, ultimately causing them to starve. The fungus then goes on to consume the rest of the larval bodies, causing them to appear white, hard, and "chalky". If fungal spores start to develop, the larva can also appear gray or black. One study suggested it could be economically devastating because not only does it weaken the hive, but it can cause honey reductions of 5–37%. Chalkbrood (ascosphaerosis larvae apium) is most commonly visible during wet springs. Hedtke et al. provided statistical evidence that chalkbrood outbreaks occurred in summer when there was a N. ceranae infection earlier in the spring and there is an ongoing V. destructor infestation. 
Stress, genetics of the bees, and health can also contribute to the presence of chalkbrood. Spores of the fungus can last for up to 15 years, which is why old equipment from a previously infected hive should not be used. These spores can last in pollen, honey, and wax. Even though Hornitzky's literature review of articles on chalkbrood disease concluded that there was no definitive cure or control, there are a variety of prevention mechanisms. Improving genetic stock to be more hygienic, sterilization of old equipment, good ventilation and the replacement of old brood comb are all techniques that can be attempted. Chalkbrood was first recognized in the early 1900s in Europe, and then spread to countries such as Argentina, Turkey, the Philippines, Mexico, Chile, Central America and Japan. It was first recorded in the United States in the mid-1960s in Utah and spread across the US from there. Stonebrood Stonebrood (aspergillosis larvae apium) is a fungal disease caused by Aspergillus fumigatus, A. flavus, and A. niger. It causes mummification of the brood of a honey bee colony. The fungi are common soil inhabitants and are also pathogenic to other insects, birds, and mammals. The disease is difficult to identify in the early stages of infection. The spores of the different species have different colours and can also cause respiratory damage to humans and other animals. When bee larvae take in spores, they may hatch in the gut, growing rapidly to form a collar-like ring near the larval heads. After death, the larvae turn black and become difficult to crush, hence the name stonebrood. Eventually, the fungus erupts from the integument of the larvae and forms a false skin. In this stage, the larvae are covered with powdery fungal spores. Worker bees clean out the infected brood and the hive may recover depending on factors such as the strength of the colony, the level of infection, and hygienic habits of the strain of bees (variation in the trait occurs among different subspecies). Viral diseases Dicistroviridae Chronic bee paralysis virus Syndrome 1 results in abnormal trembling of the wings and body. The bees cannot fly, and often crawl on the ground and up plant stems. In some cases, the crawling bees can be found in large numbers (1000+). The bees huddle together on the top of the cluster or on the top bars of the hive. They may have bloated abdomens due to distension of the honey sac. The wings are partially spread or dislocated. Syndrome 2-affected bees are able to fly, but are almost hairless. They appear dark or black and look smaller. They have a relatively broad abdomen. They are often nibbled by older bees in the colony and this may be the cause of the hairlessness. They are hindered at the entrance to the hive by the guard bees. A few days after infection, trembling begins. They then become flightless and soon die. In 2008, the chronic bee paralysis virus was reported for the first time in Formica rufa and another species of ant, Camponotus vagus. Acute bee paralysis virus Acute bee paralysis virus is considered to be a common infective agent of bees. It belongs to the family Dicistroviridae, as does the Israel acute paralysis virus, Kashmir bee virus, and the black queen cell virus. It is frequently detected in apparently healthy colonies. This virus seemingly plays a role in cases of sudden collapse of honey bee colonies infested with the parasitic mite V. destructor. 
Israeli acute paralysis virus Described in 2004, the Israeli acute paralysis virus belongs to the family Dicistroviridae, as does the Acute bee paralysis virus. The virus is named after the place where it was first identified – its place of origin is unknown. It has been suggested as a marker associated with colony collapse disorder. Kashmir bee virus Kashmir bee virus is related to the preceding viruses. Discovered in 2004, it is currently only positively identifiable by a laboratory test. Little is known about it yet. Black queen cell virus Black queen cell virus causes the queen larva to turn black and die. It is thought to be associated with Nosema. Cloudy wing virus Cloudy wing virus is a little-studied, small, icosahedral virus commonly found in honey bees, especially in collapsing colonies infested by V. destructor, providing circumstantial evidence that the mite may act as a vector. Sacbrood virus A picornavirus-like virus causes sacbrood disease. Affected larvae change from pearly white to gray and finally black. Death occurs when the larvae are upright, just before pupation. Consequently, affected larvae are usually found in capped cells. Head development of diseased larvae is typically delayed. The head region is usually darker than the rest of the body and may lean toward the center of the cell. When affected larvae are carefully removed from their cells, they appear to be a sac filled with water. Typically, the scales are brittle but easy to remove. Sacbrood-diseased larvae have no characteristic odor. Iflaviridae Deformed wing virus Deformed wing virus (DWV) is the causative agent of the wing deformities and other body malformations typically seen in honeybee colonies that are heavily infested with the parasitic mite V. destructor. DWV is part of a complex of closely related virus strains/species that also includes Kakugo virus, V. destructor virus 1 and Egypt bee virus. The deformities are produced almost exclusively due to DWV transmission by V. destructor when it parasitizes pupae. Bees infected as adults remain symptom-free, although they do display behavioral changes and have reduced life expectancy. Deformed bees are rapidly expelled from the colony, leading to a gradual loss of adult bees for colony maintenance. If this loss is excessive and can no longer be compensated by the emergence of healthy bees, the colony rapidly dwindles and dies. Kakugo virus Kakugo virus is an Iflavirus infecting bees; varroa mites may mediate its prevalence. Kakugo virus appears to be a subtype of Deformed wing virus. Slow bee paralysis virus As the name suggests, slow bee paralysis virus induces paralysis of the anterior legs ten to twelve days after infection. Iridoviridae Invertebrate iridescent virus type 6 (IIV-6) Applying proteomics-based pathogen screening tools in 2010, researchers announced they had identified a co-infection of an iridovirus (specifically invertebrate iridescent virus type 6, IIV-6) and N. ceranae in all CCD colonies sampled. On the basis of this research, the New York Times reported the colony collapse mystery solved, quoting researcher Bromenshenk, a co-author of the study, "[The virus and fungus] are both present in all these collapsed colonies." Evidence for this association, however, remains minimal and several authors have disputed the original methodology used to associate CCD with IIV-6. 
Secoviridae Tobacco ringspot virus The RNA virus tobacco ringspot virus, a plant pathogen, was described to infect honeybees through infected pollen, but this unusual claim was soon challenged and remains to be confirmed. Lake Sinai virus In 2015, Lake Sinai virus (LSV) genomes were assembled and three main domains were discovered: Orf1, RNA-dependent RNA polymerase and capsid protein sequences. LSV1, LSV2, LSV3, LSV4, LSV5, and LSV6 were described. LSV were detected in bees, mites and pollen. It only actively replicates in honey bees and mason bees (Osmia cornuta) and not in Varroa mites. Dysentery Dysentery is a condition resulting from a combination of long periods of inability to make cleansing flights (generally due to cold weather) and food stores that contain a high proportion of indigestible matter. As a bee's gut becomes engorged with feces that cannot be voided in flight as preferred by the bees, the bee voids within the hive. When enough bees do this, the hive population rapidly collapses and death of the colony results. Dark honeys and honeydews have greater quantities of indigestible matter. Occasional warm days in winter are critical for honey bee survival; dysentery problems increase in likelihood during periods of more than two or three weeks with temperatures below 50 °F (10 °C). When cleansing flights are few, bees are often forced out at times when the temperature is barely adequate for their wing muscles to function, and large quantities of bees may be seen dead in the snow around the hives. Colonies found dead in spring from dysentery have feces smeared over the frames and other hive parts. In very cold areas of North America and Europe, where honey bees are kept in ventilated buildings during the coldest part of winter, no cleansing flights are possible; under such circumstances, beekeepers commonly remove all honey from the hives and replace it with sugar water or high-fructose corn syrup, which have nearly no indigestible matter. Chilled brood Chilled brood is not actually a disease, but can be a result of mistreatment of the bees by the beekeeper. It also can be caused by a pesticide hit that primarily kills off the adult population, or by a sudden drop in temperature during rapid spring build-up. The brood must be kept warm at all times; nurse bees will cluster over the brood to keep it at the right temperature. When a beekeeper opens the hive (to inspect, remove honey, check the queen, or just to look) and prevents the nurse bees from clustering on the frame for too long, the brood can become chilled, deforming or even killing some of the bees. Pesticide losses Honey bees are susceptible to many of the chemicals used for agricultural spraying of other insects and pests. Many pesticides are known to be toxic to bees. Because the bees forage up to several miles from the hive, they may fly into areas actively being sprayed by farmers or they may collect pollen from contaminated flowers. Carbamate pesticides, such as carbaryl, can be especially pernicious since toxicity can take as long as two days to become evident, allowing infected pollen to be returned and distributed throughout the colony. Organophosphates and other insecticides are also known to kill honey bee clusters in treated areas. Pesticide losses may be relatively easy to identify (large and sudden numbers of dead bees in front of the hive) or quite difficult, especially if the loss results from a gradual accumulation of pesticide brought in by the foraging bees. 
Quick-acting pesticides may deprive the hive of its foragers, dropping them in the field before they can return home. Insecticides that are toxic to bees have label directions that protect the bees from poisoning as they forage. To comply with the label, applicators must know where and when bees forage in the application area, and the length of residual activity of the pesticide. Some pesticide authorities recommend, and some jurisdictions require, that notice of spraying be sent to all known beekeepers in the area, so they can seal the entrances to their hives and keep the bees inside until the pesticide has had a chance to disperse. This, however, does not solve all problems associated with spraying, and the label instructions should be followed regardless. Sealing honey bees from flight on hot days can kill bees. Beekeeper notification does not offer any protection to bees, if the beekeeper cannot access them, or to wild native or feral honey bees. Thus, beekeeper notification as the sole protection procedure does not really protect all the pollinators of the area, and is, in effect, a circumventing of the label requirements. Pesticide losses are a major factor in pollinator decline. Colony collapse disorder Colony collapse disorder (CCD) is a poorly understood phenomenon in which worker bees from a beehive or western honey bee colony abruptly disappear. CCD was originally discovered in Florida by David Hackenberg in western honey bee colonies in late 2006. European beekeepers observed a similar phenomenon in Belgium, France, the Netherlands, Greece, Italy, Portugal, and Spain, and initial reports have also come in from Switzerland and Germany, albeit to a lesser degree. Possible cases of CCD have also been reported in Taiwan since April 2007. Initial hypotheses were wildly different, including environmental change-related stresses, malnutrition, pathogens (i.e., disease including Israel acute paralysis virus), mites, or the class of pesticides known as neonicotinoids, which include imidacloprid, clothianidin, and thiamethoxam. Most new research suggests the neonicotinoid hypothesis was incorrect, however, and that pesticides play little role in CCD compared to Varroa and Nosema infestations. Other theories included radiation from cellular phones or other man-made devices and genetically modified crops with pest-control characteristics. In 2010, U.S. researchers announced they had identified a co-infection of invertebrate iridescent virus type 6 (IIV-6) and N. ceranae in all CCD colonies sampled. References Further reading Canadian Honey Council Essential Oils for Varroa, Tracheal, AFB Control (via Web Archive) Morse, Roger (editor), The ABC and XYZ of Beekeeping Sammataro, Diana; et al., The Beekeeper's Handbook Shimanuki, Hachiro and Knox, David A., Diagnosis of Honey Bee Diseases, US Department of Agriculture, July 2000 External links Beekeeping page at the University of Georgia, with a large section on Honey Bee Disorders Apiculture Factsheets at the British Columbia Ministry of Agriculture and Lands (via Web Archive) BeeBase at the Defra Food and Environment Research Agency in the UK Diseases and Afflictions of Honey Bees, Kohala.net (via Web Archive) Beediseases Honey bee diseases website by Dr. Guido Cordoni. Agricultural pests Bee diseases Western honey bee pests Beekeeping Honey bee
List of diseases of the honey bee
Biology
6,758
70,992,623
https://en.wikipedia.org/wiki/Nova%20Cassiopeiae%202021
Nova Cassiopeiae 2021, also known as V1405 Cassiopeiae, was a nova in the constellation Cassiopeia. It reached a peak brightness of magnitude 5.449 on May 9, 2021, making it visible to the naked eye. It was discovered by Japanese amateur astronomer Yuji Nakamura of Kameyama, Japan, at 10:10 UT on March 18, 2021. The nova was first seen by Nakamura in four 15-second CCD exposures with a 135 mm F/4 lens, when it was at magnitude 9.3. Nothing was seen brighter than magnitude 13.0 with the same equipment in exposures taken at 10:12 UT on March 14, 2021. For the first seven months after discovery, the nova's brightness stayed at a rough plateau, fading and rebrightening at least eight times; it is considered a very slow nova. After the seven-month-long series of peaks, Nova Cassiopeiae began a linear decline in brightness. This nova has been detected throughout the electromagnetic spectrum, from radio to gamma rays. All novae are binary stars, consisting of a white dwarf orbiting a "donor star" from which the white dwarf accretes material. Spectra taken of Nova Cassiopeiae around maximum brightness showed that the nova was an Fe II-type nova. The ejecta from Fe II novae are believed to come from a large circumbinary envelope of gas (which was lost from the donor star), rather than the white dwarf. TESS observations revealed an orbital period of hours for the binary system. References Novae Cassiopeia (constellation) 2021 in science Cassiopeiae, V1405 20210318
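For readers unfamiliar with the astronomical magnitude scale, the figures quoted above can be translated into brightness ratios with the standard relation ratio = 10^(0.4 × Δm). The short illustrative script below is an addition for clarity, not part of the source article.

# Convert an apparent-magnitude difference into a brightness ratio.
def brightness_ratio(m_faint: float, m_bright: float) -> float:
    return 10 ** (0.4 * (m_faint - m_bright))

# Discovery (mag 9.3) to peak (mag 5.449): the nova brightened by a factor of roughly 35.
print(round(brightness_ratio(9.3, 5.449), 1))   # ~34.7
# Pre-discovery limit (mag 13.0) to discovery (mag 9.3): at least ~30 times brighter.
print(round(brightness_ratio(13.0, 9.3), 1))    # ~30.2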
Nova Cassiopeiae 2021
Astronomy
349
18,342,137
https://en.wikipedia.org/wiki/Finke%20Desert%20Race
The Finke Desert Race is an off-road, multi-terrain two-day race for motorbikes, cars, buggies and quad bikes through desert country from Alice Springs to the small and remote community of Aputula (called Finke until the 1980s) in Australia's Northern Territory. The race is usually held each year on the King's Birthday long weekend in June. "Finke", as it is commonly known, is one of the biggest annual sporting events in the Northern Territory. Track Encompassing about 229km each way, the Finke Desert Race travels through many properties on its way to end up crossing the Finke River just north of Aputula. The track is divided into five sections: Start/Finish Line to Deep Well (61 km) Deep Well to Rodinga (31 km) Rodinga to Bundooma (43 km) Bundooma to Mount Squires (45 km) Mount Squires to Finke (49 km) History The race started in 1976 as a "there and back" challenge for a group of local motorbike riders to race from Alice Springs Inland Dragway to the Finke River and return. After the success of this initial ride, the Finke Desert Race has been held annually on the King's Birthday long weekend ever since. The race is run along sections of the Central Australia Railway along a winding corrugated track, which goes through the outback terrain of red dirt, sand, spinifex, mulga and desert oaks. Even though the railway line was realigned and rebuilt in the early 1980s, with the old tracks being pulled up, the race continues along its original course. While originally the Finke was only a bike race, its increasing popularity saw the introduction of cars and off-road buggies in 1988. A rivalry developed between the two and four wheelers, as the buggies were keen to claim the "King of the Desert" title. For eleven consecutive years the bikes were too quick for the cars despite the gap constantly narrowing. Finally in 1999, a buggy returned home first to claim the honour, with the bikes winning back the title in 2000 and 2001. From 2002 until 2004 the buggies held onto the "King of the Desert" title. In 2005 the title was changed to see two "Kings of the Desert", one for the cars and one for bikes, each picking up $10,000 for their effort. The last bike to beat the cars time was Michael Vroom in 2001 on his Honda CR500. COVID-19 impact The 2020 race was cancelled for the first time in the event's history due to the COVID-19 pandemic. This cost the economy of Alice Springs about $8 million. In 2021 about 200 Victorian competitors, plus race officials, were unable to attend when the Northern Territory classed all of Victoria as a hot spot after the state entered its fourth lockdown. 2021 fatal crash During the 2021 race, a vehicle struck spectators just 35 kilometres short of the finish line. One person was killed and two others, including the driver, were hospitalised. The remainder of the event was subsequently cancelled, meaning the bike race was not completed. The buggy category had already been won earlier that morning. The winning racer, Toby Price, had previously won in the bike category six times, and therefore became the first person to have won in both the bike and buggy categories. Media coverage A 2018 television documentary Desert Daredevils: The Finke Desert Race and Finke: There and Back described the experiences of some racers. Highlights are available to watch on 7plus. 
List of winners See also Australian Off Road Championship Kalgoorlie Desert Race Pooncarie Desert Dash References External links Rally raid races Motorcycle races Auto races in Australia Sports competitions in the Northern Territory Sport in Alice Springs Motorsport in the Northern Territory Motorcycle racing in Australia Deserts
Finke Desert Race
Biology
787
20,869,422
https://en.wikipedia.org/wiki/Tea%20leaf%20paradox
In fluid dynamics, the tea leaf paradox is a phenomenon where tea leaves in a cup of tea migrate to the center and bottom of the cup after being stirred, rather than being forced to the edges of the cup, as would be expected in a spiral centrifuge. The correct physical explanation of the paradox was first given by James Thomson in 1857. He correctly connected the appearance of secondary flow (both in the Earth's atmosphere and in a teacup) with "friction on the bottom". The formation of secondary flows in an annular channel was theoretically treated by Joseph Valentin Boussinesq as early as 1868. The migration of near-bottom particles in river-bend flows was experimentally investigated by A. Ya. Milovich in 1913. Albert Einstein revisited the problem in a 1926 paper in which he explained the erosion of river banks and repudiated Baer's law. Explanation The stirring makes the water spin in the cup, causing a centrifugal force outwards. Near the bottom, however, the water is slowed by friction. Thus the centrifugal force is weaker near the bottom than higher up, leading to a secondary circular (helical) flow that goes outwards at the top, down along the outer edge, inwards along the bottom, bringing the leaves to the center, and then up again. Applications The phenomenon has been used to develop a new technique to separate red blood cells from blood plasma, to understand atmospheric pressure systems, and in the process of brewing beer to separate out coagulated trub in the whirlpool. See also References External links See also figure 25 in figures.pdf Einstein's 1926 article online and analyzed on BibNum (click 'Télécharger' for English) (insecure link). Fluid mechanics Physical paradoxes Albert Einstein Tea Articles containing video clips
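The balance described in the Explanation above can be made quantitative. The following is a minimal sketch of the standard argument, with symbols introduced here only for illustration: $\rho$ is the fluid density, $v_\theta(r, z)$ the azimuthal (swirl) velocity at radius $r$ and height $z$, and $p$ the pressure. In the rotating bulk, the radial pressure gradient balances the centripetal acceleration,

\[
\frac{\partial p}{\partial r} = \frac{\rho\, v_\theta^2}{r},
\]

and this pressure field is imposed almost unchanged on the thin boundary layer at the bottom of the cup. Friction reduces $v_\theta$ there, so near the bottom

\[
\frac{\partial p}{\partial r} > \frac{\rho\, v_\theta^2}{r},
\]

and the unbalanced inward pressure force drives the near-bottom fluid, together with the tea leaves it carries, toward the centre, feeding the secondary circulation described above.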
Tea leaf paradox
Engineering
376
59,961,754
https://en.wikipedia.org/wiki/Indistinguishability%20obfuscation
In cryptography, indistinguishability obfuscation (abbreviated IO or iO) is a type of software obfuscation with the defining property that obfuscating any two programs that compute the same mathematical function results in programs that cannot be distinguished from each other. Informally, such obfuscation hides the implementation of a program while still allowing users to run it. Formally, iO satisfies the property that obfuscations of two circuits of the same size which implement the same function are computationally indistinguishable. Indistinguishability obfuscation has several interesting theoretical properties. Firstly, iO is the "best-possible" obfuscation (in the sense that any secret about a program that can be hidden by any obfuscator at all can also be hidden by iO). Secondly, iO can be used to construct nearly the entire gamut of cryptographic primitives, including both mundane ones such as public-key cryptography and more exotic ones such as deniable encryption and functional encryption (which are types of cryptography that no-one previously knew how to construct), but with the notable exception of collision-resistant hash function families. For this reason, it has been referred to as "crypto-complete". Lastly, unlike many other kinds of cryptography, indistinguishability obfuscation continues to exist even if P=NP (though it would have to be constructed differently in this case), though this does not necessarily imply that iO exists unconditionally. Though the idea of cryptographic software obfuscation has been around since 1996, indistinguishability obfuscation was first proposed by Barak et al. (2001), who proved that iO exists if P=NP is the case. For the P≠NP case (which is harder, but also more plausible), progress was slower: Garg et al. (2013) proposed a construction of iO based on a computational hardness assumption relating to multilinear maps, but this assumption was later disproven. A construction based on "well-founded assumptions" (hardness assumptions that have been well-studied by cryptographers, and thus widely assumed secure) had to wait until Jain, Lin, and Sahai (2020). (Even so, one of these assumptions used in the 2020 proposal is not secure against quantum computers.) Currently known indistinguishability obfuscation candidates are very far from being practical. As measured by a 2017 paper, even obfuscating the toy function which outputs the logical conjunction of its thirty-two Boolean data type inputs produces a program nearly a dozen gigabytes large. Formal definition Let $\mathcal{O}$ be some uniform probabilistic polynomial-time algorithm. Then $\mathcal{O}$ is called an indistinguishability obfuscator if and only if it satisfies both of the following two statements: Completeness or Functionality: For any Boolean circuit C of input length n and input $x \in \{0,1\}^n$, we have $\Pr[\mathcal{O}(C)(x) = C(x)] = 1$. Indistinguishability: For every pair of circuits $C_1, C_2$ of the same size k that implement the same functionality, the distributions $\mathcal{O}(C_1)$ and $\mathcal{O}(C_2)$ are computationally indistinguishable. In other words, for any probabilistic polynomial-time adversary A, there is a negligible function $\varepsilon$ (i.e., a function that is eventually smaller than $1/p(k)$ for any polynomial p) such that, for every pair of circuits $C_1, C_2$ of the same size k that implement the same functionality, we have $\left|\Pr[A(\mathcal{O}(C_1)) = 1] - \Pr[A(\mathcal{O}(C_2)) = 1]\right| \le \varepsilon(k)$. History In 2001, Barak et al., showing that black-box obfuscation is impossible, also proposed the idea of an indistinguishability obfuscator, and constructed an inefficient one. 
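To illustrate the flavour of such an inefficient construction, here is a toy sketch (hypothetical code, not taken from any of the cited papers): it canonicalizes a program by replacing it with its full truth table, so any two circuits computing the same function produce literally identical "obfuscations", trivially satisfying the indistinguishability requirement, while the 2^n output size shows why this approach is hopelessly inefficient.

from itertools import product
from typing import Callable, Tuple

def toy_canonical_obfuscate(circuit: Callable[[Tuple[int, ...]], int], n: int) -> Tuple[int, ...]:
    # Return a canonical object that depends only on the function computed by
    # `circuit` on n-bit inputs: its full truth table. Equivalent circuits map
    # to identical outputs, but the size is 2**n, i.e. exponential in n.
    return tuple(circuit(bits) for bits in product((0, 1), repeat=n))

# Two syntactically different programs for the same 3-bit function.
c1 = lambda b: b[0] & (b[1] | b[2])
c2 = lambda b: (b[0] & b[1]) | (b[0] & b[2])   # distributed form
assert toy_canonical_obfuscate(c1, 3) == toy_canonical_obfuscate(c2, 3)

A real indistinguishability obfuscator must instead output a program whose size is polynomial in that of the original circuit, which is what the candidate constructions discussed below attempt to achieve.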
Although this notion seemed relatively weak, Goldwasser and Rothblum (2007) showed that an efficient indistinguishability obfuscator would be a best-possible obfuscator, and any best-possible obfuscator would be an indistinguishability obfuscator. (However, for inefficient obfuscators, no best-possible obfuscator exists unless the polynomial hierarchy collapses to the second level.) An open-source software implementation of an iO candidate was created in 2015. Candidate constructions Barak et al. (2001) proved that an inefficient indistinguishability obfuscator exists for circuits; that is, one that outputs the lexicographically first circuit that computes the same function. If P = NP holds, then an indistinguishability obfuscator exists, even though no other kind of cryptography would exist. A candidate construction of iO with provable security under concrete hardness assumptions relating to multilinear maps was published by Garg et al. (2013), but this assumption was later invalidated. (Previously, Garg, Gentry, and Halevi (2012) had constructed a candidate version of a multilinear map based on heuristic assumptions.) Starting from 2016, Lin began to explore constructions of iO based on less strict versions of multilinear maps, constructing a candidate based on maps of degree up to 30, and eventually a candidate based on maps of degree up to 3. Finally, in 2020, Jain, Lin, and Sahai proposed a construction of iO based on the symmetric external Diffie-Hellman, learning with errors, and learning parity with noise assumptions, as well as the existence of a super-linear stretch pseudorandom generator in the function class NC0. (The existence of pseudorandom generators in NC0 (even with sub-linear stretch) was a long-standing open problem until 2006.) It is possible that this construction could be broken with quantum computing, but there is an alternative construction that may be secure even against that (although the latter relies on less established security assumptions). Practicality There have been attempts to implement and benchmark iO candidates. In 2017, an obfuscation of the function at a security level of 80 bits took 23.5 minutes to produce and measured 11.6 GB, with an evaluation time of 77 ms. Additionally, an obfuscation of the Advanced Encryption Standard encryption circuit at a security level of 128 bits would measure 18 PB and have an evaluation time of about 272 years. Existence It is useful to divide the question of the existence of iO by using Russell Impagliazzo's "five worlds", which are five different hypothetical situations about average-case complexity: Algorithmica: In this case P = NP, but iO exists. Heuristica: In this case NP problems are easy on average; iO does not exist. Pessiland: In this case, BPP ≠ NP, but one-way functions do not exist; as a result, iO does not exist. Minicrypt: In this case, one-way functions exist, but secure public-key cryptography does not; iO does not exist (because explicit constructions of public-key cryptography from iO and one-way functions are known). Cryptomania: In this case, secure public-key cryptography exists, but iO does not exist. Obfustopia: In this case, iO is believed to exist. Potential applications Indistinguishability obfuscators, if they exist, could be used for an enormous range of cryptographic applications, so much so that it has been referred to as a "central hub" for cryptography, the "crown jewel of cryptography", or "crypto-complete". 
Concretely, an indistinguishability obfuscator (with the additional assumption of the existence of one-way functions) could be used to construct the following kinds of cryptography: Indistinguishability obfuscation for programs in the RAM model and for Turing machines IND-CCA-secure public-key cryptography Short digital signatures IND-CCA-secure key encapsulation schemes Perfectly zero-knowledge non-interactive zero-knowledge proofs and succinct non-interactive arguments Constant-round concurrent zero-knowledge protocols Multilinear maps with bounded polynomial degrees Injective trapdoor functions Fully homomorphic encryption Witness encryption Functional encryption Secret sharing for any monotone NP language Semi-honest oblivious transfer Deniable encryption (both sender-deniable and fully-deniable) Multiparty, non-interactive key exchange Adaptively secure succinct garbled RAM Correlation intractable functions Attribute-based encryption Oblivious transfer Traitor tracing Graded encoding schemes Additionally, if iO and one-way functions exist, then problems in the PPAD complexity class are provably hard. However, indistinguishability obfuscation cannot be used to construct every possible cryptographic protocol: for example, no black-box construction can convert an indistinguishability obfuscator to a collision-resistant hash function family, even with a trapdoor permutation, except with an exponential loss of security. See also Black-box obfuscation, a stronger form of obfuscation proven to be impossible References Cryptographic primitives Software obfuscation
Indistinguishability obfuscation
Technology,Engineering
1,864
54,699,202
https://en.wikipedia.org/wiki/NGC%207085
NGC 7085 is a spiral galaxy located about 365 million light-years away in the constellation of Pegasus. NGC 7085 was discovered by astronomer Albert Marth on August 3, 1864. See also List of NGC objects (7001–7840) References External links Spiral galaxies Pegasus (constellation) 7085 66926 Astronomical objects discovered in 1864
NGC 7085
Astronomy
71
54,949,641
https://en.wikipedia.org/wiki/P%C3%A9ter%20Surj%C3%A1n
Péter R. Surján (born August 30, 1955) is a Hungarian theoretical chemist who is known for his research on the application of the theory of second quantization in quantum chemistry. In 2016 a festschrift issue of the journal Theoretical Chemistry Accounts was published in his honour, which was also published as a book in the Highlights in Theoretical Chemistry series by Springer Nature. He is currently a professor and a former dean of the Faculty of Science of Eötvös Loránd University. Academic career Surján received his Master's degree in physics in 1978 and his PhD in quantum chemistry in 1981, both from Eötvös Loránd University. In 1986, he became a Candidate of Science. He then worked at the Technical University of Budapest as a senior researcher in physics from 1990 to 1995 before moving to Eötvös Loránd University, where he has been since. He has taught in the Department of Theoretical Chemistry since 1991, becoming a full professor in 1998, and has been the Director of the Bolyai College (2007-2012) and the Institute of Chemistry (2008-2012). He was also the dean of the Faculty of Science. Surján is a member of the Hungarian Academy of Sciences (1998) and has been on the editorial boards of the Journal of Mathematical Chemistry and Interdisciplinary Sciences: Computational Life Sciences. He has also been a guest editor for the International Journal of Quantum Chemistry. Surján has published more than 190 papers in his scientific career. His first paper was published in 1980. Selected publications Papers "Optical Rotatory Strength Calculation by Evaluating the Gradient Matrix through the Equation of Motion", Theoretica Chimica Acta 55, 103 (1980); "Higher excitations in coupled-cluster theory", The Journal of Chemical Physics 115, 2945 (2001); "A general state-selective multireference coupled-cluster algorithm", The Journal of Chemical Physics 117, 980 (2002); "Computing coupled-cluster wave functions with arbitrary excitations", The Journal of Chemical Physics 113, 1359 (2000); "An observable-based interpretation of electronic wavefunctions: application to “hypervalent” molecules", Computational and Theoretical Chemistry 255, 9 (1992); Books References External links 1955 births Living people Hungarian chemists Theoretical chemists Eötvös Loránd University alumni Academic staff of Eötvös Loránd University
Péter Surján
Chemistry
484
403,321
https://en.wikipedia.org/wiki/65%20%28number%29
65 (sixty-five) is the natural number following 64 and preceding 66. In mathematics 65 is the nineteenth distinct semiprime (5 × 13) and the third of the form (5 × q), where q is a higher prime. 65 has a prime aliquot sum of 19 within an aliquot sequence of one composite number (65, 19, 1, 0); it is the first member of the 19-aliquot tree. It is an octagonal number. It is also a Cullen number. Given 65, the Mertens function returns 0. This number is the magic constant of a 5×5 normal magic square. This number is also the magic constant of the n-queens problem for n = 5. 65 is the smallest integer that can be expressed as a sum of two distinct positive squares in two (or more) ways, 65 = 8² + 1² = 7² + 4². It appears in the Padovan sequence, preceded by the terms 28, 37, 49 (it is the sum of the first two of these). 65 is a Stirling number of the second kind, the number of ways of dividing a set of six objects into four non-empty subsets. 65 = 1⁵ + 2⁴ + 3³ + 4² + 5¹. 65 is the length of the hypotenuse of 4 different Pythagorean triangles, the lowest number to have more than two: 65² = 16² + 63² = 33² + 56² = 39² + 52² = 25² + 60². The first two are "primitive", and 65 is the lowest number to be the largest side of more than one such triple. 65 is the number of compositions of 11 into distinct parts. In other fields 65 is the traditional age for retirement in the United Kingdom, Germany, the United States, Canada, and several other countries. In the U.S., it is the age at which a person is eligible to obtain Medicare. 65 is commonly used in names of many dishes of South Indian cuisine, for instance Chicken 65. A 65th anniversary is sometimes referred to as a sapphire jubilee. References Integers
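The arithmetic claims above are easy to check mechanically; the short script below is an illustration added here, not part of the article, and verifies several of them.

# Verify some of the stated properties of 65.
assert 65 == 5 * 13                                  # semiprime 5 x 13
assert 65 == 5 * (3 * 5 - 2)                         # 5th octagonal number, n(3n - 2)
assert 65 == 4 * 2**4 + 1                            # Cullen number, n * 2**n + 1 with n = 4
assert 65 == 5 * (5**2 + 1) // 2                     # magic constant of a 5x5 magic square
assert 65 == 8**2 + 1**2 == 7**2 + 4**2              # two distinct sums of two squares
assert 65 == 1**5 + 2**4 + 3**3 + 4**2 + 5**1

# 65 as a hypotenuse: all pairs (a, b) with a < b and a^2 + b^2 = 65^2.
triples = [(a, b) for a in range(1, 65) for b in range(a + 1, 65) if a * a + b * b == 65 * 65]
assert triples == [(16, 63), (25, 60), (33, 56), (39, 52)]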
65 (number)
Mathematics
434
52,984,006
https://en.wikipedia.org/wiki/World%20Water%20Index
The World Water Index (WOWAX) is a global stock market index established in February 2002 by Société Générale in cooperation with SAM Group and Dow Jones Index/STOXX. It contains the 20 largest corporations worldwide from the water supply, water infrastructure and water utilities/treatment sectors. The index's composition is rebalanced every quarter and revised every six months. The ISINs of the WOWAX are XY0100291446 and US98151V3006. Companies See also Palisades Water Index References Global stock market indices Water industry
World Water Index
Environmental_science
124
26,369
https://en.wikipedia.org/wiki/RFPolicy
The RFPolicy describes a method of contacting vendors about security vulnerabilities found in their products. It was originally written in 2001 by hacker and security consultant Rain Forest Puppy. The policy gives the vendor five working days to respond to the reporter of the bug. If the vendor fails to contact the reporter in those five days, it is recommended that the issue be disclosed to the general community. The reporter should help the vendor reproduce the bug and work out a fix. The reporter should delay notifying the general community about the bug if the vendor provides plausible reasons for requiring the delay. If the vendor fails to respond to, or ceases communication with, the reporter for more than five working days, the reporter should disclose the issue to the general community. When issuing an alert or fix, the vendor should give the reporter proper credit for reporting the bug. References External links RFPolicy v2.0 Computer security Software bugs
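As a minimal illustration of the five-working-day window described above (a hypothetical Python sketch, not part of the policy itself, and ignoring public holidays), the date after which disclosure would be considered acceptable can be computed by skipping weekends:

from datetime import date, timedelta

def disclosure_deadline(report_date, working_days=5):
    # Return the date after `working_days` working days (Monday-Friday),
    # a simplified reading of the RFPolicy five-day response window.
    d = report_date
    remaining = working_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday = 0 ... Friday = 4
            remaining -= 1
    return d

# Example: a report sent on a Friday gives the vendor until the following Friday.
print(disclosure_deadline(date(2024, 3, 1)))  # 2024-03-08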
RFPolicy
Technology
189
2,824,536
https://en.wikipedia.org/wiki/Peaceful%20nuclear%20explosion
Peaceful nuclear explosions (PNEs) are nuclear explosions conducted for non-military purposes. Proposed uses include excavation for the building of canals and harbours, electrical generation, the use of nuclear explosions to drive spacecraft, and wide-area fracking. PNEs were an area of some research from the late 1950s into the 1980s, primarily in the United States and Soviet Union. In the U.S., a series of tests were carried out under Project Plowshare. Some of the ideas considered included blasting a new Panama Canal, constructing the proposed Nicaragua Canal, the use of underground explosions to create electricity (Project PACER), and a variety of mining, geological, and radionuclide studies. The largest of the excavation tests was the Sedan nuclear test in 1962, which released large amounts of radioactive gas into the air. By the late 1960s, public opposition to Plowshare was increasing, and a 1970s study of the economics of the concepts suggested they had no practical use. Plowshare saw decreasing interest from the 1960s, and was officially cancelled in 1977. The Soviet program started a few years after the U.S. efforts and explored many of the same concepts under their Nuclear Explosions for the National Economy program. The program was more extensive, eventually conducting 239 nuclear explosions. Some of these tests also released radioactivity, including a significant release of plutonium into the groundwater and the polluting of an area near the Volga River. A major part of the program in the 1970s and 80s was the use of very small bombs to produce shock waves as a seismic measuring tool, and as part of these experiments, two bombs were successfully used to seal blown-out oil wells. The program officially ended in 1988. As part of ongoing arms control efforts, both programs came to be controlled by a variety of agreements. Most notable among these is the 1976 Treaty on Underground Nuclear Explosions for Peaceful Purposes (PNE Treaty). The Comprehensive Nuclear-Test-Ban Treaty of 1996 prohibits all nuclear explosions, regardless of whether they are for peaceful purposes or not. Since that time the topic has been raised several times, often as a method of asteroid impact avoidance. Peaceful Nuclear Explosions Treaty In the PNE Treaty, the signatories agreed: not to carry out any individual nuclear explosions having a yield exceeding 150 kilotons TNT equivalent; not to carry out any group explosion (consisting of a number of individual explosions) having an aggregate yield exceeding 1,500 kilotons; and not to carry out any group explosion having an aggregate yield exceeding 150 kilotons unless the individual explosions in the group could be identified and measured by agreed verification procedures. The parties also reaffirmed their obligations to comply fully with the Limited Test Ban Treaty of 1963. The parties reserve the right to carry out nuclear explosions for peaceful purposes in the territory of another country if requested to do so, but only in full compliance with the yield limitations and other provisions of the PNE Treaty and in accord with the Non-Proliferation Treaty. Articles IV and V of the PNE Treaty set forth the agreed verification arrangements. In addition to the use of national technical means, the treaty states that information and access to sites of explosions will be provided by each side, and includes a commitment not to interfere with verification means and procedures.
The protocol to the PNE Treaty sets forth the specific agreed arrangements for ensuring that no weapon-related benefits precluded by the Threshold Test Ban Treaty are derived by carrying out a nuclear explosion used for peaceful purposes, including provisions for use of the hydrodynamic yield measurement method, seismic monitoring, and on-site inspection. The agreed statement that accompanies the treaty specifies that a "peaceful application" of an underground nuclear explosion would not include the developmental testing of any nuclear explosive. United States: Operation Plowshare Operation Plowshare was the name of the U.S. program for the development of techniques to use nuclear explosives for peaceful purposes. The name was coined in 1961, taken from Micah 4:3 ("And he shall judge among the nations, and shall rebuke many people: and they shall beat their swords into plowshares, and their spears into pruning hooks: nation shall not lift up sword against nation, neither shall they learn war any more"). Twenty-eight nuclear blasts were detonated between 1961 and 1973. One of the first U.S. proposals for peaceful nuclear explosions that came close to being carried out was Project Chariot, which would have used several hydrogen bombs to create an artificial harbor at Cape Thompson, Alaska. It was never carried out due to concerns for the native populations and the fact that there was little potential use for the harbor to justify its risk and expense. There was also talk of using nuclear explosions to excavate a second Panama Canal, as well as an alternative to the Suez Canal. The largest excavation experiment took place in 1962 at the Department of Energy's Nevada Test Site. The Sedan nuclear test carried out as part of Operation Storax displaced 12 million tons of earth, creating the largest man-made crater in the world, generating large amounts of nuclear fallout over Nevada and Utah. Three tests were conducted in order to stimulate natural gas production, but the effort was abandoned as impractical because of cost and radioactive contamination of the gas. There were many negative impacts from Project Plowshare's 27 nuclear explosions. For example, the Project Gasbuggy site, located east of Farmington, New Mexico, still contains nuclear contamination from a single subsurface blast in 1967. Other consequences included blighted land, relocated communities, tritium-contaminated water, radioactivity, and fallout from debris being hurled high into the atmosphere. These were ignored and downplayed until the program was terminated in 1977, due in large part to public opposition, after $770 million had been spent on the project. Soviet Union: Nuclear Explosions for the National Economy The Soviet Union conducted a much more vigorous program of 239 nuclear tests, some with multiple devices, between 1965 and 1988 under the auspices of Program No. 6—Employment of Nuclear Explosive Technologies in the Interests of National Economy and Program No. 7—Nuclear Explosions for the National Economy. The initial program was patterned on the U.S. version, with the same basic concepts being studied. One test, the Chagan test in January 1965, has been described as a "near clone" of the U.S. Sedan shot. Like Sedan, Chagan also resulted in a massive plume of radioactive material being blown high into the atmosphere, carrying an estimated 20% of the fission products with it. Detection of the plume over Japan led to accusations by the U.S.
that the Soviets had carried out an above-ground test in violation of the Partial Test Ban Treaty, but these charges were later dropped. The later, and more extensive, "Deep Seismic Sounding" Program focused on the use of much smaller explosions for various geological uses. Some of these tests are considered to be operational, not purely experimental. These included the use of peaceful nuclear explosions to create deep seismic profiles. Compared to the usage of conventional explosives or mechanical methods, nuclear explosions allow the collection of longer seismic profiles (up to several thousand kilometres). Alexey Yablokov has stated that all PNE technologies have non-nuclear alternatives and that many PNEs actually caused nuclear disasters. Reports on the successful Soviet use of nuclear explosions in extinguishing out-of-control gas well fires were widely cited in United States policy discussions of options for stopping the 2010 Gulf of Mexico Deepwater Horizon oil spill. Other nations Germany at one time considered manufacturing nuclear explosives for civil engineering purposes. In the early 1970s a feasibility study was conducted for a project to build a canal from the Mediterranean Sea to the Qattara Depression in the Western Desert of Egypt using nuclear demolition. This project proposed to use 213 devices, with yields of 1 to 1.5 megatons, detonated at depth, to build this canal for the purpose of producing hydroelectric power. The Smiling Buddha, India's first explosive nuclear device, was described by the Indian Government as a peaceful nuclear explosion. In Australia, nuclear blasting was proposed as a way of mining iron ore in the Pilbara. Civil engineering and energy production Apart from their use as weapons, nuclear explosives have been tested and used, in a similar manner to chemical high explosives, for various non-military applications. These have included large-scale earth moving, isotope production and the stimulation and closing-off of the flow of natural gas. At the peak of the Atomic Age, the United States initiated Operation Plowshare, involving "peaceful nuclear explosions". The United States Atomic Energy Commission chairman announced that the Plowshare project was intended to "highlight the peaceful applications of nuclear explosive devices and thereby create a climate of world opinion that is more favorable to weapons development and tests". The Operation Plowshare program included 27 nuclear tests designed to investigate these non-weapon uses from 1961 through 1973. Because U.S. physicists were unable to reduce the fission fraction of the low-yield (approximately 1 kiloton) nuclear devices that many civil engineering projects would have required, once long-term health and clean-up costs from fission products were included, there was virtually no economic advantage over conventional explosives except for potentially the very largest projects. The Qattara Depression Project was developed by Professor Friedrich Bassler during his appointment to the West German ministry of economics in 1968. He put forth a plan to create a Saharan lake and hydroelectric power station by blasting a tunnel between the Mediterranean Sea and the Qattara Depression in Egypt, an area that lies below sea level. The core problem of the entire project was the water supply to the depression.
Calculations by Bassler showed that digging a canal or tunnel by conventional means would be too expensive, and he therefore determined that the use of nuclear explosive devices to excavate the canal or tunnel would be the most economical. The Egyptian government declined to pursue the idea. The Soviet Union conducted a much more exhaustive program than Plowshare, with 239 nuclear tests between 1965 and 1988. Furthermore, many of the "tests" were considered economic applications, not tests, in the Nuclear Explosions for the National Economy program. These included a 30 kiloton explosion used in 1966 to close the Uzbekistani Urtabulak gas well, which had been blowing since 1963; a few months later a 47 kiloton explosive was used to seal a higher-pressure blowout at the nearby Pamuk gas field. (For more details, see Blowout (well drilling)#Use of nuclear explosions.) The devices that produced the highest proportion of their yield via fusion-only reactions are possibly the Taiga Soviet peaceful nuclear explosions of the 1970s. Their public records indicate 98% of their 15 kiloton explosive yield was derived from fusion reactions, so only 0.3 kiloton was derived from fission. The repeated detonation of nuclear devices underground in salt domes, in a somewhat analogous manner to the explosions that power a car's internal combustion engine (in that it would be a heat engine), has also been proposed as a means of fusion power in what is termed PACER. Other investigated uses for low-yield peaceful nuclear explosions were underground detonations to stimulate, by a process analogous to fracking, the flow of petroleum and natural gas in tight formations; this was developed most in the Soviet Union, with an increase in the production of many well heads being reported. Terraforming In 2015, billionaire entrepreneur Elon Musk popularized an approach in which the cold planet Mars could be terraformed by the detonation of high-fusion-yielding thermonuclear devices over the mostly dry-ice icecaps on the planet. Musk's specific plan would not be very feasible within the energy limitations of historically manufactured nuclear devices (ranging in kilotons of TNT-equivalent), and would therefore require major advancement before it could be considered. In part due to these problems, the physicist Michio Kaku (who initially put forward the concept) instead suggests using nuclear reactors in the typical land-based district heating manner to make isolated tropical biomes on the Martian surface. Alternatively, as nuclear detonations are presently somewhat limited in terms of demonstrated achievable yield, an off-the-shelf nuclear explosive device could be employed to "nudge" a Martian-grazing comet toward a pole of the planet. Impact would be a much more efficient scheme to deliver the required energy, water vapor, greenhouse gases, and other biologically significant volatiles that could begin to quickly terraform Mars. One such opportunity for this occurred in October 2014 when a "once-in-a-million-years" comet (designated as C/2013 A1, also known as comet "Siding Spring") passed close by the Martian atmosphere. Physics The discovery and synthesis of new chemical elements by nuclear transmutation, and their production in the necessary quantities to allow study of their properties, were carried out in nuclear explosive device testing.
For example, the discovery of the short-lived einsteinium and fermium, both created under the intense neutron flux environment within thermonuclear explosions, followed the first Teller–Ulam thermonuclear device test—Ivy Mike. The rapid capture of so many neutrons required in the synthesis of einsteinium provided the needed direct experimental confirmation of the so-called r-process: the multiple neutron absorptions, occurring before beta decay, that are needed to explain the cosmic nucleosynthesis (production) of all chemical elements heavier than nickel on the periodic table in supernova explosions, and hence the existence of many stable elements in the universe. The worldwide presence of new isotopes from atmospheric testing beginning in the 1950s led to the 2008 development of a reliable way to detect art forgeries. Paintings created after that period may contain traces of caesium-137 and strontium-90, isotopes that did not exist in nature before 1945. (Fission products were produced in the natural nuclear fission reactor at Oklo about 1.7 billion years ago, but these decayed away before the earliest known human painting.) Both climatology and particularly aerosol science, a subfield of atmospheric science, were largely created to answer the question of how far and wide fallout would travel. Similar to radioactive tracers used in hydrology and materials testing, fallout and the neutron activation of nitrogen gas served as a radioactive tracer that was used to measure and then help model global circulations in the atmosphere by following the movements of fallout aerosols. After the Van Allen Belts surrounding Earth were discovered in 1958, James Van Allen suggested that a nuclear detonation would be one way of probing the magnetic phenomenon. Data obtained from the August 1958 Project Argus test shots, a high-altitude nuclear explosion investigation, were vital to the early understanding of Earth's magnetosphere. Soviet nuclear physicist and Nobel Peace Prize recipient Andrei Sakharov also proposed the idea that earthquakes could be mitigated and particle accelerators could be made by utilizing nuclear explosions, with the latter created by connecting a nuclear explosive device with another of his inventions, the explosively pumped flux compression generator, to accelerate protons to collide with each other to probe their inner workings, an endeavor that is now done at much lower energy levels with non-explosive superconducting magnets at CERN. Sakharov suggested replacing the copper coil in his MK generators with a large superconducting solenoid to magnetically compress and focus underground nuclear explosions into a shaped charge effect. He theorized this could focus 10²³ positively charged protons per second on a 1 mm² surface, and then envisaged making two such beams collide in the form of a supercollider. Underground nuclear explosive data from peaceful nuclear explosion test shots have been used to investigate the composition of Earth's mantle, analogous to the exploration geophysics practice of mineral prospecting with chemical explosives in "deep seismic sounding" reflection seismology. Project A119, proposed in the 1960s, would have involved, as Apollo scientist Gary Latham explained, detonating a "smallish" nuclear device on the Moon in order to facilitate research into its geologic make-up.
This would have been analogous in concept to the comparatively low-yield explosion created by the water-prospecting Lunar Crater Observation and Sensing Satellite (LCROSS) mission, which launched in 2009 and released the "Centaur" kinetic energy impactor, an impactor with a mass of 2,305 kg (5,081 lb) and a high impact velocity, releasing the kinetic energy equivalent of detonating approximately 2 tons of TNT (8.86 GJ). Propulsion use The first preliminary examination of the effects of nuclear detonations upon various metal and non-metal materials occurred in 1955 during Operation Teapot, where a chain of approximately basketball-sized spheres of material was arrayed at fixed aerial distances descending from the shot tower. In what was then a surprising experimental observation, all but the spheres directly within the shot tower survived, with the greatest ablation noted on the aluminum sphere located some distance from the detonation point, with a small amount of surface material absent upon recovery. These spheres are often referred to as "Lew Allen's balls", after the project manager during the experiments. The ablation data collected for various materials, and the distances the spheres were propelled, serve as the bedrock for the nuclear pulse propulsion study, Project Orion. The direct use of nuclear explosives, by using the impact of ablated propellant plasma from a nuclear shaped charge acting on the rear pusher plate of a ship, was and continues to be seriously studied as a potential propulsion mechanism. Although likely never achieving orbit due to aerodynamic drag, the first macroscopic object to obtain Earth orbital velocity was a "900 kg manhole cover" propelled by the somewhat focused detonation of test shot Pascal-B in August 1957. The use of a subterranean shaft and nuclear device to propel an object to escape velocity has since been termed a "thunder well". In the 1970s Edward Teller, in the United States, popularized the concept of using a nuclear detonation to power an explosively pumped soft X-ray laser as a component of a ballistic missile defense shield known as Project Excalibur. The device would create dozens of highly focused X-ray beams that would cause the missile to break up due to laser ablation. Laser ablation is one of the damage mechanisms of a laser weapon, but it is also one of the researched methods behind pulsed laser propulsion intended for spacecraft, though usually powered by means of conventionally pumped laser arrays. For example, in 2000, ground flight testing by Professor Leik Myrabo, using a non-nuclear, conventionally powered pulsed laser test-bed, successfully lifted a lightcraft 72 meters in altitude by a method similar to ablative laser propulsion. A powerful solar-system-based soft X-ray to ultraviolet laser system has been calculated to be capable of propelling an interstellar spacecraft, by the light sail principle, to 11% of the speed of light. In 1972 it was also calculated that a 1 terawatt, 1 km diameter X-ray laser with a 1 angstrom wavelength, impinging on a 1 km diameter sail, could propel a spacecraft to Alpha Centauri in 10 years.
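As an illustrative back-of-the-envelope check of the LCROSS impact figures quoted in the Physics section above (a sketch, not taken from the cited sources), the stated 8.86 GJ of kinetic energy can be converted to a TNT equivalent using the conventional value of 4.184 GJ per ton of TNT:

# Rough check of the LCROSS "Centaur" impact energy quoted above.
kinetic_energy_j = 8.86e9   # 8.86 GJ, as stated in the text
tnt_ton_j = 4.184e9         # conventional energy content of 1 ton of TNT
print(kinetic_energy_j / tnt_ton_j)  # ~2.1, i.e. roughly the 2 tons of TNT given in the text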
Asteroid impact avoidance A proposed means of averting an asteroid impact with Earth, assuming short lead times between detection and impact, is to detonate one nuclear explosive device, or a series of them, on, in, or in stand-off proximity to the asteroid; the stand-off method takes place far enough away from the incoming threat to prevent the potential fracturing of the near-Earth object, but still close enough to generate a high-thrust ablation effect. A 2007 NASA analysis of impact avoidance strategies using various technologies stated: Nuclear stand-off explosions are assessed to be 10–100 times more effective than the non-nuclear alternatives analyzed in this study. Other techniques involving the surface or subsurface use of nuclear explosives may be more efficient, but they run an increased risk of fracturing the target near-Earth object. They also carry higher development and operations risks. See also Nuclear weapon design#Clean bombs Nuclear terrorism References External links IAEA review of the 1968 book: The constructive uses of nuclear explosions by Edward Teller. "The Containment of Underground Nuclear Explosions", Project Director Gregory E van der Vink, U.S. Congress, Office of Technology Assessment, OTA-ISC-414, (Oct 1989). Nuclear technology Nuclear weapons testing
Peaceful nuclear explosion
Physics,Chemistry,Technology
4,251
74,834,638
https://en.wikipedia.org/wiki/Miriam%20M.%20Unterlass
Miriam Margarethe Unterlass (born 1986 in Erlangen, Germany) is a German chemist. She is full professor of solid state chemistry at the University of Konstanz, as well as adjunct principal investigator at CeMM - Research Center for Molecular Medicine of the Austrian Academy of Sciences. On 1 October 2024, Unterlass took over the management of the Fraunhofer Institute for Silicate Research ISC in Würzburg. Education and career Miriam M. Unterlass was born in 1986 in Erlangen, Germany. She studied chemistry, process engineering and materials science in the framework of a double diploma degree in Würzburg, Germany, in Lyon, France, and in Southampton, United Kingdom. She completed her PhD under the supervision of Professor Markus Antonietti at the Max Planck Institute of Colloids and Interfaces in Potsdam-Golm, Germany. In 2011 she obtained her doctoral degree (magna cum laude) at the University of Potsdam, Germany, with her doctoral work entitled “From Monomer Salts and Their Tectonic Crystals to Aromatic Polyimides: Development of Neoteric Synthesis Routes”. In 2011 she continued her career with a postdoc in the Centre national de la recherche scientifique (CNRS) Laboratory Soft Matter and Chemistry under the supervision of Professor Ludwik Leibler at the École supérieure de physique et de chimie industrielles de la ville de Paris (ESPCI). In 2012 she was a visiting scholar at Massachusetts Institute of Technology (MIT) hosted by Professor Gregory C. Rutledge. Later that year, she started as an independent group leader of the research group “Advanced Organic Materials” at the Institute of Materials Chemistry of the Vienna University of Technology (TU Wien). She habilitated (venia docendi) in materials chemistry at TU Wien in 2018 and became assistant professor with tenure in 2019. In 2018 she joined CeMM - Research Center for Molecular Medicine of the Austrian Academy of Sciences and to date works there as adjunct principal investigator. In 2021 she became an associate professor at TU Wien, and since May 2021 she has been full professor of solid state chemistry at the University of Konstanz, Germany. In 2022 she was guest professor at the Department of Chemical Science and Engineering of the Institute of Science Tokyo (formerly known as Tokyo Institute of Technology), Japan, hosted by Professor Shinji Ando. Research Miriam Unterlass is a prominent researcher in the field of chemistry, known for her innovative work at the intersection of materials science and synthetic chemistry. Her research primarily focuses on the development of sustainable routes towards advanced materials and small molecules. This work is based on the central hypothesis that water can serve as a near-universal solvent for chemical synthesis and processing. She has made significant contributions to the understanding of the use of water as a core technology. Her group has demonstrated that water is an ideal medium to produce advanced materials, profiting from the properties of water under hydrothermal conditions. This approach utilizes hot liquid water as a reaction medium, producing a variety of materials, e.g. high-performance polymers suitable for aeronautics and microelectronics, small molecules relevant to biology and medicine or optoelectronics, and inorganic-organic hybrid materials. Moreover, her group employs modern computational and automation approaches, aiming for maximal efficiency and the discovery of new materials that address the various challenges of human life.
She has published over 40 peer-reviewed articles and given more than 80 scientific talks at conferences. She has also filed over 7 patents and patent applications, and actively works alongside industry partners to translate her findings into practical applications. Selected publications F. A. Amaya-García and M. M. Unterlass*: "Synthesis of 2,3-Diarylquinoxaline Carboxylic Acids in High-Temperature Water” Synthesis 2022, 54(15), 3367-3382. F. A. Amaya-García, M. Caldera, A. Koren, S. Kubicek, J. Menche, and M. M. Unterlass*: “Green hydrothermal synthesis of fluorescent 2,3-diarylquinoxalines and large-scale computational comparison to existing alternatives“, ChemSusChem 2021, 14(8), 1853-1863. M. J. Taublaender, S. Mezzavilla, S. Thiele, F. Glöcklhofer, and M. M. Unterlass*, “Hydrothermal Generation of Conjugated Polymers on the Example of Pyrrone Polymers and Polybenzimidazoles”, Angew. Chem. Int. Ed. 2020, 59, 15050-15060. M. J. Taublaender, F. Glöcklhofer, M. Marchetti-Deschmann, and M. M. Unterlass*: “Green and Rapid Hydrothermal Crystallization and Synthesis of Fully Conjugated Aromatic Compounds“, Angew. Chem. Int. Ed. 2018, 57, 12270-12274. M. M. Unterlass*: “Hot Water Generates Crystalline Organic Materials”, Angew. Chem. Int. Ed. 2018, 57, 2292-2294. L. Leimhofer, B. Baumgartner, M. Puchberger, T. Prochaska, T. Konegger, and M. M. Unterlass*: “Green one-pot synthesis and processing of polyimide-silica hybrid materials”, J. Mater. Chem. A. 2017, 5, 16326-16335. B. Baumgartner, A. Svirkova, J. Bintinger, C. Hametner, M. Marchetti-Deschmann, and M. M. Unterlass*: “Green and highly efficient synthesis of perylene and naphthalene bisimides in nothing but water”, Chem. Commun. 2017, 53, 1229-1232. B. Baumgartner, M. J. Bojdys, and M. M. Unterlass*: “Geomimetics for Green Polymer Synthesis: Highly Ordered Polyimides via Hydrothermal Techniques”, Polym. Chem. 2014, 5, 3771-3776.
Memberships Member of the German Society of Materials Science (DGM), since 2023 Member of the International Society for Advancement of Supercritical Fluids (ISASF), since 2023 Member of the International Solvothermal and Hydrothermal Association (ISHA) and representative for Austria, since 2019 Member of the  Young Academy of the Austrian Academy of Sciences (ÖAW), since 2018 Member of the Royal Society of Chemistry (MRSC), since 2015 European Crystallographic Association (ECA), since 2015 German Association of University Professors and Lecturers (DHV), since 2014 Austrian Chemical Society (GÖCH), since 2013 German Chemical Society (GDCh), since 2005 Awards and honors 2023 Roy-Somiya Award of the International Solvothermal and Hydrothermal Association (ISHA) for her “Outstanding contributions to the field of hydrothermal and solvothermal synthesis by a scientist under the age of 45” 2023 Nomination for the LUKS teaching Awards of the University of Konstanz 2022 Appointment to the Scientific Advisory Board - Materials Field - of the Federal Institute for Materials Research and Testing (BAM) for the term 2022 - 2025 2021 Appointment to the Board of trustees of the Hochschuljubileumsfonds of the city of Vienna 2020 National Patent Award (Staatspreis Patent) from the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation, and Technology 2020 Bürgenstock JSP Fellow of the Swiss Chemical Society (SCS) 2020 Associate Editor of the Journal of Materials Chemistry A (Royal Society of Chemistry, RSC) 2020 Associate Editor of the journal Materials Advances (RSC) 2020 Member of the selection committee for scholarships and representative for Solid State and Materials Chemistry of the Alexander von Humboldt Foundation 2020 Selected as Mentoring lecturer and member of the selection committee of the Austrian Study Foundation (Österreichische Studienstiftung) 2019 Member of the Scientific Advisory Board of the European Forum Alpbach 2019 Member of AcademiaNet, nominated by the Austrian Science Fund (FWF) 2019 Full member of the Wolfgang Pauli Institute (WPI) Vienna 2019 Finalist in the RSC Emerging Technologies competition (Category: Enabling Technologies) with UGP Materials 2018 Selected as one of 100 Women in Materials Science by the Royal Society of Chemistry 2018 Selected for the International Visitor Leadership Program (IVLP) of the U.S. 
Department of State’s Bureau of Educational and Cultural Affairs (Specific program: “Hidden No More: Empowering Women Leaders in Science, Technology, Engineering, the Arts, and Mathematics (STEAM)”) 2018 Elected Member of the Young Academy of the Austrian Academy of Sciences 2018 Sallinger Fonds S&B (Science-to-Business) award to UGP Materials 2017 START prize of the Austrian Science Fund (FWF) 2017 PHÖNIX award (Austrian founders award) in the category “best prototype” 2017 Selected for the Hochschullehrer-Nachwuchs-Workshop of the German Chemical Society (GDCh), section for Macromolecular Chemistry 2016 Member of the Fast Track program (“Excellence and Leadership Skills for Outstanding Women in Science”) of the Robert Bosch Foundation 2016 Named one of the Young Talents 2016 by the journal Macromolecular Chemistry and Physics 2016 Pro Didactica Awards 2016, 2nd place in the category best lecture within the BSc curriculum “Technical Chemistry” at TU Wien for “Inorganic Chemistry I” 2016 Pro Didactica Awards 2016, 2nd place in the category fairest exam within the BSc curriculum “Technical Chemistry” at TU Wien for “Inorganic chemistry I” 2016 INiTS Startup-Camp Award 2016 for best overall concept presented at the i2c StartAcademy 2015 Young Participant of the Lindau Nobel Laureate Meeting 2015 2015 Named one of the Emerging Investigators 2015 by the journal Polymer Chemistry 2014 Anton Paar Science Award by the Austrian Chemical Society (GÖCH) 2014 One of six finalists for the Austrian Innovator of the Year Award 2014 References External links Miriam M. Unterlass publications indexed by Google Scholar Website of the UnterlassLAB - Research Group Official Website Website at University of Konstanz Website at CeMM Press Release - Prof. Miriam Unterlass Appointed New Director of the Fraunhofer Institute for Silicate Research ISC Living people 1986 births German chemists People from Erlangen Solid state chemists German women chemists University of Würzburg alumni Academic staff of TU Wien Academic staff of the University of Konstanz
Miriam M. Unterlass
Chemistry
2,234
10,980,912
https://en.wikipedia.org/wiki/Smart%20transducer
A smart transducer is an analog or digital transducer, actuator, or sensor combined with a processing unit and a communication interface. As sensors and actuators become more complex, they provide support for various modes of operation and interfacing. Some applications require additionally fault-tolerant and distributed computing. Such functionality can be achieved by adding an embedded microcontroller to the classical sensor/actuator, which increases the ability to cope with complexity at a fair price. Typically, these on-board technologies in smart sensors are used for digital processing, either frequency-to-code or analog-to-digital conversions, interfacing functions and calculations. Interfacing functions include decision-making tools like self-adaption, self-diagnostics, and self-identification functions, but also the ability to control how long and when the sensor will be fully awake, to minimize power consumption and to decide when to dump and store data. They are often made using CMOS, VLSI technology and may contain MEMS devices leading to lower cost. They may provide full digital outputs for easier interface or they may provide quasi-digital outputs like pulse-width modulation. In the machine vision field, a single compact unit that combines the imaging functions and the complete image processing functions is often called a smart sensor. Smart sensors are a crucial element of the Internet of Things (IoT). Within such a network, multiple physical vehicles and devices are embedded with sensors, software and electronics. Data are collected and shared for better integration between digital environments and the physical world. The connectivity between sensors is an important requirement for an IoT innovation to perform well. Interoperability can therefore be seen as a consequence of connectivity. The sensors work together and complement each other. Improvement over traditional sensors The key features of smart sensors as part of the IoT that differentiate them from traditional sensors are: small size; self-validation and self-identification; low power requirements; self-diagnosis; self-calibration; and connection to the Internet and other devices. The traditional sensor collects information about an object or a situation and translates it into an electrical signal. It gives feedback of the physical environment, process, or substance in a measurable way, and signals or indicates when change in this environment occurs. Traditional sensors in a network of sensors can be divided into three parts: one, the sensors; two, a centralized interface where the data is collected and processed; and three, an infrastructure that connects the network, like plugs, sockets and wires. A network of smart sensors can be divided into two parts: (1) the sensors, and (2) a centralized interface. The fundamental difference from traditional sensors is that the microprocessors embedded in the smart sensors already process the data. Therefore, less data has to be transmitted and the data can immediately be used and accessed on different devices. The switch to smart sensors entails that the tight coupling between transmission and processing technologies is removed. Digital traces Within a digital environment, actions or activities leave a digital trace. Smart sensors measure these activities in the physical environment and translate this into a digital environment. Therefore, every step within the process becomes digitally traceable.
Whenever a mistake is made somewhere in a production process, this can be tracked down using these digital traces. As a result, it will be easier to track down inefficiencies within a production process and simplify process innovations, because one can more easily analyze what part of the production process is inefficient. Because all the information is digitized, the company is exposed to cyber attacks. To protect itself from these information breaches, ensuring a secure platform is crucial. Layered modular architecture of digital sensors The term layered modular architecture refers to a combination of the modular architecture of the physical components of a product with the layered architecture of the digital system. There is a contents layer, a service layer, a network layer ((1) logical transmission, (2) physical transport), and a device layer ((1) logical capability, (2) physical machinery). Starting at the device layer, the smart sensor itself is the physical machinery, measuring its physical environment. The logical capability refers to the operating system, such as Windows, macOS or another operating system that is used to run the platform. At the network layer, the logical transmission can consist of various transmission methods: Wi-Fi, Bluetooth, NFC, Zigbee and RFID. For smart sensors, physical transport is not necessary, since smart sensors are usually wireless. Yet charging wires and sockets are still commonly used. The service layer is about the service that is provided by the smart sensor. The sensors are able to process the data themselves. Therefore, there is not one specific service of the sensors because they process multiple things simultaneously. They can for example signal that certain assets need to be repaired. The content layer would be the centralised platforms that are created and used to gain insights and create value. Usage across industries Insurance Traditionally, insurance companies tried to assess the risk of their clients by looking over their application forms, trusting their answers and then simply covering the risk with a monthly premium. However, due to asymmetric information, it was difficult to accurately determine the risk of a given client. The introduction of smart sensors in the insurance industry is disrupting the traditional practice in multiple ways. Smart sensors generate a large amount of (big) data and affect the business models of insurance companies as follows. Smart sensors in clients' homes or in wearables help insurance companies to get much more detailed information. Wearables can, for example, monitor heart-related metrics, while location-based systems such as security technologies or smart thermostats can generate important data about the home. They can use this information to improve risk assessment and risk management, reduce asymmetric information, and ultimately reduce costs. Additionally, if clients agree to provide the data from sensors in their homes, they can even get a discount on their premium. This approach of trading information in return for special deals is called bartering and it is one form of data monetization. Data monetization is the act of exchanging information-based products and services for legal tender or something of perceived equivalent value. In other words, data monetization is exploiting opportunities to generate new revenues. Another form of data monetization, which insurers regularly use nowadays, is selling data to third parties.
Manufacturing One of the recent trends in manufacturing is the Industry 4.0 revolution, in which data exchange and automation play a crucial role. Traditionally, machines were already able to automate certain small tasks (e.g. open/close valves). Automation in smart factories goes beyond these easy tasks. It increasingly includes complex optimization decisions that humans typically make. For machines to be able to make human decisions, it is imperative to get detailed information, and that is where smart sensors come in. For manufacturing, efficiency is one of the most important aspects. Smart sensors pull data from assets to which they are connected and process the data continuously. They can provide detailed real-time information about the plant and process and reveal performance issues. If this is just a small performance issue, the smart factory can even solve the problem itself. Smart sensors can predict defects as well, so rather than fixing a problem afterwards, maintenance workers can prevent it. This all leads to high asset efficiency and reduces downtime, which is the enemy of every production process. Smart sensors can also be applied beyond the factory. For example, sensors on objects like vehicles or shipping containers can give detailed information about delivery status. This affects both manufacturing and the whole supply chain. Automotive In recent years, the automotive industry has been challenging its ‘old’ ecosystems. Several new technologies like smart sensors play a crucial role in this process. Nowadays, these sensors only enable some small autonomous features like automatic parking services, obstacle detection and emergency braking, which improve safety. Although a lot of companies are focused on technologies that improve cars and work towards automation, complete disruption of the industry has not yet been reached. Yet, experts expect that autonomous cars without any human intervention will dominate the roads in 10 years. Smart sensors generate data about the car and its surroundings, connect it into a car network, and translate this into valuable information which allows the car to see and interpret the world. Basically, the sensor works as follows. It has to pull physical and environmental data, use that information for calculations, analyze the outcomes and translate them into action. Sensors in other cars have to be connected into the car network and communicate with each other. However, smart sensors in the automotive industry can also be used in a more sustaining way. Car manufacturers place smart sensors in different parts of the car, which collect and share information. Drivers and manufacturers can use this information to move from scheduled to predictive maintenance. Established firms have a strong focus on these sustaining innovations, but the risk is that they do not see new entrants coming and have difficulty adapting. Therefore, making a distinction between disruptive and sustaining innovations is important, as each brings different implications for managers. See also Ambient intelligence Edge computing IEEE 1451 Internet of things Intelligent sensor Machine to machine Sentroller SensorML System on a chip Transducer electronic data sheet TransducerML References External links IEEE Spectrum: Smart Sensors Smart devices Transducers
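As a minimal, hypothetical illustration of the smart transducer concept described at the start of this article (a classical sensor plus on-board processing and a communication interface), the Python sketch below models that idea; the class and method names are illustrative only and do not correspond to any standard such as IEEE 1451:

class SmartTransducer:
    # Illustrative model: raw sensing, on-board processing, and a communication hook.
    def __init__(self, adc_scale, send):
        self.adc_scale = adc_scale   # assumed analog-to-digital conversion factor
        self.send = send             # communication interface, e.g. a network callback
        self.last_reading = None

    def read_raw(self):
        # Placeholder for the analog front end; a real device would sample hardware here.
        return 0.42

    def self_diagnose(self):
        # Trivial self-diagnostic: flag readings outside an assumed valid range.
        return self.last_reading is not None and 0.0 <= self.last_reading <= 1.0

    def sample(self):
        # On-board processing: digitize, validate, and transmit only the processed value,
        # so less data has to leave the sensor, as described above.
        self.last_reading = self.read_raw() * self.adc_scale
        if self.self_diagnose():
            self.send({"value": self.last_reading, "status": "ok"})

# Example usage with a stand-in "network" that just prints the message.
sensor = SmartTransducer(adc_scale=1.0, send=print)
sensor.sample()  # {'value': 0.42, 'status': 'ok'}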
Smart transducer
Technology
1,892
48,686,732
https://en.wikipedia.org/wiki/Earlandite
Earlandite, [Ca3(C6H5O7)2(H2O)2]·2H2O, is the mineral form of calcium citrate tetrahydrate. It was first reported in 1936 and named after the English microscopist and oceanographer Arthur Earland FRSE. Earlandite occurs as warty fine-grained nodules ca. 1 mm in size in bottom sediments of the Weddell Sea, off Antarctica. Its crystal symmetry was first assigned as orthorhombic, then as monoclinic, and finally as triclinic. References Bibliography Palache, P.; Berman H.; Frondel, C. (1960). "Dana's System of Mineralogy, Volume II: Halides, Nitrates, Borates, Carbonates, Sulfates, Phosphates, Arsenates, Tungstates, Molybdates, Etc. (Seventh Edition)" John Wiley and Sons, Inc., New York, pp. 1105-1106. Calcium minerals Organic minerals Triclinic minerals Minerals in space group 2 Minerals described in 1936
Earlandite
Chemistry
237
22,870,254
https://en.wikipedia.org/wiki/Mainspring%20gauge
Mainspring gauges were used to measure the sizes of watch mainsprings so that replacement springs could be ordered from a supply house. Mainsprings are characterised by their strength (thickness) and height (width). European gauges They were made by several European companies and are all similar to the gauge in Figure 1, being brass plates with notches on both edges and round sinks on both sides. The height of a spring is measured by finding the smallest notch on the edge of the gauge into which it will fit. The strength is determined by finding the smallest round sink into which the mainspring barrel will fit. (The diameter of the barrel is related to the mainspring length and strength, but this method for determining strength only works for springs of standard lengths.) Each manufacturer used a different arbitrary scale to describe mainspring sizes, but these sizes are usually based on the French inch. A better method for measuring strength is to use a slit gauge, Figure 2, with which the thickness of the mainspring is measured directly. This gauge uses a scale of sizes running from 21 (0.05 mm) to 5/0 (0.30 mm). Scales used in watchmaking generally do not use negative numbers. Instead, after size 0 there is 2/0 or 00, 3/0 or 000, and so on. Dennison "US Standard" gauges About 1840 Aaron Lufkin Dennison devised a gauge "upon which all the different parts of a watch could be accurately measured", Figure 3. Later this became known as the "US Standard" gauge. The edges have the usual notches for mainspring heights (based on the Imperial inch), but there is no obvious way to determine mainspring strength, except perhaps by barrel diameter using one of the three scales on the top left (which are Imperial inch scales in 1/64 and 1/32 divisions). Over time, this gauge was simplified and reduced to being purely a mainspring gauge, Figure 4. Only the height notches remain, and a thickness slit gauge has been added to it. The slit gauge is also based on the Imperial inch, but it often has metric equivalents which are crude approximations. References Watkins, Richard: Mainspring Gauges and the Dennison Combined Gauge, 2009. Timekeeping components
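A small, hypothetical Python sketch of the "smallest notch that fits" measurement described above; the notch widths used here are made-up values for illustration only and do not reproduce any manufacturer's scale:

# Illustrative only: find the smallest gauge notch a spring of a given height fits into.
notch_widths_mm = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]  # assumed notch widths, not a real scale

def measure_height(spring_height_mm, notches=notch_widths_mm):
    # Return the smallest notch the spring fits into, mirroring how the gauges are read.
    for width in sorted(notches):
        if spring_height_mm <= width:
            return width
    return None  # the spring is wider than every notch on the gauge

print(measure_height(1.45))  # 1.6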
Mainspring gauge
Technology
479
18,738,893
https://en.wikipedia.org/wiki/Feltrinelli%20Prize
The Feltrinelli Prize (from the Italian "Premio Feltrinelli", also known as "International Feltrinelli Prize" or "Antonio Feltrinelli Prize") is an award for achievement in the arts, music, literature, history, philosophy, medicine, and physical and mathematical sciences. Administered by the Antonio Feltrinelli Fund, the award comes with a monetary grant ranging between €50,000 and €250,000, a certificate, and a gold medal. The prize is awarded, both nationally and internationally, once every five years in each field by Italy's Accademia Nazionale dei Lincei. A further prize is awarded periodically for an enterprise of exceptional moral and humanitarian value. Considered to be Italy's most distinguished scientific society, the organization was founded in 1603 and included Galileo Galilei among its first members. Award winners Source: 1956 (Mathematics) Prize reserved for Italian citizens (L. 1.500.000) Beppo Levi 1957 (Literature) Prize reserved for Italian citizens (L. 5.000.000) Antonio Baldini Virgilio Giotti Vasco Pratolini International Prize (L. 20.000.000) Wystan Hugh Auden Aldo Palazzeschi 1962 (Literature) Prize reserved for Italian citizens Bruno Cicognani Giuseppe De Robertis John Dos Passos Carlo Emilio Gadda Camillo Sbarbaro International Prize Eugenio Montale 1963 (Arts) Prize reserved for Italian citizens (L. 5.000.000) Painting: Mino Maccari Music: Giorgio Federico Ghedini Cinema: Luchino Visconti International Prize Sculpture: Henry Moore 1964 (Medicine) International Prize (L. 25.000.000) Experimental medicine: Wallace O. Fenn Applied medical and surgical sciences: Albert Sabin 1966 (Physics, math and natural sciences) Prize reserved for Italian citizens (L. 5.000.000) Mathematics, Mechanics and Applications: Guido Stampacchia Physics, Chemistry and Applications: Luigi Arialdo Radicati di Brozolo Biology and Applications: Vittorio Capraro International Prize (L. 20.000.000) Geology: Harry Hammond Hess 1972 (Literature) Prize reserved for Italian citizens (L. 10.000.000) Narration: Italo Calvino History and criticism of literary language: Italo Siciliano Theory and history of literary language: Gianfranco Folena Poetry: Vittorio Sereni International Prize (L. 20.000.000) Theatre: Eduardo De Filippo 1981 International Prize (L. 100.000.000) Sol Spiegelman 1984 (Medicine) Prize reserved for Italian citizens (L. 25.000.000) Ruggero Ceppellini International Prize (L. 100.000.000) Jérôme Lejeune Robert Allan Weinberg 1986 (Physical, mathematical and natural sciences) Prize reserved for Italian citizens (L. 20.000.000) Mathematics: Lucilla Bassotti, Claudio Procesi Astronomy, geodesy and geophysics: Fernando Sansò Physics and chemistry: Giorgio Parisi, Alessandro Ballio, Emilio Gatti Geology and paleontology: Maria Bianca Cita Sironi Biological science: Lilia Alberghina, Luciano Bullini, Pietro Calissano International Prize (L. 100.000.000) Chemistry: Alan Battersby 1989 (Medicine) International Prize (L. 100.000.000) Giuseppe Attardi 1990 International Prize (L. 150.000.000) Robert Roswell Palmer 1991 International Prize (L. 150.000.000) Alfred Edward Ringwood 1992 (Literature) Prize reserved for Italian citizens (L. 50.000.000) Luciano Anceschi International Prize (L. 200.000.000) John Ashbery 1993 (Arts) Prize reserved for Italian citizens (L. 50.000.000) Emilio Vedova 1995 Prize reserved for Italian citizens (L. 100.000.000) Sebastiano Timpanaro 1998 (Arts) Prize reserved for Italian citizens (L.
125.000.000) Painting: Carlo Maria Mariani Cinema: Michelangelo Antonioni Sculpture: Giuliano Vangi Theatre: Luigi Squarzina International Prize (L. 300.000.000) Architecture: José Rafael Moneo Valles 1999 (Medicine) International Prize (L. 300.000.000) Arvid Carlsson 2002 (Literature) Prize reserved for Italian citizens (€ 65.000) Piero Boitani Daniele Del Giudice 2003 (Arts) Prize reserved for Italian citizens (€ 65.000) Cinema: Ermanno Olmi Photography: Mimmo Jodice Orchestra direction: Riccardo Chailly Engraving: Guido Strazza International Prize (€ 250.000) Music: Salvatore Sciarrino 2004 (Medicine) International Prize (€ 250.000) Gottfried Schatz 2006 Prize reserved for Italian citizens (€ 65.000) Alberto Bressan Giovanni Jona-Lasinio International Prize (€ 250.000) Saul Perlmutter 2007 (Literature) International Prize (€ 250.000) Brian Stock 2008 (Arts) International Prize (€ 250.000) Juha Leiviskä 2009 (Medicine) Prize reserved for Italian citizens (€ 65.000) Rino Rappuoli International Prize (€ 250.000) Ira Pastan 2011 Prize reserved for Italian citizens Christopher Hacon 2014 (Medicine) International Prize Chris Dobson 2016 Prize reserved for Italian citizens Alberto Mantovani 2023 (Arts) Prize reserved for Italian citizens Zerocalcare 2024 (Medicine) International Prize H. Franklin Bunn 2024 (Biology) International Prize Paola Arlotta References External links Italian awards Science and technology awards Italian science and technology awards
Feltrinelli Prize
Technology
1,163
7,068,038
https://en.wikipedia.org/wiki/Thermal%20barrier%20coating
Thermal barrier coatings (TBCs) are advanced materials systems usually applied to metallic surfaces on parts operating at elevated temperatures, such as gas turbine combustors and turbines, and in automotive exhaust heat management. These 100 μm to 2 mm thick coatings of thermally insulating materials serve to insulate components from large and prolonged heat loads and can sustain an appreciable temperature difference between the load-bearing alloys and the coating surface. In doing so, these coatings can allow for higher operating temperatures while limiting the thermal exposure of structural components, extending part life by reducing oxidation and thermal fatigue. In conjunction with active film cooling, TBCs permit working fluid temperatures higher than the melting point of the metal airfoil in some turbine applications. Due to increasing demand for more efficient engines running at higher temperatures with better durability/lifetime and thinner coatings to reduce parasitic mass for rotating/moving components, there is significant motivation to develop new and advanced TBCs. The material requirements of TBCs are similar to those of heat shields, although in the latter application emissivity tends to be of greater importance. Structure An effective TBC needs to meet certain requirements to perform well in aggressive thermo-mechanical environments. To deal with thermal expansion stresses during heating and cooling, adequate porosity is needed, as well as appropriate matching of thermal expansion coefficients with the metal surface that the TBC is coating. Phase stability is required to prevent significant volume changes (which occur during phase changes), which would cause the coating to crack or spall. In air-breathing engines, oxidation resistance is necessary, as well as decent mechanical properties for rotating/moving parts or parts in contact. Therefore, general requirements for an effective TBC can be summarized as needing: 1) a high melting point. 2) no phase transformation between room temperature and operating temperature. 3) low thermal conductivity. 4) chemical inertness. 5) similar thermal expansion match with the metallic substrate. 6) good adherence to the substrate. 7) low sintering rate for a porous microstructure. These requirements severely limit the number of materials that can be used, with ceramic materials usually being able to satisfy the required properties. Thermal barrier coatings typically consist of four layers: the metal substrate, metallic bond coat, thermally-grown oxide (TGO), and ceramic topcoat. The ceramic topcoat is typically composed of yttria-stabilized zirconia (YSZ), which has very low conductivity while remaining stable at the nominal operating temperatures typically seen in TBC applications. This ceramic layer creates the largest thermal gradient of the TBC and keeps the lower layers at a lower temperature than the surface. However, above 1200 °C, YSZ suffers from unfavorable phase transformations, changing from t'-tetragonal to tetragonal to cubic to monoclinic. Such phase transformations lead to crack formation within the top coating. Recent efforts to develop an alternative to the YSZ ceramic topcoat have identified many novel ceramics (e.g., rare earth zirconates) exhibiting superior performance at temperatures above 1200 °C, but with inferior fracture toughness compared to that of YSZ. 
In addition, such zirconates may have a high concentration of oxygen-ion vacancies, which may facilitate oxygen transport and exacerbate the formation of the TGO. With a thick enough TGO, spalling of the coating may occur, which is a catastrophic mode of failure for TBCs. The use of such coatings would require additional coatings that are more oxidation resistant, such as alumina or mullite. The bond coat is an oxidation-resistant metallic layer which is deposited directly on top of the metal substrate. It is typically 75-150 μm thick and made of a NiCrAlY or NiCoCrAlY alloy, though other bond coats made of Ni and Pt aluminides also exist. The primary purpose of the bond coat is to protect the metal substrate from oxidation and corrosion, particularly from oxygen and corrosive elements that pass through the porous ceramic top coat. At peak operating conditions found in gas-turbine engines with temperatures in excess of 700 °C, oxidation of the bond-coat leads to the formation of a thermally-grown oxide (TGO) layer. Formation of the TGO layer is inevitable for many high-temperature applications, so thermal barrier coatings are often designed so that the TGO layer grows slowly and uniformly. Such a TGO will have a structure that has a low diffusivity for oxygen, so that further growth is controlled by diffusion of metal from the bond-coat rather than the diffusion of oxygen from the top-coat. The TBC can also be locally modified at the interface between the bond coat and the thermally grown oxide so that it acts as a thermographic phosphor, which allows for remote temperature measurement. Failure mechanisms In general, the failure mechanisms of TBCs are very complex and can vary significantly from one TBC to another, depending on the environment in which the thermal cycling takes place. For this reason, the failure mechanisms are still not fully understood. Despite this multitude of failure mechanisms and their complexity, three of the most important have to do with the growth of the thermally-grown oxide (TGO) layer, thermal shock, and sintering of the top coat (TC), discussed below. Additional factors contributing to failure of TBCs include mechanical rumpling of the bond coat during thermal cyclic exposure (especially coatings in aircraft engines), accelerated oxidation at high temperatures, hot corrosion, and molten deposit degradation. TGO layer growth The growth of the thermally-grown oxide (TGO) layer is the most important cause of TBC spallation failure. When the TGO forms as the TBC is heated, it causes a compressive growth stress associated with volume expansion. When it is cooled, a lattice mismatch strain arises between TGO and the top coat (TC) due to differing thermal expansion coefficients. Lattice mismatch strain refers to the strain that comes about when two crystalline lattices at an interface have different lattice constants and must nonetheless match one another where they meet at the interface. These growth stresses and lattice mismatch stresses, which increase with increasing cycling number, lead to plastic deformation, crack nucleation, and crack propagation, ultimately contributing to TBC failure after many cycles of heating and cooling. For this reason, in order to make a TBC that lasts a long time before failure, the thermal expansion coefficients of all the layers should match well. Whereas a high BC creep rate increases the tensile stresses present in the TC due to TGO growth, a high TGO creep rate actually decreases these tensile stresses.
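The importance of matching thermal expansion coefficients, noted above, can be made concrete with the standard biaxial thin-coating estimate sigma = E * delta_alpha * delta_T / (1 - nu). The Python sketch below uses assumed, representative property values (not figures from this article) to show the order of magnitude of stress that even a modest CTE mismatch produces on cooling:

# Illustrative estimate of the thermal-mismatch stress in a ceramic top coat on cool-down.
# All values below are assumed, representative numbers, not data from this article.
E_tc_pa = 50e9           # assumed in-plane Young's modulus of a porous YSZ top coat (Pa)
nu_tc = 0.2              # assumed Poisson's ratio of the top coat
alpha_tc = 10.5e-6       # assumed CTE of YSZ (1/K)
alpha_substrate = 14e-6  # assumed CTE of a Ni-based superalloy substrate (1/K)
delta_t_k = 1000         # assumed temperature drop from operation to room temperature (K)

stress_pa = E_tc_pa * (alpha_substrate - alpha_tc) * delta_t_k / (1 - nu_tc)
print(stress_pa / 1e6)   # ~219 MPa magnitude (compressive in the coating on cooling)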
Because the TGO is made of Al2O3 and the metallic bond coat (BC) is normally an aluminum-containing alloy, TGO formation tends to deplete the aluminum in the bond coat. If the BC runs out of aluminum to supply to the growing TGO, compounds other than Al2O3 (such as Y2O3) can enter the TGO, weakening it and making it easier for the TBC to fail.

Thermal shock

Because the purpose of TBCs is to insulate metallic substrates so that they can be used for prolonged times at high temperatures, they often undergo thermal shock, the stress that arises in a material subjected to a rapid temperature change. Thermal shock is a major contributor to TBC failure, since the resulting stresses can crack the TBC if they are sufficiently large. In fact, the repeated thermal shocks associated with turning an engine on and off many times are a main contributor to the failure of TBC-coated turbine blades in aircraft. Over repeated cycles of rapid heating and cooling, thermal shock produces significant tensile strains perpendicular to the interface between the BC and the TC, reaching a maximum magnitude at the BC/TC interface, as well as a periodic strain field in the direction parallel to that interface. Especially after many cycles, these strains can nucleate and propagate cracks both parallel and perpendicular to the BC/TC interface. These linked-up horizontal and vertical cracks ultimately contribute to failure of the TBC through delamination of the TC.

Sintering

A third major contributor to TBC failure is sintering of the TC. In TBC applications, YSZ has a columnar structure. The columns start out with a feathery surface structure but become smoother on heating, as atomic diffusion at high temperature acts to minimize surface energy. The undulations on adjacent, now smoother, columns eventually touch one another and begin to coalesce. As the YSZ sinters and densifies in this way, it shrinks, leading to crack formation by a mechanism analogous to the formation of mudcracks: the top layer shrinks while the layer beneath it (the BC in the case of TBCs, or the earth in the case of mud) remains the same size. This mud-cracking effect can be exacerbated if the underlying substrate is rough, or if it roughens upon heating, for the following reason. If the surface under the columns is wavy and the columns are modeled as straight rods normal to the local surface, then the column density is necessarily high above valleys in the surface and low above peaks, because of the tilting of the rods. This leads to a non-uniform columnar density throughout the TBC and promotes crack development in the low-density regions. In addition to this mud-cracking effect, sintering increases the Young's modulus of the TC as the columns become attached to one another, which in turn increases the mismatch strain at the interface between the TC and the BC or TGO: the stiffer TC can no longer bend as easily to accommodate the layer beneath it, so the strain it must carry grows. This increased mismatch strain adds to the other strain fields in the TC, promoting crack formation and propagation and leading to failure of the TBC.
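A common first-order figure of merit that connects the thermal shock and sintering discussions above is the critical temperature difference ΔT_c ≈ σ_f(1 − ν)/(E·α) that a brittle coating can survive under a sudden, fully constrained temperature change. The sketch below uses assumed, illustrative property values only; it is not a design calculation, but it shows how a sintering-induced increase in Young's modulus directly reduces the tolerable temperature swing.

# First-order thermal shock figure of merit for a brittle coating under a
# severe, fully constrained quench: dT_crit = sigma_f * (1 - nu) / (E * alpha).
# All property values are illustrative assumptions.
sigma_f = 100e6   # fracture strength of the porous top coat, Pa (assumed)
nu = 0.2          # Poisson's ratio (assumed)
E = 50e9          # Young's modulus of the as-deposited top coat, Pa (assumed)
alpha = 10e-6     # thermal expansion coefficient, 1/K (assumed)

dT_crit = sigma_f * (1 - nu) / (E * alpha)
dT_crit_sintered = sigma_f * (1 - nu) / (2 * E * alpha)  # assume sintering doubles E

print(f"critical temperature difference, as deposited: {dT_crit:.0f} K")
print(f"critical temperature difference, after sintering (2x stiffer): {dT_crit_sintered:.0f} K")

Under these assumptions the tolerable temperature change drops from about 160 K to about 80 K once the top coat stiffens, consistent with the qualitative argument that sintering degrades thermal shock durability.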
Types

YSZ

YSZ is the most widely studied and used TBC because it provides excellent performance in applications such as diesel engines and gas turbines. It was also one of the few refractory oxides that could be deposited as thick films using the then-available technology of plasma spraying. In terms of properties, it has low thermal conductivity, a high thermal expansion coefficient, and high thermal shock resistance. However, it has a fairly low operating limit of 1200 °C due to phase instability, and its transparency to oxygen can allow corrosion of the underlying layers.

Mullite

Mullite is a compound of alumina and silica with the formula 3Al2O3-2SiO2. It has a low density, good mechanical properties, high thermal stability, and low thermal conductivity, and it is corrosion and oxidation resistant. However, it suffers from crystallization and volume contraction above 800 °C, which leads to cracking and delamination. It is therefore suitable as a zirconia alternative for applications such as diesel engines, where surface temperatures are relatively low but temperature variations across the coating may be large.

Alumina

Among the aluminum oxides, only α-phase Al2O3 is stable. With high hardness and chemical inertness, but also high thermal conductivity and a low thermal expansion coefficient, alumina is often used as an addition to an existing TBC rather than on its own. Incorporating alumina into a YSZ TBC can improve oxidation and corrosion resistance as well as hardness and bond strength without significantly changing the elastic modulus or toughness. One challenge with alumina is that applying the coating by plasma spraying tends to create a variety of unstable phases, such as γ-alumina. When these phases eventually transform into the stable α-phase through thermal cycling, a significant volume change of roughly 15% (γ to α) follows, which can lead to microcrack formation in the coating.

CeO2 + YSZ

CeO2 (ceria) has a higher thermal expansion coefficient and a lower thermal conductivity than YSZ. Adding ceria to a YSZ coating can significantly improve TBC performance, especially thermal shock resistance, most likely because the better insulation and better-matched net thermal expansion coefficient reduce the stress on the bond coat. Negative effects of the ceria addition include a decrease in hardness and an accelerated sintering rate, which makes the coating less porous.

Rare-earth zirconates

La2Zr2O7, also referred to as LZ, is an example of a rare-earth zirconate that shows potential for use as a TBC. This material is phase stable up to its melting point and can largely tolerate vacancies on any of its sublattices. Together with the possibility of site substitution by other elements, this means that its thermal properties can potentially be tailored. Although it has a much lower thermal conductivity than YSZ, it also has a lower thermal expansion coefficient and lower toughness.

Rare earth oxides

Single- and mixed-phase materials consisting of rare earth oxides represent a promising low-cost approach to TBCs. Coatings of rare earth oxides (for example La2O3, Nb2O5, Pr2O3, and CeO2 as main phases) have lower thermal conductivities and higher thermal expansion coefficients than YSZ. The main challenge to overcome is the polymorphic nature of most rare earth oxides at elevated temperatures, as phase instability tends to degrade thermal shock resistance.
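The insulating benefit that motivates low-conductivity candidates such as YSZ and the rare-earth materials above can be illustrated with a simple one-dimensional, steady-state conduction estimate, ΔT = q·d/k, where q is the heat flux through the coating, d its thickness, and k its thermal conductivity. The numbers in the sketch below are assumptions chosen only to show the calculation; actual temperature drops depend on the engine's heat flux and cooling scheme.

# Steady-state 1-D conduction across the ceramic top coat: dT = q * d / k.
# All numbers are illustrative assumptions.
q = 1.0e6     # heat flux through the coating, W/m^2 (assumed)
d = 300e-6    # top-coat thickness, m (assumed, about 300 micrometres)
k_ysz = 1.0   # effective thermal conductivity of porous YSZ, W/(m*K) (assumed)

dT_ysz = q * d / k_ysz
dT_low_k = q * d / (0.5 * k_ysz)  # hypothetical candidate with half the conductivity

print(f"temperature drop across YSZ top coat: {dT_ysz:.0f} K")
print(f"temperature drop with half the conductivity: {dT_low_k:.0f} K")

With these assumed values the coating carries a temperature drop on the order of a few hundred kelvin, which is why halving the conductivity of the top coat is such an attractive goal for the candidate materials listed above.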
Another advantage of rare earth oxides as TBCs is their tendency to be intrinsically hydrophobic, which is useful for systems that operate intermittently and might otherwise suffer from moisture adsorption or surface ice formation.

Metal-glass composites

A powder mixture of metal and ordinary glass can be plasma-sprayed in vacuum; with a suitable composition, the result is a TBC comparable to YSZ. In addition, metal-glass composites show superior bond-coat adherence, higher thermal expansion coefficients, and no open porosity, which prevents oxidation of the bond coat.

Uses

Automotive

Thermal barrier ceramic coatings are becoming increasingly common in automotive applications. They are specifically designed to reduce heat loss from engine exhaust system components, including exhaust manifolds, turbocharger casings, exhaust headers, downpipes, and tailpipes. This practice is also known as "exhaust heat management". Used under the bonnet, these coatings reduce engine-bay temperatures and therefore the intake air temperature.

Although most ceramic coatings are applied to metallic parts directly related to the exhaust system, technological advances now allow thermal barrier coatings to be applied by plasma spray onto composite materials. It is now commonplace to find ceramic-coated components in modern engines and on high-performance components in race series such as Formula 1. As well as providing thermal protection, these coatings are also used to prevent physical degradation of the composite due to friction. This is possible because the ceramic material bonds with the composite (instead of merely sticking to the surface, as paint does), forming a tough coating that does not chip or flake easily. Although thermal barrier coatings have also been applied to the insides of exhaust system components, problems have been encountered because of the difficulty of preparing the internal surface prior to coating.

Aviation

Thermal barrier coatings are commonly used to protect nickel-based superalloys from both melting and thermal cycling in aviation turbines. Combined with cooling air flow, TBCs increase the allowable gas temperature above the melting point of the superalloy. To avoid the limits imposed by the melting point of superalloys, many researchers are investigating ceramic-matrix composites (CMCs) as high-temperature alternatives, generally made from fiber-reinforced SiC. Rotating parts are especially good candidates for the material change because of the enormous fatigue loads they endure. CMCs not only have better thermal properties but are also lighter, so less fuel is needed to produce the same thrust for a lighter aircraft. The material change is, however, not without consequences. At high temperatures these CMCs react with water vapor: the protective silica (SiO2) scale that forms on the SiC volatilizes as gaseous silicon hydroxide compounds, corroding the CMC:

SiO2 + H2O = SiO(OH)2
SiO2 + 2H2O = Si(OH)4
2SiO2 + 3H2O = Si2O(OH)6

The thermodynamic data for these reactions have been determined experimentally over many years, showing that Si(OH)4 is generally the dominant vapor species.
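For the dominant volatilization reaction above, SiO2 + 2H2O(g) = Si(OH)4(g), equilibrium thermodynamics gives p(Si(OH)4) = K(T)·p(H2O)² when the silica scale is at unit activity, so the volatility of the protective scale rises with the square of the water-vapor pressure. The sketch below only illustrates this scaling; the equilibrium constant used is a placeholder, not a measured value.

# Equilibrium partial pressure of Si(OH)4 over a silica scale in water vapor:
#   SiO2(s) + 2 H2O(g) = Si(OH)4(g)   =>   p_SiOH4 = K(T) * p_H2O**2
# K_eq below is a placeholder for illustration only; it varies strongly with temperature.
K_eq = 1e-7  # dimensionless equilibrium constant (pressures in atm), assumed

for p_h2o in (0.1, 0.5, 1.0, 5.0):   # water-vapor partial pressure, atm
    p_si_oh4 = K_eq * p_h2o ** 2
    print(f"p(H2O) = {p_h2o:>4} atm  ->  p(Si(OH)4) = {p_si_oh4:.2e} atm")

Because recession of the silica scale tracks this vapor pressure, water-vapor-rich combustion environments attack SiC-based CMCs far more aggressively than dry air, which is part of the motivation for the environmental barrier coatings discussed next.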
Even more advanced environmental barrier coatings are required to protect these CMCs from water vapor as well as from other environmental degradants. For instance, as gas temperatures increase towards 1400-1500 K, sand particles begin to melt and react with the coatings. The melted sand is generally a mixture of calcium oxide, magnesium oxide, aluminum oxide, and silicon oxide (commonly referred to as CMAS). Many research groups are investigating the harmful effects of CMAS on turbine coatings and how to prevent the damage. CMAS is a major barrier to increasing the combustion temperature of gas turbine engines and will need to be addressed before turbines see a large increase in efficiency from higher operating temperatures.

Processing

In industry, thermal barrier coatings are produced in a number of ways:

Electron beam physical vapor deposition: EBPVD
Air plasma spray: APS
High velocity oxygen fuel: HVOF
Electrostatic spray-assisted vapor deposition: ESAVD
Direct vapor deposition

In addition, the development of advanced coatings and processing methods is a field of active research. One example is the solution precursor plasma spray process, which has been used to create TBCs with some of the lowest reported thermal conductivities without sacrificing thermal cyclic durability.

See also

Piezospectroscopy
Thermal spraying
Zircotec

References

External links