Dataset schema: id (int64, 39 to 79M), url (string, lengths 31 to 227), text (string, lengths 6 to 334k), source (string, lengths 1 to 150), categories (list, lengths 1 to 6), token_count (int64, 3 to 71.8k), subcategories (list, lengths 0 to 30).
69,240,607
https://en.wikipedia.org/wiki/Moveworks
Moveworks is an American artificial intelligence (AI) company headquartered in Mountain View, California. The company develops an AI platform, designed for large enterprises, that uses natural language understanding (NLU), probabilistic machine learning, and automation to resolve workplace requests. Moveworks’ customers include Autodesk, Broadcom, and other firms. Employees converse with the Moveworks chatbot to submit their requests, which Moveworks analyzes and then resolves via integrations with other software applications. Moveworks is available in business communication tools such as Slack and Microsoft Teams, as well as through online platforms such as ServiceNow and SharePoint. As of its Series C financing round in June 2021, Moveworks is valued at $2.1 billion and has raised $315 million in total funding. The company’s investors include Tiger Global, Alkeon Capital, and other firms.

History

Moveworks was founded in 2016 by Bhavin Shah (CEO), Vaibhav Nivargi (CTO), Varun Singh (Vice President of Product), and Jiang Chen (Vice President of Machine Learning). The founders recognized the potential of an AI-powered chatbot to resolve a significant portion of employees’ support issues without involvement from a corporate help desk. This model would enable self-service for employees with common requests or questions. After working with a group of lighthouse customers to automate IT support use cases, Moveworks came out of "stealth mode" in April 2019, following a $30 million Series A investment from Lightspeed Venture Partners and Bain Capital. The company raised a $75 million Series B round in November 2019 and a $200 million Series C round in June 2021. Moveworks initially solved employees’ IT support issues. In March 2021, the company expanded its Employee Service Platform to address issues concerning other lines of business, including HR, finance, and facilities.
Moveworks also released an internal communications solution that allows company leaders to send interactive messages to employees. Moveworks was recognized as the Best Chatbot Solution at the 2021 AI Breakthrough Awards, named to the Forbes AI 50 in 2019, 2020, and 2021, and selected as one of the Most Innovative Tech Companies of the Year at the 2021 American Business Awards.

Technology

The Moveworks platform comprises many specialized machine learning models, such as variants of the BERT language model. These models are trained on historical support tickets in order to process and fulfill new requests; for example, answering policy questions, accessing software, and editing email groups. As of October 2021, Moveworks can resolve requests written in more than 100 languages. A central goal of Moveworks' machine learning process is to augment the "small data" of its customers. Training deep learning models often requires very large data sets; for example, millions of annotated requests for a new laptop. Because few companies possess a sufficient quantity of such requests from their own employees, Moveworks leverages Collective Learning across many companies to make high-accuracy predictions about how to resolve a given issue.

References

Business software companies Applications of artificial intelligence American companies established in 2016 Companies based in Mountain View, California 2016 establishments in California
Moveworks
[ "Technology" ]
639
[ "Announced information technology acquisitions", "Information technology" ]
69,240,781
https://en.wikipedia.org/wiki/Jennifer%20Van%20Eyk
Jennifer Eileen Van Eyk is the Erika Glazer Chair in Women's Heart Health, the Director of the Advanced Clinical Biosystems Institute in the Department of Biomedical Sciences, the Director of Basic Science Research in the Women's Heart Center, and a Professor in Medicine and in Biomedical Sciences at Cedars-Sinai. She is a renowned scientist in the field of clinical proteomics.

Early life and education

Jennifer E. Van Eyk was born in Northern Ontario, Canada. She obtained a bachelor of science in biology and chemistry from the University of Waterloo in 1982. She received a PhD in biochemistry under the direction of Robert S. Hodges from the University of Alberta in 1991. She conducted post-doctoral research at the University of Heidelberg, the University of Alberta, and the University of Illinois at Chicago with R. John Solaro.

Career

Van Eyk began her academic career in 1996 as an assistant professor in the Department of Physiology at Queen's University, Kingston, Canada, and she was promoted to associate professor and received tenure in 2001. She then left Canada to join Johns Hopkins University as the Director of the Proteomics Innovation Center in Heart Failure in 2003, and later Cedars-Sinai in 2014. Van Eyk is a member-at-large and a council member of the Human Proteome Organization, and the president of the US Human Proteome Organization. She was a technical briefs editor at Proteomics. She served on the editorial boards of Proteomics: Clinical Applications, the Journal of Physiology, and Circulation Research. She currently serves on the editorial board of Clinical Proteomics. She is a Fellow of the International Society for Heart Research and a Fellow of the American Heart Association.

Research

Van Eyk is an internationally recognized leader in clinical proteomics. She is the founding director of the Cedars-Sinai Advanced Clinical Biosystems Research Institute, whose motto is “from discovery to patient care”.
She is co-editor of Clinical Proteomics: From Diagnosis to Therapy, a reference work in clinical proteomics and translational medicine. Her list of publications: https://www.ncbi.nlm.nih.gov/sites/myncbi/1VsYqQYH8535l/bibliography/48183272/public/.

Awards

2024 The Analytical Scientist Power List, Human Health Heroes
2024 The Karger Medal, Barnett Institute of Chemical & Biological Analysis, Northeastern University
2024 Richard Simpson Lecturer Award, Australian Proteomics Society
2024 U.S. Human Proteome Organization Catherine E. Costello Award for Exemplary Achievements in Proteomics
2023 The International Society of Heart Research, International President’s Lecture Award
2023 The Analytical Scientist Power List - Leaders and Advocates
2022 The Association for Mass Spectrometry and Advances in Clinical Lab (MSACL) Distinguished Contribution Award
2021 The Analytical Scientist Power List
2020 The Analytical Scientist Power List
2019 Human Proteome Organization Distinguished Achievement in Proteomic Sciences Award
2019 US Human Proteome Organization The Donald F. Hunt Distinguished Contribution in Proteomics Award
2019 Canadian National Proteomics Network The Tony Pawson Proteomics Award
2017 The Analytical Scientist Power List: Top 10 Omics Explorers
2015 Human Proteome Organization Clinical & Translational Proteomics Award
2014 American Heart Association Council on Genomic and Precision Medicine Medal of Honor
2013 American Heart Association Council on Genomic and Precision Medicine Distinguished Achievement Award

Recent patents

Role of citrullination in diagnosing diseases (2021) US 11,105,817 B2
Biomarkers of myocardial injury (2021) US 11,041,865 B2
Correlated peptides for quantitative mass spectrometry (2019) US 10,352,942 B2
Citrullinated proteins: a post-translated modification of myocardial proteins as marker of physiological and pathological disease (2019) US 10,309,974 B2
Diagnostic assay for Alzheimer's disease (2017) US 9,678,086 B2

References

Living people University of Alberta alumni Mass spectrometrists Canadian women scientists Cedars-Sinai Medical Center Year of birth missing (living people) Proteomics Proteomics journals Proteomics organizations Canadian physiologists Cardiovascular physiology Cardiovascular researchers American Heart Association American scientists
Jennifer Van Eyk
[ "Physics", "Chemistry" ]
870
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
69,242,552
https://en.wikipedia.org/wiki/Polyfluoroalkoxyaluminates
Polyfluoroalkoxyaluminates (PFAAs) are weakly coordinating anions, many of which are of the form [Al(ORF)4]−. Most PFAAs possess an Al(III) center coordinated by four −ORF ligands (RF = -CPh(CF3)2 (hfpp), -CH(CF3)2 (hfip), -C(CH3)(CF3)2 (hftb), -C(CF3)3 (pftb)), giving the anion an overall −1 charge. The most weakly coordinating PFAA is an aluminate dimer, [F{Al(Opftb)3}2]−, which possesses a bridging fluoride between two Al(III) centers. The first PFAA, [Al(Ohfpp)4]−, was synthesized in 1996 by Steven Strauss, and several other analogs have since been synthesized, including [Al(Ohfip)4]−, [Al(Ohftb)4]−, and [Al(Opftb)4]− by Ingo Krossing in 2001. These chemically inert and very weakly coordinating ions have been used to stabilize unusual cations, isolate reactive species, and synthesize strong Brønsted acids.

Synthesis

Work by Strauss demonstrated that Li+[Al(Ohfpp)4]− could be synthesized from the reaction of lithium aluminum hydride and HOhfpp. Analogous metal PFAA salts (MPFAAs) were later synthesized by Krossing using a similar synthetic pathway.

LiAlH4 + 4 HORF → LiAl(ORF)4 + 4 H2

Reaction of lithium aluminum hydride with four equivalents of the polyfluoroalcohol overnight in refluxing toluene yields the desired PFAAs. The colorless products can be precipitated from toluene in high yields on multi-gram scales by cooling at −20 °C for an hour. They can be further purified by sublimation.

Cation exchange and reactivity

Metal exchange

While Li+[Al(Ohfpp)4]− is readily soluble in hydrocarbon solvents, presumably due to its aryl substituents, Li+[Al(Ohfip)4]−, Li+[Al(Ohftb)4]−, and Li+[Al(Opftb)4]− are only sparingly soluble in common organic solvents, including dichloromethane (DCM), toluene, and hexane.
Their silver analogs are much more soluble, however, making AgPFAAs more desirable reagents for liquid-phase reactivity.

LiAl(ORF)4 + AgF → AgAl(ORF)4 + LiF

Ag+[Al(Ohfip)4]−, Ag+[Al(Ohftb)4]−, and Ag+[Al(Opftb)4]− can be synthesized via salt metathesis reactions; ultrasonication of a suspension of Li+[PFAA]− and an excess of AgF at 40 °C for 12 hours produces the final colorless products in high yields on multigram scales. Analogous M+[Al(Opftb)4]− (M = Na, K, Rb, Cs) salts can also be prepared via the same synthetic route, from the metathesis reactions of Li+[Al(Opftb)4]− with the corresponding MCl salt.

Brønsted acid chemistry

Strong Brønsted acids, [H(OEt2)2]+[Al(Opftb)4]− and [H(THF)2]+[Al(Opftb)4]−, can be prepared via the reaction of Li+[Al(Opftb)4]− with two equivalents of a Lewis base, Et2O or THF, and a strong acid, HX (X = Cl, Br). [H(OEt2)2]+[Al(Opftb)4]− is isolable as a white powder that is sensitive to air and water and stable at moderately high temperatures. [H(THF)2]+[Al(Opftb)4]− can be isolated as a crystalline solid from a brown oily residue, presumably containing polymerized THF products formed upon addition of the strong acid.

Li+[Al(ORF)4]− + 2 B + HX → [H(B)2]+[Al(ORF)4]− + LiX

Ab initio calculations and crystallographic structural analysis of [H(OEt2)2]+[Al(Opftb)4]− indicate potentially unequal sharing of the proton between the two diethyl ether molecules, and the authors propose a solid-state structure in which, in one resonance structure, [H(OEt2)2]+ is described as a diethyl ether molecule acting as a hydrogen-bond acceptor from an ethanol molecule that stabilizes an ethyl cation as a Lewis base.

Nitrosonium exchange

Nitrosonium salts, NO+[Al(Ohfpp)4]− and NO+[Al(Opftb)4]−, can be prepared via an exchange reaction of the respective lithium salt with nitrosonium hexafluoroantimonate.
Li+[Al(ORF)4]− + NO+[SbF6]− → NO+[Al(ORF)4]− + Li+[SbF6]−

The NO+[Al(Opftb)4]− salt can be obtained in much higher yields than the analogous hfpp salt and can be used to oxidize several transition metal and main group element complexes.

Cation stabilization

Transition metal complexes

Manganese(V) nitrosyl cation

The first homoleptic metal nitrosyl cation was prepared using the PFAAs [Al(Opftb)4]− and [F{Al(Opftb)3}2]− as stabilizing anions. Ultraviolet irradiation of Mn2(CO)10 under an NO(g) atmosphere yields Mn(CO)(NO)3. Further oxidation of this complex is achieved through reaction with both NO+[PFAA]− salts to yield the Mn(NO)4+[PFAA]− salts as deep red solids that are stable for months under an inert atmosphere. The Mn(NO)4+ cation is tetrahedral, and the linear NO− ligands in both salts indicate three-electron donation to the Mn(V) metal center. The rigorously tetrahedral geometry of the Mn(NO)4+[F{Al(Opftb)3}2]− salt indicates a pseudo-gas-phase environment about the cation due to the weakly coordinating behavior of the anionic PFAA.

Chromium(I) carbonyl radical cation

Synthesis of the chromium(I) homoleptic radical cation, [Cr(CO)6]•+, is achieved by use of the PFAAs [Al(Opftb)4]− and [F{Al(Opftb)3}2]− as stabilizing anions. Oxidation of Cr(CO)6 by NO+[PFAA]− salts under cold vacuum for short reaction times yields the kinetic product [Cr(CO)6]•+[PFAA]− as a pale yellow crystalline solid. Oxidation in a closed room-temperature vessel for long reaction times yields the thermodynamic product [Cr(CO)5(NO)]+[PFAA]− as an orange crystalline solid. Assignment of the thermodynamic and kinetic products was further supported by ab initio calculations. Fluxional Jahn-Teller distortions at room temperature are indicated by the presence of a broad band in the Raman spectra of these compounds.
Cobalt(I) sandwich complex

Cationic cobalt(I) sandwich complexes of the form Co(arene)2+[PFAA]− can be prepared via two synthetic routes (arene = mesitylene, benzene, fluorobenzene, o-difluorobenzene; PFAA = [Al(Opftb)4]− and [F{Al(Opftb)3}2]−). Reaction of Co(CO)5+[PFAA]− with the arene yields the cobalt(I) sandwich complex stabilized by a PFAA anion. Additionally, the oxidation of Co2(CO)8 with Ag+[PFAA]− and arene yields the cobalt(I) sandwich complex stabilized by a PFAA anion and produces silver metal and gaseous carbon monoxide. Structural analysis of Co(bz)2+[Al(Opftb)4]− reveals the sandwich complex is slightly staggered, twisted 6° from an eclipsed conformation. Bending of the C-H bonds by 3° towards the cobalt center yields D6 symmetry. The cobalt sandwich complex can be used as a precursor to synthesize Co(PtBu3)2 upon ligand substitution.

Nickel(I) complexes

Oxidation of Ni(COD)2 with Ag+[Al(Opftb)4]− yields Ni(COD)2+[Al(Opftb)4]− as an orange crystalline solid. In the solid phase the material is stable to air and moisture, but it is sensitive to diatomic oxygen in solution. EPR analysis reveals that 90% of the unpaired electron spin density is located on the nickel center. This nickel salt serves as a synthetically feasible precursor to a series of nickel(I) arene and phosphine cations stabilized by PFAAs. Reactions of Ni(COD)2+[Al(Opftb)4]− with mesitylene, benzene, or hexamethylbenzene result in substitution of one COD ligand. Arene ligand exchange results in partial electron spin delocalization onto the aromatic arene ligand, with 84-87% of the unpaired electron spin density located on the nickel center. Reactions of Ni(COD)2+[Al(Opftb)4]− with phosphines result in complete ligand substitution and dissociation of COD. Addition of the chelating phosphines 1,3-bis(diphenylphosphino)propane (dppp) and 1,2-bis(diphenylphosphino)ethane (dppe) yields four-coordinate distorted tetrahedral nickel cations.
Addition of triphenylphosphine yields a three-coordinate trigonal planar cation. Addition of bulky tri-tert-butylphosphine yields a two-coordinate linear cation.

Main group element complexes

AlCp2+

Reaction of AlCp3 with the strong Brønsted acid [H(OEt2)2]+[Al(Opftb)4]− yields [AlCp2]+[Al(Opftb)4]− as a colorless solid, as well as [AlCp2•2Et2O]+[Al(Opftb)4]−. The former complex exhibits nearly identical bonding to its analog AlCp*2+, while the Cp substituents in the latter compound exhibit η1 bonding due to the two diethyl ether molecules bound to the aluminum center.

Gallium(I) olefin complex

The first main-group homoleptic olefin compound isolable in bulk was synthesized using a stabilizing PFAA counterion. [Ga(PhF)2]+[Al(Opftb)4]− can be prepared via the oxidation of Ga by Ag+[Al(Opftb)4]− in the presence of fluorobenzene. The fluorobenzene ligands can then be displaced by COD to produce [Ga(COD)2]+[Al(Opftb)4]−. AIM analysis of the compound reveals minimal back-bonding to the olefin double bonds, characterizing the ligand-Ga interactions as primarily electrostatic. The gallium salt serves as a precursor to gallium phosphine complexes, as addition of triphenylphosphine yields [Ga(PPh3)2]+[Al(Opftb)4]−.

Germyl cation

Halide abstraction from BrGeR3 (R = C6H3(OtBu)2) using Ag+[Al(Opftb)4]− yields the germyl cation Ge[C6H3(OtBu)2]3+, stabilized by bulky ligands and a weakly coordinating PFAA anion. The aryl substituents are oriented in a paddlewheel conformation about the germanium center and possess shortened Ge-C bonds due to partial double-bond character. Due to the weakly coordinating nature of the PFAA anion, the solid-state structure of the salt reveals no ion-ion contacts between the germyl cation and the PFAA, giving rise to a very electrophilic germanium species.

Tin(II) dications

Various tin(II) dications can be synthesized with PFAAs as counterions. [Sn(MeCN)6]2+[Al(Opftb)4]2− can be prepared via the oxidation of tin metal with NO+[Al(Opftb)4]−.
Addition of pyrazine to this complex results in ligand substitution to produce [Sn(pyz)2(MeCN)4]2+, while addition of triphenylphosphine produces [Sn(PPh3)2(MeCN)4]2+•MeCN. The salt Sn(dmap)42+[Al(Opftb)4]2− is prepared by a different synthetic route: halide abstraction from SnCpCl by Li+[Al(Opftb)4]− yields [SnCp]+, which produces Sn(dmap)42+ upon addition of dmap. Sn(dmap)42+ adopts a see-saw geometry with dmap ligands stabilizing the Sn(II) center.

P9+

The cationic P9+ cluster can be isolated from the oxidation of P4 by NO+[Al(Opftb)4]−. In the multistep reaction, [P4NO]+ is a proposed intermediate based on analysis of collision-induced dissociation (CID) experiments. The complex coupling present in the 31P NMR spectrum of P9+ allowed for the determination of its structure.

Applications

Ionic liquids

Due to their low polarizability, large charge delocalization, and high conformational flexibility, PFAA salts are potentially useful ionic liquids. Several PFAA salts, including those of [Al(Ohfip)4]−, possess melting points of 273 K or below. Walden plots, which are created by plotting the logarithm of conductivity against the logarithm of inverse viscosity, indicate that several [Al(Ohfip)4]− ionic liquids are potentially better than the best commercially available ionic liquids, where better ionic liquids are those combining high conductivity with low viscosity.

See also

Non-coordinating anions Ionic liquids

References

Wikipedia Student Program Aluminates Alkoxy groups Perfluorinated compounds
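The Walden-plot construction described above can be sketched numerically: each sample contributes one point whose x-coordinate is the log of inverse viscosity (fluidity) and whose y-coordinate is the log of conductivity. The two samples below use invented numbers for illustration, not measured values for any [Al(ORF)4]− salt.

```python
# Sketch of Walden-plot coordinates: log10(1/viscosity) vs. log10(conductivity).
# A salt that lands high on the plot combines high conductivity with low
# viscosity. The sample values here are hypothetical.

import math

# (label, molar conductivity in S cm^2 mol^-1, viscosity in poise)
samples = [
    ("salt A", 10.0, 1.0),   # conductive, fluid: a "good" ionic liquid
    ("salt B", 1.0, 10.0),   # resistive, viscous: a "poor" ionic liquid
]

def walden_point(conductivity, viscosity):
    """Return (log10 fluidity, log10 conductivity) for one sample."""
    return (math.log10(1.0 / viscosity), math.log10(conductivity))

for label, cond, visc in samples:
    x, y = walden_point(cond, visc)
    print(f"{label}: x = {x:.2f}, y = {y:.2f}")
```

Plotting these points against the "ideal" KCl reference line is how the comparison to commercial ionic liquids is typically made.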
Polyfluoroalkoxyaluminates
[ "Chemistry" ]
3,325
[ "Substituents", "Alkoxy groups", "Functional groups" ]
66,309,977
https://en.wikipedia.org/wiki/DY%20Pegasi
DY Pegasi, abbreviated DY Peg, is a binary star system in the northern constellation of Pegasus. It is a well-studied SX Phoenicis variable star with a brightness that ranges from an apparent visual magnitude of 9.95 down to 10.62 with a period of . This system is much too faint to be seen with the naked eye, but it can be viewed with large binoculars or a telescope. Based on its high space motion and low abundances of heavier elements, it is a population II star system.

Observation history

The variability of this star was first reported by Otto Morgenroth in 1934, and the first light curves of its photometric behavior were constructed by A. V. Soloviev in 1938. This curve showed a rapid increase of 0.7 in magnitude followed by a slower decline. It was found to be an intrinsic variable with an "ultra-short" period of 105 minutes. The B−V color index of the star was found to vary with each cycle, corresponding to a change in spectral type from A7 at maximum to F1 at minimum. Direct observation of spectra showed a variation from A3 to A9. Evidence was found of small variations in the light curve between each cycle. By 1972, it was widely regarded as a dwarf cepheid, that is, a Delta Scuti variable. However, some astronomers classed it as a short-period RRs Lyrae variable. Photometric observations of DY Peg in 1975 by E. H. Geyer and M. Hoffman showed non-periodic changes to the light curve that suggested an overtone pulsation. A frequency analysis of observations made by A. Masani and P. Broglia in 1953 strengthened the evidence that DY Peg is a double-mode cepheid, showing a fundamental pulsation and a weaker first overtone with a period ratio of 0.764. By 1982, similarities with SX Phoenicis had been found, with both showing comparable drifts in their beat periods. Application of the Baade-Wesselink method provided a preliminary distance estimate to DY Peg of . In 2003, J. N. Fu and C.
Sterken suggested that much of the long-term trend in variability period changes could be explained by a highly eccentric orbital model, although it was not deemed a complete solution since some small residuals remained from the period 1930–1950. They computed a preliminary orbital period of with an eccentricity of . L.-J. Li and S.-B. Qian in 2010 found a mass estimate of the secondary in the range of 0.028 to , which suggests the companion may be a brown dwarf.

Properties

A 2020 analysis of data collected by the AAVSO found three independent frequencies in the variability of the visible component. The primary and secondary modes are radial pulsations with frequencies of 13.71249 and 17.7000 cycles per day, respectively, while a newly discovered non-radial mode has a frequency of 18.138 cycles per day. Consistent with being a population II star, it has a low metallicity. The stellar class ranges from A3 to F1 over each cycle, and the radius of the star varies by 3.5%. To explain certain discrepant properties of the system, H.-F. Xue and J.-S. Niu proposed that the primary may be accreting mass from an orbiting dust disk. This is conjectured to be leftover material shed by a white dwarf companion as it passed through the asymptotic giant branch. DY Pegasi has been classified as an SX Phoenicis variable on the basis of its low metallicity. However, a 2014 study by S. Barcza and J. M. Benkő found a much higher general abundance of heavy elements, with [M/H] = dex, approaching solar composition. (This notation indicates the base-10 logarithm of the ratio of "metals" 'M' to hydrogen 'H', compared to the same abundances in the Sun. A value of 0.0 is solar.) They proposed that this may instead be a high-amplitude Delta Scuti variable. The short period of this variable rules it out as an RR Lyrae variable. The properties of DY Pegasi are uncertain due to the presence of an unknown companion, but it appears to lie close to the main sequence at the red (cool) edge of the instability strip.
However, it has also been treated as a possible RR Lyrae variable, which would make it a horizontal branch star. As an old, low-metallicity SX Phoenicis variable, it is very similar to blue stragglers, which are formed from stellar mergers or mass transfer in binary systems.

References

Variable stars Binary stars Pegasus (constellation) BD+16 4877 218549 114290 Pegasi, DY SX Phoenicis variables
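The pulsation frequencies quoted for DY Peg can be cross-checked against the "ultra-short" period reported in the early literature, since period and frequency are reciprocals. The primary mode of 13.71249 cycles per day corresponds to roughly the 105-minute period found in 1938:

```python
# Convert a pulsation frequency in cycles per day to a period in minutes,
# and check it against the 105-minute period reported for DY Peg.

def period_minutes(freq_per_day):
    """Period in minutes for a frequency given in cycles per day."""
    return 24 * 60 / freq_per_day

print(round(period_minutes(13.71249), 1))  # 105.0
```

The same conversion gives about 81 minutes for the 17.7000 cycles/day secondary mode.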
DY Pegasi
[ "Astronomy" ]
1,009
[ "Pegasus (constellation)", "Constellations" ]
66,310,978
https://en.wikipedia.org/wiki/7255%20aluminium%20alloy
7255 aluminium alloy is a wrought alloy with a high zinc weight percentage (from 7.8 to 8.4%). It also contains magnesium and copper.

Chemical composition

Properties

References

Aluminium alloy table Aluminium–zinc alloys
7255 aluminium alloy
[ "Chemistry" ]
45
[ "Alloys", "Aluminium alloys" ]
66,311,080
https://en.wikipedia.org/wiki/7475%20aluminium%20alloy
7475 aluminium alloy (Adirium) is a wrought alloy with a high zinc weight percentage. It also contains magnesium, silicon and chromium. 7475 alloy cannot be welded. It exhibits more springback because of its high strength. It has high machinability.

Chemical composition

Properties

Applications

Shell casings Aircraft

References

External links

https://www.suppliersonline.com/propertypages/7475.asp https://www.makeitfrom.com/material-properties/7475-AlZn5.5MgCuA-Aluminum https://www.efunda.com/Materials/alloys/aluminum/show_aluminum.cfm?ID=AA_7475&show_prop=all&Page_Title=AA%207475

Aluminium alloys Aluminium–zinc alloys
7475 aluminium alloy
[ "Chemistry" ]
193
[ "Alloys", "Aluminium alloys" ]
66,311,428
https://en.wikipedia.org/wiki/Anaerobutyricum%20hallii
Anaerobutyricum hallii (formerly Eubacterium hallii) is an anaerobic bacterium that lives inside the human digestive system. References Anaerobes Bacteria described in 2018
Anaerobutyricum hallii
[ "Biology" ]
42
[ "Bacteria", "Anaerobes" ]
66,313,039
https://en.wikipedia.org/wiki/Lui%20Pao%20Chuen
Lui Pao Chuen is a Singaporean military scientist who has had roles as Chairman of the Advisory Board of the Singapore Space and Technology Association (SSTA); Advisor at the Ministry of National Development (MND); Senate Member at the Management Development Institute of Singapore (MDIS); Board of Trustees Member of the Singapore University of Technology and Design (SUTD); and Advisor of the National Research Foundation at the Prime Minister's Office.

Radio and Space Research Station

In 1965 Lui was appointed scientific officer in the Radio and Space Research Station in Singapore. The following year he was promoted to the Logistics Division to work as Officer-in-Charge of the Test and Evaluation Section. Ten years after his first appointment he became Special Projects Director.

MINDEF

Lui worked at the Ministry of Defence (MINDEF) for more than four decades. During this time he played a pivotal role in establishing the groundwork for Singapore's defence capabilities within the army.

Awards and recognition

1975: SAF Good Service Medal
1979: The Public Administration Medal (Silver)
1986: Chief Defence Scientist (first-time appointee)
1992: The Public Administration Medal (Gold)
1997: Long Service Award for three decades of service in Singapore
2002: Distinguished Alumni Award from the US Naval Postgraduate School
2002: National Science and Technology Medal
2002: Inducted into the Naval Postgraduate School Hall of Fame
2005: NUS Distinguished Science Alumni Award and Outstanding Service Award
2007: NUS University Outstanding Service Award
2008: Elected Honorary Fellow by the Institution of Engineers, Singapore
2011: IPS President Medal
2014: Institution of Engineers Singapore's Lifetime Engineering Achievements Award
2015: Inaugural Defence Technology Medal (Outstanding Service) from the Defence Minister

Education

Lui has a BSc (Hons) in Physics from Singapore University and an MA in Operations Research and Systems Analysis.
He was awarded a Postgraduate fellowship from MINDEF and from the Singapore National Academy of Science in 2011. References Year of birth missing (living people) Living people Military engineers 21st-century Singaporean scientists Singaporean academics National University of Singapore alumni
Lui Pao Chuen
[ "Engineering" ]
412
[ "Military engineers", "Military engineering" ]
66,313,605
https://en.wikipedia.org/wiki/Diiodine%20oxide
Diiodine oxide, also known as iodo hypoiodite, is an oxide of iodine that is equivalent to the acid anhydride of hypoiodous acid. This substance is unstable and very difficult to isolate.

Preparation

Diiodine oxide can be prepared by reacting iodine with potassium iodate (KIO3) in 96% sulfuric acid and then extracting it into chlorinated solvents.

Reactions

Diiodine oxide reacts with water to form hypoiodous acid:

I2O + H2O → 2 HOI

References

Iodine compounds Oxides
Diiodine oxide
[ "Chemistry" ]
116
[ "Oxides", "Salts" ]
66,315,143
https://en.wikipedia.org/wiki/Bristol%20Janus
During World War II, the aero-engine arm of the Bristol Aeroplane Company was preoccupied with developing and manufacturing radial piston engines, such as the Bristol Hercules and the more powerful Bristol Centaurus. However, in 1944 the company decided to form a Project Department to investigate the design of gas turbines. Initially the department was based at Tockington Manor, a large country house close to the main factory at Patchway, Bristol. A predominantly young team was formed and was initially tasked with studying turboprop engines. The Ministry of Supply asked the company for design studies for a 1,000 hp turboprop engine. An early decision was to adopt a centrifugal compressor configuration, because the engine would be so small that an axial unit would be challenging. A sufficient overall pressure ratio was obtained by mounting two centrifugal compressors in series on the HP shaft, driven by a single-stage turbine. Another important decision was to opt for a free power turbine, which delivered power to the forward-mounted propeller reduction gearbox. The two centrifugal compressors were mounted back-to-back, the outlet of the first unit being connected to the inlet of the second by four curved pipes. Four highly skewed combustion chambers were located between these pipes and discharged combustion gases into the turbine system located aft. The exhaust pipe was angled downwards. At some point the design became known as the Bristol Janus, which the company considered to be a very compact and light engine. Later the Ministry of Supply asked for the design to be scaled down to an output of 500 hp, to avoid conflict with the projects of other manufacturers. In the event, the Bristol Janus was never manufactured. However, the Project Department went on to design the Bristol Theseus, which became the first Bristol gas turbine to actually be manufactured and developed.
It passed a Type Test and was also flight tested.

Applications

Some version of the Bristol Janus was offered as the powerplant for the twin-engined Bristol Type 173, but the prototypes of this helicopter were eventually powered by 550 hp Alvis Leonides 73 air-cooled 9-cylinder radial engines.

Variants

Scaled 500 hp variant

Specifications (Janus 1)

See also

List of aircraft engines

References

Notes Bibliography 1940s turboprop engines Theseus Gas turbines
Bristol Janus
[ "Technology" ]
466
[ "Engines", "Gas turbines" ]
66,315,644
https://en.wikipedia.org/wiki/Stipitatic%20acid
Stipitatic acid is a tropolone derivative isolated from Talaromyces stipitatus (Penicillium stipitatum). References Tropolones Tropones Aromatic compounds
Stipitatic acid
[ "Chemistry" ]
45
[ "Organic compounds", "Aromatic compounds" ]
66,316,355
https://en.wikipedia.org/wiki/Michael%20Apter
Michael J. Apter (born 1939) is a British psychologist who was born in Stockton-on-Tees, County Durham and grew up in Bristol. He was educated at Clifton College (1965) and at Bristol University, where he gained both his Bachelor of Science degree and his Doctorate in Psychology in 1965, having also spent a doctoral year at Princeton University. He taught for twenty years at Cardiff University in Wales and has since held invited positions at Purdue University, the University of Chicago, Yale University, the University of Toulouse, and Georgetown University. He also taught at Northwestern University, where he received a teaching award. He has held visiting positions at several additional universities and is a chartered psychologist and fellow of the British Psychological Society. Apter's work has been mainly in motivation and phenomenology, particularly through the lens of reversal theory.

Selected works

Apter, Michael J. (1966) Cybernetics and Development. Oxford: Pergamon Press.
Apter, Michael J. (1970/2018) The Computer Simulation of Behaviour. London: Routledge.
Apter, Michael J. (1982) The Experience of Motivation: The Theory of Psychological Reversals. London: Academic Press.
Apter, Michael J. (1992) The Dangerous Edge: The Psychology of Excitement. New York: The Free Press.
Apter, Michael J. (2001) (Ed.) Motivational Styles in Everyday Life: A Guide to Reversal Theory. Washington D.C.: American Psychological Association.
Apter, Michael J. (2007) Reversal Theory: The Dynamics of Motivation, Emotion and Personality, 2nd Edition. Oxford: Oneworld.
Apter, Michael J. (2018) Zigzag: Reversal and Paradox in Human Personality. Leicestershire, U.K.: Matador.

References

Motivation Personality psychologists Artificial intelligence people 21st-century British psychologists 20th-century British psychologists Living people 1939 births People from Stockton-on-Tees
Michael Apter
[ "Biology" ]
387
[ "Ethology", "Behavior", "Motivation", "Human behavior" ]
66,316,846
https://en.wikipedia.org/wiki/Tropodithietic%20acid
Tropodithietic acid is a tropolone derivative produced by the marine bacteria Phaeobacter piscinae, Phaeobacter inhibens and Phaeobacter gallaeciensis. Its structure is composed of a dithiete moiety fused to tropone-2-carboxylic acid. References Tropolones Dithietanes Organic disulfides Carboxylic acids
Tropodithietic acid
[ "Chemistry" ]
94
[ "Carboxylic acids", "Functional groups" ]
66,317,267
https://en.wikipedia.org/wiki/Douglas%20McCauley
Douglas J. McCauley (born January 31, 1979, in Los Angeles, California) is a professor of ocean science at the University of California Santa Barbara, and serves as the Director of the Benioff Ocean Science Laboratory (formerly the Benioff Ocean Initiative), an applied ocean research center based at UC Santa Barbara's Marine Science Institute. His research focuses on using tools from ecology, data science, and marine policy for ocean conservation. Education McCauley received a B.A. in Integrative Biology and a B.A. in Political Science, both from the University of California, Berkeley in 2001. He received his Ph.D. in Biological Sciences at Stanford University in 2010. He conducted postdoctoral research at Stanford University, Princeton University, and the University of California, Berkeley. Career McCauley began as a deckhand on a fishing boat based in Los Angeles Harbor. Prior to his appointment at UC Santa Barbara he also worked with the US National Marine Fisheries Service and US Fish and Wildlife Service in Honolulu, Hawaii. McCauley is a professor at the University of California Santa Barbara (Department of Ecology, Evolution and Marine Biology & Marine Science Institute). In 2016 he founded the Benioff Ocean Initiative at UC Santa Barbara with a gift from Marc Benioff, founder and CEO of the cloud computing company Salesforce, and Lynne Benioff. The mission of the Benioff Ocean Science Laboratory is to leverage the power of science to solve pressing threats challenging ocean health and to inspire the replication of these successes. 
To date, the organization has taken on a variety of initiatives, including trying to slow the flow of plastic pollution to oceans through the Clean Currents Coalition, using technology to prevent collisions between ships and endangered whales through project Whale Safe, employing artificial intelligence to help spot white sharks at beaches through project SharkEye, creating programs for community scientists to aid in the conservation of endangered ocean species through the Spotting Giant Sea Bass project, and creating transparency tools to track the start of ocean mining. Research McCauley's primary areas of research expertise are in the areas of ocean science, conservation biology, and ecology. His pure ecological research has focused on the decline of wild animals in ocean ecosystems, understanding how energy and materials flow across living ecological systems, how species are interlinked in ecological networks, and how humans as a dominant ecological force shape the dynamics of nature. His applied science includes research on how the health of nature affects human nutritional health, disease dynamics, wealth systems, and social justice. Public Engagement McCauley has been active on topics of ocean health and ecosystem integrity at the United Nations, the World Economic Forum, and the US White House. He serves on the advisory board of the World Economic Forum's Friends of Ocean Action. Awards and honors In 2015, McCauley was named an Alfred P Sloan Research Fellow. In 2018, he was named an Early Career Fellow by Ecological Society of America. In 2019, he was awarded UC Santa Barbara's Plous award. He was also named a “Human of the Year” by Vice (magazine) Media. 
Selected works McCauley has published more than 85 publications. Some of his best-known publications are as follows: References External links McCauley Lab Douglas McCauley UCSB Profile Benioff Ocean Initiative 1979 births Living people Academics from Los Angeles American environmental scientists Scientists from Los Angeles Stanford University alumni University of California, Santa Barbara faculty 20th-century American scientists 21st-century American scientists Fellows of the Ecological Society of America
Douglas McCauley
[ "Environmental_science" ]
700
[ "American environmental scientists", "Environmental scientists" ]
66,317,881
https://en.wikipedia.org/wiki/Martin%20Gouterman
Martin Paul Gouterman (December 26, 1931 – February 22, 2020) was an American chemist who was a professor of chemistry at the University of Washington. He is remembered for his seminal work on the optical spectra of porphyrins, for which he developed a simple model generally referred to as Gouterman's four-orbital model. Early life and education Gouterman was born in Philadelphia, the only child of Bernard and Melba Buxbaum Gouterman. He attended Philadelphia Central High School and graduated in 1949. Gouterman was an undergraduate student at the University of Chicago, where originally he majored in piano performance but eventually studied physics. He stayed at Chicago for his doctoral research, where he started studying porphyrins. Research and career After graduating, Gouterman joined Harvard University, where he worked as a postdoctoral researcher with William Moffitt. Shortly after Gouterman arrived, Moffitt died of a heart attack during a squash game. Gouterman was quickly promoted to assistant professor, and spent his time using quantum chemical calculations to understand the photophysical properties of porphyrins. He primarily made use of the Hückel molecular orbital method to interrogate their optical spectra. Gouterman's molecular models, which included symmetry arguments and configuration interaction, were able to predict the intensity differences between the absorption bands of porphyrins. The so-called four-orbital model incorporates two almost-degenerate highest occupied molecular orbitals and two degenerate lowest unoccupied molecular orbitals. The Soret and Q bands that are visible in porphyrin spectra are the result of transitions between these four orbitals. Gouterman moved to the University of Washington in 1966, where he worked until his retirement. In Seattle, Gouterman continued to study the optical properties of porphyrins. 
He described how the chemical structures of porphyrins determine whether the spectral shape is 'normal', 'hyper', or 'hypso'. For example, the UV-visible absorption spectra of hyper porphyrins contain red-shifted peaks and extra bands due to ligand-to-metal charge transfer (LMCT) transitions. Amongst the complicated structures analysed by Gouterman were cytochrome P450–carbon monoxide complexes, whose electronic spectra include a split Soret band due to LMCT transitions. Awards and honors Elected Fellow of the American Physical Society University of Washington Minority Science and Engineering Program Faculty Excellence Award Creativity Certificate Award, Porphyrin Chemistry Community Selected publications Personal life Gouterman was a community organiser and activist. He campaigned to end the Vietnam War. In his early career Gouterman was not open about his sexuality. He came out as gay at around age 35, after he moved to Seattle. There he became an activist for gay rights and co-founded the Dorian Society. He also worked with the New Jewish Agenda and International Jewish Peace Union to promote Israeli-Palestinian peace. In the early 1980s, Gouterman acted as a sperm donor and helped a lesbian couple have a son. Through mutual acquaintances, he discovered the identity of his son and thereafter enjoyed a close relationship with him. In the last years of his life, Gouterman suffered from Alzheimer's disease. References American LGBTQ scientists Scientists from Philadelphia Chemists from Pennsylvania 1931 births 2020 deaths American organic chemists Computational chemists University of Chicago alumni University of Washington faculty Fellows of the American Physical Society
Martin Gouterman
[ "Chemistry" ]
706
[ "Organic chemists", "American organic chemists" ]
66,318,912
https://en.wikipedia.org/wiki/WASP-69
WASP-69, also named Wouri, is a K-type main-sequence star in the constellation Aquarius. Its surface temperature is 4782 K. WASP-69 is slightly enriched in heavy elements compared to the Sun, with a metallicity Fe/H index of 0.10, and is much younger than the Sun at 2 billion years. The data regarding starspot activity of WASP-69 are inconclusive, but spot coverage of the photosphere may be very high. Multiplicity surveys did not detect any stellar companions to WASP-69 as of 2020. Nomenclature The designation WASP-69 indicates that this was the 69th star found to have a planet by the Wide Angle Search for Planets. In August 2022, this planetary system was included among 20 systems to be named by the third NameExoWorlds project. The approved names, proposed by a team from Cameroon, were announced in June 2023. WASP-69 is named Wouri and its planet is named Makombé, after the Wouri and Makombé rivers. Planetary system In 2013, one planet, named WASP-69b, was discovered on a tight, circular orbit. Its equilibrium temperature is 886 K, but the measured terminator temperature is significantly higher by at least 200 K. The planet is losing mass at a moderate rate of 0.5 per billion years; earlier observations found no visible cometary tail, but one was detected in 2024 and measured to extend to at least 7 times the planet's radius. The planetary atmosphere is extremely hazy and contains a partial cloud deck with cloud tops rising to a pressure of 100 Pa. Its composition is mostly hydrogen and helium, and sodium was also detected in low concentration. The sodium may originate from volcanic moons, not from the planet itself. By 2021, the presence of hazes in the atmosphere of WASP-69b had been confirmed, along with a solar or super-solar water abundance. References Aquarius (constellation) Planetary transit variables K-type main-sequence stars Planetary systems with one confirmed planet J21000618-0505398 BD-05 5432 069 Wouri
WASP-69
[ "Astronomy" ]
431
[ "Constellations", "Aquarius (constellation)", "Astronomy organizations", "Wide Angle Search for Planets" ]
66,319,247
https://en.wikipedia.org/wiki/Katherine%20E.%20Stange
Katherine E. Stange is a Canadian-American mathematician and an associate professor of mathematics at the University of Colorado Boulder. She is a number theorist specializing in topics in arithmetic geometry. Education and career Stange earned her PhD in mathematics from Brown University in 2008 under the supervision of Joseph H. Silverman. She was a National Science Foundation (NSF) Postdoctoral Fellow and Junior Lecturer at Harvard University from 2008 to 2009. She also held postdoctoral fellowships at Simon Fraser University, the Pacific Institute for the Mathematical Sciences, and the University of British Columbia (2009–2011) and Stanford University (2011–2012). In 2012, Stange joined the faculty at the University of Colorado Boulder as an assistant professor. She was promoted to associate professor with indefinite tenure effective August 2018. Stange has been active in Women in Numbers, the prototype for the Association for Women in Mathematics' Research Collaboration Networks for Women. She was co-organizer and proceedings co-editor of Directions in Number Theory: Proceedings of the 2014 WIN3 Workshop and a project leader for Women in Numbers 4. Stange served on the American Mathematical Society Committee on Women in Mathematics (CoWIM) from 2019 to 2020. Recognition The Mathematical Association of America presented Stange and Lionel Levine with the 2013 Paul R. Halmos–Lester R. Ford Award for an outstanding paper in The American Mathematical Monthly for their paper "How to make the most of a shared meal: plan the last bite first". Stange was elected a Fellow of the Association for Women in Mathematics in the Class of 2021 "for leadership in the Women in Numbers Network by creating its website (the first of its kind), mentoring early-career researchers, organizing conferences, editing its proceedings volumes, and chairing its steering committee; and for service on AWM committees, including support of other research networks". 
She was named a 2021 Simons Fellow in Mathematics. References External links Katherine E. Stange's Author Profile on MathSciNet University of Colorado Boulder faculty Number theorists Arithmetic geometers Living people Fellows of the Association for Women in Mathematics Year of birth missing (living people) 21st-century American mathematicians 21st-century Canadian mathematicians 20th-century American mathematicians 20th-century Canadian mathematicians Brown University alumni University of Waterloo alumni 20th-century American women mathematicians 21st-century American women mathematicians
Katherine E. Stange
[ "Mathematics" ]
460
[ "Number theorists", "Number theory" ]
66,321,244
https://en.wikipedia.org/wiki/1ES%202344%2B514
1ES 2344+514 is a blazar first detected on December 20, 1995, with its official discovery announced in 1998. It is more than 5 billion light years away from Earth. It was discovered by the Whipple Collaboration at the Whipple Observatory using a 10-meter gamma-ray telescope. References Blazars Cassiopeia (constellation)
1ES 2344+514
[ "Astronomy" ]
76
[ "Cassiopeia (constellation)", "Galaxy stubs", "Astronomy stubs", "Constellations" ]
66,323,417
https://en.wikipedia.org/wiki/Transition%20metal%20dithiocarbamate%20complexes
Transition metal dithiocarbamate complexes are coordination complexes containing one or more dithiocarbamate ligands, which are typically abbreviated R2dtc−. Many complexes are known. Several homoleptic derivatives have the formula M(R2dtc)n where n = 2 or 3. Ligand characteristics Dithiocarbamates are anions. Because of the pi-donor properties of the amino substituent, the two sulfur centers show enhanced basicity. This situation is represented by the zwitterionic resonance structure that depicts a positive charge on N and negative charges on both sulfurs. This N to C pi-bonding results in partial double bond character. Consequently, barriers to rotation about this bond are elevated. As another consequence of their high basicity, dithiocarbamates often stabilize complexes in uncharacteristically high oxidation states (e.g., Fe(IV), Co(IV), Ni(III), Cu(III)). Dithiocarbamate salts are easily synthesized. Many primary and secondary amines react with carbon disulfide and sodium hydroxide to form dithiocarbamate salts: R2NH + CS2 + NaOH → R2NCS2−Na+ + H2O A wide variety of secondary amines give the corresponding dtc ligand. Popular amines include dimethylamine (Me2NH), diethylamine (Et2NH), and pyrrolidine ((CH2)4NH). Related ligands Dithiocarbamates are classified as derivatives of dithiocarbamic acid. Their properties as ligands resemble those of the conjugate bases of many related "1,1-dithioacids": Xanthates, ROCS2− Dithiophosphates, (RO)2PS2− Dithiocarboxylates, RCS2− Synthetic methods Commonly, metal dithiocarbamates are prepared by salt metathesis reactions using alkali metal dithiocarbamates: NiCl2 + 2NaS2CNMe2 → Ni(S2CNMe2)2 + 2NaCl In some cases, the dithiocarbamate serves as a reductant, followed by its complexation. 
A complementary method entails oxidative addition of thiuram disulfides to low-valent metal complexes: Mo(CO)6 + 2[S2CNMe2]2 → Mo(S2CNMe2)4 + 6CO Metal amido complexes, such as tetrakis(dimethylamido)titanium, react with carbon disulfide: Ti(NMe2)4 + 4CS2 → Ti(S2CNMe2)4 Homoleptic complexes Bis complexes nickel bis(dimethyldithiocarbamate), palladium bis(dimethyldithiocarbamate), platinum bis(dimethyldithiocarbamate), all square-planar complexes copper bis(diethyldithiocarbamate), a square-planar complex Tris complexes vanadium tris(diethyldithiocarbamate), an octahedral complex chromium tris(diethyldithiocarbamate), an octahedral complex manganese tris(dimethyldithiocarbamate), an octahedral complex iron tris(diethyldithiocarbamate), ruthenium tris(diethyldithiocarbamate), osmium tris(diethyldithiocarbamate), all octahedral complexes cobalt tris(diethyldithiocarbamate), rhodium tris(diethyldithiocarbamate), iridium tris(diethyldithiocarbamate), all octahedral complexes Tetrakis complexes titanium tetrakis(dimethyldithiocarbamate) molybdenum tetrakis(diethyldithiocarbamate) Dimetallic complexes iron bis(diethyldithiocarbamate), pentacoordinate Fe dimer zinc bis(dimethyldithiocarbamate), pentacoordinate Zn dimer dicobalt pentakis(diethyldithiocarbamate) cation, with a pair of octahedral Co(III) centers diruthenium pentakis(diethyldithiocarbamate) cation, with a pair of octahedral Ru(III) centers, two isomers Reactions Dithiocarbamate complexes do not undergo characteristic reactions. The ligands can be removed from complexes by oxidation, as illustrated by the iodination of iron tris(diethyldithiocarbamate). The complexes degrade to metal sulfides upon heating. Applications Dtc complexes find several applications: herbicides in the form of the iron and zinc derivatives Ferbam and Zineb, respectively vulcanization accelerators, such as zinc bis(dimethyldithiocarbamate) medicine, iron tris(dimethyldithiocarbamate) as a nitric oxide scavenger lubricants. 
Metal dithiocarbamates are also used for metal-to-metal lubrication purposes, mainly as anti-oxidation or anti-extreme pressure (EP) additives. 1–2% of such compounds can be added to internal combustion engine lubricant to increase extreme pressure performance at high operating temperatures. References Dithiocarbamates
Transition metal dithiocarbamate complexes
[ "Chemistry" ]
1,152
[ "Dithiocarbamates", "Functional groups" ]
75,015,194
https://en.wikipedia.org/wiki/Hiroyuki%20K.M.%20Tanaka
Hiroyuki K.M. Tanaka or Hiroyuki Tanaka is a Japanese physicist and inventor. Tanaka is a pioneer in the field of muography, a term he coined. His inventions include muometric navigation. Career Muon tomography works Tanaka pioneered the technique to image volcanoes with cosmic muons and is the most published author on this topic as of 2022. He was also the initiator of the use of muography for the ScanPyramids mission. Tanaka and his team exploited the unique properties of cosmic rays to map the interiors of hard-to-access places such as volcanoes, the cores of nuclear reactors, and pyramids. He was interested in "x-raying" the throat of a volcano with cosmic muons. In 2007, he led a team of scientists to install a muon detector near the summit of a volcano. By measuring muons traveling nearly horizontally through the volcano, the density of the volcano’s innards was determined, providing the first demonstration that magma and voids within the volcano could be detected with this technique. Tanaka has used muography to predict volcanic eruptions at an active volcano called Sakurajima, as well as to image ancient tombs, dams, and industrial furnaces. In 2013, he succeeded in creating a time-lapse video that captured magma motion over the course of several days. In 2016, he x-rayed a volcano from the sky, using a helicopter to fly the detectors around the volcano. In May 2023, in an interview in elDiario.es, he explained: "If you hide a coin behind a sheet of paper, under normal circumstances you wouldn't see it, but if you put it against the sun you see the shadow of the coin. Light is energetic enough to pass through a sheet of paper, but it cannot penetrate a pyramid. Cosmic muons are more penetrating, so muography allows us to image shadows through much thicker objects". In 2022, he expanded muography to image targets in the atmosphere and the ocean. 
First, Tanaka set his sights on tropical cyclones and measured the muon flux before a storm arrived. He showed the cyclone's cross sections and revealed variations in density to provide information on wind speed and storm strength. Tanaka commented “We may be able to design a completely new kind of cyclone forecast system” in an interview in IEEE Spectrum in July 2023. This appears to be the first time that anyone has made 3-D muon scans of the insides of a storm to remotely monitor the behavior of cyclones approaching coastal regions. In 2022, Tanaka proposed undersea muon detectors to monitor variations in water depth to forecast storm surges; with no moving parts, such detectors do not experience physical abrasion. In the same year, Tanaka used muography to measure meteotsunamis, another hazard linked to cyclones. Tanaka's research group, together with NEC Corporation, were the first in the world to measure the muon flux under the seafloor. An array of detectors called the Tokyo Bay Seafloor Hyper-Kilometric Submarine Deep Detector (TS-HKMSDD) was installed in the underwater section of the Tokyo Bay Aqua-Line to capture a meteotsunami induced by Typhoon Mindulle, which reached a height of 15 cm and decayed within a few hours. In a May 2022 Eos (magazine) interview, Tanaka explained "it [muography] can detect offshore tides." He also remarked in a July 2023 Physics World interview "We want to know before the meteotsunami hits land." This HKMSDD system in Japan detected the wave height deviation generated by the 2022 Hunga Tonga–Hunga Haʻapai eruption and tsunami. Muometric navigation works Tanaka founded the field of muometric navigation with his invention called the muometric positioning system (muPS) in 2020. This is a new kind of GPS using muons, which works underground, indoors, and underwater, even when obstructed by obstacles like rocks, water, and buildings. 
He spent several years creating this system to geolocate an object underground, underwater or inside a building. Tanaka explains “Cosmic muons are not intercepted like radio waves, making the technique suitable for universal indoor or underground navigation.” After the initial wired system, Tanaka invented a new wireless system called MuWNS, capable of conducting autonomous operation in places like underground tunnels, which had never been possible with conventional technology. Tanaka and his team demonstrated that MuWNS can perform at an accuracy of 1–10 m, and the size of the receiver could be miniaturized to chip scale and installed in smartphones in the future. He also plans to improve the positioning accuracy of MuWNS. In 2022, Tanaka began work to improve wireless time synchronization with cosmic rays by developing the Cosmic Time Synchronizer and the Cosmic Time Calibrator, capable of providing stable and accurate time synchronization without a GPS signal input. In a May 2022 interview for ScienceAlert, Tanaka compared this to the development of the lightbulb, saying "Thomas Edison lit up Manhattan starting with a single light bulb. Perhaps we should take that approach, starting with a city block, then a district, and eventually we'll synchronize the whole of Tokyo and beyond." According to an article in the National Tribune, the Cosmic Time Synchronizer could bring accurate timing abilities to remote sensing stations, or even underwater, places where other methods cannot operate. Tanaka applied cosmic-ray wireless time synchronization to wireless security, and named this technique Cosmic Coding and Transfer (COSMOCAT). He claims that the new scheme is more secure than other cryptosystems because this hardware random number generator does not require the sender and receiver of a message to exchange a cryptographic key. 
Tanaka has also expanded his COSMOCAT system to include a secured key storage system called Cosmic Coding and Transfer Storage (COSMOCATS). Tanaka commented in an interview in El País in February 2023, "If we dispense with this key-sharing idea and instead find a way to use unpredictable random numbers to encrypt information, the system might be immune". Views On innovation In an April 2022 interview with Science News, Tanaka stated “particles arriving from the universe have not been applied to our regular lives.” He is trying to change that. Tanaka believes the university has an important role in the progress of new areas of research. He said "However, in the future there will be a stronger public interest in how research can lead to resolving the issues that modern society faces." He pursues strategies to help with the creation of new industries based on research results, and the training of young technical experts acting at the forefront of corporations and other organizations. On collaboration When commemorating the establishment of a new research center for investigating muography, Tanaka said “it marks the first joint scientific effort between Hungary and Japan. Hungary has a rich scientific history, which makes this venture both significant and exciting.” Federico Iacobucci, a pianist, composer and conductor resident in Tokyo, who as a Conservatory student had the fortune of knowing Maestro Ennio Morricone personally, stated "Tanaka is also a great expert in classical music. Tanaka is also a great music lover and a relationship of profound dialogue was born between us." References Year of birth missing (living people) Living people Cosmic ray physicists 21st-century Japanese physicists 21st-century Japanese inventors People associated with the Global Positioning System
Hiroyuki K.M. Tanaka
[ "Technology" ]
1,544
[ "Global Positioning System", "People associated with the Global Positioning System" ]
75,016,231
https://en.wikipedia.org/wiki/Hirst%20Prize%20and%20Lectureship
The Hirst Prize and Lectureship is a biennial prize, jointly awarded by the London Mathematical Society (LMS) and the British Society for the History of Mathematics (BSHM). The prize recognises original and innovative contributions to the history of mathematics by an individual winner or by joint winners. The prize was first awarded in 2015 (solely by the LMS) as part of the LMS's 150th anniversary celebrations. The prize is named in honour of Thomas Archer Hirst, who was from 1872 to 1874 the fifth President of the LMS. Any mathematician or historian of mathematics is eligible for the prize — except for previous winners of the De Morgan Medal, LMS's Pólya Prize, Fröhlich Prize, Naylor Prize and Lectureship, Senior Whitehead Prize, Senior Anne Bennett Prize, or the Christopher Zeeman Medal. In the year for awarding the prize, the members of the Hirst Prize Committee, the members of the LMS and BSHM Councils are also ineligible. The administration of the Hirst Prize alternates between the LMS and the BSHM offices, but the LMS alone organises the Hirst Lectureship. The lecture normally takes place in the year following the award of the Hirst Prize, and the venue for the lecture is chosen by the winner (or winners) of the Hirst Prize. Recipients 2015: Edmund F. Robertson and John Joseph O'Connor (joint winners) 2016 lecture: History of Mathematics: Some Personal Thoughts 2018: Jeremy Gray 2019 lecture: Jesse Douglas, Minimal Surfaces, and the first Fields Medal 2021: Karine Chemla 2022 lecture: Algebraic work with operations in China, 1st century—13th century 2023: Erhard Scholz 2024 lecture: From Grassmann complements to Hodge duality References Awards established in 2015 History of science awards Mathematics awards Awards of the London Mathematical Society British lecture series 2015 establishments in the United Kingdom Recurring events established in 2015 Science lecture series Biennial events Mathematical events
Hirst Prize and Lectureship
[ "Technology" ]
399
[ "Science and technology awards", "History of science awards", "Mathematics awards" ]
75,016,439
https://en.wikipedia.org/wiki/CityTrees
CityTrees, also known as Robot Trees, Robo-Trees, and Moss Walls, are large air filters installed in many European cities, as well as Hong Kong, that remove pollutants from the atmosphere. CityTrees are large structures covered in moss. The filters are intended to curb harmful emissions from nearby traffic congestion, including fine dust particles and nitrogen oxides, of which they are claimed to take in 80%, although this has been disputed by some experts. The structures have been criticised, especially in Cork, Ireland, for their perceived ineffectiveness, possible wastage of energy and water, and costs of around €400,000 per year. Following this criticism, a Cork City Council debate on the CityTrees scheme was held on 13 November 2023. Councillors decided that the data available for the CityTrees was too inconclusive, partly due to the windy conditions where they were placed, and so they will remain for another 6–12 months. History and development The developers of CityTrees, German company Green City Solutions, previously created the first vertical moss farm in Bestensee, near Berlin. The first CityTree was constructed in Jena, Germany in late 2014. CityTrees have since been installed in Amsterdam, Berlin, Bern, Brussels, Budapest, Cork, Glasgow, London, Oslo, Paris, and Skopje. Outside of Europe, these structures are also present in Hong Kong. The operations director for Cork City Council, David Joyce, said that the CityTrees "are not there to replace a tree", because they perform a different function. Where a tree "takes in carbon dioxide and releases oxygen", the CityTrees "take in particulate matter — dust — from diesel engines, from burning fossil fuels, and it captures that dust and eats the dust so it takes 80% of that dust out of the air." Cork City Council were planning to plant 1 million trees from 2021 to 2028, but this was postponed following a lack of funding. 
Although it has been reported that one CityTree is equivalent to about 275 real trees, Green City Solutions' marketer Simon Dierks told Cork newsletter Tripe + Drisheen that this figure is "four or five years old", "not true", and "not a smart thing to say". Dierks restated the company's commitment to planting real trees, and cited shading effects and the ability to serve as a home for animals as things that real trees can do that CityTrees cannot. A newer version of the CityTree was unveiled at the shopping centre in 2020. The new version had been researched from 2018 to 2020 with a €2 million grant from the EU. Simon Dierks told Tripe + Drisheen that Green City Solutions is working on replacing the earlier iterations of the CityTree with the more recent model. Design The structures are under 4 meters tall, 3 meters wide, and 2.19 meters deep. CityTrees are covered in living Hypnum mosses that capture and consume pollutants around them. Each CityTree has a bench attached. To further capture pollutants, CityTrees use Internet of Things technology. They include a 40-inch TV screen that displays information about air quality in the area. The structures are self-sustaining, with solar panels and rainwater collection systems that require only a few hours of maintenance every year. The slim version of the CityTree, for use in more compact environments, omits the bench and takes up only 1.5 meters of floor space. CityTree 2020 The newer model, dubbed "CityTree 2020", is made of wood and has a hexagonal shape, which Green City Solutions called a "Bauhaus-inspired design". They stated that the 2020 version also has a reduced CO2 footprint. Green City Solutions claimed in February 2021 that the new CityTree can "inactivate 1/5 of coronaviruses". Reception Ireland When they were installed in Cork, UCC atmospheric scientist Dr. 
Dean Venables told the Irish Examiner that they were "a costly and ineffectual gimmick" and predicted that they would have little impact on Cork’s air quality. He stated that it would be better to regulate the production of emissions, instead of attempting to curb emissions after the fact. According to the Waltham Forest Echo, after the CityTrees were installed, "local residents took to pelting the units with fast food and pinning posters to them that said the money could have been spent on providing free school meals for school children in need." Labour party politician Peter Horgan called the CityTrees "the most expensive benches ever purchased by a local authority". Horgan criticised the cost of maintenance and installation, as well as the lack of a formal vote by the Cork Council for their installation. It was his belief that the installation of public footpaths, benches, or real trees would be more beneficial. Horgan has since said he desires "a full explanation of why and how they were purchased coupled with the spending expected on them, and a full vote of the council to determine whether they should continue". Professor John Sodeau, an expert in air pollution and climate change, contests the 80% pollutant capture figure. He has stated that the independent study by the Leibniz Institute for Tropospheric Research only mentioned a particle reduction of up to 30% for indoor measurements, and that no outdoor measurements are mentioned, besides a note that "the measurements were shown to be dependent on meteorological conditions". Many Cork City councillors have said they believe it is time to "pull the plug" on the project. North Macedonia The CityTrees in Skopje and Tetovo have withered after neglect by the city governments. Additionally, the Tetovo CityTrees have been almost entirely destroyed. A local resident told Sloboden Pečat "Here in Tetovo, whatever is set up in the interest of the city and its citizens is destroyed." 
Sloboden Pečat was critical of the lack of fines for polluters in Tetovo, and noted that despite the CityTrees, the amount of PM10 particles was only going to increase. The Netherlands In 2018, the city of Amsterdam started a 1 year trial for the CityTrees before deciding whether to install them permanently. At the trial's conclusion, only one-fourth of the plants survived, and experts from the Municipal Health Service and Wageningen University found that CityTrees took in less than 1% of particle matter, instead of the promised 20%. The experts had also found that, as CityTrees suck in the nitrogen oxides it then emits again, pollution along the tested street had actually increased. In response to this study, Amsterdam chose not to keep the CityTrees. United Kingdom Waltham Forest Council abandoned CityTrees installed in Leytonstone, following the death of the towers and the quiet removal of projects elsewhere, such as Westminster. Glasgow City Council abandoned an investment in CityTrees that were estimated to remove less than 0.02 percent of the city’s pollutants each year. Similar projects Another air filter modelled after trees, the BioUrban, was constructed in Puebla, Mexico in 2019. The company that created the BioUrban plans to expand to Turkey and Colombia. In Belgrade, Serbia, currently ranked 24th in the world for worst air quality, the Liquid3, also called a "liquid tree", was installed in 2019. The Liquid3 contains 600 liters of water and uses micro-algae that binds carbon dioxide. The company that installed the Liquid3 says it is "10-50 times more efficient than trees and grass at photosynthesising and creating pure oxygen." See also Bosco Verticale Folkewall Green building Green infrastructure Green roof Green wall Greening Particulate pollution Smog tower Urban forest Urban forestry Urban reforestation References Air pollution control systems Air filters Environment of the Republic of Ireland Controversies in Ireland
The following outline is provided as an overview of and topical guide to reptiles:

Reptile –

What type of thing are reptiles?

A reptile can be described as all of the following:
Lifeform
Animal
Chordate
Vertebrate
Amniote
Ectotherm

Types of reptiles

List of reptiles
List of largest reptiles
List of largest extant lizards
Lists of reptiles by region
List of U.S. state reptiles
Marine reptile
List of marine reptiles

Reptile classifications

List of reptile genera
Testudines
Crocodilia
Squamata
Rhynchocephalia

Examples of reptiles

Alligator
Crocodile
Lizard
Gecko
Iguana
Hybrid iguana
Komodo dragon
Snake
Python
Tortoise
Tuatara
Turtle

History of reptiles

Reptile egg fossil
History of the study of reptiles
2014 in reptile paleontology
2015 in reptile paleontology
2017 in reptile paleontology
2018 in reptile paleontology
2019 in reptile paleontology
2020 in reptile paleontology
2021 in reptile paleontology
2022 in reptile paleontology
2023 in reptile paleontology

Evolutionary history of reptiles

Evolution of reptiles
Archosauromorpha
Lepidosauromorpha

Extinct reptiles

List of largest extinct lizards
Parareptilia
Captorhinidae
Araeoscelidia
Neodiapsida
Drepanosauromorpha
Younginiformes
Ichthyosauromorpha
Thalattosauria
Lepidosauriformes

Characteristics of reptiles

Reptile scales
Reptile reproduction
Reptile incubation

Human impact on reptiles

Herpetoculture
Herping
Human uses of reptiles
Reptile conservation
Reptile centres
Reptile organizations

Endangered reptiles lists

List of least concern reptiles
List of data deficient reptiles
List of near threatened reptiles
List of vulnerable reptiles
List of endangered reptiles
List of critically endangered reptiles

Reptile centres

Reptile centre
Alice Springs Reptile Centre
Armadale Reptile Centre
Australian Reptile Park
Clyde Peeling's Reptiland
Colorado Gators Reptile Park
Crocodile Rehabilitation and Research Centre
Indian River Reptile and Dinosaur Park
Kentucky Reptile Zoo
Komodo Indonesian Fauna Museum and Reptile Park
Melaka Butterfly and Reptile Sanctuary
Reptile Gardens
Reptile World Serpentarium
Sleeping Turtles Preserve
Snakes Down Under Reptile Park and Zoo
The Reptile Zoo
West Australian Reptile Park

Reptile organizations

Amphibian and Reptile Conservation Trust
British Herpetological Society
Friends of Snakes Society
International Reptile Rescue
Katala Foundation
Snake Cell Andhra Pradesh
Society for the Study of Amphibians and Reptiles
United States Association of Reptile Keepers

Reptile publications

Books on reptiles
Periodicals on reptiles
Practical Reptile Keeping
Reptiles

Scientific journals covering reptiles

African Journal of Herpetology
Bibliotheca Herpetologica
Caribbean Herpetology
Chelonian Conservation and Biology
Herpetologica
Herpetological Conservation and Biology
Herpetological Monographs
Ichthyology & Herpetology

Reptile databases

Reptile Database

Persons influential in reptile-related activities

List of herpetologists

See also

Bird
Outline of birds
List of birds
Tremella wrightii is a species of fungus in the family Tremellaceae. It produces light brown to orange-brown, lobed, gelatinous basidiocarps (fruit bodies) and is parasitic on other fungi on dead branches of broad-leaved trees. It was originally described from Cuba.

Taxonomy

Tremella wrightii was first published in 1868 by British mycologist Miles Joseph Berkeley and American mycologist Moses Ashley Curtis, based on a collection made in Cuba by the American botanist Charles Wright, after whom it was named.

Description

Fruit bodies are firm, gelatinous, light brown to orange-brown, up to 5 cm (2 in) across, and lobed, often with inflated, horn-like processes. Microscopically, the basidia are tremelloid (subglobose to ellipsoid, with oblique to vertical septa), 4-celled, 11 to 18 by 8 to 11 μm. The basidiospores are ellipsoid, smooth, 5.5 to 7.5 by 4 to 6 μm.

Similar species

Tremella coffeicolor and Phaeotremella frondosa, also reported from the neotropics, are both brown and gelatinous, but with lobes that are more frondose, less inflated, and not or only rarely horn-like. Tremella laurisilvae, described from the Canary Islands, is very similar but said to be distinct.

Habitat and distribution

Tremella wrightii is a parasite on lignicolous fungi; its host species is unknown, though collections have been noted on pyrenomycetes. It is found on dead, attached or fallen branches of broad-leaved trees. The species was described from Cuba and has been reported from Brazil, Guyana, Trinidad, Panama, Belize, Cameroon, and Uganda.
Lidia Angeleri Hügel (born 1960) is an Italian mathematician whose research in abstract algebra and representation theory focuses on tilting theory and its offshoot, silting theory. She is a professor of algebra at the University of Verona.

Education and career

Angeleri Hügel was born in Milan in 1960. She studied mathematics at the Ludwig Maximilian University of Munich, completing a Ph.D. there in 1991 under the supervision of Wolfgang Zimmermann. She continued at the Ludwig Maximilian University of Munich as a postdoctoral researcher from 1992 to 2002, earning a habilitation there in 2000. In 2002, she was a Ramón y Cajal Fellow at the Autonomous University of Barcelona, and briefly held an associate professorship at the University of Insubria, before moving to the University of Verona as an associate professor. She became a full professor at the University of Verona in 2016. At the University of Verona, she served as Vice-Rector for International Relations from 2013 to 2019.

Book

Angeleri Hügel is the co-editor of the Handbook of Tilting Theory (Cambridge University Press, London Mathematical Society Lecture Note Series 332, 2007, with Dieter Happel and Henning Krause).
The history of France's civil nuclear program traces the evolution that led France to become, by the end of the 20th century, the world's second largest producer of nuclear-generated electricity in terms of units deployed, installed capacity, and total production. Since the 1990s, nuclear energy has furnished about three-fourths of France's electricity; in 2018, this share was 71.7%.

At the start of the 20th century, France made significant contributions to the discovery of radioactivity and its initial uses. In the 1930s, French scientists uncovered artificial radioactivity and the mechanisms behind nuclear fission, placing the nation in a leading position within the field. However, World War II halted France's ambitions. When Germany occupied France, research relocated to the UK and subsequently to the US, where the first nuclear reactors and weapons were developed.

After World War II, France initiated an extensive nuclear program with the establishment of the Commissariat à l'énergie atomique (CEA), but due to resource constraints, it took a considerable amount of time to achieve substantial progress. In the 1950s, the pace accelerated as France initiated a military nuclear program, which led to the creation of a deterrent force in the subsequent decade. Simultaneously, France commenced the construction of its first nuclear power plants, intended to produce both plutonium and electricity.

In the 1970s, spurred by the oil shocks, the Pierre Messmer government decided on "all-nuclear" power generation in France. This decision led to the construction of 58 standardized nuclear power reactors throughout the country over the next 25 years. Even though domestic technology was abandoned, French industrialists quickly assimilated the American technology they had chosen and exported it to South Africa, South Korea, and China.
At the same time, France was developing expertise in managing the nuclear fuel cycle, constructing the largest civil reprocessing plant in the world at La Hague as well as experimental fast-breeder reactors. Although the anti-nuclear movement had less of an impact in France than in other European countries from the 1980s onward, radioactive waste management emerged as a crucial issue in French public discourse. In addition, the end of the equipment phase, the liberalization of the electricity market, and a growing anti-nuclear movement bolstered by disasters such as Chernobyl and Fukushima have been reshaping the French nuclear industry. Consequently, since 2015, initiatives have been taken to decrease the proportion of electricity generated by civil nuclear power in France, in order to accommodate renewable energy sources. Nevertheless, construction of new-generation French reactors, including the European Pressurized Reactor (EPR), continues domestically and internationally. Research into future solutions concentrates on Generation IV reactors and nuclear fusion. Meanwhile, shutting down reactors presents new challenges. President Macron announced in February 2022 his plan to relaunch the civil nuclear program, constructing six to fourteen new reactors while also extending the lifespan of current reactors "as much as possible."

The scientific adventure of the atom (1895-1945)

In just fifty years, from the initial detection of X-rays to the development of nuclear reactors and weapons, the scientific pursuit of the atom completely changed the world. France was a prominent player in the field, courtesy of the Curie family's contributions, until World War II caused a severe setback to national efforts.

The origins (1895-1932)

By the end of 1895, the German physicist Wilhelm Röntgen demonstrated that a cathode-ray tube produced invisible radiation capable of penetrating matter.
This discovery, named "X-rays," earned him the first Nobel Prize in Physics and piqued the scientific community's interest. The next year, the French physicist Henri Becquerel, searching for a connection between phosphorescence and X-rays, observed that uranium salts, which are phosphorescent, emitted radiation even when they had not been exposed to light. These emissions came to be known as Becquerel rays or uranic rays, because they were believed to be unique to the element.

Pierre and Marie Curie discovered that uranic rays vary in intensity depending on the uranium ore. Starting in 1898, they aimed to isolate the element responsible for this phenomenon. By manually refining hundreds of kilograms of uraninite, they discovered a first element in July, which they named polonium in honor of Marie's homeland. In December, they found a second, even more active element: radium. Additionally, they co-discovered the activity of thorium. The Curies were awarded the Nobel Prize in Physics in 1903, jointly with Henri Becquerel, for their discovery of natural radioactivity. In 1911, Marie Curie received the Nobel Prize in Chemistry for her isolation of radium and polonium.

In 1899, the New Zealand-born physicist Ernest Rutherford discovered two types of radiation, alpha and beta rays, which were less penetrating than X-rays. In 1903, he linked these rays to the Curies' discoveries and proposed the hypothesis that radioactive elements form chains: a heavier element spontaneously loses part of its substance through decay, giving rise to a lighter element, and so forth. In 1911, following a famous experiment, Rutherford proposed a novel depiction of the atom's structure: a nucleus bearing positively charged particles that attracts negative charges, namely electrons.
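Rutherford's decay-chain hypothesis rests on the exponential decay law. The following is a minimal illustrative sketch, not part of the historical account; the half-life used is the commonly cited value for radium-226, which is an assumption added here rather than a figure from this article:

```python
import math

def remaining_fraction(t_years: float, half_life_years: float) -> float:
    """Fraction of a radioactive sample still present after t years,
    following N(t) = N0 * exp(-ln(2) * t / T_half)."""
    return math.exp(-math.log(2) * t_years / half_life_years)

# Radium-226 has a half-life of about 1600 years (commonly cited value).
T_HALF_RA226 = 1600.0

# After one half-life, half the sample remains; the rest has decayed
# down the chain toward lighter elements, as Rutherford proposed.
print(remaining_fraction(1600.0, T_HALF_RA226))  # ~0.5
print(remaining_fraction(4800.0, T_HALF_RA226))  # ~0.125 (three half-lives)
```

The same law, applied element by element, yields the decay chains linking uranium and thorium to their lighter descendants.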
Niels Bohr, a Danish physicist, modified Rutherford's model using the quantum theory initiated by Max Planck, showing that electrons do not collapse toward the nucleus under this attraction but instead remain at specific energy levels; this is known as Bohr's model. Finally, in 1919, Rutherford demonstrated that the hydrogen atom's nucleus is present in all other nuclei and named it the proton. He also hypothesized the presence of neutral, uncharged particles in the nucleus alongside the protons.

In 1930, the Germans Walther Bothe and Herbert Becker noticed that when alpha rays bombarded lithium, beryllium, and boron, these elements emitted "ultra-penetrating" rays. Intrigued by these findings, Irène Curie, the daughter of Pierre and Marie Curie, and her husband Frédéric Joliot set out to understand the nature of this radiation, and in January 1932 found that it could set protons in motion. Following this discovery, James Chadwick, an Englishman and former student of Rutherford, uncovered the last piece of the atomic puzzle the following month: the neutron.

Discovery of nuclear energy (1933-1939)

Nuclear physics was born from the work of Frédéric Joliot and Irène Curie. In late 1933, they used alpha radiation to bombard aluminum foil and demonstrated the production of radioactive phosphorus-30, a phosphorus isotope. They concluded that irradiation could produce new radioactive elements. From the outset, they anticipated numerous applications in medicine, particularly with radioactive tracers. For this groundbreaking discovery, they were honored with the 1935 Nobel Prize in Chemistry.

In 1934, Enrico Fermi of Italy found that slowed-down neutrons (such as those that have passed through paraffin) were more effective than fast ones, pointing to the need for "moderating" materials like heavy water in future facilities. Numerous European research laboratories were bombarding nuclei to analyze the effects.
In December 1938, Lise Meitner and Otto Frisch, two physicists exiled from Nazi Germany who were together in Sweden, provided the crucial explanation for nuclear energy: the phenomenon of fission. In February 1939, Niels Bohr established that natural uranium contains two isotopes, 238U and 235U, but that only uranium-235 is "fissile". Since it is the rarer of the two, accounting for only 0.72% of natural uranium, uranium ore must be enriched to increase the proportion of fissile material and obtain a more reactive fuel.

Finally, in April 1939, the French physicists Frédéric Joliot-Curie, Hans von Halban, Lew Kowarski, and Francis Perrin published an article in the journal Nature. The article demonstrated that the fission of a uranium nucleus is accompanied by the emission of 3.5 neutrons (later corrected to 2.4), which can in turn split other nuclei, producing a "chain reaction". This fundamental finding in nuclear physics was published shortly before their American competitors could do so.

In May 1939, the four French physicists filed three patents. The first two concerned energy production from uranium; the third, kept secret, covered the improvement of explosive charges. Joliot, who strongly believed in the future significance of atomic energy, met Minister of Armament Raoul Dautry in the fall of the same year. Dautry provided full support for the development of explosives and the production of energy.

July 1939 saw the start of experiments at the Collège de France laboratory on the release of energy through chain reactions, which continued at the Atomic Synthesis Laboratory. To safeguard his patents, Joliot established an industrial network around him, primarily via an agreement between the CNRS and Union minière du Haut-Katanga, which owned the uranium of the Belgian Congo. In the fall of 1939, the Joliot team recognized that France lacked the capability to enrich natural uranium in its fissile isotope (235U). As a result, they opted to use heavy water to build an atomic reactor.
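The enrichment constraint can be sketched with a simple two-stream mass balance. This is an illustrative sketch only: the 0.72% feed assay comes from the text above, while the product and tails assays below are typical values assumed for the example, not figures from this article:

```python
def feed_required(product_kg: float, x_product: float,
                  x_feed: float = 0.0072, x_tails: float = 0.0025) -> float:
    """Natural-uranium feed needed to yield `product_kg` of enriched
    uranium, from the mass balance on total uranium and on 235U:
        F = P * (x_p - x_t) / (x_f - x_t)
    where x_f, x_p, x_t are the 235U fractions of feed, product, tails."""
    return product_kg * (x_product - x_tails) / (x_feed - x_tails)

# Roughly 5.9 kg of natural uranium per kg of 3%-enriched reactor fuel:
print(feed_required(1.0, 0.03))
# But dozens of kilograms per kg of highly enriched uranium (20% 235U),
# which is why enrichment was long out of reach for the early French program:
print(feed_required(1.0, 0.20))
```

The steep feed requirement (and the separative work behind it) explains why the Joliot team turned to heavy water and natural uranium instead.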
Following a request from the Collège de France, in February 1940 Raoul Dautry dispatched Jacques Allier on a clandestine expedition to Norway to secure the complete inventory of heavy water held by Norsk Hydro, a partially French-owned corporation, which Germany also coveted.

Suspension of research in France (1940-1945)

Germany's invasion of France in May 1940 halted the work. In early June, the laboratory quickly relocated from Paris to Clermont-Ferrand, but the war was already lost. On June 18, 1940, as General de Gaulle launched his renowned appeal on London radio, Hans Halban and Lew Kowarski departed from Bordeaux for the United Kingdom with the stock of heavy water. The stock of uranium was concealed in Morocco and in France. Joliot opted to remain at his post at the Collège de France and care for his unwell wife, instead of leaving. During the occupation, German physicists arrived at his lab to examine the Collège de France cyclotron. After extensive questioning, Joliot-Curie agreed to work alongside the German physicists, headed by Wolfgang Gentner, with whom he had previously studied at the Curie laboratory, to ensure his laboratory's continuation. In 1941-1942, he enlisted in the Front National de Lutte pour la Libération et l'Indépendance de la France, an underground resistance movement established by the French Communist Party. He went into hiding in April 1944.

The exiled members of the Collège de France delivered French secrets to the Allies, primarily to the British, who were spearheading an atomic bomb project through the MAUD Committee. However, the US nuclear program excluded them (except for Bertrand Goldschmidt) for economic reasons (pertaining to the three patents) and political ones (distrust of de Gaulle and Joliot).
Isolated at the Cavendish Laboratory in Cambridge and later, from the end of 1942, at the Montreal laboratory, they contributed to the work of an Anglo-Canadian team. Shortly after the US joined the war in December 1941, Louis Rapkine directed the establishment of a scientific office for the Delegation of Free France in New York. This office facilitated the integration of French scientists in exile, including Pierre Auger and Jules Guéron, into the Anglo-Canadian project led by Halban, as they declined to acquire American citizenship and work with the American teams. Their amassed knowledge proved pivotal in revitalizing French research in this domain.

On July 11, 1944, Pierre Auger, Jules Guéron, and Bertrand Goldschmidt confidentially informed General de Gaulle, who was visiting Ottawa (Canada), about the United States' Manhattan Project. Shortly after the liberation of Paris in August 1944, the first French scientists, including Auger, came back from Montreal. In April 1945, the towns housing the German atomic scientists were captured by the 1st French Army, while Operation Alsos raided the labs, including the Haigerloch atomic reactor, apprehended the Reich's scientists, and left behind only a handful of technicians. The French nuclear program had to go it alone, having been ostracized by the Anglo-Saxons, deprived of uranium sources, and given meagre war prizes.

The birth of a nuclear program (1945-1952)

In March 1945, with the war still ongoing, Raoul Dautry (Minister of Reconstruction and Urban Planning in the Provisional Government) informed General de Gaulle (President of the Provisional Government) of the benefits of nuclear power for reconstruction efforts.
The atomic bombings of Hiroshima and Nagasaki on August 6 and 9, 1945, revealed to the world the advances made by American research in this area. On August 31, de Gaulle tasked Raoul Dautry and Frédéric Joliot (director of the National Center for Scientific Research) with proposing an organization that could unify research and restore France's position in the global atomic science field.

Creation of the French Atomic Energy Commission (CEA)

The French Atomic Energy Commission (Commissariat à l'énergie atomique, CEA) was established by General de Gaulle on October 18, 1945. Its objective was to "conduct research on scientific and technical aspects of atomic energy usage in multiple areas, including industry, science, and national defense". Reporting directly to the President of the Council, the CEA was responsible for managing atomic energy from uranium prospecting through the construction of power reactors. To meet the demands of both scientists and politicians, the CEA was jointly led by Joliot, serving as High Commissioner for Atomic Energy, and Dautry, as General Administrator representing the government.

As Joliot, a member of the French Communist Party, exerted his influence, opposition to the military application of atomic energy gained traction across the CEA. As High Commissioner, he advocated for France to take a stance against military nuclear power through a global ban on atomic weapon production and to prioritize the development of large-scale power reactors. Given France's neutral position between the two superpowers and the military's need for resources to manage decolonization, Ambassador Alexandre Parodi reinforced this political position on June 25, 1946, during the first hearing of the UN Atomic Energy Commission. It became the official stance of the Fourth Republic, allowing it to conceal its weaknesses and, later, its secrets.
Despite the Quebec Agreement signed between the United States and the United Kingdom in August 1943, which prohibited the disclosure of their nuclear research, the British permitted the remaining French scientists to return home with a few notes, as a gesture of gratitude to France. In 1946, the "Canadians" Lew Kowarski, Jules Guéron, and Bertrand Goldschmidt joined the CEA; their valuable notes formed the foundation of French nuclear knowledge and enabled the CEA to train the first generation of atomic scientists, both civilian and military. On March 8, 1946, the CEA settled into the Fort de Châtillon in Fontenay-aux-Roses, southwest of Paris. The initial plan called for the immediate construction of two atomic piles, one using heavy water and the other graphite, and for the establishment of a 100-megawatt (MWe) nuclear power plant within a decade.

The uranium rush

In order to carry out the CEA's program independently, France required uranium from sources it could control. Exploration permits were filed in the French colonies as early as the summer of 1945. The pre-war reserve, discreetly brought back from Morocco, met only the requirements for constructing an initial pile. However, the existence of uranium ores in French territory, in the Morvan region and in Madagascar, had been verified in the 19th century. Starting in March 1946, prospecting teams trained at the Laboratoire de Minéralogie of the National Museum of Natural History began their search. The first CEA prospectors, former maquisards, scoured the country equipped with Geiger counters. Within two years, the workforce of the CEA's Direction des Recherches et Exploitations Minières (DREM) grew from 10 to nearly 300 employees. Finally, in November 1948, the first uranium deposit was discovered at Saint-Sylvestre, in the Limousin region.
The La Crouzille deposit commenced production on July 10, 1950, and was followed by numerous others in Vendée (1951), Brittany, Auvergne (1954), and Languedoc (1957), operated either by the CEA or by private firms. Within a decade, France emerged as the leading producer of uranium in Europe, with a total of 217 mines active up to 2001.

Research conducted outside mainland France, in Madagascar and Ivory Coast (1946), Morocco (1947), French Congo (1948), and Algeria and Cameroon (1950), proved inconclusive. To encourage prospecting in the colonies, the CEA abolished its research monopoly in 1954. Aerial prospecting eased the task, particularly over the Sahara. The first significant deposit was found in 1956 at Mounana in Gabon. The largest reserves, at Arlit and Imouraren in Niger, were discovered in 1965. Prospecting even expanded beyond French possessions, including Cluff Lake in western Canada in 1968. These deposits, France's primary sources of supply, became foreign with decolonization but remained under the control of the CEA. This would contribute to the success of its successor, Compagnie Générale des Matières Nucléaires (Cogema), which became the top producer of natural uranium in the Western bloc by 1980.

Zoé, France's first atomic pile

The CEA established a facility in a section of the Bouchet powder works near Ballancourt-sur-Essonne in January 1948 to refine uranium ore into pure oxide. However, the conversion of this substance into uranium metal posed challenges, delaying reactor construction. To meet the demands of the public and politicians swiftly and secure the necessary subsidies for the CEA, a decision was made to construct a small reactor fueled with natural uranium oxide, despite its limited technical value.
As French-produced graphite remained too impure for use as a moderator, Kowarski, drawing on his experience building the Canadian ZEEP heavy-water reactor, was tasked with constructing a comparable one. The first French atomic pile, EL1 or "Zoé", diverged (reached criticality) for the first time on December 15, 1948. The first Soviet reactor was still secret, making Zoé the first atomic pile known to operate outside an Anglo-Saxon nation, and a source of national pride. Despite producing only a few kilowatts, it aided both physics studies, by deepening the comprehension of nuclear reactions, and the production of radioisotopes for research and industry. On November 20, 1949, Goldschmidt and his colleagues, using a process they had developed in Canada, isolated the first four milligrams of plutonium from fuel irradiated in the Zoé pile. This was a significant milestone, since the synthetic element was essential for designing a first atomic bomb.

The construction of the Saclay center south of Paris started in the same year, as the Châtillon fort had become overcrowded. The center was designed by Auguste Perret. In 1952, a Van de Graaff accelerator was commissioned there, and the second heavy-water reactor (EL2) became operational on the same site. More powerful than Zoé, EL2 used uranium metal fuel and was gas-cooled; its purposes included physics and metallurgy experiments and producing artificial radioisotopes in greater quantities.

Following the Prague coup and the Berlin blockade, the Soviet Union detonated its first atomic bomb, marking the start of the Cold War. Against this backdrop, Frédéric Joliot deliberately launched the Stockholm Appeal on March 19, 1950. However, he overstepped his bounds the next month when he stated: "Progressive and Communist scientists will never contribute their knowledge to wage war against the Soviet Union". As a result, he was promptly dismissed from his position.
Raoul Dautry capitalized on the opportunity to restructure the CEA. The following year, he selected Francis Perrin, who had not supported the appeal, as its new High Commissioner. Dautry passed away on August 21, 1951. On November 8, he was succeeded by Pierre Guillaumat, a Companion of the Liberation, who removed communist scientists and steered the CEA in a fresh military-industrial direction.

The deployment of the nuclear program (1952-1969)

As the CEA lacked the technical and financial resources to enrich natural uranium in its fissile isotope (235U), it could manufacture neither nuclear weapons nor light-water reactors. Heavy-water reactor technology was one viable alternative, but producing heavy water was a costly endeavor. France therefore followed in the footsteps of the UK and turned to graphite-moderated reactor technology. This technology used natural uranium as fuel, graphite as a neutron moderator, and gas as a coolant to cool the core and carry heat to a water-steam circuit driving the turbo-alternator. The first three elements (fuel, moderator, coolant) could give France the means to build the bomb, while the steam circuit and turbo-alternator could propel the country into the nuclear power industry.

Plutonigenic reactors

The National Assembly approved the first five-year nuclear energy plan on July 24, 1952. The plan called for two experimental reactors at the Marcoule nuclear site, where construction began in 1955; the construction of a third reactor commenced shortly thereafter. In addition to generating electricity, these reactors would produce plutonium in quantities sufficient to support a civil advanced-reactor program, and potentially a military one, at a cost three times lower than that of highly enriched uranium. The G1 reactor, a prototype optimized for plutonium production, diverged on January 7, 1956, producing 40 MWt.
However, G1 consumed more electrical energy than it generated. Its launch marked the start of collaboration between the CEA and industry, supported by an agreement with Électricité de France (EDF) for electricity production, which began on September 28 with a capacity of 2 MWe. The G2 and G3 reactors, commissioned in 1958 and 1959 respectively, were cooled by pressurized carbon dioxide and significantly more powerful (150 MWt, 40 MWe) than previous models. These reactors were poised to set the benchmark for the power generation industry. A reprocessing plant (UP1) was also commissioned in 1958 to extract plutonium from spent fuel.

On the military side, the decision to develop an atomic bomb was made by the Pierre Mendès France government at the end of 1954, but only became official after Charles de Gaulle's inauguration as President of the Council on June 1, 1958. During the first Defense Council on June 17, de Gaulle terminated the military nuclear collaboration project established among France, Germany, and Italy in 1955, and accelerated the national program by confirming the schedule of France's first military test. Control of nuclear weapons and their possession as a deterrent formed the basis of de Gaulle's policy of national independence, applied in both the military and energy sectors. France detonated its first atomic bomb, "Gerboise bleue", as planned on February 13, 1960, at the Reggane testing site in Algeria.

In July 1957, Saclay inaugurated its third heavy-water reactor (EL3), which used enriched uranium supplied by the United States, which had relaxed its non-proliferation policy since the Atomic Energy Act of 1954. Nonetheless, becoming self-sufficient in fuel production was crucial to mastering the entire nuclear cycle for both military and civilian purposes.
Therefore, Saclay began operating a pilot plant for uranium enrichment by gaseous diffusion (PS1) in April 1958. After plans for a Franco-British, then European, plant were scrapped in the name of national independence, construction began on a military uranium enrichment facility at the end of 1958. The selected process required substantial electricity, so the industrial complex was erected near the Donzère-Mondragon dam at Pierrelatte. The enrichment cascades, commissioned from 1964 to 1967, produced highly enriched uranium (20% or more of isotope 235) for thermonuclear weapons. Generating reactors Following the success of the experimental reactors at Marcoule, EDF was tasked with establishing the French nuclear power program using the same type of reactor, Uranium naturel graphite gaz (UNGG). To achieve competitiveness quickly, the state-owned company launched reactors of increasing power, learning from the construction of previous models without waiting for their commissioning. To minimize expenses, a series of prototypes was built: three at the Chinon site (EDF1, EDF2, and EDF3), followed by two at Saint-Laurent-des-Eaux (EDF4 and EDF5). The last prototype, at Bugey, was intended to launch a set of six identical power plants paving the way toward 1,000 MWe units using new fuel types. However, as work on Bugey-1 progressed, it became clear that UNGG technology had reached its limits. Between 1957 and 1965, unit power output increased from 70 MWe (EDF1) to 540 MWe (Bugey-1), but any further increase would make the reactor difficult to control. Yet increasing power output and reducing the cost per kilowatt-hour (kWh) was the only way to compete with domestic thermal power stations and the American light-water reactors that France's European neighbors were adopting. As this proved impossible, no new UNGGs were built, and only one was exported, to Vandellòs in Spain.
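The enrichment figures mentioned above, natural uranium at about 0.7% 235U versus 20% or more for thermonuclear weapons, imply striking feed requirements. This can be illustrated with the standard two-stream enrichment mass balance; the sketch below is a generic textbook calculation with an assumed tails assay, not a description of the Pierrelatte plant's actual cascades:

```python
def feed_per_kg_product(x_p, x_f=0.0072, x_t=0.003):
    """Feed required per kg of product, from the enrichment mass balance:
    F = P + T and F*x_f = P*x_p + T*x_t  =>  F/P = (x_p - x_t) / (x_f - x_t).
    x_f: feed assay (natural uranium, ~0.72% 235U)
    x_t: tails assay (0.3% assumed here)
    """
    return (x_p - x_t) / (x_f - x_t)

# Natural uranium feed needed per kg of uranium enriched to 20% 235U,
# the threshold for highly enriched uranium cited in the text.
print(f"{feed_per_kg_product(0.20):.0f} kg of feed per kg of product")
```

With these assumed assays, roughly 47 kg of natural uranium are needed per kilogram of 20%-enriched product, which hints at why the gaseous diffusion complex demanded so much electricity.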
By the end of the decade, graphite-gas nuclear power supplied only 5% of the electricity generated in France. The future of this energy source was all the more uncertain as oil prices were at historic lows. Meanwhile, EDF was exploring other technologies alongside the French UNGG line. In 1966, French and Belgian utilities collaborated under the Euratom framework to build an American pressurized water reactor (PWR) near the Franco-Belgian border: the prototype Ardennes power plant (later renamed Chooz A). This was not Belgium's first venture into such a facility, as it had already hosted Europe's first American PWR (BR-3) at Mol in 1962. There was also a plan with Switzerland to build a boiling water reactor (BWR) at Kaiseraugst, but it was eventually abandoned after several postponements. The CEA, which trusted its UNGGs, decided to do the same while preparing for the post-UNGG era: participating in a British high-temperature gas reactor prototype (Dragon) in 1964; building an experimental reactor moderated by heavy water and cooled with carbon dioxide (EL4) at Brennilis in 1966; and then building a fast neutron reactor (Rapsodie) in 1967. The latter served as the precursor to Phénix and Superphénix. Research reactors The Cadarache center, located near Manosque, was created in 1960 to house Rapsodie and to study nuclear propulsion for ships. It became the fifth nuclear research facility not reserved for military purposes, following Fort de Châtillon, Saclay, Marcoule, and Grenoble. Throughout the 1960s, each center built an average of two research reactors, for a total of ten. Minerve (1959), Marius (1960), Peggy (1961), César (1964), Éole (1965), and Isis (1966) were critical assemblies used for neutronic calculations on the fuel lattices of various nuclear reactors.
Cabri (1963) researched "power excursions", while Pégase (1963) and Osiris (1966) focused on materials and fuels for nuclear power plants. Osiris also produced doped silicon and radioelements for industrial and medical purposes, including technetium-99m, of which it was one of only three producers worldwide. Harmonie (1965) and Masurca (1966) conducted experiments on breeding. The High Flux Reactor (RHF), the world's most powerful neutron source, facilitated essential materials research from 1971 onward. Phébus and Orphée were added in 1978 and 1980, respectively, to simulate accidents that could affect PWRs and to complement the RHF. The industrial turning point (1969-1983) In the early 1960s, the Commission pour la production d'électricité d'origine nucléaire (Péon Commission), established in 1955 to evaluate the costs of building nuclear reactors, recommended the development of nuclear energy to compensate for the insufficiency of domestic energy resources. Two contrasting positions emerged: that of the CEA, which supported the national dual-purpose (civil and military) UNGG reactor, and that of EDF, which advocated the development of a more competitive "American" reactor (enriched uranium and light water). In January 1967, a technical report jointly produced by the CEA and EDF found that the cost of producing a kWh with a UNGG reactor was almost 20% higher than with a pressurized water reactor (PWR) of the same power (500 MWe). Nevertheless, in December, General de Gaulle authorized the construction of two UNGG reactors at Fessenheim, in Haut-Rhin, while concurrently collaborating with Belgium on PWRs to preserve national independence. After Chooz, this cooperation resulted in the Tihange Nuclear Power Station, commissioned in 1975.
The product of a successful technology transfer, this 950 MWe plant, exceptionally powerful for its time, was entirely designed by French and Belgian design offices, giving both countries a commanding position in the industry. Abandonment of the UNGG process The equipment bidding process for the UNGGs at Fessenheim was a disaster, as every manufacturer submitted an overpriced offer to mitigate its own risk. On November 15, 1968, the Energy Commission proposed selecting a supplier on economic criteria, leading de Gaulle to concede to the inevitable. However, the responsibility for officially abandoning the national line in favor of light-water reactors fell to his newly elected successor, Georges Pompidou, and the Jacques Chaban-Delmas government. They made an interministerial decision on November 13, 1969, citing two arguments: the simplicity and safety of these reactors, and the technical and financial proficiency of the American companies promoting them. The setback faced by Britain with the AGR and the partial core meltdown in reactor A1 at the Saint-Laurent-des-Eaux power plant a month earlier also weighed heavily in the public authorities' decision. The Péon Commission proposed that four or five light-water reactors be put into operation by 1976, since purchasing uranium, including enriched uranium from the United States, would be less expensive than importing oil. Two companies, Framatome and Compagnie Générale d'Electricité (CGE), competed to supply EDF with their "nuclear boilers": Framatome held the Westinghouse patent for pressurized water reactor (PWR) technology, while CGE held the General Electric patent for boiling water reactor (BWR) technology. For turbo-alternators, the technology of Alsthom, a subsidiary of CGE, competed against that of Compagnie Électro-Mécanique, a subsidiary controlled by the Swiss company Brown, Boveri & Cie (BBC).
In 1970, after a new call for tenders, EDF selected Framatome's proposal, which was more cost-effective than CGE's, to build two Frenchified copies of the Beaver Valley pressurized water reactor, equipped with Alsthom turbines, at Fessenheim. The original plan of constructing two UNGGs there was consequently scrapped. The following year, four additional reactors were authorized at Bugey, bringing the total to six. These six reactors were commissioned between 1977 and 1979 and later became known as the CP0 (contrat programme zéro) series. From then on, French nuclear power plants would no longer be built individually but in series of identical units at a given power level, like thermal power plants, standardizing production and reducing costs. In September 1972, CGE introduced the BWR-6, a General Electric boiling water reactor with greater power (995 MWe) thanks to fuel enhancements. On February 4, 1974, EDF notified CGE of an order for eight reactors, including two firm orders for Saint-Laurent-des-Eaux 3 and 4, while BBC won the associated turbine-generator sets. For CGE, the contract was worth 3.5 billion francs (excluding taxes), with General Electric entitled to a 2.5% royalty, or 87.5 million francs. The project moved quickly: General Electric had already transmitted 10,000 documents by March 1, 1975, over 200 training missions had been conducted in the United States by technicians, and 388 CGE employees were fully dedicated to the project. However, on August 4, 1975, EDF canceled the order due to a sharp cost increase and transferred it to Framatome. This was a significant setback for CGE, leading to its withdrawal from the French nuclear industry. CGE nevertheless obtained major compensation: its subsidiary Alsthom secured a place at the center of the national nuclear industry.
By the end of 1976, Alsthom-Atlantique had a virtual monopoly on the French turbo-alternator market. The BBC turbines and their accompanying water stations are the only remains of the planned BWR installations at Saint-Laurent-des-Eaux; they were used instead to equip the PWR reactors that replaced them on the site. On August 6, 1975, the Council of Ministers decided to retain only one type of reactor: the PWR. The government mandated the complete consolidation of the domestic nuclear industry, asserting that the benefits of standardization surpassed those attainable through competition among multiple suppliers. A single vendor and a single operator, coupled with the constraints of the Westinghouse license, which prevented EDF or the CEA from making destabilizing alterations to the reactor design, would facilitate the streamlined production of the coming large-scale series. Acceleration of the nuclear power program (Messmer plan) International events caused France's nuclear power program to accelerate dramatically. The Arab-Israeli conflict, particularly the Yom Kippur War, triggered the first oil shock, which quadrupled oil prices between October 1973 and March 1974. This starkly exposed the energy dependence and fragility of Western nations at a time when their economic growth was beginning to slow. With domestic coal production in decline and hydroelectric construction nearing completion, the inter-ministerial committee meeting of May 22, 1973, held five months before the crisis in the Middle East, had already decided to increase the nuclear capacity planned under "Plan VI" from 8,000 to 13,000 megawatts (MW). These developments prompted the Pierre Messmer government to accelerate the program even further, leading to the adoption of "Plan VII", or the "Messmer Plan", on March 5, 1974.
The 13,000 megawatts to be built from 1972 to 1977 were to be fully committed by the end of 1975. Afterward, EDF was to sustain its investment by launching six to seven reactors annually, a commitment of 50,000 MW of nuclear power between 1974 and 1980. This installed capacity, representing 55 additional 900 MWe reactors in operation on top of the existing six, was estimated to cost a total of 83 billion euros (2010 value). Over the following decade, EDF would need to borrow over 100 billion euros, primarily on international markets, with the guarantee of the French government. Contract-Program 1, launched in 1974, consisted of 16 units of 900 MWe: Blayais (1–4), Dampierre (1–4), Gravelines (B1–B4), and Tricastin (1–4). Two more units (C5 and C6) were added at Gravelines in 1979, bringing the total to 18 CP1 units. During Valéry Giscard d'Estaing's presidency, despite stagnating national electricity consumption, the Pompidou-Messmer plan continued at full speed, as reducing dependency on imported oil remained the priority. The second oil crisis reinforced this determination. Contract-Program 2, launched in 1976, consisted of ten units: Chinon (B1–B4), Cruas (1–4), and Saint-Laurent-des-Eaux (B1 and B2). The main difference between the CP2 series and its predecessor is the radial placement of the machine room relative to the nuclear island, an arrangement that prevents debris from a turbine failure from damaging the reactor containment. The construction of the PWR fleet demanded large quantities of low-enriched uranium, procured from both the US and the USSR beginning in 1971. France aimed to establish European control over the nuclear cycle and to rival the Anglo-German-Dutch Urenco gas centrifuge enrichment venture. To that end, France collaborated with Belgium, Italy, Spain, and Sweden to build a civil gaseous diffusion enrichment facility.
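The headline figures of the Messmer plan are mutually consistent, as a back-of-the-envelope check using only numbers quoted in the text shows:

```python
# ~50,000 MW committed between 1974 and 1980, i.e. 55 reactors
# of the 900 MWe class (on top of the six already engaged).
print(55 * 900)  # 49,500 MW, close to the announced 50,000 MW

# Contract-Program 1: 16 units of 900 MWe, plus the two Gravelines
# units (C5, C6) added in 1979, for 18 CP1 units in all.
print(16 * 900)  # 14,400 MWe initially ordered
print(18 * 900)  # 16,200 MWe for the full series
```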
The Eurodif Production group was established on February 25, 1972, and construction began in 1974 on the Tricastin nuclear site. The Tricastin nuclear power plant was subsequently built to supply electricity to the facility (3,600 MW). The site's enrichment plant, Eurodif, later renamed Georges Besse, officially opened on April 9, 1979. Located next to the Pierrelatte military enrichment plant, it drew on part of the latter's capacity until reaching full capacity in 1982. At the same time, downstream in the fuel cycle, the La Hague reprocessing plant was modified to recycle high-level radioactive waste from the new pressurized water reactor line. An HAO (High Activity Oxide) workshop was therefore added to the UP2-400 plant, which had been put into service in April 1966 to assist the Marcoule plant (UP1) in extracting plutonium from UNGG spent fuel. In 1976, the CEA transferred responsibility for operating the complex to Cogema. In 1981, Cogema received authorization to build two plants to handle the growing volume of waste. The plants, UP2-800 and UP3-A, were designed to process up to 800 tons of light-water spent fuel annually. UP3-A, funded by foreign customers, was commissioned in 1990, and UP2-800 in 1994. First international contracts Like EDF, French manufacturers supported the light-water option because its adoption let them benefit from the experience gained across the Atlantic while avoiding the technical and financial risks of developing a new technology. The construction of the country's nuclear fleet provided an opportunity to build major industrial groups capable of Frenchifying and exporting American nuclear technology. Framatome secured its first major international contract for South Africa's second nuclear power plant. The South African authorities had initially awarded the contract to the American company General Electric in April 1976.
However, Washington's restrictions on the apartheid regime forced Pretoria to reconsider, and Framatome was selected on May 19. As Pretoria demanded EDF's involvement in construction, the French electricity company and Framatome established SOFINEL. In any case, Framatome, which held the Westinghouse license, needed EDF's expertise to build a complete plant and to obtain the operator's license required to start it up. Despite a terrorist attack, the two reactors of the Koeberg facility north of Cape Town, modeled on those at Bugey, entered commercial service in 1984 and 1985. The next deal was signed with Iran, France's primary oil supplier, which had already expressed interest in developing civilian and military nuclear power. As the US did not support the Shah's ambitions, he turned to France and signed a partnership agreement on June 27, 1974. Iran acquired Sweden's billion-dollar share of Eurodif and ordered two reactors from Framatome/SOFINEL. Construction started at Darkhovin in 1977, with components fabricated in France the following year, but the project ended with the Islamic Revolution. The components already completed were used to add two units to the Gravelines power plant, which started operating in 1985. The enriched uranium corresponding to Iran's share in the Tricastin plant was never delivered to the Islamic regime, and only after several attacks and hostage-takings was a repayment agreement signed on October 29, 1991. The third deal came from South Korea. After two failed attempts in 1976 and 1977, Framatome secured the contract for South Korea's ninth and tenth nuclear reactors, located at Uljin, on November 7, 1980. The contract included a transfer of technology, and construction was carried out by a South Korean company. The Hanul Nuclear Power Plant started operations in 1988. The CEA marketed research reactors through its subsidiary Technicatome.
The company signed its most notable contract in 1976, worth 1.45 billion francs, with Iraq for the construction of reactors modeled on Saclay's Isis (Tammuz II) and Osiris (Tammuz I, or Osirak). This agreement was the result of talks begun two years earlier between Iraq and then-Prime Minister Jacques Chirac on exchanging oil for nuclear reactors, and was reached after the French government rejected Saddam Hussein's request for a UNGG reactor during his visit to the Cadarache center on November 6, 1975. Israel opposed Iraq's program, citing its military goals. On April 6, 1979, Mossad destroyed the reactor vessels under construction at the Constructions industrielles de la Méditerranée (CNIM) plant in La Seyne-sur-Mer. Nevertheless, the contract proceeded. On June 7, 1981, an air attack (Operation Opera) destroyed Osirak before it became operational. Although France terminated its cooperation with Iraq in 1984, Tammuz II/Isis kept operating through the 1980s until it was finally destroyed in 1991. Birth of the anti-nuclear movement During the 1960s, when military nuclear power dominated national attention, civilian nuclear power was seen as an excellent opportunity for rural economic development, and few people worried about it. In the early 1970s, growing environmental awareness brought the consequences of nuclear energy into public debate. In 1974, most of the public still favored civil nuclear power (76% in favor). However, after the first demonstration at Fessenheim in April 1971, opposition grew rapidly among the French population. By early 1975, 4,000 scientists had signed a petition denouncing the Messmer Plan's haste and its downplaying of risks, and created the GSIEN association to inform the public.
In 1977, public opinion shifted, with 53% opposing the Superphénix project. In July, a demonstration at Creys-Malville turned violent, resulting in one fatality among the 40,000 to 90,000 attendees. In response, public authorities favored expanding existing sites and building more powerful units to limit the need for new power plants. Ultimately, of the 43 sites considered in 1974, 19 would accommodate French PWRs, four of which already hosted UNGG reactors. Beyond considerations related to the French power grid, the site selection was strategic: 17 of the 19 communes designated to host power plants (all except Gravelines and Nogent-sur-Seine) shared a similar rural profile, marked by depopulation and a lack of industry. The income, development, and jobs brought by the power plants would make the surrounding areas significantly reliant on them. During the P4 stage, EDF also enlisted architects and landscape architects to reduce the impact of its facilities on the landscape, while sociologists, psychologists, and semiologists worked to improve the social acceptability of nuclear power by changing its public image. From April 1975 to 1982, an EDF "nuclear information group" received and answered up to 500 letters daily and organized successful visits to power plants. These information campaigns, combined with the radicalization of certain anti-nuclear groups, turned public opinion around, and opposition waned. Opposition to nuclear power became local, especially in Brittany and Loire-Atlantique, where numerous demonstrations led to the abandonment of the Plogoff and Le Pellerin power plant projects in 1983. In the Ardennes, however, the opposition failed to prevent the extension of the Chooz nuclear power plant. In the early 1980s, pro-nuclear sentiment reigned supreme in France, with 65% in favor in 1982 and 67% in 1985.
This was despite several accidents that shook the industry and underscored the pressing need to prioritize safety throughout the nuclear cycle. Notable among them was the Three Mile Island accident of 1979, in which a PWR core melted down, though radioactive releases into the environment remained minimal. In 1980, the Saint-Laurent-des-Eaux nuclear power plant experienced France's most serious nuclear accident to date when two fuel elements in the A2 reactor (UNGG) melted. The incident was classified as level 4 on the INES scale, signifying "an accident not entailing any significant risk outside the site". Nevertheless, the French people's newly acquired faith in nuclear power and in the responsible authorities was short-lived. Faced with the rising environmental movement, François Mitterrand (Socialist Party) pledged in his platform for the 1981 French presidential election (proposition 38) that "the nuclear program will be restricted to power plants being built until the nation can vote on it, after being truly informed, in a referendum." In the lead-up to the election, Mitterrand acknowledged that the use of nuclear energy had become inevitable, but he emphasized the need to limit and control its development to avoid a technical and economic impasse similar to that of the all-oil era of the 1960s. He also spoke out against the all-encompassing nuclear program that had been forced upon the French people. A time of questioning (1983-1999) While the oil shocks triggered substantial equipment programs in oil-importing countries such as France and Japan, they paradoxically resulted in a halt to nuclear investment elsewhere. The United States was the first to stop investing, largely for economic reasons, followed by Europe under political pressure from anti-nuclear movements emboldened by major accidents. In France, where the environmental movement had minimal effect, the expansion of nuclear power plants persisted despite rising costs.
However, the risk of overproduction remained. During this period, the previously overlooked question of the long-term management of radioactive waste became prominent in the public's perception of nuclear power. Completing the industrial park The early 1980s saw the commercial start-up of the CP1 reactors, commissioned from 1980 to 1985, and the CP2 reactors, commissioned from 1983 to 1988. Nuclear power then accounted for 37% of France's national electricity production, a figure that rose to 55% three years later. However, owing to energy-conservation initiatives and slower economic growth, electricity consumption plateaued, raising concerns about the overcapacity of the nuclear power program. In 1983, under François Mitterrand's presidency, the construction pace was reduced to one unit annually. To justify the fleet's development, EDF boosted its exports and became Europe's leading electricity exporter, while the promotion of electric heating spurred its adoption as the standard in new housing. The major construction projects continued with the P4 stage, consisting of four-loop reactors (as opposed to three-loop reactors, hence the name) with a power output of 1,300 MWe, the product of collaboration between Framatome and Westinghouse since 1972. The increase in unit power was intended to offset the longer construction schedules and higher expenses of the previous stages. Eight units, ordered between 1975 and 1982, were commissioned from 1984 to 1987: the Flamanville (1 and 2), Paluel (1 to 4), and Saint-Alban (1 and 2) reactors. They were followed by the P'4 stage (a variant of P4), involving the staggered construction of 12 new units between 1979 and 1984, commissioned from 1987 to 1994: the reactors at Belleville (1 and 2), Cattenom (1 to 4), Nogent (1 and 2), Penly (1 and 2), and Golfech (1 and 2).
Compared with P4, power and safety systems remained the same, while the buildings were reduced in size to cut costs. Nevertheless, moving from CP0 to P'4 did not yield the anticipated economies of scale, primarily because of the introduction of more rigorous regulations, as the French Court of Accounts noted in 2012. Framatome reached a pivotal moment in 1981 by signing a long-term Nuclear Technical Cooperation Agreement (NTCA) with Westinghouse. The agreement was grounded in Westinghouse's recognition of the French manufacturer's skills, with reciprocal exchanges taking place; although royalties were substantially reduced, they still had to be paid. This growing technical and commercial independence was accompanied by Westinghouse's complete withdrawal from Framatome's capital, enabling the French company to create its own reactor models, including the N4 series. The N4 series consists of four 1,500 MW units, designed entirely by the French, with development beginning in 1977. The Chooz B (1 and 2) and Civaux (1 and 2) reactors were built under this program, with construction taking place from 1984 to 1991 and commercial commissioning from 1996 to 1999. In 1992, the agreement between Westinghouse and Framatome ended, putting a stop to the royalties and completing the francization of Framatome-built reactors. The design of these new reactors took into account feedback from the 900 and 1,300 MW reactors already in operation, as well as lessons learned from the Three Mile Island accident. Besides the introduction of the "Arabelle" turbines and new primary pumps, the main improvement of the N4 series was the complete computerization of the control room; Chooz B was the first power plant in the world to be so equipped. Civaux, the second plant so equipped, was the last nuclear power plant built in France.
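Adding up the series described in this section reproduces the total of 58 reactors reached at the end of the construction program. The tally below uses the unit counts and nominal powers given in the text (900 MWe for the CP series, 1,300 MWe for P4/P'4, 1,500 MWe for N4) and ignores later power uprates:

```python
# Series name -> (number of units, nominal unit power in MWe)
series = {
    "CP0 (Fessenheim, Bugey)": (6, 900),
    "CP1": (18, 900),
    "CP2": (10, 900),
    "P4": (8, 1300),
    "P'4": (12, 1300),
    "N4": (4, 1500),
}

total_units = sum(n for n, _ in series.values())
total_mwe = sum(n * p for n, p in series.values())
print(total_units)  # 58 reactors in all
print(total_mwe)    # 62,600 MWe of nominal capacity
```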
The commercial start-up of Civaux's second reactor in 1999, the 58th since Fessenheim, marked the end of nearly thirty years of uninterrupted construction that cost EDF 106 billion euros (2018 value). By then, nuclear power provided 76% of France's total electricity generation, a larger share than in any other country. The Chernobyl shock The Chernobyl disaster of April 26, 1986, marked a pivotal moment in the history of civil nuclear power. The reactor's core melted down, causing an explosion and a massive discharge of radioactivity into the environment, which led to numerous fatalities, including some from radiation exposure. Chernobyl was the first accident rated at level 7 on the International Nuclear Event Scale (INES), a level since matched only by Fukushima in 2011, and remains the most severe nuclear accident in history. Its consequences were extensive, spanning health, ecology, economics, and politics, and over 300,000 people were displaced from the surrounding region. While neighboring countries swiftly implemented preventive measures, such as banning the consumption of certain foods, the French public authorities communicated very little and sought to downplay the disaster's impact even while acknowledging the rise in radioactivity. Some media outlets portrayed the radioactive cloud released by the explosion as having stopped at the French border. Abnormal radioactivity levels were detected at CEA facilities and EDF nuclear power plants as early as April 28, but the SCPRI did not acknowledge that the particle plume had reached France until May 1 and did not release the first soil contamination map until May 10. After Chernobyl, nuclear power's image in Europe was permanently damaged, leading to changes in national programs: new power plant construction projects were generally suspended, and only those already underway were completed.
Italy withdrew from nuclear power, followed in subsequent years by Yugoslavia, the Netherlands, Belgium, and Germany. In France, the authorities' lack of transparency affected public opinion, giving rise to a resurgence of the anti-nuclear movement and the formation of independent radioactivity monitoring groups such as CRIIRAD and ACRO. The accident itself, however, did not prompt any reexamination of energy policy: when it happened, construction of the P'4 and new N4 reactors was still in progress and far from finished. France took part in the discussions, begun in 1992, to define binding international commitments on nuclear safety. It signed the International Convention on Nuclear Safety on September 20, 1994, and ratified it the following year; the Convention took effect in France with the decree of October 24, 1996. France's first report on the safety of its power plants was released in September 1998. In 2001, the Institut de Radioprotection et de Sûreté Nucléaire (IRSN) became operational, assuming the safety roles formerly held by the Ministry of Health and the CEA. The Chinese market In the 1970s, Deng Xiaoping sought to modernize China, including through a civilian nuclear power program. Although the military preferred heavy-water reactors, the Ministry of Electricity selected foreign pressurized-water reactors, specifically French ones. As the first Western country to recognize the People's Republic of China, in 1964, France appeared to be the only nation willing to share its nuclear expertise. After several unsuccessful attempts, cooperation with China on reactor construction and technology transfer began during President François Mitterrand's visit to Beijing in May 1983. The first plant was to be built in the fast-developing Guangdong region, on Daya Bay, and financed with the assistance of Hong Kong.
This financing led to Franco-British collaboration on the plant's turbines, which in turn contributed to the merger of GEC and Alsthom. The Chinese authorities wanted N4 reactors but accepted a model derived from CP1, slightly enhanced versions of units 5 and 6 of the Gravelines plant. The agreement was signed on January 19, 1985, and construction began under EDF's supervision the following year. Despite the Tiananmen Square events and the Taiwan frigate affair, which strained relations with Beijing, the plant was inaugurated on February 10, 1994. Meanwhile, Framatome set up a fuel fabrication plant in Yibin in 1995 and announced the development of a second power plant, Ling Ao, in the same area, consisting of two reactors modeled on those of Daya Bay. EDF, Framatome (nuclear island), and GEC-Alsthom (turbines) were selected without a bidding process, in exchange for a low-interest loan from eight French banks. The contract was signed on October 25, 1995, and construction was carried out by the Chinese company CGNPC. The reactors were commissioned ahead of schedule, in 2002 and 2003. To the dismay of French industry leaders, China later decided to broaden its range of nuclear technologies, incorporating Canadian (CANDU) and Russian (VVER) models. Following a visit by President Jacques Chirac in 1997, however, China developed its own reactor, the CPR-1000, based on the French CP1 and N4 designs, with assistance from EDF, Framatome, and GEC-Alsthom. Development of the European EPR reactor The Chernobyl disaster and the oil glut led many countries to slow down or abandon their nuclear programs, putting pressure on the nuclear industry. In response, the industry shifted its focus to exports, a highly competitive market that called for the consolidation of the European industry.
Against this backdrop, Framatome and Siemens signed a cooperation agreement on April 13, 1989, and established a joint company. The objective of this partnership, backed by both governments, was to develop pressurized water reactors based on Franco-German technology, initially for the two countries and later for nuclear operators worldwide. Meanwhile, EDF explored how to sustain the French nuclear program and drew up plans for a new pressurized water reactor (PWR) to succeed the reactors launched in the 1970s, with completion anticipated around 2010. In 1986, EDF initiated the REP 2000 project, a new evolutionary generation of reactor models expected to enter service between 2000 and 2015. Initially conceived for France, REP 2000 aimed to improve safety and reduce production costs while optimizing uranium utilization. However, owing to the recession of the early 1990s and the subsequent improvement in plant availability, additional N4 reactors proved unnecessary. As a result, the construction of units 3 and 4 at the Penly, Flamanville, and Saint-Alban plants was canceled, and the REP 2000 project, also known as N4+, was merged with the Franco-German project. On February 23, 1995, EDF and nine German utilities joined Framatome and Siemens to launch engineering studies for the European Pressurized Reactor (EPR), a third-generation nuclear reactor intended to renew the nuclear fleet.
This "evolutionary" reactor, with a planned unit power of 1,450 MWe, later raised to 1,650 MWe for greater competitiveness, combines technological advances of the Konvoi and N4 reactors to offer improved safety (a core catcher, buildings more resistant to aircraft crashes, no vessel bottom penetrations, and additional safety systems), an extended service life, greater use of MOx fuel, and enhanced thermal efficiency. The preliminary design was submitted to the French and German safety authorities in October 1997. EDF selected the Le Carnet site, near Le Pellerin, to build the EPR prototype, and despite strong local opposition Prime Minister Alain Juppé approved the project. The election of the Plural Left government led to its cancellation in 1997. In 1999, Germany decided to withdraw from nuclear power, and ten years later Siemens ended its collaboration with Framatome, by then Areva NP. The European reactor, originally Franco-German, thus became entirely French.

From Superphénix to MOx

At the beginning of the 1950s, uranium was thought scarce enough that a shortage seemed imminent, and the development of reactors using plutonium was seen as a safeguard against it. The French nuclear industry was to be based on primary reactors using natural uranium to produce electricity and plutonium, which secondary reactors would then have "burned" to produce electricity while generating more fissile material than they consumed, hence the name breeder reactors. With a view to making the French nuclear cycle self-sufficient, the CEA commissioned two experimental sodium-cooled fast-neutron reactors of this type: Rapsodie in 1967 at Cadarache, followed by the more powerful Phénix (250 MWe) in 1973 at Marcoule. Although uranium proved more abundant than expected in the 1960s, the oil crisis of the 1970s and the rapid development of nuclear programs worldwide revived fears of a shortage of fissile material.
As a result, fast-breeder reactors regained prominence. After discontinuing the UNGG line, the CEA ceased to act as a national reactor designer and refocused on mastering the fuel cycle, while also developing a line capable of recycling plutonium, with Rapsodie III, later renamed Superphénix, serving as the industrial prototype. On April 15, 1976, Prime Minister Jacques Chirac greenlit the 4.4 billion franc project, the result of a European collaboration. With a power output of 1,200 MWe, the future Creys-Mépieu plant would become the world's most powerful breeder reactor. The reactor first diverged on September 7, 1985, and was connected to the grid on January 14, 1986. The laborious nine-year construction was accompanied by massive demonstrations in 1977, among the largest in the history of French anti-nuclear activism. The plant was also attacked with a rocket launcher in 1982, and the final price tag spiraled to 25 billion francs, according to some sources. On March 8, 1987, a sodium leak shut the reactor down until January 1989; occurring barely a year after Chernobyl, the incident had a lasting impact on Superphénix's reputation. The Americans were the first to abandon the fast breeder option, followed by the Germans and the British. In December 1990, the roof of the machine room collapsed under snow. After multiple delays and its reclassification as a research reactor under the Bataille law, the reactor did not resume operation until August 4, 1994. On December 25 of that year, a new leak caused a seven-month shutdown. 1996 was the first year in which the reactor operated efficiently and generated substantial amounts of electricity.
However, disregarding the CEA's advice, Prime Minister Lionel Jospin definitively decided to close the plant on December 30, 1998, citing the low price of uranium compared with the overall cost of operating Superphénix (60 billion francs in 1994, equivalent to 13 billion euros in 2018). In 1982, falling uranium prices and delays in the Superphénix project led to a prolonged postponement of the industrial development of fast breeder reactors. As a result, EDF examined another option for recycling plutonium, previously researched by the CEA and its Belgian and German counterparts in the early 1960s: mixed oxide (MOx), a fuel for PWRs consisting of 8.6% plutonium and depleted uranium. Trials at the Franco-Belgian Chooz power plant, begun in 1974, validated the concept. In 1987, EDF retrofitted its CP1 and CP2 plants for the purpose and deployed the technique first at Saint-Laurent-des-Eaux and then at five other sites (Gravelines, Dampierre, Blayais, Tricastin, and Chinon). MOx was manufactured at the Cadarache plutonium technology workshop from 1967 to 2005 and at the Melox facility in Marcoule since 1995. Reprocessing spent fuel into MOx proved as expensive as storing it. Since reprocessing only makes economic sense if the resulting materials are reused, the advantage of MOx lay chiefly in providing an outlet for the products of the La Hague reprocessing plant, especially its UP2-800 unit, once the fast-breeder option was abandoned.

The waste management issue

In France, nuclear reactors' spent fuel is not considered waste because it contains uranium and plutonium that can be recycled into MOx fuel or fuel future breeder reactors. Spent fuel can therefore be stored "temporarily" in pools, whether or not it is actually being reprocessed.
Only non-recoverable nuclear materials are classified as waste, with permanent storage solutions either in place or under study. The highly dangerous, long-lived waste from reprocessing was first kept as liquid in tanks; since 1978 it has been vitrified at Marcoule, and since 1989 at La Hague, and is stored on-site until a permanent disposal solution is identified. The quest for such a solution started early on. As with the disposal of outdated ammunition from both World Wars, the sea, which dilutes pollution, was considered. From 1950 to 1963, the United Kingdom and Belgium dumped waste in the Hurd Deep off the Cotentin peninsula, and France took part in this policy, coordinated by the European Nuclear Energy Agency (ENEA), by sinking liquid and solid low-level radioactive waste from Marcoule in the depths of the Atlantic Ocean. The practice ended in 1969 with the opening of the Manche storage center beside the La Hague plant, which, once saturated, was replaced in January 1992 by the Aube storage center. The deployment of nuclear power plants in the 1970s and the resulting volume of spent fuel drastically altered the situation. The London Convention, which came into force in 1975, prohibited the dumping of highly radioactive waste. To dispose of this waste for good, France initiated studies on burial beneath the seabed, leading to a consensus in 1977 that sparked significant work. Several campaigns were conducted between 1979 and 1988, first off Cape Verde and then in the North Atlantic, to evaluate the feasibility of burying waste deep in marine sediments as part of the international Seabed program. On November 12, 1993, after a decade-long moratorium, the Convention's signatories decided to prohibit the disposal of any type of radioactive waste at sea, and this solution was abandoned.
Nonetheless, some low-level radioactive effluents from fuel reprocessing, such as tritium and iodine-129, are still discharged off Cap de la Hague. The search for a disposal site in France started with the establishment of the Agence Nationale pour la Gestion des Déchets Radioactifs (ANDRA) within the CEA in 1979. From 1982 to 1984, the Castaing Commission recommended deep geological disposal while also exploring other alternatives. Prospecting for underground laboratories, which started in 1987, faced strong opposition in the departments chosen for their diverse geologies (Ain, Aisne, Deux-Sèvres, and Maine-et-Loire), leading Michel Rocard's government to halt work in early 1990. The research law on radioactive waste management (the Bataille law), enacted in 1991, calmed the debate by outlining research in three complementary areas: transmutation, long-term storage, and geological disposal. That same year, ANDRA gained independence from the CEA and resumed prospecting. On December 9, 1998, ANDRA selected a geological site at Bure, in the Meuse department. Between 1999 and 2004, ANDRA built a laboratory at a depth of 490 meters within an impermeable and stable argillite layer to investigate the viability of an industrial, reversible geological disposal center known as Cigéo. On June 28, 2006, the Bataille law was replaced by a new law confirming the selection of this storage solution.

Total cost of French nuclear facilities in 2012

Following the Fukushima nuclear accident in 2011, the French government asked the Court of Accounts to prepare a report on the overall cost of public and private investment in the French nuclear power industry from its beginnings, including all expenditures. The report estimates that the industry has cost around 228 billion euros for an annual production of roughly 400 TWh and a cumulative production of approximately 11,000 TWh.
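From the Court of Accounts figures above, a back-of-the-envelope average historical cost per kilowatt-hour can be derived. This is an illustrative calculation based only on the two totals quoted here, not a ratio computed in the report itself:

```python
# Rough sanity check on the Court of Accounts figures (illustrative only).
total_cost_eur = 228e9      # ~228 billion euros, all expenditures since the start
cumulative_twh = 11_000     # ~11,000 TWh of cumulative production

kwh = cumulative_twh * 1e9  # 1 TWh = 1e9 kWh
cost_per_kwh = total_cost_eur / kwh
print(f"{cost_per_kwh * 100:.1f} c€/kWh")  # prints 2.1 c€/kWh
```

The result, on the order of 2 euro cents per kilowatt-hour, averages capital, research, and operating spending over six decades and ignores discounting, so it is indicative only.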
Among these expenses, the Court of Accounts distinguishes €55 billion spent on research since 1950 (approximately a billion euros annually) and €121 billion spent on construction, including €96 billion on the 58 reactors.

Restructuring the sector (1999 to 2020)

Following the 1990s, the global fight against climate change, embodied in the Kyoto Protocol, and the growing energy demands of developing nations reinvigorated the nuclear energy industry worldwide. In the early 2000s, France opened its electricity market to European competition. The prevailing business climate, however, did not favor the development of nuclear power: its heavy capital requirements made it less competitive in the short run than quicker-to-build technologies such as gas turbines.

Opening of the electricity market

From February 1, 1999, Électricité de France (EDF), a quasi-public monopoly, was gradually deregulated to allow competition in electricity production and supply, as required by European Union directives. Initially, the French electricity market was opened only to the largest customers, those whose consumption exceeded a threshold set by decree. Later, following the March 2002 decisions of the Barcelona European Council, the market was opened to all consumers. As of July 1, 2007, the French electricity market, encompassing around 450 TWh, was fully open to competition. Nevertheless, the French market remains among the most concentrated in the European Union, with EDF retaining an 85% share of the residential customer base in 2017. To counteract the steep increase in energy prices since 2004, the law allowed domestic customers to return to regulated electricity tariffs under specific conditions, and a transitional market adjustment tariff (TaRTAM) was temporarily introduced to assist industrial customers.
Customers' ability to revert to EDF's regulated tariff, even when market prices were higher, allowed the company to maintain its dominant position. As a result, the European Commission launched two legal challenges, in 2006 and 2007, disputing the French system of regulated tariffs as a constraint on competition. To comply, from July 1, 2011, EDF, the owner of France's nuclear power plants, was required under the French Electricity Market Organization Act (loi NOME) to sell up to 100 TWh of electricity each year from its reactors to rivals, on terms reflecting the economic conditions of electricity production. These conditions are assessed by the Commission de régulation de l'énergie (CRE) and implemented through the mechanism of regulated access to historic nuclear electricity (ARENH). The liberalization of the French energy market also changed the status of the incumbent operators, EDF and Gaz de France (GDF). The law of August 9, 2004, on the public electricity service and on electricity and gas companies converted them from public entities into limited companies, transposing EU commitments into French law. The French state, through the Agence des participations de l'État, remains the majority shareholder in both companies, but the new status allows them to do business in the European market. In 2008, GDF-Suez acquired Belgium's seven nuclear reactors by buying Electrabel. The next year, EDF Energy acquired 16 of the UK's nuclear reactors by purchasing British Energy. With the UK's AGR reactors approaching the end of their lifespan, the French utility's UK subsidiary proposed constructing pairs of EPR reactors at Hinkley Point and Sizewell to replace them. From 2014, the American conglomerate General Electric sought to acquire Alstom Power and Alstom Grid.
These subsidiaries included the Belfort-based business that produced the Arabelle turbine. Given the strategic importance of these activities, the buyout required authorization from the French state, granted on November 4, 2014, by Emmanuel Macron, then Minister of the Economy, Industry, and the Digital Economy. In 2020, General Electric sought liquidity by selling assets, including the former Alstom nuclear activities. EDF ultimately bought GE's turbine production business, by then called GE Steam Power, in a deal announced in February 2022.

Areva, rise and fall

The nuclear industry was restructured to enhance French competitiveness and facilitate international alliances for Framatome. Cogema became Framatome's primary shareholder in 1999. By 2001, the merger between Germany's Siemens and Framatome, in the works since the late 1980s, culminated in a joint venture, Framatome ANP (Advanced Nuclear Power), with Framatome holding 66% and Siemens 34%. The company became the global leader in nuclear boiler construction, accounting for 21% of the installed base, while also servicing installed plants and holding a 41% share of the global nuclear fuel market. In June 2001, a new company named Topco emerged from CEA Industrie; renamed Areva in September, it comprised Cogema, Framatome ANP, Technicatome, and holdings in the new-technologies sector such as FCI and STMicroelectronics. The new French powerhouse aimed to reinforce its nuclear division, adopting a new shareholding structure to that end. A significant step was taken on November 24, 2003, when it signed an agreement with its British competitor Urenco, giving it access to gas centrifuge technology.
This proven uranium enrichment method was chosen over the alternative Chemex and AVLIS processes developed by the CEA for the Georges-Besse II facility. After four years of construction, the first cascade at the new Tricastin facility was started up on May 18, 2009; in 2012 it permanently replaced its predecessor, Eurodif, which consumed excessive electricity. Seeing an answer to global warming and the third oil shock, the reorganized nuclear industry was optimistic about the future, even speaking of a "nuclear renaissance". Areva recorded rising profits from 2002 to 2010 and increased its investments, especially in renewable energies, with acquisitions in wind and solar power as early as 2005, and in mining, with the purchase of three African uranium deposits in 2007. In the meantime, all of the group's leading subsidiaries adopted the Areva trade name: Cogema became Areva NC, Framatome ANP became Areva NP, and Technicatome became Areva TA. During the 2000s, Areva became the premier global nuclear power company and the only one fully integrating the industry. However, the added cost of the Finnish EPR project, the UraMin affair, the consequences of the Fukushima disaster, the failure of its renewable-energy ventures, and increased international competition all weighed on the French group's finances, resulting in losses exceeding 10 billion euros between 2011 and 2016. To salvage the state-owned company, the French government on July 28, 2015, ordered its breakup and EDF's acquisition of the reactor construction business (Areva NP). Thus, on March 30, 2017, Areva divested the majority of its stake in Areva TA, which changed its name to TechnicAtome and is now 50% owned by the Agence des participations de l'État. In July of the same year, the French government injected 4.5 billion euros into Areva. Of this sum, 2 billion went to AREVA S.
A., which focuses on the group's riskiest assets, including the Olkiluoto EPR. The remaining 2.5 billion went to New Areva, a newly formed subsidiary tasked with consolidating fuel cycle activities. EDF, itself hit by the setbacks of the French EPR, received 3 billion euros from the French government and took ownership of Areva NP, rebranding it as Framatome. In January 2018, New Areva was renamed Orano, completing the dismantling of the group.

EPR construction sites

In December 2003, the Framatome ANP-Siemens consortium convinced TVO, the Finnish utility, to select the EPR, the third-generation reactor the two companies had co-developed, for the extension of its Olkiluoto nuclear power plant. Work began in February 2005 but fell behind schedule, the turnkey reactor being a prototype. Only the following year did EDF decide to build a "production demonstrator" of the EPR at the Flamanville nuclear power plant in France. A public debate followed the decision, during which anti-nuclear advocates criticized the fact that the choice had already been made: the bill authorizing construction of the EPR had passed on June 23, 2005, over three months before the debate began. A year later, construction started with a budget of 3.3 billion euros and a completion date set for 2012; by July 2009, the project was already two years behind schedule. The cost of the third Flamanville reactor was revised upward several times, from 5 billion euros in 2010 to 6 billion in 2011, 8.5 billion in 2012, and 10.9 billion in 2018. Ultimately, the Normandy EPR's completion was pushed back to 2023 at a cost of 12.4 billion euros, almost four times the planned expenditure and a decade late. In China, where Areva won an €8 billion contract for two EPRs at Taishan in 2007, construction also ran behind schedule and over budget, though to a lesser extent than the European projects.
The first Chinese EPR entered service on June 6, 2018, ahead of its Finnish and French counterparts, whose construction had begun four and three years earlier respectively. In January 2009, the French government selected Penly as the site of France's second EPR, to be built by a consortium of EDF (majority shareholder), GDF Suez, Total, Enel, and E.ON. A public debate held between March 24 and July 24, 2010, ended in a stalemate: supporters remained convinced of the project's necessity, while opponents remained steadfastly against it. The project was halted in July 2012 after François Hollande was elected President of the Republic, but in 2019 EDF revived it by launching a call for bids to construct two EPRs on the site. François Roussely's report on the future of the French nuclear industry, published on June 16, 2010, argued that the prospects of French nuclear plants designed to run more than 40, let alone 50, years rested primarily on exports in the medium term. Areva, drawing lessons from the EPR's difficulties and aligning with foreign demand, proposed reactors of smaller capacity. One of them, the Atmea1, co-developed with Mitsubishi Heavy Industries since 2007, was put forward in 2010 by GDF Suez for installation at the Marcoule or Tricastin sites. EDF took a dim view of its supplier Areva teaming up with competitors against it on domestic and international markets, for at the same time the French utility was developing, with its Guangdong counterpart CGNPC, a new reactor to replace the CPR-1000. The rivalry between the two French state-controlled groups led to the termination of these initiatives.
Nonetheless, although China independently developed its own third-generation reactor (the Hualong-1), it worked with EDF on building two EPRs at Hinkley Point.

Exporting the plutonium cycle

Since the 1950s, the French fuel cycle industry has gained global recognition for exporting its technology and forming partnerships, more so than in reactor construction. In 1973, the Pakistani government sought the expertise of Saint-Gobain Nucléaire (SGN) to build a fuel reprocessing facility at Chashma, with a capacity of 100 tons per year, hoping that France, which had refused to sign the Non-Proliferation Treaty, would not require the facility to be placed under international supervision. The contract was signed in October 1974. The explosion of India's first atomic bomb, however, underscored the need for export controls. Under French pressure, Islamabad agreed in March 1976 to place the facility under international supervision, until pressure from the United States and the Shah of Iran finally killed the project in 1978. In 1977, with the United States declining to export its technology, Japan turned to SGN to build a pilot reprocessing plant at Tōkai, with an annual capacity of 200 tonnes. A decade later, France agreed to a technology transfer deal to build a much larger reprocessing plant in northern Japan, modeled on the UP3 unit at La Hague. Construction began in 1993, and the first spent fuel bundles arrived for storage in 1998. Despite a threefold increase in costs and multiple delays, the Rokkasho plant is due to begin processing stored fuel in 2022, and a MOx fabrication unit, under construction since 2010, is to be added to the complex. Meanwhile, since 1999 France has been producing this fuel for Japan at Marcoule by recycling the Japanese spent fuel it has reprocessed at La Hague since 1982.
With MOx, Areva aimed to make the plutonium industry its international economic spearhead. Since the end of the Cold War, France has been involved in developing processes for disposing of military plutonium from the dismantling of the two great powers' arsenals: in Russia, with the Aida I (Franco-Russian) and Aida-Mox II (Franco-German-Russian) study programs from 1992 to 2002, and in the United States with the MOx for Peace program. The successful conversion of US military plutonium into MOx at Marcoule in 2005 led Areva, two years later, to begin building a specialized plant at the Savannah River nuclear site. However, owing to delays and rising costs, the Mixed Oxide Fuel Fabrication Facility, a twin of the Melox plant, was abandoned at the end of 2018. Also in the United States, Areva has been involved since 2004 in the decontamination of the Hanford military complex, notably through the construction of the world's largest nuclear waste vitrification plant. In China, Areva has since 2007 been seeking an agreement to build a reprocessing plant, similar to the UP3 unit at La Hague, together with a MOx fabrication unit; China National Nuclear Corporation (CNNC), for its part, has sought to acquire a stake in its French counterpart.

Aftermath of the Fukushima accident

After Chernobyl, a second accident shook confidence in nuclear power and impeded the industry's recovery. On March 11, 2011, a magnitude 9 earthquake triggered a tsunami that ravaged the Tōhoku region on the Pacific coast of Japan and caused the Fukushima nuclear disaster. The failure to cool the shut-down reactors at the Fukushima Daiichi power plant led to core meltdowns in three of them, resulting in substantial radioactive releases and the evacuation of more than 150,000 people.
On March 23, 2011, Prime Minister François Fillon tasked the Autorité de Sûreté Nucléaire (ASN), the body responsible since 2006 for nuclear safety and radiation protection in France, with auditing French nuclear facilities. The audit assessed the risks of flooding, earthquakes, and loss of power and cooling systems, as well as the operational management of accident situations. At its conclusion, on January 3, 2012, the ASN recommended strengthening site security by adding emergency generators and bunkerized crisis management facilities and by enhancing the monitoring of subcontractors. The increased surveillance subsequently exposed several production irregularities at Areva's Le Creusot plant, which manufactures nuclear island components, leading to the shutdown of 18 reactors for inspection in 2016. Since the Fukushima disaster, EDF has been investing 3.7 billion euros annually, a projected total of 55 billion euros by 2025, to upgrade and maintain its power plants, meet the tightened ASN standards, and extend their operating life to 50 or 60 years. This initiative, called the "Grand Carénage" program, is estimated by the Court of Accounts to cost €75 billion by 2030, plus €25 billion in operating expenses. Extending the reactors' lifespan would give the utility time to set aside the funds needed to finance their dismantling, which may exceed 100 billion euros.

Relaunch of a nuclear program

Prospects for nuclear power plants in 2020

In August 2015, the French Energy Transition Act capped installed nuclear capacity at 63 GW and the share of nuclear power in national electricity production at 50% by 2025, a deadline pushed back to 2035 three years later. Électricité de France (EDF) considers that maintaining this capacity will require the construction of new reactors by 2030 to compensate for the concurrent closure of older ones.
For the second stage of fleet renewal, after the EPRs, fourth-generation reactors currently under development would be deployed, pending the advent of fusion.

Energy transition

To reduce the cost of electricity production while incorporating feedback from existing EPR projects, researchers are studying a new streamlined design known as the EPR-NM, later designated EPR2. In late 2015, EDF projected that its nuclear fleet would consist of 30 to 40 of these reactors by 2050, replacing the 58 then in operation. To maintain French nuclear expertise in the absence of new export orders, a 2018 report called for the construction of six reactors beginning in 2025, a project EDF estimates at 46 billion euros over 20 years. A significant aspect of nuclear development involves modulating power plant output so that intermittent renewable energies can be integrated into the electricity grid, thereby contributing to the energy transition. In November 2018, Emmanuel Macron announced that the closure of the Fessenheim nuclear power plant, approved in April 2017, would take place in 2020: the first reactor was shut down on February 22 and the second on June 29. The French government agreed to compensate EDF for the revenue lost through the early closure of the plant, which had been expected to operate until 2041. In January 2020, a fact-finding mission on the closure of the Fessenheim nuclear power plant was set up in the French National Assembly. In 2019, as part of the first multi-year energy program, the government announced the closure of a further 12 reactors between 2027 and 2035, to be designated by EDF. On January 21, 2020, EDF proposed studying the closure of reactor pairs at seven sites: Bugey (CP0), Tricastin, Gravelines, Dampierre, Blayais (CP1), Chinon, and Cruas (CP2).
As all of these nuclear power plants have at least four reactors, this approach would spare the utility from shutting any site down completely. The French government does not intend to compensate EDF for the revenue lost from these early shutdowns, since all the reactors concerned will already have reached their 50-year depreciation period.

The decision, in 2022, to relaunch a nuclear program

On February 10, 2022, President Macron announced his decision to "prolong the usage period of all nuclear reactors [as much as possible] [...] and initiate a new reactor program today", comprising six EPR2s now and potentially eight additional reactors later. Even with these 14 additional EPR reactors and the life extension of current reactors, the contribution of nuclear power to the French electricity mix is expected to decrease from 70% in 2021 to 40% by 2050. Under France's new energy strategy, construction of the first of six EPR2 reactors will begin in 2028, with commissioning planned for 2035. The Commission nationale du débat public (National Commission for Public Debate) will be consulted on the project from the second half of 2022. As part of France 2030, the head of state has announced a €1 billion program to develop new types of reactors, adding 25 GW of production capacity by 2050. Half of this sum, €500 million, will fund the EDF-led Nuward project for small modular reactors, with a first prototype scheduled for 2030; the remainder will support the development of innovative reactors that generate less waste. In September 2022, Jean-Bernard Lévy, CEO of the Électricité de France group, questioned the government's strategy, pointing out that his own approach had been guided by the law reducing the share of nuclear power in the electricity mix to 50%.
He clarified that he had hired employees to shut down twelve power plants, not to construct new ones. He made these statements during a period when 32 reactors were shut down and EDF's availability was at a historic low, which worsened the country's energy crisis. Shortly after, Emmanuel Macron sharply criticized Jean-Bernard Lévy's comments and defended his administration's nuclear strategy, including the shutdown of the Fessenheim facility. In August 2022, 32 of the 56 reactors were offline, 12 due to corrosion issues and 18 for maintenance. Yearly maintenance is normally concentrated in the summer season, but the French nuclear safety authority (Autorité de sûreté nucléaire) asked for schedule extensions to refurbish the facilities and extend the reactors' lifespan beyond the 40-year mark, for at least ten more years of use. Furthermore, an unforeseen corrosion problem on forged stainless steel has been discovered, threatening the safety injection pipes used to cool the reactor during an accident. To address this problem, EDF has initiated an ultrasonic crack detection program aimed at resolving the matter by 2025. Dismantling old power plants Nuclear decommissioning is an area where France has been cultivating its proficiency since the late 1980s, given the magnitude and diversity of the national facilities involved. These consist of nine UNGG reactors, whose decommissioning faces the major hurdle of the irradiated graphite core, a unique heavy water reactor (Brennilis), and three fast neutron reactors (Rapsodie, Phénix, Superphénix), which require the adoption of novel techniques for sodium handling. Dismantling also entails decommissioning the initial fuel cycle facilities, including the UP1 reprocessing plant in Marcoule, the UP2 plant in La Hague, and the Eurodif enrichment plant in Tricastin. 
The aforementioned scope encompasses two entire CEA centers located in Grenoble and Fontenay-aux-Roses, in addition to the Commissariat's research reactors (Ulysse and Phébus). It is expected that all pressurized water reactors (PWRs) constructed between 1977 and 1999 under the Messmer plan will be deactivated by 2050. This process was scheduled to start with the Fessenheim nuclear power plant in 2020. Fessenheim will serve as a model for future PWRs, but it will not be the first PWR dismantled in France. That distinction goes to the Franco-Belgian Chooz A reactor, shut down in 1991, whose dismantling, underway for 15 years, was due to be completed in 2022. The French Nuclear Safety Authority (ASN) has recommended the immediate dismantling of shut-down reactors; EDF, however, wishes to delay this for decades until the accumulated radioactivity of the nuclear islands has decreased enough to facilitate operations. Fourth-generation projects In 2000, the United States launched the Generation IV International Forum to establish cooperation in the development of innovative nuclear reactors. Two years later, six major concepts were selected: three thermal (slow) neutron reactors and three fast neutron reactors. France, which had just shut down the Superphénix fast reactor, turned to high-temperature prismatic reactor technology with the Antares program (AREVA New Technology based on Advanced gas-cooled Reactor for Energy Supply). Framatome had been involved in the development of this type of reactor with General Atomics for 20 years. In January 2006, President Jacques Chirac decided to launch the design of a prototype fourth-generation reactor. Under the impetus of the CEA, capitalizing on its expertise in the field, France returned to sodium-cooled fast-breeder reactors, the only concept mature enough for a prototype to be built in the medium term. 
The Astrid (Advanced Sodium Technological Reactor for Industrial Demonstration) technology demonstrator project then got underway. In 2010, shortly after the Phénix research reactor had been shut down for good, Astrid received a €651 million grant as part of the "Investissements d'avenir" program, and design studies began. In 2014, Japan joined the project, then estimated at five billion euros, only to be sidelined four years later when the CEA downsized the project to keep costs down. In early 2019, the research program was renewed, but the construction of a new fast-breeder reactor was abandoned, as low uranium prices made it uncompetitive. With the abandonment of Astrid, the CEA turned its attention to small modular reactors (SMRs) with the research program Initiatives Usine Nucléaire du Futur, in partnership with EDF and Framatome. In 1981, CEA and EDF had already collaborated on the design of the NP-300, a 300 MWe modular reactor derived from the K15 naval reactors. The Nuward small modular pressurized water reactor also drew on expertise acquired with naval reactors. Framatome is also developing another high-temperature gas SMR in collaboration with General Atomics. Fusion research France's efforts to develop nuclear fusion technology began in 1957 with the construction of the Tore TA 2000 facility in Fontenay-aux-Roses. While the project was initially shrouded in secrecy, civil engineering work became public in 1958 following the Atoms for Peace Conference. The remarkable strides made by the Soviet Union in this field, revealed in the late 1960s, had a lasting impact on subsequent research, steering it towards tokamak technology. The Fontenay-aux-Roses tokamak (TFR) was the first of its kind in France and began operation on March 22, 1973. It was the world's most powerful tokamak at the time. Following TFR, the Tore Supra started operating at Cadarache in April 1988. Additionally, France has been a collaborator in the Joint European Torus (JET), located in England, since 1983. 
On June 28, 2005, Cadarache was chosen as the host for the international ITER tokamak. Under construction since 2007, ITER aims to demonstrate the technical capability of a reactor capable of producing ten times more power than it consumes over realistic timeframes, with the goal of paving the way for an industrial prototype (Demo). See also :fr:Histoire du programme nucléaire militaire de la France Energy in France Nuclear power debate Nuclear power in France :fr:Exploitation de l'uranium en France Force de dissuasion :fr:Centrale nucléaire en France Uranium mining in France Notes References Bibliography Articles (fr) Jacques Blanc, "Les mines et les mineurs français d'uranium de 1945 à 1975", UARGA, 2009 (read online archive). (fr) Sezin Topçu, "Les physiciens dans le mouvement antinucléaire: entre science, expertise et politique", Cahiers d'histoire, no 102, 2007, p. 89-108 (read online archive). (fr) Alain Mallevre, "L'Histoire de l'énergie nucléaire en France de 1895 à nos jours", L'écho du Grand Rué, Association des retraités du CEA, no 133, 2006 (read online archive). (fr) Dominique Mongin, "Aux origines du programme atomique militaire français", Matériaux pour l'histoire de notre temps, vol. 31, no 1, 1993, p. 13-21 (read online archive) Publications (fr) Jean Songe, Ma vie atomique, Éditions Calmant-Lévy, 2016, 320 p. . (fr) Yves Lenoir, La comédie atomique. L'histoire occultée des dangers des radiations, La Découverte, 2016 . (fr) Nicole Colas-Linhart and Anne Petiet, La saga nucléaire: Témoignages d'acteurs, L'Harmattan, 2015, 251 p. . (fr) Robert Belot, L'Atome et la France: Aux origines de la technoscience française, Odile Jacob, 2015, 332 p. , read online archive). (fr) Sophie Bretesché and Bernd Grambow, Le nucléaire au prisme du temps, Presses des Mines, 2014, 118 p. . (fr) Boris Dänzer-Kantof and Félix Torres, L'Énergie de la France. De Zoé aux EPR, l'histoire du programme nucléaire, Éditions François Bourin, 2013, 703 p. . 
(fr) Sezin Topçu, La France nucléaire: L'art de gouverner une technologie contestée, Éditions du Seuil, 2013, 350 p. . (fr) Raphaël Granvaud, Areva en Afrique: Une face cachée du nucléaire français, Coédition Agone, coll. "Dossiers Noirs", 2012, 300 p. . (fr) Cour des comptes, Les coûts de la filière électronucléaire, Paris, January 27, 2012, 430 p. (read online archive). (fr) Bruno Tertrais, Le marché noir de la bombe, Éditions Buchet/Chastel, 2009, 262 p. . (fr) Michel Hug, Un siècle d'énergie nucléaire, Académie des Technologies, coll. "Grandes aventures technologiques françaises", 2009, 86 p. (read online archive). (fr) Paul Reuss, L'épopée de l'énergie nucléaire: une histoire scientifique et industrielle, Paris, EDP Sciences, coll. "Génie atomique", February 8, 2007, 167 p. , read online archive). (fr) Philippe Pradel, CEA, Direction de l'énergie nucléaire, L'énergie nucléaire du futur: quelles recherches pour quels objectifs, Paris, Éditions du Moniteur, 2005, 108 p. , read online archive). (fr) Bruno Barrillot, Le complexe nucléaire: Des liens entre l'atome civil et l'atome militaire, CDRPC/Observatoire des armes nucléaires françaises, 2005, 144 p. . (fr) Aude Le Dars, Pour une gestion durable des déchets nucléaires, Paris, Presses Universitaires de France, 2004, 281 p. . (fr) Gabrielle Hecht, Le rayonnement de la France, Éditions La Découverte, 2004, 455 p. . (fr) Lionel Taccoen, Le pari nucléaire français: Histoire politique des décisions cruciales, L'Harmattan, 2003, 208 p. . Dominique Finon and Carine Staropoli, The performing interaction between institutions and technology in the French electronuclear industry, Grenoble, October 2000, 26 p. (read online archive). (fr) André Bendjebbar, Histoire secrète de la bombe atomique française, le cherche midi, 2000, 400 p. . (fr) Michel Dürr, Guide international de l'énergie nucléaire, Paris, Éditions Technip, 1987, 406 p. , read online archive), "L'énergie nucléaire en France", pp. 37–44. 
(fr) Jean-Claude Debeir, Jean-Paul Deléage and Daniel Hémery, Les servitudes de la puissance, Flammarion, coll. "Nouvelle bibliothèque scientifique", 1986, "Un nucléaire très cartésien", p. 299-342. (fr) François Dorget, Le choix nucléaire français, Paris, Economica, 1984, 345 p. . (fr) Spencer R. Weart, La grande aventure des atomistes français: Les savants au pouvoir, Fayard, 1980, 394 p. . Filmography Kenichi Watanabe, Terres Nucléaires: une histoire du plutonium archive, 2015, documentary, on Dailymotion. Nucléaire, exception française archive, 2013, report, on Dailymotion. Nicole Le Garrec, Plogoff, des pierres contre des fusils, documentary, 1980. France's first atomic power plant archive, 1955, on the INA website. Official inauguration of the first French atomic reactor at Fort de Châtillon archive, 1948, on the INA website. External links The history of nuclear energy in France from 1895 to the present day archive [PDF] PLOGOFF. Chronicle of a victory against nuclear power. archive Nuclear power in France Nuclear power
History of France's civil nuclear program
[ "Physics" ]
21,073
[ "Power (physics)", "Physical quantities", "Nuclear power" ]
75,019,680
https://en.wikipedia.org/wiki/Society%205.0
Society 5.0, also known as the Super Smart Society, is a concept for a designed society introduced by the Japanese government in 2016. The plan aims to integrate technologies such as artificial intelligence into a preexisting society. It is an adaptation of the Fourth Industrial Revolution, first introduced by the Council for Science, Technology, and Innovation of the Japanese government's Cabinet Office. The unveiling of Society 5.0 took place within the framework of the 5th Science and Technology Basic Plan, presented by the late Japanese Prime Minister Shinzo Abe in 2019. In summary, the plan called for the integration of cyberspace and physical space by means of technologies such as augmented reality. Objective Society 5.0 was designed to promote a shift toward a human-centered, knowledge-based, and data-driven society. In contrast to Germany's Industry 4.0, which focuses on industrial IT integration, Society 5.0 includes the application of IT to improve public living spaces and habits. The Cabinet Office of the Government of Japan describes Society 5.0 as an initiative aimed at ensuring safety, security, comfort, and health for all individuals, facilitating the pursuit of their preferred lifestyles. History The term “Society 5.0” comes from the intention of creating a fifth new society by making the best use of digital transformation, after passing through several earlier societies: the hunting society (Society 1.0), the agrarian society (Society 2.0), the industrial society (Society 3.0), and the information society (Society 4.0). Society 1.0 (Hunting society) A hunter-gatherer society is an anthropological concept that characterizes a society's way of life as dependent on hunting and collecting wild animals, fruits, and plants for sustenance. It is believed that all human societies followed a hunter-gatherer lifestyle until the advent of agriculture during the Neolithic era. 
Society 2.0 (Agricultural society) An agrarian society is a societal structure whose economy primarily relies on agriculture. The origins of agrarian societies are associated with the Neolithic Revolution, also known as the First Agricultural Revolution, which took place during the Neolithic or Stone Age. These societies have persisted in various parts of the world for thousands of years since then, up to the present day, making them the most prevalent form of social and economic organization in pre-industrial history. Society 3.0 (Industrial society) An industrial society is one that has made significant progress in industrialization, and is also referred to as an industrialized society. In many instances, industrial societies follow a previous stage characterized by an agricultural society, making full use of technologies across different fields. Society 4.0 (Information society) An information society is a society in which activities related to the utilization, generation, dissemination, and incorporation of information hold considerable importance. The primary catalysts behind this phenomenon are information and communication technologies, which have driven the rapid development of automatic machines and robots that revolutionized industry and information. Technology applications Japan's National Institute of Advanced Industrial Science and Technology report lists the following six topics as basic technologies for realizing Society 5.0: Technology for enhancing human capabilities, fostering sensitivity, and enabling control within Cyber-Physical Systems (CPS). AI hardware technology and AI application systems. Self-developing security technology for AI applications. Highly efficient network technology along with advanced information input and output devices. Next-generation manufacturing system technology designed to facilitate mass customization. 
New measurement technology tailored for digital manufacturing processes. The Japan Business Federation (Keidanren) initiated "Society 5.0 for SDGs" in alignment with the United Nations' Sustainable Development Goals (SDGs) due to the compatibility between Society 5.0 and the SDGs. See also Cyber manufacturing List of emerging technologies Digital modelling and fabrication Computer-integrated manufacturing Industrial control system Simulation software Technological singularity Work 4.0 World Economic Forum 2016 References 21st century Industrial automation Industrial computing Internet of things Technology forecasting Big data Industrial Revolution Fourth Industrial Revolution Science and technology in Japan
Society 5.0
[ "Technology", "Engineering" ]
825
[ "Industrial computing", "Industrial engineering", "Automation", "Data", "Big data", "Industrial automation" ]
75,020,993
https://en.wikipedia.org/wiki/List%20of%20honeyguides
Honeyguides are birds in the family Indicatoridae in the order Piciformes. There are currently 16 extant species of honeyguides recognised by the International Ornithologists' Union. Conventions Conservation status codes listed follow the International Union for Conservation of Nature (IUCN) Red List of Threatened Species. Range maps are provided wherever possible; if a range map is not available, a description of the honeyguide's range is provided. Ranges are based on the IOC World Bird List for that species unless otherwise noted. Population estimates are of the number of mature individuals and are taken from the IUCN Red List. This list follows the taxonomic treatment (designation and order of species) and nomenclature (scientific and common names) of version 13.2 of the IOC World Bird List. Where the taxonomy proposed by the IOC World Bird List conflicts with the taxonomy followed by the IUCN or the 2023 edition of The Clements Checklist of Birds of the World, the disagreement is noted next to the species's common name (for nomenclatural disagreements) or scientific name (for taxonomic disagreements). Classification The International Ornithologists' Union (IOU) recognises 16 species of honeyguides in four genera. This list does not include hybrid species, extinct prehistoric species, or putative species not yet accepted by the IOU. Family Indicatoridae Genus Prodotiscus: three species Genus Melignomon: two species Genus Indicator: ten species Genus Melichneutes: one species Honeyguides Notes References Lists of animals Lists of birds Indicatoridae
List of honeyguides
[ "Biology" ]
315
[ "Lists of biota", "Lists of animals", "Animals" ]
75,022,723
https://en.wikipedia.org/wiki/D%20Puppis
The Bayer designations D Puppis and d Puppis are distinct. For D Puppis: D Puppis (HR 2691, HD 54475), a bluish star. For d Puppis: d1 Puppis (HR 2961, HD 61831), a blue dwarf star d2 Puppis (HR 2963, HD 61878), a binary star d3 Puppis (HR 2954, HD 61899), a bluish star d4 Puppis (V468 Puppis), a variable blue giant star Puppis, d Puppis
D Puppis
[ "Astronomy" ]
125
[ "Puppis", "Constellations" ]
75,022,832
https://en.wikipedia.org/wiki/HD%2054475
D Puppis, also known as HD 54475, is a B-type star and a pulsating variable in the constellation of Puppis. It has an apparent magnitude of 5.783, which is bright enough to be visible to the unaided eye. The distance to D Puppis, based on a parallax of from the Hipparcos satellite, is 776 light-years. Characteristics This is a B-type main-sequence star with a spectral type of B3V. It has 6.2 times the mass of the Sun and 3.54 times the Sun's radius. It radiates 690 times the solar luminosity from its outer atmosphere at a temperature of 15,723 K. Its age is estimated to be about 15 million years. It is listed as a pulsating variable star in SIMBAD, although the American Association of Variable Star Observers does not assign it any variable-star type. Notes References Sources B-type stars Puppis Variable stars Bright Star Catalogue objects Hipparcos objects Henry Draper Catalogue objects Bayer objects WISE objects
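The distance-from-parallax relation mentioned above is d(pc) = 1000 / π(mas). Since the parallax value itself is missing from this chunk, the sketch below (our own arithmetic, not a figure from the article) works backwards from the quoted 776 light-years to show what parallax that distance implies:

```python
# Distance from parallax: d [pc] = 1000 / parallax [mas].
# Here we invert the quoted 776 ly to see what Hipparcos parallax it implies.
LY_PER_PC = 3.26156  # light-years per parsec

def parallax_mas_from_ly(dist_ly: float) -> float:
    """Parallax in milliarcseconds implied by a distance in light-years."""
    dist_pc = dist_ly / LY_PER_PC
    return 1000.0 / dist_pc

implied = parallax_mas_from_ly(776)  # ~4.2 mas
```

The ~4.2 mas result is an inference from the rounded distance, not a value stated in the article.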
HD 54475
[ "Astronomy" ]
253
[ "Puppis", "Constellations" ]
75,023,075
https://en.wikipedia.org/wiki/HD%20213402
HD 213402 (HR 8577; 73 G. Octantis) is a solitary star located in the southern circumpolar constellation Octans. It has an apparent magnitude of 6.14, placing it near the limit for naked eye visibility. The object is located relatively far away at a distance of 920 light-years based on Gaia DR3 parallax measurements, but it is drifting closer with a heliocentric radial velocity of . At its current distance, HD 213402's brightness is diminished by 0.45 magnitudes due to interstellar extinction and it has an absolute magnitude of −1.15. HD 213402 has a stellar classification of K1 III, indicating that it is an evolved K-type giant. It has a mass comparable to the Sun's, but it has expanded to 44.5 times the radius of the Sun. It radiates 471 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of , giving it the typical orange hue of a K-type star. Gaia DR3 stellar evolution models place it on the red giant branch and yield a larger radius of and a higher luminosity of . HD 213402 is slightly metal deficient with an iron abundance of [Fe/H] = −0.07, or 85% of the Sun's abundance. Like many giant stars it rotates slowly, but its projected rotational velocity is too low to be measured accurately. References K-type giants Octans Octantis, 73 PD-79 01206 213402 111504 8577 00273651959
HD 213402
[ "Astronomy" ]
337
[ "Octans", "Constellations" ]
75,024,242
https://en.wikipedia.org/wiki/Iran%E2%80%93Pakistan%20border%20barrier
The Iran–Pakistan border barrier is a border barrier being built jointly by both countries along their 959-kilometer (596-mile) shared border. The primary goal is to prevent unauthorized border crossings and minimize the trafficking of illegal goods. Background The Iran–Pakistan border, which separates Iran and Pakistan, demarcates the Iranian province of Sistan and Baluchestan from the Pakistani province of Balochistan. The border is 909 kilometers (565 miles) in length. The Iran–Pakistan barrier, presently under construction, consists of a three-foot-thick concrete wall (approximately 0.91 meters) that stands ten feet high (around 3.05 meters). This imposing structure spans 700 kilometers, crossing challenging desert terrain. The objectives of the barrier are to deter unlawful border crossings and curtail the influx of illegal drugs. History and stated purpose The wall is being constructed to stop illegal border crossings and stem the flow of drugs, and is also a response to terror attacks, notably the one in the Iranian border town of Zahedan on February 17, 2007, which killed 13 people, including nine Iranian Revolutionary Guard officials. However, Pakistani Foreign Ministry spokeswoman Tasnim Aslam denied any link between the fence and the bomb blast, saying that Iran was not blaming these incidents on Pakistan. Fencing Iranian fencing project (2011) The 3 ft (91.4 cm) thick and 10 ft (3.05 m) high concrete wall, fortified with steel rods, will span the 700 km frontier stretching from Taftan to Mand. The project will include large earth and stone embankments and deep ditches to deter illegal trade crossings and drug smuggling on both sides. The border region is already dotted with police observation towers and fortress-style garrisons for troops. Iran and Pakistan do not have border disputes or other irredentist claims, and Pakistan's Foreign Ministry has stated, "Pakistan has no reservation because Iran is constructing the fence on its territory". 
Pakistani fencing project (2019) In 2019, Pakistan announced its intention to fence its border with Iran. In May 2019, Pakistan approved $18.6 million in funds to fence the border, and in September 2021 it approved an additional $58.5 million for border fencing. As of mid-2021, Pakistan had fenced 46% of the border, with full fencing expected by December 2021. As of January 2022, Pakistan had fenced 80% of the border, and the Interior Ministry stated that the remainder would also be fenced. Construction The barrier is currently under construction in challenging mountainous areas in southeastern Iran, known for their tough-to-cross terrain. It includes a robust concrete wall measuring three feet in thickness and ten feet in height, extending over 700 kilometers through inhospitable desert regions. As of March 2022, a stretch of 659 kilometers out of the total 830-kilometer border had already been fenced. The remaining 171 kilometers are scheduled to be completed by December 2023. Impact on local economies Despite the presence of barricades and the sophisticated Taftan portal, a significant amount of illegal goods has managed to pass through. In 2021, the desolate and underdeveloped character of the area created challenges in enforcing the law. This situation has sparked frustration among cross-border families. Due to restricted crossings, numerous pickup trucks, locally referred to as "zambad," have been stranded at the Pakistan–Iran border for the past month, enduring uncomfortable heat and hunger. Moreover, Iran and Pakistan have decided to construct six joint border markets to boost trade. In the first phase, three markets will be opened at the border points of Kuhak-Chadgi, Rimdan-Gabd and Pishin-Mand. In the second phase, border markets will be set up at three other border points. The first three of the six border markets have already been constructed and made operational at Gabd, Mand and Chadgi. 
Diplomatic relations Iran and Pakistan have established a collaborative working group to oversee border management, encompassing security, trade, and travel matters between both nations. In January 2023, the two parties signed 39 Memorandums of Understanding (MOUs). These agreements have the potential to significantly boost trade, potentially reaching an estimated trade value of around $5 billion per year. Reactions to the barrier The Foreign Ministry of Pakistan has said that Iran has the right to erect border fencing in its territory. However, opposition to the construction of the wall was raised in the Provincial Assembly of Balochistan. It maintained that the wall would create problems for the Baloch people whose lands straddle the border region. The community would become further divided politically and socially, with their trade and social activities being seriously impeded. Leader of the Opposition Kachkol Ali said the governments of the two countries had not taken the Baloch into their confidence on this matter, demanded that the construction of the wall be stopped immediately, and appealed to the international community to help the Baloch people. References Border barriers Borders of Iran Borders of Pakistan Foreign relations of Iran Foreign relations of Pakistan Border crossings of Iran Border crossings of Pakistan Geography of Sistan and Baluchestan province Geography of Balochistan, Pakistan Iran–Pakistan relations
Iran–Pakistan border barrier
[ "Engineering" ]
1,056
[ "Separation barriers", "Border barriers" ]
75,025,760
https://en.wikipedia.org/wiki/Copalic%20acid
Copalic acid is a chemical compound that is a constituent of copaiba oil, an oleoresin extracted from trees in the genus Copaifera. It is a diterpenoid of the labdane class. Because copaiba oil has some uses in traditional herbal medicine, there has been scientific interest in investigating the potential pharmacology of its constituents, including copalic acid. In addition, synthetic derivatives of copalic acid have been investigated for their potential pharmacology as well. Several laboratory syntheses of copalic acid have been reported. References Diterpenes Carboxylic acids
Copalic acid
[ "Chemistry" ]
124
[ "Carboxylic acids", "Functional groups" ]
75,028,527
https://en.wikipedia.org/wiki/Wale%20Oladipo
Professor Abiodun Adewale Oladipo (born January 1, 1958, Ile-Ife, Nigeria) is a Nigerian academic, administrator, and politician. He is known for his contributions to the field of nuclear chemistry, as well as for his involvement in Nigerian politics. He serves as the Pro-Chancellor and Chairman of the Governing Council of Osun State University. Early life and education Oladipo attended St. John's Catholic Grammar School, Ile-Ife, from 1972 to 1976, where he achieved a Grade 1 in the West African Senior Certificate Examination. He then pursued his bachelor's degree in Chemistry (Education) at Obafemi Awolowo University (formerly University of Ife), Ile-Ife, graduating with Second Class (Upper Division) honors in 1981. Following his undergraduate studies, he continued his education abroad at the Université Claude Bernard, Lyon I, Villeurbanne, France, where he obtained an MPhil and a PhD in Analytical Chemistry (Nuclear Techniques) between 1984 and 1988, producing more than five publications under the supervision of Professor J. P. Thomas. Academic career In 2005, Oladipo achieved the rank of Research Professor at the Centre for Energy Research and Development (CERD) at Obafemi Awolowo University, where he had begun his career as a Senior Research Fellow in 1993. He is also a member of the Nigerian Association of Medical Physicists. Administrative career Throughout his career, Oladipo has held various positions, including Head of the Environmental and Earth Sciences Division, CERD, OAU, and membership of the Academic Board, CERD, OAU. Notable achievements and awards He has authored many articles and served as a keynote speaker at several local and international conferences. His work in nuclear chemistry includes the use of cryogenically produced heavy cluster ions of hydrogen in the study of plasma desorption mass spectrometry, as well as the establishment of a fully automated AAS laboratory with graphite atomization and a cold-vapor Hg detection option. 
Career and Activism Professor Oladipo is active in various civic roles, including serving as a Nominee Director for Odu’a Investment Company Ltd in 1992, a Part-time Member of the Osun State Sports Council from 1998 to 1999, and a Part-time Member of the Osun State Local Govt. Service Commission from Feb. 2000 to 2002. He also served as a Member of the Ife Development Board for three years. In 2008, he was appointed as the Chairman of the Osun State Universal Basic Education Board (SUBEB). Additionally, he served as the Chairman of the Governing Board of the Federal Neuropsychiatric Hospital, Yaba, from 2009 to 2011. In July 2013, he was nominated as the National Secretary of the Peoples Democratic Party (PDP) and was subsequently elected for a substantive four-year tenure in December 2014 at the Party's Special National Convention held in Abuja. Publications J.P. Thomas, A. Oladipo and M. Fallavier; 1988;B32: 354–359. J.P. Thomas, A. Oladipo and M. Fallavier; 1988; "Secondary Ion Emission Induced in Insulators: Analytical Applications"605-611. J.P. Thomas, A. Oladipo and M. Fallavier; 1989; "Collective Effects in the Desorption Process Induced by Hn+ Clusters near the Bohr's Velocity"; J.P. Thomas, A. Oladipo and M. Fallavier; 1989; "Surface Profiling of Insulating Layers using Desorption Induced by Monatomic or Cluster Ions of Beam Diameter in the 5–10 μm Range"; A. Oladipo, M. Fallavier and J.P. Thomas; 1991; "Secondary Ion Emission from Cesium Salts under Megaelectronvolt Ion Bombardment: Comparative Study and Beam Secondary Effects";. B. Nsouli, P. Rumeau, H. Allali, B. Chabert, O. Debre, A. A. Oladipo, J. P. Soulier and J. P. Thomas; 1995; "Plasma Desorption Time-of-flight Mass Spectrometric Elucidation of the Mechanisms of Adhesion Enhancement between Plasma-treated PEEK-Carbon Composite and an Epoxyamine Adhesive"; H. Allali, O. Debre, B. Lagrange, B. Nsouli, A. A. Oladipo and J. P. 
Thomas; 1995; "Spontaneous Desorption : A Controlled Phenomenon for Surface Analysis Application? Part I : New evidence for a sputtering process induced by a well localized field enhanced desorption"; C. A Adesanmi, I. A. Tubosun, F. A. Balogun, and A. A. Oladipo; 1997; "Advantages of Combined IENAA and Ko-factor Technique in the Determination of U and Th Concentrations in Exploration Rock Samples"; H. Allali, M. Ben Embarek, O. Debre, B. Nsouli, A. Oladipo, A. Roche and J. P. Thomas: 1997; "An HSF-SIMS Investigation of the Prephosphatation Contribution to the Phosphatation Process of Silicon Steel Surface"; Rapid Comm. Mass Spectr. 11 1377–1382. C. A. Adesanmi, F. A. Balogun, M. K. Fasasi, I. A. Tubosun, A. A. Oladipo; 2001; A semi- empirical formula for HPGe detector efficiency calibration; References 1958 births Nigerian politicians Nuclear chemists Living people
Wale Oladipo
[ "Chemistry" ]
1,189
[ "Nuclear chemists" ]
75,028,904
https://en.wikipedia.org/wiki/G%C3%A9rard%20G.%20Medioni
Gérard G. Medioni is a computer scientist, author, academic and inventor. He is a vice president and distinguished scientist at Amazon and serves as emeritus professor of Computer Science at the University of Southern California. Medioni has made contributions to computer vision, in particular 3D sensing, surface reconstruction, and object modelling. He has translated his computer vision research into customer-facing inventions and products. He has authored four books, including Emerging Topics in Computer Vision, Multimedia Systems: Algorithms, Standards, and Industry Practices, and A Computational Framework for Segmentation and Grouping, and has published more than 80 journal papers and 200 conference papers, with over 34,000 citations and an h-index of 88. In addition, he holds 103 patents, which include Visual tracking in video images in unconstrained environments by exploiting on-the-fly context using supporters and distracters and Depth mapping based on pattern matching and stereoscopic information, along with patents on Just Walk Out technology and Amazon One. Medioni is a Fellow of the Association for the Advancement of Artificial Intelligence, the Institute of Electrical and Electronics Engineers, the International Association for Pattern Recognition, and the National Academy of Inventors. He is also a member of the National Academy of Engineering. Education and early career Medioni obtained his Diplôme d'Ingénieur in 1977 from the Ecole Nationale Supérieure des Télécommunications (ENST) Paris and worked as a Research Engineer at Thomson-CSF from 1977 to 1978. He then completed his MSc in 1980 and his Ph.D. in 1983 in computer science at the University of Southern California. Career Following his Ph.D., in 1983, Medioni began his academic career as a research associate professor in the Department of Computer Science and Electrical Engineering at the University of Southern California. 
He was subsequently promoted, becoming an assistant professor in 1987, an associate professor in 1992, and a full professor in 1999. Since 2019, he has been an emeritus professor in the Department of Computer Science at the University of Southern California, having chaired the department from 2001 to 2007. Medioni was president and CEO of I.C. Vision, chief technical officer at Geometrix, and director of research at Amazon. Additionally, he has served as an advisory board member at DxO Labs and at PrimeSense in Tel Aviv. In 2019, he was promoted to distinguished scientist and vice president at Amazon. Research Medioni's research spans the field of image understanding, focusing on fundamental issues of representation, matching, and recognition. He has also been interested in designing and implementing highly reliable vision systems capable of tackling challenging tasks, even when constructed from imperfect modules. Moreover, he has used an interdisciplinary approach connecting computer vision and graphics to understand visual information processing. Just Walk Out technology Medioni introduced Just Walk Out (JWO) technology, which provides a new, checkout-free shopping experience. Data captured by a bank of cameras and other sensors in the store is processed in real time to solve the "who took what" problem for every customer. The system achieves a high level of accuracy in detecting people, tracking their location throughout their journey in the store, recognizing the items a customer picks up from the shelves, and producing an accurate receipt for the items they end up buying. Amazon One Medioni developed the algorithmic components for Amazon One, a device that optically captures the unique print and vein patterns of the palm and identifies a user among enrolled users.
PrimeSense As an advisory board member and technical consultant, Medioni contributed to developing PrimeSensor, a low-cost 3D depth (range) sensor used in the Microsoft Kinect. After Apple acquired PrimeSense in 2013, the sensor technology was integrated into the Apple iPhone X, enabling Face ID for mobile unlock. Tensor voting Medioni established tensor voting, an approach to a wide range of problems in computer vision and machine learning that is non-parametric, data-driven, local, and requires a minimal number of assumptions. The tensor voting framework provides a unified perceptual organization methodology applicable to a wide variety of problems. While the original tensor voting formulation worked with 2-D input, it was extended to 3-D (surfaces, stereo), 4-D (motion), and N-D, making it applicable to both computer vision and machine learning. Iterative closest point Medioni developed the Iterative Closest Point (ICP) algorithm to create a complete 3D model of a physical object from partial scans. ICP remains a dominant method for registering partial 3-D scans of a scene, with over 5,500 citations. Rapid avatar capture and simulation Medioni's rapid avatar capture and simulation was the first demonstration of using commodity depth sensors to capture the 3D shape and appearance of human subjects, and then registering and controlling the result within an animation system in minutes. Face modelling Medioni has also worked on face modelling and introduced a technique for building human face models using only two photographs. Through collaborative research efforts he proposed a 3D face modelling and recognition system and a method to produce 3D face models in laser-scan quality. Moreover, he presented a method for remotely identifying non-cooperative individuals using 3D face models built from a sequence of images. Face recognition Medioni has also worked on face recognition technology.
He proposed domain-specific data augmentation as a more accessible way to improve face recognition, achieving performance similar to that of systems trained on large datasets. Additionally, he introduced Pose-Aware Models (PAMs) for unconstrained face recognition. Awards and honors 1999 – Okawa Foundation Award, Okawa Foundation 2003 – Fellow, Institute of Electrical and Electronics Engineers (IEEE) 2004 – Fellow, Association for the Advancement of Artificial Intelligence (AAAI) 2007 – Most Influential Paper over the Decade Award, MVA 2019 – PAMI Mark Everingham Prize, IEEE 2021 – Fellow, Asia-Pacific Artificial Intelligence Association (AAIA) 2021 – Distinguished Leader, APSIPA Industrial 2022 – Fellow, National Academy of Inventors 2023 – Member, National Academy of Engineering (NAE) Bibliography Selected books A Computational Framework for Segmentation and Grouping (2000) ISBN 978-0080529486 Emerging Topics in Computer Vision (2004) ISBN 978-0131013667 Tensor Voting: A Perceptual Organization Approach to Computer Vision and Machine Learning (2006) ISBN 978-1598291001 Multimedia Systems: Algorithms, Standards, and Industry Practices (2009) ISBN 978-1418835941 Selected articles Medioni, G., & Nevatia, R. (1985). Segment-based stereo matching. Computer Vision, Graphics, and Image Processing, 31(1), 2–18. Huertas, A., & Medioni, G. (1986). Detection of intensity changes with subpixel accuracy using Laplacian-Gaussian masks. IEEE Transactions on Pattern Analysis and Machine Intelligence, (5), 651–664. Chen, Y., & Medioni, G. (1992). Object modelling by registration of multiple range images. Image and Vision Computing, 10(3), 145–155. Stein, F., & Medioni, G. (1992). Structural indexing: Efficient 3-D object recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2), 125–145. Dinh, T. B., Vo, N., & Medioni, G. (2011, June). Context tracker: Exploring supporters and distracters in unconstrained environments. In CVPR 2011 (pp. 1177–1184). IEEE.
Khan, S., Rahmani, H., Shah, S. A. A., Bennamoun, M., Medioni, G., & Dickinson, S. (2018). A guide to convolutional neural networks for computer vision (Vol. 8, No. 1, pp. 1–207). San Rafael: Morgan & Claypool Publishers. References Computer vision researchers Computer scientists 21st-century inventors Members of the IEEE Members of the United States National Academy of Engineering Fellows of the Association for the Advancement of Artificial Intelligence 2023 fellows of the Association for Computing Machinery Fellows of the International Association for Pattern Recognition Fellows of the National Academy of Inventors Amazon (company) people Télécom Paris alumni University of Southern California alumni University of Southern California faculty Year of birth missing (living people) Living people
Gérard G. Medioni
[ "Technology" ]
1,751
[ "Computer science", "Computer scientists" ]
75,030,090
https://en.wikipedia.org/wiki/Latil%20TL
The Latil TL (TL being an initialism) is a multipurpose all-wheel drive tractor produced by the French manufacturer Latil. History The TL tractor was introduced in 1924 for forestry, agricultural and "colonial" uses, being unveiled in October of that year at the Paris Salon. In 1925, it was presented in the United Kingdom. The French military commissioned TLs from 1928 onwards. They were tested as haulers of the 75 mm gun alongside the Citroën Kégresse P7bis, although the Latil model was judged too powerful for that usage. It was finally adopted for hauling the 105 L 13 gun until it was replaced, in 1935, by the Latil KTL4. As the TL was considered slow for hauling heavy guns, it was reworked as a hauler of rangefinders for anti-aircraft units. Technical details The engine is an inline-four petrol unit. It is a side-valve monobloc design with an 85 mm bore and a 130 mm stroke, giving a displacement of 2,950 cc. It delivered . Its fiscal power is rated at 14 CV. The gearbox is a 3-speed manual transmission with a transfer case, giving 6 forward speeds and a reverse. The single-disc clutch and the gearbox are built in one unit with the engine. The differential system can lock the drive on any axle through a lever next to the driver's seat. The tractor has a four-wheel steering system. The wheels have either pneumatic tyres or bare steel, and can be twinned on the rear. They could be fitted with retractable spuds to improve grip on some surfaces. The TL can haul up to 5 tonnes. The tractor's wheelbase is and its length (main) . Its weight is about 1.8 tonnes. Braking is through a contracting system on the transmission operated by a pedal, and friction brakes on each wheel operated by a lever. Suspension is by long flat leaf springs. The military version had a speed of . References Citations Bibliography Vehicles introduced in 1924 Tractors Vehicles of France
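The quoted displacement follows directly from the bore, stroke and cylinder count given in the text. A quick arithmetic check (the function name is ours, not from the source):

```python
import math

def engine_displacement_cc(bore_mm, stroke_mm, cylinders):
    """Swept volume of a piston engine in cc: (pi/4) * bore^2 * stroke per cylinder."""
    bore_cm, stroke_cm = bore_mm / 10, stroke_mm / 10
    return math.pi / 4 * bore_cm ** 2 * stroke_cm * cylinders

# Latil TL: 85 mm bore, 130 mm stroke, four cylinders
print(round(engine_displacement_cc(85, 130, 4)))  # → 2951, i.e. the quoted 2,950 cc
```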
Latil TL
[ "Engineering" ]
429
[ "Engineering vehicles", "Tractors" ]
75,030,807
https://en.wikipedia.org/wiki/HD%20204904
HD 204904 (HR 8234; 59 G. Octantis) is a spectroscopic binary located in the southern circumpolar constellation Octans. It has an apparent magnitude of 6.17, placing it near the limit for naked-eye visibility, even under ideal conditions. The object is located relatively close, at a distance of 212 light-years based on Gaia DR3 parallax measurements, and it is drifting closer with a heliocentric radial velocity of . At its current distance, HD 204904's brightness is diminished by 0.19 magnitudes due to interstellar extinction and it has an absolute magnitude of +2.13. HD 204904 has a stellar classification of either F6 IV or F4 IV, indicating that it is a slightly evolved F-type subgiant. It has 1.53 times the mass of the Sun and a slightly enlarged radius, 2.87 times that of the Sun. It radiates 12.1 times the luminosity of the Sun from its photosphere at an effective temperature of , giving it the typical yellowish-white hue of an F-type star. HD 204904 is metal-deficient, with an iron abundance of [Fe/H] = −0.20, or 63.1% of the Sun's. It is estimated to be 2.56 billion years old and it spins modestly with a projected rotational velocity of . In 2014, J. R. De Medeiros and colleagues detected radial velocity variations from the star, indicating that it is a spectroscopic binary. However, no orbit has yet been determined for the system. References F-type subgiants Spectroscopic binaries Octans Octantis, 59 CD-79 00856 204904 106881 8234 00354944747
HD 204904
[ "Astronomy" ]
376
[ "Octans", "Constellations" ]
73,641,345
https://en.wikipedia.org/wiki/Harry%20Hallam%20%28academic%29
Harry Evans Hallam (d. 1977) was a chemist and academic at the University College of Swansea. Early life and career Hallam spent his early years in East Africa. He attended Ardwyn Grammar School in Aberystwyth before going on to serve in the RAF. He studied chemistry at the University College of Aberystwyth and then undertook a University of London PhD by correspondence while working at the University of Khartoum. Academic career In 1955 Hallam was appointed to the staff of the Department of Chemistry at the University College of Swansea, becoming senior lecturer in 1964 and reader in 1970. In 1963, Hallam took a year's sabbatical to become an adviser in physical chemistry at the new University of Nigeria at Nsukka. He maintained active international collaborations: he was presented with a medal by the University of Helsinki in 1973 for his outstanding service, and was a visiting professor at the University of Marburg in 1975. Hallam was known for his work on infrared spectroscopy of the hydrogen bond and as one of the founders of matrix isolation spectroscopy. He died unexpectedly on 14 May 1977. Works Vibrational spectroscopy of trapped species; infrared and Raman studies of matrix-isolated molecules, radicals and ions. London; New York: J. Wiley (1973) Modern Analytical Methods. London: Chemical Society (1972) Personal life Hallam was married to Joan, and they had a son, David. He was an active member of Clyne Chapel, Blackpill. The Harry Hallam Memorial Fund In his memory, an endowment for an annual lecture, of which "particular account would be taken of Harry's interest in spectroscopy", was created in 1977, with an appeal for donations made in the Journal of Molecular Structure. The lectureship is administered by the South Wales West Local Section of the Royal Society of Chemistry. Hallam Prizewinners 1983: M. S. Garley 1984: T. A. Sheppard 1986: A. M. M. Doherty and P. Graham 1988: Miss S. L. Giddings 1989: G. Williams 1990: Miss T. J. 
Lovelock 1991: Ian A. Evetts 1993: A. J. Parry 1994: S. R. Andrews and Prof David A. Worsley 1995: P. D. J. Anderson 1996: Sara Shinton 1997: P. Green and R. Phillips 1999: M. Francis 2000: D. K. Thomas 2001: S. Ford 2002: Rachel Fretwell and Kay Eaton 2003: D. J. Mitchell ... 2008: Rachel C. Evans References Academics of Swansea University People educated at Ardwyn School, Aberystwyth Alumni of Aberystwyth University Welsh chemists Spectroscopists 1977 deaths
Harry Hallam (academic)
[ "Physics", "Chemistry" ]
556
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy" ]
73,642,749
https://en.wikipedia.org/wiki/G%20107-69/70
G 107-69/70 is a quadruple system, consisting of the astrometric binary G 107-69 and the resolved binary G 107-70. The system is 36.76 light-years (11.27 parsecs) from Earth. G 107-69 and G 107-70 are separated by 103.2 arcseconds, or 1163 astronomical units (AU). G 107-69A is a red dwarf star with a spectral type of M4.5 and a mass of about . G 107-69B has a mass of about or . The binary has a period of 0.94 years and a predicted separation of about . Given its mass, G 107-69B could be either a low-mass red dwarf star or a brown dwarf. G 107-70 (also called WD 0727+482) is a pair of white dwarfs of similar mass, brightness and atmospheric composition. The binary was first partially resolved in 1976. Later, Nelan et al. fully resolved the orbit of this binary with Hubble's Fine Guidance Sensor and found an orbital period of and a semi-major axis of . At a distance of 11.27 parsecs the semi-major axis corresponds to about . By resolving the orbit of the G 107-70 system, Nelan et al. were able to calculate the dynamical mass of each component: G 107-70A has a mass of and G 107-70B has a mass of . Both white dwarfs have a spectral type of DA, which indicates an atmosphere dominated by hydrogen. See also Gliese 318, suspected to be the closest double white dwarf, which would make G 107-70 the second-closest double white dwarf Capella, another nearby quadruple system References White dwarfs M-type main-sequence stars Multiple star systems 0275 Lynx (constellation) Double stars
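The quoted 1163 AU projected separation follows from the definition of the parsec: an angle of 1 arcsecond at a distance of 1 parsec subtends 1 astronomical unit. A minimal check (the function name is ours):

```python
def projected_separation_au(sep_arcsec, distance_pc):
    # By definition of the parsec, 1 arcsecond at 1 parsec corresponds
    # to a projected length of 1 astronomical unit, so the projected
    # separation in AU is simply the product of the two quantities.
    return sep_arcsec * distance_pc

# G 107-69 and G 107-70: 103.2 arcseconds at 11.27 parsecs
print(round(projected_separation_au(103.2, 11.27)))  # → 1163
```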
G 107-69/70
[ "Astronomy" ]
380
[ "Lynx (constellation)", "Constellations" ]
73,643,281
https://en.wikipedia.org/wiki/Validation%20and%20verification%20%28medical%20devices%29
Validation and verification are procedures that ensure that medical devices fulfil their intended purpose. Validation or verification is generally needed when a health facility acquires a new device to perform medical tests. Validation or verification The main difference between the two is that validation ensures that the device meets the needs and requirements of its intended users and the intended use environment, whereas verification ensures that the device meets its specified design requirements. For instance, a regulatory regime (such as CE marking in Europe or FDA approval in the United States) may require that a product has been validated for general use before approval. An individual laboratory that introduces such an approved medical device may then not need to perform its own validation, but generally still needs to perform verification to ensure that the device works correctly. Workflow Standards Standards for validation and verification in medical laboratories are outlined in the international standard ISO 15189, in addition to national and regional regulations. Per United States federal regulations, a medical laboratory that introduces a new testing device must perform the following analytical tests: To establish a reference range, the Clinical and Laboratory Standards Institute (CLSI) recommends testing at least 120 patient samples. In contrast, for the verification of a reference range, it is recommended to use a total of 40 samples, 20 from healthy men and 20 from healthy women, and to compare the results to the published reference range. The results should be evenly spread throughout the published reference range rather than clustered at one end. The published reference range can be accepted for use if 95% of the results fall within it. Otherwise, the laboratory needs to establish its own reference range. See also Validation (drug manufacture) References Quality management Product testing Systems engineering
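The acceptance rule described above (accept the published range if at least 95% of the verification results fall within it) can be sketched as follows; the function name and sample values are illustrative assumptions, not taken from any standard:

```python
def accept_published_range(results, low, high):
    """Reference-range verification: accept the published range
    [low, high] if at least 95% of the results fall within it."""
    within = sum(low <= r <= high for r in results)
    return within / len(results) >= 0.95

# 40 hypothetical verification results, 38 of them inside a published
# range of 0.5-1.2 (38/40 = 95%), so the range would be accepted.
results = [0.8] * 38 + [1.5, 1.6]
print(accept_published_range(results, 0.5, 1.2))  # → True
```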
Validation and verification (medical devices)
[ "Engineering" ]
338
[ "Systems engineering" ]
73,643,914
https://en.wikipedia.org/wiki/HD%20193002
HD 193002 (HR 7758; NSV 25094) is a solitary red-hued star located in the southern constellation Telescopium. It has an apparent magnitude of 6.26, placing it near the limit for naked-eye visibility, even under ideal conditions. The object is located relatively far away, at a distance of 1,030 light-years based on Gaia DR3 parallax measurements, but it is approaching the Solar System with a heliocentric radial velocity of . At its current distance, HD 193002's brightness is diminished by 0.17 magnitudes due to interstellar dust and it has an absolute magnitude of −0.93. HD 193002 has a stellar classification of M0/1 III, indicating that it is an evolved red giant with the characteristics of an M0 and an M1 giant star. It is currently on the asymptotic giant branch, generating energy through fusion in hydrogen and helium shells around an inert carbon core. It has a mass comparable to the Sun's but has expanded to 84.5 times the radius of the Sun. It radiates 711 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of . HD 193002 is slightly metal-enriched, with an iron abundance 118% of the Sun's, or [Fe/H] = +0.07. HD 193002 was first suspected to be variable in 1997, from Hipparcos satellite data. It fluctuates between magnitudes 6.34 and 6.39 in the Hipparcos passband. References M-type giants Asymptotic-giant-branch stars Suspected variables Telescopium Telescopii, 85 PD-55 09365 193002 100300 7758
HD 193002
[ "Astronomy" ]
364
[ "Telescopium", "Constellations" ]
73,644,242
https://en.wikipedia.org/wiki/Tofersen
Tofersen, sold under the brand name Qalsody, is a medication used for the treatment of amyotrophic lateral sclerosis (ALS). Tofersen is an antisense oligonucleotide that targets the production of superoxide dismutase 1, an enzyme whose mutant form is commonly associated with amyotrophic lateral sclerosis. It is administered as an intrathecal injection. The most common side effects include fatigue, arthralgia (joint pain), increased cerebrospinal (brain and spinal cord) fluid white blood cells, and myalgia (muscle pain). Tofersen was approved for medical use in the United States in April 2023, and in the European Union in May 2024. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication. Medical uses Tofersen is indicated to treat people with amyotrophic lateral sclerosis (ALS) associated with a mutation in the superoxide dismutase 1 (SOD1) gene (SOD1-ALS). History Tofersen was developed by Ionis Pharmaceuticals and was licensed to, and co-developed by, Biogen. The effectiveness of tofersen was evaluated in a 28-week, randomized, double-blind, placebo-controlled clinical study in 147 participants with weakness attributable to amyotrophic lateral sclerosis and a superoxide dismutase 1 (SOD1) mutation confirmed by a central laboratory. The study randomly assigned 108 participants in a 2:1 ratio to receive treatment with either tofersen 100 mg (n = 72) or placebo (n = 36) for 24 weeks (three loading doses followed by five maintenance doses). The participants were approximately 43% female and 57% male; 64% were White and 8% Asian. The average age was 49.8 years (range 23 to 78 years). The phase III clinical trial was conducted by the Neuroscience Institute and the Sheffield Institute for Translational Neuroscience (SITraN), both at the University of Sheffield. The US Food and Drug Administration (FDA) granted the application for tofersen priority review, orphan drug, and fast track designations. 
Society and culture Economics Only around 1-2% of ALS cases diagnosed in the United States each year carry the specific SOD1 mutation targeted by the drug. Fewer than 500 patients a year are expected to be eligible for the drug, which is expected to cost over $100,000 for a year's treatment. Legal status In February 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization under exceptional circumstances for the medicinal product Qalsody, intended for the treatment of a type of amyotrophic lateral sclerosis caused by a defective superoxide dismutase 1 (SOD1) protein. The applicant for this medicinal product is Biogen Netherlands B.V. Tofersen was approved for medical use in the European Union in May 2024. References Amyotrophic lateral sclerosis Therapeutic gene modulation Orphan drugs
Tofersen
[ "Biology" ]
653
[ "Therapeutic gene modulation" ]
73,644,559
https://en.wikipedia.org/wiki/James%20Busfield
James Busfield is a professor at Queen Mary University of London and head of the United Kingdom's largest research group in the area of soft matter. Education Busfield completed an MA in Engineering Science at the University of Oxford in 1989. In 2000, he completed a doctoral degree in materials science at Queen Mary University of London under advisor Alan G. Thomas. He has made influential studies of ceramic foams and of the electrical and mechanical behavior of filled rubbers. He chaired the 2003 European Conference on Constitutive Models for Rubber together with Alan Muhr. Awards and recognition 2009 - National Teaching Fellowship 2009 - Colwyn Medal from the IOM3 2010 - Sparks–Thomas Award from the ACS Rubber Division 2020 - Fellow of the Royal Academy of Engineering 2021 - George Stafford Whitby Award from the ACS Rubber Division Notable students Lewis Tunnicliffe References Polymer scientists and engineers Living people Year of birth missing (living people) Academics of Queen Mary University of London British physical chemists 21st-century British chemists Alumni of the University of Oxford Alumni of Queen Mary University of London Fellows of the Royal Academy of Engineering
James Busfield
[ "Chemistry", "Materials_science" ]
227
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
73,644,731
https://en.wikipedia.org/wiki/GJ%203522
GJ 3522 (G 41-14) is a nearby triple star system, consisting of a short-period double-lined spectroscopic binary and an outer companion that was discovered with adaptive optics on the CFHT. The system is 22 light-years (6.8 parsecs) from Earth. The stars of the inner binary orbit each other every 7.6 days, while the outer companion completes an orbit around the inner pair every 5.7 years. The system has a spectral type of M3.5. The star shows flares in the optical and X-ray, and also shows activity in H-alpha and the ultraviolet. See also List of star systems within 20–25 light-years References Triple star systems Cancer (constellation) M-type main-sequence stars Flare stars
GJ 3522
[ "Astronomy" ]
160
[ "Cancer (constellation)", "Constellations" ]
73,645,096
https://en.wikipedia.org/wiki/New%20Investigator%20Award
A New Investigator Award is a type of funding grant awarded to "early career" academics in the sciences. A wide array of entities offer New Investigator Awards, including: Engineering and Physical Sciences Research Council (UK) American Association of Colleges of Pharmacy American Medical Informatics Association European Human Behaviour and Evolution Association Canadian Association for Neuroscience References Grants (money)
New Investigator Award
[ "Technology" ]
70
[ "Science and technology awards", "Science award stubs" ]
73,645,617
https://en.wikipedia.org/wiki/Stefan%20Bringezu
Stefan Bringezu is a German environmental scientist. He has conducted pioneering research in the field of material flow analysis and derived policy-relevant indicators of resource use, which have contributed to statistical standards in the EU, OECD, and UNEP and to environmental footprinting across scales. He was selected as an inaugural member of the International Panel for Sustainable Resource Management (now the International Resource Panel) and lead-coordinated three of its reports. He was scientific director of the Center for Environmental Systems Research at Kassel University, Germany. References Environmental scientists German scientists Living people Year of birth missing (living people)
Stefan Bringezu
[ "Environmental_science" ]
121
[ "Environmental scientists" ]
73,645,787
https://en.wikipedia.org/wiki/Diffuse%20correlation%20spectrometry
Diffuse correlation spectroscopy (DCS) is a medical imaging and optical technique that utilizes near-infrared light to directly and non-invasively measure tissue blood flow. The imaging modality was created by David Boas and Arjun Yodh in 1995. Blood flow is one of the most important factors affecting the delivery of oxygen and other nutrients to tissues. Abnormal blood flow is associated with many diseases such as stroke and cancer. Tumors can generate abnormal blood flow compared to the surrounding tissue, and some cancer treatments attempt to decrease blood flow to cancer cells. There is therefore a pressing need for a practical way to measure blood flow. However, blood flow is difficult to measure because the sensitivity and stability of the measurement depend on the magnitude of flow, the location, and the diameter of individual vessels. Current imaging modalities used to measure blood flow include Doppler ultrasound, PET, and MRI. Doppler ultrasound is limited to large vessels. PET requires arterial blood sampling and exposure to ionizing radiation. MRI cannot be used for patients with pacemakers or metal implants. Altogether, these imaging modalities require large and costly instrumentation and are not conducive to continuous measurements. With these considerations in mind, the first optical methodology used to measure blood flow was near-infrared spectroscopy (NIRS). It is based on a well-known spectral window in the near-infrared (NIR, 700–900 nm) where tissue absorption is relatively low, so that light can penetrate into deep/thick volumes of tissue, up to several centimeters. It provides a fast and portable alternative for measuring deep-tissue hemodynamics. However, it has poor spatial resolution and is a 'static' method: it measures the relatively slow variation in tissue absorption and scattering. In other words, it measures changes in the amount of scattering rather than the motion of the scatterers. 
This led to the 'dynamic' NIRS technique, diffuse correlation spectroscopy, which measures the motion of the scatterers while retaining the advantages of NIRS. The primary moving scatterers are red blood cells. The main advantages of this method are no ionizing radiation, no contrast agents, high temporal resolution, and large penetration depth. The utility of DCS technology has been demonstrated in tumors, brains, and skeletal muscles. The general approach in DCS is to monitor the temporal statistics of the fluctuations of the scattered light within a speckle area or pixel. The electric field temporal autocorrelation function is then measured and, using a model for photon propagation through tissue, the measured autocorrelation signal is used to determine blood flow. Mathematical principles Diffuse correlation spectroscopy is an extension of single-scattering dynamic light scattering (DLS). Single-scattering theory becomes inadequate when multiple scattering takes place, as in thick biological tissues, and each scattering event contributes to the decay of the correlation function. The fields from individual photon paths are assumed to be uncorrelated; therefore, the total field autocorrelation function can be expressed as the weighted sum of the field autocorrelation functions from each photon path. The physical effect that makes the blood flow measurement possible is that the temporal electric field autocorrelation function, shown in equation 1, diffuses through tissue in a manner similar to the light fluence rate. In a highly scattering medium, the photon fluence rate obeys the time-dependent diffusion equation, shown in equation 2. The blood flow measurement can therefore be governed by a diffusion equation: many of the tissue optical properties that govern diffusion, such as the tissue absorption and reduced scattering coefficients, are the same for the temporal autocorrelation function. 
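The numbered equations are not reproduced in this copy; a standard statement of the first two, as given in the DCS literature (with E the electric field, Φ the photon fluence rate, v the speed of light in the medium, μₐ the absorption coefficient, D the photon diffusion coefficient, and S the source term), is, as a sketch:

```latex
% Equation 1: temporal field autocorrelation function
G_1(\vec{r},\tau) = \left\langle \vec{E}^{*}(\vec{r},t)\cdot\vec{E}(\vec{r},t+\tau)\right\rangle

% Equation 2: time-dependent photon diffusion equation for the fluence rate
\frac{\partial \Phi(\vec{r},t)}{\partial t}
  = \nabla\cdot\!\left(D\,\nabla\Phi(\vec{r},t)\right)
  - v\,\mu_a\,\Phi(\vec{r},t) + v\,S(\vec{r},t)
```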
Using the same set of approximations, the temporal field autocorrelation function obeys a formally similar diffusion equation, shown in equation 3. The mean-square particle displacement has been found to be reasonably well approximated by an "effective" Brownian motion, i.e., DB represents the effective diffusion coefficient of the moving scatterers. To estimate relative blood flow from DCS data, the measured intensity autocorrelation functions are fitted to solutions of equation 3. There is currently no firm explanation for why Brownian-motion correlation curves work so effectively; the approach remains empirical. The quantity αDB (cm2/s) has been found to correlate well with other blood flow measurement modalities, and αDB is therefore used as the blood flow index (BFI). The relative blood flow (rBF) is then calculated as shown in equation 4, where BFI0 is the DCS blood flow measurement at a baseline. Instrumentation and data acquisition The instrumentation needed for data acquisition includes a multimode optical fiber, single-mode or few-mode fibers, photon-counting avalanche photodiodes (APDs), a multi-tau correlator board, and a computer. The first step of data acquisition is probing the tissue with multimode optical fibers that deliver long-coherence-length laser light to the tissue. The second step is collecting photons emitted from the tissue surface with single-mode or few-mode fibers. The third step is detection of these photons by the APDs, which produce transistor–transistor logic (TTL) pulse outputs. These outputs are fed into the multi-tau correlator board, which calculates the temporal intensity autocorrelation functions of the detected signal. 
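The correlator board's core computation, the normalized intensity autocorrelation g2(τ) = ⟨I(t)I(t+τ)⟩/⟨I⟩², can be sketched in plain Python. This is a simplified linear-lag version for illustration only (real boards use a multi-tau lag scheme, and the function name is our assumption):

```python
def intensity_autocorrelation(counts, max_lag):
    """Normalized intensity autocorrelation g2(tau) of a photon-count
    time series, evaluated at linear lags 0..max_lag."""
    n = len(counts)
    mean_sq = (sum(counts) / n) ** 2
    g2 = []
    for lag in range(max_lag + 1):
        # average of I(t) * I(t + lag) over all valid start times t
        products = [counts[i] * counts[i + lag] for i in range(n - lag)]
        g2.append(sum(products) / len(products) / mean_sq)
    return g2

# A strictly alternating trace correlates at even lags and
# anti-correlates at odd lags:
print(intensity_autocorrelation([1, 0] * 500, 2))  # → [2.0, 0.0, 2.0]
```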
The correlator then outputs the functions to the computer, where they are fitted to the diffusion equation from the previous section in order to determine optical properties of the tissue as well as properties of the moving scatterers (red blood cells), such as the blood flow index. Application example A clinical application of DCS is in the diagnosis of cancers, for example measuring red blood cell flow in breast tumors. In one experiment, both healthy patients and patients with breast tumors were recruited. Researchers scanned each tumor with a hand-held optical probe with four sources and detectors 2.5 cm apart from each other. The resultant correlation functions were then fitted to the solution of the correlation diffusion equation to obtain the blood flow index, and the average relative blood flow was reported at each position. Blood flow increased in both horizontal and vertical scans as the probe crossed over the tumor. These findings were consistent with previous Doppler ultrasound and PET results. Advantages, limitations, and future directions Diffuse correlation spectroscopy measures the motion of scatterers, or red blood cells, in tissue by analyzing intensity autocorrelation functions. There are many advantages to this method. First, DCS can be used for patients of all ages; this is significant, as some modalities such as MRI are difficult to use for certain populations. Second, DCS instrumentation is easy to assemble and requires only a single, freely chosen wavelength. Third, the theoretical concepts of DCS can be adapted to other blood flow imaging techniques. However, there are limitations associated with DCS. First, the reason why the dynamics of RBCs are so well approximated by a Brownian-motion flow model is still not clear. Second, motion artifacts are common and can generate signals that mislead physiological interpretation. 
Third, on the instrumentation side, the low signal-to-noise ratio (SNR) due to the small collection fibers is challenging. Next steps for DCS include using this modality as a bedside monitor of cerebral perfusion. Furthermore, DCS should be used to increase our understanding of early brain development. The ability to monitor neurovascular responses will enable the use of more complex stimulation paradigms. References Medical imaging Spectroscopy
Diffuse correlation spectrometry
[ "Physics", "Chemistry" ]
1,605
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
73,646,116
https://en.wikipedia.org/wiki/Cantharellus%20anzutake
Cantharellus anzutake, also known as Japanese golden chanterelle, is a fungus native to Japan and Korea. It is a member of the genus Cantharellus along with other popular edible chanterelles. It is named after the Japanese common name of chanterelle, . Description The pileus (cap) of C. anzutake is wide, and yellow, sometimes with a darker center. The hymenium is folded into decurrent ridges (false gills) and cross-veins. The color of these ridges is usually similar to the cap, becoming whitish to pale cream near the stipe (stem). The stem is long and wide, with white coloration. The spores are ellipsoid to ovoid, 7.3–8.8 × 5.1–6.1 μm. Distribution and habitat Native to Japan and Korea, C. anzutake forms a mycorrhizal association with Pinus densiflora, Carpinus laxiflora, and Quercus mongolica. Uses Cantharellus anzutake is an edible mushroom, long labeled as C. cibarius. Scientists have described a method of obtaining a pure C. anzutake culture from mycorrhizae and reported repeated fruiting of potted pine seedlings inoculated with the culture, potentially making cultivation feasible. References External links anzutake Edible fungi Fungi of Japan Fungi described in 2017 Fungi in cultivation Fungus species
Cantharellus anzutake
[ "Biology" ]
301
[ "Fungi", "Fungus species" ]
73,646,514
https://en.wikipedia.org/wiki/Descriptive%20Experience%20Sampling
Descriptive Experience Sampling or DES is a method that aims to uncover the contents of a person's consciousness over the course of short intervals. To do this, practitioners use devices that deliver random beeps. Participants hear these beeps as they go about their daily life. After each beep, they jot down what was in their inner experience in the short moment directly before the beep. This could be a thought, feeling, ‘voice in their head’, or whatever else is present. After a certain number of beeps are collected, participants are given an interview following strict guidelines. DES holds that participants must be trained over the course of multiple days in order to faithfully observe what's in their experience. Findings often differ greatly from participant expectations and sometimes even from scientific consensus. History Russell Hurlburt developed the method in the early 1970s. It was refined over the course of the next decades, with the help of frequent collaborators such as Christopher Heavey, Sarah Akhter, and Alek Krumm. Hurlburt and collaborators wanted a method to examine inner experience without the memory errors, biases, heuristics, and self-schema-based preconceptions that can distort first-person reporting. First-person methods have had a difficult history. A disagreement between two camps of introspectionists at the beginning of the 20th century led to the field's abandonment by mainstream psychology. An influential 1977 study by Nisbett and Wilson further cemented the notion that first-person reporting is flawed and distorted by memory issues and biases. Hurlburt and colleagues sought a method that would overcome these limitations. DES complies with Nisbett and Wilson's oft-overlooked recommendations for how first-person reports could be more accurately obtained.
These include 1) interrupting a process at the moment it is occurring, 2) alerting subjects to pay careful attention to their cognitive process, and 3) coaching them in introspective procedures. Hurlburt's research started with the use of the beeper device in naturalistic settings. Originally, he gave participants a questionnaire with a limited range of options. This facilitated quantitative comparison. But reportedly, Hurlburt grew frustrated at the limitations this placed on unveiling experience. He moved towards more in-depth qualitative interviewing. He studied the work of Husserl and Heidegger and drew inspiration from phenomenology. When first refining the method, Hurlburt sampled himself. He then concluded that it would be better not to use himself as a subject: phenomena that he observed in himself he might more easily attribute to others. For the next 25 years or so he refused to participate in DES as a subject until the urgings of his students convinced him to try. Method Procedure DES uses a device that delivers random beeps to participants throughout the day. It can be a specialized device or a programmed smartphone. The beeps are delivered through an earpiece to increase the immediacy with which participants can observe their experience. In a typical procedure, participants receive six beeps a day. Sampling occurs in the participant's everyday environment to increase ecological validity. They could be doing laundry, having lunch with friends, driving to work, or whatever else they would typically do. After each beep, participants jot down what was in their inner experience directly before the beep. Not after or during, but directly before. This is sometimes referred to as the “millisecond” before the beep, or the “last undisturbed moment”. These phrasings are both somewhat inaccurate, but the goal is to convey to the subject that they should be as precise as possible in their descriptions.
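The random-interval beep delivery described above can be sketched as a simple scheduler. This is only an illustration: the waking window, the uniform distribution, and the function name are assumptions, not part of the DES protocol:

```python
import random

def schedule_beeps(n_beeps=6, day_start=9.0, day_end=21.0, seed=None):
    """Draw n_beeps random beep times (in hours of the day), uniformly
    across a hypothetical waking window, and return them in order."""
    rng = random.Random(seed)
    return sorted(rng.uniform(day_start, day_end) for _ in range(n_beeps))

# Six beeps for one sampling day; the participant notes their inner
# experience at the moment directly before each beep.
beeps = schedule_beeps(seed=42)
```

A real DES beeper is a dedicated hardware device or a programmed smartphone; this sketch only mirrors the randomness of the beep timing.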
After six beeps are collected, participants are given a one-hour interview. This interview is delivered within 24 hours of beep collection, to reduce errors of memory recall. The one-hour limit is meant to avoid exhausting the participant. Any beeps not discussed within the hour are discarded. Participants are told that they can choose not to report any beep that is too private. The entire process of beep collection and interviewing is repeated, usually for around six days but often longer. The first day of sampling and interviewing is always considered training and discarded from further data analysis. Other days may be discarded as well if interviewers judge that the participant is not yet adequately trained. DES holds that observing experience is difficult and the sustained efforts of both the participant and interviewer, as “co-investigators,” are required. What follows is an example of an admissible DES sample, which researchers compile based on the participant's interview responses. Steven was pacing around his condo engaged in a mental argument. At the beep he was innerly saying the word “whatever” to himself in his own voice, as if directed at the person he was mentally arguing with. He was also aware of a sense of frustration and an accompanying sensation of heat and outward-radiating pressure behind his ears and eyes. Simultaneously, he was also aware of a “frenetic” restless energy in his arms and legs that made him feel like he had to be moving. After samples are collected, they can be coded. This sample, for example, could be coded as containing inner speaking (the “whatever” element), feeling (the sense of frustration), and sensory awareness (the restless energy). After coding, intrasubject, intersubject, and intergroup analysis can also be performed. Interview guidelines The interview procedure is detailed in books like Exploring Inner Experience: The Descriptive Experience Sampling method.
It is also available in an interactive website and a video series. DES interviews follow rigorous guidelines. A core component is “bracketing presuppositions”. Interviewers must leave behind their notions of what they think experience is like. A participant's experience may be very different from their own. Participants are also trained to bracket their own presuppositions. They might at first have preconceived notions of their experience that prevent careful observation. DES literature contains examples of participants originally mistaken about their experience. For example, one participant, Donald, prior to DES, described his experience as mostly consisting of anxiety. But DES sampling revealed that in a good deal of his samples he was angry, specifically at his children. Donald denied this theme until shown his samples. Hurlburt interprets this as evidence that retrospective self-accounts are often incorrect. Presuppositions can be difficult to overcome. In order to reach accurate descriptions without biasing participants, DES uses what Hurlburt calls “open-beginninged probes.” One such question could be: ‘What, if anything, was in your experience at the moment of the beep?’ Other phrasings like "what were you thinking about at the moment of the beep" could bias participants—for example suggesting that they need to report something that qualifies as “thinking.” Interviews should avoid generalizations and guide participants toward the concrete. For example, a participant might answer, “I always am talking to myself.” This is not admissible for DES. The goal is to find what was in experience specifically at the instant before the beep. When a question cannot be content-neutral, interviewers should offer multiple options. For example, if a participant describes a mental image, the next step could be eliciting greater precision. Questioning could proceed: ‘Were there borders around the image? Or no borders?
Or you’re not sure?’ Opportunities should be given for participants to revise their story or change it completely. Interviewers pay attention to “subjunctification.” This includes hesitation, words like ‘umm,’ ‘like,’ ‘I guess,’ and ‘I suppose’. These can indicate the participant's doubt and their removal from direct experience. The goal of DES is not to force an accurate description for every sample. Often, samples are inconclusive. The goal is to train participants so they can be more sensitive to their experience on subsequent days. Each day of interviewing can be considered training for the next. Validity Claims to validity Hurlburt and Heavey write that DES follows ‘idiographic validity’. By this, they mean that we can only judge the validity of DES for one participant at a time. Researchers approaching the method should ask if they are convinced by the argumentation behind the method's guidelines. They should then ask if the researcher and participant in question complied with these guidelines. Validity studies can also be performed. One study looked at the interobserver reliability of interviewing and coding. Two researchers independently interviewed DES participants and coded their experiences. They compared these codes to see if they matched and found high reliability. DES samples can also be checked with other observables. Hurlburt and Heavey refer to this as situating DES in a “nomological network.” This means an interlocking system of observables that can help build up validity. These could be first-person or third-person observables. For example, one DES participant, Fran, did not have any figure / ground distinction in her mental imagery. No part of her mental images appeared closer or better in focus. Hurlburt was surprised by this and sought another way of testing. He showed Fran classic figure/ground illusions, for example an image that simultaneously shows faces and a vase. 
Fran reported seeing both the faces and the vase at the same time, with no alternation between them. The third-person measure—Fran's response to the illusion—was used to corroborate Fran's first-person reports. Other studies can incorporate third-person data such as neuroimaging. For example, samples of inner speaking while in an MRI scanner correlated with activation in classic speech-processing areas, including the left inferior frontal gyrus. Other evidence for validity could come from participants being helped by the DES process. For example, after sampling, Fran was able to control intrusive thoughts better. Other participants have gained better clarity over their inner life and relationships. Outcomes like these can be part of the interlocking system of observables, even if they aren't, in themselves, proof of validity. Criticism For some, DES, despite its efforts, doesn't overcome issues of first-person reporting including biases and memory constraints. For example, Eric Schwitzgebel sees first-person reporting as still too subject to distortion. Hurlburt and Schwitzgebel have addressed these criticisms in a book where Hurlburt conducts DES with a participant. Hurlburt and Schwitzgebel then discuss the potential limitations of the method. Another line of criticism is that DES doesn't go far enough in uncovering certain aspects of experience. For example, its narrow temporal scope might leave out certain temporally extended phenomena. Or its lack of directing participants' awareness could mean it misses certain nuances of experience. Hurlburt acknowledges that DES samples can be incomplete and may miss some elements of experience. To him, being confident of reported elements is more important than capturing all possible elements in experience. Findings Importance of the individual A key insight from DES research is how variable individual experience can be. Different participants can have very different kinds of conscious experience.
Hurlburt writes: I have sampled with some people whose inner experience is characterized almost exclusively by inner speech; with others whose inner experience is characterized almost exclusively by images, or by sensory awareness, or by unsymbolized thinking, or by feelings; with others whose inner experience is characterized by a combination of all those; with some whose inner experience is characterized by many simultaneous events; with others whose inner experience is characterized almost exclusively by one event at a time; and so on. So, yes, I think people are importantly different when it comes to inner experience. DES has been used to form generalizations about groups, for example regarding various mental disorders. But researchers emphasize that beyond such generalizations, we should retain the importance of individual experience. Researchers call this an 'idiographic' focus. Hurlburt gives the example of a participant, RD, who described some thoughts as being “solid” and some as being “light.” The goal of interviewing was to determine whether this distinction represented a salient aspect of RD's experience or if it was merely a quirk of his descriptions. After questioning, Hurlburt concluded that this division was indeed a salient feature. A light thought meant RD was “not deeply focused on the thought or working on the thought.” A solid thought, in contrast, was “heavily concentrated, deeply focused.” This division of experience into 'solid' and 'light' is not common. Particularities of individual experience can emerge through sampling. This doesn't, however, preclude further sampling with other participants from revealing similarities. Five frequent phenomena Once interviewers recognize that their primary focus is on individual experience, they can build up an understanding of consistent features across participants.
For example, research has identified five particularly frequent phenomena: sensory awareness, inner speaking, images, feelings, and unsymbolized thinking. One study looked at the inner experience of 30 college students to estimate frequencies of these phenomena. Frequencies varied substantially across individuals. For instance, nine participants had no instances of sensory awareness across their ten samples. One participant had sensory awareness in 100% of samples. What follows is a description of each of these five frequent phenomena, with examples from other studies. Sensory awareness This denotes paying particular attention to sensory aspects of the world or one's own body. For example, a participant might notice, “a shiny blueness, feel the coldness of an iced drink, hear a feature of a friend’s voice, or feel a muscle twinge.” To be coded as sensory awareness, the participant should be focused on these aspects—they're not merely passive in the background. Hurlburt gives the example of someone skillfully driving a car but without their attention directed at their surroundings. He compares this to someone driving a car with their attention on the “particular yellow color of the yellow line.” The latter would be coded as sensory awareness. As noted, there is great intersubject variability for sensory awareness. Some participants can hardly ever be attuned to sensory aspects of their environment. Some participants can have sensory awareness in nearly every sample, or multiple different instances in one sample. Variations in sensory awareness may play a role in psychopathology. In one study, people with autism could have sensory awareness occupy 100% of their samples, to the exclusion of all other phenomena. People with schizophrenia can have distortions in their sensory awareness—vision can be scratched, warped, and shuffled. Inner Speaking Inner speaking denotes what some might call ‘self-talk’ or a ‘voice in their head’.
But whereas some use these phrases metaphorically, for some they can be quite literal. Inner speaking can take different characteristics. It is often in the participant's own voice, although it can take on characteristics of people they know. Sometimes tone, pitch, and auditory characteristics are clear and sometimes not. Sometimes the voice occupies specific regions, often in participants’ heads. Sometimes this localization is more metaphorical, and sometimes there is no localization at all. Inner speaking can sometimes fully express thoughts. At other times, it may only emphasize certain words—in what can be called partially worded speech. In the aforementioned study on the frequent phenomena of experience, the presence of inner speaking correlated with a decrease in psychological distress. Inner speaking might then have a role in lessening psychological distress. Images Sometimes called “inner seeing,” this refers to mental images. These are different from participants observing the world in front of them. A participant could for instance be physically in an office but have a mental image of a palm tree on a beach. DES researchers have observed a good deal of variability regarding mental images, both between samples and between participants. Sometimes images can be clear, and sometimes vague and ill-defined. Sometimes they can appear with borders around them like a photograph. Sometimes they can appear as if the participant is present ‘in the scene.’ Sometimes they can be in color. Sometimes they can be in black and white. Hurlburt and Heavey speculate that inner seeing is a skill that can take time to acquire and can also be lost. They illustrate this with an example from a 9-year-old boy. At the moment before the beep, he had a mental image of a hole he had been digging in real life. When asked if it was like the real hole, the boy replied “Yes […] except that the real hole has more toys in it.
If you had beeped me in a couple minutes, I would have had time to finish the picture.” The authors infer that creating detailed mental images is a skill that adults might take for granted but takes practice to develop. They compare this with sampling from an 81-year-old woman whose mental images were entirely in black-and-white or brown-and-white. They speculate that detailed, colored inner seeing is a skill that for her has declined with age. Feelings This code denotes awareness of emotional experiences like sadness, happiness, worry, and frustration. In DES, 'feelings' does not refer to physical sensations, which are coded as sensory awareness. It is also distinguished from “emotions” in that emotional processes may be ongoing but not always in direct experience. In some samples, for example, participants have behavioral manifestations of anger, but are not angry at that exact moment. As with other codes, there is a great deal of variability between participants. In the study with 30 college students, five had no instances of feelings in their samples. One of them had feelings in 90% of samples. Contrary to some theorizing, feelings are not always present in experience, at least as far as DES can discern. Sometimes feelings are clear and sometimes not. Sometimes multiple feelings can be ‘mixed’ together simultaneously—for example, happiness with a tinge of sadness. Sometimes feelings can have a clear bodily location and sometimes not. Similarly to mental images, Hurlburt speculates that experience of feelings may also be a skill that takes time to develop. He uses as an example a sample with an 11-year-old who had watched a show where one of her favorite TV characters had died. At the moment right before the beep she was repeating to herself “I’m sad, I’m sad, I’m sad…” but “was not actually feeling sad at that moment.” The verbal repetition may have served to coordinate the skillful ability of feeling sadness.
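Per-participant frequencies like those reported above (e.g., feelings in 90% of one student's samples) can be tallied once samples are coded. A minimal sketch, assuming each sample is represented as a set of phenomenon codes; the data layout, code names, and example samples are hypothetical:

```python
from collections import Counter

def code_frequencies(samples):
    """Given one participant's coded samples (each a set of phenomenon
    codes), return the percentage of samples containing each code."""
    counts = Counter(code for s in samples for code in set(s))
    n = len(samples)
    return {code: 100.0 * c / n for code, c in counts.items()}

# Hypothetical participant: ten samples coded with the five frequent phenomena.
samples = [
    {"inner_speaking", "feeling"},
    {"sensory_awareness"},
    {"unsymbolized_thinking"},
    {"inner_speaking"},
    {"image", "feeling"},
    {"sensory_awareness", "inner_speaking"},
    {"feeling"},
    {"image"},
    {"inner_speaking"},
    {"sensory_awareness"},
]
freqs = code_frequencies(samples)
```

Tallies like these support the intrasubject, intersubject, and intergroup comparisons described earlier, while the underlying samples retain their idiographic detail.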
Unsymbolized thinking This code refers to thoughts that have clear, differentiated content but no discernible features that “carry” that content: no images, no words, no other kinds of symbols. For example, in one sample Abigail is wondering whether her friend will pick her up in his car or his pickup truck. This content is clear. But there are no words or images accompanying it, merely the content. Some consciousness scholars deny the existence of unsymbolized thinking, but this is often based on self-initiated and self-directed introspection without a defined method. Many DES participants have unsymbolized thinking without previously recognizing it as a feature of their experience. Since content can be clear, unsymbolized thinking might at first be mistaken for other phenomena like inner speaking. Even after decades of research, Hurlburt was unable to recognize unsymbolized thinking in his own experience before DES sampling revealed it as a common feature. Use for psychopathology DES's focus is first on individual samples and then on individual people. Commonalities may, however, emerge for certain groups. This includes people with psychiatric diagnoses. Many of these studies have small sample sizes and could be considered exploratory. But some clearer findings, with replications, have emerged—for example regarding bulimia nervosa. Bulimia nervosa With bulimic participants, multiple DES studies have found what researchers have termed ‘fragmented multiplicity.’ This means that bulimic individuals can have multiple elements in experience at the same time. These could be simultaneous images, feelings, or other kinds of experience. There may be over a dozen elements present at once. Depending on the participant, the proportion of samples showing fragmented multiplicity ranged from 44% to 92%. These simultaneous elements may or may not be focused on body image issues.
This shows that bulimia may be characterized more by the process of experience than by its content. An example of a sample showing fragmented multiplicity: Sample 5.2. Jessica was looking at her digital camera display, seeing a photo of her and her boyfriend taken on a recent trip to Chicago. While seeing this photo, she was also innerly seeing at least five separate, simultaneous, overlapping visual scenes of places she had visited in Chicago. These inner seeings were fuzzy or indistinct, and were apprehended as if looking at snapshots – the scenes had edges, for example. Simultaneously, she was innerly seeing herself and her boyfriend standing close together at the kitchen sink. In this seeing, which was somewhat clearer than the Chicago scenes, Jessica was on the left, the boyfriend on the right, and both were seen from the back. This was a re-creation of an event that had actually taken place, but viewed from behind her, an obviously impossible perspective for her to have taken in reality. Simultaneously, she was feeling happy, apprehended as a volleyball-sized sensation deep in her stomach but also all over her stomach. In this one sample, Jessica had multiple instances of inner seeing and a simultaneous feeling. These elements didn't occur sequentially over a lengthy period of time. They were all present at once. Another feature of bulimic experience is elements leaving direct consciousness but still lingering, as if they might reoccur. Participants have different terms for this, for example calling these elements “tails”—like the visible tails of fish hiding under rocks. Regarding fragmented multiplicity and 'tails,' participants are generally unaware of these features before sampling. These features have also not been described elsewhere in the literature. They show that, if valid, DES may be able to uncover features that other methods miss. These features could then potentially be useful in diagnosing and treating mental illnesses.
Autism spectrum One small study looked at three adults with Asperger's syndrome, a diagnosis now recognized as belonging to the autism spectrum. For one participant, experience was unclear, but for the remaining two, visual images were the dominant feature of their experience. Mental images or visual sensory awareness occupied up to 100% of participant samples. Cognitive processes like problem solving manifested through images. One participant also described these images as forming “the shape of [his] thoughts”. For example, in one sample, he was looking at a brick wall and was visually focused on three or four bricks. He described his thoughts as having ‘taken the shape of’ the bricks. His awareness was completely occupied by them and nothing more. Anxiety disorder In a small sample of five individuals, some characteristics emerged. Compared to individuals without psychiatric diagnoses, those with anxiety disorder had a relatively high frequency of indeterminate visual images (between 8 and 25% of their samples). Indeterminate images are images with little or no clarity, color, or detail. Another feature was a relatively high frequency of worded thinking—words being present without being innerly spoken or heard. Participants also commonly had repetitive rumination and critical thoughts—either critical of the participant or of others. Another feature was what Hurlburt calls the “’doing’ of understanding and the ‘happening’ of speaking”. Understanding others required effort and concentration, but speaking simply occurred undirected and effortless. For people without anxiety, often the opposite is true. Speaking can take more effort than understanding. Schizophrenia Sampling with schizophrenic participants revealed some commonalities. Images were experientially important, as was color in these images. These images could exist more concretely than for non-schizophrenic participants. 
Images or visual sensory awareness could often be "goofed up"—scratched, warped, or otherwise distorted. Hurlburt speculates that schizophrenia may be more a disorder of distorted perception than of disorganized association. Another inference based on interview analysis was that decompensating schizophrenics (in the midst of severe episodes) may sometimes have no inner experience at all. Major depressive disorder DES sampling results with depressed people have been inconsistent. One study found that depressed people had a much greater frequency of unsymbolized thinking than non-depressed people, suggesting that clarity of thinking decreases with depression. Another study did not replicate this. This study also did not find statistically significant differences in depressive symptomatology between depressed and non-depressed people. This means that the depressed group did not have a significant increase of samples with depression, anxiety, fatigue, body discomfort, negative feelings, negative content, or reduced positive feelings. One study looked at different response styles of depression—rumination vs. distraction. Those with a ‘rumination’ response style cope with negative content by repeatedly mulling it over. Those with a ‘distraction’ response style cope by distracting themselves with other thoughts. People with a ‘rumination’ response style had a higher frequency of unsymbolized thinking and feelings. Reasons for unclear findings could be that depression is a broad symptom cluster, making it difficult for commonalities to emerge. Further division into subtypes (as in the response style study) could be useful for psychopathology research. Another reason could be that identification of depression can depend on self-schemas that may not reflect experiential particularities. Depression questionnaires rely on memory and retrospection. A participant's self-schema could influence how they answer these questionnaires.
DES samples minimize memory demands and give a different picture of experience. Another reason for unclear findings could be that DES, with its narrow temporal scope, may not uncover all aspects of depression. Use with neuroimaging DES has been used in combination with neuroimaging in pursuit of bridging an understanding of the mind with its physical substrate. One study found that when DES revealed samples of inner speaking in the MRI scanner, classical language areas of the brain were activated. A further study showed that while language areas were activated during spontaneous inner speaking, these were different areas than those active during tasks. Participants were scanned at resting state—not given any particular instruction. Of collected samples, those that included inner speaking were analyzed. Brain activation for this inner speaking was different than for inner speaking typically elicited in fMRI studies, where participants follow instructions for specific tasks. The authors conclude that we should be wary of extrapolating from task-based fMRI to infer about natural experience. Another study analyzed what DES samples can say about ‘resting state’ experience. DES samples were collected when participants' default mode network was active. The default mode network is commonly seen as responsible for our 'resting state.' The study found a variety of experience. Participants could have very different frequencies of the five frequent phenomena (described above). This runs counter to scientific literature describing resting state consciousness as a unified phenomenon. The authors also propose that resting state is more accurately described as “unconstrained activity.” In another study, researchers rated DES samples in the scanner as either being internally or externally directed (or both simultaneously). Internally directed samples correlated with default mode network activity. A model was then able to predict raters' choices based on neural data.
This demonstrates a step towards being able to describe consciousness based on neuroimaging. Other findings Left-handers In a study of left-handed participants, a number of salient features emerged, including words experienced without semantic significance. Verbal content could be present in experience, without accompanying meaning. Participants could either be looking at these words, visually imagining them, or innerly speaking them. The focus was on visual or auditory aspects of these words rather than what they represented. This may have to do with left-handers' differing lateralization of brain function. Absence of experience In quite a few DES samples, participants appear to have no experience at all—even if experience is defined in the broadest of ways. To the best of their recollection, they have no thoughts, emotions, sensation or anything else that could constitute experience. For example, in one sample a participant, Ben, has no experience and describes this like having a “void within”. Many philosophers of consciousness argue that this is impossible, and that we always have some type of experience. Hurlburt argues that the process of questioning leading to a conclusion of absent experience is thorough. Still, DES cannot completely rule out some minimal experience. Some DES research has hypothesized absence or near absence of experience for individuals—for example for one participant with autism, or for schizophrenic participants when symptoms become severe. Silent reading A study found that, contrary to the beliefs of certain theorists, very few samples of reading involved inner narration of the text—only three percent in fact. Other elements were common, for example visual imagery. According to the authors, this shows that experience can be different than participant presuppositions and the presuppositions of theorists. Notes References Brouwers, V. P., Heavey, C. L., Lapping-Carr, L., Moynihan, S. A., Kelsey, J. M., & Hurlburt, R. T. (2018). 
Pristine inner experience while silent reading: It's not silent speaking of the text. Journal of Consciousness Studies, 25(3–4), 29–54. Dainton, B. (2000). Stream of consciousness (Rev. ed.). Routledge. Dickens, Y. L., Raalte, J. V., & Hurlburt, R. T. (2018). On investigating self-talk: A Descriptive Experience Sampling study of inner experience during golf performance. The Sport Psychologist, 32, 66–73. https://doi.org/10.1123/tsp.2016-0073 Doucette, S. A. (1992). Sampling the inner experience of bulimic and other individuals (231) [Doctoral dissertation, University of Nevada, Las Vegas]. UNLV Retrospective Theses & Dissertations. Fernyhough, C. (2016). The voices within. Profile Books. Gunter, J. D. (2011). Examining experience in depressed and nondepressed individuals (1215) [Doctoral dissertation, University of Nevada, Las Vegas]. UNLV Theses, Dissertations, Professional Papers, and Capstones. Heavey, C. L., & Hurlburt, R. T. (2008). The phenomena of inner experience. Consciousness and Cognition, 17(3), 798–810. https://doi.org/10.1016/j.concog.2007.12.006 Heavey, C. L., Hurlburt, R. T., & Lefforge, N. L. (2012). Toward a phenomenology of feelings. Emotion, 12(4), 763–777. https://doi.org/10.1037/a0026905 Heavey, C. L., Lefforge, N. L., Lapping-Carr, L., & Hurlburt, R. T. (2017). Mixed emotions: Toward a phenomenology of blended and multiple feelings. Emotion Review, 9(2). https://doi.org/10.1177/1754073916639661 Hurlburt, R. T. (1980). Validation and correlation of thought sampling with retrospective measures. Cognitive Therapy and Research, 4(2), 235–238. Hurlburt, R. T. (1993). Sampling inner experience in disturbed affect. Plenum Press. Hurlburt, R. T. (2011). Investigating pristine inner experience: Moments of truth. Cambridge University Press. Hurlburt, R. T. (Date accessed: April 24, 2023). DES-imp: Interactive DES example and training. http://hurlburt.faculty.unlv.edu/desimp/labs/lab0/lab0.html Hurlburt, R. T. (2022). Do I really have internal monologue? 
(Reality TV about inner experience). (Date accessed: April 25, 2023). http://hurlburt.faculty.unlv.edu/lena/do_I_have_internal_monologue_sampling.html Hurlburt, R. T., & Akhter, S. A. (2006). The Descriptive Experience Sampling method. Phenomenology and the Cognitive Sciences, 5, 271–301. https://doi.org/10.1007/s11097-006-9024-0 Hurlburt, R. T., Alderson-Day, B., Fernyhough, C., & Kühn, S. (2015). What goes on in the resting-state? A qualitative glimpse into resting-state experience in the scanner. Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.01535 Hurlburt, R. T., Alderson-Day, B., Kühn, S., & Fernyhough, C. (2016). Exploring the ecological validity of thinking on demand: Neural correlates of elicited vs. spontaneously occurring inner speech. PLoS ONE, 11(2). https://doi.org/10.1371/journal.pone.0147932 Hurlburt, R. T., Happé, F., & Frith, U. (1994). Sampling the form of inner experience in three adults with Asperger syndrome. Psychological Medicine, 24, 385–395. Hurlburt, R. T., & Heavey, C. L. (2002). Interobserver reliability of Descriptive Experience Sampling. Cognitive Therapy and Research, 26(1), 135–142. https://doi.org/10.1023/A:1013802006827 Hurlburt, R. T., & Heavey, C. L. (2006). Exploring inner experience: The Descriptive Experience Sampling method. John Benjamins. https://doi.org/10.1075/aicr.64 Hurlburt, R. T., Heavey, C. L., & Kelsey, J. M. (2013). Toward a phenomenology of inner speaking. Consciousness and Cognition, 22(4), 1477–1494. https://doi.org/10.1016/j.concog.2013.10.003 Hurlburt, R. T., & Schwitzgebel, E. (2007). Describing inner experience? Proponent meets skeptic. MIT Press. Hurlburt, R. T., & Sipprelle, C. N. (1978). Random sampling of cognitions in alleviating anxiety attacks. Cognitive Therapy and Research, 2(2), 165–169. Jones-Forrester, S. (2009). Descriptive Experience Sampling of individuals with bulimia nervosa (1212) [Doctoral dissertation, University of Nevada, Las Vegas]. 
UNLV Theses, Dissertations, Professional Papers, and Capstones. Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259. https://doi.org/10.1037/0033-295x.84.3.231 Petitmengin, C. (2006). Describing one's subjective experience in the second person: An interview method for the science of consciousness. Phenomenology and the Cognitive Sciences, 5, 229–269. https://doi.org/10.1007/s11097-006-9022-2 Robinson, W. S. (2005). Thoughts without distinctive non-imagistic phenomenology. Philosophy and Phenomenological Research, 70, 534–561. Scott, T. A. (2009). Evaluating the response styles theory of depression using Descriptive Experience Sampling (56) [Master's dissertation, University of Nevada, Las Vegas]. UNLV Theses, Dissertations, Professional Papers, and Capstones. Sutton, J. (2011). Time, experience, and Descriptive Experience Sampling. Journal of Consciousness Studies, 18(1), 118–129. Vásquez-Rosati, A. (2017). Body awareness to recognize feelings: The exploration of a musical emotional experience. Constructivist Foundations, 12(2), 219–226. Vermersch, P. (1999). Introspection as practice. Journal of Consciousness Studies, 6(2–3), 17–42. Cognitive psychology Consciousness Neuropsychology
Descriptive Experience Sampling
[ "Biology" ]
7,795
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
73,647,023
https://en.wikipedia.org/wiki/Boletus%20nobilissimus
Boletus nobilissimus is an edible basidiomycete mushroom of the genus Boletus in the family Boletaceae. Long considered a variety of the European Boletus edulis, it was recognized as a species in its own right in 2000, with a 2010 molecular study finding that it is most closely related to B. atkinsonii and B. quercophilus of Costa Rica, and then to B. barrowsii of the western United States. It is found in abundance in open oak forests after heavy rains and warm weather (30 °C or more). Morphology Cap The cap is 9.5 to 15 cm in diameter, initially convex in shape, becoming broadly convex to plane as it ages. The surface is dry and finely hairy, yellow brown to vinaceous brown, then dark brown. The thick flesh is white and does not turn blue when bruised. Pores The pores are white when young, becoming yellowish or brownish yellow to greenish olivaceous, unchanging when bruised. Stipe From 8 to 12 cm long and 1-3 cm thick; dry, solid; whitish or brownish; club shaped to bulbous, with strongly raised reticulation. Spore print The spore print is yellowish-brown. Spores Ellipsoid to subfusiform, smooth, pale yellow, 11.5-13.5 x 4-5 μm. Habitat and distribution Forms mycorrhiza with hardwoods, especially oak and beech in the presence of pines; single, scattered, or gregarious, in summer and fall; collected in New England, New York, and other eastern parts of the United States, with distribution limits unknown. References External links Edible fungi nobilissimus Fungi described in 2000 Fungi of North America Fungus species
Boletus nobilissimus
[ "Biology" ]
354
[ "Fungi", "Fungus species" ]
73,647,350
https://en.wikipedia.org/wiki/Cyathus%20gayanus
Cyathus gayanus is a species of fungus belonging to the genus Cyathus. It was first documented in 1844 by French mycologists Charles Tulasne and Louis René Tulasne. It has been documented in Chile, Costa Rica, Jamaica, and Venezuela. References Nidulariaceae Fungi described in 1844 Fungi of Chile Fungi of Central America Fungi of the Caribbean Fungi of Venezuela Fungus species
Cyathus gayanus
[ "Biology" ]
83
[ "Fungi", "Fungus species" ]
73,648,487
https://en.wikipedia.org/wiki/Southwest%20and%20Central%20Wales%20Local%20Section%20%28Royal%20Society%20of%20Chemistry%29
The South Wales West Local Section is one of 35 local sections of the Royal Society of Chemistry in the UK and Ireland. It covers an area including the Local Authority areas of Bridgend, Carmarthenshire, Neath Port Talbot, Pembrokeshire and Swansea. History The section was originally established in November 1918 as the South Wales Section of the Royal Institute of Chemistry, following the decision to establish local sections to allow members to play a more prominent role in the Institute and to develop communication between members in their own areas. Members from the Munitions Factory Pembrey were the nucleus of the section. Despite its name, the South Wales Section served the majority of members in Wales, but by 1935 the number of members in the southeast had increased sufficiently for them to form their own South East Wales Local Section, and in 1948 the South Wales Section successfully campaigned for a North Wales Local Section to be created. With the amalgamation of the RIC and the Chemical Society in 1971, it became the South Wales West Local Section. Governance Chair Historically, the chair has come alternately from the world of education and from industry. Up to 1968, seven chairs came from Swansea University, seven from the Mond Nickel Company, and three each from the Munitions Factory, local technical colleges and the Llandarcy oil refinery.
1918: John Christie, Munitions Factory Pembrey
F.J. Bloomer, Mond Nickel Company Clydach
1924-26: Prof J. E. Coates, University College Swansea
1928: Mr. C. M. W. Grieb
1931-33: Prof J. E. Coates, University College Swansea
1939-41: John Christie, Munitions Factory Pembrey
1962: Mr Hermas Evans
1968: Dr B.K. Davison
2023: Prof Simon Bott, Swansea University
Hon Secretary
1950, 1955, 1958, 1959: Ernest Edward Ayling
1959-1962: Harry Hallam
1963: Dr W. Williams ARIC
Jim Ballantine
Lectures The local section holds a variety of lectures for members and the public. 
One of the largest meetings was in 1936, when 220 people attended a lecture given by Mr Davidson Pratt on protecting the civil population from chemical gases. A selected list of lectures is given below.
1922: The Low Temperature Carbonisation of Coal, by Mr. T. Eynon Davies AIC
1923: Public Analyst
1927: Hormones
1928: River Pollution, by Prof Campbell James
1938: Cancer and Chemical Substances, by Professor J. W. Cook FRS
1950: The Formation and Reactions of Free Radicals in Solution, by Professor M. G. Evans FRS
1955: Chemistry in Rocket Propulsion, by Dr T. P. Hughes
1957 and 1965: Blood alcohol testing
1957: Chromatography - Theory and Applications, by Dr Tudor S. G. Jones (Wellcome Research Laboratories)
1958: Safety in the Chemical Industry, by I. E. Baggs
1963: Silicones, Their Chemistry and Applications, by Mr C. J. Baker and Mr Johnson (Midland Silicones)
In former years a successful development was the Annual Lecture on the History of Chemistry, associated with the name of Sir William Grove, the scientist from Swansea. One such lecture was given by the Nobel Prize winner Archer Martin on his invention of partition chromatography. At one time a Ladies' Night was regularly organized, with lectures such as one by Mr H. Armitage of British Nylon Spinners on Nylon in Industry and Fashion. Science and energy lectures For many years Mr Bill Williams and Dr Jim Ballantine conducted a series of demonstration lectures in which the children carried out all the experiments themselves to show how energy is interconvertible. Prizes and awards Hallam Prize Since 1983, the section has awarded a lectureship each year as part of the Hallam Memorial Fund, in memory of the late Harry Evans Hallam. Ayling Prize Since 1963, a prize has been awarded each year by Swansea University in memory of the late Ernest Edward Ayling, who served the section as Hon Secretary for 21 years. 
Notable members Ernest Edward Ayling Jim Ballantine (1934 - 2013), Hon Secretary and Treasurer for 23 years John Cadogan Harry Hallam Keith Smith Howard Purnell External links Science and Energy Lecture References Royal Society of Chemistry 1918 establishments in Wales Organizations established in 1918 Organisations based in Swansea Scientific organisations based in Wales
Southwest and Central Wales Local Section (Royal Society of Chemistry)
[ "Chemistry" ]
865
[ "Royal Society of Chemistry" ]
73,651,341
https://en.wikipedia.org/wiki/Aleurodiscus%20oakesii
Aleurodiscus oakesii is a saprotrophic fungus that grows as clusters of small, gray-white, irregular cup-shaped fruiting bodies on decaying hardwood tree bark. This fungus may also be called hophornbeam discs, and it causes smooth patch disease. A. oakesii is found year round in North America, Europe, and Asia, and is commonly found on oak trees. Taxonomy Aleurodiscus oakesii is a species of fungus in the family Stereaceae. The species was first described by Miles Joseph Berkeley and Moses Ashley Curtis in 1873 as Corticium oakesii. Mordecai Cubitt Cooke first proposed the designation Aleurodiscus oakesii in 1875 as a nomen nudum, and the first accepted mention using the new designation was by Narcisse Théophile Patouillard in 1890. The specific epithet both honors the American botanist William Oakes and references the fungus's tendency to colonize oak trees. Description Aleurodiscus oakesii produces clusters of gray or cream-colored, flat, crustlike fruiting bodies, usually less than a centimeter in diameter, though larger sizes may result from the combination of adjacent bodies. The inner, fertile surface of the cup-like fruiting bodies is darker in color than the sterile outer surface. The spore-bearing bodies are tough in texture and attached to the bark at a single point, but the fungus lacks a stipe. The edges of the fruiting bodies are raised and may be confused with those of cup-shaped ascomycete fungi. A. oakesii grows best on thinner sections of bark, with the stromata formed beneath the top bark layer; growth of the fungal hyphae enlarges cracks in the bark and then forms the cup-like structure. Microscopically, A. oakesii has septate hyphae and antler-like acanthophyses. Basidiospores are oval or egg-shaped and relatively small in comparison to those of other species of the same genus, and the spores of this species bear spines and warts. 
Habitat and distribution A. oakesii is found across North America, Europe, and Asia, most commonly in the northeastern United States; it is most often found in spring and fall but grows year round. It colonizes the outer bark of trees, especially white oaks, and eventually digests it, causing the "smooth patch disease" for which the fungus is best known. A. oakesii grows commonly on hardwood trees of the genus Ostrya and is less commonly found on other broadleaf trees. Pathology Aleurodiscus oakesii is the most common fungus to cause "smooth patch disease" on the nonliving outer bark of trees. This fungal infection can lead to trees shedding bark, leaving smooth, lighter patches on the trunk that give "smooth patch" its name. These patches can vary from a few inches to a foot or more in diameter. Fungal infection usually occurs during the growing season of hardwood trees, mainly many species of oaks, but the fungus can grow year round. Other names for smooth patch disease include white patch, smooth bark, bark rot, and bark patch. Unlike wood-decay fungi, A. oakesii is not a parasite, because it saprotrophically colonizes only the dead outer bark, not the tree itself. As a result, it does not directly harm the living tree. However, because it reduces bark thickness, smooth patch disease may lessen the bark's protection against factors such as wood-decay fungi, dehydration, or injury. References Fungi described in 1873 Fungi of North America Stereaceae Fungus species
Aleurodiscus oakesii
[ "Biology" ]
738
[ "Fungi", "Fungus species" ]
73,651,871
https://en.wikipedia.org/wiki/Methane%20leak
A methane leak is a significant escape of natural gas from an industrial facility or pipeline; the term is used for a class of methane emissions. Satellite data enables the identification of super-emitter events that produce methane plumes. Over 1,000 methane leaks of this type were found worldwide in 2022. As with other gas leaks, a leak of methane is a safety hazard: coalbed methane in the form of fugitive gas emissions has always been a danger to miners. Methane leaks also have a serious environmental impact. Natural gas can contain some ethane and other gases, but from both the safety and the environmental point of view the methane content is the major factor. As a greenhouse gas and climate change contributor, methane ranks second, after carbon dioxide. Fossil fuel exploration, transportation and production are responsible for about 40% of human-caused methane emissions. Leaks smaller than those that can be spotted from space make up a long tail of emissions. They can be identified from planes flying at . According to Fatih Birol of the International Energy Agency, "Methane emissions are still far too high, especially as methane cuts are among the cheapest options to limit near-term global warming". Examples of methane leaks Individual methane leaks as reported are specific events, with a large quantity of gas released. An example followed the 2022 Nord Stream pipeline sabotage. Following early reports that the escape might exceed 10^5 tonnes, the International Methane Emissions Observatory of the United Nations Environment Programme analysed the release. In February 2023 it put the mass of methane gas in the range 7.5 to 23.0 × 10^4 tonnes. In terms of overall human-made methane emissions, these figures are under 0.1% of the annual total. Satellite data detection has shown that methane super-emitter sites in Turkmenistan, the USA and Russia are responsible for the biggest number of events from fossil fuel facilities. 
Equipment failures are normally responsible for the releases, which can last for weeks. The Aliso Canyon gas leak of 2015 has been quantified as at least 1.09 × 10^5 tonnes of methane. Satellite data for the Raspadskaya coal mine, Kemerovo Oblast, Russia, indicated in 2022 an hourly methane leakage rate of 87 tonnes; this compares to 60 tonnes per hour of natural gas leaking in the Aliso Canyon incident, considered among the worst recorded leak events. Spain's Technical University of Valencia, in a study published in 2022, found that a super-emitter event at a gas and oil platform in the Gulf of Mexico released around 4 × 10^4 tonnes of methane during a 17-day period in December 2021 (an hourly rate of around 98 tonnes). Another major event in 2022 was a leak of 427 tonnes an hour in August, near Turkmenistan's Caspian coast and a major pipeline. Units Quantitative reports of methane leaks often use the standard cubic foot (scf) of the United States customary system. Applied to natural gas, a complex mixture of uncertain proportions, and depending on pressure and temperature conditions, the accuracy of calculations converting scf to metric units of mass is subject to limitations. A conversion figure given is 5 × 10^4 scf of natural gas as . For detection sensitivity, quantitative criteria are typically stated in units of standard cubic feet per hour (scf/h, "skiff", US) or thousand standard cubic feet per day (Mscf/d); or, with metric units, kilograms per hour (kg/h) or cubic meters per day (m3/d). To describe the mass balance of methane in the atmosphere, mass rates are given in units of Tg/yr, i.e. teragrams per year, where a teragram is 10^6 tonnes (megagrams). The methane leak from the Permian Basin, a significant region of the Mid-Continent Oil Producing Area, was estimated for 2018/9 from satellite data as 2.7 Tg/yr. Quoted as a proportion of the mass of extracted gas, the leakage comes to 3.7%. 
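The unit relationships described above can be sketched numerically. In the snippet below, the conversion constants are illustrative assumptions rather than figures from the article (pure methane at roughly 15 °C and 1 atm; real natural gas is a mixture, so, as noted above, such conversions are only approximate), and the function names are ours.

```python
# Sketch of the unit conversions discussed above. All constants are
# illustrative assumptions (pure methane at ~15 degrees C and 1 atm);
# real natural gas is a mixture, so results are approximate.

SCF_TO_M3 = 0.0283168        # cubic metres in one standard cubic foot
CH4_DENSITY_KG_M3 = 0.678    # approx. methane density at ~15 degrees C, 1 atm
TONNES_PER_TG = 1_000_000    # 1 teragram = 10^6 tonnes (megagrams)
HOURS_PER_YEAR = 8_760

def scf_to_tonnes(scf: float) -> float:
    """Approximate mass in tonnes of pure methane filling `scf` standard cubic feet."""
    return scf * SCF_TO_M3 * CH4_DENSITY_KG_M3 / 1_000.0

def tonnes_per_hour_to_tg_per_year(rate: float) -> float:
    """Convert a sustained leak rate in tonnes/hour to teragrams/year."""
    return rate * HOURS_PER_YEAR / TONNES_PER_TG

if __name__ == "__main__":
    print(f"{scf_to_tonnes(5e4):.2f} t")                      # ~0.96 t for 50,000 scf
    print(f"{tonnes_per_hour_to_tg_per_year(87):.2f} Tg/yr")  # ~0.76 Tg/yr at 87 t/h
```

Under these assumptions, 5 × 10^4 scf works out to just under one tonne of methane, and a sustained 87-tonne-per-hour rate to roughly 0.76 Tg/yr.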
The 2021 Carbon Mapper project, a collaboration of the Jet Propulsion Laboratory and academia, detected 533 methane super-emitters in the Permian Basin. References Greenhouse gas emissions Methane Natural gas safety
Methane leak
[ "Chemistry" ]
852
[ "Greenhouse gas emissions", "Methane", "Natural gas safety", "Natural gas technology", "Greenhouse gases" ]
73,655,823
https://en.wikipedia.org/wiki/Perla%20Serfaty
Perla Serfaty (pen names Perla Serfaty-Garzon and Perla Korosec-Serfaty; born in 1944 in Marrakesh, Morocco) is a French and Canadian academic, sociologist, psychosociologist, writer, and essayist, known in particular for her work on home and intimacy. She is a theorist of domestic intimacy, hospitality and the appropriation of inhabited places, and an expert in environmental psychology. Her book Vieillesse et Engendrements. La longévité dans la tradition juive, dedicated to the traditional Jewish view of longevity as transmitted by the Hebrew Bible, was awarded the J. I. Segal Jewish Book Award in 2014. The contribution of Serfaty's work to environmental psychology was recognized by her induction in 2018 into the International Association of People-Environment Studies (IAPS) Hall of Fame. Early life and education Perla Serfaty moved to France in 1964. She studied philosophy, psychology, and sociology at the University of Strasbourg, where she followed the teachings of Georges Gusdorf, André Canivez, Georges Lanteri-Laura, Didier Anzieu, and Henri Lefebvre. She joined the laboratory of Professor Paul-Henry Chombart de Lauwe at the School for Advanced Studies in the Social Sciences (École Pratique des Hautes Études en Sciences Sociales), Paris V – Sorbonne, where she received her doctorate in literature and humanities (1985, sociology). Career and research Appointed to the Institute of Psychology at the University of Strasbourg in 1969, she introduced into her teaching environmental psychology, a young discipline that was not taught in France at the time and was still practically unknown there. Serfaty took an active part in the development of research in environmental psychology as well as in its institutional recognition. 
She organized the first international conference devoted to environmental psychology to be held in France: the 3rd International Architectural Psychology Conference (IAPC) (Strasbourg, 1976), the theme of which was ‘The Appropriation of Space.’ The creation of the IAPS (International Association of People-Environment Studies) in 1981 consolidated, institutionalized and gave formal recognition to the international character of the IAPC. Within the framework of this discipline, Serfaty also contributed to the conceptualization of the notions of dwelling, ‘chez-soi,’ hospitality, loss of home in migration, and the appropriation of space. Her research interests also include sociability and the modes of appropriation of public urban spaces, as well as the transformation of the meaning of the protection of architectural and urban heritage and of intangible cultural heritage. Awards and honours 2014, J. I. Segal Jewish Book Award 2018, induction into the International Association of People-Environment Studies (IAPS) Hall of Fame Selected works (Prix J.I.-Segal, 2014) Korosec-Serfaty, Perla, La Grand'place. Pratiques quotidiennes et identité de lieu, Paris, Éditions du CNRS, 1986 References External links Personal website 1944 births Living people People from Marrakesh Jewish women scientists French sociologists Canadian sociologists Environmental social scientists 21st-century French non-fiction writers 21st-century French women writers 21st-century French essayists University of Strasbourg alumni Academic staff of the University of Strasbourg Jewish women writers
Perla Serfaty
[ "Environmental_science" ]
726
[ "Environmental social scientists", "Environmental social science" ]
73,655,972
https://en.wikipedia.org/wiki/Hwang%20affair
The Hwang affair, also called the Hwang scandal or Hwanggate, is a case of scientific misconduct and ethical violations surrounding the South Korean biologist Hwang Woo-suk, who claimed to have created the first human embryonic stem cells by cloning in 2004. Hwang and his research team at Seoul National University (SNU) reported in the journal Science that they had successfully developed a somatic cell nuclear transfer method with which they made the stem cells. In 2005, they again reported in Science the successful cloning of 11 patient-specific stem cell lines using 185 human eggs. The research was hailed as "a ground-breaking paper" in science. Hwang was elevated as "the pride of Korea", a "national hero" [of Korea], and a "supreme scientist", to international praise and fame. Recognitions and honours immediately followed, including South Korea's Presidential Award in Science and Technology, and Time magazine listing him among the "People Who Mattered 2004" and the most influential people of "The 2004 Time 100". Suspicion and controversy arose in late 2005, when Hwang's collaborator Gerald Schatten, at the University of Pittsburgh, learned the real source of the oocytes (egg cells) used in the 2004 study. The eggs, reportedly from several voluntary donors, were in fact from two of Hwang's own researchers, a fact which Hwang denied. The ethical issues led Schatten to immediately break his ties with Hwang. In December 2005, a whistleblower informed Science of the reuse of the same data. As the journal investigated, much more extensive data fabrication was revealed. SNU immediately investigated the research work and found that both the 2004 and 2005 papers contained fabricated results. Hwang was compelled to resign from the university, and publicly confessed in January 2006 that the research papers were based on fabricated data. Science immediately retracted the two papers. 
In 2009, the Seoul Central District Court convicted Hwang of embezzlement and bioethics violations, sentencing him to two years' imprisonment. The incident was recorded as the scandal that "shook the world of science," and became "one of the most widely reported and universally disappointing cases of scientific fraud in history". Background Hwang Woo-suk was a professor of veterinary biotechnology at Seoul National University who specialised in stem cell research. In 1993, he devised an in vitro fertilisation method with which he achieved the first assisted reproduction in cows. He rose to public notice in 1999 when he announced that he had successfully cloned a dairy cow, named Yeongrong-i, and a few months later a Korean cow, Jin-i (also reported as Yin-i). The following year, he announced preparations for cloning an endangered Siberian tiger. The attempt failed, but his popularity with the Korean public nonetheless escalated through wide media coverage. In 2002, he claimed the creation of a genetically modified pig that could be used for human organ transplants. In 2003, he announced the successful cloning of a BSE (bovine spongiform encephalopathy)-resistant cow. However, sceptics raised concern over the absence of research papers for any of his claims. 2004 human cell cloning In 2004, Hwang announced the first complete cloning of a human embryo. The research, published in the 12 March 2004 issue of Science, was reported as "Evidence of a pluripotent human embryonic stem cell line derived from a cloned blastocyst." For its potential medical value in replacing diseased and damaged cells, several scientists had previously tried to clone a human embryo, but in vain. Hwang's team had developed an improved method of somatic cell nuclear transfer with which they could transfer the nuclei of somatic (non-reproductive) cells into egg cells which had had their own nuclei removed. 
They used human egg cells and cumulus cells, which are found in ovaries near the developing eggs and are known to be a good source of nuclei for transfer. After emptying an egg of its nucleus, they transferred the nucleus of a cumulus cell into it. The new egg cell divided normally and grew into a blastocyst, an early embryo characterised by a hollow ball of cells. They isolated the inner cell mass, the cells destined to become the embryo proper, discarding the outer trophoblast cells that would form the placenta. When the inner cell mass cells were cultured, they could divide and form different tissues, indicating that they were viable stem cells. The report concluded: "This study shows the feasibility of generating human ES [embryonic stem] cells from a somatic cell isolated from a living person." It was the first instance of cloning of adult human cells and of human embryonic stem cells. Hwang publicly reported the research at the annual meeting of the American Association for the Advancement of Science (AAAS) in Seattle on 16 February 2004. He specified that they had used 242 eggs from 16 unpaid volunteers, creating about 100 cells from which 30 embryos were developed. Since the embryos had adult DNA, the resulting stem cells were clones of the adult somatic cells. From the embryos, the stem cells were collected and grafted into mice, in which they could grow into various body parts including muscle, bone, cartilage and connective tissues. The method ensured that immune rejection would be avoided, so that it could be used for the treatment of genetic disorders, as Hwang explained: "Our approach opens the door for the use of these specially developed cells in transplantation medicine." 2005 human cell cloning Hwang's team reported another successful cloning of human cells in the 17 June 2005 issue of Science, in this case embryonic stem cells derived from skin cells. 
Their study claimed the creation of 11 different stem cell lines that were exact DNA matches to people having a variety of diseases. The experiment used 185 eggs from 18 donors. The report explicitly stated that: "Patients voluntarily donated oocytes and somatic cells for therapeutic cloning research and relevant applications but not for reproductive cloning ... no financial reimbursement in any form was paid." Initial receptions The 2004 report When the 2004 research was announced, it was received with praise and admiration. Donald Kennedy, editor-in-chief of Science, remarked: "the generation of stem cells by somatic cell nuclear transfer methods involving the same individuals may hold promise for advances in transplantation technology that could help people affected by many devastating conditions." Michael S. Gazzaniga, a neuroscientist and bioethicist at Dartmouth College who had supported therapeutic cloning, called it "a major advance in biomedical cloning". American scientists used the news to criticise the US government's weakness in stem cell research and its prohibitive attitude. As Helen Pearson reported in Nature, the cloning accomplishment turned Asians into "scientific tigers". Time reported that as a consequence of the achievement, "a medical and ethical door that had remained mostly closed was kicked wide open." Hwang and his colleague Shin Yong Moon were listed by Time at number 84 in its list of the most influential people, "The 2004 Time 100", in April 2004. The critical issue was bioethics, as the method ultimately wasted many human embryos and could be used to create full human clones, as John T. Durkin argued in Science: "the developmental events leading from fertilized ovum, to blastula, to embryo, to fetus, to fully formed adult constitute a continuum." Hwang claimed that the purpose was for medical applications only, and said in Seattle, "Reproductive cloning is strictly prohibited [in South Korea]." 
At the time, South Korea was developing its "Bioethics and Biosafety Act", to be enforced from 2005. The regulations proscribed human reproductive cloning and the experimental fusion of human and animal embryos; even therapeutic cloning for diseases would require authorised approval. Given this situation, Sang-yong Song of Hanyang University criticised Hwang for not waiting for the forthcoming regulations and for social consensus in Korea. Howard H. Kendler, a psychologist at the University of California, offered a neutral viewpoint, commenting: "Although individuals will differ in their opinions, a democracy can decide whether the benefits of embryonic stem cell research outweigh any disadvantages. Science can assist in making this decision, but cannot dictate it." Circle of influences Hwang loved public attention and cultivated a network of bureaucrats. To name his second cloned cow, he solicited President Kim Dae-jung, who named it after the celebrated Korean gisaeng "Hwang Jin-i." When he announced the cloning of a BSE-resistant cow in 2003, President Roh Moo-hyun visited his laboratory and was shown a dog cured of its injury using stem cell transfer, to which the president applauded, "this is not a science; this is a magic." From that point, Hwang received escalating research funding from the government, peaking in 2005 at around US$30 million. That year, the Korean Ministry of Science and Technology officially honoured him as "Supreme Scientist", the first such title conferred in Korea; the title carried US$15 million. The government set up the World Stem Cell Hub at Seoul National University Hospital on 19 October 2005, created and directed by Hwang. On the day of its opening, people registered their willingness to receive stem cell therapy. Scientific flaws In the 2004 report, Hwang's team prudently remarked that "we cannot completely exclude the possibility that the cells had a parthenogenetic origin." 
Reference to parthenogenesis, the development of embryos from egg cells without fertilisation, was relevant because it had been documented that stem cells are capable of such transformation. In 1984, an experiment demonstrated that a genetic mixture (chimera) of nuclei from stem cells and one-cell-stage mouse embryos could develop into full embryos. Researchers at Advanced Cell Technology (ACT) in Worcester, Massachusetts, further showed in 2002 that primate (in this case crab-eating macaque, Macaca fascicularis) stem cells grew into the blastocyst stage. ACT subsequently announced that it had created human parthenogenetic cells, although the cells could not reach the blastocyst stage. In 2003, Gerald Schatten of the University of Pittsburgh and his team reported a failed attempt at stem cell cloning in rhesus monkeys; the cell divisions were always erratic and produced abnormal chromosomes. Schatten declared: "This reinforces the fact that the charlatans who claim to have cloned humans have never understood enough cell or developmental biology." Documentary In June 2023, Netflix released the documentary film King of Clones, which covered Hwang Woo-suk and this affair. See also He Jiankui affair References 2005 controversies Scientific misconduct incidents Stem cell research 2005 in biotechnology Cloning
Hwang affair
[ "Chemistry", "Engineering", "Biology" ]
2,246
[ "Stem cell research", "Cloning", "Genetic engineering", "Translational medicine", "Tissue engineering" ]
70,737,478
https://en.wikipedia.org/wiki/Avasimibe
Avasimibe (INN), codenamed CI 1011, is a drug that inhibits sterol O-acyltransferases (SOAT1 and SOAT2, also known as ACAT1 and ACAT2), enzymes involved in the metabolism and catabolism of cholesterol. It was discovered by Parke-Davis (later Pfizer) and developed as a possible lipid-lowering agent and treatment for atherosclerosis. The first description of avasimibe was published in 1996. Clinical trials began in 1997. However, development was halted in 2003 due to a high potential for interactions with other medicines, and a pivotal study found it had no favorable effect on atherosclerosis and actually increased LDL cholesterol levels significantly. SOAT/ACAT inhibition has since been discredited as a viable strategy for treating high cholesterol and atherosclerosis, but renewed interest in avasimibe has arisen due to its potential antitumor utility through other mechanisms. It has never been marketed or used outside clinical trials. Pharmacology Mechanism of action Avasimibe is a potent activator of the pregnane X receptor and, consequently, an indirect inducer of CYP3A4 and P-glycoprotein, as well as a potent inhibitor of several cytochrome P450 isoenzymes, including CYP1A2, CYP2C9, and CYP2C19; its spectrum of CYP induction and inhibition is similar to that of rifampicin. Pharmacokinetics Avasimibe is better absorbed when taken with food, especially with a high-fat meal, as reflected by increases in its peak serum concentration and AUC. History Avasimibe was the result of a rational drug design process carried out at Parke-Davis in the early 1990s which sought to obtain orally bioavailable, water-soluble ACAT inhibitors; all such inhibitors known at the time were lipophilic and poorly absorbed when taken by mouth. 
This process yielded several compounds with potential, including one (designated PD 138142-15) with good solubility in water and remarkable efficacy in animal studies, but it was chemically unstable and degraded rapidly, especially in acidic environments. (Undesirable CYP450 induction was first noted at this time, in PD 138142-15 and its degradation products.) Chemical modification of PD 138142-15 and retrosynthetic analysis found that avasimibe (then codenamed CI-1011) could be easily manufactured from commercially available starting compounds, and once its efficacy was demonstrated in vitro and in rat studies, it was selected for further development. After additional safety and preclinical efficacy studies in animals, phase I clinical trials in humans began in 1997, first for hyperlipidemia (June) and subsequently for atherosclerosis (December). Phase II trials for both indications followed in 1998, and phase III trials in 2001. In October 2003, clinical development of avasimibe was discontinued. Later research discredited the concept of ACAT inhibition as a treatment for dyslipidemia and atherosclerosis, and interest in these compounds as a class waned accordingly. Research Since the termination of its development as an antilipidemic agent, there has been renewed interest in potential repurposing of avasimibe as an antitumor drug and to prevent or treat bacterial infections by decreasing bacterial virulence. , these potential indications remain in preclinical research. References Hypolipidemic agents Pfizer Sulfamate esters Terpenes and terpenoids
Avasimibe
[ "Chemistry" ]
775
[ "Organic compounds", "Biomolecules by chemical classification", "Terpenes and terpenoids", "Natural products" ]
70,738,123
https://en.wikipedia.org/wiki/Kidney%20%28vertebrates%29
The kidneys are a pair of organs of the excretory system in vertebrates, which maintain the balance of water and electrolytes in the body (osmoregulation), filter the blood, remove metabolic waste products, and, in many vertebrates, also produce hormones (in particular, renin) and maintain blood pressure. In healthy vertebrates, the kidneys maintain homeostasis of extracellular fluid in the body. When the blood is being filtered, the kidneys form urine, which consists of water and excess or unnecessary substances; the urine is then excreted from the body through other organs, which in vertebrates, depending on the species, may include the ureter, urinary bladder, cloaca, and urethra. All vertebrates have kidneys. The kidneys are the main organs that allow species to adapt to different environments, including fresh and salt water, terrestrial life and desert climates. Depending on the environment in which animals have evolved, the functions and structure of the kidneys may differ. Also, between classes of animals, the kidneys differ in shape and anatomical location. In mammals, they are usually bean-shaped. Evolutionarily, the kidneys first appeared in fish as a result of the independent evolution of the renal glomeruli and tubules, which eventually united into a single functional unit. In some invertebrates, the nephridia are analogous to kidneys, but they are not true kidneys. The metanephridia, together with the vascular filtration site and coelom, are functionally identical to the ancestral primitive kidneys of vertebrates. The main structural and functional element of the kidney is the nephron. Between animals, the kidneys can differ in the number of nephrons and in their organisation. According to the complexity of the organisation of the nephron, the kidneys are divided into pronephros, mesonephros and metanephros. A single nephron by itself is similar to the pronephros as a whole organ.
The simplest nephrons are found in the pronephros, which is the final functional organ in primitive fish. The nephrons of the mesonephros, the functional organ in most anamniotes, called the opisthonephros, are slightly more complex than those of the pronephros. The main difference between the pronephros and the mesonephros is that the pronephros consists of non-integrated nephrons with external glomeruli. The most complex nephrons are found in the metanephros of birds and mammals. The kidneys of birds and mammals have nephrons with loops of Henle. All three types of kidneys develop from the intermediate mesoderm of the embryo. It is believed that the development of embryonic kidneys reflects the evolution of vertebrate kidneys from an early primitive kidney, the archinephros. In some vertebrate species, the pronephros and mesonephros are functional organs, while in others they are only intermediate stages in the development of the final kidney, and each successive kidney replaces the previous one. The pronephros is a functioning kidney of the embryo in bony fish and amphibian larvae, but in mammals it is most often considered rudimentary and non-functional. In some lungfish and bony fishes, the pronephros can remain functional in adults, often simultaneously with the mesonephros. The mesonephros is the final kidney in amphibians and most fish. Evolution Evolutionary pressure and the need to regulate body fluid homeostasis have led to pre-adaptation of the vertebrate kidneys to different environmental conditions and to the development of three kidney forms: the pronephros, mesonephros and metanephros. The kidneys of amniotes are unique compared to other internal organs, since three different kidneys develop sequentially during embryogenesis, replacing each other and reflecting the evolution of the kidneys in vertebrates.
At the very beginning of vertebrate history, when vertebrates evolved from marine chordates, their evolution probably took place in fresh or slightly saline water. One hypothesis holds that marine fish acquired their kidneys after an earlier adaptation of the kidneys to fresh water. As a result, early vertebrates developed renal glomeruli capable of filtering blood and perhaps tubules that reabsorbed ions. Excretion of excess water from the body is the main characteristic of the pronephros in the case of species in which it develops into a functional excretory organ. In some species, the pronephros is functional during the embryonic stage of development, representing the first stage of kidney development, after which the mesonephros develops. The mesonephros probably appeared in the course of evolution in response to the increase in body mass of vertebrates, which also led to an increase in blood pressure. The evolution of the kidneys, along with the evolution of the lungs, allowed vertebrates called amniotes to live and reproduce in a terrestrial environment. The metanephros, the permanent kidney of amniotes, has the unique ability to efficiently retain water in the body. In addition to water conservation, terrestrial life also required maintenance of salt levels in the body along with the excretion of waste products. The first class of animals to become fully terrestrial without a larval stage were the reptiles, which were the first amniotes. The kidney plays a key role in maintaining a constant internal environment. The relative ionic composition of the extracellular fluid is similar between marine fish and all subsequent species. Therefore, it can be said that the kidneys made it possible to preserve approximately the same composition of extracellular fluid in vertebrates as in the primordial ocean.
Kidney forms Archinephros It is believed that the ancient primitive form of the kidney was the archinephros, which had a series of segmental tubules along the entire length of the trunk, with each body segment having a pair of tubules. All tubules opened medially (closer to the midline of the body) into the body cavity known as the coelom and united laterally into two common archinephric ducts located on opposite sides of the body. The archinephric ducts opened into the cloaca. As an organ, the archinephros is still preserved in the larvae of hagfishes and some caecilians, and is also found in the embryos of some more developed vertebrates. Pronephros In lower vertebrates, the pronephros is sometimes called the head kidney due to its anterior position behind the head. In embryogenesis it is usually a transitional structure and is subsequently replaced by the mesonephros in most vertebrates. In mammalian embryogenesis, the pronephros is usually considered to be rudimentary and non-functional. A functional pronephros develops in vertebrates that have a free-swimming larval stage in their development. The pronephros functions in amphibians in the larval stage, in the adults of some bony fishes, and in the adults of some other fish species. The pronephros is a vital organ in animals that go through the aquatic larval stage. If the pronephros becomes non-functional in larvae, they rapidly die of edema. The pronephros is a relatively large organ that has a primitive structure and usually consists of a single pair of bilateral nephrons with an external glomerulus or glomus. The typical pronephric nephron is non-integrated, and the wastes are filtered through the glomerulus or glomus directly into the coelom; in the more advanced pronephros, they are filtered into the nephrocoel, which is a cavity adjacent to the coelom.
The coelom is connected to the pronephric duct through the ciliated nephrostomes, which drain coelomic fluid into the cloaca. Because of its small size and simple structure, the pronephros of fish and amphibian larvae has become an important experimental model for studying kidney development. Mesonephros and opisthonephros The mesonephros develops after the pronephros, replacing it. The mesonephros is the final kidney in amphibians and most fish. In more advanced vertebrates (amniotes), the mesonephros develops during embryogenesis and is then replaced by the metanephros. In reptiles and marsupials, it remains functional for some time after birth along with the metanephros. When the mesonephros degenerates in male mammals, its remains are involved in the formation of the reproductive system. Sometimes the anamniote mesonephros is called the opisthonephros to distinguish it from the corresponding stage of development in amniotes. In anamniotes, the opisthonephros develops from a region of the nephric ridge, which is derived from the intermediate mesoderm, from which both the mesonephros and metanephros develop in the embryo of amniotes. Unlike the pronephros, the mesonephros consists of a set of nephrons, the glomeruli of which are enclosed in Bowman's capsules, although in some marine fish glomeruli may be absent. In fish, mesonephric kidneys have no division into cortex and medulla. Usually the mesonephros consists of 10–50 nephrons. The mesonephric tubules may have a connection to the coelom; however, the glomeruli of mesonephric nephrons still remain integrated. Nephrostomes are typically absent in the embryonic mesonephros of birds and mammals. The mesonephros in fish has the ability to add new nephrons as body mass increases. Metanephros In amniotes, which include reptiles, birds, and mammals, the pronephros and mesonephros are usually intermediate stages in the formation of the metanephros during embryonic development, and the metanephros is the final kidney.
Genes that are involved in the formation of one form of kidney are reused in the formation of the next one. The metanephros differs from the pronephros and mesonephros in development, position in the body, shape, number of nephrons, organization and drainage. Unlike the mesonephros, the metanephros no longer has the ability to add new nephrons through nephrogenesis after its development is complete, although many reptiles show ongoing nephron formation in adults. The metanephros is the most complex form of kidney. Each metanephric kidney is characterized by a large number of nephrons and a highly branched system of collecting tubules and ducts that open into the ureter. Such branching in the metanephros is unique in relation to the pronephros and mesonephros. Depending on the class and species, urine from the ureters can be excreted directly into the cloaca, collected in the urinary bladder and then excreted into the cloaca, or collected in the urinary bladder and then excreted outside through the urethra. Metanephric kidneys Reptile kidney Reptiles were the first class of animals that had no larval stage and were fully terrestrial. The mesonephros in reptiles functions for some time after birth simultaneously with the metanephros; later the metanephric kidneys become permanent and the mesonephros degenerates. The kidneys in reptiles are located mainly in the caudal part (away from the head) of the abdominal cavity or, in the case of lizards, retroperitoneally (behind the peritoneum) in the pelvic cavity. Reptile kidneys are commonly elongated, with color ranging from light to dark brown. The shape of the kidneys varies between reptiles due to variations in their body form. The kidneys of snakes are elongated, cylindrical and lobulated. Turtles and some lizards have a urinary bladder that opens into the cloaca, but snakes and crocodiles do not. Compared with the metanephros of birds and mammals, the metanephros of reptiles is simpler in structure.
Unlike mammals, the kidneys of reptiles do not have a clear distinction between cortex and medulla. The kidneys lack the loop of Henle, have fewer nephrons (from about 3,000 to 30,000), and cannot produce hypertonic urine. Nitrogenous waste products excreted by the kidneys may include uric acid, urea and ammonia. Aquatic reptiles excrete predominantly urea, while terrestrial reptiles excrete uric acid, which allows them to conserve water. Since the reptile kidneys are unable to produce concentrated urine due to the absence of the loop of Henle, the glomerular filtration rate is decreased when water loss needs to be reduced. The glomeruli in reptiles have also decreased in size compared to amphibians. In addition to the renal artery blood supply, reptiles also have a renal portal system, which can redirect blood to the kidneys during periods of water deprivation, bypassing the glomeruli, to prevent ischemic necrosis of tubular cells. Mammalian kidney In mammals, the kidneys are usually bean-shaped and located retroperitoneally on the dorsal (posterior) wall of the body. The outer layer of each kidney is made up of a fibrous sheath called the renal capsule. The peripheral layer of the kidney is called the cortex and the inner part is called the medulla. The medulla consists of one or more pyramids, the bases of which start from the corticomedullary border. A medullary pyramid with its overlying cortex comprises a renal lobe. In multilobar kidneys, the pyramids are separated from each other by areas of cortical tissue, known as the renal columns, that dip into the kidney. Blood enters the kidney through the renal artery, which in the multilobar kidney then branches in the region of the renal pelvis into large interlobar arteries that pass through the renal columns. The pyramids consist mainly of tubules that transport urine from the cortex, which produces it by blood filtration, to the tips of the pyramids, which form the renal papillae.
Urine is excreted through the renal papillae into the calyces and then into the pelvis, ureter, and bladder. Then it is excreted outside through the urethra. In monotremes, the ureters open into the urogenital sinus, which is connected to the urinary bladder and cloaca, and urine is excreted into the cloaca instead of the urethra. Structurally, kidneys vary between mammals. The structural type a particular species has depends mainly on its body mass. Small mammals have simple, unilobar kidneys with a compact structure and a single renal papilla, while large animals have more complex multilobar kidneys, such as those of bovines. Kidneys can have a single renal papilla (unipapillary kidneys), as in mice and rats; several, as in spider monkeys; or a large number, as in pigs and humans. Most animals have a single renal papilla. In some animals, such as horses, the apices of the renal pyramids fuse with each other to form a common renal papilla, called the renal crest. The renal crest usually appears in animals larger than rabbits. The kidneys of bovines are multilobar with external lobation. Marine mammals, bears and otters have reniculate kidneys, which are made up of a large number of lobes called reniculi. Each reniculus can be compared to a simple unipapillary kidney as a whole. Nitrogenous waste products are excreted by the kidneys of mammals primarily in the form of urea, which is highly soluble in water. Each nephron is located in both the cortex and the medulla. The most proximal part of the nephron is the glomerulus, which is located in the cortex. The nephrons of the mammalian kidneys have loops of Henle, which are the most efficient way to reabsorb water and produce concentrated urine to conserve water in the body. The mammalian kidneys combine nephrons with short loops of Henle and nephrons with long loops. The medulla is divided into outer and inner regions.
The outer region consists of short loops of Henle and collecting ducts, and the inner region consists of long loops of Henle and collecting ducts. After passing through the loop of Henle, the fluid becomes hypertonic relative to the blood plasma. The renal portal system is absent in mammals. Avian kidney In birds, the kidneys are typically elongated and located dorsally in the abdominal cavity in the pelvic skeletal depressions. The structure of the avian kidneys differs from the structure of the mammalian kidneys. The avian kidney is lobulated and usually consists of three lobes. The lobes are divided into lobules, each of which has a cortex and a medulla. The medulla of each lobule is shaped like a cone, and, unlike in mammals, it is not subdivided into inner and outer regions, while structurally it is similar to the outer medulla of the mammalian kidney. In the avian kidney, the renal pelvis is absent, and each lobule has a separate branch to the ureter. No birds except the ostrich have a bladder; urine is excreted from the kidneys through the ureters to the cloaca. Avian kidneys combine so-called reptilian-type nephrons, without the loop of Henle, and mammalian-type nephrons, with the loop of Henle. Most nephrons are reptilian-type. The loop of Henle of birds is similar to that of mammals; the main difference is that the nephron of birds has only a short loop of Henle. Like mammals, although to a lesser extent, birds are able to produce concentrated urine, thus conserving water in the body. Nitrogenous waste products are excreted mainly in the form of uric acid, a white paste poorly soluble in water, which also helps to reduce water loss. Additional water reabsorption occurs in the cloaca and distal intestine. Altogether, this allows birds to excrete their wastes without significant loss of water, allowing them to fly long distances with limited water.
In birds, the arterial blood is supplied to the kidneys by the cranial, middle and caudal renal arteries. Like reptiles, birds have a renal portal system, but it does not deliver blood to the loops of Henle; blood is delivered only to the proximal and distal tubules of the nephrons. When birds are in a state of dehydration, nephrons without a loop of Henle stop filtering, while nephrons with a loop continue to filter and, owing to the loop, can produce concentrated urine. References Endocrine system Kidney
Kidney (vertebrates)
[ "Biology" ]
4,061
[ "Organ systems", "Endocrine system" ]
70,738,617
https://en.wikipedia.org/wiki/Target-mediated%20drug%20disposition
Target-mediated drug disposition (TMDD) is the process in which a drug binds with high affinity to its pharmacological target (for example, a receptor) to such an extent that this affects its pharmacokinetic characteristics. Various drug classes can exhibit TMDD; most often these are large compounds (biologics such as antibodies, cytokines or growth factors), but smaller compounds (such as warfarin) can also exhibit it. A typical TMDD pattern of antibodies displays non-linear clearance and can be seen at concentration ranges that are usually defined as 'mid-to-low'. In this concentration range, the target is partly saturated. References Pharmacokinetics Medicinal chemistry
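As a hedged illustration (not drawn from the article's sources), the non-linear clearance described above is commonly approximated by adding a saturable, Michaelis-Menten-type elimination pathway on top of linear, non-specific clearance; all parameter values in this sketch are invented:

```python
# Illustrative sketch (values invented): a Michaelis-Menten approximation
# of TMDD adds a saturable target-mediated elimination pathway to a
# linear (non-specific) clearance term.

def apparent_clearance(c, cl_linear=0.2, vmax=2.0, km=1.0):
    """Total clearance at plasma concentration c (arbitrary units).

    cl_linear : linear, non-specific clearance
    vmax, km  : capacity and affinity of the target-mediated pathway
    """
    return cl_linear + vmax / (km + c)

# The target-mediated route saturates: clearance is highest at low
# concentrations (target mostly free) and approaches cl_linear once the
# target is fully occupied, giving the non-linear pattern described above.
print(apparent_clearance(0.01), apparent_clearance(100.0))
```

With these hypothetical values, clearance is roughly ten times higher at very low concentrations than at saturating ones, which is the concentration-dependent (non-linear) behaviour the article describes for antibodies.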
Target-mediated drug disposition
[ "Chemistry", "Biology" ]
152
[ "Pharmacology", "Pharmacokinetics", "Medicinal chemistry stubs", "Medicinal chemistry", "nan", "Biochemistry", "Pharmacology stubs" ]
70,739,538
https://en.wikipedia.org/wiki/Non-linear%20mixed-effects%20modeling%20software
Nonlinear mixed-effects models are a special case of regression analysis for which a range of different software solutions are available. The statistical properties of nonlinear mixed-effects models make direct estimation by a BLUE estimator impossible. Nonlinear mixed-effects models are therefore estimated according to maximum likelihood principles. Specific estimation methods are applied, such as linearization methods like first-order (FO), first-order conditional (FOCE) or Laplacian (LAPL) approximation, approximation methods such as iterative two-stage (ITS), importance sampling (IMP), stochastic approximation expectation maximization (SAEM) or direct sampling. A special case is the use of non-parametric approaches. Furthermore, estimation in limited or full Bayesian frameworks is performed using the Metropolis-Hastings or the NUTS algorithms. Some software solutions focus on a single estimation method, while others cover a range of estimation methods and/or provide interfaces for specific use cases. General-purpose software General (use-case-agnostic) nonlinear mixed-effects estimation software can cover multiple estimation methods or focus on a single one. Software with multiple estimation methods SAS is a package that is used in the wider statistical community and supports multiple estimation methods through PROC NLMIXED. Multiple estimation methods are available in the R open-source software system, for example through nlme. MATLAB provides multiple estimation methods in its nlmefit system. SPSS at the moment does not support nonlinear mixed-effects methods. Software dedicated to a single estimation method WinBUGS is an implementation of the Metropolis-Hastings method for Bayesian analysis. Stan is open-source software that implements the NUTS algorithm. Software dedicated to pharmacometrics The field of pharmacometrics relies heavily on nonlinear mixed-effects approaches and therefore uses specialized software approaches.
As with general-purpose software, implementations of both single and multiple estimation methods are available. This type of software relies heavily on ODE solvers. Software with multiple estimation methods NONMEM is the most widely used software in the field of pharmacometrics. Phoenix implements multiple estimation methods in a graphical user interface. Pumas implements multiple estimation methods in the Julia language. nlmixr/nlmixr2 is a suite interfaced in R that implements FOCE and SAEM. ADAPT and S-ADAPT implement multiple estimation methods in a graphical or scripting interface, respectively. Software dedicated to a single estimation method Monolix is a powerful implementation of SAEM which can also parse NMTRAN. NPEM implements non-parametric mixed effects. Related software The efficiency of ODE solvers impacts the quality of estimation. Popular solvers are Runge-Kutta based methods, various stiff solvers and switching solvers such as LSODA of the ODEPACK suite. A specialized form of pharmacokinetic modeling, physiologically based pharmacokinetic (PBPK) modeling, can in some cases also be seen as a nonlinear mixed-effects implementation; see also the software section of that article. Optimal design software such as PopED can be used in conjunction with estimation. References Regression analysis Numerical software
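The iterative two-stage (ITS) method named above builds on the basic two-stage idea: fit each subject separately, then summarize the individual estimates into population parameters. A naive, non-iterative sketch of that idea is shown below; the mono-exponential model, parameter values and function names are illustrative, not taken from any listed package:

```python
# Naive two-stage sketch (illustrative): stage 1 fits each subject's
# elimination rate constant k from C(t) = C0 * exp(-k * t) by closed-form
# log-linear least squares; stage 2 summarizes across subjects.
import math
import random
import statistics

def fit_k(times, concs):
    """Stage 1: per-subject log-linear regression; returns estimated k."""
    logs = [math.log(c) for c in concs]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(logs) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, logs))
             / sum((t - tbar) ** 2 for t in times))
    return -slope  # slope of log C vs t is -k

random.seed(0)
times = [1.0, 2.0, 4.0, 8.0]
individual_ks = []
for _ in range(200):  # simulate 200 subjects
    k_i = 0.3 * math.exp(random.gauss(0.0, 0.2))   # between-subject variability
    concs = [10.0 * math.exp(-k_i * t) * math.exp(random.gauss(0.0, 0.05))
             for t in times]                        # multiplicative residual error
    individual_ks.append(fit_k(times, concs))

# Stage 2: population mean and between-subject variance
pop_k = statistics.mean(individual_ks)
omega2 = statistics.variance(individual_ks)
```

Likelihood-based methods such as FOCE or SAEM, implemented in the tools listed above, improve on this naive scheme by accounting for estimation uncertainty in the individual fits, which the two-stage summary ignores.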
Non-linear mixed-effects modeling software
[ "Mathematics" ]
631
[ "Numerical software", "Mathematical software" ]
70,739,650
https://en.wikipedia.org/wiki/Space%20dust%20measurement
Space dust measurement refers to the study of small particles of extraterrestrial material, known as micrometeoroids or interplanetary dust particles (IDPs), that are present in the Solar System. These particles are typically of micrometer to sub-millimeter size and are composed of a variety of materials including silicates, metals, and carbon compounds. The study of space dust is important as it provides insight into the composition and evolution of the Solar System, as well as the potential hazards posed by these particles to spacecraft and other space-borne assets. The measurement of space dust requires the use of advanced scientific techniques such as secondary ion mass spectrometry (SIMS), optical and atomic force microscopy (AFM), and laser-induced breakdown spectroscopy (LIBS) to accurately characterize the physical and chemical properties of these particles. Overview From the ground, space dust is observed as sunlight scattered by myriads of interplanetary dust particles and as meteoroids entering the atmosphere. By observing a meteor from several positions on the ground, the trajectory and the entry speed can be determined by triangulation. Atmospheric entry speeds of up to 72,000 m/s have been observed for Leonid meteors. Even sub-millimeter sized meteoroids hitting spacecraft at speeds of several kilometers per second (much faster than bullets) can cause significant damage. Therefore, the early US Explorer 1, Vanguard 1, and the Soviet Sputnik 3 satellites carried simple 0.001 m2 sized microphone dust detectors in order to detect impacts of micron sized meteoroids. The obtained fluxes were orders of magnitude higher than those estimated from zodiacal light measurements. However, the latter determination had large uncertainties in the assumed size and heliocentric radial dust density distributions. Thermal studies in the lab with microphone detectors suggested that the high count-rates recorded were due to noise generated by temperature variations in Earth orbit.
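The two-station triangulation mentioned above can be illustrated with a simplified two-dimensional sketch, assuming the meteor point lies in the vertical plane through both stations; the station separation and angles below are illustrative, not measurements from any campaign:

```python
# Simplified 2-D meteor triangulation sketch (illustrative values):
# two stations a known baseline apart each measure an elevation angle
# to the same point on the meteor trail.
import math

def meteor_altitude(baseline_m, elev1_deg, elev2_deg):
    """Altitude of a point lying between the two stations, seen at
    elevations elev1 and elev2: h = d / (cot(a1) + cot(a2))."""
    a1 = math.radians(elev1_deg)
    a2 = math.radians(elev2_deg)
    return baseline_m / (1.0 / math.tan(a1) + 1.0 / math.tan(a2))

# Repeating the fix at successive instants gives points along the trail,
# and the distance between them divided by the time step gives the entry speed.
h = meteor_altitude(100_000.0, 45.0, 60.0)  # stations 100 km apart
```

Real meteor networks solve the full three-dimensional geometry from azimuth and elevation pairs, but the principle is the same: intersecting sight lines from a known baseline fix the trail in space.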
An excellent review of the early days of space dust research was given by Fechtig, H., Leinert, Ch., and Berg, O. in the book Interplanetary Dust. Dust accelerators A dust accelerator is a critical facility to develop, test, and calibrate space dust instruments. Classic guns have muzzle velocities between just a few hundred m/s and 1 km/s, whereas meteoroid speeds range from a few km/s to several hundred km/s for nanometer sized dust particles. Only experimental light-gas guns (e.g. at NASA's Johnson Space Center, JSC) reach projectile speeds of several km/s up to 10 km/s in the laboratory. By replacing the projectile with a sabot containing dust particles, high speed dust projectiles can be used for impact cratering and dust sensor calibration experiments. The workhorse for hypervelocity dust impact experiments is the electrostatic dust accelerator. Nanometer to micrometer sized conducting dust particles are electrically charged and accelerated by an electrostatic particle accelerator to speeds up to 100 km/s. Currently, operational dust accelerators exist at IRS in Stuttgart, Germany (formerly at the Max Planck Institute for Nuclear Physics in Heidelberg), and at the Laboratory for Atmospheric and Space Physics (LASP) in Boulder, Colorado. The LASP dust accelerator facility has been operational since 2011, and has been used for basic impact studies, as well as for the development of dust instruments. The facility is available for the planetary and space science communities. Dust accelerators are used for impact cratering studies, calibration of impact ionization dust detectors, and meteor studies. Only electrically conducting particles can be used in an electrostatic dust accelerator because the dust source is located in the high-voltage terminal. James F. Vedder, at Ames Research Center, ARC, used a linear particle accelerator, charging dust particles with an ion beam in a quadrupole ion trap under visual control.
This way, a wide range of dust materials could be accelerated to high speeds. Reliable dust detections Tennis court sized (200 m2) penetration detectors on the Pegasus satellites determined a much lower flux of 100 micron sized particles, one that would not pose a significant hazard to the crewed Apollo missions. The first reliable dust detections of micron sized meteoroids were obtained by the dust detectors on board the Pioneer 8 and 9 and HEOS 2 spacecraft. Both instruments were impact ionization detectors using coincident signals from ions and electrons released upon impact. The detectors had sensitive areas of approximately 0.01 m2 and, outside the Earth's magnetosphere, detected on average one impact per ten days. Microcrater analyses Microcraters on lunar samples provide an extensive record of impacts onto the lunar surface. Uneroded glass splashes from big impacts covering crystalline lunar rocks preserve microcraters well. The number of microcraters was measured on a single rock sample using microscopic and scanning electron microscopic analyses. The craters ranged in size from 10−8 to 10−3 m, and were correlated to the mass of meteoroids based on impact simulations. The impact speed onto the lunar surface was assumed to be 20 km/s. The age of the rocks on the surface could not be determined through traditional methods (counting the solar flare track densities), so spacecraft measurements by the Pegasus satellites were used to determine the interplanetary dust flux, specifically the crater production flux at 100 μm size. The flux of smaller meteoroids was found to be smaller than the observed cratering flux on the lunar surface, because fast ejecta from impacts of bigger meteoroids also produce craters. The flux was adjusted using data from the HEOS-2 and Pioneer 8/9 space probes. From April 1984 to January 1990, NASA's Long Duration Exposure Facility exposed several passive impact collectors (each a few square meters in area) to the space dust environment in low Earth orbit.
After recovery of LDEF by the Space Shuttle Columbia, the instrument trays were analyzed. The results generally confirmed the earlier analysis of lunar microcraters. Optical and infrared zodiacal dust observations Zodiacal light observations at different heliocentric distances were performed by the Zodiacal light photometer instruments on Helios 1 and 2 and the Pioneer 10 and Pioneer 11 space probes, ranging between 0.3 AU and 3.3 AU from the Sun. This way, the heliocentric radial profile was determined and shown to vary by a factor of about 100 over that distance. The Asteroid Meteoroid Detector (AMD) on Pioneer 10 and Pioneer 11 used the optical detection and triangulation of individual meteoroids to obtain information on their sizes and trajectories. Unfortunately, the trigger threshold was set too low, and noise corrupted the data. Zodiacal light observations at visible wavelengths use the sunlight scattered by interplanetary dust particles, which amounts to only a few percent of the incoming light. The remainder (over 90%) is absorbed and reradiated at infrared wavelengths. The zodiacal dust cloud is therefore much brighter at infrared wavelengths than at visible wavelengths. However, on the ground, most of these infrared wavelengths are blocked by atmospheric absorption bands. Therefore, most infrared astronomy observations are done from space observatory satellites. The Infrared Astronomical Satellite (IRAS) mapped the sky at wavelengths of 12, 25, 60, and 100 micrometers. Between wavelengths of 12 and 60 microns, zodiacal dust was a prominent feature. Later, the Diffuse Infrared Background Experiment (DIRBE) on NASA's COBE mission provided a complete high-precision survey of the zodiacal dust cloud at the same wavelengths. IRAS sky maps showed structure in the sky brightness at infrared wavelengths. In addition to the wide, general zodiacal cloud and a broad, central asteroidal band, there were several narrow cometary trails.
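The factor-of-100 brightness variation between 0.3 and 3.3 AU quoted above implies, if the brightness falls off as a power law r^-n, an exponent close to 2; a two-line Python check of that reading of the numbers:

```python
import math

def power_law_index(brightness_ratio, r_inner_au, r_outer_au):
    """Exponent n assuming brightness ~ r**-n between two heliocentric distances."""
    return math.log(brightness_ratio) / math.log(r_outer_au / r_inner_au)

n = power_law_index(100, 0.3, 3.3)
print(f"n ≈ {n:.2f}")  # ≈ 1.9
```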
Follow-up observations using the Spitzer Space Telescope showed that at least 80% of all Jupiter-family comets had trails. When the Earth passes through a comet trail, a meteor shower is observed from the ground. Due to the enhanced risk to spacecraft in such meteoroid streams, the European Space Agency developed the IMEX model, which follows the evolution of cometary particles and hence allows the risk of collision at specific positions and times in the inner Solar System to be determined. Penetration detectors In the early 1960s, pressurized cell micrometeorite detectors were flown on the Explorer 16 and Explorer 23 satellites. Each satellite carried more than 200 individual gas-filled pressurized cells with metal walls 25 and 50 microns thick. A puncture of a cell by a meteoroid impact could be detected by a pressure sensor. These instruments provided important measurements of the near-Earth meteoroid flux. In 1972 and 1973, the Pioneer 10 and Pioneer 11 interplanetary spacecraft carried 234 pressurized cell detectors each, mounted on the back of the main dish antenna. The stainless-steel wall thickness was 25 microns on Pioneer 10 and 50 microns on Pioneer 11. The two instruments characterized the meteoroid environment in the outer Solar System as well as near Jupiter and near Saturn. In preparation for the Apollo missions to the Moon, three Pegasus satellites were launched by the Saturn I rocket into near-Earth orbit. Each satellite carried 416 individual meteoroid detectors with a total detection surface of about 200 m². The detectors consisted of aluminum penetration sheets of various thicknesses: 171 m² of 400-micron-thick, 16 m² of 200-micron-thick, and 7.5 m² of 40-micron-thick sheet. Placed behind these penetration sheets were 12-micron-thick Mylar capacitor detectors that recorded penetrations of the overlying sheet. The results showed that the meteoroid hazard is significant and that meteoroid protection methods must be implemented for large space vehicles.
In 1986, the Vega 1 and Vega 2 missions were equipped with a new dust detector, developed by John Simpson, which used polyvinylidene difluoride (PVDF) films. This material responds to dust impacts by generating electrical charge due to impact cratering or penetration. Because PVDF detectors are also sensitive to mechanical vibrations and energetic particles, they work best as high-rate dust detectors in very dusty environments, like cometary comae or planetary rings (as was the case for the Cassini–Huygens Cosmic Dust Analyzer). For example, on the Stardust mission, the Dust Flux Monitor Instrument (DFMI) used PVDF detectors to study dust in the coma of Comet Wild 2. However, in low-dust environments such as interplanetary space, this sensitivity makes the detectors susceptible to noise. Because of this, the PVDF detectors on the Venetia Burney Student Dust Counter also needed shielded reference detectors in order to determine the background noise rate. Modern microphone detectors During its flyby of Halley's Comet at a distance of 600 km, the Giotto spacecraft was protected from space dust by a 1 mm-thick front Whipple shield (1.85 m diameter) and a 12 mm-thick rear Kevlar shield. Mounted on the front dust shield were three piezoelectric momentum sensors of the Dust Impact Detection System (DIDSY). A fourth momentum sensor was mounted on the rear shield. These microphone detectors, together with other detectors, measured the dust distribution within the inner coma of the comet. These instruments also measured dust during Giotto's encounter with the comet 26P/Grigg–Skjellerup. On the Mercury Magnetospheric Orbiter of the BepiColombo mission, the Mercury Dust Monitor (MDM) will measure the dust environments of interplanetary space and Mercury. MDM is composed of four piezoelectric ceramic sensors made of lead zirconate titanate, from which impact signals will be recorded and analyzed.
Chance dust detectors Most instruments on a spacecraft flying through a dense dust environment will experience the effects of dust impacts. A prominent example of such an instrument was the Plasma Wave Subsystem (PWS) on the Voyager 1 and Voyager 2 spacecraft. PWS provided useful information on the local dust environment. Initially, the Asteroid Meteoroid Detector (AMD) previously flown on Pioneer 10 and 11 was preliminarily selected for the Voyager payload. However, because there were doubts about its performance, the instrument was deselected and, hence, no dedicated dust instrument was carried by either Voyager 1 or 2. During the Voyager 2 passage through the Saturn system, PWS detected intense impulse noise centered on the ring plane at 2.88 Saturn radii distance, slightly outside the G ring. This noise was attributed to micron-sized particles hitting the spacecraft. In-situ dust detections by the Cassini Cosmic Dust Analyzer and camera observations of the outer rings confirmed the existence of an extended G ring. Dust concentrations in the equatorial planes were also observed during Voyager's flybys of Uranus and Neptune. During the flyby of comet 21P/Giacobini–Zinner by the International Cometary Explorer, dust impacts were observed by the plasma wave instrument. Though dust detections had been claimed from plasma wave instruments on various spacecraft, a model for the generation of signals on plasma wave antennas by dust impacts, based on dust accelerator tests, was only presented in 2021. Impact ionization detectors Impact ionization detectors are the most successful dust detectors in space. With these detectors, the interplanetary dust environment between Venus and Jupiter has been explored. Impact ionization detectors use the simultaneous detection of positive ions and electrons upon dust impact on a solid target. This coincidence provides a means to discriminate real impacts from noise on a single channel.
The first successful dust detector in interplanetary space at about 1 AU was flown on the Pioneer 8 and Pioneer 9 space probes. The Pioneer 8 and 9 detectors had sensitive target areas of 0.01 m². Besides interplanetary dust on eccentric orbits, they detected dust on hyperbolic orbits, that is, dust leaving the Solar System. The HEOS 2 dust detector was the first detector that employed a hemispherical geometry, like all the subsequent detectors of the Galileo and Ulysses spacecraft and the LDEX detector on the LADEE mission. The hemispherical target of 0.01 m² area collected the electrons from an impact, while the ions were collected by a central ion collector. These signals served to determine the mass and speed of the impacting meteoroid. The HEOS 2 dust detector explored the Earth's dust environment within 10 Earth radii. The twin Galileo and Ulysses dust detectors were optimized for interplanetary dust measurements in the outer Solar System. The sensitive target areas were increased ten-fold to 0.1 m² in order to cope with the expected low dust fluxes. In order to provide reliable dust impact data even within the harsh Jovian environment, an electron channeltron was added in the center of the ion grid collector. This way, an impact was detected by the triple coincidence of three charge signals. The 2.5-ton Galileo spacecraft was launched in 1989, cruised for 6 years in interplanetary space between the orbits of Venus and Jupiter, and measured interplanetary dust. The 370 kg Ulysses spacecraft was launched a year later and went on a direct trajectory to Jupiter, which it reached in 1992 for a swing-by maneuver that put the spacecraft on a heliocentric orbit of 80 degrees inclination. In 1995, Galileo started its 7-year path through the Jovian system with several flybys of all the Galilean moons.
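The mass and speed determination mentioned above rests on calibrating the released impact charge against accelerator shots; the charge yield rises very steeply with impact speed and is commonly parameterized as Q = k·m·v^β with β ≈ 3.5. A sketch of the inversion in Python (the values of k and β here are illustrative placeholders, not a real instrument calibration):

```python
def impact_mass(charge_C, speed_km_s, k=0.7, beta=3.5):
    """Invert the empirical impact-charge law Q = k * m * v**beta.

    k and beta are target- and material-dependent constants determined at a
    dust accelerator (the defaults here are illustrative only).
    Returns the grain mass in kg for Q in coulombs and v in km/s.
    """
    return charge_C / (k * speed_km_s**beta)

# a 1 pC impact at 20 km/s (illustrative numbers)
m = impact_mass(1e-12, 20.0)
print(f"{m:.1e} kg")  # ~4e-17 kg
```

In practice the speed is estimated first, e.g. from the rise times of the charge signals, and the steep v-dependence means that a modest speed error translates into a large mass error.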
After its Jupiter flyby, Ulysses identified a flow of interstellar dust sweeping through the Solar System and hyper-velocity streams of nano-dust that are emitted from Jupiter and then couple to the solar magnetic field. In addition, the Galileo instrument detected ejecta clouds around the Galilean moons. The Lunar Dust Experiment (LDEX) on board the Lunar Atmosphere and Dust Environment Explorer (LADEE) mission is a smaller version of the Galileo and Ulysses dust detectors. Its most sensitive impact charge detector is a microchannel plate (MCP) behind the central focusing grid. LDEX has a sensitive area of 0.012 m². The objective of the instrument was the detection and analysis of the lunar dust environment. From 16 October 2013 to 18 April 2014, LDEX detected about 140,000 dust hits at altitudes of 20–100 km above the lunar surface. It found a tenuous, permanent, and asymmetric ejecta cloud around the Moon that is caused by meteoroid impacts onto the lunar surface. From these data it was found that approximately 40 μm/Myr of lunar regolith is redistributed due to meteoritic bombardment. Besides the continuous meteoroid bombardment, meteoroid streams cause temporary enhancements of the ejecta cloud. Dust composition analyzers The Helios Micrometeoroid Analyzer was the first in-situ instrument to analyze the composition of cosmic dust. In 1974, the instrument was carried by the Helios spacecraft from the Earth's orbit down to 0.3 AU from the Sun. The goal of the Micrometeoroid Analyzer was to determine the spatial distribution of the dust in the inner planetary system, and to search for variations in the compositional and physical properties of micrometeoroids. The instrument consisted of two impact ionization time-of-flight mass spectrometers (the Ecliptic and South sensors) with a total target area of about 0.01 m².
One sensor was shielded by the spacecraft rim from direct sunlight, whereas the other sensor was protected from the intense solar radiation by a thin aluminized parylene film. These Micrometeoroid Analyzers were calibrated with a wide range of materials at the dust accelerators of the Max Planck Institute for Nuclear Physics in Heidelberg and the Ames Research Center in Moffett Field. The mass resolution of the mass spectra of the Helios sensors was low. There was an excess of impacts recorded by the South sensor compared to the Ecliptic sensor. On the basis of the penetration studies with the Helios film, this excess was interpreted to be due to low-density (< 1000 kg/m³) meteoroids that were shielded from entering the Ecliptic sensor. The mass spectra ranged from those with dominant low masses (up to 30 u), compatible with silicates, to those with dominant high masses (between 50 and 60 u), compatible with iron and molecular ions. Meteoroid streams and even interstellar dust particles were identified in the data. Twin dust mass analyzers were flown on the 1986 Halley's Comet missions Vega 1, Vega 2, and Giotto. These spacecraft flew by the comet at distances of 600–1,000 km with speeds of 70–80 km/s. The PUMA (Vega) and PIA (Giotto) instruments were developed by Jochen Kissel of the Max Planck Institute for Nuclear Physics in Heidelberg. Dust particles hitting the small (approximately 5 cm²) impact target generated ions by impact ionization. The instruments were high mass resolution (R ≈ 100) reflectron-type time-of-flight mass spectrometers. The instruments could record up to 500 impacts per second. During the comet flybys, the instruments recorded an abundance of small particles of mass less than 10⁻¹⁴ grams. Besides unequilibrated silicates, many of the particles were rich in light elements such as hydrogen, carbon, nitrogen, and oxygen. This suggests that most particles consisted of a predominantly chondritic core with a refractory organic mantle.
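A time-of-flight mass spectrometer of the kind described above separates impact-generated ions because, after acceleration through a fixed voltage, the flight time scales with the square root of the ion mass. A minimal sketch (the 1 kV acceleration and 0.5 m drift length are illustrative, not the actual Helios or PUMA/PIA geometry):

```python
import math

AMU = 1.66054e-27  # atomic mass unit, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def flight_time(mass_u, charge_e, accel_voltage_V, drift_length_m):
    """Ion time of flight: (1/2) m v^2 = q U  ->  t = L * sqrt(m / (2 q U))."""
    m = mass_u * AMU
    q = charge_e * E_CHARGE
    return drift_length_m * math.sqrt(m / (2 * q * accel_voltage_V))

# singly charged ions, 1 kV acceleration, 0.5 m drift (illustrative)
for mass_u in (1, 12, 56):  # H, C, Fe
    t = flight_time(mass_u, 1, 1000, 0.5)
    print(f"{mass_u:3d} u -> {t * 1e6:.2f} us")  # t ∝ sqrt(m): heavier ions arrive later
```

The resolution R = m/Δm then depends on how well the arrival-time spread Δt is controlled, which is what the reflectron geometry improves.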
The Cometary and Interstellar Dust Analyzer (CIDA) was flown on the Stardust mission. In January 2004, Stardust flew by Comet Wild 2 at a distance of 240 km with a relative speed of 6.1 km/s. In February 2011, Stardust flew by comet Tempel 1 at a distance of 181 km with a speed of 10.9 km/s. During the interplanetary cruise between the comet encounters, there were favorable opportunities to analyze the interstellar dust stream discovered earlier by Ulysses. CIDA is a derivative of the impact ionization mass spectrometers flown on the Giotto, Vega 1, and Vega 2 missions. The impact target peeks out to the side of the spacecraft while the main part of the instrument is protected from the high-speed dust. It has a sensitive area of approximately 100 cm² and a mass resolution of R ≈ 250. Besides the positive ion mode, CIDA also has a negative ion mode for better sensitivity to organic molecules. The 75 spectra obtained during the comet flybys indicate a dominance of organic matter; sulfur ions were also detected in one spectrum. In the 45 spectra obtained during the cruise phase favorable for the detection of interstellar particles, derivatives of quinone were suggested as constituents of the organic component. The Cosmic Dust Analyzer (CDA) was flown on the Cassini mission to Saturn. CDA is a large-area (0.1 m² total sensitive area) multi-sensor dust instrument that includes a 0.01 m² medium-resolution (R ≈ 20–50) chemical dust analyzer, a 0.09 m² highly reliable impact ionization detector, and two high-rate polarized polyvinylidene fluoride (PVDF) detectors with sensitive areas of 0.005 m² and 0.001 m², respectively. During its 6-year cruise to Saturn, CDA analyzed interplanetary dust, the stream of interstellar dust, and the Jupiter dust streams. A highlight was the detection of electrical dust charges in interplanetary space and in Saturn's magnetosphere.
During the following 13 years, Cassini completed 292 orbits around Saturn (2004–2017) and measured several million dust impacts, which characterize dust primarily in Saturn's E ring. In 2005, during Cassini's close flyby of Enceladus to within 175 km of the surface, CDA discovered active ice geysers. Detailed compositional analyses found salt-rich water ice grains close to Enceladus, which led to the discovery of a large reservoir of liquid water, an ocean below the moon's icy crust. Analyses of interstellar grains at Saturn's distance suggest magnesium-rich grains of silicate and oxide composition, some with iron inclusions. Dust Telescopes A Dust Telescope is an instrument to perform dust astronomy. It not only analyzes the signals and ions that are generated by a dust impact on the sensitive target, but also determines the dust trajectory prior to the impact. The latter is based on the successful measurement of the dust electric charge by Cassini's Cosmic Dust Analyzer (CDA). A Dust Trajectory Sensor consists of four planes of parallel position-sensing wire electrodes. Dust accelerator tests show that dust trajectories can be determined to an accuracy of 1% in velocity and 1° in direction. The second element of a Dust Telescope is a Large-area Mass Analyzer: a reflectron-type time-of-flight mass analyzer with a sensitive area of up to 0.2 m² and a mass resolution of R > 150. It consists of a circular plate target with the ion detector behind the center hole. In front of the target is an acceleration grid. Ions generated by an impact are reflected by a paraboloid-shaped grid onto the center ion detector. Prototypes of dust telescopes have been built at the Laboratory for Atmospheric and Space Physics (LASP) of the University of Colorado, Boulder, USA, and at the Institute of Space Systems of the University of Stuttgart, Germany, and tested at their respective dust accelerators.
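The trajectory reconstruction described above amounts to fitting a straight line through the position and time measured at each electrode plane. A sketch with NumPy on synthetic data (the plane spacing and grain speed are made-up illustrative values, not the real sensor geometry):

```python
import numpy as np

def fit_trajectory(times_s, positions_m):
    """Fit a straight-line trajectory x(t) = x0 + v*t to plane-crossing data.

    times_s: (N,) crossing times; positions_m: (N, 3) measured positions,
    one per detector plane. Returns (x0, v) from a linear least-squares fit.
    """
    A = np.column_stack([np.ones_like(times_s), times_s])
    coeffs, *_ = np.linalg.lstsq(A, positions_m, rcond=None)
    return coeffs[0], coeffs[1]  # intercept x0 and velocity vector v

# synthetic grain at 20 km/s along z, four planes 5 cm apart (illustrative)
v_true = np.array([0.0, 0.0, 20e3])
t = np.array([0.0, 2.5e-6, 5.0e-6, 7.5e-6])
pos = v_true * t[:, None]
x0, v = fit_trajectory(t, pos)
print(np.round(v))  # recovers ~[0, 0, 20000] m/s
```

With measurement noise added to the positions, the same least-squares fit yields the velocity and direction uncertainties, which is how the quoted 1% and 1° accuracies are characterized at the accelerator.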
The Surface Dust Analyser (SUDA) on board the Europa Clipper mission is being developed by Sacha Kempf and colleagues at LASP. SUDA will collect spatially resolved compositional maps of Jupiter's moon Europa along the spacecraft's ground tracks and search for plumes. The instrument is capable of identifying traces of organic and inorganic compounds in the ice ejecta. The launch of the Europa Clipper mission is planned for 2024. The DESTINY+ Dust Analyzer (DDA) will fly on the Japanese–German space mission DESTINY+ to asteroid 3200 Phaethon. Phaethon is believed to be the origin of the Geminid meteor stream that can be observed from the ground every December. DDA development is led by Ralf Srama and colleagues from the Institute of Space Systems (IRS) at the University of Stuttgart in cooperation with the company von Hoerner & Sulger GmbH (vH&S). DDA will analyze interstellar and interplanetary dust during its cruise to Phaethon and will study the asteroid's dust environment during the encounter; of particular interest is the proportion of organic matter. Its launch is planned for 2024. The Interstellar Dust Experiment (IDEX), developed by Mihaly Horanyi and colleagues at LASP, will fly on the Interstellar Mapping and Acceleration Probe (IMAP) in orbit about the Sun–Earth L1 Lagrange point. IDEX is a large-area (0.07 m²) dust analyzer that provides the mass distribution and elemental composition of interstellar and interplanetary dust particles. A laboratory version of the IDEX instrument was used at the dust accelerator facility operated at the University of Colorado to collect impact ionization mass spectra for a range of dust samples of known composition. Its launch is planned for 2025. Collected dust analyses The importance of lunar samples and lunar soil for dust science was that they provided a meteoroid impact cratering record.
Even more important are the cosmochemical aspects: from their isotopic, elemental, molecular, and mineralogical compositions, important conclusions can be drawn, such as those concerning the giant-impact hypothesis of the Moon's formation. From 1969 to 1972, six Apollo missions collected 382 kilograms of lunar rocks and soil. These samples are available for research and teaching projects. From 1970 to 1976, three Luna spacecraft returned 301 grams of lunar material. In 2020, Chang'e 5 collected 1.7 kg of lunar material. In 1950, Fred Whipple showed that micrometeoroids smaller than a critical size (~100 micrometers) are decelerated at altitudes above 100 km slowly enough to radiate their frictional energy away without melting. Such micrometeorites sediment through the atmosphere and ultimately deposit on the ground. The most efficient method to collect micrometeorites is with high-flying (~20 km) aircraft carrying special silicone-oil-covered collectors that capture this dust. At lower altitudes, these micrometeorites become mixed with Earth dust. Don Brownlee first reliably identified the extraterrestrial nature of collected dust particles by their chondritic composition. These stratospheric dust samples are available for further research. Stardust was the first mission to return samples from a comet and from interstellar space. In January 2004, Stardust flew by Comet Wild 2 at a distance of 237 km with a relative velocity of 6.1 km/s. Its dust collector consisted of 0.104 m² of aerogel and 0.015 m² of aluminium foil; one side of the collector was exposed to the flow of cometary dust. The Stardust cometary samples were a mix of different components, including presolar grains like ¹³C-rich silicon carbide grains, a wide range of chondrule-like fragments, and high-temperature condensates like the calcium–aluminium inclusions found in primitive meteorites, which were transported to cold nebular regions.
During March–May 2000 and July–December 2002, the spacecraft was in a favorable position to collect interstellar dust on the back side of the sample collector. Once the sample capsule was returned in January 2006, the collector trays were inspected, and thousands of grains from Comet Wild 2 and seven probable interstellar grains were identified. These grains are available for teaching and research from the NASA Astromaterials Curation Office. The first asteroid samples were returned by the JAXA Hayabusa missions. Hayabusa encountered asteroid 25143 Itokawa in November 2005, picked up surface samples, and returned to Earth in June 2010. Despite some problems during sample collection, thousands of 10–100 micron-sized particles were collected and are available for research in the laboratories. The second mission, Hayabusa2, rendezvoused with asteroid 162173 Ryugu in June 2018. About 5 g of surface and sub-surface material from this primitive C-type asteroid were returned. JAXA shares about 10% of the collected samples with NASA's sample curation office. The Rosetta space probe orbited comet 67P/Churyumov–Gerasimenko from August 2014 to September 2016. During this time, Rosetta's instruments analyzed the nucleus, dust, gas, and plasma environments. Rosetta carried a suite of miniaturized, sophisticated lab instruments to study collected cometary dust particles.
Among them were the high-resolution secondary ion mass spectrometer COSIMA (Cometary Secondary Ion Mass Analyzer), which analyzed the rocky and organic composition of collected dust particles; the atomic force microscope MIDAS (Micro-Imaging Dust Analysis System), which investigated the morphology and physical properties of micrometer-sized dust particles deposited on a collector plate; and the double-focusing magnetic mass spectrometer (DFMS) and reflectron-type time-of-flight mass spectrometer (RTOF) of ROSINA (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis), which analyzed cometary gas and the volatile components of cometary particulates. Rosetta's Philae lander carried the COSAC gas chromatography–mass spectrometry experiment to analyze organic molecules in the comet's atmosphere and on its surface. See also Cosmic Dust Analyzer Galileo and Ulysses Dust Detectors Helios Dust Instrumentation Surface Dust Analyser Venetia Burney Student Dust Counter Micro-Imaging Dust Analysis System
Space dust measurement
https://en.wikipedia.org/wiki/Wyss%20Institute%20for%20Biologically%20Inspired%20Engineering
The Wyss Institute for Biologically Inspired Engineering (pronounced "veese") is a cross-disciplinary research institute at Harvard University focused on bridging the gap between academia and industry (translational medicine) by drawing inspiration from nature's design principles to solve challenges in health care and the environment. Its focus on biologically inspired engineering is meant to distinguish it from bioengineering and biomedical engineering. The institute also has a focus on applications, intellectual property generation, and commercialization. The Wyss Institute is located in Boston's Longwood Medical Area and has 375 full-time staff. The institute is organized around eight focus areas, each of which integrates faculty, postdocs, fellows, and staff scientists. The focus areas are bioinspired therapeutics & diagnostics, diagnostics accelerator, immuno-materials, living cellular devices, molecular robotics, 3D organ engineering, predictive bioanalytics, and synthetic biology. History In 2005, Harvard University established a faculty working group to envision the future of bioengineering. The group was called the Harvard Institute for Biologically Inspired Engineering (HIBIE), with the committee focused on synthetic biology, living materials, and biological control. HIBIE was co-chaired by Harvard professors Donald E. Ingber and David J. Mooney. In January 2009, the institute was reformed into the Wyss Institute upon receiving a $125 million gift from Hansjörg Wyss. Ingber became the founding director of the Wyss Institute and David Mooney became a founding Core Faculty member, along with Professors Joanna Aizenberg, David A. Edwards, Kit Parker, George M. Whitesides, George Church, Ary Goldberger, William Shih, Robert Wood, James J. Collins, L. Mahadevan, Radhika Nagpal, and Pamela Silver. In 2013, Hansjörg Wyss gave another $125 million to Harvard University, doubling his initial gift.
The funding was used to further the institute's interdisciplinary research, which includes DNA engineering, cleaning toxins from blood, vibrating insoles to help older adults maintain balance, and a melanoma cancer vaccine. In 2019, Hansjörg Wyss donated a third gift of $131 million to the Wyss Institute. In 2020, the Wyss Institute and Northpond Ventures, a Maryland-based venture capital firm, created the Laboratory for Bioengineering Research and Innovation at the Wyss Institute. The $12 million funding supports research related to RNA therapies, genome engineering, and new drug delivery methods. Within its first ten years, the institute also spun out 29 startup companies to commercialize Wyss Institute developments. Scientific developments The institute was originally founded with fourteen faculty from Harvard University. The institute had around 40 scientists and engineers as a part of the Advanced Technology Team organized around six technology platforms and two cross-platform initiatives across the fields of adaptive material technologies, bioinspired soft robotics, biomimetic microsystems, immuno-materials, living cellular devices, molecular robotics, synthetic biology, and 3D organ engineering. The Wyss Institute has been responsible for a number of scientific developments and spinoffs. In 2010, Donald Ingber pioneered the first 3D organ-on-a-chip that mimics a human lung. Following the lung-on-a-chip, the team built a kidney-on-a-chip and an intestine-on-a-chip. In 2014, Emulate spun out to make organ chips commercially available for other scientists to use for disease modeling and drug testing, including those at Johnson & Johnson, Merck, Takeda, Roche, and Cedars-Sinai Medical Center. In 2013, Conor Walsh developed a soft exosuit that uses textiles and cables to replicate leg muscles, which can help a healthy wearer not fatigue as quickly and help people with physical disabilities restore their muscles and increase mobility. 
In 2016, ReWalk Robotics licensed the exosuit technology for the treatment of stroke, multiple sclerosis (MS), and mobility limitations. In 2019, ReWalk received clearance from the FDA to sell its ReStore soft exosuit for the rehabilitation of stroke survivors. In 2013, David Mooney and the Dana-Farber Cancer Institute began a Phase I clinical trial for an implantable cancer vaccine. In 2018, the Swiss pharmaceutical company Novartis licensed the technology. Mooney also developed injectable versions of the cancer vaccine. In 2014, Jennifer A. Lewis developed inks and a process to 3D-bioprint organs that could be suitable for human transplants. In 2022, Trestle Biotherapeutics licensed the technology from Harvard University to develop 3D-bioprinted kidney tissue. In 2014, James J. Collins and MIT developed an inexpensive diagnostic that consists of cellular "machinery" (proteins, nucleic acids, and ribosomes) freeze-dried on paper. The team tested the diagnostic with the Ebola virus, and in 2016 they tested it with the Zika virus. In 2021, the technology was licensed to Sherlock Biosciences. In 2015, Donald Ingber engineered a blood protein that binds to more than 90 sepsis-causing pathogens, including bacteria, fungi, viruses, and parasites. The technology was licensed by BOA Biomedical and approved in 2021 by the FDA for human clinical trials. In 2015, Conor Walsh developed a soft robotic grip glove to restore mobility for people with impaired hand function. In 2021, Imago Rehab spun out to develop the soft robotic glove for stroke rehabilitation. In 2017, David J. Mooney, inspired by the sticky properties of Arion subfuscus slug secretions, developed a non-toxic hydrogel adhesive that sticks to wet surfaces and stretches, making it ideal for use within the body. In 2019, George Church published research on combination gene therapy to treat multiple age-related diseases in mice, including diabetes, heart disease, and kidney disease.
The team founded Rejuvenate Bio to further develop the technology to treat age-related diseases in dogs. In 2019, George Church's lab developed a machine-learning approach to make more efficient adeno-associated viruses (AAVs), which are delivery vehicles for gene therapies. This team spun out Dyno Therapeutics to continue developing enhanced AAVs. Dyno Therapeutics has partnerships with the pharmaceutical companies Novartis, Sarepta Therapeutics, and Roche. In 2021, Dyno Therapeutics raised a $100 million Series A. In 2020, Michael Levin and Josh Bongard developed a new synthetic lifeform, called Xenobots, made from skin cells and heart muscle cells of the African clawed frog (Xenopus laevis). The scientists used an AI program to design the Xenobots to carry out desired functions, learning how cells cooperate to build complex bodies during morphogenesis and informing regenerative medicine more broadly. In 2021, Jennifer A. Lewis and Massachusetts Eye and Ear hospital developed PhonoGraft, a 3D-printed regenerative eardrum graft. The team launched a startup company that was acquired by Desktop Health, a subsidiary of Desktop Metal. In 2021, Pamela Silver engineered bacteria to feed on greenhouse gases and produce fats similar to animal and vegetable fats, as well as polymers similar to those made from petrochemicals. Response to COVID During the COVID-19 pandemic, the Wyss Institute was engaged in several notable efforts. These included the development of a diagnostic face mask that can detect SARS-CoV-2 RNA in the wearer's breath, and the application of the eRapid technology to detect nucleic acids of the SARS-CoV-2 genome. The technology was licensed by Antisoma Therapeutics as a point-of-care diagnostic test for COVID-19.
The identification of undocumented nucleic acid contamination during routine experiments, which inadvertently caused false positives for COVID-19, led to the development of new safety protocols to protect researchers and ensure data integrity. The institute also developed new nasal swabs that could be manufactured more quickly and easily, launching the startup Rhinostics, and used computational approaches and organ chips to repurpose FDA-approved drugs such as amodiaquine to prevent or treat COVID-19. See also Bioinspiration Wyss Center for Bio and Neuroengineering in Switzerland
Wyss Institute for Biologically Inspired Engineering
[ "Engineering", "Biology" ]
1,765
[ "Biotechnology organizations", "Engineering research institutes" ]
70,740,417
https://en.wikipedia.org/wiki/Backward%20flying
Backward flying, also known as reverse flying, is a locomotive phenomenon in which an organism or machine flies in the direction opposite to its usual flight direction. Different fields Biology In nature, there are very few organisms that can fly in this manner, making the phenomenon very rare. In the class Aves (birds), there is only one family, Trochilidae (hummingbirds), in which backward flying can be found. In the class Insecta (insects), species with this ability can also be found in the infraorder Anisoptera (dragonflies), the genus Hemaris (bee hawk-moths) and the order Diptera (true flies). There are also some species that don't use the traditional wing-flapping mechanism to fly backwards. One such example is the Japanese flying squid, which uses a jet propulsion mechanism for backward flying. Technology In technology, there are some aircraft that can fly backwards, helicopters being one example. Efficiency There is no difference in efficiency between forward flying and backward flying, although it was originally thought that backward flying would be much less efficient. Similar phenomena Similar to backward flying, a backward gliding phenomenon also exists in nature. An example of an organism that can glide backward is the ant Cephalotes atratus (kaka-sikikoko). Notes References Flight
Backward flying
[ "Physics" ]
266
[ "Flight", "Physical phenomena", "Motion (physics)" ]
70,740,584
https://en.wikipedia.org/wiki/Se%C3%A1n%20Dineen
Seán Dineen (12 February 1944 – 18 January 2024) was an Irish mathematician specialising in complex analysis. His academic career was spent, in the main, at University College Dublin (UCD) where he was Professor of Mathematics, serving as Head of Department and as Head of the School of Mathematical Sciences before retiring in 2009. Dineen died on 18 January 2024, at the age of 79. Education Seán Dineen was born in Clonakilty, Co. Cork, Ireland on 12 February 1944. He attended St Mary's, the first secondary school for boys in Clonakilty, which his parents Jerry (Jeremiah) and Margaret Dineen had founded in 1938. His father had died in 1953 and the school was subsequently run by his mother. He entered University College Cork (UCC) in 1961 to study mathematics, graduating with an honours BSc in mathematics in 1964. While at UCC, he was involved in setting up the student mathematics society there. His tutors and lecturers included Finbarr Holland, Michael Mortell, Tadhg Carey, Paddy Kennedy, Paddy Barry and Siobhán O'Shea (later Siobhán Vernon). He completed his MSc there in 1965, and was awarded a National University of Ireland Travelling Studentship. Dineen was the first student of pure mathematics from UCC to travel to the USA to do his doctorate, where he did his coursework in the University of Maryland. His official supervisor there was John Horvath, but his PhD research was carried out in Rio de Janeiro at Instituto Nacional de Matemática Pura e Aplicada (IMPA) under the supervision of Leopoldo Nachbin. He completed his thesis on "Holomorphy Types on a Banach Space" in 1970. UCD Career Dineen spent the year 1969–1970 at Johns Hopkins as an instructor before returning to Ireland. After two years at the Dublin Institute for Advanced Studies (DIAS), he secured a position at University College Dublin. Seven years later, in 1979, he was appointed to the professorship and chair of mathematics vacated by J. R. Timoney.
He spent the rest of his career there, formally retiring in 2009. Mathematics Dineen's work has principally been in the area of infinite-dimensional complex analysis and the topological structure of spaces of holomorphic functions. He later worked on bounded symmetric domains and spectral theory, among other topics. He has said "If you want to stay active as a research mathematician, you have to reinvent yourself regularly". His academic footprint includes 10 books and/or monographs, over 100 peer-reviewed research articles, over 4000 citations, 11 PhD students, over 40 collaborators, and the organisation of numerous mathematical conferences and meetings. In 1987 he was elected to the Royal Irish Academy. Selected papers Dineen, Seán "The second dual of a JB∗ triple system. Complex analysis, functional analysis and approximation theory". (Campinas, 1984), 67–69, North-Holland Math. Stud., 125, Notas Mat., 110, North-Holland, Amsterdam, 1986. Dineen, Seán "Complete holomorphic vector fields on the second dual of a Banach space". Math. Scand. 59 (1986), no. 1, 131–142. Dineen, Seán "Holomorphy types on a Banach space". Studia Math. 39 (1971), 241–288. Alencar, Raymundo; Aron, Richard M.; Dineen, Seán "A reflexive space of holomorphic functions in infinitely many variables". Proc. Amer. Math. Soc. 90 (1984), no. 3, 407–411. Dineen, Seán; Timoney, Richard M. "On a problem of H. Bohr". Bull. Soc. Roy. Sci. Liège 60 (1991), no. 6, 401–404. Dineen, Seán; Timoney, Richard M.; Vigué, Jean-Pierre "Pseudodistances invariantes sur les domaines d'un espace localement convexe". Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 12 (1985), no. 4, 515–529. Dineen, Seán; Timoney, Richard M. "Absolute bases, tensor products and a theorem of Bohr". Studia Math. 94 (1989), no. 3, 227–234. Dineen, Seán; Mellon, Pauline "Holomorphic functions on symmetric Banach manifolds of compact type are constant". Math. Z. 229 (1998), no. 4, 753–765.
Dineen, Seán; Mackey, Michael; Mellon, Pauline "The density property for JB∗-triples". Studia Math. 137 (1999), no. 2, 143–160. Dineen, Seán; Patyi, Imre; Venkova, Milena "Inverses depending holomorphically on a parameter in a Banach space". J. Funct. Anal. 237 (2006), no. 1, 338–349. Dineen, Seán; Mujica, Jorge "A monomial basis for the holomorphic functions on $c_0$". Proc. Amer. Math. Soc. 141 (2013), no. 5, 1663–1672. Dineen, Seán; Harte, Robin E. "Banach-valued axiomatic spectra". Studia Math. 175 (2006), no. 3, 213–232. Dineen, Seán; Galindo, Pablo; García, Domingo; Maestre, Manuel "Linearization of holomorphic mappings on fully nuclear spaces with a basis". Glasgow Math. J. 36 (1994), no. 2, 201–208. Selected books Analysis, a Gateway to Understanding. World Scientific, 2012, 320pp. Probability Theory in Finance: A Mathematical Guide to the Black-Scholes Formula. Second edition. Graduate Studies in Mathematics, 70. American Mathematical Society, Providence, RI, 2013. xiv+305 pp. Probability Theory in Finance. A Mathematical Guide to the Black-Scholes formula. Graduate Studies in Mathematics, 70. American Mathematical Society, Providence, RI, 2005. xiv+294 pp. Complex Analysis on Infinite-Dimensional Spaces. Springer Monographs in Mathematics. Springer-Verlag London, Ltd., London, 1999. xvi+543 pp. Multivariate Calculus and Geometry. Springer Undergraduate Mathematics Series. Springer-Verlag London, Ltd., London, 1998. xii+262 pp. Third edition 2014. xiv+257 pp. Functions of Two Variables. Chapman and Hall Mathematics Series. Chapman & Hall, London, 1995. x+189 pp. Second edition Chapman & Hall/CRC, Boca Raton, FL, 2000. xii+191 pp. The Schwarz Lemma. Oxford Mathematical Monographs. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1989. x+248 pp.
References External links Obituary in Irish Maths Society Bulletin 1944 births 2024 deaths Mathematical analysts Complex analysts 20th-century Irish mathematicians 21st-century Irish mathematicians Alumni of University College Cork University System of Maryland alumni Instituto Nacional de Matemática Pura e Aplicada alumni Academics of University College Dublin Johns Hopkins University faculty People from Clonakilty Members of the Royal Irish Academy Academics of the Dublin Institute for Advanced Studies Scientists from County Cork
Seán Dineen
[ "Mathematics" ]
1,536
[ "Mathematical analysis", "Mathematical analysts" ]
70,740,807
https://en.wikipedia.org/wiki/Thermotomaculum%20hydrothermale
Thermotomaculum hydrothermale is a species of Acidobacteriota. References Bacteria Bacteria described in 2017
Thermotomaculum hydrothermale
[ "Biology" ]
26
[ "Microorganisms", "Prokaryotes", "Bacteria" ]
70,741,068
https://en.wikipedia.org/wiki/Thermoanaerobaculum%20aquaticum
Thermoanaerobaculum aquaticum is a species of Acidobacteriota. References Bacteria described in 2013 Acidobacteriota
Thermoanaerobaculum aquaticum
[ "Biology" ]
32
[ "Bacteria stubs", "Bacteria" ]
70,741,349
https://en.wikipedia.org/wiki/Polarisedimenticola%20svalbardensis
"Candidatus Polarisedimenticola svalbardensis" is a candidate species of Acidobacteriota. References Bacteria described in 2021 Acidobacteriota Candidatus taxa
Polarisedimenticola svalbardensis
[ "Biology" ]
41
[ "Bacteria stubs", "Bacteria" ]
70,742,164
https://en.wikipedia.org/wiki/Affordable%20Connectivity%20Program
The Affordable Connectivity Program (ACP) was a United States government-sponsored program that provided internet access to low-income households. Several companies signed on to participate in the program, including Verizon Communications, Frontier Communications, T-Mobile, Spectrum, Cox, AT&T, Xfinity, Optimum and Comcast. The program was administered by the Federal Communications Commission. The Infrastructure Investment and Jobs Act provided $14.2 billion in funding for $30 subsidies for those with low incomes, and $75 subsidies on tribal lands. As of June 2024, the program has ended. History 2020 Passed in December 2020, the Consolidated Appropriations Act, 2021 directed the FCC to create a new "Emergency Broadband Benefit program" (EBB) with the aim of helping Americans with broadband connectivity in response to the effects of the COVID-19 pandemic. $3.2 billion was appropriated for the EBB program. 2021 In 2021, the US Congress passed a $1.2 trillion infrastructure package including $14.2 billion for the Affordable Connectivity Program, which replaced the Emergency Broadband Benefit program on December 31, 2021. When US President Biden remarked on the act on May 9, 2022, close to 40% of American households qualified for assistance, i.e. households or individuals earning twice the poverty level or less, with higher limits in Hawaii and Alaska. According to NPR, an estimated 48 million Americans qualified, with plans to provide at least 100 megabits per second of speed for a maximum of $30. If a household's income is above 200% of the Federal Poverty Guidelines, one person in the household must participate in a government assistance program to qualify. Twenty internet providers were initially involved, including regional companies such as Hawaiian Telcom and Jackson Energy Authority in Tennessee.
The full list included Allo Communications, Altafiber, Altice USA, Astound, AT&T, Breezeline, Comcast, Comporium, Cox Communications, Frontier, IdeaTek, Jackson Energy Authority, Kinetic, MediaCom, MLGC, Spectrum, Verizon, Vermont Telephone Company, Vexus Fiber, and Wow! Internet, Cable, and TV. Aristata Communications joined the program in August 2023. In 2021, Pew Research Center conducted a study on the act along with the University of Southern California's Annenberg Research Network on International Communications and the California Emerging Technology Fund, looking at uptake and impact in California. 2022-2023 Separate from the ACP, United States Secretary of Commerce Gina Raimondo and North Carolina Governor Roy Cooper introduced the $45 billion Internet for All initiative in Durham, North Carolina on May 13, 2022. The Broadband Equity, Access and Deployment program is to provide each state with $5 million for planning and $100 million for expansion, with states having a greater need receiving more money. The legislature of each state must approve. Funding is also provided by the Infrastructure Investment and Jobs Act. A Pew study about the ACP released in February 2023 stated that the enrollment process presented difficulties for eligible households, with 45 percent of applicants rejected and others giving up on their applications before they were submitted. Data-sharing limitations between agencies were also described as a limiting factor for uptake. On February 27, 2023, the White House announced 16 million households were "saving $500 million per year" due to the program. By July 2023, there were 1,300 internet providers participating in the ACP, although not all provided the discounted device benefit. In July 2023, a study showed about 14% of the United States was enrolled in the program. As of July 31, 2023, 19.8 million households had signed up for the ACP, with 2.8 million of them in rural counties.
By August 2023, there were reports the program would run out of money by 2024. According to an analysis by The Hill, the federal funds available to pay for the program would be depleted by mid-2024, although the federal government "may" continue funding. On August 3, 2023, it was reported that the subsidy would increase to $75 per month for people in 'high-cost' areas. Congress instructed the NTIA to identify high-cost areas and consult on the matter with the FCC. 2024 As of January 2024 the future of the program remained in doubt, with New Street Research giving the chances of the $7 billion extension bill being passed as "significantly below 50%". Provisions are in place for internet service providers to apprise their customers of its status, with the last full month of discounted service being April 2024. By that time, more than 23 million households had accessed the ACP. As of late 2024 the program had been subject to funding delays and progress in the rollout of broadband had not met the targets of the plan. Qualifications According to the FCC website in August 2023, "The benefit provides a discount of up to $30 per month toward internet service for eligible households and up to $75 per month for households on qualifying Tribal lands. Eligible households can also receive a one-time discount of up to $100 to purchase a laptop, desktop computer, or tablet from participating providers if they contribute more than $10 and less than $50 toward the purchase price." Those receiving various federal benefits were also eligible, including SSI, Pell grants, discounted school meals, Medicaid, WIC assistance, food stamps, VA Survivors Pension, and VA Veterans Pension. Others included Federal Public Housing. The program applied a discount on a monthly basis for participating companies. Other factors for eligibility included being in specific Tribal programs such as Bureau of Indian Affairs General Assistance, Tribal TANF and Food Distribution Program on Indian Reservations.
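The discount rules quoted above amount to simple conditional logic. A minimal, illustrative Python sketch (the function names are my own, not from the FCC):

```python
def monthly_discount(on_tribal_lands: bool) -> int:
    """Monthly internet service discount in dollars: $30 for eligible
    households, $75 on qualifying Tribal lands."""
    return 75 if on_tribal_lands else 30

def device_discount(contribution: float) -> int:
    """One-time device discount of up to $100, available only when the
    household contributes more than $10 and less than $50 toward the
    purchase price."""
    return 100 if 10 < contribution < 50 else 0
```

For example, a household on Tribal lands contributing $25 toward a tablet would receive the $75 monthly subsidy and the $100 device discount.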
See also Lifeline (FCC program) Internet in the United States References External links Affordable Connectivity Program at FCC.gov 2022 in the United States 2022 in American politics Internet access Former United States Federal assistance programs
Affordable Connectivity Program
[ "Technology" ]
1,227
[ "Internet access", "IT infrastructure" ]
70,742,210
https://en.wikipedia.org/wiki/Mixed%20Chinese%20postman%20problem
The mixed Chinese postman problem (MCPP or MCP) is the search for the shortest traversal of a graph with a set of vertices V, a set of undirected edges E with positive rational weights, and a set of directed arcs A with positive rational weights that covers each edge or arc at least once at minimal cost. The problem has been proven to be NP-complete by Papadimitriou. The mixed Chinese postman problem often arises in arc routing problems such as snow ploughing, where some streets are too narrow to traverse in both directions while other streets are bidirectional and can be plowed in both directions. It is easy to check if a mixed graph has a postman tour of any size by verifying if the graph is strongly connected. The problem is NP-hard if we restrict the postman tour to traverse each arc exactly once or if we restrict it to traverse each edge exactly once, as proved by Zaragoza Martinez. Mathematical Definition The mathematical definition is: Input: A strongly connected, mixed graph G = (V, E, A), a cost c(e) ≥ 0 for every edge and arc e ∈ E ∪ A, and a maximum cost cmax. Question: is there a (directed) tour that traverses every edge in E and every arc in A at least once and has cost at most cmax? Computational complexity The main difficulty in solving the Mixed Chinese Postman problem lies in choosing orientations for the (undirected) edges when we are given a tight budget for our tour and can only afford to traverse each edge once. We then have to orient the edges and add some further arcs in order to obtain a directed Eulerian graph, that is, to make every vertex balanced. If there are multiple edges incident to one vertex, it is not an easy task to determine the correct orientation of each edge. The mathematician Papadimitriou analyzed this problem with more restrictions; "MIXED CHINESE POSTMAN is NP-complete, even if the input graph is planar, each vertex has degree at most three, and each edge and arc has cost one."
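The check mentioned above — a mixed graph has a postman tour of some size if and only if it is strongly connected — can be sketched in Python by treating each undirected edge as a pair of opposite arcs (a minimal sketch; the function names are illustrative):

```python
from collections import defaultdict

def has_postman_tour(vertices, edges, arcs):
    """A mixed graph has a postman tour of some size iff it is strongly
    connected when each undirected edge counts as two opposite arcs."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in arcs:                      # arcs are one-way
        adj[u].append(v)
        radj[v].append(u)
    for u, v in edges:                     # edges are traversable both ways
        adj[u].append(v); adj[v].append(u)
        radj[u].append(v); radj[v].append(u)

    def reaches_all(start, graph):
        # depth-first search from `start`
        seen, stack = {start}, [start]
        while stack:
            for w in graph[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen == set(vertices)

    start = next(iter(vertices))
    # strongly connected = every vertex reachable from `start`
    # both forwards and backwards
    return reaches_all(start, adj) and reaches_all(start, radj)
```

For example, a graph with arcs 2→3 and 3→1 plus the undirected edge {1, 2} is strongly connected, while a graph with only the arcs 1→2 and 1→3 is not.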
Eulerian graph The process of checking if a mixed graph is Eulerian is important to creating an algorithm to solve the Mixed Chinese Postman problem. The degrees of a mixed graph G must be even to have an Eulerian cycle, but this is not sufficient. Approximation The fact that the Mixed Chinese Postman problem is NP-hard has led to the search for polynomial time algorithms that approach the optimum solution to a reasonable threshold. Frederickson developed a method with an approximation factor of 3/2 that could be applied to planar graphs, and Raghavachari and Veerasamy found a method that does not require the graph to be planar. However, polynomial time algorithms cannot find the cost of deadheading, the time it takes a snow plough to reach the streets it will plow or a street sweeper to reach the streets it will sweep. Formal definition Given a strongly connected mixed graph G = (V, E, A) with a vertex set V, an edge set E, an arc set A and a nonnegative cost c(e) for each e ∈ E ∪ A, the MCPP consists of finding a minimum-cost tour passing through each link at least once. Given a subset S ⊆ V, δ(S) denotes the set of edges with exactly one endpoint in S, and (S, V\S) denotes the set of arcs directed from S to V\S. Given a vertex i, d-(i) (indegree) denotes the number of arcs entering i, d+(i) (outdegree) is the number of arcs leaving i, and d(i) (degree) is the total number of links incident with i. Note that d(i) = d-(i) + d+(i) + |δ({i})|. A mixed graph is called even if all of its vertices have even degree, it is called symmetric if d+(i) = d-(i) for each vertex i, and it is said to be balanced if, given any subset S of vertices, the difference between the number of arcs directed from S to V\S, |(S, V\S)|, and the number of arcs directed from V\S to S, |(V\S, S)|, is no greater than the number of undirected edges joining S and V\S, |δ(S)|. It is a well known fact that a mixed graph is Eulerian if and only if it is even and balanced. Notice that if G is even and symmetric, then G is also balanced (and Eulerian). Moreover, if G is even, the MCPP can be solved exactly in polynomial time.
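The even and symmetric conditions above can be checked directly from degree counts. A minimal Python sketch (the names are illustrative; the balanced condition is omitted, since it quantifies over all vertex subsets):

```python
from collections import Counter

def degree_counts(edges, arcs):
    """Per-vertex indegree and outdegree (from arcs) plus incident-edge count."""
    indeg, outdeg, edeg = Counter(), Counter(), Counter()
    for u, v in arcs:
        outdeg[u] += 1
        indeg[v] += 1
    for u, v in edges:
        edeg[u] += 1
        edeg[v] += 1
    return indeg, outdeg, edeg

def is_even(vertices, edges, arcs):
    """True when every vertex has even total degree (arcs in + arcs out
    + incident edges)."""
    indeg, outdeg, edeg = degree_counts(edges, arcs)
    return all((indeg[v] + outdeg[v] + edeg[v]) % 2 == 0 for v in vertices)

def is_symmetric(vertices, edges, arcs):
    """True when every vertex has equal indegree and outdegree."""
    indeg, outdeg, _ = degree_counts(edges, arcs)
    return all(indeg[v] == outdeg[v] for v in vertices)
```

A directed cycle 1→2→3→1 with no undirected edges is both even and symmetric (and hence Eulerian); adding a single undirected edge makes two vertices odd.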
Even MCPP Algorithm Given an even and strongly connected mixed graph G = (V, E, A): Step 1: let A1 be the set of arcs obtained by randomly assigning a direction to the edges in E, with the same costs. Step 2: compute b(i) = d-(i) - d+(i) for each vertex i in the directed graph G1 = (V, A ∪ A1). A vertex with b(i) > 0 (b(i) < 0) will be considered as a source (sink) with supply (demand) |b(i)|. Note that as G is an even graph, all supplies and demands are even numbers (zero is considered an even number). Step 3: let A2 be the set of arcs in the opposite direction to those in A1 and with the costs of the corresponding edges, and let A0 be the set of arcs parallel to A2 at zero cost. Step 4: to satisfy the demands of all the vertices, solve a minimum cost flow problem in the graph (V, A ∪ A1 ∪ A2 ∪ A0), in which each arc in A ∪ A1 ∪ A2 has infinite capacity and each arc in A0 has capacity 2. Let x(a) be the optimal flow. Step 5: for each arc a = (j, i) in A0, whose associated edge was oriented from i to j in step 1, do: if x(a) = 2, then orient the corresponding edge in E from j to i (the direction, from i to j, assigned to the associated edge in step 1 was "wrong"); if x(a) = 0, then orient the corresponding edge in E from i to j (in this case, the orientation in step 1 was "right"). Note the case x(a) = 1 is impossible, as all flow values through arcs in A0 are even numbers. Step 6: augment G1 by adding x(a) copies of each arc a in A ∪ A1 ∪ A2. The resulting graph is even and symmetric. Heuristic algorithms When the mixed graph is not even and the nodes do not all have even degree, the graph can be transformed into an even graph. Let G = (V, E, A) be a mixed graph that is strongly connected. Find the odd-degree nodes by ignoring the arc directions and obtain a minimal-cost matching between them. Augment the graph with the edges from the minimal-cost matching to generate an even graph G'. The graph G' is even but is not necessarily symmetric, while an Eulerian mixed graph is even and symmetric. Solve a minimum cost flow problem in G' to obtain a symmetric graph G'' that may not be even. The final step involves making the symmetric graph even. Label the odd-degree nodes V0. Find cycles that alternate between lines in the arc set and lines in the edge set that start and end at points in V0.
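The first steps of the even-MCPP algorithm — orienting the edges, computing vertex imbalances, and assembling the arc sets of the minimum cost flow network — can be sketched as follows (an illustrative sketch; the input format and names are my own, and solving the resulting flow problem is left to a standard min-cost-flow solver):

```python
from collections import Counter

def build_even_mcpp_flow_network(edges, arcs, cost):
    """Orient each undirected edge arbitrarily (here: as listed), compute
    each vertex's imbalance (indegree minus outdegree), and build the arc
    sets of the flow network.  `cost[(u, v)]` is assumed to hold the cost
    of each original link."""
    oriented = list(edges)                 # arbitrary orientation of E
    imbalance = Counter()                  # >0 = supply, <0 = demand
    for u, v in list(arcs) + oriented:
        imbalance[v] += 1                  # arc enters v
        imbalance[u] -= 1                  # arc leaves u

    # (tail, head, capacity, cost) arcs of the flow network
    flow_arcs = []
    for u, v in list(arcs) + oriented:     # extra copies of each link
        flow_arcs.append((u, v, float("inf"), cost[(u, v)]))
    for u, v in oriented:
        # extra traversals against the chosen orientation
        flow_arcs.append((v, u, float("inf"), cost[(u, v)]))
        # zero-cost "flip" arc: a flow of 2 means the orientation is reversed
        flow_arcs.append((v, u, 2, 0))
    return imbalance, flow_arcs
```

On an even graph every imbalance comes out even, matching the observation that all supplies and demands are even numbers.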
The arcs in these alternating cycles should be considered as undirected edges, not as directed arcs. Genetic algorithm A paper published by Hua Jiang et al. laid out a genetic algorithm to solve the mixed Chinese postman problem by operating on a population of candidate solutions. The algorithm performed well compared to other approximation algorithms for the MCPP. See also Capacitated arc routing problem References Computational problems in graph theory
Mixed Chinese postman problem
[ "Mathematics" ]
1,309
[ "Computational problems in graph theory", "Computational mathematics", "Graph theory", "Computational problems", "Mathematical relations", "Mathematical problems" ]
70,742,220
https://en.wikipedia.org/wiki/NYCxDesign
NYCxDESIGN is a non-profit organization responsible for organizing the NYCxDESIGN Festival, New York City's official annual citywide celebration of design, which takes place every May. The festival is made up of independently hosted trade fairs, exhibitions, open studios, talks and panel discussions, and other in-person and virtual events that take place in the five boroughs of New York City. The participants highlight a range of design and architecture related fields including furniture design, interior design, graphic design, fashion design, entertainment, technology, and art. New York City mayor Eric Adams spoke at the opening ceremony of its 10th edition in 2022. The organization also produces year-round education programming, including the poster series Ode to NYC, the Emerging Designer Showcase, NYCxDESIGNxSOUVENIR, the podcast The Mic, hosted by Debbie Millman, and Design Pavilion, an outdoor design exhibition with interactive installations at landmark venues, free and open to the public. History and operation In 2012 NYCEDC started NYCxDESIGN as a municipal initiative steered by a committee of leaders from New York City's design community. The 2019 edition was estimated to have attracted 300,000 visitors with its 400+ events, which resulted in a total expenditure of approximately $111 million in New York City. The city announced that SANDOW would be taking over the operations in 2020. Starting in 2021, NYCxDESIGN transitioned to become an independent non-profit organization. In 2023, Ilene Shaw took over as the Executive Director of the 501(c)(3) organization. In March 2024, in honor of the 12th year of the NYCxDESIGN Festival, the organization announced the launch of a Keynote program. Through this, a daily keynote is presented throughout the Festival week, inviting designers and brands of all disciplines from around the world to share inspiring stories and their visions.
References Further reading Architectural Digest - Highlights From a Whirlwind New York Design Week Dezeen - "Don't come wearing Milan or European glasses" says NYCxDesign executive director External links 2012 establishments in New York City Annual events in New York City Festivals in New York City Design events
NYCxDesign
[ "Engineering" ]
443
[ "Design", "Design events" ]
70,742,797
https://en.wikipedia.org/wiki/Androgenesis
Androgenesis is a system of asexual reproduction that requires the presence of eggs and occurs when a zygote is produced with only paternal nuclear genes. During standard sexual reproduction, one female parent and one male parent each produce haploid gametes (such as a sperm or egg cell, each containing only a single set of chromosomes), which recombine to create offspring with genetic material from both parents. However, in androgenesis, there is no recombination of maternal and paternal chromosomes, and only the paternal chromosomes are passed down to the offspring. (The inverse of this is gynogenesis, where only the maternal chromosomes are inherited, which is more common than androgenesis). The offspring produced in androgenesis will still have maternally inherited mitochondria, as is the case with most sexually reproducing species. One of two things can occur to produce offspring with exclusively paternal genetic material: the maternal nuclear genome can be eliminated from the zygote, or the female can produce an egg with no nucleus, resulting in an embryo developing with only the genome of the male gamete. Androgenesis blurs the lines between sexual and asexual reproduction: it is not strictly a form of asexual reproduction because both male and female gametes are required. However, it is also not strictly a form of sexual reproduction because the offspring have uniparental nuclear DNA that has not undergone recombination, and the proliferation of androgenesis can lead to exclusively asexually reproducing species. Androgenesis occurs in nature in many organisms such as plants (including trees, flowers, barley, algae or corn), invertebrates (including clams, stick insects, some ants, bees, flies and parasitic wasps) and vertebrates (mainly amphibians and fish). Androgenesis has also been observed in roosters and genetically modified laboratory mice. 
Elimination of the maternal nuclear genome When androgenesis occurs via elimination of the maternal nuclear genome, the elimination takes place after fertilization. The nuclei of the two gametes fuse as normal, but immediately afterwards the male nuclear genome then eliminates the female nuclear genome, leaving a fertilized ovum with only the nuclear genome of the male parent. If viable, the resulting offspring is a clone or sub-clone of the sperm- or pollen-producing parent. Elimination of the maternal nuclear genome is evolutionarily advantageous for the male parent, because all offspring produced have entirely paternally-inherited alleles: in contrast, a male parent that reproduces sexually without androgenesis only passes down half of its genetic material to each of its offspring. A male allele promoting the elimination of the female gametic nucleus therefore has a high fitness advantage and can spread through a population and even reach fixation. However, this may be part of the reason androgenesis is very rarely observed in nature: despite being advantageous to the individual producing offspring, it is deleterious to the population as a whole: if an androgenesis-inducing allele reaches high frequencies, egg-producing individuals become rare. Because both egg- and sperm-producers are necessary for androgenesis, if the sex ratio becomes highly unbalanced and there are too few egg-producers, the population is driven to extinction. However, in hermaphrodites (species where a single individual produces both male and female gametes), this is less of a problem since there is no sex ratio. Female production of a non-nuclear egg Androgenesis can also occur through female production of an egg without a nucleus. Upon fertilization with pollen or sperm, there is no maternal nucleus to expel, and a zygote is produced that derives its nuclear genome entirely from its paternal parent. 
It is unclear why production of non-nucleate eggs would have evolved, because there is no fitness advantage to the egg parent: none of its nuclear genes are being passed onto its offspring. Therefore, any female allele causing non-nucleate egg production would be highly disadvantageous. This form of androgenesis could spread through genetic drift, or there may be some unknown benefit to the egg-producing parent. Species in which non-nucleate egg production occurs are less likely to go extinct than species where the maternal nuclear genome is eliminated. This is because females producing non-nucleate eggs are disfavored by natural selection, so their proportion in a population will remain low. Male apomixis Another type of androgenesis is male apomixis, which is a reproductive process in which a plant develops from a sperm cell (male gamete) without the participation of a female cell (ovum). In this process, the zygote is formed solely with genetic material from the father, resulting in offspring genetically identical to the male organism. This has been noted in many plants such as Nicotiana, Capsicum frutescens, Cicer arietinum, Poa arachnifera, Cupressus sempervirens, Solanum verrucosum, Phaeophyceae, Elodea canadensis, barley, Tripsacum dactyloides, and Zea mays, and occurs as the regular reproductive method in Cupressus dupreziana. This contrasts with the more common apomixis, where development occurs without fertilization, but with genetic material only from the mother. There are also clonal species that reproduce through vegetative reproduction such as Lomatia tasmanica, Lagarostrobos franklinii, and Pando, where the genetic material is exclusively male. Obligate androgenesis Obligate androgenesis is the process by which males produce offspring exclusively through male genetic material; mating with females of related species is not necessary to produce offspring. This allows these species to survive in the absence of females.
They are also capable of interbreeding with sexual and other androgenetic lineages in a phenomenon sometimes referred to as egg parasitism or androgenetic parasitism. This method of reproduction is relatively rare and has been found in several species of clams of the genus Corbicula, some plants like Elodea canadensis, Cupressus dupreziana, Lomatia tasmanica, Lagarostrobos franklinii, and Pando, algae of the class Phaeophyceae, and recently in the all-male fish species Squalius alburnoides. Although the most common term to refer to completely asexual reproduction in males is paternal apomixis, the term obligate androgenesis is more commonly used in animals; unlike paternal apomixis, obligate androgenesis implies that individuals are incapable of reproducing in a sexual manner, and therefore depend on androgenesis to reproduce. Ploidy in androgenesis Individuals produced through androgenesis can be either haploid or diploid (having one or two sets of chromosomes, respectively), depending on the species. Diploidy occurs through either the fusion of two haploid sperm cells or the duplication of chromosomes from one haploid sperm cell. In both cases, the offspring experience a loss of genetic variation: individuals with the genome of 2 fused sperm cells will suffer from inbreeding depression, and individuals with the genome of a duplicated sperm cell will be fully homozygous. In species with male heterogamety (males have XY or XO chromosomes and females have XX, as in most mammals), the doubling of male chromosomes will cause all offspring to be female: if the sperm carries an X chromosome, the embryo must be XX, and if it carries a Y or O, the embryo will be YY or OO, and nonviable. With sperm fusion, a quarter of fertilized eggs will be female (XX), half will be male (XO or XY), and a quarter will be nonviable (YY or OO).
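The quarter/half/quarter split described above for sperm fusion in an XY system can be verified by enumerating the four equally likely chromosome pairings (a small illustrative Python check):

```python
from collections import Counter
from itertools import product

# Each sperm carries X or Y with equal probability; an androgenetic
# zygote made by fusing two sperm inherits both chromosomes.
pairs = Counter("".join(sorted(p)) for p in product("XY", repeat=2))
total = sum(pairs.values())          # 4 equally likely pairings

female = pairs["XX"] / total         # viable, female
male = pairs["XY"] / total           # viable, male
nonviable = pairs["YY"] / total      # no X chromosome: nonviable
```

The XO case works the same way with "O" in place of "Y", giving the same 1/4 : 1/2 : 1/4 ratio of female, male, and nonviable embryos.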
Androgenesis is more common in haplodiploid species (where sex is determined by ploidy such that males generally develop from an unfertilized egg and females from a fertilized egg) than in diploid species (where all sexes are diploid). This is because with haplodiploids there is no requirement of the doubling of chromosomes from a haploid gamete, so no embryos are lost due to YY or OO chromosomes. Androgenesis in non-gonochoristic species Androgenesis is more likely to persist in hermaphrodites than in species with two distinct sexes (gonochorists) because all individuals have the ability to produce ova, so the spread of androgenesis-promoting alleles causing egg-producers to become scarce is not an issue. Androgenesis is also seen more frequently in species that already have uncommon modes of reproduction such as hybridogenesis and parthenogenesis, and is sometimes seen in interspecies hybridization. Induced androgenesis Humans sometimes induce androgenesis to create clonal lines in plants (specifically crops), fish, and silkworms. A common method of inducing androgenesis is through irradiation. Egg cells can have their nuclei inactivated by gamma ray, UV, or X-ray radiation before being fertilized with sperm or pollen. A 2015 study was successful in producing zebrafish androgenones by cold-shocking just-fertilized eggs, which prevents the first cleavage event that doubles the chromosome number after parthenogenesis, and then heat-shocking them to double their chromosome number. See also Parthenogenesis Parthenocarpy References Asexual reproduction
Androgenesis
[ "Biology" ]
2,017
[ "Behavior", "Asexual reproduction", "Reproduction" ]
70,742,859
https://en.wikipedia.org/wiki/United%20Nations%20Special%20Rapporteur%20on%20Toxics%20and%20Human%20Rights
The mandate of the United Nations Special Rapporteur on Toxics and Human Rights was established in 1995 by the United Nations Commission on Human Rights. Background In 1995, the Commission on Human Rights established the mandate to examine the human rights implications of exposure to hazardous substances and toxic waste. This included the implications of trends such as illicit traffic, the release of toxic and dangerous products during military activities, war and conflict, and shipbreaking. Other areas included in the mandate are medical waste, extractive industries (particularly oil, gas and mining), labour conditions in manufacturing and agricultural sectors, consumer products, environmental emissions of hazardous substances from all sources, and the disposal of waste. In 2011, the UN Human Rights Council affirmed that hazardous substances and waste may constitute a serious threat to the full enjoyment of human rights. It expanded the mandate to include the whole life-cycle of hazardous products, from manufacturing to final disposal. This is known as the cradle-to-grave approach. The rapid acceleration in chemical production suggests that this is an increasing threat, particularly for the human rights of the most vulnerable segments of society. The UN asserts that states are required by international human rights law to take active measures to prevent the exposure of individuals and communities to toxic substances. Vulnerable members of society are often deemed most affected. They include people living in poverty, workers, children, minority groups, indigenous peoples, and migrants, among other vulnerable or susceptible groups, with highly gendered impacts. Independent expert The Special Rapporteur is appointed by the UN Human Rights Council.
The appointed expert is required by the Human Rights Council to examine and report back to member States on initiatives taken to promote and protect the human rights implicated by the improper management of hazardous substances and wastes. Selection of topics reported on by the Special Rapporteur In March 2022, Human Rights Watch made a submission to the report of the Special Rapporteur regarding mercury, artisanal and small-scale gold mining. Mercury is used in mining to retrieve the gold from the ore. Mercury, which is particularly harmful to children, attacks the central nervous system and can cause brain damage and death. The mining work is often carried out by child labourers who have little or false information about the risks of mercury. 2021 - Report: The stages of the plastics cycle and their impacts on human rights The report highlights the human rights implications of toxic additives in plastics and the life cycle stages of plastic, including the rights of women, children, workers, and indigenous peoples. Toxic chemicals are commonly added to plastics, causing serious risks to human rights and the environment. The Special Rapporteur puts forward recommendations aimed at addressing the negative consequences of plastics on human rights. 2015 - Report: Right to Information on Hazardous Substances and Wastes In this report, the Special Rapporteur clarifies the scope of the right to information throughout the life cycle of hazardous substances and wastes, identifies challenges that have emerged in realizing this right and outlines potential solutions to these problems. Obligations of States and responsibilities of business in relation to implementing the right to information on hazardous substances and wastes are discussed. Current Independent Expert Marcos A. Orellana, 2020–current Past Independent Experts Mr. Baskut Tuncak (Turkey/US), 2014–2020 Mr. Marc Pallemaerts (Belgium), 2012–2014 Mr. Calin Georgescu (Romania), 2010–2012 Mr. 
Okechukwu Ibeanu (Nigeria), 2004–2010 Ms. Fatma Zohra Ouhachi-Vesely (Algeria), 1995–2004 References External links United Nations Special Rapporteur on Toxics and Human Rights United Nations Human Rights Council Human rights Diplomacy Hazardous waste
United Nations Special Rapporteur on Toxics and Human Rights
[ "Technology" ]
745
[ "Hazardous waste" ]
70,743,583
https://en.wikipedia.org/wiki/List%20of%20organisms%20named%20after%20works%20of%20fiction
Newly created taxonomic names in biological nomenclature often reflect the discoverer's interests or honour those the discoverer holds in esteem, including fictional elements. † Denotes that the organism is extinct. Literature Greek mythology Norse mythology Gargantua and Pantagruel William Shakespeare Don Quixote Robinson Crusoe Gulliver's Travels Victor Hugo The Three Musketeers Moby-Dick Lewis Carroll Mark Twain The Adventures of Pinocchio Arthur Conan Doyle Rudyard Kipling Cyrano de Bergerac Dracula Peter Pan H. P. Lovecraft Winnie-the-Pooh Vladimir Nabokov J. R. R. Tolkien Enid Blyton Jorge Amado Dune Aubrey–Maturin series The Hitchhiker's Guide to the Galaxy Discworld The Witcher A Song of Ice and Fire Harry Potter Rumo and His Miraculous Adventure Ready Player One Other literature Comics The Adventures of Tintin Popeye Asterix DC Comics Marvel Comics Peanuts Monica and Friends Calvin and Hobbes JoJo's Bizarre Adventure Other comics Films Disney and Pixar Looney Tunes Orson Welles Godzilla Star Wars Alien Terminator Crocodile Dundee Predator The Fifth Element The Big Lebowski Madagascar Avatar Other films Television Doctor Who Star Trek Sesame Street and The Muppets Dungeons & Dragons SpongeBob SquarePants Battlestar Galactica The Big Bang Theory Other television series Games Galaga Super Mario The Legend of Zelda Street Fighter Pokémon BioShock Other games Other media See also List of unusual biological names List of organisms named after famous people References Fiction Lists of etymologies
List of organisms named after works of fiction
[ "Biology" ]
313
[ "Lists of biota" ]
70,744,892
https://en.wikipedia.org/wiki/Nagstatin
Nagstatin is a strong competitive inhibitor of N-acetyl-β-D-glucosaminidase, with the molecular formula C12H17N3O6. Nagstatin is produced by the bacterium Streptomyces amakusaensis. References Further reading Nagstatin Acetamides Carboxylic acids Triols Imidazopyridines
Nagstatin
[ "Chemistry" ]
85
[ "Carboxylic acids", "Functional groups", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
70,746,075
https://en.wikipedia.org/wiki/Future%20predator
The future predator is a fictional future apex predator in the British science fiction television programme Primeval. The future predator was conceived by producers Tim Haines and Adrian Hodges and was designed by Daren Horley. Giant and flightless future descendants of bats, the predators are ruthless creatures that appear several times throughout the series. They were positively received and have been called the "Daleks of Primeval" by some commentators owing to their repeated appearances and how difficult they are to stop. Description and design The central story of Primeval revolves around time portals (called "anomalies") opening up and letting through various prehistoric creatures into the present. While making the programme, the producers concluded that anomalies should logically be able to connect the present to the future as well. The future predators are flightless macropredatory future descendants of bats about the size of lions. They walk on their knuckles and are quadrupedal, though they are also able to run and rear up bipedally at times. With their elongated arms, evolved from the wings of their ancestors, they are also able to leap and climb. Since the predators rely on advanced echolocation to hunt, their eyes have atrophied away. The design of the future predator was created by Daren Horley, the Digital Textures Lead on the series. The description Horley received to work with was that the future predator was to be a sinister-looking (but not too fanciful) quadrupedal creature which could also rear up on two legs. Horley decided that the creature would be eyeless since this would make it stranger. The predator was initially going to be a reptile, with a dinosaur-inspired head design, but Horley changed this to the more unusual final design after feedback from Tim Haines, the executive producer.
It is possible that the concept of the future predator being a bat-descendant was inspired by the night stalker, a future giant predatory flightless bat featured in Dougal Dixon's 1981 speculative evolution book After Man; Dixon himself believes the night stalker to have been the inspiration. Other presumed sources of inspiration for the creature include the creatures of the Alien film series (aspects of the design and various shots in the episodes) and the Predator film series (particularly the echolocation). Though they were maintained to have been bat descendants throughout Primeval, the design of the future predators, particularly their body plan, also strongly evokes primates. Whether the future predators are natural future descendants of bats or were bioengineered is never fully made clear in the programme; some commentators have suggested that they are, alongside bats, also perhaps intended to be linked to humans. Their origin story may also change over the course of the programme, since the future predators continue to appear no matter what changes are made in the present. Appearances The future predators make their debut in episode six (the finale) of the first series of Primeval when a family of future predators is first transported by an anomaly into the Permian period and then to the Forest of Dean in the present. The father future predator is killed in the present by Nick Cutter (Douglas Henshall) and the mother is killed in the Permian in a fight with an Inostrancevia, which also eats most of the young predators. The future predators made their return in series two, appearing in both episode six and seven (the finale). This time, future predators have been captured by Helen Cutter (Juliet Aubrey) and brought to the present, where Oliver Leek (Karl Theobald) uses neural clamps to control them.
In episode six, one of the controlled future predators nearly kills James Lester (Ben Miller), the head of the ARC (Anomaly Research Center), before it is killed by a Columbian mammoth. In episode seven, Nick Cutter disables the neural clamps, causing the predators to first attack and kill Leek, then kill Stephen Hart (James Murray), and finally turn on each other and wipe themselves out. Series three saw the future predators appear several times. They made brief appearances in both episode one (when they appear in the future and attack some soldiers) and episode four (when one is depicted being experimented on) before making a major return in episode eight. Episode eight saw main characters Danny Quinn (Jason Flemyng), Captain Becker (Ben Mansfield), Connor Temple (Andrew-Lee Potts) and Abby Maitland (Hannah Spearritt) step through an anomaly into the future, where they encounter numerous future predators in the ruins of a city. The future predators also made brief appearances in both episode nine and ten (the finale). After being absent throughout series four, the future predators made their last return in episode six (the finale) of series five, which sees an incursion of mutated future predators from a future in which the organization New Dawn has reduced the Earth to a dying wasteland. Reception The future predators have been described as an iconic creature of the series. Their ability to move and kill without being seen has been described as "almost supernatural". Their appearance in Primeval marked a turning point in the series, since they were the first non-prehistoric creature to appear in the programme. The storyline accompanying the creature's first appearance was also strikingly different, exploring temporal paradoxes and the impact changing the past could have on the present. In later series of Primeval, further future creatures would also be introduced.
The introductory episode of the future predator was ranked as the fourth best episode of the entire series by Philip Lickley of Den of Geek in 2012 and David Selby of CultFix ranked the future predator as the best creature of the series in 2013. Some commentators have referred to the future predators, on account of their repeated appearances in the series and how difficult they are to deal with, as the Daleks of Primeval. Andrew-Lee Potts, who starred in Primeval, thought the future predator was one of the standout creatures in the series. Potts also compared the predator to the Daleks, though noted that the predators could "move a hell of a lot faster", and saw it as a creature straight out of a horror film. Douglas Henshall, who also starred in the series, likewise found the design to be "terrific", particularly since he felt it could actually be a proper animal. The palaeontologist Darren Naish praised the design of the future predator in a 2012 article for successfully incorporating numerous bat-like qualities despite its weird appearance, size and lack of flight, such as how the creature runs in a similar way to a vampire bat. The future predators have been compared to the Death Angels featured in the 2018 post-apocalyptic horror film A Quiet Place, though it is not known whether the designers for that film drew on the future predators for inspiration. References Primeval (TV series) Speculative evolution Fictional monsters Television characters introduced in 2007 Fictional bats Fictional species and races Fictional blind characters
Future predator
[ "Biology" ]
1,405
[ "Biological hypotheses", "Speculative evolution", "Hypothetical life forms" ]
70,746,116
https://en.wikipedia.org/wiki/William%20Hare%20Group
William Hare Group Ltd is a UK headquartered structural steel contractor and the second largest, by turnover, in the country. It is family owned and has carried out projects in over fifty countries. Landmark works include structural steelwork for 20 Fenchurch Street and 201 Bishopsgate in London, and the Aldar Headquarters and Al Bahr Towers in Abu Dhabi. William Hare Group manufactures in the UK and United Arab Emirates. History William Hare started his eponymous enterprise in 1888. It incorporated as William Hare Ltd in 1945, and reorganised as William Hare Group Ltd in 1998. The firm began as a Bolton based steel erector and in 1945 diversified into steel fabrication. During the 1960s and 1970s William Hare Ltd commenced producing fabricated steel for overseas petrochemical projects, in 1977 receiving a Queen's Award for Export. The present Bury fabrication premises were acquired in 1977 with the purchase of California Engineering Company Ltd. In 1992 the business opened an office in Singapore, followed in 2002 by an engineering support office in Chennai. David Hodgkiss, grandson of the founder and Group Chief Executive, died in 2020. He was succeeded as Group Chief Executive by his sister Susan Hodgkiss. Acquisitions and new businesses Locations William Hare Group has steel fabrication facilities at Bury, Wetherby, Scarborough, Wigan, Newport, Rotherham and in the United Arab Emirates. Coatings are applied at a Grantham site. The Derby plant manufactures cold-formed steel components and engineered timber / hybrid structures. The Group operates sales and engineering support offices in London, Chennai, Singapore, Seoul and Porto. A majority of the firm's staff are employed at overseas subsidiaries and branches. Controversies Unfair dismissal In a 2010 judgement, William Hare was found to have unfairly dismissed an employee. Judge Brain, however, reduced the financial award to take account of the employee's conduct before dismissal.
Trinity Walk In 2008, Shepherd Construction contracted with William Hare Group to provide structural steelwork for the Trinity Walk shopping centre in Wakefield. Shepherd Construction subsequently sought, under a pay when paid clause, to withhold payment in the sum of £996,683.35. The ultimate client had gone into Administration. In 2009, Mr Justice Coulson of the Technology and Construction Court ruled against the payment being withheld. That judgement was upheld at the Court of Appeal in 2010. Shepherd Construction had used an obsolete form of words defining insolvency in its contract with William Hare Group. Fatal fall The Health and Safety Executive fined William Hare Group £75,000 plus £9,000 costs in 2003. An employee fell and died during 1998 extension works to the London Imperial War Museum. William Hare Group pleaded guilty to a breach of section 2(1) of the Health and Safety at Work Act 1974. The firm was criticised for its vague method statement, and for leaving workers to decide basic safety precautions. The unharnessed victim fell 13m from a precarious platform. See also References External links Construction and civil engineering companies of the United Kingdom Steel companies of the United Kingdom Structural steel Companies established in the 19th century 1888 establishments in England Organisations based in Bury, Greater Manchester Privately held companies of the United Kingdom Family-owned companies of the United Kingdom Family-owned companies of England Privately held companies of England
William Hare Group
[ "Engineering" ]
661
[ "Structural engineering", "Structural steel" ]
70,746,711
https://en.wikipedia.org/wiki/Su-Huai%20Wei
Su-Huai Wei () is a Chinese computational physicist. Wei earned a Bachelor of Science degree in physics from Fudan University in 1981, and moved to the United States to pursue graduate study in the subject. After he completed his doctorate at the College of William & Mary in 1985, Wei became a postdoctoral researcher at the National Renewable Energy Laboratory. He remained on the NREL staff until returning to China for a post at the Computational Science Research Center. In 1998, while affiliated with NREL, Wei was elected a fellow of the American Physical Society "[f]or contributions to the understanding of electronic structures and stabilities of compounds, alloys, interfaces, superlattices and impurities using first-principles calculations and for development of the methods for such calculations." References Computational physicists Living people Year of birth missing (living people) Fellows of the American Physical Society College of William & Mary alumni Chinese expatriates in the United States 21st-century Chinese physicists 20th-century Chinese physicists
Su-Huai Wei
[ "Physics" ]
207
[ "Computational physicists", "Computational physics" ]
70,747,050
https://en.wikipedia.org/wiki/Autosomal%20dominant%20cerebellar%20ataxia%2C%20deafness%2C%20and%20narcolepsy
Autosomal dominant cerebellar ataxia, deafness, and narcolepsy (ADCADN) is a rare progressive genetic disorder that primarily affects the nervous system and is characterized by sensorineural hearing loss, narcolepsy with cataplexy, and dementia later in life. People with this disorder usually start showing symptoms in early to mid adulthood. It is a type of autosomal dominant cerebellar ataxia. Presentation Usually, people with this disorder have ataxia, mild–moderate sensorineural hearing loss, narcolepsy, and cataplexy. These symptoms typically begin when an affected person is about 30 years old. A bit later in life, people with ADCADN start showing a decline in cognitive and executive function known as dementia. Degeneration of the optic nerves, cataracts, sensory neuropathy, lymphedema of the arms and legs, urinary incontinence, depression, uncontrollable and inappropriate laughing or crying (e.g. sudden uncontrollable laughing during a funeral), and psychosis are features that typically accompany it. People with this disorder only live to be 40–50 years old. Other features of the disorder that may or may not occur in all patients include diabetes mellitus, spasticity, nystagmus, tremors, dilatation of the right ventricle, cerebral atrophy, and other generalized brain abnormalities. Complications Genetics This condition is caused by mutations in exons 20–21 of the DNMT1 gene, located on chromosome 19. These mutations are inherited in an autosomal dominant manner, meaning that a single copy of the mutation is sufficient for someone to show symptoms of the condition. This can occur in two scenarios: the mutation can be inherited, or it can arise as a spontaneous error. This gene plays a role in the production of an enzyme called DNA methyltransferase 1, which is involved in DNA methylation. This enzyme is essential for the regulation of neuron maturation, differentiation, migration, and most importantly, survival.
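The autosomal dominant pattern described above can be illustrated with a toy Punnett-square enumeration. The allele labels `D`/`d`, the helper name, and the 50% transmission figure for a heterozygous parent are standard genetics illustrations, not details from this article:

```python
from itertools import product

def affected_fraction(parent1, parent2, dominant="D"):
    """Fraction of offspring expected to show a dominant phenotype,
    by enumerating the four equally likely allele pairings."""
    combos = list(product(parent1, parent2))
    return sum(dominant in pair for pair in combos) / len(combos)

# Heterozygous affected parent (Dd) x unaffected parent (dd):
# half of the offspring inherit the mutant copy and are affected.
print(affected_fraction("Dd", "dd"))  # 0.5
```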
The mutations involved in ADCADN alter a region of the enzyme that mediates DNA methylation, distorting that process. This affects the expression of various genes and disrupts neuron maintenance, leading to the characteristic psychiatric and cognitive symptoms of this condition. Diagnosis This condition can be diagnosed by using methods such as whole exome sequencing and examination of the patient's symptoms. Treatment Prevalence More than 80 cases from families around the world have been described in the medical literature. The following list comprises all countries of origin (according to OrphaNet): Sweden United States Italy Brazil China New Zealand Belgium United Kingdom Canada Germany Taiwan History This condition was first described in 1995 by Melberg et al., who reported five members of a four-generation Swedish family in which cerebellar ataxia and sensorineural deafness presented as an autosomal dominant trait; four of them had narcolepsy and two had diabetes mellitus. The oldest members had psychiatric symptoms, neurological anomalies, and optic atrophy, showing the progressive nature of the condition. References Genetic diseases and disorders Nervous system Hearing loss Narcolepsy Dementia
Autosomal dominant cerebellar ataxia, deafness, and narcolepsy
[ "Biology" ]
678
[ "Organ systems", "Nervous system" ]
70,748,897
https://en.wikipedia.org/wiki/Muhammad%27s%20eclipse
Muhammad's eclipse was an annular solar eclipse that occurred on January 27, 632, and was visible across parts of East Africa, North Africa, the Middle East, Central Asia, South Asia, the Far East, and Siberia. This eclipse is especially relevant to the history of Islam as it is identified as the eclipse that occurred during the life of the final Islamic prophet, Muhammad, upon the death of his youngest son, Ibrahim. It is exclusively documented in Islamic sīrah (biographies of Muhammad) and hadith literature. A solar eclipse occurs when the Moon passes between the Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. An annular solar eclipse occurs when the apparent diameter of the Moon is smaller than that of the Sun, presenting as the Moon blocking most, but not all, of the Sun's light and causing the Sun to look like an annulus (ring). This eclipse had a magnitude of 0.9836. Within Islamic sources The occurrence of the eclipse during the life of Islamic prophet Muhammad earned it the epithet 'Muhammad's eclipse'. The eclipse is well-documented in early Islamic sources, but no references to it have been found elsewhere. The eclipse occurred around the time of the death of Muhammad's youngest son, Ibrahim, who was 18 months old. Rumours of God's personal condolence quickly arose. It was also believed in pre-Islamic Arabia that eclipses occurred at the death of a great man. Muhammad denied the rumours and rejected the pre-Islamic beliefs. Eclipse prayer and sermon Muslims believe the eclipse prayer performed during solar and lunar eclipses was first performed by Muhammad during this eclipse, thereafter becoming a sunnah. A hadith narrated by Abd Allah ibn Amr ibn al-As in Sunan Abi Dawud asserts that Muhammad performed the prayer from when the eclipse was observed until the sun was clear. 
Narrations by Jabir ibn Abd Allah, Asma bint Abi Bakr, and Abu Musa al-Ash'ari in Sunan an-Nasa'i, Sahih Muslim, and Sahih al-Bukhari, respectively, also describe a long prayer with Muhammad having stood, bowed, and prostrated for long periods of time. Muhammad delivered a khutbah (sermon) following the prayer, saying: See also Splitting of the Moon Sunnah prayer – Optional ritual prayers performed by Muslims, one of which is the eclipse prayer. Assyrian eclipse – Another historical solar eclipse that occurred in the year 763 BC; mentioned in the Bible. References 632 1 27 632 Life of Muhammad Moon myths
Muhammad's eclipse
[ "Astronomy" ]
538
[ "Astronomical myths", "Moon myths" ]
70,749,968
https://en.wikipedia.org/wiki/Bis%282-methoxyethyl%29%20phthalate
Bis(2-methoxyethyl) phthalate, also commonly called di(2-methoxyethyl) phthalate (DMEP), is a phthalate ester bearing 2-methoxyethanol groups. Historically used as a plasticizer in cellulose acetate plastics, it is now largely banned owing to concerns over its effects on human health. References Phthalate esters Ethers Methoxy compounds
Bis(2-methoxyethyl) phthalate
[ "Chemistry" ]
99
[ "Organic compounds", "Functional groups", "Ethers" ]
70,750,448
https://en.wikipedia.org/wiki/Delphinella%20strobiligena
Delphinella strobiligena is a species of fungus in the family Dothioraceae. References External links Fungi described in 1962 Fungi of Croatia Fungi of Spain Dothideales Fungus species
Delphinella strobiligena
[ "Biology" ]
41
[ "Fungi", "Fungus species" ]
70,750,560
https://en.wikipedia.org/wiki/Lasiobolus%20cuniculi
Lasiobolus cuniculi is a species of coprophilous fungus in the family Ascodesmidaceae. It is known to grow on the dung of sheep, goats and donkeys. References External links Fungi described in 1934 Fungi of Greece Pezizales Fungus species
Lasiobolus cuniculi
[ "Biology" ]
59
[ "Fungi", "Fungus species" ]
70,750,641
https://en.wikipedia.org/wiki/Podospora%20decipiens
Podospora decipiens is a species of coprophilous fungus in the family Podosporaceae. It is especially common on the islands around Greece, where it grows on the dung of sheep, goats and donkeys. References External links Fungi described in 1883 Fungi of Greece Sordariales Fungus species
Podospora decipiens
[ "Biology" ]
66
[ "Fungi", "Fungus species" ]
70,750,663
https://en.wikipedia.org/wiki/Podospora%20macrodecipiens
Podospora macrodecipiens is a species of coprophilous fungus in the family Podosporaceae. It was discovered in Antiparos in Greece, where it was found growing on sheep dung. References External links Fungi described in 2008 Fungi of Greece Sordariales Fungus species
Podospora macrodecipiens
[ "Biology" ]
63
[ "Fungi", "Fungus species" ]
70,750,685
https://en.wikipedia.org/wiki/Podospora%20pleiospora
Podospora pleiospora is a species of coprophilous fungus in the family Podosporaceae. It is especially common on the islands around Greece, where it grows on the dung of goats and sheep. References External links Fungi described in 1883 Fungi of Greece Sordariales Fungus species
Podospora pleiospora
[ "Biology" ]
65
[ "Fungi", "Fungus species" ]
70,750,716
https://en.wikipedia.org/wiki/Schizothecium%20inaequale
Schizothecium inaequale is a species of coprophilous fungus in the family Lasiosphaeriaceae. It is known to grow in the dung of goats. References External links Fungi described in 1972 Fungi of Greece Sordariales Fungus species
Schizothecium inaequale
[ "Biology" ]
57
[ "Fungi", "Fungus species" ]
70,750,727
https://en.wikipedia.org/wiki/Schizothecium%20miniglutinans
Schizothecium miniglutinans is a species of coprophilous fungus in the family Lasiosphaeriaceae. It is known to grow in the dung of goats and possibly on that of sheep. References External links Fungi described in 1972 Fungi of Greece Sordariales Fungus species
Schizothecium miniglutinans
[ "Biology" ]
64
[ "Fungi", "Fungus species" ]
70,750,740
https://en.wikipedia.org/wiki/Schizothecium%20tetrasporum
Schizothecium tetrasporum is a species of coprophilous fungus in the family Lasiosphaeriaceae. It is known to grow in the dung of goats and rabbits. References External links Fungi described in 1972 Fungi of Greece Fungi of Iceland Sordariales Fungus species
Schizothecium tetrasporum
[ "Biology" ]
62
[ "Fungi", "Fungus species" ]
70,750,754
https://en.wikipedia.org/wiki/Schizothecium%20vesticola
Schizothecium vesticola is a species of coprophilous fungus in the family Lasiosphaeriaceae. In Greece, it is known to grow on the dung of goats, and possibly on that of sheep and donkeys. In Iceland, it has been reported from the dung of sheep, geese and horses. References External links Fungi described in 1972 Fungi of Greece Fungi of Iceland Sordariales Fungus species
Schizothecium vesticola
[ "Biology" ]
90
[ "Fungi", "Fungus species" ]
70,751,123
https://en.wikipedia.org/wiki/Gymnopus%20herinkii
Gymnopus herinkii is a rare species of mushroom-forming fungus in the family Omphalotaceae. It was described in 1998 by mycologists Vladimír Antonín and Machiel Noordeloos. The type specimen was from a collection made in the Lenora region of Bohemia, made by Czech mycologists Jiří Kubička and Josef Herink in 1952; the latter is acknowledged in the species epithet. Marcel Bon proposed a transfer to the genus Collybia in 1998. Characteristic features of Gymnopus herinkii include the distantly-spaced gills on the underside of the hygrophanous, brown cap, and an onion-like odour. Microscopic characteristics include the lack of cheilocystidia, and the lack of a dryophila-structure in the pileipellis. The fungus grows on fallen leaves or humus. See also List of Gymnopus species References Omphalotaceae Fungi described in 1996 Fungi of Europe Taxa named by Machiel Noordeloos Fungus species
Gymnopus herinkii
[ "Biology" ]
213
[ "Fungi", "Fungus species" ]
70,751,448
https://en.wikipedia.org/wiki/Orca%20types%20and%20populations
Orcas or killer whales have a cosmopolitan distribution and several distinct populations or types have been documented or suggested. Three to five types of orcas may be distinct enough to be considered different races, subspecies, or possibly even species (see Species problem). The IUCN reported in 2008, "The taxonomy of this genus is clearly in need of review, and it is likely that O. orca will be split into a number of different species or at least subspecies over the next few years." However, large variation in the ecological distinctiveness of different orca groups complicates simple differentiation into types. Mammal-eating orcas in different regions were long thought likely to be closely related, but genetic testing has refuted this hypothesis. Northern waters North Pacific Research off the west coast of Canada and the United States in the 1970s and 1980s identified the following three types: Resident: These are the most commonly sighted of the three populations in the coastal waters of the northeast Pacific. Residents' diets consist primarily of fish and sometimes squid, and they live in complex and cohesive family groups called pods. Female residents characteristically have rounded dorsal fin tips that terminate in a sharp corner. The grey or white area around the dorsal fin, known as the "saddle patch", often contains some black colouring in residents. They visit the same areas consistently. British Columbia and Washington resident populations are amongst the most intensively studied marine mammals anywhere in the world. Resident orcas can be divided into at least three distinct communities: northern, southern and southern Alaskan. Southern Alaskan resident orcas are distributed from southeastern Alaska to the Kodiak Archipelago and number over 700 individuals. These whales consist of two interbreeding clans, distinguished by acoustic calls, whose ranges overlap.
The northern resident community lives in coastal and inland waters from southeastern Alaska to Vancouver Island. It consists of three clans and 16 pods and numbers over 300 orcas in total. The southern resident community generally inhabits the inland waters of southern British Columbia and Washington, but can be found in the outer waters off Vancouver Island, Washington, Oregon and California. It consists of one clan and three pods, numbers fewer than 80 individuals, and is listed as endangered. Transient or Bigg's: The diets of these orcas consist almost exclusively of marine mammals. They live in the same areas as residents, but the two avoid each other. Transients generally travel in small groups, usually of two to six animals, but on rare occasions pods merge into groups of 200. They have less persistent family bonds than residents. Transients vocalize in less variable and less complex dialects. Female transients are characterized by more triangular and pointed dorsal fins than those of residents. The saddle patches of transients are solid and uniformly grey (in contrast to residents' saddle patches, which often have more black colouring). Transients roam widely along the coast; some individuals have been sighted in both southern Alaska and California. Transients are also referred to as Bigg's orca in honour of cetologist Michael Bigg. The term has become increasingly common and may eventually replace the transient label. The transient ecotype is estimated to have diverged 700,000 years ago. There are at least three different "stocks" of transients off North America: the AT1 stock, which occurs from Prince William Sound to Kenai Fjords; the Gulf of Alaska/Aleutian Islands/Bering Sea (GOA/AI/BS) stock; and the west coast stock, which ranges from southeast Alaska to California. AT1 is considered a depleted stock; it was affected by the Exxon Valdez oil spill and declined from 22 individuals to eight between 1989 and 2004.
The GOA/AI/BS stock may number around 500 whales while the west coast transients number over 320 orcas with over 200 along southeast Alaska, British Columbia and Washington and over 100 orcas off California. California transients do not appear to intermingle much with those further north and west coast transients may be divided into sub-communities. Offshore: A third population of orcas in the northeast Pacific was discovered in 1988, when a humpback whale researcher observed them in open water. As their name suggests, they travel far from shore and feed primarily on schooling fish. However, because they have large, scarred and nicked dorsal fins resembling those of mammal-hunting transients, it may be that they also eat mammals and sharks. They have mostly been encountered off the west coast of Vancouver Island and near Haida Gwaii. Offshores typically congregate in groups of 20–75, with occasional sightings of larger groups of up to 200. Little is known about their habits, but they are genetically distinct from residents and transients. Offshores appear to be smaller than the others, and females are characterized by dorsal fin tips that are continuously rounded. They have been spotted in Monterey Bay in California. Separate fish-eating and mammal-eating orca communities also exist off the coast of the Russian Far East and Hokkaido, Japan. Russian orcas are commonly seen around the Kamchatka Peninsula and Commander Islands. Over 2,000 individual resident-like orcas and 130 transient-like orcas have been identified off Russia. At least 195 individual orcas have been cataloged in the eastern tropical Pacific, ranging from Baja California and the Gulf of California in the north to the northwest coast of South America in the south and west towards Hawaii. Orcas appear to regularly occur off the Galápagos Islands. Orcas sighted in Hawaiian waters may belong to a greater population in the central Pacific. 
North Atlantic and adjacent At least 15,000 whales are estimated to inhabit the North Atlantic. In the Northeast Atlantic, two orca ecotypes have been proposed. Type 1 orcas consist of seven haplotypes and include herring-eating orcas of Norway and Iceland and mackerel-eating orcas of the North Sea, as well as seal-eating orcas off Norway. Type 2 orcas consist of two haplotypes, and mainly feed on baleen whales. These two types have since been dropped from the classification, because of a lack of samples for type 2 (five individuals) and doubts about how representative it was of a potential ecotype. Instead, recent studies using dietary tracers such as fatty acids and organic contaminants have shown how varied the diet of North Atlantic orcas is. For example, orcas in the Eastern North Atlantic (Norway, Faroe Islands, Iceland) mainly feed on fish, specifically herring. Meanwhile, those in the Central North Atlantic (Greenland) prefer to consume seals such as ringed, harp, hooded, and bearded seals. Finally, orcas in the Western North Atlantic (Eastern Canadian Arctic and Eastern Canada) tend to prey on other whale species, such as belugas and narwhals in the Arctic and baleen whales and porpoises in Eastern Canada. In the Mediterranean Sea, orcas are considered "visitors", likely from the North Atlantic, and sightings become less frequent further east. However, a small year-round population exists in the Strait of Gibraltar, which numbered around 39 in 2011. Beginning in 2020, this population started ramming vessels and damaging their rudders. Distinct populations may also exist off the west coast of tropical Africa, which have generalized diets. The northwest Atlantic population is found year-round around Labrador and Newfoundland, while some individuals seasonally travel to the waters of the eastern Canadian Arctic when the ice has melted. Sightings of these whales have been documented as far south as Cape Cod and Long Island. 
This population is possibly continuous with orcas sighted off Greenland. Orcas are sighted year-round in the Caribbean Sea, and an estimated 267 (as of 2020) are documented in the northern Gulf of Mexico. North Indian Ocean Over 50 individual whales have been cataloged in the northern Indian Ocean, including two individuals that were sighted in the Persian Gulf in 2008 and off Sri Lanka in 2015. Southern waters A small population of orcas seasonally visits the northern point of the Valdes Peninsula on the east coast of Argentina, where they hunt sea lions and elephant seals on the shore, temporarily stranding themselves. Similar behaviors occur among orcas off the Crozet Islands, which breach to grab elephant seals. These orcas also prey on Patagonian toothfish. 65 individuals have been documented in this area. Off South Africa, a distinctive "flat-tooth" morphotype exists and preys on sharks. A pair of male orcas, Port and Starboard, have become well known for hunting great whites and other sharks off the South African coast. Orcas occur throughout the waters of Australia, New Zealand and Papua New Guinea. They are sighted year round in New Zealand waters, while off Australia, they are seasonally concentrated off the northwest, in the inshore waters of Ningaloo Reef, and the southwest, at the Bremer region. Genetic evidence shows that the orcas of New Zealand, northwest Australia, and southwest Australia form three distinct populations. New Zealand orcas mainly prey on sharks and rays. Antarctic Around 25,000 orcas are estimated around the Antarctic, and four types have been documented. Two dwarf species, named Orcinus nanus and Orcinus glacialis, were described during the 1980s by Soviet researchers, but most cetacean researchers are skeptical about their status, and linking these directly to the types described below is difficult. 
Type A or Antarctic orcas look like a "typical" orca, a large, black-and-white form with a medium-sized white eye patch, living in open water and feeding mostly on minke whales. Type B1 or pack ice orcas are smaller than type A. It has a large white eye patch. Most of the dark parts of its body are medium grey instead of black, although it has a dark grey patch called a "dorsal cape" stretching back from its forehead to just behind its dorsal fin. The white areas are stained slightly yellow. It feeds mostly on seals. Type B1 orca are abundant between Adelaide Island and the mainland Antarctic peninsula. Type B2 or Gerlache orcas are morphologically similar to Type B1, but smaller. This ecotype has been recorded feeding on penguins and seals, and is often found in the Gerlache Strait. Type C or Ross Sea orcas are the smallest ecotype and live in larger groups than the others. Its eye patch is distinctively slanted forwards, rather than parallel to the body axis. Like type B, it is primarily white and medium grey, with a dark grey dorsal cape and yellow-tinged patches. Its only observed prey is the Antarctic cod. Type D or Sub-Antarctic orcas were first identified based on photographs of a 1955 mass stranding in New Zealand and six at-sea sightings since 2004. The first video record of this type was made in 2014 between the Kerguelen and Crozet Islands, and again in 2017 off the coast of Cape Horn, Chile. It is recognizable by its small white eye patch, narrower and shorter than usual dorsal fin, bulbous head (similar to a pilot whale), and smaller teeth. Its geographic range appears to be circumglobal in sub-Antarctic waters between latitudes 40°S and 60°S. Although its diet is not determined, it likely includes fish, as determined by photographs around longline vessels, where Type D orcas appeared to be preying on Patagonian toothfish. Types B and C live close to the ice, and diatoms in these waters may be responsible for the yellowish colouring of both types. 
Mitochondrial DNA sequences support the theory that these are recently diverged separate species. More recently, complete mitochondrial sequencing indicates that types B and C should be recognized as distinct species, as should the North Pacific transients, leaving the others as subspecies pending additional data. Advanced methods that sequenced the entire mitochondrial genome revealed systematic differences in DNA between different populations. A 2019 study of Type D orcas also found them to be distinct from other populations and possibly even a unique species. References Sources Biogeography Orcas Population ecology Population genetics
Orca types and populations
[ "Biology" ]
2,458
[ "Biogeography" ]
70,751,853
https://en.wikipedia.org/wiki/Nivasorexant
Nivasorexant (developmental code name ACT-539313) is an orexin antagonist medication which is under development for the treatment of binge eating disorder and was previously under development for the treatment of anxiety disorders. It is an orally active small-molecule compound with an elimination half-life of 3.3 to 6.5 hours and acts as a selective orexin OX1 receptor antagonist (1-SORA). As of May 2022, the drug is in phase 2 clinical trials for binge eating disorder. Following negative efficacy results of a phase 2 trial of nivasorexant for binge eating disorder, Idorsia (the developer of nivasorexant) signaled in May 2022 that it would not pursue further development of the drug for this indication. References Experimental psychiatric drugs Ketones Morpholines Orexin antagonists Triazoles
Nivasorexant
[ "Chemistry" ]
183
[ "Ketones", "Functional groups" ]
72,231,866
https://en.wikipedia.org/wiki/Huawei%20Mate%2050
The Huawei Mate 50 is a series of EMUI-based smartphones manufactured by Huawei. They were announced on September 6, 2022 and released on September 28, 2022. References External links Huawei smartphones Mobile phones introduced in 2022 Flagship smartphones Android (operating system) devices Mobile phones with multiple rear cameras Mobile phones with 4K video recording
Huawei Mate 50
[ "Technology" ]
74
[ "Mobile technology stubs", "Flagship smartphones", "Crossover devices", "Mobile phone stubs", "Phablets", "Discontinued flagship smartphones" ]
72,231,981
https://en.wikipedia.org/wiki/Smart%20Energy
Smart Energy is an international, peer-reviewed open-access multi-disciplinary scientific journal focused on the energy transition to upcoming smart renewable energy systems. The journal was established in 2021 and is published by Elsevier. The editor-in-chief is Brian Vad Mathiesen (Aalborg University). The journal emphasizes that submissions advancing the UN's Sustainable Development Goals, specifically "Affordable and clean energy", are welcome. Abstracting and indexing The journal is abstracted and indexed in Scopus, and the Directory of Open Access Journals. References External links Energy and fuel journals Academic journals established in 2015 English-language journals Elsevier academic journals Creative Commons Attribution-licensed journals Continuous journals
Smart Energy
[ "Environmental_science" ]
142
[ "Environmental science journals", "Energy and fuel journals" ]
72,233,644
https://en.wikipedia.org/wiki/Hanseniaspora%20gamundiae
Hanseniaspora gamundiae is a species of yeast in the family Saccharomycodaceae. It has been isolated from the fruiting bodies of Cyttaria hariotii mushrooms in Patagonia and is likely responsible for the early stages of fermentation of an alcoholic chicha produced from the mushrooms. Taxonomy Samples of H. gamundiae were first isolated from samples taken from the stromata of edible Cyttaria hariotii mushrooms growing on southern beech trees in Patagonia in Spring 2007. Genetic testing revealed that the yeast was a previously undescribed species and it was given the specific epithet "gamundiae" in honor of Dr. Irma Gamundi in Argentina in recognition for her taxonomic work with fungi and particularly with the Cyttaria genus. Genetic sequencing shows that the species is closely related to Hanseniaspora taiwanica and Hanseniaspora occidentalis. Description Microscopic examination of the yeast cells in YM liquid medium after 48 hours at 25°C reveals cells that are 4.3 to 15.7 μm by 2.4 to 4.7 μm in size, apiculate, ovoid to elongate, appearing singly or in pairs. Reproduction is by budding, which occurs at both poles of the cell. In broth culture, sediment is present, and after one month a very thin ring is formed. Colonies that are grown on malt agar for one month at 25°C appear cream-colored, butyrous, glossy, and smooth. Growth is flat to slightly raised at the center, with an entire to slightly undulating margin. The yeast forms poorly-developed pseudohyphae on cornmeal agar. The yeast has been observed to form one to two spherical and warty ascospores when grown for at least two weeks on 5% Difco malt extract agar. The yeast can ferment glucose and can weakly ferment sucrose, but not galactose, cellobiose, maltose, or lactose. The yeast can assimilate glucose, sucrose, cellobiose, arbutin, and salicin. It has a positive growth rate at 30°C, but there is no growth at 35°C. 
It can grow on agar media containing 10% sodium chloride but not on media with 16% sodium chloride. Growth on 50% glucose-yeast extract agar, 1% acetic acid, and 0.01% cycloheximide is absent. Ecology The species was collected during a study of yeasts that were present on species of Cyttaria mushrooms. Mature mushrooms contain up to 10.2% fructose, glucose, and sucrose, a composition that resembles grape juice, a habitat that is well known to contain Hanseniaspora species. The Mapuche people of Patagonia consumed the Cyttaria in several ways, including in the production of the alcoholic beverage chicha. They used the mushrooms by collecting the mature stromata and either squeezing them to obtain the juice, or by leaving the stromata in cooled boiled water for a few days, after which it spontaneously ferments. Due to the composition of yeasts found in Cyttaria, it is believed that Hanseniaspora gamundiae plays a significant role in the early stages of the fermentation of the beverage, followed by naturally occurring Saccharomyces cerevisiae, Saccharomyces uvarum, or Saccharomyces eubayanus species. References Saccharomycetes Yeasts Fungi described in 2019 Fungus species
Hanseniaspora gamundiae
[ "Biology" ]
760
[ "Yeasts", "Fungi", "Fungus species" ]
72,235,151
https://en.wikipedia.org/wiki/Phylogenetic%20reconciliation
In phylogenetics, reconciliation is an approach to connect the history of two or more coevolving biological entities. The general idea of reconciliation is that a phylogenetic tree representing the evolution of an entity (e.g. homologous genes or symbionts) can be drawn within another phylogenetic tree representing an encompassing entity (respectively, species, hosts) to reveal their interdependence and the evolutionary events that have marked their shared history. The development of reconciliation approaches started in the 1980s, mainly to depict the coevolution of a gene and a genome, and of a host and a symbiont, which can be mutualist, commensalist or parasitic. It has also been used, for example, to detect horizontal gene transfer, or to understand the dynamics of genome evolution. Phylogenetic reconciliation can account for a diversity of evolutionary trajectories of what makes life's history, intertwined with each other at all scales that can be considered, from molecules to populations or cultures. A recent illustration of the importance of interactions between levels of organization is the holobiont concept, where a macro-organism is seen as a complex partnership of diverse species. Modeling the evolution of such complex entities is one of the challenging and exciting directions of current research on reconciliation. Phylogenetic trees as nested structures Phylogenetic trees are intertwined at all levels of organization, integrating conflicts and dependencies within and between levels. Macro-organism populations migrate between continents, their microbe symbionts switch between populations, the genes of their symbionts transfer between microbe species, and domains are exchanged between genes. This list of organization levels is not representative or exhaustive, but gives a view of levels where reconciliation methods have been used. As a generic method, reconciliation could take into account numerous other levels. 
For instance, it could consider the syntenic organization of genes, the interacting history of transposable elements and species, or the evolution of a protein complex across species. The scale of evolutionary events considered can range from population-level events, such as geographical diversification, down to nucleotide-level events inside genes, passing through chromosome-level events inside genomes such as whole-genome duplication. Phylogenies have been used for representing the diversification of life at many levels of organization: macro-organisms, their cells throughout development, micro-organisms through marker genes, chromosomes, proteins, protein domains, and can also be helpful to understand the evolution of human culture elements such as languages or fairy tales. At each of these levels, phylogenetic trees describe different stories made of specific diversification events, which may or may not be shared among levels. Yet because they are structurally nested (similar to matryoshka dolls) or functionally dependent, the evolution at a particular level is bound to that at other levels. Phylogenetic reconciliation is the identification of the links between levels through the comparison of at least two associated trees. Originally developed for two trees, reconciliation has recently been extended to more than two levels (see section Explicit modeling of three or more levels). As such, reconciliation provides evolutionary scenarios that reveal conflict and cooperation among evolving entities. These links may be unintuitive: for instance, genes present in the same genome may show uncorrelated evolutionary histories, while some genes present in the genome of a symbiont may show a strong coevolution signal with the host phylogeny. Hence, reconciliation can be a useful tool to understand the constraints and evolutionary strategies underlying the assemblage that forms a holobiont. 
Because all levels essentially deal with the same object, a phylogenetic tree, the same models of reconciliation—in particular those based on duplication-transfer-loss events, which are central to this article—can be transposed, with slight modifications, to any pair of connected levels: an "inner", "lower", or "associate" entity (e.g. gene, symbiont species, population) evolves inside an "upper", or "host" one (respectively species, host, or geographical area). The upper and lower entities are partially bound to the same history, leading to similarities in their phylogenetic trees, but the associations can change over time, become more or less strict or switch to other partners. History The principle of phylogenetic reconciliation was introduced in 1979 to account for differences between genes and species-level phylogenies. In a parsimonious setting, two evolutionary events, gene duplication and gene loss were invoked to explain the discrepancies between a gene tree and a species tree. It also described a score on gene trees knowing the species tree and an aligned sequence by using the number of gene duplication, loss, and nucleotide replacement for the evolution of the aligned sequence, an approach still central today with new models of reconciliation and phylogeny inference. The term reconciliation has been used by Wayne Maddison in 1997, as a reverse concept of "phylogenetic discord" resulting from gene level evolutionary events. Reconciliation was then developed jointly for the coevolution of host and symbiont and the geographic diversification of species. In both settings, it was important to model a horizontal event that implied parallel branches of the host tree: host switch for host and symbiont and species dispersion from one area to another in biogeography. Unlike for genes and genomes, the coevolution of host and symbiont and the explanation of species diversification by geography are not always the null hypothesis. 
A visual depiction of the two phylogenies in a tanglegram can help assess such coevolution, although it has no statistical obvious interpretation. Character methods, such as Brooks Parsimony Analysis, were proposed to test coevolution and reconstruct scenarios of coevolution. In these methods, one of the trees is forgotten except for its leaves, which are then used as a character evolving on the second tree. First models for reconciliation, taking explicitly into account the two topologies and using a mechanistic event-based approach, were proposed for host and symbiont and biogeography. Debates followed, as the methods were not yet completely sound but integrated useful information in a new framework. Costs for each event and a dynamic programming technique considering all pairs of host and symbiont nodes were then introduced into a host and symbiont approach, both of which still underlie most of the current reconciliation methods for host and symbiont as well as for species and genes. Reconciliation returned to the framework it was introduced in, gene and species. After character models were considered for horizontal gene transfer, a new reconciliation model, following and improving the dynamic programming approach presented for host and symbiont, effectively introduced horizontal gene transfer to gene and species reconciliation on top of the duplication and loss model. The progressive development of phylogenetic reconciliation was thus possible through exchanges between multiple research communities studying phylogenies at the host and symbiont, gene and species, or biogeography levels. This story and its modern developments have been reviewed several times, generally focusing on specific pairs of levels, with a few exceptions. New developments start to bring the different frameworks together with new integrative models. 
Pocket gophers and their chewing lice: a classical example Pocket gophers (Geomyidae) and their chewing lice (Trichodectidae) form a well-studied system of host and symbiont coevolution. The phylogeny of host and symbiont and the matching of the leaves of their trees are depicted on the left. For the host, O. stands for Orthogeomys, G. for Geomys and T. for Thomomys; for the symbiont, G. stands for Geomydoecus and T. for Thomoydoecus. Reconciling the two trees means giving a scenario with evolutionary events and matching on the ancestral nodes depicting the coevolution of the two trees. The events considered in this system are the events of the DTL model: duplication, transfer (or host switch), loss, and cospeciation, the null event of coevolution. Two scenarios were proposed in two studies, using two different frameworks which could be deemed as pre-dynamic programming DTL reconciliation. In modern DTL reconciliation frameworks, costs are assigned to events. The two scenarios were then shown to correspond to most parsimonious reconciliations with different cost assignments. Scenario A uses 6 cospeciations, 2 duplications, 3 losses and 2 host switches to reconcile the two trees, while scenario B uses 5 cospeciations, 3 duplications, 3 losses and 2 host switches. The cost of a scenario is the sum of the costs of its events. For instance, with a cost of 0 for cospeciation, 2 for duplication, 1 for loss and 3 for host switch, scenario A has a cost of 2×2 + 3×1 + 2×3 = 13 and scenario B a cost of 3×2 + 3×1 + 2×3 = 15, and so according to the parsimony principle, scenario A would be deemed more likely (scenario A stays more likely as long as the cost of cospeciation is less than the cost of duplication). 
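The cost comparison above can be reproduced in a few lines of Python. This is a minimal sketch of ours; the event counts and event costs are those quoted in the text, and the function name is an assumption.

```python
# Event costs quoted in the text: cospeciation 0, duplication 2,
# loss 1, host switch 3.
costs = {"cospeciation": 0, "duplication": 2, "loss": 1, "host_switch": 3}

def scenario_cost(event_counts):
    """Cost of a reconciliation scenario: sum of count x cost per event type."""
    return sum(n * costs[event] for event, n in event_counts.items())

# Scenario A: 6 cospeciations, 2 duplications, 3 losses, 2 host switches.
# Scenario B: 5 cospeciations, 3 duplications, 3 losses, 2 host switches.
scenario_a = {"cospeciation": 6, "duplication": 2, "loss": 3, "host_switch": 2}
scenario_b = {"cospeciation": 5, "duplication": 3, "loss": 3, "host_switch": 2}

print(scenario_cost(scenario_a))  # 13
print(scenario_cost(scenario_b))  # 15
```

Raising the cospeciation cost to equal the duplication cost makes the two scenarios tie, illustrating how the inferred scenario depends on the chosen cost assignment.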
Development of Phylogenetic Reconciliation Models Models and methods used today in phylogeny are the result of several decades of research, made progressively complex, driven by the nature of the data and the quest for biological realism on one side, and the limits and progresses of mathematical and algorithmic methods on the other. Pre-reconciliation models: characters on trees Character methods can be used when there is no tree available for one of the levels, but only values for a character at the leaves of a phylogenetic tree for the other level. A model defines the events of character value change, their rate, probabilities or costs. For instance, the character can be the presence of a host on a symbiont tree, the geographical region on a species tree, the number of genes on a genome tree, or nucleotides in a sequence. Such methods thus aim at reconstructing ancestral characters at internal nodes of the tree. Although these methods have produced results on genome evolution, the utility of a second tree appears with very simple examples. If a symbiont has recently acquired the ability to spread in a group of species and thus it is present in most of them, character methods will wrongly indicate that the common ancestor of the hosts already had the symbiont. In contrast, a comparison of the symbiont and host trees would show discrepancies revealing horizontal transfers. The origins of reconciliation: the Duplication Loss model and the Lowest Common Ancestor mapping Duplication and loss were invoked first to explain the presence of multiple copies of a gene in a genome or its absence in certain species. It is possible with those two events to reconcile any two trees, i.e. to map the nodes and branches of the lower and upper trees, or equivalently to give a list of evolutionary events explaining the discrepancies between the upper tree and the lower tree. 
A most parsimonious Duplication and Loss (DL) reconciliation is computed through the Lowest Common Ancestor (LCA) mapping: proceeding from the leaves to the root, each internal node is mapped to the lowest common ancestor of the mapping of its two children. A Markovian model for reconciliation The LCA mapping in the DL model follows a parsimony principle: no event should be invoked if it is not necessary. However the use of this principle is debated, and it is commonly admitted that it is more accurate in molecular evolution to fit a probabilistic model as a random walk, which does not necessarily produce parsimonious scenarios. A birth and death Markovian model is such a model that can generate a lower tree "inside" a fixed upper one from root to leaves. Statistical inference provides a framework to find most likely scenarios, and in that case, a maximum likelihood reconciliation of two trees is also a parsimonious one. In addition, it is possible with such a framework to sample scenarios, or integrate over several possible scenarios in order to test different hypotheses, for example to explore the space of lower trees. Moreover, probabilistic models can be integrated into larger models, as probabilities simply multiply when assuming independence, for instance combining sequence evolution and DL reconciliation. Introducing horizontal transfer Host switch, i.e. inheritance of a symbiont from a kin lineage, is a crucial event in the evolution of parasitic or symbiotic relationships between species. This horizontal transfer also models migration events in biogeography and became of interest for the reconciliation of gene and species trees when it appeared that many discrepancies could not simply be explained by duplication and loss and that horizontal gene transfer (HGT) was a major evolutionary process in micro-organisms evolution. This switching, or horizontal transfer, pattern can also model admixture or introgression. 
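Returning to the DL model, the LCA mapping described at the start of this section can be sketched in Python. This is a minimal illustration of ours, not any published implementation; trees are encoded as nested tuples with distinct leaf names, which is an assumption of this sketch.

```python
# Minimal sketch of the Lowest Common Ancestor (LCA) mapping used in
# duplication-loss (DL) reconciliation. Trees are nested tuples
# (left, right) with distinct string leaves; `species_of` maps each
# gene-tree leaf to a species-tree leaf.

def subtrees(t):
    yield t
    if isinstance(t, tuple):
        for child in t:
            yield from subtrees(child)

def lca_mapping(gene_tree, species_tree, species_of):
    # parent pointers in the species tree
    parent = {}
    for h in subtrees(species_tree):
        if isinstance(h, tuple):
            for child in h:
                parent[child] = h

    def path_to_root(h):
        path = [h]
        while h in parent:
            h = parent[h]
            path.append(h)
        return path

    def lca(a, b):
        ancestors_a = path_to_root(a)
        return next(h for h in path_to_root(b) if h in ancestors_a)

    mapping, duplications = {}, []
    def walk(g):
        if not isinstance(g, tuple):          # gene-tree leaf
            mapping[g] = species_of[g]
        else:                                 # internal node: LCA of children
            walk(g[0])
            walk(g[1])
            mapping[g] = lca(mapping[g[0]], mapping[g[1]])
            # same mapping as one of its children means the split
            # is a duplication rather than a speciation
            if mapping[g] in (mapping[g[0]], mapping[g[1]]):
                duplications.append(g)
    walk(gene_tree)
    return mapping, duplications
```

In this encoding, an internal gene-tree node mapped to the same species node as one of its children is a duplication, and losses can then be counted along the paths between mappings. The LCA mapping cannot, however, express horizontal transfers, which is what the richer DTL machinery addresses.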
It is considered in character methods, without information from the symbiont phylogeny. On top of the DL model, horizontal transfer enables new and very different reconciliation scenarios. The simple yet powerful dynamic programming approach The LCA reconciliation method yields a unique solution, which has been shown to be optimal for the problem of minimizing the weighted number of events, whatever the relative weights of duplication and loss. In contrast, with Duplication, horizontal Transfer and Loss (DTL), there can be several equally parsimonious reconciliations. For instance, a succession of duplications and losses can be replaced by a single transfer. One of the first ideas to define a computational problem and approach a resolution was, in a host/symbiont framework, to maximize the number of co-speciations with a heuristic algorithm. Another solution is to give relative costs to the events and find a scenario that minimizes the sum of the costs of its events. In the probabilistic model frameworks, the equivalent task consists of assigning rates or probabilities to events and search for maximum likelihood scenarios, or sample scenarios according to their likelihood. All these problems are solved with a dynamic programming approach. This dynamic programming method involves traversing the two trees in a postorder. Proceeding from the leaves and then going up in the two trees, for each couple of internal nodes (one for each tree), the cost of a most parsimonious DTL reconciliation is computed. 
In a parsimony framework, the cost c(p, h) of reconciling a lower subtree rooted at p with an upper subtree rooted at h is initialized for the leaves with their matching: c(p, h) = 0 if leaf p is matched to leaf h, and c(p, h) = ∞ otherwise. And then inductively, denoting p1, p2 the children of p, h1, h2 the children of h, and σ, δ, τ, λ the costs associated with speciation, duplication, horizontal transfer and loss, respectively (with σ often fixed to 0),

c(p, h) = min( σ + min(c(p1, h1) + c(p2, h2), c(p1, h2) + c(p2, h1)),
               δ + c(p1, h) + c(p2, h),
               τ + min(c(p1, h) + min_h' c(p2, h'), c(p2, h) + min_h' c(p1, h')),
               λ + min(c(p, h1), c(p, h2)) ),

where h' ranges over the upper nodes incomparable with h (the possible transfer recipients). The costs min_h' c(p1, h') and min_h' c(p2, h'), because they do not depend on h, can be computed once for all h, hence achieving quadratic complexity to compute c(p, h) for all couples of p and h. The cost of losses only appears in association with other events because in parsimony, a loss can always be associated with the preceding event in the tree. The induction behind the use of dynamic programming is based on always progressing in the trees toward the roots. However, some combinations of events that can happen consecutively can make this induction ill-defined. One such combination consists of a transfer followed immediately by a loss in the donor lineage (TL). Restricting the use of this TL event repairs the induction. With unlimited use, it is necessary to use other known methods for solving systems of equations, such as fixed point methods or the numerical solving of differential equations. In 2016, only two out of seven of the most commonly used parsimony reconciliation programs handled TL events, although their consideration can drastically change the result of a reconciliation. Unlike LCA mapping, DTL reconciliation typically yields several scenarios of minimal cost, in some cases an exponential number. The strength of the dynamic programming approach is that it makes it possible to compute a minimum cost of coevolution of the input upper and lower trees in quadratic time, and to obtain a most parsimonious scenario through backtracking. It can also be transposed to a probabilistic framework to compute the likelihood of coevolution and obtain a most likely reconciliation, replacing costs with rates, minimums by sums and sums by products. 
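The recurrence can be sketched as a memoized recursion in Python. This is a simplified illustration under assumptions of ours: undated upper tree, no TL events, transfers allowed to any incomparable upper node, and the per-node transfer minima recomputed at each call rather than precomputed.

```python
import math

# Hypothetical sketch of the parsimony DTL dynamic program. Trees are
# nested tuples (left, right) with distinct string leaves; `match`
# sends each lower-tree leaf to an upper-tree leaf. sigma, delta, tau
# and lam are the speciation, duplication, transfer and loss costs.

def subtrees(t):
    yield t
    if isinstance(t, tuple):
        for child in t:
            yield from subtrees(child)

def dtl(lower, upper, match, sigma=0, delta=2, tau=3, lam=1):
    U = list(subtrees(upper))
    desc = {h: set(subtrees(h)) for h in U}
    # upper nodes incomparable to h (neither ancestors nor descendants):
    # the legal recipients of a transfer from h
    incomp = {h: [g for g in U if g not in desc[h] and h not in desc[g]]
              for h in U}
    memo = {}

    def c(p, h):
        if (p, h) in memo:
            return memo[p, h]
        if not isinstance(p, tuple):                       # lower leaf
            best = 0 if match[p] == h else math.inf
        else:
            p1, p2 = p
            best = delta + c(p1, h) + c(p2, h)             # duplication
            if isinstance(h, tuple):                       # speciation
                h1, h2 = h
                best = min(best, sigma + min(c(p1, h1) + c(p2, h2),
                                             c(p1, h2) + c(p2, h1)))
            if incomp[h]:                                  # transfer
                b1 = min(c(p1, g) for g in incomp[h])
                b2 = min(c(p2, g) for g in incomp[h])
                best = min(best, tau + min(c(p1, h) + b2, c(p2, h) + b1))
        if isinstance(h, tuple):                           # speciation-loss
            best = min(best, lam + c(p, h[0]), lam + c(p, h[1]))
        memo[p, h] = best
        return best

    return min(c(lower, h) for h in U)
```

For instance, with the upper tree (("A", "B"), "C") and the congruent lower tree (("a", "b"), "c"), this sketch returns 0 (two speciations at cost σ = 0), while the incongruent lower tree (("a", "c"), "b") costs 3, i.e. a single transfer. Precomputing the per-node transfer minima, as described above, is what brings the complexity down to quadratic.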
Moreover, through multiple backtracks, the approach is suitable for enumerating all parsimonious solutions or to sample scenarios, optimal and sub-optimal, according to their likelihood. Estimation of event costs and rates Dynamic programming per se is only a partial solution and does not solve several problems raised by reconciliation. Defining a most parsimonious DTL reconciliation requires assigning costs to the different kinds of events (D, T and L). Different cost assignments can yield different reconciliation scenarios, so there is a need for a way to choose those costs. There is a diversity of approaches to do so. CoRe-PA explores in a recursive manner the space of cost vectors, searching for a good matching with the event frequencies in reconciliations. ALE uses the same idea in a probabilistic framework to estimate the event rates by maximum likelihood. Alternatively, COALA is a preprocess using approximate Bayesian computation with sequential Monte Carlo: simulation and statistic rejection or acceptance of parameters with successive refinement. In the parsimony framework, it is also possible to divide the space of possible event costs into areas of costs which lead to the same Pareto optimal solution. Pareto optimal reconciliations are such that no other reconciliation has a strictly inferior cost for one type of event (duplication, transfer or loss), and less or equal for the others. It is possible as well to rely on external considerations in order to choose the event costs. For example, the software Angst chooses the costs that minimize the variation of genome size, in number of genes, between parent and children species. The problem of temporal feasibility The dynamic programming method works for dated (internal nodes are totally ordered) or undated upper trees. However, with undated trees, there is a temporal feasibility issue. 
Indeed, a horizontal transfer implies that the donor and the receiver are contemporaneous, therefore implying a time constraint on the tree. In consequence, two horizontal transfers may be incompatible, because they imply contradictory time constraints. The dynamic programming approach cannot easily check for such incompatibilities. If the upper tree is undated, finding a temporally feasible most parsimonious reconciliation is NP-hard. It is, however, fixed parameter tractable, which means that there are algorithms running in time bounded by an exponential of the number of transfers in the output scenarios. Some solutions involve integer linear programming or branch and bound exploration. If the upper tree is dated, then there is no incompatibility issue, because horizontal transfers can be constrained to never go backward in time. Finding a coherent optimal reconciliation is then solved in polynomial time, or with a speed-up in RASCAL, by testing only a fraction of node mappings. Most of the software taking undated trees as input does not check for temporal feasibility. Exceptions are Jane, which explores the space of total orders via a genetic algorithm, and, in a post process, Notung and Eucalypt, which search inside the set of optimal solutions for time-consistent ones. Other methods work as supplementary layers to reconciliations, correcting reconciliations or returning a subset of feasible transfers, which can be used to date a species tree.

Expanding phylogenies: Transfers from the dead

In phylogenetics in general, it is important to keep in mind that the extant and ancestral species represented in any phylogeny are only a sparse sample of the species that currently exist or ever have existed. This is why one can safely assume that all transfers that can be detected using phylogenetic methods have originated in lineages that are, strictly speaking, absent from a studied phylogeny.
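The temporal feasibility check described earlier can be sketched as a cycle test. Under the simplifying assumption that a transfer (donor, receiver) requires the two branches to overlap in time, it forces the donor's parent to predate the receiver and the receiver's parent to predate the donor; together with the parent-before-child tree constraints, a set of transfers is feasible iff the resulting precedence digraph is acyclic. The species tree and the transfers are hypothetical:

```python
# Hypothetical undated species tree, given as child -> parent (root "R"):
#        R
#      /   \
#     X     Y
#    / \   / \
#   A   B C   D
PARENT = {"X": "R", "Y": "R", "A": "X", "B": "X", "C": "Y", "D": "Y"}

def feasible(transfers):
    """transfers: list of (donor, receiver) branch pairs.  Build the strict
    precedence digraph (parent before child, plus the two constraints implied
    by each transfer) and test it for cycles by depth-first search."""
    nodes = set(PARENT) | set(PARENT.values())
    succ = {n: set() for n in nodes}
    for child, parent in PARENT.items():
        succ[parent].add(child)                 # parent predates child
    for d, r in transfers:
        if d in PARENT:
            succ[PARENT[d]].add(r)              # parent(donor) predates receiver
        if r in PARENT:
            succ[PARENT[r]].add(d)              # parent(receiver) predates donor
    state = {}                                  # 1 = in progress, 2 = done
    def acyclic_from(n):
        state[n] = 1
        for m in succ[n]:
            if state.get(m) == 1 or (m not in state and not acyclic_from(m)):
                return False
        state[n] = 2
        return True
    return all(n in state or acyclic_from(n) for n in nodes)
```

A single transfer between the two sides of the tree is feasible, but a pair of transfers forcing X to predate Y and Y to predate X creates a cycle and is rejected.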
Accounting for extinct or unsampled biodiversity in phylogenetic studies can give a better understanding of these processes. Originally, DTL reconciliation methods did not take this phenomenon into account and only allowed transfers between contemporaneous branches of the tree, hence ignoring most plausible solutions. However, methods working on undated upper trees can be seen as implicitly handling the unknown diversity by allowing transfers "to the future" from the point of view of one phylogeny, that is, transfers where the donor is more ancient than the recipient. A transfer to the future can be translated into a speciation to an unknown species, followed by a transfer from that unknown species. ALE in its dated version explicitly takes the unknown diversity into account by adding a Moran process of speciation/extinction of species to the dated birth/death model of gene evolution. Transfers from the dead are also handled in a parsimonious setting by Tera and ecceTERA, showing that considering these transfers improves the capacity to reconstruct gene trees using reconciliation, and, with a more explicit model and in a probabilistic setting, in the undated version of ALE.

The specificity of biogeography: a tree-like structure for the "evolution" of areas

In biogeography, some applications of reconciliation approaches consider as an upper tree an area cladogram with defined ancestral nodes. For instance, the root can be Pangaea and the nodes contemporary continents. Sometimes, internal nodes are not ancestral areas but the unions of the areas of their children, to account for the possibility of species evolving along the lower tree to inhabit one or several areas. In this case, the evolutionary events are migration, where one species colonizes a new area, and allopatric speciation, or vicariance, equivalent to co-speciation in host/symbiont comparisons.
Even though this approach does not always give a tree (if the unions AB and BC of leaves A, B, C exist, a child can have several parents), and this structure is not associated with time (it is possible for a species to go from A to AB by migration, as well as from AB to A by extinction), reconciliation methods, with events and dynamic programming, can infer evolutionary scenarios between the upper geographical structure and the lower species tree. Diva and Lagrange are two reconciliation models constructing such a tree-like structure and then applying reconciliation, the first with a parsimony principle, the second in a probabilistic framework. Additionally, BioGeoBEARS is a biogeography inference package that reimplemented the DIVA and Lagrange models and allows for new options, like distance-dependent transfers and discussion on statistical model selection.

Graphical output

With two trees and multiple evolutionary events linking them to represent, viewing reconciled trees is a challenging but necessary task in order to make reconciliation studies more accessible. Some reconciliation software includes annotation of the evolutionary events on the lower trees, while others, and specific packages, in DL or DTL, trace the lower tree embedded in the upper one. One difficulty in this regard is the variety of output formats of the different reconciliation software. A common standard, recPhyloXML, has been established and endorsed by part of the community, and a viewer is available, able to display reconciliations in multi-level systems.

Addressing additional practical considerations

Applying DTL reconciliation to biological data raises several problems related to uncertainty and confidence levels of input and output. Concerning the output, the uncertainty of the answer calls for an exploration of the whole solution space.
Concerning the input, phylogenetic reconciliation has to handle uncertainties in the resolution or rooting of the upper or lower trees, or even to propose roots or resolutions according to their confidence.

Exploring the space of reconciliations

Dynamic programming makes it possible to sample reconciliations, uniformly among optimal ones or according to their likelihood. It is also possible to enumerate them in time proportional to the number of solutions, a number which can quickly become intractable (even for only the optimal ones). Finding and presenting structure among the multitude of possible reconciliations has been at the center of recent methodological developments, especially for methods aimed at hosts and symbionts. Several works have focused on representing a set of reconciliations in a compact way, from a uniform sample of optimal ones or by constructing a graph summarizing the optimal solutions. This can be achieved by giving support values to specific events based on all optimal (or suboptimal) reconciliations, or with the use of a consensus reconciled tree. In a DL model, it is possible to define a median reconciliation based on shared events and to compute it in polynomial time. EMPRess can group similar reconciliations through clustering, with all pairwise distances between reconciliations computable in polynomial time (independently of the number of most parsimonious reconciliations). With the same aim, Capybara defines equivalence classes among reconciliations, efficiently computing representatives for all classes, and outputs a given number of reconciliations with linear delay (first optimal ones, then sub-optimal ones). The space of most parsimonious reconciliations can be expanded or reduced by increasing or decreasing the allowed horizontal transfer distance, which is easily done by dynamic programming.
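The counting that underlies enumeration and uniform sampling can be sketched as a small change of algebra in the dynamic program: each table cell stores a (minimum cost, number of co-optimal ways) pair instead of a cost alone. A minimal sketch of the combining rule, applied to hypothetical (cost, count) sub-results:

```python
def best_and_count(alternatives):
    """Min-cost / number-of-ways algebra used at each cell of a reconciliation
    table.  Each alternative (e.g. speciation, duplication, transfer) is a
    list of independent sub-results given as (cost, count) pairs: costs add
    and counts multiply within an alternative, while across alternatives the
    minimum cost is kept and the counts reaching it are summed."""
    merged = []
    for parts in alternatives:
        total, ways = 0, 1
        for c, k in parts:
            total += c
            ways *= k
        merged.append((total, ways))
    best = min(c for c, _ in merged)
    return best, sum(k for c, k in merged if c == best)
```

Backtracking while drawing each alternative proportionally to its count samples uniformly among optimal scenarios; replacing (min, +) with (sum, ×) over probabilities gives the likelihood analogue.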
Inferring phylogenetic trees with reconciliation

Reconciliation and input uncertainty

Reconciliation works with two fixed trees, a lower and an upper one, both assumed correct and rooted. However, those trees are not first-hand data. The most frequently used data for phylogenetics consist of aligned nucleotidic or proteic sequences. Extracting DNA, sequencing, assembling and annotating genomes, recognizing homology relationships among genes and producing multiple alignments for phylogenetic reconstruction are all complex processes where errors can ultimately affect the reconstructed tree. Any topology or rooting error can be misinterpreted and cause systematic bias. For instance, in DL reconciliations, errors on the lower tree bias the reconciliation toward more duplication events closer to the root and more losses closer to the leaves. On the other hand, reconciliation, as a macro-evolutionary model, can work as a supplementary layer to the micro-evolutionary model of sequence evolution, resolving polytomies (nodes with more than two children) or rooting trees, or be intertwined with it through integrative models in order to get better phylogenies. Most of the works in this direction focus on gene/species reconciliations; nevertheless, some first steps have been made in host/symbiont settings, such as considering unrooted symbiont trees or dealing with polytomies in Jane.

Exploring the space of lower trees with reconciliation

Reconciliation can easily take unrooted lower trees as input, which is a frequently used feature because trees inferred from molecular data are typically unrooted. It is possible to test all possible roots, and a thoughtful triple traversal of the unrooted tree makes it possible to do so without additional time complexity. In a duplication-loss model, the roots minimizing the cost are found close to one another, forming a "plateau", a property which does not generalize to DTL.
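Testing all possible roots amounts to trying every edge of the unrooted tree as a root position. A minimal sketch on a hypothetical quartet (in practice each candidate rooting would then be scored by reconciliation, and the triple-traversal trick avoids rescoring each rooting from scratch):

```python
def rootings(adj):
    """adj: adjacency dict of an unrooted binary tree (leaves have a single
    neighbour).  Yield one rooted nested-tuple tree per edge, obtained by
    placing the root in the middle of that edge; internal node names are
    dropped in the rooted output."""
    def build(node, parent):
        kids = [build(k, node) for k in adj[node] if k != parent]
        return node if not kids else tuple(kids)
    done = set()
    for a in adj:
        for b in adj[a]:
            if (b, a) not in done:          # each edge only once
                done.add((a, b))
                yield (build(a, b), build(b, a))

# Hypothetical quartet with internal nodes u and v:
QUARTET = {"A": ["u"], "B": ["u"], "C": ["v"], "D": ["v"],
           "u": ["A", "B", "v"], "v": ["C", "D", "u"]}
```

An unrooted binary tree with n leaves has 2n - 3 edges, hence that many candidate rootings; for the quartet, rooting on the central edge yields (("A", "B"), ("C", "D")).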
Reconciliation can also take as input non-binary trees, that is, trees with internal nodes having more than two children. Such trees can be obtained for example by contracting branches with low statistical support. Inferring a binary tree from a non-binary tree according to reconciliation scores is solved in DL with efficient methods. In DTL, the problem is NP-hard. Heuristics and exact fixed parameter tractable algorithms are possible solutions. Another way to handle uncertainty in lower trees is to take as input a sample of alternative lower trees instead of a single one. For example, in the paper that gave reconciliation its name, it was proposed to consider all most likely lower trees, and to choose among these trees the best one according to their DL costs, a principle also used by TreeFix-DTL. The sample of lower trees can similarly reflect their likelihood according to the aligned sequences, as obtained from Bayesian Markov chain Monte Carlo methods as implemented for example in Phylobayes. AnGST, ALE and ecceTERA use "amalgamation", an extension of the DTL dynamic programming that is able to efficiently traverse a set of alternative lower trees instead of a single tree. A local search in the space of lower trees guided by a joint likelihood, on the one hand from multiple sequence alignments and on the other hand from reconciliation with the upper tree, is achieved in Phyldog with a DL model and in GeneRax with DTL. In a DL model with sequence evolution and a relaxed molecular clock, the lower tree space can be explored with an MCMC. MowgliNNI can modify the input gene tree at poorly supported nodes to increase the DTL score, while TreeSolve resolves the multifurcations added by collapsing poorly supported nodes.
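Resolving a polytomy by reconciliation score means scoring its binary resolutions (efficient methods avoid enumerating them all). A minimal sketch enumerating the (2n - 3)!! rooted binary resolutions of a polytomy with hypothetical child subtrees:

```python
def insertions(x, t):
    """All ways to attach subtree x on an edge of rooted nested-tuple tree t,
    including above its root."""
    yield (x, t)
    if isinstance(t, tuple):
        a, b = t
        for ra in insertions(x, a):
            yield (ra, b)
        for rb in insertions(x, b):
            yield (a, rb)

def resolutions(children):
    """All rooted binary trees whose maximal subtrees are the children of a
    polytomy, built by inserting children one at a time on every edge."""
    if len(children) == 1:
        yield children[0]
    else:
        for t in resolutions(children[1:]):
            yield from insertions(children[0], t)
```

A polytomy with three children has 3 resolutions and one with four children has 15; a reconciliation-aware resolver would keep the candidate with the best score against the upper tree.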
Finally, integrative models, mixing sequence evolution and reconciliation, can compute a joint likelihood via dynamic programming (for both reconciliation and gene sequence evolution), and use Markov chain Monte Carlo including a molecular clock to estimate branch lengths, in a DL model or with a relaxed molecular clock, and in a DTL model. These models have been applied in gene/species frameworks, but not yet in host/symbiont or biogeography contexts.

Inferring upper trees using reconciliation

Inferring an upper tree from a set of lower trees is a long-standing question related to the supertree problem. It is particularly interesting in the case of gene/species reconciliation, where many (typically thousands of) gene trees are available from complete genome sequences. Supertree methods attempt to assemble a species tree based on sets of trees which may differ in terms of contemporary species sets and topology, but usually without consideration for the biological processes explaining these differences. However, some supertree approaches are statistically consistent for the reconstruction of the species tree if the gene trees are simulated under a DL model. This means that if the number of input lower trees generated from the true upper tree via the DL model grows toward infinity, given that there are no additional errors, the output upper tree converges almost surely to the true one. This has been shown in the case of a quartet distance, and with a generalised Robinson-Foulds multicopy distance, with better running time but assuming that gene trees do not contain bipartitions contradicting the species tree, which seems rare under a DL model. Reconciliation can also be used for the inference of upper trees. This is a computationally hard problem: merely resolving polytomies in a non-binary upper tree with a binary lower one, minimizing a DL reconciliation score, is NP-hard.
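For intuition on the tree distances involved, the plain rooted, single-copy Robinson-Foulds distance (of which the distance mentioned above is a generalised, multicopy variant) can be sketched as a symmetric difference of clade sets:

```python
def clades(tree):
    """Leaf sets of the internal nodes (the clades) of a rooted
    nested-tuple tree."""
    found = set()
    def leafset(t):
        if not isinstance(t, tuple):
            return frozenset([t])
        s = frozenset().union(*(leafset(c) for c in t))
        found.add(s)
        return s
    leafset(tree)
    return found

def rf_distance(t1, t2):
    """Number of clades present in exactly one of the two trees."""
    return len(clades(t1) ^ clades(t2))
```

Identical trees are at distance 0, while (("A", "B"), "C") and (("A", "C"), "B") differ by the two clades {A, B} and {A, C}, giving distance 2.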
In particular, reconstructing the species tree giving the best DL cost for several gene trees is NP-hard and 2-approximable. It is called the Gene Duplication problem, or more generally Gene Tree Parsimony. The problem was seen as a way to detect paralogy in order to get better species tree reconstructions. It is NP-hard, with interesting results on the problem complexity and the behaviour of the model with different input sizes, structures and ILS presence. Multiple solutions exist, with ILP or heuristics, and with the possibility of a deep coalescence score. ODTL takes as input gene trees and searches for a maximum likelihood species tree according to a DTL model, with a hill-climbing search. The approach produces a species tree with internal nodes ordered in time, ensuring time compatibility for the scenarios of transfer among lower trees (see the section on temporal feasibility above). Addressing a more general problem, Phyldog searches for the maximum likelihood species tree, gene trees and DL parameters from multiple family alignments via multiple rounds of local search. It thus performs the exploration of both upper and lower trees at the same time. MixTreEM presents a faster solution.

Limits of the two-level DTL model

A limit to dynamic programming: non-independent evolution of children lineages

The dynamic programming framework, like usual birth and death models, works under the hypothesis of independent evolution of children lineages in the lower tree. However, this hypothesis does not hold if the model is complemented with several other documented evolutionary events, such as horizontal transfer with replacement of a homologous gene in the recipient lineage, or gene conversion. Horizontal transfer with replacement is usually modeled by a rearrangement of the upper tree, called Subtree Prune and Regraft (SPR). Reconciling under SPR is NP-hard, even on dated trees, and fixed-parameter tractable with regard to the output size.
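The SPR rearrangement itself can be sketched on nested-tuple trees (labels are hypothetical; the sketch assumes every subtree is distinct, the pruned subtree is not the root, and the graft position is outside the pruned subtree):

```python
def spr(tree, prune, graft):
    """Subtree Prune and Regraft on a rooted nested-tuple tree: detach the
    subtree equal to `prune`, suppress its former parent node, and reattach
    it on the edge above the subtree equal to `graft`."""
    def without(t):
        if isinstance(t, tuple):
            a, b = t
            if a == prune:              # parent suppressed: sibling moves up
                return without(b)
            if b == prune:
                return without(a)
            return (without(a), without(b))
        return t
    def attach(t):
        if t == graft:                  # new node splits the edge above graft
            return (prune, t)
        if isinstance(t, tuple):
            return tuple(attach(c) for c in t)
        return t
    return attach(without(tree))
```

For instance, pruning leaf A from (("A", "B"), ("C", "D")) and regrafting it above C yields ("B", (("A", "C"), "D")), the kind of move used to model a replacing transfer.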
Another way to model and infer replacing horizontal transfers is through maximum agreement forests, where branches are cut in the lower and upper trees in order to get two identical (or statistically indistinguishable) upper and lower forests. The problem is NP-hard, but several approximations have been proposed. Replacing transfers can be considered on top of the DL model. In the same vein, gene conversion can be seen as a "replacing duplication". In this latter case, a polynomial algorithm which does not use dynamic programming, and is an extension of the LCA method, can find all optimal solutions, including gene conversions.

Integrating population levels: failure to diverge and Incomplete Lineage Sorting

In host/symbiont frameworks, a single symbiont species is sometimes associated with several host species. This means that while a speciation or diversification has been observed in the host, the populations are indistinguishable in the symbiont. This is handled for example by additional polytomies in the symbiont tree, possibly leading to intractable inference problems, because polytomies need to be resolved. It is also modeled by an additional evolutionary event, "failure to diverge" (Jane, Amocoala). Failure to diverge can be a way to allow "free" host switches in a population, a flow of symbionts between closely related hosts. Following that vision, host switches allowed only between close hosts are considered in Eucalypt. This idea of horizontal flow between close populations can also be applied to gene/species frameworks, with a definition of species based on a gradient of gene flow between populations. Failure to diverge is one way of introducing population dynamics into reconciliation, a framework mainly adapted to the multi-species level, where populations are supposed to be well differentiated.
There are other population phenomena that limit this framework, one of them being deep coalescence of lineages, leading to incomplete lineage sorting (ILS), which is not handled by the DTL model. The multi-species coalescent is a classical model of allele evolution along a species tree, with birth of alleles and sorting of alleles at speciations, that takes into account population sizes and naturally encompasses ILS. In a reconciliation context, several attempts have been made to account for ILS without the complex integration of a population model. For example, ILS can be seen as a possible evolutionary pattern for the gene tree. In that case, children lineages are not independent of one another, leading to intractability results. ILS alone can be handled with LCA mapping, but ILS + DL reconciliation is NP-hard, even without transfers. Notung handles ILS by collapsing short branches of the species tree into polytomies and allowing ILS as a free diversification of gene trees on those polytomies. ecceTERA bounds the maximum size of the connected parts of the species tree where ILS can happen, proposing a fixed parameter tractable algorithm in that parameter. ILS and DL can be considered on an upper network instead of a tree. This models in particular introgression, with the possibility to estimate model parameters. More integrative reconciliation models accounting for ILS have been proposed, including both DL and the multispecies coalescent, with DLCoal. It is a probabilistic model with a parsimony translation, proposing two sequential LCA-type heuristics handled via an intermediate locus tree between the gene and species trees. However, outside of the gene/species reconciliation framework, ILS seems never to be considered in host/symbiont or biogeography studies, although there is no particular reason for this omission.
Cophylogeny with more than two levels

A striking aspect of reconciliation is the common methodology handling different levels of organization: it is used for comparing domain and protein trees, gene and species trees, host and symbiont trees, population and geographic trees. However, now that scientists tend to consider that multi-level models of biological functioning bring a novel and game-changing view of organisms and their environment, the question is how to use reconciliation to bring phylogenetics into this holobiont era. Coevolution of entities at different scales of evolution is at the basis of the holobiont idea: macro-organisms, micro-organisms and their genes all have a different history bound to a common functioning in a single ecosystem. Biological systems like the entanglement of hosts, symbionts and their genes imply functional and evolutionary dependencies between more than two levels.

Examples of multi-level systems with complex evolutionary inter-dependencies

Genes coevolving beyond genome boundaries

The holobiont concept stresses the possibility of genes from different genomes cooperating and coevolving. For instance, certain genes in a symbiont genome may provide a function to its host, like the production of a vital compound absent from available feeding sources. An iconic example is the case of blood-feeding or sap-feeding insects, which often depend on one or several bacterial symbionts to thrive on a resource that is abundant in sugar but lacks essential amino-acids or vitamins. Another example is the association of Fabaceae with nitrogen-fixing bacteria. The compound beneficial to the host is typically produced by a set of genes encoded in the symbiont genome, which, throughout evolution, may be transferred to other symbionts, and/or in and out of the host genome. Reconciliation methods have the potential to reveal evolutionary links between portions of genomes from different species.
A search for coevolving genes beyond the boundaries of the genomes in which they are encoded would highlight the basis for the association of organisms in the holobiont.

Horizontal gene transfer routes depend on multiple levels

In insect systems with intracellular mutualistic symbionts, multiple occurrences of horizontal gene transfer have been identified, whether from host to symbiont, symbiont to host or symbiont to symbiont. Transfers of endosymbiont genes involved in nutrition pathways beneficial to the insect host have been shown to occur preferentially if the donor and recipient lineages share the same host. This is also the case in insects with bacterial symbionts providing defensive proteins, or in obligate leaf nodule bacterial symbionts associated with plants. In the human host, gene transfer has been shown to occur preferentially among symbionts hosted in the same organs. A review of horizontal gene transfers in host/symbiont systems stresses the importance of supporting HGTs with multiple lines of evidence. Notably, it is argued that transfers should be considered better supported when involving symbionts sharing a habitat, a geographical area, or the same host. One should, however, keep in mind that most of the diversity of hosts and symbionts is unknown and that transfers may have occurred in unsampled closely related species, hosts or symbionts. The idea that gene transfer in symbionts is constrained by the host can also be used to investigate the host's phylogenetic history. For instance, based on phylogeographical studies, it is now accepted that the bacterium Helicobacter pylori has been associated with human populations since the origins of the human species. An analysis of the genomes of Helicobacter pylori in Europe suggests that they are issued from a recombination between African and Asian Helicobacter pylori. This strongly implies early contacts between the corresponding human populations.
Similarly, an analysis of HGTs in coronaviruses from different mammalian species using reconciliation methods has revealed frequent contact between viral lineages, which can be interpreted as frequent host switches.

Cultural evolution

The evolution of elements of human culture, for instance languages and folktales, in association with human population genetics, has been studied using concepts from phylogenetics. Although reconciliation has never been used in this framework, some of these studies encompass multiple levels of organization, each represented by a tree or by the evolution of a character, with a focus on the coevolution of these levels. Language trees can be compared with population trees in order to reveal vertically transmitted folktales, via a character model on the language tree. Variants in each folktale family, languages, genetic diversity, populations and geography can be compared two by two, to link folktale diversification with languages on one side and with geography on the other. Just as symbionts sharing a host promotes HGTs in genetics, linguistic barriers can block the transmission of folktales or language elements.

Investigating three-level systems using two-level reconciliation

Multi-level reconciliation is not as developed as two-level reconciliation. One way to approach the evolutionary dependencies between more than two levels of organization is to try to use available standard two-level methods to give a first insight into a biological system's complexity.

Multi-gene events: implicit consideration of an intermediate level

At the gene/species tree level, one typically deals with many different gene trees. In this case, the hypothesis that different gene families evolve independently is made implicitly. However, this need not be the case. For instance, duplication, transfer and loss can occur for segments of a genome spanning an arbitrary number of contiguous genes.
It is possible to consider such multi-gene events using an intermediate guide for lower trees inside the upper one. For instance, one can compute the joint likelihood of multiple gene tree reconciliations with a dated species tree under duplication, loss and whole genome duplication, or work in a parsimonious setting, and one definition of the problem is NP-hard. Similarly, the DL framework can be enriched with duplications and losses of chromosome segments instead of single genes. However, DL reconciliation becomes intractable with that new possibility. The link between two consecutive genes can also be modeled as an evolving character, subject to gain, loss, origination, breakage, duplication and transfer. The evolution of this link appears as an additional level to species and gene trees, partly constrained by the gene/species tree reconciliation, partly evolving on its own, according to genome organization. It thus models synteny, or proximity between genes. At another scale, it can as well model the evolution of two domains belonging to a protein. The detection of "highways of transfers", the preferential acquisition of groups of genes from a specific donor, is another example of the non-independence of gene histories. Similarly, multi-gene transfers can be detected. This has also led to methodological developments such as reconciliations using phylogenetic networks, seen as a tree augmented with transfer edges, which can be used to constrain transfers in a DTL model. Networks can also be used to model introgression and incomplete lineage sorting.

Detecting coevolution in multiple pairs of levels

It is a central question for understanding the evolution of a holobiont to know which levels coevolve with each other, for instance among host species, host genes, symbionts and symbiont genes. It is possible to approach the multiple inter-dependencies between all levels of evolution by multiple pairwise comparisons of two evolving entities.
Reconciliation of host and symbiont on one side, and of geography and symbiont on the other, can also help to identify patterns of diversification of host and symbiont that reflect either coevolution or patterns that can be explained by a common geographical diversification. Similarly, a study used reconciliation methods to differentiate the effects of diet evolution and phylogenetic inertia on the composition of mammalian gut microbiomes. By reconstructing ancestral diets and microbiome compositions onto a mammalian phylogeny, the study revealed that both effects contribute, but at different time scales.

Explicit modeling of three or more levels

In a model of a multi-level system such as host/symbiont/genes, horizontal gene transfers should be more likely between two symbionts of the same host. This is invisible to a two-level gene tree/species tree or host/symbiont reconciliation: in some cases, looking at any combination of two levels can lead to missing an evolutionary scenario which can only be found to be the most likely one if the information from the three trees is considered together. To face this limitation of standard two-level reconciliations on systems involving inter-dependencies at multiple levels, a methodological effort has been undertaken in the last decade to construct and use multi-level models. This requires the identification of at least one "intermediate" level between the upper and the lower one.

Pre-reconciliation: characters onto reconciled trees

A first step towards integrated three-level models is to consider phylogenetic trees at two levels and another level represented only by characters at the leaves of one of the trees. For instance, a reconciliation of host and symbiont phylogenies can be informed by geographic data.
Ancestral geographic locations of host and symbiont species obtained through a character inference method can then be used to constrain the host/symbiont reconciliation: ancestral hosts and symbionts can only be associated if they belong to the same geographical location. At another scale, evolution at the sub-gene level can be approached with a character method. Here, parts of genes (e.g. the sequences coding for protein domains) are reconciled according to a DL model with a species tree, and the genes they belong to are mentioned as characters of these parts. Ancestral genes are then reconstructed a posteriori via merges and splits of gene parts.

Two-level reconciliations informed by a third level

As pointed out by several studies mentioned above, an upper level can inform a reconciliation between an intermediate and a lower one, notably for horizontal transfers. Three-level models can take these assumptions into account to guide reconciliations between an intermediate tree and lower levels with the knowledge of an upper tree. The model can, for example, give higher likelihoods to reconciliation scenarios where horizontal gene transfers happen between entities sharing the same habitat. This has been achieved for the first time with DTL gene/species reconciliations nested with a DTL domain/gene reconciliation. Different costs for inter and intra transfers depend on whether or not transfers happen between genes of the same genome. Note that this model explicitly considers three levels and three trees, but does not yet define a real three-level reconciliation, with an associated likelihood or score. It relies on a sequential operation, where the second reconciliation is informed by the result of the first one.

The reconciliation problem in multi-level models

The next step is to define the score of a reconciliation consisting of three nested trees and to compute, given the three trees, three-level reconciliations according to their score.
This has been achieved with a species/gene/domain system, where genes evolve within the species tree according to a DL model and domains evolve within the gene/species system according to a DTL model, forbidding domain transfers between genes of two different species. Inference involves candidate scenarios with joint scores. Computing the minimum score scenario is NP-hard, but dynamic programming or integer linear programming can offer heuristics. Variations of the problem considering multiple domains are available, and so is a simulation framework.

Inferring the intermediate tree using models of 3-level lower/intermediate/upper reconciliation

Just as two-level reconciliation can be used to improve lower or upper phylogenies, or to help construct them from aligned sequences, joint reconciliation models can be used in the same manner. In this vein, a coupled model of gene/species DL, domain/gene DL and gene sequence evolution in a Bayesian framework improves the reconstruction of gene trees.

Software

Multiple pieces of software have been developed to implement the various models of reconciliation. The following table does not aim for exhaustiveness, but presents a number of software tools aimed at reconciling trees to infer reconciliation scenarios, or for related usages such as correcting or inferring trees, or testing coevolution. The "levels of interest" column details the levels for which the software was implemented, even though it is entirely possible, for instance, to use software made for species and gene reconciliation to reconcile hosts and symbionts. "Parsimony or probability" indicates the underlying model used for the reconciliation.

References

External links
Phylogenetic reconciliation
https://en.wikipedia.org/wiki/Curb%20your%20dog
In New York City from the 1930s to 1978, before citywide pooper-scooper laws were enacted, street signs were put in place encouraging citizens to "curb" their dogs, that is, to have them defecate at the edge of the street, near the curb and in "the gutter", rather than on the sidewalk. The first known "curb your dog" signs in New York City, twenty-five in number, were distributed in 1937 "at points around the city" "in an effort to train owners." In the 1970s, a curb-your-dog sign campaign was launched in response to a problem that was becoming intolerable. Signs were erected to educate residents that they were required to have their dogs defecate in the street gutter, as opposed to the sidewalk, with the intent that NYC Sanitation Department street-sweeping machines would clean the streets overnight. This expensive approach to managing dog waste coincided with an NYC livability, demographic, and financial crisis and proved economically untenable. The signs were of a civic nature, being informational and educational: they did not list fines, cite law, or express consequences. In New York City beginning in 1955, education regarding sanitation (including signage and campaigns) was seen as a cost-effective way to manage public quality-of-life and health concerns known as "street pollution." "Curb Your Dog" signs from the late 1960s to 1970s were often simple in presentation, with a white border and white lettering stating "Curb your dog - Keep New York Clean" against a black background. A related sign stated "leash, gutter and clean up after your dog Please." The legacy of "curb your dog" signage remains in generational memory to such an extent that subsequent statutory laws have been confused or conflated with the educational signage campaign that ended in the late 1970s.
A quote (or misquote) ascribed to Sanitation Commissioner Jessica Tisch in April 2022 stated that "Those who don't pick up their pup's poop will be hit with up to a $250 fine under the city's 'Curb Your Dog' law, which was passed back in 1978." The statement erroneously conflated the "Curb Your Dog" educational campaign with the pooper-scooper laws and signage enacted during the later Koch administration, which threatened and imposed fines for failing to pick up after one's dog.

Kacik designs

Walter Kacik, an industrial designer, made the graphics on "Curb Your Dog" signs, garbage trucks, and "Keep New York City Clean" signs for the New York City Department of Sanitation during the John Lindsay administration in the late 1960s. The modern industrial design of the signs reduced visual pollution within the city, with an aim to improve quality of life through education and advertising. The use of Helvetica on "Curb Your Dog" signs was a prominent feature of Kacik's aesthetic. Helvetica was popular from 1968 into the 1970s and thus was widely used by mass-transit systems, retailers (Bloomingdale's), in advertising, and in signage. Once ubiquitous, the iconic Kacik "Curb your dog" signs were highly collectable when introduced (and often stolen) and remain collectable to this day, with few remaining. References Signage Dog equipment Feces
Curb your dog
[ "Biology" ]
680
[ "Excretion", "Feces", "Animal waste products" ]
72,236,351
https://en.wikipedia.org/wiki/Amanita%20ponderosa
Amanita ponderosa, also known as the heavy amidella, or gurumelo in Spanish, is a mushroom-forming fungus in the family Amanitaceae. References ponderosa Fungi described in 1944 Fungus species
Amanita ponderosa
[ "Biology" ]
44
[ "Fungi", "Fungus species" ]
72,236,649
https://en.wikipedia.org/wiki/Redmi%208
The Redmi 8 is a line of Android-based smartphones in the Redmi series, a sub-brand of Xiaomi Inc. The main model, the Redmi 8, was announced on October 9, 2019, and released on October 12, 2019. On October 12, 2019, the Redmi 8A was announced and marketed as a lite model of the Redmi 8. In India, on February 11, 2020, the Redmi 8A Dual was announced, which has a different camera setup compared to the Redmi 8A. On April 2, 2020, the Redmi 8A Dual was announced in Indonesia as the Redmi 8A Pro.

Design

The front is made of Gorilla Glass 5, while the back is made of plastic. On the bottom of the phones are a USB-C port, a speaker, a microphone and a 3.5mm audio jack. On the top there are an additional microphone and an IR blaster. On the left there is a dual-SIM tray with a microSD slot. On the right are the volume rocker and the power button. The Redmi 8 also has a fingerprint scanner on the back, under the camera island. References Android (operating system) devices 8 Mobile phones with multiple rear cameras Mobile phones with infrared transmitter Mobile phones introduced in 2019 Discontinued smartphones
Redmi 8
[ "Technology" ]
259
[ "Mobile technology stubs", "Mobile phone stubs" ]
72,237,900
https://en.wikipedia.org/wiki/Lichenostigma%20rupicolae
Lichenostigma rupicolae is a species of lichenicolous fungus belonging to the family Phaeococcomycetaceae. It was described in 2010 from specimens of Pertusaria rupicola, its host species. References Arthoniomycetes Fungi described in 2010 Fungi of Europe Fungi of Turkey Lichenicolous fungi Fungus species
Lichenostigma rupicolae
[ "Biology" ]
75
[ "Fungi", "Fungus species" ]