Dataset schema (per-row fields, with observed ranges across the dataset):
id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
58,523,832
https://en.wikipedia.org/wiki/Aspergillus%20pragensis
Aspergillus pragensis is a species of fungus in the genus Aspergillus. It is from the Candidi section. The species was first described in 2014. It has been reported to produce chlorflavonin, polar compound X, terphenyllin, 3-hydroxyterphenyllin, and 6-epi-stephacidin A. Growth and morphology A. pragensis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References pragensis Fungi described in 2014 Fungus species
Aspergillus pragensis
[ "Biology" ]
166
[ "Fungi", "Fungus species" ]
58,523,928
https://en.wikipedia.org/wiki/Aspergillus%20subalbidus
Aspergillus subalbidus is a species of fungus in the genus Aspergillus. It is from the Candidi section. The species was first described in 2014. Growth and morphology A. subalbidus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References subalbidus Fungi described in 2014 Fungus species
Aspergillus subalbidus
[ "Biology" ]
107
[ "Fungi", "Fungus species" ]
58,524,064
https://en.wikipedia.org/wiki/Aspergillus%20acidohumus
Aspergillus acidohumus is a species of fungus in the genus Aspergillus. It is from the Cervini section. The species was first described in 2016. It has not been reported to produce any extrolites. Growth on agar plates Aspergillus acidohumus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References acidohumus Fungi described in 2016 Fungus species
Aspergillus acidohumus
[ "Biology" ]
126
[ "Fungi", "Fungus species" ]
58,524,089
https://en.wikipedia.org/wiki/Aspergillus%20cervinus
Aspergillus cervinus is a species of fungus in the genus Aspergillus. It is from the Cervini section. The species was first described in 1914. It has been reported to produce terremutin, dihydroxy-2,5-toluquinone, xanthocillin, and sclerin. Growth and morphology A. cervinus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References cervinus Fungi described in 1914 Fungus species
Aspergillus cervinus
[ "Biology" ]
144
[ "Fungi", "Fungus species" ]
58,524,106
https://en.wikipedia.org/wiki/Aspergillus%20christenseniae
Aspergillus christenseniae is a species of fungus in the genus Aspergillus. It is from the Cervini section. The species was first described in 2016. It has been reported to produce 4-hydroxymellein, terremutin, an orange-red anthraquinone, and chlorflavonin. The species was named for Martha Christensen. Growth and morphology A. christenseniae has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References christenseniae Fungi described in 2016 Fungus species
Aspergillus christenseniae
[ "Biology" ]
144
[ "Fungi", "Fungus species" ]
58,524,133
https://en.wikipedia.org/wiki/Aspergillus%20kanagawaensis
Aspergillus kanagawaensis is a species of fungus in the genus Aspergillus. It is from the Cervini section. The species was first described in 1951. It has been reported to produce a few extrolites, including several polar indol-alkaloids. Growth and morphology A. kanagawaensis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References kanagawaensis Fungi described in 1951 Fungus species
Aspergillus kanagawaensis
[ "Biology" ]
137
[ "Fungi", "Fungus species" ]
58,524,435
https://en.wikipedia.org/wiki/Aspergillus%20novoguineensis
Aspergillus novoguineensis is a species of fungus in the genus Aspergillus. It is from the Cervini section. The species was first described in 2016. It has been reported to produce an asparvenone, sclerotigenin, and terremutin. References novoguineensis Fungi described in 2016 Fungus species
Aspergillus novoguineensis
[ "Biology" ]
75
[ "Fungi", "Fungus species" ]
58,524,466
https://en.wikipedia.org/wiki/Aspergillus%20nutans
Aspergillus nutans is a species of fungus in the genus Aspergillus. It is from the Cervini section. The species was first described in 1954. It has been reported to produce terremutin and some carotenoid-like extrolites. Growth and morphology A. nutans has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References nutans Fungi described in 1954 Fungus species
Aspergillus nutans
[ "Biology" ]
125
[ "Fungi", "Fungus species" ]
58,524,502
https://en.wikipedia.org/wiki/Aspergillus%20subnutans
Aspergillus subnutans is a species of fungus in the genus Aspergillus. It is from the Cervini section. The species was first described in 2016. It has been reported to produce 4-hydroxymellein. Growth and morphology A. subnutans has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References subnutans Fungi described in 2016 Fungus species
Aspergillus subnutans
[ "Biology" ]
119
[ "Fungi", "Fungus species" ]
58,524,544
https://en.wikipedia.org/wiki/Aspergillus%20transcarpathicus
Aspergillus transcarpathicus is a species of fungus in the genus Aspergillus. It is from the Cervini section. The species was first described in 2016. It has been reported to produce asparvenones, terremutin, 4-hydroxymellein, and xanthocillin. References transcarpathicus Fungi described in 2016 Fungus species
Aspergillus transcarpathicus
[ "Biology" ]
81
[ "Fungi", "Fungus species" ]
58,524,563
https://en.wikipedia.org/wiki/Aspergillus%20wisconsinensis
Aspergillus wisconsinensis is a species of fungus in the genus Aspergillus. It is from the Cervini section. The species was first described in 2016. It has been reported to produce an asparvenone, 4-hydroxymellein, sclerotigenin, two territrems, and cycloaspeptide. Growth and morphology A. wisconsinensis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References wisconsinensis Fungi described in 2016 Fungus species
Aspergillus wisconsinensis
[ "Biology" ]
141
[ "Fungi", "Fungus species" ]
68,021,560
https://en.wikipedia.org/wiki/Aleksei%20Chernavskii
Aleksei Viktorovich Chernavskii (or Chernavsky or Černavskii; 17 January 1938 – 22 December 2023) was a Russian mathematician, specializing in differential geometry and topology. Biography Chernavskii was born in Moscow and completed undergraduate study at the Faculty of Mechanics and Mathematics of Moscow State University in 1959. He enrolled in graduate school at the Steklov Institute of Mathematics. In 1964 he defended his Candidate of Sciences (PhD) thesis, written under the guidance of Lyudmila Keldysh, on the topic Конечнократные отображения многообразий (Finite-fold mappings of manifolds). In 1970 he defended his Russian Doctor of Sciences (habilitation) thesis Гомеоморфизмы и топологические вложения многообразий (Homeomorphisms and topological embeddings of manifolds). In 1970 he was an Invited Speaker at the International Congress of Mathematicians in Nice. Chernavskii worked as a senior researcher at the Steklov Institute until 1973 and from 1973 to 1980 at Yaroslavl State University. From 1980 to 1985 he was a senior researcher at the Moscow Institute of Physics and Technology. From 1985 he was employed at the Kharkevich Institute for Information Transmission Problems of the Russian Academy of Sciences. From 1993 he was working part-time as a professor at the Department of Higher Geometry and Topology, Faculty of Mechanics and Mathematics, Moscow State University. He wrote a textbook on differential geometry for advanced students. Chernavskii died on 22 December 2023, at the age of 85. Chernavskii's theorem Chernavskii's theorem (1964): If M and N are n-manifolds and f is a discrete, open, continuous mapping of M into N, then the branch set B_f = { x : x is an element of M and f fails to be a local homeomorphism at x } satisfies dimension(B_f) ≤ n − 2. Selected publications References External links 1938 births 2023 deaths Moscow State University alumni 20th-century Russian mathematicians 21st-century Russian mathematicians Differential geometers Topologists Scientists from Moscow
Aleksei Chernavskii
[ "Mathematics" ]
498
[ "Topologists", "Topology" ]
68,023,606
https://en.wikipedia.org/wiki/Commercetools
Commercetools, stylized as commercetools, is a cloud-based headless commerce platform that provides APIs to power e-commerce sales and similar functions for large businesses. Both the company and platform are called Commercetools. The company is headquartered in Munich, Germany with additional offices in Berlin, Germany; Jena, Germany; Amsterdam, Netherlands; London, England; Durham, North Carolina; Zürich, Switzerland; Sydney, Australia; Shanghai, China and Singapore. Through its investor REWE Group it is associated with an omnichannel order-fulfillment software provider and a payment-transactions provider. Its clients include Audi, Bang & Olufsen, Carhartt and Nuts.com. Commercetools is a founding member of the MACH Alliance. History Commercetools was founded by Dirk Hoerig and Denis Werner in 2006. It launched its platform in 2013. In 2014, Commercetools was wholly bought by REWE Digital, part of Germany's REWE Group. Hoerig is credited with coining the term "headless commerce". In 2018, Commercetools announced a $17 million investment to support its international expansion. It expanded into the U.K. market in 2019 with the opening of its London office. In 2020, Commercetools established a presence in Australia, with a team in Melbourne and a data center in Sydney. In 2019, Commercetools raised $145 million from venture capital firm Insight Partners. Insight Partners' managing directors Richard Wells and Matt Gatto joined Commercetools' board of directors as part of the deal. At the same time, Commercetools was spun out by REWE. REWE remains a significant shareholder. In January 2021, Commercetools partnered with car manufacturer Volkswagen Group to use the platform for its group brands, including Volkswagen, Bentley, Porsche and Audi. In May 2021, REWE Group announced additional investment into Commercetools to fund growth into the Chinese market. In September 2021, Commercetools raised $140 million in its series C round, led by venture capital firm Accel, valuing the company at $1.9 billion. In November 2021, Commercetools acquired Frontastic for an undisclosed amount. References E-commerce E-commerce software German companies established in 2006 Content management systems
Commercetools
[ "Technology" ]
460
[ "Information technology", "E-commerce" ]
68,025,443
https://en.wikipedia.org/wiki/3-Quinuclidinyl%20thiochromane-4-carboxylate
3-Quinuclidinyl thiochromane-4-carboxylate is a research compound which is the most potent muscarinic antagonist known. Tests in vitro showed it to have a binding affinity over 1000 times greater than that of 3-quinuclidinyl benzilate, with a Kd of 2.47 picomolar (pM). See also CS-27349 EA-3167 Metixene References Muscarinic antagonists Thiochromanes 3-Quinuclidinyl esters
3-Quinuclidinyl thiochromane-4-carboxylate
[ "Chemistry" ]
116
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
68,025,543
https://en.wikipedia.org/wiki/Hyperendemic
In epidemiology, the term hyperendemic disease refers to a disease which is constantly and persistently present in a population at a high rate of incidence and/or prevalence (occurrence) and which equally affects (i.e. which is equally endemic in) all age groups of that population. It is one of the various degrees of endemicity (i.e. degrees of transmission of an infectious disease). Definitions According to a more precise definition given by the Robert Koch Institute in Germany, hyperendemicity is not necessarily associated with a high incidence rate. A hyperendemic disease is one which is ubiquitously present with ongoing circulation in an endemic region with a high prevalence rate. As a result, a hyperendemic region shows a relatively low incidence rate but at the same time it poses a high risk of infection to people coming into the region. According to another definition discussing malaria, a hyperendemic region is defined to be one with a seasonally high degree of endemicity where immunity does not suffice to prevent the effects of the disease in all age groups. Examples In the discussion of dengue fever, a hyperendemic state is characterized by the continuous circulation of multiple viral serotypes in an area where a large pool of susceptible hosts and a competent vector (with or without seasonal variation) are constantly present. In another example, the World Health Organization defines malaria to be hyperendemic if the percentage of persons with an enlarged spleen (spleen rate) is constantly greater than 50% for all age groups. Difference with similar epidemiological concepts Difference with holoendemic An endemic disease is one with a continuous occurrence at an expected frequency over a certain period of time and in a certain geographical location. Two terms are used when the degree of transmission or infection of an endemic disease is high: hyperendemic and holoendemic. One of the differences between hyperendemic and holoendemic diseases is that hyperendemic diseases show a seasonally intense transmission in all age groups with a period of low or no transmission, whereas in holoendemic diseases there is a perennial (year-round) high level of transmission, predominantly among the young population, with higher immunity among adults. Difference with hotspot Justin Lessler and others from Johns Hopkins University reported a rise in the usage of the ambiguous term "hotspot" in research and policy documents in the late 2010s. Hotspots have been variously described as areas of elevated incidence or prevalence, higher transmission efficiency or risk, or higher probability of disease emergence. Lessler and others suggest that a hyperendemic region or synonymously a "burden hotspot" (defined as an area of elevated disease incidence or prevalence) should be distinguished from an "emergence hotspot" (defined as an area with a high frequency of emergence or reemergence of diseases or drug-resistant strains) and a "transmission hotspot" (defined as an area of elevated transmission efficiency i.e., an elevated reproductive number, R). References Infectious diseases Epidemiology
Hyperendemic
[ "Environmental_science" ]
631
[ "Epidemiology", "Environmental social science" ]
68,026,027
https://en.wikipedia.org/wiki/Sporadic%20disease
In infectious disease epidemiology, a sporadic disease is an infectious disease which occurs only infrequently, haphazardly, irregularly, or occasionally, from time to time in a few isolated places, with no discernible temporal or spatial pattern, as opposed to a recognizable epidemic outbreak or endemic pattern. The cases are so few (single or in a cluster) and separated so widely in time and place that there is little or no discernible connection between them. They also do not show a recognizable common source of infection. In the discussion of non-infectious diseases, a sporadic disease is a non-communicable disease (such as cancer) which occurs in people without any family history of that disease or without any inherited genetic predisposition for the disease (a change in DNA which increases the risk of having that disease). Sporadic non-infectious diseases arise not from any identifiable inherited gene, but because of randomly induced genetic mutations under the influence of environmental factors or of some unknown etiology. Sporadic non-infectious diseases typically occur late in life (late-onset), but early-onset sporadic non-infectious diseases also exist. Examples Sporadic infectious diseases Examples depend on time and place, because an infectious disease that is common in one area may be rare in another. In the United States, tetanus, rabies, and plague are considered examples of sporadic diseases. Although the tetanus-causing bacterium Clostridium tetani is present in the soil everywhere in the United States, tetanus infections are very rare and occur in scattered locations because most individuals have either been vaccinated or clean wounds appropriately. Similarly, the country records a few scattered cases of plague each year, generally contracted from rodents in rural areas in the western part of the country. In another example, the World Health Organization defines malaria to be sporadic when autochthonous cases (i.e. cases acquired locally rather than imported) are too few and scattered to have any appreciable effect on the community. Sporadic non-infectious diseases Some examples of sporadic non-infectious diseases are sporadic Alzheimer's disease, sporadic Creutzfeldt–Jakob disease, sporadic cancers (such as sporadic basal cell carcinoma, sporadic breast cancer, sporadic medullary thyroid cancer and sporadic Kaposi's sarcoma), sporadic fatal insomnia, sporadic goitre, sporadic hemiplegic migraine, sporadic late-onset nemaline myopathy, sporadic neurofibroma and sporadic porphyria cutanea tarda. Potential source for an epidemic If the conditions are favorable for its spread (pathogenicity, susceptibility of hosts, contact rate of individuals, population density, number of vaccinated or naturally immune individuals, etc.), a sporadic infectious disease may become the starting point of an epidemic. For example, in developed countries, shigellosis (bacillary dysentery) is normally considered a sporadic disease, but in overcrowded places with poor sanitation and poor personal hygiene, it may become epidemic. Shigellosis was a sporadic disease in South Korea for many years, until 1998. Beginning in 1998, South Korea experienced a sudden epidemic of shigellosis among school children. Contaminated school meals were identified as the major source of infection, and after several years, the infection rate declined significantly.
In another example, the South Asian country of Bangladesh experienced sporadic cases of dengue fever, a mosquito-borne disease, from its first outbreak in 1964 until 1999. However, in 2000, the arrival of a Thai/Myanmar strain of the highly pathogenic dengue type 3 virus gave rise to a sudden epidemic of dengue, with 5,551 reported cases that year; contributing factors were the overpopulated and poorly urbanized country (which increases human-mosquito contact), highly favorable breeding grounds for the vector (such as open water reservoirs used by poor people and accumulated rainwater), and very little public awareness. The type 3 dengue virus subsided after 2002 and re-emerged in 2017, once again causing an outbreak in 2019. Difficulty of measuring Molecular epidemiologist Lee Riley claims that most sporadic infections are actually part of unrecognized outbreaks, and that what appears to be endemic disease (from a traditional population-based epidemiology approach) actually consists of multiple small outbreaks (from a molecular epidemiology approach) in which seemingly unrelated (i.e., sporadic) cases are in reality epidemiologically related, because they belong to the same genotype of an infectious agent. Riley considers the differentiation of a disease occurrence as either endemic or epidemic to be not really meaningful. According to Riley, since most so-called sporadic occurrences of an endemic disease are actually small epidemics, rapid public health interventions against such occurrences can be made in the same way as they are done for recognized acute epidemics (i.e. epidemics in the traditional sense). Notes and references Notes References Works cited Infectious diseases Epidemiology
Sporadic disease
[ "Environmental_science" ]
1,029
[ "Epidemiology", "Environmental social science" ]
68,026,297
https://en.wikipedia.org/wiki/Slide%20rule%20scale
A slide rule scale is a line with graduated markings inscribed along the length of a slide rule used for mathematical calculations. The earliest such device had a single logarithmic scale for performing multiplication and division, but soon an improved technique was developed which involved two such scales sliding alongside each other. Later, multiple scales were provided, the most basic being logarithmic but with others graduated according to the mathematical function required. Few slide rules have been designed for addition and subtraction; rather, the main scales are used for multiplication and division, and the other scales for mathematical calculations involving trigonometric, exponential and, generally, transcendental functions. Before they were superseded by electronic calculators in the 1970s, slide rules were an important type of portable calculating instrument. Slide rule design A slide rule consists of a body and a slider that can be slid along within the body; both of these have numerical scales inscribed on them. On duplex rules the body and/or the slider have scales on the back as well as the front. The slider's scales may be visible from the back, or the slider may need to be slid right out and replaced facing the other way round. A cursor (also called runner or glass) containing one (or more) hairlines may be slid along the whole rule so that corresponding readings, front and back, can be taken from the various scales on the body and slider. History In about 1620, Edmund Gunter introduced what is now known as Gunter's line as one element of the Gunter's sector he invented for mariners. The line, inscribed on wood, was a single logarithmic scale going from 1 to 100. It had no sliding parts, but by using a pair of dividers it was possible to multiply and divide numbers. The form with a single logarithmic scale eventually developed into such instruments as Fuller's cylindrical slide rule. In about 1622, but not published until 1632, William Oughtred invented linear and circular slide rules which had two logarithmic scales that slid beside each other to perform calculations. In 1654 the linear design was developed into a wooden body within which a slider could be fitted and adjusted. Scales Simple slide rules will have a C and D scale for multiplication and division, most likely an A and B for squares and square roots, and possibly CI and K for reciprocals and cubes. In the early days of slide rules few scales were provided and no labelling was necessary. However, gradually the number of scales tended to increase. Amédée Mannheim introduced the A, B, C and D labels in 1859 and, after that, manufacturers began to adopt a somewhat standardised, though idiosyncratic, system of labels so the various scales could be quickly identified. Advanced slide rules have many scales and they are often designed with particular types of user in mind, for example electrical engineers or surveyors. There are rarely scales for addition and subtraction, but a workaround is possible. The rule illustrated is an Aristo 0972 HyperLog, which has 31 scales. (Note: the Aristo 0972 HyperLog was being manufactured in 1973, with scales as follows. Front: LL00, LL01, LL02, LL03, DF (on the slider CF, CIF, L, CI, C), D, LL3, LL2, LL1 and LL00. Back: H2, Sh2, Th, K, A (on the slider B, T, ST, S, P, C), D, DI, Ch, Sh1, H1.
Its gauge marks include ρ.) The scales in the table below are those appropriate for general mathematical use rather than for specific professions. Notes about table Some scales have high values at the left and low on the right. These are marked as "decrease" in the table above. On slide rules these are often inscribed in red rather than black, or they may have arrows pointing left along the scale. See the P and DI scales in the detail image. In slide rule terminology, "folded" means a scale that starts and finishes at values offset from a power of 10. Often folded scales start at π but may be extended lengthways to, say, 3.0 and 35.0. Folded scales with the code subscripted with "M" start and finish at log10 e to simplify conversion between base-10 and natural logarithms. When subscripted "/M", they fold at ln(10). For mathematical reasons some scales either stop short of or extend beyond the D = 1 and 10 points. For example, arctanh(x) approaches ∞ (infinity) as x approaches 1, so the scale stops short. In slide rule terminology "log-log" means the scale is logarithmic applied over an inherently logarithmic scale. Slide rule annotation generally ignores powers of 10. However, for some scales, such as log-log, decimal points are relevant and are likely to be marked. Gauge marks Gauge marks are often added to the scales, either marking important constants (e.g. π at 3.14159) or useful conversion coefficients (e.g. at 180×60×60/π, used to find the sine and tangent of small angles). A cursor may have subsidiary hairlines beside the main one. For example, when one is over kilowatts the other indicates horsepower. See the marks on the A and B scales and on the C scale in the detail image. The Aristo 0972 has multiple cursor hairlines on its reverse side, as shown in the image above. Notes References Citations Works cited Further reading Analog computers Historical scientific instruments Mechanical calculators Logarithms Logarithmic scales of measurement
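To make the scale mechanics concrete, here is a minimal Python sketch (an illustration added by the editor, not from the article; the constant SCALE_LENGTH and the function names are invented for the example) showing how the C/D scales turn multiplication into addition of physical distances, and how the half-length A scale reads off squares.

    import math

    SCALE_LENGTH = 250.0  # mm; a typical 25 cm rule (value assumed for illustration)

    def d_position(x):
        """Distance of value x from the left index of the D (or C) scale."""
        return SCALE_LENGTH * math.log10(x)   # valid for x in [1, 10)

    def d_value(pos):
        """Value under the hairline at distance pos on the D scale."""
        return 10 ** (pos / SCALE_LENGTH)

    # Multiply 2 by 3: set the C index over 2 on D, move the cursor to 3 on C,
    # and read the answer on D. Physically this just adds the two distances.
    print(d_value(d_position(2) + d_position(3)))   # ~6.0

    # The A scale packs two log cycles into the same length, so the value on A
    # directly above x on D is x squared.
    def a_value(pos):
        return 10 ** (2 * pos / SCALE_LENGTH)

    print(a_value(d_position(3)))   # ~9.0

The same position arithmetic explains why a rule gives only about three significant figures: precision is limited by how finely the fixed physical length can be read, not by the mathematics.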
Slide rule scale
[ "Physics", "Mathematics" ]
1,209
[ "Logarithms", "Physical quantities", "Quantity", "E (mathematical constant)", "Logarithmic scales of measurement" ]
68,027,492
https://en.wikipedia.org/wiki/Tailent%20Automation%20Platform
Tailent is a software company for robotic process automation (RPA) founded in Romania by Mario Popescu and Cristian Oftez, headquartered in Bucharest. The company's software provides a digital workforce designed specifically to automate complex, repetitive tasks. Tailent was also mentioned among the 40 startups selected for Startup Spotlight Online 2020. History Tailent was founded in 2015 in Bucharest, Romania as Mission Critical, by Mario Popescu and Cristian Oftez. In 2020, the company changed its name to Tailent, coinciding with the launch of the Tailent Automation Platform (TAP) and its entry into the international market. Products Tailent develops software that is used to automate repetitive tasks normally performed by people. The technology combines the way humans read computer displays with prebuilt components that can be assembled to automate specific processes. This type of software can be used to automate tasks performed by other business software such as CRM or ERP, or it can be used to simplify repetitive front office tasks. Tailent's main product is the Tailent Automation Platform (TAP). It combines a low-code Integrated Development Environment (IDE) called Studio, used for process creation, with agents called Robots that execute the processes. Everything is managed and monitored in a management tool called Orchestrator. References External links Business software companies Automation software Artificial intelligence companies Companies based in Bucharest Software companies of Romania Companies established in 2015
Tailent Automation Platform
[ "Engineering" ]
290
[ "Automation software", "Automation" ]
68,027,602
https://en.wikipedia.org/wiki/Paul%20F.%20McMillan
Paul Francis McMillan (3 June 1956 – 2 February 2022) was a British chemist who held the Sir William Ramsay Chair of Chemistry at University College London. His research considered the study of matter under extreme conditions of temperature and pressure, with a focus on phase transitions, amorphisation, and the study of glassy states. He also investigated the survival of bacteria and larger organisms (tardigrades) under extreme compression, amyloid fibrils, the synthesis and characterisation of carbonitride nanocrystals, and the motion of water in confined environments. He made extensive use of Raman spectroscopy together with X-ray diffraction and neutron scattering techniques. Early life and education McMillan was born in Edinburgh, Midlothian, and brought up in Loanhead, a small mining and farming village at the base of the Pentland Hills. He attended Lasswade High School, where he graduated with the Marshall Memorial medal. He then studied for a bachelor's degree in chemistry at the University of Edinburgh. After graduating, McMillan moved to Arizona State University, where he researched geochemistry with John Holloway and Alexandra Navrotsky. His doctoral research used vibrational spectroscopy to investigate the structures of silicate glasses. Research and career McMillan worked as a postdoctoral fellow at Arizona State University, where he installed one of the first micro-beam Raman spectroscopy instruments in the US. He used Raman spectroscopy to study high pressure minerals and materials. He was hired to a teaching position at Arizona State University in 1983, and promoted to Professor in the Department of Chemistry and Biochemistry in 1993. He was appointed Director of the Center for Solid State Science in 1997 and was named Presidential Professor of the Sciences. In 2000 he was awarded the Brunauer Cement Award of the American Ceramic Society. In 2000, McMillan returned to the United Kingdom, where he was made Professor of Solid State Chemistry at University College London, an appointment jointly held with the Royal Institution. McMillan also held visiting positions at the universities of Nantes and Rennes, the École Normale Supérieure and Université Claude Bernard. McMillan's research involved the exploration of solid state chemistry under extreme high pressure and high temperature conditions using diamond anvil cells. New compounds and materials were prepared and studied at up to a million atmospheres and thousands of degrees Celsius using spectroscopy and synchrotron X-ray diffraction. He studied the properties and structure of liquids, amorphous solids and biological molecules at high pressure. McMillan contributed across numerous fields and published work relating to solid state inorganic/materials chemistry, high pressure-high temperature research, amorphous solids and liquids, vibrational spectroscopy, synchrotron X-ray and neutron scattering, mineral physics, graphitic carbonitrides, battery materials and the response of bacteria to high pressures. In 2015 McMillan was a panellist on Melvyn Bragg's In Our Time on BBC Radio 4. Personal life McMillan died in London on 2 February 2022, at the age of 65. Selected publications References 1956 births 2022 deaths 20th-century British chemists 21st-century British chemists Alumni of University College London Arizona State University alumni British chemists Scientists from Edinburgh Solid state chemists
Paul F. McMillan
[ "Chemistry" ]
656
[ "Solid state chemists" ]
68,027,732
https://en.wikipedia.org/wiki/Ocean%20dynamical%20thermostat
The ocean dynamical thermostat is a physical mechanism through which changes in the mean radiative forcing influence the gradients of sea surface temperatures in the Pacific Ocean and the strength of the Walker circulation. Increased radiative forcing (warming) is more effective in the western Pacific than in the eastern Pacific, where the upwelling of cold water masses damps the temperature change. This increases the east-west temperature gradient and strengthens the Walker circulation. Decreased radiative forcing (cooling) has the opposite effect. The process has been invoked to explain variations in the Pacific Ocean temperature gradients that correlate to insolation and climate variations. It may also be responsible for the hypothesized correlation between El Niño events and volcanic eruptions, and for changes in the temperature gradients that occurred during the 20th century. Whether the ocean dynamical thermostat controls the response of the Pacific Ocean to anthropogenic global warming is unclear, as there are competing processes at play; potentially, it could drive a La Niña-like climate tendency during initial warming before it is overridden by other processes. Background The equatorial Pacific is a key region of Earth in terms of its relative influence on the worldwide atmospheric circulation. A characteristic east-west temperature gradient is coupled to an atmospheric circulation, the Walker circulation, and further controlled by atmospheric and oceanic dynamics. The western Pacific features the so-called "warm pool", where the warmest sea surface temperatures (SSTs) on Earth are found. In the eastern Pacific, conversely, an area called the "cold tongue" is always colder than the warm pool even though they lie at the same latitude, as cold water is upwelled there. The temperature gradient between the two in turn induces an atmospheric circulation, the Walker circulation, which responds strongly to the SST gradient. One important component of the climate is the El Niño-Southern Oscillation (ENSO), a mode of climate variability. During its positive/El Niño phase, waters in the central and eastern Pacific are warmer than normal, while during its cold/La Niña phase they are colder than normal. Coupled to these SST changes, the atmospheric pressure difference between the eastern and western Pacific changes. ENSO and Walker circulation variations have worldwide effects on weather, including natural disasters such as bushfires, droughts, floods and tropical cyclone activity. The atmospheric circulation modulates the heat uptake by the ocean, the strength and position of the Intertropical Convergence Zone (ITCZ), tropical precipitation and the strength of the Indian monsoon. Original hypothesis by Clement et al. (1996) and Sun and Liu's (1996) precedent Already in May 1996, Sun and Liu published a hypothesis that coupled interactions between ocean winds, the ocean surface and ocean currents can limit water temperatures in the western Pacific. As part of that study, they found that increased equilibrium temperatures drive an increased temperature gradient between the eastern and western Pacific. The ocean dynamical thermostat mechanism was described in a dedicated publication by Clement et al. 1996, using a coupled ocean-atmosphere model of the equatorial ocean. In the western Pacific, SSTs are governed only by stored heat and heat fluxes, while in the eastern Pacific horizontal and vertical advection also play a role.
Thus an imposed source of heating primarily warms the western Pacific, inducing stronger easterly winds that facilitate upwelling in the eastern Pacific and lower its temperature, a pattern opposite to that expected from the heating. Cold water upwelled along the equator then spreads away from it, reducing the total warming of the basin. The temperature gradient between the western and eastern Pacific thus increases, strengthening the trade winds and further increasing upwelling; this eventually results in a climate state resembling La Niña. The mechanism is seasonal, as upwelling is least effective in boreal spring and most effective in boreal autumn; thus it is mainly operative in autumn. Due to the vertical temperature structure, ENSO variability becomes more regular during cooling by the thermostat mechanism, but is damped during warming. The model of Clement et al. 1996 only considers temperature anomalies and does not account for the entire energy budget. After some time, warming would spread to the source regions of the upwelled water and into the thermocline, eventually damping the thermostat. The principal flaw in the model is that it assumes that the temperature of the upwelled water does not change over time. Later research Later studies have verified the ocean dynamical thermostat mechanism for a number of climate models with different structures of warming, as well as the occurrence of the opposite response (a decline in the SST gradient) in response to climate cooling. In fully coupled models a tendency of the atmospheric circulation to intensify with decreasing insolation sometimes negates the thermostat response to decreased solar activity. Liu, Lu and Xie 2015 proposed that an ocean dynamical thermostat can also operate in the Indian Ocean, and the concept has been extended to cover the Indo-Pacific as a whole rather than just the equatorial Pacific. Water flows from the western Pacific into the Indian Ocean through straits between Australia and Asia, a phenomenon known as the Indonesian Throughflow. Rodgers et al. 1999 postulated that stronger trade winds associated with the ocean dynamical thermostat may increase the sea level difference between the Indian and Pacific oceans, increasing the throughflow and cooling the Pacific further. An et al. 2022 postulated that a similar effect in the Indian Ocean could force changes to the Indian Ocean Dipole after carbon dioxide removal. Role in climate variability The ocean dynamical thermostat has been used to explain: The observation that during Marine Isotope Stage 3, cooling in Greenland is associated with El Niño-like climate change in the Pacific. The decline in ENSO variability during periods with high solar variability. The transition to a cold Interdecadal Pacific Oscillation at the end of the 20th century. Volcanic and solar influences The ocean dynamical thermostat mechanism has been invoked to link volcanic eruptions to ENSO changes. Volcanic eruptions can cool the Earth by injecting aerosols and sulfur dioxide into the stratosphere, which reflect incoming solar radiation. It has been suggested that in paleoclimate records volcanic eruptions are often followed by El Niño events, but it is questionable whether this applies to known historical eruptions, and results from climate modelling are equivocal.
In some climate models an ocean dynamical thermostat process causes the onset of El Niño events after volcanic eruptions; in others, additional atmospheric processes override the effect of the ocean dynamical thermostat on Pacific SST gradients. The ocean dynamical thermostat process may explain variations in SSTs in the eastern Pacific that correlate to insolation changes such as the Dalton Minimum and to the solar cycle. During the early and middle Holocene, when autumn and summer insolation was increased, but also during the Medieval Climate Anomaly between 900 and 1300 AD, SSTs off Baja California in the eastern Pacific were colder than usual. Southwestern North America underwent severe megadroughts during this time, which could also relate to a La Niña-like tendency in Pacific SSTs. Conversely, during periods of low insolation and during the Little Ice Age, SSTs increased. This region lies within the California Current, which is influenced by the eastern Pacific that controls the temperature of upwelled water. This was further corroborated by analyses with additional foraminifera species. Increased productivity in the ocean waters off Peru during the Medieval Climate Anomaly and the Roman Warm Period between 50 and 400 AD, when the worldwide climate was warmer, may occur through a thermostat-driven shallowing of the thermocline and increased upwelling of nutrient-rich waters. Additional mechanisms connecting the equatorial Pacific climate to insolation changes have been proposed, however. Role in recent climate change Changes in equatorial Pacific SSTs caused by anthropogenic global warming are an important problem in climate forecasts, as they influence local and global climate patterns. The ocean dynamical thermostat mechanism is expected to reduce the anthropogenic warming of the eastern Pacific relative to the western Pacific, thus strengthening the SST gradient and the Walker circulation. This is opposed by a weakening of the Walker circulation and the more effective evaporative cooling of the western Pacific under global warming. This compensation between different effects makes it difficult to estimate the eventual outcome for the Walker circulation and SST gradient. In CMIP5 models it is usually not the dominant effect. The ocean dynamical thermostat has been invoked to explain contradictory changes in the Pacific Ocean in the 20th century. Specifically, there appears to be a simultaneous increase of the SST gradient but also a weakening of the Walker circulation, especially during boreal summer. All these observations are uncertain, owing to the particular choices of metrics used to describe SST gradients and Walker circulation strength, as well as measurement issues and biases. However, the ocean dynamical thermostat mechanism could explain why the SST gradient has increased during global warming and also why the Walker circulation becomes stronger in autumn and winter, as these are the seasons when upwelling is strongest. On the other hand, warming in the Atlantic Ocean and more generally changes in between-ocean temperature gradients may play a role. Projected future changes Climate models usually depict an El Niño-like change, that is, a decrease in the SST gradient. In numerous models, there is a time-dependent pattern with an initial increase in the SST gradient ("fast response") followed by a weakening of the gradient ("slow response"), especially but not only in the case of abrupt increases of greenhouse gas concentrations.
This may reflect a decreasing strength of the ocean dynamical thermostat with increasing warming and the warming of the upwelled water, which occurs with a delay of a few decades after the surface warming and is known as the "oceanic tunnel". On the other hand, climate models might underestimate the strength of the thermostat effect. According to An and Im 2014, in an oceanic dynamical model a doubling of carbon dioxide concentrations initially cools the eastern Pacific cold tongue, but a further increase in carbon dioxide concentrations eventually causes the cooling to stop and the cold tongue to shrink. Their model does not consider changes in the thermocline temperature, which would tend to occur after over a decade of global warming. According to Luo et al. 2017, the ocean dynamical thermostat is eventually overwhelmed, first by a weakening of the trade winds and increased ocean stratification, which decrease the supply of cold water to the upwelling zones, and second by the arrival of warmer subtropical waters there. In their model, the transition takes about a decade. According to Heede, Fedorov and Burls 2020, the greater warming outside the tropics than within them eventually causes the water arriving at the upwelling regions to warm and the oceanic currents that transport it to weaken. This negates the thermostat effect after about two decades in the case of an abrupt increase of greenhouse gas concentrations, and after about half a century to a century when greenhouse gas concentrations increase more slowly. With further warming of the subsurface ocean, the strength of the ocean dynamical thermostat is expected to decline, because the decreasing stratification means that momentum is less concentrated in the surface layer and thus upwelling decreases. According to Heede and Fedorov 2021, in some climate models the thermostat mechanism initially prevails over other mechanisms and causes a cooling of the subtropical and central Pacific. Eventually most models converge to an equatorial warming pattern. Zhou et al. 2022 found that in carbon dioxide removal scenarios, the thermostat amplifies precipitation changes. Zheng et al. 2024 attributed changes in SST seasonality with global warming to the thermostat effect. Other contexts The term "ocean dynamical thermostat" has also been used in slightly different contexts: The interaction between a weakening Walker circulation and the Equatorial Undercurrent. Specifically, weaker easterly winds in the Pacific reduce the braking of the Undercurrent, thus accelerating it. This process dominates over the decrease in the eastward counterflow of the Undercurrent. Thus, a weaker Walker circulation can increase the flow of the Undercurrent and thus upwelling in the eastern Pacific, cooling it. Coupled general circulation models often do not depict this response of the Undercurrent and SST gradients correctly; the former may be the cause of the widespread underestimate of the SST gradients in these models. Stronger winds drive evaporative cooling of tropical SST. According to Heede, Fedorov and Burls 2020, in response to abrupt increases in greenhouse gas concentrations, weak mean climatological winds allow the Indian Ocean to heat up more than the Pacific Ocean. This tends to induce stronger easterly winds over the Pacific which further dampen the warming in the Pacific Ocean.
Unlike the ocean dynamical thermostat, however, this cooling effect is concentrated in the central-eastern Pacific, while westerly winds induced by warming over South America cause the eastern Pacific to warm. Notes References Sources External links Tropical meteorology Effects of climate change El Niño-Southern Oscillation events Physical oceanography
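To illustrate the feedback loop the article describes (uniform heating, damped eastern warming through upwelling, a strengthened gradient that further strengthens upwelling), here is a deliberately crude two-box sketch in Python. It is an editor's toy illustration, not the Clement et al. 1996 model: all coefficients are invented, and holding the upwelled-water temperature fixed mirrors the simplification criticized above.

    # Toy two-box Pacific: Tw = western SST anomaly, Te = eastern SST anomaly.
    # Both boxes feel the same radiative forcing F and radiative damping lam.
    # The east is additionally cooled by upwelling toward a fixed subsurface
    # anomaly of 0, with an upwelling rate that strengthens with the
    # west-east gradient (standing in for the wind coupling).
    F   = 1.0    # imposed uniform heating (arbitrary units, assumed)
    lam = 1.0    # radiative damping rate (assumed)
    w0  = 0.5    # background upwelling rate in the east (assumed)
    k   = 0.5    # extra upwelling per unit SST gradient (assumed)

    dt, nsteps = 0.01, 10000
    Tw = Te = 0.0
    for _ in range(nsteps):
        g  = Tw - Te                        # west-east SST gradient
        w  = w0 + k * max(g, 0.0)           # stronger trades -> more upwelling
        Tw += dt * (F - lam * Tw)
        Te += dt * (F - lam * Te - w * Te)  # upwelling damps eastern warming
    print(f"Tw={Tw:.2f}  Te={Te:.2f}  gradient={Tw - Te:.2f}")
    # Uniform heating (F > 0) warms the west more than the east, so the
    # gradient strengthens (a La Nina-like response); setting F < 0 reverses
    # the sign, as in the cooling case described above.

Note that because the subsurface anomaly never warms, the gradient here persists indefinitely; letting the upwelled water warm, as later studies do, would erode the effect over decades.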
Ocean dynamical thermostat
[ "Physics" ]
2,726
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
68,028,380
https://en.wikipedia.org/wiki/Parvibacter
Parvibacter is a genus in the phylum Actinomycetota containing a single species, Parvibacter caecicola. Taxonomy In 2018, Nouioui et al. proposed merging the genus Parvibacter along with the genera Asaccharobacter and Enterorhabdus into the genus Adlercreutzia, based on the observed clustering of these genera within phylogenetic trees. However, subsequent phylogenetic analyses observed that Parvibacter caecicola exhibited much deeper branching compared to other Adlercreutzia species. Its phylogenetic distinctness was further demonstrated by the presence of five conserved signature indels (CSIs) that are exclusively shared by all Adlercreutzia species except for P. caecicola. Thus, P. caecicola was transferred back into the genus Parvibacter, which continues to be a validly published genus. See also List of bacteria genera List of bacterial orders References Actinomycetota Bacteria genera Monotypic bacteria genera
Parvibacter
[ "Biology" ]
210
[ "Bacteria stubs", "Bacteria" ]
68,028,866
https://en.wikipedia.org/wiki/Alexandr%20Mishchenko
Alexandr Sergeevich Mishchenko (born August 18, 1941, in Rostov-on-Don) is a Russian mathematician, specializing in differential geometry and topology and their applications to mathematical modeling in the biosciences. Education and career After completing undergraduate study in 1965 in the Faculty of Mechanics and Mathematics of Moscow State University, Mishchenko became a graduate student in the Department of Higher Geometry and Topology of the same faculty and graduated in 1968 with a Candidate of Sciences degree (PhD). His PhD thesis K-теория на категории бесконечных комплексов (K-theory on the category of infinite complexes) was supervised by Sergei Novikov. In 1973 Mishchenko received his Russian Doctor of Sciences degree (habilitation) with the thesis Гомотопические инварианты неодносвязных многообразий (Homotopy invariants of non-simply connected manifolds). Since 1979 Mishchenko has been a full professor in the Department of Higher Geometry and Topology, Faculty of Mechanics and Mathematics, Moscow State University. He also works at the Steklov Institute of Mathematics. His research deals with geometry and topology, the application of algebraic and functional methods in the theory of smooth manifolds with non-commutative geometry and topology, and applications of geometry and topology to mathematical modeling in ecology, molecular biology, and bioinformatics. He has also done some research on the history of mathematics, mathematical education, and the history of teaching mathematics. He is the author or coauthor of over 100 research articles. In 1970, he was an Invited Speaker at the International Congress of Mathematicians in Nice. In 1971 he was awarded, jointly with Victor Buchstaber, the Moscow Mathematical Society Prize for research on the K-theory of infinite-dimensional CW-complexes. In 1996 Mishchenko, jointly with Anatoly Fomenko, was awarded the State Prize of the Russian Federation in the field of science and technology for a series of works involving the investigation of invariants of smooth manifolds and Hamiltonian dynamical systems. In 2006 Mishchenko was awarded the title of Honored Professor of Moscow State University. Selected publications Articles Izv. Akad. Nauk SSSR Ser. Mat. 38, 81–106 (1974) Books References 1941 births Living people Moscow State University alumni Academic staff of Moscow State University 20th-century Russian mathematicians 21st-century Russian mathematicians Differential geometers Topologists
Alexandr Mishchenko
[ "Mathematics" ]
569
[ "Topologists", "Topology" ]
68,029,744
https://en.wikipedia.org/wiki/Paula%20Diaconescu
Paula L. Diaconescu is a Romanian-American chemistry professor at the University of California, Los Angeles. She is known for her research on the synthesis of redox-active transition metal complexes, the synthesis of lanthanide complexes, metal-induced small molecule activation, and polymerization reactions. She is a fellow of the American Association for the Advancement of Science. Biography Diaconescu was born in Romania and received a Bachelor of Science degree from the University of Bucharest in 1998, conducting research on transition metal complexes and f-block metals. In 2003, Diaconescu received a PhD in chemistry from the Massachusetts Institute of Technology, working with Christopher C. Cummins on uranium chemistry. Before joining the faculty at UCLA in 2005, she spent two years as a postdoctoral fellow at the California Institute of Technology with Robert Grubbs. Research While Diaconescu is best known for her work on the reactivity of early transition metals, lanthanides, and actinides, she has also contributed to the field of redox-active ligand systems for small molecule activation. Her group has exploited ferrocene's electronic and redox properties to enable catalytic transformations with electrophilic transition metal centers. Diaconescu's research on redox-active systems studies how ferrocene's electronic and redox properties, when the ferrocene unit is strategically incorporated into a ligand, affect the reactivity of d-block metal complexes. This extends to redox-switchable catalysis and small molecule activation, with applications in polyaniline nanofiber-supported metal catalysis and bioorganometallic polymers. She recognized that redox-switchable catalysis can generate multiple catalytically active species with varying reactivity. The idea is that a compound can show orthogonal reactivity between the oxidized and reduced forms of the catalyst. The ring-opening polymerization of cyclic ethers and esters, as well as the polymerization of alkenes, has been explored with catalysts containing ferrocene. Selected publications Awards Diaconescu received a Sloan Fellowship in 2009, and received the Humboldt Foundation's Friedrich Wilhelm Bessel Research Award in 2014. In 2015, she was named a Guggenheim Fellow, and Diaconescu was named a fellow of the American Association for the Advancement of Science in 2019. References External links Fellows of the American Association for the Advancement of Science Massachusetts Institute of Technology alumni University of Bucharest alumni Living people American women chemists Inorganic chemists Year of birth missing (living people) Romanian women chemists 21st-century American chemists Romanian physical chemists American people of Romanian descent 21st-century American women scientists Sloan Research Fellows
Paula Diaconescu
[ "Chemistry" ]
523
[ "Inorganic chemists" ]
68,029,784
https://en.wikipedia.org/wiki/Hopin%20%28company%29
Hopin is a proprietary video teleconferencing and online conference-hosting platform. As of August 2023, the company was valued at about $3.85 billion, down from a high of $7.8 billion in 2021. It is a fully-remote company without an office address. The platform allows participants to attend conferences and network online, exchange virtual business cards, and get a summary of their connections after an event. Hopin enables organizers to create engaging and interactive virtual experiences for their attendees, and became increasingly popular in the wake of the COVID-19 pandemic. Since 2020, the platform claims to have hosted more than 80,000 events, working with organisations and companies like the United Nations, NATO and Unilever. It also claimed to have more than 100,000 customers, including Poshmark, American Express and the Financial Times, but does not publicly disclose the price range for its advanced plans. Hopin acquired several early-stage startups and companies in the web video business, the most prominent being Streamyard. Hopin raised over $1 billion in funding from various investors, including Accel, IVP, Coatue Management, Northzone, Salesforce Ventures, and others. Following a share buyback of over $500 million, the company sold its events unit assets, including its technology, customers and engineering and product teams, to RingCentral for $50 million in August 2023. The company continued to operate its live-streaming and video hosting products Streamyard and Streamable. In February 2024, Hopin moved its headquarters to the United States and closed its UK corporate entity. Hopin sold its business suite (Streamyard, Streamable and Superwave) to Bending Spoons in April 2024. See also Impact of the COVID-19 pandemic on science and technology List of video telecommunication services and product brands References Impact of the COVID-19 pandemic on science and technology Internet properties established in 2019 Videotelephony Web conferencing
Hopin (company)
[ "Technology" ]
410
[ "History of science and technology", "Impact of the COVID-19 pandemic on science and technology" ]
68,030,130
https://en.wikipedia.org/wiki/Interference%20freedom
In computer science, interference freedom is a technique for proving partial correctness of concurrent programs with shared variables. Hoare logic had been introduced earlier to prove correctness of sequential programs. In her PhD thesis (and papers arising from it) under advisor David Gries, Susan Owicki extended this work to apply to concurrent programs. Concurrent programming had been in use since the mid 1960s for coding operating systems as sets of concurrent processes (see, in particular, Dijkstra), but there was no formal mechanism for proving correctness. Reasoning about interleaved execution sequences of the individual processes was difficult, was error prone, and didn't scale up. Interference freedom applies to proofs instead of execution sequences; one shows that execution of one process cannot interfere with the correctness proof of another process. A range of intricate concurrent programs have been proved correct using interference freedom, and interference freedom provides the basis for much of the ensuing work on developing concurrent programs with shared variables and proving them correct. The Owicki-Gries paper An axiomatic proof technique for parallel programs I received the 1977 ACM Award for best paper in programming languages and systems. Note. Lamport presents a similar idea. He writes, "After writing the initial version of this paper, we learned of the recent work of Owicki." His paper has not received as much attention as Owicki-Gries, perhaps because it used flow charts instead of the text of programming constructs like the if statement and while loop. Lamport was generalizing Floyd's method while Owicki-Gries was generalizing Hoare's method. Essentially all later work in this area uses text and not flow charts. Another difference is mentioned below in the section on auxiliary variables. Dijkstra's Principle of non-interference Edsger W. Dijkstra introduced the principle of non-interference in EWD 117, "Programming Considered as a Human Activity", written about 1965. This principle states that: The correctness of the whole can be established by taking into account only the exterior specifications (abbreviated specs throughout) of the parts, and not their interior construction. Dijkstra outlined the general steps in using this principle: Give a complete spec of each individual part. Check that the total problem is solved when program parts meeting their specs are available. Construct the individual parts to satisfy their specs, but independent of one another and the context in which they will be used. He gave several examples of this principle outside of programming. But its use in programming is a main concern. For example, a programmer using a method (subroutine, function, etc.) should rely only on its spec to determine what it does and how to call it, and never on its implementation. Program specs are written in Hoare logic, introduced by Sir Tony Hoare, as exemplified in the specs of processes S1 and S2: {P1} S1 {Q1} and {P2} S2 {Q2}. Meaning: If execution of S1 begun in a state in which precondition P1 is true terminates, then upon termination, postcondition Q1 is true (and similarly for S2). Now consider concurrent programming with shared variables. The specs of two (or more) processes S1 and S2 are given in terms of their pre- and post-conditions, and we assume that implementations of S1 and S2 are given that satisfy their specs.
But when executing their implementations in parallel, since they share variables, a race condition can occur; one process changes a shared variable to a value that is not anticipated in the proof of the other process, so the other process does not work as intended. Thus, Dijkstra's Principle of non-interference is violated. In her PhD thesis of 1975 in Computer Science, Cornell University, written under advisor David Gries, Susan Owicki developed the notion of interference freedom. If processes S1 and S2 satisfy interference freedom, then their parallel execution will work as planned. Dijkstra called this work the first significant step toward applying Hoare logic to concurrent processes. To simplify discussions, we restrict attention to only two concurrent processes, although Owicki-Gries allows more. Interference freedom in terms of proof outlines Owicki-Gries introduced the proof outline for a Hoare triple {P} S {Q}. It contains all details needed for a proof of correctness of {P} S {Q} using the axioms and inference rules of Hoare logic. (This work uses the assignment statement x:= E, and the if, while, and await statements.) Hoare alluded to proof outlines in his early work; for interference freedom, the notion had to be formalized. A proof outline for {P} S {Q} begins with precondition P and ends with postcondition Q. Two assertions within braces { and } appearing next to each other indicate that the first must imply the second. Example: A proof outline for {P} S {Q}, where S is the sequence S1; x:= E; S2, has the form {P} S1 {R[E/x]} x:= E {R} S2 {Q}, and each triple in it, such as {R[E/x]} x:= E {R}, must hold, where R[E/x] stands for R with every occurrence of x replaced by E. (In this example, S1 and S2 are basic statements, like an assignment statement, skip, or an await statement.) Each statement T in the proof outline is preceded by a precondition pre(T) and followed by a postcondition post(T), and {pre(T)} T {post(T)} must be provable using some axiom or inference rule of Hoare logic. Thus, the proof outline contains all the information necessary to prove that {P} S {Q} is correct. Now consider two processes S1 and S2 executing in parallel, and their specs: {P1} S1 {Q1} and {P2} S2 {Q2}. Proving that they work suitably in parallel will require restricting them as follows. Each expression E in S1 or S2 may refer to at most one variable y that can be changed by the other process while E is being evaluated, and may refer to y at most once. A similar restriction holds for assignment statements x:= E. With this convention, the only indivisible action need be the memory reference. For example, suppose process S1 references variable y while S2 changes y. The value S1 receives for y must be the value before or after S2 changes y, and not some spurious in-between value. Definition of Interference-free The important innovation of Owicki-Gries was to define what it means for a statement T not to interfere with the proof of {P} S {Q}. If execution of T cannot falsify any assertion given in the proof outline of {P} S {Q}, then that proof still holds even in the face of concurrent execution of S and T. Definition. Statement T with precondition pre(T) does not interfere with the proof of {P} S {Q} if two conditions hold: (1) {Q ∧ pre(T)} T {Q} (2) Let S' be any statement within S but not within an await statement (see later section). Then {pre(S') ∧ pre(T)} T {pre(S')}. Read the last Hoare triple like this: If the state is such that both S' and T can be executed, then execution of T is not going to falsify pre(S'). Definition. Proof outlines for {P1} S1 {Q1} and {P2} S2 {Q2} are interference-free if the following holds. Let T be an await or assignment statement (that does not appear in an await) of process S1. Then T does not interfere with the proof of {P2} S2 {Q2}. Similarly for T of process S2 and {P1} S1 {Q1}. Statements cobegin and await Two statements were introduced to deal with concurrency. Execution of the statement cobegin S1 // S2 coend executes S1 and S2 in parallel.
It terminates when both S1 and S2 have terminated. Execution of the statement await B then S is delayed until condition B is true. Then, statement S is executed as an indivisible action—evaluation of B is part of that indivisible action. If two processes are waiting for the same condition B, when it becomes true, one of them continues waiting while the other proceeds. The await statement cannot be implemented efficiently and is not proposed to be inserted into the programming language. Rather it provides a means of representing several standard primitives such as semaphores—first express the semaphore operations as await statements, then apply the techniques described here. The inference rules for await and cobegin are as follows: from {P ∧ B} S {Q}, infer {P} await B then S {Q}; and, provided the proof outlines for {P1} S1 {Q1} and {P2} S2 {Q2} are interference-free, from those two triples infer {P1 ∧ P2} cobegin S1 // S2 coend {Q1 ∧ Q2}. Auxiliary variables An auxiliary variable does not occur in the program but is introduced in the proof of correctness to make reasoning simpler—or even possible. Auxiliary variables are used only in assignments to auxiliary variables, so their introduction neither alters the program for any input nor affects the values of program variables. Typically, they are used either as program counters or to record histories of a computation. Definition. Let AV be a set of variables that appear in S only in assignments x:= E, where x is in AV. Then AV is an auxiliary variable set for S. Since the variables of an auxiliary variable set AV are used only in assignments to variables in AV, deleting all assignments to them doesn't change the program's correctness, and we have the inference rule AV elimination: if AV is an auxiliary variable set for S, the variables in AV do not occur in P or Q, and S' is obtained from S by deleting all assignments to the variables in AV, then from {P} S {Q} infer {P} S' {Q}. Instead of using auxiliary variables, one can introduce a program counter into the proof system, but that adds complexity to the proof system. Note: Apt discusses the Owicki-Gries logic in the context of recursive assertions, that is, effectively computable assertions. He proves that all the assertions in proof outlines can be recursive, but that this is no longer the case if auxiliary variables are used only as program counters and not to record histories of computation. Lamport, in his similar work, uses assertions about token positions instead of auxiliary variables, where a token on an edge of a flow chart is akin to a program counter. There is no notion of a history variable. This indicates that the Owicki-Gries and Lamport approaches are not equivalent when restricted to recursive assertions. Deadlock and termination Owicki-Gries deals mainly with partial correctness: {P} S {Q} means: If S executed in a state in which P is true terminates, then Q is true of the state upon termination. However, Owicki-Gries also gives some practical techniques that use information obtained from a partial correctness proof to derive other correctness properties, including freedom from deadlock, program termination, and mutual exclusion. A program is in deadlock if all processes that have not terminated are executing await statements and none can proceed because their conditions are false. Owicki-Gries provides conditions under which deadlock cannot occur. Owicki-Gries presents an inference rule for total correctness of the while loop. It uses a bound function that decreases with each iteration and is positive as long as the loop condition is true. Apt et al show that this new inference rule does not satisfy interference freedom. The fact that the bound function is positive as long as the loop condition is true was not included in an interference test. They show two ways to rectify this mistake.
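Because the example below uses both constructs, it is convenient to display the two rules in inference-rule form. This is a standard rendering consistent with the definitions above, not a quotation from the original paper:

```latex
% Await rule: the body executes atomically in a state where B holds.
\[
\frac{\{P \land B\}\; S\; \{Q\}}
     {\{P\}\; \textbf{await } B \textbf{ then } S\; \{Q\}}
\]

% Parallel-composition rule, sound only when the two proof outlines
% are interference-free.
\[
\frac{\{P_1\}\, S_1\, \{Q_1\} \qquad \{P_2\}\, S_2\, \{Q_2\}
      \qquad \text{interference-free proof outlines}}
     {\{P_1 \land P_2\}\; \textbf{cobegin } S_1 \,\|\, S_2 \textbf{ coend}\; \{Q_1 \land Q_2\}}
\]
```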
A simple example Consider the statement S: {x=0} cobegin ⟨x:= x+1⟩ // ⟨x:= x+2⟩ coend {x=3}. The proof outline for it: {x=0} cobegin {x=0 ∨ x=2} ⟨x:= x+1⟩ {x=1 ∨ x=3} // {x=0 ∨ x=1} ⟨x:= x+2⟩ {x=2 ∨ x=3} coend {x=3}. Proving that ⟨x:= x+2⟩ does not interfere with the proof of {x=0 ∨ x=2} ⟨x:= x+1⟩ {x=1 ∨ x=3} requires proving two Hoare triples: (1) {(x=0 ∨ x=2) ∧ (x=0 ∨ x=1)} x:= x+2 {x=0 ∨ x=2} (2) {(x=1 ∨ x=3) ∧ (x=0 ∨ x=1)} x:= x+2 {x=1 ∨ x=3} The precondition of (1) reduces to x=0 and the precondition of (2) reduces to x=1. From this, it is easy to see that these Hoare triples hold. Two similar Hoare triples are required to show that ⟨x:= x+1⟩ does not interfere with the proof of {x=0 ∨ x=1} ⟨x:= x+2⟩ {x=2 ∨ x=3}. Suppose ⟨x:= x+2⟩ is changed from an await statement to the simple assignment x:= x+2. Then the proof outline does not satisfy the requirements, because the assignment contains two occurrences of shared variable x. Indeed, the value of x after execution of the cobegin statement could be 2 or 3. Suppose instead that ⟨x:= x+1⟩ is changed to the statement ⟨x:= x+2⟩, so it is the same as the other process. After execution of S, x should be 4. To prove this, because the two assignments are the same, two auxiliary variables are needed, one to indicate whether the first has been executed; the other, whether the second has been executed. We leave the change in the proof outline to the reader. Examples of formally proved concurrent programs A. Findpos. Write a program that finds the first positive element of an array (if there is one). One process checks all array elements at even positions of the array and terminates when it finds a positive value or when none is found. Similarly, the other process checks array elements at odd positions of the array. Thus, this example deals with while loops. It also has no await statements. This example comes from Barry K. Rosen. The solution in Owicki-Gries, complete with program, proof outline, and discussion of interference freedom, takes less than two pages. Interference freedom is quite easy to check, since there is only one shared variable. In contrast, Rosen's article uses Findpos as the single, running example in his 24-page paper. Owicki-Gries gives an outline of both processes in a general environment. B. Bounded buffer consumer/producer problem. A producer process generates values and puts them into a bounded buffer of size N; a consumer process removes them. They proceed at variable rates. The producer must wait if the buffer is full; the consumer must wait if the buffer is empty. In Owicki-Gries, a solution in a general environment is shown; it is then embedded in a program that copies one array into another. This example exhibits a principle to reduce interference checks to a minimum: Place as much as possible in an assertion that is invariantly true everywhere in both processes. In this case the assertion is the definition of the bounded buffer and bounds on variables that indicate how many values have been added to and removed from the buffer. Besides the buffer itself, two shared variables record the number of values added to the buffer and the number removed from the buffer. C. Implementing semaphores. In his article on the THE multiprogramming system, Dijkstra introduces the semaphore s as a synchronization primitive: s is an integer variable that can be referenced in only two ways, shown below; each is an indivisible operation: 1. P(s): Decrease s by 1. If now s < 0, suspend the process and put it on a list of suspended processes associated with s. 2. V(s): Increase s by 1. If now s ≤ 0, remove one of the processes from the list of suspended processes associated with s, so its dynamic progress is again permissible. The implementation of P(s) and V(s) using await statements maintains an array susp of the processes that are waiting because they have been suspended; initially, susp[p] = false for every process p. One could change the implementation to always waken the longest-suspended process.
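The atomicity claim in the simple example above — that replacing ⟨x:= x+2⟩ by the non-atomic assignment makes the final value 2 reachable — can be checked by brute-force enumeration of interleavings. The following Python sketch is illustrative only; the helper names are invented, and each atomic action is modeled as a function on a shared state:

```python
from itertools import combinations

def finals(p1, p2):
    """Final values of x over all interleavings of the atomic actions
    of processes p1 and p2 (each a list of functions on the state)."""
    n1, n2 = len(p1), len(p2)
    results = set()
    for picks in combinations(range(n1 + n2), n1):  # slots taken by p1
        s = {'x': 0, 't': 0}   # shared x; t models a private register
        i1 = i2 = 0
        for pos in range(n1 + n2):
            if pos in picks:
                p1[i1](s); i1 += 1
            else:
                p2[i2](s); i2 += 1
        results.add(s['x'])
    return results

S1 = [lambda s: s.update(x=s['x'] + 1)]         # atomic <x := x+1>
S2_atomic = [lambda s: s.update(x=s['x'] + 2)]  # atomic <x := x+2>
S2_split = [lambda s: s.update(t=s['x']),       # non-atomic x := x+2:
            lambda s: s.update(x=s['t'] + 2)]   # read x, then write t+2

print(finals(S1, S2_atomic))  # {3}: both actions atomic, x is always 3
print(finals(S1, S2_split))   # {2, 3}: the spurious outcome 2 appears
```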
D. On-the-fly garbage collection. At the 1975 Summer School Marktoberdorf, Dijkstra discussed an on-the-fly garbage collector as an exercise in understanding parallelism. The data structure used in a conventional implementation of LISP is a directed graph in which each node has at most two outgoing edges, either of which may be missing: an outgoing left edge and an outgoing right edge. All nodes of the graph must be reachable from a known root. Changing a node may result in unreachable nodes, which can no longer be used and are called garbage. An on-the-fly garbage collector has two processes: the program itself and a garbage collector, whose task is to identify garbage nodes and put them on a free list so that they can be used again. Gries felt that interference freedom could be used to prove the on-the-fly garbage collector correct. With help from Dijkstra and Hoare, he was able to give a presentation at the end of the Summer School, which resulted in an article in CACM. E. Verification of readers/writers solution with semaphores. Courtois et al use semaphores to give two versions of the readers/writers problem, without proof. Write operations block both reads and writes, but read operations can occur in parallel. Owicki provides a proof. F. Peterson's algorithm, a solution to the 2-process mutual exclusion problem, was published by Peterson in a 2-page article. Schneider and Andrews provide a correctness proof. Dependencies on interference freedom A diagram by Ilya Sergey depicts the flow of ideas that have been implemented in logics that deal with concurrency. At the root is interference freedom. Below, we summarize the major advances. Rely-Guarantee. 1981. Interference freedom is not compositional. Cliff Jones recovers compositionality by abstracting interference into two new predicates in a spec: a rely-condition records what interference a thread must be able to tolerate, and a guarantee-condition sets an upper bound on the interference that the thread can inflict on its sibling threads. Xu et al observe that Rely-Guarantee is a reformulation of interference freedom; revealing the connection between these two methods, they say, offers a deep understanding of the verification of shared-variable programs. CSL. 2004. Separation logic supports local reasoning, whereby specifications and proofs of a program component mention only the portion of memory used by the component. Concurrent separation logic (CSL) was originally proposed by Peter O'Hearn. We quote: "the Owicki-Gries method involves explicit checking of non-interference between program components, while our system rules out interference in an implicit way, by the nature of the way that proofs are constructed." Deriving concurrent programs. 2005-2007. Feijen and van Gasteren show how to use Owicki-Gries to design concurrent programs, but the lack of a theory of progress means that designs are driven only by safety requirements. Dongol, Goldson, Mooij, and Hayes have extended this work to include a "logic of progress" based on Chandy and Misra's language Unity, molded to fit a sequential programming model. Dongol and Goldson describe their logic of progress. Goldson and Dongol show how this logic is used to improve the process of designing programs, using Dekker's algorithm for two processes as an example. Dongol and Mooij present more techniques for deriving programs, using Peterson's mutual exclusion algorithm as one example.
Dongol and Mooij show how to reduce the calculational overhead in formal proofs and derivations, and derive Dekker's algorithm again, leading to some new and simpler variants of the algorithm. Mooij studies calculational rules for Unity's leads-to relation. Finally, Dongol and Hayes provide a theoretical basis for, and prove soundness of, the progress logic. OGRA. 2015. Lahav and Vafeiadis strengthen the interference freedom check to produce (we quote from the abstract) "OGRA, a program logic that is sound for reasoning about programs in the release-acquire fragment of the C11 memory model." They provide several examples of its use, including an implementation of the RCU synchronization primitives. Quantum programming. 2018. Ying et al extend interference freedom to quantum programming. Difficulties they face include intertwined nondeterminism: nondeterminism involving quantum measurements and nondeterminism introduced by parallelism occurring at the same time. The authors formally verify Bravyi-Gosset-König's parallel quantum algorithm solving a linear algebra problem, giving, they say, for the first time an unconditional proof of a computational quantum advantage. POG. 2020. Raad et al present POG (Persistent Owicki-Gries), the first program logic for reasoning about non-volatile memory technologies, specifically the Intel-x86. Texts that discuss interference freedom On a Method of Multiprogramming, 1999. Van Gasteren and Feijen base the formal development of concurrent programs entirely on the idea of interference freedom. On Concurrent Programming, 1997. Schneider uses interference freedom as the main tool in developing and proving concurrent programs. A connection to temporal logic is given, so arbitrary safety and liveness properties can be proven. Control predicates obviate the need for auxiliary variables for reasoning about program counters. Verification of Sequential and Concurrent Programs, 1991, 2009. This first text to cover verification of structured concurrent programs, by Apt et al, has gone through several editions over several decades. Concurrency Verification: Introduction to Compositional and Non-Compositional Methods, 2001. De Roever et al provide a systematic and comprehensive introduction to compositional and non-compositional proof methods for the state-based verification of concurrent programs. Implementations of interference freedom 1999: Nipkow and Nieto present the first formalization of interference freedom and its compositional version, the rely-guarantee method, in a theorem prover: Isabelle/HOL. 2005: Ábrahám's PhD thesis provides a way to prove multithreaded Java programs correct in three steps: (1) Annotate the program to produce a proof outline, (2) Use their tool Verger to automatically create verification conditions, and (3) Use the theorem prover PVS to prove the verification conditions interactively. 2017: Denissen reports on an implementation of Owicki-Gries in the "verification ready" programming language Dafny. Denissen remarks on the ease of use of Dafny and his extension to it, making it extremely suitable when teaching students about interference freedom. Its simplicity and intuitiveness outweigh the drawback of being non-compositional. He lists some twenty institutions that teach interference freedom. 2017: Amani et al combine the approaches of Hoare-Parallel, a formalisation of Owicki-Gries in Isabelle/HOL for a simple while-language, and SIMPL, a generic language embedded in Isabelle/HOL, to allow formal reasoning on C programs.
2022: Dalvandi et al introduce the first deductive verification environment in Isabelle/HOL for C11-like weak memory programs, building on Nipkow and Nieto's encoding of Owicki–Gries in the Isabelle theorem prover. 2022: Civl is a verifier for concurrent programs, built on top of Boogie, a verifier for sequential programs. Kragl et al describe how interference freedom is achieved in Civl using their new specification idiom, yield invariants. One can also use specs in the rely-guarantee style. Civl offers a combination of linear typing and logic that allows economical and local reasoning about disjointness (like separation logic). Civl is the first system that offers refinement reasoning on structured concurrent programs. 2022: Esen and Rümmer developed TRICERA, an automated open-source verification tool for C programs. It is based on the concept of constrained Horn clauses, and it handles programs operating on the heap using a theory of heaps. A web interface to try it online is available. To handle concurrency, TRICERA uses a variant of the Owicki-Gries proof rules, with explicit variables added to represent time and clocks. References Formal methods Program logic Logic in computer science
Interference freedom
[ "Mathematics", "Engineering" ]
4,556
[ "Software engineering", "Mathematical logic", "Logic in computer science", "Formal methods" ]
68,031,158
https://en.wikipedia.org/wiki/%CE%91-Aminoadipic%20acid
α-Aminoadipic acid is one of the metabolic precursors in the biosynthesis of lysine through the α-aminoadipate pathway. Its conjugate base is α-aminoadipate, which is the prevalent form at physiological pH. α-Aminoadipic acid has a stereogenic center and can appear in two enantiomers, L-α-aminoadipate and D-α-aminoadipate. The L-enantiomer appears during lysine biosynthesis and degradation, whereas the D-enantiomer is a part of certain antibiotics. Metabolism Lysine degradation Through saccharopine and allysine, lysine is converted to α-aminoadipate, which is then degraded all the way to acetoacetate. Allysine is oxidized by aminoadipate-semialdehyde dehydrogenase: allysine + NAD(P)+ + H2O ↔ α-aminoadipate + NAD(P)H + H+ α-Aminoadipate is then transaminated with α-ketoglutarate to give α-ketoadipate and glutamate, respectively, by the action of 2-aminoadipate transaminase: α-aminoadipate + α-ketoglutarate ↔ α-ketoadipate + glutamate Lysine biosynthesis α-Aminoadipate appears during the biosynthesis of lysine in several yeast species, fungi, and certain protists. During this pathway, which is named after α-aminoadipate, the same steps occur in the reverse of the order seen in the degradation reactions: α-ketoadipate is transaminated to α-aminoadipate, which is then reduced to allysine; allysine couples with glutamate to give saccharopine, which is then cleaved to give lysine. Importance A 2013 study identified α-aminoadipate as a novel predictor of the development of diabetes and suggested that it is a potential modulator of glucose homeostasis in humans. D-α-Aminoadipic acid is a part of the antibiotic cephalosporin C. References Amino acids Dicarboxylic acids
Α-Aminoadipic acid
[ "Chemistry" ]
468
[ "Amino acids", "Biomolecules by chemical classification" ]
68,031,496
https://en.wikipedia.org/wiki/HD%2028246
HD 28246 (HR 1404) is a solitary star located in the southern constellation Caelum. It has an apparent magnitude of 6.38, placing it near the limit of visibility to the unaided eye. The star is located relatively close, at a distance of about 122 light years, but it is receding, with a positive heliocentric radial velocity. HD 28246 has a stellar classification of F5.5 V, indicating that it is an ordinary F-type main-sequence star. At present it has 1.78 times the mass of the Sun and shines at 3.07 solar luminosities from its photosphere at an effective temperature of 6,519 K, giving it a yellow-white glow. HD 28246 has an iron abundance 105% that of the Sun, placing it at solar metallicity. At an age of 1.58 billion years, it spins leisurely with a projected rotational velocity of 8 km/s. References 1404 20630 Durchmusterung objects F-type main-sequence stars 028246 Caeli, 1 Caelum
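As a consistency check on the quoted luminosity and temperature (the radius is not stated in the article, and the solar effective temperature of about 5772 K is assumed), the Stefan–Boltzmann law gives the implied radius:

```latex
% Implied radius from L = 3.07 L_sun and T_eff = 6519 K.
\[
\frac{R}{R_\odot}
  = \sqrt{\frac{L}{L_\odot}}\left(\frac{T_\odot}{T_{\mathrm{eff}}}\right)^{2}
  = \sqrt{3.07}\,\left(\frac{5772}{6519}\right)^{2}
  \approx 1.37
\]
```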
HD 28246
[ "Astronomy" ]
224
[ "Caelum", "Constellations" ]
68,032,094
https://en.wikipedia.org/wiki/Rhodothermus
Rhodothermus is a genus of bacteria. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI) See also List of bacterial orders List of bacteria genera References Bacteria genera Rhodothermota
Rhodothermus
[ "Biology" ]
70
[ "Bacteria stubs", "Bacteria" ]
68,032,108
https://en.wikipedia.org/wiki/Rhodothermaceae
The Rhodothermaceae are a family of bacteria. See also List of bacterial orders List of bacteria genera References Bacteria families Rhodothermota
Rhodothermaceae
[ "Biology" ]
35
[ "Bacteria stubs", "Bacteria" ]
68,032,132
https://en.wikipedia.org/wiki/Rhodothermales
The Rhodothermales are an order of bacteria. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). See also List of bacterial orders List of bacteria genera References Bacteria orders Rhodothermota
Rhodothermales
[ "Biology" ]
73
[ "Bacteria stubs", "Bacteria" ]
68,032,220
https://en.wikipedia.org/wiki/Balneola
Balneola is a genus of bacteria. See also List of bacterial orders List of bacteria genera References Bacteria genera
Balneola
[ "Biology" ]
24
[ "Bacteria stubs", "Bacteria" ]
68,032,323
https://en.wikipedia.org/wiki/Unrelated-machines%20scheduling
Unrelated-machines scheduling is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. We need to schedule n jobs J1, J2, ..., Jn on m different machines, such that a certain objective function is optimized (usually, the makespan should be minimized). The time that machine i needs in order to process job j is denoted by pi,j. The term unrelated emphasizes that there is no relation between the values of pi,j for different i and j. This is in contrast to two special cases of this problem: uniform-machines scheduling, in which pi,j = pj / si (where si is the speed of machine i), and identical-machines scheduling, in which pi,j = pj (the same run-time on all machines). In the standard three-field notation for optimal job scheduling problems, the unrelated-machines variant is denoted by R in the first field. For example, the problem denoted by "R||Cmax" is an unrelated-machines scheduling problem with no constraints, where the goal is to minimize the maximum completion time. In some variants of the problem, instead of minimizing the maximum completion time, it is desired to minimize the average completion time (averaged over all n jobs); this is denoted by R||ΣCj. More generally, when some jobs are more important than others, it may be desired to minimize a weighted average of the completion time, where each job has a different weight. This is denoted by R||ΣwjCj. In a third variant, the goal is to maximize the minimum completion time, "R||Cmin". This variant corresponds to the problem of egalitarian item allocation. Algorithms Minimizing the maximum completion time (makespan) Minimizing the maximum completion time is NP-hard even for identical machines, by reduction from the partition problem. Horowitz and Sahni presented: Exact dynamic programming algorithms for minimizing the maximum completion time on both uniform and unrelated machines. These algorithms run in exponential time (recall that these problems are all NP-hard). Polynomial-time approximation schemes, which for any ε>0 attain at most (1+ε)OPT. For minimizing the maximum completion time on two uniform machines, their algorithm runs in time O(10^(2l) n), where l is the smallest integer for which ε ≥ 2·10^(−l). Therefore, the run-time is in O(n/ε²), so it is an FPTAS. For minimizing the maximum completion time on two unrelated machines, the run-time is O(10^(2l) n²) = O(n²/ε²). They claim that their algorithms can be easily extended for any number of uniform machines, but do not analyze the run-time in this case. Lenstra, Shmoys and Tardos presented a polytime 2-factor approximation algorithm, and proved that no polytime algorithm with approximation factor smaller than 3/2 is possible unless P=NP. Closing the gap between the 2 and the 3/2 is a long-standing open problem. Verschae and Wiese presented a different 2-factor approximation algorithm. Glass, Potts and Shade compare various local search techniques for minimizing the makespan on unrelated machines. Using computerized simulations, they find that tabu search and simulated annealing perform much better than genetic algorithms. Minimizing the average completion time Bruno, Coffman and Sethi present a polynomial-time algorithm for minimizing the average job completion time on unrelated machines, R||ΣCj (the average, over all jobs, of the time it takes to complete the job). Minimizing the weighted average completion time, R||ΣwjCj (where wj is the weight of job j), is NP-hard even on identical machines, by reduction from the knapsack problem.
It is NP-hard even if the number of machines is fixed and at least 2, by reduction from the partition problem. Schulz and Skutella present a (3/2+ε)-approximation algorithm using randomized rounding. Their algorithm is a (2+ε)-approximation for the problem with job release times, R|rj|ΣwjCj. Maximizing the profit Bar-Noy, Bar-Yehuda, Freund, Naor and Schieber consider a setting in which, for each job and machine, there is a profit for running this job on that machine. They present a 1/2-approximation for discrete input and a (1−ε)/2-approximation for continuous input. Maximizing the minimum completion time Suppose that, instead of "jobs" we have valuable items, and instead of "machines" we have people. Person i values item j at pi,j. We would like to allocate the items to the people, such that the least-happy person is as happy as possible. This problem is equivalent to unrelated-machines scheduling in which the goal is to maximize the minimum completion time. It is better known by the name egalitarian or max-min item allocation. Linear programming formulation A natural way to formulate the problem as a linear program is called the Lenstra–Shmoys–Tardos linear program (LST LP). For each machine i and job j, define a variable xi,j, which equals 1 if machine i processes job j, and 0 otherwise. Then, the LP constraints are: x1,j + ... + xm,j = 1 for every job j in 1,...,n; pi,1·xi,1 + ... + pi,n·xi,n ≤ T for every machine i in 1,...,m; and xi,j ∈ {0,1} for every i, j. Relaxing the integer constraints to xi,j ≥ 0 gives a linear program with size polynomial in the input. A solution of the relaxed problem can be rounded to obtain a 2-approximation to the problem. Another LP formulation is the configuration linear program. For each machine i, there are finitely many subsets of jobs that can be processed by machine i in time at most T. Each such subset is called a configuration for machine i. Denote by Ci(T) the set of all configurations for machine i. For each machine i and configuration c in Ci(T), define a variable xi,c, which equals 1 if the actual configuration used in machine i is c, and 0 otherwise. Then, the LP constraints are: the sum of xi,c over all c in Ci(T) equals 1 for every machine i in 1,...,m; the sum of xi,c over all machines i and all configurations c containing job j equals 1 for every job j in 1,...,n; and xi,c ∈ {0,1} for every i, c. Note that the number of configurations is usually exponential in the size of the problem, so the size of the configuration LP is exponential. However, in some cases it is possible to bound the number of possible configurations, and therefore find an approximate solution in polynomial time. Special cases There is a special case in which pi,j is either 1 or infinity. In other words, each job can be processed on a subset of allowed machines, and its run-time on each of these machines is 1. This variant is sometimes denoted by "P|pj=1,Mj|Cmax". It can be solved in polynomial time. Extensions Kim, Kim, Jang and Chen extend the problem by allowing each job to have a setup time, which depends on the job but not on the machine. They present a solution using simulated annealing. Vallada and Ruiz present a solution using a genetic algorithm. Nisan and Ronen, in their 1999 paper on algorithmic mechanism design, extend the problem in a different way, by assuming that the jobs are owned by selfish agents (see Truthful job scheduling). External links Summary of parallel machine problems without preemption References Optimal scheduling
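To make the LST formulation above concrete, the following Python sketch (function and variable names are illustrative; it assumes NumPy and SciPy are available) checks whether the LP relaxation is feasible for a given target makespan T. Binary-searching over T and rounding the fractional solution — the step not shown here — is the skeleton of the Lenstra–Shmoys–Tardos 2-approximation.

```python
import numpy as np
from scipy.optimize import linprog

def lst_lp_feasible(p, T):
    """Feasibility of the LST LP relaxation for target makespan T.

    p is an (m x n) array with p[i, j] = time machine i needs for job j.
    Fractional variables x[i, j] >= 0 (flat index i*n + j):
      sum_i x[i, j] = 1             -- every job is fully assigned
      sum_j p[i, j] * x[i, j] <= T  -- no machine is loaded beyond T
      x[i, j] = 0 if p[i, j] > T    -- the LST strengthening
    """
    m, n = p.shape
    nv = m * n
    A_eq = np.zeros((n, nv))            # job-assignment equalities
    for j in range(n):
        A_eq[j, [i * n + j for i in range(m)]] = 1.0
    A_ub = np.zeros((m, nv))            # machine-load inequalities
    for i in range(m):
        A_ub[i, i * n:(i + 1) * n] = p[i]
    bounds = [(0.0, 0.0) if p[i, j] > T else (0.0, 1.0)
              for i in range(m) for j in range(n)]
    res = linprog(np.zeros(nv), A_ub=A_ub, b_ub=np.full(m, float(T)),
                  A_eq=A_eq, b_eq=np.ones(n), bounds=bounds,
                  method="highs")
    return res.status == 0              # status 0: a feasible optimum found

p = np.array([[4.0, 2.0, 8.0],          # 2 machines, 3 jobs
              [3.0, 7.0, 2.0]])
print(lst_lp_feasible(p, 4.0))          # True  (fractionally feasible)
print(lst_lp_feasible(p, 3.0))          # False (no fractional schedule)
```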
Unrelated-machines scheduling
[ "Engineering" ]
1,517
[ "Optimal scheduling", "Industrial engineering" ]
72,412,764
https://en.wikipedia.org/wiki/Christopher%20O.%20Barnes
Christopher O. Barnes (born September 23, 1986) is an American chemist who is an assistant professor at Stanford University. During the COVID-19 pandemic, he studied the structure of the coronavirus spike protein and the antibodies that attack it. He was named one of ten "Scientists to watch" by Science News in 2022. Early life and education Barnes grew up in Huntersville, North Carolina. He attended North Mecklenburg High School. As a teenager, he competed in the Science Olympiad. He was an undergraduate at the University of North Carolina at Chapel Hill, where he was involved with the American football team. During his senior year, he was named the top student athlete. Although he had initially applied to study medicine, he changed his mind after being introduced to biophysics by Gary J. Pielak. He was a bachelor's student in psychology and moved to chemistry for his graduate studies. In 2010 he moved to the University of Pittsburgh, where he started researching molecular pharmacology. He investigated eukaryotic transcription using crystallographic techniques and electron microscopy. After earning his doctorate, Barnes started investigating the structure of HIV and the antibodies that attack it. He sought to understand how the virus contacts and enters cells, to better inform the design of therapeutics. Research and career Barnes was a postdoctoral researcher at the California Institute of Technology when the COVID-19 pandemic started. He was working alongside Pamela J. Bjorkman, who challenged him to uncover the structure of immune proteins that would attack SARS-CoV-2. Barnes used high-resolution imaging to better understand coronavirus spike proteins and the antibodies that attack them. He used cryo-electron microscopy, and identified several antibodies that attach to the receptor-binding domain on the coronavirus spike protein. He defined an antibody classification system to determine where on the receptor-binding domain an antibody attaches. Barnes continued to work on antibody structure when he established his own laboratory at Stanford University, studying antibodies that target the N-terminal domain. He is interested in identifying antibodies that can attack all coronaviruses. In September 2022 Science News named Barnes one of ten "Scientists to watch". Awards and honors 2017 Howard Hughes Medical Institute Hanna H. Gray Fellow 2022 Rita Allen Foundation Scholar Selected publications Personal life Christopher is married to scientist Naima G. Sharaf and has two sons. References 1986 births Living people People from Huntersville, North Carolina University of North Carolina at Chapel Hill alumni Stanford University Department of Chemistry faculty 21st-century American chemists Structural biologists
Christopher O. Barnes
[ "Chemistry" ]
515
[ "Structural biologists", "Structural biology" ]
72,413,138
https://en.wikipedia.org/wiki/Palisa-Wolf-Star%20Map
The Palisa-Wolf-Star Map or Palisa-Wolf-Star Atlas is a map series produced between 1900 and 1916 and published between 1900 and 1931, which shows the entire starry sky visible in Europe in 210 large-scale sheets. It was published at the suggestion of the Viennese astronomer Johann Palisa (1848–1925) together with his younger colleague Max Wolf in Heidelberg to facilitate the discovery and tracking of new asteroids. At that time, Palisa had already discovered about 100 of these minor planets through visual observation at the large refractor of the University Observatory in Vienna, while Wolf was the first researcher to use astrophotography for this purpose at the new Heidelberg-Königstuhl State Observatory. The 210 star charts, each in the format 11 by 9 inches, were recorded in Heidelberg and systematically cut out according to celestial coordinates. Palisa was not only concerned with facilitating the discovery of the many minor planets to be expected, but also with the possibility of finding "lost" asteroids again and thereby determining their orbits more precisely. The star atlas became an important tool for planetoid researchers for several decades. What is also remarkable about this work is that two astronomers competing in their field of research were able to decide to cooperate. In the following decade, the "photo pioneer" Wolf surpassed Palisa in the number of discovered asteroids (more than 200 versus Palisa's 123), because these small bodies quickly revealed themselves in the sky photographs by a short line trace, while Palisa had to find them at the telescope by comparing the sky with the star chart. References External links The star atlas online at GAVO: Star maps
Palisa-Wolf-Star Map
[ "Astronomy" ]
330
[ "Outer space", "Astronomy stubs", "Outer space stubs" ]
72,413,930
https://en.wikipedia.org/wiki/Janos%20Hajdu%20%28biophysicist%29
Janos Hajdu (born 17 September 1948) is a Swedish/Hungarian scientist who has made contributions to biochemistry, biophysics, and the science of X-ray free-electron lasers. He is a professor of molecular biophysics at Uppsala University and a leading scientist at the European Extreme Light Infrastructure ERIC in Prague. Education Hajdu matriculated in 1967 from Eötvös József Gimnasium, a grammar school in Budapest. At the age of 16, he won a science prize, which allowed him to study and perform experiments in the Institute of Medical Chemistry of the Semmelweis University Medical School in Budapest (head: Brunó Ferenc Straub). His first publication was produced in this institute. In 1968, he was admitted to Eötvös Loránd University, where he received an M.Sc. in chemistry (1973). He obtained a Ph.D. in biology in 1980, "Symmetry and Structural Changes in Oligomeric Proteins", and a D.Sc. in physics in 1993, "Macromolecular Structure, Function and Dynamics: X-Ray Diffraction Studies in Four Dimensions". He left Hungary in 1981. Appointments 2003–present: Professor of Molecular Biophysics, Uppsala University, Sweden. 1995-2003: Professor of Biochemistry, Department of Biochemistry, Uppsala University, Sweden. 2007-2008: Professor of Photon Science, Stanford University, USA. 2016–present: Lead scientist at the European Extreme Light Infrastructure, Dolní Břežany, Czech Republic. 2011-2016: Adviser to the Directors of the European XFEL GmbH, Hamburg, Germany. 1988-1996: Head of an MRC Laboratory at the Laboratory of Molecular Biophysics, Oxford, UK. 1988-1996: Lecturer in biochemistry/biophysics, Christ Church, Oxford University, U.K. 1983-1988: MRC/SERC Fellow at the Laboratory of Molecular Biophysics, Oxford Univ., U.K. 1981-1983: EMBO Fellow at the Laboratory of Molecular Biophysics, Oxford Univ., U.K. 1976: Roche fellow, Institute of Medical Chemistry, University of Bern, Switzerland. 1973-2003: Research Fellow, Institute of Enzymology, Hungarian Academy of Sciences, Budapest. Career and research Hajdu's first employment (1973) was with the Institute of Enzymology of the Hungarian Academy of Sciences (head: Brunó Ferenc Straub). In his early work, Hajdu exploited chemistry to determine the symmetry of multi-subunit protein complexes, and characterised structural transitions in these systems. Following an invitation by Louise Johnson, Hajdu joined Johnson's crystallography team in Oxford in 1981 and spent 16 years in the Laboratory of Molecular Biophysics (1981-1996). He was first a postdoctoral research fellow and later the head of an MRC laboratory there. In 1988, he was elected a lecturer of Christ Church, Oxford, teaching biochemistry and biophysics. In 1981, the first dedicated Synchrotron Radiation Source came to life in Daresbury, and Hajdu and his colleagues were among the first users of the facility. The new synchrotron gave them the means to pursue a new direction in structural biology, which was not only to determine the structure of proteins, but to observe them functioning. The very first time-resolved X-ray diffraction experiments produced 3D movies of catalysis in crystalline enzymes and revealed structural transitions in viruses. This was a path to understanding the workings of molecular machineries, but radiation damage to the sample during exposure was a serious limitation. Hajdu realised there may be a way to outrun radiation damage processes by using extremely short and intense X-ray pulses (the speed of light vs.
the speed of the shock wave of damage formation). Experimental tests had to wait until the arrival of the first X-ray free-electron lasers, delivering femtosecond X-ray pulses with a peak brightness exceeding synchrotrons by a factor of ten billion. Funding for building such X-ray free-electron lasers faced hurdles. The turning point occurred in 1996, when Hajdu took up a chair at Uppsala University and set up a European research network to explore the physical limits of imaging. The project engaged an interdisciplinary approach, drawing upon structural sciences, plasma physics, optics and mathematics. Hajdu presented their findings to the US Department of Energy in 2000 as part of the scientific justification for building the first hard X-ray free-electron laser, the Linac Coherent Light Source (LCLS), at Stanford. The proof-of-principle experiment was performed in 2006 with a soft X-ray free-electron laser in Hamburg, where Hajdu, with Henry N. Chapman and colleagues, demonstrated experimentally that outrunning radiation damage is possible with a femtosecond X-ray pulse. The pulse turned the nano-patterned sample into a 60,000 K plasma, but not before a diffraction pattern of the virtually undamaged object could be recorded. The object was reconstructed to the diffraction-limited resolution. When the first hard X-ray free-electron laser (LCLS) was turned on in 2009, they also showed that "diffraction before destruction" or "observation before destruction" extends to the atomic scale, launching the methods of serial nano-crystallography, ultrafast diffractive imaging, flash radiography and spectroscopy, and applications in fusion energy research. Achievements X-ray crystallography in four dimensions: First atomic movies on chemical reactions. Development of Laue crystallography: First structural results for proteins and viruses. Proposal for a link between late steps in protein folding and structural changes in protein function. Discovery of X-ray driven catalysis in redox enzymes. Structures for the family of mononuclear ferrous enzymes. "Diffraction before destruction", the physical limits of imaging. The scientific case (in imaging) that assured funding for the first hard X-ray free-electron lasers in the US (the LCLS at Stanford) and in Europe (the European XFEL, Hamburg). Scientific justification for building the European Extreme Light Infrastructure. Development of X-ray free-electron laser based structural sciences. Honours 2001: Member of the Kungliga Vetenskap-Societeten i Uppsala (The Swedish Royal Society). 2013: Honorary Member, Hungarian Academy of Sciences. Awards 2022: The Gregori Aminoff Prize for "fundamental contributions to the development of X-ray free-electron laser based structural biology" and "explosive studies of biological macromolecules", together with Henry N. Chapman and John C. H. Spence. 2015: Fabinyi Rudolf Medal "for outstanding contribution to chemistry". 2012: Rudbeck Medal "for extraordinarily prominent achievements in science, to be conferred primarily for such accomplishments or findings attained at Uppsala University". 2011-2016: ERC Advanced Investigator Award "X-Ray Lasers, Photon Science, and Structural Biology" (XLASERS) ERC 291602. 2011-2016: Knut and Alice Wallenberg Award "Photon Science and X-Ray Lasers (BRIGHT-LIGHT)" KAW 2011.0081. 2005: Centre of Excellence Award, Swedish Research Council. 2001: Excellent Research Environment Award, Swedish Research Council.
Hajdu is Main Editor of the Journal of Applied Crystallography and Editorial Board Member of Nature's Scientific Data. See also Henry Chapman References Living people 1948 births Scientists from Budapest Eötvös Loránd University alumni Academics of the University of Oxford Hungarian biophysicists Academic staff of Uppsala University 21st-century Hungarian physicists Crystallographers
Janos Hajdu (biophysicist)
[ "Chemistry", "Materials_science" ]
1,580
[ "Crystallographers", "Crystallography" ]
72,414,976
https://en.wikipedia.org/wiki/Biodesulfurization
Biodesulfurization is the process of removing sulfur from crude oil through the use of microorganisms or their enzymes. Background Crude oil contains sulfur in its composition, sulfur being its most abundant element after carbon and hydrogen. Depending on its source, the amount of sulfur present in crude oil can range from 0.05 to 10%. Accordingly, the oil can be classified as sweet or sour if the sulfur concentration is below or above 0.5%, respectively. The combustion of crude oil releases sulfur oxides (SOx) to the atmosphere, which are harmful to public health and contribute to serious environmental effects such as air pollution and acid rain. In addition, the sulfur content in crude oil is a major problem for refineries, as it promotes the corrosion of equipment and the poisoning of the noble-metal catalysts. The levels of sulfur in any oil field are too high for the fossil fuels derived from it (such as gasoline, diesel, or jet fuel) to be used in combustion engines without pre-treatment to remove organosulfur compounds. Reducing the concentration of sulfur in crude oil is necessary to mitigate one of the leading sources of the harmful health and environmental effects caused by its combustion. In this sense, the European Union has taken steps to decrease the sulfur content in diesel below 10 ppm, while the US has made efforts to restrict the sulfur content in diesel and gasoline to a maximum of 15 ppm. The reduction of sulfur compounds in oil fuels can be achieved by a process named desulfurization. Methods used for desulfurization include, among others, hydrodesulfurization, oxidative desulfurization, extractive desulfurization, and extraction by ionic liquids. Despite their efficiency at reducing sulfur content, the conventional desulfurization methods are still accountable for a significant amount of the CO2 emissions associated with the crude oil refining process, releasing up to 9000 metric tons per year. Furthermore, these processes usually require large amounts of energy and entail massive costs for the industries that employ them. A greener and complementary alternative to the conventional desulfurization methods is biodesulfurization. Biodesulfurization implementation and pathways It has been observed that there are sulfur-dependent bacteria that make use of the sulfur in sulfur-containing compounds in their life cycles (either in their growth or metabolic processes), producing molecules with lower or no sulfur content. In particular, heteroaromatic compounds, namely thiophenes and their derivatives, were observed to constitute important substrates for bacteria. Biodesulfurization is an attractive alternative for sulfur removal, particularly in the crude oil fractions where sulfur heterocycles are abundant. To date, pilot attempts at industrial application have resorted to the use of whole bacterial systems, because biodesulfurization involves a sequential cascade of reactions by different enzymes and a large number of cofactors participating in redox reactions either with the sulfur atom or with molecular oxygen. However, they lacked the scalability desired for an industrial setup due to overall low enzyme efficiency, product feedback inhibition and toxicity, or inadequate conditions for long-term bacterial growth. While cell-free recombinant enzymes would be desirable, known implementations are still well below the efficiency achieved by whole-cell ones.
There are two main pathways through which bacteria remove sulfur from sulfur-containing compounds: ring-destructive pathways and sulfur-specific pathways. The ring-destructive pathways consist of the selective cleavage of carbon-carbon bonds with release of small organic sulfides soluble in the surrounding aqueous environment, whereas the sulfur-specific pathways rely on successive sulfur redox reactions to release sulfur either as sulfide or sulfite anions as byproducts. The latter have thus been considered a very promising route to produce sulfur-free compounds with a high calorific content, in particular in the desulfurization of the sulfur heterocycles abundant in sour crude oil fractions. The most studied ring-destructive pathway is the Kodama pathway, initially identified in Pseudomonas abikonensis and Pseudomonas jijani. The pathway comprises four main steps: i) the successive hydroxylation, by NADH-dependent dioxygenases, of the carbons in one of the aromatic rings, followed by ii) the dehydrogenation of the ring by a NAD+ cofactor, and iii) a further oxygenation promoting ring cleavage and formation of a pyruvyl branch, concluding with iv) the hydrolysis of the pyruvyl substituent to release pyruvate and the remainder of the substrate. Since the end products of the pathway are still water-soluble sulfur compounds, the pathway has often been disregarded as an appealing route for industrial applications, in particular by the oil industry. The most well-studied sulfur-specific pathway is the 4S pathway, first discovered in the bacterium Rhodococcus erythropolis (strain IGTS8), which was observed to remove sulfur from dibenzothiophenes and derivatives in three steps: i) a double oxidation of the sulfur (to sulfoxide and sulfone) performed by a flavin-dependent monooxygenase, followed by ii) a carbon-sulfur bond cleavage by a second flavin-dependent monooxygenase, and iii) a desulfination reaction through which 2-hydroxybiphenyl and sulfite are produced. In total, four enzymes are required for the process: three are encoded in the dszABC genes (the flavin-dependent monooxygenases DszA and DszC, and the desulfinase DszB), and a fourth, chromosomally encoded enzyme, DszD, is responsible for the regeneration and supply of the flavin mononucleotide cofactor required by DszA and DszC. It has also been observed that some anaerobic bacteria can use an alternative sulfur-specific pathway to produce hydrogen sulfide instead. However, to date, the desulfurization of fractions such as bitumen, vacuum gas oil, or deasphalted oil has not been observed. The aerobic 4S pathway The 4S pathway is a sulfur-specific metabolic pathway of oxidative desulfurization that converts dibenzothiophene (DBT) into 2-hydroxybiphenyl and sulfite. It uses a total of four NADH molecules (three required by DszD to generate FMNH2 and a fourth to regenerate the FMN-oxide byproduct of DszA) and three molecules of oxygen, thus producing NAD+ and water as byproducts. DszC is the first enzyme to intervene in the pathway, acting in two sequential steps: it catalyzes the double oxidation of DBT, first into DBT-sulfoxide and then into DBT-sulfone. It requires FMNH2 as cofactor, which is supplied by DszD, and molecular oxygen. For that reason, the efficiency of this enzyme depends on the activity of DszD and on environmental oxygenation.
The reaction catalyzed by DszC involves three phases: 1) molecular oxygen activation, leading to the formation of a hydroperoxyflavin intermediate (C4aOOH); 2) oxidation of DBT to DBTO; and 3) dehydration of FMN. DszC is the second least efficient enzyme in the pathway, with a particularly low kcat of 1.6 ± 0.3 min−1. It is also severely affected by feedback inhibition, caused mostly by HPBS and 2-HBP, the products of DszA and DszB, respectively. For that reason, it has been targeted for optimization through enzyme engineering. DszA is responsible for the third step of the pathway. It catalyzes the first carbon-sulfur bond cleavage, converting DBT-sulfone into 2-hydroxybiphenyl-2-sulfinate. Like DszC, DszA also requires FMNH2 provided by DszD and molecular oxygen for its catalytic cycle. Nonetheless, the reaction rate of DszA is about seven times faster than that of DszC. However, like DszC, it suffers feedback inhibition by the final product of the pathway, 2-HBP. Finally, the desulfinase (DszB) cleaves the remaining carbon-sulfur bond in 2-hydroxybiphenyl-2-sulfinate, converting it into the sulfur-free 2-hydroxybiphenyl in a two-step mechanism. In the first, and rate-limiting, step, 2-hydroxybiphenyl-2-sulfinate is protonated by Cys27 at its electrophilic carbon, leading to the cleavage of the carbon-sulfur bond and displacement of SO2. In the second step, a water molecule is deprotonated by Cys27, followed by the hydroxide attack on SO2, forming HSO3−. DszB is the least efficient enzyme in the pathway, making it an appealing target for enhancement through protein engineering. The NADH-FMN oxidoreductase (DszD) regenerates the FMNH2 cofactor needed for the reactions catalyzed by DszC and DszA through the oxidation of NADH to NAD+ in a two-step mechanism. The first step corresponds to a hydride transfer from the nicotinamide moiety of NADH to the central nitrogen in the isoalloxazine moiety of the oxidized FMN, forming FMNH. In the second step, a water molecule protonates the N1 atom of FMNH, giving FMNH2. Engineering of 4S pathway enzymes The desulfurization rate for the wild-type 4S pathway enzymes is low when compared to the rate that needs to be achieved for a viable application in the industrial sector. An increase of 500-fold in the overall rate of the pathway is the improvement required for an efficient application of this biodesulfurization method. Directed evolution, rational design, or a combination of both are some of the strategies that have been applied to tackle the lack of catalytic efficiency and stability of the 4S enzymes. The best improvement of the 4S pathway to date was obtained by a directed evolution approach in which Rhodococcus strains were transformed with a plasmid encoding a modified dsz operon (which encodes DszA, DszB and DszC). After 40 subculturing events in a medium in which DBT was the sole sulfur source, the modified Rhodococcus strains presented a 35-fold improvement. The strong feedback inhibition of DszC was also tackled by a combined directed evolution and rational design approach to desensitize DszC to the 4S pathway product, HBP. The bacterial strain expressing the DszC A101K mutant showed higher activity relative to the wild-type strain. Additionally, docking of HBP to the protein revealed that HBP forms a π-interaction with Trp327, thus inhibiting DszC. The A101K/W327C (AKWC) double mutant proved to be desensitized to low HBP concentrations, and the bacterial strain expressing the AKWC DszC was 14-fold more efficient than the wild-type strain.
DszB, the final enzyme in the pathway, is also one of the slowest, with a turnover rate of 1.7 ± 0.2 min−1, making it a major bottleneck of the 4S pathway. A computational rational-design approach determined a set of mutations that could accelerate the charge transfer occurring in the active site during the DszB reaction mechanism, reducing the activation energy for the reaction and potentially increasing its turnover rate. DszB's catalytic efficiency and thermostability were also addressed in an experimental mutagenesis approach; the Y63F/Q65H double mutant showed an increase in the enzyme's thermostability without loss of catalytic efficiency. DszD has also been targeted for rate-enhancing mutations at the Thr62 residue. Mutation of Thr62 to Asn and Ala increased its activity 5- and 7-fold, respectively. A computational study demonstrated that substitutions at position 62 of the DszD sequence have a major impact on the activation energy for the hydride transfer reaction from NADH to FAD. Mutating Thr62 to an Asp residue returns the lowest activation energy of all possible mutants at this position, due to the stabilizing effect of the negative charge of Asp. See also Desulfurization References Desulfurization Biochemical engineering
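The kinetic numbers above can be turned into a rough per-enzyme throughput comparison. The Python sketch below is illustrative only: the DszA value is a hypothetical estimate derived from the "about seven times faster than DszC" remark, and equal enzyme copy numbers are assumed.

```python
# Per-enzyme turnover (min^-1), from the kcat values quoted above.
kcat = {
    "DszC (DBT -> DBT-sulfone)": 1.6,        # 1.6 +/- 0.3 min^-1
    "DszA (DBT-sulfone -> HPBS)": 1.6 * 7,   # assumed ~7x DszC (hypothetical)
    "DszB (HPBS -> 2-HBP + sulfite)": 1.7,   # 1.7 +/- 0.2 min^-1
}

for enzyme, k in kcat.items():
    print(f"{enzyme}: {k:.1f} min^-1 = {k * 60:.0f} turnovers/hour")

# With equal copy numbers of each enzyme, steady-state pathway flux is
# capped by the slowest step; DszC and DszB are comparable bottlenecks,
# which is why both are targets of the engineering work described above.
slowest = min(kcat, key=kcat.get)
print("Pathway-limiting step:", slowest)
```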
Biodesulfurization
[ "Chemistry", "Engineering", "Biology" ]
2,615
[ "Biological engineering", "Desulfurization", "Separation processes", "Chemical engineering", "Biochemical engineering", "Biochemistry" ]
72,415,186
https://en.wikipedia.org/wiki/Legendrea
Legendrea is a genus of extremely rare ciliates first described by French biologist Emmanuel Fauré-Fremiet in 1908, rediscovered and re-examined in 2022. Classification The genus has 5 species including the type species, Legendrea loyezae, described in 1908. Other genera (e.g. Lacerus and Thysanomorpha) were only distinguished from Legendrea by their physical appearance, not by their affinities. These genera were misidentifications and were synonymised with Legendrea along with their respective species. The first true taxonomic assignment of the ciliate was made in December 2022. The current 5 species are: Legendrea bellerophon [=Thysanomorpha bellerophon] Legendrea crassa [=Penardiella crassa] Legendrea interrupta Legendrea loyezae Legendrea pespelicani [=Lacerus pespelicani] Taxonomic affinities The species are distinguishable from one another by the length of their finger-like tentacles. These tentacles are located at the rear end of the cell. Varied descriptions of these tentacles have been attributed to their different morphologies when swimming versus at rest, such as in the case of L. loyezae. Phylogenetic analysis of the 18S rRNA gene of L. loyezae placed the genus within the family Spathidiidae, taxonomically ranked under the order Haptorida. A shorter 18S rRNA gene sequence was used to classify the species, so this affinity should be interpreted with caution, as it is likely to change as more data become available. The sequence used to determine the identity of L. loyezae implies that it forms a sister group with the sequences from the re-identified Epispathidium papilliferum as well as an undescribed species of the same genus. The Epispathidium possessed protruding papillae that are analogous to those found on L. loyezae, although the papillae on Epispathidium were only present on its oral region. Genus divisions Although A. Jankowski did not directly observe any members of Legendrea, he published a revision of the genus that divided Legendrea into two genera based on morphological differences: Lacerus and Thysanomorpha. Legendrea bellerophon was reclassified to the genus Thysanomorpha and was renamed Thysanomorpha bellerophon, re-described as having a serrated body edge/surface with an uneven series of outgrowths with trichomes. These names of the species and genera were in use by subsequent publications, but were challenged by Weiss et al. in 2022 as misidentifications of Legendrea species. References Litostomatea Ciliate genera Microscopic eukaryotes Taxa described in 1908
Legendrea
[ "Biology" ]
584
[ "Eukaryotes", "Microorganisms", "Microscopic eukaryotes" ]
72,416,325
https://en.wikipedia.org/wiki/Sasikanth%20Manipatruni
Sasikanth Manipatruni is an American engineer and inventor in the fields of computer engineering, integrated circuit technology, materials engineering, and semiconductor device fabrication. Manipatruni contributed to developments in silicon photonics, spintronics, and quantum materials. Manipatruni is a co-author of 50 research papers and ~400 patents (cited about 10,000 times) in the areas of electro-optic modulators, cavity optomechanics, nanophotonics and optical interconnects, spintronics, and new logic devices for the extension of Moore's law. His work has appeared in Nature, Nature Physics, Nature Communications, Science Advances, and Physical Review Letters. Early life and education Manipatruni completed his schooling at Jawahar Navodaya Vidyalaya. He received a bachelor's degree in Electrical Engineering and Physics from IIT Delhi in 2005, where he graduated with the institute silver medal. He also completed research under the Kishore Vaigyanik Protsahan Yojana at the Indian Institute of Science, working at the Inter-University Centre for Astronomy and Astrophysics, and in optimal control at the Swiss Federal Institute of Technology in Zurich. Research career Manipatruni received his Ph.D. in Electrical Engineering with a minor in applied engineering physics from Cornell University. The title of his thesis was "Scaling silicon nanophotonic interconnects: silicon electrooptic modulators, slowlight & optomechanical devices". His thesis advisors were Michal Lipson and Alexander Gaeta at Cornell University. He has co-authored academic research with Michal Lipson, Alexander Gaeta, Keren Bergman, Ramamoorthy Ramesh, Lane W. Martin, Naresh Shanbhag, Jian-Ping Wang, Paul McEuen, Christopher J. Hardy, Felix Casanova, Ehsan Afshari, Alyssa Apsel, Jacob T. Robinson, and Manuel Bibes, spanning condensed matter physics, electronics and devices, photonics, circuit theory, computer architecture, and hardware for artificial intelligence. Silicon optical links Manipatruni's PhD thesis focused on developing the then-nascent field of silicon photonics by progressively scaling the speed of electro-optic modulation from 1 GHz to 12.5 Gbit/s, 18 Gbit/s, and 50 Gbit/s on a single physical optical channel driven by a silicon photonic component. The significance of silicon for optical uses can be understood as follows: nearly 95% of modern integrated circuit technology is based on silicon semiconductors, which offer high productivity in semiconductor device fabrication due to the use of large single-crystal wafers and extraordinary control of interface quality. However, photonic integrated circuits are still largely manufactured using III-V and II-VI compound semiconductor materials, whose engineering lags the silicon industry by several decades (judged by the number of wafers and devices produced per year). By showing that silicon can be used as a material to turn a light signal on and off, silicon electro-optic modulators allow the high-quality engineering developed for the electronics industry to be adopted by the photonics/optics industry. This is the foundational argument used by silicon electro-optics researchers. This work was closely paralleled at leading industrial research groups at Intel, IBM, and Luxtera during 2005–2010, with industry adopting and improving various methods developed at academic research labs.
Manipatruni's work showed that it is practically possible to develop free-carrier injection modulators (in contrast to carrier-depletion modulators) that reach high-speed modulation by engineering the injection of free carriers via pre-amplification and back-to-back connected injection-mode devices. In collaboration with Keren Bergman at Columbia University, micro-ring modulator research led to a number of firsts in long-distance uses of silicon photonics utilizing silicon-based injection-mode electro-optic modulators: the first demonstration of long-haul transmission using silicon microring modulators, the first error-free transmission of microring-modulated BPSK, the first demonstration of 80-km long-haul transmission of 12.5-Gb/s data using a silicon microring resonator electro-optic modulator, and the first experimental bit-error-rate validation of a 12.5-Gb/s silicon modulator enabling photonic networks-on-chip. These academic results have been applied in products widely deployed by Cisco and Intel. Application for computing and medical imaging Manipatruni, Lipson, and collaborators at Intel have projected a roadmap that requires the use of silicon micro-ring modulators to meet the bandwidth, linear bandwidth density (bandwidth per cross-section length), and area bandwidth density (bandwidth per area) of on-die communication links. While originally considered thermally unstable, by the early 2020s micro-ring modulators had received wide adoption for computing needs at Intel, Ayar Labs, and GlobalFoundries, and in varied optical interconnect usages. The optimal energy of an on-die optical link is expressed in terms of: the optimal detector voltage (maintaining the bit error rate) and the detector capacitance; the modulator drive voltage; the electro-optic volume of the optical cavity being stabilized, the refractive-index change per unit carrier concentration, and the spectral sensitivity of the device to refractive-index change, which together set the change in optical transmission; the bandwidth B of the link at the frequency F of the data being serialized; and the power Ptune needed to keep the resonator operational. Manipatruni and Christopher J. Hardy applied integrated photonic links to magnetic resonance imaging to improve the signal collection rate from MRI machines via the signal-collection coils, while working at General Electric's GE Global Research facility. Optical transduction of the MRI signals can allow significantly larger signal-collection arrays within the MRI system, increasing signal throughput, reducing the time to collect an image, and reducing the weight of the coils and the cost of MRI imaging. Cavity optomechanics and optical radiation pressure In 2009, Manipatruni first proposed that optical radiation pressure leads to non-reciprocity in micro-cavity optomechanics in the classical electromagnetic domain, without the use of magnetic isolators. In classical Newtonian optics, it was understood that light rays must be able to retrace their path through a given combination of optical media. However, once the momentum of light inside a movable medium is taken into account, this need not be true in all cases. This work proposed that breaking of reciprocity (i.e., the properties of a medium for forward- and backward-moving light can differ) is observable in microscale optomechanical systems due to their small mass, low mechanical losses, and high amplification of light due to long confinement times.
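The full link-energy expression was lost from this text, but the trade-off among the parameters listed above can be illustrated with a simplified CV^2-type estimate plus amortized tuning power; this is a toy model with invented values, not the published formula:

# Toy energy-per-bit estimate for an on-die optical link.
# Simplification: dynamic CV^2/2 switching energy for the modulator and
# detector, plus resonator tuning power amortized over the bit rate.
# All parameter values below are invented for illustration.
C_det = 10e-15   # detector capacitance [F]
V_det = 0.1      # detector voltage needed to hold the target BER [V]
C_mod = 30e-15   # modulator capacitance [F]
V_mod = 1.0      # modulator drive voltage [V]
P_tune = 100e-6  # power to keep the resonator on-wavelength [W]
B = 10e9         # link bandwidth [bit/s]

E_bit = 0.5 * C_det * V_det**2 + 0.5 * C_mod * V_mod**2 + P_tune / B
print(f"energy per bit ~ {E_bit * 1e15:.1f} fJ/bit")  # ~25 fJ/bit here

The sketch makes the qualitative point of the roadmap argument: at high bit rates the static tuning power is amortized over many bits, so the modulator drive voltage and capacitance dominate the energy per bit.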
Later work has established the breaking of reciprocity in a number of nanophotonic settings, including time modulation and parametric effects in cavities. Manipatruni and Lipson have also applied the nascent devices of silicon photonics to optical synchronization and to the generation of non-classical beams of light using optical non-linearities. Memory and spintronic devices Manipatruni worked on spintronics for the development of logic computing devices for computational nodes beyond the existing limits of silicon-based transistors. He developed an extended modified nodal analysis that uses vector circuit theory for spin-based currents and voltages, which allows the use of spin components inside the VLSI design flows used widely in industry. The circuit modeling is based on theoretical work by Supriyo Datta and Gerrit E. W. Bauer. Manipatruni's spin circuit models were extensively applied to the development of spin logic circuits, spin interconnects, and domain-wall interconnects, and to benchmarking logic and memory devices utilizing spin and magnetic circuits. In 2011, building on the discovery by Robert Buhrman, Daniel Ralph, and Ioan Miron of the spin Hall effect and spin–orbit interaction in period-6 transition metals, Manipatruni proposed an integrated spin-Hall-effect memory (later named spin-orbit memory to capture the complex interplay of interface and bulk components of spin current generation) combined with modern fin field-effect transistors to address the growing difficulty with embedded static random-access memory in modern semiconductor process technology. SOT-MRAM for SRAM replacement spurred significant research and development, leading to successful demonstrations of SOT-MRAM combined with fin field-effect transistors in 22 nm and 14 nm processes at various foundries. Working with Jian-Ping Wang, Manipatruni and collaborators were able to show evidence of a fourth elemental ferromagnet. Given the rarity of ferromagnetic materials in elemental form at room temperature, use of a less rare element can help with the adoption of permanent-magnet-driven systems for electric vehicles. Computational logic devices and quantum materials In 2016, Manipatruni and collaborators proposed a number of changes to new logic device development by identifying the core criteria for logic devices intended for use beyond the 2 nm process. The continued slowdown of Moore's law, as evidenced by the slowdown of voltage scaling and lithographic node scaling and the increasing cost per wafer and complexity of fabs, indicated that Moore's law as it existed in the 2000–2010 era has changed to a less aggressive scaling paradigm. Manipatruni proposed that spintronic and multiferroic systems are leading candidates for achieving attojoule-class logic gates for computing, thereby enabling the continuation of Moore's law for transistor scaling. However, shifting the materials focus of computing towards oxides and topological materials requires a holistic approach addressing energy, stochasticity, and complexity.
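Modified nodal analysis, named above as the basis of the spin circuit models, is easiest to see on a purely electrical example; the sketch below solves a single voltage source driving two resistors. In the spin-circuit extension described above, each scalar conductance becomes a 4x4 block (charge plus three spin components); the values here are arbitrary.

import numpy as np

# Modified nodal analysis (MNA): unknowns are the node voltages v1, v2
# and the current i through the voltage source V between node 1 and ground.
# Circuit: V -- node1 -- R1 -- node2 -- R2 -- ground.
G1, G2 = 1.0 / 1e3, 1.0 / 2e3   # conductances of R1 = 1 kOhm, R2 = 2 kOhm
V = 1.0                          # source voltage [V]

A = np.array([
    [ G1, -G1,      1.0],   # KCL at node 1 (source branch current enters here)
    [-G1,  G1 + G2, 0.0],   # KCL at node 2
    [ 1.0, 0.0,     0.0],   # branch equation for the source: v1 = V
])
b = np.array([0.0, 0.0, V])

v1, v2, i_src = np.linalg.solve(A, b)
print(v1, v2, i_src)  # v2 = V * R2 / (R1 + R2) ~ 0.667 V; i_src sign follows the convention above

Stamping each element into a matrix this way is what lets spin devices be added to standard VLSI circuit solvers: a spin element simply stamps larger blocks into the same system of equations.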
The Manipatruni-Nikonov-Young figure of merit for computational quantum materials is defined as the ratio of the energy needed to switch a device at room temperature (that is, to reverse an order parameter such as the ferroelectric polarization or magnetization of the material) to the energy of thermodynamic stability of the material relative to the vacuum energy. This ratio is universally optimal for a ferroelectric material and compares favorably to spintronic and CMOS switching elements such as MOS transistors and BJTs. The framework (adopted by the SIA decadal plan) describes a unified computing approach that uses physical scaling (physics-based improvement in device energy and density), mathematical scaling (using information-theoretic improvements to allow higher error rates as devices scale to thermodynamic limits), and complexity scaling (architectural scaling that moves from distinct memory and logic units to AI-based architectures). Combining this with Shannon-inspired computing allows the physical stochastic errors inherent in highly scaled devices to be mitigated by information-theoretic techniques. Ian A. Young, Nikonov, and Manipatruni have provided a list of 10 outstanding problems in quantum materials as they pertain to computational devices. These problems have subsequently been addressed in numerous research works, leading to various improved device properties for a future computer technology beyond CMOS. The top problems listed as milestones and challenges for logic are as follows: Problems of magnetic/ferro-electric/multiferroic switching How to switch a magnetic/multiferroic (MF) state in a volume of 1,000 nm3 with a stability of 100 kBT and an energy of 1 aJ ~ 6.25 eV ~ 240 kT? What are the timescales involved with magnetoelectric/ferroelectric (FE)/MF switching of a magnet/FE/MF at scaled sizes? How to overcome the Larmor precession timescale of a ferromagnet? How to switch a scaled magnet/polarization switch with low stochastic errors? What are the fundamental mechanisms governing the switching errors and fatigue for scaled FE/ME switching? What is the right combination of materials/order parameters for practical magnetoelectric switching (for example, multiferroic FE/antiferromagnet (AFM) plus FM, paraelectric/AFM plus FM, piezoelectric plus magnetostriction)? Problems of magnetic/multiferroic/ferroelectric detection How to detect the state of a magnet/ferroelectric with a high read-out voltage >100 mV? For inverse spin–orbit effects, such as the spin galvanic effect/Edelstein effect, how to achieve λIREE > 10 nm with high resistivity? What is the scaling dependence of spin–orbit detection of the state of a magnet? How to detect the state of a perpendicular magnet with the spin–orbit effect? Problems of interconnects and complexity How to transfer the state of a magnet/FE over long distances on scaled wire sizes (<30-nm-wide wires with pitch <60 nm)? In particular, how to improve spin diffusion interconnects in non-magnetic conductors and magnon interconnects in magnetic interconnects? How to transduce a spintronic/multiferroic state to a photonic state (and vice versa) to enable very long-distance interconnects (>100 μm)? The back-end of CMOS comprises multiple layers of metal wires separated by a dielectric. Thus, making logic devices between these layers requires starting with an amorphous layer and a template for growth of the functional materials. How to integrate the magnetic/FE/MF materials in the back-end of the CMOS chip?
How to utilize stochastic switches (spin/FE) operating near practical thermodynamic conditions in a computing architecture? How to utilize the extreme scaling (with size, logic efficiency, and three-dimensional integration) feasible with spin/FE devices in a computer architecture in order to achieve 10 billion switches per chip? Magneto-electric spin-orbit logic is a design using this methodology for a new logical component that couples the magnetoelectric effect with spin-orbit effects. Compared to CMOS, MESO circuits could potentially require less energy for switching, a lower operating voltage, and a higher integration density. Selected publications and patents Manipatruni, Sasikanth; Nikonov, Dmitri E.; Lin, Chia-Ching; Gosavi, Tanay A.; Liu, Huichu; Prasad, Bhagwati; Huang, Yen-Lin; Bonturim, Everton; Ramamoorthy Ramesh; Young, Ian A. (2018-12-03). "Scalable energy-efficient magnetoelectric spin–orbit logic". Nature. 565 (7737): 35–42. doi:10.1038/s41586-018-0770-2. ISSN 0028-0836 Manipatruni, S., Nikonov, D.E. and Young, I.A., 2018. Beyond CMOS computing with spin and polarization. Nature Physics, 14(4), pp. 338–343 Manipatruni, S., Nikonov, D.E. and Young, I.A., 2014. Energy-delay performance of giant spin Hall effect switching for dense magnetic memory. Applied Physics Express, 7(10), p. 103001. Manipatruni, S., Nikonov, D.E. and Young, I.A., 2012. Modeling and design of spintronic integrated circuits. IEEE Transactions on Circuits and Systems I: Regular Papers, 59(12), pp. 2801–2814. Pham, V.T., Groen, I., Manipatruni, S., Choi, W.Y., Nikonov, D.E., Sagasta, E., Lin, C.C., Gosavi, T.A., Marty, A., Hueso, L.E. and Young, I.A., 2020. Spin–orbit magnetic state readout in scaled ferromagnetic/heavy metal nanostructures. Nature Electronics, 3(6), pp. 309–315. Chen, Z., Chen, Z., Kuo, C.Y., Tang, Y., Dedon, L.R., Li, Q., Zhang, L., Klewe, C., Huang, Y.L., Prasad, B. and Farhan, A., 2018. Complex strain evolution of polar and magnetic order in multiferroic BiFeO3 thin films. Nature Communications, 9(1), pp. 1–9. Xu, Q., Manipatruni, S., Schmidt, B., Shakya, J. and Lipson, M., 2007. 12.5 Gbit/s carrier-injection-based silicon micro-ring silicon modulators. Optics Express, 15(2), pp. 430–436. Manipatruni, S., Nikonov, D.E., Lin, C.C., Prasad, B., Huang, Y.L., Damodaran, A.R., Chen, Z., Ramesh, R. and Young, I.A., 2018. Voltage control of unidirectional anisotropy in ferromagnet-multiferroic system. Science Advances, 4(11), p.eaat4229. Zhang, M., Wiederhecker, G.S., Manipatruni, S., Barnard, A., McEuen, P. and Lipson, M., 2012. Synchronization of micromechanical oscillators using light. Physical Review Letters, 109(23), p. 233906. Manipatruni, S., Robinson, J.T. and Lipson, M., 2009. Optical nonreciprocity in optomechanical structures. Physical Review Letters, 102(21), p. 213903. Fang, M.Y.S., Manipatruni, S., Wierzynski, C., Khosrowshahi, A. and DeWeese, M.R., 2019. Design of optical neural networks with component imprecisions. Optics Express, 27(10), pp. 14009–14029. Chen, L., Preston, K., Manipatruni, S. and Lipson, M., 2009. Integrated GHz silicon photonic interconnect with micrometer-scale modulators and detectors. Optics Express, 17(17), pp. 15248–15256. Dutt, A., Luke, K., Manipatruni, S., Gaeta, A.L., Nussenzveig, P. and Lipson, M., 2015. On-chip optical squeezing. Physical Review Applied, 3(4), p. 044005. AI and in-memory computing Korgaonkar, K., Bhati, I., Liu, H., Gaur, J., Manipatruni, S., Subramoney, S., Karnik, T., Swanson, S., Young, I. and Wang, H., 2018, June.
Density tradeoffs of non-volatile memory as a replacement for SRAM based last level cache. In 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA) (pp. 315–327). IEEE. Pipeline circuit architecture to provide in-memory computation functionality, US20190057050A1 Low synch dedicated accelerator with in-memory computation capability, US20190056885A1 In-memory analog neural cache, US20190057304A1 See also Michal Lipson Christopher J. Hardy Keren Bergman Silicon photonics Magneto-Electric Spin-Orbit logic References American engineers Cornell University alumni American computer scientists Scientists from Andhra Pradesh Telugu people 21st-century American engineers Indian company founders Indian emigrants to the United States IIT Delhi alumni ETH Zurich alumni Scientists from Schenectady, New York 1984 births Living people Computer hardware engineers Spintronics
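As a quick check of the energy scales quoted in the milestones list above (1 aJ ~ 6.25 eV ~ 240 kT), the following converts between the three units using standard physical constants:

# Sanity check of the energy scales quoted above: 1 aJ in eV and in kT.
E = 1e-18               # 1 attojoule [J]
q = 1.602176634e-19     # elementary charge [C]
kB = 1.380649e-23       # Boltzmann constant [J/K]
T = 300.0               # room temperature [K]

print(E / q)         # ~6.24 eV
print(E / (kB * T))  # ~241 kT, matching the ~240 kT quoted above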
Sasikanth Manipatruni
[ "Physics", "Materials_science" ]
4,208
[ "Spintronics", "Condensed matter physics" ]
72,418,328
https://en.wikipedia.org/wiki/Stable%20phosphorus%20radicals
Stable and persistent phosphorus radicals are phosphorus-centred radicals that are isolable and can exist for at least short periods of time. Radicals consisting of main group elements are often very reactive and undergo uncontrollable reactions, notably dimerization and polymerization. The common strategies for stabilising these phosphorus radicals include delocalisation of the unpaired electron over a pi system or nearby electronegative atoms, and kinetic stabilisation with bulky ligands. Stable and persistent phosphorus radicals can be classified into three categories: neutral, cationic, and anionic radicals. Each of these classes involves various sub-classes, with neutral phosphorus radicals being the most extensively studied. Phosphorus exists as one isotope, 31P (I = 1/2), with large hyperfine couplings relative to other spin-active nuclei, making phosphorus radicals particularly attractive for spin-labelling experiments. Neutral phosphorus radicals Neutral phosphorus radicals include a large range of conformations with varying spin densities at the phosphorus. Generally, they can be categorised as mono- and bi/di-radicals (also referred to as bisradicals and biradicaloids) for species containing one or two radical phosphorus centres, respectively. Monoradicals In 1966, Muller et al. published the first electron paramagnetic resonance (EPR/ESR) spectra displaying evidence for the existence of phosphorus-containing radicals. Since then, a variety of phosphorus monoradicals have been synthesised and isolated. Common ones include phosphinyl (R2P•), phosphonyl (R2PO•), and phosphoranyl (R4P•) radicals. Synthesis Synthetic methods for obtaining neutral phosphorus monoradicals include photolytic reduction of trivalent phosphorus chlorides, P-P homolytic cleavage, single-electron oxidation of phosphines, and cleavage of P-S or P-Se bonds. The first persistent two-coordinate phosphorus-centred radicals, [(Me3Si)2N]2P• and [(Me3Si)2CH]2P•, were reported in 1976 by Lappert and co-workers. They are prepared by photolysis of the corresponding three-coordinate phosphorus chlorides in toluene in the presence of an electron-rich olefin. In 2000, the Power group found that this species can be synthesised from the dissolution, melting, or evaporation of the dimer. In 2001, Grützmacher et al. reported the first stable diphosphanyl radical, [Mes*MeP-PMes*]• (Mes* = 2,4,6-tri-tert-butylphenyl), from the reduction of the phosphonium salt [Mes*MeP-PMes*]+(O3SCF3)− in an acetonitrile solution containing tetrakis(dimethylamino)ethylene (TDE) at room temperature, yielding yellow crystals. The monomer is stable below −30 °C in the solid state for a few days. At room temperature the species decomposes in solution and in the solid state, with a half-life of 30 minutes at 3 × 10−2 M. The first structurally characterised phosphorus radical, [Me3SiNP(μ3-NtBu)3{μ3-Li(thf)}3X]• (X = Br, I), was synthesised by Armstrong et al. in 2004 by oxidation of the starting material with bromine or iodine in a mixture of toluene and THF at 297 K. This produces blue crystals that can be characterised by X-ray crystallography. The steric bulk of the alkyl-imido groups was identified as playing a major role in stabilising these radicals. In 2006, Ito et al. prepared an air-tolerant and thermally stable 1,3-diphosphacyclobutenyl radical. The sterically bulky phosphaalkyne (Mes*C≡P) is treated with 0.5 equiv of t-BuLi in THF to form a 1,3-diphosphaalkyl anion, which is then oxidised with iodine solution to form a red product.
The species is a planar four-membered diphosphacyclobutane (C2P2) ring, with the Mes* groups at torsional angles to the C2P2 plane. Metal-stabilised radicals In 2007, Cummins et al. synthesised a phosphorus radical using nitridovanadium trisanilide metallo-ligands, similar in form to Lappert, Power, and co-workers' "jack-in-the-box" diphosphines. This is made by synthesis of the radical precursor ClP[NV{N(Np)Ar}3]2, followed by its one-electron reduction with Ti[N(tBu)Ar]3 or potassium graphite to yield dark brown crystals in 77% yield. EPR data showed delocalisation of electron spin across the two 51V and one 31P nuclei. This was consistent with computation, supporting the reported resonance structures. This delocalisation across the vanadium atoms was identified as the source of stabilisation for this species, owing to the ease with which transition metals undergo one-electron chemistry. Cummins and co-workers postulated that the p-character of the system could be tuned by changing the metal centres. Other metal-stabilised radicals have been reported by Scheer et al. and Schneider et al., using ligands containing tungsten and osmium, respectively. Structure and properties As previously mentioned, kinetic stabilisation through bulky ligands has been an effective strategy for producing persistent phosphorus radicals. Delocalisation of the electron has also shown a stabilising effect on phosphorus radical species. This conversely results in more delocalised spin densities and lower coupling constants relative to 31P-localised electron spin. For this reason, the spin localisation on the phosphorus atom varies widely between different phosphorus radical species. Cyclic radicals like that of Ito et al. show delocalisation across the ring. In this case, X-ray crystallography, EPR spectroscopy, and ab initio calculations found that 80-90% of the spin was delocalised on the carbons in the C2P2 ring and the rest on the phosphorus atoms. Despite this, the 31P hyperfine coupling constants show spectroscopic properties similar to organic radicals that contain conjugated P=C double bonds, justifying the resonance structure used for this species. The phosphinyl radicals synthesised by Lappert and co-workers were found to be stable at room temperature for periods of over 15 days, with no effect from short-term heating at 360 K. This stability was assigned to the steric bulk of the substituents and the absence of beta-hydrogen atoms. A structural study of this species conducted using X-ray crystallography, gas-phase electron diffraction, and ab initio molecular orbital calculations found that the source of this stability was not the bulkiness of the CH(SiMe3)2 ligands but the release of strain energy during homolytic cleavage of the P-P bond of the dimer, which favoured the existence of the radical. The dimer shows a syn,anti conformation, which allows for better packing but has excessive crowding at the trimethylsilyl groups, while the radical monomer displays a syn,syn conformation. Theoretical calculations showed that the process of cleaving the P-P bond (endothermic), relaxation to release steric strain, and rotation about the P-C bond to yield the syn,syn conformation of the monomer radical (exothermic by 67.5 kJ for each unit) is an overall exothermic process. The stability of this species can therefore be attributed to the release of strain energy through reorganisation of the ligands as the dimer converts to the radical monomer.
This effect has been observed in other systems containing the CH(SiMe3)2 ligand and was dubbed the "jack-in-the-box" model. Other ligands with similar flexibility and ability to undergo conformational changes were identified as PnR2 (Pn = P, As, Sb) and ERR'2 (E = Si, Ge, Sn; R' = bulky ligand). In 2022, Streubel and co-workers investigated the electron density distribution across centres in metal-coordinated phosphanoxyl complexes. This study showed that tungsten-containing radical complexes have small amounts of spin density on the metal nuclei, while in the case of manganese and iron the spins are purely metal-centred. Biradicals Biradicals are molecules bearing two unpaired electrons. These radicals can interact ferromagnetically (triplet), antiferromagnetically (open-shell singlet), or not interact at all (two-doublet). Biradicaloids/diradicaloids are a class of biradicals with significant radical-centre interaction. Synthesis The first phosphorus biradical was reported in 2011 by T. Beweries and co-workers. The biradicaloid [P(μ-NR)]2 (R = Hyp, Ter) was synthesised by the reduction of cyclo-1,3-diphospha(III)-2,4-diazanes using [{Cp2TiCl}2] as the reducing agent. The bulky Ter (2,6-dimesitylphenyl) and Hyp (hypersilyl) substituents provide a large stabilising effect. This effect is more pronounced with Ter, where the biradical is stable under an inert atmosphere in the solid state for long periods of time at temperatures up to 224 °C. Computational studies determined that the [P(μ-NTer)]2 radical shows an open-shell singlet ground state with biradical character. Villinger et al. later synthesised a stable cyclopentane-1,3-diyl biradical by insertion of CO into a P–N bond of the diphosphadiazanediyl. In 2017, D. Rottschäfer et al. reported an N-heterocyclic vinylidene-stabilised singlet biradicaloid phosphorus compound, [(IPr)CP]2 (IPr = 1,3-bis(2,6-diisopropylphenyl)imidazol-2-ylidene). Significant π-electron density is transferred to the C2P2 ring. The species was found to be diamagnetic with temperature-independent NMR resonances, so it can be considered a non-Kekulé molecule. Structure and properties The species reported by Villinger et al. can react with a phosphaalkyne, forming a five-membered P2N2C heterocycle with a P-C bridge. It can also undergo halogenation and reaction with elemental sulfur. Characterisation Phosphorus radicals are commonly characterised by EPR/ESR to elucidate the spin localisation across the radical species. Higher coupling constants are indicative of higher localisation on the phosphorus nuclei. Quantum chemical calculations on these systems are also used to support the experimental data. Before the X-ray crystallographic characterisation by Armstrong et al., the structure of the phosphorus-centred radical [(Me3Si)2CH]2P• had been determined by electron diffraction. The diphosphanyl radical [Mes*MeP-PMes*]• had been stabilised through doping into crystals of Mes*MePPMeMes*. The radical synthesised by Armstrong et al. was found to exist as a distorted PN3Li3X cube in the solid state. They found that upon dissolution in THF this cubic structure is disrupted, leaving the species to form a solvent-separated ion pair. Phosphorus radical cations Synthesis Phosphorus radical cations are often obtained from the one-electron oxidation of diphosphinidenes and phosphaalkenes.
In 2010, the Bertrand group found that carbene-stabilised diphosphinidenes can undergo one-electron oxidation in toluene with Ph3C+B(C6F5)4− at room temperature under an inert atmosphere to produce radical cations (Dipp = 2,6-diisopropylphenyl). The Bertrand group reported the synthesis of [(cAAC)P2]•+, [(NHC)P2]•+, and [(NHC)P2]2+. The EPR signal for [(cAAC)P2]•+ is a triplet of quintets, resulting from coupling with two P nuclei and a smaller coupling with two N nuclei. NBO analysis showed spin delocalisation across the two phosphorus atoms (0.27e each) and the nitrogen atoms (0.14e each). In contrast, the [(NHC)P2]•+ complex showed delocalisation mostly on phosphorus (0.33e and 0.44e) with little contribution from other elements. Other radical cations synthesised by the Bertrand group involved species with single phosphorus atoms. These included [(TMP)P(cAAC)]•+, where the spin is localised on phosphorus (67%), and [bis(carbene)-PN]•+, with spin density distributed over phosphorus (0.40e), the central nitrogen atom (0.18e), and the N atom of the cAAC (0.19e). Treatment of this latter cation with KC8 returns it to its neutral analogue. In 2003, Geoffroy et al. synthesised Mes*P•-(C(NMe2)2)+ through one-electron oxidation of a phosphaalkene with [Cp2Fe]PF6. Mes*P•-(C(NMe2)2)+ is stable under an inert atmosphere for a few weeks in the solid state and for a few days in solution. Hyperfine couplings in the EPR spectrum show strong localisation of the spin on the phosphorus nucleus (0.75e in a p orbital). In 2015, the Wang group was able to isolate the crystal structure of this species by using an oxidant with a weakly coordinating anion, Ag[Al(ORF)4]. The electron spin density, found by EPR, resides principally in the phosphorus 3p and 3s orbitals (68.2% and 2.46%, respectively). This was supported by DFT calculations, where 80.9% of the spin density was found to be localised on the phosphorus atom. Weakly coordinating anions were also used to stabilise cyclic radical cations synthesised by Schulz and colleagues, where the spin density was found to reside exclusively on the phosphorus atoms (0.46e each) in the case of [P(μ-NTer)2P]•+. In the case of [P(μ-NTer)2As]•+, the spin was found to reside mostly on the As nucleus (70.6% on As compared to 29.4% on the P atom). Many other cyclic radical cations have been reported. It is difficult to form radical cations from diphosphenes due to the low-lying HOMO at the phosphorus centre. Ghadwal and co-workers were able to synthesise a diphosphene radical cation, [{(NHC)C(Ph)}P]2•+, using an NHC-derived divinyldiphosphene with a high-lying HOMO and a small HOMO-LUMO gap. The stability of the species was attributed to the delocalisation of the spin density across the CP2C unit. The spin density was found to be 11-14% on each P nucleus and 17-21% on each C nucleus. Structure and properties A unique source of stability for phosphorus radical cations is the electrostatic repulsion between radical cations, which prevents dimerisation. Weakly coordinating anions have been used to stabilise biradical cations. Phosphorus radical anions Synthesis The most common method for accessing radical anions is through the use of reducing agents. In 2014, the Wang group reported the synthesis of a phosphorus-centred radical anion through the reduction of a phosphaalkene using either Li in DME or K in THF, yielding purple crystals. EPR data showed localisation of the spin in the 3p (51.09%) and 3s (1.62%) orbitals of phosphorus.
They later synthesised a diphosphorus-centred radical anion and the first diradical dianion from the reduction of the diphosphaalkene with KC8 in THF in the presence of 18-crown-6. In both cases the spin density resides principally on the phosphorus nuclei. Tan and co-workers used a charge-transfer approach to synthesise phosphorus radical anions coordinated to CoII and FeII complexes. Here, a diazafluorenylidene-substituted phosphaalkene is reacted with low-valent transition metal complexes to form phosphorus radical anions coordinated to metal complexes. This species displays a quartet ground state, showing weak antiferromagnetic interaction of the phosphorus radical with the high-spin TMII ion. The spin density is mostly localised on the TM and phosphorus nuclei. The group further synthesised radical anion lanthanide complexes, which also showed antiferromagnetic interactions. The π-acid properties of boryl substituents were employed by Yamashita and co-workers to stabilise phosphorus radical anions. Here, the diazafluorenylidene-substituted phosphaalkene is reacted with [Cp*2Ln][BPh4] (Ln = Dy, Tb, and Gd), followed by reduction with KC8 in the absence or presence of 2,2,2-cryptand, yielding complexes with radical anion phosphaalkene fragments. EPR and DFT calculations indicate spin density mostly localised on the P nuclei (67.4%). References Chemistry Phosphorus compounds Free radicals
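The multiplet patterns invoked above (e.g. the triplet of quintets for [(cAAC)P2]•+) follow from the first-order 2nI+1 rule; a minimal stick-spectrum sketch, with arbitrary coupling constants chosen for illustration rather than taken from the literature:

from itertools import product

# First-order EPR line positions for a radical coupled to sets of
# equivalent nuclei: B = B0 + sum_k a_k * m_k over nuclear spin projections.
B0 = 340.0           # centre field [mT] (arbitrary)
a_P, a_N = 4.0, 0.5  # hyperfine couplings [mT] (illustrative values)

m_half = [-0.5, 0.5]        # projections of one I = 1/2 nucleus (31P)
m_one = [-1.0, 0.0, 1.0]    # projections of one I = 1 nucleus (14N)

# Two equivalent P (I = 1/2) give a triplet; two equivalent N (I = 1) a quintet.
lines = sorted({round(B0 + a_P * (p1 + p2) + a_N * (n1 + n2), 6)
                for p1, p2 in product(m_half, repeat=2)
                for n1, n2 in product(m_one, repeat=2)})
print(len(lines))  # 15 distinct positions: 3 (from P) x 5 (from N)

The same enumeration with a single I = 1/2 phosphorus reproduces the simple doublet that makes a large, localised 31P coupling so easy to read off a spectrum, which is the basis of the spin-localisation arguments quoted throughout this article.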
Stable phosphorus radicals
[ "Chemistry", "Biology" ]
3,800
[ "Senescence", "Free radicals", "Biomolecules" ]
72,418,810
https://en.wikipedia.org/wiki/Impacts%20of%20California%20High-Speed%20Rail
In addition to the direct reduction in travel times the HSR project will produce, there are also economic and environmental impacts of the high-speed rail system. These were specifically noted in Proposition 1A when the project sought authorization from the voters of the state in 2008. The anticipated benefits apply to the state overall, to the regions the train will pass through, and to the areas immediately around the train stations. Estimates of current & past impacts Latest Impact Information, Overall and by Region On January 18, 2024, Derek Boughton of the Authority presented the latest financial impact analysis report, covering data through June 2023. Job training: The Central Valley Training Center The Central Valley Training Center (located in Selma, California) is an organization supported by the Authority and local non-profit and governmental organizations. Since 2020 it has provided hands-on, free, 12-week pre-apprenticeship programs in 11 trades to prepare Central Valley veterans, at-risk young adults, and minority and low-income populations for construction jobs on the CAHSR project. As of December 2023 it had graduated 11 cohorts, totaling over 176 students, and further assisted them by providing job placement as well as other support services. Annual Sustainability Reports CAHSR is designed to be an entirely environmentally sustainable system. Each year since 2018 the Authority has produced a Sustainability Report. Highlights of the 2022 report are: "Restoring more than 2,972 acres of habitat and protecting more than 3,190 acres of agricultural land; Planting more than 7,100 trees; Avoiding or sequestering 420,245 metric tons of carbon dioxide – the equivalent of removing one natural gas-fired power plant from the grid for a year; Increasing small business participation to over 700 entities; Generating between $12.7 and 13.7 billion in total economic activity in the state, with 56% investment in disadvantaged communities." Cumulative economic impact estimates The 2021 Economic Impact Factsheet estimated that as of June 2021, the statewide economic benefits of the project included 64,400–70,500 job-years of employment, $4.8–$5.2 billion in labor income, and $12.7–13.7 billion in economic output, and that as of February 2022, 699 small businesses were involved in the project. The Authority's economic impact analysis is updated annually; the 2021 Economic Analysis Report contains data as of June 2021. STB estimates of regional needs In its 67-page ruling in May 2015, the federal Surface Transportation Board noted: "The current transportation system in the San Joaquin Valley region has not kept pace with the increase in population, economic activity, and tourism. ... The interstate highway system, commercial airports, and conventional passenger rail systems serving the intercity market are operating at or near capacity and would require large public investments for maintenance and expansion to meet existing demand and future growth over the next 25 years or beyond." Thus, the Board sees the HSR system as providing valuable benefits for the region's transportation needs. The San Joaquin Valley is also one of the poorest areas of the state. For example, the unemployment rate near the end of 2014 in Fresno County was 2.2% higher than the statewide average. Of the five poorest metro areas in the country, three are in the Central Valley. The HSR system has the potential to significantly improve this region and its economy.
A large January 2015 report to the CHSRA examined this issue. In addition to jobs and income levels in general, the presence of HSR is expected to benefit the growth of the cities around the HSR stations. It is anticipated that this will help increase population density in those cities and reduce "development sprawl" into surrounding farmlands. Negatively-affected local communities There have also been some reported negative impacts from the project's land acquisitions and construction. As of October 2021, Phase 1 construction had displaced or adversely affected immigrants (Mexican, Cambodian, and Japanese), homeless outreach organizations, homeless shelters, firefighters, nonprofits working with welfare recipients, thrift stores, and disadvantaged communities such as Wasco. Future projections for the Interim Initial Operating Segment "What Is the Value of Electrified High-Speed Rail Between Merced and Bakersfield?" in the 2022 Business Plan (p. 25) listed these estimated benefits of the Interim Initial Operating Segment: Travel time will be significantly shortened, and travel will be more reliable. Car travel time is 2.5 hrs. one-way. The Amtrak San Joaquin takes 3 hrs. at best, but there are only 7 round-trips each day, and intervening freight service makes service unreliable. CAHSR is estimated to reduce travel time by up to 100 minutes, and 18 reliable round-trips are anticipated each day. With better transit inside the Central Valley, transit to the Bay Area and Sacramento as well as Southern California will improve significantly. Rail passenger trips over the same route are projected to nearly double, from 4.8 million annual riders to 8.8 million riders. Annual vehicle miles traveled will be reduced by 284 million, reducing road congestion. Greenhouse gas (GHG) emissions will be reduced by 50.6 thousand metric tons, equivalent to the emissions from 10,874 passenger vehicles driven for one year. An additional $117.2 million in passenger revenues. More than 200,000 job-years due to the line's operation and community effects. Environmental issues Wildlife protection The HSR tracks will pose some serious problems for moving and migrating wildlife. Thus, the Interim Initial Operating Segment will have over 300 wildlife crossings to provide safe ways for wildlife to cross the tracks. To accomplish this, the Authority has submitted a $2 million grant application to the Federal Highway Administration Wildlife Crossings Pilot Program for the proposed Central Valley 119-Mile Wildlife Crossing Monitoring Plan (total cost to be $2.5 million). This pilot project will study alternative crossing designs, research and monitor wildlife/vehicle collisions, and review the San Joaquin kit fox migration corridors. Environmental benefit calculations The Authority's Carbon Footprint Calculator shows the benefits for 5 different portions of the HSR route, including all of Phase 1 as well as the Interim Initial Operating Segment. It gives estimates of the greenhouse gas emissions of planes, autos, and HSR trains, as well as the savings that using the train would create.
The HSR savings estimates (per round trip) are: 142 pounds on the Merced-Bakersfield line (Interim IOS) 349 pounds for San Francisco-Los Angeles 303 pounds for San Jose-Burbank 389 pounds for San Francisco-Anaheim 337 pounds for San Francisco-Burbank In the 2022 Business Plan, the Authority estimates that by 2040 the system could carry 50 million riders per year, and that at full operation the reduction of greenhouse gas emissions will be equivalent to removing 400,000 vehicles from the road. References External links California High-Speed Rail Authority California High Speed Rail Peer Review Group California State Rail Plan (2022) DB E.C.O. North America Inc. Deutsche Bahn Electric railways in California High-speed railway lines in the United States Passenger rail transportation in California Proposed railway lines in California 25 kV AC railway electrification 2029 in rail transport Megaprojects Transportation buildings and structures in Madera County, California Buildings and structures under construction in the United States High-speed trains of the United States Rail junctions in the United States
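The per-round-trip figures above scale straightforwardly to annual totals; a sketch of the arithmetic, in which the per-trip savings come from the list above but the annual trip count is invented purely for illustration:

# Convert the per-round-trip GHG savings quoted above to metric tons per year.
LB_TO_KG = 0.45359237

savings_lb = {
    "Merced-Bakersfield": 142,
    "San Francisco-Los Angeles": 349,
    "San Jose-Burbank": 303,
    "San Francisco-Anaheim": 389,
    "San Francisco-Burbank": 337,
}

annual_round_trips = 1_000_000  # hypothetical count per route, for illustration only
for route, lb in savings_lb.items():
    tons = lb * LB_TO_KG * annual_round_trips / 1000.0  # metric tons CO2e per year
    print(f"{route}: {tons:,.0f} t CO2e per year")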
Impacts of California High-Speed Rail
[ "Engineering" ]
1,510
[ "Megaprojects" ]
72,418,872
https://en.wikipedia.org/wiki/Functional%20MRI%20methods%20and%20findings%20in%20schizophrenia
Functional MRI imaging methods have allowed researchers to combine neurocognitive testing with structural neuroanatomical measures, consider cognitive and affective paradigms, and create computer-aided diagnosis techniques and algorithms. Functional MRI has several benefits, such as its non-invasive quality, relatively high spatial resolution, and decent temporal resolution. This is due to influential developments in scanner hardware, which now allow technicians to retrieve higher-resolution images in a shorter amount of time. Additionally, improved motion correction and harmonization both aid the generalizability and replication of findings in schizophrenia research. Recent studies have used fMRI to explore specific brain networks, such as the salience network and default mode network, to understand their roles in schizophrenia-related symptoms. Alterations in these networks may affect self-referential thoughts and responses to external stimuli, potentially contributing to symptoms like hallucinations and disorganized thinking. One particular method used in recent research is resting-state functional magnetic resonance imaging (rs-fMRI). In a 'reformulation' of the binary-risk vulnerability model, researchers have suggested a multiple-hit hypothesis that utilizes several risk factors, some bestowing a greater probability than others, to identify at-risk individuals, often those genetically predisposed to schizophrenia. The process of defining clinical criteria of schizophrenia for early diagnosis has posed a great challenge for scientists. Methodology According to the DSM-5, a schizophrenia diagnosis is given if an individual possesses two or more of the following symptoms over the course of a 1-month period: delusions, hallucinations, disorganized speech, grossly disorganized or catatonic behavior, or negative symptoms. Additionally, at least one of three characteristics (delusions, hallucinations, or disorganized speech) must be present. A rapid increase of studies in schizophrenia has covered topics such as abnormal activity in "motor tasks, working memory attention, word fluency, emotion processing, and decision making." Researchers also focus on identifying biomarkers through fMRI scans that could aid early diagnosis. For example, abnormalities in the anterior cingulate cortex and dorsolateral prefrontal cortex are considered potential indicators of schizophrenia risk. In contrast to the abundance of research centered on positive symptoms of the disorder, fMRI research for schizophrenia primarily analyzes the 'failures' of the neural system and the resulting cognitive deficits, an example being changes in functional connectivity. Another biomarker that can be found through fMRI scans is dysconnectivity in the functioning of the cortico-striatal-thalamo-cortical networks. Because this characteristic is associated with the early stages of psychosis, it acts as a marker for predicting a schizophrenia diagnosis. To confirm that a task activates identical regions in schizophrenia patients vs. controls, the given task typically begins easy, so that both patients and healthy comparison subjects perform at close to 100% accuracy; the task is then increased in difficulty to distinguish activation between the two groups. Confounding variables are reduced by using matched control participants, who match the patient on race, age, sex, occupation, and other characteristics.
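The DSM-5 symptom rule paraphrased above is effectively a small decision procedure; a minimal sketch of that rule alone (the symptom labels and function are illustrative, not a clinical tool, and a real diagnosis involves further criteria such as duration and functional impairment):

CORE = {"delusions", "hallucinations", "disorganized speech"}
QUALIFYING = CORE | {"grossly disorganized or catatonic behavior", "negative symptoms"}

def meets_symptom_criterion(symptoms):
    """Sketch of the DSM-5 symptom rule above: two or more qualifying
    symptoms over a 1-month period, at least one of them core."""
    present = set(symptoms) & QUALIFYING
    return len(present) >= 2 and bool(present & CORE)

print(meets_symptom_criterion({"delusions", "negative symptoms"}))  # True
print(meets_symptom_criterion({"negative symptoms",
                               "grossly disorganized or catatonic behavior"}))  # False: no core symptom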
Additionally, increasing the size of participant datasets helps machine-learning algorithms improve generalizability across clinical and scanning settings. The 'basic symptoms' approach The 'basic symptoms' approach to schizophrenia, which emerged from "retrospective descriptions of the prodromal phase," represents a framework for a large portion of fMRI research, which evaluates changes in cognition and sensory perception that may affect higher-level information processes. The word 'basic' refers to the earliest stages of the self-experienced symptoms of psychosis. These symptoms reveal the expression of the underlying neurobiological processes. They act as an indicator of the onset of schizophrenia, with the potential to alert researchers and allow earlier treatment. Moreover, some researchers oppose the tendency to attribute schizophrenia to higher-order processes like working memory, attention, and executive processing, instead choosing to inspect impairments in basic sensory and perceptual functions. Deficits in basic sensory functions influence higher-order processes such as auditory emotion recognition, perceptual closure, and object recognition. New research also suggests that disruptions in basic visual and auditory processing could contribute to impaired social perception in schizophrenia, making it difficult for individuals to interpret body language and facial expressions accurately. In the visual system, for example, rudimentary deficits in the function of the magnocellular system result in impairments in higher-order processes like perceptual closure, object recognition, and reading. On the other hand, fMRI data has also suggested the opposite. In one study, researchers found significantly differing activity between healthy subjects and schizophrenia patients in the left dorsal parietal cortex and left ventrolateral prefrontal cortex; as these regions are essential components of a frontal-parietal executive system, hypo-activity in these regions in schizophrenia patients during working memory tasks was theorized to be associated with deficits in executive functioning. Resting-state fMRI The 'disconnectivity hypothesis' is a key theory describing the failure of mechanisms underlying schizophrenia, specifically the failure to integrate information properly. The dysconnectivity hypothesis suggests that disruptions in communication between the brain's frontal and temporal regions may underlie symptoms like auditory hallucinations and impaired memory, as these areas are critical for integrating sensory input and memory. Functional connectivity, which fMRI evaluates, is the coordination of activity between brain regions. It is measured as "temporal correlations of low-frequency oscillations in the BOLD signal between anatomically distinct brain areas" and can reveal resting-state networks. The cause of the correlations in fMRI measurements is theorized to be "correlated firing rates of interconnected neurons." Resting-state functional magnetic resonance imaging (rs-fMRI) has become a powerful tool to examine networks' functional connectivity throughout the brain, such as the default mode network (DMN). Through resting-state fMRI, scientists have observed that schizophrenia is associated with altered connectivity patterns in the default mode, central executive, and salience networks. These networks' dysconnectivity could impact attention, emotion regulation, and self-referential thought processes.
Although there are benefits to resting-state fMRI, it is important to note its limitations. fMRI measures the blood-oxygen-level-dependent (BOLD) response, typically while patients partake in specific tasks: when a brain region is activated it takes in more oxygen, and it is this hemodynamic change that is detected. The BOLD signal does not differentiate activity in the various neurotransmitter systems; this causes ambiguity about which systems are affected, leaving researchers able to identify only general areas for treatment. Abnormal brain connectivity has long been theorized as a fundamental cause of psychosis in schizophrenia. rs-fMRI can help evaluate regional interactions at rest, and whether there are altered, reduced, or hyperactive connections in psychiatric disorders like schizophrenia. During resting-state fMRI experiments, participants are instructed to relax and stay awake but not think of anything. It is important to note that resting-state networks can change between eyes-open and eyes-closed conditions. Researchers then measure spontaneous brain activation. There are several advantages to studying the resting state of brain networks: the primary one is that spontaneous neural activity accounts for most of the brain's activity, in contrast to task-based neural activity. Additionally, rs-fMRI eliminates confounding effects such as differing performance between healthy subjects and patients on tasks; rs-fMRI also requires less movement than task-based fMRI studies. Seed-based analysis/ROI approaches to analyzing functional connectivity are common in rs-fMRI for schizophrenia. A seed (region of interest) is first selected, and BOLD time series are then extracted from the seed and all other voxels. After preprocessing, the temporal correlation between the seed and other brain voxels is determined, and the software produces a functional connectivity map. Seed-based comparisons in rs-fMRI have revealed functional disconnectivity in schizophrenia patients in numerous studies using different ROIs for their seeds; in general, schizophrenia patients show reduced connectivity. Recent studies using resting-state fMRI (rs-fMRI) have identified significant disruptions in functional connectivity across multiple brain networks in schizophrenia, including the default mode, frontotemporal, and cerebellar networks. These findings provide additional support for the dysconnectivity hypothesis, which suggests that impaired coordination between brain regions contributes to the cognitive and behavioral symptoms of schizophrenia. This information is compatible with experimental findings suggesting reduced activation in the amygdala in schizophrenia patients during sadness mood induction, for example. References Magnetic resonance imaging Neuroscience of schizophrenia
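The seed-based analysis described above reduces to a voxel-wise correlation; a minimal sketch on synthetic data (a real pipeline would load NIfTI images and add motion correction, nuisance regression, band-pass filtering, and Fisher z-transforms):

import numpy as np

# Synthetic 4D data: (x, y, z, time); stands in for preprocessed rs-fMRI.
rng = np.random.default_rng(0)
data = rng.standard_normal((8, 8, 8, 120))

# Seed time series: mean BOLD signal within a chosen region of interest.
seed = data[2:4, 2:4, 2:4, :].reshape(-1, 120).mean(axis=0)

# Pearson correlation of the seed with every voxel's time series.
vox = data.reshape(-1, 120)
vox_c = vox - vox.mean(axis=1, keepdims=True)
seed_c = seed - seed.mean()
r = (vox_c @ seed_c) / (np.linalg.norm(vox_c, axis=1) * np.linalg.norm(seed_c))
conn_map = r.reshape(8, 8, 8)  # seed-based functional connectivity map

print(conn_map.shape, conn_map.max())

Group comparisons like those cited above then contrast such maps between patients and controls, voxel by voxel, which is how reduced connectivity in schizophrenia is quantified.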
Functional MRI methods and findings in schizophrenia
[ "Chemistry" ]
1,793
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
72,419,380
https://en.wikipedia.org/wiki/Group%2013/15%20multiple%20bonds
Heteroatomic multiple bonding between group 13 and group 15 elements is of great interest in synthetic chemistry due to its isoelectronicity with C-C multiple bonds. Nevertheless, the difference in electronegativity between groups 13 and 15 gives these bonds a different character from C-C multiple bonds. Because of the ineffective overlap between p𝝅 orbitals and the inherent Lewis acidity/basicity of group 13/15 elements, the synthesis of compounds containing such multiple bonds is challenging and subject to oligomerization. The most common examples of compounds with group 13/15 multiple bonds are those with B=N units. The boron-nitrogen-hydride compounds are candidates for hydrogen storage. In contrast, multiple bonding between aluminium and nitrogen (Al=N), gallium and nitrogen (Ga=N), boron and phosphorus (B=P), or boron and arsenic (B=As) is less common. Synthesis Suitable precursors are crucial for the synthesis of group 13/15 multiple bond-containing species. In most successfully isolated structures, sterically demanding ligands are utilized to stabilize such bonds. Boraphosphenes (P=B) Boraphosphenes, also known as phosphoboranes, were first reported by Cowley and co-workers in the 1980s. [(tmp)B=P(Ar)] (tmp = 2,2,6,6-tetramethylpiperidine, Ar = 2,4,6-t-Bu3C6H2) was characterized by mass spectrometry (EI MS), and the corresponding dimer, diphosphadiboretane, was characterized by X-ray crystallography. Power and co-workers later reported the structure of [P(R)=BMes2Li(Et2O)2] (R = phenyl, cyclohexyl, mesityl), which contains the first B=P double bond observed in the solid state. The synthesis of [P(R)=BMes2Li(Et2O)2] starts from treating in-situ generated Mes2BPHR with 1 equivalent of t-BuLi in Et2O, followed by crystallization at low temperature. Cyclic systems with P-B multiple bonds Isomerization of four-membered P-B rings was investigated by Bourissou and Bertrand. It was reported that cyclo-[R2PB(R')-B(R')-P(Ph)2] (R = phenyl, isopropyl; R' = tert-butyl, 2,3,5,6-tetramethylphenyl) isomerizes to form cyclo-[R2P-B(R')=P(Ph)-B(R')(Ph)] upon irradiation. An example of a five-membered ring was reported by Crossley: reaction of 1,2-diphosphinobenzene with n-BuLi and Cl2BPh yielded a benzodiphosphaborolediide. Several six-membered ring systems involving P=B double bonds have been reported. One example is an analogue of borazine synthesized from MesBBr2 and CyP(H)Li. Arsinideneborates (As=B) A similar strategy to access lithiated arsinideneborates was reported by Power and co-workers after establishing the synthesis of lithiated phosphinideneborates. Crystallizing [As(Ph)=BMes2Li(THF)3] with two equivalents of TMEDA yielded [As(Ph)=BMes2][Li(TMEDA)2]. Ring systems containing As-B multiple bonds have not yet been reported. Group 13 imides (Al=N, Ga=N, In=N) Synthesis of group 13 imides usually starts with low-valent group 13 species stabilized by bulky ligands. A [2+3] cycloaddition of the monomeric [DipNacnac]Al or [DipNacnac]Ga (DipNacnac = HC{(CMe)(NDip)}2) compound with a sterically bulky azide, TipTerN3 (TipTer = -C6H3-2,6-(C6H2-2,4,6-iPr3)2), gives the iminotrielenes [{DipNacnac}M=N-TipTer] (M = Al, Ga). Additionally, dimers of Ga(I) or In(I) were reported to form the iminotrielenes [(DipTer)M=N-Mes'Ter] with Mes'TerN3 (M = Ga, In; Mes'Ter = C6H3-2,6(Xyl-4-tBu)2). Al-N triple bonds Transient Al≡N triple bond species were also investigated by reacting a monomeric alanediyl precursor with organic azides.
The unstable Al≡N triple bond species [iPr2TipTerAl≡NR] (R = Ad, SiMe3) were not captured but rearranged further to tetrazole and amino-azide products, respectively. Phosphaalumenes and arsaalumenes (P=Al, As=Al) The development of Al=P and Al=As species faced difficulties due to the tendency toward oligomerization of the Lewis acidic Al and Lewis basic P/As centres. In 2021, Hering-Junghans, Braunschweig, and co-workers reported the synthesis of phosphaalumenes and arsaalumenes with Al(I) precursors, [Al(I)Cp*]4 (Cp* = pentamethylcyclopentadienyl). Reacting [Al(I)Cp*]4 with DipTer-PPMe3 or DipTer-AsPMe3 in a 1:4 ratio yielded the corresponding phosphaalumenes/arsaalumenes, which are stable and isolable. Gallium-pnictogen double bonds (Ga=Pn) Synthesis and characterization of a Ga=Sb species was reported by Schulz and Cutsail III from the reaction of [DipNacnac]Ga (DipNacnac = HC{(CMe)(NDip)}2) with [Cp*SbCl2]. The resulting Sb radical species, [DipNacnac(Cl)Ga]2Sb, was then reduced by KC8 to give [DipNacnacGa=Sb-Ga(Cl)DipNacnac]. Utilizing a similar reaction pathway, a Ga=As species, [DipNacnacGa=AsCp*], was successfully synthesized and stabilized. Interestingly, no radical formation was observed, in contrast to the Ga=Sb case. With the rapid development of gallium-pnictogen chemistry in the late 2010s, the first phosphagallene species was reported by Goicoechea and co-workers in 2020. The reaction of [(HC)2(NDip)2PPCO] with [DipNacnacGa] gave the phosphagallene [DipNacnacGa=P-P(NDip)2(CH)2]. Reactivities Reactivities of boraphosphenes B=P double bond species have been studied for bond activation. For example, C-F activation of tris(pentafluorophenyl)borane by NHC-stabilized phosphaboranes, [(tmp)(L)B=PMes*] (L = IMe4), was reported by Cowley and co-workers. The C-F bond activation takes place at the para position, leading to the formation of a C-P bond. Reactions of phenylacetylene with the dimer of [Mes*P=B(tmp)] give an analogue of cyclobutene, [Mes*P=C(Ph)-C(H)=B(tmp)], in which the C≡C triple bond undergoes a [2+2] cycloaddition with the P=B double bond. Phospha-bora Wittig reaction The transient boraphosphene [(tmp)B=PMes*] (tmp = 2,2,6,6-tetramethylpiperidine, Mes* = 2,4,6-tri-tert-butylphenyl) reacts with aldehydes, ketones, and esters to form phosphaboraoxetanes, which convert to phosphaalkenes [Mes*P=CRR'] and [(tmp)NBO]x heterocycles. This method provides direct access to phosphaalkenes from carbonyl compounds. Reactivities of group 13 imides Compounds with group 13-N multiple bonds are capable of small molecule activation. Reactions of PhCCH or PhNH2 with an NHC-stabilized iminoalane result in the addition of a proton to N and of the -CCPh or -NHPh fragment to Al. The reaction with CO leads to the insertion of CO into the Al=N bond. Reactivities of Ga=Pn species Small molecule activation takes place across the P-P=Ga bonds in phosphanyl-phosphagallene species, where the Ga=P unit behaves as a frustrated Lewis pair. For example, the reaction of CO2 with [DipNacnacGa=P-P(NDip)2(CH2)2] results in the formation of a P=P-C-O-Ga five-membered ring species. In contrast, H2 adds to the P-P=Ga fragment in a 1,3-activation manner. E-H bond activation of protic and hydridic reagents was investigated as well. Reactions of [DipNacnacGa=P-P(NDip)2(CH2)2] with amines, phosphines, and alkynes resulted in the formation of [DipNacnac(E)Ga-P-P(H)(NDip)2(CH2)2]. Reversible ammonia activation was observed under 1 bar pressure in the presence of a Lewis acid.
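The azide route to group 13 imides described above can be summarised in a single equation; a sketch assuming loss of N2 following the [2+3] cycloaddition (the dinitrogen byproduct is inferred from standard azide chemistry, not stated explicitly above):

\[ [^{\mathrm{Dip}}\mathrm{Nacnac}]\mathrm{M} \; + \; {}^{\mathrm{Tip}}\mathrm{Ter}\text{-}\mathrm{N}_3 \;\longrightarrow\; [^{\mathrm{Dip}}\mathrm{Nacnac}]\mathrm{M}{=}\mathrm{N}\text{-}\mathrm{Ter}^{\mathrm{Tip}} \; + \; \mathrm{N}_2 \qquad (\mathrm{M} = \mathrm{Al}, \mathrm{Ga}) \]

Whether the cycloaddition intermediate extrudes N2 to give the imide or instead rearranges, as in the tetrazole product noted above for the transient Al≡N species, depends on the substituents on aluminium and on the azide.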
Bonding and structures B=P double bond Natural bond orbital analysis of a borophosphide anion, [(Mes*)P=BClCp*]−, suggested that the B=P double bond is polarized toward the P atom. The B=P σ-bond is mostly non-polar, while the π-bond is polarized toward phosphorus (71%). DFT calculations at the B3LYP/6-31G level revealed that the HOMO of [(Mes*)P=BClCp*]− has substantial B-P π-bonding character. In most reported phosphinideneborates, the phosphorus chemical shifts are strongly deshielded relative to those of the phosphinoborane starting materials. These downfield 31P NMR resonances suggest delocalization of the phosphorus lone pair into the empty p-orbital of boron. Ga-Pn double bond Natural bond orbital analysis has been reported for Ga=Sb- and Ga=Bi-containing species, in which the electron density of the double bond resides mainly on Sb and Bi (62% and 59%, respectively). The Lewis acidity of Ga results in delocalization of electron density from Sb and Bi. References Chemical bond properties
Group 13/15 multiple bonds
[ "Chemistry" ]
2,455
[ "Chemical bond properties" ]
72,420,474
https://en.wikipedia.org/wiki/Nano-ARPES
Nano Angle-Resolved Photoemission Spectroscopy (Nano-ARPES) is a variant of the experimental technique ARPES (Angle-Resolved Photoemission Spectroscopy). It has the ability to precisely determine the electronic band structure of materials in momentum space with submicron lateral resolution. Due to its demanding experimental setup, this technique is much less widespread than ARPES, which is widely used in condensed matter physics to experimentally determine the electronic properties of a broad range of crystalline materials. Nano-ARPES can access the electronic structure of well-ordered monocrystalline solids with high energy, momentum, and lateral resolution, even for nanometric or heterogeneous mesoscopic samples. Like ARPES, Nano-ARPES is based on Einstein's photoelectric effect, being a photon-in, electron-out spectroscopy, and it has become an essential tool in studying the electronic structure of nanomaterials, such as quantum and low-dimensional materials. Nano-ARPES allows the experimental determination of the relationship between the binding energies and wave momenta of electrons in the occupied electronic states of bands lying close to, and approximately 10-15 eV below, the Fermi level. These electrons are ejected from a solid when it is illuminated by monochromatic photons with sufficient energy to emit photoelectrons from the surface of the material. The photoelectrons are detected by an electron analyzer placed close to the sample's surface in vacuum, which preserves the uncontaminated surface and avoids collisions with particles that could modify the energy and trajectory of the photoelectrons on their way to the spectrometer. Because momentum is conserved in the photoemission process, the angular distribution of photoelectrons from a monocrystal, even one of nanometric size, directly reveals the momentum distribution of the initial electronic states in that crystal. Nano-ARPES results, as in the ARPES technique, are traditionally shown as energy-momentum dispersion relations along the high-symmetry directions of the irreducible Brillouin zone, displaying the band dispersions of the investigated materials. When the emitted photoelectrons are mapped as constant-energy surfaces over large portions of reciprocal space, Nano-ARPES can also precisely determine the Fermi surface of the investigated materials. Due to its unique ability to spatially map the electronic dispersion in samples, Nano-ARPES can also generate electronic images of nanomaterials with high binding energy and momentum resolution. As Nano-ARPES is a scanning technique, it can use state-of-the-art ARPES spectrometers without requiring them to spatially discriminate the origin of the analysed photoelectrons. Consequently, Nano-ARPES instrumentation can profit from the most advanced spectrometers developed for ARPES setups, particularly the latest-generation electron spectrometers with two-dimensional detection and high energy and momentum resolution. Background An understanding of the electronic band structure of solids is applied in many fields of condensed matter physics, contributing to the microscopic understanding of many phenomenological trends and guiding the interpretation of experimental spectra in photoemission, optics, inelastic neutron scattering, and specific heat, among others, including the effects of spin polarization. 
Most modern theoretical methods for band electronic structure employ Density Functional Theory to solve the full many-body Schrödinger equation for electrons in a solid. The consolidated experimental and theoretical approach to describing the electronic structure of solids allows straightforward visualization of the difference between conductors, insulators, and semiconductors according to the presence of permitted and forbidden electronic states of particular energy and momentum, which can be calculated by quantum mechanics and measured using ARPES. The ARPES technique has the unique ability to determine the band structure directly. It thus helps in understanding the degree and type of electron interaction in solids, corroborating or contesting band electronic structure results calculated using different theoretical approaches. However, the lateral resolution of this technique, and hence its ability to handle submicrometric or heterogeneous samples, is rather limited. This is because the electrons measured in ARPES are all the electrons ejected by the photo-absorption process prompted by the incident photons. If the illuminated area of the sample is large enough to cover nonhomogeneous regions, the detected signal is the sum of all the photoelectrons emitted by the different illuminated patches. If each area has a distinctive electronic band structure, the ARPES spectra will show the average of all of them, weighted according to the size of each patch present in the illuminated area. In fact, many complex materials are constituted by disoriented small monocrystals or composed of several nanometric monocrystals. Traditional ARPES can only provide their average electronic structure if the patch size is smaller than the spot size of the ARPES setup, typically 200 μm. This limitation is also present in samples with micrometric and submicrometric zones of distinctive chemical composition due to undesired side chemical reactions, for example those originating from contamination or oxidation of the pristine sample. Hence, since the spot size of the monochromatic photon beam is typically over 200 μm across for conventional ARPES, only homogeneous samples of this size or larger can be studied. Consequently, sub-micrometric lateral resolution must be added to ARPES to experimentally determine the electronic structure of small crystalline materials and of large samples with heterogeneities. Nano-ARPES implements this lateral discrimination by focusing the incident photon beam down to the nanometric scale. Similarly to ARPES, the electronic band structure of nanomaterials can be directly measured using Nano-ARPES by measuring the ejected electrons' kinetic energy, velocity, and absolute momentum. Focusing the photon beam to a spot size down to the nanometric scale has been routinely achieved in a few well-known X-ray-based methods, such as scanning transmission X-ray microscopy (STXM) and scanning photoemission microscopy (SPEM). However, these techniques are much less demanding because they typically use incident photon energies higher than 150 eV and perform non-angle-resolved measurements, recording only integrated signals proportional to the X-ray absorption coefficient and core-level photoelectrons, respectively. In both cases, the performance of the Fresnel zone plates (FZPs) is the essential factor determining the lateral resolution, which varies from the micro- to the nanometric scale. 
Nowadays, several companies on the market provide FZPs with a resolution better than 30 nm, which has facilitated the construction and operation of several X-ray-based microscopes, such as STXM and SPEM instruments, at different synchrotron radiation facilities like Elettra, ALS, CLS, and MAX-lab, among others. The Nano-ARPES technique, however, requires much lower incident photon energies (typically from 6 eV to 100 eV) to detect the photoelectrons emitted from electronic states below and close to the Fermi level, whose cross-section increases as the incident photon energy decreases. An alternative k-space imaging approach is based on energy-filtered photoemission electron microscopes (PEEMs), in which the lateral resolution is achieved using an electron-optical column instead of focusing the incident photon beam. This full-field k-space version of PEEM is available commercially; however, achieving high energy and momentum resolution with it is challenging. Instrumentation Typically, high-energy- and momentum-resolution ARPES experiments are performed at synchrotrons, which can provide bright and tunable high-energy photon sources to record the electronic band structure of ordered materials directly. That yields sharp and precise E vs k dispersions and constant-energy surfaces, including those corresponding to the Fermi surface of the studied materials. Conventional ARPES systems consist of a monochromatic light source delivering a narrow beam of photons and a sample holder connected to a manipulator used to position the sample angularly and translationally with respect to the electron spectrometer (detector) and the focus of the incident light beam. The equipment is contained within an ultra-high vacuum (UHV) environment, which protects the sample from undesired contamination and prevents scattering of the emitted electrons. After being dispersed along two perpendicular directions for kinetic energy and emission angle, the electrons are directed to the detector and counted to provide ARPES spectra: slices of the band structure along one momentum direction. The main difference between a typical Nano-ARPES setup and other conventional ARPES apparatus is that the soft X-ray beam is focused to a submicrometric spot using Fresnel zone plate (FZP) lenses. The specimens can be mounted on a high-precision manipulator that ensures nanoscale sample positioning in the x, y, and z directions, and the polar angle (Θ) and the azimuthal angle (Ψ) can also be scanned automatically. This basic instrumentation allows two operating modes: a Nano-ARPES point mode (operating mode type 1), in which the nano-spot maps the band structure of nanometric crystalline solids to study quasiparticle dynamics in highly correlated and non-correlated materials, as in conventional ARPES; and a Nano-ARPES imaging mode (operating mode type 2), which measures the spatial distribution in real space of photoelectrons within a selected range of binding energy and momentum values. State-of-the-art Nano-ARPES microscopes are equipped with continuous interferometric control of the position of the samples relative to the FZPs, which avoids thermal and mechanical drifts. This is required to prevent undesirable distortions of the recorded Nano-ARPES images (operating mode type 2) and to ensure the precision and reproducibility of E vs. k dispersion curves along specific directions of reciprocal space. 
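The conversion from the measured quantities to these E vs. k dispersion curves follows the standard photoemission kinematics. The relations below are textbook expressions common to any ARPES-type measurement, not specific to a particular Nano-ARPES instrument; here hν is the photon energy, φ the work function, θ the emission angle, and m_e the free-electron mass:

```latex
% Energy conservation in photoemission and conservation of the
% in-plane momentum component at the surface:
E_{\mathrm{B}} = h\nu - \phi - E_{\mathrm{kin}}
k_{\parallel} = \frac{\sqrt{2 m_e E_{\mathrm{kin}}}}{\hbar}\,\sin\theta
```

Scanning θ (or deflecting the photoelectron beam inside the spectrometer) at fixed hν therefore yields the binding energy E_B as a function of the in-plane momentum k//, the dispersion curves referred to above.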
Energy constant surfaces in reciprocal space & Fermi surface mapping In Nano-ARPES setups, the analyzers used are the hemispherical electron energy analyzers typically installed in high-energy- and angular-resolution conventional ARPES apparatus. They use a slit to prevent the mixing of momentum and energy channels and, consequently, can only take angular maps in one direction. To record maps over energy and two-dimensional momentum space, as in conventional ARPES, either the sample needs to be rotated, or the collected photoelectron beam must be deflected inside the spectrometer by the electrostatic lens while keeping the sample fixed. The energy-angle-angle maps are converted to binding energy-k//x-k//y maps. These images display constant-energy surfaces as a function of the k//x and k//y wave vectors of reciprocal space. The most remarkable constant-energy surface is the Fermi surface map, obtained by detecting those photoelectrons with binding energy right at the Fermi level. Applications The Nano-ARPES technique is an essential tool for resolving the electronic band structure of mesoscopic or heterogeneous materials in diverse condensed matter fields, including quantum materials, high-temperature superconductors, topological materials, semiconductors, metals, and insulators with not-too-large band gaps, as well as a wide variety of low-dimensional materials and heterostructures exhibiting effects of confinement, different stackings, and hybridization. Also, electronic structure changes associated with all types of phase transitions, charge density waves, band hybridization, phase separation, charge transfer, and in-operando devices can be revealed by combining nanoscale lateral resolution with high energy and momentum resolution. References Laboratory techniques in condensed matter physics Emission spectroscopy Electron spectroscopy
Nano-ARPES
[ "Physics", "Chemistry", "Materials_science" ]
2,377
[ "Spectrum (physical sciences)", "Electron spectroscopy", "Emission spectroscopy", "Laboratory techniques in condensed matter physics", "Condensed matter physics", "Spectroscopy" ]
72,422,117
https://en.wikipedia.org/wiki/U%20Sagittarii
U Sagittarii is a variable star in the southern constellation of Sagittarius, abbreviated U Sgr. It is a classical Cepheid variable that ranges in brightness from an apparent visual magnitude of 6.28 down to 7.15, with a pulsation period of 6.745226 days. At its brightest, this star is dimly visible to the naked eye. The distance to this star is approximately 2,080 light years based on parallax measurements, and it is drifting further away with a radial velocity of 2 km/s. The variability of this star was announced by J. Schmidt in 1866, who found a preliminary period of 6.74784 days. It was later determined to be a variable of the Cepheid type. In 1925, P. Doig proposed that the star is a member of the open cluster Messier 25 (M25), but actual evidence of its membership was not available until 1932, when P. Hayford made radial velocity measurements of the cluster. Membership in this cluster is now reasonably well established, and as such this Cepheid serves as one of the anchors for the cosmic distance scale, since the distance to the cluster can be determined independently of the star. Indeed, new research indicates that U Sgr's host cluster (M25) may constitute a ternary (triple) star cluster together with NGC 6716 and Collinder 394. This is an evolved G-type supergiant star with a typical stellar classification of G1Ib. It appears to be making its third traversal of the instability strip, and its period is changing. Elemental abundances are similar to those in the Sun. It has an estimated 6.6 times the mass of the Sun and 56 times the Sun's radius. The star is radiating over 4,000 times the Sun's luminosity from its photosphere at an effective temperature of 5,802 K. References Further reading Classical Cepheid variables G-type supergiants Sagittarius (constellation) 6947 Durchmusterung objects 170764 90836
U Sagittarii
[ "Astronomy" ]
433
[ "Sagittarius (constellation)", "Constellations" ]
72,422,533
https://en.wikipedia.org/wiki/HD%2036187
HD 36187, also known as HR 1835, is a solitary, bluish-white hued star located in the southern constellation Columba, the dove. It has an apparent magnitude of 5.55, making it faintly visible to the naked eye under ideal conditions. Based on parallax measurements from the Gaia spacecraft, it is estimated to be 282 light years away from the Solar System; however, it is receding rapidly from it. At its current distance, HD 36187's brightness is diminished by 0.21 magnitude due to interstellar dust. HD 36187 has a stellar classification of either A1 V or A0 V, depending on the source. Both classes nevertheless indicate that it is an ordinary A-type main-sequence star that is fusing hydrogen in its core. It has double the mass and radius of the Sun, and it radiates 48 times the luminosity of the Sun from its photosphere. HD 36187 is estimated to be 311 million years old, having completed 66.9% of its main sequence lifetime. Like many hot stars, HR 1835 spins rapidly. References A-type main-sequence stars Columba (constellation) Columbae, 20 CD-37 02220 036187 025608 1835
HD 36187
[ "Astronomy" ]
283
[ "Columba (constellation)", "Constellations" ]
72,423,550
https://en.wikipedia.org/wiki/Amanita%20muscaria%20var.%20inzengae
Amanita muscaria var. inzengae, commonly known as Inzenga's fly agaric, is a basidiomycete fungus of the genus Amanita. It is one of several varieties of the Amanita muscaria fungus, all commonly known as fly agarics or fly amanitas. Biochemistry As with other Amanita muscaria, the inzengae variety contains ibotenic acid and muscimol, two psychoactive constituents which can cause effects such as hallucinations, synaesthesia, euphoria, dysphoria, and retrograde amnesia. The effects of muscimol and ibotenic acid most closely resemble those of GABAergic compounds, but with a dissociative effect taking place at low to mid doses, followed by delirium and vivid hallucinations at high doses. Ibotenic acid is mostly broken down in the body to muscimol, but what remains of the ibotenic acid is believed to cause the majority of the dysphoric effects of consuming A. muscaria mushrooms. Ibotenic acid is also a scientifically important neurotoxin used in laboratory research as a brain-lesioning agent in mice. As with other wild-growing mushrooms, the ratio of ibotenic acid to muscimol depends on countless external factors, including season, age, and habitat, and percentages will naturally vary from mushroom to mushroom. See also Amanita muscaria var. formosa Amanita muscaria var. guessowii References External links http://www.amanitaceae.org/?Amanita%20muscaria%20var.%20inzengae muscaria inzengae Poisonous fungi
Amanita muscaria var. inzengae
[ "Environmental_science" ]
382
[ "Poisonous fungi", "Toxicology" ]
72,424,208
https://en.wikipedia.org/wiki/Xiazhuangocaris
Xiazhuangocaris is an extinct genus of bivalved arthropod known from the Cambrian-aged Chengjiang Biota of Yunnan, China. Only a single specimen (NIGP 172765) is known, which preserves only the anatomy of the carapace and the trunk. The bivalved carapace has a pronounced pair of notches at its front, as well as a posterior notch at its rear. The body had at least 13 tergite-pleurite rings, which terminate in a pair of rounded caudal rami fringed with setae. The describing authors interpreted the taxon as a member of Hymenocarina, which contains other Cambrian bivalved arthropods. References Hymenocarina Cambrian arthropods Species known from a single specimen Fossil taxa described in 2020
Xiazhuangocaris
[ "Biology" ]
176
[ "Individual organisms", "Species known from a single specimen" ]
72,424,494
https://en.wikipedia.org/wiki/Pleurotus%20opuntiae
Pleurotus opuntiae is a species of Agaricales fungus that grows in the semi-arid climate of central Mexico and in New Zealand, whose mushroom is edible and considered a delicacy in the cuisine of the indigenous peoples of Mexico. It is known as hongo de maguey común in Mexican Spanish, seta de chumbera/nopal in Peninsular Spanish, and kjoo'wada in the Otomi language. Phylogenetic research has shown that while it belongs to the Pleurotus djamor-cornucopiae clade, it forms its own intersterility group; it has also been claimed to be genetically inter-incompatible with Pleurotus australis, Pleurotus ostreatus (extra-limital), Pleurotus pulmonarius, and Pleurotus purpureo-olivaceus of New Zealand. Description Pleurotus opuntiae fruits gregariously, in groups of several specimens, on dead remains of Opuntia plants, from which the binomial name of the fungus derives. The mushrooms are beige or cream in color. Their gills are very decurrent, and their caps are quite flat and funnel-shaped, slightly rolled at the edges. They have either a very short stipe or, often, essentially none at all. References Edible fungi Pleurotaceae Fungal tree pathogens and diseases Fungus species
Pleurotus opuntiae
[ "Biology" ]
294
[ "Fungi", "Fungus species" ]
72,424,560
https://en.wikipedia.org/wiki/Jason%20Wright%20%28astronomer%29
Jason Thomas Wright is an American astronomer. He is a professor in the Department of Astronomy and Astrophysics in the Eberly College of Science at Pennsylvania State University, where he also serves as director of the Penn State Extraterrestrial Intelligence Center. He is known for his research on the search for extraterrestrial intelligence, particularly the search for technosignatures. References External links Wright's Pennsylvania State University website American astronomers Living people Boston University alumni University of California, Berkeley alumni Pennsylvania State University faculty Year of birth missing (living people)
Jason Wright (astronomer)
[ "Astronomy" ]
113
[ "Astronomers", "Astronomer stubs", "Astronomy stubs" ]
72,424,930
https://en.wikipedia.org/wiki/Maritza%20Lara-L%C3%B3pez
Maritza Arlene Lara-López is a Mexican astronomer whose research interests include metallicity in galaxy formation and evolution and extragalactic astronomy. She is a participant in the Galaxy And Mass Assembly survey, and a researcher and Ramón y Cajal Fellow in the faculty of physical sciences at the Complutense University of Madrid. Education and career Lara-López is originally from Puebla, and did her undergraduate studies in physics at the Meritorious Autonomous University of Puebla, graduating in 2005 with a senior thesis based on research at the National Institute of Astrophysics, Optics and Electronics in Puebla, mentored by Raul Mújica and Omar López Cruz. She became a graduate student at the Instituto de Astrofísica de Canarias in the Canary Islands of Spain, supported by the Mexican National Council of Science and Technology (CONACyT). She earned a master's degree in 2007 and completed her PhD in 2011, supervised by Jordi Cepa. She was a researcher and "Super Science Fellow" at the Australian Astronomical Observatory from 2011 to 2014. Next, she became an assistant professor of astronomy at the National Autonomous University of Mexico (UNAM), associated with the UNAM Institute of Astronomy, from 2014 to 2017. She was a research fellow and assistant professor at the Niels Bohr Institute of the University of Copenhagen from 2017 to 2020, and a postdoctoral researcher at the Armagh Observatory in Northern Ireland from 2020 to 2022. She took her present position as Ramón y Cajal Fellow in the faculty of physical sciences at the Complutense University of Madrid in 2022. Her research collaborations include the Galaxy And Mass Assembly survey, the SAMI Galaxy Survey, and the OSIRIS Tunable Emission Line Object Survey (OTELO). Recognition Lara-López's dissertation won a prize for the best thesis in Mexico. In 2016, L'Oréal México named her as one of five winners of their L'Oréal-UNESCO Award for Women in Science Fellowship. References External links Year of birth missing (living people) Living people People from Puebla (city) Mexican astronomers Women astronomers Meritorious Autonomous University of Puebla alumni Academic staff of the National Autonomous University of Mexico Academic staff of the University of Copenhagen
Maritza Lara-López
[ "Astronomy" ]
443
[ "Women astronomers", "Astronomers" ]
72,426,420
https://en.wikipedia.org/wiki/Paul%20Portier%20%28physiologist%29
Paul Jules Portier (; 22 May 1866 – 26 January 1962) was a French physiologist who made important contributions to the discovery of anaphylaxis and the development of symbiogenesis. On a scientific expedition organised by Albert I, Prince of Monaco, he and Charles Richet discovered that toxins produced by marine animals (cnidarians such as the Portuguese man o' war and sea anemones) could induce fatal shocks. They named the medical phenomenon "anaphylaxis", for which Richet went on to receive the 1913 Nobel Prize in Physiology or Medicine. In his evolutionary theory of 1918, Portier became the first scientist to propose that the cell organelle known as the mitochondrion arose by symbiosis. Biography Portier was born in Bar-sur-Seine, France, to Ernest Paul and Julie Moreau Laure. He was schooled at the Lycée de Troyes from 1878 to 1885. After passing the final secondary examination (the baccalauréat) at Saint-Sigisbert in Nancy, he qualified for service in the Ministry of Finance in 1888. However, he chose to study biology, following his childhood dream. In 1889, he entered the University of Paris, from which he earned an M.D. in 1897 and a Doctor of Science (docteur ès sciences) degree in 1912. He continued to work at the university as an assistant physician. In 1906, Albert I, Prince of Monaco, founded the Institute of Oceanography (Institut océanographique de Paris), and Portier was appointed its professor. When the institute was inaugurated in 1911, Portier became its first director. In 1920, he was appointed professor of comparative physiology at the University of Paris. In 1923, the University of Paris created a chair of physiology, which he held for the rest of his career. He retired in 1936, and the university awarded him the position of honorary professor. He played active roles in the administrations of the French Academy of Sciences and the French Academy of Medicine. He published his last book, The Biology of Butterflies, in 1949. Portier married Françoise Noiret Claudine in 1911, and had three daughters, Andrée, Jeannine and Paulette. He spent his last days at his home in Bourg-la-Reine. Contributions Anaphylaxis In 1901, Albert I, Prince of Monaco, organised a scientific expedition along the French Atlantic coast. He specifically invited Portier and Charles Richet, professor of physiology at the Collège de France, to join him in investigating the toxins produced by cnidarians (such as jellyfish and sea anemones). Richet and Portier boarded Albert's ship Princesse Alice II, from which they collected various marine animals. They extracted a toxin called hypnotoxin from their collection of jellyfish (though the real source was later identified as the Portuguese man o' war) and from sea anemones (Actinia sulcata) collected at the Cape Verde Islands. In their first experiment on the ship, they injected a dog with the toxin in an attempt to immunise it; the dog instead developed a severe reaction (hypersensitivity). To confirm the findings, they knew that more experimental work was needed in the laboratory. In 1902, they repeated the injections in their laboratory and found that dogs normally tolerated the toxin on first injection but, on re-exposure three weeks later to the same dose, always developed fatal shock. They also found that the effect was not related to the dose of toxin used, as even small amounts in secondary injections were lethal. 
Thus, instead of observing the tolerance (prophylaxis) they had expected, they discovered that the toxin's effects were deadly. In 1902, Richet introduced the term aphylaxis to describe this condition of lack of protection. He later changed the term to anaphylaxis on grounds of euphony. The term is from the Greek ἀνά-, ana-, meaning "against", and φύλαξις, phylaxis, meaning "protection". On 15 February 1902, Richet and Portier jointly presented their findings before the Société de biologie in Paris. The moment is regarded as the birth of the study of allergy (allergology; the term "allergy" itself was coined by Clemens von Pirquet in 1906). Richet continued to study the phenomenon and was eventually awarded the Nobel Prize in Physiology or Medicine for his work on anaphylaxis in 1913. Portier never claimed co-discovery of anaphylaxis, instead honouring Richet as the senior scientist. After the Nobel Prize, Richet praised him for "giving up all claim to the honor of the discovery." Portier explained: "We discovered anaphylaxis without looking for it, and almost in spite of ourselves. But it was necessary to have the eyes and mind of a physiologist to understand the interest." Marine biology Portier was the first to realise that condensation of water vapour is the cause of the spout of a blowing whale and other marine mammals. He showed that condensation occurred as the expelled air spread and cooled. His studies from 1909 established the role of surface tension for insects that walk on water. His studies in 1922 addressed osmoregulation (salt-water balance) in fishes. In 1934, he showed that the deaths of marine birds in oil spills were due to loss of body heat caused by oil infiltrating the feathers. Symbiosis and symbiogenesis Following his interest in entomology and physiology, Portier studied how insects such as termites digest cellulose. He found that bacteria in the termite gut are essential for cellulose digestion. In addition, the bacteria provide essential vitamins to the termites and are involved in the developmental processes of their hosts. Thus, the bacteria were symbionts. At the time, such bacteria were generally regarded as parasites. Portier began to realise that microbes could be necessary for the lives and formation of higher organisms. In 1917, he published on the role of symbiosis in the lives of plants and animals, and by that time he had started writing a book he called Les Symbiotes. He was able to link the similarities of bacteria and mitochondria, the energy-producing cell organelles, and claimed that mitochondria behaved just like bacteria in culture. In 1918, Portier, summing up his observations on symbiosis in nature and his evolutionary idea (now known as symbiogenesis), published Les Symbiotes, dedicating it to Prince Albert. According to Portier, symbiosis is a universal process by which all complex life forms (eukaryotes) arose from the fusion of independent unicellular organisms; mitochondria, for example, are just a type of bacteria. He stated:All living beings, all animals from Amoeba to Man, all plants from Cryptogams to Dicotyledons are constituted by an association, the emboîtement [nesting] of two different beings. Each living cell contains in its protoplasm formations, which histologists designate by the name of mitochondria. 
These organelles are, for me, nothing other than symbiotic bacteria, which I call "symbiotes." As Portier himself remarked, his theory was "a veritable scientific heresy," and the book and the evolutionary idea were received with scepticism and ridicule. The Société de biologie created a committee to investigate the controversy. Scientists at the Pasteur Institute openly argued that mitochondria could never be cultured and challenged Portier to demonstrate his experiments. As John Archibald described it: "Les Symbiotes caused a brouhaha in France... Portier's reputation as a competent experimentalist was damaged and his grand hypothesis was ignored." The next year, Auguste Lumière published a refutation, Le Mythe des Symbiotes ("The Myth of Symbiotes"). Portier had prepared a draft of a sequel to Les Symbiotes, but never published it or touched on the subject of evolution again. (Symbiogenesis is now widely accepted, and mitochondria are understood to be descended from once free-living bacteria.) Honours Portier received the Montyon Prize in 1912, the La Caze Prize in 1934, and the Jean Toy Prize in 1951 from the French Academy of Sciences. He was made a Chevalier (Knight) in 1923, an Officier (Officer) in 1935, and a Commandeur (Commander) in 1951 of the Legion of Honour. He was honoured as a Commander of the Order of Saint Charles (1951) and of the Order of Cultural Merit (1954) of Monaco. He was elected a member of the French Academy of Medicine in 1929 and of the French Academy of Sciences in 1936. References 1866 births 1962 deaths French physiologists University of Paris alumni Academic staff of the University of Paris Members of the French Academy of Sciences Allergology Evolutionary biology
Paul Portier (physiologist)
[ "Biology" ]
1,885
[ "Evolutionary biology" ]
72,426,662
https://en.wikipedia.org/wiki/Goldsikka%20ATM
Goldsikka ATM is an automated teller machine in Hyderabad, and is India's and the world's first real-time gold ATM. History It was launched by Goldsikka Pvt. Ltd. in collaboration with OpenCube Technologies Pvt. Ltd. on 3 December 2022 at Ashoka Raghupathi Chambers, Begumpet. It was inaugurated by Vakiti Sunitha Laxma Reddy, Chairperson of the Telangana Women's Commission. In December 2023, a Goldsikka ATM that dispenses gold and silver coins was launched at Ameerpet metro station. Functions The machine can dispense gold coins ranging from 0.5 grams to 100 grams, and customers can pay with a credit or debit card. It offers 24/7 service and displays the live price of gold, which is updated from the London bullion market. The gold is 24 carat, and the machine can store up to 5 kg of it. It dispenses pure, hallmarked gold coins in tamper-proof packs certified at 999 purity. The ATM returns the money to the customer's bank if a transaction fails. References Automated teller machines Computer-related introductions in 2022
Goldsikka ATM
[ "Physics", "Technology", "Engineering" ]
250
[ "Machines", "Commercial machines", "Vending machines", "Automation", "Physical systems", "Automated teller machines" ]
75,204,438
https://en.wikipedia.org/wiki/V921%20Scorpii%20b
V921 Scorpii b is a candidate 60-Jupiter-mass brown dwarf orbiting V921 Scorpii, a 20-solar-mass, 30,000 K Herbig Be subgiant of class B0IV. The object is 835 AU away from V921 Scorpii. Despite this large distance, V921 Scorpii b receives an amount of irradiation comparable to that of Mars, owing to the large mass and luminosity of V921 Scorpii. While this candidate object is likely a brown dwarf, it is included in the Extrasolar Planets Encyclopaedia, since that catalogue includes substellar objects with masses up to 60 Jupiter masses. Host star With a mass of 20 solar masses and a temperature of 30,000 K, V921 Scorpii is a Herbig Be subgiant of class B0IV and the most massive star known to host a substellar object. References Scorpius Brown dwarfs
V921 Scorpii b
[ "Astronomy" ]
204
[ "Scorpius", "Constellations" ]
75,204,730
https://en.wikipedia.org/wiki/Pauline%20Sherman
Pauline Mont Sherman was an American aerospace engineer and academic. She was the first female professor in the College of Engineering at the University of Michigan and also the first woman to become Professor of Aerospace Engineering there. Her research focused on jet noise, low-density flows, two-phase flows, and especially hypersonic flows. Early life and education Sherman was born in New York in 1921 to Polish immigrants. In 1942, she began working as an expediter and clerk for Eugene Scherman, and later joined Babcock and Wilcox as a contract engineer in 1950. She earned a bachelor of science degree in Engineering Mechanics from the University of Michigan in 1952, where she participated in research on aircraft icing. At that time, pursuing an engineering degree was considered unconventional for women, who were barred from City College liberal arts classes. In 1953, she went on to receive a master's degree in Mechanical Engineering from the University of California, Berkeley, where she concurrently served as a research engineer. Career Sherman began her academic career by returning to the University of Michigan as an associate research engineer in 1956. She assumed the position of Assistant Professor of Aerospace Engineering in 1960, becoming the first woman appointed to the engineering faculty, and was subsequently promoted to associate professor in 1963 and to full professor in 1971, from which position she retired in 1987. The University of Michigan created the Pauline M. Sherman Collegiate Professorship to honor her legacy. Sherman joined Sigma Xi in 1955 and worked as a consultant for the Advisory Group for Aerospace Research and Development (AGARD) of NATO in 1962. She also provided consultancy services for the Environmental Protection Agency and the Lawrence Berkeley National Laboratory, and advocated for women in science. Following her retirement in 1987, she became a volunteer for the American Civil Liberties Union. Research Sherman contributed to the field of aerospace engineering with research focused on hypersonic flows, jet noise, electrical circuitry, two-phase flows, and low-density flows. During the early 1960s, she held a supervisory role in overseeing the construction of a high-energy hypersonic wind tunnel. She highlighted its capability to provide high temperatures and pressures for extended periods, crucial for analyzing chemical non-equilibrium in nozzle expansion. She also proposed a design for a timed, externally triggered quick-exhaust valve employing a double diaphragm system. Hypersonic flows Sherman researched hypersonic flows throughout her career. She demonstrated that the diameter of a Pitot tube affects the measurement of Pitot pressure, with calculations and measurements revealing variations in pressure depending on tube size, particularly a decrease in measured pressure for smaller tube diameters, and suggested potential solutions for reducing transducer lag time. She also showed that pressures on 3° cones matched findings for 5° cones, both correlating with the viscous interaction parameter, and a newly proposed calculation method closely matched measured pressures. Working alongside L. Talbot and T. Koga, Sherman examined the condensation of zinc vapor in a helium carrier gas through nozzle acceleration, with particle size measurements revealing that most particles were under 70 Å in diameter, and pressure measurements indicating significant supercooling at Mach 3. 
Two-phase flows Another prominent focus in her research was the investigation of two-phase flows and low-density flows. In a large supersonic nozzle, she found that particle sizes ranged from 200 to 700 Å, correlating with initial vapor pressure, and that particle numbers decreased with increased mass fraction, while static pressure showed a linear increase with initial mass fraction. Additionally, she examined the condensation of superheated zinc vapor in an inert carrier gas, and observed an onset of condensation with approximately 430 K of supercooling and compared the findings with a classical liquid drop model of nucleation, which showed reasonable agreement with the measurements. In a collaborative study, Sherman developed a dispenser that consistently feeds small particles into a laboratory burner using positive displacement and a sonic ejector, meeting the criteria for accurate chemical measurements and laser-Doppler anemometry. She also designed and implemented a low inductance circuit for evaporating metal wires and condensing metal vapor into submicron-sized spherical particles with a log normal distribution, showing a decreasing mean diameter as expansion length increased, while the impact of ambient gas type on particle size was limited in the absence of chemical reactions. Jet noise Focusing on jet noise, Sherman revealed that the jet's oscillation frequency matched the dominant sound frequency with a reflecting surface, while an insulated surface shifted sound frequencies above the audible range, and screech frequency was inversely related to the first shock cell length and decreased with higher stagnation pressure. Electrical circuitry In her work on electrical circuitry, Sherman presented an empirical method for predicting the parameters required to achieve a single pulse discharge with no oscillation or residual energy, utilizing an LRC circuit with a 14.7-μF capacitor charged to different voltages and discharging through various metal wires. Selected articles Talbot, L., Koga, T., & Sherman, P. M. (1959). Hypersonic viscous flow over slender cones. Journal of the Aerospace Sciences, 26(11), 723–730. Griffin, J. L., & Sherman, P. M. (1965). Computer analysis of condensation in highly expanded flows. AIAA Journal, 3(10), 1813–1818. McBride, D. D., & Sherman, P. M. (1972). Condensed zinc particle size determined by a time discrete sampling apparatus. AIAA Journal, 10(8), 1058–1063. Sherman, P. M. (1975). Generation of submicron metal particles. Journal of Colloid and Interface Science, 51(1), 87–93. Sherman, P. M., Glass, D. R., & Duleep, K. G. (1976). Jet flow field during screech. Applied Scientific Research, 32, 283–303. References Aerospace engineers American aerospace engineers University of Michigan faculty University of California, Berkeley faculty University of Michigan alumni University of California, Berkeley alumni 1921 births 2007 deaths
Pauline Sherman
[ "Engineering" ]
1,260
[ "Aerospace engineers", "Aerospace engineering" ]
75,205,974
https://en.wikipedia.org/wiki/First%20universal%20common%20ancestor
The first universal common ancestor (FUCA) is a proposed non-cellular entity that was the earliest organism with a genetic code capable of the biological translation of RNA molecules into peptides to produce proteins. Its descendants include the last universal common ancestor (LUCA) and every modern cell. FUCA would also be the ancestor of ancient sister lineages of LUCA, none of which have modern descendants, but which are thought to have horizontally transferred some of their genes into the genome of early descendants of LUCA. FUCA is thought to have been composed of progenotes, proposed ancient biological systems that would have used RNA for their genome and self-replication. By comparison, LUCA would have had a complex metabolism and a DNA genome with hundreds of genes and gene families. Origins Long before the appearance of compartmentalized biological entities like FUCA, life had already begun to organize itself and emerge in a pre-cellular era known as the RNA world. The universal presence of both the biological translation mechanism and the genetic code in all biological systems indicates monophyly, a unique origin for all biological systems, including viruses and cells. FUCA would have been the first organism capable of biological translation, using RNA molecules to convert information into peptides and produce proteins. This first translation system would have been assembled together with a primeval, possibly error-prone genetic code; that is, FUCA would be the first biological system to have a genetic code for proteins. The development of FUCA likely took a long time. FUCA was generated without a genetic code, from the ribosome, itself a system that evolved through the maturation of ribonucleoprotein machinery. FUCA appeared when a proto-peptidyl transferase center first started to emerge, as RNA-world replicators became capable of catalyzing the bonding of amino acids into oligopeptides. The first genes of FUCA most likely encoded ribosomal proteins, primitive tRNA aminoacyl transferases, and other proteins that helped to stabilize and maintain biological translation. The random peptides produced possibly bound back to the single-stranded nucleic acid polymers and further stabilized the system, which became more robust and bound additional stabilizing molecules. By the time FUCA had matured, its genetic code was completely established. FUCA was composed of a population of open-system, self-replicating ribonucleoproteins. With the arrival of these systems began the progenote era. These systems evolved to maturity when self-organization processes resulted in the creation of a genetic code. This genetic code was for the first time capable of organizing an ordered interaction between nucleic acids and proteins through the formation of a biological language. Pre-cellular open systems then started to accumulate information and self-organize, producing the first genomes through the assembly of biochemical pathways, which probably appeared in different progenote populations evolving independently. In the reduction hypothesis, in which giant viruses evolved from primordial cells that became parasitic, viruses might have evolved after FUCA and before LUCA. Progenotes Progenotes (also called ribocytes or ribocells) are semi-open or open biological systems capable of an intense exchange of genetic information, predating the existence of cells and LUCA. 
The term progenote was coined by Carl Woese in 1977, around the time he introduced the concept of the three domains of life (bacteria, archaea, and eukaryotes) and proposed that each domain originated from a different progenote. The meaning of the term changed with time. In the 1980s, Doolittle and Darnell used the word progenote to designate the ancestor of all three domains of life, now referred to as the last universal common ancestor (LUCA). The terms ribocyte and ribocell refer to progenotes as protoribosomes, primeval ribosomes that were hypothetical cellular organisms with self-replicating RNA but without DNA, and thus with an RNA genome instead of the usual DNA genome. In Carl Woese's Darwinian threshold period of cellular evolution, the progenotes are also thought to have had RNA, rather than DNA, as their informational molecule. The evolution of the ribosome from ancient ribocytes, self-replicating machines, into its current form as a translational machine may have been driven by the selective pressure to incorporate proteins into the ribosome's self-replicating mechanisms, so as to increase its capacity for self-replication. Ribosomal RNA is thought to have emerged before cells or viruses, at the time of the progenotes. Progenotes composed FUCA and were its descendants, and FUCA is thought to have organized the process between the initial organization of biological systems and the maturation of progenotes. Progenotes were dominant in the progenote age, the time when biological systems originated and initially assembled. The progenote age would have occurred after the pre-biotic age of the RNA world and peptide world but before the age of organisms and mature biological systems such as viruses, bacteria, and archaea. The most successful progenote populations were probably those capable of binding and processing carbohydrates, amino acids, and other intermediate metabolites and co-factors. In progenotes, compartmentalization by membranes was not yet complete, and the translation of proteins was not precise. Not every progenote had its own metabolism; different metabolic steps were present in different progenotes. It is therefore assumed that there existed a community of sub-systems that started to cooperate collectively and culminated in LUCA. Ribocytes and viruses In the eocyte hypothesis, the organism at the root of all eocytes may have been a ribocyte of the RNA world. For cellular DNA and DNA handling, an "out of virus" scenario has been proposed: storing genetic information in DNA may have been an innovation performed by viruses and later handed over to ribocytes twice, once transforming them into bacteria and once transforming them into archaea. Similarly, in viral eukaryogenesis, a hypothesis theorizing that eukaryotes evolved from a DNA virus, ribocytes may have been an ancient host for the DNA virus. As ribocytes used RNA to store their genetic information, viruses may initially have adopted DNA as a way to resist RNA-degrading enzymes in the host ribocells. Hence, the contribution from such a new component may have been as significant as the contribution from chloroplasts or mitochondria. Following this hypothesis, archaea, bacteria, and eukaryotes each obtained their DNA informational system from a different virus. See also Abiogenesis Alternative abiogenesis scenarios Earliest known life forms RNP world References Origin of life Hypothetical life forms Evolutionary biology Genetic genealogy Events in biological evolution Phylogenetics
First universal common ancestor
[ "Biology" ]
1,397
[ "Evolutionary biology", "Origin of life", "Hypothetical life forms", "Taxonomy (biology)", "Bioinformatics", "Biological hypotheses", "Phylogenetics" ]
75,207,572
https://en.wikipedia.org/wiki/Alfred%20Tissi%C3%A8res
Alfred Tissières (October 14, 1917 – June 7, 2003) was a Swiss molecular biologist, a pioneer in highlighting the role of ribosomes in protein biosynthesis and the initiator of studies on the heat shock proteins synthesized by cells subjected to stress. He shared the Marcel Benoist Prize with Edouard Kellenberger in 1966. Early life and education Tissières was born on October 14, 1917 in Martigny; his family came from the neighboring town of Orsières. After studying medicine in Lausanne, where he obtained a doctorate in 1946, Tissières left to do a PhD in England at Cambridge, at the Molteno Institute for Research in Parasitology in the laboratory of David Keilin. Professional and scientific career From 1951 to 1952, he carried out a postdoctoral internship in the laboratory of Max Delbrück at the California Institute of Technology, where he worked on the respiration of enterobacteria with Herschel K. Mitchell. In 1953 he returned to Cambridge as a research fellow at King's College. From 1957 to 1961, he was a research associate at Harvard with James Watson. There he carried out pioneering work on the ribosomes of Escherichia coli; these structures had just been described by microscopy by George Palade. Tissières showed that they are formed of two subunits and that they are linked to messenger RNAs. Next, during a short stay at the Pasteur Institute in the laboratory of Jacques Monod in 1959, he showed, with François Gros, whom he had met at Harvard, that ribosomes are capable of incorporating amino acids into proteins. In his Nobel Prize lecture in 1968, Marshall Warren Nirenberg cited this work as having been decisive for his own discoveries. In 1963, Tissières was appointed professor at the University of Geneva, where he created a laboratory dedicated to the study of ribosomes. In 1972, he completed a sabbatical stay in the laboratory of Herschel K. Mitchell at the California Institute of Technology. There he discovered that fly cells subjected to heat shock-type stress synthesize particular proteins. This synthesis was linked to the "puffs" described in 1962 by Ferruccio Ritossa on polytene chromosomes from fly salivary glands subjected to the same stresses. These "puffs" were an indication of transcription from DNA to RNA, which suggested that stresses triggered gene expression. Thus this work by Tissières established the correspondence between the "puffs" and the synthesis of a group of proteins that came to be called heat shock proteins. Since then, numerous studies in very varied fields of biology have been devoted to these proteins, which are at the origin of the concept of the chaperone protein. From 1973, his laboratory devoted itself to the characterization of heat shock proteins and the regulation of the transcription of the corresponding genes into messenger RNA. Awards Marcel Benoist Prize, shared with Edouard Kellenberger, in 1966 Learned societies Member of the council of the European Molecular Biology Organization (EMBO) from 1968 to 1973 Personal life His father, Jules Tissières, was a Catholic conservative conseiller national for the Parti démocrate-chrétien (Suisse) from 1911 to 1918. Tissières married Virginia Wachob, an American national. An experienced mountaineer, Tissières was among the first to climb the south face of the Täschhorn and the north ridge of the Dent Blanche. He is credited with the 1952 first ascent of Mount Doonerak in the Brooks Range of Alaska. 
In 1954, with the Cambridge University Mountaineering Club, he unsuccessfully attempted the ascent of Rakaposhi (7780 m) in Pakistan with George Band, one of the members of the team that had conquered Everest in 1953; a contemporary film of the expedition is in the public domain. Tissières campaigned for peace and nuclear disarmament by participating in several meetings of the Pugwash movement from 1990 to 2000. Legacy The Cell Stress Society International has, since 2005, offered an award (biennial until 2019, annual since then) in memory of Tissières to a young researcher, the Alfred Tissières Young Investigator Award. Main scientific contributions Localization of the heat shock-induced proteins in Drosophila melanogaster tissue culture cells (1980) A P Arrigo, S Fakan, A Tissières. Dev Biol 1980 Jul;78(1):86-103. doi: 10.1016/0012-1606(80)90320-6. Protein synthesis in salivary glands of Drosophila melanogaster: relation to chromosome puffs (1974) A Tissières, H K Mitchell, U M Tracy. J Mol Biol 1974 Apr 15;84(3):389-98. doi: 10.1016/0022-2836(74)90447-1. Amino acid incorporation into proteins by Escherichia coli ribosomes (1960) Tissières A, Schlessinger D, Gros F. Proc Natl Acad Sci U S A. 1960 Nov;46(11):1450-63. doi: 10.1073. Ribonucleoprotein particles from Escherichia coli. Tissières, A; Watson, JD; Schlessinger, D; Hollingworth, BR. J Mol Biol, Volume 1, Issue 3, pages 221-233 (1959). Ribonucleoprotein particles from Escherichia coli. Tissières, A and Watson, JD. Nature 182 (4638) (1958), pp. 778–780. References External links An interview with Alfred Tissières in the Cold Spring Harbor oral history archives A tribute by Pierre Spierer (in French) A tribute by Jean David Rochaix (in French) 1917 births 2003 deaths Molecular biologists
Alfred Tissières
[ "Chemistry" ]
1,181
[ "Molecular biologists", "Biochemists", "Molecular biology" ]
75,208,691
https://en.wikipedia.org/wiki/LXXXVIII%20%28album%29
LXXXVIII is the eighth solo studio album by British electronic music producer Actress, released on 3 November 2023 through Ninja Tune. It was inspired by chess and game theory, and received positive reviews from critics. Background The album was inspired by chess, with each track title containing a coordinate on a chessboard as used in algebraic notation, as well as by game theory. Its title is the Roman numeral for 88, which is also the name of a mixtape Actress released in 2020; 88 is included with the 3×LP edition of LXXXVIII. Critical reception LXXXVIII received a score of 82 out of 100 on the review aggregator Metacritic based on six critics' reviews, indicating "universal acclaim". Daniel Bromfield of Pitchfork commented that "there are some remarkably idle stretches on this 57-minute album, which weaves between short dance tracks and long, intractable expanses of stasis; it's the inverse of the typical techno 'artist album,' where the dance tracks are sandwiched between half-baked ambient stocking-stuffers designed to show off the producer's compositional bona fides". AllMusic's Andy Kellman described it as "yet another source of mystification", as "its deconstructions and creative alterations of underground club music forms, combined with crystalline ambient compositions [...] cause more sensations of wonderment, comfort, and unease". Andrew Ryce of Resident Advisor stated that LXXXVIII "focuses on piano, electronics and voice", and that as Actress adds "a tinge of jazz to one of dance music's most recognisable sounds, the LP is eclectic and comforting, designed to make you squirm and relax at the same time". Reviewing the album for The Skinny, Patrick Gamble called it "an incredibly unpredictable album, with Cunningham seemingly determined to push his imaginative limits, vivifying his unique brand of experimental techno by cataloguing the possibilities of the genre". Personnel Darren J. Cunningham – production, mixing Noel Summerville – mastering Inventory Studio – design References 2023 albums Actress (musician) albums 2020s concept albums Chess in music Media related to game theory Ninja Tune albums
LXXXVIII (album)
[ "Mathematics" ]
451
[ "Game theory", "Media related to game theory" ]
75,208,716
https://en.wikipedia.org/wiki/Circular%20consensus%20sequencing
Circular consensus sequencing (CCS) is a DNA sequencing method that is used in conjunction with single-molecule real-time sequencing to yield highly accurate long-read sequencing datasets, with read lengths averaging 15–25 kb and median accuracy greater than 99.9%. These long reads, which are created by forming a consensus sequence from multiple passes over a single DNA molecule, can be used to improve results for complex applications such as single nucleotide and structural variant detection, genome assembly, assembly of difficult polyploid or highly repetitive genomes, and assembly of metagenomes. CCS allows the resolution of large or complex genomes of any species (such as the California redwood genome, nine times the size of the human genome) and the detection of variants, from single nucleotide variants (SNVs) to structural variants, with high precision. CCS also enables separation of the different copies of each chromosome (e.g., maternal and paternal in diploids), known as haplotypes. CCS reads offer the benefits of high accuracy equivalent to short-read sequencing data, but with the length necessary for complex genome assemblies and phasing of variants across the genome. Technology In this method, circularized fragments of DNA in solution float across the surface of a nanofluidic chip called a SMRT (Single Molecule, Real-Time) Cell. The surface of the chip is covered with millions of wells called zero-mode waveguides (ZMWs), each some tens of nanometers wide. To prepare a sample for CCS/HiFi sequencing, primers and DNA polymerase are added to SMRTbell libraries. The circularized DNA becomes trapped in a ZMW, nucleotides are added, and the DNA polymerase enzyme begins to copy the molecule base by base. As this happens, a tiny amount of light is released and read by a detector, which helps the sequencer's computer determine the order of bases present in the sample. The circularized DNA is sequenced in repeated passes to ensure accuracy (thus the name "circular" consensus sequencing); the primers and adapters are then removed bioinformatically to deliver a highly accurate consensus DNA read. In CCS, the genomic DNA is prepared without amplification, so that individual base modifications such as methylation can be detected during sequencing. This allows the capture of both sequence and valuable methylation information in a single experiment. History This sequencing method was first described by Travers et al. in Nucleic Acids Research in 2010. It was later commercialized by Pacific Biosciences in 2018 and made available on the Sequel II and Revio long-read sequencing instruments. CCS technology has subsequently been used to power numerous studies in several fields, including human telomere-to-telomere whole-genome assembly and pangenome research, pediatric rare disease genomic analysis, DNA methylation in rare disease cohorts, whole-genome assembly of non-human vertebrates, whole-genome assembly of other agriculturally significant species, analysis of cancer genomes, and metagenomics and microbial research, among others. Recognizing the importance of this technology for future genomic exploration and discovery, the editors of Nature Methods named long-read sequencing technology their method of the year for 2022. 
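The consensus step described in the Technology section above can be illustrated with a deliberately simplified sketch. The Python code below is a toy per-position majority vote over pre-aligned passes; it is not PacBio's actual CCS algorithm (which aligns raw subreads and polishes the call with a probabilistic model of the instrument's error profile), and the function name and example data are hypothetical. It only conveys the core idea: random errors differ between passes over the same molecule, so a vote across passes suppresses them.

```python
from collections import Counter

def toy_consensus(aligned_passes):
    """Toy majority-vote consensus over pre-aligned passes.

    aligned_passes: list of equal-length strings, one per pass around the
    circular template, with '-' marking alignment gaps. This is an
    illustrative sketch, not a production CCS implementation.
    """
    if not aligned_passes:
        return ""
    length = len(aligned_passes[0])
    assert all(len(p) == length for p in aligned_passes), "passes must be pre-aligned"
    consensus = []
    for column in zip(*aligned_passes):          # iterate alignment columns
        base, _count = Counter(column).most_common(1)[0]  # majority base
        if base != "-":                          # skip columns where the majority is a gap
            consensus.append(base)
    return "".join(consensus)

# Each pass carries a different random error, so the vote recovers the template:
passes = ["ACGTACGT", "ACGAACGT", "ACGTACGT", "AC-TACGT"]
print(toy_consensus(passes))  # -> "ACGTACGT"
```

The design point the sketch makes is that accuracy grows with the number of passes over one molecule, which is why CCS trades some raw read length for repeated coverage of the same insert.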
Applications Human and conservation biology CCS can be useful to researchers seeking to perform de novo sequence assembly or to study haplotype-phased sequences from each chromosomal copy, regardless of how many chromosomes are present in the species. Many biodiversity-oriented consortia have leveraged the technology for their conservation biology studies, including the African Biogenome Project, California Conservation Genomics Project, Darwin Tree of Life, Desert Agriculture Initiative, Earth Biogenome Project, Global Ant Genomics Alliance, Human Pangenome, Telomere-to-Telomere Consortium, The 10,000 Fish Genomes Project and Vertebrate Genomes Project. Human health Circular consensus sequencing is helping researchers identify and characterize rare or structural variants with high confidence, to better identify the underlying genomics of a given phenotype, with numerous applications to human health including rare disease research, microbiology and infectious disease, cancer research, and other genetic disease research areas. Rare diseases Although they occur with low frequency in the human population, rare diseases as a collective are common, and most have a genetic cause, presenting unique diagnostic challenges. An estimated 50–80% of structural variants are tandem repeats. Because CCS provides a comprehensive view of variation in the human genome – producing complete, accurate, and phased assemblies for variant calling and for identification of repeat expansions and medically relevant interruption sequences – it is enabling the identification of causative pathogenic variants and helping researchers discover novel disease-associated genes. Microbiology and infectious diseases Circular consensus sequencing can rapidly identify emerging pathogens and/or detect changes in pathogen genomes as part of regional or global surveillance operations. Where other molecular technologies for public health surveillance may require re-validation or the development of new panels, the unbiased nature of circular consensus sequencing delivers comprehensive genetic information to further characterize global outbreaks, pandemics, and epidemics. Cancer research Comprehensive resolution of structural variants enables researchers to better study and detect somatic variants driving cancer. Because of their size (>50 bp), structural variants and tandem repeats account for much of the genomic variation between individuals. Long-read RNA sequencing can be useful in cancer research to uncover sources of alternative splicing and fusion events which power cancer growth. CCS also provides an advantage over other sequencing technologies in that it can provide phasing information for expressed mutations. References DNA sequencing Biotechnology
Circular consensus sequencing
[ "Chemistry", "Biology" ]
1,158
[ "nan", "Molecular biology techniques", "DNA sequencing", "Biotechnology" ]
75,209,749
https://en.wikipedia.org/wiki/Double%20liner
A double liner is a fluid barrier system that incorporates two impermeable layers separated by a permeable drainage layer, also called a leak detection layer. Typically the impermeable layers are made from geomembranes with the permeable layer in between. The uppermost layer is called the primary liner, while the lower layer is called the secondary liner. This combination of layers is designed to prevent hydraulic head from building up on the secondary liner, thereby limiting or preventing any permeation through it. Because it is difficult to construct a single large-scale impermeable layer without any defects, a double liner system is more robust, as it can deal with leakage through the primary liner. A double liner system is required by the United States EPA for landfills, surface impoundments, and waste piles. History The first double geomembrane liner system was designed by geosynthetics pioneer J.P. Giroud and installed in 1974 in Le Pont-de-Claix, France, to serve as a water reservoir; it is still in service today. This system was composed of an early form of a bituminous geomembrane as the secondary liner, gravel as the drainage layer, and a butyl rubber geomembrane as the primary liner. References Geosynthetics Landfill
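To see why keeping hydraulic head off the secondary liner matters, consider free flow through a small defect in a geomembrane, often estimated with Bernoulli's orifice equation. The sketch below is illustrative only: the discharge coefficient, defect size, and heads are assumed values, and real designs use empirical relations (such as Giroud's equations) that account for the soil beneath the defect.

```python
import math

def defect_leakage_rate(head_m, defect_area_m2, c_b=0.6):
    """Bernoulli free-flow rate (m^3/s) through a geomembrane defect.

    Q = C_B * a * sqrt(2 * g * h), with C_B ~ 0.6 for a sharp-edged
    orifice. Applies when liquid flows freely through the hole, as
    into an open leak-detection layer.
    """
    g = 9.81  # gravitational acceleration, m/s^2
    return c_b * defect_area_m2 * math.sqrt(2 * g * head_m)

# A hypothetical 1 cm^2 hole under 0.3 m of head versus 3 mm of head:
print(defect_leakage_rate(0.3, 1e-4))    # ~1.5e-4 m^3/s
print(defect_leakage_rate(0.003, 1e-4))  # ~1.5e-5 m^3/s
```

Because leakage scales with the square root of head, the drainage layer's job of keeping head near zero on the secondary liner directly limits what can pass through any defect in it.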
Double liner
[ "Materials_science", "Engineering" ]
272
[ "Materials science stubs", "Civil engineering", "Civil engineering stubs", "Materials science" ]
75,210,565
https://en.wikipedia.org/wiki/Moxon%20Medal
The Moxon Medal was established in 1886. It is a triennial award made by the Royal College of Physicians to acknowledge a person who has produced distinguished observation and research in clinical medicine; it is not restricted to British subjects. The award is named after Dr Walter Moxon FRCP (1836–86), a distinguished doctor who practised, taught and researched medicine at Guy’s Hospital in London, England. Medallists See also List of medicine awards Prizes named after people References Medicine awards British awards Awards established in 1886 1886 establishments in the United Kingdom Royal College of Physicians
Moxon Medal
[ "Technology" ]
116
[ "Science and technology awards", "Medicine awards" ]
75,210,974
https://en.wikipedia.org/wiki/Krupp%E2%80%93Renn%20process
The Krupp–Renn process was a direct reduction steelmaking process used from the 1930s to the 1970s. It used a rotary furnace and was one of the few technically and commercially successful direct reduction processes in the world, offering an alternative to blast furnaces, whose coke consumption was a drawback. The Krupp-Renn process consumed mainly hard coal and had the unique characteristic of partially melting the charge. This makes it well suited to processing low-grade or hard-to-smelt ores, as their waste material forms a protective slag that can be easily separated from the iron. It generates Luppen, nodules of pre-reduced iron ore, which can be easily melted down. The first industrial furnaces emerged in the 1930s, first in Nazi Germany and then in the Japanese Empire. During the 1950s, new facilities were constructed, notably in Czechoslovakia and West Germany. The process was abandoned in the early 1970s, with a few exceptions: it had low productivity, was intricate to master, and was pertinent only to certain ores. At the beginning of the 21st century, Japan modernized the process to manufacture ferronickel; this is the sole surviving variant. History Setting up The principle of direct reduction of iron ore was tested in the late 19th century using high-temperature stirring of ore powder mixed with coal and a small amount of limestone to adjust the ore's acidity. Carl Wilhelm Siemens' direct reduction process, which was sporadically employed in the United States and United Kingdom in the 1880s, is particularly noteworthy. It is based on a horizontal-axis drum, about 3 meters in diameter and of similar length, into which gases preheated by two regenerators are blown. The metallurgical industry carried out much research on rotary tubular furnaces, inspired by similar equipment used in cement works. The Basset process, developed during the 1930s, was even capable of producing molten cast iron. In the 1920s, the German metallurgist Friedrich Johannsen, head of the metallurgy department at the Gruson plant and professor at the Clausthal University of Technology, explored the metallurgical applications of this type of furnace. He filed a series of patents for removing volatile metals from steel raw materials. During the 1930s Johannsen initiated the development of direct-reduction iron production. The first installation underwent testing from 1931 to 1933 at the Gruson plant in Magdeburg. Research on the Krupp-Renn process continued until 1939 at the Krupp facility in Essen-Borbeck. The process, named after the Krupp company that created it and the German Rennofen, translating to "low furnace," displayed potential. As a result, Krupp procured patents overseas to safeguard the invention after 1932. Adoption In 1945 there were 38 furnaces worldwide, with a combined capacity of 1 Mt/year. The process was favored in Germany due to the autarky policy of the Nazi regime, which prioritized the use of low-quality domestic iron ore. The transfer of technology between Nazi Germany and Imperial Japan led to the Japanese Empire benefiting from this process. Furnaces were installed in the co-prosperity sphere and operated by Japanese technicians. By the eve of the Pacific War, the process was being used in four steelworks in Japan. After World War II all installations in Germany, China, and North Korea were dismantled, with 29 furnaces sent to the USSR as war reparations. Only the Japanese and Czechoslovakian plants remained functional. In the 1950s Krupp rebuilt several large furnaces in Spain, Greece, and Germany.
The Czechoslovakians were the primary drivers, constructing 16 furnaces and increasing process efficiency. The Great Soviet Encyclopedia reports that over 65 industrial furnaces, ranging from 60 to 110 meters in length and 3.6 to 4.6 meters in diameter, were constructed between 1930 and 1950. By 1960, 50 furnaces were producing 2 million tons per year in several countries. Disappearance The Soviet Union recovered 29 furnaces as war reparations, but derived little benefit from them. According to sources, the Red Army's destructive techniques in dismantling German industrial plants proved inappropriate and wasted valuable resources. It was also challenging for the Russians to reconstruct these factories within the Soviet Union. Travelers from Berlin to Moscow reported seeing German machinery scattered, largely deteriorating, along every meter of track and shoulder, suffering from the harsh climatic conditions. In any case, the Russian iron and steel industry did not rely heavily on technological input from the West. The Eastern Bloc maintained this marginal technology only to a limited extent in the recently sovietized European countries, where it was eventually abandoned. Meanwhile, the large furnaces rebuilt in the 1950s in West Germany operated for approximately ten years before shutting down, due to the low cost of scrap and imported ore. The process then vanished from West Germany, as it did from the rest of Western Europe. In Japan, too, furnaces evolved toward ever-larger units. However, the dwindling of local ferruginous sand deposits, along with the low cost of scrap and imported ores, eventually resulted in the gradual discontinuation of the process there as well. By 1972 most plants in Czechoslovakia, Japan, and West Germany had ceased operations; the process was widely considered obsolete and no longer garnered the attention of industrialists. The Japanese nevertheless steadily improved it, developing it under various names for specialized products, including ferroalloys and the recycling of steelmaking by-products, and at the start of the 21st century the Krupp-Renn process survives solely for ferronickel production in Japan. Process General principles The Krupp–Renn process is a direct reduction process that uses a long tubular furnace similar to those found in cement production. The most recent units constructed have a diameter of approximately 4.5 meters and a length of 110 meters. The residence time of the product is set by the slope and rotation speed of the rotary kiln, which is inclined at roughly 2.5 percent. Prior to use, the iron ore is crushed to less than 6 mm in particle size. The ore is introduced at the upper (upstream) end of the furnace, mixed with a small amount of fuel, typically hard coal. After 6 to 8 hours, it exits the furnace as pre-reduced iron ore at 1,000 °C. The amount of iron recovered ranges from 94% to 97.5% of the iron initially in the ore. A burner located at the lower end of the furnace provides heat, making the kiln a counter-current reactor. The fuel comprises finely pulverized coal, which, upon high-temperature combustion, generates a reducing gas consisting primarily of CO. Once the furnace reaches its operating temperature, the ore-coal mixture can serve as the primary fuel source. The fumes exiting the furnace's upper end attain temperatures ranging from 850 to 900 °C and are subsequently cooled and purged of dust by water injection before discharge through the chimney.
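As a rough plausibility check on the figures above – a sketch using only numbers quoted in this article, not a design calculation – a 110 m furnace traversed in 6 to 8 hours implies a mean axial speed of roughly a quarter of a meter per minute:

```python
def mean_axial_speed(length_m, residence_h):
    """Average speed of the charge along the kiln, in m/min."""
    return length_m / (residence_h * 60)

# Figures quoted for the largest Krupp-Renn kilns:
for hours in (6, 8):
    v = mean_axial_speed(110, hours)
    print(f"{hours} h residence -> {v:.2f} m/min")
# 6 h residence -> 0.31 m/min
# 8 h residence -> 0.23 m/min
```

In practice this speed is tuned through the kiln's slope and rotation rate, the two parameters the article identifies as setting residence time.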
The process is effective for producing ferronickel because of the chemical similarity of the elements involved. At 800 °C, carbon easily reduces iron and nickel oxides, while the gangue's other oxides are not significantly reduced. Specifically, iron(II) oxide (or wustite), the stable iron oxide at 800 °C, has a reducibility similar to that of nickel(II) oxide, making it impossible to reduce one without reducing the other. Process characteristics The rotary kiln's maximum temperature ranges between 1,230 and 1,260 °C, which significantly exceeds the 1,000 to 1,050 °C threshold for iron oxide reduction. The main objective is to bring the ore gangue to a paste-like consistency. The reduced iron agglomerates into 3 to 8 mm metal nodules called Luppen. If the gangue is highly infusible, the temperature must be increased, up to 1,400 °C for a basic charge. It is crucial to control the gangue's hot viscosity. Among rotary drum direct reduction processes, this one stands out for its high temperatures. Another distinctive attribute of the procedure is the introduction of powdered coal at the furnace outlet. The process later evolved so that this coal supply could be cut off entirely, running exclusively on the coal dust or coke dust introduced with the ore; in that case only combustion air is injected at the furnace outlet. Shaft furnaces such as blast furnaces achieve better thermal efficiency than rotary furnaces, although the injected air does recover some of the heat of the Luppen. However, the oxygen in the air partially re-oxidizes the product: although the iron is completely reduced inside the furnace, the Luppen are altered by contact with air at the end of, or after leaving, the furnace. The hot mass is discharged from the furnace and then rapidly cooled and crushed. The iron is separated from the slag via magnetic separation. Magnetically intermediate fines make up 5–15% of the charge. While partial melting of the charge increases the density of the products, it also requires significant energy consumption. Load behavior as it passes through the furnace The furnace comprises three distinct zones. First, the preheating zone heats the ore to 800 °C using the hot fumes within the furnace; ore reduction, which requires temperatures above 900–1,000 °C, does not yet occur, while the coal releases its most volatile constituents. Second, the reduction zone in the middle of the furnace is where coal and iron oxides react to produce carbon monoxide. The carbon monoxide released from the charge generates a gaseous layer that shields the charge against the oxidizing air circulating above. The excess gas burns above the charge, heating the furnace walls, which transfer the heat back to the charge as the furnace rotates. The temperature gradually rises to 800–1,200 °C, and the iron oxides are progressively reduced to ferronickel or metallic iron. The metal produced takes the form of metallic sponge particles finely dispersed in the powdery gangue. In the final zone, reduction is complete and little CO is produced, so the charge is no longer protected from the air blown in at the base of the furnace; a violent but shallow reoxidation of the iron occurs. Some of the oxidized iron is returned to the core of the charge by rotation, where it is further reduced with the residual coal.
The remaining material mixes with the waste to create a thick slag that cannot blend with the produced metal. The heat of this reaction melts the non-oxidized iron and nickel, which clump together to form the nodules called Luppen. Temperature control is critical and depends on the ore's physicochemical characteristics. Overly high temperatures or unsuitable granulometry lead to rings of sintered material that accumulate on the walls of the furnace. Typically, a ring of iron-poor slag forms at about two-thirds of the distance along the furnace. Similarly, a metal ring usually forms around ten meters from the outlet. These rings disturb the flow of materials and gas, diminishing the furnace's useful capacity and sometimes completely obstructing it. Ring formation has hindered attempts to revive the process, particularly in China, where industrialists in the early 21st century abandoned its adoption after recognizing how critical and challenging this parameter is to manage. While slag melting consumes energy, it makes it possible to govern the charge's behavior in the furnace. Additionally, a minimum of 800 to 1,000 kg of slag per ton of iron is needed to prevent the Luppen from growing too big. Slag limits coal segregation, as coal is much less dense than ore and would otherwise float to the surface of the mixture. When hot, the slag forms a paste that guards the metal against oxidation; when cold, it vitrifies, which simplifies both Luppen processing and furnace cleaning during maintenance shutdowns. Performance with low-grade ores The Krupp-Renn process is suitable for producing pre-reduced iron ore from highly siliceous and acidic ores (CaO/SiO2 basicity index of 0.1 to 0.4), which begin generating a pasty slag at 1,200 °C. Additionally, due to its acidity, the slag becomes vitreous and is easily separated from the iron by crushing. The process is also well suited to treating ores with high concentrations of titanium dioxide. Because this oxide makes slag especially infusible and viscous, ores that contain it cannot be used in blast furnaces, which must tap all of their production in liquid form. For this reason, the preferred ores for this technique are those that would become uneconomical if they had to be modified with basic additives, usually those with a low iron content (between 35 and 51%) and a gangue that needs to be neutralized. Integrated into a steelmaking complex, the Krupp-Renn process provides an alternative to sinter plants or beneficiation processes, effectively eliminating waste rock and undesired elements like zinc, lead, and tin. In a blast furnace, these elements undergo vaporization-condensation cycles that progressively saturate the furnace. With the Krupp-Renn process, the high temperature of the fumes prevents condensation within the furnace; the volatilized elements are instead captured by the dust-removal system, allowing by-products to be recovered or specific metals to be extracted. The Luppen are subsequently remelted in the blast furnace, the cupola furnace, or the Martin-Siemens furnace, all of which readily handle a pre-reduced, iron-rich charge. The process has been effective in treating ores rich in nickel(II) oxide, vanadium, and other metals. It is also applicable to the production of ferronickel. In this case, saprolitic ores with a high magnesium content are as infusible as highly acidic ores, which makes them similarly well suited to the process.
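The CaO/SiO2 basicity criterion just mentioned is easy to check from an ore analysis. The sketch below classifies an ore against the 0.1–0.4 window quoted in this article; the sample analysis values are invented for illustration.

```python
def basicity_index(cao_pct, sio2_pct):
    """CaO/SiO2 mass-ratio basicity index of an ore or slag."""
    return cao_pct / sio2_pct

def suits_krupp_renn(cao_pct, sio2_pct):
    """True if the ore falls in the acid window quoted for the process."""
    return 0.1 <= basicity_index(cao_pct, sio2_pct) <= 0.4

# Hypothetical siliceous ore analysis (% by mass):
print(basicity_index(8.0, 40.0))    # 0.2
print(suits_krupp_renn(8.0, 40.0))  # True
```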
Direct reduction methods such as this one offer the flexibility of using any solid fuel; in this case, 240 to 300 kg of hard coal are needed to process one metric ton of iron ore containing 30 to 40% iron. Assuming a consumption of 300 kg/ton of ore at 30%, the hard coal consumption is 800 kg per ton of iron. Additionally, 300 kg of coke is consumed during the smelting of the Luppen in the blast furnace. When such ore is smelted entirely in the blast furnace, total fuel consumption is roughly the same, but it is all coke, a much more expensive fuel than hard coal. On the other hand, slags with over 60% silica content are acidic, which is incompatible with desulfurization of the metal, a reaction that demands highly basic slags. Consequently, about 30% of the fuel's sulfur ends up in the iron, entailing expensive after-treatment to eliminate it. Productivity Depending on the ore and plant size, a furnace can produce 250 to 800 tons of pre-reduced iron ore daily. The biggest furnaces, up to 5 meters in diameter and 110 meters long, can process 950 to 1,000 tons of ore daily, excluding fuel. A properly operated plant typically runs for around 300 days per year. The internal refractory typically lasts 7 to 8 months in the most exposed part of the furnace and 2 years elsewhere. In 1960, a Krupp-Renn furnace using low-grade ore yielded 100 kilotons of iron annually, while a contemporaneous modern blast furnace produced ten times as much cast iron. Direct reduction processes employing rotary furnaces frequently face a significant challenge in the localized formation of iron and slag rings, which sinter together and gradually obstruct the furnace. Understanding the mechanism of ring formation is complex, involving mineralogy, chemical reactions, and ore preparation; the ring, which progressively grows and poisons the furnace, can be caused by a few elements present in minute quantities. To remedy it, increasing the supply of combustion air or interrupting the furnace charging process are effective solutions. Otherwise, it may be necessary to adjust the grain size of the charged ore or the chemical composition of the mineral blend. In 1958, Krupp constructed a six-furnace plant capable of generating 420,000 tons per year of pre-reduced iron ore, at an estimated cost of 90 million Deutsche Mark, or 21.4 million dollars. By contrast, the plant erected in Salzgitter-Watenstedt in 1956–1957, which was well integrated with an existing steelworks, cost only 33 million Deutsche Mark. At that time, a Krupp-Renn plant presented itself as a feasible substitute for the established blast furnace process, considering its investment and operating costs: the initial investment cost per ton produced was nearly half, while operating costs were roughly two and a half times greater. The slag, a glassy silica, can be effortlessly employed as an additive for road surfaces or concrete. However, the method does not produce a recoverable gas similar to blast furnace gas, decreasing its profitability in most cases; conversely, this spares the plant the complications of gas recovery. Plants built Heritage Evolution In view of its performance, the process seemed a suitable basis for the development of more efficient variants. Around 1940, the Japanese built several small reduction furnaces operating at lower temperatures: one at Tsukiji (1.8 m × 60 m), two at Hachinohe (2.8 m × 50 m each), and three at Takasago (two of 1.83 m × 27 m and one of 1.25 m × 17 m).
However, since they do not produce Luppen, they cannot be equated with the Krupp-Renn process. Although direct reduction in a rotary furnace has been the subject of numerous developments, the logical descendant of the Krupp-Renn process is the "Krupp-CODIR process". Developed in the 1970s, it is based on the general principles of the Krupp-Renn process but with a lower reduction temperature, typically between 950 and 1,050 °C, which saves fuel but is insufficient to achieve partial melting of the charge. The addition of basic corrective additives (generally limestone or dolomite) mixed with the ore allows the removal of sulfur from the coal, although the thermolysis of these additives is highly endothermic. This process has been adopted by three plants: Dunswart Iron & Steel Works in South Africa in 1973, Sunflag Iron and Steel in India in 1989, and Goldstar Steel & Alloy in India in 1993. Although the industrial application is now well established, the process has not had the impact of its predecessor. Finally, there are many post-Krupp-Renn direct reduction processes based on a tubular rotary furnace. At the beginning of the 21st century, their combined output represented between 1% and 2% of world steel production. In 1935 and 1960, the output of the Krupp-Renn process (1 and 2 million tons respectively) represented just under 1% of world steel production. Treatment of ferrous by-products The Krupp-Renn process, which specialized in the beneficiation of poor ores, was the logical basis for the development of recycling processes for ferrous by-products. In 1957, Krupp tested a furnace for the treatment of roasted pyrites to extract iron (in the form of Luppen) and zinc (vaporized in the flue gases). This process is a hybrid of the Waelz and Krupp-Renn processes, which is why it is called the "Krupp-Waelz" (or "Renn-Waelz") process. The trials were limited to a single 2.75 m × 40 m demonstrator capable of processing 70 to 80 t/day and were not followed up. The technical relationship between Krupp-Renn and Japanese direct reduction processes is often cited. In the 1960s, Japanese steelmakers, sharing the observation that furnace plugging was difficult to control, developed their own low-temperature variants of the Krupp-Renn process. Kawasaki Steel commissioned direct-reduction furnaces at two of its plants (1968 and 1975), the most visible feature of which was a pelletizing unit for the site's steelmaking by-products (sludge and dust from the cleaning of converter and blast furnace gases). The "Kawasaki process" also incorporates other developments, such as the combustion of oil instead of pulverized coal and the use of coke powder instead of coal mixed with ore. Almost identical to the Kawasaki process (with a more elaborate pelletizing unit), the "Koho process" was adopted by Nippon Steel, which commissioned a plant of this type in 1971. The Ōeyama process The production of ferronickel from laterites takes place in a context that is much more favorable to the Krupp-Renn process than steelmaking is. Lateritic ores in the form of saprolite are poor, very basic, and iron-bearing. Production volumes are moderate, and the chemistry of nickel is remarkably amenable to rotary kiln reduction. The process is therefore attractive, but regardless of the metal extracted, mastering all the physical and chemical transformations in a single reactor is a real challenge.
The failure of the Larco plant at Lárymna, Greece, illustrates the risk involved in adopting this process: only when the ore reached the industrial processing stage did it prove incompatible with the Krupp-Renn process. By contrast, lower-temperature reduction followed by electric furnace smelting gives each stage its own dedicated tool, for greater simplicity and efficiency. Developed in 1950 in New Caledonia, this combination has proven to be both cost-effective and, above all, more robust. Large rotating drums (5 m in diameter and 100 m or even 185 m long) are used to produce a dry powder from nickel ore concentrate. This powder contains 1.5 to 3% nickel. It leaves the drum at 800–900 °C and is immediately melted in electric furnaces. Only partial reduction takes place in the drums: a quarter of the nickel comes out in metallic form, the rest still oxidized. Only 5% of the iron is reduced to metal, and unburned coal remains as fuel for the subsequent melting stage in the electric furnace. This proven process (also known as the RKEF process, for Rotary Kiln-Electric Furnace) has become the norm: at the beginning of the 21st century, it accounted for almost all nickel laterite processing. In the early 21st century, however, the Nihon Yakin Kogyo foundry in Ōeyama, Japan, continued to use the Krupp-Renn process to produce intermediate-grade ferronickel (23% nickel), sometimes called nickel pig iron. With a monthly output of 1,000 tons of Luppen and a production capacity of 13 kt/year, the plant operates at full capacity. It is the only plant in the world using this process, and also the only plant using a direct reduction process to extract nickel from laterite. The process, which has been significantly upgraded, is called the "Ōeyama process". The Ōeyama process differs from the Krupp-Renn process in the use of limestone and in the briquetting of the ore prior to charging. It retains the process's advantages, namely the concentration of all pyrometallurgical reactions in a single reactor and the use of standard (i.e. non-coking) coal, which covers 90% of the energy requirements of the process. Coal consumption is only 140 kg per ton of dry laterite, and the quality of the ferronickel obtained is compatible with direct use by the steel industry. Although marginal, the Krupp-Renn process remains a modern, high-capacity route for the production of nickel pig iron. In this context, it remains a systematically studied alternative to the RKEF process and the "sinter plant-blast furnace" combination. See also Direct reduction Direct reduced iron :fr:Histoire de la production de l'acier :fr:Friedrich Johannsen Notes References Bibliography Chemistry Iron Metals Alloys Metallurgy Blast furnaces Ore deposits
Krupp–Renn process
[ "Chemistry", "Materials_science", "Engineering" ]
5,024
[ "Metals", "Metallurgy", "History of metallurgy", "Materials science", "Blast furnaces", "Alloys", "Chemical mixtures", "nan" ]
75,211,656
https://en.wikipedia.org/wiki/Big%20dream
In Jungian dream analysis, big dreams are dreams which have a strong impact on the dreamer and contain heavily archetypal imagery. Background According to Carl Jung, these dreams arise from the collective unconscious more than the personal unconscious; that is, their imagery is broadly shared by many people in different cultures. Jung states that these dreams appear more often during critical phases of change in human life: early youth, puberty, middle age, and as one nears death. These dreams primarily express "eternal human problems", rather than personal issues. Despite this, they serve as milestones along the path to individuation, which includes the integration of the personal ego into a sense of becoming a universal human being. Big dreams are connected to the idea of the Hero's Journey, which Jung describes as the "life of the hero": waypoints along a human life understood in mythological terms. Examples Jung gives the example of a man who dreamt of a great snake that guarded a golden bowl in an underground vault. He explains that this image was not based directly on the dreamer's personal experience (although he had once seen a large snake at the zoo), but on archetypal imagery and collective emotion. References Analytical psychology Dream
Big dream
[ "Biology" ]
254
[ "Dream", "Behavior", "Sleep" ]
75,213,407
https://en.wikipedia.org/wiki/Maria%20Pereira
Maria Pereira (born 1986, Leiria, Portugal) is a Portuguese bioengineering scientist, creator of a glue that closes open wounds without damaging tissues. Biography She was born in 1986 in Leiria. She holds a degree in Pharmaceutical Sciences from the University of Coimbra, in Portugal, and a PhD in Bioengineering from the Massachusetts Institute of Technology (MIT), in the United States, thanks to a scholarship awarded by the MIT-Portugal Program in 2007. She is known for having created a glue that closes open wounds without damaging tissue, used, for example, in delicate heart operations and to treat babies with congenital heart defects, which affect about one in 100 newborns and are the leading cause of infant death in the United States. Pereira worked to develop a glue that could be used anywhere in the body, including the heart. The glue needed to meet many conditions at once: withstand humidity and dynamic conditions, be elastic enough to expand and contract with each heartbeat, be hydrophobic (to repel blood from the surface), and be biodegradable and non-toxic. In 2012, she succeeded and met a further criterion: the glue she invented adheres only where intended, curing when the surgeon shines a light on it, thus giving the surgeon total control over the process. She has been an in-house researcher at Gecko Biomedical, a biotechnology and medicine company in Paris, since October 2013. Political career On December 28, 2015, at the age of 29, she was presented as a national representative for Marcelo Rebelo de Sousa's candidacy in the 2016 presidential elections. Awards In 2012, Novartis considered her one of four world leaders in her field. The MIT Technology Review magazine in 2014 included her in its annual list of "innovators under 35". In early 2015, she was recognized by Forbes magazine as one of the world's 30 promising talents under the age of 30. In September 2015, Time magazine considered her a "next generation leader". References External links Meetup with María Pereira at Labiotech 2017, on YouTube Maria Pereira at Gecko Biomedical. Living people 1986 births 21st-century Portuguese scientists Portuguese women scientists Bioengineers Women bioengineers Women inventors Massachusetts Institute of Technology alumni
Maria Pereira
[ "Engineering", "Biology" ]
460
[ "Bioengineers", "Biological engineering" ]
75,216,092
https://en.wikipedia.org/wiki/Tellurium%20tetraazide
Tellurium tetraazide is an inorganic chemical compound with the formula Te(N3)4. It is a highly sensitive explosive and takes the form of a yellow solid. It has been prepared directly as a precipitate from the reaction between tellurium tetrafluoride and trimethylsilyl azide. References azide tellurium
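The preparation is described above only qualitatively. Assuming the azide-for-fluoride metathesis typical of trimethylsilyl azide chemistry, with trimethylsilyl fluoride as the volatile by-product (an inference, not stated in the source), the reaction would balance as:

$$\mathrm{TeF_4} + 4\,\mathrm{(CH_3)_3SiN_3} \longrightarrow \mathrm{Te(N_3)_4} + 4\,\mathrm{(CH_3)_3SiF}$$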
Tellurium tetraazide
[ "Chemistry" ]
71
[ "Explosive chemicals", "Azides", "Inorganic compounds", "Inorganic compound stubs" ]
65,277,642
https://en.wikipedia.org/wiki/Twystron
A twystron is a type of microwave-producing vacuum tube most commonly found in high-power radar systems. The name refers to its construction, which combines a traveling wave tube, or TWT, with a klystron, producing a tw-ystron. The name was originally a trademark of Varian Associates, its developer, and was often capitalized; in recent times it has become a generic term for any similar design. The twystron amplifies a source signal using a conventional klystron, which consists of a series of cylindrical resonant chambers fed with the source signal. An electron gun at one end of the tube produces electrons that flow through holes in the centers of the resonators. As they pass through the holes, the signal within each resonator causes the electrons to "bunch up", a process known as velocity modulation. The resulting electron beam is an amplified version of the original signal. In a conventional klystron, this signal is then captured and used as the output. In the twystron, the output instead flows into a TWT for further amplification. The advantage of this approach is that while the multi-resonator klystron is an efficient amplifier, its bandwidth is reduced as one adds additional resonators, which gives high-power klystrons a relatively low bandwidth, generally less than 10% of the design frequency. In contrast, the TWT has a wider bandwidth response but is generally very long. By combining a klystron with a TWT, the result is a relatively compact device with improved bandwidth; typical twystrons have bandwidths up to 15% of the design frequency. The device was developed by Albert La Rue and Rodney Rubert in the early 1960s and was quickly adopted by many radar designs in order to improve frequency agility and thereby improve performance against radar jamming systems. The twystron was generally replaced by the extended interaction klystron and solid state amplifiers. References Radar theory Vacuum tubes Microwave technology
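The bandwidth advantage is easy to put in absolute terms. The sketch below converts the fractional-bandwidth figures above into absolute bandwidth at an assumed 3 GHz design frequency; the 10% and 15% values come from the article, but the frequency is hypothetical, chosen only for illustration.

```python
def absolute_bandwidth_mhz(design_freq_ghz, fractional_bw):
    """Absolute bandwidth in MHz for a given fractional bandwidth."""
    return design_freq_ghz * 1000 * fractional_bw

f0 = 3.0  # GHz, assumed design frequency for illustration
print(absolute_bandwidth_mhz(f0, 0.10))  # klystron-like: 300.0 MHz
print(absolute_bandwidth_mhz(f0, 0.15))  # twystron:      450.0 MHz
```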
Twystron
[ "Physics" ]
421
[ "Vacuum tubes", "Vacuum", "Matter" ]
65,278,180
https://en.wikipedia.org/wiki/Photinia%20%C3%97%20fraseri
Photinia × fraseri, known as red tip photinia and Christmas berry, is a nothospecies in the rose family, Rosaceae. It is a hybrid between Photinia glabra and Photinia serratifolia. Description It is a compact shrub with an erect habit that can grow into a medium-sized tree. Its evergreen, oval leaves are dark green but crimson red when young, especially in early spring. Its flowers are small, with five petals, united in large white inflorescences. They bloom at the end of spring. It can reach a height of 5 meters and a diameter of 5 meters. It is frost resistant and can withstand temperatures down to −5 to −10 °C. Cultivation The shrub tolerates moderate shade and grows in well-drained soils. It should be sheltered from the cold, dry winds of winter. It can be propagated by semi-woody cuttings in summer. Cultivars The hybrid has a number of cultivars, some of which have won the Royal Horticultural Society's Award of Garden Merit: Photinia × fraseri 'Camilvy' Photinia × fraseri 'Curly Fantasy' Photinia × fraseri 'Little Red Robin', a plant similar to 'Red Robin', but dwarf in stature with an ultimate height/spread of around 2–3 ft Photinia × fraseri 'Pink Marble' or 'Cassini', a newer cultivar with rose-pink tinted new growth and a creamy-white variegated margin on the leaves Photinia × fraseri 'Red Robin' - probably the most widely planted of all Photinia × fraseri 'Robusta' Photinia × fraseri 'Super Hedger' - a newer hybrid with strong upright growth References fraseri Hybrid plants
Photinia × fraseri
[ "Biology" ]
374
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
65,279,162
https://en.wikipedia.org/wiki/Console%20war
In the video game industry, a console war describes the competition between two or more video game console manufacturers in trying to achieve better consumer sales through more advanced console technology, an improved selection of video games, and general marketing around their consoles. While console manufacturers are generally always trying to out-perform other manufacturers in sales, console wars engage in more direct tactics, comparing their offerings directly against their competitors' or disparaging the competition in contrast to their own, so the marketing efforts have tended to escalate in back-and-forth pushes. While there have been many console wars to date, the term became popular during the rivalry between Sega and Nintendo in the late 1980s and early 1990s, as Sega attempted to break into the United States video game market with its Sega Genesis console. Through a novel marketing approach and improved hardware, Sega was able to gain a majority of the video game console market by 1991, three years after the Genesis's launch. This caused back-and-forth competition between the two companies throughout the early 1990s. However, Nintendo eventually regained its market share and Sega stopped making home console hardware by 2001. Background and etymology The video game console market started in 1972 with the release of the first home console, the Magnavox Odyssey. As more manufacturers entered the market and technology improved, the market began to coalesce around releases of more advanced hardware every few years on a predictable cycle, typically grouped into generations. Since 1972, there have been nine console generations, with two to three dominant manufacturers controlling the marketplace. As with most industries without a single dominant leader, console manufacturers have marketed their products in a manner to highlight them more favorably compared to their competitors', or to focus on features that their competitors may lack, often aggressively. For example, console manufacturers in the 1980s and 1990s relied heavily on the word size of the central processing unit, emphasizing that games had better capabilities with 16-bit processors over 8-bit ones. This type of aggressive marketing led video game journalists to call the competitive marketing a "war" or "battle" as early as August 1988. As each new console generation emerged with new marketing approaches, journalists and consumers continued to use variations of the "war" language, including "system wars" and "console wars". By the early 2000s, the term "console war" was most commonly used to describe heated competition between console manufacturers within any generation. Nintendo versus Sega While not the only console war, the rivalry between Sega and Nintendo for dominance of the North American video game market in the late 1980s and early 1990s is generally the most visible example of a console war. It established the use of aggressive marketing and advertising tactics by each company to try to gain control of the marketplace, and ended around 1995 when a new player, Sony, entered and disrupted the console space. Background The United States video game industry suffered a severe market crash in 1983 from numerous factors, which led to a larger market recession and the increasing popularity of personal computers as a video game platform. A key contributing factor to the crash was the loss of publishing control for console games.
Early success by some of the first third-party developers, like Activision for the Atari VCS console, led venture capitalists to bring in teams of inexperienced programmers to try to capture the same success; they only managed to flood the market with poor-quality games, which made it difficult for good-quality games to sell. The video game crash impacted other factors in the industry that were already in decline, such as video game arcades. In Japan, Nintendo had released its Famicom (Family Computer) console in 1983, one of the first consoles of the third generation. Japan did not have a similar third-party development system in place, and Nintendo maintained control over the manufacturing of game cartridges for the Famicom, using a licensing model to limit which third-party games were published on it. Nintendo looked to release the unit in the United States, but recognized that the market was still struggling from the 1983 crash. Nintendo took several steps to redesign the Famicom prior to a United States launch. It was made to look like a VCR unit rather than a console, and was given the name the "Nintendo Entertainment System" to distance it from being a video game console. Further, Nintendo added a special 10NES lockout system that worked as a lock-and-key system with game cartridges to prevent unauthorized games from being published for the system, avoiding the loss of publishing control that had caused the 1983 crash. The NES revitalized the U.S. video game industry and established Nintendo as the dominant name in video game consoles over Atari. In lifetime sales, the NES sold nearly 62 million units worldwide, with 34 million in North America. At the same time, Sega was looking to get into the video game console industry as well; it had been a successful arcade game manufacturer, but due to the downturn in the arcade business looked to use that expertise for the home market. It released the SG-1000 console in Japan the same day as the Famicom in 1983, but sold only 160,000 units of the SG-1000 in its first year. Sega redesigned the SG-1000 twice to try to build a system to challenge Nintendo's dominance; the SG-1000 Mark II remained compatible with the SG-1000 but failed to gain any further sales. The next iteration, the Sega Mark III, was released in 1985, using Sega's arcade hardware for its internals to provide more refined graphics. The console was slightly more powerful than the Famicom, and Sega's marketing pushed the more advanced graphics its system offered over the Famicom. Sega attempted to follow Nintendo with a worldwide release of the Mark III, rebranded as the Master System. The Master System was released in the United States in 1986, but Nintendo of America had developed a licensing plan in the U.S. to keep developers exclusive to the NES, limiting the library of games that Sega could offer, and to ensure that another gaming crash did not begin. Further, Sega's third-party distributor, the toy company Tonka, opted against localizing several of the Japanese games Sega had created, further capping the game library Sega could offer in the U.S. Only an estimated two million systems were sold in total. Entering the United States' market The fourth generation of video game consoles was started by the launch of NEC's PC Engine in 1987 in Japan.
While the PC Engine used an 8-bit CPU, it included 16-bit graphic rendering components, and NEC marketed it heavily as a 16-bit game console to distinguish it from the Famicom and Mark III; when NEC brought the PC Engine worldwide, it was rebranded as the "TurboGrafx-16" to emphasize this. After the release of the TurboGrafx-16, use of the bit designation caught on, leading manufacturers to focus their advertising heavily on the number of bits in a console system for the next two console generations. NEC was one of many competitors to Sega and Nintendo. Following a path similar to the Mark III's, Sega used its arcade game technology, now based on 16-bit processor boards, and adapted it into a home console, released in Japan in October 1988 as the Mega Drive. Compared to its prior consoles, the Mega Drive was designed to be more mature-looking and less toy-like than the Famicom, to appeal to an older demographic of gamers, and "16-bit" was emblazoned on the console's case to emphasize this feature. While the system was positively received by gaming magazines like Famitsu, it was overshadowed by the release a week prior of Super Mario Bros. 3 for the Famicom. As with the Master System, Sega also planned a major push of the Mega Drive into the United States, among other markets, to challenge Nintendo's dominance, with the unit rebranded as the Sega Genesis. Sega was dissatisfied with Tonka's handling of the Master System and so sought a new partner in the Atari Corporation, led by Jack Tramiel. Tramiel balked at the Genesis due to its cost and turned down the offer, instead focusing more on the company's computer offerings. Sega instead used its dormant Sega of America branch to run a limited launch of the console in August 1989 in the test markets of New York City and Los Angeles, with its launch system bundled with the port of the arcade game Altered Beast. In October 1989, the company named former Atari Entertainment Electronics Division president Michael Katz as CEO of Sega of America to implement a marketing strategy for a nationwide push of the Genesis, with a target of one million consoles. Katz used a two-pronged strategy to challenge Nintendo. The first was to stress the arcade-like capabilities of the Genesis, setting the capabilities of games like Altered Beast against the simpler 8-bit graphics of the NES, and devising slogans such as "Genesis does what Nintendon't." Katz also observed that Nintendo still held most of the rights to arcade game ports for the NES, so the second part of his strategy was to work with the Japanese headquarters of Sega to pay celebrities for their naming rights for games like Pat Riley Basketball, Arnold Palmer Golf, Joe Montana Football, and Michael Jackson's Moonwalker. Most of these games were developed by Sega's Japanese programmers, though notably, Joe Montana Football had originally been developed by Mediagenic, the new name for Activision after it had become more involved in publishing and business application development alongside games. Mediagenic had started a football game which Katz wanted to brand under Joe Montana's name, but unknown to Katz at the time, the game was only partially finished due to internal strife at Mediagenic. After the deal had been completed and Katz learned of this, he took the game to Electronic Arts.
Electronic Arts had already made itself a significant force in the industry, as it had been able to reverse engineer the cartridge format of both the NES and the Genesis, though Electronic Arts' CEO Trip Hawkins felt it was better for the company to develop for the Genesis. Electronic Arts used its reverse engineering knowledge in its negotiations with Sega to secure a freer licensing contract to develop openly on the Genesis, which proved beneficial for both companies. At the time Katz secured Mediagenic's Joe Montana football game, Electronic Arts was working on its John Madden Football series for personal computers. Electronic Arts helped bring Joe Montana Football – more of an arcade title compared to the strategic John Madden Football – to reality, as well as bringing John Madden Football over as a Genesis title. The second push in 1991 The Genesis still struggled in the United States against Nintendo, and sold only about 500,000 units by mid-1990. Nintendo had released Super Mario Bros. 3 in February 1990, which further drove sales away from Sega's system. Nintendo itself did not seem to be affected by either Sega's or NEC's entry into the console market. Sega's president Hayao Nakayama wanted the company to develop an iconic mascot character and build a game around it as one means to challenge Nintendo's own Mario mascot. Company artist Naoto Ohshima came up with the concept of Sonic the Hedgehog, a fast anthropomorphic character with an "attitude" that would appeal to teenagers, incorporating the blue color of Sega's logo, and Yuji Naka helped to develop the game Sonic the Hedgehog to showcase the character as well as the graphics and processing speed of the Genesis. The game was ready by early 1991 and launched in North America in June 1991. Separately, Sega fired Katz and replaced him with Tom Kalinske as Sega of America's new CEO in mid-1990. Kalinske had been president of Mattel and did not have much experience in video games, but he recognized the razor and blades model and developed a new strategy for Sega's push to challenge Nintendo's dominance in America with four key decisions, which included cutting the price of the Genesis and continuing the same aggressive marketing campaigns to make the Genesis look "cool" compared to the NES and Nintendo's upcoming Super Nintendo Entertainment System (SNES). Further, Kalinske pushed hard for American developers like Electronic Arts to create games on the Genesis that would better fit American preferences, particularly sports simulation games, which the console had gained a reputation for. Finally, Kalinske insisted on making Sonic the Hedgehog the bundled game on the system following its release in June 1991, replacing Altered Beast and even offering those who had purchased a Genesis with Altered Beast a trade-in replacement for Sonic. Under Kalinske, Sega also revamped its advertising approach, aiming for more of a young adult audience, as Nintendo was still positioning the SNES as a child-friendly console. Advertising focused on Sonic, the edgier games in the Genesis library, and its larger library of sports games, which appealed to this group. Television ads for the Genesis and its games ended with the "Sega Scream" – a character shouting the name "Sega" to the camera in the final shot – which also caught on quickly. These changes, all predating the SNES's planned North American release in September 1991, gave Sega its first gain on Nintendo in the U.S. market.
Further, the price cut made the Genesis a cheaper option than the SNES at its planned launch price, leading many families to purchase the Genesis instead of waiting for the SNES. The Genesis had a larger library of games for the U.S., with over 150 titles by the time the SNES launched alongside its eight games, and Sega continued to push out titles that drew continuous press throughout the year, whereas the SNES library was generally carried by flagship Mario and Zelda games that came out only once a year, which further made the Genesis a more desirable option. Up until 1991, Nintendo had been passive towards Sega's approach in North America, but as the SNES launch approached, the company recognized that it was losing ground. The company shifted its advertising in North America to focus on the more advanced features of the SNES that were not present in the Genesis, such as its Mode 7 graphics mode that creates simulated 3D perspective effects. When the SNES launched, this was most prominently seen in F-Zero, where Mode 7 made the game look more sophisticated than earlier third-person racing games on home consoles. Pilotwings likewise used Mode 7 to better simulate the landings players made after completing the other objectives in a level. The initial shipment of one million SNES units sold out quickly, and a total of 3.4 million SNES units were sold by the end of 1991, a record for a new console launch, but the Genesis maintained strong sales against the SNES. The Genesis's resilience against the SNES led several of Nintendo's third-party developers to break their exclusive development agreements with Nintendo and seek out licenses to also develop for the Genesis. These included Acclaim, Konami, Tecmo, Taito, and Capcom, the last of which arranged a special licensing mechanism with Sega that allowed it to publish select titles exclusively for the Genesis. During this period, the marketing push by both Nintendo and Sega led to the growth of video game magazines. Nintendo had already established Nintendo Power in 1988, in part to serve as a help guide for players on its popular titles, and was able to use it further to advertise the SNES and upcoming games. Numerous other titles grew in the late 1980s and early 1990s, giving Sega the opportunity to market its games heavily in these publications. The war escalates in 1992 and 1993 Nintendo publicly acknowledged that it was no longer in the dominant position in the console market by 1992. A year into the SNES's release, its price was lowered to match the Genesis, and Sega reduced the Genesis's price again shortly after. The SNES was helped by Capcom's decision to keep the home port of its popular brawler arcade game Street Fighter II: The World Warrior exclusive to the SNES when it was released in June 1992. Nintendo also experimented with including processing chips within game cartridges to augment the power of the SNES, with the Super FX chip bringing real-time 3D rendering, first used in Star Fox. While the SNES outsold the Genesis in the U.S. in 1992, the Genesis still had a larger install base. The success of Street Fighter II, both as an arcade game and as a home console title, led to the growth of the fighting game genre, and numerous variations from other developers followed. Of significant interest was Midway's Mortal Kombat, released to arcades in 1992. Compared to most other fighting games at the time, Mortal Kombat was much more violent.
The game showed combatants’ blood splatter during combat and allowed players to end matches in graphically intense "fatalities." Because of its controversial style and gameplay, the game proved extremely popular in arcades. By 1993, both Nintendo and Sega recognized the need to have Mortal Kombat on their consoles. However, Nintendo, fearing issues with the game’s violence, licensed a "clean" version of the game from Acclaim for the SNES, which replaced the blood splatter with sweat and removed the aforementioned fatalities. Sega also licensed a censored version of the game for the Genesis; however, players could enter a cheat code that reverted the game back to its original arcade version. Both home versions were released in September, and approximately 6.5 million units were sold over the game’s lifetime, but the Genesis version was far more popular, with three to five times the sales of its SNES counterpart. The popularity of the home console version of Mortal Kombat, coupled with other moral panics in the early 1990s, led to concerns from parents, activists, and lawmakers in the United States, leading up to the congressional hearings on video games first held in December 1993. Led by Senators Joe Lieberman and Herb Kohl, the Senate Committees on Governmental Affairs and the Judiciary brought in several video game industry leaders, including Howard Lincoln, vice president of Nintendo of America, and Bill White, vice president of Sega of America, to discuss the way they marketed games like Mortal Kombat and Night Trap on consoles to children. Lincoln and White accused each other's companies of creating the issue at hand. Lincoln stated that Nintendo had taken a curated approach to selecting games for its consoles, and that violent games had no place in the market. White responded that Sega purposely targeted an older audience than Nintendo, and had created a ratings system for its games that it had been trying to encourage the rest of the industry to use; further, despite Nintendo's oversight, White pointed out that there were still many Nintendo titles that incorporated violence. With neither Lincoln nor White giving ground, Lieberman concluded the first hearing with a warning that the industry needed to come together on some means to regulate video games or else Congress would pass laws to do it for them. By the time of the second hearing in March 1994, the industry had come together to form the Interactive Digital Software Association (today the Entertainment Software Association) and was working to establish the Entertainment Software Rating Board (ESRB), a ratings panel, which was ultimately introduced by September 1994. Despite Sega offering its ratings system as a starting point, Nintendo refused to work with it, as Nintendo still saw Sega as its rival, requiring a wholly new system to be created. The ESRB eventually established a format modelled on the Motion Picture Association of America (MPAA)'s rating system for film; the committee was satisfied with the proposed system and allowed the video game industry to continue without further regulation. The arrival of Sony and the end of the war In 1994 and 1995, there was a contraction in the video game industry, with the NPD Group reporting 17% and 19% year-to-year drops in revenue.
While Sega had been outperforming Nintendo in 1993, it still carried corporate debt, while Nintendo remained debt-free thanks to its more dominant position in the worldwide market, eventually beating Sega in the North American market and winning the 16-bit console war. To continue to fight Nintendo, Sega's next console was the Sega Saturn, first released in November 1994 in Japan. It brought in technology used in Sega's arcade games that featured 3D polygonal graphics, and launch titles included home versions of these arcade games, including Virtua Fighter. While Virtua Fighter was not a pack-in game, sales of the title were nearly 1:1 with the console in Japan. Sega, recognizing that it had numerous consoles with disparate games it was now trying to support, decided to put most of its attention onto the Saturn line going forward, dropping support for the Genesis despite its sales still being strong in the United States at the time. At the same time, a new competitor in the console marketplace emerged, Sony Computer Entertainment, with the introduction of the PlayStation in December 1994. The PlayStation moved away from cartridges and took advantage of nascent CD-ROM technology for game distribution, allowing much more data to be stored on each disc and reducing the costs of reproduction. Nintendo had worked with Sony on a prototype add-on for the SNES, the Super NES CD-ROM, that would allow it to read CD-ROMs, but the project was terminated by 1992 after Nintendo revealed it had opted to work with Philips and its optical disc technology instead, while Sony turned its development towards the PlayStation. Sega, aware of Sony's potential competition in Japan, made sure to have enough Saturns ready for sale on the day the PlayStation first shipped so as to overwhelm Sony's offering. Both Sega and Sony then turned to moving these units to the North American market. With the formation of the IDSA, a new North American tradeshow, the Electronic Entertainment Expo (E3), was created in 1995 to focus on video games and to distinguish it from the Consumer Electronics Show (CES), which covered all home electronics. Nintendo, Sega and Sony gave their full support to E3 in 1995. Sega believed it had the stronger position going into E3 over Sony, as gaming publications, comparing the Saturn to the PlayStation, rated the Saturn as the better system. At the first E3 in May 1995, Sega's Kalinske premiered the North American version of the Saturn, announced its various features and selling price, and said that while it would officially launch that same day, Sega had already sent a number of systems to selected vendors for sale. Sony's Olaf Olafsson of Sony Electronic Publishing began to cover the PlayStation's features, then invited Steve Race, president of Sony Computer Entertainment America, to the stage. Race simply stated the launch price of the PlayStation and then left to "thunderous applause". The surprise announcement, undercutting the Saturn's price, caught Sega off-guard, and, in addition to several stores pulling Sega from their lineup after being shunned from early Saturn sales, the higher price point made it more difficult for Sega to sell the system. As a result of this strategy by Sony, future E3s became a battleground for other console wars, with journalists judging the various hardware manufacturers' presentations to determine which one had the most successful pitches. 
When the PlayStation officially launched in the United States in September 1995, its sales over the first two days exceeded what the Saturn had sold over the prior five months. Because Sega had invested heavily in the Saturn for the future, Sony's competition drastically hurt the company's finances. Nintendo, for its part, bypassed the 32-bit generation; its next offering was the Nintendo 64, a console with a 64-bit CPU, first released in June 1996. While this gave it powerful capabilities such as 3D graphics to match and surpass those on the Saturn and PlayStation, it was still a cartridge-based system, limiting how much information could be stored for each game. This decision ultimately cost Nintendo the support of Squaresoft, which moved its popular Final Fantasy series over to the PlayStation line to take advantage of the larger space on optical media. The first PlayStation game in the series, Final Fantasy VII, drove sales of the PlayStation, further weakening Nintendo's position and driving Sega further out of the market. By this point, the console war between Nintendo and Sega had evaporated, with both companies now facing Sony as their rival. Sega made one more console, the Dreamcast, which had a number of innovative features including a built-in modem for online connectivity, but the console's lifespan was short, in part due to the success of Sony's next product, the PlayStation 2, which remains the best-selling home console of all time. Sega left the home console hardware business in 2001 to focus on software development and licensing. Nintendo remains a key player in the home console business, but more recently has taken a "blue ocean strategy" approach to avoid competing directly with Sony or Microsoft on a feature-for-feature basis, with consoles like the Wii, Nintendo DS, and Nintendo Switch. Legacy The Sega/Nintendo console war is the subject of the non-fiction book Console Wars by Blake Harris in 2014, as well as a 2020 film adaptation/documentary of the book. Sega and Nintendo have since collaborated on various software titles. Sega has developed a biennial Mario & Sonic at the Olympics series of sports games based on the Summer and Winter Olympics since 2008, featuring characters from both the Super Mario and Sonic series, while Nintendo has developed the Super Smash Bros. crossover fighting series for numerous Nintendo properties, which has included Sonic as a playable character, along with other Sonic characters in supporting roles, since Super Smash Bros. Brawl. Sony versus Microsoft Background Since the sixth generation of consoles, Sony and Microsoft have been direct competitors in home consoles. Since 2000, both companies have released a new console model within a year of each other with roughly comparable specifications. While Nintendo has also remained a significant competitor to both companies, its development and marketing strategy using the "blue ocean" approach is considered so fundamentally different from Sony's or Microsoft's that it is usually not regarded as a major participant in the console war. Initial Challenge From Microsoft Microsoft entered the console market with the Xbox console in 2001 specifically because it saw Sony's PlayStation 2 as a potential competitor to the home computer as the ubiquitous device in the living room. Whereas the PlayStation 2 was developed from mostly custom components, Microsoft approached the Xbox as a highly refined personal computer based on Microsoft Windows and DirectX technology. 
The original Xbox did not compete well against the PlayStation 2, selling only about 24 million units worldwide against the PlayStation 2's 155 million, with Microsoft reportedly failing to profit on the console hardware. Nonetheless, Microsoft, satisfied with the Xbox's overall performance, reaffirmed its commitment to the console marketplace with the reveal of the Xbox 360 in 2005. Xbox 360 vs PlayStation 3 Microsoft was able to take the lessons learned from the first Xbox into its second model, the Xbox 360, released in 2005, ahead of Sony's release of the PlayStation 3 in 2006. Besides the earlier release and improved design, Microsoft had secured more first-party developers in its Microsoft Game Studios, mimicking Sony's own first-party developers, and other third-party developers for several console exclusives. The PlayStation 3, on the other hand, had fewer exclusives at launch and was hampered by a higher launch price, giving the Xbox 360 an edge in the first years of release. Both consoles aimed to include multimedia features such as high-definition movie playback. One miscue by Microsoft was backing the HD DVD standard for movie playback over the Blu-ray standard that Sony had selected, as shortly after the Xbox 360's release, the movie industry standardized on Blu-ray. The Xbox 360 also suffered from the "Red Ring of Death", a hardware fault on a large fraction of retail models whose repairs cost Microsoft heavily over the console's lifetime. Both consoles were challenged by Nintendo's Wii and specifically its novel Wiimote motion-sensing controller. To compete, Microsoft and Sony released their own motion-sensing systems, the Kinect and the PlayStation Move, respectively, for their consoles. The companies also released console refreshes mid-generation. Microsoft released a low-cost Xbox 360 S, which shipped with less internal storage space, as well as a high-end Xbox 360 E, which shipped with more storage space and the Kinect sensor. Sony released two different Slim models of the PlayStation 3 that reduced the system's size and its retail price, which helped improve sales. Ultimately, the Xbox 360 sold an estimated 84 million units (based on industry estimates, as Microsoft stopped reporting its sales), while the PlayStation 3 sold 87 million units; the Wii comparatively sold over 101 million units. Xbox One vs PlayStation 4 Sony and Microsoft both released their next consoles, the PlayStation 4 and the Xbox One, in 2013. Sony considered the difficulties developers had had with the custom instruction set of the PlayStation 3's Cell processor and restructured the PlayStation 4 to use the standard x86 instruction set found in most personal computers, helping to bring console development into convergence with computer systems. Microsoft initially wanted to position the Xbox One as a replacement for the cable box in the living room, a single source of entertainment with features aimed at television viewing in addition to gaming. To achieve this, the Xbox One was to ship with the Kinect and was to use an always-on Internet connection to enable numerous features, such as the ability to share games with other family members. However, when these features were first promoted, there was heavy backlash from journalists and consumers, who considered them unnecessary and privacy-invading. Microsoft had to pull many of these features from the Xbox One before launch, eliminating the always-connected requirement and the need to use the Kinect. 
Sony took the opportunity in its PlayStation 4 marketing to play off Microsoft's missteps, such as demonstrating the simplicity of game sharing by simply passing the physical media to another person, as well as the console's lower price point. While Microsoft was able to course-correct the Xbox One after launch, Sony had gained enough ground with the capabilities of the PlayStation 4, along with a strong library of console-exclusive titles, that the PlayStation 4 outsold the Xbox One, 117 million units to 52 million units. The Xbox One was ultimately the more expensive of the two; however, both consoles' prices were high compared to the historical console market, setting a trend of ever more expensive consoles. Xbox Series X|S vs PlayStation 5 Both companies released their next consoles in 2020, the PlayStation 5 and the Xbox Series X and Series S. Both console families represent technology improvements with similar target specifications, including high resolutions and high framerates, high-speed internal storage, and backward compatibility with earlier systems. More recently, Microsoft has expanded its game offerings beyond consoles, such as with Xbox Game Pass and the xCloud game streaming service, so as to move away from a console war mentality. Phil Spencer, head of Xbox for Microsoft, stated that Microsoft sees Xbox in competition with Netflix and other online streaming services vying for entertainment options, rather than with Sony. Similarly, Sony launched a more intense focus on streaming services; for example, in 2020 a "media remote" was launched, advertising "effortless control of a wide range of blockbuster entertainment on the PS5". With Microsoft's acquisition of ZeniMax Media in 2021 and of Activision Blizzard in 2023, the potential for escalation in the Sony/Microsoft console war grew, as Microsoft could potentially make Bethesda Softworks' and Activision Blizzard's games exclusive to the Xbox line. Microsoft's potential ownership of the Call of Duty series became a focus of Sony's concerns about the acquisition. While Microsoft gave Sony a written commitment to keep the Call of Duty series on PlayStation consoles for several years, Sony expressed concern that this was not adequate and that Microsoft would make the series Xbox-exclusive after that period. As regulatory agencies considered these positions, Microsoft stated that it had been losing the console war against Sony, having always been in a weaker sales position against the PlayStation 5. Other console wars Atari versus Intellivision Following the release of the Atari 2600 in 1977, Mattel sought to enter the console market and released the Intellivision in 1979. The console was designed for improved graphics and other features compared to the Atari 2600, a factor that dominated the marketing campaign for the Intellivision. The Intellivision's launch included a number of sports games, using licenses from the major sports leagues, and an advertising campaign featuring sports writer George Plimpton. Mattel also focused on hardware accessories for the console, such as a keyboard for programming. While the Atari 2600 sold an estimated 30 million consoles, the Intellivision sold around 5 million units and was considered the primary competitor to Atari in the second generation of video game consoles. In the following years, Mattel sought to expand the Intellivision line, releasing the Intellivision II in 1983, with development of an Intellivision III starting in 1982. 
Atari released its successor to the 2600, the Atari 5200, in 1982 to compete with the Intellivision. The console war between Atari and Intellivision was shaken up by the arrival of Coleco's ColecoVision in 1982, which was a further technological improvement over both the Atari and Intellivision systems. Both Atari and Mattel suffered significant financial losses in the video game crash of 1983. Atari scaled back its video game efforts in the years that followed, while Mattel sold off the Intellivision brand in 1984. After decades in which the intellectual property of both Atari and Mattel shifted across different owners, Atari SA acquired the Intellivision brand and the rights to over 200 games from its systems in May 2024, which Atari SA jokingly stated put an end to the decades-long console war. 1990s handheld consoles A number of major handheld consoles were released within about a year of each other: Nintendo's Game Boy, Sega's Game Gear, and the Atari Lynx. While the Game Boy used a monochromatic display, both the Game Gear and the Lynx had color displays. As these handhelds were released alongside the Sega v. Nintendo console war, they were also subject to heavy marketing and advertising to draw consumers. However, the Game Boy ultimately won out in this battle, selling over 118 million units over its lifetime (including its later revisions) compared to 10 million for the Game Gear and 3 million for the Lynx. The Game Boy initially sold for significantly less than its competitors, and had a larger library of games, including what is considered the handheld's killer app, Tetris, which drew non-gamers to purchase the handheld to play it. Modern handheld consoles Nintendo's DS, 3DS, and Switch each faced competitors in the form of Sony's PlayStation Portable (PSP) and PS Vita, and the Steam Deck from Valve. In each instance, Nintendo managed to achieve higher sales despite having the weaker hardware. In video games The Hyperdimension Neptunia series of video games started as a parody of the console wars, incorporating personified consoles, developers, consumers, and other such figures within the gaming industry. See also Browser wars Format war Smartphone patent wars References History of video games Business rivalries
Console war
[ "Technology" ]
7,176
[ "History of video games", "History of computing" ]
65,279,895
https://en.wikipedia.org/wiki/Fellows%20of%20the%20Network%20Science%20Society
Each year since 2018, the Network Science Society (NetSci Society) selects up to 7 members of the network science community to be Fellows based on their enduring contributions to network science research and to the community of network scientists. Fellows are chosen from nominations received by the Network Science Society Fellowship Committee and are announced at the NetSci Conference hosted every year. 2022 Fellows of the Network Science Society Fan Chung Vittoria Colizza Noshir Contractor Santo Fortunato Byungnam Kahng Yamir Moreno Olaf Sporns 2021 Fellows of the Network Science Society Lada Adamic Albert-László Barabási Peter Sheridan Dodds Jürgen Kurths Vito Latora Marta Sales-Pardo 2020 Fellows of the Network Science Society Alex Arenas Alain Barrat Ginestra Bianconi Jennifer A. Dunne Michelle Girvan Adilson E. Motter Brian Uzzi 2019 Fellows of the Network Science Society The 2019 Fellows of the Network Science Society were honored at the 2019 NetSci Conference in Vermont, USA. Guido Caldarelli Raissa M. D'Souza Stuart A. Kauffman Jon M. Kleinberg José Fernando F. Mendes Anna Nagurney Luís A. Nunes Amaral 2018 Fellows of the Network Science Society The 2018 Fellows of the Network Science Society were honored at the 2018 NetSci Conference in Paris, France. Réka Albert Mark Granovetter Yoshiki Kuramoto Mark E. J. Newman Steven H. Strogatz Alessandro Vespignani Duncan J. Watts References External links Network Science Society Science and technology award winners
Fellows of the Network Science Society
[ "Technology" ]
314
[ "Science and technology awards", "Network science", "Computer science", "Science and technology award winners" ]
65,284,412
https://en.wikipedia.org/wiki/Identity%20V
Identity V is a 2018 free-to-play asymmetrical multiplayer survival horror game developed and published by the Chinese company NetEase in cooperation with Behaviour Interactive, creators of the similar game Dead by Daylight. Players take the role of the Hunter or a Survivor in a one-versus-four game; the Hunter must hunt and eliminate Survivors, typically by placing them on rocket chairs, while Survivors must evade the Hunter and collaborate to open the exit gates by decoding cipher machines. It was initially released on April 2, 2018, by NetEase in China and later released globally on July 5, 2018, for iOS and July 11, 2018, for Android. It is currently available on the App Store, Google Play Store, and Windows systems. Gameplay Basic gameplay Five players are divided into two factions: one player acts as the Hunter, while the other four act as Survivors with different roles in the match. The objective of the Hunter is to eliminate all the Survivors before they can escape, by chasing them down and placing them on Rocket Chairs. Meanwhile, the Survivors aim to escape through two exit gates after having decoded five Cipher Machines, or through the dungeon if there is only one Survivor left with at least two Cipher Machines decoded. The Hunter wins the match by eliminating at least three Survivors, while the Survivors win if three or more escape; otherwise, the game ends in a tie. Survivors can rescue their teammates from the Rocket Chairs before the elimination process ends. Once eliminated, Survivors can spectate their teammates in-game. Rewards vary based on the game mode and the players' performance. There are various characters to choose from, each with unique abilities. Hunters also possess "Secondary Skills", talents that may aid the Hunter during the match, while Survivors may use the terrain of the map to their advantage. The game features four types of in-game currency not tied to its core gameplay: Clues, Fragments, Inspirations, and Echoes. Clues are obtained by taking part in matches and are used to unlock new characters and purchase decorative items. Fragments are used to purchase cosmetic items, i.e., character skins and effect accessories. Inspirations are used for the game's character-skin gacha system, and Echoes are obtained by spending real money and can purchase most items directly. Persona The "Intrinsic Persona Web", also known as Persona, gives both factions extra talents during matches. Different personas grant different conditional perks in combat, such as highlighting nearby allies for Survivors or shortening attack recovery time for Hunters. The maximum number of points allowed to be used is 120; points are earned permanently by participating in matches. Ranked matches Identity V features two different ranking systems in the form of Character Points and Tier Divisions. Both are used only in Ranked Matches, which are available at specific time slots during the day. Character Points are earned by playing ranked matches with a specific character and are used to determine one's spot on the leaderboards. Character Points are specific to each character and may slowly decrease over time. Character Points are also reset when a new Season arrives. Badges can be earned for a specific character depending on the points earned. Identity V also features eight Tiers split into subdivisions, through which players are promoted or demoted depending on their performance. Winning Ranked matches earns Rank Points, and losing costs Rank Points. 
Once a certain number of Rank Points is reached, the player moves up to the next subdivision if they gain Rank Points in their next ranked match. The tier bracket the player ends up in determines what reward they get once the season ends. Similar to Character Points, Tier Divisions are reset when a new Season arrives. Alongside normal Rank Matches, players can participate in Five Players ranking matches, available at specific time slots during weekends. The Five Players ranking mode uses a different set of Tiers and subdivisions but otherwise retains the same rules as normal Rank Matches. Other game modes Entertainment game modes, belonging to the Violent Struggle submenu, have their own rules and adjustments and commence at fixed hours during the day. In Duo Hunters, two Hunters team up against eight Survivors. Survivors gain an extra hit point; this game mode features telephone booths from which characters of both factions can directly purchase items with acquired game points to further strengthen their abilities. The Blackjack mode plays in the style of the card game Blackjack, with a different player becoming a Hunter in each round. In Tarot, a team of three Survivors (two Squires and a King) and one Hunter (playing as their team's Knight) face off against another team of the same composition, aiming to eliminate the opposing King before their rivals do. A special version, called Crystal Ball Tarot, gives certain characters temporary buffs by activating crystal balls. A fourth mode added in 2021, known as Chasing Shadows, features six Survivors who have to race against each other on an obstacle course. Survivors can team up into three teams of two to participate in the game mode. A fifth mode added in August 2022, called Frenzy Rhapsody, features six Survivors split into two teams and incorporates dodgeball elements into gameplay. A sixth mode, called Hide and Seek, was announced in August 2023. Its gameplay resembles prop hunting, where two Hunters must find and eliminate six Survivors disguised as objects within 5 minutes. A seventh game mode, "Copycat", features a battle of wits between 10 players split into 3 factions: Detectives, Copycats, and Mystery Guests, each having their own winning conditions. The game mode takes great inspiration from social deduction games such as Mafia and Among Us. Setting Story Mode The player initially assumes the role of Orpheus, an amnesiac detective who arrives at the Oletus Manor, the main setting of the game, while investigating a missing person case. The Oletus Manor is a large manor owned by a mysterious individual that hosts "games" of the Hunter and the Hunted. During his stay, Orpheus collects evidence from written records, or "diaries", allowing him to visualize the "games" from a participant's perspective. On October 28, 2021, NetEase released a major expansion pack called "Time of Reunion" that expanded the storyline, particularly Orpheus's past and his involvement in the manor games. On April 20, 2023, NetEase released Update 2.0, "Ashes of Memory", which changes the main character to Alice DeRoss, a journalist whose investigations lead her to the Oletus Manor. Crossovers The game has had many crossover events. Identity V has collaborated with the video game franchises Danganronpa and Persona, the film Edward Scissorhands, and the manga/anime series The Promised Neverland, Death Note, Junji Ito's Tomie, Mitsuji Kamata, Bungo Stray Dogs, and Case Closed. 
There have been many crossovers exclusive to the game's Chinese servers, such as with Crazy Alien, McDonald's, Fei Ren Zai, and KFC. From June to July 2022, Identity V featured a crossover event with the Chinese television series Link Click. Identity V has also collaborated with the game Project Zero II, lasting from November to December 2023. References External links Official Website (English) 2018 video games Android (operating system) games Asymmetrical multiplayer video games Detective video games Gacha games 2010s horror video games Survival horror video games IOS games NetEase games Video games developed in China Video games set in country houses Windows games
Identity V
[ "Physics" ]
1,512
[ "Asymmetrical multiplayer video games", "Symmetry", "Asymmetry" ]
65,286,671
https://en.wikipedia.org/wiki/Study%20of%20animal%20locomotion
The study of animal locomotion is a branch of biology that investigates and quantifies how animals move. Kinematics Kinematics is the study of how objects move, whether they are mechanical or living. In animal locomotion, kinematics is used to describe the motion of the body and limbs of an animal. The goal is ultimately to understand how the movement of individual limbs relates to the overall movement of an animal within its environment. The sections below highlight the key kinematic parameters used to quantify body and limb movement for different modes of animal locomotion. Quantifying locomotion Walking Legged locomotion is the dominant form of terrestrial locomotion, movement on land. The motion of limbs is quantified by the kinematics of the limb itself (intralimb kinematics) and the coordination between limbs (interlimb kinematics). To quantify the intralimb kinematics and interlimb coordination during walking, the stance and swing phases of the step cycle must be isolated. Stance is the portion of the step where the leg contacts the ground, whereas swing is the portion where the leg lifts off the ground and moves forward along the body. High-speed videography is used to record the motion of the legs. Pose-estimation methods are then used to track key points on each leg, typically at the joints of the leg. After extracting the positions of each leg throughout a recording, there are several ways of determining the stance and swing phases of the step cycle. One approach involves peak and trough detection of the leg tip positions in ego-centric coordinates, after the animal has been aligned to a common heading. Alternatively, swing and stance can be classified as leg tip velocities above and below a chosen threshold, respectively; a code sketch of this thresholding approach is given after the joint angle list below. In this case, leg tip velocities are calculated in allocentric, or world-oriented, coordinates. Once swing and stance phases are determined, the following kinematic and coordination parameters can be calculated. Intralimb kinematic parameters: Anterior Extreme Position (AEP): the forwardmost position of the leg (i.e. usually the start of stance phase). Posterior Extreme Position (PEP): the rearmost position of the leg (i.e. usually the start of swing phase). Step duration: elapsed time between two onsets of stance. Step frequency: inverse of step duration (i.e. number of steps per second). Stance duration: time elapsed between stance onset and swing onset. Swing duration: time elapsed between swing onset and the subsequent stance onset. Step amplitude: the distance a leg travels during swing in an ego-centric reference frame. Step length: the distance from one stance onset to the next in a world reference frame. Stride range of motion: the leg's integrated path between stance onset and swing offset. Joint angles: Walking can also be quantified through the analysis of joint angles. During legged locomotion, an animal flexes and extends its joints in an oscillatory manner, creating a joint angle pattern that repeats across steps. The following are some useful joint angle analyses for characterizing walking: Joint angle trace: a trace of the angles that a joint exhibits during walking. Joint angle distribution: the distribution of angles of a joint. Joint angle extremes: the maximum (extension) and minimum (flexion) angle of a joint during walking. Joint angle variability across steps: the variability between joint angle traces of several steps. 
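As a minimal sketch of the velocity-thresholding approach referenced above, the following Python/NumPy function labels each frame of a leg-tip trajectory as swing or stance and extracts the phase onsets. The function name, the fixed speed threshold, and the 2D allocentric input format are illustrative assumptions, not conventions of any particular tracking package; real pipelines typically smooth the trajectory before thresholding.

```python
import numpy as np

def classify_swing_stance(leg_tip_xy, fps, speed_threshold):
    """Label frames of a leg-tip trajectory as swing (True) or stance (False).

    leg_tip_xy      : (n_frames, 2) leg-tip positions in allocentric (world)
                      coordinates, e.g. in mm.
    fps             : frame rate of the recording, in frames per second.
    speed_threshold : speed (mm/s) above which the leg counts as swinging
                      (an assumed, dataset-specific value).
    """
    # Frame-to-frame displacement converted to speed (mm/s).
    speed = np.linalg.norm(np.diff(leg_tip_xy, axis=0), axis=1) * fps
    speed = np.append(speed, speed[-1])   # pad to match the frame count

    is_swing = speed > speed_threshold

    # Onsets are the frames where the swing/stance label flips.
    stance_onsets = np.flatnonzero(is_swing[:-1] & ~is_swing[1:]) + 1
    swing_onsets = np.flatnonzero(~is_swing[:-1] & is_swing[1:]) + 1
    return is_swing, stance_onsets, swing_onsets
```

From the returned onsets, the intralimb parameters listed above follow directly; for example, step duration is the difference between consecutive stance onsets divided by the frame rate.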
Interlimb kinematic parameters Phase offsets: the lag of a leg relative to the stride period of a reference leg. Number of legs in stance: the number of legs in stance at a single point in time. Tripod coordination strength (TCS): specific to hexapod interlimb coordination, this parameter measures how closely the interlimb coordination resembles the canonical tripod gait. TCS is calculated as the ratio of the total time the legs belonging to a tripod (i.e. the left front, right middle, and left hind legs, or vice versa) are in swing together to the time elapsed between the first leg of the tripod entering swing and the last leg of the same tripod exiting swing. Relationship between several joint angles: the relative angles of two joints, either from the same leg or between legs. For example, the angle of a human's left femur-tibia (knee) joint when the right femur-tibia joint is at its most flexed or extended angle. Measures of walking stability Static stability: the minimum distance from the center of mass (COM) to any edge of the support polygon created by the legs in stance, at each moment in time. A walking animal is statically stable if there are enough legs to form a support polygon (i.e. 3 or more) and the COM is within the support polygon. Moreover, static stability is at its maximum when the COM lies at the center of the support polygon. Steps to calculate static stability are as follows: Find which legs are in stance and the location of the center of mass. Note that if there are fewer than 3 legs in stance, the animal is not statically stable. Form the support polygon by creating edges between these legs in a clockwise manner. Determine whether the center of mass lies inside or outside the support polygon. The ray casting algorithm is a common approach for testing whether a point is located within a polygon. If the center of mass is outside the polygon, the animal is statically unstable. If the center of mass is inside the support polygon, calculate static stability by computing the minimum distance from the center of mass to any edge of the polygon. Dynamic stability: dictates the degree to which deviations from periodic movement during walking will result in instability. Analyzing kinematics across steps Quantifying walking often involves assessing the kinematics of individual steps. For more information on methods for acquiring these data, see Methods of study. The first task is to parse walking data into individual steps. Methods for parsing individual steps from walking data rely heavily on the data collection process. At a high level, walking data should be periodic, with each cycle reflecting the movements of one step, and steps can therefore be parsed at the peaks of the signal. It is often useful to compare or pool step data. One difficulty in this pursuit is the variable length of steps, both within and between legs. There are many ways to align steps; the following are a few useful methods. Stretch step: steps of variable durations may be stretched to the same duration. Step phase: the phase of each step can be computed, which quantifies how far through the step each data point is. This normalizes the data by step length, allowing data from steps of variable lengths to be compared. The Hilbert transform may be used to calculate phase, as sketched below; however, a manual phase calculation may be better for aligning the peaks (swing and stance starts). 
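One way the Hilbert-transform phase calculation mentioned above might look in practice, as a minimal sketch assuming a roughly periodic one-dimensional leg-tip signal (the function name and the de-meaning and wrapping conventions are implementation choices, not prescriptions from the literature):

```python
import numpy as np
from scipy.signal import hilbert

def step_phase(leg_tip_x):
    """Instantaneous step-cycle phase of a roughly periodic leg-tip signal.

    leg_tip_x : 1D array of the leg tip's fore-aft position over time, where
                one oscillation of the signal corresponds to one step.
    Returns the phase of each frame, wrapped to [0, 2*pi).
    """
    x = np.asarray(leg_tip_x, dtype=float)
    x = x - x.mean()                   # center the signal before the transform
    analytic = hilbert(x)              # analytic signal: x + i * H{x}
    phase = np.angle(analytic)         # instantaneous phase in (-pi, pi]
    return np.mod(phase, 2 * np.pi)    # increases from 0 to 2*pi each step
```

Data points from steps of different durations can then be pooled by binning on this phase rather than on elapsed time.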
Fruit flies have six legs and four joints per leg, with many joints moving in multiple planes; thus, there are many kinematic degrees of freedom. The continuous variability in coordination patterns across walking speeds and across individual flies can therefore be visualized in a low-dimensional embedding, using techniques such as principal components analysis and UMAP. In addition to stability, the robustness of a walking gait is also thought to be important in determining the gait of a fly at a particular walking speed. Robustness refers to how much offset in the timing of a leg's stance can be tolerated before the fly becomes statically unstable. For instance, a robust gait may be particularly important when traversing uneven terrain, which may cause unexpected disruptions in leg coordination; a robust gait would help the fly maintain stability in such cases. Analyses suggest that flies may exhibit a compromise between the most stable and most robust gait at a given walking speed. Speed-dependent kinematic changes Many animals alter walking kinematics as they modulate walking speed. An interlimb kinematic parameter that is commonly speed dependent is gait, the stepping pattern across legs. While some animals alternate between distinct gaits as a function of speed, others move along a continuum of gaits. Similarly, animals commonly modulate intralimb parameters across speeds. For example, fruit flies decrease stance duration and increase step length as forward speed increases. Importantly, kinematics are modulated not only across forward velocity, but also across rotational and sideslip velocities. In these cases, asymmetry in the modulation between left and right legs is common. Flight Aerial locomotion is a form of movement used by many organisms and is typically powered by at least one pair of wings. Some organisms, however, have other morphological features that allow them to glide. There are many different flight modes, such as takeoff, hovering, soaring, and landing. Quantifying wing movements during these flight modes provides insight into the body and wing maneuvers that are required to execute these behaviors. Wing orientation is quantified throughout the flight cycle by three angles that are defined in a coordinate system relative to the base of the wing. The magnitudes of these three angles are often compared between upstrokes and downstrokes. In addition, kinematic parameters are used to characterize the flight cycle, which consists of an upstroke and a downstroke. Aerodynamics is often considered when quantifying aerial locomotion, as aerodynamic forces (e.g. lift and drag) can influence flight performance. Key parameters from these three categories are defined as follows: Angles to quantify wing orientation Wing orientation is described in a coordinate system centered at the wing hinge. The x-y plane coincides with the stroke plane, the plane that is parallel to the plane containing both wing tips and is centered at the wing base. Assuming the wing can be modeled by the vector passing through the wing base and wing tip, the following angles describe the orientation of the wing: Stroke position: angle describing the anterior-to-posterior motion of the wings relative to the stroke plane. This angle is computed from the projection of the wing vector onto the stroke plane. Stroke deviation: angle describing the vertical amplitude of the wings relative to the stroke plane. This angle is defined as the angle between the wing vector and its projection onto the stroke plane. Angle of attack: angular orientation of the wings (i.e. tilt) relative to the stroke plane. 
This angle is computed as the angle between the wing cross-section vector and the stroke plane. Kinematic parameters Upstroke amplitude: angular distance through which the wings travel during an upstroke. Downstroke amplitude: angular distance through which the wings travel during a downstroke. Stroke duration: time elapsed between the onsets of two consecutive upstrokes. Wingbeat frequency: inverse of stroke duration; the number of wingbeats per second. Flight distance per wingbeat: the distance covered during each wingbeat. Upstroke duration: time elapsed between the onset of an upstroke and the onset of a downstroke. Downstroke duration: time elapsed between the onset of a downstroke and the onset of an upstroke. Phase: if an organism has both front and hind wings, the lag of one wing pair relative to the other (reference) wing pair. Aerodynamic parameters Reynolds number: ratio of inertial forces to viscous forces. This metric helps describe how wing performance changes with body size. Swimming Aquatic locomotion is incredibly diverse, ranging from flipper- and fin-based movement to jet propulsion. Below are some common methods for characterizing swimming: Fin and flipper locomotion Body, tail, or fin angle: the curvature of the body or displacement of a fin or flipper. Tail or fin frequency: the frequency at which a fin or tail completes one movement cycle. Jet propulsion Jet propulsion consists of two phases: a refill phase, during which an animal fills a cavity with water, and a contraction phase, during which it squeezes water out of the cavity to propel itself in the opposite direction. The size of the cavity can be measured in these two phases to compare the amount of water cycled through each propulsion. Methods of study A variety of methods and equipment are used to study animal locomotion: Treadmills are used to allow animals to walk or run while remaining stationary or confined with respect to external observers. This technique facilitates filming or recordings of physiological information from the animal (e.g., during studies of energetics). Some treadmills consist of a linear belt (single or split belt) that constrains the animal to forward walking, while others allow 360 degrees of rotation. Non-motorized treadmills move in response to an animal's self-initiated locomotion, while motorized treadmills externally drive locomotion and are often used to measure the endurance capacity (stamina) of animals. Tethered locomotion Animals may be fixed in place, allowing them to move while remaining stationary relative to their environment. Tethered animals can be lowered onto a treadmill to study walking, suspended in air to study flight, or submersed in water to study swimming. Untethered locomotion Animals may move through an environment without being held in place, and their movement can be tracked for analysis of that behavior. However, freely moving animals are more challenging to track in 3D for detailed kinematic analysis of intralimb coordination. Visual arenas Locomotion can be prolonged and sometimes controlled using a visual arena displaying a particular pattern of light. Many animals use visual cues from their surroundings to control their locomotion, and so presenting them with pseudo optic flow or a context-specific visual feature can prompt and prolong locomotion. Racetracks lined with photocells, or filmed while animals run along them, are used to measure acceleration and maximal sprint speed. High-speed videography for the study of the motion of an entire animal or parts of its body (i.e. 
kinematics) is typically accomplished by tracking anatomical locations on the animal and then recording video of its movement from multiple angles. Traditionally, anatomical locations have been tracked using visual markers placed on the animal's body. However, it is becoming increasingly common to use computer vision techniques to achieve markerless pose estimation. Marker-based pose estimation: visual markers must be placed on the animal at the desired regions of interest. The location of each marker is determined for each video frame, and data from multiple views are integrated to give the position of each point through time. The visual markers can be annotated in each frame manually; however, this is a time-consuming task, so computer vision techniques are often used to automate the detection of the markers. Markerless pose estimation: user-defined body parts must be manually annotated in a series of frames to use as training data. Deep learning and computer vision techniques are then employed to learn the locations of the body parts in the training data. Next, the trained model is used to predict the locations of the body parts in each frame of newly collected videos. The resulting time series data consist of the positions of the visible body parts at each frame in the video. Model parameters can be optimized to minimize tracking error and increase robustness. The kinematic data obtained from either of these methods can be used to determine fundamental motion attributes such as velocity, acceleration, joint angles, and the sequencing and timing of kinematic events. These fundamental attributes can be used to quantify various higher-level attributes, such as the physical abilities of the animal (e.g., its maximum running speed, or how steep a slope it can climb), gait, neural control of locomotion, and responses to environmental variation. These can aid in the formulation of hypotheses about the animal or about locomotion in general. Marker-based and markerless pose estimation approaches each have advantages and disadvantages, so the method best suited for collecting kinematic data may depend largely on the animal of study. Marker-based tracking methods tend to be more portable than markerless methods, which require precise camera calibration. Markerless approaches, however, overcome several weaknesses of marker-based tracking, since placing visual markers on the animal of study may be impractical, expensive, or time-consuming. There are many publicly accessible software packages that provide support for markerless pose estimation. 
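As an example of deriving one of the fundamental attributes above from tracked keypoints, the following is a minimal sketch of a joint-angle computation; it assumes three tracked points per joint (for instance hip, knee, and ankle), and the function name and array layout are illustrative rather than drawn from any particular package.

```python
import numpy as np

def joint_angle(p_proximal, p_joint, p_distal):
    """Angle (radians) at p_joint between the segments to p_proximal and p_distal.

    Each argument is an (n_frames, 2) or (n_frames, 3) array of tracked
    keypoint positions, e.g. hip, knee, and ankle from pose estimation.
    """
    v1 = p_proximal - p_joint
    v2 = p_distal - p_joint
    # Cosine of the joint angle from the normalized dot product; the clip
    # guards arccos against floating-point rounding just outside [-1, 1].
    cos_theta = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1)
    )
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))
```

Applied frame by frame, this yields the joint angle traces, distributions, and extremes described in the Walking section.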
Force plates are platforms, usually part of a trackway, that can be used to measure the magnitude and direction of the forces of an animal's step. When used with kinematics and a sufficiently detailed model of anatomy, inverse dynamics solutions can determine the forces not just at the contact with the ground, but at each joint in the limb. Electromyography (EMG) is a method of detecting the electrical activity that occurs when muscles are activated, thus determining which muscles an animal uses for a given movement. This can be accomplished either by surface electrodes (usually in large animals) or implanted electrodes (often wires thinner than a human hair). Furthermore, the intensity of electrical activity can correlate with the level of muscle activity, with greater activity implying (though not definitively showing) greater force. Optogenetics is a method used to control the activity of targeted neurons that have been genetically modified to respond to light signals. Optogenetic activation and silencing of neurons can help determine which neurons are required to carry out certain locomotor behaviors, as well as the function of these neurons in the execution of the behavior. Sonomicrometry employs a pair of piezoelectric crystals implanted in a muscle or tendon to continuously measure the length of that muscle or tendon. This is useful because surface kinematics may be inaccurate due to skin movement. Similarly, if an elastic tendon is in series with the muscle, the muscle length may not be accurately reflected by the joint angle. Tendon force buckles measure the force produced by a single muscle by measuring the strain of its tendon. After the experiment, the tendon's elastic modulus is determined and used to compute the exact force produced by the muscle. However, this can only be used on muscles with long tendons. Particle image velocimetry is used in aquatic and aerial systems to measure the flow of fluid around and past a moving organism, allowing fluid dynamics calculations to determine pressure gradients, speeds, and so on. Fluoroscopy allows real-time X-ray video, giving precise kinematics of moving bones. Markers opaque to X-rays can allow simultaneous tracking of muscle length. Many of the above methods can be combined to enhance the study of locomotion. For example, studies frequently combine EMG and kinematics to determine the motor pattern, the series of electrical and kinematic events that produce a given movement. Optogenetic perturbations are also frequently combined with kinematics to study how locomotor behaviors and tasks are affected by the activity of a certain group of neurons. Observations resulting from optogenetic experiments may provide insight into the neural circuitry that underlies different locomotor behaviors. It is also common for studies to collect high-speed videos of animals on a treadmill; such a setup may allow for increased accuracy and robustness when determining an animal's poses across time. Modeling animal locomotion Models of animal locomotion are important for gaining new insights and predictions about how kinematics arise from the interactions of the nervous, skeletal, and muscular systems, insights that would otherwise be difficult to glean from experiments. The following are types of animal locomotion models: Neuromechanical models Neuromechanics is a field that combines biomechanics and neuroscience to understand the complex interactions between the physical environment, the nervous system, and the muscular and skeletal systems that together produce body movement. Neuromechanical models therefore aim to simulate movement given the neural commands to specific muscles and the way those muscles are connected to the animal's skeleton. The key components of neuromechanical models are: A morphologically accurate 3D model of the animal's skeleton consisting of rigid bodies (i.e. bones) that are arranged in a naturalistic manner. In these models, the properties of each rigid body, such as mass, length, and width, need to be prescribed. Additionally, the joints between rigid bodies need to be defined, both in terms of type (e.g. hinge or ball-and-socket) and degrees of freedom (i.e. how the rigid bodies move relative to one another). The final step is to assign a mesh object to each rigid body that determines its appearance (e.g. 
the outer surface of a bone) and other contact properties of the rigid bodies. These skeletal models can be built using a variety of 3D modeling programs, such as Blender and OpenSim Creator. After the skeletal model is built, the next step is to accurately define the attachment points of muscles to the rigid bodies. This assignment is crucial for the rigid bodies to be articulated in a naturalistic way. There are several types of muscle models that simulate the dynamics of muscle activation, contraction, and relaxation, including Hill-type and Ekeberg-type muscle models. Neural controllers that simulate motor neuron recruitment and activity driven by central commands are used to dictate the timing and strength of modeled muscle activation. There are many flavors of these controllers, such as coupled phase oscillator and neural network models. An environment that incorporates physics is essential for simulating realistic movement of neuromechanical models, because the models must abide by the laws of physics. Environments used for physics simulation include OpenSim, PyBullet, and MuJoCo. References Kinematics Animal locomotion Articles containing video clips
Study of animal locomotion
[ "Physics", "Technology", "Biology" ]
4,484
[ "Animal locomotion", "Physical phenomena", "Machines", "Kinematics", "Behavior", "Animals", "Classical mechanics", "Physical systems", "Motion (physics)", "Mechanics", "Ethology" ]
65,287,796
https://en.wikipedia.org/wiki/Grammistin
Grammistins are peptide toxins synthesised by glands in the skin of soapfishes of the tribes Grammistini and Diploprionini, which are both classified within the grouper subfamily Epinephelinae, part of the family Serranidae. Grammistin has a hemolytic and ichthyotoxic action. The grammistins have secondary structures and biological effects comparable to other classes of peptide toxins: melittin from bee stings and the pardaxins secreted in the skin of two sole species. A similar toxin has been found to be secreted in the skin of some clingfishes. Grammistins have a distinctive bitter taste. Soapfishes increase the amount of toxin released in their skin if they are stressed, and other species of fish kept in a confined space with a stressed soapfish normally die. If ingested at a high enough dosage, the toxin is lethal to mammals, with some symptoms being similar to those produced by ciguatoxins. Grammistins also cause hemolysis of mammalian blood cells. The main purpose of the secretion of grammistin is defensive: when a lionfish (Pterois miles) tries to prey on a soapfish, it immediately ejects the soapfish from its mouth, suggesting that it has detected the bitter taste. Grammistins affect organisms by cytolysis and hemolysis. As well as being toxic, they are also antibiotic and antimicrobial. References Ichthyotoxins Peptides Biological toxin weapons
Grammistin
[ "Chemistry" ]
313
[ "Biomolecules by chemical classification", "Chemical weapons", "Molecular biology", "Peptides", "Biological toxin weapons" ]
65,288,265
https://en.wikipedia.org/wiki/Smart%20mobility
Smart mobility refers to many modes of transport. Some smart mobility services include: public transport (with real-time timetabling and route optimization, seamless travel and digital ticketing) Carsharing Mobility as a service (MaaS) Mobility on Demand (MOD) autonomous transport systems smart mobility services in freight and logistics drones and low-altitude aerial mobility Overview Mobility-as-a-Service Mobility-as-a-Service enables multimodal mobility by providing user-centric information and travel services (navigation, location, booking, payment, ...), hence allowing mobility to be offered as a seamless service across all transport modes. Mobility on Demand likewise does not require ownership of private automobiles and gives convenient access to a range of travel modes, while socialising the high initial costs of switching to electric-vehicle-based mobility. Integrated mobility-on-demand services can contribute to a modal shift toward public transport and also address the spatial inefficiencies of private transport. The car also has a small role to play within smart mobility; it is particularly useful in contexts that do not require personal ownership of the car (see above). Cars can be made to use intelligent transportation systems. See also Smart city Shared mobility European Green Deal: Smart mobility is a component thereof Remote work References Transportation engineering Shared transport
Smart mobility
[ "Engineering" ]
261
[ "Civil engineering", "Transportation engineering", "Industrial engineering" ]
65,288,458
https://en.wikipedia.org/wiki/Whitechapel%20Mount
Whitechapel Mount was a large artificial mound of disputed origin. A prominent landmark in 18th-century London, it stood in the Whitechapel Road beside the newly constructed London Hospital, being not only older, but significantly taller. It was crossed by tracks, served as a scenic viewing-point (and a hiding place for stolen goods), could be ascended by horses and carts, and supported some trees and formal dwelling-houses. It has been interpreted as: a defensive fortification in the English Civil War; a burial place for victims of the Great Plague; rubble from the Great Fire of London; and as a laystall (hence it was sometimes called Whitechapel Dunghill). Possibly all of these theories are true to some extent. Whitechapel Mount was physically removed around 1807. Because Londoners widely believed in the "Great Fire rubble" theory, the remains were sifted by antique hunters, and some sensational finds were claimed. The Mount survives in the present-day placename Mount Terrace, E1. Location Neighbourhood Whitechapel Mount was on the south side of the Whitechapel Road, on the ancient route from the City of London to Mile End, Stratford, Colchester and Harwich. In the 18th century and later the surroundings were mostly fields: grazing for cows or market gardens. Leaving London, a traveller would pass the church of St Mary Matfelon (origin of the name "white chapel") on the right, then a windmill, before arriving at the Mount. Across the Whitechapel Road was a burying ground and the Ducking Pond. Further east, at the turnpike, the name changed to Mile End Road, with Dog Row (today Cambridge Heath Road) branching off to Bethnal Green. According to John Strype (1720) the neighbourhood was a busy one, with good inns for travellers in Whitechapel and good houses for sea captains in Mile End. Even so, said Strype, the Whitechapel Road was "pestered" by illegally built, poor-quality dwellings. Another source said it was infested by highwaymen, footpads and riff-raff of all kinds who preyed on travellers to and from London. News and court reports speak of murders and robberies. In two Old Bailey cases stolen property was buried in, and recovered from, Whitechapel Mount. It was a place of resort for pugilists and dog-fighters. From the summit of Whitechapel Mount an extensive view of the hamlets of Limehouse, Shadwell, and Ratcliff could be obtained. On maps Whitechapel Mount's position is first depicted in a 1673 building plan by Sir Christopher Wren, where he refers to it as "the mud wall called the Fort". In Joel Gascoyne's survey of the parish of Stepney (1703) it is called The Dunghill: it is at least 400 yards long, and is crossed not only by a path, but a road. In John Rocque's map of London (1746) it has been horizontally truncated, but is shown with substantial elevation, with at least one dwelling-house – if not a terrace of houses – on its western end. In Richard Blome's map of 1755 it has a large dwelling-house with a front drive and appears alongside the newly built London Hospital. In John Cary's 1795 map its western portion has been truncated by the newly built New Road, but the Mount appears to have a substantial building in its northwest corner. Possible origins Civil War fortification In 1643 London was hastily fortified against the Royalist armies, for "there is terrible news that [Prince] Rupert will sack it and so a complete and sufficient dike and earthern wall and bulwarks must be made". 
Twenty-three or twenty-four earthen forts were built at intervals around the city and its main suburbs; neighbouring forts were in sight of one another. These forts were manned by volunteers, e.g. men too old to be in the regular militia; local innkeepers were ordered to provide them with food. The forts were interconnected by an earth bank and trench, dug by "great numbers of men, women and young children". All social classes joined in the labour: From ladies down to oyster-wenches Labour'd like pioneers in trenches, Fell to their pick-axes and tools, And help'd the men to dig like moles. The completed ring was 18 miles in circumference. A visiting Scotsman walked it; it took him 12 hours. Fort No. 2, officially a hornwork with two flanks, commanded the Whitechapel Road. The Scottish traveller, who inspected it, said it was a "nine angled fort only pallosaded and single ditched and planted with seven pieces of brazen ordnance [brass cannon], and a court du guard [guardhouse] composed of timber and thatched with tyle stone as all the rest are". There was a trench around its base. Daniel Lysons said the earthwork at Whitechapel was 329 feet long, 182 feet broad and more than 25 feet above ground level. After the civil war these fortifications were swiftly removed because they spoiled productive agricultural land. However, traces remained, and one of these was Whitechapel Mount. Wrote Lysons (1811): "The east end was till of late years very perfect; on the west side some houses had been built. The surface on the top, except where it had been dug away, was perfectly level." Mount Street, Mayfair, is another relict name of one of these civil war forts. Great Plague burial ground In the bubonic epidemic of 1665 an estimated 70,000-100,000 Londoners died of the plague. Aldgate, Whitechapel and Stepney were badly affected. This placed a strain on the burial facilities and some were buried in mass graves. Official mass graves were made by opening a deep pit and leaving it open for as long as it took to fill with cadavers, which was done downwind. The number of lost plague pits in London is commonly exaggerated – most mass graves were actually in pre-existing churchyards. However, there were some gaps in the records and "undoubtedly some temporary and irregular plague burial sites". Those willing to man the dead carts (and dispose of the corpses) were not fastidious; a contemporary said they were "very idle base liveing men and very rude", drawing attention to their task by swearing and cursing. No contemporaneous record confirms corpses were officially buried in Whitechapel Mount. It is known that there were several burial grounds or plague pits in the vicinity, e.g. across the road. Whether the mound was used as a burial ground has been disputed. According to popular tradition, it was. In one version, Whitechapel Mount's origin was that rubble from the Fire of London was thrown over a plague pit to cover it up. These rumours were denied by the authorities when they proposed to remove Whitechapel Mount; they had it "pierced" in an effort to refute them. The clearest reputable source is the author Joseph Moser, who wrote: In the course of last summer [1802], when part of the rubbish of [Whitechapel Mount] had just been removed, I had the curiosity to inspect the place, and observed in the different strata a great number of human bones, together with those, apparently, of different animals, oxen, or cows, and sheep's horns, bricks, tiles, &c. 
The bones and other exuvia of animals were in many places, especially towards the bottom, bedded in a stiff, viscid earth, of the blueish colour and consistence of potter's clay, which was unquestionably the original ground, thrown into different directions, as different interments operated upon its surface. A complication is that the London Hospital itself may have used the Mount for burials: a hospital history says that by 1764 "The Mount Burying Ground was full". Great Fire detritus The Great Fire of London occurred the year after the plague. The nearest burnt environment was a mile away. Even so, respectable sources asserted that Whitechapel Mount was formed or augmented from rubble from the Fire of London, including Dr Markham, the rector of Whitechapel Church, who took pains to investigate the subject. This origin theory was widely believed, though some denied it. When the Mount was dismantled in the 19th century the belief attracted flocks of antique hunters; see below. Laystall The above theories about the origin of Whitechapel Mount, by themselves, may not account for its sheer size as shown in the maps and illustrations cited. A laystall was an open rubbish tip; towns relied on laystalls for disposing of garbage, and did so until the sanitary landfills of the 20th century. By legislation of October 1671 seven laystalls were appointed for the City of London. All street sweepings and household rubbish were to be collected in carts and had to be tipped in one of these, and nowhere else. One of them was Whitechapel Mount: it was to receive the rubbish from the wards of Portsoken, Tower, Duke's Place and Lime Street. However, a laystall was more than a passive tip: it was a business. Its proprietor employed gangs of men, women and boys to sort the rubbish and recycle it. Without him, nothing prevented indiscriminate fly-tipping and random lateral spread onto adjoining properties. William Guy, who investigated the laystalls of mid-19th-century London, reported In most of the laystalls or dustmen's yards, every species of refuse matter is collected and deposited:– nightsoil, the decomposing refuse of markets, the sweepings of narrow streets and courts, the sour-smelling grains from breweries, the surface soil of the leading thoroughfares, and the ashes from the houses. The proportion in which these several matters are collected, vary... In all these establishments the bulk of the deposits consists of dust from the houses, which is sifted on the spot by women and boys seated on the dust-heaps, assisted by men who are engaged in filling the sieves, sorting the heterogeneous materials, or removing and carting them away. While the legislation speaks of "dung, soil, filth and dirt", most domestic refuse (by volume) comprised household dust, typically coal ash – hence the expression "dustman" – and this could be used for making bricks. At the close of the 18th century there was a tremendous demand for bricks to build the rapidly expanding London; "At night, a 'ring of fire' and pungent smoke encircled the City"; there were numerous local brickfields, e.g. at Mile End. An Old Bailey case of 1809 records that bricks were being delivered from the diminishing Whitechapel Mount. Removal The construction of the East and West India Docks early in the nineteenth century caused roads to be made through the low marshy fields extending from Shadwell and Ratcliff to Whitechapel.
New Street/Cannon Street Road, leading from Whitechapel Mount to St George in the East, so increased the value of the land on each side of it that the Corporation of London decided to take down the Mount. This occurred in 1807–8, and Mount Place, Mount Terrace, and Mount Street were then built on that site, thus marking the spot where the Mount stood. The process took several years, efforts at first being desultory. There was even a proposal – a prefiguring of the Regent's Canal – to bring a canal from Paddington to the docks at Wapping by passing through Whitechapel Mount. The soil of the Mount was used to make bricks (see above), and these were delivered to, for example, Wentworth Street, Bethnal Green and "Hanbury's Brewhouse" (the Black Eagle Brewery), buildings that stand today. Antique hunters Believing it to contain detritus from the Fire of London, flocks of cockney enthusiasts examined the Mount's soil. Various antiques were found in it – or alleged to be found in it, since it gave them provenance – including a silver tankard and a Roman coin. The most spectacular find was a carved boar's head with silver tusks; it was, said reputable antiquarians, an authentic memento of the Boar's Head Inn, Eastcheap: a setting for Shakespeare plays, Falstaff, Mistress Quickly and so forth, and genuinely burnt down in the Fire of London. Archaeology For lack of access, there has been no systematic archaeological investigation of the site. However, redevelopment of the Royal London Hospital has allowed occasional glimpses. Mackinder (1994), London Hospital Medical College, Newark Building, Grid Reference TQ3456815: Aitken (2005), The Front Green, Royal London Hospital, Grid Reference TQ347816: In theatre, literature and popular song The Skeleton Witness: or, The Murder at the Mound by William Leman Rede was a play performed on the English and American stage. The villain commits a murder and conceals the body in Whitechapel Mount. The crime is done in such a way that, should the remains be discovered, the impoverished hero will get the blame, though innocent. The hero goes abroad for seven years and makes his fortune. On his return to London, about to marry the heroine, he learns, to his horror, that the Mount is to be cleared away, digging to start on the morrow. To stop this, he hurries off to the Mount's owner and buys it at an extravagant price. The vendor, suspicious, wonders if the Mount contains a buried treasure and sends some men to poke around. They discover the skeleton. By a complicated plot twist the hero proves his innocence and lives happily ever after with his love interest. The Mount was used metonymically to denote the eastern extremity of London town, as in "from the farthest extent of Whitechapel mount to the utmost limits of St Giles", or "[the news] was all over town, from Hyde Park Corner to Whitechapel dunghill". The Coachman, a popular 18th century song, began: I'm a boy full spunk, and my name's little Joe, / It's I that can tip the long trot; / From Whitechapel-mount up to fam'd Rotten-row, / With the ladies sometimes is my lot. References and notes Sources External links Mount Terrace in the Survey of London 17th-century forts in England 18th century in London Archaeology of London Buildings and structures in Whitechapel Death in London Great Fire of London History of the London Borough of Tower Hamlets Mounds Waste Whitechapel Great Plague of London
Whitechapel Mount
[ "Physics" ]
3,034
[ "Materials", "Waste", "Matter" ]
65,288,835
https://en.wikipedia.org/wiki/M%C3%A9sz%C3%A1ros%20effect
The Mészáros effect "is the main physical process that alters the shape of the initial power spectrum of fluctuations in the cold dark matter theory of cosmological structure formation". It was introduced in 1974 by Péter Mészáros considering the behavior of dark matter perturbations in the range around the radiation–matter equilibrium redshift $z_{\rm eq}$ and up to the radiation decoupling redshift $z_{\rm dec}$. This showed that, for a non-baryonic cold dark matter not coupled to radiation, the small initial perturbations expected to give rise to the present day large scale structures experience below $z_{\rm eq}$ an additional distinct growth period which alters the initial fluctuation power spectrum, and allows sufficient time for the fluctuations to grow into galaxies and galaxy clusters by the present epoch. This involved introducing and solving a joint radiation plus dark matter perturbation equation for the density fluctuations $\delta \equiv \delta\rho/\rho$, in which the variable is $y \equiv \rho_m/\rho_r \propto a$ and $a$ is the length scale parametrizing the expansion of the Universe. The analytical solution has a growing mode $\delta \propto 1 + \tfrac{3}{2}y$. This is referred to as the Mészáros effect, or Mészáros equation. The process is independent of whether the cold dark matter consists of elementary particles or macroscopic objects. It determines the cosmological transfer function of the original fluctuation spectrum, and it has been incorporated in all subsequent treatments of cosmological large scale structure evolution. A more specific galaxy formation scenario involving this effect was discussed by Mészáros in 1975 explicitly assuming that the dark matter might consist of approximately solar mass primordial black holes, an idea which has received increased attention after the discovery in 2015 of gravitational waves from stellar-mass black holes. References Dark matter
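For reference, the perturbation equation named above is usually written as follows (a standard textbook form in the variables defined here, not a quotation from the 1974 paper):

\[
\frac{d^{2}\delta}{dy^{2}} + \frac{2+3y}{2y(1+y)}\,\frac{d\delta}{dy} - \frac{3}{2y(1+y)}\,\delta = 0 .
\]

Substituting $\delta = 1 + \tfrac{3}{2}y$ makes the last two terms combine as $\tfrac{3}{2}(2+3y) - 3\left(1+\tfrac{3}{2}y\right) = 0$ over the common denominator $2y(1+y)$, confirming it as the growing mode quoted above.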
Mészáros effect
[ "Physics", "Astronomy" ]
352
[ "Dark matter", "Unsolved problems in astronomy", "Concepts in astronomy", "Unsolved problems in physics", "Exotic matter", "Physics beyond the Standard Model", "Matter" ]
65,289,048
https://en.wikipedia.org/wiki/Potamology
Potamology (from the Greek ποταμός (potamós) – river, and λόγος (lógos) – science) is the study of rivers, a branch of hydrology. Its subjects of study are the hydrological processes of rivers, the morphometry of river basins, the structure of river networks; channel processes and the regime of river mouth areas; evaporation and infiltration of water in a river basin; the water, thermal and ice regimes of rivers; the sediment regime; the sources and types of river feeding; and various chemical and physical processes in rivers. Bibliography Lindeman, R. L., "The trophic-dynamic aspect of ecology", Ecology, 1942, XXIII, pp. 399–418. Williams, R. B., "Computer simulation of energy flow in Cedar Bog Lake, Minnesota, based on the classical studies of Lindeman", Systems analysis and simulation in ecology (edited by B. C. Patten), vol. I, New York 1971, pp. 543–582. Hydrology
Potamology
[ "Chemistry", "Engineering", "Environmental_science" ]
207
[ "Hydrology", "Hydrology stubs", "Environmental engineering" ]
65,289,853
https://en.wikipedia.org/wiki/Teachability%20Hypothesis
The Teachability Hypothesis was proposed by Manfred Pienemann. It was originally derived from Pienemann's Processability model. It proposes that learners will acquire second language (L2) features if what is being taught is relatively close to their stage in language development. Description The Teachability Hypothesis is based on earlier psycholinguistic research in second language acquisition by Meisel, Clahsen, and Pienemann (1981) and is reflected in Pienemann's Processability Theory. The hypothesis holds that some aspects of language are acquired in a sequence that follows developmental levels of language; Pienemann termed these features 'developmental'. This sequence reflects the natural stages that learners go through when learning a second language. Pienemann (1984) emphasizes that the teachability of L2 structures is subject to psychological constraints that are universally shared. Such developmental sequences have been observed in wh-questions, some grammatical morphemes, negation, possessive determiners, and relative clauses. Features that do not have a developmental level of acquisition and can be acquired at any point in time Pienemann called 'variational' features. Pienemann (1981) concludes that formal instruction needs to be directed towards the 'natural' process of second language acquisition. In his studies, Pienemann (1984, 1998) predicted that, following the natural order hypothesis, learners must pass through a set sequence of stages when acquiring language features. However, instruction is only effective if the learner's interlanguage is close to the step of acquiring that structure (Pienemann 1984, 1989, 1998). In addition, Pienemann (2013) argued that the natural order of acquisition cannot be overridden: instruction cannot make a learner skip a stage. This means that a learner who is classified at stage 2 for a specific language feature will not benefit from instruction that is directed at learners who are at stage 4, whereas learners who are at stage 3 for that feature may benefit from such instruction. The reasoning for this is based on the learner's readiness. Implications: Readiness A barrier that the Teachability Hypothesis identifies as preventing the natural development of language acquisition is 'readiness'. Second language learners will not develop and progress through the same stages at the same time. A learner's readiness refers to when a learner is able to move on to the next stage in the sequence of a particular language. The Teachability Hypothesis has been used by second language researchers to understand student readiness in acquiring specific linguistic abilities. Importance Second language education The Teachability Hypothesis provides reasoning for the varied rate at which second languages are acquired. This hypothesis allows educational professionals, such as second language instructors, to gain a sense of why their learners may or may not be succeeding as rapidly as their peers. It also documents the importance of teaching to a certain developmental level rather than to a standard level or to age. Educational professionals can apply Pienemann's (1988) conclusions about second language learning to their lessons by designing targeted instruction that is attentive to student readiness, so that the target learning can be successful.
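To make the stage constraint described above concrete, here is a toy sketch in Python (our own illustration of the stated rule, not Pienemann's formalism): instruction helps only when it targets the learner's immediately next developmental stage.

```python
def benefits_from_instruction(learner_stage: int, target_stage: int) -> bool:
    """Per the Teachability Hypothesis, stages cannot be skipped, so
    instruction aimed at `target_stage` helps only a learner whose
    interlanguage is one stage below it."""
    return target_stage == learner_stage + 1

# A stage-2 learner gains nothing from stage-4 instruction,
# while a stage-3 learner is ready for it:
assert not benefits_from_instruction(2, 4)
assert benefits_from_instruction(3, 4)
```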
Second language acquisition research The Teachability Hypothesis is important to the framework of psycholinguistic theories as it examines why learners' linguistic capabilities may not develop at the same rate as those of other learners. In addition, second language researchers have been studying issues around language pedagogy. A common issue on which the Teachability Hypothesis has provided an explanation is whether, and to what degree, instruction helps in second language acquisition. Second language acquisition researchers will often position themselves on a scale of the importance of instruction versus innate learning. There are four main positions: (1) the interface position, (2) the Variability Hypothesis, (3) the Weak Interface Position, and (4) the Teachability Hypothesis. Because the Teachability Hypothesis favours teaching according to natural development, it has supported second/foreign language teaching approaches such as the Learning-Centered approach. It has also informed thinking on classroom structure, instruction time, and use of the first language in the classroom. Through these perspectives on language acquisition, second language processing can be understood. Supporting research Comparing teaching approaches to the Teachability Hypothesis References Cognitive psychology
Teachability Hypothesis
[ "Biology" ]
878
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
73,867,030
https://en.wikipedia.org/wiki/HD%20124448
HD 124448, also called Popper's Star and V821 Centauri, is an extreme helium star in the Centaurus constellation. Discovered by astronomer Daniel Popper, this star has a spectral classification of B2-B3 and a radius of . Peter M. Corben et al. announced in 1972 that the star is a variable star. It was given its variable star designation, V821 Centauri, in 1981. References Centaurus B-type stars PV Telescopii variables 124448 Centauri, V821 069619
HD 124448
[ "Astronomy" ]
121
[ "Centaurus", "Constellations" ]
73,867,159
https://en.wikipedia.org/wiki/Elbio%20Dagotto
Elbio Rubén Dagotto is an Argentinian-American theoretical physicist and academic. He is a distinguished professor in the department of physics and astronomy at the University of Tennessee, Knoxville, and Distinguished Scientist in the Materials Science and Technology Division at the Oak Ridge National Laboratory. Dagotto is best known for using theoretical models and computational techniques to explore transition metal oxides, oxide interfaces, high-temperature superconductors, topological materials, quantum magnets, and nanoscale systems. He authored the book Nanoscale Phase Separation and Colossal Magnetoresistance, which focuses on transition metal oxides, particularly manganese oxides with the colossal magnetoresistance effect, and co-edited the book Multifunctional Oxide Heterostructures. Dagotto held appointments as a Member of the Solid State Sciences Committee at the National Academy of Sciences and as a Divisional Editor for Physical Review Letters. He is a Fellow of both the American Association for the Advancement of Science (AAAS) and the American Physical Society (APS), and has also been recognized as an Outstanding Referee by the APS and Europhysics Letters (EPL). Furthermore, he is the recipient of the 2023 David Adler Lectureship Award in the Field of Materials Physics and of the 2023 Alexander Prize of the University of Tennessee. Education and career Dagotto studied physics at the Balseiro Institute, Bariloche Atomic Centre, Bariloche, Argentina, where he received the title of Licenciado. Continuing at the Centro Atómico Bariloche, he received his PhD in the field of high energy physics, specifically in lattice gauge theories. He then moved as a postdoctoral researcher to the department of physics, University of Illinois at Urbana-Champaign, under the supervision of Eduardo Fradkin and John Kogut. His second postdoctoral appointment was at the Kavli Institute for Theoretical Physics, at the University of California, Santa Barbara, where he collaborated with Douglas James Scalapino, John Robert Schrieffer and Robert Sugar. Dagotto became assistant, associate and then full professor at the department of physics, Florida State University. There, he was associated with the National High Magnetic Field Laboratory, working in the theory group. He works in a Correlated Electron Group with Adriana Moreo, and has had a joint appointment between the University of Tennessee (UT), Knoxville, and Oak Ridge National Laboratory (ORNL) since 2004. Research Dagotto's research has primarily focused on strongly correlated electronic materials, and lately on quantum materials, where correlation and topological effects are intertwined. In the presence of strong correlation, the interactions between electrons play a crucial role and the one-electron approximation, used for example in semiconductors, is no longer valid. In this framework, he has worked on theories for many families of materials, such as high critical temperature superconductors and manganese oxides with colossal magnetoresistance. The overarching theme of his work is that correlated electrons must be considered in the broader context of complexity. As described by Philip W. Anderson in his publication "More Is Different", having simple fundamental interactions among particles does not imply the ability to reconstruct their collective properties.
Dagotto argued that in correlated electronic systems, similar emergence occurs, and these complex systems spontaneously form complicated states and self-organize in patterns impossible to predict by mere inspection of the simple electron-electron interactions involved. Because of its intrinsic difficulty, the use of computational techniques is crucial to study complexity and emergence in quantum materials. He has employed Monte Carlo, density matrix renormalization group, and Lanczos methods. Together with collaborators, he also developed new algorithms to study systems described by spin-fermion models, with a mixture of quantum and classical degrees of freedom, such as in the double exchange context used for materials in the central part of the 3d row of the periodic table. Scientific work In 1992, Dagotto, in collaboration with José Riera and Doug Scalapino, opened the field of ladder compounds, materials with atomic substructures containing two chains next to each other and with inter-chain coupling (along rungs) of magnitude comparable to that in the long direction (along legs). This research was the first to demonstrate that the transition from one chain to a full two-dimensional plane was not a smooth process simply involving the addition of one chain to another. Instead, it was revealed that even and odd numbers of chains (called legs due to the ladder-like geometry) belong to classes with quite different behavior. The even-leg ladders, with two legs being the most dramatic case, were theoretically predicted by Dagotto to display a spin gap, spin liquid properties, and tendencies toward superconductivity upon hole doping, all properties confirmed experimentally in materials of the family of copper-based high critical temperature superconductors. Even among the more recently discovered iron-based high critical temperature superconductors, the "123" materials with ladder geometry, such as BaFe2S3, display superconductivity under high pressure. Dagotto employed computational techniques to study model Hamiltonians for high critical temperature superconductors based on copper, thus reducing the uncertainty in the analysis of these models when employing other approximations, such as mean field or variational methods. In 1990, he and collaborators, and other groups independently, realized that the dominant attractive channel for Cooper pairs of holes in an antiferromagnetic background is the d_{x^2-y^2} channel. In 1990, he studied dynamical properties of the Hubbard model and t-J model computationally, addressing photoemission dispersions and quasiparticle weights. In 1998, Dagotto developed the Monte Carlo techniques that allowed for the first computational studies of spin-fermion models for manganites, in collaboration with Seiji Yunoki and Adriana Moreo. Employing these techniques, phase separation involving electronic degrees of freedom, dubbed "electronic phase separation", was discovered. The computational techniques developed by him and research collaborators unveiled the strong competition between a ferromagnetic metallic state and complex charge-orbital-spin ordered insulating states, providing the explanation for the colossal magnetoresistance effect in manganites. More recently, similar Monte Carlo techniques have been employed by him and collaborators to study properties of iron-based superconductors, revealing the role of the lattice in stabilizing the electronic nematic regime above the antiferromagnetic critical temperature.
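The Lanczos method mentioned above reduces a large Hermitian Hamiltonian to a small tridiagonal matrix whose extreme eigenvalues converge quickly to the true ones, which is why it is a workhorse of exact-diagonalization studies of lattice models. A minimal illustrative sketch in Python follows (our own toy version, not code from the work described here; full reorthogonalization, needed in practice, is omitted):

```python
import numpy as np

def lanczos(H, k):
    """k-step Lanczos tridiagonalization of a real symmetric matrix H."""
    n = H.shape[0]
    v = np.random.default_rng(0).standard_normal(n)
    v /= np.linalg.norm(v)
    v_prev = np.zeros(n)
    alphas, betas = [], []
    beta = 0.0
    for j in range(k):
        w = H @ v
        alpha = v @ w
        alphas.append(alpha)
        w -= alpha * v + beta * v_prev     # three-term recurrence
        beta = np.linalg.norm(w)
        if j == k - 1 or beta < 1e-12:     # done, or invariant subspace found
            break
        betas.append(beta)
        v_prev, v = v, w / beta
    # Small tridiagonal matrix whose spectrum approximates H's extremes.
    return np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)

# Example: estimate the lowest eigenvalue of a 200x200 symmetric matrix.
rng = np.random.default_rng(1)
H = rng.standard_normal((200, 200))
H = (H + H.T) / 2
print(np.linalg.eigvalsh(lanczos(H, 60))[0])  # close to the exact value below
print(np.linalg.eigvalsh(H)[0])
```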
In a highly cited 2005 publication, Dagotto argued that the electronic degree of freedom in transition metal oxides and related materials displays characteristics similar to those of soft matter, where complex patterns arise from deceptively simple interactions. In 2006, Dagotto and Ivan Sergienko developed a theory to understand the multiferroic properties of narrow-bandwidth perovskites and other oxides. Their spin arrangements break inversion symmetry, and this triggers ferroelectric properties, leading to multiferroics, which are materials with both magnetic and ferroelectric properties. He, along with Ivan Sergienko, Cengiz Sen, Silvia Picozzi and collaborators, also proposed magnetostriction as a mechanism for multiferroicity. Dagotto made several other contributions to theoretical condensed matter physics. Together with Pengcheng Dai and Jiangping Hu, in 2012 he was among the first to argue that the iron-based high critical temperature superconductors are not located in the weak Hubbard coupling limit. Instead, they are in the intermediate Hubbard coupling regime, thus requiring a combination of localized and itinerant degrees of freedom. In particular, iron selenides are an example of materials where electronic correlations and spin frustration cannot be ignored. With Julian Rincon, Jacek Herbrych and collaborators, employing the density matrix renormalization group, he computationally discovered "block" states in low-dimensional multi-orbital Hubbard models. Spin blocks are groups of spins that are aligned ferromagnetically and antiferromagnetically coupled among themselves, and they display exotic dynamical spin structure factors with a mixture of spin waves and optical modes. Among the related findings, Herbrych, Dagotto and collaborators revealed the existence of a spin spiral made out of blocks, a state never reported before. When this spiral one-dimensional state is placed over a two-dimensional superconducting plane, Majorana fermions develop along the chain via the proximity effect from the plane, and for this reason this chain-plane geometry has potential value in topological quantum computing. He, together with Narayan Mohanta and Satoshi Okamoto, also reported Majoranas in a two-dimensional three-layer geometry with a skyrmion crystal at the bottom, an electron gas in the middle, and a standard superconductor at the top with a carved one-dimensional channel. Within topology in one dimension, he, Nirav Patel, and collaborators proposed a fermionic two-orbital electronic model that becomes the S=1 Haldane chain at strong Hubbard coupling and has similarities with the AKLT state of spin systems. The proposed fermionic model has a spin gap and spin liquid properties, like the Haldane chain, and is quite different from the S=1/2 Heisenberg chain. Moreover, he and collaborators predicted superconductivity upon hole doping, similarly to what occurs in ladders, due to the existence of preformed spin-1/2 singlets in the ground state, as in a resonant valence bond state. Dagotto also contributed to theoretical aspects of oxide interfaces, where oxides are grown one over the other, creating interfaces where reconstructions of the spin, charge, orbital, and lattice degrees of freedom can occur. Together with Shuai Dong and collaborators, he showed that a superlattice made of insulating Mn-oxide components becomes globally metallic in the new geometry. He has also worked on skyrmions.
In the early stages of his career, he made contributions: to particle physics in the context of lattice gauge theories, to the interface between particle physics and condensed matter, and to frustrated spin systems. Personal life Dagotto is married to Adriana Moreo, another physicist with whom he has two children; they met as undergraduates at the Balseiro Institute. Awards and honors 1998 – Fellow, American Physical Society 2006 – Member, Solid State Sciences Committee of the National Academy of Sciences 2008 – Outstanding Referee, American Physical Society (APS) 2010 – Fellow, American Association for the Advancement of Science (AAAS) 2012 – Outstanding Referee, Europhysics Letters (EPL) 2019, 2021, 2023 – Teacher of the year award, University of Tennessee 2023 – David Adler Lectureship Award in the Field of Materials Physics, American Physical Society with citation "For pioneering work on the theoretical framework of correlated electron systems and describing their importance through elegant written and oral communications." 2023 – Alexander Prize, University of Tennessee Bibliography Books Nanoscale Phase Separation and Colossal Magnetoresistance (2003) ISBN 9783540432456 Multifunctional Oxide Heterostructures (2012) ISBN 9780199584123 Selected articles Dagotto, E., Riera, J., & Scalapino, D. (1992). Superconductivity in ladders and coupled planes. Physical Review B, 45(10), 5744. Barnes, T., Dagotto, E., Riera, J., & Swanson, E. S. (1993). Excitation spectrum of Heisenberg spin ladders. Physical Review B, 47(6), 3196. Dagotto, E. (1994). Correlated electrons in high-temperature superconductors. Reviews of Modern Physics, 66(3), 763. Dagotto, E., & Rice, T. M. (1996). Surprises on the way from one-to two-dimensional quantum magnets: The ladder materials. Science, 271(5249), 618–623. Yunoki, S., Hu, J., Malvezzi, A. L., Moreo, A., Furukawa, N., & Dagotto, E. (1998). Phase separation in electronic models for manganites. Physical Review Letters, 80(4), 845. Moreo, A., Yunoki, S., & Dagotto, E. (1999). Phase separation scenario for manganese oxides and related materials. Science, 283(5410), 2034–2040. Dagotto, E., Hotta, T., & Moreo, A. (2001). Colossal magnetoresistant materials: the key role of phase separation. Physics Reports, 344(1–3), 1–153. Dagotto, E. (2005). Complexity in strongly correlated electronic systems. Science, 309(5732), 257–262. Sergienko, I. A., & Dagotto, E. (2006). Role of the Dzyaloshinskii-Moriya interaction in multiferroic perovskites. Physical Review B, 73(9), 094434. Dai, P., Hu, J., & Dagotto, E. (2012). Magnetism and its microscopic origin in iron-based high-temperature superconductors. Nature Physics, 8(10), 709–718. References University of Tennessee faculty Living people 21st-century American physicists 21st-century American academics Theoretical physicists Year of birth missing (living people) Fellows of the American Physical Society
Elbio Dagotto
[ "Physics" ]
2,799
[ "Theoretical physics", "Theoretical physicists" ]
73,868,948
https://en.wikipedia.org/wiki/Hay%20meadow
A hay meadow is an area of land set aside for the production of hay. In Britain hay meadows are typically meadows with high botanical diversity, supporting a diverse assemblage of organisms ranging from soil microbes, fungi and arthropods (including many insects) through to small mammals such as voles and their predators, and up to insectivorous birds and bats. History Up until the turn of the 20th century, most farms in Britain were relatively small and each farm relied on the power of horses for transport and traction, including ploughing. Even in the towns and cities, many horses were still in use pulling carriages and carts and delivering milk and bread to the door, and pit ponies were in widespread use in all the coal mining regions. The onset of war in 1914 required many horses and young men to be deployed in the European battlefields, many of whom never returned. This pattern was repeated in 1939. The two world wars drove enormous technological strides in mechanised forms of transport, which were built on to provide oil-powered farm equipment, including the ubiquitous tractor. During the same decades, British governments were strongly encouraging the population to grow more food, especially at times when Atlantic convoys of food from the Americas were being lost to enemy torpedoes. As a consequence of all these pressures, British farms became steadily larger and abandoned the use of horses in favour of oil-fuelled farm machinery. Without the need to feed horses, there was no apparent need to maintain hay meadows, and most were ploughed up and re-sown with fodder crops such as monoculture grass for silage or brassicas, or turned over to direct food production such as cereal crops, potatoes or oilseed rape. Types Northern hay meadows Northern hay meadows are largely restricted to the northern counties of England, including Northumberland, County Durham and Yorkshire, with a few in the Scottish border counties. Water meadows Some pastures close to rivers have traditionally been managed as water meadows. These occur on land that either floods naturally in the wintertime, such as that on the River Thames around Oxford, or is deliberately flooded using sluices, such as that on the Somerset Levels. Flooding deposits new nutrient-rich sediment on the land but also shifts the plant community towards those plants that are tolerant of periodic inundation. Lowland meadows and pastures Probably the most frequently encountered, lowland meadows are often relics that have been retained since horses were last used on farms. Their species richness and diversity depend on their ongoing management. This involves winter grazing, often with sheep, with the land then left until mid-summer when the hay crop is taken. Once growth has re-established, such meadows are often grazed by cattle. The lack of any artificial fertilisers or pesticides allows a very diverse flora to establish in which no one species dominates. The presence of hemi-parasitic plants such as yellow rattle and eyebright assists in controlling overgrowth of grasses. Orchids are common components of these meadow communities; they rely on fungal mycelium in the earth both for the germination of their seeds and as part of a continuing commensal relationship. References Grasslands Meadows
Hay meadow
[ "Biology" ]
634
[ "Grasslands", "Ecosystems" ]
73,871,341
https://en.wikipedia.org/wiki/TM44%20inspections
TM44 inspections, or Air Conditioning Energy Assessments (ACEA), are required by law in the United Kingdom under the Energy Performance of Buildings Regulations 2007, amended in 2011 and 2012. They are designed to improve energy efficiency and reduce the carbon dioxide emissions of air conditioning systems. The name "TM44" refers to the technical memorandum TM44, which provides guidelines for these inspections. Purpose TM44 inspections are aimed at assessing the efficiency of air conditioning systems and providing building owners and operators with advice on how to improve efficiency, reduce energy consumption and decrease operating costs. The inspections also look for any faults in the system and check that the system is properly maintained. Inspection frequency and requirements Air conditioning systems with a total cooling capacity of more than 12 kW must undergo an inspection at least once every five years. This includes individual units which combined exceed 12 kW. The first inspection must occur within five years of the installation of the system. Subsequent inspections must be conducted by an accredited energy assessor within five years of the previous inspection. The Energy Performance of Buildings Directive (EPBD) states that an inspection report should include: The current efficiency of the equipment Suggestions for improving the efficiency of the equipment The adequacy of equipment maintenance Any faults in the systems and suggested actions The size of the air conditioning system compared to the cooling requirements of the building Enforcement The enforcement of the TM44 inspection is undertaken by local authorities. Failure to commission, keep or provide an updated TM44 inspection report when required by an enforcement authority could lead to a fine of £300 per unit. There is no legal requirement for organisations to act on or implement any of the recommendations made in the report. However, organisations looking to make their systems more energy efficient, and to reduce their carbon emissions, may consider making changes to the way their systems run. Impact TM44 inspections aim to reduce the energy consumption of air conditioning units, thereby reducing their impact on the environment and cutting costs for businesses. By enforcing these inspections, the UK government encourages businesses to be more energy efficient and promotes a more sustainable approach to air conditioning. References External links The Energy Performance of Buildings (Certificates and Inspections) (England and Wales) Regulations 2007 Guide to air conditioning inspections for buildings CIBSE Knowledge Portal: TM44 Energy Performance of Buildings Directive TM44 Inspections Heating, ventilation, and air conditioning Regulatory compliance Regulation in the United Kingdom Health and safety in the United Kingdom Energy efficiency policy Emissions reduction
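As a rough illustration of the two numeric rules above (the 12 kW combined-capacity trigger and the five-year cycle), here is a small Python sketch; the function names are our own shorthand, not terms from the regulations:

```python
from datetime import date

def tm44_required(unit_capacities_kw):
    """The inspection duty applies when the combined rated cooling
    output exceeds 12 kW, even if no single unit does."""
    return sum(unit_capacities_kw) > 12

def next_inspection_due(last_inspection):
    """Each inspection must take place within five years of the last.
    (Naive year arithmetic; a 29 February date would need special care.)"""
    return last_inspection.replace(year=last_inspection.year + 5)

print(tm44_required([5.0, 4.5, 3.0]))          # True: 12.5 kW combined
print(next_inspection_due(date(2021, 6, 1)))   # 2026-06-01
```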
TM44 inspections
[ "Chemistry" ]
503
[ "Greenhouse gases", "Emissions reduction" ]
73,871,528
https://en.wikipedia.org/wiki/Danuglipron
Danuglipron is a small-molecule GLP-1 agonist developed by Pfizer that, in an oral formulation, is under investigation as a therapy for diabetes mellitus. Initial results from a randomized controlled trial indicate that it reduced weight and improved diabetic control. The most commonly reported adverse events were nausea, diarrhea, and vomiting. See also Lotiglipron Orforglipron References Experimental drugs GLP-1 receptor agonists Experimental diabetes drugs Nitriles Fluoroarenes Pyridines Ethers Piperidines Oxetanes Benzimidazoles Carboxylic acids
Danuglipron
[ "Chemistry" ]
131
[ "Carboxylic acids", "Drug safety", "Functional groups", "Organic compounds", "Ethers", "Nitriles", "Abandoned drugs" ]
73,871,740
https://en.wikipedia.org/wiki/List%20of%20spacetimes
This is a list of well-known spacetimes in general relativity. Where the metric tensor is given, a particular choice of coordinates is used, but there are often other useful choices of coordinate available. In general relativity, spacetime is described mathematically by a metric tensor (on a smooth manifold), conventionally denoted $g$ or $ds^2$. This metric is sufficient to formulate the vacuum Einstein field equations. If matter is included, described by a stress-energy tensor, then one has the Einstein field equations with matter. On certain regions of spacetime (and possibly the entire spacetime) one can describe the points by a set of coordinates. In this case, the metric can be written down in terms of the coordinates, or more precisely, the coordinate one-forms and coordinates. During the course of the development of the field of general relativity, a number of explicit metrics have been found which satisfy the Einstein field equations, a number of which are collected here. These model various phenomena in general relativity, such as possibly charged or rotating black holes and cosmological models of the universe. On the other hand, some of the spacetimes are more for academic or pedagogical interest rather than modelling physical phenomena. Maximally symmetric spacetimes These are spacetimes which admit the maximum number of isometries or Killing vector fields for a given dimension, and each of these can be formulated in an arbitrary number of dimensions. Minkowski spacetime $ds^2 = -dt^2 + dx^2 + dy^2 + dz^2$ de Sitter spacetime $ds^2 = -dt^2 + \alpha^2 \sinh^2(t/\alpha)\, dH_3^2$ where $\alpha$ is real and $dH_3^2$ is the standard hyperbolic metric. Anti-de Sitter spacetime $ds^2 = \alpha^2\left(-\cosh^2\rho\, dt^2 + d\rho^2 + \sinh^2\rho\, d\Omega^2\right)$ Black hole spacetimes These spacetimes model black holes. The Schwarzschild and Reissner–Nordström black holes are spherically symmetric, while Schwarzschild and Kerr are electrically neutral. Schwarzschild spacetime $ds^2 = -\left(1 - \frac{2M}{r}\right)dt^2 + \left(1 - \frac{2M}{r}\right)^{-1}dr^2 + r^2\, d\Omega^2$ where $d\Omega^2$ is the round metric on the sphere, and $M$ is a positive, real parameter. Kruskal spacetime (Maximally extended Schwarzschild spacetime) $ds^2 = \frac{32M^3}{r}\,e^{-r/2M}\left(-dT^2 + dX^2\right) + r^2\, d\Omega^2$ where $r = r(T,X)$ is defined implicitly by $T^2 - X^2 = \left(1 - \frac{r}{2M}\right)e^{r/2M}$. Reissner–Nordström spacetime $ds^2 = -\left(1 - \frac{2M}{r} + \frac{Q^2}{r^2}\right)dt^2 + \left(1 - \frac{2M}{r} + \frac{Q^2}{r^2}\right)^{-1}dr^2 + r^2\, d\Omega^2$ Kerr spacetime $ds^2 = -\frac{\Delta}{\rho^2}\left(dt - a\sin^2\theta\, d\phi\right)^2 + \frac{\sin^2\theta}{\rho^2}\left((r^2 + a^2)\,d\phi - a\, dt\right)^2 + \frac{\rho^2}{\Delta}\,dr^2 + \rho^2\, d\theta^2$ with $\Delta = r^2 - 2Mr + a^2$ and $\rho^2 = r^2 + a^2\cos^2\theta$. Kerr–Newman spacetime the same form with $\Delta = r^2 - 2Mr + a^2 + Q^2$. See Boyer–Lindquist coordinates for details on the terms appearing in this formula. Cosmological spacetimes FLRW spacetime $ds^2 = -dt^2 + a(t)^2\left(\frac{dr^2}{1 - kr^2} + r^2\, d\Omega^2\right)$, where $k$ is often restricted to take values in the set $\{-1, 0, +1\}$. Lemaître–Tolman spacetime Gravitational wave spacetimes pp-wave spacetime $ds^2 = H(u, x, y)\, du^2 + 2\, du\, dv + dx^2 + dy^2$ Other Spherically symmetric spacetime Asymptotically flat spacetime Non-relativistic spacetime Static spacetime Einstein static universe spacetime Alcubierre spacetime Ellis wormhole spacetime Gödel spacetime Taub–NUT spacetime Kasner spacetime Mixmaster spacetime See also Spacetime symmetries Quantum spacetime References Sources General Relativity, R. Wald, The University of Chicago Press, 1984, Spacetime and Geometry, S. Carroll, Cambridge University Press, 2019, General relativity
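As a quick sanity check on the Schwarzschild entry (a sketch of our own, not part of the original list), one can verify symbolically with SymPy that its Ricci tensor vanishes, i.e. that it solves the vacuum Einstein field equations:

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)   # metric in (t, r, theta, phi)
ginv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c])
                      - sp.diff(g[b, c], x[d]))
        for d in range(4))

Gamma = [[[sp.simplify(christoffel(a, b, c)) for c in range(4)]
          for b in range(4)] for a in range(4)]

def ricci(b, c):
    # R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
    #        + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
    return sp.simplify(sum(
        sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
        + sum(Gamma[a][a][d]*Gamma[d][b][c]
              - Gamma[a][c][d]*Gamma[d][b][a] for d in range(4))
        for a in range(4)))

assert all(ricci(b, c) == 0 for b in range(4) for c in range(4))
print("Ricci tensor vanishes: Schwarzschild is a vacuum solution.")
```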
List of spacetimes
[ "Physics" ]
585
[ "General relativity", "Theory of relativity" ]
73,872,084
https://en.wikipedia.org/wiki/Department%20of%20Chemistry%2C%20University%20of%20York
The Department of Chemistry at the University of York opened in 1965, with Sir Richard Norman being the founding professor of the department. The department has since grown to over 820 students and provides both undergraduate and postgraduate courses in chemistry and other related fields, with the current head of department being Professor Caroline Dessent. Research Chemistry Research Centres Centre for Hyperpolarisation in Magnetic Resonance The Centre for Hyperpolarisation in Magnetic Resonance is an interdisciplinary research centre jointly run with the Department of Psychology, with a focus on the development of techniques for nuclear magnetic resonance and magnetic resonance imaging. The centre is primarily located on York Science Park, housing high-resolution NMR machines and a 7 T pre-clinical MRI scanner. Recent research has focused on nitrogen-15 hyperpolarized nuclear magnetic resonance to study nitrogen-cycle synthons. The centre has also been working on research involving the tracking of anticancer agents for use in MRI. Centre of Excellence in Mass Spectrometry Created in 2008, the centre is a joint venture with the Department of Biology and is currently based in the York Science Park. Research areas are very broad and range from therapeutic protein discovery to supporting archaeological research. Green Chemistry Centre of Excellence The research centre is associated with both the Biorenewables Development Centre and the Centre for Novel Agricultural Products and focuses on making changes to help promote a low-carbon and bio-based economy. Recent research has led to the exploration of harvesting fog as a renewable source of water and the development of absorption techniques to reduce the impact of pollutants on the environment. Wolfson Atmospheric Chemistry Laboratories Established in 2013, the research group studies and develops technology for atmospheric science, with key areas of focus including air pollution as well as ozone-depleting substances. The group also runs the Cape Verde Atmospheric Observatory, studying the effects that nitrogen oxides and various other pollutants are having on the environment. Facilities Developments Since 2010 the Department of Chemistry has undergone £29 million of renovations, with the development of phase 2 of the Dorothy Hodgkin research building. The three-storey building, costing £9.4 million, provides space for 100 researchers with research interests ranging from medicinal chemistry to solar energy. A secondary development consisting of chemistry F block was completed in 2014. The site provides new undergraduate facilities and social space. The top floor of the research building also houses the Green Chemistry Centre of Excellence. The Biorenewables Development Centre and the Centre for Hyperpolarisation in Magnetic Resonance have also recently undergone developments. Associated departments and research centres Biochemistry York JEOL Nanocentre York Structural Biology Laboratory (Department of Biology, University of York) Archaeological and Paleoenvironmental Chemistry References Departments of the University of York 1965 establishments in England Chemistry laboratories
Department of Chemistry, University of York
[ "Chemistry" ]
539
[ "Chemistry laboratories" ]
73,872,213
https://en.wikipedia.org/wiki/Wireless%20clicker
A wireless clicker or wireless presenter is a handheld remote used to control a computer during a presentation by emulating mouse clicks and a few keys of a PC keyboard; it usually incorporates a laser pointer to pinpoint screen details. It is mainly used for presentations with a video projector or a big TV screen (for example a computer presentation created with PowerPoint, Impress or VCN ExecuVision), allowing the presenter to move freely in front of the audience. PC interface It consists of a transmitter similar to a remote control and a small receiver, usually connected to a USB port on the computer, which detects it as if it were a mouse. Control signals are transmitted by radio (for example 2.4 GHz or Bluetooth) or, in some models, by infrared. Usually no additional programs are needed on the computer. The clicker typically does not use any special communication protocol with the presentation program, but instead emulates simple keyboard inputs (arrow keys, the F5 function key, etc.) and some form of mouse interface (buttons, scroll-wheel movement, etc.). The range of devices operated by radio is usually specified as between 10 and 15 meters; at the limit of coverage, the orientation of the pointer, and therefore of the antenna printed on the PCB, may be critical. An infrared connection can only be used with a clear line of sight to the PC. The part connected to the PC is powered by the USB port, and the remote control is usually powered by small batteries such as button cells, AA batteries or AAA batteries, although there are some rechargeable models. Included elements Pen-drive and timer Among other additional elements, it may include built-in flash memory in the receiver, so that there is no need to plug in an additional flash drive to carry the presentation files. Some models include a timer with a built-in vibration alarm to notify the speaker when time is up. There are also clickers that completely emulate a mouse: with them one can move the mouse cursor around the screen and thus control programs (e.g. Logitech models). Conversely, some wireless mice are also equipped with presentation-specific features. Laser pointer Apart from the elements used to control the presentation, wireless presenters often contain a laser pointer. A distinctive feature among the models offered on the market is the color of the laser. For presentations, green is more visible than red. Benefits Using a wireless clicker helps the presenter to move freely: instead of being obliged to stay next to the computer, the presenter can get closer and keep in touch with the audience, watching the presentation together with them and using the built-in laser pointer to emphasize specific points in the dialogue with the participants. This makes interactive learning possible and minimizes some negative characteristics of face-to-face teaching. In fact, it works especially well when the presentation slides consist only of images used to provide an emotional background, illustrating the actual presentation. Another beneficial effect of the clicker remote is that it is "hand-squeezable" as an anti-stress object. The possibility of using the handset in a stressful situation may act like an anti-stress ball, calming some people who otherwise tend to feel nervous or stressed during presentations when speaking in front of a large audience. One may recall The Caine Mutiny: when Captain Queeg is questioned, he tries to calm down by rolling a pair of steel Baoding balls in his hand.
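As a sketch of the key-emulation idea described under "PC interface" (an illustration of our own; the button names and the mapping are invented, not taken from any particular product):

```python
# The receiver enumerates as an ordinary HID keyboard/mouse, so each
# physical button is simply translated into a standard key event that the
# presentation program already understands.
BUTTON_TO_KEY = {
    "next":       "RIGHT_ARROW",   # advance one slide
    "previous":   "LEFT_ARROW",    # go back one slide
    "start_show": "F5",            # start the slideshow (PowerPoint/Impress)
    "blank":      "B",             # blank the screen
}

def translate(button: str) -> str:
    """Map a clicker button press to the emulated keyboard key."""
    try:
        return BUTTON_TO_KEY[button]
    except KeyError:
        raise ValueError(f"unmapped button: {button!r}")

for press in ["start_show", "next", "next", "previous"]:
    print(press, "->", translate(press))
```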
References Bibliography External links Public speaking
Wireless clicker
[ "Technology" ]
730
[ "Multimedia", "Presentation" ]
73,872,282
https://en.wikipedia.org/wiki/Platygloeaceae
The Platygloeaceae are a family of fungi in the class Pucciniomycetes. Species in the family have auricularioid basidia (tubular with lateral septa) and are typically plant parasites on angiosperms, though Platygloea species appear to be saprotrophic. References Pucciniomycotina Basidiomycota families
Platygloeaceae
[ "Biology" ]
83
[ "Fungus stubs", "Fungi" ]
73,873,037
https://en.wikipedia.org/wiki/NGC%202078
NGC 2078 is an emission nebula with an apparent magnitude of 10.9, located in the constellation Dorado. It was discovered on September 24, 1826, by James Dunlop. References Emission nebulae Dorado 2078
NGC 2078
[ "Astronomy" ]
46
[ "Dorado", "Constellations" ]
73,873,864
https://en.wikipedia.org/wiki/The%20Innovation%20Delusion
The Innovation Delusion: How Our Obsession with the New Has Disrupted the Work that Matters Most is a book by historians of technology Lee Vinsel and Andrew L. Russell that was published in 2020. It explores how the ideology of change for its own sake has proven a disaster. It tells the story of how society has devalued the work that underpins modern life, and argues for shifting focus away from the pursuit of growth at all costs and back toward neglected activities like maintenance, care, and upkeep. Overview The book begins with Vinsel and Russell's criticism of innovation today, more specifically "innovation-speak". They make an effort to distinguish "innovation" from "innovation-speak", describing "innovation-speak" as a buzzword that tech companies use to try to convince consumers to buy technology to rely on, rather than technology we use and need. They write "Unlike actual innovation, which is tangible, measurable, and much less common, innovation-speak is a sales pitch about a future that doesn't exist yet." They list examples of innovation such as electric power, reinforced concrete, and synthetic materials like Teflon and neoprene. Vinsel and Russell spend the rest of the book emphasizing that, in order to have a prosperous society, we need to make sure all citizens have access to basic goods, including modern infrastructure, resources and care. They attempt to raise awareness about the need for maintenance, repair, and care of what we already have in this world, both living and inanimate. Throughout the book the authors lay out several cases where lack of maintenance, repair and care has caused immeasurable harm to human life. One example they give is the FIU pedestrian bridge, which stood until 2018. The bridge, acclaimed for its innovation, collapsed onto the road beneath it. Critical reception Sara Holder, critic for the Library Journal, noted that "Vinsel and Russell's observations make a compelling counterpoint to the innovation mania that has dominated this decade". Kirkus Reviews describes it as "A refreshing, cogently argued book that will hopefully make the rounds at Facebook, Google, Apple et al." References Further reading 2020 non-fiction books Philosophy of technology
The Innovation Delusion
[ "Technology" ]
450
[ "Philosophy of technology", "Science and technology studies" ]
73,876,185
https://en.wikipedia.org/wiki/Magicplan
Magicplan is a mobile application developed by Sensopia Inc. The app enables users to create 2D and 3D floor plans in real time by using the camera of a smartphone or tablet alongside augmented reality (AR) technology. Magicplan is currently available for iOS and Android devices and has been used by professionals in various industries, such as architecture, real estate, and home improvement. History Magicplan was developed in 2011 by Sensopia Inc in Montreal, Canada. In 2016, Sensopia merged with B&O, a German conglomerate in the housing sector. Since then, Magicplan has held offices in Montreal and Munich. The first version of Magicplan was launched in 2011 for iOS devices, and the Android version followed in 2013. The app gained widespread recognition and was included in Apple's 2017 Best Apps of the Year list. Features Magicplan combines AR technology, through Apple's ARKit, with deep learning. Users can capture room dimensions, add objects such as furniture and appliances, and annotate the plan with additional information. The app includes various features, including: 2D and 3D floor plan generation Automatic room and object detection and measurement Object library with customizable items Integration with cloud storage services Exporting in multiple file formats (e.g., PDF, JPG, DXF, OBJ, IFC and CSV) Field reports including photos, 360° images, notes, custom forms, and markups Take-off and work estimates to calculate pricing Collaboration tools for sharing and editing floor plans in real time Applications Magicplan is used by professionals and individuals for various purposes, such as: Real estate agents and brokers for marketing properties Architects and interior designers for project planning Construction and remodeling contractors to stay organized and prepare estimates Homeowners and renters for space planning and renovation DIY projects Claims adjusters and restoration contractors for estimating property damage Home inspectors for creating inspection reports Facility managers for managing building assets and maintenance tasks External links Official website References Mobile applications Computer-aided design Augmented reality applications IOS software Android (operating system) software
Magicplan
[ "Engineering" ]
427
[ "Computer-aided design", "Design engineering" ]
73,876,743
https://en.wikipedia.org/wiki/Anaxam
ANAXAM stands for "Analytics with Neutrons And X-rays for Advanced Manufacturing"; it is a knowledge and technology transfer centre in Switzerland. ANAXAM is part of the federal government's "Digitalisation" action plan and a member of the Advanced Manufacturing Technology Transfer Centers (AM-TTC) association. ANAXAM is located on the Park Innovaare campus in the canton of Aargau. It is a non-profit organisation that aims to provide industry with access to advanced analytical methods originally developed for basic research. ANAXAM works with project partners on the basis of "public-private partnerships". The centre provides industry with materials analysis using neutron and synchrotron radiation (X-rays) in the field of non-destructive material testing. The technologies offered support companies in the optimisation of processes and products as well as in quality control and quality assurance. The project partners come from the raw materials industry, the metal industry, medical technology, the pharmaceutical industry and the automotive industry, among others. They include large companies as well as SMEs, and regional as well as national and international companies. Amongst others, ANAXAM uses the large-scale research facilities of the Paul Scherrer Institute (PSI) – particularly the Swiss Spallation Neutron Source (SINQ) and the Swiss Light Source (SLS). ANAXAM is located in the immediate vicinity of the Paul Scherrer Institute on the Park Innovaare campus in Villigen, Switzerland. Structure The knowledge and technology transfer centre is part of Switzerland's 'Digitalisation' action plan. ANAXAM is a part of the innovation landscape in the canton of Aargau. References External links ANAXAM PSI FHNW Swiss Nanoscience Institute Canton of Aargau ANAXAM-Business Report 2021 (German) ANAXAM brochure Materials science Technology transfer Particle physics facilities Accelerator physics Synchrotron radiation Industrial processes
Anaxam
[ "Physics", "Materials_science", "Engineering" ]
397
[ "Applied and interdisciplinary physics", "Materials science", "Experimental physics", "nan", "Accelerator physics" ]
73,877,829
https://en.wikipedia.org/wiki/V701%20Coronae%20Australis
V701 Coronae Australis (HD 176723; HR 7197; 40 G. Coronae Australis), or simply V701 CrA, is a solitary, yellowish-white hued variable star located in the southern constellation Corona Australis. It has an average apparent magnitude of 5.72, making it faintly visible to the naked eye under ideal conditions. The object is located relatively close at a distance of 213 light-years based on Gaia DR3 parallax measurements, and it is currently receding with a poorly constrained heliocentric radial velocity of . At its current distance, V701 CrA's brightness is diminished by a quarter of a magnitude due to extinction and it has an absolute magnitude of +1.55. The object was first suspected to be variable in 1990. The variations matched those of δ Scuti variables. Three years later, it was confirmed to be variable and was given the variable star designation V701 Coronae Australis. It ranges from magnitude 5.69 to 5.73 within 3.25 hours. V701 CrA has a stellar classification of F2 III/IV, indicating that it is an evolved F-type star with the blended luminosity class of a subgiant and giant star. It has also been given a class of F0 IIIn, indicating broad or nebulous absorption lines due to rapid rotation. It has 1.83 times the mass of the Sun and a slightly enlarged radius of . It radiates 17.5 times the luminosity of the Sun from its photosphere at an effective temperature of . The star spins rapidly with a projected rotational velocity of , which causes it to have an equatorial radius 26% larger than its polar radius. It is metal deficient with an iron abundance 62% that of the Sun ([Fe/H] = −0.21) and it is estimated to be 1.25 billion years old. V701 CrA was considered to be a chemically peculiar star and was given a class of FpSr. Its peculiarity is now considered doubtful. References F-type subgiants F-type giants Delta Scuti variables Corona Australis Coronae Australis, 40 Coronae Australis, V701 CD-38 13300 176723 093552 7197
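The quoted absolute magnitude follows from the standard distance-modulus relation; a quick sketch of the arithmetic (our own check, using the rounded figures above, so it reproduces the quoted value only approximately):

```python
import math

LY_PER_PC = 3.26156  # light-years per parsec

def absolute_magnitude(m, distance_ly, extinction=0.0):
    """Distance-modulus relation: M = m - A - 5*log10(d_pc / 10)."""
    d_pc = distance_ly / LY_PER_PC
    return m - extinction - 5 * math.log10(d_pc / 10)

# m = 5.72 at 213 ly with ~0.25 mag of extinction gives M ~ +1.4, in rough
# agreement with the quoted +1.55 given the rounded inputs.
print(round(absolute_magnitude(5.72, 213, extinction=0.25), 2))
```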
V701 Coronae Australis
[ "Astronomy" ]
486
[ "Corona Australis", "Constellations" ]
73,877,873
https://en.wikipedia.org/wiki/A%20House%20on%20Water
A House on Water is a book that explores the social and psychological impacts of temporary marriage and religious concubinage in Iran, researched and coordinated by Kameel Ahmady, a British-Iranian anthropologist and social researcher. The book is based on a research project that Ahmady and his team conducted between 2017 and 2018 in three major cities of Iran: Tehran, Isfahan, and Mashhad. The book aims to provide a historical overview of temporary marriage in Iran and the world and to examine its prevalence among different social groups and its consequences for those who choose this type of marriage. Background While Ahmady was researching female genital mutilation (FGM) and child marriage in 2016, he noticed a connection between the age of people and their lived experiences. He believes that when he was studying child marriage and FGM, he realized that most of those who married in childhood were likely to have temporary or religious concubinage marriages and that there was a link between the two phenomena. Therefore, Ahmady and his colleagues decided to investigate the hows and whys of temporary marriage in Iran after completing their research on child marriage. Content A House on Water presents the findings of Ahmady's research on temporary marriage and religious concubinage in Iran. He and his colleagues used different methods to collect data from people in Tehran, Mashhad, and Isfahan. They discovered that this phenomenon was driven by the desire for pleasure and the ease of child marriage, which had harmful consequences for women's reputations and men's view of permanent marriage. Ahmady criticizes Iran's laws on religious concubinage, which allow early marriage and cause problems for young girls and boys, but have been overlooked and have contributed to child marriage. Ahmady and his team propose some ways to make temporary marriage less socially and personally harmful. The solutions include such measures as increasing the minimum age of marriage for girls and boys to 18 years old to break the connection between child marriage and temporary marriage, ensuring free education for all children, helping divorced and widowed women, and educating people about the rights and duties of temporary marriage. Republication Shiraz Publishing House originally published A House on Water in 2017. In 2021, Avaye Buf reprinted the book in Denmark and released electronic and audiobook editions, bringing the work to new audiences and formats. The English translation of A House on Water was first published in 2019 by Mehri Publishing House in London. Lambert Publishing House released a German edition in 2022, which included updated research and statistics on child marriage and temporary marriage. In 2023, Daneshfar Publishing House published a Kurdish translation in Erbil, Iraqi Kurdistan. Through multiple languages and editions, the book has reached wider audiences and incorporated the author's latest findings on key social issues. References 2022 non-fiction books English-language non-fiction books Sociobiology Iranian books Temporary marriages Anthropology books Books about marriage
A House on Water
[ "Biology" ]
600
[ "Behavioural sciences", "Behavior", "Sociobiology" ]
73,879,319
https://en.wikipedia.org/wiki/2023%20heat%20waves
A number of heat waves began across parts of the Northern Hemisphere in April 2023, many of which continued for months. Various heat records were broken, with July being the hottest month ever recorded. Scientists have attributed the heat waves to man-made climate change. Another cause is the El Niño phenomenon, which began to develop in 2023; however, recent findings show that climate change is exacerbating the strength of El Niño. The heatwaves caused severe damage in areas such as the western United States, southern Europe, and parts of Asia. The abnormal temperatures led to a "very extreme" likelihood of wildfires, according to the Fire Weather Index. The heatwaves also occurred alongside some unusually heavy flooding. In response to the heatwaves, some leaders called for greater action to stop climate change. U.S. President Joe Biden took measures to protect the population from extreme heat. Background Heat waves are one of the deadliest hazards, and in line with IPCC predictions, their frequency and magnitude are rising due to man-made climate change. The July heat wave in Southern Europe and North America would have been virtually impossible, and the heat wave in China would have been a 1-in-250-year event, without climate change; due to climate change, such events are now common. July 2023 was the hottest July on Earth in the last 120,000 years and, by a wide margin, the hottest July since the beginning of temperature measurement. During each day in July 2023, two billion people experienced heat conditions made at least three times more likely by climate change, and 6.5 billion people experienced this impact on at least one day in the month. Another cause is the El Niño phenomenon, which began to develop in 2023. El Niño begins when parts of the Pacific Ocean become warmer than average, and it generally causes a rise in the global temperature as it "is moving some of the energy up from depth and dumping it into the atmosphere". However, recent findings show that climate change is exacerbating the strength of El Niño: it is increasing the variability of the El Niño–Southern Oscillation, creating both stronger El Niño and La Niña events. Climate change may also cause changes in the jet streams that probably contributed to the heat waves. Warming in certain Arctic regions makes the jet stream weaker and wavier, causing different weather patterns to stay longer over the same place. International 2023 was the hottest year on record, 1.48 °C warmer than the pre-industrial level. The world breached the Paris Agreement 1.5 °C warming mark for a record number of days. From January to September, the global mean temperature was 1.40 °C higher than the pre-industrial average (1850–1900). January 2023 was the seventh warmest on record – 0.25 °C warmer than normal but 0.33 °C cooler than January 2020. In July, the global average temperature was 17.32 °C (63.17 °F). Oceans Above-average temperatures in the northeastern South Pacific were recorded in March 2023. The average sea temperature of the North Atlantic Ocean set a record on 5 March, exceeding the previous record from 2020 by 0.1 °C. On 5 June, the recorded temperature surpassed a record set in 2010 by 0.1 °C. On 1 August 2023, the average sea surface temperature reached the highest level ever recorded. In September, the sea ice in Antarctica was far below any previous recorded winter level.
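The attribution language above ("virtually impossible without climate change", "a 1-in-250-year event") comes down to comparing an event's probability in today's climate with its probability in a counterfactual climate without human influence. A minimal sketch of that arithmetic follows; the probabilities are illustrative placeholders, not figures from the cited studies.

```python
# Illustrative attribution arithmetic: the "probability ratio" compares an
# event's likelihood in the factual (current) climate with a counterfactual
# climate without human warming. Placeholder values, not from any cited study.

p_factual = 1 / 15          # e.g. the event now expected roughly once in 15 years
p_counterfactual = 1 / 250  # e.g. a 1-in-250-year event without climate change

probability_ratio = p_factual / p_counterfactual  # > 1: climate change made it likelier

print(f"probability ratio ≈ {probability_ratio:.1f}x")    # ≈ 16.7x more likely
print(f"return period today ≈ {1 / p_factual:.0f} years")
```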
Africa North Africa There was a three-day heatwave in the Western Mediterranean region, originating in North Africa, from 26 to 28 April. The temperature reached extreme levels in parts of Morocco and Algeria. Morocco, Algeria and Tunisia were still reaching high temperatures on 13 July. Another heat wave hit Tabarka, Tunisia, on 14 July. Data recorded on 18 July showed extreme temperatures in Tozeur, Tunisia, in Chlef, Algeria (with many other places reaching similar levels), and in Kharga, Egypt. There had been a week of power cuts and high temperatures in Cairo by 19 July. The heat wave became even more intense in the following days. Data recorded on 23 July showed the highest temperature ever recorded in Algiers, Algeria, as well as extreme temperatures in Kairouan, Tunisia. On 24 July, high temperatures were also recorded in Tunis, Tunisia, and in Ghar el-Melh in northern Tunisia. Heavy rain flooded Al-Marj and Derna, Libya, on 12 September, initially killing nearly 250 in Derna. By 14 September, up to 20,000 were feared dead, with flooding also affecting Benghazi. Sub-Saharan Africa Data recorded on 18 July showed extreme temperatures at the Faya station in Chad and in Bilma, Niger. 20 July saw temperatures suddenly drop to a seasonal low. Asia Middle East A heat wave in early June hit Israel, with high temperatures from Jerusalem to the Jordan Valley. Along with high winds, the heat wave caused hundreds of bushfires. Roads and some buildings were evacuated amid rolling electricity outages on 2 July. Firefighters quickly controlled the fires, limiting property damage. In Iran, an extreme heat index was recorded at Persian Gulf International Airport on 16 July. Al Ahsa, Saudi Arabia, recorded extreme heat on 18 July. A local record rainfall of around 125 kilograms per square metre (equivalent to 125 mm, the usual amount for the whole of September) hit Istanbul in under six hours on 7 September, according to the city's governor, as heavy rain fell across northwest Turkey's Black Sea coast. Floods also hit residential areas in the Igneada district of Kirklareli, Turkey, that day. Heavy rainstorms triggered flash floods across northwest Turkey, killing at least 6 according to the state broadcaster, TRT Haber. Other parts of Asia February 2023 was recorded as the warmest February in India for 122 years. Starting in April 2023, a record-breaking heat wave in Asia affected multiple countries, including India, China, Laos and Thailand. Turkmenistan recorded unusually high temperatures in early April. The European Union's Copernicus Atmosphere Monitoring Service said on 26 April that fires continued to burn across Russia's Chelyabinsk, Omsk and Novosibirsk Oblasts, Primorye Krai, Kazakhstan and Mongolia. High temperatures were recorded in Uzbekistan, Kazakhstan and China on 31 May. Beijing saw extreme heat on 6 June after ten days of high temperatures. The government ordered all employers to stop any outdoor work and for people to work from home if possible. Parts of Mongolia recorded extreme heat, especially in the eastern steppe and southern Gobi provinces, with more forecast for 21 June in the Khanbogd Soum. Heatstroke claimed 22 lives in Mardan and Islamabad, Pakistan, on 26 June. On 16 July, China recorded a record-breaking temperature at Aydang and Sanbao in Turpan, Xinjiang Province. Extreme temperatures persisted for four weeks in Beijing. The Japanese government and NHK issued health advice to the general public, with heatstroke alerts for 20 of the country's 47 prefectures, mostly in the east and southwest, on 16 July.
Extremely high temperatures were recorded in Kiryu city, Gunma Prefecture; Hachioji in western Tokyo; Hirono town in Fukushima Prefecture; and Nasushiobara. Torrential rain poured down in northern Japan, causing floods and a resultant landslide on 16 July. The Korea Meteorological Administration (KMA) issued a heat wave warning on the 19th and warned people to expect temperatures of 33 °C for the foreseeable future, with southern Gangwon Province, North Chungcheong Province and some southern regions expected to see rain of about 5 to 20 millimetres in total on the 19th. On the 20th, a heatwave was declared in Mongolia as high temperatures continued in the Khanbogd soum. Japan's government and Fire and Disaster Management Agency responded to the ongoing excessive heat by issuing more health advisory notices to the public on 19 July. The Fire and Disaster Management Agency's official weekly statistics for 19 July showed that 8,189 people (4,484 of them aged 65 or over) were hospitalized that week, up 200% compared to both the previous week and the same week in 2022. Tokyo had the highest number with 1,066, up 460% compared to 2022. Nationwide, 3,215 people got heatstroke at home and 1,445 outside. Three people had also died that week as the heat reached 39 °C in many parts of Japan on 19 July. South Korea and China reported deadly floods due to unusually heavy rains, leaving several dozen people dead by the 23rd, while the record heat continued in Japan and Korea that day. The remnants of Typhoon Doksuri hit Hebei province and Beijing, killing 21 people. 744.8 millimeters (29.3 inches) of rain fell on Beijing between July 26 and August 2, according to the Beijing Meteorological Bureau, as Beijing and the province of Hebei had floods that destroyed roads, knocked out power and cut water pipes. Zhuozhou, in Hebei, was so badly hit that local police posted a plea on Weibo for lights to assist rescue workers in the devastated city. August Extreme heat was recorded in Seoul, South Korea, on August 4. 33 people died on 14 August as a flood-induced landslide hit villages in India's Himalayan regions. September Flood-induced landslides hit Shimla in the Indian Himalayas on 7 September. October High temperatures, a sand storm and air pollution formed a toxic smog over Dushanbe on 8 October. Europe The Balkans July Athens and Santorini recorded very high temperatures on 19 July 2023. Bush fires hit Rhodes on 22 and 23 July, including Lardos and Kiotari, Greece, on the 23rd. Rhodes was still burning on the 23rd, with the evacuation of tourists starting that day. Eight EU countries sent firemen to help Greece, while Israel, Jordan and Turkey sent mostly aerial equipment to the Greek fire brigade. Corfu was hit by forest and bush fires, and tourists were evacuated from the island in response on the 25th. Albania set its all-time high temperature record in Kuçovë on 24 July. Wildfires hit Turkey, Bulgaria, Croatia, Albania, North Macedonia, Greece and southern Italy on the 26th and 27th. The 27th saw wildfires in and around Sicily, Dubrovnik, Rhodes, Gran Canaria, Lisbon and Cascais in Portugal. August Heavy rain caused mass flooding in Slovenia on the 6th and 7th, killing seven people and causing over $500 million worth of damage across 66% of the nation's area, in what was reported as the worst natural disaster to have hit the country in its history. Over 600 firefighters from Greece and several other European countries, along with a fleet of water-dropping planes and helicopters, fought major wildfires in Greece; 20 people had died in the fires by 27 August.
September Heavy rainstorms hit Greece, Turkey and Bulgaria on 7 September, killing 11 and leading to the declaration of a state of emergency in the affected region of Bulgaria. Heavy rains hit the Pagasetic Gulf and central Greece. Kala Nera village and the nearby port city of Volos were flooded by heavy rains on 7 September, in which two people died. Larissa, in central Greece, was evacuated due to heavy flooding on 7 September. The Greek Minister of Climate Crisis and Civil Protection, Vassilis Kikilias, urged people to stay indoors on 7 September. Heavy floods along Bulgaria's Black Sea coast triggered a state of emergency in that region of Bulgaria on 7 September. The Iberian Peninsula April Europe broke its temperature record for April at Córdoba Airport. On 26 April, a Sentinel-2 image showed that the Fuente de Piedra Lagoon had gone completely dry for the first time. A rapid attribution study by World Weather Attribution found that the heatwave would probably have been more than 2 °C cooler without climate change and that climate change made the heat wave 100 times more likely to occur. July Extreme temperatures were recorded on 19 July 2023 in Seville and Madrid, while Catalonia reached 45.3 °C (113 °F), an all-time record. A forest fire that hit La Palma on 22 July led to 500 people being evacuated. The 27th saw wildfires in and around Sicily, Dubrovnik, Rhodes, Gran Canaria, Lisbon and Cascais in Portugal. August Forest fires hit 19 villages in the Algarve, including Odeceixe and Monchique, on the 8th. The highest temperature ever recorded in Valencia was reached on 11 August. Intense heatwave conditions, the third such occurrence that summer, were expected to continue over the Mediterranean region in the following days and weeks. Major wildfires occurred across Tenerife on 16 August. Parts of the island of Mallorca were put under yellow and orange weather alerts. Palma police reported felled trees and weather-induced damage to buildings in the city. As storms lashed Palma Port, the P&O Cruises ship Britannia broke free of its moorings and collided with another ship; the wind had gusts of up to 100 km/h on 28 August. September Heavy rain hit Spain on 3 and 4 September. Madrid's mayor, José Luis Martínez-Almeida, told people to stay indoors and said the 1972 rainfall record had been broken, with a volume approaching 120 litres per square metre (120 mm) on 4 September. The Spanish National Weather Agency declared a nationwide red alert for flooding due to extreme rainfall. A man died during a flood in Villamanta as unusually high amounts of hail and rain hit regions of Castile, Catalonia and Valencia on 5 September. UK and Ireland June June 2023 was the warmest June on record for both Ireland and the UK by average temperature. July In July 2023, Lancashire, Merseyside, Greater Manchester and Northern Ireland recorded their wettest July ever, with Lancashire receiving 247% of normal rainfall. September In Ireland, a rare tropical night was recorded in Valentia, Kerry, on 5 September, with the overnight temperature not dropping below 22.3 °C (72.1 °F) at the weather station. A high temperature warning was also issued for parts of Ireland on 7 September, as some parts of the country were forecast to reach as high as 31 °C (88 °F). The Port Clarence and Stockton Village districts of Teesside were flooded on 5 September. Heavy fog hit Langland Bay near Swansea, Barry Island and Cardiff Bay on 10 September.
Heavy rain-induced floods hit Exeter in Devon on 24 September. The A83 was hit by seven landslides. The Met Office issued an amber warning that morning for Angus, Perth and Kinross, Aberdeenshire, Moray and Highland, lasting until 2:00 pm the next day. The A83 between Tarbet and Lochgilphead was blocked by a major landslide. Heavy rain-induced flooding hit several places in Wales on 22 September. Heavy rain-induced flooding hit parts of Greater London on 26 September. Major rain-induced flooding and landslides hit Scotland between 28 September and 10 October. October Over a month's worth of rain fell in 24 hours over Scotland on 7 October. Temperatures reached 23 °C (74 °F) during a warm spell across eastern Ireland on 8 October. Trains were cancelled as emergency repairs took place on the line between Morpeth and Newcastle on 9 October. The Whitesands flood protection scheme in Dumfries was given the go-ahead by Dumfries and Galloway Council on 6 October. Midday temperatures in Oxford and Banbury, UK, hit unseasonal highs on 9 October. Rest of Europe July Northern Norway's Slettnes Lighthouse recorded an unusually high temperature on 13 July. A major extended heatwave affecting most of Europe through mid-July was named "Cerberus" by the Italian Meteorological Society and brought record temperatures into the Arctic. On 18 July, temperatures reached as high as 42.9 °C (109 °F) in Rome, an all-time record, and 46.3 °C (115 °F) in Sicily, also an all-time record, with extreme heat in Sardinia as well. On 23 July, 16 Italian cities were under red alerts for heat, including Rome, Florence and Bologna. Officially, the air temperature recorded on 24 July at Jerzu, Sardinia, during the heatwave would, if validated, be the highest temperature recorded in Europe during the month of July. One person died and 15 were hurt as a storm hit La Chaux-de-Fonds in northwestern Switzerland on the 23rd. The 27th saw wildfires in and around Sicily, Dubrovnik, Rhodes, Gran Canaria, Lisbon and Cascais in Portugal. August In Ukraine, high temperatures were recorded around Zaporizhzhia and at a lake in Sloviansk on August 5. 30 cm of hail fell on the German city of Reutlingen on August 5. Floods and landslides hit southern Norway on August 9, and a poorly built dam on the Glåma River, at the Braskereidfoss hydroelectric power plant, collapsed. 3,500 people were evacuated from forest fires in Argeles-sur-Mer, France, on 16 August. Major wildfires hit Tenerife on that day. Extreme heat reached the Alpes-Maritimes, Var, Hautes-Alpes and Alpes-de-Haute-Provence departments in the Provence-Alpes-Côte d'Azur (PACA) region of France on 17 August. The district office of Garmisch-Partenkirchen in Bavaria declared a state of disaster in the municipality of Bad Bayersoien after 80% of its buildings were seriously damaged by 8 cm hailstones on 27 August. High temperatures hit Menton, Carcassonne, Toulouse and the rest of France on 23 August. The Poppea cyclone struck the Ligurian Sea in northern Italy on 27 August, causing landslides and floods that washed away roads overlooking Lake Como. Flash floods hit Genoa and Sondrio on 28 August, as flood alerts remained in place in Piedmont and Lombardy. Over 100 mm of rain fell in regions of Vorarlberg, Tyrol, Salzburg and Carinthia, causing evacuations; rail routes and bridges were closed due to flash flooding in Bad Gastein, Salzburg, on 29 August. North America May Arviat, Nunavut, recorded a monthly record high on 13 May. Monthly record highs were also set in several cities in the Pacific Northwest during that time.
June An intense heat wave impacted Puerto Rico and the Caribbean in early June, bringing record highs to San Juan and causing the heat index to reach extreme levels in one town. In Mexico, Merida, Yucatan, reached its highest recorded heat index on June 11, a mark surpassed the next day; the air temperature itself was also extremely high. The heat wave swept northern states, such as Sonora, where air temperatures (before the heat index) were recorded as high as 49 °C (120 °F). Over 100 people died from heat stroke or dehydration. July In Canada, on July 8, Norman Wells, Northwest Territories, at 65°N, recorded the first temperature over 37.8 °C (100 °F) that far north. Massive wildfires, consuming more land area in Canada that year than ever recorded, continued to rage in the area. Canada recorded three new temperature highs on 11 July: 37.9 °C (100.2 °F) in Fort Good Hope, Northwest Territories (NWT), plus new highs in Norman Wells, NWT, and in Ottawa. Miami broke its record for the most consecutive days with the heat index exceeding 100 °F (38 °C), a streak ending on July 26 after 46 days. Death Valley recorded a daily record high of 128 °F (53.3 °C) on July 16, surpassing the previous marks of 127 °F in 1972 and 2005. In the United States, "an extreme heat wave" affected many states including Texas, New Mexico, Arizona, Nevada, and California. Temperatures reached as high as 53 °C (128 °F) in Death Valley, while Phoenix reached 48 °C (119 °F) on a few days and broke the previous record of 18 consecutive days exceeding 110 °F (43.3 °C), for a total of 31 consecutive days (every day in July). Heat warnings were issued across many southern states as far east as Florida, where record high ocean temperatures were observed. The U.S. National Weather Service (NWS) issued its first ever excessive heat advisory for Miami, Florida, on July 17, 2023. Phoenix recorded its all-time high minimum temperature of 97 °F (36.1 °C) on July 19, surpassing the previous record of 96 °F on July 15, 2003. Phoenix also broke its record for most consecutive days with highs over 110 °F (43.3 °C), a streak ending on July 30 after 31 days, exceeding the previous streak of 18 days back in 1974 by a significant margin. On July 31, the Eagle Bluff Fire triggered the evacuation of Osoyoos, British Columbia. Both US and Canadian officials estimated that around 890 ha (2,200 acres) on the Canadian side of the border were on fire, a result of extended dryness and heat from June into July and of seasonally high temperatures of 31–32 °C (88–90 °F) at the time of the fire. 2,000 acres (3.13 square miles) on the US side of the border burned. One firefighter died on the Canadian side. 1,500 blazes were burning in Washington State and British Columbia that day, and 101 were out of control. August California's week-old York Fire was the state's biggest of the year by August 1, at over 125 square miles (323.7 square kilometers), and was 23% contained according to California fire officials. About 400 firefighters were fighting the blaze and had to balance their efforts with concerns about disrupting the fragile ecosystem in California's Mojave National Preserve. Wildfires devastated the town of Lahaina, Hawaii, and burned near Kaanapali to its north by the 10th. 20,000 people were evacuated from Yellowknife on 11 August, and from near Ndılǫ, Dettah and the Ingraham Trail on 1 August. Enterprise was mostly destroyed and Hay River was endangered by the fires. A major heat wave affected the Midwestern United States on August 23 and 24. Prior to the heat wave, 126 million Americans were under heat alerts.
At Chicago O'Hare International Airport, high temperatures and extreme heat indices were recorded on August 23 and 24. The high on August 24 became the latest in-season high of its magnitude for Chicago, while the heat index was the hottest on record. In Rockford, the high came with an extreme heat index driven by very high dew points. In Des Moines, Iowa, the high on August 23 broke a daily record, and the heat index became the highest recorded there in August. By August 24, Des Moines had recorded six consecutive days with extreme heat indices. On August 27, New Orleans reached an all-time record high. Maui County Emergency Management Agency announced the end of its bush fire evacuation order on 27 August. September A major heat wave took place in the northern portion of the United States, especially around Minnesota, in the first few days of September 2023. Maximum temperatures were exceptionally high throughout almost the entire state of Minnesota. The Twin Cities had their hottest Labor Day on record. A monthly record was also set that day in Marquette, Michigan. The next day, a monthly record was tied at Abilene, Texas, with daily records also broken in Raleigh, North Carolina; Washington, D.C.; Alpena, Michigan; Concord, New Hampshire; and Islip, New York. The next day, Dulles International Airport recorded its hottest September temperature ever. By September 7, New York City had its first official heat wave of the year. That day, sports in Suffolk County, New York, were postponed until at least 6 p.m. due to high heat index values. The heat index in Newark, New Jersey, reached extreme levels, with the air temperature setting a record for September 7. By September 8, Philadelphia's heat wave reached six days, with records set for both high and low temperatures during that six-day timeframe. On that day, monthly record warm low temperatures were set at Islip and Caribou, Maine. The first 13 days of September were the warmest on record in Philadelphia. The 0%-contained Oregon Road Fire and the 10%-contained, two-day-old Grays Fire had killed two people so far and covered 10,000 acres of woodland. Mass evacuations occurred in case wildfires engulfed the threatened towns of Medical Lake and Four Lakes. Spokane County Emergency Management said the Grays Fire had stopped growing towards Medical Lake and Four Lakes on 3 September. Yellowknife and West Kelowna were threatened by major forest fires on 18 September, and Yellowknife was evacuated. One person had died and at least 185 buildings were destroyed after a 0%-contained, 15-square-mile, wind-driven wildfire spread on the west side of Medical Lake, west of Spokane, on 18 and 19 September. Canada recorded its most severe wildfire season to date, with a total of just over 1,000 active fires on 18 September; 236 were in the usually colder Northwest Territories, where fires had so far consumed over 2,000,000 hectares of land and forced the evacuation of over half of the territory's population. Major forest fires began to approach Yellowknife. West Kelowna's firefighters were losing the battle with the 10,500-hectare, 0%-contained McDougall Creek wildfire, resulting in the evacuation of thousands of residents in West Kelowna and north of nearby Kelowna. The premier of British Columbia declared a state of emergency, with further evacuations from the cities east of Vancouver.
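Many of the records above are heat-index values rather than plain air temperatures. The heat index folds relative humidity into an apparent temperature; one widely used approximation is the Rothfusz regression published by the US National Weather Service, sketched below. The coefficients are as commonly published, the formula applies roughly above 80 °F, and the NWS adds further adjustments at humidity extremes that are omitted here.

```python
def heat_index_f(t_f: float, rh: float) -> float:
    """Rothfusz regression for the heat index (°F), given air temperature in
    °F and relative humidity in percent. Valid roughly for t_f >= 80 °F; the
    NWS applies extra adjustments for very low/high humidity, omitted here."""
    return (-42.379
            + 2.04901523 * t_f
            + 10.14333127 * rh
            - 0.22475541 * t_f * rh
            - 0.00683783 * t_f ** 2
            - 0.05481717 * rh ** 2
            + 0.00122874 * t_f ** 2 * rh
            + 0.00085282 * t_f * rh ** 2
            - 0.00000199 * t_f ** 2 * rh ** 2)

# Example: a 96 °F day with 65% relative humidity (illustrative values)
print(f"{heat_index_f(96, 65):.0f} °F")  # ≈ 121 °F apparent temperature
```

This is why Midwestern stations with highs in the 90s °F could report heat indices well above 110 °F: high dew points push the humidity terms up sharply.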
October A late-season heat wave took place in a large part of the United States, starting on September 30 and continuing into October 3, especially in northern states such as Minnesota. At least 15 US states recorded unusually high temperatures, with extreme readings as far north as Iowa. This heat wave was notable for record high minimum temperatures, with International Falls setting its record high minimum temperature for October on October 2. The heat wave resulted in the cancellation of the Twin Cities Marathon for the first time in its history since it began in 1982. Lakes were warm enough for swimming in many parts of Minnesota. Temperatures in Minneapolis reached their warmest ever for October. The heat spread east on October 3, when Ottawa, Canada, set a monthly record. On October 4, monthly record high temperatures were reached in Syracuse, New York, and in Burlington, Vermont. In Montreal, the temperature came with a humidex of 33, considered "exceptional" for that time of year. Montreal also recorded a record high on October 5, before the heat wave broke. Central America July The Panamanian Institute of Meteorology and Hydrology issued public health advice as it reported high temperatures in Panama City, with abnormally high heat in the Chiriqui, Veraguas, Los Santos, Herrera, Cocle, West Panama, Colon, Darien, Guna Yala and Embera Wounan regions on July 29. South America August On August 2, 2023, a heat wave hit South America, leading to unusually high midwinter temperatures in many areas, with some locations setting all-time heat records. 12 August saw Rio de Janeiro break a 117-year heat record. Chile saw highs approaching 40 °C, Bolivia saw temperatures rise sharply, and Asunción also recorded extreme heat. November On November 8, 2023, Brazil was hit by another heat wave. Rio de Janeiro had the warmest day of the year, with temperatures reaching 42.5 °C (108.5 °F) on November 12. The city also had a real-feel temperature of 58.5 °C (137.3 °F) on November 14, the highest since 2014. Australia February On 15 February, the Bureau of Meteorology forecast heatwaves around Australia over the following days and had already put high-temperature warnings in place across Australia, except for the Australian Capital Territory, Jervis Bay Territory and the Northern Territory. Russian Federation April The head of the Ministry of Emergency Situations, Alexander Kurenkov, traveled to the fire that hit Kurgan Oblast on 26 April. The European Union's Copernicus Atmosphere Monitoring Service said on 26 April that fires continued to burn across Russia's Chelyabinsk, Omsk and Novosibirsk Oblasts, Primorye Krai, Kazakhstan and Mongolia. May 80 fires were active over an area of 113,500 hectares (280,000 acres) in the regions of the Ural Federal District on 8 May. June 37.9 degrees Celsius (100.2 Fahrenheit) was recorded in Jalturovosk on 3 June. Data recorded on 8 June showed high temperatures at Zdvinsk, Tomsk, Verkhoyansk, Kupino, Kolyvan, Ermakovskoe and Tastyp. It was revealed on 8 June that the wildfires in Russia's Ural Mountains during May had killed at least 21 people. July On 11 July 2023, in Yekaterinburg (56° north latitude), a record temperature was recorded for the first time in the history of meteorological observations (more than 187 years), and Verkhoyansk, in Russia's Sakha Republic, also recorded an extreme temperature on 11 July. August 3 August saw predictions of a heatwave on 5 August in Krasnodar Krai.
A large fire occurred on 29 August in the Gelendzhik area of Krasnodar Krai, covering fifty thousand square meters according to the Russian Ministry of Emergency Management. Strong winds fanned the flames that day. 195 people, 45 firefighting units and a Mi-8 helicopter helped fight the fire, though the helicopter was grounded at night. The local administration of Gelendzhik called on local residents to help fight the fire. Political, charity, NGO, UN, scientific and corporate responses China's top diplomat Wang Yi and American diplomat John Kerry called for "global leadership" on climate issues. Some meteorological scientists officially blamed climate change for the event. A marine heat wave roughly 60 miles off California's coast was blamed by meteorologists for helping fuel Hurricane Hilary. On August 2, Xi Jinping, General Secretary of the Central Committee and President of the People's Republic of China, urged local officials to make every effort to find the 29 individuals who were missing and the many people who were trapped by rising flood waters. At the end of July, Joe Biden announced measures to protect Americans from the heat waves. These included safety rules, with an enforcement mechanism, for people who work outdoors; ensuring access to water for communities in danger; and improving weather forecasts. The money comes partly from the Infrastructure Investment and Jobs Act and the Inflation Reduction Act. Biden also met the mayors of Phoenix, Arizona, and San Antonio to better understand the needs of cities severely impacted by the heatwave. Even before the heatwave, some measures had already been taken, such as cooling centers and more efficient buildings. Antarctica Temperatures in an area of East Antarctica known as "Dome C" were above normal on 27 September, recalling March 18, 2022, when the area reached far above normal seasonal heat levels. See also Weather of 2023 2023 Slovenia floods 2023 Emilia-Romagna floods 2023 Asia heat wave 2023 European heat waves 2023 Western North America heat wave References 2023-related lists Weather-related lists
2023 heat waves
[ "Physics" ]
6,480
[ "Weather", "Physical phenomena", "Weather-related lists" ]
73,879,650
https://en.wikipedia.org/wiki/K2-21
K2-21, also known as EPIC 206011691, is a red dwarf star located in the constellation Aquarius. It hosts two known exoplanets, discovered in 2015 by the transit method as part of Kepler's K2 mission. Both planets have significantly lower densities than Earth, indicating that they are not rocky planets and are better described as mini-Neptunes. The inner planet, K2-21b, is less dense than the outer planet, K2-21c. References Aquarius (constellation) M-type main-sequence stars Planetary systems with two confirmed planets J22411288-1429202 240766850
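The density argument above can be made concrete with the bulk-density formula ρ = 3M / (4πR³). The sketch below uses hypothetical mass and radius values, not the measured parameters of K2-21b or K2-21c, purely to show why a planet a few times Earth's mass but more than twice its radius cannot be made of rock.

```python
import math

M_EARTH_KG = 5.972e24  # Earth's mass in kilograms
R_EARTH_M = 6.371e6    # Earth's mean radius in metres

def bulk_density(mass_earths: float, radius_earths: float) -> float:
    """Bulk density in g/cm^3 from mass and radius given in Earth units."""
    m = mass_earths * M_EARTH_KG
    r = radius_earths * R_EARTH_M
    rho_si = 3 * m / (4 * math.pi * r ** 3)  # kg/m^3
    return rho_si / 1000.0                   # convert to g/cm^3

# Hypothetical mini-Neptune: 5 Earth masses, 2.2 Earth radii (illustrative only)
print(f"Earth:        {bulk_density(1.0, 1.0):.2f} g/cm^3")  # ≈ 5.51
print(f"mini-Neptune: {bulk_density(5.0, 2.2):.2f} g/cm^3")  # ≈ 2.6, too low for rock
```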
K2-21
[ "Astronomy" ]
142
[ "Constellations", "Aquarius (constellation)" ]
73,880,978
https://en.wikipedia.org/wiki/Sulbactam/durlobactam
Sulbactam/durlobactam, sold under the brand name Xacduro (by Innoviva Specialty Therapeutics), is a co-packaged medication used for the treatment of bacterial pneumonia caused by the Acinetobacter baumannii-calcoaceticus complex. It contains sulbactam, a beta-lactam antibacterial and beta-lactamase inhibitor, and durlobactam, a beta-lactamase inhibitor. Sulbactam/durlobactam was approved for medical use in the United States in May 2023. Medical uses Sulbactam/durlobactam is indicated for the treatment of hospital-acquired bacterial pneumonia and ventilator-associated bacterial pneumonia caused by susceptible isolates of the Acinetobacter baumannii-calcoaceticus complex. History The efficacy of sulbactam/durlobactam was established in a multicenter, active-controlled, open-label (investigator-unblinded, assessor-blinded), non-inferiority clinical trial in 177 hospitalized adults with pneumonia caused by carbapenem-resistant A. baumannii. Participants received either sulbactam/durlobactam or colistin (a comparator antibiotic) for up to 14 days. Both treatment arms also received an additional antibiotic, imipenem/cilastatin, as background therapy for potential hospital-acquired bacterial pneumonia/ventilator-associated bacterial pneumonia pathogens other than the Acinetobacter baumannii-calcoaceticus complex. The primary measure of efficacy was mortality from all causes within 28 days of treatment in participants with a confirmed infection with carbapenem-resistant A. baumannii. Of those who received sulbactam/durlobactam, 19% (12 of 63 participants) died, compared to 32% (20 of 62 participants) who received colistin; this demonstrated that sulbactam/durlobactam was noninferior to colistin. Resistance Overall, 2.3% of Acinetobacter baumannii strains are resistant to sulbactam/durlobactam. This percentage increases to 3.4% and 3.7% in the subgroups of carbapenem-resistant and colistin-resistant Acinetobacter, respectively. In Acinetobacter strains producing metallo-beta-lactamases, resistance to sulbactam/durlobactam is 100%. Society and culture Legal status Sulbactam/durlobactam was approved for medical use in the United States in May 2023. The FDA granted the application for sulbactam/durlobactam fast track and priority review designations. References External links Beta-lactamase inhibitors Combination antibiotics
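The trial outcome above is usually summarized as a risk difference with a confidence interval when judging non-inferiority. The sketch below recomputes the point estimate from the published counts and attaches a simple Wald-style interval; the trial's own prespecified non-inferiority margin and interval method are not given here, so treat this as illustrative arithmetic only.

```python
import math

# 28-day all-cause mortality counts reported for the trial above
deaths_sd, n_sd = 12, 63    # sulbactam/durlobactam arm: 12/63 ≈ 19%
deaths_col, n_col = 20, 62  # colistin arm: 20/62 ≈ 32%

p1 = deaths_sd / n_sd
p2 = deaths_col / n_col
risk_diff = p1 - p2  # negative values favor sulbactam/durlobactam

# Wald 95% CI for the risk difference (illustrative; the trial's actual
# prespecified margin and interval method may differ)
se = math.sqrt(p1 * (1 - p1) / n_sd + p2 * (1 - p2) / n_col)
lo, hi = risk_diff - 1.96 * se, risk_diff + 1.96 * se

print(f"risk difference = {risk_diff:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")
```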
Sulbactam/durlobactam
[ "Chemistry" ]
603
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
78,200,945
https://en.wikipedia.org/wiki/Brauer%27s%20height%20zero%20conjecture
The Brauer Height Zero Conjecture is a conjecture in modular representation theory of finite groups relating the degrees of the complex irreducible characters in a Brauer block and the structure of its defect groups. It was formulated by Richard Brauer in 1955. Statement Let $G$ be a finite group and $p$ a prime. The set $\mathrm{Irr}(G)$ of irreducible complex characters can be partitioned into Brauer $p$-blocks. To each $p$-block $B$ is canonically associated a conjugacy class of $p$-subgroups, called the defect groups of $B$. The set of irreducible characters belonging to $B$ is denoted by $\mathrm{Irr}(B)$. Let $\nu$ be the discrete valuation defined on the integers by $\nu(p^a m) = a$ for $m$ coprime to $p$. Brauer proved that if $B$ is a block with defect group $D$, then $\nu(\chi(1)) \geq \nu(|G|) - \nu(|D|)$ for each $\chi \in \mathrm{Irr}(B)$. Brauer's Height Zero Conjecture asserts that $\nu(\chi(1)) = \nu(|G|) - \nu(|D|)$ for all $\chi \in \mathrm{Irr}(B)$ if and only if $D$ is abelian. History Brauer's Height Zero Conjecture was formulated by Richard Brauer in 1955. It also appeared as Problem 23 in Brauer's list of problems. Brauer's Problem 12 of the same list asks whether the character table of a finite group determines if its Sylow $p$-subgroups are abelian. Solving Brauer's height zero conjecture for blocks whose defect groups are Sylow $p$-subgroups (or equivalently, that contain a character of degree coprime to $p$) also gives a solution to Brauer's Problem 12. Proof The proof of the if direction of the conjecture was completed by Radha Kessar and Gunter Malle in 2013, after a reduction to finite simple groups by Thomas R. Berger and Reinhard Knörr. The only if direction was proved for $p$-solvable groups by David Gluck and Thomas R. Wolf. The so-called generalized Gluck–Wolf theorem, which was a main obstacle towards a proof of the Height Zero Conjecture, was proven by Gabriel Navarro and Pham Huu Tiep in 2013. Gabriel Navarro and Britta Späth showed that the so-called inductive Alperin–McKay condition for simple groups implied Brauer's Height Zero Conjecture. Lucas Ruhstorfer completed the proof of these conditions for the case $p = 2$. The case of odd primes was finally settled by Gunter Malle, Gabriel Navarro, A. A. Schaeffer Fry and Pham Huu Tiep using a different reduction theorem. References Conjectures that have been proved
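A small worked example may help fix the definitions; this is a standard computation, not taken from the article itself.

```latex
% Worked example (standard, not from the article): G = S_3 and p = 3.
% Here |G| = 6, so \nu(|G|) = 1. The Sylow 3-subgroup D \cong C_3 is a
% defect group of the principal block B_0, and \nu(|D|) = 1.
% All three irreducible characters of S_3 lie in B_0, with degrees 1, 1, 2, so
\[
  \nu(\chi(1)) = 0 = \nu(|G|) - \nu(|D|)
  \quad \text{for all } \chi \in \mathrm{Irr}(B_0),
\]
% i.e. every character of B_0 has height zero, consistent with the
% conjecture, since D \cong C_3 is abelian.
```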
Brauer's height zero conjecture
[ "Mathematics" ]
475
[ "Mathematical theorems", "Mathematical problems", "Conjectures that have been proved" ]
78,201,193
https://en.wikipedia.org/wiki/RX%20J1633.3%2B4718
RX J1633.3+4718 (RX J1633+4718), known as RXS J16333+4718 in VLBI Network observations, is a narrow-line Seyfert 1 galaxy located in the constellation of Hercules. It has a redshift of z = 0.116 and is located 1.75 billion light years from Earth. The first known reference to this galaxy comes from a radio source which was identified in 1995 in the IRAS catalogue as F16319+4725. Description RX J1633.3+4718 contains a radio-loud active galactic nucleus (AGN) with a measured radio loudness parameter of R5 = fν(5 GHz)/fν(4400 Å) > 100. Furthermore, the AGN is hosted in a disk galaxy found interacting with a starburst galaxy. The two nuclei of the galaxies are about 8 kiloparsecs from each other. Two distinct components are found in RX J1633.3+4718: a core component and a north component. Both components are associated with the two galaxies in the interacting system, with measured flux densities of 24.48 mJy and 0.79 mJy. The core component is unresolved. It has an inverted radio spectrum, whereas the spectral index of the north component is steep. The core component is suggested to be significantly variable, hinting at the presence of jet activity in RX J1633.3+4718. This is further supported by the jet's high brightness temperature of 10^11.3 K and a parsec-scale core-jet radio morphology seen in high-resolution observations at 1.7 and 5 GHz. The north component, on the other hand, is fainter and located at a position angle of 352°, 3.8 arcsec away from the core. The accretion disk of RX J1633.3+4718 shows ultrasoft excess X-ray emission. This emission lies below 0.5 keV, with the temperature of the galaxy's disk estimated at 40 electronvolts based on a disc model for the soft excess. The mass of the black hole in RX J1633+4718 is estimated to be 3 × 10^6 Mʘ, while a bolometric luminosity of Lbol ≈ 1.51 × 10^44 erg s^−1 was derived from an unabsorbed flux measurement in a 0.001–100 keV energy band. References External links RX J1633.3+4718 on SIMBAD Seyfert galaxies Active galaxies Hercules (constellation) 140641 F16319+4725 Interacting galaxies
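The black-hole mass and bolometric luminosity quoted above imply an Eddington ratio, a standard measure of how fast an AGN is accreting relative to its radiation-pressure limit. The sketch below uses the usual scaling L_Edd ≈ 1.26 × 10^38 (M/Mʘ) erg s^−1 for hydrogen gas, so the resulting ratio is an order-of-magnitude estimate, not a published value.

```python
# Eddington ratio estimate from the values quoted in the article above.
# Uses the standard scaling L_Edd ≈ 1.26e38 * (M / M_sun) erg/s for
# hydrogen gas; the resulting ratio is illustrative, not a published figure.

M_BH_MSUN = 3e6    # black hole mass in solar masses (from the article)
L_BOL = 1.51e44    # bolometric luminosity in erg/s (from the article)

l_edd = 1.26e38 * M_BH_MSUN        # Eddington luminosity, erg/s (~3.8e44)
eddington_ratio = L_BOL / l_edd

print(f"L_Edd ≈ {l_edd:.2e} erg/s")
print(f"L_bol / L_Edd ≈ {eddington_ratio:.2f}")  # ≈ 0.4, a fairly high accretion rate
```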
RX J1633.3+4718
[ "Astronomy" ]
564
[ "Hercules (constellation)", "Constellations" ]
78,201,230
https://en.wikipedia.org/wiki/Lanmaoa%20borealis
Lanmaoa borealis is a species of fungus that is commonly found in Michigan. Description It has a yellow or pale yellow stipe that is blue at the top and reddish-brown at the base. References Boletaceae Fungus species
Lanmaoa borealis
[ "Biology" ]
51
[ "Fungi", "Fungus species" ]
78,201,429
https://en.wikipedia.org/wiki/Karel%20Chodounsk%C3%BD
Karel Chodounský (Latinized as Carolus Josephus Petrus Chodounsky; 18 May 1843 – 12 May 1931) was a Czech physician, pharmacologist and promoter of mountaineering. He was the founding professor at the institute of pharmacology at Masaryk University. He is known as the author of the first Czech pharmacology textbook, Farmakologie, printed in 1905. Life and work Chodounský was born in Studénka (today part of Bakov nad Jizerou) to estate administrator Petr and Julia née Svobodová. He completed his medical studies at the University of Prague in 1868, also spending time in the natural history museum. He worked as an assistant in physiology under Jan Evangelista Purkyně. He worked as a physician, practising privately until 1895, and in 1884 he became an assistant professor of balneotherapy. He habilitated in pharmacology and toxicology in 1888 and became an associate professor in 1895. He headed the institute of pharmacology at Prague, received a doctorate in 1900, and became a full professor of pharmacology in 1902. Through experiments conducted on himself, Chodounský demonstrated that infectious colds had nothing to do with low temperatures. He wrote about this in his 1907 book Erkältung und Erkältungskrankheiten. In 1919 he was involved in establishing pharmacology at the Masaryk University, Brno, and served there until 1923. Chodounský was a champion skater and took a keen interest in mountaineering. He was involved in the establishment of the Slovenian Alpine Society in Prague in 1897, serving as its president until 1914. He promoted the training of alpine guides, the opening of trails, and the publication of books on mountaineering. In 1901 he founded a society to support Slovenian students in Prague. He edited the Czech medical periodical Časopis lékařů českých (1878–1888), and for his contributions he received a Knight's Cross of Emperor Franz Joseph. He was also awarded an honorary doctorate in 1929 by Masaryk University. He died on 12 May 1931 in Prague, aged 87. His daughter Marie Chodounská (d. 1922) became an artist. References 1843 births 1931 deaths People from Bakov nad Jizerou Czech physicians Pharmacologists
Karel Chodounský
[ "Chemistry" ]
475
[ "Pharmacology", "Biochemists", "Pharmacologists" ]
78,201,487
https://en.wikipedia.org/wiki/Symonds%20St%20Public%20Conveniences%20and%20Former%20Tram%20Shelter
Symonds St Public Conveniences and Former Tram Shelter is a Category 2 historic place in Auckland, New Zealand, and included the first standalone street toilets in Auckland to cater to both men and women. The toilets were later converted into a male-only facility during the Second World War, with the women's facilities reopened in 2000. History In April 1910, the Public Conveniences and Former Tram Shelter was opened on the corner of Symonds Street and Grafton Bridge, coinciding with the building and opening of Grafton Bridge. After Auckland's branch of the Women's Political League pushed for urban public toilets for women, these became the first standalone street toilets in Auckland to cater to both men and women. Previously, the closest toilets with access for women were at the public libraries and Smith & Caughey's department store. The structure incorporated a tram shelter, which from 1956 was used as a bus shelter when the trams were discontinued. To offer greater privacy to women, the access to the women's toilet was originally from inside the shelter, but this was altered in the 1920s to an external access like the men's toilets, as the shelter was frequently in use, including for unanticipated purposes such as rough sleeping. During the Second World War, the toilets were converted into a male-only facility. In 2000, the women's toilets were re-installed. In 2020–2022, a major conservation project worked to reinstate many of the earlier features of the structure, including original 1910 tiles that were found during the project. The toilets reopened in August 2022. The project included a seismic upgrade, and the restoration was awarded a Heritage Award at the Te Kāhui Whaihanga New Zealand Institute of Architects Auckland Architecture Awards in 2023. Description The Public Conveniences and Former Tram Shelter is a single-storey, Edwardian Baroque, masonry building designed by Walter Ernest Bush, the architect of the Grafton Bridge. The main façade, facing towards the end of Karangahape Road, has a wide central archway with arched windows on either side. References External links Toilets for All on Heritage et AL with archival images of the Symonds St Public Conveniences and Former Tram Shelter Heritage New Zealand Category 2 historic places in the Auckland Region Buildings and structures completed in 1910 Karangahape Learning Quarter Toilets Buildings and structures in Auckland Historic preservation in New Zealand
Symonds St Public Conveniences and Former Tram Shelter
[ "Biology" ]
481
[ "Excretion", "Toilets" ]
78,202,631
https://en.wikipedia.org/wiki/National%20Security%20Memorandum%20on%20Artificial%20Intelligence
The Memorandum on Advancing the United States' Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence is a memorandum signed by U.S. President Joe Biden in October 2024. The memorandum is described as seeking to advance U.S. leadership in the development of safe, secure, and trustworthy artificial intelligence (AI); enable the U.S. government to use AI for national security; and contribute to international AI governance. References United States law stubs 2024 in American politics Executive orders of Joe Biden Regulation of artificial intelligence
National Security Memorandum on Artificial Intelligence
[ "Technology" ]
126
[ "Computing and society", "Regulation of artificial intelligence" ]