| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
75,473,998 | https://en.wikipedia.org/wiki/EDP-305 | EDP-305 is a non-bile acid farnesoid X receptor (FXR) agonist developed by Enanta Pharmaceuticals for non-alcoholic fatty liver disease. According to preclinical research, CYP3A4 is the main enzyme responsible for metabolizing the drug, and the potential for drug interactions is low.
References
Farnesoid X receptor agonists
Experimental drugs developed for non-alcoholic fatty liver disease
Benzosulfones
Ureas
Tert-butyl compounds
Sterols | EDP-305 | [
"Chemistry"
] | 104 | [
"Organic compounds",
"Ureas"
] |
75,474,225 | https://en.wikipedia.org/wiki/Mitochondrial%20pyruvate%20carrier | The mitochondrial pyruvate carriers are composed of:
Mitochondrial pyruvate carrier 1
Mitochondrial pyruvate carrier 2
The pyruvate carriers are involved in mitochondrial metabolism, but it is possible to compensate for their loss of function. They have been studied for a role in cardiac stress adaptation.
References
Human genes
Transport proteins
Solute carrier family
Autosomal recessive disorders
Inborn errors of carbohydrate metabolism | Mitochondrial pyruvate carrier | [
"Chemistry"
] | 88 | [
"Inborn errors of carbohydrate metabolism",
"Carbohydrate metabolism",
"Protein stubs",
"Biochemistry stubs"
] |
75,474,255 | https://en.wikipedia.org/wiki/Azemiglitazone | Azemiglitazone (MSDC-0602) is a novel insulin sensitizer designed to retain the effect of thiazolidinediones on mitochondrial pyruvate carriers with limited PPAR-gamma binding. It is hoped that it will have fewer adverse effects than the thiazolidinediones; it is being developed by Cirius Therapeutics for type 2 diabetes and non-alcoholic fatty liver disease. It is formulated as its potassium salt, azemiglitazone potassium (MSDC-0602K).
References
Experimental diabetes drugs
Experimental drugs developed for non-alcoholic fatty liver disease
Thiazolidinediones
3-Methoxyphenyl compounds
Ketones
Aromatic ethers | Azemiglitazone | [
"Chemistry"
] | 147 | [
"Pharmacology",
"Ketones",
"Functional groups",
"Medicinal chemistry stubs",
"Pharmacology stubs"
] |
75,474,304 | https://en.wikipedia.org/wiki/Bundeswehr%20Institute%20of%20Microbiology | The Bundeswehr Institute of Microbiology (IMB, military abbreviation InstMikroBioBw) in Munich is the German Armed Forces' scientific competence center in the field of medical defense against biological warfare agents and other dangerous pathogens or biotoxins. The institute provides procedures and methods for the rapid and unambiguous identification and verification of allegations of the use of biological warfare agents, conducts specialized training, and participates in the development of medical biodefense concepts and strategies.
Tasks
Provide expertise, specialized diagnostic capabilities, principles, concepts, guidelines, and procedures for preserving/restoring the health of populations and individuals exposed to biological warfare agents.
Deployment of specialized, rapidly deployable military response teams in biological threat situations, investigation of unexplained outbreaks of infectious diseases, and medical verification of biological agent use.
Research on the epidemiology, epidemic management, pathomechanisms, prevention, diagnostics, and treatment of diseases caused by biological warfare agents.
Advising the German Ministry of Defense and other federal agencies on scientific and medical issues related to bioweapons disarmament and arms control.
Directors
1984–1994: Colonel Dr. med. vet. Ahrens
1994–2008: Colonel Dr. med. Ernst-Jürgen Finke
2008–2019: Colonel Prof. Dr. med. Lothar Zöller
Since 2019: Colonel Prof. Dr. med. Roman Wölfel
History
The institute was established in 1966 as the Microbiology Laboratory Group at the Medical Corps School of the Bundeswehr (now: Bundeswehr Medical Academy) in Munich. In 1984, today's Bundeswehr Institute of Microbiology was officially founded as an independent military unit. It has been stationed in the Ernst-von-Bergmann barracks in the north of Munich ever since.
In response to the September 11 attacks in 2001, the German Council of Science and Humanities recommended that the Bundeswehr Institute of Microbiology be developed into a national military competence center for biodefense.
The institute provides medical diagnostics for biological warfare agents and naturally occurring infectious agents of military importance for all members of the Bundeswehr. These services include infectious agents of biological risk groups 3 and 4 and are also available to civilian healthcare facilities. In September 2012, the Central Diagnostic Department (ZBD) of the Institute was flexibly accredited by the German Accreditation Body according to ISO 15189.
In August 2002, along with the Bundeswehr Institute of Radiobiology and the Bundeswehr Institute of Pharmacology and Toxicology, the institute became an independent entity under the Central Medical Service of the Bundeswehr and was placed under the supervision of the Bundeswehr Medical Office. Since 2012, all three institutes have once again been under the military command of the Medical Academy, but now as independent military units at battalion level.
Since 2010, the institute, together with the Technical University of Munich, the Ludwig Maximilian University of Munich, and the Helmholtz Zentrum München, has formed the Munich partner site of the German Center for Infection Research (DZIF). In February 2013, cooperation with the Institute of Microbiology, Immunology, and Hygiene and the Institute of Virology of the TU Munich began. In 2016, a cooperation agreement was signed with the University of Stuttgart-Hohenheim.
Since 2009, the Medical Biodefense Conference has been held as an international conference.
The Bundeswehr Institute of Microbiology has been leading the development of a modular and rapidly deployable mobile laboratory system for the German Armed Forces since 2007. The mobile laboratory systems are designed to respond quickly to sudden disease outbreaks, with flexible configurations and integrated biosafety measures. A collapsible glove box with rigid polycarbonate walls provides a secure working environment for handling highly infectious samples. Using diagnostic technologies such as qPCR, ELISA, and next-generation sequencing (NGS), the system aims for short sample-analysis turnaround times. Because it requires minimal infrastructure, it can be deployed rapidly worldwide and used in diverse environments. Following initial deployments in the Balkans, the mobile laboratory was integrated into the European Mobile Lab Project (EMLab) from 2013. These systems played a major role during the 2014 Ebola outbreak in West Africa and are now regarded as a technical standard for diagnostic field operations in disease outbreaks.
The Bundeswehr Institute of Microbiology played a prominent role during the COVID-19 pandemic. On January 27, 2020, researchers at the institute diagnosed the first cases of illness caused by the SARS-CoV-2 virus in Germany. The institute also cultured the virus in cell cultures, a feat previously accomplished outside of China only by Australian researchers, and sequenced the SARS-CoV-2 genome, providing more complete information than the partial sequences transmitted online from China. The institute offered the first description of the replication of SARS-CoV-2 in the nasal and throat cavity and of the excretion of the virus in the stool.
On May 19, 2022, amidst the largest outbreak of mpox in Europe to date, the Bundeswehr Institute of Microbiology confirmed the first case of mpox in Germany.
References
External links
Official Website
Bundeswehr
Biological warfare facilities
Biosafety level 3 laboratories
Medical research | Bundeswehr Institute of Microbiology | [
"Biology"
] | 1,107 | [
"Biological warfare facilities",
"Biological warfare"
] |
75,474,478 | https://en.wikipedia.org/wiki/HU6 | HU6 is a prodrug of the mitochondrial uncoupler 2,4-dinitrophenol (DNP) that is intended to "minimize the rapid absorption and high peak blood concentrations of DNP to provide a wider therapeutic index and improve safety." Developed by Rivus Pharmaceuticals, the drug is being tested for its ability to reduce weight and liver fat in humans with risk factors for metabolic dysfunction-associated steatohepatitis. In a phase 2a trial, the higher dosage levels reduced liver fat on average by more than 30 percent and also reduced body weight significantly. A phase 2b trial was launched in late 2023. A phase 2b trial in patients with metabolic dysfunction-associated steatohepatitis was subsequently initiated. Data from this study are expected to be reported in 2025. Additionally, a phase 2a study was performed in patients suffering from heart failure with preserved ejection fraction, a disease that is mediated by visceral fat and obesity. The study achieved the primary endpoint of weight loss, as well as a number of secondary endpoints.
References
Prodrugs
Uncouplers
Experimental drugs developed for non-alcoholic fatty liver disease
Experimental anti-obesity drugs
Nitrobenzenes
Nitroimidazoles | HU6 | [
"Chemistry"
] | 253 | [
"Chemicals in medicine",
"Cellular respiration",
"Prodrugs",
"Uncouplers"
] |
75,474,503 | https://en.wikipedia.org/wiki/NGC%201106 | NGC 1106 is a non-barred lenticular galaxy with considerable structure (type SA0^+), located in the constellation Perseus. It was discovered by astronomer John Herschel on 18 September 1828.
Characteristics
In 2016, astronomers confirmed that NGC 1106 contains a Compton-thick active galactic nucleus, after extensive analysis of the galaxy's X-ray spectra. Due to the AGN in its center, it is also classified as a type II Seyfert galaxy, meaning it has the characteristic bright core of a Seyfert galaxy, as well as appearing bright when viewed at infrared wavelengths.
Star formation
A study released in 2022 detected active star formation in NGC 1106. The research used far-ultraviolet and mid-infrared analysis; both techniques are extensively used as star formation rate tracers.
NGC 1086 Group
NGC 1106 is a member of the NGC 1086 Group (also known as LGG 78). The group's other three galaxies are NGC 1086, UGC 2349, and UGC 2350.
See also
Other Seyfert galaxies include:
Messier 77
NGC 7213
NGC 5128
References
Unbarred spiral galaxies
Perseus (constellation)
1106
010792
Discoveries by John Herschel
Lenticular galaxies
Astronomical objects discovered in 1828
Galaxies discovered in 1828
02322
+07-06-076
02474+4127 | NGC 1106 | [
"Astronomy"
] | 286 | [
"Perseus (constellation)",
"Constellations"
] |
75,474,927 | https://en.wikipedia.org/wiki/HEC96719 | HEC96719 is a tricyclic farnesoid X receptor agonist developed for non-alcoholic steatohepatitis.
References
Farnesoid X receptor agonists
Experimental drugs developed for non-alcoholic fatty liver disease
Chloroarenes
Isoxazoles
Cyclopropanes
Carboxylic acids
Benzoxepines
Pyridines
Spiro compounds | HEC96719 | [
"Chemistry"
] | 82 | [
"Organic compounds",
"Carboxylic acids",
"Functional groups",
"Spiro compounds"
] |
75,475,368 | https://en.wikipedia.org/wiki/Tipelukast | Tipelukast (KCA 757 or MN-001) is a sulfidopeptide leukotriene receptor antagonist with suspected anti-inflammatory properties. It is being developed by MediciNova.
References
Receptor antagonists
Leukotriene antagonists
Experimental drugs developed for non-alcoholic fatty liver disease
Acetophenones
Phenols
Thioethers
Carboxylic acids | Tipelukast | [
"Chemistry"
] | 83 | [
"Receptor antagonists",
"Neurochemistry",
"Carboxylic acids",
"Functional groups"
] |
75,475,574 | https://en.wikipedia.org/wiki/Elma%20Parsamyan | Elma Parsamian is a Soviet and Armenian astrophysicist and astronomer. She works at the Byurakan Observatory, where she serves as the principal research associate of a scientific group.
Early life
She was born in Yerevan, Armenia, on December 23, 1929. After moving to Moscow with her father, Parsamian studied at Moscow School N213 from 1938 to 1941. During her school years, she became interested in astronomy and decided to become an astronomer. From 1949 to 1954, Elma Parsamian studied at the Astronomy Department of the Physical and Mathematical Faculty at Yerevan State University, where she graduated with a specialization in astrophysics.
Career
She joined the staff of the Byurakan Astrophysical Observatory (BAO) and has remained there since. In 1961, she earned her Ph.D. degree in Physical-Mathematical Sciences, and she became a Doctor of Physical-Mathematical Sciences in 1963.
Elma Parsamian achieved professorship in 1989, and in 2000, she was selected as a corresponding member of the National Academy of Sciences of Armenia. Her main research fields include variable and non-stable stars, galactic nebulae and archaeoastronomical studies.
Recognition
For Valorous Work (1971)
Anania Shirakatsi medal (2003)
Honorary Diploma of NAS RA, ArAS/BAO Prize for Services in Astronomy (2009)
References
Year of birth missing (living people)
Living people
Soviet astronomers
Soviet astrophysicists
Armenian astronomers
Armenian astrophysicists
Women astronomers
Women astrophysicists
Yerevan State University alumni
Soviet expatriates in Mexico
Armenian expatriates in Mexico | Elma Parsamyan | [
"Astronomy"
] | 322 | [
"Women astronomers",
"Astronomers"
] |
75,475,815 | https://en.wikipedia.org/wiki/Crinecerfont | Crinecerfont, sold under the brand name Crenessity, is a medication used for the treatment of congenital adrenal hyperplasia. It is a corticotropin-releasing factor type 1 receptor (CRF1R) antagonist developed to treat classic congenital adrenal hyperplasia due to 21-hydroxylase deficiency (21OHD). It is taken by mouth.
The most common side effects of crinecerfont in adults include fatigue, dizziness, and arthralgia (joint pain). For children, the most common side effects include headache, abdominal pain, and fatigue.
Crinecerfont was approved for medical use in the United States in December 2024. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication.
Medical uses
Crinecerfont is indicated as adjunctive treatment to glucocorticoid replacement to control androgens in people four years of age and older with classic congenital adrenal hyperplasia.
Adverse effects
The US Food and Drug Administration prescription label for crinecerfont has a warning for acute adrenal insufficiency or adrenal crisis.
History
Crinecerfont's approval is based on two randomized, double-blind, placebo-controlled trials in 182 adults and 103 children with classic congenital adrenal hyperplasia. In the first trial, 122 adults received crinecerfont twice daily and 60 received placebo twice daily for 24 weeks. After the first four weeks of the trial, the glucocorticoid dose was reduced to replacement levels, then adjusted based on levels of androstenedione, an androgen hormone. The primary measure of efficacy was the change from baseline in the total glucocorticoid daily dose while maintaining androstenedione control at the end of the trial. The group that received crinecerfont reduced their daily glucocorticoid dose by 27% while maintaining control of androstenedione levels, compared to a 10% daily glucocorticoid dose reduction in the group that received placebo.
In the second trial, 69 children received crinecerfont twice daily and 34 received placebo twice daily for 28 weeks. The primary measure of efficacy was the change from baseline in serum androstenedione at week four. The group that received crinecerfont experienced a statistically significant reduction from baseline in serum androstenedione, compared to an average increase from baseline in the placebo group. At the end of the trial, children assigned to crinecerfont were able to reduce their daily glucocorticoid dose by 18% while maintaining control of androstenedione levels compared to an almost 6% daily glucocorticoid dose increase in children assigned to placebo.
The US Food and Drug Administration (FDA) granted the application for crinecerfont fast track, breakthrough therapy, orphan drug, and priority review designations. The FDA granted the approval of Crenessity to Neurocrine Biosciences, Inc.
Society and culture
Legal status
Crinecerfont was approved for medical use in the United States in December 2024.
Names
Crinecerfont is the international nonproprietary name.
Crinecerfont is sold under the brand name Crenessity.
References
Further reading
External links
Receptor antagonists
Fluoroarenes
Chloroarenes
Alkyne derivatives
Cyclopropyl compounds
Thiazoles
Tertiary amines | Crinecerfont | [
"Chemistry"
] | 743 | [
"Neurochemistry",
"Receptor antagonists"
] |
75,475,887 | https://en.wikipedia.org/wiki/Onfasprodil | Onfasprodil (MIJ821) is a drug delivered via intravenous infusion that is designed as a fast-acting treatment for treatment-resistant depression. It works as a negative allosteric modulator of the NMDA receptor subunit 2B (NR2B). The drug is being developed by Novartis.
References
Experimental antidepressants
NMDA receptor antagonists
Drugs developed by Novartis
2-Fluorophenyl compounds
Pyridines
Negative allosteric modulators | Onfasprodil | [
"Chemistry"
] | 107 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,476,142 | https://en.wikipedia.org/wiki/Vincent%20Marks | Vincent Marks (10 June 1930 – 6 November 2023) was an English pathologist and clinical biochemist known for his work on insulin and hypoglycemia. His contributions to medical science include simplifying low blood glucose testing, introducing insulin radioimmunoassay, and advancing diabetes research. Marks played an important role in high-profile medico-legal cases, notably providing expert testimony that helped acquit Danish-born British socialite Claus von Bülow in 1985, a case that was the basis of the Oscar-winning movie Reversal of Fortune (1990).
Marks was also a nutritionist who studied intestinal hormones and coined the term "muesli belt malnutrition", referring to parents who feed their children foods considered extremely healthy but, in the process, deprive them of essential fats.
Early life
Vincent Marks was born on 10 June 1930, in Harlesden, North West London, to Lewis and Rose (née Goldbaum) Marks, in a Jewish household. His parents ran a pub. Marks attended Tottenham Grammar School before going to study medicine on a scholarship at Brasenose College, Oxford, in 1948. He completed his training and qualified as a doctor from the St Thomas' Hospital in London, in 1954.
His interest in medicine was reportedly driven in part by his mother's insistence that their childhood home be neat and tidy for the "doctor's visit", leading him and his brother to think highly of doctors and medicine as a profession. During his time at Oxford, he was branded a communist after demanding that The Daily Worker, the newspaper mouthpiece of the Communist Party of Great Britain, be introduced into the university's common rooms. He later joined the party but left it in 1956 following the suppression of the Hungarian Uprising by the Soviet Union. In the 1980s he was a member of the Social Democratic Party (SDP).
Career
Marks began his career in the late 1950s at the National Hospital for Neurology and Neurosurgery, focusing on detecting low blood sugar and researching pancreatic and glucose-management hormones. Notably, he simplified the testing for low blood glucose using glucose oxidase, a method that foreshadowed modern diabetes diagnostics including colour-changing glucose strips. Collaborating with South African medical researcher Ellis Samols, Marks introduced insulin radioimmunoassay into the UK, transforming insulin level measurement. The method had earlier been developed in the United States.
Marks moved to Surrey in 1962, working as a consultant chemical pathologist in Epsom. He co-authored the textbook Hypoglycaemia in 1965, and later became a professor of biochemistry at the University of Surrey in 1970. Marks established a laboratory for insulin testing and founded a master's course in clinical pathology. His laboratory was among the first to offer insulin assays for testing across National Health Service (NHS) hospitals in the United Kingdom. His research extended to monitoring drug levels in the blood and investigating hormones like melatonin and insulin-like growth factors.
Marks also studied intestinal hormones and helped designate the gastric inhibitory polypeptide (GIP) as an obesity hormone. He also coined the term "muesli belt malnutrition", referring to parents who feed their children foods considered extremely healthy but, in the process, deprive them of essential fats. He explored this topic in his book Panic Nation (2006), which he co-authored with Stanley Feldman.
Marks gained prominence in the medico-legal field, providing expert opinions in notable cases. His testimony in Danish-born British socialite Claus von Bülow's 1985 trial challenged accusations of insulin injection and led to an acquittal. The case was made into a book and later into an Oscar-winning movie, Reversal of Fortune. In his testimony, Marks said that the insulin-covered needle was most likely planted by someone who did not realize that insulin is cleaned off a needle when it is injected. Marks also testified in 1993 against Beverley Allitt, who used insulin to murder four children, and at the trial of Colin Norris in 2008. In 2007, he co-authored Insulin Murders, detailing his involvement in high-profile medico-legal cases and reflecting on his career. The book was among the first to discuss insulin as a murder weapon and documented more than 50 years of medical cases in which insulin had been used as a weapon.
Marks retired in 1995 but remained active as an emeritus professor, contributing to research, publishing, and medico-legal work. He served as the president at the Association of Clinical Biochemists between 1989 and 1991, and as the vice president at the Royal College of Pathologists. In a career spanning more than 50 years, he authored over 50 papers, contributed to more than 300 research publications, and authored almost 20 textbooks. His last book, The Forensic Aspects of Hypoglycaemia, was published in 2019.
Personal life
In 1957, Marks married sculptor and artist Averil Sherrard and had two children. Marks was known to have been an atheist and a humanist who was opposed to religion. Along with his wife, he campaigned for various causes including saving a park in Guildford, Surrey, where they lived, from developers. His brother John Marks was also a doctor, and the chair of the British Medical Association.
Marks died on 6 November 2023, at the age of 93.
Selected works
References
1930 births
2023 deaths
Chemical pathologists
Biochemistry educators
People from Harlesden
Communist Party of Great Britain members
English pathologists
20th-century English chemists
20th-century British biologists
English biochemists
British nutritionists
Academics of the University of Surrey
20th-century English Jews
Jewish chemists
Jewish biologists
Jewish British scientists
Alumni of Brasenose College, Oxford
Scientists from London
Academics from London
English humanists
English atheists
Jewish humanists
Jewish atheists | Vincent Marks | [
"Chemistry",
"Biology"
] | 1,194 | [
"Biochemistry",
"Chemical pathology",
"Chemical pathologists",
"Biochemistry educators"
] |
75,476,697 | https://en.wikipedia.org/wiki/1%2C4-Butanedithiol | 1,4-Butanedithiol is an organosulfur compound with the formula HS(CH2)4SH. It is a malodorous, colorless liquid that is highly soluble in organic solvents. The compound has found applications in biodegradable polymers.
Reactions
Alkylation with geminal dihalides gives 1,3-dithiepanes. Oxidation gives the cyclic disulfide 1,2-dithiane:
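A sketch of the oxidative ring closure mentioned above (the article does not specify the oxidant; molecular oxygen is assumed here purely for illustration, and other mild oxidants such as iodine effect the same thiol-to-disulfide conversion):

```latex
\mathrm{HS(CH_2)_4SH} \;+\; \tfrac{1}{2}\,\mathrm{O_2}
\;\longrightarrow\;
\underbrace{\mathrm{C_4H_8S_2}}_{\text{1,2-dithiane}} \;+\; \mathrm{H_2O}
```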
It forms self-assembled monolayers on gold.
It is also used in polyadditions, along with 1,4-butanediol, to form sulfur-containing polyesters and diisocyanate-based polyurethanes. Several of these polymers are considered biodegradable, and many of their components are sourced from non-petroleum oils.
Related compounds
Dithiothreitol
1,3-Propanedithiol
References
Reagents for organic chemistry
Thiols
Foul-smelling chemicals | 1,4-Butanedithiol | [
"Chemistry"
] | 192 | [
"Organic compounds",
"Thiols",
"Reagents for organic chemistry"
] |
75,477,328 | https://en.wikipedia.org/wiki/Zelatriazin | Zelatriazin (NBI-1065846 or TAK-041) is a small-molecule agonist of GPR139. It was developed for schizophrenia and anhedonia in depression but trials were unsuccessful and its development was discontinued in 2023.
References
Receptor agonists
Small-molecule drugs
Experimental drugs developed for schizophrenia
Experimental antidepressants
Benzotriazines
Trifluoromethyl ethers
Carboxamides | Zelatriazin | [
"Chemistry"
] | 92 | [
"Receptor agonists",
"Neurochemistry"
] |
75,477,489 | https://en.wikipedia.org/wiki/NGC%201376 | NGC 1376 is a spiral galaxy located around 180 million light-years away in the constellation Eridanus. It was discovered in 1785 by William Herschel, and it is 79,000 light-years across. NGC 1376 is not known to have an active galactic nucleus, but it does have many star-forming regions.
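As an illustrative calculation added here (not from the article), the quoted distance and diameter imply an approximate apparent size on the sky via the small-angle relation:

```latex
\theta \;\approx\; \frac{D}{d}
       \;=\; \frac{7.9\times10^{4}\ \text{ly}}{1.8\times10^{8}\ \text{ly}}
       \;\approx\; 4.4\times10^{-4}\ \text{rad}
       \;\approx\; 90'' \;\approx\; 1.5'
```

This is of order an arcminute or two, broadly consistent with the galaxy's extent in survey images; catalogued values differ somewhat depending on the isophote used.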
Characteristics
Concentrated along the spiral arms of NGC 1376, bright blue knots of gas highlight areas of active star formation. These regions show an excess of light at ultraviolet (UV) wavelengths because they contain brilliant clusters of hot, newborn stars that are emitting UV light. The less intense, red areas near the core and between the arms consist mainly of older stars. The reddish dust lanes delineate cooler, denser regions where interstellar clouds collapse to form new stars. Behind the spiral arms is a sprinkling of reddish background galaxies.
NGC 1376 belongs to a class of spirals that are seen nearly face on from our line of sight. Its orientation aids astronomers in studying details and features of the galaxy from a relatively unobscured vantage point.
NGC 1376 hosted a supernova, SN 1990go, observed in 1990, which rivaled the brightness of the entire nucleus (as seen from ground-based telescopes) for several weeks.
References
External links
Unbarred spiral galaxies
Eridanus (constellation)
Astronomical objects discovered in 1785
Galaxies discovered in 1785
Discoveries by William Herschel
13352
03346-0512
-01-10-011
1376 | NGC 1376 | [
"Astronomy"
] | 308 | [
"Eridanus (constellation)",
"Constellations"
] |
75,477,596 | https://en.wikipedia.org/wiki/Ponsegromab | Ponsegromab (PF-06946860) is a monoclonal antibody that works as a GDF-15 inhibitor. It is being developed by Pfizer for cancer cachexia.
In September 2024, Pfizer disclosed that ponsegromab led to significant body weight increases in patients with non-small cell lung cancer, pancreatic cancer, or colorectal cancer in a phase 2 clinical trial.
References
Monoclonal antibodies
Experimental cancer drugs | Ponsegromab | [
"Chemistry"
] | 101 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,478,671 | https://en.wikipedia.org/wiki/Cendakimab | Cendakimab (RPC4046; ABT 308; CC-93538) is a monoclonal antibody against interleukin 13. It is being developed by Bristol Myers Squibb for eosinophilic esophagitis.
References
Drugs developed by Bristol Myers Squibb
Anti-interleukin monoclonal antibodies | Cendakimab | [
"Chemistry"
] | 75 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,478,714 | https://en.wikipedia.org/wiki/Obexelimab | Obexelimab is an experimental drug developed to treat IgG4-related disease and lupus. It works as a "bifunctional, non-cytolytic, humanised monoclonal antibody that binds CD19 and Fc gamma receptor IIb to inhibit B cells, plasmablasts, and CD19-expressing plasma cells."
References
Drugs developed by Bristol Myers Squibb
Monoclonal antibodies | Obexelimab | [
"Chemistry"
] | 88 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,478,745 | https://en.wikipedia.org/wiki/Danicamtiv | Danicamtiv is a cardiac myosin activator developed by Bristol Myers Squibb to treat dilated cardiomyopathy.
References
Drugs developed by Bristol Myers Squibb
Pyrazoles
Sulfones
Piperidines
Ureas
Isoxazoles
Fluorine compounds | Danicamtiv | [
"Chemistry"
] | 60 | [
"Organic compounds",
"Sulfones",
"Functional groups",
"Ureas"
] |
75,479,302 | https://en.wikipedia.org/wiki/PL-3994 | PL-3994 is an experimental bronchodilator that acts as an agonist of the natriuretic peptide receptor A. It is being developed by Palatin Technologies.
References
Receptor agonists
Bronchodilators
Experimental drugs | PL-3994 | [
"Chemistry"
] | 51 | [
"Receptor agonists",
"Neurochemistry"
] |
75,479,357 | https://en.wikipedia.org/wiki/Polyelectrolyte%20theory%20of%20the%20gene | The polyelectrolyte theory of the gene proposes that for a linear genetic biopolymer dissolved in water, such as DNA, to undergo Darwinian evolution anywhere in the universe, it must be a polyelectrolyte, a polymer containing repeating ionic charges. These charges maintain the uniform physical properties needed for Darwinian evolution, regardless of the information encoded in the genetic biopolymer. DNA is such a molecule. Regardless of its nucleic acid sequence, the negative charges on its backbone dominate the physical interactions of the molecule to such a degree that it maintains uniform physical properties such as its aqueous solubility and double-helix structure.
The polyelectrolyte theory of the gene was proposed by Steven A. Benner and Daniel Hutter in 2002 and has largely remained a theoretical framework astrobiologists have used to think about how life may be detected beyond Earth. This idea was later linked by Benner to Erwin Schrödinger's view of the gene as an "aperiodic crystal" to make a robust, universally generalized concept of a genetic biopolymer—a biopolymer acting as a unit of inheritance in Darwinian evolution.
Benner and others who built on his work have proposed methods for how to concentrate and identify genetic biopolymers on other planets and moons within the solar system using electrophoresis, which uses an electric field to concentrate charged compounds.
Although few have tested the polyelectrolyte theory of the gene, in 2019, lab experiments challenged the universality of this idea. This work was able to create non-electrolyte polymers capable of limited Darwinian evolution, but only up to a length of 72 nucleotides.
Physical structure of polyelectrolytes
A polyelectrolyte is a polymer with repeating electrostatically charged units. In the context of the polyelectrolyte theory of the gene, this polyelectrolyte is a biopolymer—a polymer derived from a living system—with a repeated ionically charged unit, similar to the genetic biopolymer in modern biology, DNA. Although RNA does not act as a genetic biopolymer archive in modern biology—except in the case of some viruses such as coronavirus and HIV—the RNA World hypothesis suggests that RNA may have preceded DNA as life’s first genetic biopolymer. The nucleotide building blocks that make up DNA and RNA are connected by negatively charged phosphate groups. These phosphodiester linkages create the repeating negative charges on the molecule’s backbone that give DNA and RNA their polyelectrolyte nature.
Polyelectrolytes in the context of genetic biopolymers
To participate in Darwinian evolution, which can be described as "descent with modification", a unit of inheritance must be capable of imperfect replication to occasionally produce a new modified unit of inheritance, which must still be capable of being replicated. This imperfect replication leads to the variation on which Darwinian evolution can act.
The polyelectrolyte theory of the gene attempts to understand modern biology’s unit of inheritance, DNA, at a generalizable level. In 2002, Steven A. Benner and Daniel Hutter identified the repeated charges in DNA's phosphodiester linkages as crucial to its function as a genetic biopolymer. They proposed with the polyelectrolyte theory of the gene that repeated ionic charges—positive or negative—are a general requirement for all water-dissolved genetic biopolymers to undergo Darwinian evolution anywhere in the cosmos.
This concept works in tandem with the view of the gene as an "aperiodic crystal" as proposed by Erwin Schrödinger in his 1944 book "What Is Life?". An aperiodic crystal, as Schrödinger describes it, has a discrete set of molecular building blocks in a non-repeating arrangement. DNA is an aperiodic crystal composed of discrete nucleobases (A, T, C, and G), which are arranged based on the information they encode, not in any repeated format. While this idea of an "aperiodic crystal" was not initially linked to the polyelectrolyte theory of the gene, Benner, in later work, connected the two.
Polyelectrolytes remain physically uniform regardless of the information encoded
In biochemistry, the structure of a biomolecule dictates its function, and therefore changes in structure cause changes in function. To work as a unit of inheritance, the genetic biopolymer must maintain shape and, therefore, physical and chemical consistency, regardless of the information the structure encodes. DNA is such a molecule. No matter what the nucleic acid sequence is, DNA maintains a consistent double helix structure and, therefore, the consistent physical properties that allow it to remain dissolved in water and be replicated by cellular machinery. The polyelectrolyte theory of the gene reasons that DNA can maintain its shape regardless of mutations because the negative charges on the phosphate backbone dominate the physical interactions of the molecule to such a degree that changes in the nucleic acid sequence, the encoded information, do not affect the overall physical behavior of the molecule.
For example, thymidine nucleotides (T) are very soluble in water while guanosine nucleotides (G) are much less soluble; however, an oligonucleotide—a short polynucleotide sequence—composed of only thymine and one composed of only guanine have the same overall structure and physical properties. If changes in the nucleic acid sequence, which encodes genetic information, changed the physical properties of DNA, those changes could break down the mechanism by which DNA replicates.
This physical uniformity is very rare in nature. Consider, for example, another biopolymer: proteins. The nucleic acid sequence in DNA codes for the sequence of amino acids that make up proteins. A change to even a single amino acid in the primary sequence of a protein can completely change the physical properties of that protein. For example, the sickle-cell trait is caused by a single mutation of an adenine to a thymine in the hemoglobin gene, causing a switch from a glutamic acid to a valine. This completely changes the three-dimensional structure of hemoglobin and thus the physical properties of the protein, leading to the sickle-cell trait.
Proteins are sensitive to changes in amino acid sequence because the 20 different amino acid side chains form bonds and partial bonds with each other. In addition, the protein backbone has a dipole moment—having partially positive and partially negative sides—which can further create interactions within the molecule. These side-chain and backbone interactions are sensitive to changes in the environment and amino acid sequence. It is unlikely that a protein could act as a genetic biomolecule because changes in amino acid sequence lead to changes in overall physical structure and properties.
Another non-electrolyte biopolymer would suffer the same challenges as a protein when acting as a genetic biomolecule. Changes in physical properties with changes in encoded information would mean that such a molecule would struggle to be replicated with certain sequences of encoded information, as those sequences would result in physical properties incompatible with replication. This problem means that the hypothetical protein gene would not be able to explore all possible genetic sequences, as certain sequences would cause the molecule to fail to be replicated based on the physical structure of its gene, not on the fitness of what the gene codes for.
Benner and Hutter initially described this property of DNA as being "capable of surviving modifications in constitution without loss of properties essential for replication" or the acronym COSMIC-LOPER. This acronym gives scientists a shorthand way of describing the complex idea of a genetic biopolymer having the physical uniformity regardless of encoded information that allows it to be replicated.
Although RNA is often described as a genetic biopolymer because of its theorized role as life’s first unit of inheritance (RNA World), it is not entirely COSMIC-LOPER. RNA, especially sequences high in guanine (G), is capable of folding and performing enzyme-type chemistry. Folding in guanine-rich RNA sequences prevents the templating ability of RNA and thus its ability to be replicated in an RNA-world scenario, for the same reason it would be difficult for a protein-based gene to replicate.
Repeated ionic charges increase solubility in water
The repeated negative charges increase the solubility of DNA and RNA in water. Because ionic charges are highly soluble in water, having them on the molecule's backbone increases the molecule's solubility. If the backbone of a hypothetical genetic biopolymer were linked together in a non-ionic fashion, the solubility of the whole molecule would decrease. Solubility is important because, in order to be replicated, DNA—or any other genetic biomolecule—must be soluble to interact with replicative machinery.
Repeated ionic charges promote Watson–Crick base pairing specificity
The repeated negative charges of the DNA backbone electrostatically repel each other, preventing interactions both within and between DNA strands. This repulsion promotes specific interactions along the Watson–Crick 'edge' of the nucleobases, promoting Watson–Crick base pairing specificity—A pairs with T and C pairs with G.
Repeated ionic charges prevent folding
The repeated negative charges on the backbone keep DNA and many RNA molecules from folding and allow them to act as templates. In water, molecules take on a conformation that is the most energetically favorable, with the lowest Gibbs free energy. This configuration maximizes favorable interactions (hydrogen bonding, positive-negative charge interactions, van der Waals interactions) and minimizes unfavorable interactions (i.e., hydrophilic-hydrophobic interactions and like charge interactions). In the case of double-stranded DNA and RNA, the most energetically favorable form is the linear double helix configuration because it maximizes interactions between base pairs and between the negatively charged backbone and the surrounding water molecules while minimizing interactions between the negatively charged phosphodiester linkages of the backbone. If the double-stranded DNA or RNA molecule folded, it would exchange favorable water-backbone interactions for unfavorable backbone-backbone interactions. A biopolymer without an ionically charged backbone, like proteins, would not produce unfavorable backbone-backbone interaction during folding and thus would readily fold and aggregate. This inherent tendency towards linearity improves DNA’s ability to act as a template for replication because folded and aggregated conformations are inaccessible to replication machinery.
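For a rough sense of scale (a back-of-envelope estimate added here, not from the article), the Coulomb repulsion between two neighboring backbone phosphate charges can be estimated by treating water as a uniform dielectric with relative permittivity of about 80 and assuming a charge spacing of roughly 0.7 nm, while ignoring counterion screening:

```latex
U \;\approx\; \frac{e^{2}}{4\pi\varepsilon_{0}\,\varepsilon_{r}\,r}
  \;\approx\; \frac{1.44\ \text{eV}\cdot\text{nm}}{80 \times 0.7\ \text{nm}}
  \;\approx\; 0.026\ \text{eV} \;\approx\; k_{\mathrm B}T \ \text{at room temperature}
```

Even this single-pair, heavily screened repulsion is comparable to the thermal energy, and summed over the many phosphates of a long strand it biases the chain toward the extended, template-accessible conformations described above.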
Lab experiments
Lab experiments conducted with non-electrolyte analogs of DNA and RNA initially inspired Benner and Hutter to publish on the polyelectrolyte theory of the gene. During the late 1980s and 1990s, scientists developed synthetic DNA-like molecules to bind to and silence unwanted mRNA gene products as a way to treat disease. As part of this exploratory research, researchers developed a variety of non-electrolyte RNA and DNA analogs that would be able to cross the cell membrane, which DNA and RNA are incapable of doing because of their charged backbones. One of these analogs substituted a sulfone (SO₂) for the natural phosphodiester (PO₂⁻) linkage. While initial experiments showed the sulfone analog to have very similar properties to DNA as a dimer—two nucleotides linked together—when longer sulfone analogs were synthesized, they folded, lost Watson–Crick base pair specificity, and had dramatic changes in physical properties due to small changes in nucleic acid sequence. The reduction in the quality of the traits that make DNA a good genetic molecule was seen with all the nonionic linkers that were tested as of 2002.
The closest non-electrolyte analog to maintaining the qualities of DNA was the polyamide-linked nucleic acid analog (PNA), which replaced the phosphodiester linkage of DNA with an uncharged N-(2-aminoethyl)glycine linkage. Even Benner and Hutter questioned if PNA might disprove their polyelectrolyte hypothesis; however, even though PNA maintained the qualities of DNA up to a length of 20 nucleotides, beyond that length, the molecules started to lose Watson–Crick base pair specificity, aggregated, and became sensitive to changes in nucleic acid sequence.
Lab experiments that challenge the polyelectrolyte theory of the gene
In 2019, a group led by Philipp Holliger in Cambridge, England, developed non-electrolyte P-alkylphosphonate nucleic acid (phNA) DNA analogs that were able to undergo templated synthesis and directed evolution. The phNA analogs substitute the charged oxygen on DNA's phosphate backbone with an uncharged methyl or ethyl group. While other DNA analogs have been shown to undergo templated synthesis and directed evolution, this was the first time a non-electrolyte DNA analog had been shown to have these properties and the first time the polyelectrolyte theory of the gene had been experimentally challenged. However, template-directed synthesis of phNA was only performed up to a length of 72 nucleotides. This is around the length of the shortest naturally occurring genes, such as those for tRNA, but it is orders of magnitude shorter than the genome of even the smallest free-living organism. The human genome, for reference, is 3.05×10⁹ base pairs long.
As an "agnostic biosignature"
Since its inception, the polyelectrolyte theory of the gene has been put in the context of searching for life in the universe. This theory, combined with Schrödinger's view of a gene as an aperiodic crystal, provides a so-called "agnostic biosignature", a sign of life that does not presuppose any biochemistry. In other words, a generalized view of life should hold anywhere in the universe.
Since the theorized genetic polyelectrolyte biomolecules could be charged either positively or negatively, as in the case of DNA and RNA, they can be concentrated in water with an electric field using electrophoresis or electrodialysis. This hypothetical concentration device has been called an agnostic life-finding device. Similar to how electrophoresis works to separate DNA molecules, negatively charged molecules, like DNA or RNA, would be attracted to a positively charged anode, and positively charged genetic biomolecules would be attracted to a negatively charged cathode.
Once the polyelectrolyte biomolecule has been concentrated, Benner suggests the molecules be tested for size and shape uniformity. In addition, the molecules should be tested for the use of a limited number of building blocks arranged in a non-repeating fashion, an aperiodic crystal structure. Benner has suggested that this could be done using matrix-assisted laser desorption ionization (MALDI) paired with an orbitrap high-resolution mass spectrometer. Another suggested approach has been to use nanopore sequencing technology, although questions remain about whether the solar radiation experienced during transit and on site would affect the functionality of the device. While space agencies have yet to use any of these proposed systems for life detection, they may be used in the future on Mars, Enceladus, and Europa.
Despite the polyelectrolyte theory of the gene and the aperiodic crystal view of the gene being described as agnostic biosignatures, these theories are terra-, or earth-life, centric. It is unknown what life on another world might be; while it is often stated that life of any kind needs biomolecules and water, this may not be true.
References
Wikipedia Student Program
Origin of life | Polyelectrolyte theory of the gene | [
"Biology"
] | 3,280 | [
"Biological hypotheses",
"Origin of life"
] |
75,479,694 | https://en.wikipedia.org/wiki/Verinurad | Verinurad is a selective URAT1 inhibitor developed for gout and heart failure by AstraZeneca.
References
Drugs developed by AstraZeneca
Nitriles
Naphthalenes
Pyridines
Thioethers
Carboxylic acids | Verinurad | [
"Chemistry"
] | 54 | [
"Carboxylic acids",
"Nitriles",
"Functional groups"
] |
75,480,170 | https://en.wikipedia.org/wiki/Evazarsen%20sodium | Evazarsen sodium (IONIS-AGT-LRx) is an antisense RNA designed to inhibit angiotensinogen as an alternative to other mechanisms to target the renin–angiotensin–aldosterone system.
References
Antisense RNA
Agents acting on the renin-angiotensin system | Evazarsen sodium | [
"Chemistry"
] | 70 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,480,859 | https://en.wikipedia.org/wiki/Petz%20recovery%20map | In quantum information theory, a mix of quantum mechanics and information theory, the Petz recovery map can be thought of as a quantum analog of Bayes' theorem. Proposed by Dénes Petz, the Petz recovery map is a quantum channel associated with a given quantum channel and quantum state. This recovery map is designed in a manner that, when applied to an output state resulting from the given quantum channel acting on an input state, it enables the inference of the original input state. In essence, the Petz recovery map serves as a tool for reconstructing information about the initial quantum state from its transformed counterpart under the influence of the specified quantum channel.
The Petz recovery map finds applications in various domains, including quantum retrodiction, quantum error correction, and entanglement wedge reconstruction for black hole physics.
Definition
Suppose we have a quantum state described by a density operator $\sigma$ and a quantum channel $\mathcal{N}$. The Petz recovery map is defined as
$$\mathcal{P}_{\sigma,\mathcal{N}}(\omega) \;=\; \sigma^{1/2}\,\mathcal{N}^{\dagger}\!\left(\mathcal{N}(\sigma)^{-1/2}\,\omega\,\mathcal{N}(\sigma)^{-1/2}\right)\sigma^{1/2}.$$
Notice that $\mathcal{N}^{\dagger}$ is the Hilbert–Schmidt adjoint of $\mathcal{N}$.
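A minimal numerical sketch of this definition (an illustration added here, not part of the article), assuming the channel is given in Kraus form and using NumPy/SciPy; the pseudo-inverse square root restricts the map to the support of the channel output, matching the definition above:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def apply_channel(kraus_ops, rho):
    """N(rho) = sum_k K_k rho K_k^dagger for a channel given by Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

def apply_adjoint_channel(kraus_ops, X):
    """Hilbert-Schmidt adjoint: N^dagger(X) = sum_k K_k^dagger X K_k."""
    return sum(K.conj().T @ X @ K for K in kraus_ops)

def petz_recovery(kraus_ops, sigma, omega):
    """P_{sigma,N}(omega) = sigma^(1/2) N^dag(N(sigma)^(-1/2) omega N(sigma)^(-1/2)) sigma^(1/2)."""
    n_sigma = apply_channel(kraus_ops, sigma)
    # Pseudo-inverse square root, so the map is defined on the support of N(sigma).
    inv_sqrt = fractional_matrix_power(np.linalg.pinv(n_sigma), 0.5)
    sqrt_sigma = fractional_matrix_power(sigma, 0.5)
    inner = apply_adjoint_channel(kraus_ops, inv_sqrt @ omega @ inv_sqrt)
    return sqrt_sigma @ inner @ sqrt_sigma

# Example: qubit depolarizing channel, maximally mixed reference state.
p = 0.2
kraus = [
    np.sqrt(1 - 3 * p / 4) * np.eye(2),
    np.sqrt(p / 4) * np.array([[0, 1], [1, 0]]),      # Pauli X
    np.sqrt(p / 4) * np.array([[0, -1j], [1j, 0]]),   # Pauli Y
    np.sqrt(p / 4) * np.array([[1, 0], [0, -1]]),     # Pauli Z
]
sigma = np.eye(2) / 2                                  # reference state
rho_in = np.array([[0.9, 0.1], [0.1, 0.1]])            # some input state
rho_out = apply_channel(kraus, rho_in)                 # state after the channel
print(np.round(petz_recovery(kraus, sigma, rho_out), 3))  # approximate recovery of rho_in
```

For this depolarizing example the recovered state is close to, but not identical to, the input; exact recovery is guaranteed only in special cases, such as when the channel preserves the relative entropy between the input and the reference state.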
The Petz map has been generalized in various ways in the field of quantum information theory.
Properties of the Petz recovery map
A crucial property of the Petz recovery map is its ability to function as a quantum channel in certain cases, making it an essential tool in quantum information theory.
The Petz recovery map is a completely positive map, since (i) sandwiching by the positive semi-definite operator $\mathcal{N}(\sigma)^{-1/2}$ is completely positive; (ii) $\mathcal{N}^{\dagger}$ is also completely positive when $\mathcal{N}$ is completely positive; and (iii) sandwiching by the positive semi-definite operator $\sigma^{1/2}$ is completely positive.
It is also clear that $\mathcal{P}_{\sigma,\mathcal{N}}$ is trace non-increasing, since
$$\operatorname{Tr}\!\left[\mathcal{P}_{\sigma,\mathcal{N}}(\omega)\right] = \operatorname{Tr}\!\left[\sigma\,\mathcal{N}^{\dagger}\!\left(\mathcal{N}(\sigma)^{-1/2}\,\omega\,\mathcal{N}(\sigma)^{-1/2}\right)\right] = \operatorname{Tr}\!\left[\mathcal{N}(\sigma)^{-1/2}\,\mathcal{N}(\sigma)\,\mathcal{N}(\sigma)^{-1/2}\,\omega\right] = \operatorname{Tr}[\Pi\,\omega] \;\le\; \operatorname{Tr}[\omega],$$
where $\Pi$ is the projector onto the support of $\mathcal{N}(\sigma)$.
From these two properties, when $\mathcal{N}(\sigma)$ is invertible the Petz recovery map is a quantum channel, viz., a completely positive trace-preserving (CPTP) map.
References
Further reading
Brief Overview of the Petz recovery map and its applications
Quantum information theory
Eponymous equations of physics
Von Neumann algebras | Petz recovery map | [
"Physics"
] | 392 | [
"Eponymous equations of physics",
"Equations of physics"
] |
75,480,886 | https://en.wikipedia.org/wiki/Barbara%20Cannon | Barbara Cannon is a British-Swedish biochemist, physiologist and an academic. She is an emeritus professor at Stockholm University as well as the chairman of the scientific advisory board at The Helmholtz Centre. She is also a consultant at Combigene.
Cannon is best known for her work on mammalian thermogenesis, primarily focusing on the function of brown adipose tissue. She is the recipient of the 2014 King's Medal from the Order of the Seraphim, Sweden.
Cannon is a Fellow of the Royal Swedish Academy of Sciences.
Education
Cannon completed her B.Sc in Biochemistry from London University in 1967. In 1971, she obtained a Ph.D. in Physiology from Stockholm University.
Career
Cannon started her academic career in 1974 at Stockholm University, where she held various positions, including a research associate at the Wenner-Gren Institute from 1974 to 1980. Subsequently, she served as an associate professor from 1980 to 1983 and then as a professor of physiology from 1983 to 2013. Since 2013, she has held the title of emeritus professor at Stockholm University.
Cannon's involvement with the Royal Swedish Academy of Sciences included a tenure as vice president from 2003 to 2008 and subsequently as president from 2012 to 2015. Furthermore, she played an important role in the Nobel Foundation, serving as a member of the Trustees from 2006 to 2011 and taking on the role of chairman from 2008 to 2011.
Research
Cannon has conducted research in the field of mammalian thermogenesis. Her research portfolio includes 185 original articles, as well as 125 invited review articles and book chapters. Notably, she authored a fundamental review on brown adipose tissue function in Physiological Reviews and a paradigm-changing review article for the American Journal of Physiology where she presented findings from radiology literature suggesting the existence of brown adipose tissue in adult humans.
Function and significance of brown adipose tissue
Cannon's initial publications, alongside Stanley Prusiner, definitively showed that thermogenesis was primarily driven by mitochondrial uncoupling, likely induced by the presence of free fatty acids. She subsequently showcased important elements in controlling the immediate function of the uncoupling protein, involving fatty acids, possibly their CoA derivatives, and reactive oxygen species (ROS) products. Her articles on regulating ATP synthase levels in mitochondria relative to electron transport chain density, demonstrated in brown adipose tissue, suggest that this concept applies universally, with the P1 isoform of subunit c governing fully assembled enzyme levels in the membrane.
Cannon pioneered primary cell culture systems that underpin knowledge of brown adipocyte development and recruitment. Using these cultures, she identified adrenergic signal transduction pathways responsible for both acute thermogenesis and chronic actions like cell proliferation and differentiation triggered by noradrenaline. She further clarified that the development paths of brown and white adipocytes are separate, with brown adipocytes showing characteristics of skeletal muscle early in their differentiation process. In her later research, she found that cultured adipocytes from different white adipose depots contained precursor cells capable of adopting a brown-like or "brite" phenotype, also known as beige fat. Although these findings have sparked significant interest, she has advised caution regarding the belief that these cells alone will solve obesity problems.
Focusing on integrative physiology, Cannon's research on mice lacking the UCP1 gene revealed that there are no alternative mechanisms for adrenergically induced adaptive thermogenesis apart from UCP1 in brown adipose tissue. This finding challenges the notion of adaptive adrenergic muscular thermogenesis and suggests that UCP1-deficient mice tend to develop modest obesity spontaneously. Moreover, she advocated for humanizing mice by providing them with appealing diets and maintaining their housing conditions at thermoneutral temperatures to mimic the metabolism of adult humans at its lowest point. Furthermore, her review on active brown adipose tissue in adult humans has prompted numerous follow-up experiments and offered promising avenues for pharmacological interventions in obesity management.
Awards and honors
1989 – Fellow, the Royal Swedish Academy of Sciences
2009 – Honorary doctor, Monash University, Melbourne
2012 – Knut Schmidt-Nielsen Prize Lecture, International Union of Physiological Sciences
2013 – Honorary doctor, Royal Veterinary College, London
2013 – European Lipid Research Award, EuroFedLipid
2014 – Honorary doctor, Buckingham University, Buckingham, UK
2014 – King's Medal (12th size), the Order of the Seraphim
2016 – Prize for Scientific Reviews, Experimental Biology and American Physiological Society
2016 – Recipient of the Order of the Rising Sun, Gold and Silver Star, Japan
2017 – Fellow, Academia Europaea
Selected articles
Cannon, B., & Nedergaard, J. (2004). Brown adipose tissue: function and physiological significance. Physiological Reviews, 84, 277 - 359.
Nedergaard, J., Bengtsson, T., & Cannon, B. (2007). Unexpected evidence for active brown adipose tissue in adult humans. American Journal of Physiology. Endocrinology and Metabolism, 293, E444- E452.
Feldmann, H. M., Golozoubova, V., Cannon, B., & Nedergaard, J. (2009). UCP1 ablation induces obesity and abolishes diet-induced thermogenesis in mice exempt from thermal stress by living at thermoneutrality. Cell Metabolism, 9(2), 203–209.
Whittle, A. J., Carobbio, S., Martins, L., Slawik, M., Hondares, E., Vázquez, M. J., ... & Vidal-Puig, A. (2012). BMP8B increases brown adipose tissue thermogenesis through both central and peripheral actions. Cell, 149(4), 871–885.
Fischer, A. W., Cannon, B., & Nedergaard, J. (2018). Optimal housing temperatures for mice to mimic the thermal environment of humans: an experimental study. Molecular metabolism, 7, 161–170.
References
Biochemists
Physiologists
Alumni of the University of London
Stockholm University alumni
Academic staff of Stockholm University
Living people
Year of birth missing (living people) | Barbara Cannon | [
"Chemistry",
"Biology"
] | 1,292 | [
"Biochemistry",
"Biochemists"
] |
75,484,153 | https://en.wikipedia.org/wiki/Gadolinium%28III%29%20perchlorate | Gadolinium(III) perchlorate is an inorganic compound with the chemical formula Gd(ClO4)3. It can be obtained by reacting gadolinium(III) oxide with perchloric acid (70~72%) at 80 °C. It can form colorless Gd(ClO4)3·9H2O·4C4H8O2 complex crystals with 1,4-dioxane. Reaction with inositol, sodium acetate, and sodium hydroxide gives complexes containing the giant molecular cluster {Gd140}. Reaction with chromium(III) chloride and 2,2'-bipyridine at pH = 5.1 gives [GdCr(bipy)2(OH)2(H2O)6](ClO4)4·2H2O.
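A balanced equation consistent with the preparation described above (a standard acid-base reaction; the hydrate actually isolated depends on conditions and is not specified here):

```latex
\mathrm{Gd_2O_3} \;+\; 6\,\mathrm{HClO_4} \;\longrightarrow\; 2\,\mathrm{Gd(ClO_4)_3} \;+\; 3\,\mathrm{H_2O}
```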
References
Gadolinium compounds
Perchlorates | Gadolinium(III) perchlorate | [
"Chemistry"
] | 194 | [
"Perchlorates",
"Salts"
] |
75,484,173 | https://en.wikipedia.org/wiki/Steinitz%27s%20theorem%20%28field%20theory%29 | In field theory, Steinitz's theorem states that a finite extension of fields $E/F$ is simple if and only if there are only finitely many intermediate fields between $F$ and $E$.
Proof
Suppose first that $E/F$ is simple, that is to say $E = F(\alpha)$ for some $\alpha \in E$. Let $K$ be any intermediate field between $E$ and $F$, and let $g$ be the minimal polynomial of $\alpha$ over $K$. Let $K'$ be the field extension of $F$ generated by all the coefficients of $g$. Then $K' \subseteq K$ by definition of the minimal polynomial, but the degree of $E$ over $K'$ is (like that of $E$ over $K$) simply the degree of $g$. Therefore, by multiplicativity of degree, $[K : K'] = 1$ and hence $K = K'$.
But if $f$ is the minimal polynomial of $\alpha$ over $F$, then $g$ divides $f$ in $E[X]$, and since $f$ has only finitely many monic divisors in $E[X]$, the first direction follows.
Conversely, if the number of intermediate fields between $F$ and $E$ is finite, we distinguish two cases:
If $F$ is finite, then so is $E$, and any generator of the (cyclic) multiplicative group $E^{\times}$ will generate the field extension.
If $F$ is infinite, then each proper intermediate field between $F$ and $E$ is a proper $F$-subspace of $E$, and a finite union of proper subspaces cannot be all of $E$. Thus any element $\alpha$ of $E$ outside this union satisfies $F(\alpha) = E$.
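As a concrete illustration (an example added here, not part of the original article), consider $E=\mathbb{Q}(\sqrt{2},\sqrt{3})$ over $F=\mathbb{Q}$, a degree-4 extension whose only fields between $\mathbb{Q}$ and $E$ (inclusive) are $\mathbb{Q}$, $\mathbb{Q}(\sqrt{2})$, $\mathbb{Q}(\sqrt{3})$, $\mathbb{Q}(\sqrt{6})$, and $E$ itself; the theorem then guarantees a primitive element, and indeed:

```latex
E=\mathbb{Q}(\sqrt{2},\sqrt{3})=\mathbb{Q}(\sqrt{2}+\sqrt{3}),
\qquad
\text{with minimal polynomial}\quad
(x^{2}-5)^{2}-24 \;=\; x^{4}-10x^{2}+1 .
```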
History
This theorem was found and proven in 1910 by Ernst Steinitz.
References
Field (mathematics)
Theorems in abstract algebra | Steinitz's theorem (field theory) | [
"Mathematics"
] | 245 | [
"Theorems in algebra",
"Theorems in abstract algebra"
] |
75,484,264 | https://en.wikipedia.org/wiki/Mitotrope | Mitotropes are a novel class of drugs that aim to improve cardiac performance by influencing the mitochondria. Their intended effect is similar to that of the calcium-based inotropes, but they are intended to have fewer long-term side effects.
References
Mitochondria
Drugs acting on the cardiovascular system | Mitotrope | [
"Chemistry"
] | 63 | [
"Pharmacology",
"Mitochondria",
"Medicinal chemistry stubs",
"Pharmacology stubs",
"Metabolism"
] |
75,484,430 | https://en.wikipedia.org/wiki/D%20domain | The D-domain (dimerization domain) is a conserved dimerization motif found upstream of the F-box domain in WD40-repeat F-box proteins such as Cdc4, Met30, β-TrCP and Pop1/2. However, Vts1, a SAM-domain RNA-binding protein found in yeast, contains a D-domain even though it does not have an F-box domain.
As targeting domain or docking site, D-domain is found in the ETS-domain transcription factor Elk-1. It is distinct from the phospho-acceptor motifs and plays a crucial function in the efficient phosphorylation and activation of Elk-1 by MAP kinases (MAPKs) such as extracellular signal-regulated protein kinase (ERK), JNK, mitogen and stress-activated protein kinase-1, and ribosomal S6 kinase.
Additionally this domain can be incorporated into chimeric antigen receptor (CAR) designs for T cell therapies that allows for the specific recognition and binding of target antigens, such as CD123, which is a potential therapeutic target for hematologic malignancies like acute myelogenous leukemia (AML).
Core components
The D-domain is formed of three alpha helices, which generate a parallel dimer by self-associating in a right-handed super-helical manner. There are two possible configurations for this domain's N terminus: an unstructured loop and an amphipathic alpha-helix (H0). Interactions with the AF-2 coactivator-binding groove of the adjacent thyroid hormone receptor ligand-binding domain (TR-LBD) are necessary for the formation of the H0 structure of the D-domain. While additional C-terminal residues are crucial only for JNKs, residues in the N-terminal end of the D-domain are significant not only for JNK MAPKs but also for ERK.
The unique topology of D-domain enables it to target epitopes that may not be accessible to scFv CDR loops, offering the potential for improved antigen recognition.
Function
A D-domain can interact with another D-domain belonging to an identical protein; this type of interaction is called homotypic. For instance, this kind of domain is important for the interaction of a subclass of F-box proteins named after WD40 repeats. The dimer arranges in a configuration known as suprafacial, observed between the E2 site of each SCF protomer and the substrate-binding site. The D-domain is also involved in the efficient self-binding of Fbw7 and in the stable binding of the cyclin E T380 phospho-degron to dimerized Fbw7. Dimerization of β-TrCP1 and β-TrCP2 is also mediated by the NH2-terminal D-domain. In the thyroid hormone receptor (TR), this domain connects the DNA-binding domain (DBD) with the ligand-binding domain (LBD). It can form functionally useful extensions of the DBD and LBD. It can also unfold, allowing TRs to adjust to various DNA response elements, and it can substantially control rotational flexibility and TR DNA-binding activity. This domain also serves as a JNK-binding motif, with variations in the respective kinase-binding capacity observed between the c-Jun D-domain and the Elk-1 D-domain. The cytoplasmic region of the receptor for advanced glycation end-products (RAGE) contains a sequence similar to the D-domain, which is important for the direct interaction between ERK and RAGE. This interaction is independent of the phosphorylation status of ERK and is conserved across species. Targeting via this motif increases the specificity and efficiency of the MAP kinase signal transduction pathway.
D-domain CARs have demonstrated potent antitumor activity in xenograft models, leading to complete durable remission in AML models. It can also be used to generate functional, bi-specific CARs by combining them with other specific targeting domains, such as a CD19-specific scFv.
Regulation
Mutations in the D-domain can selectively inhibit TR interactions with specific DNA response elements and affect TR activity. In addition, it can be engineered to be less immunogenic by removing putative T cell epitopes, potentially reducing the risk of antigen-independent exhaustion. On the other hand, mutation of the D-domain has only a trivial effect on phosphorylation by p38 MAPKs, indicating that this domain contributes little to the interaction of Elk-1 with p38 MAPKs. Also, dimerization of the SCF complex facilitated by the D-domain has no significant impact on catalytic competence or substrate affinity but enhances lysine acceptor site utilization.
References
Protein domains | D domain | [
"Biology"
] | 1,025 | [
"Protein domains",
"Protein classification"
] |
75,484,800 | https://en.wikipedia.org/wiki/Tegoprubart | Tegoprubart (AT-1501) is an experimental humanized monoclonal antibody that inhibits CD40L. It is being developed for the treatment of ALS and for the prevention of transplant rejection.
References
Experimental monoclonal antibodies | Tegoprubart | [
"Chemistry"
] | 44 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
74,054,860 | https://en.wikipedia.org/wiki/NGC%204593 | NGC 4593 is a barred spiral galaxy located in the constellation Virgo. It is located at a distance of about 120 million light years from Earth, which, given its apparent dimensions, means that NGC 4593 is about 125,000 light years across. It was discovered by William Herschel on April 17, 1784. It is a Seyfert galaxy.
Characteristics
NGC 4593 is a barred spiral galaxy with a nearly complete ring. The galaxy has a large elliptical/boxy pseudobulge, with the bar emerging from its northeast and southwest corners. From the ends of the bar begin two diffuse, smooth spiral arms that can be traced for about half a revolution. At the south part of the ring a third, smaller spiral arm may emerge. One arm emerges from the ring at one end of the bar while a second emerges about 15 degrees before the other end.
Active nucleus
The nucleus of NGC 4593 has been found to be active and it has been categorised as a type I Seyfert galaxy. The most accepted theory for the energy source of active galactic nuclei is the presence of an accretion disk around a supermassive black hole. The mass of the black hole in the centre of NGC 4593 has been estimated based on reverberation mapping and on X-ray flux variations.
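For background, a reverberation-mapping mass estimate of this kind generally rests on the virial relation below (a standard textbook formula given for context; the symbols and the virial factor are not quoted from this article):

```latex
% Virial black-hole mass estimate used in reverberation mapping:
%   tau      -- time lag between continuum and broad-line variations
%   Delta V  -- velocity width of the broad emission line
%   f        -- dimensionless virial factor of order unity (geometry dependent)
\[
  M_{\mathrm{BH}} \simeq f\,\frac{c\,\tau\,(\Delta V)^{2}}{G}
\]
```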
The nucleus has been found to be a bright X-ray source. The source is variable both in flux and in spectrum, varying on a timescale of a few kiloseconds. The changes in the X-ray band are followed by variations in the ultraviolet and visual light bands, with the lag being 1.3 ± 0.5 days in the V-band. X-ray observations by the Chandra X-ray Observatory indicate the presence of a warm absorber and outflows of ionised gas that are generated at different distances from the nucleus. The overall X-ray spectrum indicates the presence of a hot corona, which generates the hard X-rays, and a warm medium, which is responsible for the soft X-ray excess.
A circumnuclear dust ring with a radius of 5 arcseconds that is connected with the dust lanes in the bar of the galaxy is seen in visible light. Similar rings in other galaxies have been found to exhibit intense star formation, but that isn't the case with NGC 4593, indicating that starburst activity is episodic. Inside the ring lies a single spiral arm and no other dust features.
Nearby galaxies
NGC 4593 is the foremost galaxy in a galaxy group known as the NGC 4593 group. Other members of the group include the spiral galaxy NGC 4602 and the smaller galaxies MCG-01-32-37, MCG-01-32-33, SVEN 314, and SVEN 328. Makarov et al. consider NGC 4604 to be a member of this group as well. SVEN 314 is a dwarf galaxy which lies at a projected distance of 22 kpc and is the closest galaxy to NGC 4593. There is evidence that NGC 4593 is interacting with MCG-01-32-33, which lies about two disk radii away, as the spiral pattern is slightly distorted towards the direction of that galaxy, possibly as a result of tidal forces.
Other galaxies near the NGC 4593 group include UGC 7798 and its group, IC 804, NGC 4626, NGC 4628, and NGC 4671. These galaxies were considered to be part of the Virgo II Groups, but that is no longer accepted, and they are considered to lie between the Local Supercluster and the Hydra–Centaurus Supercluster.
References
External links
NGC 4593 on SIMBAD
Barred spiral galaxies
Ring galaxies
Seyfert galaxies
Virgo (constellation)
4593
Markarian galaxies
42375
Discoveries by William Herschel
Astronomical objects discovered in 1784 | NGC 4593 | [
"Astronomy"
] | 804 | [
"Virgo (constellation)",
"Constellations"
] |
74,056,025 | https://en.wikipedia.org/wiki/Four%20Core%20Genotypes%20mouse%20model | Four Core Genotypes (FCG) mice are laboratory mice produced by genetic engineering that allow biomedical researchers to determine if a sex difference in phenotype is caused by effects of gonadal hormones or sex chromosome genes. The four genotypes include XX and XY mice with ovaries, and XX and XY mice with testes. The comparison of XX and XY mice with the same type of gonad reveals sex differences in phenotypes that are caused by sex chromosome genes. The comparison of mice with different gonads but the same sex chromosomes reveals sex differences in phenotypes that are caused by gonadal hormones.
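To illustrate how the two comparisons described above are typically operationalised in an analysis, here is a hypothetical sketch in Python; the column names, data layout and phenotype values are assumptions for illustration and are not taken from the source.

```python
import pandas as pd

# Hypothetical Four Core Genotypes dataset: one row per mouse.
# Column names and phenotype values are illustrative assumptions only.
df = pd.DataFrame({
    "chromosomes": ["XX", "XY", "XX", "XY"] * 5,
    "gonads":      ["ovaries", "ovaries", "testes", "testes"] * 5,
    "phenotype":   [1.0, 1.4, 1.1, 1.6] * 5,   # made-up measurements
})

# Mean phenotype for each of the four core genotypes.
means = df.pivot_table(index="gonads", columns="chromosomes", values="phenotype")

# Sex-chromosome effect: XY vs XX, holding gonad type fixed.
chromosome_effect = means["XY"] - means["XX"]

# Gonadal-hormone effect: testes vs ovaries, holding chromosomes fixed.
hormone_effect = means.loc["testes"] - means.loc["ovaries"]

print(chromosome_effect)   # one value per gonad type
print(hormone_effect)      # one value per chromosome complement
```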
Development
The FCG model was created by Paul Burgoyne and Robin Lovell-Badge at the National Institute for Medical Research, London (now Francis Crick Institute). The model involves deleting the testis-determining gene Sry from the Y chromosome, and inserting Sry onto chromosome 3. Therefore the sex chromosomes no longer determine the type of gonad, so that XX and XY mice can have the same type of gonad and gonadal hormones.
Significance
The FCG model has been used to discover that the XX and XY animals respond differently in models of human physiology and disease, including autoimmunity, metabolism, cardiovascular disease, cancer, Alzheimer’s disease, and neural and behavioral processes. These findings imply that some sex chromosome genes may protect from disease, rationalizing the search for therapies that enhance such protective factors.
References
Genetic engineering
Sex | Four Core Genotypes mouse model | [
"Chemistry",
"Engineering",
"Biology"
] | 314 | [
"Biological engineering",
"Genetic engineering",
"Sex",
"Molecular biology"
] |
74,056,074 | https://en.wikipedia.org/wiki/Rhenium%20trioxide%20fluoride | Rhenium trioxide fluoride is an inorganic compound with the formula ReO3F. It is a white, sublimable, diamagnetic solid, although impure samples appear colored. It is one of the few oxyfluorides of rhenium, the other major one being rhenium dioxide trifluoride, ReO2F3. The material has no applications, but it is of some academic interest as a rare example of a trioxide fluoride.
Synthesis and reactions
Rhenium trioxide fluoride can be prepared by fluorination of rhenium trioxide.
With Lewis bases (L) the compound forms adducts, e.g., with L = diethyl ether or acetonitrile.
Structure and related compounds
According to X-ray crystallography, the compound adopts a helical chain structure featuring octahedral Re centers linked by two fluoride and two oxide bridging ligands. In contrast with ReO3F, MnO3F and TcO3F crystallize with simpler structures. The Mn compound crystallizes as a tetrahedral monomer. The technetium compound crystallizes as dimers with fluoride bridges. Also contrasting with the structure of rhenium trioxide fluoride is that of rhenium trioxide chloride, which is a monomer.
References
Rhenium compounds
Fluorides
Transition metal oxides | Rhenium trioxide fluoride | [
"Chemistry"
] | 283 | [
"Fluorides",
"Salts"
] |
74,056,212 | https://en.wikipedia.org/wiki/Rhenium%20dioxide%20trifluoride | Rhenium dioxide trifluoride is an inorganic compound with the formula ReO2F3. A white diamagnetic solid, it is one of the few oxyfluorides of rhenium, another being rhenium trioxide fluoride, ReO3F. The material is of some academic interest as a rare example of a dioxide trifluoride. It can be prepared by the reaction of xenon difluoride with rhenium trioxide chloride.
According to X-ray crystallography, the compound can exist in four polymorphs. Two polymorphs adopt chain-like structures featuring octahedral Re centers linked by bridging fluorides. Two other polymorphs adopt cyclic structures, again featuring octahedral Re centers and bridging fluorides. Like related oxyfluorides, these coordination oligomers break up in the presence of Lewis bases. Adducts with acetonitrile have been crystallized.
References
Rhenium compounds
Fluorides
Transition metal oxides | Rhenium dioxide trifluoride | [
"Chemistry"
] | 230 | [
"Fluorides",
"Salts"
] |
74,059,230 | https://en.wikipedia.org/wiki/Kramkov%27s%20optional%20decomposition%20theorem | In probability theory, Kramkov's optional decomposition theorem (or just optional decomposition theorem) is a mathematical theorem on the decomposition of a positive supermartingale V with respect to a family of equivalent martingale measures into the form
V_t = V_0 + (H ⋅ X)_t − C_t,
where C is an adapted (or optional) process.
The theorem is of particular interest for financial mathematics, where the interpretation is: V is the wealth process of a trader, (H ⋅ X) is the gain/loss and C the consumption process.
The theorem was proven in 1994 by Russian mathematician Dmitry Kramkov. The decomposition is analogous to the Doob–Meyer decomposition, but unlike there, the process C is no longer predictable but only adapted (which, under the conditions of the statement, is the same as dealing with an optional process).
Kramkov's optional decomposition theorem
Let (Ω, F, (F_t)_{t ≥ 0}, P) be a filtered probability space with the filtration satisfying the usual conditions.
A d-dimensional stochastic process X = (X¹, …, X^d) is locally bounded if there exists a sequence of stopping times (τ_n) such that τ_n → ∞ almost surely and |X^i_t| ≤ n for t ≤ τ_n, n ≥ 1 and 1 ≤ i ≤ d.
Statement
Let X be a d-dimensional càdlàg (or RCLL) process that is locally bounded. Let M(X) ≠ ∅ be the space of equivalent local martingale measures for X, and without loss of generality let us assume P ∈ M(X).
Let V be a positive stochastic process. Then V is a Q-supermartingale for each Q ∈ M(X) if and only if there exist an X-integrable and predictable process H and an adapted increasing process C such that
V_t = V_0 + (H ⋅ X)_t − C_t,  t ≥ 0.
Commentary
The statement is still true under change of measure to an equivalent measure.
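As an illustrative connection to the Doob–Meyer decomposition (a sketch added here under the extra assumptions of a single equivalent local martingale measure and the martingale representation property; it is not part of the original statement):

```latex
% Sketch: complete-market case, M(X) = {Q} and every Q-local martingale
% is a stochastic integral with respect to X (representation property).
% V is a Q-supermartingale, so a Doob--Meyer-type decomposition gives
\[
  V_t = V_0 + M_t - A_t,
\]
% with M a Q-local martingale and A predictable and increasing, A_0 = 0.
% Writing M = \int H\,\mathrm{d}X for some predictable, X-integrable H
% recovers the optional decomposition with C = A:
\[
  V_t = V_0 + (H \cdot X)_t - C_t .
\]
% In the general, incomplete case, C can only be chosen adapted
% (optional), not predictable -- this is the content of the theorem.
```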
References
Probability theorems | Kramkov's optional decomposition theorem | [
"Mathematics"
] | 297 | [
"Theorems in probability theory",
"Mathematical theorems",
"Mathematical problems"
] |
74,060,984 | https://en.wikipedia.org/wiki/Hong%20Chen%20%28engineer%29 | Hong Chen (born 1963) is a Chinese engineer specializing in control theory and its application to automotive control systems and automated driving. She is a distinguished professor of control science and engineering at Tongji University in Shanghai, dean of the Tongji University College of Electronic and Information Engineering, and holder of the Porsche Chair at Tongji University.
Education and career
Chen was a student of process control at Zhejiang University, where she earned a bachelor's degree in 1983 and a master's degree in 1986. In 1986, she joined the faculty of Jilin University. She went to the University of Stuttgart in Germany for doctoral study in system dynamics and control engineering, beginning in 1993, and completed her Ph.D. there in 1997. Her dissertation, Stability and Robustness Considerations in Nonlinear Model Predictive Control, was supervised by .
She returned to Jilin in 1999 as a professor, later becoming Tang Aoqing Professor and Director of the State Key Laboratory of Automotive Simulation and Control. In 2019 she moved to Tongji University as a distinguished professor; she was named as the dean of the College of Electronic and Information Engineering in 2020.
Recognition
Chen was elected as an IEEE Fellow, in the 2023 class of fellows, "for contributions to predictive control and applications in automotive systems".
References
External links
1963 births
Living people
Control theorists
Zhejiang University alumni
University of Stuttgart alumni
Academic staff of Jilin University
Academic staff of Tongji University
Fellows of the IEEE
20th-century Chinese women engineers
21st-century Chinese women engineers
20th-century Chinese engineers
21st-century Chinese engineers | Hong Chen (engineer) | [
"Engineering"
] | 313 | [
"Control engineering",
"Control theorists"
] |
74,061,067 | https://en.wikipedia.org/wiki/Markarian%20590 | Markarian 590, also known as NGC 863, NGC 866, and NGC 885, is an unbarred spiral galaxy located in the constellation Cetus. It is located at a distance of about 300 million light years from Earth, which, given its apparent dimensions, means that NGC 863 is about 110,000 light years across. It is a changing-look Seyfert galaxy.
Observational history
Markarian 590 was discovered by William Herschel on January 6, 1785. The galaxy was also discovered independently by Lewis Swift on 3 October 1886, while he also catalogued it again as a different galaxy on 31 October 1886, and thus the galaxy is listed three times in the New General Catalogue. John Louis Emil Dreyer described it as very faint, round, brighter middle, stellar.
One supernova has been observed in Markarian 590, SN 2018djd. SN 2018djd is a type Ia supernova discovered by the All-Sky Automated Survey for Supernovae (ASAS-SN) on 12 July 2018. The supernova was detected in images obtained on 2018 July 10.61, when it had a magnitude of 16.5. It reached a maximum apparent magnitude of 15.4.
Characteristics
The nucleus of Markarian 590 has been found to be active. The most accepted theory for the energy source of active galactic nuclei is the presence of an accretion disk around a supermassive black hole. The mass of the black hole in the centre of Markarian 590 has been estimated based on reverberation mapping.
The active galactic nucleus (AGN) of Markarian 590 has been categorised as changing-look. This category of Seyfert galaxies is characterised by a change in the spectrum, with the broad emission lines disappearing or appearing, thus changing the galaxy from type I to type II and vice versa. Markarian 590 was originally characterised as a type I Seyfert galaxy, but later observations categorised the galaxy as type 1.5 and then type 1.9–2.
The broad line emission of Markarian 590 strengthened by a factor of tens from the 1970s to the 1990s and then decreased by a factor of about 100 in the 2000s at optical, UV, and X-ray wavelengths, and the broad component of the Hβ emission line disappeared completely. Observations by the Suzaku X-ray satellite in 2011 revealed that the soft X-ray excess emission could no longer be detected, while the X-ray continuum flux had changed only minimally. The X-ray spectrum does not show evidence of obscuration; instead the variation is caused by a change in the accretion rate. Observations of the galaxy at infrared wavelengths revealed a sharp decrease in luminosity between 2000 and 2001. Also, during the low-activity period, the radius of the circumnuclear dust torus decreased to 32 light days. In 2014, the soft excess emission had reappeared in observations by the Chandra X-ray Observatory, as had the broad Mg II emission line.
The radio emission of the galaxy is concentrated in a single core source and extends to two components at radii of about 2 arcsec (~1 kpc) and 6 arcsec (~3 kpc) from the core, which are probably related to the ring-like molecular gas structures observed in CO(3-2) imaging. The outer gas ring is probably related to the spiral arms of the galaxy while the inner ring is related to faint dust lanes. An upper limit has been placed on the central molecular gas mass, which is not significantly lower than that of other AGNs. A faint parsec-scale radio jet extending 2.8 mas to the north has been detected using very-long-baseline interferometry. The radio emission exhibits long-term variation that follows that of the other wavelengths.
X-ray observations of the galaxy have shown the presence of ultra-fast outflows, in the form of blueshifted absorption lines of O VIII, Ne IX, Si XIV, and Mg XII.
References
External links
NGC 863 on SIMBAD
Unbarred spiral galaxies
Seyfert galaxies
Cetus
NGC objects
01727
Markarian galaxies
08586
Discoveries by William Herschel
Astronomical objects discovered in 1785 | Markarian 590 | [
"Astronomy"
] | 858 | [
"Cetus",
"Constellations"
] |
74,061,980 | https://en.wikipedia.org/wiki/Order%20of%20Gagarin | The Order of Gagarin is a high-ranking Russian award established to recognize outstanding contributions to the advancement of the Russian and Soviet space program. Named after the Soviet cosmonaut Yuri Gagarin, who became the first human to journey into space in 1961, the Order of Gagarin was created in 2023 in honor of the 60th anniversary of Valentina Tereshkova's solo mission on the Vostok 6 on 16 June 1963 and the 62nd anniversary of Yuri Gagarin's historic spaceflight aboard the Vostok 1 spacecraft.
History
On April 12, 2023, Yuri Borisov, the Director General of Roscosmos, announced that Russian President Vladimir Putin had approved the proposal by Roscosmos to establish the Order in the name of Gagarin. According to the statute approved by the Decree of the President of the Russian Federation on May 27, 2023, No. 385, the Order of Gagarin is awarded to Russian citizens primarily for successful crewed spaceflight, crewed spaceflight programs for exploration, development, and utilization of space. The primary objective of the Order of Gagarin is to acknowledge individuals and organizations that have significantly contributed to the field of space exploration and related scientific disciplines.
The Order is worn on the left side of the chest, and if other orders of the Russian Federation are present, it is placed after the Order of Saint Catherine.
Notable Recipients
The Order of Gagarin has been bestowed upon numerous distinguished individuals and organizations in recognition of their remarkable contributions to the field of space exploration.
Valentina Tereshkova, Hero of the Soviet Union and member of the State Duma.
References
External link
Space exploration
Roscosmos
Orders, decorations, and medals of Russia
Awards established in 2023 | Order of Gagarin | [
"Astronomy"
] | 350 | [
"Space exploration",
"Outer space"
] |
74,062,222 | https://en.wikipedia.org/wiki/Manganese%20trioxide%20fluoride | Manganese trioxide fluoride is an inorganic compound with the formula MnO3F. A green diamagnetic liquid, the compound has no applications, but it is of some academic interest as a rare example of a metal trioxide fluoride.
The compound was detected in the 1880s but was only purified and crystallized much more recently. It can be prepared from potassium permanganate and fluorosulfuric acid.
MnO3F crystallizes as a monomer, as confirmed by X-ray crystallography. The molecules are tetrahedral, with Mn–O and Mn–F distances of 1.59 and 1.72 Å, respectively.
In contrast with MnO3F, ReO3F and TcO3F have more complex structures as solids. The Re compound crystallizes as chains or rings consisting of fluoride-bridged octahedra. TcO3F crystallizes as dimers with fluoride bridges. The rhenium compound also forms stable adducts with Lewis bases, whereas MnO3F is unstable in the presence of Lewis bases.
References
Manganese compounds
Fluorides
Transition metal oxides | Manganese trioxide fluoride | [
"Chemistry"
] | 216 | [
"Fluorides",
"Salts"
] |
74,062,309 | https://en.wikipedia.org/wiki/Hexachlorodisiloxane | Hexachlorodisiloxane is a chemical compound composed of chlorine, silicon, and oxygen. Structurally, it is the symmetrical ether of two trichlorosilyl groups, and can be synthesized via high-temperature oxidation of silicon tetrachloride: 4 SiCl4 + O2 → 2 (SiCl3)2O + 2 Cl2 (at 950–970 °C)
At room temperature, it is a colorless liquid that hydrolyzes upon exposure to water to give silicon dioxide and hydrochloric acid: (SiCl3)2O + 3 H2O → 2 SiO2 + 6 HCl. Intense heat induces a similar decomposition: 2 (SiCl3)2O → SiO2 + 3 SiCl4
Reaction with antimony trifluoride gives the analogous hexafluorodisiloxane.
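As a quick sanity check of the stoichiometry in the equations above, the following sketch tallies atoms on each side (an illustrative helper written for this purpose; it is not part of the source):

```python
from collections import Counter

# Atom counts per formula unit (written out by hand for clarity).
SiCl4   = Counter({"Si": 1, "Cl": 4})
O2      = Counter({"O": 2})
Si2OCl6 = Counter({"Si": 2, "O": 1, "Cl": 6})   # (SiCl3)2O
Cl2     = Counter({"Cl": 2})
H2O     = Counter({"H": 2, "O": 1})
SiO2    = Counter({"Si": 1, "O": 2})
HCl     = Counter({"H": 1, "Cl": 1})

def side(*terms):
    """Sum atom counts for a list of (coefficient, formula) terms."""
    total = Counter()
    for coeff, formula in terms:
        for atom, n in formula.items():
            total[atom] += coeff * n
    return total

# Oxidation:  4 SiCl4 + O2 -> 2 (SiCl3)2O + 2 Cl2
assert side((4, SiCl4), (1, O2)) == side((2, Si2OCl6), (2, Cl2))

# Hydrolysis: (SiCl3)2O + 3 H2O -> 2 SiO2 + 6 HCl
assert side((1, Si2OCl6), (3, H2O)) == side((2, SiO2), (6, HCl))

# Thermal decomposition: 2 (SiCl3)2O -> SiO2 + 3 SiCl4
assert side((2, Si2OCl6)) == side((1, SiO2), (3, SiCl4))

print("all three equations balance")
```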
Sources
References
Siloxanes
Chlorosilanes | Hexachlorodisiloxane | [
"Chemistry"
] | 224 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
74,062,937 | https://en.wikipedia.org/wiki/Photosymbiosis | Photosymbiosis is a type of symbiosis where one of the organisms is capable of photosynthesis.
Examples of photosymbiotic relationships include those in lichens, plankton, ciliates, and many marine organisms including corals, fire corals, giant clams, and jellyfish.
Photosymbiosis is important in the development, maintenance, and evolution of terrestrial and aquatic ecosystems, for example in biological soil crusts, soil formation, supporting highly diverse microbial populations in soil and water, and coral reef growth and maintenance.
When one organism lives within another symbiotically, it is called endosymbiosis. Photosymbiotic relationships where microalgae and/or cyanobacteria live within a heterotrophic host organism are believed to have led to eukaryotes acquiring photosynthesis and to the evolution of plants.
Occurrence
Lichens
Lichens represent an association between one or more fungal mycobionts and one or more photosynthetic algal or cyanobacterial photobionts. The mycobiont provides protection from predation and desiccation, while the photobiont provides energy in the form of fixed carbon. Cyanobacterial partners are also capable of fixing nitrogen for the fungal partner. Recent work suggests that non-photosynthetic bacterial microbiomes associated with lichens may also have functional significance to lichens.
Most mycobiont partners derive from the ascomycetes, and the largest class of lichenized fungi is Lecanoromycetes. The vast majority of lichens derive photobionts from Chlorophyta (green algae). The co-evolutionary dynamics between mycobionts and photobionts are still unclear, as many photobionts are capable of a free-living existence, and many lichenized fungi display traits adaptive to lichenization such as the capacity to withstand higher levels of reactive oxygen species (ROS), the conversion of sugars to polyols that help withstand desiccation, and the downregulation of fungal virulence. However, it is still unclear whether these are derived or ancestral traits.
Currently described photobiont species number about 100, far less than the 19,000 described species of fungal mycobionts, and factors such as geography can predominate over mycobiont preference. Phylogenetic analyses in lichenized fungi have suggested that, throughout evolutionary history, there has been repeated loss of photosymbionts, switching of photosymbionts, and independent lichenization events in previously unrelated fungal taxa. Loss of lichenization has likely led to the coexistence of non-lichenized fungi and lichenized fungi in lichens.
Sponges
Sponges (phylum Porifera) have a large diversity of photosymbiont associations. Photosymbiosis is found in four classes of Porifera (Demospongiae, Hexactinellida, Homoscleromorpha, and Calcarea), and known photosynthetic partners are cyanobacteria, chloroflexi, dinoflagellates, and red (Rhodophyta) and green (Chlorophyta) algae. Relatively little is known about the evolutionary history of sponge photosymbiosis due to a lack of genomic data. However, it has been shown that photosymbionts are acquired vertically (transmission from parent to offspring) and/or horizontally (acquired from the environment). Photosymbionts can supply up to half of the host sponge's respiratory demands and can support sponges during times of nutrient stress.
Cnidaria
Members of certain classes in phylum Cnidaria are known for photosymbiotic partnerships. Members of corals (Class Anthozoa) in the orders Hexacorallia and Octocorallia form well-characterized partnerships with the dinoflagellate genus Symbiodinium. Some jellyfish (class Scyphozoa) in the genus Cassiopea (upside-down jellyfish) also possess Symbiodinium. Certain species in the genus Hydra (class Hydrozoa) also harbor green algae and form a stable photosymbiosis.
The evolution of photosymbiosis in corals was likely critical for the global establishment of coral reefs. Corals are likewise adapted to eject damaged photosymbionts that generate high levels of toxic reactive oxygen species, a process known as bleaching. The identity of the Symbiodinium photosymbiont can change in corals, although this depends largely on the mode of transmission: some species vertically transmit their algal partners through their eggs, while other species acquire environmental dinoflagellates as newly-released eggs. Since algae are not preserved in the coral fossil record, understanding the evolutionary history of the symbiosis is difficult.
Bilaterians
In basal bilaterians, photosymbiosis in marine or brackish systems is present only in the family Convolutidae. In the group Acoela there is limited knowledge on the symbionts present, and they have been vaguely identified as zoochlorella or zooxanthella. Some species have a symbiotic relationship with the chlorophyte Tetraselmis convolutae while others have a symbiotic relationship with the dinoflagellates Symbiodinium, Amphidinium klebsii, or diatoms in the genus Licomorpha.
In freshwater systems, photosymbiosis is present in platyhelminths belonging to the Rhabdocoela group. In this group, members of the Provorticidae, Dalyeliidae, and Typhloplanidae families are symbiotic. Members of Provorticidae likely feed on diatoms and retain their symbionts. Typhloplanidae have symbiotic relationships with the chlorophytes in the genus Chlorella.
Molluscs
Photosymbiosis is taxonomically restricted in Mollusca. Tropical marine bivalves in the Cardiidae family form a symbiotic relationship with the dinoflagellate Symbiodinium. This family boasts large organisms often referred to as giant clams and their large size is attributed to the establishment of these symbiotic relationships. Additionally, the Symbiodinium are hosted extracellularly, which is relatively rare. The only known freshwater bivalve with a symbiotic relationship are in the genus Anodonta which hosts the chlorophyte Chlorella in the gills and mantle of the host. In bivalves, photosymbiosis is thought to have evolved twice, in the genus Anodonta and in the family Cardiidae. However, how it has evolved in Cardiidae could have occurred through different gains or losses in the family.
Gastropods
In gastropods, photosymbiosis can be found in several genera.
The species Strombus gigas hosts Symbiodinium which is acquired during the larval stage, at which point it is a mutualistic relationship. However, during the adult stage, Symbiodinium becomes parasitic as the shell prevents photosynthesis.
Another group of gastropods, heterobranch sea slugs, have two different systems for symbiosis. The first, Nudibranchia, acquire their symbionts through feeding on cnidarian prey that are in symbiotic relationships. In Nudibranchs, photosymbiosis has evolved twice, in Melibe and Aeolidida. In Aeolidida it is likely there have been several gains and losses of photosymbiosis as most genera include both photosymbiotic and non-photosymbiotic species. The second, Sacoglossa, removes chloroplasts from macroalgae when feeding and sequesters them into their digestive tract at which point they are called kleptoplasts. Whether these kleptoplasts maintain their photosynthetic capabilities depends on the host species ability to digest them properly. In this group, functional kleptoplasy has been acquired twice, in Costasiellidae and Plakobranchacea.
Chordates
Photosymbiosis is relatively uncommon in chordate species. One such example of photosymbiosis is in ascidians, the sea squirts. In the family Didemnidae, 30 species establish symbiotic relationships. The photosynthetic ascidians are associated with cyanobacteria of the genus Prochloron as well as, in some cases, the species Synechocystis trididemni. The 30 species with a symbiotic relationship span four genera in which the congeners are primarily non-symbiotic, suggesting multiple origins of photosymbiosis in ascidians.
In addition to sea squirts, embryos of some amphibian species (Ambystoma maculatum, Ambystoma gracile, Ambystoma jeffersonianum, Ambystoma tigrinum, Hynobius nigrescens, Lithobates sylvaticus, and Lithobates aurora) form symbiotic relationships with green algae of the genus Oophila. These algae are present in the egg masses of the species, causing them to appear green and providing oxygen and carbohydrates to the embryos. Similarly, little is known about the evolution of symbiosis in amphibians, but there appear to be multiple origins.
Protists
Photosymbiosis has evolved multiple times in the protist taxa Ciliophora, Foraminifera, Radiolaria, Dinoflagellata, and diatoms. Foraminifera and Radiolaria are planktonic taxa that serve as primary producers in open ocean communities. Photosynthetic plankton species associate with the symbiotes of dinoflagellates, diatoms, rhodophytes, chlorophytes, and cyanophytes that can be transferred both vertically and horizontally. In Foraminifera, benthic species will either have a symbiotic relationship with Symbiodinium or retain the chloroplasts present in algal prey species. The planktonic species of Foraminifera associate primarily with Pelagodinium. These species are often considered indicator species due to their bleaching in response to environmental stressors. In the Radiolarian group Acantharia, photosynthetic species inhabit surface waters whereas non-photosynthetic species inhabit deeper waters. Photosynthetic Acantharia are associated with similar microalgae as the Foraminifera groups, but were also found to be associated with Phaeocystis, Heterocapsa, Scrippsiella, and Azadinium which were not previously known to be involved in photosynthetic relationships. In addition, several of the species present in symbiotic relationships with Acantharia were oftentimes identical to the free-living species, suggesting horizontal transfer of symbiotes. This provides insight into the evolutionary patterns responsible for these symbiotic relationships, suggesting that the selection for symbiosis is relatively weak and symbiosis is likely a result of the adaptive capacity of the host plankton species.
References
Photosynthesis
Symbiosis
Ecology | Photosymbiosis | [
"Chemistry",
"Biology"
] | 2,393 | [
"Behavior",
"Symbiosis",
"Biological interactions",
"Photosynthesis",
"Ecology",
"Biochemistry"
] |
74,063,421 | https://en.wikipedia.org/wiki/Glaucocystis%20nostochinearum | Glaucocystis nostochinearum is a species of glaucophyte in the family Glaucocystaceae. The species is the type species of its genus, Glaucocystis. The species can be found in the waters of Northern Europe, North America and Oceania.
The species has one form and one variety:
Glaucocystis nostochinearum f. immanis Schmidle, 1902
Glaucocystis nostochinearum var. moebii Gutwinski, 1901
A second variety, Glaucocystis nostochinearum var. incrassata is now accepted as its own species, Glaucocystis incrassata.
References
Archaeplastida
Plants described in 1866 | Glaucocystis nostochinearum | [
"Biology"
] | 161 | [
"Algae",
"Glaucophyta"
] |
74,064,403 | https://en.wikipedia.org/wiki/Digital%20euro | The Digital Euro is the project of the European Central Bank (ECB), decided in July 2021, for the possible introduction of a central bank digital currency (CBDC). The aim is to develop a fast and secure electronic payment instrument that would complement the Euro for individuals and businesses in its existing form as cash and in bank accounts, and would be issued by the European System of Central Banks of the Eurozone.
After concluding a two-year investigation into the design and distribution models for a digital euro, the ECB decided on 18 October 2023 to enter the preparation phase, which involves tasks such as finalizing the rulebook and selecting providers to develop the required platform and infrastructure, setting the stage for the potential issuance of a digital euro.
Arguments and motivations for introducing a digital euro
Arguments and motives for the introduction of a digital euro are, according to the ECB:
Preserving central bank money's role as a monetary anchor for the payment system.
Provide free digital access to a secure legal tender in the Eurozone
Expanding payment options through alternative central bank money alongside cash and book money in commercial bank accounts, contributing to availability and inclusion
Building trust in digital cash through a high level of privacy protection
Promote innovation in retail payments
Limiting the spread of foreign digital currencies to safeguard the financial stability and monetary sovereignty of the Eurozone
Programmability would allow targeted incentives to encourage social responsibility and discourage antisocial spending
Wealth redistribution and social aid would be greatly simplified.
In addition to these motives, the possibility of a further decline in the use of cash as a means of payment plays a role in the discussion on the digital euro.
Criticism and risks of the digital euro
Increased centralisation and central planning of monetary policy
Loss of privacy
Risk of financial censorship and loss of human rights
Hacking and information security issues
Higher risks of loss of central bank independence and political influence on monetary policy
Potential for much faster transmission of bad monetary policy
Risks to banking system of bank runs towards CBDC
Distribution fairness issues (Cantillon effect)
Higher political control of individual spending and saving
Weakened Property Rights
While Christine Lagarde has publicly addressed some of these risks, critics consider her responses inadequate. Critics point to the digital renminbi CBDC and how it has experimented with many rights-restricting features (including geo-fencing, geo-tracking, amount limits, time limits, etc.). They also point to the eNaira and the Venezuelan Petro and their monetary policy issues. Similarly, de-banking and financial censorship have been on the rise in recent years, and critics fear that a centrally planned, centrally controlled and managed system would have the potential for a much higher level of censorship and discrimination by authorities.
The prototype developed by the ECB as part of the investigation phase includes conditional payments; this points to the potential for programmability of the digital euro and thus to risks to individual rights similar to those of the digital renminbi.
According to the Human Rights Foundation, CBDCs risk imposing sweeping financial surveillance, restriction of financial activity, frozen funds, seizure of funds, negative interest rates, tools for corruption, cyberattack risks and disruptions to financial stability.
Preparations for the possible introduction of a digital euro
On 2 October 2020, the ECB published a report outlining the introduction of a digital euro from the perspective of the Eurosystem.
Since 2020, several projects have been launched in collaboration with the European Investment Bank (EIB) to test the issuance, control and transfer of central bank digital currency, as well as securities tokens and smart contracts on a blockchain.
2021
After preliminary planning and presenting public consultation results in early 2021, the ECB launched the digital euro project in July 2021 to prepare for its potential introduction. No technical barriers were identified during the preliminary planning. The research, which is scheduled to run until autumn 2023, aims to shed light on the distribution to merchants and citizens, the impact on markets and the necessary European legislation. No preliminary decision has therefore been taken on the introduction of the digital euro.
2022
In September 2022, the ECB announced a collaboration with five companies (Amazon, CaixaBank, Worldline, EPI and Nexi) to develop potential user interfaces for the digital euro.
Burkhard Balz, a member of the Bundesbank's Executive Board, sees the digital euro not least as a means of strengthening European sovereignty in payments. In his view, the digital euro could be designed to support programmable payments in a highly automated environment.
The first "Progress on the investigation phase of a digital euro" report was published by the ECB in September 2022.
Speaking at the conference "Towards a legislative framework enabling a digital euro for citizens and businesses" held in Brussels in early November 2022, Christine Lagarde, President of the European Central Bank, reiterated that the digital euro is not a stand-alone project limited to the area of payments. Rather, it is a cross-policy and truly European initiative that has the potential to have an impact on society as a whole.
In December 2022, the ECB published the second progress report on the investigation phase.
2023
In January 2023, the ECB invited experts in the field of payments/finance to express their interest in contributing to the development of a set of rules for the digital euro.
At the end of May 2023, the ECB published the results of a market research and prototyping project. The market research had shown that there was a sufficiently large pool of European vendors capable of developing digital euro solutions. It had also shown that different types of architectural and technological design options were available to develop a technical solution for a digital euro. The prototyping project involved the integration of five user interfaces developed by different vendors for each use case (front-end prototypes) and a settlement system designed and developed by the Eurosystem (back-end prototype). Different design options were tested to determine whether they could be technically implemented and integrated into the Eurosystem's settlement system. The tests showed that it would be possible to integrate a digital euro smoothly into the existing payment landscape, while leaving room for the market to use innovative features and technologies in the dissemination of a digital euro. The results also confirmed that, in principle, a digital euro could function both online and offline using different technical concepts. The question remains whether an offline solution that meets the Eurosystem's requirements and achieves the necessary scale can be implemented in the short to medium term using existing technology.
On 18 October 2023 the ECB announced that a decision had been made to move forward with the preparation phase, including a public pilot, aiming for a possible launch by 2025–2026.
Views on the possible introduction of a digital euro
The Governing Council will make a decision by the end of 2025 on whether to proceed to the next stage of planning for a digital euro.
The Verbraucherzentrale Bundesverband (Public German Consumer Protection Organisation) sees the digital euro as a public good and thus an opportunity to make digital payments more consumer-oriented.
The German Informatics Society (GI) sees the introduction of a digital euro and the simultaneous decline of cash as a threat to informational self-determination and privacy; there is a danger of the "Gläserner Mensch" (German metaphor for data protection, representing the complete "screening" of people and their behavior by a monitoring state, which is perceived as negative).
German Banking industry
From the perspective of the German Banking Industry Committee, the global trend toward central bank digital currency is unmistakable and represents both an opportunity and a challenge. The planned introduction of a digital euro is seen as an important contribution to the digitalization of the economy and society and to securing Europe's sovereignty and competitiveness. The digital money ecosystem proposed in a policy paper is intended to go beyond digital cash and consists of three elements:
Retail CBDC for personal use
Wholesale CBDC for banks and savings banks
Giral money tokens for industrial use
The Association of German Banks supports the introduction of a digital euro. In a position paper published in February 2023, the private banks emphasize the evolution of cash, their role in its issuance, its openness to innovation, and their desire to minimize the risks of its introduction. Among the risks, the private banks specifically mention those affecting their business (disintermediation, diminishing returns, weakening of their customer relationship) and that a weakening of their investment capabilities could lead to the failure of the digital euro.
The Bundesverband der Deutschen Volksbanken und Raiffeisenbanken (BdV) welcomes the ECB's plans for a digital euro, but criticizes that the needs of the economy for a future digital payment method have not yet been sufficiently taken into account.
German Industrial sector
In a position paper published at the end of September 2022, the Bundesverband der Deutschen Industrie (BDI, Federation of German Industries) warns that the needs of the industry must be taken into account when introducing a digital euro. The programmability of payments is a key demand.
Eurogroup
For Paschal Donohoe, president of the Eurogroup, a body of finance ministers from euro member states, the digital euro project is about maintaining the link between citizens and central bank money: as central bank money, the digital euro would be convertible one-to-one into euro banknotes.
Unlike the industry, the Eurogroup does not want the digital euro to be equipped with additional functions.
European Commission
The European Commission has submitted a legislative proposal for the introduction of a digital euro that would be made available as legal tender not only to banks, but above all to the general public. Presented on 28 June 2023, this proposal outlines the fundamental requirements for its possible implementation. However, the final decision lies with the EU member states and the European Parliament, which were, as of 12 November 2024, negotiating this proposal. If the legislation is approved, the European Central Bank (ECB) will further develop the technical and operational aspects, aiming to create a secure and reliable digital euro, available alongside cash and traditional bank payment accounts.
Additionally, the proposal aims to protect user privacy, similar to cash, without extra oversight of individual transactions by the governments. For security purposes, there are considerations for limits on the amount individuals can hold in digital euros. All of this forms part of a preparatory phase that may take several years, with an expected implementation date from 2028 if the legislation is approved.
Further reading
Nicola Bilotta, Erwin Voloder: Going Global: The Political Ambition and Economic Reality of the (Digital) Euro, in: Nicola Bilotta, Fabrizio Botti (Hrsg.): Digitalization and Geopolitics: Catalytic Forces in the (Future) International Monetary System, Edizioni Nuova Cultura 2023, ISBN 978-8-83365-572-7.
Annelieke A. M. Mooij: A digital euro for everyone. Can the European System of Central Banks introduce general purpose CBDC as part of its economic mandate? In: Journal of Banking Regulation. 2022. doi:10.1057/s41261-021-00186-w
Annelieke A. M. Mooij: European Central Bank Digital Currency: the digital euro – What design of the digital euro is possible within the European Central Bank's legal framework? BRIDGE Network – Working Paper 14, May 2021.
Peter Bofinger: Grundzüge der Volkswirtschaftslehre. 5. edition. Pearson, Munich 2019, ISBN 978-3-86894-368-9, p. 561–578, Chapter 28: Digitalisierung des Geldes und die Zukunft der Geldpolitik.
Isabella Lindner: The Euro on its way to internationalization – Potential Geopolitical Impacts. In: Klemens H. Fischer (Publisher): European Security Put to the Test. Perspectives and Challenges for the Next Decade. (= AIES Beiträge zur Europa- und Sicherheitspolitik. Volume 6). Nomos Verlag, 2021, ISBN 978-3-8487-8558-2.
Thomas Mayer: A Digital Euro to Compete with Libra. In: The Economists' Voice. Volume 16, Issue 1, 2019.
Philipp Sandner, Jonas Groß: Der digitale Euro aus geopolitischer Perspektive. In: Johannes Beermann (Hrsg.): 20 Jahre Euro. Zur Zukunft unseres Geldes. Siedler, Munich 2022, ISBN 978-3-8275-0165-3, P. 409–436.
References
External links
Official website by the ECB
Human Rights Foundation CBDC tracker
Report on a digital euro, European Central Bank | Eurosystem October 2020
Bericht des Eurosystems über das öffentliche Konsultationsverfahren zu einem digitalen Euro, Europäische Zentralbank | Eurosystem April 2021
Friedrich Thießen: Digitaler Euro. Funktionsweise und kritische Würdigung, ZBW – Leibniz-Informationszentrum Wirtschaft 2021
Kommt der digitale Euro?, Bundesverband deutscher Banken, 18. August 2021
Europa braucht neues Geld – Das Ökosystem aus CBDC, Giralgeldtoken und Triggerlösung, Die deutsche Kreditwirtschaft, 5. Juli 2021
Heike Mai: Der digitale Euro. Politische Ambitionen treffen auf ökonomische Realitäten, Deutsche Bank Research, 2. Juli 2021
Auf einen Blick: Der digitale Euro, VÖB-Factsheet, 02/201
Vormarsch des digitalen Euro?, Foresight und Technikfolgenabschätzung: Monitoring von Zukunftsthemen für das Österreichische Parlament, November 2021
Philipp Sandner, Jonas Groß, Lena Grale: Der digitale Euro. Einfluss auf die deutsche Wirtschaft, Konrad-Adenauer-Stiftung e.V. 2021
Digitaler Euro auf der Blockchain. Infopapier, bitkom 2020
Markus Brunnermeier, Jean-Pierre Landau: The digital euro: policy implications and perspectives, Policy Department for Economic, Scientific and Quality of Life Policies Directorate-General for Internal Policies, January 2022
CBDC Manifesto, Digital Euro Association/CBDC ThinkTank, 2022-10-11
Banking
Computing and society | Digital euro | [
"Technology"
] | 2,989 | [
"Computing and society"
] |
74,064,924 | https://en.wikipedia.org/wiki/Markarian%20273 | Markarian 273 is a galaxy merger located in the constellation Ursa Major. It is located at a distance of about 500 million light years from Earth, which, given its apparent dimensions, means that Markarian 273 is about 130,000 light years across. It is an ultraluminous infrared galaxy and a Seyfert galaxy.
Characteristics
Markarian 273 is a galaxy merger, the result of two or more galaxies colliding. When observed in the mid-infrared, two nuclei are visible, with a projected separation of about 0.75 kiloparsec. The southwest nucleus is known to be active, due to its X-ray emission, while the northeast nucleus also displays a heavily absorbed X-ray spectrum, indicating that it is also active. The optical emission of the southwest nucleus corresponds to that of a type II Seyfert galaxy, while that of the northeast nucleus corresponds to a LINER. A third component in the nuclear region is visible to the southeast in radio waves and could be a star cluster.
The galaxy is experiencing a starburst, with a star formation rate of 139 solar masses per year. This activity makes the galaxy shine brightly in the infrared, and it is categorised as an ultraluminous infrared galaxy. The starburst takes place in a rotating disk with a radius of 120 pc that surrounds the northern nucleus. It has been suggested that this is the location of compact luminous supernova remnants and radio supernovae. The starburst is fed by large amounts of cold molecular gas. The gas has complex kinematics due to the presence of outflows. A kiloparsec-scale outflow is visible towards the north in CO imaging, with a flow rate of about 600 solar masses per year. The outflows reach about 5 kpc from the nucleus. There is also evidence of a bipolar superbubble.
The merger has a tidal tail extending southwards for 40 kiloparsecs that is seen edge-on. Also south of the galaxy lies a giant X-ray nebula, measuring 40 by 40 kiloparsecs in size, that is not closely associated with the tidal tail. The gas temperature of the nebula is estimated to be 7 million K, possibly heated by galactic outflows. Filaments and clumps of ionised gas visible in OIII extend about 23 kpc to the east. A halo of warm ionised gas extends about 45 kpc from the nucleus and is probably tidal debris from the merger. When observed in radio waves the galaxy has two large plumes, one to the south, extending to about 100 kpc, and a dimmer one to the north, extending to about 190 kpc.
See also
Arp 220 - the closest ultraluminous infrared galaxy to Earth
NGC 6240 - galaxy merger and ultraluminous infrared galaxy
References
External links
Markarian 273 on SIMBAD
Interacting galaxies
Peculiar galaxies
Seyfert galaxies
Luminous infrared galaxies
Ursa Major
08696
Markarian galaxies
48711
Galaxy mergers | Markarian 273 | [
"Astronomy"
] | 623 | [
"Ursa Major",
"Constellations"
] |
74,065,053 | https://en.wikipedia.org/wiki/Marcin%20Kortylewski | Marcin Kortylewski is a Polish American cancer researcher and immunologist. He is currently professor of immuno-oncology at the Beckman Research Institute of the City of Hope National Medical Center in Duarte, California. His research has shown that the STAT3 protein plays a role in protecting cancers from immune responses and contributes to resistance to therapies. Later he developed a two-pronged strategy for cancer immunotherapy using simultaneous STAT3 inhibition and TLR9 immune stimulation. Kortylewski invented a platform strategy for the delivery of oligonucleotides, such as siRNA, miRNA, decoy DNA, and antisense molecules, to selected immune cells.
Education
Kortylewski was born in Poznań, Poland. He received his M.S. in biotechnology from Adam Mickiewicz University and his Ph.D. in molecular biology from the Poznań University of Medical Sciences in Poznań, Poland. Kortylewski completed postdoctoral training in cancer biology at the Institute of Biochemistry at RWTH Aachen in Germany and later in tumor immunology at the H. Lee Moffitt Cancer Center in Tampa, Florida, in the USA.
Career
Kortylewski began his post-graduate career in 1999 as a postdoctoral fellow in Iris Behrmann's lab at the RWTH Aachen Institute of Biochemistry, chaired by Peter C. Heinrich. During his tenure there, he co-authored numerous research articles. Later, he moved to the H. Lee Moffitt Cancer Center in Tampa, Florida, to train with Richard Jove and Hua Yu. In 2005, he became Assistant Research Professor at the Beckman Research Institute of the City of Hope National Medical Center in Duarte, California. There, he became tenured faculty in 2010 and full professor in the Department of Immuno-Oncology in 2021.
Kortylewski's research group focuses on understanding the mechanisms by which cancers evade the immune system and explores methods to enhance antitumor immune responses using DNA- and RNA-based drugs. In the early 2000s, he demonstrated that tumors turn off immune cell activity using a transcription factor, STAT3. His studies characterized STAT3 as a multitasking protein which prevents immune activation while stimulating tumor vascularization and metastasis. Kortylewski invented a two-pronged strategy for cancer immunotherapy combining STAT3 blocking using siRNA with triggering of an immune receptor, Toll-like receptor 9 (TLR9), using CpG motif DNA. Later on, he adopted the strategy as a platform for delivery of various oligonucleotide drugs to target oncogenic or immune regulators, such as STAT3, NF-κB or selected miRNAs, in human or mouse immune cells in vivo.
Kortylewski is a co-founder of a biomedical startup company, currently under the name Duet Biotherapeutics Inc., focused on advancing CpG-STAT3 inhibitors to clinical trials for cancer immunotherapy. He is an active contributor to the field of immune-oncology and oligonucleotide therapeutics, serving on scientific and editorial boards of journals and various organizations.
Awards
In 2016, Kortylewski was a recipient of an Outstanding Young Investigator Award from American Society of Gene and Cell Therapy, granted based on the contributions to the field of gene and cell therapy. He received the award specifically for his work on “Eliminating Tumor Immune Defenses using Oligonucleotide Therapeutics”.
Patents
References
Cancer researchers
Immunotherapy
Immunologists
Biotechnologists
Year of birth missing (living people)
Living people | Marcin Kortylewski | [
"Biology"
] | 730 | [
"Biotechnologists"
] |
74,066,408 | https://en.wikipedia.org/wiki/Rhenium%20trioxide%20chloride | Rhenium trioxide chloride is an inorganic compound with the formula ReO3Cl. It is a colorless, distillable, diamagnetic liquid. It is a rhenium oxychloride. The material is used as a reagent in the preparation of rhenium compounds.
Synthesis and reactions
Rhenium trioxide chloride can be prepared by chlorination of rhenium trioxide.
With Lewis bases (L), rhenium trioxide chloride reacts to form adducts.
The compound hydrolyzes readily to give perrhenic acid.
Structure
The compound adopts a tetrahedral structure with Re–O and Re–Cl bond distances of 1.71 and 2.22 Å. In contrast, rhenium trioxide fluoride (ReO3F) is polymeric with octahedral Re centers.
References
Rhenium compounds
Chlorides
Transition metal oxides | Rhenium trioxide chloride | [
"Chemistry"
] | 186 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
74,066,841 | https://en.wikipedia.org/wiki/Prix%20Georges%20Lema%C3%AEtre | The Prix Georges Lemaître is an award created in 1995, in celebration of the centenary of the birth in 1894 of Georges Lemaître. The Association des Anciens et Amis de l'Université catholique de Louvain (Association of the Alumni and Friends of the Université catholique de Louvain) initiated the award, as well as the Fondation Georges Lemaître (Georges Lemaître Foundation). The prize, endowed with 25,000 euros as of 2003, is awarded every two years to a scientist who has made a remarkable contribution to the "développement et à la diffusion des connaissances dans les domaines de la cosmologie, de l'astronomie, de l'astrophysique, de la géophysique, ou de la recherche spatiale" (development and dissemination of knowledge in the fields of cosmology, astronomy, astrophysics, geophysics, or space science). The winner is chosen by an international jury of scientists, chaired by the rector of the Université catholique de Louvain.
List of recipients
1995 — Philip James Edwin Peebles, astrophysicist and cosmologist
1997 — Jean-Claude Duplessy, geochemist and climatologist
1999 — Jean-Pierre Luminet, astrophysicist, and Dominique Lambert, philosopher of science
2001 — Kurt Lambeck, geophysicist
2003 — Alain Hubert, explorer and climatologist
2005 — Édouard Bard, climatologist
2007 — Susan Solomon, atmospheric chemist and climatologist
2009 — Jean Kovalevsky, astronomer
2010 — André Berger, climatologist
2012 — Michael Heller, cosmologist
2015 — Anny Cazenave, geophysicist, and Jean-Philippe Uzan, theoretical physicist and cosmologist
2017 — Kip Thorne, theoretical physicist
2019 — George F. R. Ellis, theoretical physicist and cosmologist
2021 — hiatus due to COVID–19 pandemic
2023 — Sheperd S. Doeleman, astrophysicist
References
Belgian awards
Science and technology awards
Awards established in 1995 | Prix Georges Lemaître | [
"Technology"
] | 439 | [
"Science and technology awards",
"Science award stubs"
] |
74,067,045 | https://en.wikipedia.org/wiki/Statistics%20of%20the%20Hebrew%20Bible | Statistics of the Hebrew Bible is the counting of verses, words, and letters in the Bible, a practice known since the days of the Talmud (around the 3rd century). Later, in the Masora period (between the 5th and 10th centuries), counting words and letters was one of the basic methods used to create a uniform version of the Bible and to safeguard it from corruption. In the Babylonian Talmud, it is said that the families of scribes (soferim) mentioned in the Bible were so named because they counted the letters and words of the Torah. In Judaism, some regard the practice of counting letters and words as a mitzvah and a virtue.
According to the current version, the Hebrew Bible has approximately 22,864 verses, 306,757 Hebrew words, and 1,202,972 Hebrew letters. Out of these, there are 5,845 verses, 79,980 Hebrew words, and 304,805 letters in the five books of the Torah. Various statistics of the Hebrew Bible have been published in Jewish literature over the generations.
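Using only the totals quoted above, a short illustrative calculation (not part of the source text) shows that the Torah accounts for roughly a quarter of the Hebrew Bible by each measure:

```python
# Figures quoted in the paragraph above.
totals = {"verses": 22_864, "words": 306_757, "letters": 1_202_972}
torah = {"verses": 5_845, "words": 79_980, "letters": 304_805}

for measure in totals:
    share = torah[measure] / totals[measure]
    print(f"Torah share of {measure}: {share:.1%}")
# Torah share of verses: 25.6%, words: 26.1%, letters: 25.3%
```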
References
Applied statistics | Statistics of the Hebrew Bible | [
"Mathematics"
] | 223 | [
"Applied mathematics",
"Applied statistics"
] |
74,069,600 | https://en.wikipedia.org/wiki/Jugokrom | Chemico-Electrometallurgical Combine "Jugohrom" (abbreviated and commonly known as "Jugohrom") was one of the largest combines in the industry of Macedonia. During the period of 1985–1988, it had approximately 7,000 employees, and prior to its closure in November 2016, it had around 1,000 employees.
It was first privatised in the year 2000 when it operated under the name Silmak, and later changed ownership multiple times.
The main challenges faced by the combine were the supply of electricity and environmental issues. The combine was closed in November 2016 due to non-compliance with environmental permit requirements.
History
The combine was established by the decision of the Government of the Socialist Republic of Macedonia in 1952 as the Factory for Chrome Products and Ferroalloys, with the aim of exploiting the chromium ore from the Ljuboten serpentinite massif using electricity from the newly constructed Mavrovo Hydropower System.
Construction began in 1953, and production started in 1957 with the construction of four electric arc smelting furnaces, although the production of chromium salts had already begun in the "Chemistry" department in 1950.
In the period from 1960 to 1965, the production of calcium carbide and calcium cyanamide began.
The combine continued to be built and expanded until the 1990s when production began to be discontinued in several departments, and some of them were separated and operated as independent units.
After several transformations, in 2000, it continued to operate successfully under the name Silmak with dominant foreign ownership.
The main challenge for the combine was the supply of electricity, which was consistently deficient and expensive, and the production of ferroalloys in electric smelting furnaces was one of the largest consumers of electricity.
Departments
The main departments and products were:
Electrometallurgy (since 1957)
Ferrochrome, ferrosilicon, silicochrome, technical silicon, ferromanganese, and others, all with various concentrations and qualities in terms of carbon content.
Chemistry (since 1950)
Bichromate and others.
Raduša Mines and Chromite Separation
"Non-Metals" from Tetovo (since 1955)
"Stogovo" from Kičevo (since 1975)
Industry for Construction Elements "Vratnica" (since 1976)
Medical Plastics from Tetovo (since 1982)
Within the complex of the combine, there was also the railway station "Jegunovce Factory", which is still regularly operated as part of the Skopje–Kičevo line.
Facilities
The "Jugokrom" Combine built hotels such as "Slavija" and "Popova Šapka" in the ski center of Popova Šapka, the hotel "Neda" in Galičnik, the hotel "Riviera" in Ohrid, and others.
Ecology
The production of chromium salts has been discontinued due to the harmful effects of hexavalent chromium, which still poses a threat to water pollution in the area of Mount Žeden, where the Rašče Spring originates. In 2020, a plan was presented for the cleanup of the landfill where there are waste materials containing hexavalent chromium, but due to legal obstacles, its implementation has not yet begun.
Gallery
References
Jegunovce Municipality
Manufacturing companies of North Macedonia
Defunct companies of North Macedonia
Metallurgical industry of North Macedonia | Jugokrom | [
"Chemistry"
] | 703 | [
"Metallurgical industry of North Macedonia",
"Metallurgical industry by country"
] |
74,069,799 | https://en.wikipedia.org/wiki/Vanadium%20dioxide%20fluoride | Vanadium dioxide fluoride is the inorganic compound with the formula VO2F. It is an orange diamagnetic solid. The compound adopts the same structure as iron(III) fluoride, with octahedral metal centers and doubly bridging oxide and fluoride ligands. It is prepared by the reaction of vanadium pentoxide and vanadium(V) oxytrifluoride:
V2O5 + VOF3 → 3 VO2F
An alternative synthesis uses hexamethyldisiloxane.
Reactions
Like some other transition metal oxyfluorides, VO2F reacts with Lewis bases to give 1:2 adducts. One example is the yellow bis(pyridine) derivative VO2F(C5H5N)2.
VO2F has attracted some interest as a cathode material in batteries.
References
Oxyfluorides
Vanadium(V) compounds
Vanadyl compounds | Vanadium dioxide fluoride | [
"Chemistry"
] | 175 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
74,071,536 | https://en.wikipedia.org/wiki/USA-319 | USA-319, also known as GPS-III SV05, NAVSTAR 81 or Neil Armstrong, is a United States navigation satellite which forms part of the Global Positioning System. It was the fifth GPS Block III satellite to be launched.
Satellite
SV05 is the fifth GPS Block III satellite. Satellite construction was completed in early 2021.
The spacecraft is built on the Lockheed Martin A2100 satellite bus.
SV05 is the 24th operational Military Code (M-Code) satellite to join the GPS constellation, the last required for M-Code Full Operational Capability.
Launch
USA-319 was launched by SpaceX on 17 June 2021 at 16:09 UTC, atop Falcon 9 booster B1062. This booster had previously launched SV04 a year prior.
The launch took place from SLC-40 at Cape Canaveral Space Force Station, and placed USA-319 directly into semi-synchronous orbit. About eight minutes after launch, Falcon 9 successfully landed on the droneship Just Read the Instructions.
Orbit
As of 2023, USA-319 was in a semi-synchronous orbit with an inclination of 55.3 degrees.
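For readers unfamiliar with the term, a semi-synchronous orbit has a period of roughly half a sidereal day; the short sketch below uses Kepler's third law with standard constants to show why this corresponds to the roughly 20,200 km altitude typical of GPS satellites (the figures are illustrative and not taken from this article):

```python
import math

MU_EARTH = 398_600.4418   # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_371.0         # km, mean Earth radius
T = 0.5 * 86_164.1        # s, half a sidereal day (semi-synchronous period)

# Kepler's third law: T^2 = 4*pi^2*a^3/mu  =>  a = (mu*T^2 / (4*pi^2))**(1/3)
a = (MU_EARTH * T**2 / (4 * math.pi**2)) ** (1 / 3)

print(f"semi-major axis ~ {a:,.0f} km")            # about 26,560 km
print(f"mean altitude   ~ {a - R_EARTH:,.0f} km")  # about 20,190 km
```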
References
GPS satellites
USA satellites
SpaceX military payloads
Spacecraft launched in 2021 | USA-319 | [
"Technology"
] | 257 | [
"Global Positioning System",
"GPS satellites"
] |
74,072,119 | https://en.wikipedia.org/wiki/Orthogonal%20circles | In geometry, two circles are said to be orthogonal if their respective tangent lines at the points of intersection are perpendicular (meet at a right angle).
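For two circles with centres $c_1$, $c_2$ and radii $r_1$, $r_2$, this definition reduces to a simple metric condition; the short derivation below is a standard consequence of the definition rather than a statement taken from this article. At a point of intersection $P$, each tangent line is perpendicular to the corresponding radius, so perpendicular tangents imply perpendicular radii, and the triangle $c_1 P c_2$ has a right angle at $P$. By the Pythagorean theorem,

$$|c_1 c_2|^2 = r_1^2 + r_2^2.$$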
A straight line through a circle's center is orthogonal to it, and if straight lines are also considered as a kind of generalized circles, for instance in inversive geometry, then an orthogonal pair of lines or line and circle are orthogonal generalized circles.
In the conformal disk model of the hyperbolic plane, every geodesic is an arc of a generalized circle orthogonal to the circle of ideal points bounding the disk.
See also
Orthogonality
Radical axis
Power center (geometry)
Apollonian circles
Bipolar coordinates
References
Circles | Orthogonal circles | [
"Mathematics"
] | 136 | [
"Circles",
"Pi",
"Geometry",
"Geometry stubs"
] |
74,072,550 | https://en.wikipedia.org/wiki/Enterprise%20Extender | IBM Enterprise Extender (EE) is a standard internet transport protocol for IBM Systems Network Architecture (SNA) High Performance Routing traffic over IP. Enterprise Extender is analogous to, but independent of, Transmission Control Protocol (TCP). EE and TCP traffic can be carried over the same connections.
Enterprise Extender was developed by the Internet Engineering Task Force and the APPN Implementers' Workshop, and standardized in 1998 in Internet RFC 2353.
Enterprise Extender traffic is transmitted as UDP datagrams. It is integrated with Systems Network Architecture in z/OS systems, and implemented in software, such as IBM Personal Communications for Windows (PCOM), or in hardware, such as Cisco routers with the SNA Switching Services feature (SNASw), in remote systems.
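As an illustration of the transport just described, the minimal Python sketch below sends an opaque payload as a single UDP datagram, the way an EE endpoint encapsulates HPR traffic in UDP rather than TCP; the destination address, port number and payload bytes are hypothetical placeholders and are not taken from the EE specification.

```python
import socket

# Hypothetical EE-style peer; 192.0.2.0/24 is a documentation address range
# and the port is a placeholder, not a value quoted from RFC 2353.
DEST = ("192.0.2.10", 12000)

# Opaque blob standing in for an encapsulated HPR/SNA network-layer packet.
payload = b"example HPR packet bytes"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    # Connectionless send: no session is set up at the transport layer,
    # mirroring EE's use of UDP datagrams instead of TCP connections.
    sock.sendto(payload, DEST)
```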
References
External links
RFC 2353 APPN/HPR in IP Networks
Enterprise Extender: Concepts and Considerations SHARE 2012 Winter Technical Conference Session 10821
Network software
Transport layer protocols
Systems Network Architecture | Enterprise Extender | [
"Engineering"
] | 197 | [
"Network software",
"Computer networks engineering"
] |
74,072,851 | https://en.wikipedia.org/wiki/USA-343 | USA-343, also known as GPS-III SV06, NAVSTAR 82 or Amelia Earhart, is a United States navigation satellite which forms part of the Global Positioning System. It was the sixth GPS Block III satellite to be launched.
Satellite
SV06 is the sixth GPS Block III satellite. It was declared operational on 31 January 2023.
The spacecraft is built on the Lockheed Martin A2100 satellite bus.
Launch
USA-343 was launched by SpaceX on 18 January 2023 at 12:24 UTC, atop Falcon 9 booster B1077.
The launch took place from SLC-40 at Cape Canaveral Space Force Station, and placed USA-343 directly into semi-synchronous orbit. About eight minutes after launch, Falcon 9 successfully landed on the droneship A Shortfall of Gravitas.
Orbit
As of 2023, USA-343 was in a semi-synchronous orbit with an inclination of 55.1 degrees.
References
GPS satellites
USA satellites
SpaceX military payloads
Spacecraft launched in 2023 | USA-343 | [
"Technology"
] | 221 | [
"Global Positioning System",
"GPS satellites"
] |
74,073,245 | https://en.wikipedia.org/wiki/Doignon%27s%20theorem | Doignon's theorem in geometry is an analogue of Helly's theorem for the integer lattice. It states that, if a family of convex sets in d-dimensional Euclidean space has the property that the intersection of every 2^d of them contains an integer point, then the intersection of all of the sets contains an integer point. Therefore, integer linear programs form an LP-type problem of combinatorial dimension 2^d and can be solved by certain generalizations of linear programming algorithms in an amount of time that is linear in the number of constraints of the problem and fixed-parameter tractable in its dimension. The same theorem applies more generally to any lattice, not just the integer lattice.
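Stated symbolically (a standard formulation of the theorem, not quoted from this article): if $K_1, \dots, K_n$ are convex subsets of $\mathbb{R}^d$, then

$$\Bigl(\forall I \subseteq \{1,\dots,n\},\ |I| \le 2^d:\ \mathbb{Z}^d \cap \bigcap_{i \in I} K_i \neq \emptyset\Bigr) \implies \mathbb{Z}^d \cap \bigcap_{i=1}^{n} K_i \neq \emptyset.$$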
The theorem can be classified as belonging to convex geometry, discrete geometry, and the geometry of numbers. It is named after Belgian mathematician and mathematical psychologist Jean-Paul Doignon, who published it in 1973. Doignon credits Francis Buekenhout with posing the question answered by this theorem. It is also called the Doignon–Bell–Scarf theorem, crediting mathematical economists David E. Bell and Herbert Scarf, who both rediscovered it and pointed out its applications to integer programming.
The result is tight: there exist systems of 2^d half-spaces for which every 2^d − 1 of them have an integer point in their intersection, but for which the whole system has no integer point in its intersection. Such a system can be obtained, for instance, by choosing halfspaces that each contain all but one vertex of the unit cube. Another way of phrasing the result is that the Helly number of convex subsets of the integer lattice is 2^d. More generally, the Helly number of any discrete set of Euclidean points equals the maximum number of points that can be chosen to form the vertices of a convex polytope that contains no other point from the set. Generalizing both Helly's theorem and Doignon's theorem, the Helly number of the Cartesian product Z^d × R^k is 2^d(k + 1).
References
Theorems in convex geometry
Theorems in discrete geometry
Geometry of numbers
Lattice points | Doignon's theorem | [
"Mathematics"
] | 382 | [
"Geometry of numbers",
"Lattice points",
"Theorems in convex geometry",
"Theorems in discrete mathematics",
"Theorems in geometry",
"Theorems in discrete geometry",
"Number theory"
] |
74,075,136 | https://en.wikipedia.org/wiki/Lanthanide%20chlorides | Lanthanide chlorides are a group of chemical compounds that can form between a lanthanide element (from lanthanum to lutetium) and chlorine. The lanthanides in these compounds are usually in the +2 and +3 oxidation states, although compounds with lanthanides in lower oxidation states exist.
Lanthanide dichlorides
Divalent chlorides are formed by neodymium, samarium, europium, dysprosium, thulium and ytterbium. They can be prepared by reducing the trivalent chloride with lithium metal/naphthalene in tetrahydrofuran:
LnCl3 + Li → LnCl2 + LiCl (Ln=Nd,Sm,Eu)
Reducing the chloride with the metal or hydrogen is also possible:
2 LnCl3 + Ln → 3 LnCl2 (Ln=Nd,Sm,Eu?,Dy,Tm,Yb)
2 LnCl3 + H2 → 2 LnCl2 + 2 HCl (Ln=Nd,Sm,Eu,Dy,Tm,Yb)
Lanthanide trichlorides
The lanthanide trichlorides can generally be prepared by dissolving the oxide or carbonate in hydrochloric acid. They are produced commercially by carbothermic reaction of the oxide with chlorine. To produce the anhydrous forms of these trichlorides, the ammonium chloride route is taken. The anhydrous lanthanide trichlorides have high melting points and are generally pale colored.
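A simplified overall equation for the ammonium chloride route mentioned above, assuming the sesquioxide as the starting material and omitting the intermediate ammonium chlorometalate salts, would be:
Ln2O3 + 6 NH4Cl → 2 LnCl3 + 6 NH3 + 3 H2O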
See also
Lanthanide
Chloride
References
Lanthanide compounds
Chlorides
Lanthanide halides | Lanthanide chlorides | [
"Chemistry"
] | 355 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
66,840,661 | https://en.wikipedia.org/wiki/Elizabeth%20Ann | Elizabeth Ann (born December 10, 2020) is a black-footed ferret, the first U.S. endangered species to be cloned. The animal was cloned using frozen cells from Willa, a female black-footed ferret who died in the 1980s and had no living descendants. The cloning process was led by Revive & Restore, a biodiversity non-profit.
Background
Black-footed ferrets are the only ferret species native to the United States. The black-footed ferret is one of the most endangered and rarest land mammals in North America; a small population was found in Wyoming in 1981. The limited genetic diversity found among this population put the species at risk. Scientists sent genetic material from Willa to San Diego Zoo’s Frozen Zoo in 1988. An embryo cloned from Willa's preserved cells was implanted in a surrogate domestic ferret in November 2020, to avoid putting an endangered ferret at risk. Elizabeth Ann was delivered via c-section on December 10.
Life
Elizabeth Ann will live in Colorado and be studied for scientific purposes; she will not be released into the wild. By February 2022, Elizabeth Ann had reached puberty and scientists were looking for a viable mate. A panel discussion, organized by the Draper Natural History Museum in October 2022, informed the public that Elizabeth Ann had a hysterectomy for unspecified reasons, but also that other clones were on their way. Elizabeth Ann remained healthy but was unable to breed due to hydrometra, a condition causing fluid retention within the uterus, alongside an underdeveloped uterine horn. As these conditions are common in black-footed ferrets, they are not believed to be linked to the cloning process.
Other clones
In April 2024, the U.S. Fish and Wildlife Service announced the birth of two new black-footed ferret clones, Noreen and Antonia, who were cloned from the same genetic material as Elizabeth Ann. Noreen was born at the National Black-footed Ferret Conservation Center in Colorado, while Antonia resides at the Smithsonian's National Zoo & Conservation Biology Institute near Front Royal, Virginia. Both were healthy and reaching expected developmental and behavioral milestones. The Service and its research partners planned to breed Noreen and Antonia once they reached reproductive maturity later in 2024. Antonia gave birth to a litter of three kits in June 2024, two of which (one female, one male) survived.
See also
List of cloned animals
References
2020 animal births
Cloned animals
Individual musteloids | Elizabeth Ann | [
"Biology"
] | 515 | [
"Cloning",
"Cloned animals"
] |
66,843,966 | https://en.wikipedia.org/wiki/Nissan%20EM%20motor | Nissan EM is a brand of electric motors by Nissan. The first EM motor, the EM61, debuted in 2010 as part of the first-generation Nissan Leaf. The EM series of motors have since been used in various hybrid and all-electric Nissan vehicles.
EM61
The EM61 made its debut in 2010. It was used only in the first-generation Nissan Leaf (ZE0, 2010–2012). The EM61 generates 280 Nm of peak torque and has a maximum speed of 10,390 rpm.
EM57
The EM57 was first released with the 'AZE0' Nissan Leaf refresh in 2013. This motor has a smaller footprint than the EM61, allowing for 11.7 kg of weight savings in the inverter/motor package. The motor also trades some peak torque for a more efficient power range. It peaks at 250 Nm of torque and has a maximum speed of 10,500 rpm.
It is used in the following electric vehicles:
Nissan Leaf (AZE0 2013–2017)
Nissan e-NV200 (2014–present)
Nissan Leaf (ZE1 40kWh, 2018–present)
Nissan Leaf (ZE1 e+ 62kWh, 2019–present)
It is also used in the following hybrids:
Nissan Note e-Power (2017–2020)
Nissan Serena e-Power (2018–present)
Nissan Kicks e-Power (2020–present)
EM57 refresh
In 2018, the EM57 motor received an update with the introduction of the ZE1 Nissan LEAF. Depending on which inverter was mounted on the motor, power was increased to 110 kW (320 Nm), and on the e+ model it was further raised to 160 kW (340 Nm). The maximum speed was also increased, to 11,330 rpm on the e+ LEAF. The motor received three tweaks:
Slight reduction of permanent magnet material
L-shaped coolant inlet
Minor casting tweak to the front and rear covers
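The power and torque figures quoted for the refreshed EM57 imply a minimum shaft speed at which peak power can be reached (P = τ·ω); the short calculation below is illustrative, uses only the figures given in this section, and does not represent official Nissan specifications.

```python
import math

def min_rpm_for_peak_power(power_w: float, torque_nm: float) -> float:
    """Lowest speed (rpm) at which the stated peak power is attainable
    while torque is still at its stated peak value (P = torque * omega)."""
    omega = power_w / torque_nm        # angular speed in rad/s
    return omega * 60 / (2 * math.pi)  # convert rad/s to rpm

print(round(min_rpm_for_peak_power(110_000, 320)))  # ~3,283 rpm (standard ZE1)
print(round(min_rpm_for_peak_power(160_000, 340)))  # ~4,494 rpm (e+ model)
```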
EM47
The EM47 motor was released in 2020 with the refreshed Nissan Note. It is only used in Nissan's e-POWER lineup. It is matched with an inverter that has a 40% size reduction and a 30% weight reduction. The EM47 has a maximum speed of 10,500 rpm and produces 254 Nm of torque.
It is used in the following hybrids:
Nissan Note e-Power (2020–present)
Nissan Kicks e-Power (2022–present; Thailand)
References
Electric motors
EM | Nissan EM motor | [
"Technology",
"Engineering"
] | 513 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
66,844,294 | https://en.wikipedia.org/wiki/Restoration%20Ecology%20%28journal%29 | Restoration Ecology is a bimonthly peer-reviewed scientific journal covering research on restoration ecology. It was established in 1993 and is published by Wiley-Blackwell on behalf of the Society for Ecological Restoration. The editor-in-chief is Stephen Murphy (University of Waterloo).
Abstracting and indexing
The journal is abstracted and indexed in a number of bibliographic databases.
According to the Journal Citation Reports, the journal has a 2019 impact factor of 2.721.
References
External links
Journal page at society website
Wiley-Blackwell academic journals
Bimonthly journals
Ecology journals
Academic journals established in 1993
English-language journals | Restoration Ecology (journal) | [
"Environmental_science"
] | 118 | [
"Environmental science journals",
"Ecology journals"
] |
66,845,409 | https://en.wikipedia.org/wiki/Roller%20hearth%20furnace | A roller hearth furnace is a type of industrial furnace used for heat treating material in continuous and index-continuous production processes.
Construction
Roller hearth furnaces are built with two openings opposite each other to allow items to pass into, through and out of the furnace while moving in one direction. Depending on their application and operating temperature they may contain either a conveyor belt or furnace rollers for the movement of items through the furnace. Various heating methods are available, including gas-fire and resistive heating.
See also
Industrial furnace
Kiln
Industrial oven
References
Industrial furnaces | Roller hearth furnace | [
"Chemistry"
] | 113 | [
"Metallurgical processes",
"Industrial furnaces"
] |
66,845,609 | https://en.wikipedia.org/wiki/Wood-pasture%20hypothesis | The wood-pasture hypothesis (also known as the Vera hypothesis and the megaherbivore theory) is a scientific hypothesis positing that open and semi-open pastures and wood-pastures formed the predominant type of landscape in post-glacial temperate Europe, rather than the primeval closed forests commonly assumed. The hypothesis proposes that such a landscape would be formed and maintained by large wild herbivores. Although others, including landscape ecologist Oliver Rackham, had previously expressed similar ideas, it was the Dutch researcher Frans Vera who, in his 2000 book Grazing Ecology and Forest History, first developed a comprehensive framework for such ideas and formulated them into a theorem. Vera's proposals, although highly controversial, came at a time when the role grazers played in woodlands was increasingly being reconsidered, and are credited with ushering in a period of increased reassessment and interdisciplinary research in European conservation theory and practice. Although Vera largely focused his research on the European situation, his findings could also be applied to other temperate ecological regions worldwide, especially the broadleaved ones.
Vera's ideas have met with both rejection and approval in the scientific community, and continue to lay an important foundation for the rewilding-movement. While his proposals for widespread semi-open savanna as the predominant landscape of temperate Europe in the early to mid-Holocene have at large been rejected, they do partially agree with the established wisdom about vegetation structure during previous interglacials. Moreover, modern research has shown that, under the current climate, free-roaming large grazers can indeed influence and even temporarily halt vegetation succession. Whether the Holocene prior to the rise of agriculture provides an adequate approximation to a state of "pristine nature" at all has also been questioned, since by that time anatomically modern humans had already been omnipresent in Europe for millennia, with in all likelihood profound effects on the environment.
The severe loss of megafauna at the end of the Pleistocene and beginning of the Holocene known as the Quaternary extinction event, which is frequently linked to human activities, did not leave Europe unscathed and brought about a profound change in the European large mammal assemblage and thus ecosystems as a whole, which probably also affected vegetation patterns. The assumption, however, that the pre-Neolithic represents pristine conditions is a prerequisite for both the "high-forest theory" and the Vera hypothesis in their respective original forms. Whether or not the hypothesis is supported may thus further depend on whether or not the pre-Neolithic Holocene is accepted as a baseline for pristine nature, and thus also on whether the Quaternary extinction of megafauna is considered (primarily) natural or man-made.
Vera's hypothesis has important repercussions for nature conservation especially, because it advocates for a reorientation of emphasis away from the protection of old-growth forest (as per the competing high forest theory) and towards the conservation of open and semi-open grasslands and wood pastures, through extensive grazing. This aspect in particular has attracted considerable attention, and has made Vera's hypothesis an important point of reference for conservation grazing and rewilding initiatives. The wood-pasture hypothesis also has points of contact with traditional agricultural practices in Europe, which may conserve biodiversity in a similar way to wild herbivore herds.
Names and definitions
Frans Vera's hypothesis has many names, since Vera himself did not provide a distinguished name for it. Instead, he simply referred to it as the alternative hypothesis, alternative to the high-forest theory, which he called the null hypothesis. As a result, it has been called by many names over the years, including the wood-pasture hypothesis, the wooded pasture hypothesis, the Vera hypothesis, the temperate savanna hypothesis and the open woodland hypothesis. Especially in Continental Europe, it is commonly known as the megaherbivore hypothesis and literal translations of it.
Vera limited the geographic area of his ideas to Western and Central Europe between 45°N and 58°N latitude and 5°W and 25°E longitude. This includes most of the British Isles and everything between France (except the Southern third) and Poland and Southern Scandinavia to the Alps. Furthermore, he confined it to lowland altitudes. By extension, the North American East Coast is also addressed as an analogy with a comparable climate.
High-forest theory
Heinrich Cotta: high-forest theory
In his 1817 work Anweisungen zum Waldbau (Directions for Silviculture), Heinrich Cotta posited that if humans abandoned his native Germany, in the space of 100 years it would be "covered with wood". This assumption laid the foundation for what is now called the high-forest theory, which assumes that deciduous forests are the naturally predominant ecosystem type in the temperate, broad-leaved regions.
Frederic Clements: linear succession
Later, this position was accompanied by Clements' formulation of the theory of linear succession, meaning that under the right conditions bare ground would, over time, invariably become colonised by a succession of plant communities eventually leading to closed stands dominated by the tallest plant species. Because in most of the temperate hemisphere the potentially tallest plants are trees, the final product would therefore chiefly be forest. Albeit with changes in conceptualisation and some modifications, this concept remains the one favoured by most, and provides the conceptual framework for many forest-related methods and customs in forestry and conservation. This includes the doctrine advocated by German forest-ecologist Knut Sturm, which highlights the importance of non-intervention and space of time for forest protection, as it is implemented in forest reserves such as Białowieża.
Further refinements
Clements' notion of stable climax communities was later challenged and refined by authorities such as Arthur Tansley, Alexander Watt and Robert Whittaker, who championed the inclusion of dynamic processes, like temporary collapse of canopy cover because of windthrow, fire or calamities, into Clements' framework. This, however, did not change anything about the status of the "high-forest theory" as the commonly accepted view; that without human intervention closed-canopy forest would dominate the global temperate regions as the potential natural vegetation. This is also the concept that was advocated by European plant experts like Heinz Ellenberg, Johannes Iversen and Franz Firbas.
The reconstruction of vegetation history
Apart from theoretical considerations, this concept has relied and continues to rely heavily on both field observations and, more recently, on findings from pollen analysis, which allow inferences about the vegetation structure of past epochs. For example, vegetation trends can be reconstructed from the ratio of tree pollen to pollen associated with grassland. Pollen analysis is the most widely used means of generating historic vegetation data and the analysis of pollen data has provided a solid database from which a predominance of forest throughout the early stages of the Holocene of temperate Europe, especially the Atlantic, is generally inferred, although the possibility of regional differences remains open. On that basis, the history of vegetation in Europe is generally reconstructed as a history of forest.
Pollen analysis, however, has been criticized for its inherent bias towards wind-pollinated plant species and, importantly, wind-pollinated trees, and has been shown to overestimate forest cover. To account for this bias, a corrective model (REVEALS) is used, whose application leads to results that differ substantially from those drawn from the traditional comparison of pollen percentages alone. Alternatively to or in combination with pollen, fossil indicator organisms – such as beetles and molluscs – can be used to reconstruct vegetation structure.
Large herbivores and high-forest theory
There is no general agreement on herbivores and their influence on succession in natural ecosystems in the temperate hemisphere. In the high-forest theory framework, wild herbivores are mostly considered minor factors, derived from the assumption that the natural vegetation was forest. Therefore, wild herbivores were characterised by Tansley as followers of succession, not as actively influencing it, because otherwise Europe would not have been forested. From this assumption the principle was developed that the natural abundance of herbivores does not hinder forest succession, which means that herbivore numbers are necessarily considered too high once they impede natural forest regeneration. For example, WWF Russia considers five to seven animals the optimal density of bison per 1000 ha (10 km²), because if the population exceeds 13 animals per 1000 ha, first signs of vegetation suppression are observed. Consequently, the bison population in Białowieża is controlled by culling. Similarly, it is widely believed that a density of two to seven deer is sustainable, based on the assumption that if deer numbers exceed this threshold, they start having a negative impact on woodland regeneration. Consequently, culling is commonly seen as necessary to reduce a perceived overabundance of deer to sustainable levels and mimic natural predation.
Others, however, have criticised this view. In a 2023 publication, Brice B. Hanberry and Edward K. Faison argued that in the eastern United States, where white-tailed deer are commonly considered overabundant due to the extirpation of wolves and cougars, there are currently no more deer than there were historically when these predators were present. Furthermore, they found that even at densities that are perceived as too high, the influence of deer may be ecologically beneficial. The assumption that population control through hunting is necessary in order to mimic the effect of natural predators is also not entirely supported by scientific analyses of natural predator-prey dynamics. Instead, the control of herbivore numbers in nature probably depends on other factors. A perhaps more important influence predators may have on prey animals is the landscape of fear their presence can create, promoting landscape heterogeneity. However, in the presence of the largest megafauna, which are largely immune to predation, even this ability is limited. Overall, how ungulate populations are controlled in nature is controversial, and food availability is an important constraint, even in the presence of apex predators.
In regions with relatively intact large-mammal assemblages in Africa and Asia, as well as in European rewilding areas where "naturalistic grazing" is practised, herbivore biomass exceeds the values commonly deemed appropriate for temperate forests many times over, and estimates for the mammoth steppe fall within a similar range. The herbivore biomass of Britain during the Eemian interglacial has been estimated at the equivalent of more than 2.5 fallow deer per ha. Hence, the ecologist Christopher Sandom and others have suggested that the comparatively high forest cover of the pre-Neolithic European Holocene may be a consequence of megaherbivore extinctions during the Quaternary extinction event: compared to the last interglacial in Europe with a pristine megafauna, the Eemian, the early stages of the Holocene appear to have been much more forested. According to the authors, this is unlikely to be the result of the Holocene's only slightly cooler climate as compared to the Eemian. However, this is also subject to debate.
Background: grazers and browsers
The impact herbivores have on the landscape level depends on their way of feeding. Namely, browsers like roe deer, elk and the black rhino focus on woody vegetation, while the diet of grazers like horse, cattle and the white rhino is dominated by grasses and forbs. Intermediate feeders, like the wisent and the red deer, fall in between. Generally, grazers tend to be more social, less selective in their food choices and forage more intensively. Therefore, their impact on vegetation composition tends to be higher, as well as their ability to maintain open spaces.
Since the extinction of the aurochs in 1627 and the wild horse around 1900, none of the remaining large wild herbivores in Europe is an obligate grazer. Similarly, domesticated descendants of aurochs and wild horse, cattle and horse, are now largely kept in stables, factory farms and close to settlements, making them effectively extinct in the landscape. What remains are browsers and mixed feeders – roe deer, red deer, elk, wild boar, wisent and beaver, often in low densities. Backbreeding-projects, such as the German Taurus project and the Dutch Tauros programme are addressing this issue by breeding domestic cattle that can be released into the landscape as hardy and sufficiently similar proxies to act as ecological replacements for the aurochs. Similarly, primitive horse breeds such as the Konik, Exmoor pony and the Sorraia are being used as proxies for the tarpan.
Frans Vera
Vera argued that the dominant landscape type of the early to mid-Holocene was not closed forest, but a semi-open, park-like one. This semi-open landscape, he proposed, was created and maintained by large herbivores. During the Holocene, these herbivores included aurochs, European bison, red deer and tarpan. Up to the Quaternary extinctions, many other megafaunal mammals like the straight-tusked elephant or Merck's rhinoceros existed in Europe as well, which probably kept the forests open during warm interglacial periods like the Eemian interglacial. Vera also postulated that lowland forest did not emerge on a large scale before the onset of the Neolithic period and subsequent local extinctions of herbivores, which in turn allowed forests to thrive more unhindered. Indeed, investigations point to at least locally open circumstances, for example in floodplains, on infertile soils, chalklands and in submediterranean and continental areas, but maintain that forest largely dominated.
In his book Vera also discussed the decline of ancient oak-hickory forest communities in Eastern North America. Many forests that stem from Pre-Columbian times (old-growth forests) feature light-demanding oaks and hickories prominently. However, these do not readily regenerate in modern forests, a phenomenon commonly referred to as oak regeneration failure. Instead, shade-tolerant species such as red maple and American beech dominate increasingly. While the cause is still poorly understood, a lack of natural fire is commonly presumed to play a role. Vera instead suggested that the grazing and browsing of wild herbivores, most importantly American bison, created the conditions oaks and hickories need for successful regeneration, and attributed the modern lack of regeneration of these species in forests to the mass slaughter of bison by European settlers.
Paleoecological evidence drawn from fossil Coleoptera deposits has also shown that, albeit rare, beetle species associated with grasslands and other open landscapes were present throughout the Holocene of Western Europe, which points to open habitats being present, but restricted. However, paleoecological data from previous interglacials when the larger megafauna was still present indicate widespread warm temperate savannah. This could mean that elephants and rhinos were more effective creators of open landscapes than the herbivores left after the Quaternary extinction event. On the other hand, traditional animal husbandry may have mitigated the effects of possibly human-induced megafaunal die-off, allowing the survival of species of the open landscape previously created and maintained by megafauna.
Frans Vera was not the first to question the high-forest paradigm. Botanist Francis Rose had expressed doubts already in the 1960s, knowing about British plant and lichen species and their light requirements. The relationship between large grazers and landscape openness, and the significance of the Quaternary extinctions of megafauna in this regard, had also been recognized prior to Vera. In 1992, for example, the archaeologist Wilhelm Schüle theorized that the genesis of closed forest in temperate Europe was the result of prehistoric man-made megafauna extinctions. Landscape ecologist Oliver Rackham, in a 1998 article entitled "Savanna in Europe", envisaged a kind of savanna as the original predominant landscape type of northwestern Europe. Vera, however, was the first to develop a comprehensive theorem to explain why forest did not dominate even in the Holocene, and to thus propose a real alternative to the high-forest theory.
In some of its aspects, the wood-pasture hypothesis bears similarity to earlier hypotheses that were challenged and refuted by scholars such as Reinhold Tüxen.
Main arguments
Oak and hazel
Vera relies on several lines of argument based on experiments, ecology, evolutionary ecology, palynology, history and etymology. One of his main arguments is of an ecological nature: the widespread lack of successful regeneration of light-demanding tree species in modern forests, especially the lack of regeneration of pedunculate oak, sessile oak (together hereafter addressed as "oak") and common hazel in Europe. He contrasts this reality with European pollen deposits from previous ages, in which oak and hazel often account for a dominant share of the pollen, making a dominance of these species in previous ages conceivable. In the case of hazel especially, sufficient flowering is only achieved when enough sunlight is available, i.e. when the plant grows outside of a closed canopy. He argues that the only explanation for the great abundance of oak and hazel pollen in previous ages is that the primeval landscape was open, and this contrast forms the principal argument of his hypothesis. It has also been suggested that oak requires disturbances for successful establishment, disturbances that large herbivores may provide.
However, pollen records from islands that lacked many of the large grazers and browsers that, according to Vera, were essential for the maintenance of landscapes with an open character in temperate Europe show almost no differences in comparison to mainland Europe. More specifically, pollen records from Holocene Ireland, which during the early Holocene was apparently, owing to a lack of fossils, devoid of any big herbivores except for abundant wild boar and rare red deer, show almost equally high percentages of oak and hazel pollen. Thus it could be concluded that large herbivores were not a required factor for the degree of openness in a landscape, and that the abundance of pollen from species that are unable to reproduce and regenerate sufficiently under a closed canopy, such as hazel and oak, can only be explained by other factors like windthrow and natural fires.
Vera's notion may be supported by observations, over the course of 20 years, of forest regeneration in gaps created by windthrow, which showed that hornbeam and beech dominate the emerging stands and largely displace oaks on fertile, nutrient-rich soil. However, after the last Ice Age oak returned to Central and Western Europe earlier than beech or hornbeam, which may have contributed to its commonness, at least during the early Holocene. Still, other shade-tolerant tree species like lime and elm were equally fast returnees, and do not seem to have limited oak abundance.
On the other hand, substantial natural oak-regeneration commonly takes place outside of forests in fringe and transitional habitats, suggesting that a focus on regeneration in forests in an attempt to explain oak regeneration failure may be insufficient in regard to the ecology of Central European oak species. Rather, an underestimated reason for widespread failure of oak regeneration may be found in the direct effects of land-use changes since the early modern period, which has led to a more simplistic, homogeneous landscape, as spontaneous regeneration of both oak and hazel does frequently occur in margins, thickets, and low-grazing-intensity or abandoned pasture/arable land. Overall, oak is an adept coloniser of open areas and especially of transitional zones between vegetation zones such as forest and open grassland. Looking for regeneration within forests may therefore be futile from the outset. There is, therefore, no general "failure" in oak regeneration, but only a failure of oak regeneration within closed forests. This, however, may be expectable and natural given oak's colonising nature.
Furthermore, new species of oak mildew (Erysiphe alphitoides) observed on European oaks for the first time at the beginning of the 20th century have been cited as a possible reason for the modern lack of oak regeneration in forests, since they affect the shade tolerance, particularly of young pedunculate and sessile oaks. Although the origin of these new oak pathogens remains obscure, it seems to be an invasive species from the tropics, possibly conspecific with a pathogen found on mangos.
Ecological anachronisms
Vera prominently argued that the ecology of other light-demanding and often thorny woody species native to Europe, such as common hawthorn, midland hawthorn, blackthorn, Crataegus rhipidophylla, wild pear and crab apple, can only be explained by the influence of large herbivores, and that in the absence of these animals such species represent an anachronism.
Shortcomings of pollen analysis
Vera further contested that pollen diagrams can adequately display past species occurrences since, inherently, pollen deposits tend to overrepresent species that are wind-pollinated and notoriously underrepresent species that are pollinated by insects. Furthermore, he proposed that an absence of grass pollen in pollen diagrams can be explained by high grazing pressure, which would prevent the grasses from flowering. Under such conditions, he claimed, open environments with only scattered mature trees may appear as closed forests in pollen deposits. He consequently proposed that the conspicuous scarcity of grass pollen in pollen deposits dating from the pre-Neolithic Holocene might not necessarily speak against the existence of open environments dominated by grasses. However, it is generally considered that over 60% tree pollen in pollen deposits indicates a closed forest canopy, which is true for the vast majority of European early to mid-Holocene deposits. Sites with less than 50% arboreal pollen, on the other hand, are consistently associated with human activities.
Circular reasoning
Vera stressed that the prevailing high-forest theory was born out of observations of spontaneous regeneration in the absence of grazing animals. He argued that the presupposition that these animals do not exert a significant influence on natural regeneration, and thus on the vegetation structure as a whole, has been made without comparative confirmation, and is therefore a circular argument. Indeed, modern forestry and forest theory arose largely in the modern era and went hand in hand with the ongoing inclosure of common land throughout Europe. A consequence thereof was in many cases a ban of livestock from the forests, which had previously largely been open woodland pastures, often dominated by oaks. These were multifunctional and used for a range of purposes, from pannage and livestock grazing to the harvest of tree hay, coppice, timber and oak galls for the manufacture of ink, as well as for the production of charcoal, crops and fruit. This former usage of forests is often still revealed by a big age gap between tree generations, particularly if the oldest trees are mainly oaks, and many Central European forest reserves originated as common wood-pastures.
Shifted baselines
In nature conservation, a shifted baseline is a baseline for conservation targets and desired population sizes that is based on non-pristine conditions. In this sense, the term was coined by marine biologist Daniel Pauly when he observed that some fisheries scientists used the population sizes of fish at the beginning of their own careers as a desired baseline, regardless of whether the fishing stocks they used as baselines had already been diminished by human exploitation. He noticed that the estimates these scientists took for reference differed markedly from historical accounts. Consequently, he concluded that over generations the perception of what is considered normal would change, and so may what is considered a depleted population. Pauly called this the shifting baseline syndrome. In line with this, it may be argued that the dominance of closed-canopy forest as the prevailing conservation narrative in Europe similarly arises from multiple shifted baselines:
While it is plausible that lions (Panthera spelaea, P. leo leo), leopards (Panthera pardus spelaea, P. pardus tulliana), hyenas (Hyaena hyaena prisca, Crocuta crocuta spelaea), dholes (Cuon alpinus europeus), wild ass (Equus hydruntinus, E. hemionus kulan) and moon bears (Ursus thibetanus mediterraneus, U. t. permjak), among other victims of European Quaternary and Holocene extinctions, would still be native to Europe, had they not been driven out by humans, none of these species are listed as such in the EU's Habitats Directive's annexes. Likewise, globally extinct megafauna such as straight-tusked elephants and rhinos would likely be native to Europe without human interference, and they would in all probability have a strong positive impact on biodiversity and ecosystem functions. It is therefore very likely that the megafauna extinctions of the late Pleistocene and early Holocene had profound implications for European and worldwide ecosystems, especially given the paramount importance comparable animals have for modern ecosystems.
Vera pointed out that words like wold and forest used to have different connotations than they do today. While today a forest is a dense and reasonably large tract of trees, the medieval Latin forestis, from which it derives, denoted open stands of trees and referred to wild, uncultivated land that was home also to aurochs and wild horses. According to historical sources, these forestis included hawthorn, blackthorn, wild cherry, wild apple and wild pear, as well as oaks, all of which are light-demanding species that cannot regenerate successfully in closed-canopy forest. From this Vera concluded that original wildwoods still existed in Europe during the medieval period. Thus, when scholars of the 19th and 20th centuries assumed that grazing animals had destroyed the original European closed-canopy wildwoods, they were misinterpreting these terms. Instead, these forests, he found, had been destroyed following the industrial revolution and the population growth it caused, which in turn led to overexploitation.
He further argued that this initial misinterpretation gave rise to another: that forest regeneration would naturally take place inside the forest. Thus, scholars of the 19th and 20th centuries interpreted medieval grazing regulations meant to allow tree regeneration in coppiced mantle and fringe vegetation as intended to allow regeneration inside a forest. In their time, solid firewood was preferred to the medieval coppice bundles, e.g. faggots. However, the production of solid firewood required the felling of trees at an age when they could no longer produce suckers, an ability that trees commonly lose with advancing age. This then led to a different management system: replacement by saplings that were planted or naturally regenerated via, for example, shelterwood cuttings. Initially, these trees regenerated inside the forests were distinguished from wild growth outside the forests. In German, the former were referred to as natural regeneration (Naturverjüngung) while the latter had a different name: Holzwildwuchse. Thus, natural regeneration was not synonymous with the natural regeneration of trees in a natural situation. It was not until the 19th and 20th centuries that this distinction was abandoned in German. However, in the absence of thorny nurse bushes, which disappeared due to the shade under the trees, the planted trees then had to be protected manually. The "natural regeneration" therefore still depended on work such as ploughing, the removal of browsing pressure and the suppression of weeds, making it not "natural" in the conventional sense. Instead, according to Vera, the original meaning of the word "natural" in this context was that a seed fell from a tree and then grew by itself, as opposed to being planted. This shift in where tree regeneration was expected to occur, from the thorny fringes of groves in wood-pastures to the interior of closed tree stands, then led to the notion that herbivores were detrimental to forest regeneration, and necessitated fenced-out areas, tree shelters and population control via hunting.
Considered "alien" to the landscape, akin to invasive species, cattle and horses were now also removed from the forests, as it happened in former wood-pastures like Białowieża, because they were seen as harmful to the creation of a new old-growth forest. At the same time, the introduction of the potato made pannage, the fattening of pigs on acorns, obsolete, and grass species specifically bred for a high yield superseded the traditional pasturing, mostly of cattle, in wood-pastures. Together, these mechanisms created the spatial separation between livestock rearing and forestry, grassland and forest enshrined into modern law and practice.
Finally, the biodiversity losses associated with the conversion of open grassland, mantle and fringe vegetation and open-grown trees into closed-canopy forests were legitimised by the assumption that the forest was the only natural ecosystem, and hence species losses were casualties of a natural cause.
However, a strong argument that may put Vera's etymological evidence into perspective altogether is that the composition of medieval woodlands may not be relevant to their naturalness. Since by the medieval period agricultural traditions had already been ubiquitous in most of Europe for millennia, it may be unrealistic to assume that what people of the time perceived and labelled as wilderness may indeed have been one. Instead, it is doubtful that pristine conditions had survived in the Central- and Western European lowlands, Vera's area of study, at any rate up to this point.
Succession in grazed ecosystems
There are several ecological processes at work in herbivore grazing systems, namely associational resistance, shifting mosaics, cyclic succession, and gap dynamics. These processes would collectively transform the surrounding landscape, as per Vera's model.
Associational resistance
The term associational resistance describes facilitating relationships between plants that grow close to each other, against both biotic and abiotic stresses like browsing, drought, or salinity. In relation to grazed ecosystems, it can allow for the recruitment of trees and other palatable woody species, via thorny nurse bushes, in these environments. It has been proposed and demonstrated that associational resistance can be a key process in grazed environments, ensuring natural succession. In temperate Europe, succession on pastures commonly starts with so-called "islets" ("Geilstellen"), patches of dung which are avoided by the herbivores for an amount of time after deposition that is sufficient to allow the establishment of relatively unpalatable species such as rushes, nettles and hummocks of tall grasses like tussock grass. These swards, in turn, provide protection for thorny shrubs such as blackthorn, roses, hawthorn, juniper, bramble, holly and barberry during their early years, when they do not yet have protective thorns and are therefore vulnerable. Once the thorny saplings are fully established, they grow bigger over time and subsequently allow other, less resilient species to establish in their thorn protection, forming mantle and fringe vegetation together with species such as guelder rose, wild privet and dogwood. Other species such as mazzard, checker tree, rowan and whitebeam, which are distributed by fruit-eating birds through their faeces, would also frequently be placed within these shrubs, through resting birds leaving their droppings.
On the other hand, nut-bearing species such as hazel, beech, chestnut, pedunculate and sessile oak would become "planted" somewhat deliberately in the vicinity of those shrubs by rodents such as red squirrel and wood mouse, the nuthatch and corvids such as crows, magpies, ravens and especially jays, which store them for winter supply. In Europe, the Eurasian jay represents the most important seed disperser of oak, burying acorns individually or in small groups. Eurasian jays not only bury acorns in depths favoured by oak saplings, but seemingly also prefer spots with sufficient light availability, i.e. open grassland and transitions between grassland and shrubland, seeking for vertical structures such as shrubs in the near surroundings. Since oak is relatively light-demanding while not having the ability to regenerate on its own under high browsing pressure, these habits of the jay presumably benefit oak, since they provide the conditions oak requires for optimal growth and health. On a similar note, the nuthatch seems to assume a prominent role for hazel dispersal.
In addition, species such as wild pear, crab apple and whitty pear, which bear relatively large fruit, would find propagators in herbivores such as roe deer, red deer and cattle, or in omnivores such as the wild boar, red fox, the European badger and the raccoon, while wind-dispersed species such as maple, elm, lime or ash would land within these shrubs by chance.
Thorny bushes play an important role in tree regeneration in the European lowlands, and evidence is emerging that similar processes can also ensure the survival of browsing-sensitive species like rowan in browsed boreal forests.
Shifting mosaics and cyclic succession
A natural pasture ecosystem would therefore undergo various stages of succession, starting with unpalatable perennial plants, which provide shelter for thorny woody plants. These would then start to form thickets and enable the establishment of larger, palatable shrubs and trees respectively. Over time these would shade out the unpalatable but light-demanding thickets and emerge as big solitary trees, in the case of single-standing shrubs like hawthorn, or as groups of trees in the case of expanding blackthorn shrubs. Because of the herbivore disturbance (browsing, trampling, wallowing, dust bathing), not even shade-tolerant tree saplings would be able to grow under the established trees. Therefore, once the established trees started to decay, whether due to old age or other factors like pathogens, illness, lightning strike or windbreak, this would leave open, bare land behind for grasses and unpalatable species to colonise, closing the cycle.
On a large scale, the different successional stages would thus contribute to an ecosystem where open grassland, scrubland, emerging tree growth, groves of trees and solitary trees exist next to each other, and the alternation between these various successional stages would create dynamic shifting mosaics of vegetation. This in turn stimulates high biodiversity. Consequently, Vera's counter-proposal to linear succession and to Watt's gap-phase model of closed-canopy forest, to which it has been compared, is a model of successional cycles known as the shifting mosaics model.
In effect however, not all areas would have necessarily been subject to this permanent change. Since grazing animals generally prefer to spend time in grasslands rather than in closed stands of trees, it would practically be possible for three different landscape types to coexist over longer periods in the same spots: permanently open areas, permanently closed groves and areas subject to constant shifting mosaics.
The prehistoric baseline
The Eemian landscape
Although Vera himself limited his argument to the Holocene and the fauna present into historical times, research better supports his claims in regard to earlier interglacials. Modern humans have likely exerted a strong influence in Europe since their first appearance there during the Weichselian glaciation, which has led some researchers to criticize Vera's choice of the early to mid Holocene as his benchmark for pristine nature. Instead, they argue that pristine nature only existed in Europe before the arrival of Homo sapiens. In their view, the best model for what a truly natural landscape during a warm period in Europe would look like is the Eemian interglacial, which was the last warm period before the current Holocene, approximately 130,000 to 115,000 years ago, and the last warm period before Homo sapiens. While archaic humans existed in the form of Neanderthals, their influence was probably only localised, due to their low population density. During this warm period, paleoecological data indeed suggest that semi-open landscapes, as postulated by Vera, were widespread and common, most likely maintained by large herbivores. Alongside these semi-open landscapes, however, the researchers also found evidence for closed-canopy forest. Overall, the Eemian landscape appears to have been very dynamic and probably consisted of varying degrees of openness, including open grasslands, wood pastures, light-open woodland and closed-canopy forest.
The European megafauna
The Eemian interglacial was one of many warm interglacials during the Quaternary, of which the Holocene (or Flandrian interglacial) is the most recent. These alternating glacial and interglacial periods, triggered by the Milankovitch cycles, in turn had a profound influence on life. In the Middle to Late Pleistocene, the result of this cycling was that two very different faunal and floral assemblages took turns occupying Central Europe. The warm-temperate Palaeoloxodon faunal assemblage, consisting of the straight-tusked elephant, Merck's rhinoceros, the narrow-nosed rhinoceros, hippopotamuses, European water buffalo, aurochs, and several species of deer, among others (including most of today's European fauna), had its core area in the Mediterranean. The warm-temperate assemblage periodically expanded from there into the rest of Europe during warm interglacials, and receded during glacial periods into refugia in the Mediterranean. Meanwhile, the cold-temperate faunal assemblage of the mammoth steppe, consisting of the woolly mammoth, woolly rhinoceros, reindeer, saiga, muskox, steppe bison, arctic fox and lemming among others, was spread across vast areas of Northern Eurasia as well as North America, and during periodic cold glacials advanced deep into Europe. Other animals, such as horses, steppe lions, the scimitar cat, the Ice Age spotted hyena and wolves, were part of both faunal assemblages. Both groups of animals spread and retreated cyclically, depending on whether the climate favoured one or the other, but essentially remained intact in refugia that continued to provide the conditions they preferred.
The Quaternary extinction event
Prior to the Last Glacial Maximum, however, elements of the warm-temperate Palaeoloxodon fauna (hippopotamus, straight-tusked elephant, the two Stephanorhinus species and Neanderthals, for example) as well as the steppe species Elasmotherium sibiricum started to disappear and eventually went extinct. At the onset of the Last Glacial Maximum, populations of the Ice Age spotted hyena and the cave bear complex (Ursus spelaeus, Ursus ingressus) seem to have collapsed on a large scale, and these became extinct next. After the Last Glacial Maximum and towards the Holocene, extinctions continued, with many emblematic "Ice Age species" of the mammoth steppe and adjacent habitats, such as the woolly rhinoceros, the steppe lion, the giant deer and the woolly mammoth, falling victim, although small regional populations of woolly mammoth and steppe bison held out well into the Holocene, and the giant deer was present in the southern Ural region into historical times. These extinctions have been variously credited to human impact, climate change, or a combination of the two.
These extinctions were not limited to Europe or the Palearctic, but rather occurred on all continents except for Antarctica, in temporal connection with the migration of Homo sapiens. Together, these extinctions are commonly known as the Quaternary extinction event. Whereas today megafaunal proboscideans, rhinocerotids and hippopotamids exist exclusively in the global south, notably Sub-Saharan Africa and South and Southeast Asia, land mammals of comparable or greater size used to roam the northern hemisphere and South America until relatively recently. By 10,000 BC, the megafauna of the global north had either died out or been severely geographically restricted. Notable examples include various proboscideans and rhinocerotids, ground sloths, as well as all native South American ungulates, glyptodontines and diprotodontids.
In addition, many large mammals that were spread across all continents except for Antarctica prior to the Quaternary extinction event have since declined across their range, or become locally or globally extinct. Modern taxa with a once wider distribution include the Eurasian saiga, the wapiti, the Asian black bear, bison, the dhole, lions, the leopard, the jaguar, and the giant anteater. Research has also shown that the extant megafaunal species that survived the extinction event experienced a sharp population decline starting at the same time and continuing to the present day. While the exact cause of these events remains debated, it seems clear that ecological niches in Europe, the Middle East, large parts of Asia, and the Americas were left unoccupied.
The impact of megafauna extinctions
The effects of the global extinction of megafauna are likely to have been far-reaching and damaging to ecosystems, and continue to be. The late Quaternary extinction event is unprecedented in the Cenozoic (i.e. since the extinction of the non-avian dinosaurs) in its selectivity for large animals. Accordingly, the modern European megafauna-extirpated ecosystems deviate strongly from the megafauna-rich evolutionary norm. Similar to how herds of herbivores like wildebeest, zebra, impala, buffalo, and elephants drive African savanna vegetation patterns, and not vice versa (i.e. it is not the vegetation that dictates the activities of these herbivores), it now seems likely that herbivore herds could have provided similar ecosystem functions in the temperate regions before the Quaternary extinctions.
In Europe, where many species such as the straight-tusked elephant, two species of Stephanorhinus and the hippopotamus among many others were lost, this meant that their ecosystem functions – such as plant matter consumption and seed dispersal – were lost as well. Without the disturbance these animals provide, it is argued, forests could develop unhindered and landscapes became more uniform. As this is detrimental to species adapted to the presence of megafauna, some scholars advocate for the reintroduction of these animals where possible, or the introduction of modern proxy species to replace extinct species and their ecological impact, an advocacy known as Pleistocene rewilding.
Towards a resolution
Vera's ideas have been called a "challenge to orthodox thinking" and his book has been widely acclaimed by colleagues. It is credited as the spark of much debate about the character of historic and prehistoric landscapes in Europe. However, testing using pollen data generally does not support Vera's claims for widespread semi-open savanna during early stages of the Holocene, but rather lends support to the competing and more widely accepted high-forest theory. Similarly, modelling approaches and the use of beetle diversity as an indicator for landscape openness also support the view of a predominance of forest throughout the early and middle Holocene in most of Europe. Consequently, the botanist John Birks has argued for the rejection of the wood-pasture hypothesis. He did, however, acknowledge that the role grazing animals played in forest composition is being reevaluated, and was formerly largely ignored by Quaternary paleoecologists.
On the other hand, consensus is building that while forest did most likely dominate throughout the early stages of the Holocene, it was never as dense and overarching as previously assumed. Studies also indicate that forest cover varied considerably between regions, and was comparatively high in Central Europe and lower in the Atlantic regions. Besides climate, topography must have also played a significant role. The aurochs at least seems to have favoured fertile, low-lying riverine areas and plains, which may have led to locally open conditions, while the hill and mountain ranges were more heavily forested. Overall, dense closed-canopy forest probably covered no more than 60% of most areas, with the remainder divided between open woodlands, savannas and open areas. This made early to mid-Holocene Europe more forested than it is today or than it was during earlier interglacials, but not a continuous woodland.
In a 2005 response to Vera, Kathy Hodder et al. highlighted the importance of disturbance factors other than herbivory, particularly fire, to prehistoric landscapes, pointing out that both the high-forest theory and Vera's model have largely ignored this possibility. This is connected to the discovery of fire-loving beetle species and charcoal deposits in the pre-Neolithic Holocene of Europe. In the same paper, they also argued that the influence of large herbivores can be acknowledged without this necessarily implying that they created the open, park-like landscapes described by Vera.
At the same time, research has shown that under the current climate free-roaming large grazers can indeed influence and even temporarily halt vegetation succession, as proposed by Vera. Vera's choice of the Mesolithic as his benchmark for pristine nature has also been criticized, because the role people played during this period is unclear. Anatomically modern humans have been present in Europe since 50-40 kya, and studies indicate that already in the early Holocene, human impact on the environment was second in importance only to climate, surpassing herbivore disturbance. However, the late-Pleistocene expansion of modern humans out of Africa is frequently cited as cause for the simultaneous global extinction of primarily large mammals. In a 2014 paper, rewilding ecologist Christopher Sandom et al. found that the depauperate megafauna that remained in Europe after these extinctions may be the reason for the reduced landscape openness. They reached this conclusion by comparing beetle deposits from the Holocene and Eemian of Britain as indicators for the degree of openness. These beetles, they found, indicated that during the Eemian interglacial, the last interglacial with a pristine megafauna, landscape openness was associated with high megafauna densities. In contrast, closed forest predominated in the early Holocene in the absence of megafauna. The importance of the impact of large herbivores on vegetation and the significance of megafauna extinctions in this regard has also been highlighted in other studies.
Implications and tangents
Implications for conservation practice
Vera's hypothesis has important implications for conservation theory and practice, because it puts emphasis on the importance of grasslands in temperate Europe and their legitimacy as natural landscapes with intrinsic conservation value. Under the high forest framework, these and related landscape types such as heathland were viewed as purely or mostly anthropogenic landscapes, naturally confined to areas marginal enough to prevent woodland formation. Instead it was believed that the broadleaved regions were dominated by climax communities of shade-tolerant species, interrupted only occasionally by collapses of forest cover and disturbances through fire, storm or browsing. Examples of this school of thought include Białowieża on the Polish-Belarusian border as well as the Hainich in Central Germany.
The logical consequence of this was that species associated with grasslands, forest fringes and old, open-grown trees disappeared on a large scale, since many ecosystems in Europe, including highly species-rich grasslands in Romania, strictly depend on some management and are negatively impacted if the areas are left fallow and overgrown by forest vegetation. Similarly, the displacement of aspen in boreal forests seems to be driven more by increasing competition in the increasingly closed stands than by browsing.
In Europe, grasslands were maintained by large herbivores over the last 1.8 million years, resulting in an exceptional diversity of species in many European grasslands. For example, on a wooded meadow in Estonia, 76 plant species were counted within a single small plot in 2000, making it one of the world's record sites. Similarly high numbers were counted at other locations in Eastern Europe, making the region one of the hotspots for plant species richness on a small scale worldwide. However, grasslands in Europe and elsewhere are increasingly under threat, including from forest encroachment following abandonment, ill-conceived forest restoration schemes, overgrazing and agricultural intensification. The notion that most grasslands derive from human management and as such are essentially degraded former woodlands suitable for reforestation has recently been called into question; acting on it nevertheless threatens native grassland ecosystems worldwide. For Europe, studies have demonstrated the local persistence of grasslands throughout the Holocene as natural ecosystems, the important role they play for insects, for example, and the potential for biodiversity enhancement that lies in their maintenance by reintroduced large herbivores. At the same time, up to 90% of European semi-natural grasslands, meaning grasslands that were formerly maintained by humans and their livestock, have disappeared during the 20th century, with losses especially high in Western, Northern and Central Europe.

Given the significant importance oaks have as habitat for wood-eating insect communities in Europe, it has been pointed out that traditional forest management may not deliver all the benefits dead oak wood has for these species, since these often depend on surrounding circumstances such as sun exposure. Instead, conservation of the highly species-rich plant communities of open oak woodlands may best be achieved through traditional grazing management.
In the traditional framework of closed-canopy forest as the aspired ideal, the losses of species dependent on open areas were seen as collateral damage necessary for the creation of this ideal and had to be accepted because species associated with open areas were seen as hemerophiles anyway, which would have followed human clearings into Central and Western Europe only in the Holocene and would have originally been restricted to Southern and Eastern Europe. Taking into account that this results in overall biodiversity loss, traditional agricultural landscapes were then in turn recognised as important refuges for species-groups associated with open landscapes, seen as either a by-product of post-Neolithic agricultural traditions or relics of Pleistocene assemblages that formed alongside the now-extinct Pleistocene megafauna for which introduced domestic animals were partial substitutes. In both cases, their continued survival would largely depend on the continued execution of traditional agricultural practices.
Vera's hypothesis implies both that the model of primeval forest and the resulting rhetoric are the result of a major fallacy in nature conservation, paleoecology and forestry, and that the preservation of open and half-open landscapes and the biodiversity associated with them does not depend on agricultural practices, but rather on maintenance by large herbivores, whether wild or domesticated.
Rewilding and practical implementation
The validity of Vera's hypothesis remains debated among ecologists and conservationists, but it is often considered a fruitful approach for conservation, and thus has been widely implemented in daily practice. The resulting rewilding-advocacy differs from more traditional conservation primarily in that it emphasises a hands-off approach. Instead of intervening to preserve or revive specific species or ecosystem types, the principle is to reduce human intervention to a minimum and instead reintroduce natural ecosystem dynamics, with emphasis being put on returning large mammals to the landscape.
Examples of such projects include the Dutch conservation area Oostvaardersplassen, which was initiated by Vera, as well as the Knepp estate in Sussex. Isabella Tree, co-owner of the latter, has named Vera and his ideas as important reasons for her and her husband to consider rewilding their private estate with fallow deer, red deer, English Longhorn cattle (as ecological proxies for the extinct aurochs) and Tamworth pigs (as proxies for the wild boar).
Furthermore, Rewilding Europe, a pan-European organization that aims to create wild spaces in Europe by re-establishing food chains and reintroducing missing species, has identified Vera's proposals as key to complex, biodiverse ecosystems. Taking them into account, it works to establish free-roaming herds of European bison, aurochs proxies (e.g. Tauros cattle), proxies for the wild tarpan (e.g. Konik, Exmoor pony) as well as water buffalo and kulan (which were present in Europe until the early Holocene) to create dynamic ecosystems maintained by the grazing and browsing activity of these herbivores.
Ecology of wood-pastures
Grazed woodlands, wood-pastures and pastures in Europe harbour high biodiversity. Rare perennial plant species commonly or exclusively associated with these ecosystems in Europe include hellebores, peonies, asphodels, dittany, black false hellebore and bastard balm. The tree layer is often dominated by a number of oak species and many rare, local and threatened species such as Florentine wild apple, Lebanese wild apple, medlar, sorb tree, pears and wild plums are more often found in European silvopastoral systems than in commercial forest. Rare or declining bird species such as the European roller, hoopoe, several species of shrike, owls (scops owl, little owl) as well as wrynecks and middle spotted woodpeckers are attracted by wood-pastures in particular. In Iberia, the semi-natural oak-woodlands known as dehesa/montado are home to endemic species such as the Spanish imperial eagle and the Iberian lynx. Wood-pastures also provide important habitat for many species of invertebrates. Due to the abundance of large, old trees, wood-pastures are especially important for saproxylic beetles. This includes spectacular and rare species such as capricorn beetle, stag beetles (such as Lucanus cervus), variable chafer and click beetles. In the British Isles alone nearly 1800 species of invertebrates depend on decaying wood, including 700 species of beetles and about 730 species of flies.
Traditional land use
Many aspects of Vera's theory resonate well with traditional pastoral systems and agricultural practices across Europe and other parts of the world. This is especially true for regions where the pasturing of grazing animals has been carried out for hundreds or even thousands of years. The old English saying "The thorn is the mother of the oak", referring to the recruitment of oaks inside thorny shrubs, attests that knowledge of processes such as associational resistance was part of traditional farming lore in rural communities well before the theory itself was proposed in its current form. The phrase is commonly attributed to Humphry Repton, but was used by the writer Arthur Standish as early as 1613 and probably has origins even earlier. Following Vera's argumentation, wood-pastures and related farming systems, as ancient land-use systems, can also be viewed as essentially mimicking the primaeval European wilderness. This goes hand-in-hand with the fact that, for instance, 63 of the ecosystems listed in Annex I of the Habitats Directive of the European Union strictly depend on low-intensity use and maintenance work, mostly in the form of grazing and mowing. These habitats are labelled as high nature value farmland (HNV farmland), and the fact that traditional farming, in particular, can potentially harbour exceptional biodiversity values may in part be due to the way some forms of human use (such as grazing, pollarding, coppicing and hedgelaying) mimic ecosystem services formerly exercised by the megafauna.
Sergey Zimov's megaherbivore decline model
While Vera's hypothesis focuses on temperate regions and especially temperate Europe, an argumentatively related model has more recently been proposed for high-latitude regions of the modern taiga and tundra biomes, where formerly mammoth steppe predominated. It essentially challenges the widespread view that the Pleistocene megafauna of the northern steppe vanished as a consequence of the warming climate at the advent of the Holocene and the consequent turnover of cold-adapted grassland and herb ecosystems into expanding forests and tundra dominated by mosses, lichens and dwarf trees. Instead, it argues that, conversely, the declining megafauna was the precondition for the vegetational turnover, and that healthy megafauna populations could have maintained their preferred environment, the mammoth steppe, even under the stresses of the warming climate if human-induced extinctions had not occurred. Consequently, Sergey Zimov, one of the main supporters of this model, proposes that ecosystems functionally similar to the mammoth steppe of the Pleistocene could also function under modern circumstances, and seeks to prove this in the form of Pleistocene Park. He and his son have since begun to reintroduce species that are now extinct in Yakutia, and to introduce species that are ecologically similar to those present in the region during the Pleistocene that have since become globally extinct. These include wild species like reindeer, muskox, bison and wisent, as well as hardy domestic breeds like Bactrian camels, Kalmyk cattle, domestic yaks and Orenburg goats. With these, the project hopes to revive the mammoth steppe, at least over fractions of its former expanse.
See also
New Forest – An ancient common with a significant proportion of wood pasture
Moccas Park
Hatfield Forest
Windsor Great Park
Epping Forest
Zuid-Kennemerland National Park
Marcescence – possibly an adaptation to prevent browsing of twigs and buds in winter
Globally Important Agricultural Heritage Systems
Notes
References
Sources
Further reading
Bunzel-Drüke, Margret; Luick, Rainer (2024): "Master builders of biodiversity". Naturschutz und Landschaftsplanung.
Jepson, Paul and Blythe, Cain (2020). Rewilding: The Radical New Science of Ecological Recovery, Icon Books Ltd. ISBN 978-1-78578-627-3
2000 introductions
Ecology
Ecological theories
Rewilding
Agroforestry
Hypotheses | Wood-pasture hypothesis | [
"Biology"
] | 11,864 | [
"Ecology"
] |
66,846,197 | https://en.wikipedia.org/wiki/Random%20graph%20theory%20of%20gelation | Random graph theory of gelation is a mathematical theory for sol–gel processes. The theory is a collection of results that generalise the Flory–Stockmayer theory, and allow identification of the gel point, gel fraction, size distribution of polymers, molar mass distribution and other characteristics for a set of many polymerising monomers carrying arbitrary numbers and types of reactive functional groups.
The theory builds upon the notion of the random graph, introduced by mathematicians Paul Erdős and Alfréd Rényi, and independently by Edgar Gilbert in the late 1950s, as well as on the generalisation of this concept known as the random graph with a fixed degree sequence. The theory has been originally developed to explain step-growth polymerisation, and adaptations to other types of polymerisation now exist. Along with providing theoretical results the theory is also constructive. It indicates that the graph-like structures resulting from polymerisation can be sampled with an algorithm using the configuration model, which makes these structures available for further examination with computer experiments.
Premises and degree distribution
At a given point in time, the degree distribution gives the probability that a randomly chosen monomer has a given number of connected neighbours. The central idea of the random graph theory of gelation is that a cross-linked or branched polymer can be studied separately at two levels: 1) monomer reaction kinetics, which predicts the degree distribution, and 2) a random graph with that degree distribution. The advantage of such a decoupling is that the approach allows one to study the monomer kinetics with relatively simple rate equations, and then deduce the degree distribution serving as input for a random graph model. In several cases the aforementioned rate equations have a known analytical solution.
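The decoupling described above lends itself to a simple computational prototype. The sketch below is an illustrative example only (the functionality mix, conversion value and network size are arbitrary assumptions, not values taken from the theory): it draws monomer degrees from the binomial distribution implied by independent bonding of functional groups at a given conversion, samples a network with networkx's configuration model, and uses the largest connected component as a rough stand-in for the gel fraction.

```python
# Illustrative sketch of the two-level approach: kinetics -> degree distribution,
# then a random graph with that degree distribution via the configuration model.
# All numerical values below are arbitrary assumptions for demonstration.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

functionalities = np.array([2, 3])   # assumed monomer functionalities
fractions = np.array([0.7, 0.3])     # assumed initial fractions of each
conversion = 0.6                     # assumed bond conversion
n_monomers = 20_000

# If each functional group has reacted independently with probability `conversion`,
# the degree of a monomer with functionality f is Binomial(f, conversion).
f_per_monomer = rng.choice(functionalities, size=n_monomers, p=fractions)
degrees = rng.binomial(f_per_monomer, conversion)
if degrees.sum() % 2:        # the configuration model needs an even number of stubs
    degrees[0] += 1

graph = nx.configuration_model([int(d) for d in degrees], seed=0)
largest = max(nx.connected_components(graph), key=len)
print(f"largest component contains {len(largest) / n_monomers:.3f} of all monomers")
```

Increasing the conversion past the gel point should make the largest component jump from a vanishing fraction to a macroscopic one, which is the random-graph signature of gelation.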
One type of functional groups
In the case of step-growth polymerisation of monomers carrying functional groups of the same type, the degree distribution can be written in closed form in terms of the bond conversion, the average functionality, and the initial fractions of monomers of each functionality. A unit reaction rate is assumed without loss of generality. According to the theory, the system is in the gel state once the bond conversion exceeds the gelation conversion. Analytical expressions for the average molecular weight and molar mass distribution are known too. When more complex reaction kinetics are involved, for example chemical substitution, side reactions or degradation, one may still apply the theory by computing the degree distribution by numerical integration; the same criterion on the conversion then signifies that the system is in the gel state at time t (or in the sol state when the inequality sign is flipped).
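As an illustration of the gel-point statement above, the following sketch assumes the classical percolation-type criterion in which gelation occurs once the bond conversion exceeds mu1 / (mu2 - mu1), with mu1 and mu2 the first and second moments of the initial functionality distribution; for monomers of a single functionality f this reduces to the familiar Flory–Stockmayer value 1 / (f - 1). Treat that formula as an assumption of this sketch rather than a quotation of the theory's own expression.

```python
# Sketch: gel-point conversion from the moments of an assumed initial
# functionality distribution, using the classical criterion
#   c_gel = mu1 / (mu2 - mu1)   (assumption of this sketch).
def gel_conversion(functionalities, fractions):
    mu1 = sum(f * p for f, p in zip(functionalities, fractions))       # first moment
    mu2 = sum(f * f * p for f, p in zip(functionalities, fractions))   # second moment
    return mu1 / (mu2 - mu1)

# Pure 3-functional monomers: 1 / (3 - 1) = 0.5, the Flory–Stockmayer value.
print(gel_conversion([3], [1.0]))
# Hypothetical mixture of 2- and 3-functional monomers.
print(gel_conversion([2, 3], [0.7, 0.3]))
```

For the hypothetical mixture the criterion gives roughly 0.72, i.e. the fewer branching units there are, the later gelation sets in.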
Two types of functional groups
When monomers with two types of functional groups A and B undergo step-growth polymerisation by virtue of a reaction between A and B groups, similar analytical results are known. In this case, the relevant input is the fraction of initial monomers carrying given numbers of A and B groups. Suppose that A is the group that is depleted first. Random graph theory states that gelation takes place once the conversion of A groups exceeds the corresponding gelation conversion. The molecular size distribution, the molecular weight averages, and the distribution of gyration radii have known formal analytical expressions. When the degree distribution, giving the fraction of monomers in the network with given numbers of neighbours connected via A groups and via B groups at time t, is solved numerically, the gel state is detected with the analogous criterion.
Generalisations
Known generalisations include monomers with an arbitrary number of functional group types, crosslinking polymerisation, and complex reaction networks.
References
Polymerization reactions
Polymer chemistry
Graph theory | Random graph theory of gelation | [
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 709 | [
"Discrete mathematics",
"Graph theory",
"Materials science",
"Combinatorics",
"Mathematical relations",
"Polymer chemistry",
"Polymerization reactions"
] |
66,846,235 | https://en.wikipedia.org/wiki/Root%20%28board%20game%29 | Root: A Game of Woodland Might and Right is a 2018 asymmetric strategy wargame board game designed by Cole Wehrle, illustrated by Kyle Ferrin, and published by Leder Games. In Root, players compete for the most victory points through moving and battling using various factions with unique abilities. Upon its release, Root received positive reviews, and was followed by four expansions. A digital version, developed by Dire Wolf Digital, was released in 2020.
Gameplay
In Root, 2-4 players compete in an asymmetric strategy wargame to control a forest. Each player controls a different faction, each of which has different gameplay elements, tactics, and point scoring options. In the base game, 4 factions are present: the Eyrie Dynasties, Marquise de Cat, Woodland Alliance, and the Vagabond. While there is a common set of rules for movement, hands of cards, and battling each other, every faction adds an additional layer of rules complexity.
Players who select the Eyrie take their turns by planning their actions in a specific order as part of the decree, requiring them to take specific actions in specific areas of the board. Each round the player adds new cards to the decree, until they are unable to take a prescribed action, which causes them to lose victory points and reset their decree to the minimum. The Marquise de Cat requires its player to construct buildings across the board - gaining wood via sawmills to construct other types of buildings and add combat units to the board. The Woodland Alliance starts with no units on the board, instead adding sympathy tokens, gaining supporter cards, and adding a small number of warriors to the board when given the opportunity. Unlike the other factions, the Vagabond has no warrior units, instead controlling just one piece around the board. The Vagabond player can purchase items from other players and either befriend or attack them, as well as complete quests and explore ruins to earn victory points.
Reception
Root received critical success upon its release. Reviewing for Ars Technica, Charlie Theel praised the game's visuals and highlighted its strategic depth and asymmetrical factions. He described the Eyrie, with its mechanic of adding to the decree, as "fascinating and one of the most rewarding factions" that was "beset with challenges", the Marquise de Cat as "the most straightforward" but "deceptive", and considered the Woodland Alliance to be "a true guerilla force", with its power of destroying enemies being "explosive and extremely gratifying". Theel concluded that the game offered "astounding depth" due to its "deep asymmetry" and "extended exploration". Jonathan Bolding from GamesRadar stated that it was "one of the best board games", praising the components, the accessible combat system, and "compelling" asymmetry, but commented negatively on the difficulty for new players. Similarly, Tom Mendelsohn commended the game's strategic depth and "whimsical exterior". In a 2017 preview of the game, Destructoid commented favourably on the game's artwork, especially the contrast between cartoon animals and the mature themes of the game. Dicebreaker also listed Root as one of the best board games, describing the "absolutely adorable" artwork and balanced powers of the factions. In 2022, The New York Times named it one of the four best strategy board games alongside Brass: Birmingham, Ark Nova, and Lost Ruins of Arnak, praising its "unique ecosystem of conflicting and contrasting goals, powers, and win conditions" but noting that it was "an intimidating game for newbies".
Root also received numerous awards, including the 2018 Golden Geek Board Game of the Year award, the 2019 Origins Awards for Game of the Year, Best Board Game and Fan Favourite Board Game, and the American Tabletop Awards Complex Game award and the Spiel Portugal Jogo do Ano. It was also nominated for the 2020 As d'Or Expert award.
Expansions
Root: The Riverfolk Expansion was released in 2018. The expansion adds two new factions (the Riverfolk Company and the Lizard Cult), the ability to play with a second Vagabond, and the ability to play with a bot version of the Marquise de Cat. A digital adaptation of the expansion was released in April 2021.
Root: The Underworld Expansion was released in 2020. The expansion adds two new factions, the Underground Duchy and the Corvid Conspiracy, as well as two additional maps.
Root: The Clockwork Expansion was released in 2020. The expansion allows players to play against bot versions of all four of the factions that were included in the base game.
Root: The Exiles and Partisans Deck was released in 2020. This deck can be swapped with the deck from the original game to add variety.
Root: The Vagabond Pack was released in 2020. The pack includes seven new vagabond playing pieces as well as three new character cards for the Vagabond.
Root: The Marauder Expansion was released in 2022. The expansion includes two new factions, the Lord of the Hundreds and the Keepers in Iron, as well as adding the hirelings mechanic and four hirelings.
Root: The Clockwork Expansion 2 was released in 2022. The expansion lets players play against four automated factions: the Logical Lizards, Riverfolk Robots, Cogwheel Corvids and Drillbit Duchy.
Root: The Landmarks Pack was released in 2022. The pack includes 4 new landmarks as well as setup cards for the 2 existing ones.
Root: Marauder Hirelings and Hirelings Box was released in 2022. The expansion added the hireling versions of the Marauder Factions and one more. It also added a box for all hirelings.
Root: Riverfolk Hirelings Pack was released in 2022. The pack added the hireling versions of the Riverfolk Factions and one more.
Root: Underworld Hirelings Pack was released in 2022. The pack added the hireling versions of the Underworld Factions and one more.
Root: Homeland Expansion will be released in 2025.
The Role Playing Game
In 2021, Magpie Games released tabletop role-playing game Root: The Roleplaying Game. It uses the Powered by the Apocalypse design philosophy. The entire adventuring party plays as Vagabonds. Charlie Hall for Polygon recommended it for fans of "swashbuckling adventure" and "high-stakes political theater." It won the 2022 Silver ENNIE Award for "Best Game" and was also nominated for "Product of the Year."
Digital edition
Root: Digital Edition was released in September 2020 by Dire Wolf on the PC, iOS and Android platforms, followed by a Nintendo Switch version in November 2021. The PC version was reviewed positively by Jason Ornleas from GamingTrend, who praised the tutorial, aesthetics, soundtrack and strategy, but criticised the quickness of the AI turns and the lack of an undo button. In their list of the best board games of 2020, Vulture named the digital version of Root as the best board game app, complimenting the animations, AI, and in-game tutorial. Theel from Polygon also recommended the Nintendo Switch adaptation, praised the addition of new modes, and concluded that it "accomplishes the unenviable task of bringing its machinations to the screen", but critiqued the lack of group dynamics.
In recent updates, the game has addressed many of these critiques, adding an undo button and challenges that can be played in the solo game mode. The digital edition also has DLC equivalent to the physical expansions, excluding the Marauder and hirelings expansions: DLC is available for The Riverfolk Expansion, The Underworld Expansion, The Clockwork Expansion, The Exiles and Partisans Deck, and The Vagabond Pack. Additional challenges for the solo game mode and achievements are included with each DLC.
See also
Oath: Chronicles of Empire and Exile, Pax Pamir, and John Company, other board games designed by Cole Wehrle
References
External links
Root page on the Leder Games website
Asymmetric board games
Board games introduced in 2018
Board wargames
ENnies winners
Kickstarter-funded tabletop games
Origins Award winners
Strategy games | Root (board game) | [
"Physics"
] | 1,686 | [
"Asymmetric board games",
"Symmetry",
"Asymmetry"
] |
66,848,607 | https://en.wikipedia.org/wiki/Non-random%20segregation%20of%20chromosomes | Non-random segregation of chromosomes is a deviation from the usual distribution of chromosomes during meiosis, that is, during segregation of the genome among gametes. While usually according to the 2nd Mendelian rule (“Law of Segregation of genes“) homologous chromosomes are randomly distributed among daughter nuclei, there are various modes deviating from this in numerous organisms that are "normal" in the relevant taxa. They may involve single chromosome pairs (bivalents) or single chromosomes without mating partners (univalents), or even whole sets of chromosomes, in that these are separated according to their parental origin and, as a rule, only those of maternal origin are passed on to the offspring. It also happens that non-homologous chromosomes segregate in a coordinated manner. As a result, this is a form of Non-Mendelian inheritance.
This article describes cases where non-random segregation is the normal case for the particular organisms or occurs very frequently. A related phenomenon is called meiotic drive or segregation distortion. This is a higher than average transmission of a single chromosome relative to the homologous chromosome in inheritance. This can be due to non-random segregation during meiosis, but also to processes after meiosis that reduce the transmission of the homologous chromosome.
In addition, there are pathological cases that result in aneuploidy and are almost always lethal.
Background and early history of research
According to the chromosome theory of inheritance formulated by Theodor Boveri in 1904, homologous chromosomes were expected to be randomly distributed among the daughter nuclei during meiosis. The first studies on this question appeared in 1908 and 1909. These papers dealt with spermatogenesis in aphids, i.e. meiosis in the male sex. In aphids, sex determination mostly follows the XX/X0 type: females have two X chromosomes, males only one. However, males only appear in one generation towards the end of the year, while otherwise there are only females, which reproduce by parthenogenesis. The question was how it is ensured that all offspring of sexual reproduction are females. It turned out that meiosis I is unequal, i.e. results in two unequal-sized cells, and the X chromosome always ends up in the larger daughter cell. Only from this cell do two sperm cells emerge after meiosis II, while the smaller cell degenerates. Thus, each sperm - like the egg - contains an X chromosome, and only female offspring (XX) are produced.
Also in 1909, a paper was published on the spermatogenesis of Coreus marginatus. There are two different X chromosomes and no Y chromosome (X1X20), and in meiosis I both X chromosomes are assigned to the same daughter nucleus. The same is apparently generally true in spiders, many species of which have been studied in subsequent years, as well as in various nematodes and in some aphids. The situation is somewhat more complicated in the American mole cricket Neocurtilla hexadactyla, which Fernandus Payne described in 1916: here, three sex chromosomes are present (X1X2Y), two of which pair, while X1 is present as a univalent (unpaired). Although, as recent studies have confirmed, there is no mechanical linkage, the univalent X chromosome enters the same daughter nucleus that receives the other X chromosome.
It was only after all these counter-examples that a study by Eleanor Carothers on locusts appeared in 1917 - in the same journal as Payne's paper (Journal of Morphology) - which was seen as clear evidence for the expected random distribution. While earlier studies had been limited to sex chromosomes because homologous autosomes could not be distinguished, Carothers had found experimental animals in which homologous autosomes could also be partially distinguished. Payne's divergent findings were subsequently ignored, especially as they could not be confirmed in the European mole cricket. Thomas Hunt Morgan, who decisively contributed to the establishment of the chromosome theory of heredity, which was not yet generally accepted at that time, even explicitly wrote in his book The Physical Basis of Heredity (1919) that there was no contradictory evidence against the random segregation of maternal and paternal chromosomes (there is not a single cytological fact opposed to the free assortment of maternal and paternal chromosomes), although he was undoubtedly aware of the work of his former collaborator Payne. It was not until 1951 that Michael J. D. White rediscovered it and confirmed it through his own investigations.
The third basic variant of non-random segregation, in which the complete sets of chromosomes of maternal and paternal origin are separated from each other, was studied - among some other peculiarities - in the 1920s and 30s by Charles W. Metz and co-workers in fungus gnats. Since then, numerous other counterexamples to random segregation have been described in very different creatures. It was not until 2001, however, that a first review paper appeared that was devoted precisely to this topic and was not limited to specific cases. The authors stated that most geneticists are unaware of non-random segregations or consider them rare exceptions. Due to the wide taxonomic distribution of the known cases, they argue that the importance of these phenomena has been underestimated so far.
Single chromosomes or chromosome pairs
We first consider cases where only a single chromosome pair or a single unpaired chromosome (univalent) is affected, in the order of first description in the respective taxon.
Aphids
As mentioned, the first example of non-random segregation described as early as 1908 was the behaviour of the X chromosome during spermatogenesis in aphids. These insects exist for most of the year only as females and reproduce parthenogenetically, i.e. without the participation of males. There is no fertilisation or meiosis, and successive generations are genetically identical. Under certain conditions, mostly due to the decreasing day length towards the end of the growing season of the host plants, one generation occurs in which males are also present. This is achieved by the two X chromosomes present in females pairing as in meiosis and their number being reduced to one, resulting in males (X0).
That only females are produced again after this one bisexual generation is based, as shown above, on the fact that, during spermatogenesis, the X chromosome is always assigned to the daughter cell from which sperm are produced. Hans Ris described the exact sequence of meiosis in 1942: according to this, the X chromosome does not participate in the movement towards the poles of the spindle apparatus during anaphase, but is stretched between the diverging poles. Also during the subsequent cell division, the chromosome remains in this position. Only at a late stage of cleavage does the cleavage furrow shift to one side, and the X chromosome is allocated to the opposite, larger daughter cell. Since only this cell produces two sperm, all sperm as well as the eggs contain an X chromosome. After fertilisation, eggs are laid, which survive until the beginning of the next growing season and then only produce females (XX), which again reproduce parthenogenetically.
Butterflies
In butterflies, the sex of the offspring is not determined by the sperm, as in the most common case among animals including humans, but by the make-up of the egg. Here, the female sex is heterogametic, the male is homogametic. In such cases, one does not speak of X and Y chromosomes, but of Z and W chromosomes. Males have two Z chromosomes (ZZ), females either one Z and one W chromosome (ZW) or only one Z chromosome (Z0). An example of the ZZ/Z0 type is Taleporia tubulosa. In this species, J. Seiler (1920), an associate of Richard Goldschmidt, studied the inheritance of sex and the behaviour of the univalent Z chromosome during oogenesis. He found that the sex ratio among the offspring depends on the temperature and the age of the mother. At cool temperatures ("room temperature of about 12-16°"), the Z chromosome entered the polar body in 57% of the cases studied at meiosis I and the future egg nucleus in only 43%. Accordingly, Seiler found an excess of females in the offspring. Conversely, when the chromosome was preferentially allocated to the egg in the incubator at 30-37°, there was a surplus of 62 % male offspring. Similarly, more males were produced when mating occurred a few days after hatching and thus towards the end of the short life of the female imago. (Meiosis pauses here, as in most invertebrates, in metaphase I and is not completed until after fertilisation; cf. stasis of female meiosis.) Evidence of non-random segregation in female meiosis has also been found in butterflies of the ZZ/ZW type. In some species of the genera Danaus and Acraea there are females that produce only female offspring (ZW). This is apparently due to the fact that the W chromosome always enters the egg cell and not the polar bodies. This modification of the meiotic chromosome distribution is hereditary and linked to the W chromosome.
Fungus gnats
The fungus gnats, whose spermatogenesis has some peculiarities (summarised under fungus gnat genetics), have already been mentioned. In meiosis II, a peculiarity occurs with the X chromosome. Normally, in meiosis II (as in mitosis), all chromosomes are divided into the two chromatids that make them up and these are allocated to the two daughter nuclei. In fungus gnats, on the other hand, the X chromosome travels prematurely to one of the spindle poles and only divides there or on the way there. Since sperm are only produced from the cell formed there, each sperm then contains two X chromosomes, and the zygote after fertilisation accordingly contains three. One of these X chromosomes is eliminated at an early embryonic stage, thereby restoring the normal female chromosome make-up (XX).
Flowering plants
The first case of non-random segregation of single chromosomes in a plant was described by Marcus Morton Rhoades in 1942 in maize. This non-randomness occurs when there is an abnormal form of chromosome No. 10 that contains an extra segment. Since this additional segment is recognisable as a knobbed structure in the pachytene stage of meiotic prophase, the chromosome is referred to as K10. It occurs particularly in some ancient North American Indian maize varieties. If there is only one K10 and one normal chromosome 10, and in female meiosis I the crossing-over occurs in such a way that the chromatids are of different lengths, then in meiosis II the chromatid containing the knobbed additional segment is about 70% likely to enter the embryo sac and thus the egg cell. The segment is therefore accumulated to a high degree in the inheritance; it exhibits meiotic drive. This also applies if other chromosomes carry the segment, but only if at least one K10 is present.
A corresponding accumulation of additional chromosome segments has also been described in some other plant species, but has not been studied in detail. Much more numerous are studies on additional chromosomes, the B chromosomes, which show no homology with regular chromosomes and only occur in some of the individuals of a population, i.e. have no essential functions. A non-random segregation of B chromosomes was first described by Catcheside in 1950 in the guayule. In this shrubby composite, the B chromosomes, if more than one is present, do not pair or pair only fleetingly during meiosis I, i.e. they are mostly present as univalents. Nevertheless, they are very likely to migrate to the same pole in anaphase I.
Since Catcheside only studied male meiosis, which usually gives rise to four fertile daughter cells, it cannot be concluded that non-random segregation contributes to the accumulation in inheritance that is characteristic of B chromosomes in general. The situation is different in female meiosis, where three of the four daughter nuclei degenerate. In 1957, Hiroshi Kayano described the behaviour of a B chromosome, mostly present only singly and therefore as a univalent, in female meiosis of the Japanese lily species Lilium callosum. He found that the chromosome is allocated to the future egg cell in about 80 % of cases and is accordingly passed on to 80 % of the offspring.
This work by Kayano seems to be the only one so far to demonstrate the accumulation of a B chromosome as a result of non-random segregation during meiosis in the embryo sac mother cell. In contrast, an accumulation of B chromosomes in plants by a directional nondisjunction in mitoses before or after meiosis has been observed in many cases, first described in 1960 by Sune Fröst in Crepis pannonica. Both chromatids often end up in the same daughter cell (nondisjunction), and this is directed in such a way that an accumulation in the inheritance results. Non-random segregation in meiosis can therefore only be concluded if directional nondisjunction in mitoses can be excluded. This is largely assured in the case of the Mediterranean saw-leaved plantain Plantago serraria and of Hypochaeris maculata. Another probable case is Phleum nodosum.
Flies
Similar to maize, non-random segregation also occurs in the fruit fly Drosophila melanogaster during female meiosis, when homologous chromosomes are of different lengths and chromosomes with chromatids of different lengths are present as a result of crossing-over during meiosis II. Then, with a probability of about 70 %, the shorter chromatid enters the egg nucleus. This was discovered by E. Novitski in 1951. Later it was also found in Lucilia sericata and Hylemya. So it is apparently a widespread phenomenon in flies.
In Drosophila melanogaster, non-random segregation can also occur during male meiosis. This is the case when the sex chromosomes (X and Y) do not pair during meiosis I. In this case, the unpaired chromosomes usually end up in the same daughter cell. Accordingly, there are many X0-type males among the offspring, but surprisingly few XXY offspring. The latter is due to the fact that the daughter cells with the XY constitution are disturbed in their development. On the other hand, the X0 males are infertile. The bottom line is that the X chromosome involved is enriched in the inheritance (meiotic drive).
Mealybugs
B chromosomes are also common in the animal kingdom. In a mealybug, Uzi Nur described non-random segregation in both sexes in 1962. In oogenesis, the segregation behaviour of the B chromosome depends on the number of Bs present. If two Bs are present, then they pair during the reduction division (which is meiosis II here, as it is generally in mealybugs, scale insects and aphids) and segregate in the normal way. However, if only one is present, then in two-thirds of the cases it enters the polar body and only in the remaining third does it enter the egg. The unpaired supernumerary B chromosome behaves in the same way if 3 or 5 Bs are present, while the paired ones segregate normally. Overall, therefore, there is a tendency in the female sex to exclude B chromosomes from the inheritance by non-random segregation, which comes into play especially when only one is present. However, this is contrasted in the male sex by a strong tendency to accumulate B chromosomes. This is due to the fact that in this species (as in many other mealybugs and scale insects) half of the meiosis products regularly degenerate. During the reduction division (also meiosis II here), all B chromosomes are allocated to the future sperm nucleus with about 90 % probability.
Grasshoppers
Transmission of B chromosomes has also been studied in various grasshoppers. As in plants, it was found that the number of B chromosomes can increase even before meiosis due to mitotic nondisjunction. In contrast, Zipora Lucov and Uzi Nur found an example of non-random segregation at oogenesis in the North American species Melanoplus femurrubrum in 1973. Since there was never more than one B chromosome, accumulation prior to meiosis was ruled out in this case. Nevertheless, this chromosome was passed to about 80% of the offspring. Hewitt's (1976) study of Myrmeleotettix maculatus was even more informative. Hewitt found that when the eggs were fixed in metaphase I (the time of egg laying), the B chromosomes were mostly already found in the inward half of the division spindle, that is, near the future egg nucleus. The transmission rate of about 75% corresponded to this. How frequent such non-random segregation of B chromosomes is otherwise in grasshoppers cannot yet be estimated. It is true that many locust species are known to have B chromosomes. However, only in a few cases has their transmission been studied, and non-random segregation in meiosis is only one of several ways in which non-Mendelian transmission can occur.
Another chromosomal anomaly that is common in grasshoppers is extra segments on individual chromosomes. Such additional segments can segregate quite randomly, and in fact it was grasshoppers with homologous chromosomes of unequal length in which Carothers first found evidence of random segregation in 1917. In contrast, López-León et al. (1991, 1992) found circumstantial evidence for nonrandom segregation in two grasshopper species: in Eyprepocnemis plorans, an extra segment in the female sex is less likely to be transmitted than the normal homologous chromosome if a B chromosome is also present. Thus, the B chromosome influences the transmission of a regular chromosome pair while itself still following Mendelian rules even in this case. The reduced transmission of the additional segment is most likely due to non-random segregation during oogenesis, because the alternative possibility of differential mortality of zygotes could be excluded. In Chorthippus jacobsi, López-León et al. studied the transmission of different additional segments on three different chromosomes. While all additional segments on chromosomes M5 and M6 are transmitted normally, accumulation consistently occurs in both sexes when an additional segment is located on the small chromosome S8. Even if both S8 chromosomes carry differently sized additional segments, they do not follow Mendelian rules, but the shorter segment is preferentially transmitted. Again, non-random segregation during oogenesis can be inferred with high probability. In contrast, how non-Mendelian transmission occurs through the male sex is unclear.
Rodents
The first description of non-random segregation in a mammal appeared in 1977 and dealt with the Wood lemming. In some populations of this species, up to 80% of the animals are female. At the same time, some of the females have the "male" chromosome constitution XY. The fact that these animals develop into females, although they have a Y chromosome, is due to a mutation on the X chromosome. During meiosis, this mutated chromosome (X*) enters the egg nucleus more frequently than the Y chromosome and is therefore more likely to be transmitted to the offspring. A second example concerns a B chromosome in the Siberian collared lemming Dicrostonyx torquatus. In female meiosis I of this species, unpaired B chromosomes are preferentially assigned to the future egg nucleus and thus accumulate in the inheritance.
In Siberian populations of the house mouse, a variant form of chromosome 1 with two insertions occurs. This elongated variant is passed on by heterozygous females with much higher probability than the normal chromosome 1. As it turned out, this occurs by non-random segregation of the homologous chromosomes or chromatids in both meiotic divisions. As a result, up to 85% of the offspring of a heterozygous female can receive the insertions. However, the latter is only the case if the males used in the crossing experiments are not also carriers of these insertions. If instead homozygous carriers of these insertions were used, i.e. each sperm received the insertions, then the non-randomness in female meiosis was reversed: In this case, only about 1/3 of the offspring of a heterozygous mother received the insertions from this mother. This surprising influence of sperm on meiosis in the oocyte is possible because in mice, as in vertebrates in general, female meiosis pauses in metaphase II until fertilization occurs.
It has been known since 1962 that female mice with only one X chromosome (XO) are fertile, but their daughters have predominantly two X chromosomes. How this happens was unclear for a long time, but according to recent studies it is apparently due to the fact that the univalent X chromosome is preferentially allocated to the future egg nucleus during meiosis I.
Coordinated segregation of non-homologous chromosomes
Mechanically coupled univalents
That two non-homologous chromosomes segregate in a coordinated manner during meiosis was first described in 1909 in Coreus marginatus. In this species, males have two different X chromosomes (X1X20), and these are both assigned to the same daughter nucleus in meiosis I. Later studies in other bugs revealed that the X chromosomes are linked and that their cosegregation was apparently based on this linkage. Up to five different X chromosomes can be present, and most species also have a Y chromosome that migrates to the opposite spindle pole. Such cosegregation of mechanically coupled sex chromosomes has also been described in spiders, nematodes, stoneflies, ostracods, in a scale insect, and in beetles.
Free univalents
In some aphid species, males have two different X chromosomes (X1X20), which are not mechanically linked and yet reach the same spindle pole during meiosis I. This is consistent with the directional segregation mode of a single X chromosome described above. In other aphid species, four different chromosomes probably cosegregate in this manner. A cosegregation of free univalents has also been described in the giant crab spider Delena cancerides. There, males have three different X chromosomes that are not mechanically linked as in other spiders, but are still assigned to the same spindle pole.
More interesting are those cases in which free univalents of different kinds segregate in a regulated manner to opposite spindle poles. This is part of the normal course of meiosis in the spermatogenesis of various Neuroptera, some Alticini, the cricket Eneoptera surinamensis, and the flatworm Mesostoma ehrenbergii (Turbellaria). Lacewings mostly have one X and one Y chromosome, which do not pair during meiosis. However, some species have multiple univalent sex chromosomes, and univalent B chromosomes may be added. They all segregate in an orderly fashion to the spindle poles. This is called distance segregation. Similar relationships with multiple sex univalents have also been described in some flea beetles. In the cricket Eneoptera surinamensis, three free univalent sex chromosomes (X1X2Y) are present, which already migrate to the spindle poles while the autosomes assemble at the spindle equator. In the flatworm Mesostoma ehrenbergii only three of the five chromosome pairs pair during meiosis. Thus, three bivalents and four univalents are present, and the univalents also segregate here before the bivalents. In fixed preparations, the univalents are often not correctly distributed. Hilary A. Oakley found the reason for this when she observed the process in a living object. According to this, the univalents move back and forth between the poles in metaphase I, i.e. when the bivalents are at the equator. Usually only one univalent moves, and after a longer pause (five to ten minutes) another one starts to move. This continues until all four are correctly distributed. This is followed by anaphase, i.e. the segregation of the paired chromosomes.
Live observations of meiosis were also very informative in the northern mole cricket Neocurtilla hexadactyla, already mentioned at the beginning. There, as in Eneoptera, three sex chromosomes (X1X2Y) are present, but only X1 occurs as a univalent. In this case, segregation of the sex chromosomes also occurs before that of the autosomes, in that the X2Y bivalent is already shifted in metaphase I from the metaphase plate toward one spindle pole in such a way that the Y chromosome lies near that pole, while the univalent X1 is located at the other pole. Through micromanipulation experiments in which they shifted the bivalent or the univalent in the spindle, René Camenzind and R. Bruce Nicklas (1968) found that X1 is the active element and that its behavior depends on the orientation of the bivalent. Furthermore, the authors found that there is no mechanical connection between the two. However, an electron microscopic examination revealed some microtubules, which also make up the spindle fibers, and which here appear to form a fine connection between X1 and Y. Targeted irradiation of this microtubule junction with UV microbeams often (in about one-third of cases) resulted in X1 moving to the other half of the spindle. Surprisingly, the same effect was seen with irradiation of one of the three spindle fibers associated with the sex chromosomes, whereas irradiation of autosomal spindle fibers had no effect. Dwayne Wise et al. concluded that these four microtubule bundles form an "interacting network" that enables the coordinated segregation of the sex chromosomes, i.e., the correct allocation of the X1.
Complete sets of chromosomes
Fungus gnats
The behavior of chromosomes in fungus gnat spermatogenesis is very unusual in several respects. A detail of meiosis II was already discussed above; however, meiosis I is far more remarkable. There the otherwise obligatory pairing of homologous chromosomes is completely omitted, and the chromosomes are segregated from each other according to their origin, maternal or paternal. Their segregation starts right after the nuclear envelope dissolves, the metaphase is omitted, and the paternal chromosomes enter a small daughter cell which, like the polar bodies of oogenesis, degenerates. Thus, all spermatozoa receive only the maternal chromosomes, and males act only as intermediaries between purely female lineages. The construction of the spindle apparatus in this division is also unusual. It is not a bipolar spindle, but merely a half-spindle with only one pole. The maternal chromosomes move towards this pole, the paternal ones away from it.
Some fungus gnats have, in addition to regular chromosomes, germline-limited or L-chromosomes (from limited), which are present only in cells of the germline and are eliminated from somatic cells. These segregate with the maternal regular chromosomes during spermatogenesis, thus enter the sperm unreduced. This doubling of their number is compensated for at an early stage of embryonic development by eliminating excess L chromosomes from the nucleus, so that exactly two always remain.
Cecidomyiidae
In Cecidomyiidae, spermatozoa also contain only the set of chromosomes of maternal origin, while paternal chromosomes are eliminated during meiosis I. Again, pairing of homologous chromosomes is omitted, cell division is unequal, and only the maternal chromosomes move to a spindle pole, thereby entering the daughter cell from which two spermatozoa emerge after meiosis II, while the other daughter cell perishes. In addition, there are numerous germline-limited chromosomes, which, like those of fungus gnats, remain with the paternal regular chromosomes and are thus eliminated.
Scale insects
In most scale insects, males are parahaploid: although they have two sets of chromosomes, only chromosomes of maternal origin are active, and only they are passed on to offspring. Inactivation of the paternal chromosomes occurs at an early embryonic stage (blastula), when the chromosomes become highly condensed (heterochromatized). (This also occurs in humans, where in the female sex one of the two X chromosomes becomes heterochromatic.) Elimination from inheritance can occur in several ways; only one occurs during meiosis. This is called the lecanoid chromosome system. Meiosis is inverse in scale insects, as in the aphids discussed above, that is, the actual reduction division is meiosis II. In the lecanoid mode, the chromosomes form a "double metaphase plate" with all maternal chromosomes on one side and all paternal chromosomes on the other. (In the normal case, chance rules here.) In anaphase, the two complete sets then move apart, each forming its own daughter nucleus. Since meiosis II is not associated with cell division here, and since the two daughter formations of the first division also reunite, a four-nucleated cell eventually results (as is generally the case in scale insect spermatogenesis). Of the four nuclei, however, only the two with the maternal chromosomes then become sperm nuclei; the other two become more and more condensed and finally perish.
Plants
In the plant kingdom, polyploidy is very common. For the most part, these are allopolyploid species in which each chromosome finds a homologous partner during meiosis. But there are also species with an odd number of chromosome sets. These can generally reproduce only apomictically, that is, bypassing meiosis and fertilization, because univalents are randomly distributed among the daughter nuclei during meiosis. However, some plants are known in which univalents are distributed non-randomly and which therefore can reproduce sexually. The oldest example is the dog roses, in which this was discovered as early as 1922. They are pentaploid, that is, they have five sets of chromosomes. Of these, only two pair during meiosis in both sexes, so there are 7 bivalents and 21 univalents. In the female sex, i.e., in the embryo sac mother cell, all the univalents migrate undivided at meiosis I to the spindle pole that lies in the direction of the micropyle. Since the embryo sac is then formed there with the oocyte, it thus receives 4 complete sets of chromosomes. In pollen meiosis, on the other hand, many univalents remain behind in anaphase I or II (so-called lagging) and are thus lost. This chromosome loss is so high that more than 1/10 of the pollen grains contain only a haploid set of those chromosomes that were paired during meiosis. And since only these haploid pollen grains are functional, the complete pentaploid chromosome set is restored at fertilization. In this way, 3 of the 5 sets of chromosomes are transmitted exclusively through the female line, while the remaining two behave normally.
Leucopogon juniperinus is triploid, and of its 3 chromosome sets only two pair during meiosis I. The univalents of the third set are distributed directionally, and, unlike in dog roses, in both sexes. Pollen meiosis here, as in related species (tribe Stypheleae), is associated with unequal cell division: three of the four daughter nuclei assemble at one end of the initially still undivided pollen mother cell and form three small cells there, which subsequently do not develop further. Thus, only one of the meiosis products gives rise to a pollen grain, and this is mostly haploid as a result of the directional segregation of the univalents in meiosis I, i.e. the univalents are eliminated from the pollen nucleus here not by lagging but by a directional distribution. In the embryo sac mother cell, on the other hand, they all migrate towards the micropyle with a greatly increased probability and thus preferentially enter the oocyte. Although the directional distribution in this species is by no means 100% in both sexes and therefore results in many aneuploid gametes, it is effective enough to allow high fertility.
The South American grass Andropogon ternatus is also triploid, and during meiosis one set of chromosomes remains unpaired. In anaphase I, the univalents in both sexes remain between the segregating half-bivalents and form their own third nucleus, which is included in one of the two daughter cells. In female meiosis, this is the daughter cell facing the micropyle. Thus, in agreement with the two plant species discussed previously, the univalents are allocated directionally to the micropylar side. However, since here the embryo sac is formed at the other end of the tetrad, facing the chalaza, this results in the elimination of the univalents from the inheritance. The compensation for this comes from the pollen, in that apparently only those pollen grains which arise from the dinucleate meiocytes, and are therefore diploid, develop normally and become fertile.
Significance
Fernando Pardo-Manuel de Villena and Carmen Sapienza discussed the significance of these non-randomnesses in a 2001 review limited to non-random segregation of single chromosomes or chromosome pairs. From the widespread occurrence of such phenomena (in plants, insects, and vertebrates) and the diversity of the respective sequence, they conclude that a functional asymmetry of spindle poles - one of the prerequisites of non-random segregation - is probably present in principle and not only exceptionally. This is also true for humans, where non-random segregation occurs when structurally abnormal chromosomes are present as a result of Robertsonian translocations. Elsewhere, the two authors argue for a significance of non-random segregation of structurally different homologous chromosomes (as in Robertsonian translocations) in the emergence of new species in evolution (speciation).
Despite publications about non-random segregation in major journals and at symposia, the potential implications of a multitude of findings were ignored for several decades.
Stem cells and non-random chromosome segregation
Non-random segregation of chromosomes is also found in mitosis when stem cells divide. Adult stem cells maintain the mature tissues of metazoans, and declines in their function are related to tissue ageing. They play a dual role: generating, by differentiation, the various cells that comprise the mature tissue, while also self-replicating to sustain the stem cell pool. They achieve this divergence through asymmetric cell division. The mitotic asymmetry with non-random segregation of chromosomes arises from unequal partitioning of chromosomes according to the age of their template DNA strands. As explained by the immortal DNA strand hypothesis, non-random chromosome segregation has a unique significance in asymmetric stem cell division: the progeny carrying chromosomes with "newly synthesized" DNA has a greater probability of carrying mutations, because that DNA has gone through a higher number of replications than the mostly "old" DNA of the segregated counterpart. As a consequence, the cell carrying the "new" DNA is more likely to differentiate into a progenitor cell, while the cell carrying the "old" DNA is more likely to renew as a stem cell with fewer mutations.
Pre-existing and newly synthesized histone H3 can be distinguished by phosphorylation at threonine 3 (H3T3P). This mark differs between sister chromatids enriched with different pools of H3 and helps coordinate the asymmetric segregation of "old" H3 into germline stem cells; male germline activity requires tight regulation of H3T3 phosphorylation.
Literature
Bernard John: Meiosis. Cambridge University Press, Cambridge 1990. Chapter "Preferential segregation", pp. 238–247
Fernando Pardo-Manuel de Villena, Carmen Sapienza: Nonrandom segregation during meiosis: the unfairness of females. In: Mammalian Genome 12, pp. 331–339 (2001).
References
DNA replication
Meiosis | Non-random segregation of chromosomes | [
"Biology"
] | 7,641 | [
"Genetics techniques",
"Meiosis",
"DNA replication",
"Molecular genetics",
"Cellular processes"
] |
66,848,907 | https://en.wikipedia.org/wiki/Preslav%20Nakov | Preslav Nakov (born on 26 January 1977 in Veliko Turnovo, Bulgaria) is a computer scientist who works on natural language processing. He is particularly known for his research on fake news detection, automatic detection of offensive language, and biomedical text mining. Nakov obtained a PhD in computer science under the supervision of Marti Hearst from the University of California, Berkeley. He was the first person to receive the prestigious John Atanasov Presidential Award for achievements in the development of the information society by the President of Bulgaria.
Education
Preslav Nakov grew up in Veliko Turnovo, Bulgaria, where he attended primary and secondary school, obtaining a Diploma in Mathematics from the Secondary School of Mathematics and Natural Sciences 'Vassil Drumev' in 1996. He then obtained an MSc degree in Informatics (Computer Science) with specialisations in Artificial Intelligence and Information and Communication Technologies from Sofia University in 2001. During his MSc studies, he worked as a teaching assistant at Sofia University and the Bulgarian Academy of Sciences, as well as a guest lecturer at University College London during a visit in Spring 1999. Subsequently, he enrolled in the PhD program at the Department of Electrical Engineering and Computer Science, University of California, Berkeley, partly supported by a Fulbright Scholarship. Under the supervision of Marti Hearst, he wrote a thesis on the topic of text mining from the Web, and graduated with a PhD in Computer Science from UC Berkeley in 2007.
Career
Upon graduating from the University of California, Berkeley, Nakov started work as a Research Fellow at the National University of Singapore. Since 2012, he has been a Senior Scientist at the Qatar Computing Research Institute (QCRI). He maintains a position as an honorary lecturer at Sofia University.
Research
Preslav Nakov works in the area of natural language processing and text mining. He has published over 300 peer-reviewed research papers.
Preslav Nakov's early research was on lexical semantics and text mining. He published influential papers on biomedical text mining, most prominently on methods to identify citation sentences in biomedical papers.
He is, however, best known for his research on fake news detection, such as his work on predicting the factuality and bias of news sources, as well as for his research on the automatic detection of offensive language. Nakov also previously led the organisation of a popular evaluation campaign on sentiment analysis systems as part of SemEval between 2015 and 2017.
He currently coordinates the Tanbih News Aggregator project, a large project with partners at the Qatar Computing Research Institute and the MIT Computer Science and Artificial Intelligence Laboratory, which aims to uncover stance, bias and propaganda in news.
Selected honors and distinctions
2003 John Atanasov Presidential Award for achievements in the development of the information society
2011 RANLP 2011 Young Researcher Award
2020 Conference on Information and Knowledge Management, best paper award
References
1977 births
Living people
Computer scientists
Natural language processing researchers
University of California, Berkeley alumni
Data miners
People from Veliko Tarnovo
Sofia University alumni | Preslav Nakov | [
"Technology"
] | 606 | [
"Computer science",
"Computer scientists"
] |
66,849,689 | https://en.wikipedia.org/wiki/Silver%20Sparrow%20%28malware%29 | The Silver Sparrow computer virus is malware that runs on x86- and Apple M1-based Macintosh computers. Engineers at the cyber security firm Red Canary detected two versions of the malware in January and February 2021.
Description
Two versions of the malware were reported. The first version (described as the "non-M1" version) is compiled for Intel x86-64. It was first detected in January 2021. The second version contains code that runs natively on Apple's proprietary M1 processor, and was probably released in December 2020 and discovered in February 2021. The virus connects to a server hosted on Amazon Web Services. The software includes a self-destruct mechanism.
As of 23 February 2021, information about how the malware spreads and which systems may be compromised is sparse. It is uncertain whether Silver Sparrow is embedded inside malicious advertisements, pirated software, or bogus Adobe Flash Player updaters. Red Canary has theorized that systems could have been infected through malicious search engine results that directed users to download the code. The ultimate objective of the malware's release is also still unknown.
Silver Sparrow is the second malware virus observed to include M1-native code.
Impact
As of 23 February 2021, the Internet security company Malwarebytes had found over 29,000 Macs worldwide running its anti-malware software to be infected with Silver Sparrow. Silver Sparrow-infected Macs had been found in 153 countries as of February 17, with higher concentrations reported in the US, UK, Canada, France, and Germany, according to data from Malwarebytes. Over 39,000 Macs were affected at the beginning of March 2021.
On 23 February 2021, a spokesperson of Apple Inc. stated that "there is no evidence to suggest the malware they identified has delivered a malicious payload to infected users." Apple also revoked the certificates of the developer accounts used to sign the packages, thereby preventing any additional Macs from becoming infected.
References
2021 in computing
Cyberattacks
Cybercrime
Hacking in the 2020s
February 2021 crimes
Computer security exploits
MacOS malware | Silver Sparrow (malware) | [
"Technology"
] | 424 | [
"Computer security exploits"
] |
66,849,795 | https://en.wikipedia.org/wiki/HD%2050002 | HD 50002 (HR 2536) is a solitary star in the southern circumpolar constellation Volans. It is faintly visible to the naked eye with an apparent magnitude of 6.09 and is located at a distance of 708 light years. However, it is drifting further with a heliocentric radial velocity of .
HD 50002 has a classification of K3 III, indicating that it is a red giant. HD 50002 has a mass comparable to the Sun's, but has expanded to an enlarged radius of . It radiates 257 times the luminosity of the Sun from its photosphere at an effective temperature of , giving it an orange hue. HD 50002 is metal-enriched, with 166% of the Sun's abundance of heavy elements, and has a projected rotational velocity too low to be measured accurately.
References
Volans
K-type giants
050002
Durchmusterung objects
032332
2536
Volantis, 4 | HD 50002 | [
"Astronomy"
] | 199 | [
"Volans",
"Constellations"
] |
66,849,877 | https://en.wikipedia.org/wiki/Zygmunt%20Laskowski | Zygmunt or Sigismond Laskowski (19 January 1841 – 15 April 1928) was a Polish physician, surgeon, and anatomist.
Life
Born in Warsaw, he studied at the University of Warsaw and in 1863 fought in the January Uprising. After its defeat he went into exile in France, completing his medical studies in Paris and London between 1864 and 1865. In 1866 he invented a new method of embalming and conserving anatomical specimens, for which he received medals at the Expositions Universelles of 1867 and 1878 as well as another medal in Kraków in 1869.
Between 1869 and 1875 he was docent of anatomy and surgery at the Faculty of Medicine within Paris's University of France. He fought in the Franco-Prussian War as head surgeon of a field ambulance unit, before serving in the Siege of Paris. He moved to Geneva at the invitation of the canton's state council, founding an anatomical museum there. He was a member of the 'Liga Narodowa' (National League), a clandestine organisation hoping to gain Polish independence.
He gained an honorary doctorate from the Jagiellonian University in 1900, whilst his amateur astronomical studies led him to discover the nova V603 Aquilae, which he first observed on 9 June 1918. He died in Geneva.
Works
Les procédés de conservation des pièces anatomiques (1885)
L’embaumement et la conservation des sujets et des préparations anatomiques (1886)
Grand atlas anatomique (1877)
Atlas iconographique de l'anatomie normale du corps humain (1894).
References
University of Warsaw alumni
1841 births
1928 deaths
19th-century Polish physicians
20th-century Polish physicians
Polish surgeons
Polish inventors
French military personnel of the Franco-Prussian War
French people of Polish descent
Polish participants of the January Uprising
Polish anatomists
Polish medical writers
Polish military doctors
Physicians from Warsaw
Emigrants from Congress Poland to France
Discoverers of supernovae
Amateur astronomers | Zygmunt Laskowski | [
"Astronomy"
] | 402 | [
"Astronomers",
"Amateur astronomers"
] |
66,850,068 | https://en.wikipedia.org/wiki/Claude%20Laurgeau | Claude Laurgeau (born November 1942) is a French professor in robotics. His primary interest is intelligent transportation systems.
He was a professor at the University of Nantes from 1975 to 1982, then director of the "Productive robotics research" department at the IT Agency from 1982 to 1987.
In 1989, he was appointed professor at École des Mines de Paris where he created the Robotics Research Center (CAOR) which he directed until February 2008. He officially retired at the end of 2010, while continuing to collaborate on research projects.
He has contributed to the creation of several robotics "start-up" companies, both at the IT Agency and at the École des mines.
He is president of the company Intempora which develops and publishes the software RTMaps.
Awards
Member of the Order of Academic Palms
Member of the National Order of Merit
Grand Jury Prize at the 2004 Léonard Trophies
Received the Engelberger Robotics Award in 2004
Books
Programmable Automata - Dunod, 1978
The Machines of Vision - ETA, 1986
Industrial Automation - SCM, 1977
Languages for Robotics - Hermès
Understanding Robotics - AFRI
The Century of the Intelligent Car - Presses des Mines ParisTech, 2009
References
Academic staff of the University of Nantes
Intelligent transportation systems
Roboticists
1942 births
Living people | Claude Laurgeau | [
"Technology"
] | 253 | [
"Warning systems",
"Intelligent transportation systems",
"Information systems",
"Transport systems"
] |
66,851,303 | https://en.wikipedia.org/wiki/Kepler-411 | Kepler-411 is a binary star system. Its primary star Kepler-411A is a K-type main-sequence star, orbited by the red dwarf star Kepler-411B on a wide orbit, discovered in 2012.
Primary star
The primary star's surface temperature is . Kepler-411A is similar to the Sun in its concentration of heavy elements, with a metallicity Fe/H index of 0.11, but is much younger at an age of 212 million years.
Kepler-411A exhibits significant starspot activity, with starspots covering 1.7% of the stellar surface. Darker starspots are concentrated around the equator of the star. Kepler-411A exhibits differential rotation, but with a smaller amount of differential shear than the Sun.
The companion Kepler-411B is away from Kepler-411A. It is a red dwarf and a flare star.
Planetary system
In 2013, one planet, named Kepler-411b, was discovered, followed by the planet Kepler-411c in 2016. A third planet in the system, Kepler-411d, detected by the transit method, and a fourth, Kepler-411e, detected by the radial velocity method, were discovered in 2019.
References
Cygnus (constellation)
Planetary transit variables
K-type main-sequence stars
Planetary systems with four confirmed planets
J19102533+4931237
1781
Binary stars | Kepler-411 | [
"Astronomy"
] | 269 | [
"Cygnus (constellation)",
"Constellations"
] |
66,851,658 | https://en.wikipedia.org/wiki/Uniform%20boundedness%20conjecture%20for%20rational%20points | In arithmetic geometry, the uniform boundedness conjecture for rational points asserts that for a given number field K and a positive integer g ≥ 2, there exists a number N(K, g) depending only on K and g such that any algebraic curve defined over K having genus equal to g has at most N(K, g) K-rational points. This is a refinement of Faltings's theorem, which asserts that the set of K-rational points of such a curve is necessarily finite.
Progress
The first significant progress towards the conjecture was due to Caporaso, Harris, and Mazur. They proved that the conjecture holds if one assumes the Bombieri–Lang conjecture.
Mazur's conjecture B
Mazur's conjecture B is a weaker variant of the uniform boundedness conjecture that asserts that there should be a number N(K, g, r) such that for any algebraic curve defined over K having genus g and whose Jacobian variety has Mordell–Weil rank over K equal to r, the number of K-rational points of the curve is at most N(K, g, r).
Michael Stoll proved that Mazur's conjecture B holds for hyperelliptic curves with the additional hypothesis that the Mordell–Weil rank is at most g − 3. Stoll's result was further refined by Katz, Rabinoff, and Zureick-Brown in 2015. Both of these works rely on Chabauty's method.
Mazur's conjecture B was resolved by Dimitrov, Gao, and Habegger in 2021 using the earlier work of Gao and Habegger on the geometric Bogomolov conjecture instead of Chabauty's method.
References
Conjectures
Arithmetic geometry | Uniform boundedness conjecture for rational points | [
"Mathematics"
] | 301 | [
"Unsolved problems in mathematics",
"Arithmetic geometry",
"Conjectures",
"Mathematical problems",
"Number theory"
] |
66,854,311 | https://en.wikipedia.org/wiki/Kubelka%E2%80%93Munk%20theory | In optics, the Kubelka–Munk theory devised by Paul Kubelka and Franz Munk, is a fundamental approach to modelling the appearance of paint films. As published in 1931, the theory addresses "the question of how the color of a substrate is changed by the application of a coat of paint of specified composition and thickness, and especially the thickness of paint needed to obscure the substrate". The mathematical relationship involves just two paint-dependent constants.
In their article, differential equations are developed using a two-stream approximation for light diffusing through a coating whose absorption and remission (back-scattering) coefficients are known. The total remission from a coating surface is the summation of:
the reflectance of the coating surface;
the remission from the interior of the coating; and
the remission from the surface of the substrate.
The intensity considered in the latter two parts is modified by the absorption of the coating material. The concept is based on the simplified picture of two diffuse light fluxes moving through semi-infinite plane-parallel layers, with one flux proceeding "downward", and the other simultaneously "upward".
While Kubelka entered this field through an interest in coatings, his work has influenced workers in other areas as well. In the original article, there is a special case of interest to many fields: "the albedo of an infinitely thick coating". This case yielded the Kubelka–Munk equation, which describes the remission from a sample composed of an infinite number of infinitesimal layers, each having an absorption fraction and a remission fraction. The authors noted that the remission from an infinite number of these infinitesimal layers is "solely a function of the ratio of the absorption and back-scatter (remission) constants, but not in any way on the absolute numerical values of these constants". (The equation is presented in the same mathematical form as in the article, but with symbolism modified.)
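In the notation most often quoted today, with K and S denoting the absorption and back-scattering (remission) constants (this symbolism is assumed here rather than taken from the original article), the reflectance of an infinitely thick layer satisfies

R_\infty = 1 + \frac{K}{S} - \sqrt{\left(\frac{K}{S}\right)^{2} + 2\,\frac{K}{S}}

which depends only on the ratio K/S, as the authors stated.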
While numerous early authors had developed similar two-constant equations, the mathematics of most of these was found to be consistent with the Kubelka–Munk treatment. Others added additional constants to produce more accurate models, but these generally did not find wide acceptance. Due to its simplicity and its acceptable prediction accuracy in many industrial applications, the Kubelka–Munk model remains very popular. However, in almost every application area, the limitations of the model have required improvements. Sometimes these improvements are touted as extensions of Kubelka–Munk theory, sometimes as embracing more general mathematics of which the Kubelka–Munk equation is a special case, and sometimes as an alternate approach.
Paint colors
In the original article, there are several special cases important to paints that are addressed, along with a mathematical definition of hiding power (an ability to hide the surface of an object). The hiding power of a coating measures its ability to obscure a background of contrasting color. Hiding power is also known as opacity or covering power.
In the following, we refer to the fraction of incident light that is remitted (reflected) by the coated substrate under consideration, the remission fraction of the substrate alone, the remission fraction of the coating, the remission fraction of an infinitely thick layer, the fraction of incident light transmitted by the sample under consideration, and the coating thickness X.
Ideal white paint An ideal white paint reflects all incident light and absorbs none. For this case, the remission fraction for a layer of finite thickness follows from the general solution with the absorption set to zero.
Ideal glaze A coating of an ideal glaze remits no light and merely absorbs a fraction of the light passing through it. For an infinitely thick glaze, the remission therefore approaches zero.
In the original article, there is a solution for remission from a coating of finite thickness. Kubelka derived many additional formulas for a variety of other cases, which were published in the post-war years. Whereas the 1931 theory assumed that light flows in one dimension (two fluxes, upward and downward within the layer), in 1948 Kubelka derived the same equations (up to a factor of 2) assuming spherical scatter within the paint layer. Later he generalized the theory to inhomogeneous layers (see below).
Paper and paper coatings
The Kubelka–Munk theory is also used in the paper industry to predict optical properties of paper, avoiding a labor-intensive trial-and-error approach. The theory is relatively simple in terms of the number of constants involved, works very well for many papers, and is well documented for use by the pulp and paper industry. If the optical properties (e.g., reflectance and opacity) of each pulp, filler, and dye used in paper-making are known, then the optical properties of a paper made with any combination of the materials can be predicted. If the contrast ratio and reflectivity of a paper are known, the changes in these properties with a change in basis weight can be predicted.
While the Kubelka–Munk coefficients are assumed to be linear and independent quantities, the relationship fails in regions of strong absorption, such as in the case of dyed paper. Several theories were proposed to explain the non-linear behavior of the coefficients, attributing the non-linearity to the non-isotropic structure of paper at both the micro- and macroscopic levels. However, using an analysis based on the Kramers–Kronig relations, the coefficients were shown to be dependent quantities related to the real and imaginary part of the refractive index. By accounting for this dependency, the anomalous behavior of the Kubelka–Munk coefficients in regions of strong absorption were fully explained.
Semiconductors
The band-gap energy of semiconductors is frequently determined from a Tauc plot, in which a quantity derived from the absorption coefficient is plotted against the photon energy. The band-gap energy can then be obtained by extending the straight segment of the graph to the energy axis. A simpler method adapted from the Kubelka–Munk theory calculates the band gap from an analogous plot in which the Kubelka–Munk remission function takes the place of the absorption coefficient.
Colors
Early practitioners, especially D. R. Duncan, assumed that in a mixture of pigments, the colors produced in any given medium may be deduced from formulae involving two constants for each pigment. These constants, which vary with the wavelength of the incident light, measure respectively the absorbing power of the pigment for light and its scattering power. The work of Kubelka and Munk was seen as yielding a useful systematic approach to color mixing and matching. By resolving the Kubelka–Munk equation for the ratio of absorption to scatter, one can obtain a "remission function":
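In the commonly used form (with K and S again denoting the absorption and back-scattering coefficients; the notation is assumed here), the remission function reads

F(R_\infty) = \frac{(1 - R_\infty)^{2}}{2 R_\infty} = \frac{K}{S}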
We may define absorption and back-scattering coefficients, conventionally written K and S, which replace the absorption and remission fractions in the Kubelka–Munk equation above. The absorption and scattering coefficients of a mixture are then assumed to be separately additive over its components, each weighted by its concentration:
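In a common two-constant formulation (symbols assumed here), with c_i the concentration of component i,

F(R_\infty) = \frac{K}{S}\bigg|_{\text{mixture}} = \frac{\sum_i c_i K_i}{\sum_i c_i S_i}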
For the case of a small amount of pigment, the scatter is dominated by the base material and is assumed to be constant. In such a case, the equation is linear in the concentration of the pigment.
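A minimal sketch of this two-constant mixing scheme is given below in Python. The pigment names and the per-band absorption (k) and back-scatter (s) values are invented placeholders rather than measured data; a practical implementation would use many spectral bands and calibrated coefficients.

import numpy as np

# Illustrative per-band coefficients for two hypothetical pigments (three broad bands).
k = {"white": np.array([0.05, 0.05, 0.05]),   # absorption
     "blue":  np.array([0.90, 0.60, 0.10])}
s = {"white": np.array([1.00, 1.00, 1.00]),   # back-scatter
     "blue":  np.array([0.30, 0.35, 0.60])}

def mix_reflectance(concentrations):
    # Concentration-weighted sums of k and s, then invert (1 - R)^2 / (2R) = K/S for R.
    K = sum(c * k[name] for name, c in concentrations.items())
    S = sum(c * s[name] for name, c in concentrations.items())
    ratio = K / S
    return 1.0 + ratio - np.sqrt(ratio ** 2 + 2.0 * ratio)

print(mix_reflectance({"white": 0.8, "blue": 0.2}))

Because the mixture's absorption and scatter enter only through their ratio, tinting a strongly scattering base with a small amount of pigment changes the remission function approximately linearly in the pigment concentration, as stated above.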
Spectroscopy
One special case has received much attention in diffuse reflectance spectroscopy: that of an opaque (infinitely thick) coating, which can be applied to a sample modeled as an infinite number of infinitesimal layers. The two-stream approximation was embraced by the early practitioners. There was far more mathematics to choose from, but the name Kubelka–Munk became widely regarded as synonymous with any technique that modeled diffuse radiation moving through layers of infinitesimal size. This was aided by the popular assumption that the Kubelka–Munk function (above) was analogous to the absorbance function in transmission spectroscopy.
In the field of infrared spectroscopy, it was common to prepare solid samples by finely grinding the sample with potassium bromide (KBr). This led to a situation analogous to that described in the section just above for pigments, where the analyte had little effect on the scatter, which was dominated by the KBr. In this case, the assumption of the function being linear with concentration was reasonable.
However, in the field of near-infrared spectroscopy, samples are generally measured in their natural (often particulate) state, and deviations from linearity at higher absorption levels were routinely observed. The remission function (also called the Kubelka–Munk function) was almost abandoned in favor of "log(1/R)". A more general equation, called the Dahm equation, was developed, along with a scheme to separate the effects of scatter from absorption in the log(1/R) data. In the equation, the measured remission from and transmission through a sample made up of a number of identical layers are related to the absorption and remission fractions of a single layer. Note that the so-called ART function, built from the measured remission and transmission, is constant for any sample thickness.
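A form consistent with this description, with R and T the measured remission and transmission of the sample (the exact symbols are assumed here), is

A(R, T) = \frac{(1 - R)^{2} - T^{2}}{R}

which takes the same value regardless of how many identical layers make up the sample.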
In other areas of spectroscopy, there are shifts away from the strict use of the Kubelka–Munk treatment as well.
Failure of continuous models of diffuse reflectance
Continuous models are widely used to model diffuse reflection from particulate samples. They are embodied in various theories, including diffusion theory, the equation of radiation transfer, as well as Kubelka–Munk. In spite of its widespread use, there has long been an understanding that the Kubelka–Munk (K–M) theory has limitations. The term "failure of the Kubelka–Munk theory" has been applied because it does not "remain valid in strongly absorbing materials". There have been many attempts to explain the limitations and amend the K–M equation. In literature related to diffuse-reflection infrared Fourier-transform (DRIFT) spectra, "particularly specular reflection" is often identified as a culprit. In some corners, there is a working assumption that the problem is that the K–M theory is a two-flux theory, and that introducing additional directions will solve the problem. In particular, two continuous theories, "diffusion theory" and the "equation of radiation transfer" (ERT), have their advocates. Some of the advocates of the ERT have called attention to the failure of the ERT to predict the desired linear absorption coefficient as particle size gets large, and blamed it on the hidden mass effect. In 2003, Donald and Kevin Dahm illustrated the degree to which the continuous theories all suffer from the fundamental limitation of trying to model a discontinuous sample as a continuum, and suggested that as long as the effect of this limitation is unexplored, there is little reason to search for other reasons for "failure".
Spectroscopists have a desire to determine the same absorption coefficient quantity from diffuse reflectance measurements as they would from a transmission measurement on a non-scattering sample of the same material. The Bouguer–Lambert law describes the attenuation of transmitted light as exponential falloff in intensity of a direct beam of light as it passes through a medium. The cause of the attenuation may be absorption or scatter. A coefficient unaffected by scatter is desired by absorption spectroscopists. Mathematically, the Bouguer–Lambert law may be expressed as
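In an assumed but conventional notation, with I_0 the incident intensity, d the path length, k the linear absorption coefficient and s the back-scatter coefficient,

\frac{I}{I_0} = e^{-(k + s)\,d} = e^{-\varepsilon d}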
Here the sum of the linear absorption coefficient and the back-scatter coefficient is often called the extinction coefficient, and separate symbols may be used to represent its absorption and scattering parts.
Through work with the Dahm equation, we know that the ART function is constant for all sample thicknesses of the same material. This would include the infinitesimal layer used in the Kubelka–Munk differentiation. Consequently we may equate numerous functions:
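One plausible reading of this chain of equalities, in the symbols introduced above, is that the ART function of any sample equals its value for an infinitely thick sample and hence twice the Kubelka–Munk ratio:

\frac{(1 - R)^{2} - T^{2}}{R} = \frac{(1 - R_\infty)^{2}}{R_\infty} = 2\,\frac{K}{S}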
Using a simple system (albeit rather complex mathematics), it can be shown that continuous models correctly predict the remission and transmission that enter the ART function, but do not correctly predict the fractions of incident light that are transmitted directly. From this, it can be deduced that the Kubelka–Munk coefficients are not proportional to the absorption and back-scatter coefficients in the Bouguer–Lambert law.
Treatment of inhomogeneous layers
A coating layer is not the same as the substrate it covers. As Kubelka was interested in coatings, he was of course very interested in the handling of what he called "inhomogeneous layers". A set of equations for two light streams passing through plane-parallel layers had been published by Frank Benford in 1946, and one of them was believed to apply to this case; on its own, however, it did not handle the inhomogeneous case successfully. Kubelka solved the problem, and the solution is illustrated here. First, consider a case to which Benford's equations may be applied straightforwardly.
The sketch shows two surfaces bounding a slab of a non-absorbing medium. Notice that the assembly would appear identical regardless of which side was being entered. Apart from the surfaces, the medium has no spectroscopic properties. A beam of light of unit intensity reaches the front surface, and by our assumptions, half is remitted and half is transmitted. The portion that is transmitted proceeds to the other surface undiminished. There it is again split, where half (1/4 of the original incident intensity) is transmitted and half is remitted. The amounts remitted from the first surface can be totaled, as can the amounts transmitted through the second. The total remission is 2/3 ≈ 0.667. The total transmission is 1/3 ≈ 0.333.
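These totals can be checked by summing the successive passes as a geometric series: each full round trip between the two surfaces multiplies the circulating intensity by (1/2)(1/2) = 1/4, so

R = \frac{1}{2} + \frac{1}{8}\sum_{n=0}^{\infty}\left(\tfrac{1}{4}\right)^{n} = \frac{1}{2} + \frac{1/8}{3/4} = \frac{2}{3}, \qquad T = \frac{1}{4}\sum_{n=0}^{\infty}\left(\tfrac{1}{4}\right)^{n} = \frac{1/4}{3/4} = \frac{1}{3}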
Alternatively, we can use the equations of Benford that apply. For two plane-parallel layers, x and y, having different properties, the transmission, remission, and absorption fractions of the combined pair can be calculated from the properties of the individual layers by the following equations:
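A commonly quoted form of these relations, for light incident on layer x first (writing R_(-x) for the remission of layer x when it is illuminated from its reverse side; the symbols are assumed here), is

T_{xy} = \frac{T_x T_y}{1 - R_{(-x)} R_y}, \qquad R_{xy} = R_x + \frac{T_x^{2} R_y}{1 - R_{(-x)} R_y}, \qquad A_{xy} = 1 - T_{xy} - R_{xy}

For homogeneous layers R_(-x) = R_x, and these expressions reproduce the totals of 2/3 and 1/3 found above for the two identical surfaces.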
Next we will examine the case where the medium is an absorbing one. While the total assembly would behave the same in either direction, in order to apply the mathematics we will need to use an intermediate step in which it does not.
Here we will again assume that the surfaces remit and transmit half of the light striking them; this time we will also assume that half of the intensity is absorbed in each trip across the slab. A beam of light of unit intensity reaches the front surface and, by our assumptions, half is remitted and half is transmitted. This time half of the transmitted light, or 1/4, is absorbed before the remaining 1/4 reaches the second surface, where 1/8 is transmitted and 1/8 is remitted back across the slab, half of which is in turn absorbed, and so on. The sketch shows the values calculated from the equations.
Now the sketch has three layers, labeled 1, 2, and 3. Layers 1 and 3 remit and transmit 1/2 and absorb nothing. Layer 2 absorbs half and transmits half, but remits nothing. We can build the assembly by first combining layers 1 and 2, and then combining that result as the x value in combining with layer 3 (as y).
So for step 1:
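Working through the assumed relations above with layer 1 (a surface: remission 1/2, transmission 1/2) as x and layer 2 (the absorbing slab: transmission 1/2, remission 0) as y gives, for illumination from the layer-1 side,

T_{12} = \frac{(1/2)(1/2)}{1 - (1/2)(0)} = \frac{1}{4}, \qquad R_{12} = \frac{1}{2} + 0 = \frac{1}{2}, \qquad A_{12} = 1 - \frac{1}{4} - \frac{1}{2} = \frac{1}{4}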
Kubelka has shown by theory and experiment that the remittance and absorption of a non-homogeneous specimen depend on the direction of illumination, whereas the transmittance does not. Consequently, for non-homogeneous layers, the remission of the first layer that occurs in the denominator is its remission when illuminated from the reverse (not the forward) direction, so for the next step we will need to know this reverse-direction value, that is, the remission obtained when layer x is layer 2 and layer y is layer 1:
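Under the same assumed relations, the remission of the combined layer when illuminated from the layer-2 side is

R_{(-12)} = 0 + \frac{(1/2)^{2}(1/2)}{1 - (0)(1/2)} = \frac{1}{8}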
The next step is then to take the combined layer (1+2) as layer x and layer 3 as layer y, with the reverse-direction remission just calculated appearing in the denominator:
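Continuing with the assumed relations, with layer 3 another surface of remission and transmission 1/2, this gives

T_{123} = \frac{(1/4)(1/2)}{1 - (1/8)(1/2)} = \frac{2}{15}, \qquad R_{123} = \frac{1}{2} + \frac{(1/4)^{2}(1/2)}{1 - (1/8)(1/2)} = \frac{8}{15}, \qquad A_{123} = 1 - \frac{2}{15} - \frac{8}{15} = \frac{1}{3}

The same values follow from tracing the multiply reflected beams directly, which serves as a check on the assumed forms.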
In computer art
The K–M paint-mixing algorithm has been adapted to directly use the RGB color model by Sochorová and Jamriška in 2021. Their "Mixbox" approach works by converting the inputs into a version of CMYK (phthalo blue, quinacridone magenta, Hansa yellow, and titanium white) plus a residue (to account for the gamut difference), performing the K–M mixing in that latent space, and then producing the output in RGB. There are additional concerns for dealing with wider gamuts and improving speed. This RGB adaptation makes it easier for digital painting software to integrate the more realistic K–M method.
Notes
References
Lighting
Scattering, absorption and radiative transfer (optics) | Kubelka–Munk theory | [
"Chemistry"
] | 3,340 | [
"Scattering",
" absorption and radiative transfer (optics)"
] |
66,854,421 | https://en.wikipedia.org/wiki/Surface%20equivalence%20principle | In electromagnetism, surface equivalence principle or surface equivalence theorem relates an arbitrary current distribution within an imaginary closed surface with an equivalent source on the surface. It is also known as field equivalence principle, Huygens' equivalence principle or simply as the equivalence principle. Being a more rigorous reformulation of the Huygens–Fresnel principle, it is often used to simplify the analysis of radiating structures such as antennas.
Certain formulations of the principle are also known as Love equivalence principle and Schelkunoff equivalence principle, after Augustus Edward Hough Love and Sergei Alexander Schelkunoff, respectively.
Physical meaning
General formulation
The principle yields an equivalent problem for a radiation problem by introducing an imaginary closed surface and fictitious surface current densities. It is an extension of the Huygens–Fresnel principle, which describes each point on a wavefront as a spherical wave source. The equivalence of the imaginary surface currents is enforced by the uniqueness theorem in electromagnetism, which dictates that a unique solution can be determined by fixing a boundary condition on a system. With the appropriate choice of the imaginary current densities, the fields inside the surface or outside the surface can be deduced from the imaginary currents. In a radiation problem with given electric and magnetic current density sources, the tangential field boundary conditions necessitate that
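In an assumed but standard notation, with n̂ the outward unit normal on the surface, (E_1, H_1) the fields chosen inside it and (E_2, H_2) the original fields outside it, the impressed surface currents take the form

\mathbf{J}_s = \hat{n} \times (\mathbf{H}_2 - \mathbf{H}_1), \qquad \mathbf{M}_s = -\hat{n} \times (\mathbf{E}_2 - \mathbf{E}_1)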
where the impressed electric and magnetic current sources on the closed surface are determined by the difference between the electric and magnetic fields outside the surface and those inside it. Both the original and imaginary currents should produce the same external field distributions.
Love and Schelkunoff equivalence principles
Per the boundary conditions, the fields inside the surface and the current densities can be arbitrarily chosen as long as they produce the same external fields. Love's equivalence principle, introduced in 1901 by Augustus Edward Hough Love, takes the internal fields as zero:
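With the interior fields set to zero, the expressions above reduce (in the same assumed notation) to

\mathbf{J}_s = \hat{n} \times \mathbf{H}_2, \qquad \mathbf{M}_s = -\hat{n} \times \mathbf{E}_2

so the surface currents are determined entirely by the external fields of the original problem evaluated on the surface.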
The fields inside the surface are referred to as null fields. Thus, the surface currents are chosen so as to sustain the external fields of the original problem. Alternatively, a Love equivalent problem for the field distributions inside the surface can be formulated: this requires the negative of the surface currents used in the external radiation case. The surface currents then radiate the fields of the original problem inside the surface; nevertheless, they produce null external fields.
Schelkunoff equivalence principle, introduced by Sergei Alexander Schelkunoff, substitutes the closed surface with a perfectly conducting material body. In the case of a perfect electrical conductor, the electric currents that are impressed on the surface won't radiate due to Lorentz reciprocity. Thus, the original currents can be substituted with surface magnetic currents only. A similar formulation for a perfect magnetic conductor would use impressed electric currents.
The equivalence principles can also be applied to conductive half-spaces with the aid of method of image charges.
Applications
The surface equivalence principle is heavily used in the analysis of antenna problems to simplify the problem: in many applications, the closed surface is chosen so as to encompass the conductive elements and thereby simplify the limits of integration. Selected uses in antenna theory include the analysis of aperture antennas and the cavity model approach for microstrip patch antennas. It has also been used as a domain decomposition method for method-of-moments analysis of complex antenna structures. Schelkunoff's formulation is employed particularly for scattering problems.
The principle has also been used in the analysis and design of metamaterials such as Huygens’ metasurfaces and plasmonic scatterers.
See also
Aperture antennas
Babinet's principle
Electromagnetism uniqueness theorem
Huygens–Fresnel principle
Reciprocity (electromagnetism)
References
Bibliography
Electromagnetism
Antennas
Diffraction
Electromagnetic radiation | Surface equivalence principle | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 779 | [
"Physical phenomena",
"Electromagnetism",
"Telecommunications engineering",
"Spectrum (physical sciences)",
"Antennas",
"Electromagnetic radiation",
"Diffraction",
"Radiation",
"Crystallography",
"Fundamental interactions",
"Spectroscopy"
] |
69,594,402 | https://en.wikipedia.org/wiki/ACP-105 | ACP-105 is a drug which acts as a selective androgen receptor modulator (SARM). It has been investigated for potential use in the treatment of age-related cognitive decline. The drug has been found to reduce anxiety-like behavior in a mouse model of Alzheimer's disease when administered alone, as well as enhance spatial memory when coadministered with the selective estrogen receptor β agonist AC-186. ACP-105 is an aniline SARM and is related to AC-262536 and vosilasarm (RAD140).
References
Abandoned drugs
Chloroarenes
Heterocyclic compounds with 2 rings
Nitriles
Nitrogen heterocycles
Selective androgen receptor modulators
Tertiary alcohols | ACP-105 | [
"Chemistry"
] | 156 | [
"Pharmacology",
"Drug safety",
"Medicinal chemistry stubs",
"Functional groups",
"Pharmacology stubs",
"Nitriles",
"Abandoned drugs"
] |
69,594,438 | https://en.wikipedia.org/wiki/Kruskal%20count | The Kruskal count (also known as Kruskal's principle, Dynkin–Kruskal count, Dynkin's counting trick, Dynkin's card trick, coupling card trick or shift coupling) is a probabilistic concept originally demonstrated by the Russian mathematician Evgenii Borisovich Dynkin in the 1950s or 1960s discussing coupling effects and rediscovered as a card trick by the American mathematician Martin David Kruskal in the early 1970s as a side-product while working on another problem. It was published by Kruskal's friend Martin Gardner and magician Karl Fulves in 1975. This is related to a similar trick published by magician Alexander F. Kraus in 1957 as Sum total and later called Kraus principle.
Besides uses as a card trick, the underlying phenomenon has applications in cryptography, code breaking, software tamper protection, code self-synchronization, control-flow resynchronization, design of variable-length codes and variable-length instruction sets, web navigation, object alignment, and others.
Card trick
The trick is performed with cards, but is more a magical-looking effect than a conventional magic trick. The magician has no access to the cards, which are manipulated by members of the audience. Thus sleight of hand is not possible. Rather the effect is based on the mathematical fact that the output of a Markov chain, under certain conditions, is typically independent of the input. A simplified version using the hands of a clock is as follows. A volunteer picks a number from one to twelve and does not reveal it to the magician. The volunteer is instructed to start from 12 on the clock and move clockwise by a number of spaces equal to the number of letters that the chosen number has when spelled out. This is then repeated, moving by the number of letters in the new number. The output after three or more moves does not depend on the initially chosen number and therefore the magician can predict it.
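The convergence can be checked with a short simulation. The sketch below, in Python, is illustrative only: it assumes the usual English spellings of the numbers and clock positions numbered 1 to 12, walks every possible starting choice through three moves, and confirms that all choices end on the same position.

NAMES = {
    1: "one", 2: "two", 3: "three", 4: "four", 5: "five", 6: "six",
    7: "seven", 8: "eight", 9: "nine", 10: "ten", 11: "eleven", 12: "twelve",
}

def step(position: int) -> int:
    # Move clockwise from the current position by the letter count of its name.
    return (position + len(NAMES[position]) - 1) % 12 + 1

def trace(chosen: int, moves: int = 3) -> int:
    # Start at 12, make the first move using the secretly chosen number,
    # then keep moving by the letter count of each landing position.
    position = (12 + len(NAMES[chosen]) - 1) % 12 + 1
    for _ in range(moves - 1):
        position = step(position)
    return position

finals = {n: trace(n) for n in range(1, 13)}
assert len(set(finals.values())) == 1  # every starting choice converges to the same place

Because every trajectory is absorbed into the same path after a few moves, the magician can announce the final position without knowing the chosen number.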
See also
Coupling (probability)
Discrete logarithm
Equifinality
Ergodic theory
Geometric distribution
Overlapping instructions
Pollard's kangaroo algorithm
Random walk
Self-synchronizing code
Notes
References
Further reading
External links
Recreational mathematics
Cryptography
Number theoretic algorithms
Markov models | Kruskal count | [
"Mathematics",
"Engineering"
] | 765 | [
"Recreational mathematics",
"Cryptography",
"Cybersecurity engineering",
"Applied mathematics"
] |
69,594,812 | https://en.wikipedia.org/wiki/PF-06260414 | PF-06260414 is a drug which acts as a selective androgen receptor modulator (SARM), and was developed for androgen replacement therapy.
See also
AC-262536
ACP-105
Enobosarm
JNJ-28330835
Ligandrol
References
Selective androgen receptor modulators
Isoquinolines
Thiadiazinanes
Nitriles
Sulfones
Sulfur–nitrogen compounds | PF-06260414 | [
"Chemistry"
] | 90 | [
"Sulfones",
"Nitriles",
"Functional groups"
] |
69,594,924 | https://en.wikipedia.org/wiki/GSK-4336A | GSK4336A is a drug which acts as a selective androgen receptor modulator (SARM), and was developed for androgen replacement therapy.
See also
AC-262536
ACP-105
Enobosarm
JNJ-28330835
Ligandrol
References
Selective androgen receptor modulators
Chloroarenes
Trifluoromethyl compounds
Amides
Benzoxazepines
Lactams
Anilines
Carboxamides | GSK-4336A | [
"Chemistry"
] | 96 | [
"Amides",
"Functional groups"
] |
69,597,348 | https://en.wikipedia.org/wiki/Clayton%20J.%20Whisnant | Clayton J. Whisnant is Chapman Professor of the Humanities and European History at Wofford College.
Works
References
Historians of sexuality
Living people
Historians of Germany
Wofford College faculty
Year of birth missing (living people) | Clayton J. Whisnant | [
"Biology"
] | 46 | [
"Behavior",
"Sexuality",
"Historians of sexuality"
] |
69,597,365 | https://en.wikipedia.org/wiki/Pipe-in-pipe%20system | A pipe-in-pipe system is a form of plumbing in which every water pipe runs inside another, outer pipe. Its purpose is to ensure that any leak in the innermost pipe will not reach the building structure and can be detected, and to make it easier to replace a leaking inner pipe without opening the walls.
Parts
A pipe-in-pipe system consists of four main components: an inner pipe, an outer pipe, wall boxes, and a distribution cabinet.
Legal requirements
In Norway, domestic plumbing based on pipe-in-pipe systems (known as rør-i-rør systems) was introduced in 1995 and has since become a legal requirement in all new houses being built. Owing to experience with water damage from traditional copper pipes in Norway between 1970 and 1995, new requirements came into force in 1997 stating that water pipes should be easily accessible for replacement after installation. These requirements have since led to the current de facto requirement for pipe-in-pipe plumbing.
Operation
During normal operation, the water flows through the inner water pipe, which is enclosed by the outer conduit pipe. The plumbing leads back to a centrally located distribution cabinet where the pipes to all tapping points of the home are gathered in one place. In the event of damage and leakage in the inner pipes, the outer pipes must ensure that the leaking water is safely diverted to the distribution cabinet, where it is made visible for inspection before being led to a room with drains in the floor. It can be advantageous to install a water leak sensor and connect it to an automatic water stop valve which closes the water supply. In the event of visible leaks, there must also be an easily accessible manual main shut-off valve which can stop the water supply to all pipes.
Installation and repairs
Due to the way pipe-in-pipe systems are installed using flexible internal and external pipes, it is possible to pull out and replace the water pipes without having to open up walls. To ensure this, it is important that the outer pipes are clamped properly and securely so that they remain attached to the building structure, while also not being damaged. This is an absolutely crucial factor for how easy it is to change the water pipes, and the clamping rings used should be attached near wall boxes and distribution cabinets.
Water hammer
In addition to proper clamping of the external pipes to the building structure, proper clamping of the distributors inside the distribution cabinet is also important to avoid water hammer when taps are closed rapidly. Pipe-in-pipe systems may be more susceptible to water hammer, and are therefore recommended to be used together with soft-closing water taps.
References
Water supply
Building engineering
Plumbing | Pipe-in-pipe system | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 542 | [
"Hydrology",
"Building engineering",
"Plumbing",
"Construction",
"Civil engineering",
"Environmental engineering",
"Water supply",
"Architecture"
] |
69,597,400 | https://en.wikipedia.org/wiki/Digital%20rights%20in%20the%20Caribbean | Digital rights—human rights in relation to digital technologies—present particular challenges in the Caribbean countries, due to its geographies, political context, social inequalities and cultural diversity. While they face the same problem of digital divides as other regions, for islands the impacts of not accessing or understanding digital technologies can have particularly harmful consequences. Similar concerns could be found in terms of gender-based violence online, a global problem encompassing psychological, physical, emotional and sexual violence. This affects more acutely girls and young women and brings about special concerns within the Caribbean. However, there are other topics which are utmost problematic because of the history and type of applicable law system in countries from this region, such as in the case of digital identity and internet shutdowns. Despite variations across Caribbean countries, issues happening in one country can be replicated within the region or can affect people living in other countries.
Digital ID
Jamaica
There have been two attempts to implement the National Identification System (NIDS) bill in Jamaica. The first one was proposed in 2017 and dropped in 2019, when the Court indicated that the bill unjustifiably breached Jamaicans' rights to privacy and equality, among others. In 2020, a new version of the bill was proposed. In 2021, civil society submitted recommendations to parliamentarians, but most of them were not taken into account.
One of the many criticisms is the disproportionately large amount of personal information being collected. Gathering biometric data in particular was flagged as unnecessary for providing legal identity and as a target for potential leaks and illegitimate access. The Jamaican government has already mistakenly exposed personal data, as seen in the JamCOVID app scandal that made public the immigration records of hundreds of thousands of travelers, as well as their COVID test results.
Other concerns expressed to parliamentarians and senators include opening the door for the disclosure of information to third parties in the future, and dismissing stronger safeguards for authentication logs, potentially providing space for profiling and surveilling citizens. In October 2021 the Lower House passed the bill.
Dominican Republic
In the Dominican Republic, new Digital ID legislation was accelerated in 2014 with the support of the United Nations and the World Bank to align local policies with international regulations, such as those of the European Union. This has been criticised in terms of the effectiveness of its promises to address issues of statelessness, poverty and social inclusion, especially amongst disadvantaged populations such as Afro-descendant people, indigenous people and women.
In 2020, protests erupted from civil society over trust issues and the limitations on access to basic services that the Digital ID system represented. Furthermore, investigations showcased how the system was reinforcing vulnerabilities, for example by making it harder for groups historically excluded from foundational identity systems to access or renew a digital ID. The case of Haitian migrants in the Dominican Republic and Dominican citizens of Haitian descent was presented as particularly illustrative, as the Dominican state has for 80 years fought not to recognise their nationality.
Besides its implications for the right to privacy, the Digital ID system has been problematised for excessively affecting access to basic services, such as welfare, transport, health and education. Finally, the overall lack of accountability over the purposes for which collected data is used, and its implications for other fundamental rights, has been highlighted. The case of Digital ID in the Dominican Republic has been thoroughly discussed in the investigation "Legal Identity, Race and Belonging in the Dominican Republic. From Citizen to Foreigner" by Eve Hayes de Kalaf.
Internet shutdowns
Cuba
On multiple occasions, internet shutdowns in Cuba have coincided with protests opposing the government. ETECSA is the only internet service provider in the country, making it easier to cut communications. Some recent incidents include:
On November 27, 2020, an internet shutdown was reported during protests led by artists, including the San Isidro Movement, who denounced repressive measures and censorship from the authorities.
On January 27, 2021, a two-hour internet outage was reported during protests in front of the Ministry of Culture, demanding an answer on the detention of some artists. The internet access remained intermittent the rest of the day.
On July 11, 2021, during massive Cuban protests demanding food, water, medicine, COVID-19 vaccines and other items, several platforms showed that internet traffic went down. The Inter-American Commission on Human Rights called for Cuba to fulfill its human rights obligations on the right to protest.
Other forms of digital censorship, like directed outages and blocking apps have also been reported in 2020 and 2021.
Digital Divides
Digital divides in access have particularly acute implications in Latin America and the Caribbean (LAC), as it remains the most socially unequal region in the world. The Caribbean is no exception: its countries have represented both the highest and the lowest levels of digital development. According to data from the International Telecommunication Union (ITU), by 2017 Barbados and St. Kitts and Nevis were leading the Americas' rankings of ICT access, following the United States and Canada. On the other hand, Belize, Guyana, Cuba and Haiti occupied the lowest positions of the regional charts.
Data from the Inter-American Development Bank (IADB) evidenced that the prevalence of the digital gap especially impacted the region's rural population by 2018/2019. The Caribbean countries with high-level connectivity in rural areas were the Bahamas, Barbados and Panama; at the mid-level were the Dominican Republic and Trinidad and Tobago, while at the lower level were Belize, Guyana and Jamaica. Not accessing the internet can have greater negative consequences for islands, especially after the COVID pandemic. Being offline can deepen the population's isolation, for example by reducing its possibilities to access information, be active in the labour market and receive aid from international agencies.
Gender based violence online
Research from the Web Foundation on Gender Based Violence (GBV) online presented how this issue can amplify digital divides among women and girls, with digital abuse being far worse among black and LGBTQ+ communities. According to research from The Economist, 38% of women globally have been subjected to digital violence, a figure the Web Foundation found rises to 52% among young women and girls. Examples of gender-based violence are identity theft, physical and sexual threats, doxxing and the sharing of non-consensual images. In relation to younger generations, Plan International conducted research in 2020 amongst more than 14,000 girls and young women from 31 countries. The study showcased that 58% had experienced online harassment and 50% indicated that they had faced more of these experiences online than on the street.
GBV online represents a greater challenge in the Caribbean. By 2015, four of its countries had the highest rates of sexual violence, and between 20 and 35% of women had experienced physical, sexual or psychological violence. Amongst the most notable cases was the experience of Becky Dundee, a Guyanese-American lesbian. Her TikTok and Twitter publications went viral through large numbers of negative comments criticizing her masculine appearance. Furthermore, research in Jamaica was especially illustrative of this situation: two-thirds of respondents had observed online harassment and 76% indicated that sexually related harassment was a major problem.
References
Digital divide
Internet access
Violence against women | Digital rights in the Caribbean | [
"Technology"
] | 1,470 | [
"Internet access",
"IT infrastructure"
] |
69,597,625 | https://en.wikipedia.org/wiki/Periglandula%20ipomoeae | Periglandula ipomoeae is a fungus of the genus Periglandula in the family Clavicipitaceae. It lives symbiotically with the plant Ipomoea asarifolia as an epibiont.
References
Clavicipitaceae
Fungi described in 2011
Fungus species | Periglandula ipomoeae | [
"Biology"
] | 63 | [
"Fungi",
"Fungus species"
] |
69,598,013 | https://en.wikipedia.org/wiki/Geum%20borisii | Geum borisii may refer to the following plants of the genus Geum:
in botanical literature:
Geum borisii = the hybrid Geum bulgaricum × Geum montanum
Geum borisii = the hybrid Geum bulgaricum × Geum reptans
in gardening literature:
Geum coccineum
Geum quellyon
References
borisii | Geum borisii | [
"Biology"
] | 77 | [
"Set index articles on plants",
"Set index articles on organisms",
"Plants"
] |
69,598,228 | https://en.wikipedia.org/wiki/Bayshore%20Resilience | Bayshore Resilience, also known as Bashor or Bashore Resilience, is a test to determine the ratio of the energy released in deformation recovery to the energy that caused the deformation, or an estimate of the energy-absorbing characteristics of a material in reference to another material. A ratio of 100 percent indicates a completely elastic pair of materials, while a ratio of 0 percent indicates a pair of completely energy-absorbent materials.
The ratio is determined by dropping a weighted ball onto the material to be measured, and then taking the ratio of the rebound height to the initial height. The ratio is also an indicator of hysteretic energy loss.
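As a simple illustration of the ratio just described, the worked example below uses made-up drop and rebound heights; the numbers do not come from any standard and are for illustration only.

```latex
% Resilience as the rebound-to-initial-height ratio; heights are illustrative only
R = \frac{h_{\text{rebound}}}{h_{\text{initial}}} \times 100\%,
\qquad \text{e.g.} \quad
R = \frac{10\ \text{cm}}{40\ \text{cm}} \times 100\% = 25\% .
```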
The test for Bayshore Resilience is used to test the elasticity of polyurethanes.
Under National Federation of State High School Associations regulations, the material on the edge of a basketball backboard must meet a Bayshore Resilience number of 20 to 30.
See also
Resilience (materials science)
References
Experimental physics | Bayshore Resilience | [
"Physics"
] | 200 | [
"Experimental physics"
] |
69,598,695 | https://en.wikipedia.org/wiki/Sarah%20Rowland-Jones | Sarah Rowland-Jones is a British physician who is a Professor of Immunology at the University of Oxford. She works on immune responses to HIV infection. She has focussed her research on problems caused by HIV in Africa, with a hope to create a successful HIV vaccine. She is the former president of the Royal Society of Tropical Medicine and Hygiene.
Early life and education
Rowland-Jones was a medical student at the University of Cambridge. She was a postgraduate student at the University of Oxford, where she specialised in infectious diseases. She was a junior doctor in London during the beginning of the AIDS epidemic, and became interested in HIV infection. Rowland-Jones returned to the University of Oxford where she was made a Medical Research Council Fellow. Her research considered immune response to HIV infection.
Research and career
Rowland-Jones remained at Oxford throughout her medical career. She worked as a clinician scientist, senior fellow and, eventually, professor. Her research considered the impact of HIV in African communities. She was particularly interested in the immune responses of populations who were uninfected but highly exposed, for example infants with HIV-positive mothers and sex workers. In particular, Rowland-Jones focussed on HIV–cytomegalovirus (CMV) coinfection and how this impacts pathogenesis. She was elected Fellow of the Academy of Medical Sciences in 2000.
Rowland-Jones was made Director of the Oxford Centre for Tropical Medicine in 2001. The centre coordinates research in tropical medicine in low-income countries. She moved to The Gambia in 2004, where she oversaw research in the MRC Laboratories of The Gambia. She started working on HIV-2 as a model of attenuated HIV infection, and the role this may play in infant immune response.
In 2008, Rowland-Jones returned to Oxford, where she was made a Professor of Immunology. She holds a joint position in the Department of Infection, Immunity and Cardiovascular Disease at the University of Sheffield.
In 2018, Rowland-Jones was appointed President Elect of the Royal Society of Tropical Medicine and Hygiene. She served as president from 2018 to 2019. In 2020 she was appointed editor of the journal AIDS.
Selected publications
References
Women physicians
British women medical doctors
Vaccinologists
Infectious disease physicians
Academics of the University of Oxford
Year of birth missing (living people)
Living people
Alumni of the University of Oxford | Sarah Rowland-Jones | [
"Biology"
] | 470 | [
"Vaccination",
"Vaccinologists"
] |
69,599,559 | https://en.wikipedia.org/wiki/List%20of%20movies%20filmed%20in%20outer%20space | Several documentaries and at least three feature films have been partially filmed in outer space.
See also
List of films featuring space stations
References
space
movies | List of movies filmed in outer space | [
"Astronomy"
] | 30 | [
"Outer space",
"Outer space lists"
] |
69,600,247 | https://en.wikipedia.org/wiki/Lowell%20Center%20for%20Space%20Science%20%26%20Technology | Lowell Center for Space Science & Technology (abbreviated as LoCSST) is a public research centre in Lowell, Massachusetts, affiliated with the University of Massachusetts Lowell. The research centre has partners and grants from organisations such as NASA, the National Science Foundation and the BoldlyGo Institute for its excellence in space science research.
Faculty
Supriya Chakrabarti, Ph.D., Professor at UMass Lowell and Director of the institute; faculty in Physics and Applied Physics.
Dimitris Christodoulou, Assistant Teaching Professor; faculty in Mathematical Sciences
Ofer Cohen, works on computational plasma physics, computational methods and magnetohydrodynamics
Timothy Cook, Associate Professor in Physics; works on Visible & ultraviolet instrumentation, Sounding rockets, Small satellites, Tomography & other novel data analysis techniques
Christopher Hansen, Chair, UMass Lowell SHAP3D Site Director, Associate Professor of Mechanical Engineering; works on Materials science, self-healing materials, additive manufacturing (i.e. 3D printing) techniques
Silas Laycock, Associate Professor in Physics; works on Neutron stars and black holes in X-ray binaries, pulsars, multi-wavelength astronomy, time-domain astrophysics.
Marianna Maiaru, Assistant Professor of Mechanical Engineering
Ramaswamy Nagarajan, Co-Director of HEROES; works on Biocatalysis, greener advanced materials (electronic, photo-responsive polymers, molecularly integrated hybrid nanomaterials, materials for energy conversion/storage), elastomers, thermal & morphological characterization of materials, roll to roll manufacture of flexible electronic products
Jay Weitzen, Professor of Electrical & Computer Engineering and Wireless Communication
Research
The institute works or has worked on numerous topics and projects.
Experimental astronomy and space physics.
Observational Astronomy
Research on X-ray Binary Pulsars, based
Black Hole High mass X-ray Binary
Computational Astrophysics and Space Physics
Projects
PICTURE C
PICTURE C stands for Planetary Imaging Concept Testbed Using a Recoverable Experiment – Coronagraph. NASA awarded Chakrabarti's team a $5.6 million grant to develop and test PICTURE C, the largest grant up to that point. The system can potentially detect young, Jupiter-size planets capable of supporting life orbiting other sun-like stars in the Milky Way, by studying the disk of dust, asteroids, planets and other debris orbiting those stars, and thereby gain a better understanding of the processes and dynamics that formed the solar system. PICTURE C is carried aloft to the edge of Earth's atmosphere using huge helium balloons. Under Dr. Chakrabarti, PICTURE C made its first test flight in September 2019. The 'balloon-lofted camera' instrument inflates to across and takes 3 hours to climb to an altitude of about and then hovers. The success of the PICTURE-C test launch makes space-based direct imaging a reality and provides technological support for NASA's Wide Field Infrared Survey Telescope (WFIRST). Members associated with the project are Supriya Chakrabarti, Timothy Cook, Kuravi Hewawasam, Susanna Finn and Christopher Mendillo. Other collaborators include NASA's Jet Propulsion Laboratory, Goddard Space Flight Center, Caltech, MIT, the Space Telescope Science Institute and the University of California Santa Barbara.
Project Blue
Project Blue aims to design, build and launch a small and lightweight space telescope to detect habitable planets around the nearest star system, Alpha Centauri. The LoCSST team develops instrumentation for direct imaging of exoplanets in this project. The project is also associated with The BoldlyGo Institute, SETI Institute and Mission Centaur.
SPACE HAUC project
The Science Program Around Communication Engineering with High Achieving Undergraduate Cadres (SPACE HAUC) project provides multi-disciplinary undergraduate students with hands-on training in designing and building space-flight missions. NASA released the Undergraduate Student Instrument Project (USIP) and Student Flight Research Opportunity (SFRO) in August 2015, and LoCSST sent a proposal that was accepted under the program. The project was selected in 2017 by NASA's CubeSat Launch Initiative (CSLI) to be launched as part of the ELaNa program.
Dr. Chakrabarti mentors more than 75 undergraduate students from the Kennedy College of Sciences and the Francis College of Engineering for the project. The team's goal is to design and build a small cube satellite that will be launched by NASA into orbit. NASA has awarded the team $200,000 to develop and test a prototype satellite, called SPACE HAUC-1, which is UMass Lowell's first mission to go around the Earth. The program is designed to demonstrate the practicality of communicating at high data rates in the X band.
SPACE HAUC was expected to launch in 2018, later postponed to 2020. The satellite has successfully passed design review and is in the testing phase. After final assembly and integration of the spacecraft, SPACE HAUC is expected to launch to the ISS for further deployment.
PICTURE B
IMAGERII
Facilities
The library contains source-specific event files (GTI and barycenter corrected), source images, light curves (in three different energy bands titled soft, broad and hard), spectra, periodograms, and pulse profiles from different projects. It also contains data from six X-ray telescope missions (~2000 pointed observations), namely XMM-Newton, RXTE, Chandra, NuSTAR, NICER and Suzaku, spanning a period of two decades up until 2019.
An astronomical observatory, the UMass Lowell Schueller Observatory, is located on South Campus.
References
External links
Official website
Research institutes in Massachusetts
Astronomy institutes and departments
Astrophysics research institutes
Organizations based in Lowell, Massachusetts | Lowell Center for Space Science & Technology | [
"Physics",
"Astronomy"
] | 1,150 | [
"Astronomy organizations",
"Astrophysics research institutes",
"Astrophysics",
"Astronomy institutes and departments"
] |
69,602,684 | https://en.wikipedia.org/wiki/Timeline%20of%20special%20relativity%20and%20the%20speed%20of%20light | This timeline describes the major developments, both experimental and theoretical, of:
Einstein’s special theory of relativity (SR),
its predecessors like the theories of luminiferous aether,
its early competitors, i.e.:
Ritz’s ballistic theory of light,
the models of electromagnetic mass created by Abraham (1902), Lorentz (1904), Bucherer (1904) and Langevin (1904).
This list also mentions the origins of standard notation (like c) and terminology (like theory of relativity).
Criteria for inclusion
Theories other than SR are not described here exhaustively, but only to the extent that is directly relevant to SR – i.e. at points when they:
anticipated some elements of SR, like Fresnel’s hypothesis of partial aether drag,
led to new experiments testing SR, like Stokes’s model of complete aether drag,
were disproved or questioned, e.g. by the experiments of Oliver Lodge.
For a more detailed timeline of aether theories – e.g. their emergence with the wave theory of light – see a separate article. Also, not all experiments are listed here – repetitions, even with much higher precision than the original, are mentioned only if they influence or challenge the opinions at their time. It was the case with:
Michelson and Morley (1886) repeating the experiment of Fizeau (1851), contradicting Michelson’s interpretation of his 1881 experiment;
Michelson–Morley (1887), more conclusive than the original experiment by Michelson (1881) and difficult to reconcile with their experiment of 1886, or other first-order measurements;
Kaufmann’s 1906 repetition of his 1902 experiment, because he claimed to contradict the model of Einstein and Lorentz, considered consistent with the data from 1902;
Miller (1933) or Marinov (1974), with results different than Michelson–Morley.
For lists of repetitions, see the articles of particular experiments. The measurements of speed of light are also mentioned only to the minimum extent, i.e. when they proved for the first time that c is finite and invariant. Innovations like the use of Foucault's rotating mirror or the Fizeau wheel are not listed here – see the article about speed of light.
This timeline also ignores, for reasons of volume and clarity:
the long story of spacetime and the concept of time as the fourth dimension; e.g. the ideas of Lagrange and Wells;
mathematical innovations that influenced the formalism of SR, e.g. the introduction of fibre bundles;
indirect evidence for SR, through the evidence for relativistic theories like general relativity or relativistic quantum mechanics;
publication of countless textbooks and popular science books or articles, even very influential classics like Mr Tompkins by George Gamow;
the cultural impact of SR, e.g. publication of documentaries or commemorations of SR during the World Year of Physics 2005;
new, untested theories modifying SR like Doubly special relativity or Variable speed of light.
Before the 19th century
1632 – Galileo Galilei writes about the relativity of motion and that some forms of motion are undetectable; this would be later called the relativity principle, essential for special relativity as one of its postulates.
1674 – Robert Hooke makes his observations of the Gamma Draconis star, or γ Draconis for short. He proves a variation in its position on the sky, which would be later identified as stellar aberration.
1676 – Ole Rømer gives the first piece of evidence that the speed of light is finite, through his observation of the moons of Jupiter; the discovery divides scientists of his time.
1690 – Christiaan Huygens gives the first estimate of the speed of light in air or vacuum, based on Rømer’s work. The result is equivalent to about 2×10⁸ m/s in modern units, correct only to the order of magnitude.
1727 – James Bradley correctly identifies the peculiar behaviour of γ Draconis as stellar aberration. Bradley uses this fact to estimate the speed of light in air or vacuum, and his result is more accurate than Huygens’s: about 3.0×10⁸ m/s in modern units. For the first time, the measurement is correct to the first two significant figures.
19th century
Before 1880s
1810 – François Arago observes that the speed of light of stars – measured with stellar aberration – may be independent of the relative motion of stars and the Earth; or at least, no differences are observable with a naked eye.
1818 – Augustin-Jean Fresnel proposes his model of partial aether dragging to explain Arago’s finding.
1845 – George Gabriel Stokes creates his own model of complete aether dragging.
1851 – The Fizeau experiment with light in flowing water confirms Fresnel’s model.
1861 – James Clerk Maxwell publishes his equations of the electromagnetic field, which had a great impact on the later works on aether and special relativity.
1868 – Martinus Hoek modifies the experiment of Fizeau, with the same conclusions.
1871 – George Biddell Airy observes the stellar aberration in a telescope filled with water, confirming Fresnel’s model and contradicting Stokes’s.
1880s
1881 – Albert Michelson performs his original interferometric experiment. It detects no aether wind, contradicting Fresnel’s model in favour of Stokes’s.
1885 – Ludwig Lange introduces the idea of inertial frame of reference. It is essential to relativity as an element of the modern formulation of the relativity principle.
1886 – Albert Michelson and Edward Morley repeat the Fizeau experiment with higher precision, confirming its result and contradicting the earlier conclusions of Michelson.
1887 – Woldemar Voigt publishes his coordinate transformations preserving the wave equation. They are very similar – but not equivalent – to the later Lorentz transformations.
1887 – the Michelson–Morley experiment fails to detect aether wind, disproving some aether theories and leading to new ones.
1889 – George FitzGerald conjectures the length contraction to explain the Michelson–Morley experiment.
1890s
1892 – Hendrik Lorentz – independently of FitzGerald – proposes the same explanation, with a formula only approximating the special-relativistic length contraction to the first order.
1893 – Oliver Lodge makes an interferometric experiment questioning the aether drag hypothesis.
1894 – Paul Drude introduces the symbol c for speed of light in vacuum.
1895 – Hendrik Lorentz corrects his 1892 model, proposing a contraction by the Lorentz factor (γ).
1895 – Albert Einstein probably makes his thought experiment about chasing a light beam, later relevant to his work on special relativity.
1897 – Oliver Lodge publishes another experimental result questioning aether drag.
1897 – Joseph Larmor publishes his coordinate transformations extending the length contraction formula. These transformations imply a form of time dilation and were an approximation of the full Lorentz transformations.
1898 – Henri Poincaré states that simultaneity is relative.
1899 – Hendrik Antoon Lorentz publishes an early version of his coordinate transformations, including the local time.
20th century
1900s
1902 – Lord Rayleigh writes that Lorentz’s hypothesis of length contraction predicts a form of birefringence and tries to observe it. The null result questions Lorentz’s model, but it would be later explained by a combination of length contraction and time dilation.
1902 – Max Abraham develops his classical model of the electron. It anticipated some elements of special relativity like the non-linear dependence of momentum on velocity – or, in other, more debatable terms, the relativistic mass. However, Abraham’s formula was different than in SR or in Lorentz’s theory.
1902 – Walter Kaufmann publishes his measurements of how the electron’s momentum – or, using later terms, its relativistic mass – depends on its speed. The results seem to confirm Abraham’s model.
1903 – Olinto De Pretto presents his aether theory with some form of mass–energy equivalence. It was described by a formula looking like Einstein’s E = mc², but with different meanings of the terms.
1903 – Frederick Thomas Trouton and H.R. Noble publish the results of their experiment with capacitors, showing no aether drift.
1904 – DeWitt Bristol Brace conducts an improved version of Rayleigh’s 1902 experiment, again with null result.
1904 – Hendrik Lorentz explains the experimental results of Rayleigh, Brace, Trouton and Noble, using his refined coordinate transformations; he also proves that Maxwell’s equations are invariant under them. Lorentz also presents his own classical model of the electron, including the length contraction absent in the work of Abraham – but consistent with Kaufmann’s data so far.
1904 – Alfred Bucherer and Paul Langevin independently publish a model of the electron and its mass increasing with speed, in a way different both from Abraham’s and Lorentz’s theories. This hypothesis was also consistent with Kaufmann’s results at that stage.
1904 – Henri Poincaré presents the principle of relativity for electromagnetism.
1905 – Poincaré introduces the name Lorentz transformations and is the first to present them in their full form that would be later present in Einstein’s special relativity proper. Also, Poincaré is the first to describe the relativistic velocity-addition formula – implicitly in his publication and explicitly in his letter to Lorentz.
1905 – Albert Einstein publishes his special theory of relativity, including the mass–energy equivalence that would be later written as E = mc².
1906 – Alfred Bucherer introduces the name theory of relativity, based on Max Planck’s term relative theory.
1906 – Walter Kaufmann publishes his new measurements of the mass–velocity dependence, and claims to disprove the formula of Lorentz and Einstein. At the same time, he accepts that both the old model of Abraham (1902) and the later model of Bucherer & Langevin (1904) are consistent with the data.
1907 – Max Von Laue describes how the relativistic velocity-addition formula recreates the Fresnel drag coefficients.
1908 – Hermann Minkowski publishes his spacetime formalism of special relativity.
1908 – Frederick Thomas Trouton and Alexander Rankine conduct an experiment with electric circuit, proving that the length contraction is not the only relativistic effect and some form of time dilation is present – similarly to the previous experiments by Rayleigh (1902) and Brace (1904).
1908 – Walther Ritz publishes his ballistic theory of light as an alternative to special relativity and Maxwell’s electrodynamics.
1909 – Paul Ehrenfest publishes the Ehrenfest paradox about rigidity in special relativity.
1909 – Gilbert N. Lewis and Richard Tolman coin the disputed term relativistic mass.
1910s
1910 – Vladimir Ignatowski makes the first derivations of Lorentz transformations that rely mostly – and almost entirely – on the relativity principle, without appealing to Maxwell’s equations; such derivations are sometimes called single-postulate.
1910 – Edmund Taylor Whittaker and Vladimir Varićak introduce the idea of rapidity, but without using this name.
1911 – Alfred Robb coins the term rapidity.
1911 – Paul Langevin presents the twin paradox implied by time dilation.
1911 – Max von Laue writes that special relativity and Lorentz aether theory predict the Sagnac effect, absent in Ritz's ballistic theory or in Stokes's theory of aether drag.
1913 – Georges Sagnac observes the effect named after him, disproving Ritz's ballistic theory or aether drag. However, he favours Lorentz's model and even claims – incorrectly – to contradict SR.
1913 – Willem de Sitter describes how the light of double stars contradicts Ritz’s ballistic theory of light.
1914 – Ludwik Silberstein gives the first description of Thomas–Wigner rotation, then underappreciated.
1914 – Günther Neumann measures the mass–velocity dependence for electrons. His result favours the formula of Lorentz & Einstein over the one by Abraham.
1915 – Charles-Eugène Guye and Charles Lavanchy make their own measurements of the inertia of cathode rays, much more exact than the earlier research by Kaufmann. Their conclusion is opposite to Kaufmann’s – again, the formula of Lorentz and Einstein is correct and Abraham’s model is disproved.
1920s and 1930s
1924 – Hans Thirring notices that ballistic theories of light contradict spectroscopic observations of the Sun.
1924 – Anton Lampa predicts a relativistic effect later known as Penrose–Terrell rotation.
1925 – the Michelson–Gale–Pearson experiment tests the Sagnac effect caused by the Earth’s rotation. The result disproves any aether drag; in combination with other experiments – disproving the stationary aether like the Michelson–Morley experiment – it proves the Lorentz transformations correct.
1925 – Llewelyn Thomas discovers Thomas precession, which can be explained by the effect described earlier by Silberstein and later by Wigner.
1928 – Paul Dirac describes the general energy–momentum relation, extending the equivalence of mass and energy.
1932 – Kennedy–Thorndike experiment confirms the Lorentz transformations in a new way, complementary to the Michelson–Morley experiment. These two results, if combined, prove some form of time dilation.
1932 – John Cockcroft and Ernest Walton prove the mass–energy equivalence via a nuclear reaction.
1933 – Dayton Miller conducts an improved form of the Michelson–Morley experiment, claiming to contradict special relativity. It would be later explained consistently with SR in the 1950s.
1935 – the Hammar experiment is another refutation of aether drag and evidence for special relativity.
1938 – Ives–Stilwell experiment measures time dilation via the relativistic Doppler effect. For the first time, the Lorentz transformations can be derived directly from empirical data, as would be noticed by Robertson in 1949.
1939 – Eugene Wigner rediscovers that SR predicts the Thomas–Wigner rotation.
After 1930s
1940 – Bruno Rossi and D.B. Hall observe time dilation in cosmic rays, i.e. in the decay of muons.
1949 – Howard P. Robertson notices that the Lorentz transformations can be deduced (extracted) from three key experiments: Michelson–Morley, Kennedy–Thorndike and Ives–Stilwell.
1954 – Gerhart Lüders and Wolfgang Pauli prove that the Lorentz invariance in quantum field theories implies the CPT symmetry, allowing for new tests of special relativity.
1955 – Robert S. Shankland and others explain Miller’s experimental result from 1933 in a way consistent with special relativity.
1959 – Roger Penrose and James Terrell independently publish their rediscovery that SR predicts the Penrose–Terrell effect.
1959 – E. Dewan and M. Beran publish the thought experiment known as Bell's spaceship paradox.
1960 – Vernon W. Hughes et al. perform a spectroscopic experiment, later interpreted as evidence for the Lorentz invariance of particle interactions.
1961 – Ronald Drever independently conducts a similar experiment with the same conclusions.
1961 – Wolfgang Rindler presents and solves the ladder paradox.
1967 – Gerald Feinberg introduces the term tachyon for hypothetical particles with speeds higher than that of light in vacuum (c).
1971 – The Hafele–Keating experiment confirms time dilation predicted by special & general relativity.
1974 – Stefan Marinov claims to contradict special relativity by measuring a variation in c. His results are noted by the scientific community but rejected as incorrect.
1983 – the speed of light in vacuum (c) is used to define the metre in the SI system of units; the definition does not mention any frame of reference, assuming this speed is universal, and implicitly that special relativity is correct.
21st century
2011 – Faster-than-light neutrino anomaly is reported by CERN.
2012 – the anomaly in neutrino speed is explained by a failure of the equipment; this reason is officially reported.
See also
History of Lorentz transformations
Timeline of gravitational physics and relativity
References
Further reading
Andrzej Kajetan Wróblewski, Einstein and Physics hundred years ago, Acta Physica Polonica B, Vol. 37 (2006). Retrieved 2021-12-28.
Special relativity | Timeline of special relativity and the speed of light | [
"Physics"
] | 3,410 | [
"Special relativity",
"Theory of relativity"
] |
69,602,943 | https://en.wikipedia.org/wiki/Wiz%20%28company%29 | Wiz, Inc. is an American cloud security startup headquartered in New York City. The company was founded in January 2020 by Assaf Rappaport, Yinon Costica, Roy Reznik, and Ami Luttwak, all of whom previously founded Adallom. Rappaport is CEO, Costica is VP of Product, Reznik is VP of Engineering, and Luttwak is CTO. The company's platform analyzes computing infrastructure hosted in Amazon Web Services, Microsoft Azure, Google Cloud Platform, Oracle Cloud Infrastructure, and Kubernetes for combinations of risk factors that could allow malicious actors to gain control of cloud resources and/or exfiltrate valuable data.
Wiz employed about 1,995 people, with most sales and marketing personnel scattered across North America and Europe while most engineering personnel are based in Tel Aviv, Israel. In August 2022, Wiz claimed to be the fastest startup ever to scale from $1 million to $100 million in annual recurring revenue (ARR), from February 2021 to approximately July 2022. In February 2024, the company claimed to have reached $350M in ARR, with a 45% market share of Fortune 100 companies.
History
Wiz was founded in January 2020 by Assaf Rappaport, Yinon Costica, Roy Reznik, and Ami Luttwak, all of whom previously founded Adallom.
Wiz agreed to acquire Tel Aviv-based Raftt, a cloud-based developer collaboration platform, for $50 million in December 2023. In April 2024, the company acquired cloud detection and response startup, Gem Security, for around $350 million. Also that month, reports indicated that Wiz intended to purchase Lacework, but in May the deal fell through during the due diligence process. In November 2024, the company acquired security remediation and risk management startup Dazz for a cash-and-share deal valued at $450 million.
In 2024, it was reported that Google was in talks to buy Wiz at a reported valuation of $23 billion, but Wiz turned down the offer, in favor of going public.
Funding
Wiz has raised a total of $1.9 billion from a combination of venture capital funds and private investors:
Series A — In December 2020, Wiz emerged from stealth by raising $100 million from Index Ventures, Sequoia Capital, Insight Partners and Cyberstarts.
Series B — In April and May 2021, Wiz raised $130 million and $120 million (respectively) on a $1.7 billion valuation from , Index Ventures, Sequoia Capital, Insight Partners, and Cyberstarts.
Series C — In October 2021, Wiz raised $250 million on a $6 billion valuation led by Greenoaks, and with participation from Insight Partners, Capital, Sequoia Capital, Salesforce Ventures, and CyberStarts, and individual investors Bernard Arnault and Howard Schultz.
Series D — In February 2023, Wiz raised $300 million on a $10 billion valuation led by venture capital fund Greenoaks Capital, with participation from Lightspeed Venture Partners, along with individual investors including Bernard Arnault and Howard Schultz.
Series E — In May 2024, Wiz raised $1 billion on a $12 billion valuation from Andreessen Horowitz, Lightspeed Venture Partners, Thrive Capital, Greylock Partners, Wellington Management, Cyberstarts, Greenoaks, Index Ventures, Salesforce Ventures, Sequoia Capital and Howard Schultz.
Research
Wiz researchers have discovered and responsibly disclosed numerous cloud vulnerabilities that garnered significant media coverage:
ChaosDB – A series of flaws in Microsoft Azure's Cosmos DB that made it possible to download, delete, or manipulate databases belonging to thousands of Azure customers.
OMIGOD – Bugs in Open Management Infrastructure (OMI), a ubiquitous but poorly documented agent embedded in many popular Azure services, that allowed for unauthenticated remote code execution and privilege escalation.
NotLegit – Insecure default behavior in the Azure App Service that exposed the source code of some customer applications.
ExtraReplica – A chain of critical vulnerabilities found in the Azure Database for PostgreSQL Flexible Server that could let malicious users escalate privileges and gain access to other customers' databases after bypassing authentication.
AttachMe – A cloud isolation vulnerability that, before it was patched by Oracle Cloud Infrastructure, could have allowed attackers to access and modify other users' OCI storage volumes without authorization.
Hell's Keychain – A first-of-its-kind cloud service provider supply-chain vulnerability in IBM Cloud Databases for PostgreSQL that, before it was patched, could have allowed malicious actors to remotely execute code in victims' environments.
BingBang – A misconfiguration in Azure Active Directory (AAD) that allowed Wiz researchers to modify Bing.com search results in a way that malicious actors could use to steal Office 365 credentials granting access to countless users' private emails and documents.
References
American companies established in 2020
Computer security companies
Security companies of Israel
Technology companies of Israel | Wiz (company) | [
"Technology"
] | 1,060 | [
"Announced information technology acquisitions",
"Information technology"
] |
69,604,055 | https://en.wikipedia.org/wiki/H%C3%BCtte | The (originally , stylized as "HÜTTE" and pronounced ) is a reference work for engineers of various disciplines. It was compiled for the first time in 1857 by the (short , translating as "the hut") of the in Berlin, from which the association of German engineers Verein Deutscher Ingenieure (VDI) emerged. The authors were members of the association. The technical illustrations were created in woodcut technique by . It is published in constantly revised editions to this day and is therefore the oldest German reference work still available today.
First edition 1857 and bibliophile reprint 2007
The book was initially divided into three sections: (Mathematics and Mechanics), (Mechanical Engineering and Technology) and (Building Science) and was originally published by the publishing house , the later , who published it until 1971.
For the 150th anniversary in 2007, the first edition was reissued as a bibliophile reprint.
Historical development
Starting with the first edition in 1857, further book series have been developed over the decades. The reference work quickly developed into a standard work for engineers and was frequently reprinted and translated into other languages due to the great demand. The first translation ever was into Russian in 1863. In 1890, the work was divided into two, in 1908 into three, and finally in 1922 into four volumes. With the 27th edition in 1949, volume four was no longer available.
An English version was published by McGraw-Hill Book Co., New York, in 1916 as "Mechanical engineers' handbook, based on the Hütte and prepared by a staff of specialists", edited by Lionel Simeon Marks. This led to Marks' Standard Handbook for Mechanical Engineers, a work which spawned several translations of its own and is continued up to the present, with its 100th anniversary 12th edition published in 2017.
Another work initially influenced by the 1936 Russian translation of the 1931 edition of Hütte is the so-called Bronshtein and Semendyayev (BS) handbook of mathematics. Written in 1939/1940 in Russia, it was first published in 1945. Translated into German in 1958, the latter is maintained and, in turn, translated into many other languages up to the present (2020).
Recent history
In the first years after the Second World War the work was temporarily relocated both to the Federal Republic of Germany (FRG) and German Democratic Republic (GDR).
The scientific Springer-Verlag has been publishing the book since 16 June 1971 and has been the publisher of all handbooks so far. For some time, the series was called "" (Technical Pocket Books). From the completely revised 29th edition in 1989 (volume editor ), the work once again appeared in one volume under the title "" (The Basics of Engineering). With the 32nd edition, it was renamed to the current title "" (Engineering Knowledge).
For the 150th anniversary in 2007, the 33rd edition of the book was published. It was updated to reflect the current state of science and technology and meet the curricula of technical universities and technical colleges. It comprises the following sections:
Mathematical and scientific basics
A. Math and statistics
B. Physics
C. Chemistry
Technological basics
D. Materials
E. Engineering mechanics
F. Technical thermodynamics
G. Electrical engineering
H. Measurement technology
I. Regulation and control technology
J. Computer engineering
Basics for products and services
K. Development and construction
L. Production
Economic and legal basics
M. Business administration
N. Management
O. Standardization
P. Legal
Q. Patents
The 34th edition was published in 2012, and the 35th edition was planned for 2020 and is now scheduled to be released in 2023.
See also
Bronshtein and Semendyayev (BS)
Marks' Standard Handbook for Mechanical Engineers
Notes
References
Further reading
External links
Akademischer Verein Hütte e. V. Berlin
Mechanical engineering
1857 non-fiction books
19th-century German literature
20th-century German literature
21st-century German literature
Handbooks and manuals | Hütte | [
"Physics",
"Engineering"
] | 804 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
69,604,199 | https://en.wikipedia.org/wiki/Space%20industry%20of%20Scotland | In May 2021, the space industry of Scotland consisted of 173 space companies operating across Scotland. These include spacecraft manufacturers, launch providers, downstream data analyzers, and research organisations. Space Scotland, the national industry body, said that the space industry in Scotland contributes in excess of £4 billion to the Scottish economy.
Recognised as a European leader in space technology, Scotland builds more satellites than any other European country. Space Scotland claim that this is possible due to "entrepreneurialism, technical expertise in miniaturisation of satellites and support from Scottish universities". Scotland's space industry and its representative body, Space Scotland, contribute research and projects to other agencies, including NASA and the European Space Agency.
In 2017/18 it was estimated that the space industry in Scotland employed approximately 8,000 people, with an annual growth rate of 12% between 2013 and 2018. London Economics published a report projecting £2 billion in income for Scotland's space cluster by 2030. Scottish space industry jobs represent almost 1 in 5 of all UK space industry employment.
Scottish Space Groups
Space Scotland
Space Scotland (formerly Scottish Space Leadership Council) is an industry coordinating body created to promote the Scottish space industry. In conjunction with Scottish Space Academic Forum and The Scottish Government, they have published A Strategy for Space in Scotland 2021 for the continued development of the Scottish space industry over the next decade. It acts as a "single voice" to collectively represent the wider Scottish space industry organisations, such as the Scottish Space Academic Forum and the Scottish Government Space Group.
The Scottish Space Group was established by Space Scotland to facilitate the resources that Space Scotland required in order to "enhance, accelerate and improve sectoral growth". The organisation is committed to the further expansion of its operations, claiming that it will "provide in-depth industry perspectives, guidance and advice to support the future growth and success of “Team Scotland”". To achieve this, it will "continue to be an advocate for collective Scottish business interests".
Scottish Space Academic Forum (SSAF)
The Scottish Space Academic Forum (SSAF) was established "as an initiative and management forum aimed at ensuring alignment across research, development and education to provide support to maximise the research potential of Team Scotland for world class innovation and economic growth". The relationship between the space industry of Scotland and academic research was seen as a crucial part of the industries success, "in terms of accelerating the technical readiness levels of innovation". It works in partnership with other agencies, including Skills Development Scotland to ensure future growth in the sector by facilitating development of appropriate skills and experience required to work within Scotland's space sector.
Scottish Government Space Group
The Scottish Government launched the cross–agency initiative, the Scottish Government Space Group, which collects the Scottish Government and the Enterprise and Skills agencies of the government together in order to facilitate discussions that will continue to develop Scotland's space sector and associated industry. The Scottish Government recognise space and the associated space industry, exploration, research, development and manufacturing of space craft, as a key component to Scotland's future economic growth.
The creation of the Scottish Government Space Group ensures the Scottish Government "will work closely with the sector and beyond to ensure that appropriate infrastructure and investment across the public and private sectors is provided to enable growth and enhance employment opportunities".
Public sector bodies
Scotland's space industry, and the work of Space Scotland, is supported by a number of public sector bodies and agencies, including:
UK Space Agency
Science and Technology Facilities Council
Royal Observatory Edinburgh
UK Astronomy Technology Centre
Higgs Centre for Innovation
Satellite Applications Catapult
European Space Agency
Scottish Environment Protection Agency (SEPA)
Scottish Association for Marine Science
Marine Scotland
Marine Alliance for Science and Technology for Scotland (MASTS)
Space Centres in Scotland
Higgs Centre for Innovation
The Higgs Centre for Innovation was created by the Science and Technology Facilities Council at the Royal Observatory Edinburgh to incubate space startups, provide the sector with facilities for building and demonstrating space technologies, and to give doctoral candidates startup and entrepreneurial experience. The facilities include cleanrooms, cryostats, vibration shaker tables, thermal chambers, and EMC testing facilities. The Higgs Centre is one of four ESA Business Incubation Centres in the UK.
Bayes Centre
The Bayes Centre, at the University of Edinburgh, hosts a coordinating hub for space and satellite data science activities that brings together academia, NGOs, the space industry, and governmental organisations with a focus on commercializing university research.
Ground Stations
There are several ground stations in Scotland with the capability to transmit and/or receive data from polar orbiting satellites (geostationary satellites are not included, as the smaller infrastructure they require means they are not restricted to industry). Some ground stations are for research, some are commercial, such as Dundee Satellite Station, and some military, such as QinetiQ at West Freugh. Commercial ground stations include:
Dundee Satellite Station
Dundee Satellite Station was founded in the 1970s and is now located in Errol, Perthshire with a selection of antennas for X-band, L-band and S-band capability.
Spaceports
There are multiple spaceports in varying phases of development in Scotland. Two Scottish spaceports, SaxaVord and Sutherland, were scheduled to have their first launches in 2022. The date for SaxaVord has been pushed back to 2024 while Sutherland is still under construction.
SaxaVord
SaxaVord Spaceport is located on the isle of Unst, in the Shetland Islands. It is planned to host Lockheed Martin's first rocket launches as well as Cumbernauld-based Skyrora's launches.
Sutherland Space Hub
Sutherland spaceport is located in the north of the Scottish mainland. It currently has six launch contracts with rocket maker Orbex which is headquartered in Forres, Scotland.
Space Data Companies
Omanos Analytics
Omanos Analytics, based in Glasgow, combines earth observation data with ground source data to track operations of infrastructure projects such as mining, logging, and rubber plantations. These are monitored for their environmental and community impact, especially in hostile and low-infrastructure regions with the goal of supporting sustainable development.
Ecometrica
Ecometrica, with offices in Edinburgh, has developed an end-to-end environmental SaaS whose purpose is to analyze earth observation data combined with on-the-ground data collection sources to identify risks and opportunities for their customers. The software assists sustainability planning, operations and reporting.
Space Intelligence
Space Intelligence, based in Edinburgh, uses machine learning on remote sensing satellite data to classify landscapes, especially around deforestation and forest degradation, to provide businesses seeking to reduce their environmental impact with actionable data.
Trade in Space
Trade in Space, based in Edinburgh, uses satellite data to create smart contracts via the blockchain in real time for commodities such as coffee.
Carbomap
Carbomap, based in Edinburgh, builds tools to analyze and develop insights from environmental data from remote sensing satellites and UAVs. They work with governments, NGOs, and research institutes to map out forests and monitor deforestation.
EarthBlox
EarthBlox, based in Edinburgh, produces a no-code SaaS interface to obtain and analyze data from remote sensing satellites for applications ranging from flood damage, crop production, and climate change.
Bird.i
Bird.i, based in Glasgow, uses satellite data to provide businesses with monitoring of infrastructure projects such as mining, oil and gas, and construction. It was acquired in April 2020 by Zonda.
Rocket makers
Skyrora
Skyrora, based in Cumbernauld, builds rockets suited for the launch of small satellites. The Skyrora XL rocket is intended to launch payloads of up to 315 kg into a Sun-synchronous orbit between 500 and 1000 km or a polar orbit between 200 and 1000 km. Their first scheduled launch is in 2023.
Orbex
Orbex, based in Forres (about 25 miles northeast of Inverness), is developing a rocket called Prime that is intended to launch nano satellites into a polar orbit. The first launch is targeted for the end of 2022.
References
Scotland
Industry in Scotland
Science and technology in Scotland | Space industry of Scotland | [
"Astronomy"
] | 1,629 | [
"Space industry",
"Outer space"
] |
69,604,567 | https://en.wikipedia.org/wiki/DCAF11 | DDB1- and CUL4-associated factor 11 also known as WD Repeat Domain 23 (WDR23) is a protein that in humans is encoded by the DCAF11 gene.
DCAF11 is a WD40 repeat protein, containing seven repeats of the closed circular solenoid protein domain WD40. WDR-23 exists in two spatially distinct isoforms produced by alternative splicing, a cytoplasmic WDR-23A and nuclear WDR-23B. Nuclear and cytoplasmic versions of WDR-23 have distinct roles.
References
Further reading
EC 6.3.2
Protein tandem repeats | DCAF11 | [
"Chemistry",
"Biology"
] | 134 | [
"Biochemistry stubs",
"Protein tandem repeats",
"Protein stubs",
"Protein classification"
] |
69,605,663 | https://en.wikipedia.org/wiki/Zeldovich%E2%80%93Taylor%20flow | Zeldovich–Taylor flow (also known as the Zeldovich–Taylor expansion wave) is the fluid motion of gaseous detonation products behind a Chapman–Jouguet detonation wave. The flow was described independently by Yakov Zeldovich in 1942 and G. I. Taylor in 1950, although Taylor carried out the work in 1941 and it was circulated within the British Ministry of Home Security. Since naturally occurring detonation waves are in general Chapman–Jouguet detonation waves, the solution becomes very useful in describing real-life detonation waves.
Mathematical description
Consider a spherically outgoing Chapman–Jouguet detonation wave propagating with a constant velocity . By definition, immediately behind the detonation wave, the gas velocity is equal to the local sound speed with respect to the wave. Let be the radial velocity of the gas behind the wave, in a fixed frame. The detonation is ignited at at . For , the gas velocity must be zero at the center and should take the value at the detonation location . The fluid motion is governed by the inviscid Euler equations
where is the density, is the pressure and is the entropy. The last equation implies that the flow is isentropic and hence we can write .
Since there are no length or time scales involved in the problem, one may look for a self-similar solution of the form , where . The first two equations then become
where prime denotes differentiation with respect to . We can eliminate between the two equations to obtain an equation that contains only and . Because of the isentropic condition, we can express , that is to say, we can replace with . This leads to
For polytropic gases with constant specific heats, we have . The above set of equations cannot be solved analytically, but has to be integrated numerically. The solution has to be found for the range subject to the condition at
The function is found to decrease monotonically from its value to zero at a finite value of , where a weak discontinuity exists (that is, the function is continuous, but its derivatives may not be). The region between the detonation front and the trailing weak discontinuity is the rarefaction (or expansion) flow. Interior to the weak discontinuity, everywhere.
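A minimal numerical sketch of this integration is given below. It assumes the standard self-similar form of the spherical, isentropic Euler equations for a polytropic gas written in the variable xi = r/t, together with the strong Chapman–Jouguet values u1 = D/(gamma+1) and c1 = gamma*D/(gamma+1) immediately behind a front moving at speed D; the value gamma = 1.4 and the variable names are assumptions made for the example, not quantities taken from the article.

```python
from scipy.integrate import solve_ivp

gamma = 1.4   # illustrative specific-heat ratio (an assumption for this sketch)
D = 1.0       # detonation speed, used as the velocity scale

# Strong Chapman-Jouguet state immediately behind the front:
# gas speed u1 and sound speed c1 satisfy u1 + c1 = D (sonic condition).
u1 = D / (gamma + 1.0)
c1 = gamma * D / (gamma + 1.0)

def rhs(v, y):
    """Self-similar spherical flow with xi = r/t, written with the gas
    velocity v as the independent variable (this keeps the detonation
    front a regular point of the system)."""
    xi, c = y
    dxi_dv = xi * (c**2 - (xi - v)**2) / (-2.0 * v * c**2)
    dc_dv = (gamma - 1.0) * (xi - v) / (2.0 * c)
    return [dxi_dv, dc_dv]

# Integrate from the front (v = u1, xi = D, c = c1) inward towards v -> 0,
# i.e. towards the trailing weak discontinuity.
sol = solve_ivp(rhs, (u1, 1e-6 * u1), [D, c1], rtol=1e-9, atol=1e-12)

xi_star, c_star = sol.y[0, -1], sol.y[1, -1]
print(f"weak discontinuity near xi/D = {xi_star:.4f}; local sound speed c/D = {c_star:.4f}")
```

Using the gas velocity as the integration variable avoids the square-root singularity at the front; the point where the velocity tends to zero marks the trailing weak discontinuity, and the two printed values should agree there, which provides a simple consistency check on the sketch.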
Location of the weak discontinuity (Mach wave)
From the second equation described above, it follows that when , . More precisely, as , that equation can be approximated as
As , and if decreases as . The left-hand side of the above equation can become positively infinite only if . Thus, when decreases to the value , the gas comes to rest (here is the sound speed corresponding to ). Hence, the rarefaction motion occurs for and there is no fluid motion for .
Behavior near the weak discontinuity
Rewrite the second equation as
In the neighborhood of the weak discontinuity, keeping quantities to first order (such as ) reduces the above equation to
At this point, it is worth mentioning that, in general, disturbances in gases propagate with respect to the gas at the local sound speed. In other words, in the fixed frame, the disturbances propagate at the speed (the other possibility is , although it is of no interest here). If the gas is at rest , then the disturbance speed is . This is just normal sound-wave propagation. If, however, is non-zero but small, then one finds the correction for the disturbance propagation speed, as obtained using a Taylor series expansion, where is the Landau derivative (for an ideal gas, , where is the specific heat ratio). This means that the above equation can be written as
whose solution is
where is a constant. This determines implicitly in the neighborhood of the weak discontinuity, where is small. This equation shows that at , , , but all higher-order derivatives are discontinuous. In the above equation, subtract from the left-hand side and from the right-hand side to obtain
which implies that if is a small quantity. It can be shown that the relation not only holds for small , but throughout the rarefaction wave.
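For reference, the first-order relation used in this subsection for the propagation speed of a small disturbance can be written out explicitly (a reconstruction in standard notation, where v is the gas velocity, c_3 the sound speed of the gas at rest inside the weak discontinuity, and alpha_L the Landau derivative; these symbol choices are made here for illustration only):

$$ v + c \;\approx\; c_3 + \alpha_L\, v, \qquad \alpha_L = \frac{\gamma + 1}{2} \quad \text{(ideal gas)}. $$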
Behavior near the detonation front
First let us show that the relation is not only valid near the weak discontinuity, but throughout the region. If this inequality were not maintained, then there would have to be a point where between the weak discontinuity and the detonation front. The second governing equation implies that at this point must be infinite or, . Let us obtain by taking the second derivative of the governing equation. In the resulting equation, impose the condition to obtain . This implies that reaches a maximum at this point, which in turn implies that cannot exist for greater than the maximum point considered, since otherwise would be multi-valued. The maximum point can at most correspond to the outer boundary (the detonation front). This means that can vanish only on the boundary; since it has already been shown that is positive near the weak discontinuity, is positive everywhere in the region except at the boundaries, where it can vanish.
Note that near the detonation front, we must satisfy the condition . The value evaluated at for the function , i.e., is nothing but the velocity of the detonation front with respect to the gas behind it. For a detonation front, the condition must always be met, with the equality sign representing Chapman–Jouguet detonations and the inequality representing over-driven detonations. The analysis describing the point must therefore correspond to the detonation front.
See also
Taylor–von Neumann–Sedov blast wave
Guderley–Landau–Stanyukovich problem
References
Flow regimes
Fluid dynamics
Combustion
Hyperbolic partial differential equations | Zeldovich–Taylor flow | [
"Chemistry",
"Engineering"
] | 1,181 | [
"Chemical engineering",
"Combustion",
"Flow regimes",
"Piping",
"Fluid dynamics"
] |
69,605,961 | https://en.wikipedia.org/wiki/Government%20Chief%20Scientific%20Adviser%20%28Ireland%29 | The Irish Chief Scientific Adviser (CSA) is an adviser on science and technology to the Government of Ireland. The role was created in 2004 and was to operate independently of government departments, but report to the Cabinet Committee on Science and Technology.
History
In 2004, Barry McSweeney was appointed as the first CSA. Following a controversy about the awarding of his PhD, McSweeney resigned in November 2005.
In January 2007, Patrick Cunningham was announced as the new CSA. He served a five-year term and hosted the 2012 EuroScience Open Forum meeting in Dublin.
In 2012, following the retirement of Cunningham, the separate office of CSA was abolished, and the role was given to Mark Ferguson, then director general of Science Foundation Ireland. Ferguson was reappointed in 2017 for a further five years. During this period, academics, politicians and others highlighted the limitations of having a non-independent science advisor.
Following the announcement of Philip Nolan's upcoming appointment as director general of Science Foundation Ireland, Simon Harris announced that the CSA was to be reinstated as an independent role.
List of Government Chief Scientific Advisers
Barry McSweeney (2004-2005)
Patrick Cunningham (2007-2012)
Mark Ferguson (2012-2022)
Aoife McLysaght (2024-present)
References
Ireland
Science and technology in the Republic of Ireland | Government Chief Scientific Adviser (Ireland) | [
"Technology"
] | 271 | [
"Scientists in technology assessment and policy",
"Chief scientific advisers by country"
] |
69,605,977 | https://en.wikipedia.org/wiki/Barbara%20H.%20Stuart | Barbara Stuart is an Australian spectroscopist.
Background
Stuart completed her BSc at the University of Sydney in 1987, tutored at the university for three years, and gained an MSc in biophysical chemistry in 1990. She then moved to the UK and completed a PhD in polymer engineering at Imperial College London, graduating in 1993. Stuart then lectured in physical chemistry at the University of Greenwich for two years before returning to Australia to lecture at the University of Technology Sydney.
Publications
Biological Applications of Infrared Spectroscopy (1997)
Polymer Analysis (2002)
Infrared Spectroscopy: Fundamentals and Applications (2004)
Analytical Techniques in Materials Conservation (2007)
Forensic Analytical Techniques (2012)
References
External links
Academic staff of the University of Technology Sydney
CSIRO people
Living people
Year of birth missing (living people)
University of Sydney alumni
Alumni of Imperial College London
Spectroscopists | Barbara H. Stuart | [
"Physics",
"Chemistry"
] | 174 | [
"Physical chemists",
"Spectrum (physical sciences)",
"Analytical chemists",
"Spectroscopists",
"Spectroscopy"
] |
69,606,247 | https://en.wikipedia.org/wiki/White%20slave%20trade%20affair | The White slave trade affair, also known as L’affaire de la traite des blanches, De handel in blanke slavinnen and Affaire des petites Anglaises, was a famous international scandal in Brussels, Belgium, in 1880–1881. It attracted international attention to the issue of sex trafficking and became the starting point of the international campaign against sex trafficking.
History
In 1880, it was revealed that about fifty foreign girls had been sex trafficked illegally to work in brothels in Brussels. The case became a major scandal and gained international notoriety, especially once it became known that some people within the authorities had been involved in the trade. The scandal ended with both the mayor of Brussels and the head of the city's police force being forced to resign from their posts.
The scandal drew international attention to the ongoing issue of sex trafficking. The intense press coverage generated public interest in the issue and led to an international campaign against sex trafficking, which became labelled the white slave trade. Campaigns against sex trafficking first started in Belgium after the scandal of 1880, and spread from there to Great Britain in 1885, to France in 1902 and to the United States in 1907.
See also
Zwi Migdal
Sexual slavery
Ashkenazum
The Maiden Tribute of Modern Babylon
References
1880 in Belgium
1881 in Belgium
19th century in Brussels
Sex trafficking
Prostitution in Belgium
Scandals in Belgium
19th-century scandals
Human trafficking in Belgium | White slave trade affair | [
"Biology"
] | 288 | [
"Behavior",
"Sexuality stubs",
"Sexuality"
] |
69,606,772 | https://en.wikipedia.org/wiki/Advanced%20Materials%20Interfaces | Advanced Materials Interfaces is a peer-reviewed scientific journal covering materials science, including research on functional interfaces and surfaces and their specific applications.
Abstracting and indexing
The journal is abstracted and indexed in:
Chemical Abstracts Service
Current Contents/Physical, Chemical & Earth Sciences
Scopus
Science Citation Index Expanded
According to the Journal Citation Reports, the journal has a 2021 impact factor of 6.389, ranking it 48th out of 179 journals in the category "Chemistry, Multidisciplinary" and 95th out of 345 journals in the category "Materials Science, Multidisciplinary".
References
External links
Wiley-VCH academic journals
Materials science journals
English-language journals
Biweekly journals
Academic journals established in 2014 | Advanced Materials Interfaces | [
"Materials_science",
"Engineering"
] | 145 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Materials science"
] |
69,607,031 | https://en.wikipedia.org/wiki/All%20Japan%20Chemistry%20Workers%27%20Union | The All Japan Chemistry Workers' Union (JCWU, , Zenkoku Kagaku) was a trade union representing workers in the chemical industry in Japan.
The union was founded on 20 October 1987 by 30 local unions which had been expelled from the Japanese Federation of Synthetic Chemistry Workers' Unions (Goka Roren) due to an internal dispute. The union affiliated to the Japanese Trade Union Confederation, initially with 25,000 members, although by 1996 this had declined to only 10,540. In October 1998, the union merged with Goka Roren to form the Japanese Federation of Chemistry Workers' Unions.
References
Chemical industry trade unions
Trade unions established in 1987
Trade unions disestablished in 1998
Trade unions in Japan | All Japan Chemistry Workers' Union | [
"Chemistry"
] | 148 | [
"Chemical industry trade unions"
] |
69,607,043 | https://en.wikipedia.org/wiki/Guderley%E2%80%93Landau%E2%80%93Stanyukovich%20problem | The Guderley–Landau–Stanyukovich problem describes the time evolution of converging shock waves. The problem was discussed by G. Guderley in 1942 and independently by Lev Landau and K. P. Stanyukovich in 1944, although the latter authors' analysis was not published until 1955.
Mathematical description
Consider a spherically converging shock wave that was initiated by some means at a radial location and directed towards the center. As the shock wave travels towards the origin, its strength increases, since it compresses less and less mass as it propagates. The shock wave location thus varies with time. The self-similar solution to be described corresponds to the region , that is to say, the shock wave has travelled far enough to have forgotten the initial conditions.
Since the shock wave in the self-similar region is strong, the pressure behind the wave is very large in comparison with the pressure ahead of the wave . According to Rankine–Hugoniot conditions, for strong waves, although , , where represents gas density; in other words, the density jump across the shock wave is finite. For the analysis, one can thus assume and , which in turn removes the velocity scale by setting since .
At this point, it is worth noting that the analogous problem, in which a strong shock wave propagates outwards, is described by the Taylor–von Neumann–Sedov blast wave. The description of the Taylor–von Neumann–Sedov blast wave utilizes and the total energy content of the flow to develop a self-similar solution. Unlike in that problem, the imploding shock wave is not self-similar throughout the entire region (the flow field near depends on the manner in which the shock wave is generated), and thus the Guderley–Landau–Stanyukovich problem attempts to describe in a self-similar manner the flow field only for ; in this self-similar region the energy is not constant and, in fact, will be shown to decrease with time (the total energy of the entire region is still constant). Since the self-similar region is small in comparison with the initial size of the shock wave region, only a small fraction of the total energy is accumulated in the self-similar region. The problem thus contains no length scale with which to build the self-similar description from dimensional arguments, i.e., the dependence of on cannot be determined by dimensional arguments alone. Problems of this kind are described by self-similar solutions of the second kind.
For convenience, measure the time such that the converging shock wave reaches the origin at time . For , the converging shock approaches the origin, and for , the reflected shock wave emerges from the origin. The location of the shock wave is assumed to be described by the function
where is the similarity index and is a constant. The reflected shock emerges with the same similarity index. The value of is determined from the condition that a self-similar solution exists, whereas the constant cannot be determined from the self-similar analysis; the constant contains information from the region and therefore can be found only when the entire flow region is solved. The dimension of will be known only after solving for . For the Taylor–von Neumann–Sedov blast wave, dimensional arguments can be used to obtain
The shock-wave velocity is given by
According to the Rankine–Hugoniot conditions, the gas velocity , pressure and density immediately behind the strong shock front are, for an ideal gas, given by
These will serve as the boundary conditions for the flow behind the shock front.
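The scalings implied by these relations can be illustrated with a short numerical sketch. It takes the similarity exponent as a given input (the value 0.717 used below is an approximate literature value for a spherical implosion with gamma = 7/5, quoted purely for illustration), sets the constant A to unity since it is not fixed by the similarity analysis, and uses the standard strong-shock Rankine–Hugoniot relations u1 = 2|dR/dt|/(gamma+1), rho1 = rho0*(gamma+1)/(gamma-1), p1 = 2*rho0*|dR/dt|^2/(gamma+1):

```python
gamma = 1.4
rho0 = 1.0      # density of the undisturbed gas ahead of the shock (arbitrary units)
A = 1.0         # constant in R(t) = A*(-t)**alpha; not fixed by the similarity analysis
alpha = 0.717   # approximate similarity exponent (literature value for a spherical
                # implosion with gamma = 7/5, used here purely as an input)

def shock_state(t):
    """Shock radius, shock speed and strong-shock state just behind the
    front for t < 0 (the shock reaches the origin at t = 0)."""
    R = A * (-t) ** alpha
    speed = alpha * A * (-t) ** (alpha - 1.0)          # |dR/dt|
    u1 = 2.0 * speed / (gamma + 1.0)                   # gas speed behind the front
    rho1 = rho0 * (gamma + 1.0) / (gamma - 1.0)        # finite density jump
    p1 = 2.0 * rho0 * speed**2 / (gamma + 1.0)         # pressure behind the front
    return R, speed, u1, rho1, p1

for t in (-1.0, -0.1, -0.01, -0.001):
    R, speed, u1, rho1, p1 = shock_state(t)
    print(f"t = {t:8.3f}   R = {R:.4f}   |dR/dt| = {speed:9.3f}   u1 = {u1:9.3f}   p1 = {p1:11.3f}")
```

As t approaches zero from below, the shock radius shrinks to zero while the shock speed, the post-shock gas velocity and the post-shock pressure all diverge, whereas the density ratio across the front stays finite, consistent with the discussion above.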
Self-similar solution
The governing equations are
where is the density, is the pressure, is the entropy and is the radial velocity. In place of the pressure , we can use the sound speed using the relation .
To obtain the self-similar equations, we introduce
Note that since both and are negative, . Formally the solution has to be found for the range . The boundary conditions at are given by
The boundary conditions at can be derived from the observation at the time of collapse , wherein becomes infinite. At the moment of collapse, the flow variables at any distance from the origin must be finite, that is to say, and must be finite for . This is possible only if
Substituting the self-similar variables into the governing equations lead to
From here, we can easily solve for and (or, ) to find two equations. As a third equation, we can combine two of the equations by eliminating the variable . The resultant equations are
where and . It can easily be seen that once the third equation is solved for , the first two equations can be integrated using simple quadratures.
The third equation is a first-order differential equation for the function , with the boundary condition pertaining to the condition behind the shock front. But there is another boundary condition that needs to be satisfied, i.e., that pertaining to the condition found at . This additional condition cannot be satisfied for an arbitrary value of ; there exists only one value of for which it can be met. Thus is obtained as an eigenvalue. This eigenvalue can be obtained numerically.
The condition that determines can be explained by plotting the integral curve, shown in the figure as a solid curve. The point is the initial condition for the differential equation, i.e., . The integral curve must end at the point . In the same figure, the parabola corresponding to the condition is also plotted as a dotted curve. It can easily be shown that the point always lies above this parabola. This means that the integral curve must intersect the parabola to reach the point . In all three differential equations, the ratio appears, implying that this ratio vanishes at the point where the integral curve intersects the parabola. The physical requirement for the functions and is that they must be single-valued functions of in order to obtain a unique solution. This means that the functions and cannot have extrema anywhere inside the domain. But at the point , can vanish, indicating that the aforementioned functions have extrema there. The only way to avoid this situation is to keep the ratio finite at . That is to say, as becomes zero, we require also to be zero in such a manner as to obtain . At ,
Numerical integrations of the third equation provide for and for . These values for may be compared with an approximate formula , derived by Landau and Stanyukovich. It can be established that as , . In general, the similarity index is an irrational number.
See also
Taylor–von Neumann–Sedov blast wave
Zeldovich–Taylor flow
References
Flow regimes
Fluid dynamics
Combustion
Lev Landau | Guderley–Landau–Stanyukovich problem | [
"Chemistry",
"Engineering"
] | 1,331 | [
"Chemical engineering",
"Combustion",
"Flow regimes",
"Piping",
"Fluid dynamics"
] |
69,607,070 | https://en.wikipedia.org/wiki/Japanese%20Federation%20of%20Chemistry%20Workers%27%20Unions | The Japanese Federation of Chemistry Workers' Unions (Kagaku League) was a trade union representing workers in the chemical and pharmaceutical industries in Japan.
The union was established in 1998, when the Japanese Federation of Synthetic Chemistry Workers' Unions merged with the All Japan Chemistry Workers' Union. Like both its predecessors, it became affiliated with the Japanese Trade Union Confederation. By 2002, it had 104,000 members. That year, it merged with the National Organization of All Chemical Workers, the Japan Confederation of Petroleum Industry Workers' Unions, and the National Federation of Cement Workers' Unions of Japan to form the Japan Federation of Energy and Chemistry Workers' Unions.
References
Chemical industry trade unions
Trade unions established in 1998
Trade unions established in 2002
Trade unions in Japan | Japanese Federation of Chemistry Workers' Unions | [
"Chemistry"
] | 150 | [
"Chemical industry trade unions"
] |
69,607,191 | https://en.wikipedia.org/wiki/National%20Organization%20of%20All%20Chemical%20Workers | The National Organization of All Chemical Workers (, Shin Kagaku) was a trade union representing workers in the chemical industry in Japan.
The union was founded in 1950, and soon after was a founding affiliate of the National Federation of Industrial Organisations. By 1958 it had 7,049 members, growing to 12,265 members in 1970. From late 1987, it was affiliated with the Japanese Trade Union Confederation, but by 1996 its membership had declined to 8,313. In 2002, it merged with the Japanese Federation of Chemistry Workers' Unions, the Japan Confederation of Petroleum Industry Workers' Unions, and the National Federation of Cement Workers' Unions of Japan to form the Japan Federation of Energy and Chemistry Workers' Unions.
References
Chemical industry trade unions
Trade unions established in 1950
Trade unions disestablished in 2002
Trade unions in Japan | National Organization of All Chemical Workers | [
"Chemistry"
] | 169 | [
"Chemical industry trade unions"
] |