| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
63,344,217 | https://en.wikipedia.org/wiki/Load%20value%20injection | Load value injection (LVI) is an attack on Intel microprocessors that can be used to attack Intel's Software Guard Extensions (SGX) technology. It is a development of the previously known Meltdown security vulnerability. Unlike Meltdown, which can only read hidden data, LVI can inject data values, and is resistant to the countermeasures so far used to mitigate the Meltdown vulnerability.
In theory, any processor affected by Meltdown may be vulnerable to LVI, but LVI is so far only known to affect Intel microprocessors. Intel has published a guide to mitigating the vulnerability using compiler technology: existing software must be recompiled to add LFENCE memory barrier instructions at every potentially vulnerable point in the code. However, this mitigation appears likely to result in substantial performance reductions in the recompiled code.
See also
Transient execution CPU vulnerabilities
References
External links
Intel's analysis of the LVI vulnerability
Intel x86 microprocessors
Transient execution CPU vulnerabilities
2020 in computing
X86 memory management | Load value injection | [
"Technology"
] | 226 | [
"Transient execution CPU vulnerabilities",
"Computer security exploits"
] |
63,344,935 | https://en.wikipedia.org/wiki/British%20Gear%20Association | The British Gear Association (BGA) is the trade body (association) that represents the manufacture of gear equipment and mechanical power transmission in the United Kingdom.
History
The BGA was formed in 1986 to replace the previous British Gear Manufacturers' Association, headquartered in central London; the BGMA worked with the Association of Hydraulic Equipment Manufacturers (which became the British Fluid Power Association) and the British Compressed Air Society. In the late 1980s, owing to the level of research undertaken, West Germany was producing seven times as much transmission gearing as the UK.
The organisation was incorporated as a company in 1993.
Structure
The organisation is currently headquartered at Newcastle University. In 1993, the company moved to Staffordshire from Birmingham. Much research has been done in power transmission at the university's National Gear Metrology Laboratory (NGML).
The organisation is part of EUROTRANS, also known as the European Committee of Associations of Manufacturers of Gears and Transmission Parts, the European trade association for gear and transmission manufacturers, and works with Orgalim. Equivalent European organisations are VDMA in Germany, and Artema in France.
Function
The organisation represents companies that make transmission equipment, such as differentials for the automotive industry. It organises industry seminars and conferences; its annual conference is held in mid-November.
See also
American Gear Manufacturers Association
References
External links
BGA
1986 establishments in the United Kingdom
Automotive industry in the United Kingdom
Engineering societies based in the United Kingdom
Gears
Mechanical engineering organizations
Newcastle University
Organisations based in Tyne and Wear
Organizations established in 1986
Science and technology in Tyne and Wear
Trade associations based in the United Kingdom | British Gear Association | [
"Engineering"
] | 324 | [
"Mechanical engineering",
"Mechanical engineering organizations"
] |
63,346,731 | https://en.wikipedia.org/wiki/Communications%20Earth%20%26%20Environment | Communications Earth & Environment is a peer-reviewed, open-access, scientific journal in environmental science and planetary science published by Nature Portfolio in 2020. The editor-in-chief is Heike Langenberg. Communications Earth & Environment was created as a sub-journal to Nature Communications following the introduction of Communications Biology, Communications Chemistry, and Communications Physics in 2018.
Abstracting and indexing
The journal is abstracted and indexed in:
Astrophysics Data System (ADS)
Science Citation Index Expanded
Scopus
According to the Journal Citation Reports, the journal has a 2022 impact factor of 7.9, ranking it 36th out of 274 journals in the category "Environmental Sciences" and 10th out of 201 journals in the category "Geosciences, Multidisciplinary".
See also
Nature
Nature Communications
Scientific Reports
References
External links
Nature Research academic journals
Earth and atmospheric sciences journals
Environmental science journals
Open access journals
Academic journals established in 2020
English-language journals
Creative Commons-licensed journals
Continuous journals | Communications Earth & Environment | [
"Environmental_science"
] | 197 | [
"Environmental science journals"
] |
63,346,732 | https://en.wikipedia.org/wiki/Communications%20Materials | Communications Materials is a peer-reviewed, open access, scientific journal in the field materials science published by Nature Portfolio since 2020. The chief editor is John Plummer. The journal was created as one of several sub-journals to Nature Communications.
Abstracting and indexing
The journal is abstracted and indexed in selective databases such as Science Citation Index Expanded and Scopus. According to the Journal Citation Reports, the journal has a 2022 impact factor of 7.8.
See also
Nature
Nature Communications
Scientific Reports
References
External links
Nature Research academic journals
Materials science journals
Open access journals
Academic journals established in 2020
English-language journals
Creative Commons-licensed journals
Continuous journals
2020 establishments | Communications Materials | [
"Materials_science",
"Engineering"
] | 133 | [
"Materials science journals",
"Materials science"
] |
63,346,754 | https://en.wikipedia.org/wiki/Nature%20Reviews%20Earth%20%26%20Environment | Nature Reviews Earth & Environment is a monthly peer-reviewed scientific journal published by Nature Portfolio. It was established in 2020. The editor-in-chief is Graham Simpkins.
Abstracting and indexing
The journal is abstracted and indexed in:
Science Citation Index Expanded
Scopus
According to the Journal Citation Reports, the journal has a 2021 impact factor of 37.214, ranking it 2nd out of 279 journals in the category "Environmental Sciences" and 1st out of 201 journals in the category "Geosciences, Multidisciplinary".
References
External links
Nature Research academic journals
English-language journals
Earth and atmospheric sciences journals
Environmental science journals
Academic journals established in 2020
Monthly journals
Online-only journals
Review journals | Nature Reviews Earth & Environment | [
"Environmental_science"
] | 144 | [
"Environmental science journals",
"Environmental science journal stubs"
] |
63,346,756 | https://en.wikipedia.org/wiki/Nature%20Sustainability | Nature Sustainability is a monthly peer-reviewed scientific journal published by Nature Portfolio. It was established in 2018. The editor-in-chief is Monica Contestabile.
Abstracting and indexing
The journal is abstracted and indexed in:
Science Citation Index Expanded
Scopus
Social Sciences Citation Index
According to the Journal Citation Reports, the journal has a 2022 impact factor of 27.7.
References
External links
English-language journals
Nature Research academic journals
Monthly journals
Online-only journals
Academic journals established in 2018
Sustainability journals | Nature Sustainability | [
"Environmental_science"
] | 103 | [
"Environmental science journals",
"Sustainability journals",
"Environmental science journal stubs"
] |
63,346,757 | https://en.wikipedia.org/wiki/Npj%202D%20Materials%20and%20Applications | npj 2D Materials and Applications, is an open access peer-reviewed scientific journal published by Nature Publishing Group. It focuses on 2D materials (such as thin films), including fundamental behaviour, synthesis, properties and applications.
According to the Journal Citation Reports, npj 2D Materials and Applications has a 2022 impact factor of 9.7. The current editor-in-chief is Andras Kis (École Polytechnique Fédérale de Lausanne).
Scope
npj 2D Materials and Applications publishes articles, brief communications, comments, matters arising, perspectives, and editorials on 2D materials in their entirety, including fundamental behaviour, synthesis, properties and applications. Specific materials of interest include, but are not limited to:
2D materials in all their forms: graphene, transition metal dichalcogenides, phosphorene and molecular systems, including relevant allotropes and compounds, and topological materials
fundamental understanding of their basic science
synthesis by physical and chemical approaches
behavior and properties: electronic, magnetic, spintronic, photonic, mechanical, including in heterostructures and other architectures
applications: sensors, memory, high-frequency electronics, energy harvesting and storage, flexible electronics, water treatment, biomedical, thermal management.
References
External links
Nature Research academic journals
Materials science journals
English-language journals | Npj 2D Materials and Applications | [
"Materials_science",
"Engineering"
] | 265 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Materials science"
] |
63,347,153 | https://en.wikipedia.org/wiki/Bert%20Poolman | Berend (Bert) Poolman is a Dutch biochemist, as specialist in bioenergetics of microorganisms and membrane transport. He is a professor of Biochemistry at the University of Groningen and an elected member of the Royal Netherlands Academy of Arts and Sciences (KNAW) since 2009. Poolman is a pioneer in the field of bottom-up synthetic biology, that is, the construction from molecular building blocks of functional metabolic networks and autonomously operating functional systems, which are typical of living cells. Poolman is a lecturer in membrane biology and synthetic biology.
Education and career
Poolman pursued studies in Biochemistry and Microbiology at the University of Groningen, the Netherlands, and the University of Bern (Switzerland), obtaining an MSc degree in 1984. He gained his PhD in 1987 with a thesis on the bioenergetics of streptococci, under the supervision of Wil Konings and Hans Veldkamp.
After a brief stint as a scientist at Genencor Inc (now Dupont Industrial Biosciences) in San Francisco (USA), he returned to the Netherlands at the end of 1989 to start his own research group on the biochemistry and molecular biology of membrane transport at the University of Groningen, supported by a fellowship from the Royal Netherlands Academy of Arts and Sciences. He has been professor of biochemistry at Groningen since 1998. In 2008 he was appointed Program Director of its Centre for Synthetic Biology, and in 2013 he became Scientific Director of its Biomolecular Sciences and Biotechnology Institute.
In 1993 Poolman spent a sabbatical at Transgene SA in Strasbourg (France). Thanks to a Fulbright fellowship, he was a visiting professor in biochemistry at the California Institute of Technology, Pasadena (USA) in 2003.
Poolman has been Chair of the KNAW Earth and Life Sciences Board, and has been vice-chair of KNAW Council for Natural and Technical Sciences since 2017.
From 2016 to 2018, he was a member of the Dutch Council for Physics and Chemistry, and currently he is a member of the core team of the Council for Chemistry.
Since 2009 he has led the focus area on ‘Biomolecular and Bioinspired Functionality’ at the Zernike Institute for Advanced Materials (University of Groningen), together with Nobel laureate Ben Feringa, and from 2010 to 2017 he managed a national Synthetic Biology program of the University of Groningen.
Research
Poolman has made seminal contributions to the understanding of the dynamics and permeability of biological membranes and to the field of vectorial biochemistry, that is, the role of electrochemical gradients in the fuelling and regulation of membrane transport. He demonstrated that the exchange of different sugars can be more advantageous for a cell than sugar-proton symport, and showed that cells exploit the coupling of substrate import to product exchange to conserve metabolic energy.
By combining functional and structural studies, he has become an expert on ATP-binding cassette transporters, one of the largest known protein families. Highlights include: discovery of export of hydrophobic compounds from the inner leaflet of the lipid bilayer; elucidation of sensing and gating mechanism of ABC importers involved in cell volume regulation; single-molecule fluorescence studies to elucidate the mechanism of solute capture and translocation; structural basis for peptide selection by receptors involved in nitrogen uptake; structural basis for vitamin recognition and transport by a new class of ABC importers; and the energy coupling stoichiometry of ABC importers.
Poolman has advanced the understanding of membrane transport by combining mechanistic in vitro studies with in vivo analyses of transporter regulation. His group has developed innovative technologies in membrane reconstitution and the probing of the physicochemical state of both the cytoplasm and the cell membrane. His group was the first to show that changes in the ionic strength are used to gate the activity of osmoregulatory transporters, providing the cell with a simple on/off switch to control its cytoplasmic volume. In parallel, his group developed sensors to quantify changes in ionic strength and excluded volume (macromolecular crowding).
Current research (2020)
His main current research areas include: (i) bacterial cell-volume regulation: elucidation of the homeostatic mechanisms that control the physicochemistry of the cell; (ii) building of synthetic cells: construction of functional out-of-equilibrium systems for metabolic energy conservation and development of cell-volume regulatory networks (what tasks should a living cell minimally perform, and how can these be accomplished with a minimal set of components?); and (iii) the molecular mechanisms of membrane transport proteins: understanding the dynamics, energetics and mechanisms of solute transporters in the plasma membrane.
Publications
Poolman is an ISI highly cited researcher in microbiology. He has published over 275 peer-reviewed papers in international scientific journals, which have been cited more than 25,000 times. His H-index (Google Scholar) is 83, and he holds four patents. Poolman shares his findings with wide audiences through newspaper, radio and TV appearances. In 2012 Schrauwers and Poolman wrote the book Synthetische Biologie: de Mens als Schepper (Synthetic Biology: Man as a Creator) to convey the developments in synthetic biology to a lay audience.
Awards and fellowships
Poolman has received numerous awards, including the Biochemistry Award (1989) of the Dutch Biochemistry and Molecular Biology Organisation (NVBMB), a Royal Netherlands Academy of Arts and Sciences fellowship (1989), a Human Frontiers Science Program Organization award (1992), the SON ‘Jonge Chemici’ award (1997), the Federation European Biochemical Society Lecturer Award (2014), and the Joel Mandelstam Memorial Lecture award (2016).
He obtained four TOP program grants from the Netherlands Organisation for Scientific Research (NWO) (2001, 2007, 2010, 2014), two program grants from the Netherlands Proteomics Centre (2005 and 2008), and coordinated three large European networks (1996, 1999 and 2012). In 2015 he received an ERC Advanced Grant and in 2019 an ERC Proof-of-Concept Grant, and in 2017 the BaSyC consortium (with Poolman as one of the lead principal investigators) was awarded a multimillion Dutch Gravitation grant.
Personal life
Poolman was born in 1959 as the first son of Jelto Poolman and Neeltje Prinsse. In 1983 he married Heleen Stevenson (1959), with whom he has four children.
Links
Towards a metabolism for synthetic cells (video of the KNAW symposium Op jacht naar de minimale cel ("Hunting for the minimal cell"), 24 June 2015, on Vimeo)
De mens als schepper (The human as creator; Unifocus video on YouTube, with English subtitles)
References
1959 births
Living people
21st-century Dutch chemists
Dutch organic chemists
Members of the Royal Netherlands Academy of Arts and Sciences
University of Groningen alumni
Academic staff of the University of Groningen | Bert Poolman | [
"Chemistry"
] | 1,429 | [
"Organic chemists",
"Dutch organic chemists"
] |
63,349,405 | https://en.wikipedia.org/wiki/Developmental%20signaling%20center | A developmental signaling center is defined as a group of cells that release various morphogens which can determine the fates, or destined cell types, of adjacent cells. This process in turn determines what tissues the adjacent cells will form. Throughout the years, various development signaling centers have been discovered.
Spemann-Mangold organizer
In 1924, Hans Spemann and Hilde Mangold discovered a region in the dorsal blastopore lip of an amphibian embryo that induced certain neighboring cells to become neural tissue. The Spemann-Mangold organizer was the first developmental organizer region to be identified and studied; since then, many analogous organizers have been found in other organisms. The Spemann-Mangold organizer is important to developmental biology because it provided the first proof that particular cell populations influence the differentiation of other cells through signaling molecules.
Nieuwkoop center
The Nieuwkoop center, named after the developmental biologist Pieter Nieuwkoop, is a cluster of dorsal vegetal cells in a blastula which produce both mesoderm-inducing and dorsalizing signals. Signals from the Nieuwkoop center induce the Spemann-Mangold organizer, thus the Nieuwkoop Center is known as the organizer of the organizer. Even with the BCNE center (Blastula chordin and noggin expression center) removed from the blastula, the Nieuwkoop Center is able to induce formation of the Spemann-Mangold organizer. Transplant of the Nieuwkoop Center causes formation of an embryonic axis with an endodermal fate which contains dorsal mesoderm.
Due to difficulty defining definitive Nieuwkoop regions, little is known about the molecular composition of the Nieuwkoop signal. However, cells from the Nieuwkoop Center express potent mesoderm inducers as well as the secreted protein, Cerberus (CER1), which contributes to the formation of the head, heart, and asymmetry of internal organs. Furthermore, a homeobox gene, nieuwkoid, was named after the Nieuwkoop Center for its role in development. Nieuwkoid is expressed immediately following the mid-blastula transition to a pregastrula embryo on the dorsal side and mis-expression of nieuwkoid was found to be sufficient for induction of secondary axes.
BCNE center
The BCNE center is the Blastula Chordin and Noggin Expressing center. It is located in the dorsal region of the animal pole. It appears after the mid-blastula stage and, like the Nieuwkoop center, is triggered by the expression of beta-catenin. This center is distinct from the Nieuwkoop center, which secretes a different group of factors; expression of VegT and B1-Sox prevents the BCNE center from extending into the vegetal pole of the blastula. The BCNE center secretes several factors: chordin, noggin, Xnr3, siamois, goosecoid, twin, Admp, and FoxA4a. This center predisposes cells in the blastula stage to become neural tissue. The cells of the BCNE region give rise to the forebrain, most of the midbrain and hindbrain, the notochord, and the floor plate.
References
Cells
Morphogens
Amphibians
Embryology | Developmental signaling center | [
"Biology"
] | 699 | [
"Animals",
"Morphogens",
"Induced stem cells",
"Amphibians"
] |
63,350,802 | https://en.wikipedia.org/wiki/Methanedisulfonic%20acid | Methanedisulfonic acid is the organosulfur compound with the formula CH2(SO3H)2. It is the disulfonic acid of methane. It is prepared by treatment of methanesulfonic acid with oleum. Its acid strength (pKa) is comparable to that of sulfuric acid.
History and synthesis
The acid was first unknowingly prepared in 1833 by Gustav Magnus, as a decomposition product of ethanedisulfonic acid during early attempts to synthesize diethyl ether from ethanol and anhydrous sulfuric acid. Early investigations focused on ether production from alcohols and strong anhydrous acids. Liebig provided a detailed overview of the various sulfonic acids obtained from these reactions, and introduced the name "ethionic acid" for the sulfooxyethanesulfonic acid previously termed "Weinschwefelsäure". Josef Redtenbacher subsequently analyzed the barium salt of MDA and coined the name (still occasionally used) methionic acid, following Liebig's convention.
In 1856, Adolph Strecker analyzed various methionate salts and improved the synthesis from ether and anhydrous sulfuric acid by trapping evolving gases within the reaction vessel to maximize conversion. The same year, Buckton and Hofmann discovered a synthesis reaction from acetonitrile or acetamide with fuming sulfuric acid but did not identify their product, designating it methylotetrasulphuric acid.
Another method was developed in 1897: treating acetylene with fuming sulfuric acid gave acetaldehyde disulfonic acids, which were then decomposed to methionic acid by boiling in alkaline solution.
However, all these early synthetic routes suffered from numerous byproducts. A higher-yielding synthesis was introduced in 1929: treating dichloromethane (CH2Cl2) with potassium sulfite under hydrothermal conditions gives a methionate salt.
See also
1,3-Propanedisulfonic acid
References
Sulfonic acids | Methanedisulfonic acid | [
"Chemistry"
] | 434 | [
"Functional groups",
"Sulfonic acids"
] |
63,351,717 | https://en.wikipedia.org/wiki/Campenot%20chamber | A Campenot chamber is a three-chamber petri dish culture system devised by Robert Campenot to study neurons. Commonly used in neurobiology, the neuron soma or cell body is physically compartmentalized from its axons allowing for spatial segregation during investigation. This separation, typically done with a fluid impermeable barrier, can be used to study nerve growth factors (NGF). Neurons are particularly sensitive to environmental cues such as temperature, pH, and oxygen concentration which can affect their behavior.
The Campenot chamber can be used to study spatial and temporal axon guidance in both healthy controls and in cases of neuronal injury or neurodegeneration. Campenot concluded that neuron survival and growth depend on local nerve growth factors.
Structure
The Campenot chamber is made up of three chambers divided by Teflon fibers. These fibers are added to a petri dish coated in collagen with 20 scratches, spaced 200 μm apart, that become the parallel tracks for axons to grow. There is also a layer of grease that works to seal the Teflon to the neuron and separates the axon processes from the cell body. Refer to Side View of Campenot Chamber figure.
History of use
The uniqueness of the design allows for biochemical analysis and the application of a stimulus at either the distal or proximal end. Campenot chambers have been used for a variety of studies, including culturing iPSC-derived motor neurons to isolate axonal RNA, which can then be used for molecular analysis. The chamber has also been modified to study degeneration and apoptosis of cultured hippocampal neurons induced by amyloid beta. A modified two-chamber system was used to examine the axonal transport of herpes simplex virus by following its transmission from axons to epidermal cells. Through this study, the virus was found to undergo a specialized mode of viral transport, assembly, and sensory neuron egress.
Recent advances in lithography have made these chambers a more appealing model system. New microfluidic approaches have been established to create compartmentalized devices such as these using soft lithography. A recent study demonstrated that a negative mold consisting of microchannels can be made using SU-8 photoresist on a silicon wafer, arrayed at a height of 3 μm to restrict cell-body transport while allowing extension of neurites. A second layer of lithography defines compartment chambers that can be arranged to address a specific research question. The advantages of this approach are easier visualization of cultures, precise definition of compartments and channels, and high device reproducibility.
Limitations
A few limitations are associated with this device: the fluid chamber can leak because it is sealed with only one layer of grease; the device is rather difficult to assemble; advanced live-cell microscopy imaging is difficult to integrate; and the technique can only be performed using neurons of the PNS that depend on neurotrophic factors, as applications with CNS neurons have been found to be ineffective.
References
Biological techniques and tools
Neuroscience | Campenot chamber | [
"Biology"
] | 640 | [
"Neuroscience"
] |
63,352,173 | https://en.wikipedia.org/wiki/EQUIL2 | Equil 2 is a computer program used to estimate the risk of nephrolithiasis (renal stones). The input data includes excretion, concentration, and the saturation of trace elements or other substances, which are involved in the creation of kidney stones and the output will be provided in terms of PSF score (probability of stone formation) or other equivalent formats. In some studies SUPERSAT, another program, provided more accurate measurements in some of the parameters such as relative supersaturation (RSS).
References
Kidney diseases
Medical software | EQUIL2 | [
"Biology"
] | 113 | [
"Medical software",
"Medical technology"
] |
63,352,392 | https://en.wikipedia.org/wiki/Target%20selection | Target selection is the process by which axons (nerve fibres) selectively target other cells for synapse formation. Synapses are structures which enable electrical or chemical signals to pass between nerves. While the mechanisms governing target specificity remain incompletely understood, it has been shown in many organisms that a combination of genetic and activity-based mechanisms govern initial target selection and refinement. The process of target selection has multiple steps that include axon pathfinding when neurons extend processes to specific regions, cellular target selection when neurons choose appropriate partners in a target region from a multitude of potential partners, and subcellular target selection where axons often target particular regions of a partner neuron.
Description
As bundled axons finish navigating through various neural circuits during neural development, each growth cone must select the cells with which it will synapse. This can be observed particularly well in the visual and olfactory systems of organisms. For a properly functioning nervous system to develop, the growth cone must form neural connections with an extremely high degree of accuracy. Although target cell selection must be highly accurate, the degree of specificity that the neural connectivity achieves varies with the neuronal circuitry. The target selection process by which an axon develops synaptic connections with specific cells can be broken down into multiple stages that are not necessarily confined to an exact chronological order.
The stages of targeting include:
region specification
target cell specification
subcellular specification
synaptic refinement
Region specification
The first stage in target selection is specification of target region, a process known as axon pathfinding. Growing neurites follow gradients of cell surface molecules that serve as chemoattractants and repellents to the growth cone. This perspective is an evolution of the chemoaffinity hypothesis posited by the neurobiologist Roger Wolcott Sperry in the 1960s. Sperry studied how the neurons in the visual systems of amphibians and goldfish form topographic maps in the brain, noting that if the optic nerve is crushed and allowed to regenerate, the axons will trace back the same patterns of connections. Sperry hypothesized that the target cells carried "identification tags" that would guide the growing axon, which we now know as recognition molecules that bind the growth cone along a gradient.
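The gradient-reading behavior described by the chemoaffinity hypothesis can be caricatured as simple hill-climbing: at each step the growth cone compares the attractant concentration on either side of its position and advances toward the higher value, halting at the concentration peak. A minimal one-dimensional sketch follows; the concentration profile, step size, and target position are all illustrative assumptions, not measured biology.

```python
def attractant(x: float) -> float:
    """Illustrative chemoattractant profile peaking at the target (x = 10)."""
    return 1.0 / (1.0 + (x - 10.0) ** 2)

def grow_toward_target(x: float, step: float = 0.5, max_steps: int = 100) -> float:
    """Move a growth cone up the local concentration gradient until
    neither neighbouring position offers a higher concentration."""
    for _ in range(max_steps):
        left, right = attractant(x - step), attractant(x + step)
        if left <= attractant(x) >= right:
            break  # local maximum: growth slows and targeting can proceed
        x = x + step if right > left else x - step
    return x

print(grow_toward_target(0.0))  # ends at the concentration peak, x = 10.0
```

Real growth cones integrate several such gradients at once, attractive and repulsive, which is what lets a single guidance mechanism produce the ordered topographic maps described above.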
Neurons in sensory systems like the visual, auditory, or olfactory cortex grow into topographic maps such that neighboring neurons in the periphery correspond to adjacent target locations in the central nervous system. For example, neurons nearby on the retina will project to nearby cortical cells, creating a so-called retinotopic map. This cortical organization allows organisms to more easily decode stimuli.
The mechanisms governing region specification have been well studied in numerous systems. In Drosophila, numerous axon guidance molecules have been shown to be involved in precise regionalization of the ventral nerve cord.
Target cell specification
Once a growing neuron has entered the target area, it must locate the appropriate target cell with which to synapse. This is accomplished through sequential signaling of attractive and repulsive cues, largely neurotrophins. The axon grows along its chemoattractant gradient until it approaches the target cell, when its growth is slowed by a sudden drop in the concentration of chemoattractant. This serves as a signal that the target cell has been reached.[1]
As the growth cone slows down, branches begin to form through one of two modalities: splitting of the growth cone, or interstitial branching. Growth cone splitting results in bifurcation of the main axon and is associated with axon guidance and innervating multiple faraway targets. Conversely, interstitial branching increases axonal coverage locally to define its presynaptic territory. Most mammalian CNS branches extend interstitially.[7] Branching can be caused by repulsive cues in the environment that cause the growth cone to pause and collapse, resulting in the formation of branches. [8]
To ensure successful innervation, inappropriate targeting must be prevented. Once the axon has reached its target area and started to slow down and branch, it can be held within the target area by a perimeter of cues repulsive to the growth cone.
Cell-to-cell interactions
Axons express patterns of cell-surface adhesion molecules that allow them to match with specific layer targets. An important family of adhesion molecules is constituted by the cadherins, whose different combination on targeting cells allow the traction and guidance of the forming axons. A typical example of layers with combinatorial expression of these molecules is the tectal laminae in the chick tectum, where the N-cadherin molecule is present only in those layers that receive axons form the retina.
Extracellular cues
Matrix factors and secreted cues are also very important in the formation of layered structures, and can be divided into attractive and repulsive cues, though the same factor can have both functions under varying conditions. For example, semaphorin is a substance with a repulsive effect that has been shown to have a fundamental role in layering between different somatosensory modalities in the spinal cord system.
Synapse formation
The molecular mechanism of synapse formation is a process composed by different stages that relies on complex intracellular mechanisms involving both the pre- and postsynaptic cell. When the growth cone of the growing presynaptic axon makes contact with the target cell, it loses the filopodia, while both cells start expressing adhesion molecules on their respective membranes to form tight junctions, called "puncta adherens", which are similar to an adherens junction. Different classes of adhesion molecules, like SynCAM, cadherins and neuroligins/neurexins play an important role in synapse stabilization and enable synaptic formation. After the synapses have been stabilized, the pre- and postsynaptic cells undergo subcellular changes on each side of the synapses. Namely, there is an accumulation of the Golgi apparatus on the postsynaptic side, while there is an accumulation of vesicles in the presynaptic terminal. Finally at the end of synaptogenesis, there is an apposition of extracellular matrix between the cells with the formation of a synaptic cleft. Characteristic of the postsynaptic cell is the presence of a postsynaptic density (PSD), formed by PDZ-domain-containing scaffold proteins whose function is to keep the neurotransmitter receptors clustered inside the synapse.
References
Cell biology
Neural circuitry
Nervous system | Target selection | [
"Biology"
] | 1,371 | [
"Organ systems",
"Cell biology",
"Nervous system"
] |
58,096,762 | https://en.wikipedia.org/wiki/Together%20for%20Sustainability | Together for Sustainability AISBL (TfS) is a joint initiative of chemical companies, founded in 2011. It focuses on the promotion of sustainability practices in the chemical industry's supply chain, currently gathering chemical companies around a single standard of auditing and assessment.
Sustainability in chemical supply chains
In recent years, sustainability in the chemical industry has become more important and more holistic. Chemical companies' sustainability measures now include not only ecological aspects but also social concerns and collaborative issues.
It is now well accepted that the creation of sustainable chemical supply chains requires a joint effort beyond individual businesses. These efforts should integrate chemical companies, suppliers, customers as well as consumers.
Prof. Dr Wolfgang Stolze and Marc Müller of the University of St. Gallen summarize the development in the chemical industry in recent years as follows: "The scope of sustainability in the chemical industry has evolved from a firm-level construct with a strong focus on green aspects to a chain-level approach attempting to address the triple bottom line of economic, social and environmental elements."
History
The Together for Sustainability initiative was founded in 2011 by BASF, Bayer, Evonik, Henkel, Lanxess, and Solvay. The objective was to develop a global supplier engagement program and improve the members' own sustainable sourcing practices in line with the United Nations Global Compact. Since January 2015, the TfS initiative has been incorporated as an international non-profit association under Belgian law.
Since June 2012, TfS has conducted assessments and audits through independent experts, supported by an early partnership with the French company EcoVadis, which provided sustainability scorecards and benchmarks.
In June 2023, Jennifer Jewson, CPO of LyondellBasell, was elected as president of the TfS.
Structure
The TfS is governed by two main organs, the General Assembly and the Steering Committee.
The General Assembly is formed by all the Chief Procurement Officers of the member companies, and holds power over the direction and structure of the organization, as well as approving the decisions of the Steering Committee.
The Steering Committee, formed by six elected members of the General Assembly as well as the TfS president, is the executive council of the organization and decides upon its activities and projects.
Additionally, TfS has several Regional Operating Committees (Asia, North America and South America) as well as, currently, five mission-specific work streams led and staffed by participants from the TfS member companies:
Work Stream 1: Governance and Partnerships
Work Stream 2: TfS Assessments
Work Stream 3: TfS Audits
Work Stream 4: TfS Communications and Capability Building
Work Stream 5: Scope 3 GHG Emissions
TfS is headquartered in Brussels. The head office manages the day-to-day affairs of the organization and stays in close contact with the representatives and coordinators of the member companies.
TfS has a partnership with several other chemical industry associations: American Chemistry Council (ACC), European Chemical Industry Council (CEFIC), German Chemistry Council (VCI), China Petroleum and Chemical Industry Federation (CPCIF), Indian Chemical Council (ICC), and the Associação Brasileira da Indústria Química (ABIQUIM).
Members
TfS membership is open to all companies in the chemical industry who subscribe to the United Nations Global Compact, Responsible Care, and show a commitment to sustainability. TfS membership has grown steadily since the organization's founding; as of April 2022, its members had a joint global turnover of over €500 billion.
As of October 2024, TfS has 54 member companies.
Recognition
2015 - Highly Commended at the Ethical Corporation Responsible Business Award 2015
2016 - Sustainable Purchasing Leadership Council Market Transformation Award
2018 - Best Third Sector/Not-for-profit Procurement Project at CIPS Supply Management Awards
2018 - Finalist for international Responsible Business Awards
See also
ISO 26000
Corporate social responsibility
United Nations Global Compact
Sustainable Stock Exchanges Initiative
Notes
References
External links
TfS Initiative website
Chemical industry
Sustainability
Corporate social responsibility
Supply chain management
Organisations based in Brussels | Together for Sustainability | [
"Chemistry"
] | 817 | [
"nan"
] |
58,097,665 | https://en.wikipedia.org/wiki/Nu-transform | In the theory of stochastic processes, a ν-transform is an operation that transforms a measure or a point process into a different point process. Intuitively the ν-transform randomly relocates the points of the point process, with the type of relocation being dependent on the position of each point.
Definition
For measures
Let denote the Dirac measure on the point and let be a simple point measure on . This means that
for distinct and for every bounded set in . Further, let be a Markov kernel from to .
Let be independent random elements with distribution . Then the point process
is called the ν-transform of the measure if it is locally finite, meaning that for every bounded set
For point processes
For a point process , a second point process is called a -transform of if, conditional on , the point process is a -transform of .
Properties
Stability
If is a Cox process directed by the random measure , then the -transform of is again a Cox process, directed by the random measure (see Transition kernel#Composition of kernels)
Therefore, the -transform of a Poisson process with intensity measure is a Cox process directed by a random measure with distribution .
Laplace transform
If is a -transform of , then the Laplace transform of is given by
for all bounded, positive and measurable functions .
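As a hedged illustration of the stability property (not part of the article; the rate, the Gaussian displacement kernel, and the counting window are arbitrary choices for this sketch), a homogeneous Poisson process can be ν-transformed by relocating each point independently, and the Poisson character of the result checked empirically through the equality of mean and variance of window counts:

```python
import numpy as np

rng = np.random.default_rng(1)

# Homogeneous Poisson process on [0, 1] with rate lam; each point x is then
# independently relocated by the Markov kernel nu(x, .) = Normal(x, 0.1^2).
# By the stability property the transformed process is again Poisson, so the
# number of points falling in any fixed window has equal mean and variance.
lam, trials = 30.0, 4000
counts = np.empty(trials)
for i in range(trials):
    n = rng.poisson(lam)                    # number of points of the process
    x = rng.uniform(0.0, 1.0, size=n)       # their (uniform) locations
    y = x + rng.normal(0.0, 0.1, size=n)    # nu-transform: relocate each point
    counts[i] = np.sum((y >= 0.4) & (y <= 0.6))

print(np.mean(counts), np.var(counts))      # approximately equal (Poisson)
```

The same simulation with a non-Poisson input process would generally show mean and variance diverging, which is one quick empirical way to distinguish Cox processes from Poisson ones.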
References
Point processes | Nu-transform | [
"Mathematics"
] | 268 | [
"Point processes",
"Point (geometry)"
] |
58,097,666 | https://en.wikipedia.org/wiki/Output%20signal%20switching%20device | An output signal switching device (OSSD) is an electronic device used as part of the safety system of a machine. It provides a coded signal which, when interrupted due to a safety event, signals the machine to shut down. It works by converting the standard direct current supply, usually 24 volts, into two pulsed and out-of-phase signals. The benefit of this is to avoid the possibility of a stray signal keeping the machine operating while actually in an unsafe condition.
Technical description
The device usually acts as the interface of a sensor (such as a light curtain) designed to signal a safety-related event, typically the light curtain's beam being "broken". OSSD signals are the outputs from the protective device (light curtain or scanner) to a safety relay. OSSD outputs are typically semiconductor or transistor outputs, as opposed to relay or contact-type outputs. There are usually two independent channels, called OSSD1 and OSSD2.
The non-tripped state is typically 24 VDC, and the tripped state (when the safety barrier has been violated) 0 VDC. If a wire were to break between the light curtain and the safety relay, the safety relay would trip to the safe state.
The OSSD outputs are self-checked. In the non-tripped state, the outputs periodically pulse low. The protective device checks the output to make sure it does indeed go low when commanded. If not, the output may have failed or be shorted to 24 V somewhere else. Between OSSD1 and OSSD2 the pulse intervals are staggered to check for crisscrossed wiring between the two.
The technology relies on two independent channels carrying the same information output by the device.
The OSSD technology and a classification of timing and other properties are described in the "Position Paper CB24I" issued by ZVEI - German Electrical and Electronic Manufacturers' Association.
OSSD signals are typically of Interface type C as described in CB24I.
Idle signal is 24 V, periodically shortly pulsed to 0 V (pulses are not synchronous) in order for the receiver to ensure no shortcut to either 0 V or 24 V.
Active signal is issued when both lines present 0 V; a single line presenting 0 V for a duration longer than the test pulses is sufficient to signal an event.
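The self-check and trip logic described above can be sketched in a few lines of code. This is a hypothetical illustration only: all function names, voltage thresholds, and the test-pulse width are invented for the sketch, and real values are device-specific per the CB24I classification.

```python
# Hypothetical sketch of the OSSD self-check and trip logic (all names,
# thresholds and timings are invented; real values are device-specific).
# Each channel idles near 24 V and is periodically pulsed to 0 V.
TEST_PULSE_MS = 0.3  # assumed test-pulse width for this sketch

def pulse_check(samples_v):
    """During its own test pulse the output must actually drop low;
    a channel stuck near 24 V indicates a short to 24 V elsewhere."""
    return any(v < 5.0 for v in samples_v)

def tripped(ossd1_v, ossd2_v, low_duration_ms):
    """A single line at 0 V for longer than a test pulse is sufficient
    to signal a safety event."""
    return low_duration_ms > TEST_PULSE_MS and (ossd1_v < 5.0 or ossd2_v < 5.0)

print(pulse_check([24.0, 0.2, 24.0]))   # True: output dropped low as commanded
print(tripped(0.0, 24.0, 5.0))          # True: one line low for 5 ms -> trip
print(tripped(24.0, 24.0, 0.0))         # False: both lines idle high
```

Note how the logic errs toward the safe state: a broken wire reads as 0 V, which is treated as a trip rather than as a healthy signal.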
Some related terms:
Electrosensitive protective equipment (ESPE) - a device such as a light curtain, safety scanner, or gate position sensor. The ESPE has OSSD outputs.
External Device Monitor (EDM). The device issuing the OSSD signals may have an EDM input. The EDM is used to verify that the controlled device (safety relay) did indeed open when the OSSD signals were dropped. The safety relay has normally closed contacts, which close when the relay is de-energized, thereby turning on the EDM input.
See also
Automation
Safety engineering
References
European patent EP 2 362 408 B1 accorded to Rockwell Automation Germany GmbH & Co. KG, with chronograms and examples.
Article from the review Instrumentation and Control (South Africa) explaining use cases.
Safety engineering
Industrial automation | Output signal switching device | [
"Engineering"
] | 648 | [
"Systems engineering",
"Safety engineering",
"Automation",
"Industrial engineering",
"Industrial automation"
] |
58,098,110 | https://en.wikipedia.org/wiki/MIS416 | MIS416 is an experimental drug developed by Innate Immunotherapeutics which underwent clinical trials to treat secondary progressive multiple sclerosis. It is derived from the bacteria that causes acne and targets myeloid cells through TLR9 and NOD2. In one of its first rounds of clinical trials, the drug was shown to be "safe and well tolerated", with 80% of secondary-progressive multiple sclerosis patients exhibiting more than 30% improvement in at least one area of their MS status. However, Phase II clinical trials were unable to prove that the drug provided a benefit to patients. It is also being researched as a potential treatment for cancer.
Development
MIS416 is a microparticle derived from the cytoskeleton of P. acnes, a species of bacteria present on the skin of most adults that causes acne.
MIS416 was originally developed as a vaccine adjuvant, a component of vaccines that helps to activate an immune response against the vaccine target. Bacteria-derived microparticles have several advantages over traditional adjuvants related both to their size and biological properties.
Because MIS416 is engulfed by immune cells, it is being investigated as an immunotherapy-based treatment for solid tumors.
The drug was used to treat multiple sclerosis under a compassionate use law in New Zealand before clinical trials began. It is administered as an intravenous infusion. In 2017, clinical trials in people with secondary progressive multiple sclerosis failed to meet the primary endpoint of slowed progression of the disease.
Phase II trial and aftermath
On June 22, 2017, Innate Immunotherapeutics announced that clinical trials undertaken to evaluate the efficacy of MIS416 in managing secondary progressive multiple sclerosis (SPMS) had "failed to show any clinically meaningful benefit or statistical significance". As a result, the company's stock crashed on the Australian Securities Exchange, dropping by 92 percent.
US Congressman Chris Collins, a member of the company's board of directors and 17-percent stock holder, was subsequently indicted on charges of insider trading in connection with the poor clinical trial results. Collins had allegedly obtained word from the company about the results and informed his son, Cameron Collins, who immediately sold his US stock. Cameron allegedly tipped off shareholder Stephen Zarsky, who informed three others, thus avoiding a combined loss of $768,000 in stock. Zarsky was indicted together with Collins and his son.
References
Abandoned drugs
Multiple sclerosis
Adjuvants | MIS416 | [
"Chemistry"
] | 519 | [
"Drug safety",
"Abandoned drugs"
] |
58,098,428 | https://en.wikipedia.org/wiki/No.%201%20Geisha | No. 1 Geisha was a legal brothel (ranch) and massage parlor in Elko, Nevada. The women who worked there were of Asian heritage. It was previously known as the Mona Lisa Ranch and CharDon's Club.
In January 2011, a proprietor of the brothel was sentenced to prison time in a Federal court for operating an illegal massage parlor in Seattle in addition to the legal operation in Nevada. The sentencing memorandum stated that “While on pre-trial release, [the owner of No. 1 Geisha] attempted to recruit women to work at the legal brothel in Elko ... He claimed that he needed to travel to Nevada to do repair work on his home; yet he was reported by the Elko Police Department and a confidential source that he was in Elko to recruit a ‘lessee’ and additional employees for the legal brothel.” The federal government seized close to $50,000 in assets from the legal brothel in Nevada in relation to the illegal operations in California and Washington.
See also
List of brothels in Nevada
References
Brothels in Nevada
Buildings and structures in Elko County, Nevada
Asian-American culture in Nevada | No. 1 Geisha | [
"Biology"
] | 233 | [
"Behavior",
"Sexuality stubs",
"Sexuality"
] |
58,098,655 | https://en.wikipedia.org/wiki/Pirkko%20Eskola | Pirkko Eskola is a Finnish physicist. She discovered the chemical elements Rutherfordium and Dubnium whilst working at the Lawrence Berkeley National Laboratory.
Research
Eskola was a student of at the University of Helsinki. She worked on heavy-ion physics. In 1961 Eskola demonstrated that the half-life of Nobelium was 25 seconds. Eskola joined Lawrence Berkeley National Laboratory in 1968 and stayed until 1972. She worked with Albert Ghiorso, James Andrew Harris and her husband, . In 1969 she was part of the team that discovered Rutherfordium by bombarding californium-249 with carbon-12. In 1970 she discovered Dubnium using the Heavy Ion Linear Accelerator, bombarding a target of californium-249 with nitrogen nuclei. There was debate between Russia and America over who first discovered Rutherfordium. Eskola studied the alpha decay of Nobelium-255 and -257. She went on to work on beta-unstable, alpha-particle-emitting nuclei. Using alpha-particle spectroscopy she studied lawrencium isotopes 255–260 and mendelevium isotopes 248–252.
Her husband, Kari Eskola, was a professor of physics at the University of Helsinki. She went on to have a career in science education. She was an editor of the Finnish Physical Society journal Physica Fennica. Eskola was a member of the American Physical Society Committee for Women in Physics.
References
21st-century Finnish physicists
Nuclear physicists
Women nuclear physicists
Finnish expatriates in the United States
Living people
Year of birth missing (living people)
University of Helsinki alumni | Pirkko Eskola | [
"Physics"
] | 328 | [
"Nuclear physicists",
"Nuclear physics"
] |
58,100,140 | https://en.wikipedia.org/wiki/Kim%20Fortun | Kim Fortun, an American anthropologist, is a professor at University of California Irvine's department of anthropology. Her interests extend also to science and technology studies with a focus on environmental risk and disaster. From 2017 to 2019, she has served as the president of the Society for Social Studies of Science (4S).
In 2003, Fortun's first book, Advocacy After Bhopal: Environmentalism, Disaster, New World Orders, was awarded the Sharon Stephens Prize by the American Ethnological Society. From 2005 to 2010, she edited the Journal of Cultural Anthropology. Fortun currently helps lead multiple collaborative projects, including The Asthma Files and the Platform for Experimental and Collaborative Ethnography (PECE).
Selected works
(Awarded the Sharon Stephens Prize, 2003).
References
External links
Living people
21st-century American anthropologists
American women anthropologists
Cultural anthropologists
Science and technology studies scholars
Year of birth missing (living people)
21st-century American women | Kim Fortun | [
"Technology"
] | 194 | [
"Science and technology studies",
"Science and technology studies scholars"
] |
58,100,160 | https://en.wikipedia.org/wiki/Mixed%20Groups%20of%20Reconstruction%20Machines | Mixed Groups of Reconstruction Machines (), commonly known by the acronym MOMA, was a Greek military construction organization which was active from 1957 to 1992.
It was established in 1957, with sections based in Athens, Thessaloniki, Heraklion, Patras, Lamia, Larissa, and Ioannina, and comprised both permanent and conscript personnel from the Engineers arm of the Hellenic Army, as well as contracted civilian engineers, drivers, workers, and other personnel. Its main purpose was the construction of infrastructure (bridges, airports, roads, etc.) in the country following the extensive devastation of World War II, the Axis occupation of Greece, and the Greek Civil War. It was abolished in 1992, but in 2015, a similar service, under the name "MOMKA" (Μονάδα Μελετών και Κατασκευών) was established.
References
History of the Hellenic Army
1957 establishments in Greece
History of Greece (1949–1974)
Engineering units and formations
Military units and formations of the Hellenic Army | Mixed Groups of Reconstruction Machines | [
"Engineering"
] | 218 | [
"Engineering units and formations",
"Military engineering"
] |
58,100,243 | https://en.wikipedia.org/wiki/Spider%20shot | Spider shot was a variation of chain shot with multiple chains.
See also
Round shot
Heated shot
Canister shot
Grapeshot
References
Projectiles
Artillery ammunition
Balls
Chains
Metallic objects | Spider shot | [
"Physics"
] | 34 | [
"Metallic objects",
"Physical objects",
"Matter"
] |
58,100,944 | https://en.wikipedia.org/wiki/Octamethylene-bis%285-dimethylcarbamoxyisoquinolinium%20bromide%29 | Octamethylene-bis(5-dimethylcarbamoxyisoquinolinium bromide) (4-673-745-01) is an extremely potent carbamate nerve agent. It works by inhibiting acetylcholinesterase, causing acetylcholine to accumulate. Since the agent molecule is positively charged, it does not cross the blood brain barrier very well.
Toxicity
Octamethylene-bis(5-dimethylcarbamoxyisoquinolinium bromide) is an extremely toxic nerve agent that can be lethal even at extremely low doses. The LD50 in mice and rabbits is 16 μg/kg and 6 μg/kg, respectively.
Synthesis
5-Hydroxyisoquinoline and dimethylcarbamoyl chloride are heated on a steam bath for 2 hours. The mixture is then cooled and treated with benzene. The resulting solid is then dissolved in water. Sodium hydroxide is added to make the solution basic. The solution is extracted with chloroform and then dried with magnesium sulfate. The solvent is evaporated and the solid residue is then recrystallized from petroleum ether. The resulting product, 5-dimethylcarbamoxyisoquinoline, is then mixed with 1,8-dibromooctane in acetonitrile and refluxed for 8 hours. After cooling, the precipitate is filtered and recrystallized from acetonitrile. The product is then dried in vacuo for 14 hours at room temperature, resulting in the final product.
See also
EA-3887
EA-3966
EA-3990
EA-4056
T-1123
VX (nerve agent)
References
Carbamate nerve agents
Isoquinolines
Quaternary ammonium compounds
Acetylcholinesterase inhibitors
Bromides
Biscarbamates
Bisquaternary anticholinesterases
Aromatic carbamates | Octamethylene-bis(5-dimethylcarbamoxyisoquinolinium bromide) | [
"Chemistry"
] | 406 | [
"Bromides",
"Salts"
] |
58,103,605 | https://en.wikipedia.org/wiki/Bulgarian%20Astronautical%20Society | The Bulgarian Astronautical Society () is the oldest entity in Bulgaria dedicated to space exploration and space advocacy.
The Society's founders, Bulgarian Air Force Captain Docho Haralampiev and engineer Georgi Asparuhov, wanted to introduce the wider public to the benefits of space exploration after the launch of Sputnik 1. Haralampiev also sought to raise awareness about the potential of human spaceflight. The two initiated a series of meetings with Bulgarian Army generals, pilots, aviation doctors, engineers, Bulgarian Communist Party members and Bulgarian Academy of Sciences representatives.
The Society was established on 8 December 1957, two months after the launch of Sputnik 1. Its ranks swelled when numerous engineers and volunteers joined the organisation just days after its establishment. Because of the rigid legal environment in the People's Republic of Bulgaria at the time, the Society was established as an Astronautics Section of the Defence Assistance Organisation.
It joined the International Astronautical Federation in 1958.
See also
List of astronomical societies
References
External links
Space organizations
Non-profit organizations based in Bulgaria
Astronomy organizations
Space advocacy organizations
1957 establishments in Bulgaria
Scientific organizations established in 1957
Space program of Bulgaria | Bulgarian Astronautical Society | [
"Astronomy"
] | 232 | [
"Space advocacy organizations",
"Astronomy organizations",
"Space organizations"
] |
58,103,878 | https://en.wikipedia.org/wiki/Peano%20kernel%20theorem | In numerical analysis, the Peano kernel theorem is a general result on error bounds for a wide class of numerical approximations (such as numerical quadratures), defined in terms of linear functionals. It is attributed to Giuseppe Peano.
Statement
Let be the space of all functions that are differentiable on and of bounded variation on , and let be a linear functional on . Assume that annihilates all polynomials of degree , i.e. Suppose further that for any bivariate function with , the following is valid: and define the Peano kernel of as using the notation The Peano kernel theorem states that, if , then for every function that is times continuously differentiable, we have
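The formulas in this statement did not survive extraction. For orientation, the conventional form of the theorem (standard notation, with $\nu$ the degree of polynomials annihilated by the functional $L$ on $[a,b]$; supplied here as a reference, not recovered from the article) is:

```latex
K(\theta) = L_x\!\left[(x-\theta)_+^{\nu}\right],
\qquad
L[f] = \frac{1}{\nu!}\int_a^b K(\theta)\, f^{(\nu+1)}(\theta)\,\mathrm{d}\theta .
```

Here $(x-\theta)_+^{\nu}$ denotes the truncated power, equal to $(x-\theta)^{\nu}$ for $x \ge \theta$ and $0$ otherwise, and $L_x$ indicates that the functional acts on the variable $x$ with $\theta$ held fixed.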
Bounds
Several bounds on the value of follow from this result:
where , and are the taxicab, Euclidean and maximum norms respectively.
Application
In practice, the main application of the Peano kernel theorem is to bound the error of an approximation that is exact for all . The theorem above follows from the Taylor polynomial for with integral remainder:
defining as the error of the approximation, using the linearity of together with exactness for to annihilate all but the final term on the right-hand side, and using the notation to remove the -dependence from the integral limits.
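As a concrete illustration (a hedged sketch, not from the article; the trapezoidal rule is a standard textbook application, and the test function and grid size are arbitrary choices): for the error functional of the two-point trapezoidal rule on [0, 1], which annihilates polynomials of degree at most 1, the Peano kernel works out to K(θ) = −θ(1−θ)/2, and the identity L[f] = ∫ K(θ) f″(θ) dθ can be verified numerically:

```python
import numpy as np

def trap(y, x):
    """Composite trapezoidal integration (written out to avoid
    version-specific NumPy function names)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Error functional of the two-point trapezoidal rule on [0, 1]:
#   L[f] = integral of f over [0, 1]  minus  (f(0) + f(1)) / 2.
# It annihilates polynomials of degree <= 1, and its Peano kernel is
#   K(theta) = -theta * (1 - theta) / 2.
x = np.linspace(0.0, 1.0, 200001)

def L(f):
    return trap(f(x), x) - 0.5 * (f(0.0) + f(1.0))

K = -x * (1.0 - x) / 2.0

f = lambda t: t**3        # test function; exact L[f] = 1/4 - 1/2 = -1/4
fpp = lambda t: 6.0 * t   # its second derivative

print(L(f), trap(K * fpp(x), x))   # both approximately -0.25
```

Integrating |K| over [0, 1] gives 1/12, recovering the familiar bound |L[f]| ≤ (1/12) max|f″| for the trapezoidal rule on a unit interval.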
See also
Divided differences
References
Numerical analysis | Peano kernel theorem | [
"Mathematics"
] | 268 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical theorems",
"Computational mathematics",
"Mathematical relations",
"Numerical analysis",
"Mathematical problems",
"Approximations"
] |
58,103,934 | https://en.wikipedia.org/wiki/Mapping%20theorem%20%28point%20process%29 | The mapping theorem is a theorem in the theory of point processes, a sub-discipline of probability theory. It describes how a Poisson point process is altered under measurable transformations. This allows construction of more complex Poisson point processes out of homogeneous Poisson point processes and can, for example, be used to simulate these more complex Poisson point processes in a similar manner to inverse transform sampling.
Statement
Let be locally compact and Polish and let
be a measurable function. Let be a Radon measure on and assume that the pushforward measure
of under the function is a Radon measure on .
Then the following holds: If is a Poisson point process on with intensity measure , then is a Poisson point process on with intensity measure .
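As a hedged illustration of the theorem (not part of the article; the rate, the map f(x) = x², and the counting window are arbitrary choices for this sketch), mapping a homogeneous Poisson process on [0, 1] through a measurable function yields a Poisson process whose intensity measure is the pushforward, which can be checked empirically by counting points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Homogeneous Poisson process on [0, 1] with rate lam, mapped through
# f(x) = x**2.  By the mapping theorem the image points form a Poisson
# process whose intensity measure is the pushforward of lam * Lebesgue:
# the mean number of mapped points in [0, b] equals
# lam * |f^{-1}([0, b])| = lam * sqrt(b).
lam, trials = 50.0, 2000
counts = []
for _ in range(trials):
    n = rng.poisson(lam)                    # total number of points
    x = rng.uniform(0.0, 1.0, size=n)       # homogeneous process on [0, 1]
    y = x**2                                # mapped process
    counts.append(np.sum(y <= 0.25))        # count in [0, 0.25]

print(np.mean(counts))  # close to lam * sqrt(0.25) = 25
```

This is the sense in which the theorem supports simulation: sample the easy homogeneous process first, then push the points through the map, much as inverse transform sampling pushes uniform variates through a quantile function.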
References
Poisson point processes | Mapping theorem (point process) | [
"Mathematics"
] | 156 | [
"Point processes",
"Point (geometry)",
"Poisson point processes"
] |
58,104,462 | https://en.wikipedia.org/wiki/Lysenin | Lysenin is a pore-forming toxin (PFT) present in the coelomic fluid of the earthworm Eisenia fetida. Pore-forming toxins are a group of proteins that act as virulence factors of several pathogenic bacteria. Lysenin proteins are chiefly involved in the defense against cellular pathogens. Following the general mechanism of action of PFTs lysenin is segregated as a soluble monomer that binds specifically to a membrane receptor, sphingomyelin in the case of lysenin. After attaching to the membrane, the oligomerization begins, resulting in a nonamer on top of membrane, known as a prepore. After a conformational change, which could be triggered by a decrease of pH, the oligomer is inserted into the membrane in the so-called pore state.
Monomer
Lysenin is a protein produced in the coelomocyte-leucocytes of the earthworm Eisenia fetida. This protein was first isolated from the coelomic fluid in 1996 and named lysenin (from lysis and Eisenia). Lysenin is a relatively small water-soluble molecule with a molecular weight of 33 kDa. Using X-ray crystallography, lysenin was classified as a member of the Aerolysin protein family by structure and function. Structurally, each lysenin monomer consists of a receptor binding domain (grey globular part on right of Figure 1) and a Pore Forming Module (PFM); domains shared throughout the aerolysin family. The lysenin receptor binding domain shows three sphingomyelin binding motifs. The Pore Forming Module contains the regions that undergo large conformational changes to become the β-barrel in the pore.
Membrane receptors
The natural membrane target of lysenin is sphingomyelin, an animal plasma-membrane lipid located mainly in the outer leaflet; binding involves at least three of its phosphocholine (PC) groups. Sphingomyelin is usually found associated with cholesterol in lipid rafts. Cholesterol, which enhances oligomerization, provides a stable platform with high lateral mobility where monomer-monomer encounters are more probable. PFTs have been shown to be able to remodel the membrane structure, sometimes even mixing lipid phases.
The region of the lysenin pore β-barrel expected to be immersed in the hydrophobic region of the membrane is the 'detergent belt', the 3.2 nm high region occupied by detergent in Cryogenic Electron Microscopy (Cryo-EM) studies of the pore. Sphingomyelin/cholesterol bilayers, on the other hand, are about 4.5 nm high. This difference in height between the detergent belt and the bilayer implies a bending of the membrane in the region surrounding the pore, called a negative mismatch. This bending results in a net attraction between pores that induces pore aggregation.
Binding, oligomerization and insertion
Membrane binding is a requisite to initiate PFT oligomerization. Lysenin monomers bind specifically to sphingomyelin via the receptor binding domain. The final lysenin oligomer is constituted by nine monomers without quantified deviations. When lysenin monomers bind to sphingomyelin-enriched membrane regions, they provide a stable platform with a high lateral mobility, hence favouring the oligomerization. As with most PFTs, lysenin oligomerization occurs in a two-step process, as was recently imaged.
The process begins with monomers being adsorbed into the membrane by specific interactions, resulting in an increased concentration of monomers. This increase is promoted by the small area where the membrane receptor accumulates since the majority of PFT membrane receptors are associated with lipid rafts. Another side effect, aside from the increase of monomer concentration, is the monomer-monomer interaction. This interaction increases lysenin oligomerization. After a critical threshold concentration is reached, several oligomers are formed simultaneously, although sometimes these are incomplete. In contrast to PFTs of the cholesterol-dependent cytolysin family, the transition from incomplete lysenin oligomers to complete oligomers has not been observed.
A complete oligomerization results in the so-called prepore state, a structure on the membrane. Determining the prepore's structure by X-ray or Cryo-EM is a challenging process that so far has not produced any results. The only available information about the prepore structure was provided by Atomic Force Microscopy (AFM). The measured prepore height was 90 Å and the width 118 Å, with an inner pore of 50 Å. A model of the prepore was built by aligning the monomer structure () with the pore structure () by their receptor-binding domains (residues 160 to 297). A recent study of aerolysin suggests that the currently accepted model for the lysenin prepore should be revisited, according to the new available data on the aerolysin insertion.
A conformational change transforms the PFM into the transmembrane β-barrel, leading to the pore state. The trigger mechanism for the prepore-to-pore transition in lysenin depends on three glutamic acid residues (E92, E94 and E97), and is activated by a decrease in pH, from physiological conditions to the acidic conditions reached after endocytosis, or an increase in calcium extracellular concentration. These three glutamic acids are located in an α-helix that forms part of the PFM, and glutamic acids are found in aerolysin family members in its PFMs. Such a conformational change produces a decrease in the oligomer height of 2.5 nm according to AFM measurements. The main dimensions, using lysenin pore X-ray structure, are height 97 Å, width 115 Å and the inner pore of 30 Å. However, complete oligomerization into the nonamer is not a requisite for the insertion, since incomplete oligomers in the pore state can be found. The prepore to pore transition can be blocked in crowded conditions, a mechanism that could be general to all β-PFTs. The first hint of crowding effect on prepore to pore transition was given by congestion effects in electrophysiology experiments.
Insertion consequences
The ultimate consequences of lysenin pore formation are not well documented; however, it is thought to induce apoptosis via three possible hypotheses:
Breaking the sphingomyelin asymmetry between the two leaflets of the lipid bilayer by punching holes in the membrane and inducing lipid flip-flop (reorientation of a lipid from one leaflet of a membrane bilayer to the other).
Increasing the calcium concentration in the cytoplasm.
Decreasing the potassium concentration in the cytoplasm.
Biological role
The biological role of lysenin remains unknown. It has been suggested that lysenin may play a role as a defence mechanism against attackers such as bacteria, fungi or small invertebrates. However, lysenin's activity depends on binding to sphingomyelin, which is not present in the membranes of bacteria, fungi or most invertebrates. Rather, sphingomyelin is mainly present in the plasma membranes of chordates. Another hypothesis is that the earthworm, which is able to expel coelomic fluid under stress, uses lysenin to induce avoidance behaviour in its vertebrate predators (such as birds, hedgehogs or moles). If that is the case, the expelled lysenin might be more effective if the coelomic fluid reaches the eye, where the concentration of sphingomyelin is ten times higher than in other body organs. A complementary hypothesis is that the pungent smell of the coelomic fluid - giving the earthworm its specific epithet foetida - is an anti-predator adaptation. However, it remains unknown whether lysenin contributes to avoidance of Eisenia by predators.
Applications
Lysenin's conductive properties have been studied for years. Like most pore-forming toxins, lysenin forms a non-specific channel that is permeable to ions, small molecules, and small peptides. There have also been over three decades of studies into finding suitable pores for converting into nanopore sequencing systems that can have their conductive properties tuned by point mutation. Owing to its binding affinity for sphingomyelin, lysenin (or just the receptor binding domain) has been used as a fluorescence marker to detect the sphingomyelin domain in membranes.
References
External links
https://www.theses.fr/2017AIXM0124
Protein toxins | Lysenin | [
"Chemistry"
] | 1,853 | [
"Protein toxins",
"Toxins by chemical classification"
] |
58,104,653 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Watch | The Samsung Galaxy Watch is a smartwatch developed by Samsung Electronics. It was announced on 9 August 2018. The Galaxy Watch was scheduled for availability in the United States starting on 24 August 2018, at select carriers and retail locations in South Korea on 31 August 2018, and in additional select markets on 14 September 2018.
On 27 February 2021, shortly after the Galaxy Watch Active2 and Galaxy Watch3 received an update unlocking the ECG feature for European countries, Samsung began delivering Galaxy Watch3-intrinsic features to the original Galaxy Watch and Watch Active.
Specifications
References
External links
Galaxy Watch on Samsung Newsroom
Galaxy Watch 46mm (Bluetooth) on samsung.com
Consumer electronics brands
Products introduced in 2018
Smartwatches
Samsung wearable devices | Samsung Galaxy Watch | [
"Technology"
] | 149 | [
"Smartwatches"
] |
58,105,173 | https://en.wikipedia.org/wiki/NGC%205018 | NGC 5018 is an elliptical galaxy located in the constellation of Virgo at an approximate distance of 132.51 Mly. NGC 5018 was discovered in 1788 by William Herschel.
Three supernovae have been observed in NGC 5018: SN 2002dj (type Ia, mag. 17), SN 2017isq (type Ia, mag. 15.3), and SN 2021fxy (type Ia, mag. 13.9).
See also
Galaxy
References
External links
5018
Virgo (constellation)
Elliptical galaxies
Astronomical objects discovered in 1788 | NGC 5018 | [
"Astronomy"
] | 119 | [
"Virgo (constellation)",
"Constellations"
] |
58,105,226 | https://en.wikipedia.org/wiki/Cavg | Cavg is the average concentration of a drug in the central circulation during a dosing interval at steady state. It is calculated by
Cavg = AUCτ / τ
where AUCτ is the area under the concentration–time curve over one dosing interval and τ is the dosing interval.
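In code, this amounts to estimating the AUC over one dosing interval (here with the trapezoidal rule) and dividing by the interval length; a minimal illustrative sketch (the function name and sample values are hypothetical):

```python
def c_avg(times, concentrations, tau):
    """Average steady-state concentration over one dosing interval.

    Approximates AUC over the interval with the trapezoidal rule, then
    divides by tau, the dosing interval (same time units as `times`).
    """
    auc = sum(
        (t2 - t1) * (c1 + c2) / 2
        for (t1, c1), (t2, c2) in zip(
            zip(times, concentrations), zip(times[1:], concentrations[1:])
        )
    )
    return auc / tau

# A constant concentration of 10 over a 12-hour interval gives Cavg = 10.
print(c_avg([0, 6, 12], [10, 10, 10], 12))  # → 10.0
```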
See also
Area under the curve (pharmacokinetics)
Cmax (pharmacology)
References
Pharmacokinetic metrics | Cavg | [
"Chemistry"
] | 83 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
58,105,950 | https://en.wikipedia.org/wiki/Lower%20Mississippi%20water%20resource%20region | The Lower Mississippi water resource region is one of 21 major geographic areas, or regions, in the first level of classification used by the United States Geological Survey to divide and sub-divide the United States into successively smaller hydrologic units. These geographic areas contain either the drainage area of a major river, or the combined drainage areas of a series of rivers.
The Lower Mississippi region, which is listed with a 2-digit hydrologic unit code (HUC) of 08, has an approximate size of , and consists of 9 subregions, which are listed with the 4-digit HUCs 0801 through 0809.
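The HUC numbering is hierarchical: a unit's code begins with the code of every larger unit that contains it, so region 08's nine subregions carry the 4-digit codes 0801 through 0809. A small illustrative sketch of this prefix relationship (the helper name is hypothetical, not an official API):

```python
REGION = "08"  # Lower Mississippi region (2-digit HUC)
subregions = [f"{REGION}{i:02d}" for i in range(1, 10)]

def is_within(huc: str, parent: str) -> bool:
    """True if the unit with code `huc` nests inside the unit with code
    `parent`: HUCs are hierarchical, so a unit's code begins with the
    codes of all units that contain it."""
    return huc.startswith(parent)

print(subregions[0], subregions[-1])  # → 0801 0809
print(is_within("0803", REGION))      # → True
print(is_within("1101", REGION))      # → False (a code from another region)
```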
This region includes the drainage within the United States of: (a) the Mississippi River below its confluence with the Ohio River, excluding the Arkansas, Red, and White River basins above the points of highest backwater effect of the Mississippi River in those basins; and (b) coastal streams that ultimately discharge into the Gulf of Mexico from the Pearl River Basin boundary to the Sabine River and Sabine Lake drainage boundary. The region includes parts of Arkansas, Kentucky, Louisiana, Mississippi, Missouri, and Tennessee.
List of water resource subregions
See also
List of rivers in the United States
Water resource region
References
Lists of drainage basins
Drainage basins
Watersheds of the United States
Regions of the United States
Resource
Water resource regions | Lower Mississippi water resource region | [
"Environmental_science"
] | 265 | [
"Hydrology",
"Drainage basins"
] |
58,106,220 | https://en.wikipedia.org/wiki/Journal%20of%20Open%20Source%20Software | The Journal of Open Source Software is a peer-reviewed open-access scientific journal covering open-source software from any research discipline. The journal was founded in 2016 by editors Arfon Smith, Kyle Niemeyer, Dan Katz, Kevin Moerman, and Karthik Ram. The editor-in-chief is Arfon Smith (Space Telescope Science Institute). The journal is a sponsored project of NumFOCUS and an affiliate of the Open Source Initiative. The journal uses GitHub as its publishing platform.
The journal was established in May 2016 and in its first year published 111 articles. It has been discussed by its editors in several peer-reviewed papers which describe its publishing model and its effectiveness.
Abstracting and indexing
The journal is abstracted and indexed in the Astrophysics Data System and in the DBLP computer science bibliography online database.
References
Further reading
External links
Creative Commons Attribution-licensed journals
Academic journals established in 2016
Continuous journals
Software engineering publications
English-language journals
Computer science journals | Journal of Open Source Software | [
"Technology",
"Engineering"
] | 204 | [
"Software engineering publications",
"Works about computing",
"Software engineering"
] |
58,106,740 | https://en.wikipedia.org/wiki/C17%20%28C%20standard%20revision%29 | C17, formally ISO/IEC 9899:2018, is an open standard for the C programming language, prepared in 2017 and published in July 2018. It replaced C11 (standard ISO/IEC 9899:2011), and is superseded by C23 (ISO/IEC 9899:2024) since October 2024. Since it was under development in 2017, and officially published in 2018, C17 is sometimes referred to as C18.
Changes from C11
C17 fixes numerous minor defects in C11 without introducing new language features.
The __STDC_VERSION__ macro is increased to the value 201710L.
For a detailed list of changes from the previous standard, see Clarification Request Summary for C11.
Compiler support
List of compilers supporting C17:
GCC 8.1.0
LLVM Clang 7.0.0
IAR EWARM v8.40.1
Microsoft Visual C++ VS 2019 (16.8)
Pelles C 9.00
See also
C++23, C++20, C++17, C++14, C++11, C++03, C++98, versions of the C++ programming language standard
Compatibility of C and C++
References
Further reading
N2176 (final draft of C17 standard); WG14; 2017.
ISO/IEC 9899:2018 (official C17 standard); ISO; 2018.
External links
C Language Working Group 14 (WG14) Documents
C (programming language)
Programming language standards
IEC standards
ISO standards | C17 (C standard revision) | [
"Technology"
] | 330 | [
"Computer standards",
"Programming language standards",
"IEC standards"
] |
59,739,095 | https://en.wikipedia.org/wiki/Yogue%20orthonairovirus | Yogue orthonairovirus, also called Yogue virus, is a species of virus in the genus Orthonairovirus. Its only known host is Rousettus aegyptiacus.
References
Nairoviridae | Yogue orthonairovirus | [
"Biology"
] | 49 | [
"Virus stubs",
"Viruses"
] |
59,739,220 | https://en.wikipedia.org/wiki/Society%20for%20Ecological%20Restoration | The Society for Ecological Restoration (SER) is a conservation organization based in the United States, supporting a "global community of restoration professionals that includes researchers, practitioners, decision-makers, and community leaders". The organization was founded in 1988. The mission of the organization is to: "advance the science, practice and policy of ecological restoration to sustain biodiversity, improve resilience in a changing climate, and re-establish an ecologically healthy relationship between nature and culture."
SER produces definitions and standards for the practice of ecological restoration, including the SER International Primer on Ecological Restoration (2004), International Standards for the Practice of Ecological Restoration (2016), and a certification program for professionals: Certified Ecological Restoration Practitioner (CERP).
References
Nature conservation organizations based in the United States
Ecological restoration | Society for Ecological Restoration | [
"Chemistry",
"Engineering"
] | 161 | [
"Ecological restoration",
"Environmental engineering"
] |
59,739,475 | https://en.wikipedia.org/wiki/Isodiazene | In organic chemistry, an isodiazene, also known by the incorrectly constructed (but commonly used) name 1,1-diazene or systematic name diazanylidene, is an organic derivative of the parent isodiazene (H2N+=N–, also called 1,1-diimide) with general formula R1R2N+=N–. The functional group has two major resonance forms, a diazen-2-ium-1-ide form, and an aminonitrene form:
Although isodiazenes are formally isoelectronic with ketones and aldehydes, the reactivity of this exotic functional group is very different. They are generally prepared by oxidation of the hydrazine (R2N–NH2), reduction of the 1,1-diazene oxide (R2N–N=O), 1,1-elimination of MX from R2N–NMX (M = Na, K; X = SO2Ar), or treatment of secondary amines with Angeli's salt, Na2N2O3, in the presence of acid. Isodiazenes participate in cycloaddition reactions with alkenes to generate N-aminoaziridines. In the absence of other reactants, they undergo reactions in which N2 is eliminated to give an organic residue or residues through both concerted and nonconcerted pathways. Cyclic isodiazenes in particular readily undergo cycloelimination and cheletropic elimination reactions. Some of these reactions are believed to be concerted pericyclic processes, as evidenced by stereospecificity that is consistent with the conservation of orbital symmetry.
The absence of cyclobutane from the decomposition of the isodiazene derived from the saturated 5-membered azacycle is evidence against radical intermediates, and the process is also believed to be concerted and pericyclic.
Due to the facile elimination of N2, most isodiazenes can only be isolated in a matrix at cryogenic temperatures. A small number of highly hindered derivatives with tertiary R groups (e.g., R1= R2 = t-Bu, stable at –127 °C, decomposes at –90 °C; R1—R2 = C(CH3)2CH2CH2CH2(CH3)2C, stable up to –78 °C) are isolable by preparation and chromatography or filtration at low temperature as red solutions.
Isodiazenes have been observed to serve as ligands in transition metals complexes, including those of molybdenum and vanadium.
See also
Diazene
Ylide
Nitrene
References
Functional groups
Reactive intermediates | Isodiazene | [
"Chemistry"
] | 577 | [
"Organic compounds",
"Reactive intermediates",
"Functional groups",
"Physical organic chemistry"
] |
59,741,110 | https://en.wikipedia.org/wiki/Storyful | Storyful (stylized as storyful.) is a social media intelligence agency headquartered in Dublin, Ireland that is a subsidiary of Rupert Murdoch's News Corp offering services such as social news monitoring, video licensing, and reputation risk management tools for corporate clients. The startup was launched in Dublin in 2010 by Mark Little, a former journalist with RTÉ News, as the first social media newswire: a content aggregator verifying news sources and online content. Storyful was acquired by Rupert Murdoch's News Corp in 2013 for US$25 million.
Background
Mark Little, who had worked as a television journalist for RTÉ One, founded startup Storyful in Dublin, Ireland, in 2010, as a service that "verified news sources and online content". According to Nieman Lab, Storyful had a reputation for content aggregation as a social news agency—finding, verifying, distributing, licensing, and commercializing user-generated content, social media and online content from social networking services, including videos about stories in the news, such as the Syrian Civil War, Arab Spring protests, as well as "smaller viral moments". Storyful aimed to provide authority through its verification and monitoring tools while providing authenticity through user-generated content.
On December 20, 2013, News Corp, which is owned by Rupert Murdoch, purchased Storyful for US$25 million and opened a New York office in the same building as Fox News' main studios.
Little left Storyful in 2015, and Gavin Sheridan, Storyful's director of innovation, left in 2014.
News Corp CEO Robert Thomson said that through Storyful, News Corp would "define the opportunities that the digital landscape presents, rather than simply adapt to them." After the acquisition, the company expanded its service to include "commercial and creative work".
After Murdoch acquired the company, losses "swelled" from 2014 through February 2018, requiring a series of cash injections from News Corp. During that time the company expanded aggressively around the globe, growing to a staff of about 200 worldwide, up from about 30 in 2014.
According to The Guardian, in 2016, journalists were encouraged by Storyful to use the social media monitoring software called Verify developed by Storyful. By installing Verify's web browser extension on their computers, Verify would inform the journalists when social media content had been "verified and cleared". The Guardian revealed that through the Verify plugin, dozens of staff in four offices had access to the journalists browsing activity without them knowing. This data allowed Storyful to actively monitor its own clients' activities on social media and to "turn it into an internal feed" at Storyful that "updates in real time".
In November 2018, when a video circulated by Infowars' Paul Joseph Watson appeared to prove that CNN's Jim Acosta's contact with a White House intern was a physical blow, Storyful was able to prove that the 15-second-long clip had been doctored.
According to a January 21, 2019 article in CNN Business, Rob McDonagh, the editor of Storyful's U.S. news team, had proven that one of the viral videos that served as a catalyst in the confrontation at the Lincoln Memorial during the January 18, 2019 Indigenous Peoples March was posted by a suspicious account under the handle @2020fight. McDonagh's team attempts to validate each video and post before adding it to Storyful's "digest", distinguishing true stories from those that are not. McDonagh reviewed previous content from @2020fight's account and found it suspicious because of its high follower count, its "highly polarized and yet inconsistent political messaging", an "unusually high rate of tweets", and "the use of someone else's image in the profile photo." Reporter Donie O'Sullivan said that the @2020fight video posted on January 18, which had 2.5 million views by January 22, was the one that "helped frame the news cycle".
The website currently offers a service through which video can be commercially brokered.
Services
Services include a newswire service—one of their "core pillars"—and social news monitoring. By February 2018, Storyful was developing "risk and reputation monitoring" services through which they would source and verify social news, fact-checking it and contextualising it for corporate clients. They were "developing tech tools" that their intelligence team can use to "explore obscure or closed networks", and they "track deviations in social conversations around brands and organisations and catch potential risks before they blow up. Like an alerts system." The company released a re-booted version of its Newswire platform in 2018. According to FORA, Storyful was developing new tools to combat fake news online.
Clients
When Storyful was acquired by News Corp in 2013, the company already had the Wall Street Journal, the BBC, New York Times, YouTube, ITN and Channel 4 News as clients. By 2018 their clients included CNN, ABC News, Fox News, The New York Times and the Washington Post in the United States, ABC in Australia, and all of News Corp's own publications. Most of their "reputation-conscious corporate customers" prefer not to be named.
Notes
References
Storyful
Technology companies established in 2010
Internet properties established in 2010
Irish companies established in 2010
Companies based in Dublin (city)
Digital marketing companies of Ireland
Online companies of Ireland
User-generated content
Social media
Social media management platforms | Storyful | [
"Technology"
] | 1,125 | [
"Computing and society",
"Social media"
] |
59,742,549 | https://en.wikipedia.org/wiki/Tight%20junction%20proteins | Tight junction proteins (TJ proteins) are molecules situated at the tight junctions of epithelial, endothelial and myelinated cells. This multiprotein junctional complex regulates the passage of ions, water and solutes through the paracellular pathway. It can also coordinate the movement of lipids and proteins between the apical and basolateral surfaces of the plasma membrane, and it conducts signaling molecules that influence the differentiation, proliferation and polarity of cells. The tight junction thus plays a key role in the maintenance of osmotic balance and in the trans-cellular transport of tissue-specific molecules. More than 40 different proteins are now known to be involved in these selective TJ channels.
Structure of tight junction
Morphologically, the tight junction is formed by transmembrane strands on the inner side of the plasma membrane, with complementary grooves on the outer side. This TJ strand network is composed of transmembrane proteins that interact with actin in the cytoskeleton and with submembrane proteins, which send signals into the cell. The complexity of the network depends on the cell type, and it can be visualized and analyzed by freeze-fracture electron microscopy, which shows the individual strands of the tight junction.
Function of tight junction proteins
TJ proteins can be divided into different groups according to their function or localization within the tight junction. They are mostly described in epithelia and endothelia, but also in myelinated cells. In the central and peripheral nervous systems, TJs are localized between glia and axons and within myelin sheaths, where they facilitate signaling. Some TJ proteins act as scaffolds that connect integral proteins with actin in the cytoskeleton. Others are able to crosslink junctional molecules or transport vesicles through the tight junction. Some submembrane proteins are involved in cell signaling and gene expression through specific binding to transcription factors. The most important tight junction proteins are occludin, the claudins and the JAM family, which establish the backbone of the tight junction and allow immune cells to pass through the tissue.
TJ proteins in epithelia and endothelia
The TJ proteins of epithelial and endothelial cells include occludin, claudin and tetraspanin, which occur in one of two different conformations. All of them are built from four transmembrane regions, with amino- and carboxyl-terminal domains oriented towards the cytoplasm. Occludin has two similar extracellular loops, whereas claudin and tetraspanin have one extracellular loop significantly longer than the other.
Occludin
Occludin (60 kDa) was the first identified component of the tight junction. This tetraspan membrane protein has two extracellular loops, intracellular amino- and carboxyl-terminal domains, and one short intracellular turn. The C-terminal domain of occludin binds directly to ZO-1, which interacts with actin filaments in the cytoskeleton. Occludin works as a transmitter to and from the tight junction because of its association with signaling molecules (PI3-kinase, PKC, YES, protein phosphatases 2A and 1). It also participates in the selective diffusion of solutes along a concentration gradient and in the transmigration of leukocytes across the endothelium and epithelium. Accordingly, overexpression of mutant occludin in epithelial cells breaks down the barrier function of the tight junction and changes the migration of neutrophils. Occludin cooperates directly or indirectly with members of the claudin family, and together they form the long strands of the tight junction.
Claudin
The claudin family comprises 24 members. Some of them have not been well characterized yet, but all are 20–27 kDa tetraspan proteins with a short intracellular C-terminal domain and two extracellular loops, the first of which is notably larger than the second. The C-terminal domain of claudins is required for their stability and targeting; it contains a PDZ-binding motif that allows claudins to bind PDZ-domain membrane proteins such as ZO-1, ZO-2, ZO-3 and MUPP1. Each claudin has a specific variation and number of charged amino acids in its first extracellular loop, and through this variation in charge claudins can selectively regulate molecular transfer. This contrasts with occludin, which makes paracellular holes for ion trafficking between neighbouring cells. Claudins appear to be expressed in a tissue-specific manner, with some expressed only in a specific cell type: Claudin 11 is expressed in oligodendrocytes and Sertoli cells, and Claudin 5 is expressed in vascular endothelial cells.
Claudins 2, 3, 4, 7, 8, 12 and 15 are present in epithelial cells throughout the segments of the intestinal tract. Claudin 7 also occurs in epithelial cells of the lung and kidney, and Claudin 18 is expressed in the alveolar epithelial cells of the lung. Most claudins have more than two isoforms, which differ in size or function. A specific combination of these isoforms creates the tight junction strands, and occludin is not required for their formation; rather, occludin plays a role in selective regulation by incorporating itself into the claudin-based strands. The different proportions of claudin species in a cell give its tight junctions specific barrier properties. Claudins also have a function in the signaling of cell adhesion; for example, Cldn 7 binds directly to the adhesion molecule EpCAM on the cell membrane, and Cldn 16 is associated with the reabsorption of divalent cations because it localizes to epithelial cells of the thick ascending loop of Henle.
TJ proteins in myelin sheaths
OSP/Claudin 11
OSP/Claudin 11 occurs in the myelin of nerve cells and between Sertoli cells, forming tight junctions in the CNS. In cooperation with the second loop of occludin, this protein maintains the blood–testis barrier and spermatogenesis.
PMP22/gas-3
PMP22/gas-3, called peripheral myelin protein, is located in the myelin sheath. Expression of this protein is associated with the differentiation of Schwann cells, the establishment of tight junctions in the Schwann cell membrane, and the compact formation of myelin. It is also present in epithelial cells of the lungs and intestine, where it interacts with occludin and ZO-1 to create the epithelial TJ. PMP22/gas-3 belongs to the epithelial membrane protein family (EMP1-3), which directs the growth and differentiation of cells.
OAP-1/TSPAN-3
OAP-1/TSPAN-3 cooperates with β1-integrin and OSP/Claudin 11 within the myelin sheaths of oligodendrocytes, thereby affecting proliferation and migration.
Junctional adhesion molecules
JAM
Junctional adhesion molecules are divided into subgroups according to their composition and binding motif.
The glycosylated transmembrane JAMs are classified in the immunoglobulin superfamily and are formed by two extracellular Ig-like domains, a transmembrane region and a C-terminal cytoplasmic domain. Members of the JAM family express two distinct binding motifs. The first subgroup, composed of JAM-A, JAM-B and JAM-C, has a PDZ-domain binding motif type II at its C-terminus, which interacts with the PDZ domains of ZO-1, AF-6, PAR-3 and MUPP1. JAM proteins are not part of the tight junction strands, but they participate in signaling that leads to the adhesion of monocytes and neutrophils and their transmigration through the epithelium. JAMs in epithelial cells can aggregate with TJ strands made of polymers of claudin and occludin. JAM-A maintains barrier properties in the endothelium and epithelium, as do JAM-B and -C in Sertoli cells and spermatids.
The second subgroup, comprising the CAR, ESAM, CLMP and JAM4 proteins, contains a PDZ-domain binding motif type I at its C-terminus.
CAR (coxsackie and adenovirus receptor) also belongs to the immunoglobulin superfamily, like the JAM proteins. CAR is expressed in the epithelia of the trachea, bronchi, kidney, liver and intestine, where it contributes positively to the barrier function of the tight junction. This protein mediates neutrophil migration, cell contacts and aggregation. It is necessary for embryonic heart development, especially for the organization of myofibrils in cardiomyocytes. CAR is associated with the PDZ-scaffolding proteins MAGI-1b, PICK, PSD-95, MUPP1 and LNX.
ESAM (endothelial cell selective adhesion molecule) is an immunoglobulin transmembrane protein that influences the properties of the endothelial TJ. ESAM is present in endothelial cells and platelets, but not in the epithelium or leukocytes. There it binds directly to MAGI-1 molecules through ligation of its C-terminal domain to their PDZ domain. This cooperation provides for the formation of a large molecular complex at tight junctions in the endothelium.
JAM4 is a component of the immunoglobulin superfamily of JAMs, but it expresses a PDZ-domain binding motif of class I (not class II, as the members JAM-A, -B and -C do). JAM4 is situated in kidney glomeruli and the intestinal epithelium, where it cooperates with MAGI-1, ZO-1 and occludin and effectively regulates the permeability of these cells. JAM4 has cell-adhesion activity, which is conducted by MAGI-1.
Myelin Protein 0
Protein 0 is a major myelin protein of the peripheral nervous system that integrates with PMP22; together they form and compact the myelin sheaths of nerve cells.
Plaque proteins in the tight junction
Plaque proteins are molecules required for the coordination of signals coming from the plasma membrane. About 30 different proteins are currently known to be associated with the cytoplasmic plaque of the tight junction.
One group of these proteins participates in the organization of transmembrane proteins and in their interaction with actin filaments. This PDZ-containing group comprises ZO-1, ZO-2, ZO-3, AF-6, MAGI, MUPP1, PAR and PATJ; the PDZ domain gives them a scaffolding function. PDZ domains are important for the clustering and anchoring of transmembrane proteins. One plaque protein without a PDZ domain, cingulin, interacts with this first group and plays a key role in cell adhesion.
The second group of plaque proteins serves vesicular trafficking, barrier regulation and gene transcription, because some of them are transcription factors or proteins with nuclear functions. Members of this second group include ZONAB, Ral-A, Raf-1, PKC, symplekin and cingulin, among others. They are characterized by the lack of a PDZ domain.
References
Proteins
Cell biology | Tight junction proteins | [
"Chemistry",
"Biology"
] | 2,492 | [
"Biomolecules by chemical classification",
"Proteins",
"Cell biology",
"Molecular biology"
] |
59,742,671 | https://en.wikipedia.org/wiki/DSatur | DSatur is a graph colouring algorithm put forward by Daniel Brélaz in 1979. Similarly to the greedy colouring algorithm, DSatur colours the vertices of a graph one after another, adding a previously unused colour when needed. Once a new vertex has been coloured, the algorithm determines which of the remaining uncoloured vertices has the highest number of colours in its neighbourhood and colours this vertex next. Brélaz defines this number as the degree of saturation of a given vertex. The contraction of the term "degree of saturation" forms the name of the algorithm. DSatur is a heuristic graph colouring algorithm, yet produces exact results for bipartite, cycle, and wheel graphs. DSatur has also been referred to as saturation LF in the literature.
Pseudocode
Let the "degree of saturation" of a vertex be the number of different colours being used by its neighbors. Given a simple, undirected graph G comprising vertex set V and edge set E, the algorithm assigns colors to all of the vertices using color labels 1, 2, 3, .... The algorithm operates as follows:
Let v be the uncolored vertex in G with the highest degree of saturation. In cases of ties, choose the vertex among these with the largest degree in the subgraph induced by the uncolored vertices.
Assign v the lowest color label not being used by any of its neighbors.
If all vertices have been colored, then end; otherwise return to Step 1.
Step 2 of this algorithm assigns colors to vertices using the same scheme as the greedy colouring algorithm. The main differences between the two approaches arise in Step 1 above, where vertices seen to be the most "constrained" are coloured first.
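The steps above translate directly into code; a straightforward, unoptimized sketch (the vertex names and demonstration graph, a seven-vertex wheel, are illustrative):

```python
def dsatur(adj):
    """Colour a simple undirected graph with the DSatur heuristic.

    adj: dict mapping each vertex to the set of its neighbours.
    Returns a dict mapping each vertex to a colour label 1, 2, 3, ...
    """
    colour = {}
    # Colours currently present in each vertex's neighbourhood;
    # the set's size is that vertex's degree of saturation.
    neighbour_colours = {v: set() for v in adj}

    while len(colour) < len(adj):
        uncoloured = [v for v in adj if v not in colour]
        # Step 1: pick the vertex with the highest degree of saturation,
        # breaking ties by degree in the subgraph of uncoloured vertices.
        v = max(
            uncoloured,
            key=lambda u: (
                len(neighbour_colours[u]),
                sum(1 for w in adj[u] if w not in colour),
            ),
        )
        # Step 2: greedily assign the lowest colour unused by v's neighbours.
        c = 1
        while c in neighbour_colours[v]:
            c += 1
        colour[v] = c
        for w in adj[v]:
            neighbour_colours[w].add(c)
    return colour

# A seven-vertex wheel: hub "h" plus the even cycle a-b-c-d-e-f.
wheel = {
    "h": {"a", "b", "c", "d", "e", "f"},
    "a": {"h", "b", "f"},
    "b": {"h", "a", "c"},
    "c": {"h", "b", "d"},
    "d": {"h", "c", "e"},
    "e": {"h", "d", "f"},
    "f": {"h", "e", "a"},
}
print(max(dsatur(wheel).values()))  # → 3 (optimal for this wheel)
```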
Example
Consider the graph shown on the right. This is a wheel graph and will therefore be optimally colored by the DSatur algorithm. Executing the algorithm results in the vertices being selected and colored as follows. (In this example, where ties occur in both of DSatur's heuristics, the vertex with lowest lexicographic labelling among these is chosen.)
Vertex (color 1)
Vertex (color 2)
Vertex (color 3)
Vertex (color 2)
Vertex (color 3)
Vertex (color 2)
Vertex (color 3)
This gives the final three-colored solution.
Performance
The worst-case complexity of DSatur is O(n²), where n is the number of vertices in the graph. This is because the process of selecting the next vertex to colour takes O(n) time, and this process is carried out n times. The algorithm can also be implemented using a binary heap to store saturation degrees, operating in O((n + m) log n) time, or using a Fibonacci heap in O(m + n log n) time, where m is the number of edges in the graph. This produces much faster runs with sparse graphs.
DSatur is known to be exact for bipartite graphs, as well as for cycle and wheel graphs. In an empirical comparison by Lewis in 2021, DSatur produced significantly better vertex colourings than the greedy algorithm on random graphs, while in turn producing significantly worse colourings than the recursive largest first algorithm.
References
External links
High-Performance Graph Colouring Algorithms Suite of graph colouring algorithms (implemented in C++) used in the book A Guide to Graph Colouring: Algorithms and Applications (Springer International Publishers, 2021).
C++ implementation of the DSatur Algorithm, presented as part of the article The DSatur Algorithm for Graph Coloring, Geeks for Geeks (2021)
1979 in computing
Graph algorithms
Graph coloring | DSatur | [
"Mathematics"
] | 713 | [
"Graph coloring",
"Mathematical relations",
"Graph theory"
] |
59,742,809 | https://en.wikipedia.org/wiki/Palmaria%20%28alga%29 | Palmaria is a genus of algae. One of its most notable members is dulse, Palmaria palmata.
References
Florideophyceae
Red algae genera
Taxa named by John Stackhouse | Palmaria (alga) | [
"Biology"
] | 42 | [
"Algae stubs",
"Algae"
] |
59,742,900 | https://en.wikipedia.org/wiki/Barriers%20to%20pro-environmental%20behaviour | Pro-environmental behaviour is behaviour that people consciously choose in order to minimize the negative impact of their actions on the environment. Barriers to pro-environmental behaviour are the numerous factors that hinder individuals when they try to adjust their behaviours toward living more sustainable lifestyles.
Generally, these barriers can be separated into larger categories: psychological, social/cultural, financial and structural. Psychological barriers are considered internal, where an individual's knowledge, beliefs and thoughts affect their behaviour. Social and cultural barriers are contextual, where an individual's behaviour is affected by their surroundings (e.g. neighbourhood, town, city, etc.). Financial barriers are simply a lack of funds to move toward more sustainable behaviour (e.g. new technologies, electric cars). Structural barriers are external and often impossible for an individual to control, such as lack of governmental action, or locality of residence that promotes car dependency as opposed to public transit.
Internal/psychological barriers
Identifying psychological barriers to pro-environmental behaviour is key to the design of successful behaviour change interventions. Scholars have identified several different categories of psychological barriers to pro-environmental action. A known researcher in the field, environmental psychologist Robert Gifford, has identified 33 of these barriers, barriers that he has termed “The Dragons of Inaction.” The Dragons are separated into seven categories: Limited Cognition, Ideologies, Social Comparison, Sunk Costs, Discredence, Perceived Risks, and Limited Behaviour. Below are the seven categories, integrated with additional barriers identified by other researchers. Other psychologists have argued that the attempt to identify psychological barriers to environmental behavior is problematic when used to explain societal inaction on climate change.
Limited cognition
Limited cognition barriers are barriers that arise from a lack of knowledge and awareness about environmental issues. For example, with a key environmental issue like climate change, a person might not engage in pro-environmental behaviour because they are: unaware that climate change is occurring; or aware that climate change is an issue, but are ill-informed about the science of climate change; or lacking information about how they could address the issue.
For those who are aware of current environmental issues, self-efficacy is an important barrier to action, where individuals often feel powerless in achieving large goals such as mitigating global climate change. Moreover, lack of motivation to change one's behaviour is correlated with the belief that individuals are incapable of performing effective pro-environmental actions.
Ideologies
Ideological barriers are created by pre-conceived ideas and the way an individual thinks about the world. Ideologies that can create barriers to pro-environmental behaviour include a strong belief in free-enterprise capitalism, a fatalistic belief that a higher power is in control, and a belief that technology can solve all environmental issues. Accordingly, measures such as environmental policies can prompt resistance when they are perceived as threats to one's freedom and comfortable lifestyle. This barrier is mainly present in Western countries, where individuals enjoy comparatively high levels of objective and subjective wellbeing due to socioeconomic status. It has been noted that to live within environmental limits, there is a need to change the comfortable aspects of Western lifestyles, for example by reducing meat consumption, air travel, and the use of electronic gadgets with short life-spans. Western cultural norms associate meat consumption with wealth, status and luxury, and meat consumption per capita in the richest 15 nations of the world is 750% higher than in the poorest 24 nations. A shift in values may be difficult, as people's life goals are shaped by their ideas of social progress, personal status, and success through careers, higher incomes and consumption.
Moreover, there exist deep structural and cultural roots that couple the macro-level of financial, property or labour institutions to the micro-level of individualistic, utilitarian values. These roots are linked to the current economic growth paradigm, which can be defined as a worldview that maintains that economic growth is both good and necessary.
Social comparison
Social comparison barriers include the comparison of actions with those of others to determine the “correct” behaviour, whether it be beneficial or harmful for the environment. This means that social comparison barriers can also facilitate pro-environmental behaviour. For example, people will alter their energy consumption to replicate the reported usage of their neighbours. Moreover, if individuals believe those around them are not actively engaging in pro-environmental behaviour, they are less likely to engage in it themselves because they believe this to be unfair.
Sunk costs
Sunk cost barriers are the investments (not necessarily financial) of an individual that in turn restrict alternative possibilities for change, or in this circumstance, for pro-environmental behaviour. One example of a financial investment is car ownership, where the individual will be less likely to use alternative modes of transportation. Habits are considered a Sunk Costs Dragon as well because they are very difficult to change (e.g. eating habits). Individuals are also deeply invested in their life goals and aspirations, even if achieving them will harm the environment. Place attachment is considered here as well, where an individual who feels no place attachment to their home will be less likely to act pro-environmentally in that place than one who loves where they live.
Additional barriers are inconvenience and time-related pressures, which are suggested as reasons why individuals go back to unsustainable habits. An individual may find it annoying and inconvenient to compost if they do not have access to municipal composting, for example, and if one is pressed for time they may choose to use their car rather than wait for public transit.
Discredence
Discredence barriers generally involve disbelief in environmental issues and/or distrust in government officials and scientists. Complete denial of climate change and other environmental issues is becoming less prominent, but it persists. Skepticism is still apparent in countries where there are efforts to shape public opinion through mediums such as conservative think tanks and media outlets. Moreover, mass media is the primary source of information on climate change in many countries; depending on the individual, the information received will either be trusted or ignored, and it varies from one outlet to the next according to their different views.
Distrust in government has become a prevalent issue recently. In the United States for example, Americans have been polled every year about their confidence in their country's institutions (e.g. the Supreme Court, Congress, the Presidency, and the health-care establishment), and there has been a reported collapse in trust over time (12% in 2017). From an environmental standpoint, the first Trump administration has significantly diminished regulations that were put in place by the former administration to meet environmental standards. Examples of policy changes include pulling out of the Paris Agreement, loosening regulations on toxic air pollution, and issuing an executive order that called for a 30% increase in logging on public lands. There is a 97% scientific consensus on anthropogenic climate change, yet there is still not enough being done to meet global temperature targets of staying below a 1.5 degrees Celsius increase (see Paris Agreement).
Perceived risk
Risk perception barriers include worrying about whether financial or temporal investments will pay off. An example of a financial investment is solar panels which are initially costly. A temporal investment can simply be spending the time to do research on the topic instead of doing something else.
There exists the concept of psychological distance, where people tend to discount future risks when making trade-offs between costs and benefits, and instead prioritize immediate day-to-day concerns. Spatial distance allows individuals to disregard any risks, and instead consider them more likely for other people and places than for themselves. This barrier can simply be thought of as "out of sight, out of mind." Additionally, people typically underestimate the likelihood of being affected by natural disasters, as well as the degree to which others are concerned about environmental issues. Furthermore, the human brain privileges experience over analysis: personal experiences with extreme weather events can influence risk perceptions, beliefs, behaviour and policy support, whereas statistical information by itself means very little to most people.
It has often been hypothesised that, no matter how strong the climate knowledge provided by risk analysts, experts and scientists, risk perception determines agents' ultimate mitigation response. However, recent literature reports conflicting evidence about the actual impact of risk perception on agents' climate response. Rather, the evidence points to no direct perception-response link: the relationship is mediated and moderated by many other factors and depends strongly on the context analysed. Moderation factors considered as such in the specialised literature include communication and social norms. Yet conflicting evidence of the disparity between public communication about climate change and the lack of behavioural change has also been observed in the general public. Likewise, doubts have been raised about whether social norms are a predominant factor influencing action on climate change. What is more, disparate evidence shows that even agents highly engaged in mitigation actions (engagement being a mediation factor) can ultimately fail to respond.
Limited behaviour
Limited behaviour barriers may include people choosing easier, yet less effective, pro-environmental behavioural changes (e.g. recycling, metal straws), and the rebound effect, which occurs when a positive environmental behaviour is followed by one that negates it (e.g. saving money with an electric car to then buy a plane ticket).
Contextual barriers
Social and cultural factors
Research has also shown that how people support and engage in pro-environmental behaviour is also affected by contextual factors (i.e. social, economic, and cultural); people with diverse cultural backgrounds have different perspectives and priorities, and thus, they may respond to the same policies and interventions in different ways with regionally differentiated world views playing an important role. This means that people will use different excuses for their behaviours depending on contextual factors. Research has shown that information has a greater impact on behaviour if it is tailored to the personal situations of consumers and resonates with their important values. This suggests that, for example, policies developed to reduce and mitigate climate change would be more effective if they were developed specifically for the people whose behaviour they were targeting.
People are social beings who respond to group norms: behaviour and decision-making has been shown to be affected by social norms and contexts.
Demographic variables like age, gender and education can have a variety of effects on pro-environmental behaviour, depending on the issue and context. However, when considering the effects of socio-demographics on individual perceptions of climate change, a recent meta-analysis found that the largest demographic correlate of belief in human-caused climate change is political affiliation (e.g. conservative views often mean less support for climate mitigation).
Economic factors
The cost of sustainable alternatives and financial measures used to support new technologies can also be a barrier to pro-environmental behaviour. Households may have severe budgetary constraints that discourage them from investing in energy-efficient measures. In addition, individuals may fear that project costs will not be recovered prior to a future sale of a property. Economic factors are not just barriers to pro-environmental behaviour for individual households but are also a barrier on the international scale. Developing countries that rely on coal and fossil fuels may not have the funding or infrastructure to switch to more sustainable energy sources. Therefore, help from developed countries, with regards to cost, may be needed. As nations become more prosperous, their citizens are less concerned with the economic battle for survival and are free to pursue postmaterialistic ideals such as political freedom, personal fulfillment, and environmental conservation.
In other cases however, environment-friendly behaviours may be undertaken for non-environmental reasons, such as to save money or to improve health (e.g. biking or walking instead of driving).
Structural barriers
Structural barriers are large-scale systemic barriers that may be perceived as being objective and external, and can be highly influential and near impossible to control, even when one wishes to adopt more pro-environmental behaviour. For example, lack of organizational and governmental action on sustainability is considered a barrier for individuals looking to participate in sustainable practices. Further examples of structural barriers include: low problem awareness at the local level caused by a low priority for adaptation at higher institutional levels, and missing leadership by certain key actors leading to an absence of appropriate decision-making routines. Other structural barriers reported from a Vancouver-based study include: term limits imposed on politicians that affect council's ability to make long-term decisions; budgetary cycles that force planning based on three year terms, rather than long-term planning; and hierarchical systems that inhibit flexibility and innovation.
Research has shown that individuals may not behave in accordance with environmental sustainability when they have little control over the outcome of a situation. An example of a structural choice that can influence an individual's use of high-carbon transport occurs when city governments allow sprawling neighbourhoods to develop without associated public transit infrastructure.
The concept of barriers has also been defined in relation to adaptive capacity, the ability of a system to respond to environmental changes; a barrier can either be a reason for potential adaptive capacity not being translated into action, or a reason for the existence of low adaptive capacity.
See also
Climate action
Human impact on the environment
Transit desert
References
Environmental social science concepts
Environmentalism
Human impact on the environment
Social psychology | Barriers to pro-environmental behaviour | [
"Environmental_science"
] | 2,686 | [
"Environmental social science concepts",
"Environmental social science"
] |
59,743,838 | https://en.wikipedia.org/wiki/Luminous%20gemstones | Folktales about luminous gemstones are an almost worldwide motif in mythology and history among Asian, European, African, and American cultures. Some stories about light-emitting gems may have been based on luminescent and phosphorescent minerals such as diamonds.
Mineralogical luminosity
First, it will be useful to introduce some mineralogical terminology for gemstones that can glow when exposed to light, friction, or heat. Note that the following discussion will omit modern techniques such as X-rays and ultraviolet light that are too recent to have influenced folklore about luminous gems. Luminescence is spontaneous emission of light by a substance not resulting from heat, as distinguished from incandescence, which is light emitted by a substance as a result of heating. Luminescence is caused by the absorption of energy that is released in small amounts. When the energy comes from light or other electromagnetic radiation, it is referred to as photoluminescence; which is divisible between fluorescence when the glow ceases immediately with the excitation and phosphorescence when the glow continues beyond the period of excitation. Two types of luminescent phenomena are relevant to crystalline materials. Triboluminescence generates light through the breaking of chemical bonds in a material when it is rubbed, pulled apart, scratched, or crushed. Thermoluminescence re-emits previously absorbed electromagnetic radiation upon being heated (e.g., thermoluminescence dating).
The American geologist Sydney Hobart Ball, who wrote an article on "Luminous Gems, Mythical and Real", outlined the history of discoveries about luminescent and phosphorescent minerals. Most diamonds are triboluminescent if rubbed with a cloth, and a few are photoluminescent after exposure to direct sunlight. Both diamonds and white topaz may phosphoresce if heated below red heat. The phosphorescent quality of diamonds when heated by sunlight is usually believed to have been first revealed by Albertus Magnus (c. 1193–1280) and it was apparently rediscovered by Robert Boyle in 1663, who also found that some diamonds will luminesce under pressure. According to Prafulla Chandra Ray, the Indian king Bhoja (r. 1010–1055) knew that diamonds can phosphoresce (Ball 1938: 496).
The luminescent Bologna Stone (impure barite), which was discovered by Vincenzo Cascariolo in 1602, was sometimes called the "lunar stone", because, like the moon, it gave out in the darkness the light it received from the sun (Kunz 1913: 168). In 1735, the French chemist Charles François de Cisternay du Fay determined that lapis lazuli, emerald, and aquamarine were luminescent. Josiah Wedgwood, in 1792, observed phosphorescence from rubbing together two pieces of quartz or of agate, and wrote that the ruby gives "a beautiful red light of short continuance." Edmond Becquerel reported in 1861 that ruby fluoresces better than sapphire, red feldspar fluoresces, and crushed orthoclase will flame. In 1833, David Brewster discovered the fluorescence of the mineral fluorite or fluorspar. However, the English naturalist Philip Skippon (1641–1691) stated that one Monsieur Lort, of Montpellier, France, a "counterfeiter" of "amethysts, topazes, emeralds, and sapphires", found that on heating "fluor smaragdi" (Latin for "flowing emerald/beryl/jasper") in a pan of coals and afterwards "putting it in a dark place (it) shines very much: At the same time several other stones were tried but did not shine" (1732 6: 718).
Some fluorite, particularly the variety chlorophane (aka pyroemerald and cobra stone), may become very faintly luminescent simply from the heat of one's hand. Chlorophane is unusual for combining the properties of thermoluminescence, triboluminescence, phosphorescence, and fluorescence; it will emit visible spectrum light when rubbed, or exposed to light or heat, and can continue emitting for a long period of time. Among the gravels of the Irtysh River, near Krasnoyarsk, Russia, the German mineralogist Gustav Rose recorded seeing chlorophane pebbles that shone with brilliancy all night long, merely from exposure to the sun's heat. For luminous gem myths, Ball concludes that while it is "not impossible that the inventors of certain of the [luminous gem] tales may have been acquainted with the luminosity of gems, in my opinion many of the tales must be of other origin" (1938: 497).
Scholars have proposed many identifications for myths about luminous gemstones described for over two thousand years. Most frequently rubies or carbuncles (often red garnets), which classical and medieval mineralogists did not differentiate, and less commonly other gems, including diamonds, emeralds, jade, and pearls (Ball 1938: 497).
The American sinologist Edward H. Schafer proposes that the phosphorescent "emeralds" of classical antiquity, such as the brilliantly shining green eyes of the marble lion on the tomb of King Hermias of Atarneus (d. 341 BCE) on Cyprus, were fluorite, even though the Hellenistic alchemists had methods, "seemingly magical, of making night-shining gems by the application of phosphorescent paints to stones", the most famous being their "emeralds" and "carbuncles" (1963: 238).
The names of some luminescent gemstones etymologically derive from "glow" or "fire" words (e.g., pyroemerald for "chlorophane" above).
The OED defines pyrope (from Greek Πυρωπός, lit. "fire-eyed") as: "In early use applied vaguely to a red or fiery gem, as ruby or carbuncle; (mineralogy) the Bohemian garnet or fire-garnet"; and carbuncle or carbuncle-stone (from Latin "carbunculus", "small glowing ember") as: "A name variously applied to precious stones of a red or fiery colour; the carbuncles of the ancients (of which Pliny describes twelve varieties) were probably sapphires, spinels or rubies, and garnets; in the Middle Ages and later, besides being a name for the ruby, the term was esp. applied to a mythical gem said to emit a light in the dark" (Ball 1938: 498).
Mythological luminosity
Luminous gems are a common theme in comparative mythology. Ball cross-culturally analyzed stories about luminous stones and pearls and found about one hundred variants in ancient, medieval, and modern sources. The wide-ranging locations of the tales comprise all Asia (except Siberia), all Europe (except Norway and Russia), Borneo, New Guinea, the United States, Canada, certain South American countries and Abyssinia, French Congo, and Angola in Africa. The later African and American myths were likely introduced by Europeans. Ball divides legends about luminous gems into three principal themes: light sources, gem mining, and animals (Ball 1938: 497–498).
Light source legends
The first theme is using legendary luminous gems to illuminate buildings, for navigation lights on ships, or sometimes as guiding lights for lost persons (Ball 1938: 498–500).
In India, the earliest country in which fine gemstones were known, belief in luminous gems dates back some twenty-five centuries. The c. 700 BCE – 300 CE Vishnu Purana states that Vishnu, in his avatar as the many-headed snake Shesha under the name Ananta ("Endless"), "has a thousand heads adorned with the mystical Swastika and in each head a jewel to give light" (Ball 1938: 498). The c. 400 BCE – 300 CE Hindu classic Mahabharata tells the story of the five Pandava brothers and the raja Babruvahana's palace with its precious stones that "shone like lamps so that there was no need for any other light in the assembly." In the c. 100 CE Buddhacarita, the city of Kapila is said to have gems so bright "that darkness like poverty could find no place" (Ball 1938: 499).
In Classical antiquity, the Greek historian Herodotus (c. 484–425 BCE) was the first European to describe luminous gems. The temple of Heracles at Tyre had two great columns, one of gold, the other of smaragdos ("green gems including emerald") that "shone brightly at night" (Harvey 1957: 33, suggesting the phosphorescent "false emerald" type of fluorspar). Ball says that the "wily priests doubtless enclosed a lamp in hollow green glass, to mislead the credulous". The Pseudo-Plutarch "On Rivers", probably written by the Greek grammarian Parthenius of Nicaea (d. 14 CE), states that in the Sakarya River the Aster ("star") gem is found, "which flames in the dark", and thus called Ballen (the "King") in the Phrygian language (King 1867: 9). The Roman author Pliny the Elder (23–79 CE) described the chrysolampis as an eastern gem, "pale by day but of a fiery luster by night" (Ball 1938: 499). The Syrian rhetorician Lucian (c. 125–180 CE) describes a statue of the Syrian goddess Atargatis in Hierapolis Bambyce (present-day Manbij) with a gem on her head called Greek lychnis ("lamp; light") (Schafer 1963: 237). "From this stone flashes a great light in the night-time, so that the whole temple gleams brightly as by the light of myriads of candles, but in the daytime the brightness grows faint; the gem has the likeness of a bright fire" (tr. Strong and Garstang 1913: 72). According to Pliny, the stone is called lychnis because its luster is heightened by the light of a lamp, when its tints are particularly pleasing (Laufer 1915: 58).
Although early Chinese classics from the Eastern Zhou dynasty (770–256 BCE) refer to luminous gems (e.g., "the Marquis of Sui's pearl" discussed below under grateful animals), Sima Qian's c. 94 BCE Han dynasty Records of the Grand Historian has two early references to using them as a source of light. Most Chinese names for shining pearls/gems are compounds of zhū (珠, "pearl; gem; bead; orb"), such as yèmíngzhū (夜明珠, "night luminous pearl"), míngyuèzhū (明月珠, "luminous moon pearl"), and yèguāngzhū (夜光珠, "night shining pearl"). The "House of Tian Jingzhong" history records that in 379 BCE, King Wei of Qi boasted to King Hui of Wei, "Even a state as small as mine still has ten pearls one inch in diameter that cast radiance over twelve carriages in front and behind them." (tr. Sawyer 2018: n.p.). In the biography of the Han court minister Zou Yang (鄒陽, fl. 150 BCE), he figuratively uses the terms mingyue zhi zhu (明月之珠, "luminous moon pearl") and yeguang zhi bi (夜光之壁, "night shining jade-disk") to illustrate how talented people are lost for lack of recommendations, "If I were to throw a luminous moon pearl or a night shining jade-disk on a dark road in front of someone, who would not grasp their sword and look startled?" The German sinologist August Conrady suggested that the Chinese names mingyuezhizhu and yeguangzhu may have an Indian origin, with analogs in the chandra-kânta ("moon-beloved") gem that contains condensed moonlight and the harinmaṇi ("moon-jewel") name for emerald (1931: 168–169).
Li Shizhen's 1578 Bencao Gangmu pharmacopeia describes leizhu (雷珠, "thunder pearls/beads") that the divine dragon shenlong "held in its mouth and dropped. They light the entire house at night" (tr. Laufer 1912: 64). Chinese dragons are frequently depicted with a flaming pearl or gem under their chin or in their claws. According to the German anthropologist Wolfram Eberhard, the long dragon is a symbol of clouds and rainstorms, and when it plays with a ball or pearl, this signifies the swallowing of the moon by the clouds or thunder in the clouds. The moon frequently appears as a pearl, and thus the dragon with the pearl is equal to the clouds with the moon. The pearl-moon relationship is expressed in the Chinese belief that at full moon pearls are solid balls and at new moon they are hollow (1968: 239, 382).
Rabbinic Judaism includes a number of references to luminous gems. For example, the first-century rabbi Rav Huna says he was fleeing from Roman soldiers and hid in a cave illuminated by a light that was brighter in the night and darker in the day.
The best documented of the illumination tales is that of the King of Ceylon's luminous carbuncle or ruby, first mentioned by the Greek traveler Cosmas Indicopleustes in the 6th century and thereafter described by many travelers, the latest of the 17th century. According to Indicopleustes, it was "as large as a great pine-cone, fiery red, and when seen flashing from a distance, especially if the sun's rays are playing around it, being a matchless sight" (Laufer 1915: 62). The Chinese Buddhist pilgrim Xuanzang's 646 Great Tang Records on the Western Regions locates it in the Buddha Tooth Temple near Anuradhapura, "Its magical brilliance illumines the whole heaven. In the calm of a clear and cloudless night it can be seen by all, even at a distance of a myriad li." The Song scholar Zhao Rukuo's c. 1225 Zhu Fan Zhi ("Records of Foreign People") says, "The king holds in his hand a jewel five inches in diameter, which cannot be burnt by fire, and which shines in (the darkness of) night like a torch. The king rubs his face with it daily, and though he were passed ninety he would retain his youthful looks." (Hirth and Rockhill 1911: 73). Based on this incombustibility, Laufer says this night-shining jewel was probably a diamond (1915: 63). Others state that it "serves instead of a lamp at night", has "the appearance of a glowing fire", or of that "of a great flame of fire." Due to its luminescence, Marco Polo called it "The Red Palace Illuminator" (Ball 1938: 499).
The English alchemist John Norton wrote a 1470 poem entitled "Ordinal, or a manual of the chemical art", in which he proposed erecting a gold bridge over the River Thames and illuminating it with carbuncles set on golden pinnacles, "A glorious thing for men to beholde" (Ashmole 1652: 27).
Boats lit by luminous gems are a variant of the illumination idea. Rabbinic Judaism had a tradition that Noah had a luminous stone in the Ark that "shone more brightly by night than by day, thus serving to distinguish day and night when the sun and moon were shrouded by dense cloud" (Harvey 1957: 15). The Genesis Rabbah describes the Tzoar that illuminates Noah's Ark (Genesis 6:16) as a luminous gemstone (the King James Version translates it as 'window'). The Mormon Book of Ether describes "sixteen small stones; and they were white and clear, even as transparent glass", being touched by God's hand so that they might "shine forth in darkness." The Jaredites placed a stone fore and aft on each ship and had "light continually" during their 344-day voyage to America (Ball 1938: 500).
The theme of luminous gems guiding mariners and others originated in Europe in the Middle Ages. The earliest is probably the Scandinavian saga of the Visby garnets. In the Hanseatic city Visby, on the island of Gotland, the Church of St. Nicholas had two rose windows with huge garnets in the center, overlooking the Baltic Sea. Sagas say the two gems shone at night as brightly as did the sun at noon and guided mariners safely to port. In 1361 King Valdemar IV of Denmark conquered Gotland, but his rich booty, including the marvelous garnets, sank in the ocean when the king's ship was wrecked on the Kong Karls Land islands (Ball 1938: 500).
The relic of the Virgin Mary's wedding ring, which according to different accounts had an onyx, amethyst, or green jasper, was supposedly brought back from the Holy Land in 996 CE. It was placed in the Church of Santa Mustiola, Clusium (modern Chiusi), Italy, and in 1473 the ring was transferred to the Franciscan monastery in that city. One of the monks stole it and fled into the night, but when he repented and promised to return it, the ring emitted a bright light by which he traveled to Perugia. The two cities fought fiercely for the possession of this sacred ring, but in 1486 the Vatican decreed the relic should be placed in the Perugia Cathedral (Ball 1938: 500).
The Dutch scholar Alardus of Amsterdam (1491–1544) relates the history of a luminous "chrysolampis" (χρυσόλαμπις, "gold-gleaming") gem set on a golden tablet with other valuable gemstones. Around 975, Hildegard, wife of Dirk II, Count of Holland, dedicated the tablet to Saint Adalbert of Egmond and presented it to Egmond Abbey, where the saint's body reposed. Alardus tells us that the "chrysolampis" "shone so brightly that when the monks were called to the chapel in the nighttime, they could read the Hours without any other light"; however, this brilliant gem was stolen by one of the monks and thrown into the sea (Kunz 1913: 164).
The French chemist Marcellin Berthelot (1888) discovered an early Greek alchemical text "from the sanctuary of the temple" that says the Egyptians produced "the carbuncle that shines in the night" from certain phosphorescent parts ("the bile") of marine animals, and when properly prepared these precious gems would glow so brightly at night "that anyone owning such a stone could read or write by its light as well as he could by daylight" (Kunz 1913: 173).
Gem mining legends
Second, there are stories about miners finding luminous gems at night and extracting them by day (Ball 1938: 500–501). One notable exception is Pliny's c. 77 CE Natural History that describes finding carbuncles in the daytime, some kinds "doe glitter and shine of their owne nature: by reason whereof, they are discovered soone wheresoever they lie, by the reverberation of the Sun-beams" (Harvey 1957: 34).
In the 1st century BCE, the Greek historians Diodorus Siculus (c. 90–30) and Strabo (c. 63–24) both record the peridot (gem-quality olivine) mine of Egyptian king Ptolemy II Philadelphus (r. 285–246 BCE) on the barren, forbidden island of Ophiodes ("Snakey") or Topazios ("Topaz"), modern Zabargad Island, off the ancient Red Sea port Berenice Troglodytica.
Diodorus says Philadelphus exterminated the "divers sorts of dreadful Serpents" that formerly infested the island on account of the "Topaz, a resplendent Stone, of a delightful Aspect, like to Glass, of a Golden colour, and of admirable brightness; and therefore all were forbidden to set footing upon that Place; and if any landed there, he was presently put to death by the Keepers of the Island." The Egyptian mining technique relied upon luminosity. "This Stone grows in the Rocks, darken'd by the brightness of the Sun; it's not seen in the Day, but shines bright and glorious in the darkest Night, and discovers itself at a great distance. The Keepers of the Island disperse themselves into several Places to search for this stone, and wherever it appears, they mark the Place, with a great Vessel of largeness sufficient to cover the sparkling Stone; and then in the Day time, go to the Place, and cut out the Stone, and deliver it to those that are Artists in polishing of 'em" (tr. Oldfather et al. 1814 3: 36). According to Strabo, "The topaz is a transparent stone sparkling with a golden lustre, which, however, is not easy to be distinguished in the day-time, on account of the brightness of the surrounding light, but at night the stones are visible to those who collect them. The collectors place a vessel over the spot [where the topazes are seen] as a mark, and dig them up in the day" (tr. Hamilton and Falconer 1889 3:103). Ball notes that the legendary "topaz" of Topazios island is olivine, which is not luminescent while true topaz is, and suggests, "This tale may well have been told to travelers by astute Egyptian gem merchants anxious to enhance the value of their wares by exaggerating the dangers inherent to procuring the olivines" (1938: 500). In the present day, the island mine is submerged underwater and inaccessible.
The theme of locating luminous gems at night is found in other sources. The c. 125 CE didactic Christian text Physiologus states that the diamond ("carbuncle") is not to be found in the day but only at night, which may imply that it emits light (Laufer 1915:62). The Anglo-Indian diplomat Thomas Douglas Forsyth says that in 632, the ancient Iranian Saka Buddhist Kingdom of Khotan sent a "splendid jade stone" as tribute to Emperor Taizong of Tang. Khotan's rivers were famous for their jade, "which was discovered by its shining in the water at night", and divers would procure it in shallow waters after the snowmelt floods had subsided (1875: 113). The Bohemian rabbi Petachiah of Regensburg (d. c. 1225) adapted Strabo's story for the gold he saw in the land of Ishmael, east of Nineveh, where "the gold grows like herbs. In the night its brightness is seen when a mark is made with dust or lime. They then come in the morning and gather the herbs upon which the gold is found" (tr. Benisch and Ainsworth 1856: 51, 53).
A modern parallel to ancient miners seeking luminous gems at nighttime is mineworkers using portable shortwave ultraviolet lamps to locate ores that respond with color-specific fluorescence. For instance, under short-wave UV light, scheelite, a tungsten ore, fluoresces a bright sky-blue, and willemite, a minor ore of zinc, fluoresces green (Ball 1938: 501).
Animal legends
The third luminous-gem theme involves serpents (of Hindu origin), or small animals (Spanish), with gems in their heads, or grateful animals repaying human kindness (Chinese and Roman) (Ball 1938: 501–505).
Legends about snakes that carry a marvelous jewel either in their forehead or in their mouth are found almost worldwide. Scholars have suggested that the myth may have originated with snake worship, or light reflected by a serpent's eye, or the flame color of certain snakes' lips. In only a relative few of these legends is the stone luminous, this variant being known in India, Ceylon, ancient Greece, Armenia, and among Cherokee Indians (Ball 1938: 502).
The Hindu polymath Varāhamihira's 6th-century encyclopedic work Brhat Samhita describes the bright star Canopus, named Agastya (अगस्त्य) in Sanskrit, also the name of the rishi Agastya: "Its huge white waves looked like clouds; its gems looked like stars; its crystals looked like the Moon; and its long bright serpents bearing gems in their hoods looked like comets and thus the whole sea looked like the sky." Another context says black glossy pearls are also produced in the heads of serpents related to the nāgarāja (नागराज, "dragon kings") Takshaka and Vasuki (tr. Iyer 1884: 77, 179).
The "Snake Jewel" story in Somadeva's 11th-century Kathasaritsagara ("Ocean of the Streams of Stories") refers to a maṇi (मणि, "gem; jewel; pearl") on a snake's head. When the Hindu mythological king Nala is fleeing from a jungle wildfire, he hears a voice asking for help and turns back to see a snake "having his head encircled with the rays of the jewels of his crest", who, after being rescued reveals himself to be the nāgarāja Karkotaka (tr. Tawney 1928 4: 245).
The 3rd-century CE Life of Apollonius of Tyana, the Greek sophist Philostratus's biography of Apollonius of Tyana (c. 3 BCE – 97 CE), says that in India, people will kill a mountain dragon and cut off its head, in which, "are stones of rich lustre, emitting every-coloured rays and of occult virtue." It also mentions a myth that cranes will not build their nests until they have affixed a "light-stone" (Ancient Greek lychnidis, "shining") to help the eggs hatch and to drive away snakes (tr. Conybeare 1912: 103, 155).
In the Bengali tale of "The Rose of Bakáwalí", the heroic prince Jamila Khatun encounters a monstrous dragon that carried in its mouth "a serpent which emitted a gem so brilliant that it lighted up the jungle for many miles". His plan to obtain it was to throw a heavy lump of clay on the luminous gem, plunging the jungle into darkness, "so that the dragon and the serpent knocked their heads against the stones and died" (tr. Clouston 1889: 296–297).
According to Armenian "The Queen of the Serpents" legend, the serpents of Mount Ararat select a queen who destroys invading armies of foreign serpents, and carries in her mouth a "wonderful stone, the Hul, or stone of light, which upon certain nights she tosses in the air, when it shines as the sun. Happy the man who shall catch the stone ere it falls." (von Haxthausen 1854: 355).
Henry Timberlake, the British emissary to the Overhill Cherokee during the 1761–1762 Timberlake Expedition, records a story about medicine men ("conjurers") using gemstones, which is a variant of the Horned Serpent legend in Iroquois mythology. One luminous gem "remarkable for its brilliancy and beauty" supposedly "grew on the head of a monstrous serpent" that was guarded by many snakes. The medicine man hid this luminous gemstone, and no one else had seen it. Timberlake supposed he had "hatched the account of its discovery" (1765: 48–49). Ball doubts the myth and suggests "European influence" (1938: 503).
The Catalan missionary Jordanus's c. 1330 Mirabilia says he heard that the dragons of India Tertia (Eastern Africa, south of Abyssinia) have on their heads "the lustrous stones which we call carbuncles." When they become too large to fly, they fall and die in a "certain river which issues from Paradise". After seventy days the people recover the "carbuncle which is rooted in the top of his head" and take it to Prester John, the Emperor of the Ethiopians (tr. Yule 1863: 42).
After his third visit to Persia in 1686, the French jeweler and traveler John Chardin wrote that the Egyptian carbuncle was "very probably only an Oriental Ruby of higher Colour than usual. The Persians call it Icheb Chirac, the Flambeau ["burning torch"] of the Night because of the property and Quality it has of enlightening all things round it", and "They tell you that the Carbuncle was bred within the Head of a Dragon, a Griffin, or a Royal Eagle, which was found upon the Mountain of Caf" (Chardin 2010: 166–167).
Like Chardin's griffin or eagle, some stories about luminous gems involve animals other than snakes and dragons. An early example is the 3rd-century CE Greek Pseudo-Callisthenes Romance of Alexander that says Alexander the Great once speared a fish, "in whose bowels was found a white stone so brilliant that everyone believed it was a lamp. Alexander set it in gold, and used it as a lamp at night" (Laufer 1915: 58).
Sydney H. Ball recounts the widespread variation of the animal-gratitude snake story involving a wild animal (often called carbuncle, Spanish carbunclo, or Latin carbunculo) with a luminous gem on its head, and which Europeans apparently introduced into Africa and America.
In 1565, Don John Bermudez, ambassador of Prester John to John III of Portugal, described an Upper Nile snake called "Of the shadow, or Canopie, because it hath a skinne on the head wherewith it covereth a very precious stone, which they say it hath in her head" (Purchas 1625 2: 1169).
The English merchant William Finch reported around 1608 a Sierra Leone story about a wolf-like creature with a luminous gem. "The Negros told us of a strange beast (which the interpreter called a Carbuncle) oft seene yet only by night, having a stone in his forehead, incredibly shining and giving him light to feed, attentive to the least noyse, which he no sooner heareth, but he presently covereth the same with a filme or skinne given him as a naturall covering that his splendour betray him not" (Dickens 1857: 124).
In 1666, another version of the theme is a huge snake recorded from Island Caribs on the island of Dominica, West Indies. "On its head was a very sparkling stone, like a Carbuncle, of inestimable price: That it commonly veil'd that rich Jewel with a thin moving skin, like that of a man's eye-lid: but that when it went to drink or sported himself in the midst of that deep bottom, he fully discover'd it, and that the rocks and all about receiv'd a wonderful lustre from the fire issuing out of that precious crown" (de Rochefort 1666: 15).
According to the Swiss explorer Johann Jakob von Tschudi, in the highlands of Peru and Bolivia, the native peoples tell stories of a fabulous beast with a luminous gem. "The carbunculo is represented to be of the size of a fox, with long black hair, and is only visible at night, when it slinks slowly through the thickets. If followed, he opens a flap or valve in the forehead, from under which an extraordinary, brilliant, and dazzling light issues. The natives believe that this light proceeds from a brilliant precious stone, and that any fool hardy person who may venture to grasp at it rashly is blinded; then the flap is let down, and the animal disappears in the darkness" (von Tschudi 1854: 320). The American archeologist Adolph Francis Alphonse Bandelier cites von Tschudi and describes the carbunculo as a cat with a blood-red jewel, which is supposed to dwell on Nevado Sajama mountain, near Oruro, Bolivia. Bandelier believes his Bolivian informants that the carbunculo has existed from the earliest times, and "certainly before the conquest, so that its introduction cannot be attributed to the Spaniards" (1910: 320). Nevertheless, based upon how closely the above American versions of the myth follow the pattern of the European form, Ball concludes that the Spaniards introduced the carbuncle myth (1938: 504).
In contrast to the above legends about people killing snakes and animals in order to obtain their luminous gems, another group of legends has a theme of injured animals presenting magical gems out of gratitude to people who helped them. This is a subcategory of The Grateful Animals folktale motif (Aarne–Thompson classification systems 554), for example, The White Snake or The Queen Bee.
These animal-gratitude stories are first recorded around two millennia ago in China and Rome. Based upon striking coincidences in Chinese and Roman versions of the story, Laufer reasoned that there was an obvious historical connection (1915: 59–60), whereas Ball believes these tales probably originated independently (1938: 504).
The earliest known story about a grateful animal with a luminous gem is the Chinese Suihouzhu (隨侯珠, "the Marquis of Sui's pearl") legend that a year after he saved the life of a wounded snake, it returned and gave him a fabulous pearl that emitted a light as bright as that of the moon (Ball 1938: 504). Sui (隨, cf. 隋 Sui dynasty), located in present-day Suizhou, Hubei, was a lesser feudal state during the Zhou dynasty (c. 1046 BCE – 256 BCE) and a vassal state of Chu. Several Warring States period (c. 475–221 BCE) texts mention Marquis Sui's pearl as a metaphor for something important or valuable, but without explaining the grateful snake tale, which implies that it was common knowledge among contemporary readers.
The Marquis of Sui's pearl is mentioned in the Zhanguo ci ("Strategies of the Warring States") compendium of political and military anecdotes dating from 490 to 221 BCE. King Wuling of Zhao (r. 325–299 BCE) summoned Zheng Tong (鄭同) for an audience and asked how to avoid warfare with neighboring feudal states. Zheng Tong replied, 'Well, let us suppose there is a man who carries with him the pearl of Sui-hou and the Ch'ih-ch'iu armband [持丘之環, uncertain] as well as goods valued at ten thousand in gold. Now he stops the night in an uninhabited place." Since he has neither weapons nor protectors, "It is clear he will not spend more than a night abroad before someone harms him. At the moment there are powerful and greedy states on your majesty's borders and they covet your land. ... If you lack weapons your neighbours, of course, will be quite satisfied" (tr. Crump 1970: 327).
The c. 3rd–1st centuries BCE Daoist Zhuangzi alludes to the marquis's pearl. "Whenever the sage makes a movement, he is certain to examine what his purpose is and what he is doing. If now, however, we suppose that there were a man who shot at a sparrow a thousand yards away with the pearl of the Marquis of Sui, the world would certainly laugh at him. Why is this? It is because what he uses is important and what he wants is insignificant. And is not life much more important than the pearl of the Marquis of Sui?" (tr. Mair 1994: 288).
Several Chinese classics pair the legendary Suihouzhu ("the Marquis of Sui's pearl") with another priceless gem, the Heshibi (和氏璧, "Mr. He's jade"). The bi is a type of circular Chinese jade artifact, and "Mr. He" was Bian He (卞和), who found a marvelous piece of raw jade that went cruelly unrecognized by successive Chu monarchs until it was finally acknowledged as a priceless jewel. The c. 3rd–1st century BCE Chuci ("Songs of Chu") mentions the paired gems, "Shards and stones are prized as jewels / Sui and He rejected". This poetic anthology also says, "It grieves me that shining pearls [明珠] should be cast out in the mire / While worthless fish-eye stones are treasured in a strong-box", and describes a flying chariot, "Fringed with the dusky Moon Bright pearls [明月之玄珠]" (tr. Hawkes 1985: 277, 295, 290). King Liu An's c. 139 BCE Huainanzi ("Philosophers of Huainan") uses the story to describe one who has attained the Way of Heaven (天道), "It is like the pearl of Marquis Sui or the jade disk of Mr. He. Those who achieved it became rich; those who lost it became poor" (tr. Major et al. 2010: 218).
The c. 222 CE De Natura Animalium ("On the Characteristics of Animals"), compiled by Roman author Claudius Aelianus, told the story of Heraclea or Herakleis, a virtuous widow of Tarentum, who after seeing a fledgling stork fall and break its leg, nursed it back to health, and set it free. One year later, as Heraclea sat at the door of her cottage, the young stork returned and dropped a precious stone into her lap, and she put it indoors. Awakening that night, she saw that the gem "diffused a brightness and a gleam, and the house was lit up as though a torch had been brought in, so strong a radiance came from, and was engendered by, the lump of stone" (tr. Scholfield 1959: 209–210).
Laufer cites three c. 4th-century Chinese grateful-animal stories that parallel Heraclea's stork. The Shiyi ji ("Researches into Lost Records"), compiled by the Daoist scholar Wang Jia (d. 390 CE) from early apocryphal versions of Chinese history, recounts an anecdote about King Zhao of Yan (燕昭王, r. 311–279 BCE) and grateful birds with dongguangzhu (洞光珠, "cave shining pearls").
When Prince Chao of Yen was once seated on a terrace, black birds with white heads flocked there together, holding in their beaks perfectly resplendent pearls, measuring one foot all round. These pearls were black as lacquer, and emitted light in the interior of a house to such a degree that even the spirits could not obscure their supernatural essence. (tr. Laufer 1915: 59).
The imperial historian Gan Bao's c. 350 CE Soushen Ji ("In Search of the Supernatural") has two grateful-animal stories involving luminous pearls/gems. The first involves a black crane; according to legend, when a crane has lived a thousand years it turns blue; after another thousand it becomes black and is called a xuanhe (玄鶴, "dark crane").
Kuai Shen [噲參] was the most filial son to his mother. Once a black crane was injured by a bow hunter and in its extremity, went to Kuai. The latter took it in, doctored its wound, and when it was cured set it free. Soon afterwards the crane showed up again outside Kuai's door. The latter shone a torch to see out and discovered its mate there too. Each of them held a single night-glowing pearl [明珠] in its beak to repay Kuai. (tr. DeWoskin and Crump 1996: 238).
The second story is the oldest detailed explanation of the Marquis of Sui's pearl.
Once upon a time, when the ruler of the old Sui kingdom was journeying, he came upon a great wounded serpent whose back was broken. The ruler believed the creature to be a spirit manifestation and ordered his physician to treat it with drugs to close up its wound. Thereafter the serpent was able to move again, and the place was called Mound of the Wounded Serpent. One year later the serpent brought a bright pearl [明珠] in its mouth to give the ruler of Sui to show its gratitude. The pearl was greater than an inch in diameter, of the purest white and emitted light like moonglow. In the dark it could illuminate an entire room. For these reasons it was known as "Duke Sui's Pearl" [隋侯珠] or the "Spirit Snake's Pearl" [靈蛇珠], or, again, the "Moonlight Pearl" [明月珠]. (tr. DeWoskin and Crump 1996: 239).
Laufer concludes that the "coincidences in these three Chinese versions and the story of the Greek author, even in unimportant details such as the thankful bird returning after one year to the marquis of Sui, are so striking, that an historical connection between the two is obvious" (1915: 60).
A later elaboration of animal-gratitude stories involves grateful animals and ungrateful people, who are typically rescued from a pitfall trap (Ashliman 2010). Two versions mention marvelous gems. The English historian Matthew Paris's c. 1195 Chronicles says that Richard I of England (1157–1199) used to tell a parable about ungrateful people. A Venetian, Vitalis, was rescued from a horrible death by a ladder being let down into a pit into which he had fallen. A lion and a serpent trapped in the same pit used his ladder to escape, and in gratitude the lion brought Vitalis a goat it had killed, and the snake a luminous jewel that it carried in its mouth. As Richard reportedly told the story after his return from the Crusades, he may have heard it in the East: similar stories, but without the stone being luminous, occur in two Indian collections, the c. 300 BCE Kalila wa Dimnah and the 11th-century Kathasaritsagara (Ball 1938: 505).

The English poet John Gower's 1390 Confessio Amantis tells the story of the rich Roman lord Adrian and the poor woodcutter Bardus. Adrian falls into a pit that had already captured an ape and a serpent, and promises to give half his wealth to Bardus for pulling him out. After Bardus rescues the three, out of gratitude the ape piles up firewood for him and the serpent gives him "a stone more bright than cristall out of his mouth", but Adrian refuses to pay his debt. Bardus sells the luminous gem for gold and afterwards finds it again in his purse, and the same thing happens every time he sells it. Emperor Justinian I summons Bardus, listens to his testimony supported by the magically reappearing gem, and compels Adrian to fulfill his promise (tr. Clouston 1887 1: 224–226).
Some scholars were skeptical about luminous gem stories. In the West, the earliest nonbeliever was the Portuguese traveler to India and gem expert, Garcia de Orta (1563), who, having been told by a jeweler of a luminous carbuncle, doubted its existence. In the East, the first recorded skeptic was the Chinese encyclopedist Song Yingxing, who in 1628 wrote "it is not true that there are pearls emitting light at the hour of the dusk or night" (Ball 1938: 505).
See also
Cintamani, a wish-fulfilling jewel in Hindu and Buddhist traditions
Indra's net, Buddhist metaphor of a vast net with a jewel or pearl at each knot, infinitely reflecting all the other jewels
Mani Jewel, various legendary jewels mentioned in Buddhist texts
References
Ashliman, D. L. (2010), "The Grateful Animals and the Ungrateful Man", University of Pittsburgh.
Ashmole, Elias (1652), Theatrum Chemicum Britannicum: containing severall poeticall pieces of our famous English philosophers, who have written the hermetique mysteries in their owne ancient language, Nath Brooks.
Ball, Sydney H. (1938), "Luminous Gems, Mythical and Real", The Scientific Monthly 47.6: 496–505.
Bandelier, Adolph F. (1910), The islands of Titicaca and Koati, The Hispanic Society of America.
Benisch, A., and William F. Ainsworth (1856), Travels of Rabbi Petachia of Ratisbon: who, in the latter end of the twelfth century, visited Poland, Russia, Little Tartary, the Crimea, Armenia, Assyria, Syria, the Holy Land, and Greece, Trubner & Co.
Chardin, John (1720, 2010), Sir John Chardin's Travels in Persia, Cosimo, Inc. reprint.
Clouston, William Alexander (1887), Popular Tales and Fictions: Their Migrations and Transformations, William Blackwood and Sons.
Clouston, William Alexander (1889), A Group of Eastern Romances and Stories from the Persian, Tamil, and Urdu, W. Hodge & Co.
Conybeare, Frederick Cornwallis, tr. (1912), The Life of Apollonius of Tyana, William Heinemann.
Crump, James I., Jr., tr. (1970), Chan-kuo ts'e, Clarendon Press.
De Ment, Jack (1949), Handbook of Fluorescent Gems and Minerals – An Exposition and Catalog of the Fluorescent and Phosphorescent Gems and Minerals, Including the Use of Ultraviolet Light in the Earth Sciences, Mineralogist Publishing Company.
DeWoskin, Kenneth J. and James Irving Crump, trs. (1996), In Search of the Supernatural: The Written Record, Stanford University Press.
Dickens, Charles (1857), Wolves, Household Words vol. 15.
Eberhard, Wolfram (1968), The Local Cultures of South and East China, Alide Eberhard, tr. Lokalkulturen im alten China, 1943, E.J. Brill.
Forsyth, Thomas Douglas (1875), Report of a Mission to Yarkund in 1873, Under Command of Sir T. D. Forsyth: With Historical and Geographical Information Regarding the Possessions of the Ameer of Yarkund, Foreign Department Press.
Harvey, E. Newton (1957), A History of Luminescence from the Earliest Time until 1900, American Philosophical Society Memoirs, 44.
Hawkes, David (1985), The Songs of the South: An Anthology of Ancient Chinese Poems by Qu Yuan and Other Poets, Penguin Books.
von Haxthausen, August Franz L.M. (1854), Transcaucasia, sketches of the nations and races between the Black sea and the Caspian, tr. by J.E. Taylor, Chapman & Hall.
Hill, John (2015), Through the Jade Gate – China to Rome (A Study of the Silk Routes 1st to 2nd Centuries CE), revised edition, 2 vols., BookSurge.
Hirth, Friedrich (1875), China and the Roman Orient, Ares Publishers 1975.
Hirth, Friedrich and William Woodville Rockhill, trs. (1911), Chau Ju-kua: His Work On The Chinese And Arab Trade In The Twelfth And Thirteenth Centuries, Entitled Chu-fan-chï, Imperial Academy of Sciences.
Iyer, N. Chidambaram, tr. (1884), The Bṛihat Saṃhitâ of Varaha Mihira, Volumes 1–2, South Indian Press.
Jordanus, Friar, tr. Henry Yule (1863), Mirabilia Descripta: The Wonders of the East, Hakluyt Society.
King, Charles William (1867), Natural History of Precious Stones and of the Precious Metals, Deighton, Bell, & Co.
Kunz, George Frederick (1913), The Curious Lore of Precious Stones, J. B. Lippincott Company.
Laufer, Berthold (1912), Jade, A Study in Chinese Archaeology and Religion, Field Museum of Natural History Publication 154, 10.
Laufer, Berthold (1915), "The Diamond, a Study in Chinese and Hellenistic Folk-lore", Field Museum of Natural History Publication 184, 15.1.
Lucian, tr. by Herbert A. Strong and John Garstang (1913) The Syrian Goddess, Constable and Co.
Major, John S., Sarah Queen, Andrew Meyer, and Harold D. Roth (2010), The Huainanzi: A Guide to the Theory and Practice of Government in Early Han China, Columbia University Press.
Purchas, Samuel (1625), Hakluytus Posthumus, or Purchas his Pilgrimes, contayning a History of the World in Sea Voyages and Lande Travells, James MacLehose.
de Rochefort, Charles (1666) The History of the Caribby Islands Rendered into English by John Davies, London.
Sawyer, Ralph D. (2018), Sun Pin: Military Methods, Routledge.
Siculus, Diodorus, tr. by C.H. Oldfather et al. (1814), The Historical Library of Diodorus the Sicilian in Fifteen Books to which are added the Fragments of Diodorus, Edward Jones.
Skippon, Philip (1732), "An Account of a Journey made through Part of the Low Countries, Germany, Italy, and France about 1663-5," Churchill's Voyages.
Schafer, Edward H. (1963), The Golden Peaches of Samarkand, University of California Press.
Scholfield, A. F. tr. (1959), Aelian, Characteristics of Animals, Harvard University Press.
Strabo, tr. by H.C. Hamilton and W. Falconer (1889), The Geography of Strabo, Literally Translated with Notes, G. Bell & sons.
Tawney, Charles Henry (1928), The Ocean of Story, 10 vols., Chas. J. Sawyer.
Timberlake, Henry (1765), Memoirs of Lieut. Henry Timberlake, J. Ridley.
von Tschudi, Johann Jakob (1854), Travels in Peru, on the Coast, in the Sierra, Across the Cordilleras and the Andes, into the Primeval Forests, tr. by Thomasina Ross, A.S. Barnes & Co.
External links
Luminescence, The Gemology Project
World's largest night-shining jewel displayed in S China, China Daily, 22 November 2010
"Mens PumpUp" Maintain and Stimulate Glow-in-The-Dark Pearl-Beaded Prolonging Rubber Ring, Amazon
Folklore
Gemstones
Luminescent minerals
Mythology | Luminous gemstones | [
"Physics",
"Chemistry"
] | 10,900 | [
"Luminescence",
"Luminescent minerals",
"Materials",
"Gemstones",
"Matter"
] |
59,744,387 | https://en.wikipedia.org/wiki/Women%20of%20Discovery%20Awards | The Women of Discovery Awards are given by the non-profit WINGS WorldQuest, in recognition of the achievements of women in science and exploration.
The awards were first presented in 2003, the same year that WINGS WorldQuest was formed by Milbry Polk and Leila Hadley Luce.
Both the Board of Directors and a Junior Council at the granting organization, WINGS WorldQuest, are involved in selecting the recipients of the Women of Discovery Awards, who are thereafter known as Fellows.
Women of Discovery Awards are given in the categories of Lifetime Achievement, Air and Space, Conservation, Courage, Earth, Field Research, Film and Exploration, Humanity, Leadership, and the Sea. In addition, some recipients have been simply designated as "Fellows", without being placed in a category. The awards include an unrestricted financial grant.
In addition to its fellowship program, WINGS WorldQuest offers Flag Carrier grants in support of field researchers who are financing explorations.
Fellows
The awards are given every one to two years; not all awards are given each year, and in some years the same award may be given to more than one person.
Lifetime Achievement
Air and Space
Conservation
Courage
Earth
Field Research
Film and Exploration
Humanity
Innovation in Technology Award
Leadership
Sea
Awardee/Fellow
Flag Carriers
WINGS WorldQuest Flag Carriers receive grants in support of field research to help finance explorations.
References
Science awards honoring women
Awards established in 2003 | Women of Discovery Awards | [
"Technology"
] | 276 | [
"Science and technology awards",
"Science awards honoring women"
] |
59,745,058 | https://en.wikipedia.org/wiki/Marian%20Pour-El | Marian Boykan Pour-El (April 29, 1928 – June 10, 2009) was an American mathematical logician who did pioneering work in computable analysis.
Early life and education
Marian Boykan was born in 1928 in New York City; her parents were dentist Joseph Boykan and his wife Matilda (Mattie, née Caspe), a former laboratory technician and housewife.
As a young girl, she performed ballet at the Metropolitan Opera House; this experience influenced her later life, in which she was often more comfortable speaking before large audiences than in small groups.
Although she wanted to attend the Bronx High School of Science, it was at that time only for boys; instead, she went to a girls' school, Hunter College High School.
Her parents were unwilling to pay the tuition for private college for her, so she went to Hunter College, an inexpensive local higher-education establishment primarily aimed at training schoolteachers. There she earned a bachelor's degree in physics in 1949. She also completed enough courses in mathematics for a second major but was not allowed to have two majors by Hunter College's rules.
She was accepted to Harvard University for graduate studies in mathematics, with full support, as the only woman in the program.
At Harvard, she earned a master's degree in 1951 and a Ph.D. in mathematical logic in 1958. She was very isolated and lonely at Harvard, with few friends and, initially, no other students even willing to sit next to her in her classes. The nearest restroom to her classes was in a different building, and one of the few buildings with air conditioning in the summers was off-limits to women, even when she was assigned as an instructor to a class in that building. Because there were no logicians at Harvard at that time, she spent five years of her time as a visiting student at the University of California, Berkeley. Her doctoral dissertation was Computable Functions.
Career
After finishing her doctorate, Pour-El joined the mathematics faculty at Pennsylvania State University. She earned her tenure there in 1962. During a sabbatical from 1962 to 1964 at the Institute for Advanced Study, she worked with Kurt Gödel.
She moved in 1964 to the University of Minnesota, and was promoted to full professor there in 1968. Except for a year from 1969 to 1970 as a visiting professor at the University of Bristol, she remained at the University of Minnesota until her retirement in 2000.
At Minnesota, her doctoral students included Jill Zimmerman (Ph.D. 1990), later the James M. Beall Professor of Mathematics and Computer Science at Goucher College, Maryland.
In 1980 Pour-El was elected as a Member at Large of the American Mathematical Society (AMS) and served until 1982.
Contributions
Pour-El's early work concerned recursion theory, and included joint work with William Alvin Howard, Saul Kripke, Donald A. Martin, and Hilary Putnam.
In a 1974 publication, she studied analogues of computability for analog computers. She proved that, for her formulation of this problem, the functions that can be computed by such computers are the same as the functions that define solutions to algebraic differential equations. This result, a refinement of the work of Claude Shannon,
became known as the Shannon–Pour-El thesis.
In the late 1970s Pour-El began working on computable analysis.
Her "most famous and surprising result", co-authored with Minnesota colleague J. Ian Richards, was that for certain computable initial conditions, determining the behavior of the wave equation is an undecidable problem. Their result was later taken up by Roger Penrose in his book The Emperor's New Mind; Penrose used this result as a test case for the Church–Turing thesis, but concluded that the non-smoothness of the initial conditions makes it implausible that a computational device could use this phenomenon to exceed the limits of conventional computing. Freeman Dyson used the same result to argue for the evolutionary superiority of analog to digital forms of life.
With Richards, Pour-El was the author of a book, Computability in Analysis and Physics.
Recognition
Pour-El was elected to the Hunter College Hall of Fame in 1975, and as a Fellow of the American Association for the Advancement of Science in 1983.
A symposium was held in honor of Pour-El in Japan in 1993.
Personal life
As a student at Berkeley, Pour-El met her husband, Israeli biochemist Akiva Pour-El. They had one daughter, Ina. Her husband followed her to Penn State after completing his doctorate a year later, and he later followed her, again, when she moved to Minnesota. They lived separately for several long intervals, most notably from 1969 to 1975 when her husband taught in Illinois, and Pour-El wrote an article in 1981 on how having a long-distance relationship worked for her.
Pour-El's brother is music composer Martin Boykan.
Selected publications
References
1928 births
2009 deaths
20th-century American mathematicians
Mathematical logicians
Women logicians
Hunter College alumni
Harvard University alumni
Pennsylvania State University faculty
Fellows of the American Association for the Advancement of Science
People from New York City
21st-century American women
20th-century American women mathematicians | Marian Pour-El | [
"Mathematics"
] | 1,047 | [
"Mathematical logic",
"Mathematical logicians"
] |
59,746,854 | https://en.wikipedia.org/wiki/Rank-maximal%20allocation | Rank-maximal (RM) allocation is a rule for fair division of indivisible items. Suppose we have to allocate some items among people. Each person can rank the items from best to worst. The RM rule says that we have to give as many people as possible their best (#1) item. Subject to that, we have to give as many people as possible their next-best (#2) item, and so on.
In the special case in which each person should receive a single item (for example, when the "items" are tasks and each task has to be done by a single person), the problem is called rank-maximal matching or greedy matching.
The idea is similar to that of utilitarian cake-cutting, where the goal is to maximize the sum of utilities of all participants. However, the utilitarian rule works with cardinal (numeric) utility functions, while the RM rule works with ordinal utilities (rankings).
Definition
There are several items and several agents. Each agent has a total order on the items. Agents can be indifferent between some items; for each agent, we can partition the items into equivalence classes that contain items of the same rank. For example, if Alice's preference-relation is x > y,z > w, it means that Alice's 1st choice is x, which is better for her than all other items; Alice's 2nd choice is y and z, which are equally good in her eyes but not as good as x; and Alice's 3rd choice is w, which she considers worse than all other items.
For every allocation of items to the agents, we construct its rank-vector as follows. Element #1 in the vector is the total number of items that are 1st-choice for their owners; Element #2 is the total number of items that are 2nd-choice for their owners; and so on.
A rank-maximal allocation is one in which the rank-vector is maximum, in lexicographic order.
Example
Three items, x y and z, have to be divided among three agents whose rankings are:
Alice: x > y > z
Bob: x > y > z
Carl: y > x > z
In the allocation (x, y, z), Alice gets her 1st choice (x), Bob gets his 2nd choice (y), and Carl gets his 3rd choice (z). The rank-vector is thus (1,1,1).
In the allocation (x,z,y), both Alice and Carl get their 1st choice and Bob gets his 3rd choice. The rank-vector is thus (2,0,1), which is lexicographically higher than (1,1,1) – it gives more people their 1st choice.
It is easy to check that no allocation produces a lexicographically higher rank-vector. Hence, the allocation (x,z,y) is rank-maximal. Similarly, the allocation (z,x,y) is rank-maximal – it produces the same rank-vector (2,0,1).
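For small instances like this one, rank-maximality can be checked by brute force. The following sketch (pure Python, strict rankings only; all names are illustrative, not from any library) recomputes the example above:

```python
from itertools import permutations

# The example above, with strict rankings (index 0 = 1st choice).
prefs = {
    "Alice": ["x", "y", "z"],
    "Bob":   ["x", "y", "z"],
    "Carl":  ["y", "x", "z"],
}
agents = list(prefs)            # insertion order: Alice, Bob, Carl
items = ["x", "y", "z"]

def rank_vector(assignment):
    """Element i counts the agents who received their (i+1)-th choice."""
    vec = [0] * len(items)
    for agent, item in zip(agents, assignment):
        vec[prefs[agent].index(item)] += 1
    return tuple(vec)

# Brute force over all one-item-per-agent allocations.
best = max(permutations(items), key=rank_vector)
print(best, rank_vector(best))  # ('x', 'z', 'y') (2, 0, 1)
```

Python compares tuples lexicographically, which is exactly the order the RM rule imposes on rank-vectors, so `max` with `rank_vector` as the key returns a rank-maximal allocation.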
Algorithms
RM matchings were first studied by Robert Irving, who called them greedy matchings. He presented an algorithm that finds an RM matching in time , where n is the number of agents and c is the largest length of a preference-list of an agent.
Later, an improved algorithm was found, which runs in time O(min(n + C, C√n) · m), where m is the total length of all preference-lists (the total number of edges in the graph), and C is the maximal rank of an item used in an RM matching (i.e., the maximal number of non-zero elements in an optimal rank vector). The algorithm reduces the problem to maximum-cardinality matching. Intuitively, we would like to first find a maximum-cardinality matching using only edges of rank 1; then, extend this matching to a maximum-cardinality matching using only edges of ranks 1 and 2; then, extend this matching to a maximum-cardinality matching using only edges of ranks 1, 2 and 3; and so on. The problem is that, if we pick the "wrong" maximum-cardinality matching for rank 1, then we might miss the optimal matching for rank 2. The algorithm solves this problem using the Dulmage–Mendelsohn decomposition, which is a decomposition that uses a maximum-cardinality matching but does not depend on which matching is chosen (the decomposition is the same for every maximum-cardinality matching chosen). It works in the following way.
Let G1 be the sub-graph of G containing only edges of rank 1 (the highest rank).
Find a maximum-cardinality matching in G1, and use it to find the decomposition of G1 into E1, O1 and U1.
One property of the decomposition is that every maximum-cardinality matching in G1 saturates all vertices in O1 and U1. Therefore, in a rank-maximal matching, all vertices in O1 and U1 are adjacent to an edge of rank 1. So we can remove from the graph all edges with rank 2 or higher adjacent to any of these vertices.
Another property of the decomposition is that any maximum-cardinality matching in G1 contains only E1-O1 and U1-U1 edges. Therefore, we can remove all other edges (O1-O1 and O1-U1 edges) from the graph.
Add to G1 all the edges with the next-highest rank. If there are no such edges, stop. Else, go back to step 2.
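Why an arbitrary maximum-cardinality matching at each rank does not suffice can be seen on a tiny hypothetical instance (not from the source): agent A ranks x first and y second, while agent B ranks only x. Both single edges are maximum matchings among the rank-1 edges alone, but only one of them extends once rank-2 edges are considered. A sketch:

```python
# Hypothetical toy instance: agent A ranks x 1st and y 2nd; agent B ranks only x.
rank2 = [("A", "y")]  # the only rank-2 edge

def is_matching(edges):
    """A set of agent-item edges is a matching if no agent or item repeats."""
    agents = [a for a, _ in edges]
    items = [i for _, i in edges]
    return len(set(agents)) == len(agents) and len(set(items)) == len(items)

# Among rank-1 edges {A-x, B-x}, both single edges are maximum matchings:
assert is_matching([("A", "x")]) and is_matching([("B", "x")])

# But only one of them extends when rank-2 edges are added:
wrong = [("A", "x")] + [e for e in rank2 if is_matching([("A", "x"), e])]
right = [("B", "x")] + [e for e in rank2 if is_matching([("B", "x"), e])]
print(len(wrong), len(right))  # 1 2 -- the "wrong" rank-1 choice blocks rank 2
```

Choosing {A-x} at rank 1 strands agent B forever, while {B-x} leaves A free to take its rank-2 item; the Dulmage–Mendelsohn pruning exists precisely to rule out such dead-end choices.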
A different solution, using maximum-weight matchings, attains a similar run-time.
Variants
The problem has several variants.
1. In maximum-cardinality RM matching, the goal is to find, among all different RM matchings, the one with the maximum number of matched agents.
2. In fair matching, the goal is to find a maximum-cardinality matching such that the minimum number of edges of rank r are used; subject to that, the minimum number of edges of rank r−1 are used, and so on.
Both maximum-cardinality RM matching and fair matching can be found by reduction to maximum-weight matching.
3. In the capacitated RM matching problem, each agent has an upper capacity denoting an upper bound on the total number of items he should get. Each item has an upper quota denoting an upper bound on the number of different agents it can be allocated to. It was first studied by Mehlhorn and Michail. There is an improved algorithm whose run-time depends on B, the minimum of the sum-of-quotas of the agents and the sum-of-quotas of the items. It is based on an extension of the Gallai–Edmonds decomposition to multi-edge matchings.
See also
Fair item assignment
Stable matching
Envy-free matching
Priority matching
References
Fair division
Matching (graph theory) | Rank-maximal allocation | [
"Mathematics"
] | 1,440 | [
"Recreational mathematics",
"Fair division",
"Graph theory",
"Game theory",
"Mathematical relations",
"Matching (graph theory)"
] |
59,747,046 | https://en.wikipedia.org/wiki/Clarke%20number | Clarke number or clarke is the relative abundance of a chemical element, typically in Earth's crust. The technical definition of "Earth's crust" varies among authors, and the actual numbers also vary significantly.
History
In the 1930s, USSR geochemist Alexander Fersman defined the relative abundance of chemical elements in geological objects, expressed in percents, as the "clarke". This was in honor of the American geochemist Frank Wigglesworth Clarke, who pioneered the estimation of the chemical composition of Earth's crust, based on the extensive chemical analyses of numerous rock samples that Clarke and his colleagues carried out from 1889 to 1924.
Examples based on Fersman's definition:
: When the whole mass of a planet X is M, and the mass of oxygen there is m, then the weight clarke of oxygen in planet X is m/M (dimensionless)
: When the whole count of atoms in a rock Y is N, and the atom count of silicon there is n, then silicon's clarke of atom count in rock Y is n/N (dimensionless)
Fersman's "clarke of Earth's crust" is the Earth's surface including 16 km-thick lithosphere, hydrosphere and atmosphere.
In Russian
The term is synonymous with "the relative abundance of elements" in any object, either in weight ratio or in atomic (number of atoms) ratio, regardless of how "Earth's crust" is defined, and its denotation is not restricted to percents.
In English
In the English-speaking world, the term "clarke" was not even used in Wells(1937), which introduced Fersman's proposal, nor in later USGS articles such as Fleischer(1953); they used the term "relative abundance of the elements" instead. Brian Mason also mentioned the term "clarke" in Mason(1952) (mistakenly attributing it to Vladimir Vernadsky, later corrected to Fersman in Mason(1958)), but his definition slightly differed from Fersman's, limiting it to the average percentage in Earth's crust while allowing the hydrosphere and atmosphere to be excluded. Apart from explaining the term, Mason himself did not use it.
A variant term "clarke value" is occasionally used. However, "clarke value" can also have a different meaning: the clarke of concentration.
Terms "clarke number" and "Clarke number" are found in articles written by Japanese authors (example:).
Usage in Japan
In Japan, "clarke" is translated as . The word is always added, which happens to make the term appear similar in form with scientific constants such as . The term may have a narrower sense than Fersman's. Several of the following constraints may apply:
Only of Earth's crust
Lithosphere approximated as a 10 mile-deep layer from sea level
Must include all three layers: lithosphere (93.06%), hydrosphere (6.91%) and atmosphere (0.03%)
Only mass ratio
Denoted in percents (not in ppm or ppb)
(What the quoter believes to be) data from Clarke and Washington(1924)
Another peculiarity in Japan is the existence of a popular version of data, which was tabulated in reference books such as the annual "Chronological Scientific Tables" (RCST1939(1938)), the "Dictionary of Physics and Chemistry" (IDPC(1939)) and other prominent books on geochemistry and chemistry. This version, Kimura(1938), was devised by the chemist Kimura. It was often quoted, unsourced, as the "Clarke numbers". The numbers differed from any versions by Clarke / Clarke&Washington (1889–1924), and from anything listed in foreign (non-Japanese) articles such as the USGS compilation, and were thus unknown outside of Japan. Yet the numbers were sometimes quoted in English articles without citation.
As the geological definition of "Earth's crust" evolved, the "10 mile-deep" approximation was deemed out-of-date, and some people considered the term "clarke number" obsolete too. Yet others may have meant broader senses, not limited to Earth's crust, leading to confusion. RCST1961(1961) switched their "clarke number" table from Kimura(1938) to one based on Mason(1958), and the label "clarke number" on the table was removed in RCST1963(1962). IDPC(1971) removed the "clarke number" table, which was a variant of Kimura(1938). IDPC(1981) said the term is mostly abandoned, and the dictionary entry for "clarke number" itself was removed from IDPC(1998). So "clarke numbers" became associated almost solely with Kimura(1938)'s data, while Kimura's name was forgotten. Incidentally, in major reference books, there was no data table titled "clarke numbers" which showed Clarke's original tables.
Despite being removed from major reference books, data from Kimura(1938), and unsourced phrases such as "the Clarke number of iron is 4.70", continue to circulate, even in the 2010s.
Example data
This section lists only historical data. For recent data, see Abundance of elements in Earth's crust.
Technical definition of "clarke", "Earth's crust" and "lithosphere" differ among authors, and the actual numbers vary accordingly, sometimes by several times. Even the same author presents multiple versions, with various estimation parameters or knowledge refinements. Yet they are often quoted without source, rendering the data unverifiable.
Clarke & Washington presented estimations of the average composition of the outer part of Earth with four variants:
10-mile crust, hydrosphere and atmosphere.
20-mile crust, hydrosphere and atmosphere.
10-mile crust, only igneous rocks and sedimentary rocks. (i.e. exclude hydrosphere and atmosphere)
10-mile crust, only igneous rocks. (i.e. exclude hydrosphere and atmosphere)
"The earth's crust" in Clarke and Washington works can mean two different things: (a) The whole outer part of Earth, i.e. lithosphere, hydrosphere and atmosphere; (b) Only the lithosphere, which in their works just meant "the rocky crust of the earth". "Crust" here means (b).
The following tables do not cover all elements; some elements not in the tables may have larger abundance. Some minor elements are listed here to aid identifying the origin of unsourced documents.
Some entries contain data for the disputed element 43 masurium.
Precision (number of digits) may be adjusted to improve legibility.
Of the mass of 10 mile-thick lithosphere plus hydrosphere and atmosphere
Tables of historical data on the relative abundance of some elements in Earth's crust.
Other variants
Some authors call these "clarkes" too, some do not.
Clarke of concentration
A related term "clarke of concentration" or "concentration clarke", synonym: "concentration factor (mineralogy)", is a measure to see how rich a particular ore is.
That is, the ratio between the concentration of a chemical element in the ore and its concentration in the whole Earth's crust (i.e. its "clarke").
If the concentration of a commodity in an ore X is c, and the "clarke" of that commodity is k,
then "the clarke of concentration" of that commodity in X is c/k (dimensionless).
The value represents the degree to which the commodity is concentrated from crustal abundances to the ore by natural geochemical processes; a clue for whether the commodity could be mined economically.
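As a numeric sketch (the ore grade here is hypothetical; 4.70 is the oft-quoted Kimura-derived clarke for iron mentioned above):

```python
def clarke_of_concentration(ore_grade_percent, clarke_percent):
    """Ratio of a commodity's concentration in an ore to its crustal clarke."""
    return ore_grade_percent / clarke_percent

# Hypothetical iron ore grading 47% Fe, against the oft-quoted
# (Kimura-derived) crustal clarke of 4.70% for iron:
print(clarke_of_concentration(47.0, 4.70))  # about 10: iron enriched tenfold
```

Both quantities must be in the same units (here, weight percents) for the ratio to be dimensionless.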
References
Footnotes
Cited works
C: Frank Wigglesworth Clarke of USGS and Henry Stephens Washington
U: United States Geological Survey (USGS)
B:
F: Alexander Fersman
G: Victor Goldschmidt
M: Brian Mason
K:
H: Research on the history of chemistry
Examples of usage
R: Chronological Scientific Tables (理科年表): a (mostly) annual reference book published in Japan since 1925 CE. Note that the actual published year is typically one year earlier than the nominal (book-title) year.
I: Dictionary of Physics and Chemistry: revised roughly once a decade. First edition 1935 CE.
D: Kyoritsu Great Dictionary of Chemistry
X: Other usage examples
See also
Abundance of elements in Earth's crust, modern data
Structure of the Earth
Geochemistry
Earth sciences | Clarke number | [
"Chemistry"
] | 1,754 | [
"nan"
] |
59,747,263 | https://en.wikipedia.org/wiki/Amicable%20triple | In mathematics, an amicable triple is a set of three different numbers so related that the restricted sum of the divisors of each is equal to the sum of other two numbers.
In another equivalent characterization, an amicable triple is a set of three different numbers so related that the sum of the divisors of each is equal to the sum of the three numbers.
So a triple (a, b, c) of natural numbers is called amicable if s(a) = b + c, s(b) = a + c and s(c) = a + b, or equivalently if σ(a) = σ(b) = σ(c) = a + b + c. Here σ(n) is the sum of all positive divisors, and s(n) = σ(n) − n is the aliquot sum.
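Both characterizations can be checked directly with a naive divisor sum, which is fine at this scale (a real search would factorize instead). A sketch, using the known amicable triple (1980, 2016, 2556):

```python
def sigma(n):
    """Sum of all positive divisors of n (naive, fine for small n)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def is_amicable_triple(a, b, c):
    """Check sigma(a) = sigma(b) = sigma(c) = a + b + c for three distinct numbers."""
    return len({a, b, c}) == 3 and sigma(a) == sigma(b) == sigma(c) == a + b + c

a, b, c = 1980, 2016, 2556
print(is_amicable_triple(a, b, c))  # True: each sigma equals 6552
print(sigma(a) - a == b + c)        # True: the aliquot-sum form s(a) = b + c agrees
```

The second print confirms the equivalence of the two definitions on this triple: s(1980) = 6552 − 1980 = 4572 = 2016 + 2556.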
References
Divisor function
Integer sequences
Number theory | Amicable triple | [
"Mathematics"
] | 190 | [
"Sequences and series",
"Discrete mathematics",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Numbers",
"Number theory"
] |
59,747,277 | https://en.wikipedia.org/wiki/Gallai%E2%80%93Edmonds%20decomposition | In graph theory, the Gallai–Edmonds decomposition is a partition of the vertices of a graph into three subsets which provides information on the structure of maximum matchings in the graph. Tibor Gallai and Jack Edmonds independently discovered it and proved its key properties.
The Gallai–Edmonds decomposition of a graph can be found using the blossom algorithm.
Properties
Given a graph , its Gallai–Edmonds decomposition consists of three disjoint sets of vertices, D(G), A(G), and C(G), whose union is V(G), the set of all vertices of G. First, the vertices of G are divided into essential vertices (vertices which are covered by every maximum matching in G) and inessential vertices (vertices which are left uncovered by at least one maximum matching in G). The set D(G) is defined to contain all the inessential vertices. Essential vertices are split into A(G) and C(G): the set A(G) is defined to contain all essential vertices adjacent to at least one vertex of D(G), and C(G) is defined to contain all essential vertices not adjacent to any vertex of D(G).
It is common to identify the sets D(G), A(G), and C(G) with the subgraphs induced by those sets. For example, we say "the components of D(G)" to mean the connected components of the subgraph induced by D(G).
The Gallai–Edmonds decomposition has the following properties:
The components of D(G) are factor-critical graphs: each component has an odd number of vertices, and when any one of these vertices is left out, there is a perfect matching of the remaining vertices. In particular, each component has a near-perfect matching: a matching that covers all but one of the vertices.
The subgraph induced by C(G) has a perfect matching.
Every non-empty subset X of A(G) has neighbors in at least |X| + 1 components of D(G).
Every maximum matching in G has the following structure: it consists of a near-perfect matching of each component of D(G), a perfect matching of C(G), and edges from all vertices in A(G) to distinct components of D(G).
If D(G) has k components, then the number of edges in any maximum matching in G is (|V(G)| − k + |A(G)|)/2.
Construction
The Gallai–Edmonds decomposition of a graph G can be found, somewhat inefficiently, by starting with any algorithm for finding a maximum matching. From the definition, a vertex v is in D(G) if and only if G − v (the graph obtained from G by deleting v) has a maximum matching of the same size as G. Therefore we can identify D(G) by computing a maximum matching in G and in G − v for every vertex v. The complement of D(G) can be partitioned into A(G) and C(G) directly from the definition.
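This inefficient-but-direct approach is easy to sketch for tiny graphs. The code below (pure Python, names illustrative) brute-forces all maximum matchings and uses the equivalent "uncovered by at least one maximum matching" characterization of the inessential vertices:

```python
from itertools import combinations

def gallai_edmonds(vertices, edges):
    """Brute-force D, A, C for a tiny graph, straight from the definitions."""
    def is_matching(es):
        ends = [v for e in es for v in e]
        return len(ends) == len(set(ends))

    # Enumerate all matchings and keep the maximum-cardinality ones.
    matchings = [c for r in range(len(edges) + 1)
                 for c in combinations(edges, r) if is_matching(c)]
    msize = max(len(m) for m in matchings)
    maximum = [m for m in matchings if len(m) == msize]

    covered = lambda m: {v for e in m for v in e}
    # D: inessential vertices, uncovered by at least one maximum matching.
    D = {v for v in vertices if any(v not in covered(m) for m in maximum)}
    # A: essential vertices adjacent to D; C: the remaining essential vertices.
    A = {v for v in vertices - D
         if any(v in e and (set(e) - {v}) & D for e in edges)}
    C = vertices - D - A
    return D, A, C

# Path a-b-c: both endpoints are inessential, the middle vertex is essential.
V = {"a", "b", "c"}
E = [("a", "b"), ("b", "c")]
print(gallai_edmonds(V, E))  # D={'a','c'}, A={'b'}, C=set()
```

On the path, D(G) has k = 2 components and |A(G)| = 1, so the matching-size formula gives (3 − 2 + 1)/2 = 1, matching the brute-force result.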
One particular method for finding a maximum matching in a graph is Edmonds' blossom algorithm, and the processing done by this algorithm enables us to find the Gallai–Edmonds decomposition directly.
To find a maximum matching in a graph G, the blossom algorithm starts with a small matching and goes through multiple iterations in which it increases the size of the matching by one edge. We can find the Gallai–Edmonds decomposition from the blossom algorithm's work in the last iteration: the work done when it has a maximum matching M, which it fails to make any larger.
In every iteration, the blossom algorithm passes from G to smaller graphs by contracting subgraphs called "blossoms" to single vertices. When this is done in the last iteration, the blossoms have a special property:
All vertices of a blossom are inessential vertices of the bigger graph.
The vertex formed by contracting the blossom is an inessential vertex of the smaller graph.
The first property follows from the algorithm: every vertex of a blossom is the endpoint of an alternating path that starts at a vertex uncovered by the matching. The second property follows from the first by the lemma below:
Let G be a graph, M a matching in G, and let C be a cycle of length 2k + 1 which contains k edges of M and is vertex-disjoint from the rest of M. Construct a new graph G′ from G by shrinking C to a single vertex. Then M \ C is a maximum matching in G′ if and only if M is a maximum matching in G.
This lemma also implies that when a blossom is contracted, the set of inessential vertices outside the blossom remains the same.
Once every blossom has been contracted by the algorithm, the result is a smaller graph G′, a maximum matching M′ in G′ of the same size as M, and an alternating forest F in G′ with respect to M′. In G′, the Gallai–Edmonds decomposition has a short description. The vertices in F are classified into inner vertices (vertices at an odd distance in F from a root) and outer vertices (vertices at an even distance in F from a root); A(G′) is exactly the set of inner vertices, and D(G′) is exactly the set of outer vertices. Vertices of G′ that are not in F form C(G′).
Contracting blossoms preserves the set of inessential vertices; therefore D(G) can be found from D(G′) by taking all vertices of G which were contracted as part of a blossom, as well as all vertices in D(G′). The vertices in A(G′) and C(G′) are never contracted; A(G) = A(G′) and C(G) = C(G′).
Generalizations
The Gallai–Edmonds decomposition is a generalization of Dulmage–Mendelsohn decomposition from bipartite graphs to general graphs.
An extension of the Gallai–Edmonds decomposition theorem to multi-edge matchings is given in Katarzyna Paluch's "Capacitated Rank-Maximal Matchings".
References
Graph algorithms
Matching (graph theory) | Gallai–Edmonds decomposition | [
"Mathematics"
] | 1,045 | [
"Matching (graph theory)",
"Mathematical relations",
"Graph theory"
] |
59,748,413 | https://en.wikipedia.org/wiki/The%20Shift%20Project | The Shift Project (also called The Shift or TSP) is a French nonprofit created in 2010 that aims to limit both climate change and the dependency of our economy on fossil fuels.
Presentation, goals and organization
The Shift Project is a French nonprofit created in January 2010 in Paris by energy-climate experts such as Jean-Marc Jancovici, Geneviève Férone-Creuzet and Michel Lepetit. The organization aims to address two issues raised by the use of carbon: climate change and the depletion of fossil fuels. The Shift works as a think tank that shares ideas with economic, political, academic and voluntary actors.
The Shift Project is funded by corporate sponsors. Its budget for 2017 was about 600,000 euros.
The organization is led by a group of three people elected by the board of directors, which includes members of the sponsoring companies. A group of experts, called the "Expert Committee" (), ensures the scientific validity of the work done by The Shift Project. This group of experts (in economics, finance, climate, physics, history...) includes Alain Grandjean, Gaël Giraud, Hervé Le Treut, Jean-Pascal van Ypersele and Jacques Treiner. When the think tank was created, the first director was Cédric Ringenbach. He held this position until 2016, when he left The Shift Project and created the nonprofit organization The Climate Collage, which was later renamed to The Climate Fresk. Now headed by Matthieu Auzanneau, The Shift has a team of about ten employees and works with volunteers who are grouped into an independent nonprofit called The Shifters.
The Shift examines the dependency of our economy on oil through three angles: the potential return of economic growth, the issues related to the finite amount of oil and, of course, the climate change due to carbon emissions. According to The Shift, although GDP may have its uses, it is not a satisfactory measure, especially because it accounts for neither natural resources (and thus not their limited availability) nor resulting externalities like greenhouse gas emissions.
Projects, events and activities
Since 2012, The Shift Project has organized an annual two-day meeting called The Shift Forum with the objective of running a debate between big industrial and financial company leaders and experts on climate, energy and economics. The Shift also organizes many public events, sometimes in collaboration with other organizations like the Business and Climate Summit 2015 or the World Efficiency 2015.
The nonprofit also contributed to the National Debate for Energy Transition in France and its president, Jean-Marc Jancovici, is a member of the French Committee on Climate Change.
Main publications
The Shift mostly works in task forces: for a couple of months or years, a group of experts (from higher education and academic research, NGOs, public sector, companies...) is set up on a well-defined question. When the project ends, the task force writes a report and presents it to concerned actors. The report is then made publicly available.
Addressed issues include the building rehabilitation to make them more energy-efficient, the relation between energy and GDP, alternative metrics to GDP, the scientific rigor of energy scenarios, sustainable mobility or the price of carbon.
For the price of carbon in Europe, the Shift suggests to set a reservation price to 20 euros and increase it every year.
Since 2013, The Shift has been gathering experts on the energy rehabilitation of buildings and made propositions like the Energy Efficiency Passport. In addition to being experimented by the Shift through the nonprofit organization Expérience P2E, this building passport was then included in the Energy Transition Law and is now used by various actors in the building industry.
In 2016, at The Shift Project's request, the engineer Francisco Luciano gathered a team of experts including the SNCF, Vinci Autoroutes, EDF, the CVTC, start-ups in car sharing, the senior official Olivier Paul-Dubois-Taine and researchers. In September 2017, The Shift published the report "Decarbonize mid-density areas – Less carbon more bond", for which The Shift and the project leader Francisco Luciano were invited by the Ministry for Transportation to attend the Mobility Foundations and various governmental working groups. The report, which aims to be well-argued and quantitative, concludes that it is possible to strongly decarbonize mobility in suburban areas thanks to cycling, car sharing and fast public transport. The working group also studied the delivery of goods and remote work.
On 4 October 2018, the think tank published a report on the digital economy impact on climate and environment. The report notes that the worldwide energy consumption of the digital economy grows at a very fast rate (about 9% a year) with a worsening energy efficiency, unlike most economic sectors. It concludes by advocating digital sobriety to minimize most of this impact growth.
Pledge for climate: The 2017 Decarbonize Europe Manifesto
The call for action of the economic actors
On 21 March 2017, the think tank made public the signatories of a text called "Decarbonize Europe Manifesto". This text is described as a wake-up call 15 months after the Paris Agreement. It begins with: "We, the signatories of this Manifesto to decarbonize Europe, call upon all European States to immediately implement policies aiming to achieve a level of greenhouse gas emissions close to zero by 2050!" and aims to "guarantee peace". It ends with: "We call upon all European actors – individuals, businesses and public authorities – to implement concrete and coherent strategies which can meet the challenge posed by climate change and the limits of natural resources." The Shift Project claims the decarbonization of Europe is a challenge, but that it is necessary for a modern future.
It is supported by more than 3,000 citizens including 80 company directors and around forty scientists and political figures. The press mainly mentions the signature of economic leaders like the magazine Challenges: "Climate: Why the company directors (at last) unite to decarbonize Europe".
The think tank then called on candidates running for president to commit to a European plan to fight climate change that would abide by the Paris Agreement.
Signatories of the Manifesto
Company directors who signed the Manifesto include Elisabeth Borne (RATP), Martin Bouygues (Bouygues), Patricia Barbizet (Artémis-Kering), Guillaume Pepy (SNCF), Christophe Cuvillier (Unibail Rodamco), Nicolas Dufourq (BPI France), Pierre Blayau (Caisse centrale de réassurance), Stéphane Richard (Orange), Alain Montarant (MACIF), Nicolas Théry (Crédit mutuel), Denis Kessler (SCOR), Xavier Huillard (Vinci), Jean-Dominique Senard (Michelin) and Agnès Ogier (Thalys).
Scientists who signed the Manifesto include climatologists like Jean Jouzel, Hervé Le Treut and Jean-Pascal van Ypersele; the biologist and senior official Dominique Dron; the mathematician Ivar Ekeland; physicists like Sébastien Balibar, Roger Balian and Yves Bréchet; economists like Gaël Giraud, Roger Guesnerie, Philippe Aghion, Christian de Perthuis, Jean-Marie Chevalier and Jean-Charles Hourcade; directors of grandes écoles like Meriem Fournier (AgroParis-Tech Nancy), Olivier Oger (EDHEC) and Vincent Laflèche (Mines ParisTech).
Other people who signed it include former ministers like Arnaud Montebourg, Serge Lepeltier, the Belgian Philippe Maystadt and the president of the union CFE-CGC François Hommeril.
Nine proposals
The Shift Project published "9 propositions to take Europe to a new era", describing as many projects that, according to the Shift, must imperatively be carried out to meet the Paris Agreement. The AFP specifies that these propositions are made "in parallel with the Manifesto" and are not "endorsed by the signatories". The daily economic newspaper Les Échos highlights the "plan for a 'carbon-free' Europe".
These propositions concern seven sectors: electricity, transportation, construction, industry, food, agriculture and forestry. They are described in depth in the book "Let's decarbonize!".
References
External links
Decarbonize Europe Manifesto
Climate change organizations
Energy organizations
Environmental organizations based in France | The Shift Project | [
"Engineering"
] | 1,736 | [
"Energy organizations"
] |
59,749,121 | https://en.wikipedia.org/wiki/Ren%C3%A9%20Roy%20%28chemist%29 | René Roy (born November 4, 1952) is a Canadian organic chemist from Quebec, specializing in glycobiology and carbohydrate chemistry. He is professor emeritus, Department of chemistry, at the Université du Québec à Montréal (UQAM) and associate professor at the Institut National de la Recherche Scientifique (INRS) – Institut Armand-Frappier (IAF). He is the founder and former director of PharmaQAM, a biopharmaceutical research center based at UQAM, focusing on the discovery of new bioactive molecules, their mechanism of action and the vectorization of drugs.
He is a pioneer in the development of synthetic glycoconjugate vaccines for both human and veterinary health, having co-developed the first, and so far only, marketed semi-synthetic vaccine for human use, preventing bacterial meningitis and pneumonia in developing countries.
Education
René Roy completed his Ph.D. in carbohydrate chemistry from Université de Montréal in 1980, with Stephen Hanessian, developing synthetic methodologies and the syntheses of natural compounds using carbohydrate precursors (Chiron approach).
Career
Immediately after his Ph.D, in 1980, René Roy joined the National Research Council (NRC) in Ottawa where he worked as researcher in the Institute for Biological Sciences. Then, in 1985, he began his career as professor in the department of chemistry of the University of Ottawa where he served until December 2002. In parallel, he held the positions of Associate Director of the Ottawa-Carleton Chemistry Institute from 1993 to 1996, Director from 1996 to 1999, and again Associate Director in 2000. From 2002 to 2004, he was chairman of the American Chemical Society (ACS) Division of Carbohydrate Chemistry and, in 2005, head of the ACS awards committee.
In 2008, he returned to Montreal to teach organic chemistry at the Université du Québec à Montréal. There, he also founded the PharmaQAM biopharmaceutical research center which gathers some 50 professors and 17 institutions with common interests in the molecular aspects of medicinal chemistry and drug discovery working on new bioactive molecules, their mechanism of action and the way they are vectorized in vivo. He served as director of PharmaQAM until December 2017.
During his career, René Roy has co-developed meningitis vaccines, for humans and animals, that led to commercial success. One of them, targeting the Haemophilus influenzae type b (Hib) bacterium, was designed jointly with the Cuban researcher Vicente Verez Bencomo to prevent lethal meningitis and pneumonia in developing countries. It is the first human semi-synthetic glycoconjugate vaccine approved and remains the only one. In use since 2004, more than 34 million doses have been distributed to children in several countries including Vietnam, Syria, Brazil, Venezuela and Angola, eradicating the infectious disease in Cuba.
René Roy is a cofounder of Glycovax Pharma, a biotech company operating in Montreal, developing glycochemistry-based treatments against cancer and other diseases with unmet medical needs.
Research
Research interests
René Roy uses carbohydrate chemistry to develop neoglycoconjugates and polymers to treat disease related to glycoproteins such as bacterial infections and cancers. His synthesis of new glycan structures, among which glycopolymers, glycodendrimers, and glycodendrimersomes (terms that he first developed) enabled progress in the area of multivalent molecular recognition mechanisms.
He is known for his work on semi-synthetic glycoconjugate vaccines. He has designed a breast cancer vaccine prototype.
René Roy has authored more than 370 scientific articles and 2 books on vaccines and glycomimetics. He has 5 patents to his credit, two of which led to commercial products.
Honors and awards
Hoffman-La Roche Award from the Canadian Society for Chemistry (1997)
Ottawa Life Sciences Council Achievement Award (2001)
Rotary International Paul Harris Fellowship (2001)
National Research Council of Canada Royalty Sharing Award (2001)
Melville L. Wolfrom Award from the American Chemical Society Division of Carbohydrate Chemistry (2003)
Gold Medal from the World Intellectual Property Organization (WIPO) (2005)
Tech Museum Award – Technology Benefiting Humanity the Hib Vaccine team (2005)
Probst Memorial Lecturer – Southern Illinois University (2006)
Award of Excellence in research from the Foundation of Stars - Montreal Children’s Hospital (2008)
Médaille (medal) de l’Université du Québec À Montréal (UQAM) (2009)
Léo-Pariseau Prize of the Association francophone pour le savoir (2010)
"Prix Cercle d’Excellence” of the Université du Québec (2011)
Canada Research Chair in Medicinal Chemistry (2004-2017)
See also
Glycopolymer
Sialic acid
N-Acetylneuraminic acid
Thomsen-Friedenreich antigen
References
External links
PharmaQAM Research Centre
Glycodendrimer definition
1952 births
Academic staff of the Université du Québec à Montréal
Université de Montréal alumni
Canadian organic chemists
Academic staff of the University of Ottawa
People from Sherbrooke
Glycobiologists
Canada Research Chairs
20th-century Canadian chemists
21st-century Canadian chemists
Living people | René Roy (chemist) | [
"Chemistry"
] | 1,099 | [
"Glycobiology",
"Organic chemists",
"Canadian organic chemists",
"Glycobiologists"
] |
59,749,164 | https://en.wikipedia.org/wiki/Hrvoje%20Petek | Hrvoje Petek (born January 13, 1958) is a Croatian-born American physicist and the Richard King Mellon Professor of Physics and Astronomy, at the University of Pittsburgh, where he is also a professor of chemistry.
Education
Petek received his B.S. degree in chemistry from Massachusetts Institute of Technology in 1980. Subsequently, he obtained his Ph.D. degree in chemistry from the University of California, Berkeley in 1985.
Research and career
Petek has developed coherent photoelectron spectroscopy and microscopy as methods for studying the dephasing and spatial propagation of polarization fields in solid-state materials and nanostructures. He is developing methods for multidimensional multiphoton-photoemission spectroscopy. Together with Taketoshi Minato, Yousoo Kim, Maki Kawai, Jin Zhao, Jinlong Yang and Jianguo Hou, Petek discovered a delocalized electronic structure created by oxygen vacancies on the titanium dioxide surface. Together with Jin Zhao, Ken Jordan and Ken Onda, Petek also discovered wet electron states, where electrons are partially solvated by water and other protic solvents at molecule–vacuum interfaces. Together with Min Feng and Jin Zhao, Petek discovered atom-like superatom states of C60 and similar hollow molecules. Petek's research on metal plasmon excitations on semiconductor substrates revealed charge injection from optically active plasmonic modes into the semiconductor substrate.
Petek is editor-in-chief of Progress in Surface Science and has organized conferences such as the 11th International Symposium of Ultrafast Surface Dynamics, held in Qiandao Lake, China. Petek has been (2015-2019) a member of the National Research and Development Agency Committee for the National Institute for Materials Science and is currently a senior scientific advisor to the Institute for Molecular Science in Japan.
Awards and honors
2019 Ahmed Zewail Award in Ultrafast Science and Technology
2018 American Association for the Advancement of Science Fellows
2005 Chancellor's Distinguished Research Award, University of Pittsburgh
2002 Fellow, American Physical Society
2000 Alexander von Humboldt Research Award
1996–1999, 2003–2006 NEDO Joint International Research Grant Awardee
References
External links
Petek Research Lab – Official website
1958 births
Living people
20th-century American chemists
21st-century American chemists
Massachusetts Institute of Technology School of Science alumni
University of California, Berkeley alumni
University of Pittsburgh faculty
Croatian emigrants to the United States
Croatian physicists
Scientists from Zagreb
Academic journal editors
Spectroscopists
Humboldt Research Award recipients
Fellows of the American Physical Society
Fellows of the American Association for the Advancement of Science
20th-century American physicists
21st-century American physicists | Hrvoje Petek | [
"Physics",
"Chemistry"
] | 539 | [
"Physical chemists",
"Spectrum (physical sciences)",
"Analytical chemists",
"Spectroscopists",
"Spectroscopy"
] |
59,749,428 | https://en.wikipedia.org/wiki/Nekrasov%20matrix | In mathematics, a Nekrasov matrix or generalised Nekrasov matrix is a type of diagonally dominant matrix (i.e. one in which the diagonal elements are in some way greater than some function of the non-diagonal elements). Specifically if A is a generalised Nekrasov matrix, its diagonal elements are non-zero and the diagonal elements also satisfy,
where,
.
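A minimal sketch of checking the Nekrasov condition numerically, assuming the standard recursively defined row sums h_i (the function name and tolerance parameter are illustrative, not from the source):

```python
import numpy as np

def is_nekrasov(A, tol=0.0):
    """Check the Nekrasov condition |a_ii| > h_i(A) for a square matrix A,
    where h_1(A) = sum_{j != 1} |a_1j| and, for i > 1,
    h_i(A) = sum_{j < i} |a_ij| h_j(A) / |a_jj| + sum_{j > i} |a_ij|."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    h = np.zeros(n)
    for i in range(n):
        if A[i, i] == 0.0:          # diagonal entries must be non-zero
            return False
        # recursively defined row sum: earlier rows enter scaled by h_j/|a_jj|
        h[i] = (sum(abs(A[i, j]) * h[j] / abs(A[j, j]) for j in range(i))
                + sum(abs(A[i, j]) for j in range(i + 1, n)))
        if abs(A[i, i]) <= h[i] + tol:
            return False
    return True
```

Any strictly diagonally dominant matrix satisfies the condition, since each h_i(A) is never larger than the plain off-diagonal row sum.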
References
Matrices | Nekrasov matrix | [
"Mathematics"
] | 83 | [
"Matrices (mathematics)",
"Mathematical objects",
"Matrix stubs"
] |
59,750,771 | https://en.wikipedia.org/wiki/IC%201459 | IC 1459 (also catalogued as IC 5265) is an elliptical galaxy located in the constellation Grus. It is located at a distance of circa 85 million light-years from Earth, which, given its apparent dimensions, means that IC 1459 is about 130,000 light-years across. It was discovered by Edward Emerson Barnard in 1892.
Characteristics
IC 1459 is a giant elliptical galaxy. In its core has been observed a fast counter-rotating stellar component, with an estimated total mass of . The stars of this component have circular orbits on a flat disk with a radius of less than two arcseconds. It has been suggested that the stars of the counterrotating disk are of external origin, either by the accretion of a stellar satellite or of a dense molecular cloud from which the stars were created. The galaxy also features multiple shells and disturbed isophotes, which indicate that it has accreted material, probably by the merger of smaller galaxies with the elliptical galaxy. Enhanced photography revealed the presence of a diffuse spiral pattern at the outer regions of IC 1459.
It has been suggested that a merger started a starburst event in IC 1459. The total duration of this event has not been well constrained, but it is estimated it lasted less than 80 million years. Young stars make up approximately 3% of the total stellar population of the galaxy. They have an average age of 400 million years. Metallicity has been found to decrease as the radial distance from the center increases.
The nucleus of IC 1459 is active and has been characterised as a LINER based on its optical spectrum. The most accepted theory for the power source of active galactic nuclei is the presence of an accretion disk around a supermassive black hole. In the centre of IC 1459 lies a supermassive black hole with an estimated mass based on stellar velocities. Estimates of the mass of the black hole based on gas kinematics are lower, from (1–4) × 10⁸ to 10⁹ , although the presence of non-gravitational forces acting on the circumnuclear gas renders them less accurate. The disturbed kinematics of the circumnuclear gas may be caused by an outflow. In the circumnuclear region there is also dust with irregular distribution, indicating nonequilibrium motions.
The galaxy is characterised as low-power radio loud, with a gigahertz-peaked spectrum, and has two symmetrical radio jets. In a 3.5-year observation period, the galaxy showed little variability. The galaxy also has weak X-ray emission. The nucleus appears unabsorbed in lower-energy X-rays, with X-ray luminosity erg s⁻¹, while the Fe-Kα fluorescence line is not strong. The infrared emission of the nucleus is also less than that expected from an AGN hidden by a dust torus. This indicates that the AGN is seen unobstructed. The galaxy possesses an X-ray halo, whose modelling indicated the presence of dark matter at distances larger than 3 effective radii.
In images by the Hubble Space Telescope, 199 globular clusters were recognised, with an indication of a decrease in their number near the core and statistically non-significant evidence of a bimodal color distribution.
Nearby galaxies
IC 1459 is the brightest galaxy in a galaxy group known as the IC 1459 group. It is a loose group centred at IC 1459 with a large number of spiral galaxies. Other members include NGC 7418, NGC 7418A, NGC 7421, IC 5264, IC 5269, IC 5269B, IC 5270, and IC 5273. IC 5264 lies 6.5 arcminutes south of IC 1459. This group, along with the NGC 7582 group form the Grus cloud, a region of elevated galaxy density. The Grus cloud, along with the nearby Pavo-Indus cloud, lies between the Local Supercluster and Pavo–Indus Supercluster.
The group features both diffuse X-ray emission from the intergalactic medium and HI emission. Based on the presence of both it has been suggested that the group is in its early stages of assembling from different subgroups. Three HI clouds have been found to be associated with the group, two located near IC 5270 and one near NGC 7418. These HI clouds are believed to have formed from gas stripped from the galaxies as a result of interactions.
Gallery
See also
NGC 1052 - a similar elliptical galaxy
References
External links
IC 1459 on SIMBAD
Elliptical galaxies
Shell galaxies
Radio galaxies
Grus (constellation)
1459
70090
Astronomical objects discovered in 1892
Discoveries by Edward Emerson Barnard | IC 1459 | [
"Astronomy"
] | 973 | [
"Grus (constellation)",
"Constellations"
] |
59,750,846 | https://en.wikipedia.org/wiki/Image%20destriping | Image destriping is the process of removing stripes or streaks from images and videos without disrupting the original image/video. These artifacts plague a range of fields in scientific imaging including atomic force microscopy, light sheet fluorescence microscopy, and planetary satellite imaging.
The most common image processing technique for reducing stripe artifacts is Fourier filtering. Unfortunately, filtering methods risk altering or suppressing useful image data. Methods developed for multiple-sensor imaging systems in planetary satellites use statistics-based methods to match signal distributions across multiple sensors. More recently, a new class of approaches leverages compressed sensing to regularize an optimization problem and recover stripe-free images. In many cases, these destriped images have little to no artifacts, even at low signal-to-noise ratios.
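A minimal sketch of the Fourier-filtering approach, assuming vertical stripes and a NumPy notch filter (the notch width and function name are illustrative assumptions, not from the source):

```python
import numpy as np

def destripe_fft(image, notch_halfwidth=2):
    """Suppress vertical stripes by notch-filtering the 2-D FFT.

    Vertical stripes vary only along x, so their energy concentrates on
    the ky == 0 line of the spectrum; zeroing a narrow band there (while
    keeping the low-frequency core around DC) removes the stripes and
    leaves most other image content intact."""
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = F.shape
    cy, cx = rows // 2, cols // 2
    w = notch_halfwidth
    mask = np.ones_like(F)
    mask[cy - w:cy + w + 1, :cx - w] = 0        # left arm of the ky == 0 band
    mask[cy - w:cy + w + 1, cx + w + 1:] = 0    # right arm
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

This sketch also illustrates the risk the text mentions: any genuine image detail whose spectrum falls inside the notch is suppressed along with the stripes.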
References
computer vision
image processing | Image destriping | [
"Engineering"
] | 158 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
59,752,653 | https://en.wikipedia.org/wiki/Ebbe%20Nielsen%20Challenge | The Ebbe Nielsen Challenge is an international science competition conducted annually from 2015 onwards by the Global Biodiversity Information Facility (GBIF), with a set of cash prizes that recognize researcher(s)' submissions in creating software or approaches that successfully address a GBIF-issued challenge in the field of biodiversity informatics. It succeeds the Ebbe Nielsen Prize, which was awarded annually by GBIF between 2002 and 2014. The name of the challenge honours the memory of prominent entomologist and biodiversity informatics proponent Ebbe Nielsen, who died of a heart attack in the U.S.A. en route to the 2001 GBIF Governing Board meeting.
History
In 2001, GBIF created the Ebbe Nielsen Prize to honour the recently deceased Danish-Australian entomologist Ebbe Nielsen, who was a keen proponent of both GBIF and the biodiversity informatics discipline. That prize recognized a global researcher or research team for their retrospective contribution(s) to the field of biodiversity informatics, according to criteria set out by GBIF in the terms of the award. From 2015 onwards, GBIF re-launched the award process as a competition between global individuals or teams of researchers to create new software or approaches to using biodiversity data according to a theme announced annually for each round of the competition, and also to split the prize money among multiple groups instead of a single winner as in the initial era of the Prize. Calls for entries to the competition, now called the "Ebbe Nielsen Challenge", have been issued annually from 2015 to the present, with winners announced through a competitive process in all years except for 2017, when an insufficient number of entries was received.
List of winners, 2015 onwards
2015
Challenge: "Make significant use of GBIF-mediated data in a way that provides new representations or insights. Your submission could involve a range of results—websites, stand-alone or mobile applications, or outputs of analyses—or could seek to improve any number of issues or processes, including (but not limited to) data analysis or visualization, data workflows, uploading, or annotations."
A list of finalists for the 2015 Challenge is available here.
First Prize winner:
"GBIF Dataset Metrics" - Peter Desmet, Bart Aelterman and Nicolas Noé (Belgium)
Second Prize winner:
"BioGUID.org" - Richard Pyle (United States)
Details are available via the GBIF website here.
2016
Challenge: "In 2016, the Challenge will focus on the question of data gaps and completeness, seeking tools, methods and mechanisms to help analyse the fitness-for-use of GBIF-mediated data and/or guide priority setting for biodiversity data mobilization. We expect both data users and data holders to benefit from this year’s emphasis on gaps and completeness."
First Prize winner:
"Exploring ignorance in space and time" - Alejandro Ruete (Argentina)
Joint Second Prize winners:
"sampbias" - Alexander Zizka, Alexandre Antonelli and Daniele Silvestro (Sweden)
"GBIF Coverage Assessment Tools" - Walter Jetz, Jeremy Malczyk, Carsten Meyer, Michelle Duong, Ajay Ranipeta and Luis J. Villanueva (United States, South Africa and Germany)
Details are available via the GBIF website here.
2017
Challenge: "The 2017 GBIF Ebbe Nielsen Challenge seeks submissions that repurpose these datasets [in public open-access repositories] and adapting them into the Darwin Core Archive format (DwC-A), the interoperable and reusable standard that powers the publication of almost 800 million species occurrence records from the nearly 1,000 worldwide institutions now active in the GBIF network. The 2017 Ebbe Nielsen Challenge will task developers and data scientists to create web applications, scripts or other tools that automate the discovery and extraction of relevant biodiversity data from open data repositories."
No winners were announced, indicating that the 2017 prize money was not awarded to any entry.
2018
Challenge: "The 2018 Ebbe Nielsen Challenge is deliberately open-ended, so entrants have a broad remit for creating tools and techniques that advance in open science and improve the access, utility or quality of GBIF-mediated data. Challenge submissions may be new applications, visualization methods, workflows or analyses, or they build on and extend existing tools and features."
Joint First Prize winners:
"Checklist recipe: a template for reproducible standardization of species checklist data" - Lien Reyserhove, Damiano Oldoni and Peter Desmet (Belgium)
"Ozymandias: a biodiversity knowledge graph" - Roderic D. M. Page (United Kingdom)
Joint Second Prize winners:
"The bdverse" - Tomer Gueta, Yohay Carmel, Vijay Barve, Thiloshon Nagarajah, Povilas Gibas, Ashwin Agrawal (Israel, United States, Sri Lanka, Lithuania and India)
"GBIF Issues Explorer" - Luis J. Villanueva (United States)
"Smart mosquito trap to DwC pipeline" - Connor Howington and Samuel Rund (United States)
"Taxonomy Tree Editor" - Ashish Singh Tomar (Spain)
Details are available via the GBIF website here.
2019
Challenge: "The 2019 Ebbe Nielsen Challenge is deliberately open-ended, so entrants have a broad remit for creating tools and techniques that advance in open science and improve the access, utility or quality of GBIF-mediated data. Challenge submissions may be new applications, visualization methods, workflows or analyses, or they build on and extend existing tools and features. It is expected that successful entries provide practical, pragmatic solutions for the GBIF network while advancing biodiversity informatics and biodiversity data management in relation to the GBIF mission and strategic plan."
First Prize winner:
"WhereNext: a recommending system to identify sampling priorities based on generalized dissimilarity modeling" - Jorge Velásquez-Tibatá (Colombia)
Joint Second Prize winners:
"occCite: Tools to Enable Comprehensive Biodiversity Data Citation" - Hannah L. Owens, Cory Merow, Brian S. Maitner, Vijay V. Barve & Robert Guralnick (Denmark, United States)
"Organella" - Andrea Biedermann, Diana Gert, Dimitar Ruszev & Paul Roeder (Germany)
"Rapid Least Concern: Automated assessments of lower risk plants for the IUCN Red List of Threatened Species (Red List)" - Steven Bachman, Barnaby Walker & Justin Moat (United Kingdom)
Joint Third Prize winners:
"Agile GBIF Publishing" - Evgeniy Meyke (Finland)
"Biodiversity Information Review and Decision Support: the BIRDS package for R" - Debora Arlt, Alejandro Ruete & Anton Hammarström (Sweden)
"GB Sifter: A GBIF to GenBank Data Sifter" - Luis Allende (United States)
"naturaList" - Arthur Vinicius Rodrigues & Gabriel Nakamura (Brazil)
"GeoNature-atlas: Publish an online biodiversity atlas with GBIF data of your area" - Jordan Sellies, Amandine Sahl (France)
Details are available via the GBIF website here.
References
Biodiversity
Academic awards
International awards
Science and technology awards | Ebbe Nielsen Challenge | [
"Technology",
"Biology"
] | 1,500 | [
"Science and technology awards",
"Biodiversity",
"International science and technology awards"
] |
59,752,838 | https://en.wikipedia.org/wiki/Beilstein%20Journal%20of%20Nanotechnology | The Beilstein Journal of Nanotechnology is a peer-reviewed platinum open-access scientific journal covering all aspects of nanoscience and nanotechnology. It is published by the Beilstein Institute for the Advancement of Chemical Sciences and the editor-in-chief is Thomas Schimmel (Karlsruhe Institute of Technology). The journal was established in 2010. It is a member of the Free Journal Network.
Abstracting and indexing
The journal is abstracted and indexed in:
Chemical Abstracts Service
Current Contents/Physical, Chemical & Earth Sciences
Ei Compendex
Science Citation Index Expanded
Scopus
According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.65.
References
External links
Nanotechnology journals
Continuous journals
English-language journals
Academic journals established in 2010 | Beilstein Journal of Nanotechnology | [
"Materials_science"
] | 160 | [
"Nanotechnology journals",
"Materials science journals"
] |
66,182,454 | https://en.wikipedia.org/wiki/Suaeda%20monoica | Suaeda monoica is a species of flowering plant in the sea-blite genus Suaeda, largely native to the shores of the Indian Ocean from South Africa to Sri Lanka, and salty areas inland. It has been introduced in Argentina. It exhibits phenotypic plasticity, with leaves that are much more succulent when grown under higher salinity conditions. Its leaves are edible, and it is used as an animal fodder plant where it grows.
References
monoica
Halophytes
Plants described in 1776 | Suaeda monoica | [
"Chemistry"
] | 108 | [
"Halophytes",
"Salts"
] |
66,182,782 | https://en.wikipedia.org/wiki/Carrier%20aircraft%20used%20during%20World%20War%20II | Over 700 different aircraft models were used during World War II. At least 135 of these models were developed for naval use, including about 50 fighters and 38 bombers.
Only about 25 carrier-launched aircraft models were used extensively for combat operations. Of these, nine were introduced during the war years after the Japanese attack on Pearl Harbor brought the United States into the war: four by the United States Navy (USN), three by the Royal Navy (RN), and two by the Imperial Japanese Navy (IJN).
Principal carrier aircraft used
The table below lists the principal carrier-launched fighters and bombers used during World War II. They are listed within each aircraft type in chronological order of their introduction to service. Allied reporting names such as "Val" and "Kate" are included for IJN aircraft. Neither Germany nor Italy put carriers or carrier-launched aircraft into service. Some Axis fighters are included in the table below for comparison with the Allied fighters that met them in combat.
Sources:
Notes:
Values were obtained from multiple sources. Some reported values may not be directly comparable to others in the same column.
Combat ranges for some of the aircraft could be extended using drop tanks containing supplemental fuel.
Japanese carrier aircraft designations for planes introduced after 1922 typically adhered to the following conventions. The first letter indicated the aircraft type, "A" for fighter, "B" for torpedo bomber, "C" for reconnaissance, and "D" for dive bomber. The last letter indicated the manufacturer, "A" for Aichi, "M" for Mitsubishi, "N" for Nakajima, and "Y" for Yokosuka. The "Type" indicated the last two digits for the Japanese year that the plane was adopted for service. For example, the "D3A (Type 99)" was a dive bomber manufactured by Aichi and adapted for service during the Japanese Year 2599 (1939). The plane was actually introduced for combat in 1940. The Allies referred to it as a "Val".
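The designation conventions described in the note above can be sketched as a small lookup; the tables and function name below are illustrative and cover only the letters mentioned in the text:

```python
# Hypothetical lookup tables covering only the letters mentioned in the note.
AIRCRAFT_TYPE = {"A": "fighter", "B": "torpedo bomber",
                 "C": "reconnaissance", "D": "dive bomber"}
MANUFACTURER = {"A": "Aichi", "M": "Mitsubishi",
                "N": "Nakajima", "Y": "Yokosuka"}

def decode_designation(code):
    """Decode a short IJN designation such as 'D3A': the first letter is the
    aircraft type, the last letter the manufacturer, the middle the series."""
    kind = AIRCRAFT_TYPE[code[0]]
    maker = MANUFACTURER[code[-1]]
    return f"{maker} {kind} (series {code[1:-1]})"
```

For example, "B5N" decodes as a Nakajima torpedo bomber, matching the "Kate" described in the table.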
Relative aircraft capabilities were not the only factors that contributed to success or failure for carrier aircraft in combat. Attack coordination and tactics, along with pilot skill, determination, and willingness to self-sacrifice were at least as important and perhaps more so. For example, both the new, high-performing Grumman TBF Avenger and the obsolete Douglas TBD Devastator were slaughtered as they tried to deliver their torpedoes at Midway without fighter protection. But their pilots' dogged pressing of their attacks in the face of almost hopeless odds created opportunities for the dive bomber pilots that, by luck as well as by determination, arrived in a timely manner to sink four IJN carriers. The effectiveness of the special attacks ("kamikaze") late in the war using mostly outdated aircraft can also be attributed to pilot determination and self-sacrifice. Nonetheless, other things being equal, faster, more maneuverable aircraft with longer ranges and better armament contributed to successful combat outcomes.
Carrier aircraft types, functions, and features
Carrier aircraft types.
The types of aircraft usually launched from aircraft carriers were fighters, torpedo bombers, and dive-bombers. Floatplanes were also launched from some carriers but were typically catapulted from cruisers and battleships. Land-based aircraft types were frequently launched from carriers when delivering them to forward bases, such as when Curtiss P-40 Warhawk fighters flew from carriers to newly captured land-bases during the Allied invasion of North Africa. Sometimes land-based aircraft were launched from carriers for special operations, such as the USN Doolittle Raid, when B-25s were launched for a raid on Tokyo. Some carrier aircraft served in dual roles, such as fighter-bomber and bomber-reconnaissance aircraft.
Carrier aircraft functions.
Torpedo and dive bombers attacked enemy warships, transports, merchant ships, and land installations.
Fighters accompanied bombers on attack missions, protecting them during interceptions by enemy fighters. Fighters maintained overhead in Combat Air Patrols (CAP) protected their carriers and other warships in the fleet by intercepting enemy bombers and by attacking submarines.
Fighters and bombers were also widely used for reconnaissance and sometimes used for mine laying or for spotting to assist bombardment by warships.
Carrier aircraft features.
Size of aircrew. With few exceptions, fighters had a single crewmember, the pilot, while dive bombers had two and torpedo bombers had three crewmembers. The RN valued a second crewmember in fighters for observation and navigation, as in the Fairey Fulmar and the Fairey Firefly. The Blackburn Roc was a "turret fighter" whose second crewmember operated the turret.
Armament. Fighters and bombers typically had two to four machine guns, sometimes six. These were mostly 7.7mm or 7.62mm (.303in), but heavier 12.7mm (.50in) guns were fitted to some RN and USN aircraft. The IJN Zero fighter had the more destructive 20mm (.79in) cannon in addition to machine guns. Later in the war, RN and USN fighters also had 20mm cannon in addition to or instead of machine guns. After 1943, some fighters and bombers were also capable of firing 12.7 cm (5 in) rockets. Bombers also carried bombs or a torpedo, the maximum possible weight of which generally increased with new introductions during the war. Most fighters could carry a small bomb load.
Self-sealing fuel tank. Self-sealing fuel tanks retarded or eliminated the flow of fuel from a tank that has been holed during combat. This was typically accomplished by incorporating a material that swelled when it came into contact with fuel. Many RN and USN carrier aircraft used this technology, but it involved adding weight, and the IJN was reluctant to sacrifice range and maneuverability for improved survivability.
Protective armor. Like self-sealing fuel tanks, protective armor for the aircrew improved survivability at the cost of performance. The RN and USN armored their carrier aircraft, protecting the aircrew, but the IJN typically did not.
Number of wings. Carrier aircraft introduced after 1937 were all monoplanes except for the biplane RN Fairey Albacore, an improved version of the Swordfish. The biplane Fairey Swordfish, introduced in 1936, was withdrawn from front-line combat but, reassigned to anti-submarine convoy escort, served through the entire war.
Folding wings. Monoplane carrier aircraft introduced after 1936 almost all had folding wings to reduce the space taken up in hangars. Exceptions included the Mitsubishi A5M "Claude" fighter and the Douglas SBD Dauntless and Yokosuka D4Y "Judy" dive bombers.
Cockpit. Some early aircraft had open cockpits, but newer introductions typically had enclosed cockpits.
Undercarriage. Most of the carrier aircraft introduced after 1937 had retractable landing gear to reduce drag. The two exceptions, introduced in 1940, were the RN Fairey Albacore torpedo bomber and the IJN Aichi D3A2 "Val" dive bomber.
Pre- and early-war aircraft (1936-1941)
Chinese land-based aircraft vs. Japanese carrier aircraft
During the short “Shanghai Incident” in 1932, Japanese carrier-launched fighters and bombers and water-launched floatplanes attacked areas in and around the city. In one engagement, a group of three Mitsubishi B1M torpedo bombers and three Nakajima A1N fighters were attacked by a lone Boeing 218 fighter flown by Robert McCawley Short, an American pilot training Chinese flyers. He was shot down and the following month, Japanese fighters were unsuccessfully opposed by Chinese Curtiss Hawk fighters, some of which were also shot down.
Over the next five years, IJN introduced improved fighters including the Nakajima A2N in 1932, the first purely Japanese-designed fighter. In 1936, the Nakajima A4N entered service. The year after that, the Mitsubishi A5M "Claude" became the world's first low-wing, carrier-launched monoplane. It was highly maneuverable and the direct predecessor of the famed Mitsubishi A6M Zero introduced three years later.
At the time the Second Sino-Japanese War began in July 1937, the IJN had about 200 fighters and bombers and 62 floatplanes available to attack. The Imperial Japanese Army (IJA) had force concentrations in China's north and agreed that the IJN would be responsible for aerial operations over central China. Three aircraft carriers with a total of 136 up-to-date fighters and bombers were sent to the coast off Shanghai. The Republic of China Air Force (ROCAF) was just emerging from a loose confederation of aircraft and airmen controlled by individual Chinese warlords. It was a hodgepodge of about 300 land-based fighters, mostly supplied by the US, UK, and Italy, with which to intercept Japanese fighters and bombers. The USSR also began supplying China with aircraft after the Sino-Soviet Non-Aggression Pact was agreed in August.
The Curtiss Model 68 Hawk III biplane, built both in the US and China, was used by the ROCAF as a bomber as well as the primary fighter during the early part of the war. It took the brunt of the Japanese attack during the defense of Shanghai and the battle of Nanking, helping to make flying aces of several Chinese pilots. Shortly after the conflict began, Gao Zhihang intercepted a land-based Japanese bomber group from Taiwan and shot down a Mitsubishi G3M medium bomber, the first aerial combat victory for the Chinese. In addition to Curtiss Hawk IIIs, the ROCAF opposed Japanese attacks with some older Curtiss Hawk II, Boeing P-26 Peashooter, Gloster Gladiator, and Fiat CR.32 fighters. After attrition had taken its toll of these aircraft, they were replaced by Soviet Polikarpov I-15 biplane fighters and later by Polikarpov I-16 aircraft, the world's first low-wing monoplane fighter with retractable landing gear used in combat. The latter also had 20mm cannons, making it one of the most heavily armed fighters of the period.
For the next three years, IJN and ROCAF pilots fought above Beijing, Shanghai, Nanjing, Wuhan and elsewhere as Japan sought unsuccessfully to subdue China. Sometimes IJN carrier air groups were sent temporarily to land bases. Both air forces suffered defeats and enjoyed victories. Losing ground, the Chinese shifted their capital westward and inland until establishing it at Chongqing in central China. By 1941, Japan held large portions of northern and coastal China but had been weakened by the battles for inland central China. In mid-September 1941, as the IJN began to focus on the possibility of a wider war in the Pacific, it turned over responsibility for the air war over China to the Imperial Japanese Army. Continued resistance by China's National Revolutionary Army led to a war of attrition that tied down large numbers of Japanese troops until the end of World War II in 1945.
United States vs. Japanese carrier aircraft
The design of Japanese carrier aircraft was consistent with their overall strategy of emphasizing the offense in order to win a short war before America's overwhelmingly superior production capacity could be brought to bear. Expecting to face a numerically superior fleet, Japan's strategy envisioned using aircraft to help neutralize this advantage through gradual attrition as the enemy USN fleet approached Japan. This required aircraft with extended range and striking power, which in turn meant making them lighter and faster but with less protection. Accordingly, Japan's Mitsubishi A6M Zero fighter, Nakajima B5N "Kate" torpedo bomber, and Aichi D3A "Val" dive bomber were all lightly built, with weight minimized by not providing cockpit armor to protect pilots or self-sealing fuel tanks to enable them to continue fighting after taking some hits. Because its aircraft had greater range, the IJN would, during 1942, attack from 250 to 300 miles away, compared to the USN, which would only do so from 200 miles away.
Fighters
At the time of the 1941 attack on Pearl Harbor, Japan had both the world's best fighter and best torpedo bomber. In addition, its aircraft were flown by the world's most extensively trained and experienced airmen, in part due to their engagement since 1937 in the war in China. The A6M Zero was fast, highly maneuverable, and could out-turn and out-climb the USN Grumman F4F Wildcat. Also, when enemy planes were approaching for an attack, it was important for fighters to get off their decks and reach an advantageous altitude quickly. The Zero could climb at 3,000 ft/minute and the Wildcat only 2,300 ft/minute. Experienced Zero pilots were initially very successful against the lower-performing Allied aircraft. Over the course of 1942, however, pilots in Wildcats developed aerial combat tactics such as the high-side pass and the Thach Weave. Exploiting these tactics, coupled with greater aircraft survivability due to armor and self-sealing tanks, American pilots neutralized the Zero's advantages. Although better USN fighters were introduced later, the Wildcat served throughout the war. Over the course of 1942, Japan's substantial losses of her experienced pilots contributed to the Americans gaining the upper hand in fighter combat.
Torpedo bombers
The Nakajima B5N "Kate" torpedo bomber was superior to the obsolete USN Douglas TBD Devastator in speed, rate of climb, and range. Kate torpedoes contributed to sinking the USN fleet carriers Lexington (Battle of the Coral Sea), Yorktown (Midway), and Hornet (Santa Cruz Islands), all during 1942. Devastator torpedoes contributed to sinking the IJN carrier Shōhō at the Coral Sea, but Zero fighters slaughtered the Devastators at Midway, where they attacked without fighter protection. Only six of the 41 Devastators launched returned to their carriers. Though some closed with their targets and launched their Mark 13 torpedoes, the torpedoes ran too deep or failed to explode.
Dive bombers
The US had a superior dive bomber in the Douglas SBD-5 Dauntless compared to Japan's Aichi D3A2 "Val". Benefitting from fortunate timing, bombs from the Dauntless sank all four of the Japanese carriers lost at Midway. Nonetheless, the "Vals" served throughout the war and sank more Allied warships than any other Axis aircraft. This included sinking HMS Hermes, the first carrier to be sunk by carrier aircraft.
United Kingdom vs. German and Italian land-based aircraft
Fighters
When war broke out in 1939 in the Atlantic Theater, the RN had only recently reacquired responsibility for their carrier-launched aircraft. For the previous two decades, the Royal Air Force (RAF) had responsibility for all air operations, and development of improved carrier-launched aircraft had been neglected in favor of land-based fighters for the defence of the UK and bombers for the offensive. As a result, the RN entered the war with mostly relatively slow, limited-range biplane fighters and bombers that were inferior to USN and IJN carrier aircraft of the same age. At the time of Britain's evacuation from Norway, her Gloster Sea Gladiator fighters, Fairey Swordfish torpedo bombers, and Blackburn Skua fighter-bombers were also inferior to the land-based German aircraft. Nonetheless, the Sea Gladiators did succeed in shooting down a couple of Messerschmitt Bf 110 fighters during the Norwegian campaign in early 1940. Gladiators and Sea Gladiators did take part in the Battle of Britain that autumn, but it was the numerous land-based Supermarine Spitfires and Hawker Hurricanes and their pilots that provided the principal aerial defense for the UK. Sea Gladiators also assisted during the defense of Malta, where the few in operation shot down Italian aircraft. Gladiators were removed from front-line service around Britain by 1941. The more modern, monoplane Blackburn Roc was able to shoot down a Junkers Ju 88 fighter-bomber attacking a convoy in 1940. However, the Roc was no match for German Messerschmitt Bf 109 or Focke-Wulf Fw 190 fighters and could not even perform as well as Britain's own Blackburn Skua. Considered one of the worst fighters of the war, the Roc was, by late 1940, taken out of front-line service and consigned to training, target towing, and rescue duties. A more modern monoplane fighter, the Fairey Fulmar, was introduced in 1940 to replace the obsolete Sea Gladiator.
It was a large two-seat design, less nimble than the Axis fighters it opposed. Nonetheless, while protecting convoys to Malta, Fulmars shot down ten Italian bombers and six Axis fighters. During the war, they provided air cover for the raids on Taranto and Petsamo, protected convoys to Russia, and supported the invasions of French North Africa and Italy. Fulmars shadowed the German battleship Bismarck, enabling Fairey Swordfish torpedo bombers to catch up with her. The UK upgraded their naval fighter squadrons in 1941 by adapting the highly successful, land-based Hawker Hurricane to carrier use. The Hawker Sea Hurricane performed well while protecting the Malta convoys and, operating from escort carriers, many Atlantic convoys.
Dive bombers
The monoplane Blackburn Skua two-man fighter-dive bomber was introduced in late 1938. It sank the German cruiser Königsberg during the German invasion of Norway; the first major warship sunk by a dive bomber in combat. Skuas provided air cover during the Dunkirk evacuation and served in the Mediterranean. Like the Roc, however, they fared poorly against the higher performance land-based Messerschmitt Bf 109, and were withdrawn from front line service during 1941.
Torpedo bombers
The biplane Fairey Swordfish torpedo bomber, affectionately referred to as "Stringbag" by her aircrews, was introduced in 1936. It was an archaic-looking biplane with cloth-covered wings, an open cockpit, and fixed landing gear. It had been designed as a torpedo-spotter-reconnaissance aircraft and emerged as the standard naval attack aircraft, serving as both a dive bomber and torpedo bomber. In the first airborne torpedo attack of the war, Swordfish damaged a German destroyer at Trondheim. Later, Swordfish crippled the French battleship Dunkerque during the Attack on Mers-el-Kébir, disabled three Italian battleships during the Battle of Taranto, and attacked the German battleship Bismarck through gale-force storms in the Atlantic, ultimately landing the torpedo that doomed her. Swordfish also dropped depth charges and laid mines. The Fairey Albacore was introduced in 1940 to replace the Swordfish. Both were replaced in the front line by the Barracuda, but the Swordfish was retained to serve in anti-submarine and bombardment-spotting assignments throughout the war. In the final accounting, the Swordfish destroyed more tonnage of Axis shipping than any other Allied aircraft.
Later introductions (1942-1943)
Over the course of the war, new aircraft introductions tended to be heavier with more powerful engines. They had greater speed, a faster rate of climb, and greater range than their predecessors. In 1944, the IJN would initiate an attack from 350 to 400 miles, 100 miles further away than in 1942. The RN would send out its Swordfish from 250 to about 300 miles, and the USN from 250 miles out. However, aircraft "wing loading", the mass of the aircraft divided by the surface area of its wing, also tended to increase, suggesting poorer maneuverability for these larger planes.
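The wing-loading figure just described is a simple ratio; a minimal sketch of the calculation, using illustrative figures that are not taken from any specific historical aircraft:

```python
def wing_loading(mass_kg: float, wing_area_m2: float) -> float:
    """Wing loading: aircraft mass divided by wing surface area (kg/m^2).
    Higher values generally imply poorer low-speed maneuverability."""
    return mass_kg / wing_area_m2

# Illustrative figures only: a heavier late-war design with a modestly
# larger wing still ends up with a substantially higher wing loading.
early = wing_loading(2700.0, 22.4)   # ~120.5 kg/m^2
late = wing_loading(5700.0, 31.0)    # ~183.9 kg/m^2
print(round(early, 1), round(late, 1))
```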
United States aircraft
Fighters
The Vought F4U Corsair fighter-bomber introduced in 1942 had almost twice the horsepower of the Wildcat, was faster, had greater range, faster rate of climb, and was capable of carrying a 4,000 lb total load of bombs and High Velocity Aircraft Rockets. It was judged to be relatively difficult to land on a carrier, however, and was initially released by the USN only for use by land-based Marine units. The Grumman F6F Hellcat fighter-bomber introduced in 1943 was also faster than the Wildcat, had greater range, a rate of climb comparable to the IJN Zero, and was capable of carrying a 4,000 lb total load of bombs, torpedoes, and rockets. Both the Corsair and the Hellcat aircraft were faster than the Zero and, having armor protection and self-sealing fuel tanks, could take much more punishment. With the Corsair initially relegated to land-based use, the Hellcat became the mainstay of the USN Fast Carrier Task Groups of 1944–45. She was the most successful fighter of the war, with her pilots shooting down over 5,000 enemy aircraft at a 19:1 ratio of victories to losses. The Corsair was deployed for USN carrier squadrons after the British refined the aircraft and landing procedures for it.
Torpedo bombers
After their devastating torpedo bomber losses at Midway, the USN quickly replaced the Devastator with the faster Grumman TBF Avenger. It also had twice the range of the Devastator, in part because the Avenger's torpedo was carried inside the plane, reducing drag. Like the Devastator, it had attacked the Japanese fleet at Midway without fighter protection, and only one of the six attacking planes returned to its base at Naval Air Station on Midway. The Avenger ultimately became the most effective and widely used torpedo bomber of the war and functioned even more often as a level bomber than as a torpedo bomber. Avengers operated from both fleet carriers and escort carriers and were highly effective submarine killers in both the Atlantic and Pacific theaters. They shared credit for sinking the Japanese super-battleships IJN Yamato and Musashi.
Dive bombers
The Curtiss SB2C Helldiver dive bomber introduced in 1942 was faster than the Dauntless but regarded as difficult to handle. Making necessary improvements delayed its first use in combat until late 1943. By this time, the Allies were moving away from an aircraft type dedicated to dive bombing. Air-to-ground rockets had been introduced that offered accuracy that formerly had been the primary advantage of the dive bomber over level bombers. Such rockets could be fired from the other types of carrier aircraft and were ultimately carried by Hellcat fighters, Corsair fighter-bombers, and Avenger and Swordfish torpedo bombers as well as Helldiver dive bombers. Nonetheless, the Helldiver became widely used and participated in battles over the Marianas, Philippines, Formosa, Iwo Jima, and Okinawa and sank more tonnage of Japanese shipping than any other aircraft during the war. It shared credit with Avengers for sinking IJN Yamato and Musashi.
United Kingdom aircraft
Fighters
In 1942, the British introduced another naval fighter by adapting a highly successful land-based aircraft, the Spitfire, to carrier use. The Supermarine Seafire was faster than its predecessors and began replacing Hawker Sea Hurricanes for front-line service. In light wind, however, it was subject to crash landings. Its engine-cooling air inlets on the underside of the fuselage also made ditching more dangerous for the pilot, as was also the case with the Hurricane. The Seafire's range was limited, but could be extended using drop tanks. Seafires supported the Allied invasions of North Africa, Sicily, mainland Italy, and southern France. Temporarily assigned to land bases, they also supported the invasion of Normandy. In the Pacific, as part of the British Pacific Fleet, Seafires were used for CAP. Overall, the adaptations of land aircraft had inferior performance to purpose-built carrier aircraft. Introduced late in the war, the Fairey Firefly was superior in performance and firepower to its predecessor, the Fairey Fulmar. It was conceived as early as 1938, but prolonged development delayed its combat use until mid-1944, by which time its performance had been eclipsed by both Axis and Allied fighters. The Firefly was used for ground attack, reconnaissance, and anti-submarine work as well as a fighter aircraft. The Firefly participated in operations against the German battleship Tirpitz in Norway in July 1944. During operations against the Japanese oil refineries at Sumatra in early 1945, a Firefly shot down a Nakajima Ki-43 ("Oscar") fighter. Fireflies also supported carrier-based actions against Japanese shipping and against positions in the Caroline and Japanese home islands. Fewer than 800 were produced during the war years.
Bombers
The Fairey Barracuda torpedo bomber/dive bomber was introduced in early 1943 and was the only RN aircraft designed to withstand the stresses of dive bombing since the retirement of the Skua.
As the war progressed, the RN increasingly used US-made, purpose-built Hellcats, Corsairs, and Avengers for carrier operations in both the Atlantic and Pacific theaters.
Japanese aircraft
Fighters
The Zero was among the world's best fighters at the time of the raid on Pearl Harbor. It was little improved over the course of the war, however, and it lost its competitive advantage as the Allies introduced more capable, better-protected aircraft and improved their tactics. The IJN introduced a land-based fighter, the Kawanishi N1K1-J, in 1944 that had the power, maneuverability, and ruggedness to compete with the late-war Allied fighters.
An improved version of the carrier-launched Zero, the Mitsubishi A6M6, included self-sealing fuel tanks, armor plate protection for the pilot, and a more powerful engine, but the additions made it heavier and less nimble. Only one prototype was built before the war ended. With its diminished value as a competitive fighter, the Zero became the first aircraft to be used as a kamikaze special attack plane and was used more than any other aircraft for this purpose.
Torpedo bombers
The Nakajima B6N "Jill" torpedo bomber incorporated considerable improvements over the Nakajima B5N "Kate" in speed and range, but its introduction was delayed by development and production problems. By the time the Jill was introduced, the Allied thrust up the Solomon Islands had caused IJN leadership, in late 1943, to transfer many carrier aircraft from their first-line carriers to land-based service out of Rabaul. With the Allies having firmly established air superiority in the area, only a fraction of these planes made it back to their carriers two weeks later. In the following year, carrier-based Jills suffered huge losses at the Battle of the Philippine Sea in mid-1944. With so few IJN carriers remaining afloat after the Battle of Leyte Gulf, the Jills became mostly land-based and by early 1945 were in use as kamikazes.
Dive bombers
IJN plans to upgrade carrier bombers were also frustrated by development and production delays. The Yokosuka D4Y3 "Judy" dive bomber was introduced in mid-1942 and intended to replace the slower "Val" by the end of that year, but the Val was kept in service until 1944. The Judy could outrun the USN Wildcat but, by the time the Judy came into wide use, the even faster USN Hellcat had been introduced. Many Judys were among the several hundred IJN planes lost during the Battle of the Philippine Sea. Nonetheless, it was bombs from a Judy, then operating from a land base in the Philippines, that sank the light carrier USN Princeton during the Battle of Leyte Gulf in October 1944. A bomb from another Judy almost sank USN Franklin in March 1945.
Kamikaze special attack aircraft
Japanese use of "kamikaze" suicide aircraft began at the Leyte Gulf battle, and the D4Y3 "Judy" served in that role, damaging several Allied fleet and escort carriers. As the Allies approached Japan in early 1945, the IJN introduced the Yokosuka D4Y4 specifically for use as a kamikaze. Operating from land bases, this version caused damage to several Allied carriers. By the end of the war, all six of the monoplane IJN carrier aircraft models used extensively during the war had also been engaged as kamikazes. Non-kamikaze aircraft models continued in use, often providing escort protection for kamikazes en route to enemy fleets.
Footnotes
Citations
Science and technology during World War II
Aircraft carriers | Carrier aircraft used during World War II | [
"Technology"
] | 5,735 | [
"Science and technology during World War II",
"Science and technology by war"
] |
66,182,787 | https://en.wikipedia.org/wiki/Surface%20magnon%20polariton | Surface magnon-polaritons (SMPs) are a type of quasiparticle in condensed matter physics. They arise from the coupling of incident electromagnetic (EM) radiation to the magnetic dipole polarization in the surface layers of a solid. Magnon-polaritons are analogous to other polaritons, such as plasmon- and phonon-polaritons, but represent an oscillation of the magnetic component of the solid's EM field rather than its electric component or a mechanical oscillation in the solid's atomic structure.
They are sometimes referred to as magnetic surface polaritons (MSPs).
By employing artificially constructed metamaterials, whose properties stem mainly from their engineered internal fine structures rather than their bulk physical makeup, it is possible to more easily achieve useful SMPs. However, they can be found in several natural magnetic materials, including at THz frequencies in antiferromagnetic crystals.
Magnons offer a way to control light–matter interactions at terahertz frequencies.
References
Quasiparticles
Plasmonics | Surface magnon polariton | [
"Physics",
"Chemistry",
"Materials_science"
] | 220 | [
"Plasmonics",
"Matter",
"Materials science stubs",
"Surface science",
"Nanotechnology",
"Condensed matter physics",
"Quasiparticles",
"Condensed matter stubs",
"Solid state engineering",
"Subatomic particles"
] |
66,185,220 | https://en.wikipedia.org/wiki/Variants%20of%20SARS-CoV-2 | Variants of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) are viruses that, while similar to the original, have genetic changes that are of enough significance to lead virologists to label them separately. SARS-CoV-2 is the virus that causes coronavirus disease 2019 (COVID-19). Some have been stated to be of particular importance due to their potential for increased transmissibility, increased virulence, or reduced effectiveness of vaccines against them. These variants contribute to the continuation of the COVID-19 pandemic.
The variants of interest as specified by the World Health Organization are BA.2.86 and JN.1, and the variants under monitoring are JN.1.7, KP.2, KP.3, KP.3.1.1, JN.1.18, LB.1, and XEC.
Overview
The origin of SARS-CoV-2 has not been identified. However, the emergence of SARS-CoV-2 may have resulted from recombination events between a bat SARS-like coronavirus and a pangolin coronavirus through cross-species transmission. The earliest available SARS-CoV-2 viral genomes were collected from patients in December 2019, and Chinese researchers compared these early genomes with bat and pangolin coronavirus strains to estimate the ancestral human coronavirus type; the identified ancestral genome type was labeled "S", and its dominant derived type was labeled "L" to reflect the mutant amino acid changes. Independently, Western researchers carried out similar analyses but labeled the ancestral type "A" and the derived type "B". The B-type mutated into further types including B.1, which is the ancestor of the major global variants of concern, labeled in 2021 by the WHO as alpha, beta, gamma, delta and omicron variants.
Early in the pandemic, the relatively low number of infections (compared with later stages of the pandemic) resulted in fewer opportunities for mutation of the viral genome and, therefore, fewer opportunities for the occurrence of differentiated variants. Since variants occurred more rarely, S-protein mutations in the receptor-binding domain (RBD) interacting with ACE2 were also observed infrequently.
As time went on, the evolution of SARS-CoV-2's genome (by means of random mutations) led to mutant specimens of the virus (i.e., genetic variants), observed to be more transmissible, to be naturally selected. Notably, both the Alpha and the Delta variants were observed to be more transmissible than previously identified viral strains.
Some SARS-CoV-2 variants are considered to be of concern as they maintain (or even increase) their replication fitness in the face of rising population immunity, either by infection recovery or via vaccination. Some of the variants of concern show mutations in the RBD of the S-protein.
Definitions
The term variant of concern (VOC) for SARS-CoV-2, which causes COVID-19, is a category used for variants of the virus in which mutations in the spike protein receptor binding domain (RBD) substantially increase binding affinity (e.g., N501Y) in the RBD-hACE2 complex (genetic data), while also being linked to rapid spread in human populations (epidemiological data).
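Mutation shorthand such as N501Y encodes the reference amino acid, the position in the protein, and the substituted amino acid. A small hypothetical parser for this notation (the function and type names below are my own, not from any standard library):

```python
import re
from typing import NamedTuple

class AminoAcidSubstitution(NamedTuple):
    ref: str   # reference (ancestral) amino acid, one-letter code
    pos: int   # position in the protein sequence
    alt: str   # substituted amino acid

def parse_substitution(notation: str) -> AminoAcidSubstitution:
    """Parse shorthand like 'N501Y': asparagine (N) at position 501
    replaced by tyrosine (Y)."""
    m = re.fullmatch(r"([A-Z])(\d+)([A-Z])", notation)
    if m is None:
        raise ValueError(f"not a valid substitution: {notation!r}")
    return AminoAcidSubstitution(m.group(1), int(m.group(2)), m.group(3))

# The three RBD mutations discussed for the Beta variant:
for n in ("N501Y", "K417N", "E484K"):
    print(parse_substitution(n))
```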
Before being allocated to this category, an emerging variant may have been labeled a variant of interest (VOI), or in some countries a variant under investigation (VUI). During or after fuller assessment as a variant of concern the variant is typically assigned to a lineage in the Pango nomenclature system and to clades in the Nextstrain and GISAID systems.
Historically, the WHO regularly listed updates on variants of concern (VOC), which are variants with an increased rate of transmission, virulence, or resistance against mitigations such as vaccines. The variant submissions from member states are then submitted to GISAID, followed by field investigations of the variant. Updated definitions, published on 4 October 2023, add variants of interest (VOI) and variants under monitoring (VUM) to the World Health Organization's working definitions for SARS-CoV-2 variants. Other organisations, such as the CDC in the United States, typically define their variants of concern slightly differently; for example, the CDC de-escalated the Delta variant on 14 April 2022, while the WHO did so on 7 June 2022.
The WHO defines a VOI as a variant "with genetic changes that are predicted or known to affect virus characteristics such as transmissibility, virulence, antibody evasion, susceptibility to therapeutics and detectability" and that is circulating more than other variants in more than one WHO region to such an extent that a global public health risk can be suggested. Furthermore, the update stated that "VOIs will be referred to using established scientific nomenclature systems such as those used by Nextstrain and Pango".
Notability criteria
Viruses generally acquire mutations over time, giving rise to new variants. When a new variant appears to be growing in a population, it can be labelled as an "emerging variant". In the case of SARS-CoV-2, new lineages often differ from one another by just a few nucleotides.
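The claim that lineages often differ by just a few nucleotides can be made concrete by counting mismatches between two aligned sequences. A toy sketch (the fragments below are invented for illustration; real SARS-CoV-2 genomes are roughly 30,000 nucleotides and require proper alignment first):

```python
def nucleotide_differences(seq_a: str, seq_b: str) -> int:
    """Count positions where two aligned, equal-length sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

# Invented aligned fragments differing at two positions:
ancestor = "ATGGCTACGT"
variant = "ATGGATACGA"
print(nucleotide_differences(ancestor, variant))  # 2
```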
Some of the potential consequences of emerging variants are the following:
Increased transmissibility
Increased morbidity
Increased mortality
Ability to evade detection by diagnostic tests
Decreased susceptibility to antiviral drugs (if and when such drugs are available)
Decreased susceptibility to neutralising antibodies, either therapeutic (e.g., convalescent plasma or monoclonal antibodies) or in laboratory experiments
Ability to evade natural immunity (e.g., causing reinfections)
Ability to infect vaccinated individuals
Increased risk of particular conditions such as multisystem inflammatory syndrome or long COVID.
Increased affinity for particular demographic or clinical groups, such as children or immunocompromised individuals.
Variants that appear to meet one or more of these criteria may be labelled "variants under investigation" or "variants of interest" pending verification and validation of these properties. The primary characteristic of a variant of interest is that it shows evidence of causing an increased proportion of cases or unique outbreak clusters; however, it must also have limited prevalence or expansion at national levels, or the classification would be elevated to "variant of concern". If there is clear evidence that the effectiveness of prevention or intervention measures for a particular variant is substantially reduced, that variant is termed a "variant of high consequence".
Nomenclature
SARS-CoV-2 variants are grouped according to their lineage and component mutations. Many organisations, including governments and news outlets, referred colloquially to concerning variants by the country in which they were first identified. After months of discussions, the World Health Organization announced Greek-letter names for important strains on 31 May 2021, so they could be referred to in a simple, easy-to-say, and non-stigmatising fashion. This decision may have been taken partly because of criticism from governments over the use of country names to refer to variants of the virus; the WHO mentioned the potential for country names to cause stigma. After using all the letters from Alpha to Mu (see below), in November 2021 the WHO skipped the next two letters of the Greek alphabet, Nu and Xi, and used Omicron, prompting speculation that Xi was skipped to avoid offending Chinese leader Xi Jinping. The WHO gave as the explanation that Nu is too easily confounded with "new" and that Xi is a common last name. In the event that the WHO uses the entirety of the Greek alphabet, the agency has considered naming future variants after constellations.
Lineages and clades
While there are many thousands of variants of SARS-CoV-2, subtypes of the virus can be put into larger groupings such as lineages or clades. Three main, generally used nomenclatures have been proposed:
GISAID—referring to SARS-CoV-2 as hCoV-19—identified eight global clades (S, O, L, V, G, GH, GR, and GV).
In 2017, Hadfield et al. announced Nextstrain, intended "for real-time tracking of pathogen evolution". Nextstrain was later used for tracking SARS-CoV-2, identifying 13 major clades (19A–B, 20A–20J, and 21A).
In 2020, Rambaut et al. of the Phylogenetic Assignment of Named Global Outbreak Lineages (PANGOLIN) software team proposed in an article "a dynamic nomenclature for SARS-CoV-2 lineages that focuses on actively circulating virus lineages and those that spread to new locations"; 1340 lineages had been designated.
Each national public health institute may also institute its own nomenclature system for the purposes of tracking specific variants. For example, Public Health England designated each tracked variant by year, month, and number in the format [YYYY] [MM]/[NN], prefixing 'VUI' or 'VOC' for a variant under investigation or a variant of concern, respectively. This system has since been modified and now uses the format [YY] [MMM]-[NN], where the month is written out using a three-letter code.
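The PHE scheme just described can be sketched as a small formatting helper (a hypothetical function; the separator conventions are inferred from designations quoted elsewhere in this article, such as VOC-21FEB-02):

```python
MONTH_CODES = ("JAN", "FEB", "MAR", "APR", "MAY", "JUN",
               "JUL", "AUG", "SEP", "OCT", "NOV", "DEC")

def phe_designation(status: str, year: int, month: int, number: int) -> str:
    """Build a designation in the [YY][MMM]-[NN] style, prefixed with
    'VUI' (variant under investigation) or 'VOC' (variant of concern)."""
    if status not in ("VUI", "VOC"):
        raise ValueError("status must be 'VUI' or 'VOC'")
    return f"{status}-{year % 100:02d}{MONTH_CODES[month - 1]}-{number:02d}"

print(phe_designation("VOC", 2021, 2, 2))   # VOC-21FEB-02
print(phe_designation("VOC", 2021, 1, 2))   # VOC-21JAN-02
```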
Reference sequence
As it is currently not known when the index case or "patient zero" occurred, the choice of reference sequence for a given study is relatively arbitrary, with different notable research studies' choices varying as follows:
The earliest sequence, Wuhan-1, was collected on 24 December 2019.
One group (Sudhir Kumar et al.) refers extensively to an NCBI reference genome (GenBankID:NC_045512; GISAID ID: EPI_ISL_402125), this sample was collected on 26 December 2019, although they also used the WIV04 GISAID reference genome (ID: EPI_ISL_402124), in their analyses.
According to another source (Zhukova et al.), the sequence WIV04/2019, belonging to the GISAID S clade / PANGO A lineage / Nextstrain 19B clade, is thought to most closely reflect the sequence of the original virus infecting humans—known as "sequence zero". WIV04/2019 was sampled from a symptomatic patient on 30 December 2019 and is widely used (especially by those collaborating with GISAID) as a reference sequence.
The variant first sampled and identified in Wuhan, China is considered by researchers to differ from the progenitor genome by three mutations. Subsequently, many distinct lineages of SARS-CoV-2 have evolved.
Overview of historical variants of concern or under monitoring
The following table presents information and relative risk level for currently and formerly circulating variants of concern (VOC). The intervals assume a 95% confidence or credibility level, unless otherwise stated. Currently, all estimates are approximations due to the limited availability of data for studies. For Alpha, Beta, Gamma and Delta, there is no change in test accuracy, and neutralising antibody activity is retained by some monoclonal antibodies. PCR tests continue to detect the Omicron variant.
Previously circulating and formerly monitored variants (WHO)
The WHO defines a previously circulating variant as a variant that "has demonstrated to no longer pose a major added risk to global public health compared to other circulating SARS-CoV-2 variants", but should still be monitored.
On 15 March 2023, the WHO released an update on the tracking system of VOCs, announcing that only VOCs will be assigned Greek letters.
Previously circulating variants of concern (VOC)
The variants listed below had previously been designated as variants of concern, but were displaced by other variants. The WHO lists the following under "previously circulating variants of concern":
Alpha (lineage B.1.1.7)
First detected in October 2020 during the COVID-19 pandemic in the United Kingdom from a sample taken the previous month in Kent, lineage B.1.1.7, labelled the Alpha variant by the WHO, was previously known as the first Variant Under Investigation in December 2020 (VUI – 202012/01) and later notated as VOC-202012/01. It is also known as 20I (V1), 20I/501Y.V1 (formerly 20B/501Y.V1), or 501Y.V1. From October to December 2020, its prevalence doubled every 6.5 days, the presumed generational interval. It is correlated with a significant increase in the rate of COVID-19 infection in the United Kingdom, associated partly with the N501Y mutation. There was some evidence that this variant had 40–80% increased transmissibility (with most estimates lying around the middle to higher end of this range), and early analyses suggested an increase in lethality, though later work found no evidence of increased virulence. As of May 2021, the Alpha variant had been detected in some 120 countries.
On 16 March 2022, the WHO de-escalated the Alpha variant and its subvariants to "previously circulating variants of concern".
B.1.1.7 with E484K
Variant of Concern 21FEB-02 (previously written as VOC-202102/02), described by Public Health England (PHE) as "B.1.1.7 with E484K" is of the same lineage in the Pango nomenclature system, but has an additional E484K mutation. As of 17 March 2021, there were 39 confirmed cases of VOC-21FEB-02 in the UK. On 4 March 2021, scientists reported B.1.1.7 with E484K mutations in the state of Oregon. In 13 test samples analysed, one had this combination, which appeared to have arisen spontaneously and locally, rather than being imported. Other names for this variant include B.1.1.7+E484K and B.1.1.7 Lineage with S:E484K.
Beta (lineage B.1.351)
On 18 December 2020, the 501.V2 variant, also known as 20H (V2), 20H/501Y.V2 (formerly 20C/501Y.V2), 501Y.V2, VOC-20DEC-02 (formerly VOC-202012/02), or lineage B.1.351, was first detected in South Africa and reported by the country's health department. It has been labelled the Beta variant by the WHO. Researchers and officials reported that the prevalence of the variant was higher among young people with no underlying health conditions, and that, by comparison with other variants, it more frequently resulted in serious illness in those cases. The South African health department also indicated that the variant may have been driving the second wave of the COVID-19 epidemic in the country, due to the variant spreading at a more rapid pace than other earlier variants of the virus.
Scientists noted that the variant contains several mutations that allow it to attach more easily to human cells because of the following three mutations in the receptor-binding domain (RBD) in the spike glycoprotein of the virus: N501Y, K417N, and E484K. The N501Y mutation has also been detected in the United Kingdom.
On 16 March 2022, the WHO de-escalated the Beta variant and its subvariants to "previously circulating variants of concern".
Gamma (lineage P.1)
The Gamma variant or lineage P.1, termed Variant of Concern 21JAN-02 (formerly VOC-202101/02) by Public Health England, 20J (V3) or 20J/501Y.V3 by Nextstrain, or just 501Y.V3, was detected in Tokyo on 6 January 2021 by the National Institute of Infectious Diseases (NIID). It has been labelled the Gamma variant by the WHO. The new variant was first identified in four people who arrived in Tokyo having travelled from the Brazilian Amazonas state on 2 January 2021. On 12 January 2021, the Brazil-UK CADDE Centre confirmed 13 local cases of the new Gamma variant in the Amazon rainforest. This variant of SARS-CoV-2 has been named lineage P.1 (although it is a descendant of B.1.1.28, the name B.1.1.28.1 is not permitted, and thus the resultant name is P.1) and has 17 unique amino acid changes, 10 of which are in its spike protein, including the three concerning mutations N501Y, E484K, and K417T.
The N501Y and E484K mutations favour the formation of a stable RBD-hACE2 complex, thus, enhancing the binding affinity of RBD to hACE2. However, the K417T mutation disfavours complex formation between RBD and hACE2, which has been demonstrated to reduce the binding affinity.
The new variant was absent in samples collected from March to November 2020 in Manaus, Amazonas state, but was detected in the same city in 42% of the samples from 15 to 23 December 2020, rising to 52.2% during 15–31 December and 85.4% during 1–9 January 2021. A study found that infections by Gamma can produce nearly ten times more viral load compared to persons infected by one of the other lineages identified in Brazil (B.1.1.28 or B.1.195). Gamma also showed 2.2 times higher transmissibility with the same ability to infect both adults and older persons, suggesting P.1 and P.1-like lineages are more successful at infecting younger humans irrespective of sex.
A study of samples collected in Manaus between November 2020 and January 2021, indicated that the Gamma variant is 1.4–2.2 times more transmissible and was shown to be capable of evading 25–61% of inherited immunity from previous coronavirus diseases, leading to the possibility of reinfection after recovery from an earlier COVID-19 infection. As for the fatality ratio, infections by Gamma were also found to be 10–80% more lethal.
A study found that people fully vaccinated with Pfizer or Moderna have significantly decreased neutralisation effect against Gamma, although the actual impact on the course of the disease is uncertain.
A pre-print study by the Oswaldo Cruz Foundation published in early April found that, in real-world conditions, a first dose of Sinovac's CoronaVac vaccine had an efficacy rate of approximately 50%. The efficacy was expected to be higher after the second dose. As of July 2021, the study was ongoing.
Preliminary data from two studies indicate that the Oxford–AstraZeneca vaccine is effective against the Gamma variant, although the exact level of efficacy has not yet been released. Preliminary data from a study conducted by Instituto Butantan suggest that CoronaVac is effective against the Gamma variant as well, and as of July 2021 the study had yet to be expanded to obtain definitive data.
On 16 March 2022, the WHO de-escalated the Gamma variant and its subvariants to "previously circulating variants of concern".
Delta (lineage B.1.617.2)
The Delta variant, also known as B.1.617.2, G/452R.V3, 21A or 21A/S:478K, was a globally dominant variant that spread to at least 185 countries. It was first discovered in India. A descendant of lineage B.1.617, which also includes the Kappa variant under investigation, it was first discovered in October 2020 and has since spread internationally. On 6 May 2021, British scientists declared B.1.617.2 (which notably lacks mutation at E484Q) a "variant of concern", labelling it VOC-21APR-02, after they flagged evidence that it spreads more quickly than the original version of the virus and could spread as quickly as, or more quickly than, Alpha. It carries the L452R and P681R mutations in the spike protein; unlike Kappa, it carries T478K but not E484Q.
On 3 June 2021, Public Health England reported that twelve of the 42 deaths from the Delta variant in England were among the fully vaccinated, and that it was spreading almost twice as fast as the Alpha variant. On 11 June, Foothills Medical Centre in Calgary, Canada, reported that half of their 22 cases of the Delta variant had occurred among the fully vaccinated.
In June 2021, reports began to appear of a variant of Delta with the K417N mutation. The mutation, also present in the Beta and Gamma variants, raised concerns about the possibility of reduced effectiveness of vaccines and antibody treatments and increased risk of reinfection. The variant, called "Delta with K417N" by Public Health England, includes two clades corresponding to the Pango lineages AY.1 and AY.2. It has been nicknamed "Delta plus" from "Delta plus K417N". The name of the mutation, K417N, refers to an exchange whereby lysine (K) is replaced by asparagine (N) at position 417. On 22 June, India's Ministry of Health and Family Welfare declared the "Delta plus" variant of COVID-19 a variant of concern, after 22 cases of the variant were reported in India. After the announcement, leading virologists said there was insufficient data to support labelling the variant as a distinct variant of concern, pointing to the small number of patients studied. In the UK in July 2021, AY.4.2 was identified. Alongside those previously mentioned it also gained the nickname 'Delta Plus', on the strength of its extra mutations, Y145H and A222V. These are not unique to it, but distinguish it from the original Delta variant.
On 7 June 2022, the WHO de-escalated the Delta variant and its subvariants to "previously circulating variants of concern".
Previously circulating variants of interest (VOI)
Epsilon (lineages B.1.429, B.1.427, CAL.20C)
The Epsilon variant or lineage B.1.429, also known as CAL.20C or CAVUI1, 21C or 20C/S:452R, is defined by five distinct mutations (I4205V and D1183Y in the ORF1ab gene, and S13I, W152C, L452R in the spike protein's S-gene), of which the L452R (previously also detected in other unrelated lineages) was of particular concern. From 17 March to 29 June 2021, the CDC listed B.1.429 and the related B.1.427 as "variants of concern". As of July 2021, Epsilon is no longer considered a variant of interest by the WHO, as it was overtaken by Alpha.
From September 2020 to January 2021, it was 19% to 24% more transmissible than earlier variants in California. Neutralisation against it by antibodies from natural infections and vaccinations was moderately reduced, but it remained detectable in most diagnostic tests.
Epsilon (CAL.20C) was first observed in July 2020 by researchers at the Cedars-Sinai Medical Center, California, in one of 1,230 virus samples collected in Los Angeles County since the start of the COVID-19 epidemic. It was not detected again until September when it reappeared among samples in California, but numbers remained very low until November. In November 2020, the Epsilon variant accounted for 36 per cent of samples collected at Cedars-Sinai Medical Center, and by January 2021, the Epsilon variant accounted for 50 per cent of samples. In a joint press release by University of California, San Francisco, California Department of Public Health, and Santa Clara County Public Health Department, the variant was also detected in multiple counties in Northern California. From November to December 2020, the frequency of the variant in sequenced cases from Northern California rose from 3% to 25%. In a preprint, CAL.20C is described as belonging to clade 20C and contributing approximately 36% of samples, while an emerging variant from the 20G clade accounts for some 24% of the samples in a study focused on Southern California. Note, however, that in the US as a whole, the 20G clade predominates, as of January 2021. Following the increasing numbers of Epsilon in California, the variant has been detected at varying frequencies in most US states. Small numbers have been detected in other countries in North America, and in Europe, Asia and Australia. After an initial increase, its frequency rapidly dropped from February 2021 as it was being outcompeted by the more transmissible Alpha. In April, Epsilon remained relatively frequent in parts of northern California, but it had virtually disappeared from the south of the state and had never been able to establish a foothold elsewhere; only 3.2% of all cases in the United States were Epsilon, whereas more than two-thirds were Alpha.
Zeta (lineage P.2)
Eta (lineage B.1.525)
The Eta variant or lineage B.1.525, also called VUI-21FEB-03 (previously VUI-202102/03) by Public Health England (PHE) and formerly known as UK1188, 21D or 20A/S:484K, does not carry the N501Y mutation found in Alpha, Beta and Gamma, but carries the same E484K mutation found in the Gamma, Zeta and Beta variants. It also carries the same ΔH69/ΔV70 deletion (a deletion of the amino acids histidine and valine at positions 69 and 70) found in Alpha, the N439K variant (B.1.141 and B.1.258) and the Y453F variant (Cluster 5). Eta differs from all other variants in having both the E484K mutation and a new F888L mutation (a substitution of phenylalanine (F) with leucine (L) in the S2 domain of the spike protein). As of 5 March 2021, it had been detected in 23 countries. It has also been reported in Mayotte, the overseas department/region of France. The first cases were detected in December 2020 in the UK and Nigeria, and as of 15 February 2021, it had occurred in the highest frequency among samples in the latter country. As of 24 February, 56 cases had been found in the UK. Denmark, which sequences all its COVID-19 cases, found 113 cases of this variant from 14 January to 21 February 2021, of which seven were directly related to foreign travel to Nigeria.
As of July 2021, UK experts were studying it to ascertain how much of a risk it could be. It was regarded as a "variant under investigation", but pending further study it could become a "variant of concern". Ravi Gupta, from the University of Cambridge, said in a BBC interview that lineage B.1.525 appeared to have "significant mutations" already seen in some of the other newer variants, which means their likely effect is to some extent more predictable.
Theta (lineage P.3)
On 18 February 2021, the Department of Health of the Philippines confirmed the detection of two mutations of SARS-CoV-2 in Central Visayas after samples from patients were sent for genome sequencing. The mutations were later identified as E484K and N501Y, detected in 37 of 50 samples, with both mutations co-occurring in 29 of these.
On 13 March, the Department of Health confirmed that the mutations constituted a variant, which was designated lineage P.3. On the same day, it also confirmed the country's first COVID-19 case caused by the Gamma variant. The Philippines had 98 cases of the Theta variant on 13 March. On 12 March, it was announced that Theta had also been detected in Japan. On 17 March, the United Kingdom confirmed its first two cases, which PHE termed VUI-21MAR-02.
On 30 April 2021, Malaysia detected 8 cases of the Theta variant in Sarawak.
As of July 2021, Theta is no longer considered a variant of interest by the WHO.
Iota (lineage B.1.526)
Kappa (lineage B.1.617.1)
Lambda (lineage C.37)
Mu (lineage B.1.621)
Formerly monitored variants (WHO)
The variants listed below were once listed under variants under monitoring, but were reclassified because they either no longer circulate at a significant level, did not have a significant impact on the epidemiological situation, or scientific evidence showed that the variant does not have concerning properties.
Omicron
Lineage B.1.1.529
The Omicron variant, known as lineage B.1.1.529, was declared a variant of concern by the World Health Organization on 26 November 2021.
The variant has a large number of mutations, of which some are concerning. Some evidence shows that this variant has an increased risk of reinfection. Studies are underway to evaluate the exact impact on transmissibility, mortality, and other factors.
Named Omicron by the WHO, it was identified in November 2021 in Botswana and South Africa; one case had travelled to Hong Kong, one confirmed case was identified in Israel in a traveler returning from Malawi, along with two who returned from South Africa and one from Madagascar. Belgium confirmed the first detected case in Europe on 26 November 2021 in an individual who had returned from Egypt on 11 November. Indian SARS-CoV-2 Genomics Consortium (INSACOG) in its January 2022 bulletin noted that Omicron is in community transmission in India where new cases have been rising exponentially.
BA. sublineages
According to the WHO, BA.1, BA.1.1, and BA.2 were the most common sublineages of Omicron globally. BA.2 contains 28 unique genetic changes, including four in its spike protein, compared to BA.1, which had already acquired 60 mutations since the ancestral Wuhan strain, including 32 in the spike protein. BA.2 is more transmissible than BA.1. It was causing most cases in England by mid-March 2022, and by the end of March, BA.2 became dominant in the US. The sublineages BA.1 to BA.5, including all their descendants, were classified as variants of concern by the WHO, the CDC, and the ECDC (with the latter excluding BA.3).
XBB sublineages
During 2022, a number of further new strains emerged in different localities, including XBB.1.5, which evolved from the XBB strain of Omicron. The first case involving XBB in England was detected from a specimen sample taken on 10 September 2022, and further cases have since been identified in most English regions. By the end of the year, XBB.1.5 accounted for 40.5% of new cases across the US and was the dominant strain; variant of concern BQ.1 was running at 18.3% and BQ.1.1 represented 26.9% of new cases, while the BA.5 strain was in decline, at 3.7%. At this stage, it was uncommon in many other countries; in the UK, for example, it represented about 7% of new cases, according to UKHSA sequencing data. BQ.1 and BQ.1.1 became known informally as "cerberus".
On 22 December 2022, the European Centre for Disease Prevention and Control wrote in a summary that XBB strains accounted for around 6.5% of new cases in five EU countries with sufficient volumes of sequencing or genotyping to provide estimates.
EG.5, a subvariant of XBB.1.9.2 (nicknamed "Eris" by some media), emerged in February 2023. On 6 August 2023, the UK Health Security Agency reported the EG.5 strain was responsible for one in seven new cases in the UK during the third week of July.
Lineage BA.2.86
During 2023, SARS-CoV-2 continued to circulate in the global population and to evolve, with a number of new subvariants. Testing, sequencing and reporting rates reduced.
BA.2.86 was first detected in a sample from 24 July 2023, and was designated as a variant under monitoring by the World Health Organization on 17 August 2023.
JN.1 (sometimes referred to as "Pirola"), a subvariant of BA.2.86, emerged during August 2023 in Luxembourg. By December 2023, it had been detected in 12 countries, including the UK and US. On 19 December, JN.1 was declared by the WHO to be a variant of interest independently of its parent strain BA.2.86, but overall risk for public health was determined as low. With JN.1 accounting for some 60% of cases in Singapore, in December 2023, Singapore and Indonesia recommended wearing masks at airports. The CDC estimated that the variant accounted for 44% of cases in the US on 22 December 2023 and 62% of cases on 5 January 2024.
JN.1 was estimated by the WHO to be the most prevalent variant of SARS-CoV-2 (70–90% prevalence in four out of six global regions; insufficient data in the East Mediterranean and African regions). The general level of population immunity and immunity from XBB.1.5 booster versions of the COVID-19 vaccine was expected to provide some protection (cross-reactivity) to JN.1.
Sublineages by year
2024
Late in April 2024, CDC data showed KP.2 to be the most common U.S. variant, accounting for a quarter of all cases, just ahead of JN.1. KP.1.1 represented 7 percent of U.S. cases. These two are sometimes referred to as the 'FLiRT' variants because they are characterized by a phenylalanine (F) to leucine (L) mutation and an arginine (R) to threonine (T) mutation in the virus's spike protein. By July 2024, a descendant of KP.2 with an extra amino acid change in the spike protein, Q493E, was given the names KP.3 and, informally, 'FLuQE', and became a major variant in New South Wales during the Australian winter. Initial research suggested that the Q493E change could make KP.3 more effective than KP.2 at binding to human cells.
As of September 2024, XEC, first found in Germany, is expected to be the next major variant. XEC is a recombination of two subvariants: KS.1.1 and KP.3.3. Only a few cases have been detected in the United States, but it is reported to have a slight advantage over other variants in terms of transmissibility.
Omicron variants under monitoring (WHO, 2022/2023)
On 25 May 2022, the World Health Organization introduced a new category for potentially concerning sublineages of widespread variants of concern, initially called VOC lineages under monitoring (VOC-LUMs). This decision was made to reflect that in February 2022, over 98% of all GISAID sequenced samples belonged to the Omicron family, within which much of the variants' evolution took place. By 9 February 2023, the category had been renamed as "Omicron variants under monitoring."
Other notable variants
Lineage B.1.1.207 was first sequenced in August 2020 in Nigeria; the implications for transmission and virulence are unclear but it has been listed as an emerging variant by the US Centers for Disease Control. Sequenced by the African Centre of Excellence for Genomics of Infectious Diseases in Nigeria, this variant has a P681H mutation, shared in common with the Alpha variant. It shares no other mutations with the Alpha variant and as of late December 2020 this variant accounts for around 1% of viral genomes sequenced in Nigeria, though this may rise. As of May 2021, lineage B.1.1.207 has been detected in 10 countries.
Lineage B.1.1.317, while not considered a variant of concern, is noteworthy in that Queensland Health forced two people undertaking hotel quarantine in Brisbane, Australia, to undergo an additional five days' quarantine on top of the mandatory 14 days after it was confirmed they were infected with this variant.
Lineage B.1.616, identified in Brittany, western France, in early January 2021 and designated by the WHO as a "variant under investigation" in March 2021, was reported to be difficult to detect with the nasopharyngeal swab sampling method of coronavirus detection; detection of the virus needs to rely on samples from the lower respiratory tract.
Lineage B.1.618 was first isolated in October 2020. It has the E484K mutation in common with several other variants, and showed significant spread in April 2021 in West Bengal, India. As of 23 April 2021, the PANGOLIN database showed 135 sequences detected in India, with single-figure numbers in each of eight other countries worldwide.
In July 2021, in a preprint later published in a journal in February 2022, scientists reported the detection of anomalous, unnamed SARS-CoV-2 lineages of unknown host via wastewater surveillance in New York City. They hypothesized that "these lineages are derived from unsampled human COVID-19 infections or that they indicate the presence of a non-human animal reservoir".
Lineage B.1.640.2 (also known as the IHU variant) was detected in October 2021 by researchers at the Institut Hospitalo-Universitaire (IHU) in Marseille. They found the variant in a traveler who returned to France from Cameroon and reportedly infected 12 people. The B.1.640 lineage, which includes B.1.640.2, was designated a variant under monitoring (VUM) by the World Health Organization (WHO) on 22 November 2021. However, the WHO has reported that lineage B.1.640.2 has spread much slower than the Omicron variant, and so is of relatively little concern. According to a preprint study, lineage B.1.640.2 has two already known spike protein mutations – E484K and N501Y – among a total of 46 nucleotide substitutions and 37 deletions.
In March 2022, researchers reported recombinant SARS-CoV-2 viruses containing elements of both Delta and Omicron, dubbed Deltacron (also called "Deltamicron"). Recombination occurs when a virus, as it assembles copies of itself, combines parts of a related virus into its genetic sequence. It is unclear whether this Deltacron – not to be confused with the "Deltacron" reported in January, although the first detection of this recombinant was also in January – will be able to compete with Omicron, and whether that would be detrimental to health.
In July 2023, Lawrence Young, a professor of virology at Warwick University, reported a heavily mutated Delta variant, from a swab of an Indonesian case, with 113 unique mutations, 37 of which affect the spike protein.
Recombinant variants
In 2022, the British government reported a number of recombinant variants of SARS-CoV-2. These recombinant lineages have been given the Pango lineage identifiers XD, XE, and XF.
XE is a recombinant lineage of Pango lineages BA.1 and BA.2. XE was believed to have a growth rate 9.8% greater than BA.2.
Incubation theory for multiple mutated variants
Researchers have suggested that multiple mutations can arise in the course of the persistent infection of an immunocompromised patient, particularly when the virus develops escape mutations under the selection pressure of antibody or convalescent plasma treatment, with the same deletions in surface antigens repeatedly recurring in different patients.
Notable missense mutations
There have been a number of missense mutations observed of SARS-CoV-2.
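The substitution names used below (e.g. N501Y) follow a fixed convention: reference amino acid, genome position, replacement amino acid. A minimal, illustrative parser for that convention (the one-letter amino-acid table covers only residues mentioned in this section):

```python
import re

# One-letter amino-acid codes appearing in the mutation names below.
AA = {"N": "asparagine", "Y": "tyrosine", "K": "lysine", "E": "glutamic acid",
      "Q": "glutamine", "L": "leucine", "R": "arginine", "D": "aspartic acid",
      "G": "glycine", "S": "serine", "F": "phenylalanine", "A": "alanine",
      "V": "valine", "P": "proline", "H": "histidine", "T": "threonine"}

def parse_substitution(name: str):
    """Parse a substitution such as 'N501Y' into (reference, position, replacement)."""
    m = re.fullmatch(r"([A-Z])(\d+)([A-Z])", name)
    if not m:
        raise ValueError(f"not a simple substitution: {name}")
    ref, pos, alt = m.groups()
    return AA[ref], int(pos), AA[alt]
```

For example, parse_substitution("N501Y") yields ("asparagine", 501, "tyrosine"), matching the prose description of N501Y below. Deletions such as del 69-70 use a different notation and are not handled by this sketch.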
del 69-70
The name of the mutation, del 69-70 (also written 69-70 del or similar), refers to the deletion of the amino acids at positions 69 to 70. The mutation is found in the Alpha variant and can cause "spike gene target failure", in which the S-gene target of some PCR virus tests returns a false negative result.
RSYLTPGD246-253N
Otherwise referred to as del 246-252, or other similar expressions, this mutation refers to the deletion of the amino acids at positions 246 to 252 in the N-terminal domain of the spike protein, accompanied by a replacement of the aspartic acid (D) at position 253 with asparagine (N).
This seven-amino-acid deletion is currently described as unique to the Lambda variant, and has been attributed as one of the causes of the strain's increased capability to escape neutralising antibodies, according to a preprint paper.
N440K
The name of the mutation, N440K, refers to an exchange whereby the asparagine (N) is replaced by lysine (K) at position 440.
This mutation has been observed in cell cultures to be 10 times more infective compared to the previously widespread A2a strain (A97V substitution in the RdRP sequence) and 1,000 times more than the less widespread A3i strain (D614G substitution in the spike protein and P323L substitution in RdRP). It was involved in rapid surges of COVID-19 cases in India in May 2021. India has the largest proportion of N440K-mutated variants, followed by the US and Germany.
G446V
The name of the mutation, G446V, refers to an exchange whereby the glycine (G) is replaced by valine (V) at position 446.
The mutation, identified in Japan among inbound travellers from May onwards and in 33 samples from individuals related to the 2020 Tokyo Olympic and Paralympic Games, may affect the binding affinity of multiple monoclonal antibodies, although its clinical impact on the use of antibody medicines is not yet known.
L452R
The name of the mutation, L452R, refers to an exchange whereby the leucine (L) is replaced by arginine (R) at position 452.
L452R is found in both the Delta and Kappa variants, which first circulated in India but have since spread around the world. L452R is a relevant mutation in this strain that enhances ACE2 receptor binding ability and can reduce the ability of vaccine-stimulated antibodies to attach to this altered spike protein.
Some studies show that L452R could even make the coronavirus resistant to T cells, which are necessary to target and destroy virus-infected cells. T cells are different from antibodies, which are useful in blocking coronavirus particles and preventing them from proliferating.
Y453F
The name of the mutation, Y453F, refers to an exchange whereby the tyrosine (Y) is replaced by phenylalanine (F) at position 453. The mutation has been found to be potentially linked to the spread of SARS-CoV-2 among minks in the Netherlands in 2020.
S477G/N
A highly flexible region in the receptor binding domain (RBD) of SARS-CoV-2, starting from residue 475 and continuing up to residue 485, was identified using bioinformatics and statistical methods in several studies. The University of Graz and the biotech company Innophore have shown in a recent publication that, structurally, position S477 shows the highest flexibility among them.
At the same time, S477 is hitherto the most frequently exchanged amino acid residue in the RBDs of SARS-CoV-2 mutants. By using molecular dynamics simulations of RBD during the binding process to hACE2, it has been shown that both S477G and S477N strengthen the binding of the SARS-COV-2 spike with the hACE2 receptor. The vaccine developer BioNTech referenced this amino acid exchange as relevant regarding future vaccine design in a preprint published in February 2021.
E484Q
The name of the mutation, E484Q, refers to an exchange whereby the glutamic acid (E) is replaced by glutamine (Q) at position 484.
The Kappa variant circulating in India has E484Q. These variants were initially (but misleadingly) referred to as a "double mutant". E484Q may enhance ACE2 receptor binding ability, and may reduce vaccine-stimulated antibodies' ability to attach to this altered spike protein.
E484K
The name of the mutation, E484K, refers to an exchange whereby the glutamic acid (E) is replaced by lysine (K) at position 484. It is nicknamed "Eeek".
E484K has been reported to be an escape mutation (i.e., a mutation that improves a virus's ability to evade the host's immune system) from at least one form of monoclonal antibody against SARS-CoV-2, indicating there may be a "possible change in antigenicity". The Gamma variant (lineage P.1), the Zeta variant (lineage P.2, also known as lineage B.1.1.28.2) and the Beta variant (501.V2) exhibit this mutation. A limited number of lineage B.1.1.7 genomes with E484K mutation have also been detected. Monoclonal and serum-derived antibodies are reported to be from 10 to 60 times less effective in neutralising virus bearing the E484K mutation. On 2 February 2021, medical scientists in the United Kingdom reported the detection of E484K in 11 samples (out of 214,000 samples), a mutation that may compromise current vaccine effectiveness.
F490S
F490S denotes a change from phenylalanine (F) to serine (S) in amino-acid position 490.
It is one of the mutations found in Lambda, and has been associated with reduced susceptibility to antibodies generated by those who were infected with other strains, meaning antibody treatments would be less effective against people infected with strains carrying this mutation.
N501Y
N501Y denotes a change from asparagine (N) to tyrosine (Y) in amino-acid position 501. N501Y has been nicknamed "Nelly".
This change is believed by PHE to increase binding affinity because of its position inside the spike glycoprotein's receptor-binding domain, which binds ACE2 in human cells; data also support the hypothesis of increased binding affinity from this change. Molecular interaction modelling and free energy of binding calculations have demonstrated that, among the variants of concern, the N501Y mutation confers the highest binding affinity of the RBD to hACE2. Variants with N501Y include Gamma, Alpha (VOC 20DEC-01), Beta, and COH.20G/501Y (identified in Columbus, Ohio). This last became the dominant form of the virus in Columbus in late December 2020 and January, and appears to have evolved independently of other variants.
N501S
N501S denotes a change from asparagine (N) to serine (S) in amino-acid position 501.
As of September 2021, eight cases of patients infected with a Delta variant featuring this N501S mutation had been identified around the world. As it is considered similar to the N501Y mutation, it is suspected to have similar characteristics, i.e. increasing the infectivity of the virus; however, its exact effect is not yet known.
D614G
D614G is a missense mutation that affects the spike protein of SARS-CoV-2. From early appearances in Eastern China early in 2020, the frequency of this mutation in the global viral population increased early on during the pandemic. G (glycine) quickly replaced D (aspartic acid) at position 614 in Europe, though more slowly in China and the rest of East Asia, supporting the hypothesis that G increased the transmission rate, which is consistent with higher viral titres and infectivity in vitro. Researchers with the PANGOLIN tool nicknamed this mutation "Doug".
In July 2020, it was reported that the more infectious D614G SARS-CoV-2 variant had become the dominant form in the pandemic. PHE confirmed that the D614G mutation had a "moderate effect on transmissibility" and was being tracked internationally.
The global prevalence of D614G correlates with the prevalence of loss of smell (anosmia) as a symptom of COVID-19, possibly mediated by higher binding of the RBD to the ACE2 receptor or higher protein stability and hence higher infectivity of the olfactory epithelium.
Variants containing the D614G mutation are found in the G clade by GISAID and the B.1 clade by the PANGOLIN tool.
Q677P/H
The name of the mutation, Q677P/H, refers to an exchange whereby the glutamine (Q) is replaced by proline (P) or histidine (H) at position 677. There are several sub-lineages containing the Q677P mutation; six of these, which also contain various different combinations of other mutations, are referred to by names of birds. One of the earlier ones noticed for example is known as "Pelican," while the most common of these as of early 2021 was provisionally named "Robin 1."
The mutation has been reported in multiple lineages circulating inside the United States as of late 2020 and also in some lineages outside the country. 'Pelican' was first detected in Oregon, and as of early 2021 'Robin 1' was found often in the Midwestern United States, while another Q677H sub-lineage, 'Robin 2', was found mostly in the southeastern United States. The recorded frequency of this mutation increased from late 2020 to early 2021.
P681H
The name of the mutation, P681H, refers to an exchange whereby the proline (P) is replaced by histidine (H) at position 681.
In January 2021, scientists reported in a preprint that the mutation P681H, a characteristic feature of the Alpha variant and lineage B.1.1.207 (identified in Nigeria), is showing a significant exponential increase in worldwide frequency, thus following the trend expected in the lower limb of a logistic curve. This may be compared with the trend of the now globally prevalent D614G.
P681R
The name of the mutation, P681R, refers to an exchange whereby the proline (P) is replaced by arginine (R) at position 681.
Indian SARS-CoV-2 Genomics Consortium (INSACOG) found that other than the two mutations E484Q and L452R, there is also a third significant mutation, P681R in lineage B.1.617. All three concerning mutations are on the spike protein, the operative part of the coronavirus that binds to receptor cells of the body.
A701V
According to initial media reports, the Malaysian Ministry of Health announced on 23 December 2020 that it had discovered a mutation in the SARS-CoV-2 genome which they designated as A701B (sic), among 60 samples collected from the Benteng Lahad Datu cluster in Sabah. The mutation was characterised as being similar to one found recently at that time in South Africa, Australia, and the Netherlands, although it was uncertain whether this mutation was more infectious or aggressive than before. The provincial government of Sulu in the neighbouring Philippines temporarily suspended travel to Sabah in response to the discovery of 'A701B', due to uncertainty over the nature of the mutation.
On 25 December 2020, the Malaysian Ministry of Health described a mutation A701V as circulating and present in 85% of cases (D614G was present in 100% of cases) in Malaysia. These reports also referred to samples collected from the Benteng Lahad Datu cluster. The text of the announcement was mirrored verbatim on the Facebook page of Noor Hisham Abdullah, Malaysia's Director-General of Health, who was quoted in some of the news articles.
The A701V mutation has the amino acid alanine (A) substituted by valine (V) at position 701 in the spike protein. Globally, South Africa, Australia, the Netherlands and England also reported A701V at about the same time as Malaysia. In GISAID, the prevalence of this mutation is found to be about 0.18% of cases.
On 14 April 2021, the Malaysian Ministry of Health reported that the third wave, which had started in Sabah, had involved the introduction of variants with D614G and A701V mutations.
Data and methods
Modern DNA sequencing, where available, may permit rapid detection (sometimes known as 'real-time detection') of genetic variants that appear in pathogens during disease outbreaks. Through use of phylogenetic tree visualisation software, records of genome sequences can be clustered into groups of identical genomes all containing the same set of mutations. Each group represents a 'variant', 'clade', or 'lineage', and comparison of the sequences allows the evolutionary path of a virus to be deduced. For SARS-CoV-2, until March 2021, over 330,000 viral genomic sequences had been generated by molecular epidemiology studies across the world.
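The clustering idea just described — records carrying identical mutation sets fall into the same group — can be sketched in a few lines. This is a toy illustration with hypothetical records and function names; real pipelines such as Nextstrain or Pangolin are far more involved:

```python
from collections import defaultdict

def group_by_mutations(records):
    """Cluster genome records so that all records carrying exactly the
    same set of mutations end up in one group ('variant'/'lineage')."""
    groups = defaultdict(list)
    for seq_id, mutations in records:
        # frozenset makes the mutation set usable as a dictionary key,
        # ignoring the order in which mutations were listed.
        groups[frozenset(mutations)].append(seq_id)
    return dict(groups)

# Hypothetical records: (sequence id, mutations relative to reference).
records = [
    ("seq1", ["D614G", "P681H"]),
    ("seq2", ["P681H", "D614G"]),
    ("seq3", ["D614G", "E484Q", "L452R", "P681R"]),
]
groups = group_by_mutations(records)
# seq1 and seq2 share an identical mutation set, so they cluster together.
```

Comparing which mutations the groups share (here, D614G) is the starting point for deducing an evolutionary path between them.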
New variant detection and assessment
On 26 January 2021, the British government said it would share its genomic sequencing capabilities with other countries in order to increase the genomic sequencing rate and trace new variants, and announced a "New Variant Assessment Platform". At the time, more than half of all genomic sequencing of COVID-19 was carried out in the UK.
Wastewater surveillance was demonstrated to be one technique to detect SARS-CoV-2 variants and to track their rise for studying related ongoing infection dynamics.
Testing
Whether one or more mutations visible in RT-PCR tests can be used reliably to identify a variant depends on the prevalence of other variants currently circulating in the same population.
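This prevalence dependence can be made concrete with Bayes' rule: if a marker mutation appears in most genomes of the target variant but also at some rate in other circulating genomes, the probability that a marker-positive test truly indicates the target variant rises and falls with the variant's prevalence. A sketch with made-up illustrative numbers (not surveillance data):

```python
def prob_variant_given_marker(prevalence, rate_in_variant, rate_in_others):
    """Bayes' rule: P(target variant | marker mutation detected)."""
    true_pos = prevalence * rate_in_variant
    false_pos = (1.0 - prevalence) * rate_in_others
    return true_pos / (true_pos + false_pos)

# Same assay, same mutation rates — only the variant's prevalence differs.
low = prob_variant_given_marker(0.01, 0.99, 0.05)   # variant rare:   ≈ 0.17
high = prob_variant_given_marker(0.50, 0.99, 0.05)  # variant common: ≈ 0.95
```

With identical test characteristics, a positive marker result is weak evidence when the variant is rare but strong evidence when it dominates circulation.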
Cross-species transmission
There is a risk that COVID-19 could transfer from humans to other animal populations and could combine with other animal viruses to create yet more variants that are dangerous to humans. Reverse zoonosis spillovers may cause reservoirs for mutating variants that spill back to humans – another possible source for variants of concern, in addition to immunocompromised people.
Cluster 5
In early November 2020, Cluster 5, also referred to as ΔFVI-spike by the Danish State Serum Institute (SSI), was discovered in Northern Jutland, Denmark. It is believed to have been spread from minks to humans via mink farms. On 4 November 2020, it was announced that the mink population in Denmark would be culled to prevent the possible spread of this mutation and reduce the risk of new mutations happening. A lockdown and travel restrictions were introduced in seven municipalities of Northern Jutland to prevent the mutation from spreading, which could compromise national or international responses to the COVID-19 pandemic. By 5 November 2020, some 214 mink-related human cases had been detected.
The WHO stated that cluster 5 had a "moderately decreased sensitivity to neutralising antibodies". SSI warned that the mutation could reduce the effect of COVID-19 vaccines under development, although it was unlikely to render them useless. Following the lockdown and mass-testing, SSI announced on 19 November 2020 that cluster 5 in all probability had become extinct. As of 1 February 2021, authors to a peer-reviewed paper, all of whom were from the SSI, assessed that cluster 5 was not in circulation in the human population.
Vaccines
During the COVID-19 pandemic, a variety of vaccines were developed. The vaccines were rolled out for administration to a broad range of recipients, typically beginning with the most vulnerable demographic groups.
Differential vaccine effectiveness
The interplay between the SARS-CoV-2 virus and its human hosts was initially natural but then started being altered by the rising availability of vaccines seen in 2021. The potential emergence of a SARS-CoV-2 variant that is moderately or fully resistant to the antibody response elicited by the COVID-19 vaccines may necessitate modification of the vaccines. The emergence of vaccine-resistant variants is more likely in a highly vaccinated population with uncontrolled transmission.
As of February 2021, the US Food and Drug Administration believed that all FDA authorized vaccines remained effective in protecting against circulating strains of SARS-CoV-2.
Immune evasion by variants
Vaccine adjustments
See also
SARS-CoV-2 Omicron variant
GX P2V, a mutant strain of COVID-19 that is deadly to hACE2-humanized mice
RaTG13, the second closest known relative to SARS-CoV-2
Colloquial names of COVID-19 variants
Notes
References
Further reading
External links
CoVariants
Cov-Lineages
GISAID – hCov19 Variants
Articles containing video clips
COVID-19 pandemic-related lists
SARS-CoV-2 | Variants of SARS-CoV-2 | [
"Biology"
] | 12,104 | [
"Viruses",
"Lists of viruses"
] |
66,185,856 | https://en.wikipedia.org/wiki/Container%20chassis | A container chassis, also called intermodal chassis or skeletal trailer, is a type of semi-trailer designed to securely carry an intermodal container. Chassis are used by truckers to deliver containers between ports, railyards, container depots, and shipper facilities, and are thus a key part of the intermodal supply chain.
Operation
The use of chassis to haul containers over-the-road is known as drayage trucking, and is a section of intermodal transport, which also includes rail transport of containers using well or flat cars and overseas transport in ships or barges. Like other intermodal equipment, chassis are equipped with twistlocks at each corner, which allow a container (hoisted onto or off the chassis by a crane) to be locked on for secure transport or unlocked to be lifted off. The length of a chassis corresponds to the container size it will fit (i.e., a 40-foot-long chassis fits a 40-foot-long container), but some models are adjustable in length.
Semi-tractor trucks hook up to chassis via the kingpin. When disconnected from a tractor, the chassis' landing gear can be cranked down to park it.
Portable generators, also called gensets, can be mounted (underslung) onto chassis. These gensets are used to power a refrigerated container.
The axle group on some chassis (especially 20-foot and 53-foot units) can be slid backwards or forwards to change the weight distribution of heavy containers, allowing safe operation and compliance with weight restrictions.
An identification number is often stenciled on chassis to track each unit in a fleet. According to ISO 6346, a chassis should have the letter "Z" at the end of its reporting mark.
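Assuming the standard ISO 6346 layout (three-letter owner code, one-letter equipment category identifier, six-digit serial number, and a check digit), the "Z" category letter can be checked as follows. The pattern, function name, and example marks here are illustrative assumptions, not taken from the standard's text:

```python
import re

# Assumed ISO 6346 layout: 3-letter owner code, 1-letter equipment
# category identifier, 6-digit serial, 1 check digit. The category
# letter 'Z' identifies trailers and chassis.
MARK = re.compile(r"([A-Z]{3})([UJZ])(\d{6})(\d)")

def is_chassis_mark(mark):
    """Return True if the reporting mark parses and its equipment
    category identifier is 'Z' (trailer/chassis)."""
    m = MARK.fullmatch(mark.replace(" ", ""))
    return bool(m) and m.group(2) == "Z"

# 'ABCZ' is a made-up owner code used purely for illustration.
print(is_chassis_mark("ABCZ 123456 0"))  # True  — category letter is 'Z'
print(is_chassis_mark("ABCU 123456 0"))  # False — 'U' marks a container
```

A full validator would also verify the trailing check digit against the rest of the mark, which this sketch omits.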
A variation is the tank container chassis, used for ISO tank containers. These are characteristically longer and have a lower deck height than standard chassis, ideal for transporting constantly shifting payloads. These chassis can also be fitted with additional accessories, including lift kits to facilitate product discharge, hose tubes, and hi/lo kits to carry two empty tanks. They come in tandem axle, spread axle, tri-axle, and hi/lo combo configurations.
Chassis Pools
Unlike other countries where chassis are mostly owned or long-term leased by trucking companies, in the United States most chassis are currently owned by a few leasing companies (pools) which rent out the equipment to truckers. When a trucker leaves or enters a facility with a pool chassis, an EDI record is generated at the facility gate which identifies the trucking company and the chassis pool, and this allows the pool to invoice the appropriate trucking company for chassis usage. The system is influenced by the steamship lines and by the operation of container terminals. Firstly, containers are commonly stored on chassis as a single mounted unit at rail yards and depots–such terminals are known as "wheeled" facilities. Secondly, steamship lines offer a service called ″carrier haulage″ or ″store door delivery″, whereby they arrange the drayage of a customer’s container. The steamship line hires a local trucking company and pays the pool for the chassis usage.
As a result, steamship lines formed contractual agreements with the pools which entail that when a container is on-terminal it must be on a chassis from a pool specified by the steamship line. This means that at wheeled facilities, containers are mounted onto chassis selected by the steamship line before the trucker arrives for pickup. Some disadvantages of this system are that it can restrict truckers' choice of which chassis to use and it can cause "chassis splits", which occur when a container and its required chassis pool are in different locations.
Shortages
In the United States, container chassis shortages are a chronic problem, especially during peaks in container volume. There are several causes of chassis shortages, but a common problem is excessive off-terminal dwell time. Off-terminal dwell time is the length of time a shipper keeps a chassis/container at their premises. Long dwell times mean fewer free chassis on-site at ports and rail ramps.
See also
Semi-trailer
Containerization
Intermodal freight transport
Container port
Sidelifter
Drayage
ISO 6346
Swap body
Roadrailer
References
Further reading
External links
How truckers can avoid a bad dray day – Six tips for truckers about chassis pools
At Ag Exporters’ Meeting, the Chassis Debate Rages On – Discussion about the two types of chassis pools, co-op and proprietary
Athearn HO scale chassis – Model Railroading magazine, June 1999
RR Rolling Stock Category: Chassis – Picture archives of intermodal chassis in US
Chassis
Intermodal transport
Freight transport
Port infrastructure
Trucks
Trailers | Container chassis | [
"Physics"
] | 953 | [
"Physical systems",
"Transport",
"Intermodal transport"
] |
66,186,011 | https://en.wikipedia.org/wiki/SRT-3025 | SRT-3025 is an experimental drug that was studied by Sirtris Pharmaceuticals as a small-molecule activator of the sirtuin subtype SIRT1. It has been investigated as a potential treatment for osteoporosis, and anemia.
See also
SRT-1460
SRT-1720
SRT-2104
SRT-2183
STAC-9
References
1-Pyrrolidinyl compounds
Thiazoles
Amides | SRT-3025 | [
"Chemistry"
] | 98 | [
"Pharmacology",
"Functional groups",
"Medicinal chemistry stubs",
"Pharmacology stubs",
"Amides"
] |
66,186,352 | https://en.wikipedia.org/wiki/Gila%20Hanna | Gila Hanna is a Canadian mathematics educator and philosopher of mathematics whose research interests include the nature and educational role of mathematical proofs, and gender in mathematics education. She is professor emerita in the Department of Curriculum, Teaching and Learning at the University of Toronto, affiliated with the Ontario Institute for Studies in Education, the former director of mathematics education at the Fields Institute, and the founder of the Canadian Journal of Mathematics, Science and Technology Education.
Books
Hanna is the author of Contact and Communication: An Evaluation of Bilingual Student Exchange Programs (OISE Press, 1980) and Rigorous Proof in Mathematics Education (OISE Press, 1983). Her numerous edited volumes include:
Creativity, Thought and Mathematical Proof (edited with Ian Winchester, 1990)
Towards Gender Equity in Mathematics Education (1996)
Proof Technology in Mathematics Research and Teaching (edited with David Reid and Michael de Villiers, 2019)
Recognition
Hanna was named a Fields Institute Fellow in 2003. She was the 2020 winner of the Partners in Research Dr. Jonathon Borwein Mathematics Ambassador Award.
References
External links
Home page
Living people
20th-century Canadian mathematicians
21st-century Canadian mathematicians
20th-century Canadian philosophers
21st-century Canadian philosophers
Canadian women mathematicians
Canadian women philosophers
Mathematics educators
Philosophers of mathematics
Year of birth missing (living people)
20th-century Canadian women scientists | Gila Hanna | [
"Mathematics"
] | 263 | [
"Philosophers of mathematics"
] |
66,187,030 | https://en.wikipedia.org/wiki/Arthur%20von%20Abramson | Arthur von Abramson (born 3 March 1854) was an Imperial Russian civil engineer.
He was born to a Jewish family in Odessa, and was educated at the city's gymnasium. He studied mathematics at the University of Odessa, but left to take a course in civil engineering at the Zurich Polytechnikum, from which he was graduated in 1876. Returning to Russia in 1879, von Abramson passed the state examination at the Russian Imperial Institute of Roads and Communications, and was appointed one of the directors of the Russian state railway at Kiev. He devised, built, and managed the sewer system of Kiev, and constructed the street-railroad of that city. In 1881 he founded and became editor-in-chief of a technical monthly, Inzhener ('The Engineer'). He was appointed president of the local sewer company and director of the Kiev city railroad.
Publications
Published in English as
References
1854 births
Year of death unknown
ETH Zurich alumni
Civil engineers from the Russian Empire
Editors from the Russian Empire
Odesa Jews
Print editors
Railway civil engineers
People from the Russian Empire in rail transport | Arthur von Abramson | [
"Engineering"
] | 220 | [
"Civil engineering",
"Civil engineering stubs"
] |
66,188,172 | https://en.wikipedia.org/wiki/Sandrine%20L%C3%A9v%C3%AAque-Fort | Sandrine Lévêque-Fort is a French optical physicist working in the field of Super-resolution imaging at Paris-Saclay University.
She was the recipient of the Irène Joliot-Curie Prize, awarded by the French Ministry of Higher Education, Research and Innovation, in 2020.
Education and career
Lévêque-Fort holds a master's degree from Paris-Sorbonne University and conducted her doctoral research at ESPCI Paris, which she completed in 2000. She then joined Imperial College London as a postdoctoral fellow, before joining the French National Centre for Scientific Research.
Awards and honours
2020 Irène Joliot-Curie Prize
References
External links
Lévêque-Fort group
French physicists
Living people
Women in optics
Optical physicists
Paris-Sorbonne University alumni
Paris-Saclay University people
French women physicists
Year of birth missing (living people)
Microscopists | Sandrine Lévêque-Fort | [
"Chemistry"
] | 174 | [
"Microscopists",
"Microscopy"
] |
66,188,236 | https://en.wikipedia.org/wiki/Playing%20with%20Infinity | Playing with Infinity: Mathematical Explorations and Excursions is a book in popular mathematics by Hungarian mathematician Rózsa Péter, published in German in 1955 and in English in 1961.
Publication history and translations
Playing with Infinity was originally written in 1943 by mathematician Rózsa Péter, based on a series of letters Péter had written to a non-mathematical friend. Because of World War II, it was not published until 1955, in German, under the title Das Spiel mit dem Unendlichen, by Teubner.
An English translation by Zoltán Pál Dienes was published in 1961 by G. Bell & Sons in England, and by Simon & Schuster in the US. The English version was reprinted in 1976 by Dover Books. The German version was also reprinted, in 1984, by Verlag Harri Deutsch. The book has also been translated into Polish in 1962 and Russian in 1967. The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries.
Topics
Playing with Infinity presents a broad panorama of mathematics for a popular audience. It is divided into three parts, the first of which concerns counting, arithmetic, and connections from numbers to geometry both through visual proofs of results in arithmetic like the sum of finite arithmetic series, and in the other direction through counting problems for geometric objects like the diagonals of polygons. These ideas lead to more advanced topics including Pascal's triangle, the Seven Bridges of Königsberg, the prime number theorem and the sieve of Eratosthenes, and the beginnings of algebra and its use in proving the impossibility of certain straightedge and compass constructions.
The second part begins with the power of inverse operations to construct more powerful systems of numbers: negative numbers from subtraction, and rational numbers from division. Later topics in this part include the countability of the rationals, the irrationality of the square root of 2, exponentiation and logarithms, graphs of functions, slopes and areas of curves, and complex numbers. Topics in the third part include non-Euclidean geometry, higher dimensions, mathematical logic, the failings of naive set theory, and Gödel's incompleteness theorems.
In keeping with its title, these topics allow Playing with Infinity to introduce many different ways in which ideas of infinity have entered mathematics, in the notions of infinite series and limits in the first part, countability and transcendental numbers in the second, and the introduction of infinite points in projective geometry, higher dimensions, metamathematics, and undecidability in the third.
Audience and reception
Reviewer Philip Peak writes that the book succeeds in showing readers the joy of mathematics without getting them bogged down in calculations and formulas. On a similar note, Michael Holt recommends the book to mathematics teachers, as a sample of the more conceptual style of mathematics taught in Hungary at the time in contrast to the orientation towards practical calculation of English pedagogy. Reuben Goodstein summarizes it more succinctly as "the best book on mathematics for everyman that I have ever seen".
By the time of Leon Harkleroad's review in 2011, the book had become "an acknowledged classic of mathematical popularization". However, Harkleroad also notes that some idiosyncrasies of the translation, such as its use of pre-decimal British currency, have since become quaint and old-fashioned. And similarly, although W. W. Sawyer in reviewing the original 1955 publication calls its inclusion of topics from graph theory and topology "truly modern", Harkleroad points out that more recent works in this genre have included other topics in their own quest for modernity such as "fractals, public-key cryptography, and internet search engines", which for obvious reasons Péter omits.
References
Popular mathematics books
1943 non-fiction books
1955 non-fiction books
1961 non-fiction books
Infinity | Playing with Infinity | [
"Mathematics"
] | 801 | [
"Mathematical objects",
"Infinity"
] |
66,190,328 | https://en.wikipedia.org/wiki/Emmy%20Noether%20Fellowship | The LMS Emmy Noether Fellowship is a fellowship awarded by the London Mathematical Society.
"The fellowships are designed to enhance the mathematical sciences research, broadly construed, of holders either re-establishing their research programme after returning from a major break associated with caring responsibilities or those requiring support to maintain their research programme while dealing with significant ongoing caring responsibilities.
The fellowship is named after the German mathematician Emmy Noether.
Winners
The winners of the LMS Emmy Noether Fellowship have been:
2020 - Dr Milena Hering, University of Edinburgh
2020 - Dr Anne-Sophie Kaloghiros, Brunel University
2020 - Dr Irene Kyza, University of Dundee
2020 - Dr Cristina Manolache, University of Sheffield
References
Awards of the London Mathematical Society | Emmy Noether Fellowship | [
"Technology"
] | 155 | [
"Science and technology awards",
"Science award stubs"
] |
66,191,065 | https://en.wikipedia.org/wiki/List%20of%20population%20related%20meta%20concepts%20and%20meta%20lists | Outline of demography contains human demography and population related important concepts and high-level aggregated lists compiled in the useful categories.
The subheadings have been grouped by the following 4 categories:
Meta (lit. "highest" level) units, such as the universal important concepts related to demographics and places.
Macro (lit. "high" level) units where the "whole world" is the smallest unit of measurement, such as the aggregated summary demographics at global level. For example, United Nations.
Meso (lit. "middle" or "intermediate" level) units where the smallest unit of measurement cover more than one nation and more than one continent but not all the nations or continents. For example, summary list at continental level, e.g. Eurasia and Latin America or Middle East which cover two or more continents. Other examples include the intercontinental organisations e.g. the Commonwealth of Nations or the organisation of Arab states.
Micro (lit. "lower" or "smaller") level units where country is the smallest unit of measurement, such as the "globally aggregated lists" by the "individual countries".
Please do not add sections on items that are nano (lit. "minor" or "tiny") level units in the context described above; e.g., lists of things within a city must be kept out.
Meta or important concepts
Global human population
World population
Demographics of the world
Fertility and intelligence
Human geography
Geographic mobility
Globalization
Human migration
List of lists on linguistics
Impact of human population
Human impact on the environment
Biological dispersal
Carrying capacity
Doomsday argument
Environmental migrant
Human overpopulation
Malthusian catastrophe
List of countries by carbon dioxide emissions
List of countries by carbon dioxide emissions per capita
List of countries by greenhouse gas emissions
List of countries by greenhouse gas emissions per capita
Overconsumption
Overexploitation
Population ecology
Continuation of human species
Sustainable and secure optimum human population
Birth control
Family planning
Human population planning
Human security
Ideal free distribution
Survivability
Survivorship curve
Sustainable food security
Food security
Geography of food
Nutritional economics
Sustainable agriculture
Sustainability
Climate change
Environmental Sustainability
List of climate scientists
List of global sustainability statistics
List of renewable energy topics by country and territory
List of environmental academic degrees
List of sustainability programs in North America
Voluntary Human Extinction Movement
Macro or highest level global lists
These are macro (high or top) level lists.
Global lists of historical populations
Historical demography
Population reconstruction
List of largest empires
List of richest nations throughout history
List of largest historical cities by chronology
World summary lists
World population
List of continents by population
List of regional organizations by population
List of religious populations
List of unrecognized countries
List of micronations
Meso or intermediate level sub-global lists
These meso (middle or intermediate) level lists are
"supra-national" (more than one continent "and" more than one nation), but
"sub-global" (but not all nations "or" all continents).
Intercontinental lists
This is aggregated list of demographics and places spanning across more than one continent but not across all the nations.
Americas
Arab states
Commonwealth of Nations
Eurasia
Latin America
Middle East
Summary lists aggregated by continents
This is aggregated list by individual subcontinent.
Asia
This is aggregated list of Asia subcontinent.
List of Asian countries by population
Population density of Asian countries
List of urban agglomerations in Asia
List of metropolitan areas in Asia
Africa
This is aggregated list of Africa subcontinent.
List of African countries by population
List of African countries by population density
Europe
This is aggregated list of Europe subcontinent.
List of European countries by population and area
List of the European Union members by density
North America
This is aggregated list of North America subcontinent.
List of North American countries by population
List of sovereign states and dependent territories in North America by population density
South America
This is aggregated list of South America subcontinent.
List of South American countries by population
List of sovereign states and dependent territories in South America by population density
Oceania
This is aggregated list of Oceania as subcontinent.
List of Oceanian countries by population
Population density of countries in Oceania
Micro or lower level global lists
Global lists aggregated by countries
This is the "aggregated global" list by countries. Do not put lists of individual countries in this section.
List of list of countries and regions
Lists of countries by GDP
List of countries by population
List of countries by population (United Nations)
List of countries by past and future population
List of countries by population density
List of countries by arable land density
List of countries by population growth rate
List of countries by fertility rate
List of countries by median age
List of countries by refugee population
Global lists aggregated by subdivisions of countries
This is aggregated "global" list of subdivisions by countries.
List of the largest country subdivisions by area
List of political and geographic subdivisions by total area
Global lists aggregated by urban areas of countries
This is aggregated "global" list of urban areas by countries.
Urbanization
Transborder agglomeration
List of countries by urban population
List of largest cities by population
List of largest cities by density
List of divided cities
Lists of list of cities
Lists of list of places
Lists of list of Settlements
Lists of individual countries
This list relates only to individual countries, i.e. create a separate subsection for each country. Please help expand this incomplete list by adding more nations.
Bangladesh
Demographics of Bangladesh.
List of cities and towns in Bangladesh by population
List of villages in Bangladesh
Brazil
Demographics of Brazil
List of Brazilian states by area
List of Brazilian states by murder rate
List of cities in Brazil by population
List of Brazilian states by population
China
Demographics of China
List of Chinese administrative divisions by population
List of cities in China by population
India
Demographics of India
List of cities in India by population
List of million-plus agglomerations in India
List of metropolitan areas in India
List of states and union territories of India by population
List of towns in India by population
Indonesia
Demographics of Indonesia
Provinces of Indonesia
List of Indonesian cities by population
List of Indonesian islands by population
Pakistan
Demographic history of Pakistan
Demographics of Pakistan
List of cities in Pakistan by population
Ethnic groups in Pakistan
Russia
Demographics of Russia
List of cities and towns in Russia by population
List of federal subjects of Russia by population
USA
Demographics of United States
List of states and territories of the United States by population
See also
List of lists of lists
List of lists
References
List of population related meta concepts and meta lists
Lists of countries by continent, by population
Lists of countries by continent
Lists of countries by population
Lists of countries by large sub or trans-continental region, by population
Lists of countries in the Americas
Lists of countries in Africa
Lists of countries in Asia
Demographics of North America
Demographics of South America
Lists of countries in Oceania
Demographics of Oceania
Population related
Sustainability lists
Human overpopulation
Lists of countries by population density
Lists by population density
Population ecology
World population
Lists of geography lists
Demography
Outlines | List of population related meta concepts and meta lists | [
"Environmental_science"
] | 1,369 | [
"Demography",
"Environmental social science"
] |
66,191,118 | https://en.wikipedia.org/wiki/Half-sider%20budgerigar | A half-sider budgerigar is an unusual congenital condition that causes a budgerigar to display one color on one side of its body and a different color on the other.
This is not a simple genetic mutation, as can be observed in other color and pattern variations in this species. It is a rare example of a tetragametic chimera, which originates when two fertilized embryos merge during a very early stage of development — between the 2-cell and the 64-cell stage. Each half has different DNA, with genetically distinct cells and the resultant bird is in effect two budgerigars fused together to form a single autonomous individual.
The half-sider's coloring is usually divided bilaterally down the center, although, it can differ depending on which stage the twin embryos merged during development. Twin embryos that merged later in development will result in a budgerigar that has a splotchier distribution of the different cell populations.
In the case of the half-sider budgerigar, both embryos must possess different genetic phenotypes (one yellow-based and one white-based) in order for a visible half-sider to be produced. If both "halves" were the same base, it would still be a tetragametic chimera, but not a half-sider. It is also possible for the half-sider to be male on one side and female on the other (evidenced by a half blue, half brown cere) – an example of a bilateral gynandromorph.
Breeding a half-sider is unlikely to produce more half-siders, even when breeding two half-siders together, as the genetic makeup of the half that contributed the cells that make up the reproductive system is that which would then be perpetuated, assuming that the bird is even fertile in the first place. The chance of producing another half-sider would be the same as for any other budgerigar pairing.
References
Budgerigar colour mutations
Chimerism | Half-sider budgerigar | [
"Biology"
] | 421 | [
"Chimerism",
"Behavior",
"Reproduction"
] |
56,377,909 | https://en.wikipedia.org/wiki/Sharp%20PC-3000 | The Sharp PC-3000 was an MS-DOS-based palmtop computer introduced in 1991. The "SPC" was designed and developed by Distributed Information Processing Research Ltd. ("DIP") in the UK. DIP had earlier designed the Atari Portfolio and the two machines shared many design features both in hardware and software.
Features
As with desktop IBM PCs, this one-pound device's screen displayed 80 columns by 25 lines.
Peripherals
The machine was one of the first to support the PC card interface, at the time known as PCMCIA.
Printers, floppy drives, dial-up modems, and fax modems were among the supported peripheral devices.
System software
Choices were MS-DOS 3.3 and Microsoft Windows 3.0 (running in real mode with a mouse).
Application software
The machine came with a suite of built-in applications providing a simple word processor, a calculator, and a Lotus 1-2-3-compatible spreadsheet.
With some tweaking, it was also possible to run WordPerfect, Microsoft Word and Microsoft Excel.
Sharp PC-3100
A 2 MB model was produced: the 3100.
Notes and references
Computer-related introductions in 1991
PC-3000 | Sharp PC-3000 | [
"Technology"
] | 244 | [
"Computing stubs"
] |
56,378,067 | https://en.wikipedia.org/wiki/NGC%206041 | NGC 6041 is a giant elliptical galaxy located about 470 million light-years away in the constellation Hercules. NGC 6041 has an extended envelope that is distorted towards the galaxy pair Arp 122. NGC 6041 is the brightest galaxy (BCG) in the Hercules Cluster. The galaxy was discovered by astronomer Édouard Stephan on June 27, 1870.
See also
List of NGC objects (6001–7000)
Messier 87
NGC 1399
NGC 4874
NGC 4889
References
External links
Hercules (constellation)
Elliptical galaxies
6041
056962
+03-41-078
10170
Astronomical objects discovered in 1870
Hercules Cluster
Discoveries by Édouard Stephan | NGC 6041 | [
"Astronomy"
] | 134 | [
"Hercules (constellation)",
"Constellations"
] |
56,378,847 | https://en.wikipedia.org/wiki/Hypotonic-hyporesponsive%20episode | A hypotonic-hyporesponsive episode (HHE) is defined as sudden onset of poor muscle tone, reduced consciousness, and pale or bluish skin occurring within 48 hours after vaccination, most commonly pertussis vaccination. An HHE is estimated to occur after 1 in 4,762 to 1 in 1,408 doses of whole cell pertussis vaccine, and after 1 in 14,286 to 1 in 2,778 doses of acellular pertussis vaccine.
References
Vaccination
Symptoms and signs: Nervous system | Hypotonic-hyporesponsive episode | [
"Biology"
] | 119 | [
"Vaccination"
] |
56,379,488 | https://en.wikipedia.org/wiki/Biological%20effects%20of%20radiation%20on%20the%20epigenome | Ionizing radiation can cause biological effects which are passed on to offspring through the epigenome. The effects of radiation on cells has been found to be dependent on the dosage of the radiation, the location of the cell in regards to tissue, and whether the cell is a somatic or germ line cell. Generally, ionizing radiation appears to reduce methylation of DNA in cells.
Ionizing radiation has been known to cause damage to cellular components such as proteins, lipids, and nucleic acids. It has also been known to cause DNA double-strand breaks. Accumulation of DNA double-strand breaks can lead to cell cycle arrest in somatic cells and cause cell death. Because of its ability to induce cell cycle arrest, ionizing radiation is used against abnormal growths in the human body, such as cancer cells, in radiation therapy. Most cancers are treated with some type of radiotherapy; however, some cells, such as cancer stem cells, show recurrence when treated with this type of therapy.
Radiation exposure in everyday life
Non-ionizing radiation, such as electromagnetic fields (EMF) from radiofrequency (RF) or power-frequency sources, has become very common in everyday life. These exist as low-frequency radiation, which can come from wireless cellular devices or from electrical appliances that emit extremely low frequency (ELF) radiation. Exposure to these frequencies has shown negative effects on male fertility, by impacting the DNA of the sperm and deteriorating the testes, as well as an increased risk of tumor formation in salivary glands. The International Agency for Research on Cancer considers RF electromagnetic fields to be possibly carcinogenic to humans; however, the evidence is limited.
Radiation and medical imaging
Advances in medical imaging have resulted in increased exposure of humans to low doses of ionizing radiation. Radiation exposure in pediatrics has been shown to have a greater impact, as children's cells are still developing. The radiation received from medical imaging techniques is only harmful if consistently targeted multiple times in a short space of time. Safety measures have been introduced in order to limit the exposure to harmful ionizing radiation, such as the use of protective material during the use of these imaging tools. A lower dosage is also used in order to minimize the possibility of a harmful effect from the medical imaging tools. The National Council on Radiation Protection and Measurements, along with many other scientific committees, has ruled in favor of continued use of medical imaging, as the reward far outweighs the minimal risk obtained from these imaging techniques. If the safety protocols are not followed, there is a potential increase in the risk of developing cancer. This is primarily due to the decreased methylation of cell cycle genes, such as those relating to apoptosis and DNA repair. The ionizing radiation from these techniques can cause many other detrimental effects in cells, including changes in gene expression and halting of the cell cycle. However, these results are extremely unlikely if the proper protocols are followed.
Target theory
Target theory concerns the models of how radiation kills biological cells and is based around two main postulates:
"Radiation is considered to be a sequence of random projectiles;
the components of the cell are considered as the targets bombarded by these projectiles"
Several models have been based around the above two points. From the various proposed models three main conclusions were found:
Physical hits obey a Poisson distribution
Failure of radioactive particles to attack sensitive areas of cells allow for survival of the cell
Cell death is an exponential function of the dose of radiation received as the number of hits received is directly proportional to the radiation dose; all hits are considered lethal
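The first and third conclusions above can be checked numerically: if the number of hits follows a Poisson distribution whose mean grows linearly with dose, then survival (the probability of receiving zero lethal hits) decays exponentially with dose. The sketch below uses an assumed, illustrative hit-rate constant c; it is a toy check of the model, not a fit to biological data.

```python
import math

def poisson_pmf(k, mean):
    """Probability of exactly k hits when hit counts are Poisson-distributed."""
    return mean ** k * math.exp(-mean) / math.factorial(k)

c = 0.8  # assumed hits per unit dose (illustrative, not a measured value)
for dose in [0.5, 1.0, 2.0, 4.0]:
    survival = poisson_pmf(0, c * dose)   # survive = receive zero lethal hits
    # The zero-hit Poisson probability is exactly the exponential dose law:
    assert abs(survival - math.exp(-c * dose)) < 1e-12
    print(f"dose {dose}: survival fraction {survival:.4f}")
```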
Exposure to ionizing radiation (IR) affects a variety of processes inside an exposed cell. IR can cause changes in gene expression, disruption of cell cycle arrest, and apoptotic cell death. The extent of radiation's effects on cells depends on the type of cell and the dosage of the radiation. Some irradiated cancer cells have been shown to exhibit altered DNA methylation patterns due to epigenetic mechanisms in the cell. In medicine, medical diagnostic methods such as CT scans and radiation therapy expose the individual to ionizing radiation. Irradiated cells can also induce genomic instability in neighboring unirradiated cells via the bystander effect. Radiation exposure can also occur through many channels other than ionizing radiation.
The basic ballistic models
The single-target single-hit model
In this model a single hit on a target is sufficient to kill a cell. The equation used for this model is as follows:
Where k represents a hit on the cell and m represents the mass of the cell.
The n-target single-hit model
In this model the cell has a number of targets n. A single hit on one target is not sufficient to kill the cell but does disable the target. An accumulation of successful hits on various targets leads to cell death. The equation used for this model is as follows:
S(D) = 1 − (1 − exp(−kD))^n
Where n represents the number of targets in the cell.
The linear quadratic model
The equation used for this model is as follows:
S(D) = exp(−(αD + βD²))
where the αD term represents lethal lesions made by one-particle tracks, the βD² term represents lethal lesions made by two-particle tracks, and S(D) represents the probability of survival of the cell.
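As a numerical illustration of the linear quadratic survival form S(D) = exp(−(αD + βD²)), the sketch below tabulates survival for illustrative (not measured) α and β values; the quadratic two-track term makes the curve bend increasingly downward at higher doses.

```python
import math

def lq_survival(dose, alpha=0.15, beta=0.05):
    """Linear quadratic survival fraction S(D) = exp(-(alpha*D + beta*D**2)).
    alpha and beta are illustrative placeholder coefficients."""
    return math.exp(-(alpha * dose + beta * dose ** 2))

for d in [0, 1, 2, 4, 8]:
    print(f"D = {d}: S(D) = {lq_survival(d):.4f}")
# S(0) = 1, and each increment of dose costs proportionally more survival
# because of the beta*D**2 (two-track) term.
```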
The three lambda model
This model showed the accuracy of survival description for higher or repeated doses.
The equation used for this model is as follows:
The linear-quadratic-cubic model
The equation used for this model is as follows:
S(D) = exp(−(αD + βD² + γD³))
Sublesions hypothesis models
The repair-misrepair model
This model gives the mean number of lesions present in a cell before any repair is activated.
The equation used for this model is as follows:
where U0 represents the yield of initially induced lesions, λ is the linear self-repair coefficient, and T is time
The lethal-potentially lethal model
This equation explores the hypothesis that a lesion becomes lethal within a given period of time if it is not repaired by repair enzymes.
The equation used for this model is as follows:
T is the radiation duration and tr is the available repair time.
The saturable repair model
This model illustrates the efficiency of the repair system decreasing as the dosage of radiation increases. This is due to the repair kinetics becoming increasingly saturated with the increase in radiation dosage.
The equation used for this model is as follows:
n(t) is the number of unrepaired lesions, c(t) is the number of repair molecules or enzymes, k is the proportionality coefficient, and T is the time available for repair.
Cellular environment and radiation hormesis
Radiation hormesis
Hormesis is the hypothesis that low levels of a disrupting stimulus can cause beneficial adaptations in an organism. Low doses of ionizing radiation can stimulate repair proteins that are usually not active, and cells may use this new stimulus to adapt to the stressors they are being exposed to.
Radiation-Induced Bystander Effect (RIBE)
In biology, the bystander effect is described as changes to nearby non-targeted cells in response to changes in an initially targeted cell by some disrupting agent. In the case of Radiation-Induced Bystander Effect, the stress on the cell is caused by ionizing radiation.
The bystander effect can be broken down into two categories: the long range bystander effect and the short range bystander effect. In the long range bystander effect, the effects of stress are seen further away from the initially targeted cell. In the short range bystander effect, the effects of stress are seen in cells adjacent to the target cell.
Both low linear energy transfer and high linear energy transfer photons have been shown to produce RIBE. Low linear energy transfer photons were reported to cause increases in mutagenesis and a reduction in the survival of cells in clonogenic assays. X-rays and gamma rays were reported to cause increases in DNA double-strand breaks, methylation, and apoptosis. Further studies are needed to reach a conclusive explanation of any epigenetic impact of the bystander effect.
Radiation and oxidative stress
Formation of ROS
Ionizing radiation produces fast-moving particles which have the ability to damage DNA and produce highly reactive free radicals known as reactive oxygen species (ROS). The production of ROS in cells irradiated by LDIR (low-dose ionizing radiation) occurs in two ways: by the radiolysis of water molecules or by the promotion of nitric oxide synthase (NOS) activity. The resulting nitric oxide reacts with superoxide radicals. This generates peroxynitrite, which is toxic to biomolecules. Cellular ROS is also produced with the help of a mechanism involving nicotinamide adenine dinucleotide phosphate (NADPH) oxidase. NADPH oxidase helps with the formation of ROS by generating a superoxide anion by transferring electrons from cytosolic NADPH across the cell membrane to the extracellular molecular oxygen. This process increases the potential for leakage of electrons and free radicals from the mitochondria. Exposure to LDIR induces electron release from the mitochondria, resulting in more electrons contributing to superoxide formation in the cells.
The production of ROS in high quantities in cells results in the degradation of biomolecules such as proteins, DNA, and RNA. In one such instance, ROS are known to create double-stranded and single-stranded breaks in the DNA. This causes the DNA repair mechanisms to try to adapt to the increase in DNA strand breaks. Heritable changes have been observed even though the DNA nucleotide sequence appears unchanged after exposure to LDIR.
Activation of NOS
The formation of ROS is coupled with increased nitric oxide synthase (NOS) activity. NO reacts with the superoxide radical O2− to generate peroxynitrite (ONOO−). Peroxynitrite is a strong oxidant radical and it reacts with a wide array of biomolecules such as DNA bases, proteins, and lipids. Peroxynitrite affects the function and structure of biomolecules and therefore effectively destabilizes the cell.
Mechanism of oxidative stress and epigenetic gene regulation
Ionizing radiation causes the cell to generate increased ROS, and the increase in these species damages biological macromolecules. In order to compensate for the increased radical species, cells adapt to IR-induced oxidative effects by modifying the mechanisms of epigenetic gene regulation. There are four epigenetic modifications that can take place:
formation of protein adducts inhibiting epigenetic regulation
alteration of genomic DNA methylation status
modification of post translational histone interactions affecting chromatin compaction
modulation of signaling pathways that control transcription factor expression
ROS-mediated protein adduct formation
ROS generated by ionizing radiation chemically modify histones, which can cause a change in transcription. Oxidation of cellular lipid components results in the formation of electrophilic molecules. These electrophilic molecules bind to the lysine residues of histones, forming ketoamide adducts. Ketoamide adduct formation blocks the lysine residues of histones from binding to acetylation proteins, thus decreasing gene transcription.
ROS-mediated DNA methylation changes
DNA hypermethylation is seen in the genome with DNA breaks on a gene-specific basis, such as at the promoters of regulatory genes, but the global methylation of the genome shows a hypomethylation pattern during the period of reactive oxygen species stress.
DNA damage induced by reactive oxygen species results in increased gene methylation and ultimately gene silencing. Reactive oxygen species modify the mechanism of epigenetic methylation by inducing DNA breaks, which are later repaired and then methylated by DNMTs. DNA damage response genes, such as GADD45A, recruit the nuclear protein Np95 to direct histone methyltransferases towards the damaged DNA site. The breaks in DNA caused by ionizing radiation then recruit the DNMTs in order to repair and further methylate the repair site.
Genome wide hypomethylation occurs due to reactive oxygen species hydroxylating methylcytosines to 5-hydroxymethylcytosine (5hmC). The production of 5hmC serves as an epigenetic marker for DNA damage which is recognizable by DNA repair enzymes. The DNA repair enzymes attracted by the marker convert 5hmC to an unmethylated cytosine base resulting in the hypomethylation of the genome.
Another mechanism that induces hypomethylation is the depletion of S-adenosylmethionine (SAM). The prevalence of superoxide species causes the oxidation of reduced glutathione (GSH) to GSSG. As a result, synthesis of the cosubstrate SAM is stopped. SAM is an essential cosubstrate for the normal functioning of DNMTs and histone methyltransferase proteins.
ROS-mediated post-translation modification
Double-stranded DNA breaks caused by exposure to ionizing radiation are known to alter chromatin structure. Double-stranded breaks are primarily repaired by poly(ADP-ribose) (PAR) polymerases, which accumulate at the site of the break, leading to activation of the chromatin remodeling protein ALC1. ALC1 causes the nucleosome to relax, resulting in the epigenetic up-regulation of genes. A similar mechanism involves the ataxia telangiectasia mutated (ATM) serine/threonine kinase, an enzyme involved in the repair of double-stranded breaks caused by ionizing radiation. ATM phosphorylates KAP1, which causes the heterochromatin to relax, allowing increased transcription to occur.
The DNA mismatch repair gene (MSH2) promoter has shown a hypermethylation pattern when exposed to ionizing radiation. Reactive oxygen species induce the oxidation of deoxyguanosine into 8-hydroxydeoxyguanosine (8-OHdG), causing a change in chromatin structure. Gene promoters that contain 8-OHdG deactivate the chromatin by inducing trimethyl-H3K27 in the genome. Other enzymes, such as transglutaminases (TGs), control chromatin remodeling through proteins such as sirtuin 1 (SIRT1). TGs cause transcriptional repression during reactive oxygen species stress by binding to the chromatin and inhibiting the sirtuin 1 histone deacetylase from performing its function.
ROS-mediated loss of epigenetic imprinting
Epigenetic imprinting is lost during reactive oxygen species stress. This type of oxidative stress causes a loss of NF-κB signaling. The enhancer-blocking element CCCTC-binding factor (CTCF) binds to the imprint control region of insulin-like growth factor 2 (IGF2), preventing the enhancers from allowing transcription of the gene. NF-κB proteins normally interact with IκB inhibitory proteins, but during oxidative stress IκB proteins are degraded in the cell. With no IκB proteins left to bind, NF-κB proteins enter the nucleus and bind to specific response elements to counter the oxidative stress. The binding of NF-κB and the corepressor HDAC1 to response elements such as the CCCTC-binding factor causes a decrease in expression of the enhancer-blocking element. This decrease in expression hinders binding to the IGF2 imprint control region, causing the loss of imprinting and biallelic IGF2 expression.
Mechanisms of epigenetic modifications
After the initial exposure to ionizing radiation, cellular changes are prevalent in the unexposed offspring of irradiated cells for many cell divisions. One way this non-Mendelian mode of inheritance can be explained is through epigenetic mechanisms.
Ionizing radiation and DNA methylation
Genomic instability via hypomethylation of LINE1
Ionizing radiation exposure affects patterns of DNA methylation. Breast cancer cells treated with fractionated doses of ionizing radiation showed DNA hypomethylation at various gene loci; dose fractionation refers to breaking down one dose of radiation into separate, smaller doses. Hypomethylation of these genes correlated with decreased expression of various DNMTs and methyl-CpG-binding proteins. LINE1 transposable elements have been identified as targets for ionizing radiation. The hypomethylation of LINE1 elements results in activation of the elements and thus an increase in LINE1 protein levels. Increased transcription of LINE1 transposable elements results in greater mobilization of the LINE1 loci and therefore increases genomic instability.
Ionizing radiation and histone modification
Irradiated cells can be linked to a variety of histone modifications. Ionizing radiation in breast cancer cells inhibits H4 lysine tri-methylation. Mouse models exposed to high levels of X-ray irradiation exhibited a decrease in both the tri-methylation of H4-Lys20 and the compaction of the chromatin. With the loss of tri-methylation of H4-Lys20, DNA hypomethylation increased, resulting in DNA damage and increased genomic instability.
Loss of methylation via repair mechanisms
Breaks in DNA due to ionizing radiation can be repaired. New DNA synthesis by DNA polymerases is one of the ways radiation induced DNA damage can be repaired. However, DNA polymerases do not insert methylated bases which leads to a decrease in methylation of the newly synthesized strand. Reactive oxygen species also inhibit DNMT activity which would normally add the missing methyl groups. This increases the chance that the demethylated state of DNA will eventually become permanent.
Clinical consequences and applications
MGMT- and LINE1- specific DNA methylation
DNA methylation influences tissue responses to ionizing radiation. Modulation of methylation in the gene MGMT or in transposable elements such as LINE1 could be used to alter tissue responses to ionizing radiation, potentially opening new areas for cancer treatment.
MGMT serves as a prognostic marker in glioblastoma. Hypermethylation of MGMT is associated with the regression of tumors. Hypermethylation of the MGMT promoter silences its transcription, which blocks the repair of the DNA damage that alkylating agents rely on to kill tumor cells, thereby sensitizing tumors to such agents. Studies have shown that patients who received radiotherapy, but no chemotherapy, after tumor extraction had an improved response to radiotherapy due to the methylation of the MGMT promoter.
Almost all human cancers include hypomethylation of LINE1 elements. Various studies indicate that the hypomethylation of LINE1 correlates with a decrease in survival after both chemotherapy and radiotherapy.
Treatment by DNMT inhibitors
DNMT inhibitors are being explored in the treatment of malignant tumors. Recent in-vitro studies show that DNMT inhibitors can increase the effects of other anti-cancer drugs. Knowledge of the in-vivo effects of DNMT inhibitors is still being developed, and their long-term effects remain unknown.
References
Radiation health effects
Cancer epigenetics | Biological effects of radiation on the epigenome | [
"Chemistry",
"Materials_science"
] | 3,933 | [
"Radiation effects",
"Radiation health effects",
"Radioactivity"
] |
56,379,561 | https://en.wikipedia.org/wiki/Regular%20distribution%20%28economics%29 | Regularity, sometimes called Myerson's regularity, is a property of probability distributions used in auction theory and revenue management. Examples of distributions that satisfy this condition include Gaussian, uniform, and exponential; some power law distributions also satisfy regularity.
Distributions that satisfy the regularity condition are often referred to as "regular distributions".
Definitions
Two equivalent definitions of regularity appear in the literature.
Both are defined for continuous distributions, although analogs for discrete distributions have also been considered.
Concavity of revenue in quantile space
Consider a seller auctioning a single item to a buyer with random value v. For any price p set by the seller, the buyer will buy the item if v ≥ p. The seller's expected revenue is p · Pr[v ≥ p]. We define the revenue function R as follows:
R(q) is the expected revenue the seller would obtain by choosing a price p such that Pr[v ≥ p] = q.
In other words, R(q) is the revenue that can be obtained by selling the item with (ex-ante) probability q.
Finally, we say that a distribution is regular if R is a concave function.
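This definition can be checked for a concrete distribution. The sketch below assumes an exponential value distribution with rate λ = 2, for which the quantile-space revenue is R(q) = q · F⁻¹(1 − q) = −q ln(q)/λ, and runs a simple midpoint test for concavity:

```python
import math

lam = 2.0  # rate of an assumed exponential value distribution F(v) = 1 - exp(-lam*v)

def revenue(q):
    """R(q) = q * F^{-1}(1 - q); for the exponential, F^{-1}(1 - q) = -ln(q) / lam."""
    return -q * math.log(q) / lam

# Midpoint concavity test on a grid of quantiles in (0, 1):
# R((a + b) / 2) >= (R(a) + R(b)) / 2 must hold for a concave function.
qs = [i / 100 for i in range(1, 100)]
for a, b in zip(qs, qs[2:]):
    assert revenue((a + b) / 2) >= (revenue(a) + revenue(b)) / 2 - 1e-12
print("R passes the midpoint concavity test, so this exponential is regular")
```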
Monotone virtual valuation
For a cumulative distribution function F(v) and corresponding probability density function f(v), the virtual valuation of the agent is defined as
φ(v) = v − (1 − F(v)) / f(v).
The valuation distribution is said to be regular if φ is a monotone non-decreasing function.
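This second definition can also be checked numerically. The sketch below (again assuming an exponential value distribution with rate λ) computes the virtual valuation φ(v) = v − (1 − F(v))/f(v) on a grid and verifies monotonicity; for the exponential, φ(v) = v − 1/λ, which is clearly non-decreasing.

```python
import math

def virtual_valuation(v, F, f):
    """Myerson virtual valuation: phi(v) = v - (1 - F(v)) / f(v)."""
    return v - (1.0 - F(v)) / f(v)

lam = 2.0  # assumed rate of an exponential value distribution
F = lambda v: 1.0 - math.exp(-lam * v)
f = lambda v: lam * math.exp(-lam * v)

vs = [0.1 * i for i in range(1, 31)]
phis = [virtual_valuation(v, F, f) for v in vs]
assert all(a <= b for a, b in zip(phis, phis[1:]))  # monotone -> regular
print("exponential(lam=2) is regular; phi(1.0) =", round(phis[9], 4))  # v - 1/lam
```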
Applications
Myerson's auction
An important special case considered by Myerson (1981) is the problem of a seller auctioning a single item to one or more buyers whose valuations for the item are drawn from independent distributions.
Myerson showed that the problem of the seller truthfully maximizing her profit is equivalent to maximizing the "virtual social welfare", i.e. the expected virtual valuation of the bidder who receives the item.
When the bidders' valuation distributions are regular, the virtual valuations are monotone in the real valuations, which implies that the transformation to virtual valuations is incentive compatible.
Thus a Vickrey auction can be used to maximize the virtual social welfare (with additional reserve prices to guarantee non-negative virtual valuations).
When the distributions are irregular, a more complicated ironing procedure is used to transform them into regular distributions.
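The regular case can be made concrete with a small, illustrative simulation. For valuations uniform on [0, 1], the virtual valuation is φ(v) = 2v − 1, so the revenue-maximizing reserve price solves φ(r) = 0, giving r = 1/2. The Monte Carlo sketch below compares a two-bidder second-price auction with and without that reserve; the simulated revenues should be close to the known values 1/3 and 5/12.

```python
import random

def second_price_revenue(bids, reserve=0.0):
    """Revenue of a Vickrey (second-price) auction with a reserve price."""
    qualified = sorted((b for b in bids if b >= reserve), reverse=True)
    if not qualified:
        return 0.0                      # item goes unsold
    if len(qualified) == 1:
        return reserve                  # lone qualified bidder pays the reserve
    return max(reserve, qualified[1])   # winner pays max(reserve, second bid)

random.seed(0)
trials = [[random.random(), random.random()] for _ in range(100_000)]
plain = sum(second_price_revenue(t) for t in trials) / len(trials)
reserved = sum(second_price_revenue(t, reserve=0.5) for t in trials) / len(trials)
print(f"no reserve: {plain:.3f} (~1/3), reserve 0.5: {reserved:.3f} (~5/12)")
```

The lift from roughly 1/3 to roughly 5/12 is the gain the optimal reserve delivers in this particular uniform special case.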
Prior-independent mechanism design
Myerson's auction mentioned above is optimal if the seller has an accurate prior, i.e. a good estimate of the distribution of valuations that bidders can have for the item.
Obtaining such a good prior may be highly non-trivial, or even impossible.
Prior-independent mechanism design seeks to design mechanisms for sellers (and agents in general) who do not have access to such a prior.
Regular distributions are a common assumption in prior-independent mechanism design.
For example, the seminal result of Bulow and Klemperer (1996) proved that if bidders' valuations for a single item are regular and i.i.d. (or identical and affiliated),
the revenue obtained from selling with an English auction to n + 1 bidders dominates the revenue obtained from selling with any mechanism (in particular, Myerson's optimal mechanism) to n bidders.
Notes
References
Sources
Auction theory
Mathematical finance
Probability distributions | Regular distribution (economics) | [
"Mathematics"
] | 600 | [
"Functions and mappings",
"Probability distributions",
"Applied mathematics",
"Mathematical objects",
"Auction theory",
"Game theory",
"Mathematical relations",
"Mathematical finance"
] |
56,379,776 | https://en.wikipedia.org/wiki/Ecological%20Modelling | Ecological Modelling is a monthly peer-reviewed scientific journal covering the use of ecosystem models in the field of ecology. It was founded in 1975 by Sven Erik Jørgensen and is published by Elsevier. The current editor-in-chief is Brian D. Fath (Towson University). According to the Journal Citation Reports, the journal has a 2016 impact factor of 2.363.
References
External links
Ecology journals
Elsevier academic journals
Monthly journals
Academic journals established in 1975
English-language journals | Ecological Modelling | [
"Environmental_science"
] | 102 | [
"Environmental science journals",
"Ecology journals"
] |
56,379,870 | https://en.wikipedia.org/wiki/List%20of%20triple%20tautonyms | The following is a list of triple tautonyms: zoological names of species consisting of three identical words (the generic name, the specific name and the subspecies have the same spelling). Such names are allowed in zoology, but not in botany, where the generic and specific epithets of a species must differ (though differences as small as one letter are permitted, as in cumin, Cuminum cyminum).
List
Alces alces alces, the European elk
Alle alle alle, a subspecies of little auk
Anser anser anser, a subspecies of greylag goose
Bison bison bison, the plains bison
Bubo bubo bubo, the European eagle-owl
Bufo bufo bufo, the European toad
Buteo buteo buteo, the European buzzard
Capreolus capreolus capreolus, the European roe deer
Caracal caracal caracal, a subspecies of caracal
Cardinalis cardinalis cardinalis, a subspecies of northern cardinal
Caretta caretta caretta, the Atlantic loggerhead sea turtle
Ciconia ciconia ciconia, a subspecies of white stork
Crossoptilon crossoptilon crossoptilon, the Szechuan white-eared pheasant
Francolinus francolinus francolinus, a subspecies of black francolin
Gallus gallus gallus, the Cochin-Chinese red junglefowl
Giraffa giraffa giraffa, the South African giraffe
Gorilla gorilla gorilla, the western lowland gorilla
Jacana jacana jacana, the wattled jacana
Lagopus lagopus lagopus, the willow ptarmigan
Lutra lutra lutra, the European and North African variant of the Eurasian otter
Lynx lynx lynx, the Northern European lynx
Meles meles meles, the European badger
Mephitis mephitis mephitis, the Canada striped skunk
Naja naja naja, the Indian cobra
Natrix natrix natrix, the Central European variant of the grass snake
Pica pica pica, a subspecies of Eurasian magpie
Quelea quelea quelea, a subspecies of red-billed quelea
Rattus rattus rattus, the roof rat
Redunca redunca redunca, the Bohor reedbuck
Rupicapra rupicapra rupicapra, the Alpine chamois
Sula sula sula, the Caribbean and Southwest Atlantic Islands variant of the red-footed booby
Vulpes vulpes vulpes, the Scandinavian red fox
See also
List of tautonyms
Notes
References
Wolpow, E. R. (1983). Triple Tautonyms in Biology. Word Ways, 16(2), 10.
Zoological nomenclature
Lists of animals | List of triple tautonyms | [
"Biology"
] | 567 | [
"Zoological nomenclature",
"Animals",
"Lists of biota",
"Biological nomenclature",
"Lists of animals"
] |
56,380,195 | https://en.wikipedia.org/wiki/Browser%20isolation | Browser isolation is a cybersecurity model which aims to physically isolate an internet user's browsing activity (and the associated cyber risks) away from their local networks and infrastructure. Browser isolation technologies approach this model in different ways, but they all seek the same goal: effective isolation of the web browser and a user's browsing activity as a method of securing web browsers from browser-based security exploits, as well as from web-borne threats such as ransomware and other malware. When a browser isolation technology is delivered to customers as a cloud-hosted service, it is known as remote browser isolation (RBI), a model which enables organizations to deploy a browser isolation solution to their users without managing the associated server infrastructure. There are also client-side approaches to browser isolation, based on client-side hypervisors, which do not depend on servers to isolate their users' browsing activity and the associated risks; instead, the activity is virtually isolated on the local host machine. Client-side solutions break the security-through-physical-isolation model, but they allow the user to avoid the server overhead costs associated with remote browser isolation solutions.
Mechanism
Browser isolation typically leverages virtualization or containerization technology to isolate the user's web browsing activity away from the endpoint device, significantly reducing the attack surface for rogue links and files. It isolates web-browsing hosts and other high-risk behaviors away from mission-critical data and infrastructure, physically separating a user's browsing activity from local networks and containing malware and browser-based cyber-attacks in the process, while still granting the user full access to web content.
Market
In 2017, the American research group Gartner identified remote browser (browser isolation) as one of the top technologies for security. The same Gartner report also forecast that more than 50% of enterprises would actively begin to isolate their internet browsing to reduce the impact of cyber attacks over the coming three years.
According to Market Research Media, the remote browser isolation (RBI) market is forecast to reach $10 billion by 2024, growing at a CAGR of 30% in the period 2019–2024.
Comparison to other techniques
Unlike traditional web security approaches such as antivirus software and secure web gateways, browser isolation is a zero-trust approach which does not rely on filtering content based on known threat patterns or signatures. Traditional approaches cannot handle zero-day attacks, since the threat patterns are unknown. Instead, the browser isolation approach treats all websites and other web content that has not been explicitly whitelisted as untrusted, and isolates them from the local device in a virtual environment such as a container or virtual machine.
Web-based files can be rendered remotely so that end users can access them within the browser, without downloading them. Alternatively, files can be sanitized within the virtual environment, using file cleansing technologies such as Content Disarm & Reconstruction (CDR), allowing for secure file downloads to the user device.
Effectiveness
Typically, browser isolation solutions provide their users with 'disposable' (non-persistent) browser environments; once the browsing session is closed or times out, the entire browser environment is reset to a known good state or simply discarded. Any malicious code encountered during that session is thus prevented from reaching the endpoint or persisting within the network, regardless of whether any threat is detected. In this way, browser isolation proactively combats known, unknown, and zero-day threats, effectively complementing other security measures and contributing to a defense-in-depth, layered approach to web security.
History
Browser isolation began as an evolution of the 'security through physical isolation' cybersecurity model and is also known as the air-gap model by security professionals, who have been physically isolating critical networks, users and infrastructures for cybersecurity purposes for decades. Although techniques to breach 'air-gapped' IT systems exist, they typically require physical access or close proximity to the air-gapped system in order to be effective. The use of an air-gap makes infiltration into systems from the public internet extremely difficult, if not impossible, without physical access to the system. The first commercial browser isolation platforms were leveraged by the National Nuclear Security Administration at Lawrence Livermore National Laboratory, Los Alamos National Laboratory and Sandia National Laboratories in 2009, when browser isolation platforms based on virtualization were used to deliver non-persistent virtual desktops to thousands of federal government users.
In June 2018, the Defense Information Systems Agency (DISA) announced a request for information for a "cloud-based internet isolation" solution as part of its endpoint security portfolio. As the RFI puts it, "the service would redirect the act of internet browsing from the end user’s desktop into a remote server, external to the Department of Defense Information Network." At the time, the RFI was the largest known project for browser isolation, seeking "a cloud based service leveraging concurrent (simultaneous) use licenses at ~60% of the total user base (3.1 Million users)."
See also
Malware
Internet safety
Browser security
Antivirus software
References
Web browsers
Web security exploits
Internet security | Browser isolation | [
"Technology"
] | 1,059 | [
"Computer security exploits",
"Web security exploits"
] |
56,381,434 | https://en.wikipedia.org/wiki/Burkard%20Polster | Burkard Polster (born 26 February 1965 in Würzburg) is a German mathematician who runs and presents the Mathologer channel on YouTube. He is a professor of mathematics at Monash University in Melbourne, Australia.
Education and career
Polster earned a doctorate from the University of Erlangen–Nuremberg in 1993 under the supervision of Karl Strambach. Other universities that Polster has been affiliated with, before joining Monash University in 2000, include the University of Würzburg, University at Albany, University of Kiel, University of California, Berkeley, University of Canterbury, and University of Adelaide.
Polster's research involves topics in geometry, recreational mathematics, and the mathematics of everyday life, including how to tie shoelaces or stabilize a wobbly table.
Books
Polster is the author of multiple books including:
Included in the four-book compilation Scientia: Mathematics, Physics, Chemistry, Biology, and Astronomy for All (2011) and translated into German as Sciencia: Mathematik, Physik, Chemie, Biologie und Astronomie für alle verständlich (Librero, 2014, in German).
References
External links
Mathologer, Polster's YouTube site
Maths Masters, Burkard Polster and Marty Ross
Australian mathematicians
20th-century German mathematicians
Recreational mathematicians
University of Erlangen-Nuremberg alumni
Academic staff of Monash University
Living people
Science-related YouTube channels
1965 births
Mathematics popularizers
21st-century German mathematicians
English-language YouTube channels | Burkard Polster | [
"Mathematics"
] | 308 | [
"Recreational mathematics",
"Recreational mathematicians"
] |
56,382,262 | https://en.wikipedia.org/wiki/Kestrel%20Institute | The Kestrel Institute is a nonprofit computer science research center located in Palo Alto's Stanford Research Park. Cordell Green, who founded Kestrel in 1981, is its Director and Chief Scientist. Its mission is to make it easier to write good, high-quality software and employs computer scientists like Lambert Meertens.
In the 1980s, Kestrel described its research focus as "knowledge-based software environments" to make it easier to write software ("normalize and mechanize the programming process"). In addition, a 2002 MIT Technology Review article described one of Kestrel's projects as a way to "almost force coders to write reliable programs". A 2005 Newsweek article discussed one Kestrel technology that developed software to help the U.S. military schedule cargo deployment by "translating a description of a problem into guidelines a computer can understand".
Nearly all of Kestrel's funding comes from government grants, from organizations such as the U.S. Department of Defense, DARPA, Intelligence Advanced Research Projects Activity (IARPA), Air Force Research Laboratory (AFRL), AFOSR, Office of Naval Research (ONR), NASA, and the National Science Foundation (NSF). In 2015, it received $4.9 million in grants and contributions, down from the previous year's $6.6 million.
References
External links
Computer science organizations
Computer science research organizations
Artificial intelligence associations
Science and technology think tanks
Organizations based in Palo Alto, California | Kestrel Institute | [
"Technology"
] | 303 | [
"Computer science",
"Computer science organizations"
] |
56,383,698 | https://en.wikipedia.org/wiki/Flufenoxuron | Flufenoxuron is an insecticide that belongs to the benzoylurea chitin synthesis inhibitor group, which also includes diflubenzuron, triflumuron, and lufenuron.
Flufenoxuron is a white crystalline powder. It is insoluble in water, is not flammable, and is not an oxidizer.
Toxicology and safety
Flufenoxuron toxicity to humans and other mammals is low, but it bioaccumulates strongly in fish.
References
Insecticides
2-Fluorophenyl compounds
Ureas
2-Chlorophenyl compounds
Phenol ethers
Trifluoromethyl compounds | Flufenoxuron | [
"Chemistry"
] | 141 | [
"Organic compounds",
"Ureas"
] |
56,385,031 | https://en.wikipedia.org/wiki/Camidanlumab%20tesirine | Camidanlumab tesirine (Cami-T or ADCT-301) is an antibody-drug conjugate (ADC) composed of a human antibody that binds to the protein CD25, conjugated to a pyrrolobenzodiazepine dimer toxin.
The experimental drug, developed by ADC Therapeutics is being tested in clinical trials for the treatment of B-cell Hodgkin's lymphoma (HL) and non-Hodgkin lymphoma (NHL), and for the treatment of B-cell acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML).
Technology
The human monoclonal antibody is conjugated via a cleavable linker to a cytotoxic (anticancer) pyrrolobenzodiazepine (PBD) dimer. The antibody binds to CD25, which is the alpha chain of the interleukin 2 receptor IL2RA. This molecule is expressed mainly on activated T- and B-cells, both of which are types of white blood cells that play a role in the human immune system. CD25 is over-expressed in a wide range of hematological malignancies, such as leukemias and lymphomas. After binding to a CD25-expressing cell, the antibody is internalized into the cell, where enzymes release the cytotoxic drug. PBD dimers work by crosslinking specific sites of the DNA, blocking the cancer cells’ division and causing the cells to die. As a class of DNA-crosslinking agents they are significantly more potent than systemic chemotherapeutic drugs.
Clinical trials
Two phase I trials are evaluating the drug in patients with relapsed or refractory Hodgkin’s and non-Hodgkin’s lymphoma and relapsed or refractory CD25-positive acute myeloid leukemia or acute lymphoblastic leukemia. At the 59th American Society of Hematology (ASH) Annual Meeting, interim results were presented from a Phase I, open-label, single-agent dose-escalation study designed to evaluate camidanlumab tesirine in relapsed or refractory Hodgkin’s or non-Hodgkin’s lymphoma. Among the patients enrolled at the time of the data cutoff, the overall response rate was 78% (including a 44% complete response rate) in patients with relapsed or refractory Hodgkin’s lymphoma.
References
Experimental cancer drugs
Antibody-drug conjugates | Camidanlumab tesirine | [
"Biology"
] | 567 | [
"Antibody-drug conjugates"
] |
56,385,719 | https://en.wikipedia.org/wiki/HD%20112014 | HD 112014 is a star system in the northern constellation of Camelopardalis. It is dimly visible as a point of light with an apparent visual magnitude of 5.92. The distance to this system is approximately 406 light years based on parallax measurements.
The stars HD 112028 and HD 112014 were identified as a double star by F. G. W. Struve in 1820, and are listed as WDS 12492+8325 A and B, respectively, in the Washington Double Star Catalog. The binary nature of component B, or HD 112014, was discovered by J. S. Plaskett in 1919. It is a double-lined spectroscopic binary with an orbital period of 3.29 days and an eccentricity (ovalness) of 0.04. They are separated by . Both components are A-type main-sequence stars.
References
A-type main-sequence stars
Spectroscopic binaries
Triple stars
Camelopardalis
Durchmusterung objects
112014
062561
4892 | HD 112014 | [
"Astronomy"
] | 220 | [
"Camelopardalis",
"Constellations"
] |
56,389,209 | https://en.wikipedia.org/wiki/Flick%20%28time%29 | A flick is a unit of time equal to exactly 1/705,600,000 of a second. The figure was chosen so that time periods associated with frequencies commonly used for video or screen frame rate (24, 25, 30, 48, 50, 60, 90, 100 and 120 Hz), as well as audio sampling (8, 16, 22.05, 24, 32, 44.1, 48, 88.2, 96, and 192 kHz), can all be represented nicely with integers. That is useful in programming, because non-integer computing generally involves approximations, and possibly leads to noticeable errors.
A flick is approximately 1.42 × 10⁻⁹ s, which makes it larger than a nanosecond but much smaller than a microsecond.
The unit was launched in January 2018 by Facebook. A similar unit for integer representation of temporal points was proposed in 2004 under the name TimeRef, splitting a second into 14,112,000 parts. This makes 1 TimeRef equivalent to 50 flicks.
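The divisibility property described above can be checked directly. The following Python sketch is illustrative only (the constant name is an arbitrary choice, not part of any standard API); it verifies that every listed frame rate and sample rate, as well as the TimeRef unit, divides one second's worth of flicks evenly:

```python
FLICKS_PER_SECOND = 705_600_000  # 1 flick = 1/705,600,000 s

# Frame rates (Hz) and audio sample rates (Hz) cited above
rates = [24, 25, 30, 48, 50, 60, 90, 100, 120,
         8_000, 16_000, 22_050, 24_000, 32_000,
         44_100, 48_000, 88_200, 96_000, 192_000]

# Each rate divides the flick count evenly, so the duration of one
# frame or one sample is an exact integer number of flicks.
for r in rates:
    assert FLICKS_PER_SECOND % r == 0

print(FLICKS_PER_SECOND // 24)          # one 24 fps frame: 29400000 flicks
print(FLICKS_PER_SECOND // 44_100)      # one 44.1 kHz sample: 16000 flicks

# TimeRef splits a second into 14,112,000 parts, i.e. 1 TimeRef = 50 flicks
print(FLICKS_PER_SECOND // 14_112_000)  # 50
```

Because every period is an exact integer, durations can be accumulated without the rounding drift that floating-point arithmetic would introduce.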
Etymology
The word flick is a portmanteau of frame (as in e.g. animation frame) and tick (as in computer instruction cycle).
References
External links
YouTube video: Why Did Facebook Invent A New Unit Of Time? The "Flick" Explained With Math.
Units of time
Standards | Flick (time) | [
"Physics",
"Mathematics"
] | 275 | [
"Physical quantities",
"Time",
"Time stubs",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
56,389,250 | https://en.wikipedia.org/wiki/Social%20media%20coverage%20of%20the%20Olympics | Over the years, television broadcast rights have distinguished what Olympic-related content can be accessed by fans online. By doing so, mobile-friendly social platforms began to integrate into the Olympics. Athletes and fans use these platforms to share live updates, special moments, and behind-the-scenes specials.
The rise of social video and social broadcasting technology has provided a huge opportunity for Olympic athletes, teams, and sponsors to bring coverage to fans like never before. Olympic-related content has infiltrated all forms of social media, including Snapchat, Instagram, Twitter, and Facebook. It is expected to expand in the coming years.
Background
The Olympics is able to advertise to its viewers and its host country with the use of data it collects through social media marketing. Prominent social media platforms include: Twitter, Facebook, Instagram, Tumblr, YouTube, Google, MSN, Yahoo and many more. Campaign initiatives and artificial intelligence technologies have been used to analyze the social media content of users. Information from consumers, such as their preferences, demographics, age and locality, is analyzed to gain consumer insight. Campaign initiatives and AI technologies were used for such purposes in the 2010 Vancouver Winter Olympics and are still in use. Social media marketing of the Olympics is a relatively new phenomenon, having begun prior to the 2008 Beijing Olympics.
Variations
There are two classifications of social media marketing recognized by the IOC:
Officially sanctioned content from rights holders and sponsors that maximizes the use of Olympic content (imagery, hashtag)
Unofficial content that is generated by brands that leverage the excitement of the Olympics
2008 Beijing Summer Olympics
Social media marketing emerged as a phenomenon during the 2008 Beijing Olympics and has progressed as a marketing and advertising tactic ever since. The Beijing Olympics became the test subject for social media marketing initiatives started by advertising agencies. In 2008, social media marketing began the transition from one-sided communication to mass communication of the Olympic Games. Although social media marketing of the Olympic Games began in 2008, the Olympic audience was still primarily reached through television, which drew 4.3 billion viewers. At the time, viewers of the Olympic Games on Internet platforms made up an audience of approximately 390 million individuals.
The beginning of Olympic social media marketing was also the beginning of a more globalized experience of the Olympic Games via social media. Twitter, now a prominent social media platform, launched in 2006 and grew to three million active users by the beginning of the 2008 Beijing Olympics. The number of Facebook members tracking the Olympic Games grew from approximately one million during the Athens 2004 Olympic Games to 90 million during the 2008 Beijing Olympics. Social media use, in general, increased by 24 percentage points between 2007 and 2008, from 63 percent of U.S. adults to 87 percent of U.S. adults.
2010 Vancouver Winter Olympics
The International Olympic Committee (IOC) deemed the Vancouver Winter Olympics “the first social media games” based on its fan base through social media platforms. The IOC launched their Facebook page a month before the games began, attracting 1.5 million fans. Shifting to online viewing attracted a younger audience than past Olympic games, with over 60 percent of Facebook fans being under 24 years of age.
Athletes like Lindsey Vonn and Shaun White reached fans on social media as the platform posted behind-the-scenes coverage on their experiences. The IOC used social media to create competitions between athletes and fans streamed online. Its YouTube channel hosted a “Best of Us” challenge in which the public could compete in games with their favorite athletes, acquiring three million viewers. Photos spread across social media platforms, such as Flickr, which had 11,000 photos posted by 600 photographers, bringing a new perspective to the games.
Twitter contributed constant live updates of the competitions. The IOC's Twitter following doubled to 12,000 followers during the Vancouver Olympics, creating a larger viewer population for the games. The IOC created social media guidelines as more athletes and fans got online to interact with the Olympics. Social media was still relatively new as a marketing platform, so these guidelines confused many individuals.
2012 London Summer Olympics
The London 2012 Olympic Games succeeded in broadcasting, participation and marketing. For the first time, the IOC broadcast the Olympic Games live and on-demand through YouTube, allowing fans to access the Games anytime, anywhere through live streaming. The combination of conventional broadcasting and mobile platforms reached a global audience of 4.8 billion people.
Social media soared with Facebook, Twitter and Google+, attracting 4.7 million followers. Athletes shared photographs, interacted online with fans and updated daily, either in person or via an agent. Instagram was established by 2012, making itself a premier photo-sharing platform perfect for athletes to capture their emotions. Lewis Wiltshire, head of sport for Twitter UK, said, "Never before have fans had such direct access to their sporting heroes."
Social media created conversation on fan opinions regarding athletes, including 962,756 total mentions of Usain Bolt, “Fastest Man in History,” who defended the 100 meter and 200 meter gold medals. Michael Phelps followed with 828,081 total mentions.
Olympic sponsors were active on social media, creating several campaigns to promote their brands and inspiring viewers with mass participation and personalized events. The Adidas “Take the Stage” campaign recognized talent around the world, installing a photo booth and inviting the 550 Olympic athletes to take the stage (IOC Marketing Report 2012). David Beckham surprised fans at the photo booth in Westfield shopping centre, gaining popularity in UK media. Coca-Cola, Acer Inc., McDonald's, Visa Inc. and several others used similar tactics of participation to attract viewers.
2014 Sochi Winter Olympics
Channels
The 2014 Winter Olympic Games were held in Sochi, a city in Krasnodar Krai, Russia, establishing the first “social media Olympics” for Russia. The most popular Russian social media and networking service, VK, created an Olympic page, similar to Facebook's. The Olympic VK page has 2.8 million fans, making it the most popular official community on the platform. Throughout the games, VK had 54 million Olympic mentions, an average of 1.5 million per day.
Numbers grew on other social media pages: more than 2 million fans joined the Olympic Facebook page, 168,101 followed the Olympic Twitter, 150,000 followed the Olympic Instagram and three million visited the Olympic website in February 2014. There were 90,000 total updates on social media by Sochi 2014 Olympians and teams. The United States was the most active country during the games logging 22,598 posts across Facebook, Twitter, and Instagram.
Engagement
Social media also brought hashtags. The most popular hashtag was #sochi2014, with almost 11,000 uses. The next top five hashtags were #wearewinter, #teamusa, #olympics, #goaus and #wirfuerD. Another popular hashtag was #Sochiproblems, depicting local struggles. Photos of the poor state of Sochi on all platforms made the games the number one trending topic one week before the opening ceremony. #SochiFail and #SochiProblems gave multiple reports of the poor living arrangements, incomplete construction, broken elevators, and polluted waters. This was one way that social media provided awareness to its users.
Media Perceptions
Media perceptions varied during the games; the Olympics was viewed as a confrontation between Eastern and Western civilizations. The LGBT community took a stand against the games. Sponsors of the games, including Coca-Cola, McDonald's, and P&G, protested against Russian authorities and Russian anti-LGBT laws. Many protests took a stand against Russian laws, which created a discussion among human rights advocates. Advocates believed organizations should not promote certain values in western markets while supporting an anti-human rights government in another market.
2016 Rio Summer Olympics
Social media marketing was an influential tool in the promotion and analysis of the 2016 Rio Olympics. Thomas Bach, President of the International Olympic Committee, said that the power of sport demonstrates that diversity and interconnectedness can enlighten us all. With over 25,000 sources of accredited media covering the games, the 2016 games were the most consumed Olympic games to date. Marketing for the Rio Olympics began in 2013 and ultimately lasted 3 years. There were 26 million visits to Olympic.org, the official website of the Olympic Games, and over 7 billion views of official Olympic content on social media. There were over 270 digital media platforms covering the games and active engagement throughout various social media platforms including Facebook and Twitter.
Twitter saw over 187 million tweets inspired by the Rio Olympics, a 24.6 percent increase from the 2012 Summer Olympics. The 187 million tweets generated nearly 75 billion Twitter impressions. It is important to note that throughout this same period, Twitter nearly doubled in size in terms of users, suggesting that there could have been an even larger growth in Twitter engagement. The hashtag, #Rio2016 was official content created by the IOC to generate buzz and encourage engagement.
Facebook saw 277 million people interact 1.5 billion times regarding the 2016 games. Compared to the London games, which saw 116 million posts, this was a significant increase in interaction. Facebook also used the Rio Olympics to debut its Sponsored Photo Frame feature as an alternative method of advertising revenue and marketing outreach. The idea behind this innovation was to present a new advertising opportunity for the platform and to encourage alternative interaction, in a similar way to how Snapchat uses paid and sponsored geographic filters.
The Rio Olympics was the first Olympics to utilize Snapchat as a marketing tool. NBC directly partnered with Snapchat and BuzzFeed to promote the 2016 games.
2020 Tokyo Summer Olympics
The 2020 Tokyo Summer Olympics were postponed to 2021 due to the COVID-19 pandemic. Olympic fans took to social media to discuss and manage their emotions regarding the postponement. Some platforms explained the sequence of events that led to the postponement, while others shared historical moments. On Twitter, users used the postponement as a way to explain how dangerous the pandemic really was, as well as to bolster political arguments. The hashtag #tokyo2021 offered optimism among the negativity and fear.
References
Social media
Olympic culture | Social media coverage of the Olympics | [
"Technology"
] | 2,077 | [
"Computing and society",
"Social media"
] |
76,408,907 | https://en.wikipedia.org/wiki/Controlled%20reception%20pattern%20antenna | Controlled reception pattern antennas (CRPA) are active antennas that are designed to resist radio jamming and spoofing. They are used in navigation applications to resist GPS spoofing attacks.
References
Antennas
Satellite navigation | Controlled reception pattern antenna | [
"Engineering"
] | 44 | [
"Antennas",
"Telecommunications engineering"
] |
76,408,967 | https://en.wikipedia.org/wiki/GPS%20jamming | GPS jamming is an act of overwhelming satellite navigation receivers with powerful radio signals that drown out the signals from GPS satellites, rendering the receiver unable to calculate its position or time accurately. Such jamming can disrupt various GPS-dependent devices, from vehicle and aircraft navigation systems to precision agriculture and mobile phone networks. In civil aviation, GPS jamming can disrupt ADS-B transmission. GPS jamming is a particular type of GNSS interference.
Under ITU rules, countries are obliged to eliminate harmful interference through GPS jamming and spoofing, but the ITU lacks effective enforcement measures. The ICAO legal framework requires that countries should implement appropriate prevention and mitigation of GPS jamming and spoofing. Under the ICAO's Montreal Convention, countries shall make GPS jamming and spoofing punishable. In the United States, the operation, marketing, or sale of any GPS jamming equipment is prohibited under federal law.
Occurrences
GPS jamming may be used by countries when conducting military exercises and drills, but under IATA recommendations they should recognize the harmful impact of such jamming to civil aviation and exercise utmost caution. In civil aviation, Eurocontrol outlined two major hotspots of GPS jamming: Eastern Turkish airspace to Iraq, Iran, Armenia (extending to Armenia–Azerbaijan border) and Southern Cypriot airspace towards Egypt, Lebanon and Israel.
Following the Russian invasion of Ukraine, Russia used GPS jamming to support its military activity and in an effort to harass NATO nations. In December 2022 and January 2023, GPS jamming was noted in northern Poland, southern Sweden, southeastern Finland, Estonia and Latvia. In April 2023, Russia, to counter Ukrainian drone attacks, deployed GPS jamming in 15 of its regions, including Ivanovo, Vladimir, Yaroslavl, Ryazan, Kaluga and Tver Oblast, which surround Moscow. GPS-guided Ukrainian drones frequently target infrastructure and residential areas in Moscow and Moscow Oblast.
Countering
GPS jamming is seen as less insidious than GPS spoofing. It has been encountered on long-haul flights (particularly to Russia), and airline pilots are able to counter such jamming. Flickering readings instantly reveal GPS jamming, and there are multiple checklists to handle it.
References
External links
GPSjam.org (daily maps of GPS interference)
"Dual-Satellite Geolocation of Terrestrial GNSS Jammers from Low Earth Orbit"
See also
Electronic warfare
Radio jamming
Electronic warfare
Global Positioning System
Technology hazards | GPS jamming | [
"Technology",
"Engineering"
] | 507 | [
"Wireless locating",
"Aircraft instruments",
"nan",
"Aerospace engineering",
"Global Positioning System"
] |
76,409,788 | https://en.wikipedia.org/wiki/NGC%202144 | NGC 2144 is a spiral galaxy located in the constellation Mensa in the southern hemisphere. It was first discovered and observed by John Louis Emil Dreyer in 1888, during his efforts to update the New General Catalogue. NGC 2144 is not a Messier Object and doesn't have a Messier Number. The galaxy has been heavily documented and observed by multiple people and other organizations using telescopes.
Gallery
References
Spiral galaxies
Astronomical objects discovered in 1888
Mensa (constellation)
2144 | NGC 2144 | [
"Astronomy"
] | 97 | [
"Mensa (constellation)",
"Constellations"
] |
76,410,046 | https://en.wikipedia.org/wiki/RCW%20106 | RCW 106 is a large star-forming nebula in the Constellation Norma. RCW 106 is visible in the direction of the R103 OB association and is embedded in a 100 thousand solar mass 28 pc x 94 pc giant molecular cloud. The RCW catalogue designates the two brightest regions of this nebula as RCW 106a and RCW 106b but these do not appear to be distinct objects.
Gallery
References
H II regions
Norma (constellation) | RCW 106 | [
"Astronomy"
] | 92 | [
"Norma (constellation)",
"Constellations"
] |
76,410,556 | https://en.wikipedia.org/wiki/G299.2-2.9 | G299.2-2.9 is a supernova remnant in the Milky Way, 16,000 light years from Earth. It is the remains of a Type Ia supernova.
The observed radius of the remnant shell translates to approximately 4,500 years of expansion, making it one of the oldest observed Type Ia supernova remnants.
Description
G299.2-2.9 gives astronomers an opportunity to study how supernova remnants evolve and warp over time. G299.2-2.9 also provides a glimpse of the explosion that produced it. G299.2-2.9 is split into several distinct and different regions: an almost complete bubble interrupted only by a blow-out, a bright center, a complex "knot" region on the northeastern edge of the bubble structure and a diffuse emission extending beyond the main structure. It has been heavily documented by multiple satellites and in-orbit telescopes, including the Hubble Space Telescope, Spitzer Telescope, and Chandra.
The small X-ray emission from the deep portions of G299.2-2.9 shows large quantities of iron and silicon, which indicates that it is a remnant of a Type Ia supernova. The outer "shell" is large and complex, with a multi-shell structure. Outer shells similar to G299.2-2.9 are usually not associated with exploded stars. Since theories about Type Ia supernovae assume they go off in a specified environment, detailed studies of the outer "shell" of G299.2-2.9 have helped astronomers improve their understanding of the areas and situations where thermonuclear explosions occur.
Gallery
References
Supernovae
Supernova remnants
Musca | G299.2-2.9 | [
"Chemistry",
"Astronomy"
] | 348 | [
"Supernovae",
"Astronomical events",
"Constellations",
"Musca",
"Explosions"
] |
76,411,237 | https://en.wikipedia.org/wiki/IRAS%2013208-6020 | IRAS 13208-6020 is a preplanetary nebula in the constellation Centaurus. These nebulae are formed from material that is shed by a central star. It was first discovered and observed during the IRAS Sky Survey. This is a relatively short-lived phenomenon that gives astronomers an opportunity to watch the early stages of planetary nebula formation, hence the name protoplanetary, or preplanetary nebula.
Characteristics
IRAS 13208-6020 has a very clear bipolar form, with two very similar outflows of material in opposite directions and a dusty ring around the star. It does not shine, but is instead illuminated by light from the central star. IRAS 13208-6020 is not currently in the planetary nebula stage, and it is assumed to be very early in its lifespan.
References
Protoplanetary nebulae
Centaurus
IRAS catalogue objects | IRAS 13208-6020 | [
"Astronomy"
] | 179 | [
"Centaurus",
"Constellations"
] |
76,411,338 | https://en.wikipedia.org/wiki/Data%20memory-dependent%20prefetcher | A data memory-dependent prefetcher (DMP) is a cache prefetcher that looks at cache memory content for possible pointer values, and prefetches the data at those locations into cache if it sees memory access patterns that suggest following those pointers would be useful.
As of 2022, data prefetching was already a common feature in CPUs, but most prefetchers do not inspect the data within the cache for pointers, instead working by monitoring memory access patterns. Data memory-dependent prefetchers take this one step further.
The DMP in Apple's M1 computer architecture was demonstrated to be usable as a memory side channel in an attack published in early 2024. At that time its authors did not know of any practical way to exploit it. The DMP was subsequently discovered to be even more opportunistic than previously thought, and has since been shown to enable effective attacks on a variety of cryptographic algorithms, in work called GoFetch by its authors.
Intel Core processors also have DMP functionality (Intel uses the term "Data Dependent Prefetcher"), but Intel states that they have features to prevent their DMPs from being used for side-channel attacks. The authors of GoFetch state that they were unable to make their exploit work on Intel processors.
References
Digital circuits
Computer architecture | Data memory-dependent prefetcher | [
"Technology",
"Engineering"
] | 284 | [
"Computing stubs",
"Computers",
"Computer engineering",
"Computer architecture"
] |
76,412,112 | https://en.wikipedia.org/wiki/Z%20229-15 | Z 229-15 is a ring galaxy in the constellation Lyra. It is around 390 million light-years from Earth. It has been referred to by NASA and other space agencies as hosting an active galactic nucleus, a quasar, and a Seyfert galaxy, each of which overlap in some way.
Z 229-15 was first discovered by astronomer D. Proust of the Meudon Observatory in 1990. According to Proust, the object appeared to be an obscured spiral galaxy featuring strong signs of absorption. Additionally, Z 229-15 was observed with the 1.93-m telescope at the Observatoire de Haute-Provence.
Z 229-15's classification has been debated for many years. It has been widely called a quasar; if true, this would make Z 229-15 remarkably nearby for a quasar. Many space agencies, notably NASA, have called it a Seyfert galaxy that contains a quasar and that, by definition, hosts an active galactic nucleus. This would make Z 229-15 a very uncommon galaxy in scientific terms.
Z 229-15 has a supermassive black hole at its core. The mass of the black hole is solar masses. The interstellar matter in Z 229-15 gets so hot that it releases a large amount of energy across the electromagnetic spectrum on a regular basis.
References
Lyra
Ring galaxies
Quasars
Seyfert galaxies
62756
Astronomical objects discovered in 1990 | Z 229-15 | [
"Astronomy"
] | 307 | [
"Lyra",
"Constellations"
] |
76,412,258 | https://en.wikipedia.org/wiki/MoonBounce | MoonBounce is a UEFI firmware-based rootkit. It is linked to Chinese APT41 hacker group. MoonBounce was discovered by the researchers at Kaspersky in 2021. It can disable Windows security tools and bypass User Account Control.
The data shows that the attacks are highly targeted. It is a landmark in the evolution of UEFI rootkits and the third known malicious UEFI bootkit to be found.
Infection
Kaspersky detected the firmware rootkit in only one case, so little has been revealed about its infection method. It is believed to have been installed remotely.
The rootkit is implanted in the SPI flash memory on the motherboard. CORE_DXE, a firmware component used during the first phases of the UEFI boot sequence, is laced with the malicious code, which hooks EFI Boot Services functions and injects further malware into a svchost.exe process during boot.
Because the payload operates in memory only, it is undetectable on the hard drive.
References
Malware toolkits
Malware
Firmware
Rootkits | MoonBounce | [
"Technology"
] | 236 | [
"Malware",
"Computer security exploits"
] |
76,412,352 | https://en.wikipedia.org/wiki/K2-25 | K2-25 is a young red dwarf star located in the Hyades cluster. There is a single known Neptune-sized planet in a 3.5 day orbit.
Hyades cluster
Using proper motion measurements in a search for low-luminosity members of the Hyades cluster, William van Altena first identified the star vA 50 (later known as K2-25) as a probable cluster member. Membership in the Hyades cluster was later confirmed.
Properties
K2-25 is a red dwarf with only 26% of the Sun's mass and less than 1% of its luminosity. As a member of the Hyades cluster, it is only 650 million years old, compared to the Sun's 4.5 billion years.
There is clear evidence for starspot activity in both the Kepler data and radial velocities as well as the associated activity indicators.
Planetary system
The star has one known planet, K2-25b, with searches of the Kepler space telescope data for transits of additional planets being negative. Analysis of transit-timing variations from the Spitzer Space Telescope as well as the MEarth Project also found no evidence of additional planets.
Discovery
Brightness measurements of K2-25 taken by the Kepler space telescope during its extended K2 mission led to the discovery of the transiting planet K2-25b.
Characteristics
K2-25b is a Hot Neptune type planet in an eccentric 3.48 day orbit.
Due to its proximity to its host star and the star's activity levels, K2-25b should be losing some of its atmosphere to space; however, observations of two transits by the Hubble Space Telescope searching for escaping neutral hydrogen were negative.
Gallery
References
External links
The Extrasolar Planets Encyclopaedia entry for K2-25b
Hyades (star cluster)
M-type main-sequence stars
Planetary systems with one confirmed planet
Planetary transit variables
Taurus (constellation) | K2-25 | [
"Astronomy"
] | 398 | [
"Taurus (constellation)",
"Constellations"
] |
76,412,912 | https://en.wikipedia.org/wiki/Antrodia%20ramentacea | Antrodia ramentacea is a species of polypore fungus in the family Fomitopsidaceae, first described in 1879 by Miles Joseph Berkeley and Christopher Edmund Broome and transferred into its current genus by Marinus Anton Donk in 1966.
Distribution and habitat
It appears in North America, Europe and Asia, most often in Europe. It usually grows on dead conifer wood, mostly pine and spruce.
References
Fomitopsidaceae
Fungi described in 1879
Fungi of Asia
Fungi of Europe
Fungi of North America
Taxa named by Elias Magnus Fries
Taxa named by Christopher Edmund Broome
Fungus species | Antrodia ramentacea | [
"Biology"
] | 118 | [
"Fungi",
"Fungus species"
] |
76,414,909 | https://en.wikipedia.org/wiki/Immunoliposome%20therapy | Immunoliposome therapy is a targeted drug delivery method that uses liposomes (artificial lipid bilayer vesicles) coupled with monoclonal antibodies to deliver therapeutic agents to specific sites or tissues in the body. The antibody-modified liposomes target tissue through cell-specific antibodies and then release the drugs contained within the liposomes. Immunoliposome therapy aims to improve drug stability, personalize treatments, and increase drug efficacy. This form of therapy has been used to target specific cells, to protect the encapsulated drugs from degradation and thereby enhance their stability, to facilitate sustained drug release, and hence to advance traditional cancer treatment.
History
Alec D. Bangham discovered liposomes in the 1960s: spherical vesicles made of a phospholipid bilayer enclosing a hydrophilic core. Liposomes were then studied to uncover the properties of biological membranes, and a hydration method for preparing artificial liposomes was developed between 1968 and 1975. Since then, multiple methods of preparing liposomes have been utilized, and their physical and chemical characteristics have been studied.
Monoclonal antibodies are proteins that bind to specific antigens tagging specific cells and can be synthesized in the lab. They were first generated in 1975 and have since advanced to use in immunotherapy.
Immunoliposomes were developed utilizing both of these components. The first anticancer drug delivered with this method was doxorubicin (DOX) in the 1990s.
Composition and structure
The core structure of immunoliposomes is a lipid bilayer. This lipid bilayer encloses a hydrophilic core, which provides stable encapsulation for a therapeutic payload. Common lipids used are phosphatidylcholine (PC), phosphatidylethanolamine (PE), and cholesterol. The lipid bilayer is surface-modified through conjugation with monoclonal antibodies for specific recognition of the target cells or tissues of interest. The core of the immunoliposome contains the therapeutic payload, which can be small-molecule drugs, nucleic acids, peptides, or imaging agents.
There are often stabilizers and excipients present for formulation, stability, and functionality. Some include polyethylene glycol (PEG), antioxidants to prevent degradation of lipids, and buffering agents for optimal pH.
Synthesis
Immunoliposomes are created when antibodies are conjugated to liposomes. One way to do this is through covalent bonds between the antibody (or its fragment) and the lipid. Another way is through chemical modification of antibodies so they have a higher affinity for the liposome. “In general, the conjugation methodology is based on three main reactions; a reaction between activated carboxyl groups and amino groups which yields an amide bond, a reaction between pyridyl dithiols and thiols which yields disulfide bonds, and a reaction between maleimide derivatives and thiols, which yields thioether bonds.”
Conjugation via carboxyl and amino residues
Amine groups are found throughout an antibody and are used as a target due to their easy steric accessibility and modification. An overview of this reaction is found in Figure 2. Most often, amine groups found on lysine are covalently bonded to carboxyl groups of glutamic and aspartic acid on formed liposomes using coupling agents. A two-step process is utilized, where the first step uses 1-ethyl-3-[3-dimethylaminopropyl] carbodiimide (EDC) to create an amine-reactive product from the carboxyl group. This product is a target for nucleophilic attack by the amine, but it hydrolyzes quickly and must therefore be stabilized. As seen in Figure 2, the intermediate can either lead to the desired stable amide bond or revert to a carboxyl group. To create more of the desired carboxyl-amine bonds, N-hydroxysulfosuccinimide (sulfo-NHS) is added to form a more stable intermediate, an NHS ester. The second step of the reaction is for the antibodies to covalently conjugate to the N-terminus of the lipid by creating an amide bond via displacement of the sulfo-NHS group. This leads to the final product of an antibody conjugated to a liposome, an immunoliposome. This process is highly efficient and effective while maintaining the biological activity of the antibody.
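The competition described in this paragraph, nucleophilic attack by the amine versus rapid hydrolysis of the unstable intermediate, can be made concrete with a toy kinetic model. The sketch below is purely illustrative: the rate constants are invented and the two-branch scheme is a simplification, but it shows why routing the reaction through the more stable sulfo-NHS ester raises the final amide yield.

```python
def amide_yield(k_couple, k_hydrolysis, steps=20000, dt=0.001):
    """Euler integration of a toy branching scheme:
    intermediate -> amide (k_couple) competes with
    intermediate -> lost to hydrolysis (k_hydrolysis)."""
    intermediate, amide = 1.0, 0.0
    for _ in range(steps):
        d_amide = k_couple * intermediate * dt
        d_lost = k_hydrolysis * intermediate * dt
        intermediate -= d_amide + d_lost
        amide += d_amide
    return amide

# O-acylisourea: fast hydrolysis competes with coupling (invented rates).
direct = amide_yield(k_couple=1.0, k_hydrolysis=5.0)
# Sulfo-NHS ester: same coupling rate, much slower hydrolysis.
via_nhs = amide_yield(k_couple=1.0, k_hydrolysis=0.1)
print(f"yield direct: {direct:.2f}, via sulfo-NHS ester: {via_nhs:.2f}")
```

The long-run yield is simply the branching ratio k_couple / (k_couple + k_hydrolysis), so stabilizing the intermediate (lowering its hydrolysis rate) is what drives the yield up.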
Conjugation via thiol group
Another process for creating immunoliposomes uses a thiol group and creates a thioether bond. The sulfhydryl group is a key player and is found in cysteine bridges on proteins and in reagents like Traut's reagent, SATA, and sulfo-LC-SPDP. The reduction or hydrolysis of these groups generates thiol groups that enable antibody conjugation to lipids. There are multiple methods for this process; one uses the crosslinking agent SATA, as shown in Figure 3. The ester end of SATA reacts with amino groups in proteins to form an amide link and a molecule with a protected sulfhydryl group. For the reaction to continue, this group must be freed, which is done by adding hydroxylamine. The following step is to add a chemical that can act as an anchor between the lipid and the thiol group. Examples of molecules capable of serving as this anchor are maleimide, iodoacetyl groups, or 2-pyridyldithiol groups. Ultimately, these steps create an antibody conjugate that has been formed using a thiol group.
Mechanism of action
Immunoliposomes use similar functionality to liposomes, with the added measure of conjugating monoclonal antibodies or their fragments to the liposomes. The use of antibodies allows for easy targeting, as they can recognize many different types of antigens. Diseased cells typically display more antigens than healthy cells, which is how antibodies are able to target certain extracellular domains (depending on antigen overexpression) and kill diseased cells. Liposomal drug delivery combined with antibodies as a targeting ligand is what helps immunoliposomes function as an effective drug carrier.
Once the immunoliposomes deliver the appropriate drugs to the targeted cells, they can enter the cell either through selective uptake of liposomes by endocytosis or through liposome release near the targeted cell. Because of antibody conjugation, cellular uptake is increased for immunoliposomes, allowing greater drug entry into diseased cells. To control when a drug is released, immunoliposomes are being developed that can sense stimuli. These stimuli can come from the microenvironment of a tumor, using factors such as reduced pH, temperature, and enzyme levels. External stimuli like light, heat, magnetic fields, or ultrasound can also act as triggers for drug release.
Immunoliposomes can target a wide variety of cell types. These can be split into two main groups, commonly known as the intravascular and extravascular space, as seen in Figure 4. Intravascular cells are more accessible during circulation and include erythrocytes, myeloid cells, lymphocytes, neutrophils, etc. Extravascular cells are located on tissue parenchymal or stromal cells. Because immunoliposomes carry many antibody copies, they have a higher avidity than a single antibody alone, allowing for effective targeting of cancer cells and some drug-resistant cells.
Applications
Immunoliposome applications use the carrier's ability to act as a drug delivery system and release specific drug components to target cells. This mechanism is specifically highlighted in cancer cell targeting and in nutrient delivery systems.
Cancer cell targeting
The most common use of immunoliposomes is to target cancer cells using different antibodies. Folate receptors and transferrin receptors are typically overexpressed on cancer cells, so immunoliposomes will target these receptors. Folate receptors dictate tumor cell specificity and have been seen to be expressed in multiple inflammatory diseases, including psoriasis, Crohn's disease, atherosclerosis, and rheumatoid arthritis, making folate-targeted immunoliposomes an efficient carrier for delivering anti-inflammatory drugs. Transferrin receptors help meet the iron demand of proliferating cancer cells and allow for the formation of transferrin receptor-targeted anticancer therapies. EGFR (epidermal growth factor receptor) is a tyrosine kinase receptor overexpressed in solid tumors such as colorectal cancer, non-small-cell lung cancer, squamous cell carcinoma, and breast cancer, making it another target receptor for immunoliposomes. Some cancers create tumors that overexpress multiple different receptors or utilize cancer stem cells, which allow for differentiation into numerous cancer types; to combat this, dual-targeting immunoliposomes are being created to target multiple ligands and increase therapeutic efficacy. A study provides a promising preclinical demonstration of the effectiveness and ease of preparation of valrubicin-loaded immunoliposomes (Val-ILs) as a novel nanoparticle technology. In the context of hematological cancers, Val-ILs have the potential to be used as a precise and effective therapy based on targeted vesicle-mediated cell death.
Nutrient delivery system
Immunoliposomes can also be used as nutrient delivery systems to help stimulate brain activity. The effective transport of certain nutrients to the hypothalamus in order to regulate brain activity remains a major challenge. The leptin gene is used to regulate feedback loops and send signals from the adipose tissue to the hypothalamus. Using this physiological function of leptin, immunoliposome nutrient delivery systems can be integrated into the body to help with nutrient transport to the brain, as seen in Figure 5. Transferrin receptors are highly expressed at the BBB (blood-brain barrier) and can be used as targets for immunoliposomes to transport p-glycoprotein substances.
Advantage and limitations
Immunoliposomes have many possible applications as described above and have certain advantages in new research and ideas. Some advantages being researched include immunoliposomes targeting specific molecules in the body. In preclinical testing, they can be environmentally responsive to specific conditions of temperature, pH, enzymes, redox reactions, magnetic energy, and light to release drugs. This conditional ability allows immunoliposomes to focus on specific target areas which can be beneficial for drug delivery. Increased targeting allows the possibility for decreased systemic toxicity while increasing drug concentration at a certain site. Even with this advantage of immunoliposomes, there are some challenges in their application.
The success of immunoliposomes during in vivo testing has been shown by multiple groups, including Meeroekyai et al. (2023), and animal testing has been successful in groups such as Refaat et al. (2022). However, they struggle to thrive in higher-level and clinical testing. This challenge stems from the variability and limited understanding of tumors, pharmacokinetics, and large-scale production of immunoliposomes. For example, tumors vary but typically have increased vascular permeability and decreased lymphatic drainage, which lead to the enhanced permeability and retention (EPR) effect. Drug carriers depend on this effect, but its strength and the tumor environment can vary in solid tumors. This varying environment makes it hard to predict how the immunoliposome acts and to quantify its pharmacokinetics. Additionally, it has been a concern that the animal models used in preclinical testing will not reflect the same effect in humans. Because of this, despite any preclinical success, there is a concern about testing in humans due to unknown risks. For environmentally responsive immunoliposomes, more modification and purification steps are required to produce the final product. This increase in the complexity of immunoliposomes and their behavior also increases costs. Another challenge to marketability and clinical research is the difficulty of scaling up the production of immunoliposomes. The procedures and small quantities used in the laboratory make upscaling production a challenge that has received little attention.
Research
Liposomal medicine research for cancer therapy has increased over the years as an alternative to conventional cancer treatment. There is an interest in liposomal medicine because it features targeted drug delivery while mitigating the damage to healthy cells and tissues. One of the combination products under liposome therapy that is being researched for cancer therapy applications is immunoliposome therapy. Other research areas in liposome combination therapy include photodynamic therapy, photothermal agents, radiotherapy, and gas therapy agents.
An immunoliposome therapy clinical study, conducted by the Swiss Group for Clinical Cancer Research, was completed between 2006 and 2009. The study was a phase II clinical trial that looked at the combination of commercially sold doxorubicin with bevacizumab, a monoclonal antibody that blocks tumor growth. The therapy was used to treat patients with locally recurrent or metastatic breast cancer. Of the 43 patients, 16 had grade 3 palmar-plantar erythrodysesthesia, one had grade 3 mucositis, and one had severe cardiotoxicity, according to the study. As a result, the combination therapy demonstrated higher than anticipated toxicity while having only a modest therapeutic effect. These results concluded that, although immunoliposome therapy has promise, more research is needed before translation into commercial products.
Commercialization
There are several liposome medicines currently available commercially, which helps set the regulatory pathway for immunoliposome therapies.
As immunoliposome therapy has progressed in research, big market players in pharmaceutical research and manufacturing have invested in the development of these therapies. A relevant example of this is a phase I/II trial that examined the effectiveness of PDS0101 in combination with pembrolizumab, an immune checkpoint inhibitor (sold under the brand name Keytruda). The study is funded by PDS Biotechnology and in partnership with Merck. The purpose of the study is to determine the effectiveness of PDS0101 + pembrolizumab in shrinking tumors in patients with virus-related oropharyngeal cancer tumors in humans. PDS0101 is a peptide-based vaccine that aids in the immune response to kill tumor cells. The study also relies on pembrolizumab monoclonal antibodies to help the body's immune system attack the cancer and interfere with the spread of tumor cells.
Although immunoliposome therapy exhibits clinical and commercial promise, there are several known challenges in the translation from laboratory studies to clinical studies and ultimately to commercialization. One obstacle is that immunoliposome therapy is limited by having a short half-life and retention time once it reaches the tumor microenvironment. Additionally, immunoliposome therapies are often individualized which requires close clinical monitoring and comprehensive evaluation methods. From a biochemical perspective, other challenges that immunoliposome therapies face are drug instability due to the phospholipid bilayer and the known possibility for hepatotoxicity. From a manufacturing perspective, designing liposome drug delivery systems at an industrial scale can present a challenge due to the complexity of these drug release mechanisms and their related biosafety.
Similar approaches
Though immunoliposomes serve as a possible advancement, other therapies build on the same idea of targeted drug delivery. One example is immune polymeric nanoparticles, which are similar to liposomes but consist of small particles composed of biodegradable polymers. These nanoparticles similarly encapsulate drugs and can be functionalized with peptide ligands to enhance specificity toward targeted diseased cells. Another type is antibody-drug conjugates, which combine monoclonal antibodies with the cytotoxicity of chemotherapy drugs. This type is catered toward cancer cells expressing a specific target antigen. They are well tolerated by the body as they are biodegradable, eliminating many potential toxicity factors, and are proving to be a possible new model for therapeutics.
References
Drug delivery devices
Liposomally encapsulated antineoplastic agents | Immunoliposome therapy | [
"Chemistry"
] | 3,482 | [
"Pharmacology",
"Drug delivery devices"
] |
76,415,171 | https://en.wikipedia.org/wiki/Waste%20input-output%20model | The Waste Input-Output (WIO) model is an innovative extension of the environmentally extended input-output (EEIO) model. It enhances the traditional Input-Output (IO) model by incorporating physical waste flows generated and treated alongside monetary flows of products and services.
In a WIO model, each waste flow is traced from its generation to its treatment, facilitated by an allocation matrix.
Additionally, the model accounts for the transformation of waste during treatment into secondary waste and residues, as well as recycling and final disposal processes.
By including the end-of-life (EoL) stage of products, the WIO model enables a comprehensive consideration of the entire product life cycle, encompassing production, use, and disposal stages within the IO analysis framework. As such, it serves as a valuable tool for life cycle assessment (LCA).
Background
With growing concerns about environmental issues, the EEIO model evolved from the conventional IO model by integrating environmental factors such as resources, emissions, and waste. The standard EEIO model, which includes the economic input-output life-cycle assessment (EIO-LCA) model, can be formally expressed as follows:

r = R (I - A)^{-1} f

Here, A represents the square matrix of input coefficients, R denotes releases (such as emissions or waste) per unit of output, or the intervention matrix, f stands for the vector of final demand (or functional unit), I is the identity matrix, and r represents the resulting releases (for further details, refer to the input-output model). A model in which R represents the generation of waste per unit of output is known as a Waste Extended IO (WEIO) model. In this model, waste generation is included as a satellite account.
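As a concrete illustration of the standard EEIO calculation just described, the following plain-Python sketch computes total output and the releases induced by a final demand vector via the Leontief inverse. The two-sector coefficients, release intensities, and final demand are invented for illustration.

```python
def solve2(m, b):
    """Solve a 2x2 linear system m x = b by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(b[0] * m[1][1] - m[0][1] * b[1]) / det,
            (m[0][0] * b[1] - b[0] * m[1][0]) / det]

A = [[0.2, 0.3], [0.1, 0.4]]   # input coefficients (invented)
R = [0.5, 0.2]                 # releases per unit of output, one pollutant
f = [100.0, 50.0]              # final demand

I_minus_A = [[1 - A[0][0], -A[0][1]],
             [-A[1][0], 1 - A[1][1]]]
x = solve2(I_minus_A, f)                    # total output x = (I - A)^{-1} f
r = sum(Ri * xi for Ri, xi in zip(R, x))    # induced releases r = R x
print(f"outputs: {x[0]:.1f}, {x[1]:.1f}; releases: {r:.1f}")
```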
However, this formulation, while well-suited for handling emissions or resource use, encounters challenges when dealing with waste. It overlooks the crucial point that waste typically undergoes treatment before recycling or final disposal, leading to a form less harmful to the environment. Additionally, the treatment of emissions results in residues that require proper handling for recycling or final disposal (for instance, the pollution abatement process of sulfur dioxide involves its conversion into gypsum or sulfuric acid). Leontief's pioneering pollution abatement IO model did not address this aspect, whereas Duchin later incorporated it in a simplified illustrative case of wastewater treatment.
In waste management, it is common for various treatment methods to be applicable to a single type of waste. For instance, organic waste might undergo landfilling, incineration, gasification, or composting. Conversely, a single treatment process may be suitable for various types of waste; for example, solid waste of any type can typically be disposed of in a landfill. Formally, this implies that there is no one-to-one correspondence between treatment methods and types of waste.
A theoretical drawback of the Leontief-Duchin EEIO model is that it considers only cases where this one-to-one correspondence between treatment methods and types of waste applies, which makes the model difficult to apply to real waste management issues. The WIO model addresses this weakness by introducing a general mapping between treatment methods and types of waste, establishing a highly adaptable link between waste and treatment. This results in a model that is applicable to a wide range of real waste management issues.
The Methodology
We describe below the major features of the WIO model in its relationship to the Leontief-Duchin EEIO model, starting with notations.
Let there be producing sectors (each producing a single primary product), waste treatment sectors, and waste categories. Now, let's define the matrices and variables:
: an matrix representing the flow of products among producing sectors.
: an matrix representing the generation of wastes from producing sectors. Typical examples include animal waste from livestock, slag from steel mills, sludge from paper mills and the chemical industry, and metal scrap from manufacturing processes.
: an matrix representing the use (recycling) of wastes by producing sectors. Typical examples include the use of animal waste in fertilizer production and iron scrap in steel production based on an electric arc furnace.
: an matrix representing the net flow of wastes: .
: an matrix representing the flow of products in waste treatment sectors.
: an matrix representing the net generation of (secondary) waste in waste treatment sectors: ( and are defined similar to and ). Typical examples of include ashes generated from incineration processes, sludge produced during wastewater treatment, and residues derived from automobile shredding facilities.
: an vector representing the final demand for products.
: an vector representing the generation of waste from final demand sectors, such as the generation of kitchen waste and end-of-life consumer appliances.
: an vector representing the quantity of products produced.
: an vector representing the quantity of waste for treatment.
It is important to note that variables with or pertain to conventional components found in an IO table and are measured in monetary units. Conversely, variables with or typically do not appear explicitly in an IO table and are measured in physical units.
The balance of goods and waste
Using the notations introduced above, we can represent the supply and demand balance between products and waste for treatment by the following system of equations:
Here, denotes a vector of ones () used for summing the rows of , and similar definitions apply to other terms. The first line pertains to the standard balance of goods and services, with the left-hand side referring to demand and the right-hand side to supply. Similarly, the second line refers to the balance of waste, where the left-hand side signifies the generation of waste for treatment, and the right-hand side denotes the waste designated for treatment. It is important to note that increased recycling reduces the amount of waste for treatment.
The IO model with waste and waste treatment
We now define the input coefficient matrices and waste generation coefficients as follows
Here, refers to a diagonal matrix where the element is the -th element of a vector .
Using and as derived above, the balance () can be represented as:
This equation () represents the Duchin-Leontief environmental IO model, an extension of the original Leontief model of pollution abatement to account for the generation of secondary waste. It is important to note that this system of equations is generally unsolvable due to the presence of on the left-hand side and on the right-hand side, resulting in asymmetry. This asymmetry poses a challenge for solving the equation. However, the Duchin-Leontief environmental IO model addresses this issue by introducing a simplifying assumption:
This assumption () implies that a single treatment sector exclusively treats each waste. For instance, waste plastics are either landfilled or incinerated but not both simultaneously. While this assumption simplifies the model and enhances computational feasibility, it may not fully capture the complexities of real-world waste management scenarios. In reality, various treatment methods can be applied to a given waste; for example, organic waste might be landfilled, incinerated, or composted. Therefore, while the assumption facilitates computational tractability, it might oversimplify the actual waste management processes.
The WIO model
Nakamura and Kondo addressed the above problem by introducing the allocation matrix of order that assigns waste to treatment processes:
Here, the element of represents the proportion of waste treated by treatment . Since waste must be treated in some manner (even if illegally dumped, which can be considered a form of treatment), we have:
Here, stands for the transpose operator.
Note that the allocation matrix is essential for deriving from .
The simplifying condition () corresponds to the special case where and is a unit matrix.
The table below gives an example of for seven waste types and three treatment processes. Note that represents the allocation of waste for treatment, that is, the portion of waste that is not recycled.
The application of the allocation matrix transforms equation () into the following form:
Note that, unlike in (), the variable occurs on both sides of the equation. This system of equations is thus solvable (provided the inverse exists), with the solution given by:
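To make the solvable combined system concrete, the following plain-Python sketch assembles the joint coefficient matrix of a toy WIO economy with two producing sectors, two waste types, and a single treatment process to which the allocation matrix sends all waste, then solves for the sector outputs and the treatment activity. All matrix names and numbers here are invented for illustration and are not the notation of the original papers.

```python
def solve(m, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(m, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(n):
            if r != c:
                k = m[r][c] / m[c][c]
                m[r] = [a - k * e for a, e in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Two producing sectors, one treatment process, two waste types (invented data).
A_I  = [[0.2, 0.1], [0.3, 0.2]]   # product inputs into production
A_II = [[0.1], [0.2]]             # product inputs into waste treatment
G_I  = [[0.3, 0.1], [0.0, 0.2]]   # net waste generation per unit of output
G_II = [[0.05], [0.1]]            # secondary waste per unit of treatment
S    = [[1.0, 1.0]]               # allocation: both waste types to the one treatment
f, w_f = [100.0, 60.0], [10.0, 5.0]

# Combined coefficients [[A_I, A_II], [S G_I, S G_II]] and demand [f, S w_f].
SG_I  = [[sum(S[t][w] * G_I[w][j] for w in range(2)) for j in range(2)] for t in range(1)]
SG_II = [[sum(S[t][w] * G_II[w][j] for w in range(2)) for j in range(1)] for t in range(1)]
A_bar = [A_I[0] + A_II[0], A_I[1] + A_II[1], SG_I[0] + SG_II[0]]
f_bar = f + [sum(S[0][w] * w_f[w] for w in range(2))]

I_minus_A = [[(1.0 if i == j else 0.0) - A_bar[i][j] for j in range(3)] for i in range(3)]
x = solve(I_minus_A, f_bar)   # [output sector 1, output sector 2, treatment activity]
print("activities:", [round(val, 1) for val in x])
```

Note that the columns of S each sum to one, reflecting the requirement stated above that every unit of waste is assigned to some treatment.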
The WIO counterpart of the standard EEIO model of emissions, represented by equation (), can be formulated as follows:
Here, represents emissions per output from production sectors, and denotes emissions from waste treatment sectors.
Upon comparison of equation () with equation (), it becomes clear that the former expands upon the latter by incorporating factors related to waste and waste treatment.
Finally, the amount of waste for treatment induced by the final demand sector can be given by:
The Supply and Use Extension (WIO-SUT)
In the WIO model (), waste flows are categorized based solely on treatment method, without considering the waste type. Manfred Lenzen addressed this limitation by allowing both waste by type and waste by treatment method to be presented together in a single representation within a supply-and-use framework.
This extension of the WIO framework, given below, results in a symmetric WIO model that does not require the conversion of waste flows into treatment flows.
It is worth noting that despite the seemingly different forms of the two models, the Leontief inverse matrices of WIO and WIO-SUT are equivalent.
The WIO Cost and Price Model
Let's denote by , , , and the vector of product prices, waste treatment prices, value-added ratios of products, and value-added ratios of waste treatments, respectively.
The case without waste recycling
In the absence of recycling, the cost counterpart of equation () becomes:
which can be solved for and as:
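In the absence of recycling, the price system is simply the dual (transpose) of the quantity system, so prices can be obtained from the value-added ratios using the same Leontief inverse. The sketch below reuses the same kind of toy three-activity coefficients as above (all numbers and names invented) and checks the standard accounting identity that total final-demand expenditure equals total value added.

```python
def solve(m, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(m, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(n):
            if r != c:
                k = m[r][c] / m[c][c]
                m[r] = [a - k * e for a, e in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Toy combined coefficients: two producing sectors plus one treatment (invented).
A_bar = [[0.2, 0.1, 0.1],
         [0.3, 0.2, 0.2],
         [0.3, 0.3, 0.15]]
v = [0.4, 0.3, 0.35]          # value-added ratios per unit of activity
f_bar = [100.0, 60.0, 15.0]   # final demand for products and waste from final demand

I_minus_A = [[(1.0 if i == j else 0.0) - A_bar[i][j] for j in range(3)] for i in range(3)]
transpose = [[I_minus_A[j][i] for j in range(3)] for i in range(3)]

p = solve(transpose, v)       # prices: p' = v' (I - A)^{-1}
x = solve(I_minus_A, f_bar)   # activities: x = (I - A)^{-1} f
expenditure = sum(pi * fi for pi, fi in zip(p, f_bar))
value_added = sum(vi * xi for vi, xi in zip(v, x))
print("prices:", [round(pi, 3) for pi in p])
print(abs(expenditure - value_added) < 1e-6)   # accounting identity holds
```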
The case with waste recycling
When there is a recycling of waste, the simple representation given by equation () must be extended to include the rate of recycling and the price of waste :
Here, is the vector of waste prices, is the diagonal matrix of the vector of the average waste recycling rates, , and ( and are defined in a similar fashion).
Rebitzer and Nakamura used () to assess the life-cycle cost of washing machines under alternative End-of-Life scenarios.
More recently, Liao et al. applied () to assess the economic effects of recycling copper waste domestically in Taiwan, amid the country's consideration of establishing a copper refinery to meet increasing demand.
A caution about possible changes in the input-output coefficients of treatment processes when the composition of waste changes
The input-output relationships of waste treatment processes are often closely linked to the chemical properties of the treated waste, particularly in incineration processes.
The amount of recoverable heat, and thus the potential heat supply for external uses, including power generation, depends on the heat value of the waste.
This heat value is strongly influenced by the waste's composition.
Therefore, any change in the composition of waste can significantly impact and .
To address this aspect of waste treatment, especially in incineration, Nakamura and Kondo recommended using engineering information about the relevant treatment processes.
They suggest solving the entire model iteratively, which consists of the WIO model and a systems engineering model that incorporates the engineering information.
Alternatively, Tisserant et al. proposed addressing this issue by distinguishing each waste by its treatment processes. They suggest transforming the rectangular waste flow matrix () not into an matrix, as done by Nakamura and Kondo, but into an matrix. The details of each column element were obtained from the literature.
WIO tables and applications
Waste footprint studies
The MOE-WIO table for Japan
The WIO table compiled by the Japanese Ministry of the Environment (MOE) for the year 2011 stands as the only publicly accessible WIO table developed by a governmental body thus far. This MOE-WIO table distinguishes 80 production sectors, 10 waste treatment sectors, 99 waste categories, and encompasses 7 greenhouse gases (GHGs). The MOE-WIO table is available here.
Equation () can be used to assess the waste footprint of products or the amount of waste embodied in a product in its supply chain. Applied to the MOE-WIO, it was found that public construction significantly contributes to reducing construction waste, which mainly originates from building construction and civil engineering sectors. Additionally, public construction is the primary user (recycler) of slag and glass scrap. Regarding waste plastics, findings indicate that the majority of plastic waste originates not from direct household discharge but from various production sectors such as medical services, commerce, construction, personal services, food production, passenger motor vehicles, and real estate.
Other studies
Many researchers have independently created their own WIO datasets and utilized them for various applications, encompassing different geographical scales and process complexities. Here, we provide a brief overview of a selection of them.
End-of-Life electrical and electronic appliances
Kondo and Nakamura assessed the environmental and economic impacts of various life-cycle strategies for electrical appliances using the WIO-table they developed for Japan for the year 1995.
This dataset encompassed 80 industrial sectors, 5 treatment processes, and 36 types of waste.
The assessment was based on Equation ().
The strategies examined included disposal to a landfill, conventional recycling, intensive recycling employing advanced sorting technology, extension of product life, and extension of product life with functional upgrading.
Their analysis revealed that intensive recycling outperformed landfilling and simple shredding in reducing final waste disposal and other impacts, including carbon emissions.
Furthermore, they found that extending the product life significantly decreased environmental impact without negatively affecting economic activity and employment, provided that the reduction in spending on new purchases was balanced by increased expenditure on repair and maintenance.
General and hazardous industrial waste
Using detailed data on industrial waste, including 196 types of general industrial waste and 157 types of hazardous industrial waste, Liao et al. analyzed the final demand footprint of industrial waste in Taiwan across various final demand categories. Their analysis revealed significant variations in waste footprints among different final demand categories. For example, over 90% of the generation of "Waste acidic etchants" and "Copper and copper compounds" was attributed to exports. Conversely, items like "Waste lees, wine meal, and alcohol mash" and "Pulp and paper sludge" were predominantly associated with household activities
Global waste flows
Tisserant et al. developed a WIO model of the global economy by constructing a harmonized multiregional solid waste account that covered 48 world regions, 163 production sectors, 11 types of solid waste, and 12 waste treatment processes for the year 2007. Russia was found to be the largest generator of waste, followed by China, the US, the larger Western European economies, and Japan.
Decision Analytic Extension Based on Linear Programming (LP)
Kondo and Nakamura applied linear programming (LP) methodology to extend the WIO model, resulting in the development of a decision analytic extension known as the WIO-LP model. The application of LP to the IO model has a well-established history. This model was applied to explore alternative treatment processes for end-of-life home electric and electronic appliances, aiming to identify the optimal combination of treatment processes to achieve specific objectives, such as minimization of carbon emissions or landfill waste. Lin applied this methodology to the regional Input-Output (IO) table for Tokyo, augmented to incorporate wastewater flows and treatment processes, and identified trade-off relationships between water quality and carbon emissions. A similar method was also employed to assess the environmental impacts of alternative treatment processes for waste plastics in China.
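The kind of treatment-process choice explored with the WIO-LP model can be illustrated with a small linear program. The sketch below (all coefficients hypothetical, not taken from the cited studies) allocates end-of-life appliances between a landfill route and a recycling route so as to minimize carbon emissions, subject to treating all of the waste and a capacity limit on recycling:

```python
from scipy.optimize import linprog

# Illustrative WIO-LP-style sketch: allocate 100 units of waste
# between two treatment routes to minimize total CO2 emissions.
co2_per_unit = [0.8, 0.3]      # landfill route, recycling route
total_waste = 100.0
recycling_capacity = 60.0

# Decision variables: x0 = units landfilled, x1 = units recycled.
res = linprog(
    c=co2_per_unit,                              # minimize total CO2
    A_eq=[[1.0, 1.0]], b_eq=[total_waste],       # all waste treated
    A_ub=[[0.0, 1.0]], b_ub=[recycling_capacity],  # recycling limit
    bounds=[(0, None), (0, None)],
    method="highs",
)
print(res.x, res.fun)
```

With these assumed coefficients the recycling capacity binds, so 60 units are recycled and the remaining 40 are landfilled. The actual WIO-LP formulation optimizes over the full input-output system, with the Leontief balance as the equality constraints rather than a single mass balance.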
See also
References
External links
WIO table compiled by the Japanese Ministry of the Environment
Industrial ecology
Ecological economics
Mathematical and quantitative methods (economics)
Economics models
Economic planning
Environmental social science concepts
Waste management | Waste input-output model | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,161 | [
"Industrial engineering",
"Environmental social science concepts",
"Environmental engineering",
"Industrial ecology",
"Environmental social science"
] |
76,416,541 | https://en.wikipedia.org/wiki/GoFetch | GoFetch is a family of cryptographic attacks on recent Apple silicon CPUs that exploits the CPU's on-chip data memory-dependent prefetcher (DMP) to investigate the contents of memory. CPUs affected include the M1, M2, M3 and A14 series system-on-a-chip processors.
The DMP looks at cache memory content for possible pointer values, and prefetches the data at those locations into cache if it sees memory access patterns that suggest following those pointers would be useful. The GoFetch attacks use those speculative cache fetches to undermine a number of different cryptographic algorithms by using memory access timings to exfiltrate data from those algorithms using timing attacks.
The authors of GoFetch state that they were unable to make their exploit work on the Intel Raptor Lake processor they tested due to its more limited DMP functionality.
References
External links
Computer security exploits
2024 in computing
Apple silicon | GoFetch | [
"Technology"
] | 195 | [
"Computer security stubs",
"Computing stubs",
"Computer security exploits"
] |
76,416,680 | https://en.wikipedia.org/wiki/Surface%20Pro%2010 | The Microsoft Surface Pro 10 is a 2-in-1 detachable tablet computer developed by Microsoft, succeeding the Surface Pro 9.
The device was announced on March 22, 2024, alongside the Surface Laptop 6 and released on April 9, 2024, targeting business customers. A consumer version, which uses an ARM processor, was slated to become available to the general public in May 2024.
History
Features
Hardware
Software
Accessories
Reception
Timeline
References
Microsoft Surface
2-in-1 PCs
Computer-related introductions in 2024 | Surface Pro 10 | [
"Technology"
] | 106 | [
"Crossover devices",
"2-in-1 PCs"
] |
76,417,324 | https://en.wikipedia.org/wiki/IRAS%2020068%2B4051 | IRAS 20068+4051 is a protoplanetary nebula in the constellation Cygnus. It has not yet become a complete planetary nebula, and it's still in the short-lived protoplanetary nebula phase. IRAS 20068+4051 has given astronomers insight into how planetary nebulae form and evolve over time.
Discovery
IRAS 20068+4051 was first discovered during a sky survey with the Infrared Astronomical Satellite and was later observed in more detail with the Hubble Space Telescope.
See also
List of protoplanetary nebulae
References
Protoplanetary nebulae
Cygnus (constellation)
IRAS catalogue objects | IRAS 20068+4051 | [
"Astronomy"
] | 130 | [
"Cygnus (constellation)",
"Constellations"
] |