id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
71,465,792 | https://en.wikipedia.org/wiki/Pycnonuclear%20fusion | Pycnonuclear fusion is a type of nuclear fusion reaction which occurs due to zero-point oscillations of nuclei around their equilibrium point bound in their crystal lattice. In quantum physics, the phenomenon can be interpreted as overlap of the wave functions of neighboring ions, and is proportional to the overlapping amplitude. Under the conditions of above-threshold ionization, the reactions of neutronization and pycnonuclear fusion can lead to the creation of absolutely stable environments in superdense substances.
The term "pycnonuclear" was coined by A.G.W. Cameron in 1959, but research showing the possibility of nuclear fusion in extremely dense and cold compositions was published by W. A. Wildhack in 1940.
Astrophysics
Pycnonuclear reactions can occur anywhere and in any matter, but under standard conditions the rate of the reaction is exceedingly low; they thus play no significant role outside of extremely dense, neutron-rich and free-electron-rich environments, such as the inner crust of a neutron star. A feature of pycnonuclear reactions is that their rate is directly proportional to the density of the space in which the reaction occurs, but is almost fully independent of the temperature of the environment.
Pycnonuclear reactions are observed in neutron stars and white dwarfs, with evidence that they also occur in laboratory-generated deuterium-tritium plasma. Some speculations also attribute the fact that Jupiter emits more radiation than it receives from the Sun to pycnonuclear reactions or cold fusion.
White dwarfs
In white dwarfs, the core of the star is cold; under such conditions, if treated classically, the nuclei that arrange themselves into a crystal lattice are in their ground state. The zero-point oscillations of nuclei in the crystal lattice, with energies at the Gamow peak, can overcome the Coulomb barrier, actuating pycnonuclear reactions. A semi-analytical model indicates that in white dwarfs a thermonuclear runaway can occur at ages much younger than that of the universe, as the heating from pycnonuclear reactions in the cores of white dwarfs exceeds the luminosity of the white dwarfs, allowing C-burning to occur, which catalyzes the formation of type Ia supernovas in accreting white dwarfs whose mass is equal to the Chandrasekhar mass.
Some studies indicate that the contribution of pycnonuclear reactions towards the instability of white dwarfs is only significant in carbon white dwarfs, while in oxygen white dwarfs such instability is caused mostly by electron capture. Other authors, however, disagree that pycnonuclear reactions can act as major long-term heating sources for massive (1.25 solar mass) white dwarfs, as their density would not suffice for a high rate of pycnonuclear reactions.
Black dwarfs
While most studies indicate that at the end of their lifecycle white dwarfs slowly cool into black dwarfs, with pycnonuclear reactions slowly turning their cores into ^56Fe, according to some versions a collapse of black dwarfs is possible: M.E. Caplan (2020) theorizes that the most massive black dwarfs (1.25 solar masses), due to their declining electron fraction resulting from ^56Fe production, will exceed the Chandrasekhar limit in the very far future, speculating that their lifetimes and delay times can stretch over immense timescales.
Neutron stars
As neutron stars undergo accretion, the density in the crust increases, passing the electron capture threshold. Once the electron capture threshold ( g cm−3) is exceeded, light nuclei can form through double electron capture (^40Mg + 2e -> ^34Ne + 6n + 2ν), producing light neon nuclei and free neutrons, which further increases the density of the crust. As the density increases, the crystal lattices of neutron-rich nuclei are forced closer together by the gravitational compression of accreting material, and at the point where the nuclei are pushed so close together that their zero-point oscillations allow them to break through the Coulomb barrier, fusion occurs. While the main site of pycnonuclear fusion within neutron stars is the inner crust, pycnonuclear reactions between light nuclei can occur even in the plasma ocean. Since the density of neutron star cores has been approximated to be g cm−3, pycnonuclear reactions play a large role at such extreme densities, as demonstrated by Haensel & Zdunik, who showed that at densities of g cm−3 they serve as a major heat source. In the fusion processes of the inner crust, the burning of neutron-rich nuclei (^34Ne + ^34Ne -> ^68Ca) releases substantial heat, allowing pycnonuclear fusion to act as a major energy source, possibly even serving as an energy reservoir for gamma-ray bursts.
Further studies have established that most magnetars are found at densities of g cm−3, indicating that pycnonuclear reactions along with subsequent electron capture reactions could serve as major heat sources.
Triple-alpha reaction
In Wolf–Rayet stars, the triple-alpha reaction is facilitated by the low-energy ^8Be resonance. In neutron stars, however, the core temperature is so low that triple-alpha reactions can instead occur via the pycnonuclear pathway.
Mathematical model
As the density increases, the Gamow peak increases in height and shifts towards lower energy, while the potential barriers are depressed. If the potential barriers are depressed sufficiently, the Gamow peak is shifted across the origin, making the reactions density-dependent, as the Gamow peak energy is much larger than the thermal energy. The material becomes a degenerate gas at such densities. Harrison proposed that models fully independent of temperature be called cryonuclear.
Pycnonuclear reactions can proceed in two ways: directly (^34Ne + ^34Ne or ^40Mg + ^40Mg) or through a chain of electron capture reactions (^25N + ^40Mg).
Uncertainties
There is currently no coherent consensus on the rate of pycnonuclear reactions. There are many uncertainties to consider when modelling the rate of pycnonuclear reactions, especially in environments with high numbers of free particles. The primary focus of current research is on the effects of crystal lattice deformation and of the presence of free neutrons on the reaction rate. Every time fusion occurs, nuclei are removed from the crystal lattice, creating a defect. The difficulty of approximating this model lies in the fact that the further changes occurring to the lattice, and the effect of various deformations on the rate, are thus far unknown. Since neighbouring parts of the lattice can affect the rate of reaction too, neglecting such deformations could lead to major discrepancies. Another confounding variable is the presence of free neutrons in the crusts of neutron stars, which could potentially affect the Coulomb barrier, making it either taller or thicker. A study published by D.G. Yakovlev in 2006 showed that the calculated rate of the first pycnonuclear fusion of two ^34Ne nuclei in the crust of a neutron star is uncertain by up to seven orders of magnitude. In this study, Yakovlev also highlighted the uncertainty in the threshold of pycnonuclear fusion (i.e., at what density it starts), giving an approximate density required for the onset of pycnonuclear fusion of g cm−3, arriving at a similar conclusion to Haensel and Zdunik. According to Haensel and Zdunik, additional uncertainty in rate calculations for neutron stars can also be due to the uneven distribution of crustal heating, which can impact the thermal states of neutron stars before and after accretion.
In white dwarfs and neutron stars, nuclear reaction rates can be affected not only by pycnonuclear effects but also by plasma screening of the Coulomb interaction. The Ukrainian Electrodynamics Research Laboratory "Proton-21" established that, by forming a thin electron plasma layer on the surface of a target material and thus forcing the self-compression of the target material at low temperatures, it could stimulate the process of pycnonuclear fusion. The startup of the process was due to the self-contracting plasma "scanning" the entire volume of the target material, screening the Coulomb field.
Screening, quantum diffusion and nuclear fusion regimes
Before delving into the mathematical model, it is important to understand that pycnonuclear fusion, in its essence, occurs due to two main effects:
A phenomenon of quantum nature called quantum diffusion.
Overlap of the wave functions of zero-point oscillations of the nuclei.
Both of these effects are heavily affected by screening. The term "screening" is generally used by nuclear physicists when referring to plasmas of particularly high density. In order for pycnonuclear fusion to occur, the two particles must overcome the electrostatic repulsion between them - the energy required for this is called the Coulomb barrier. Due to the presence of other charged particles (mainly electrons) next to the reacting pair, a shielding effect arises: the electrons create an electron cloud around the positively charged ions, effectively reducing the electrostatic repulsion between them and lowering the Coulomb barrier. This phenomenon of shielding is referred to as "screening", and in cases where it is particularly strong, it is called "strong screening". Consequently, in cases where the plasma has a strong screening effect, the rate of pycnonuclear fusion is substantially enhanced.
Quantum tunnelling is the foundation of the quantum-physical approach to pycnonuclear fusion. It is closely intertwined with the screening effect, as the transmission coefficient depends on the height of the potential barrier, the mass of the particles, and their relative velocity (since the total energy of the system depends on the kinetic energy). From this it follows that the transmission coefficient is very sensitive to the effects of screening. Thus, screening not only contributes to the reduction of the potential barrier that allows "classical" fusion to occur via the overlap of the wave functions of the zero-point oscillations of the particles, but also to the increase of the transmission coefficient, both of which increase the rate of pycnonuclear fusion.
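This sensitivity can be made concrete with the standard WKB expression for the transmission coefficient through a one-dimensional barrier (a textbook quantum mechanics result, quoted here as an illustration rather than taken from the pycnonuclear literature):

$$T \;\approx\; \exp\!\left(-\frac{2}{\hbar}\int_{x_1}^{x_2}\sqrt{2m\left[V(x)-E\right]}\,\mathrm{d}x\right),$$

where V(x) is the (screened) potential barrier, E the energy of relative motion, m the reduced mass of the fusing pair, and x1, x2 the classical turning points. Because screening lowers V(x), it shrinks the integral in the exponent and therefore increases T exponentially.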
In addition to the other jargon related to pycnonuclear fusion, the papers also introduce various regimes that define the rate of pycnonuclear fusion. Specifically, they identify the zero-temperature, intermediate, and thermally-enhanced regimes as the main ones.
One-component plasma (OCP)
The pioneers of the derivation of the rate of pycnonuclear fusion in a one-component plasma (OCP) were Edwin Salpeter and Hugh Van Horn, whose article was published in 1969. Their approach used a semiclassical method to solve the Schrödinger equation, employing the Wentzel-Kramers-Brillouin (WKB) approximation and Wigner-Seitz (WS) spheres. Their model is heavily simplified, and whilst primitive, it is required to understand other approaches, which largely build on the work of Salpeter & Van Horn. They employed the WS spheres to partition the OCP into regions containing one ion each, with the ions situated on the vertices of a BCC crystal lattice. Then, using the WKB approximation, they resolved the effect of quantum tunnelling on the fusing nuclei. Extrapolating this to the entire lattice allowed them to arrive at their formula for the rate of pycnonuclear fusion:
where the quantities entering the formula are the density of the plasma; the mean molecular weight per electron (per atomic nucleus); a constant that serves as a conversion factor from atomic mass units to grams; and the thermal average of the pairwise reaction probability.
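The qualitative structure of static-lattice pycnonuclear rates of the Salpeter-Van Horn type can be sketched as follows (a structural sketch only: all constants and nuclear S-factor prefactors are omitted, and the exponent α and constant C stand in for model-dependent numbers, so this is not the exact published expression):

$$R_{\mathrm{pyc}} \;\propto\; \rho^{2}\,\lambda^{\alpha}\,\exp\!\left(-C\,\lambda^{-1/2}\right),\qquad \lambda \propto \rho^{1/3},$$

where λ is a dimensionless density parameter. The exponent of the tunnelling factor depends on density alone, which makes explicit the two qualitative features noted earlier: the rate grows extremely steeply with density, and temperature does not appear in the zero-temperature regime.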
However, the major fault of the method proposed by Salpeter & Van Horn is that they neglected the dynamic model of the lattice. This was improved upon by Schramm and Koonin in 1990. In their model, they found that the dynamic model cannot be neglected, but that it is possible for the effects caused by the lattice dynamics to cancel out.
See also
Cold fusion
Thermonuclear reaction
Accretion (astrophysics)
Plasma (physics)
Quantum tunnelling
References
Nuclear fusion
Neutron sources
Astronomy | Pycnonuclear fusion | [
"Physics",
"Chemistry",
"Astronomy"
] | 2,679 | [
"Nuclear fusion",
"nan",
"Nuclear physics"
] |
75,750,707 | https://en.wikipedia.org/wiki/1%2C1%2C2%2C2-Tetrafluoroethane | 1,1,2,2-Tetrafluoroethane (also called R-134 or HFC-134) is a hydrofluorocarbon, a fluorinated alkane. It is an isomer of the more widely used 1,1,1,2-tetrafluoroethane (R-134a). It is used as a foam expansion agent and heat transfer fluid.
References
Fluoroalkanes
Greenhouse gases
Hydrofluorocarbons | 1,1,2,2-Tetrafluoroethane | [
"Chemistry",
"Environmental_science"
] | 104 | [
"Greenhouse gases",
"Environmental chemistry"
] |
75,754,732 | https://en.wikipedia.org/wiki/JP-10%20%28fuel%29 | JP-10 (Jet Propellant 10) is a synthetic jet fuel, specified and used mainly as fuel in missiles. Being designed for military purposes, it is not a kerosene-based fuel.
Developed to be a gas turbine fuel for cruise missiles, it contains mainly exo-tetrahydrodicyclopentadiene (exo-THDCPD) with some endo-isomer impurity. About 100 ppm of alkylphenol-based antioxidant is added to prevent gumming. Optionally, 0.10–0.15% of fuel system icing inhibitor may be added. Exo-THDCPD is produced by catalytic hydrogenation of dicyclopentadiene and then isomerization.
It superseded JP-9, which is a mixture of the norbornadiene-based RJ-5 fuel, tetrahydrodicyclopentadiene, and methylcyclohexane, because of a lower temperature service limit and an approximately four times lower price. Since the lack of volatile methylcyclohexane makes its ignition difficult, a separate priming fluid, PF-1, containing about 10-12% of this additive, is required for engine start-up. Its main use is in the Tomahawk missiles.
The Russian equivalent is called detsilin.
Chemical properties of JP-10 fuel
Chemical formula: C10H16
H/C (Hydrogen/Carbon) ratio (mole): 1.6
Average molecular weight (g/mol): 136.2
LHV (lower heating value) (MJ/kg): 43.0
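The listed H/C ratio and average molecular weight are mutually consistent with the molecular formula C10H16; as a quick check:

$$\mathrm{H/C} = \frac{16}{10} = 1.6, \qquad M \approx 10 \times 12.011 + 16 \times 1.008 \approx 136.2\ \mathrm{g\,mol^{-1}}.$$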
Uses
JP-10 can absorb heat energy before combustion, making it useful as an endothermic fuel, and it has a relatively high density of 940 kg/m3 and a low freezing point. Its high energy density of 39.6 MJ/L makes it ideal for military aerospace applications - its primary use. The ignition and burn chemistry has been extensively studied. The exo isomer also has a low freezing point. Its other properties have also been studied extensively.
Even though its uses are mainly military, its relatively high cost has meant that research has been undertaken to find lower-cost routes, including the use of cellulosic materials.
Further research
Current and past areas of research focus on:
The pyrolysis and kinetics of the fuel.
Catalytic addition of nanoparticles such as those based on cerium(IV) oxide.
Catalysis for the endo to exo isomerisation.
Use of additives in JP-10 for various enhancements.
References
Further reading
Aviation fuels
Liquid fuels
Petroleum products | JP-10 (fuel) | [
"Chemistry",
"Engineering"
] | 552 | [
"Aviation fuels",
"Petroleum",
"Petroleum products",
"Aerospace engineering"
] |
74,467,809 | https://en.wikipedia.org/wiki/Terephthalaldehyde | Terephthalaldehyde (TA) is an organic compound with the formula C6H4(CHO)2. It is one of three isomers of benzene dicarboxaldehyde, the one in which the aldehyde moieties occupy the para positions on the benzene ring. Terephthalaldehyde appears as a white to beige solid, typically in the form of a powder. It is soluble in many organic solvents, such as alcohols (e.g., methanol or ethanol) and ethers (e.g., tetrahydrofuran or diethyl ether).
Preparation
Terephthalaldehyde can be synthesised from p-xylene in two steps. First, p-xylene is reacted with bromine to give α,α,α',α'-tetrabromo-p-xylene. Next, sulphuric acid is introduced to give terephthalaldehyde. Alternative procedures also describe the conversion of similar p-xylene derivatives into terephthalaldehyde.
Reactions and applications
Terephthalaldehyde is used in the preparation of imines, which are also commonly referred to as Schiff bases, following a condensation reaction with amines. During this reaction, water is also formed. This reaction is by definition reversible, thus creating an equilibrium between aldehyde and amine on one side, and the imine and water on the other. However, due to aromatic conjugation between the imine group and the benzene ring, the imines are relatively stable and do not easily hydrolyse back to the aldehyde. In an acidic aqueous environment, however, imines start to hydrolyse more easily. Typically, an equilibrium between the imine and aldehyde is formed, which depends on the concentration of the relevant compounds and the pH of the solution.
Imines from terephthalaldehyde find use in the preparation of metal-organic coordination complexes. In addition, terephthalaldehyde is a commonly used monomer in the production of imine polymers, also called polyimines. It finds further use in the synthesis of covalent organic frameworks (COFs), and it is used as a precursor for the preparation of paramagnetic microporous polymeric organic frameworks (POFs) through copolymerization with pyrrole, indole, and carbazole. Due to the characteristic metal-coordinating properties of imines, terephthalaldehyde finds common use in the synthesis of molecular cages.
Terephthalaldehyde is also a commonly used intermediate or starting material in the preparation of a broad variety of organic compounds, such as pharmaceuticals, dyes and fluorescent whitening agents.
Related compounds
phthalaldehyde
isophthalaldehyde
terephthalic acid
References
Benzaldehydes
Monomers
Reagents for organic chemistry | Terephthalaldehyde | [
"Chemistry",
"Materials_science"
] | 624 | [
"Monomers",
"Polymer chemistry",
"Reagents for organic chemistry"
] |
69,914,127 | https://en.wikipedia.org/wiki/Javad%20Owji | Javad Owji (born 24 July 1966) is an Iranian oil engineer and politician who served as the Minister of Petroleum of Iran from 2021 to 2024.
Early life and education
Owji was born in Shiraz in 1966. He received a bachelor's degree in oil engineering from Petroleum University of Technology in Ahvaz.
Career
From 1980, Owji worked in oil-related public offices. He was the deputy oil minister and the head of the National Iranian Gas Company from 2009 to 2013, during the last term of President Mahmoud Ahmadinejad. He also served in various oil-related posts, including chairman of the board of supervision of production and gas refineries and vice chairman of Petro Mofid Oil and Gas Development Holding. Owji was nominated by President Ebrahim Raisi as oil minister on 11 August 2021. On 25 August, Owji was confirmed by the Majlis with 198 votes in favor, 70 against, and 18 abstentions. He succeeded Bijan Namdar Zangeneh in the post.
References
External links
20th-century Iranian engineers
21st-century Iranian engineers
21st-century Iranian politicians
1966 births
Directors of the National Iranian Oil Company
Living people
Oil ministers of Iran
Politicians from Shiraz
Petroleum University of Technology alumni
Petroleum engineers | Javad Owji | [
"Engineering"
] | 249 | [
"Petroleum engineers",
"Petroleum engineering"
] |
71,466,674 | https://en.wikipedia.org/wiki/Hydrogen%20transport | Hydrogen transport involves the use of technology to transport hydrogen from the point of generation to the point of use.
Techniques
Hydrogen can be transported in a variety of forms.
Gas
Hydrogen can be transported in gaseous form, typically in a pipeline. Because hydrogen gas is highly reactive, the pipeline or other container must be able to resist interacting with the gas. Hydrogen's low density at atmospheric pressure means that gas transport is suitable only for low volume requirements.
Liquid
Hydrogen switches to the liquid phase at about −253 °C (20 K). Thus, transporting liquid hydrogen requires sophisticated refrigeration technologies such as cryogenic tanker trucks and liquefaction plants.
Compound
Hydrogen can be reacted with other elements to form a variety of compounds. This allows it to be transported in either liquid (e.g., water) or solid form. One variation on this concept is to transport atomic silicon, produced using renewable energy. Mixing silicon with water separates water's oxygen from its hydrogen without requiring additional energy. The hydrogen can then be oxidized with the oxygen (or air) to produce energy (with water as the only byproduct).
Mechanochemical
Mechanochemistry refers to chemical reactions triggered by mechanical forces as opposed to heat, light, or electric potential. Ball milling can crush material such as boron nitride or graphene, allowing hydrogen gas to be absorbed by the powder, storing the hydrogen. The hydrogen can be released by heating the powder. These techniques offer the potential of substantial net energy savings.
Safety
Hydrogen transport must address various safety threats.
It is highly flammable, requiring little energy to ignite. However, it has a low density (0.0837 g/L), which allows leaked gas to dissipate rapidly, rather than accumulate as a higher-density gas such as chlorine (3.214 g/L) might.
Liquid hydrogen requires such low temperatures that leaks may solidify other air components such as nitrogen and oxygen. Solid oxygen can mix with liquid hydrogen, forming a mixture that could self-ignite. A jet fire can also ignite.
At high concentrations, hydrogen gas is an asphyxiant, but is not otherwise toxic.
ISO Technical Committee 197 is developing standards governing hydrogen applications. Standards are available for onboard systems, fuel tanks, and vehicle refueling systems, as well as for production (including electrolysis and steam methane reformers).
Individual jurisdictions such as Italy have developed additional standards.
See also
Hydrogen transportation
References
External links
Hazardous materials
Energy in transport | Hydrogen transport | [
"Physics",
"Chemistry",
"Technology"
] | 501 | [
"Physical systems",
"Transport",
"Materials",
"Energy in transport",
"Hazardous materials",
"Matter"
] |
78,841,983 | https://en.wikipedia.org/wiki/Dan%20%28mass%29 | Dan, or Daam in Cantonese, Tan in Japanese and Taiwanese, also called "Chinese hundredweight" or "picul", is a traditional Chinese unit of weight measurement in East Asia. It originated in China before being introduced to neighboring countries.
Nowadays, the mass of 1 dan equals 100 jin or 50 kg in mainland China, 60 kg in Taiwan and Japan, and 60.478982 kg in Hong Kong, Singapore and Malaysia.
Dan is mostly used in traditional markets.
Mainland China
On June 25, 1959, the State Council of the People's Republic of China issued the Order on the Unified Measurement System, with minor amendments to the market system. Legally, 1 dan equals 100 jin, 50 kg, or 110.2 lb.
Taiwan
The so-called Taiwan dan is actually the dan used throughout China during the Qing Dynasty. 1 Taiwan dan is 60 kg, equal to 100 Taiwan jin.
Hong Kong and Macau
Hong Kong law stipulates that one dan is equal to one hundred jin, which is 60.478982 kg. Singapore and Malaysia have similar regulations to Hong Kong, as all are former British colonies.
Japan
In Japan, 1 dan, or tan in Japanese pronunciation, is equal to 60 kg.
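The regional definitions above reduce to a simple table of conversion factors. A minimal C sketch (the region labels and the example quantity are illustrative, not from the article):

```c
#include <stdio.h>

/* Kilograms per dan under the regional definitions given above. */
struct dan_def {
    const char *region;
    double kg_per_dan;
};

int main(void) {
    const struct dan_def defs[] = {
        { "Mainland China (100 jin)",          50.0      },
        { "Taiwan / Japan",                    60.0      },
        { "Hong Kong / Singapore / Malaysia",  60.478982 },
    };
    const double dan = 2.5; /* example quantity to convert */

    for (int i = 0; i < 3; i++) {
        printf("%.1f dan = %10.6f kg  (%s)\n",
               dan, dan * defs[i].kg_per_dan, defs[i].region);
    }
    return 0;
}
```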
See also
Chinese units of measurement
Hong Kong units of measurement
Taiwanese units of measurement
Japanese units of measurement
Notes
References
External links
公擔 on Wiktionary: https://en.wiktionary.org/wiki/公擔#Chinese
Units of mass
Chinese units of measurement
Customary units of measurement | Dan (mass) | [
"Physics",
"Mathematics"
] | 343 | [
"Matter",
"Quantity",
"Units of mass",
"Mass",
"Customary units of measurement",
"Units of measurement"
] |
78,844,718 | https://en.wikipedia.org/wiki/Ga-68-Trivehexin | 68Ga-Trivehexin is a radiotracer for positron emission tomography (PET), obtained by labeling the peptide conjugate Trivehexin with the positron-emitting radionuclide gallium-68 (68Ga). 68Ga-Trivehexin targets (i.e., binds to) the cell surface receptor αvβ6-integrin, a heterodimeric transmembrane cell adhesion receptor whose primary natural ligand is latency associated peptide (LAP) in its complex with transforming growth factor beta 1 (TGF-β1). Binding of αvβ6-integrin to LAP releases and thus activates TGF-β1. 68Ga-Trivehexin is applied for PET imaging of medical conditions associated with elevated αvβ6-integrin expression, which are concomitant with elevated TGF-β1 activity. As an activator of the tumor suppressor TGF-β1, αvβ6-integrin is often overexpressed in tumors and fibrosis tissues, which is why 68Ga-Trivehexin PET imaging is primarily relevant in this medical context.
Chemistry
Trivehexin, the radiolabeling precursor
Like most precursors used for radiolabeling with radioactive metal cations, Trivehexin is composed of a dedicated complex ligand (a so-called chelator) for kinetically inert binding of the 68GaIII ion, and the bioligand(s) for binding to αvβ6-integrin. The chelator comprised in Trivehexin is a triazacycloalkane with 3 phosphinic acid substituents, with the basic structure 1,4,7-triazacyclononane-1,4,7-triphosphinate (frequently abbreviated TRAP). The αvβ6-integrin binding structure is a cyclic nonapeptide with the amino acid sequence cyclo(YRGDLAYp(NMe)K).
In the Trivehexin molecule, three of these cyclopeptides are covalently bound to the TRAP chelator core. Since TRAP possesses three equivalent carboxylic acids for attachment of other molecular units (so-called conjugation) via amide formation, Trivehexin is a C3-symmetrical molecule with its three peptide bioligands being fully equivalent. The peptides are attached to the chelator core via the terminal amine group of the side chains of N-methyl lysine. Actually, the conjugation is not done by amide bonding directly, but involves prior functionalization of the peptide with a short molecular extension (a linker) bearing a terminal alkyne, and of TRAP with three linkers bearing terminal azides. These components are assembled by means of copper(I)-catalyzed alkyne-azide cycloaddition (CuAAC, also known as the Huisgen reaction, a click chemistry reaction), giving rise to the three 1,2,3-triazole linkages in the 68Ga-Trivehexin structure.
68Ga radiolabeling
68Ga-Trivehexin is a radioactive drug. The radioactive atom, gallium-68 (68Ga), decays with a half-life of approximately 68 min to the stable isotope zinc-68 (68Zn), to 89% by β+ decay, whereby a positron with a maximum kinetic energy of 1.9 MeV is emitted (the remaining 11% are EC decays). Due to the short half-life, 68Ga-Trivehexin cannot be manufactured long before use; instead, the 68Ga has to be introduced into the molecule shortly before application. This process is referred to as radiolabeling, and is done by complexation of the trivalent cation 68GaIII by the TRAP chelator in Trivehexin.
68GaIII is usually obtained from a dedicated mobile radionuclide source, a gallium-68 generator, in the form of a solution in dilute (0.04–0.1 M) hydrochloric acid (frequently and imprecisely referred to as "68Ga chloride solution in HCl", despite containing no species with a Ga–Cl bond but rather [68Ga(H2O)6]3+ complex hydrate cations). For radiolabeling, the pH of the 68Ga-containing generator eluate has to be raised from its initial value (depending on HCl concentration, pH 1–1.5) to pH 2–3.5 using suitable buffers, such as sodium acetate. Then, Trivehexin (5–10 nmol) is added to the buffered 68Ga-containing solution, and the mixture is briefly heated to 50–100 °C (usually 2–3 min) to finalize the complexation reaction.
Use as medical imaging agent
αvβ6-Integrin as molecular target
The abundance of αvβ6-Integrin on most adult human cell types and respective tissues is low. It is however overexpressed in the context of several medical conditions, such as cancer or fibrosis, particularly idiopathic pulmonary fibrosis.
In line with the finding that αvβ6-integrin is expressed by epithelial cells, an elevated density of αvβ6 is observed on the cell surfaces of many carcinomas (synonymous with cancers of epithelial origin). Hence, 68Ga-Trivehexin can be used for PET imaging of αvβ6-integrin positive cancers (i.e., those whose cells possess a sufficiently high density of αvβ6 on their surface), including but not limited to pancreatic ductal adenocarcinoma, non-small cell lung cancer, squamous cell carcinomas (SCC) of different origin (most notably, oral and esophageal SCC), as well as breast, ovarian, and bladder cancer.
68Ga-Trivehexin has a high binding affinity to αvβ6-integrin (IC50 = 0.047 nM). Its affinity to other RGD-binding integrins is much lower (IC50 for αvβ3, αvβ8, and α5β1 are 2.7, 6.2, and 22 nM, respectively; note that for IC50, higher values mean lower affinity), resulting in a high selectivity for αvβ6-integrin.
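Expressed as ratios of the IC50 values just quoted, the selectivity of 68Ga-Trivehexin for αvβ6 over the other RGD-binding integrins works out to roughly:

$$\frac{2.7}{0.047}\approx 57\times\ (\alpha_v\beta_3),\qquad \frac{6.2}{0.047}\approx 130\times\ (\alpha_v\beta_8),\qquad \frac{22}{0.047}\approx 470\times\ (\alpha_5\beta_1).$$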
Imaging procedure
Since 68Ga is a positron emitter, 68Ga-Trivehexin is applicable for PET imaging. However, PET is rarely used as a standalone imaging technique these days because most clinics use PET/CT or even PET/MRI systems, which provide more detailed and useful medical information to the physician.
For clinical PET/CT diagnostics, an activity in the range of 80–150 MBq of 68Ga-Trivehexin is injected intravenously (i.v.). The tracer then distributes in the body and specifically binds to its target αvβ6-integrin, while the excess is excreted via the kidneys and the urine. As a result, 68Ga-Trivehexin and, therefore, the positron-emitting radionuclide 68Ga is preferentially accumulated by αvβ6-integrin abundant tissues (for example, tumor tissue). Next, a PET/CT scanner is used to detect the gamma radiation which is generated by the annihilation of the positrons emitted by 68Ga (not the actual positrons, which do not leave the body but travel only a few millimetres through the tissue). The spatial distribution of the annihilation events is 'reconstructed' (calculated) from the raw detector data (referred to as list-mode data), which eventually delivers a 3-dimensional representation of αvβ6-integrin positive tissues of interest. Typically, the PET/CT imaging is performed 45–60 minutes after the i.v. administration of 68Ga-Trivehexin.
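The 68 min half-life quoted earlier determines how much activity remains at the typical imaging time; as a worked example using the figures in this section:

$$\frac{A(t)}{A_0}=2^{-t/T_{1/2}},\qquad \frac{A(60\ \mathrm{min})}{A_0}=2^{-60/68}\approx 0.54,$$

so an administered activity of 150 MBq corresponds to roughly 80 MBq of still-undecayed 68Ga one hour later, which is why the injected activity and the imaging delay have to be balanced against each other.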
PET/CT imaging of cancers
68Ga-Trivehexin has not yet obtained a marketing approval. It is used for clinical imaging of αvβ6-integrin expression in experimental settings. First-in-human application of different αvβ6-integrin radiotracers has demonstrated that 68Ga-Trivehexin performed especially well in detecting pancreatic cancer, showing high uptake in tumor lesions and low background in the gastrointestinal tract (GI tract). 68Ga-Trivehexin has been used for clinical PET/CT imaging in single cases and in two cohorts (12 and 44 patients, respectively) of pancreatic ductal adenocarcinoma, as well as in cases of tonsillar carcinoma metastasized to the brain, of bronchial mucoepidermoid carcinoma, of disseminated parathyroid adenoma in the context of the diagnosis of primary hyperparathyroidism (PHPT), and of papillary thyroid carcinoma.
Safety
As with other radioactive imaging agents in medicine, the applied amounts of radioactivity are so low that radiation-related adverse effects are very unlikely to occur, and they have not been observed in practice. Consistent with the "tracer principle", the amount of pharmacologically active compound injected into a patient in the course of such an examination is extremely low. Adverse events, such as toxicity or allergic reactions, are thus highly improbable. No adverse or clinically detectable pharmacologic effects were observed following intravenous administration of 68Ga-Trivehexin to cancer patients, and there were no significant changes in vital signs, laboratory study results, or electrocardiograms. In a study involving healthy volunteers, researchers again reported no adverse or clinically detectable pharmacologic effects and no significant changes in vital signs.
References
Positron emission tomography
Amides
Chelating agents
Nine-membered rings
Gallium
Phosphinates
Isotopes of gallium
Nonapeptides
Triazoles | Ga-68-Trivehexin | [
"Physics",
"Chemistry"
] | 2,146 | [
"Antimatter",
"Matter",
"Isotopes of gallium",
"Positron emission tomography",
"Functional groups",
"Isotopes",
"Chelating agents",
"Amides",
"Process chemicals"
] |
78,845,077 | https://en.wikipedia.org/wiki/Capability%20Hardware%20Enhanced%20RISC%20Instructions | Capability Hardware Enhanced RISC Instructions (CHERI) is a computer processor technology designed to improve security. CHERI aims to address the root cause of the problems caused by a lack of memory safety in common implementations of languages such as C/C++, which is responsible for around 70% of security vulnerabilities in modern systems.
The hardware works by giving each reference to any piece of data or system resource its own access rules. This prevents programs from accessing or changing things they should not. It also makes it hard to trick a part of a program into accessing or changing something that it is allowed to access, but at the wrong time. The same mechanism is used to implement privilege separation, dividing processes into compartments that limit the damage that a bug (security-related or otherwise) can do.
CHERI can be added to many different instruction set architectures including MIPS, AArch64, and RISC-V, making it usable across a wide range of platforms.
Software must be recompiled to use CHERI, but most software requires few (if any) changes to the source code. CHERI’s importance has been recognised by governments as a way to improve cybersecurity and protect critical systems. It is under active development by various business and academic organizations.
Background
CHERI is a capability architecture. Early capability architectures, such as the CAP computer and Intel iAPX 432, demonstrated secure memory management but were hindered by performance overheads and complexity. As systems became faster and more complex, vulnerabilities like buffer overflows and use-after-free errors became widespread. CHERI addresses these challenges with a design intended for modern computing environments. It enforces memory safety and provides secure sharing and isolation to handle increasing software complexity and combat cyberattacks.
Mechanism
A CHERI system operates at a hardware level by providing a hardware-enforced type (a CHERI capability) that authorises access to memory. This type includes an address and other metadata, such as bounds and permissions. Instructions such as loads, stores, and jumps, that access memory use one of these types to authorise access, whereas on traditional architectures they would simply use an address.
This metadata is stored inline, alongside the address, in the computer's memory, and is protected by a tag bit, which is cleared if the capability is tampered with. This informs the computer of which areas of memory can be accessed through a specific operation and how a program can modify or read memory through that operation. This allows CHERI systems to catch cases where memory outside the bounds of where the program was supposed to read or write is operated on. Associating the metadata with the value used to access memory, rather than with the memory being accessed (in contrast to a memory management unit), means that the hardware can catch cases where a program attempts to access a part of memory that it should have access to while intending to access a different piece of memory.
Implementations of CHERI systems also include modifications to the default memory allocator. A memory allocator is a component that defines that a range of addresses should be treated by the programmer as an object. On a CHERI system, it must also communicate this information to the hardware, by setting the bounds on the pointer (represented by a CHERI capability) that is returned. It may also communicate the lifetime, to prevent use-after-free or use-after-reuse bugs.
Depending on the context, CHERI systems can be used to enhance compiler-level checks, build secure enclaves, or augment existing instruction set architectures. A report by Microsoft in 2019 found that CHERI's protections could have mitigated over 70% of the memory safety issues found at the company that year. CHERI architectures are also designed to be backward compatible with existing programming languages such as C and C++. A study performed by University of Cambridge researchers found that porting six million lines of C and C++ code to CHERI required changes to only 0.026% of the lines of code (LoC).
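As a concrete illustration of the class of bug involved, consider the following C sketch (the allocation sizes, the adjacency of heap objects, and the trap behaviour described in the comments are illustrative assumptions, not output from a real Morello or CHERIoT system):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *buf    = malloc(16);   /* 16-byte heap allocation            */
    char *secret = malloc(16);   /* a second object, possibly adjacent */
    if (buf == NULL || secret == NULL)
        return 1;
    strcpy(secret, "sensitive");

    /* Classic spatial-safety bug: 32 bytes are written into a 16-byte
     * object. Conventional hardware performs the store and may corrupt
     * neighbouring allocations; on a CHERI system the capability
     * derived from malloc(16) carries 16-byte bounds, so the same
     * store exceeds those bounds and faults in hardware. */
    memset(buf, 'A', 32);

    printf("secret = %s\n", secret);  /* may print corrupted data */
    free(secret);
    free(buf);
    return 0;
}
```

On a CHERI platform, catching this bug requires no changes to the source code, only a recompile, in line with the porting figures quoted above.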
Limitations
The architecture introduces hardware complexity due to the tag-bit mechanisms and capability checks required for enforcing memory safety. Although optimisations have been implemented to minimise these impacts, the performance trade-offs can vary depending on specific workloads and specific implementations. Additionally, CHERI requires modifications to both software and hardware ecosystems. Implementations such as Morello allow unmodified binaries to run but these do not get any additional security benefits. Software must be recompiled or adapted to utilise CHERI’s capability-based model, and hardware manufacturers must incorporate CHERI extensions into their designs.
Standardisation remains an ongoing effort. While initiatives such as the CHERI Alliance and RISC-V standardisation aim to establish broader support, the lack of widely accepted industry standards for CHERI features has delayed adoption. Adapting legacy software or retrofitting existing systems to work with CHERI can be challenging, particularly for large and heterogeneous codebases. The difficulty often stems from programming practices used during the software's original development, such as implementing custom memory management, where distinguishing pointers from integers can be particularly problematic.
CHERI Implementations
The CHERI architecture has been implemented across multiple platforms and projects:
Morello: Developed by Arm as part of the UKRI-funded Digital Security by Design (DSbD) programme, the Morello chip is a superset architecture designed to evaluate experimental CHERI features for potential production use on the AArch64 architecture. The Morello board supports CheriBSD, custom versions of Android, and Linux. It remains a research prototype.
CHERIoT: Introduced by Microsoft in 2023 and now developed by multiple vendors, CHERIoT is a RISC-V CHERI adaptation optimised for small embedded devices. CHERIoT is a hardware-software co-designed project and builds a custom RTOS and compartment model along with specialised hardware to provide strong security guarantees. It incorporates advanced memory safety features inspired by the CHERI temporal safety projects performed on Morello.
Sonata: Developed by lowRISC and manufactured by NewAE as part of the UKRI-funded Sunburst project, the Sonata platform is an FPGA-based system designed to run RISC-V architectures. The board has an open-source design, allowing researchers and developers to modify and adapt its hardware and software. Sonata is primarily designed as a prototyping system for CHERIoT.
ICENI: Announced by SCI Semiconductors in 2024, ICENI is a CHERIoT-compatible microcontroller designed for secure embedded systems.
CHERI implementations that target mainstream operating systems are designed to accommodate both legacy and pure-capability software, allowing gradual adaptation of existing applications. CHERI has also been implemented across various hardware architectures in a research setting, including MIPS, AArch64 (via the Morello platform), and RISC-V.
History
In the 1970s and 1980s early capability architectures such as the CAP computer (developed at the University of Cambridge) and the Intel iAPX 432 demonstrated strong security properties. These systems relied on indirection tables to manage capabilities, introducing performance bottlenecks as memory access required multiple lookups. While this approach worked when processors were slow and memory was fast, it became impractical by the mid-1980s as processors became faster and memory access times lagged behind.
In 2010 DARPA launched the Clean-slate design of Resilient, Adaptive, Secure Hosts (CRASH) programme, which tasked participants with redesigning computer systems to improve security. SRI International and University of Cambridge team revisited capability architectures, seeking to address memory safety challenges inherent in conventional designs.
By 2012 early CHERI prototypes were presented. These prototypes ran a microkernel with hand-written assembly for manipulating capabilities. CHERI was designed to be easy to implement on modern superscalar pipelined architectures. Unlike earlier capability systems, CHERI eliminated the need for indirection tables, avoiding the associated performance issues and proving that modern capability architectures could be efficiently implemented.
In 2014 CHERI hardware demonstrated its ability to run a full UNIX-like operating system, FreeBSD. This demonstration showed that CHERI’s capability model can integrate with existing software ecosystems. CHERI was originally prototyped as an extension to MIPS-64. The implementation used 256-bit capabilities, containing fields for a 64-bit base, length, object type, and permissions, with some bits reserved for experimental purposes.
In 2015 CHERI introduced a new capability encoding model that separated the address (referred to as a cursor) from the bounds and permissions. This refinement allowed capabilities to function as pointers in compiled C code, improving usability. That same year, Arm joined the project and provided critical feedback, highlighting that while doubling pointer sizes might be acceptable, quadrupling them would not. This feedback led to the development of CHERI Concentrate, a compressed encoding model that reduced capability size to 128 bits by eliminating redundancy between the base, address, and top.
In 2019 CheriABI demonstrated a fully memory-safe implementation of POSIX, allowing existing desktop software to become memory safe with a single recompile.
By 2020 it became evident that software vendors were reluctant to port their software without hardware vendor support, while hardware vendors were unwilling to produce chips without sufficient customer demand. UK Research and Innovation (UKRI) launched the Digital Security by Design (DSbD) programme to address adoption barriers for CHERI. The programme allocated £70M, matched by £100M of industrial investment, to build the CHERI software ecosystem.
This initiative funded Arm’s Morello chip, a superset architecture designed to evaluate experimental CHERI features for potential production use based on AArch64. The Morello board was designed to run CheriBSD, as well as custom versions of Android and Linux. At the same time, the Cornucopia project demonstrated that CHERI could enforce both spatial and temporal memory safety, offering deterministic protection against heap object temporal aliasing (roughly, "use-after-free"). The follow-up project, Cornucopia Reloaded, showcased efficient temporal safety using page-table features in Morello, in particular, near-negligible pause times for the application making use of revocation.
In 2023 Microsoft introduced CHERIoT, a RISC-V CHERI adaptation optimised for small embedded devices. CHERIoT incorporated ideas from Cornucopia and memory colouring techniques such as SPARC ADI and Arm MTE to enhance security. As part of the UKRI-funded Sunburst project, lowRISC launched the Sonata platform to advance RISC-V-based CHERI development and support standardisation efforts. Both the CHERI RISC-V research work and CHERIoT fed into the standardisation process for an official CHERI family of RISC-V extensions. Codasip announced that they had RISC-V IP cores with CHERI extensions available to license.
By 2024 SCI Semiconductors announced ICENI, a CHERIoT-compatible chip designed specifically for secure embedded systems. Codasip is actively developing a Linux kernel implementation for the RISC-V architecture. The CHERI Alliance, a non-profit organisation based in Cambridge, UK, was established to promote the adoption of CHERI technology and its integration into secure digital products and systems, including Google as a founding member.
References
Capability systems
Computer architecture
Computer memory
Memory management
Operating system security | Capability Hardware Enhanced RISC Instructions | [
"Technology",
"Engineering"
] | 2,355 | [
"Capability systems",
"Computer engineering",
"Computer architecture",
"Computer systems",
"Computers"
] |
78,847,059 | https://en.wikipedia.org/wiki/Law%20of%20constancy%20of%20interfacial%20angles | The law of constancy of interfacial angles is an empirical law in the fields of crystallography and mineralogy concerning the shape, or morphology, of crystals. The law states that the angles between adjacent corresponding faces of crystals of a particular substance are always constant despite the different shapes, sizes, and mode of growth of crystals. The law is also named the first law of crystallography or Steno's law.
Definition
The International Union of Crystallography (IUCr) gives the following definition: "The law of the constancy of interfacial angles (or 'first law of crystallography') states that the angles between the crystal faces of a given species are constant, whatever the lateral extension of these faces and the origin of the crystal, and are characteristic of that species." The law is valid at constant temperature and pressure.
This law is important in identifying different mineral species as small changes in atomic structure can lead to large differences in the angles between crystal faces.
The sum of the interfacial angle (external angle) and the dihedral angle (internal angle) between two adjacent faces sharing a common edge is π radians (180°).
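For example, two adjacent faces whose internal (dihedral) angle is 120° have an interfacial angle of

$$\theta_{\text{interfacial}} = \pi - \theta_{\text{dihedral}} = 180^\circ - 120^\circ = 60^\circ.$$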
History
The law of the constancy of interfacial angles was first observed by the Danish physician Nicolas Steno, who, when studying quartz crystals (De solido intra solidum naturaliter contento, Florence, 1669), noted that, although the crystals differed in appearance from one to another, the angles between corresponding faces were always the same.
The law was also observed by Domenico Guglielmini (Riflessioni filosofiche dedotte dalle figure de Sali, Bologna, 1688), but it was generalized and firmly established by Jean-Baptiste Romé de l'Isle (Cristallographie, Paris, 1783) who accurately measured the interfacial angles of a great variety of crystals, using the goniometer designed by Arnould Carangeot and noted that the angles are characteristic of a substance. Carangeot was a student of Romé de L’Isle at the time of his invention of the basic crystallographic measuring instrument.
A French crystallographer, René Just Haüy, showed in 1774 that the known interfacial angles could be accounted for if the crystal were made up of minute building blocks (molécules intégrantes) that correspond approximately to the present-day unit cells.
In the diagram, the green dodecahedron on the left is built from cubical units, with the faces having a Miller index of (210). Unlike the regular dodecahedron on the right, its faces are not regular pentagons, but they are close to regular in appearance. The piling of the cubical units forms the pentagonal dodecahedron of pyritohedral pyrite. The decrement of the layers is in the proportion of 2:1, which leads to a dihedral angle at the top edge pq of 126° 87′, closely corresponding to that of the empirical crystal, of 127° 56′. The diagram is based on an 1801 drawing by René Just Haüy.
Crystal structure
The phenomenon of the constancy of interfacial angles is important because it is an outward sign of the inherent symmetry and ordered arrangement of atoms, ions or molecules within a crystal structure. The faces of a crystal are parallel to the planes of the crystal lattice, and it is for this reason that the interfacial angles are the same in different crystal specimens.
The angles between the various faces of a crystal remain unchanged throughout its growth. Crystals grow by addition of material to existing faces, this material being deposited parallel to the already existing surfaces. Consequently, if more material is added to one face than to another, the faces become unalike in size and shape, nevertheless the interfacial angles between them remain the same.
Crystals generally exhibit anisotropy, that is their properties are dependent on their direction. In particular, crystals cleave in specific directions, namely those parallel to the planes of the lattice structure. Cleavage preferentially occurs parallel to higher density planes with low Miller indices.
See also
Law of rational indices
References
Crystallography
Mineralogy concepts
Scientific laws | Law of constancy of interfacial angles | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 860 | [
"Mathematical objects",
"Scientific laws",
"Equations",
"Materials science",
"Crystallography",
"Condensed matter physics"
] |
78,850,749 | https://en.wikipedia.org/wiki/List%20of%20Sigma%20Gamma%20Tau%20chapters | Sigma Gamma Tau is the American honor society in aerospace engineering. The society formed from the merger of Tau Omega and Gamma Alpha Rho in 1953. It has chartered more than fifty chapters in the United States. With the merger of the two societies, Sigma Gamma Tau started with fourteen chapters, renamed for their host institution. Following is a list of Sigma Gamma Tau chapters, with inactive institutions indicated in italics.
Notes
References
Engineering honor societies
Lists of chapters of former Association of College Honor Societies members by society | List of Sigma Gamma Tau chapters | [
"Engineering"
] | 100 | [
"Engineering societies",
"Engineering honor societies"
] |
78,856,281 | https://en.wikipedia.org/wiki/4-Mercaptobenzoic%20acid | 4-Mercaptobenzoic acid (p-mercaptobenzoic acid, p-MBA) is an organosulfur compound with the formula para-HSC6H4CO2H. It is used as a ligand in thiolate-protected gold cluster compounds, such as Au102(p-MBA)44.
See also
Gold cluster
References
Benzoic acids
Thiols | 4-Mercaptobenzoic acid | [
"Chemistry"
] | 73 | [
"Organic compounds",
"Thiols"
] |
77,390,020 | https://en.wikipedia.org/wiki/Kotzig%27s%20conjecture | Kotzig's conjecture is an unproven assertion in graph theory which states that finite graphs with certain properties do not exist.
A graph is a P_k-graph if each pair of distinct vertices is connected by exactly one path of length k.
Kotzig's conjecture asserts that for k ≥ 3 there are no finite P_k-graphs with two or more vertices.
The conjecture was first formulated by Anton Kotzig in 1974.
It has been verified for small values of k, but remains open in the general case (as of November 2024).
The conjecture is stated for k ≥ 3 because P_k-graphs do exist for smaller values of k.
P_1-graphs are precisely the complete graphs.
The friendship theorem states that P_2-graphs are precisely the (triangular) windmill graphs (that is, finitely many triangles joined at a common vertex; also known as friendship graphs).
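The k = 2 case can be verified by hand. Writing a windmill graph as m triangles sharing a single hub vertex c (the labels c, u, u', v are introduced here only for this check), every pair of distinct vertices is joined by exactly one path of length 2:

$$(c,u):\; c{-}u'{-}u \ \ (u' \text{ the triangle partner of } u), \qquad (u,u')\ \text{in one triangle}:\; u{-}c{-}u', \qquad (u,v)\ \text{in different triangles}:\; u{-}c{-}v.$$

In each case the middle vertex is the unique common neighbour of the pair, so the 2-path is unique, confirming that windmill graphs satisfy the defining property; the friendship theorem says they are the only finite graphs that do.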
History
Kotzig's conjecture was first listed as an open problem by Bondy & Murty in 1976, attributed to Kotzig and dated to 1974.
Kotzig's first own writing on the conjecture appeared in 1979.
He later verified the conjecture for further values of k and claimed a solution, though unpublished, for more.
The conjecture is now known to hold for a range of values of k due to work of Alexandr Kostochka.
Kostochka stated that his techniques extend to larger k, but a proof of this has not been published.
A survey on -graphs was written by John A. Bondy, including proofs for many statements previously made by Kotzig without written proof.
In 1990, Xing & Hu claimed a proof of Kotzig's conjecture for the remaining cases.
This seemed to resolve the conjecture at the time, and still today leads many to believe that the problem is settled.
However, Xing and Hu's proof relied on a misunderstanding of a statement proven by Kotzig. Kotzig showed that a P_k-graph must contain an even cycle whose length lies in a certain range, which Xing and Hu used in the form that cycles of all these lengths must exist.
In their paper, Xing and Hu show that for large k a P_k-graph must not contain even cycles of certain of these lengths.
Since this is in contradiction to their reading of Kotzig's result, they conclude (incorrectly) that P_k-graphs with large k cannot exist.
This mistake was first pointed out by Roland Häggkvist in 2000.
Kotzig's conjecture is mentioned in Proofs from THE BOOK in the chapter on the friendship theorem.
It is stated that a general proof for the conjecture seems "out of reach".
Properties of P_k-graphs
A P_k-graph on n vertices contains precisely n(n−1)/2 paths of length k, one for each pair of distinct vertices.
Since the two end-vertices of an edge in a P_k-graph are connected by a unique k-path, each edge is contained in a unique (k+1)-cycle. Consequently, the graph has a unique decomposition into edge-disjoint (k+1)-cycles, and there are no other (k+1)-cycles besides these. In particular, P_k-graphs are Eulerian.
P_k-graphs are not bipartite: if k is odd and u, v are vertices in the same bipartition class, no k-path can connect them. Likewise, if k is even and u, v are vertices in different bipartition classes, no k-path can connect them.
Even cycles form important substructures in P_k-graphs. A lollipop (sometimes also monocle) is the union of an even cycle with a path that intersects the cycle in precisely one of its end vertices. The path must be sufficiently short, as it would otherwise give rise to two k-paths with the same end vertices. Therefore, the existence and distribution of lollipops, and more generally of even cycles, has been studied extensively. It is known that there must exist an even cycle whose length lies in a certain range, and that even cycles of several specific lengths cannot exist.
A P_k-graph cannot contain a cycle (even or odd) beyond a certain length. At the same time, there must exist a cycle of at least a certain length. Combining these constraints bounds the possible cycle lengths.
Any two (k+1)-cycles in a P_k-graph must have at least three, and at most a bounded number of, vertices in common. In particular, a P_k-graph is 2-connected. Kotzig furthermore claims that any two (k+1)-cycles have at least seven vertices in common, though no proof has been published.
Bounds can be placed on the number of (k+1)-cycles in a given P_k-graph, and these bounds differ according to whether k is even or odd. Consequently, the number of edges in a P_k-graph (for odd k) is bounded, and since P_k-graphs are connected, so is the number of vertices.
References
Graph theory
Unsolved problems in mathematics | Kotzig's conjecture | [
"Mathematics"
] | 915 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Unsolved problems in graph theory"
] |
77,391,914 | https://en.wikipedia.org/wiki/Hering%27s%20Paradox | Hering's paradox describes a physical experiment in the field of electromagnetism that seems to contradict Maxwell's equations in general, and Faraday's law of induction and the flux rule in particular. In his study on the subject, Carl Hering concluded in 1908 that the usual statement of Faraday's law (at the turn of the century) was imperfect and that it required modification in order to become universal.
Since then, Hering's paradox has been used repeatedly in physics didactics to demonstrate the application of Faraday's law of induction, and it can be considered completely understood within the theory of classical electrodynamics. Grabinski criticizes, however, that most presentations in introductory textbooks are problematic: either Faraday's law is misinterpreted in a way that leads to confusion, or solely such frames of reference are chosen that avoid the need for an explanation. In the following, Hering's paradox is first shown experimentally in a video and, in a similar way as suggested by Grabinski, it is shown that, when treated carefully and with full mathematical consistency, the experiment does not contradict Faraday's law of induction. Finally, the typical pitfalls of applying Faraday's law are mentioned.
Experiment
The experiment is shown in the video on the right side. In the experiment, a slotted iron core is used, where a coil fed with a direct current generates a constant magnetic field in the core and in its slot.
Two different experiments are carried out in parallel:
*In the lower part, an ordinary conductor loop is passed through the slot of the iron core. As there is a magnetic field in this slot, a voltage is generated at the ends of the conductor loop, which is amplified and displayed in the lower oscilloscope image.
*A modified conductor loop is realized in the upper part. The conductor loop is split at one point, and each of the split ends is fitted with a metal wheel. During the experiment, the metal wheels move around the magnetic core and exert a certain contact pressure on each other and on the core, respectively. As the magnetic core is electrically conductive, there is always an electrical connection between the wheels and therefore between the separated ends of the loop. The oscilloscope does not show any voltage despite conditions otherwise identical to those of the first experiment.
In both experiments, the same change in magnetic flux occurs at the same time. However, the oscilloscope only shows a voltage in one experiment, although one would expect the same induced voltage to be present in both experiments. This unexpected result is called Hering's paradox, named after Carl Hering.
Explanation
Moving wires/oscilloscope, magnet at rest
The easiest way to understand the outcome of the experiment is to view it from the rest frame of the magnet, i.e. the magnet is at rest, and the oscilloscope and the wires are in motion. In this frame of reference, there is no reason for a voltage to arise, because the set-up consists of a magnet at rest and some wires moving in a field-free space around the magnet, which scratch the magnet a little.
To conclude, there is
* no change of the magnetic field anywhere (∂B/∂t = 0) and thus no current-driving force on the charges in the circuit due to rest induction,
* and those parts of the circuit whose charges are in motion (v ≠ 0) are not exposed to a magnetic field (B = 0) and vice versa, so that there is no magnetic force on the charges anywhere in the circuit.
Moving magnet, wires/oscilloscope at rest
While the perspective from the rest frame of the magnet causes no difficulties in understanding, this is not the case when viewed from a frame of reference in which the oscilloscope and the cables are at rest and an electrically conductive permanent magnet moves into a conductor loop at a speed v. Under these circumstances, there is rest induction due to the movement of the magnet (∂B/∂t ≠ 0 at the front edge of the magnet), and beyond that, the magnet is also a moving conductor. The double function of the magnet as a conductor in motion on the one hand, and as the root cause of the magnetic field on the other hand, raises an essential question: Does the magnetic field of the magnet exert a Lorentz force on the charges inside the magnet? The correct answer to this question is "Yes, it does", and it is one of the pitfalls concerning the application of Faraday's Law. For some people it is counterintuitive to assume that a Lorentz force is exerted on a charge although there is no relative motion between the magnet and the charge.
An essential step in resolving the paradox is the realization that the inside of the conductive moving magnet is not field-free, but that a non-zero electric field strength E = −v × B prevails there. If this field strength is integrated along the corresponding section of the circuit, the result is the desired induced voltage. However, the induced voltage is not localized in the oscilloscope, but in the magnet.
The equation E = −v × B can be derived from the consideration that there is obviously no current-driving force acting on any section of the circuit. Since the absence of forces also applies in particular to the inside of the magnet, the total electromagnetic force for a charge q located inside the magnet equals F = q(E + v_q × B) = 0. If we assume that the charge moves "slip-free" with the magnet (v_q = v), the following also applies: E + v × B = 0. The last equation, however, is mathematically equivalent to E = −v × B.
Finally, the following electric field strengths result for the various sections of the conductor loop: inside the moving magnet, E = −v × B ≠ 0; in the wires and in the oscilloscope, E = 0.
To check whether the outcome of the experiment is compatible with Maxwell's equations, we first write down the Maxwell-Faraday equation in integral notation: ∮_{∂A} E · ds = −(d/dt) ∬_A B · dA.
Here A is the induction surface, and ∂A is its boundary curve, which is assumed to be composed of stationary sections running through the magnet, the wires, and the oscilloscope, respectively. The dot indicates the dot product between two vectors. The direction of integration (clockwise) and the surface orientation (pointing into the screen) are right-handed to each other, as assumed in the Maxwell-Faraday equation.
Considering the electric field strengths listed above, the left side of the Maxwell-Faraday equation can be written as:
The minus sign is due to the fact that the direction of integration is opposite to the direction of the electric field strength ().
To calculate the right-hand side of the equation, we state that within the time the magnetic field of the induction surface increases from to () within a strip of length and width ().
Thus the right side of the equation equals
The right and left sides of the equation are obviously identical. This shows that Hering's paradox is in perfect agreement with the Maxwell Faraday equation.
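The bookkeeping can also be condensed into one display. Since the symbols of the original calculation are not preserved in the text above, the following is a sketch under assumed notation: v for the magnet speed, l for the length of the magnet section crossing the induction surface, B for the flux density in the slot, and Δt for the considered time interval:

\[
\oint_{\partial A} \vec{E} \cdot \mathrm{d}\vec{s} = -\,vBl
\qquad\text{and}\qquad
-\frac{\mathrm{d}}{\mathrm{d}t} \iint_{A} \vec{B} \cdot \mathrm{d}\vec{A}
= -\,\frac{B\,(l \cdot v\,\Delta t)}{\Delta t}
= -\,vBl ,
\]

so both sides of the Maxwell-Faraday equation indeed coincide, whatever the concrete values of these quantities.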
Note that the speed of the boundary curve has no physical importance whatsoever. This can be seen most easily in the differential notation of the Maxwell-Faraday equation, ∇ × E = −∂B/∂t, where neither the induction area nor its boundary occurs. From a mathematical point of view, the boundary curve is just an imaginary line that had to be introduced to convert the Maxwell-Faraday equation to its integral notation so as to establish a relationship to electrical voltages.
Because the boundary curve is physically of no importance, the outcome of an experiment does not depend on the speed of this curve, and it is not affected by whether or not the speed of the boundary curve corresponds to the speed of a conductor wire located at the same place. For reasons of simplicity, the speed of the boundary curve is assumed to be zero in this article.
The movement that actually counts is the movement of the (electrically conducting) magnet. It affects the value of the electric field strength inside the magnet and is thus accounted for in the Maxwell-Faraday equation via the numerical value of the vector field E.
Pitfalls
The difficulties in understanding Hering's paradox and similar problems are usually based on three misunderstandings:
(1) the lack of distinction between the velocity of the boundary curve and the velocity of a conductor present at the location of the boundary curve,
(2) the uncertainty as to whether the term ∂A in the Maxwell-Faraday equation is just an imaginary boundary line or a conductor (correct is: ∂A is a boundary curve without any physical properties), and
(3) ignoring the fact that in an ideal conductor moving at velocity v in a magnetic field with flux density B, there is a non-zero electric field strength E = −v × B.
If these points are consistently considered, Hering's paradox turns out to be in perfect agreement with Faraday's law of induction (given by the Maxwell-Faraday equation) viewed from any frame of reference whatsoever. Furthermore, the difficulties in understanding the (thought) experiments described in the chapter "Exceptions to the flux rule" in the "Feynman Lectures" are due to the same misunderstandings.
References
Faraday's law of electromagnetic induction
Michael Faraday
Maxwell's equations
de:Heringsches Paradoxon | Hering's Paradox | [
"Physics",
"Mathematics"
] | 1,808 | [
"Electrodynamics",
"Maxwell's equations",
"Equations of physics",
"Dynamical systems"
] |
74,469,613 | https://en.wikipedia.org/wiki/WELL%20Building%20Standard | WELL Building Standard (WELL) is a healthy building certification program developed by the International WELL Building Institute pbc (IWBI), a California-registered public benefit corporation.
History
The WELL Building Standard was launched in 2013 by Paul Scialla of the company Delos, becoming the first standard focused on well-being. By 2016, over 200 projects in 21 countries had adopted the certification. In 2014, Green Business Certification Inc. began to provide third-party certification for WELL. By 2024, WELL was in use across more than 5 billion square feet of space in 130 countries, supporting an estimated 25 million occupants in nearly 74,000 commercial and residential locations.
Principles & concepts
WELL v2 follows best practices built on four tenets: evidence-based, verifiable, implementable, and feedback-focused. The principles of WELL v2 are to be equitable, global, evidence-based, technically robust, customer-focused, and resilient. WELL is a performance-based system in which Performance Verification is completed by an authorized WELL Performance Testing Agent.
Certification
There are two types of certification: WELL Certification for owner-occupied buildings and WELL Core Certification.
WELL Core is for buildings in which tenants occupy more than 75% of the space; such projects are not required to achieve minimum points in every subject.
To reach the WELL Silver, Gold, or Platinum level, a project must achieve at least 1, 2, or 3 points per subject, respectively; the WELL Bronze level has no per-subject minimum. WELL Core likewise has no per-subject minimum.
The optimization-point requirements for WELL Bronze, Silver, Gold, and Platinum are 40, 50, 60, and 80 points, respectively. The rating system counts at most 12 points per subject, except for Innovation, which is limited to 10 points. The total must not exceed 100 points.
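As an illustration of the point arithmetic described above, here is a minimal sketch in Python; the function name, input format, and handling of edge cases are invented for this example and are not part of the standard.

# Sketch of the WELL v2 scoring arithmetic described above.
# Not an official IWBI tool; the rules are transcribed from the prose.

def well_level(points_per_subject: dict[str, float], core: bool = False) -> str:
    # Cap each subject at 12 points, except Innovation, which is capped at 10.
    capped = {s: min(p, 10 if s == "Innovation" else 12)
              for s, p in points_per_subject.items()}
    total = min(sum(capped.values()), 100)  # the total may not exceed 100

    # Lowest per-subject score among non-Innovation subjects;
    # WELL Core and Bronze have no per-subject minimum.
    min_per_subject = min((p for s, p in capped.items() if s != "Innovation"),
                          default=0)

    for level, needed_total, needed_min in [("Platinum", 80, 3),
                                            ("Gold", 60, 2),
                                            ("Silver", 50, 1),
                                            ("Bronze", 40, 0)]:
        if total >= needed_total and (core or min_per_subject >= needed_min):
            return level
    return "Not certified"

subjects = ["Air", "Water", "Nourishment", "Light", "Movement",
            "Thermal Comfort", "Sound", "Materials", "Mind", "Community"]
scores = {s: 6 for s in subjects}
scores["Innovation"] = 5
print(well_level(scores))  # Gold: 65 points in total, at least 2 per subject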
Assessment
WELL-certified buildings must pass all precondition requirements, and they can earn optimization points from the extension subjects.
Air
Because users spend about 90% of their time indoors, they can be exposed to indoor air pollution, which can lead to headaches, dry throat, eye irritation, runny nose, asthma attacks, infection with legionella bacteria, and carbon monoxide poisoning. Indoor air pollution causes thousands of cancer deaths and some 100,000 respiratory cases annually. Avoidable costs in the U.S. could exceed 100 billion dollars annually: 45% from radon and tobacco, 45% from lost productivity, and 10% from respiratory diseases. Combustion sources such as candles, tobacco, stoves, furnaces, and fireplaces, which produce carbon monoxide, nitrogen dioxide, and small particles, are common. Furnishings, fabrics, and cleaning products emit volatile organic compounds (VOCs). These problems can be addressed by eliminating pollution sources and through design solutions. Air pollution led to 7 million premature deaths in 2012, around 600,000 of them children under 5 years old.
A01-A04 Precondition
Under the A01 Air Quality topic, WELL limits particulate matter (PM2.5 and PM10) to under 15 and 50 micrograms per cubic metre for normal regions or, for polluted regions, to 25 and 50 micrograms per cubic metre or 30% of the 24–48 hour average of outdoor levels, and sets thresholds for volatile organic compounds (VOCs) such as benzene, formaldehyde, and toluene at 10, 50, and 300 micrograms per cubic metre, or a total VOC limit of 500 micrograms per cubic metre. Inorganic gases are also limited: carbon monoxide to 10 milligrams per cubic metre and ozone to 100 micrograms per cubic metre. Radon is limited to under 0.15 becquerel per litre. WELL requires all air-quality parameters except radon to be monitored on a digital platform.
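The numeric limits above lend themselves to a simple compliance check. The following sketch transcribes the thresholds for normal regions from the paragraph above into Python; the dictionary keys and the function are illustrative assumptions, not an official WELL tool.

# Illustrative check of the A01 thresholds quoted above (normal regions),
# in micrograms per cubic metre unless noted otherwise. Not an IWBI tool.
A01_LIMITS = {
    "PM2.5": 15, "PM10": 50,                           # particulate matter
    "benzene": 10, "formaldehyde": 50, "toluene": 300,  # individual VOCs
    "TVOC": 500,                                        # total VOCs
    "CO": 10_000,                                       # carbon monoxide (10 mg/m3)
    "ozone": 100,
    "radon": 0.15,                                      # becquerel per litre
}

def a01_exceedances(measured: dict[str, float]) -> list[str]:
    """Return the list of parameters that exceed their A01 limit."""
    return [p for p, limit in A01_LIMITS.items()
            if measured.get(p, 0.0) > limit]

sample = {"PM2.5": 12.0, "PM10": 55.0, "TVOC": 310.0, "CO": 4_000, "radon": 0.10}
print(a01_exceedances(sample))  # ['PM10'] -- only PM10 is over its limit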
Under the A02 Smoke-free Environment topic, smoking and the use of electronic cigarettes indoors are not allowed; outdoors, smoking is allowed only at ground level and farther than 7.5 m from project apertures, including air-intake areas.
Under the A03 Ventilation Design topic, WELL requires existing or new mechanical ventilation systems to follow ASHRAE 62.1, EN standard 16798-1, AS 1668.2, or CIBSE Guide A: Environmental Design. Natural ventilation can be used without a mechanical ventilation system if the design follows the Natural Ventilation Procedure in ASHRAE 62.1, CIBSE AM10, or AS 1668.4 for at least 90% of the project area. Ventilation monitoring alone can be the solution if the indoor carbon dioxide level is kept under 900 ppm, with about 500 ppm outdoors.
Under the A04 Construction Pollution Management topic, the contractor must ensure that ducts are cleaned and protected from contamination, must filter the installed ventilation system during construction with at least 70% efficiency for particles of 3-10 micrometres, and must implement dust and moisture management measures such as temporary barriers, dust guards for saws, and walk-off mats at entryways.
A05 - A14 Optimization
Under the A05 Enhanced Air Quality topic, tightening the particulate-matter thresholds (PM2.5 and PM10) to under 12 and 30 micrograms per cubic metre receives 1 point, and to under 10 and 20, 2 points. Limiting volatile organic compounds such as benzene, formaldehyde, and toluene to under 3, 9, and 300 micrograms per cubic metre, plus acetaldehyde, acrylonitrile, caprolactam, and naphthalene to under 140, 5, 2.2, and 9 micrograms per cubic metre, receives 1 point. Limiting inorganic gases, with carbon monoxide under 7 milligrams per cubic metre and nitrogen dioxide under 40 micrograms per cubic metre, receives an additional point.
Under the A06 Enhanced Ventilation Design topic, meeting enhanced thresholds with mechanical and natural ventilation, demand-controlled ventilation (DCV), or an engineered natural ventilation system that keeps CO2 levels low, or supplying at least 50% of workstations in the breathing zone with an airspeed under 50 fpm at the user's head, receives 2 points. Implementing a displacement ventilation system or locating air diffusers at least 2.8 m above the floor receives an additional point.
Under the A07 Operable Windows topic, providing openable windows with access to outdoor air for 75% of regularly occupied areas (or 4% of the occupied area where a project has many floors) receives 1 point; another point follows if outdoor air quality (PM2.5), temperature, and relative humidity are displayed near the windows.
Under the A08 Air Quality Monitoring and Awareness topic, installing air-quality sensors and submitting the data to WELL provides 1 point, and display screens in the building to promote awareness provide another point.
Under the A09 Pollution Infiltration Management topic, entryway design, such as a 3-metre entryway system, measures that slow the movement of air from outdoors to indoors (a vestibule, revolving doors, or an air curtain), and management by wet cleaning once a week and vacuuming once a day provide 1 point; a building envelope designed to mitigate outside air pollution provides another point.
Under the A10 Combustion Minimization topic, points are earned by restricting combustion indoors or keeping combustion sources at least 3.3 m away from the building, and by limiting vehicle idling to 30 seconds.
Under the A11 Source Separation topic, removing pollution sources by design, with separately enclosed, negatively pressurized rooms or exhaust fans (returning air outdoors) for all bathrooms, kitchens, cleaning and chemical storage areas, and high-volume printers and copiers, provides 1 point.
Under the A12 Air Filtration topic, using media filters, in the ventilation system or in standalone devices, with efficiency appropriate to the outdoor air (the higher the outdoor PM2.5, the higher the required efficiency, from 35% to 95%), and replacing them annually, provides 1 point.
Under the A13 Enhanced Supply Air topic, ventilating occupied spaces entirely with outdoor air, or with air-cleaning devices such as an activated-carbon filter with 75% efficiency, a media filter, or ultraviolet germicidal irradiation (UVGI) validated under UL 2998 Zero Ozone Emissions Validation or Intertek Zero Ozone Verification, provides 1 point.
Under the A14 Microbe and Mold Control topic, implementing an ultraviolet radiation system for the HVAC coils provides 1 point.
Water
WELL Water aims to increase adequate hydration among building users and to reduce risks from contaminated water. The human body is about two-thirds water, and a water intake of around 2-3.7 litres per day is recommended to support respiration, perspiration, and excretion. Water high in nitrate can impair oxygen transport in infants, impairing neurodevelopment. Trihalomethanes (THMs) and haloacetic acids (HAAs) in water can cause cancer. Legionella control is needed in cooling systems and hot tubs. Materials must not support mold growth. Good bathroom design and better hand washing reduce the risks of enteric and respiratory diseases.
W01 - W03 Precondition
For W01 Water Quality Indicators, WELL requires a performance test limiting water turbidity to under 1.0 Nephelometric Turbidity Unit (NTU), Formazin Turbidity Unit (FTU), or Formazin Nephelometric Unit (FNU). Any 100 ml water sample must contain zero coliform bacteria.
For W02 Drinking Water Quality, the project must provide drinking water with limited chemical contamination from arsenic, cadmium, chromium, copper, fluoride, lead, mercury, nickel, nitrate, nitrite, chlorine, trihalomethanes, and haloacetic acids. Pesticide contamination must be controlled, including aldrin and dieldrin, atrazine, carbofuran, chlordane, 2,4-Dichlorophenoxyacetic acid, DDT, lindane, and pentachlorophenol. Organic contaminants are also limited, such as benzene, benzo(a)pyrene, carbon tetrachloride, 1,2-Dichloroethane, tetrachloroethene, toluene, trichloroethylene, 2,4,6-Tribromophenol, vinyl chloride, and xylene.
For W03 Basic Water Management, annual monitoring of parameters such as turbidity, pH, and residual free chlorine is required, and a Legionella management plan must be established for the building.
W04 - W08 Optimization
For W04 Enhanced Water Quality, meeting stricter thresholds for drinking-water contaminant levels provides 1 point.
W05 Drinking Water Quality Management provides a total of 3 points. For 2 points, the project must pre-test water quality at the farthest water dispenser on every 10 floors, covering turbidity, coliform bacteria, pH, total dissolved solids (TDS), total chlorine, residual (free) chlorine, arsenic, lead, copper, nitrate, and benzene, at least one month before Performance Verification, and must monitor the piped water that delivers drinking water, testing water from dispensers quarterly. Turbidity must be at most 1.0 NTU; pH must be between 6.5 and 9.0; TDS must be at most 500 mg/L; total chlorine must be at most 5 mg/L; residual (free) chlorine must be at most 5 mg/L; total coliforms must not be detected in a 100 ml sample; lead must be at most 1 microgram/L, with sampling frequency reducible to once a year if the results are under the limit two consecutive times; and copper must be at most 2 mg/L, with sampling frequency reducible to twice a year, and no further monitoring needed if the results are under the limit four consecutive times. Test results must be submitted to WELL annually. Finally, displaying water-management information to promote drinking-water transparency provides another point.
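Since several of the W05 limits are two-sided (pH must lie in a band, the other parameters below a ceiling), a range-based check is a natural representation. The sketch below uses the values quoted in the paragraph above; the parameter names and the function are illustrative assumptions, not an official WELL tool.

# Sketch of the W05 drinking-water limits quoted above, expressed as
# (minimum, maximum) ranges. Not an IWBI tool.
W05_RANGES = {
    "turbidity_NTU": (0.0, 1.0),
    "pH": (6.5, 9.0),
    "TDS_mg_per_L": (0.0, 500.0),
    "total_chlorine_mg_per_L": (0.0, 5.0),
    "free_chlorine_mg_per_L": (0.0, 5.0),
    "coliforms_per_100ml": (0.0, 0.0),  # must not be detected
    "lead_ug_per_L": (0.0, 1.0),
    "copper_mg_per_L": (0.0, 2.0),
}

def w05_violations(sample: dict[str, float]) -> dict[str, float]:
    """Return the measured values that fall outside their W05 range."""
    return {p: v for p, v in sample.items()
            if p in W05_RANGES
            and not (W05_RANGES[p][0] <= v <= W05_RANGES[p][1])}

sample = {"turbidity_NTU": 0.4, "pH": 9.3, "lead_ug_per_L": 0.5}
print(w05_violations(sample))  # {'pH': 9.3} -- only pH is out of range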
For W06 Drinking Water Promotion, encouraging people to drink water by providing at least one water dispenser per floor, within 30 metres of all users and in all dining areas, and designing for water-bottle refilling with maintenance, provides 1 point.
W07 Moisture Management provides a total of 3 points for limiting bacteria and mold growth. A building envelope that incorporates site drainage and stormwater management, air-tightness testing, vapor-pressure differentials, entryway strategies to minimize water permeation, a drainage plane from the interior to the exterior cladding, and limits on wicking in porous materials (using non-porous materials such as closed-cell foams, waterproofing membranes, or metal between porous materials, or free-draining spaces) provides 1 point. Protection or measures to eliminate condensation on cold surfaces (such as basements and slab-on-grade floors) and against liquid water, such as interior housewrap in basements, bathrooms, and kitchens, provides another point. A label or manual is required at the point of connection to the shut-off pipe, and all water-treatment devices need a backflow-prevention system such as an air gap or a backflow-preventer valve. The last point is earned by managing moisture through scheduled inspections and an in-building notification system, and by submitting all inspection results to WELL annually.
W08 Hygiene Support carries the highest points in the Water subject, 4 in total. Part 1, 1 point: bathrooms need a trash receptacle in each toilet stall if toilet paper cannot be flushed, free or at least 50% subsidized sanitary pads, storage support in toilet stalls, at least one wheelchair-accessible bathroom, one bathroom per project with an infant changing table, a syringe drop box, occupancy signage for single-user bathrooms, and self-priming liquid-seal traps in floor drains. For public projects such as airports, family bathrooms are required, containing an infant changing table, child-size toilets and sinks, motion sensors for lights, slip-resistant floors, grab bars, hooks or shelves for bags in toilet stalls, and wheelchair accessibility per local code. Part 2, 1 point: bathrooms need hands-free flushing toilets, contactless soap dispensers and hand drying, hands-free exit doors, and sensor-activated, programmable line-purge faucets. Part 3, 1 point: faucet design must avoid water flowing directly into the drain, limit splash inside the sink, and provide a sink at least 23 cm wide with a flowing water column at least 20 cm high and at least 7.5 cm from the sink edge. Part 4, 1 point: handwashing supplies must include fragrance-free liquid soap in sealed dispensers, paper towels or hand dryers with HEPA filters or fabric towel rolls, and signage with proper hand-washing steps.
Nourishment
The WELL Nourishment concept supports healthy and sustainable eating patterns by improving access to fruits and vegetables, limiting highly processed foods, and nudging users toward better choices. Poor nutrition contributes to one in five deaths globally. Unhealthy eating causes more harm than drug, alcohol, and tobacco use combined. Diets are usually low in fruits, vegetables, whole grains, nuts, and seeds, and instead flooded with highly processed foods such as refined sugars and oils. In 2019, the EAT-Lancet Commission published recommendations for healthy, sustainable diets. WELL takes a holistic approach, developing environmental conditions and supportive policies that influence users to change their diets.
N01 - N02 Precondition
For N01 Fruits and Vegetables, WELL requires food outlets to offer at least two varieties of fruits and of non-fried vegetables, clearly visible to users. At each food outlet, at least 50% of the food options must be fruits or non-fried vegetables.
For N02 Nutritional Transparency, other types of food, such as packaged foods and beverages, must display total calories per serving, macronutrients, and sugar content. The owner must communicate food-allergy information to users. High-sugar foods, those with more than 25 grams of sugar per serving, are banned from menus in dining spaces or at least identified so that users can make an informed decision.
N03 - N13 Optimization
N03 Refined Ingredients: restricting sugars, by limiting beverages to under 25 g of sugar per container, offering at least 25% sugar-free beverages, and keeping non-beverage foods (except fruit) under 25 g of sugar per serving, receives 1 point. Promoting whole grains, with at least 50% of grain-based foods containing whole grain as the main ingredient, receives another point.
N04 Food Advertising: eliminating advertising for sugary drinks and deep-fried foods, with sale areas promoting water, fruit, and vegetable consumption instead, receives 1 point.
N05 Artificial Ingredients: phasing out or restricting artificial ingredients such as colorings, sweeteners, preservatives, and artificial fats and oils, clearly labeled on packaging or signage, receives 1 point.
N06 Portion Sizes: dining spaces promoting healthy portion sizes receive 1 point by limiting portions to below 650 kcal (or keeping menu items over 650 kcal under 50% of the menu) and limiting dish and bowl sizes to a certain area or volume. Plates for primary-school students, secondary-school students, and adults may be no more than 20, 25, and 30 cm in diameter, respectively.
N07 Nutrition Education: providing nutritional knowledge through cooking demonstrations, dietary education by a nutritionist, or gardening workshops receives 1 point.
N08 Mindful Eating provides 2 points for dedicating an eating space within 200 metres' walking distance of the project boundary that contains tables and chairs for 25% of users at peak, protects against outdoor climate hazards, and provides a variety of seating options for small to large groups; where there are employees or students, a daily 30-minute meal break must also be provided.
N09 Special Diets, providing alternative foods for people with food allergies, carries a total of 2 points. Part 1, 1 point: providing food that excludes at least one of peanuts and tree nuts, gluten and wheat, soy, sesame, or animal products including seafood, dairy, and eggs. Part 2, 1 point: clearly labeling on packaging, menus, or signage when food contains peanuts, fish, shellfish, soy, milk, egg, wheat, tree nuts, sesame, or gluten.
N10 Food Preparation: providing space for users' meals, such as cold storage, a countertop, a sink for dish and hand washing, a microwave or toaster, reusable plates, and a garbage bin, receives 1 point.
N11 Responsible Food Sourcing: sourcing 50% of foods certified organic and 25% of animal-product lines certified sustainable, for example under the Global Sustainable Seafood Initiative (GSSI) Seafood Certification Scheme, with sustainability labeling, receives 1 point.
N12 Food Production receives 2 points for providing a garden or greenhouse with food-bearing plants, edible landscaping, or hydroponic or aeroponic farming that is accessible to users during regular hours, with a growing area (horizontal and vertical) of at least 0.09 sq m per user or 0.05 sq m per student (half of that for hydroponic and aeroponic systems), access to food-growing tools, and the area within 400 metres' walking distance.
N13 Local Food Environment: supporting local food, by locating the project within 400 metres' walking distance of a fruit-and-vegetable supermarket or a farmers' market open at least once a week, serving a local agriculture program, or hosting a weekly sale of fruits and vegetables, receives 1 point.
Light
The WELL Light concept aims to create lighting that reduces circadian disruption and improves sleep quality, mood, and productivity. Humans are diurnal, driven by the circadian system, an internal clock. Its disruption is linked to obesity, diabetes, depression, breast cancer, and metabolic and sleep disorders. Insufficient light can cause such disruption.
L01 - L02 Precondition
For L01 Light Exposure, daylighting must be thoroughly integrated in a project, demonstrated by daylight simulation, such as Spatial Daylight Autonomy, showing how much daylight illuminates spaces throughout working hours. Adequate daylighting can be determined from the interior layout or the building design, such as the distance from windows. For projects where daylight access is difficult, circadian lighting design can replace daylight, with the intrinsically photosensitive retinal ganglion cells (ipRGCs) receiving at least 150 equivalent melanopic lux (EML).
For L02 Visual Lighting Design, WELL retains conventional visual lighting design for users' visual comfort and acuity. Lighting design in WELL follows the Illuminating Engineering Society Lighting Library, EN standard 12464-1 and -2, ISO 8995-1, Chinese standard GB 50034, or the CIBSE SLL Code for Lighting. Alternatively, WELL allows the light-level thresholds of the U.S. General Services Administration's facilities standards.
L03 - L09 Optimization
For L03 Circadian Lighting Design, a project that chose the circadian-lighting option in the precondition automatically receives 1 point. A project that achieves at least 275 equivalent melanopic lux receives 2 more points.
For L04 Electric Light Glare Control, limiting glare from indoor artificial light receives 2 points, by using 100% upward lighting, fixtures with a Unified Glare Rating (UGR) of 16 or less, or fixtures not exceeding 6,000 candela/sq m at angles between 45 and 90 degrees from the bottom; compliance can be shown with lighting-calculation software yielding a UGR of 16 or less.
For L05 Daylight Design Strategies, providing indoor daylight exposure through design strategies earns points: a daylight plan with workstations within 7.5 metres of a window receives 1 point, or within 5.5 metres, 2 points. Integrating a solar shading system with manual control receives 1 point, or with year-round automated control, 2 points.
For L06 Daylight Simulation, a daylight calculation showing that at least 55% of the occupied project area receives 300 lux for more than 50% of annual hours of use earns 1 point; at least 75% earns 2 points.
L07 Visual Balance receives 1 point for a design meeting at least three of five parameters: a luminance ratio between horizontal and vertical planes of at most 10; a ratio of minimum to average illuminance on the horizontal task plane of at least 0.4; an automation system that changes light characteristics, at least light levels, over periods of at least 10 minutes; a consistent correlated color temperature (CCT) within plus or minus 200 kelvin; or a design by a lighting professional that takes those ratios into account.
L08 Electric Light Quality, covering the quality of light fixtures, carries a total of 3 points. All fixtures except decorative and emergency lights with a color rendering index (CRI) of at least 90, or a CRI of at least 80 with R9 of at least 50, or TM-30 color fidelity (Rf) of at least 78 and color gamut (Rg) of at least 100 with Rcs,h1 from -1% to 15%, receive 1 point. Luminaires classified as "reduced flicker operation" or meeting recommended practices 1, 2, or 3 of IEEE standard 1789-2015 for LEDs, or with short-term light flicker (Pst LM) and stroboscopic visibility measure (SVM) of at most 1.0 and 0.6 per NEMA 77-2017, receive 2 points.
L09 Occupant Lighting Control: providing individual lighting controls, one per 60 sq m or one per 10 occupants, receives 1 point; one per 30 sq m or one per 5 occupants receives 2 points, provided that each zone's lighting control offers at least three lighting levels, can change groups of lights with different beams, colors, or CCT, lets all users control lights manually via keypad or digital interface, and provides separate control of presentation lighting. Task lights provided at no cost to employees, with levels and direction controllable independently by users and shielded light sources, receive 1 point.
Movement
Promoting physical activity by creating opportunities through spaces has a substantial impact: reducing physical inactivity by 10% or 25% could avert more than half a million or one million deaths globally each year, respectively, and eliminating inactivity could increase global life expectancy by 0.68 years. Physical inactivity leads to premature death and chronic illness: type II diabetes, cardiovascular disease, depression, stroke, dementia, and cancer. 23% of adults globally are inactive, a figure driven by rising urbanization and economic development. Adults sit, on average, 3-9 hours daily, which is linked to the illnesses above.
V01 - V02 Precondition
V01 Active Buildings and Communities summarizes optimization points: WELL requires the project to achieve at least one point from four optimization features, specifically V03 Circulation Network (visible, openly accessible, and aesthetic stair circulation), V04 Facilities for Active Occupants (such as a cycling network with bike parking, or showers, lockers, and changing rooms), V05 Site Planning and Selection (such as a pedestrian-friendly environment or mass transit within walking distance), and V08 Physical Activity Spaces and Equipment (such as free sport opportunities and facilities, or green space for outdoor activities).
V02 Ergonomic Workstation Design is intended to let users adjust furniture freely, such as monitor position, work-surface height, chair, standing desk, and foot support, with orientation or instructions for users.
V03 - V10 Optimization
V03 Circulation Network provides a total of 3 points. Designing an aesthetic staircase, with music, artwork, light levels of at least 215 lux when in use, windows or skylights providing daylight or nature views, natural design elements, or gamification, receives 1 point. Integrating point-of-decision signage in the stair area, such as motivational elements, receives 1 point. A visible stair close to the entrance receives 1 point.
V04 Facilities for Active Occupants: providing a cycling network with short-term and long-term bike parking, basic bike-maintenance tools, and a minimum Bike Score of 50 or an existing cycling network within 200 m walking distance receives 2 points; adding showers, lockers, and changing facilities, with 16 places plus one for every 1,000 occupants, receives another point.
V05 Site Planning and Selection: site planning for walking and connection to public transportation, with a minimum Walk Score of 70 or pedestrian-friendly streets and footpaths within 400 m of the project boundary, receives 2 points; a walkable distance to mass transit receives another 2 points.
V06 Physical Activity Opportunities: providing physical activity for occupants at no cost, led by a qualified professional and not as a form of punishment, in at least one 30-minute session per week (or one 60-minute session per week for school students) receives 1 point; at least 150 minutes in total per week (or at least 60 minutes per day for school students) receives 2 points.
V07 Active Furnishings: providing manual or electric adjustable work surfaces, treadmills, stationary bicycles, or step machines at 50% or 90% of workstations receives 1 or 2 points, respectively.
V08 Physical Activity Spaces and Equipment: providing indoor fitness space at no cost receives 2 points, if the space either includes two types of exercise equipment allowing use by at least 5% of occupants at any time or has a minimum size of 25 sq m plus 0.1 sq m per occupant, up to 930 sq m; WELL also allows a free pass to a fitness facility within 200 metres' walking distance instead.
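The sizing rule in V08 ("base area plus a per-occupant increment, up to a cap") reduces to one line of arithmetic, sketched below with assumed names:

# Illustrative sketch of the V08 fitness-space sizing rule quoted above.
def required_fitness_area_m2(occupants: int) -> float:
    # 25 sq m base, plus 0.1 sq m per occupant, capped at 930 sq m.
    return min(25.0 + 0.1 * occupants, 930.0)

print(required_fitness_area_m2(400))    # 65.0
print(required_fitness_area_m2(20000))  # 930.0 (cap reached)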
V09 Physical Activity Promotion receives 1 point if the project offers employees at least two of five measures, such as prizes from sport competitions, subsidies for sports memberships, reduced health-care costs, flexible work hours, or at least four days of paid time off per year for physical activity, together with either 50% utilization of the incentive program or at least 10% improvement; where students are involved, it requires programs to reduce TV viewing, computer or smartphone use, and gaming, and to teach physical activity or provide movement breaks.
V10 Self-Monitoring Support receives 1 point for providing fitness trackers free of charge or at least 50% subsidized, measuring at least two physical activities and at least one additional metric such as sleep or mindfulness.
Thermal comfort
The Thermal Comfort concept takes a holistic approach, with interventions to help design buildings that address individual thermal discomfort, which is subjective even under the same conditions. One size does not fit all for large groups of people; personal thermal control should be used to improve productivity and decrease sick building syndrome. Given large numbers of people, thermal comfort conditions should create baseline satisfaction for the largest number of them. Thermal comfort greatly influences users and has one of the biggest impacts on motivation, alertness, focus, and mood. Buildings should provide an acceptable thermal environment to at least 80% of users; in fact, only 11% of buildings in the U.S. met accepted satisfaction levels, and 41% of occupants were dissatisfied, which is detrimental to business. Employees perform 15% worse in hot conditions and 14% worse in cold conditions.
T01 Thermal Performance Precondition
In the first part, WELL ensures that the indoor thermal environment is controlled. For HVAC control systems (mechanically conditioned spaces), thermal comfort under the PMV/PPD model must lie between -0.5 and 0.5 over 90% of regularly occupied spaces. For naturally conditioned spaces, the adaptive model applies: the prevailing mean outdoor temperature (tpma(out)), calculated from whole-day average outdoor temperatures, must be at least 10 degrees Celsius and below 33.5 degrees Celsius, and the acceptable indoor temperature ranges from 31% of tpma(out) plus 14.3 degrees Celsius up to 31% of tpma(out) plus 21.3 degrees Celsius. For example, at a tpma(out) of 33.5 degrees Celsius, the indoor temperature shall not exceed 31.7 degrees Celsius. If tpma(out) is above 33.5 degrees Celsius, a mechanically conditioned space must be used instead. WELL allows the project to rely on optimization points from T06 Thermal Comfort Monitoring with dry-bulb temperature data, or on thermal comfort surveys alone by achieving 2 points from T02 Verified Thermal Comfort (an 80% satisfaction survey of thermal comfort).
In the second part, semi-annual testing in summer and winter is required for dry-bulb temperature, relative humidity, air speed, and mean radiant temperature; alternatively, the project can simply achieve the T06 Thermal Comfort Monitoring feature.
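The adaptive-comfort arithmetic in the first part of T01 can be made concrete. The following sketch computes the acceptable indoor temperature band from the prevailing mean outdoor temperature, using the coefficients quoted above; the function name and error handling are assumptions for illustration, not an official WELL calculation tool.

# Sketch of the adaptive comfort band for naturally conditioned spaces
# as described in T01: acceptable indoor temperature between
# 0.31*tpma_out + 14.3 and 0.31*tpma_out + 21.3 degrees Celsius,
# applicable for tpma_out between 10 and 33.5 degrees Celsius.
def adaptive_comfort_band(tpma_out_c: float) -> tuple[float, float]:
    if not 10.0 <= tpma_out_c <= 33.5:
        raise ValueError("Outside the adaptive model's range; "
                         "a mechanically conditioned space is required.")
    return (0.31 * tpma_out_c + 14.3, 0.31 * tpma_out_c + 21.3)

low, high = adaptive_comfort_band(33.5)
print(f"{low:.1f} .. {high:.1f} C")  # 24.7 .. 31.7 C, matching the example above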
T02 - T07 Optimization
T02 Verified Thermal Comfort: occupant surveys showing 80% satisfaction receive 2 points, and 90% satisfaction receives 3 points, with a significant number of responses (35% of 45 users, 15 from 20 users, or 80% of 20 users) using the sample template.
T03 Thermal Zoning: providing thermostat control points, one per 60 sq m or one per 10 users for 1 point, and one per 30 sq m or one per 5 users for 2 points, presented through a digital interface on a computer or phone, with temperature sensors placed away from exterior walls, windows, doors, direct sunlight, air-supply diffusers, mechanical fans, heaters, and other significant sources of heat or cold.
T04 Individual Thermal Control: giving personal thermal control for heating and for cooling receives 1 point each. For cooling: a user-adjustable thermostat, a desk or ceiling fan, a chair with a mechanical cooling system, or another solution producing a PMV change of -0.5 within 15 minutes without changing the PMV for other occupants. For heating: a user-adjustable thermostat, an electric parabolic space heater, an electric heated chair or foot warmer, personal or shared blankets, or another solution producing a corresponding PMV change within 15 minutes without changing the PMV for other occupants. Allowing a flexible dress code receives another point.
T05 Radiant Thermal Comfort: radiant thermal comfort management for heating and for cooling receives 1 point each, by implementing radiant ceilings, walls, floors, or radiant panels over at least 50% of the regularly occupied project area.
T06 Thermal Comfort Monitoring, covering parameters such as dry-bulb temperature and relative humidity, receives 1 point, with a display screen and a website or mobile application for every 500 sq m of regularly occupied space, and annual submission of the data to WELL with proof of calibration.
T07 Humidity Control: controlling humidity receives 1 point, by having a mechanical system that can maintain relative humidity between 30% and 60% at all times, by submitting documentation modeling relative humidity in the space between 30% and 60% for at least 98% of business hours, or by meeting thermal comfort monitoring (T06) with relative humidity between 30% and 60%, excluding high-humidity spaces.
Sound
The Sound concept provides a holistic approach to the acoustical comfort of a space, measuring user satisfaction with the human response to mechanical vibrations transmitted through a medium such as air. Exterior noise and noise from HVAC systems and appliances cause sleep disturbance, hypertension, and reduced mental-arithmetic skills in children. Traffic noise at night can raise the risk of myocardial infarction. Poor reverberation times and background sound levels can affect speech intelligibility in educational areas, where aural comprehension is vital for memory retention. Planning and commissioning an isolated and balanced HVAC system sets a firm baseline for the anticipated background noise level. Adding facade elements such as mass and glazing, sealing gaps, and providing airspace between enclosed spaces can increase occupant comfort. Replacing hard surfaces with absorptive materials improves speech projection and acoustical privacy. A sound-masking system provides consistent background sound levels, lowering the signal-to-noise ratio of intruding speech to increase privacy.
S01 Sound Mapping Precondition
For acoustic zone labeling, the interior design must include an acoustic zoning plan, with labels such as loud zone, quiet zone, mixed zone, and circulation zone, to mitigate sound transmission from loud zones to quiet zones.
For the acoustic design plan, the design concept must incorporate acoustical comfort, background noise, speech privacy, and reverberation time and/or impact noise within the project boundary; alternatively, an acoustical engineering professional can evaluate existing sound conditions and recommend solutions and measurements.
S02 - S06 Optimization
S02 Maximum Noise Levels limits background noise, with sound pressure levels averaged over a period of five minutes. Meeting tier 1 receives 1 point: average sound pressure levels (SPL) for categories 1 to 4 from 40 to 55 dBA and from 60 to 75 dBC, and maximum SPL from 50 to 65 dBA or from 70 to 85 dBC. Meeting tier 2 receives 3 points instead: average SPL for categories 1 to 4 from 35 to 50 dBA and from 55 to 70 dBC, and maximum SPL from 45 to 60 dBA or from 65 to 80 dBC.
S03 Sound Barriers: sound barriers designed for sound isolation, with walls meeting Sound Transmission Class (STC) or Weighted Sound Reduction Index (Rw) values from a minimum of 40 up to 60 STC and doors of at least 30 STC, receive 1 point. Achieving a minimum Noise Isolation Class (NIC) or Weighted Level Difference (Dw) for each wall type from 35 to 55 NIC, or a sum of NIC or Dw and Noise Criteria rating (NC) or A-weighted sound pressure level (LAeq) of at least 70 to 85 (NIC + NC or Dw + LAeq), receives 2 points.
S04 Reverberation Time: limiting the persistence of sound in rooms receives 2 points; for example, music-rehearsal areas may have a reverberation time of at most 1.1 seconds and learning areas at most 0.6 seconds, verified by technical documentation or a performance test.
S05 Sound Reducing Surfaces: furnishing rooms with sound-absorbing surfaces meeting the criteria of tier 1 or tier 2 receives 1 or 2 points; for example, in open workspaces, ceilings with a minimum noise reduction coefficient (NRC) or alpha-w of 0.75 or 0.90, and furniture of minimum height 1.2 m above the finished floor with an NRC or alpha-w of at least 0.70, cumulatively over at least 10% of the occupied project area.
S06 Minimum Background Sound: a sound-masking system that provides artificial background sound receives 1 point for increasing privacy, producing a one-third-octave-band output signal over a frequency spectrum of at least 100 Hz to 5 kHz. Enhanced speech reduction automatically receives 1 point by achieving 2 points from S03 or S05 together with S06 part 1.
Materials
The Materials concept promotes the use of low-hazard cleaning products and practices that reduce health impacts, mitigation of contamination for public health, management of waste, and low-hazard pesticides. Some toxic materials prone to bioaccumulation carry legacy chemicals such as lead, which accounted for about one million deaths in 2017. CCA in wood structures can leach arsenic into soil where children can be exposed. Newer materials such as perfluorinated alkyl compounds (PFCs), orthophthalates, some heavy metals, and halogenated flame retardants (HFRs) perform well but cause negative health impacts. Volatile organic compounds (VOCs) are so common in insulation, paints, coatings, adhesives, furniture and furnishings, composite wood products, and flooring that they cause respiratory issues and cancer risks. The two core solutions are to increase knowledge of materials and to promote assessment and selection that minimize impacts.
X01 - X03 Precondition
For X01 Material Restrictions, asbestos in newly installed products is limited to under 1,000 ppm by weight or area. Mercury content in fluorescent and sodium-vapor lamps is limited to 2.5 mg to 32 mg, or the lamps must pass the Restriction of Hazardous Substances Directive (RoHS). Fire alarms, meters, sensors, relays, thermostats, and load-break switches are limited to no more than 1,000 ppm of mercury by weight and 100 ppm of lead, or must be RoHS-certified. Newly installed paints may contain no more than 100 ppm of lead by weight, certified against the Living Building Challenge's Red List, the Cradle to Cradle restricted list, or the REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) Candidate List of Substances of Very High Concern (SVHC). Drinking-water pipes are limited to 0.25% lead, or must be labeled per the American National Standards Institute (ANSI) or NSF.
For X02 Interior Hazardous Materials Management, if the building was constructed before the enactment of asbestos-banning laws, a qualified inspector must survey for asbestos-containing materials (ACM) using polarized light microscopy (PLM) or transmission electron microscopy (TEM), and any ACM found must be removed from the project. As with asbestos management, removal is required if lead-containing materials are found in paints. Polychlorinated biphenyls (PCBs) are restricted, especially in caulk, and must be removed.
For X03 CCA and Lead Management, outdoor materials such as chromated copper arsenate (CCA) are banned. Lead above the hazard limit in bare soil, turf, artificial turf, recycled tire material, and paint must be examined and removed or replaced.
X04 - X11 Optimization
X04 Site Remediation: assessing and mitigating site hazards through a remediation process following a risk-based approach to sustainable remediation receives 1 point.
X05 Enhanced Material Restrictions: at least 50% of furniture, millwork, and fixtures limited to 100 parts per million (ppm) of halogenated flame retardants (HFRs), polyfluoroalkyl substances (PFAS), lead, cadmium, and mercury, or containing no textiles or plastics at all, with all electrical products meeting RoHS restrictions, is awarded 1 point. Flooring products limited to 100 ppm of HFRs, PFAS, and orthophthalates, insulation products limited to 100 ppm of HFRs, ceiling and wall panels limited to 100 ppm of HFRs and orthophthalates, and pipes and fittings limited to 100 ppm of orthophthalates receive another point.
X06 VOC Restrictions: restricting volatile organic compounds from wet-applied products, furniture, and architectural and interior products receives 4 points.
X07 Materials Transparency: selecting products with disclosed ingredients, enhanced ingredient disclosure, and third-party-verified ingredients receives 1 point each, 3 points in total.
X08 Materials Optimization: at least 25 distinct products with ingredients inventoried to 100 ppm and free of compounds listed in the Living Building Challenge's Red List, the Cradle to Cradle restricted list, or the REACH Restriction and Substances of Very High Concern (SVHC) lists, including products purchased for future repair, receive 1 point. For optimized products, at least 15 distinct products must be certified under Cradle to Cradle Certified, the Living Product Challenge, or Global GreenTag.
X09 Waste Management: implementing a waste-management plan that identifies roles, sources, and protocols to clean and track waste receives 1 point.
X10 Pest Management and Pesticide Use: implementing integrated pest management (IPM) for indoor and outdoor spaces receives 1 point.
X11 Cleaning Products and Protocols: developing a cleaning plan receives 2 points.
Mind
Mental health issues are estimated to account for 14.3% of deaths worldwide, 8 million deaths per year; together with substance use, they account for 13% of the global burden of disease and 32% of years lived with disability. Alcohol and drug use contribute significantly to premature death, with alcohol alone linked to 3.3 million deaths and 5% of the global burden of disease. Depression and anxiety rank first and sixth in the global burden of disease, with depression accounting for 4% of the global burden of disease and causing the largest share of disability worldwide.
Depression and anxiety cost the global economy 1 trillion dollars in lost productivity. 18% of adults experience these conditions, and over 30% of adults will experience them during their lifetime; despite that, spending to fight the issue has been less than 2 dollars per person. In high-income and low-income countries, 35-50% and 76-85% of affected people, respectively, go without treatment, and the conditions contribute to more than 800,000 suicide deaths per year worldwide. People with these conditions have a mortality rate 2.2 times higher than normal and a median loss of 10 years of life.
These issues can be mitigated by policies, programs, and design, through workplace promotion, prevention, and interventions. Reducing stigma, promoting positive work environments, stress management programs and strategies, substance-use services and treatment, optimal sleep, and increased contact with nature can improve overall mental health.
M01 - M02 Precondition
For M01 Mental Health Promotion, WELL ensures that users receive at least two of five offerings: quarterly education on mental health, annual trainings, a weekly mindfulness program, healthy working hours, or a space for relaxation. The project also sends users some form of communication, such as annual communications and onboarding, to address mental health and well-being benefits and resources.
For M02 Nature and Place, common spaces, rooms, and circulation routes must integrate natural elements, such as natural shapes and materials, plants, water, and nature views. The project must be designed to provide a celebration of culture and social cognition, a celebration of place, integration of art, and human delight that connects to place, per the Living Building Challenge 4.0, Core Imperative 19 (Beauty + Biophilia).
M03 - M11 Optimization
M03 Mental Health Services: offering mental health screening for depression and substance use, a licensed mental health professional, and guidance on next steps receives 1 point. Mental health services, such as clinical screening, inpatient treatment, outpatient treatment, prescription medication at no or subsidized cost, information on benefits coverage, and consultation, receive 1 point. Supporting sick leave, short-term and long-term leave, interpersonal support, adjusted work schedules, and adjustment of the workplace to quieter areas receives 1 point. Mental health recovery support, through trauma-focused psychotherapy, psychological first aid (PFA), bereavement counseling, and information on benefits coverage for additional services, receives 1 point.
M04 Mental Health Education: mental health education for regular occupants, on managing personal mental health at work, provided in person or virtually, receives 1 point; health education for managers, on reducing workplace stress and burnout and on motivation, receives an additional point.
M05 Stress Management: developing a stress management plan, by assessing overwork (more than 48 hours per week), absenteeism, unused paid time off, performance, turnover rates, and survey results, and improving employee stress through organizational change with employee participation, receives 2 points.
M06 Restorative Opportunities: supporting healthy working hours, by providing a minimum of 11 consecutive resting hours per day and 24 consecutive hours off per week (48 hours for those in shift work), and, for eligible employees, a minimum of 20 days of paid time off per year, no work during time off, clearly defined sick and vacation leave, and a defined accrual policy (schools must not start earlier than 8:30 am), receives 1 point, with an additional point for allowing naps in a well acoustically separated nap area for 1% of eligible employees, for at least 30 minutes.
M07 Restorative Spaces: providing break spaces where users can restore themselves and find relief from fatigue, with a minimum of 7 sq m plus 0.1 sq m per regular user, up to 186 sq m, featuring calming adjustable lighting, sound interventions with natural features, thermal control, movable lightweight chairs, cushions, natural elements, subdued colors, and visual privacy, including signage explaining the space's purpose, receives 1 point.
M08 Restorative Programming: restorative programming such as mindfulness training courses, yoga, or digital mindfulness offerings receives 1 point.
M09 Enhanced Access to Nature: a floor plan in which at least 75% of workstations have a sight line to natural elements within 10 metres receives 1 point; providing outdoor nature access, with at least one green space or blue space within 200 metres' walking distance of the boundary and total green space of at least 0.5 hectare, receives an additional point.
M10 Tobacco Cessation: providing resources and motivation, rewards, counseling, and prescriptions for quitting tobacco, together with limits on tobacco, receives 3 points.
M11 Substance Use Services: education on drug use receives 1 point, with an additional point for clinical services.
Community
An estimated 235 million urban families live in substandard housing, leading to poor health outcomes such as asthma, infectious disease, and cardiovascular disease. Only 55% of U.S. companies see diversity as a priority, and in the UK, women earn 80.2% of what men earn. Spaces are usually not designed for diverse needs. Surveying occupants can bring greater returns on investment. Fostering civic engagement can increase employee retention, attraction, and financial returns. Design plays a critical role in giving all users access.
C01 - C04 Precondition
C01 Health and Well-Being Promotion: providing a WELL feature guide and communicating regularly with users.
C02 Integrative Design: incorporating all stakeholders to set the health and well-being goals for the project.
C03 Emergency Preparedness: implementing emergency management planning; post-occupancy evaluation is also required.
C04 Occupant Survey: implementing a survey program for users.
C05 - C14 Optimization
C05 Enhanced Occupant Survey, 4 points in total: using additional surveys and analysis (1 point), comparing pre- and post-occupancy surveys (1 point), implementing a plan addressing aspirational and unmet satisfaction (1 point), and running focus-group interviews and evaluating the results (1 point).
C06 Health Services and Benefits: providing employees with a health benefits policy at no or subsidized cost, offering on-demand health services, offering sick leave, and supporting vaccination programs receive 1 point each, 4 points in total.
C07 Enhanced Health and Well-Being Promotion: promoting a culture of health through communications and a promotion group receives 1 point. Having at least one dedicated executive-level employee to plan and promote healthy activities receives 1 point.
C08 New Parent Support: offering paid new-parent leave of at least 12, 18, or 30 weeks receives 1, 2, or 3 points, respectively. Breastfeeding support, such as break time and access to an insulated cooler or a refrigerator, receives 1 point. C09 New Mother Support: providing a lactation room of at least 2.1 x 2.1 m with appropriate amenities in the building receives 2 points.
C10 Family Support: childcare support services, family leave, and bereavement support receive 1 point each, 3 points in total.
C11 Civic Engagement: promoting community engagement and providing community space for employees receive 1 point each, 2 points in total.
C12 Diversity and Inclusion: creating a diversity, equity, and inclusion (DEI) assessment and action plan, a DEI system, and DEI hiring practices receive 1 point each, 3 points in total.
C13 Accessibility and Universal Design: implementing universal design receives 2 points.
C14 Emergency Resources: providing emergency information and procedures, a building notification system, automated external defibrillators (AEDs), first aid kits, and emergency training for medical emergencies and security teams, or training in cardiopulmonary resuscitation (CPR), first aid, and AED usage, receives 1 point. Providing an opioid overdose kit, such as naloxone, for emergencies receives 1 point.
Innovation
I01 - I05 Optimization
This category has no requirements, but it can provide additional points: new interventions, in up to 10 topics, receive 1 point each. If one member of the project team is a WELL AP, the project automatically receives 1 point. Offering WELL educational tours at least six times per year receives 1 point. If the project commits to any well-being or health program approved by IWBI and completed within three years, it receives 1 point. If the project achieves any green building certification approved by IWBI, it receives 5 points toward the rating.
Performance
Scientific research provides information about the standard’s performance. Existing research focuses on the evaluation of indoor environmental quality (IEQ) parameters. The certification requires post-occupancy evaluation, which allows occupants to provide feedback to building owners and management on these IEQ parameters. For buildings with 10 or more occupants, the Occupant Indoor Environmental Quality (IEQ) Survey from the Center for the Built Environment at UC Berkeley (or an approved alternative) is completed by a representative sample of at least 30% of occupants at least once per year. The survey covers the following topics: acoustics, thermal comfort, furnishings, workspace light levels and quality, odors and air quality, cleanliness and maintenance, and layout.
In 2020, researchers analyzed 1,121 post-occupancy evaluation surveys conducted in nine offices, two WELL-certified and seven not WELL-certified. Results of the study were mixed, with higher occupant satisfaction in the WELL-certified buildings for spatial comfort, thermal comfort, noise and privacy, personal control, and workspace comfort, but lower satisfaction for visual comfort and connection to the outside in comparison with non-WELL certified buildings. These findings may be attributable to the types of non-WELL certified buildings used in the comparison, as they may already be high-performance buildings in other regards which do not necessarily satisfy all of the WELL certification’s criteria.
In 2021, another study on surveys compared the results of three rounds of occupant IEQ satisfaction surveys reported by three groups of employees who moved from three non-WELL (two BREEAM and one non-certified) to three WELL-certified office buildings. For two out of the three building pairs, there was a statistically significant increase in building and workspace satisfaction after relocation to WELL buildings. However, for 55% of certification parameters for the three compared cases, there was an insignificant difference upon relocation. Results found higher occupant satisfaction for building cleanliness and furniture but no increase in satisfaction with noise and visual comfort.
Another 2021 study investigated indoor air quality (IAQ) before and after relocation to WELL-certified office buildings. The results indicated there was no significant concentration difference for the majority of measured air pollutants between non-WELL and WELL buildings.
In 2022, researchers conducted a pre- versus post-occupancy evaluation of approximately 1,300 workers transitioning to WELL-certified offices from non-WELL certified offices. Using pre- and post-occupancy surveys, overall satisfaction rates improved from 42% (pre-occupancy) to 70% (post-occupancy) across all parameters. The largest increases in satisfaction were for cleanliness and access to nature, while occupants were most satisfied with maintenance and lighting in WELL-certified offices.
In 2023, researchers analyzed 1,403 post-occupancy evaluation surveys from 14 open-plan offices (10 of which were WELL-certified and four of which were uncertified) in Australia, New Zealand, and Hong Kong. The five offices that achieved the highest satisfaction in interior design, indoor air quality, privacy and connection to the outdoor environment were WELL-certified. No significant differences in health were found between WELL-certified and non-WELL certified buildings as quantified by questions about physical and mental health presented in the post-occupancy evaluation surveys.
In 2024, researchers used a statistical matching approach to compare occupant satisfaction from 3,268 surveys from 20 WELL-certified and 49 LEED-certified buildings. Overall building and workplace satisfaction was found to be high in WELL-certified buildings (94% and 87%). Statistical analysis revealed that there is a 39% higher probability of finding an occupant that is satisfied with the building overall in a WELL-certified building than a LEED-certified building. Although satisfaction was found to be higher in WELL-certified buildings, satisfaction in LEED-certified buildings with the building, workspace, and most IEQ parameters was still relatively high. Temperature and sound privacy in LEED-certified buildings are the only parameters with mean satisfaction values less than “neutral” amongst all studied parameters in LEED- and WELL-certified buildings.
References
Building biology
Building engineering
Environment of the United States
Environmental design
Sustainable building in the United States
Sustainable building rating systems | WELL Building Standard | [
"Engineering"
] | 11,085 | [
"Environmental design",
"Building engineering",
"Civil engineering",
"Design",
"Building biology",
"Architecture"
] |
74,470,076 | https://en.wikipedia.org/wiki/Coulomb%20gas | In statistical physics, a Coulomb gas is a many-body system of charged particles interacting under the electrostatic force. It is named after Charles-Augustin de Coulomb, as the force by which the particles interact is also known as the Coulomb force.
The system can be defined in any number of dimensions. While the three-dimensional Coulomb gas is the most experimentally realistic, the best understood is the two-dimensional Coulomb gas. The two-dimensional Coulomb gas is known to be equivalent to the continuum XY model of magnets and the sine-Gordon model (upon taking certain limits) in a physical sense, in that physical observables (correlation functions) calculated in one model can be used to calculate physical observables in another model. This aided the understanding of the BKT transition, and the discoverers earned a Nobel Prize in Physics for their work on this phase transition.
Formulation
The setup starts with considering N charged particles in \mathbb{R}^d with positions x_1, \ldots, x_N and charges q_1, \ldots, q_N. From electrostatics, the pairwise potential energy between particles labelled by indices i and j is (up to a scale factor)

E_{ij} = q_i q_j \, g(x_i - x_j),

where g is the Coulomb kernel or Green's function of the Laplace equation in d dimensions, so

g(x) = \begin{cases} -\log|x|, & d = 2, \\ |x|^{2-d}, & d \geq 3. \end{cases}

The free energy due to these interactions is then (proportional to) E = \sum_{i<j} q_i q_j \, g(x_i - x_j), and the partition function Z = \int \prod_{i=1}^{N} dx_i \, e^{-\beta E} is given by integrating over different configurations, that is, the positions of the charged particles.
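For illustration, the energy and Boltzmann weight of a single configuration can be evaluated directly. The following sketch uses the two-dimensional kernel g(x) = -log|x|; the particle positions, charges, and inverse temperature are arbitrary illustrative choices:

```python
import numpy as np

def coulomb_energy_2d(positions, charges):
    """E = sum over pairs i<j of q_i q_j g(x_i - x_j), with g(x) = -log|x| in 2D."""
    n = len(charges)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            energy -= charges[i] * charges[j] * np.log(r)
    return energy

rng = np.random.default_rng(0)
positions = rng.uniform(-1.0, 1.0, size=(3, 2))  # three particles in the plane
charges = np.array([+1.0, -1.0, +1.0])           # arbitrary charge assignment
beta = 1.0                                       # inverse temperature
E = coulomb_energy_2d(positions, charges)
print(E, np.exp(-beta * E))                      # energy and unnormalized Boltzmann weight
```

Summing such weights over sampled configurations (or integrating over all positions) yields the partition function.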
Coulomb gas in conformal field theory
The two-dimensional Coulomb gas can be used as a framework for describing fields in minimal models. This comes from the similarity of the two-point correlation function of the free boson \varphi,

\langle \varphi(z) \varphi(w) \rangle \sim -\log|z - w|,

to the electric potential energy between two unit charges in two dimensions.
See also
Sine-Gordon equation
XY model
References
Statistical mechanics | Coulomb gas | [
"Physics"
] | 362 | [
"Statistical mechanics"
] |
74,471,367 | https://en.wikipedia.org/wiki/Pneumatic%20anti-ice%20system | A pneumatic anti-ice system is a technology that uses air or another gas to prevent ice buildup on ships sailing in icy waters. It is housed below the waterline on the ship's hull. Pneumatic anti-ice systems use compressed air or engine exhaust as the working gas, which is vented overboard through a series of ejectors from bow to amidships. Since the ejectors are located below the waterline or near the keel, the airflow streaming from them forms a water/air curtain along the hull.
History
The concept of a ship anti-icing system in the form of a water-air boundary layer was introduced in 1966 in the USSR. Variants of a heated steam-air system in the waterline area were considered, and the prospects for its use as a thruster to increase maneuverability were studied. The modern form was proposed by Wärtsilä in 1969 and was first installed on Finnish cargo ferry Finncarrier. It was tested in the Baltic Sea in 1970. The first icebreaker on which the pneumatic anti-ice system was installed was the Yermak, built in 1974.
Performance
The adhesion of ice to the hull has thermal and electrostatic aspects. The processes involved develop too quickly for above-the-waterline ice to warm to the ambient water temperature, as a result of which it freezes or sticks to the hull. Air flushing reduces the contact area of the ice with the hull and raises the temperature by creating an upward current of warmer water from greater depth, thereby solving the first problem. Another mechanism is the accumulation of an electrostatic charge in the ice when it cracks and breaks. When the underwater paint coating of the hull is in unsatisfactory condition, the system can become ineffective at preventing ice from sticking.
References
Transport safety
Ice in transportation | Pneumatic anti-ice system | [
"Physics"
] | 374 | [
"Physical systems",
"Transport",
"Ice in transportation",
"Transport safety"
] |
75,767,608 | https://en.wikipedia.org/wiki/Sulcus%20primigenius | The (Latin for "initial furrow") was the ancient Roman ritual of plowing the boundary of a new cityparticularly formal coloniesprior to distributing its lots or erecting its walls. The Romans considered the ritual extremely ancient, believing their own founder Romulus had introduced it from the Etruscans, who had also fortified most of their cities. The ritual had the function of rendering the course of the city wall sacrosanct but, owing to the necessity of some profane traffic such as the removal of corpses to graveyards, the city gates were left exempted from the ritual.
Ritual
According to surviving classical sources, the ritual needed to occur on an auspicious day of the Roman calendar, further confirmed by augury or similar consultation of omens. The magistrate or other official in charge of the ceremony personally set a bronze plowshare on a wooden ard, which was then attached to a yoked pair of cattle. All literary sources state that the team should consist of a cow on the left and a bull on the right, driven counterclockwise so that the cow was to the inside and the bull to the outside, although surviving numismatic evidence appears to show only bulls or standard oxen instead. The ritual was solemn enough that it needed to be performed togate and with covered head but, as it required the use of both hands, the magistrate's toga was worn wrapped tightly and cinched in Gabine style. In this manner, the magistrate whipped the cattle around the entire course of the future city walls. All of the clods of earth raised by the plow were supposed to fall to the inside, which was accomplished by keeping the plow crooked and by men following the magistrate and plow. This procedure simultaneously established an initial city wall from the clods and its protective ditch from the furrow itself. This course was considered sacred and inviolable, which required that the plow be lifted across the locations of the future city gates so that it would be religiously permissible to enter and leave the town, particularly with profane cargo such as corpses or waste. The cattle were sacrificed at the end of the procedure. The city wall was subsequently raised over the earth beside the furrow, whose inner boundary set the outer limits for subsequent auspices performed by the city.
In Latin, the verb used to describe performing this ritual meant "to trace". The Romans considered it an inheritance from Etruscan religion, meaning that it was presumably included among the sections on the founding of cities in the now-lost Books of Ritual. For the Romans, the sulcus primigenius was the essential establishment of a city, to the point that Roman law held as late as Justinian that the furrow of the plow was the formal delimitation of a city's territory. In like manner, plows were used to deconsecrate walls, undoing any former ritual and removing any religious stigma from their destruction.
Rome
Plutarch relates the Roman legend that Romulus was guided in the foundation of Rome by Etruscan priests. The day, the 30th of an early Roman month and a new moon, was supposedly marked by a conjunction of the sun and moon producing an eclipse, although modern scholars consider this a mistaken backward application of celestial tables of Plutarch's time, and no actual eclipse occurred within a century of the suggested date. After creating a circular pit or trench (the mundus), Romulus had the city's initial settlers throw soil from their homelands into it along with representative sacrifices of the necessities and luxuries of settled life. Plutarch places this in a valley at the Comitium, although most accounts placed Romulus's settlement on the Palatine Hill. Romulus then plowed the sulcus primigenius, establishing Rome's quadrangular first walls and initial sacred boundary. In his discussion of Claudius's later expansion of the pomerium, Tacitus relates that his own belief was that Romulus's furrow and Rome's initial boundary, though unmarked by the 1st century when he was writing, had included the Altar of Hercules in the Forum Boarium and then ran east along the base of the Palatine to the Altar of Consus before turning north to include the Curia Hostilia and the shrine of the Lares Praestites at the Regia and ending at the Forum Romanum; this is only two sides of the course but, since he ascribes the inclusion of the Forum and the Capitol to Titus Tatius, it presumably would have run along the other two sides of the Palatine. (Lanciani notes several problems with this proposed course, which in the archaic period would have probably run through marshland.) Dionysius of Halicarnassus, possibly overstating the point, states that Romulus's furrow was continuous rather than leaving the necessary spaces at the wall's gates. Dionysius then states that Romulus offered sacrifices and provided public games. Before the settlers could enter the city and build their houses, he lit fires before their tents, which they leapt over to expiate any previous guilt or offense and to purify themselves. They then offered their own sacrifices, each as well as they were able. (Against this, Plutarch held the Parilia festival was long kept without any sacrifice at all to commemorate the sanctity of the event of the city's founding.) When the city's walls were later expanded by Rome's kings and under the Republic, the formal sacred boundary was marked with boundary stones. Varro noted the same had been done at Aricia.
Other settlements
The Romans thought many of the Latin towns had been established by the same ritual and used it for all of their formal colonies. Under influence from the Etruscans and Greeks, such colonies were typically established with Hippodamian grids or similar centuriation, meaning their walls' gates were typically placed at each end of major thoroughfares known as cardines and decumani. The walls frequently varied from perfect squares or rectangles, however, owing to local topography.
The sulcus primigenius was a common reverse type for coins issued by the colonies, often appearing with their first issues but sometimes continuing in use for centuries thereafter. The typical form was to show a magistrate goading a team of oxen with a raised whip. The design was sometimes localized through the inclusion of legionary vexilla or adjusting the cattle to reflect the size of local livestock. Nearly 30 examples of such issues are known, ranging from Iulia Constantia Zilil in Mauretania to Rhesaina in Mesopotamia.
Literature
In Vergil's Aeneid, the hero Aeneas sees the Carthaginians following the ritual and later lays out Lavinium in Italy with his own plow.
As noted by Varro, Pomponius, Isidore, and St. Augustine, the Romans generally derived the etymology of urbs ("city") itself from orbis ("sphere") with regard to the ritual furrow established at its creation.
See also
Agriculture in ancient Rome
Ancient Roman defensive walls
Glossary of Roman religion
References
Ancient Roman city planning
Roman agriculture
Roman law
Topography of the ancient city of Rome
Ancient Roman religious practices
Ancient Roman architecture
Ancient Roman geography
Urban geography
Urban design
City founding
Religious rituals
Animal festival or ritual
State ritual and ceremonies
Rituals attending construction | Sulcus primigenius | [
"Engineering"
] | 1,568 | [
"Construction",
"Rituals attending construction"
] |
75,768,013 | https://en.wikipedia.org/wiki/Cost-sensitive%20machine%20learning | Cost-sensitive machine learning is an approach within machine learning that considers varying costs associated with different types of errors. This method diverges from traditional approaches by introducing a cost matrix, explicitly specifying the penalties or benefits for each type of prediction error. The inherent difficulty which cost-sensitive machine learning tackles is that minimizing different kinds of classification errors is a multi-objective optimization problem.
Overview
Cost-sensitive machine learning optimizes models based on the specific consequences of misclassifications, making it a valuable tool in various applications. It is especially useful in problems with a high imbalance in class distribution and a high imbalance in associated costs.
Cost-sensitive machine learning introduces a scalar cost function in order to find one (of multiple) Pareto optimal points in this multi-objective optimization problem.
Cost Matrix
The cost matrix is a crucial element within cost-sensitive modeling, explicitly defining the costs or benefits associated with different prediction errors in classification tasks. Represented as a table, the matrix aligns true and predicted classes, assigning a cost value to each combination. For instance, in binary classification, it may distinguish costs for false positives and false negatives. The utility of the cost matrix lies in its application to calculate the expected cost or loss. The formula, expressed as a double summation, utilizes joint probabilities:

\mathbb{E}[\mathrm{cost}] = \sum_{i} \sum_{j} P(i, j) \, C(i, j)

Here, P(i, j) denotes the joint probability of actual class i and predicted class j, and C(i, j) is the corresponding entry of the cost matrix, providing a nuanced measure that considers both the probabilities and associated costs. This approach allows practitioners to fine-tune models based on the specific consequences of misclassifications, adapting to scenarios where the impact of prediction errors varies across classes.
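A minimal sketch of this computation (the cost matrix and joint probabilities below are hypothetical values, e.g. as might be estimated from a validation set):

```python
import numpy as np

# cost[i][j]: penalty for predicting class j when the actual class is i.
# Here a false negative (actual 1, predicted 0) is penalized ten times
# more heavily than a false positive.
cost = np.array([[0.0, 1.0],
                 [10.0, 0.0]])

# joint[i][j]: joint probability P(actual = i, predicted = j); sums to 1.
joint = np.array([[0.60, 0.05],
                  [0.10, 0.25]])

expected_cost = np.sum(joint * cost)  # the double summation over i and j
print(expected_cost)                  # 0.05 * 1 + 0.10 * 10 = 1.05
```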
Applications
Fraud Detection
In the realm of data science, particularly in finance, cost-sensitive machine learning is applied to fraud detection. By assigning different costs to false positives and false negatives, models can be fine-tuned to minimize the overall financial impact of misclassifications.
Medical Diagnostics
In healthcare, cost-sensitive machine learning plays a role in medical diagnostics. The approach allows for customization of models based on the potential harm associated with misdiagnoses, ensuring a more patient-centric application of machine learning algorithms.
Challenges
A typical challenge in cost-sensitive machine learning is the reliable determination of the cost matrix which may evolve over time.
Literature
Cost-Sensitive Machine Learning. USA, CRC Press, 2011.
Abhishek, K., Abdelaziz, D. M. (2023). Machine Learning for Imbalanced Data: Tackle Imbalanced Datasets Using Machine Learning and Deep Learning Techniques. (n.p.): Packt Publishing.
References
Machine learning | Cost-sensitive machine learning | [
"Engineering"
] | 540 | [
"Artificial intelligence engineering",
"Machine learning"
] |
75,770,177 | https://en.wikipedia.org/wiki/Trigeminal%20nerve%20stimulation | Trigeminal nerve stimulation (TNS) or external Trigeminal nerve stimulation (eTNS) is a non-invasive, non-medication therapy for Attention deficit hyperactivity disorder approved in the United States by the FDA for the treatment of ADHD in children ages 7–12. It is also used off-label to treat ADHD in adults.
External trigeminal nerve stimulation (eTNS) is similar to transcutaneous electrical nerve stimulation (TENS), a treatment for chronic pain. A small device supplies electricity to electrodes that are placed on the skin. The device is able to modulate the intensity and frequency of electrical impulses delivered to the nerve endings in the skin.
There is ongoing investigation and research into the use of trigeminal nerve stimulation to treat other psychiatric disorders, such as depression and PTSD.
References
Physical psychiatric treatments
Electrotherapy
Neurophysiology
Neuropsychology
Neurotechnology
Treatment of depression
Attention deficit hyperactivity disorder management
Medical devices
Bioelectromagnetics | Trigeminal nerve stimulation | [
"Biology"
] | 212 | [
"Medical devices",
"Medical technology"
] |
75,775,878 | https://en.wikipedia.org/wiki/Organoid%20intelligence | Organoid intelligence (OI) is an emerging field of study in computer science and biology that develops and studies biological wetware computing using 3D cultures of human brain cells (or brain organoids) and brain-machine interface technologies. Such technologies may be referred to as OIs.
Differences with non-organic computing
As opposed to traditional non-organic silicon-based approaches, OI seeks to use lab-grown cerebral organoids to serve as "biological hardware." Scientists hope that such organoids can provide faster, more efficient, and more powerful computing power than regular silicon-based computing and AI while requiring only a fraction of the energy. However, while these structures are still far from being able to think like a regular human brain and do not yet possess strong computing capabilities, OI research currently offers the potential to improve the understanding of brain development, learning and memory, potentially finding treatments for neurological disorders such as dementia.
Thomas Hartung, a professor from Johns Hopkins University, argues that "while silicon-based computers are certainly better with numbers, brains are better at learning." He further claimed that OIs could potentially harness more power than current computers: they have "superior learning and storing" capabilities compared with AIs and are more energy efficient, and while in the future it might not be possible to add more transistors to a single computer chip, brains are wired differently and have more potential for storage and computing power.
Some researchers claim that even though human brains are slower than machines at processing simple information, they are far better at processing complex information, as brains can deal with fewer and more uncertain data, perform both sequential and parallel processing, are highly heterogeneous, can use incomplete datasets, and are said to outperform non-organic machines in decision-making.
Training OIs involves the process of biological learning (BL), as opposed to machine learning (ML) for AIs. BL is said to be much more energy efficient than ML.
Bioinformatics in OI
OI generates complex biological data, necessitating sophisticated methods for processing and analysis. Bioinformatics provides the tools and techniques to decipher raw data, uncovering the patterns and insights. A Python interface is currently available for processing and interaction with brain organoids.
Intended functions
Brain-inspired computing hardware aims to emulate the structure and working principles of the brain and could be used to address current limitations in artificial intelligence technologies. However, brain-inspired silicon chips are still limited in their ability to fully mimic brain function, as most examples are built on digital electronic principles. One study performed OI computation (which they termed Brainoware) by sending and receiving information from the brain organoid using a high-density multielectrode array. By applying spatiotemporal electrical stimulation, nonlinear dynamics, and fading memory properties, as well as unsupervised learning from training data by reshaping the organoid functional connectivity, the study showed the potential of this technology by using it for speech recognition and nonlinear equation prediction in a reservoir computing framework.
Ethical concerns
While researchers are hoping to use OI and biological computing to complement traditional silicon-based computing, there are also questions about the ethics of such an approach. Examples of such ethical issues include OIs gaining consciousness and sentience as organoids and the question of the relationship between a stem cell donor (for growing the organoid) and the respective OI system.
Enforced amnesia and limits on duration of operation without memory reset have been proposed as a way to mitigate the potential risk of silent suffering in brain organoids.
References
Artificial intelligence
Computational fields of study
Computational neuroscience
Developmental neuroscience
Formal sciences
Intelligence by type
Stem cells
Synthetic biology | Organoid intelligence | [
"Technology",
"Engineering",
"Biology"
] | 747 | [
"Synthetic biology",
"Biological engineering",
"Computational fields of study",
"Bioinformatics",
"Molecular genetics",
"Computing and society"
] |
75,782,067 | https://en.wikipedia.org/wiki/C6H13NO3S | {{DISPLAYTITLE:C6H13NO3S}}
The molecular formula C6H13NO3S (molar mass: 179.23 g/mol) may refer to:
Cyclamic acid
Fudosteine
Molecular formulas | C6H13NO3S | [
"Physics",
"Chemistry"
] | 55 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
75,782,122 | https://en.wikipedia.org/wiki/Bismuthyl%20%28ion%29 | Bismuthyl is an inorganic oxygen-containing singly charged ion with the chemical formula BiO, and is an oxycation of bismuth in the +3 oxidation state. Most often it is formed during the hydrolysis of trivalent bismuth salts, primarily nitrate, chloride and other halides. In chemical compounds, bismuthyl plays the role of a monovalent cation.
In inorganic chemistry, bismuthyl has been used to describe compounds such as BiOCl which were assumed to contain the diatomic bismuthyl cation, BiO+, which was also presumed to exist in aqueous solution.
This diatomic ion is not now believed to exist. Unlike other inorganic radicals such as hydroxyl, carbonyl, chromyl, uranyl or vanadyl, under the current IUPAC rules the name bismuthyl for BiO+ is not recommended, since individual molecules of these groups are not identifiable; instead, there are atomic layers of Bi and O. Their presence in compounds should preferably be referred to as oxides. However, the latter position remains controversial. For example, to this day the Russian school of inorganic chemistry still operates with bismuthyl and stibyl (antimonyl) cations as actually existing radicals.
In the history of chemistry
Until the last quarter of the 20th century, the real existence of the bismuthyl ion was not in doubt; it was fully present in all reference books and manuals on inorganic chemistry, including German and English ones. The most famous compound of this class was considered to be bismuthyl chloride, the chemical properties of which were studied in detail and were considered titular for all other bismuth compounds. In addition, the compound with the empirical formula BiOCl exists in nature in the form of bismoclite, one of the secondary metamorphosed minerals from the class of halides.
In the fundamental three-volume book “Modern Inorganic Chemistry” by Frank Cotton and Nobel laureate Geoffrey Wilkinson, summarizing the latest achievements of science in the first half of the 20th century, the real existence of the bismuthyl cation is not only not questioned, but is not even discussed in any detail. This inorganic radical is mentioned without further explanation and is by default considered a legacy of the fundamental corpus of inorganic chemistry of the 19th century. First of all, the authors note that of the entire group of pnictogens, only bismuth has a truly extensive and detailed cation chemistry. According to the authors, aqueous solutions of bismuth salts contain well-defined hydrated cations. Moreover, bismuthyl in the newest version at that time also acquired quasi-polymeric properties, connecting into chains or hexagons. For example, in neutral perchlorate solutions the main ions are [Bi6O6]6+ or its hydrated form [Bi6(OH)12]6+, and at higher pH values [Bi6O6(OH)3]3+ are formed.
In mineralogy and geochemistry
Previously, it was believed that bismuthyl plays almost the main role in the geochemistry of bismuth and metamorphic processes taking place in a liquid medium. Already in ore waters, bismuth and its main compounds are oxidized, forming a sparingly soluble oxychloride, bismoclite, which, when mixed with bicarbonate background waters, is replaced by the even more sparingly soluble bismuthite. As a result, small amounts of bismuth circulate in both ore and background waters precisely in the form of the bismuthyl ion.
The migration of bismuth in neutral and slightly alkaline groundwater in the form of a simple bismuth ion is hindered as a result of the low threshold pH for the precipitation of its hydroxide from solution. According to thermodynamic calculations carried out in the late 1960s for the stability fields of native bismuth, bismuthinite, bismuth oxides and bismuthyl chloride, in the pH–Eh coordinates the main ion form of bismuth migration was the bismuthyl ion BiO+. According to calculations, it occupied a leading place in the metabolic and oxidative processes that constantly take place in the erosion zones of bismuth minerals.
Bismuthyl chloride, along with the nitrate BiONO3, which was originally considered the title compound of this cation, actually exists in nature in the form of bismoclite, one of the secondary metamorphosed minerals from the class of halides. According to the chemical formula conventionally recognized back in the 19th century, bismoclite consisted precisely of bismuthyl cations (BiO+) and chloride anions (Cl−). Thus, previously the chemical composition of this mineral was traditionally called bismuthyl chloride. However, by the end of the 20th century, based on the results of targeted chemical analyses, the reality of the existence of the diatomic bismuthyl ion was called into question. Since then, bismoclite has been characterized as bismuth oxide-chloride (oxychloride). In the same way, it was proposed to rename all similar bismuthyl compounds, primarily the remaining halides (from fluoride to iodide) and the nitrate.
Chemical properties
The classic method for obtaining bismuthyl salts was the treatment of bismuth oxide (Bi2O3) with nitric acid. This reaction produces bismuthyl salts such as BiO(NO3) and Bi2O2(OH)(NO3) as end products. The same bismuthyl salts precipitate when strongly acidic solutions of various bismuth compounds are diluted.
The formation of bismuthyl was also considered to be a process that constantly occurs as a result of hydrolysis. Thus, bismuth nitrate, Bi(NO3)3·5H2O, crystallizes from a solution resulting from the reaction of bismuth with nitric acid. It dissolves in a small amount of water acidified with nitric acid. However, when the solution is diluted with larger quantities of water, hydrolysis occurs and basic salts precipitate, the composition of which depends on the conditions. A salt of the composition BiONO3 is often formed.
Bismuthyl chloride (BiOCl) is readily soluble in hydrochloric acid. Moreover, this process, like that of the nitrate, proceeds through a reversible reaction; a shift of the reaction to the left or right also occurs along the line of hydrolysis, depending on the relative amount of water and the (residual) hydrochloric acid present. Adding water to a slightly acidic solution of BiCl3 immediately causes the appearance of a white precipitate of basic bismuth chloride, BiOCl. When hydrochloric acid is added, the precipitate dissolves again, but it immediately falls out when more water is added. All other bismuth compounds behave in aqueous solutions similarly to the chloride.
In more detail, the ongoing hydrolysis reactions using bismuth chloride as an example are usually represented by the following reversible equations:
BiCl3 + H2O ↔ BiOHCl2 + HCl
BiOHCl2 + H2O ↔ Bi(OH)2Cl + HCl
The resulting dihydroxobismuth chloride is unstable and easily splits off a water molecule:
Bi(OH)2Cl = BiOCl + H2O
The output is a basic salt containing the bismuthyl cation BiO+, i.e. "bismuthyl" chloride.
Bismuth nitrate is hydrolyzed in the same way, forming the main salt of the composition BiONO3. However, the reaction with it in an aqueous environment is much less successful and does not have such a clear result, since the resulting bismuthyl nitrate is much more soluble in water than its chloride.
The hydrolysis of bismuth salts is reversible; when the precipitate is heated with added hydrochloric acid, it dissolves again:
BiOCl + 2HCl = BiCl3 + H2O
When the solution is diluted again with water, a precipitate of the basic salt precipitates again.
The main mechanism in such reactions is the pronounced amphotericity of the X(OH)3 hydroxides for arsenic and antimony and the basic properties for bismuth, as a result of which the salts are susceptible to hydrolysis, especially in the case of antimony and bismuth, which are characterized by the formation of antimonyl cations SbO+ and bismuthyl BiO+. According to this principle, Bi(OH)3, losing water when heated, turns into yellow bismuthyl hydroxide with the formula BiO(OH), sparingly soluble in water, which upon further dehydration forms the oxide Bi2O3.
At elevated temperatures, the vapors of the metal combine rapidly with oxygen, forming the yellow trioxide, Bi2O3. When molten, at temperatures above 710 °C, this oxide corrodes any metal oxide and even platinum. On reaction with a base, it forms two series of oxyanions: [BiO2]−, which is polymeric and forms linear chains, and [BiO3]3−. The anion in Li3BiO3 is a cubic octameric anion, [Bi8O24]24−, whereas the anion in Na3BiO3 is tetrameric.
In addition to bismuthyl itself, thiocompounds corresponding to bismuthyl salts are also considered indicative for the chemistry of bismuth, for example, gray thiobismuthyl chloride with the formula BiSCl and others similar to it. These substances, unlike bismuthyl salts, are very stable with respect to water, and can be easily prepared by the action of hydrogen sulfide gas on the corresponding bismuth trihalide.
Practical significance
The mineral bismoclite (bismuthyl chloride) has a traditional use as one of the secondary bismuth ores that are constantly formed in oxidation zones. When mixed with other associated ores, it will become the raw material for the production of pure bismuth and its compounds.
In medical diagnostics, bismoclite (in the form of purified bismuth oxychloride) is used as a local radiocontrast agent.
In addition, in the production of cosmetics, bismoclite is used as an enhancing additive; it gives a pearlescent shine to lipstick, nail polish and eye shadow.
In the chemical industry, in the process of cracking hydrocarbons, bismuthyl chloride is used as a catalyst.
The bismuthyl cation is also widely involved in the synthesis of bismuth-organic compounds, including those with pharmaceutical applications.
References
See also
Bismuthyl
Bismuthyl chloride
Bismuthyl carbonate
Bismuthyl nitrate
Bismoclite
Cations
Bismuth compounds
History of chemistry | Bismuthyl (ion) | [
"Physics",
"Chemistry"
] | 2,247 | [
"Cations",
"Ions",
"Matter"
] |
75,782,783 | https://en.wikipedia.org/wiki/SIRIUS%20%28software%29 | SIRIUS is a Java-based open-source software for the identification of small molecules from fragmentation mass spectrometry data without the use of spectral libraries. It combines the analysis of isotope patterns in MS1 spectra with the analysis of fragmentation patterns in MS2 spectra. SIRIUS is the umbrella application comprising CSI:FingerID, CANOPUS, COSMIC and ZODIAC.
SIRIUS, including its web services for structural elucidation, is freely available to use for academic research. Bright Giant GmbH offers subscription-based access to the SIRIUS web services for commercial users.
SIRIUS is not suitable for analyzing proteomics MS data.
History
The SIRIUS software is developed by the group of Sebastian Böcker at the Friedrich Schiller University Jena, Germany, and since 2019 together with Bright Giant GmbH. SIRIUS development started in 2009 as software for identification of the molecular formula by decomposing high-resolution isotope patterns (also called MS1 data). The name is an acronym resulting from this original purpose: Sum formula Identification by Ranking Isotope patterns Using mass Spectrometry.
In 2008 the group introduced the concept of fragmentation trees for identification of the molecular formula based on fragmentation mass spectrometry data, also called tandem MS or MS2 data. Back then, identification of small molecules was approached by searching in a reference spectral library. Examples of such libraries include MassBank, METLIN, or NIST/EPA/NIH EI-MS Library. However, this is limited to known molecules with available standards that have been measured and put in a reference spectral library. For unknown molecules, identification of the molecular formula is a crucial step. In 2011/2012, the group conceived fragmentation trees as a means of structural elucidation by automatically comparing these fragmentation trees. Fragmentation pattern similarities are strongly correlated with the chemical similarity of molecules. Thus, aligning the fragmentation tree of an unknown molecule to a set of known molecules helps to elucidate its structure. Fragmentation trees were introduced in SIRIUS 2.
Also in 2012, the group of Juho Rousu at the University of Helsinki, Finland, introduced a machine learning method to predict molecular properties from tandem MS data. This concept was brought together with the fragmentation tree concept in 2015, resulting in CSI:FingerID, which was introduced in SIRIUS 3. The fragmentation tree is used to predict a molecular fingerprint of the unknown molecule using machine learning, which in turn is used to search a molecular structure database such as PubChem. Molecular structure databases are orders of magnitude larger than reference spectral libraries (PubChem containing ~111 million compounds in 2021 compared to the NIST Tandem Mass Spectral Library containing ~50,000 compounds in 2023). This kind of structure identification refers to the identity and connectivity (with bond multiplicities) of the atoms, but not stereochemistry information. Elucidation of stereochemistry is currently beyond the power of automated search engines.
SIRIUS 3 also introduced the graphical user interface (GUI).
In 2020, in cooperation with the group of Pieter C Dorrestein at UC San Diego, USA, molecular formula identification was improved based on derivative networks from complete biological datasets to rank molecular formula candidates. This method is called ZODIAC and has been integrated into SIRIUS 4.
Also in 2020, in cooperation with Rousu's and Dorrestein's groups, CANOPUS for systematic compound class annotation was introduced to SIRIUS 4.
In 2022, the COSMIC confidence score was added to the CSI:FingerID structure identification workflow in SIRIUS 4, allowing users to determine the trustworthiness of the identification.
Data
SIRIUS is using data from liquid-chromatography tandem mass spectrometry (LC-MS/MS). It requires high-resolution, high mass accuracy MS1 and MS2 data as input. LC is not mandatory for SIRIUS, however is often required to separate individual compounds in complex samples.
MS1 data refers mainly to the isotope pattern of the compound. Due to the natural isotopic distributions of the elements, several peaks in the mass spectrum correspond to the same type of sample molecule, reflecting its isotope pattern.
MS2 data refers to the fragmentation pattern of the compound. MS2 is also known as tandem mass spectrometry or MS/MS. The statistical model of SIRIUS and the machine learning model of CSI:FingerID were trained on MS2 spectra created by collision-induced dissociation (CID), as commonly applied in LC-MS/MS experiments.
SIRIUS expects both, MS1 and MS2 spectra, as input. Omitting the MS1 data is possible, but it will make the analysis more time-consuming and can lead to poorer results.
SIRIUS and CSI:FingerID have been trained on a wide variety of data, including data from different instrument types. Certain aspects of the mass spectra are important to successfully process the data:
High mass accuracy: The mass deviation of the input spectra should be within 20 ppm. Mass spectrometry devices such as TOF, Orbitrap and FT-ICR usually provide data with high mass accuracy, as do coupled devices such as Q-TOF, IT-TOF or IT-Orbitrap. Spectra measured with a quadrupole or linear trap do not provide the required accuracy for data analysis with SIRIUS.
Rich fragmentation spectra: It is not possible to deduce the structure or even the molecular formula from an MS2 spectrum that contains almost no peaks. Prior noise filtering of the spectra is not necessary and not favorable. SIRIUS considers up to 60 peaks in the fragmentation spectrum and decides for itself which of these peaks are regarded as noise.
Centroided MS data: SIRIUS does not contain routines for peak picking from profile-mode spectra. msConvert in ProteoWizard can be used to convert to centroided data. Additionally, there are several tools specialized for the preprocessing task, such as OpenMS, MZmine or XCMS. OpenMS and MZmine 3 both provide export functions tailored to the needs for SIRIUS.
Different common MS file formats, such as .csv, .ms or .mgf files, can be imported to SIRIUS. SIRIUS can import full LC-MS-runs (.mzML) or single compounds. At present, SIRIUS only handles single-charged compounds.
Features
SIRIUS identifies small molecules in a two step approach:
First, the molecular formula of the molecule is determined.
Second, a molecular fingerprint is predicted to search against a structure database to identify the most likely candidate.
The following algorithms are implemented in SIRIUS:
SIRIUS: Molecular formula identification
SIRIUS is the name of the umbrella application, but (for historic reasons) also the name for the identification of the molecular formula. Molecular formula refers to the elemental composition of the molecule. The mere mass of a molecule is not sufficient to determine the correct molecular formula. Even with very high mass accuracy, many molecular formulas can explain a mass measured in a spectrum, in particular in higher mass regions. In SIRIUS, molecular formula identification is done using isotope pattern analysis on the MS1 data as well as fragmentation tree computation on the MS2 data. The score of a molecular formula candidate is a combination of the isotope pattern score and the fragmentation tree score.
To identify the molecular formula, SIRIUS considers all possible molecular formulas for a given set of elements. The elements most abundant in living beings are hydrogen (H), carbon (C), nitrogen (N), oxygen (O), and phosphorus (P). This is the default set of elements in SIRIUS. Some less common elements result in very characteristic isotope pattern changes and can be automatically detected. Detectable elements are sulfur (S), chlorine (Cl), bromine (Br), boron (B) and selenium (Se). The current version of SIRIUS uses a deep neural network for auto-detection of elements from the isotope and fragmentation pattern of the query molecule.
For very large molecules or in case of missing data (e.g., a missing isotope pattern), it is possible to restrict SIRIUS to molecular formulas found in a database, such as PubChem.
Decomposition of mass
In order to quickly generate a manageable number of molecular formula candidates, the monoisotopic mass is decomposed into all possible molecular formulas that would lead to this mass. There are two definitions of the monoisotopic mass: (1) the sum of the masses of the most abundant naturally occurring stable isotope of each atom (i.e. the highest peak of the isotope pattern) (2) the sum of the masses of the lightest naturally occurring stable isotope of each atom (i.e. the peak of the isotope pattern with the lowest mass). For small molecules, the lightest peak is also mostly the highest peak of the isotope pattern. However, in the computational context of SIRIUS, the second definition is used.
Decomposing the monoisotopic mass into all possible molecular formulas requires a mass interval taking into account the measurement inaccuracy of the instrument. This real-valued decomposition is transformed into a problem instance with integer masses by using a blowup factor. The resulting problem is known as Change-making problem which is well-studied and can be solved in runtime linear in the size of the output.
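A simplified sketch of the decomposition step (this brute-force recursion over nominal integer masses is purely illustrative; actual implementations scale accurate monoisotopic masses by a blowup factor and solve the resulting change-making instance efficiently):

```python
# Nominal integer masses of the default CHNOPS element set.
ELEMENT_MASSES = {"H": 1, "C": 12, "N": 14, "O": 16, "P": 31, "S": 32}

def decompose(mass, symbols=tuple(ELEMENT_MASSES), partial=()):
    """Yield every element-count assignment whose nominal masses sum to `mass`."""
    if not symbols:
        if mass == 0:
            yield dict(partial)
        return
    sym, rest = symbols[0], symbols[1:]
    for count in range(mass // ELEMENT_MASSES[sym] + 1):
        yield from decompose(mass - count * ELEMENT_MASSES[sym], rest,
                             partial + ((sym, count),))

# All CHNOPS decompositions of nominal mass 46 (ethanol, C2H6O, is one of them):
for formula in decompose(46):
    if formula["C"] > 0:  # keep only carbon-containing candidates
        print(formula)
```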
Isotope pattern analysis
Isotope patterns of the candidate molecular formulas are simulated starting with the isotopic distributions of the individual elements, and then combining these distributions by folding.
The simulated isotope pattern is compared with the measured pattern by assigning probabilities to the observed masses and intensities.
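A small sketch of the folding step, implemented as a convolution of per-element distributions over nominal mass offsets (the abundances below are rounded textbook values; real implementations additionally track exact masses):

```python
import numpy as np

# Probability of +0, +1, +2 extra neutrons for one atom of each element.
CARBON = np.array([0.989, 0.011])         # 12C, 13C
CHLORINE = np.array([0.758, 0.0, 0.242])  # 35Cl, (none), 37Cl

def fold(dist_a, dist_b):
    """Combine two isotope distributions by convolution ("folding")."""
    return np.convolve(dist_a, dist_b)

def pattern(dist, count):
    """Isotope pattern of `count` atoms of a single element."""
    result = np.array([1.0])
    for _ in range(count):
        result = fold(result, dist)
    return result

# Relative peak intensities of a hypothetical C2Cl2 fragment:
print(fold(pattern(CARBON, 2), pattern(CHLORINE, 2)))
```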
Fragmentation tree computation
A fragmentation tree is a representation of the fragmentation process similar to “fragmentation diagrams” created by experts. The fragmentation tree annotates the MS2 spectrum by providing a molecular formula for each fragment peak. Peaks that do not receive an annotation are considered noise peaks. The fragmentation tree also predicts the fragmentation reactions (called losses) leading to the fragment peaks. Fragmentation trees are a valuable tool for deducing information about the fragmentation but are not a precise depiction of the actual fragmentation process.
To identify the molecular formula of an unknown molecule, a separate fragmentation tree is computed for every molecular formula candidate. In other words, the method attempts to reconstruct the fragmentation process that led to this MS2 spectrum for each candidate molecular formula. This allows to compare the different hypotheses that a particular candidate is actual the correct molecular formula. The best-scoring fragmentation tree (i.e. the fragmentation process that is best explaining the spectrum) corresponds to the most likely molecular formula explanation.
ZODIAC: Improved molecular formula identification
ZODIAC improves the ranking of the formula candidates provided by SIRIUS. Organisms produce related metabolites derived from multiple but limited biosynthetic pathways. For a full LC-MS/MS run that is derived from a biological sample or any other set of derivatives the relation of the metabolites is reflected in their similarity. Those similarities are in turn reflected in joint fragments and losses between the fragmentation trees and can be leveraged to improve molecular formula identification of the individual molecules.
ZODIAC uses the top X molecular formula candidates for each molecule from SIRIUS to build a similarity network, and uses Bayesian statistics to re-rank those candidates. Prior probabilities are derived from fragmentation tree similarity. Finding an optimal solution to the resulting computational problem is NP-hard, therefore Gibbs sampling is used.
ZODIAC stands for ZODIAC: Organic compound Determination by Integral Assignment of elemental Compositions.
CSI:FingerID: Structure database search
CSI:FingerID identifies the structure of a molecule by predicting its molecular fingerprint and using this fingerprint to search in a molecular structure database.
Molecular fingerprints
A molecular fingerprint is a binary vector, where each position corresponds to a specific molecular property. In this representation, a given position X may encode the presence or absence of a particular substructure, with '1' indicating presence and '0' indicating absence. Various types of molecular fingerprints exist, including PubChem CACTVS fingerprints, Klekota-Roth fingerprints, MACCS fingerprints, and Extended-Connectivity Fingerprints (ECFP). A molecular fingerprint can be deterministically computed from a given molecular structure. Different molecular structures may yield the same molecular fingerprint.
Predicting molecular fingerprints
CSI:FingerID predicts a probabilistic fingerprint with a variety of molecular properties from several fingerprint types. The fingerprint is predicted from the given spectrum and its corresponding fragmentation tree using deep kernel learning, which is a combination of kernel methods and deep neural networks. Not only the top scoring molecular formula but multiple high-scoring molecular formula candidates are considered.
Comparing molecular fingerprints
To search in a molecular structure database requires a metric to compare and score the molecular fingerprints. Tanimoto similarity (Jaccard index) is a commonly employed metric. A similarity value of 1 signifies identical fingerprints, while a value of 0 indicates structures that do not share any molecular properties. The calculated similarity value depends on the choice of fingerprint type.
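A minimal sketch of the Tanimoto comparison, representing each binary fingerprint by the set of its 'on' bit positions (the positions below are hypothetical molecular properties):

```python
def tanimoto(fp_a, fp_b):
    """Jaccard index of two binary fingerprints given as sets of set-bit positions."""
    on_a, on_b = set(fp_a), set(fp_b)
    union = on_a | on_b
    return len(on_a & on_b) / len(union) if union else 1.0

# Two hypothetical fingerprints sharing 2 of 4 distinct 'on' properties:
print(tanimoto({1, 5, 9}, {1, 9, 12}))  # 2 / 4 = 0.5
```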
CSI:FingerID employs a logarithmic posterior probability to rank the structure candidates, where scores are represented as negative numbers, and zero is the optimum. This scoring function results in a higher number of correct identifications. Tanimoto similarities are also given.
COSMIC: Identification confidence
The COSMIC confidence score assigns a confidence to CSI:FingerID structure identifications. The idea is similar to False Discovery Rates: All molecules in a large dataset are analysed using CSI:FingerID, the top-ranked hit for each molecule will be evaluated by COSMIC and the most trustworthy identifications can be selected for further analysis. COSMIC does not re-rank structure candidates of a particular molecule nor does it discard any identifications.
COSMIC employs a confidence score that combines E-value estimation and a linear support vector machine (SVM) with enforced directionality. Calibration of CSI:FingerID scores is achieved using E-value estimates. Generating decoys for small molecule structures is a non-trivial task, that is why candidates in PubChem serve as a proxy for decoys here.
The score distribution is modeled as a mixture distribution of log-normal distributions, and the P-value and E-value of a hit score are estimated using the kernel density estimate of PubChem candidate scores. The SVM is employed to classify whether a hit is correct, utilizing features such as the calibrated score, score differences to other candidates, the total peak intensity explained by the fragmentation tree, and the cardinality of molecular fingerprints. Learning is constrained to a linear SVM to mitigate the risk of overfitting, and the directionality of features is enforced. This involves making upfront decisions about whether high or low values of a feature should enhance the confidence in an identification. For instance, a high CSI:FingerID score of a hit should increase but never decrease the confidence that the hit is correct. Some features necessitate the existence of at least two candidates for comparison, and separate SVMs are trained for single instances. The decision values of the SVM are mapped to posterior probability estimates using Platt scaling. This comprehensive approach ensures a robust and nuanced assessment of the confidence in molecule identifications.
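The final mapping step can be sketched as follows (the Platt scaling parameters A and B below are hypothetical; in practice they are fitted to held-out SVM decision values):

```python
import math

def platt(decision_value, a=-1.5, b=0.0):
    """Map an SVM decision value f to a posterior probability estimate
    via Platt scaling: P(correct) = 1 / (1 + exp(A * f + B))."""
    return 1.0 / (1.0 + math.exp(a * decision_value + b))

print(platt(2.0))   # confident hit -> probability near 1 (about 0.95 here)
print(platt(-2.0))  # doubtful hit  -> probability near 0 (about 0.05 here)
```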
CANOPUS: compound class prediction
CANOPUS is short for class assignment and ontology prediction using mass spectrometry. It predicts the compound classes from the molecular fingerprint predicted by CSI:FingerID. This approach is completely database-free, i.e. it is not even limited to molecules that are listed in structure databases.
CANOPUS employs a deep neural network (DNN) to predict 2,497 compound classes. The DNN was trained on 4.10 million compound structures with compound classes assigned by ClassyFire. No MS/MS data was used for training, but instead simulated ‘realistic’ probabilistic fingerprints for the training molecular structures were used. The DNN predicts all compound classes simultaneously.
For full biological datasets, CANOPUS provides a comprehensive overview of compound classes present in the sample and allows for comparisons between different cohorts at compound class level.
Areas of application
Small molecules are essential components found throughout nature, playing a significant role in various fields such as drug discovery, diagnostics, food science, environmental monitoring, and more. Effectively addressing many global challenges hinges on the comprehensive identification of small molecules in complex samples. These complex mixtures contain thousands of different molecules measurable in a single mass spectrometry run.
The identification of unknown small molecules is considered a critical bottleneck in metabolomics, natural product research, and related fields, given that well over 90% of all small molecules remain unknown. Commonly, analyses were based on targeted approaches that are limited to the rediscovery of known molecules. In contrast, untargeted analysis is a top-down strategy that avoids the need for a prior specific hypothesis on expected small molecules. The focus shifts from asking, "Is molecule X present in the sample?" to "Which (unknown) molecules are present in the sample and might be relevant for downstream analysis?"
SIRIUS is designed for the untargeted structural elucidation of unknown molecules, addressing various challenges:
The correct molecular structure is prominently ranked from an extensive list of candidates. This can be compared to a Google search where the optimal answer is expected to be among the top three.
It can be assessed whether the top candidate is indeed correct.
Structural information is available even for molecules absent in extensive structure databases, including details on compound class and substructure information.
Examples of application
Neonatal dried blood spots are important for newborn screening and a powerful source for investigating the potential metabolic etiologies of various diseases using untargeted LC-MS-based metabolomics. Researchers used SIRIUS to investigate the stability of metabolites and classes of molecules in neonatal dried blood spot biobanks.
Marine microorganisms offer a rich source of bioactive compounds with unique structures and remarkable biological activity. This makes them an important resource for the search for new therapeutic compounds. Researchers are using SIRIUS, to narrow down the search to the most promising microorganisms.
Pediatric asthma poses diagnostic challenges due to its variable presentation. Breath analysis could be a game-changer in pediatric allergic asthma management. By identifying unique exhaled metabolic signatures using SIRIUS, researchers developed an approach to diagnose children with allergic asthma.
Thiacloprid is a first-generation, widely used, neonicotinoid insecticide. Its persistence in the environment and potential adverse effects on human health have raised significant concerns. Elucidating the impurity profile of pesticides is crucial for assessing their environmental impact and potential risks, and setting acceptable limits for impurities. Using SIRIUS, researchers demonstrated an approach for identifying structurally related impurities in pesticides.
Under certain conditions, two bacterial species can thrive together in a dual-species biofilm. The cooperation between P. aeruginosa and S. aureus in cystic fibrosis leads to increased disease severity. Using SIRIUS, researchers identified a metabolite that could be related to the increased pathogenesis of this dual-species biofilm in cystic fibrosis.
Our skin hosts a diverse community of microorganisms known as the skin microbiota. Using SIRIUS, researchers identified changes in the skin metabolome that are more pronounced than changes in the microbial composition, suggesting that even subtle shifts in microbial abundance can lead to significant effects on the skin.
Limitations
Limitation of the measurement method
Mass spectra alone lack sufficient information to unambiguously identify every molecule. Some molecules produce almost indistinguishable spectra – even more similar than the same molecule measured on two different instruments. Extensive follow-up experiments are required for unambiguous identification.
Based thereon, it is impossible to always correctly identify a molecular structure merely from a mass spectrum. Thus, CSI:FingerID as well as other methods for structure database search, cannot guarantee finding the correct molecular structure as first hit. That is why it is important to have the correct structure ranked very high from an extensive list of candidates and to assess the confidence in the top hit.
Limitation of structure databases
Structure databases are orders of magnitude larger than spectral libraries but still incomplete. It is understood that not every existing biomolecule is or will be contained in structure databases.
For these instances, SIRIUS offers several solutions:
SIRIUS can search in databases of hypothetical structures. This can be interesting, for example, for finding derivatives.
The predicted molecular fingerprint offers structural information about, e.g., substructures.
CANOPUS predicts the compound classes of a molecule without searching in a database.
Independent evaluation of the software
CASMI (Critical Assessment of Small Molecule Identification)
CASMI is an open contest on the identification of small molecules from mass spectrometry data, and was launched in 2012 by Emma Schymanski and Steffen Neumann.
In CASMI 2016, CSI:FingerID and a derivative of CSI:FingerID, in which the Böcker Group was also involved, won first and second place in the category “Best Automatic Structural Identification - In Silico Fragmentation Only”. Also, CSI:FingerID had the best result for ranking the correct molecular structure at position one (70 out of 127, positive mode).
In CASMI 2017, SIRIUS plus CSI:FingerID won in 3 of 4 categories: “Best Structure Identification on Natural Products”, “Best Automatic Structural Identification - In Silico Fragmentation Only”, “Best Automatic Candidate Ranking”.
In CASMI 2022, six out of 16 contestants used SIRIUS in their workflow to identify the best molecular structure candidates. SIRIUS won in the categories “Correct elemental formulas”, “Correct compound structure classes” and “Correct 2D chemical structures”. CASMI 2022 included compounds that were not even contained in PubChem.
Awards and recognition
Sebastian Böcker's group at FSU Jena won the 2022 Thuringian Research Award in the Applied Research category for SIRIUS and the underlying methods.
SIRIUS was recognized as a "method to watch" by Nature Methods in 2020.
Licences
SIRIUS is developed by the group of Sebastian Böcker at the FSU Jena in close collaboration with the Bright Giant GmbH. SIRIUS is provided as a software-as-a-service solution. The client software is open-source and installed on the users’ computers. Molecular formula annotation using fragmentation trees and isotope pattern analysis is performed on the user's local computer without requiring a subscription.
The SIRIUS web services for structural elucidation, including molecular fingerprint prediction, structure database search, confidence score assessment and compound class prediction, require a user account. The web services are free for academic/non-commercial use provided/hosted by the FSU Jena. Academic institutions are identified by their email domain and access will be granted automatically. In some cases, further validation might be required.
Bright Giant GmbH offers subscription-based access to the SIRIUS web services for structural elucidation for commercial users.
Alternatives
Other algorithms and software for searching in structure databases are CFM-ID, ICEBERG, MetFrag, MS-FINDER, MetaboScape® (Bruker), MassHunter (Agilent) or Compound Discoverer™ (Thermo Fisher Scientific).
See also
Tandem mass spectrometry
Metabolomics
List of mass spectrometry software
References
Mass spectrometry software | SIRIUS (software) | [
"Physics",
"Chemistry"
] | 4,738 | [
"Mass spectrometry software",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Chemistry software"
] |
78,861,280 | https://en.wikipedia.org/wiki/Transition%20metal%20hydroxide%20complexes | Transition metal hydroxide complexes are coordination complexes containing one or more hydroxide (OH−) ligands. A very large number of such complexes is known.
Hydroxide as a ligand
Hydroxide is classified as an X ligand in the Covalent bond classification method. In the usual electron counting method, it is a one-electron ligand when terminal and a three-electron ligand when doubly bridging.
From the electronic structure perspective, hydroxide is a strong pi-donor ligand, akin to fluoride. One consequence is that few polyhydroxide complexes are low spin. Another consequence is that electron-precise hydroxide complexes tend to be rather nucleophilic.
Representative complexes
Many hydroxo complexes are prepared by treating metal halides with hydroxide salts. Hydrolysis of basic ligands (amides, alkyls) also produces hydroxide complexes.
Homoleptic complexes
Only a few homoleptic hydroxide complexes are known. These include d6 species and d0 complexes.
Mixed ligand complexes
Many complexes are known where hydroxide shares the coordination sphere with other ligands. One such pair of complexes is {[Co(NH3)3]2(μ-OH)3}3+ and its derivative {[Co(NH3)3(H2O)]2(μ-OH)2}4+.
Reactions
Prominent reactions of metal hydroxides involve their acid-base behavior. Protonation of metal hydroxides gives aquo complexes:
LnM–OH + H+ → [LnM–OH2]+
where Ln is the ligand complement on the metal M.
Thus, the aquo ligand is a weak acid, of comparable strength to acetic acid (pKa of about 4.8).
In principle but not very commonly, metal hydroxides undergo deprotonation, yielding oxo complexes:
LnM–OH → [LnM=O]− + H+
Characteristically, hydroxide ligands are compact and basic. They tend to function as bridging ligands. One manifestation of this property is the preponderance of di- and polymetallic hydroxide complexes. A practical consequence of this feature is the tendency of metal aquo complexes to form precipitates of metal hydroxides.
Bioinorganic chemistry
Hemerythrins, proteins responsible for oxygen (O2) transport in some animals, have a diiron hydroxide active site. The hydroxide ligand engages the bound O2 through hydrogen bonding.
The nucleophilicity of hydroxo ligands is relevant to the role of some M-OH centers in enzymology. For example, in carbonic anhydrase, a zinc hydroxide binds carbon dioxide, converting it to a zinc-bound bicarbonate:
Zn–OH + CO2 → Zn–OCO2H
The oxygen evolving complex (OEC) consists of a Mn-Ca-O-OH cluster that is responsible for the biosynthesis of O2. It is proposed that the O-O bond forming step involves a hydroxide ligand.
Metalloproteinases catalyze the hydrolysis of peptide bonds. The catalytic center in such enzymes often involves metal hydroxides.
References
Ligands | Transition metal hydroxide complexes | [
"Chemistry"
] | 609 | [
"Ligands",
"Coordination chemistry"
] |
78,868,644 | https://en.wikipedia.org/wiki/Glauber%20multiple%20scattering%20theory | The Glauber multiple scattering theory is a framework developed by Roy J. Glauber to describe the scattering of particles off composite targets, such as nuclei, in terms of multiple interactions between the probing particle and the individual constituents of the target. It is widely used in high-energy physics, nuclear physics, and hadronic physics, where quantum coherence effects and multiple scatterings are significant.
Description
The basic idea of the Glauber formalism is that the incident projectile is assumed to interact with each component of the complex target in turn as it moves in a straight line through the target. This assumes the eikonal approximation, viz. that the projectile's trajectory is nearly a straight line, with only small-angle deflections due to interactions with the target constituents. The theory accounts for the fact that a projectile may interact with more than one constituent (e.g., the nucleons of a target nucleus) as it passes through the target nucleus. These interactions are treated coherently. The scattering amplitude is taken as the sum over contributions from multiple scatterings. This is done using the optical model, where the target nucleus is treated as a complex potential. In fact, coherent superposition of scattering amplitudes from all possible paths through the nucleus is a fundamental aspect, leading to phenomena like diffraction patterns. The theory often uses Gaussian or Woods-Saxon distributions for nuclear densities.
Formalism
The elastic scattering amplitude in Glauber theory is given by
$$F(\mathbf{q}) = \frac{ik}{2\pi} \int d^2b \, e^{i\mathbf{q}\cdot\mathbf{b}} \left[ 1 - e^{i\chi(\mathbf{b})} \right],$$
where $\mathbf{q}$ is the momentum transfer, $\mathbf{b}$ is the impact parameter, and $\chi(\mathbf{b})$ is the eikonal phase shift representing the integrated interaction potential. For a nucleus, $\chi(\mathbf{b})$ is expressed as the sum of contributions from individual nucleons, $\chi(\mathbf{b}) = \sum_j \chi_j(\mathbf{b} - \mathbf{s}_j)$, where $\mathbf{s}_j$ is the transverse position of nucleon j.
At high energies, the above formalism simplifies by focusing on transverse geometry and neglecting effects like spin or low-energy dynamics. Relativistic corrections were not part of the original formalism, but have been included in modern applications when they are necessary (high-energy cases).
Other simplifications are that the theory assumes independent scatterings, neglects correlations between nucleons, and, being an effective model, does not directly account for some QCD effects, which are significant at very small distances.
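The diffractive structure implied by this formalism can be illustrated numerically. The following Python sketch is an illustration only: the projectile momentum, opacity, and Gaussian thickness profile are arbitrary toy values, not parameters of any real nucleus. It evaluates the azimuthally symmetric impact-parameter integral $F(q) = ik \int_0^\infty b\, db\, J_0(qb)\, \Gamma(b)$ for a purely absorptive profile function and prints $|F(q)|^2$, which falls off with momentum transfer in the manner characteristic of scattering off an extended target.
```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

# Toy inputs (illustrative, not fitted to any data).
k = 10.0   # projectile momentum [1/fm]
nu = 2.0   # central opacity (dimensionless)
B = 2.0    # Gaussian width of the nuclear profile [fm^2]

def gamma(b):
    """Profile function Gamma(b) = 1 - exp(i*chi(b)); here chi is taken
    purely absorptive (imaginary), so Gamma is real."""
    return 1.0 - np.exp(-nu * np.exp(-b**2 / (2.0 * B)))

def amplitude(q):
    """Azimuthally symmetric eikonal amplitude
    F(q) = i k * integral_0^inf b db J0(q b) Gamma(b)."""
    integrand = lambda b: b * j0(q * b) * gamma(b)
    val, _ = quad(integrand, 0.0, 20.0)   # Gamma(b) is negligible beyond 20 fm
    return 1j * k * val

# |F(q)|^2 is proportional to the elastic differential cross section.
for q in (0.0, 0.5, 1.0, 1.5, 2.0):       # momentum transfer [1/fm]
    print(f"q = {q:3.1f} 1/fm   |F|^2 = {abs(amplitude(q))**2:10.4f}")
```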
Applications
The Glauber theory has been applied to:
Elastic and inelastic scattering of protons, neutrons, and other particles off nuclei.
Heavy-ion collisions to describe the initial geometry of collisions and energy deposition.
High-energy diffraction in hadron-hadron or hadron-nucleus scattering.
EMC effect, specifically nuclear shadowing, in deep inelastic scattering.
Color transparency which describes how much of the projectile penetrates the target nucleus without being absorbed or deflected significantly.
See also
Roy J. Glauber
Coherent state
References
Quantum chromodynamics
Hadrons
Nuclear physics | Glauber multiple scattering theory | [
"Physics"
] | 586 | [
"Hadrons",
"Subatomic particles",
"Matter",
"Nuclear physics"
] |
78,869,128 | https://en.wikipedia.org/wiki/Hilbert%E2%80%93Arnold%20problem | In mathematics, particularly in dynamical systems, the Hilbert–Arnold problem is an unsolved problem concerning the estimation of limit cycles. It asks whether in a generic finite-parameter family of smooth vector fields on a sphere with a compact parameter base, the number of limit cycles is uniformly bounded across all parameter values. The problem is historically related to Hilbert's sixteenth problem and was first formulated by Russian mathematicians Vladimir Arnold and Yulij Ilyashenko in the 1980s.
Overview
The problem arises from considering modern approaches to Hilbert's sixteenth problem. While Hilbert's original question focused on polynomial vector fields, mathematical attention shifted toward properties of generic families within certain classes. Unlike polynomial systems, typical smooth systems on a sphere can have arbitrarily many hyperbolic limit cycles that persist under small perturbations. However, the question of uniform boundedness across parameter families remains meaningful and forms the basis of the Hilbert–Arnold problem.
Due to the compactness of both the parameter base and phase space, the Hilbert–Arnold problem can be reduced to a local problem studying bifurcations of special degenerate vector fields. This leads to the concept of polycycles—cyclically ordered sets of singular points connected by phase curve arcs—and their cyclicity, which measures the number of limit cycles born in bifurcations.
Local Hilbert–Arnold problem
The local version of the Hilbert-Arnold problem asks whether the maximum cyclicity of nontrivial polycycles in generic k-parameter families (known as the bifurcation number $B(k)$) is finite, and seeks explicit upper bounds.
The local Hilbert–Arnold problem has been solved for $k = 1$ and $k = 2$, with $B(1) = 1$ and $B(2) = 2$. For $k = 3$, a solution strategy exists but remains incomplete. A simplified version considering only elementary polycycles (where all vertices are elementary singular points with at least one nonzero eigenvalue) has been more thoroughly studied. Ilyashenko and Yakovenko proved in 1995 that the elementary bifurcation number $E(k)$ is finite for all $k$.
In 2003, mathematician Vadim Kaloshin established the explicit bound $E(k) \le 2^{25k^2}$.
See also
Bifurcation theory
Dynamical system
Hilbert's sixteenth problem
Limit cycle
List of unsolved problems in mathematics
References
Dynamical systems
Systems theory
Unsolved problems in mathematics | Hilbert–Arnold problem | [
"Physics",
"Mathematics"
] | 453 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Mechanics",
"Dynamical systems"
] |
77,402,300 | https://en.wikipedia.org/wiki/Per%20Helander | Per Helander (born 1967) is a Swedish theoretical plasma physicist and one of the world's leading scientists in stellarator physics. He is the head of the Stellarator Theory Division at the Max Planck Institute for Plasma Physics.
Education and career
Helander was born in Umeå; his grandfather is Dick Helander, the former bishop of Strängnäs. Helander studied physics at Chalmers University of Technology, where he received a Master's degree in plasma physics in 1991. Subsequently, he earned a PhD degree at the same institution in theoretical physics in 1994 with a thesis titled Dynamics of Fast Ions in Tokamaks. His doctoral advisors were Mietek Lisak and Dan Anderson. Afterwards, Helander was a postdoctoral fellow at the Massachusetts Institute of Technology in the group of Dieter Sigmar. In 1996, he joined the theory department of the United Kingdom Atomic Energy Authority at Culham Science Centre (now Culham Centre for Fusion Energy) in Abingdon. He was an adjunct professor at Chalmers University of Technology from 2002 until 2005. In 2006, Helander was appointed Scientific Fellow at the Greifswald branch of the Max Planck Institute for Plasma Physics. He was also appointed to a chair for theoretical plasma physics at the University of Greifswald.
Honours and awards
In 2023, Helander was named a Fellow of the American Physical Society. In 2024, Helander was awarded the Hannes Alfvén Prize along with Tünde Fülöp for outstanding contributions to theoretical plasma physics, yielding groundbreaking results that significantly impact the understanding and optimization of magnetically confined fusion plasmas.
References
1967 births
Living people
21st-century physicists
Plasma physicists
Chalmers University of Technology alumni
Max Planck Institute directors
Swedish physicists
People from Umeå
Academic staff of the University of Greifswald
Fellows of the American Physical Society
Max Planck Institutes researchers
Computational physicists
Theoretical physicists
Academic staff of the Chalmers University of Technology | Per Helander | [
"Physics"
] | 379 | [
"Computational physicists",
"Plasma physics",
"Theoretical physics",
"Computational physics",
"Plasma physicists",
"Theoretical physicists"
] |
77,402,447 | https://en.wikipedia.org/wiki/Thermo-acoustic%20instability | Thermo-acoustic instability refers to an instability arising from the interaction between an acoustic field and an unsteady heat-release process. This instability is highly relevant to combustion instabilities in systems such as rocket engines.
Rayleigh criterion
A very simple mechanism of acoustic amplification was first identified by Lord Rayleigh in 1878. In simple terms, the Rayleigh criterion states that amplification results if, on average, heat addition occurs in phase with the pressure increase during the oscillation. That is, if $p'$ is the pressure perturbation (with respect to its mean value $\bar{p}$) and $q'$ is the rate of heat release per unit volume (with respect to its mean value $\bar{q}$), then the Rayleigh criterion says that acoustic amplification occurs if
$$\oint p'\, q'\, \mathrm{d}t > 0,$$
where the integral is taken over one period of the oscillation.
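As a minimal numerical illustration of the criterion (the amplitudes and phase values below are arbitrary, and the signals are taken as pure sinusoids), the following Python sketch evaluates the cycle integral of $p'q'$ for several phase lags between pressure and heat release. The integral is positive (amplifying) when the two are in phase and negative (damping) when they are out of phase.
```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)      # one oscillation period (arbitrary units)
p = np.sin(2 * np.pi * t)            # pressure perturbation p'

for phase_deg in (0, 45, 90, 135, 180):
    q = np.sin(2 * np.pi * t - np.radians(phase_deg))  # heat-release perturbation q'
    rayleigh_integral = np.trapz(p * q, t)             # cycle integral of p'q'
    if rayleigh_integral > 1e-9:
        verdict = "amplified"
    elif rayleigh_integral < -1e-9:
        verdict = "damped"
    else:
        verdict = "neutral"
    print(f"phase lag {phase_deg:3d} deg: integral = {rayleigh_integral:+.3f} ({verdict})")
```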
The Rayleigh criterion is used to explain many phenomena, such as singing flames in tubes and sound amplification in a Rijke tube. In complex systems, the Rayleigh criterion may not be strictly valid, as there exist many damping factors, such as viscous/wall/nozzle/relaxation/homogeneous/particle damping, mean-flow effects, etc., that are not accounted for in Rayleigh's analysis.
See also
Darrieus–Landau instability
Diffusive–thermal instability
Rijke tube
References
Fluid dynamics
Combustion
Fluid dynamic instabilities | Thermo-acoustic instability | [
"Chemistry",
"Engineering"
] | 269 | [
"Fluid dynamic instabilities",
"Chemical engineering",
"Combustion",
"Piping",
"Fluid dynamics"
] |
77,402,909 | https://en.wikipedia.org/wiki/Matalon%E2%80%93Matkowsky%E2%80%93Clavin%E2%80%93Joulin%20theory | The Matalon–Matkowsky–Clavin–Joulin theory refers to a theoretical hydrodynamic model of a premixed flame with a large-amplitude flame wrinkling, developed independently by Moshe Matalon & Bernard J. Matkowsky and Paul Clavin & Guy Joulin, following the pioneering study by Paul Clavin and Forman A. Williams and by Pierre Pelcé and Paul Clavin. The theory, for the first time, calculated the burning rate of the curved flame that differs from the burning rate of the planar flame due to flame stretch, associated with the flame curvature and the strain imposed on the flame by the flow field.
Burning rate formula
According to the Matalon–Matkowsky–Clavin–Joulin theory, if $S_L$ and $\delta_L$ are the laminar burning speed and thickness of a planar flame (and $\tau = D_{T,u}/S_L^2$ is the corresponding flame residence time, with $D_{T,u}$ being the thermal diffusivity in the unburnt gas), then the burning speed $S$ for the curved flame with respect to the unburnt gas is given by
$$S = S_L\left(1 - \mathcal{M}_c\,\delta_L\,\nabla\cdot\mathbf{n} - \mathcal{M}_s\,\tau\,\mathbf{n}\cdot\nabla\mathbf{v}\cdot\mathbf{n}\right),$$
where $\mathbf{n}$ is the unit normal to the flame surface (pointing towards the burnt gas side), $\mathbf{v}$ is the flow velocity field evaluated at the flame surface, and $\mathcal{M}_c$ and $\mathcal{M}_s$ are the two Markstein numbers, associated with the curvature term and with the term corresponding to the flow strain imposed on the flame.
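As a rough numerical illustration of the curvature contribution alone (the strain term is dropped, and the flame speed, thickness, and Markstein number below are illustrative values, not taken from the original papers), the following Python sketch applies the linear stretch correction to a spherically expanding flame, for which the curvature is $\nabla\cdot\mathbf{n} = 2/R$. The correction vanishes as the flame radius grows.
```python
# Illustrative parameters (assumptions, not values from the theory's papers):
S_L = 0.4        # planar laminar flame speed [m/s]
delta_L = 1e-4   # laminar flame thickness [m]
M_c = 4.0        # curvature Markstein number (dimensionless)

# For a spherically expanding flame, div(n) = 2/R, so keeping only the
# curvature term gives S = S_L * (1 - M_c * delta_L * 2 / R).
for R in (0.002, 0.005, 0.01, 0.05):     # flame radius [m]
    S = S_L * (1.0 - M_c * delta_L * 2.0 / R)
    print(f"R = {R * 1000:5.1f} mm  ->  S = {S:.3f} m/s")
```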
See also
G equation
Markstein number
References
Fluid dynamics
Combustion | Matalon–Matkowsky–Clavin–Joulin theory | [
"Chemistry",
"Engineering"
] | 286 | [
"Piping",
"Chemical engineering",
"Combustion",
"Fluid dynamics"
] |
77,405,256 | https://en.wikipedia.org/wiki/Fradkin%20tensor | The Fradkin tensor, or Jauch-Hill-Fradkin tensor, named after Josef-Maria Jauch, Edward Lee Hill, and David M. Fradkin, is a conserved quantity used in the treatment of the isotropic multidimensional harmonic oscillator in classical mechanics. For the treatment of the quantum harmonic oscillator in quantum mechanics, it is replaced by the tensor-valued Fradkin operator.
The Fradkin tensor provides enough conserved quantities to make the oscillator's equations of motion maximally superintegrable. This implies that to determine the trajectory of the system, no differential equations need to be solved, only algebraic ones.
Similarly to the Laplace–Runge–Lenz vector in the Kepler problem, the Fradkin tensor arises from a hidden symmetry of the harmonic oscillator.
Definition
Suppose the Hamiltonian of a harmonic oscillator is given by
$$H = \frac{\mathbf{p}^2}{2m} + \frac{1}{2} m \omega^2 \mathbf{x}^2$$
with
momentum $\mathbf{p}$,
mass $m$,
angular frequency $\omega$, and
displacement $\mathbf{x}$,
then the Fradkin tensor (up to an arbitrary normalisation) is defined as
$$F_{ij} = \frac{p_i p_j}{2m} + \frac{1}{2} m \omega^2 x_i x_j.$$
In particular, the Hamiltonian is given by the trace: $H = \operatorname{Tr} F$. The Fradkin tensor is thus a symmetric matrix, and for an $n$-dimensional harmonic oscillator has $n(n+1)/2 - 1$ independent entries, for example 5 in 3 dimensions.
Properties
The Fradkin tensor is orthogonal to the angular momentum $\mathbf{L} = \mathbf{x} \times \mathbf{p}$:
$$F_{ij} L_j = 0;$$
contracting the Fradkin tensor with the displacement vector gives the relationship
$$x_i F_{ij} x_j = \mathbf{x}^2 H - \frac{\mathbf{L}^2}{2m}.$$
The 5 independent components of the Fradkin tensor and the 3 components of angular momentum give the 8 generators of $\mathrm{SU}(3)$, the special unitary group in three dimensions, with the relationships
$$\{L_i, L_j\} = \varepsilon_{ijk} L_k, \qquad \{L_i, F_{jk}\} = \varepsilon_{ijl} F_{lk} + \varepsilon_{ikl} F_{jl},$$
$$\{F_{ij}, F_{kl}\} = \frac{\omega^2}{4} \left( \delta_{ik} \varepsilon_{jlm} + \delta_{il} \varepsilon_{jkm} + \delta_{jk} \varepsilon_{ilm} + \delta_{jl} \varepsilon_{ikm} \right) L_m,$$
where $\{\cdot, \cdot\}$ is the Poisson bracket, $\delta_{ij}$ is the Kronecker delta, and $\varepsilon_{ijk}$ is the Levi-Civita symbol.
Proof of conservation
In Hamiltonian mechanics, the time evolution of any function $A$ defined on phase space is given by
$$\frac{\mathrm{d}A}{\mathrm{d}t} = \{A, H\} + \frac{\partial A}{\partial t},$$
so for the Fradkin tensor of the harmonic oscillator,
$$\frac{\mathrm{d}F_{ij}}{\mathrm{d}t} = \{F_{ij}, H\} = 0.$$
The Fradkin tensor is the conserved quantity associated to the transformation
by Noether's theorem.
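A minimal numerical check of this conservation is sketched below in Python; the mass, frequency, and initial conditions are arbitrary. It uses the analytic harmonic-oscillator solution, so no differential equation is integrated, in keeping with the superintegrability noted above, and it verifies that every component of $F_{ij}$ stays constant along the trajectory while the trace reproduces the Hamiltonian.
```python
import numpy as np

# Arbitrary oscillator parameters and initial conditions.
m, w = 1.3, 0.7
x0 = np.array([1.0, -0.5, 0.3])
p0 = np.array([0.2, 0.8, -0.4])

def fradkin(x, p):
    """Fradkin tensor F_ij = p_i p_j / (2m) + (m w^2 / 2) x_i x_j."""
    return np.outer(p, p) / (2 * m) + 0.5 * m * w**2 * np.outer(x, x)

# Analytic trajectory: x(t) = x0 cos(wt) + (p0 / m w) sin(wt),
#                      p(t) = p0 cos(wt) - m w x0 sin(wt).
F0 = fradkin(x0, p0)
for t in (0.0, 1.0, 2.5, 7.0):
    x = x0 * np.cos(w * t) + p0 / (m * w) * np.sin(w * t)
    p = p0 * np.cos(w * t) - m * w * x0 * np.sin(w * t)
    drift = np.max(np.abs(fradkin(x, p) - F0))
    print(f"t = {t:4.1f}   max |F_ij(t) - F_ij(0)| = {drift:.2e}")

# The trace reproduces the conserved energy H = p^2/(2m) + m w^2 x^2 / 2.
H = p0 @ p0 / (2 * m) + 0.5 * m * w**2 * (x0 @ x0)
print(f"trace(F) = {np.trace(F0):.6f}   H = {H:.6f}")
```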
Quantum mechanics
In quantum mechanics, position and momentum are replaced by the position- and momentum operators and the Poisson brackets by the commutator. As such the Hamiltonian becomes the Hamiltonian operator, angular momentum the angular momentum operator, and the Fradkin tensor the Fradkin operator. All of the above properties continue to hold after making these replacements.
References
Quantum mechanics
Classical mechanics | Fradkin tensor | [
"Physics"
] | 487 | [
"Quantum mechanics",
"Theoretical physics",
"Mechanics",
"Classical mechanics"
] |
77,409,352 | https://en.wikipedia.org/wiki/Far-UVC | Far-UVC is a type of ultraviolet germicidal irradiation being studied and commercially developed for its combination of pathogen inactivation properties and reduced negative effects on human health when used within exposure guidelines.
Far-UVC (200-235 nm), while part of the broader UV-C spectrum (100-280 nm), is distinguished by its unique biophysical effects on living tissues. Unlike conventional UV-C lamps (which typically have peak emissions at 254 nm), far-UVC demonstrates significantly reduced penetration into biological tissue. This limited penetration depth is primarily due to strong absorption by proteins at wavelengths below 240 nm. Consequently, far-UVC photons are mostly absorbed in the outer protective layers of skin and eyes before reaching sensitive cells, resulting in minimal health effects. However, far-UVC can still lead to negative health effects through reactive byproducts like ozone.
While the technology has been studied since the early 2010s, heightened demand for disinfectant tools during the COVID-19 pandemic played a significant role in spurring both academic and commercial interest in far-UVC. Unlike conventional germicidal UV-C lamps, which are limited to upper-room (above people's heads) pathogen inactivation or use in unoccupied spaces due to their negative effects on human skin and eyes, far-UVC is considered promising for whole-room pathogen inactivation due to its enhanced safety. This allows for the installation of far-UVC lights on ceilings, potentially enabling direct disinfection of the breathing zone while people are present. Wearable garments incorporating far-UVC light sources to disinfect the vicinity of the user on programmable demand have also been proposed.
Although far-UVC shows potential for implementation in a wide variety of use cases, its wider adoption as a pandemic prevention strategy requires further research around its safety and efficacy.
Historical Development
Far-UVC's development was primarily led by the research of Dr. David J. Brenner and his colleagues (including David Welch and Manuela Buonanno) at Columbia University's Center for Radiological Research. In the early 2010s, Brenner initially studied far-UVC for its potential as a surgical site disinfectant. Over the next decade, his lab began to study the technology for its ability to prevent the airborne transmission of pathogens, as well as its health effects on mammalian skin. In 2018, a seminal paper published by Brenner's lab announced the technology as an inexpensive and safe technology to reduce the spread of airborne microbial diseases like tuberculosis and influenza.
During the COVID-19 pandemic, far-UVC research and commercialization efforts increased. The technology is currently being further studied for its safety and efficacy, particularly regarding its effect on ozone creation and interactions with indoor air chemistry and the built environment. The latest studies uphold initial evidence of the technology's germicidal efficacy in realistic room-like environments. These findings pave the way for future wearable garments which can disinfect a programmable area in the vicinity of the user on demand.
Safety and Efficacy
Research from the Brenner lab and other scientists has demonstrated the improved safety and efficacy profile of far-UVC compared to other ultraviolet wavelengths. When evaluating ultraviolet germicidal lights, eye and skin health are primary concerns. UV-B, predominantly responsible for the harmful effects of sunlight, poses the highest risk for erythema, photokeratitis, sunburn and skin cancer. While longer UV-C wavelengths and UV-A can also cause damage, their effects are less severe than UV-B.
In contrast, far-UVC has shown remarkably different results. Studies on both lab mice and humans have found no significant impact on skin health, even at doses far exceeding current guidelines. This enhanced safety is attributed to far-UVC's difficulty in penetrating the outermost layer of the epidermis called the stratum corneum. The stratum corneum is effective at blocking far-UVC as it's composed primarily of dead cells filled with keratin protein, which absorb far-UVC light.
Regarding ocular safety, while comprehensive human studies are still pending, limited research has been conducted on human eye exposure to overhead far-UVC lamps. These studies have found no evidence of damage or increased discomfort. Additionally, research on rats has revealed significantly reduced penetration and damage from far-UVC compared to other UV wavelengths. These findings suggest a promising safety profile for far-UVC, though further research, particularly on human eyes, is needed to fully establish its long-term effects.
When far-UVC interacts with airborne oxygen it produces ozone and other byproducts, an effect that has been demonstrated in laboratory and real world environments. While the extent to which this produced ozone leads to negative health effects is the subject of active research, the mechanism for ozone causing cardiovascular disease and premature mortality is established in outdoor settings.
A key concern for far-UVC implementations is balancing radiation dosage and microbial inactivation rates. Although far-UVC has been shown to be effective at inactivating a wide array of viruses at doses that fall beneath exposure limits, the optimal dosage for achieving sufficient deactivation and indoor air quality standards requires further study.
Positive skin and eye safety attributes can be forgone if a given far-UVC lamp produces unwanted emissions at wavelengths other than the device's stated specifications. For this reason, optical filters have been suggested as a mitigation device. Mitigation techniques have also been studied for ozone production.
Far-UVC Devices and Commercialization
The most common device used to generate far-UVC radiation is a Krypton Chloride (KrCl) excimer lamp, which emits light at the 222 nm wavelength. Following the sudden increase in demand for disinfectant tools brought upon by the COVID-19 pandemic, a number of companies began to market and sell consumer far-UVC devices. These devices come in many different configurations and commercial form factors. There are no public estimates available for the size of the far-UVC device industry.
Regulation
Considering the technology's evolving nature, regulatory bodies around the world have not yet created binding standards as to what is considered a safe and effective dosage for far-UVC implementations, nor have they created certifications or passed regulations for the safety of commercial far-UVC devices. Legislation has been proposed for governing the production of ozone from germicidal UV light in California. In lieu of formal regulations or standards, guidelines for exposure limits and indoor air quality are put in place by professional associations. Some have suggested that these exposure limits are too conservative and need to be revised for shorter wavelength UV-C.
References
Sterilization (microbiology)
Ultraviolet radiation
Waste treatment technology
Radiobiology | Far-UVC | [
"Physics",
"Chemistry",
"Engineering",
"Biology"
] | 1,409 | [
"Spectrum (physical sciences)",
"Water treatment",
"Radiobiology",
"Electromagnetic spectrum",
"Ultraviolet radiation",
"Microbiology techniques",
"Sterilization (microbiology)",
"Environmental engineering",
"Waste treatment technology",
"Radioactivity"
] |
78,873,974 | https://en.wikipedia.org/wiki/Carol%20Robinson | Dame Carol Vivien Robinson (born 10 April 1956) is a British chemist and former president of the Royal Society of Chemistry (2018–2020). She was a Royal Society Research Professor and is the Dr Lee's Professor of Physical and Theoretical Chemistry, and a professorial fellow at Exeter College, University of Oxford. She is the founding director of the Kavli Institute for Nanoscience Discovery, University of Oxford, and she was previously professor of mass spectrometry at the chemistry department of the University of Cambridge.
Early life and education
Born in Kent, the daughter of Denis E. Bradley and Lillian (née Holder), Carol Vivien Bradley left school at 16 and began her career as a lab technician in Sandwich, Kent with Pfizer, where she began working with the then novel technique of mass spectrometry.
Her potential was spotted, and she gained further qualifications at evening classes and day release from her job at Pfizer. After earning her degree, she left Pfizer and studied for a Master of Science degree at the University of Swansea, followed by a Ph.D. at the University of Cambridge, which she completed in just two years. During this time she was a student at Churchill College, Cambridge.
Career and research
After a postdoctoral training fellowship at the University of Bristol, Robinson took up a junior position in the mass spectrometry unit at the University of Oxford, where she began analysing protein folding. Robinson and colleagues successfully captured protein folding in the presence of the chaperone GroEL, demonstrating that at least some aspects of protein secondary structure could be studied in the gas phase.
Robinson has broken ground as the first woman professor in the department of chemistry at both the University of Cambridge (2001) and the University of Oxford (2009). Her research has pushed the limits of electrospray ionization mass spectrometry, demonstrating that proteins and other complex macromolecules can be studied in the gas phase. In addition to her contributions to the study of protein folding, Robinson has conducted important work on ribosomes, molecular chaperones and most recently membrane proteins. Her research has made seminal contributions to gas-phase structural biology, with progress toward the study of protein complexes in their native environments for drug discovery. Additionally, she is a co-founder of OMass Therapeutics, a University of Oxford spin-out company applying mass spectrometry technology to drug discovery.
Honours and awards
Robinson was awarded the American Society for Mass Spectrometry's Biemann Medal in 2003, and the Christian B. Anfinsen Award in 2008. In 2004 the Royal Society awarded her both a Fellowship (FRS) and the Rosalind Franklin Award. Her nomination for the Royal Society reads:
Distinguished for her research on the application of mass spectrometry to problems in chemical biology. She has used mass spectrometry to define the folding and binding of interacting proteins in large complexes. Most importantly, she has established that macromolecular complexes such as GroEL, ribosomes, and intact virus capsids can be generated in the gas phase and their electrospray mass spectra recorded. This work has demonstrated the power of mass spectrometry in studying very large complexes and allowed her to define changes in their conformation and the manner of their assembly.
In 2010 Robinson received the Davy Medal "for her ground-breaking and novel use of mass spectrometry for the characterisation of large protein complexes".
In 2011 she was given the Interdisciplinary Prize by the Royal Society of Chemistry for "development of a new area of research, gas-phase structural biology, using highly refined mass spectrometry techniques", the Aston Medal, and the FEBS/EMBO Women in Science Award.
She was appointed Dame Commander of the Order of the British Empire (DBE) in the 2013 New Year Honours for services to science and industry.
She received the Thomson Medal Award in 2014.
In 2015 she was a laureate of the L'Oréal-UNESCO For Women in Science Awards; "For her groundbreaking work in macromolecular mass spectrometry and pioneering gas phase structural biology by probing the structure and reactivity of single proteins and protein complexes, including membrane proteins."
In 2017 she was elected a Foreign Associate of the US National Academy of Sciences.
In 2018 she won the Frank H. Field and Joe L. Franklin Award for Outstanding Achievement in Mass Spectrometry from the American Chemical Society.
In 2019 she won the Novozymes Prize for "almost single-handedly founding a subfield of mass spectrometry proteomics". Also in 2019 she received the Royal Medal.
In 2020, she was chosen as the recipient of the Othmer Gold Medal.
In 2021 she received the 2022 Louis-Jeantet Prize for Medicine and the 2022 European Chemistry Gold Medal from the European Chemical Society. Also in 2021, she became an International Honorary Member of the American Academy of Arts and Sciences.
In 2022 she was awarded the Franklin Institute Award for Chemistry.
In 2023 she was elected to the American Philosophical Society and was awarded the John B. Fenn Award for Distinguished Contribution to Mass Spectrometry. She was named one of the top ten "Innovators and Trailblazers" on the 2023 Power List by the Analytical Scientist.
In 2024, she received the EPO European Inventor Lifetime Achievement Award for her work in mass spectrometry that significantly advanced biochemical research and medical diagnostics. On June 19, 2024, she received an honorary doctorate from the University of Cambridge in recognition of her achievements in chemistry.
She has been awarded 13 honorary doctorates including the Weizmann Institute of Science, Aarhus University Denmark, University of Kent, the University of York, and the University of Bristol.
References
External links
1956 births
Living people
British chemists
Dames Commander of the Order of the British Empire
Female fellows of the Royal Society
Fellows of Churchill College, Cambridge
Fellows of the Royal Society
Place of birth missing (living people)
British women chemists
Academics of the University of Oxford
Fellows of Exeter College, Oxford
Alumni of Swansea University
Alumni of Churchill College, Cambridge
Fellows of the Academy of Medical Sciences (United Kingdom)
Dr Lee's Professors of Chemistry
Rhodes Trustees
L'Oréal-UNESCO Awards for Women in Science laureates
21st-century British women scientists
Foreign associates of the National Academy of Sciences
Presidents of the Royal Society of Chemistry
Thomson Medal recipients
Mass spectrometrists
Benjamin Franklin Medal (Franklin Institute) laureates | Carol Robinson | [
"Physics",
"Chemistry"
] | 1,343 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
78,877,767 | https://en.wikipedia.org/wiki/GAGG%3ACe | Cerium-doped gadolinium aluminium gallium garnet (GAGG:Ce) is a single-crystal ceramic scintillator material. It is being considered for applications in astrophysics, such as in high-energy gamma ray detection.
References
Ceramics
Luminescent minerals
Gallium compounds
Gadolinium compounds | GAGG:Ce | [
"Chemistry"
] | 68 | [
"Luminescence",
"Phosphors and scintillators"
] |
78,882,441 | https://en.wikipedia.org/wiki/Ammonium%20pentasulfide | Ammonium pentasulfide is a chemical compound with the chemical formula (NH4)2S5.
Synthesis
Passing hydrogen sulfide through a suspension of powdered sulfur in a concentrated ammonia solution yields the pentasulfide:
2 NH3 + H2S + 4 S → (NH4)2S5
Physical properties
Ammonium pentasulfide forms yellow crystals, decomposing in water, of the monoclinic system, space group P21/c, with cell parameters a = 0.5427 nm, b = 1.6226 nm, c = 0.9430 nm, β = 105.31°, Z = 4.
The compound can be stored under the mother liquor without air access. When dry, it decomposes quickly in the air. The compound readily releases sulfur in water and melts in a sealed ampoule at 95 °C to form a red liquid.
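As a worked example of how the crystallographic data above combine, the following Python sketch computes the monoclinic cell volume V = abc·sin β and an X-ray density estimate. The molar mass of (NH4)2S5 is assembled here from standard atomic weights, so the resulting density is an estimate derived from the quoted cell, not a literature value.
```python
import math

# Cell parameters quoted above (monoclinic, space group P2_1/c).
a, b, c = 0.5427e-7, 1.6226e-7, 0.9430e-7   # cm (1 nm = 1e-7 cm)
beta = math.radians(105.31)
Z = 4                                        # formula units per cell

V = a * b * c * math.sin(beta)               # monoclinic cell volume [cm^3]

# Molar mass of (NH4)2S5 from standard atomic weights (an assumption here).
M = 2 * (14.007 + 4 * 1.008) + 5 * 32.06     # g/mol
N_A = 6.02214e23                             # Avogadro constant [1/mol]

rho = Z * M / (N_A * V)                      # X-ray density [g/cm^3]
print(f"V = {V:.3e} cm^3, estimated X-ray density = {rho:.2f} g/cm^3")
```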
Chemical properties
The compound decomposes when stored in air or slightly heated, releasing ammonia, hydrogen sulfide, and sulfur:
(NH4)2S5 → 2 NH3 + H2S + 4 S
References
Ammonium compounds
Polysulfides | Ammonium pentasulfide | [
"Chemistry"
] | 177 | [
"Ammonium compounds",
"Salts"
] |
78,884,325 | https://en.wikipedia.org/wiki/1ES%201927%2B654 | 1ES 1927+654 is a type 2 Seyfert galaxy located 270 million light-years away in the constellation of Draco, containing an active galactic nucleus. 1ES 1927+654 is located in a host galaxy that is relatively unremarkable in appearance. However, its core, powered by a supermassive black hole, has exhibited behaviors that challenge conventional theories about accretion disks and black hole environments, and is the subject of academic papers analysing its unusual characteristics. Its supermassive black hole is a source of X-ray flashes.
Timeline of Discoveries
Initial Identification (1984):
1ES 1927+654 was first cataloged during the Einstein Slew Survey, which aimed to identify X-ray sources in the sky. It was classified as a Seyfert galaxy due to its emission-line features.
Optical Variability Detected (1990s):
Observations in the 1990s revealed irregular optical variability, suggesting active processes in its nucleus.
Significant Flares Observed (2017):
A dramatic increase in brightness was detected, with the galaxy brightening by a factor of about 40 in the ultraviolet spectrum. This event triggered follow-up studies to investigate the cause.
Accretion Disk Disruption Event (2018):
Detailed observations by X-ray and ultraviolet telescopes revealed that the AGN's accretion disk had undergone a partial or total disruption.
Magnetic Field Hypothesis (2020):
Studies suggested that the extreme variability could be linked to magnetic field instabilities around the black hole. The event challenged models of black hole accretion and inspired new theories about AGN outbursts.
Popular interest
1ES 1927+654 has captured the attention of both professional astronomers and amateur stargazers due to its unpredictable behavior. Its sudden changes have made it a target for multi-wavelength observation campaigns, drawing data from X-ray, optical, and radio observatories around the world.
References
External links
1ES 1927+654 on NASA/IPAC Database
Seyfert galaxies
Active galaxies
Draco (constellation)
Supermassive black holes | 1ES 1927+654 | [
"Physics",
"Astronomy"
] | 426 | [
"Black holes",
"Unsolved problems in physics",
"Supermassive black holes",
"Constellations",
"Draco (constellation)"
] |
78,884,788 | https://en.wikipedia.org/wiki/Law%20of%20rational%20indices | The law of rational indices is an empirical law in the field of crystallography concerning crystal structure. The law states that "when referred to three intersecting axes all faces occurring on a crystal can be described by numerical indices which are integers, and that these integers are usually small numbers." The law is also named the law of rational intercepts or the second law of crystallography.
Definition
The International Union of Crystallography (IUCr) gives the following definition: "The law of rational indices states that the intercepts, OP, OQ, OR, of the natural faces of a crystal form with the unit-cell axes a, b, c are inversely proportional to prime integers, h, k, l. They are called the Miller indices of the face. They are usually small because the corresponding lattice planes are among the densest and have therefore a high interplanar spacing and low indices."
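The reciprocal-intercept rule in this definition can be made concrete with a short sketch (Python; the helper below is an illustration written for this article, not an established library routine). It converts axis intercepts, expressed in units of the cell axes, into Miller indices by taking reciprocals and clearing denominators to the smallest coprime integers.
```python
from fractions import Fraction
from functools import reduce
from math import gcd

def miller_indices(intercepts):
    """Convert axis intercepts (in units of a, b, c; use math.inf for a face
    parallel to an axis) into Miller indices (h, k, l)."""
    recips = [Fraction(0) if x == float("inf")
              else 1 / Fraction(x).limit_denominator()
              for x in intercepts]
    # Least common multiple of the denominators clears all fractions.
    lcm = reduce(lambda m, f: m * f.denominator // gcd(m, f.denominator), recips, 1)
    ints = [int(f * lcm) for f in recips]
    g = reduce(gcd, (abs(i) for i in ints)) or 1
    return tuple(i // g for i in ints)

# A face cutting the a-axis at 1, the b-axis at 2 and parallel to c:
print(miller_indices([1, 2, float("inf")]))   # -> (2, 1, 0)
print(miller_indices([1, 1, 1]))              # -> (1, 1, 1)
```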
History
The law of constancy of interfacial angles, first observed by Nicolas Steno, (De solido intra solidum naturaliter contento, Florence, 1669), and firmly established by Jean-Baptiste Romé de l'Isle (Cristallographie, Paris, 1783), was a precursor to the law of rational indices.
René Just Haüy showed in 1784 that the known interfacial angles could be accounted for if a crystal were made up of minute building blocks (molécules intégrantes), such as cubes, parallelepipeds, or rhombohedra. The 'rise-to-run' ratio of the stepped faces of the crystal was a simple rational number p/q, where p and q are small multiples of units of length (generally different and not more than 6). Haüy's method is named the law of decrements, law of simple rational truncations, or Haüy's law. The law of rational indices was not stated in its modern form by Haüy, but it is directly implied by his law of decrements.
In 1830, Johann Hessel proved that, as a consequence of the law of rational indices, morphological forms can combine to give exactly 32 kinds of crystal symmetry in Euclidean space, since only two-, three-, four-, and six-fold rotation axes can occur. However, Hessel's work remained practically unknown for over 60 years and, in 1867, Axel Gadolin independently rediscovered his results.
Miller indices were introduced in 1839 by the British mineralogist William Hallowes Miller, although a similar system (Weiss parameters) had already been used by the German mineralogist Christian Samuel Weiss since 1817.
In 1866, Auguste Bravais showed that crystals preferentially cleaved parallel to lattice planes of high density. This is sometimes referred to as Bravais's law or the law of reticular density and is an equivalent statement to the law of rational indices.
Crystal structure
The law of rational indices is implied by the three-dimensional lattice structure of crystals. A crystal structure is periodic, and invariant under translations in three linearly independent directions.
Quasicrystals do not have translational symmetry, and therefore do not obey the law of rational indices.
References
Crystallography
Mineralogy concepts
Scientific laws | Law of rational indices | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 663 | [
"Mathematical objects",
"Scientific laws",
"Equations",
"Materials science",
"Crystallography",
"Condensed matter physics"
] |
75,786,010 | https://en.wikipedia.org/wiki/Shape%20of%20the%20atomic%20nucleus | The shape of the atomic nucleus depends on a variety of factors related to the size and shape of its nucleon constituents and the nuclear force holding them together. The spatial extent (root mean squared charge radius) of nearly all stable and a great many unstable nuclei has been determined mainly by electron and muon scattering experiments as well as spectroscopic experiments. An important factor in the internal structure of the nucleus is the nucleon-nucleon potential, which ultimately governs the distance between individual nucleons, while a dip in the charge density of some light nuclides indicates a lesser central density of nucleonic matter. A surprising non-spherical expectation for the shape of the nucleus originated in 1939 in the spectroscopic analysis of quadrupole moments, while the prolate spheroid shape of the nucleon arises from analysis of the intrinsic quadrupole moment. The simple spherical approximation of nuclear size and shape provides at best a textbook introduction to nuclear size and shape. The unusual cosmic abundance of alpha nuclides has inspired geometric arrangements of alpha particles as a solution to nuclear shapes, although the atomic nucleus generally assumes a prolate spheroid shape. Nuclides can also be discus-shaped (oblate deformation), triaxial (a combination of oblate and prolate deformation) or pear-shaped.
Origins of nuclear shape
The atomic nucleus is composed of protons and neutrons (collectively called nucleons). In the Standard model of particle physics, nucleons are in the group called hadrons, the smallest known particles in the universe to have measurable size and shape. Each is in turn composed of three quarks. The spatial extent and shape of nucleons (and nuclides assembled from them) ultimately involves quark interactions within and between nucleons. The quark itself does not have measurable size at the experimental limit set by the electron (≈ 10−18 m in diameter). The size, or root mean squared (RMS) charge radius, of the proton (the smallest nuclide) has a 2018 CODATA recommended value of 0.8414 (19) fm (10−15 m), although values may vary by a few percent according to the experimental method employed (see proton radius puzzle). Nuclide size ranges up to ≈ 6 fm. The largest stable nuclide, lead-208, has an RMS charge radius of 5.5012 fm, and the largest unstable nuclide americium-243 has an experimental RMS charge radius of 5.9048 fm. The main source of nuclear radius values derives from elastic scattering experiments (electron and muon), but nuclear radii data also come from experiments on spectroscopic isotope shifts (x-ray and optical), β decay by mirror nuclei, α decay, and neutron scattering. Although the radius values delimit the spatial extent of the nucleus, spectroscopic and scattering experiments dating back to 1935 in many cases indicate a deviation of the nuclear charge distribution or quadrupole moment consistent with non-spherical nuclear shapes for many nuclei.
Simple spherical approximation
The atomic nucleus has been depicted as a compact bundle of the two types of nucleons drawn as hard-packed spheres. This depiction of the atomic nucleus only approximates the empirical evidence for the size and shape of the nucleus. The root mean squared (RMS) charge radius of most stable (and many unstable) nuclides has been experimentally determined. If the nucleus is assumed to be spherically symmetric, an approximate relationship between nuclear radius and mass number arises above A = 40 from the formula R = R0A^{1/3} with R0 = 1.2 ± 0.2 fm. R is the predicted spherical nuclear radius, A is the mass number, and R0 is a constant determined by experimental data. This radius-to-mass relationship has its roots in the liquid drop model as proposed by Gamow in 1930. The graph on the right plots the radius-to-mass of the experimental charge radius (blue line) as compared to the spherical approximation (green line). For light nuclides below A = 40, the smooth curvilinear spherical radius plot contrasts with the erratic experimental radius-to-mass. For medium and heavy nuclides above A = 40, the plots converge and run approximately parallel when R0 = 1.
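As a worked example (Python), the sketch below evaluates R = R0A^{1/3} for lead-208 and converts the resulting sharp-sphere radius into an RMS charge radius using the uniform-density relation ⟨r²⟩^{1/2} = (3/5)^{1/2} R; the uniform-charge-density mapping is an assumption introduced here purely to compare with the measured value of 5.50 fm quoted earlier.
```python
A = 208                        # mass number of lead-208
R0 = 1.2                       # fm, empirical constant from the text

R_sharp = R0 * A ** (1 / 3)    # equivalent sharp-sphere radius [fm]

# For a uniformly charged sphere, <r^2> = (3/5) R^2, so the sharp radius
# maps onto an RMS charge radius that can be compared with experiment.
rms_predicted = (3 / 5) ** 0.5 * R_sharp
print(f"sharp-sphere radius : {R_sharp:.2f} fm")
print(f"predicted RMS radius: {rms_predicted:.2f} fm (measured: 5.50 fm)")
```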
Nucleon shape
The empirical knowledge of nucleon shape originates from the study of the transition from the proton ground state N(938) to the first excited state ∆+(1232). Multiple studies using a variety of models have led to an expectation of non-spherical shape. The proton's RMS charge radius of 0.8414 fm only defines the spatial extent of its charge distribution, i.e. the distance from its center of mass to its farthest point. Examination of the angular dependence of the charge distribution indicates that the proton is not a perfect sphere. Model-dependent analyses of the intrinsic quadrupole moment suggest that the ground-state nucleon shape conforms to a prolate spheroid shape.
The intrinsic quadrupole moment is distinct from the spectroscopic quadrupole moment, as realized more than 50 years ago. The intrinsic quadrupole moment relates to a body-fixed coordinate system that rotates with the nucleon, in contrast to the spectroscopically measured quadrupole moment. While the nucleon's spectroscopic quadrupole moment is zero due to angular momentum selection rules related to spin, the non-zero intrinsic quadrupole is obtained by electromagnetic quadrupole transitions between the nucleon ground N(938) and ∆(1232) excited states. The proton and neutron have nearly the same mass (938 MeV), and may be regarded as one particle, the nucleon N(938), with two different charge states (proton +1, and neutron 0). The proton's N(938) ground state and ∆+(1232) excited state have different shapes. The transition between the states supports a prolate spheroid deformation for the ground state, and an oblate spheroid deformation for the excited state.
The prolate shaped ground state reflects quark-to-quark interactions arising from the Pauli exclusion principle. In the ground state, the two down quarks of a ground-state neutron are in an isospin 1 state, and simultaneously in a spin 1 state in order that the spin-isospin wave function is symmetric. The exclusion principle is built into the anti-symmetric fermionic wave function, thereby forbidding a pair of identical fermions from occupying the same quantum state. In accordance with Pauli exclusion force, the spin-spin repulsive force between identical fermions pushes like-flavored quarks further apart. Conversely, when the spins of a pair of unlike fermions align, such as an up-/down-quark pair within a ground-state nucleon, the nuclear force is attractive and draws the particles close to other each other without violating the Pauli exclusion principle. Within the ground state neutron, this results in a picture of the spin interactions (above) in which the two down quarks (like fermions) qualitatively repel to either end of the prolate nucleon structure while simultaneously attracting to the up quark (unlike fermion) in the middle. Similar spin-spin interactions play out in the proton, considered identical to the neutron but existing in a different charge state.
Electron scattering techniques pioneered by Robert Hofstadter gave the first indication of a deeper structure for the nucleon. The technique is similar in principle to Rutherford's gold foil experiment in which alpha particles are directed at a thin gold foil, but Hofstadter's use of electrons, rather than alpha particles, enabled much higher resolution. The radial charge density of the neutron in particular was shown to have a complex internal structure consisting of a positive core and a negative skin, qualitatively consistent with the neutron's quark charge distribution shown above. Hofstadter received a Nobel prize for this work in 1961, several years before Murray Gell-Mann posited the quark model in 1965.
Space between nucleons
The atomic nucleus is a bound system of protons and neutrons. The spatial extent and shape of the nucleus depend not only on the size and shape of discrete nucleons, but also on the distance between them (the inter-nucleon distance). (Other factors include spin, alignment, orbital motion, and the local nuclear environment (see EMC effect).) The proximity of adjacent nucleons is governed by the nucleon-nucleon potential, and the force between a pair of nucleons can be obtained by taking the derivative of the potential. The strong nuclear force between nucleons is short-range, and the interaction between a pair of nucleons depends on the distance between them. Below 0.5 fm, each nucleon has a repulsive hard core that prevents neighboring nucleons from approaching any closer. Repulsive and attractive forces balance at ≈ 0.8 fm, and become maximally attractive at ≈ 1.0 fm, as illustrated in the diagram. Because energy is required to separate them, the pair of nucleons are said to be in a bound state. The proton-neutron (p-n) bound state, or p-n pair, is stable and ubiquitous in baryonic matter. The p-n pair contributes implicitly to the top ten most abundant isotopes in the universe, eight of which contain equal numbers of protons and neutrons (see Oddo-Harkins rule and abundance of the elements). Conversely, the proton-proton (diproton) and neutron-neutron (dineutron) bound states are unstable and therefore rarely found in nature. The deuteron (the simplest p-n pair) does not have a spherical shape owing to its quadrupole moment. The transverse charge density of the deuteron now confirms a prolate or elongated shape.
Soft core of light nuclides
Electron scattering techniques have yielded clues as to the internal structure of light nuclides. Proton-neutron pairs experience a strongly repulsive component of the nuclear force within ≈ 0.5 fm (see "Space between nucleons" above). As nucleons cannot pack any closer, nearly all nuclei have the same central density. While this statement generally holds true for nuclides above calcium-40, electron scattering experiments on many of the lighter nuclides reveal a nuclear core that is remarkably less dense than the rest of the nucleus. Model-independent analyses of nuclear charge densities for both He-3 and He-4, for example, indicate a significant central depression within a radius of 0.8 fm. Other light nuclides, including carbon-12 and oxygen-16, exhibit similar off-center charge density maxima. A lower radial charge density within the nuclear core reflects a lower likelihood that scattering electrons will encounter a nucleon near the center of the nucleus compared to the surrounding nuclear structure.
Alpha particle as possible building block
Although the proton and the neutron are the building blocks of the atomic nucleus, the unusual natural abundance of alpha nuclides has prompted investigations of the role of the alpha particle, or helium-4 nucleus, as a potential building block of matter. Alpha cluster models envision the atomic nucleus as having discrete alpha particles that occupy average relative positions. Hydrogen makes up 74% of the ordinary baryonic matter of the universe, but 99% of the remaining matter is contained within just eight nuclides (^{4}_{2}He, ^{12}_{6}C, ^{14}_{7}N, ^{16}_{8}O, ^{20}_{10}Ne, ^{24}_{12}Mg, ^{28}_{14}Si, and ^{32}_{16}S), seven of which are alpha nuclides. In the table below, the shapes of these nuclides may correspond to simple geometric arrangements of alpha particles, with associated radius predictions.
Heavier nuclides
For many medium-to-heavy nuclides, in particular those far from the magic numbers of protons and neutrons, a spherical model of the atomic nucleus is incompatible with observed large quadrupole moments, indicating that lower potential energy is obtained for an ellipsoidal shape than for a spherical nucleus of the same volume. In general, their ground states tend towards a prolate shape, although experimental data hint at oblate ground-state shapes in certain nuclei, for example krypton-72. Experiments also suggest that some heavy nuclei, such as barium-144 and radium-224, possess asymmetric pear shapes evidenced by their measured octupole moments. It is also possible for a nucleus to adopt different shapes in states with a similar excitation energy, which is referred to as shape coexistence. For example, the ground states of krypton-74 and krypton-76 have prolate shapes, but there is evidence for oblate-shape excited structures in these nuclei appearing at low excitation energy. In this particular case, the shapes of coexisting structures tend to mix together.
References
Chemistry
Nuclear physics | Shape of the atomic nucleus | [
"Physics"
] | 2,762 | [
"Nuclear physics"
] |
69,955,049 | https://en.wikipedia.org/wiki/Feferman%E2%80%93Vaught%20theorem | The Feferman–Vaught theorem in model theory is a theorem by Solomon Feferman and Robert Lawson Vaught that shows how to reduce, in an algorithmic way, the first-order theory of a product of structures to the first-order theory of elements of the structure.
The theorem is considered to be one of the standard results in model theory. The theorem extends the previous result of Andrzej Mostowski on direct products of theories.
It generalizes (to formulas with arbitrary quantifiers) the property in universal algebra that equalities (identities) carry over to direct products of algebraic structures (which is a consequence of one direction of Birkhoff's theorem).
Direct product of structures
Consider a first-order logic signature L.
The definition of product structures takes a family of L-structures $\mathcal{A}_i$ for $i \in I$, for some index set $I$, and defines the product structure
$\mathcal{A} = \prod_{i \in I} \mathcal{A}_i$, which is also an L-structure, with all functions and relations defined pointwise.
The definition generalizes the direct product in universal algebra to structures for languages that contain not only function symbols but also relation symbols.
If $R$ is a relation symbol with $n$ arguments in L and $a^1, \ldots, a^n$ are elements of the cartesian product, we define the interpretation of $R$ in $\mathcal{A}$ by
$$\mathcal{A} \models R(a^1, \ldots, a^n) \iff \mathcal{A}_i \models R(a^1(i), \ldots, a^n(i)) \text{ for all } i \in I.$$
When $R$ is a functional relation, this definition reduces to the definition of direct product in universal algebra.
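The pointwise definition can be made concrete with a minimal computational sketch (Python; the Structure class and the Z/2, Z/3 factors below are toy examples invented for illustration, not anything from the original paper): elements of the product are tuples, a function symbol acts coordinatewise, and a relation holds exactly when it holds in every factor.
```python
from dataclasses import dataclass
from itertools import product as cartesian

@dataclass
class Structure:
    """A finite L-structure with one binary function and one binary relation."""
    universe: list
    op: callable          # interpretation of a binary function symbol
    rel: callable         # interpretation of a binary relation symbol

def direct_product(structs):
    """Pointwise product of L-structures: functions act coordinatewise,
    and R(a, b) holds iff R(a_i, b_i) holds in every factor A_i."""
    universe = list(cartesian(*(s.universe for s in structs)))
    op = lambda a, b: tuple(s.op(x, y) for s, x, y in zip(structs, a, b))
    rel = lambda a, b: all(s.rel(x, y) for s, x, y in zip(structs, a, b))
    return Structure(universe, op, rel)

# Toy factors: Z/2 and Z/3 with addition and the equality relation.
Z2 = Structure([0, 1], lambda x, y: (x + y) % 2, lambda x, y: x == y)
Z3 = Structure([0, 1, 2], lambda x, y: (x + y) % 3, lambda x, y: x == y)
P = direct_product([Z2, Z3])
print(P.op((1, 2), (1, 2)))     # -> (0, 1), coordinatewise addition
print(P.rel((1, 2), (1, 2)))    # -> True
```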
Statement of the theorem for direct products
For a first-order formula $\phi$ in signature L with free variables among $x_1, \ldots, x_n$, and for an interpretation $\bar a$ of the variables, we define the set of indices for which $\phi$ holds in $\mathcal{A}_i$:
$$\|\phi(\bar a)\| = \{\, i \in I \mid \mathcal{A}_i \models \phi(\bar a(i)) \,\}$$
Given a first-order formula $\psi$ with free variables $x_1, \ldots, x_n$, there is an algorithm to compute its equivalent game normal form, which is a finite disjunction of mutually contradictory formulas.
The Feferman–Vaught theorem gives an algorithm that takes a first-order formula $\phi$ and constructs a formula $\hat\phi$ that reduces the condition that $\phi$ holds in the product to the condition that $\hat\phi$ holds in the interpretation of sets of indices:
$$\prod_{i \in I} \mathcal{A}_i \models \phi(\bar a) \quad\Longleftrightarrow\quad \hat\phi\left(\|\phi_1(\bar a)\|, \ldots, \|\phi_m(\bar a)\|\right) \text{ holds in the field of sets over } I,$$
where $\phi_1, \ldots, \phi_m$ are first-order formulas derived from $\phi$. The formula $\hat\phi$ is thus a formula with free set variables, for example, in the first-order theory of fields of sets.
Proof idea
The formula $\hat\phi$ can be constructed following the structure of the starting formula $\phi$. When $\phi$ is quantifier free then, by the definition of direct product above, it follows that
$$\prod_{i \in I} \mathcal{A}_i \models \phi(\bar a) \quad\Longleftrightarrow\quad \|\phi(\bar a)\| = I.$$
Consequently, we can take $\hat\phi$ to be the equality $\|\phi(\bar a)\| = I$ in the language of fields of sets.
Extending the condition to quantified formulas can be viewed as a form of quantifier elimination, where quantification over product elements in $\prod_{i \in I} \mathcal{A}_i$ is reduced to quantification over subsets of $I$.
Generalized products
It is often of interest to consider substructure of the direct product structure. If the restriction that defines product elements that belong to the substructure can be expressed as a condition on the sets of index elements, then the results can be generalized.
An example is the substructure of product elements that are constant at all but finitely many indices. Assume that the language L contains a constant symbol $c$ and consider the substructure containing only those product elements $a$ for which the set
$$\{\, i \in I \mid a(i) \neq c^{\mathcal{A}_i} \,\}$$
is finite. The theorem then reduces the truth value in such a substructure to a formula in the field of sets, where certain sets are restricted to be finite.
One way to define generalized products is to consider those
substructures where the sets $\|\phi(\bar a)\|$ belong to some field of sets of indices (a subset of the powerset algebra $2^I$), and where the product substructure admits gluing. Here admitting gluing refers to the following closure condition: if $a, b$ are two product elements and $s$ is an element of the field of sets, then so is the element $g$ defined by "gluing" $a$ and $b$ according to $s$:
$$g(i) = \begin{cases} a(i), & \text{if } i \in s \\ b(i), & \text{if } i \notin s \end{cases}$$
Consequences
The Feferman–Vaught theorem implies the decidability of Skolem arithmetic by viewing, via the fundamental theorem of arithmetic, the structure of natural numbers with multiplication as a generalized product (power) of Presburger arithmetic structures.
Given an ultrafilter on the set of indices , we can define a quotient structure on product elements, leading to the theorem of Jerzy Łoś that can be used to construct hyperreal numbers.
References
Model theory | Feferman–Vaught theorem | [
"Mathematics"
] | 823 | [
"Foundations of mathematics",
"Mathematical logic",
"Model theory",
"Mathematical problems",
"Mathematical theorems",
"Theorems in the foundations of mathematics"
] |
69,959,302 | https://en.wikipedia.org/wiki/Corrosion%20Science | Corrosion Science is a peer-reviewed scientific journal published by Elsevier in 16 issues per year. Established in 1961, it covers a wide range of topics in the study of pure/applied corrosion and corrosion engineering, including but not limited to oxidation, biochemical corrosion, stress corrosion cracking, and corrosion control methods, as well as surface science and engineering. The editors-in-chief are J.M.C. Mol (Delft University of Technology) and O.R. Mattos (Federal University of Rio de Janeiro).
Abstracting and indexing
The journal is abstracted and indexed in:
Chemical Abstracts
Current Contents/Engineering, Computing & Technology
Inspec
Materials Science Citation Index
Scopus
According to the Journal Citation Reports, the journal has a 2020 impact factor of 7.205.
References
External links
English-language journals
Academic journals established in 1961
Elsevier academic journals
Journals published between 13 and 25 times per year
Chemical engineering journals
Materials science journals
Corrosion | Corrosion Science | [
"Chemistry",
"Materials_science",
"Engineering"
] | 194 | [
"Materials science stubs",
"Metallurgy",
"Chemical engineering",
"Materials science journals",
"Corrosion",
"Chemical engineering journals",
"Materials science journal stubs",
"Materials science",
"Electrochemistry",
"Electrochemistry stubs",
"Materials degradation",
"Physical chemistry stubs"... |
69,961,810 | https://en.wikipedia.org/wiki/Tetraethylammonium%20trichloride | Tetraethylammonium trichloride (also known as Mioskowski reagent) is a chemical compound with the formula [NEt4][Cl3] consisting of a tetraethylammonium cation and a trichloride anion. The trichloride, also known as the trichlorine monoanion, represents one of the simplest polychlorine anions. Tetraethylammonium trichloride is used as a reagent for chlorinations and oxidation reactions.
Properties
At room temperature, tetraethylammonium trichloride is a yellow solid which is soluble in polar organic solvents (e.g., methylene chloride or acetonitrile). As it is a strong oxidant and chlorinating agent, it reacts with most organic solvents. The trichloride can be considered a symmetric anion, as found in [NnPr4][Cl3], which is formed by a 3c-4e bond.
Preparation
Commonly, tetraethylammonium trichloride is prepared by the reaction of tetraethylammonium chloride and elemental chlorine in methylene chloride at room temperature. After evaporation of the solvent, tetraethylammonium trichloride is obtained as a yellow solid.
[NEt4]Cl + Cl2 → [NEt4][Cl3]
Recently, an alternative preparation of tetraethylammonium trichloride has been described using tetraethylammonium chloride and potassium peroxymonosulfate as oxidant.
Applications
In general, tetraethylammonium trichloride has a similar reactivity to elemental chlorine and other trichlorides, e.g., triethylmethylammonium trichloride. As tetraethylammonium trichloride is a solid that can be dissolved in methylene chloride or acetonitrile, it is used as an easier-to-handle alternative to elemental chlorine, in particular for the synthesis of intermediates in natural product synthesis. Tetraethylammonium trichloride reacts with alkenes to give the corresponding vicinal 1,2-dichlorinated alkanes, and similarly with alkynes to give the corresponding trans-dichlorinated alkenes. Electron-rich arenes are chlorinated in the para position. While aldehydes are dichlorinated in the alpha position, ketones react to give the monochlorinated alpha-chloroketones. In the presence of 1,4-diazabicyclo[2.2.2]octane, tetraethylammonium trichloride is a useful oxidant for the oxidation of primary alcohols to the corresponding aldehydes and of secondary alcohols to the corresponding ketones. For compounds bearing both a primary and a secondary alcohol, selective oxidation of the secondary alcohol is observed. Acetals undergo C-H chlorination of the tertiary C-H bond, providing the corresponding chlorinated acetals.
References
Tetraethylammonium salts
Chlorine compounds
Hypervalent molecules
Polyhalides | Tetraethylammonium trichloride | [
"Physics",
"Chemistry"
] | 665 | [
"Molecules",
"Hypervalent molecules",
"Matter"
] |
78,904,028 | https://en.wikipedia.org/wiki/Strontium%20thiocyanate | Strontium thiocyanate refers to the salt Sr(SCN)2. It is a colorless solid. According to X-ray crystallography, it is a coordination polymer. The Sr2+ ions are each coordinated to eight thiocyanate anions in a distorted square antiprismatic molecular geometry where each square face contains two adjacent S atoms and two adjacent N atoms. The motif is reminiscent of the fluorite structure. The same structure is observed for Ca(SCN)2, Ba(SCN)2, and Pb(SCN)2.
References
Thiocyanates
Strontium compounds | Strontium thiocyanate | [
"Chemistry"
] | 129 | [
"Thiocyanates",
"Inorganic compounds",
"Functional groups",
"Inorganic compound stubs"
] |
78,904,128 | https://en.wikipedia.org/wiki/Calcium%20thiocyanate | Calcium thiocyanate refers to the salt Ca(SCN)2. It is a colorless solid. According to X-ray crystallography, it is a coordination polymer. The Ca2+ ions are each bonded to eight thiocyanate anions, with four Ca-S and four Ca-N bonds. The motif is reminiscent of the fluorite structure.
References
Thiocyanates
Calcium compounds | Calcium thiocyanate | [
"Chemistry"
] | 84 | [
"Thiocyanates",
"Inorganic compounds",
"Functional groups",
"Inorganic compound stubs"
] |
77,413,880 | https://en.wikipedia.org/wiki/Catalan%20forge | The Catalan forge is a set of technological processes designed to obtain iron by directly reducing the ore (without going through the intermediary of smelting as in a blast furnace) and then shingling the resulting bloom, or massé. The Catalan forge employs hydraulic power to operate a hammer or trip hammer, and a ventilation system, known as the trompe, is utilized to maintain the furnace's combustion. The term refers both to the technology and to the building where this activity occurs. Despite its name, this type of forge was used extensively from the 17th to the 19th century in mountainous regions such as the Alps, the Massif Central, and the Pyrenees, as well as by the first American settlers.
Origin
Metallurgy has used rich, easily meltable ores for millennia, including compact brown hematites and decomposed, hydrated carbonates. The ores were placed in circular hearths dug into the ground and constructed from clay in a rudimentary manner. These hearths were fueled by charcoal and activated by two leather bellows. The ores were transformed into a malleable mass of iron, known as a "burr," through the application of heat and pressure. This mass was then beaten with hammers to remove slag and impurities. This was the "hand forge," or "flying forge," constructed where the ore was discovered. When the vein was exhausted or charcoal was in short supply, the metallurgists departed from the site and established their operations elsewhere, leaving behind the crucibles and slag heaps. The consumption of charcoal was a significant contributing factor to the deforestation of the Pyrenees, which in turn gave rise to numerous conflicts, including the War of the Maidens.
The hand forge was a ubiquitous primitive tool. It was gradually understood that river mills, which were used for grinding grain or powering sawmills, could also be used for beating metal. The iron mill gradually supplanted the hand forge with two waterwheels, one of which activated the bellows and the other the hammer. The innovation that would become the most distinctive aspect of Catalan forging appeared in Italy during the early 17th century. This innovation, variously known as the "Pyrenees" or the "Alps" in different regions, reached the French Pyrenees around the midway point of the century and subsequently proliferated throughout the Pyrenean region.
Tools and features
The hydraulic trompe
A defining feature of the Catalan forge is the necessity for a relatively elevated head, ranging from 7 to 10 meters. The water is directed into a wooden receptacle and allowed to flow down a vertical pipe, the shaft. Typically, two shafts are in operation concurrently. The shaft is constructed from a tree trunk bisected and subsequently hollowed out; iron bands are then affixed to join the two components. At the upper extremity of the pipe, apertures are created at oblique angles, descending inwards and serving as aspirators. The shaft opening is closed by movable wedges, operated from below, which regulate the water flow. The lower extremity of the shaft culminates in a sizable trapezoidal wooden receptacle designated as the "wind box." The water flowing into the wind box is directed onto a bench-like structure protected by a stone slab and exits through a sliding door. As the water descends, air is drawn in through the suction tubes, resulting in a mixture of water and air that flows into the box. The pressurized air is then conveyed through a quadrangular duct, designated as the "man" or "sentinel," and subsequently through a nozzle to the upper portion of the firebox. The trompe thus offers a permanent and automated means of ventilating the firebox, which can be precisely regulated by varying the water flow rate.
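As a rough order-of-magnitude check (an illustration added here, not a figure from the source), the hydrostatic pressure of the water column bounds the blast pressure a trompe can deliver; the constants below are standard values for water density and gravity, and real trompes deliver only a fraction of this bound.

```python
# Upper-bound estimate of the blast pressure available from a trompe.
# Assumption: the delivered air overpressure cannot exceed the hydrostatic
# pressure of the driving water column (actual output is far lower).
RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

for head_m in (7.0, 10.0):
    p_max_pa = RHO_WATER * G * head_m   # hydrostatic pressure of the column
    print(f"head {head_m:4.1f} m -> at most {p_max_pa / 1e5:.2f} bar overpressure")
```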
Hearth
A further defining characteristic of the Catalan forge is the blowing over a low, open hearth with a trompe. Thus, the Catalan forge differs from the Stückofen, another highly advanced low furnace of the same period, in that the charge filled the latter's chimney.
The hearth is a quadrangular mass of masonry (comprising clay and large stones), measuring 2.5 to 3 meters in length and 0.70 to 0.90 meters in height. One of its sides is inclined at an angle to the ground. The fire chamber proper is positioned at the intersection of two walls. The dimensions of the furnace vary depending on the smith and their specific requirements. The base of the furnace is constructed from a single large slab of granite or gneiss. The stones that support and surround the crucible are frequently grindstones or fragments of old millstones, preventing water or humidity accumulation. The wall on the shorter side of the crucible is arched and has an opening through which the nozzle from the wind box can pass.
The crucible, a pivotal component of the Catalan forge, has distinctively named inner surfaces. The front face, designated as the "hand," is situated on the left when observing the crucible from a frontal perspective. The left face allows the wind to pass through. The posterior aspect of the crucible is the cellar, while the lateral aspect is the ore side, or contrevent. The basement is constructed of masonry, while the remaining sides are lined with thick iron plates. The front face is constructed of two plates, separated by a third, which serves as a fulcrum for the workers' levers, facilitating the lifting of the massé. The plates are affixed to one another by a horizontal crosspiece, supported on either side by robust structures, frequently comprising substantial stones and an antiquated hammerhead.
Hammer
The hammer utilized to strike the massé is the forge's primary tool. A hydraulic wheel drives it: the wheel is affixed to a cylindrical wooden axle with cams protruding from its circumference. The cams grip the tail of the hammer, which pivots vertically on an axle; the hammer is elevated until the cam releases its grip and the head falls back, at which point the next cam engages and repeats the cycle. The anvil, the point at which the hammer head lands, is fitted with a removable metal pile in its center, which can be changed according to the nature of the work in progress. Similarly, the hammerhead is a heavy metal mass whose lower part, the part that comes into contact with the work, is also removable. The cadence of the hammer blows is regulated by varying the flow of water onto the wheel.
Layout of the forge
Construction
The forge is situated on a watercourse with an adequate flow rate, sufficient elevation, and convenient accessibility, given the necessity of transporting ore, coal, and finished products, typically by mule. The advent of the trompe prompted the relocation or upgrading of existing forges, contingent on their geographical positioning.
The stream is channeled and divided into two races: one feeds the trompe, the other the paddlewheel that drives the trip hammer(s). The water flow can be regulated by opening or closing the relevant valves. The interior of the forge consists of multiple sections, one of which is designated for the hammers. The hammer, or malh (a trip hammer), is the critical tool used to shape the mass of raw iron (the "massé"), transforming it into refined bars or its final shape before shipment.
The wind box, strategically situated at the base of the trompe, is a crucial component that ensures the furnace's draft. A wall through which the tuyere passes separates it from the furnace. Various rooms or compartments serve for the reception and distribution of the raw materials, namely ore, coal, and finished iron. Additionally, workers can be allocated a designated area for their rest periods.
Staff
In principle, the forge is staffed by a brigade of eight workers.
Four masters divide the skilled work among them. The master of the forge oversees its operation; the maillé, or hammer-man, supervises the mechanical working of the iron and the operation of the hammer; and two fire-masters manage the fire and the wind.
Four "valets" assist the blacksmiths, crushing the ore with the hammer.
A "garde-forge" procures raw materials, ore, and coal.
A "clerk" is responsible for the oversight of supplies, orders, and accounting.
Production
An iron bloom (loupe) weighs approximately 125 kilograms; a well-coordinated team can thus handle it manually. The production of this bloom requires five hours of shingling and forging before it can be transformed into a marketable iron product.
Geographical distribution
Pyrénées-Orientales
Ariège
Pyrénées-Atlantiques
Andorra
In France, the numerous modest rural metallurgical facilities reliant on the Catalan forge, which had persisted despite the advent and enhancements of blast furnaces, ultimately ceased to exist at the advent of the 20th century, when the Thomas process was perfected. This process was responsible for the remarkable expansion of the Lorraine steel industry. Before the Franco-Prussian War of 1870, the two departments of Meurthe and Moselle collectively produced 1.4% of France's steel output. By 1913, Thomas Steel, produced exclusively in Meurthe-et-Moselle, accounted for 69% of the nation's total steel production. This trend was also accentuated by the significant advancement in transportation methods, which enabled manufactured goods to be delivered to distant locations from their point of production.
The initial American settlers refined the cast iron they produced using the Catalan forge, a relatively simple construction compared to blast furnaces and their associated forges. This process was employed in the southern United States until the mid-nineteenth century.
Disappearance
The growth and subsequent decline of the Catalan forges had a notable influence on the price of charcoal. At the zenith of the process, the price of wood rose considerably: from 1833 to 1842, the cost per quintal increased from 7.50 to 9-10 francs. As production declined, the price also fell. In 1854-1855, it was sold for 8 francs, but by 1868, it was worth only 6.10 francs, and by 1872, it was worth 6.80 francs, despite the high inflation rate.
See also
Forge
Notes
References
Bibliography
History of metallurgy
Hydropower
Metallurgy | Catalan forge | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,179 | [
"Metallurgy",
"History of metallurgy",
"Materials science",
"nan"
] |
77,418,202 | https://en.wikipedia.org/wiki/Calcium%28I%29%20fluoride | Calcium(I) fluoride is an unstable inorganic chemical compound with the chemical formula CaF. It can exist as a high-temperature gas, or as an isolated molecule in a solid noble gas matrix.
References
Fluorides
Metal halides | Calcium(I) fluoride | [
"Chemistry"
] | 47 | [
"Inorganic compounds",
"Salts",
"Inorganic compound stubs",
"Metal halides",
"Fluorides"
] |
77,419,359 | https://en.wikipedia.org/wiki/Dark%20oxygen | Dark oxygen production refers to the generation of molecular oxygen (O2) through processes that do not involve light-dependent oxygenic photosynthesis. The name therefore uses a different sense of 'dark' than that used in the phrase "biological dark matter" (for example) which indicates obscurity to scientific assessment rather than the photometric meaning. While the majority of Earth's oxygen is produced by plants and photosynthetically active microorganisms via photosynthesis, dark oxygen production occurs via a variety of abiotic and biotic processes and may support aerobic metabolism in dark, anoxic environments.
Abiotic production
Abiotic production of dark oxygen can occur through several mechanisms, such as:
Water radiolysis: This process typically takes place in dark geological ecosystems, such as aquifers, where the decay of radioactive elements in surrounding rock leads to the breakdown of water molecules, producing O2.
Oxidation of surface-bound radicals: On silicon-bearing minerals like quartz, surface-bound radicals can undergo oxidation, contributing to O2 production.
In addition to direct O2 formation, these processes often produce reactive oxygen species (ROS), such as hydroxyl radicals (OH•), superoxide (O2•-), and hydrogen peroxide (H2O2). These ROS can be converted into O2 and water either biotically, through enzymes like superoxide dismutase and catalase, or abiotically, via reactions with ferrous iron and other reduced metals.
Biotic production
Biotic production of dark oxygen is performed by microorganisms through distinct microbial processes, including:
Chlorite dismutation: This involves the dismutation of chlorite (ClO2−) into O2 and chloride ions.
Nitric oxide dismutation: This involves the dismutation of nitric oxide (NO) into O2 and dinitrogen gas (N2) or nitrous oxide (N2O).
Water lysis via methanobactins: Methanobactins can lyse water molecules to produce O2.
These processes enable microbial communities to sustain aerobic metabolism in environments that lack oxygen.
Experimental evidence
Recent studies have provided evidence for dark oxygen production in various geological and subsurface environments:
Groundwater ecosystems: Dissolved oxygen concentrations have been measured in old groundwaters previously assumed to be anoxic. The presence of O2 is attributed to microbial communities capable of producing dark oxygen and water radiolysis. Metagenomic analyses and oxygen isotope studies further support local oxygen generation rather than atmospheric mixing.
Seafloor environments: A study on manganese nodules on the abyssal seafloor has suggested abiotic dark oxygen production. The proposed mechanism is electrolysis, because voltages were recorded on the surface of the nodules. However, no voltage great enough to split water was measured, the energy source for electrolysis is unknown, and previous experiments from the same region have not found any evidence of oxygen production.
Implications
Despite its diverse pathways, dark oxygen production has traditionally been considered negligible in Earth's systems. Recent evidence suggests that O2 is produced and consumed in dark, apparently anoxic environments on a much larger scale than previously thought, with implications for global biogeochemical cycles. Dark oxygen could also help support life in water on other planets, opening a new line of study into potential habitats for aerobic life beyond Earth.
References
Chemistry
Electrolysis
Oceanographical terminology
Oxygen | Dark oxygen | [
"Chemistry"
] | 728 | [
"Electrochemistry",
"Electrolysis"
] |
77,421,883 | https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28magnetic%20moment%29 | This page lists examples of magnetic moments produced by various sources, grouped by orders of magnitude. The magnetic moment of an object is an intrinsic property and does not change with distance, and thus can be used to measure "how strong" a magnet is. For example, Earth possesses an enormous magnetic moment; however, we are very distant from its center and experience only a tiny magnetic flux density (measured in teslas) at its surface.
Knowing the magnetic moment of an object (m) and the distance from its centre (r), it is possible to calculate the magnetic flux density experienced (B) using the following approximation:

B ≈ μ0 m / (4π r³),

where μ0 is the vacuum permeability constant.
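As a worked example of the approximation above (using commonly cited values that are assumptions here, not entries from this list), Earth's magnetic moment of about 8×10²² A·m² evaluated at the planet's mean radius yields the familiar tens-of-microtesla surface field.

```python
# Dipole approximation B ≈ (μ0 / 4π) · m / r³ from the text above.
MU0_OVER_4PI = 1e-7  # T·m/A, since μ0 = 4π×10⁻⁷ in SI units

def flux_density(moment_a_m2: float, distance_m: float) -> float:
    """Approximate flux density (tesla) at distance r from a dipole of moment m."""
    return MU0_OVER_4PI * moment_a_m2 / distance_m ** 3

# Assumed illustrative values: Earth's moment ≈ 8e22 A·m², mean radius ≈ 6.371e6 m.
b_surface = flux_density(8e22, 6.371e6)
print(f"{b_surface:.1e} T")  # ≈ 3e-5 T, i.e. roughly 30 microtesla
```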
Examples
References
See also
Orders of magnitude (magnetic flux density)
Magnetic moment
Magnetism | Orders of magnitude (magnetic moment) | [
"Mathematics"
] | 148 | [
"Quantity",
"Orders of magnitude",
"Units of measurement"
] |
77,422,349 | https://en.wikipedia.org/wiki/The%20Nym%20mixnet | The Nym mixnet is free and open-source software designed to ensure a high level of privacy in all online communications. It is an implementation of a mix network, a design devised by David Chaum in 1981.
History
The research that led to the design of the Nym mixnet was carried out as part of various European Commission projects, including Panoramix and NextLeap.
Harry Halpin, Claudia Diaz from KU Leuven University, and Aggelos Kiayias wrote the Nym white paper in February 2021. The latter two also wrote the document "Reward Sharing for Mixnets", describing the token economy on which the Nym mixnet is based.
Anna Piotrowska from University College London and some of the founders of the startup Chainspace, which was acquired by Meta (then Facebook) to work on the design of Libra, also collaborated in the design.
Chelsea Manning, a whistleblower and former intelligence analyst in the US army, was recruited to audit the Nym mixnet for vulnerabilities. She then joined the team as a security expert in January 2022.
Operation
The Nym mixnet masks metadata using three fundamental actions: inserting dummy packets, modifying packets' transfer times, and standardising packet size. These three strategies make it extremely difficult to analyze data patterns and hinder computer surveillance aimed at spying, monitoring, or the commercial exploitation of data by government or private institutions. The Nym mixnet directs Internet traffic through a worldwide multi-layered network (an overlay network) consisting of hundreds of nodes, where the source and destination IP addresses are decoupled. It is similar to the Tor network or VPNs in that it allows the IP address to be camouflaged, obscuring the location of the user. The main difference is that the Nym mixnet also hides metadata, which, according to security experts, can be used for mass surveillance and traffic analysis.
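The sketch below illustrates these three actions in generic Python; it is a conceptual illustration, not Nym's code, and the packet size, delay distribution, and empty cover payloads are assumptions chosen for clarity.

```python
import os
import random

PACKET_SIZE = 2048  # bytes; an illustrative fixed size, not Nym's real constant

def pad_to_uniform_size(payload: bytes) -> bytes:
    """Standardise packet size: length-prefix and pad so all packets look alike."""
    if len(payload) > PACKET_SIZE - 4:
        raise ValueError("payload too large for one packet")
    header = len(payload).to_bytes(4, "big")
    padding = os.urandom(PACKET_SIZE - 4 - len(payload))
    return header + payload + padding

def forwarding_delay(mean_seconds: float = 0.05) -> float:
    """Modify transfer times: sample a random per-hop delay to break timing links."""
    return random.expovariate(1.0 / mean_seconds)

def make_dummy_packet() -> bytes:
    """Insert dummy (cover) traffic: a packet indistinguishable in size from real ones."""
    return pad_to_uniform_size(b"")
```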
See also
Anonymous P2P
Crypto-anarchism
Darknet
Freedom of information
Internet censorship circumvention
Internet privacy
References
External links
2020 software
Application layer protocols
Computer networking
Free software programmed in Rust
File sharing
Free routing software
Internet privacy software
Internet security
Overlay networks
Proxy servers | The Nym mixnet | [
"Technology",
"Engineering"
] | 492 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
77,423,637 | https://en.wikipedia.org/wiki/Fortifications%20of%20Larrun | The fortifications of Larrun from the late modern period are a series of military works situated on the mountain of Larrun, immediately to the west of the border between Spain and France. Some of the structures were erected during the 1793-1794 campaign and repurposed to impede the advance of the Anglo-Hispano-Portuguese coalition troops, which the future Duke of Wellington led. More than twenty redoubts are distributed across the territories of Ascain, Sare, and Urrugne, with partial coverage also extending to those of Saint-Pée-sur-Nivelle and Biriatou.
The battles of the late 18th century were primarily concentrated in the commune of Urrugne. The revolutionary forces were deployed in the Louis XIV, Bertuste, Bayonet, and Emigrés redoubts, situated along the border and overlooking the Bidassoa River as well as the route from Vera de Bidassoa. This configuration proved effective in containing the advance of the Spanish attackers. Théophile de La Tour d'Auvergne, renowned as the "first grenadier of the Republic," distinguished himself notably during these confrontations.
The circumstances at the outset of the nineteenth century were markedly distinct. Wellington advanced as a conqueror and successfully breached the Sare lock, subsequently attacking the slopes of Larrun before heading toward Bayonne. The French army, under the command of Marshal Soult, demonstrated valor in their defense of the Zuhalmendi, Grenada, and Madeleine Chapel redoubts. The inadequacy of the defensive position, ill-suited to counterattacks, and the inexperience of the defenders at the Ermitebaïta and Mendibidea redoubts permitted the Anglo-Hispano-Portuguese coalition troops to breach the defensive line and ultimately force the French forces to retreat towards Saint-Pée-sur-Nivelle.
The redoubts, situated on elevated terrain, were constructed by two principal architectural designs, tailored to the local topography. Examples of star-shaped redoubts include the Santa Barbara, the so-called "BF 29" (Border Marker 29), and the Bayonet redoubts. Others adopt a fairly regular quadrilateral configuration, exemplified by the Madeleine Chapel redoubt, or a pentagonal shape, as observed in the Emigrés and Olhain Chapel redoubts. A third category encompasses less common shapes, such as the ovoid Louis XIV redoubt in Sare, which is presumed to be a reuse of a protohistoric structure. Thirteen of these fortifications have been designated as historical monuments.
General overview of the setup
Larrun represents the final summit of the Pyrenees mountain range before the Atlantic Ocean. The mountain reaches an elevation of 905 meters, with its summit and slopes divided between four communes: Vera de Bidassoa in Spain and Ascain, Sare, and Urrugne in France. It overlooks a series of hills that extend across the four localities. The elevations of Biscarzun (185 m) and Esnaur (272 m) are situated within the municipality of Ascain. The elevations of Saint-Ignace (273 m), Suhamendy (301 m), Ibantelly (698 m), Santa Barbara (140 m), and Faague (552 m) are located in the commune of Sare. The peak of San Benito (462 m) lies in the commune of Vera de Bidassoa, and Mendalé (573 m) in that of Urrugne.
The fortifications of Larrun are composed of star-shaped redoubts, also known as "bastioned," polygonal, or ovoid, situated on elevated ridges or knolls. Their construction may have been somewhat rudimentary, with some protected by simple ditches or backed by existing ruins, such as the Larrun Hermitage redoubt in Sare. Some, like the Louis XIV redoubt in Sare, have protohistoric origins.
These are modest, isolated fortifications designed to provide refuge for infantrymen, who would otherwise be compelled to form squares in open terrain. The generally uncovered nature of the trenches increased the mobility of the infantry, allowing them to move while remaining partially sheltered. However, these works offered minimal capacity for counterattacks, given the limited number of soldiers they could accommodate and the difficulty of exiting the trenches. It was only through external intervention that an assailed redoubt could be relieved. Some redoubts were equipped with artillery pieces, which, due to the steep slopes of Larrun, were often exposed to enemy fire. This configuration was particularly evident when the Allies captured the Hermitage redoubt at the summit of Larrun, resulting in the fortified Alchangue ridge becoming subject to their artillery fire.
History
The battles of 1793-1794
The execution of Louis XVI on January 21, 1793, served to exacerbate the underlying tensions between France and Spain. Consequently, on March 7 of the same year, the National Convention of France declared war on Charles IV, King of Spain. In the Basque Country, the battles—which had initially commenced in the Val d'Aran in Catalonia—focused on the valleys of the Bidassoa and Nivelle rivers. In 1793, the Committee of Public Safety constructed a redoubt at the summit of Larrun on the site of a previously destroyed hermitage.
The opposing forces included 8,000 French soldiers, positioned by General Servan—who served as War Minister until his resignation on October 3, 1792—in the communes of Sare, Hendaye, and Urrugne. They were opposed by 2,200 Spaniards under the command of General Ventura Caro, who had been reinforced by the army of émigrés led by the Marquis of Saint-Simon.
The French forces were deployed in a manner that allowed them to exert control over three primary sectors. The route from Vera de Bidassoa to Ciboure, traversing the Col d’Insola and following the existing route through the Olhette district (Urrugne), was safeguarded by the Belchenea camp and a network of redoubts situated at the region’s summits. Notable among these were the Choucoutoun redoubt at an elevation of 94 meters, the Gendarmes redoubt, the Voltigeurs redoubt at an elevation of 81 meters, the Joliment redoubt, the Emigrés redoubt, and the Mendalé redoubt at an elevation of 573 meters. Furthermore, fortifications were constructed to safeguard the Bidassoa crossing between Biriatou and Béhobie. The French forces were distributed along the flanks of Croix des Bouquets, Xoldokogaña, Mont du Calvaire, Rocher des Perdrix, and Lumaberde. Finally, the Bidassoa estuary required the establishment or reinforcement of the Sans-Culottes, Ihartzecoborda, and Etsail redoubts from Croix des Bouquets, as well as the digging of trenches to protect the village of Hendaye from the Socorri hill.
A significant portion of the combat occurred within Urrugne's territory, which served as a base of operations for a contingent of French forces, including Théophile de La Tour d'Auvergne and generals Servan and Muller. Revolutionary troops were also stationed in Béhobie, Biriatou, and the fortified positions along the ridges.
The initial engagement was precipitated by a Spanish incursion on April 23, 1793, while the French military forces were still undergoing reorganization. The decree formally establishing the Army of the Western Pyrenees was promulgated on May 1. The Spanish offensive focused on Fort Hendaye and the Louis XIV redoubt in Urrugne, which came under artillery bombardment. The French volunteers were compelled to withdraw to Croix des Bouquets.
On May 1, Spanish troops captured the redoubt at the summit of Larrun and established a settlement there. On May 2, another Spanish attack on Hendaye was repelled beyond the Bidassoa by French troops, with La Tour d'Auvergne distinguishing himself by his courageous conduct. On May 26, 1794, French troops under the command of La Tour d’Auvergne once again attempted to seize the Larrun redoubt. Despite encountering fierce resistance on the heights of Santa-Barbara (also known as Sainte-Barbe), a hill located in the commune of Sare, they were ultimately forced to withdraw.
On July 13, another Spanish offensive was repelled, resulting in the enemy troops occupying Biriatou and positions along the Bidassoa. In response, the French headquarters withdrew to a position north of the Nivelle River, where they reinforced their troops with recruits who had undergone training at a camp in Bidart until early 1794. On February 5, 1794, hostilities resumed when a column of 13,000 Spanish infantrymen and 700 cavalrymen entered France via the road from Vera de Bidassoa. The Mont du Calvaire and Mendalé positions were subsequently occupied by the attacking forces, who were subsequently repelled by the French. The latter subsequently recaptured the positions, which were then renamed the "Bayonet Redoubt" due to the intense bayonet fighting that occurred there.
The Battles of October and November 1813
Following the defeat at Vitoria on June 21, 1813 (which resulted in the withdrawal of the French troops escorting Joseph Bonaparte) and the subsequent defeats at Sorauren on July 25 and San Marcial on August 31, Arthur Wellesley's troops found themselves positioned on the banks of the Bidassoa. In a letter to William Carr Beresford dated October 2, 1813, the Duke of Wellington articulated his intentions: "[...] the heights of the Bidassoa afford such an extensive view that it would be prudent to secure them with minimal delay." Marshal Soult, in command of the French forces, had partially fortified the heights and approaches to the Bidassoa but appeared more preoccupied with the fate of Pamplona, which had been under siege by General Enrique José O'Donnell's Spanish troops since June 26. The two armies faced each other. On the French side, Marshal Soult (the "Duke of Dalmatia") divided his forces into three groups. To the west, Reille's two divisions, one of them under Maucune, covered the right flank between Urrugne and Ciboure. The center was held by Clauzel's three divisions, led by Taupin, Maransin, and Conroux, which guarded the heights of Larrun, Sare, and Ascain; these were supported by elements of a further division in the Ascain sector. To the east, between Amotz and Mondarrain, were the divisions of d'Erlon and Abbé. Marshal Soult resided in Saint-Jean-de-Luz.
The Anglo-Hispano-Portuguese coalition, under the command of Wellesley, advanced along a common front line, with Ramsay's English and O'Donnell's Spanish divisions situated to the east, between Zugarramurdi and Echalar, while Alten maintained control of the Lizuniaga pass, advancing from Vera de Bidassoa. The western flank, facing Reille's divisions, was occupied by Longa and Freyre's troops.
On October 7, an assault force of 15,000 allied troops crossed the Bidassoa. The so-called "Cross of the Bouquets" combat occurred in Urrugne. The French were taken unawares, as the preceding night's inclement weather had obfuscated the assailants' preparations. They crossed the Bidassoa at three distinct fords upstream of Fontarabie, whereas Soult anticipated that the primary assault would occur in the Ainhoa region, situated to the east of Sare. Reille's forces, which had taken up position in the Louis XIV redoubt in Urrugne by 7:30 that morning, were overwhelmed by Graham's column. Neither Reille nor Clauzel was able to engage their reserves in time, as Montfort and Boyer's brigades were at Calvaire Mountain and Poiriers Pass in Biriatou, and Gauthier's brigade was at Bordegain.
On October 7, 1813, at 4 a.m., an additional 20,000 men initiated an assault on the fortifications of Larrun. The fighting concluded at nightfall, with the French forces still maintaining control of Larrun at the cost of 1,000 casualties on each side. Wellington then opted to circumvent the French defensive positions at Olhain and proceeded to capture the Santa Barbara (or Sainte-Barbe) redoubt. On October 8, the French troops evacuated the Larrun hermitage (Ermitebaita) and commenced a strategic withdrawal to the Alchangue ridge. The battles of October 7 and 8 concluded with 1,400 French casualties and 1,600 Allied casualties.
On the night of October 12-13, the French troops under the command of Conroux recaptured the Santa Barbara redoubt, maintaining control of the position despite a Spanish assault that cost the opposing forces 500 men. By the conclusion of the battles that transpired between October 7 and 13, the Allied forces had established a foothold along the border ridge, encompassing the Olhain Chapel redoubt, the summit of Larrun, and the entirety of the ridge overlooking the Bidassoa. This included the Urrugne redoubts of the Emigrés, Bayonnette, and Louis XIV. The front stabilized, and a tacit peace emerged, leading the French and English to neutralize certain areas on the southern slope of Petite Larrun. Wellington was informed on October 20, through an intercepted letter from the governor of Pamplona, that the defense forces there were unable to hold out for more than a week. The city surrendered on October 31, allowing the Allied forces to concentrate their troops on the French border. In light of the French reversals in Germany (the Battle of Leipzig concluded on October 19 with the withdrawal of Napoleonic forces), the Allied commander opted to concede to the mounting pressure from the allied sovereigns and launch an invasion of France from the south.
On November 10, Wellington initiated a significant military operation, deploying 40,000 troops against the fortifications of Larrun and into the Nivelle Valley. The French center, under the command of Clauzel, was subjected to an assault in the Sare sector by eight divisions. Wellington's strategy entailed capturing the village, which represented a significant vulnerability in the defense line and subsequently advancing towards Saint-Pée-sur-Nivelle and Amotz. His objective was to maneuver on Bayonne, thereby dividing the French army and compelling it to engage on multiple fronts. At dawn, the Allied forces, under the command of General Colville, successfully captured the Santa-Barbara and Grenada redoubts and initiated an offensive against the Alchangue positions. The positions held by Clauzel and Erlon were breached by 40,000 men under the command of Hill and Beresford. Meanwhile, Reille's right flank was subjected to assaults from Hope, who commanded 19,000 men and 54 cannons. Hope was also supported by the English fleet's firepower. The Mouiz camp was evacuated by the French forces before 8 a.m., with the troops subsequently retreating to Sare. By 11 a.m., the Louis XIV and Saint-Ignace redoubts had also been captured.
The Allied divisions under the command of Generals Le Cor, Cole, Alten, Longa, and Freyre were able to gain the upper hand over the French defenders. They infiltrated through natural corridors, pursuing the retreating French in disorder towards Saint-Pée-sur-Nivelle, then Bidart and Arrauntz (Ustaritz district).
The English troops entered Saint-Pée-sur-Nivelle at approximately 2 p.m., while Longa's division proceeded to occupy Ascain, which was subsequently pillaged overnight. By the end of November 10, the French army remained deployed to the east, supported by the Nive and Larressore under the command of Drouet d'Erlon. Its center was situated near Bayonne on the Saint-Pée road, while to the west, it occupied the Nivelle Valley from Saint-Jean-de-Luz to Serres under the command of Reille. The Allied troops maintained control of Urrugne, Sare, Ascain, Saint-Pée-sur-Nivelle, Souraïde, and Espelette. French losses totaled 4,265 people, including 1,400 prisoners, along with 51 cannons and the stores of Espelette and Saint-Jean-de-Luz. The Allies sustained 2,694 casualties.
The redoubts of Ascain
The redoubts of Biskarzoun and Esnaur are situated on two mounds overlooking Ascain, affording observation of potential attackers' approach from all directions. Although isolated from the remainder of the position and still incomplete at dawn on November 10, 1813, the redoubts were sufficiently substantial to withstand a direct attack due to their location. Initially, both works appeared to have been held by units from Taupin's division, particularly the 47th line.
The Biskarzoun Redoubt, also known as Biscarzoun or Biskarzun, is situated at an elevation of 185 meters and provides a strategic overlook of the village of Ascain. The structure takes the form of an irregular heptagon, with a pile of rocks situated at its center. The structure's perimeter measures approximately 100 meters, with an area of approximately 650 square meters. To the southeast of the position, a semicircular trench was dug into a terrain ledge approximately 75 meters from the redoubt; this trench allowed defenders to cover a blind spot that the artillery of the main fortification could not reach. The Biskarzoun redoubt appears to have been captured without a fight, presumably at the direction of Taupin, as no evidence of a struggle was discovered. The site extends into the commune of Saint-Pée-sur-Nivelle and has been designated a historic monument since 1992.
The Esnaur Redoubt is situated at an altitude of 273 meters and serves as a dominant defensive position along the route it overlooks. The structure replicates the Biskarzoun Redoubt in a larger format, forming an irregular seven-sided polygon with an area of 2,200 m2 and a perimeter of 150 meters. A crown 6 to 9 meters wide surrounds it, at times excavated into the rock through techniques analogous to those employed in certain protohistoric "terraced" enclosures. An oval platform marks its center. This fortification has been designated a historic monument since 1992.
The natural fortress of Ihicelhaya serves to protect Ascain from incursions from the south, situated at an elevation of 419 meters. Battles are known to have taken place there, as evidenced by the presence of a dry stone wall, although the precise dates remain uncertain, with some sources indicating 1793-1794 and others 1813. Galleries have been excavated into the rock, forming a structure measuring approximately fifty meters in length and twenty meters in width. The redoubt of the hermitage is situated at the summit of the Larrun. It is a dry stone construction erected on the foundations of the former chapel. It housed an allied battery of cannons, enabling them to reach the fortified crest of Alchangue.
Several fortifications were obliterated in the 20th century, subsumed beneath the processes of urbanization and development. Notable among these are the redoubts of Chanoneta, Uramendi, and Beheré. The Teilleria, or Tuileries, redoubt was situated in the Serres area, between Saint-Jean-de-Luz and Ascain. The fortification served as the Serres camp, where reserve troops were stationed. It was also destroyed by 20th-century developments and formed a pentagon perched at an altitude of 57 meters, 500 meters from the chapel in the hamlet of Serres. It controlled the Nivelle, Ascain, and the road to Saint-Jean-de-Luz.
The redoubts of Sare
The Mouiz camp, also known as Koralhandia, meaning "grand enclosure," was constructed in 1813 and represents the most significant component of the defensive system. The fort is situated on the northern slopes of the Larrun massif, at the location designated as Aira-herri, at an elevation of 537 meters, approximately 800 meters northeast of the Trois-Fontaines railway station on the Larrun railway. The strategic position was selected to safeguard the access to the Saint-Ignace pass and the village of Sare from an adversary approaching from the west or southwest. Although the redoubt is situated within the boundaries of Sare, the western extremity of the Alchangue ridge, colloquially designated Petite Larrun by military authorities, is depicted on the Ascain cadastre.
The redoubt, constructed using superimposed sandstone slabs without the addition of cement, exhibits a six-pointed star shape, rendering it particularly suited for short-range flanking fire. The structure covers an area of 1,040 square meters and is enclosed by a wall that does not contain loopholes. This wall rises to a height of two meters with a thickness of 80 centimeters. A trench measuring approximately 400 meters in length and constructed in a zigzag configuration connects the redoubt to Alchango-Harriak, a fortified ridge situated 500 meters to the southwest. This ridge extends from the Argaïneko pass to the Trois Fontaines pass and reaches an elevation of 625 meters. Four infantry posts have been established along the ridge. The fortification is equipped with six artillery pieces.
The work, with a perimeter of 1,460 meters, including 1,040 meters of dry stone walls, has been included on the Ministry of Culture's list of protected cultural heritage sites since November 4, 1986.
On November 10, 1813, Kempf's brigades of the 43rd Allied Regiment and Colborne's of the 17th Portuguese Regiment initiated an assault on the Mouiz camp from the ridge, while another Portuguese battalion advanced directly on the fortification. The French defenders fought desperately, expending their remaining ammunition and strength and resorting to stones and rocks during the Allies' final assault. The Chapel of La Madeleine is a fortified structure situated to the northeast of Sare, in the vicinity of the Amotz district of Saint-Pée-sur-Nivelle. The structure reaches an elevation of 187 meters and is constructed in a quadrilateral formation, with three sides protected by a deep ditch. The side facing Amotz is defended by several smaller defensive structures. The Uhaldekoborda trench, some 500 meters to the east, runs for 250 meters and is protected by a parapet 60 centimeters high.
On November 10, 1813, the front formed by the redoubt of the Chapel of La Madeleine and that of Louis XIV was overrun following two unsuccessful assaults by two Anglo-Portuguese divisions. This redoubt has been listed since 1993.
The Zuhalmendi Redoubt, also known as Zuhalmeni or Souhameni and as the Signal Redoubt, is situated to the northwest of the town, close to the Mendionde pass, at an elevation of 301 meters. The star-shaped work measures 115 by 85 meters, with a perimeter of 310 meters, and is protected by a deep ditch that renders it impassable to attackers. Defended by 350 seasoned soldiers, the work endured five successive assaults before surrendering; the surrender was obtained by an English colonel who came to negotiate with the regiment's commander, convincing him of the futility of resistance in the face of the Allied advance. In the course of the battle, two hundred English soldiers were killed for the loss of a single French defender.
The entire site has been listed since 1992.
The two fortifications of Ermitebaïta and Mendibidea, which are mutually supporting, comprise a system affording extensive views of attackers emerging from the ravine that climbs toward the Mendiondo pass, and safeguard access to the ridge protected by the Zuhalmendi redoubt. The Ermitebaïta redoubt commands the Saint-Ignace pass. It is situated at an altitude of 268 meters, two kilometers northeast of the summit of the Larrun and 750 meters southwest of the Zuhalmendi redoubt. The fortification is star-shaped, with a perimeter of 185 meters, inscribed within a quadrilateral measuring 90 by 75 meters. It is surrounded by a parapet and a four-meter-wide ditch. A trench connects it to a "U"-shaped outpost defending the pass. In 1813, the young recruits stationed there abandoned the fort without a fight after the evacuation of the Louis XIV redoubt. It has been listed as a historic monument since 1992.
The Mendibidea redoubt, situated 250 meters from Ermitebaïta, covers the western aspect of the Saint-Ignace pass. The structure's perimeter measures 152 meters. Like Ermitebaïta, the Mendibidea redoubt was vacated by the 70th Line battalion, comprising inexperienced recruits, following the evacuation of the Louis XIV redoubt in the face of the advance of the Spanish divisions of Longa and Freyre; despite General Taupin's endeavors, the battalion could not be reassembled. The structure, which has retained a high degree of integrity, has been designated a historic monument since 1992, along with that of Ermitebaïta. The chapel of Olhain is situated at an elevation of 397 meters on a hilltop overlooking the road from Sare to Vera de Bidassoa. The redoubt is an irregular pentagon, with the two largest sides measuring 25 meters. One side, oriented towards the Larrun, encompasses the chapel and is supported by two opposing lateral walls. During the battles of October 7 and 8, 1813, Wellington seized the redoubt before launching an offensive on the hermitage. The structure has been listed since 1992.
The Santa-Barbara Redoubt is situated at an altitude of 137 meters in the Lehenbiscay district. It is located on the hill of Santa Barbara, on the edge of the plateau overlooking the Lizuniaga stream, 1,500 meters south of the church of Sare. The structure has a star shape and is protected by a ditch ranging in width from 4.60 to 6.50 meters, which is doubled inside by a 180-meter-long parapet. It is inscribed within an 80-meter-square quadrilateral and appears to be part of a larger defensive ensemble, of which it constitutes the main part. The structure has been listed since 1993.
During the Western Pyrenees campaign (1793-1795), the position was defended by Théophile de la Tour d'Auvergne, known as the "First Grenadier of the Republic." The redoubt was destroyed in 1795 but rebuilt in 1813. During the battles between October 7 and 13 of that year, the redoubt changed hands several times; Wellington's troops ultimately secured it on November 10, at the cost of 500 combat-ready men against 200 imperial soldiers. The Grenada Redoubt, also known as Chelkor, is situated on the territory of Sare. Positioned at an elevation of 127 meters at the summit of a hill, it overlooks the road below. The structure is a six-pointed star-shaped work with a perimeter of 80 meters, surrounded by a ditch measuring between 1.5 and 2 meters in width. The inner parapet of the star is doubled on its southern part by a 50-centimeter ditch. During the 1813 battles, the redoubt offered significant resistance to Allied troops; however, it ultimately succumbed to the combined assault of a British horse battery and a Portuguese infantry brigade.
The Monhoa Redoubt, also known as Monhohandi, is situated in Sare and serves as an outpost for the Santa-Barbara and Grenada redoubts on the route connecting the Lizarrieta pass to the Lizuniaga pass. It is located at an altitude of 167 meters; only a Z-shaped ditch remains, and the absence of remains on either side suggests that it was only a half-redoubt.
Similarly, the redoubt of boundary marker 29, also designated as Bechini, is situated on a plateau at an elevation of 600 meters, at the base of the Larrun summit. The structure forms a star shape, protected by a ditch measuring 50 centimeters in depth and an earth parapet. It is situated adjacent to a dry stone barn to the southwest. Historical research has demonstrated that it was the site of considerable combat during the wars of 1793–1794.
The Louis XIV Redoubt, also known as Mendiondo or Gastelugaina, was destroyed in 1977, except for a fragment of rampart into which the current water tower is embedded. According to General Francis Gaudeul, it is a protohistoric enclosure that was adapted for the needs of the 19th-century wars. It was also used during the 1793-1794 battles against Spanish troops.
The elliptical shape is a relatively uncommon feature among modern-era fortifications, which may be attributed to the protohistoric origins of the structure in question.
On November 10, 1813, the troops under the command of General Maransin, flanked by friendly forces to the east and west, successfully repelled two assaults. With the withdrawal of Conroux's division on the left, the redoubt was captured by the Allied forces, at a cost to the French of three officers and 179 soldiers. General Maransin, temporarily taken captive by the attackers, managed to escape and subsequently resumed command of his division.
The Idoyko Biskarra Redoubt is situated at the border between Spain and France, close to boundary marker 43, at the summit of a hill at an altitude of 502 meters. The structure is ovoid and oriented towards French territory; historical evidence suggests that it was used during the 1793-1794 battles.
The redoubts of Urrugne
The Bayonnette Redoubt is located at an altitude of 560 meters at the summit of Mendalé. It has a view of the village of Vera de Bidassoa.
The parapet surrounding the structure measures 350 meters in length; in front of it lies a ditch 7 meters wide and 1.5 to 2 meters deep. The structure has a maximum length of 127 meters from north to south and 107 meters from south-southwest to north-northeast. The mound rising at its center is a truncated cone, 3 to 5 meters high, with a base circumference of nearly 110 meters and a diameter of 20 meters at the top.
The site saw action during the conflicts of 1793-1794 and 1813. It acquired its designation as the "Bayonnette Redoubt" as a result of the French bayonet charge against the Spaniards, which commenced on July 24, 1794, intending to reclaim the position that had been lost on May 2, 1793.
It was restored in September 1813 and defended by the 9th Light Battalion, from Taupin's division, part of Clausel's army corps. It was attacked by the Allies on October 7. On October 8, 1813, the battalion of the 88th Infantry Regiment, under the command of Battalion Chief Gillet, resisted the English assaults for a long time before being massacred. General Van der Maësen, who died while trying to clear the bridge at Vera de Bidassoa, was temporarily buried there before being subsequently interred at the Ascain cemetery.
The Bayonnette Redoubt and the Bortuste Redoubt have been designated as historic monuments since 1992.
The Emigrés Redoubt is situated at an elevation of 394 meters, overlooking the Ibardin pass. It is positioned on a ridge that runs parallel to the CD 404 road, which connects the pass to the former Herboure customs.
The structure, which appears to have been named in error due to the emigrants' camp erected on the other side of the border, has been listed since 1992. It is noteworthy that the entrance is located to the south, where the parapet covered with stone slabs creates a chicane to repel attackers to the east. The redoubt occupies a quadrilateral of 58 by 70 meters.
The battles of 1793 left little documentation. The redoubt appears to have been captured by the Spaniards on May 2, 1793, and subsequently recaptured by the French in July 1794. In October 1813, given the low density of the French defensive system, the redoubt seems to have been taken by attackers emerging from the wooded ravines surrounding it.
The Louis XIV Redoubt is a distinct structure from that of Sare, which has already been described. It extends over the communes of Urrugne and Biriatou, near boundary marker 124. The structure overlooks the Bidassoa and allows a view of Pheasant Island, situated 1 km to the west. It was named the Louis XIV Redoubt in memory of the conference held on this island in 1659. It has been listed as a historic monument since 1997.
On April 23, 1793, the Redoubt, which was defended by French troops, was attacked and subsequently captured by General Caro's troops. On June 26, the troops of General Servan, led by La Tour d’Auvergne, recaptured the redoubt. Further battles took place on July 23 of the same year, during which the Spaniards seized the redoubt, which they were forced to relinquish on January 14, 1794. Despite renewed assaults by General Caro on February 5, the French maintained control of the structure.
In the context of the 1813 battles, Marshal Soult established his command post in this redoubt in advance of the attack launched on August 31 towards San Marcial. On October 7, while the fortification was held by a brigade under General Reille's command, the redoubt was subjected to a sudden and intense assault by Wellington's troops. The French forces were compelled to relinquish the position and retreat to the Croix des Bouquets.
IGN maps from before 1950 show numerous redoubts in Urrugne that have since been dismantled, including the former Voltigeurs Redoubt, the Choucoutoun Redoubt, and the Legarcia Redoubt. These fortifications have been subsumed by the urban expansion encroaching upon the Larrun ridges within the town's boundaries.
See also
Battle of the Bidassoa
Battle of the Nive
Battle of Nivelle
Campaign in north-east France (1814)
France–Spain border
Peninsular War
French Revolutionary Wars
Notes
References
Jacques Antz, Sare, volume 1, 1993
Jacques Antz, Autrefois Sare, 2006
Henri Alexis Brialmont, Histoire du duc de Wellington, volume 2, 1856–1857
Francis Gaudeul, Les redoutes du Ier Empire du Pays basque, 1984
Francis Gaudeul, Les redoutes du Ier Empire du Pays basque, 1985
Guy Lalanne, Ascain, 1991
Guy Lalanne, Urrugne, 1989
Jean-Claude Lorblanchès, Campagne de l'armée impériale du Pays basque à Toulouse (1813-1814), 2013
Other sources
Bibliography
The author was a Belgian engineer officer specializing in fort construction, and founder of the Journal de l'armée belge (1850).
The description of the battles referenced by Guy Lalanne is based on the nineteenth-century works of Commandant and P.J. Pellot.
Mountain warfare
Mountains of Pyrénées-Atlantiques
Mountains of the Pyrenees
Fortifications
Military installations | Fortifications of Larrun | [
"Engineering"
] | 7,640 | [
"Fortifications",
"Military engineering"
] |
77,423,961 | https://en.wikipedia.org/wiki/Oklab%20color%20space | The Oklab color space is a uniform color space for device independent color designed to improve perceptual uniformity, hue and lightness prediction, color blending, and usability while ensuring numerical stability and ease of implementation. Introduced by Björn Ottosson in December 2020, Oklab and its cylindrical counterpart, Oklch, have been included in the CSS Color Level 4 and Level 5 drafts for device-independent web colors since December 2021. They are supported by recent versions of major web browsers and allow the specification of wide-gamut P3 colors.
Oklab's model is fitted with improved color appearance data: CAM16 data for lightness and chroma, and IPT data for hue. The new fit addresses issues such as unexpected hue and lightness changes in blue colors present in the CIELAB color space, simplifying the creation of color schemes and smoother color gradients.
Coordinates
Oklab uses the same spatial structure as CIELAB, representing color using three components:
L for perceptual lightness, ranging from 0 (pure black) to 1 (reference white, if achromatic), often denoted as a percentage
a and b for opponent channels of the four unique hues, unbounded but in practice ranging from −0.5 to +0.5; CSS assigns ±100% to ±0.4 for both
a for green (negative) to red (positive)
b for blue (negative) to yellow (positive)
Like CIELCh, Oklch represents colors using:
L for perceptual lightness
C for chroma representing chromatic intensity, with values from 0 (achromatic) with no upper limit, but in practice not exceeding +0.5; CSS treats +0.4 as 100%
h for hue angle in a color wheel, typically denoted in decimal degrees
Achromatic colors
Neutral greys, pure black and the reference white are achromatic, that is, a = 0, b = 0, C = 0, and h is undefined. Assigning any real value to their hue component has no effect on conversions between color spaces.
Color differences
The perceptual color difference in Oklab is calculated as the Euclidean distance between the coordinates: ΔE = √((ΔL)² + (Δa)² + (Δb)²).
Conversions between color spaces
Conversion to and from Oklch
Like CIELCh, the Cartesian coordinates a and b are converted to the polar coordinates C and h as follows:
C = √(a² + b²), h = atan2(b, a)
And the polar coordinates are converted back to the Cartesian coordinates as follows:
a = C · cos(h), b = C · sin(h)
Conversion from CIE XYZ
Converting from CIE XYZ with a Standard Illuminant D65 involves:
Converting to an LMS color space with a linear map M1: (l, m, s) = M1 · (X, Y, Z)
Applying a cube root non-linearity: l′ = ∛l, m′ = ∛m, s′ = ∛s
Converting to Oklab with another linear map M2: (L, a, b) = M2 · (l′, m′, s′)
The numerical entries of M1 and M2 are fixed by the reference implementation; a sketch with the commonly quoted values follows.
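The following Python sketch spells out the three steps above. The matrix values are transcribed from Björn Ottosson's reference implementation as they are commonly quoted; treat them as assumptions to verify against the original source before production use.

```python
import numpy as np

M1 = np.array([  # CIE XYZ (D65) -> cone-like LMS space
    [ 0.8189330101,  0.3618667424, -0.1288597137],
    [ 0.0329845436,  0.9293118715,  0.0361456387],
    [ 0.0482003018,  0.2643662691,  0.6338517070],
])
M2 = np.array([  # non-linear LMS -> Oklab
    [ 0.2104542553,  0.7936177850, -0.0040720468],
    [ 1.9779984951, -2.4285922050,  0.4505937099],
    [ 0.0259040371,  0.7827717662, -0.8086757660],
])

def xyz_to_oklab(xyz):
    """D65-relative CIE XYZ -> Oklab (L, a, b)."""
    lms = M1 @ np.asarray(xyz, dtype=float)  # step 1: linear map
    lms_prime = np.cbrt(lms)                 # step 2: cube-root non-linearity
    return M2 @ lms_prime                    # step 3: linear map

def oklab_to_oklch(lab):
    """Cartesian Oklab -> cylindrical Oklch (L, C, h with h in degrees)."""
    L, a, b = lab
    return np.array([L, np.hypot(a, b), np.degrees(np.arctan2(b, a)) % 360.0])

# The D65 white point should land at L ~ 1 with a ~ b ~ 0 (achromatic).
print(xyz_to_oklab([0.9505, 1.0, 1.0888]))
```

As a sanity check, the rows of M2 for a and b each sum to zero, which is what forces achromatic inputs (l′ = m′ = s′) to land on the a = b = 0 axis.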
Conversion from sRGB
Converting from sRGB requires first converting from sRGB to CIE XYZ with a Standard Illuminant D65. As the last step of this conversion is a linear map from linear RGB to CIE XYZ, the reference implementation directly employs the single multiplied matrix representing the composition of the two linear maps, taking linear sRGB directly to the LMS space.
Conversion to CIE XYZ and sRGB
Converting to CIE XYZ and sRGB simply involves applying the respective inverse functions in reverse order: inverting M2, cubing to undo the cube-root non-linearity, inverting M1, and finally applying the XYZ-to-sRGB transform.
Notes
References
Color space
Color appearance models
2020 introductions | Oklab color space | [
"Mathematics"
] | 667 | [
"Color space",
"Space (mathematics)",
"Metric spaces"
] |
77,424,313 | https://en.wikipedia.org/wiki/Cauchy%27s%20estimate | In mathematics, specifically in complex analysis, Cauchy's estimate gives local bounds for the derivatives of a holomorphic function. These bounds are optimal.
Cauchy's estimate is also called Cauchy's inequality, but must not be confused with the Cauchy–Schwarz inequality.
Statement and consequence
Let $f$ be a holomorphic function on the open ball $B(a, r)$ in $\mathbb{C}$. If $M$ is the sup of $|f|$ over $B(a, r)$, then Cauchy's estimate says: for each integer $n \ge 0$,
$$|f^{(n)}(a)| \le \frac{n!\, M}{r^n},$$
where $f^{(n)}$ is the n-th complex derivative of $f$; i.e., $f' = \frac{df}{dz}$ and $f^{(n)} = \left(f^{(n-1)}\right)'$.
Moreover, taking $f(z) = z^n$, $a = 0$ shows the above estimate cannot be improved.
As a corollary, for example, we obtain Liouville's theorem, which says a bounded entire function is constant (indeed, let $r \to \infty$ in the estimate). Slightly more generally, if $f$ is an entire function bounded by $A + B|z|^k$ for some constants $A$, $B$ and some integer $k$, then $f$ is a polynomial.
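As an informal numerical illustration (not part of the original article), the Python snippet below checks the estimate for $f = \exp$ at $a = 0$ by evaluating the contour integral for $f^{(n)}(0)$ directly; the radius and order are arbitrary choices.

```python
import numpy as np
from math import exp, factorial, pi

# Sanity check of Cauchy's estimate for f(z) = e^z on B(0, r):
# here M = sup |f| = e^r on the ball, and f^(n)(0) = 1 for every n.
r, n = 2.0, 5
t = np.linspace(0.0, 2.0 * pi, 4096, endpoint=False)
w = r * np.exp(1j * t)  # the contour w = r e^{it}

# f^(n)(0) = n!/(2 pi i) * \oint f(w) / w^{n+1} dw; with dw = i w dt this
# reduces to n! times the mean of f(w) / w^n over the contour.
deriv = (factorial(n) * np.mean(np.exp(w) / w**n)).real
bound = factorial(n) * exp(r) / r**n

print(f"|f^({n})(0)| = {abs(deriv):.6f} <= n! M / r^n = {bound:.6f}")
```

For these values the integral returns 1 to numerical precision, comfortably below the bound of about 27.7.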
Proof
We start with Cauchy's integral formula applied to $f$, which gives for $z$ with $|z - a| < r'$,
$$f(z) = \frac{1}{2\pi i} \oint_{\gamma} \frac{f(w)}{w - z} \, dw,$$
where $\gamma(t) = a + r' e^{it}$, $0 \le t \le 2\pi$, and $0 < r' < r$. By differentiation under the integral sign (in the complex variable), we get:
$$f^{(n)}(z) = \frac{n!}{2\pi i} \oint_{\gamma} \frac{f(w)}{(w - z)^{n+1}} \, dw.$$
Thus,
$$|f^{(n)}(a)| \le \frac{n!}{2\pi} \cdot \frac{M}{r'^{n+1}} \cdot 2\pi r' = \frac{n!\, M}{r'^n}.$$
Letting $r' \to r$ finishes the proof.
(The proof shows it is not necessary to take $M$ to be the sup over the whole open disk, but because of the maximum principle, restricting the sup to the near boundary would not change $M$.)
Related estimate
Here is a somewhat more general but less precise estimate. It says: given an open subset $U \subset \mathbb{C}$, a compact subset $K \subset U$ and an integer $n \ge 0$, there is a constant $C$ such that for every holomorphic function $f$ on $U$,
$$\sup_K |f^{(n)}| \le C \int_U |f| \, d\mu,$$
where $\mu$ is the Lebesgue measure.
This estimate follows from Cauchy's integral formula (in the general form) applied to $\varphi f$, where $\varphi$ is a smooth function that is $= 1$ on a neighborhood of $K$ and whose support is contained in $U$. Indeed, shrinking $U$, assume $U$ is bounded and its boundary is piecewise-smooth. Then, since $\bar\partial(\varphi f) = f\, \bar\partial\varphi$, by the integral formula,
$$\varphi(z) f(z) = \frac{1}{2\pi i} \int_{\partial U} \frac{\varphi(w) f(w)}{w - z} \, dw - \frac{1}{\pi} \int_U \frac{f(w)\, \bar\partial\varphi(w)}{w - z} \, d\mu(w)$$
for $z$ in $K$ (since $K$ can be a point, we cannot assume $z$ is in the interior of $K$). Here, the first term on the right is zero since the support of $\varphi$ lies in $U$, so $\varphi$ vanishes near $\partial U$. Also, the support of $\bar\partial\varphi$ is contained in $U \setminus K$, at positive distance from $K$. Thus, after the differentiation under the integral sign, the claimed estimate follows.
As an application of the above estimate, we can obtain the Stieltjes–Vitali theorem, which says that a sequence of holomorphic functions on an open subset of $\mathbb{C}$ that is bounded on each compact subset has a subsequence converging on each compact subset (necessarily to a holomorphic function, since the limit satisfies the Cauchy–Riemann equations). Indeed, the estimate implies such a sequence is equicontinuous on each compact subset; thus, Ascoli's theorem and the diagonal argument give the claimed subsequence.
In several variables
Cauchy's estimate is also valid for holomorphic functions in several variables. Namely, for a holomorphic function $f$ on a polydisc $D = D_1 \times \cdots \times D_n$ of polyradius $(r_1, \dots, r_n)$ centered at $a$, we have: for each multi-index $\alpha \in \mathbb{N}^n$,
$$|\partial^\alpha f(a)| \le \frac{\alpha!\, M}{r^\alpha},$$
where $M = \sup_D |f|$, $r^\alpha = r_1^{\alpha_1} \cdots r_n^{\alpha_n}$ and $\alpha! = \alpha_1! \cdots \alpha_n!$.
As in the one-variable case, this follows from Cauchy's integral formula in polydiscs. The related estimate above and its consequence also continue to be valid in several variables with the same proofs.
See also
Taylor's theorem
References
Further reading
https://math.stackexchange.com/questions/114349/how-is-cauchys-estimate-derived/114363
Complex analysis | Cauchy's estimate | [
"Mathematics"
] | 706 | [
"Mathematical analysis",
"Mathematical analysis stubs"
] |
77,426,205 | https://en.wikipedia.org/wiki/Sierpi%C5%84ski%27s%20theorem%20on%20metric%20spaces | In mathematics, Sierpiński's theorem is an isomorphism theorem concerning certain metric spaces, named after Wacław Sierpiński who proved it in 1920.
It states that any countable metric space without isolated points is homeomorphic to $\mathbb{Q}$, the space of rational numbers with its standard topology.
Examples
As a consequence of the theorem, the metric space $\mathbb{Q}^2$ (with its usual Euclidean distance) is homeomorphic to $\mathbb{Q}$, which may seem counterintuitive. This is in contrast to, e.g., $\mathbb{R}^2$, which is not homeomorphic to $\mathbb{R}$. As another example, $\mathbb{Q} \cap [0, 1]$ is also homeomorphic to $\mathbb{Q}$, again in contrast to the closed real interval $[0, 1]$, which is not homeomorphic to $\mathbb{R}$ (whereas the open interval $(0, 1)$ is).
References
See also
Cantor's isomorphism theorem is an analogous statement on linear orders.
Theorems in topology | Sierpiński's theorem on metric spaces | [
"Mathematics"
] | 163 | [
"Mathematical theorems",
"Mathematical problems",
"Topology",
"Theorems in topology"
] |
77,426,376 | https://en.wikipedia.org/wiki/Pixhawk | Pixhawk is a project responsible for creating open-source standards for the flight controller hardware that can be installed on various unmanned aerial vehicles. Additionally, any flight controller built to the open standards often includes "Pixhawk" in its name and may be referred to as such.
Overview
An unmanned vehicle's flight controller, also referred to as an FC, FCB (flight control board), FMU (flight management unit), or autopilot, is a combination of hardware and software that is responsible for interfacing with a variety of onboard sensors and control systems in order to facilitate remote control or provide fully autonomous control.
Pixhawk-standardized flight controllers are being used for academic, professional, and amateur applications, and are supported by two mainstream autopilot firmware options: PX4 and ArduPilot. Both firmware options allow for a variety of vehicle types through the Pixhawk flight controller system, including configuration options for unmanned boats, rovers, helicopters, planes, VTOLs, and multirotors. Many manufacturers have adopted various iterations of the Pixhawk standard, including Holybro and CubePilot. Refer to the UAV-systems hardware chart for a full list of flight controllers that have fully or partially adopted the Pixhawk standard.
Pixhawk flight controllers typically feature one or two microcontrollers. In the case of two microcontrollers, a main flight management processor handles all sensor readings, PID calculations, and other resource-heavy computations, while the other handles input/output operations to external motors, switches and radio control receivers. Onboard sensors include an IMU with a multi-axis accelerometer and gyroscope, magnetometer to use as a compass, and a GPS tracking unit to estimate the vehicle's location.
Standards
The Pixhawk standards dictate the hardware requirements for manufacturers who are building products to be compatible with the PX4 autopilot software stack. However, due to ArduPilot's adaptation of Pixhawk flight controllers, the standard is able to ensure compatibility with ArduPilot as well.
The open standards consist of a main autopilot reference standard for each iteration of the Pixhawk FMU, as well as various other standards that apply to the general Pixhawk control ecosystem, such as a payload bus standard or a smart battery standard.
Autopilot Reference Standard
This is the main section of the Pixhawk open standards, containing all mechanical and electrical specifications for each version of the flight management unit. Currently, versions 1, 2, 3, 4, 4X, 5, 5X, 6X, 6U, and 6C autopilots have been released. The mechanical design standard includes dimensional drawings of the FMU's PCB, the selected sensor types and their locations, and areas that need additional heat sinking. The electrical standard includes the pin-out of each pin in the main processing microcontroller, and which interface each pin is set to communicate with.
Autopilot Bus Standard
The autopilot bus standard is an extension of the autopilot reference standard specifically for providing more information about manufacturing the latest reference versions of Pixhawk FMU, such as the 5X and 6X. The main reason for this is that these are the first flight units featuring a system on module design, where the housing of the flight controller module takes the form of a compact prism with a set of extremely high-density, 100-pin connectors between the module and the baseboard (seen at the bottom of the image on the right). The baseboard allows users to plug the necessary peripheral devices (such as motors, servos, and radios) into the flight controller, while the system on module design results in an easily swappable flight computer. Additionally, this bus standard details PCB layout guidelines for the system on module along with a catalog of reference schematics for interfaces between the module and the baseboard.
Connector Standard
In the connector standard, the Pixhawk project specifies the JST GH connector series for the vast majority of all interfaces between the flight controller board and pluggable peripherals. Just as importantly, the standard defines a convention for user-facing pin-outs for telemetry, GPS, CAN bus, SPI, power, and debug ports. External pin-out information is critical for anyone developing a vehicle with an autopilot, as improperly plugging in peripherals results in a non-functional system at best, and a dangerous environment with broken hardware at worst. Although there is a great deal of variation within the Pixhawk family in terms of available ports and port types, the standardization of pin-outs for the most popular interfaces is immensely helpful to any user working with multiple generations of Pixhawk flight controllers.
Other standards
Payload Bus Standard
Although this section serves as an accessory to the main Autopilot Reference Standard, it concisely details how the Pixhawk standards suggest making additional vehicle payloads that are compatible with a Pixhawk autopilot. Although it is not strictly enforced across all vehicle payload manufacturers, this facilitates the possibility for users to implement payloads and flight controllers from different manufacturers.
Smart Battery Standard
The smart battery standard has not been published yet, but it is set to define the interface between a smart battery and a Pixhawk FMU. Such a standard would define the communication protocols, connectors, and capabilities of a battery management system that would be used in a Pixhawk-operated vehicle.
Radio Interface Standard
Although there are a variety of radio solutions that can be interfaced with a Pixhawk flight controller, the project does have a short mechanical, electrical, and software definition for a Pixhawk-specific radio communication system. The standard anticipates connections between ground stations and radio modules to be over USB or Ethernet, while connections between local and remote radios could go over traditional radio-frequency links, or LTE.
History
In 2008, Lorenz Meier, a master's student at ETH Zurich, wanted to make an indoor drone that could use computer vision to autonomously traverse a space and avoid collisions with obstacles. However, such technology did not exist, let alone in a way that was accessible to a university student. Motivated by participating in the indoor autonomy category of a European Micro Air Vehicle competition, Lorenz leveraged the help of professor Marc Pollefeys and assembled a group of 14 teammates to spend nine tireless months creating custom flight controller hardware, firmware, and high-level software. The team, named "Pixhawk," won first place in their category in 2009, being the first competitors to successfully implement computer vision for obstacle avoidance.
Revisiting the project in subsequent years, Lorenz realized that there were not a lot of existing industry tools that could be used to accomplish what he and his team did. As a result, the Pixhawk team made the entire project open source. The ground control software that allowed the team to interface with the drone while it was in flight, the MAVLink communication protocol that was custom developed for streaming telemetry back to the ground station, the PX4 autopilot software that was responsible for controlling the drone, and the Pixhawk flight controller hardware that the autopilot ran on were all released to the public for further development.
Over time, the released project began to grow. MAVLink was picked up by the open-source ArduPilot autopilot software development project, and the ground control software QGroundControl was subsequently used to interface with MAVLink systems. After a couple of codebase rewrites and hardware development cycles, Lorenz and a worldwide team of open-source maintainers were able to support a manufacturer that would build a flight controller to their standards. In 2013, 3D Robotics became the first manufacturer of commercial Pixhawk flight controllers, officially lowering the barrier to entry to autonomous flight for enthusiasts and corporations worldwide. Now, anyone could purchase an extremely capable autonomous flight controller, flash it with free, open-source PX4 or ArduPilot firmware, and have a university research-level drone platform.
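As a small illustration of how accessible the stack became, the sketch below uses the open-source pymavlink library to listen for the MAVLink heartbeat that a Pixhawk streams over its telemetry link. The connection string is an assumption; substitute your own serial port or UDP endpoint.

```python
# Minimal sketch (pip install pymavlink): wait for a Pixhawk's heartbeat.
from pymavlink import mavutil

# "udpin:0.0.0.0:14550" is a common ground-station listening endpoint,
# but it is only an example; a serial port string also works.
link = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
link.wait_heartbeat()  # blocks until the autopilot announces itself
print(f"Heartbeat from system {link.target_system}, "
      f"component {link.target_component}")
```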
Lorenz heavily credits the open-source community with the extensive success of the Pixhawk platform, as the combined development power seemed to be greater than that of a well-resourced company. In order to help standardize various developments across the project and ensure that it remained accessible and open-source, the Dronecode organization was founded in 2014. Dronecode is currently a non-profit organization under the Linux Foundation, and it has been responsible for facilitating conversations that define the Pixhawk standards.
References
External links
Official repository on GitHub
PX4 autopilot software home page
ArduPilot autopilot software home page
Avionics computers
Embedded systems
Flight control systems
Open-source hardware
Unmanned aerial vehicles
Unmanned surface vehicles
Unmanned underwater vehicles
Robotics engineering | Pixhawk | [
"Technology",
"Engineering"
] | 1,820 | [
"Computer engineering",
"Robotics engineering",
"Embedded systems",
"Computer systems",
"Computer science"
] |
69,968,655 | https://en.wikipedia.org/wiki/Hairpin%20technology | Hairpin technology is a winding technology for stators in electric motors and generators and is also used for traction applications in electric vehicles. In contrast to conventional winding technologies, the hairpin technology is based on solid, flat copper bars which are inserted into the stator stack. These copper bars, also known as hairpins, consist of enameled copper wire bent into a U-shape, similar to the geometry of hairpins.
In addition to hairpins with U-shape, there are two other variants of bar windings, the so-called I-pin technology and the concept of continuous hairpin windings.
I-pins are straight copper wire elements that are inserted into the stator slots. Unlike hairpins, these pins are not bent prior to insertion into the stack; however, contacting is then necessary on both sides of the stator. In the concept of continuous hairpin windings, so-called winding mats are produced and then inserted into the stack from the inner diameter.
Hairpin stators are most commonly used for synchronous machines.
Stator structure
The structure of a hairpin stator differs from conventional stators only in the type of winding system; other components of the stator are little changed. The stack of sheets consists of many layers of individual sheets, each insulated by a thin coating. The housing is another subcomponent that does not generally require modifications. The thin, round wire of the conventional winding technology is replaced with copper bars, which better fit the slot geometry and therefore provide a higher slot-filling degree than regular winding. To create the necessary winding scheme, the free ends of the hairpins are twisted before welding. In addition to the impregnation process for the entire stator, which is also necessary for conventionally wound stators, a layer of insulation resin is applied to the ends of the hairpins.
Manufacturing
The hairpin stator process chain is based on an indirect winding approach. Due to the solid cross section, hairpins can be shaped into their final geometry ahead of the actual assembly process. In contrast to conventional stator production, in which winding-based assembly processes predominate, a forming process is applied.
Production can be divided into 4 steps:
Hairpin
In the first process, a flat copper wire, which is usually already enameled, is continuously unwound and straightened in several stages to reduce residual curvature and stresses. In preparation for the welding of the copper ends in a later process step, this insulation is partially removed; laser-based and mechanical processes are feasible. The hairpin wire is cut to length and bent, in varying order. Hairpins are formed into a three-dimensional geometry either in a single-stage process using special CNC bending equipment or in multiple stages in which a die bending process follows a swivel bending process. There are three technologies for bending hairpin wires: U-pin, in which the wire is bent into a shape resembling a U; I-pin, with wires resembling an I; and continuous hairpin, also called continuous wave, in which a single wire is bent into a serpentine shape up to several meters long. U-pin technology is the most common of these.
Assembly and twisting
Next the pins are inserted into the stator stack. The insertion process is limited by overlaps in the winding head geometry. The hairpins are usually pre-assembled in an assembly nest. Individual pins are arranged in accordance with the winding scheme. In general, a single hairpin stator uses 3-16 different hairpin geometries. The stator slots are lined with insulation paper to separate the winding system from the ground potential of the stator's sheet stack.
In the next assembly step, the hairpin basket is inserted axially into the stator stack. To support the insertion the hairpins are sometimes equipped with chamfers during the cutting process – grippers can improve positioning precision.
Each layer of hairpin ends is twisted in accordance with the winding scheme. During the associated rotation the tool has to be moved in an axial direction for height compensation. To ensure axial accessibility the hairpin ends must be radially exposed in a preparatory step.
Welding and interconnection
Next, hairpin ends are electrically contacted with each other to form the winding scheme. Using a laser, the hairpin ends are partially melted and joined. An optimal welding process is marked by homogenous weld geometries as well as minimal thermal input. Repeatable welding strategies require the stator to maintain a stable condition.
Relative height and lateral offset of the hairpin ends can cause welding defects. These can be prevented by corrective processes that depend on precise tolerances within upstream processes. Phase jumps and the main electroconductive connection of the entire winding can be carried out through connective elements or assemblies connected to the welded hairpin ends. This can also be done via laser welding. Examples of interconnection elements are contact rings, terminals, and bridges.
Insulation
After the winding process, the welded copper ends are re-insulated and the entire stator is impregnated. Powder coating or polyurethane-based resins are commonly used to insulate the copper ends. Typically, dipping, trickling, or potting processes are used. The impregnation process differs little from those used for conventional stators, such as dipping or trickling processes. The purpose of impregnation is to protect the stator from thermal, electrical, ambient, and mechanical influences.
Testing
A variety of tests are performed throughout the production process. Ensuring function- and safety-relevant properties of the stator is a key objective. Common tests are:
Partial discharge testing
Surge voltage testing
Resistance testing
Geometric testing
Challenges
Particularly in traction drives, a major implementation challenge is process reliability, especially in the bending and welding processes. The bending process must not damage the insulation and must exactly match the required geometry. Incorrectly welded hairpin ends can cause electrical losses and possibly a non-functioning stator.
Key target parameters are high fill factors within the stator slots and a small winding head. Due to the rectangular and enlarged conductor cross section, fill factors can reach 73%, significantly higher than the 45-50% of conventionally wound stators; a rough numerical comparison is sketched below. A small winding head increases the relative share of active material and thus the proportion of the machine that generates power. However, the hairpin's larger cross section can result in additional electrical losses, e.g. due to current displacement effects such as the skin effect.
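A back-of-the-envelope sketch of the fill-factor comparison, using entirely hypothetical slot and conductor dimensions (the 73% and 45-50% figures above come from the literature, not from this calculation):

```python
import math

# Fill factor = copper cross-section / slot cross-section.
slot_w, slot_h = 6.0, 20.0             # mm, assumed stator slot
slot_area = slot_w * slot_h

# Four rectangular hairpin bars, sized with some insulation clearance
bar_w, bar_h, n_bars = 5.4, 4.3, 4
hairpin_fill = n_bars * bar_w * bar_h / slot_area

# Round wire cannot exceed the hexagonal circle-packing density
# pi / (2 * sqrt(3)) ~ 0.9069 even before insulation and clearance losses.
round_packing_limit = math.pi / (2.0 * math.sqrt(3.0))

print(f"hairpin fill ~ {hairpin_fill:.0%}")          # ~77% for these numbers
print(f"round-wire packing limit ~ {round_packing_limit:.0%}")
```

The point of the sketch is structural: rectangular bars tile a rectangular slot almost completely, while round wire loses area to the gaps between circles before any insulation is accounted for.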
Automotive industry
Due to deterministic assembly processes, good speed-torque behavior, and high fill factors, hairpin technology has gained appeal for automotive applications. Additionally, the hairpin production process is suitable for automation. As a result, shorter cycle times and increasing quantities lead to decreasing production costs.
Hairpin technology is increasingly applied in automotive applications. The first production vehicle with hairpin technology was the 2008 General Motors Chevrolet Tahoe hybrid featuring 2 motors with this stator construction in GM's 2ML70 "2Mode" transmission.
The Volkswagen Group relies on hairpin stators in its electric vehicles, including the ID.3, ID.4 the Audi e-tron GT, and the Porsche Taycan. The BMW iX3 is the company's first vehicle to employ hairpin stators. In 2021, General Motors unveiled its new motor line up that includes a 64 kW ASM for hybrid applications and a 255 kW PSM in the Hummer EV. In 2023, Tesla announced that its next generation motor would use hairpins.
Research
Government and industry are funding hairpin technology research projects. These include:
Pro-E-Traktion (Production, BMW AG)
HaPiPro2 (Production, PEM of RWTH Aachen University)
AnStaHa (Production, Karlsruhe Institute of Technology)
IPANEMA (Machine Learning, API Hard- and Software GmbH)
KIPrEMo (Artificial Intelligence, FAPS of FAU Erlangen-Nürnberg)
KIKoSA (Artificial Intelligence, FAPS of FAU Erlangen-Nürnberg)
Further reading
Kampker/Schnetter/Vallée: Elektromobilität. 2nd rev. edition, 2018, Springer Berlin Heidelberg.
Gläßel, Tobias: Prozessketten zum Laserstrahlschweißen von flachleiterbasierten Formspulenwicklungen für automobile Traktionsantriebe. FAU Studien aus dem Maschinenbau Band 354. July 2020, Erlangen, FAU University Press.
VDMA/Raßmann: Produktionsprozess eines Hairpin-Stators. 1st edition, October 2019.
References
External links
VDMA: Production process of Hairpin stators
Schaeffler eDrive Plattform: Benefits and Disadvantages of a Hairpin stator
Electric motors | Hairpin technology | [
"Technology",
"Engineering"
] | 1,799 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
69,970,556 | https://en.wikipedia.org/wiki/Salicornia%20rubra | Salicornia rubra, the Rocky Mountain glasswort, is a species of flowering plant in the family Amaranthaceae. It is native to colder or higher areas of North America; the Yukon, Nunavut, British Columbia, Alberta, Saskatchewan, Manitoba, and Ontario in Canada, and the western and north-central United States. It has been introduced to Quebec and Michigan, and has gone extinct in Illinois. A halophyte, it is one of the most salt-tolerant plants of North America.
References
rubra
Halophytes
Flora of Yukon
Flora of Nunavut
Flora of Western Canada
Flora of Ontario
Flora of the Northwestern United States
Flora of Nevada
Flora of Utah
Flora of New Mexico
Flora of North Dakota
Flora of South Dakota
Flora of Nebraska
Flora of Kansas
Flora of Minnesota
Flora of Iowa
Plants described in 1899
Flora without expected TNC conservation status | Salicornia rubra | [
"Chemistry"
] | 176 | [
"Halophytes",
"Salts"
] |
69,970,614 | https://en.wikipedia.org/wiki/Buchalter%20Cosmology%20Prize | The Buchalter Cosmology Prize, established in 2014, is a prestigious annual prize bestowed by Dr. Ari Buchalter.
Every year, three Buchalter Prizes are awarded in recognition of ground-breaking work in cosmology with the potential to produce a breakthrough advance in our understanding of how the Universe works, particularly by substantially extending or challenging currently accepted models. The first, second, and third prizes carry awards of $10,000, $5,000, and $2,500 respectively. The winners are typically announced at the January meeting of the American Astronomical Society (AAS), placing the prize de facto among the annual AAS prizes.
Advisors and Judges
Submissions and nominations are overseen by the chairman, Dr. Ari Buchalter, an advisory board of two senior physicists, and a judging panel of three senior physicists. The composition of the advisory board and the judging panel changes periodically. Current members of the advisory board are David Helfand and Marc Kamionkowski; current members of the judging panel are Claudia de Rham, Matthew Johnson, and Justin Khoury.
Vision and Mission
The prize was conceived by Dr. Ari Buchalter, a former astrophysicist turned entrepreneur who earned his PhD from Columbia University in 1999 working with David Helfand, on the premise that there are fundamental gaps in our understanding of cosmology, and that several currently-accepted paradigms might be incomplete or incorrect. The prize was created to support the development of new boundary-pushing ideas or discoveries that have the potential to produce a breakthrough advance beyond our present understanding of the Universe.
Recipients
2014
1st prize: Marina Cortês and Lee Smolin, for their work The Universe as a Process of Unique Events.
2nd prize: Jonathan Kaufman, Brian Keating, and Brad Johnson, for their work Precision Tests of Parity Violation Over Cosmological Distances.
3rd prize: Carroll Wainwright, Matthew Johnson, Hiranya Peiris, Anthony Aguirre, Luis Lehner, and Steven Liebling, for their work Simulating the Universe(s): from Cosmic Bubble Collisions to Cosmological Observables with Numerical Relativity.
2015
1st prize: Julian Barbour, Tim Koslowski, and Flavio Mercati, for their work Identification of a gravitational arrow of time.
2nd prize: Nemanja Kaloper and Antonio Padilla, for their work Sequestering the Standard Model Vacuum Energy.
3rd prize: Niayesh Afshordi and Elliot Nelson, for their work Cosmological Non-Constant Problem: Cosmological bounds on TeV-scale physics and beyond.
2016
1st prize: Nima Khosravi, for their work Ensemble Average Theory of Gravity.
2nd prize: Elliot Nelson, for their work Quantum Decoherence During Inflation from Gravitational Nonlinearities.
3rd prize: Cliff Burgess, Richard Holman, Gianmassimo Tasinato, and Matthew Williams, for their work EFT Beyond the Horizon: Stochastic Inflation and How Primordial Quantum Fluctuations Go Classical.
2017
1st prize: Lasha Berezhiani and Justin Khoury, for their work Theory of Dark Matter Superfluidity.
2nd prize: Steffen Gielen and Neil Turok, for their work Perfect Quantum Cosmological Bounce.
3rd prize: Peter Adshead, Diego Blas, Cliff Burgess, Peter Hayman, and Subodh Patil, for their work Magnon Inflation: Slow Roll with Steep Potentials.
2018
1st prize: José Ramón Espinosa, Davide Racco, and Antonio Riotto, for their work A Cosmological Signature of the Standard Model Higgs Vacuum Instability: Primordial Black Holes as Dark Matter.
2nd prize: Douglas Edmonds, Duncan Farrah, Djordje Minic, Jack Ng, and Tatsu Takeuchi, for their work Modified Dark Matter: Relating Dark Energy, Dark Matter and Baryonic Matter.
3rd prize: Jonathan Braden, Matthew Johnson, Hiranya Peiris, Andrew Pontzen, and Silke Weinfurtner, for their work A New Semiclassical Picture of Vacuum Decay.
2019
1st prize: Jahed Abedi and Niayesh Afshordi, for their work Echoes from the Abyss: A highly spinning black hole remnant for the binary neutron star merger GW170817.
2nd prize: Eugenio Bianchi, Anuradha Gupta, Hal Haggard, and Bangalore Sathyaprakash, for their work Quantum gravity and black hole spin in gravitational wave observations: a test of the Bekenstein-Hawking entropy
3rd prize: Jose Beltrán Jiménez, Lavinia Heisenberg, and Tomi Koivisto, for their work The Geometrical Trinity of Gravity.
2020
1st prize: Daniel Green and Rafael Porto, for their work Signals of a Quantum Universe.
2nd prize: Mikhail Ivanov, Marko Simonović, and Matias Zaldarriaga, for their work Cosmological parameters from the BOSS Galaxy Power Spectrum.
3rd prize: Philip Mocz, Anastasia Fialkov, Mark Vogelsberger, Fernando Becerra, Mustafa Amin, Sownak Bose, Michael Boylan-Kolchin, Pierre-Henri Chavanis, Lars Hernquist, Lachlan Lancaster, Federico Marinacci, Victor Robles, and Jesús Zavala, for their work First star-forming structures in fuzzy cosmic filaments.
2021
1st prize: Karsten Jedamzik and Levon Pogosian, for their work Relieving the Hubble tension with primordial magnetic fields.
2nd prize: Azadeh Maleknejad, for their work SU(2) and its Axion in Cosmology: A common Origin for Inflation, Cold Sterile Neutrinos, and Baryogenesis.
3rd prize: Sunny Vagnozzi, Luca Visinelli, Philippe Brax, Anne-Christine Davis, and Jeremy Sakstein, for their work Direct detection of dark energy: the XENON1T excess and future prospects.
2024
1st prize: The Canadian Hydrogen Intensity Mapping Experiment (CHIME), for their work Detection of Cosmological 21 cm Emission with the Canadian Hydrogen Intensity Mapping Experiment.
2nd prize: Nathaniel Craig, Daniel Green, Joel Meyers, and Surjeet Rajendran, for their work No νs is Good News.
3rd prize: Nhat-Minh Nguyen, Fabian Schmidt, Beatriz Tucci, Martin Reinecke, and Andrija Kostić, for their work How much information can be extracted from galaxy clustering at the field level?
Perimeter Institute
Since its inception, the prize has been strongly dominated by the Perimeter Institute, whose researchers and associates featured among the prize winners for six consecutive years between 2014 and 2019; given the nature of the prize, this is a reflection of the cutting-edge research conducted at the institute.
References
External links
Astronomy prizes
Physics awards
Physical cosmology
Awards established in 2014
American science and technology awards | Buchalter Cosmology Prize | [
"Physics",
"Astronomy",
"Technology"
] | 1,436 | [
"Astronomical sub-disciplines",
"Astronomy prizes",
"Theoretical physics",
"Astrophysics",
"Science and technology awards",
"Physical cosmology",
"Physics awards"
] |
69,975,833 | https://en.wikipedia.org/wiki/Oxhydroelectric%20effect | The oxhydroelectric effect consists in the generation of voltage and electric current in pure liquid water, without any electrolyte, upon exposure to electromagnetic radiation in the infrared range, after creating a physical (not chemical) asymmetry in liquid water e.g. thanks to a strongly hydrophile polymer, such as Nafion.
Since the publication of the first seminal research, other independent studies referring to this effect have been published in reputable peer-reviewed journals (with impact factors above the median in their respective fields).
The system can be described as a photovoltaic cell operating in the infrared electromagnetic range, based on liquid water instead of a semiconductor.
Theoretical model
The model proposed by Roberto Germano and his collaborators, who first observed the effect, is based on the known concept of the exclusion zone.
The first observations of a different behaviour of water molecules close to the walls of their container date back to the late 1960s and early 1970s, when Drost-Hansen, upon reviewing many experimental articles, came to the conclusion that interfacial water shows structural differences with respect to bulk liquid water.
In 2006, Gerald Pollack published a seminal work on the exclusion zone; similar observations were subsequently reported by several other groups, all describing a coherent water region created at the boundary between the surface of a hydrophilic material and the bulk water.
Further elaborating on the work of Pollack, the model describes liquid water as a system made of two phases: a matrix of non-coherent water molecules hosting many "coherence domains" (CDs), about 0.1 μm in size, found in the exclusion zone but also in the bulk volume.
In this model, the behaviour of the coherence domains is also considered the cause of the formation of xerosydryle.
The two phases are characterized by different thermodynamic parameters and are in a stable non-equilibrium state.
The coherent phase should be described by a quantum state, in particular a state oscillating between a fundamental state, where electrons are firmly bound (ionization energy of 12.60 eV), and an excited state characterized by a quasi-free electron configuration. The energy of the excited state is 12.06 eV, which means that an amount of energy as small as (12.60 − 12.06) eV = 0.54 eV (in the infrared range) is sufficient to extract an electron.
Then, at a fixed temperature and for molecular densities exceeding a threshold, the transition of non-coherent water molecules to the coherent state is spontaneous, because it drives the system to a lower energy configuration.
More exactly, the almost-free electrons have to cross an energy barrier of (0.54 − Χ) eV, where Χ ~ 0.1 eV is the electric potential difference at the CD boundary with the non-coherent water.
This small amount of energy, ~0.44 eV, necessary for electron extraction makes coherent water a reservoir of quasi-free electrons that can be released by infrared stimulation, by quantum tunneling, or by small external perturbations.
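As a quick consistency check (using only the standard photon-energy relation, independent of the sources above), the quoted energies do correspond to infrared wavelengths:

```python
# lambda = h c / E, with h c ~ 1239.84 eV*nm (standard constants).
HC_EV_NM = 1239.84
for energy_eV in (0.54, 0.44):
    wavelength_um = HC_EV_NM / energy_eV / 1000.0
    print(f"{energy_eV} eV -> {wavelength_um:.2f} um")
# 0.54 eV -> ~2.30 um and 0.44 eV -> ~2.82 um, i.e. mid-infrared photons.
```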
The two water phases, with their different potentials, behave as the two components of a photovoltaic cell based on semiconductors.
Then, in the cell described in the patent, one of the two sectors has sheets of hydrophilic material, which create (more) coherent domains in that sector, with respect to the other sector.
Research
The research on the effect started as a side project in Germano's technology transfer company Promete s.r.l.; since 2023 it has been conducted in Oxhy s.r.l., a startup created to further develop this line of research.
Notes
Water
Electricity
Surface science | Oxhydroelectric effect | [
"Physics",
"Chemistry",
"Materials_science",
"Environmental_science"
] | 779 | [
"Water",
"Hydrology",
"Condensed matter physics",
"Surface science"
] |
75,803,705 | https://en.wikipedia.org/wiki/C3H6OS2 | The molecular formula C3H6OS2 may refer to:
S,S'-Dimethyl dithiocarbonate
Ethyl xanthic acid
Thiomethylketone
1,2-dithiolane-1-oxide
1,3-dithiolane-1-oxide | C3H6OS2 | [
"Chemistry"
] | 78 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
75,809,125 | https://en.wikipedia.org/wiki/Transition%20metal%20nitroso%20complexes | Transition metal nitroso complexes are coordination complexes containing one or more organonitroso ligands (RNO).
Structure and bonding
Organic nitroso compounds bind to metals in several ways, but most commonly as monodentate N-bonded ligands. O-bonded and η2-N,O-bonded modes are also known. Dimers of organic nitroso compounds also bind in a κ2-O,O′ bidentate manner.
Synthesis
Organic nitroso complexes can be prepared from preformed organic nitroso precursors. These precursors usually exist as N-N bonded dimers, but the dimer dissociates readily. This direct method is used to give W(CO)5(tert-BuNO) (where tert-Bu is (CH3)3C). An Fe-porphyrin complex is prepared by the same direct route. More complicated but more biorelevant routes involve degradation of precursors such as nitrobenzene and phenylhydroxylamine.
(Et = C2H5, i-Pr = (CH3)2CH)
The coupling of organic ligands and nitric oxide is yet another route.
Connection to methemoglobinemia
Methemoglobinemia is a disorder where a large fraction of hemoglobin in one's blood has converted to inactive forms, generically called methemoglobin. Since methemoglobin is not an oxygen-carrier, methemoglobinemia is a serious disorder, sometimes fatal. Exposure to nitrobenzene, aniline, and their derivatives cause this disorder, which is attributed to their conversion to nitrosobenzene (and derivatives), which inactivate hemoglobin by forming a complex with the Fe center, precluding binding of O2.
References
Coordination complexes
Inorganic chemistry
Nitroso compounds | Transition metal nitroso complexes | [
"Chemistry"
] | 387 | [
"Coordination chemistry",
"nan",
"Coordination complexes"
] |
78,917,032 | https://en.wikipedia.org/wiki/List%20of%20hypothetical%20particles | This is a list of hypothetical subatomic particles in physics.
Elementary particles
Some theories predict the existence of additional elementary bosons and fermions that are not found in the Standard Model.
Particles predicted by supersymmetric theories
Supersymmetry predicts the existence of superpartners to particles in the Standard Model, none of which have been confirmed experimentally. The sfermions (spin-0), superpartners of the Standard Model fermions, include the squarks and the sleptons.
Another hypothetical sfermion is the saxion, superpartner of the axion. Forms a supermultiplet, together with the axino and the axion, in supersymmetric extensions of Peccei–Quinn theory.
The predicted bosinos (spin 1/2) include the gauginos (such as the gluino, photino, winos, zino, and bino), the higgsinos, and the gravitino.
Just as the photon and Z boson are superpositions of the electroweak B and W3 fields, and the W± bosons are superpositions of the W1 and W2 fields, the photino, zino, and charged winos are the corresponding superpositions of the bino and the winos. Whether one uses the original gauginos or these superpositions as a basis, the only predicted physical particles are the neutralinos and charginos, obtained as superpositions of them together with the higgsinos.
Other superpartner categories include:
Charginos, superpositions of the superpartners of charged Standard Model bosons: charged Higgs boson and W boson. The Minimal Supersymmetric Standard Model (MSSM) predicts two pairs of charginos.
Neutralinos, superpositions of the superpartners of neutral Standard Model bosons: neutral Higgs boson, Z boson and photon. The lightest neutralino is a leading candidate for dark matter. The MSSM predicts four neutralinos.
Goldstinos are fermions produced by the spontaneous breaking of supersymmetry; they are the supersymmetric counterparts of Goldstone bosons.
Sgoldstinos, superpartners of the goldstinos.
Dark energy candidates
The following hypothetical particles have been proposed to explain dark energy.
Dark matter candidates
The following categories are not unique or distinct; for example, either a WIMP or a WISP is also a FIP.
Hidden sector theories have also proposed forces that only interact with dark matter, like dark photons.
From experimental anomalies
These hypothetical particles were claimed to have been found, or were hypothesized, to explain unusual experimental results. They relate to experimental anomalies but have not been reproduced independently and might be due to experimental errors.
Other
Cosmon, a hypothetical state containing the observable universe before the Big Bang.
Diproton (helium-2), a nucleus consisting of two protons and no neutrons; as yet unobserved.
Diquark, a hypothetical state of two quarks grouped inside a baryon.
Geons are electromagnetic or gravitational waves which are held together in a confined region by the gravitational attraction of their own field of energy.
Kaluza–Klein towers of particles are predicted by some models of extra dimensions. The extra-dimensional momentum is manifested as extra mass in four-dimensional spacetime; a minimal mass formula is sketched after this list.
Pomerons, used to explain the elastic scattering of hadrons and the location of Regge poles in Regge theory. A counterpart to odderons.
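For the Kaluza–Klein item above: a minimal sketch of the tower's mass spectrum for a field of four-dimensional mass $m_0$ with one extra dimension compactified on a circle of radius $R$, in natural units (a standard textbook reduction, not tied to any particular model):

$$m_n^2 = m_0^2 + \frac{n^2}{R^2}, \qquad n = 0, 1, 2, \ldots$$

Each higher mode $n$ carries more momentum around the circle, which appears as additional mass in four dimensions.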
By type
Branons, scalar fields predicted in brane world models.
Composite Higgs, models that consider the Higgs boson to be a composite particle.
Additional Higgs doublets are hypothesized by some theories of physics beyond the Standard Model.
Continuous spin particles, hypothetical massless particles related to the classification of the representations of the Poincaré group.
Cryptons, any particle from the dark sector of the string theory landscape.
Elementary particles that are not bosons or fermions:
Paraparticles, particles that follow parastatistics
Plektons, particles that follow braid statistics
Exotic particles, particles with exotic properties like negative mass or complex mass.
Exotic hadrons, particles composed of unusual combinations of quarks and gluons.
Exotic mesons
Exotic baryons
Glueball, hypothetical particle that consist of only gluons.
Quark bound states beyond the pentaquark, like hexaquarks and heptaquarks.
Leptoquarks, hypothetical bosons that carry both lepton and baryon quantum numbers and couple leptons to quarks.
Magnetic monopole is a generic name for particles with non-zero magnetic charge. They are predicted by Grand Unification Theories. These may include:
Dirac monopoles, monopoles that would allow charge quantization.
't Hooft–Polyakov monopoles, like Dirac monopoles but without Dirac strings.
Wu–Yang monopoles, point-like monopoles with a potential of the form 1/r.
Dyons, particles carrying both electric and magnetic charge, extending the idea of a magnetic monopole.
Majorana fermions, fermions that are their own anti-particle
Mesonic molecules, two mesons bound together by the strong force.
Micro black holes, sub-atomic sized black holes.
Black hole electron, a microscopic black hole with the properties of an electron.
Minicharged particles, hypothetical subatomic particles carrying a tiny fraction of the electron charge.
Mirror particles are predicted by theories that restore parity symmetry.
Neutronium, hypothetical nuclei consisting only of neutrons (more than one). Examples include the tetraneutron.
Preons were suggested as subparticles of quarks and leptons, but modern collider experiments have all but ruled out their existence.
Rishons, particles from the Rishon model of preons.
From superseded and obsolete theories
Caloric rays, used until the 19th century to explain thermal radiation.
Light corpuscles, hypothetical classical particles used to explain optical phenomena.
Phlogiston, hypothetical combustible content in matter used to explain thermodynamics before the 18th century.
Ultramundane corpuscles, from Le Sage's theory of gravitation, used to explain gravitational phenomena.
Strangelet, a hypothetical particle that could form matter consisting of strange quarks.
R-hadron, a bound state of a quark and a supersymmetric particle.
T meson, hypothetical mesons composed of a top quark and one additional subatomic particle; examples include the theta meson, formed by a top and an anti-top.
Tachyons are hypothetical particles that travel faster than the speed of light; they would paradoxically experience time in reverse and would violate the known laws of causality. A tachyon has an imaginary rest mass.
True muonium, an atom composed of a muon and an anti-muon; as yet unobserved.
Unparticles, hypothetical particles that are massless and scale invariant.
Weyl fermions, hypothetical massless spin-1/2 particles, so far observed only as quasiparticles.
See also
References
Particles
Subatomic particles
Particles
Unsolved problems in physics | List of hypothetical particles | [
"Physics"
] | 1,439 | [
"Hypothetical particles",
"Matter",
"Unsolved problems in physics",
"Physical objects",
"Physics beyond the Standard Model",
"Nuclear physics",
"Particle physics",
"Particles",
"Atoms",
"Subatomic particles"
] |
77,428,152 | https://en.wikipedia.org/wiki/Quantum%20double%20model | In condensed matter physics and quantum information theory, the quantum double model, proposed by Alexei Kitaev, is a lattice model that exhibits topological excitations. This model can be regarded as a lattice gauge theory, and it has applications in many fields, like topological quantum computation, topological order, topological quantum memory, quantum error-correcting code, etc. The name "quantum double" come from the Drinfeld double of a finite groups and Hopf algebras. The most well-known example is the toric code model, which is a special case of quantum double model by setting input group as cyclic group .
Kitaev quantum double model
The input data for the Kitaev quantum double model is a finite group $G$. Consider a directed lattice $\Gamma$; on each edge we put a Hilbert space $\mathbb{C}[G]$ spanned by the group elements. There are four types of edge operators, acting on a basis state $|z\rangle$ ($z \in G$) as
$$L_+^g|z\rangle = |gz\rangle, \qquad L_-^g|z\rangle = |zg^{-1}\rangle, \qquad T_+^h|z\rangle = \delta_{h,z}|z\rangle, \qquad T_-^h|z\rangle = \delta_{h^{-1},z}|z\rangle.$$
For each vertex $v$ connecting to edges $j_1, \dots, j_n$, there is a vertex operator
$$A(v) = \frac{1}{|G|} \sum_{g \in G} \prod_{k=1}^{n} L^g(j_k, v).$$
Notice each edge has an orientation: when $v$ is the starting point of $j_k$, the operator $L^g(j_k, v)$ is set as $L_-^g$; otherwise, it is set as $L_+^g$.
For each face $f$ surrounded by edges $j_1, \dots, j_k$, there is a face operator
$$B(f) = \sum_{h_1 h_2 \cdots h_k = 1} \prod_{m=1}^{k} T^{h_m}(j_m, f).$$
Similar to the vertex operator, due to the orientation of the edge, when the face $f$ is on the right-hand side when traversing the positive direction of $j_m$, we set $T^{h_m}(j_m, f) = T_+^{h_m}$; otherwise, we set $T^{h_m}(j_m, f) = T_-^{h_m}$ in the above expression. Also, note that the order of edges surrounding the face is assumed to be counterclockwise.
The lattice Hamiltonian of the quantum double model is given by
$$H = \sum_{v} \big(1 - A(v)\big) + \sum_{f} \big(1 - B(f)\big).$$
Both $A(v)$ and $B(f)$ are Hermitian projectors; they are stabilizers when the model is regarded as a quantum error-correcting code.
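For concreteness, the following is a minimal numerical sketch of the simplest case $G = \mathbb{Z}_2$ (the toric code) on a 2×2 periodic lattice, written in Python with NumPy. The lattice size, edge indexing, and operator conventions are illustrative assumptions, not part of the general definition.

```python
import numpy as np
from functools import reduce

# G = Z_2 quantum double (toric code): qubits live on edges; vertex (star)
# operators are products of Pauli X, face (plaquette) operators of Pauli Z.
I2, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])
Lx = Ly = 2
n_edges = 2 * Lx * Ly  # one horizontal and one vertical edge per site

def edge(x, y, horizontal):
    """Index of the edge attached to site (x, y), with periodic wrapping."""
    return (0 if horizontal else Lx * Ly) + (y % Ly) * Lx + (x % Lx)

def pauli_on(pauli, edges):
    """Tensor product acting with `pauli` on the given edges, identity elsewhere."""
    return reduce(np.kron, [pauli if e in edges else I2 for e in range(n_edges)])

def A(x, y):  # X on the four edges meeting the vertex (x, y)
    return pauli_on(X, {edge(x, y, True), edge(x - 1, y, True),
                        edge(x, y, False), edge(x, y - 1, False)})

def B(x, y):  # Z on the four edges bounding the face (x, y)
    return pauli_on(Z, {edge(x, y, True), edge(x, y + 1, True),
                        edge(x, y, False), edge(x + 1, y, False)})

# Any star and plaquette overlap on 0 or 2 edges, so they commute, and
# H = sum_v (1 - A_v)/2 + sum_f (1 - B_f)/2 is a commuting-projector model.
assert np.allclose(A(0, 0) @ B(1, 1), B(1, 1) @ A(0, 0))
assert np.allclose(A(0, 0) @ A(0, 0), np.eye(2 ** n_edges))  # involution
```

Because all vertex and face operators commute, the Hamiltonian is a sum of commuting projectors, which is what makes the model exactly solvable.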
The topological excitations of the model are characterized by the representations of the quantum double $D(G)$ of the finite group $G$. The anyon types are given by the irreducible representations. For the lattice model, the topological excitations are created by ribbon operators.
The gapped boundary theory of the quantum double model can be constructed based on subgroups of $G$. There is a boundary-bulk duality for this model.
The topological excitations of the model are equivalent to those of the Levin-Wen string-net model with input given by the representation category $\mathrm{Rep}(G)$ of the finite group $G$.
Hopf quantum double model
The quantum double model can be generalized to the case where the input data is given by a C* Hopf algebra. In this case, the face and vertex operators are constructed using the comultiplication of the Hopf algebra. For each vertex, the Haar integral of the input Hopf algebra is used to construct the vertex operator. For each face, the Haar integral of the dual Hopf algebra of the input Hopf algebra is used to construct the face operator.
The topological excitations are created by ribbon operators.
Weak Hopf quantum double model
A more general case arises when the input data is chosen as a weak Hopf algebra, resulting in the weak Hopf quantum double model.
References
Quantum information theory
Condensed matter physics | Quantum double model | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 587 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
77,434,685 | https://en.wikipedia.org/wiki/Categorical%20probability | In mathematics, the term categorical probability denotes a collection of category-theoretic approaches to probability theory and related fields such as statistics, information theory and ergodic theory.
The earliest ideas in the field were developed independently by Lawvere and by Chentsov, who each defined a version of what we today call the category of Markov kernels, in work that appeared in 1962 and 1965 respectively.
Some of the most widely used structures in the theory are
The category of measurable spaces;
Markov categories such as the category of Markov kernels (see the sketch after this list);
Probability monads such as the Giry monad.
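As an informal illustration of the first two structures (a finite toy model, not drawn from the references below), a Markov kernel between finite sets can be represented as a column-stochastic matrix, and kernel composition is the Chapman-Kolmogorov formula, which then reduces to matrix multiplication:

```python
import numpy as np

def compose(g, f):
    """Composite kernel (g after f)(z | x) = sum_y g(z | y) * f(y | x)."""
    return g @ f

f = np.array([[0.9, 0.2],   # f(y | x): column x gives a distribution over y
              [0.1, 0.8]])
g = np.array([[0.5, 0.0],   # g(z | y)
              [0.5, 1.0]])

h = compose(g, f)
assert np.allclose(h.sum(axis=0), 1.0)  # columns of the composite still sum to 1
print(h)
```

With finite sets as objects, these matrices as morphisms, and identity matrices as identities, composition is associative, which is the elementary instance of the Markov-category structure mentioned above.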
References
https://ncatlab.org/nlab/show/category-theoretic+approaches+to+probability+theory
https://golem.ph.utexas.edu/category/2024/07/imprecise_probabilities_toward.html#more
Further reading
https://ncatlab.org/nlab/show/Giry+monad#related_constructions
https://golem.ph.utexas.edu/category/2024/08/introduction_to_categorical_pr.html#more
Voevodsky's unfinished manuscript: Notes on categorical probability, July 13, 2009.
https://mathoverflow.net/questions/463712/hopf-monads-in-categorical-probability-theory
External links
https://golem.ph.utexas.edu/category/2020/06/categorical_probability_and_st.html
https://golem.ph.utexas.edu/category/2020/06/statistics_for_category_theori.html
Probability theory
Category theory | Categorical probability | [
"Mathematics"
] | 369 | [
"Functions and mappings",
"Mathematical structures",
"Category theory stubs",
"Mathematical objects",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations"
] |
78,940,222 | https://en.wikipedia.org/wiki/V-EMF%20therapy | V-EMF therapy is a therapy that is based on the synergy between electromagnetic fields, vacuum and low-intensity electrostimulation. It is also known as Biodermogenesi. The electromagnetic field is generated with a variable frequency between 0.5 and 2 MHz.
Electromagnetic fields are able to promote the repair processes of skin lesions with a reduction in healing time and scar size and with an increase in the re-epithelialization process. They also promote the migration and subsequent stabilization of different cell types involved in the processes of regeneration, tissue engineering and wound care.
The data in the literature show that the therapeutic application of electromagnetic fields yields satisfactory results in the regeneration of degenerated, traumatized, or injured tissue, up to the healing and regeneration of permanent lesions. In addition, electromagnetic fields have been shown to be useful in reducing fibrosis, both in the treatment of cellulite and in the treatment of fibrotic, hypertrophic, and keloid scars.
Similarly to electromagnetic fields, the application of vacuum, adopted in this therapy with negative pressures between 0.08 and 0.16 bar, has shown considerable effectiveness in reducing skin fibrosis due to cellulite and scarring.
As for the low-intensity electrostimulation, or electroporation, delivered at 5 VDC, it is able to increase skin nourishment. The three forms of energy, used simultaneously, are the basis of V-EMF therapy, which has shown significant results in various fields of application. The therapeutic protocol has proven effective in the treatment of scars from burns, post-surgical trauma, and chemical burns, in the treatment of stretch marks, and in anti-aging therapy.
See also
Biodermogenesi
References
Medical research
Dermatologic procedures
Electromagnetism | V-EMF therapy | [
"Physics"
] | 359 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
75,821,283 | https://en.wikipedia.org/wiki/Safety%20Science | Safety Science is a monthly peer-reviewed scientific journal published by Elsevier covering research on all aspects of human and industrial safety. The editor-in-chief is Georgios Boustras (European University Cyprus),. The journal was established in 1976 as the Journal of Occupational Accidents, with Herbert Eisner as founding editor-in-chief. In 1990, the aims and scope of the journal were expanded, and the journal obtained its current name.
Editors-in-chief
Since 1990, the following persons are or have been editors-in-chief:
1990–2009: Andrew Hale
2010–2012: Kathryn Mearns
2013–2017: Jean-Luc Wybo
2018–present: Georgios Boustras
Abstracting and indexing
The journal is abstracted and indexed in several bibliographic databases.
According to the Journal Citation Reports, the journal has a 2023 impact factor of 4.7.
References
Academic journals established in 1976
Monthly journals
Elsevier academic journals
English-language journals
Occupational safety and health journals
Safety engineering | Safety Science | [
"Engineering"
] | 203 | [
"Safety engineering",
"Systems engineering"
] |
75,833,059 | https://en.wikipedia.org/wiki/Tuanshanzia | Tuanshanzia is a genus of Proterozoic eukaryote, known from several locations across China and India, including the Gaoyuzhuang and Chuanlinggou formations, the eponymous Tuanshanzi Formation, as well as the Vindhya Basin. It is probably an alga, although its exact classification is currently unclear. Tuanshanzia seems to be part of a wider group of elongate Proterozoic algae, alongside Changchengia and Eopalmaria.
Description
Tuanshanzia specimens are often preserved as carbonaceous films with a wide range of shapes, varying by species. The type species, T. lanceolata, has a lanceolate shape, whereas other species have shapes ranging from oval to elongated. As a form taxon, Tuanshanzia is likely paraphyletic, although as no practical alternative exists it remains a valid genus. The genus is named after the Tuanshanzi Formation, where it was first discovered, while the various species are named after their morphology. Many of the specimens from the Tuanshanzi Formation are likely microbial mat fragments, due to their rough edges and irregularity; however, some are likely true algae, given their smooth and regular margins and carbon isotope values similar to those of eukaryotes.
References
Proterozoic life
Paleoproterozoic
Mesoproterozoic
Taxa described in 1995
Proterozoic first appearances
Enigmatic eukaryote taxa | Tuanshanzia | [
"Biology"
] | 303 | [
"Eukaryotes",
"Eukaryote stubs"
] |
75,834,378 | https://en.wikipedia.org/wiki/Double%20field%20theory | Double field theory in theoretical physics refers to formalisms that capture the T-duality property of string theory as a manifest symmetry of a field theory.
Background
In double field theory, the T-duality transformation of exchanging momentum and winding modes of closed strings on toroidal backgrounds translates to a generalized coordinate transformation on a doubled spacetime, where one set of its coordinates is dual to momentum modes and the second set of coordinates is interpreted as dual to winding modes of the closed string. Whether the second set of coordinates has physical meaning depends on how the level-matching condition of closed strings is implemented in the theory: either through the weak constraint or the strong constraint.
In strongly constrained double field theory, which was introduced by Warren Siegel in 1993, the strong constraint ensures the dependency of the fields on only one set of the doubled coordinates; it describes the massless fields of closed string theory, i.e. the graviton, the Kalb-Ramond B-field, and the dilaton, but does not include any winding modes, and serves as a T-duality invariant reformulation of supergravity.
Weakly constrained double field theory, introduced by Chris Hull and Barton Zwiebach in 2009, allows for the fields to depend on the whole doubled spacetime and encodes genuine momentum and winding modes of the string.
Double field theory has been a setting for studying various string theoretical properties such as: consistent Kaluza-Klein truncations of higher-dimensional supergravity to lower-dimensional theories, generalized fluxes, and alpha-prime corrections of string theory in the context of cosmology and black holes.
References
Theoretical physics | Double field theory | [
"Physics"
] | 332 | [
"Theoretical physics"
] |
78,946,358 | https://en.wikipedia.org/wiki/1H-LSD | 1H-LSD (N1-hexanoyl-lysergic acid diethylamide, SYN-L-027) is an acylated derivative of lysergic acid diethylamide (LSD), with a six-carbon hexanoyl chain attached to the N1 position. It acts as a prodrug for LSD, and in animal studies produces drug-appropriate responding with a similar potency to short-chain homologues such as ALD-52 and 1P-LSD, in contrast to the four- and five-carbon homologues 1B-LSD and 1V-LSD, which are several times weaker.
See also
Lysergic acid diethylamide (LSD)
1cP-LSD
1DD-LSD
ALD-52
1cP-AL-LAD
1P-ETH-LAD
References
Designer drugs
Lysergamides
Prodrugs
Serotonin receptor agonists | 1H-LSD | [
"Chemistry"
] | 203 | [
"Chemicals in medicine",
"Prodrugs"
] |
78,951,259 | https://en.wikipedia.org/wiki/Foselutoclax | Foselutoclax is an investigational new drug that is being evaluated for the treatment of age-related eye diseases, particularly diabetic macular edema (DME) and wet age-related macular degeneration (AMD). Developed by Unity Biotechnology, this senolytic compound acts as a potent inhibitor of Bcl-xL, a protein that senescent cells rely on for survival. Foselutoclax is designed to selectively eliminate senescent cells in the retina, potentially addressing the underlying causes of vision loss in these conditions.
References
Anti-aging substances
Senescence
Anilines
Benzene derivatives
Carboxylic acids
Phenyl compounds
Phosphates
Piperazines
Pyrroles
Sulfonamides
Sulfones
Thioethers
Trifluoromethyl compounds | Foselutoclax | [
"Chemistry",
"Biology"
] | 170 | [
"Pharmacology",
"Anti-aging substances",
"Carboxylic acids",
"Functional groups",
"Medicinal chemistry stubs",
"Salts",
"Senescence",
"Sulfones",
"Phosphates",
"Cellular processes",
"Pharmacology stubs",
"Metabolism"
] |
78,954,051 | https://en.wikipedia.org/wiki/Vaginolysin | Vaginolysin (VLY) is a toxin produced by Gardnerella vaginalis, a bacterium commonly associated with bacterial vaginosis. VLY is a member of the cholesterol-dependent cytolysin family, characterized by the ability to form pores in cholesterol-rich membranes. The most closely related protein is intermedilysin, which is produced by Streptococcus intermedius.
VLY exhibits cytolytic activity against human erythrocytes, causing lysis of red blood cells. This process releases iron, an essential nutrient for microbial pathogens. In vitro studies have also demonstrated that VLY induces membrane blebbing in human vaginal and cervical cells, suggesting its role in epithelial cell damage.
The cytolytic activities of VLY are hypothesized to contribute to the virulence of Gardnerella vaginalis, potentially by facilitating the bacterium's access to intracellular metabolites and aiding in its evasion of host immune responses.
References
Proteins
Toxins | Vaginolysin | [
"Chemistry",
"Environmental_science"
] | 219 | [
"Biomolecules by chemical classification",
"Toxicology",
"Molecular biology",
"Toxins",
"Proteins"
] |
78,960,568 | https://en.wikipedia.org/wiki/ST-148%20%28antiviral%29 | ST-148 is an antiviral drug which acts as a capsid inhibitor. It was developed for treatment of dengue fever, but while it shows strongest activity against dengue virus it also shows broad spectrum activity against other flaviviruses such as Zika virus. It is thought to cause viral capsid proteins to become more rigid, inhibiting both assembly and disassembly of capsids and thereby hindering viral replication and infection of cells.
References
Antiviral drugs
Thiadiazoles
Carboxamides
Thienopyridines
Amines
Heterocyclic compounds with 3 rings | ST-148 (antiviral) | [
"Chemistry",
"Biology"
] | 126 | [
"Antiviral drugs",
"Functional groups",
"Amines",
"Biocides",
"Bases (chemistry)"
] |
77,444,635 | https://en.wikipedia.org/wiki/Fay-Riddell%20equation | The Fay-Riddell equation is a fundamental relation in the fields of aerospace engineering and hypersonic flow, which provides a method to estimate the stagnation point heat transfer rate on a blunt body moving at hypersonic speeds in dissociated air. The heat flux for a spherical nose is computed according to quantities at the wall and the edge of an equilibrium boundary layer:

$$ q_w = 0.76\,\mathrm{Pr}^{-0.6}\,(\rho_e \mu_e)^{0.4}\,(\rho_w \mu_w)^{0.1}\,\sqrt{\left(\frac{du_e}{dx}\right)_s}\,(h_{0,e} - h_w)\left[1 + \left(\mathrm{Le}^{0.52} - 1\right)\frac{h_D}{h_{0,e}}\right] $$

where $\mathrm{Pr}$ is the Prandtl number, $\mathrm{Le}$ is the Lewis number, $h_{0,e}$ is the stagnation enthalpy at the boundary layer's edge, $h_w$ is the wall enthalpy, $h_D$ is the enthalpy of dissociation, $\rho$ is the air density, $\mu$ is the dynamic viscosity (subscripts $e$ and $w$ denote boundary-layer-edge and wall values), and $(du_e/dx)_s$ is the velocity gradient at the stagnation point. According to Newtonian hypersonic flow theory, the velocity gradient should be:

$$ \left(\frac{du_e}{dx}\right)_s = \frac{1}{R_n}\sqrt{\frac{2\,(p_e - p_\infty)}{\rho_e}} $$

where $R_n$ is the nose radius, $p_e$ is the pressure at the edge, and $p_\infty$ is the free stream pressure. The equation was developed by James Fay and Francis Riddell in the late 1950s. Their work addressed the critical need for accurate predictions of aerodynamic heating to protect spacecraft during re-entry, and is considered to be a pioneering work in the analysis of chemically reacting viscous flow.
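The correlation is straightforward to evaluate numerically. The sketch below implements the two formulas exactly as reconstructed above; every input value (nose radius, pressures, densities, viscosities, enthalpies) is an illustrative placeholder, not data from this article.

```python
import math

def newtonian_velocity_gradient(R_n, p_e, p_inf, rho_e):
    """Stagnation-point velocity gradient from Newtonian hypersonic theory."""
    return (1.0 / R_n) * math.sqrt(2.0 * (p_e - p_inf) / rho_e)

def fay_riddell_heat_flux(Pr, Le, rho_e, mu_e, rho_w, mu_w,
                          h0_e, h_w, h_D, due_dx):
    """Fay-Riddell stagnation-point heat flux, equilibrium boundary layer form."""
    return (0.76 * Pr**-0.6
            * (rho_e * mu_e)**0.4 * (rho_w * mu_w)**0.1
            * math.sqrt(due_dx) * (h0_e - h_w)
            * (1.0 + (Le**0.52 - 1.0) * h_D / h0_e))

# Illustrative re-entry-like numbers in SI units (placeholders only):
due_dx = newtonian_velocity_gradient(R_n=0.3, p_e=5.0e4, p_inf=50.0, rho_e=0.05)
q_w = fay_riddell_heat_flux(Pr=0.71, Le=1.4, rho_e=0.05, mu_e=1.0e-4,
                            rho_w=0.3, mu_w=3.0e-5,
                            h0_e=2.5e7, h_w=1.0e6, h_D=1.5e7, due_dx=due_dx)
print(f"stagnation-point heat flux ~ {q_w / 1e4:.0f} W/cm^2")
```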
Assumptions
The Fay-Riddell equation is derived under several assumptions:
Hypersonic Flow: The equation is applicable for flows where the Mach number is significantly greater than 5.
Continuum Flow: It assumes the flow can be treated as a continuum, which is valid at higher altitudes with sufficient air density.
Thermal and Chemical Equilibrium: The gas is assumed to be in thermal and chemical equilibrium, meaning the energy modes (translational, rotational, vibrational) and chemical reactions reach a steady state.
Blunt Body Geometry: The equation is most accurate for blunt body geometries where the leading edge radius is large compared to the boundary layer thickness.
Extensions
While the Fay-Riddell equation was derived for an equilibrium boundary layer, it is possible to extend the results to a chemically frozen boundary layer with either an equilibrium catalytic wall or a noncatalytic wall.
Applications
The Fay-Riddell equation is widely used in the design and analysis of thermal protection systems for re-entry vehicles. It provides engineers with a crucial tool for estimating the severe aerodynamic heating conditions encountered during atmospheric entry and for designing appropriate thermal protection measures.
See also
Aerodynamic heating
Atmospheric entry
Hypersonic flight
Stagnation point
References
External links
Stagnation Point Heating
Heat transfer
Atmospheric entry
Aerospace engineering | Fay-Riddell equation | [
"Physics",
"Chemistry",
"Engineering"
] | 485 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Atmospheric entry",
"Thermodynamics",
"Aerospace engineering"
] |
77,452,364 | https://en.wikipedia.org/wiki/NGC%205419 | NGC 5419 is a large elliptical galaxy in the constellation of Centaurus. Its velocity with respect to the cosmic microwave background is 4,375 ± 23 km/s, which corresponds to a Hubble distance of 64.5 ± 4.5 Mpc (∼210 million light-years). It was discovered by British astronomer John Herschel on 1 May 1834.
NGC 5419 is the brightest cluster galaxy of the galaxy cluster Abell S0753. It contains a large core with a radius of 1.58 arcsec (≈55 pc). In addition, it has a double nucleus, indicating the presence of two supermassive black holes in the center with a separation of only ≈70 pc.
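The distance quoted above is just Hubble's law, d = v/H0, applied to the CMB-frame velocity. A quick check, assuming H0 ≈ 67.8 km/s/Mpc (roughly the value the quoted figures imply, not a number stated in the article):

```python
# Hubble's law: d = v / H0 (uncertainties omitted for brevity)
v_kms = 4375.0   # CMB-frame recession velocity, km/s (from the text)
H0 = 67.8        # km/s per Mpc -- assumed, not stated in the article
d_mpc = v_kms / H0
print(f"{d_mpc:.1f} Mpc  ~  {d_mpc * 3.262:.0f} million light-years")
# -> 64.5 Mpc ~ 210 million light-years, matching the quoted distance
```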
Supernovae
Two supernovae have been observed in NGC 5419:
SN 2018zz (type Ia, mag. 16) was discovered by the All Sky Automated Survey for SuperNovae (ASAS-SN) on 3 March 2018.
SN 2020alh (type Ia, mag. 15.3).
NGC 5488 Group
According to A.M. Garcia, the galaxy NGC 5419 is part of the NGC 5488 group (also known as LGG 369). This group of galaxies has 14 members, including NGC 5397, NGC 5488, IC 4366 and nine galaxies from the ESO catalog.
See also
List of NGC objects (5001–6000)
References
External links
5419
050100
Centaurus
18340501
Discoveries by John Herschel
-06-31-019
Elliptical galaxies
384- G 039 | NGC 5419 | [
"Astronomy"
] | 327 | [
"Centaurus",
"Constellations"
] |
75,845,271 | https://en.wikipedia.org/wiki/Porous%20polymer | Porous polymers are a class of porous media materials in which monomers form 2D and 3D polymers containing angstrom- to nanometer-scale pores formed by the arrangement of the monomers. They may be either crystalline or amorphous. Subclasses include covalent organic frameworks (COFs), hydrogen-bonded organic frameworks (HOFs), metal-organic frameworks (MOFs), and porous organic polymers (POPs). The subfield of chemistry specializing in porous polymers is called reticular chemistry.
Covalent organic frameworks
Covalent organic frameworks are crystalline porous polymers assembled from organic monomers linked through covalent bonds.
Hydrogen-bonded organic frameworks
Hydrogen-bonded organic frameworks are crystalline porous polymers assembled from organic monomers linked through hydrogen bonds.
Metal-organic frameworks
Metal-organic frameworks are crystalline porous polymers assembled from organic monomers connected by coordination to metal atom centers.
References | Porous polymer | [
"Chemistry",
"Materials_science",
"Engineering"
] | 195 | [
"Porous polymers",
"Porous media",
"Polymer chemistry",
"Materials science"
] |
75,845,973 | https://en.wikipedia.org/wiki/N-Nitrosomorpholine | N-Nitrosomorpholine (NNM, NMOR) is an organic compound which is known to be a carcinogen and mutagen.
Chemistry
NMOR is a pale yellow, sand-like powder below 84 °F (29 °C).
NMOR is most commonly produced from morpholine, but can also be made by the reaction of dimorpholinomethane in fuming nitric acid. Few reactions using NMOR as a starting material are reported in the organic synthesis literature, but it can be used as a precursor to a nitrogen-centered radical.
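Schematically, the production from morpholine is a standard secondary-amine N-nitrosation; the nitrosating agent is written here as nitrous acid, one common choice (a generic sketch, not a specific literature procedure):

```latex
% N-nitrosation of morpholine (a secondary amine) to NMOR:
\mathrm{O(CH_2CH_2)_2NH + HONO \longrightarrow O(CH_2CH_2)_2N{-}NO + H_2O}
```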
Occurrence
NMOR is generally not used intentionally, but is instead created by the nitrosation of morpholine or morpholine derivatives which are used for several industrial purposes.
Rubber
2-(Morpholinothio)benzothiazole is used as an accelerator/stabilizer for vulcanization, or the manufacture of rubber products. It is the precursor to NMOR in the vulcanization process, as it is nitrosated by ambient nitrosating agents present in the manufacturing process. As such, workers and others exposed to the rubber industry or its byproducts are exposed to higher levels of NMOR than the general population, raising their risk of cancer.
Tobacco products
NMOR is a component of tobacco products. As of 2014, detectable levels of NMOR are present in tobacco products in the United States and China. The presence of NMOR and other N-nitrosamines is not limited to cigarettes, but extends to smokeless tobacco products (snuff tobacco, snus, etc.) as well. Volatile nitrosamines, including NMOR, are detectable in the urine of tobacco smokers.
Food
Morpholine oleate is used in glazing wax which covers fruit. NMOR can be generated by the nitrosation of morpholine, causing its presence in waxed fruits. Health Canada, the Canadian governmental department of public health, stated in 2002 that this does not pose a risk to human health.
Consumption of nitrate-rich diets is correlated with levels of salivary and urinary NMOR. The presence of NMOR can also be observed in gastric juices.
Other
NMOR has been found in several cosmetic products.
Health hazards
The mechanisms of carcinogenesis are not completely clear in humans. NMOR and its metabolites may induce DNA damage by directly forming reactive oxygen species or compounds which crosslink DNA. In a rat model in 2013, it was observed that NMOR is hydroxylated, probably by a P450 enzyme, alpha to the N-nitroso moiety. This then decomposes into a diazonium-containing aldehyde which is capable of crosslinking DNA.
Endogenous synthesis from morpholine in the digestive system is observed. NMOR can be generated from N-nitrosating species formed by salivary nitrite and stomach acid, potentially leading to more damage in individuals with acid reflux. H. pylori does not induce NMOR formation in vitro, though this has yet to be confirmed in vivo.
NMOR is in fact used to generate liver cancer models in rats. Along with N-diethylnitrosamine, it is the gold standard for producing hepatocarcinoma with 100% lung metastasis.
See also
Nitrosamine
N-Nitrosodiethylamine
Hepatotoxicity
References
4-Morpholinyl compounds
Nitrosamines
Reagents for organic chemistry
Carcinogens | N-Nitrosomorpholine | [
"Chemistry",
"Environmental_science"
] | 738 | [
"Carcinogens",
"Toxicology",
"Reagents for organic chemistry"
] |
75,858,491 | https://en.wikipedia.org/wiki/Outline%20of%20metrology%20and%20measurement | The following is a topical outline of the English language Wikipedia articles on the topic of metrology and measurement. Metrology is the science of measurement and its application.
Main articles
Metrology
Measurement
Metrology overviews
Dimensional metrology
Forensic metrology
Historical metrology
Smart Metrology
Time metrology
Quantum metrology
History
History of measurement
History of the metre
Concepts
Quantification (science)
Standard (metrology)
Weights and measures
Unit of measurement
Geodesy
Geodesy - the science of measuring and representing the geometry, gravity, and spatial orientation of the Earth in temporally varying 3D.
List of geodesists
History of geodesy
Physical geodesy
International Union of Geodesy and Geophysics
International Association of Geodesy
Systems and units
Unit of measurement
System of measurement
Systems of measurement
System of measurement
Metric system
Unit prefixes
Metric prefixes
Orders of magnitude
Systems of units
Centimetre–gram–second system of units
Scales
Scale of temperature
Conversion of scales of temperature
Celsius
Delisle scale
Fahrenheit
Gas mark
Kelvin
Leiden scale
Newton scale
Rankine scale
Réaumur scale
Rømer scale
Wedgwood scale
Units
Unit of measurement
Base unit of measurement
Natural units
Dimensionless quantity
General lists
Dimensionless units
Lists of units of measurement
Units of measurement by country
Units of measurement by region
Customary units of measurement
Obsolete units of measurement
Equivalent units
Metricated units
Lists by type
Acceleration
Amount of substance
Amount
Angle
Area
Astronomical
Catalytic
Chemical measurement
Density
Dynamic Viscosity
Electric current
Electric charge
Electromagnetism
Energy
Flow
Force
Angular velocity
Frequency
Illuminance
Information
Length
Level
Luminance
Photometry
Luminous energy
Luminous exposure
Luminous flux
Luminous intensity
Mass
Meteorology measurement
Navigation
Non-SI metric units
Optics
Power
Pressure
Purity
Quality
Radiation dose
Radiation
Rate
Sound
Surveying
Temperature
Time
Torque
Velocity
Volume
Organizations
International Bureau of Weights and Measures
International Organization of Legal Metrology
National Institute of Standards and Technology
National Institute of Metrology Standardization and Industrial Quality
National Physical Laboratory
NCSL International
Norwegian Metrology Service
Instruments and devices
Coordinate-measuring machine
Cylindrical coordinate measuring machine
Universal measuring machine
Pratt & Whitney Measurement Systems
Standards
ISO 16610
ISO 25178
Josephson voltage standard
Metre Convention
Unified Code for Units of Measure
International prototype metre
International Prototype of the Kilogram
Lists
List of humorous units of measurement
List of unusual units of measurement
List of obsolete units of measurement
List of measuring instruments
List of nautical units of measurement
List of scientific units named after people
List of international units
List of SI electromagnetism units
Timelines
Timeline of temperature and pressure measurement technology
Timeline of time measurement technology
Journals
Measurement
Metrologia
See also
Outline of the metric system
References
Bibliography
Books
Articles
Notes
Citations
External links
Accuracy and precision
Metrology | Outline of metrology and measurement | [
"Physics",
"Mathematics"
] | 525 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
75,863,315 | https://en.wikipedia.org/wiki/Center%20for%20Gravitational%20Wave%20Astronomy%20and%20Astrophysics | The Center for Gravitational Wave Astronomy and Astrophysics (CGWAA) is a research center at Syracuse University. Research at the CGWAA includes gravitational-wave astronomy, the design of the next-generation Cosmic Explorer observatory, and the development of new quantum optics and precision-measurement technologies for building new detectors. The center was established in 2023 and has hosted seminar series and several conferences.
Founding
The center was established on October 13, 2023, to combine the various groups working within the LIGO Scientific Collaboration.
The LIGO research at Syracuse began with theoretical contributions from Peter Bergmann, Joshua N. Goldberg, and Roger Penrose, and Syracuse had a comparatively large number of collaborators on the team that made the first observation of gravitational waves in 2015. The department of physics has had collaborations with CERN, LIGO Scientific Collaboration, and Fermilab, among other institutes.
Funding
In October 2023, the center received over $1.5M in funding from the National Science Foundation to study gravitational waves and design next-generation observatories. The center hosts proposal-writing workshops at Syracuse University's Minnowbrook Conference Center. Its collaboration with researchers from MIT, Penn State, California State University, Fullerton, and the University of Florida resulted in over $9 million in NSF funding.
People
The center is directed by Stefan Ballmer and hosts research groups of Peter Saulson, Duncan Brown, Alexander Nitz, Collin Capano, Craig Cahillane, and Georgia Mansell.
References
External links
2023 establishments in New York (state)
Astronomy institutes and departments
Astrophysics research institutes
Research institutes in New York (state)
Syracuse University research institutes
Institutes associated with CERN | Center for Gravitational Wave Astronomy and Astrophysics | [
"Physics",
"Astronomy"
] | 337 | [
"Astronomy organizations",
"Astrophysics research institutes",
"Astrophysics",
"Astronomy institutes and departments"
] |
75,864,308 | https://en.wikipedia.org/wiki/Habitable%20zone%20for%20complex%20life | A Habitable Zone for Complex Life (HZCL) is a range of distances from a star suitable for complex aerobic life. Different types of limitations preventing complex life give rise to different zones. Conventional habitable zones are based on compatibility with water. Each zone begins at an inner distance from the host star and ends at an outer one, and a planet would need to orbit inside these boundaries. With multiple zonal constraints, the zones need to overlap for the planet to support complex life. The requirements for bacterial life produce much larger zones than those for complex life, which requires a very narrow zone.
Exoplanets
The first confirmed exoplanets were discovered in 1992: several planets orbiting the pulsar PSR B1257+12. Since then the list of exoplanets has grown into the thousands. Many known exoplanets are hot Jupiters, which orbit very close to their star. Many others are super-Earths, which could be gas dwarfs or large rocky planets, like Kepler-442b at a mass 2.36 times Earth's.
Star
Unstable stars include very young and very old stars, as well as very large and very small ones. Unstable stars have variable luminosity, which shifts the boundaries of the life habitable zones, and they also produce extreme solar flares and coronal mass ejections. Solar flares and coronal mass ejections can strip away a planet's atmosphere irrecoverably. Life habitable zones therefore require a very stable star like the Sun, whose luminosity varies by only about ±0.1%. Finding a stable star like the Sun amounts to the search for a solar twin; so far, solar analogs have been found. Proper star metallicity, size, mass, age, color, and temperature are also very important to having low luminosity variations. The Sun is unusual in being metal-rich for its age and type, a G2V star; it is currently in its most stable stage and has the right metallicity to make it very stable. Dwarf stars (red dwarfs, orange dwarfs, brown dwarfs, subdwarfs) are not only unstable but also emit little energy, so the habitable zone lies very close to the star and planets become tidally locked on the timescales needed for the development of life. Giant stars (subgiants, giant stars, red giants, red supergiants) are unstable and emit high energy, so the habitable zone is very far from the star. Multiple-star systems are also very common and are not suitable for complex life, as planetary orbits would be destabilized by multiple gravitational forces and sources of solar radiation, although liquid water is still possible in multiple-star systems.
Named habitable zones
A conventional habitable zone is defined by liquid water.
Habitable zone (HZ) (also called the circumstellar habitable zone), the orbit around a star that would allow liquid water to remain, for at least a given period of time, on at least a small part of the planet's surface. Within the HZ, water (H2O) can stay between its freezing and boiling temperatures. This zone is a temperature zone, set by the star's radiation and the distance from the star; a simple scaling sketch is given after this list. In the Solar System the planet Mars is just at the outer boundary of the habitable zone. The planet Venus is at the inner edge of the habitable zone, but due to its thick atmosphere it has no water. The HZ includes planets with elliptical orbits; such planets might orbit into and out of the HZ. When a planet moves out of the HZ, its water would freeze to ice beyond the outer boundary, or turn to steam inside the inner one. The HZ can be defined as the region where bacteria, a form of life, could possibly survive for a short period of time. The HZ is also sometimes called the "Goldilocks" zone.
Optimistic habitable zone (OHZ): a zone where liquid surface water could have been on a planet at some time in its past history. This zone would be larger than the HZ. Mars is an example of a planet in the OHZ: it is just beyond the HZ today, but had liquid water for a short time span before the Mars carbonate catastrophe, some 4 billion years ago.
Continuously habitable zone (CHZ): a zone where liquid water persists on the surface of a planet over geological timescales (billions of years). This requires a near-circular planetary orbit and a stable star. The zone may be much smaller than the habitable zone.
Conservative habitable zone: a zone where liquid surface water remains on a planet over a long time span, as on Earth. This might also require a greenhouse effect provided by gases such as CO2 and water vapor, along with Rayleigh scattering, to maintain the correct temperature.
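A rough quantitative version of the temperature-set HZ mentioned above: the stellar flux at distance d falls off as L/d², so flux-equivalent zone boundaries scale as the square root of the luminosity. The sketch below assumes illustrative Kasting-style inner/outer edges of 0.95 and 1.37 AU for a Sun-like star; the numbers are indicative assumptions, not figures from this article.

```python
import math

def habitable_zone_bounds(L_over_Lsun, inner_au=0.95, outer_au=1.37):
    """Scale assumed Sun-centred HZ edges (in AU) to a star of given
    luminosity, keeping the stellar flux at each boundary constant."""
    s = math.sqrt(L_over_Lsun)
    return inner_au * s, outer_au * s

print(habitable_zone_bounds(1.0))   # Sun-like star: ~(0.95, 1.37) AU
print(habitable_zone_bounds(0.04))  # dim red dwarf: zone shrinks toward the star
```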
Named habitable zones for complex life
Over time and with more research, astronomers, cosmologists and astrobiologist have discovered more parameters needed for life. Each parameter could have a corresponding zone. Some of the named zones include:
Ultraviolet habitable zone: a zone where the ultraviolet (UV) radiation from a star is neither too weak nor too strong for life to exist. Life needs the correct amount of ultraviolet for synthesis of biochemicals. The extent of the zone depends on the amount of ultraviolet radiation from the star, the range of UV wavelengths, the age of the star, and the atmosphere of the planet. In humans UV is used to produce vitamin D. Extreme ultraviolet (EUV) can cause atmospheric loss.
Photosynthetic habitable zone: a zone where both long-term liquid water and oxygenic photosynthesis can occur.
Tropospheric habitable zone, or ozone habitable zone: a zone where the planet would have the correct amount of ozone needed for life. Inhaling too much ozone causes inflammation and irritation, whereas too little tropospheric ozone is associated with photochemical smog. On Earth, tropospheric ozone is part of ground-level ozone chemistry; it is formed by the interaction of ultraviolet light with hydrocarbons and nitrogen oxides.
Planet rotation rate habitable zone: the zone where a planet's rotation rate is best for life. If rotation is too slow, the day/night temperature difference is too great. The rotation rate also changes the planet's reflectivity and thus its temperature. A fast rotation rate increases wind speed on the planet. The rotation rate affects the planet's clouds and their reflectivity: slowing the rotation rate changes cloud distributions, cloud altitudes, and cloud opacities, and these changes in the clouds change the temperature of the planet. A high rotation rate can also cause continuous, very fast winds on the surface.
Planet rotation axis tilt habitable zone, or obliquity habitable zone: the region where a stable axial tilt for a planet's rotation is maintained. Earth's axis is tilted 23.5°; this gives seasons, providing snow and ice that can melt to provide water runoff in the summer. Obliquity has a major impact on a planet's temperature, and thus on its habitable zone.
Tidal habitable zone. Planets too close to the star become tidally locked; the mass of the star and the distance from the star set the tidal habitable zone. A tidally locked planet has one side permanently facing the star, and this side would be very hot, while the face away from the star would be well below freezing. A planet too close to the star will also experience tidal heating from the star, which can vary the planet's orbital eccentricity. Too far from the star, and the planet will not receive enough solar heat.
Astrosphere habitable zone: the zone in which a planet's astrosphere will be strong enough to protect the planet from the solar wind and cosmic rays. The astrosphere must be long lasting to protect the planet. Mars lost its water and most of its atmosphere after losing its magnetic field, in the Mars carbonate catastrophe event. A star's solar wind is made of charged particles: a plasma of electrons, protons and alpha particles. The solar wind differs from star to star. Earth's magnetic field is very large and has protected Earth since its formation.
Atmosphere electric field habitable zone: the place in which the ambipolar electric field is such that the planet's electric field helps ions overcome gravity. The planet's ionosphere must be correct to protect against the loss of the atmosphere. This is in addition to a strong magnetic field to protect against the solar wind stripping the atmosphere and water away into outer space.
Orbital eccentricity habitable zone: the zone in which planets maintain a nearly circular orbit, since eccentric orbits carry planets into and out of the habitable zones. In the Solar System, the grand tack hypothesis offers an explanation for the distinctive placement of the gas giants, the Solar System's belts, and the planets' near-circular orbits.
Coupled planet-moon magnetosphere habitable zone: the zone in which a planet's moon and the planet's core together produce a magnetosphere strong enough to protect against the solar wind stripping the planet's atmosphere and water away into outer space; Mars, by contrast, had a magnetic field for only a short time. Earth's Moon had a large magnetosphere for several hundred million years after its formation, as proposed in a 2020 study by Saied Mighani. The Moon's magnetosphere would have given added protection to Earth's atmosphere when the early Sun was not as stable as it is today. In 2020, James Green modeled the coupled planet-moon-magnetosphere habitable zone; the modeling showed that a coupled planet-moon magnetosphere would shield a planet from the stellar wind in the early Solar System. In the case of Earth, the Moon was closer to Earth during the early formation of the Solar System, giving added protection when it was needed most.
Pressure-dependent habitable zone: the zone in which planets may have the correct atmospheric pressure to have liquid surface water. At low atmospheric pressure, water boils at a much lower temperature, and at pressures below that of the triple point, liquid water cannot exist at all. The average surface pressure on Mars today is close to that of the triple point of water; thus, liquid water cannot persist there. Planets with high-pressure atmospheres may have liquid surface water, but life forms would have difficulty operating respiratory systems in such atmospheres.
Galactic habitable zone (GHZ): The GHZ, also called the Galactic Goldilocks zone, is the place in a galaxy where the heavy elements needed for a rocky planet and life are present, but where strong cosmic rays will not kill life and strip the atmosphere off the planet. The term Goldilocks zone is used because it is a fine balance between the two constraints (heavy elements and strong cosmic rays). The galactic habitable zone is the place where a planet will have the needed parameters to support life. Not all galaxies are able to support life. In many galaxies, life-killing events such as gamma-ray bursts can occur; about 90% of galaxies have long and frequent gamma-ray bursts, and thus no life. Cosmic rays pose a threat to life. Galaxies with many stars too close together, or without any dust protection, are also not hospitable for life. Irregular galaxies and other small galaxies do not have enough heavy elements. Elliptical galaxies are full of lethal radiation and lack heavy elements. Large spiral galaxies, like the Milky Way, have the heavy elements needed for life from the center out to about half the distance from the central bar. Not all large spiral galaxies are the same: spiral galaxies with too much active star formation can kill both the galaxy and its life, while with too little star formation the spiral arms will collapse. Nor do all spiral galaxies have the correct galactic ram-pressure-stripping parameters; too much ram pressure can deplete the galaxy of gas and thus end star formation. The Milky Way is a barred spiral galaxy, and the bar is important to star formation and to the metallicity of the galaxy's stars and planets. A barred spiral galaxy must have stable arms with just the right star formation. Bars occur in about 65% of spiral galaxies, but most of these have too much star formation. Peculiar galaxies lack stable spiral arms, while irregular galaxies contain too many new stars and lack heavy elements. Unbarred spiral galaxies do not have the right star formation and metallicity for a galactic Goldilocks zone. For long-term life on a planet, the spiral arms must be stable over a long period of time, as in the Milky Way, and the arms must not be too close to each other, or there will be too much ultraviolet radiation. If the planet moves into or across a spiral arm, the orbits of the planets could change from gravitational disturbances; crossing a spiral arm would also bring deadly asteroid impacts and high radiation. The planet must be in the correct place in the spiral galaxy: near the galactic center, radiation and gravitational forces are too great for life, whereas the outskirts of a spiral galaxy are metal-poor. The Sun is 28,000 light-years from the central bar, in the galactic Goldilocks zone. At this distance, the Sun revolves around the galaxy at about the same rate as the spiral arms rotate, minimizing arm crossings.
Supergalactic habitable zone: a place in a supercluster of galaxies that can provide for habitability of planets. The supergalactic habitable zone takes into account events that can end habitability not only in a galaxy but in all galaxies nearby, such as galaxy mergers, active galactic nuclei, starburst galaxies, and supermassive or merging black holes, all of which output intense radiation. It also takes into account the abundance of various chemical elements in the galaxy, as not all galaxies, or regions within them, have all the elements needed for life.
Habitable zone for complex life (HZCL): the place where all the life habitable zones overlap for a long period of time, as in the Solar System. The list of habitable zones for complex life has grown longer with increasing understanding of the Universe, galaxies, and the Solar System. Complex life is normally defined as eukaryote life forms, including all animals, plants, fungi, and most unicellular organisms. Simple life forms are normally defined as prokaryotes.
Other orbital-distance related factors
Some factors that depend on planetary distance and may limit complex aerobic life have not been given zone names. These include:
Milankovitch cycles: The Milankovitch cycles and ice ages have been key in shaping Earth. Life on Earth today uses water melted from the last ice age; the ice ages cannot be too long or too cold for life to survive. The Milankovitch cycles also affect a planet's obliquity.
Life
Life on Earth is carbon-based. However, some theories suggest that life could be based on other chemistries: proposed alternatives include silicon, boron and arsenic as structural elements, and ammonia or methane as solvents. As more research has been done on life on Earth, it has been found that only carbon's organic molecules have the complexity and stability to form life. Carbon's properties allow for the complex chemical bonding, through covalent bonds, that organic chemistry requires. Carbon molecules are lightweight and relatively small in size. Carbon's ability to bond to oxygen, hydrogen, nitrogen, phosphorus, and sulfur (together abbreviated CHNOPS) is key to life.
Gallery
See also
Exoplanet orbital and physical parameters
Habitability of natural satellites – liquid water on a moon
Habitability of yellow dwarf systems – liquid water on yellow dwarf star
Habitability of red dwarf systems – liquid water on red dwarf star
Planetary habitability in the Solar System – liquid water in our Solar System
Habitability of binary star systems – liquid water on binary stars
Habitability of F-type main-sequence star systems – liquid water on planets orbiting F-type stars
Superhabitable planet – a hypothetical exoplanet
References
Planetary habitability
Astronomical hypotheses
Extraterrestrial life
Extraterrestrial water | Habitable zone for complex life | [
"Astronomy",
"Biology"
] | 3,307 | [
"Astronomical hypotheses",
"Hypothetical life forms",
"Extraterrestrial life",
"Astronomical controversies",
"Biological hypotheses"
] |
75,864,768 | https://en.wikipedia.org/wiki/Xavier%20Garbet | Xavier Garbet (b. 1961) is a theoretical plasma physicist and a professor at Nanyang Technological University (NTU). He holds the Temasek Chair in Clean Energy, a professorship established by a $6 million endowment from Singapore's state investment firm, Temasek Holdings. The appointment supports a magnetic confinement fusion research and manpower training program at NTU for clean energy development in Singapore.
Early life and career
In 1982, Garbet received a Bachelor's degree in Physics from Paris-Sorbonne University. He then earned a Master's degree in plasma physics from Paris-Saclay University. Subsequently, he earned a PhD degree in theoretical and high-energy physics in 1988 and a Habilitation à diriger des recherches (French habilitation) in 2001 from Aix-Marseille University.
Garbet was hired by the French Alternative Energies and Atomic Energy Commission (CEA) in 1988. He was a visiting scientist at General Atomics from 1994 to 1995 and led a plasma transport modelling task force at the Joint European Torus from 2001 to 2004. In 2008, he became a research director at CEA. In 2022, he joined NTU as a professor of theoretical and computational plasma physics and was appointed as the Temasek Chair in Clean Energy.
Honours and awards
Garbet was awarded the CNRS Silver Medal by the French National Centre for Scientific Research for his work on plasma confinement fusion in 2010.
In 2019, he was awarded the Fernand Holweck Medal and Prize by the Société Française de Physique.
In 2022, Garbet was awarded the Hannes Alfvén Prize for his theoretical contributions to the dynamics of magnetically confined fusion plasmas.
References
Living people
1961 births
20th-century physicists
21st-century physicists
Place of birth missing (living people)
Plasma physicists
Paris-Sorbonne University alumni
Paris-Saclay University alumni
Aix-Marseille University alumni
Academic staff of Nanyang Technological University | Xavier Garbet | [
"Physics"
] | 405 | [
"Plasma physicists",
"Plasma physics"
] |
75,865,513 | https://en.wikipedia.org/wiki/William%20Edward%20Augustin%20Aikin | William Edward Augustin Aikin (6 February 1807 – 31 May 1888), known professionally as William E. A. Aikin, was an American analytical chemist and natural scientist. He was chair of the chemistry department at the University of Maryland from 1837 to 1883. While most of his work focused on chemistry, he also made contributions in other fields of the natural sciences.
Early years, education, and personal life
Aikin was born in Rensselaer County, New York on February 6, 1807. He graduated from the Rensselaer Institute in 1829 and shortly thereafter earned a license from the New York State Medical Society. Aikin married twice and had 28 children. He outlived both wives and all but three of his children.
Academic career
Despite completing his training and earning an honorary M.D. degree from the Vermont Academy of Medicine, Aikin turned away from the medical profession and took a position in 1833 teaching natural sciences at the Western Female Collegiate Institute in Pittsburgh. Moving to Baltimore, he became an associate professor of chemistry and pharmacy at the University of Maryland for one year, until he was elected chair of the chemistry department in October 1837. He filled that role until his withdrawal as Emeritus Professor in 1883.
He was Dean of the School of Medicine at the university from 1840 to 1841 and from 1844 to 1855. Other positions he held included Professor of Natural Philosophy in the School of Arts and Sciences at the University of Maryland, Lecturer at the Maryland Institute, and Professor of Physics, Chemistry, and Natural Philosophy at Mount Saint Mary's College in Emmitsburg.
Other accomplishments
Aikin held an interest in a broad range of sciences. A colleague said of him: If you want a pretty good practical mathematician, one of the best botanists in America, an experimental chemist, of the first order, a very superior Geologist, Mineralogist, and Zoologist, you have him in Dr. William Aikin.
In 1837, Aikin published a catalog of botanical specimens he studied in the Baltimore area.
Aikin was appointed assistant geologist on The Geological Survey of Virginia from June 1, 1837, to April 13, 1839.
In 1839, Aikin was appointed Governor of the Baltimore Infirmary.
Aikin experimented with photography and published some of his findings in 1840.
Aikin served as an expert witness in the murder trials of Nancy W. Hufford in 1851, Dr. Paul Schoeppe in 1872, and Elizabeth G. Wharton in 1873.
From about 1868 until 1888, Aikin served as Baltimore's Inspector of Gas and Illuminating Oils.
After Aikin's death, a steam-powered automobile was discovered in his office at The University of Maryland. The car was built in 1882 and credited to Aikin; he may have built it with his son, Albert, a civil engineer who wrote a thesis on steam machinery and, at the time, lived with his father in Baltimore.
Publications
References
1807 births
1888 deaths
Analytical chemists
19th-century American geologists
19th-century American photographers | William Edward Augustin Aikin | [
"Chemistry"
] | 597 | [
"Analytical chemists"
] |
75,866,380 | https://en.wikipedia.org/wiki/2CE-5iPrO | 2CE-5iPrO (5-iPrO-2C-E) is a psychedelic substituted phenethylamine derivative related to 2C-E, but with the 5-methoxy group extended to isopropoxy. Similar to related "tweetio" compounds such as 2CD-5EtO, it has a longer duration of action than 2C-E but is otherwise similar in activity, although it shows reduced anti-inflammatory actions.
See also
2C-E-FLY
N-Ethyl-2C-B
References
2C (psychedelics)
Serotonin receptor agonists
Methoxy compounds
Amines
Isopropyl compounds | 2CE-5iPrO | [
"Chemistry"
] | 143 | [
"Pharmacology",
"Functional groups",
"Medicinal chemistry stubs",
"Amines",
"Pharmacology stubs",
"Bases (chemistry)"
] |
75,867,139 | https://en.wikipedia.org/wiki/Gallium%20indium%20antimonide | Gallium indium antimonide, also known as indium gallium antimonide, GaInSb, or InGaSb (GaxIn1-xSb), is a ternary III-V semiconductor compound. It can be considered an alloy between gallium antimonide and indium antimonide, and the alloy can contain any ratio between gallium and indium; GaInSb refers generally to any composition of the alloy.
Preparation
GaInSb films have been grown by molecular beam epitaxy, chemical beam epitaxy and liquid phase epitaxy on gallium arsenide and gallium antimonide substrates. It is often incorporated into layered heterostructures with other III-V compounds.
Electronic properties
The bandgap and lattice constant of GaInSb alloys are between those of pure GaSb (a = 0.610 nm, Eg = 0.73 eV) and InSb (a = 0.648 nm, Eg = 0.17 eV). Over all compositions, the bandgap is direct, like in pure GaSb and InSb.
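For a quick estimate of intermediate compositions, the endpoint values above can be interpolated linearly (Vegard's law). This is a sketch only: real GaInSb alloys show bandgap bowing, so the linear Eg is a rough upper estimate.

```python
def gainsb_props(x):
    """Vegard-type linear interpolation for Ga(x)In(1-x)Sb between the
    GaSb and InSb endpoint values quoted above (no bowing correction)."""
    a = x * 0.610 + (1.0 - x) * 0.648   # lattice constant, nm
    Eg = x * 0.73 + (1.0 - x) * 0.17    # bandgap, eV (rough: ignores bowing)
    return a, Eg

a, Eg = gainsb_props(0.5)
print(f"Ga0.5In0.5Sb: a ~ {a:.3f} nm, Eg ~ {Eg:.2f} eV (linear estimate)")
```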
Applications
InGaSb and InGaSb-containing heterostructures have been studied for use in near- to mid-infrared photodetectors, transistors, and Hall-effect sensors.
References
External links
Properties of GaInSb
Antimonides
Gallium compounds
Indium compounds
III-V compounds | Gallium indium antimonide | [
"Chemistry"
] | 284 | [
"III-V compounds",
"Inorganic compounds"
] |
75,868,995 | https://en.wikipedia.org/wiki/Anachronism%20in%20Middle-earth | Anachronism, chronological inconsistency, is seen in J. R. R. Tolkien's fantasy world of Middle-earth in the juxtaposition of cultures of evidently different periods, such as the classically-inspired Gondor and the medieval-style Rohan, and in the far more modern hobbits of the Shire, a setting which resembles the English countryside of Tolkien's childhood. The more familiar lifestyle and manner of the hobbits, complete with tobacco, potatoes, umbrellas, and mantelpiece clocks, allows them to mediate between the reader and the far older cultures of Middle-earth. They were introduced for The Hobbit, a children's story not planned to be set in Middle-earth; their anachronistic role is extended in The Lord of the Rings.
Tolkien's books are at once medieval in style and modern in many ways, such as appealing to a diverse modern readership and possessing a modern novelistic "realism". The One Ring, too, embodies a strikingly modern concept, that power corrupts; in medieval thought, power just revealed how a person already was. The combination of medieval and modern is echoed in Peter Jackson's films of The Lord of the Rings, introducing further anachronistic elements such as skateboarding during a battle scene.
Cultures of different periods
Scholars have commented that the cultures of Middle-earth, such as the classically-inspired Gondor and the medieval-style Rohan, are evidently of different eras, creating a built-in element of anachronism in the narrative.
Those heroic cultures are, in turn, clearly quite unlike that of the home-loving hobbits of the Shire. Gondor is rooted in ancient Rome, while Rohan echoes many aspects of the culture of the Anglo-Saxons. The Tolkien scholar Sandra Ballif Straubhaar writes that "the most striking similarities" for Gondor are with the legends of ancient Rome: Aeneas, from Troy, and Elendil, from Númenor, both survive the destruction of their home countries; the brothers Romulus and Remus found Rome, while the brothers Isildur and Anárion found the Númenórean kingdoms in Middle-earth; and both Gondor and Rome experienced centuries of "decadence and decline".
Bilbo Baggins's comfortable home in The Hobbit, on the other hand, is in Tom Shippey's words
Modern hobbits in an older world
Tolkien scholars including Shippey and Dimitra Fimi have stated that the hobbits are misfits in Middle-earth's heroic world. Tolkien placed the Shire not somewhere heroic, but in a society he had personally experienced, "more or less a Warwickshire village of about the period of the Diamond Jubilee [of Queen Victoria, in 1897]". Shippey described the hobbits' culture, complete with tobacco and potatoes, as a "creative anachronism" on Tolkien's part. In his view, anachronism is the "essential function" of hobbits, enabling Tolkien to "bridge the gap" by mediating between readers' lives in the modern world and the dangerous ancient world of Middle-earth. Robert Tally notes that Bilbo is the anachronism in The Hobbit as he enters the otherwise consistently "distant, legendary, or mythic past", meeting the wizard Gandalf, the Dwarf Thorin, Elves, and the dragon. This mediating function was, back in 1957, said to be essential by Douglass Parker in his review of The Lord of the Rings, Hwaet We Holbytla....
Fimi comments that this applies both to the style of language used by the hobbits, and to their material culture of "umbrellas, camping kettles, matches, clocks, pocket handkerchiefs and fireworks", all of which are plainly modern, as are the fish and chips that Sam Gamgee thinks of on his journey to Mordor. Most striking, in her view, however, is Tolkien's description of the enormous dragon firework at Bilbo's party which rushed overhead "like an express train". Tolkien's drawing of the hall of Bilbo's home, Bag End, shows both a clock and a barometer (mentioned in an early draft), and he had another clock on his mantelpiece. To arrange a party, the hobbits rely on a daily postal service. The effect, the scholars agree, is to bring the reader comfortably into the ancient heroic world.
The medievalist Lynn Forest-Hill writes that the plants mentioned are similarly anachronistic, whether the "nasturtians" growing over Bag End, the "taters" in its garden, or the "pipeweed" that the hobbits liked to smoke, each plant indicating a homely activity – gardening, cooking, smoking. In her view, the nasturtians "signal the specific relationship of [the] anachronistic [hobbits] to the present". Characters, too, can be anachronistic, out of their time, as with the hobbit-become-monster Gollum, who after his five centuries hidden under the Misty Mountains is in the time of the War of the Ring, the end of the Third Age, but who is from an era of the distant past when hobbits still lived by the River Anduin.
Medieval but modern
Scholars agree that while Middle-earth has a strongly Medieval feeling and setting, books like The Lord of the Rings are certainly modern. Tolkien, a philologist, was a professional medievalist; but his Middle-earth writings have attracted readers, in the words of Jane Chance and Alfred Siewers "globally across a wide political and cultural spectrum, from the postmodern counterculture to Christian traditionalists." The scholar of humanities Brian Rosebury comments that Tolkien's writing shares several qualities with modernism, as well as having a modern novelistic "realism". Anna Vaninskaya states that Tolkien was certainly "a modern writer"; he did not engage with modernism, but his work was "supremely intertextual", interweaving and juxtaposing styles, modes, and genres.
Shippey writes that a central aspect of The Lord of the Rings is strikingly non-medieval: the One Ring. Tolkien depicts it as relentlessly evil, eating away at its possessor's mind. Shippey comments that "The most evident fact to note about the Ring is that it is in conception strikingly anachronistic, totally modern". In his view, it embodies the modern maxim "Power corrupts, and absolute power corrupts absolutely", where in medieval thought, power just revealed how a person already was. The whole idea that power is corrosive and addictive is thus a modern one.
The illustrator Ted Nasmith describes his own Tolkien artwork as embodying "appropriate anachronism", presenting the apparently medieval in the idiom of modern fantasy.
A literary process
Tolkien started writing The Hobbit purely as a children's story, nothing to do with his legendarium. By the time he had completed it, it alluded to Sauron (as the Necromancer) and mentioned Elrond, Esgaroth, and Gondolin: it was being drawn into Middle-earth. All the same, in 1937 when The Hobbit was published, Tolkien expected that that would be as far as the interconnections would go. However, a month later, his publisher, Stanley Unwin, let him know that the public would want "more from you about Hobbits!" Tolkien started work on a sequel, which became The Lord of the Rings, and it necessarily contained both heroic elements and hobbits. The story grew in the telling, and became a feigned history rather than a Silmarillion-like mythology, a fantasy complete with a sub-created secondary world, suitable for adults as well as children. Tolkien laboured to resolve the inconsistencies that the merger of The Hobbit and the mythology created, often successfully; but the anachronism of the hobbits in a more ancient world turned out to be both inherent in the story, and necessary to mediate between the characters of the ancient world and the reader.
In adaptations
Peter Jackson's 2001–2003 film adaptation of The Lord of the Rings introduced further anachronistic elements. The scholar of literature Gwendolyn Morgan comments that Arwen is transformed into a "twenty-first century Buffy the Vampire Slayer", replacing Tolkien's "medieval courtly mistress", while the heroic Aragorn becomes an "angst-ridden, sensitive, existential '90s male", and Saruman's hatching of his Uruk Hai, a specially large breed of orcs, echoes modern concerns about genetic engineering. Then, she notes, there are the jokes about dwarf-tossing, and Legolas's skateboarding "down the stairs on a shield at Helm's Deep", this last becoming hugely popular, "evoking applause and verbal outbursts" in cinemas, things which Morgan suggests "may be more jarring".
References
Primary
Secondary
Sources
Themes of The Lord of the Rings
Anachronism | Anachronism in Middle-earth | [
"Physics"
] | 1,929 | [
"Spacetime",
"Anachronism",
"Physical quantities",
"Time"
] |
75,873,137 | https://en.wikipedia.org/wiki/Data%20minimization | Data minimization is the principle of collecting, processing and storing only the necessary amount of personal information required for a specific purpose. The principle emanates from the realisation that processing unnecessary data creates unnecessary risks for the data subject without creating any current benefit or value. The risks of processing personal data range from identity theft to unreliable inferences resulting in incorrect, wrongful and potentially dangerous decisions.
The principle of data minimization is a global, universal principle of data protection, and can thus be found in almost every legal or regulatory text on data protection/privacy.
The data minimization principle in regulatory texts worldwide (selection)
The data minimization principle is the second of the six fundamental privacy principles set forth in the General Data Protection Regulation and the UK GDPR.
The OECD Privacy Guidelines refer to the data minimization principle as the Collection Limitation Principle (part two, article 7).
The American Data Privacy and Protection Act (ADPPA), a United States proposed federal online privacy bill that was not enacted, included data minimisation as a main principle.
The APEC Privacy Framework includes the data minimization principle, referred to as the Collection Limitation principle, as principle III.
The American Privacy Rights Act (APRA), a comprehensive data privacy law proposed in April 2024 in the United States, includes a section on data minimisation.
The Canadian Personal Information Protection and Electronic Documents Act (PIPEDA) includes the principle as Principle 4 - Limiting Collection.
References
Internet
Data security | Data minimization | [
"Technology",
"Engineering"
] | 314 | [
"Cybersecurity engineering",
"Internet",
"Data security",
"Transport systems"
] |
75,874,009 | https://en.wikipedia.org/wiki/Signpost%20sequence | In mathematics and apportionment theory, a signpost sequence is a sequence of real numbers, called signposts, used in defining generalized rounding rules. A signpost sequence defines a set of signposts that mark the boundaries between neighboring whole numbers: a real number less than the signpost is rounded down, while numbers greater than the signpost are rounded up.
Signposts allow for a more general concept of rounding than the usual one. For example, the signposts of the rounding rule "always round down" (truncation) are given by the signpost sequence $s(n) = n + 1$.
Formal definition
Mathematically, a signpost sequence is a localized sequence, meaning the $n$th signpost $s(n)$ lies in the $n$th interval with integer endpoints: $s(n) \in [n, n+1]$ for all $n$. This allows us to define a general rounding function using the floor function:

$$ \operatorname{round}(x) = \begin{cases} \lfloor x \rfloor & \text{if } x < s(\lfloor x \rfloor) \\ \lfloor x \rfloor + 1 & \text{if } x > s(\lfloor x \rfloor) \end{cases} $$

where exact equality $x = s(\lfloor x \rfloor)$ can be handled with any tie-breaking rule, most often by rounding to the nearest even number.
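A runnable sketch of the rounding rule just defined. The three signpost sequences are standard examples: truncation as in the introduction, the arithmetic mean n + 1/2 for ordinary rounding, and the geometric mean used by Huntington-Hill-style apportionment; ties are broken upward here for brevity rather than to the nearest even.

```python
import math

def signpost_round(x, signpost):
    """Round x with a signpost sequence: x in [n, n+1] rounds down when
    x < signpost(n) and up when x > signpost(n); ties round up here."""
    n = math.floor(x)
    return n if x < signpost(n) else n + 1

truncation = lambda n: n + 1                    # always round down
arithmetic = lambda n: n + 0.5                  # ordinary rounding
geometric  = lambda n: math.sqrt(n * (n + 1))   # Huntington-Hill style

x = 2.45
for name, rule in [("truncation", truncation), ("arithmetic", arithmetic),
                   ("geometric", geometric)]:
    print(name, signpost_round(x, rule))   # -> 2, 2, 3 respectively
```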
Applications
In the context of apportionment theory, signpost sequences are used in defining highest averages methods, a set of algorithms designed to achieve equal representation between different groups.
References
Sequences and series
Apportionment methods | Signpost sequence | [
"Mathematics"
] | 233 | [
"Sequences and series",
"Mathematical analysis",
"Mathematical structures",
"Mathematical objects"
] |
75,879,690 | https://en.wikipedia.org/wiki/Cloud-9%20%28RELHIC%29 | Cloud-9 is a REionization-Limited-H i Cloud (RELHIC), which may turn out to be a starless dark matter galaxy. This RELHIC may have a mass 5 billion times that of the Sun and was found in the vicinity of the spiral galaxy M94, in the constellation Canes Venatici.
References
Canes Venatici
Astronomical objects discovered in 2023 | Cloud-9 (RELHIC) | [
"Astronomy"
] | 85 | [
"Canes Venatici",
"Galaxy stubs",
"Astronomy stubs",
"Constellations"
] |
75,880,928 | https://en.wikipedia.org/wiki/Trimethylsulfoxonium | Trimethylsulfoxonium (abbreviated TMSO) is a cation with a formula (CH3)3SO+ consisting of a sulfur atom attached to three methyl groups and one oxygen atom. It has a net charge of +1.
Production
Refluxing dimethyl sulfoxide with methyl iodide can yield trimethylsulfoxonium iodide.
Reactions
Treated with sodium hydride, trimethylsulfoxonium forms dimethylsulfoxonium methylide (the equations are written out after this list).
Trimethylsulfoxonium can polymerise to yield polyethylene.
Copper, zinc and palladium ions in water react with trimethylsulfoxonium and sodium hydroxide to form sulfur ylide complexes.
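Written out as equations, the preparation and the methylide-forming reaction mentioned above are (schematic, with stoichiometry as commonly given for this classic reagent chemistry):

```latex
% Methylation of dimethyl sulfoxide by methyl iodide under reflux:
\mathrm{(CH_3)_2SO + CH_3I \longrightarrow [(CH_3)_3SO]^+\,I^-}
% Deprotonation by sodium hydride gives dimethylsulfoxonium methylide:
\mathrm{[(CH_3)_3SO]^+\,I^- + NaH \longrightarrow (CH_3)_2S(O){=}CH_2 + NaI + H_2}
```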
Properties
In the chloride salt, the sulfur-oxygen bond length is 1.436 Å and the sulfur-carbon bonds are 1.742 Å. The O-S-C angles are 112.6°, and the C-S-C angles are 106.2°.
List of compounds
References
Organosulfur compounds
Cations
Sulfur ions | Trimethylsulfoxonium | [
"Physics",
"Chemistry"
] | 214 | [
"Matter",
"Organosulfur compounds",
"Organic compounds",
"Cations",
"Sulfur ions",
"Ions"
] |
75,883,560 | https://en.wikipedia.org/wiki/List%20of%20landing%20ellipses%20on%20extraterrestrial%20bodies | This is a list of the projected landing zones on extraterrestrial bodies. The size of the ellipse or oval graphically represents statistical degrees of uncertainty, i.e. the confidence level of the landing point, with the center of the ellipse calculated as the most likely landing point given the many variables involved. Their accuracy has improved since the early attempts in the 1960s; active research continues in the 21st century.
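To make concrete how an ellipse encodes statistical uncertainty: if the predicted landing-point dispersion is summarized by a 2x2 covariance matrix, the ellipse's semi-axes and orientation follow from its eigendecomposition. The sketch below is generic; the covariance values and the 3-sigma scaling are illustrative assumptions, not any mission's published numbers.

```python
import math

def landing_ellipse(cov, k=3.0):
    """Semi-axes (same length units as the data) and orientation (radians)
    of a k-sigma ellipse for a 2x2 covariance [[sxx, sxy], [sxy, syy]]."""
    sxx, sxy, syy = cov[0][0], cov[0][1], cov[1][1]
    t = 0.5 * (sxx + syy)
    d = math.sqrt((0.5 * (sxx - syy)) ** 2 + sxy ** 2)
    lam_major, lam_minor = t + d, t - d              # covariance eigenvalues
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)   # major-axis direction
    return k * math.sqrt(lam_major), k * math.sqrt(lam_minor), theta

a, b, th = landing_ellipse([[25.0, 8.0], [8.0, 4.0]])  # km^2, illustrative
print(f"3-sigma ellipse: {2*a:.1f} km x {2*b:.1f} km, "
      f"rotated {math.degrees(th):.0f} deg from downrange")
```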
Ellipse table
See also
Moon landing
Mars landing
Great Galactic Ghoul
Cone of Uncertainty
Tropical cyclone forecasting
Deliberate crash landings on extraterrestrial bodies
Notes
References
Spaceflight
Exploration of Mars
Exploration of the Moon | List of landing ellipses on extraterrestrial bodies | [
"Astronomy"
] | 131 | [
"Spaceflight",
"Outer space"
] |
75,893,889 | https://en.wikipedia.org/wiki/Plasmalysis | Plasmalysis is an electrochemical process that requires a voltage source. On the one hand, the term describes the plasma-chemical dissociation of organic and inorganic compounds (e.g. C-H and N-H compounds) interacting with a thermal or non-thermal plasma between two electrodes. On the other hand, it describes synthesis, i.e. the combination of two or more elements to form a new molecule (e.g. methane synthesis/methanation). Plasmalysis is a portmanteau of plasma and lysis (Greek λύσις, "dissolution").
Thermal/non-thermal plasma
Thermal plasmas can be generated technically, for example, by inductive coupling of high-frequency fields in the MHz range (ICP: inductively coupled plasma) or by direct-current coupling (arc discharges). A thermal plasma is characterized by the fact that electrons, ions and neutral particles are in thermodynamic equilibrium. For atmospheric-pressure plasmas, the temperatures in thermal plasmas are usually above 6000 K. This corresponds to average kinetic energies of less than 1 eV.
Nonthermal plasmas are found in low-pressure arc discharges, such as fluorescent lamps, in dielectric barrier discharges (DBD), such as ozone tubes, in microwave plasmas (plasma torches, i.e. PLexc or MagJet) or in GHz plasma jets. A non-thermal plasma shows a significant difference between the electron and gas temperatures. For example, the electron temperature can be several tens of thousands of kelvins, corresponding to average kinetic energies of more than 1 eV, while a gas temperature close to room temperature is measured. Despite their low temperature, such plasmas can trigger chemical reactions and excitation states via electron collisions. Pulsed corona and dielectric barrier discharges belong to the family of nonthermal plasmas. Here the electrons are much hotter (several eV) than the ions and neutral gas particles (room temperature).
Technical aspects
To generate a nonthermal plasma at atmospheric pressure, a working gas (molecular or inert gas, e.g. air, nitrogen, argon, helium) is passed through an electric field. Electrons originating from ionization processes can be accelerated in this field to trigger impact-ionization processes. If more free electrons are produced during this process than are lost, a discharge can build up. The degree of ionization in technically used plasmas is usually very low, typically a few per mille or less. The electrical conductivity generated by these free charge carriers is used to couple in electrical power. When colliding with other gas atoms or molecules, the free electrons can transfer their energy to them and thus generate highly reactive species that act on the material to be treated (gaseous, liquid, solid). The electron energy is sufficient to split covalent bonds in organic molecules. The energy required to split single bonds is in the range of about 1.5-6.2 eV, for double bonds about 4.4-7.4 eV, and for triple bonds about 8.5-11.2 eV. For gases that can also be used as process gases, the dissociation energies are e.g. 5.7 eV (O2) and 9.8 eV (N2).
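For orientation, these per-bond energies convert to molar units as follows (a throwaway Python calculation added for illustration, not part of the source text):

EV_TO_KJ_PER_MOL = 96.485  # 1 eV per particle corresponds to 96.485 kJ/mol

for label, ev in [("single bond, low end", 1.5), ("single bond, high end", 6.2),
                  ("O2 dissociation", 5.7), ("N2 dissociation", 9.8)]:
    print(f"{label}: {ev} eV = {ev * EV_TO_KJ_PER_MOL:.0f} kJ/mol")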
Applications of atmospheric pressure plasmas
Atmospheric-pressure plasmas have been used for a variety of industrial applications, including volatile organic compound (VOC) removal, exhaust gas emission treatment, and polymer surface and food treatment. For decades, non-thermal plasmas have also been used to generate ozone for water purification. Atmospheric-pressure plasmas are characterized primarily by a large number of electrical discharges in which the majority of the electrical energy is used to generate energetic electrons. These energetic electrons produce chemically excited species (free radicals and ions) and additional electrons by dissociation, excitation and ionization of background gas molecules through electron impact. These excited species in turn oxidize, reduce or decompose the molecules brought into contact with them, such as wastewater or biomethane. Part of the electrical energy is converted into chemical energy. Plasmalysis can thus be used to store energy, for example in the plasmalysis of ammonium from wastewater or liquid fermentation residue, which produces hydrogen and nitrogen. The hydrogen thus produced can serve as an energy carrier for a hydrogen economy.
Dissociation mechanisms of gases and liquids
In the following section XH stands for any hydrogen compound, e.g. CH- and NH-compounds.
Thermal dissociation: gaseous hydrogen molecules dissociate at temperatures above 3000 K, e.g. in a plasma. At temperatures above 3500 K, H2 and O2 are dissociated.
Electron impact dissociation: e- + XH → X + H + e-
The density of radicals scales with the electron density and increases with gas and electron temperature (thermal dissociation and electron-impact dissociation).
Ion impact dissociation:
Dissociative electron attachment: e- + XH → (XH-)* → X + H-
This process generates negative ions as well as neutral particles. The colliding electron is captured into an excited state of the molecule; the energy difference between the ground state and the excited state dissociates the molecule. The electron-induced dissociation of water depends on the electron temperature, which significantly influences the ratio of the OH density (n_OH) to the electron density (n_e). The maximum OH density is reached in the early afterglow, when the electron temperature (T_e) is low.
Photoionisation: high-energy photons dissociate molecules.
Solvated electrons: Reducing agent in liquid
Dissociation efficiency of different hydrogen sources
Water Electrolysis
Since the focus is always on the most energy-efficient dissociation of chemical compounds, the benchmark is the energy input of the electrolysis of distilled water (45 kWh/kgH2), as in the following reaction equation:
2 H2O → 2 H2 + O2
Methane-plasmalysis
A particularly efficient way of generating hydrogen (10 kWh/kgH2) is methane plasmalysis. In this process, methane (e.g. from natural gas) is decomposed in the plasma under exclusion of oxygen, forming hydrogen and elemental carbon, as in the following reaction equation:
CH4 → C + 2 H2
Methane plasmalysis offers, among other things, the possibility of decentralized decarbonization of natural gas or, if biogas is used, also the realization of a CO2 sink, whereby, in contrast to the CCS process commonly used to date, no gas has to be compressed and stored, but the elemental carbon produced can be bound in product form.
This technology can also be used to prevent the flaring of so-called "flare gases" by using them as a feedstock for the production of hydrogen and carbon.
Wastewater-plasmalysis
The plasmalysis of wastewater and liquid manure enables hydrogen to be recovered from pollutants contained in the wastewater (ammonium (NH4+) or hydrocarbon compounds (COD)). The plasma-catalytic decomposition of ammonia takes place as shown in the following reaction equation:
2 NH3 → N2 + 3 H2
The treated wastewater is purified in the process. The energy requirement for the production of green hydrogen is approx. 12 kWh/kgH2.
This technology can also be used as an ammonia-cracking technology for splitting the hydrogen carrier ammonia.
Dissociation of hydrogen sulfide
Hydrogen sulfide - a component of crude oil and natural gas and a by-product in anaerobic digestion of biomass - is also suitable for plasma-catalytic decomposition to produce hydrogen and elemental sulfur due to its weak binding energy.
The energy requirement for the production of hydrogen from H2S is approx. 5 kWh/kgH2.
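Comparing the specific energy demands quoted in this section (the figures are taken from the text above; the short Python script is only illustrative):

pathways_kwh_per_kg_h2 = {
    "water electrolysis": 45,
    "methane plasmalysis": 10,
    "wastewater plasmalysis": 12,
    "hydrogen sulfide plasmalysis": 5,
}

for name, demand in pathways_kwh_per_kg_h2.items():
    # Kilograms of hydrogen obtainable per MWh of electrical energy.
    print(f"{name}: {1000 / demand:.0f} kg H2 per MWh")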
Reactor geometry
It is apparent that both the reactor geometry and the method by which the plasma is generated strongly influence the performance of the system.
References
Electrochemistry
Process engineering | Plasmalysis | [
"Chemistry",
"Engineering"
] | 1,609 | [
"Process engineering",
"Electrochemistry",
"Mechanical engineering by discipline"
] |
75,912,306 | https://en.wikipedia.org/wiki/L%C3%A9vy-Leblond%20equation | In quantum mechanics, the Lévy-Leblond equation describes the dynamics of a spin-1/2 particle. It is a linearized version of the Schrödinger equation and of the Pauli equation. It was derived by French physicist Jean-Marc Lévy-Leblond in 1967.
The Lévy-Leblond equation was obtained through heuristic derivations similar to those of the Dirac equation but, contrary to the latter, it is not relativistic. As both equations recover the electron gyromagnetic ratio, it is suggested that spin is not necessarily a relativistic phenomenon.
Equation
For a nonrelativistic spin-1/2 particle of mass m, a representation of the time-independent Lévy-Leblond equation reads:
E ψ + c (σ · p) χ = 0
c (σ · p) ψ + 2mc² χ = 0
where c is the speed of light, E is the nonrelativistic particle energy, p is the momentum operator, and σ is the vector of Pauli matrices, which is proportional to the spin operator S = (ħ/2) σ. Here ψ and χ are two-component functions (spinors) describing the wave function of the particle.
By minimal coupling, replacing E → E − qV and p → p − qA, the equation can be modified to account for the presence of an electromagnetic field:
(E − qV) ψ + c σ · (p − qA) χ = 0
c σ · (p − qA) ψ + 2mc² χ = 0
where q is the electric charge of the particle, V is the electric potential, and A is the magnetic vector potential. This equation is linear in its spatial derivatives.
Relation to spin
In 1928, Paul Dirac linearized the relativistic dispersion relation and obtained the Dirac equation, described by a bispinor. This equation can be decoupled into two spinors in the non-relativistic limit, predicting the electron magnetic moment with a gyromagnetic ratio g = 2. The success of Dirac's theory has led some textbooks to erroneously claim that spin is necessarily a relativistic phenomenon.
Jean-Marc Lévy-Leblond applied the same technique to the non-relativistic energy relation, showing that the same prediction of g = 2 can be obtained. In fact, to derive the Pauli equation from the Dirac equation one has to pass through the Lévy-Leblond equation. Spin is thus a result of quantum mechanics and of the linearization of the equations, but not necessarily a relativistic effect.
The Lévy-Leblond equation is Galilean invariant. This equation demonstrates that one does not need the full Poincaré group to explain spin 1/2. In the limit where c → ∞, quantum mechanics under the Galilean transformation group is enough. Similarly, one can construct linear equations for any arbitrary spin. Under the same idea one can construct equations for Galilean electromagnetism.
Relation to other equations
Schrödinger's and Pauli's equation
Taking the second line of the Lévy-Leblond equation and inserting it back into the first line, one obtains, through the algebra of the Pauli matrices, that
E ψ = (p²/(2m)) ψ,
which is the Schrödinger equation for a two-valued spinor. Note that solving for χ also returns another Schrödinger equation. Pauli's expression for a spin-1/2 particle in an electromagnetic field can be recovered by minimal coupling:
(E − qV) ψ = (1/(2m)) [σ · (p − qA)]² ψ.
While the Lévy-Leblond equation is linear in its derivatives, Pauli's and Schrödinger's equations are quadratic in the spatial derivatives.
Dirac equation
The Dirac equation can be written as:
(ε − mc²) ψ + c (σ · p) χ = 0
c (σ · p) ψ + (ε + mc²) χ = 0
where ε is the total relativistic energy. In the non-relativistic limit, ε ≈ mc² + E with E ≪ mc², and one recovers the Lévy-Leblond equations.
Heuristic derivation
Similar to the historical derivation of the Dirac equation by Paul Dirac, one can try to linearize the non-relativistic dispersion relation E = p²/(2m). We want two operators Θ and Θ′, linear in p (spatial derivatives) and E, like
Θ = A E + B · p + C and Θ′ = A′ E + B′ · p + C′
for some coefficients A, B, C and A′, B′, C′, such that their product recovers the classical dispersion relation, that is
Θ′ Θ = 2mc² (E − p²/(2m)) 1,
where the factor 2mc² is arbitrary and is just there for normalization. By carrying out the product, one finds that there is no solution if the coefficients are one-dimensional constants. The lowest dimension where there is a solution is 4. Then the coefficients are matrices that must satisfy the following relations:
A′A = 0, C′C = 0, A′C + C′A = 2mc² 1, B′_i B_j + B′_j B_i = −2c² δ_ij 1, A′B_i + B′_i A = 0, C′B_i + B′_i C = 0;
these relations can be rearranged to involve the gamma matrices from the Clifford algebra. Here 1 is the identity matrix of dimension N. One possible representation, written in 2×2 blocks, is
Θ = [[E, c σ · p], [c σ · p, 2mc²]] and Θ′ = [[2mc², −c σ · p], [−c σ · p, E]],
such that Θ Ψ = 0, with Ψ = (ψ, χ), returns the Lévy-Leblond equation. Other representations can be chosen, leading to equivalent equations with different signs or phases.
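The block representation above can be checked mechanically. The following Python/SymPy sketch (an addition for illustration, not part of the original text) verifies that Θ′Θ = 2mc²(E − p²/(2m))·1:

import sympy as sp

E, m, c, p1, p2, p3 = sp.symbols('E m c p1 p2 p3', real=True)

# Pauli matrices and sigma . p
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])
sdp = p1*s1 + p2*s2 + p3*s3
I2 = sp.eye(2)

# The 4x4 operators in 2x2 blocks, as written above
Theta = sp.Matrix(sp.BlockMatrix([[E*I2, c*sdp], [c*sdp, 2*m*c**2*I2]]))
ThetaP = sp.Matrix(sp.BlockMatrix([[2*m*c**2*I2, -c*sdp], [-c*sdp, E*I2]]))

expected = 2*m*c**2*(E - (p1**2 + p2**2 + p3**2)/(2*m)) * sp.eye(4)
assert sp.simplify(ThetaP*Theta - expected) == sp.zeros(4)
print("Theta' Theta = 2mc^2 (E - p^2/2m) * 1, as required")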
References
Eponymous equations of physics
Quantum mechanics
Spinors | Lévy-Leblond equation | [
"Physics"
] | 898 | [
"Eponymous equations of physics",
"Theoretical physics",
"Equations of physics",
"Quantum mechanics"
] |
75,912,651 | https://en.wikipedia.org/wiki/C10H7NO2 | The molecular formula C10H7NO2 (molar mass: 173.17 g/mol) may refer to:
1-Nitroso-2-naphthol
1-Nitronaphthalene
2-Nitronaphthalene
Molecular formulas | C10H7NO2 | [
"Physics",
"Chemistry"
] | 57 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
75,916,379 | https://en.wikipedia.org/wiki/Domain%20separation | In cryptography, domain separation is a construct used to implement multiple different functions using only one underlying template in an efficient way. The domain separation can be defined as partitioning of the domain of a function to assign separate subdomains to different applications of the same function.
For example, cryptographic protocols typically rely on random oracles (ROs, functions that return a value fully determined by their input yet otherwise random). The security proofs for these protocols are based on the assumption that the random oracle is unique to the protocol: if two protocols share the same RO, the assumptions of the proof are not met anymore. Since creating a new cryptographic primitive from scratch each time an RO is needed is impractical, multiple ROs (say, RO1 and RO2) are produced by prepending unique domain separation tags (DSTs, also known as domain separators) to the input of a base oracle RO:
RO1(x) := RO("RO1" || x)
RO2(x) := RO("RO2" || x)
where "RO1" and "RO2" are the strings representing the unique DSTs and || is a concatenation operator. If the underlying RO function is secure (say, it is a cryptographic hash), RO1 and RO2 are statistically independent. The technique was originally proposed by Bellare & Rogaway in 1993.
Uses
The domain separation construct can be used for multiple purposes:
providing independent ROs for protocols;
extending the output size of an RO, for example by using the RO multiple times (numbered from 1 to L), each time using a representation of the oracle number as a DST; this technique is called "counter mode" due to its similarity to the counter mode of a block cipher (see the sketch after this list);
"keying" the oracle by using an encryption key as a DST.
In the practical sense, domain separation can provide "customization", an equivalent of strong typing in programming: it enforces the use of independent calculations for different tasks, so an attacker who has learned the result of one calculation gains no information about another.
Kinds of functions
Domain separation can be used with functions implementing different cryptographic primitives.
Hash functions
Domain separation is most commonly used with hash functions. The input domain of a hash function is practically unlimited, so it is easy to partition it among any number of derived functions, for example by prepending or appending a DST to the message.
Domain separation is used within the implementation of some hash functions to produce multiple different functions from the same design. For example, in SHA-3 the domain separation makes sure that the differently named functions (like SHA3-512 or SHAKE128) are independent.
Symmetric ciphers and MACs
The security of symmetric ciphers and MACs critically depends on the key not being used for other purposes. If an application needs multiple keys but has only one source of keying material, it would typically employ a key derivation function to produce the keys. KDFs can usually produce output of arbitrary length, so they can be used to generate any number of keys.
Also, just like hash functions, some symmetric ciphers and MACs use domain separation internally.
Signatures
In many cases, it is desirable to use a single signing key to produce digital signatures for different purposes. If this is done, it is important to make sure that signed messages intended for one purpose cannot be used for the other. A simple way to achieve this is to add to each message an identifier specifying the purpose, and to reject a message if the identifier doesn't match.
References
Sources
Cryptography | Domain separation | [
"Mathematics",
"Engineering"
] | 750 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
75,938,839 | https://en.wikipedia.org/wiki/NGC%207625 | NGC 7625, or Arp 212, is a peculiar galaxy in the constellation of Pegasus. It was discovered on October 15, 1784, by William Herschel. In his New General Catalogue (1888), J. L. E. Dreyer described it as pretty bright, considerably small, round, with a suddenly much brighter middle. It is located at an estimated distance of from the Milky Way galaxy.
Halton Arp included NGC 7625 as object 212 in his Atlas of Peculiar Galaxies, indicating it displayed unexplained physical processes. In the Third Reference Catalogue of Bright Galaxies, NGC 7625 was assigned a morphological classification of SA(rs)a pec, which indicates a peculiar spiral galaxy (SA) with a transitional ring structure (rs) and tightly wound spiral arms (a). In 1981 it was designated a blue compact dwarf by T. X. Thuan and G. E. Martin on the basis of strong emission lines from ionized gas. A prominent visible feature is an open ring of dust lanes with an angular radius of about .
NGC 7625 displays indications of a recent interaction with another galaxy. Velocity measurements suggest the inner part of the galaxy is rotating in a different plane than the outer parts. The angle between these two planes increases with distance from the galactic center, reaching 50° at a radius of 6 kpc. Hence this may be a polar-ring galaxy, with the added gas accreted from the dwarf satellite galaxy UGC 12549. There is a large amount of gas and dust undergoing significant star formation, with emission of H-alpha concentrated at the core and in separate knots along exterior curved structures.
On October 28, 2023 type Ia supernova SN 2023vyl was discovered in this galaxy by ATLAS.
References
Further reading
Pegasus (constellation)
Peculiar galaxies
7625
12529
212
17841015
Discoveries by William Herschel
71133
71133
+03-59-038 | NGC 7625 | [
"Astronomy"
] | 394 | [
"Pegasus (constellation)",
"Constellations"
] |
75,964,056 | https://en.wikipedia.org/wiki/Punctularia%20atropurpurascens | Punctularia atropurpurascens, also known as violet crust or purple fuzz, is a species of fungus. Purple fuzz is a saprotrophic crust fungus. The preferred nutrient source of purple fuzz is the wood of deciduous trees. Purple fuzz is prone to guttation and weeps red. Purple fuzz appears to be a fairly widespread fungus capable of adapting to a variety of climates.
See also
Corticioid fungi
References
Fungi described in 1916
Punctulariaceae
Fungus species | Punctularia atropurpurascens | [
"Biology"
] | 106 | [
"Fungus stubs",
"Fungi",
"Fungus species"
] |
75,966,255 | https://en.wikipedia.org/wiki/Category%20of%20measurable%20spaces | In mathematics, the category of measurable spaces, often denoted Meas, is the category whose objects are measurable spaces and whose morphisms are measurable maps.
This is a category because the composition of two measurable maps is again measurable, and the identity function is measurable.
N.B. Some authors reserve the name Meas for categories whose objects are measure spaces, and denote the category of measurable spaces as Mble, or other notations. Some authors also restrict the category only to particular well-behaved measurable spaces, such as standard Borel spaces.
As a concrete category
Like many categories, the category Meas is a concrete category, meaning its objects are sets with additional structure (i.e. sigma-algebras) and its morphisms are functions preserving this structure. There is a natural forgetful functor
U : Meas → Set
to the category of sets which assigns to each measurable space the underlying set and to each measurable map the underlying function.
The forgetful functor U has both a left adjoint
D : Set → Meas
which equips a given set with the discrete sigma-algebra, and a right adjoint
I : Set → Meas
which equips a given set with the indiscrete or trivial sigma-algebra. Both of these functors are, in fact, right inverses to U (meaning that UD and UI are equal to the identity functor on Set). Moreover, since any function between discrete or between indiscrete spaces is measurable, both of these functors give full embeddings of Set into Meas.
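These adjunctions can be illustrated on finite sets with a small Python sketch (hypothetical helper names, added for illustration; sigma-algebras are represented as sets of frozensets):

from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def discrete(X):   # D: every subset is measurable
    return (frozenset(X), set(powerset(X)))

def indiscrete(X): # I: only the empty set and X itself are measurable
    return (frozenset(X), {frozenset(), frozenset(X)})

def is_measurable(f, source, target):
    X, sigma_X = source
    Y, sigma_Y = target
    # f is measurable iff every preimage of a measurable set is measurable.
    return all(frozenset(x for x in X if f(x) in B) in sigma_X
               for B in sigma_Y)

X, Y = {0, 1, 2}, {"a", "b"}
f = lambda x: "a" if x == 0 else "b"
assert is_measurable(f, discrete(X), discrete(Y))      # out of discrete: always
assert is_measurable(f, indiscrete(X), indiscrete(Y))  # into indiscrete: always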
Limits and colimits
The category Meas is both complete and cocomplete, which means that all small limits and colimits exist in Meas. In fact, the forgetful functor U : Meas → Set uniquely lifts both limits and colimits and preserves them as well. Therefore, (co)limits in Meas are given by placing particular sigma-algebras on the corresponding (co)limits in Set.
Examples of limits and colimits in Meas include:
The empty set (considered as a measurable space) is the initial object of Meas; any singleton measurable space is a terminal object. There are thus no zero objects in Meas.
The product in Meas is given by the product sigma-algebra on the Cartesian product. The coproduct is given by the disjoint union of measurable spaces.
The equalizer of a pair of morphisms is given by placing the induced sigma-algebra on the subset given by the set-theoretic equalizer. Dually, the coequalizer is given by placing the quotient sigma-algebra on the set-theoretic coequalizer.
Direct limits and inverse limits are the set-theoretic limits with the final and initial sigma-algebra respectively. Canonical examples of direct and inverse systems are the ones arising from filtrations in probability theory, and the limits and colimits of such systems are, respectively, the join and the intersection of sigma-algebras.
Other properties
The monomorphisms in Meas are the injective measurable maps, the epimorphisms are the surjective measurable maps, and the isomorphisms are the isomorphisms of measurable spaces.
The split monomorphisms are (essentially) the inclusions of measurable retracts into their ambient space.
The split epimorphisms are (up to isomorphism) the measurable surjective maps of a measurable space onto one of its retracts.
Meas is not cartesian closed (and therefore also not a topos) since it does not have exponential objects for all spaces.
See also
Citations
References
Categories in category theory
Measure theory
Probability theory | Category of measurable spaces | [
"Mathematics"
] | 786 | [
"Mathematical structures",
"Category theory",
"Categories in category theory"
] |
75,975,490 | https://en.wikipedia.org/wiki/SN%202010jl | SN 2010jl was a luminous type IIn supernova that was discovered on November 3, 2010, in the irregular galaxy UGC 5189A. It is 48.9 ± 3.4 Mpc distant from the solar system. It showed an infrared excess which lasted for over 1400 days.
Discovery
2010jl was discovered during the Puckett Observatory Supernova Search, by Newton & Puckett with a 0.40-m reflector at Portal, Arizona. The discovery was made on Nov. 3.52 UT and was confirmed on Nov. 4.50. Follow-up spectroscopy showed broad emission and narrow-line emission from hydrogen and helium leading to a classification of type IIn.
Infrared excess
CSM interaction
The classification as type IIn showed that the supernova was interacting with the circumstellar medium (CSM). The supernova itself produces the broad emission, while the flash-ionized circumstellar medium produces the narrow-line emission features typical of type IIn. X-ray observations with Chandra/ACIS showed absorption features caused by circumstellar matter. At the time of the observation it was one of the most luminous supernovae observed in X-rays.
Infrared echo
Observations with Hubble detected a near-infrared excess that lasted for 400 days. While the early near-infrared emission is dominated by the supernova, the later near-infrared emission becomes increasingly dominated by the infrared echo. The echo is caused by pre-existing circumstellar dust that does not interact with the supernova but scatters its light.
New dust
A later study with Gemini and Spitzer showed that the infrared excess persisted until the end of the observations on day 1367 after the discovery. This very late detection of infrared excess cannot be explained with an infrared echo alone. Between days 260 and 464 the near-infrared brightness jumps and then slowly fades until day 1000. The jump in near-infrared brightness is explained by the formation of new dust.
The formation of new dust is supported by several other features. On the one hand, 2010jl showed infrared excess caused by thermal radiation of the newly formed dust. It also showed a blueshift of the emission lines, caused by the dust blocking the material that is farther away along our line of sight. A third line of evidence would be increased fading in the optical, which could not be demonstrated owing to a lack of observations in the relevant time span. It was determined that the supernova had produced about 0.005-0.01 solar masses (about 5-10 Jupiter masses) of predominantly carbon dust grains by day 1400.
2010jl-like supernovae
Following the discovery of 2010jl, several other type IIn supernovae with long-lasting infrared excess were discovered. Their H- and K-band and mid-infrared light curves are dominated by two increases in brightness. The first increase appears at discovery and is attributed to the CSM interaction and the light echo. The second increase is attributed to the formation of new dust. After the second increase the infrared light curve fades.
The following 2010jl-like supernovae are known: SN 2014ab, SN 2015da and SN 2017hcc. The supernova ASASSN-15ua is also mentioned as being similar to 2010jl. Additionally, there are type II supernovae with mid-infrared light curves similar to those of 2010jl.
References
Supernovae
Astronomical objects discovered in 2010 | SN 2010jl | [
"Chemistry",
"Astronomy"
] | 722 | [
"Supernovae",
"Astronomical events",
"Explosions"
] |