| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
77,095,044 | https://en.wikipedia.org/wiki/Enzymatic%20polymerization | Enzymatic polymerization is a promising area of polymer research, providing a sustainable and adaptable alternative to conventional polymerization processes. Its capacity to manufacture polymers with exact structures under mild conditions opens up new possibilities for material design and application, helping to advance both research and industry. It is a novel and sustainable method of synthesizing polymers that utilizes the catalytic properties of enzymes to both initiate and regulate the polymerization process. It works under mild conditions, usually at room temperature and pressure and in aqueous environments, in contrast to conventional chemical polymerization techniques that frequently require harsh conditions and harmful reagents. This approach allows fine control over the structure and functionality of polymers while simultaneously consuming less energy and having a smaller environmental impact.
This polymerization technique has the considerable advantage of being compatible with renewable resources. Many of the monomers utilized in these procedures come from natural sources, which aligns with the ideas of green chemistry and sustainability. This alignment is especially crucial given growing environmental concerns and the quest for more sustainable industrial operations. The potential applications of polymers produced via enzymatic polymerization are vast, spanning the fields of biomedicine, materials science, and environmental engineering. For example, biodegradable polymers produced using this method are very useful for medical applications such as drug delivery systems, biosensors and tissue engineering scaffolds. Furthermore, enzymatic polymerization opens up fascinating possibilities for the production of innovative biomaterials with tailored characteristics for specific industrial applications.
Mechanism of enzymatic polymerization
Enzymatic polymerization can happen in a variety of ways, including:
Condensation Polymerization: Enzymes such as lipases and proteases catalyze the step-growth polymerization of monomers by forming ester, amide, or peptide bonds, releasing small molecules such as water or alcohol as by-products.
Addition Polymerization: This method includes radical-mediated processes, in which enzymes such as peroxidases initiate polymerization by producing radical species that propagate the polymer chain.
Ring-Opening Polymerization: Enzymes help to open cyclic monomers to produce linear polymers, which is a typical process for synthesizing polyesters and polyamides.
Types of enzymes used in polymerization
Polymerases, or polymerase enzymes, can catalyze the synthesis of different kinds of polymers. Key enzymes involved include the following. Lipases are used in the synthesis of polyesters and polyamides; they accelerate the esterification and transesterification reactions required for polymer chain formation. In oxidative polymerization, peroxidases aid in the polymerization of phenolic and aniline derivatives, resulting in the production of conductive polymers. Glycosyltransferases are necessary for polysaccharide formation because they catalyze the transfer of sugar moieties to create glycosidic linkages. Proteases are enzymes that help create peptide bonds, allowing amino acid monomers to be polymerized into polyamides or proteins.
References
Polymerization reactions | Enzymatic polymerization | [
"Chemistry",
"Materials_science"
] | 625 | [
"Polymerization reactions",
"Polymer chemistry"
] |
77,095,053 | https://en.wikipedia.org/wiki/Metallopeptide | Metallopeptides (also called metal-peptides or metal peptide complexes) are peptides that contain one or more metal ions in their structure. This specific type of peptide is, just like metalloproteins, a metallofoldamer. Very much like metalloproteins, metallopeptides owe their functionality to the contained metal ion cofactor. These short structured peptides are often employed to develop mimics of metalloproteins and systems similar to artificial metalloenzymes.
A multitude of naturally occurring peptides display biological and chemical activities when bound to various metal ions. Different metal ion cofactors can lead to different reactivity and even different folding and physical characteristics (e.g. solubility or stability) of the structure. Synthetic equivalents of such peptides are engineered to bind metal ions and display a variety of physical, chemical, and biological reactivities and characteristics.
Examples
In the last 40 years, there has been a significant amount of research on metal binding peptides and their characteristics, structures, and chemical reactivities.
Vincent L. Pecoraro and his group investigate the interaction of peptides with heavy metals in the body; Katherine Franz leads a group studying Cu-binding peptides; Angela Lombardi and her unit focus on the development of artificial metalloenzymes and similar peptide systems, and the group of Peter Faller focuses on redox reactivity of Cu-peptides.
Natural
Natural metallopeptides with antibiotic, antimicrobial and anticancer properties have been of particular interest to the scientific community (e.g. the divalent bacitracin, histatin and Fe/Cu-bleomycin). At the same time there is increasing attention to the role of metallopeptides in disease development. For example, metallochemical interactions in brain tissue can contribute to neurodegenerative conditions due to the naturally high concentration of metal ions in the brain. Hence metallochemical reactions occurring outside the physiologically healthy concentrations can contribute to the development of diseases such as Alzheimer's disease. The condition is related to the β-amyloid metallopeptides. Other examples are infectious prion polypeptides and specific isoforms of the prion protein, which contribute to disease transmission and development.
Artificial
De novo designed peptides which self-assemble in the presence of copper (Cu), forming supramolecular assemblies were presented by Korendovych et al. Additionally there are examples of metallopeptides that are, at least partially, composed of non-natural amino acids with possible applications in drug discovery and biomaterials.
Metal coordination
As molecules that are often only activated for biological and chemical function upon metal binding, metallopeptides are subject to certain restrictions and requirements imposed by the specific coordination of metal ions. Usually metal cofactors are coordinated by nitrogen, oxygen or sulfur centers belonging to amino acid residues of the peptide. These donor groups can be introduced by histidine (via the imidazole ring), cysteine (via the thiolate group), and carboxylate substituents (e.g. from aspartate), but are not limited to these. Other amino acid residues, including non-natural amino acids, as well as the peptide backbone, have also been shown to bind metal centers and provide donor groups. The research on metal binding of peptides ranges from the coordination of biometals (such as calcium, magnesium, manganese, zinc, sodium, potassium, and iron) to heavy metals (such as arsenic, mercury, and cadmium).
Synthesis and analysis
Biosynthesis
Peptides are synthesized in living organisms inside the cell analogously to proteins.
Chemical synthesis
Solid phase peptide synthesis (SPPS) is a well-established method for producing synthetic peptides. SPPS enables the building of a peptide chain by the sequential coupling of amino acid derivatives.
Analysis
The interactions between metal ions and peptides are typically studied in solution using spectroscopic or electrochemical methods, among which are circular dichroism (CD), nuclear magnetic resonance (NMR) spectroscopy, cyclic voltammetry, and mass spectrometry (MS).
See also
Bioinorganic chemistry
Evolution of metal ions in biological systems
Biometal (biology)
Coenzyme
Metalloproteins
References
Peptides
Biochemistry
Bioinorganic chemistry
Metalloproteins
Synthetic biology | Metallopeptide | [
"Chemistry",
"Engineering",
"Biology"
] | 918 | [
"Synthetic biology",
"Biomolecules by chemical classification",
"Biological engineering",
"Peptides",
"Bioinformatics",
"Molecular genetics",
"nan",
"Molecular biology",
"Biochemistry",
"Metalloproteins",
"Bioinorganic chemistry"
] |
77,095,997 | https://en.wikipedia.org/wiki/Scale-down%20bioreactor | A scale-down bioreactor is a miniature model designed to mimic or reproduce large-scale bio-processes or specific process steps on a smaller scale. These models play an important role during the process development stage by allowing fine-tuning of parameters and process steps without the need for substantial investments in materials and consumables. Vessel geometry, such as aspect ratios, impeller designs, and sparger placements, should be nearly identical between the small and large scales. For this purpose computational fluid dynamics (CFD) is used, as it can be employed to investigate the scalability of mixing processes from small-scale models to larger production scales. Scientists use the outcomes of these studies on scale-down systems to inform and facilitate the transition from laboratory-scale studies to industrial large-scale conditions.
Types of scale-down bioreactors
Stirred-tank bioreactors have been further developed into two-compartment systems that provide the fundamental structure of scale-down bioreactors. In the two commonly used configurations, cells are circulated either between two stirred-tank reactors (STR–STR) or from an STR through a plug flow reactor (STR–PFR).
STR-STR
The application of coupled stirred-tank reactors in scale-down models is a powerful technique for simulating and studying the complex conditions of large-scale industrial bioreactors. By providing a controlled environment in which non-homogeneous conditions can be replicated, these models offer valuable insights into optimizing bioprocesses, ensuring consistent product quality, and reducing costs and time in biotechnological production. Co-cultures, in which two or more complementary microbes are cultivated together, can also be run in such systems. One recent study conducted in a two-compartment bioreactor concerned the production of violacein.
STR-PFR
Scale-down reactors can also be configured as two-compartment bioreactors. In a two-compartment bioreactor setup, the first compartment can be operated as an STR for initial growth and biomass buildup, while the second compartment functions as a PFR for the production phase with a defined residence time. Coupling a well-mixed stirred-tank reactor (STR) with a plug flow reactor (PFR) in a two-compartment system offers significant flexibility in flow characteristics to meet specific process requirements. This configuration allows precise control over various factors, including improved bioprocess outcomes through tailored residence time distributions and substrate gradients. In such a system a portion of the culture is exposed to varying environmental cues, such as altered mixing times, nutrient deprivation, aeration, pH, or temperature, before being recirculated into the main STR. The resulting perturbations simulate the transient stresses encountered in large-scale industrial reactors. The residence time in the PFR zone is calibrated to match the typical timescales experienced in large-scale industrial bioprocesses, as illustrated in the sketch below. Systems further optimized to explore shorter timescales are termed dynamic microfluidic systems. Computational fluid dynamics (CFD) simulations can predict and model the flow patterns in complex STR–PFR systems.
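As a rough illustration of the residence-time calibration mentioned above, the sketch below computes the nominal residence time per PFR pass and the average circulation cycle time for assumed compartment volumes and recirculation flow rate; all numbers are invented for illustration and are not taken from any specific study.

```python
# Illustrative STR-PFR scale-down loop sizing (all values are assumed examples)
V_str = 10.0   # working volume of the stirred-tank compartment [L]
V_pfr = 1.0    # volume of the plug-flow compartment [L]
Q = 0.5        # recirculation flow rate through the PFR [L/min]

tau_pfr = V_pfr / Q                # residence time per pass through the PFR [min]
tau_cycle = (V_str + V_pfr) / Q    # average time between two PFR passes [min]

print(f"PFR residence time per pass: {tau_pfr:.1f} min")
print(f"average circulation cycle time: {tau_cycle:.1f} min")
```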
Advantages of Scale down bioreactors
Efficient Exploration of Operating Conditions
During process development, a wide range of operating conditions must be explored in order to identify the optimal parameter ranges, which is crucial for achieving successful large-scale bioprocesses. However, running the required number of experiments in large-scale fermenters is time-consuming, resource-intensive, and costly. Hence, smaller scale-down systems are used, in the form of miniaturized bioreactors ranging from microliters to milliliters in scale.
Miniaturized bioreactors enable researchers to conduct numerous experiments simultaneously, exploring various combinations of process parameters such as temperature, pH, agitation rates, and nutrient concentrations. These models facilitate efficient process optimization at a small scale, and the insights gained from these experiments can be seamlessly transferred to larger-scale systems. The scalability of the process parameters and operating conditions identified through scale-down models ensures a smooth transition to pilot and commercial-scale production. This high-throughput approach allows for rapid screening and identification of optimal operating conditions, which would be impractical and costly with larger-scale systems. By working at a smaller scale, these miniaturized bioreactors significantly reduce the consumption of raw materials, media components, and other consumables needed for fermentation runs. This resource-efficient approach not only minimizes costs but also aligns with sustainable practices, reducing waste and environmental impact. Bioprocess engineering strategies are applied to upgrade and enhance the overall productivity of the cultivation experiments. Important parameters like the oxygen transfer rate (OTR), dissolved oxygen concentration, superficial gas velocity, volume-specific power input P/V, and mixing time can be modified and optimized to obtain high titre formation according to the desired requirements. These titre values can be comparable to values obtained in large-scale industrial bioprocesses.
Efficient microbial strain testing and characterization
Microbial strain engineering and cell factory engineering are developing areas of interest and are important in determining the outcome of large-scale fermentations. With developments in metabolic engineering and synthetic biology, new strains are constructed which need to be tested under large-scale-like conditions. This is an instance where scale-down bioreactors can be coupled with microbial strain engineering to broaden the scope of research and bridge the gap between two interdisciplinary fields of study.
Application of computational fluid dynamics
By developing and applying computational fluid dynamics simulations, process scientists and engineers can gain valuable insights into the fluid flow patterns and mixing dynamics within various geometries. The ability to run multiple experiments in parallel, combined with the reduced resource requirements, translates into accelerated process development timelines. Researchers can quickly iterate through various conditions, analyze results, and make informed decisions, ultimately shortening the overall development cycle. Two parameters that need particular attention are the Reynolds number and the power number, as non-dimensional quantities used for technical know-how in both scale-up and scale-down processes. By understanding the relationship between the power number and the Reynolds number, it becomes possible to predict the power requirements for a given flow regime and impeller configuration. This knowledge is crucial for designing and operating agitated systems at different scales while maintaining consistent mixing performance.
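To make the role of these two dimensionless groups concrete, the sketch below evaluates the impeller Reynolds number and the power draw per unit volume for a hypothetical lab-scale stirred tank; all property values, the impeller geometry, and the assumed power number are illustrative placeholders, not values from the article.

```python
# Illustrative stirred-tank mixing check (all values are assumed, not from the article)
rho = 1000.0   # broth density [kg/m^3]
mu = 1e-3      # dynamic viscosity [Pa s]
N = 5.0        # impeller speed [1/s]
D = 0.1        # impeller diameter [m]
Np = 5.0       # power number of a Rushton-type turbine in the turbulent regime (assumed)
V = 0.01       # liquid volume [m^3] (assumed)

Re = rho * N * D**2 / mu       # impeller Reynolds number
P = Np * rho * N**3 * D**5     # power draw [W]

print(f"Re = {Re:.2e}, P/V = {P / V:.0f} W/m^3")
```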
References
Bioreactors
Biotechnology
Biochemical engineering
Biological engineering | Scale-down bioreactor | [
"Chemistry",
"Engineering",
"Biology"
] | 1,284 | [
"Bioreactors",
"Biological engineering",
"Chemical reactors",
"Chemical engineering",
"Biochemical engineering",
"Microbiology equipment",
"Biotechnology",
"nan",
"Biochemistry"
] |
77,097,363 | https://en.wikipedia.org/wiki/Gary%20Patti | Gary J. Patti is an American biochemist known for his research in metabolism and for using mass spectrometry to characterize biological processes. He is the Michael and Tana Powell Professor at Washington University in St. Louis. He is co-founder and Chief Scientific Officer of Panome Bio and an Associate Editor for Clinical & Translational Metabolism.
Awards
Biemann Medal, 2024
ACS Midwest Award, 2023
Academy of Science Innovation Award, 2016
Edward Mallinckrodt Jr. Scholar Award, 2016
Pew Biomedical Scholars Award, 2015
Alfred P. Sloan Award, 2014
Camille Dreyfus Teacher-Scholar Award, 2014
References
External links
Year of birth missing (living people)
Living people
Washington University in St. Louis faculty
21st-century American chemists
Mass spectrometrists
Biochemistry
Metabolism
Cancer | Gary Patti | [
"Physics",
"Chemistry",
"Biology"
] | 163 | [
"Spectrum (physical sciences)",
"Mass spectrometrists",
"Mass spectrometry",
"Cellular processes",
"nan",
"Biochemistry",
"Biochemists",
"Metabolism"
] |
77,099,254 | https://en.wikipedia.org/wiki/Chile%20Architecture%20Biennial | The Chilean Architecture and Urbanism Biennial is a significant event that has been organized by the Chilean Association of Architects since 1977. It aims to create a space for meeting, reflection, and exchange of ideas about architectural work, serving as a showcase for the best architectural and urban projects of the last two years.
Expositions
Since its inaugural edition, the biennial has been held in various cultural spaces in Santiago. In 2015, the event took place in Valparaíso, making it the first edition outside the Chilean capital.
Since 2015, the selection of curators for the event has been conducted through an open call launched by the Chilean Association of Architects. Seven years later, the Ministry of Cultures, Arts, and Heritage joined the selection process of pavilion proposals.
The 2022 edition, entitled Vulnerable Habitats (Hábitats vulnerables), was postponed until January 2023 and featured several installations around La Moneda Palace in Santiago. These installations included designs by Smiljan Radić and Nicolás Schmidt, the reconstruction of a pavilion originally designed by Montserrat Palmer in 1972, and temporary structures designed by Jean Araya and Miguel Casassus, as well as Low Estudio.
Editions
See also
Architecture of Chile
Chicago Architecture Biennial
Architecture Biennial
References
Architecture festivals
Festivals in Chile
Festivals established in 1977
Architecture in Chile | Chile Architecture Biennial | [
"Engineering"
] | 257 | [
"Architecture festivals",
"Architecture"
] |
71,164,067 | https://en.wikipedia.org/wiki/Pore%20structure | Pore structure is a common term employed to characterize the porosity, pore size, pore size distribution, and pore morphology (such as pore shape, surface roughness, and tortuosity of pore channels) of a porous medium. Pores are the openings in the surfaces of an otherwise impermeable porous matrix that gases, liquids, or even foreign microscopic particles can inhabit. The pore structure and fluid flow in porous media are intimately related.
With pore radii spanning the micro- to nanoscale, complex connectivity, and significant heterogeneity, the pore structure strongly affects the hydraulic conductivity and retention capacity of these fluids. The intrinsic permeability is the attribute primarily influenced by the pore structure, and the fundamental physical factors governing fluid flow and distribution are the grain surface-to-volume ratio and grain shape.
The idea that the pore space is made up of a network of channels through which fluid can flow is particularly helpful. Pore openings are the comparatively thin sections that divide the relatively large portions known as pore bodies. Other anatomical analogies include "belly" or "waist" for the broad region of a pore and "neck" or "throat" for the constrictive part. Pore bodies are the intergranular gaps with dimensions that are generally significantly smaller than those of the surrounding particles in a medium where textural pore space predominates, such as sand. On the other hand, a wormhole can be regarded as a single pore if its diameter is practically constant over its length.
Such pores can have one of three types of boundaries: (1) constriction, which is a plane across the locally narrowest part of the pore space; (2) interface with another pore (such as a wormhole or crack); or (3) interface with solid.
Porosity
The proportion of empty space in a porous media is called porosity. It is determined by dividing the volume of the pores or voids by the overall volume. It is expressed as a percentage or as a decimal fraction between 0 and 1. Porosity for the majority of rocks ranges from less than 1% to 40%.
Porosity influences fluid storage in geothermal systems, oil and gas fields, and aquifers, making it evident that it plays a significant role in geology. Fluid movement and transport across geological formations, as well as the link between the bulk properties of the rock and the characteristics of particular minerals, are controlled by the size and connectivity of the porous structure.
Measuring porosity
The sample's total volume and pore-space volume are measured in order to calculate the porosity.
Measuring pore space volume
A helium pycnometer is used to calculate the volume of the pores. It relies on Boyle's law (P1V1 = P2V2) and on helium gas, which easily passes through tiny pores and is inert, to identify the solid fraction of a sample. The core is placed in a sample chamber of known volume. Pressure is applied to a reference chamber, also of known volume. The helium gas may then pass from the reference chamber into the sample chamber through the connection between the two chambers. The volume of the sample solid is calculated from the ratio between the starting and final pressures. The pore volume, as determined by the helium pycnometer, is the difference between the total volume and the solid volume.
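A minimal numerical sketch of this Boyle's-law bookkeeping is given below; it assumes the sample chamber is initially evacuated, and all volumes and pressures are invented for illustration.

```python
# Hypothetical helium-pycnometry readings (illustrative values only)
V_ref = 50.0    # reference chamber volume [cm^3]
V_cell = 100.0  # sample chamber volume [cm^3]
V_bulk = 60.0   # bulk (total) volume of the core sample [cm^3]
P1 = 200.0      # pressure in the reference chamber before expansion [kPa]
P2 = 95.2       # pressure after expansion into the sample chamber [kPa]

# Boyle's law for the expansion: P1 * V_ref = P2 * (V_ref + V_cell - V_solid)
V_solid = V_ref + V_cell - P1 * V_ref / P2
V_pore = V_bulk - V_solid
porosity = V_pore / V_bulk
print(f"solid volume: {V_solid:.1f} cm^3, porosity: {porosity:.1%}")
```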
Pore size and pore size distribution
Pore size
Typically, the effective radius of the pore body or neck is used to define the size of pores. The position, shape, and connection of pores in solids are only a few of their numerous attributes and the most straightforward aspect of a pore to visualize is likely its size, or its extent in a single spatial dimension.
In comparison to other factors like pore shape, it is arguable that pore size has the biggest or broadest impact on the characteristics of solids. Therefore, using pore size or pore size distribution to describe and contrast various porous substances is definitely convenient and valuable.
The three main pore size ranges, following the current classification of pore sizes recommended by the International Union of Pure and Applied Chemistry (IUPAC), are micropores (widths smaller than 2 nm), mesopores (widths between 2 nm and 50 nm), and macropores (widths larger than 50 nm).
Pore size distribution
The relative abundance of each pore size in a typical volume of soil is represented by the pore size distribution. It is described by the function f(r), whose value is proportional to the total volume of all pores whose effective radius falls within an infinitesimal range centered on r. The function f(r) can be thought of as having textural and structural components.
Measuring pore size distribution
Mercury intrusion porosimetry and gas adsorption are common techniques for determining the pore size distribution of materials and power sources.
When studying the pore size distribution using the gas adsorption technique, utilizing the nitrogen or argon adsorption isotherm at their boiling temperatures, it is possible to determine pore sizes from the molecular level up to a few hundred nm. In practice, the precision limits of the pressure sensor and the temperature stability of the coolant restrict the maximum observable pore size to only slightly more than 100 nm.
Mercury porosimetry determines the pore size distribution and quantifies the associated intrusion volume by applying pressure to the non-wetting mercury. The pore size may be readily estimated using this method over a range from a few nm to about 1000 μm. The material must be robust enough to withstand the pressure, since mercury intrusion requires about 140 MPa of pressure for pores smaller than 10 nm. Additionally, the same pressure–size relationship is used to determine the pore size of ink-bottle necks.
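As a consistency check on the pressure figure quoted above, the sketch below applies the Washburn relation that is commonly used to interpret mercury intrusion data; the article does not name this equation, so the relation and the mercury property values used here are assumptions.

```python
import math

# Washburn relation for mercury intrusion (an assumption; the article does not
# name the equation explicitly): P = -4 * gamma * cos(theta) / d
gamma = 0.485              # surface tension of mercury [N/m] (typical literature value)
theta = math.radians(140)  # mercury contact angle (typical literature value)
d = 10e-9                  # pore diameter [m]

P = -4 * gamma * math.cos(theta) / d
# ~149 MPa, consistent in order of magnitude with the ~140 MPa quoted above
print(f"intrusion pressure for 10 nm pores: {P / 1e6:.0f} MPa")
```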
The relation of pore size to pore size distribution
The relationship between pore size and pore size distribution in a randomly constructed porous system, is expected to be monotone: bigger pores are connected to larger particles. The relationship between pore size and particle size is complicated by the nonrandom nature of most soils. Big pores may be found in both large and tiny particles, including clays, which promote aggregation and therefore the development of large interaggregate pores. Subdivisions of a pore size distribution in randomly structured media can express more specific characteristics of soils with more complex conceptualizations, such as the hysteresis of soil water retention.
Pore morphology
The pore morphology is the shape, surface roughness, and tortuosity of pore channels representing the liquid and gaseous phases.
Tortuosity of pore channels
Tortuosity of pore channels is a unique geometric quantity that is utilized not only to measure the transport characteristics of porous systems, but also to express the sinuosity and complexity of internal percolation routes.
Tortuosity is intimately connected to the transport behavior of electrical conductivity, fluid permeation, molecular diffusion, and heat transfer in geoscience, impacting petrophysical parameters such as permeability, effective diffusivity, thermal conductivity, and the formation resistivity factor.
Surface roughness
The standard definition of surface roughness for porous medium is based on the average measured vertical coordinate value in comparison to a relative surface height, such as root-mean-square roughness or arithmetic roughness. However, the lack of fractal topology consideration led to the relative surface height definition being deemed inadequate in reality.
The ratio of "real surface area" to "geometric smooth-surface area" was used as the second definition of surface roughness. This definition has been applied in several research to alter flow equations or measure the fluid-fluid interfacial area.
The fundamental idea of fractal geometry is where the third definition of surface roughness comes from, in which one either modifies the pore surfaces (two-dimensional) or the whole porous medium (three-dimensional) using fractal dimension adjustments, resulting in larger surface dimensions or reduced media dimensions. The Hurst roughness exponent, a similar definition, is occasionally used. This quantity, which spans from 0 to 1, is connected to the fractal dimension.
See also
Bulk density
Filtration
Permeability
Poromechanics
Porous medium
Reactive transport modeling in porous media
References
Further reading
Bear, J. (1972). Dynamics of Fluids in Porous Media. Elsevier, New York.
Hillel, D. (2004). Introduction to Environmental Soil Physics. Elsevier/Academic Press, Amsterdam.
Leeper, G. W. (1993). Soil Science: An Introduction. Melbourne University Press, Carlton, Victoria.
External links
Geology Buzz: Porosity
Defining Permeability
Tailoring porous media to control permeability
Permeability of Porous Media
Graphical depiction of different flow rates through materials of differing permeability
Fundamentals of Fluid Flow in Porous Media
Hydrology
Soil mechanics
Soil science
Porous media
In situ geotechnical investigations | Pore structure | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Environmental_science"
] | 1,831 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Porous media",
"Soil mechanics",
"Materials science",
"Environmental engineering"
] |
71,165,100 | https://en.wikipedia.org/wiki/Jens%20Lindhard | Jens Lindhard (26 February 1922 – 15 October 1997) was a Danish physicist and professor at Aarhus University working on condensed matter physics, statistical physics and special relativity. He was the president of the Royal Danish Academy of Sciences and Letters between 1981 and 1988.
He is known for the development of the Lindhard theory, that describes the behaviour of metals under the influence of electromagnetic fields, named in his honour. He is also known for the development of channelling theory, to describe the path of a charged particle in a crystalline solid.
Early life
Jens Lindhard was born in Tystofte, Denmark in 1922, son of Erik and Agnes Lindhard. He was the youngest son of six, consisting of four girls and two boys. Jens' father was a professor at University of Copenhagen Faculty of Life Sciences but died young in 1928. Jens' older brother was a bomber in England and died during the Invasion of Normandy.
Jens went to school at the Metropolitanskolen.
Later, during his university studies, Jens joined the Danish Brigade in Sweden and also joined them back in the defence of the Danish-German border.
He started his studies in physics at the Niels Bohr Institute and in 1945 he received a Master of Science degree in physics from University of Copenhagen.
Research
During his university years, he worked under the supervision of Oskar Klein in Sweden on superconductivity, publishing his first major work on the subject in 1944.
Later he moved to work with Rudolf Peierls in the University of Birmingham. There, in 1954, he published the first description of the dielectric function of metals in the linear response regime, today known as Lindhard theory.
In 1950, he worked in close collaboration with Niels Bohr in Blegdamsvej on the penetration of particles in matter. There Lindhard, Morten Scharff and H. E. Schiøtt developed what is now known as LSS theory (carrying their initials), which describes the penetration of low-energy ions. He also worked on fundamental problems related to statistical physics and relativity. As a teaching assistant, he helped to verify the formulas and problems in Christian Møller's The Theory of Relativity. Later Lindhard would provide a solution to the controversy related to the transformation of temperature in special relativity.
Lindhard moved to Aarhus University in 1957 in collaboration with experimentalist Karl Ove Nielsen, where he created and led a research group to study the penetration of charged particles in crystal lattices. During his time in Aarhus, Lindhard developed what is now known as the classical theory of channelling (sometimes also referred as Lindhard's theory) in continuum models in 1965.
Awards and membership
Jens Lindhard received several awards including:
the Rigmor and Carl Holst-Knudsen Award for Scientific Research in 1965
the H. C. Ørsted Medal in 1974
the Danish Physical Society Physics award in 1988
Honorary Doctorate degrees from Odense University in 1996 and Fudan University, Shanghai, in 1997
Jens Lindhard was a member of the Royal Danish Academy of Sciences and Letters from 1962 and served as its president from 1981 to 1988. He was also a member of the Koninklijke Hollandsche Maatschappij der Wetenschappen from 1984.
References
1922 births
1997 deaths
20th-century Danish physicists
Condensed matter physicists
Academic staff of Aarhus University
University of Copenhagen alumni | Jens Lindhard | [
"Physics",
"Materials_science"
] | 702 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
71,165,287 | https://en.wikipedia.org/wiki/Non-relativistic%20gravitational%20fields | Within general relativity (GR), Einstein's relativistic gravity, the gravitational field is described by the 10-component metric tensor g_{μν}. However, in Newtonian gravity, which is a limit of GR, the gravitational field is described by the single-component Newtonian gravitational potential φ. This raises the question of how to identify the Newtonian potential within the metric, and of the physical interpretation of the remaining 9 fields.
The definition of the non-relativistic gravitational fields provides the answer to this question, and thereby describes the image of the metric tensor in Newtonian physics. These fields are not strictly non-relativistic. Rather, they apply to the non-relativistic (or post-Newtonian) limit of GR.
A reader who is familiar with electromagnetism (EM) will benefit from the following analogy. In EM, one is familiar with the electrostatic potential φ and the magnetic vector potential A. Together, they combine into the 4-vector potential A^μ = (φ, A), which is compatible with relativity. This relation can be thought to represent the non-relativistic decomposition of the electromagnetic 4-vector potential. Indeed, a system of point-particle charges moving slowly with respect to the speed of light may be studied in an expansion in v/c, where v is a typical velocity and c is the speed of light. This expansion is known as the post-Coulombic expansion. Within this expansion, φ contributes to the two-body potential already at 0th order, while A contributes only from the 1st order and onward, since it couples to electric currents and hence the associated potential is proportional to v.
Definition
In the non-relativistic limit of weak gravity and non-relativistic velocities, general relativity reduces to Newtonian gravity. Going beyond the strict limit, corrections can be organized into a perturbation theory known as the post-Newtonian expansion. As part of that, the metric gravitational field g_{μν} is redefined and decomposed into the non-relativistic gravitational (NRG) fields (φ, A_i, σ_ij): φ is the Newtonian potential, A_i is known as the gravito-magnetic vector potential, and finally σ_ij is a 3d symmetric tensor known as the spatial metric perturbation. The field redefinition is given by

ds² = g_{μν} dx^μ dx^ν = e^{2φ} (dt − A_i dx^i)² − e^{−2φ} γ_{ij} dx^i dx^j .

In components, this is equivalent to

g_{00} = e^{2φ},   g_{0i} = −e^{2φ} A_i,   g_{ij} = −e^{−2φ} γ_{ij} + e^{2φ} A_i A_j ,

where γ_{ij} = δ_{ij} + σ_{ij}.

Counting components, g_{μν} has 10, while φ has 1, A_i has 3 and finally σ_ij has 6. Hence, in terms of components, the decomposition reads 10 = 1 + 3 + 6.
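As a quick symbolic check of the component form above, the following SymPy sketch expands the quadratic form and reads off metric components; it assumes the line element takes exactly the parametrization quoted above (with c = 1), so it is a verification sketch rather than a definitive statement of the convention.

```python
import sympy as sp

phi = sp.Symbol('phi')
dt = sp.Symbol('dt')
dx = sp.symbols('dx1:4')            # dx^1, dx^2, dx^3
A = sp.symbols('A1:4')              # gravito-magnetic potential components A_i
sigma = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'sigma{min(i, j) + 1}{max(i, j) + 1}'))
gamma = sp.eye(3) + sigma           # gamma_ij = delta_ij + sigma_ij

A_dx = sum(A[i] * dx[i] for i in range(3))
ds2 = sp.expand(sp.exp(2 * phi) * (dt - A_dx)**2
                - sp.exp(-2 * phi) * sum(gamma[i, j] * dx[i] * dx[j]
                                         for i in range(3) for j in range(3)))

g00 = ds2.coeff(dt, 2)                                   # -> exp(2*phi)
g01 = sp.Rational(1, 2) * ds2.coeff(dt, 1).coeff(dx[0])  # -> -exp(2*phi)*A1
print(g00, g01)
```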
Motivation for definition
In the post-Newtonian limit, bodies move slowly compared with the speed of light, and hence the gravitational field is also slowly changing. Approximating the fields to be time independent, the Kaluza-Klein reduction (KK) was adapted to apply to the time direction. Recall that in its original context, the KK reduction applies to fields which are independent of a compact spatial fourth direction. In short, the NRG decomposition is a Kaluza-Klein reduction over time.
The definition was essentially introduced in earlier work, interpreted in the context of the post-Newtonian expansion in subsequent work, and finally the normalization of A_i was changed to improve the analogy between a spinning object and a magnetic dipole.
Relation with standard approximations
By definition, the post-Newtonian expansion assumes a weak field approximation. Within the first-order perturbation to the metric, g_{μν} = η_{μν} + h_{μν}, where η_{μν} is the Minkowski metric, one finds the standard weak-field decomposition of h_{μν} into a scalar, a vector, and a tensor, which is similar to the non-relativistic gravitational (NRG) fields. The importance of the NRG fields is that they provide a non-linear extension, thereby facilitating computation at higher orders in the weak field / post-Newtonian expansion. Summarizing, the NRG fields are adapted for the higher order post-Newtonian expansion.
Physical interpretation
The scalar field φ is interpreted as the Newtonian gravitational potential.
The vector field A_i is interpreted as the gravito-magnetic vector potential. It is magnetic-like, or analogous to the magnetic vector potential in electromagnetism (EM). In particular, it is sourced by massive currents (the analogue of charge currents in EM), namely by momentum.
As a result, the gravito-magnetic vector potential is responsible for current-current interaction, which appears at the 1st post-Newtonian order. In particular, it generates a repulsive contribution to the force between parallel massive currents. However, this repulsion is overturned by the standard Newtonian gravitational attraction, since in gravity a current "wire" must always be massive (charged) -- unlike EM.
A spinning object is the analogue of an electromagnetic current loop, which forms a magnetic dipole, and as such it creates a magnetic-like dipole field in A_i.
The symmetric tensor σ_ij is known as the spatial metric perturbation. From the 2nd post-Newtonian order and onward, it must be accounted for. If one restricts to the 1st post-Newtonian order, σ_ij can be ignored, and relativistic gravity is described by the φ and A_i fields. Hence it becomes a strong analogue of electromagnetism, an analogy known as gravitoelectromagnetism.
Applications and generalizations
The two body problem in general relativity holds both intrinsic interest and observational, astrophysical interest. In particular, it is used to describe the motion of binary compact objects, which are the sources for gravitational waves. As such, the study of this problem is essential for both detection and interpretation of gravitational waves.
Within this two body problem, the effects of GR are captured by the two body effective potential, which is expanded within the post-Newtonian approximation. Non-relativistic gravitational fields were found to economize the determination of this two body effective potential.
Generalizations
In higher dimensions, with an arbitrary spacetime dimension d, the definition of non-relativistic gravitational fields generalizes into

ds² = e^{2φ} (dt − A_i dx^i)² − e^{−2φ/(d−3)} γ_{ij} dx^i dx^j ,   γ_{ij} = δ_{ij} + σ_{ij} .

Substituting d = 4 reproduces the standard 4d definition above.
See also
Non-relativistic spacetime
References
General relativity | Non-relativistic gravitational fields | [
"Physics"
] | 1,211 | [
"General relativity",
"Theory of relativity"
] |
68,270,636 | https://en.wikipedia.org/wiki/Linearized%20augmented-plane-wave%20method | The linearized augmented-plane-wave method (LAPW) is an implementation of Kohn-Sham density functional theory (DFT) adapted to periodic materials. It typically goes along with the treatment of both valence and core electrons on the same footing in the context of DFT and the treatment of the full potential and charge density without any shape approximation. This is often referred to as the all-electron full-potential linearized augmented-plane-wave method (FLAPW). It does not rely on the pseudopotential approximation and employs a systematically extendable basis set. These features make it one of the most precise implementations of DFT, applicable to all crystalline materials, regardless of their chemical composition. It can be used as a reference for evaluating other approaches.
Introduction
At the core of density functional theory the Hohenberg-Kohn theorems state that every observable of an interacting many-electron system is a functional of its ground-state charge density and that this density minimizes the total energy of the system. The theorems do not answer the question how to obtain such a ground-state density. A recipe for this is given by Walter Kohn and Lu Jeu Sham who introduce an auxiliary system of noninteracting particles constructed such that it shares the same ground-state density with the interacting particle system. The Schrödinger-like equations describing this system are the Kohn-Sham equations. With these equations one can calculate the eigenstates of the system and with these the density. One contribution to the Kohn-Sham equations is the effective potential which itself depends on the density. As the ground-state density is not known before a Kohn-Sham DFT calculation and it is an input as well as an output of such a calculation, the Kohn-Sham equations are solved in an iterative procedure together with a recalculation of the density and the potential in every iteration. It starts with an initial guess for the density and after every iteration a new density is constructed as a mixture from the output density and previous densities. The calculation finishes as soon as a fixpoint of a self-consistent density is found, i.e., input and output density are identical. This is the ground-state density.
A method implementing Kohn-Sham DFT has to realize these different steps of the sketched iterative algorithm. The LAPW method is based on a partitioning of the material's unit cell into non-overlapping but nearly touching so-called muffin-tin (MT) spheres, centered at the atomic nuclei, and an interstitial region (IR) in between the spheres. The physical description and the representation of the Kohn-Sham orbitals, the charge density, and the potential is adapted to this partitioning. In the following this method design and the extraction of quantities from it are sketched in more detail. Variations and extensions are indicated.
Solving the Kohn-Sham equations
The central aspect of practical DFT implementations is the question of how to solve the Kohn-Sham equations

[ −(ħ²/2m) ∇² + V_eff(r) ] ψ_{j,k}(r) = ε_{j,k} ψ_{j,k}(r),

with the single-electron kinetic energy operator −(ħ²/2m) ∇², the effective potential V_eff(r), Kohn-Sham states ψ_{j,k}, energy eigenvalues ε_{j,k}, and position and Bloch vectors r and k. While in abstract evaluations of Kohn-Sham DFT the model for the exchange-correlation contribution to the effective potential is the only fundamental approximation, in practice solving the Kohn-Sham equations is accompanied by the introduction of many additional approximations. These include the incompleteness of the basis set used to represent the Kohn-Sham orbitals, the choice of whether to use the pseudopotential approximation or to consider all electrons in the DFT scheme, the treatment of relativistic effects, and possible shape approximations to the potential. Beyond the partitioning of the unit cell, for the LAPW method the central design aspect is the use of the LAPW basis set to represent the valence electron orbitals as

ψ_{j,k}(r) = Σ_G c_G^{j,k} φ_{k,G}(r),

where c_G^{j,k} are the expansion coefficients. The LAPW basis is designed to enable a precise representation of the orbitals and an accurate modelling of the physics in each region of the unit cell.
Considering a unit cell of volume Ω covering atoms α at positions τ_α, an LAPW basis function is characterized by a reciprocal lattice vector G and the considered Bloch vector k. It is given as

φ_{k,G}(r) = (1/√Ω) exp[i(k+G)·r]   in the interstitial region,
φ_{k,G}(r) = Σ_{l,m} [ a_{l,m}^{α,k,G} u_l^α(r_α, E_l^α) + b_{l,m}^{α,k,G} u̇_l^α(r_α, E_l^α) ] Y_{l,m}(r̂_α)   in the MT sphere of atom α,

where r_α = r − τ_α is the position vector relative to the position of atom nucleus α. An LAPW basis function is thus a plane wave in the IR and a linear combination of the radial functions u_l^α and u̇_l^α multiplied by spherical harmonics Y_{l,m} in each MT sphere. The radial function u_l^α is hereby the solution of the Kohn-Sham Hamiltonian for the spherically averaged potential with regular behavior at the nucleus for the given energy parameter E_l^α. Together with its energy derivative u̇_l^α these augmentations of the plane wave in each MT sphere enable a representation of the Kohn-Sham orbitals at arbitrary eigenenergies linearized around the energy parameters. The coefficients a_{l,m}^{α,k,G} and b_{l,m}^{α,k,G} are automatically determined by enforcing the basis function to be continuously differentiable for the respective (l,m) channel. The set of LAPW basis functions is defined by specifying a cutoff parameter K_max with |k+G| ≤ K_max. In each MT sphere, the expansion into spherical harmonics is limited to a maximum number of angular momenta l_max^α ≈ K_max R_MT^α, where R_MT^α is the muffin-tin radius of atom α. The choice of this cutoff is connected to the decay of the expansion coefficients for growing l in the Rayleigh expansion of plane waves into spherical harmonics.
While the LAPW basis functions are used to represent the valence states, core electron states, which are completely confined within a MT sphere, are calculated for the spherically averaged potential on radial grids, for each atom separately applying atomic boundary conditions. Semicore states, which are still localized but slightly extended beyond the MT sphere boundary, may either be treated as core electron states or as valence electron states. For the latter choice the linearized representation is not sufficient because the related eigenenergy is typically far away from the energy parameters. To resolve this problem the LAPW basis can be extended by additional basis functions in the respective MT sphere, so called local orbitals (LOs). These are tailored to provide a precise representation of the semicore states.
The plane-wave form of the basis functions in the interstitial region makes setting up the Hamiltonian matrix

H_{G',G}(k) = ⟨ φ_{k,G'} | Ĥ | φ_{k,G} ⟩

for that region simple. In the MT spheres this setup is also simple and computationally inexpensive for the kinetic energy and the spherically averaged potential, e.g., in the muffin-tin approximation. The simplicity hereby stems from the connection of the radial functions to the spherical Hamiltonian Ĥ_sph^α in the spheres, i.e., Ĥ_sph^α u_l^α = E_l^α u_l^α and Ĥ_sph^α u̇_l^α = E_l^α u̇_l^α + u_l^α. In comparison to the MT approximation, for the full-potential description (FLAPW) contributions from the non-spherical part of the potential are added to the Hamiltonian matrix in the MT spheres, and in the IR contributions related to deviations from the constant potential are added.
After the Hamiltonian matrix together with the overlap matrix S_{G',G}(k) = ⟨ φ_{k,G'} | φ_{k,G} ⟩ is set up, the Kohn-Sham orbitals are obtained as eigenfunctions from the algebraic generalized dense Hermitian eigenvalue problem

Σ_{G'} H_{G,G'}(k) c_{G'}^{j,k} = ε_{j,k} Σ_{G'} S_{G,G'}(k) c_{G'}^{j,k},

where ε_{j,k} is the energy eigenvalue of the j-th Kohn-Sham state at Bloch vector k and the state is given, as indicated above, by the expansion coefficients c_G^{j,k}.
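The algebraic step itself is a standard generalized Hermitian eigenvalue problem. As a minimal illustration, the sketch below solves such a problem with SciPy for random stand-in matrices; the actual LAPW Hamiltonian and overlap matrices would of course be built from the basis functions described above.

```python
import numpy as np
from scipy.linalg import eigh

# Minimal sketch of the algebraic step only: solve H c = eps S c for a random
# Hermitian H and positive-definite overlap S (stand-ins for the LAPW matrices).
n = 6
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2                    # Hermitian "Hamiltonian"
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = B.conj().T @ B + n * np.eye(n)          # Hermitian positive-definite "overlap"

eps, C = eigh(H, S)                         # generalized Hermitian eigenvalue problem
print(eps)                                  # band energies eps_{j,k} at one Bloch vector
# the columns of C hold the expansion coefficients c_G^{j,k} of the Kohn-Sham states
```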
The considered degree of relativistic physics differs for core and valence electrons. The strong localization of core electrons due to the singularity of the effective potential at the atomic nucleus is connected to large kinetic energy contributions, and thus a fully relativistic treatment is desirable and common. For the determination of the radial functions u_l^α and u̇_l^α the common approach is to make an approximation to the fully relativistic description. This may be the scalar-relativistic approximation (SRA) or similar approaches. The dominant effect neglected by these approximations is the spin-orbit coupling. As indicated above, the construction of the Hamiltonian matrix within such an approximation is trivial. Spin-orbit coupling can additionally be included, though this leads to a more complex Hamiltonian matrix setup or a second variation scheme, connected to increased computational demands. In the interstitial region it is reasonable and common to describe the valence electrons without considering relativistic effects.
Representation of the charge density and the potential
After calculating the Kohn-Sham eigenfunctions, the next step is to construct the electron charge density by occupying the lowest energy eigenstates up to the Fermi level with electrons. The Fermi level itself is determined in this process by keeping charge neutrality in the unit cell. The resulting charge density then has a region-specific form,

ρ(r) = Σ_G ρ_G exp(iG·r)   in the interstitial region,
ρ(r) = Σ_{l,m} ρ_{l,m}^α(r_α) Y_{l,m}(r̂_α)   in the MT sphere of atom α,

i.e., it is given as a plane-wave expansion in the interstitial region and as an expansion into radial functions times spherical harmonics in each MT sphere. The radial functions are hereby given numerically on a mesh.
The representation of the effective potential follows the same scheme. In its construction a common approach is to employ Weinert's method for solving the Poisson equation. It efficiently and accurately provides a solution of the Poisson equation without shape approximation for an arbitrary periodic charge density based on the concept of multipole potentials and the boundary value problem for a sphere.
Postprocessing and extracting results
Because they are based on the same theoretical framework, different DFT implementations offer access to very similar sets of material properties. However, the variations in the implementations result in differences in the ease of extracting certain quantities and also in differences in their interpretation. In the following, these circumstances are sketched for some examples.
The most basic quantity provided by DFT is the ground-state total energy of an investigated system. To avoid the calculation of derivatives of the eigenfunctions in its evaluation, the common implementation replaces the expectation value of the kinetic energy operator by the sum of the band energies of occupied Kohn-Sham states minus the energy due to the effective potential. The force exerted on an atom, which is given by the change of the total energy due to an infinitesimal displacement, has two major contributions. The first contribution is due to the displacement of the potential. It is known as Hellmann-Feynman force. The other, computationally more elaborate contribution, is due to the related change in the atom-position-dependent basis functions. It is often called Pulay force and requires a method-specific implementation. Beyond forces, similar method-specific implementations are also needed for further quantities derived from the total energy functional. For the LAPW method, formulations for the stress tensor and for phonons have been realized.
Independent of the actual size of an atom, evaluating atom-dependent quantities in LAPW is often interpreted as calculating the quantity in the respective MT sphere. This applies to quantities like charges at atoms, magnetic moments, or projections of the density of states or the band structure onto a certain orbital character at a given atom. Deviating interpretations of such quantities from experiments or other DFT implementations may lead to differences when comparing results. On a side note also some atom-specific LAPW inputs relate directly to the respective MT region. For example, in the DFT+U approach the Hubbard U only affects the MT sphere.
A strength of the LAPW approach is the inclusion of all electrons in the DFT calculation, which is crucial for the evaluation of certain quantities. One of which are hyperfine interaction parameters like electric field gradients whose calculation involves the evaluation of the curvature of the all-electron Coulomb potential near the nuclei. The prediction of such quantities with LAPW is very accurate.
Kohn-Sham DFT does not give direct access to all quantities one may be interested in. For example, most energy eigenvalues of the Kohn-Sham states are not directly related to the real interacting many-electron system. For the prediction of optical properties one therefore often uses DFT codes in combination with software implementing the GW approximation (GWA) to many-body perturbation theory and optionally the Bethe-Salpeter equation (BSE) to describe excitons. Such software has to be adapted to the representation used in the DFT implementation. Both the GWA and the BSE have been formulated in the LAPW context and several implementations of such tools are in use. In other postprocessing situations it may be useful to project Kohn-Sham states onto Wannier functions. For the LAPW method such projections have also been implemented and are in common use.
Variants and extensions of the LAPW method
APW: The augmented-plane-wave method is the predecessor of LAPW. It uses the radial solution to the spherically averaged potential for the augmentation in the MT spheres. The energy derivative of this radial function is not involved. This missing linearization implies that the augmentation has to be adapted to each Kohn-Sham state individually, i.e., it depends on the Bloch vector and the band index, which subsequently leads to a non-linear, energy-dependent eigenvalue problem. In comparison to LAPW this is a more complex problem to solve. A relativistic generalization of this approach, RAPW, has also been formulated.
Local orbitals extensions: The LAPW basis can be extended by local orbitals (LOs). These are additional basis functions having nonvanishing values only in a single MT sphere. They are composed of the radial functions u_l^α and u̇_l^α together with a third radial function tailored to describe use-case-specific physics. LOs have originally been proposed for the representation of semicore states. Other uses involve the representation of unoccupied states or the elimination of the linearization error for the valence states.
APW+lo: In the APW+lo method the augmentation in the MT spheres only consists of the function u_l^α. It is matched to the plane wave in the interstitial region only in value. As an alternative implementation of the linearization, the function u̇_l^α is included in the basis set as an additional local orbital. While the matching conditions result in an unphysical kink of the basis functions at the MT sphere boundaries, a careful consideration of the kink in the construction of the Hamiltonian matrix suppresses it in the Kohn-Sham eigenfunctions. In comparison to the classical LAPW method the APW+lo approach leads to a less stiff basis set. The outcome is a faster convergence of the DFT calculations with respect to the basis set size.
Soler-Williams formulation of LAPW: In the Soler-Williams formulation of LAPW the plane waves cover the whole unit cell. In the MT spheres the augmentation is implemented by replacing, up to the angular momentum cutoff, the plane waves by the functions u_l^α and u̇_l^α. This yields basis functions that are continuously differentiable also in the channels above the angular momentum cutoff. As a consequence the Soler-Williams approach has reduced angular momentum cutoff requirements in comparison to the classical LAPW formulation.
ELAPW: In the extended LAPW method pairs of local orbitals introducing the radial functions u_l^α and u̇_l^α at additional energy parameters are added to the LAPW basis. The energy parameters are chosen to systematically extend the energy region in which Kohn-Sham states are accurately described by the linearization in LAPW.
QAPW: In the quadratic APW method the augmentation in the MT spheres additionally includes the second energy derivative ü_l^α. The matching at the MT sphere boundaries is performed by enforcing continuity of the basis functions in value, slope, and curvature. This is similar to the super-linearized APW (SLAPW) method, in which radial functions and/or their derivatives at more than one energy parameter are used for the augmentation. In comparison to a pure LAPW basis these approaches can precisely represent Kohn-Sham orbitals in a broader energy window around the energy parameters. The drawback is that the stricter matching conditions lead to a stiffer basis set.
Lower-dimensional systems: The partitioning of the unit cell can be extended to explicitly include semi-infinite vacuum regions with their own augmentations of the plane waves. This enables efficient calculations for lower-dimensional systems such as surfaces and thin films. For the treatment of atomic chains an extension to one-dimensional setups has been formulated.
Software implementations
There are various software projects implementing the LAPW method and/or its variants. Examples for such codes are
Elk
Exciting
Flair
FLEUR
HiLAPW
Wien2k
References
Electronic structure methods
Computational chemistry
Computational physics
Condensed matter physics | Linearized augmented-plane-wave method | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,327 | [
"Quantum chemistry",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Computational physics",
"Theoretical chemistry",
"Electronic structure methods",
"Computational chemistry",
"Condensed matter physics",
"Matter"
] |
68,274,287 | https://en.wikipedia.org/wiki/Moessner%27s%20theorem | In number theory, Moessner's theorem or Moessner's magic
is related to an arithmetical algorithm to produce the infinite sequence of the n-th powers of the positive integers, 1^n, 2^n, 3^n, …, with n a positive integer, by recursively manipulating the sequence of integers algebraically. The algorithm was first published by Alfred Moessner in 1951; the first proof of its validity was given by Oskar Perron that same year.
For example, for n = 2, one can remove every even number, resulting in 1, 3, 5, 7, 9, …, and then add each odd number to the sum of all previous elements, providing the squares 1, 4, 9, 16, 25, ….
Construction
Write down every positive integer and remove every n-th element, with n a positive integer. Build a new sequence of partial sums with the remaining numbers. Continue by removing every (n − 1)-st element in the new sequence and producing a new sequence of partial sums. For the k-th sequence, remove every (n − k + 1)-st element and produce a new sequence of partial sums.
The procedure stops at the n-th sequence. The remaining sequence will correspond to the n-th powers of the positive integers, 1^n, 2^n, 3^n, ….
Example
The initial sequence is the sequence of positive integers,

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, …

For n = 4, we remove every fourth number from the sequence of integers and add up each element to the sum of the previous elements:

1, 2, 3, 5, 6, 7, 9, 10, 11, 13, 14, 15, …  →  1, 3, 6, 11, 17, 24, 33, 43, 54, 67, 81, 96, …

Now we remove every third element and continue to add up the partial sums:

1, 3, 11, 17, 33, 43, 67, 81, …  →  1, 4, 15, 32, 65, 108, 175, 256, …

Remove every second element and continue to add up the partial sums:

1, 15, 65, 175, …  →  1, 16, 81, 256, … ,

which recovers 1⁴, 2⁴, 3⁴, 4⁴, ….
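The construction is easy to verify programmatically; the short sketch below (with an invented helper name) removes every k-th element and accumulates partial sums exactly as described above, and prints the resulting n-th powers.

```python
def moessner(n, count=8):
    """Return the first `count` terms produced by Moessner's construction for exponent n."""
    seq = list(range(1, count * n + 1))        # enough integers for `count` surviving terms
    for step in range(n, 1, -1):               # remove every n-th, then (n-1)-st, ..., 2nd
        kept = [x for i, x in enumerate(seq, start=1) if i % step != 0]
        sums, total = [], 0
        for x in kept:
            total += x
            sums.append(total)
        seq = sums
    return seq[:count]

print(moessner(2))   # [1, 4, 9, 16, 25, 36, 49, 64]           -> squares
print(moessner(4))   # [1, 16, 81, 256, 625, 1296, 2401, 4096] -> fourth powers
```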
Variants
If the triangular numbers are removed instead, a similar procedure leads to the sequence of factorials
References
External links
Number theory | Moessner's theorem | [
"Mathematics"
] | 285 | [
"Mathematical theorems",
"Theorems in number theory",
"Mathematical problems",
"Number theory"
] |
78,417,866 | https://en.wikipedia.org/wiki/Chloronitramide%20anion | The chloronitramide anion, also known as chloro(nitro)azanide, is a recently (2024) identified chemical byproduct of the disinfectant chloramine. It is present in the tap water of about 113 million people in the United States of America in varying concentrations. Its toxicity has not yet been determined, although it may be removable with an activated carbon filter. The chloronitramide anion was first observed and determined to be a degradation byproduct of chloramine in the early 1980s. Its molecular formula and structure were disclosed in a paper published in November 2024.
Research
Early research
The chloronitramide anion was first detected as a UV absorbance interference during monitoring of chloramine and dichloramine in 1981. It was then shown to form during the decomposition of both chemicals. It was shown to likely be an anion in 1990. In the 1980s and 1990s methods of producing it in high concentrations were identified, and the molecule was shown through destruction to contain both nitrogen and chlorine. According to Julian Fairey, research on the compound slowed down in the mid-1990s after attempts to identify it were unsuccessful.
Identification of structure
The structure of the molecule was finally identified in 2024 using a combination of techniques, first identifying the molecular formula, then creating a candidate structure, then confirming it.
Ion chromatography, a method of separating ions and ionizable polar molecules, was used to separate the chloronitramide anion from the many salts present in water samples containing it, which otherwise made it difficult to use mass spectrometry; the water salinity was higher than that of saltwater.
Mass spectrometry was sufficient to determine the molecular mass of the ion, but it was too small for structure determination from the fragmentation pattern. The ion was found to have the molecular formula ClN2O2− (containing two oxygen atoms, two nitrogen atoms, one chlorine atom, and a single negative charge) by electrospray ionisation mass spectrometry. A candidate structure was confirmed by 15N NMR spectroscopy and infrared spectroscopy.
Future research
Research investigating the toxicity of the chloronitramide anion, as well as the reasons for its formation in high or low concentration in different places, is expected.
Formation
The identifying paper proposes that the chloronitramide anion is formed through the reaction of chloramine (or dichloramine, which forms in chloramine solution) with NO2+, one of its degradation products. The formation of NO2+ begins when dichloramine (NHCl2) is hydrolyzed to form nitroxyl (HNO), which then reacts with dissolved oxygen (O2) to form the unstable peroxynitrite (ONOOH). NO2+ is one of the several reactive nitrogen species formed when peroxynitrite decomposes. The chloronitramide formed in this way then dissociates, losing the hydrogen, to form the corresponding anion.
References
Inorganic compounds
Water treatment
Drinking water
Nitroamines
Chlorides
Anions | Chloronitramide anion | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 647 | [
"Matter",
"Chlorides",
"Inorganic compounds",
"Anions",
"Water treatment",
"Salts",
"Water pollution",
"Environmental engineering",
"Water technology",
"Ions"
] |
78,424,168 | https://en.wikipedia.org/wiki/Synergetic%20theory | Synergetic theory, also known as "synergy" and referred to by some as a pseudoscientific theory, was developed by René-Louis Vallée and first disseminated in 1971 with the publication of his book L'énergie électromagnétique matérielle et gravitationally (Material and Gravitational Electromagnetic Energy).
The magazine Science et Vie published several articles on the subject, and in 1975, it reported on an experiment that allegedly generated more energy than was input into the system. This sparked a long-standing controversy over the discovery of "free energy." The following year, La Recherche examined Vallée's book and, under his guidance, commissioned physicists to conduct a rigorous test to verify or refute the initial claims. The results were negative: no excess energy was observed.
A critical examination of the theory in question reveals a multitude of inconsistencies. It becomes evident that the author's work is based on his personal beliefs, formulating a set of disconnected equations. Vallée opposed modern physics, viewing the theoretical advancements of the 20th century as overly intricate and incompatible with reality.
Vallée was affiliated with the Alexandre Dufour Physics Circle and supported by free-energy enthusiasts until the early 2000s, which enabled him to achieve a certain degree of media presence before synergetics faded from public discourse.
History
René-Louis Vallée (1926–2007), a 1951 graduate of Supélec, was employed by Alsthom from 1953 to 1958 and subsequently by the French Atomic Energy Commission (CEA) until 1976. Between 1970 and 1974, he authored several books about his professional expertise.
Vallée, a student of Louis de Broglie during his studies, developed a lifelong interest in physics and an alternative theory aimed at unifying the four fundamental forces. This culminated in the publication of his book, L'énergie électromagnétique matérielle et gravitationnelle, in 1971.
As a member of the Alexandre Dufour Physics Circle, Vallée promoted his book and theory during a 1972 lecture. The main objective of the circle was to challenge the tenets of modern physics. Despite presenting his critiques of physics objectively in his written works, Vallée expressed clear anti-relativist sentiments in his speeches, rejecting what he described as a "blind worship of revealed relativity."
In the context of the initial oil crisis, Vallée advanced his theory as a radical departure from conventional wisdom, asserting that global capitalism had hastily embraced relativity with the fervor of a religious doctrine. He leveled accusations against several physicists, characterizing their views as those of members of a "discreet philosophical-scientific sect." In a letter to La Recherche, he referred to Richard Feynman as a "man in black."
Following his dismissal from the CEA in 1976, Vallée engaged in a public debate with Industry Minister André Giraud on the radio program Le téléphone sonne in 1979. He challenged the prevailing preference for nuclear power, advocating instead for his concept of "free energy."
In February 1974, Science et Vie commenced coverage of synergetic theory, subsequently revisiting the topic in January 1975. During this latter period, Vallée employed his theory to explain the failure of nuclear fusion experiments in tokamaks, asserting that conventional scientific paradigms lacked rational explanations.
In 1976, a support committee formed around Vallée, leading to the establishment of the SEPED (Society for the Study and Promotion of Diffuse Energy), which was operational between 1976 and 1984.
In the context of the 1978 French nuclear debate, anti-nuclear activist René Barjavel expressed support for Vallée's Lettre ouverte aux vivants et à ceux qui veulent le rester (An open letter to the living and those who wish to remain living), citing the latter's theory as a disruption to the habits of thought and work that had been based on relativity for approximately half a century. Barjavel further noted that Einstein and his contemporaries had asserted that space vehicles could never exceed the speed of light, a claim that he regarded as untenable.
Vallée expressed dismay at the apparent lack of interest in synergetics, suggesting that there were conspiracies at play, involving official science, global capitalism, and even the World Zionist Organization, which were preventing scientific progress. In a letter to Prime Minister Jacques Chirac dated May 21, 1986, he reiterated these accusations.
Vallée was unable to disseminate his theory through conventional channels; therefore, he turned to the Internet as a means of sharing his ideas.
By the year 2000, he had become a member of the New York Academy of Sciences and had written the preface for a scientific document in which he discussed his theory, which was subsequently renamed GUST (Grand Unified Synergetic Theory).
While Vallée's work receded from public consciousness, the concept of free energy endured. In 2003, the Swiss-based GIFNET (Global Institute for New Energy Technologies) was established, with its French director Jean-Luc Naudin espousing the tenets of synergetic theory. The institute has since ceased to maintain an online presence.
Definitions
In 1973, Vallée submitted the terms "synergy" and "synergetic theory" to the French Academy's Committee for Technical Terms. In his 1971 book, he introduced the concept of "synergetic potential", which he defined as follows:
The square of the propagation speed of electromagnetic waves in a vacuum filled with matter.
Science et Vie widely adopted the terms "synergetics" and "synergetic theory", and described the "synergetic generator" or "battery" as a device purportedly capable of harnessing the diffuse energy present in the universe.
Vallée discussed the potential for harnessing "diffuse electromagnetic energy that traverses the immensity of the Universe", which could be realized with a more profound comprehension of the characteristics of matter, particularly within "diffuse energetic environments."
This central notion of synergetic theory inspired the designation of the SEPED (Society for the Study and Promotion of Diffuse Energy), which was operational for less than a decade.
Synergy
René-Louis Vallée offered a critique of the complexity of special relativity but drew extensively upon its conceptual framework. He put forth a concept of "synergy", or total energy, which he expressed through the formula S = mc². This formula is identical in form to Albert Einstein's, but it is meant to incorporate not only the system's energy but also the diffuse energy of the medium surrounding it. However, his other theoretical propositions diverged from the prevailing consensus. For Vallée, space was Euclidean, time was universal, and the speed of light was not constant. He also proposed that gravitation is a force of electromagnetic origin, in contrast with Einstein's advances in the field, which deepened understanding but left the origin of gravitation without a definitive explanation.
Laws
Synergetics puts forth two hypotheses regarding the conversion between energy and matter. The first is the "law of materialization", which posits that energy can be transformed into matter. The second is the "upper limit value of the electric field", which suggests that matter can be transformed into energy when a field reaches a value of 39 × 10^15 V/m. These laws are, in fact, hypotheses. Vallée characterizes the law of materialization as "a fundamental law of nature that was missing from the known laws of physics."
He claimed to have discovered an "inexhaustible source of cosmic energy" available everywhere, asserting that matter is a localized form of this diffuse energy, explained through his upper limit field hypothesis.
Diffuse energy
Vallée put forth the theory that the universe is permeated by a vast, hitherto undiscovered form of "diffuse energy", which can account for all observable physical phenomena. According to this hypothesis, elementary particles represent distinct manifestations of this energy.
In Chapter 9 of his book, Vallée posited that "gravitation and cosmic radiation have a common origin in diffuse electromagnetic energy." He put forth the hypothesis that the speed of light is variable and dependent on the diffuse medium through which it propagates, deriving a formula for the "energy equivalence of gravitational fields:"
A simple calculation shows that a cubic metre of empty space on the Earth's surface contains 57,000 Megajoules less energy than a cubic metre of interstellar space.
This formula is, apart from one factor, the same as that for gravitational potential energy. The added constant is the energy density of the matter-free diffuse medium, but the theory provides no way to calculate it; Vallée merely gives orders of magnitude.
Experiments
In 1975, Belgian scientist Eric d'Hoker conducted the inaugural experiment based on synergetic theory in Mortsel. The results indicated that the generated energy was four times the input. The experiment entailed charging a capacitor with a battery and then discharging the current through a graphite rod. Vallée ascribed the surplus energy to a reaction in which a carbon-12 atom transformed into a radioactive boron-12 atom, which subsequently reverted to carbon via beta decay, thereby releasing additional energy.
A second experiment was conducted on January 23, 1976, at the Physics Faculty of Paris 7, by Francis Kovacs, to validate the aforementioned findings. The experiment was designed to confirm the energy surplus and convert it into usable electric current, using parameters provided by Vallée. A capacitor was used to discharge a current through a glass tube filled with powdered graphite, surrounded by a coil that recovered a secondary current, which was then visualized on an oscilloscope. Tests were conducted in three configurations: no magnetic field, a field aligned with the electric current, and a field opposed to it. In all cases, the results matched the predictions made by the Lenz-Faraday law, showing no "synergetic" effects.
Criticism
Vallée posited that synergetic theory could be used to harness limitless energy from any point in space using a simple, inexpensive device. In November 1975, Science et Vie published an article endorsing Vallée's theory based on a single experiment and critiqued the lack of interest from physicists.
In the wake of the 1976 verification, which yielded negative results, commentators drew parallels between synergetics and a contemporary iteration of perpetual motion, noting that both promised the generation of free energy from seemingly unlimited sources.
The individual who conducted the counter-experiment, Jean-Marc Lévy-Leblond, was highly critical of Vallée, who postulated a conspiracy against his theory. Lévy-Leblond argued that the principles of synergetics were not susceptible to refutation, as they were not formalized and predictive, and therefore not scientific. He described Vallée's theoretical framework as incomprehensible, likening it to the peculiar calligraphy of Saul Steinberg, composed of recognizable symbols but lacking an intelligible whole.
Vallée's apparent objective was to develop a comprehensive theoretical framework, which he referred to as a "theory of everything." He sought to portray synergetics as a "quantum and gravitational energy theory" that would restore objectivity to science by making it "accessible to the general public."
Nevertheless, this assertion of accessibility proved to be illusory upon examination of the text in question. Furthermore, Vallée himself never subjected his theories to empirical testing, thereby rendering them inherently unverifiable.
Legacy
Synergetic theory, which was promoted by free-energy advocates from the 1970s to the early 2000s, enjoyed a brief period of popularity between its coverage in Science et Vie and its definitive refutation in La Recherche. It has since been cited as an example of "alter-science" and of scientific imposture.
Moatti drew a parallel between Vallée and Maurice Allais, who developed an interest in physics at a relatively advanced age, published his first theory on the subject at the age of 86, and is known for challenging the prevailing theories of Newton and Einstein. Nevertheless, the term "synergetics" is more frequently linked with Nikola Tesla and his notion of free energy. In the early 20th century, Tesla sought to transmit electricity wirelessly and harness the energy of cosmic radiation. Despite the discovery of X-rays in 1895, Tesla rejected the concept of energy contained within matter. In 1931, he claimed to have constructed a "cosmic energy receiver" and used it to power a vehicle. Like Vallée, Tesla rejected overly theoretical science, dismissed the theory of relativity as false, and announced a "unified theory of gravitation" that explained this force simply and denied Einstein's concept of curved space.
René-Louis Vallée bibliography
Reference Work
Presentations and writings
Notes
References
Bibliography
Articles from the circle of friends of Vallée or SEPED
Pseudoscience
Energy (physics)
Conspiracy theories in France | Synergetic theory | [
"Physics",
"Mathematics"
] | 2,689 | [
"Energy (physics)",
"Wikipedia categories named after physical quantities",
"Quantity",
"Physical quantities"
] |
72,645,041 | https://en.wikipedia.org/wiki/Core-compact%20space | In general topology and related branches of mathematics, a core-compact topological space is a topological space whose partially ordered set of open subsets is a continuous poset. Equivalently, is core-compact if it is exponentiable in the category Top of topological spaces. Expanding the definition of an exponential object, this means that for any , the set of continuous functions has a topology such that function application is a unique continuous function from to , which is given by the Compact-open topology and is the most general way to define it.
Another equivalent concrete definition is that every neighborhood N of a point x contains a neighborhood U of x whose closure in N is compact. As a result, every (weakly) locally compact space is core-compact, and every Hausdorff (or more generally, sober) core-compact space is locally compact, so the definition is a slight weakening of the definition of a locally compact space in the non-Hausdorff case.
See also
Locally compact space
References
Further reading
Topology | Core-compact space | [
"Physics",
"Mathematics"
] | 199 | [
"Topology stubs",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
72,646,521 | https://en.wikipedia.org/wiki/Levitated%20optomechanics | Levitated optomechanics is a field of mesoscopic physics which deals with the mechanical motion of mesoscopic particles which are optically or electrically or magnetically levitated. Through the use of levitation, it is possible to decouple the particle's mechanical motion exceptionally well from the environment. This in turn enables the study of high-mass quantum physics, out-of-equilibrium- and nano-thermodynamics and provides the basis for precise sensing applications.
Motivation
In order to use mechanical oscillators in the regime of quantum physics or for sensing applications, low damping of the oscillator's motion and thus high quality factors are desirable. In nano and micromechanics, the Q-factor of a system is often limited by its suspension, which usually demands filigree structures. Nevertheless, the maximally achievable Q-factor usually correlates with the system's size, requiring large systems for achieving high Q-factors.
Particle levitation in external fields can alleviate this constraint. This is one of the reasons why the field of levitated optomechanics has become attractive for research on the foundations in physics and for high-precision applications.
Physical basics
The interaction between a dielectric particle with polarizability α and an electric field is given by the gradient force, which is proportional to α and to the gradient of the field intensity and pulls a high-index particle towards the intensity maximum. When a particle is trapped and optically levitated in the focus of a Gaussian laser beam, this force can be approximated to first order by a linear restoring force F ≈ −k x, i.e. a harmonic oscillator with frequency ω0 = √(k/m), where k is the effective trap stiffness and m is the particle's mass. Including passive damping, active external feedback and coupling to an external bath results in the Langevin equations of motion; for a single motional degree of freedom they take the form ẍ(t) + γ ẋ(t) + ω0² x(t) = (1/m) [F_fluct(t) + F_fb(t) + F_coupling(t)].
Here γ is the total damping rate, which usually has two dominant contributions: collisions with atoms or molecules of the background gas, and photon shot noise, which becomes dominant below pressures on the order of 10^−6 mbar.
The coupling term F_coupling makes it possible to model any coupling to an external heat bath.
The external feedback is usually used to cool and control the particle motion.
The approximation of a classical harmonic oscillator holds true until one reaches the regime of quantum mechanics, where the quantum harmonic oscillator is the superior approximation and the quantization of the energy levels becomes apparent. The QHO has a ground state of lowest energy where both position and velocity have a minimal variance, determined by the Heisenberg uncertainty principle.
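As a point of reference (an illustrative addition using the standard result for the quantum harmonic oscillator, with notation as introduced above): the ground-state position spread of a particle of mass m in a trap of frequency ω0 is x_zpf = √(ħ / (2 m ω0)). For a femtogram-scale particle in a trap with ω0/2π of order 100 kHz (assumed, illustrative values), this zero-point motion is of the order of picometres, far smaller than the micrometre-scale thermally driven motion at room temperature.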
Such quantum states are interesting starting conditions for preparing non-Gaussian quantum states, quantum enhanced sensing, matter-wave interferometry or the realization of entanglement in many-particle systems.
Methods of cooling
Parametric feedback cooling and cold damping
The idea of feedback cooling is to apply a position and/or velocity dependent force on the particle in a way which produces a negative feedback loop.
One way to achieve this is by adding a feedback force proportional to the particle's velocity. Since this mechanism provides damping, which cools down the mechanical motion without introducing additional fluctuations, it is referred to as "cold damping". The first experiment employing this type of cooling was done in 1977 by Arthur Ashkin, who received the 2018 Nobel Prize in Physics for his pioneering work on trapping with optical tweezers.
Instead of applying a linear feedback signal, one can also combine position and velocity, for example by taking their product, to obtain a signal at twice the frequency of the particle's oscillation. In this way the stiffness of the trap increases when the particle moves away from the trap centre and decreases when the particle is moving back.
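The effect of such a velocity-proportional (cold-damping) force can be illustrated with a minimal numerical sketch. The script below is illustrative only: the parameter values are arbitrary and the notation (gamma for the gas damping rate, g_fb for the feedback gain) is chosen here rather than taken from the literature. It integrates the one-dimensional Langevin equation above with the Euler-Maruyama method and shows that switching on the feedback reduces the average energy of the motion from roughly k_B·T to roughly k_B·T · γ/(γ + g_fb).

import numpy as np

# Illustrative parameters only (not from the article)
m      = 1e-18               # particle mass [kg]
omega0 = 2 * np.pi * 1e4     # trap frequency [rad/s]
gamma  = 2 * np.pi * 100.0   # gas damping rate [rad/s]
g_fb   = 2 * np.pi * 1e3     # cold-damping feedback rate [rad/s]
kBT    = 1.38e-23 * 300      # thermal energy of the background gas [J]
dt, steps = 1e-7, 200_000    # time step [s] and number of steps

def mean_energy(feedback_on, seed=0):
    """Euler-Maruyama integration of the Langevin equation; returns the
    average mechanical energy over the second half of the trajectory."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2 * gamma * kBT / m)   # thermal force noise strength
    x, v, energies = 0.0, 0.0, []
    for _ in range(steps):
        a = -omega0**2 * x - gamma * v
        if feedback_on:
            a -= g_fb * v                  # feedback force proportional to -velocity
        a += sigma * rng.normal() / np.sqrt(dt)
        v += a * dt
        x += v * dt
        energies.append(0.5 * m * (v**2 + omega0**2 * x**2))
    return np.mean(energies[steps // 2:])

print("mean energy without feedback [J]:", mean_energy(False))
print("mean energy with feedback    [J]:", mean_energy(True))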
Cavity-enhanced Sisyphus cooling
Coherent scattering cavity cooling
References
Mesoscopic physics
Quantum mechanics | Levitated optomechanics | [
"Physics",
"Materials_science"
] | 744 | [
"Condensed matter physics",
"Theoretical physics",
"Mesoscopic physics",
"Quantum mechanics"
] |
75,468,396 | https://en.wikipedia.org/wiki/Gravitational%20Aharonov-Bohm%20effect | In physics, the gravitational Aharonov-Bohm effect is a phenomenon involving the behavior of particles acting according to quantum mechanics while under the influence of a classical gravitational field. It is the gravitational analog of the well-known Aharonov–Bohm effect, which is about the quantum mechanical behavior of particles in a classical electromagnetic field.
Electric effect
There are many variants of the Aharonov-Bohm effect in electromagnetism. Here we review an electric version of the Aharonov-Bohm effect that is most similar to the gravitational effect which has been experimentally observed. This electric effect is caused by a charged particle (say, an electron) being in a superposition of traveling down two different paths. In both paths, the electric field that the electron sees is zero everywhere along the path, but the scalar electric potential that the electron sees is not the same for both paths.
In the above figure, the beamsplitter puts the electron in a superposition of taking the upper path and taking the lower path. In both paths, when the electron gets to the mirror, it is stopped and held there. During the time when the electron is held in place at a mirror, two electric charges, each with charge Q, are brought near the upper mirror in a symmetric manner such that the net electric field caused by the two charges at the upper mirror is zero. We assume that the lower mirror is far enough away from the upper mirror that the electric potential (and electric field) caused by the two charges is zero at the lower mirror. This creates an electric potential difference between the upper and lower mirrors equal to ΔV = 2Q/(4πε0 d) = Q/(2πε0 d), where d is the distance of the charges from the mirror and ε0 is the electric constant. The electron is held there for a time τ, after which the charges are moved away and the electron is allowed to continue moving along its path. Assuming that the time taken to move the two charges to and from the mirror is much smaller than τ, the time the electron spends at the mirror causes a phase shift equal to
Δφ = e ΔV τ / ħ,
where e is the elementary charge and ħ is the reduced Planck constant.
When the 2 paths of the interferometer are recombined, we see a different interference pattern depending on whether we brought the charges near the upper mirror to create a potential difference. This is surprising, because no matter whether we brought the charges near the upper mirror to create a potential difference, the electron always remains at a location where the electric field is zero (to be more precise, the wavefunction of the electron is only ever nonzero at locations where the electric field is 0).
This electric Aharonov-Bohm effect has not been experimentally observed, unlike the magnetic effect. It is not generally feasible to trap an electron at a "mirror" in the interferometer while the potential is turned on and off, which is necessary in this setup to ensure that the electron stays in a region where the field is 0 while the potential is varied. Proposals for experimentally observing the effect instead involve shielding the electron from any electric field by having it travel through a conducting cylinder while the potential is varied. In contrast, one experiment proposal for the gravitational Aharonov-Bohm effect actually does involve trapping atoms (which play an analogous role to electrons in the experiment proposal) and holding them in a region where the gravitational field is zero using optical lattices.
Gravitational effect
Just as there are many variants of the Aharonov-Bohm effect in electromagnetism, there are many variants of the gravitational effect. The simplest version of the gravitational effect is analogous to the electric effect above, with the electron replaced by a small test mass such as an atom, and the 2 charges that create an electric potential replaced by 2 masses that create a gravitational potential.
In the above figure, an atom passes through an atomic "beamsplitter" that puts the atom in a superposition of taking the upper and lower paths. The atoms are then reflected by atomic "mirrors" that cause them to recombine at the detector on the right, where an interference pattern is detected.
When the atom is at a "mirror", it is paused and held there while a potential is introduced. The potential is created by moving two massive objects, each with mass M, to the left and right sides of the upper mirror, a distance d away from the mirror. The masses are brought towards the upper mirror in a symmetric manner such that the gravitational field caused by the masses is zero at the upper mirror. We assume that the upper mirror is far enough away from the lower mirror that the masses create zero potential (and zero field) at the lower mirror, which means they create a gravitational potential difference of magnitude ΔΦ = 2GM/d between the upper and lower mirrors. Despite this gravitational potential difference, the gravitational field at the upper and lower mirrors is zero, and the atom is never in any position with a nonzero gravitational field. Still, a time τ spent at the mirrors with that potential difference causes a phase shift
Δφ = m ΔΦ τ / ħ = 2 G M m τ / (d ħ),
where m is the mass of the atom. This phase shift is detected by observing the interference pattern where the atom paths recombine, which will be different depending on whether the potential difference was applied.
Instead of these idealized paths for the atom that involve "mirrors" that pause the atom in its place while a potential is applied, the atom could be moved in those paths by an optical lattice. This would allow precise control over the positions of the atom and the amount of time spent in the gravitational potential.
The various electromagnetic versions of the Aharonov-Bohm effect can be described in a way that does not suggest any physical reality to the electromagnetic potentials and does not require any nonlocality, by treating the sources of the electromagnetic field and the electromagnetic field itself quantum mechanically, instead of treating the test charge (electron) quantum mechanically and the electromagnetic field and its sources classically. Without a theory of quantum gravity, we cannot appeal to a fully quantum treatment of the test mass (atom), the sources of the gravitational field, and the gravitational field itself in order to explain the gravitational Aharonov-Bohm effect in a fully local, gauge-independent manner. However, this effect can be explained in a local, gauge-independent manner by considering the gravitational time dilation experienced by the atom in the path with the nonzero potential, and taking into account that matter waves pick up a phase at the Compton frequency of the matter.
Experimental observation
In January 2022, a team led by Mark Kasevich announced that they had experimentally observed a gravitational Aharonov-Bohm effect with an experiment broadly similar to the one outlined above.
The source of the gravitational potential in their experiment was a single 1.25 kg tungsten mass. The test masses were rubidium-87 atoms. The tungsten mass was fixed, so the gravitational field caused by the tungsten mass was not zero everywhere along the paths of the 87Rb atoms. This means that the phase shift of the rubidium atoms between the 2 paths was not caused by a gravitational potential energy difference alone, but also by a difference in the gravitational force felt by the atoms in the 2 paths. By detecting a difference in the phase shift between when the tungsten mass is present and when it is not present, they observed a phase shift consistent with that predicted by the Aharonov-Bohm effect.
The "beamsplitters" and "mirrors" used to make the 87Rb atoms interfere are not solid-state components as would be the case with standard interferometers with light. Rather, they consisted of laser pulses that coherently transfer momentum between the atoms and photons.
References
Quantum mechanics | Gravitational Aharonov-Bohm effect | [
"Physics"
] | 1,544 | [
"Theoretical physics",
"Quantum mechanics"
] |
75,470,023 | https://en.wikipedia.org/wiki/Surface%20imperfections%20%28optics%29 | Surface imperfections on optical surfaces such as lenses or mirrors, can be caused during the manufacturing of the part or handling. These imperfections are part of the surface and cannot be removed by cleaning. Surface quality is characterized either by the American military standard notation (eg "60-40") or by specifying RMS (root mean square) roughness (eg "0.3 nm RMS"). American notation focuses on how visible surface defects are, and is a "cosmetic" specification. RMS notation is an objective measurable property of the surface. Tighter specifications increase the costs of fabricating optical elements but looser ones affect performance.
While surface imperfections can be labeled "cosmetic defects", they are not purely cosmetic. Optics for laser applications are more sensitive to surface quality as any imperfections can lead to laser-induced damage. In some cases, imperfections in optical elements will be directly imaged as defects in the image plane. Optical systems requiring high radiation intensity tend to be sensitive to any loss of power due to surface scattering caused by imperfections. Systems operating in the ultraviolet range require a more demanding standard as the shorter wavelength of the ultraviolet radiation is more sensitive to scattering.
There are many different standards used by optical element manufacturers, designers, and users which vary by geographic region and industry. For example, German manufacturers use ISO 10110, while the US military developed MIL-PRF-13830 and their long-standing use of it has made it the de facto global standard. It is not always possible to translate the scratch grade by one standard to another and sometimes the translation ends up being statistical (sampling defects to ensure that statistically, the percentage rejected elements will be similar in both methods).
Examining surface quality in terms of 'Scratch & Dig' is a specialized skill that takes time to develop. The practice is to compare the element to a standard master (reference). Automated systems now replace the human technician for flat optics, and more recently also for convex and concave lenses. In contrast, 'Roughness' characterization is done with more precise and easier-to-quantify methods.
Overview of types
The various standards separate two main categories for surface quality: scratch & dig and roughness.
A scratch is defined as a long and narrow defect that tears the surface of the glass or coating. There are standards that refer to the degree of visibility, which is the relative brightness of the scratch. In these cases, there is also a standard for the lighting conditions used for the test. Other standards classify scratches according to their dimensions. A dig is defined as a pit, a rough area, or a small crater on the surface of the glass (or any other optical material). All standards measure the physical size of the dig. Some standards include small defects within the glass that are visible through the surface, such as bubbles and inclusions.
Roughness, texture or optical finish is a defect that originates from the element's manufacturing. Texture is a periodical phenomenon with a high spatial frequency (or in other words, in small dimensions), which affects the entire surface and causes the scattering of incident light. A higher value of roughness means a rougher surface. The texture is especially important in cases where the polishing is carried out using new processing methods such as diamond turning, which leaves a residual periodical signature on the surface, affecting the quality of the obtained image or the level of scattering from the surface. The amount of scattered light is proportional to the square of the RMS of the roughness.
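For reference (an addition based on the standard total integrated scatter approximation, not on a formula from the sources cited here): for a surface whose RMS roughness σ is much smaller than the wavelength λ, the fraction of reflected light lost to scattering at normal incidence is approximately TIS ≈ (4πσ/λ)². This is one way to see why sub-nanometre roughness can still matter for ultraviolet and high-power laser optics.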
Scratch & Dig
Military standard MIL-PRF-13830B
This is the most common standard, stemming from a standard that was originally proposed by McLeod and Sherwood of Kodak back in 1945 and evolved in 1954 into the military standard MIL-O-13830A. It defines the quality of the surface by a pair of numbers, the first is a measure of the visibility of the scratch and the second is the size of the dig.
Scratch visibility grades are described by a series of arbitrary numbers: 10, 20, 40, 60, and 80 where the brightest scratches, the easiest to see using the naked eye, are grade 80, while the most difficult to detect are grade 10. A scratch on a tested part is compared with an industrial or military standard (master) on which there are scratches of different degrees of visibility and the comparison is made using the naked eye, under controlled lighting conditions. It is important to recognize that this is a subjective test and its results can vary between different people. The scratches' visibility largely depends on their shape, and contrary to popular belief, there is little correlation between the scratch's visibility grade and its width. One cannot measure the width of a scratch to determine its grade.
On the other hand, a dig's grade is a precise and measurable value. It is the diameter of the largest dig that is found on the tested surface, in units of hundredths of a millimeter. It is customary to use discrete grades of 5, 10, 20, 40, or 50, where of course the larger numbers describe larger imperfections.
There are many default definitions in the MIL standard. For example, the grade that must be required outside the clear aperture (the part of the lens to which the standard applies, also called "effective diameter" or CA) is, in the absence of another definition, 80-50. This is a very basic surface characterization and is easy to achieve. It describes a scratch whose brightness is less than that of a scratch at visibility grade 80 and a dig with a diameter of up to 0.5 mm (50 hundredths = 50/100=0.5). 60-40 is considered "commercial" quality, while for demanding laser applications 20-10 or even 10-5 are used. The scratches on a 10-5 or 20-10 surface can be hard to see, making the visibility standard more subjective. Other standards may work better when precision surfaces are required. Optical coating can change scratch visibility, so for example an element that passes 40-20 before coating can be worse than 60-40 after coating.
Accumulation and concentration rules regulate common situations in which there are multiple defects on the surface of an optical element, and clarify how they should be added up. For example, if one or more scratches are found with the maximum visibility allowed, to pass the test, the sum of the length of these scratches is limited to a quarter of the diameter of the element. The number of digs at the maximum permitted level is determined by dividing the measured clear aperture diameter (in millimeters) by 20, and rounding up. For example, for a clear aperture of 81 mm, 5 digs are allowed at the maximum level.
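The two counting rules above can be written down directly; the following Python snippet is an illustrative sketch (the function names are chosen here and are not part of the standard):

import math

def max_full_size_digs(clear_aperture_mm: float) -> int:
    # MIL-PRF-13830B: number of digs allowed at the maximum permitted level
    # is the clear aperture diameter in millimetres divided by 20, rounded up.
    return math.ceil(clear_aperture_mm / 20)

def max_total_scratch_length(element_diameter_mm: float) -> float:
    # Combined length of scratches at the maximum allowed visibility grade
    # is limited to one quarter of the element diameter.
    return element_diameter_mm / 4

print(max_full_size_digs(81))          # 5, matching the example above
print(max_total_scratch_length(81))    # 20.25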
Since the comparison master is only in possession of the US Army, several commercial masters have been developed that are intended to be compatible, but due to the complexity of the factors that make a scratch visible, these masters are not always compatible with the original and there is no way to match one set to another. For example, a visibility grade 10 scratch on one master can appear brighter than a visibility grade 60 scratch on another master. For this reason, it is recommended to also indicate on the drawing the type of master set to which it must be compared during the test.
Examples of such commercial comparison sets made of plastic or glass are Davidson Optronics, Brysen Optical, and Jenoptik Paddle – sold by ThorLabs and Edmund Optics.
ISO 10110-7
This standard is used in the USA, China, Japan, Russia, and all of Europe.
The notation as of 2007 is: 5/ N x A; C N' x A'; L N" x A"; E A''', where N and A represent the number of defects and the maximum size of the defect, N' and A' represent the number of imperfections on the coating and their maximum size, N'' and A'' represent the number of scratches allowed and their maximum size and A''' represent the maximum size of an edge chip (a defect on the rim of the optical element).
A scratch in this case is defined as a defect longer than 2 mm. Only the first part of the characterization, N x A, is mandatory. The rest of the details can be omitted. A and A' are given as the square root of the area of the defect and are indicated by discrete values from the series: 4, 2.5, 1.6, 1, 0.63, 0.4, 0.25.
In addition to the limits on the number of defects and their size, the total area of all imperfections must not exceed A × N². Long defects (scratches) are summed up by their width, independent of length. There is no limit on the number of edge chips, and the concentration of imperfections is limited by the rule that at most 20% of the defect allowance can be concentrated in an area of 5% of the clear aperture.
A fundamental advantage of ISO is a relatively simple translation between the percentage of light scattered from a surface and the characterization of its surface, according to the formula:
Scatter % = 4 × [(N × A²) + (N′ × A′²) + (N″ × A″ × Φ)] / (π × Φ²)
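For illustration, the formula can be evaluated with a small helper function. This is an added sketch, not part of the standard: the interpretation of Φ as the clear-aperture diameter (in millimetres, like the A values) is our reading of the symbol.

def iso_scatter_percent(N, A, Nc, Ac, Ns, As, phi):
    """Estimated scatter (%) from an ISO 10110-7 style 5/ specification.

    N,  A  -- number and size (mm) of general surface imperfections
    Nc, Ac -- number and size (mm) of coating imperfections
    Ns, As -- number and width (mm) of long scratches
    phi    -- clear-aperture diameter (mm); assumed meaning of the symbol
    """
    from math import pi
    return 4 * (N * A**2 + Nc * Ac**2 + Ns * As * phi) / (pi * phi**2)

# Example: the spec 5/2x0.40; L 3x0.010 over a 20 mm aperture quoted below
print(round(iso_scatter_percent(2, 0.40, 0, 0, 3, 0.010, 20.0), 4))  # about 0.003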
Unlike MIL-PRF-13830B, which is cheap and fast to use but suffers from inaccuracies, the dimensional standard of ISO 10110-7 is more accurate but takes longer to test and is therefore more expensive. The relatively long test time stems from the fact that testing according to this standard is carried out using a microscope, comparing the size of each defect to defects on a master; because of the large magnification needed, the field of view is small, requiring several measurements to map each optical element.
David Aikens, director of Optics and Electro-Optics Standards Council, presented a recommended conversion chart that preserves the level of quality control, or percent fall, in ISO scratch & dig testing versus the military standard. For example 5/2x0.40; L 3 x0.010 is a statistically-equivalent standard to 60-40 of the strict military standard, over a 20 mm opening.
The logical flaw of this dimensional standard is in defining a scratch according only to its width. For example, if a lens with a diameter of 100 mm has a requirement of L 1 x 0.025, a single scratch with a thickness of up to 25 microns is allowed, even if it covers the entire 100 mm diameter. However, if the manufacturer polishes the surface and removes the scratch from the central 95 millimeters of the lens, there will be two scratches each 2.5 mm in length and now the lens will fail the acceptance tests because the characterization allows only one scratch. The illogicality here is obvious: it is not acceptable to reject a component due to a process that improves its quality.
As of 2017, to support quick measurements intended for less sensitive surfaces, ISO 10110-7 also allows the definition of scratches according to their visibility, and the definition of digs according to their diameter, just like MIL-PRF-13830B, using the same grades, for example 60-40.
It is possible to extend the notation to also cover coating imperfections as well as edge chips, similarly to the dimensional standard: 5 / S - D; C S' - D'; E A''', where S and D are the definitions for scratches and digs, S' and D' for these defects on the coating, and A''' characterizes edge chips as defined above. As explained for the military standard, it is important to explicitly specify which master set the scratch brightness is to be compared to.
MIL-C-48497A and MIL-F-48616
These standards are almost as popular as MIL-PRF-13830B but they have become less popular with time.
These standards define scratches and digs according to their physical size and mark their grade with the letters: A, B, C, D, E, F, G (and H which is used only for digs). The letter A represents the narrowest scratch, which is 0.005 mm wide, and the smallest dig, which is 0.05 mm in diameter. On the other hand, the letter G represents a scratch that is 0.12 mm wide and a dig that is 0.7 mm in diameter. A microscope or magnifying glass is used for testing, or sometimes even just using the naked eye to compare to a master.
ANSI OP1.002
This American standard was first published in 2006. Just like in the MIL-PRF-13830B standard, ANSI OP1.002 defines digs according to their diameter.
ANSI OP1.002 also supports two separate methods for scratches: visibility and size.
The visibility method defines scratches according to their visibility and is identical in design and terminology to the MIL-PRF-13830B standard. Just like the military standard, it uses two numbers, the first for scratches and the second for digs, maintaining their meaning as in the military standard. Examples: 80-50, 60-40. This method takes advantage of the speed and low cost of the visual inspection and is used for elements with looser tolerances.
The dimensionality method for scratches is based on the MIL-C-48497A standard, which is considered easy to use and functional. The dimensional method uses two letters, the first for scratches and the second for digs. For example: A-A or E-E. This standard is intended for parts with tight surface quality tolerances, such as CCD cover glasses or demanding laser applications.
The OP1.002 standard allows using a microscope to compare with the master.
This standard allows a relatively easy translation between the desired scattering level and the surface quality, as mentioned above.
Roughness
US military standard MIL-STD-10A
This original standard was general in nature and was not intended for the characterization of polished optical surfaces per se. It used parameters that are not typically used for the characterization of optical elements, such as average roughness.
ASME B46.1-2002
This standard replaced MIL-STD-10A and defines more than forty different parameters, including RMS (root mean square) roughness, slope, skew, PSD (power spectral density, the most comprehensive characteristic), and more. It is a significant improvement because it allows the characterization of machined surfaces at different spatial frequencies, which is especially important in cases where the optics were produced using techniques that leave periodic marks on the surface, such as diamond turning. For most uses it is sufficient to specify the RMS roughness, but it is essential to state the spatial frequency range over which the value is calculated; without that range, the specification is meaningless.
ISO 10110-8 (2010)
This popular standard, similar to ASME B46.1, also defines the RMS of the surface over a specific length scale, PSD and more. It differs from the ASME specification by using symbols instead of words.
See also
Surface roughness
Scattering from rough surfaces
Quality control
Visual inspection
References
External links
Comparing various specifications for Scratch and Dig
Poster explaining drawing notations by ISO 10110 (2023 update)
Optics
Applied and interdisciplinary physics
Glass engineering and science | Surface imperfections (optics) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,150 | [
"Glass engineering and science",
"Applied and interdisciplinary physics",
"Optics",
"Materials science",
" molecular",
"Atomic",
" and optical physics"
] |
75,474,225 | https://en.wikipedia.org/wiki/Mitochondrial%20pyruvate%20carrier | The mitochondrial pyruvate carriers are composed of:
Mitochondrial pyruvate carrier 1
Mitochondrial pyruvate carrier 2
The pyruvate carriers are involved in mitochondrial metabolism, but it is possible to compensate for their loss of function. They have been studied for a role in cardiac stress adaptation.
References
Human genes
Transport proteins
Solute carrier family
Autosomal recessive disorders
Inborn errors of carbohydrate metabolism | Mitochondrial pyruvate carrier | [
"Chemistry"
] | 88 | [
"Inborn errors of carbohydrate metabolism",
"Carbohydrate metabolism",
"Protein stubs",
"Biochemistry stubs"
] |
75,474,255 | https://en.wikipedia.org/wiki/Azemiglitazone | Azemiglitazone (MSDC-0602) is a novel insulin sensitizer designed to retain the effect of thiazolidinediones on mitochondrial pyruvate carriers with limited PPAR-gamma binding. It is hoped to have fewer adverse effects than the thiazolidinediones and is being developed by Cirius Therapeutics for type 2 diabetes and non-alcoholic fatty liver disease. It is formulated as its potassium salt, azemiglitazone potassium (MSDC-0602K).
References
Experimental diabetes drugs
Experimental drugs developed for non-alcoholic fatty liver disease
Thiazolidinediones
3-Methoxyphenyl compounds
Ketones
Aromatic ethers | Azemiglitazone | [
"Chemistry"
] | 147 | [
"Pharmacology",
"Ketones",
"Functional groups",
"Medicinal chemistry stubs",
"Pharmacology stubs"
] |
75,474,478 | https://en.wikipedia.org/wiki/HU6 | HU6 is a prodrug of the mitochondrial uncoupler 2,4-dinitrophenol (DNP) that is intended to "minimize the rapid absorption and high peak blood concentrations of DNP to provide a wider therapeutic index and improve safety." Developed by Rivus Pharmaceuticals, the drug is tested to reduce weight and liver fat in humans with risk factors for metabolic dysfunction-associated steatohepatitis. In a phase 2a trial, the higher dosage levels reduced liver fat on average by more than 30 percent and also reduced body weight significantly. A phase 2b trial was launched in late 2023. A phase 2b trial in patients with metabolic dysfunction-associated steatohepatitis was subsequently initiated. Data from this study are expected to be reported in 2025. Additionally, a phase 2a study was performed in patients suffering from heart failure with preserved ejection fraction, a disease that is mediated by visceral fat and obesity. The study achieved the primary endpoint of weight loss, as well as a number of secondary endpoints.
References
Prodrugs
Uncouplers
Experimental drugs developed for non-alcoholic fatty liver disease
Experimental anti-obesity drugs
Nitrobenzenes
Nitroimidazoles | HU6 | [
"Chemistry"
] | 253 | [
"Chemicals in medicine",
"Cellular respiration",
"Prodrugs",
"Uncouplers"
] |
75,474,927 | https://en.wikipedia.org/wiki/HEC96719 | HEC96719 is a tricyclic farnesoid X receptor agonist developed for non-alcoholic steatohepatitis.
References
Farnesoid X receptor agonists
Experimental drugs developed for non-alcoholic fatty liver disease
Chloroarenes
Isoxazoles
Cyclopropanes
Carboxylic acids
Benzoxepines
Pyridines
Spiro compounds | HEC96719 | [
"Chemistry"
] | 82 | [
"Organic compounds",
"Carboxylic acids",
"Functional groups",
"Spiro compounds"
] |
75,475,368 | https://en.wikipedia.org/wiki/Tipelukast | Tipelukast (KCA 757 or MN-001) is a sulfidopeptide leukotriene receptor antagonist with suspected anti-inflammatory properties. It is developed by MediciniNova.
References
Receptor antagonists
Leukotriene antagonists
Experimental drugs developed for non-alcoholic fatty liver disease
Acetophenones
Phenols
Thioethers
Carboxylic acids | Tipelukast | [
"Chemistry"
] | 83 | [
"Receptor antagonists",
"Neurochemistry",
"Carboxylic acids",
"Functional groups"
] |
75,475,815 | https://en.wikipedia.org/wiki/Crinecerfont | Crinecerfont, sold under the brand name Crenessity, is a medication used for the treatment of congenital adrenal hyperplasia. It is a corticotropin-releasing factor type 1 receptor (CRF1R) antagonist developed to treat classic congenital adrenal hyperplasia due to 21-hydroxylase deficiency (21OHD). It is taken by mouth.
The most common side effects of crinecerfont in adults include fatigue, dizziness, and arthralgia (joint pain). For children, the most common side effects include headache, abdominal pain, and fatigue.
Crinecerfont was approved for medical use in the United States in December 2024. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication.
Medical uses
Crinecerfont is indicated as adjunctive treatment to glucocorticoid replacement to control androgens in people four years of age and older with classic congenital adrenal hyperplasia.
Adverse effects
The US Food and Drug Administration prescription label for crinecerfont has a warning for acute adrenal insufficiency or adrenal crisis.
History
Crinecerfont's approval is based on two randomized, double-blind, placebo-controlled trials in 182 adults and 103 children with classic congenital adrenal hyperplasia. In the first trial, 122 adults received crinecerfont twice daily and 60 received placebo twice daily for 24 weeks. After the first four weeks of the trial, the glucocorticoid dose was reduced to replacement levels, then adjusted based on levels of androstenedione, an androgen hormone. The primary measure of efficacy was the change from baseline in the total glucocorticoid daily dose while maintaining androstenedione control at the end of the trial. The group that received crinecerfont reduced their daily glucocorticoid dose by 27% while maintaining control of androstenedione levels, compared to a 10% daily glucocorticoid dose reduction in the group that received placebo.
In the second trial, 69 children received crinecerfont twice daily and 34 received placebo twice daily for 28 weeks. The primary measure of efficacy was the change from baseline in serum androstenedione at week four. The group that received crinecerfont experienced a statistically significant reduction from baseline in serum androstenedione, compared to an average increase from baseline in the placebo group. At the end of the trial, children assigned to crinecerfont were able to reduce their daily glucocorticoid dose by 18% while maintaining control of androstenedione levels compared to an almost 6% daily glucocorticoid dose increase in children assigned to placebo.
The US Food and Drug Administration (FDA) granted the application for crinecerfont fast track, breakthrough therapy, orphan drug, and priority review designations. The FDA granted the approval of Crenessity to Neurocrine Biosciences, Inc.
Society and culture
Legal status
Crinecerfont was approved for medical use in the United States in December 2024.
Names
Crinecerfont is the international nonproprietary name.
Crinecerfont is sold under the brand name Crenessity.
References
Further reading
External links
Receptor antagonists
Fluoroarenes
Chloroarenes
Alkyne derivatives
Cyclopropyl compounds
Thiazoles
Tertiary amines | Crinecerfont | [
"Chemistry"
] | 743 | [
"Neurochemistry",
"Receptor antagonists"
] |
75,475,887 | https://en.wikipedia.org/wiki/Onfasprodil | Onfasprodil (MIJ821) is a drug delivered via intravenous infusion that is designed as a fast-acting treatment for treatment-resistant depression. It works as a negative allosteric modulator of the NMDA receptor subunit 2B (NR2B). The drug is developed by Novartis.
References
Experimental antidepressants
NMDA receptor antagonists
Drugs developed by Novartis
2-Fluorophenyl compounds
Pyridines
Negative allosteric modulators | Onfasprodil | [
"Chemistry"
] | 107 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,476,697 | https://en.wikipedia.org/wiki/1%2C4-Butanedithiol | 1,4-Butanedithiol is an organosulfur compound with the formula . It is a malodorous, colorless liquid that is highly soluble in organic solvents. The compound has found applications in biodegradable polymers.
Reactions
Alkylation with geminal dihalides gives 1,3-dithiepanes. Oxidation gives the cyclic disulfide 1,2-dithiane.
It forms self-assembled monolayers on gold.
It is also used in polyadditions, together with 1,4-butanediol, to form sulfur-containing polyesters, and with diisocyanates to form polyurethanes. Several of these polymers are considered biodegradable, and many of their components are sourced from non-petroleum oils.
Related compounds
Dithiothreitol
1,3-Propanedithiol
References
Reagents for organic chemistry
Thiols
Foul-smelling chemicals | 1,4-Butanedithiol | [
"Chemistry"
] | 192 | [
"Organic compounds",
"Thiols",
"Reagents for organic chemistry"
] |
75,478,745 | https://en.wikipedia.org/wiki/Danicamtiv | Danicamtiv is a cardiac myosin activator developed by Bristol Myers Squibb to treat dilated cardiomyopathy.
References
Drugs developed by Bristol Myers Squibb
Pyrazoles
Sulfones
Piperidines
Ureas
Isoxazoles
Fluorine compounds | Danicamtiv | [
"Chemistry"
] | 60 | [
"Organic compounds",
"Sulfones",
"Functional groups",
"Ureas"
] |
75,479,302 | https://en.wikipedia.org/wiki/PL-3994 | PL-3994 is an experimental bronchodilator that acts as an agonist of the natriuretic peptide receptor A. It is developed by Palatin Technologies.
References
Receptor agonists
Bronchodilators
Experimental drugs | PL-3994 | [
"Chemistry"
] | 51 | [
"Receptor agonists",
"Neurochemistry"
] |
75,480,170 | https://en.wikipedia.org/wiki/Evazarsen%20sodium | Evazarsen sodium (IONIS-AGT-LRx) is an antisense RNA designed to inhibit angiotensinogen as an alternative to other mechanisms to target the renin–angiotensin–aldosterone system.
References
Antisense RNA
Agents acting on the renin-angiotensin system | Evazarsen sodium | [
"Chemistry"
] | 70 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
74,056,025 | https://en.wikipedia.org/wiki/Four%20Core%20Genotypes%20mouse%20model | Four Core Genotypes (FCG) mice are laboratory mice produced by genetic engineering that allow biomedical researchers to determine if a sex difference in phenotype is caused by effects of gonadal hormones or sex chromosome genes. The four genotypes include XX and XY mice with ovaries, and XX and XY mice with testes. The comparison of XX and XY mice with the same type of gonad reveals sex differences in phenotypes that are caused by sex chromosome genes. The comparison of mice with different gonads but the same sex chromosomes reveals sex differences in phenotypes that are caused by gonadal hormones.
Development
The FCG model was created by Paul Burgoyne and Robin Lovell-Badge at the National Institute for Medical Research, London (now Francis Crick Institute). The model involves deleting the testis-determining gene Sry from the Y chromosome, and inserting Sry onto chromosome 3. Therefore the sex chromosomes no longer determine the type of gonad, so that XX and XY mice can have the same type of gonad and gonadal hormones.
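The breeding scheme behind the model can be sketched schematically. The snippet below is an illustration written for this text (it assumes, as in the published model, that the father is an XY− mouse carrying an autosomal Sry transgene in hemizygous form, so the transgene segregates independently of the sex chromosomes), showing how a single cross with a wild-type XX mother yields the four core genotypes:

from itertools import product

# Father's gametes: sex chromosome X or Y- (Y lacking Sry), with or without
# the autosomal Sry transgene (hemizygous, so transmitted to half the gametes).
father_gametes = product(["X", "Y-"], [True, False])
mother_contribution = "X"   # wild-type XX mother, no transgene

for sex_chromosome, has_sry in father_gametes:
    karyotype = mother_contribution + sex_chromosome      # "XX" or "XY-"
    gonads = "testes" if has_sry else "ovaries"           # gonad type follows Sry
    label = karyotype + (" Sry" if has_sry else "")
    print(f"{label:8s} -> {gonads}")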
Significance
The FCG model has been used to discover that the XX and XY animals respond differently in models of human physiology and disease, including autoimmunity, metabolism, cardiovascular disease, cancer, Alzheimer’s disease, and neural and behavioral processes. These findings imply that some sex chromosome genes may protect from disease, rationalizing the search for therapies that enhance such protective factors.
References
Genetic engineering
Sex | Four Core Genotypes mouse model | [
"Chemistry",
"Engineering",
"Biology"
] | 314 | [
"Biological engineering",
"Genetic engineering",
"Sex",
"Molecular biology"
] |
74,059,230 | https://en.wikipedia.org/wiki/Kramkov%27s%20optional%20decomposition%20theorem | In probability theory, Kramkov's optional decomposition theorem (or just optional decomposition theorem) is a mathematical theorem on the decomposition of a positive supermartingale with respect to a family of equivalent martingale measures into the form
where is an adapted (or optional) process.
The theorem is of particular interest for financial mathematics, where the interpretation is: V is the wealth process of a trader, H · S is the gain/loss from trading, and C the consumption process.
The theorem was proven in 1994 by Russian mathematician Dmitry Kramkov. The theorem is named after the Doob-Meyer decomposition but, unlike there, the process C is no longer predictable but only adapted (which, under the conditions of the statement, is the same as dealing with an optional process).
Kramkov's optional decomposition theorem
Let be a filtered probability space with the filtration satisfying the usual conditions.
A -dimensional process is locally bounded if there exists a sequence of stopping times such that almost surely if and for and .
Statement
Let be -dimensional càdlàg (or RCLL) process that is locally bounded. Let be the space of equivalent local martingale measures for and without loss of generality let us assume .
Let be a positive stochastic process; then is a -supermartingale for each if and only if there exists an -integrable and predictable process and an adapted increasing process such that
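In conventional notation (a sketch, assuming the usual setup: $S$ the price process, $H$ the predictable $S$-integrable integrand, and $C$ the adapted increasing consumption process with $C_0 = 0$), the decomposition reads:

```latex
V_t \;=\; V_0 \;+\; (H \cdot S)_t \;-\; C_t,
\qquad (H \cdot S)_t \;=\; \int_0^t H_u \,\mathrm{d}S_u .
```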
Commentary
The statement is still true under change of measure to an equivalent measure.
References
Probability theorems | Kramkov's optional decomposition theorem | [
"Mathematics"
] | 297 | [
"Theorems in probability theory",
"Mathematical theorems",
"Mathematical problems"
] |
66,843,966 | https://en.wikipedia.org/wiki/Nissan%20EM%20motor | Nissan EM is a brand of electric motors by Nissan. The first EM motor, the EM61, debuted in 2010 as part of the first-generation Nissan Leaf. The EM series of motors have since been used in various hybrid and all-electric Nissan vehicles.
EM61
The EM61 made its debut in 2010. It was used only in the first-generation Nissan Leaf (ZE0, 2010–2012). The EM61 generates 280 N·m of peak torque and has a maximum speed of 10,390 rpm.
EM57
The EM57 was first released with the 'AZE0' Nissan Leaf refresh in 2013. This motor has a smaller footprint compared to the EM61, allowing for 11.7 kg of weight savings in the inverter/motor package. The motor also trades some peak torque for a more efficient power range. It peaks at 250 N·m of torque and has a maximum speed of 10,500 rpm.
It is used in the following electric vehicles:
Nissan Leaf (AZE0 2013–2017)
Nissan e-NV200 (2014–present)
Nissan Leaf (ZE1 40kWh, 2018–present)
Nissan Leaf (ZE1 e+ 62kWh, 2019–present)
It is also used in the following hybrids:
Nissan Note e-Power (2017–2020)
Nissan Serena e-Power (2018–present)
Nissan Kicks e-Power (2020–present)
EM57 refresh
In 2018, the EM57 motor received an update with the introduction of the ZE1 Nissan LEAF. Depending on which inverter was mounted on the motor, power was increased to 110 kW (320 N·m), and on the e+ model it was further raised to 160 kW (340 N·m). The rpm range was also increased to 11,330 on the e+ LEAF. The motor received three tweaks:
Slight reduction of permanent magnet material
L-shaped coolant inlet
Minor casting tweak to the front and rear covers
EM47
The EM47 motor was released in 2020 with the refreshed Nissan Note. It is used only in Nissan's e-POWER lineup. It is matched with an inverter that has a 40% size reduction and a 30% weight reduction. The EM47 has a maximum speed of 10,500 rpm and produces 254 N·m of torque.
It is used in the following hybrids:
Nissan Note e-Power (2020–present)
Nissan Kicks e-Power (2022–present; Thailand)
References
Electric motors
EM | Nissan EM motor | [
"Technology",
"Engineering"
] | 513 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
66,846,197 | https://en.wikipedia.org/wiki/Random%20graph%20theory%20of%20gelation | Random graph theory of gelation is a mathematical theory for sol–gel processes. The theory is a collection of results that generalise the Flory–Stockmayer theory, and allow identification of the gel point, gel fraction, size distribution of polymers, molar mass distribution and other characteristics for a set of many polymerising monomers carrying arbitrary numbers and types of reactive functional groups.
The theory builds upon the notion of the random graph, introduced by mathematicians Paul Erdős and Alfréd Rényi, and independently by Edgar Gilbert in the late 1950s, as well as on the generalisation of this concept known as the random graph with a fixed degree sequence. The theory was originally developed to explain step-growth polymerisation, and adaptations to other types of polymerisation now exist. Along with providing theoretical results, the theory is also constructive. It indicates that the graph-like structures resulting from polymerisation can be sampled with an algorithm using the configuration model, which makes these structures available for further examination with computer experiments.
Premises and degree distribution
At a given point of time, degree distribution , is the probability that a randomly chosen monomer has connected neighbours. The central idea of the random graph theory of gelation is that a cross-linked or branched polymer can be studied separately at two levels: 1) monomer reaction kinetics that predicts and 2) random graph with a given degree distribution. The advantage of such a decoupling is that the approach allows one to study the monomer kinetics with relatively simple rate equations, and then deduce the degree distribution serving as input for a random graph model. In several cases the aforementioned rate equations have a known analytical solution.
One type of functional groups
In the case of step-growth polymerisation of monomers carrying functional groups of the same type (so-called polymerisation) the degree distribution is given by: where is bond conversion, is the average functionality, and are the initial fractions of monomers of functionality . In the latter expression, a unit reaction rate is assumed without loss of generality. According to the theory, the system is in the gel state when , where the gelation conversion is . Analytical expressions for the average molecular weight and the molar mass distribution are known too. When more complex reaction kinetics are involved, for example chemical substitution, side reactions or degradation, one may still apply the theory by computing using numerical integration. In which case, signifies that the system is in the gel state at time t (or in the sol state when the inequality sign is flipped).
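As an illustrative sketch (not taken from the article), the decoupling described above can be checked numerically: build the binomial-mixture degree distribution implied by independent reaction of functional groups at conversion p, then test for a gel with the standard Molloy–Reed-type moment criterion. The Flory–Stockmayer-style gel point used for comparison follows from the factorial moments of a hypothetical functionality distribution, here a 50/50 blend of 2- and 3-functional monomers.

```python
import numpy as np
from math import comb

def degree_distribution(functionality_fracs, p):
    """Degree distribution P(k) at bond conversion p, for monomers whose
    initial functionalities are distributed as functionality_fracs
    (index i = fraction of monomers carrying i functional groups).
    Each group is assumed to have reacted independently with probability p."""
    f_max = len(functionality_fracs) - 1
    P = np.zeros(f_max + 1)
    for f, frac in enumerate(functionality_fracs):
        for k in range(f + 1):
            P[k] += frac * comb(f, k) * p**k * (1 - p)**(f - k)
    return P

def is_gel(P):
    """Molloy-Reed-type criterion: a giant (gel) component exists
    when sum_k k (k - 2) P(k) > 0."""
    k = np.arange(len(P))
    return float(np.sum(k * (k - 2) * P)) > 0.0

# Hypothetical example: 50% 2-functional and 50% 3-functional monomers.
fracs = [0.0, 0.0, 0.5, 0.5]                      # indexed by functionality
mu1 = sum(i * x for i, x in enumerate(fracs))     # first moment
mu2f = sum(i * (i - 1) * x for i, x in enumerate(fracs))  # factorial 2nd moment
p_gel = mu1 / mu2f                                # predicted gelation conversion
print(f"predicted gel point p_c = {p_gel:.3f}")
print("below p_c:", is_gel(degree_distribution(fracs, 0.9 * p_gel)))
print("above p_c:", is_gel(degree_distribution(fracs, 1.1 * p_gel)))
```

For this blend the predicted gel point is about p_c = 0.625, and the moment criterion changes sign on either side of it, consistent with the sol/gel distinction described above.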
Two types of functional groups
When monomers with two types of functional groups A and B undergo step-growth polymerisation by virtue of a reaction between A and B groups, similar analytical results are known. See the table on the right for several examples. In this case, is the fraction of initial monomers with groups A and groups B. Suppose that A is the group that is depleted first. Random graph theory states that gelation takes place when , where the gelation conversion is and . Molecular size distribution, the molecular weight averages, and the distribution of gyration radii have known formal analytical expressions. When the degree distribution , giving the fraction of monomers in the network with neighbours connected via A groups and connected via B groups at time , is solved numerically, the gel state is detected when , where and .
Generalisations
Known generalisations include monomers with an arbitrary number of functional group types, crosslinking polymerisation, and complex reaction networks.
References
Polymerization reactions
Polymer chemistry
Graph theory | Random graph theory of gelation | [
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 709 | [
"Discrete mathematics",
"Graph theory",
"Materials science",
"Combinatorics",
"Mathematical relations",
"Polymer chemistry",
"Polymerization reactions"
] |
69,605,663 | https://en.wikipedia.org/wiki/Zeldovich%E2%80%93Taylor%20flow | Zeldovich–Taylor flow (also known as Zeldovich–Taylor expansion wave) is the fluid motion of gaseous detonation products behind a Chapman–Jouguet detonation wave. The flow was described independently by Yakov Zeldovich in 1942 and G. I. Taylor in 1950, although Taylor had carried out the work in 1941, when it was circulated within the British Ministry of Home Security. Since naturally occurring detonation waves are in general Chapman–Jouguet detonation waves, the solution becomes very useful in describing real-life detonation waves.
Mathematical description
Consider a spherically outgoing Chapman–Jouguet detonation wave propagating with a constant velocity . By definition, immediately behind the detonation wave, the gas velocity is equal to the local sound speed with respect to the wave. Let be the radial velocity of the gas behind the wave, in a fixed frame. The detonation is ignited at at . For , the gas velocity must be zero at the center and should take the value at the detonation location . The fluid motion is governed by the inviscid Euler equations
where is the density, is the pressure and is the entropy. The last equation implies that the flow is isentropic and hence we can write .
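A sketch of these governing equations in their standard spherically symmetric form (an assumption using conventional notation: $r$ the radius, $v$ the radial velocity, $\rho$ the density, $p$ the pressure, $s$ the entropy):

```latex
\frac{\partial \rho}{\partial t} + \frac{1}{r^{2}}\frac{\partial}{\partial r}\!\left(r^{2}\rho v\right) = 0, \qquad
\frac{\partial v}{\partial t} + v\,\frac{\partial v}{\partial r} = -\frac{1}{\rho}\frac{\partial p}{\partial r}, \qquad
\frac{\partial s}{\partial t} + v\,\frac{\partial s}{\partial r} = 0 .
```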
Since there are no length or time scales involved in the problem, one may look for a self-similar solution of the form , where . The first two equations then become
where prime denotes differentiation with respect to . We can eliminate between the two equations to obtain an equation that contains only and . Because of the isentropic condition, we can express , that is to say, we can replace with . This leads to
For polytropic gases with constant specific heats, we have . The above set of equations cannot be solved analytically, but has to be integrated numerically. The solution has to be found for the range subjected to the condition at
The function is found to monotonically decrease from its value to zero at a finite value of , where a weak discontinuity (that is, the function is continuous, but its derivatives may not be) exists. The region between the detonation front and the trailing weak discontinuity is the rarefaction (or expansion) flow. Interior to the weak discontinuity everywhere.
Location of the weak discontinuity (Mach wave)
From the second equation described above, it follows that when , . More precisely, as , that equation can be approximated as
As , and if decreases as . The left hand side of the above equation can become positive infinity only if . Thus, when decreases to the value , the gas comes to rest (Here is the sound speed corresponding to ). Thus, the rarefaction motion occurs for and there is no fluid motion for .
Behavior near the weak discontinuity
Rewrite the second equation as
In the neighborhood of the weak discontinuity, the quantities to the first order (such as ) reduces the above equation to
At this point, it is worth mentioning that in general, disturbances in gases are propagated with respect to the gas at the local sound speed. In other words, in the fixed frame, the disturbances are propagated at the speed (the other possibility is , although it is of no interest here). If the gas is at rest , then the disturbance speed is . This is just normal sound-wave propagation. If, however, is non-zero but a small quantity, then one finds the correction for the disturbance propagation speed as obtained using a Taylor series expansion, where is the Landau derivative (for an ideal gas, , where is the specific heat ratio). This means that the above equation can be written as
whose solution is
where is a constant. This determines implicitly in the neighborhood of the weak discontinuity where is small. This equation shows that at , , , but all higher-order derivatives are discontinuous. In the above equation, subtract from the left-hand side and from the right-hand side to obtain
which implies that if is a small quantity. It can be shown that the relation not only holds for small , but throughout the rarefaction wave.
Behavior near the detonation front
First let us show that the relation is not only valid near the weak discontinuity, but throughout the region. If this inequality is not maintained, then there must be a point where between the weak discontinuity and the detonation front. The second governing equation implies that at this point must be infinite or, . Let us obtain by taking the second derivative of the governing equation. In the resulting equation, impose the condition to obtain . This implies that reaches a maximum at this point, which in turn implies that cannot exist for greater than the maximum point considered, since otherwise would be multi-valued. The maximum point can at most correspond to the outer boundary (detonation front). This means that can vanish only on the boundary; since it has already been shown that is positive near the weak discontinuity, is positive everywhere in the region except the boundaries, where it can vanish.
Note that near the detonation front, we must satisfy the condition . The value evaluated at for the function , i.e., is nothing but the velocity of the detonation front with respect to the gas velocity behind it. For a detonation front, the condition must always be met, with the equality sign representing Chapman–Jouguet detonations and the inequalities representing over-driven detonations. The analysis describing the point must correspond to the detonation front.
See also
Taylor–von Neumann–Sedov blast wave
Guderley–Landau–Stanyukovich problem
References
Flow regimes
Fluid dynamics
Combustion
Hyperbolic partial differential equations | Zeldovich–Taylor flow | [
"Chemistry",
"Engineering"
] | 1,181 | [
"Chemical engineering",
"Combustion",
"Flow regimes",
"Piping",
"Fluid dynamics"
] |
69,607,043 | https://en.wikipedia.org/wiki/Guderley%E2%80%93Landau%E2%80%93Stanyukovich%20problem | Guderley–Landau–Stanyukovich problem describes the time evolution of converging shock waves. The problem was discussed by G. Guderley in 1942 and independently by Lev Landau and K. P. Stanyukovich in 1944, where the latter authors' analysis was published in 1955.
Mathematical description
Consider a spherically converging shock wave that was initiated by some means at a radial location and directed towards the center. As the shock wave travels towards the origin, its strength increases, since the shock wave compresses less and less mass as it propagates. The shock wave location thus varies with time. The self-similar solution to be described corresponds to the region , that is to say, the shock wave has travelled far enough to forget about the initial condition.
Since the shock wave in the self-similar region is strong, the pressure behind the wave is very large in comparison with the pressure ahead of the wave . According to Rankine–Hugoniot conditions, for strong waves, although , , where represents gas density; in other words, the density jump across the shock wave is finite. For the analysis, one can thus assume and , which in turn removes the velocity scale by setting since .
At this point, it is worth noting that the analogous problem in which a strong shock wave propagating outwards is known to be described by the Taylor–von Neumann–Sedov blast wave. The description for Taylor–von Neumann–Sedov blast wave utilizes and the total energy content of the flow to develop a self-similar solution. Unlike this problem, the imploding shock wave is not self-similar throughout the entire region (the flow field near depends on the manner in which the shock wave is generated) and thus the Guderley–Landau–Stanyukovich problem attempts to describe in a self-similar manner, the flow field only for ; in this self-similar region, energy is not constant and in fact, will be shown to decrease with time (the total energy of the entire region is still constant). Since the self-similar region is small in comparison with the initial size of the shock wave region, only a small fraction of the total energy is accumulated in the self-similar region. The problem thus contains no length scale to use dimensional arguments to find out the self-similar description i.e., the dependence of on cannot be determined by dimensional arguments alone. The problems of these kind are described by the self-similar solution of the second kind.
For convenience, measure the time such that the converging shock wave reaches the origin at time . For , the converging shock approaches the origin and for , the reflected shock wave emerges from the origin. The location of shock wave is assumed to be described by the function
where is the similarity index and is a constant. The reflected shock emerges with the same similarity index. The value of is determined from the condition that a self-similar solution exists, whereas the constant cannot be described from the self-similar analysis; the constant contains information from the region and therefore can be determined only when the entire region of the flow is solved. The dimension of will be found only after solving for . For Taylor–von Neumann–Sedov blast wave, dimensional arguments can be used to obtain
The shock-wave velocity is given by
According to Rankine–Hugoniot conditions the gas velocity , pressure and density immediately behind the strong shock front, for an ideal gas are given by
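A sketch of these relations in conventional notation (an assumption, not quoted from the article: $R(t)$ the shock radius, $A$ the constant, $\alpha$ the similarity index, $\dot R$ the shock speed, $\gamma$ the specific-heat ratio, subscript 0 for the undisturbed gas ahead of the shock and subscript 1 for conditions immediately behind it):

```latex
R(t) = A\,(-t)^{\alpha}, \qquad \dot R = \frac{\alpha R}{t};
\qquad
v_{1} = \frac{2}{\gamma+1}\,\dot R, \quad
\rho_{1} = \frac{\gamma+1}{\gamma-1}\,\rho_{0}, \quad
p_{1} = \frac{2}{\gamma+1}\,\rho_{0}\,\dot R^{2}.
```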
These will serve as the boundary conditions for the flow behind the shock front.
Self-similar solution
The governing equations are
where is the density, is the pressure, is the entropy and is the radial velocity. In place of the pressure , we can use the sound speed using the relation .
To obtain the self-similar equations, we introduce
Note that since both and are negative, . Formally the solution has to be found for the range . The boundary conditions at are given by
The boundary conditions at can be derived from the observation at the time of collapse , wherein becomes infinite. At the moment of collapse, the flow variables at any distance from the origin must be finite, that is to say, and must be finite for . This is possible only if
Substituting the self-similar variables into the governing equations lead to
From here, we can easily solve for and (or, ) to find two equations. As a third equation, we can combine two of the equations by eliminating the variable . The resultant equations are
where and . It can easily be seen that once the third equation is solved for , the first two equations can be integrated using simple quadratures.
The third equation is first-order differential equation for the function with the boundary condition pertaining to the condition behind the shock front. But there is another boundary condition that needs to be satisfied, i.e., pertaining to the condition found at . This additional condition can be satisfied not for any arbitrary value of , but there exists only one value of for which the second condition can be satisfied. Thus is obtained as an eigenvalue. This eigenvalue can be obtained numerically.
The condition that determines can be explained by plotting the integral curve as shown in the figure as a solid curve. The point is the initial condition for the differential equation, i.e., . The integral curve must end at the point . In the same figure, the parabola corresponding to the condition is also plotted as a dotted curve. It can be easily shown that the point always lies above this parabola. This means that the integral curve must intersect the parabola to reach the point . In all three differential equations, the ratio appears, implying that this ratio vanishes at the point where the integral curve intersects the parabola. The physical requirement for the functions and is that they must be single-valued functions of to get a unique solution. This means that the functions and cannot have extrema anywhere inside the domain. But at the point , can vanish, indicating that the aforementioned functions have extrema. The only way to avoid this situation is to make the ratio at finite. That is to say, as becomes zero, we require also to be zero in such a manner as to obtain . At ,
Numerical integrations of the third equation provide for and for . These values for may be compared with an approximate formula , derived by Landau and Stanyukovich. It can be established that as , . In general, the similarity index is an irrational number.
See also
Taylor–von Neumann–Sedov blast wave
Zeldovich–Taylor flow
References
Flow regimes
Fluid dynamics
Combustion
Lev Landau | Guderley–Landau–Stanyukovich problem | [
"Chemistry",
"Engineering"
] | 1,331 | [
"Chemical engineering",
"Combustion",
"Flow regimes",
"Piping",
"Fluid dynamics"
] |
69,607,361 | https://en.wikipedia.org/wiki/Therapeutic%20interfering%20particle | A therapeutic interfering particle is an antiviral preparation that reduces the replication rate and pathogenesis of a particular viral infectious disease. A therapeutic interfering particle is typically a biological agent (i.e., nucleic acid) engineered from portions of the viral genome being targeted. Similar to Defective Interfering Particles (DIPs), the agent competes with the pathogen within an infected cell for critical viral replication resources, reducing the viral replication rate and resulting in reduced pathogenesis. But, in contrast to DIPs, TIPs are engineered to have an in vivo basic reproductive ratio (R0) that is greater than 1 (R0>1). The term "TIP" was first introduced in 2011 based on models of its mechanism-of-action from 2003. Given their unique R0>1 mechanism of action, TIPs exhibit high barriers to the evolution of antiviral resistance and are predicted to be resistance proof. Intervention with therapeutic interfering particles can be prophylactic (to prevent or ameliorate the effects of a future infection), or a single-administration therapeutic (to fight a disease that has already occurred, such as HIV or COVID-19). Synthetic DIPs that rely on stimulating innate antiviral immune responses (i.e., interferon) were proposed for influenza in 2008 and shown to protect mice to differing extents but are technically distinct from TIPs due to their alternate molecular mechanism of action which has not been predicted to have a similarly high barrier to resistance. Subsequent work tested the pre-clinical efficacy of TIPs against HIV, a synthetic DIP for SARS-CoV-2 (in vitro), and a TIP for SARS-CoV-2 (in vivo).
Mechanism of action
Therapeutic Interfering Particles, often referred to as TIPs, are typically synthetic, engineered versions of naturally occurring defective interfering particles (DIPs), in which critical portions of the virus genome are deleted, rendering the TIP unable to replicate on its own. Often a TIP has the vast majority of the virus genome deleted. However, TIPs are engineered to retain specific elements of the genome that allow them to efficiently compete with the wild-type virus for critical replication resources inside an infected cell. TIPs thereby deprive wild-type virus of replication material through competitive inhibition, and therapeutically reduce viral load. Competitive inhibition enables TIPs to conditionally replicate and efficiently mobilize between cells, essentially "piggybacking" on wild-type virus, to act as single-administration antivirals with a high genetic barrier to the evolution of resistance. TIPs have been engineered for HIV and SARS-CoV-2, and do not induce innate immune responses such as interferon.
Three mechanistic criteria define a TIP:
Conditional replication: Due to a lack of genes required for replication, TIPs cannot self-replicate. However, when wild-type virus is present in the same cell (i.e., there is a superinfection of the cell), it provides the missing intracellular replication resources, allowing TIPs to conditionally replicate. In molecular genetics terms, the wild-type virus is said to provide complementation in trans.
Interference via competitive inhibition: TIPs reduce wild-type virus replication specifically by competing for intracellular viral replication resources (e.g., packaging proteins like the capsid). This mechanism of action reduces wild-type virus burst size and provides TIPs with a high genetic barrier to the evolution of viral resistance.
Mobilization with R0>1: when a TIP is conditionally activated by the wild-type "helper" virus in a super-infected cell, it will generate virus-like particles (VLPs). These TIP VLPs mobilize from the cell, are phenotypically identical to the virus being targeted, and can transduce new target cells. The central requirement for a therapeutic interfering particle is that it mobilizes with a basic reproductive ratio (R0) that is greater than 1 (R0>1). That is, for every TIP-producing cell, more than one new TIP-transduced cell must be generated. This third characteristic differentiates TIPs from naturally occurring DIPs.
As a result of these mechanistic criteria, TIPs have been referred to as "piggyback" or alternatively as "virus hijackers".
TIPs do not stimulate or function through the induction of innate cellular immune responses (such as interferon). In fact, stimulation of innate cellular antiviral mechanisms has been shown to contravene criterion (#3) (i.e., R0>1), as innate immune mechanisms inhibit efficient mobilization of TIPs. As such, several VLP-based therapy proposals for influenza and other viruses that do not satisfy these criteria are DIPs, but not TIPs.
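To make the R0 > 1 mobilization criterion concrete, here is a deliberately simplified branching sketch (a toy illustration, not a model from the cited studies; the growth factor r0, cell counts, and generation count are hypothetical):

```python
def tip_generations(r0, initial_cells=100, generations=8):
    """Expected TIP-transduced cell count per generation, assuming each
    TIP-producing cell yields r0 newly transduced cells on average."""
    counts = [float(initial_cells)]
    for _ in range(generations):
        counts.append(counts[-1] * r0)
    return counts

# R0 > 1: the TIP keeps spreading through the infected-cell population.
print([round(c) for c in tip_generations(1.5)])
# R0 < 1 (typical of a naturally occurring DIP): the particle dies out.
print([round(c) for c in tip_generations(0.7)])
```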
History
TIPs are built off the phenomenon of defective interfering particles (DIPs) discovered by Preben Von Magnus in the early 1950s, during his work on influenza viruses. DIPs are spontaneously arising virus mutants, first described by von Magnus as "incomplete" viruses, in which a critical portion of the viral genome has been lost. Direct evidence for DIPs was only found in the 1960s by Hackett, who observed the presence of "stumpy" particles of vesicular stomatitis virus in electron micrographs, and the DIP terminology was formalized in 1970 by Huang and Baltimore. DIPs have been reported for many classes of DNA and RNA viruses in clinical and laboratory settings.
Whereas DIPs had been proposed as potential therapeutics that would act via stimulation of the immune system – a concept tested in influenza with mixed results – the TIP R0>1 mechanism of action was first proposed in 2003 with the term “TIP” and the unique benefits of the R0>1 mechanism shown in 2011.
In 2016 the US government launched a major funding initiative (DARPA INTERCEPT) to discover and engineer antiviral TIPs for diverse viruses, based on prior investments from the US National Institutes of Health. This program led to renewed interest in the concept of interfering particles as therapies with the development of technologies to isolate DIPs for influenza and engineer TIPs for HIV and Zika virus. The first successful experimental demonstration of the TIP concept was reported in 2019 for HIV, and the discovery of a TIP for SARS-CoV-2 was reported in 2020 and results on the effect on hamsters in 2021. In 2020, the US government funded first-in-human clinical trials of TIPs.
References
Clinical pharmacology
Genetic engineering
Medical procedures | Therapeutic interfering particle | [
"Chemistry",
"Engineering",
"Biology"
] | 1,319 | [
"Pharmacology",
"Biological engineering",
"Genetic engineering",
"Clinical pharmacology",
"Molecular biology"
] |
78,427,572 | https://en.wikipedia.org/wiki/David%20N.%20Seidman | David N. Seidman is an American materials scientist known for his work in atom-probe tomography and atomic-scale characterization of materials. He holds the title of Walter P. Murphy Professor Emeritus of Materials Science and Engineering at Northwestern University and is the founding and current director of the Northwestern University Center for Atom-Probe Tomography (NUCAPT).
Early life and education
Seidman was born in Brooklyn, New York, in 1938. He attended Brooklyn Technical High School, graduating with honors in 1956. Seidman earned his Bachelor of Science in Physical Metallurgy and Physics from New York University in 1960 and his Master of Science in Physical Metallurgy in 1962. He completed his Ph.D. in Physical Metallurgy and Physics at the University of Illinois Urbana-Champaign in 1965 under the mentorship of Robert W. Balluffi, focusing on atomic defects in metals.
Academic career
Seidman's academic career began at Cornell University, where he served as a professor of materials science and engineering. During his tenure, he initiated the use of field-ion microscopy in January 1966 to study point defects in quenched or irradiated materials. He also constructed the first ultrahigh-vacuum atom-probe field-ion microscope that was entirely computer-controlled for high mass resolution, setting the standard for future instrument design.
In 1985, Seidman joined Northwestern University as a professor of materials science and engineering. He was appointed the Walter P. Murphy Professor in 1996. At Northwestern, he founded NUCAPT, which has become a leading center for atom-probe tomography research. Over his career, Seidman has mentored 55 Ph.D. students and 53 postdoctoral researchers, many of whom have gone on to leading positions in academia and industry. His laboratory also welcomes undergraduate and high school students, with a focus on engaging underrepresented groups in science.
Awards and honors
Seidman has been recognized extensively for his contributions to materials science and engineering. In 2018, he was elected to the National Academy of Engineering, one of the highest professional distinctions for engineers. He is also a Fellow of numerous prestigious organizations, including the American Academy of Arts & Sciences (2010), the American Physical Society (Condensed Matter Physics Division, 1984), ASM International (2005), the Materials Research Society (2010), and TMS (Minerals, Metals & Materials Society, 1997). He is also a Member of the EU Academy of Sciences (EUAS, 2018) and the Böhmische Physical Society. In 2016, he was named to the inaugural class of Fellows of the International Field-Emission Society. Seidman was also a two-time Fellow of the John Simon Guggenheim Foundation (1972–73, 1980–81) and was named an Honorary Member of the American Institute of Mining, Metallurgical, and Petroleum Engineers in 2014. Seidman has received numerous awards throughout his career, including:
Robert Lansing Hardy Gold Medal (TMS, 1966)
Albert Sauveur Achievement Award (ASM International, 2006)
Gold Medal (ASM International, 2019)
David Turnbull Lecturer Award (Materials Research Society, 2008)
Distinguished Scientist Award in Physical Sciences (Microscopy Society of America, 2020)
Alexander von Humboldt Stiftung Prize (1989, 1992)
Max Planck Research Prize, jointly awarded with Professor Peter Haasen (1993)
Robert Franklin Mehl Medal (TMS, 2011)
Additionally, his research from 1968 to 1977 was rated among the top twenty most highly rated major achievements sponsored by the National Science Foundation in materials science, as identified by a MITRE evaluative study of Materials Research Laboratory Programs. He also received the National Science Foundation Creativity Extension Award (2001–2003) and an IBM Faculty Research Award (2010–2011). In February 2009, Seidman was honored with a two-and-a-half-day symposium at the annual TMS meeting in San Francisco, California. From 2011 to 2012, he served as a Sackler Lecturer at the Mortimer and Raymond Sackler Institute of Advanced Studies at Tel Aviv University.
Research
Seidman has authored over 500 high-impact peer-reviewed publications focusing on the atomic-scale study of materials. His research enables advancements in applications such as nanotechnology, additive manufacturing, quantum computing, alternative green energy solutions, and structural materials for aerospace, automotive, and defense industries.
Selected work
Z. Mao, C.K. Sudbrack, K.E. Yoon, G. Martin, D.N. Seidman, "The mechanism of morphogenesis in a phase-separating concentrated multicomponent alloy," Nat. Mater. 6(3) (2007) 210-216.
J.D. Rittner, D.N. Seidman,"<110> symmetric tilt grain-boundary structures in fcc metals with low stacking-fault energies," Physical Review B 54(10) (1996) 6999-7015.
O.C. Hellman, J.A. Vandenbroucke, J. Rüsing, D. Isheim, D.N. Seidman, "Analysis of Three-dimensional Atom-probe Data by the Proximity Histogram," Microsc. microanal. 6(5) (2000) 437-444.
E.A. Marquis, D.N. Seidman, "Nanoscale structural evolution of Al3Sc precipitates in Al(Sc) alloys," Acta Materialia 49(11) (2001) 1909-1919.
K. Biswas, J. He, I.D. Blum, C.-I. Wu, T.P. Hogan, D.N. Seidman, V.P. Dravid, M.G. Kanatzidis, "High-performance bulk thermoelectrics with all-scale hierarchical architectures," Nature 489(7416) (2012) 414-418.
Y. Amouyal, Z. Mao, D.N. Seidman, "Effects of tantalum on the partitioning of tungsten between the γ- and γ′-phases in nickel-based superalloys: Linking experimental and computational approaches," Acta Materialia 58(18) (2010) 5898-5911.
A.R. Farkoosh, D.C. Dunand, D.N. Seidman, "Enhanced age-hardening response and creep resistance of an Al-0.5Mn-0.3Si (at.%) alloy by Sn inoculation," Acta Materialia 240 (2022) 118344.
D.N. Seidman, "Three-Dimensional Atom-Probe Tomography: Advances and Applications," Annual Review of Materials Research 37(1) (2007) 127-158.
Professional roles
Seidman served as editor-in-chief and a member of the editorial boards for leading journals, including Interface Science, the MRS Bulletin, Materials Today and Materials Research Letters. He was also President of the International Field-Emission Society from 2000 to 2002. He is founder and director of the Northwestern University Center for Atom-Probe Tomography (NUCAPT) and a co-founder of NanoAl LLC, a startup specializing in advanced aluminum alloys, which was later acquired by Braidy Industries. In 2019 he was elected a governor of the Board of Governors of Tel Aviv University. He is also a member of the International Advisory Board of the Department of Materials Science and Engineering at Tel Aviv University. Seidman held several visiting professorships at prominent institutions worldwide, including:
Technion – Israel Institute of Technology (1969)
Tel Aviv University (1972, 2008, 2009, 2010)
Hebrew University (1978)
Centre d’Études Nucléaires de Grenoble (1981)
Institut für Metallphysik der Universität Göttingen (1989, 1992)
Centre d’Études Nucléaires de Saclay (1989)
Legacy and impact
Seidman's innovations in materials characterization, particularly through atom-probe tomography, have had profound implications across industries, from aerospace to nanotechnology. His mentorship and leadership continue to influence future generations of scientists and engineers. Seidman is the director of the Northwestern University Center for Atom-Probe Tomography (NUCAPT), which is a core facility of Northwestern University and the National Science Foundation-funded Materials Research Science and Engineering Center. NUCAPT was founded during the summer of 2004 and has been operational on a full-time basis since late December 2004. It is a completely open facility for all researchers internal and external to Northwestern University, and it has attracted researchers from universities and national laboratories in the US as well as researchers from around the world. NUCAPT is constantly improving its infrastructure as well as developing new techniques for analyzing the data collected using the local-electrode atom-probe (LEAP) tomograph on which NUCAPT is based. The facility recently upgraded its LEAP 4000 instrument to the cutting-edge LEAP 5000XS (Cameca, Madison, Wisconsin), which combines new flight-path technology with enhanced detector performance to offer an improved field of view while achieving unprecedented detection efficiency of about 80%, the highest of any such analytical technique. This is a unique facility at Northwestern and in the US, which has attracted a significant number of professors to explore new avenues of research based on the use of the LEAP tomograph, as well as researchers from other universities and national laboratories, including Argonne National Laboratory, Pacific Northwest National Laboratory, and Oak Ridge National Laboratory.
References
Cornell University faculty
Materials scientists and engineers
Northwestern University faculty
American scientists
Physics
National academies of engineering
American Physical Society
American Academy of Arts and Sciences
Materials science organizations
American Institute of Mining, Metallurgical, and Petroleum Engineers | David N. Seidman | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,004 | [
"Mining engineering",
"Metallurgy",
"Petroleum engineering",
"Materials science",
" and Petroleum Engineers",
"Materials science organizations",
"Materials scientists and engineers",
"National academies of engineering",
"American Institute of Mining",
" Metallurgical"
] |
78,432,857 | https://en.wikipedia.org/wiki/Hyperbolization%20procedures | A hyperbolization procedure is a procedure that turns a polyhedral complex into a non-positively curved space , retaining some of its topological features. Roughly speaking, the procedure consists in replacing every cell of with a copy of a certain non-positively curved manifold with boundary, which is fixed a priori and is called the hyperbolizing cell of the procedure.
There are many different hyperbolization procedures available in the literature. While they all satisfy some common axioms, they differ by what kind of polyhedral complex is allowed as input and what kind of hyperbolizing cell is used. As a result, different procedures preserve different topological features and provide spaces with different geometric flavors. The first hyperbolization procedures were introduced by Mikhael Gromov in and later other versions were developed by several mathematicians including Ruth Charney, Michael W. Davis, and Pedro Ontaneda.
It is important to note that the word "hyperbolization" here does not have the same meaning that it has in the uniformization or hyperbolization results typical of low-dimensional geometry. Indeed, the space is not homeomorphic to . For instance, is always aspherical, regardless of whether is aspherical. Moreover, despite the name of the procedure, is not always guaranteed to be negatively curved, so some authors refer to these procedures as asphericalization procedures.
Axioms
An assignment is a hyperbolization procedure if it satisfies the following properties:
(Non-positive curvature). admits a locally CAT(0) metric.
(Functoriality). If is the inclusion of a subcomplex, then there is an isometric embedding with locally convex image.
(Local structure is preserved). If is an -cell of , then is a connected -manifold with boundary and the link of in is isomorphic to the link of in , possibly up to subdivisions.
(Homology is enriched). The map that sends back to induces a surjection on homology.
It follows in particular that if is a closed orientable -manifold, then so is .
Examples
The following are some examples of common hyperbolization procedures.
Strict hyperbolization
In Charney and Davis introduced a hyperbolization procedure for which is locally CAT(-1). In particular, when is compact, the fundamental group is a Gromov hyperbolic group. The hyperbolizing cell in this procedure is a real hyperbolic manifold with boundary and corners constructed via arithmetic methods.
Riemannian hyperbolization
In Ontaneda showed that if K is a smooth triangulation of a smooth manifold, then the strict hyperbolization procedure of Charney-Davis can be refined to ensure that is a smooth manifold and that it admits a Riemannian metric of negative sectional curvature. Moreover, it is possible to pinch the curvature arbitrarily close to .
Relative hyperbolization
Any hyperbolization procedure admits a relative version, which allows to work relatively to a subcomplex, i.e., keep it unaltered under the hyperbolization. More precisely, if is a subcomplex, then one can attach to the cone over , apply the hyperbolization procedure to the coned-off complex, and the remove a small neighborhood of the cone point. Thanks to axiom (3) above, the link of the cone point is a copy of , so removing a small neighborhood of the cone point results in a boundary component homeomorphic to .
If is the strict hyperbolization of Charney-Davis, then Belegradek showed that the relative version of results in a space whose fundamental group is hyperbolic relative to .
Applications
The following are some classical applications of hyperbolization procedures. The general recipe consists in constructing a complex or manifold with some desired topological features, and then applying a hyperbolization procedure to infuse it with non-positive or negative curvature. Depending on which procedure is used, one can get more geometric control on the output.
Every triangulable manifold is cobordant to a triangulable aspherical manifold. Namely, if is a triangulable manifold, let denote the hyperbolization of with respect to some triangulation. Then and are cobordant. The cobordism is obtained by applying to the cone over , and then removing a small open neighborhood of the cone point. Using strict hyperbolization, can be chosen to admit a topological metric of negative curvature. If is a smooth manifold, then , the metric, and the cobordism can even be taken to be smooth.
For any there are a closed -manifold with a topological metric of negative curvature whose universal cover is not homeomorphic to , and also a closed -manifold with a topological metric of negative curvature whose universal cover is homeomorphic to , but whose ideal boundary is not homeomorphic to the sphere .
For any and for any there exists a closed Riemannian -manifold such that all the sectional curvatures of are in , but is not homeomorphic to a locally symmetric space. In particular, is a Gromov hyperbolic group whose Gromov boundary is a sphere, but is not isomorphic to a uniform lattice in a Lie group of rank 1.
If is a closed orientable PL-manifold that is the boundary of another PL-manifold, then there is a Gromov hyperbolic group whose Gromov boundary is the tree of manifolds defined by , i.e., a certain inverse limit of connected sums of .
References
Topology
Geometry | Hyperbolization procedures | [
"Physics",
"Mathematics"
] | 1,122 | [
"Spacetime",
"Topology",
"Space",
"Geometry"
] |
68,281,082 | https://en.wikipedia.org/wiki/Silk%20surfacing | Silk surfacing was a surface finishing of cotton to obtain an appearance similar to silk.
Process
In contrast to other imitative finishes such as mercerizing, real silk was used in silk surfacing. The cotton was treated with acid and then with a solution of dissolved silk waste to provide a lustrous appearance.
Treatment
The steps are as follows:
Soaking the cotton yarns in tannic acid or another metallic acid.
Soaking in a solution of pure silk (silk waste or remnants dissolved in an acid).
Drying.
Passing through rollers.
The cotton is encased in silk. Although the finish was less durable, it was adopted only for selected products that were less likely to be washed.
See also
Finishing (textiles)
Plasma treatment (textiles)
References
Textile techniques
Textile chemistry
Properties of textiles | Silk surfacing | [
"Chemistry"
] | 159 | [
"nan"
] |
71,168,243 | https://en.wikipedia.org/wiki/Fosmanogepix | Fosmanogepix is an experimental antifungal drug being developed by Amplyx Pharmaceuticals (now by Pfizer and Basilea). It is being investigated for its potential to treat various fungal infections including aspergillosis, candidaemia, and coccidioidomycosis.
Fosmanogepix is a prodrug and is converted into the active drug form, manogepix in vivo. Manogepix targets the enzyme GWT1 (Glycosylphosphatidylinositol-anchored Wall protein Transfer 1), an enzyme in the glycosylphosphatidylinositol biosynthesis pathway. Inhibiting this enzyme prevents the fungi from properly modifying certain (so called GPI-anchored) proteins essential to the fungal life cycle. This mechanism of action is totally novel; therefore, if approved, fosmanogepix would become a first-in-class medication.
In 2023, the drug was given a compassionate use authorization for four patients with Fusarium solani meningitis.
References
2-Aminopyridines
Isoxazoles
Antifungals
Prodrugs
Organophosphates
Zwitterions
Experimental drugs
Drugs developed by Pfizer | Fosmanogepix | [
"Physics",
"Chemistry"
] | 262 | [
"Matter",
"Prodrugs",
"Zwitterions",
"Chemicals in medicine",
"Ions"
] |
71,173,762 | https://en.wikipedia.org/wiki/Cineromycin%20B | Cineromycin B is an antiadipogenic antibiotic with the molecular formula C17H26O4 which is produced by the bacterium Streptomyces cinerochromogenes.
References
Further reading
Antibiotics
Lactones
Unsaturated compounds
Diols
Heterocyclic compounds with 1 ring | Cineromycin B | [
"Chemistry",
"Biology"
] | 65 | [
"Biotechnology products",
"Organic compounds",
"Antibiotics",
"Unsaturated compounds",
"Biocides",
"Organic compound stubs",
"Organic chemistry stubs"
] |
71,174,615 | https://en.wikipedia.org/wiki/Poly%28pentafluorophenyl%20acrylate%29 | Poly(pentafluorophenyl acrylate) (variously abbreviated PPFPA, PolyPFPA, PPfpA, or PolyPfpA) is a highly fluorinated polymer. It features the pentafluorophenyl ester functionality, from which its properties and applications result. It is most commonly used in post-polymerization modification to synthesize functional polyacrylamides or polyacrylates. As such, it is advantageous to poly(N-acryloyl succinimide) due to its broader solubility in organic solvents as well as its higher stability towards hydrolysis.
Synthesis
Commonly poly(pentafluorophenyl acrylate) is synthesized by free radical polymerization of the monomer pentafluorophenyl acrylate.
Additionally, pentafluorophenyl acrylate can be successfully polymerized by RAFT polymerization to yield homopolymers, copolymers, or block copolymers.
It has been shown that poly(pentafluorophenyl acrylate) can also be prepared by pulsed plasma deposition.
Chemical properties
Reactivity
Poly(pentafluorophenyl acrylate) is a polymeric active ester and hence features an inherent reactivity towards nucleophiles such as amines. It is therefore used in the preparation of polyacrylamides by reacting it with amines.
Poly(pentafluorophenyl acrylate) can also be used in a transesterification by reacting it with alcohols when auxiliary DMAP and DMF are used, allowing for the synthesis of polyacrylate homopolymers and copolymers.
Low refractive index polymers
Polymers are known for featuring low refractive indices typically in the range from 1.3 to 1.8, with fluorinated polymers exhibiting refractive indices in the range from 1.3 to 1.4.
As such, poly(pentafluorophenyl acrylate) has been explored as a crosslinkable cladding for optical fibers when copolymerized with glycidyl methacrylate.
Applications
Poly(pentafluorophenyl acrylate) finds application in the synthesis of functional polymers by post-polymerization modification. Applications of the resulting polyacrylamides can be found in drug delivery, functional surfaces, and nanoparticles.
References
Polymer chemistry
Acrylate polymers | Poly(pentafluorophenyl acrylate) | [
"Chemistry",
"Materials_science",
"Engineering"
] | 513 | [
"Materials science",
"Polymer chemistry"
] |
71,176,219 | https://en.wikipedia.org/wiki/Convex%20cap | A convex cap, also known as a convex floating body or just floating body, is a well-defined structure in mathematics commonly used in convex analysis for approximating convex shapes. In general, it can be thought of as the intersection of a convex polytope with a half-space.
Definition
A cap can be defined as the intersection of a half-space with a convex set . Note that the cap can be defined in a space of any dimension. Given a , can be defined as the cap containing , corresponding to a half-space parallel to with width times greater than that of the original.
The definition of a cap can also be extended to define a cap of a point where the cap can be defined as the intersection of a convex set with a half-space containing . The minimal cap of a point is a cap of with .
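A sketch of these definitions in conventional notation (an assumption: $K$ the convex set, $u$ a unit direction, $t$ a threshold, and $\lambda \ge 1$ the blow-up factor):

```latex
C \;=\; K \cap H, \qquad H \;=\; \{\, x \in \mathbb{R}^{n} : \langle x, u \rangle \ge t \,\},
```

with the blown-up cap $C^{\lambda}$ cut from $K$ by a half-space parallel to $H$ whose width is $\lambda$ times the width of $C$.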
Floating Bodies and Caps
We can define the floating body of a convex shape using the following process. Note the floating body is also convex. In the case of a 2-dimensional convex compact shape , given some where is small. The floating body of this 2-dimensional shape is given by removing all the 2 dimensional caps of area from the original body. The resulting shape will be our convex floating body . We generalize this definition to n dimensions by starting with an n dimensional convex shape and removing caps in the corresponding dimension.
Relation to affine surface area
As , the floating body more closely approximates . This information can tell us about the affine surface area of which measures how the boundary behaves in this situation. If we take the convex floating body of a shape, we notice that the distance from the boundary of the floating body to the boundary of the convex shape is related to the convex shape's curvature.
Specifically, convex shapes with higher curvature have a higher distance between the two boundaries. Taking a look at the difference in the areas of the original body and the floating body as . Using the relation between curvature and distance, we can deduce that is also dependent on the curvature. Thus,
.
In this formula, is the curvature of at and is the length of the curve.
We can generalize distance, area and volume for n dimensions using the Hausdorff measure. This definition, then works for all . As well, the power of is related to the inverse of where is the number of dimensions. So, the affine surface area for an n-dimensional convex shape is
where is the -dimensional Hausdorff measure.
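In conventional notation (a sketch, assuming $\kappa(x)$ denotes the generalized Gaussian curvature at the boundary point $x$), the standard form of this definition is:

```latex
\Omega(K) \;=\; \int_{\partial K} \kappa(x)^{\frac{1}{n+1}} \,\mathrm{d}\mathcal{H}^{\,n-1}(x),
```

where $\mathcal{H}^{\,n-1}$ is the $(n-1)$-dimensional Hausdorff measure on the boundary $\partial K$.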
Wet part of a convex body
The wet part of a convex body can be defined as where is any real number describing the maximum volume of the wet part and .
We can see that using a non-degenerate linear transformation (one whose matrix is invertible) preserves any properties of . So, we can say that is equivariant under these types of transformations. Using this notation, . Note that
is also equivariant under non-degenerate linear transformations.
Caps for approximation
Assume and choose randomly, independently and according to the uniform distribution from . Then, is a random polytope. Intuitively, it is clear that as , approaches . We can determine how well approximates in various measures of approximation, but we mainly focus on the volume. So, we define , where refers to the expected value. We use as the wet part of and as the floating body of . The following theorem states that the general principle governing is of the same order of magnitude as the volume of the wet part with .
Theorem
For and , . The proof of this theorem is based on the technique of M-regions and cap coverings. We can use the minimal cap which is a cap containing and satisfying . Although the minimal cap is not unique, this doesn't have an effect on the proof of the theorem.
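An illustrative Monte-Carlo sketch (not from the article; the unit disk, sample sizes, and the n^(-2/3) comparison are assumptions chosen to match the planar case): estimate how much area of a convex body a random polytope misses, which by the theorem should be comparable to the volume of the wet part with t = 1/n.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def missed_area(n, trials=200):
    """Monte-Carlo estimate of area(K) - E[area(K_n)] for K the unit disk,
    where K_n is the convex hull of n uniform random points in K."""
    deficits = []
    for _ in range(trials):
        pts = []
        while len(pts) < n:
            # rejection-sample candidate points in the unit disk
            xy = rng.uniform(-1.0, 1.0, size=(n, 2))
            pts.extend(xy[(xy ** 2).sum(axis=1) <= 1.0])
        pts = np.asarray(pts[:n])
        # ConvexHull.volume is the enclosed area in two dimensions
        deficits.append(np.pi - ConvexHull(pts).volume)
    return float(np.mean(deficits))

for n in (100, 400, 1600):
    print(n, missed_area(n))
```

For the disk, the printed deficits should shrink roughly like n^(-2/3) as n grows, which is the order of the volume of the wet part with t = 1/n for a smooth planar body.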
Lemma
If and , then for every minimal cap .
Since , this lemma establishes the equivalence of the M-regions and a minimal cap : a blown up copy of contains and a blown up copy of contains . Thus, M-regions and minimal caps can be interchanged freely, without losing more than a constant factor in estimates.
Economic cap covering
A cap covering can be defined as the set of caps that completely cover some boundary . By minimizing the size of each cap, we can minimize the size of the set of caps and create a new set. This set of caps with minimal volume is called an economic cap covering and can be explicitly defined as the set of caps covering some boundary where each has some minimal width and the total volume of this covering is ≪ ⋅ .
References
Metric geometry
Convex analysis
Computational geometry | Convex cap | [
"Mathematics"
] | 929 | [
"Computational geometry",
"Computational mathematics"
] |
71,181,672 | https://en.wikipedia.org/wiki/OGLE-2011-BLG-0462 | OGLE-2011-BLG-0462, also known as MOA-2011-BLG-191, is a stellar-mass black hole isolated in interstellar space. OGLE-2011-BLG-0462 lies at a distance of 1,720 parsecs (5,610 light years) in the direction of the galactic bulge in the constellation Sagittarius. The black hole has a mass of about . OGLE-2011-BLG-0462 is the first truly isolated black hole which has been confirmed.
Discovery
OGLE-2011-BLG-0462 was discovered through microlensing when it passed in front of a background star that was 20,000 light years away from Earth. The black hole's gravity bent the star's light, causing a sharp spike in brightness that was detected by the Hubble Space Telescope. It took six years to confirm the existence of OGLE-2011-BLG-0462. Its initial kick velocity has been estimated to have an upper limit of 100 km/s. No significant X-ray emission has been detected from gas accreting onto the black hole indicating that it is truly isolated.
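For context, the angular scale of such a microlensing event is set by the Einstein radius; in standard notation (a sketch, assuming $M$ is the lens mass and $D_{\mathrm{L}}$, $D_{\mathrm{S}}$ are the distances to the lens and to the background source):

```latex
\theta_{\mathrm{E}} \;=\; \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{\mathrm{S}} - D_{\mathrm{L}}}{D_{\mathrm{L}}\,D_{\mathrm{S}}}} .
```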
See also
Rogue black hole
References
Stellar black holes
Sagittarius (constellation)
Gravitational lensing | OGLE-2011-BLG-0462 | [
"Physics",
"Astronomy"
] | 272 | [
"Black holes",
"Stellar black holes",
"Unsolved problems in physics",
"Constellations",
"Sagittarius (constellation)"
] |
71,184,920 | https://en.wikipedia.org/wiki/Oxalate%20sulfate | Oxalate sulfates are mixed anion compounds containing oxalate and sulfate. They are mostly transparent, and any colour comes from the cations.
Related compounds include the sulfite oxalates and oxalate selenates.
Production
Oxalate sulfates may be deposited from a solution in water of the metal anions and sulfate with oxalic acid when evaporated. Crystals formed this way may be hydrates.
Properties
Many crystal forms are non-centrosymmetric and have non-linear optical properties.
When heated, oxalate sulfates will first dehydrate, and then give off carbon dioxide.
List
References
Oxalates
Mixed anion compounds
Sulfates | Oxalate sulfate | [
"Physics",
"Chemistry"
] | 141 | [
"Matter",
"Mixed anion compounds",
"Sulfates",
"Salts",
"Ions"
] |
77,110,354 | https://en.wikipedia.org/wiki/Scaled%20Wind%20Farm%20Technology%20Facility | The Scaled Wind Farm Technology (SWiFT) Facility is a collaborative research facility located at the Reese Technology Center in Lubbock, Texas. It is the first facility to offer multiple wind turbines for measuring turbine performance in a wind farm environment, operated as a user facility for the Wind Energy Technologies Office of the United States Department of Energy. The project was formally commissioned in summer 2013.
Partners
Some of the present research collaboration involves the following research partners:
Texas Tech University
The National Wind Institute
Sandia National Laboratories (SNL) for the Wind Energy Technologies Office (WETO) of the U.S. Department of Energy.
Vestas, a Danish wind company
Group NIRE, which is a renewable energy corporation created in 2010 by Texas Tech.
Facilities
The facilities consist of:
SWiFT Wind Turbines: Three research-scale wind turbines (modified Vestas V27s), two deployed by Sandia and the third by Vestas.
Meteorological (MET) Towers: Two 60-meter-tall anemometer towers for measuring wind speed
Control Building: Housing 640 square feet of computing space for wind-turbine control.
Assembly Building: A 5,500 square foot, environmentally controlled high-bay assembly area with machining capabilities (lathe, multiple mills, drill press, welders, and related items).
References
Energy research institutes
Energy research
Wind turbines
Wind energy organizations
Renewable energy | Scaled Wind Farm Technology Facility | [
"Engineering"
] | 277 | [
"Energy research institutes",
"Energy organizations"
] |
75,485,660 | https://en.wikipedia.org/wiki/Yuan%20Wang%20%28control%20theorist%29 | Yuan Wang () is a Chinese-American mathematician specializing in control theory and known for her research on input-to-state stability. She is a professor of mathematics at Florida Atlantic University, chair of the university's Department of Mathematical Sciences, and a moderator for the arXiv mathematical preprint repository in the areas of optimization and control (math.OC) and systems and control (cs.SY).
Education and career
Wang studied mathematics at Shandong University in China, graduating with a bachelor's degree in 1982. She completed a Ph.D. in 1990 at Rutgers University, with the dissertation Algebraic Differential Equations and Nonlinear Control Systems supervised by Eduardo D. Sontag.
She joined Florida Atlantic University as an assistant professor of mathematics in 1990. She was promoted to associate professor in 1995 and full professor in 2000.
Recognition
Wang was named as an IEEE Fellow in 2013, "for contributions to stability and control of nonlinear systems".
References
External links
Year of birth missing (living people)
Living people
American mathematicians
American women mathematicians
Control theorists
Shandong University alumni
Rutgers University alumni
Florida Atlantic University faculty
Fellows of the IEEE | Yuan Wang (control theorist) | [
"Engineering"
] | 225 | [
"Control engineering",
"Control theorists"
] |
75,490,293 | https://en.wikipedia.org/wiki/Alessandra%20Ricca | Alessandra Ricca is a computational chemist whose research focuses primarily on theoretical chemistry. She models the properties of organic compounds in the gas and ice phases, emphasizing the formation, reactivity, spectroscopy, and optical properties of these compounds. In Astrophysics and Analysis at NASA, Ricca studies PAH infrared spectroscopy and nanograins in the interstellar medium. She loads data into the NASA Ames PAH IR Spectroscopic Database (PAHdb), which helps interpret JWST data. In NASA Solar System Workings, Ricca studies ammonia hydrates on Charon and other icy bodies, interpreting data collected by the Cassini mission, which detected small, large, and macromolecular organics near the Enceladus plume. The goal of this project is to determine whether these substances were derived from life or from abiotic processes. In addition to her work at NASA, Ricca is a Senior Research Chemist at the SETI Institute.
Early life and education
Ricca was born to an Italian father and a Swiss mother in Sanremo, Italy. Her family was heavily focused on medicine, as her father was an M.D., and her mother helped him with cancer detection testing. From a young age, she enjoyed watching the TV series Medical Center and was influenced by her family to become a doctor, as her father was a surgeon. She also has a brother who is five years younger than her.
Ricca has an Italian and Swiss dual citizenship. After spending her early years in Italy, she attended a religious boarding school in Monaco in 9th grade. French was the primary language spoken at the school, and the whole experience shocked her. She then transferred to Geneva, Switzerland, to finish high school and attended the University of Geneva. All her classes were in French. In college, she initially majored in biochemistry but later switched to chemistry, as biochemistry was a newer field that was relatively harder to understand. Ricca also wanted to major in medicine but eventually became interested in research: "I’m very curious, and I like to solve things. I’m a problem solver, so I became more interested in research, and I realized that I was distancing myself more and more from being a practitioner or even a surgeon."
Ricca received her Bachelor of Science degree (B.S.) and Master of Science (M.S.) from the University of Geneva in December 1988 and March 1989, respectively. She studied organic chemistry in college and later began a PhD in Zurich, Switzerland. However, she eventually left and went to Geneva for a PhD in theoretical chemistry in collaboration with the University of Geneva's pharmaceutical department. In July 1993, she received a PhD in physical chemistry from the University of Geneva, Switzerland. After receiving her PhD, she decided to stay in Switzerland but later moved to the Bay Area when she received a National Research Council Research Associateship at NASA Ames Research Center. In 1995, she became a NASA Ames Postdoctoral Fellow. Since she had a J-1 visa, she had to leave after two years and went to London, England, for another postdoc, becoming a Postdoctoral Fellow at King's College London, UK. She eventually returned to the United States and worked with Professor Charles Musgrave on calculations in materials science at Stanford from 1997 to 1998. She was also hired by Eloret Corporation to work on thermochemistry and nanotechnology. When the nanotechnology project ended, she began to work on PAHs with scientists in Code S and eventually studied space science.
Personal life
Ricca is married and has two daughters, one in college and the other in high school. She also has two black cats and a hamster. Ricca likes to travel with her family and hike long trails with her husband in Hawaii. In addition, she enjoys music, singing, and the arts, often attending concerts and art exhibits. She also likes to do gardening, photography, and do-it-yourself projects.
Ricca is proud of her family: "I think having a family and nice kids and a great husband, whose support is really great." She also dreamed of coming to the United States when she was younger, and she fulfilled that dream by moving to the U.S. alone. Her first inspirations were her parents and the people she met who helped her throughout her life.
Ricca also enjoys reading classical French literature, which she didn't when she was younger. She prefers reading in French, her native language, stating that "I just take pleasure in reading [French literature] because I can capture all the subtleties, which are very often lost to me in English." To her, reading Proust "is like a painting with all these colors. It’s like a piece of art and gives me great enjoyment."
Other activities
From 2001 to 2008, she mentored students attending summer programs at the University of Notre Dame and UC Berkeley. In 2005, she was a reviewer for the National Science Foundation, and from 2006 through 2008, she was a mentor for the Summer Research for Undergraduates Program in Astrobiology at the SETI Institute. She was also a referee for the Journal of Physical Chemistry, Chemical Physics, Chemical Physics Letters, and Astrophysical Journal. In addition, she has published many articles and has served as an invited peer reviewer.
A piece of advice that she would give younger students is to stay focused and tenacious: "You have to really be extremely perseverant because you get rejection after rejection. You have to be willing to keep going on and on and not get discouraged if you get a lot of negative comments. You have to have a lot of grit. You need to be very passionate to overcome all these kinds of barriers."
Honors and awards
She won the 1997 Feynman Prize in Nanotechnology and the 2008 NASA Honor Award. She also won the 1999 ELORET Thermosciences Institute Outstanding Achievement Award and the 2000 and 2002 ELORET Superior Achievement Award.
References
Living people
Year of birth missing (living people)
Computational chemists
Women chemists
Nationality missing
NASA people
Search for extraterrestrial intelligence | Alessandra Ricca | [
"Chemistry"
] | 1,253 | [
"Computational chemistry",
"Theoretical chemists",
"Computational chemists"
] |
75,492,429 | https://en.wikipedia.org/wiki/Activation%20strain%20model | The activation strain model, also referred to as the distortion/interaction model, is a computational tool for modeling and understanding the potential energy curves of a chemical reaction as a function of reaction coordinate (ζ), as portrayed in reaction coordinate diagrams. The activation strain model decomposes these energy curves into two terms: the strain of the reactant molecules as they undergo a distortion and the interaction between these reactant molecules. A particularly important aspect of this type of analysis compared to others is that it describes the energetics of the reaction in terms of the original reactant molecules and describes their distortion and interaction using intuitive models such as molecular orbital theory that can be evaluated with most quantum chemical programs. Such a model allows for the calculation of transition state energies, and hence the activation energy, of a particular reaction mechanism, allowing it to be used as a predictive tool for describing competitive mechanisms and relative preference for certain pathways. In the chemistry literature, the activation strain model has been used for modeling bimolecular reactions such as SN2 and E2 reactions, transition-metal-mediated C-H bond activation, and 1,3-dipolar cycloaddition reactions, among others.
Theory
The activation strain model was originally proposed and has been extensively developed by Bickelhaupt and coworkers. This model breaks the potential energy curve of a reaction, as a function of the reaction coordinate ζ, into two components as shown in equation 1: the energy due to straining the original reactant molecules (∆Estrain) and the energy due to interaction between reactant molecules (∆Eint). The strain term ∆Estrain is usually destabilizing, as it represents the distortion of a molecule from its equilibrium geometry. The interaction term, ∆Eint, is generally stabilizing, as it represents the electronic interactions of reactants that typically drive the reaction. The interaction energy is further decomposed using an energy decomposition scheme based on an approach by Morokuma and the transition state method of Ziegler and Rauk. This decomposition breaks the interaction energy into terms that are easily processed within the framework of the Kohn-Sham molecular orbital model. These terms relate to the electrostatic interactions, steric repulsion, orbital interactions, and dispersion forces, as shown in equation 2.
The electrostatic interaction, ∆Velst, is the classical repulsion and attraction between the nuclei and electron densities of the approaching reactant molecules. The Pauli repulsion term, ∆Epauli, relates to the interaction between the filled orbitals of the reactant molecules; in other words, it describes steric repulsion between approaching reactants. The orbital interaction, ∆Eoi, describes bond formation, HOMO-LUMO interactions, and polarization. Further, this term is well complemented by group theory and MO theory as a way to describe interaction between orbitals of the correct symmetry. The last term, ∆Edisp, relates to dispersion forces between the reactants.
The transition state, defined as the maximum of the energy curve along the reaction coordinate, is found where equation 3 is satisfied. At this point along the reaction coordinate, as long as the strain and interaction energies at ζ = 0 are set to zero, the transition state energy is the activation energy (∆E‡) of the reaction. The activation energy can then be defined as the sum of the activation strain (∆E‡strain) and the TS interaction energy (∆E‡int), as shown in equation 4.
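The equations referenced in the prose above are not reproduced in this text. The following is a reconstruction of the standard form of the decomposition from the definitions given (the symbol ∆Edisp for the dispersion term is inferred from context), numbered to match the references in the prose:

```latex
\begin{align}
\Delta E(\zeta) &= \Delta E_{\mathrm{strain}}(\zeta) + \Delta E_{\mathrm{int}}(\zeta) && \text{(1)}\\
\Delta E_{\mathrm{int}}(\zeta) &= \Delta V_{\mathrm{elst}}(\zeta) + \Delta E_{\mathrm{Pauli}}(\zeta)
   + \Delta E_{\mathrm{oi}}(\zeta) + \Delta E_{\mathrm{disp}}(\zeta) && \text{(2)}\\
\left.\frac{\mathrm{d}\,\Delta E(\zeta)}{\mathrm{d}\zeta}\right|_{\zeta = \zeta_{\mathrm{TS}}} &= 0 && \text{(3)}\\
\Delta E^{\ddagger} &= \Delta E^{\ddagger}_{\mathrm{strain}} + \Delta E^{\ddagger}_{\mathrm{int}} && \text{(4)}
\end{align}
```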
Select applications
The bimolecular elimination (E2) and substitution (SN2) reactions are often in competition with each other because of mechanistic similarities, mainly that both benefit from a good leaving group and that the E2 reaction uses strong bases, which are often good nucleophiles for an SN2 reaction. Bickelhaupt et al. used the activation strain model to analyze this competition between the two reactions in acidic and basic media using the four representative reactions below. Reactions [1] and [2] represent the E2 and SN2 reactions, respectively, in basic conditions, while reactions [3] and [4] represent the E2 and SN2 reactions in acidic conditions.
[1] OH- + CH3CH2OH → H2O + CH2=CH2 + OH-
[2] OH- + CH3CH2OH → CH3CH2OH + OH-
[3] H2O + CH3CH2OH2+ → H3O+ + CH2=CH2 + H2O
[4] H2O + CH3CH2OH2+ → CH3CH2OH2+ + H2O
Initial calculations show that, in basic media, the transition state energy ΔE‡ of the E2 pathway is lower, while acidic conditions favor the SN2 pathway. Closer observation of the interaction and strain energies shows that, for the E2 mechanism, upon shifting from acidic to basic media, the strain energy becomes more destabilizing, yet the interaction energy becomes even more stabilizing, making it the driving force for the preference of the E2 pathway in basic conditions.
To rationalize this increase in stabilizing interaction upon shifting to basic conditions, it is useful to represent the interaction energy in terms of molecular orbital theory. The lowest unoccupied molecular orbitals (LUMOs) of ethanol (basic conditions) and protonated ethanol (acidic conditions) can be visualized as combinations of the •CH3 radical fragment and either the •CH2OH (basic conditions) or the •CH2OH2+ (acidic conditions) radical. Upon protonation of the •CH2OH fragment, these orbitals are lowered in energy, resulting in the overall LUMO of each molecule having different parentage. This change in parentage in the linear combination of atomic orbitals results in the LUMO of CH3CH2OH2+ having bonding character between the β-carbon and the hydrogen atom abstracted in the E2 pathway, while the LUMO of CH3CH2OH has antibonding character along this bond.
In either the SN2 or the E2 pathway, the HOMO of the nucleophile/base will donate electron density into this LUMO. As the LUMO of CH3CH2OH2+ has bonding character along the C(β)-H bond, putting electrons into this orbital should strengthen the bond, disfavoring the abstraction required in the E2 reaction. The opposite holds for the LUMO of CH3CH2OH: donation into an orbital that is antibonding with respect to this bond will weaken the C(β)-H bond and allow its abstraction in the E2 reaction. This relatively intuitive comparison within MO theory shows how the increase in stabilizing interaction for the E2 mechanism arises when switching from acidic to basic conditions.
Single point calculations
An issue in the interpretation of interaction (∆Eint) and strain (∆Estrain) curves arises when only single points along the reaction coordinate are considered. Such issues become apparent when two model reactions are considered, which have identical strain energy ∆Estrain curves that become more destabilizing along the reaction coordinate but have different interaction energy curves. If one of the reactions has a more stabilizing interaction energy curve with greater curvature, the transition state will be reached sooner along the reaction coordinate in order to satisfy the condition in equation 3, while a reaction with a less stabilizing interaction curve will reach the transition state later in the reaction coordinate with a higher transition state energy.
If only the transition states are observed, it would appear that the transition state of the second representative reaction has a higher energy because of the higher strain energy at the respective transition states. However, if one considers the entire curves for both reactions, it becomes clear that the higher transition state energy of the second reaction is due to the less stabilizing interaction energy at all points along the reaction coordinate, while the two reactions have identical strain energy curves.
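As a numerical illustration of this point, the Python sketch below uses simple invented model curves (not data from the cited study; the quadratic strain and cubic interaction forms are assumptions chosen only so the maxima fall inside the plotted range). Both reactions share one strain curve but differ in their interaction curves; each transition state is located as the maximum of the total energy along the coordinate.

```python
import numpy as np

# Illustrative model curves only (invented functional forms): both reactions
# share one destabilizing strain curve, but their stabilizing interaction
# curves differ in strength and curvature.
zeta = np.linspace(0.0, 1.0, 2001)      # normalized reaction coordinate
strain = 120.0 * zeta**2                # Delta E_strain(zeta), destabilizing
interactions = {
    "A (more stabilizing)": -150.0 * zeta**3,   # Delta E_int(zeta)
    "B (less stabilizing)": -100.0 * zeta**3,
}

for label, interaction in interactions.items():
    total = strain + interaction        # Delta E(zeta), cf. equation 1
    i_ts = int(np.argmax(total))        # TS = maximum along the coordinate (eq. 3)
    print(f"reaction {label}: TS at zeta = {zeta[i_ts]:.2f}, "
          f"barrier = {total[i_ts]:5.1f}, "
          f"strain at TS = {strain[i_ts]:5.1f}, "
          f"interaction at TS = {interaction[i_ts]:6.1f}")
```

On these model curves, reaction B reaches its transition state later and with a higher barrier, and it also shows more strain at its own transition state, so a single-point comparison would wrongly attribute the difference to strain; the full curves make clear that the strain curves are identical and the difference originates entirely in the interaction term.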
References
Wikipedia Student Program
Computational chemistry
Simulation | Activation strain model | [
"Chemistry"
] | 1,681 | [
"Theoretical chemistry",
"Computational chemistry"
] |
75,492,509 | https://en.wikipedia.org/wiki/Niobium%20perchlorate | Niobium perchlorate is a chemical compound with the formula . It is a hygroscopic, white crystalline solid that readily reacts with moist air or water to produce niobium(V) oxide.
Synthesis and reactions
Niobium perchlorate is produced from the reaction of niobium pentachloride and anhydrous perchloric acid:
It decomposes at to niobyl perchlorate, releasing dichlorine heptoxide:
Niobyl perchlorate further decomposes at to , which decomposes at to niobium pentoxide.
Perchloratoniobates, such as and , are produced by the reaction of perchlorate sources, such as cesium perchlorate and niobium perchlorate, in anhydrous perchloric acid at .
Structure
Although the structure of niobium perchlorate has not been elucidated by single-crystal X-ray diffraction, the structure has been probed by IR spectroscopy and powder X-ray diffraction. Niobium perchlorate has both monodentate and bidentate perchlorate ligands.
References
Niobium(V) compounds
Perchlorates | Niobium perchlorate | [
"Chemistry"
] | 255 | [
"Perchlorates",
"Salts"
] |
75,492,878 | https://en.wikipedia.org/wiki/S-309309 | S-309309 is an experimental MGAT2 inhibitor developed as an anti-obesity drug by the Japanese company Shionogi. Phase II trial results are expected in late 2023.
References
Experimental anti-obesity drugs
Spiro compounds
Sulfones
Fluoroarenes
Pyridines
Chromanes
Pyrazolopyridines
Acetamides | S-309309 | [
"Chemistry"
] | 74 | [
"Organic compounds",
"Sulfones",
"Functional groups",
"Spiro compounds"
] |
75,496,122 | https://en.wikipedia.org/wiki/Exercise%20mimetic | An exercise mimetic is a drug that mimics some of the biological effects of physical exercise. Exercise is known to have an effect in preventing, treating, or ameliorating the effects of a variety of serious illnesses, including cancer, type 2 diabetes, cardiovascular disease, and psychiatric and neurological diseases such as Alzheimer's disease. As of 2021, no drug is known to have the same benefits.
Known biological targets affected by exercise have also been targets of drug discovery, with limited results. These known targets include:
The majority of the effect of exercise in reducing cardiovascular and all-cause mortality cannot be explained via improvements in quantifiable risk factors, such as blood cholesterol. This further increases the challenge of developing an effective exercise mimetic. Moreover, even if a broad spectrum exercise mimetic were invented, it is not necessarily the case that its public health effects would be superior to interventions to increase exercise in the population.
References
Exercise biochemistry
Drugs | Exercise mimetic | [
"Chemistry",
"Biology"
] | 197 | [
"Pharmacology",
"Products of chemical industry",
"Exercise biochemistry",
"Biochemistry",
"Chemicals in medicine",
"Drugs"
] |
75,500,805 | https://en.wikipedia.org/wiki/Pasterski%E2%80%93Strominger%E2%80%93Zhiboedov%20triangle | In theoretical physics, the Pasterski–Strominger–Zhiboedov (PSZ) triangle or infrared triangle is a series of relationships between three groups of concepts involving the theory of relativity, quantum field theory and quantum gravity. The triangle highlights connections already known or demonstrated by its authors, Sabrina Gonzalez Pasterski, Andrew Strominger and Alexander Zhiboedov.
The connections are among weak and lasting effects caused by the passage of gravitational or electromagnetic waves (memory effects), quantum field theorems on graviton and photon and geometrical symmetries of spacetime. Because all of this occurs under conditions of low energy, known as infrared in the language of physicists, it is also referred to as the infrared triangle.
Elements of the triangle
Related concepts
The concepts that are interconnected by the triangle are:
a) soft particle theorems (quantum field theory theorems regarding the behavior of low-energy gravitons or photons):
soft graviton theorem, published by Steven Weinberg in 1965;
extension of the previous theorem, published by Freddy Cachazo and Strominger in 2014;
soft photon theorem, also published by Weinberg in the same paper of 1965 regarding the graviton;
b) asymptotic symmetries (symmetries of spacetime distant from the sources of the fields):
supertranslations of the Bondi-Metzner-Sachs group, published in 1962;
superrotations (symmetry analogous to that of the Virasoro algebra), published by Glenn Barnich and Cédric Troessaert in 2010;
symmetries of U(1) gauge theories, published by Pasterski in 2017;
c) memory effects:
gravitational memory effect, published by Yakov Zeldovich and A. G. Polnarev in 1974 and Demetrios Christodoulou in 1991;
new gravitational memory effects, published by Pasterski, Strominger and Zhiboedov in 2016;
the electromagnetic analogue of the memory effect, published by Lydia Bieri and David Garfinkle in 2013.
Binding relationships
Each group is linked to another by special relationships:
Fourier transforms tie together soft theorems and memory effects;
vacuum transitions tie together asymptotic symmetries and memory effects;
Ward identities tie together soft theorems and asymptotic symmetries.
So, for example:
the soft graviton theorem (a.1) is related to the supertranslations (b.1) by a Ward identity;
the supertranslations (b.1) correspond to different vacuum states created by the gravitational memory effect (c.1);
the gravitational memory effect (c.1) reduces to the soft graviton theorem (a.1) via a Fourier transform; a schematic form of the soft graviton theorem is sketched below.
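For orientation only (this is a reconstruction of the standard leading-order statement in commonly used conventions, not a formula quoted from the papers discussed above), Weinberg's soft graviton theorem says that an amplitude with one additional graviton of momentum q and polarization ε_{μν} factorizes in the limit q → 0 as:

```latex
\mathcal{M}_{n+1}(p_1,\dots,p_n;\,q)
  \;\xrightarrow{\;q\to 0\;}\;
  \frac{\kappa}{2}\sum_{k=1}^{n}\eta_k\,
  \frac{\varepsilon_{\mu\nu}\,p_k^{\mu}\,p_k^{\nu}}{p_k\cdot q}\;
  \mathcal{M}_{n}(p_1,\dots,p_n)\;+\;\mathcal{O}(q^{0}),
  \qquad \kappa=\sqrt{32\pi G},
```

where η_k = +1 for outgoing and −1 for incoming hard particles; normalizations vary between references. The Cachazo–Strominger extension adds the subleading term in q, and the associated Ward identity is the one linked above to supertranslations.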
In addition to the first triangular relationship highlighted by the authors, several others may exist and have been hypothesized.
See also
Gravitational memory effect
Bondi-Metzner-Sachs group
Soft graviton theorem
References
Bibliography
External links
Theory of relativity
Quantum field theory | Pasterski–Strominger–Zhiboedov triangle | [
"Physics"
] | 630 | [
"Quantum field theory",
"Quantum mechanics",
"Theory of relativity"
] |
72,662,717 | https://en.wikipedia.org/wiki/Semantic%20spacetime | Semantic spacetime is a theoretical framework for agent-based modelling of spacetime, based on Promise Theory. It is relevant both as a model of computer science and as an alternative network based formulation of physics in some areas.
Semantic Spacetime was introduced by physicist and computer scientist Mark Burgess, in a series of papers called Spacetimes with Semantics, as a practical alternative way of describing space and time, initially for Computer Science. It attempts to unify both quantitative and qualitative aspects of spacetime processes into a single model. This is referred to by Burgess as covering both “dynamics and semantics”.
Promise theory is used as a representation for semantics. Directed adjacency is the graph theoretic logical primitive, but with the caveat that each node must both emit and absorb adjacency relations, cooperatively, similar to the unitary structure of quantum probabilities and transitions. Thus space is made up of cooperating nodes and edges. The representation of spacetime becomes a form of labelled graph, specifically built from promise theoretic bindings.
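As a rough illustration of this cooperative adjacency (a toy encoding invented for this sketch, not Burgess's formalism or any published API), one can model each node as making a "+" promise to offer a link and a matching "-" promise to accept one, with an effective labelled edge existing only when both sides promise:

```python
from dataclasses import dataclass, field

# Toy illustration only: an effective, labelled edge A -> B exists only when
# A offers the link ('+' promise) and B accepts it ('-' promise).
@dataclass
class Agent:
    name: str
    offers: dict = field(default_factory=dict)    # neighbour name -> label ('+' promises)
    accepts: dict = field(default_factory=dict)   # neighbour name -> label ('-' promises)

def effective_edges(agents):
    """Return labelled edges where an offer is matched by an acceptance."""
    by_name = {a.name: a for a in agents}
    edges = []
    for a in agents:
        for target, label in a.offers.items():
            if by_name[target].accepts.get(a.name) == label:
                edges.append((a.name, target, label))
    return edges

a, b, c = Agent("A"), Agent("B"), Agent("C")
a.offers["B"] = "adjacent"       # A offers adjacency to B ...
b.accepts["A"] = "adjacent"      # ... and B accepts: cooperative edge exists
a.offers["C"] = "adjacent"       # A offers adjacency to C, but C never accepts
print(effective_edges([a, b, c]))   # -> [('A', 'B', 'adjacent')]
```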
Origins
According to Burgess, Semantic Spacetime originates from asking what are the implications of Promise Theory to our understanding of space and time. The traditional view of spacetime seems to have no relevance to phenomena in computing, electronics, biology, or many other information based processes. The classical understanding of spacetime from Newton's era is based on ballistics, the idea about space and time was that of a purely passive theatre for the motion and behaviours of material bodies. Einstein partially changed that perception with General Relativity, in which spacetime geometry is an active participant with its own properties, i.e. curvature, energy, and mass. In the process models of Computer Science, Electronics, Biology, and Logistics, however, space is formed from functional components that act more like service providers. Processes are representations of autonomous modular outcomes, a result of information passing between agents in networks of such active components, with a certain strength of coupling.
Burgess also observed a relationship between semantic knowledge representations and the bigraphs of Robin Milner, but found existing languages excessively formal and lacking in expressibility. In Semantic Spacetime one uses the language of Promise Theory to formulate a process (spacetime) model for autonomous agents. The property of autonomy becomes closely linked to locality in physics, so the approach has an appeal to universality.
Relationship to other models
Burgess has stated that Semantic Spacetime is an attempt to demystify the explanation of certain phenomena in both physics and information science. "Until we can get past the prejudices of classical separation of science into disciplines we will not make progress in understanding computer systems at enormous scale".
In 2019, Burgess wrote an extended book about the idea called Smart Spacetime to encourage interest in the approach and explain the vision behind Semantic Spacetime, and made a documentary video. The book goes further in pointing out 'deep connections' to other fields of science, suggesting a multi-disciplinary viewpoint. Commentators have likened the idea to other graph theoretic models of spacetime, such as Causal Sets, Quantum Graphity and the Wolfram Physics Project; however, Burgess emphasizes key differences that go beyond the obvious use of graphs for modelling space in these writings.
In physics, spacetime is a purely quantitative description of metric properties, labelled by coordinates to map out a region or a volume; but in Information Sciences spacetime may also have semantics, or 'qualitative' functional aspects, which arise as the container of active processes. These also need to be included in descriptions of phenomena. Classically, this role is separated from space and time, but this may add layers of unwanted complexity, as there are hidden assumptions behind a model of spacetime.
For example, one region of space might be a factory, while another could be a river. In biology, cells are regions of spacetime that play different roles in an organism, and organs are larger regions composed of many cells. Regions of spacetime thus take on the role of agents, and a full description of the topology and dynamics of these may be required to model the behaviour of the whole. Semantic spacetime doesn't distinguish between space and matter; it treats matter as a local property of the spacetime network of agents.
Reception and usage
Burgess describes Semantic Spacetime as an idea in its infancy, with much work left to do, attracting a small amount of interest mainly from deep specialists. In a number of papers, he has developed applications of the idea mainly in the design of technology systems. In interviews he states that some documents, pertaining to technology, are proprietary and thus cannot be published or referenced.
The Semantic Spacetime model and Promise Theory were referenced as an approach to multi-model database design and Resource Description Framework embedding for ArangoDB.
Limited papers on smart data pipelines and consistent propagation of information have been based on semantic spacetime and led to startups Aljabr and Dianemo to develop the respective technologies. It has also been the subject of much interest for understanding 5G telecommunications, especially in China.
Applications of the model to neuroscience and machine learning were recognized by an invitation to a special closed event salon in October 2022 by the Kavli Foundation (United States).
Virtual Motion and Sociophysics
Semantic Spacetime identifies three ways in which motion can be understood for a graph. These are called Motion of the First, Second, and Third kinds. Burgess writes that 'The semantics of ordinary space and time are diverse in interpretation. For space, we think of distance, trajectory, adjacency (topology), neighbourhood, continuity, direction, etc. For time, we have clock time, duration, time of day, partial ordering, etc.'. Semantic spacetime unifies these in promise theoretic (and thus graph theoretic) language.
The notion of Semantic Spacetime allows phenomena in Cloud computing to be viewed as a form of virtual physics, in which processes and properties (such as data records) can move around from host to host as moving promises. A description of this in terms of Promise Theory and Semantic Spacetime has been developed in a series of papers called Motion of the Third Kind. Burgess has claimed that we should expect to "rediscover physics again in the cloud".
Trust is the underlying measure of promise keeping in Promise Theory. Semantic Spacetime has also been used as an agent-based model for sociophysics in which trust plays a role similar to that of energy in ordinary mechanics.
Tutorial series
A tutorial series with programming examples was published under the name "Semantic Spacetime and Data Analytics". A video documentary called Bigger, Faster, Smarter was also produced.
References
Formal methods
Theoretical computer science | Semantic spacetime | [
"Mathematics",
"Engineering"
] | 1,356 | [
"Theoretical computer science",
"Applied mathematics",
"Software engineering",
"Formal methods"
] |
72,666,509 | https://en.wikipedia.org/wiki/Zandkreekdam | The Zandkreekdam is a compartmentalisation dam located approximately 3 kilometres north of the city of Goes in The Netherlands, which connects Zuid-Beveland with Noord-Beveland, and separates the Oosterschelde from the Veerse Meer.
A navigation lock in the dam permits shipping connections to Middelburg and Vlissingen, via the Veerse Meer and the Walcheren navigation channel. The Zandkreekdam is 830 metres in length, and was the first compartmentalisation dam to be constructed as part of the Delta Works, having been proposed by Johan van Veen as part of the (English: Three Islands Plan) which originated in the 1930s. It was the second project constructed under the Delta Works Plan, after the Stormvloedkering Hollandse IJssel which was completed in 1958.
The construction of the Zandkreekdam, together with the Veerse Gatdam in 1961, created the freshwater Veerse Meer (Veerse Lake). Poor water quality in the lake led to the decision to build a control lock, known as the , which was completed in 2004 and re-established saltwater intrusion from the Oosterschelde into the Veerse Meer, and led to a significant improvement in water quality. There are two bridges at the Zandkreekdam locks to permit vehicular traffic to pass over it at any time.
Johan van Veen's Three Islands Plan required that construction of the Zandkreekdam and the Veerse Gatdam should be undertaken as early as possible in the Delta Works programme, to permit Dutch civil engineers and contractors to gain experience that would be necessary for more complicated Delta Works projects such as the Brouwersdam and Oosterscheldekering.
Feasibility, planning and design
Johan van Veen had been developing his Three Islands Plan since the 1930s, in which he considered land reclamation around the islands of Walcheren, Noord-Beveland and Zuid-Beveland and proposed the closure of two bodies of water: the Veerse Gat and the Zandkreek. In combination with the effects of the previously-constructed Sloedam, this would shorten the coastline from 52 kilometres to 2.5 kilometres and open up large areas of land which could then be reclaimed from the sea.
Van Veen recognised the need to close both bodies of water, with the Zandkreekdam acting as a secondary dam to make the works on the Veerse Gatdam easier and therefore being constructed first. Having made extensive studies, van Veen realised that the closure of the Veerse Gat alone would cause unacceptable tidal streams in the Zandkreek.
The Delta Plan was of such unprecedented size and complexity that the plan was to start with the easiest parts and gain experience along the way. There were a total of four sea arms to be closed in the Delta region, of which the Veerse Gat - extending east into the Zandkreek - was the smallest.
By commencing with the smaller works, the engineers of the Delta Service could thus gain knowledge of construction methods, materials, and equipment - essential exercises for closing the larger Brouwershavense Gat and the Eastern Scheldt. The location pinpointed by van Veen for the Zandkreekdam is at a , a Dutch term for the point at which the tidal currents from both sea arms meet at high tide, and the current is minimal.
It was also important that construction of the Veerse Gatdam did not lag too far behind the Zandkreekdam, as closing only the Zandkreek would dangerously increase the effects of storm surges in both the Veerse Gat and the Zandkreek.
The body set up to implement the Delta Works scheme, known as the (English: Delta Commission), adopted the Three Islands Plan and the Zandkreekdam was taken forward. The design was based on the use of caissons 6 metres high, 7.5 metres wide and 11 metres long to form a closure dam, along with the construction of a lock to permit navigation.
Construction
Construction began in the spring of 1957, with dredging undertaken to form a foundation trench 6.5 metres below Amsterdam Ordnance Datum (N.A.P.). Weak soils including soft clay and peat were removed and replaced with approximately 160,000 cubic metres of sand, and excavation depths up to 14 metres below N.A.P. were realised. Unit caissons were used to construct the dam, with the maximum depth of the closing hole being 5 m below N.A.P.
On 3 May 1960, a pair of caissons were sunk into the final gap and the dam was then completed to a height of 8.25m above N.A.P.
The navigation lock, 140 metres long and 20 metres wide, was ready for shipping in the spring of 1960.
See also
Delta Works
Flood control in the Netherlands
Rijkswaterstaat
Johan van Veen
References
External links
Information on the Zandkreekdam from the official Watersnoodmuseum website
Delta Works
Dams completed in 1960
Dams in Zeeland
Noord-Beveland
Zuid-Beveland
Transport in Goes | Zandkreekdam | [
"Physics"
] | 1,097 | [
"Physical systems",
"Hydraulics",
"Delta Works"
] |
74,077,355 | https://en.wikipedia.org/wiki/Titan%20submersible%20implosion | On 18 June 2023, Titan, a submersible operated by the American tourism and expeditions company OceanGate, imploded during an expedition to view the wreck of the Titanic in the North Atlantic Ocean off the coast of Newfoundland, Canada. Aboard the submersible were Stockton Rush, the American chief executive officer of OceanGate; Paul-Henri Nargeolet, a French deep-sea explorer and Titanic expert; Hamish Harding, a British businessman; Shahzada Dawood, a Pakistani-British businessman; and Dawood's son, Suleman.
Communication between Titan and its mother ship, Polar Prince, was lost 1 hour and 33 minutes into the dive. Authorities were alerted when it failed to resurface at the scheduled time later that day. After the submersible had been missing for four days, a remotely operated underwater vehicle (ROV) discovered a debris field containing parts of Titan about 500 metres from the bow of the Titanic. The search area was informed by the United States Navy's (USN) sonar detection of an acoustic signature consistent with an implosion around the time communications with the submersible ceased, suggesting the pressure hull had imploded while Titan was descending, resulting in the instantaneous deaths of all five occupants.
The search and rescue operation was performed by an international team organized by the United States Coast Guard (USCG), USN, and Canadian Coast Guard. Support was provided by aircraft from the Royal Canadian Air Force and United States Air National Guard, a Royal Canadian Navy ship, as well as several commercial and research vessels and ROVs.
Numerous industry experts had stated concerns about the safety of the vessel. OceanGate executives, including Rush, had not sought certification for Titan, arguing that excessive safety protocols and regulations hindered innovation.
Background
OceanGate
OceanGate was a private company, initiated in 2009 by Stockton Rush and Guillermo Söhnlein. From 2010 until the loss of the Titan submersible, OceanGate transported paying customers in leased commercial submersibles off the coast of California, in the Gulf of Mexico, and in the Atlantic Ocean. The company was based in Everett, Washington, US.
Rush realized that visiting shipwreck sites was a method of getting media attention. OceanGate had previously conducted voyages to other shipwrecks, including its 2016 dive to the wreck of aboard its other submersible Cyclops 1. (A near disaster on that expedition was recounted in Vanity Fair in 2023.) In 2019, Rush told Smithsonian magazine: "There's only one wreck that everyone knows... If you ask people to name something underwater, it's going to be sharks, whales, Titanic".
Titanic
The Titanic was a British ocean liner that sank in the North Atlantic Ocean on 15 April 1912, after colliding with an iceberg. More than 1,500 people died, making it the deadliest sinking of a single ship at the time. In 1985, Robert Ballard located the wreck of the Titanic from the coast of Newfoundland. The wreck lies at a depth of about . Since its discovery, it has been a destination for research expeditions and tourism. By 2012, 140 people had visited the wreck site.
Submersible Titan
Formerly known as Cyclops 2, Titan was a five-person submersible vessel operated by OceanGate Inc. The , vessel was constructed from carbon fibre and titanium. The entire pressure vessel consisted of two titanium hemispheres (domes) with matching titanium interface rings bonded to the internal diameter, carbon fibre-wound cylinder. One of the titanium hemispherical end caps could be detached to provide the hatch and was fitted with a acrylic window. In 2020, Rush said that the hull, originally designed to reach below sea level, had been downgraded to a depth rating of after demonstrating signs of cyclic fatigue. In 2020 and 2021, the hull was repaired or rebuilt. Rush told the Travel Weekly editor-in-chief that the carbon fibre had been sourced at a discount from Boeing because it was too old for use in the company's airplanes. Boeing stated they have no records of any sale to Rush or to OceanGate. OceanGate had initially not sought certification for Titan, arguing that excessive safety protocols hindered innovation. Lloyd's Register, a ship classification society, refused OceanGate's request to class the vessel in 2019.
Titan could move at as much as using four electric thrusters, arrayed two horizontal and two vertical. Its steering controls consisted of a Logitech F710 wireless game controller with modified longer analogue sticks resembling traditional joysticks. The University of Washington's Applied Physics Laboratory assisted with the control design on the Cyclops 1 using a DualShock 3 video game controller, which was carried over to Titan, substituting with the Logitech controller. The use of commercial off-the-shelf game controllers is common for remote-controlled vehicles such as unmanned aerial vehicles or bomb disposal robots, whilst the United States Navy uses Xbox 360 controllers to control periscopes in s.
OceanGate claimed on its website that Titan was "designed and engineered by OceanGate Inc. in collaboration [with] experts from NASA, Boeing, and the University of Washington" (UW). A -scale model of the Cyclops 2 pressure vessel was built and tested at the Applied Physics Laboratory (APL) at UW; the model was able to sustain a pressure of , corresponding to a depth of about . After the disappearance of Titan in 2023, these earlier associates disclaimed involvement with the Titan project. UW claimed the APL had no involvement in the "design, engineering, or testing of the Titan submersible". A Boeing spokesperson also claimed Boeing "was not a partner on Titan and did not design or build it". A NASA spokesperson said that NASA's Marshall Space Flight Center had a Space Act Agreement with OceanGate, but "did not conduct testing and manufacturing via its workforce or facilities". It was designed and developed originally in partnership with UW and Boeing, both of which put forth numerous design recommendations and rigorous testing requirements, which Rush ignored, despite prior tests at lower depths resulting in implosions at UW's lab. The partnerships dissolved as Rush refused to work within quality standards.
According to OceanGate, the vessel contained monitoring systems to continuously monitor the strength of the hull. The vessel had life support for five people for 96 hours. There is no GPS underwater; the support ship, which monitored the position of Titan relative to its target, sent text messages to Titan providing distances and directions.
According to OceanGate, Titan had several backup systems intended to return the vessel to the surface in case of emergency, including ballasts that could be dropped, a balloon, thrusters, and sandbags held by hooks that dissolved after a certain number of hours in saltwater. Ideally, this would release the sandbags, allowing the vessel to float to the surface. An OceanGate investor explained that if the vessel did not ascend automatically after the elapsed time, those inside could help release the ballast either by tilting the ship back and forth to dislodge it or by using a pneumatic pump to loosen the weights.
Dives to wreck of Titanic
Dives by Titan to the wreck of the Titanic occurred as part of multi-day excursions organized by OceanGate, which the company referred to as "missions". Five missions occurred in the middle of 2021 and 2022. Titan imploded during the fifth mission of 2023; it was the first mission of the year in which a dive came close to Titanic, due to poor weather during previous attempts.
Passengers would sail to and from the wreckage site aboard a support ship and spend approximately five days in the ocean above the Titanic wreckage site. Two dives were usually attempted during each excursion, though dives were often cancelled or aborted due to weather or technical malfunctions.
Each dive typically had a pilot, a guide, and three paying passengers aboard. Once inside the submersible, the hatch would be bolted shut and could only be reopened from the outside. The descent from the surface to the Titanic wreck typically took two hours, with the full dive taking about eight hours. Throughout the journey, the submersible was expected to emit a safety ping every 15 minutes to be monitored by the above-water crew. The vessel and surface crew were also able to communicate via brief text messages.
Customers who travelled to the wreck with OceanGate, referred to as "mission specialists" by the company, paid each for the eight-day expedition.
OceanGate intended to perform multiple dives to the Titanic wreck in 2023, but the dive in which Titan was destroyed was the only one the company had launched that year.
Safety
Because Titan operated in international waters and did not carry passengers from a port, it was not subject to safety regulations. The vessel was not certified as seaworthy by any regulatory agency or third-party organization. Reporter David Pogue, who completed the expedition in 2022 as part of a CBS News Sunday Morning feature, said that all passengers who enter Titan sign a waiver confirming their knowledge that it is an "experimental" vessel "that has not been approved or certified by any regulatory body, and could result in physical injury, disability, emotional trauma or death". Television producer Mike Reiss, who also completed the expedition, said the waiver "mention[s] death three times on page one". A 2019 article published in Smithsonian magazine referred to Rush as a "daredevil inventor". In the article, Rush is described as having said that the U.S. Passenger Vessel Safety Act of 1993 "needlessly prioritized passenger safety over commercial innovation". In a 2022 interview, Rush told CBS News, "At some point, safety just is pure waste. I mean, if you just want to be safe, don't get out of bed. Don't get in your car. Don't do anything." Rush said in a 2021 interview, "I've broken some rules to make [Titan]. I think I've broken them with logic and good engineering behind me. The carbon fibre and titanium, there's a rule you don't do that. Well, I did."
OceanGate claimed that Titan was the only crewed submersible that used an integrated real-time monitoring system (RTM) for safety. The proprietary system, patented by Rush in 2021, used acoustic sensors and strain gauges at the pressure boundary to analyse the effects of increasing pressure as the watercraft ventured deeper into the ocean and to monitor the hull's integrity in real time. This would supposedly give early warning of problems and allow enough time to abort the descent and return to the surface.
Prior concerns
In 2018, OceanGate's director of marine operations, David Lochridge, composed a report documenting safety concerns he had about Titan. In court documents, Lochridge said that he had urged the company to have Titan assessed and certified by the American Bureau of Shipping, but OceanGate had refused to do so, instead seeking classification from Lloyd's Register. He also said that the transparent viewport on its forward end, due to its nonstandard and therefore experimental design, was only certified to a depth of , only a third of the depth required to reach the Titanic wreck. According to Lochridge, RTM would "only show when a component is about to fail – often milliseconds before an implosion" and could not detect existing flaws in the hull before it was too late. Lochridge was also concerned that OceanGate would not perform nondestructive testing on the vessel's hull before undertaking crewed dives and alleged that he was "repeatedly told that no scan of the hull or Bond Line could be done to check for delaminations, porosity and voids of sufficient adhesion of the glue being used due to the thickness of the hull". The viewport was rated to only , and the engineer of the viewport also prepared an analysis from an independent expert that concluded the design would fail after only a few 4,000 m dives.
OceanGate said that Lochridge, who was not an engineer, had refused to accept safety approvals from OceanGate's engineering team and that the company's evaluation of Titan's hull was stronger than any kind of third-party evaluation Lochridge thought necessary. OceanGate sued Lochridge for allegedly breaching his confidentiality contract and making fraudulent statements. Lochridge counter-sued, stating that his employment had been wrongfully terminated as a whistleblower for stating concerns about Titan's ability to operate safely. The two parties settled the case a few months later, before it came to court. He filed a whistleblower complaint with the Occupational Safety and Health Administration, but withdrew it after the lawsuit was filed.
Later in 2018, a group organized by William Kohnen, the chair of the Submarine Group of the Marine Technology Society, drafted a letter to Rush expressing "unanimous concern regarding the development of 'TITAN' and the planned Titanic Expedition", indicating that the "current experimental approach ... could result in negative outcomes (from minor to catastrophic) that would have serious consequences for everyone in the industry". The letter said that OceanGate's marketing of the Titan was misleading because it claimed that the submersible would meet or exceed the safety standards of classification society DNV, even though the company had no plans to have the craft certified formally by the society. While the letter was never sent officially by the Marine Technology Society, it did result in a conversation with OceanGate that resulted in some changes, but in the end Rush "agreed to disagree" with the rest of the civilian submarine community. Kohnen told the New York Times that Rush had telephoned him after reading it to tell him that he believed industry standards were stifling innovation.
Another signatory, engineer Bart Kemper, agreed to sign the letter because of OceanGate's decision not to use established engineering standards like ASME Pressure Vessels for Human Occupancy (PVHO) or design validation. Kemper said the submersible was "experimental, with no oversight". Kohnen and Kemper stated OceanGate's methods were not representative of the industry. Kohnen and Kemper are both members of the ASME Codes and Standards committee for PVHOs, which develops and maintains the engineering safety standards for submarines, commercial diving systems, hyperbaric systems, and related equipment. Kemper is an engineering researcher who has published a number of technical papers on submarine windows, including the need to innovate.
In March 2018, one of Boeing's engineers involved in the preliminary designs, Mark Negley, carried out an analysis of the hull and emailed Rush directly stating, "We think you are at high risk of a significant failure at or before you reach 4,000 meters. We do not think you have any safety margin." He included a graph of the strain of the design with a skull and crossbones at a red line of 4,000 meters.
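For a sense of scale (a back-of-the-envelope estimate using an assumed mean seawater density of roughly 1,025 kg/m³, not a figure taken from the investigation), the hydrostatic pressure at the 4,000-metre depth cited in these warnings works out to roughly 40 MPa, on the order of 400 atmospheres:

```python
# Rough hydrostatic pressure at depth, P = rho * g * h, assuming constant density.
rho = 1025.0     # approximate seawater density, kg/m^3 (assumed)
g = 9.81         # gravitational acceleration, m/s^2
depth = 4000.0   # depth, m

pressure_pa = rho * g * depth
print(f"~{pressure_pa / 1e6:.1f} MPa")             # ~40.2 MPa
print(f"~{pressure_pa / 101325:.0f} atmospheres")  # ~397 atm
```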
Also in March 2018, Rob McCallum, a major deep sea exploration specialist, emailed Rush to warn him he was potentially risking his clients' safety and advised against the submersible's use for commercial purposes until it had been tested independently and classified: "I implore you to take every care in your testing and sea trials and to be very, very conservative." Rush replied that he was "tired of industry players who try to use a safety argument to stop innovation ... We have heard the baseless cries of 'you are going to kill someone' way too often. I take this as a serious personal insult". McCallum then sent Rush another email in which he said: "I think you are potentially placing yourself and your clients in a dangerous dynamic. In your race to Titanic you are mirroring that famous catch cry: 'She is unsinkable.'" This prompted OceanGate's lawyers to threaten McCallum with legal action.
In 2022, the British actor and television presenter Ross Kemp, who had participated previously with deep sea dives for the television channel Sky History, had planned to mark the 110th anniversary of the sinking of the Titanic by recording a documentary in which he would undertake a dive to the wreck using Titan. Kemp's agent Jonathan Shalit said that the project was cancelled after checks by production company Atlantic Productions deemed the submersible to be unsafe and not "fit for purpose".
Previous incidents
In 2021, a new hull was constructed after a previous hull had cracked after 50 submersion dives, only three of which were to 4,000 m. Scale models of the hull imploded at the UW lab, so a different method of curing the hull was developed and passed a full-sized pressure test at a facility in Maryland. Rush refused to construct new domes and other components from the failed submersible and instructed the engineers to salvage and reuse parts. Anonymous former employees told Wired that damage to the components could have weakened the join with the new hull. They also added lifting rings, which was previously warned against by engineers because the submersible could not handle any tension or load.
In 2022, reporter David Pogue was aboard the surface ship when Titan became lost and could not locate the wreck of the Titanic during a dive. Pogue's December 2022 report for CBS News Sunday Morning, which questioned Titan safety, went viral on social media after the submersible lost contact with its support ship in June 2023. In the report, Pogue commented to Rush that "it seems like this submersible has some elements of MacGyvery jerry-rigged-ness". He said that a $30 Logitech F710 wireless game controller with modified control sticks was used to steer and pitch the submersible and that construction pipes were used as ballast.
In another 2022 dive to the wreck, one of Titan's thrusters was accidentally installed backwards and the submersible started spinning in circles when trying to move forward near the sea floor. As documented in the BBC documentary Take Me to Titanic, the issue was bypassed by steering while holding the game controller sideways. According to November 2022 court filings, OceanGate reported that, in a 2022 dive, the submersible suffered from battery problems and, as a result, had to be attached manually to a lifting platform, causing damage to external components.
On 15 July 2022 (dive 80), Titan experienced a "loud acoustic event" as it was ascending, which was heard by the passengers aboard and picked up by Titan's real-time monitoring system (RTM). Data from the RTM later revealed that the hull had permanently shifted following this event.
Incident
Expedition arrangements
The voyage was booked in early 2023. Rush offered Jay Bloom, an American businessman, two discounted tickets, intending for Bloom and his son to be on the excursion. Bloom, a billionaire, was offered a price of $150,000 per seat, rather than the full price of $250,000, with Rush claiming that it was "safer than crossing the street", but Bloom declined the offer due to his concerns about its safety. At that time, the excursion was scheduled for May, but unfavourable weather caused it to be delayed until June.
16–17 June preparations
On 16 June 2023 at 9:31 a.m. (local time; 12:01 UTC), the expedition to the Titanic wreck, which the company referred to as "Mission 5," departed from St. John's, Newfoundland, aboard the Canadian-flagged research and expedition ship Polar Prince. One of the occupants, Hamish Harding, posted on Facebook: "Due to the worst winter in Newfoundland in 40 years, this mission is likely to be the first and only crewed mission to Titanic in 2023. A weather window has just opened up and we are going to attempt a dive tomorrow." He also indicated that the operation was scheduled to begin about 4:00 a.m. EDT (08:00 UTC).
18 June, dive, disappearance, and implosion
The ship arrived in the vicinity of the Titanic wreck site on 18 June at 5:15 a.m. Newfoundland Daylight Time (NDT; UTC−02:30). Around 8:30 a.m., five people boarded Titan, which was mounted on top of a floating platform known as the launch and recovery system (LARS). Subsequently, the forward dome was secured for the expedition designated by the company as "Dive 88". At 8:55 a.m., the platform was vented, causing it to sink below the surface of the water. At 9:18 a.m., Titan disengaged from the platform and commenced diving. For the first hour and a half of the descent, Titan communicated with Polar Prince via text about every 15 minutes and received a "ping" every 5–10 seconds. At a depth of , the submersible sent "all good here", and the usual "pings" continued on the communications channel. No messages during the descent indicated trouble. A final text communication, reading "dropped two wts", was sent from Titan at 10:47:27 a.m. at an approximate depth of . The final "ping" (data) from Titan was received at 10:47:33 a.m. NDT (13:17:33 UTC), at a depth of . Titan's location was .
A U.S. Navy acoustic detection system designed to locate military submarines detected an acoustic signal consistent with an implosion hours after Titan submerged.
Shortly after the disaster, James Cameron indicated that it was likely the submersible's early warning system alerted the passengers to an impending delamination of the hull, saying "we understand from inside the community that they had dropped their ascent weights and were coming up, trying to manage an emergency." Bob Ballard, the discoverer of the Titanic wreck, also said that the crew was likely "experiencing difficulties" and was trying to ascend at the time of the implosion.
In September 2024, Tym Catterson, an OceanGate contractor who was aboard the Polar Prince at the time of the disaster, testified at the United States Coast Guard's inquiry that there is no indication the crew was aware of any problems before the implosion. The last human-written communication by Titan indicated that they dropped two weights, amounting to about of the or of dropweights on board. This was apparently routine to adjust the Titans buoyancy from negative to neutral as it approached the seabed, and was an indication that the crew was not aware of any emergency situation. The last automatic ping was received by the Polar Prince approximately six seconds later, after which contact was lost.
Simulations developed in 2023 suggest the implosion of the vessel took less than one second, likely only tens of milliseconds, faster than the brain can process information; there would not have been time for the victims to experience the collapse of the hull, and they would have died immediately, with no pain, as their bodies were crushed.
18–22 June, search and rescue efforts
The submersible was expected to resurface at 4:30 p.m. (19:00 UTC). At 7:10 p.m. (21:40 UTC), the U.S. Coast Guard was notified that the vessel was missing. The Navy reviewed its acoustic data from that time, and passed the information about the possible implosion event to the Coast Guard. Titan had as much as 96 hours of breathable air supply for its five passengers when it set out, which would have expired on the morning of 22 June 2023 if the submersible had remained intact.
The United States Coast Guard, United States Navy, and Canadian Coast Guard organized the search. Aircraft from the Royal Canadian Air Force and United States Air National Guard, a Royal Canadian Navy ship, and several commercial and research ships and remotely operated underwater vehicle (ROVs) also assisted with the search. The surface was searched, as were the depths by sonar.
Crews from the United States Coast Guard launched search missions from the shore of Cape Cod, Massachusetts. Joint Rescue Coordination Centre Halifax reported that a Royal Canadian Air Force Lockheed CP-140 Aurora aircraft and CCGS Kopit Hopson 1752 were participating in the search in response to a request for assistance by the Maritime Rescue Coordination Center in Boston made on 18 June at 9:43 p.m. (00:13 UTC). The search on 19 June involved three C-130 Hercules aircraft, two from the United States and one from Canada; a P-8 Poseidon anti-submarine warfare aircraft from the United States, and sonobuoys. Search and rescue was hampered by low-visibility weather conditions, which cleared the next day.
The U.S. Coast Guard indicated that the search and rescue mission was difficult because of the remote location, weather, darkness, sea conditions, and water temperature. Rear Admiral John Mauger said that they were "deploying all available assets". Many submersibles have acoustic beacons that can be detected underwater by rescuers; Titan did not.
The pipe-laying ship Deep Energy, operated by TechnipFMC, arrived on site on 20 June 2023, with two ROVs and other equipment suited to the seabed depths in the area. As of 10:45 a.m. (13:15 UTC), the U.S. Coast Guard had searched . The New York Air National Guard's 106th Rescue Wing joined in the search and rescue mission with a HC-130J, with plans for two more to join by the end of the day.
According to an internal U.S. government memo, a Canadian CP-140 Aurora's sonar picked up underwater noises while searching for the submersible. The U.S. Coast Guard officially acknowledged the sounds early the next morning, but reported that early investigations had not yielded results. Rear Admiral John Mauger of the U.S. Coast Guard said the source of the noise was unknown and may have come from the many metal objects at the site of the wreck. A Canadian CP-140 Aurora airplane had previously spotted a "white rectangular object" floating on the surface. A ship sent to find and identify the object was diverted to help find the source of the noise. The noises were later described by the U.S. Coast Guard as being apparently unrelated to the missing vessel.
CCGS John Cabot arrived on the morning of 21 June, bringing additional sonar capabilities to the search effort. Commercial vessels Skandi Vinland and Atlantic Merlin also arrived that day, as did a US Coast Guard C-130 crew. As of about 3:00 p.m. (17:30 UTC), five air and water vehicles were searching actively for Titan, and another five were expected to arrive in the next 24–48 hours. Search and rescue assets included two ROVs, one CP-140 Aurora aircraft, and the C-130 aircraft.
The U.S. Navy's Flyaway Deep Ocean Salvage System (FADOSS), a portable ship-mounted lift system designed to recover large and heavy objects from the deep sea, arrived in St. John's, though no ships were immediately available to carry the system to the wreck site. Officials estimated it would take about 24 hours to weld the FADOSS system to the deck of a carrier ship before it could set sail to the search and rescue operation.
Despite increasing concerns about the depletion of air supplies in Titan, a U.S. Coast Guard spokesperson said at a press conference "This is a search and rescue mission 100%", rather than a wreckage recovery mission.
An Odysseus6k ROV from Pelagic Research Services, travelling aboard the Canadian-flagged offshore tugboat MV Horizon Arctic, reached the sea floor and began its search for the missing submersible. The French RV L'Atalante also deployed its ROV Victor 6000, which can reach depths of about 6,000 metres and transmit images to the surface.
22 June, discovery of debris
At 1:18 p.m. (15:48 UTC) on 22 June the U.S. Coast Guard's Northeast Sector announced that a debris field had been found near the wreck of the Titanic. The debris, located by Pelagic Research Services' Odysseus6k ROV five hours into its search, was later confirmed to be part of the submersible. At 4:30 p.m. (19:00 UTC) – at a U.S. Coast Guard press conference in Boston – the Coast Guard said that the loss of the submersible was due to an implosion of the pressure chamber and that pieces of Titan had been found on the sea floor about 1,600 feet (about 500 metres) northeast of the bow of the Titanic.
The identified debris consisted of the tail cone (not part of the pressure vessel) and the forward and aft end bells – both part of the pressure vessel intended to protect the crew from the ocean environment. According to the U.S. Coast Guard, the debris field was concentrated in two areas, with the aft end bell lying separate from the front end bell and the tail cone.
Rear Admiral John Mauger of the US Coast Guard said that the debris was consistent with a "catastrophic loss of the pressure chamber". Mauger stated that he did not have an answer as to whether the bodies of those on board would be recovered, but he did say that it was "an incredibly unforgiving environment".
Fatalities
The implosion killed all five occupants: Stockton Rush (OceanGate's chief executive, who was piloting the submersible), Paul-Henri Nargeolet, Hamish Harding, Shahzada Dawood, and his son Suleman Dawood.
Recovery operations
Pelagic Research Services confirmed on 23 June 2023 that a new mission to the Titan debris field was already underway and that it had taken the Odysseus 6k ROV one hour to reach the site to continue searching and documenting debris.
It was further reported that the debris from Titan was too heavy for Pelagic's ROV to lift and that any recovery would need to occur at a later time.
On 24 June, Polar Prince returned to St. John's harbour. In their bid to understand what caused Titan's catastrophic loss, investigators boarded the support ship. Another boat was seen in the harbour towing the floating launch platform, referred to by the company as the launch and recovery system (LARS), from which Titan was deployed.
On 28 June, Horizon Arctic returned to St. John's Harbour with the remains of Titan that were recovered from the debris field. Photographs and videos showed the titanium covers on both ends of Titan intact with the single viewport missing, mangled pieces of the tail cone, electronics, the landing frame, and other debris. The debris was to be transported to the U.S. as evidence for the investigation. The Coast Guard confirmed that presumed human remains were found within the debris, and that American medical professionals would conduct an analysis. Pelagic Research Services, which was operating the Odysseus 6K ROV from Horizon Arctic, confirmed that its team had completed its mission. The recovered human remains underwent DNA testing, though no findings were released at the time. In September 2024, during the public hearing by the Marine Board of Investigation, the USCG confirmed that the Armed Forces DNA Identification Laboratory, located in Dover, Delaware, had positively identified DNA profiles for all five victims.
On 30 June, Insider published an analysis of the recovery photos by Plymouth University professor Jasper Graham-Jones. He concluded that a failure of the carbon-fibre hull was the most likely cause of the loss, given that no large pieces of carbon fibre are known to have been recovered. Another possible cause was the acrylic viewing window. He noted that the window was absent from its bell housing when it was recovered. While the salvage team may have removed the window before salvaging its bell housing, they more likely would have left it in place. However, Graham-Jones said that if the window had failed before the hull rather than after, he would have expected larger pieces of carbon fibre to be recovered.
In early October 2023, engineers recovered the remaining debris and additional presumed human remains.
Investigations
On 23 June, both the Canadian and the United States federal governments announced that they were beginning investigations of the incident. They were joined by authorities from France (Bureau d'Enquêtes sur les Événements de Mer, BEAmer) and the United Kingdom (Marine Accident Investigation Branch, MAIB) by 25 June; the final report will be issued to the International Maritime Organization (IMO). Whether lasting reforms will result from the investigation is uncertain. While there are a variety of possible options, the IMO may not have the appropriate regulatory authority.
United States
The United States investigation is being directed by the Coast Guard (USCG) with support from the National Transportation Safety Board; the Coast Guard is taking control because it declared the incident a "major marine casualty". USCG Captain Jason Neubauer has been named the chief investigator for a Marine Board of Investigation. Though at first it was anticipated to be completed within one year, the USCG eventually acknowledged it would take longer. "The investigation into the implosion of the Titan submersible is a complex and ongoing effort", said Neubauer in June 2024. "We are working closely with our domestic and international partners to ensure a comprehensive understanding of the incident."
Canada
The Transportation Safety Board of Canada (TSB) is investigating because Titan's support vessel, MV Polar Prince, is a Canadian-flagged ship. A team of TSB investigators headed to the port of origin, St. John's, Newfoundland, to "gather information, conduct interviews and assess the occurrence", with other agencies also expected to be involved. The Royal Canadian Mounted Police (RCMP) also announced that it was performing a preliminary examination of the incident in order to determine whether to begin a full investigation, which will occur if the RCMP determines that criminal, federal, or provincial laws were broken.
Lawsuit
On 6 August 2024, Nargeolet's family sued OceanGate for wrongful death.
Financial costs of operations
Numerous assets from the U.S. Air Force and the U.S. Coast Guard were deployed to search for the submersible, and subsequently to retrieve the victims' remains. A Washington Post analysis by Mark Cancian, a defence budget expert, estimated the cost of U.S. Coast Guard operations alone at about US$1.2 million of taxpayers' money as of 23 June 2023, not including the later operations to recover the submersible's debris. Cancian said that while the Titan search operation was funded by money already in the federal budget, the U.S. military would absorb some unexpected costs, since personnel and equipment were used in an unforeseen manner. Deploying a single Lockheed CP-140 Aurora aircraft and 341 sonobuoys cost Canadian taxpayers at least CAD$3 million, and the total Canadian contribution is likely to be much greater when all expenditures are tallied.
Chris Boyer of the National Association for Search and Rescue said the search for Titan likely cost millions of dollars of public funds; however, the USCG refused to give an estimate, saying they "do not associate cost with saving a life". According to U.S. attorney Stephen Koerting, the USCG is generally prohibited by federal law from collecting reimbursement related to any search or rescue service.
The incident renewed past debates about whether taxpayers should bear the cost of search and rescue missions involving wealthy people engaged in high-risk adventuring, such as incidents involving Steve Fossett and Richard Branson.
Reactions
Sean Leet, co-founder and chair of Horizon Maritime Services, the company that owns Polar Prince, commented publicly on the scale of the search and rescue response.
The scale of the search and rescue efforts and of the media coverage, compared with those for the Messenia migrant boat disaster that occurred days earlier, sparked criticism. In the Ionian Sea off the coast of Pylos, Messenia, Greece, a fishing boat carrying an estimated 400 to 750 migrants had sunk, leaving nearly 100 people confirmed dead, another 100 rescued, and hundreds more missing and presumed dead. Search and rescue efforts for the migrant ship were conducted by the Hellenic Coast Guard and military. Ishaan Tharoor of The Washington Post wrote that Pakistani Internet users compared and contrasted the Pakistani victims of the two incidents, who were on opposite sides of Pakistan's large socioeconomic divide. According to David Scott-Beddard, the CEO of White Star Memories Ltd, a Titanic exhibition company, the incident made future research expeditions to the Titanic wreck less likely.
James Cameron, director of the 1997 movie Titanic, who has visited the Titanic wreck 33 times and piloted Deepsea Challenger to the bottom of the Mariana Trench, said he was "struck by the similarity" between the submersible's implosion and the events that resulted in the Titanic disaster. He noted that both disasters seemed preventable, and were caused indirectly by someone deliberately ignoring safety warnings from others. Cameron criticized the choice of carbon-fibre composite construction for the pressure vessel, saying it has "no strength in compression" when subjected to the immense pressures at depth. Cameron said that pressure hulls should be made out of contiguous materials such as steel, titanium, ceramic, or acrylic, and that the wound carbon fibre of Titan's hull had seemed like a bad idea to him from the beginning. He stated that it was long known that composite hulls were vulnerable to microscopic water ingress, delamination, and progressive failure over time. He also criticized Rush's real-time monitoring of the hull as an inadequate solution that would do little to prevent an implosion. Cameron expressed regret for not being more outspoken about these concerns before the accident, and criticized what he termed "false hopes" being presented to the victims' families; he and his colleagues realized early on that for communication and tracking (the latter housed in a separate pressure vessel, with its own battery) to be lost simultaneously, the cause was almost certainly a catastrophic implosion.
The Logitech F710 game controller used to steer Titan sold out on Amazon soon after the incident, which was described as "a more benign form of disaster tourism" by the Cut, a website of New York magazine.
In social and mass media
The submersible became widely discussed on social media as the story developed and was the subject of "public schadenfreude", inspiring grimly humorous Internet memes, including interactive video game recreations and image macros that ridiculed the submersible's deficient construction, OceanGate's perceived poor safety record, and the individuals who died. The memes were criticized as insensitive, with David Pogue regarding such media as "inappropriate and a little bit sick". Some felt the negative reaction to the victims may have been a response to past news coverage of other expeditions by billionaires, often using their own companies such as Blue Origin. Molly Roberts wrote in The Washington Post that those joking about the incident were demonstrating Internet users' impulses to be ironic, provocative, and angry with each other, combined with an "eat-the-rich attitude".
According to Pamela Rutledge, an American expert in media psychology, social media, and mass media, the Titan incident was widely treated on social media as entertainment. Major elements included the allure of disasters, fascination with the wealthy, conspiracy theories, uncertainty, and the mythology of the Titanic, as well as the romance of rescue operations. Rutledge opined that the trend displayed a lack of accountability and empathy, and asserted that individuals need to rethink the way in which they use social media.
In September 2023, it was announced that a new movie about the Titan submersible incident, named Salvaged, was in development. The amount of media coverage and public attention given to the Titan incident was criticized by figures such as former U.S. president Barack Obama, who commented that the contemporaneous 2023 Messenia migrant boat disaster had received far less attention.
The 2024 American Broadcasting Company (ABC) special Truth and Lies: Fatal Dive to the Titanic examined the implosion of Titan. In February 2024, a movie inspired by the events of the Titan submersible incident, titled Locker, was announced. In March 2024, a two-part documentary by ITN Productions, Minute by Minute: The Titan Sub Disaster, was broadcast by the UK's Channel 5. The documentary included interviews with the Canadian air crew that searched the surface, Edward Cassano of the Pelagic remotely operated vehicle team that found the wreckage, and Marine Technology Society members William Kohnen and Bart Kemper, who had warned OceanGate in 2018 about its deviation from accepted engineering practices. Analysis of the mysterious "banging" sounds that seemed to indicate the occupants were still alive was a main feature of the first part.
See also
List of shipwrecks in 2023
List of submarine and submersible incidents since 2000
Notes
References
External links
Titan Submersible Marine Board of Investigation | U.S. Coast Guard Marine Board of Investigation
Marine transportation safety investigation M23A0169 | Transportation Safety Board of Canada
June 2023
Maritime incidents in 2023
Maritime incidents involving engineering failures
Submarine accidents
Submarines lost with all hands
Internet memes introduced in 2023
submersible, 2023 incident
Articles containing video clips
Implosion
2023 controversies | Titan submersible implosion | [
"Physics"
] | 8,488 | [
"Mechanics",
"Implosion"
] |
74,090,169 | https://en.wikipedia.org/wiki/It%C3%B4%E2%80%93Nisio%20theorem | The Itô–Nisio theorem is a theorem from probability theory that characterizes convergence in Banach spaces. The theorem shows the equivalence of the different types of convergence for sums of independent and symmetric random variables in Banach spaces. The Itô–Nisio theorem leads to a generalization of Wiener's construction of Brownian motion. The symmetry of the distributions in the theorem is needed only in infinite-dimensional spaces.
The theorem was proven by the Japanese mathematicians Kiyoshi Itô and Makiko Nisio in 1968.
Statement
Let $(E, \|\cdot\|)$ be a real separable Banach space with the norm-induced topology, equipped with the Borel σ-algebra, and denote the dual space by $E^*$. Let $\langle z, x\rangle := z(x)$ be the dual pairing and $i$ the imaginary unit. Let
$X_1, X_2, \dots$ be independent and symmetric $E$-valued random variables defined on the same probability space,
$\mu_n$ be the probability measure (law) of the partial sum $S_n := \sum_{j=1}^{n} X_j$,
$S$ some $E$-valued random variable.
The following statements are equivalent:
$(S_n)$ converges to $S$ almost surely.
$(S_n)$ converges to $S$ in probability.
$(\mu_n)$ converges to $\mu_S$, the law of $S$, in the Lévy–Prokhorov metric.
$\{\mu_n : n \in \mathbb{N}\}$ is uniformly tight.
$\langle z, S_n\rangle \to \langle z, S\rangle$ in probability for every $z \in E^*$.
There exists a probability measure $\mu$ on $E$ such that for every $z \in E^*$
$$\mathbb{E}\bigl[e^{i\langle z, S_n\rangle}\bigr] \to \int_E e^{i\langle z, x\rangle}\,\mu(\mathrm{d}x).$$
Remarks:
Since $E$ is separable, point 3 (i.e. convergence in the Lévy–Prokhorov metric) is the same as convergence in distribution $\mu_n \Rightarrow \mu_S$. If we remove the condition that the distributions are symmetric:
in a finite-dimensional setting the equivalence remains true for all statements except point 4 (i.e. the uniform tightness of $\{\mu_n\}$),
in an infinite-dimensional setting the equivalence of points 1–3 remains true, but the weaker conditions (such as point 5) no longer imply them in general.
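As an illustration of the connection to Wiener's construction mentioned above, the partial sums of the random Fourier (Karhunen–Loève) series for Brownian motion are sums of independent, symmetric, $C[0,1]$-valued random variables, so the Itô–Nisio theorem can be used to upgrade their convergence to almost-sure uniform convergence. The Python sketch below only visualizes the shrinking sup-norm gap between partial sums along one simulated realisation; the truncation levels, grid, and seed are arbitrary illustrative choices, and the computation is of course not a proof.

```python
import math
import random

# Random Fourier (Karhunen-Loeve) series for Brownian motion on [0, 1]:
#   B(t) = sum_{k>=1} Z_k * sqrt(2) * sin((k - 1/2) * pi * t) / ((k - 1/2) * pi),
# with Z_k independent standard Gaussians.  Each summand is an independent,
# symmetric C[0,1]-valued random variable, so the Ito-Nisio theorem applies
# to the sequence of partial sums.

rng = random.Random(42)
Z = [rng.gauss(0.0, 1.0) for _ in range(400)]   # one fixed realisation of the coefficients
GRID = [j / 200 for j in range(201)]            # evaluation grid on [0, 1]

def partial_sum(n_terms):
    """Path of the n_terms-term partial sum, evaluated on GRID."""
    return [
        sum(
            Z[k - 1] * math.sqrt(2.0) * math.sin((k - 0.5) * math.pi * t) / ((k - 0.5) * math.pi)
            for k in range(1, n_terms + 1)
        )
        for t in GRID
    ]

# The sup-norm distance between successive partial sums shrinks along this
# realisation, illustrating (not proving) the uniform convergence of the series.
for n in (50, 100, 200, 400):
    gap = max(abs(a - b) for a, b in zip(partial_sum(n // 2), partial_sum(n)))
    print(f"N = {n:4d}: sup-norm gap between S_{n // 2} and S_{n} = {gap:.4f}")
```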
Literature
References
Probability theorems
Banach spaces | Itô–Nisio theorem | [
"Mathematics"
] | 298 | [
"Theorems in probability theory",
"Mathematical theorems",
"Mathematical problems"
] |
68,291,716 | https://en.wikipedia.org/wiki/Vida%20Dujmovi%C4%87 | Vida Dujmović is a Canadian computer scientist and mathematician known for her research in graph theory and graph algorithms, and particularly for graph drawing, for the structural theory of graph width parameters including treewidth and queue number, and for the use of these parameters in the parameterized complexity of graph drawing. She is a professor of electrical engineering & computer science at the University of Ottawa, where she holds the University Research Chair in Structural and Algorithmic Graph Theory.
Education
Dujmović studied telecommunications and computer science as an undergraduate at the University of Zagreb, graduating in 1996. She came to McGill University for graduate study in computer science, earning a master's degree in 2000 and completing her Ph.D. in 2004. Her dissertation, Track Layouts of Graphs, was supervised by Sue Whitesides, and won the 2005 NSERC Doctoral Prize of the Natural Sciences and Engineering Research Council.
Career
She was an NSERC Postdoctoral Fellow at Carleton University, a CRM-ISM Postdoctoral Fellow at McGill University, and a postdoctoral researcher again at Carleton University before finally becoming an assistant professor at Carleton University in 2012. She moved to the University of Ottawa in 2013.
Recognition
In 2023 the University of Ottawa gave her the Glinski Award for Excellence in Research and the University Research Chair in Structural and Algorithmic Graph Theory. Vida Dujmović was an invited speaker at the 9th European Congress of Mathematics.
References
External links
Home page
Living people
Canadian computer scientists
Canadian mathematicians
Canadian women computer scientists
Canadian women mathematicians
Academic staff of Carleton University
Yugoslav emigrants to Canada
Graph theorists
McGill University alumni
Researchers in geometric algorithms
Academic staff of the University of Ottawa
University of Zagreb alumni
Year of birth missing (living people) | Vida Dujmović | [
"Mathematics"
] | 340 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
68,292,003 | https://en.wikipedia.org/wiki/Temperature%20paradox | The Temperature paradox or Partee's paradox is a classic puzzle in formal semantics and philosophical logic. Formulated by Barbara Partee in the 1970s, it consists of the following argument, which speakers of English judge as wildly invalid.
The temperature is ninety.
The temperature is rising.
Therefore, ninety is rising. (invalid conclusion)
Despite its obvious invalidity, this argument would be valid in most formalizations based on traditional extensional systems of logic. For instance, the following formalization in first order predicate logic would be valid via Leibniz's law:
t=90
R(t)
R(90) (valid conclusion in this formalization)
To correctly predict the invalidity of the argument without abandoning Leibniz's Law, a formalization must capture the fact that the first premise makes a claim about the temperature at a particular point in time, while the second makes an assertion about how it changes over time. One way of doing so, proposed by Richard Montague, is to adopt an intensional logic for natural language, thus allowing "the temperature" to denote its extension in the first premise and its intension in the second.
extension(t)=90
R(intension(t))
R(90) (invalid conclusion)
Thus, Montague took the paradox as evidence that nominals denote individual concepts, defined as functions from a world-time pair to an individual. Later analyses build on this general idea, but differ in the specifics of the formalization.
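A minimal computational sketch of Montague's move, assuming a toy model in which an individual concept is just a function from times to values: "is ninety" inspects the extension at the evaluation time, while "is rising" inspects the whole function, so the substitution that would license the invalid conclusion is blocked. The particular times and temperature readings below are purely illustrative and are not part of the original analysis.

```python
# Model an individual concept as a function (here, a dict) from times to values.
temperature = {0: 85, 1: 90, 2: 95}          # illustrative readings over time
now = 1

def extension(concept, time):
    """'The temperature is ninety' talks about the extension at a time."""
    return concept[time]

def is_rising(concept):
    """'The temperature is rising' talks about the intension (the whole function)."""
    times = sorted(concept)
    return all(concept[a] < concept[b] for a, b in zip(times, times[1:]))

# Premise 1: the extension of the concept at the current time is ninety.
print(extension(temperature, now) == 90)      # True
# Premise 2: the intension of the concept is rising.
print(is_rising(temperature))                 # True
# The invalid conclusion would require applying 'is rising' to the bare number 90
# rather than to the concept, which is a type error in this model, mirroring why
# Leibniz's law cannot be applied across the two premises.
try:
    is_rising(90)
except TypeError as err:
    print("blocked:", err)
```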
Notes
External links
Non-classical logic
Philosophical logic
Predicate logic
Formal semantics (natural language)
Paradoxes | Temperature paradox | [
"Mathematics"
] | 327 | [
"Basic concepts in set theory",
"Predicate logic",
"Mathematical logic"
] |
68,296,727 | https://en.wikipedia.org/wiki/Hegedus%20indole%20synthesis | The Hegedus indole synthesis is a name reaction in organic chemistry that allows for the generation of indoles through palladium(II)-mediated oxidative cyclization of ortho-alkenyl anilines. The reaction can still take place for tosyl-protected amines.
Application
2-Allylaniline can be converted to 2-methylindole using the Hegedus indole synthesis.
References
Indole forming reactions
Carbon-heteroatom bond forming reactions
Name reactions | Hegedus indole synthesis | [
"Chemistry"
] | 106 | [
"Organic reactions",
"Name reactions",
"Carbon-heteroatom bond forming reactions",
"Chemical reaction stubs",
"Ring forming reactions"
] |
77,121,003 | https://en.wikipedia.org/wiki/Nonmetallic%20material | Nonmetallic material, or in nontechnical terms a nonmetal, refers to materials which are not metals. Depending upon context it is used in slightly different ways. In everyday life it would be a generic term for those materials such as plastics, wood or ceramics which are not typical metals such as the iron alloys used in bridges. In some areas of chemistry, particularly the periodic table, it is used for just those chemical elements which are not metallic at standard temperature and pressure conditions. It is also sometimes used to describe broad classes of dopant atoms in materials. In general usage in science, it refers to materials which do not have electrons that can readily move around; more technically, there are no available states at the Fermi energy, the equilibrium energy of electrons. For historical reasons there is a very different definition of metals in astronomy, with just hydrogen and helium as nonmetals. The term may also be used simply as the negation of whatever materials are of interest, as in metallurgy or metalworking.
Variations in the environment, particularly temperature and pressure, can change a nonmetal into a metal, and vice versa; this is always associated with some major change in the structure, a phase transition. Other external stimuli such as electric fields can also create locally nonmetallic regions, for instance in certain semiconductor devices. There are also many physical phenomena which are found only in nonmetals, such as piezoelectricity or flexoelectricity.
General definition
The original approach to conduction and nonmetals was a band-structure with delocalized electrons (i.e. spread out in space). In this approach a nonmetal has a gap in the energy levels of the electrons at the Fermi level. In contrast, a metal would have at least one partially occupied band at the Fermi level; in a semiconductor or insulator there are no delocalized states at the Fermi level, see for instance Ashcroft and Mermin. These definitions are equivalent to stating that metals conduct electricity at absolute zero, as suggested by Nevill Francis Mott, and the equivalent definition at other temperatures is also commonly used as in textbooks such as Chemistry of the Non-Metals by Ralf Steudel and work on metal–insulator transitions.
In early work this band structure interpretation was based upon a single-electron approach with the Fermi level in the band gap, not including a complete picture of the many-body problem where both exchange and correlation terms can matter, as well as relativistic effects such as spin-orbit coupling. A key addition by Mott and Rudolf Peierls was that these could not be ignored. For instance, nickel oxide would be a metal if a single-electron approach was used, but in fact has quite a large band gap. As of 2024 it is more common to use an approach based upon density functional theory where the many-body terms are included. Rather than single electrons, the filling involves quasiparticles called orbitals, which are the single-particle-like solutions for a system with hundreds to thousands of electrons. Although accurate calculations remain a challenge, reasonable results are now available in many cases.
It is also common to nuance somewhat the early definitions of Alan Herries Wilson and Mott. As discussed by the chemist Peter Edwards and colleagues, as well as by Fumiko Yonezawa, it is also important in practice to consider the temperatures at which both metals and nonmetals are used. Yonezawa provides a general definition:
When a material 'conducts' and at the same time 'the temperature coefficient of the electric conductivity of that material is not positive under a certain environmental condition,' the material is metallic under that environmental condition. A material which does not satisfy these requirements is not metallic under that environmental condition.
Band structure definitions of metallicity are the most widely used, and apply both to single elements such as insulating boron as well as compounds such as strontium titanate. (There are many compounds which have states at the Fermi level and are metallic, for instance titanium nitride.) There are many experimental methods of checking for nonmetals by measuring the band gap, or by ab-initio quantum mechanical calculations.
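As a toy illustration of the band-structure definition above, the sketch below classifies a material as metallic or nonmetallic from a coarse list of band edges and the Fermi energy: if the Fermi energy falls inside a band the material is treated as a metal, otherwise the width of the surrounding gap is reported. The band-edge values used are rough, hypothetical numbers chosen only for illustration, not measured data.

```python
def classify(bands, fermi_energy):
    """Classify a material from (bottom, top) band edges in eV and the Fermi energy.

    metal    : the Fermi energy lies inside a band (states available at E_F)
    nonmetal : the Fermi energy lies in a gap; the gap width is returned too
    """
    bands = sorted(bands)
    for bottom, top in bands:
        if bottom <= fermi_energy <= top:
            return "metal", 0.0
    below = max((top for _, top in bands if top < fermi_energy), default=None)
    above = min((bottom for bottom, _ in bands if bottom > fermi_energy), default=None)
    gap = (above - below) if below is not None and above is not None else float("inf")
    return "nonmetal", gap

# Illustrative (not measured) band edges, in eV relative to an arbitrary zero.
examples = {
    "metal-like":         ([(-5.0, 2.0)], 0.0),                # partially filled band
    "semiconductor-like": ([(-6.0, 0.0), (1.1, 5.0)], 0.55),   # ~1.1 eV gap
    "insulator-like":     ([(-8.0, 0.0), (5.5, 9.0)], 2.75),   # ~5.5 eV gap
}
for name, (bands, ef) in examples.items():
    kind, gap = classify(bands, ef)
    suffix = f" (gap = {gap:.1f} eV)" if kind == "nonmetal" else ""
    print(f"{name:18s} -> {kind}{suffix}")
```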
Functional definition
An alternative in metallurgy is to consider various malleable alloys such as steel, aluminium alloys and similar as metals, and other materials as nonmetals; fabricating metals is termed metalworking, but there is no corresponding term for nonmetals. A loose definition such as this is often the common usage, but can also be inaccurate. For instance, in this usage plastics are nonmetals, but in fact there are (electrically) conducting polymers which should formally be described as metals. Similarly, but slightly more subtly, many materials which are (nonmetal) semiconductors behave like metals when they contain a high concentration of dopants, and are then called degenerate semiconductors. A general introduction to much of this can be found in the 2017 book by Fumiko Yonezawa.
Periodic table elements
In chemistry, the term nonmetal is also used for those elements which are not metallic in their normal ground state; compounds are sometimes excluded from consideration. Some textbooks use the term nonmetallic elements, such as the Chemistry of the Non-Metals by Ralf Steudel, which also uses the general definition in terms of conduction and the Fermi level. The approach based upon the elements is often used in teaching to help students understand the periodic table of elements, although it is a teaching oversimplification. Those elements towards the top right of the periodic table are nonmetals, those towards the center (transition metal and lanthanide) and the left are metallic. An intermediate designation, metalloid, is used for some elements.
The term is sometimes also used when describing dopants of specific element types in compounds, alloys or combinations of materials, using the periodic table classification. For instance, metalloids are often used in high-temperature alloys, and nonmetals in precipitation hardening in steels and other alloys. Here the description implicitly includes information on whether the dopants tend to be electron acceptors, leading to covalently bonded compounds rather than metallic bonding, or electron donors.
Nonmetals in astronomy
A quite different approach is used in astronomy where the term metallicity is used for all elements heavier than helium, so the only nonmetals are hydrogen and helium. This is a historical anomaly. In 1802, William Hyde Wollaston noted the appearance of a number of dark features in the solar spectrum. In 1814, Joseph von Fraunhofer independently rediscovered the lines and began to systematically study and measure their wavelengths, and they are now called Fraunhofer lines. He mapped over 570 lines, designating the most prominent with the letters A through K and weaker lines with other letters.
About 45 years later, Gustav Kirchhoff and Robert Bunsen noticed that several Fraunhofer lines coincide with characteristic emission lines identified in the spectra of heated chemical elements. They inferred that dark lines in the solar spectrum are caused by absorption by chemical elements in the solar atmosphere. Their observations were in the visible range, where the strongest lines come from metals such as Na, K and Fe. In the early work on the chemical composition of the Sun, the only elements detected in spectra were hydrogen and various metals, with the term metallic frequently used when describing them. In contemporary usage all the extra elements beyond just hydrogen and helium are termed metallic.
The astrophysicist Carlos Jaschek and the stellar astronomer and spectroscopist Mercedes Jaschek, in their book The Classification of Stars, observed that:
Stellar interior specialists use 'metals' to designate any element other than hydrogen and helium, and in consequence ‘metal abundance’ implies all elements other than the first two. For spectroscopists this is very misleading, because they use the word in the chemical sense. On the other hand photometrists, who observe combined effects of all lines (i.e. without distinguishing the different elements) often use this word 'metal abundance', in which case it may also include the effect of the hydrogen lines.
Metal-insulator transition
There are many cases where an element or compound is metallic under certain circumstances, but a nonmetal in others. One example is metallic hydrogen, which forms under very high pressures. There are many other cases, as discussed by Mott, by Imada et al., and more recently by Yonezawa.
There can also be local transitions to a nonmetal, particularly in semiconductor devices. One example is a field-effect transistor where an electric field can lead to a region where there are no electrons at the Fermi energy (depletion zone).
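To put a rough number on the depletion-zone example, a standard one-sided abrupt-junction estimate gives the width over which mobile carriers are swept out as $W = \sqrt{2\varepsilon V/(qN)}$. The sketch below evaluates this with silicon-like, purely illustrative values for the permittivity, potential and doping; none of these numbers come from the article.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
Q = 1.602e-19      # elementary charge, C

def depletion_width(rel_permittivity, potential_v, doping_per_m3):
    """One-sided abrupt-junction estimate W = sqrt(2 * eps * V / (q * N))."""
    eps = rel_permittivity * EPS0
    return math.sqrt(2.0 * eps * potential_v / (Q * doping_per_m3))

# Illustrative, silicon-like numbers (not taken from the article).
w = depletion_width(rel_permittivity=11.7, potential_v=0.7, doping_per_m3=1e22)
print(f"depletion width = {w * 1e6:.2f} micrometres")   # on the order of 0.3 um
```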
Properties specific to nonmetals
Nonmetals have a wide range of properties; for instance the nonmetal diamond is the hardest known material, while the nonmetal molybdenum disulfide is a solid lubricant used in space. There are some properties specific to their not having electrons at the Fermi energy. The main ones, for which more details are available in the links, are:
Dielectric polarization, approximately equivalent to alignment of local dipoles with an electric field, as in capacitors.
Electrostriction, a change in volume due to an electric field, or more accurately polarization density.
Flexoelectricity, where there is a coupling between strain gradients and polarization. This plays a role in the generation of static electricity due to the triboelectric effect.
Piezoelectricity, a coupling between polarization and linear strains.
A decreased resistance with temperature, due to having more carriers (via Fermi–Dirac statistics) available in partially occupied higher energy bands; a numerical sketch of this trend follows this list.
Increased conductivity when illuminated with light or ultraviolet radiation, called photoconductivity. This is similar to the effect of temperature, but with the photons exciting electrons into partially occupied states.
Transmit electric fields, as in a capacitor; in a metal there is electric-field screening that prevents this beyond very small distances, see Classical Electrodynamics.
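A rough numerical sketch of the temperature trend noted in the list above, assuming a simple intrinsic-semiconductor picture in which the carrier density grows roughly as $\exp(-E_g/2k_BT)$. The band-gap value is an illustrative, silicon-like placeholder rather than data from the article, and the temperature-dependent prefactor is deliberately omitted.

```python
import math

K_B = 8.617e-5   # Boltzmann constant in eV/K

def relative_carrier_density(band_gap_ev, temperature_k):
    """Intrinsic-semiconductor estimate: n(T) proportional to exp(-E_g / (2 k_B T)).
    (The T**1.5 prefactor is omitted; this is only meant to show the trend.)"""
    return math.exp(-band_gap_ev / (2.0 * K_B * temperature_k))

band_gap = 1.1   # eV, roughly silicon-like; illustrative only
for t in (200, 300, 400, 500):
    n = relative_carrier_density(band_gap, t)
    print(f"T = {t:3d} K : relative carrier density ~ {n:.3e}")
# The carrier density, and hence the conductivity, increases steeply with T,
# so the resistance of a nonmetal falls as it warms, unlike a metal.
```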
See also
References
Chemical physics
Condensed matter physics
Materials science
Metallurgy
Nonmetals
Periodic table
Solid-state chemistry | Nonmetallic material | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,091 | [
"Periodic table",
"Applied and interdisciplinary physics",
"Metallurgy",
"Phases of matter",
"Nonmetals",
"Materials science",
"Chemical physics",
"Condensed matter physics",
"nan",
"Matter",
"Solid-state chemistry"
] |
77,126,555 | https://en.wikipedia.org/wiki/David%20W.%20Flaherty | David W. Flaherty is the Thomas C. Loach Jr. Endowed Professor in the School of Chemical and Biomolecular Engineering at Georgia Institute of Technology, joining in June 2023 after previously serving at the University of Illinois, Urbana-Champaign. His research focuses on catalysis, surface science, and materials synthesis aimed at sustainability.
Education and career
B.S. in Chemical Engineering, University of California, Berkeley
Ph.D. in Chemical Engineering, University of Texas at Austin (advisor: Charles Buddie Mullins)
Postdoctoral research with Prof. Enrique Iglesia at the University of California, Berkeley
Research
Flaherty's research focuses on developing the science and application of catalysis for sustainability.
Awards and honors
Eastman Foundation Distinguished Lecturer in Catalysis, University of California, Berkeley (2021)
Department of Energy Early Career Award (2019)
National Science Foundation CAREER Award (2016)
ACS PRF Doctoral New Investigator Award (2013)
References
External links
[Google Scholar Profile](https://scholar.google.com/citations?user=EULNYK8AAAAJ&hl=en)
Georgia Tech faculty
Chemical engineers
University of California, Berkeley alumni
University of Texas at Austin alumni
Year of birth missing (living people)
Living people | David W. Flaherty | [
"Chemistry",
"Engineering"
] | 258 | [
"Chemical engineering",
"Chemical engineers"
] |
77,129,977 | https://en.wikipedia.org/wiki/Axial%20loading | Axial loading is defined as applying a force on a structure directly along a given axis of said structure. In the medical field, the term refers to the application of weight or force along the course of the long axis of the body. The application of an axial load on the human spine can result in vertebral compression fractures. Axial loading takes place during the practice of head-carrying, an activity which a prospective case–control study in 2020 shows leads to "accelerated degenerative changes, which involve the upper cervical spine more than the lower cervical spine and predisposes it to injury at a lower threshold."
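For the engineering sense of the term, the nominal stress produced by an axial load is simply the force divided by the cross-sectional area, $\sigma = F/A$. The short sketch below evaluates this for an assumed solid circular member; the load and diameter are hypothetical values chosen only to illustrate the formula.

```python
import math

def axial_stress(force_n, diameter_m):
    """Nominal axial stress sigma = F / A for a solid circular cross-section."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return force_n / area

# Illustrative values: a 10 kN load applied along the axis of a 20 mm diameter rod.
sigma = axial_stress(force_n=10_000.0, diameter_m=0.020)
print(f"axial stress = {sigma / 1e6:.1f} MPa")   # about 31.8 MPa
```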
References
Biomechanics | Axial loading | [
"Physics"
] | 132 | [
"Biomechanics",
"Mechanics"
] |
75,505,829 | https://en.wikipedia.org/wiki/48-volt%20electrical%20system | A 48-volt DC electrical system is a relatively low-voltage electrical system that is increasingly used in vehicles. It began to appear in the 2010s as a way to increase propulsion assistance and battery recharging during braking for fuel savings in internal combustion engine vehicles, especially mild hybrid vehicles.
History
Traditionally, vehicle low-voltage applications were powered by a 12-volt system. In the 1990s, an attempt by a cross-industry standards group to specify a 42-volt electrical system failed to catch on and was abandoned by 2009. During the 2010s, renewed interest arose for a 48-volt low-voltage standard for powering automotive electronics, especially in hybrid vehicles.
In 2011, German car manufacturers Audi, BMW, Daimler Benz, Porsche, and Volkswagen agreed on a 48 V system supplementing the legacy 12 V low-voltage automotive standard.
In model year 2017, the Renault Scenic dCi Hybrid Assist was the first 48 V mild-hybrid passenger car.
As of 2018, a 48 V electrical subsystem was used in production vehicles such as Porsche and Bentley SUVs. Audi and Mercedes-Benz used a 48 V subsystem in 2018 vehicles such as the A6, A7 and A8 with the 3.0 TDI 48 V mild hybrid, and the CLS, E-Class and S-Class with the M256 3.0 turbocharged petrol 48 V mild hybrid.
Hyundai Tucson, Hyundai Santa Fe, Kia Ceed and Kia Sportage followed in model year 2019 with 1.6 and 2.0 turbodiesel engines supported by 48 V mild-hybrid technology.
A European automotive trade association, CLEPA, estimated in 2018 that as many as 1 of every 10 new vehicles in 2025 would use at least one 48-volt device in the vehicle, covering 15 million vehicles per year.
In March 2023, Tesla Inc. revealed that the Tesla Cybertruck and its next-generation vehicle would use a 48-volt mid-voltage subsystem as a replacement for the 12 V system, migrating the low-voltage components with the highest power demand to 48 V.
In December 2023, in order to accelerate the adoption by other automakers of 48 V system voltage for automotive components, Tesla offered a "48-volt electrical system whitepaper" to all industry leaders. Ford CEO Jim Farley confirmed that Ford had received a copy and agreed to "help the supply base move into the 48-volt future". Tesla also adopted 48 volts for its Optimus robot.
Benefits
A 48 V system can provide more power, improve energy recuperation, and allow up to an 85% decrease in cable mass.
12-volt systems can provide only about 3.5 kilowatts, while a 48 V system could achieve 15 to 20 kW or even 50 kW. 48 volts is below the level that is considered safe in dry conditions without special protective measures. (See the article on electrical injury.)
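To make the current and cable-mass argument concrete, the sketch below compares a 12 V and a 48 V supply feeding the same load while keeping resistive cable loss under a fixed fraction of the delivered power; under that constraint the required conductor cross-section scales as $1/V^2$. The load power, cable length and loss budget are assumed values chosen only to show the scaling; real harness savings depend on many other design constraints, which is why the figure quoted above is "up to 85%".

```python
# Copper properties and a fixed cable length, purely for illustration.
RHO_CU = 1.68e-8       # resistivity, ohm*m
LENGTH = 5.0           # metres of cable, assumed value
COPPER_DENSITY = 8960  # kg/m^3

def cable_for_load(power_w, voltage_v, max_loss_fraction=0.02):
    """Current drawn, and the copper cross-section / mass needed to keep
    resistive cable loss below max_loss_fraction of the delivered power."""
    current = power_w / voltage_v
    max_resistance = max_loss_fraction * power_w / current**2
    area = RHO_CU * LENGTH / max_resistance        # from R = rho * L / A
    mass = area * LENGTH * COPPER_DENSITY
    return current, area, mass

for v in (12.0, 48.0):
    i, a, m = cable_for_load(power_w=3000.0, voltage_v=v)
    print(f"{v:4.0f} V: {i:6.1f} A, cross-section {a * 1e6:6.2f} mm^2, copper mass {m * 1e3:7.1f} g")
```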
One example of where these benefits can be used is in the Gordon Murray Automotive T.50, which uses an integrated starter-generator to power a 48 V air-conditioning compressor without the need for a belt. This allows the engine to rev more freely while still providing effective air conditioning regardless of engine speed.
Another example is with the use of electric turbochargers, active suspension, and rear-wheel steering systems that require a lot of power to run, and might be more responsive and capable with a 48 V system.
See also
Automotive battery
Extra-low voltage
Load dump
List of electric vehicle battery manufacturers
References
48 V Vehicle Electrical System – More Than Just a Bridging Technology? Dusan Graovac, Christoph Schulz-Linkholt, Thomas Blasius, 23 April 2020, EE Times/Asia.
ISO 21780:2020(en) Road vehicles — Supply voltage of 48 V — Electrical requirements and tests
Electric power distribution
Automotive electrics | 48-volt electrical system | [
"Engineering"
] | 792 | [
"Electrical engineering",
"Automotive electrics"
] |
75,506,491 | https://en.wikipedia.org/wiki/Hessea%20%28microsporidian%29 | Hessea is the type genus of the monotypic microsporidian family Hesseidae, described in 1973. Hessea itself is monotypic, containing only the type species Hessea squamosa. It is a parasite of the larvae of the Sciara fungus gnats, infesting the gut epithelium.
References
External links
Hessea squamosa
Microsporidia
Microsporidia genera
Taxa described in 1973
Parasitic fungi
Parasites of arthropods | Hessea (microsporidian) | [
"Biology"
] | 99 | [
"Fungus stubs",
"Fungi"
] |
75,508,317 | https://en.wikipedia.org/wiki/Gallylene | Gallylenes are a class of gallium species which are electronically neutral and in the +1 oxidation state. This broad definition may include many gallium species, such as oligomeric gallium compounds in which the gallium atoms are coordinated to each other, but those classes of compounds are often referred to as gallanes. In recent literature, the term gallylene has mostly been reserved for low-valent gallium species which may have a lone pair, analogous to NHCs or terminal borylenes. They are compounds of academic interest because of their distinctive electronic properties, analogous to those achieved for other main group elements in species such as borylenes and carbenes.
Common gallylenes
β-diketiminate ligands
β-diketiminate ligands (commonly referred to as NacNac ligands) are widely employed to stabilize gallylenes. These ligands have a lone pair which allows them to act as a Lewis base and form a sigma bond with the gallylene, which has Lewis acid character due to its empty p orbitals. These ligands can be modified with bulky substituents which afford kinetic protection to the gallylene. For example, a monomeric Ga(I) compound coordinated to the NacNac ligand with Dipp substituents was synthesized by Power and co-workers. The resulting gallylene had remarkable stability and decomposes only above 150 °C. This stability is attributed to the steric bulk of the β-diketiminate ligand and its kinetic protection. This gallylene also had a singlet lone pair and an empty p-orbital, analogous to other metallylene species. NacNacGa(I) is capable of oxidative addition reactions and C-H bond activation, and some substrates will undergo both processes. For example, this gallylene is capable of cleaving E-Et bonds and forming E-E complexes between two NacNacGa(I) complexes. Metal salts will undergo oxidative addition as well. Many substrates undergo straightforward oxidative addition, but others form more complex species bridging two NacNacGa(I) ligands. Roesky and coworkers point out that this suite of reactivity demonstrates the electrophilic and nucleophilic character of these gallylenes, since they can both accept electron density into the empty p orbital and donate their lone pair.
The β-diketiminate ligands are also capable of activating Ga-H bonds for subsequent reactivity. For example, Aldridge and coworkers demonstrated that a β-diketiminato gallane (GaH2) could react with [Cr(CO)4(COD)] and replace the COD ligand. The reaction resulted in two distinct products, one which resulted in two Ga-H-Cr bridging bonds, and one in which the hydrogen atoms were eliminated and the resulting gallylene coordinated to the Cr center with a bond length of 2.459 Angstroms. The reaction was notably slower with the Al analogue, indicating the relatively lower hydricity of Ga-H species.
Pincer ligands
The complexes formed by NacNacGa(I) and monodentate ligated gallylenes are typically incapable of downstream functionalization and further reactivity, as the metallylene will typically be lost during reactions. Pincer type ligands can be used to stabilize gallylene-derived complexes during reactivity. Iwasawa and coworkers demonstrated this by synthesizing an iridium complex with a pincer-type gallylene ligand. They note that the gallium is reduced to Ga(I) with the addition of Ir(I), and thus the ligand can be termed a gallylene. There is no lone pair on the gallylene in the resulting complex, but the formal oxidation states nonetheless suggest a complex featuring a neutral Ga(I). The reaction of this pincer Ir complex with tetrabutylammonium formate resulted in ligand exchange of the pincer complex and decarboxylation of the tetrabutylammonium formate. Iwasawa and coworkers also demonstrated other various ligand exchanges which resulted in the loss of a chloride and the addition of other ligands such as CO, PhH2Si, and GaCl3.
Transition metal ligands
Gallylenes are often used as ligands in transition metal chemistry. One early example of a Ga-M system was the reported Ga-Fe triple bond by Robinson and coworkers. This was refuted by Albert Cotton, who stated that there was a dative Ga-Fe bond, and any further bond order would be achieved with back-donation of Fe electrons into the empty orbitals on the Ga atom. Back-donation into Ga would be accompanied by less back-bonding into the CO ligands on the iron, and this would be reflected in the stretching frequency of the CO. Experimentally, this is not observed and thus the Ga-M bond was considered a single dative bond. Indeed this topic has been studied computationally since, and the lack of multibond character is mostly supported. Aldridge and Pandey conducted a DFT study on cationic metal-gallylene complexes of iron, ruthenium, and osmium and found that the bonding can be described as a single bond with high Ga 4s character, and a small degree of pi-bonding. M-GaX bonds (X = halide) are weaker than M-CO bonds, and have a considerable ionic character.
The ability of the gallylene to behave as a transition metal ligand is highly dependent on the gallylene's ligands itself. Fischer demonstrated that GaCp* (Cp* = C5Me5), could be used to prepare homoleptic octahedral molybdenum complexes, and homoleptic trigonal bipyramidal rhodium complexes. In contrast, NacNacGa(I) does not as effectively coordinate to the metal centers. The relative difference in their coordinating ability is attributed to the rigidity of the NacNac ligand, its increased steric bulk, and concave ligand shape.
Reactivity
CO and CN cleavage
Gallylenes can undergo [1+2] cycloaddition reactions with isocyanates and cleave C=O and C=N bonds. The reaction proceeds via a two-electron reduction of the isocyanate (O=C=N-R) by the gallylene to produce a digallacyclohexane in which the gallium atoms are in the +3 oxidation state and the C=N double bond has been cleaved. This reaction is sensitive to the substituents on the isocyanate. When R = p-tolyl, the reaction afforded two gallane heterocycles composed of two of the isocyanates, with an N-p-tolyl inserted in the heterocycle.
Hydride transfer
Gallylenes can be used to prepare gallane hydride species, which can act as a source of two hydrides and also a strong electron donor to stabilize resulting high-oxidation state transition metal hydride complexes. Aldridge et al. demonstrated this reactivity by preparing an Ir(IV) complex coordinated to four hydrides, a bidentate phosphorus ligand, and the NacNacGa ligand.
C-H activation
Fischer and coworkers demonstrated that a NacNacGa(I) complex could cleave the C-H bonds of an organoruthenium derivative and subsequently stabilize the resulting ruthenium species. This reaction can also be thought of as a ligand exchange between the hydrido ligands and the gallylene, in which the gallylene adopts a bridging coordination mode after hydrido elimination. Similar reactivity was demonstrated with an organoruthenium complex which had 4 chloride ligands instead of hydrido ligands: two chloride ligands were abstracted by the addition of the NacNacGa(I), resulting in an organoruthenium complex with two bridging chlorides and a NacNacGa(III) coordinated to two chlorides.
Cycloadducts
Fedushkin and coworkers have demonstrated a suite of reactivity for gallylenes that are stabilized via the 1,2-bis[(2,6-diisopropylphenyl)imino]acenaphthene ligand (abbreviated as dpp-bian). This is a redox active ligand which is able to cooperatively react with the gallylene. Fedushkin and coworkers demonstrated that this gallylene could react with Ytterbium in dimethoxyethane (DME) in the presence of CO2 to afford a gallium coordinated to a methyl group and the dimethoxyethane. They reported that the resulting coordination of DME is unprecedented. The same gallylene reacted with magnesium in DME in the presence of diphenylketene to afford a cycloadduct in which the terminal carbon of the ketene bonded to the beta amine position and the oxygen bonded to the gallium. This product also featured a Ga-Me bond which is thought to arise from the solvent DME. These reactions are proposed to proceed via a mechanism where the substrate coordinates to the metal center and is reduced. This initiates homolytic cleavage of Ga-M bond, and the now activated Ga species will attack DME to extract a methyl group. The dpp-bian gallylene has been used by the Fedushkin group to prepare other cycloadducts, and dimeric species where the substrates are coordinated by two of the gallylene species.
Azides
Fedushkin and coworkers demonstrated that the dimer composed of two gallylenes with a-diimine ligands was able to react with organic azides. This gallylene's reactivity is especially dependent on the ligand which is redox-active, and can thus cooperatively react with the gallylene. The ligand may aid in bond formation on the gallylene, wherein the delocalized pi bond between the imines is able to reduce the gallium atom during an oxidative addition, thereby preserving the oxidation state of the gallylene. Alternatively, the delocalized pi bond can directly form bonds with substrates and lead to the cycloadducts mentioned above. The cooperative reactivity between the gallylene and the redox active ligand enabled this dimer to perform azide transformations and afford imido-, azoimido-, and tetrazene complexes.
Carbodiiimides
Fedushkin and coworkers demonstrated that treating the a-diimine ligated gallylene with carbodiimides resulted in guanidinate derivatives via reductive coupling. In contrast to the reactivity demonstrated with azides, the ligand system here is reported as "innocent", meaning redox inactive. Products were confirmed via NMR and crystal XRD. The proposed mechanism for this reactivity involves a [1+2] cycloaddition between the gallylene and carbodiimide, and there is computational evidence for this mechanism.
Computational studies and electronic structure
Five-membered gallylene heterocycles have been modeled computationally, and they have been found to have a singlet-triplet energy gap of ca. 52 kcal/mol. The Ga-N bonds are very polar, with electron density being concentrated on the N atom of the heterocycle. Moreover, the singlet lone pair of the gallylene is found to reside within an sp-hybridized orbital. Six-membered gallylene heterocycles, such as those prepared with NacNac ligands have a higher singlet-triplet energy separation than aluminum counterparts. This is due to the relative stabilization of the gallium metal lone pair, and gallium's relatively diminished Lewis acidity.
One common application of gallylene species is their use as transition metal ligands. Braunschweig and coworkers conducted a bonding energy analysis of terminal gallylene complexes of vanadium and niobium to investigate the nature of this bonding. The bond distances in these gallylene complexes are close to the single-bond distances expected from an estimate of covalent radii, and are larger than those expected for a double bond. Based on these bond distances, the M-Ga bonds resemble single bonds with very small pi-orbital contribution. This is confirmed by the Wiberg bond index of ~0.5. The bonding overall can be attributed to sigma donation from the gallylene to the metal. These bonds are considerably more ionic than the covalent bonding of other group-13 elements such as boron in equivalent complexes.
Jeyakumar and coworkers conducted DFT calculations and NBO analysis on Group 10 metal gallylene complexes of the form [TM(CO)3(GaX)]. Their calculations confirm the idea that gallylene ligands behave as sigma donors and, to a lesser extent, pi-acceptors. Based on their calculated energies of transition state in the formation of TM(CO)3(GaX) from TM(CO)4 and GaX in THF, they suggest that GaF substituted Pt complexes are the most viable products.
Mondal and coworkers have computationally studied the aromaticity of NHC analogues, and found that gallylene NHC analogues are the second most aromatic among the group 13 elements. This result was consistent across a variety of methods used to assess aromaticity: 1H NMR, nucleus-independent chemical shift, aromatic ring shielding, gauge-including magnetically induced current, and Stranger's method.
References
Gallium compounds
Coordination complexes | Gallylene | [
"Chemistry"
] | 2,883 | [
"Coordination chemistry",
"Coordination complexes"
] |
75,509,668 | https://en.wikipedia.org/wiki/Opnurasib | Opnurasib (JDQ-443) is a small-molecule covalent KRASG12C inhibitor developed for non-small-cell lung cancer.
References
Experimental cancer drugs
Indazoles
Pyrazoles
Acrylamides
Azetidines
Ketenes
Chloroarenes
Spiro compounds | Opnurasib | [
"Chemistry"
] | 66 | [
"Pharmacology",
"Functional groups",
"Medicinal chemistry stubs",
"Ketenes",
"Organic compounds",
"Pharmacology stubs",
"Spiro compounds"
] |
75,510,166 | https://en.wikipedia.org/wiki/UBXD8 | UBXD8 is a protein in the ubiquitin regulatory X (UBX) domain-containing protein family. This family comprises many eukaryotic proteins that have similarities in amino acid sequence to the small protein modifier ubiquitin. UBXD8 engages in a molecular interaction with p97, a protein that is essential for the proteasomal degradation of membrane proteins associated with the endoplasmic reticulum (ER). Alongside the UBX domain, Ubxd8 possesses a UBA domain that can interact with polyubiquitin chains. Additionally, it possesses a UAS domain of undetermined function, and the protein acts as a sensor that detects long-chain unsaturated fatty acids (FAs), playing a vital role in regulating the balance of fatty acids within cells to maintain cellular homeostasis.
Influence of UBXD8 on lipid droplets
A hydrophobic hairpin loop allows Ubxd8 to insert into cell membranes, where it senses unsaturated fatty acids (FAs) and controls the production of triglycerides (TGs). Ubxd8 inhibits TG synthesis by blocking the conversion of diacylglycerols (DAGs) to TGs; this inhibition is alleviated when there is an abundance of unsaturated fatty acids, which alter the structure of Ubxd8 and thereby release the brake on TG synthesis. Ubxd8 contributes to maintaining cellular energy balance by attracting p97/VCP to lipid droplets (LDs) and suppressing the function of adipose triglyceride lipase (ATGL), the enzyme that controls the rate of triacylglycerol breakdown. Moreover, VCP brings UBXD8 to mitochondria, where it participates in the regulation of mitochondrial protein quality. Disruption of the UBXD8 gene hinders the breakdown of the pro-survival protein Mcl1 and excessively stimulates mitophagy. Understanding how Ubxd8 works with unsaturated fatty acids may help explain how saturated fatty acids cause lipotoxicity. Long-chain unsaturated FAs inhibit the interaction between Ubxd8 and Insig-1 by obstructing the binding between the two proteins, hence impeding the extraction of Insig-1 from the membrane. This inhibition is independent of the ubiquitination of Insig-1 and occurs after ubiquitination. Unsaturated FAs stabilize Insig-1 without affecting its ubiquitination, and they improve the capacity of sterols to inhibit the proteolytic activation of SREBP-1. The UAS domain of Ubxd8 polymerizes when it interacts with long-chain unsaturated FAs, and this polymerization is essential for the process; it requires a positively charged surface region of the UAS domain, and mutations in this region hinder the capacity of long-chain unsaturated FAs to stimulate oligomerization of Ubxd8.
References
Proteins | UBXD8 | [
"Chemistry"
] | 717 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
66,876,349 | https://en.wikipedia.org/wiki/Spiridoula%20Matsika | Spiridoula Christos Matsika (born 1971) is a Greek theoretical chemist. She was elected as a fellow of the American Physical Society in 2014.
Education
Spiridoula Christos Matsika was born in 1971 in Greece; she attended the National and Kapodistrian University of Athens for her bachelor's degree in chemistry, graduating in 1994. She completed her PhD at the Ohio State University, graduating in 2000 under the advisorship of Russell M. Pitzer. Following the completion of her PhD, she was a postdoctoral researcher at Johns Hopkins University under David Yarkony for three years.
Career
In 2003 she was hired at Temple University as an assistant professor in its College of Science and Technology. She was promoted to associate professor in 2009 and full professor in 2014.
Awards and honors
In 2005 she was awarded the National Science Foundation CAREER Award. She was awarded an Alexander von Humboldt Foundation fellowship in 2013.
In 2014 she was elected as a fellow of the American Physical Society "for her contributions to understanding the dynamics of excited molecules around conical intersections and method development to calculate such at the highest levels of theory".
References
Living people
1971 births
National and Kapodistrian University of Athens alumni
Ohio State University alumni
Temple University faculty
Fellows of the American Physical Society
Theoretical chemists
Greek women scientists
Greek chemists | Spiridoula Matsika | [
"Chemistry"
] | 264 | [
"Quantum chemistry",
"Theoretical chemistry",
"Theoretical chemists",
"Physical chemists"
] |
66,880,356 | https://en.wikipedia.org/wiki/Tardigrade%20specific%20proteins | Tardigrade specific proteins are types of intrinsically disordered proteins specific to tardigrades. These proteins help tardigrades survive desiccation, one of the adaptations which contribute to tardigrade's extremotolerant nature. Tardigrade specific proteins are strongly influenced by their environment, leading to adaptive malleability across a variety of extreme abiotic environments.
History
The mechanisms of tardigrade desiccation protection were originally thought to result from high levels of the sugar trehalose. Trehalose is used by organisms like yeast to avoid desiccation in dry environments by working with heat shock proteins to keep desiccation-sensitive proteins in solution. However, while tardigrades can accumulate small levels of trehalose, the levels are insufficient to provide protection from extreme conditions. Other molecules which help certain organisms avoid cellular desiccation include late embryogenesis abundant proteins, which provide protection to embryonic cotton seeds. Certain proteins actually responsible for the tardigrade's hardiness, including the cytoplasmic and secreted abundant heat soluble proteins, were discovered when searching for late embryogenesis abundant proteins in tardigrades.
One strategy used by the tardigrade to survive in dry environments is anhydrobiosis. Anhydrobiosis is a process in which an organism can lose nearly all of its water and enter an ametabolic state.
Function
Tardigrade specific proteins are a type of intrinsically disordered proteins, which have no predetermined shape or task. These proteins use many different conformations, called an ensemble, to move through different structures. Because of this, intrinsically disordered proteins may react strongly to the environment they inhabit. There are three families of tardigrade specific proteins, each named after where the protein is localized within a cell. These proteins are similar to late embryogenesis abundant proteins but are specific to tardigrades. The three families do not resemble each other and are expressed or enriched during desiccation. Unlike traditional proteins, intrinsically disordered proteins do not precipitate out of solution or denature during high heat. Tardigrades rely on these proteins to help them survive extreme environments, where they put their bodies in a dehydrated state called a tun. In most organisms, dehydration causes problems for cells, which need a hydrated environment for their proteins to function. However, tardigrade specific proteins assist in preventing aggregation of cell contents upon dehydration, and maintain the integrity of the cell membrane upon rehydration.
Types
Cytoplasmic
Cytoplasmic abundant heat soluble (CAHS) proteins are highly expressed in response to desiccation. There are two hypotheses for their function in tardigrades. The vitrification hypothesis is the idea that, when a tardigrade becomes desiccated, the viscosity within its cells increases to the point that denaturation and membrane fusion in proteins would stop. A second hypothesis, the water replacement hypothesis, posits that CAHS proteins replace water in other desiccation-sensitive proteins, protecting the hydrogen bonds normally reliant on water. CAHS proteins are dispersed throughout the cell in normal conditions, but form a network of filaments during environmentally stressful conditions. This network transforms the cytoplasm into a gel-like matrix and prevents the cell from collapsing as water leaches out. This state is reversible and the proteins disaggregate when exposed to less stressful conditions.
When forming the filament network, CAHS proteins have long helical domains that interact in a coiled manner with each other. These interactions are possible due to the proteins' partial disorder, with two flexible tails surrounding the helical domains.
CAHS proteins have been studied to observe their interactions with trehalose, a sugar used by other species to prevent desiccation. Trehalose was found to interact at higher levels with CAHS proteins than other sugars such as sucrose. Trehalose averages only 1% in most species of tardigrades, and in no species more than 3%, indicating that tardigrades use other strategies to tolerate dehydration.
Tardigrade CAHS protein injected into mice produced no inflammatory response or hemolysis.
Secreted
Secreted abundant heat soluble (SAHS) proteins are similar to fatty acid-binding proteins, notably in their structure with an antiparallel beta-barrel and internal fatty acid binding pocket. SAHS proteins are often secreted into media and associated with special extracellular structures. Dried tardigrades have an abundance of secretory cells which are not found in hydrated individuals. The mechanism behind SAHS proteins has not yet been determined, but the presence of secretory cells only during desiccation suggests they are used to protect cells during periods of dehydration.
Mitochondrial
Mitochondrial abundant heat soluble (MAHS) proteins are localized in mitochondria and are responsible for protecting mitochondria during desiccation. Because of its role in metabolizing reactive oxygen species, the mitochondrion is an important organelle to protect in extreme environments. During dehydration, the mitochondria of tardigrades grow much smaller and lose their cristae. MAHS proteins may act to replace water in the membrane of the mitochondria, preventing uneven rehydration and membrane rupture. Mitochondria, and the muscle contractions they power, are essential for the tardigrade to enter the "tun" state of anhydrobiosis.
Dsup (damage suppressor protein)
Dsup is a DNA-associating protein, unique to the tardigrade, that suppresses the occurrence of DNA breaks by radiation. Dsup localized to nuclear DNA reduces single-strand breaks and double-strand breaks when subjected to ionizing radiation.
LEA Proteins
Late embryogenesis abundant proteins (LEA proteins) are proteins that protect against protein aggregation due to dehydration or osmotic stress. However, no LEA proteins have been found in tardigrades.
References
Tardigrades
Xerophiles
Proteins
Molecular biology | Tardigrade specific proteins | [
"Chemistry",
"Biology"
] | 1,265 | [
"Biomolecules by chemical classification",
"Space-flown life",
"Tardigrades",
"Molecular biology",
"Biochemistry",
"Proteins"
] |
66,880,796 | https://en.wikipedia.org/wiki/Tumor%20mutational%20burden | Tumour mutational burden (abbreviated as TMB) is a genetic characteristic of tumorous tissue that can be informative to cancer research and treatment. It is defined as the number of non-inherited mutations per million bases (Mb) of investigated genomic sequence, and its measurement has been enabled by next generation sequencing. High TMB and DNA damage repair mutations were discovered to be associated with superior clinical benefit from immune checkpoint blockade therapy by Timothy Chan and colleagues at the Memorial Sloan Kettering Cancer Center.
TMB has been validated as a predictive biomarker with several applications, including associations reported between different TMB levels and patient response to immune checkpoint inhibitor (ICI) therapy in a variety of cancers. TMB is also strongly predictive of overall as well as disease-specific survival, independently of cancer type, stage or grade. Patients with both low and high TMB fare notably better than those with intermediate burden.
While both TMB and mutational signatures provide critical information about cancer behaviour, they have different definitions. TMB is defined as the number of somatic mutations/megabase whereas mutational signatures are distinct mutational patterns of single base substitutions, double base substitutions, or small insertions and deletions in tumors. For instance, COSMIC single base substitution signature 1 is characterized by the enzymatic deamination of cytosine to thymine and has been associated with age of an individual.
Scientists postulate that high TMB is associated with an increased amount of neoantigens, which are tumour specific markers displayed by cells. An increase in these antigens may then lead to increased detection of cancer cells by the immune system and more robust activation of cytotoxic T-lymphocytes. Activation of T-cells is further regulated by immune checkpoints that can be displayed by cancer cells, thus treatment with ICIs can lead to improved patient survival.
On June 16, 2020 the U.S. Food and Drug Administration expanded the approval of the immunotherapy drug pembrolizumab to treat any advanced solid-tumor cancers with a TMB greater than 10 mutations per Mb and continued growth following prior treatments. This marks the first time that the FDA has approved a drug with its use based on TMB measurements.
Importance
TMB as a Biomarker
One survival mechanism in tumors is to increase the expression of immune checkpoint molecules that can bind to tumor-specific T-cells and inactivate them, so that the tumor cells cannot be detected and killed. ICIs have been shown to improve patients' response and the survival rates as they help the immune system to target tumor cells. However, there is a variation in response to ICIs among patients and it is crucial to know which patients can benefit from ICI therapy. The expression of PD-L1 (programmed death-ligand 1; one of the immune checkpoints) has been demonstrated to be a good biomarker of PD-L1 blockade therapy in some cancers. However, there is a need for better biomarkers as there are some predictive errors with PD-L1 expression. Studies on TMB have illustrated that there is an association between patients' outcome (of ICI therapy) and the TMB value. It has been proposed that TMB can be used as a predictive marker of response in ICI therapy across many cancer types. Also, TMB can be helpful to identify individuals that can benefit from ICI therapy with cancers that generally have low TMB values. Furthermore, it has been shown that tumors with higher TMB values usually result in a higher number of neoantigens, the antigens that are presented on the tumor cells surface that are usually a result of missense mutations. So, TMB can be a good estimator of neoantigen load and can help find the patients who can benefit from ICI therapy by increasing the chance of detecting the neoantigens. However, it is important to note that different sequencing platforms and bioinformatics pipelines have been used to estimate TMB and it is important to harmonize TMB quantification protocols and procedures before it can be used as a reliable biomarker. There have been some efforts to standardize these methods.
Treatment Response
TMB has been found to correlate with patient response to therapies such as immune checkpoint inhibitors (ICIs). An analysis of a large cohort of patients receiving ICI therapy revealed that higher TMB levels (≥ 20 mutations/Mb) corresponded to a 58% response rate to ICIs while lower TMB levels (<20 mutations/Mb) reduced response to 20%. Researchers could also show a significant correlation between treatment response rate and TMB level in patients treated with anti-PD-1 or anti-PD-L1 (types of ICIs). Additionally, it has been reported that when ICIs were the only treatments used by patients, 55% of the differences in the objective response rate across cancer types were explained by TMB.
Patient Prognosis
Associations have been reported between TMB and patient outcome in a variety of cancers. In one study, scientists observed differences in survival rates, with high TMB individuals having a median progression-free survival of 12.8 months and a median overall survival not reached by the time of publication, compared to 3.3 months and 16.3 months respectively for individuals with lower TMB. Another study examining patients who had not received ICI therapy found that intermediate levels of TMB (>5 and <20 mutations/Mb) correlate with significantly decreased survival, likely as a result of the accumulation of mutations in oncogenes. This relationship does not appear to be significantly disparate across different tissues types and is only modestly affected by corrections for confounders such as smoking, sex, age, and ethnicity. This suggests that TMB is both an independent and reliable indicator of poor patient outcomes in the absence of ICI therapy. Interestingly, very high levels of TMB (≥ 50 mutations/Mb) were reported to correlate with increased survival, giving an overall parabolic shape to the trend. While this association is still under investigation, it has been hypothesized that the decreased risk of death under very high TMB could result from reduced cell viability due to genetic instability or increased production of neoantigens recognized by the immune system.
TMB in different cancers
There is a large variation in TMB values across different cancer types as the number of somatic mutations can span from 0.01 to 400 mutations per megabase of genome. It has been shown that melanoma, NSCLC and other squamous carcinomas have the highest levels of TMB in this order, while leukemias and pediatric tumors have the lowest levels of TMB and other cancers like breast, kidney, and ovary have intermediate TMB values. There is also variation in TMB across different subtypes of different cancers. Due to high variability in TMB across different cancer types and subtypes, it is important to define different cut-offs to have an improved survival prediction and a better treatment decision. For example, Fernandez et al. showed that TMB can range from 0.03 to 14.13 mutations per megabase (mean=1.23) in TCGA prostate cancer cohort while this range is from 0.04-99.68 mutations per megabase (mean=6.92) in TCGA bladder cancer cohort. A recent study illustrated that different cut-offs are needed for different cancer types to find the patients who can benefit from ICI therapy. In addition, it is crucial to understand that usually there are different clusters of cells in a tumor, known as tumor heterogeneity, that can affect TMB and consequently the response to ICIs. Another factor that can affect TMB is whether the source of the sample is primary or metastatic tissue. Most metastatic samples have been shown to be monoclonal (i.e. there is only one cluster of cells in the tumor), while primary tumors usually consist of a higher number of clusters and have higher overall genetic diversity (more heterogeneous). Scientists have shown that metastatic tumors usually have a higher TMB level compared to primary tumors and this can be due to monoclonal nature of metastatic lesions.
TMB calculation
There are disparities between how TMB is calculated in clinical and research settings. Broadly, whole genome sequencing, whole exome sequencing, and panel based approaches can be used to help calculate TMB. Studies of TMB from research perspectives typically incorporate whole exome sequencing, and occasionally whole genome sequencing, within their workflows, while clinical applications use panel sequencing to estimate TMB, primarily for its comparatively quicker speed and lower cost. Within panel based approaches, different strategies to calculate TMB have been adopted. For instance, consider MSK-IMPACT developed by the Memorial Sloan Kettering Cancer Center and F1CDx developed by Foundation Medicine. F1CDx utilizes a tumor-only sequencing strategy while MSK-IMPACT requires sequencing of both the tumor and its matched normal sample. Additionally, F1CDx counts synonymous mutations while excluding hotspot driver mutations. MSK-IMPACT calculates TMB with similar filtering criteria to those used in whole exome sequencing, considering both synonymous mutations and hotspot driver mutations. Ensembles of targeted panels and whole exome sequencing panels have been recommended for optimal results. As an approach that is potentially more expedient and cost effective than sequencing, TMB can be calculated directly from H&E stained pathology images using deep learning.
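To make the arithmetic concrete, a minimal Python sketch is shown below. It is illustrative only and does not reproduce any specific commercial pipeline: the Variant fields, the filter switches and the panel size are hypothetical stand-ins for the panel-specific rules described above, and real pipelines apply many additional quality filters.

from dataclasses import dataclass

@dataclass
class Variant:
    is_somatic: bool         # germline variants are excluded from TMB
    is_synonymous: bool      # counted or dropped depending on the pipeline
    is_hotspot_driver: bool  # some panels exclude known hotspot drivers

def tumor_mutational_burden(variants, panel_size_mb,
                            count_synonymous=True,
                            exclude_hotspot_drivers=False):
    """Return mutations per megabase over the sequenced territory."""
    eligible = [
        v for v in variants
        if v.is_somatic
        and (count_synonymous or not v.is_synonymous)
        and not (exclude_hotspot_drivers and v.is_hotspot_driver)
    ]
    return len(eligible) / panel_size_mb

# Toy example: 14 eligible somatic calls over a hypothetical 1.2 Mb panel give
# roughly 11.7 mutations/Mb, above the 10 mutations/Mb threshold mentioned
# above for the pembrolizumab approval.
calls = [Variant(True, False, False)] * 14 + [Variant(False, False, False)] * 5
print(round(tumor_mutational_burden(calls, panel_size_mb=1.2), 1))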
Factors that Influence TMB Calculation
Overall, 5 primary factors have been identified to influence TMB calculations.
Tumor Cell Content and Sequencing Coverage
Greater tumor cell content and sequencing coverage play a key role in the quality of TMB data. For instance, targeted panels may enable deeper sequencing than whole exome sequencing, giving higher sensitivity, and have been shown to perform well even when tumor cell content is low (defined as <10%). Targeted panels have been shown to enable much greater coverage than whole exome sequencing. For example, one recent study reached a mean sequencing coverage across all tumor samples of 744× when using the MSK-IMPACT panel, while whole exome sequencing led to a mean target coverage of 232× in tumor sequences.
Tissue Preprocessing
Typically, tumor tissues are fixed in formalin to preserve tissue and cellular morphology in formalin-fixed paraffin-embedded (FFPE) protocols. While FFPE offers a cost-effective method to store tissues for long durations of time, limitations must be considered as to how it will affect TMB calculations. One limitation of this method is that it induces the formation of various crosslinks, whereby strands of DNA become covalently bound to each other, which may consequently lead to deamination of cytosine bases. Cytosine deamination is the major cause of baseline noise in next generation sequencing, leading to the most prevalent sequence artifacts in FFPE (C:G > T:A). This may generate artefacts that must be removed in the downstream pipeline.
Sequencing Strategy
Different sequencing strategies enable different numbers of genes to be included in the calculation of TMB (with WGS and WES approaches allowing a greater quantity of genes to be analyzed). While panel based approaches analyze comparatively fewer genes than other strategies, one advantage of panel based sequencing is that genes of interest can be covered at much greater sequencing depths, and rare variants can possibly be identified. The panel sizes vary across panels, with 468 genes in the MSK-IMPACT panel, 315 genes in the Foundation Medicine panel, and 409 genes in the Life Technologies panel. As panel sizes become smaller, the uncertainty associated with TMB estimation becomes greater, with the coefficient of variation increasing rapidly when the size of the targeted panel is less than 1 Mb.
Bioinformatics Pipeline
In most calculations of TMB, synonymous variants and germline variants are filtered out as they are unlikely to be directly involved in creating neoantigens. However, some pipelines retain synonymous variants. To account for germline variants, ideally sequencing would have been performed on a matched non-tumor sample from each patient. However, in clinical practice, the availability of this matched sample may vary across institutions and organizational settings, and data unavailability may prevent germline variants from being filtered out. The choice of variant callers and other software in the downstream analyses may also affect how TMB is ultimately calculated. TMB can be calculated directly from histopathology images using a multiscale deep learning pipeline, avoiding the need for sequencing and variant calling.
Cut-offs
Different studies have assigned different cut-offs to delineate between high and low TMB status. In the lung, the median TMB across more than 18,000 lung cancer cases was 7.2 mutations/Mb, with approximately 12% of the patients showing more than 20 mutations/Mb. The authors identified a tumor mutational burden greater than or equal to 10 mutations/Mb as the optimal cut-off to benefit from combination immunotherapy. However, in other cancer types, high TMB status has been classified as >20 mutations/Mb.
Issues and future directions
One approved biomarker of ICI therapy is PD-L1 expression, but the predictive power of this biomarker is affected by factors such as assay interpretation and the lack of standard methods. TMB is also affected by these factors, in addition to accessibility issues. Biological factors like specimen type and cancer type, as well as technical factors like sequencing technology, can affect the evaluation of TMB. Thus, it is necessary to harmonize evaluation methods, and many factors still complicate this task. For example, gene fusions and post-translational changes in proteins contribute to tumor behaviour, and consequently to response to therapy, while these factors are not considered in TMB estimation. In addition, all mutations currently have the same weight in TMB calculation, while they can have very different effects on protein and pathway activity. Furthermore, there is still no good answer to the question of how mutations in genes that are known to influence ICI therapy should be treated in TMB evaluation. It is also important to note that TMB is highly variable across cancer types and subtypes, and different studies are being conducted to find distinct TMB thresholds.
Some studies argue that, to better predict response to ICI therapy, TMB should be used as a complementary marker alongside other biomarkers such as PD-L1. Other studies have shown that a combination of TMB and neoantigen load can be used as a biomarker to predict survival in patients with melanoma who received adoptive T cell transfer therapy. Since TMB is a relatively new biomarker, there is still a need for more studies, and many labs are focusing on different aspects of this biomarker.
References
Genetics
Mutation
DNA
Tumor
Cancer research | Tumor mutational burden | [
"Biology"
] | 3,049 | [
"Genetics"
] |
66,887,016 | https://en.wikipedia.org/wiki/NGSI-LD | NGSI-LD is an information model and API for publishing, querying and subscribing to context information. It is meant to facilitate the open exchange and sharing of structured information between different stakeholders. It is used across application domains such as smart cities, smart industry, smart agriculture, and more generally for the Internet of things, cyber-physical systems, systems of systems and digital twins.
NGSI-LD has been standardized by ETSI (European Telecommunications Standardization Institute) through the Context Information Management Industry Specification Group, following a request from the European Commission. Its takeup and further development are spelled out in the EU's "Rolling plan for ICT standardization". NGSI-LD builds upon a decades-old corpus of research in context management frameworks and context modelling. The acronym NGSI stands for "Next Generation Service Interfaces", a suite of specifications originally issued by the OMA which included Context Interfaces. These were taken up and evolved as NGSIv2 by the European Future Internet Public-Private-Partnership (PPP), which spawned the FIWARE open source community.
The NGSI-LD information model represents Context Information as entities that have properties and relationships to other entities. It is derived from property graphs, with semantics formally defined on the basis of RDF and the semantic web framework. It can be serialized using JSON-LD. Every entity and relationship is given a unique IRI reference as identifier, making the corresponding data exportable as linked data datasets. The -LD suffix denotes this affiliation to the linked data universe.
Design
Information model
The NGSI-LD information model can be considered as the first formal specification by a de jure standards organization of the property graph model, which has emerged since the early 2000s as an informal common denominator model for graph databases.
The core concepts are:
A property graph (a.k.a. "attributed graph") is a directed multigraph, made up of nodes (vertices) connected by directed links, where nodes and arcs both may have multiple optional attached properties (i.e. attributes)
Properties (similar to attributes in object models) have the form of arbitrary key-value pairs. Keys are character strings and values are arbitrary data types. By contrast to RDF graphs, properties are not arcs of the graph.
Relationships are arcs (directed edges) of the graph, which always have an identifier, a start node and an end node
The NGSI-LD meta-model formally defines these foundational concepts (Entities, Relationships, Properties) on the basis of RDF/RDFS/OWL, and partially on the basis of JSON-LD.
An entity is the informational representative of something (a referent) that is supposed to exist in the real world, outside of the computational platform using NGSI-LD. This referent need not be something strictly physical (it could be a legal or administrative entity), nor self-contained (it may be a distributed system-level construct). Any instance of such an entity is supposed to be uniquely identified by an IRI, and characterized by reference to one or more NGSI-LD Entity Type(s). In property-graph language, it is a node.
A property is an instance that associates a characteristic, an NGSI-LD Value, to either an NGSI-LD Entity, an NGSI-LD Relationship or another NGSI-LD Property. Properties of properties are explicitly allowed and are encouraged e.g. to express the accuracy of a particular measured value.
A relationship is a directed link between a subject (starting point), which may be an NGSI-LD Entity, an NGSI-LD Property, or another NGSI-LD Relationship, and an object (end-point), which is an NGSI-LD Entity. An NGSI-LD Relationship from a Property to an Entity can for example be used to express that the Property was measured by that Entity (provenance of the measurement).
A value is a JSON value (i.e. a string, a number, true or false, an object, an array), or a JSON-LD typed value (i.e. a string as the lexical form of the value together with a type, defined by an XSD base type or more generally an IRI), or a JSON-LD structured value (i.e. a set, a list, or a language-tagged string).
A type is an OWL class that is a subclass of either the NGSI-LD Entity, NGSI-LD Relationship, NGSI-LD Property or NGSI-LD Value classes defined in the NGSI-LD meta-model. NGSI-LD pre-defines a small number of types, but is otherwise open to any types defined by users.
Complementing this metamodel, the NGSI-LD information model specification also provides a cross-domain ontology that defines key constructs related to spatial, temporal or system-composition characteristics of entities.
The flexible information model allows the specification of any kind of entity. In order to allow interoperability between NGSI-LD users, standardized entities are collaboratively defined at Smart Data Models Program and made available at its repository with an open-source license.
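As an illustration of these concepts (a sketch only, not a normative example from the specification), the following Python snippet builds a dictionary that mirrors the normalized JSON-LD representation of an NGSI-LD entity with one property, a property of that property, and a relationship; the entity type, the identifiers and the context URL are hypothetical.

import json

parking_site = {
    "id": "urn:ngsi-ld:ParkingSite:downtown-001",       # unique IRI identifying the entity
    "type": "ParkingSite",                               # NGSI-LD Entity Type
    "availableSpotNumber": {                             # Property: a characteristic of the entity
        "type": "Property",
        "value": 121,
        "accuracy": {"type": "Property", "value": 0.95}  # Property of a Property
    },
    "isNextTo": {                                        # Relationship: directed link to another entity
        "type": "Relationship",
        "object": "urn:ngsi-ld:Building:station-42"      # IRI of the target entity
    },
    # JSON-LD context mapping the short names above to IRIs (URL shown for illustration)
    "@context": ["https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"],
}

print(json.dumps(parking_site, indent=2))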
Architecture
The NGSI-LD specification consists of an information model and an API. The API provides functionalities to support the architectural roles described in the following.
Context Consumer: A Context Consumer consumes NGSI-LD Entities from a Context Broker (or possibly directly from a Context Source) using the Context Information Consumption functionalities of the NGSI-LD API. It can retrieve a specific NGSI-LD Entity or query relevant NGSI-LD Entities using synchronous requests. It can also subscribe to relevant NGSI-LD Entities and receive asynchronous notifications whenever there are changes in the requested NGSI-LD Entities.
Context Producer: A Context Producer creates, updates and deletes NGSI-LD Entities, NGSI-LD Properties and NGSI-LD Relationships in the Context Broker using the Context Information Provision functionalities of the NGSI-LD API.
Context Source: A Context Source makes NGSI-LD Entities available through the Context Information Consumption functionalities of the NGSI-LD API. To make the information discoverable for a Context Broker, it registers the kind of context information it can provide with a Registry Server using the Context Source Registration functionality of the NGSI-LD API.
Context Broker: A Context Broker acts as the primary access point to context information for Context Consumers. NGSI-LD Entity information can be stored by the Context Broker itself, if it has been provided by a Context Producer using the Context Information Provision functionalities of the NGSI-LD API, or the Broker can request it from Context Sources using the Context Information Consumption functionalities of the NGSI-LD API. The Context Broker aggregates all NGSI-LD Entity information related to a request and returns the aggregated result to the Context Consumer. In the case of a subscription, it sends notifications whenever there are relevant changes, potentially as a result of receiving notifications from Context Sources. To find Context Sources that may have NGSI-LD Entities relevant to a Context Consumer request, the Context Broker uses the Context Source Discovery functionality of the NGSI-LD API implemented by the Registry Server.
Registry Server: The Registry Server stores Context Source Registrations provided by Context Sources using the Context Source Registration functionalities of the NGSI-LD API. Context Source Registrations contain information about what kind of Context Information a Context Source can provide, but not actual values. The kind of context information can be provided on different granularity levels ranging from very detailed information, e.g. certain properties or relationships of a specific NGSI-LD Entity, to any information of a specific NGSI-LD Entity, or to the level that it can provide NGSI-LD Entities that have a certain Entity Type, possibly for a given geographic area. The Context Source Discovery functionality of the NGSI-LD API allows the Context Broker (or possibly a Context Consumer) to find Context Sources that may have relevant NGSI-LD Entities.
The architectural roles allow the implementation of different deployment architectures. In a centralized architecture, there is a central Context Broker that stores the context information provided by Context Producers. In a distributed setting, all context information can be stored by Context Sources. In a federated architecture, Context Sources can again be Context Brokers that make aggregated information from a lower hierarchy level available. These architectures are not mutually exclusive, i.e. an actual deployment may combine them in different ways.
API
The NGSI-LD Context Information Management API allows users to provide, consume and subscribe to context information in multiple scenarios and involving multiple stakeholders. It enables close to real-time access to information coming from many different sources (not only IoT data sources), named Context Sources, as well as publishing that information through interoperable data publication platforms.
It provides advanced geo-temporal queries, and it includes subscription mechanisms, in order for content consumers to be notified when content matching some constraints becomes available.
The API is designed to be agnostic to the architecture (central, distributed, federated or combinations thereof), so that applications which produce and consume information do not have to be tailored to the specifics of the system that distributes/brokers context information for them.
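As a rough sketch of how a Context Producer and a Context Consumer might interact with a Context Broker over HTTP, the Python example below creates an entity and then queries entities by type using the requests library. The broker address and the entity are hypothetical, and the exact resource paths, headers and payload details should be checked against the ETSI specification and the broker implementation in use.

import requests

BROKER = "http://localhost:1026"  # hypothetical Context Broker address

# Context Information Provision: a Context Producer creates an entity.
entity = {
    "id": "urn:ngsi-ld:ParkingSite:downtown-001",
    "type": "ParkingSite",
    "availableSpotNumber": {"type": "Property", "value": 121},
    "@context": "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld",
}
resp = requests.post(f"{BROKER}/ngsi-ld/v1/entities",
                     json=entity,
                     headers={"Content-Type": "application/ld+json"})
resp.raise_for_status()

# Context Information Consumption: a Context Consumer queries entities by type.
resp = requests.get(f"{BROKER}/ngsi-ld/v1/entities",
                    params={"type": "ParkingSite"},
                    headers={"Accept": "application/ld+json"})
for e in resp.json():
    print(e["id"], e["availableSpotNumber"]["value"])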
API operations comprise:
Context Information operations, concerned with Provision (creating NGSI-LD Entities, and updating their Attributes), Consumption (querying NGSI-LD Entities) and Subscription (subscribing to specific information, under specified constraints, in order to be notified when matching Entities appear, carrying the specified information).
Context Sources operations, concerned with Registration (make a new source of context information available in the overall distributed system, by registering it) and Discovery (querying the system about what context sources have registered, which offer information of a specified type).
Uses
NGSI-LD was initiated by partners of the FIWARE programme, and is primarily used by the FIWARE open source community, supported by the FIWARE Foundation as well as a diverse range of other projects and users such as below:
The Connecting Europe Facility recommends the use of the FIWARE context broker with NGSI-LD
The Open & Agile Smart Cities & Communities (OASC) organisation references the NGSI-LD specification as the first of their Minimal Interoperability Mechanisms (MIM1).
The Living-in.eu project recommends the use of NGSI-LD in their joint declaration and their technical commitments. The declaration has been endorsed and signed by 86 cities and public administrations from the EU, and is supported by many more companies and organizations.
The GSMA "IoT Big Data Framework Architecture" is based on NGSI-LD.
The Fed4IoT EU project, where it is used as a neutral data format for translating between various IoT data representations
The Thing'in graph-based digital twin platform from Orange uses NGSI-LD as its core information model.
The City Data Hub platform has been developed as part of the Smart City Data Hub project and is now used as a basis for smart cities in Korea.
The India Urban Data Exchange (IUDX) uses the NGSI-LD API as part of their Resource Access Service Interface. It is referenced in the Bureau of Indian Standards' Unified Data Exchange IS 18003(Part2):2021 standard.
History
NGSI-LD is the result of an evolution of Context Interfaces that started as part of the "Next Generation Service Interfaces" (NGSI) suite published by the Open Mobile Alliance (OMA) in 2012, which is also the source of the acronym NGSI. The NGSI suite included NGSI-9 as the Context Entity Discovery Interface and NGSI-10 as the Context Information Interface. The NGSI standard from OMA and its intermediary evolutions relied on a classical Entity–attribute–value model and an XML-based representation. The NGSI Context Interfaces were adapted by the FI-WARE project, which developed the platform for the European Future Internet Public-Private-Partnership (PPP). The OMA NGSI Context Interfaces got an HTTP binding with a JSON representation, referred to as NGSIv1, which included both NGSI-9 and NGSI-10. In the course of FI-PPP the interfaces further evolved into NGSIv2, which became the key interface of the FIWARE platform. After the end of the FI-PPP in 2016, the FIWARE platform became the core of the FIWARE Open Source Community managed by the FIWARE Foundation. In 2017, the ETSI Industry Specification Group on cross-cutting Context Information Management (ETSI ISG CIM) was created to evolve the Context Information Interface, which resulted in the creation of NGSI-LD. The limitations of the original information model led to the specification of a broader model which derives from property graphs, explicitly including relationships between entities, on a par with entities themselves. ETSI ISG CIM continues to evolve the NGSI-LD Information Model and API. It publishes new versions of the specification once or twice a year.
See also
Context awareness
Graph Query Language
References
External links
ETSI CIM group home page
Implementations in open-source software projects
Orion-LD from the FIWARE Foundation
Scorpio from NEC
Stellio from EGM
Cassiopeia from Geonet
City Data Hub Data Core Module from KETI
Information science
Knowledge representation
Modeling languages
Computer standards
Telecommunications standards | NGSI-LD | [
"Technology"
] | 2,761 | [
"Computer standards"
] |
69,611,543 | https://en.wikipedia.org/wiki/Elsa%20Lundanes | Elsa Lundanes (born 22 May 1953) is a Norwegian chemist.
She was born in Ålesund and took her cand.real. degree in 1978. After two years at Texas University she took the dr.scient. degree in 1986. She worked in the pharmaceutical industry for Nycomed before she was employed by the University of Oslo in 1988. Her specialty is analytical chemistry. She became professor in 1999 and a member of the Norwegian Academy of Science and Letters in 2009.
References
1953 births
Living people
People from Ålesund
Norwegian chemists
Analytical chemists
Norwegian expatriates in the United States
Academic staff of the University of Oslo
Members of the Norwegian Academy of Science and Letters | Elsa Lundanes | [
"Chemistry"
] | 140 | [
"Analytical chemists"
] |
69,612,044 | https://en.wikipedia.org/wiki/Scotland%27s%20Churches%20Trust | Scotland's Churches Trust is a Scottish registered charity whose “aims are to advance the preservation, promotion and understanding of Scotland’s rich architectural heritage represented in its churches and places of worship of all denominations.” Its principal activities are “promoting heritage and tourism” and “giving of grants”. It primarily carries out these activities by offering financial support and practical advice for church repairs and modernisation projects, organ recitals and concerts, a church recording scheme and by promoting its fourteen Pilgrim Journeys across Scotland, that include over 500 places of current or former worship.
Formed in 2012 from two older built heritage organisations, the Scottish Churches Architectural Heritage Trust and Scotland's Churches Scheme, it currently has over 1300 churches in its membership.
History
In 1974, broadcaster and writer Magnus Magnusson created The Steeplechase fundraising scheme to help raise funds to preserve Scotland's churches. In 1978 he became the founding chairperson of the Scottish Churches Architectural Heritage Trust, a position he held until 1985. Its primary aim was to assist congregations in the preservation and upkeep of their buildings.
In 1980, the board invited noted fundraiser Florence MacKenzie (1935-2010) to become the Trust's director, in which post she remained until her retirement in 2009. MacKenzie was granted an MBE for her services to the restoration of church buildings in 1996. Other former trustees include Lady Marion Fraser and Lord Penrose.
During its first three decades SCAHT was instrumental in preserving “churches of all sizes – historic and small country kirks as well as synagogues”. These buildings include Kilarrow Parish Church in Islay, St Magnus' in Orkney, St Marnoch's in Angus, Sacred Heart in Wigton, Yester Parish Church in East Lothian and St Michael and All Angels, Inverness.
Founded in 1996, Scotland's Churches Scheme was an ecumenical membership charitable trust that assisted “living” churches work together and make their buildings the focus of their communities by regularly opening their doors and sharing their history and heritage. Among other activities, the Scheme provided a series of “how-to” guides to assist its member churches in researching and presenting their stories, secure their buildings, welcome visitors and record and interpret their graveyards. It also published a series of Regional Guides listing the history and architectural heritage of ecclesiastical buildings across Scotland.
In 2012 the Scottish Churches Architectural Heritage Trust and Scotland's Churches Scheme merged to form Scotland's Churches Trust. HRH Princess Anne, Princess Royal became its patron and Dr Brian Fraser its first Director. In 2013 the SCT launched Scotland's Pilgrim Journeys, a collection of six trails across Scotland that combined the medieval tradition of pilgrim visits to ecclesiastical sites with contemporary faith tourism.
In recent years the Trust has provided grants towards the costs of major fabric works and minor maintenance activities. It also offers financial support to church organists seeking to improve their skills and churches offering organ concerts. Its Scottish Pilgrim Journeys initiative has been increased from six to fourteen different trails across the country.
Governance
Patron: HRH Princess Anne, Princess Royal KG, KT, GCVO, GCStJ, QSO, CD
Hon President: Robin Blair CVO, WS
Vice Presidents:
Trustees:
Director: Dr DJ Johnston-Smith
Chairperson, Board of Trustees: Prof Adam Cumming
Chairperson, Grants Committee: Ros Taylor RIBA
References
External links
Christian charities based in the United Kingdom
Heritage organisations in the United Kingdom
Architectural history | Scotland's Churches Trust | [
"Engineering"
] | 686 | [
"Architectural history",
"Architecture"
] |
69,614,642 | https://en.wikipedia.org/wiki/Window%20%28optics%29 | In optics, a window is an optical element that is transparent to a range of wavelengths, and that has no optical power. Windows may be flat or curved. They are used to block the flow of air or other fluids while allowing light to pass into or out of an optical system.
General characteristics
In general, an optical window is a material that allows light into an optical instrument. The material has to be transparent to the wavelength range of interest but not necessarily to visible light. Usually, it is mechanically flat and sometimes also optically flat, depending on resolution requirements. A window of this sort commonly has plane-parallel faces and is likely to be anti-reflection coated, especially if it is designed for visible light. An optical window may be built into a piece of equipment (such as a vacuum chamber) to allow optical instruments to view inside that equipment.
In spectroscopy
Optical windows used for UV/VIS spectroscopy are usually made from glass or fused silica. In IR spectroscopy, a wide range of materials transmit light into the far infrared and can be used for the construction of optical windows, including barium fluoride (BaF2), calcium fluoride, potassium bromide, potassium chloride, sodium chloride, germanium (Ge), zinc selenide (ZnSe) and sapphire. These windows are built in circular, elliptical or rectangular configurations.
References
Optical components | Window (optics) | [
"Materials_science",
"Technology",
"Engineering"
] | 282 | [
"Glass engineering and science",
"Optical components",
"Components"
] |
69,615,015 | https://en.wikipedia.org/wiki/USBM%20wettability%20index | The U.S. Bureau of Mines (USBM), developed by Donaldson et al. in 1969, is a method to measure wettability of petroleum reservoir rocks. In this method, the areas under the forced displacement Capillary pressure curves of oil and water drive processes are denoted as and to calculate the USBM index.
USBM index is positive for water-wet rocks, and negative for oil-wet systems.
Bounded USBM (or USBM*)
The USBM index is theoretically unbounded and can vary from negative infinity to positive infinity. Since other wettability indices such as the Amott-Harvey index, the Lak wettability index and the modified Lak index are bounded in the range of -1 to 1, Abouzar Mirzaei-Paiaman highlighted a bounded form of the USBM index (called USBM*), derived from the same areas, as a replacement for the traditional USBM index.
USBM* varies from -1 to 1 for strongly oil-wet and strongly water-wet rocks, respectively.
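A minimal numerical sketch of the classic, unbounded index is given below. The capillary pressure curves are synthetic and purely illustrative, and the bounded USBM* expression is not reproduced here (it is defined in the cited reference).

import numpy as np

def usbm_index(sw_oil_drive, pc_oil_drive, sw_water_drive, pc_water_drive):
    """Classic USBM wettability index, W = log10(A1 / A2).

    A1 is the area under the oil-drive (positive) capillary pressure curve and
    A2 the area under the water-drive (negative) curve; the inputs are
    water-saturation and capillary-pressure arrays for each branch.
    """
    a1 = np.trapz(np.abs(pc_oil_drive), sw_oil_drive)
    a2 = np.trapz(np.abs(pc_water_drive), sw_water_drive)
    return np.log10(a1 / a2)

# Synthetic curves (illustrative only): here A1 > A2, so W > 0, i.e. water-wet.
sw = np.linspace(0.2, 0.8, 50)
w = usbm_index(sw, 30.0 * (0.8 - sw),    # oil-drive branch, Pc > 0
               sw, -10.0 * (sw - 0.2))   # water-drive branch, Pc < 0
print(round(float(w), 2))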
See also
Wetting
Amott test
Lak wettability index
References
Petroleum geology
Surface science
Fluid mechanics | USBM wettability index | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 224 | [
"Surface science",
"Petroleum",
"Civil engineering",
"Condensed matter physics",
"Petroleum geology",
"Fluid mechanics"
] |
78,442,020 | https://en.wikipedia.org/wiki/Sisunatovir | Sisunatovir is an investigational new drug that is being evaluated for the treatment of respiratory syncytial virus (RSV) infections. It functions as an orally administered RSV fusion inhibitor, targeting the RSV-F protein on the viral surface to prevent viral replication. Sisunatovir has been granted Fast Track designation by the U.S. Food and Drug Administration (FDA) due to its potential to address serious RSV infections, which can lead to severe respiratory conditions such as bronchiolitis and pneumonia.
References
Antiviral drugs
Amines
Benzimidazoles
Cyclopropanes
Organofluorides
Oxindoles
Spiro compounds | Sisunatovir | [
"Chemistry",
"Biology"
] | 138 | [
"Pharmacology",
"Antiviral drugs",
"Functional groups",
"Medicinal chemistry stubs",
"Amines",
"Organic compounds",
"Pharmacology stubs",
"Biocides",
"Bases (chemistry)",
"Spiro compounds"
] |
78,442,304 | https://en.wikipedia.org/wiki/Vasilis%20Fthenakis | Vasilis M. Fthenakis is a Greek American chemical engineer, environmental scientist, author and academic. He is an adjunct professor, and founding director of the center for Life Cycle Analysis at Columbia University.
Fthenakis is most known for his research on the environmental sustainability of photovoltaic energy technologies and for demonstrating the feasibility of solar energy as a solution to meet US energy demands while addressing climate challenges. His publications comprise journal articles and books including Electricity from Sunlight: Photovoltaics Systems Integration and Sustainability and Onshore and Offshore Wind Energy: Evolution, Grid Integration and Impact. He has received awards such as a Certificate of Appreciation from the US Department of Energy in 2006, the Brookhaven National Laboratory's Certificate of Recognition in 2015, the 2018 IEEE William Cherry Award, and the 2022 Karl Böer Solar Energy Medal of Merit from the International Solar Energy Society.
Fthenakis is an elected Fellow of the American Institute of Chemical Engineers, the International Energy Foundation, and the Institute of Electrical and Electronics Engineers. Additionally, he has served as Editor-in-Chief of Green Energy and Sustainability, Section Editor-in-Chief of Energies, and associate editor for Progress in Energy.
Education and early career
Fthenakis earned a diploma in chemistry from the University of Athens in 1975 while working as a chemist at ChemiResearch in Greece from 1974 to 1976. He then completed an MS in Chemical Engineering at Columbia University in 1978 and held research roles at Columbia's Catalysis Laboratory and Fossil Energy Laboratory. In 1980, he joined Brookhaven National Laboratory, working as a research engineer and senior scientist across departments focused on sustainable energy and environmental sciences. He received a PhD in fluid dynamics and atmospheric science from New York University in 1991, with a focus on toxic gas release modeling and mitigation using water spray systems.
Career
Fthenakis continued his academic and research career at Columbia University, serving as an adjunct associate professor of earth and environmental engineering from 1995 to 2000, and has been an adjunct professor since 2006. In 2006, he became a senior research scientist and founded the Center for Life Cycle Analysis (CLCA), where he continues to serve as director. He also co-founded the Global Clean Water Desalination Alliance (GCWDA), where he served on the board of directors, leading efforts to integrate solar energy systems with desalination technologies. Concurrently, at Brookhaven National Laboratory, he served in various roles from 1980 to 2016 and has been a distinguished scientist emeritus since 2017. From 2002 to 2016, he led the National Photovoltaic Environmental Research Center and has coordinated international collaborations on life cycle assessment (LCA) under the direction of the US Department of Energy and the International Energy Agency.
Contributions
Fthenakis has led collaborations on silane safety and lead-free solder technologies, conducted foundational life-cycle studies on thin-film photovoltaics and PV recycling, and anticipated regulatory trends concerning lead and cadmium, supporting industry adaptation. In later years, the scope of his research expanded to topics at the energy-water-environment nexus and he led applied research on solar-powered water desalination with applications in the United States and Chile.
In 2004, Fthenakis began studying life cycle analysis (LCA) to address what he identified as an unbalanced portrayal of photovoltaics' environmental impacts and started an international collaboration to update LCA studies on photovoltaics. He established an ad-hoc committee and organized scoping meetings with researchers from institutions such as the University of Utrecht, the Energy Research Center of the Netherlands, Chalmers University, the University of Stuttgart, Siena University, and Ambient Italia to assess the LCA needs of the photovoltaic industry.
Through related research, Fthenakis has developed and advocated for a proactive, long-term environmental strategy for photovoltaics, including recycling processes for end-of-life photovoltaic modules. In 1999, he organized a workshop to promote lead-free solder technology. From 2002 to 2005, he established a laboratory focused on recycling spent photovoltaic modules and manufacturing scrap, employing hydrometallurgical separation technologies and resulting in a patented method for separating copper, cadmium, and tellurium, with applications in cadmium telluride (CdTe) and copper indium gallium selenide (CIGS) technologies. He also conducted studies on optimizing the collection of end-of-life photovoltaics to reduce recycling costs. Recognizing the environmental concerns surrounding the growth of the CdTe and CIGS markets, he designed experiments simulating fire effects on photovoltaics using techniques such as NSLS-x ray diffraction analysis. In 2005, he led a European Union workshop, organized with the Joint Research Center and the German Ministry of the Environment, facilitating a US company's establishment of a manufacturing facility in Germany. In 2007, he launched and led a five-year International Energy Agency (IEA) Photovoltaic Environmental Health and Safety Task (Task 12), serving as the US Operating Agent until 2012.
Over the years, Fthenakis’ research has been highlighted by news outlets, including The New York Times, Science News, Environmental Science & Technology, IEEE Spectrum, Scientific American, Spiegel, and NRC Handelsblad.
Publications
Fthenakis' research on photovoltaics and the environment has led to approximately 300 journal and conference papers, contributing to over 450 publications on energy and environmental topics. As of November 15, 2024, his publications have been cited 18,464 times and his h-index is 66. In 2007, he co-authored the Grand Plan for Solar Energy with Ken Zweibel and James Mason, a study demonstrating the feasibility of solar energy to meet most of the US electricity needs; this was the prelude to the detailed SunShot Solar Vision studies. Earlier, in 1993, he published his first book, Prevention and Control of Accidental Releases of Hazardous Gases, which was used by the chemical and oil refinery industries as a primer on the prevention of industrial disasters. His publications have also focused on electricity generation through renewable energy sources such as wind and solar.
Personal life
Fthenakis is the son of Menelaos Fthenakis and Antonia Korkidis-Fthenakis, who died in the sinking of the SS Heraklion in Greece when he was 14 years old. He is married to Christina Georgakopoulos and has two children and two grandchildren.
Awards and honors
2006 – Certificate of Appreciation, US Department of Energy
2018 – William Cherry Award, IEEE
2022 – Karl Böer Solar Energy Medal of Merit, International Solar Energy Society
Bibliography
Selected books
Prevention and Control of Accidental Releases of Hazardous Gases (1993) ISBN 978-0471284086
Third Generation Photovoltaics (2012) ISBN 978-9535103042
Electricity from Sunlight: Photovoltaics Systems Integration and Sustainability (2nd edition, 2018) ISBN 978-1118963807
A Comprehensive Guide to Solar Energy Systems: With Special Focus on Photovoltaic Systems (2018) ISBN 978-0128114797
Comprehensive Renewable Energy: Photovoltaic Solar Energy (2nd edition, 2022) ISBN 978-0323990110
Onshore and Offshore Wind Energy: Evolution, Grid Integration and Impact (2nd edition, 2024) ISBN 978-1119854494
Energy and Climate Change: Our New Future (2025) ISBN 978-0443219276
Selected articles
Fthenakis, V. M., Kim, H. C., & Alsema, E. (2008). Emissions from photovoltaic life cycles. Environmental science & technology, 42(6), 2168–2174.
Fthenakis, V., & Kim, H. C. (2009). Land use and electricity generation: A life-cycle analysis. Renewable and Sustainable Energy Reviews, 13(6–7), 1465–1474.
Fthenakis, V. (2009). Sustainability of photovoltaics: The case for thin-film solar cells. Renewable and Sustainable Energy Reviews, 13(9), 2746–2750.
Fthenakis, V., Mason, J., & Zweibel, K. (2009). The technical, geographical and economic feasibility for solar energy to supply the energy needs of the United States. Energy Policy, 37(2), 387–399.
Turney, D., & Fthenakis, V. (2011). Environmental impacts from the installation and operation of large-scale solar power plants. Renewable and Sustainable Energy Reviews, 15(6), 3261–3270.
Fthenakis, V. (2015). Considering the total cost of electricity from sunlight and the alternatives. Proceedings of the IEEE, 103(3), 283–286.
Fthenakis, V., & Leccisi, E. (2021). Updated sustainability status of crystalline-silicon-based photovoltaic systems – Life-cycle energy and environmental impact reduction trends. Progress in Photovoltaics, 29(10), 1068–1077.
Leccisi, E., & Fthenakis, V. (2021). Life-cycle energy demand and carbon emissions of scalable perovskite PV systems. Progress in Photovoltaics, 29(10), 1078–1092.
Fthenakis, V., Yetman, G., Zhang, Z., Squires, J., Atia, A. A., Alarcon-Padilla, D.-C., Palenzuela, P., Vicraman, V., & Zaragoza, G. (2022). A solar energy desalination analysis tool, SEDAT, with data and models for selecting technologies and regions. Nature Scientific Data, 9(223), 1-20.
Ginsberg, M., Esposito, D., & Fthenakis, V. (2023). Designing off-grid green hydrogen plants using dynamic polymer electrolyte membrane electrolyzers to minimize the cost of hydrogen production. Cell Reports Physical Science, 4(10), 101625.
References
Chemical engineers
Environmental scientists
Non-fiction environmental writers
Fellows of the American Institute of Chemical Engineers
Fellows of the IEEE
Columbia University faculty
New York University alumni
Columbia University alumni
National and Kapodistrian University of Athens alumni
Year of birth missing (living people)
Living people | Vasilis Fthenakis | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,171 | [
"Chemical engineering",
"Chemical engineers",
"Environmental scientists",
"American environmental scientists"
] |
78,443,075 | https://en.wikipedia.org/wiki/Iscartrelvir | Iscartrelvir is an investigational new drug developed by the Westlake University for the treatment of COVID-19. It targets the SARS-CoV-2 3CL protease, which is crucial for the replication of the virus responsible for COVID-19.
See also
3CLpro-1
Rupintrivir
References
Amines
Anilines
Benzamides
Bromobenzene derivatives
Nitrobenzenes
Cyclohexanes
Isoquinolines | Iscartrelvir | [
"Chemistry"
] | 101 | [
"Pharmacology",
"Functional groups",
"Medicinal chemistry stubs",
"Amines",
"Pharmacology stubs",
"Bases (chemistry)"
] |
71,200,412 | https://en.wikipedia.org/wiki/Critical%20embankment%20velocity | Critical embankment velocity or critical speed, in transportation engineering, is the velocity value of the upper moving vehicle that causes the severe vibration of the embankment and the nearby ground. This concept and the prediction method was put forward by scholars in civil engineering communities before 1980 and stressed and exhaustively studied by Krylov in 1994 based on the Green function method and predicted more accurately using other methods in the following. When the vehicles such as high-speed trains or airplanes move approaching or beyond this critical velocity (firstly regarded as the Rayleigh wave speed and later obtained by sophisticated calculation or tests), the vibration magnitudes of vehicles and nearby ground increase rapidly and possibly lead to the damage to the passengers and the neighboring residents. This relevant unexpected phenomenon is called the ground vibration boom from 1997 when it was observed in Sweden for the first time.
This critical velocity is analogous to the speed of sound, which gives rise to the sonic boom. However, there are some differences in the transmitting medium. The critical velocity of sound varies only within a small range, although the air properties and the interaction between the aircraft and the atmosphere affect it. By contrast, the embankment, including the filling layers and the ground soil beneath the surface, is typically a random medium, and such a complex soil-structure coupled vibration system may have several critical velocity values. The critical embankment velocity is therefore a general concept: its value is not constant and must be obtained by calculation or experiment for each specific engineering project.
Mechanism
The wave superposition
Under idealized assumptions, moving loads imposed on the surface of the embankment induce sub-waves that propagate inside the embankment and along its surface. If the velocity of the moving loads is less than that of the propagating waves, which may be body or surface waves, the vehicles move more slowly than the waves and the wave crests on the embankment surface do not intersect. No superposition of the waves takes place, and the vibration of the embankment and the vehicles remains within a small range at this stage.
Conversely, when the operating velocity of the vehicles exceeds the critical velocity, the vehicles accelerate gradually and inevitably pass through the critical velocity. At that moment, the crests of the propagating waves all coincide at the position where the loads are imposed, i.e. where the wheels contact the structure, and the superposition of the waves leads to severe vibration around the vehicles.
From this perspective, the critical embankment velocity equals the dominant propagation velocity of the waves.
The structure resonance
In reality and practical application, the speed of the propagating sub-waves is still related to their frequency components. The total vibration consists of infinite wave components with different frequencies, the magnitude of each sub-wave changes in accordance with the wave speed and different vehicles moving velocity makes different part of sub-waves oscillate maximally.
Obtaining the critical velocity of the embankment is similar with looking for the resonant frequencies of a multi-DOF system. There are many orders of frequencies and the first few ones could make structure vibrate seriously. When the vehicle moves at the critical embankment velocity with respect to the embankment structure, excitation frequency locates very close to the resonant frequencies of most of the propagating waves in the vehicle-embankment coupling structure. Meanwhile, the vehicles moving velocity coincides with most sub-waves. The detailed determination realized by the dispersion analysis to the whole structure and illustrated in the following sections.
The dominant frequencies of the vibration induced at the critical embankment velocity are determined by the specific configuration of the engineering structure. For instance, the loads of passing trains are transferred from the wheels through the welded rails and sleepers to the embankment. The discrete sleepers beneath the moving wheels cause the cyclic loads imposed on the embankment to propagate at low frequencies.
Impact
Abnormal vibration
Compared with relatively low-speed scenarios, both the magnitude and the spatial extent of the vibration of a high-speed line increase. The track or pavement structure and the embankment will deteriorate faster as the cyclic loads act repeatedly over the operating period. More importantly, and commonly neglected, the vibration of the vehicle itself is magnified near the critical velocity, especially around the area where the wheels interact with the rail. Such high local vibration levels also evidently increase the risk of the whole vehicle derailing, which is the real reason the critical embankment velocity is of such importance.
Low-frequency noise
Apart from the vibration, low-frequency noise radiated by a vehicle moving at the critical velocity travels very long distances to residential districts. Residents living near the line may endure this low-frequency noise over millions of cycles, which can make people annoyed, nervous and insomniac and may even excite resonance of human organs. These impacts on humans are still ignored in the engineering design process and have motivated research into low-frequency noise damage.
Prediction
Accurately calculating the critical embankment velocity of a new line is still difficult and should be verified by many experiments in practical applications. However, analytical or numerical modelling, even with simple models, gives considerable insight into the qualitative behaviour of typical lines, such as exposing potential issues in the design of embankment and track structures or ways to relieve the impacts of the critical velocity. With the rapid development of high-performance computing (HPC), it is gradually becoming feasible to predict the critical embankment velocity with numerical methods before a line is constructed.
Elastic foundation beam model
In the low-frequency range (below about 100 Hz, less than the dominant frequencies generally induced in embankments), it is reasonable to obtain the critical embankment velocity from the theory of a beam on an elastic foundation. Based on elastic theory, the dynamic governing equation of an Euler beam on an elastic foundation under a moving point load describes the vertical deflection of the track and sleepers.
Here the coefficients represent the material properties of the track structure and the foundation respectively, and the Dirac delta function fixes the location of the moving point load. The solution of the above equation is derived as
where one parameter is a ratio representing the mechanical difference between the track and the foundation, and the others are dimensionless parameters associated with the minimal velocity of bending waves of the Euler beam, written as
When the velocity of moving vehicles approaches the minimal velocity
Therefore, the minimal phase velocity of bending waves is regarded as the critical embankment velocity in the elastic foundation beam model. Nevertheless, this model is justified only for scenarios in which the stiffness of the vehicles and track structure is much greater than that of the embankment. Soil-structure interaction and three-dimensional effects are the key factors in the general case.
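For reference, the classical result for an Euler-Bernoulli beam resting on a Winkler foundation can be written compactly as follows; the symbols EI for bending stiffness, m for mass per unit length, k for foundation stiffness and k_w for the bending wavenumber are chosen here for illustration, since the article's own notation was lost. The phase velocity of free bending waves and its minimum, taken as the critical speed, are

```latex
c(k_w)=\sqrt{\frac{EI\,k_w^{2}+k/k_w^{2}}{m}},
\qquad
v_{cr}=c_{\min}=\left(\frac{4\,k\,EI}{m^{2}}\right)^{1/4}.
```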
Elastic half-space beam model
If there is no beam on top of the half-space, its critical velocity is the Rayleigh wave speed according to elastic theory, which is smaller than the speeds of the two types of body waves. Placing a beam on the half-space and accounting for their soil-structure interaction (SSI) increases the number of factors related to the critical velocity. The dynamic governing equations of the elastic half-space and the beam are, respectively,
wherein the Lamé constants describe the half-space material and a contact term represents the forces between the half-space and the beam. The boundary conditions assume that the contact surface is ideally smooth.
Based on the decomposition into elastic potentials and integral transforms, the vertical displacement response of the half-space surface can be obtained.
Here the wavenumbers refer to the corresponding directions, one parameter represents the width of the beam, and the partially transformed vertical displacement of the half-space surface appears.
Substituting the expression for the vertical displacement into the above, the corresponding integral expression in the frequency-wavenumber domain is
The equation above describes the vibration of the beam and the half-space respectively. It can be rewritten in simplified form as
The first term in the equation above is the dispersion equation of the beam, which takes a simple form in this model. The second term represents the corresponding relation for the half-space.
To analyse the critical velocity of this coupled structure, its equivalent stiffness relative to a conventional Winkler foundation is needed in the Fourier domain. For a Winkler foundation, the last equation above takes the form
Thus, the equivalent stiffness of the half-space in this SSI model, referred to the Winkler foundation, is written as
The equation above has a very complex form, so an approximate form usually replaces it in practical applications. The critical velocity is determined by solving it simultaneously with the beam model.
The approximate equation for the critical velocity, valid for Poisson's ratios between 0.2 and 0.38, is
According to this equation, two critical velocity values exist in this kind of model: one is less than the Rayleigh wave speed and the other equals it. Further research shows that if periodic supports are taken into consideration, the elastic half-space has a whole series of critical velocity values.
Multi-layered elastic half-space beam model
The upper part of the embankment consists of several layered structures, such as the track, ballast or slab, and the foundation, each with different material properties. Therefore, a more sophisticated critical velocity analysis of the multi-layered or inhomogeneous structure is needed in practical applications. The critical velocity can be determined from the dispersion relation of each part. The radial and vertical surface stresses and displacements of the layered half-space in the wavenumber-frequency domain, obtained by the Thomson–Haskell method, are
Here a stiffness matrix represents the whole model. According to Cramer's rule, for non-trivial displacements to exist in the frequency domain, the determinant of this stiffness matrix must equal zero.
Solving this, the surface dispersion curves of the elastic layered foundation take the forms below.
The first equation describes the horizontal transverse displacement produced by SH waves, and the second is related to P-SV waves. Studies show that the dispersive SH and P-SV wave curves lie between those of the surface Rayleigh wave and the shear wave of the half-space, which are non-dispersive.
Considering the dispersion relation of the track structure as well yields more accurate results. For instance, the dispersion equation of a typical slab track is written as a function of wavenumber and angular frequency.
The intersection points of the dispersion curves of the structural components are related to the critical velocity of the embankment. The velocity values can be obtained from the definition of the wavenumber.
Here one symbol represents the excitation frequency of the moving loads, and the sign distinguishes the two moving directions.
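A schematic way to state this determination (the symbols v, Ω, k and ω(k) are chosen here for illustration, not taken from the article's lost equations): a load moving at speed v whose forcing contains the frequency Ω drives the response component of wavenumber k at the Doppler-shifted frequency ω = Ω ± kv, so the critical speeds are the values of v at which this straight line intersects, or is tangent to, the dispersion curves of the track-embankment system:

```latex
\omega(k)\;=\;\Omega \pm k\,v
\quad\Longrightarrow\quad
v_{cr}\;=\;\frac{\omega(k)\mp\Omega}{k}.
```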
Mitigation
In engineering design, raising the critical embankment velocity well above the operating speed is a conservative way to protect passenger safety. Because issues related to the critical embankment velocity may appear only after lines have been in operation for many years, mitigation measures play an essential role for both refurbished and new lines carrying high-speed vehicles. For ease of construction, mitigation measures focus on the embankment itself for new lines and on the nearby area for renovated lines. The former, active measures are more effective than the latter, passive measures.
Measures towards the embankment
The propagation speed of waves inside a material depends mainly on its stiffness. Therefore, the critical embankment velocity can be raised markedly through ground-strengthening methods such as pile foundations, grouting and dry deep mixing. The well-known Swedish railway line running X2 trains was initially designed using ordinary construction methods. However, because of the softness of the upper clay, the vibration level induced by the X2 trains was several times higher than that of conventional trains. The mitigation measure adopted by the operator Banverket was the dry deep mixing method: over two weeks, a total of 12 trial columns of a special binder, each about 8 metres long, were installed, after which the vibration level was reduced to an acceptable value.
Apart from measures inside the embankment, engineers usually install damped supports under the rail pads to isolate the vibration transmitted downward from the wheels. Another common method of weakening the transmission of vibration is to construct an isolating trench, either left open or filled with porous materials such as EPS concrete.
Measures towards the nearby area
The vibration transmitted to distant areas is of low frequency. For sensitive buildings such as museums and laboratories, damped supports are installed under the building foundations to reduce the additional vibration. Since the magnitude of this kind of vibration cannot easily be reduced, mitigation measures mainly aim to decrease the noise level. The most common approach is to install noise barriers along the line boundaries, which redirect the sound waves by reflection.
See also
Critical speed
Sonic boom
Speed of sound
Shock wave
References
External links
Project - MOTIV (Modelling Of Train Induced Vibration)
Project - CONVURT (CONtrol of Vibrations from Underground Rail Traffic)
PiP: a software based on Pipe in Pipe model for calculating vibration from underground railways
Civil engineering
Solid mechanics | Critical embankment velocity | [
"Physics",
"Engineering"
] | 2,652 | [
"Solid mechanics",
"Industrial engineering",
"Construction",
"Civil engineering",
"Mechanics",
"Transportation engineering"
] |
71,200,583 | https://en.wikipedia.org/wiki/Autonomous%20mobility%20on%20demand | Autonomous mobility on demand (AMoD) is a service consisting of a fleet of autonomous vehicles used for one-way passenger mobility. An AMoD fleet operates in a specific and limited environment, such as a city or a rural area.
Origin
Mobility on demand (MoD)
The idea of developing a form of passenger transportation based on shared vehicles rather than private cars comes from research in the field of sustainable mobility, which aims at creating an efficient and environmentally friendly way for people to move. As of the end of April 2022, the number of cars in the world had reached 1.1 billion, meaning that there is approximately one vehicle for every seven people on Earth. Such a large number of private vehicles in the streets causes several issues, namely a huge release of greenhouse gases and demand for fossil fuels, since most cars are still fuel-powered, as well as infrastructural issues such as road congestion and a lack of parking spots. The concept of mobility on demand (MoD) addresses these issues and provides a potential solution to them: in MoD, people do not need a private vehicle to travel. Mobility on demand is a service in which shared vehicles are used for passenger mobility in one-way trips. The adoption of mobility-on-demand services has the potential of increasing the utilization rate of vehicles, which for private cars is on average below 10%, thus allowing the same number of people to be transported with fewer vehicles. In this way, both congestion and pollution in cities can be reduced. The service offered in cities by taxi companies, which nowadays has also been taken up by other providers such as Uber and Bolt, is itself an expression of mobility on demand: upon request, a driver picks up passengers, drives them to their desired destination, and then goes on to the next request. The other manifestation of the concept of mobility on demand is car-sharing, which allows people to rent a vehicle, drive it to their destination and then leave it there, so that it remains available for the next customers. The idea of car-sharing has been popular with the public since the end of the 20th century and is gaining more and more success in the present years, with companies such as ShareNow and Enjoy delivering it all over the world. A big drawback of mobility-on-demand systems is that an imbalance is periodically introduced into the system, consisting of an accumulation of vehicles in some areas and a lack in others, because some zones are more popular than others. Imbalance makes the service inefficient, because customers are less likely to find a vehicle close to them.
Autonomous cars in MoD
The advent of self-driving car technology has recently started to revolutionize the concept of mobility on demand, turning it into autonomous mobility on demand (AMoD). An AMoD fleet is composed of vehicles of level 5 autonomy, controlled in a centralized way. Communication with the customers happens via phone applications, where they can request a vehicle at a precise location; the vehicle then picks them up and drives them to their desired destination. Many academic researchers and market players are focusing on the development of AMoD systems, and several companies are already developing fleets of vehicles for AMoD.
Control
Different aspects of a fleet of vehicles used for AMoD are accurately controlled for it to function in a proper way.
Routing
Because the vehicles are autonomous, accurate control of their trajectories is achieved by providing them with an optimized routing system. The routes of the cars are calculated in real time according to specific objectives defined in the design phase of the fleet control algorithms. These aim at minimizing the distance travelled or the time needed to reach a specific location, so they need to take into account different metrics such as the traffic in the streets and the condition of the roads.
Dispatching
A crucial aspect of AMoD technology is the assignment of vehicles to open customer requests. To take the dispatching decisions, the controller first registers the real-time positions of all the vehicles and open requests. Different strategies can be adopted to perform the assignments, and the choice among them affects the complexity of the fleet control and the effectiveness of the whole system. One option is to assign customers to the closest vehicle following a first-come, first-served rule, which is cheap in terms of computational time but only leads to suboptimal solutions. For this reason, researchers have proposed approaches based on mathematical programming. These consist of formulating an assignment problem by defining the cost value of each potential vehicle-customer assignment and the constraints present in the system. The problem is then solved using an algorithm for optimal resource-task assignment (a minimal illustrative sketch is given after the list below). The cost value of each possible assignment can be computed based on different metrics; most of the dispatching strategies proposed up to now are based on one of the following parameters or on a combination of some of them:
Spatial distance between vehicle and customer. It can be evaluated either as Euclidean distance, less accurate but computationally lighter, or as the shortest path, which is more precise but causes a considerable increase in computational time and may therefore limit the scalability of the systems to which the method can be applied
Estimate of the time needed for the vehicle to reach the customer
Customer waiting time
Traffic
Autonomy of the car before the next refuel
Predictions about the future demand
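A minimal sketch of the mathematical-programming approach described above, using only straight-line distance as the cost metric; the fleet positions, request positions and the use of SciPy's Hungarian-algorithm solver are illustrative assumptions, not a description of any deployed AMoD system:

```python
# Minimal vehicle-to-request assignment sketch (hypothetical data).
import numpy as np
from scipy.optimize import linear_sum_assignment

vehicles = np.array([[0.0, 0.0], [2.0, 1.0], [5.0, 5.0]])  # idle vehicle positions
requests = np.array([[1.0, 0.5], [4.5, 4.0]])              # open customer requests

# Cost matrix: Euclidean distance from every vehicle to every request.
cost = np.linalg.norm(vehicles[:, None, :] - requests[None, :, :], axis=2)

# Optimal assignment (Hungarian algorithm); extra vehicles simply stay idle.
veh_idx, req_idx = linear_sum_assignment(cost)
for v, r in zip(veh_idx, req_idx):
    print(f"vehicle {v} -> request {r} (distance {cost[v, r]:.2f})")
```

In a real controller the Euclidean distance would typically be replaced by shortest-path travel time, and further terms such as waiting time, remaining range and demand forecasts would be added to the cost.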
Rebalancing
The stochasticity of the customer demand makes AMoD systems unstable, causing them to become unbalanced after some time. This results in an uneven distribution of resources across the network, which noticeably affects the quality of the service. For this reason the empty vehicles of an AMoD fleet are periodically redistributed over their whole operating area, so that they are available where they will be most needed in the future. When planning the rebalancing, it is necessary to take into account the fact that moving empty vehicles has a cost and contributes to street congestion. For these reasons, computing the optimal rebalancing decisions requires a careful trade-off between the different cost factors (a minimal illustrative sketch is given after the list below). Various strategies can be adopted to decide where vehicles should be rebalanced:
Studying records of the customer demand in each area during the previous days, and from there estimating the average number of vehicles necessary in each zone at every time of the day
Periodically computing the imbalance between the number of cars and that of customers present in each zone, and issuing rebalancing actions aimed at minimizing such parameter in all the areas of the city
Estimating the future customer demand in each zone through some forecasting method, and anticipating it by sending the necessary vehicles to the right areas in advance
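A minimal sketch of the second strategy above, computing the per-zone imbalance between idle vehicles and expected demand and then greedily moving surplus vehicles toward deficit zones; the zone names and counts are hypothetical and relocation costs are ignored for brevity:

```python
# Per-zone imbalance and a greedy surplus-to-deficit rebalancing plan.
idle_vehicles = {"A": 6, "B": 1, "C": 3}
expected_demand = {"A": 2, "B": 4, "C": 3}

imbalance = {z: idle_vehicles[z] - expected_demand[z] for z in idle_vehicles}
surplus = {z: n for z, n in imbalance.items() if n > 0}
deficit = {z: -n for z, n in imbalance.items() if n < 0}

moves = []
for dz, need in deficit.items():
    for sz in list(surplus):
        send = min(need, surplus[sz])
        if send:
            moves.append((sz, dz, send))
            surplus[sz] -= send
            need -= send
        if need == 0:
            break

print(moves)  # e.g. [('A', 'B', 3)]
```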
Benefits
Sustainability and traffic reduction
The introduction of AMoD fleets constitutes an alternative to the use of private cars for passenger mobility, so it has the potential to substantially reduce the number of vehicles in the streets. This decreases both street congestion and greenhouse gas emissions, besides bringing an increase in fuel efficiency.
Safety
Evidence shows that the vast majority of road accidents in the world are caused by human error. The adoption of self-driving vehicles would remove the human risk factor from car trips, thus substantially decreasing the probability of an accident occurring.
Automatic rebalancing
The biggest limitation of MoD systems is the imbalance of resources caused by uneven demand across the serviced area. The introduction of self-driving vehicles provides a solution to this issue: autonomous driving technology allows the fleet to rebalance itself periodically without the need for human intervention. This improves the quality of the service, since customers are more likely to find cars when and where they need them.
Accessibility
AMoD systems allow mobility for non-drivers and give travellers the possibility of using the time of the ride in useful ways, or even to relax. The absence of drivers means the service is equally available at all hours of the day and night: the only interruptions of the service happen in case of vehicle faults, or when vehicles are being refuelled.
Limits
Cost
For an AMoD service to start making a difference in terms of congestion and pollution mitigation and safety, it needs to be adopted at large scale. Before a large number of people can start preferring such a system to their private cars, it is necessary to optimize the production and management of AMoD fleets and infrastructure so that it becomes advantageous in terms of cost compared with owning a vehicle.
Responsibility
The emerging world of self-driving vehicles is full of open ethical questions that make the adoption of such technology complicated even from a moral point of view: who should be held accountable for accidents involving self-driving cars? When an autonomous vehicle is in a situation where harming somebody is inevitable, how should it decide whom to save? Agreed solutions to these questions do not yet exist in the scientific community, but researchers and legislators are working to produce regulations on the matter.
Safety
Autonomous vehicles have the potential to eliminate road accidents caused by human error, but they are not exempt from concerns themselves. Failures might occur in their systems and potentially harm passengers or other road users. They could also be subjected to criminal activities such as hacking attacks that could affect their safety and performance.
Open challenges
Ride sharing
The possibility of sharing AMoD rides between strangers who have to travel along the same route would improve the quality of the service, by reducing waiting and travel times, and decrease its cost, because the price of the ride would be divided between the passengers. Moreover, fewer vehicles would be necessary to fulfil the customer demand, bringing benefits also in terms of sustainability and traffic reduction. Researchers are thus working to develop control systems able to combine the needs of different customers so as to better satisfy all of them and optimize the whole system.
Electric vehicles
A further advance in AMoD technology, which some companies such as Cruise are already working on, would be to use fleets of electric vehicles. This would bring a huge advantage in terms of sustainability, but it introduces additional complexity into the system. Charging an electric vehicle takes longer than refuelling a petrol car, so the charging aspect needs to be optimized for the benefits of such a system to be realized.
See also
Self-driving car
Vehicular automation
Mobility as a service
Demand-responsive transport
External links
References
Transport culture
Road transport
Cars | Autonomous mobility on demand | [
"Physics",
"Engineering"
] | 2,126 | [
"Self-driving cars",
"Transport culture",
"Physical systems",
"Transport",
"Automotive engineering"
] |
72,678,994 | https://en.wikipedia.org/wiki/Chung%20K.%20Law | Chung King Law (born 31 August 1947), also known as Ed Law, is a Chinese-born American scientist and the Robert H. Goddard Professor at Princeton University. He is a specialist in combustion science.
Career and research
Law received his bachelor's and master's degrees from the University of Alberta and the University of Toronto, respectively. He completed his PhD in 1973 under the supervision of Forman A. Williams at the University of California San Diego. He worked at the General Motors Research Laboratories for two years and briefly at Princeton University before joining the faculty of Northwestern University in 1976. He joined the faculty of the University of California, Davis in 1984 and left in 1988 to join the faculty at Princeton University, where he is currently the Robert H. Goddard Professor.
Law has made several contributions to the combustion field, especially in connection with droplet dynamics and burning, laminar flame speeds and stretched flames, and chemical mechanism reduction.
Books
Honors and awards
Law holds many honors and awards. He is an elected fellow of ASME (1989), AIAA (1992), APS (2006), the American Academy of Arts and Sciences (2010), the American Association for the Advancement of Science (2012), Combustion Institute (2018). He is an elected member of the US National Academy of Engineering (2002). He is a past president of the Combustion Institute (2000-2004).
Silver Combustion Medal (1990) from The Combustion Institute
Alfred C. Egerton Gold Medal (2006) from The Combustion Institute
Pendray Aerospace Literature Award (2004) from AIAA
The journal Combustion and Flame issued a special issue commemorating Law's 70th birthday in 2018.
References
External links
1947 births
University of Toronto alumni
Fluid dynamicists
Living people
University of California, San Diego alumni
Fellows of the Combustion Institute
Fellows of the American Physical Society
University of Alberta alumni | Chung K. Law | [
"Chemistry"
] | 372 | [
"Fellows of the Combustion Institute",
"Combustion",
"Fluid dynamicists",
"Fluid dynamics"
] |
74,098,497 | https://en.wikipedia.org/wiki/Rubidium%20permanganate | Rubidium permanganate is the permanganate salt of rubidium, with the chemical formula RbMnO4.
Preparation
Rubidium permanganate can be formed by the reaction of potassium permanganate and rubidium chloride:
KMnO4 + RbCl → RbMnO4 + KCl
Properties
Physical
Rubidium permanganate is soluble in water with a solubility of 6.03 g/L at 7 °C, 10.6 g/L at 19 °C, and 46.8 g/L at 60 °C. Its crystal structure is orthorhombic, the same as caesium permanganate, ammonium permanganate and potassium permanganate.
Chemical
Similar to potassium permanganate, rubidium permanganate decomposes in two steps through rubidium manganate intermediates. It breaks down into manganese dioxide, rubidium oxide and oxygen. The decomposition temperature is between 200 and 300 °C. The oxygen released corresponds to a mass loss of about 8% in the product.
Total reaction:
Uses
In qualitative analysis, rubidium permanganate is used as a reagent to detect perchlorate ions. It is produced as an intermediate from rubidium nitrate and potassium permanganate and precipitates with existing perchlorate ions as RbClO4·RbMnO4 mixed crystal.
References
Rubidium compounds
Permanganates | Rubidium permanganate | [
"Chemistry"
] | 279 | [
"Oxidizing agents",
"Permanganates"
] |
74,100,836 | https://en.wikipedia.org/wiki/Process%20safety%20management | Process safety management (PSM) is a practice to manage business operations critical to process safety. It can be implemented using the established OSHA scheme or others made available by the EPA, AIChE's Center for Chemical Process Safety, or the Energy Institute.
PSM schemes are organized in 'elements'. Different schemes are based on different lists of elements. This is a typical list of elements that may be reconciled with most established PSM schemes:
Commit to process safety
Process safety culture
Compliance with standards
Process safety competency
Workforce involvement
Stakeholder outreach
Understand hazards and risks
Process knowledge and documentation management
Hazard identification and risk analysis
Manage risk
Operating procedures
Safe work practices (e.g. a permit-to-work system)
Asset integrity management
Contractor management
Training and performance assurance
Management of change
Operational readiness
Conduct of operations
Emergency management
Learn from experience
Incident investigation
Process safety metrics and performance measurement
Auditing
Management review and continuous improvement
References
Further reading
Process safety
Management by type | Process safety management | [
"Chemistry",
"Engineering"
] | 191 | [
"Chemical process engineering",
"Safety engineering",
"Process safety"
] |
74,102,918 | https://en.wikipedia.org/wiki/Pure%204D%20N%20%3D%201%20supergravity | In supersymmetry, pure 4D N = 1 supergravity describes the simplest four-dimensional supergravity, with a single supercharge and a supermultiplet containing a graviton and gravitino. The action consists of the Einstein–Hilbert action and the Rarita–Schwinger action. The theory was first formulated by Daniel Z. Freedman, Peter van Nieuwenhuizen, and Sergio Ferrara, and independently by Stanley Deser and Bruno Zumino in 1976. The only consistent extension to spacetimes with a cosmological constant is to anti-de Sitter space, first formulated by Paul Townsend in 1977. When additional matter supermultiplets are included in this theory, the result is known as matter-coupled 4D N = 1 supergravity.
Flat spacetime
To describe the coupling between gravity and particles of arbitrary spin, it is useful to use the vielbein formalism of general relativity. This replaces the metric $g_{\mu\nu}$ by a set of vector fields $e_\mu^a$ indexed by flat indices $a$, such that $g_{\mu\nu} = e_\mu^a e_\nu^b \eta_{ab}$.
In a sense the vielbeins are the square root of the metric. This introduces a new local Lorentz symmetry acting on the flat indices of the vielbeins, together with the usual diffeomorphism invariance associated with the spacetime indices $\mu$. The associated connection, known as the spin connection $\omega_{\mu ab}$, is a generalization of the Christoffel connection to fields of arbitrary spin. For example, for spinors the covariant derivative is given by $D_\mu \psi = \partial_\mu \psi + \tfrac{1}{4}\omega_{\mu ab}\gamma^{ab}\psi$,
where the $\gamma^a$ are gamma matrices satisfying the Dirac algebra $\{\gamma^a, \gamma^b\} = 2\eta^{ab}$, with $\gamma^{ab} = \tfrac{1}{2}[\gamma^a, \gamma^b]$. These are often contracted with vielbeins to construct $\gamma^\mu = e^\mu_a \gamma^a$, which are in general position-dependent fields rather than constants. The spin connection has an explicit expression in terms of the vielbein and an additional torsion tensor which can arise when there is matter present in the theory. A vanishing torsion is equivalent to the Levi-Civita connection.
The pure supergravity action in four dimensions is the combination of the Einstein–Hilbert action and the Rarita–Schwinger action
Here $M_P$ is the Planck mass and $\psi_\mu$ is the Majorana gravitino, with its spinor index left implicit. Treating this action within the first-order formalism, where both the vielbein and the spin connection are independent fields, allows one to solve the spin connection's equation of motion, showing that it acquires a torsion bilinear in the gravitino. The second-order formalism action is then acquired by substituting this expression for the spin connection back into the action, yielding additional quartic gravitino vertices, with the Einstein–Hilbert and Rarita–Schwinger actions now being written with a torsionless spin connection that explicitly depends on the vielbeins.
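For orientation, one commonly quoted second-order form of this action is reproduced below; normalizations and sign conventions vary between references, so this should be read as a representative textbook form rather than a quotation of the article's lost expression:

```latex
S \;=\; \frac{1}{2\kappa^{2}}\int d^{4}x\; e\,
\Big[\, e_{a}^{\ \mu} e_{b}^{\ \nu} R_{\mu\nu}{}^{ab}(\omega)
\;-\;\bar\psi_{\mu}\gamma^{\mu\nu\rho}D_{\nu}\psi_{\rho} \Big],
\qquad e=\det e_{\mu}^{a},\quad \kappa^{-2}\sim M_{P}^{2}.
```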
The supersymmetry transformation rules that leave the action invariant are
where $\epsilon(x)$ is the spinorial gauge parameter. While historically the first-order and second-order formalisms were the first ones used to show the invariance of the action, the 1.5-order formalism is the easiest for most supergravity calculations. The additional symmetries of the action are general coordinate transformations and local Lorentz transformations.
Curved spacetime
The four dimensional super-Poincare algebra in Minkowski spacetime can be generalized to anti-de Sitter spacetime, but not to de Sitter spacetime, since the super-Jacobi identity cannot be satisfied in that case. Its action can be constructed by gauging this superalgebra, yielding the supersymmetry transformation rules for the vielbein and the gravitino.
The action for AdS supergravity in four dimensions is
where $L$ is the AdS radius and the second term corresponds to the negative cosmological constant. The supersymmetry transformations are
While the bilinear term in the action appears to give a mass to the gravitino, the gravitino still belongs to the massless gravity supermultiplet. This is because mass is not well-defined in curved spacetimes, with $P^2$ no longer being a Casimir operator of the AdS superalgebra. It is however conventional to define a mass through the Laplace–Beltrami operator, in which case particles within the same supermultiplet have different masses, unlike in flat spacetimes.
See also
N = 8 supergravity
References
Supersymmetric quantum field theory
Theories of gravity | Pure 4D N = 1 supergravity | [
"Physics"
] | 881 | [
"Supersymmetric quantum field theory",
"Theoretical physics",
"Theories of gravity",
"Supersymmetry",
"Symmetry"
] |
74,104,164 | https://en.wikipedia.org/wiki/Hardy%20distribution | In probability theory and statistics, the Hardy distribution is a discrete probability distribution that expresses the probability of the hole score for a given golf player. It is based on Hardy's (Hardy, 1945) basic assumption that there are three types of shots:
good,
bad, and
ordinary,
where the probability of a good hit equals p, the probability of a bad hit equals q, and the probability of an ordinary hit equals 1 − p − q. Hardy further assigned
a value of 2 to a good stroke,
a value of 0 to a bad stroke and
a value of 1 to a regular or ordinary stroke.
Once the sum of the values is greater than or equal to the par of the hole, the number of strokes played up to that point is the score achieved on that hole. A birdie on a par three could then have come about in three ways: good-good, good-ordinary and ordinary-good, with probabilities p·p, p·(1 − p − q) and (1 − p − q)·p, respectively.
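A minimal Monte Carlo sketch of this mechanism, using hypothetical probabilities p and q of the same order as the tournament estimates quoted later in the article; shot values follow the text (good = 2, bad = 0, ordinary = 1), and the score is the number of strokes taken until the accumulated value reaches the par:

```python
# Simulate the Hardy hole-score distribution (illustrative parameters only).
import random
from collections import Counter

def hole_score(par, p=0.06, q=0.07):
    total, strokes = 0, 0
    while total < par:
        strokes += 1
        u = random.random()
        if u < p:            # good shot, value 2
            total += 2
        elif u < p + q:      # bad shot, value 0
            total += 0
        else:                # ordinary shot, value 1
            total += 1
    return strokes

counts = Counter(hole_score(par=3) for _ in range(100_000))
for score in sorted(counts):
    print(score, counts[score] / 100_000)
```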
Definitions
Probability mass function
A discrete random variable is said to have a Hardy distribution with parameters m (the par of the hole), p and q if it has a probability mass function given by:
if m is odd
and
if m is even
with
and
where
m is the par of the hole
the golf hole score if m is even
the golf hole score if m is odd
p is the probability of a good shot
q is the probability of a bad shot (with p + q ≤ 1)
The moment generating function is given by:
if m is odd
and
if m is even
with
and
Each raw moment and each central moment can be easily determined with the moment generating function, but the formulas involved are too large to present here.
Hardy distribution for a par three, four and five
For a par three:
For a par four:
Note the resemblance to the par-three formula. For a par five:
Note the resemblance to the formulas for the par-three and par-four cases.
History
When trying to make a probability distribution in golf that describes the frequency distribution of the number of strokes on a hole, the simplest setup is to assume that there are only two types of strokes:
A good stroke with a probability of
A bad stroke with a probability of .
while
a good shot then gets the value 1 and
a bad shot gets the value 0.
Once the sum of the shot values equals the par of the hole, that is the number of strokes needed for the hole.
It is clear that with this setup a birdie is not possible, since the smallest number of strokes one can score is the par of the hole. Hardy (1945) probably realized this too and came up with the idea of assuming not just two types of strokes, good and bad, but three types:
good, with probability p
bad, with probability q
ordinary, with probability 1 − p − q.
In fact, Hardy called a good shot a supershot and a bad shot a subshot. Minton later called Hardy's supershot an excellent shot and Hardy's subshot a bad shot. In this article, Minton's excellent shot is called a good shot. Hardy came up with the idea of three types of shots in 1945, but the actual derivation of the probability distribution of the hole score was not given until 2012 by van der Ven.
Hardy assumed that the probability of a good stroke was equal to the probability of a bad stroke, namely p = q. This was confirmed by Kang:
In retrospect, Hardy might well have been right, as the data in Table 2 in van der Ven (2013) show. This table shows the estimated p- and q-values for holes 1-18 for rounds 1 and 2 of the 2012 British Open Championship. The mean values were equal to 0.0633 and 0.0697, respectively. Later, Cohen (2002) introduced the idea that p and q should be different. Kang says about this:
For the Hardy distribution the values of p and q may be different.
Goodness of fit
The Hardy distribution gives the probability distribution of a single player's hole score. It takes several observations to perform a goodness-of-fit test (see Goodness of fit test) to check whether the Hardy distribution applies or not. This can be done with a single individual by having the individual play the same hole multiple times. Goodness-of-fit tests assume pure replications (see Replication (statistics)). This means that there should be no change in the player's golfing ability during repeated play of the hole. For example, there should not be an ongoing learning process (see Learning). Such effects cannot really be ruled out. One way around this problem is to use multiple players who can be assumed to have approximately the same golf proficiency. Such players are the participants in professional golf tournaments (see PGA Tour). Before using a goodness-of-fit test, it should first be checked that the participants indeed have approximately the same golf proficiency. This can be done separately for each hole by using, for example, the Pearson correlation coefficient between the hole score on the first day and the second day of a tournament. If there are no systematic differences (see Classical test theory) between players, the correlation (see Correlation) between the score achieved on Day 1 on a hole and the score achieved on Day 2 on that hole will not differ significantly (see Statistical significance) from zero. This can be easily tested statistically. In a study by van der Ven, the results of a goodness-of-fit test of the Hardy distribution were reported using the hole-by-hole scores from the 2012 Open Championship played at the St Andrews Golf Club. The distribution has been tested separately for each hole. Pearson's chi-squared test was used to determine whether the observed sample frequencies of the hole scores differed significantly from the expected frequencies according to the Hardy distribution. The fit between observed and expected frequencies was generally very satisfactory.
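A minimal sketch of the goodness-of-fit step described above, using Pearson's chi-squared test; the observed and expected hole-score frequencies below are invented for illustration, and in practice the degrees of freedom would be reduced to account for the parameters estimated from the data:

```python
# Chi-squared goodness-of-fit check with made-up hole-score frequencies.
from scipy.stats import chisquare

observed = [12, 98, 31, 9]            # counts of scores 3, 4, 5, 6 on a par-4 hole
expected = [10.5, 101.2, 29.8, 8.5]   # counts expected under a fitted Hardy model

# chisquare requires matching totals, so rescale the expected counts.
expected = [e * sum(observed) / sum(expected) for e in expected]
stat, p_value = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p_value:.3f}")
```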
References
Notes
Probability distributions | Hardy distribution | [
"Mathematics"
] | 1,152 | [
"Functions and mappings",
"Mathematical relations",
"Mathematical objects",
"Probability distributions"
] |
68,305,758 | https://en.wikipedia.org/wiki/Urey%E2%80%93Bigeleisen%E2%80%93Mayer%20equation | In stable isotope geochemistry, the Urey–Bigeleisen–Mayer equation, also known as the Bigeleisen–Mayer equation or the Urey model, is a model describing the approximate equilibrium isotope fractionation in an isotope exchange reaction. While the equation itself can be written in numerous forms, it is generally presented as a ratio of partition functions of the isotopic molecules involved in a given reaction. The Urey–Bigeleisen–Mayer equation is widely applied in the fields of quantum chemistry and geochemistry and is often modified or paired with other quantum chemical modelling methods (such as density functional theory) to improve accuracy and precision and reduce the computational cost of calculations.
The equation was first introduced by Harold Urey and, independently, by Jacob Bigeleisen and Maria Goeppert Mayer in 1947.
Description
Since its original descriptions, the Urey–Bigeleisen–Mayer equation has taken many forms. Given an isotopic exchange reaction between molecules containing the isotope of interest, the equation expresses the equilibrium constant K in terms of the product of partition function ratios of the isotopic molecules or atoms involved, namely the translational, rotational, vibrational, and sometimes electronic partition functions. It is typical to approximate the rotational partition function ratio using quantized rotational energies in a rigid-rotor system. The Urey model also treats molecular vibrations as simplified harmonic oscillators and follows the Born–Oppenheimer approximation.
Isotope partitioning behavior is often reported as a reduced partition function ratio, a simplified form of the Bigeleisen–Mayer equation commonly notated as (s/s′)f or β. The reduced partition function ratio can be derived from a power series expansion of the function and allows the partition functions to be expressed in terms of vibrational frequencies. It can be used to relate molecular vibrations and intermolecular forces to equilibrium isotope effects.
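In the harmonic approximation the reduced partition function ratio takes the textbook form below, written here with $u_i = h\nu_i/k_BT$ for the vibrational modes of the heavy isotopologue and $u_i'$ for the light one; this is a standard rendering of the Bigeleisen–Mayer result rather than a quotation of the original papers:

```latex
\frac{s}{s'}f \;=\; \prod_{i}
\frac{u_i}{u_i'}\,
\frac{e^{-u_i/2}}{1-e^{-u_i}}\,
\frac{1-e^{-u_i'}}{e^{-u_i'/2}} .
```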
As the model is an approximation, many applications append corrections for improved accuracy. Some common, significant modifications to the equation include accounting for pressure effects, nuclear geometry, and corrections for anharmonicity and quantum mechanical effects. For example, hydrogen isotope exchange reactions have been shown to disagree with the requisite assumptions for the model but correction techniques using path integral methods have been suggested.
History of discovery
One aim of the Manhattan Project was increasing the availability of concentrated radioactive and stable isotopes, in particular 14C, 35S, 32P, and deuterium for heavy water. Harold Urey, Nobel laureate physical chemist known for his discovery of deuterium, became its head of isotope separation research while a professor at Columbia University. In 1945, he joined The Institute for Nuclear Studies at the University of Chicago, where he continued to work with chemist Jacob Bigeleisen and physicist Maria Mayer, both also veterans of isotopic research in the Manhattan Project. In 1946, Urey delivered the Liversidge lecture at the then-Royal Institute of Chemistry, where he outlined his proposed model of stable isotope fractionation. Bigeleisen and Mayer had been working on similar work since at least 1944 and, in 1947, published their model independently from Urey. Their calculations were mathematically equivalent to a 1943 derivation of the reduced partition function by German physicist Ludwig Waldmann.
Applications
Initially used to approximate chemical reaction rates, models of isotope fractionation are used throughout the physical sciences. In chemistry, the Urey–Bigeleisen–Mayer equation has been used to predict equilibrium isotope effects and interpret the distributions of isotopes and isotopologues within systems, especially as deviations from their natural abundance. The model is also used to explain isotopic shifts in spectroscopy, such as those from nuclear field effects or mass independent effects. In biochemistry, it is used to model enzymatic kinetic isotope effects. Simulation testing in computational systems biology often uses the Bigeleisen–Mayer model as a baseline in the development of more complex models of biological systems. Isotope fractionation modeling is a critical component of isotope geochemistry and can be used to reconstruct past Earth environments as well as examine surface processes.
See also
Timeline of the Manhattan Project
Isotope-ratio mass spectrometry
Hydrogen isotope biogeochemistry
Notes
References
External links
Biochemistry methods
Biogeochemistry
Chemical oceanography
Earth system sciences
Equations
Isotopes
Manhattan Project
University of Chicago | Urey–Bigeleisen–Mayer equation | [
"Physics",
"Chemistry",
"Mathematics",
"Biology",
"Environmental_science"
] | 902 | [
"Biochemistry methods",
"Environmental chemistry",
"Mathematical objects",
"Isotopes",
"Equations",
"Chemical oceanography",
"Biogeochemistry",
"Nuclear physics",
"Biochemistry"
] |
66,895,075 | https://en.wikipedia.org/wiki/Pestov%E2%80%93Ionin%20theorem | The Pestov–Ionin theorem in the differential geometry of plane curves states that every simple closed curve of curvature at most one encloses a unit disk.
History and generalizations
Although a version of this was published for convex curves by Wilhelm Blaschke in 1916, it is named for Pestov and Ionin, who published a version of this theorem in 1959 for non-convex doubly differentiable (C2) curves, the curves for which the curvature is well-defined at every point. The theorem has been generalized further, to curves of bounded average curvature (singly differentiable, and satisfying a Lipschitz condition on the derivative), and to curves of bounded convex curvature (each point of the curve touches a unit disk that, within some small neighborhood of the point, remains interior to the curve).
Applications
The theorem has been applied in algorithms for motion planning. In particular it has been used for finding Dubins paths, shortest routes for vehicles that can move only in a forwards direction and that can turn left or right with a bounded turning radius. It has also been used for planning the motion of the cutter in a milling machine for pocket machining, and in reconstructing curves from scattered data points.
References
Theorems in differential geometry
Theorems about circles
Theorems about curves
Curvature (mathematics) | Pestov–Ionin theorem | [
"Physics",
"Mathematics"
] | 263 | [
"Geometric measurement",
"Theorems in differential geometry",
"Physical quantities",
"Theorems about curves",
"Theorems in geometry",
"Curvature (mathematics)"
] |
66,900,050 | https://en.wikipedia.org/wiki/ALESS%20073.1 | ALESS 073.1 is an old spiral galaxy 12 billion light years away from Earth. The discovery was published in February 2021 in the journal Science. It has challenged the way astronomers understand galaxies and galaxy formation.
Observation
The galaxy was reported in a study conducted by a team of astronomers led by Dr. Federico Lelli at Cardiff University. The team used the Atacama Large Millimeter/submillimeter Array (ALMA) telescope, currently the largest radio telescope in the world, to observe the galaxy in its adolescence. The publication of the study of ALESS 073.1 includes “one of the sharpest, direct images of a primordial galaxy ever produced which allowed the team to undertake a detailed study of its internal structure," according to Cardiff University.
Distance
ALESS 073.1 is about 12 billion light years away from Earth. Due to its distance away from Earth, the light being shown is from when the universe was only 10% of its current age.
Characteristics
Like all galaxies, ALESS 073.1 is composed of gas, dark matter, and dust. It is made from stars that are held together by gravity.
ALESS 073.1 is estimated to have formed 12 billion years ago, just 1.2 billion years after the Big Bang. The light observed now therefore shows the galaxy during its early years. However, the physical characteristics of the galaxy indicate that it is much more mature than its young age would suggest. ALESS 073.1 exhibits features normally attributed to mature galaxies, such as spiral arms that extend from its center. In this way, it has features similar to those of spiral galaxies. It also has a rotating disk and a bulge, characteristics found in mature galaxies. This is contrary to the previous understanding of newer galaxies as chaotic, without a particular shape or structure. Over billions of years, young galaxies slow down and stabilize, creating the distinctive features that are associated with mature galaxies.
The core of ALESS 073.1 hints at the presence of a supermassive black hole, since it is producing more energy than is typical for stars.
Scientific implications
The galaxy's young features, while displaying mature features, challenges scientists’ understanding of galaxy formation. However, more images and information are needed to indicate if this can be observed from other galaxies.
The massive bulge of ALESS 073.1 also puts features typically associated with mature galaxies into question. A bulge is a group of stars that are clustered together at the center of the galaxy. Bulges were generally thought to be a prominent feature of mature galaxies. It was thought that these bulges formed slowly over a long period of time through the merging of smaller galaxies. However, the discovery of ALESS 073.1's bulge indicates that they are able to be formed much quicker than previously thought. Approximately half of ALESS 073.1's stars were found to be present in the bulge.
References
Spiral galaxies
Fornax
Stellar evolution
Cardiff University
Black holes
Radio telescopes | ALESS 073.1 | [
"Physics",
"Astronomy"
] | 606 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Stellar evolution",
"Constellations",
"Density",
"Fornax",
"Stellar phenomena",
"Astronomical objects"
] |
77,138,130 | https://en.wikipedia.org/wiki/Prince%20Chaldean | Prince Chaldean (also known as Chaldean 854 and Chaldean 637) is a Percheron gray stallion, known for his very long, abundant mane. Born in the Perche region of France in 1877, he was exported as a youngster to the United States, where he was briefly owned by Mark Wentworth Dunham, who sold him a few months later to Mr. Babcock in Wisconsin. Chaldean became a popular local breeding stallion.
He earned his nickname "Prince Chaldean" when he toured with the Ringling Brothers Circus from 1892 onwards. He was presented as the most beautiful and heaviest Percheron horse ever to arrive in the United States. One of his daughters, the mare Isis 1744, is the dam of three famous stallions, Primus 5705, Horus 6491, and Ilderim 10356.
History
Chaldean was born in 1877 in the department of Eure-et-Loir, France. He was imported as a young foal by the famous horse owner and breeder Mark Wentworth Dunham, to his stud in the town of Wayne, Illinois in the USA, the same year. His coat color was black.
Property of Babcock
In February 1878, it was acquired by a man named H. A. Babcock (according to the U.S. Register and the Breeder's Gazette), residing in Neenah, Wisconsin. However, author Jean-Léo Dugast attributes the name Geo Babcock to its owner, specifying that he resides in Appleton. Chaldean was bred from the age of 43. Babcock testified that his horse had "never been beaten in a show ring". By 1890, while still owned by Babcock, Chaldean's coat color had changed to gray.
Circus career
The stallion first took part in Ringling Brothers circus shows in 1892; the circus presented him as the heaviest and most handsome Percheron ever exported to the USA, under the name "Prince Chaldean, The Percheron Beauty". Putting shows involving circus animals with physical peculiarities into context, including those of the Ringling Brothers circus, the skeptical investigator Joe Nickell notes that these animals were often integrated into sideshows (entresorts), shows presented separately from the main tent, which relied on capturing the audience's interest through bon mots. Numerous animals with physical peculiarities, including horses, were exhibited in the great American circus shows of the period.
Prince Chaldean, for example, was exhibited in Wisconsin in 1892. For the occasion, the Ringling Brothers circus distributed press releases to the local American press, promoting the horse's appearance; at the same time, it distributed another press release promoting "the biggest hippopotamus in the world".
Description
Chaldean was a Percheron horse, best known for his very long, abundant mane. According to his second owner Babcock, it reached a length of over 2.20 m (7 feet 4 inches) in 1890. This feature was described in the Breeder's Gazette as "out of the ordinary". His mane was measured at 9 feet 2 inches (2.80 m) two years later, in 1892, his tail being the same length.
His weight exceeded 1,800 pounds (810 kg) in 1892.
He is registered as a black-coated horse in the Stud-book percheron, but the Breeder's Gazette and author Jean-Léo Dugast report, based on the study of iconographic documents, that he was more likely gray, his color having gradually changed after his birth.
Origins
There is disagreement about Chaldean's origins. The French Studbook (1883), the American Percheron Studbook (1888) and the Breeder's Gazette of 1890 all report him as the son of a stallion named Coco, himself a son of Coco II 714. This makes Chaldean a grandson of Coco II 714. Author Jean-Léo Dugast states that his father was the stallion Coco II 714.
His mother is a daughter of Superior 730.
The Ringling brothers describe him as a "noble" animal with an "impeccable pedigree".
Descent and homage
Chaldean is said to have been a very popular sire, producing between 75 and 90 foals a year. All his foals were said to be gray, whatever the color of the dam.
One of his daughters, registered in the American Percheron Studbook, is the mare Isis 1744, dam of Primus 5705, Horus 6491, and Ilderim 10356, the latter presented by H.C. Farnum and awarded third prize at the Detroit International Show in 1890.
In 1889, the American illustrator Lou Burk drew Chaldean to illustrate the cover of an issue of the Breeder's Gazette, published on May 28, 1890. The illustration is described as a very accurate likeness of the horse that served as its model.
See also
Percheron
Draft horse
References
Bibliography
Individual horses
Circuses
Breeding | Prince Chaldean | [
"Biology"
] | 1,032 | [
"Behavior",
"Breeding",
"Reproduction"
] |
77,144,455 | https://en.wikipedia.org/wiki/PKS%201402%2B044 | PKS 1402+044 is a quasar located in the constellation of Virgo. It has a redshift of 3.207, placing the object about 11.3 billion light-years away from Earth.
Characteristics
PKS 1402+044 is classified as a broad absorption-line quasar (BAL QSO), observed by the Sloan Digital Sky Survey, with a flat-spectrum radio source. It is also classified as a blazar, a type of active galaxy, and as such it produces a powerful astrophysical jet that is shot out into the depths of intergalactic space.
The blazar is known to be in a quiescent state, but it shows repeated periods of outbursts that are visible throughout the electromagnetic spectrum. According to observations from the Gamma-Ray Blazar Survey and the Fermi Gamma-Ray Space Telescope, PKS 1402+044 is optically variable with >6σ significance, is detected in γ-rays, and is more Compton-dominated than high synchrotron peaked (HSP) BL Lac objects.
Radio imaging shows the quasar to be core-dominated, with fluctuating radio emission and a radio morphology that is more compact than that of steep-spectrum quasars. The quasar is radio-loud, with jet magnetic fields aligned along its source axis and a lobe field found to have a misaligned orientation.
References
Quasars
Virgo (constellation)
Blazars
Supermassive black holes
Active galaxies
2827828
SDSS objects
BL Lacertae objects | PKS 1402+044 | [
"Physics",
"Astronomy"
] | 315 | [
"Black holes",
"Unsolved problems in physics",
"Supermassive black holes",
"Virgo (constellation)",
"Constellations"
] |
77,145,895 | https://en.wikipedia.org/wiki/Armstrong%20process | The Armstrong process is used to refine titanium. Its output is a fine particulate powder which can be sprayed into pattern molds. It was patented in 1999. The output of this process has a "coral-like morphology", which differs from the traditional outputs like "spherical gas-atomized powder, mechanically crushed angular particles, or the titanium sponge morphology created during the Kroll process."
History
The Armstrong process was patented in 1999.
In 2016 a paper by MacDonald et al. reported that the Armstrong powder is produced directly from the reduction of titanium tetrachloride "in a continuous liquid loop" and costs only "11-24 USD/kg", roughly an order of magnitude more than the price of steel.
Description
The reducing agent for the Armstrong process is sodium, which is liquefied and introduced in a combined stream with titanium tetrachloride.
TiCl4 + 4 Na → Ti + 4 NaCl (at 98 °C)
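A worked stoichiometric figure that follows from this equation (simple arithmetic from standard atomic masses of about 22.99 for Na, 47.87 for Ti and a formula mass of about 58.44 for NaCl; it is not a statement about actual plant consumption):

```latex
\frac{m_{\mathrm{Na}}}{m_{\mathrm{Ti}}}
  = \frac{4 \times 22.99}{47.87} \approx 1.9,
\qquad
\frac{m_{\mathrm{NaCl}}}{m_{\mathrm{Ti}}}
  = \frac{4 \times 58.44}{47.87} \approx 4.9,
```

i.e. roughly 1.9 kg of sodium is consumed and 4.9 kg of sodium chloride by-product is formed per kilogram of titanium.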
References
Industrial processes
Metallurgical processes
Titanium processes
Materials science
1999 introductions
20th-century inventions
American inventions | Armstrong process | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 238 | [
"Applied and interdisciplinary physics",
"Metallurgical processes",
"Metallurgy",
"Materials science",
"Titanium processes",
"nan"
] |
71,208,584 | https://en.wikipedia.org/wiki/Zinc%20in%20biology | Zinc is an essential trace element for humans and other animals, for plants and for microorganisms. Zinc is required for the function of over 300 enzymes and 1000 transcription factors, and is stored and transferred in metallothioneins. It is the second most abundant trace metal in humans after iron and it is the only metal which appears in all enzyme classes.
In proteins, zinc ions are often coordinated to the amino acid side chains of aspartic acid, glutamic acid, cysteine and histidine. The theoretical and computational description of this zinc binding in proteins (as well as that of other transition metals) is difficult.
Roughly 2–4 grams of zinc are distributed throughout the human body. Most zinc is in the brain, muscle, bones, kidney, and liver, with the highest concentrations in the prostate and parts of the eye. Semen is particularly rich in zinc, a key factor in prostate gland function and reproductive organ growth.
Zinc homeostasis of the body is mainly controlled by the intestine. Here, ZIP4 and especially TRPM7 were linked to intestinal zinc uptake essential for postnatal survival.
In humans, the biological roles of zinc are ubiquitous. It interacts with "a wide range of organic ligands", and has roles in the metabolism of RNA and DNA, signal transduction, and gene expression. It also regulates apoptosis. A review from 2015 indicated that about 10% of human proteins (~3000) bind zinc, in addition to hundreds more that transport and traffic zinc; a similar in silico study in the plant Arabidopsis thaliana found 2367 zinc-related proteins.
In the brain, zinc is stored in specific synaptic vesicles by glutamatergic neurons and can modulate neuronal excitability. It plays a key role in synaptic plasticity and so in learning. Zinc homeostasis also plays a critical role in the functional regulation of the central nervous system. Dysregulation of zinc homeostasis in the central nervous system that results in excessive synaptic zinc concentrations is believed to induce neurotoxicity through mitochondrial oxidative stress (e.g., by disrupting certain enzymes involved in the electron transport chain, including complex I, complex III, and α-ketoglutarate dehydrogenase), the dysregulation of calcium homeostasis, glutamatergic neuronal excitotoxicity, and interference with intraneuronal signal transduction. L- and D-histidine facilitate brain zinc uptake. SLC30A3 is the primary zinc transporter involved in cerebral zinc homeostasis.
Enzymes
Zinc is an efficient Lewis acid, making it a useful catalytic agent in hydroxylation and other enzymatic reactions. The metal also has a flexible coordination geometry, which allows proteins using it to rapidly shift conformations to perform biological reactions. Two examples of zinc-containing enzymes are carbonic anhydrase and carboxypeptidase, which are vital to the processes of carbon dioxide (CO2) regulation and digestion of proteins, respectively.
In vertebrate blood, carbonic anhydrase converts CO2 into bicarbonate, and the same enzyme transforms the bicarbonate back into CO2 for exhalation through the lungs. Without this enzyme, this conversion would occur about one million times slower at the normal blood pH of 7 or would require a pH of 10 or more. The non-related β-carbonic anhydrase is required in plants for leaf formation, the synthesis of indole acetic acid (auxin) and alcoholic fermentation.
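Written out, the interconversion catalyzed in both directions by carbonic anhydrase is the reversible hydration of carbon dioxide, shown here in simplified overall form (omitting the enzyme's zinc-bound hydroxide intermediate):
\[
\mathrm{CO_2} + \mathrm{H_2O} \;\rightleftharpoons\; \mathrm{HCO_3^-} + \mathrm{H^+}
\]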
Carboxypeptidase cleaves peptide linkages during digestion of proteins. A coordinate covalent bond is formed between the terminal peptide and a C=O group attached to zinc, which gives the carbon a positive charge. This helps to create a hydrophobic pocket on the enzyme near the zinc, which attracts the non-polar part of the protein being digested.
Signalling
Zinc has been recognized as a messenger, able to activate signalling pathways. Many of these pathways provide the driving force in aberrant cancer growth. They can be targeted through ZIP transporters.
Other proteins
Zinc serves a purely structural role in zinc fingers, twists and clusters. Zinc fingers form parts of some transcription factors, which are proteins that recognize DNA base sequences during the replication and transcription of DNA. Each of the nine or ten Zn2+ ions in a zinc finger helps maintain the finger's structure by coordinately binding to four amino acids in the transcription factor.
In blood plasma, zinc is bound to and transported by albumin (60%, low-affinity) and transferrin (10%). Because transferrin also transports iron, excessive iron reduces zinc absorption, and vice versa. A similar antagonism exists with copper. The concentration of zinc in blood plasma stays relatively constant regardless of zinc intake. Cells in the salivary gland, prostate, immune system, and intestine use zinc signaling to communicate with other cells.
Zinc may be held in metallothionein reserves within microorganisms or in the intestines or liver of animals. Metallothionein in intestinal cells is capable of adjusting absorption of zinc by 15–40%. However, inadequate or excessive zinc intake can be harmful; excess zinc particularly impairs copper absorption because metallothionein absorbs both metals.
The human dopamine transporter contains a high affinity extracellular zinc binding site which, upon zinc binding, inhibits dopamine reuptake and amplifies amphetamine-induced dopamine efflux in vitro. The human serotonin transporter and norepinephrine transporter do not contain zinc binding sites. Some EF-hand calcium binding proteins such as S100 or NCS-1 are also able to bind zinc ions.
Nutrition
Dietary recommendations
The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for zinc in 2001. The current EARs for zinc for women and men ages 14 and up are 6.8 and 9.4 mg/day, respectively. The RDAs are 8 and 11 mg/day. RDAs are higher than EARs so as to identify amounts that will cover people with higher-than-average requirements. The RDA for pregnancy is 11 mg/day, and the RDA for lactation is 12 mg/day. For infants up to 12 months, the RDA is 3 mg/day. For children ages 1–13 years, the RDA increases with age from 3 to 8 mg/day. As for safety, the IOM sets Tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of zinc, the adult UL is 40 mg/day (lower for children). Collectively, the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs).
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For people ages 18 and older, the PRI calculations are complex, as the EFSA has set higher and higher values as the phytate content of the diet increases. For women, PRIs increase from 7.5 to 12.7 mg/day as phytate intake increases from 300 to 1200 mg/day; for men, the range is 9.4 to 16.3 mg/day. These PRIs are higher than the U.S. RDAs. The EFSA reviewed the same safety question and set its UL at 25 mg/day, which is much lower than the U.S. value.
For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For zinc labeling purposes, 100% of the Daily Value was 15 mg, but on May 27, 2016, it was revised to 11 mg. A table of the old and new adult daily values is provided at Reference Daily Intake.
Dietary intake
Animal products such as meat, fish, shellfish, fowl, eggs, and dairy contain zinc. The concentration of zinc in plants varies with the level in the soil. With adequate zinc in the soil, the food plants that contain the most zinc are wheat (germ and bran) and various seeds, including sesame, poppy, alfalfa, celery, and mustard. Zinc is also found in beans, nuts, almonds, whole grains, pumpkin seeds, sunflower seeds, and blackcurrant.
Other sources include fortified food and dietary supplements in various forms. A 1998 review concluded that zinc oxide, one of the most common supplements in the United States, and zinc carbonate are nearly insoluble and poorly absorbed in the body. This review cited studies that found lower plasma zinc concentrations in the subjects who consumed zinc oxide and zinc carbonate than in those who took zinc acetate and sulfate salts. For fortification, however, a 2003 review recommended cereals (containing zinc oxide) as a cheap, stable source that is as easily absorbed as the more expensive forms. A 2005 study found that various compounds of zinc, including oxide and sulfate, did not show statistically significant differences in absorption when added as fortificants to maize tortillas.
Deficiency
Nearly two billion people in the developing world are deficient in zinc. Groups at risk include children in developing countries and the elderly with chronic illnesses. In children, it causes an increase in infection and diarrhea and contributes to the death of about 800,000 children worldwide per year. The World Health Organization advocates zinc supplementation for severe malnutrition and diarrhea. Zinc supplements help prevent disease and reduce mortality, especially among children with low birth weight or stunted growth. However, zinc supplements should not be administered alone, because many in the developing world have several deficiencies, and zinc interacts with other micronutrients. While zinc deficiency is usually due to insufficient dietary intake, it can be associated with malabsorption, acrodermatitis enteropathica, chronic liver disease, chronic renal disease, sickle cell disease, diabetes, malignancy, and other chronic illnesses.
In the United States, a federal survey of food consumption determined that for women and men over the age of 19, average consumption was 9.7 and 14.2 mg/day, respectively. For women, 17% consumed less than the EAR, for men 11%. The percentages below EAR increased with age. The most recent published update of the survey (NHANES 2013–2014) reported lower averages – 9.3 and 13.2 mg/day – again with intake decreasing with age.
Symptoms of mild zinc deficiency are diverse. Clinical outcomes include depressed growth, diarrhea, impotence and delayed sexual maturation, alopecia, eye and skin lesions, impaired appetite, altered cognition, impaired immune functions, defects in carbohydrate utilization, and reproductive teratogenesis. Zinc deficiency depresses immunity, but excessive zinc does also.
Despite some concerns, western vegetarians and vegans do not suffer any more from overt zinc deficiency than meat-eaters. Major plant sources of zinc include cooked dried beans, sea vegetables, fortified cereals, soy foods, nuts, peas, and seeds. However, phytates in many whole-grains and fibers may interfere with zinc absorption and marginal zinc intake has poorly understood effects. The zinc chelator phytate, found in seeds and cereal bran, can contribute to zinc malabsorption. Some evidence suggests that more than the US RDA (8 mg/day for adult women; 11 mg/day for adult men) may be needed in those whose diet is high in phytates, such as some vegetarians. The European Food Safety Authority (EFSA) guidelines attempt to compensate for this by recommending higher zinc intake when dietary phytate intake is greater. These considerations must be balanced against the paucity of adequate zinc biomarkers, and the most widely used indicator, plasma zinc, has poor sensitivity and specificity.
Soil availability and remediation
Zinc can be present in six different forms in soil, namely: water-soluble zinc, exchangeable zinc, organically bound zinc, carbonate-bound zinc, aluminium- and manganese-oxide-bound zinc, and residual fractions of zinc.
In toxic conditions, species of Calluna, Erica and Vaccinium can grow in zinc-metalliferous soils, because translocation of toxic ions is prevented by the action of ericoid mycorrhizal fungi.
Agriculture
Zinc deficiency appears to be the most common micronutrient deficiency in crop plants; it is particularly common in high-pH soils. Zinc-deficient soil is cultivated in the cropland of about half of Turkey and India, a third of China, and most of Western Australia. Substantial responses to zinc fertilization have been reported in these areas. Plants that grow in soils that are zinc-deficient are more susceptible to disease. Zinc is added to the soil primarily through the weathering of rocks, but humans have added zinc through fossil fuel combustion, mine waste, phosphate fertilizers, pesticide (zinc phosphide), limestone, manure, sewage sludge, and particles from galvanized surfaces. Excess zinc is toxic to plants, although zinc toxicity is far less widespread.
Biodegradable implants
Zinc (Zn), alongside magnesium (Mg) and iron (Fe), constitutes one of the three families of biodegradable metals. Zinc, as an abundant trace element, ranks sixth among the essential metallic elements crucial for sustaining life within the human body. Zinc exhibits an intermediate biodegradation rate, falling between that of Fe (relatively slow) and Mg (relatively fast), which positions it as a promising material for use in biodegradable implants.
References
Bibliography
Biological systems
Biology and pharmacology of chemical elements
Dietary minerals
Nutrition
Physiology
Biology | Zinc in biology | [
"Chemistry",
"Biology"
] | 2,905 | [
"Pharmacology",
"Properties of chemical elements",
"Physiology",
"Biology and pharmacology of chemical elements",
"nan",
"Biochemistry"
] |
71,208,595 | https://en.wikipedia.org/wiki/Molybdenum%20in%20biology | Molybdenum is an essential element in most organisms. It is most notably present in nitrogenase which is an essential part of nitrogen fixation.
Mo-containing enzymes
Molybdenum is an essential element in most organisms; a 2008 research paper speculated that a scarcity of molybdenum in the Earth's early oceans may have strongly influenced the evolution of eukaryotic life (which includes all plants and animals).
At least 50 molybdenum-containing enzymes have been identified, mostly in bacteria. Those enzymes include aldehyde oxidase, sulfite oxidase and xanthine oxidase. With one exception, Mo in proteins is bound by molybdopterin to give the molybdenum cofactor. The only known exception is nitrogenase, which uses the FeMoco cofactor, which has the formula Fe7MoS9C.
In terms of function, molybdoenzymes catalyze the oxidation and sometimes reduction of certain small molecules in the process of regulating nitrogen, sulfur, and carbon. In some animals, and in humans, the oxidation of xanthine to uric acid, a process of purine catabolism, is catalyzed by xanthine oxidase, a molybdenum-containing enzyme. The activity of xanthine oxidase is directly proportional to the amount of molybdenum in the body. An extremely high concentration of molybdenum reverses the trend and can inhibit purine catabolism and other processes. Molybdenum concentration also affects protein synthesis, metabolism, and growth.
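The xanthine oxidation step described above can be summarized by the overall reaction, given here in a simplified form (the enzyme can also reduce O2 to superoxide rather than hydrogen peroxide):
\[
\mathrm{xanthine} + \mathrm{H_2O} + \mathrm{O_2} \;\longrightarrow\; \mathrm{uric\ acid} + \mathrm{H_2O_2}
\]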
Mo is a component in most nitrogenases. Among molybdoenzymes, nitrogenases are unique in lacking the molybdopterin. Nitrogenases catalyze the production of ammonia from atmospheric nitrogen:
\[
\mathrm{N_2} + 8\,\mathrm{H^+} + 8\,\mathrm{e^-} + 16\,\mathrm{ATP} \;\longrightarrow\; 2\,\mathrm{NH_3} + \mathrm{H_2} + 16\,\mathrm{ADP} + 16\,\mathrm{P_i}
\]
The biosynthesis of the FeMoco active site is highly complex.
Molybdate is transported in the body as the anion $\mathrm{MoO_4^{2-}}$.
Human metabolism and deficiency
Molybdenum is an essential trace dietary element. Four mammalian Mo-dependent enzymes are known, all of them harboring a pterin-based molybdenum cofactor (Moco) in their active site: sulfite oxidase, xanthine oxidoreductase, aldehyde oxidase, and mitochondrial amidoxime reductase. People severely deficient in molybdenum have poorly functioning sulfite oxidase and are prone to toxic reactions to sulfites in foods. The human body contains about 0.07 mg of molybdenum per kilogram of body weight, with higher concentrations in the liver and kidneys and lower in the vertebrae. Molybdenum is also present within human tooth enamel and may help prevent its decay.
Acute toxicity has not been seen in humans, and the toxicity depends strongly on the chemical state. Studies on rats show a median lethal dose (LD50) as low as 180 mg/kg for some Mo compounds. Although human toxicity data is unavailable, animal studies have shown that chronic ingestion of more than 10 mg/day of molybdenum can cause diarrhea, growth retardation, infertility, low birth weight, and gout; it can also affect the lungs, kidneys, and liver. Sodium tungstate is a competitive inhibitor of molybdenum. Dietary tungsten reduces the concentration of molybdenum in tissues.
Low soil concentration of molybdenum in a geographical band from northern China to Iran results in a general dietary molybdenum deficiency, and is associated with increased rates of esophageal cancer. Compared to the United States, which has a greater supply of molybdenum in the soil, people living in those areas have about 16 times greater risk for esophageal squamous cell carcinoma.
Molybdenum deficiency has also been reported as a consequence of non-molybdenum supplemented total parenteral nutrition (complete intravenous feeding) for long periods of time. It results in high blood levels of sulfite and urate, in much the same way as molybdenum cofactor deficiency. Since pure molybdenum deficiency from this cause occurs primarily in adults, the neurological consequences are not as marked as in cases of congenital cofactor deficiency.
A congenital molybdenum cofactor deficiency disease, seen in infants, is an inability to synthesize molybdenum cofactor, the heterocyclic molecule discussed above that binds molybdenum at the active site in all known human enzymes that use molybdenum. The resulting deficiency results in high levels of sulfite and urate, and neurological damage.
Excretion
Most molybdenum is excreted from the human body as molybdate in the urine. Furthermore, urinary excretion of molybdenum increases as dietary molybdenum intake increases. Small amounts of molybdenum are excreted from the body in the feces by way of the bile; small amounts also can be lost in sweat and in hair.
Excess and copper antagonism
High levels of molybdenum can interfere with the body's uptake of copper, producing copper deficiency. Molybdenum prevents plasma proteins from binding to copper, and it also increases the amount of copper that is excreted in urine. Ruminants that consume high levels of molybdenum suffer from diarrhea, stunted growth, anemia, and achromotrichia (loss of fur pigment). These symptoms can be alleviated by copper supplements, either dietary or injection. The effective copper deficiency can be aggravated by excess sulfur.
Copper reduction or deficiency can also be deliberately induced for therapeutic purposes by the compound ammonium tetrathiomolybdate, in which the bright red anion tetrathiomolybdate is the copper-chelating agent. Tetrathiomolybdate was first used therapeutically in the treatment of copper toxicosis in animals. It was then introduced as a treatment in Wilson's disease, a hereditary copper metabolism disorder in humans; it acts both by competing with copper absorption in the bowel and by increasing excretion. It has also been found to have an inhibitory effect on angiogenesis, potentially by inhibiting the membrane translocation process that is dependent on copper ions. This is a promising avenue for investigation of treatments for cancer, age-related macular degeneration, and other diseases that involve a pathologic proliferation of blood vessels.
In some grazing livestock, most strongly in cattle, molybdenum excess in the soil of pasturage can produce scours (diarrhea) if the pH of the soil is neutral to alkaline; see teartness.
References
Biological systems
Biology and pharmacology of chemical elements
Dietary minerals
Biology
Nutrition
Physiology | Molybdenum in biology | [
"Chemistry",
"Biology"
] | 1,468 | [
"Pharmacology",
"Properties of chemical elements",
"Physiology",
"Biology and pharmacology of chemical elements",
"nan",
"Biochemistry"
] |
71,209,771 | https://en.wikipedia.org/wiki/Cobalt%20in%20biology | Cobalt is essential to the metabolism of all animals. It is a key constituent of cobalamin, also known as vitamin B12, the primary biological reservoir of cobalt as an ultratrace element. Bacteria in the stomachs of ruminant animals convert cobalt salts into vitamin B12, a compound which can only be produced by bacteria or archaea. A minimal presence of cobalt in soils therefore markedly improves the health of grazing animals, and an uptake of 0.20 mg/kg a day is recommended because they have no other source of vitamin B12.
Proteins based on cobalamin use corrin to hold the cobalt. Coenzyme B12 features a reactive C-Co bond that participates in the reactions. In humans, B12 has two types of alkyl ligand: methyl and adenosyl. MeB12 promotes methyl (−CH3) group transfers. The adenosyl version of B12 catalyzes rearrangements in which a hydrogen atom is directly transferred between two adjacent atoms with concomitant exchange of the second substituent, X, which may be a carbon atom with substituents, an oxygen atom of an alcohol, or an amine. Methylmalonyl coenzyme A mutase (MUT) converts methylmalonyl-CoA (MMl-CoA) to succinyl-CoA (Su-CoA), an important step in the extraction of energy from proteins and fats.
Although far less common than other metalloproteins (e.g. those of zinc and iron), other cobaltoproteins are known besides B12. These proteins include methionine aminopeptidase 2, an enzyme that occurs in humans and other mammals that does not use the corrin ring of B12, but binds cobalt directly. Another non-corrin cobalt enzyme is nitrile hydratase, an enzyme in bacteria that metabolizes nitriles.
Cobalt deficiency
In humans, consumption of cobalt-containing vitamin B12 meets all needs for cobalt. For cattle and sheep, which meet vitamin B12 needs via synthesis by resident bacteria in the rumen, there is a function for inorganic cobalt. In the early 20th century, during the development of farming on the North Island Volcanic Plateau of New Zealand, cattle suffered from what was termed "bush sickness". It was discovered that the volcanic soils lacked the cobalt salts essential for the cattle food chain. The "coast disease" of sheep in the Ninety Mile Desert of the Southeast of South Australia in the 1930s was found to originate in nutritional deficiencies of trace elements cobalt and copper. The cobalt deficiency was overcome by the development of "cobalt bullets", dense pellets of cobalt oxide mixed with clay given orally for lodging in the animal's rumen.
References
Biological systems
Biology and pharmacology of chemical elements
Dietary minerals
Nutrition
Physiology
Biology | Cobalt in biology | [
"Chemistry",
"Biology"
] | 574 | [
"Pharmacology",
"Properties of chemical elements",
"Physiology",
"Biology and pharmacology of chemical elements",
"nan",
"Biochemistry"
] |
71,216,636 | https://en.wikipedia.org/wiki/2%2C4%2C6-Heptanetrione | 2,4,6-Heptanetrione is the organic compound with the formula CH3C(O)CH2C(O)CH2C(O)CH3. It is a white or colorless solid. The molecule, which exists mainly in the enol form, undergoes condensation with 1,2-diketones. The compound contributes to the flavor of strawberries. It forms a variety of metal complexes.
See also
Triacetylmethane, an isomer
References
Triketones
Chelating agents
Ligands
3-Hydroxypropenals
Enols
Tridentate ligands | 2,4,6-Heptanetrione | [
"Chemistry"
] | 109 | [
"Enols",
"Ligands",
"Coordination chemistry",
"Functional groups",
"Chelating agents",
"Process chemicals"
] |
75,514,044 | https://en.wikipedia.org/wiki/Radio%20Spectrum%20Management | Radio Spectrum Management (RSM) is a New Zealand public service business unit within the Ministry of Business, Innovation and Employment (MBIE) that is in charge of the radio spectrum and radio-related regulations in New Zealand.
Radio Spectrum Management is charged with regulating New Zealand's radio spectrum activities such as planning, allocations, and licensing.
References
Government agencies of New Zealand
Ministry of Business, Innovation and Employment
Radio in New Zealand
Radio spectrum | Radio Spectrum Management | [
"Physics"
] | 91 | [
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
75,514,824 | https://en.wikipedia.org/wiki/Fidanacogene%20elaparvovec | Fidanacogene elaparvovec, sold under the brand name Beqvez among others, is a gene therapy delivered via adeno-associated virus used for the treatment of hemophilia B (congenital Factor IX deficiency).
Fidanacogene elaparvovec was approved for medical use in Canada in December 2023, in the United States in April 2024, and in the European Union in July 2024.
Medical uses
In the US, fidanacogene elaparvovec is indicated for the treatment of adults with moderate to severe hemophilia B (congenital factor IX deficiency) who currently use factor IX prophylaxis therapy; or have current or historical life-threatening hemorrhage; or have repeated, serious spontaneous bleeding episodes; and do not have neutralizing antibodies to adeno-associated virus serotype Rh74var (AAVRh74var) capsid as detected by an FDA-approved test. It is given as a one-time infusion.
Society and culture
Legal status
Fidanacogene elaparvovec was approved for medical use in Canada in December 2023, in the United States in April 2024, and in the European Union in July 2024. The FDA granted the application breakthrough therapy designation.
In May 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a conditional marketing authorization for the medicinal product Durveqtix, intended for the treatment of severe and moderately severe hemophilia B. The applicant for this medicinal product is Pfizer Europe MA EEIG. The conditional marketing authorization was granted in July 2024.
Economics
Pfizer announced a cost of US$3.5 million per treatment, the same cost as CSL Behring's competing hemophilia gene therapy etranacogene dezaparvovec.
Research
Fidanacogene elaparvovec partially restored factor IX production in preliminary studies. The results of a phase 3 trial were published in September 2024. It showed that even 15 months after treatment factor IX was still being expressed and the number of bleedings had decreased significantly compared to the time before the treatment, when study participants had been given prophylactic infusions of factor IX.
References
External links
Antihemorrhagics
Gene therapy
Haemophilia drugs | Fidanacogene elaparvovec | [
"Engineering",
"Biology"
] | 502 | [
"Gene therapy",
"Genetic engineering"
] |
75,517,927 | https://en.wikipedia.org/wiki/IEEE%20Transactions%20on%20Dielectrics%20and%20Electrical%20Insulation | IEEE Transactions on Dielectrics and Electrical Insulation is a peer-reviewed scientific journal published bimonthly by the Institute of Electrical and Electronics Engineers. It was co-founded in 1965 by the IEEE Dielectrics and Electrical Insulation Society under the name IEEE Transactions on Electrical Insulation. The journal covers the advances in dielectric phenomena and measurements, and electrical insulation. Its editor-in-chief is Michael Wübbenhorst (KU Leuven).
According to the Journal Citation Reports, the journal has a 2022 impact factor of 3.1.
References
External links
Dielectrics and Electrical Insulation, IEEE Transactions on
English-language journals
Academic journals established in 2016
Bimonthly journals
Electronics journals
Materials science journals | IEEE Transactions on Dielectrics and Electrical Insulation | [
"Materials_science",
"Engineering"
] | 147 | [
"Materials science journals",
"Materials science"
] |
75,519,801 | https://en.wikipedia.org/wiki/One%20Water%20%28water%20management%29 | One Water is a term encompassing the management of all water sources in an integrated and sustainable way considering all water sources and uses. This idea stems from core principles of providing affordable water access for everyone.
Origins and influences
The term “One Water” refers to integrated and effective water management practices that are “older than Texas.” Holistic, system-wide, interconnected approaches to water have been used before.
Holistic water planning by municipalities has become an international trend. The international water community developed Integrated Water Resources Management (IWRM) in the early 2000s to protect water resources and promote sustainability. The Global Water Partnership has an IWRM Action hub to share information and insights into implementing an integrated water program.
Definition and core principles
The Water Research Foundation (WRF) defines One Water as an integrated planning and implementation approach to managing finite water resources for long-term resilience and reliability, meeting both community and ecosystem needs. While many cities manage various water sources and disposal systems separately, One Water emphasizes integrating water and land resources for a holistic planning approach to water management. The importance of all water sources is stressed. One Water principles involve taking an interconnected approach to complex issues such as water infrastructure crises, environmental and public health crises, droughts, and climate change at all scales: individual and building, local, regional, state, country, and international.
Related scholarly research
Jiang et al. 2021 developed a model using One Water concepts to show how thinking about interconnections can improve modeling and assessing the hydrologic cycle.
Initiatives and organizations
The United Nations and World Health Organization host the WHO/UNICEF Joint Monitoring Programme (JMP) for Water Supply, Sanitation and Hygiene Program that uses One Water principles to monitor progress on local to global scales for attaining Sustainable Development Goal targets for “universal and equitable access to safe drinking water, sanitation, and hygiene.”
The Environmental Protection Agency noted that meeting the Clean Water Act (1972) requirements for managing and accessing water would be more efficient using an overall approach to water management. The agency, along with the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Water Alliance, has provided webinars and other guidance for using a One Water approach to water management. The U.S. Water Alliance also has more initiatives to support One Water, including the One Water Council to bring organizations together, and the Value of Water Campaign to educate about the importance of all water sources.
One Water Panel helps develop strategies for integrating water source development and management to more effectively meet present and future water needs and address climate change impacts.
The National Association of Clean Water Agencies (NACWA) and Association of Metropolitan Water Agencies (AMWA) have developed a campaign for Affordable Water, Resilient Communities to increase political awareness around water issues.
Examples of One Water strategies and implementation
Water management education
American Rivers uses a holistic approach to water management and hosted a 2-day conference to collect ideas for helping cities adapt an integrated water management approach in 2016.
The Southeast Michigan Council of Governments (SEMCOG) has a One Water Program to educate about water use and systems and their interrelationships and as a water stewardship approach. They hosted events at the 2023 Great Lakes and Fresh Water Week, posted videos to showcase benefits of a one-water approach, and have a series of articles.
The University of Illinois Urbana-Champaign includes One Water among five curricula toward a Bachelor of Science degree in Environmental engineering. Their One Water program emphasizes application of physical, chemical, and biological principles to design innovative water quality control processes for safe and reliable community or household drinking water, sanitation, stormwater management, and resource (water, nutrient, energy) recovery systems.
Infrastructure and sustainability
One Water concepts are also used in building planning and sustainable development. Blue Hole Primary School, Texas used One Water concepts as it built the school.
Cities
Cities have developed a variety of One Water strategies, and guidance documents and studies are available to help more cities develop their plans. The Water Research Foundation and Colorado State University are developing guidance for One Water Cities (2020–2023). The International Water Association developed a Cities of the Future integrated water management program.
United States
The City of Los Angeles launched the One Water LA 2040 Plan, an integrated and unified approach to sustainably manage all water resources—surface water, groundwater, potable water, wastewater, recycled water, and stormwater.
Palo Alto is developing a One Water Plan as part of their climate Action-Protection and Adaptation planning priority.
San Francisco, California, has a broad OneWater SF Vision that incorporates many resources into its water planning, including "water, energy, financial, human, community partnerships and natural resources".
Denver, Colorado, adopted a One Water Plan in September 2021.
Wake County, North Carolina started a One Water Plan for its municipalities with a public visioning survey in May 2023.
Milwaukee, Wisconsin, has a One Water approach with information tailored to different audiences at #onewaterourwater
Worldwide
Vancouver, British Columbia, uses a One Water Approach to address changes in its watershed.
Awards
One Water Panel Honolulu received the US Water Prize for the Outstanding Public Sector Category, US Water Alliance in 2022
One Water conferences
One Water Summits were held in 2015, 2017 (New Orleans, Louisiana), and 2018 (Twin Cities, Minnesota)
A City Summit for cities to adapt One Water plans was held in Charlotte, North Carolina November 15–18, 2017.
A One Water Summit planned for 2024 forms part of a Kazakhstan–France climate initiative, a series of combined actions by Kazakhstan and France to address climate issues on a global scale.
See also
Reclaimed water
Water conservation
References
Water treatment
Water management | One Water (water management) | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,153 | [
"Water treatment",
"Water pollution",
"Water technology",
"Environmental engineering"
] |
75,521,522 | https://en.wikipedia.org/wiki/Cosmic%20coincidence | In cosmology, the cosmic coincidence is the observation that at the present epoch of the universe's evolution, the energy densities associated with dark matter and dark energy are of the same order of magnitude, leading to their comparable effects on the dynamics of the cosmos. This coincidence is puzzling because these energies have vastly different effects on the universe's expansion—dark matter tends to slow down expansion through gravitational attraction, while dark energy seems to accelerate it. The observed similarity in the magnitudes of these two components' energy densities at this particular epoch in the universe's history raises questions about whether there might be some underlying physical connection or shared origin between dark matter and dark energy. Indeed, some theories attempt to explain this coincidence by proposing that they are different manifestations of the same fundamental force or field.
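As a rough numerical illustration (using approximate present-day density parameters from recent cosmic microwave background measurements, with $\Omega_\Lambda \approx 0.7$ and $\Omega_m \approx 0.3$, most of the latter being dark matter), the two densities differ by less than a factor of a few today:
\[
\frac{\Omega_\Lambda}{\Omega_m} \approx \frac{0.7}{0.3} \approx 2.3,
\]
even though the ratio of dark-energy density to matter density scales as $a^3$ with the cosmic scale factor $a$, and therefore changes by many orders of magnitude over the history of the universe.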
References
See also
Fine-tuned universe
Physical cosmology
Unsolved problems in physics | Cosmic coincidence | [
"Physics",
"Astronomy"
] | 182 | [
"Theoretical physics",
"Unsolved problems in physics",
"Astrophysics",
"Physical cosmology",
"Astronomical sub-disciplines"
] |
74,122,214 | https://en.wikipedia.org/wiki/Lexell%27s%20theorem | In spherical geometry, Lexell's theorem holds that every spherical triangle with the same surface area on a fixed base has its apex on a small circle, called Lexell's circle or Lexell's locus, passing through each of the two points antipodal to the two base vertices.
A spherical triangle is a shape on a sphere consisting of three vertices (corner points) connected by three sides, each of which is part of a great circle (the analog on the sphere of a straight line in the plane, for example the equator and meridians of a globe). Any of the sides of a spherical triangle can be considered the base, and the opposite vertex is the corresponding apex. Two points on a sphere are antipodal if they are diametrically opposite, as far apart as possible.
The theorem is named for Anders Johan Lexell, who presented a paper about it (published 1784) including both a trigonometric proof and a geometric one. Lexell's colleague Leonhard Euler wrote another pair of proofs in 1778 (published 1797), and a variety of proofs have been written since by Adrien-Marie Legendre (1800), Jakob Steiner (1827), Carl Friedrich Gauss (1841), Paul Serret (1855), and Joseph-Émile Barbier (1864), among others.
The theorem is the analog of propositions 37 and 39 in Book I of Euclid's Elements, which prove that every planar triangle with the same area on a fixed base has its apex on a straight line parallel to the base. An analogous theorem can also be proven for hyperbolic triangles, for which the apex lies on a hypercycle.
Statement
Given a fixed base an arc of a great circle on a sphere, and two apex points and on the same side of great circle Lexell's theorem holds that the surface area of the spherical triangle is equal to that of if and only if lies on the small-circle arc where and are the points antipodal to and respectively.
As one analog of the planar formula for the area of a triangle, the spherical excess of spherical triangle can be computed in terms of the base (the angular length of arc and "height" (the angular distance between the parallel small circles
This formula is based on consideration of a sphere of radius , on which arc length is called angle measure and surface area is called spherical excess or solid angle measure. The angle measure of a complete great circle is radians, and the spherical excess of a hemisphere (half-sphere) is steradians, where is the circle constant.
In the limit for triangles much smaller than the radius of the sphere, this reduces to the planar formula.
The small circles and each intersect the great circle at an angle of
Proofs
There are several ways to prove Lexell's theorem, each illuminating a different aspect of the relationships involved.
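Several of the proofs below invoke Girard's theorem; for reference, for a triangle on the unit sphere with interior angles $\alpha$, $\beta$, $\gamma$, it states that the spherical excess (equal to the triangle's area) is
\[
\varepsilon = \alpha + \beta + \gamma - \pi.
\]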
Isosceles triangles
The main idea in Lexell's geometric proof – also adopted by Eugène Catalan (1843), Robert Allardice (1883), Jacques Hadamard (1901), Antoine Gob (1922), and Hiroshi Maehara (1999) – is to split the triangle into three isosceles triangles with common apex at the circumcenter and then chase angles to find the spherical excess of triangle In the figure, points and are on the far side of the sphere so that we can clearly see their antipodal points and all of Lexell's circle
Let the base angles of the isosceles triangles (shaded red in the figure), (blue), and (purple) be respectively and (In some cases is outside then one of the quantities will be negative.) We can compute the internal angles of (orange) in terms of these angles: (the supplement of and likewise and finally
By Girard's theorem the spherical excess of is
If base is fixed, for any third vertex falling on the same arc of Lexell's circle, the point and therefore the quantity will not change, so the excess of which depends only on will likewise be constant. And vice versa: if remains constant when the point is changed, then so must be, and therefore must be fixed, so must remain on Lexell's circle.
Cyclic quadrilateral
Jakob Steiner (1827) wrote a proof in similar style to Lexell's, also using Girard's theorem, but demonstrating the angle invariants in the triangle by constructing a cyclic quadrilateral inside the Lexell circle, using the property that pairs of opposite angles in a spherical cyclic quadrilateral have the same sum.
Starting with a triangle , let be the Lexell circle circumscribing and let be another point on separated from by the great circle Let
Because the quadrilateral is cyclic, the sum of each pair of its opposite angles is equal, or rearranged
By Girard's theorem the spherical excess of is
The quantity does not depend on the choice of so is invariant when is moved to another point on the same arc of Therefore is also invariant.
Conversely, if is changed but is invariant, then the opposite angles of the quadrilateral will have the same sum, which implies lies on the small circle
Spherical parallelograms
Euler in 1778 proved Lexell's theorem analogously to Euclid's proof of Elements I.35 and I.37, as did Victor-Amédée Lebesgue independently in 1855, using spherical parallelograms – spherical quadrilaterals with congruent opposite sides, which have parallel small circles passing through opposite pairs of adjacent vertices and are in many ways analogous to Euclidean parallelograms. There is one complication compared to Euclid's proof, however: The four sides of a spherical parallelogram are the great-circle arcs through the vertices rather than the parallel small circles. Euclid's proof does not need to account for the small lens-shaped regions sandwiched between the great and small circles, which vanish in the planar case.
A lemma analogous to Elements I.35: two spherical parallelograms on the same base and between the same parallels have equal area.
Proof: Let and be spherical parallelograms with the great circle (the "midpoint circle") passing through the midpoints of sides and coinciding with the corresponding midpoint circle in Let be the intersection point between sides and Because the midpoint circle is shared, the two top sides and lie on the same small circle parallel to and antipodal to a small circle passing through and
Two arcs of are congruent, thus the two curvilinear triangles and each bounded by on the top side, are congruent. Each parallelogram is formed from one of these curvilinear triangles added to the triangle and to one of the congruent lens-shaped regions between each top side and with the curvilinear triangle cut away. Therefore the parallelograms have the same area. (As in Elements, the case where the parallelograms do not intersect on the sides is omitted, but can be proven by a similar argument.)
Proof of Lexell's theorem: Given two spherical triangles and each with its apex on the same small circle through points and construct new segments and congruent to with vertices and on The two quadrilaterals and are spherical parallelograms, each formed by pasting together the respective triangle and a congruent copy. By the lemma, the two parallelograms have the same area, so the original triangles must also have the same area.
Proof of the converse: If two spherical triangles have the same area and the apex of the second is assumed to not lie on the Lexell circle of the first, then the line through one side of the second triangle can be intersected with the Lexell circle to form a new triangle which has a different area from the second triangle but the same area as the first triangle, a contradiction. This argument is the same as that found in Elements I.39.
Saccheri quadrilateral
Another proof using the midpoint circle which is more visually apparent in a single picture is due to Carl Friedrich Gauss (1841), who constructs the Saccheri quadrilateral (a quadrilateral with two adjacent right angles and two other equal angles) formed between the side of the triangle and its perpendicular projection onto the midpoint circle which has the same area as the triangle.
Let be the great circle through the midpoints of and of and let and be the perpendicular projections of the triangle vertices onto The resulting pair of right triangles and (shaded red) have equal angles at (vertical angles) and equal hypotenuses, so they are congruent; so are the triangles and (blue). Therefore, the area of triangle is equal to the area of Saccheri quadrilateral as each consists of one red triangle, one blue triangle, and the green quadrilateral pasted together. (If falls outside the arc then either the red or blue triangles will have negative signed area.) Because the great circle and therefore the quadrilateral is the same for any choice of lying on the Lexell circle the area of the corresponding triangle is constant.
Stereographic projection
The stereographic projection maps the sphere to the plane. A designated great circle is mapped onto the primitive circle in the plane, and its poles are mapped to the origin (center of the primitive circle) and the point at infinity, respectively. Every circle on the sphere is mapped to a circle or straight line in the plane, with straight lines representing circles through the second pole. The stereographic projection is conformal, meaning it preserves angles.
To prove relationships about a general spherical triangle without loss of generality vertex can be taken as the point which projects to the origin. The sides of the spherical triangle then project to two straight segments and a circular arc. If the tangent lines to the circular side at the other two vertices intersect at point a planar straight-sided quadrilateral can be formed whose external angle at is the spherical excess of the spherical triangle. This is sometimes called the Cesàro method of spherical trigonometry, after crystallographer who popularized it in two 1905 papers.
Paul Serret (in 1855, a half century before Cesàro), and independently Aleksander Simonič (2019), used Cesàro's method to prove Lexell's theorem. Let be the center in the plane of the circular arc to which side projects. Then is a right kite, so the central angle is equal to the external angle at the triangle's spherical excess Planar angle is an inscribed angle subtending the same arc, so by the inscribed angle theorem has measure This relationship is preserved for any choice of therefore, the spherical excess of the triangle is constant whenever remains on the Lexell circle which projects to a line through in the plane. (If the area of the triangle is greater than a half-hemisphere, a similar argument can be made, but the point is no longer internal to the angle
Perimeter of the polar triangle
Every spherical triangle has a dual, its polar triangle; if triangle (shaded purple) is the polar triangle of (shaded orange) then the vertices are the poles of the respective sides and vice versa, the vertices are the poles of the sides The polar duality exchanges the sides (central angles) and external angles (dihedral angles) between the two triangles.
Because each side of the dual triangle is the supplement of an internal angle of the original triangle, the spherical excess of is a function of the perimeter of the dual triangle
where the notation means the angular length of the great-circle arc
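With the usual notation (assumed here) in which $\alpha$, $\beta$, $\gamma$ are the angles of $\triangle ABC$ and $a'$, $b'$, $c'$ are the sides of the polar triangle $\triangle A'B'C'$, the duality relation $a' = \pi - \alpha$ (and likewise for the other pairs) combined with Girard's theorem gives
\[
\varepsilon = \alpha + \beta + \gamma - \pi = 2\pi - (a' + b' + c'),
\]
so the excess of the original triangle is determined by the perimeter of its polar triangle.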
In 1864 Joseph-Émile Barbier – and independently László Fejes Tóth (1953) – used the polar triangle in his proof of Lexell's theorem, which is essentially dual to the proof by isosceles triangles above, noting that under polar duality the Lexell circle circumscribing becomes an excircle of (incircle of a colunar triangle) externally tangent to side
If vertex is moved along the side changes but always remains tangent to the same circle Because the arcs from each vertex to either adjacent touch point of an incircle or excircle are congruent, (blue segments) and (red segments), the perimeter is
which remains constant, depending only on the circle but not on the changing side Conversely, if the point moves off of the associated excircle will change in size, moving the points and both toward or both away from and changing the perimeter of and thus changing
The locus of points for which is constant is therefore
Trigonometric proofs
Both Lexell () and Euler (1778) included trigonometric proofs in their papers, and several later mathematicians have presented trigonometric proofs, including Adrien-Marie Legendre (1800), Louis Puissant (1842), Ignace-Louis-Alfred Le Cointe (1858), and Joseph-Alfred Serret (1862). Such proofs start from known triangle relations such as the spherical law of cosines or a formula for spherical excess, and then proceed by algebraic manipulation of trigonometric identities.
Opposite arcs of Lexell's circle
The sphere is separated into two hemispheres by the great circle and any Lexell circle through and is separated into two arcs, one in each hemisphere. If the point is on the opposite arc from then the areas of and will generally differ. However, if spherical surface area is interpreted to be signed, with sign determined by boundary orientation, then the areas of triangle and have opposite signs and differ by the area of a hemisphere.
Lexell suggested a more general framing. Given two distinct non-antipodal points and there are two great-circle arcs joining them: one shorter than a semicircle and the other longer. Given a triple of points, typically is interpreted to mean the area enclosed by the three shorter arcs joining each pair. However, if we allow choice of arc for each pair, then 8 distinct generalized spherical triangles can be made, some with self intersections, of which four might be considered to have the same base
These eight triangles do not all have the same surface area, but if area is interpreted to be signed, with sign determined by boundary orientation, then those which differ differ by the area of a hemisphere.
In this context, given four distinct, non-antipodal points and on a sphere, Lexell's theorem holds that the signed surface area of any generalized triangle differs from that of any generalized triangle by a whole number of hemispheres if and only if and are concyclic.
Special cases
Lunar degeneracy
As the apex approaches either of the points antipodal to the base vertices – say – along Lexell's circle in the limit the triangle degenerates to a lune tangent to at and tangent to the antipodal small circle at and having the same excess as any of the triangles with apex on the same arc of As a degenerate triangle, it has a straight angle at (i.e. a half turn) and equal angles
As approaches from the opposite direction (along the other arc of Lexell's circle), in the limit the triangle degenerates to the co-hemispherical lune tangent to the Lexell circle at with the opposite orientation and angles
Half-hemisphere area
The area of a spherical triangle is equal to half a hemisphere (excess if and only if the Lexell circle is orthogonal to the great circle that is if arc is a diameter of circle and arc is a diameter of
In this case, letting be the point diametrically opposed to on the Lexell circle then the four triangles and are congruent, and together form a spherical disphenoid (the central projection of a disphenoid onto a concentric sphere). The eight points are the vertices of a rectangular cuboid.
Related concepts and results
Spherical parallelogram
A spherical parallelogram is a spherical quadrilateral whose opposite sides and opposite angles are congruent. It is in many ways analogous to a planar parallelogram. The two diagonals and bisect each other and the figure has 2-fold rotational symmetry about the intersection point (so the diagonals each split the parallelogram into two congruent spherical triangles, and if the midpoints of either pair of opposite sides are connected by a great circle, the four vertices fall on two parallel small circles equidistant from it. More specifically, any vertex (say of the spherical parallelogram lies at the intersection of the two Lexell circles ( and ) passing through one of the adjacent vertices and the points antipodal to the other two vertices.
As with spherical triangles, spherical parallelograms with the same base and the apex vertices lying on the same Lexell circle have the same area; see above. Starting from any spherical triangle, a second congruent triangle can be formed via a (spherical) point reflection across the midpoint of any side. When combined, these two triangles form a spherical parallelogram with twice the area of the original triangle.
Sorlin's theorem (polar dual)
The polar dual to Lexell's theorem, sometimes called Sorlin's theorem after A. N. J. Sorlin who first proved it trigonometrically in 1825, holds that for a spherical trilateral with sides on fixed great circles (thus fixing the angle between them) and a fixed perimeter (where means the length of the triangle side the envelope of the third side is a small circle internally tangent to and externally tangent to the excircle to trilateral Joseph-Émile Barbier later wrote a geometrical proof (1864) which he used to prove Lexell's theorem, by duality; see above.
This result also applies in Euclidean and hyperbolic geometry: Barbier's geometrical argument can be transplanted directly to the Euclidean or hyperbolic plane.
Foliation of the sphere
Lexell's loci for any base make a foliation of the sphere (decomposition into one-dimensional leaves). These loci are arcs of small circles with endpoints at and on which any intermediate point is the apex of a triangle of a fixed signed area. That area is twice the signed angle between the Lexell circle and the great circle at either of the points or see above. In the figure, the Lexell circles are in green, except for those whose triangles' area is a multiple of a half hemisphere, which are black, with area labeled; see above.
These Lexell circles through and are the spherical analog of the family of Apollonian circles through two points in the plane.
Maximizing spherical triangle area subject to constraints
In 1784 Nicolas Fuss posed and solved the problem of finding the triangle of maximal area on a given base with its apex on a given great circle Fuss used an argument involving infinitesimal variation of but the solution is also a straightforward corollary of Lexell's theorem: the Lexell circle through the apex must be tangent to at
If crosses the great circle through at a point , then by the spherical analog of the tangent–secant theorem, the angular distance to the desired point of tangency satisfies
from which we can explicitly construct the point on such that has maximum area.
In 1786 Theodor von Schubert posed and solved the problem of finding the spherical triangles of maximum and minimum area of a given base and altitude (the spherical length of a perpendicular dropped from the apex to the great circle containing the base); spherical triangles with constant altitude have their apex on a common small circle (the "altitude circle") parallel to the great circle containing the base. Schubert solved this problem by a calculus-based trigonometric approach to show that the triangle of minimal area has its apex at the nearest intersection of the altitude circle and the perpendicular bisector of the base, and the triangle of maximal area has its apex at the far intersection. However, this theorem is also a straightforward corollary of Lexell's theorem: the Lexell circles through the points antipodal to the base vertices representing the smallest and largest triangle areas are those tangent to the altitude circle. In 2019 Vincent Alberge and Elena Frenkel solved the analogous problem in the hyperbolic plane.
Steiner's theorem on area bisectors
In the Euclidean plane, a median of a triangle is the line segment connecting a vertex to the midpoint of the opposite side. The three medians of a triangle all intersect at its centroid. Each median bisects the triangle's area.
On the sphere, a median of a triangle can also be defined as the great-circle arc connecting a vertex to the midpoint of the opposite side. The three medians all intersect at a point, the central projection onto the sphere of the triangle's extrinsic centroid – that is, centroid of the flat triangle containing the three points if the sphere is embedded in 3-dimensional Euclidean space. However, on the sphere the great-circle arc through one vertex and a point on the opposite side which bisects the triangle's area is, in general, distinct from the corresponding median.
Jakob Steiner used Lexell's theorem to prove that these three area-bisecting arcs (which he called "equalizers") all intersect in a point, one possible alternative analog of the planar centroid in spherical geometry. (A different spherical analog of the centroid is the apex of three triangles of equal area whose bases are the sides of the original triangle, the point with as its spherical area coordinates.)
Spherical area coordinates
The barycentric coordinate system for points relative to a given triangle in affine space does not have a perfect analogy in spherical geometry; there is no single spherical coordinate system sharing all of its properties. One partial analogy is spherical area coordinates for a point relative to a given spherical triangle
where each quantity is the signed spherical excess of the corresponding spherical triangle. These coordinates sum to and using the same definition in the plane results in barycentric coordinates.
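One common normalization (assumed here; the convention used in the cited sources may differ) divides each sub-triangle's excess by the excess of the whole triangle, so that for a point $P$ relative to $\triangle ABC$
\[
(u, v, w) = \left(\frac{\varepsilon_{PBC}}{\varepsilon_{ABC}},\ \frac{\varepsilon_{APC}}{\varepsilon_{ABC}},\ \frac{\varepsilon_{ABP}}{\varepsilon_{ABC}}\right), \qquad u + v + w = 1,
\]
since signed spherical excess is additive over the three sub-triangles.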
By Lexell's theorem, the locus of points with one coordinate constant is the corresponding Lexell circle. It is thus possible to find the point corresponding to a given triple of spherical area coordinates by intersecting two small circles.
Using their respective spherical area coordinates, any spherical triangle can be mapped to any other, or to any planar triangle, using corresponding barycentric coordinates in the plane. This can be used for polyhedral map projections; for the definition of discrete global grids; or for parametrizing triangulations of the sphere or texture mapping any triangular mesh topologically equivalent to a sphere.
Euclidean plane
The analog of Lexell's theorem in the Euclidean plane comes from antiquity, and can be found in Book I of Euclid's Elements, propositions 37 and 39, built on proposition 35. In the plane, Lexell's circle degenerates to a straight line (which could be called Lexell's line) parallel to the base.
Elements I.35 holds that parallelograms with the same base whose top sides are colinear have equal area. Proof: Let the two parallelograms be and with common base and and on a common line parallel to the base, and let be the intersection between and Then the two top sides are congruent so, adding the intermediate segment to each, Therefore the two triangles and have matching sides so are congruent. Now each of the parallelograms is formed from one of these triangles, added to the triangle with the triangle cut away, so therefore the two parallelograms and have equal area.
Elements I.37 holds that triangles with the same base and an apex on the same line parallel to the base have equal area. Proof: Let triangles and each have its apex on the same line parallel to the base Construct new segments and congruent to with vertices and on The two quadrilaterals and are parallelograms, each formed by pasting together the respective triangle and a congruent copy. By I.35, the two parallelograms have the same area, so the original triangles must also have the same area.
Elements I.39 is the converse: two triangles of equal area on the same side of the same base have their apexes on a line parallel to the base. Proof: If two triangles have the same base and same area and the apex of the second is assumed to not lie on the line parallel to the base (the "Lexell line") through the first, then the line through one side of the second triangle can be intersected with the Lexell line to form a new triangle which has a different area from the second triangle but the same area as the first triangle, a contradiction.
In the Euclidean plane, the area of triangle can be computed using any side length (the base) and the distance between the line through the base and the parallel line through the apex (the corresponding height). Using point as the apex, and multiplying both sides of the traditional identity by to make the analogy to the spherical case more obvious, this is:
The Euclidean theorem can be taken as a corollary of Lexell's theorem on the sphere. It is the limiting case as the curvature of the sphere approaches zero, i.e. for spherical triangles as which are infinitesimal in proportion to the radius of the sphere.
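For reference, the traditional planar identity referred to above, with $b$ the length of the base and $h$ the distance from the apex to the line containing the base (the exact rearrangement used in the original text is not recoverable here), is
\[
\operatorname{area}(\triangle ABC) = \tfrac{1}{2}\, b\, h, \qquad \text{equivalently} \qquad 2\,\operatorname{area}(\triangle ABC) = b\, h.
\]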
Hyperbolic plane
In the hyperbolic plane, given a triangle the locus of a variable point such that the triangle has the same area as is a hypercycle passing through the points antipodal to and which could be called Lexell's hypercycle. Several proofs from the sphere have straightforward analogs in the hyperbolic plane, including a Gauss-style proof via a Saccheri quadrilateral by Barbarin (1902) and Frenkel & Su (2019), an Euler-style proof via hyperbolic parallelograms by Papadopoulos & Su (2017), and a Paul Serret-style proof via stereographic projection by Shvartsman (2007).
In spherical geometry, the antipodal transformation takes each point to its antipodal (diametrically opposite) point. For a sphere embedded in Euclidean space, this is a point reflection through the center of the sphere; for a sphere stereographically projected to the plane, it is an inversion across the primitive circle composed with a point reflection across the origin (or equivalently, an inversion in a circle of imaginary radius of the same magnitude as the radius of the primitive circle).
In planar hyperbolic geometry, there is a similar antipodal transformation, but any two antipodal points lie in opposite branches of a double hyperbolic plane. For a hyperboloid of two sheets embedded in Minkowski space of signature known as the hyperboloid model, the antipodal transformation is a point reflection through the center of the hyperboloid which takes each point onto the opposite sheet; in the conformal half-plane model it is a reflection across the boundary line of ideal points taking each point into the opposite half-plane; in the conformal disk model it is an inversion across the boundary circle, taking each point in the disk to a point in its complement. As on the sphere, any generalized circle passing through a pair of antipodal points in hyperbolic geometry is a geodesic.
Analogous to the planar and spherical triangle area formulas, the hyperbolic area of the triangle can be computed in terms of the base (the hyperbolic length of arc and "height" (the hyperbolic distance between the parallel hypercycles
As in the spherical case, in the small-triangle limit this reduces to the planar formula.
Notes
References
Eponymous theorems of geometry
Theorems about triangles and circles
Area
Spherical trigonometry
Articles containing proofs | Lexell's theorem | [
"Physics",
"Mathematics"
] | 5,713 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Size",
"Eponymous theorems of geometry",
"Theorems in geometry",
"Articles containing proofs",
"Wikipedia categories named after physical quantities",
"Area"
] |