| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
72,826,846 | https://en.wikipedia.org/wiki/Hydrogen-bonded%20organic%20framework | Hydrogen-bonded organic frameworks (HOFs) are a class of porous polymers formed by hydrogen bonds among molecular monomer units, which afford porosity and structural flexibility. Diverse hydrogen bonding pairs can be used in HOF construction, including identical or nonidentical hydrogen bond donors and acceptors. Organic groups commonly used as hydrogen bonding units include carboxylic acids, amides, 2,4-diaminotriazines, and imidazoles. Compared with other organic frameworks, such as covalent organic frameworks (COFs) and metal–organic frameworks (MOFs), the binding forces in HOFs are relatively weak and the activation of HOFs is more difficult, but the reversibility of hydrogen bonds ensures high crystallinity of the materials. Although the stability and pore-size expansion of HOFs pose potential problems, HOFs still show strong potential for applications in different areas.
An important consequence of the intrinsically porous architecture of hydrogen-bonded organic frameworks is the ability to adsorb guest molecules. This characteristic has accelerated the emergence of various applications of different HOF structures, including gas removal/storage/separation, molecular recognition, proton conduction, and biomedical applications.
History
Reports of extended 2D hydrogen-bonding-based porous frameworks can be traced back to the 1960s. In 1969, Duchamp and Marsh reported a 2D interpenetrated, nonporous crystal structure with a honeycomb network constructed from benzene-1,3,5-tricarboxylic acid (trimesic acid, TMA). Ermer later reported an adamantane-1,3,5,7-tetracarboxylic acid (ADTA)-based hydrogen-bonded network with interpenetrated diamond topology. Meanwhile, diverse studies of guest-induced hydrogen-bonded frameworks were reported, gradually developing the concept of hydrogen-bonded organic frameworks. Another milestone in the evolution of hydrogen-bonded organic frameworks was set by Chen: in 2011, Chen reported a porous organic framework held together by hydrogen bonding and demonstrated its porosity by gas adsorption for the first time. Since then, numerous HOF structures have been designed and constructed, and various applications of porous frameworks have been attempted with HOFs, whose effectiveness has been demonstrated.
Hydrogen bonding pairs in HOFs
Hydrogen bonds formed among various monomers enable the construction of hydrogen-bonded organic frameworks with different assembly architectures. The choice of hydrogen bonding pairs follows from the structural and functional design of the HOFs, so different pairs should be selected according to the requirements of the system. Common hydrogen bonding pairs include 2,4-diaminotriazine, carboxylic acid, amide, imide, imidazole, imidazolone, and resorcinol. Combined with appropriate backbones, the hydrogen-bonding pairs exhibit specific assembly states under each crystallization condition, meaning that the monomers assemble into the morphologies that are energetically favored under that condition. In order to realize 2D or 3D HOFs, monomers with more than one hydrogen bonding pair are generally considered; rigidity and directionality also favor HOF construction.
Backbones of HOF monomer
Rigidity and directionality of the constructional units give HOFs various pore structures, topologies, and further applications. Therefore, a proper choice of monomer backbone plays an important role in the construction of HOFs. These backbones not only combine with the different hydrogen bonding pairs mentioned above to realize stable HOF structural designs and expanded pore sizes, but also open access to more HOF topologies. In addition, by using backbones with similar geometry and the same connection pattern to generate the monomers and HOFs, isoreticular expansion of the frameworks becomes a reliable method to expand the pore size effectively. As mentioned, to construct porous and stable HOFs, multiple aspects should be considered simultaneously, such as the rigidity of the backbones, the orientation and binding strength of the hydrogen bonding pairs, and other intermolecular interactions for orderly stacking. Therefore, the design of HOF monomers should focus on their H-bond orientations and structural rigidity, and on the consequent framework stability and porosity.
Synthetic methods
In principle, HOFs can be crystallized from solvents. However, factors such as solvent type, precursor concentration, and crystallization time and temperature can significantly influence the HOF crystallization process. Generally, kinetically controlled crystals tend to form at high concentration and short crystallization time, while slowing down the crystallization rate may yield thermodynamically controlled crystals. One common method to produce HOF crystals is to slowly evaporate the solvent of the solution, which favors ordered stacking of the monomers. Another widely used method is to diffuse a poor solvent with a low boiling point into a monomer solution in a good solvent with a higher boiling point, in order to induce assembly of the monomers. Depending on the crystallization system, other methods have also been applied to HOF construction.
Characterization methods
There are various methods to characterize HOF materials and their monomers. Nuclear magnetic resonance (NMR) spectroscopy and high-resolution mass spectrometry (HR-MS) are generally used to characterize the synthesis of the monomers. Single-crystal X-ray diffraction (SCXRD) is a powerful tool for determining the structure of the HOF crystal packing. Powder X-ray diffraction (PXRD) is a supporting technique to demonstrate the pure-phase formation of HOFs. Gas adsorption and desorption studies analyzed with the Brunauer–Emmett–Teller (BET) method can provide key parameters of HOFs, such as pore size, specific gas adsorption amount, and surface area, from the adsorption isotherms. Depending on the application direction and field of study, diverse other techniques have been applied to the characterization of HOFs.
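As an illustration of how the BET analysis mentioned above is typically applied, the following minimal Python sketch fits the linearized BET equation to a synthetic N2 isotherm and converts the monolayer capacity into a specific surface area. The isotherm parameters (vm_true, c_true) and the use of numpy are illustrative assumptions, not data from any particular HOF study.

import numpy as np

# Hypothetical monolayer capacity (cm^3 STP/g) and BET constant used to
# generate an illustrative N2 isotherm; real data would come from experiment.
vm_true, c_true = 100.0, 100.0
p = np.linspace(0.05, 0.30, 6)                      # relative pressure P/P0
v = vm_true * c_true * p / ((1 - p) * (1 + (c_true - 1) * p))

# Linearized BET plot: p/(v*(1-p)) = 1/(vm*c) + (c-1)/(vm*c) * p
y = p / (v * (1 - p))
slope, intercept = np.polyfit(p, y, 1)
vm = 1.0 / (slope + intercept)                      # monolayer capacity, cm^3 STP/g
c = 1.0 + slope / intercept                         # BET constant

# Monolayer capacity -> specific surface area (N2 cross-section 0.162 nm^2)
area = vm / 22414.0 * 6.022e23 * 0.162e-18          # m^2 per gram
print(f"vm = {vm:.1f} cm^3/g, c = {c:.1f}, BET area = {area:.0f} m^2/g")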
Applications
Their porous structures and unique properties enable HOFs to perform well in practical applications. The applications include but are not limited to gas adsorption, hydrocarbon separation, proton conductivity, and molecular recognition.
Gas adsorption
As networks with tailorable pore sizes, HOFs can serve as storage containers for gas molecules of suitable size and interaction. The relatively constrained pore sizes in HOFs help to store, capture, or separate various small gas molecules, including H2, N2, CO2, CH4, C2H2, C2H4, C2H6, and so on. Mastalerz and Oppel reported a notable 3D HOF built from triptycene trisbenzimidazolone (TTBI) monomers. Because of the molecular rigidity and three-dimensional construction, 1D channels were formed through the framework and the surface area was greatly enhanced, reaching 2796 m2/g as measured by BET. The HOF also showed good adsorption of H2 and CO2, with uptakes of 243 and 80.7 cm3/g at 1 bar and at 77 and 273 K, respectively.
CO2 adsorption
Carbon dioxide is a typical greenhouse gas that can cause serious problems in many respects, and its capture is a long-standing concern. Carbon dioxide is also widely used as a gas resource or emitted as waste gas in manufacturing and industry, so the storage and separation of CO2 have always been emphasized as important applications. In 2015, Chen and co-workers reported a HOF that undergoes structural transformation and shows high CO2 adsorption capacity. N–H···N hydrogen bonds form between the units to assemble the HOF architecture with a binodal topology. The CO2 uptake capacity of the HOF reaches 117.1 cm3/g at 273 K.
Hydrocarbon separation
A hydrogen-bonded organic framework for C2H2/C2H4 separation was reported by Chen and coworkers. In the structure of this HOF, each 4,4',4'',4'''-tetra(4,6-diamino-s-triazin-2-yl)tetraphenylmethane unit connects with eight other units through N–H···N hydrogen bonds. Owing to a degree of structural flexibility, the framework was able to take up C2H2 up to 63.2 cm3/g, while the adsorbed amount of C2H4 was only 8.3 cm3/g at 273 K, enabling effective C2H2/C2H4 separation.
Molecular recognition
The non-covalent interactions present in hydrogen-bonded organic frameworks, e.g., hydrogen bonding, π-π interactions, and van der Waals forces, are important intermolecular interactions for molecular recognition. Meanwhile, the multiple binding sites and adaptable structures also make HOFs good molecular recognition platforms. By exploiting these features, different kinds of recognition have been realized so far, including recognition of gas molecules, fullerenes, aniline, and pyridine.
Optical materials
Some luminescent molecules with large π-conjugated structures are also used for HOF construction. Various luminescent HOFs have therefore been designed and assembled to realize non-covalently controlled tuning of luminescence, which can introduce additional functions into HOF materials. For example, using tetraphenylethylene (TPE) as the backbone, a series of HOFs that show different emission colors in combination with different solvents has been reported.
Proton conduction
Hydrogen-bonded organic frameworks constructed with proton carriers have been widely used for proton conduction. The hydrogen bonds themselves can also serve as proton sources in the frameworks to transfer protons. For example, porphyrin-based structures and guanidinium sulfonate salt monomers have been studied and incorporated into HOF design and construction for proton conduction, owing to the conductivity they exhibit.
Biological applications
As metal-free porous materials, hydrogen-bonded organic frameworks are also ideal platforms for drug delivery and disease treatment. With proper monomer selection and reasonable arrangement, Cao reported a robust HOF that can effectively encapsulate the anticancer drug doxorubicin and generate singlet oxygen via an embedded photoactive pyrene moiety, realizing the dual functions of drug release and photodynamic therapy for cancer treatment.
References | Hydrogen-bonded organic framework | [
"Chemistry",
"Materials_science"
] | 2,175 | [
"Porous polymers",
"Hydrogen-bonded organic frameworks"
] |
72,827,238 | https://en.wikipedia.org/wiki/Branched%20pathways | Branched pathways, also known as branch points (not to be confused with the mathematical branch point), are a common pattern found in metabolism. This is where an intermediate species is chemically made or transformed by multiple enzymatic processes. Linear pathways, by contrast, have only one enzymatic reaction producing a species and one enzymatic reaction consuming it.
Branched pathways are present in numerous metabolic reactions, including glycolysis, the synthesis of lysine, glutamine, and penicillin, and in the production of the aromatic amino acids.
In general, a single branch point may have several producing branches and several consuming branches. If the intermediate at the branch point is denoted S, then the rate of change of S is the sum of the rates of the producing reactions minus the sum of the rates of the consuming reactions.
At steady state, the rate of change of the intermediate is zero, so the total production and consumption rates must be equal.
Biochemical pathways can be investigated by computer simulation or by looking at the sensitivities, i.e. control coefficients for flux and species concentrations using metabolic control analysis.
Elementary properties
A simple branched pathway has one key property related to the conservation of mass. In general, with one reaction producing the branch species and two reactions consuming it (reactions J1, J2 and J3 in the example model given below), the rate of change of the branch species is the rate of the producing reaction minus the sum of the rates of the consuming reactions.
At steady state the rate of change of the branch species is zero. This gives rise to a steady-state constraint among the branch reaction rates: the flux into the branch point must equal the total flux leaving it (J1 = J2 + J3 in the example below).
Such constraints are key to computational methods such as flux balance analysis.
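A minimal numpy sketch of this steady-state constraint, written in the stoichiometric form N·v = 0 used by flux balance analysis; the stoichiometric matrix describes a single branch species produced by v1 and consumed by v2 and v3, and the flux values are arbitrary illustrative numbers.

import numpy as np

# Stoichiometric matrix for one branch species: produced by v1, consumed by v2 and v3
N = np.array([[1, -1, -1]])

v = np.array([10.0, 7.0, 3.0])   # illustrative steady-state fluxes satisfying v1 = v2 + v3
print(N @ v)                     # [0.] -> the steady-state constraint N·v = 0 holds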
Control properties of a branch pathway
Branched pathways have unique control properties compared to simple linear chain or cyclic pathways. These properties can be investigated using metabolic control analysis. The fluxes can be controlled by enzyme concentrations , , and respectively, described by the corresponding flux control coefficients. To do this the flux control coefficients with respect to one of the branch fluxes can be derived. The derivation is shown in a subsequent section. The flux control coefficient with respect to the upper branch flux, are given by:
where is the fraction of flux going through the upper arm, , and the fraction going through the lower arm, . and are the elasticities for with respect to and respectively.
For the following analysis, the flux will be the observed variable in response to changes in enzyme concentrations.
There are two possible extremes to consider: either most of the flux goes through the upper branch, or most of the flux goes through the lower branch. The former, depicted in panel a) of the figure, is the least interesting as it converts the branch into a simple linear pathway. Of more interest is when most of the flux goes through the lower branch.
If most of the flux goes through , then and (condition (b) in the figure), the flux control coefficients for with respect to and can be written:
That is, acquires proportional influence over its own flux, . Since only carries a very small amount of flux, any changes in will have little effect on . Hence the flux through is almost entirely governed by the activity of . Because of the flux summation theorem and the fact that , it means that the remaining two coefficients must be equal and opposite in value. Since is positive, must be negative. This also means that in this situation, there can be more than one rate-limiting step in a pathway.
Unlike a linear pathway, values for and are not bounded between zero and one. Depending on the values of the elasticities, it is possible for the control coefficients in a branched system to greatly exceed one. This has been termed the branchpoint effect by some in the literature.
Example
The following branched pathway model (in Antimony format) illustrates the case in which the steps carrying J1 and J3 have very high flux control over the branch flux J2, while step J2 itself has proportional control over its own flux.
J1: $Xo -> S1; e1*k1*Xo
J2: S1 ->; e2*k3*S1/(Km1 + S1)
J3: S1 ->; e3*k4*S1/(Km2 + S1)
k1 = 2.5;
k3 = 5.9; k4 = 20.75
Km1 = 4; Km2 = 0.02
Xo =5;
e1 = 1; e2 = 1; e3 = 1
A simulation of this model yields the values of the flux control coefficients with respect to the flux J2.
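The control coefficients can be computed directly from the model above, for example with the tellurium/libroadrunner Python packages (this particular toolchain is an assumption; any simulator that supports metabolic control analysis would do). A minimal sketch:

import tellurium as te  # assumes the tellurium package (with libroadrunner) is installed

model = '''
J1: $Xo -> S1; e1*k1*Xo
J2: S1 ->; e2*k3*S1/(Km1 + S1)
J3: S1 ->; e3*k4*S1/(Km2 + S1)
k1 = 2.5; k3 = 5.9; k4 = 20.75
Km1 = 4; Km2 = 0.02
Xo = 5
e1 = 1; e2 = 1; e3 = 1
'''

r = te.loada(model)   # load the Antimony model into a RoadRunner instance
r.steadyState()       # bring the model to steady state
# Scaled flux control coefficients of the branch flux J2 with respect to each enzyme activity
for e in ("e1", "e2", "e3"):
    print(e, r.getCC("J2", e))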
Branch point theorems
In a linear pathway, only two sets of theorems exist: the summation and connectivity theorems. Branched pathways have an additional set of branch-centric summation theorems. When combined with the connectivity theorems and the summation theorem, it is possible to derive the control equations shown in the previous section. The derivation of the branch point theorems is as follows.
Define the fractional flux through and as and respectively.
Increase by . This will decrease and increase through relief of product inhibition.
Make a compensatory change in by decreasing such that is restored to its original concentration (hence ).
Since and have not changed, .
Following these assumptions two sets of equations can be derived: the flux branch point theorems and the concentration branch point theorems.
Derivation
From these assumptions, the following system equation can be produced:
Because of the constraints above and assuming that the reaction rates are directly proportional to the enzyme concentrations, so that the corresponding elasticities equal one, the local equations are:
Substituting for in the system equation results in:
Conservation of mass dictates since then . Substitution eliminates the term from the system equation:
Dividing out results in:
and can be substituted by the fractional rates giving:
Rearrangement yields the final form of the first flux branch point theorem:
Similar derivations result in two more flux branch point theorems and the three concentration branch point theorems.
Flux branch point theorems
Concentration branch point theorems
Following the flux summation theorem and the connectivity theorem, the following system of equations can be produced for the simple branched pathway.
Using these theorems plus the flux summation and connectivity theorems, values for the concentration and flux control coefficients can be determined using linear algebra.
See also
Control coefficient (biochemistry)
Elasticity coefficient
Metabolic control analysis
References
Metabolic pathways | Branched pathways | [
"Chemistry"
] | 1,208 | [
"Metabolic pathways",
"Metabolism"
] |
77,260,009 | https://en.wikipedia.org/wiki/Job%20cuffing | In human resources, job cuffing refers to the reluctance of employees to leave an employer, typically due to economic uncertainty. Job cuffing typically occurs in the winter in the hopes that employment prospects will improve in the spring. Remote employees are less resistant to returning to the office during job cuffing.
Job cuffing can negatively impact productivity as disengaged employees continue to work while waiting to resume the job search. Employers can counter job cuffing by improving their employee value proposition.
The term stems from cuffing season and being handcuffed to one's job.
References
Labor relations
2022 neologisms
Popular culture neologisms
Human resource management
Occupational stress
Motivation
Work
Labor | Job cuffing | [
"Biology"
] | 137 | [
"Ethology",
"Behavior",
"Motivation",
"Human behavior"
] |
78,620,422 | https://en.wikipedia.org/wiki/Jet%20disrupter | In mass spectrometry, jet disrupters are specialized electrodes within ion funnels that counteract the effects of directed gas flow. Acting as physical barriers to neutral molecules, they disperse gas molecules and charged droplets while improving ion transmission and reducing vacuum system demands.
Development and functionality
The development of the jet disrupter stemmed from the discovery that directed gas flow continued beyond both the capillary inlet and the ion funnel exit. This persistence caused inaccurate pressure readings, contamination of mass spectrometer components, increased background noise, and placed greater demand on downstream vacuum pumps. To address these challenges posed by non-uniform gas pressures within ion funnels, the jet disrupter was introduced.
The first jet disrupter was developed by Taeman Kim, consisting of a 9-mm brass disk positioned 22 mm downstream of the first ion funnel electrode. Operating at a higher voltage than the adjacent ring electrodes, this configuration enabled ions to be deflected around the electrode while causing neutral molecules and charged droplets to disperse and more efficiently be removed by vacuum pumps. Implementation of the jet disrupter in ion funnels yielded several improvements: downstream vacuum chamber pressure was reduced by a factor of 2-3, ion transmission improved by 15%, and MS/MS spectra demonstrated enhanced signal-to-noise ratios, increasing between 5.3 and 14.1-fold (depending on sample concentrations).
Furthermore, jet disrupters can function as ion valves. By modulating the applied voltage, it is possible to control the transmission efficiency of ions through the funnel. This capability is particularly valuable for reducing the relative intensity of highly abundant analyte ions, which can rapidly fill ion trap analyzers and cause unwanted space charging effects, which occur when excessive ion populations degrade mass analyzer performance. Their application into ion cyclotron resonance (ICR) cells helped maintain optimal ion populations, improving mass accuracy and sensitivity. The valve-like properties have also proven beneficial in dual-channel ion funnel designs, where a jet disrupter can modulate the flow of ions from one channel without affecting the ion transmission efficiency of the other.
Problems and alternative technologies
While jet disrupters effectively manage directed gas flow and improve ion transmission, they face several operational challenges. Over time, the electrode surface becomes contaminated through exposure to liquid droplets and neutral molecules. Moreover, since jet disrupters cannot completely block these particles, some inevitably pass through to downstream components of the mass spectrometer, gradually degrading signal quality and necessitating periodic maintenance or cleaning.
An alternative approach involves orthogonal ion injection, where the capillary input is orthogonally aligned with the ion funnel axis. Instead of using a physical barrier like a jet disrupter, this configuration allows the ion funnel to capture ions while naturally directing gas flow toward an outlet away from the funnel. This design effectively separates the gas dynamics from the ion path while maintaining ion transmission.
References
Ions | Jet disrupter | [
"Physics",
"Chemistry"
] | 581 | [
"Ions",
"Matter"
] |
78,627,051 | https://en.wikipedia.org/wiki/MWC%20656 | MWC 656 is an X-ray binary star system in the northern constellation of Lacerta. It has the identifier HD 215227 from the Henry Draper Catalogue. With an apparent visual magnitude of 8.75, it is too faint to be viewed with the naked eye. Based on parallax measurements, it is located at a distance of approximately 6,700 light years from the Sun. At one time it was considered a member of the Lacerta OB1 association of co-moving stars, but the distance estimate places it well past that group.
Observations
On August 11, 1935, R. F. Sanford found a weak emission line of hydrogen-β in the spectrum of this star, and it was included in the 1943 supplement to the Mount Wilson catalogue of similar stars with the identifier MWC 656. In 1964, it was assigned a stellar classification of B5:ne, where 'B5' indicates this is a B-type star, 'n' means it displays 'nebular' lines due to rapid rotation, and 'e' shows it has emission lines. The ':' suffix indicates some uncertainty about the classification. It was included in a catalogue of Be stars in 1982. In 2005 it was found to have a high projected rotational velocity.
In 2009, the AGILE satellite discovered a nearby source of gamma-ray emission. This source was given the identifier AGL J2241+4454. HD 215227 is the only suitable optical counterpart to lie within the 0.6° error circle. Spectra from the star showed evidence of emission from a circumstellar disk, as well as absorption from a shell feature. Rapid changes in emission line variability suggest an orbiting companion that is tidally interacting with the disk. Hipparcos light curve data indicated the orbital period, which was confirmed in 2012 via radial velocity measurements of helium lines in the photosphere of the Be star.
Refined radial velocity measurements in 2014 indicated a massive companion, with the inferred mass range depending on the assumed mass of the Be star. A main sequence companion with a mass this high should be readily visible in the optical band. Likewise, a subdwarf or a stripped helium core from a massive progenitor star does not fit the observations. The mass is too high for a white dwarf or a neutron star, leaving a stellar-mass black hole as the only viable candidate. Faint X-ray emission was detected later the same year at a low total luminosity, making this a high-mass X-ray binary system. This luminosity is consistent with a stellar black hole in quiescence, meaning very little material is being fed into the black hole from the primary star.
This was the first reported binary system combining a black hole with a Be star. However, many Be stars are now found to have subdwarf OC companions, and the properties of these appear similar to MWC 656. The 2022 discovery of tidal distortion of the disk orbiting the Be star invalidated the original radial velocity amplitude, which called into question the 2014 mass estimates. The correction for this probably rules out a black hole companion. Emission from ionized helium near the companion appears double-peaked, indicating there is an orbiting accretion disk being fed from the disk orbiting the Be star. Revised measurements reported in 2023 found a lower mass range for the companion, which means it is instead a neutron star, a white dwarf, or a hot helium star.
The position of the star well below the galactic plane suggests this is a runaway star system, since it is a young star not located near any star-forming region. This scenario favors the neutron star companion.
References
Further reading
Be stars
Black holes
Subdwarfs
Astronomical X-ray sources
Lacerta
Durchmusterung objects
215227
112148 | MWC 656 | [
"Physics",
"Astronomy"
] | 783 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Lacerta",
"Unsolved problems in physics",
"Astrophysics",
"Constellations",
"Density",
"Astronomical X-ray sources",
"Stellar phenomena",
"Astronomical objects"
] |
78,630,383 | https://en.wikipedia.org/wiki/1%2C1-Dimethylurea | 1,1-Dimethylurea (DMU) is a urea derivative used as a polar solvent and a reagent in organic reactions. It is a solid, but forms a low-melting eutectic in combination with various hydroxylic additives that can serve as an environmentally sustainable solvent for various chemical reactions. The unsubstituted nitrogen, as an amine-like region, can serve as a nucleophile for a wide range of reactions, including reaction with acyl halides to form acylureas, coupling with vinyl halides, and multi-component condensation reactions with aldehydes. The unsubstituted amide-like portion can undergo oxidative coupling with alkenes to give dihydrooxazoles.
References
Ureas
Methyl compounds
Amide solvents | 1,1-Dimethylurea | [
"Chemistry"
] | 175 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs",
"Ureas"
] |
78,631,349 | https://en.wikipedia.org/wiki/5-Methylchrysene | 5-Methylchrysene is a polycyclic aromatic hydrocarbon (PAH) with a molecular weight of 242.3 g/mol and a melting point of 117.5 °C (243.5 °F). Its chemical formula is C19H14, and it has a vapour pressure of 2.5 × 10−7 mmHg. It can cause cancer according to an independent committee of scientific and health experts (the California Office of Environmental Health Hazard Assessment, OEHHA). It appears as purple crystals and is insoluble in water (0.062 mg/L at 27 °C / 80.6 °F) but soluble in acetone. It is a carbopolycyclic compound.
5-Methylchrysene is a member of a group of chemicals called polycyclic aromatic hydrocarbons (PAHs). It is a product of incomplete combustion and occurs as a component of tobacco and marijuana smoke, which results in its direct release to the natural environment. There is no commercial production of this compound. 5-Methylchrysene is formed during the incomplete burning of coal, oil, gas, wood, garbage, or other organic substances. PAHs generally occur as complex mixtures, for example as part of combustion products such as soot, not as single compounds. PAHs occur naturally in volcanoes and forest fires. They can also be found in substances such as crude oil and coal. They are found throughout the environment in the air, water, and soil.
It is a solid that exhibits a brilliant bluish-violet fluorescence in ultraviolet (= UV) light. When heated to decomposition it emits acrid smoke & irritating fumes. According to the MeSH Pharmacological Classification it is a carcinogen.
It has an OSHA permissible exposure limit (PEL) of 0.2 mg/m3 over an 8-hour time-weighted average (TWA). This is also the threshold limit value (TLV).
NIOSH recommends a 10-hour time-weighted average (TWA) exposure limit of 0.1 mg/m3. NIOSH considers coal tar pitch volatiles to be potential occupational carcinogens and usually recommends that occupational exposures to carcinogens be limited to the lowest feasible concentration.
Indoor air particulate samples (<10 µm) were collected in Chinese homes in Xuan Wei county burning smoky coal, smokeless coal, and wood in 1983 and 1984; the concentration of 5-methylchrysene was 1.6 to 17 µg/m3, 0.21 to 3.5 µg/m3, and 0.03 to 0.05 µg/m3, respectively. Sampling was conducted in March and September 2011.
5-Methylchrysene was detected outdoors at 21 and 13 pg/m3 in PM2.5 samples within 10 m of an 8-lane highway in Raleigh, NC, with an annual average daily traffic count of 125,000 vehicles and a parallel secondary road of 200 vehicles/day 275 m distant from the highway collection site, respectively.
Concentrations in mainstream smoke of US domestic brand cigarettes at a range of 2.5-3.9 ng/cigarette; limit of detection in smoke = 0.94 pg.
Dust/air mixtures may ignite and explode. Vigorous reactions, sometimes amounting to explosions, can result from the contact between aromatic hydrocarbons, such as 5-methylchrysene, and strong oxidizing agents. They can react exothermically with bases and with diazo compounds. Substitution at the benzene nucleus occurs by halogenation (acid catalyst), nitration, sulfonation, and the Friedel–Crafts reaction.
There is sufficient evidence in experimental animals for the carcinogenicity of 5-methylchrysene. 5-Methylchrysene is also possibly carcinogenic to humans (IARC Group 2B).
Associated disorders and diseases are adenoma, carcinoma, sarcoma, and liver and lung neoplasms.
References
Carcinogens
Polycyclic aromatic hydrocarbons | 5-Methylchrysene | [
"Chemistry",
"Environmental_science"
] | 857 | [
"Carcinogens",
"Toxicology"
] |
75,630,144 | https://en.wikipedia.org/wiki/Tris%282%2C4%2C6-trimethoxyphenyl%29phosphine | Tris(2,4,6-trimethoxyphenyl)phosphine (TTMPP) is a large triaryl organophosphine whose strong Lewis-basic properties make it useful as an organocatalyst for several types of chemical reactions.
Reactions
TTMPP removes the trimethylsilyl group from ketene silyl acetals (the silyl enol ethers of esters) to give enolates that can then act as strong nucleophiles. It thus serves as a catalyst for Mukaiyama aldol reactions and group-transfer chain-growth polymerization reactions.
As a Brønsted base, TTMPP can deprotonate various alcohols, giving nucleophilic alkoxides that can undergo Michael addition reactions.
TTMPP can act as a Michael nucleophile itself to catalyze Baylis–Hillman reactions.
Uses
TTMPP is used as a ligand to form palladium-phosphine catalysts which are more reactive than triphenylphosphine-based catalysts.
References
Tertiary phosphines
Catalysts
Methoxy compounds
Phenyl compounds | Tris(2,4,6-trimethoxyphenyl)phosphine | [
"Chemistry"
] | 244 | [
"Catalysis",
"Catalysts",
"Chemical kinetics"
] |
75,632,051 | https://en.wikipedia.org/wiki/Neptunium%28III%29%20bromide | Neptunium(III) bromide is a bromide of neptunium, with the chemical formula of NpBr3.
Preparation
Neptunium(III) bromide can be prepared by reacting neptunium dioxide and aluminium bromide:
Properties
Neptunium(III) bromide is a green solid. It can crystallize in two crystal systems:
α-NpBr3 is hexagonal with lattice parameters a = 791.7 pm and c = 438.2 pm. It has the same structure as uranium trichloride.
β-NpBr3 is orthorhombic with lattice parameters a = 411 pm, b = 1265 pm and c = 915 pm. It has the same structure as the bromides from plutonium to californium.
Neptunium(III) bromide also has a green hexahydrate, which is monoclinic.
Reactions
At 425 °C, neptunium(III) bromide can be further brominated by bromine to form neptunium(IV) bromide.
References
External reading
Neptunium(III) compounds
Bromides
Actinide halides | Neptunium(III) bromide | [
"Chemistry"
] | 250 | [
"Bromides",
"Salts"
] |
75,633,903 | https://en.wikipedia.org/wiki/Anna%20Moore | Anna Marie Moore is an astronomer who was instrumental in the formation of the Australian Space Agency as part of the expert reference group of the Australian Government. She was nominated as a fellow of the Australian Academy of Technological Sciences and Engineering in 2023 for her contributions to space exploration. She is Director of The Australian National University Institute for Space and the Advanced Instrumentation Technology Centre.
Education
Moore was awarded a BA from the University of Cambridge in 1994, a Master of Space Sciences from the University of London in 1995, and a PhD in astronomy from the University of Sydney in 2000.
Career
Moore was employed at the Arcetri Observatory from 2004 to 2005, California Institute of Technology, from 2005 to 2017, and the Australian National University from 2017 onwards. She has received funding from various sources including the National Science Foundation, for SGER: United States participation in the 2007 Traverse to Dome A- Optical Sky Brightness and Ground Layer Turbulence Profiling. Moore also has received funding from the NSF for Gattini-UV South Pole camera research and the Australian Research Council for research on the Kunlun Infrared Sky Survey.
Moore established and leads InSpace, the Institute for Space at ANU. As InSpace Director, she has exceeded normal diversity benchmarks by cultivating a workforce that is 75% women in an industry that is traditionally occupied by men. Her initiatives have facilitated the inclusion of female researchers within the InSpace Mission Specialist team and Technical Advisory Groups, two bodies that influence Australia's overarching space strategy.
During her tenure as Director of the Advanced Instrumentation and Technology Centre (AITC) at ANU, she played a role in broadening the scope of space testing services for the aerospace sector in both Australia and New Zealand. She also ensured access for the space community to the AITC's National Space Test Facility (NSTF).
By early 2020, during the COVID-induced closures affecting much of Australian business, Moore facilitated the reopening of the NSTF, the first facility at ANU to reopen. This action ensured the continued fulfilment of heightened space testing demands from space companies, start-ups, and universities across Australia.
Select publications
Moore has authored over 100 peer-reviewed publications, with over 3060 citations and an h-index of 29 as of 2023. Moore has also written various articles on space for The Conversation, on 'Why space matters' and space exploration in a post-COVID world.
P Morrissey, M Matuszewski, DC Martin, JD Neill, H Epps, J Fucik, et al. (2018). The keck cosmic web imager integral field spectrograph. The Astrophysical Journal 864 (1), 93. DOI 10.3847/1538-4357/aad597
DC Martin, D Chang, M Matuszewski, P Morrissey, S Rahman, A Moore, et al. (2014). Intergalactic medium emission observations with the Cosmic Web Imager. I. The circum-QSO medium of QSO 1549+ 19, and evidence for a filamentary gas inflow. The Astrophysical Journal 786 (2), 106
JE Larkin, AM Moore, SA Wright, JE Wincentsen, D Anderson, et al. (2016) The infrared imaging spectrograph (IRIS) for TMT: instrument overview. Ground-based and Airborne Instrumentation for Astronomy VI 9908, 582–594
Awards
2023 – Fellow of the Australian Academy of Technological Sciences and Engineering
2021 – Australian Space Awards
References
External links
TROVE
ATSE
Living people
Astronomers
Alumni of the University of Cambridge
Fellows of the Australian Academy of Technological Sciences and Engineering
Australian women academics
Women in space
Australian women scientists
Year of birth missing (living people)
University of Sydney alumni | Anna Moore | [
"Astronomy"
] | 764 | [
"Astronomers",
"People associated with astronomy"
] |
75,639,463 | https://en.wikipedia.org/wiki/Aluminium%20gallium%20antimonide | Aluminium gallium antimonide, also known as gallium aluminium antimonide or AlGaSb (AlxGa1-xSb), is a ternary III-V semiconductor compound. It can be considered as an alloy between aluminium antimonide and gallium antimonide. The alloy can contain any ratio between aluminium and gallium. AlGaSb refers generally to any composition of the alloy.
Preparation
AlGaSb films have been grown by molecular beam epitaxy, chemical beam epitaxy and liquid phase epitaxy on gallium arsenide and gallium antimonide substrates. The result is a layered heterostructure on various III-V compounds.
Electronic properties
The bandgap and lattice constant of AlGaSb alloys are between those of pure AlSb (a = 0.614 nm, Eg = 1.62 eV) and GaSb (a = 0.610 nm, Eg = 0.73 eV). At an intermediate composition, the bandgap transitions from an indirect gap, like that of pure AlSb, to a direct gap, like that of pure GaSb. Different values of the composition at which this transition occurs have been reported over time, both from computational and experimental studies, with reported values ranging from x = 0.23 to x = 0.43. The spread in the reported values of the transition is mainly due to the closeness of the gap sizes at the Γ and L points in the Brillouin zone and variations in the experimentally-determined gap sizes.
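For a rough orientation, the endpoint values quoted above can be interpolated linearly (a Vegard-type approximation). The short Python sketch below does this for the lattice constant and the direct gap; it deliberately ignores bowing and the indirect L valley, so it is only a zeroth-order estimate, and the function name is hypothetical.

def algasb_linear_estimate(x):
    """Linear (Vegard-type) interpolation of AlxGa1-xSb properties between the
    GaSb (x = 0) and AlSb (x = 1) endpoint values quoted in the text.
    Bowing and the indirect gap are ignored, so this is only a rough estimate."""
    a_GaSb, a_AlSb = 0.610, 0.614    # lattice constants, nm
    Eg_GaSb, Eg_AlSb = 0.73, 1.62    # band gaps, eV
    a = (1 - x) * a_GaSb + x * a_AlSb
    Eg = (1 - x) * Eg_GaSb + x * Eg_AlSb
    return a, Eg

print(algasb_linear_estimate(0.3))   # composition near the reported direct-indirect crossover range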
Applications
AlGaSb has been incorporated into devices such as heterojunction bipolar and high-electron-mobility transistors, resonant-tunneling diodes, solar cells, short-wave infrared lasers, and a novel infrared light modulator. It is sometimes selected as an interlayer or buffer layer in studies of GaSb and InAs quantum wells.
Al-rich AlGaSb is sometimes selected over AlSb in heterostructures for being more chemically stable and resistant to oxidation than pure AlSb.
References
Antimonides
Aluminium compounds
Gallium compounds
III-V compounds | Aluminium gallium antimonide | [
"Chemistry"
] | 434 | [
"III-V compounds",
"Inorganic compounds"
] |
68,437,256 | https://en.wikipedia.org/wiki/Plethystic%20exponential | In mathematics, the plethystic exponential is a certain operator defined on (formal) power series which, like the usual exponential function, translates addition into multiplication. This exponential operator appears naturally in the theory of symmetric functions, as a concise relation between the generating series for elementary, complete and power sums homogeneous symmetric polynomials in many variables. Its name comes from the operation called plethysm, defined in the context of so-called lambda rings.
In combinatorics, the plethystic exponential is a generating function for many well studied sequences of integers, polynomials or power series, such as the number of integer partitions. It is also an important technique in the enumerative combinatorics of unlabelled graphs, and many other combinatorial objects.
In geometry and topology, the plethystic exponential of a certain geometric/topologic invariant of a space, determines the corresponding invariant of its symmetric products.
Definition, main properties and basic examples
Let R[[x]] be a ring of formal power series in the variable x, with coefficients in a commutative ring R. Denote by R[[x]]+ the ideal consisting of power series without constant term. Then, given f in R[[x]]+, its plethystic exponential is given by
PE[f](x) = exp( Σ_{k≥1} f(x^k)/k ),
where exp is the usual exponential function. It is readily verified that (writing simply PE[f] when the variable is understood):
PE[f + g] = PE[f] · PE[g].
Some basic examples are:
PE[x](x) = 1/(1 − x) and PE[x/(1 − x)](x) = Π_{n≥1} 1/(1 − x^n) = Σ_{n≥0} p(n) x^n.
In this last example, p(n) is the number of partitions of n.
The plethystic exponential can be also defined for power series rings in many variables.
Product-sum formula
The plethystic exponential can be used to provide numerous product–sum identities. This is a consequence of a product formula for plethystic exponentials themselves. If f(x) = Σ_{n≥1} a_n x^n denotes a formal power series with real coefficients a_n, then it is not difficult to show that PE[f](x) = Π_{n≥1} (1 − x^n)^{−a_n}. The analogous product expression also holds in the many-variables case. One particularly interesting case is its relation to integer partitions and to the cycle index of the symmetric group.
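The product formula makes the plethystic exponential easy to compute numerically. The following self-contained Python sketch exponentiates the truncated series Σ f(x^k)/k with plain list-based polynomial arithmetic and recovers the partition numbers for f(x) = x + x^2 + ...; the function name and truncation order are just illustrative choices.

from math import factorial

def pe(coeffs, order):
    """Truncated plethystic exponential.
    coeffs[n] is the coefficient of x^n in f (coeffs[0] must be 0);
    returns the coefficients of PE[f] = exp(sum_{k>=1} f(x^k)/k) up to x^order."""
    # g = sum_{k>=1} f(x^k)/k, truncated at x^order
    g = [0.0] * (order + 1)
    for k in range(1, order + 1):
        for n in range(1, len(coeffs)):
            if n * k <= order:
                g[n * k] += coeffs[n] / k
    # exp(g) as a formal power series: accumulate the powers g^m / m!
    result = [0.0] * (order + 1)
    result[0] = 1.0
    power = [0.0] * (order + 1)
    power[0] = 1.0                       # g^0
    for m in range(1, order + 1):
        new = [0.0] * (order + 1)        # power <- power * g (truncated)
        for i, a in enumerate(power):
            if a:
                for j, b in enumerate(g):
                    if b and i + j <= order:
                        new[i + j] += a * b
        power = new
        for i in range(order + 1):
            result[i] += power[i] / factorial(m)
    return [round(c) for c in result]

order = 10
f = [0] + [1] * order                    # f(x) = x + x^2 + ... (truncated)
print(pe(f, order))                      # 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42 -> partition numbers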
Relation with symmetric functions
Working with variables x1, x2, ..., xn, denote by h_k the complete homogeneous symmetric polynomial, that is, the sum of all monomials of degree k in the given variables, and by e_k the elementary symmetric polynomials. Then the h_k and the e_k are related to the power sum polynomials p_k by Newton's identities, which can succinctly be written, using plethystic exponentials, as:
Macdonald's formula for symmetric products
Let X be a finite CW complex, of dimension d, with Poincaré polynomial P_X(t) = Σ_{k=0}^{d} b_k t^k, where b_k is its kth Betti number. Then the Poincaré polynomial of the nth symmetric product of X, denoted P_{Sym^n(X)}(t), is obtained from the series expansion:
The plethystic programme in physics
In a series of articles, a group of theoretical physicists, including Bo Feng, Amihay Hanany and Yang-Hui He, proposed a programme for systematically counting single and multi-trace gauge invariant operators of supersymmetric gauge theories. In the case of quiver gauge theories of D-branes probing Calabi–Yau singularities, this count is codified in the plethystic exponential of the Hilbert series of the singularity.
References
Symmetric functions | Plethystic exponential | [
"Physics",
"Mathematics"
] | 631 | [
"Sequences and series",
"Symmetry",
"Mathematical structures",
"Symmetric functions",
"Generating functions",
"Algebra"
] |
68,438,168 | https://en.wikipedia.org/wiki/FLEUR | The FLEUR code (also Fleur or fleur) is an open-source scientific software package for the simulation of material properties of crystalline solids, thin films, and surfaces. It implements Kohn-Sham density functional theory (DFT) in terms of the all-electron full-potential linearized augmented-plane-wave method. With this, it is a realization of one of the most precise DFT methodologies. The code has the common features of a modern DFT simulation package. In the past, major applications have been in the field of magnetism, spintronics, quantum materials, e.g. in ultrathin films, complex magnetism like in spin spirals or magnetic Skyrmion lattices, and in spin-orbit related physics, e.g. in graphene and topological insulators.
Simulation model
The physical model used in Fleur simulations is based on the (F)LAPW(+LO) method, but it is also possible to make use of an APW+lo description. The calculations employ the scalar-relativistic approximation for the kinetic energy operator. Spin-orbit coupling can optionally be included. It is possible to describe noncollinear magnetic structures periodic in the unit cell. The description of spin spirals with deviating periodicity is based on the generalized Bloch theorem. The code offers native support for the description of three-dimensional periodic structures, i.e., bulk crystals, as well as two-dimensional periodic structures like thin films and surfaces. For the description of the exchange-correlation functional different parametrizations for the local density approximation, several generalized-gradient approximations, Hybrid functionals, and partial support for the libXC library are implemented. It is also possible to make use of a DFT+U description.
Features
The Fleur code can be used to directly calculate many different material properties. Among these are:
The total energy
Forces on atoms
Density of states (including projections onto individual atoms and orbitals characters)
Band structures (including projections onto individual atoms and orbitals characters and band unfolding)
Charges, magnetic moments, and orbital moments at individual atoms
Electric multipole moments and magnetic dipole moments
Heisenberg interaction parameters (via the magnetic force theorem or via comparing different magnetic structures)
Magnetocrystalline anisotropy energy (via the magnetic force theorem or via comparing different magnetic structures)
Dzyaloshinskii-Moriya interaction parameters (via the magnetic force theorem or via comparing different magnetic structures)
Spin-spiral dispersion relations (via the magnetic force theorem or via comparing different magnetic structures)
EELS spectra
Magnetic circular dichroism spectra
The Work function for surfaces
For the calculation of optical properties Fleur can be combined with the Spex code to perform calculations employing the GW approximation to many-body perturbation theory. Together with the Wannier90 library it is also possible to extract the Kohn-Sham eigenfunctions in terms of Wannier functions.
See also
List of quantum chemistry and solid state physics software
References
External links
The FLEUR project
Computational chemistry software
Density functional theory software
Physics software | FLEUR | [
"Physics",
"Chemistry"
] | 639 | [
"Computational chemistry software",
"Chemistry software",
"Computational physics",
"Computational chemistry",
"Density functional theory software",
"Physics software"
] |
68,444,579 | https://en.wikipedia.org/wiki/Takeuti%E2%80%93Feferman%E2%80%93Buchholz%20ordinal | In the mathematical fields of set theory and proof theory, the Takeuti–Feferman–Buchholz ordinal (TFBO) is a large countable ordinal, which acts as the limit of the range of Buchholz's psi function and Feferman's theta function. It was named by David Madore, after Gaisi Takeuti, Solomon Feferman and Wilfried Buchholz. It is written as using Buchholz's psi function, an ordinal collapsing function invented by Wilfried Buchholz, and in Feferman's theta function, an ordinal collapsing function invented by Solomon Feferman. It is the proof-theoretic ordinal of several formal theories:
, a subsystem of second-order arithmetic
-comprehension + transfinite induction
IDω, the system of ω-times iterated inductive definitions
Definition
Let represent the smallest uncountable ordinal with cardinality .
Let represent the th epsilon number, equal to the th fixed point of
Let represent Buchholz's psi function
References
Proof theory
Ordinal numbers
Set theory | Takeuti–Feferman–Buchholz ordinal | [
"Mathematics"
] | 240 | [
"Ordinal numbers",
"Set theory",
"Proof theory",
"Mathematical logic",
"Mathematical objects",
"Number stubs",
"Order theory",
"Numbers"
] |
68,445,549 | https://en.wikipedia.org/wiki/Mizab%20al-Rahma | The Mīzāb al-Raḥma (, 'gutter of mercy'), also known as the Mīzāb al-Kaʿba ('gutter of the Kaʿba'), is a rain gutter projecting from the roof of the Kaʿba enabling rainwater to pour to the ground below.
Architecture
The roof of the Kaʿba is flat, but slopes gently down to the north-west corner. From this corner, the mīzāb juts out, conducting rainwater from the roof. The lip of the mīzāb has an appendage known as the "beard of the mīzāb". The ground below is paved with marble slabs and decorated with inlaid mosaic designs. The design of the mīzāb has changed over the years; the current form is golden. Its length is , which is included in the wall of the Kaaba, its cavity width is , the height of each side is , and its entry into the roof wall is .
A detailed description of the mīzāb around 1183–85 CE is offered by Ibn Jubayr:
The Mizab is on the top of the wall which overlooks the Hijr. It is of gilded copper and projects four cubits over the Hijr, its breadth being a span. This place under the waterspout is also considered as being a place where, by the favour of God Most High, prayers are answered. The Yemen corner is the same. The wall connecting this place with the Syrian corner is called al-Mustajar [The Place of Refuge]. Underneath the water-spout, and in the court of the Hijr near to the wall of the blessed House, is the tomb of Isma'il [Ishmael] - may God bless and preserve him. Its mark is a slab of green marble, almost oblong and in the form of a mihrab. Beside it is a round green slab of marble, and both [they are verde antico] are remarkable to look upon.
Role in worship
In his Kitāb Akhbār Makka, the ninth-century scholar al-Azraqī wrote with reference to the mīzāb that "anyone who performs the ṣalāt under the mat̲h̲ʿab becomes as pure as on the day when his mother bore him".
Ibn Jubayr offers a vivid account of worship at the mīzāb in 1183 CE:
One of the things that deserve to be confirmed and recorded for the blessings and favour of seeing and observing it is that on Friday the 19th of Jumada l-Ula, which was the 9th of September [1183], God raised from the sea a cloud which moved towards Damascus and rained heavily like an abundant fountain, according to the words of the Messenger of God--may God bless and preserve him. It came at the ending of the afternoon's prayers and with the evening of the same day, raining copiously. Men hastened to the Hijr and stood beneath the blessed water-spout, stripping off their clothes and meeting the water that flowed from it with their heads, their hands, and their mouths. They pressed round it in a throng, raising a great clamour, each one coveting for his body a share of the divine mercy. Their prayers went up, the tears of the contrite flowed, and you could hear nothing but the swell of voices in prayer and the sobs of the weeping. The women stood without the Hijr, watching with weeping eyes and humble hearts, wishing they could go to that spot. Some pilgrims listful of performing a meritorious act, and moved as well to pity, drenched their clothes in the blessed water and, going out to the women, wrung them into the hands of some of them. They took it and drank it and laved it over their faces and bodies.
History
The first mīzāb used on the Kaʿba was the one that the Quraysh made when they built it before the Prophetic mission.
Then the Mīzāb of Abd Allah ibn al-Zubayr when he built the Kaʿba in 684 AD.
Then the Mīzāb of Al-Hajjaj ibn Yusuf, who rebuilt the Kaʿba in 692 AD.
Then the Mīzāb of Sheikh Abu al-Qasim Ramesht, which his slave reached after his death in 1142 AD.
Then the Mīzāb of Al-Muqtafi in 1146 AD.
Then the Mīzāb of Al-Nasir in 1279 AD.
Then the Mīzāb of Suleiman the Magnificent in 1551 AD.
Then the Mīzāb which was made from Egypt in 1554 AD.
Then the Mīzāb of The Ottoman Sultan Ahmed I Ibn Muhammad III in 1612 AD.
Then the Mīzāb of The Sultan Abdulmejid I in 1856 AD.
Then the Mīzāb, which was sent with Haji Rida Pasha in 1859 AD.
Then the Mīzāb of the reign of King Fahd bin Abdulaziz in 1997, when he replaced the old Mīzāb for the roof of the Ka'aba with a new one, stronger with the same specifications as the old one.
Further reading
Caїd Ben Chérif, Aux Villes Saintes de l’Islam (Paris, 1919), p. 75.
References
Kaaba
Stormwater management | Mizab al-Rahma | [
"Chemistry",
"Environmental_science"
] | 1,101 | [
"Water treatment",
"Stormwater management",
"Water pollution"
] |
68,446,173 | https://en.wikipedia.org/wiki/Grating-coupled%20interferometry | Grating-coupled interferometry (GCI) is a biophysical characterization method mainly used in biochemistry and drug discovery for label-free analysis of molecular interactions. Similar to other optical methods such as surface plasmon resonance (SPR) or bio-layer interferometry (BLI), it is based on measuring refractive index changes within an evanescent field near a sensor surface. After a target is immobilized on the sensor surface, analyte molecules in solution that bind to that target cause a small increase in the local refractive index. By monitoring these refractive index changes over time, characteristics such as kinetic rates and affinity constants of the analyte-target binding, or analyte concentrations, can be determined.
Explanation
GCI is based on phase-shifting waveguide interferometry. Light of the sensing arm of the interferometer is coupled into a monomode waveguide through a first grating and undergoes a phase change until it reaches a second grating, with the phase change depending on the local refractive index within the evanescent field. The second grating is used to couple in light of the reference arm of the interferometer, and the interference created by the superposition of the sensing and reference waves after the second grating translates the phase changes into an intensity modulation. By rapid phase modulation of one of the arms using a liquid crystal element, and thanks to the long interaction length with the sample, extremely high sensitivities with respect to surface refractive index can be achieved even at acquisition rates above 10 Hz. Since the interference is created on chip and not through free-space propagation, a high robustness with respect to ambient disturbances such as vibrations or temperature changes is achieved.
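The detection principle can be summarized in a few lines: a change Δn_eff of the effective refractive index in the evanescent field shifts the phase of the sensing wave over the interaction length, and two-beam interference with the phase-modulated reference wave turns that shift into an intensity modulation. The Python sketch below uses purely illustrative numbers (wavelength, interaction length, index change); it is not based on the specifications of any particular instrument.

import numpy as np

wavelength = 785e-9     # illustrative laser wavelength, m
L = 1e-2                # illustrative interaction length of the sensing pad, m
dn_eff = 1e-7           # assumed change in effective refractive index from analyte binding

# Phase accumulated by the sensing arm relative to the reference arm
dphi = 2 * np.pi * L * dn_eff / wavelength

# Two-beam interference: the phase shift appears as an intensity modulation
I1 = I2 = 1.0
phase_mod = np.linspace(0, 2 * np.pi, 5)            # phase steps applied to the reference arm
I = I1 + I2 + 2 * np.sqrt(I1 * I2) * np.cos(phase_mod + dphi)
print(f"binding-induced phase shift: {dphi:.3e} rad")
print(I)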
References
See also
Receptor–ligand kinetics
Affinity
Ligand binding assay
Immunoassay
Label-free quantification
Electromagnetism
Nanotechnology
Spectroscopy
Biochemistry methods
Biophysics
Forensic techniques
Protein–protein interaction assays
Plasmonics
Optical phenomena | Grating-coupled interferometry | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 403 | [
"Physical phenomena",
"Protein–protein interaction assays",
"Surface science",
"Fundamental interactions",
"Nanotechnology",
"Spectroscopy",
"Plasmonics",
"Electromagnetism",
"Instrumental analysis",
"Materials science",
"Biophysics",
"Biochemistry methods",
"Molecular physics",
"Spectrum ... |
74,324,439 | https://en.wikipedia.org/wiki/Thermal%20equation%20of%20state%20of%20solids | In physics, the thermal equation of state is a mathematical relation among pressure P, temperature T, and volume V. The thermal equation of state for ideal gases is the ideal gas law, PV = nRT (where R is the gas constant and n the amount of substance), while the thermal equation of state for solids is expressed as:
P(V, T) = P(V, T0) + Pth(V, T)    (1)
where P(V, T0) is the volume dependence of pressure at room temperature (isothermal), and Pth(V, T) is the temperature dependence of pressure at constant volume (isochoric), known as the thermal pressure.
For an ideal gas at high pressure and temperature (high P-T), the gas is loaded into a rigid container and is confined by it, while a solid at high P-T is loaded inside a soft pressure medium and can expand or shrink in that medium when heated and compressed. The compression/heating process of a gas can therefore be carried out at constant temperature (isothermal), constant pressure (isobaric), or constant volume (isochoric). The compression/heating process of a solid can be isothermal or isobaric, but it cannot be isochoric. At high P-T, the pressure of the ideal gas is calculated as force divided by area, while the pressure of a solid is calculated from the bulk modulus (K, or B) and the volume at room temperature, or from Eq. (1) at high P-T. A pressure gauge is a material whose bulk modulus and thermal equation of state are well known. To study a solid with unknown bulk modulus, it has to be loaded together with a pressure gauge, and its pressure is determined from that gauge.
The most common pressure gauges are Au, Pt, Cu, MgO, etc. When two or more pressure gauges are loaded together at high P-T, their pressure readings should be the same. However, large discrepancies have been reported in pressure determination using different pressure gauges, or using different thermal equations of state for the same pressure gauge; a schematic plot in the cited paper (its Fig. 1) illustrates the discrepancy.
Of the total pressure in Eq. (1), the first-term pressures on the right-hand side for Ag, Cu, Mo, and Pd at room temperature are consistent over a wide pressure range, according to the Mao ruby scale, up to 1 Mbar. In addition, the first-term pressures of Ag, Cu, and MgO are consistent according to the third-order Birch–Murnaghan equation of state. Therefore, the discrepancy in the total pressure P(V, T) should come from the second term in Eq. (1), which is the thermal pressure Pth(V, T) at high P-T.
Thermal pressure
Anderson thermal pressure model
In 1968, Anderson developed an expression for the isochoric temperature-pressure gradient; its reciprocal relates the thermal pressure to temperature in a constant-volume heating process through the product of the thermal expansion coefficient and the isothermal bulk modulus, αP·KT. Note that the thermal pressure is the pressure change in a constant-volume heating process, expressed as the integral of αP·KT over temperature.
The Anderson thermal pressure model is the first thermal pressure model and is also the most commonly used one.
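In standard notation, with αP the volume thermal expansion coefficient and KT the isothermal bulk modulus, the Anderson relation and the resulting thermal pressure can be written as follows; the final step is the constant-αP·KT simplification discussed below and is only an approximation:

(∂P/∂T)_V = αP·KT,    Pth(V, T) = ∫[T0→T] αP·KT dT ≈ αP·KT·(T − T0)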
Experimental
The thermal pressure is the pressure change in a constant-volume heating process. As noted in the section above, there are large discrepancies in pressure determination using different pressure gauges, or different thermal equations of state for the same pressure gauge; however, the pressure determination during heating needs to be reliable in order to measure the thermal pressure in experiments. In addition, to measure the thermal pressure in experiments, the heating process has to be a constant-volume (isochoric) process. According to the first section above, heating of a solid cannot be isochoric, so the pressure change in a non-isochoric heating process is not exactly the thermal pressure.
When a solid is loaded with a pressure gauge and heated/compressed together at high P-T, the thermal pressure of the solid does not equal that of its gauge. The pressure is a state variable, while the thermal pressure is a process variable. A solid is subject to the same pressure as its gauge, but in a heating process from T1 to T2, if the solid's volume is kept constant by compression, its pressure gauge's volume will most likely not remain constant during the same heating process. In the cited paper, the authors demonstrate that the constant-volume heating paths of the solid and of its gauge differ,
which means the thermal pressure of a solid does not equal that of its gauge.
Determination from models
According to the Anderson model, thermal pressure is the integration of the product of the thermal expansion αP and the bulk modulus KT, i.e. Pth = ∫ αP·KT dT. In this model, both αP and KT are pressure and temperature dependent, so integrating αP·KT over temperature in an isochoric process is not straightforward. To bypass this issue, the P-T dependent αP and KT are assumed to be constant. But the authors in the publication demonstrated that the model-predicted pressures of Au and MgO obtained from constant αP and KT at ambient pressure deviate from the experimental values, and the higher the temperature, the larger the deviation. A cartoon plot of the pressures predicted from the thermal-pressure version of the equation of state in the paper is shown in Fig. 2 here.
The authors in the paper proposed an alternative way to make the integration of αP·KT possible. They assume the thermal expansion to be pressure independent, reducing the P-T dependent αP and KT to quantities that depend only on temperature. But in a preprint paper, the author proved that a pressure-independent thermal expansion forces the bulk modulus to be temperature independent, which again reduces the P-T dependent αP and KT to constants.
There are various other thermal pressure models, but accurately determined thermal pressures are required to prove these models.
Pressure-dependent thermal expansion equation of state
As explained in the sections above, the thermal pressure can neither be accurately determined in experiments nor accurately calculated from the Anderson model. A thermal expansion equation of state has been proposed before, consisting of thermal expansion at ambient pressure followed by isothermal compression at high temperature. In this model there is no thermal pressure term, but accurate pressure determination at high P-T and the temperature-dependent KT remain major challenges at present. In the paper, the authors proposed a different thermal expansion equation of state, consisting of isothermal compression at room temperature followed by thermal expansion at high pressure. To distinguish the two, the latter is called the pressure-dependent thermal expansion equation of state.
To develop the pressure-dependent thermal expansion equation of state, consider a compression process at room temperature from (V0, T0, P0) to (V1, T0, P1). A general form of the volume is expressed as
V1 = f(P1, T0) .................................(1)
f in the above expression is the mathematical relation between volume, temperature, and pressure. It can be expressed by various models, such as the Murnaghan, modified Tait, natural strain, Vinet, Birch–Murnaghan, and others. Its inverse function, giving the pressure, can be written as
P1 = f⁻¹(V1, T0) ............................(2)
The thermal expansion in an isobaric heating process from (V1, T0, P) to (V, T2,P) can be expressed as
V1 = V·exp(−∫T0→T2 αP·dT) .......................................................(3)
The authors substitute the thermal expansion formula, Eq. (3), into Eq. (2), denote V·exp(−∫αP·dT) as VM, and obtain the general form of the thermal expansion equation of state from (V0, T0, P0) to (V, T2, P)
P = f⁻¹(VM, T0) ...........................(4)
Here V is the volume after the isobaric heating. In the paper, the authors explain in detail how to develop Eq. (4), taking the third-order Birch–Murnaghan equation of state as an example.
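A minimal numerical sketch of this procedure, assuming the third-order Birch–Murnaghan form for f and illustrative MgO-like parameter values (assumptions, not fitted values from the paper): the measured high-temperature volume V is first reduced to VM = V·exp(−∫αP dT) and the pressure is then read from the room-temperature isotherm.

```python
import numpy as np

def bm3_pressure(V, V0, K0, K0p):
    """Third-order Birch-Murnaghan room-temperature isotherm, P = f^-1(V, T0)."""
    eta = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * K0 * (eta**7 - eta**5) * (1.0 + 0.75 * (K0p - 4.0) * (eta**2 - 1.0))

def pressure_from_VT(V, T, V0, K0, K0p, alpha_P, T0=300.0, n=500):
    """Pressure-dependent thermal expansion EOS, Eq. (4):
    VM = V * exp(-int_{T0}^{T} alpha_P dT), then P = f^-1(VM, T0).
    alpha_P is the isobaric thermal expansion at the (high) pressure of interest."""
    Ts = np.linspace(T0, T, n)
    VM = V * np.exp(-np.trapz([alpha_P(t) for t in Ts], Ts))
    return bm3_pressure(VM, V0, K0, K0p)

# Illustrative MgO-like numbers (assumptions, not values from the paper):
alpha = lambda T: 3.0e-5 + 1.0e-8 * (T - 300.0)   # 1/K, assumed thermal expansion
print(pressure_from_VT(V=76.0, T=1200.0, V0=74.7, K0=160.0, K0p=4.0, alpha_P=alpha))
```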
To partially validate the pressure-dependent thermal expansion equation of state, the authors collected a set of MgO X-ray diffraction data at various temperatures at ambient pressure. At ambient pressure P = 0 GPa is known, so the volume, pressure, and temperature are all given. The authors then predicted the pressure from the given (V, T) using the pressure-dependent thermal expansion equation of state; the predicted pressures match the known value of 0 GPa, as shown in Figure 2. In addition to MgO, the authors demonstrate that Au shows a similar trend. Validation of the pressure-dependent thermal expansion equation of state at high P-T conditions is still required.
The pressure-dependent αP has to be determined from an isobaric heating process. It has been reported that heating in a membrane-driven diamond anvil cell (DAC) at high P-T was isobaric. The authors of the paper propose a reversible isobaric heating concept, in which the plotted heating and cooling data points lie on the same curve. They consider such a heating/cooling process very close to ideal isobaric behavior. A cartoon plot of the reversible heating/cooling proposed in the paper is shown as Fig. 3.
In the paper, the authors demonstrated the reversible isobaric heating concept with MgO at 9.5 GPa. In a reversible heating process, no pressure determination at high P-T is required, thus avoiding the difficulty of accurately determining the pressure at high P-T.
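A minimal sketch of how αP could be extracted from such reversible isobaric heating/cooling data is shown below; the data values are illustrative assumptions, not measurements from the paper.

```python
import numpy as np

def alpha_p_from_isobaric_data(T, V):
    """Isobaric thermal expansion alpha_P = (1/V) dV/dT from V(T) data at fixed P.

    T, V : arrays of temperatures and volumes from a reversible heating/cooling run,
           sorted by temperature (heating and cooling points lie on the same curve).
    """
    dVdT = np.gradient(V, T)
    return dVdT / V

# Illustrative synthetic data with a constant 3e-5 1/K expansion (not measured values):
T = np.linspace(300.0, 1100.0, 9)
V = 74.7 * np.exp(3.0e-5 * (T - 300.0))
print(alpha_p_from_isobaric_data(T, V))   # ~3e-5 1/K at every point
```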
References
Equations of state | Thermal equation of state of solids | [
"Physics"
] | 2,077 | [
"Equations of state",
"Statistical mechanics",
"Equations of physics"
] |
74,325,364 | https://en.wikipedia.org/wiki/Kato%27s%20inequality | In functional analysis, a subfield of mathematics, Kato's inequality is a distributional inequality for the Laplace operator or certain elliptic operators. It was proven in 1972 by the Japanese mathematician Tosio Kato.
The original inequality is for some degenerate elliptic operators. This article treats the special (but important) case for the Laplace operator.
Inequality for the Laplace operator
Let Ω ⊆ Rᴺ be a bounded and open set, and let u ∈ L¹loc(Ω) be such that Δu ∈ L¹loc(Ω). Then the following holds
Δ|u| ≥ sgn(u)·Δu in D′(Ω),
where sgn is the sign function and
L¹loc(Ω) is the space of locally integrable functions – i.e., functions that are integrable on every compact subset of their domains of definition.
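For instance, in one dimension the function u(x) = x on Ω = (−1, 1) has a vanishing Laplacian, while |u| has a distributional Laplacian concentrated at the origin, so the inequality reduces to the statement that a nonnegative multiple of the Dirac delta is nonnegative:

```latex
% One-dimensional illustration of Kato's inequality for u(x) = x on (-1, 1).
\[
  \Delta u = u'' = 0,
  \qquad
  \Delta |u| = \bigl(|x|\bigr)'' = 2\,\delta_0
  \;\ge\; \operatorname{sgn}(u)\,\Delta u = 0
  \quad \text{in } \mathcal{D}'\bigl((-1,1)\bigr).
\]
```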
Remarks
Sometimes the inequality is stated in the form
Δu⁺ ≥ χ[u≥0]·Δu in D′(Ω),
where u⁺ = max(u, 0) and χ[u≥0] is the indicator function of the set on which u ≥ 0.
If is continuous in then
in .
Literature
References
Functional analysis
Inequalities
Differential operators | Kato's inequality | [
"Mathematics"
] | 162 | [
"Mathematical analysis",
"Functions and mappings",
"Functional analysis",
"Mathematical theorems",
"Mathematical objects",
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Differential operators"
] |
72,845,151 | https://en.wikipedia.org/wiki/Kenneth%20T.%20Gillen | Kenneth T. Gillen is a retired Sandia National Labs researcher noted for contributions to service life prediction methods for elastomers
Education
Gillen completed his PhD in chemistry at University of Wisconsin - Madison in 1970 under advisor Joseph H. Noggle.
Career
Gillen joined Sandia National Labs in 1974, working on elastomeric seals in nuclear weapons and satellites. His research has focused on the prediction of the service life of polymers under exposure to temperature, radiation, humidity and mechanical stress. His most highly cited published work was the development of testing and analysis methods for the combined effects of diffusion and oxidation in polymers. His methods overcame limitations of earlier, less accurate methods based on the Arrhenius equation. His development of a technique for profiling of oxidation-induced stiffness gradients in aged elastomers was applied in the tire industry.
Gillen served as an editor of the Elsevier journal Polymer Degradation and Stability from 1999 to 2006.
He retired from Sandia in 2004 but continued in a part time consulting role until 2015.
Awards
2020 - Melvin Mooney Distinguished Technology Award from Rubber Division of the ACS
References
Polymer scientists and engineers
Living people
Year of birth missing (living people) | Kenneth T. Gillen | [
"Chemistry",
"Materials_science"
] | 243 | [
"Polymer scientists and engineers",
"Physical chemists",
"Polymer chemistry"
] |
71,321,817 | https://en.wikipedia.org/wiki/Carboxylate-based%20metal%E2%80%93organic%20frameworks | Carboxylate–based metal–organic frameworks are metal–organic frameworks that are based on organic molecules comprising carboxylate functional groups.
Divalent Carboxylates
The divalent metal carboxylate based frameworks MOF-5 and HKUST-1 are examples of prototypical MOF materials and triggered a huge growth in the field of metal-organic frameworks. A keyword search for "metal-organic frameworks" registers >1,600 publications in 2010 and >2,000 for 2011, a strong indication of worldwide interest in this area and of continuing growth. More recent work on divalent carboxylates with longer and more complex organic components is pushing the limits of gas adsorption and storage properties, with the highest surface areas and lowest densities of all known crystalline materials.
MOF-5 & MOF-177
MOF-5 is an early and heavily studied example of a MOF material. The material is an example of a cubic 3-dimensional extended lattice composed of Zn4O inorganic clusters connected by terephthalate linkers. Each cluster involves 6 carboxylate groups of 6 terephthalate molecules bridging zinc atoms, leading to an octahedral type arrangement around the cluster which, when expanded in three dimensions, reproduces the cubic arrangement of the material. The structure is retained upon solvent removal, and the literature states that the Langmuir surface area (a monolayer-equivalent surface area) is of the order of 3000 m2 g−1, significantly higher than that of most zeolites and, at the time, among the highest of all known materials. One downside to the large open pore structure is the potential for interpenetration of two frameworks. In the case of MOF-5 and the IRMOF family of isoreticular structures, if the pore size is sufficient to contain a Zn4O cluster and the dicarboxylate is of sufficient length, then the formation of a second extended lattice can occur within the first. This interpenetration or catenation of two frameworks results in a significant reduction in the porosity as the majority of the void space in the cage is filled with the other framework. Examples of interpenetration are reported for some members of the IRMOF series.
MOF-177 is another example of a MOF material containing the tetrahedral Zn4O cluster but with a more complex and extended tricarboxylate linker. The carboxylate molecule in this case is the large BTB molecule (BTB = benzene-1,3,5-tribenzoic acid). This large yet rigid tri-carboxylate unit connects to the cluster in the same manner as in the MOF-5 structure, but as there are three carboxylate units and a triangular geometry, this produces a more spherical porous cage structure rather than the cubic pore geometry in MOF-5. MOF-177 has been shown to have one of the largest surface areas of known materials to date. The literature states a Langmuir surface area value of 4500 m2 g−1, with N2 adsorption giving a type I isotherm with adsorption of 1350 mg g−1 between 0.4 and 1 P/P0. These values show that MOF-177 is a highly porous open framework MOF material with a 3-dimensionally connected array of porous cages.
HKUST-1
HKUST-1 is another early example of a divalent carboxylate MOF. Reported around the same time as MOF-5, HKUST-1 is a copper ‘paddlewheel’ based MOF where two copper ions form a dimeric unit with four bridging carboxylates creating a square planar geometry around two adjacent copper sites. The two copper ions in the paddlewheel coordinate to the oxygen of two water molecules to create a double square pyramidal geometry for the two metal sites in the hydrated form of the structure. Activation of the material prior to adsorption studies results in the removal of the terminal water molecules resulting in a coordinatively unsaturated metal site. The rigid, porous structure of the HKUST-1 framework combined with the accessibility of the activated metal sites upon dehydration has led to a lot of interest in adsorption, separation and catalysis applications.
CPO-27 / MOF-74
While surface area and pore volume are important to the adsorption properties of MOF materials, another consideration is the availability of coordinatively unsaturated metal sites. The divalent metal carboxylate CPO-27(M) (where M = ) was reported by Dietzel and co-workers, while at a similar time work by Rosi et al. produced the isostructural zinc analogue, referred to as MOF-74. The divalent metal and 2,5-dihydroxyterephthalic acid linker form a honeycomb-like array of hexagonal channels. The inorganic component is a helical chain of edge-sharing NiO6 octahedra where each metal is bound to two oxygens from hydroxyl groups on the ligand, three oxygens from carboxylate groups and one water molecule. MOF-74 was prepared using DMF as the solvent and, as such, has terminal DMF molecules bound to the free metal sites on the chain. The helical chains of CPO-27(Ni) are separated by the planar acid linker. This organic molecule acts as a rigid pillar between chains with each linker bound to three different metal sites from each chain. Removal of the solvent molecules from the terminal metal sites creates five-coordinate metal cations with little freedom to rotate or distort the chain to change the coordination environment. This unfavourable coordination environment means that the activated metal site has a high enthalpy of adsorption and is readily filled by adsorbed guest species. The chain is topologically identical to that of the nickel bisphosphonate STA-12(Ni) but differs upon dehydration, where the additional flexibility of the bisphosphonate linker allows the chain to twist and distort, reducing the accessibility of the coordinatively unsaturated metal site. The availability of this metal site in the CPO-27 framework has been explored for a number of different adsorption applications involving gases such as CO2, H2 and NO.
Trivalent Carboxylates
Some of the most widely studied of all metal organic frameworks are trivalent metal carboxylate materials. Extensive work in this area has provided an understanding of the crystal chemistry of a wide variety of the trivalent first row transition metals.
MIL-47 & MIL-53
The vanadium terephthalate MIL-47, first reported by Barthelet and co-workers, is an example of a metal organic framework consisting of infinite corner-sharing metal chains of VO6 octahedra bridged by the linear terephthalate organic linker. This connectivity results in the formation of large diamond-shaped channels. The channels in as-prepared vanadium MIL-47 contain some residual guest terephthalic acid, and the material is reported as having the formula VIII(OH)(CO2-C6H4-CO2), with a hydroxide μ2-OH ion forming the infinite chains. Activation of the solid by heating in a tube furnace at 573 K for 24 hours results in deprotonation of the hydroxide on the chains to form a μ2-oxo and oxidation of the vanadium, to give VIVO(CO2-C6H4-CO2) as the formula for the activated material. The activated MIL-47(V) is anhydrous at room temperature under ambient pressure: the channels, lined with phenyl rings and lacking accessible metal sites or favourable hydrogen bonding positions, are hydrophobic.
MIL-53 was first reported with chromium (Cr3+) and shortly after with aluminium (Al3+) with terephthalic acid as the linker. MIL-53 is isostructural with MIL-47, the main difference is that MIL-53 only contains the trivalent metal and a μ2-hydroxide bridge whereas the activated MIL-47 is the tetravalent V4+ with μ2-oxobridging. As the activated form of the MIL-53 contains the metal hydroxide chains, the channels are hydrophilic with the hydroxide protons available for hydrogen bonding. When activated MIL-53(Cr or Al) is exposed to moisture, X-ray diffraction shows the material adopts a ‘closed’ structure, due to the strong hydrogen bonding interaction between the hydroxyl groups of the inorganic chains and the adsorbed water molecules. As a result of this hydration behavior, the unit cell volume reduces by ~30%, fully reversible upon subsequent dehydration. Such large structural changes, in response to adsorption of gas or solvent molecules, is commonly referred to as ‘breathing’.
Since the initial reports of the chromium and aluminium MIL-53, extensive work has been undertaken in this area, and the range of MIL-53 materials now extends to: Cr3+,Al3+, Fe3+, Ga3+, In3+, and Sc3+.
MIL-68
Work by Barthelet and co-workers also identified MIL-68, another trivalent metal terephthalate. The framework is a polymorph of the MIL-47/MIL-53 structure with the chemical formula MIIIOH(CO2-C6H4-CO2), initially reported for V3+ and later with In3+ and Ga3+. In this case the metal hydroxide chains connect to form two types of unidirectional channels, triangular and hexagonal in shape, creating a 'kagome lattice'-like network of pore channels. The cross-sectional diameters of the triangular and hexagonal channels are 6 Å and 17 Å respectively.
N,N’-Dimethylformamide (DMF) solvent molecules were observed in the smaller triangular channels of as-prepared MIL-68(In) reported by Volkringer et al., disordered over two positions, with hydrogen bonding between the oxygen of the DMF and the hydroxyl group of the inorganic chain. The solvent was removed by calcination at 200 °C overnight in a furnace. The activated samples were stored under inert atmosphere to prevent rehydration, which would lead to hydrolysis and ultimately decomposition of the structure. Adsorption studies on MIL-68 give BET surface areas of 1117(24) m2 g−1, 746(31) m2 g−1 and 603(22) m2 g−1 for the gallium, indium and vanadium forms respectively. A number of activation procedures were attempted, and NMR analysis was used to verify complete removal of guest molecules to obtain the surface area results. The BET values suggest that the indium and vanadium analogues were not fully activated prior to adsorption. Notably, a recent computational study of the theoretical surface area gave a value of 3333 m2 g−1 for MIL-68(V), suggesting that there may still be activation issues with all the MIL-68 derivatives, rendering some of the porosity inaccessible.
MIL-88
Further study on the metal carboxylate systems of trivalent iron and chromium yielded a series of materials referred to as MIL-88(A-D). First reported as an iron fumarate, and based on the trimeric building unit obtained on the crystallization of iron (and chromium) acetate, MIL-88 is a family of isoreticular materials prepared with dicarboxylate linkers.
Reactions using the metal acetate trimeric building unit are thought to proceed via a ligand exchange mechanism where the acetate of the starting material is replaced with a longer linear dicarboxylate to create the three dimensional framework. MIL-88 is an isoreticular series with increasing length of dicarboxylate forming the same network topology, prepared using fumaric acid (MIL-88A), terephthalic acid (-88B), naphthalene-2,6-dicarboxylic acid (-88C) and 4,4’-biphenyldicarboxylic acid (-88D). The framework consists of both one-dimensional channels and trigonal bipyramidal cages. Solvent exchange experiments on the terephthalate form, MIL-88B, show that large organic molecules such as lutidine and butanol are able to enter the framework and induce an increase in cell volume over the dried material. The three metals in the trimeric cluster share a μ3-O and are bridged to the adjacent metals with four carboxylate groups, leaving a coordinatively unsaturated metal site pointing into the cages within the framework. To maintain charge balance in the material, there must be one negatively charged species on the cluster, either a hydroxide or fluoride (depending on the synthesis conditions) occupying one of the unsaturated metal sites, and the other two could be water or exchanged solvent species.
MIL-88(Cr and Fe) also exhibits a breathing behaviour in response to solvent exchange and gas adsorption. The mechanism for the breathing is similar to that observed in the MIL-53, where there is a hinge-like motion around the axis of the two oxygen atoms of the carboxylate. The observed expansion and contraction of the unit cell volume is, however, much greater than that observed in MIL-53. As the trimeric units are connected in three dimensions, rather than the columnar rows of terephthalates connecting the chains in MIL-53, the change occurs over all three axes, resulting in a cell volume expansion for the terephthalate form (MIL-88B) of 125% from the fully dried form (1500 Å3) to the most open form observed upon methanol solvation (3375 Å3).
References | Carboxylate-based metal–organic frameworks | [
"Chemistry",
"Materials_science"
] | 2,986 | [
"Porous polymers",
"Metal-organic frameworks"
] |
71,322,459 | https://en.wikipedia.org/wiki/Kaniadakis%20distribution | In statistics, a Kaniadakis distribution (also known as κ-distribution) is a statistical distribution that emerges from the Kaniadakis statistics. There are several families of Kaniadakis distributions related to different constraints used in the maximization of the Kaniadakis entropy, such as the κ-Exponential distribution, κ-Gaussian distribution, Kaniadakis κ-Gamma distribution and κ-Weibull distribution. The κ-distributions have been applied for modeling a vast phenomenology of experimental statistical distributions in natural or artificial complex systems, such as, in epidemiology, quantum statistics, in astrophysics and cosmology, in geophysics, in economy, in machine learning.
The κ-distributions are written as functions of the Kaniadakis κ-exponential function expκ(x) = (√(1 + κ²x²) + κx)^(1/κ), a κ-deformed exponential whose form enables the power-law description of complex systems following the consistent κ-generalized statistical theory.
The κ-distribution reduces to the common Boltzmann distribution at low energies, while it has a power-law tail at high energies, a feature of great interest to many researchers.
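A minimal numerical sketch (illustrative code, not from the source) of the κ-exponential defined above, showing the recovery of the ordinary exponential for small κ and the power-law behaviour (2κ|x|)^(−1/κ) for large negative arguments:

```python
import numpy as np

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential: (sqrt(1 + k^2 x^2) + k x)^(1/k); ordinary exp for k -> 0."""
    if kappa == 0.0:
        return np.exp(x)
    return (np.sqrt(1.0 + (kappa * x) ** 2) + kappa * x) ** (1.0 / kappa)

x = 2.0
print(exp_kappa(x, 1e-8), np.exp(x))                          # nearly identical: classical limit
print(exp_kappa(-50.0, 0.5), (2 * 0.5 * 50.0) ** (-1 / 0.5))  # power-law tail ~ (2*k*|x|)^(-1/k)
```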
List of κ-statistical distributions
Supported on the whole real line
The Kaniadakis Gaussian distribution, also called the κ-Gaussian distribution. The normal distribution is a particular case when κ = 0.
The Kaniadakis double exponential distribution, also known as the Kaniadakis κ-double exponential distribution or κ-Laplace distribution. The Laplace distribution is a particular case when κ = 0.
Supported on semi-infinite intervals, usually [0,∞)
The Kaniadakis Exponential distribution, also called the κ-Exponential distribution. The exponential distribution is a particular case when κ = 0.
The Kaniadakis Gamma distribution, also called the κ-Gamma distribution, is a four-parameter deformation of the generalized Gamma distribution.
The κ-Gamma distribution becomes a ...
κ-Exponential distribution of Type I when .
κ-Erlang distribution when and positive integer.
κ-Half-Normal distribution, when and .
Generalized Gamma distribution, when ;
In the limit κ → 0, the κ-Gamma distribution becomes a ...
Erlang distribution, when and positive integer;
Chi-Squared distribution, when and half integer;
Nakagami distribution, when and ;
Rayleigh distribution, when and ;
Chi distribution, when and half integer;
Maxwell distribution, when and ;
Half-Normal distribution, when and ;
Weibull distribution, when and ;
Stretched Exponential distribution, when and ;
Common Kaniadakis distributions
κ-Exponential distribution
κ-Gaussian distribution
κ-Gamma distribution
κ-Weibull distribution
κ-Logistic distribution
κ-Erlang distribution
κ-Distribution Type IV
The Kaniadakis distribution of Type IV (or κ-Distribution Type IV) is a three-parameter family of continuous statistical distributions.
The κ-Distribution Type IV distribution has the following probability density function:
valid for , where is the entropic index associated with the Kaniadakis entropy, is the scale parameter, and is the shape parameter.
The cumulative distribution function of κ-Distribution Type IV assumes the form:
The κ-Distribution Type IV does not admit a classical version, since its probability density function and cumulative distribution function reduce to zero in the classical limit κ → 0.
Its moment of order is given by
The moment of order of the κ-Distribution Type IV is finite for .
See also
Giorgio Kaniadakis
Kaniadakis statistics
Kaniadakis κ-Exponential distribution
Kaniadakis κ-Gaussian distribution
Kaniadakis κ-Gamma distribution
Kaniadakis κ-Weibull distribution
Kaniadakis κ-Logistic distribution
Kaniadakis κ-Erlang distribution
References
External links
Giorgio Kaniadakis Google Scholar page
Kaniadakis Statistics on arXiv.org
Probability distributions | Kaniadakis distribution | [
"Physics",
"Mathematics"
] | 776 | [
"Functions and mappings",
"Probability distributions",
"Mathematical objects",
"Mathematical relations",
"Statistical mechanics"
] |
77,268,639 | https://en.wikipedia.org/wiki/Termitophile | Termitophiles are macro-organisms adapted to live in association with termites or their nests. They include vertebrates, invertebrates and fungi and can either be obligate termitophiles (those that cannot live without the termites) or non-obligate termitophiles (those that can live independently and make use of the termite nests facultatively or opportunistically). Termitophiles may spend a just a part or the whole of their lifecycle inside a termite nest. The term termitariophily has been suggested as a term to describe the situation where a foreign organism merely uses the termite nest.
Termites live in colonies and construct nests whose environments are controlled. The temperature, humidity, and other conditions inside the nests may be more favourable than the outdoor environment for the termitophiles while potentially also making use of the food resources within the nest, including the fungi grown by the colony or the eggs or larvae being reared.
Termitophilous insects avoid the defenses of the termite colony through one or more of a number of adaptations including having a rounded and smooth body, having bristles (often yellow) on their body surface, masking their odor to avoid detection, exuding chemicals from their body that the termites find pleasing, or by appearing like inanimate objects or mimicking termites.
Insects
A number of species of staphylinid beetles are known to be termitophiles. Cretotrichopsenius burmiticus has been described from 99-million-year-old Burmese amber and shows termitophilous adaptations. Some, like Trichopsenius frosti and Xenistusa hexagonalis, are known to follow the trail pheromones of their termite host Reticulitermes virginicus. Trichopsenius frosti also has a cuticular hydrocarbon profile closely matching that of its host. Staphylinid termitophiles, mostly in the subfamily Aleocharinae, curl their abdomen over their body. The abdomen may also show the enlargement known as physogastry, and in a few species there are protruding appendages that mimic the body structure of a termite. The Australian species Austrospirachtha mimetes and Austrospirachtha carrijoi have abdomens resembling termites. Similar adaptations are seen in the South American Thyreoxenus alakazam and the African Coatonachthodes ovambolandicus.
A subfamily of scarab beetles, the Termitotroginae, are small, blind, and with reduced antennae. The genus Termitotrox (includes Aphodiocopris) is known from the fungus combs of termites in India and Africa. They are thought to be obligate termitophiles.
Some flies in the family Phoridae are termitophilous and grow as larvae within the termite nests. Some species have larvae that feed on the fungus comb while others are termite endoparasites or predators.
Fungi
Termite nest specific fungi include the Basidiobolus, Antennopsis, and some species of Xylaria. Several species of Termitomyces are grown intentionally as food by termites within their comb.
See also
Myrmecophiles
Symphiles
Inquiline
References
Symbiosis
Termites | Termitophile | [
"Biology"
] | 686 | [
"Biological interactions",
"Behavior",
"Symbiosis"
] |
77,268,780 | https://en.wikipedia.org/wiki/Lofting%20coordinates | Lofting coordinates are used for aircraft body measurements. The system derives from the one that was used in the shipbuilding lofting process, with longitudinal axis labeled as "stations" (usually fuselage stations, frame stations, FS), transverse axis as "buttocks lines" (or butt lines, BL), and vertical axis as "waterlines" (WL). The lofting coordinate frame is similar, but not the same as aircraft principal axes used to describe the aircraft flight. For the US-manufactured aircraft the ticks on the axes are labeled in inches, (for example, WL 100 is 100 inches above the base waterline).
Fuselage station
Fuselage stations are traditionally nonnegative, thus the origin is located at the nose of the plane or, sometimes, ahead of it. When compared to the coordinates used for aeromechanics, the fuselage stations are measured in the opposite direction than the ticks on the x-axis (and might not be aligned at all, if the wind-aligned coordinate system is used to describe the flight). Some manufacturers use the designation "body stations", with the corresponding abbreviation BS.
Waterline
Per the US Air Force Airframe Maintenance and Repair Manual (1960), a horizontal waterline extends from the nose cone of the aircraft to the exhaust cone. The base line of the aircraft is designated as waterline 0 (zero). The location of this base line varies on different types of aircraft. However, the planes of all waterlines above and below the zero waterline are parallel. The waterline number (WL or W.L.) in the US is expressed in inches; values increase upwards. Two typical alignments for the base line are the tip of the nose (negative WL are possible) or the "nominal ground plane" (measurements will be nonnegative).
Butt line
Butt line ticks increase to the right of the pilot with the origin at the centerline. When compared to the (right-handed) aeromechanics coordinate systems, the direction of the butt line is opposite to the y-axis.
Other
Many other reference points are used, especially on a large aircraft:
Aileron station (AS), distance from the inboard edge of an aileron;
Flap station (KS), distance from the edge of the flap;
Nacelle station (NS);
Elevator station (ES);
Vertical stabilizer station (VSS).
References
Sources
Aerospace engineering | Lofting coordinates | [
"Engineering"
] | 503 | [
"Aerospace engineering"
] |
77,269,960 | https://en.wikipedia.org/wiki/Manuel%20Bibes | Manuel Bibes, born on July 15, 1976, in Sainte-Foy-la-Grande, is a French physicist specializing in functional oxides, multiferroic materials, and spintronics. He is currently a Research Director at the National Center for Scientific Research (CNRS).
Biography
After earning an engineering degree from the Institut National des Sciences Appliquées de Toulouse in 1998, Bibes completed his Ph.D. under the supervision of Josep Fontcuberta at the ICMAB, at the Autonomous University of Barcelona in 2001, focusing on thin manganite films and their application in spintronics. His PhD was followed by a postdoctoral fellowship at the Joint Physics Unit CNRS/Thales (currently known as Laboratory Albert Fert) under the guidance of Prof. Albert Fert. Bibes joined the CNRS in 2003 at the Institute of Fundamental Electronics, now known as the Center for Nanoscience and Nanotechnology (C2N). Afterwards he completed research stays at MIT and the University of Cambridge as a visiting researcher and joined the Laboratory Albert Fert in 2007. All his research publications are listed in Google Scholar.
Throughout his career, Bibes has been a leader in research on multiferroic materials (which simultaneously exhibit magnetic and ferroelectric properties) and their utilisation in the electrical control of magnetism. In 2009, his team discovered the phenomenon of giant tunnel electroresistance in ferroelectric tunnel junctions (results published in Nature), demonstrating their potential as artificial synapses. In 2016, in collaboration with the Spintec laboratory, he demonstrated that non-magnetic oxide interfaces can be used as ultrasensitive spin detectors. These findings led to a collaboration with Intel for the development of a new type of energy-efficient transistor (MESO) aimed at replacing the current transistors based on CMOS technology. Since 2018, Manuel Bibes has been recognized as a Highly Cited Researcher by Clarivate Analytics. In June 2022, along with Agnès Barthélémy, Ramamoorthy Ramesh and Nicola Spaldin, he received the Europhysics Prize from the European Physical Society for their significant contributions to the fundamental and applied physics of multiferroic and magnetoelectric materials. In October 2024, he co-founded the start-up company Nellow, together with Laurent Vila and Jean-Philippe Attané from Spintec. Nellow aims to develop and commercialize chips with an ultralow power consumption for logic and artificial intelligence.
Awards and honors
Europhysics Prize, European Physical Society (2022)
ERC Advanced Grant, European Research Council (2019)
Friedrich Wilhelm Bessel Research Award, Alexander von Humboldt Foundation (2018)
Descartes-Huygens Prize, French Academy of Sciences and Royal Netherlands Academy of Arts and Sciences (2017)
Fellow of American Physical Society, APS (2015)
ERC Consolidator Grant, European Research Council, ERC (2014)
EU-40 Materials Prize, European Materials Research Society, EMRS (2013)
Extraordinary Doctorate Award, Autonomous University of Barcelona (2001)
Selected lectures and talks
Electric-field control of magnetism in oxide heterostructures (Seminar at Collège de France, May 30, 2017)
A journey through the oxide world (a talk at French Academy of Sciences, February 20, 2018)
References
External links
Official Website
Materials science
Condensed matter physicists
Oxides
Thin film deposition
21st-century French physicists
1976 births
Living people | Manuel Bibes | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 708 | [
"Applied and interdisciplinary physics",
"Condensed matter physicists",
"Thin film deposition",
"Coatings",
"Thin films",
"Materials science",
"Oxides",
"Salts",
"Condensed matter physics",
"nan",
"Planes (geometry)",
"Solid state engineering"
] |
77,270,675 | https://en.wikipedia.org/wiki/D%E1%BA%A7u%20Ti%E1%BA%BFng%20Lake | Dầu Tiếng Lake is an artificial lake in the three provinces of Tây Ninh, Bình Dương, and Bình Phước in the Southeast region, Vietnam. The lake was formed by damming the upper reaches of the Saigon River, making it the largest irrigation reservoir in Vietnam and Southeast Asia. Construction of the lake began in 1981 after surveys were conducted starting in 1976. The project was completed with over 100 million USD in funding, which the Vietnamese government borrowed from the World Bank, making it the first project in Vietnam to be built using US dollars since the Reunification of Vietnam on April 30, 1975.
Before construction, the lake's development sparked considerable debate among the leaders of Tây Ninh province regarding the choice of the lake's name and the project's feasibility. Nevertheless, the construction proceeded. Since 2017, Dầu Tiếng Lake has been classified as a project of national security importance due to concerns that a dam breach would affect the Ho Chi Minh City area. As of 2022, Dầu Tiếng Lake is managed by the Southern Irrigation Exploitation One Member Limited Liability Company under the management of the Ministry of Agriculture and Rural Development.
In 2005, along with Black Virgin Mountain, Dầu Tiếng Lake was selected to appear in the central part of the logo of Tây Ninh province.
Overview
Dầu Tiếng Lake spans across four districts: Dầu Tiếng (Bình Dương Province), Dương Minh Châu, Tân Châu (Tây Ninh Province), and Hớn Quản (Bình Phước Province) with a surface area of up to 270 km² and a total capacity of 1.58 billion m³ of water. The lake is located 20 km northeast of Tây Ninh city and 70 km north of Ho Chi Minh City. Formed by damming the upper reaches of the Saigon River, its main purpose is to regulate water flow into the Saigon River and provide irrigation for over 100,000 hectares of agricultural land in Tây Ninh and neighboring provinces such as Bình Dương, Ho Chi Minh City, and Long An. The Dầu Tiếng irrigation system consists of three main canals: East Canal, West Canal, and Tân Hưng Canal, which distribute water through over 1,550 km of branch canals in various localities. Additionally, the lake is used by local residents for aquaculture.
The project is regulating water to serve 93,000 hectares of land in Tây Ninh, including the districts of Tân Biên, Châu Thành, Bến Cầu, Dương Minh Châu, Gò Dầu, the town of Hòa Thành, Trảng Bàng, and the city of Tây Ninh; in Củ Chi district, Ho Chi Minh City; and in Đức Hòa district, Long An province.
History
Survey
After the unification of Vietnam in 1975, the government of the Socialist Republic of Vietnam decided to establish the Southern Irrigation Design Survey Team. Initial maps of the land for planning were adapted from the 1:100,000 maps of the Republic of Vietnam government. In 1976, Phạm Hùng, the Deputy Prime Minister of Vietnam, launched the movement "The entire population and army to undertake irrigation works" with the philosophy that "Irrigation is the foremost measure to rejuvenate the country." During this period, many officials from the North were sent to the South to survey and construct the lake, including Nguyễn Xuân Hùng from the Survey Design Institute (Ministry of Water Resources), who was the chief designer of the project. During the survey, seven officials died after stepping on mines in what is now the lake area. Before the official construction, some rubber plantation owners had intended to build a lake in the area for recreational purposes.
Before the official construction, many auxiliary dams were built by the Vietnamese military starting in 1977.
Construction
The Dầu Tiếng irrigation project was approved by the Prime Minister in 1979 and commenced construction on April 29, 1981, with a total investment of 110 million USD at Thuận Bình hamlet, Truông Mít commune, Dương Minh Châu district, Tây Ninh province, in the presence of Deputy Prime Minister Huỳnh Tấn Phát. At that time, the project faced opposition from Tây Ninh province leaders due to land concerns, as two-thirds of the lake's area was in Tây Ninh, but it was named after Dầu Tiếng, a location in Bình Dương. The opposition was so strong that Nguyễn Văn Tốt, the then-Secretary of Tây Ninh province, instructed all provincial agencies not to receive the Ministry of Water Resources or discuss the Dầu Tiếng project, including the Chairman of the Provincial People's Committee. According to Nông nghiệp newspaper, Minister of Water Resources Nguyễn Thanh Bình was even accused by Tốt to the Central Secretariat of fearing sabotage by the CIA. To placate Tây Ninh, Prime Minister Phạm Hùng named the lake Dầu Tiếng – Tây Ninh. However, Đặng Văn Thượng, the Deputy Secretary of the Provincial Party Committee and Chairman of the Provincial People's Committee of Tây Ninh, supported the project. At its peak, the construction involved up to 36,300 workers, with a minimum of 7,200 workers. Despite the initial opposition from the province, the project saw "significant contributions from the youth and people of Tây Ninh" after Thượng and his officials mobilized the public to dig canals and build the lake.
The capital for constructing the reservoir is said to have come from a preferential loan of over US$100 million from the World Bank, which was chaired at the time by Robert Strange McNamara. McNamara was a former United States Secretary of Defense and considered the "chief architect" of the Vietnam War. This was also the first loan successfully secured by the State of the Socialist Republic of Vietnam, making this project the first to be constructed with US dollars after 1975. The construction workforce at that time consisted of young people, with the number reaching tens of thousands at its peak. According to statistics from the Tay Ninh provincial youth union, by the time the project was put into operation, Tay Ninh and neighboring provinces and cities had mobilized over 450,000 youth members, completed nearly 15 million workdays, excavated more than 11.6 million cubic meters of earth, and constructed nearly 54,000 cubic meters of concrete and masonry, building thousands of kilometers of canals and thousands of structures along the canals. The reservoir was also built in the context of Vietnam's border being attacked by the Khmer Rouge of Cambodia.
To obtain the vast area in 1982, thousands of households in Lộc Ninh commune, Dương Minh Châu district, gave up their land and moved to new residences in the Truông Mít and Bến Củi communes. On July 2, 1984, the reservoir began storing water, and on January 10, 1985, the Dầu Tiếng irrigation system with its two main canals, East and West, officially started operation. In 1985, right at the construction site, Thân Công Khởi, a former operator of a self-propelled scraper who participated in building the main dam and later the West canal (one of the two main arteries of the Dầu Tiếng reservoir), was awarded the title of Hero of Labor by the Chairman of the State Council Trường Chinh. He was the only person among the half a million workers to receive this title. During the 1985–1986 period, the reservoir was substantially completed to deliver water to Củ Chi district, with an annual flow of about 135 million cubic meters of water. From 1996 to 1999, the Tân Hưng canal was constructed to direct water from the Dầu Tiếng reservoir to irrigate the southern communes of the two districts Tân Châu and Tân Biên. In 2012, the Phước Hòa irrigation reservoir and canal system became operational, transferring water from the Bé River to the Sài Gòn River, thereby supplementing the water for the Dầu Tiếng reservoir. Thus, this is an irrigation project that transfers water from one river to another, with one reservoir replenishing another.
Activities
Since June 6, 2017, the Prime Minister and the Ministry of Public Security have deployed officials to Tay Ninh to decide to include the Dau Tieng water reservoir project in the list of important projects related to national security. By June 2019, the first solar power plant was established on the submerged area within the Dau Tieng reservoir. Upon completion, the plant became the largest clean energy project in Southeast Asia. In 2021, the Ministry of Agriculture and Rural Development assigned the reservoir's management company to implement a project to invest in, repair, and enhance the safety of the dam and water reservoir, with a budget of 157 billion dong, completed in 2022 after detecting cracks. The company also proposed an investment of 1,500 billion dong for repair and upgrade from 2021 to 2025. During the second phase of repairs, the Southern Water Irrigation Company cut off water for 90 days to the West main canal with the approval of the Tay Ninh People's Committee, extending from April 5, 2023, and resumed water flow on January 10, 2024, to serve the new agricultural season.
In 2024, Prime Minister Phạm Minh Chính assigned Tây Ninh to leverage the functions provided by the Dau Tieng reservoir. Previously, the province was tasked with developing the comprehensive multi-objective project "Dau Tieng Development Master Plan Phase 2022–2030, Vision 2050," which was later cancelled due to exceeding the province's authority in various laws, sectors, and planning aspects.
Exploitation
Tourism
In 2008, the Tây Ninh Provincial People's Committee called for investment in tourism areas within the lake, including Nhim Island, Sin Islet, Tan Thiet Islet, Tan Hoa Islet, Ba Chiem Islet, Ta Do Islet, Dong Ken Islet, and along the southern shore of the lake. In 2022, the Bình Dương Provincial People's Committee held a meeting to hear the consulting unit's report on the Ecotourism, Resort, and Recreation Project for the Nui Cau – Dầu Tiếng Protective Forest for the period 2021 – 2030, envisioning it as a "miniature Đà Lạt."
In 2016, many residents discovered an impromptu beach area in the Dầu Tiếng Lake area. Shortly thereafter, local authorities requested that residents dismantle and move out of the lake area as it was unregulated, posing potential security and drowning risks. By the end of May of the same year, the Tây Ninh Provincial People's Committee assigned relevant units to invest in infrastructure and transform the area into a natural beach named "Tây Ninh Sea".
In 2022, under the management of the Tây Ninh Provincial Department of Culture, Sports, and Tourism, the Paragliding and Kite Flying Sports Federation was established. The launch event and paragliding performance were held at the Dầu Tiếng Lake area in Dương Minh Châu District, Tây Ninh province, over two days, April 30 and May 1. As part of Tây Ninh province's urban development plan, Dương Minh Châu has been approved to become a tourism service development area surrounding Ba Den Mountain and Dầu Tiếng Lake.
Fisheries
According to a survey conducted by the Livestock and Veterinary Bureau under the Tây Ninh Department of Agriculture and Rural Development, Dầu Tiếng Lake is home to more than 50 fish species, including 10 economically valuable species such as featherback fish, catfish, snakehead fish, and anchovy. However, according to statistics from the Institute of Fisheries II under the Ministry of Agriculture and Rural Development, the lake hosts around 60 fish species and numerous other aquatic species, with 15 species providing economic value such as carp, climbing perch, mystus fish, and catfish. Species belonging to the carp family are reported to account for about 33.33%, catfish family 30%, goby family 23.33%, and various other fish species. Since its construction, the number of fish species in the lake has increased by 14 compared to the upstream Saigon River, with 33 new species appearing and 19 species disappearing. The vanished aquatic species include climbing perch, fire snakehead, gourami, barred mystus, herring, giant gourami, and notably, giant freshwater prawn. The appearance of many other fish species is attributed to local residents farming various economically valuable fish in cages and pens. From 2005 to 2013, 7.8 million fish were released into the lake by local authorities to replenish the aquatic resources.
According to the Tay Ninh newspaper, the province annually allocates a budget of 500–700 million VND to release tens of millions of fish fry into Dau Tieng Lake to replenish aquatic resources. However, local authorities report that many fishermen in the lake have been using prohibited fishing gear such as stacked cages, gillnets, electrofishing, and light-attracting devices, causing severe damage to the lake's ecosystem. On March 25, 2024, the Southern Irrigation Exploitation One Member Limited Liability Company established a "Plan for coordinated inspection of activities within the protection scope of Dau Tieng irrigation works", which includes inspecting fishing gear and cage fish farming around the lake to protect the ecosystem and aquatic resources.
Irrigation
The Dau Tieng – Phuoc Hoa irrigation system, under the management of the Southern Irrigation Exploitation One Member Limited Liability Company, serves to provide water, control floods, repel salinity, and improve the environment for the downstream areas of the Saigon and Vam Co Dong river basins. The reservoir is currently the direct source of water for production for 116,953 hectares in the provinces and cities of Tay Ninh (92,953 ha), Ho Chi Minh City (12,000 ha), and Long An (12,000 ha); it also provides irrigation for 93,954 hectares along the Saigon and Vam Co Dong rivers. In 2017, Dinh La Thang proposed restarting the pipeline project connecting Dau Tieng Lake to the water plant in Ho Chi Minh City by calling for socialized investment. In 2018, the Tay Ninh government commenced construction of a project to bring water from Dau Tieng Lake to the west of the Vam Co Dong River to supply water to communes in Chau Thanh and Ben Cau districts of Tay Ninh province with a total investment of 1,246 billion VND. The project includes a steel pipe crossing the river with a span of 30 meters and a static clearance height of about 6 meters.
During the 2024 heatwave in Vietnam, Long An province requested the Southern Irrigation Exploitation One Member Limited Liability Company to release water from Dau Tieng Lake into the Vam Co Dong River to push back saltwater intrusion after the province declared a level 4 drought and salinity disaster on April 17. The company agreed and committed to releasing 7 million cubic meters of water through the Vam Co Dong River to support Long An. However, the Department of Agriculture and Rural Development of Long An province later requested the reservoir to increase the discharge rate to further assist the province. Previously, the reservoir had also released water to prevent saltwater intrusion in the Saigon River and the Dong Nai river system.
Minerals
In 2010, the People's Committee of Tay Ninh province agreed with Binh Phuoc province in Official Letter No. 1968 to hand over the licensing of mineral exploitation in the 16 km upstream area of the Saigon River, which is co-managed by the two provinces, to the People's Committee of Binh Phuoc province. Upon handing it over to Binh Phuoc, Tay Ninh stated that it would only coordinate management when necessary. Two months after the official letter took effect, Binh Phuoc granted the sole license to Thai Thinh Private Enterprise for sand exploitation for a period of 10 years. However, in 2017, this license was transferred to Phu Tho Production and Trading One Member Limited Liability Company. Subsequently, residents of Tan Hoa, Tan Chau reported to Tay Ninh province about serious erosion on the Tay Ninh side. Upon inspection, Thai Thinh Company had caused 150 meters of erosion into Tan Hoa territory without deploying buoys or mining signs. According to Tay Ninh provincial authorities, due to the vast area of the lake and the absence of residents in many areas, detecting illegal mineral exploitation within the lake is extremely difficult.
Notes
References
Artificial lakes
Saigon River
Hydraulic structures
Lakes of Vietnam
Irrigation projects
1981 establishments in Vietnam | Dầu Tiếng Lake | [
"Engineering"
] | 3,366 | [
"Irrigation projects"
] |
69,735,469 | https://en.wikipedia.org/wiki/Chentsov%27s%20theorem | In information geometry, Chentsov's theorem states that the Fisher information metric is, up to rescaling, the unique Riemannian metric on a statistical manifold that is invariant under sufficient statistics.
The theorem is named after its inventor, Nikolai Chentsov.
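For reference, in local coordinates θ = (θ¹, …, θⁿ) on a parametric family p(x; θ), the Fisher information metric singled out by the theorem takes the standard form:

```latex
% Fisher information metric on a parametric statistical family p(x; \theta).
\[
  g_{ij}(\theta)
  = \operatorname{E}_{\theta}\!\left[
      \frac{\partial \log p(x;\theta)}{\partial \theta^{i}}\,
      \frac{\partial \log p(x;\theta)}{\partial \theta^{j}}
    \right]
  = \int \frac{\partial \log p(x;\theta)}{\partial \theta^{i}}\,
         \frac{\partial \log p(x;\theta)}{\partial \theta^{j}}\,
         p(x;\theta)\, dx .
\]
```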
See also
Fisher information
Sufficient statistic
Information geometry
References
N. N. Čencov (1981), Statistical Decision Rules and Optimal Inference, Translations of mathematical monographs; v. 53, American Mathematical Society, http://www.ams.org/books/mmono/053/
Shun'ichi Amari, Hiroshi Nagaoka (2000) Methods of information geometry, Translations of mathematical monographs; v. 191, American Mathematical Society, http://www.ams.org/books/mmono/191/ (Theorem 2.6)
Differential geometry
Information geometry
Statistical distance | Chentsov's theorem | [
"Physics",
"Mathematics"
] | 178 | [
"Mathematical structures",
"Physical quantities",
"Statistical distance",
"Distance",
"Category theory",
"Information geometry"
] |
69,743,250 | https://en.wikipedia.org/wiki/Courant%E2%80%93Snyder%20parameters | In accelerator physics, the Courant–Snyder parameters (frequently referred to as Twiss parameters or CS parameters) are a set of quantities used to describe the distribution of positions and velocities of the particles in a beam. When the positions along a single dimension and velocities (or momenta) along that dimension of every particle in a beam are plotted on a phase space diagram, an ellipse enclosing the particles can be given by the equation:
γx² + 2αxx′ + βx′² = ε,
where x is the position axis and x′ is the velocity axis. In this formulation, α, β, and γ are the Courant–Snyder parameters for the beam along the given axis, and ε is the emittance. Three sets of parameters can be calculated for a beam, one for each orthogonal direction, x, y, and z.
History
The use of these parameters to describe the phase space properties of particle beams was popularized in the accelerator physics community by Ernest Courant and Hartland Snyder in their 1953 paper, "Theory of the Alternating-Gradient Synchrotron". They are also widely referred to in accelerator physics literature as "Twiss parameters" after British astronomer Richard Q. Twiss, although it is unclear how his name became associated with the formulation.
Phase space area description
When simulating the motion of particles through an accelerator or beam transport line, it is often desirable to describe the overall properties of an ensemble of particles, rather than track the motion of each particle individually. By Liouville's Theorem it can be shown that the density occupied on a position and momentum phase space plot is constant when the beam is only affected by conservative forces. The area occupied by the beam on this plot is known as the beam emittance, although there are a number of competing definitions for the exact mathematical definition of this property.
Coordinates
In accelerator physics, coordinate positions are usually defined with respect to an idealized reference particle, which follows the ideal design trajectory for the accelerator. The direction aligned with this trajectory is designated "z", (sometimes "s") and is also referred to as the longitudinal coordinate. Two transverse coordinate axes, x and y, are defined perpendicular to the z axis and to each other.
In addition to describing the positions of each particle relative to the reference particle along the x, y, and z axes, it is also necessary to consider the rate of change of each of these values. This is typically given as a rate of change with respect to the longitudinal coordinate (x' = dx/dz) rather than with respect to time. In most cases, x' and y' are both much less than 1, as particles will be moving along the beam path much faster than transverse to it. Given this assumption, it is possible to use the small angle approximation to express x' and y' as angles rather than simple ratios. As such, x' and y' are most commonly expressed in milliradians.
Ellipse equation
When an ellipse is drawn around the particle distribution in phase space, the equation for the ellipse is given as:
γx² + 2αxx′ + βx′² = Area/π
"Area" here is an area in phase space, and has units of length * angle. Some sources define the area as the beam emittance , while others use . It is also possible to define the area as a specific fraction of the particles in a beam with a 2 dimensional gaussian distribution.
The other three coefficients, α, β, and γ, are the CS parameters. As this ellipse is an instantaneous plot of the positions and velocities of the particles at one point in the accelerator, these values will vary with time. Since there are only two independent variables, x and x′, and the emittance is constant, only two of the CS parameters are independent. The relationship between the three parameters is given by:
βγ − α² = 1
Derivation for periodic systems
In addition to treating the CS parameters as an empirical description of a collection of particles in phase space, it is possible to derive them based on the equations of motion of particles in electromagnetic fields.
Equation of motion
In a strong focusing accelerator, transverse focusing is primarily provided by quadrupole magnets. The linear equation of motion for transverse motion parallel to an axis of the magnet is:
x″(z) + k(z)·x(z) = 0,
where k(z) is the focusing coefficient, which has units of length−2 and is only nonzero in a quadrupole field. (Note that x is used throughout this explanation, but y could be equivalently used with a change of sign for k. The longitudinal coordinate, z, requires a somewhat different derivation.)
Assuming k(z) is periodic, for example as in a circular accelerator, this is a differential equation with the same form as the Hill differential equation. The solution to this equation is a pseudo-harmonic oscillator:
x(z) = A(z)·cos(φ(z) + φ₀)
where A(z) is the amplitude of oscillation, φ(z) is the "betatron phase", which depends on the value of k(z), and φ₀ is the initial phase. The amplitude is decomposed into a position-dependent part √β(z) and an initial value √ε, such that:
x(z) = √(ε·β(z))·cos(φ(z) + φ₀)
(It is important to remember that ′ continues to indicate a derivative with respect to position along the direction of travel, not time.)
Particle distributions
Given these equations of motion, taking the average values for particles in a beam yields:
These can be simplified with the following definitions:
giving:
These are the CS parameters and emittance in another form. Combined with the relationship between the parameters, this also leads to a definition of emittance for an arbitrary (not necessarily Gaussian) particle distribution:
ε = √(⟨x²⟩·⟨x′²⟩ − ⟨x·x′⟩²)
Properties
The advantage of describing a particle distribution parametrically using the CS parameters is that the evolution of the overall distribution can be calculated using matrix optics more easily than tracking each individual particle and then combining the locations at multiple points along the accelerator path. For example, if a particle distribution with parameters β, α, and γ passes through an empty space of length L, the values β₁, α₁, and γ₁ at the end of that space are given by:

β₁ = β − 2Lα + L²γ
α₁ = α − Lγ
γ₁ = γ
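A minimal sketch of applying these drift-space relations (the starting values are invented for illustration):

```python
def drift_twiss(beta, alpha, gamma, L):
    """Propagate Courant-Snyder parameters through a field-free drift of length L."""
    beta_new = beta - 2.0 * L * alpha + L**2 * gamma
    alpha_new = alpha - L * gamma
    gamma_new = gamma              # gamma is unchanged in a drift
    return beta_new, alpha_new, gamma_new

beta, alpha = 10.0, -1.5
gamma = (1 + alpha**2) / beta      # from beta*gamma - alpha^2 = 1
b1, a1, g1 = drift_twiss(beta, alpha, gamma, L=2.0)
print(b1, a1, g1, "invariant:", b1 * g1 - a1**2)   # invariant stays equal to 1
```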
See also
Beam emittance
Beta function (accelerator physics)
Ray transfer matrix analysis
References
Accelerator physics | Courant–Snyder parameters | [
"Physics"
] | 1,221 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
69,744,483 | https://en.wikipedia.org/wiki/CKLF-like%20MARVEL%20transmembrane%20domain-containing%20family | The CKLF-like MARVEL transmembrane domain-containing family (CMTM), previously termed the chemokine-like factor superfamily (CKLFSF), consists of 9 proteins, some of which have various isoforms due to alternative splicing of their respective genes. These proteins along with their isoforms are:
Chemokine-like factor (CKLF), the founding member of this family, has 4 known isoforms, CKLF1 to CKLF4.
CKLF like MARVEL transmembrane domain-containing 1 (CMTM1) has 23 known isoforms, CMTM1-v1 to CMTM1-v23.
CKLF like MARVEL transmembrane domain-containing 2 (CMTM2) has no known isoforms.
CKLF like MARVEL transmembrane domain-containing 3 (CMTM3) has no known isoforms.
CKLF like MARVEL transmembrane domain-containing 4 (CMTM4) has 3 known isoforms, CMTM4-v1 to CMTM4-v3.
CKLF-like MARVEL transmembrane domain-containing 5 (CMTM5) has 6 known isoforms, CMTM5-v1 to CMTM5-v6.
CKLF like MARVEL transmembrane domain containing 6 (CMTM6) has no known isoforms.
CKLF like MARVEL transmembrane domain containing 7 (CMTM7) has 2 isoforms, CMTM7-v1 and CMTM7-v2.
CKLF like MARVEL transmembrane domain-containing 8 (CMTM8) has two isoforms, CMTM8 and CMTM8-v2 (Little is known about the CMTM8-v2 isoform and the CMTM8 isoform is referred to as CMTM8 rather than CMTM8-v1.).
All of these proteins have domains (i.e. regions) similar to analogous domains in the chemokine proteins; tetraspanin proteins (also termed transmembrane-4 superfamily proteins); myelin and lymphocyte protein (also termed MAL protein); proteins that direct membrane vesicle trafficking; and other proteins that are embedded in cell membranes. The genes encoding (i.e. directing the production of) these proteins, CKLF, CMTM1, CMTM2, CMTM3, CMTM4, CMTM5, CMTM6, CMTM7, and CMTM8, respectively, also share similar regions that encode the domains just cited for their proteins. (The 8 CMTM genes were formerly termed CKLFSF1, CKLFSF2, CKLFSF3, CKLFSF4, CKLFSF5, CKLFSF6, CKLFSF7, and CKLFSF8.) The CKLF, CMTM1, CMTM2, CMTM3, and CMTM4 genes cluster together in band 22 on the long (i.e. "q") arm of chromosome 16; the CMTM6, CMTM7, and CMTM8 genes form a second cluster in band 22 on the short (i.e. "p") arm of chromosome 3; and the CMTM5 gene, located in band 11.2 on the q arm of chromosome 14, is not clustered with the other CMTM genes. These structural similarities and clusterings reflect the close relationships of these proteins and genes. Studies suggest that the members of this family may be involved in the development of various cancers, autoimmune diseases, cardiovascular diseases, the male reproductive system, and angiogenesis (i.e. development of new blood vessels from pre-existing blood vessels). In most of these cases, however, further studies are needed to determine if these CMTM proteins and/or their corresponding genes and mRNAs will be promising targets to help in the diagnosis, prognosis, and/or treatment of these disorders.
References
Human proteins
DNA replication
Gene expression
Transcription coregulators | CKLF-like MARVEL transmembrane domain-containing family | [
"Chemistry",
"Biology"
] | 828 | [
"Genetics techniques",
"Gene expression",
"DNA replication",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
69,746,055 | https://en.wikipedia.org/wiki/Precociality%20and%20altriciality | Precocial species in birds and mammals are those in which the young are relatively mature and mobile from the moment of birth or hatching. They are normally nidifugous, meaning that they leave the nest shortly after birth or hatching. Altricial species are those in which the young are underdeveloped at the time of birth, but with the aid of their parents mature after birth. These categories form a continuum, without distinct gaps between them.
In fish, this often refers to the presence or absence of a stomach: precocial larvae have one at the onset of first feeding whereas altricial fish do not. Depending on the species, the larvae may develop a functional stomach during metamorphosis (gastric) or remain stomachless (agastric).
Precociality
Precocial young have open eyes, hair or down, large brains, and are immediately mobile and somewhat able to flee from or defend themselves against predators. For example, with ground-nesting birds such as ducks or turkeys, the young are ready to leave the nest in one or two days. Among mammals, most ungulates are precocial, being able to walk almost immediately after birth.
Etymology
The word "precocial" is derived from the Latin root praecox, the same root as in precocious, meaning early maturity.
Superprecociality
Extremely precocial species are called "superprecocial". Examples are the megapode birds, which have full-flight feathers at hatching and which, in some species, can fly on the same day. Enantiornithes and pterosaurs were also capable of flight soon after hatching.
Another example is the blue wildebeest, the calves of which can stand within an average of six minutes from birth and walk within thirty minutes; they can outrun a hyena within a day. Such behavior gives them an advantage over other herbivore species and they are 100 times more abundant in the Serengeti ecosystem than hartebeests, their closest taxonomic relative. Hartebeest calves are not as precocial as wildebeest calves and take up to thirty minutes or more before they stand, and as long as forty-five minutes before they can follow their mothers for short distances. They are unable to keep up with their mothers until they are more than a week old.
Black mambas are highly precocial; as hatchlings, they are fully independent, and are capable of hunting prey the size of a small rat.
Phylogeny
Precociality is thought to be ancestral in birds. Thus, altricial birds tend to be found in the most derived groups. There is some evidence for precociality in protobirds and troodontids. Enantiornithes at least were superprecocial in a way similar to that of megapodes, being able to fly soon after birth. It has been speculated that superprecociality prevented enantiornithines from acquiring specialized toe anatomy seen in modern altricial birds.
Altriciality
In birds and mammals altricial species are those whose newly hatched or born young are relatively immobile, lack hair or down, are not able to obtain food on their own, and must be cared for by adults; closed eyes are common, though not ubiquitous. Altricial young are born helpless and require care for a length of time. Altricial birds include hawks, herons, woodpeckers, owls, cuckoos and most passerines. Among mammals, marsupials and most rodents are altricial. Domestic cats, dogs, and primates, such as humans, are some of the best-known altricial organisms. For example, newborn domestic cats cannot see, hear, maintain their own body temperature, or gag, and require external stimulation in order to defecate and urinate. The giant panda is notably the largest placental mammal to have altricial, hairless young upon birth. The larval stage of insect development is considered by some to be a form of altricial development, but it more accurately depicts, especially amongst eusocial animals, an independent phase of development, as the larvae of bees, ants, and many arachnids are completely physically different from their developed forms, and the pre-pupal stages of insect life might be regarded as equivalent to vertebrate embryonic development.
Etymology
The word “altriciality” is derived from the Latin root alere, meaning "to nurse, to rear, or to nourish", and indicates the need for young to be fed and taken care of for a long duration.
Differences
The span between precocial and altricial species is particularly broad in the biology of birds. Precocial birds hatch with their eyes open and are covered with downy feathers that are soon replaced by adult-type feathers. Birds of this kind can also swim and run much sooner after hatching than altricial young, such as songbirds. Very precocial birds can be ready to leave the nest in a short period of time following hatching (e.g. 24 hours). Many precocial chicks are not independent in thermoregulation (the ability to regulate their body temperatures), and they depend on the attending parent(s) to brood them with body heat for a short time. Precocial birds find their own food, sometimes with help or instruction from their parents. Examples of precocial birds include the domestic chicken, many species of ducks and geese, waders, rails, and the hoatzin.
Precocial birds can provide protein-rich eggs and thus their young hatch in the fledgling stage – able to protect themselves from predators and the females have less post-natal involvement. Altricial birds are less able to contribute nutrients in the pre-natal stage; their eggs are smaller and their young are still in need of much attention and protection from predators. This may be related to r/K selection; however, this association fails in some cases.
In birds, altricial young usually grow faster than precocial young. This is hypothesized to occur so that exposure to predators during the nestling stage of development can be minimized.
In the case of mammals, it has been suggested that large, hearty adult body sizes favor the production of large, precocious young, which develop with a longer gestation period. Large young may be associated with migratory behavior, extended reproductive period, and reduced litter size. It may be that altricial strategies in mammals, in contrast, develop in species with less migratory and more territorial lifestyles, such as Carnivorans, the mothers of which are capable of bearing a fetus in the early stages of development and focusing closely and personally upon its raising, as opposed to precocial animals which provide their youths with a bare minimum of aid and otherwise leave them to instinct.
Human children, and those of other primates, exemplify a unique combination of altricial and precocial development. Infants are born with minimal eyesight, compact and fleshy bodies, and "fresh" features (thinner skin, small noses and ears, and scarce hair if any). However, this stage is only brief amongst primates; their offspring soon develop stronger bones, grow in spurts, and quickly mature in features. This unique growth pattern allows for the hasty adaptivity of most simians, as anything learned by children in between their infancy and adolescence is memorized as instinct; this pattern is also in contrast to more prominently altricial mammals, such as many rodents, which remain largely immobile and undeveloped until grown to near the stature of their parents.
Terminology
In birds, the terms Aves altrices and Aves precoces were introduced by Carl Jakob Sundevall (1836), and the terms nidifugous and nidicolous by Lorenz Oken in 1816. The two classifications were considered identical in early times, but the meanings are slightly different, in that "altricial" and "precocial" refer to developmental stages, while "nidifugous" and "nidicolous" refer to leaving or staying at the nest, respectively.
See also
Parental investment
Precocious puberty
References
Bibliography
External links
The altricial-precocial spectrum in birds
Animal developmental biology
Bird breeding
Developmental biology | Precociality and altriciality | [
"Biology"
] | 1,730 | [
"Behavior",
"Developmental biology",
"Reproduction"
] |
78,642,276 | https://en.wikipedia.org/wiki/Louisiana%20artificial%20reefs | The Louisiana Artificial Reef Program (ARP) was established in 1986 to create habitats for providing food, and shelter for marine life that includes coastal fish, using human-made structures. The program includes several types of artificial reefs that supports ecosystem development, recreational fishing and diving, and critical research. A secondary benefit for those close to shore is coastal protection by reducing the impact of storms, flooding, preventing loss of life, property damage, and coastal erosion.
In 1999, the Louisiana Artificial Reef Program created the world's largest artificial reef by area, referred to as Grand Isle #9, from the Freeport Sulfur Mine off Grand Isle.
As of 2021, oil companies have taken advantage of the Rigs-to-Reefs program, with over 600 platforms converted, more than 350 of them in Louisiana.
History
In 1984 Congress passed the Louisiana Fishing Enhancement Act (LA R.S. 56:639.6 or Public Law 98-623, Title II). The law also created the National Artificial Reef Plan that allowed the establishment of a state reef-permitting system.
The Louisiana Artificial Reef Development Council (Artificial Reef Council) was also created under the Louisiana Artificial Reef Program. The members are the Secretary of the Louisiana Department of Wildlife and Fisheries, Dean of LSU College of the Coast and Environment, and the executive director of the Louisiana Sea Grant.
In 1990 the Coastal Wetlands Planning, Protection and Restoration Act (CWPPRA) was signed into law by then President George H. W. Bush to fund construction of coastal wetlands restoration projects. Since 1990 there has been 210 projects authorized.
The Rigs-to-Reefs program
In response to concerns of habitat loss when a rig ceases production the state created the Rigs-to-reefs program based on criteria from the national Rigs-to-reefs program. Companies can donate decommissioned platforms to the LDWF. Companies also donate half the money saved to the Louisiana Artificial Reef Trust Fund that was created along with the ARP. For platforms farther offshore this can be a tremendous savings as opposed to tearing the rig down and bringing it to shore. The Bureau of Safety and Environmental Enforcement (BSEE) oversees permitting. After a state accepts a donated rig the U.S. Army Corps of Engineers issues a permit, the state accepts liability and maintenance. In 2017 there were 350 platforms converted to reefs.
The process of Rigs-to-Reefs is complex and lengthy. After a company expresses interest in including a platform in the program, permits are required. The LDWF is the lead state agency: it starts the permit process and notifies all the other regulatory agencies. The process covers planning, site selection, material selection, permitting, and monitoring of inshore and nearshore artificial reef development. Both nearshore (normally considered within 3 nautical miles) and inshore sites require additional permitting per the National Fishing Enhancement Act and Louisiana Fishing Enhancement Act. Nearshore and inshore sites see far more traffic than offshore sites.
The depth of or less means the oil platform jackets (legs) might have to be excluded. Leaving the jackets upright, but shortened to comply with minimum jacket to surface requirements, is usually the preferred option. Jackets already in place will have already become an unintended artificial reef. The reef design and development standards depends on certain factors being examined for each site. These include, among other things, environmental and biological factors as well as social and economic considerations. A United States Coast Guard buoy permit is required as the size of a buoy is dependent on the depth and location of water. All five Gulf of Mexico coastal states, Alabama, Florida, Louisiana, Mississippi, and Texas, have artificial reef programs, that includes decommissioned platforms. All five states have an artificial reef coordinator. In Louisiana the coordinator reviews an operator's reefing plan and secures a permit from the Corps of Engineers.
In water that extends from the states boundary to the continental shelf break of the continental margin, or seaward, whichever is farthest from the state boundary, the federal government has authority. Any potential rigs-to-reefs in areas under federal authority are also subject to US Corps of Engineers oversight and permitting as well as the Bureau of Safety and Environmental Enforcement (BSEE) and issuing of a permit. At the state level, the Coastal Management Division of the Louisiana Department of Natural Resources will examine the permits and plans to ensure compliance with state and federal guidelines. Once a structure is accepted into the State reef program, and the reefing operation is complete, the state assumes title and all responsibility for the structure. This includes an exemption from 30 CFR §250.1725(a).
The Coastal Protection and Restoration Authority (CPRA) is the lead agency concerning the criteria of coastal protection and funding. This includes the authority (oversight) for developing, implementing, and enforcing master and annual plans that are submitted to the Louisiana Legislature. The 2023 Coastal Master Plan, the fourth since inception and revised every six years, lays the foundation for current and future goals concerning the Louisiana coast protection, conservation, enhancement, and restoration. The authority of CPRA is vested by the Louisiana House and Senate through legislation (Louisiana Revised Statutes) R.S. 49:214.1, R.S. 49:214.5.3, and R.S. 49:214.5.3(E)
Opposition
Opponents of artificial reefs vary in their reasoning. Some materials have been shown to be unsuitable for artificial reefs because they are not "risk-free". Tires can become dislodged by storms, especially during extreme weather like hurricanes, and can potentially damage natural coral reefs. Material erosion can result in toxic chemicals leaching into the water. Marine life can become entangled in the loose debris. In 1972 Firestone donated approximately 2,000,000 tires used at the "Osborne Reef" around a mile off the coast of Florida. Jack Sobel, Director of Strategic Conservation Science and Policy for The Ocean Conservancy and author, stated, "We've literally dumped millions of tires in our oceans," and "I believe that people who were behind the artificial tire reef promotions actually were well-intentioned and thought they were doing the right thing. In hindsight, we now realize that we made a mistake." While some materials or items have proven successful, like the USS Spiegel Grove, sunk off Key Largo, there are many unknowns: concerns that the reefs will attract the wrong species, the long-term effects (100 to 200 years), and over-fishing, because the sites are well advertised to attract fishermen.
List of artificial reefs
Converted oil platforms
As of 2021 oil companies have taken advantage of the Rigs-to-Reefs program with over 600 platforms converted and more than 350 in Louisiana.
The Lena platform, one of ExxonMobil's decommissioned platforms, located about 50 miles southeast of Grand Isle, is the tallest structure to be converted to an artificial reef. The platform is in the Mississippi Canyon Area, Block 280, in 1,000 feet of water. When upright it was 50 feet taller than the Empire State Building. The platform was also the world's first cable-stabilized platform. Decommissioning began in 2017.
Grand Isle #9: In 1999, the Louisiana Artificial Reef Program created the world's largest artificial reef by area. Created from the Freeport Sulfur Mine off Grand Isle, Louisiana, the reef is made up of more than 68 structures with over 1.5 miles of bridgework. The reef lies in 42–50 feet of water.
Nearshore reefs
Louisiana Artificial Reef Program
Nearshore Reefs
Raising Cane’s Hotel Sid reef, located in South Marsh Island Block 233, is a shallow artificial reef created through a partnership between the Coastal Conservation Association (CCA) of Louisiana and Todd Graves, founder of Raising Cane's, who donated $500,000 toward the project. The reef is a combined partnership between the CCA’s REEF Louisiana Program, Danos Ventures (Amelia, Louisiana), climate technology company Natrx (Raleigh, North Carolina), the Louisiana Department of Wildlife and Fisheries (LDWF), Chevron, CCA’s Building Conservation Trust and Shell. The site is located where the platform Hotel Sid once stood. The platform removal destroyed an unintentional reef, and the new reef is intended to mitigate that loss. The reef material, called Cajun Coral, consists of 3D-printed concrete modules that allow project-specific ExoForms to be created.
References
External links
Louisiana Inshore and Nearshore Artificial Reef Plan
Atlantic States Marine Fisheries Commission "Special Report No. 38
Artificial reefs
Oil platforms
Coastal construction
Marine architecture
Reefs
Artificial landforms
United States Department of the Interior | Louisiana artificial reefs | [
"Chemistry",
"Engineering"
] | 1,772 | [
"Oil platforms",
"Structural engineering",
"Marine architecture",
"Petroleum technology",
"Construction",
"Coastal construction",
"Natural gas technology",
"Architecture"
] |
75,646,993 | https://en.wikipedia.org/wiki/Oxalate%20chloride | An oxalate chloride or oxalato chloride is a mixed anion compound contains both oxalate and chloride anions.
Related compounds include oxalate fluorides and oxalate bromides.
Production
Oxalate chlorides may be produced by treating an oxalate salt with concentrated hydrochloric acid, or with a metal oxide dissolved in oxalic acid and hydrochloric acid solutions that are evaporated.
List
References
Oxalates
Chlorides
Mixed anion compounds | Oxalate chloride | [
"Physics",
"Chemistry"
] | 103 | [
"Matter",
"Chlorides",
"Inorganic compounds",
"Mixed anion compounds",
"Salts",
"Ions"
] |
75,647,518 | https://en.wikipedia.org/wiki/Counterexample-guided%20abstraction%20refinement | Counterexample-guided abstraction refinement (CEGAR) is a technique for symbolic model checking. It is also applied in modal logic tableau calculi algorithms to optimise their efficiency.
In computer-aided verification and analysis of programs, models of computation often consist of states. Models for even small programs, however, may have an enormous number of states. This is identified as the state explosion problem. CEGAR addresses this problem with two stages — abstraction, which simplifies a model by grouping states, and refinement, which increases the precision of the abstraction to better approximate the original model.
If a desired property for a program is not satisfied in the abstract model, a counterexample is generated. The CEGAR process then checks whether the counterexample is spurious, i.e., if the counterexample also applies to the under-abstraction but not the actual program. If this is the case, it concludes that the counterexample is attributed to inadequate precision of the abstraction. Otherwise, the process finds a bug in the program. Refinement is performed when a counterexample is found to be spurious. The iterative procedure terminates either if a bug is found or when the abstraction has been refined to the extent that it is equivalent to the original model.
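The loop described above can be sketched on a toy explicit-state example. The code below is an illustration only: the transition system, the block-splitting refinement, and all function names are invented for this sketch and are far simpler than what real model checkers use. It finds a spurious abstract counterexample, refines once, and then proves the property.

```python
# Toy concrete transition system: states 0-5; the property is "error state 5 is unreachable".
CONCRETE = {0: [1, 2], 1: [3], 2: [4], 3: [0], 4: [0], 5: []}
INIT, ERROR = 0, 5

def abstract_transitions(partition):
    """Abstract model: block i -> block j whenever some concrete edge crosses them."""
    block_of = {s: i for i, blk in enumerate(partition) for s in blk}
    trans = {i: set() for i in range(len(partition))}
    for s, succs in CONCRETE.items():
        for t in succs:
            trans[block_of[s]].add(block_of[t])
    return trans, block_of

def abstract_counterexample(partition):
    """Breadth-first search in the abstract model; return an abstract path to the error block, if any."""
    trans, block_of = abstract_transitions(partition)
    start, bad = block_of[INIT], block_of[ERROR]
    parent, frontier = {start: None}, [start]
    while frontier:
        b = frontier.pop(0)
        if b == bad:
            path = []
            while b is not None:
                path.append(b)
                b = parent[b]
            return path[::-1]
        for nxt in sorted(trans[b]):
            if nxt not in parent:
                parent[nxt] = b
                frontier.append(nxt)
    return None

def is_concrete(path, partition):
    """Spuriousness check: can some concrete run follow the abstract path and end in the error state?"""
    reachable = {INIT} & set(partition[path[0]])
    for blk in path[1:]:
        reachable = {t for s in reachable for t in CONCRETE[s]} & set(partition[blk])
        if not reachable:
            return False
    return ERROR in reachable

def cegar():
    partition = [list(CONCRETE)]           # coarsest abstraction: every state in one block
    while True:
        cex = abstract_counterexample(partition)
        if cex is None:
            return "property holds: error state is unreachable"
        if is_concrete(cex, partition):
            return "bug found: error state is reachable"
        # Counterexample is spurious: refine (crudely, for this toy) by isolating the error state.
        partition = [[s for s in blk if s != ERROR] for blk in partition]
        partition = [blk for blk in partition if blk] + [[ERROR]]

print(cegar())
```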
Program verification
Abstraction
To reason about the correctness of a program, particularly those involving the concept of time for concurrency, state transition models are used. In particular, finite-state models can be used along with temporal logic in automatic verification. The concept of abstraction is thus founded upon a mapping between two Kripke structures. Specifically, programs can be described with control flow automata (CFA).
Define a Kripke structure as M = (S, s0, R, L), where
S is a finite set of states,
s0 is an initial state in S,
R ⊆ S × S is a total transition relation, and
L is a function that labels each state with a set of propositional names that hold therein.
An abstraction of M is defined by an abstract structure M̂ = (Ŝ, ŝ0, R̂, L̂), where h : S → Ŝ is an abstraction mapping that maps every state in M to a state in M̂.
To preserve the critical properties of the model, the abstraction mapping maps the initial state in the original model to its counterpart in the abstract model. The abstraction mapping also guarantees that the transition relations between two states are preserved.
Model Checking
In each iteration, model checking is performed for the abstract model. Bounded model checking, for instance, generates a propositional formula that is then checked for Boolean satisfiability by a SAT solver.
Refinement
When counterexamples are found, they are examined to determine if they are spurious examples, i.e., unauthentic ones that emerge from the under-abstraction of the model. A non-spurious counterexample reflects the incorrectness of the program, which may be sufficient to terminate the program verification process and conclude that the program is incorrect. The main objective of the refinement process is to handle spurious counterexamples: it eliminates them by increasing the granularity of the abstraction.
The refinement process ensures that the dead-end states and the bad states do not belong to the same abstract state. A dead-end state is a reachable one with no outgoing transition whereas a bad-state is one with transitions causing the counterexample.
Tableau calculi
Since modal logic is often interpreted with Kripke semantics, where a Kripke frame resembles the structure of state transition systems concerned in program verification, the CEGAR technique is also implemented for automated theorem proving.
References
Model checking
Logical calculi
Modal logic | Counterexample-guided abstraction refinement | [
"Mathematics"
] | 733 | [
"Mathematical logic",
"Logical calculi",
"Modal logic"
] |
75,650,483 | https://en.wikipedia.org/wiki/Bayesian%20persuasion | In economics and game theory, Bayesian persuasion involves a situation where one participant (the sender) wants to persuade the other (the receiver) of a certain course of action. There is an unknown state of the world, and the sender must commit to a decision of what information to disclose to the receiver. Upon seeing said information, the receiver will revise their belief about the state of the world using Bayes' Rule and select an action. Bayesian persuasion was introduced by Kamenica and Gentzkow, though its origins can be traced back to Aumann and Maschler (1995).
Bayesian persuasion is a special case of a principal–agent problem: the principal is the sender and the agent is the receiver. It can also be seen as a communication protocol, comparable to signaling games; the sender must decide what signal to reveal to the receiver to maximize their expected utility. It can also be seen as a form of cheap talk.
Example
Consider the following illustrative example. There is a medicine company (sender), and a medical regulator (receiver). The company produces a new medicine, and needs the approval of the regulator. There are two possible states of the world: the medicine can be either "good" or "bad". The company and the regulator do not know the true state. However, the company can run an experiment and report the results to the regulator. The question is what experiment the company should run in order to get the best outcome for themselves. The assumptions are:
Both company and regulator share a common prior probability that the medicine is good.
The company must commit to the experiment design and the reporting of the results (so there is no element of deception). The regulator observes the experiment design.
The company receives a payoff if and only if the medicine is approved.
The regulator receives a payoff if and only if it provides an accurate outcome (approving a good medicine or rejecting a bad one).
For example, suppose the prior probability that the medicine is good is 1/3 and that the company has a choice of three actions:
Conduct a thorough experiment that always detects whether the medicine is good or bad, and truthfully report the results to the regulator. In this case, the regulator will approve the medicine with probability 1/3, so the expected utility of the company is 1/3.
Don't conduct any experiment; always say "the medicine is good". In this case, the signal does not give any information to the regulator. As the regulator believes that the medicine is good with probability 1/3, the expectation-maximizing action is to always reject it. Therefore, the expected utility of the company is 0.
Conduct an experiment that, if the medicine is good, always reports "good", and if the medicine is bad, it reports "good" or "bad" with probability 1/2. Here, the regulator applies Bayes' rule: given a signal "good", the probability that the medicine is good is 1/2, so the regulator approves it. Given a signal "bad", the probability that the medicine is good is 0, so the regulator rejects it. All in all, the regulator approves the medicine in 2/3 of the cases, so the expected utility of the company is 2/3.
In this case, the third policy is optimal for the sender since this has the highest expected utility of the available options. Using the Bayes rule, the sender has persuaded the receiver to act in a favorable way to the sender.
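The three policies above can be checked directly with Bayes' rule. The following sketch is not from the source; it uses exact fractions and, as in the example, assumes that a posterior of exactly 1/2 is resolved in favour of approval.

```python
from fractions import Fraction as F

PRIOR_GOOD = F(1, 3)

def approval_probability(p_report_good_if_good, p_report_good_if_bad):
    """Sender's expected utility (probability of approval) for a binary experiment
    that reports either 'good' or 'bad'. The regulator approves whenever the
    posterior probability that the medicine is good is at least 1/2."""
    p_report_good = (PRIOR_GOOD * p_report_good_if_good
                     + (1 - PRIOR_GOOD) * p_report_good_if_bad)
    approve = F(0)
    for p_report, likelihood_good in (
        (p_report_good, p_report_good_if_good),          # the experiment reports "good"
        (1 - p_report_good, 1 - p_report_good_if_good),  # the experiment reports "bad"
    ):
        if p_report == 0:
            continue
        posterior_good = PRIOR_GOOD * likelihood_good / p_report   # Bayes' rule
        if posterior_good >= F(1, 2):
            approve += p_report
    return approve

print("thorough experiment :", approval_probability(F(1), F(0)))     # 1/3
print("no information      :", approval_probability(F(1), F(1)))     # 0
print("partial experiment  :", approval_probability(F(1), F(1, 2)))  # 2/3
```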
Generalized model
The basic model has been generalized in a number of ways, including:
The receiver may have private information not shared with the sender.
The sender and receiver may have a different prior on the state of the world.
There may be multiple senders, where each sends a signal simultaneously and all receivers receive all signals before acting.
There may be multiple senders who send signals sequentially, and the receiver receives all signals before acting.
There may be multiple receivers, including cases where each receives their own signal, the same signal, or signals which are correlated in some way, and where each receiver may factor in the actions of other receivers.
A series of signals may be sent over time.
Practical application
The applicability of the model has been assessed in a number of real-world contexts:
Disclosure of capital reserves by banks to financial regulators.
Grading of students' work by teachers, where the receivers are potential future employers.
Provision of feedback by an employer to employees.
Revelation of plot points from a creator of fictional work to entertain its reader or viewer.
Computational approach
Algorithmic techniques have been developed to compute the optimal signalling scheme in practice. This can be found in polynomial time with respect to the number of actions and pseudo-polynomial time with respect to the number of states of the world. Algorithms with lower computational complexity are also possible under stronger assumptions.
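For the two-state medicine example above, the sender's problem reduces to a tiny linear program over the reporting probabilities. The sketch below uses SciPy's linprog as one possible solver; the variable names and the choice of SciPy are assumptions made for this illustration, not something prescribed by the references.

```python
from scipy.optimize import linprog

# Decision variables: q_g = P(report "good" | good), q_b = P(report "good" | bad).
# The sender maximizes the approval probability (1/3) q_g + (2/3) q_b, subject to
# the regulator's obedience constraint that the posterior after a "good" report is
# at least 1/2:  (1/3) q_g >= (2/3) q_b,  i.e.  -q_g + 2 q_b <= 0.
res = linprog(
    c=[-1 / 3, -2 / 3],            # linprog minimizes, so negate the objective
    A_ub=[[-1.0, 2.0]],
    b_ub=[0.0],
    bounds=[(0.0, 1.0), (0.0, 1.0)],
)
q_g, q_b = res.x
print(f"optimal experiment: q_g = {q_g:.2f}, q_b = {q_b:.2f}, "
      f"approval probability = {-res.fun:.3f}")   # recovers the 2/3 policy
```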
The online case, where multiple signals are sent over time, can be solved efficiently as a regret minimization problem.
References
Applications of Bayesian inference
Mechanism design | Bayesian persuasion | [
"Mathematics"
] | 1,031 | [
"Game theory",
"Mechanism design"
] |
71,337,783 | https://en.wikipedia.org/wiki/N%20%3D%201%20supersymmetric%20Yang%E2%80%93Mills%20theory | In theoretical physics, more specifically in quantum field theory and supersymmetry, supersymmetric Yang–Mills, also known as super Yang–Mills and abbreviated to SYM, is a supersymmetric generalization of Yang–Mills theory, which is a gauge theory that plays an important part in the mathematical formulation of forces in particle physics. It is a special case of 4D N = 1 global supersymmetry.
Super Yang–Mills was studied by Julius Wess and Bruno Zumino in which they demonstrated the supergauge-invariance of the theory and wrote down its action, alongside the action of the Wess–Zumino model, another early supersymmetric field theory.
The treatment in this article largely follows that of Figueroa-O'Farrill's lectures on supersymmetry and of Tong.
While N = 4 supersymmetric Yang–Mills theory is also a supersymmetric Yang–Mills theory, it has very different properties to N = 1 supersymmetric Yang–Mills theory, which is the theory discussed in this article. The N = 2 supersymmetric Yang–Mills theory was studied by Seiberg and Witten in Seiberg–Witten theory. All three theories are based in super Minkowski spaces.
The supersymmetric Yang–Mills action
Preliminary treatment
A first treatment can be done without defining superspace, instead defining the theory in terms of familiar fields in non-supersymmetric quantum field theory.
Spacetime and matter content
The base spacetime is flat spacetime (Minkowski space).
SYM is a gauge theory, and there is an associated gauge group G to the theory. The gauge group has associated Lie algebra 𝔤.
The field content then consists of
a 𝔤-valued gauge field A_μ
a 𝔤-valued Majorana spinor field λ (an adjoint-valued spinor), known as the 'gaugino'
a 𝔤-valued auxiliary scalar field D.
For gauge-invariance, the gauge field is necessarily massless. This means its superpartner is also massless if supersymmetry is to hold. Therefore can be written in terms of two Weyl spinors which are conjugate to one another: , and the theory can be formulated in terms of the Weyl spinor field instead of .
Supersymmetric pure electromagnetic theory
When the gauge group is U(1), the conceptual difficulties simplify somewhat, and this is in some sense the simplest gauge theory. The field content is simply a (co-)vector field A_μ, a Majorana spinor λ and an auxiliary real scalar field D.
The field strength tensor is defined as usual as F_μν = ∂_μ A_ν − ∂_ν A_μ.
The Lagrangian written down by Wess and Zumino is then
This can be generalized to include a coupling constant e and a theta term θ, where F̃^μν is the dual field strength tensor
F̃^μν = (1/2) ε^μνρσ F_ρσ
and ε^μνρσ is the alternating tensor or totally antisymmetric tensor. If we also replace the Majorana field λ with the Weyl spinor ψ, then a supersymmetric action can be written as
This can be viewed as a supersymmetric generalization of a pure gauge theory, also known as Maxwell theory or pure electromagnetic theory.
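Since the explicit expressions are not preserved above, a schematic form of the supersymmetric Maxwell Lagrangian in one common convention is shown below; signs and normalizations vary between references, so this should be read as an illustration rather than as the source's own formula (here λ denotes the Weyl gaugino).

```latex
% N = 1 supersymmetric Maxwell Lagrangian, one common convention:
\mathcal{L} \;=\; -\tfrac{1}{4}\,F_{\mu\nu}F^{\mu\nu}
  \;-\; i\,\bar\lambda\,\bar\sigma^{\mu}\partial_{\mu}\lambda
  \;+\; \tfrac{1}{2}\,D^{2},
\qquad
F_{\mu\nu} \;=\; \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu}.

% With a coupling constant e and theta angle (normalizations convention-dependent):
\mathcal{L} \;=\; -\tfrac{1}{4e^{2}}\,F_{\mu\nu}F^{\mu\nu}
  \;+\; \tfrac{\theta}{32\pi^{2}}\,F_{\mu\nu}\tilde F^{\mu\nu}
  \;-\; \tfrac{i}{e^{2}}\,\bar\lambda\,\bar\sigma^{\mu}\partial_{\mu}\lambda
  \;+\; \tfrac{1}{2e^{2}}\,D^{2},
\qquad
\tilde F^{\mu\nu} \;=\; \tfrac{1}{2}\,\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}.
```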
Supersymmetric Yang–Mills theory (preliminary treatment)
In full generality, we must define the gluon field strength tensor,
and the covariant derivative of the adjoint Weyl spinor,
To write down the action, an invariant inner product on 𝔤 is needed: the Killing form is such an inner product, and in a typical abuse of notation we write it simply as Tr, suggestive of the fact that the invariant inner product arises as the trace in some representation of 𝔤.
Supersymmetric Yang–Mills then readily generalizes from supersymmetric Maxwell theory. A simple version is
while a more general version is given by
Superspace treatment
Superspace and superfield content
The base superspace is super Minkowski space.
The theory is defined in terms of a single adjoint-valued real superfield , fixed to be in Wess–Zumino gauge.
Supersymmetric Maxwell theory on superspace
The theory is defined in terms of a superfield arising from taking covariant derivatives of :
.
The supersymmetric action is then written down, with a complex coupling constant , as
where h.c. indicates the Hermitian conjugate of the preceding term.
Supersymmetric Yang–Mills on superspace
For non-abelian gauge theory, instead define
and . Then the action is
Symmetries of the action
Supersymmetry
For the simplified Yang–Mills action on Minkowski space (not on superspace), the supersymmetry transformations are
where .
For the Yang–Mills action on superspace, since is chiral, then so are fields built from . Then integrating over half of superspace, , gives a supersymmetric action.
An important observation is that the Wess–Zumino gauge is not a supersymmetric gauge, that is, it is not preserved by supersymmetry. However, it is possible to do a compensating gauge transformation to return to Wess–Zumino gauge. Then, after a supersymmetry transformation and the compensating gauge transformation, the superfields transform as
Gauge symmetry
The preliminary theory defined on spacetime is manifestly gauge invariant as it is built from terms studied in non-supersymmetric gauge theory which are gauge invariant.
The superfield formulation requires a theory of generalized gauge transformations. (Not supergauge transformations, which would be transformations in a theory with local supersymmetry).
Generalized abelian gauge transformations
Such a transformation is parametrized by a chiral superfield , under which the real superfield transforms as
In particular, upon expanding and appropriately into constituent superfields, then contains a vector superfield while contains a scalar superfield , such that
The chiral superfield used to define the action,
is gauge invariant.
Generalized non-abelian gauge transformations
The chiral superfield is adjoint valued. The transformation of is prescribed by
,
from which the transformation for can be derived using the Baker–Campbell–Hausdorff formula.
The chiral superfield is not invariant but transforms by conjugation:
,
so that upon tracing in the action, the action is gauge-invariant.
Extra classical symmetries
Superconformal symmetry
As a classical theory, supersymmetric Yang–Mills theory admits a larger set of symmetries, described at the algebra level by the superconformal algebra. Just as the super Poincaré algebra is a supersymmetric extension of the Poincaré algebra, the superconformal algebra is a supersymmetric extension of the conformal algebra which also contains a spinorial generator of conformal supersymmetry .
Conformal invariance is broken in the quantum theory by trace and conformal anomalies.
While the quantum supersymmetric Yang–Mills theory does not have superconformal symmetry, quantum N = 4 supersymmetric Yang–Mills theory does.
R-symmetry
The R-symmetry for supersymmetry is a symmetry of the classical theory, but not of the quantum theory due to an anomaly.
Adding matter
Abelian gauge
Matter can be added in the form of Wess–Zumino model type superfields . Under a gauge transformation,
,
and instead of using just as the Lagrangian as in the Wess–Zumino model, for gauge invariance it must be replaced with
This gives a supersymmetric analogue to QED. The action can be written
For flavours, we instead have superfields , and the action can be written
with implicit summation.
However, for a well-defined quantum theory, a theory such as that defined above suffers a gauge anomaly. We are obliged to add a partner to each chiral superfield (distinct from the idea of superpartners, and from conjugate superfields), which has opposite charge. This gives the action
Non-Abelian gauge
For non-abelian gauge, matter chiral superfields are now valued in a representation of the gauge group:
.
The Wess–Zumino kinetic term must be adjusted to .
Then a simple SQCD action would be to take to be the fundamental representation, and add the Wess–Zumino term:
.
More general and detailed forms of the super QCD action are given in that article.
Fayet–Iliopoulos term
When the center of the Lie algebra is non-trivial, there is an extra term which can be added to the action known as the Fayet–Iliopoulos term.
References
Supersymmetric quantum field theory | N = 1 supersymmetric Yang–Mills theory | [
"Physics"
] | 1,766 | [
"Supersymmetric quantum field theory",
"Supersymmetry",
"Symmetry"
] |
71,350,420 | https://en.wikipedia.org/wiki/Gauss%20separation%20algorithm | Carl Friedrich Gauss, in his treatise Allgemeine Theorie des Erdmagnetismus, presented a method, the Gauss separation algorithm, of partitioning the magnetic field vector, B, measured over the surface of a sphere into two components, internal and external, arising from electric currents (per the Biot–Savart law) flowing in the volumes interior and exterior to the spherical surface, respectively. The method employs spherical harmonics. When radial currents flow through the surface of interest, the decomposition is more complex, involving the decomposition of the field into poloidal and toroidal components. In this case, an additional term (the toroidal component) accounts for the contribution of the radial current to the magnetic field on the surface.
The method is commonly used in studies of terrestrial and planetary magnetism, to relate measurements of magnetic fields either at the planetary surface or in orbit above the planet to currents flowing in the planet's interior (internal currents) and its magnetosphere (external currents). Ionospheric currents would be exterior to the planet's surface, but might be internal currents from the vantage point of a satellite orbiting the planet.
Notes
References
Magnetism
Geomagnetism
Physical quantities
Harmonic analysis | Gauss separation algorithm | [
"Physics",
"Materials_science",
"Mathematics"
] | 251 | [
"Materials science stubs",
"Physical phenomena",
"Physical quantities",
"Quantity",
"Physical properties",
"Electromagnetism stubs"
] |
71,354,355 | https://en.wikipedia.org/wiki/Hiroshi%20Suura | Hiroshi Suura (born August 19, 1925, Hiroshima, Japan – September 15, 1998) was a Japanese theoretical physicist, specializing in particle physics.
Education and career
Suura graduated in 1947 with a B.S. from the University of Tokyo and in 1954 with a Ph.D. in physics from Hiroshima University. From September 1955 to June 1956 he did research at the Institute for Advanced Study. From 1960 to 1965 he was a professor at Nihon University. From 1965 until his retirement as professor emeritus, he was a professor at the University of Minnesota.
In the theory of infrared corrections, Suura made important contributions, essential for many precise measurements involving elementary particles, especially electrons.
He was elected in 1967 a Fellow of the American Physical Society. On June 1, 1994, the University of Minnesota held a colloquium in honor of Hiroshi Suura. After his death, the Physical Society of Japan published a collection of articles as a memorial to him.
Selected publications
References
1925 births
1998 deaths
20th-century Japanese physicists
Particle physicists
University of Tokyo alumni
Hiroshima University alumni
Academic staff of Nihon University
University of Minnesota faculty
Fellows of the American Physical Society
People from Hiroshima | Hiroshi Suura | [
"Physics"
] | 242 | [
"Particle physicists",
"Particle physics"
] |
68,449,046 | https://en.wikipedia.org/wiki/Vaccine%20resistance | Vaccine resistance is the evolutionary adaptation of pathogens to infect and spread through vaccinated individuals, analogous to antimicrobial resistance. It concerns both human and animal vaccines. Although the emergence of a number of vaccine resistant pathogens has been well documented, this phenomenon is nevertheless much more rare and less of a concern than antimicrobial resistance.
Vaccine resistance may be considered a special case of immune evasion, from the immunity conferred by the vaccine. Since the immunity conferred by a vaccine may be different from that induced by infection by the pathogen, the immune evasion may also be easier (in case of an inefficient vaccine) or more difficult (would be the case of the universal flu vaccine). We speak of vaccine resistance only if the immune evasion is a result of evolutionary adaptation of the pathogen (and not a feature of the pathogen that it had before any evolutionary adaptation to the vaccine) and the adaptation is driven by the selective pressure induced by the vaccine (this would not be the case of an immune evasion that is the result of genetic drift that would be present even without vaccinating the population).
Some of the causes advanced for less frequent emergence of resistance are that
vaccines are mostly used for prophylaxis, that is before infection occurs, and usually act to suppress the pathogen before the host becomes infectious
most vaccines target multiple antigenic sites of the pathogen
different hosts may produce different immune responses to the same pathogen
For diseases that confer long lasting immunity after exposure, typically childhood diseases, it was argued that a vaccine may provide the same immune response as natural infection, so it is expected that there should be no vaccine resistance.
If vaccine resistance emerges the vaccine may retain some level of protection against serious infection, possibly by modifying the immune response of the host away from immunopathology.
The best known cases of vaccine resistance are for the following diseases
animal diseases
Marek's disease where actually more virulent strains emerged after vaccination because the vaccine did not protect against infection and transmission, only against serious forms of the disease
Yersinia ruckeri because a single mutation was sufficient to generate vaccine resistance
avian metapneumovirus
human diseases
Streptococcus pneumoniae because of recombination with another serotype not targeted by the vaccine
hepatitis B virus because the vaccine targeted a single site formed by 9 amino acids
Bordetella pertussis because not all serotypes were targeted and later because acellular vaccines targeted only a few antigens
Other less documented cases are for avian influenza, avian reovirus, Corynebacterium diphtheriae, feline calicivirus, H. influenzae, infectious bursal disease virus, Neisseria meningitidis, Newcastle disease virus, and porcine circovirus type 2.
References
Vaccination
Parasitology
Immunology
Immune system
Evolutionary biology
Pharmaceuticals policy | Vaccine resistance | [
"Biology"
] | 590 | [
"Evolutionary biology",
"Immune system",
"Organ systems",
"Immunology",
"Vaccination"
] |
68,450,344 | https://en.wikipedia.org/wiki/Time%20in%20Malawi | Time in Malawi is given by a single time zone, officially denoted as Central Africa Time (CAT; UTC+02:00). Malawi does not observe daylight saving time.
IANA time zone database
In the IANA time zone database, Malawi is given one zone in the file zone.tab – Africa/Blantyre. "MW" refers to the country's ISO 3166-1 alpha-2 country code. Data for Malawi directly from zone.tab of the IANA time zone database; columns marked with * are the columns from zone.tab itself:
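As an illustration (not part of the source), the Africa/Blantyre entry can be queried directly from Python's standard-library zoneinfo module, which reads the IANA time zone database; this requires Python 3.9+ with tz data available.

```python
from datetime import datetime
from zoneinfo import ZoneInfo   # IANA time zone database bindings (Python 3.9+)

blantyre = ZoneInfo("Africa/Blantyre")
now = datetime.now(tz=blantyre)

print(now.isoformat())   # current local time in Malawi
print(now.utcoffset())   # 2:00:00 year-round, since Malawi observes no DST
print(now.tzname())      # CAT
```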
See also
List of time zones by country
List of UTC time offsets
References
External links
Current time in Malawi at Time.is
Time in Malawi at TimeAndDate.com
Time by country
Geography of Malawi
Time in Africa | Time in Malawi | [
"Physics"
] | 161 | [
"Spacetime",
"Physical quantities",
"Time",
"Time by country"
] |
68,453,203 | https://en.wikipedia.org/wiki/Avy%20B.V. | Avy B.V. is a Dutch technology company that develops and operates drones and aerial networks for long-range missions. Avy's B.V.'s drones can take off and land vertically like a helicopter and fly longer distances than a quadcopter because of their fixed-wing configuration. Its second drone, the Avy Aera, is a VTOL fixed-wing drone and was released at Amsterdam Drone Week in 2019 (Dec 4th - 6th).
History
Avy B.V. was founded in 2016 by Patrique Zaman in Amsterdam, Netherlands
2015–2017: European Space Agency (ESA) Incubator
2017: UAE Drones For Good Award. Avy in Dubai as one of the ten finalists.
2017: Avy exhibited in the Stedelijk Museum as part of the Design for Refugees exhibition.
2017: First BVLOS missions in three national parks in South Africa (Hluhluwe, Adventures with elephants, Leshiba)
2018: Move to new HQ in Amsterdam.
2018: Seed investment from Orange Wings.
2019: Release of the Avy Aera at Amsterdam Drone Week.
2019: Foundation Medical Drone Service consortium.
2020: Avy receives 1.4 million euros in subsidy grant from EU horizon 2020.
2020: Avy takes part in Lake Kivu challenge, a VTOL drone competition hosted by African Drone Forum in Rwanda. The company competed in the "Emergency Delivery category" and won a safety award.
2020: The company wins a Blue Tulip Award in the category of "Best Mobility Innovation".
2021: Launch "Drones for health" project in partnership with Botswana International University of Science and Technology (BIUST), United Nations Population Fund (UNFPA) and the Botswana Ministry of Health and Wellness.
2021: Won an Airwards in the "Emergency Response and SAR" category.
Products
Avy Aera
Avy Aera was launched on Dec 4th, 2019 at the Amsterdam Drone Week in the RAI. Aera has an external dimension of 2400mm x 1300mm X 500mm, and carries a maximum payload of up to 1.5 kg. A VTOL drone is a combination between a helicopter and a plane, as it can take-off and land vertically. It has wings to enlarge the flight endurance. This drone can cover up to 85 km and has one hour of flight time.
The long-range drone can fly beyond visual line-of-sight (BVLOS) missions, and it has a modular payload, making it suitable for different applications. It can be equipped with a stabilized gimbal that has RGB and a thermal camera for wildfire detection and monitoring. For medical deliveries, this model can transport a medical (cooled) cargo box, which is able to keep medical commodities such as blood, samples and vaccines in a temperature controlled state between 2-8 °C. Avy Area is certified to fly BVLOS in compliance with the new EU drone regulations.
Docking station
The Avy Aera can be remotely and autonomously operated from the docking station, a locally placed and secured drone station where the drone can autonomously take off and land for check and charge. The drone and the station are connected through software and are remotely operated from the network control center. This center can be separate or integrated inside the control room of emergency services. This whole system forms the infrastructure for an aerial drone network.
Projects
Healthcare Logistics
The Medical Drones Service consortium was launched in late 2019. This consists of ANWB MAA (flights operator), PostNL (logistic provider), Erasmus MC (hospital), Isala (hospitals), Sanquin (blood bank), KPN (telecom), Certe (Lab), and Avy, that collaboratively joined a three year pilot to research and test how drones can contribute to deliver healthcare in the Netherlands and keep healthcare accessible in the future. The medical partners are important to develop the right kind of emergency service. Avy and KPN are the two technology partners. Halfway through the project, the first BVLOS flights have been performed by the ANWB MAA on different routes between hospitals & blood bank in the Netherlands.
Emergency Services
With climate change, rapid detection of wildfires becomes important with the increasing risk of wildfires. Avy partnered up with CHC Helicopters and Safety Region of North Holland to research the use of drones for detection of early-stage wildfires.
In February 2021, the Avy Area (equipped with a stabilized gimbal camera with RGB and thermal functionality) performed several test flights in National Park the Hoge Veluwe for the Security Regions VNOG and Gelderland Midden. In September 2021 phase 2 & 3 of this project will start with more test flights above the Veluwe.
Last-mile Medical Delivery
In April 2021, Avy partnered with the Botswana International University of Science and Technology (BIUST), UNFPA and Botswana Ministry of Health and Wellness to start the Drones for Health project. This aims to reduce the numbers of maternal deaths by using drones to deliver health supplies and emergency commodities. The Avy Aera was 65% faster than common road transport to reach certain communities. The Drones for Health project was officially launched on May 7, 2021 as it was initiated by BIUST, UNFPA, and the Ministry of Health & Wellness of Botswana.
Drone Specifications
Avy Aera
Dimensions:
Wingspan: 2400mm
Length: 1300mm
Height: 500mm
Transport case: 2000 x 600 x 600mm
Weight: 12 kg
Payloads:
Maximum payload weight: 1.5 kg
Payload volume: 200 x 275 x 135mm (L x W x H)
Cargo module: Default
Medical payload: Insulated for cooled transport
First response payload: Nighthawk 2
Flight Performance:
Flight time: 55 minutes
Range: 60 km
Cruise speed: 40 kt (74 km/h)
Awards
In 2020, Avy won a Blue Tulip Award (organized by Accenture) for Best Mobility Innovation category. In 2021, the company in partnership with the Dutch fire brigade won an Airwards, the global award to recognize positive drone use cases, in the "Emergency Response and SAR" category. The project aims to build early wildfire warning systems with daily drone flights.
References
Unmanned aerial vehicle manufacturers
Companies based in Amsterdam
Sustainable transport
2016 establishments in the Netherlands
Technology companies of the Netherlands
Companies of the Netherlands
Privately held companies of the Netherlands
Multinational companies headquartered in the Netherlands
Dutch brands | Avy B.V. | [
"Physics"
] | 1,312 | [
"Sustainable transport",
"Transport",
"Physical systems"
] |
68,453,661 | https://en.wikipedia.org/wiki/Arnidiol | Arnidiol is a cytotoxic triterpene with the molecular formula C30H50O2. Arnidiol has been first isolated from the bloom of the plant Arnica montana. Arnidiol has also been isolated from the plant Taraxacum officinale.
References
Further reading
Triterpenes
Diols
Vinylidene compounds
Pentacyclic compounds | Arnidiol | [
"Chemistry"
] | 84 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
68,453,678 | https://en.wikipedia.org/wiki/Collapsing%20can | Collapsing can or can crusher experiment is a demonstration of an aluminum can being crushed by atmospheric pressure. Due to the low pressure inside a can as compared to the pressure outside, the pressure outside exerts a force on the can causing the can to collapse.
Explanation
The demonstration starts with boiling water inside the can. As the water is boiled, water vapor is created and fills the space inside the can, which then pushes the air out.
H2O(l) → H2O(g)
Then, inverting a water vapor-filled can into a water bath causes the water vapor to rapidly condense back to liquid water. The condensation of water reduces pressure inside the can, so the higher pressure outside the can makes the can collapse.
H2O(g) → H2O(l)
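For a sense of scale, the inward force can be estimated from the pressure difference alone. The numbers below (can dimensions and the residual pressure after condensation) are assumed purely for illustration and are not taken from the source.

```python
import math

P_ATM = 101_325.0      # Pa, outside atmospheric pressure
P_INSIDE = 20_000.0    # Pa, assumed residual pressure after the steam condenses
DIAMETER = 0.066       # m, roughly a standard 355 mL beverage can
HEIGHT = 0.122         # m

side_area = math.pi * DIAMETER * HEIGHT        # lateral surface of the cylinder
net_pressure = P_ATM - P_INSIDE

force = net_pressure * side_area               # net inward force on the side wall
print(f"pressure difference: {net_pressure / 1000:.1f} kPa")
print(f"inward force on the side wall: {force:.0f} N "
      f"(~{force / 9.81:.0f} kg-force)")
```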
Limitation
Since the can is open when immersed, this demonstration only works with aluminum cans. Aluminum cools quickly when immersed, causing almost instantaneous condensation of the steam, leading the weak aluminum to collapse. With steel cans, the water in the cooling bath condenses the interior steam by contact through the opening in the can. The cooling water is then drawn inside the can by the reduced pressure, preventing the collapse of the can: the steam condenses before the steel cools.
A variation where the opening in the can is sealed air-tight can make even a strong steel drum collapse. After the water inside the drum boils and forces the air out, the opening is sealed air-tight. When the steam condenses, the can or drum will be crushed by the pressure differential between the internal partial pressure of water and the external atmosphere.
Alternatives
Addition of sodium hydroxide to a can filled with carbon dioxide can produce a similar result, since the absorption reaction CO2 + 2 NaOH → Na2CO3 + H2O consumes the gas and lowers the pressure inside the can.
Gallery
References
External links
Can Crush Demonstration
Physics education
Chemistry classroom experiments
Articles containing video clips
Atmospheric pressure | Collapsing can | [
"Physics",
"Chemistry"
] | 365 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Physics education",
"Meteorological quantities",
"Atmospheric pressure",
"Chemistry classroom experiments"
] |
74,330,686 | https://en.wikipedia.org/wiki/Analytical%20band%20centrifugation | Analytical band centrifugation (ABC) (also known as analytical band ultracentrifugation, or band sedimentation-velocity), is a specialized ultracentrifugation procedure, where unlike the typical use of (boundary) sedimentation velocity analytical ultracentrifugation (SV-AUC) wherein a homogenous bulk solution is centrifuged, in ABC a thin (~15 μL, ~500 μm) sample is layered on top of a bulk solvent and then centrifuged. The method is distinguished from zone-sedimentation in that a stabilizing density gradient is self-generated during centrifugation, through the use of a higher density (than the sample) bulk "binary solvent", containing both a solvent (i.e. H2O), and a second component (small molecules, i.e. CsCl) that will sediment to form a stabilizing density gradient for the sample.
ABC also requires specially designed analytical ultracentrifuge cells, as the sample is not manually applied by pipette but instead automatically delivered via capillary under low g-forces at the beginning of a run from a reservoir within the cell. It was first demonstrated in 1963, and was not commonly used for many decades, but recently has become more widely used due to its applicability to quality control measurements on therapeutic viruses such as adeno-associated viruses (AAVs). The profiles resulting from ABC analyses are similar in their interpretation to the profiles from electrophoretic separations ("electropherograms"), and thus have been dubbed "centrifugrams".
See also
Svedberg
Sedimentation
Centrifugation
Differential centrifugation
References
Laboratory techniques
Centrifugation | Analytical band centrifugation | [
"Chemistry",
"Biology"
] | 362 | [
"Centrifugation",
"Separation processes",
"Biotechnology stubs",
"Biochemistry stubs",
"nan",
"Biochemistry"
] |
74,340,046 | https://en.wikipedia.org/wiki/Commercial%20fusion | Commercial Fusion is a term used to refer to privately owned companies whose aim is to sell electricity produced by nuclear fusion. The industry now consists of over 40 companies who have attracted a combined total of more than $6 billion in investment.
Commercial fusion companies
First fusion electricity to the grid
For decades researchers have famously said that fusion power is always 30, or even 50, years away. The advent of commercial fusion has changed that, and now fusion power is typically predicted to be around 10 years away, with most companies predicting that the first fusion plant will deliver electricity to the grid before 2035. Although the majority of the companies have only existed for a few years, some have already failed to deliver on their predictions. General Fusion first predicted that it would deliver electricity to the grid by 2009.
References
Fusion power
Nuclear technology companies | Commercial fusion | [
"Physics",
"Chemistry",
"Engineering"
] | 164 | [
"Plasma physics",
"Nuclear technology companies",
"Fusion power",
"Engineering companies",
"Nuclear fusion"
] |
74,341,487 | https://en.wikipedia.org/wiki/Nadel%20vanishing%20theorem | In mathematics, the Nadel vanishing theorem is a global vanishing theorem for multiplier ideals, introduced by A. M. Nadel in 1989. It generalizes the Kodaira vanishing theorem using singular metrics with (strictly) positive curvature, and also it can be seen as an analytical analogue of the Kawamata–Viehweg vanishing theorem.
Statement
The theorem can be stated as follows. Let $X$ be a smooth complex projective variety, $D$ an effective $\mathbb{Q}$-divisor and $L$ a line bundle on $X$, and let $\mathcal{J}(D)$ be the associated multiplier ideal sheaf. Assume that $L - D$ is big and nef. Then
$$H^i\bigl(X, \mathcal{O}_X(K_X + L) \otimes \mathcal{J}(D)\bigr) = 0 \quad \text{for all } i > 0.$$
Nadel vanishing theorem in the analytic setting: Let $(X, \omega)$ be a Kähler manifold ($X$ being a reduced complex space (complex analytic variety) with a Kähler metric) such that $X$ is weakly pseudoconvex, and let $F$ be a holomorphic line bundle over $X$ equipped with a singular hermitian metric of weight $\varphi$. Assume that $i\Theta_F \geq \varepsilon\omega$ for some continuous positive function $\varepsilon$ on $X$. Then
$$H^q\bigl(X, \mathcal{O}(K_X + F) \otimes \mathcal{J}(\varphi)\bigr) = 0 \quad \text{for all } q \geq 1.$$
Let $\varphi$ be an arbitrary plurisubharmonic function on $X$; then the multiplier ideal sheaf $\mathcal{J}(\varphi)$ is a coherent sheaf on $X$, and therefore its zero variety is an analytic set.
References
Citations
Bibliography
Further reading
Theorems in algebraic geometry
Theorems in complex geometry | Nadel vanishing theorem | [
"Mathematics"
] | 251 | [
"Theorems in algebraic geometry",
"Theorems in complex geometry",
"Theorems in geometry"
] |
77,284,585 | https://en.wikipedia.org/wiki/Clavin%E2%80%93Garcia%20equation | Clavin–Garcia equation or Clavin–Garcia dispersion relation provides the relation between the growth rate and the wave number of the perturbation superposed on a planar premixed flame, named after Paul Clavin and Pedro Luis Garcia Ybarra, who derived the dispersion relation in 1983. The dispersion relation accounts for Darrieus–Landau instability, Rayleigh–Taylor instability and diffusive–thermal instability and also accounts for the temperature dependence of transport coefficients.
Dispersion relation
Let and be the wavenumber (measured in units of planar laminar flame thickness ) and the growth rate (measured in units of the residence time of the planar laminar flame) of the perturbations to the planar premixed flame. Then the Clavin–Garcia dispersion relation is given by
where
and
Here
The function , in most cases, is simply given by , where , in which case, we have ,
In the constant transport coefficient assumption, , in which case, we have
See also
Clavin–Williams formula
References
Fluid dynamics
Combustion
Fluid dynamic instabilities | Clavin–Garcia equation | [
"Chemistry",
"Engineering"
] | 236 | [
"Fluid dynamic instabilities",
"Chemical engineering",
"Combustion",
"Piping",
"Fluid dynamics"
] |
72,860,972 | https://en.wikipedia.org/wiki/List%20of%20nuclear%20waste%20storage%20facilities%20in%20Canada | Nuclear waste is stored in Canada at the following locations:
See also
List of nuclear fuel storage facilities in Canada
Nuclear power in Canada
References
Nuclear energy in Canada
Radioactive waste | List of nuclear waste storage facilities in Canada | [
"Chemistry",
"Technology"
] | 34 | [
"Environmental impact of nuclear power",
"Radioactive waste",
"Hazardous waste",
"Radioactivity"
] |
72,866,575 | https://en.wikipedia.org/wiki/Response%20coefficient%20%28biochemistry%29 | Control coefficients measure the response of a biochemical pathway to changes in enzyme activity. The response coefficient, as originally defined by Kacser and Burns, is a measure of how external factors such as inhibitors, pharmaceutical drugs, or boundary species affect the steady-state fluxes and species concentrations. The flux response coefficient is defined by:
$$R^J_m = \frac{\partial \ln J}{\partial \ln m}$$
where $J$ is the steady-state pathway flux. Similarly, the concentration response coefficient is defined by the expression:
$$R^s_m = \frac{\partial \ln s}{\partial \ln m}$$
where $s$ is a steady-state species concentration and $m$ in both cases is the concentration of the external factor. The response coefficient measures how sensitive a pathway is to changes in external factors other than enzyme activities.
The flux response coefficient is related to control coefficients and elasticities through the following relationship:
$$R^J_m = \sum_i C^J_{e_i}\, \varepsilon^{v_i}_m$$
Likewise, the concentration response coefficient is related by the following expression:
$$R^s_m = \sum_i C^s_{e_i}\, \varepsilon^{v_i}_m$$
The summation in both cases accounts for cases where a given external factor, $m$, can act at multiple sites. For example, a given drug might act on multiple protein sites. The overall response is the sum of the individual responses.
These results show that the action of an external factor, such as a drug, has two components:
The elasticity indicates how potent the drug is at affecting the activity of the target site itself.
The control coefficient indicates how any perturbation at the target site will propagate to the rest of the system and thereby affect the phenotype.
When designing drugs for therapeutic action, both aspects must therefore be considered.
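As a toy numerical illustration of this two-component picture, a minimal sketch (hypothetical values, not taken from the article) combining control coefficients and elasticities into an overall flux response:

```python
# A minimal sketch of the response relationship: the overall response of the
# flux to an external factor m is the sum, over every site the factor acts on,
# of that site's control coefficient times its elasticity with respect to m.
def flux_response(control_coeffs, elasticities):
    """R^J_m = sum_i C^J_i * eps^i_m over all target sites i."""
    return sum(c * e for c, e in zip(control_coeffs, elasticities))

# Hypothetical drug acting on two enzymes of a pathway:
control = [0.7, 0.1]     # flux control coefficients of the two target enzymes
elastic = [-0.5, -0.8]   # elasticities of the two rates with respect to the drug
print(flux_response(control, elastic))  # -0.43: a 1% rise in drug lowers flux ~0.43%
```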
Proof of Response Theorem
There are various ways to prove the response theorems:
Proof by perturbation
The perturbation proof by Kacser and Burns is given as follows.
Given the simple linear pathway catalyzed by two enzymes and :
where is the fixed boundary species. Let us increase the concentration of enzyme by an amount . This will cause the steady state flux and concentration of , and all downstream species
beyond to increase. The concentration of is now decreased such that the flux and steady-state concentration of is restored back to their original values. These changes allow one to write down the following local and systems equations for the changes that occurred:
There is no term in either equation because the concentration of is unchanged. Both right-hand sides of the equations are guaranteed to be zero by construction. The term can be eliminated by combining both equations. If we also assume that the reaction rate for an enzyme-catalyzed reaction is proportional to the enzyme concentration, then , therefore:
Since
this yields:
.
This proof can be generalized to the case where may act at multiple sites.
Pure algebraic proof
The pure algebraic proof is more complex and requires consideration of the system equation $\mathbf{N}\,\mathbf{v}(\mathbf{s}(m), m) = 0$:
where $\mathbf{N}$ is the stoichiometry matrix and $\mathbf{v}$ the rate vector. In this derivation, we assume there are no conserved moieties in the network, but this doesn't invalidate the proof. Using the chain rule and differentiating with respect to $m$ yields, after rearrangement:
The inverted term is the unscaled control coefficient so that after scaling, it is possible to write:
To derive the flux response coefficient theorem, we must use the additional equation:
See also
Control coefficient (biochemistry)
Elasticity coefficient
Metabolic control analysis
References
Biochemistry methods
Metabolism
Mathematical and theoretical biology
Systems biology | Response coefficient (biochemistry) | [
"Chemistry",
"Mathematics",
"Biology"
] | 631 | [
"Biochemistry methods",
"Mathematical and theoretical biology",
"Applied mathematics",
"Cellular processes",
"Biochemistry",
"Metabolism",
"Systems biology"
] |
72,867,464 | https://en.wikipedia.org/wiki/Nuclear%20Now | Nuclear Now is a 2022 American documentary film, directed and co-written by Oliver Stone. The movie argues that nuclear energy is a solution needed to fight climate change because other renewable energies by themselves will not be sufficient in time for the planet to achieve carbon neutrality before climate change becomes irreversible.
The movie is based on the book A Bright Future: How Some Countries Have Solved Climate Change and the Rest Can Follow written by the US scientists Joshua S. Goldstein and Staffan A. Qvist. Goldstein co-authored the screenplay together with Oliver Stone. Producers include: Stefano Buono, Isabelle Boemeke, and Jon Kilik.
The documentary premiered out of competition at the 79th edition of the Venice Film Festival. Stone and Goldstein later also pledged for their propositions at the 53rd World Economic Forum 2023 in Davos, Switzerland. It features one of the final film scores of Vangelis.
Plot
As the narrator of the movie, Stone advocates nuclear power as a safe source of energy that can replace fossil fuels and thereby help to fight climate change. He predicts a doubling or quadrupling of the demand for electricity worldwide in the coming 30 years. In order to ensure sufficient backing with low-carbon power, Stone suggests a mass-production of nuclear power plants.
Stone argues that recycling, electric cars and consumption of environmentally friendly products are just attempts of middle class citizens to feel good but will not make a real difference for the climate. The script writers accuse the anti-nuclear movement of equating nuclear power with nuclear weapons and thus creating a primal fear against this form of energy. The writers furthermore imply that the oil and gas industry has been funding the campaigns.
Reception
Positive
A review in Variety points out that two sides debating pros and cons of nuclear power have been entrenched for a long time. The reviewer recommends an open-minded look at the movie, however, and speculates that it may have an impact similar to An Inconvenient Truth. At the 2022 Venice International Film Festival, the International Council for Film, Television and Audiovisual Communication (CICT ICFT) awarded Nuclear Now with the Enrico Fulchignoni prize. The jury stated that the movie adds new and bold scientific insights to the discussion of a controversial topic. Damon Wise of Deadline reviewed the film, calling it "a hard watch", but stating that it "puts forward a lot of unexpected proposals about nuclear energy, debunking powerful myths along the way."
Negative
See also
Nuclear power debate
References
External links
2022 films
2022 documentary films
Films directed by Oliver Stone
Films scored by Vangelis
Films with screenplays by Oliver Stone
Nuclear power
Climate change
Documentary films about nuclear technology | Nuclear Now | [
"Physics"
] | 549 | [
"Power (physics)",
"Physical quantities",
"Nuclear power"
] |
78,649,096 | https://en.wikipedia.org/wiki/Ensartinib | Ensartinib, sold under the brand name Ensacove, is an anti-cancer medication used for the treatment of non-small cell lung cancer. Ensartinib is an anaplastic lymphoma kinase (ALK) inhibitor used as the salt ensartinib hydrochloride. It is taken by mouth.
The most common adverse reactions include rash, musculoskeletal pain, constipation, cough, pruritus, nausea, edema, pyrexia, and fatigue.
Ensartinib was approved for medical use in the United States in December 2024.
Medical uses
Ensartinib is indicated for the treatment of adults with anaplastic lymphoma kinase (ALK)-positive locally advanced or metastatic non-small cell lung cancer who have not previously received an ALK-inhibitor.
History
Efficacy was evaluated in eXALT3 (NCT02767804), an open-label, randomized, active-controlled, multicenter trial in 290 participants with locally advanced or metastatic ALK-positive non-small cell lung cancer who had not previously received an ALK-targeted therapy. Participants were randomized 1:1 to receive ensartinib or crizotinib.
Society and culture
Legal status
Ensartinib was approved for medical use in the United States in December 2024.
Names
Ensartinib is the international nonproprietary name.
Ensartinib is sold under the brand name Ensacove.
References
External links
Kinase inhibitors
Benzamides
Chlorobenzenes
Ethers
Fluorobenzenes
Piperazines
Pyridazines
Antineoplastic drugs | Ensartinib | [
"Chemistry"
] | 353 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
78,649,718 | https://en.wikipedia.org/wiki/Grayanic%20acid | Grayanic acid is an organic compound found in certain lichens, particularly Cladonia grayi, where it serves as a secondary metabolite with notable taxonomic importance. Identified in the 1930s, it is now recognised as a chemotaxonomic marker that helps distinguish closely related species within the Cladonia chlorophaea species group. Grayanic acid crystallises as colourless, needle-like structures, melts at approximately , and displays distinctive fluorescence under ultraviolet light, aiding in its detection and study.
Chemically, grayanic acid is a depsidone, featuring two aromatic rings linked by ester and ether bonds. Its biosynthesis occurs in the fungal partner of the lichen and does not require the presence of the algal symbiont. Genetic research has identified a key biosynthetic gene cluster responsible for its formation, highlighting biochemical pathways and enzymes that convert precursor compounds into grayanic acid and related metabolites such as sphaerophorin.
Beyond its chemical characteristics, grayanic acid has proven invaluable in refining lichen taxonomy, as variations in its presence and concentration underpin subtle species distinctions. By comparing grayanic acid profiles across different populations and geographic regions, researchers have gained insights into evolutionary relationships, species distribution patterns, and the ecological roles that these fungal–algal partnerships play in diverse environments.
History
Grayanic acid was first isolated in the 1930s by Yasuhiko Asahina and Zyozi Simosato from the lichen species Cladonia grayi. In their initial study, they determined it to be a crystalline acid with a melting point of 185 °C and proposed a molecular formula of C21H24O7. However, further investigation was limited at the time due to a shortage of material.
By 1943, Alexander W. Evans highlighted the utility of Asahina's microchemical methods, including microcrystallisation, in identifying grayanic acid. Evans described its needle-like crystals, which often formed radiating clusters under specific conditions, and noted a melting point near , consistent with Asahina's findings.
In 1963, Shoji Shibata and Hsiich-Ching Chiang revised the molecular formula to C23H26O7 and refined the melting point to 186–189 °C, aligning it with subsequent modern analyses. Their work also supported Asahina's classification of the Cladonia chlorophaea complex into distinct species based on chemical markers, such as grayanic acid, cryptochlorophaeic acid, and merochlorophaeic acid. However, Elke Mackenzie suggested that such differences were better explained as chemical strains (chemotypes) within a single species. Later synthetic studies in 1976 determined a slightly lower range of 181.5–182.5 °C for synthetic grayanic acid, highlighting minor variations attributable to synthetic purity.
Structure
The molecular structure of grayanic acid consists of a depside skeleton with two benzene rings connected by both ester (-CO-O-) and ether (-O-) linkages, forming a depsidone. The molecule contains one methoxy group (H3CO-), one free hydroxyl group (-OH), and a chelated carboxyl group (-COOH). Nuclear magnetic resonance studies revealed the presence of alkyl side chains, specifically determined to be either (1) CH3 and C7H15 or (2) C2H5 and C6H13. The complete systematic name for the compound is 6-heptyl-8-hydroxy-3-methoxy-1-methyl-11-oxo-11H-dibenzo[b,e][1,4]dioxepin-7-carboxylic acid.
While the initial structural assignment was based primarily on spectroscopic evidence, some uncertainty remained regarding the precise positions of the alkyl groups. This ambiguity was definitively resolved through total synthesis in 1976, which confirmed the original structural proposal. The compound's structure is notably similar to sphaerophorin, another lichen metabolite found in the genus Sphaerophorus.
Properties
Physical properties
Grayanic acid forms radiating clusters of colourless needles upon crystallisation, and has a melting point of 186–189°C. It dissolves readily in ethyl acetate, methyl acetate, ethanol, and chloroform, is sparingly soluble in benzene, and is insoluble in hexane and petroleum ether. These solubility characteristics facilitate its extraction and crystallisation from lichen material. Synthetic material provided a more precise melting point, measured at 181.5–182.5°C.
Nuclear magnetic resonance spectroscopy identifies signals at δ 0.89 (deformed triplet, methyl), 1.26 (broad signal, five methylene groups), 2.50 (singlet, methyl), 3.24 (broad signal, ArCH₂), 3.83 (singlet, methoxy), and 6.62–6.72 (aromatic protons). Mass spectrometry detects a molecular ion peak at m/z 414 (M+, C23H26O7), with characteristic fragmentation patterns including peaks at m/z 396 (M+-H₂O), 370 (M+-CO₂), and 165 (A-ring fragment). High-resolution mass spectrometry verifies the molecular formula, providing an exact mass of 414.1679. The compound has identical Rf values across multiple solvent systems when compared with authentic natural samples.
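The quoted exact mass can be cross-checked against the molecular formula; the minimal sketch below sums standard monoisotopic atomic masses for C23H26O7 and reproduces the reported value to four decimal places.

```python
# A minimal sketch: verify the reported exact (monoisotopic) mass of grayanic
# acid, C23H26O7, against the HRMS value of 414.1679 quoted above.
MONOISOTOPIC = {"C": 12.000000, "H": 1.0078250319, "O": 15.9949146221}

def monoisotopic_mass(formula: dict) -> float:
    """Sum monoisotopic atomic masses weighted by atom counts."""
    return sum(MONOISOTOPIC[element] * count for element, count in formula.items())

grayanic_acid = {"C": 23, "H": 26, "O": 7}
print(f"{monoisotopic_mass(grayanic_acid):.4f}")  # ~414.1679, matching the HRMS result
```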
The compound fluoresces blue under ultraviolet light, a distinctive property. This fluorescence aids in studying its accumulation in laboratory cultures of the fungal partner. When the fungus is grown in culture, grayanic acid forms visible extracellular deposits on aerial fungal filaments (hyphae). These deposits appear as patches or bands along the hyphae, accumulating more densely in older regions farther from the growing tips. The deposits dissolve readily in acetone or methanol, leaving only the fungal cell walls' natural fluorescence.
Chemical properties
The chemical behaviour of grayanic acid includes several distinctive reactions and spectroscopic characteristics. In ethanolic solution, it forms a violet colour with 1% ferric chloride, and a pale yellow colour with diazonium reagent. Its ultraviolet absorption spectrum shows two peaks (λmax): one at 258 nm (log ε 4.10), and another at 300–310 nm (log ε of 3.5). Infrared spectroscopy identifies structural features such as a chelated carboxyl group at 1650 cm⁻¹, a lactonic linkage at 1750 cm⁻¹, and benzenoid rings with bands at 1570 and 1610 cm⁻¹. The compound remains stable under methanolysis, showing no changes after boiling in methanol for 18 hours.
Nuclear magnetic resonance studies of grayanic acid in chloroform show proton signals at τ = 9.10 (terminal methyl groups of long alkyl chains), τ = 8.63 (intermediate methylenes), and τ = 6.75 (end methylenes attached to the benzene ring). These signals, compared with those of similar compounds, helped identify the positions of functional groups in the molecule. In acetone, benzene ring protons exhibit chemical shifts at 6.13, 6.66, and 6.80 ppm, matching the pattern of related compounds like sphaerophorin.
Thin-layer chromatography shows grayanic acid as a UV+ pale blue spot before heating, which becomes pale pinkish-brown with a UV+ purple hue after acid spray and heating. This chromatographic behaviour aids in identifying grayanic acid in complex lichen extracts, especially in chemotaxonomic studies distinguishing species like Neophyllis melacarpa and N. pachyphylla by their metabolite profiles.
Grayanic acid displays characteristic behaviour in solvents and chemical tests. During bicarbonate solution tests, it forms an oily layer between ether and aqueous phases, in addition to its standard solubility properties. It fluoresces green when treated with potassium hydroxide and chloral hydrate but gives a negative result in the homofluorescein reaction. These chemical properties helped classify grayanic acid as an orcinol-type depsidone rather than a simple depside.
Reactivity
Grayanic acid undergoes chemical transformations that aid in understanding its structure and reactivity. It readily forms a mono-acetate derivative (melting point 155–157°C) and can be converted to a methyl ether methyl ester (melting point 88–90°C). Acetylgrayanic acid is prepared by treating grayanic acid with acetic anhydride and sulfuric acid. The resulting crystals melt at 57–59°C after recrystallisation from benzene and n-hexane.
Under ice-cooling, potassium hydroxide converts grayanic acid into grayanoldicarboxylic acid, while barium hydroxide treatment yields grayanolic acid. These reactions illustrate the compound's reactivity with bases and its capacity to form structurally distinct derivatives.
Grayanic acid also shows characteristic solubility behaviour in chemical tests. For example, when shaken with aqueous sodium bicarbonate, it forms an oily layer between the ethereal and aqueous phases, a property that facilitates its separation during analysis.
Occurrence
Grayanic acid was first discovered and isolated from Cladonia grayi. Initial extractions yielded about 0.7% grayanic acid from raw lichen material, producing 350 milligrams of pure crystals from 50 grams of lichen. Ethanol and chloroform facilitated this yield, aiding the purification process.
Although initially identified only in C. grayi, later research detected grayanic acid in other Cladonia species. One example is Cladonia anitae, an endemic species discovered in 1982 along the Atlantic Coast of southeastern North Carolina. In this species, grayanic acid is a major metabolite, found with usnic acid and rhodocladonic acid. Grayanic acid is also a major secondary metabolite in Jarmania tristis, a byssoid lichen endemic to Tasmania's cool temperate rainforests. In J. tristis, it co-occurs with usnic acid and 4-O-demethylgrayanic acid, shaping the species' distinctive chemistry.
Grayanic acid production varies geographically among C. grayi populations. Caribbean specimens exhibit chemical variants, with some populations producing grayanic acid alongside related compounds like stenosporonic and divaronic acids. This variation appears geographically influenced, with West Indian specimens showing different proportions of these compounds compared to North American ones. For example, Jamaican specimens typically contain grayanic acid and stenosporonic acid as major constituents, while other populations often produce grayanic acid alone.
Laboratory cultivation has revealed the conditions required for grayanic acid production by the fungal partner (mycobiont) of C. grayi. Isolated from its algal partner, the fungus produces substantial grayanic acid, particularly on solid media under dry conditions. Production starts days after transferring the fungus from liquid to solid growth medium and increases as aerial fungal filaments develop. Under optimal conditions, the cultured fungus can achieve production rates comparable to those of some non-lichen fungi producing similar compounds. The fungus's ability to synthesise grayanic acid in pure culture shows that the compound, while characteristic of the intact lichen, does not require the algal partner.
Taxonomic significance
Grayanic acid is integral to lichen taxonomy, particularly for distinguishing species in the Cladonia chlorophaea complex. Initially used with taste tests to separate species, detailed studies in the 1970s revealed more nuanced relationships between chemical composition and morphology.
Studies of North Carolina populations showed a correlation between grayanic acid and specific morphological traits. C. grayi, which contains grayanic acid, consistently exhibits smaller granules (soredia) in its podetial cups than C. cryptochlorophaea. These differences, unaffected by fumarprotocetraric acid content, indicate grayanic acid's taxonomic relevance. Similarly, in the Australasian genus Neophyllis, grayanic acid is a key chemotaxonomic marker distinguishing N. melacarpa from N. pachyphylla. N. melacarpa consistently produces grayanic acid with melacarpic acid and sometimes fumarprotocetraric acid, whereas N. pachyphylla contains only melacarpic acid. These chemical distinctions help resolve taxonomic ambiguities between the two species.
Taxonomic interpretations of chemical variation in these lichens have changed over time. Early classifications focused on the presence or absence of fumarprotocetraric acid (a bitter compound), but later studies suggested this variation reflects different genotypes of the same species rather than separate species. This pattern mirrors chemical variation seen in other lichens, such as the Cetraria islandica complex.
North American distribution studies reveal that specimens with both grayanic acid and fumarprotocetraric acid are more common in mountainous regions, while coastal populations primarily contain grayanic acid alone. Despite these chemical differences, the variants seem to belong to the same species, sharing consistent morphology aside from fumarprotocetraric acid presence.
Synthesis
The first total synthesis of grayanic acid was accomplished by Peter Djura and Melvyn Sargent in 1976 at the University of Western Australia. The key step in their synthetic route was an Ullmann reaction to construct the diaryl ether linkage. Their successful synthesis not only provided access to the compound but also definitively confirmed its structural assignment.
The synthetic pathway proceeded through several key intermediates. Initially, the researchers constructed the two aromatic rings separately. The first ring component was prepared from methyl acetoacetate and (E)-methyl dec-2-enoate through a series of transformations. The second ring was synthesised starting from a benzyl-protected hydroxybenzoate.
The crucial Ullmann coupling reaction joined these two components with a 73% yield, forming the diaryl ether intermediate. Following this step, hydrogenolysis produced a hydroxy acid which was then converted to methyl O-methylgrayanate through lactonisation with trifluoroacetic anhydride. The final stages of the synthesis involved careful manipulation of protecting groups to yield grayanic acid, which was identical in all respects to the natural product isolated from lichens.
Biosynthesis
The biosynthesis of grayanic acid involves fungal polyketide synthases and subsequent modifications, following a pathway similar to other lichen depsidones. Grayanic acid shares biosynthetic origins with sphaerophorin, a known lichen depside. Structural similarities and chemical transformation studies led Shibata and Chiang to propose sphaerophorin as a biosynthetic precursor to grayanic acid. The relationship is supported by shared structural features, such as similar methoxy and hydroxyl group arrangements on their benzenoid rings.
These foundational insights have been refined through genetic and biochemical studies. A 1985 study showed that grayanic acid biosynthesis depends entirely on the fungal genetics of C. grayi. Resynthesised lichens, formed by pairing fungal spores from grayanic acid-producing chemotypes with algal symbionts from unrelated lichens, consistently produced grayanic acid. This finding confirmed that the algal partner does not influence the chemotype, establishing the fungal component as the sole regulator of secondary metabolite production.
A 1992 study demonstrated that the fungal partner (mycobiont) of Cladonia grayi produces grayanic acid independently of its algal partner. Biosynthesis was linked to the development of aerial hyphae—thread-like fungal filaments that develop blue-fluorescent patches of grayanic acid under ultraviolet light. Production increased significantly under conditions of water stress and air exposure.
Genetic studies have elucidated the molecular mechanisms of grayanic acid biosynthesis. A biosynthetic gene cluster in C. grayi, including CgrPKS16 (a polyketide synthase that assembles the depside precursor 4-O-demethylsphaerophorin), drives the process. The pathway includes CYP682BG1, a cytochrome P450 monooxygenase for oxidative coupling, and an O-methyltransferase that adds a methyl group to complete the synthesis.
Grayanic acid belongs to a broader family of orcinol-type depsidones produced by lichens in the Cladonia chlorophaea group. These compounds form via biosequential patterns, with simpler depsides converting into more complex depsidones. This dynamic biosynthetic network produces related compounds, such as stenosporonic and divaronic acids, which exhibit variations in their carbon side-chain lengths across populations. This variation highlights the ecological and taxonomic relevance of grayanic acid in lichen communities.
The biosynthetic process shows distinct patterns during laboratory cultivation. Under suitable growing conditions, fungi first produce simpler depsides like 4-O-demethylsphaerophorin, followed by more complex depsidones like grayanic acid. This sequential process reflects the gene-driven enzymatic pathway and demonstrates the metabolic flexibility of lichen fungi.
Related compounds
Grayanic acid shares key structural features with sphaerophorin, a depside found in Sphaerophorus lichens. Cryptochlorophaeic acid and merochlorophaeic acid, structurally related to grayanic acid, were first identified in the Cladonia chlorophaea complex. These compounds, described in detail by Shibata and Chiang, share structural similarities with grayanic acid, including benzenoid and ester group arrangements.
In 1985, two additional related depsidones were reported: stenosporonic acid (C23H26O7) and divaronic acid (C21H22O7). These compounds are lower homologs in the same chemical series as grayanic acid, sharing its basic structure but differing in carbon side-chain lengths. Both compounds were first identified in Caribbean populations of C. grayi, where they occur alongside grayanic acid in varying proportions. Mass spectrometry confirmed their structures, with stenosporonic acid displaying a characteristic molecular ion at m/z (mass-to-charge ratio) 414 and divaronic acid at m/z 386.
Discovered in 1982, 4-O-demethylgrayanic acid (C22H24O7) naturally co-occurs with grayanic acid in several lichen species. This compound is present in all studied grayanic acid-producing lichens, including Cladonia and Gymnoderma melacarpum. Congrayanic acid, another related compound, may result from the nonenzymatic hydrolysis of grayanic acid, though it usually appears in trace amounts and is challenging to detect in unmanipulated extracts.
In 1980, congrayanic acid (C23H28O8) was first synthesised by treating grayanic acid with aqueous sodium hydroxide, cleaving the ester linkage. It crystallises as colourless prisms with a melting point of 183–183.5°C. This process confirmed structural aspects of grayanic acid, as congrayanic acid retained key spectroscopic features of the parent compound.
Researchers have prepared several derivatives of grayanic acid, including:
Methyl O-methylgrayanate, which forms needles with a melting point of 86.5–87.5°C
Benzyl grayanate, crystallising as prisms with a melting point of 101.5–102°C
Grayanoldicarboxylic acid, produced by treatment with potassium hydroxide
Grayanic acid belongs to the broader depsidone class, presumably formed through the oxidative cyclisation of p-depsides. This relationship is supported by the occasional, though rare, co-occurrence of depside-depsidone pairs in lichens.
References
Lichen products
Benzoic acids
Phenols
O-methylated natural phenols
Heptyl compounds
Benzodioxepines
Methoxy compounds
Heterocyclic compounds with 3 rings | Grayanic acid | [
"Chemistry"
] | 4,305 | [
"Natural products",
"Lichen products"
] |
78,653,089 | https://en.wikipedia.org/wiki/NGC%205857 | NGC 5857 is a barred spiral galaxy in the constellation of Boötes. Its velocity with respect to the cosmic microwave background is , which corresponds to a Hubble distance of . In addition, 20 non-redshift measurements give a distance of . It was discovered by German-British astronomer William Herschel on 27 April 1788.
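As a generic illustration of how a Hubble distance follows from a recession velocity via d = v/H0, the minimal sketch below uses assumed values for the velocity and the Hubble constant; they are not the measurements cited for NGC 5857.

```python
# A minimal sketch of the Hubble-distance relation d = v / H0.
H0 = 70.0  # Hubble constant in km/s/Mpc (assumed value for illustration)

def hubble_distance_mpc(velocity_km_s: float) -> float:
    """Distance in megaparsecs implied by a recession velocity."""
    return velocity_km_s / H0

print(round(hubble_distance_mpc(5000.0), 1))  # ~71.4 Mpc for an assumed 5000 km/s
```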
The SIMBAD database lists NGC 5857 as a Seyfert II galaxy, i.e. it has a quasar-like nucleus with very high surface brightness whose spectrum reveals strong, high-ionisation emission lines, but unlike quasars, the host galaxy is clearly detectable.
NGC 5859 Group
According to A. M. Garcia, NGC 5857 is a member of the NGC 5859 galaxy group (also known as LGG 394). This group has six members, including NGC 5859, UGC 9620, UGC 9622, UGC 9672, and UGC 9777.
Abraham Mahtessian mentions that NGC 5857 and NGC 5859 form a pair of galaxies and they are in gravitational interaction.
Supernovae
Two supernovae have been observed in NGC 5857:
SN1950H (type unknown, mag. 17.6) was discovered by Fritz Zwicky on 17 March 1950.
SN1955M (type unknown, mag. 14.5) was discovered by Fritz Zwicky on 14 May 1955.
See also
List of NGC objects (5001–6000)
References
External links
5857
053995
09724
+03-39-004
Boötes
17880427
Discoveries by William Herschel
Barred spiral galaxies
Seyfert galaxies
Interacting galaxies | NGC 5857 | [
"Astronomy"
] | 347 | [
"Boötes",
"Constellations"
] |
78,653,518 | https://en.wikipedia.org/wiki/Ilunocitinib | Ilunocitinib, sold under the brand name Zenrelia, is a veterinary medication used for the treatment of pruritus (itching) in dogs. It is a non-selective janus kinase inhibitor.
Ilunocitinib was approved for medical use in the United States in September 2024, and in Canada in December 2024.
Medical uses
Ilunocitinib is indicated for the control of pruritus associated with allergic dermatitis and control of atopic dermatitis in dogs at least twelve months of age.
Contraindications
It is not safe to administer vaccines to dogs that are concurrently receiving ilunocitinib.
Society and culture
Legal status
Ilunocitinib was approved for medical use in the United States in September 2024, and in Canada in December 2024.
Names
Ilunocitinib is the international nonproprietary name.
Ilunocitinib is sold under the brand name Zenrelia.
References
Dog medications
Immunosuppressants
Janus kinase inhibitors
Azetidines
Cyclopropanes
Nitriles
Pyrazoles
Pyrrolopyrimidines
Sulfonamides | Ilunocitinib | [
"Chemistry"
] | 241 | [
"Nitriles",
"Functional groups"
] |
78,668,010 | https://en.wikipedia.org/wiki/Single%20pilot%20operations | In aviation, Single Pilot Operations (SPO) refers to a proposal for commercial flights operated with one pilot, where previously two would be required.
Single pilot operations will require improvements in technology including aircraft and cockpit design, and changes to pilot training. Safety must be proved to win acceptance by regulators and the public.
History
Historically, large aircraft required several personnel on the flight deck, such as a navigator, a flight engineer, and a dedicated radio operator. Improvements in automation, reliability and technology such as autopilot and satellite navigation have enabled modern large aircraft to operate safely with only two pilots on duty.
With further technological improvements, it may be possible safely to reduce crew requirements to one, providing cost savings.
The European Union Aviation Safety Agency (EASA) has been investigating Extended Minimum Crew Operations (eMCO), where an aircraft could be operated by one pilot in the cruise. This would be an initial requirement before single pilot operations might be allowed at a later stage. The research project runs from 2022 to 2025. Airbus and Dassault have expressed interest in eMCO.
Airbus completed its Autonomous Taxi, Take-Off and Landing (ATTOL) project in 2020, demonstrating an autonomous flight with an A350-1000 aircraft. In 2023, Airbus project Dragonfly used a combination of normal and infrared cameras, as well as radar, to assist pilots in various situations. In 2024 Airbus began testing an autonomous aircraft taxi system called "Optimate".
Opposition
Proposals for Single Pilot Operations are opposed by several pilots' trade unions, including the International Federation of Air Line Pilots' Associations (IFALPA), the American Air Line Pilots Association (ALPA), the European Cockpit Association, and the British Air Line Pilots Association.
In their white paper, ALPA argue that two-pilot operations reduces errors through cross-checking, workload sharing and better decision-making, and provides redundancy in the case of pilot incapacitation. They argue that pilots learn from each other when working together; that pilots are more versatile than aircraft equipment and sensors; and that pilots are better at autonomous decision making. They also raise concerns about cybersecurity.
See also
Automated flight attending
Autonomous aircraft
Germanwings Flight 9525
Single-pilot resource management
Uninterruptible autopilot
References
External links
Airbus – Autonomous Flight
One Means None, European Cockpit Association campaign website
Aircraft automation | Single pilot operations | [
"Engineering"
] | 484 | [
"Automation",
"Aircraft automation"
] |
77,289,309 | https://en.wikipedia.org/wiki/Calibrator%20star | A calibrator star is a star that is typically used tor calibration purposes on high-sensitized sensors located on space telescopes.
Calibrator stars do not usually follow specific criteria, but are normally hand-picked for different reasons.
Definition
Infrared and optically bright stars may be observed for calibration purposes by satellites, particularly those with sensitivity to both infrared and visible radiation. The stars chosen generally meet the following criteria: they have a visual magnitude that is equal to or less than +6, and an IR brightness (in the 1–5 micrometer range) greater than that of Vega.
The stars are strictly southern objects (i.e., their declinations are negative), and most are cool stars of spectral classes K and M. While these are not the only stars that might serve for these purposes, they are well distributed across the southern sky and some should be visible at all times.
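As a toy illustration of these hand-picked criteria, a minimal sketch encoding them as a simple filter; the thresholds and example values are assumptions for illustration (treating "brighter than Vega" as an IR magnitude below 0, since Vega defines magnitude ~0 in the classical system), not taken from any actual calibrator catalog.

```python
# A minimal sketch of the selection check described above: visual magnitude no
# fainter than +6, infrared brightness exceeding Vega's, and a southern position.
def is_candidate(v_mag: float, ir_mag: float, declination_deg: float) -> bool:
    """True if a star meets the calibrator criteria sketched above."""
    return v_mag <= 6.0 and ir_mag < 0.0 and declination_deg < 0.0

print(is_candidate(v_mag=2.0, ir_mag=-1.5, declination_deg=-45.0))  # assumed bright southern star -> True
print(is_candidate(v_mag=7.2, ir_mag=0.5, declination_deg=-10.0))   # too faint -> False
```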
List
Catalog
A catalog of recommended calibrator stars exists, listing 1,510 stars. The catalog gives the magnitude, mass, and other statistics for each star.
See also
Lists of stars
First light (astronomy)
References
Stars
Astronomical spectroscopy
Astrometry
Space telescopes | Calibrator star | [
"Physics",
"Chemistry",
"Astronomy"
] | 242 | [
"Spectrum (physical sciences)",
"Stars",
"Astrometry",
"Astrophysics",
"Spectroscopy",
"Space telescopes",
"Astronomical spectroscopy",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
77,291,462 | https://en.wikipedia.org/wiki/Milestone-Based%20Fusion%20Development%20Program | The Milestone-Based Fusion Development Program is an ongoing program under the United States Department of Energy's Office of Fusion Energy Sciences to support the development of a fusion pilot plant (FPP) and eventually commercialize fusion power. As of 2024, eight private companies have received a total of $46 million for the first 18-month period of performance. The program is planned to run for five years and culminate in one or more fusion pilot plants.
History
The need for a fusion pilot plant has been recognized throughout the program to develop fusion power. Most recently before the announcement of the Milestone-Based Fusion Development Program, in 2021 the National Academies of Sciences, Engineering, and Medicine (NASEM) released a report which highlighted the need for such a program and advised its creation. In 2022 the Biden administration and DOE announced a Bold Decadal Vision for Commercial Fusion Energy which included plans to fund support for pilot plant development program.
The funding opportunity announcement (FOA) was announced by the US Department of Energy in September 2022, and $50 million was earmarked for the program.
Applications were received in December 2022. Eight companies were selected for negotiation in May 2023. However, agreements were not signed with the awardees until more than a year later in June 2024, reportedly due to concerns over how intellectual property would be handled.
In June 2024, at a White House summit the Department of Energy announced that all eight companies had successfully concluded detailed milestones negotiations with the federal government and that agreements had been signed to commence the Milestone Program.
Awardees
The eight awardees are:
Commonwealth Fusion Systems (Cambridge, MA)
Focused Energy Inc. (Austin, TX)
Thea Energy, Inc (formerly Princeton Stellarators Inc., Branchburg, NJ)
Realta Fusion Inc. (Madison, WI)
Tokamak Energy Inc. (US subsidiary of UK-based company, Bruceton Mills, WV)
Type One Energy Group (Madison, WI)
Xcimer Energy Inc. (Redwood City, CA)
Zap Energy Inc. (Everett, WA)
The awardees include 2 companies pursuing the tokamak approach, 2 companies pursuing the stellarator approach, 2 companies pursuing inertial confinement fusion, one company pursuing the magnetic mirror approach, and one company pursuing the Z-pinch approach.
Structure
The program is structured as a public–private partnership between the DOE and the awardees. The companies unlock matching funds upon completion of quantitative milestones, up to the full award amount.
The program is structured in three periods of performance: One spanning the first 18 months, one spanning the second 18 months, and one spanning the remaining 24 months of the five-year term. The first period of performance will presumably end around the end of 2025, assuming a start date based on the June 2024 announcement.
Only the first period of performance has been announced and awarded. The $46 million number is for the first period of performance. Subsequent periods of performance have not as of 2024 been announced, appropriated, or awarded.
The applicants were encouraged to propose collaborations with US national laboratories, with which the DOE would contract separately and pay directly. Oak Ridge National Laboratory will work with six of the eight companies.
Dispute over intellectual property
The year-long gap between the announcement of awardees and their signing agreements with the DOE was apparently due to a dispute over how companies' intellectual property would be treated under the award. Reporting stated that the companies' rights to existing and subject intellectual property was not sufficiently safeguarded under the DOE's initial proposed terms. At the time of the announcement, the total private investment in Commonwealth Fusion Systems was larger than $2 billion.
Awardee table
References
United States Department of Energy
Fusion power | Milestone-Based Fusion Development Program | [
"Physics",
"Chemistry"
] | 754 | [
"Nuclear fusion",
"Fusion power",
"Plasma physics"
] |
75,665,246 | https://en.wikipedia.org/wiki/Mikael%20Kubista | Mikael Kubista (born 13 August 1961) is a Czech-born Swedish chemist and entrepreneur who works in the field of molecular diagnostics. Since 2007, he has served as a Professor of Chemistry and Head of the Department of Gene Expression Profiling at the Biotechnology Institute, Czech Academy of Sciences in the Czech Republic.
Kubista has contributed to the field of quantitative real-time PCR (qPCR), with his work recognized as part of the early research in this area.
Kubista was a member of the research team at Astra Hässle, where the team focused on investigating omeprazole, an inhibitor of the gastric H+/K+-ATPase. The drug is now marketed under the trade names Losec and Nexium, widely prescribed medications for the treatment of gastric ulcer. Additionally, Kubista is the Chairman of the Board of MultiD Analyses AB and the founder of TATAA Biocenter.
Early life
Kubista was born in 1961 in the former Czechoslovakia; his father was a medical doctor. His father received a scholarship and relocated to Sweden. At the age of 7, in 1968, Kubista went to Sweden to visit his father. However, on that very day, Soviet-led Warsaw Pact forces invaded Czechoslovakia in the Warsaw Pact invasion of Czechoslovakia, and as a result the family decided to stay, making Sweden their new home.
Education
He completed his undergraduate studies at University of Gothenburg, earning a B.Sc. degree in chemistry in 1984. He then pursued a Licentiate in Physical Chemistry at the Institute of Chemistry and Chemical Engineering, Chalmers University of Technology in Göteborg, which he completed in 1986. Kubista obtained his Ph.D. in chemistry from Chalmers University of Technology. Following his doctoral studies, he conducted postdoctoral research at institutions such as La Trobe University in Melbourne, Australia, and Yale University in New Haven, US. Additionally, he has held visiting professor positions at various universities, including the University of Maryland in College Park, US, in June 2000, and the University of A Coruña in Spain, during September–November 2003 and July 2006 to June 2007. Since 2007, Kubista is serving as an adjunct professor at the Institute of Biotechnology, Czech Academy of Sciences.
Career
Academic career
Kubista began his academic career in 1991 as an Assistant Professor in the Department of Physical Chemistry at Chalmers University of Technology. From 1993 to 1997, he served as an Associate Professor in the Department of Biochemistry at the same institution. Following this, he held the position of Professor in the Department of Biochemistry at Chalmers University of Technology from 1997 to 2006. Since 2007, he is the Head of the Department of Gene Expression at the Institute of Biotechnology, BIOCEV, Czech Academy of Sciences.
Entrepreneurial activities
In 1998, Kubista founded LightUp Technologies AB following his research on LightUp probes; the company specializes in the development of real-time PCR tests for human infectious diseases. Three years later, in 2001, Kubista's research led to the establishment of MultiD Analyses AB, which develops the GenEx software for gene expression data analysis, and of TATAA Biocenter, which provides qPCR and gene expression analysis. The latter became known for its qPCR training services globally and its provision of qPCR services, particularly in Europe. TATAA Biocenter was the first laboratory in Europe to obtain flexible ISO 17025 accreditation and also was the first to provide COVID tests at the onset of the pandemic. In 2014, Kubista implemented non-invasive prenatal testing (NIPT) in Sweden and subsequently founded Life Genomics AB. In 2020, Kubista co-founded SimSen Diagnostics, a company focused on developing technology for liquid biopsy analyses.
Advisorial roles and memberships
Kubista holds several positions and advisory roles within the scientific and biotechnology communities including: Roche, ThermoFisher, Qiagen, Bio-Rad, and RealSeq Biosciences. He is also a member of the Scientific Advisory Council of Genetic Engineering News.
Kubista has also been involved in the establishment of modern molecular diagnostics in developing countries. Since 1999, he has served as an advisor to UNESCO, providing guidance and assistance to countries such as: Libya, Egypt, Iran, Grenada, and Ghana.
Kubista is an expert advisor for the European Commission Research Directorate General. Kubista advises the United Nations Educational Scientific and Cultural Organization (UNESCO) and is part of the scientific advisory board for the International Biotechnology Research in Tripoli, Libya, under UNESCO.
Selected findings and publications
Studied and identified chromophores and a variety of dyes commonly used as biomolecule labels, such as tryptophan, DAPI, fluorescein, thiazole orange, and BEBO.
Explained DNA strand exchange in homologous recombination.
Applying Widlund experiment, identified specific nucleosome positioning sequences.
Uncovered mechanism of oncogene activation involving the formation of internal G-quadruplexes.
Designed a probe that exhibit luminescence upon binding to specific nucleic acids.
Techniques for gene expression at the level of individual cells and subcellular compartments.
The occurrence of horizontal transfer of mitochondria within living organisms.
Awards and recognition
Was recognized by ScholarGPS as one of the 50 highly ranked scholars of 2022.
In 2021, Kubista's organization TATAA was included in the Sweden Technology Fast 50 list
In 2019, Global Health & Pharma recognized and awarded TATAA as the "Best Nucleic Acid Analysis Service Provider – Europe."
In 2013 TATAA Biocenter was honored with the Frost & Sullivan Award for Customer Value Leadership for their outstanding services in analyzing genetic material
In 2012, Pioneer of the year in western Sweden
In 1996, won Innovation Cup in western Sweden for the LightUp probes
References
1961 births
Living people
People from Podbořany
Biochemists
Molecular biologists
Swedish biochemists
University of Gothenburg alumni | Mikael Kubista | [
"Chemistry",
"Biology"
] | 1,213 | [
"Molecular biologists",
"Biochemists",
"Biochemistry",
"Molecular biology"
] |
75,671,117 | https://en.wikipedia.org/wiki/Lobster-eye%20optics | Lobster-eye optics are a biomimetic design, based on the structure of the eyes of a lobster with an ultra wide field of view, used in X-ray optics. This configuration allows X-ray light to enter from multiple angles, capturing more X-rays from a larger area than other X-ray telescopes. The idea was originally proposed for use in X-ray astronomy by Roger Angel in 1979, with a similar idea presented earlier by W. K. H. Schmidt in 1975. It was first used by NASA on a sub-orbital sounding rocket experiment in 2012. The Lobster Eye Imager for Astronomy, a Chinese technology demonstrator satellite, was launched in 2022. The Chinese Einstein Probe, launched in 2024, is the first major space telescope to use lobster-eye optics. Several other such space telescopes are currently under development or consideration.
Description
While most animals have refractive eyes, lobsters and other crustaceans have reflective eyes. The eyes of a crustacean contain clusters of cells, each reflecting a small amount of light from a particular direction. Lobster-eye optics technology mimics this reflective structure. This arrangement allows the light from a wide viewing area to be focused into a single image. The optics are made of microchannel plates. X-ray light can enter small tubes within these plates from multiple angles, and is focused through grazing-incidence reflection that gives a wide field of view. That, in turn, makes it possible to locate and image transient astronomical events that could not have been predicted in advance.
The field of view (FoV) of a lobster-eye optic, which is the solid angle subtended by the optic plate to the curvature center, is limited only by the optic size for a given curvature radius. Since the micropore optics are spherically symmetric in essentially all directions, theoretically, an idealized lobster-eye optic is almost free from vignetting except near the edge of the FoV. Micropore imagers are created from several layers of lobster-eye optics that creates an approximation of Wolter type-I optical design.
History
Only three geometries that use grazing incidence reflection of X-rays to produce X-ray images are known: the Wolter system, the Kirkpatrick-Baez system, and the lobster-eye geometry.
The lobster-eye X-ray optics design was first proposed in 1979 by Roger Angel. His design is based on Kirkpatrick-Baez optics, but requires pores with a square cross-section, and is referred to as the "Angel multi-channel lens". This design was inspired directly by the reflective properties of lobster eyes. Before Angel, an alternative design involving a one-dimensional arrangement consisting of a set of flat reflecting surfaces had been proposed by W. K. H. Schmidt in 1975, known as the "Schmidt focusing collimator objective". In 1989, physicists Keith Nugent and Stephen W. Wilkins collaborated to develop lobster-eye optics independently of Angel. Their key contribution was to open up an approach to manufacturing these devices using microchannel plate technology. This lobster-eye approach paved the way for X-ray telescopes with a 360-degree view of the sky.
In 1992, Philip E. Kaaret and Phillip Geissbuehler proposed a new method for creating lobster-eye optics with microchannel plates. Micropores required for lobster-eye optics are difficult to manufacture and have strict requirements. The pores must have widths between 0.01 and 0.5 mm and should have a length-to-width ratio of 20–200 (depending on the X-ray energy range); they need to be coated with a dense material for optimal X-ray reflection. The pores' inner walls must be flat and they should be organized in a dense array on a spherical surface with a radius of curvature of 2F, ensuring an open fraction greater than 50% and pore alignment accuracy between 0.1 and 5 arc minutes towards a common center.
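A minimal sketch of the geometry implied by these requirements is given below: the focal length follows from the plate's radius of curvature (a sphere of radius R focuses at roughly R/2, consistent with the "radius of curvature of 2F" above), and the pore length-to-width ratio sets the typical grazing angle of a single reflection. The specific radius and pore width used are illustrative assumptions.

```python
# A minimal sketch (assumed illustrative numbers) of lobster-eye micropore geometry.
import math

def focal_length(radius_of_curvature_m: float) -> float:
    """Spherical micropore optic of radius R focuses at approximately R/2."""
    return radius_of_curvature_m / 2.0

def grazing_angle_deg(pore_width_m: float, pore_length_m: float) -> float:
    """Grazing angle (degrees) for a single reflection spanning the pore length."""
    return math.degrees(math.atan(pore_width_m / pore_length_m))

print(focal_length(0.75))                               # 0.375 m for an assumed R = 0.75 m plate
print(round(grazing_angle_deg(40e-6, 20 * 40e-6), 2))   # ~2.86 deg for L/d = 20
print(round(grazing_angle_deg(40e-6, 200 * 40e-6), 2))  # ~0.29 deg for L/d = 200
```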
Similar optics designs include honeycomb collimators (used in NEAR Shoemaker's XGRS detectors and MESSENGER's XRS) and silicon pore imagers (developed by ESA for its planned ATHENA mission).
Uses
NASA launched the first lobster-eye imager on a Black Brant IX sub-orbital sounding rocket in 2012. The STORM/DXL instrument (Sheath Transport Observer for the Redistribution of Mass/Diffuse X-ray emission from the Local galaxy) had micropore reflectors arranged in an array to form a Kirkpatrick-Baez system. BepiColombo, a joint ESA and JAXA Mercury mission launched in 2018, has a non-imaging collimator MIXS-C, with a microchannel geometry similar to the lobster-eye micropore design.
CNSA launched the Lobster-Eye X-ray Satellite in 2020, the first in-orbit lobster-eye telescope. In 2022, the Chinese Academy of Sciences built and launched the Lobster Eye Imager for Astronomy (LEIA), a wide-field X-ray imaging space telescope. It is a technology demonstrator mission that tests the sensor design for the Einstein Probe. LEIA has a sensor module that gives it a field of view of 340 square degrees. In August and September of 2022, LEIA conducted measurements to verify its functionality. A number of preselected sky regions and targets were observed, including the Galactic Center, the Magellanic Clouds, Sco X-1, Cas A, Cygnus Loop, and a few extragalactic sources. To eliminate interference from sunlight, the observations were obtained in Earth's shadow, starting 2 minutes after the satellite entered the shadow and ending 10 minutes before leaving it, resulting in an observational duration of ~23 minutes in each orbit. The CMOS detectors were operating in the event mode.
Current and future space telescopes
The Einstein Probe, a joint mission by the Chinese Academy of Sciences (CAS) in partnership with the European Space Agency (ESA) and the Max Planck Institute for Extraterrestrial Physics, was launched on 9 January 2024. It uses a 12-sensor module wide-field X-ray telescope for a 3600 square degree field of view, first tested by the Lobster Eye Imager for Astronomy mission.
The joint French-Chinese SVOM was launched on 22 June 2024.
NASA's Goddard Space Center proposed an instrument that uses the lobster-eye design for the ISS-TAO mission (Transient Astrophysics Observatory on the International Space Station), called the X-ray Wide-Field Imager. ISS-Lobster is a similar concept by ESA.
Several space telescopes that use lobster-eye optics are under construction. SMILE, a space telescope project by ESA and CAS, is planned to be launched in 2025. ESA's THESEUS is now under consideration.
Other uses
Lobster-eye optics can also be used for backscattering imaging for homeland security, detection of improvised explosive devices, nondestructive testing, and medical imaging.
References
Optics
Bioinspiration
Biomimetics | Lobster-eye optics | [
"Physics",
"Chemistry",
"Astronomy",
"Technology",
"Engineering",
"Biology"
] | 1,439 | [
"Biological engineering",
"Applied and interdisciplinary physics",
"Optics",
"X-rays",
"Spectrum (physical sciences)",
"Bioinspiration",
"Bionics",
"Electromagnetic spectrum",
"Measuring instruments",
"X-ray instrumentation",
"Bioinformatics",
" molecular",
"X-ray astronomy",
"Atomic",
"... |
75,674,975 | https://en.wikipedia.org/wiki/Printed%20circuit%20board%20manufacturing | Printed circuit board manufacturing is the process of manufacturing bare printed circuit boards (PCBs) and populating them with electronic components. It includes all the processes to produce the full assembly of a board into a functional circuit board.
In board manufacturing, multiple PCBs are grouped on a single panel for efficient processing. After assembly, they are separated (depaneled). Various techniques, such as silk screening and photoengraving, replicate the desired copper patterns on the PCB layers. Multi-layer boards are created by laminating different layers under heat and pressure. Holes for vias (vertical connections between layers) are also drilled.
The final assembly involves placing components onto the PCB and soldering them in place. This process can include through-hole technology (in which the component goes through the board) or surface-mount technology (SMT) (in which the component lays on top of the board).
Design
Manufacturing starts from the fabrication data generated by computer aided design, and component information. The fabrication data is read into the CAM (Computer Aided Manufacturing) software. CAM performs the following functions:
Input of the fabrication data.
Verification of the data
Compensation for deviations in the manufacturing processes (e.g. scaling to compensate for distortions during lamination)
Panelization
Output of the digital tools (copper patterns, drill files, inspection, and others)
Initially PCBs were designed manually by creating a photomask on a clear mylar sheet, usually at two or four times the true size. Starting from the schematic diagram the component pin pads were laid out on the mylar and then traces were routed to connect the pads. Rub-on dry transfers of common component footprints increased efficiency. Traces were made with self-adhesive tape. Pre-printed non-reproducing grids on the mylar assisted in layout. The finished photomask was photolithographically reproduced onto a photoresist coating on the blank copper-clad boards.
Modern PCBs are designed with dedicated layout software, generally in the following steps:
Schematic capture through an electronic design automation (EDA) tool.
Card dimensions and template are decided based on required circuitry and enclosure of the PCB.
The positions of the components and heat sinks are determined.
Layer stack of the PCB is decided, with one to tens of layers depending on complexity. Ground and power planes are decided. A power plane is the counterpart to a ground plane and behaves as an AC signal ground while providing DC power to the circuits mounted on the PCB. Signal interconnections are traced on signal planes. Signal planes can be on the outer as well as inner layers. For optimal EMI performance high frequency signals are routed in internal layers between power or ground planes.
Line impedance is determined using the dielectric layer thickness, routing copper thickness, and trace width (see the worked example after this list). Trace separation is also taken into account in the case of differential signals. Microstrip, stripline, or dual stripline can be used to route signals.
Components are placed. Thermal considerations and geometry are taken into account. Vias and lands are marked.
Signal traces are routed. Electronic design automation tools usually create clearances and connections in power and ground planes automatically.
Fabrication data consists of a set of Gerber format files, a drill file, and a pick-and-place file.
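To illustrate the line-impedance step in the list above, the sketch below evaluates the widely cited IPC-2141 closed-form approximation for the characteristic impedance of a surface microstrip; the substrate permittivity and trace dimensions are hypothetical example values, not recommendations.

```python
import math

def microstrip_impedance(er, h, w, t):
    """IPC-2141 approximation for surface microstrip impedance (ohms).

    er: substrate relative permittivity
    h:  dielectric height between trace and reference plane (mm)
    w:  trace width (mm)
    t:  copper trace thickness (mm)
    Valid roughly for 0.1 < w/h < 2.0 and 1 < er < 15.
    """
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h / (0.8 * w + t))

# Hypothetical FR-4 outer-layer trace: er ~ 4.5, 0.2 mm prepreg, 0.35 mm wide, 1 oz (35 um) copper
print(round(microstrip_impedance(4.5, 0.2, 0.35, 0.035), 1))  # ~48 ohms, close to a 50-ohm target
```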
Panelization
Several small printed circuit boards can be grouped together for processing as a panel. A panel consisting of a design duplicated n-times is also called an n-panel, whereas a multi-panel combines several different designs onto a single panel. The outer tooling strip often includes tooling holes, a set of panel fiducials, a test coupon, and may include hatched copper pour or similar patterns for even copper distribution over the whole panel in order to avoid bending. The assemblers often mount components on panels rather than single PCBs because this is efficient. Panelization may also be necessary for boards with components placed near an edge of the board because otherwise the board could not be mounted during assembly. Most assembly shops require a free area of at least 10 mm around the board.
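A minimal sketch of the panel-capacity arithmetic implied above, assuming a plain rectangular grid, a uniform spacing between neighbouring boards, and a fixed tooling-strip margin around the panel edge; all dimensions are hypothetical example values.

```python
def boards_per_panel(panel_w, panel_h, board_w, board_h, spacing=10.0, margin=10.0):
    """Count how many boards fit on a panel in a simple grid layout (all sizes in mm).

    spacing: free area kept between neighbouring boards
    margin:  tooling strip kept around the panel edge
    """
    usable_w = panel_w - 2 * margin
    usable_h = panel_h - 2 * margin
    # Each board needs its own width/height plus one spacing, except the last column/row.
    cols = int((usable_w + spacing) // (board_w + spacing))
    rows = int((usable_h + spacing) // (board_h + spacing))
    return max(cols, 0) * max(rows, 0)

# Hypothetical 450 x 600 mm production panel filled with 50 x 80 mm boards
print(boards_per_panel(450, 600, 50, 80))  # 7 columns x 6 rows = 42 boards
```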
Depaneling
The panel is eventually broken into individual PCBs along perforations or grooves in the panel through milling or cutting. For milled panels a common distance between the individual boards is 2–3 mm. Today depaneling is often done by lasers which cut the board with no contact. Laser depaneling reduces stress on the fragile circuits, improving the yield of defect-free units.
Copper patterning
The first step is to replicate the pattern in the fabricator's CAM system on a protective mask on the copper foil PCB layers. Subsequent etching removes the unwanted copper unprotected by the mask. (Alternatively, a conductive ink can be ink-jetted on a blank (non-conductive) board. This technique is also used in the manufacture of hybrid circuits.)
Silk screen printing uses etch-resistant inks to create the protective mask.
Photoengraving uses a photomask and developer to selectively remove a UV-sensitive photoresist coating and thus create a photoresist mask that will protect the copper below it. Direct imaging techniques are sometimes used for high-resolution requirements. Experiments have been made with thermal resist. A laser may be used instead of a photomask. This is known as maskless lithography or direct imaging.
PCB milling uses a two or three-axis mechanical milling system to mill away the copper foil from the substrate. A PCB milling machine (referred to as a 'PCB Prototyper') operates in a similar way to a plotter, receiving commands from the host software that control the position of the milling head in the x, y, and (if relevant) z axis.
Laser resist ablation involves spraying black paint onto copper clad laminate, then placing the board into CNC laser plotter. The laser raster-scans the PCB and ablates (vaporizes) the paint where no resist is wanted. (Note: laser copper ablation is rarely used and is considered experimental.)
Laser etching, in which the copper may be removed directly by a CNC laser. Like PCB milling above, this is used mainly for prototyping.
EDM etching uses an electrical discharge to remove a metal from a substrate submerged into a dielectric fluid.
The method chosen depends on the number of boards to be produced and the required resolution.
Large volume
Silk screen printing – Used for PCBs with bigger features
Photoengraving – Used when finer features are required
Small volume
Print onto transparent film and use as photo mask along with photo-sensitized boards, then etch. (Alternatively, use a film photoplotter.)
Laser resist ablation
PCB milling
Laser etching
Hobbyist
Laser-printed resist: Laser-print onto toner transfer paper, heat-transfer with an iron or modified laminator onto bare laminate, soak in water bath, touch up with a marker, then etch.
Vinyl film and resist, non-washable marker, some other methods. Labor-intensive, only suitable for single boards.
Etching
The process by which the copper traces are formed is commonly known as etching, after the subtractive method, although additive and semi-additive methods also exist.
Subtractive methods remove copper from an entirely copper-coated board to leave only the desired copper pattern. The simplest method, used for small-scale production and often by hobbyists, is immersion etching, in which the board is submerged in etching solution such as ferric chloride. Compared with methods used for mass production, the etching time is long. Heat and agitation can be applied to the bath to speed the etching rate. In bubble etching, air is passed through the etchant bath to agitate the solution and speed up etching. Splash etching uses a motor-driven paddle to splash boards with etchant; the process has become commercially obsolete since it is not as fast as spray etching. In spray etching, the etchant solution is distributed over the boards by nozzles, and recirculated by pumps. Adjustment of the nozzle pattern, flow rate, temperature, and etchant composition gives predictable control of etching rates and high production rates. As more copper is consumed from the boards, the etchant becomes saturated and less effective; different etchants have different capacities for copper, with some as high as 150 grams of copper per liter of solution. In commercial use, etchants can be regenerated to restore their activity, and the dissolved copper recovered and sold. Small-scale etching requires attention to disposal of used etchant, which is corrosive and toxic due to its metal content. The etchant removes copper on all surfaces not protected by the resist. "Undercut" occurs when etchant attacks the thin edge of copper under the resist; this can reduce conductor widths and cause open-circuits. Careful control of etch time is required to prevent undercut. Where metallic plating is used as a resist, it can "overhang" which can cause short circuits between adjacent traces when closely spaced. Overhang can be removed by wire-brushing the board after etching.
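As a rough illustration of the etchant-capacity figure mentioned above, the following back-of-the-envelope calculation estimates the minimum volume of an etchant with a 150 g/L copper capacity needed for one panel; the panel size, foil weight, and etched-copper fraction are hypothetical example values, and etchant regeneration is ignored.

```python
COPPER_DENSITY = 8.96  # g/cm^3

def etchant_volume_litres(panel_w_cm, panel_h_cm, sides, foil_um, removed_fraction,
                          capacity_g_per_l=150.0):
    """Estimate litres of fresh etchant consumed by one panel."""
    area_cm2 = panel_w_cm * panel_h_cm * sides * removed_fraction
    volume_cm3 = area_cm2 * (foil_um * 1e-4)   # foil thickness converted to cm
    copper_g = volume_cm3 * COPPER_DENSITY
    return copper_g / capacity_g_per_l

# Hypothetical 45 x 60 cm double-sided panel, 35 um (1 oz) foil, half the copper etched away
print(round(etchant_volume_litres(45, 60, 2, 35, 0.5), 2))  # ~0.56 L of fresh etchant
```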
In additive methods the pattern is electroplated onto a bare substrate using a complex process. The advantage of the additive method is that less material is needed and less waste is produced. In the full additive process the bare laminate is covered with a photosensitive film which is imaged (exposed to light through a mask and then developed which removes the unexposed film). The exposed areas are sensitized in a chemical bath, usually containing palladium and similar to that used for through hole plating which makes the exposed area capable of bonding metal ions. The laminate is then plated with copper in the sensitized areas. When the mask is stripped, the PCB is finished.
Semi-additive is the most common process: The unpatterned board has a thin layer of copper already on it. A reverse mask is then applied (Unlike a subtractive process mask, this mask exposes those parts of the substrate that will eventually become the traces). Additional copper is then plated onto the board in the unmasked areas; copper may be plated to any desired weight. Tin-lead or other surface platings are then applied. The mask is stripped away and a brief etching step removes the now-exposed bare original copper laminate from the board, isolating the individual traces. Some single-sided boards which have plated-through holes are made in this way. General Electric made consumer radio sets in the late 1960s using additive boards. The (semi-)additive process is commonly used for multi-layer boards as it facilitates the plating-through of the holes to produce conductive vias in the circuit board.
Industrial etching is usually done with ammonium persulfate or ferric chloride. For PTH (plated-through holes), additional steps of electroless deposition are done after the holes are drilled, then copper is electroplated to build up the thickness, the boards are screened, and plated with tin/lead. The tin/lead becomes the resist leaving the bare copper to be etched away.
Lamination
Multi-layer printed circuit boards have trace layers inside the board. This is achieved by laminating a stack of materials in a press, applying pressure and heat for a period of time, which results in an inseparable one-piece product. For example, a four-layer PCB can be fabricated by starting from a two-sided copper-clad laminate, etching the circuitry on both sides, and then laminating pre-preg and copper foil onto the top and bottom. The board is then drilled, plated, and etched again to form the traces on the top and bottom layers.
The inner layers are given a complete machine inspection before lamination because mistakes cannot be corrected afterwards. Automatic optical inspection (AOI) machines compare an image of the board with the digital image generated from the original design data. Automated Optical Shaping (AOS) machines can then add missing copper or remove excess copper using a laser, reducing the number of PCBs that have to be discarded. PCB tracks can have a width of just 10 micrometers.
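A minimal sketch of the kind of comparison an AOI system performs, assuming the reference artwork and the captured image have already been registered and thresholded into aligned boolean copper masks of the same size; production AOI software is far more elaborate than this.

```python
import numpy as np

def compare_copper_masks(reference, captured):
    """Flag copper defects by comparing two aligned boolean masks (True = copper present)."""
    missing = reference & ~captured   # copper expected but absent (potential open)
    excess = ~reference & captured    # copper present but not designed (potential short)
    return int(missing.sum()), int(excess.sum())

# Tiny hypothetical 4x4 masks with one missing and one excess copper pixel
ref = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=bool)
img = ref.copy()
img[0, 0] = False  # missing copper
img[3, 0] = True   # excess copper
print(compare_copper_masks(ref, img))  # (1, 1)
```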
Drilling
Holes through a PCB are typically drilled with drill bits coated with tungsten carbide. Coated tungsten carbide is used because board materials are abrasive. High-speed-steel bits would dull quickly, tearing the copper and ruining the board. Drilling is done by computer-controlled drilling machines, using a drill file or Excellon file that describes the location and size of each drilled hole.
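The drill file mentioned above is typically a plain-text Excellon-style listing of tool diameters and hole coordinates. The sketch below parses a deliberately simplified variant that uses explicit decimal points; real Excellon files also carry unit headers, zero suppression, and implicit-decimal coordinate formats, which are ignored here.

```python
import re

def parse_simple_excellon(text):
    """Return a list of (x, y, diameter) holes from a simplified Excellon-style drill file."""
    tools = {}            # tool number -> diameter
    holes = []
    current_tool = None
    for line in text.splitlines():
        line = line.strip()
        tool_def = re.fullmatch(r"T(\d+)C([\d.]+)", line)
        tool_sel = re.fullmatch(r"T(\d+)", line)
        coord = re.fullmatch(r"X(-?[\d.]+)Y(-?[\d.]+)", line)
        if tool_def:
            tools[tool_def.group(1)] = float(tool_def.group(2))
        elif tool_sel:
            current_tool = tool_sel.group(1)
        elif coord and current_tool in tools:
            holes.append((float(coord.group(1)), float(coord.group(2)), tools[current_tool]))
    return holes

sample = """M48
T01C0.30
T02C0.80
%
T01
X10.5Y22.0
X12.0Y22.0
T02
X35.0Y40.0
M30"""
print(parse_simple_excellon(sample))
# [(10.5, 22.0, 0.3), (12.0, 22.0, 0.3), (35.0, 40.0, 0.8)]
```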
Vias
Holes may be made conductive, by electroplating or inserting hollow metal eyelets, to connect board layers. Some conductive holes are intended for the insertion of through-hole-component leads. Others, used to connect board layers, are called vias.
Micro vias
When vias with a diameter smaller than 76.2 micrometers are required, drilling with mechanical bits is impossible because of high rates of wear and breakage. In this case, the vias may be laser drilled—evaporated by lasers. Laser-drilled vias typically have an inferior surface finish inside the hole. These holes are called micro vias and can have diameters as small as 10 micrometers.
Blind and buried vias
It is also possible with controlled-depth drilling, laser drilling, or by pre-drilling the individual sheets of the PCB before lamination, to produce holes that connect only some of the copper layers, rather than passing through the entire board. These holes are called blind vias when they connect an internal copper layer to an outer layer, or buried vias when they connect two or more internal copper layers and no outer layers. Laser drilling machines can drill thousands of holes per second and can use either UV or CO2 lasers.
The hole walls for boards with two or more layers can be made conductive and then electroplated with copper to form plated-through holes. These holes electrically connect the conducting layers of the PCB.
Smear
For multi-layer boards, those with three layers or more, drilling typically produces a smear of the high temperature decomposition products of bonding agent in the laminate system. Before the holes can be plated through, this smear must be removed by a chemical de-smear process, or by Plasma etching. The de-smear process ensures that a good connection is made to the copper layers when the hole is plated through. On high reliability boards a process called etch-back is performed chemically with a potassium permanganate based etchant or plasma etching. The etch-back removes resin and the glass fibers so that the copper layers extend into the hole and as the hole is plated become integral with the deposited copper.
Plating and coating
Proper plating or surface finish selection can be critical to process yield, the amount of rework, field failure rate, and reliability.
PCBs may be plated with solder, tin, or gold over nickel.
After PCBs are etched and then rinsed with water, the solder mask is applied, and then any exposed copper is coated with solder, nickel/gold, or some other anti-corrosion coating.
It is important to use solder compatible with both the PCB and the parts used. An example is a ball grid array (BGA) package with tin-lead solder balls, which can fail to bond (lose its balls) on bare copper traces or when used with lead-free solder paste.
Other platings used are organic solderability preservative (OSP), immersion silver (IAg), immersion tin (ISn), electroless nickel immersion gold (ENIG) coating, electroless nickel electroless palladium immersion gold (ENEPIG), and direct gold plating (over nickel). Edge connectors, placed along one edge of some boards, are often nickel-plated then gold-plated using ENIG. Another coating consideration is rapid diffusion of coating metal into tin solder. Tin forms intermetallics such as Cu6Sn5 and Ag3Cu that dissolve into the Tin liquidus or solidus (at 50 °C), stripping surface coating or leaving voids.
Electrochemical migration (ECM) is the growth of conductive metal filaments on or in a printed circuit board (PCB) under the influence of a DC voltage bias. Silver, zinc, and aluminum are known to grow whiskers under the influence of an electric field. Silver also grows conducting surface paths in the presence of halide and other ions, making it a poor choice for electronics use. Tin will grow "whiskers" due to tension in the plated surface. Tin-lead or solder plating also grows whiskers, only reduced by reducing the percentage of tin. Reflow to melt solder or tin plate to relieve surface stress lowers whisker incidence. Another coating issue is tin pest, the transformation of tin to a powdery allotrope at low temperature.
Solder resist application
Areas that should not be soldered may be covered with solder resist (solder mask). The solder mask is what gives PCBs their characteristic green color, although it is also available in several other colors, such as red, blue, purple, yellow, black and white. One of the most common solder resists used today is called "LPI" (liquid photoimageable solder mask). A photo-sensitive coating is applied to the surface of the PWB, then exposed to light through the solder mask image film, and finally developed, where the unexposed areas are washed away. Dry film solder mask is similar to the dry film used to image the PWB for plating or etching. After being laminated to the PWB surface it is imaged and developed as LPI. Screen-printing epoxy ink was once common but is no longer widely used because of its low accuracy and resolution. In addition to repelling solder, solder resist also provides protection from the environment to the copper that would otherwise be exposed.
Legend / silkscreen
A legend (also known as silk or silkscreen) is often printed on one or both sides of the PCB. It contains the component designators, switch settings, test points and other indications helpful in assembling, testing, servicing, and sometimes using the circuit board.
There are three methods to print the legend:
Silkscreen printing epoxy ink was the established method, resulting in the alternative name.
Liquid photo imaging is a more accurate method than screen printing.
Inkjet printing is increasingly used. Inkjet printers can print variable data, unique to each PCB unit, such as text, a serial number, or a bar code.
Bare-board test
Boards with no components installed are usually bare-board tested for "shorts" and "opens". This is called electrical test or PCB e-test. A short is a connection between two points that should not be connected. An open is a missing connection between points that should be connected. For high-volume testing, a rigid needle adapter makes contact with copper lands on the board. The fixture or adapter is a significant fixed cost and this method is only economical for high-volume or high-value production. For small or medium volume production flying probe testers are used where test probes are moved over the board by an XY drive to make contact with the copper lands. There is no need for a fixture and hence the fixed costs are much lower. The CAM system instructs the electrical tester to apply a voltage to each contact point as required and to check that this voltage appears on the appropriate contact points and only on these.
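A minimal sketch of the shorts/opens bookkeeping behind such a test, assuming the design netlist is available as a mapping from net names to pads and that the tester has already grouped pads it measured as electrically continuous; the net and pad names below are hypothetical example data.

```python
def check_board(netlist, measured_groups):
    """Compare a design netlist against measured connectivity.

    netlist: dict mapping net name -> set of pad names that must be connected
    measured_groups: list of sets of pads found to be electrically continuous
    Returns (opens, shorts) as lists of human-readable findings.
    """
    group_of = {pad: i for i, group in enumerate(measured_groups) for pad in group}
    opens, shorts = [], []
    for net, pads in netlist.items():
        groups = {group_of.get(pad) for pad in pads}
        if len(groups) > 1:
            opens.append(f"open on net {net}: pads split across measurements")
    for i, group in enumerate(measured_groups):
        nets = {net for net, pads in netlist.items() if group & pads}
        if len(nets) > 1:
            shorts.append(f"short between nets {sorted(nets)} in measurement {i}")
    return opens, shorts

netlist = {"VCC": {"U1.8", "C1.1"}, "GND": {"U1.4", "C1.2"}, "SIG": {"U1.2", "R1.1"}}
measured = [{"U1.8", "C1.1", "U1.4", "C1.2"},  # VCC shorted to GND
            {"U1.2"}, {"R1.1"}]                # SIG is open
print(check_board(netlist, measured))
```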
Assembly
In assembly the bare board is populated (or "stuffed") with electronic components to form a functional printed circuit assembly (PCA), sometimes called a "printed circuit board assembly" (PCBA). In through-hole technology, the component leads are inserted in holes surrounded by conductive pads; the holes keep the components in place. In surface-mount technology (SMT), the component is placed on the PCB so that the pins line up with the conductive pads or lands on the surfaces of the PCB; solder paste, which was previously applied to the pads, holds the components in place temporarily; if surface-mount components are applied to both sides of the board, the bottom-side components are glued to the board. In both through hole and surface mount, the components are then soldered; once cooled and solidified, the solder holds the components in place permanently and electrically connects them to the board.
There are a variety of soldering techniques used to attach components to a PCB. High volume production is usually done with a pick-and-place machine and bulk wave soldering for through-hole parts or reflow ovens for SMT components or through-hole parts, but skilled technicians are able to hand-solder very tiny parts (for instance 0201 packages which are 0.02 in. by 0.01 in.) under a microscope, using tweezers and a fine-tip soldering iron, for small volume prototypes. Selective soldering may be used for delicate parts. Some SMT parts cannot be soldered by hand, such as ball grid array (BGA) packages. All through-hole components can be hand soldered, making them favored for prototyping where size, weight, and the use of the exact components that would be used in high volume production are not concerns.
Often, through-hole and surface-mount construction must be combined in a single assembly because some required components are available only in surface-mount packages, while others are available only in through-hole packages. Or, even if all components are available in through-hole packages, it might be desired to take advantage of the size, weight, and cost reductions obtainable by using some available surface-mount devices. Another reason to use both methods is that through-hole mounting can provide needed strength for components likely to endure physical stress (such as connectors that are frequently mated and demated or that connect to cables expected to impart substantial stress to the PCB-and-connector interface), while components that are expected to go untouched will take up less space using surface-mount techniques. For further comparison, see the SMT page.
After the board has been populated it may be tested in a variety of ways:
While the power is off, visual inspection, automated optical inspection. JEDEC guidelines for PCB component placement, soldering, and inspection are commonly used to maintain quality control in this stage of PCB manufacturing.
While the power is off, analog signature analysis, power-off testing.
While the power is on, in-circuit test, where physical measurements (for example, voltage) can be done.
While the power is on, functional test, checking whether the PCB does what it was designed to do.
To facilitate these tests, PCBs may be designed with extra pads to make temporary connections. Sometimes these pads must be isolated with resistors. The in-circuit test may also exercise boundary scan test features of some components. In-circuit test systems may also be used to program nonvolatile memory components on the board.
In boundary scan testing, test circuits integrated into various ICs on the board form temporary connections between the PCB traces to test that the ICs are mounted correctly. Boundary scan testing requires that all the ICs to be tested use a standard test configuration procedure, the most common one being the Joint Test Action Group (JTAG) standard. The JTAG test architecture provides a means to test interconnects between integrated circuits on a board without using physical test probes, by using circuitry in the ICs to employ the IC pins themselves as test probes. JTAG tool vendors provide various types of stimuli and sophisticated algorithms, not only to detect the failing nets, but also to isolate the faults to specific nets, devices, and pins.
When boards fail the test, technicians may desolder and replace failed components, a task known as rework.
Protection and packaging
PCBs intended for extreme environments often have a conformal coating, which is applied by dipping or spraying after the components have been soldered. The coat prevents corrosion and leakage currents or shorting due to condensation. The earliest conformal coats were wax; modern conformal coats are usually dips of dilute solutions of silicone rubber, polyurethane, acrylic, or epoxy. Another technique for applying a conformal coating is for plastic to be sputtered onto the PCB in a vacuum chamber. The chief disadvantage of conformal coatings is that servicing of the board is rendered extremely difficult.
Many assembled PCBs are static sensitive, and therefore they must be placed in antistatic bags during transport. When handling these boards, the user must be grounded (earthed). Improper handling techniques might transmit an accumulated static charge through the board, damaging or destroying components. The damage might not immediately affect function but might lead to early failure later on, cause intermittent operating faults, or cause a narrowing of the range of environmental and electrical conditions under which the board functions properly.
See also
Reference list
Electronics manufacturing
Printed circuit board manufacturing | Printed circuit board manufacturing | [
"Engineering"
] | 5,225 | [
"Electrical engineering",
"Electronic engineering",
"Electronics manufacturing",
"Printed circuit board manufacturing"
] |
68,466,234 | https://en.wikipedia.org/wiki/SLAC%20Theory%20Group | The SLAC Theory Group is the hub of theoretical particle physics research at the SLAC National Accelerator Laboratory at Stanford University. It is a subdivision of the Elementary Particle Physics (EPP) Division at SLAC.
Research
The group has a diverse research program, specializing in areas of quantum field theory, beyond the standard model physics, dark matter, neutrinos, and collider phenomenology.
Members
The group is currently led by 9 faculty members, and has a dozen postdoctoral researchers and students at any given time.
Notable physicists who were students or postdoctoral researchers in the SLAC Theory Group include Nima Arkani-Hamed, Thomas Appelquist, Mirjam Cvetic, Michael Dine, John Ellis, Rouven Essig, Edward Farhi, Steven Frautschi, Joshua Frieman, Roscoe Giles, Yuval Grossman, Jack F. Gunion, Alan Guth, Howard Haber, Claude Itzykson, Robert Jaffe, David E. Kaplan, Igor Klebanov, Peter Lepage, Christopher Llewellyn Smith, Kirill Melnikov, Stephen Parke, Maxim Perelstein, Joel Primack, Joseph Polchinski, Davison Soper, Henry Tye, Mark Wise, and Tung-Mow Yan.
Past and present members of the SLAC Theory Group have received a total of at least 3 Breakthrough Prizes in Fundamental Physics ($3 million USD each), 10 Sakurai Prizes ($10,000 USD), 5 Dirac Medals ($5,000 USD), 4 New Horizons in Physics Prizes ($100,000 USD), and 2 Gribov Medals ($5,000 USD).
Faculty
Current and former faculty members in the SLAC Theory Group include:
James Bjorken, discoverer of Bjorken Scaling (light-cone scaling) and Bjorken Sum Rule, 2004 Dirac Medal recipient
Stanley Brodsky, 2007 Sakurai Prize recipient for applications of perturbative quantum field theory to the analysis of hard exclusive strong interaction processes
Lance Dixon, pioneer of new methods to calculate Feynman diagrams in quantum chromodynamics and other Yang–Mills theories; 2014 recipient of the Sakurai Prize and 2023 recipient of the Galileo Medal
Sidney Drell, known for his contributions to quantum electrodynamics, including the Drell-Yan process, 2011 National Medal of Science recipient
Alexander Friedland, neutrino physicist
JoAnne Hewett, associate lab director of the Fundamental Physics Directorate and the chief research officer at SLAC
Stefan Hoeche, known for SHERPA parton shower event-generator framework
Shamit Kachru, pioneer of string theory
Rebecca Leane, astroparticle physicist
Bernhard Mistlberger, pioneer of multi-loop Higgs calculations at hadron colliders and 2021 Gribov Medal recipient
Pierre Noyes, known for theoretical work on the quantum mechanical three-body problem for strongly interacting particles
Jogesh Pati, known for the Pati-Salam model, 2000 Dirac Medal recipient
Michael Peskin, known for the Peskin–Takeuchi parameter, and his popular Quantum Field Theory textbook with Daniel Schroeder
Helen Quinn, known for Peccei–Quinn theory which earned her the 2010 Dirac Medal and the 2014 Sakurai Prize
Thomas Rizzo, Theory Group leader
Philip Schuster, 2015 New Horizons in Physics Prize recipient, known for new experimental searches for dark sectors using high-intensity electron beams
Eva Silverstein, known for work on early universe cosmology and string theory, recipient of 1999 MacArthur "Genius grant" award
Natalia Toro, 2015 New Horizons in Physics Prize recipient, known for new experimental searches for dark sectors using high-intensity electron beams
Jay Wacker, particle phenomenologist
References
Theoretical physics institutes | SLAC Theory Group | [
"Physics"
] | 775 | [
"Theoretical physics",
"Theoretical physics institutes"
] |
68,468,604 | https://en.wikipedia.org/wiki/Platinum%E2%80%93samarium | Platinum-samarium is a binary inorganic compound of platinum and samarium with the chemical formula PtSm. This intermetallic compound forms crystals.
Synthesis
Fusion of stoichiometric amounts of pure substances:
Pt + Sm → PtSm
Physical properties
Platinum–samarium forms crystals of the orthorhombic crystal system, space group Pnma, cell parameters a = 0.7148 nm, b = 0.4501 nm, c = 0.5638 nm, Z = 4, with a structure similar to that of iron boride (FeB).
The compound melts congruently at a temperature of ≈1810 °C.
References
Samarium compounds
Platinum compounds
Inorganic compounds
Intermetallics | Platinum–samarium | [
"Physics",
"Chemistry",
"Materials_science"
] | 135 | [
"Inorganic compounds",
"Metallurgy",
"Intermetallics",
"Condensed matter physics",
"Alloys"
] |
68,470,612 | https://en.wikipedia.org/wiki/A572%20steel | ASTM A572 steel is a common high strength, low alloy (HSLA) structural steel used in the United States. A572 steel properties are specified by ASTM International standards.
Grades
A572 steel has five different grades: 42, 50, 55, 60 and 65. Each grade differs in its mechanical properties and chemical composition; the grade number denotes the minimum yield strength in ksi (for example, Grade 50 has a minimum yield strength of 50 ksi, approximately 345 MPa).
Chemical Composition
Material Properties
Forms
A572 steel is produced in a variety of different steel forms, which include:
Plates
Bars
Structural Shapes
Channels
I-Beams
Angles
Wide Flange Beams
Sheet Piling
Applications
A572 steel is typically used in structural applications due to its high strength, ductility, weldability and corrosion resistance. These applications include structural sections, reinforcing bars, bridges, skyscrapers and houses.
References
Structural steel
Metals | A572 steel | [
"Chemistry",
"Engineering"
] | 161 | [
"Structural engineering",
"Metals",
"Structural steel"
] |
71,356,162 | https://en.wikipedia.org/wiki/Kevin%20Knuth | Kevin Hunter Knuth (born 1965) is a Professor of Physics at the University at Albany (SUNY). Knuth conducts research in information physics, foundations of quantum mechanics, and Bayesian analysis with applications towards various problems in physics. He also conducts research into UFOs.
Education
Knuth was born in Fond du Lac, Wisconsin. He received a Bachelor of Science in physics and mathematics from the University of Wisconsin–Oshkosh in 1988, a Master of Science in physics from Montana State University in 1990, and a PhD in physics (with a minor in mathematics) from the University of Minnesota (1995), where he was supervised by John H. Broadhurst.
Academic career
After receiving his doctorate, Knuth taught in the Department of Speech and Hearing Sciences of the Graduate Center, CUNY, the Departments of Otolaryngology and Neuroscience of the Albert Einstein College of Medicine, and the Department of Physiology and Biophysics of the Cornell University Medical Center from 1997 to 2000. He also worked as a researcher at the Nathan Kline Institute for Psychiatric Research from 1999 to 2001 and at NASA's Ames Research Center from 2001 to 2005. He became an assistant professor of physics at the University at Albany in 2005, was promoted to associate professor in 2009 and to professor in 2023.
He has been editor-in-chief of the MDPI journal Entropy since 2012.
UFO research
Knuth has been quoted in the media on the topic of UFOs. He serves as vice president of UAPx, a nonprofit organization that aims to conduct field research about UFOs, sometimes referred to as UAP, and is a research affiliate of The Galileo Project for the systematic scientific search for evidence of extraterrestrial technological artifacts at Harvard University.
References
External links
Knuth Information Physics Lab
UAPx Nonprofit UAP research organization
1965 births
Quantum physicists
Theoretical physicists
American astrophysicists
Year of birth missing (living people)
Living people
Ufologists
University of Minnesota alumni
Montana State University alumni
University of Wisconsin–Oshkosh alumni
People from Fond du Lac, Wisconsin
American biophysicists
Academic journal editors
University at Albany, SUNY faculty | Kevin Knuth | [
"Physics"
] | 433 | [
"Theoretical physics",
"Quantum physicists",
"Theoretical physicists",
"Quantum mechanics"
] |
69,796,600 | https://en.wikipedia.org/wiki/Curium%28III%29%20chloride | Curium(III) chloride is the chemical compound with the formula CmCl3.
Structure
Curium(III) chloride has a 9 coordinate tricapped trigonal prismatic geometry.
Synthesis
Curium(III) chloride can be obtained from the reaction of hydrogen chloride gas with curium dioxide, curium(III) oxide, or curium(III) oxychloride at a temperature of 400-600 °C:
It can also be obtained from the dissolution of metallic curium in dilute hydrochloric acid:
2 Cm + 6 HCl → 2 CmCl3 + 3 H2
This method has a number of disadvantages associated with the ongoing processes of hydrolysis and hydration of the resulting compound in an aqueous solution, making it problematic to obtain a pure product using this reaction.
It can be obtained from the reaction of curium nitride with cadmium chloride:
References
Curium compounds
Nuclear materials
Chlorides
Actinide halides | Curium(III) chloride | [
"Physics",
"Chemistry"
] | 181 | [
"Chlorides",
"Inorganic compounds",
"Salts",
"Inorganic compound stubs",
"Materials",
"Nuclear materials",
"Matter"
] |
69,799,974 | https://en.wikipedia.org/wiki/Intestine-on-a-chip | Intestines-on-a-chip (gut-on-a-chip, mini-intestine) are microfluidic bioengineered 3D-models of the real organ, which better mimic physiological features than conventional 3D intestinal organoid culture. A variety of different intestine-on-a-chip models systems have been developed and refined, all holding their individual strengths and weaknesses and collectively holding great promise to the ultimate goal of establishing these systems as reliable high-throughput platforms for drug testing and personalised medicine. The intestine is a highly complex organ system performing a diverse set of vital tasks, from nutrient digestion and absorption, hormone secretion, and immunological processes to neuronal activity, which makes it particularly challenging to model in vitro.
Conventional intestine models
Conventional intestinal models, such as traditional 2D cell culture of immortalised cell lines (e.g. CaCo2 or HT29), transwell cultures, Ussing chambers, and everted gut sacs, have been used extensively to understand better (patho-)physiological processes in the intestine. However, many intestinal functions are difficult to recapitulate and study using such simplistic models. Thus, these systems' translational and experimental value is limited.
In 2009, the development of intestinal organoids marked a milestone in the in vitro modelling of intestinal tissue. Intestinal organoids mimic the in vivo stem cell niche as intestinal stem cells spontaneously give rise to a closed, cystic mini-tissue with outward-facing buds representing the characteristic crypt-villus architecture of the intestinal epithelium. Intestinal organoids can contain all the different cell types of the intestinal epithelium, e.g. enterocytes, goblet cells, Paneth cells and enteroendocrine cells. Together with the accurate representation of the tissue architecture and cell-type composition, organoids have been shown to also exhibit key functional similarities to the native tissue. Furthermore, their long-term stability in culture, derivation from healthy and diseased origin and genetic manipulation possibilities make intestinal organoids a useful though simplistic model for large spread use as a platform for functional studies and disease modelling.
Nevertheless, several limitations restrict their usefulness as an intestinal model. First and foremost, the organoids' closed cystic structure makes their inner (apical) surface inaccessible, and separate treatment of the apical and basolateral sides, and thus transport studies, highly cumbersome. Moreover, this closed cystic structure means that intestinal organoids accumulate shed dead cells in their lumen, putting spatial strain on the organoids and preventing undisturbed culture over longer periods of time without mechanical disruption and passaging. Furthermore, intestinal organoid cultures suffer from strongly variable sizes, shapes, morphologies and localisations between single organoids in their 3D culture environment.
Intestine-on-a-chip models
Although organoids are usually referred to as miniature organs, they lack vital features needed to mimic organ-level complexity. For this reason, biofabricated devices have been developed which overcome organoid limitations. Microfluidic devices in particular hold great potential as platforms for in vitro models of organs, as they enable perfusion mimicking the function of blood circulation in tissues. Apart from fluidic flow, other culture parameters are incorporated into intestine-on-a-chip devices, including architectural cues, mechanical stimulation, oxygen gradients and co-cultures with other cell populations and the microbiota, to more accurately display the physiological behaviour of the actual organ.
Microfluidics
In contrast to traditional static cell culture, microfluidic devices can create fluid flows that closely mimic physiological flow patterns. Fluid flow introduces physiological shear stress to cell surfaces, provides apical delivery of nutrients and growth factors, and enables the establishment of chemical gradients of, e.g., growth factors, which are vital for proper organ development. Overall, microfluidic devices increase control over the organ-specific microenvironment, which allows for more precise models.
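For a rough sense of the shear stress such flows impose, the parallel-plate approximation for a wide, shallow rectangular channel is often used (wall shear stress τ = 6μQ / (w·h²), valid when the channel width greatly exceeds its height); the channel dimensions and flow rate below are hypothetical example values, not parameters of any specific device.

```python
def wall_shear_stress(flow_ul_per_min, width_mm, height_um, viscosity_pa_s=1e-3):
    """Wall shear stress (Pa) in a wide rectangular microchannel, parallel-plate approximation."""
    q = flow_ul_per_min * 1e-9 / 60.0   # volumetric flow rate in m^3/s
    w = width_mm * 1e-3                 # channel width in m
    h = height_um * 1e-6                # channel height in m
    return 6.0 * viscosity_pa_s * q / (w * h ** 2)

# Hypothetical channel: 1 mm wide, 100 um high, perfused with a water-like medium at 1 uL/min
tau = wall_shear_stress(1.0, 1.0, 100.0)
print(f"{tau:.3f} Pa  ({tau * 10:.2f} dyn/cm^2)")  # 0.010 Pa, i.e. 0.10 dyn/cm^2
```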
Different technologies have been used to introduce microfluidic flows in intestine-on-a-chip devices, including peristaltic pumps, syringe pumps, pressure generators and pumpless systems driven by hydrostatic pressure and gravity. An example of a gravity-driven microfluidic intestine-on-a-chip device is the OrganoPlate platform by Mimetas, which has been used as a disease model for inflammatory bowel disease by Beaurivage et al.
Mechanical stimulation
From the early stages of embryonic development through to post-natal life, the intestine is constantly exposed to a wide range of mechanical forces. Peristalsis, the involuntary and cyclic propulsion of intestinal contents, is an essential part of the digestive process. It facilitates food digestion, nutrient absorption and intestinal emptying on a macro scale and applies shear stress and radial pressure to the intestinal epithelium on a micro scale. In particular, mechanical factors have been shown to influence intestinal development and homeostasis, such as gut looping, villi formation, and crypt localisation. Moreover, the chronic absence of mechanical stimuli in the human intestine has been associated with intestinal morbidity.
A prominent example in which both mechanical stimulation in the form of peristalsis and microfluidic flow are used in combination is the Emulate intestine-on-a-chip system. The system consists of a central two-part cell culture microchannel, which is separated by a porous, extracellular matrix-coated PDMS membrane, allowing the separate culture of two different cell populations in the upper and lower microchannels. The central chamber is flanked by two vacuum chambers running in parallel. The application of vacuum allows cyclic unidirectional stretching of the porous membrane separating the channels, mimicking peristaltic motion.
Architectural cues
As in traditional organoid culture, introducing a third culture dimension is critical for a better representation of the microanatomy of a tissue. Since 3D cell cultures implement more physiologically relevant biochemical and mechanical cues, 3D cultures generally achieve better cell viability and a more physiological transcriptome and proteome. Moreover, tissue homeostasis processes such as proliferation, differentiation and cell death are represented in a more physiological manner. The 3D support of cell cultures is commonly based on hydrogels, which mimic the native extracellular matrix. Cells can either be embedded into hydrogels or grown on a predefined micro-engineered hydrogel surface. The most commonly used hydrogel for 3D intestinal systems is Matrigel, a solubilised basement membrane extract from mouse sarcoma. However, Matrigel has significant disadvantages such as its xenogeneic origin, batch-to-batch variability, high cost and poorly defined composition. As these factors hinder clinical translation, other hydrogels are increasingly used in 3D intestinal models, including fibrin, collagen, hyaluronic acid and PEG-based synthetic hydrogels.
In tissue engineering, microfabrication techniques are of critical importance, especially in modelling the tissue microenvironment. Apart from designing and fabricating the microfluidic device itself, microfabrication techniques are also used to create 3D microstructures which allow the patterning of cell culture surfaces closely resembling the native tissue topography, i.e. the crypt-villus-axis.
A prominent example of an intestine-on-a-chip system relying on architectural cues is the homeostatic mini-intestines by Nikolaev et al. They use microfabricated intestine-on-a-chip devices with a hydrogel chamber. The collagen-Matrigel mix hydrogel is laser-ablated to generate a microchannel for a tubular intestinal lumen with crypt structures. The culture of intestinal stem cells in this device results in their self-organisation into a functional epithelium with a physiological spatial arrangement of the crypt and villus domains. These mini-intestines allow extended long-term culture and give rise to rare intestinal cell types not commonly found in other 3D models. Another example of architecturally driven morphogenesis in intestine-on-a-chip models is the surface patterning approach published by Gjorevski et al., who developed microfabricated devices to pattern hydrogel surfaces in order to reproducibly direct intestinal organoid geometry, size and cell distribution.
These examples show that intestine-on-a-chip systems with extrinsically guided morphogenesis enable spatial and temporal control of signalling gradients and may provide a platform to extensively study intestinal morphogenesis, stem cell maintenance, crypt dynamics, and epithelial regeneration.
Co-culturing
The healthy intestine has a wide range of different functions, which require a vast set of different cell types to fulfil them. The primary intestinal function, the absorption of nutrients, requires close contact between the intestinal epithelium and blood and lymph endothelial cells. Moreover, the intestinal microbiota plays a critical part in the digestion of food, which makes a reliable immune defence indispensable. Furthermore, muscle and nerve cells control peristalsis and satiety. Finally, mesenchymal cells are essential components of the intestinal stem cell niche, as they provide physical support and secrete growth factors. Thus, incorporating different cell types in intestine-on-a-chip systems is vital to adequately model different aspects of intestinal function.
First steps were taken in co-culturing the intestinal epithelium and the microbiota in intestine-on-a-chip systems. Examples are the establishment of an in vitro model for intestinal Shigella flexneri infection using the Emulate intestine-on-a-chip system or the recreation of a complex faeces-derived microbiota population with both aerobic and anaerobic species. Similarly, researchers have tried to recreate an immunocompetent intestinal epithelium in intestine-on-a-chip systems, by co-culturing the intestinal epithelium with peripheral blood mononuclear cells, monocytes, macrophages or neutrophils. Moreover, the epithelial-endothelial interface has been modelled in several different systems by culturing endothelial monolayers and the intestinal epithelium on opposite sides of a porous membrane.
Apart from co-culturing intestinal cells with other cell types, the cell population of the intestinal epithelium itself is also of high relevance. While some rather simplistic approaches use immortalised cell lines as the cell source for an intestinal epithelium, there is a shift towards the use of organoid-derived intestinal stem cells, which allows the derivation of intestinal epithelia with a more physiological cell type composition.
References
External links
MIMETAS OrganoPlate Platform with video: https://www.mimetas.com/en/perfused-tubules/
EMULATE Duodenum Intestine-Chip: https://emulatebio.com/duodenum-intestine-chip/
Homeostatic mini-intestine video: https://www.youtube.com/watch?v=IHKuri9sFEM&list=PLdV2S7pxgq9ZY1IBzIRJxDlq-lNKClsnW
Microfluidics
Biotechnology
Tissue engineering | Intestine-on-a-chip | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 2,439 | [
"Biological engineering",
"Microfluidics",
"Microtechnology",
"Cloning",
"Chemical engineering",
"Biotechnology",
"Tissue engineering",
"nan",
"Medical technology"
] |
78,684,836 | https://en.wikipedia.org/wiki/PKS%201741-03 | PKS 1741-03 is a blazar located in the constellation of Ophiuchus. This is core-dominated quasar located at a redshift of (z) 1.054, found to be highly polarized. It was first discovered in 1970 as an extragalactic radio source by astronomers and has a radio spectrum appearing to be flat, making it a flat-spectrum source.
Description
PKS 1741-03 has been found to undergo an extreme scattering event (ESE). An ESE is a dramatic change in the flux density of a radio source, usually showing a decreasing trend in flux and lasting from several weeks to months. During its ESE, the angular diameter of the source increased by 0.7 milliarcseconds. When observed on timescales of a few months, PKS 1741-03 exhibited extreme variations at 2.7 GHz but no traces of violent outbursts in its light curve. Variability was also detected in the blazar at 1.49 GHz, likely caused by refractive interstellar scintillation.
Radio imaging made by Very Long Baseline Interferometry (VLBI) shows PKS 1741-03 to be a simple but compact source, consisting of two components dominated by a central radio core. There is much weaker emission located south of the core, which becomes noticeable at high frequencies. Imaging made by the Very Long Baseline Array shows PKS 1741-03 has a weak component at both epochs. Other VLBI observations at 2 and 8 GHz show a bright component and a diffuse jet structure.
Seven components have been identified inside the parsec-scale jet of PKS 1741-03. Based on interferometric imaging, the jet components display superluminal motion at various apparent speeds ranging between 3.5 and 6.1c. Further evidence also shows they are moving ballistically, with the exception of one component displaying signs of a bent trajectory.
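Apparent speeds above c arise from light-travel-time effects when a jet component moves relativistically at a small angle to the line of sight; the standard relation is β_app = β sin θ / (1 − β cos θ). The sketch below evaluates it for illustrative, hypothetical values of the intrinsic speed and viewing angle, not measured parameters of PKS 1741-03.

```python
import math

def apparent_speed(beta, theta_deg):
    """Apparent transverse speed (in units of c) of a blob moving at speed beta*c
    at an angle theta (degrees) to the line of sight."""
    theta = math.radians(theta_deg)
    return beta * math.sin(theta) / (1.0 - beta * math.cos(theta))

# Hypothetical jet: intrinsic speed 0.99c viewed 10 degrees from the line of sight
print(round(apparent_speed(0.99, 10.0), 1))  # ~6.9, i.e. apparently faster than light
```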
In 2022, PKS 1741-03 was found to be a candidate source of a neutrino event. IceCube Observatory located at the South Pole detected a high-energy neutrino event, designated as 220205B above 200 TeV and found it to be associated with the blazar. This observation occurred while PKS 1741-03 was undergoing a powerful flare.
References
External links
PKS 1741-03 on SIMBAD
PKS 1741-03 on NASA/IPAC Database
Blazars
Quasars
Ophiuchus
Active galaxies
2829362
Astronomical objects discovered in 1970 | PKS 1741-03 | [
"Astronomy"
] | 516 | [
"Ophiuchus",
"Constellations"
] |
78,689,128 | https://en.wikipedia.org/wiki/Broad%20Arrow%20Policy | The Broad Arrow Policy was a policy of the British government from 1691 to preserve tall trees in the American colonies which were of critical use for the Royal Navy. It applied to Massachusetts from 1691. It was extended to New Hampshire (1698); New England, New York, and New Jersey (1711); and Nova Scotia (1721). The colonists disliked the policy and it was one of the grievances that led to the American Revolution.
The broad arrow symbol was used by the British to mark trees (especially the eastern white pine) intended for ship building use. Three axe strikes, resembling an arrowhead and shaft, were marked on large mast-grade trees. Use of the broad arrow mark commenced in earnest in 1691 with the Massachusetts Charter, which contained a Mast Preservation Clause specifying:
Initially England imported its mast trees from the Baltic states, but it was an expensive, lengthy and politically treacherous proposition. Much of British naval policy at the time revolved around keeping the trade route to the Baltics open. With Baltic timber becoming less appealing to use, the Admiralty's eye turned towards the Colonies. Colonists paid little attention to the Charter's Mast Preservation Clause, and tree harvesting increased with disregard for broad arrow protected trees. However, as Baltic imports decreased, the British timber trade increasingly depended on North American trees, and enforcement of broad arrow policies increased. Persons appointed to the position of Surveyor-General of His Majesty's Woods were responsible for selecting, marking and recording trees as well as policing and enforcing the unlicensed cutting of protected trees. This process was open to abuse, and the British monopoly was very unpopular with colonists. Part of the reason was that many protected trees were on either town-owned or privately owned lands.
Colonists could only sell mast trees to the British, but were substantially underpaid for the lumber. Even though it was illegal for the colonists to sell to enemies of the crown, both the French and the Spanish were in the market for mast trees as well and would pay a much better price. Acts of Parliament in 1711 and 1722 and the 1772 Timber for the Navy Act progressively extended protection to smaller trees, and the 1772 act provoked the Pine Tree Riot that same year. This was one of the first acts of rebellion by the American colonists leading to the American Revolution in 1775, and a flag bearing a white pine was flown at the Battle of Bunker Hill.
See also
Forest conservation in the United States
Notes
Further reading
Conway, Dick. "Roots of Revolution" American History (Dec 2002) 37#4 pp. 56–59.
Kinney, Jay P., Forest legislation in America prior to March 4, 1789 (1916) online
Malone, Joseph J. Pine Trees and Politics: The Naval Stores and Forest Policy in Colonial New England, 1691-1775 (U of Washington Press, 1985) online review of this book
Marshall, Philip. "The Historical and Physiological Ecology of Eastern White Pine (Pinus strobus L.) in Northeast Connecticut, 1700-2000" (PhD dissertation, Yale University; ProQuest Dissertations & Theses, 2011. 34969700).
Roberts, Strother E. "Pines, profits, and popular politics: Responses to the White Pine Acts in the colonial Connecticut River Valley" New England Quarterly, (2010) 83(1), 73–101. https://doi.org/10.1162/tneq.2010.83.1.73
"The King's Broad Arrow and Eastern White Pine," NELMA (2020) online
Shipbuilding
Royal Navy
Forestry in the United States
Forest conservation | Broad Arrow Policy | [
"Engineering"
] | 730 | [
"Shipbuilding",
"Marine engineering"
] |
78,690,934 | https://en.wikipedia.org/wiki/Tiffany%20Santos | Tiffany Suzanne Santos (born 1980) is an American electrical engineer and materials scientist who works for the research division of Western Digital as an expert on tunnel magnetoresistance, non-volatile memory, and magnetic thin-film memory, and as Director of Non-Volatile Memory Materials Research.
Education and career
Santos is the daughter of Ted Santos, a physician and pathologist in Valdosta, Georgia. After graduating as salutatorian from Valdosta High School, she became a student of materials science and engineering at the Massachusetts Institute of Technology, where she received a bachelor's degree in 2002 and a Ph.D. in 2007 under the supervision of Jagadeesh Moodera. She received the Outstanding Senior Thesis award of the MIT Department of Materials Science and Engineering for her bachelor's thesis, Ferromagnetic Europium Oxide as a Spin-Filter Material. Her doctoral dissertation, Europium oxide as a perfect electron spin filter, was based on research applying magnetic materials in spintronics.
She became a postdoctoral researcher and then staff scientist at the Argonne National Laboratory before joining Hitachi Global Storage Technologies (HGST) in 2011. HGST was acquired by Western Digital in 2012.
Recognition
Santos received the L’Oréal USA Fellowship for Women in Science in 2009. In 2022 she was a distinguished lecturer of the IEEE Magnetics Society.
She was named as a Fellow of the American Physical Society (APS) in 2024, after a nomination from the APS Topical Group on Magnetism and Its Applications, "for innovative contributions in synthesis and characterization of novel ultrathin magnetic films and interfaces, and tailoring their properties for optimal performance, especially in magnetic data storage and spin-transport devices".
References
External links
1980 births
Living people
People from Valdosta, Georgia
American electrical engineers
American women engineers
Women electrical engineers
American materials scientists
Women materials scientists and engineers
Massachusetts Institute of Technology alumni
Argonne National Laboratory people
Western Digital people
Fellows of the American Physical Society | Tiffany Santos | [
"Materials_science",
"Technology"
] | 406 | [
"Women materials scientists and engineers",
"Materials scientists and engineers",
"Women in science and technology"
] |
78,693,846 | https://en.wikipedia.org/wiki/Benjamin%20Gung | Benjamin W. Gung (born July 15, 1953) is a Chinese American organic chemist and academic. He is an emeritus Professor of Chemistry at Miami University.
Gung and his research group have concentrated on two main areas: organic synthesis, including the total synthesis of natural products and the development of new methodologies, and the study of nonbonding interactions involving aromatic rings. Together, they have synthesized a variety of natural products.
Gung is ranked among the top 2 percent of researchers worldwide, according to a recent Stanford University study that used citation analysis to identify leading scholars among nearly eight million authors.
Education and career
After receiving a bachelor's degree in chemistry from Nanjing University in China in 1982, Gung pursued graduate studies in organic chemistry at Kansas State University, where he completed his M.S. in 1984 and Ph.D. in 1987, studying under Richard McDonald and Duy Hua. Following a two-year postdoctoral appointment at the University of South Carolina under James Marshall, he joined the faculty at Miami University, in Oxford, Ohio in 1989. He was promoted to associate professor in 1994 and Full Professor in 2003, and during his tenure there, he spent his sabbatical leave doing research in the group of William Roush at the University of Michigan in 1999. Since 2024, he has been serving as an emeritus Professor.
Gung has supervised graduate and undergraduate students in chemistry research. He has held key roles, including serving as a reviewer for five NSF CAREER applications in 2004 and organizing an NSF-sponsored REU program at Miami University from 2004 to 2006. He also served as a reviewer for NIH fellowship study sections from 2006 to 2011 and was a co-PI on an NSF-DUE project from 2011 to 2014. After retiring in 2024, he continues to collaborate with undergraduate students on research; a recent publication, co-authored with Miami University students, focuses on amino acids and peptides as effective ligands for metal-centered catalysts.
Research
Gung has made contributions to the field of organic chemistry, with over 100 published journal articles. His research interests include stereochemistry in organic reactions, non-covalent molecular interactions, methods development, and computational chemistry. He is known for his studies of stereochemical mechanisms and for the development of a transannular [4+3] cycloaddition reaction with gold-stabilized allylic carbocations as reactive intermediates.
Selected articles
Gung, B. W. (1996). Diastereofacial selection in nucleophilic additions to unsymmetrically substituted trigonal carbons. Tetrahedron, 52(15), 5263–5301.
Gung, B. W. (1999). Structure distortions in heteroatom-substituted cyclohexanones, adamantanones, and adamantanes: Origin of diastereofacial selectivity. Chemical reviews, 99(5), 1377–1386.
Gung, B. W., Xue, X., & Reich, H. J. (2005). The strength of parallel-displaced arene–arene interactions in chloroform. The Journal of Organic Chemistry, 70(9), 3641–3644.
Gung, B. W., Patel, M., & Xue, X. (2005). A threshold for charge transfer in aromatic interactions? A quantitative study of π-stacking interactions. The Journal of Organic Chemistry, 70(25), 10532–10537.
Gung, B. W., & Amicangelo, J. C. (2006). Substituent Effects in C6F6 C6H5X Stacking Interactions. The Journal of organic chemistry, 71(25), 9261–9270.
Gung, B. W., Zou, Y., Xu, Z., Amicangelo, J. C., Irwin, D. G., Ma, S., & Zhou, H. C. (2008). Quantitative study of interactions between oxygen lone pair and aromatic rings: Substituent effect and the importance of closeness of contact. The Journal of Organic Chemistry, 73(2), 689–693.
References
Organic chemists
Nanjing University alumni
Kansas State University alumni
Miami University faculty
1953 births
Living people | Benjamin Gung | [
"Chemistry"
] | 890 | [
"Organic chemists"
] |
78,699,108 | https://en.wikipedia.org/wiki/Karl%20F.%20Lindman | Karl Ferdinand Lindman (7 June 1874 – 14 February 1952) was a Finnish physicist and educator. Best known for his work on chiral media, he has performed the experimental demonstration of optical rotation of microwaves in an artificial chiral medium in 1914. For the most of his career, he was a professor of physics at Åbo Akademi University.
Biography
Karl Ferdinand Lindman was born on 7 June 1874 in Ekenäs, Grand Duchy of Finland to Karl Gustav and Lovisa Lindman. His father was a farmer with clerical duties. Receiving a degree in physics in 1895, Lindman obtained his PhD from the University of Helsinki in 1901. He briefly resided in Leipzig from 1899 to 1901; his thesis work was partially done at Leipzig University.
Following his doctoral studies, Lindman served as a secondary school teacher and authored textbooks in physics, chemistry and astronomy in Swedish and Finnish. He was a lecturer at Svenska normallyceum i Helsingfors, where he introduced laboratory courses. In 1907, he took a sabbatical in England and Scotland to study teaching methods. Becoming a faculty member at Åbo Akademi University in 1918, he was appointed as the chair in physics in 1921 and served as the vice rector from 1921 to 1929. He also served as the dean of the Faculty of Mathematics and Natural Sciences during his tenure. Despite retiring in 1942, he carried a full teaching load until 1945.
Lindman was married to Hilma Lovisa Tallqvist. He died on 14 February 1952 and was survived by his son, Sven Lindman, who was a professor of political science at Åbo Akademi. A conference in honor of Lindman was organized in 1991 at Åbo Akademi by the Finnish chapters of URSI and IEEE. Electromagnetic Waves in Chiral and Bi-isotropic Media, a 1994 monograph on chiral and bi-isotropic media by Ismo Lindell and his colleagues, is dedicated to him.
Research and contributions to chiral media
Lindman was mainly an experimental physicist and his research work focused on electromagnetics: he is best known for his work on chiral media. In 1914, he demonstrated optical rotation in an artificial chiral medium experimentally. He constructed the artificial medium from left- and right-handed copper helices suspended in cotton, and observed that this composite material rotates a linearly polarized microwave signal in a circular waveguide apparatus. He also showed that an equal number of left- and right-handed helices does not cause any polarization rotation. His observations were first reported in the same year in the proceedings of the Finnish Society of Sciences and Letters; these were subsequently published in 1920 and 1922 in the German-language journal Annalen der Physik. Even though this experiment came after Jagadish Chandra Bose's 1898 study on optical rotation of microwaves, it acted as a progenitor to artificial dielectrics and metamaterials. The experiment was repeated in the 1950s with more advanced apparatus and was subsequently adapted to terahertz waves in 2009. Following his publications from 1914 to the early 1920s, Lindman continued his experiments in chirality and proposed different configurations to induce optical activity.
Lindman was also active in other areas of electromagnetics. His doctoral studies at the University of Leipzig focused on the resonances and standing waves in a dipole antenna. In addition to resonances of wire antennas, Lindman studied millimeter and infrared wave propagation, diffraction grids, scattering and waveguides. In the 1940s, he studied wave propagation in circular waveguides and parallel plates; these studies coincided with the flurry of interest in microwave propagation in waveguides for radar applications during World War II.
Even though he did not publish any original research regarding the theory of relativity, he was critical of it and expressed his criticisms in his textbooks.
Selected publications
References
1874 births
1952 deaths
20th-century Finnish physicists
Microwave engineers
University of Helsinki alumni
Academic staff of Åbo Akademi University
People from Raseborg
20th-century Finnish educators
20th-century Finnish non-fiction writers
Textbook writers
Finnish schoolteachers
Finnish writers in Swedish
Relativity critics
Finnish expatriates in Germany
Experimental physicists | Karl F. Lindman | [
"Physics"
] | 861 | [
"Relativity critics",
"Experimental physics",
"Experimental physicists",
"Theory of relativity"
] |
78,699,643 | https://en.wikipedia.org/wiki/Noviflumuron | Noviflumuron is an insecticide of the benzoylurea class. It is an insect growth regulator that prevents juvenile termites from developing into adults by disrupting the synthesis of chitin, the main component of an insect's exoskeleton.
Noviflumuron is primarily used in termite bait products such as Sentricon.
References
Insecticides
Organochlorides
Organofluorides
Ureas
2-Fluorophenyl compounds | Noviflumuron | [
"Chemistry"
] | 97 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs",
"Ureas"
] |
78,700,040 | https://en.wikipedia.org/wiki/Consalazinic%20acid | Consalazinic acid is a chemical compound with the molecular formula . It is classified as a depsidone and is a secondary metabolite produced by a variety of lichens.
Consalazinic acid was first isolated from Parmotrema subisidiosum and described in 1980. It has since been identified in many other lichens.
References
Lactones
Lichen products
Polyphenols
Heterocyclic compounds with 4 rings
Benzodioxepines
Benzofurans | Consalazinic acid | [
"Chemistry"
] | 104 | [
"Natural products",
"Lichen products"
] |
78,700,249 | https://en.wikipedia.org/wiki/Gynocardin | Gynocardin is a chemical compound with the molecular formula . It is classified as a cyanogenic glycoside.
It was first isolated from Gynocardia odorata, from which it gets its name, and characterized in 1905. It has since been found in a variety of other plants, including those in the genus Passiflora (passionflowers).
Gynocardin may contribute to the toxicity of plants that contain it because, like other cyanogenic glycosides, cyanide is formed upon its hydrolysis.
References
Cyanogenic glycosides
Cyclopentenes | Gynocardin | [
"Chemistry"
] | 131 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
72,876,243 | https://en.wikipedia.org/wiki/Lomonosov%27s%20invariant%20subspace%20theorem | Lomonosov's invariant subspace theorem is a mathematical theorem from functional analysis concerning the existence of invariant subspaces of a linear operator on some complex Banach space. The theorem was proved in 1973 by the Russian–American mathematician Victor Lomonosov.
Lomonosov's invariant subspace theorem
Notation and terminology
Let be the space of bounded linear operators from some space to itself. For an operator we call a closed subspace an invariant subspace if , i.e. for every .
Theorem
Let be an infinite-dimensional complex Banach space, be compact and such that . Further let be an operator that commutes with . Then there exists an invariant subspace of the operator , i.e. .
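In symbols, with B(X) denoting the bounded linear operators on a Banach space X and M the invariant subspace (notation introduced here only for the restatement), the usual formulation of the theorem reads:
X \text{ an infinite-dimensional complex Banach space},\quad T \in B(X) \text{ compact},\ T \neq 0,\quad S \in B(X),\ ST = TS
\;\Longrightarrow\; \exists\, M \subseteq X \text{ closed},\ \{0\} \neq M \neq X,\ S(M) \subseteq M.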
Citations
References
Banach spaces
Functional analysis
Operator theory
Theorems in functional analysis | Lomonosov's invariant subspace theorem | [
"Mathematics"
] | 163 | [
"Theorems in mathematical analysis",
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Theorems in functional analysis",
"Mathematical relations"
] |
72,876,407 | https://en.wikipedia.org/wiki/Fax%C3%A9n%20integral | In mathematics, the Faxén integral (also named Faxén function) is the following integral
The integral is named after the Swedish physicist Olov Hilding Faxén, who published it in 1921 in his PhD thesis.
n-dimensional Faxén integral
More generally one defines the -dimensional Faxén integral as
with
and
for and
The parameter is only for convenience in calculations.
Properties
Let denote the Gamma function, then
For one has the following relationship to the Scorer function
Asymptotics
For we have the following asymptotics
References
Mathematical analysis
Functions and mappings
Definitions of mathematical integration | Faxén integral | [
"Mathematics"
] | 124 | [
"Mathematical objects",
"Mathematical analysis",
"Mathematical relations",
"Functions and mappings"
] |
72,883,371 | https://en.wikipedia.org/wiki/Trend%20periodic%20nonstationary%20processes | Trend periodic non-stationary processes (or trend cyclostationary processes) are a type of cyclostationary process that exhibits both periodic behavior and a statistical trend. The trend can be linear or nonlinear, and it can result from systematic changes in the data over time. A cyclostationary process can be formed by removing the trend component. This approach is utilized in the analysis of the trend-stationary process.
In data analysis classification of periodic data into stationary-periodic, trend-periodic and stochastic-periodic time series is achieved by means of phase dispersion minimization (PDM) test, which is a method for identifying periodicity.
Applications
Trending cyclostationary processes have several applications in finance, engineering, economics, and environmental research. Trending cyclostationary processes are used in economics to predict the seasonality and trend of time series data that display both periodic and trending behavior, such as rail and air travel demand. Trending cyclostationary processes are used in engineering to simulate signals that display both periodic and trending behavior, such as signals in modulated radio communications or control systems. Trending cyclostationary processes are also used in economics to represent time series data that display both periodic behavior and trends, in which the trend is usually represented by a so-called unit root in the autoregressive part of the model. Trending cyclostationary processes are used in environmental research to simulate time series data that display both periodic behavior and trends, such as temperature or pollutant appearance patterns. In fact, almost any pollution-related phenomenon falls into one of the stochastic, periodic-stochastic, or trend-periodic-stochastic classes.
Properties
Trending cyclostationary processes have traits that are a mix of cyclostationary processes and trends. Trending cyclostationary processes have second-order stationarity, which means that their second-order moments are time-periodic. They do, however, display non-stationarity, which means that their mean and variance alter over time as a result of the presence of the trend.
A trend periodic stationary process is a type of time series in which a consistent periodic pattern repeats itself regularly around an underlying trend. A Fourier series expansion is a popular mathematical depiction of a trend periodic stationary process:
where x(t) is the time series data, T is the period of the trend, is the mean of the series, and are the Fourier coefficients, and k is the harmonic number.
Another way to represent trend periodic stationary processes is by using a regression model with a sine and cosine function, such as:
where , , , and are the regression coefficients that can be estimated using statistical methods.
Decomposing the signal is widely used to separate the trend process from the periodic one and represent the periodic part as sinusoid functions. The spectral density estimation is one of the methods used for this purpose. The decomposed function of the periodic trend process has a trend and a principal function that governs the periodicity.
Example
An example of a trend periodic process in the second form is
where 10t is the trend and the sinusoidal terms are the periodic stationary part.
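The decomposition of such a series into its trend and periodic parts can be illustrated with a short least-squares fit. The following Python sketch is purely illustrative; the period, amplitudes, noise level and the use of NumPy's lstsq routine are choices made for the example and are not part of the model itself:
import numpy as np

# synthetic trend-periodic data: linear trend 10*t plus a sinusoid with period T = 2.0
T = 2.0
t = np.linspace(0.0, 10.0, 500)
x = 10.0 * t + 3.0 * np.sin(2 * np.pi * t / T) + 0.5 * np.random.randn(t.size)

# design matrix for the regression model: constant, trend, and cos/sin of the first harmonic
A = np.column_stack([np.ones_like(t), t,
                     np.cos(2 * np.pi * t / T), np.sin(2 * np.pi * t / T)])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)

# coef[1] estimates the trend slope (about 10); coef[2:] are the Fourier-type coefficients
detrended = x - (coef[0] + coef[1] * t)   # remaining series is approximately periodic stationary
print(coef)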
Detection and estimation
Estimation and detection of trending cyclostationary processes are more difficult than for standard cyclostationary processes due to discrepancies in trend definitions. One popular strategy is to first remove the trend from the data before estimating and detecting cyclostationary processes. Another strategy is to represent the data as a cyclostationary process and a trend and estimate the parameters of both components at the same time.
References
Statistical signal processing | Trend periodic nonstationary processes | [
"Engineering"
] | 756 | [
"Statistical signal processing",
"Engineering statistics"
] |
72,884,770 | https://en.wikipedia.org/wiki/Albicidin | Albicidin is an antibiotic and phytotoxic molecule produced by the bacterium Xanthomonas albilineans which infects sugarcane causing leaf scald.
As a phytotoxin, it acts by inhibiting the differentiation of chloroplasts. It accomplishes this by inhibiting DNA gyrase, and thereby preventing the replication of chloroplast DNA. As such it plays a major role in leaf scald disease.
As a DNA gyrase inhibitor, albicidin also has potential therapeutic use as an antibiotic. Its antibiotic properties were discovered in the early 1980s, when the molecule was isolated and purified from cultures of Xanthomonas albilineans. However, the precise structure of the molecule was only identified in 2015. A laboratory synthesis of albicidin has been developed, and research is currently focused on the design and evaluation of synthetic derivatives of albicidin with improved properties.
References
Plant toxins
Antibiotics
Benzamides
Nitriles | Albicidin | [
"Chemistry",
"Biology"
] | 212 | [
"Chemical ecology",
"Biotechnology products",
"Biocides",
"Functional groups",
"Plant toxins",
"Antibiotics",
"Nitriles"
] |
72,885,437 | https://en.wikipedia.org/wiki/Mars%20Year%201 | Mars Year 1 is the first year of Martian timekeeping standard developed by Clancy et al. originally for the purposes of working with the cyclical temporal variations of meteorological phenomena of Mars, but later used for general timekeeping on Mars. Mars Years have no officially adopted month systems. Scientists generally use two sub-units of the Mars Year:
the Solar longitude (Ls) system: 360 degrees per Mars Year that represent the position of Mars in its orbit around the Sun, or
the Sol system: 668 sols per Mars Year. This system consists of uniform time units. However, Mars Year sols may be confused with rover mission times that are also expressed in sols.
Unlike in the day vs. sol distinction, "Mars Year" has no unique Latin term. Start and end dates of Mars Years were determined for 1607–2141 by Piqueux et al. Earth and Mars dates can be converted with the Mars Climate Database; however, Mars Years are only meaningful for events that take place on Mars.
Mars Year 1 started on 11 April 1955 and ended on 25 February 1957. Mars Year 1 is preceded by Mars Year 0.
Events of Mars Year 1
There was no spacecraft on or around Mars in Mars Year 1 (the first successful flyby occurred in Mars Year 6).
De Mottoni created two albedo maps, Kuiper made several drawings, Millman made maps and detailed descriptions, and Dollfus observed the poles of Mars during the 1956 opposition.
Ls 257 (Sol 495): Křivský et al. report that the polar cap disappeared during a solar flare event.
Ls 263 (Sol 505): Earth is closest to Mars (10 September 1956). This was the best time to observe Mars, therefore most observations during M.Y. 1 took place during this time.
Around Ls 270: Major dust storm. Kuiper observed that a new polar cap formed before the southern summer solstice (Ls 270), and a dust storm developed over Mare Sirenum or Hellespontus and spread rapidly, covering the entire planet with dust except the south polar region. The lowered temperature may have led to the early formation of the polar cap where bright white snow was observed uncontaminated by yellow dust, however Millman attributes the disappearance of the cap to clouds.
References
Mars
Mars
Calendar eras | Mars Year 1 | [
"Physics"
] | 475 | [
"Spacetime",
"Timekeeping",
"Physical quantities",
"Time"
] |
77,298,068 | https://en.wikipedia.org/wiki/Landau%E2%80%93Peierls%20instability | Landau–Peierls instability refers to the phenomenon in which the mean square displacements due to thermal fluctuatuions diverge in the thermodynamic limit and is named after Lev Landau (1937) and Rudolf Peierls (1934). This instability prevails in one-dimensional ordering of atoms/molecules in 3D space such as 1D crystals and smectics and also in two-dimensional ordering in 2D space such as a monomolecular adsorbed filsms at the interface between two isotrophic phases. The divergence is logarthmic, which is rather slow and therefore it is possible to realize substances (such as the smectics) in practice that are subject to Landau–Peierls instability.
Mathematical description
Consider a one-dimensionally ordered crystal in 3D space. The density function is then given by . Since this is a 1D system, only the displacement along the -direction due to thermal fluctuations can smooth out the density function; displacements in the other two directions are irrelevant. The net change in the free energy due to the fluctuations is given by
where is the free energy without fluctuations. Note that cannot depend on or be a linear function of because the first case corresponds to a simple uniform translation and the second case is unstable. Thus, must be quadratic in the derivatives of . These are given by
where , and are material constants; in smectics, where the symmetry must be obeyed, the second term has to be set to zero, i.e., .
From the equipartition theorem (each Fourier mode, on average, is allotted an energy equal to ) , we can deduce that
The mean square displacement is then given by
where the integral is cut off at a long wavelength (small wavenumber) comparable to the linear dimension of the element undergoing deformation. In the thermodynamic limit, , the integral diverges logarithmically. This means that an element at a particular point is displaced through very large distances and therefore smooths out the function , leaving constant as the only solution and destroying the 1D ordering.
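Schematically, writing B and K for the two remaining elastic constants of the quadratic free energy above, and L and a for the macroscopic and microscopic cut-off lengths of the integral (the prefactor is omitted here), the result can be summarized as:
\langle u^{2}\rangle = k_{B}T\int\frac{d^{3}q}{(2\pi)^{3}}\,\frac{1}{Bq_{z}^{2}+Kq_{\perp}^{4}}\;\propto\;\frac{k_{B}T}{\sqrt{KB}}\,\ln\frac{L}{a},
which diverges logarithmically as L \to \infty.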
See also
Peierls transition
References
Phases of matter
Statistical mechanics | Landau–Peierls instability | [
"Physics",
"Chemistry"
] | 461 | [
"Statistical mechanics",
"Phases of matter",
"Matter"
] |
77,304,698 | https://en.wikipedia.org/wiki/One-step%20method | In numerical mathematics, one-step methods and multi-step methods are a large group of calculation methods for solving initial value problems. This problem, in which an ordinary differential equation is given together with an initial condition, plays a central role in all natural and engineering sciences and is also becoming increasingly important in the economic and social sciences, for example. Initial value problems are used to analyze, simulate or predict dynamic processes.
The basic idea behind one-step methods is that they calculate approximation points step by step along the desired solution, starting from the given starting point. They only use the most recently determined approximation for the next step, in contrast to multi-step methods, which also include points further back in the calculation. The one-step methods can be roughly divided into two groups: the explicit methods, which calculate the new approximation directly from the old one, and the implicit methods, which require an equation to be solved. The latter are also suitable for so-called stiff initial value problems.
The simplest and oldest one-step method, the explicit Euler method, was published by Leonhard Euler in 1768. After a group of multi-step methods was presented in 1883, Carl Runge, Karl Heun and Wilhelm Kutta developed significant improvements to Euler's method around 1900. These gave rise to the large group of Runge-Kutta methods, which form the most important class of one-step methods. Further developments in the 20th century include the idea of extrapolation and, above all, considerations on step width control, i.e. the selection of suitable lengths for the individual steps of a method. These concepts form the basis for solving difficult initial value problems, as they occur in modern applications, efficiently and with the required accuracy using computer programs.
Introduction
Ordinary differential equations
The development of differential and integral calculus by the English physicist and mathematician Isaac Newton and, independently of this, by the German polymath Gottfried Wilhelm Leibniz in the last third of the 17th century was a major impetus for the mathematization of science in the early modern period. These methods formed the starting point of the mathematical subfield of analysis and are of central importance in all natural and engineering sciences. While Leibniz was led to differential calculus by the geometric problem of determining tangents to given curves, Newton started from the question of how changes in a physical quantity can be determined at a specific point in time.
For example, when a body moves, its average speed is simply the distance traveled divided by the time required to travel it. However, in order to mathematically formulate the instantaneous velocity of the body at a certain point in time , a limit transition is necessary: Consider short time spans of length , the distances traveled and the corresponding average velocities . If the time period Δt is now allowed to converge towards zero and if the average velocities also approach a fixed value, then this value is called the (instantaneous) velocity at the given time . If denotes the position of the body at time t, then write and call the derivative of .
The decisive step in the direction of differential equation models is now the reverse question: In the example of the moving body, let the velocity be known at every point in time t and its position be determined from this. It is clear that the initial position of the body at a point in time t_0 must also be known in order to be able to solve this problem unambiguously. We are therefore looking for a function with that fulfills the initial condition with given values and .
In the example of determining the position x of a body from its velocity, the derivative of the function being searched for is explicitly given. In most cases, however, the important general case of ordinary differential equations exists for a sought-after variable : Due to the laws of nature or the model assumptions, a functional relation is known that specifies how the derivative of the function to be determined can be calculated from and from the (unknown) value . In addition, an initial condition must again be given, which can be obtained, for example, from a measurement of the required variable at a fixed point in time. To summarize, the following general type of task exists: Find the function that satisfies the equations
is fulfilled, where is a given function.
A simple example is a variable that grows exponentially. This means that the instantaneous change, i.e. the derivative , is proportional to itself. Therefore, with a growth rate and, for example, an initial condition . In this case, the required solution 𝑦 can already be found using elementary differential calculus and specified using the exponential function: .
The required function in a differential equation can be vector-valued, i.e. for each , can be a vector with components. This is also referred to as an -dimensional system of differential equations. In the case of a moving body, is its position in -dimensional Euclidean space and is its velocity at time . The differential equation therefore specifies the velocity of the trajectory with direction and magnitude at each point in time and space. The trajectory itself is to be calculated from this.
Basic idea of the one-step procedure
In the simple differential equation of exponential growth considered above as an example, the solution function could be specified directly. This is generally no longer possible for more complicated problems. Under certain additional conditions, it is then possible to show that a clearly determined solution to the initial value problem exists for the function ; however, this can then no longer be explicitly calculated using solution methods of analysis (such as separation of variables, an exponential approach or variation of the constants). In this case, numerical methods can be used to determine approximations for the solution sought.
The methods for the numerical solution of initial value problems of ordinary differential equations can be roughly divided into two large groups: the one-step and the multi-step methods. Both groups have in common that they calculate approximations for the desired function values at points step by step. The defining characteristic of one-step methods is that only the "current" approximation is used to determine the following approximation . In contrast, multi-step methods also include previously calculated approximations; a three-step method would therefore use and to determine the new approximation in addition to .
The simplest and most basic one-step method is the explicit Euler method, which was introduced by the Swiss mathematician and physicist Leonhard Euler in 1768 in his textbook Institutiones Calculi Integralis. The idea of this method is to approximate the solution sought by a piecewise linear function in which the gradient of the straight line piece is given by in each step from the point t_j to the point t_{j+1}. In more detail: The problem definition already gives a value of the function being searched for, namely . However, the derivative at this point is also known, as applies. This allows the tangent to the graph of the solution function to be determined and used as an approximation. At the point the following results with the step size
.
This procedure can now be continued in the following steps. Overall, this results in the following calculation rule for the explicit Euler method
with the increments .
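As an illustration of this calculation rule, the following Python sketch implements the explicit Euler method; the test problem y' = y with y(0) = 1 and the chosen step size are assumptions made only for the demonstration:
import numpy as np

def explicit_euler(f, t0, y0, h, n_steps):
    """Explicit Euler method: y_{j+1} = y_j + h * f(t_j, y_j)."""
    t, y = t0, np.asarray(y0, dtype=float)
    ts, ys = [t], [y.copy()]
    for _ in range(n_steps):
        y = y + h * f(t, y)          # one Euler step along the tangent direction
        t = t + h
        ts.append(t)
        ys.append(y.copy())
    return np.array(ts), np.array(ys)

# example: exponential growth y' = y, y(0) = 1, exact solution exp(t)
ts, ys = explicit_euler(lambda t, y: y, 0.0, [1.0], 0.1, 10)
print(ys[-1, 0], np.exp(1.0))   # Euler approximation vs exact value at t = 1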
The explicit Euler method is the starting point for numerous generalizations in which the gradient is replaced by gradients that approximate the behaviour of the solution between the points and more precisely. An additional idea for one-step methods is provided by the implicit Euler method, which uses as the gradient. At first glance, this choice does not seem very suitable, as is unknown. However, as a procedural step, we now obtain the equation
from which can be calculated (using a numerical method if necessary). If, for example, the arithmetic mean of the slopes of the explicit and implicit Euler method is selected as the slope, the implicit trapezoidal method is obtained. In turn, an explicit method can be obtained from this if, for example, the unknown on the right-hand side of the equation is approximated using the explicit Euler method, the so-called Heun method. All these methods and all other generalizations have the basic idea of one-step methods in common: the step
with a gradient that can depend on , and as well as (for implicit methods) on .
Definition
With the considerations from the introductory section of this article, the concept of the one-step method can be defined as follows: Let the solution of the initial value problem be sought
, .
It is assumed that the solution
exists on a given interval and is uniquely determined. If
are intermediate positions in the interval and the corresponding increments, then the method given by
,
is called a one-step method with method function . If does not depend on , then it is called an explicit one-step method. Otherwise, an equation for must be solved in each step and the method is called implicit.
Consistency and convergence
Convergence order
For a practical one-step procedure, the calculated should be good approximations for the values of the exact solution at the point . As the variables are generally -dimensional vectors, the quality of this approximation is measured using a vector norm as , the error at the point . It is desirable that these errors quickly converge to zero for all if the step sizes are allowed to converge to zero. In order to also capture the case of non-constant step sizes, is defined more precisely as the maximum of the step sizes used and the behavior of the maximum error at all points is considered in comparison to powers of . The one-step method for solving the given initial value problem is said to have the order of convergence if the estimate
applies to all sufficiently small with a constant that is independent of . The order of convergence is the most important parameter for comparing different one-step methods. A method with a higher order of convergence generally delivers a smaller total error for a given step size or, conversely, fewer steps are required to achieve a given accuracy. For a method with , it is to be expected that the error will only be approximately halved if the step size is halved. With a method of convergence order , on the other hand, it can be assumed that the error is reduced by a factor of approximately .
Global and local error
The errors considered in the definition of the convergence order are made up of two individual components in a way that initially seems complicated: On the one hand, of course, they depend on the error that the method makes in a single step by approximating the unknown gradient of the function being searched for by the method function. On the other hand, however, it must also be taken into account that the starting point of a step generally does not match the exact starting point ; the error after this step therefore also depends on all errors that have already been made in the previous steps. Due to the uniform definition of the one-step procedures, which differ only in the choice of the procedure function , it can be proven, however, that (under certain technical conditions at ) one can directly infer the order of convergence from the error order in a single step, the so-called consistency order.
The concept of consistency is a general and central concept of modern numerical mathematics. While the convergence of a method involves investigating how well the numerical approximations match the exact solution, in simplified terms the "reverse" question is asked in the case of consistency: How well does the exact solution fulfill the method specification? In this general theory, a method is convergent if it is consistent and stable. To simplify the notation, the following consideration assumes that an explicit one-step procedure
with a constant step size exists. With the true solution , the local truncation error (also called local process error) is defined as
.
Thus, one assumes that the exact solution is known, starts a method step at the point and forms the difference to the exact solution at the point . This defines: A one-step method has the consistency order if the estimate
applies to all sufficiently small with a constant that is independent of .
The striking difference between the definitions of the consistency order and the convergence order is the power instead of . This can be clearly interpreted as meaning that a power of the step size is "lost" during the transition from local to global error. The following theorem, which is central to the theory of one-step methods, applies:
If the process function is Lipschitz-continuous and the associated one-step process has the consistency order , then it also has the convergence order .
The Lipschitz continuity of the process function as an additional requirement for stability is generally always fulfilled if the function from the differential equation itself is Lipschitz-continuous. This requirement must be assumed for most applications anyway in order to guarantee the unambiguous solvability of the initial value problem. According to the theorem, it is therefore sufficient to determine the consistency order of a one-step method. In principle, this can be achieved by Taylor expansion of to powers of . In practice, the resulting formulas for higher orders become very complicated and confusing, so that additional concepts and notations are required.
Stiffness and A-stability
The convergence order of a method is an asymptotic statement that describes the behavior of the approximations when the step size converges to zero. However, it says nothing about whether the method actually calculates a useful approximation for a given fixed step size. Charles Francis Curtiss and Joseph O. Hirschfelder first described in 1952 that this can actually be a major problem for certain types of initial value problems. They had observed that the solutions to some differential equation systems in chemical reaction kinetics could not be calculated using explicit numerical methods and called such initial value problems "stiff". There are numerous mathematical criteria for determining how stiff a given problem is. Stiff initial value problems are usually systems of differential equations in which some components become constant very quickly while other components change only slowly. Such behavior typically occurs in the modeling of chemical reactions. However, the most useful definition of stiffness for practical applications is: An initial value problem is stiff if, when solving it with explicit one-step methods, the step size would have to be chosen "too small" in order to obtain a useful solution. Such problems can therefore only be solved using implicit methods.
This effect can be illustrated more precisely by examining how the individual methods cope with exponential decay. Following the Swedish mathematician Germund Dahlquist, one considers the test equation
with the exponentially decreasing solution for . The adjacent diagram shows - as an example for the explicit and implicit Euler method - the typical behavior of these two groups of methods for this seemingly simple initial value problem: If too large a step size is used in an explicit method, this results in strongly oscillating values that build up over the course of the calculation and move further and further away from the exact solution. Implicit methods, on the other hand, typically calculate the solution for arbitrary step sizes qualitatively correctly, namely as an exponentially decreasing sequence of approximate values.
More generally, the above test equation is also considered for complex values of . In this case, the solutions are oscillations whose amplitude remains limited precisely when , i.e. the real part of is less than or equal to 0. This makes it possible to formulate a desirable property of one-step methods that are to be used for stiff initial value problems: the so-called A-stability. A method is called A-stable if it calculates a sequence of approximations for any step size applied to the test equation for all with , which remains bounded (like the true solution). The implicit Euler method and the implicit trapezoidal method are the simplest examples of A-stable one-step methods. On the other hand, it can be shown that an explicit method can never be A-stable.
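The behavior described above can be reproduced directly for the test equation y' = λy. In the following illustrative Python sketch, the value λ = −50 and the step size h = 0.1 are assumptions chosen so that the explicit method violates its stability restriction while the implicit method does not:
lam = -50.0        # stiff decay rate in the Dahlquist test equation y' = lam * y
h = 0.1            # step size deliberately chosen too large for the explicit method
y_exp, y_imp = 1.0, 1.0
for _ in range(10):
    y_exp = y_exp + h * lam * y_exp        # explicit Euler: multiply by (1 + h*lam) each step
    y_imp = y_imp / (1.0 - h * lam)        # implicit Euler: solve y_new = y_old + h*lam*y_new
print(y_exp)   # grows in magnitude and oscillates (|1 + h*lam| = 4 > 1)
print(y_imp)   # decays monotonically towards 0, like the exact solution exp(lam*t)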
Special procedures and procedure classes
Simple procedures of order 1 and 2
As the French mathematician Augustin-Louis Cauchy proved around 1820, the Euler method has a convergence order of 1. If you average the slopes of the explicit Euler method and of the implicit Euler method, as they exist at the two end points of a step, you can hope to obtain a better approximation over the entire interval. In fact, it can be proven that the implicit trapezoidal method obtained in this way
has a convergence order of 2. This method has very good stability properties, but is implicit, meaning that an equation for y_{j+1} must be solved in each step. If this variable is approximated on the right-hand side of the equation using the explicit Euler method, the result is the explicit method of Heun
,
which also has convergence order 2. Another simple explicit method of order 2, the improved Euler method, is obtained by the following consideration: A "mean" slope in the method step would be the slope of the solution y in the middle of the step, i.e. at the point . However, as the solution is unknown, it is approximated by an explicit Euler step with half the step size. This results in the following procedure
.
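For illustration, both order-2 steps can be written out in a few lines of Python; the test problem y' = y used at the end is an assumption chosen only for demonstration:
def heun_step(f, t, y, h):
    """One step of Heun's method: average the slopes at both ends of the interval."""
    k1 = f(t, y)                      # slope of the explicit Euler method
    k2 = f(t + h, y + h * k1)         # slope at the predicted end point
    return y + h * 0.5 * (k1 + k2)

def improved_euler_step(f, t, y, h):
    """One step of the improved Euler (midpoint) method: use the slope at the interval midpoint."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)   # half an Euler step approximates the midpoint value
    return y + h * k2

# example: y' = y, one step of size 0.1 starting from y(0) = 1
print(heun_step(lambda t, y: y, 0.0, 1.0, 0.1), improved_euler_step(lambda t, y: y, 0.0, 1.0, 0.1))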
These one-step methods of order 2 were all published as improvements of the Euler method in 1895 by the German mathematician Carl Runge.
Runge-Kutta method
The aforementioned ideas for simple one-step methods lead to the important class of Runge-Kutta methods when generalized further. For example, Heun's method can be presented more clearly as follows: First, an auxiliary slope is calculated, namely the slope of the explicit Euler method. This is used to determine a further auxiliary slope, here . The actual process gradient used is then calculated as a weighted average of the auxiliary gradients, i.e. in Heun's method. This procedure can be generalized to more than two auxiliary slopes. An s-stage Runge-Kutta method first calculates s auxiliary slopes by evaluating f at suitable points and then forms the process slope as a weighted average. In an explicit Runge-Kutta method, the auxiliary slopes are calculated directly one after the other; in an implicit method, they are obtained as solutions to a system of equations. A typical example is the explicit classical Runge-Kutta method of order 4, which is sometimes simply referred to as the Runge-Kutta method: First, the four auxiliary slopes
and then the weighted average is calculated as the process slope
is used. This well-known method was published by the German mathematician Wilhelm Kutta in 1901, after Karl Heun had found a three-step one-step method of order 3 a year earlier.
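Written out in code, one step of the classical method looks as follows (an illustrative Python sketch; the test problem y' = y is again an assumption made only for demonstration):
def rk4_step(f, t, y, h):
    """One step of the classical Runge-Kutta method of order 4."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    # weighted average of the four auxiliary slopes
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# example: y' = y, y(0) = 1; one step of size 0.1 reproduces exp(0.1) to about seven digits
print(rk4_step(lambda t, y: y, 0.0, 1.0, 0.1))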
The construction of explicit methods of even higher order with the smallest possible number of stages is a mathematically quite demanding problem. As John C. Butcher was able to show in 1965, at least six stages are needed for order 5, for example; an explicit Runge-Kutta method of order 8 requires at least 11 stages. In 1978, the Austrian mathematician Ernst Hairer found a method of order 10 with 17 stages. The coefficients for such a method must fulfill 1205 determining equations. With implicit Runge-Kutta methods, the situation is simpler and clearer: for every number of stages there is a method of order ; this is also the maximum achievable order.
Extrapolation method
The idea of extrapolation is not limited to the solution of initial value problems with one-step methods, but can be applied analogously to all numerical methods that discretize the problem to be solved with a step size . A well-known example of an extrapolation method is the Romberg integration for the numerical calculation of integrals. In general, let be a value that is to be determined numerically, in the case of this article, for example, the value of the solution function of an initial value problem at a given point. A numerical method, for example a one-step method, calculates an approximate value for this, which depends on the choice of step size . It is assumed that the method is convergent, i.e. that converges to when converges to zero. However, this convergence is only a purely theoretical statement, as approximate values can be calculated for a finite number of different step sizes , but of course the step size cannot be allowed to "converge to zero". However, the calculated approximations for different step sizes can be interpreted as information about the (unknown) function : In the extrapolation methods, is approximated by an interpolation polynomial, i.e. by a polynomial with
for . The value of the polynomial at the point is then used as a computable approximation for the non-computable limit value of for towards zero. An early successful extrapolation algorithm for initial value problems was published by Roland Bulirsch and Josef Stoer in 1966.
A concrete example in the case of a one-step method of order can illustrate the general procedure of extrapolation. With such a method, the calculated approximation for small step sizes h can be easily described by a polynomial of the form
with initially unknown parameters and . If you now calculate two approximations and using the method for a step size and for half the step size , two linear equations for the unknowns and are obtained from the interpolation conditions and .
The value extrapolated to
is then generally a significantly better approximation than the two values calculated initially. It can be shown that the order of the one-step method obtained in this way is at least , i.e. at least 1 greater than the original method.
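The concrete example above can be tried out numerically. The following Python sketch is illustrative only: it uses the explicit Euler method (order p = 1) on the test problem y' = y, both of which are assumptions chosen for the demonstration, and eliminates the leading error term from approximations with step sizes h and h/2:
import numpy as np

def euler(f, t0, y0, t_end, n):
    """Explicit Euler with n equal steps on [t0, t_end]."""
    h, y, t = (t_end - t0) / n, y0, t0
    for _ in range(n):
        y, t = y + h * f(t, y), t + h
    return y

f = lambda t, y: y                    # test problem y' = y, exact solution exp(t)
y_h  = euler(f, 0.0, 1.0, 1.0, 10)    # step size h
y_h2 = euler(f, 0.0, 1.0, 1.0, 20)    # step size h/2
# for a method of order p = 1, eliminating the leading error term gives:
y_extrap = 2.0 * y_h2 - y_h
print(abs(y_h - np.e), abs(y_h2 - np.e), abs(y_extrap - np.e))   # extrapolated error is much smaller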
Method with step width control
One advantage of the one-step method is that any step size can be used in each step j independently of the other steps. In practice, this obviously raises the question of how h_j should be selected. In real applications, there will always be an error tolerance with which the solution of an initial value problem is to be calculated; for example, it would be pointless to determine a numerical approximation that is significantly more "accurate" than the data for initial values and parameters of the given problem, which are subject to measurement errors. The aim will therefore be to select the step sizes in such a way that, on the one hand, the specified error tolerances are adhered to and, on the other hand, as few steps as possible are used in order to keep the computational effort to a minimum.
For well-conditioned initial value problems, it can be shown that the global process error is approximately equal to the sum of the local truncation errors in the individual steps. Therefore, the largest possible should be selected as the step size, for which is below a selected tolerance threshold. The problem here is that cannot be calculated directly, as it depends on the unknown exact solution of the initial value problem at the point . The basic idea of step size control is therefore to approximate with a method that is more accurate than the underlying basic method.
Two basic ideas for step size control are step size halving and embedded processes. With step size halving, the result for two steps with half the step size is calculated as a comparison value in addition to the actual process step. A more precise approximation for is then determined from both values by extrapolation and the local error η_j is estimated. If this is too large, this step is discarded and repeated with a smaller step size. If it is significantly smaller than the specified tolerance, the step size can be increased in the next step. The additional computational effort for this step size halving procedure is relatively high; this is why modern implementations usually use so-called embedded procedures for step size control. The basic idea is to calculate two approximations for in each step using two one-step methods that have different orders of convergence and thus estimate the local error. In order to optimize the computational effort, the two methods should have as many computational steps in common as possible: They should be "embedded in each other". Embedded Runge-Kutta methods, for example, use the same auxiliary slopes and differ only in how they average them. Well-known embedded methods include the Runge-Kutta-Fehlberg method (Erwin Fehlberg, 1969) and the Dormand-Prince method (J. R. Dormand and P. J. Prince, 1980).
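The idea of step size halving can be sketched in a few lines of Python. The fragment below is illustrative rather than a production algorithm: the explicit Euler method as the basic method, the tolerance, and the safety factor 0.9 are all assumptions chosen for the example:
def adaptive_euler(f, t0, y0, t_end, h0, tol):
    """Explicit Euler with step-doubling error control (basic method of order p = 1)."""
    t, y, h = t0, y0, h0
    ts, ys = [t], [y]
    while t < t_end:
        h = min(h, t_end - t)
        y_big = y + h * f(t, y)                          # one step of size h
        y_mid = y + 0.5 * h * f(t, y)                    # two steps of size h/2
        y_small = y_mid + 0.5 * h * f(t + 0.5 * h, y_mid)
        err = abs(y_small - y_big)                       # estimate of the local error
        if err <= tol:                                   # accept the step
            t, y = t + h, 2.0 * y_small - y_big          # extrapolated value as new approximation
            ts.append(t); ys.append(y)
        # adjust the step size (0.9 is a common safety factor; exponent 1/(p+1) with p = 1)
        h *= 0.9 * min(2.0, max(0.2, (tol / max(err, 1e-16)) ** 0.5))
    return ts, ys

ts, ys = adaptive_euler(lambda t, y: -2.0 * y, 0.0, 1.0, 2.0, 0.5, 1e-3)
print(len(ts), ys[-1])   # number of accepted points and approximation of exp(-4)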
Practical example: Solving initial value problems with numerical software
Numerous software implementations have been developed for the mathematical concepts outlined in this article, which allow the user to solve practical problems numerically in a simple way. As a concrete example, a solution to the Lotka-Volterra equations will now be calculated using the popular numerical software Matlab. The Lotka-Volterra equations are a simple model from biology that describes the interactions between predator and prey populations. Given the differential equation system
with the parameters and the initial condition , . Here, and correspond to the temporal development of the prey and predator population respectively. The solution should be calculated on the time interval .
For the calculation using Matlab, the function is first defined for the given parameter values on the right-hand side of the differential equation :
a = 1; b = 2; c = 1; d = 1;
f = @(t,y) [a*y(1) - b*y(1)*y(2); c*y(1)*y(2) - d*y(2)];
The time interval and the initial values are also required:
t_int = [0, 20];
y0 = [3; 1];
The solution can then be calculated:
[t, y] = ode45(f, t_int, y0);
The Matlab function ode45 implements a one-step method that uses two embedded explicit Runge-Kutta methods with convergence orders 4 and 5 for step size control.
The solution can now be plotted, as a blue curve and as a red curve; the calculated points are marked by small circles:
figure(1)
plot(t, y(:,1), 'b-o', t, y(:,2), 'r-o')
The result is shown below in the left-hand image. The right-hand image shows the step sizes used by the method and was generated with
figure(2)
plot(t(1:end-1), diff(t))
This example can also be executed without changes using the free numerical software GNU Octave. However, the method implemented there results in a slightly different step size sequence.
Literature
External links
References
Computational mathematics
Differential equations | One-step method | [
"Mathematics"
] | 5,440 | [
"Applied mathematics",
"Mathematical objects",
"Computational mathematics",
"Differential equations",
"Equations"
] |
78,714,127 | https://en.wikipedia.org/wiki/List%20of%20steel%20manufacturers%20in%20Afghanistan | This is a list of steel manufacturers in Afghanistan:
Maihan Steel Mill in Kabul, Kabul Province
Milat Steel Mill in Shakardara District, Kabul Province
Khan Steel Mill in Kabul, Kabul Province
See also
List of companies of Afghanistan
References
External links
Afghanistan
Lists of companies of Afghanistan
Industry in Afghanistan | List of steel manufacturers in Afghanistan | [
"Chemistry"
] | 60 | [
"Steel industry by country",
"Metallurgical industry by country"
] |
74,370,559 | https://en.wikipedia.org/wiki/Berkelium%28III%29%20oxybromide | Berkelium(III) oxybromide is an inorganic compound of berkelium, bromine, and oxygen with the chemical formula BkOBr.
Synthesis
Berkelium oxybromide can be prepared by the action of a vapor mixture of HBr and on berkelium tribromide.
References
Berkelium compounds
Oxybromides | Berkelium(III) oxybromide | [
"Chemistry"
] | 74 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
74,371,509 | https://en.wikipedia.org/wiki/Pedersen%20current | A Pedersen current is an electric current formed in the direction of the applied electric field when a conductive material with charge carriers is acted upon by an external electric field and an external magnetic field. Pedersen currents emerge in a material where the charge carriers collide with particles in the conductive material at approximately the same frequency as the gyratory frequency induced by the magnetic field. Pedersen currents are associated with a Pedersen conductivity related to the applied magnetic field and the properties of the material.
History
The first expression for the Pedersen conductivity was formulated by Peder Oluf Pedersen from Denmark in his 1927 work "The Propagation of Radio Waves along the Surface of the Earth and in the Atmosphere", where he pointed out that the geomagnetic field means that the conductivity of the ionosphere is anisotropic.
Physical explanation
When a moving charge carrier in a conductor is under the influence of a magnetic field , the carrier experiences a force perpendicular to the direction of motion and the magnetic field, resulting in a gyratory path, which is circular in the absence of any other external force. When an electric field is applied in addition to the magnetic field and perpendicular to that field, this gyratory motion is driven by the electric field, leading to a net drift in the direction around the guiding center and a lack of mobility in the direction of the electric field. The charge carrier undergoes a helical motion whereby a charge carrier at rest acquires motion in the direction of the electric field according to Coulomb's law, gains a velocity perpendicular to the magnetic field, and subsequently is pushed in the direction due to the Lorentz force (as is in the direction of , is initially in the same direction as .) The motion will then oscillate backwards against the electric field until it again reaches a velocity of zero in the direction of the electric field, before again being driven by the electric and magnetic fields, forming a helical path. As a result, in a vacuum, no net current is possible in the direction of the electric field. Likewise, when there is a dense material with a high frequency of collisions between the charge carriers and the conductive medium, mobility is very low and the charge carriers are basically stationary.
For a positively charged particle, over the course of this helical path, there is a positive skew in the location distribution of the charge carrier in the direction of the electric field, such that at any given point in time a measurement of the location of the charge carrier will on average result in a positive change from original position in the direction of the electric potential. During a collision with another particle in the medium, the velocity of the charge carrier is randomized at the point of collision. This location of collision is likely to be a positive change in the direction of the electric field from the original location of the charge carrier. After the velocity is randomised, the charge carrier will then restart helical motion from a different original location. Overall, this results in a bulk movement in the direction of the electric field such that a current is able to flow, which is known as the Pedersen Current, with the associated Pedersen Conductivity reaching a maximum when the frequency of collisions is approximately equal to the gyratory frequency so that the charge carriers experience one collision for every gyration.
The Pedersen conductivity is determined by the following equation:
Where the electron density is , is the magnetic field, is the ion concentration for a given species, is the collision frequency between ion species i and other particles, is the gyrofrequency for that ion, is the collision frequency for the electron, and is the electron gyrofrequency.
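A commonly quoted form of this conductivity, written out with the symbols just defined (and with e denoting the elementary charge, which enters explicitly here), is:
\sigma_{P}=\frac{e}{B}\left(n_{e}\,\frac{\nu_{e}\Omega_{e}}{\nu_{e}^{2}+\Omega_{e}^{2}}+\sum_{i} n_{i}\,\frac{\nu_{i}\Omega_{i}}{\nu_{i}^{2}+\Omega_{i}^{2}}\right).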
A negative charge carrier undergoes a similar drift in the direction , but moves in the opposite direction to a positive charge carrier, and undergoes helical motion such that there is a net negative skew in the distribution of position from the original position over the gyration, and as these particles are negatively charged they will also produce a positive contribution to the Pedersen current.
Role in the Ionosphere
Pedersen currents play an important role in the ionosphere, especially in polar regions. In the ionospheric dynamo region near the poles, the ion density is low enough and the magnetic field high enough for the collision frequency to be comparable to the gyration frequency, and the Earth's magnetic field has a large component perpendicular to the horizontal electric field due to the high inclination of the field near the poles. As a result, Pedersen currents are a significant mechanism for charge carrier movement. The magnitude of the Pedersen current balances the drag on the ionospheric plasma due to ion‐neutral collisions.
Pedersen currents in the ionosphere are similar to Hall currents. They share similar production mechanisms, similar formulas for determining conductivity, and similar conductivity profiles and conductivity dependence on various factors. The Pedersen and Hall conductivities are maximised during daytime or in auroral regions at night, as they depend on plasma density, which in turn depends on auroral or solar ionization. The conductivities also vary by about 40% over the solar cycle, reaching a maximum conductivity around solar maximum.
The Pedersen conductivity reaches a maximum in the ionosphere at an altitude of around 125 km.
Pedersen currents flow between the Region 1 and Region 2 Birkeland current sheets (see the figure), completing the circuit of the flow of charge through the ionosphere (at a given local time, one region involves current entering the ionosphere along the geomagnetic field lines, and the other region involves current leaving the ionosphere.) There is also a Pedersen current that flows across the pole from the dawn side (local time 6:00) to the dusk side (local time 18:00) of the region 1 current sheet.
Electrons have also been shown to carry Pedersen currents in the D layer of the ionosphere.
Joule heating
The Joule heating of the ionosphere, a major source of energy loss from the magnetosphere, is closely related to the Pedersen conductivity through the following relation:
Where is the Joule heating per unit volume, is the Pedersen conductivity, and are the electric and magnetic fields, and is the neutral wind velocity.
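In terms of these quantities, the form of the relation commonly used in the literature is:
q_{J}=\sigma_{P}\left|\mathbf{E}+\mathbf{u}_{n}\times\mathbf{B}\right|^{2}.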
See also
Electromagnetism
Magnetohydrodynamics
References
Electromagnetism
Electricity | Pedersen current | [
"Physics"
] | 1,276 | [
"Electromagnetism",
"Physical phenomena",
"Fundamental interactions"
] |
74,380,428 | https://en.wikipedia.org/wiki/Teaching%20quantum%20mechanics | Quantum mechanics is a difficult subject to teach due to its counterintuitive nature. As the subject is now offered by advanced secondary schools, educators have applied scientific methodology to the process of teaching quantum mechanics, in order to identify common misconceptions and ways of improving students' understanding.
Common learning difficulties
Students' misconceptions range from fully classical physics thinking, through mixed models, to quasi-quantum ideas. For example, if the concept that quantum mechanics does not describe a path for electrons or photons is misunderstood, students may believe that they follow specific trajectories (classical), or sinusoidal paths (mixed), or are simultaneously waves and particles (quasi-quantum: "in which students understand that quantum objects can behave as both particles and waves, but still have difficulty describing events in a nondeterministic way"). Among the concepts most often misunderstood are:
the postulates of quantum mechanics provide no description for the trajectories for electrons or photons,
amplitude of a wave is not a measure of energy,
most bound states have no corresponding classical orbits,
in practice, quantum mechanics gives probabilistic rather than deterministic results,
intrinsic uncertainty rather than measurement error.
Issues also arise from misunderstanding classical concepts related to quantum concepts, such as the difference between light energy and light intensity.
Teaching strategies
Mathematics
Quantum mechanics can be taught with a focus on different interpretations, different models, or via mathematical techniques. Studies have shown that focus on non-mathematical concepts can lead to adequate understanding.
Digital and multi-media
Despite the fundamental impossibility of directly viewing quantum states, multimedia visualizations are an important tool in education.
Interactive media provides an alternative experience beyond everyday personal experience as a tool for understanding quantum mechanics. Among the multimedia sites that have been studied with positive results are QuVis and Phet.
History and philosophy of science as educational guides
Introducing history as part of the process of teaching quantum mechanics sets up a potential conflict of goals: accurate history or pedagogical clarity. Studies have shown that teaching through history helps students recognize that the counterintuitive issues are fundamental rather than simply something they don't understand. Specifically discussing the historical debates on quantum concepts drives home the idea that quantum physics differs from classical physics. Discussing the philosophy of science introduces the idea that language derived from everyday experience limits our ability to describe quantum phenomena.
Directly discussing the meanings of words
Mohan analyzes two widely used representative quantum mechanics textbooks against the learning challenges reported by Krijtenburg-Lewerissa and others. Both texts adopt language ('waves' and 'particles') familiar to students in other contexts without directly exploring the significant shifts in meaning required by quantum mechanics. Mohan attributes some of the learning challenges to this unexplored application of inappropriate language.
Teaching for quantum computing
N. David Mermin reports that an unconventional strategy based on abstract but simple math concepts is sufficient to teach quantum mechanics to students interested in quantum computing applications rather than physics. Many of the issues that confound students of physics do not apply to this case, and the mathematical background of quantum computing resembles the background already taught in computer science. Mermin develops notation and operations with classical bits, then introduces quantum bits as superpositions of two classical states. He never needs to discuss even the Planck constant, which he suggests is important for quantum computer hardware but not software.
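A sketch in that spirit: classical bits as basis vectors, a qubit as a normalized superposition of them, and one simple operation. The notation and numbers are illustrative, not Mermin's own.

```python
import numpy as np

# Classical bits as basis states
ket0 = np.array([1.0, 0.0])   # |0>
ket1 = np.array([0.0, 1.0])   # |1>

# A qubit is a normalized superposition of the two classical states
psi = (ket0 + ket1) / np.sqrt(2)

# The Hadamard operation maps classical states to equal superpositions
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(H @ ket0)           # equal superposition of |0> and |1>
print(np.abs(psi) ** 2)   # measurement probabilities: 0.5 and 0.5
```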
Teaching based on quantum optics
Philipp Bitzenbauer engages students through simple but intrinsically quantum single-photon experiments. The approach avoids the ambiguous classical vs quantum character of photons in optical interference experiments like the double slit. Students exposed to quantum mechanics in this way avoid developing misconceptions apparent among students in the control group.
See also
Physics education
Physics education research
Introduction to quantum mechanics
Mermin's device
List of textbooks about quantum mechanics
Notes
References
Quantum mechanics
Pedagogy | Teaching quantum mechanics | [
"Physics"
] | 775 | [
"Theoretical physics",
"Quantum mechanics"
] |
71,380,095 | https://en.wikipedia.org/wiki/Twin-width | The twin-width of an undirected graph is a natural number associated with the graph, used to study the parameterized complexity of graph algorithms. Intuitively, it measures how similar the graph is to a cograph, a type of graph that can be reduced to a single vertex by repeatedly merging together twins, vertices that have the same neighbors. The twin-width is defined from a sequence of repeated mergers where the vertices are not required to be twins, but have nearly equal sets of neighbors.
Definition
Twin-width is defined for finite simple undirected graphs. These have a finite set of vertices, and a set of edges that are unordered pairs of vertices. The open neighborhood of any vertex is the set of other vertices that it is paired with in edges of the graph; the closed neighborhood is formed from the open neighborhood by including the vertex itself. Two vertices are true twins when they have the same closed neighborhood, and false twins when they have the same open neighborhood; more generally, both true twins and false twins can be called twins, without qualification.
The cographs have many equivalent definitions, but one of them is that these are the graphs that can be reduced to a single vertex by a process of repeatedly finding any two twin vertices and merging them into a single vertex. For a cograph, this reduction process will always succeed, no matter which choice of twins to merge is made at each step. For a graph that is not a cograph, it will always get stuck in a subgraph with more than two vertices that has no twins.
The definition of twin-width mimics this reduction process. A contraction sequence, in this context, is a sequence of steps, beginning with the given graph, in which each step replaces a pair of vertices by a single vertex. This produces a sequence of graphs, with edges colored red and black; in the given graph, all edges are assumed to be black. When two vertices are replaced by a single vertex, the neighborhood of the new vertex is the union of the neighborhoods of the replaced vertices. In this new neighborhood, an edge that comes from black edges in the neighborhoods of both vertices remains black; all other edges are colored red.
A contraction sequence is called a $d$-sequence if, throughout the sequence, every vertex touches at most $d$ red edges. The twin-width of a graph is the smallest value of $d$ for which it has a $d$-sequence.
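A small sketch of one contraction step and the red-degree bookkeeping just described; the adjacency representation and helper names are made up for illustration:

```python
def contract(adj, u, v, w):
    """Merge vertices u and v into a new vertex w.

    adj maps each vertex to a dict {neighbor: 'black' or 'red'}.
    An edge to w stays black only if it was black from both u and v;
    every other surviving edge becomes red.
    """
    nu, nv = adj.pop(u), adj.pop(v)
    merged = {}
    for x in (set(nu) | set(nv)) - {u, v}:
        both_black = nu.get(x) == 'black' and nv.get(x) == 'black'
        merged[x] = 'black' if both_black else 'red'
    for x in list(adj):
        adj[x].pop(u, None)
        adj[x].pop(v, None)
        if x in merged:
            adj[x][w] = merged[x]
    adj[w] = merged
    return adj

def max_red_degree(adj):
    """Largest number of red edges touching any single vertex."""
    return max((sum(c == 'red' for c in nbrs.values()) for nbrs in adj.values()),
               default=0)

# Path a-b-c-d: contracting the last two vertices keeps the red degree at 1.
path = {'a': {'b': 'black'}, 'b': {'a': 'black', 'c': 'black'},
        'c': {'b': 'black', 'd': 'black'}, 'd': {'c': 'black'}}
contract(path, 'c', 'd', 'cd')
print(max_red_degree(path))  # 1
```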
A dense graph may still have bounded twin-width; for instance, the cographs include all complete graphs. A variation of twin-width, sparse twin-width, applies to families of graphs rather than to individual graphs. For a family of graphs that is closed under taking induced subgraphs and has bounded twin-width, the following properties are equivalent:
The graphs in the family are sparse, meaning that they have a number of edges bounded by a linear function of their number of vertices.
The graphs in the family exclude some fixed complete bipartite graph as a subgraph.
The family of all subgraphs of graphs in the given family has bounded twin-width.
The family has bounded expansion, meaning that all its shallow minors are sparse.
Such a family is said to have bounded sparse twin-width.
The concept of twin-width can be generalized from graphs to various totally ordered structures (including graphs equipped with a total ordering on their vertices), and is in many ways simpler for ordered structures than for unordered graphs. It is also possible to formulate equivalent definitions for other notions of graph width using contraction sequences with different requirements than having bounded degree.
Graphs of bounded twin-width
Cographs have twin-width zero. In the reduction process for cographs, there will be no red edges: when two vertices are merged, their neighborhoods are equal, so there are no edges coming from only one of the two neighborhoods to be colored red. In any other graph, any contraction sequence will produce some red edges, and the twin-width will be greater than zero.
The path graphs with at most three vertices are cographs, but every larger path graph has twin-width one. For a contraction sequence that repeatedly merges the last two vertices of the path, only the edge incident to the single merged vertex will be red, so this is a 1-sequence. Trees have twin-width at most two, and for some trees this is tight. A 2-contraction sequence for any tree may be found by choosing a root, and then repeatedly merging two leaves that have the same parent or, if this is not possible, merging the deepest leaf into its parent. The only red edges connect leaves to their parents, and when there are two at the same parent they can be merged, keeping the red degree at most two.
More generally, the following classes of graphs have bounded twin-width, and a contraction sequence of bounded width can be found for them in polynomial time:
Every graph of bounded clique-width, or of bounded rank-width, also has bounded twin-width. The twin-width is at most exponential in the clique-width, and at most doubly exponential in the rank-width. These graphs include, for instance, the distance-hereditary graphs, the $k$-leaf powers for bounded values of $k$, and the graphs of bounded treewidth.
Indifference graphs (equivalently, unit interval graphs or proper interval graphs) have twin-width at most two.
Unit disk graphs defined from sets of unit disks that cover each point of the plane a bounded number of times have bounded twin-width. The same is true for unit ball graphs in higher dimensions.
The permutation graphs coming from permutations with a forbidden permutation pattern have bounded twin-width. This allows twin-width to be applied to algorithmic problems on permutations with forbidden patterns.
Every family of graphs defined by forbidden minors has bounded twin-width. For instance, by Wagner's theorem, the forbidden minors for planar graphs are the two graphs $K_5$ and $K_{3,3}$, so the planar graphs have bounded twin-width.
Every graph of bounded stack number or bounded queue number also has bounded twin-width. There exist families of graphs of bounded sparse twin-width that do not have bounded stack number, but the corresponding question for queue number remains open.
The strong product of any two graphs of bounded twin-width, one of which has bounded degree, again has bounded twin-width. This can be used to prove the bounded twin-width of classes of graphs that have decompositions into strong products of paths and bounded-treewidth graphs, such as the $k$-planar graphs. For the lexicographic product of graphs, the twin-width is exactly the maximum of the widths of the two factor graphs. Twin-width also behaves well under several other standard graph products, but not the modular product of graphs.
In every hereditary family of graphs of bounded twin-width, it is possible to find a family of total orders for the vertices of its graphs so that the inherited ordering on an induced subgraph is also an ordering in the family, and so that the family is small with respect to these orders. This means that, for a total order on $n$ vertices, the number of graphs in the family consistent with that order is at most singly exponential in $n$. Conversely, every hereditary family of ordered graphs that is small in this sense has bounded twin-width. It was originally conjectured that every hereditary family of labeled graphs that is small, in the sense that the number of graphs is at most a singly exponential factor times $n!$, has bounded twin-width. However, this conjecture was disproved using a family of induced subgraphs of an infinite Cayley graph that are small as labeled graphs but do not have bounded twin-width.
There exist graphs of unbounded twin-width within the following families of graphs:
Graphs of bounded degree.
Interval graphs.
Unit disk graphs.
In each of these cases, the result follows by a counting argument: there are more graphs of the given type than there can be graphs of bounded twin-width.
Properties
If a graph has bounded twin-width, then it is possible to find a versatile tree of contractions. This is a large family of contraction sequences, all of some (larger) bounded width, so that at each step in each sequence there are linearly many disjoint pairs of vertices each of which could be contracted at the next step in the sequence. It follows from this that the number of graphs of bounded twin-width on any set of $n$ given vertices is larger than $n!$ by only a singly exponential factor, that the graphs of bounded twin-width have an adjacency labelling scheme with only a logarithmic number of bits per vertex, and that they have universal graphs of polynomial size in which each $n$-vertex graph of bounded twin-width can be found as an induced subgraph.
Algorithms
The graphs of twin-width at most one can be recognized in polynomial time. However, it is NP-complete to determine whether a given graph has twin-width at most four, and NP-hard to approximate the twin-width with an approximation ratio better than 5/4. Under the exponential time hypothesis, computing the twin-width requires time at least exponential in $n/\log n$, on $n$-vertex graphs. In practice, it is possible to compute the twin-width of graphs of moderate size using SAT solvers. For most of the known families of graphs of bounded twin-width, it is possible to construct a contraction sequence of bounded width in polynomial time.
Once a contraction sequence has been given or constructed, many different algorithmic problems can be solved using it, in many cases more efficiently than is possible for graphs that do not have bounded twin-width. As detailed below, these include exact parameterized algorithms and approximation algorithms for NP-hard problems, as well as some problems that have classical polynomial time algorithms but can nevertheless be sped up using the assumption of bounded twin-width.
Parameterized algorithms
An algorithmic problem on graphs having an associated parameter is called fixed-parameter tractable if it has an algorithm that, on graphs with $n$ vertices and parameter value $k$, runs in time $O(f(k)\cdot n^{c})$ for some constant $c$ and computable function $f$. For instance, a running time of $O(2^k n^2)$ would be fixed-parameter tractable in this sense. This style of analysis is generally applied to problems that do not have a known polynomial-time algorithm, because otherwise fixed-parameter tractability would be trivial. Many such problems have been shown to be fixed-parameter tractable with twin-width as a parameter, when a contraction sequence of bounded width is given as part of the input. This applies, in particular, to the graph families of bounded twin-width listed above, for which a contraction sequence can be constructed efficiently. However, it is not known how to find a good contraction sequence for an arbitrary graph of low twin-width, when no other structure in the graph is known.
The fixed-parameter tractable problems for graphs of bounded twin-width with given contraction sequences include:
Testing whether the given graph models any given property in the first-order logic of graphs. Here, both the twin-width and the description length of the property are parameters of the analysis. Problems of this type include subgraph isomorphism for subgraphs of bounded size, and the vertex cover and dominating set problems for covers or dominating sets of bounded size. The dependence of these general methods on the length of the logical formula describing the property is tetrational, but for independent set, dominating set, and related problems it can be reduced to exponential in the size of the independent or dominating set, and for subgraph isomorphism it can be reduced to factorial in the number of vertices of the subgraph. For instance, the time to find a $k$-vertex independent set, for an $n$-vertex graph with a given $d$-sequence, is $O(k^2 d^{2k} n)$, by a dynamic programming algorithm that considers small connected subgraphs of the red graphs in the forward direction of the contraction sequence. These time bounds are optimal, up to logarithmic factors in the exponent, under the exponential time hypothesis. For an extension of the first-order logic of graphs to graphs with totally ordered vertices, and logical predicates that can test this ordering, model checking is still fixed-parameter tractable for hereditary graph families of bounded twin-width, but not (under standard complexity-theoretic assumptions) for hereditary families of unbounded twin-width.
Coloring graphs of bounded twin-width, using a number of colors that is bounded by a function of their twin-width and of the size of their largest clique. For instance, triangle-free graphs of twin-width $d$ can be $(d+2)$-colored by a greedy coloring algorithm that colors vertices in the reverse of the order they were contracted away. This result shows that the graphs of bounded twin-width are χ-bounded. For graph families of bounded sparse twin-width, the generalized coloring numbers are bounded. Here, the generalized coloring number $\mathrm{col}_r$ is at most $k$ if the vertices can be linearly ordered in such a way that each vertex can reach at most $k$ earlier vertices in the ordering, through paths of length at most $r$ through later vertices in the ordering.
Speedups of classical algorithms
In graphs of bounded twin-width, it is possible to perform a breadth-first search, on a graph with $n$ vertices, in time $O(n \log n)$, even when the graph is dense and has more edges than this time bound.
Approximation algorithms
Twin-width has also been applied in approximation algorithms. In particular, in the graphs of bounded twin-width, it is possible to find an approximation to the minimum dominating set with bounded approximation ratio. This is in contrast to more general graphs, for which it is NP-hard to obtain an approximation ratio that is better than logarithmic.
The maximum independent set and graph coloring problems can be approximated to within an approximation ratio of $n^{\varepsilon}$, for every $\varepsilon > 0$, in polynomial time on graphs of bounded twin-width. In contrast, without the assumption of bounded twin-width, it is NP-hard to achieve any approximation ratio of this form with $\varepsilon < 1$.
References
Further reading
Graph invariants | Twin-width | [
"Mathematics"
] | 2,844 | [
"Graph invariants",
"Mathematical relations",
"Graph theory"
] |
72,887,006 | https://en.wikipedia.org/wiki/Allison%20Hubel | Allison Hubel is an American mechanical engineer and cryobiologist who applies her expertise in heat transfer to study the cryopreservation of biological tissue. She is a professor of mechanical engineering at the University of Minnesota, where she directs the Biopreservation Core Resource and the Technological Leadership Institute, and is the president of the Society for Cryobiology from 2023 to 2024.
Education and career
Hubel majored in mechanical engineering at Iowa State University, graduating in 1983. She continued her studies at the Massachusetts Institute of Technology (MIT), where she earned a master's degree in 1989 and completed her Ph.D. in the same year.
She worked as a research fellow at Massachusetts General Hospital from 1989 to 1990, and as an instructor at MIT from 1990 to 1993, before moving to the University of Minnesota in 1993 as a research associate in the Department of Laboratory Medicine and Pathology. In 1996 she became an assistant professor in that department, and in 2002 she moved to the Department of Mechanical Engineering as an associate professor. She was promoted to full professor in 2009, and became director of the Biopreservation Core Resource in 2010.
With two of her students, she founded a spinoff company, BlueCube Bio (later renamed Evia Bio) to commercialize their technology for preserving cells in cell therapy. She continues to serve as chief scientific officer for Evia Bio.
She became president-elect of the Society for Cryobiology for the 2022–2023 term, and will become president in the subsequent term.
Book
Hubel is the author of the book Preservation of Cells: A Practical Manual (Wiley, 2017).
Recognition
Hubel was elected as an ASME Fellow in 2008, and a Fellow of the American Institute for Medical and Biological Engineering in 2012. She was named a Cryofellow of the Society for Cryobiology in 2021.
References
External links
Academic home page
Year of birth missing (living people)
Living people
American mechanical engineers
American women engineers
American biologists
American women biologists
Cryobiology
Iowa State University alumni
Massachusetts Institute of Technology alumni
University of Minnesota faculty
Fellows of the American Institute for Medical and Biological Engineering
Fellows of the American Society of Mechanical Engineers | Allison Hubel | [
"Physics",
"Chemistry",
"Biology"
] | 444 | [
"Biochemistry",
"Physical phenomena",
"Phase transitions",
"Cryobiology"
] |
72,887,334 | https://en.wikipedia.org/wiki/Parallax%20in%20astronomy | The most important fundamental distance measurements in astronomy come from trigonometric parallax, as applied in the stellar parallax method. As the Earth orbits the Sun, the position of nearby stars will appear to shift slightly against the more distant background. These shifts are angles in an isosceles triangle, with 2 AU (the distance between the extreme positions of Earth's orbit around the Sun) making the base leg of the triangle and the distance to the star being the long equal-length legs. The amount of shift is quite small, even for the nearest stars, measuring 1 arcsecond for an object at 1 parsec's distance (3.26 light-years), and thereafter decreasing in angular amount as the distance increases. Astronomers usually express distances in units of parsecs (parallax arcseconds); light-years are used in popular media.
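A minimal sketch of the inverse relation between parallax and distance described above (distance in parsecs = 1 / parallax in arcseconds):

```python
def parallax_to_distance(p_arcsec):
    """Distance in parsecs from an annual parallax in arcseconds (d = 1/p)."""
    return 1.0 / p_arcsec

# Proxima Centauri's parallax of about 0.7687 arcsec (quoted later in the text)
d_pc = parallax_to_distance(0.7687)
print(f"{d_pc:.3f} pc = {d_pc * 3.26156:.3f} light-years")
```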
Because parallax becomes smaller for a greater stellar distance, useful distances can be measured only for stars which are near enough to have a parallax larger than a few times the precision of the measurement. In the 1990s, for example, the Hipparcos mission obtained parallaxes for over a hundred thousand stars with a precision of about a milliarcsecond, providing useful distances for stars out to a few hundred parsecs. The Hubble Space Telescope's Wide Field Camera 3 has the potential to provide a precision of 20 to 40 microarcseconds, enabling reliable distance measurements up to about 3,000 parsecs (roughly 10,000 light-years) for small numbers of stars. The Gaia space mission provided similarly accurate distances to most stars brighter than 15th magnitude.
Distances can be measured within 10% as far as the Galactic Center, about 30,000 light years away. Stars have a velocity relative to the Sun that causes proper motion (transverse across the sky) and radial velocity (motion toward or away from the Sun). The former is determined by plotting the changing position of the stars over many years, while the latter comes from measuring the Doppler shift of the star's spectrum caused by motion along the line of sight. For a group of stars with the same spectral class and a similar magnitude range, a mean parallax can be derived from statistical analysis of the proper motions relative to their radial velocities. This statistical parallax method is useful for measuring the distances of bright stars beyond 50 parsecs and giant variable stars, including Cepheids and the RR Lyrae variables.
The motion of the Sun through space provides a longer baseline that will increase the accuracy of parallax measurements, known as secular parallax. For stars in the Milky Way disk, this corresponds to a mean baseline of 4 AU per year, while for halo stars the baseline is 40 AU per year. After several decades, the baseline can be orders of magnitude greater than the Earth–Sun baseline used for traditional parallax. However, secular parallax introduces a higher level of uncertainty because the relative velocity of observed stars is an additional unknown. When applied to samples of multiple stars, the uncertainty can be reduced; the uncertainty is inversely proportional to the square root of the sample size.
Moving cluster parallax is a technique where the motions of individual stars in a nearby star cluster can be used to find the distance to the cluster. Only open clusters are near enough for this technique to be useful. In particular the distance obtained for the Hyades has historically been an important step in the distance ladder.
Other individual objects can have fundamental distance estimates made for them under special circumstances. If the expansion of a gas cloud, like a supernova remnant or planetary nebula, can be observed over time, then an expansion parallax distance to that cloud can be estimated. Those measurements however suffer from uncertainties in the deviation of the object from sphericity. Binary stars which are both visual and spectroscopic binaries also can have their distance estimated by similar means, and do not suffer from the above geometric uncertainty. The common characteristic to these methods is that a measurement of angular motion is combined with a measurement of the absolute velocity (usually obtained via the Doppler effect). The distance estimate comes from computing how far the object must be to make its observed absolute velocity appear with the observed angular motion.
Expansion parallaxes in particular can give fundamental distance estimates for objects that are very far, because supernova ejecta have large expansion velocities and large sizes (compared to stars). Further, they can be observed with radio interferometers which can measure very small angular motions. These combine to provide fundamental distance estimates to supernovae in other galaxies. Though valuable, such cases are quite rare, so they serve as important consistency checks on the distance ladder rather than workhorse steps by themselves.
Parsec
Stellar parallax
Stellar parallax created by the relative motion between the Earth and a star can be seen, in the Copernican model, as arising from the orbit of the Earth around the Sun: the star only appears to move relative to more distant objects in the sky. In a geostatic model, the movement of the star would have to be taken as real with the star oscillating across the sky with respect to the background stars.
Stellar parallax is most often measured using annual parallax, defined as the difference in position of a star as seen from the Earth and Sun, i.e. the angle subtended at a star by the mean radius of the Earth's orbit around the Sun. The parsec (3.26 light-years) is defined as the distance for which the annual parallax is 1 arcsecond. Annual parallax is normally measured by observing the position of a star at different times of the year as the Earth moves through its orbit. Measurement of annual parallax was the first reliable way to determine the distances to the closest stars. The first successful measurements of stellar parallax were made by Friedrich Bessel in 1838 for the star 61 Cygni using a heliometer. Stellar parallax remains the standard for calibrating other measurement methods. Accurate calculations of distance based on stellar parallax require a measurement of the distance from the Earth to the Sun, now based on radar reflection off the surfaces of planets.
The angles involved in these calculations are very small and thus difficult to measure. The nearest star to the Sun (and thus the star with the largest parallax), Proxima Centauri, has a parallax of 0.7687 ± 0.0003 arcsec. This angle is approximately that subtended by an object 2 centimeters in diameter located 5.3 kilometers away.
The fact that stellar parallax was so small that it was unobservable at the time was used as the main scientific argument against heliocentrism during the early modern age. It is clear from Euclid's geometry that the effect would be undetectable if the stars were far enough away, but for various reasons such gigantic distances involved seemed entirely implausible: it was one of Tycho's principal objections to Copernican heliocentrism that for it to be compatible with the lack of observable stellar parallax, there would have to be an enormous and unlikely void between the orbit of Saturn (then the most distant known planet) and the eighth sphere (the fixed stars).
In 1989, the satellite Hipparcos was launched primarily for obtaining improved parallaxes and proper motions for over 100,000 nearby stars, increasing the reach of the method tenfold. Even so, Hipparcos was only able to measure parallax angles for stars up to about 1,600 light-years away, a little more than one percent of the diameter of the Milky Way Galaxy. The European Space Agency's Gaia mission, launched in December 2013, can measure parallax angles to an accuracy of 10 microarcseconds, thus mapping nearby stars (and potentially planets) up to a distance of tens of thousands of light-years from Earth. In April 2014, NASA astronomers reported that the Hubble Space Telescope, by using spatial scanning, can precisely measure distances up to 10,000 light-years away, a ten-fold improvement over earlier measurements.
Diurnal parallax
Diurnal parallax is a parallax that varies with the rotation of the Earth or with a difference in location on the Earth. The Moon and to a smaller extent the terrestrial planets or asteroids seen from different viewing positions on the Earth (at one given moment) can appear differently placed against the background of fixed stars.
The diurnal parallax has been used by John Flamsteed in 1672 to measure the distance to Mars at its opposition and through that to estimate the astronomical unit and the size of the Solar System.
Lunar parallax
Lunar parallax (often short for lunar horizontal parallax or lunar equatorial horizontal parallax), is a special case of (diurnal) parallax: the Moon, being the nearest celestial body, has by far the largest maximum parallax of any celestial body, at times exceeding 1 degree.
The diagram for stellar parallax can illustrate lunar parallax as well if the diagram is taken to be scaled right down and slightly modified. Instead of 'near star', read 'Moon', and instead of taking the circle at the bottom of the diagram to represent the size of the Earth's orbit around the Sun, take it to be the size of the Earth's globe, and a circle around the Earth's surface. Then, the lunar (horizontal) parallax amounts to the difference in angular position, relative to the background of distant stars, of the Moon as seen from two different viewing positions on the Earth.
One of the viewing positions is the place from which the Moon can be seen directly overhead at a given moment. That is, viewed along the vertical line in the diagram. The other viewing position is a place from which the Moon can be seen on the horizon at the same moment. That is, viewed along one of the diagonal lines, from an Earth-surface position corresponding roughly to one of the blue dots on the modified diagram.
The lunar (horizontal) parallax can alternatively be defined as the angle subtended at the distance of the Moon by the radius of the Earth—equal to angle p in the diagram when scaled-down and modified as mentioned above.
The lunar horizontal parallax at any time depends on the linear distance of the Moon from the Earth. The Earth-Moon linear distance varies continuously as the Moon follows its perturbed and approximately elliptical orbit around the Earth. The range of the variation in linear distance is from about 56 to 63.7 Earth radii, corresponding to a horizontal parallax of about a degree of arc, but ranging from about 61.4' to about 54'. The Astronomical Almanac and similar publications tabulate the lunar horizontal parallax and/or the linear distance of the Moon from the Earth on a periodic (e.g. daily) basis for the convenience of astronomers (and of celestial navigators), and the study of how this coordinate varies with time forms part of lunar theory.
Parallax can also be used to determine the distance to the Moon.
One way to determine the lunar parallax from one location is by using a lunar eclipse. A full shadow of the Earth on the Moon has an apparent radius of curvature equal to the difference between the apparent radii of the Earth and the Sun as seen from the Moon. This radius can be seen to be equal to 0.75 degrees, from which (with the solar apparent radius of 0.25 degrees) we get an Earth apparent radius of 1 degree. This yields for the Earth-Moon distance 60.27 Earth radii, or about 384,000 kilometres. This procedure was first used by Aristarchus of Samos and Hipparchus, and later found its way into the work of Ptolemy.
The diagram at the right shows how daily lunar parallax arises on the geocentric and geostatic planetary model, in which the Earth is at the center of the planetary system and does not rotate. It also illustrates the important point that parallax need not be caused by any motion of the observer, contrary to some definitions of parallax that say it is, but may arise purely from motion of the observed.
Another method is to take two pictures of the Moon at the same time from two locations on Earth and compare the positions of the Moon relative to the stars. Using the orientation of the Earth, those two position measurements, and the distance between the two locations on the Earth, the distance to the Moon can be triangulated:
$\mathrm{distance}_{\mathrm{Moon}} = \frac{\mathrm{distance}_{\mathrm{observer\ baseline}}}{\tan(\mathrm{angle})}$
This is the method referred to by Jules Verne in his 1865 novel From the Earth to the Moon: Until then, many people had no idea how one could calculate the distance separating the Moon from the Earth. The circumstance was exploited to teach them that this distance was obtained by measuring the parallax of the Moon. If the word parallax appeared to amaze them, they were told that it was the angle subtended by two straight lines running from both ends of the Earth's radius to the Moon. If they had doubts about the perfection of this method, they were immediately shown that not only did this mean distance amount to a whole two hundred thirty-four thousand three hundred and forty-seven miles (94,330 leagues) but also that the astronomers were not in error by more than seventy miles (≈ 30 leagues).
Solar parallax
After Copernicus proposed his heliocentric system, with the Earth in revolution around the Sun, it was possible to build a model of the whole Solar System without scale. To ascertain the scale, it is necessary only to measure one distance within the Solar System, e.g., the mean distance from the Earth to the Sun (now called an astronomical unit, or AU). When found by triangulation, this is referred to as the solar parallax, the difference in position of the Sun as seen from the Earth's center and a point one Earth radius away, i.e., the angle subtended at the Sun by the Earth's mean radius. Knowing the solar parallax and the mean Earth radius allows one to calculate the AU, the first, small step on the long road of establishing the size and expansion age of the visible Universe.
A primitive way to determine the distance to the Sun in terms of the distance to the Moon was already proposed by Aristarchus of Samos in his book On the Sizes and Distances of the Sun and Moon. He noted that the Sun, Moon, and Earth form a right triangle (with the right angle at the Moon) at the moment of first or last quarter moon. He then estimated that the Moon–Earth–Sun angle was 87°. Using correct geometry but inaccurate observational data, Aristarchus concluded that the Sun was slightly less than 20 times farther away than the Moon. The true value of this angle is close to 89° 50', and the Sun is about 390 times farther away.
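A quick check of these figures, reproducing Aristarchus' ratio from his 87° estimate and the modern ratio from the mean distances themselves (the distance values used below are standard approximate figures, not taken from this text):

```python
import math

# Aristarchus: Sun/Moon distance ratio = 1 / cos(Moon-Earth-Sun angle at quarter moon)
ratio_aristarchus = 1 / math.cos(math.radians(87))      # his estimated angle of 87 degrees
print(round(ratio_aristarchus, 1))                      # ~19.1, "slightly less than 20"

# Modern ratio from the mean distances themselves (approximate values, in km)
print(round(149.6e6 / 384_400))                         # ~389, i.e. "about 390 times"
```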
Aristarchus pointed out that the Moon and Sun have nearly equal apparent angular sizes, and therefore their diameters must be in proportion to their distances from Earth. He thus concluded that the Sun was around 20 times larger than the Moon. This conclusion, although incorrect, follows logically from his incorrect data. It suggests that the Sun is larger than the Earth, which could be taken to support the heliocentric model.
Although Aristarchus' results were incorrect due to observational errors, they were based on correct geometric principles of parallax, and became the basis for estimates of the size of the Solar System for almost 2000 years, until the transit of Venus was correctly observed in 1761 and 1769. This method was proposed by Edmond Halley in 1716, although he did not live to see the results. The use of Venus transits was less successful than had been hoped due to the black drop effect, but the resulting estimate, 153 million kilometers, is just 2% above the currently accepted value, 149.6 million kilometers.
Much later, the Solar System was "scaled" using the parallax of asteroids, some of which, such as Eros, pass much closer to Earth than Venus. In a favorable opposition, Eros can approach the Earth to within 22 million kilometers. During the opposition of 1900–1901, a worldwide program was launched to make parallax measurements of Eros to determine the solar parallax (or distance to the Sun), with the results published in 1910 by Arthur Hinks of Cambridge and Charles D. Perrine of the Lick Observatory, University of California.
Perrine published progress reports in 1906 and 1908. He took 965 photographs with the Crossley Reflector and selected 525 for measurement. A similar program was then carried out, during a closer approach, in 1930–1931 by Harold Spencer Jones. The value of the Astronomical Unit (roughly the Earth-Sun distance) obtained by this program was considered definitive until 1968, when radar and dynamical parallax methods started producing more precise measurements.
Also radar reflections, both off Venus (1958) and off asteroids, like Icarus, have been used for solar parallax determination. Today, use of spacecraft telemetry links has solved this old problem. The currently accepted value of solar parallax is 8.794143 arcseconds.
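As a consistency check, the astronomical unit follows directly from the solar parallax and the Earth's radius; a minimal sketch, assuming a standard value for the Earth's equatorial radius:

```python
import math

solar_parallax_arcsec = 8.794143   # currently accepted solar parallax
earth_radius_km = 6378.137         # Earth's equatorial radius (assumed standard value)

# The AU is the distance at which one Earth radius subtends the solar parallax
au_km = earth_radius_km / math.tan(math.radians(solar_parallax_arcsec / 3600))
print(f"1 AU ~ {au_km:.4e} km")    # ~1.496e8 km
```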
Moving-cluster parallax
The open stellar cluster Hyades in Taurus extends over such a large part of the sky, 20 degrees, that the proper motions as derived from astrometry appear to converge with some precision to a perspective point north of Orion. Combining the observed apparent (angular) proper motion in seconds of arc with the also observed true (absolute) receding motion as witnessed by the Doppler redshift of the stellar spectral lines, allows estimation of the distance to the cluster (151 light-years) and its member stars in much the same way as using annual parallax.
Dynamical parallax
Dynamical parallax has sometimes also been used to determine the distance to a supernova when the optical wavefront of the outburst is seen to propagate through the surrounding dust clouds at an apparent angular velocity, while its true propagation velocity is known to be the speed of light.
Spatio-temporal parallax
From enhanced relativistic positioning systems, a spatio-temporal parallax generalizing the usual notion of parallax in space only has been developed. Event fields in spacetime can then be deduced directly, without intermediate models of light bending by massive bodies such as the one used in the PPN formalism, for instance.
Statistical parallax
Two related techniques can determine the mean distances of stars by modelling the motions of stars. Both are referred to as statistical parallaxes, or individually called secular parallaxes and classical statistical parallaxes.
The motion of the Sun through space provides a longer baseline that will increase the accuracy of parallax measurements, known as secular parallax. For stars in the Milky Way disk, this corresponds to a mean baseline of 4 AU per year. For halo stars the baseline is 40 AU per year. After several decades, the baseline can be orders of magnitude greater than the Earth–Sun baseline used for traditional parallax. Secular parallax introduces a higher level of uncertainty, because the relative velocity of other stars is an additional unknown. When applied to samples of multiple stars, the uncertainty can be reduced; the uncertainty is inversely proportional to the square root of the sample size.
The mean parallaxes and distances of a large group of stars can be estimated from their radial velocities and proper motions. This is known as a classical statistical parallax. The motions of the stars are modelled to statistically reproduce the velocity dispersion based on their distance.
Other methods for distance measurement in astronomy
In astronomy, the term "parallax" has come to mean a method of estimating distances, not necessarily utilizing a true parallax, such as:
Photometric parallax method
Spectroscopic parallax
Dynamical parallax
See also
Cosmic distance ladder
Lunar distance (astronomy)
Notes
References
Parallax
Parallax
Parallax
Length, distance, or range measuring devices | Parallax in astronomy | [
"Physics",
"Astronomy"
] | 4,150 | [
"Concepts in astronomy",
"Astrometry",
"Astronomical sub-disciplines"
] |
72,890,483 | https://en.wikipedia.org/wiki/Materials%20Research%20Bulletin | Materials Research Bulletin is a peer-reviewed, scientific journal that covers the study of materials science and engineering. The journal is published by Elsevier and was established in 1966. The Editor-in-Chief is Rick Ubic.
The journal focuses on the development and understanding of materials, including their properties, structure, and processing, and the application of these materials in various fields. The scope of the journal includes the following areas: ceramics, metals, polymers, composites, electronic and optical materials, and biomaterials.
Materials Research Bulletin features original research articles, review articles, and short communications.
Abstracting and indexing
The journal is abstracted and indexed in, for example:
Materials Science Citation Index
Chemical Abstracts
Cambridge Scientific Abstracts
Scopus
Web of Science
According to the Journal Citation Reports, the journal has a 2021 impact factor of 5.6.
References
External links
Materials
English-language journals
Elsevier academic journals
Materials science journals
Academic journals established in 1966 | Materials Research Bulletin | [
"Physics",
"Materials_science",
"Engineering"
] | 191 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Materials science",
"Materials",
"Matter"
] |
72,900,340 | https://en.wikipedia.org/wiki/Neural%20radiance%20field | A neural radiance field (NeRF) is a method based on deep learning for reconstructing a three-dimensional representation of a scene from two-dimensional images. The NeRF model enables downstream applications of novel view synthesis, scene geometry reconstruction, and obtaining the reflectance properties of the scene. Additional scene properties such as camera poses may also be jointly learned. First introduced in 2020, it has since gained significant attention for its potential applications in computer graphics and content creation.
Algorithm
The NeRF algorithm represents a scene as a radiance field parametrized by a deep neural network (DNN). The network predicts a volume density and view-dependent emitted radiance given the spatial location (x, y, z) and viewing direction in Euler angles (θ, Φ) of the camera. By sampling many points along camera rays, traditional volume rendering techniques can produce an image.
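A compact sketch of the classical volume-rendering quadrature used to composite sampled densities and colors into a pixel color; the trained network is replaced here by a stand-in function, and all names and values are illustrative:

```python
import numpy as np

def render_ray(query_fn, origin, direction, t_near, t_far, n_samples=64):
    """Composite densities and colors sampled along one camera ray.

    query_fn(points, view_dir) must return (densities, colors); here it is
    a stand-in for the trained network described in the text.
    """
    t = np.linspace(t_near, t_far, n_samples)
    points = origin + t[:, None] * direction             # (n_samples, 3)
    sigma, rgb = query_fn(points, direction)

    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))    # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                  # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)           # final pixel color

# Toy stand-in "field": a fuzzy ball of constant color at the origin
def toy_field(points, view_dir):
    sigma = 5.0 * np.exp(-np.linalg.norm(points, axis=1) ** 2)
    rgb = np.tile([1.0, 0.5, 0.2], (len(points), 1))
    return sigma, rgb

color = render_ray(toy_field, np.array([0.0, 0.0, -4.0]),
                   np.array([0.0, 0.0, 1.0]), 2.0, 6.0)
print(color)
```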
Data collection
A NeRF needs to be retrained for each unique scene. The first step is to collect images of the scene from different angles and their respective camera pose. These images are standard 2D images and do not require a specialized camera or software. Any camera is able to generate datasets, provided the settings and capture method meet the requirements for SfM (Structure from Motion).
This requires tracking of the camera position and orientation, often through some combination of SLAM, GPS, or inertial estimation. Researchers often use synthetic data to evaluate NeRF and related techniques. For such data, images (rendered through traditional non-learned methods) and respective camera poses are reproducible and error-free.
Training
For each sparse viewpoint (image and camera pose) provided, camera rays are marched through the scene, generating a set of 3D points with a given radiance direction (into the camera). For these points, volume density and emitted radiance are predicted using the multi-layer perceptron (MLP). An image is then generated through classical volume rendering. Because this process is fully differentiable, the error between the predicted image and the original image can be minimized with gradient descent over multiple viewpoints, encouraging the MLP to develop a coherent model of the scene.
Variations and improvements
Early versions of NeRF were slow to optimize and required that all input views were taken with the same camera in the same lighting conditions. These performed best when limited to orbiting around individual objects, such as a drum set, plants or small toys. Since the original paper in 2020, many improvements have been made to the NeRF algorithm, with variations for special use cases.
Fourier feature mapping
In 2020, shortly after the release of NeRF, the addition of Fourier Feature Mapping improved training speed and image accuracy. Deep neural networks struggle to learn high frequency functions in low dimensional domains, a phenomenon known as spectral bias. To overcome this shortcoming, points are mapped to a higher dimensional feature space before being fed into the MLP.
$\gamma(\mathbf{v}) = \left[ a_1 \cos(2\pi \mathbf{b}_1^{\mathsf{T}} \mathbf{v}),\, a_1 \sin(2\pi \mathbf{b}_1^{\mathsf{T}} \mathbf{v}),\, \ldots,\, a_m \cos(2\pi \mathbf{b}_m^{\mathsf{T}} \mathbf{v}),\, a_m \sin(2\pi \mathbf{b}_m^{\mathsf{T}} \mathbf{v}) \right]^{\mathsf{T}}$
where $\mathbf{v}$ is the input point, $\mathbf{b}_j$ are the frequency vectors, and $a_j$ are coefficients.
This allows for rapid convergence to high frequency functions, such as pixels in a detailed image.
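A minimal sketch of such a mapping with random frequency vectors and unit coefficients (all shapes and values are illustrative):

```python
import numpy as np

def fourier_features(v, B, a=None):
    """Map low-dimensional points v to sin/cos features using frequency matrix B."""
    if a is None:
        a = np.ones(B.shape[0])
    proj = 2.0 * np.pi * v @ B.T                     # (n_points, n_frequencies)
    return np.concatenate([a * np.cos(proj), a * np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(64, 3))             # 64 random frequency vectors for 3-D input
points = rng.random((5, 3))                          # five sample (x, y, z) points
print(fourier_features(points, B).shape)             # (5, 128)
```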
Bundle-adjusting neural radiance fields
One limitation of NeRFs is the requirement of knowing accurate camera poses to train the model. Often, pose estimation methods are not completely accurate, and in some cases the camera pose cannot be known at all. These imperfections result in artifacts and suboptimal convergence. A method was therefore developed to optimize the camera pose along with the volumetric function itself. Called Bundle-Adjusting Neural Radiance Field (BARF), the technique uses a dynamic low-pass filter to go from coarse to fine adjustment, minimizing error by finding the geometric transformation to the desired image. This corrects imperfect camera poses and greatly improves the quality of NeRF renders.
Multiscale representation
Conventional NeRFs struggle to represent detail at all viewing distances, producing blurry images up close and overly aliased images from distant views. In 2021, researchers introduced a technique to improve the sharpness of details at different viewing scales known as mip-NeRF (comes from mipmap). Rather than sampling a single ray per pixel, the technique fits a gaussian to the conical frustum cast by the camera. This improvement effectively anti-aliases across all viewing scales. mip-NeRF also reduces overall image error and is faster to converge at ~half the size of ray-based NeRF.
Learned initializations
In 2021, researchers applied meta-learning to assign initial weights to the MLP. This rapidly speeds up convergence by effectively giving the network a head start in gradient descent. Meta-learning also allowed the MLP to learn an underlying representation of certain scene types. For example, given a dataset of famous tourist landmarks, an initialized NeRF could partially reconstruct a scene given one image.
NeRF in the wild
Conventional NeRFs are vulnerable to slight variations in input images (objects, lighting) often resulting in ghosting and artifacts. As a result, NeRFs struggle to represent dynamic scenes, such as bustling city streets with changes in lighting and dynamic objects. In 2021, researchers at Google developed a new method for accounting for these variations, named NeRF in the Wild (NeRF-W). This method splits the neural network (MLP) into three separate models. The main MLP is retained to encode the static volumetric radiance. However, it operates in sequence with a separate MLP for appearance embedding (changes in lighting, camera properties) and an MLP for transient embedding (changes in scene objects). This allows the NeRF to be trained on diverse photo collections, such as those taken by mobile phones at different times of day.
Relighting
In 2021, researchers added more outputs to the MLP at the heart of NeRFs. The output now included: volume density, surface normal, material parameters, distance to the first surface intersection (in any direction), and visibility of the external environment in any direction. The inclusion of these new parameters lets the MLP learn material properties, rather than pure radiance values. This facilitates a more complex rendering pipeline, calculating direct and global illumination, specular highlights, and shadows. As a result, the NeRF can render the scene under any lighting conditions with no re-training.
Plenoctrees
Although NeRFs had reached high levels of fidelity, their costly compute time made them useless for many applications requiring real-time rendering, such as VR/AR and interactive content. Introduced in 2021, Plenoctrees (plenoptic octrees) enabled real-time rendering of pre-trained NeRFs through division of the volumetric radiance function into an octree. Rather than assigning a radiance direction into the camera, viewing direction is taken out of the network input and spherical radiance is predicted for each region. This makes rendering over 3000x faster than conventional NeRFs.
Sparse Neural Radiance Grid
Similar to Plenoctrees, this method enabled real-time rendering of pretrained NeRFs. To avoid querying the large MLP for each point, this method bakes NeRFs into Sparse Neural Radiance Grids (SNeRG). A SNeRG is a sparse voxel grid containing opacity and color, with learned feature vectors to encode view-dependent information. A lightweight, more efficient MLP is then used to produce view-dependent residuals to modify the color and opacity. To enable this compressive baking, small changes to the NeRF architecture were made, such as running the MLP once per pixel rather than for each point along the ray. These improvements make SNeRG extremely efficient, outperforming Plenoctrees.
Instant NeRFs
In 2022, researchers at Nvidia enabled real-time training of NeRFs through a technique known as Instant Neural Graphics Primitives. An innovative input encoding reduces computation, enabling real-time training of a NeRF, an improvement orders of magnitude above previous methods. The speedup stems from the use of spatial hash functions, which have constant-time access, and parallelized architectures which run fast on modern GPUs.
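A sketch of the kind of spatial hash lookup this relies on: integer voxel coordinates are mixed with per-axis primes and reduced modulo the table size, giving constant-time access to a learned feature table. The primes and table size below are illustrative choices, not the library's exact values.

```python
import numpy as np

PRIMES = np.array([1, 2_654_435_761, 805_459_861], dtype=np.uint64)  # per-axis mixing primes

def hash_voxel(ijk, table_size):
    """Constant-time hash of integer voxel coordinates into a feature-table index."""
    ijk = np.asarray(ijk, dtype=np.uint64)
    h = np.bitwise_xor.reduce(ijk * PRIMES, axis=-1)
    return h % np.uint64(table_size)

table_size = 2 ** 19
features = np.zeros((table_size, 2), dtype=np.float32)   # learned 2-D feature per table entry
idx = hash_voxel([[12, 7, 3], [12, 7, 4]], table_size)
print(idx, features[idx].shape)
```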
Related techniques
Plenoxels
Plenoxel (plenoptic volume element) uses a sparse voxel representation instead of a volumetric approach as seen in NeRFs. Plenoxel also completely removes the MLP, instead directly performing gradient descent on the voxel coefficients. Plenoxel can match the fidelity of a conventional NeRF in orders of magnitude less training time. Published in 2022, this method disproved the importance of the MLP, showing that the differentiable rendering pipeline is the critical component.
Gaussian splatting
Gaussian splatting is a newer method that can outperform NeRF in render time and fidelity. Rather than representing the scene as a volumetric function, it uses a sparse cloud of 3D gaussians. First, a point cloud is generated (through structure from motion) and converted to gaussians of initial covariance, color, and opacity. The gaussians are directly optimized through stochastic gradient descent to match the input image. This saves computation by removing empty space and foregoing the need to query a neural network for each point. Instead, simply "splat" all the gaussians onto the screen and they overlap to produce the desired image.
Photogrammetry
Traditional photogrammetry is not neural, instead using robust geometric equations to obtain 3D measurements. NeRFs, unlike photogrammetric methods, do not inherently produce dimensionally accurate 3D geometry. While their results are often sufficient for extracting accurate geometry (ex: via cube marching), the process is fuzzy, as with most neural methods. This limits NeRF to cases where the output image is valued, rather than raw scene geometry. However, NeRFs excel in situations with unfavorable lighting. For example, photogrammetric methods completely break down when trying to reconstruct reflective or transparent objects in a scene, while a NeRF is able to infer the geometry.
Applications
NeRFs have a wide range of applications, and are starting to grow in popularity as they become integrated into user-friendly applications.
Content creation
NeRFs have huge potential in content creation, where on-demand photorealistic views are extremely valuable. The technology democratizes a space previously only accessible by teams of VFX artists with expensive assets. Neural radiance fields now allow anyone with a camera to create compelling 3D environments. NeRF has been combined with generative AI, allowing users with no modelling experience to instruct changes in photorealistic 3D scenes. NeRFs have potential uses in video production, computer graphics, and product design.
Interactive content
The photorealism of NeRFs make them appealing for applications where immersion is important, such as virtual reality or videogames. NeRFs can be combined with classical rendering techniques to insert synthetic objects and create believable virtual experiences.
Medical imaging
NeRFs have been used to reconstruct 3D CT scans from sparse or even single X-ray views. The model demonstrated high fidelity renderings of chest and knee data. If adopted, this method can save patients from excess doses of ionizing radiation, allowing for safer diagnosis.
Robotics and autonomy
The unique ability of NeRFs to understand transparent and reflective objects makes them useful for robots interacting in such environments. The use of NeRF allowed a robot arm to precisely manipulate a transparent wine glass; a task where traditional computer vision would struggle.
NeRFs can also generate photorealistic human faces, making them valuable tools for human-computer interaction. Traditionally rendered faces can be uncanny, while other neural methods are too slow to run in real-time.
References
Machine learning algorithms
Computer vision | Neural radiance field | [
"Engineering"
] | 2,427 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
72,905,077 | https://en.wikipedia.org/wiki/Anion%20exchange%20membrane%20electrolysis | Anion exchange membrane (AEM) electrolysis is the electrolysis of water that utilises a semipermeable membrane that conducts hydroxide ions (OH−) called an anion exchange membrane. Like a proton-exchange membrane (PEM), the membrane separates the products, provides electrical insulation between electrodes, and conducts ions. Unlike PEM, AEM conducts hydroxide ions. The major advantage of AEM water electrolysis is that a high-cost noble metal catalyst is not required, low-cost transition metal catalyst can be used instead. AEM electrolysis is similar to alkaline water electrolysis, which uses a non-ion-selective separator instead of an anion-exchange membrane.
Advantages and Challenges
Advantages
Of all water electrolysis methods, AEM electrolysis can combine the advantages of alkaline water electrolysis (AWE) and PEM electrolysis.
Polymer electrolyte membrane electrolysis uses expensive platinum-group metals (PGMs) such as platinum, iridium, and ruthenium as a catalyst. Iridium, for instance, is scarcer than platinum; a 100 MW PEM electrolyser is expected to require 150 kg of iridium, at an estimated cost of 7 million USD. Like alkaline water electrolysis, electrodes in AEM electrolysis operate in an alkaline environment, which allows non-noble, low-cost catalysts based on Ni, Fe, Co, Mn, Cu, etc., to be used.
An AEM electrolyser can run on pure water or slightly alkaline solutions (0.1–1 M KOH/NaOH), unlike the highly concentrated alkaline solutions (5 M KOH/NaOH) used in AWE. This reduces the risk of leakage. Using an alkaline solution, usually KOH/NaOH, increases membrane conductivity and adds a hydroxide ion conduction pathway, which increases the utilisation of the catalyst. An AEM electrolyser without a PGM catalyst operating at a current density of 1 A/cm2 was reported to require 1.8 volts when fed with pure water and 1.57 volts when fed with 1 M KOH. The electrolyte can be fed to both the anode and cathode sides, or to the anode side only.
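Cell voltage translates directly into electrical energy per kilogram of hydrogen through Faraday's law; a quick check using the voltages quoted above (the physical constants are standard values):

```python
F = 96485.0        # Faraday constant, C/mol
M_H2 = 2.016e-3    # molar mass of H2, kg/mol

def kwh_per_kg_h2(cell_voltage):
    """Electrical energy demand of electrolysis at a given cell voltage."""
    joules_per_mol = 2 * F * cell_voltage    # 2 electrons transferred per H2 molecule
    return joules_per_mol / M_H2 / 3.6e6     # J/kg -> kWh/kg

for v in (1.8, 1.57):                        # pure-water-fed vs 1 M KOH-fed at 1 A/cm2
    print(f"{v} V -> {kwh_per_kg_h2(v):.1f} kWh per kg H2")
```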
In the zero-gap design of AWE, the electrodes are separated only by a diaphragm which separates the gases. The diaphragm only allows water and hydroxide ions to pass through, but does not completely eliminate gas cross-over. Oxygen gas can enter the hydrogen half-cell and react on the cathode side to form water, which reduces the efficiency of the cell. Gas cross-over from the H2 to the O2 evolution side can pose a safety hazard because it can create an explosive gas mixture with >4 mol% H2. The AEM electrolyser was reported to maintain H2 crossover below 0.4% over 5,000 h of operation.
AEMs based on an aromatic polymer backbone are promising due to their significant cost reduction. Compared to the Nafion membranes used in PEM, Nafion production requires highly toxic chemicals, which increases the cost (>1000 $/m2), and fluorocarbon gas is produced at the tetrafluoroethylene production stage, which poses a strong environmental impact. Fluorinated raw materials are inessential for AEM, allowing for a wider selection of low-cost polymer chemistries.
Challenges
AEM electrolysis is still in the early research and development stage, while alkaline water electrolysis is in the mature stage and PEM electrolysis is in the commercial stage. There is less academic literature on pure-water fed AEM electrolysers compared to the usage of KOH solution.
The major technical challenge facing a consumer level AEM electrolyser is the low durability of the membrane, which refers to the short device lifetime or longevity. The lifetimes of PEM electrolyser stacks range from 20,000 h to 80,000 h. Literature surveys have found that AEM electrolyser durability is demonstrated to be >2000 h, >12,000 h, and >700 h for pure water-fed (Pt group catalyst on anode and cathode), concentrated KOH-fed, and 1wt% K2CO3-fed respectively.
To overcome the obstacles to large-scale usage of AEM, increasing ionic conductivity and durability is essential. Many AEMs break down at temperatures higher than 60 °C; AEMs that can tolerate the presence of O2, high pH, and temperatures exceeding 60 °C are needed.
Science
Reactions
The oxygen evolution reaction (OER) needs four electrons to produce one molecule of O2, consumes multiple OH− anions, and forms multiple adsorbed intermediates on the surface of the catalyst. These multiple reaction steps create a high energy barrier and thus a high overpotential, which makes the OER sluggish. The performance of the AEM electrolyser largely depends on the OER. The overpotential of the OER can be reduced with an efficient catalyst that breaks the reaction's intermediate bonds.
Hydrogen evolution reaction (HER) kinetics in alkaline solutions are slower than in acidic solutions because of the additional water dissociation step required to form the adsorbed hydrogen intermediate (H*), a step that is not present in acidic conditions.
Anode reaction
In alkaline conditions the OER proceeds through a sequence of adsorbed intermediates; a commonly written four-step mechanism is:
* + OH− → OH* + e−
OH* + OH− → O* + H2O + e−
O* + OH− → OOH* + e−
OOH* + OH− → * + O2 + H2O + e−
with the overall reaction 4OH− → O2 + 2H2O + 4e−, where the * indicates species adsorbed to the surface of the catalyst.
Cathode reaction
In alkaline conditions the HER can be written as H2O + e− + * → H* + OH− (Volmer step), followed by either 2H* → H2 + 2* (Tafel step) or H* + H2O + e− → H2 + OH− + * (Heyrovsky step). The reaction starts with water adsorption and dissociation in the Volmer step and ends with hydrogen desorption in either the Tafel step or the Heyrovsky step.
Anion exchange membrane
The hydroxide ion intrinsically has lower mobility than H+; increasing the ion exchange capacity can compensate for this lower mobility but also increases swelling and reduces membrane mechanical stability. Cross-linking membranes can compensate for the mechanical instability. The quaternary ammonium (QA) head group, commonly attached to the polymer matrix in AEMs, allows anions but not cations to be transported. QA AEMs have low chemical stability because they are susceptible to OH− attack. Promising head group candidates include imidazolium-based head groups and nitrogen-free head groups such as phosphonium, sulphonium, and ligand-metal complexes. Most QA and imidazolium groups degrade in alkaline environments by Hofmann degradation, SN2 reaction, or ring-opening reaction, especially at high temperatures and pH.
Polymeric AEM backbones are cation-free base polymers. Poly(arylene ether)-based backbones, polyolefin-based backbones, polyphenylene-based backbones, and backbones containing cationic moieties are some examples.
Some of the best-performing AEMs are HTMA-DAPP, QPC-TMA, m-PBI, and PFTP.
Membrane electrode assembly
A membrane electrode assembly (MEA) is made of an anode and cathode catalyst layer with a membrane layer in between. The catalyst layer can be deposited on the membrane or the substrate. Catalyst-coated substrate (CCS) and catalyst-coated membrane (CCM) are two approaches to preparing MEA. A substrate must conduct electricity, support the catalyst mechanically, and remove gaseous products.
Nickel is typically used as a substrate for AEM, while titanium is used for PEM; both nickel and titanium can be used in AEM. Carbon materials are not suitable for the anode side because of their degradation by OH− ions, which are nucleophiles. On the cathode, nickel, titanium, and carbon can all be readily used. The catalyst layer is typically made by mixing catalyst powder and ionomer to produce an ink or slurry that is applied by spraying or painting.
Other methods include electrodeposition, magnetron sputtering, chemical electroless plating, and screen printing onto the substrate.
Ionomers act as a binder between the catalyst, the substrate support, and the membrane; they also provide OH−-conducting pathways and increase electrocatalytic activity.
See also
Electrochemistry
Electrochemical engineering
Electrolysis
Hydrogen production
Photocatalytic water splitting
Timeline of hydrogen technologies
Electrolysis of water
PEM fuel cell
proton-exchange membrane
Hydrogen economy
High-pressure electrolysis
References
Electrolysis
Hydrogen economy
Hydrogen production
Electrolytic cells | Anion exchange membrane electrolysis | [
"Chemistry"
] | 1,735 | [
"Electrochemistry",
"Electrolysis"
] |
69,803,700 | https://en.wikipedia.org/wiki/Phonon%20polariton | In condensed matter physics, a phonon polariton is a type of quasiparticle that can form in a diatomic ionic crystal due to coupling of transverse optical phonons and photons. They are a particular type of polariton, which behave like bosons. Phonon polaritons occur in the region where the wavelength and energy of phonons and photons are similar, in accordance with the avoided crossing principle.
Phonon polariton spectra have traditionally been studied using Raman spectroscopy. The recent advances in (scattering-type) scanning near-field optical microscopy ((s-)SNOM) and atomic force microscopy (AFM) have made it possible to observe the polaritons in a more direct way.
Theory
A phonon polariton is a type of quasiparticle that can form in some crystals due to the coupling of photons and lattice vibrations. They have properties of both light and sound waves, and can travel at very slow speeds in the material. They are useful for manipulating electromagnetic fields at the nanoscale and enhancing optical phenomena. Phonon polaritons result only from coupling with transverse optical phonons; this is due to the particular form of the dispersion relations of the phonon and photon and their interaction. Photons consist of electromagnetic waves, which are always transverse. Therefore, they can only couple with transverse phonons in crystals.
Near k = 0 the dispersion relation of an acoustic phonon can be approximated as being linear, with a particular gradient giving a dispersion relation of the form ω = vk, with v the speed of the wave, ω the angular frequency and k the absolute value of the wave vector. The dispersion relation of photons is also linear, being also of the form ω = ck, with c being the speed of light in vacuum. The difference lies in the magnitudes of their speeds; the speed of photons is many times larger than the speed of the acoustic phonons. The dispersion relations will therefore never cross each other, resulting in a lack of coupling. The dispersion relations touch at k = 0, but since the waves have no energy there, no coupling will occur.
Optical phonons, by contrast, have a non-zero angular frequency at k = 0 and have a negative slope, which is also much smaller in magnitude than that of photons. This will result in the crossing of the optical phonon branch and the photon dispersion, leading to their coupling and the formation of a phonon polariton.
Dispersion relation
The behavior of the phonon polaritons can be described by the dispersion relation. This dispersion relation is most easily derived for diatomic ionic crystals with optical isotropy, for example sodium chloride and zinc sulfide. Since the atoms in the crystal are charged, any lattice vibration which changes the relative distance between the two atoms in the unit cell will change the dielectric polarization of the material. To describe these vibrations, it is useful to introduce the parameter w, which is given by:
w = √(μ/V) u
Where
u is the displacement of the positive atom relative to the negative atom;
μ is the reduced mass of the two atoms;
V is the volume of the unit cell.
Using this parameter, the behavior of the lattice vibrations for long waves can be described by the following coupled (Huang) equations:
d²w/dt² = −ω0² w + ω0 √((ε(0) − ε(∞))/4π) E
P = ω0 √((ε(0) − ε(∞))/4π) w + ((ε(∞) − 1)/4π) E
Where
d²w/dt² denotes the double time derivative of w
ε(0) is the static dielectric constant
ε(∞) is the high-frequency dielectric constant
ω0 is the infrared dispersion frequency
E is the electric field
P is the dielectric polarization.
For the full coupling between the phonon and the photon, we need the four Maxwell's equations in matter. Since, macroscopically, the crystal is uncharged and there is no current, the equations can be simplified. A phonon polariton must satisfy all six of these equations. To find solutions to this set of equations, we write the following trial plane-wave solutions for w, E and P:
w, E, P ∝ exp[i(k·x − ωt)]
Where k denotes the wave vector of the plane wave, x the position, t the time, and ω the angular frequency. Notice that the wave vector should be perpendicular to the electric field and the magnetic field. Solving the resulting equations for ω and k, the magnitude of the wave vector, yields the following dispersion relation, and furthermore an expression for the optical dielectric constant:
c²k²/ω² = ε(ω) = ε(∞) + (ε(0) − ε(∞)) ω0²/(ω0² − ω²)
With ε(ω) the optical dielectric constant.
The solution of this dispersion relation has two branches, an upper branch and a lower branch (see also the figure). If the slope of the curve is low, the particle is said to behave "phononlike", and if the slope is high the particle behaves "photonlike", owing these names to the slopes of the regular dispersion curves for phonons and photons. The phonon polariton behaves phononlike for low k in the upper branch, and for high k in the lower branch. Conversely, the polariton behaves photonlike for high k in the upper branch, low k in the lower branch.
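As a numerical illustration of the two branches described above, the following sketch evaluates the textbook polariton dispersion c²k² = ω²ε(ω) for an assumed set of material parameters; the values of ε(0), ε(∞) and the transverse phonon frequency are placeholders, not data for a specific crystal.

```python
import numpy as np

# Minimal numerical sketch of the two phonon-polariton branches, assuming the
# textbook dielectric function eps(w) = eps_inf + (eps_0 - eps_inf)*wT^2/(wT^2 - w^2)
# and the light line c^2 k^2 = w^2 eps(w).  Parameter values are illustrative only.
c = 3.0e8                     # speed of light, m/s
eps_0, eps_inf = 5.9, 2.25    # static and high-frequency dielectric constants (assumed)
wT = 3.0e13                   # transverse optical phonon frequency, rad/s (assumed)

k = np.linspace(1e2, 4e5, 400)            # wave vector, 1/m
a = eps_inf
b = -(eps_0 * wT**2 + (c * k)**2)
cc = (c * k)**2 * wT**2
disc = np.sqrt(b**2 - 4 * a * cc)
upper = np.sqrt((-b + disc) / (2 * a))    # photon-like at large k
lower = np.sqrt((-b - disc) / (2 * a))    # phonon-like at large k

# Consistency check: the upper branch starts at the longitudinal frequency wL for k -> 0.
wL = wT * np.sqrt(eps_0 / eps_inf)
print(upper[0] / wL)   # close to 1
```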
Limit behaviour of the dispersion relation
The dispersion relation describes the behaviour of the coupling. The coupling of the phonon and the photon is most prominent in the region where the original transverse dispersion relations would have crossed. In the limit of large k, the solid lines of both branches approach the dotted lines, meaning the coupling does not have a large impact on the behaviour of the vibrations.
Towards the right of the crossing point, the upper branch behaves like a photon. The physical interpretation of this effect is that the frequency becomes too high for the ions to partake in the vibration, causing them to be essentially static. This results in a dispersion relation resembling that of a regular photon in a crystal. The lower branch in this region behaves, because of its low phase velocity compared to that of the photons, as regular transverse lattice vibrations.
Lyddane–Sachs–Teller relation
The longitudinal optical phonon frequency is defined by the zero of the equation for the dielectric constant. Writing the equation for the dielectric constant in a different way yields:
ε(ω) = ε(∞) (ωL² − ω²)/(ωT² − ω²)
Solving the equation ε(ω) = 0 for ω = ωL yields:
ωL²/ωT² = ε(0)/ε(∞)
This equation gives the ratio of the frequency of the longitudinal optical phonon (ωL) to the frequency of the transverse optical phonon (ωT) in diatomic cubic ionic crystals, and is known as the Lyddane-Sachs-Teller relation. The ratio can be found using inelastic neutron scattering experiments.
Surface phonon polariton
Surface phonon polaritons (SPhPs) are a specific kind of phonon polariton. They are formed by the coupling of optical surface phonons, instead of normal phonons, with light, resulting in an electromagnetic surface wave. They are similar to surface plasmon polaritons, although studied to a far lesser extent. The applications are far-ranging, from materials with a negative index of refraction to high-density IR data storage.
One other application is in the cooling of microelectronics. Phonons are the main source of heat conductivity in materials, where optical phonons contribute far less than acoustic phonons. This is because of the relatively low group velocity of optical phonons. When the thickness of the material decreases, the conductivity via acoustic phonons also decreases, since surface scattering increases. As microelectronics get smaller and smaller, this reduction becomes more problematic. Although optical phonons themselves do not have a high thermal conductivity, SPhPs do seem to have one, so they may be an alternative means of cooling these electronic devices.
Experimental observation
Most observations of phonon polaritons are of surface phonon polaritons, since these can be easily probed by Raman spectroscopy or AFM.
Raman spectroscopy
As with any Raman experiment, a laser is pointed at the material being studied. If the correct wavelength is chosen, this laser can induce the formation of a polariton on the sample. Looking at the Stokes shifted emitted radiation and by using the conservation of energy and the known laser energy, one can calculate the polariton energy, with which one can construct the dispersion relation.
SNOM and AFM
The induction of polaritons is very similar to that in Raman experiments, with a few differences. With the extremely high spatial resolution of SNOM, one can induce polaritons very locally in the sample. This can be done continuously, producing a continuous wave (CW) of polaritons, or with an ultrafast pulse, producing a polariton with a very well-defined temporal footprint. In both cases the polaritons are detected by the tip of the AFM, and this signal is then used to calculate the energy of the polariton. One can also perform these experiments near the edge of the sample, which will result in the polaritons being reflected. In the case of CW polaritons, standing waves will be created, which will again be detected by the AFM tip. In the case of polaritons created by the ultrafast laser, no standing wave will be created; the wave can still interfere with itself the moment it is reflected off the edge. Whether one is observing on the bulk surface or close to an edge, the signal is in temporal form. One can Fourier transform this signal, converting it into the frequency domain, which can be used to obtain the dispersion relation.
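The last step described above, converting the time-domain signal recorded at the AFM tip into the frequency domain, can be sketched as follows; the sampling interval, damping time and polariton frequency are assumed values chosen purely for illustration.

```python
import numpy as np

# Illustrative sketch (not tied to any specific instrument): converting a
# time-domain polariton signal recorded at the AFM tip into the frequency
# domain with a Fourier transform, as described above.
dt = 1e-15                      # sampling interval, s (assumed)
t = np.arange(0, 2e-12, dt)     # 2 ps time trace
f0 = 1.5e12                     # assumed polariton frequency, Hz
signal = np.exp(-t / 5e-13) * np.cos(2 * np.pi * f0 * t)   # damped oscillation

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), dt)
print(f"peak at {freqs[np.argmax(spectrum[1:]) + 1] / 1e12:.2f} THz")
```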
Polaritonics and real-space imaging
Phonon polaritons also find use in the field of polaritonics, a field between photonics and electronics. In this field phonon polaritons are used for high speed signal processing and terahertz spectroscopy. The real-space imaging of phonon polaritons was made possible by projecting them onto a CCD camera.
See also
Polariton
Phonon
Surface plasmon polariton
References
Quasiparticles | Phonon polariton | [
"Physics",
"Materials_science"
] | 2,041 | [
"Quasiparticles",
"Subatomic particles",
"Condensed matter physics",
"Matter"
] |
69,803,965 | https://en.wikipedia.org/wiki/Electroreflectance | Electroreflectance (also: electromodulated reflectance) is the change of reflectivity of a solid due to the influence of an electric field close to, or at the interface of the solid with a liquid. The change in reflectivity is most noticeable at very specific ranges of photon energy, corresponding to the band gaps at critical points of the Brillouin zone.
The electroreflectance effect can be used to get a clearer picture of the band structure at critical points where there is a lot of near degeneracy. Normally, the band structure at critical points (points of special interest) has to be measured within a background of absorption from non-critical points at the Brillouin zone boundary. Using a strong electric field, the absorption spectrum can be changed to a spectrum that shows peaks at these critical points, essentially lifting the critical points from the background.
The effect was first discovered and understood in semiconductor materials, but later research proved that metals also exhibit electroreflectance. An early observation of the changing optical reflectivity of gold due to a present electric field was attributed to a change in refractive index of the neighboring liquid. However, it was shown that this could not be the case. The new conclusion was that the effect had to come from a modulation of the near-surface layer of the gold.
Theoretic description
Effect of the electric field on the electronic structure
When an electric field is applied to a metal or semiconductor, the electronic structure of the material changes. The electrons (and other charged particles) will react to the electric field, by repositioning themselves within the material. Electrons in metals can relatively easily move around and are available in abundance. They will move in such a manner that they try to cancel the external electric field. Since no metal is a perfect conductor, no metal will perfectly cancel the external electric field within the material. In semiconductors the electrons that are available will not be able to move around as easily as electrons in metals. This leads to a weaker response and weaker cancellation of the electric field. This has the effect that the electric field can penetrate deeper into a semiconductor than into a metal.
The optical reflectivity of a (semi-)conductor is based on the band structure of the material close to or at the surface of the material. For reflectivity to occur a photon has to have enough energy to overcome the bandgap of electrons at the Fermi surface. When the photon energy is smaller than the bandgap, the solid will be unable to absorb the energy of the photon by excitation of an electron to a higher energy. This means that the photon will not be re-emitted by the solid and thus not reflected. If the photon energy is large enough to excite an electron from the Fermi surface, the solid will re-emit the photon by decaying the electron back to the original energy. This is not exactly the same photon as the incident photon, as it has for example the opposite direction of the incident photon.
By applying an electric field to the material, the band structure of the solid changes. This change in band structure leads to a different bandgap, which in turn leads to a difference in optical reflectivity. The electric field, generally made by creating a potential difference, leads to an altered Hamiltonian. Using analytical methods available, such as the Tight Binding method, it can be calculated that this altered Hamiltonian leads to a different band structure.
The combination of electron repositioning and the change in band structure due to an external electric field is called the field effect. Since the electric field has more influence on semiconductors than on metals, semiconductors are easier to use to observe the electroreflectance effect.
Near the surface
The optical reflection in (semi-)conductors happens mostly in the surface region of the material. Therefore, the band structure of this region is extra important. Band structure usually covers bulk material. For deviations from this structure, it is conventional to use a band diagram. In a band diagram the x-axis is changed from wavevector k in band structure diagrams to position x in the preferred direction. Usually, this positional direction is normal to the surface plane.
For semiconductors specifically, the band diagram near the surface of the material is important. When an electric field is present close to, or in, the material, this will lead to a potential difference within the semiconductor. Depending on the electric field, the semiconductor will become n- or p-like in the surface region. From now on we will assume that the semiconductor has become n-like at the surface. The bands near the surface will bend under the electrostatic potential of the applied electric field. This bending can be interpreted in the same way as the bending of the valence and conduction bands in a p-n junction when equilibrium has been reached. This bending results in a conduction band that comes close to the Fermi level. Therefore, the conduction band will begin to fill with electrons. This change in band structure leads to a change in optical reflection of the semiconductor.
Brillouin zones and optical reflectivity
Optical reflectivity and the Brillouin zones are closely linked, since the band gap energy in the Brillouin zone determines whether a photon is absorbed or reflected. If the band gap energy in the Brillouin zone is smaller than the photon energy, the photon will be absorbed, while the photon will be transmitted/reflected if the band gap energy is larger than the photon energy. For example, the photon energies of visible light lie in a range between 1.8 eV (red light) and 3.1 eV (violet light), so if the band gap energy is larger than 3.2 eV, photons of visible light will not be absorbed but reflected/transmitted: the material appears transparent. This is the case for diamond, quartz etc. But if the band gap is roughly 2.6 eV (this is the case for cadmium sulfide) only blue and violet light is absorbed, while red and green light are transmitted, resulting in a reddish-looking material.
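The photon energies quoted above follow directly from E = hc/λ; the short snippet below repeats that arithmetic for representative red and violet wavelengths (700 nm and 400 nm, chosen here for illustration).

```python
# Quick arithmetic check of the visible-light photon energies quoted above,
# using E = h*c / lambda expressed in electronvolts.
H_C_EV_NM = 1239.84  # h*c in eV*nm

for colour, wavelength_nm in {"red": 700, "violet": 400}.items():
    print(f"{colour} ({wavelength_nm} nm): {H_C_EV_NM / wavelength_nm:.2f} eV")
```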
When an electric field is added to a (semi)conductor, the material will try to cancel this field by inducing an electric field at its surface. Because of this electric field, the optical properties of the surface layer will change, due to the change in size of critical band gaps, and hence changing its energy. Since the change in band gap only occurs on the surface of the (semi)conductor, optical properties will not change in the core of bulk materials, but for very thin films, where almost all particles can be found at the surface, the optical properties can change: absorption or transmittance of certain wavelengths depending on the strength of the electric field. This can result in more accurate measurements in case there are multiple compounds in the semiconductor, practically canceling the background noise of data.
Commonly, the band gaps are smallest close to, or at the Brillouin zone boundary. Adding an electric field will alter the whole band structure of the material where the electric field penetrates, but the effect will be especially noticeable at the Brillouin zone boundary. When the smallest band gap changes in size, this alters the optical reflectivity of the material more than the change in an already larger band gap. This can be explained by noticing that the smallest band gap determines a lot of the reflectivity, as lower energy photons cannot be absorbed and re-emitted.
Dielectric constant
The optical properties of semiconductors are directly related to the dielectric constant of the material. This dielectric constant gives the ratio of the electric permittivity of the material to the permittivity of vacuum. The complex refractive index of a material is given by the square root of the dielectric constant, and the reflectance of the material can be calculated from it using the Fresnel equations. A present electric field alters the dielectric constant and therefore alters the optical properties of the material, such as the reflectance.
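A minimal sketch of the chain described above, from dielectric constant to complex refractive index to normal-incidence reflectance via the Fresnel equations, is given below; the dielectric value used is an illustrative germanium-like number, not a measured quantity.

```python
import numpy as np

# Sketch of the link described above: from a (complex) dielectric constant to the
# complex refractive index and then to the normal-incidence Fresnel reflectance.
def normal_incidence_reflectance(eps_complex: complex) -> float:
    n = np.sqrt(eps_complex)            # complex refractive index
    r = (n - 1) / (n + 1)               # Fresnel amplitude coefficient (vacuum -> solid)
    return abs(r) ** 2                  # reflectance

print(normal_incidence_reflectance(16 + 0.5j))   # ~0.36 for a germanium-like value
```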
Interfaces with a liquid (electric double layer)
A solid in contact with a liquid, in the presence of an electric field, forms an electric double layer. This layer is present at the interface of the solid and liquid and it shields the charged surface of the solid. This electric double layer has an effect on the optical reflectivity of the solid as it changes the elastic light scattering properties. The formation of the electric double layer involves different timescales, such as the relaxation time and the charging time.
The relaxation time can be written as τ = λD²/D, with D the diffusion constant and λD the Debye length.
The charging time can be expressed as τc = λD·L/D, where L is the representative system size.
The Debye length is often used as a measure of electric double layer thickness. Measuring the electric double layer with electroreflectance is challenging due to separation caused by conduction electrons.
History
The effect of electroreflectance was first described in a 1965 letter by B. O. Seraphin and R. B. Hess from the Michelson Laboratory, China Lake, California, where they were studying the Franz-Keldysh effect above the fundamental edge in germanium. They found that it was not only possible for the material to absorb the electrons, but also to re-emit them. Following this discovery, Seraphin wrote numerous articles on the newfound phenomenon.
Research techniques
Electroreflectance in surface physics
Using electroreflectance in surface physics studies gives some major advantages over techniques used before its discovery. Previously, determining the surface potential was hard to do, since it required electrical measurements at the surface, and it was difficult to probe the surface region without involving the bulk underneath. Electroreflectance does not need electrical measurements on the surface, but only uses optical measurements. Furthermore, due to direct functional relationships between surface potential and reflectivity, many of the assumptions about mobility, scattering, or trapping of added carriers needed in the older methods can be dropped. The electric field of the surface is probed by the modulation of the beam reflected by the surface. The incoming beam does not penetrate deep into the material, so only the surface is probed, without interacting with the bulk underneath.
Aspnes's third-derivative
Third-order spectroscopy, sometimes referred to as Aspnes's third derivative, is a technique used to enhance the resolution of a spectroscopy measurement. This technique was first used by D. E. Aspnes to study electroreflectance in 1971. Using third-order derivatives can sharpen the peak of a function (see figure). Especially in spectroscopy, where the signal is never measured at one specific wavelength, but always over a band, it is useful to sharpen the peak, and thus narrow the band.
Another advantage of derivatives is that baseline shifts are eliminated, since differentiation removes such shifts. These shifts in spectra can, for example, be caused by sample handling or by lamp or detector instabilities. This way, some of the background noise of the measurements can be eliminated.
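A toy numerical illustration of why third derivatives sharpen spectra and suppress slowly varying baselines is sketched below; the Lorentzian line, the linear baseline and all parameter values are invented for demonstration and do not correspond to any real measurement.

```python
import numpy as np

# Toy illustration of the idea behind Aspnes's third-derivative spectroscopy:
# numerically differentiating a broad spectral line three times sharpens the
# structure and removes a slowly varying baseline.
energy = np.linspace(1.0, 3.0, 2001)                    # photon energy, eV
line = 1.0 / (1.0 + ((energy - 2.0) / 0.10) ** 2)       # broad Lorentzian at 2 eV
baseline = 0.5 + 0.2 * energy                            # slowly varying background
spectrum = line + baseline

third_derivative = spectrum.copy()
for _ in range(3):
    third_derivative = np.gradient(third_derivative, energy)

# The linear baseline vanishes after the first two derivatives, and the sharpened
# structure is concentrated around the 2 eV critical point.
print(energy[np.argmax(np.abs(third_derivative))])
```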
Applications
Electroreflectance is often used to determine band gaps and electric properties of thin films of weaker semiconducting materials. Two different examples are listed below.
Enhancing research of high band gap semiconductors at room temperature
Wide band gap semiconductors like tin oxide (SnO2) generally possess high chemical stability and mobility, are cheap to fabricate and have a suitable band alignment, making them widely used in various electronic devices such as thin-film transistors, anodes in lithium-ion batteries and electron transport layers in solar cells. The large band gap and large exciton binding energy of SnO2 make it useful in ultraviolet-based devices.
But a fundamental problem arises with its dipole-forbidden band structure in bulk form: the transition from the valence to the conduction band is dipole forbidden since both types of states have even parity, with the effect that band-edge emission of SnO2 is intrinsically forbidden. This can be offset by employing its reduced-dimensional structures, which partially break the crystal symmetry, turning the forbidden dipole transitions into allowed ones. Observing optical transitions in SnO2 at room temperature, however, is challenging because the light-absorbing efficiency of the reduced SnO2 structures in the UV region is very weak and because of background scattering of electrons with lower energies. Using electroreflectance the optical transitions of thin films can be recovered: by placing a thin film in an electric field, the critical points of the optical transition will be enhanced while, due to a change in reflectivity, low-energy background scattering is reduced.
Electroreflectance in organic semiconductors
Organic compounds containing conjugated (i.e., alternating single-double) bonds can have semiconducting properties. The conductivity and mobility of these organic compounds, however, are very low compared to inorganic semiconductors. Assuming the molecules of the organic semiconductor form lattices, the same electroreflectance procedure used for inorganic semiconductors can be applied to the organic ones. It should be noted, though, that there is a certain dualism in these semiconductors: intra-molecular conduction (inside a molecule) and inter-molecular conduction (between molecules), which should be taken into account when doing measurements. Especially for thin films, the band gaps of organic semiconductors can be accurately determined using this method.
See also
Photo-reflectance
Field effect (semiconductor)
Band structure
Brillouin zone
Electric double layer
References
Spectroscopy
Semiconductor properties
Electrostatics | Electroreflectance | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,734 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Semiconductor properties",
"Condensed matter physics",
"Spectroscopy"
] |
69,806,623 | https://en.wikipedia.org/wiki/Band%20bending | In solid-state physics, band bending refers to the process in which the electronic band structure in a material curves up or down near a junction or interface. It does not involve any physical (spatial) bending. When the electrochemical potential of the free charge carriers around an interface of a semiconductor is dissimilar, charge carriers are transferred between the two materials until an equilibrium state is reached whereby the potential difference vanishes. The band bending concept was first developed in 1938 when Mott, Davidov and Schottky all published theories of the rectifying effect of metal-semiconductor contacts. The use of semiconductor junctions sparked the computer revolution in the second half of the 20th century. Devices such as the diode, the transistor, the photocell and many more play crucial roles in technology.
Qualitative description
Band bending can be induced by several types of contact. In this section metal-semiconductor contact, surface state, applied bias and adsorption induced band bending are discussed.
Metal-semiconductor contact induced band bending
Figure 1 shows the ideal band diagram (i.e. the band diagram at zero temperature without any impurities, defects or contaminants) of a metal with an n-type semiconductor before (top) and after contact (bottom). The work function is defined as the energy difference between the Fermi level of the material and the vacuum level before contact and is denoted by φ. When the metal and semiconductor are brought into contact, charge carriers (i.e. free electrons and holes) will transfer between the two materials as a result of the work function difference φm − φs.
If the metal work function (φm) is larger than that of the semiconductor (φs), that is φm > φs, the electrons will flow from the semiconductor to the metal, thereby lowering the semiconductor Fermi level and increasing that of the metal. Under equilibrium the work function difference vanishes and the Fermi levels align across the interface. A Helmholtz double layer will be formed near the junction, in which the metal is negatively charged and the semiconductor is positively charged due to this electrostatic induction. Consequently, a net electric field is established from the semiconductor to the metal. Due to the low concentration of free charge carriers in the semiconductor, the electric field cannot be effectively screened (unlike in the metal, where the field vanishes in the bulk). This causes the formation of a depletion region near the semiconductor surface. In this region, the energy band edges in the semiconductor bend upwards as a result of the accumulated charge and the associated electric field between the semiconductor and the metal surface.
In the case of φm < φs, electrons are transferred from the metal to the semiconductor, resulting in an electric field that points in the opposite direction. Hence, the band bending is downward, as can be seen in the bottom right of Figure 1.
One can envision the direction of bending by considering the electrostatic energy experienced by an electron as it moves across the interface. When φm > φs, the metal develops a negative charge. An electron moving from the semiconductor to the metal therefore experiences a growing repulsion as it approaches the interface. It follows that its potential energy rises and hence the band bending is upwards. In the case of φm < φs, the semiconductor carries a negative charge, forming a so-called accumulation layer and leaving a positive charge on the metal surface. An electric field develops from the metal to the semiconductor which drives the electrons towards the metal. By moving closer to the metal the electron can thus lower its potential energy. The result is that the semiconductor energy bands bend downwards towards the metal surface.
Surface state induced band bending
Despite being energetically unfavourable, surface states may exist on a clean semiconductor surface due to the termination of the material's lattice periodicity. Band bending can also be induced in the energy bands of such surface states. A schematic of an ideal band diagram near the surface of a clean semiconductor in and out of equilibrium with its surface states is shown in Figure 2. The unpaired electrons in the dangling bonds of the surface atoms interact with each other to form an electronic state with a narrow energy band, located somewhere within the band gap of the bulk material.
For simplicity, the surface state band is assumed to be half-filled with its Fermi level located at the mid-gap energy of the bulk. Furthermore, doping is taken not to influence the surface states. This is a valid approximation since the dopant concentration is low.
For intrinsic semiconductors (undoped), the valence band is fully filled with electrons, whilst the conduction band is completely empty. The Fermi level is thus located in the middle of the band gap, the same as that of the surface states, and hence there is no charge transfer between the bulk and the surface. As a result no band bending occurs.
If the semiconductor is doped, the Fermi level of the bulk is shifted with respect to that of the undoped semiconductor by the introduction of dopant eigenstates within the band gap. It is shifted up for n-doped semiconductors (closer to the conduction band) and down in case of p-doping (nearing the valence band). In disequilibrium, the Fermi energy is thus lower or higher than that of the surface states for p- and n-doping, respectively. Due to the energy difference, electrons will flow from the bulk to the surface or vice versa until the Fermi levels become aligned at equilibrium. The result is that, for n-doping, the energy bands bend upward, whereas they bend downwards for p-doped semiconductors.
Note that the density of surface states is large in comparison with the dopant concentration in the bulk. Therefore, the Fermi energy of the semiconductor is almost independent of the bulk dopant concentration and is instead determined by the surface states. This is called Fermi level pinning.
Adsorption induced band bending
Adsorption on a semiconductor surface can also induce band bending. Figure 3 illustrates the adsorption of an acceptor molecule (A) onto a semiconductor surface. As the molecule approaches the surface, an unfilled molecular orbital of the acceptor interacts with the semiconductor and shifts downwards in energy.
Due to the adsorption of the acceptor molecule its movement is restricted. It follows from the general uncertainty principle that the molecular orbital broadens its energy as can be seen in the bottom of figure 3. The lowering of the acceptor molecular orbital leads to electron flow from the semiconductor to the molecule, thereby again forming a Helmholtz layer on the semiconductor surface. An electric field is set up and upwards band bending near the semiconductor surface occurs. For a donor molecule, the electrons will transfer from the molecule to the semiconductor, resulting in downward band bending.
Applied bias induced band bending
When a voltage is applied across two surfaces of metals or semiconductors the associated electric field is able to penetrate the surface of the semiconductor. Because the semiconductor material contains few charge carriers, the electric field will cause an accumulation of charges at the semiconductor surface. Under a forward bias the band bends downwards, whereas a reverse bias would cause an accumulation of holes at the surface, which bends the band upwards. This follows again from Poisson's equation.
As an example the band bending induced by the forming of a p-n junction or a metal-semiconductor junction can be modified by applying a bias voltage VA. This voltage adds to the built-in potential (Vbi) that exists across the depletion region. Thus the potential difference between the bands is either increased or decreased depending on the type of bias that is applied. The conventional depletion approximation assumes a uniform ion distribution in the depletion region. It also approximates a sudden drop in charge carrier concentration at the edges of the depletion region. Therefore the electric field changes linearly and the band bending is parabolic. Thus the width of the depletion region will change due to the bias voltage. The depletion region width is given by:
W = xn + xp = √( (2ε/q) · ((NA + ND)/(NA·ND)) · (Vbi − VA − 2kBT/q) )
xn and xp are the boundaries of the depletion region. ε is the dielectric constant of the semiconductor. NA and ND are the net acceptor and net donor dopant concentrations respectively and q is the charge of the electron. The term 2kBT/q compensates for the existence of free charge carriers near the junction from the bulk region.
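A numerical illustration of this depletion-width expression for an abrupt silicon p-n junction is sketched below; the dopant densities, the intrinsic carrier density and the zero applied bias are assumed example values, not quantities taken from the article.

```python
import numpy as np

# Numerical illustration of the depletion-width expression above for an abrupt
# silicon p-n junction.  Dopant densities and the applied bias are assumed values.
q = 1.602e-19            # C
kT = 0.0259 * q          # J, room temperature
eps = 11.7 * 8.854e-12   # F/m, silicon permittivity
ni = 1.0e16              # 1/m^3, intrinsic carrier density of Si (~1e10 cm^-3)
NA, ND = 1e23, 1e22      # 1/m^3 (1e17 and 1e16 cm^-3, assumed)
VA = 0.0                 # applied bias, V

V_bi = (kT / q) * np.log(NA * ND / ni**2)          # built-in potential
W = np.sqrt(2 * eps / q * (1 / NA + 1 / ND)
            * (V_bi - VA - 2 * kT / q))            # depletion width
print(f"V_bi = {V_bi:.2f} V, W = {W * 1e6:.2f} um")
```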
Poisson's equation
The equation which governs the curvature obtained by the band edges in the space charge region, i.e. the band bending phenomenon, is Poisson's equation,
∇²φ = −ρ/ε
where φ is the electric potential, ρ is the local charge density and ε is the permittivity of the material. An example of its implementation can be found in the Wikipedia article on p-n junctions.
Applications
Electronics
The p-n diode is a device that allows current to flow in only one direction as long as the applied voltage is below a certain threshold. When a forward bias is applied to the p-n junction of the diode, the potential barrier across the depletion region is reduced. The applied voltage introduces more charge carriers as well, which are able to diffuse across the depletion region. Under a reverse bias this is hardly possible because the barrier is raised instead of lowered, thus no current can flow. Therefore the depletion region is necessary to allow for only one direction of current.
The metal–oxide–semiconductor field-effect transistor (MOSFET) relies on band bending. When the transistor is in its so-called 'off' state, there is no voltage applied to the gate and the first p-n junction is reverse biased. The potential barrier is too high for the electrons to pass, thus no current flows. When a voltage is applied to the gate, the potential gap shrinks due to the band bending induced by the applied bias. As a result current will flow; in other words, the transistor is in its 'on' state. The MOSFET is not the only type of transistor available today. Several more examples are the metal-semiconductor field-effect transistor (MESFET) and the junction field-effect transistor (JFET), both of which rely on band bending as well.
Photovoltaic cells (solar cells) are essentially just p-n diodes that can generate a current when they are exposed to sunlight. Solar energy can create an electron-hole pair in the depletion region. Normally they would recombine quite quickly before traveling very far. The electric field in the depletion region separates the electrons and holes generating a current when the two sides of the p-n diode are connected. Photovoltaic cells are an important supplier of renewable energy. They are a promising source of reliable clean energy.
Spectroscopy
Different spectroscopy methods make use of or can measure band bending:
Surface photovoltage is a spectroscopy method used to determine the minority carrier diffusion length of semiconductors. The band bending at the surface of a semiconductor results in a depletion region with a surface potential. A photon source creates electron-hole pairs deeper into the material. These electrons then diffuse to the surface to radiatively recombine. This results in a changing surface potential which can be measured and is directly correlated to the minority carrier diffusion length. This property of a semiconductor is very important for certain electronics such as photodiodes, solar panels and transistors.
Time-resolved photoluminescence is another technique used to measure the minority carrier diffusion length in semiconductors. It is a form of photoluminescence spectroscopy where the emitted photon decay is measured over time. In photoluminescence spectroscopy a material is excited using a photon pulse with a higher photon energy than the band gap in the material. The material relaxes back into its ground state under emission of a photon. These emitted photons are measured to gain information about the band structure of a material.
Angle-resolved photoemission spectroscopy can be used to chart the electronic energy bands of crystal structures such as semiconductors. This can thus also visualize band bending. The technique is an enhanced version of regular photoemission spectroscopy. It is based on the photoelectric effect. By analysing the energy difference between the incident photons and the electrons emitted by the solid, information about the energy band differences in the solid can be gained. By measuring at different angles the band structure can be mapped and the band bending captured.
See also
Field effect (semiconductor) – band bending due to the presence of an external electric field at the vacuum surface of a semiconductor.
Thomas–Fermi screening – special case of Lindhard theory that describes the band bending caused by a charged defect.
Quantum capacitance – Field effect band bending, especially important for low-density-of-states-systems.
References
Electronic band structures
Semiconductor structures | Band bending | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,583 | [
"Electron",
"Electronic band structures",
"Condensed matter physics"
] |
69,811,346 | https://en.wikipedia.org/wiki/Pressure%20sewer | A pressure sewer provides a method of discharging sewage from properties into a conventional gravity sewer or directly to a sewage treatment plant.
Pressure sewers are typically used where properties are located below the level of the nearest gravity sewer or are located on difficult terrain.
Operation
In a typical set-up, a receiving well is provided close to the properties being served so that all sewage can gravitate to the well. An electric macerator pump (also called a grinder pump) pumps the finely macerated sewage through a narrow diameter continuous plastic pipe which discharges into the nearest gravity sewer or treatment plant. The operation of the pump is controlled by a float switch in the pumping well. The pumping well is sized to allow for periods of power outage or pump maintenance.
The discharge pipe may be as small as 50 mm in diameter, carrying sewage at very high flow rates.
Pressure sewers are also used to collect the discharge from septic tanks and discharge this into the local gravity sewer to protect local ground water from contamination.
Advantages
Pressure sewers enable properties constructed below the nearest gravity main to connect to the local sewerage system avoiding the need for a septic tank or cesspit.
In areas where washouts or earthquakes are common, conventional earthenware or cast iron sewerage system may be prone to breakage and leakage. The plastic discharge pipe of a pressure sewer is much more robust and can accommodate substantial movements in the ground without failing.
Costs can be lower than conventional sewers since the pipework is much cheaper and there is no requirement for manholes or other intermediate infrastructure. Installation costs can also be very low as the pipes can be laid very close to the surface and may be installed using no-dig methods such as moling.
Disadvantages
The pumping well and pump controls require expert maintenance and repair should they fail.
Electricity is required to power the pump and this cost would typically fall on the local house-holder.
Although the pump well is usually designed to accommodate several days storage, failure of the pump or of the local electricity supply for an extended period would result in a local overflow.
References
Hydraulic engineering
Sewerage infrastructure | Pressure sewer | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 428 | [
"Hydrology",
"Water treatment",
"Sewerage infrastructure",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
75,679,597 | https://en.wikipedia.org/wiki/Indium%20arsenide%20antimonide | Indium arsenide antimonide, also known as indium antimonide arsenide or InAsSb (InAs1-xSbx), is a ternary III-V semiconductor compound. It can be considered as an alloy between indium arsenide (InAs) and indium antimonide (InSb). The alloy can contain any ratio between arsenic and antimony. InAsSb refers generally to any composition of the alloy.
Preparation
InAsSb films have been grown by molecular beam epitaxy (MBE), metalorganic vapor phase epitaxy (MOVPE) and liquid phase epitaxy (LPE) on gallium arsenide and gallium antimonide substrates. It is often incorporated into layered heterostructures with other III-V compounds.
Thermodynamic stability
Between 524 °C and 942 °C (the melting points of pure InSb and InAs, respectively), InAsSb can exist at a two-phase liquid-solid equilibrium, depending on temperature and average composition of the alloy.
InAsSb possesses an additional miscibility gap at temperatures below approximately 503 °C. This means that intermediate compositions of the alloy below this temperature are thermodynamically unstable and can spontaneously separate into two phases: one InAs-rich and one InSb-rich. This limits the compositions of InAsSb that can be obtained by near-equilibrium growth techniques, such as LPE, to those outside of the miscibility gap. However, compositions of InAsSb within the miscibility gap can be obtained with non-equilibrium growth techniques, such as MBE and MOVPE. By carefully selecting the growth conditions and maintaining relatively low temperatures during and after growth, it is possible to obtain compositions of InAsSb within the miscibility gap that are kinetically stable.
Electronic properties
The bandgap and lattice constant of InAsSb alloys are between those of pure InAs (a = 0.606 nm, Eg = 0.35 eV) and InSb (a = 0.648 nm, Eg = 0.17 eV). Over all compositions, the band gap is direct, like in InAs and InSb. The direct bandgap displays strong bowing, reaching a minimum with respect to composition at approximately x = 0.62 at room temperature and lower temperatures. The empirical relationship suggested for the direct bandgap of InAsSb (in eV) as a function of composition (0 < x < 1) and temperature (in kelvin) takes the standard bowing form, Eg(x, T) = (1 − x)·Eg,InAs(T) + x·Eg,InSb(T) − C·x·(1 − x), where Eg,InAs(T) and Eg,InSb(T) are the temperature-dependent gaps of the binary end points and C is the bowing parameter.
This equation is plotted in the figures, using a suggested bowing parameter of C = 0.75 eV. Slightly different relations have also been suggested for Eg as a function of composition and temperature, depending on the material quality, strain, and defect density.
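A hedged numerical sketch of such a bowing model is given below; the Varshni parameters used for the InAs and InSb end points are assumed, literature-style values rather than the exact coefficients of the suggested relationship, but they reproduce the room-temperature end-point gaps and the minimum near x = 0.62 quoted above.

```python
import numpy as np

# Hedged sketch of a composition- and temperature-dependent bowing model for
# InAs(1-x)Sb(x).  End-point Varshni parameters are assumed values.
def varshni(eg0, alpha, beta, T):
    """Varshni temperature dependence of a binary band gap (eV)."""
    return eg0 - alpha * T**2 / (T + beta)

def eg_inassb(x, T, C=0.75):
    """Bowing interpolation Eg = (1-x)*Eg_InAs + x*Eg_InSb - C*x*(1-x)."""
    eg_inas = varshni(0.417, 2.76e-4, 93.0, T)   # assumed InAs parameters
    eg_insb = varshni(0.235, 3.20e-4, 170.0, T)  # assumed InSb parameters
    return (1.0 - x) * eg_inas + x * eg_insb - C * x * (1.0 - x)

x = np.linspace(0.0, 1.0, 101)
eg = eg_inassb(x, 300.0)
print(f"minimum gap ~{eg.min():.3f} eV near x = {x[np.argmin(eg)]:.2f}")
```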
Applications
Because of its small direct bandgap, InAsSb has been extensively studied over the last few decades, predominantly for use in mid- to long-wave infrared photodetectors that operate at room temperature and cryogenic temperatures. InAsSb is used as the active material in some commercially available infrared photodetectors. Depending on the heterostructure and detector configuration that is used, InAsSb-based detectors can operate at wavelengths ranging from approximately 2 μm to 11 μm.
See also
Mercury cadmium telluride - a ternary II-VI compound that has a widely tunable bandgap and is used in commercial mid- and long-wave infrared photodetectors.
Aluminium arsenide antimonide - a ternary III-V compound that is used as a barrier material in some InAsSb-based photodetectors.
References
External links
Properties of InAsSb
Antimonides
Arsenides
Indium compounds
III-V compounds | Indium arsenide antimonide | [
"Chemistry"
] | 780 | [
"III-V compounds",
"Inorganic compounds"
] |
75,689,664 | https://en.wikipedia.org/wiki/Donsker%20classes | A class of functions is considered a Donsker class if it satisfies Donsker's theorem, a functional generalization of the central limit theorem.
Definition
Let F be a collection of square integrable functions on a probability space (X, A, P). The empirical process Gn is the stochastic process on the set F defined by
Gn f = √n (Pn f − P f),
where Pn = (1/n) Σi δXi is the empirical measure based on an iid sample X1, …, Xn from P.
The class F of measurable functions is called a Donsker class if the empirical process {Gn f : f ∈ F} converges in distribution to a tight Borel measurable element in the space ℓ∞(F).
By the central limit theorem, for every finite set of functions f1, …, fk in F, the random vector (Gn f1, …, Gn fk) converges in distribution to a multivariate normal vector as n → ∞. Thus the class F is Donsker if and only if the sequence {Gn} is asymptotically tight in ℓ∞(F).
Examples and Sufficient Conditions
Classes of functions which have a finite Dudley's entropy integral are Donsker classes. This includes empirical distribution functions formed from the class of indicator functions ft(x) = 1{x ≤ t}, as well as parametric classes over bounded parameter spaces. More generally, any VC class is also a Donsker class.
Properties
Classes of functions formed by taking infima or suprema of functions in a Donsker class also form a Donsker class.
Donsker's Theorem
Donsker's theorem states that the empirical distribution function, when properly normalized, converges weakly to a Brownian bridge—a continuous Gaussian process. This is significant as it assures that results analogous to the central limit theorem hold for empirical processes, thereby enabling asymptotic inference for a wide range of statistical applications.
The concept of the Donsker class is influential in the field of asymptotic statistics. Knowing whether a function class is a Donsker class helps in understanding the limiting distribution of empirical processes, which in turn facilitates the construction of confidence bands for function estimators and hypothesis testing.
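A small Monte Carlo illustration of the classical case, the class of indicator functions with a uniform sample, is sketched below; the sample size, number of replications and evaluation points are arbitrary choices for demonstration.

```python
import numpy as np

# Monte Carlo illustration of Donsker's theorem for f_t = 1{x <= t} with P uniform
# on [0, 1]: the normalised empirical process sqrt(n)*(F_n(t) - t) behaves like a
# Brownian bridge for large n.
rng = np.random.default_rng(0)
n, n_rep = 2000, 5000
t_grid = np.array([0.25, 0.5, 0.75])

values = np.empty((n_rep, t_grid.size))
for r in range(n_rep):
    x = rng.uniform(size=n)
    emp_cdf = (x[:, None] <= t_grid).mean(axis=0)
    values[r] = np.sqrt(n) * (emp_cdf - t_grid)

# For a Brownian bridge B, Var B(t) = t*(1-t) and Cov(B(s), B(t)) = s*(1-t) for s <= t.
print(values.var(axis=0))                    # ~ [0.1875, 0.25, 0.1875]
print(np.cov(values[:, 0], values[:, 2]))    # off-diagonal ~ 0.0625
```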
See also
Empirical process
Central limit theorem
Brownian bridge
Glivenko–Cantelli theorem
Vapnik–Chervonenkis theory
Weak convergence (probability)
References
Probability theory
Central limit theorem | Donsker classes | [
"Mathematics"
] | 421 | [
"Central limit theorem",
"Theorems in probability theory"
] |
77,310,528 | https://en.wikipedia.org/wiki/Ceria%20based%20thermochemical%20cycles | A ceria based thermochemical cycle is a type of two-step thermochemical cycle that uses cerium oxides (CeO2/Ce2O3) as the oxygen carrier for synthetic fuel production, such as hydrogen or syngas. These cycles are able to obtain either hydrogen (H2) from the splitting of water molecules (H2O), or syngas, which is a mixture of hydrogen (H2) and carbon monoxide (CO), by also splitting carbon dioxide (CO2) molecules alongside water molecules. These types of thermochemical cycles are mainly studied for concentrated solar applications.
Types of cycles
These cycles are based on the two-step redox thermochemical cycle. In the first step, a metal oxide, such as ceria, is reduced by providing heat to the material, liberating oxygen. In the second step, a stream of steam oxidises the previously obtained molecule back to its starting state, therefore closing the cycle. Depending on the stoichiometry of the reactions, which is the relation between the reactants and products of the chemical reaction, these cycles can be classified into two categories.
Stoichiometric ceria cycle
The stoichiometric ceria cycle uses the cerium(IV) oxide (CeO2) and cerium(III) oxide (Ce2O3) metal oxide pair as the oxygen carrier. This cycle is composed of two steps:
A reduction step, to liberate oxygen (O2) from the material:
2CeO2 → Ce2O3 + 1/2 O2
And an oxidation step, to split the water molecules into hydrogen (H2) and oxygen, and/or the carbon dioxide molecules (CO2) into carbon monoxide (CO) and oxygen:
The reaction for hydrogen production:
Ce2O3 + H2O → 2CeO2 + H2
The reaction for carbon monoxide production:
Ce2O3 + CO2 → 2CeO2 + CO
The reduction step is an endothermic reaction that takes place at temperatures around 2,300 K (2,027 °C) in order to ensure a sufficient reduction. In order to enhance the reduction of the material, low partial pressures of oxygen are required. To obtain these low partial pressures, there are two main possibilities: either vacuum pumping the reactor chamber, or using a chemically inert sweep gas, such as nitrogen (N2) or argon (Ar).
On the other hand, the oxidation step is an exothermic reaction that can take place over a considerable range of temperatures, from 400 °C up to 1,000 °C. In this case, depending on the fuel to be produced, a stream of steam, carbon dioxide or a mixture of both is introduced into the reaction chamber for hydrogen, carbon monoxide or syngas production, respectively. The temperature difference between the two steps presents a challenge for heat recovery, since the existing solid-to-solid heat exchangers are not highly efficient.
The thermal energy required to achieve these high temperatures is provided by concentrated solar radiation. Due to the high concentration ratio required to achieve these high temperatures, the main technologies used are concentrating solar towers (CST) or parabolic dishes.
The main disadvantage of the stoichiometric ceria cycle lies in the fact that the reduction reaction temperature of cerium(IV) oxide is in the same range as the melting temperature (1,687–2,230 °C) of cerium(IV) oxide, which in the end results in some melting and sublimation of the material; this can produce reactor failures such as deposition on the window or sintering of the particles.
Non-stoichiometric ceria cycle
The non-stoichiometric ceria cycle uses only cerium(IV) oxide, and instead of totally reducing it to cerium(III) oxide, it performs a partial reduction of it. The extent of this reduction is commonly expressed as the reduction extent and is indicated as δ. In this way, by partially reducing ceria, oxygen vacancies are created in the material. The two steps are formulated as follows:
Reduction reaction:
CeO2 → CeO2-δ + (δ/2) O2
Oxidation reaction:
For hydrogen production:
CeO2-δ + δ H2O → CeO2 + δ H2
For carbon monoxide production:
CeO2-δ + δ CO2 → CeO2 + δ CO
The main advantage of this cycle is that the reduction temperature is lower, around 1,773 K (1,500 °C), which alleviates the high temperature demand on the materials and avoids certain problems such as sublimation or sintering. Temperatures above this would result in the reduction of the material to cerium(III) oxide, which should be avoided.
In order to reduce the thermal losses of the cycle, the temperature difference between the reduction and oxidation chambers needs to be optimized. This results in partially oxidised states, rather than a full oxidation of the ceria. Due to this, the chemical reactions are commonly expressed considering two reduction extents, δred after the reduction step and δox after the oxidation step:
Reduction reaction:
CeO2-δox → CeO2-δred + ((δred − δox)/2) O2
Oxidation reaction:
For hydrogen production:
CeO2-δred + (δred − δox) H2O → CeO2-δox + (δred − δox) H2
For carbon monoxide production:
CeO2-δred + (δred − δox) CO2 → CeO2-δox + (δred − δox) CO
The main disadvantage of these cycles is the low reduction extent, due to the low non-stoichiometry, which leaves fewer vacancies for the oxidation process and in the end translates to lower fuel production rates.
Due to the properties of ceria, other materials are being studied, mainly perovskites based on ceria, to improve the thermodynamic and chemical properties of the metal oxide.
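To make the role of the reduction extent concrete, the sketch below estimates the hydrogen yield per cycle of the non-stoichiometric cycle from the difference of the two reduction extents; the numerical values of δred and δox are assumed for illustration only.

```python
# Rough yield estimate (illustrative numbers, not from the article): each mole of
# ceria releases (delta_red - delta_ox) moles of oxygen vacancies per cycle, and
# hence the same number of moles of H2 on re-oxidation with steam.
M_CEO2 = 172.115                    # g/mol, molar mass of CeO2
delta_red, delta_ox = 0.06, 0.01    # assumed reduction extents after each step

h2_per_mol = delta_red - delta_ox              # mol H2 per mol CeO2 per cycle
h2_per_kg = h2_per_mol / M_CEO2 * 1000.0       # mol H2 per kg CeO2 per cycle
print(f"{h2_per_mol:.3f} mol H2/mol CeO2  ->  {h2_per_kg:.2f} mol H2/kg CeO2 per cycle")
```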
Methane driven non-stoichiometric ceria cycle
Since the temperatures needed to achieve the reduction of the material are considerably high, the reduction of the cerium oxide can be enhanced by providing methane to the reaction. This significantly reduces the temperatures required to achieve the reduction of ceria, to between 800 and 1,000 °C, while also producing syngas in the reduction reactor. In this case, the reduction reaction goes as follows:
CeO2 + δ CH4 → CeO2-δ + δ CO + 2δ H2
The main disadvantages of this cycle are the carbon deposition on the material, which eventually deactivates it after several cycles and needs to be replaced, and the acquisition of the methane feedstock.
Types of reactors
Depending on the type and topology of the reactors, the cycles will function either in continuous production or in batch production. There are two main types of reactors for these specific cycles:
Monolithic reactors
These types of reactors consist of a piece of solid material, which is shaped as a reticulated porous ceramic (RPC) foam in order to increase both the surface area and the penetration of solar radiation. These reactors are shaped as cavity receivers in order to reduce the thermal losses due to reradiation. They usually have a quartz (fused silica) window in order to let the solar radiation into the cavity.
Since the metal oxide is a solid structure, both reactions must be done in the same reactor, which leads to a discontinuous production process, carrying out one step after the other. To avoid these stops in production, multiple reactors can be arranged to approximate a continuous production process. This is usually referred to as a batch process. The intention is to always have one or multiple reactors operating in the oxidation step at the same time, hence always generating hydrogen.
Some new reactor concepts are being studied, in which the RPCs can be moved from one reactor to another, in order to have one single reduction reactor.
Solid particles reactors
These types of reactors try to solve the discontinuity problem of the cycle by using solid particles of the metal oxide instead of solid structures. These particles can be moved from the reduction reactor to the oxidation reactor, which allows a continuous production of fuel. Many types of reactors work with solid particles, from free-falling receivers to packed beds, fluidized beds or rotary kilns.
The main disadvantage of this approach is that, due to the high temperatures achieved, the solid particles are susceptible to sintering, a process in which small particles partially melt and stick to other particles, creating bigger particles, which reduces their surface area and hampers the transportation process.
See also
Thermochemical cycle
Solar fuel
Sulfur–iodine cycle
Hybrid sulfur cycle
References
External links
HYDROSOL project. Retrieved 07/07/2024
Sun to Liquid project Retrieved 11/07/2024
Chemical reactions
Hydrogen production
Cerium
Catalysis | Ceria based thermochemical cycles | [
"Chemistry"
] | 1,599 | [
"Catalysis",
"Chemical kinetics",
"nan"
] |
77,311,534 | https://en.wikipedia.org/wiki/Momentum%20mapping%20format | Momentum mapping format is a key technique in the Material Point Method (MPM) for transferring physical quantities such as momentum, mass, and stress between a material point and a background grid.
The Material Point Method (MPM) is a numerical technique that uses a mixed Eulerian-Lagrangian description. It discretises the computational domain with material points and employs a background grid to solve the momentum equations. It was proposed by Sulsky et al. in 1994.
MPM has since been extended to various fields such as computational solid dynamics. Currently, MPM features several momentum mapping schemes, the four main ones being PIC (Particle-in-cell), FLIP (Fluid-Implicit Particle), the hybrid format, and APIC (Affine Particle-in-Cell). Understanding these schemes in depth is crucial for the further development of MPM.
Background
MPM represents materials as collections of material points (or particles). Unlike other particle methods such as SPH (Smoothed-particle hydrodynamics) and DEM (Discrete element method), MPM also uses a background grid to solve the momentum equations arising from particle interactions. MPM can be categorized as a mixed particle/grid method or a mixed Lagrangian-Eulerian method. By combining the strengths of both frameworks, MPM aims to be an effective numerical solver for large deformation problems. It has been further developed and applied to various challenging problems such as high-speed impact (Huang et al., 2011), landslides (Fern et al., 2019), saturated porous media (He et al., 2024), and fluid-structure interaction (Li et al., 2022).
The Material Point Method (MPM) community has developed several momentum mapping schemes, among which PIC, FLIP, the hybrid scheme, and APIC are the most common. The FLIP scheme is widely used for dynamic problems due to its energy conservation properties, although it can introduce numerical noise and instability (Bardenhagen, 2002), potentially leading to computational failure. Conversely, the PIC scheme is known for numerical stability and is advantageous for static problems, but it suffers from significant numerical dissipation (Brackbill et al., 1988), which is unacceptable for strongly dynamic responses. Nairn et al. combined FLIP and PIC linearly (Nairn, 2015) to create a hybrid scheme, adjusting the proportion of each component on an empirical rather than theoretical basis. Hammerquist and Nairn (2017) introduced an improved scheme called XPIC-m (eXtended Particle-In-Cell of order m), which addresses the excessive filtering and numerical diffusion of PIC while suppressing the null-space noise that FLIP exhibits in MPM. XPIC-1 (eXtended Particle-In-Cell of order 1) is equivalent to the standard PIC method. Jiang et al. (2015, 2017) introduced the Affine Particle In Cell (APIC) method, in which particle velocities are represented as locally affine fields, preserving linear and angular momentum during the transfer process. This significantly reduces numerical dissipation and avoids the velocity noise and instability seen in FLIP. Fu et al. (2017) introduced generalized local functions into the APIC method, proposing the Polynomial Particle In Cell (PolyPIC) method. PolyPIC views the G2P (Grid-to-Particle) transfer as a projection of the particle's local grid velocity onto a polynomial basis, preserving linear and angular momentum and thereby improving energy and vorticity retention compared with the original APIC. Additionally, PolyPIC retains the filtering properties of APIC and PIC, providing robustness against noise.
Affine particle in cell method
In the PIC scheme, particle velocities during the Grid-to-Particle (G2P) substep are directly overwritten by interpolating the updated nodal velocities back to the particles:
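In conventional MPM notation (the symbols here are chosen for illustration, with $N_i$ the grid shape function evaluated at the particle position and $v_i^{n+1}$ the updated nodal velocity), this update can be written as

$$ v_p^{n+1} = \sum_i N_i(x_p^n)\, v_i^{n+1}. $$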
In the FLIP scheme, the material point velocities are updated by interpolating the velocity increments of the grid nodes over the current time step:
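With the same illustrative notation, the FLIP update adds the mapped nodal velocity increment to the previous particle velocity:

$$ v_p^{n+1} = v_p^n + \sum_i N_i(x_p^n)\,\bigl(v_i^{n+1} - v_i^n\bigr). $$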
The hybrid scheme's momentum mapping linearly combines the FLIP and PIC updates (a standard form is given below), where the parameters are defined as follows:
v_FLIP represents the velocity computed using the FLIP scheme
v_PIC represents the velocity computed using the PIC scheme
α is the proportion of FLIP, with α = 1 representing pure FLIP and α = 0 representing pure PIC
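A standard form of this linear combination, using the symbols defined above, is

$$ v_p^{n+1} = \alpha\, v_p^{\mathrm{FLIP}} + (1-\alpha)\, v_p^{\mathrm{PIC}}, \qquad 0 \le \alpha \le 1. $$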
Based on the idea of "providing the local velocity field around the material point to the background grid by transferring the material point's velocity gradient", Jiang et al. (2015) proposed the APIC method. In this method, the particle velocity field is locally affine (a standard mathematical form is sketched below), with the parameters defined as follows:
v_p indicates the translational velocity of the material point
C_p represents the affine velocity matrix; its diagonal components describe horizontal and vertical stretching, respectively, while its off-diagonal components describe clockwise and counterclockwise shear motion, respectively. If C_p = 0, the momentum mapping scheme reduces to PIC.
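In the notation of Jiang et al. (2015), writing $N_i(x_p)$ for the shape function and $m_p$, $m_i$ for particle and nodal masses, the locally affine particle velocity field and the associated particle-to-grid momentum transfer take the form

$$ v_p(x) = v_p + C_p\,(x - x_p), \qquad m_i v_i = \sum_p N_i(x_p)\, m_p \bigl(v_p + C_p\,(x_i - x_p)\bigr), $$

where the affine matrix is constructed as $C_p = B_p D_p^{-1}$ from the APIC transfer matrices $B_p$ and $D_p$.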
Computational implementation
The PIC (Particle-In-Cell), FLIP (Fluid-Implicit Particle), hybrid, and APIC (Affine Particle-In-Cell) schemes differ mainly in how they map momentum between material points and the grid and in the details of their time integration, but their common points still dominate. The evolution of momentum on the grid is identical under all four schemes. In the P2G (particle-to-grid) transfer, the momentum mapping in the PIC, FLIP, and hybrid schemes is the same, and the material point positions are updated in the same manner across all four schemes. During the G2P stage, PIC transfers the updated momentum on the grid nodes directly back to the material points, FLIP uses an incremental mapping, and the hybrid scheme linearly combines FLIP and PIC with a weighting coefficient. APIC additionally maintains an affine matrix on top of the PIC mapping.
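As a compact illustration of how the G2P stage differs among the four schemes, the following Python sketch updates the velocity of one material point. The variable names, the default blending factor, and the APIC matrix construction follow the conventions of Jiang et al. (2015), but the sketch is an assumption-laden illustration, not a reference implementation of any particular MPM code.

```python
import numpy as np

def g2p_velocity_update(v_p_old, weights, v_grid_new, v_grid_old,
                        x_grid, x_p, scheme="FLIP", alpha=0.99, D_inv=None):
    """Grid-to-particle (G2P) velocity update for a single material point.

    weights[i] is the shape-function value N_i(x_p) of grid node i at the
    particle position; v_grid_old/v_grid_new are nodal velocities before and
    after the grid momentum update; D_inv is the inverse APIC inertia matrix
    (for quadratic B-splines, D_p = (dx**2 / 4) * I) and is required only for
    the APIC branch. All names are illustrative.
    """
    v_pic = weights @ v_grid_new                              # overwrite with mapped nodal velocities
    v_flip = v_p_old + weights @ (v_grid_new - v_grid_old)    # add mapped velocity increment

    if scheme == "PIC":
        return v_pic, None
    if scheme == "FLIP":
        return v_flip, None
    if scheme == "HYBRID":                                    # linear blend of FLIP and PIC
        return alpha * v_flip + (1.0 - alpha) * v_pic, None
    if scheme == "APIC":
        # APIC keeps the PIC velocity but also rebuilds the affine matrix
        # C_p = B_p D_p^{-1} from the nodal velocities (Jiang et al., 2015).
        B_p = sum(w * np.outer(v_i, x_i - x_p)
                  for w, v_i, x_i in zip(weights, v_grid_new, x_grid))
        return v_pic, B_p @ D_inv
    raise ValueError(f"unknown scheme: {scheme}")
```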
Numerical tests
Numerical tests on ring collision highlight the performance of the different momentum mapping schemes in dynamic problems; the quantities of main interest are the mean stress distribution at representative times and the evolution of the total energy. Because the PIC mapping scheme cancels out velocities in opposite directions, significant energy loss occurs, preventing effective conversion of kinetic energy into strain energy. GIMP_FLIP (Generalized Interpolation Material Point - Fluid Implicit Particle) shows notable numerical noise and instability, with severe oscillations in the mean stress leading to numerical fracture. GIMP_FLIP0.99 exhibits improved stability but still carries a risk of numerical fracture. The tests indicate that increasing the PIC component enhances numerical stability, with the stress distribution becoming more uniform and regular and the probability of numerical fracture decreasing; however, the energy loss also becomes more pronounced. GIMP_APIC (Generalized Interpolation Material Point - Affine Particle-In-Cell) demonstrates the best performance, providing a stable and smooth stress distribution while maintaining excellent energy conservation characteristics.
Related research and developments
Recently, Qu et al. proposed PowerPIC (Qu et al., 2022), a more stable and accurate mapping scheme based on optimization, which also maintains volume and uniform particle distribution characteristics.
See also
Smoothed Particle Hydrodynamics
Finite Element Method
Particle-in-cell
Material point method
Numerical Methods for Partial Differential Equations
References
External links
Computational physics
Civil engineering
Materials science
Computational mathematics
Numerical analysis
Numerical differential equations
Computational fluid dynamics
Simulation | Momentum mapping format | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,518 | [
"Applied and interdisciplinary physics",
"Computational fluid dynamics",
"Applied mathematics",
"Computational mathematics",
"Materials science",
"Computational physics",
"Construction",
"Civil engineering",
"Mathematical relations",
"nan",
"Numerical analysis",
"Approximations",
"Fluid dyna... |
77,313,933 | https://en.wikipedia.org/wiki/Elmo%20Motion%20Control | Elmo Motion Control is an engineering company specializing in developing, producing, and selling innovative hardware and software solutions in motion control. The company was founded in 1988 and is based in Petah Tikva, Israel. On September 4, 2022, Elmo was fully acquired by Bosch Rexroth.
History
Elmo Motion Control was established in 1988 by Haim Monhait. Four years later, in 1992, the company expanded its operations by opening its first subsidiary in the United States. In 2008, Elmo acquired and merged with Control Solutions (Pitronot Bakara), further solidifying its position in the market. In 2015, the company opened an additional production facility in Warsaw, Poland, to meet the growing demand.
Over the years, Elmo has steadily expanded its global presence by establishing eight additional subsidiaries worldwide. These include operations in China, Europe, and the APAC region. The most recent subsidiary was opened in Singapore in 2019.
Operations
Elmo employs over 400 personnel and has its headquarters and manufacturing facilities in Petah Tikva, Israel. The company also has worldwide sales and technical support offices and additional manufacturing facilities.
Products and markets
Elmo offers complete motion control solutions, ranging from design to delivery, including cutting-edge servo drives, network-based multi-axis motion controllers, power supplies, and integrated servo motors. These solutions can be customized, configured, and simulated using Elmo's proprietary software tools, which are designed to be advanced and easy to use. Elmo's products cater to various industries, such as semiconductors, lasers, robots, drones, life sciences, industrial automation, and extreme environments.
Product lines
Elmo Motion Control provides a range of servo drives suited to diverse motion requirements, from industrial applications that require high precision and power density to extreme applications designed for critical missions in harsh environments. Since its establishment, Elmo has developed three generations of products, each offering servo drives and motion controllers for both industrial and harsh environments. Its latest product line, Platinum, is known for its EtherCAT networking precision and fully certified functional safety across all its products. Elmo's servo-drive product lines comply with global industry standards.
Acquisition by Bosch Rexroth
In September 2022, Elmo Motion Control was fully acquired by Bosch Rexroth, a leading global supplier of drive and control technologies.
References
External links
Official website
Companies based in Petah Tikva
Israeli companies established in 1988
Motion control
2022 mergers and acquisitions | Elmo Motion Control | [
"Physics",
"Engineering"
] | 501 | [
"Physical phenomena",
"Motion (physics)",
"Automation",
"Motion control"
] |
77,318,529 | https://en.wikipedia.org/wiki/Anchor%20losses | Anchor losses are a type of damping commonly highlighted in micro-resonators. They refer to the phenomenon where energy is dissipated as mechanical waves from the resonator attenuate into the substrate.
Introduction
In physical systems, damping is the loss of energy of an oscillating system by dissipation. In the field of micro-electro-mechanical systems (MEMS), damping is usually quantified by the dimensionless Q factor (quality factor). A higher Q factor indicates lower damping and reduced energy dissipation, which is desirable for micro-resonators as it leads to lower energy consumption, better accuracy and efficiency, and reduced noise.
Several factors contribute to the damping of micro-electro-mechanical resonators, including fluid damping and solid damping. Anchor losses are a type of solid damping observed in resonators operating in a wide range of environments. When a resonator is fixed to a substrate, either directly or via other structures such as tethers, mechanical waves propagate into the substrate through these connections. A wave traveling through a perfectly elastic solid would carry constant energy, and an isolated, perfectly elastic solid, once set into vibration, would continue to vibrate indefinitely. Real materials do not behave this way, and dissipation occurs because of imperfections of elasticity within the body. In typical micro-resonators, the substrate dimensions are significantly larger than those of the resonator itself. Consequently, it can be assumed that all waves entering the substrate attenuate without reflecting back into the resonator. In other words, the energy carried by these waves dissipates, leading to damping. This phenomenon is referred to as anchor losses.
Estimation of anchor losses
Analytical estimation
Standard theories of structural mechanics permit expressing the concentrated forces and couples exerted by the structure on the support. These generally include a constant component (due, for instance, to pre-stresses or initial deformation) and a sinusoidally varying contribution. Some researchers have investigated simple geometries following this idea; one example is the anchor loss of a cantilever beam connected to a 3-D semi-infinite region:
where L is the length of the beam, H is the in-plane thickness (in the plane of curvature), W is the out-of-plane thickness, and C is a constant that depends on the Poisson's ratio, with C = 3.45 for ν = 0.25, C = 3.23 for ν = 0.3, and C = 3.175 for ν = 0.33.
Numerical estimation
Due to the complexity of device geometries and the anisotropy or inhomogeneity of materials, it is usually difficult to estimate the anchor losses of real devices with analytical methods, and numerical methods are more widely applied to this problem. An artificial boundary or an artificial absorbing layer is applied to the numerical model to prevent wave reflection. One such method is the perfectly matched layer, initially developed for electromagnetic wave transmission and later adapted to solid mechanics. Perfectly matched layers act as special elements in which wave attenuation occurs through a complex coordinate transformation, ensuring that all waves entering the layer are absorbed, thus simulating anchor losses.
To determine the Q factor from a Finite Element Method model with perfectly matched layers, two common approaches are used:
Using the complex eigenfrequency from a modal analysis: Q = Re(ω) / (2 Im(ω)), where Re(ω) and Im(ω) are the real and imaginary parts of the complex eigenfrequency ω.
Generating the frequency response from a frequency domain analysis and applying methods such as the half-bandwidth method to calculate the Q factor.
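A minimal sketch of the second approach, assuming a frequency sweep that resolves a single, well-separated resonance peak (the function and variable names are illustrative, not from any particular FEM package):

```python
import numpy as np

def q_from_half_bandwidth(freqs, amplitude):
    """Estimate the Q factor from a frequency-response amplitude curve using
    the half-power (-3 dB) bandwidth: Q = f0 / (f2 - f1), where f1 and f2
    bracket the points at which the amplitude falls to |A|_max / sqrt(2)."""
    i_peak = int(np.argmax(amplitude))
    f0 = freqs[i_peak]
    half_power_level = amplitude[i_peak] / np.sqrt(2.0)
    above = amplitude >= half_power_level      # samples above the half-power level
    f1 = freqs[above][0]                       # lower half-power frequency
    f2 = freqs[above][-1]                      # upper half-power frequency
    return f0 / (f2 - f1)
```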
Methods to mitigate anchor losses
Anchor losses are highly dependent on the geometry of the resonator: the way the resonator is anchored and the size of the tethers have a strong effect on them. Some common methods to reduce anchor losses are summarized as follows.
Anchor at nodal points
A common method is to fix the resonator at its nodal points, where the motion amplitude is at a minimum. By the definition of anchor losses, the amplitude of the waves entering the substrate is then minimized and less energy dissipates. However, this method may not apply to certain resonators in which the nodal points are not near the resonator edges, which makes tether design difficult.
Quarter wavelength tethers
Quarter-wavelength tethers are an effective approach to minimizing the energy lost through the tethers. In analogy with transmission-line theory, a quarter-wavelength tether is assumed to give the best acoustic isolation, since complete in-phase reflection occurs when the tether length equals a quarter of the acoustic wavelength, λ/4. As a result, there is hardly any energy dissipation into the substrate through the tethers. However, the quarter-wavelength design results in very long tether structures, usually tens to hundreds of micrometres, which runs counter to miniaturization and decreases the mechanical stability of the devices.
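As a rough worked example (the numbers are purely illustrative): the tether length is set to L = λ/4 = v/(4f), so assuming an acoustic velocity of about 8,000 m/s, of the order of that in silicon, and a 100 MHz resonance, λ/4 ≈ 20 μm, consistent with the tens-to-hundreds-of-micrometres tether lengths mentioned above.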
Material-mismatched support
The resonator structure and the anchoring stem are made of different materials. The acoustic impedance mismatch between the two suppresses energy transmission from the resonator to the stem, thereby reducing anchor losses and allowing a high Q factor.
Acoustic reflection cavity
The basic mechanism is to reflect a portion of the elastic waves back at the anchor boundary, owing to the discontinuity in acoustic impedance created by the acoustic cavity (the etching trenches).
Phononic crystal tethers and metamaterials
Phononic crystal tethers are a promising way to restrain acoustic wave propagation along the supporting tethers, since they can open complete band gaps in which the transmission of elastic waves is prohibited. The vibration energy is thus retained in the resonator body, reducing the anchor losses into the substrate. Besides phononic crystal tethers, other kinds of metamaterials can be applied to the anchor and surrounding regions to block wave transmission. A key drawback of this approach is the challenge it poses to the fabrication process.
Optimized anchor geometry
Anchor losses are highly sensitive to the geometry of the anchors. Features such as fillets, curvature, sidewall inclination, and other detailed geometric aspects can affect anchor losses. By carefully optimizing these geometric configurations, anchor losses can be significantly reduced.
See also
Dynamical systems theory
Finite element method
Finite-difference time-domain method
Micro-Electro-Mechanical Systems
Resonator
Infinite element method
References
External links
How to Model Different Types of Damping in COMSOL Multiphysics®
Effect of Perfectly Matched Layers (PML) in FDTD Simulations
Notes on Perfectly Matched Layers (PMLs)
Resonators
Dimensionless numbers of mechanics
Engineering ratios
Ordinary differential equations
Mathematical analysis
Classical mechanics | Anchor losses | [
"Physics",
"Mathematics",
"Engineering"
] | 1,372 | [
"Mathematical analysis",
"Metrics",
"Engineering ratios",
"Quantity",
"Classical mechanics",
"Mechanics",
"Dimensionless numbers of mechanics"
] |
78,735,030 | https://en.wikipedia.org/wiki/Atumelnant | Atumelnant (developmental code name CRN04894) is an investigational new drug developed by Crinetics Pharmaceuticals for the treatment of adrenocorticotropic hormone (ACTH)-dependent endocrine disorders. It is a selective antagonist of the melanocortin type 2 receptor (MC2R), also known as the ACTH receptor, which is primarily expressed in the adrenal glands. Atumelnant is being evaluated to treat conditions such as congenital adrenal hyperplasia (CAH) and ACTH-dependent Cushing's syndrome caused for example by pituitary adenomas.
References
Carboxamides
Cyclobutanes
Ethers
Ethoxy compounds
Melanocortin receptor antagonists
Piperazines
Phenols
Pyridines
Quinuclidines
Trifluoromethyl compounds | Atumelnant | [
"Chemistry"
] | 180 | [
"Pharmacology",
"Functional groups",
"Medicinal chemistry stubs",
"Organic compounds",
"Ethers",
"Pharmacology stubs"
] |