Physical organic chemistry , a term coined by Louis Hammett in 1940, refers to a discipline of organic chemistry that focuses on the relationship between chemical structures and reactivity , in particular, applying experimental tools of physical chemistry to the study of organic molecules . Specific focal points of study include the rates of organic reactions , the relative chemical stabilities of the starting materials, reactive intermediates , transition states , and products of chemical reactions , and non-covalent aspects of solvation and molecular interactions that influence chemical reactivity. Such studies provide theoretical and practical frameworks to understand how changes in structure in solution or solid-state contexts impact reaction mechanism and rate for each organic reaction of interest. Physical organic chemists use theoretical and experimental approaches work to understand these foundational problems in organic chemistry , including classical and statistical thermodynamic calculations, quantum mechanical theory and computational chemistry , as well as experimental spectroscopy (e.g., NMR ), spectrometry (e.g., MS ), and crystallography approaches. The field therefore has applications to a wide variety of more specialized fields, including electro- and photochemistry , polymer and supramolecular chemistry , and bioorganic chemistry , enzymology , and chemical biology , as well as to commercial enterprises involving process chemistry , chemical engineering , materials science and nanotechnology , and pharmacology in drug discovery by design. Physical organic chemistry is the study of the relationship between structure and reactivity of organic molecules . More specifically, physical organic chemistry applies the experimental tools of physical chemistry to the study of the structure of organic molecules and provides a theoretical framework that interprets how structure influences both mechanisms and rates of organic reactions . It can be thought of as a subfield that bridges organic chemistry with physical chemistry . Physical organic chemists use both experimental and theoretical disciplines such as spectroscopy , spectrometry , crystallography , computational chemistry , and quantum theory to study both the rates of organic reactions and the relative chemical stability of the starting materials, transition states , and products. [ 1 ] [ page needed ] Chemists in this field work to understand the physical underpinnings of modern organic chemistry , and therefore physical organic chemistry has applications in specialized areas including polymer chemistry , supramolecular chemistry , electrochemistry , and photochemistry . [ 1 ] [ page needed ] The term physical organic chemistry was itself coined by Louis Hammett in 1940 when he used the phrase as a title for his textbook. [ 2 ] Organic chemists use the tools of thermodynamics to study the bonding , stability , and energetics of chemical systems. This includes experiments to measure or determine the enthalpy (Δ H ), entropy (Δ S ), and Gibbs' free energy (Δ G ) of a reaction, transformation, or isomerization. Chemists may use various chemical and mathematical analyses, such as a Van 't Hoff plot , to calculate these values. Empirical constants such as bond dissociation energy , standard heat of formation (Δ f H °), and heat of combustion (Δ c H °) are used to predict the stability of molecules and the change in enthalpy (Δ H ) through the course of the reactions. 
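As a concrete illustration of how such tabulated empirical constants are used, the short sketch below estimates a reaction enthalpy from standard heats of formation via Hess's law, ΔH°rxn = Σ ΔfH°(products) − Σ ΔfH°(reactants). This is a minimal, illustrative calculation rather than part of the original text; the ΔfH° values are approximate gas-phase literature figures and the helper function name is arbitrary.

```python
# Hess's-law sketch: reaction enthalpy from tabulated standard heats of formation,
# ΔH°rxn = Σ ΔfH°(products) − Σ ΔfH°(reactants).
# The ΔfH° numbers below are approximate gas-phase literature values in kJ/mol.
DFH = {"C2H4": 52.4, "H2": 0.0, "C2H6": -84.0}

def reaction_enthalpy(reactants, products):
    """reactants/products: dict mapping species -> stoichiometric coefficient."""
    return (sum(n * DFH[s] for s, n in products.items())
            - sum(n * DFH[s] for s, n in reactants.items()))

# Hydrogenation of ethene: C2H4 + H2 -> C2H6
dH = reaction_enthalpy({"C2H4": 1, "H2": 1}, {"C2H6": 1})
print(f"ΔH°rxn ≈ {dH:.1f} kJ/mol")   # ≈ -136 kJ/mol, i.e. exothermic
```

When a tabulated ΔfH° is not available, the same bookkeeping can be applied to estimated values, as described next.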
For complex molecules, a Δ f H ° value may not be available but can be estimated using molecular fragments with known heats of formation . This type of analysis is often referred to as Benson group increment theory , after chemist Sidney Benson who spent a career developing the concept. [ 1 ] [ page needed ] [ 3 ] [ 4 ] The thermochemistry of reactive intermediates— carbocations , carbanions , and radicals —is also of interest to physical organic chemists. Group increment data are available for radical systems. [ 1 ] [ page needed ] Carbocation and carbanion stabilities can be assessed using hydride ion affinities and pK a values , respectively. [ 1 ] [ page needed ] One of the primary methods for evaluating chemical stability and energetics is conformational analysis . Physical organic chemists use conformational analysis to evaluate the various types of strain present in a molecule to predict reaction products. [ 5 ] [ page needed ] Strain can be found in both acyclic and cyclic molecules, manifesting itself in diverse systems as torsional strain , allylic strain , ring strain , and syn -pentane strain . [ 1 ] [ page needed ] A-values provide a quantitative basis for predicting the conformation of a substituted cyclohexane , an important class of cyclic organic compounds whose reactivity is strongly guided by conformational effects. The A-value is the difference in the Gibbs' free energy between the axial and equatorial forms of substituted cyclohexane, and by adding together the A-values of various substituents it is possible to quantitatively predict the preferred conformation of a cyclohexane derivative. In addition to molecular stability, conformational analysis is used to predict reaction products. One commonly cited example of the use of conformational analysis is a bi-molecular elimination reaction (E2). This reaction proceeds most readily when the nucleophile attacks the species that is antiperiplanar to the leaving group. A molecular orbital analysis of this phenomenon suggest that this conformation provides the best overlap between the electrons in the R-H σ bonding orbital that is undergoing nucleophilic attack and the empty σ* antibonding orbital of the R-X bond that is being broken. [ 6 ] [ page needed ] By exploiting this effect, conformational analysis can be used to design molecules that possess enhanced reactivity. The physical processes which give rise to bond rotation barriers are complex, and these barriers have been extensively studied through experimental and theoretical methods. [ 7 ] [ 8 ] [ 9 ] A number of recent articles have investigated the predominance of the steric , electrostatic , and hyperconjugative contributions to rotational barriers in ethane , butane , and more substituted molecules. [ 10 ] Chemists use the study of intramolecular and intermolecular non-covalent bonding/interactions in molecules to evaluate reactivity. Such interactions include, but are not limited to, hydrogen bonding , electrostatic interactions between charged molecules, dipole-dipole interactions , polar-π and cation-π interactions, π-stacking , donor-acceptor chemistry, and halogen bonding . In addition, the hydrophobic effect —the association of organic compounds in water—is an electrostatic, non-covalent interaction of interest to chemists. The precise physical origin of the hydrophobic effect originates from many complex interactions , but it is believed to be the most important component of biomolecular recognition in water. 
[ 1 ] [ page needed ] For example, researchers elucidated the structural basis for folic acid recognition by folate acid receptor proteins. [ 11 ] The strong interaction between folic acid and folate receptor was attributed to both hydrogen bonds and hydrophobic interactions . The study of non-covalent interactions is also used to study binding and cooperativity in supramolecular assemblies and macrocyclic compounds such as crown ethers and cryptands , which can act as hosts to guest molecules. The properties of acids and bases are relevant to physical organic chemistry. Organic chemists are primarily concerned with Brønsted–Lowry acids/bases as proton donors/acceptors and Lewis acids/bases as electron acceptors/donors in organic reactions. Chemists use a series of factors developed from physical chemistry -- electronegativity / Induction , bond strengths , resonance , hybridization , aromaticity , and solvation —to predict relative acidities and basicities. The hard/soft acid/base principle is utilized to predict molecular interactions and reaction direction. In general, interactions between molecules of the same type are preferred. That is, hard acids will associate with hard bases, and soft acids with soft bases. The concept of hard acids and bases is often exploited in the synthesis of inorganic coordination complexes . Physical organic chemists use the mathematical foundation of chemical kinetics to study the rates of reactions and reaction mechanisms. Unlike thermodynamics, which is concerned with the relative stabilities of the products and reactants (Δ G °) and their equilibrium concentrations, the study of kinetics focuses on the free energy of activation (Δ G ‡ ) -- the difference in free energy between the reactant structure and the transition state structure—of a reaction, and therefore allows a chemist to study the process of equilibration . [ 1 ] [ page needed ] Mathematically derived formalisms such as the Hammond Postulate , the Curtin-Hammett principle , and the theory of microscopic reversibility are often applied to organic chemistry . Chemists have also used the principle of thermodynamic versus kinetic control to influence reaction products. The study of chemical kinetics is used to determine the rate law for a reaction. The rate law provides a quantitative relationship between the rate of a chemical reaction and the concentrations or pressures of the chemical species present. [ 12 ] [ page needed ] Rate laws must be determined by experimental measurement and generally cannot be elucidated from the chemical equation . The experimentally determined rate law refers to the stoichiometry of the transition state structure relative to the ground state structure. Determination of the rate law was historically accomplished by monitoring the concentration of a reactant during a reaction through gravimetric analysis , but today it is almost exclusively done through fast and unambiguous spectroscopic techniques. In most cases, the determination of rate equations is simplified by adding a large excess ("flooding") all but one of the reactants. The study of catalysis and catalytic reactions is very important to the field of physical organic chemistry. A catalyst participates in the chemical reaction but is not consumed in the process. 
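Returning briefly to rate-law determination, the sketch below shows the arithmetic behind the "flooding" approach described above: with one reagent in large excess, a second-order rate law collapses to a pseudo-first-order form whose observed rate constant can be read from a linear fit of ln[A] versus time. The concentration-time data and the excess concentration are hypothetical placeholders.

```python
# Pseudo-first-order ("flooding") analysis sketch.
# With B in large excess, rate = k2[A][B] ≈ k_obs[A], so ln[A] vs t is linear.
# The concentration-time data below are hypothetical.
import numpy as np

t = np.array([0.0, 50.0, 100.0, 200.0, 400.0])     # time, s
A = np.array([0.100, 0.078, 0.061, 0.037, 0.014])  # [A], mol/L

slope, _ = np.polyfit(t, np.log(A), 1)
k_obs = -slope                   # observed pseudo-first-order constant, s^-1
B_excess = 1.0                   # mol/L, concentration of the flooded reagent (assumed constant)
k2 = k_obs / B_excess            # second-order constant if rate = k2[A][B]
print(f"k_obs ≈ {k_obs:.2e} s^-1, implied k2 ≈ {k2:.2e} L mol^-1 s^-1")
```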
[ 12 ] [ page needed ] A catalyst lowers the activation energy barrier (Δ G ‡ ), increasing the rate of a reaction by either stabilizing the transition state structure or destabilizing a key reaction intermediate, and as only a small amount of catalyst is required it can provide economic access to otherwise expensive or difficult to synthesize organic molecules. Catalysts may also influence a reaction rate by changing the mechanism of the reaction. [ 1 ] [ page needed ]

Although a rate law provides the stoichiometry of the transition state structure, it does not provide any information about breaking or forming bonds. [ 1 ] [ page needed ] The substitution of an isotope near a reactive position often leads to a change in the rate of a reaction. Isotopic substitution changes the potential energy of reaction intermediates and transition states because heavier isotopes form stronger bonds with other atoms. Atomic mass affects the zero-point vibrational state of the associated molecules, leading to shorter and stronger bonds in molecules with heavier isotopes and longer, weaker bonds in molecules with light isotopes. [ 6 ] [ page needed ] Because vibrational motions often change during the course of a reaction, due to the making and breaking of bonds, the frequencies are affected, and the substitution of an isotope can provide insight into the reaction mechanism and rate law.

The study of how substituents affect the reactivity of a molecule or the rate of reactions is of significant interest to chemists. Substituents can exert an effect through both steric and electronic interactions, the latter of which include resonance and inductive effects . The polarizability of a molecule can also be affected. Most substituent effects are analyzed through linear free energy relationships (LFERs), the most common of which is the Hammett plot analysis . [ 1 ] [ page needed ] This analysis compares the effect of various substituents on the ionization of benzoic acid with their impact on diverse chemical systems. The parameters of the Hammett plots are sigma (σ) and rho (ρ). The value of σ indicates the acidity of a substituted benzoic acid relative to the unsubstituted form: a positive σ value indicates the compound is more acidic, while a negative value indicates that the substituted version is less acidic. The ρ value is a measure of the sensitivity of the reaction to the change in substituent, but the standard σ scale does not capture the stabilization of a localized charge through direct resonance with the reaction center. Therefore, two additional scales were produced: σ + , which concerns substituents that stabilize positive charges via resonance, and σ − , which is for groups that stabilize negative charges via resonance. Hammett analysis can be used to help elucidate the possible mechanisms of a reaction, as illustrated in the fitting sketch below. For example, if it is predicted that the transition state structure has a build-up of negative charge relative to the ground state structure, then electron-withdrawing groups would be expected to increase the rate of the reaction. [ 1 ] [ page needed ] Other LFER scales have been developed: steric and polar effects are analyzed through Taft parameters , and changing the solvent instead of the reactant can provide insight into changes in charge during the reaction, for which the Grunwald-Winstein plot provides quantitative insight. [ 1 ] [ page needed ] [ 13 ] Solvents can have a powerful effect on solubility , stability , and reaction rate .
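The fitting sketch referenced above: a minimal illustration of extracting ρ from a Hammett correlation, log(kX/kH) = ρσ. The σ constants are standard benchmark values, but the relative rate constants are hypothetical and serve only to show the mechanics of the fit.

```python
# Hammett analysis sketch: log(k_X / k_H) = ρ·σ.
# σ values are standard substituent constants; the rate constants are hypothetical.
import numpy as np

sigma = {"p-OMe": -0.27, "p-Me": -0.17, "H": 0.00, "p-Cl": 0.23, "p-NO2": 0.78}
k     = {"p-OMe": 0.45,  "p-Me": 0.62,  "H": 1.00, "p-Cl": 2.1,  "p-NO2": 12.0}  # relative, hypothetical

x = np.array([sigma[s] for s in sigma])
y = np.array([np.log10(k[s] / k["H"]) for s in sigma])
rho, intercept = np.polyfit(x, y, 1)

print(f"ρ ≈ {rho:+.2f}")
if rho > 0:
    # Electron-withdrawing groups accelerate the reaction.
    print("Positive ρ: negative charge builds up in the transition state.")
else:
    print("Negative ρ: positive charge builds up in the transition state.")
```

Other LFERs, such as the Grunwald-Winstein treatment of solvent effects mentioned above, are fitted in the same way against solvent parameters rather than σ.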
A change in solvent can also allow a chemist to influence the thermodynamic or kinetic control of the reaction. Reactions proceed at different rates in different solvents due to the change in charge distribution during a chemical transformation. Solvent effects may operate on the ground state and/or transition state structures. [ 1 ] [ page needed ] An example of the effect of solvent on organic reactions is seen in the comparison of S N 1 and S N 2 reactions . [ 14 ] [ further explanation needed ] [ example needed ] Solvent can also have a significant effect on the thermodynamic equilibrium of a system, for instance as in the case of keto-enol tautomerizations . In non-polar aprotic solvents, the enol form is strongly favored due to the formation of an intramolecular hydrogen-bond , while in polar aprotic solvents, such as methylene chloride , the enol form is less favored due to the interaction between the polar solvent and the polar diketone . [ example needed ] In protic solvents, the equilibrium lies towards the keto form as the intramolecular hydrogen bond competes with hydrogen bonds originating from the solvent. [ 15 ] [ non-primary source needed ] [ non-primary source needed ] [ 16 ] [ non-primary source needed ] [ non-primary source needed ] [ 17 ] [ non-primary source needed ] [ non-primary source needed ] A modern example of the study of solvent effects on chemical equilibrium can be seen in a study of the epimerization of chiral cyclopropylnitrile Grignard reagents . [ 18 ] [ non-primary source needed ] [ non-primary source needed ] This study reports that the equilibrium constant for the cis to trans isomerization of the Grignard reagent is much greater—the preference for the cis form is enhanced—in THF as a reaction solvent, over diethyl ether . However, the faster rate of cis-trans isomerization in THF results in a loss of stereochemical purity. This is a case where understanding the effect of solvent on the stability of the molecular configuration of a reagent is important with regard to the selectivity observed in an asymmetric synthesis . Many aspects of the structure-reactivity relationship in organic chemistry can be rationalized through resonance , electron pushing, induction , the eight electron rule , and s-p hybridization , but these are only helpful formalisms and do not represent physical reality. Due to these limitations, a true understanding of physical organic chemistry requires a more rigorous approach grounded in particle physics . Quantum chemistry provides a rigorous theoretical framework capable of predicting the properties of molecules through calculation of a molecule's electronic structure, and it has become a readily available tool in physical organic chemists in the form of popular software packages. [ citation needed ] The power of quantum chemistry is built on the wave model of the atom , in which the nucleus is a very small, positively charged sphere surrounded by a diffuse electron cloud. Particles are defined by their associated wavefunction , an equation which contains all information associated with that particle. [ 12 ] [ page needed ] All information about the system is contained in the wavefunction. This information is extracted from the wavefunction through the use of mathematical operators. 
EΨ = ĤΨ

The energy associated with a particular wavefunction , perhaps the most important information contained in a wavefunction, can be extracted by solving the Schrödinger equation (above, Ψ is the wavefunction, E is the energy, and Ĥ is the Hamiltonian operator) [ 12 ] [ page needed ] in which an appropriate Hamiltonian operator is applied. In the various forms of the Schrödinger equation, the overall size of a particle's probability distribution increases with decreasing particle mass. For this reason, nuclei are of negligible size in relation to much lighter electrons and are treated as point charges in practical applications of quantum chemistry. Due to complex interactions which arise from electron-electron repulsion, algebraic solutions of the Schrödinger equation are only possible for systems with one electron, such as the hydrogen atom, H2+, H32+, etc.; however, from these simple models arise all the familiar atomic (s, p, d, f) and bonding (σ, π) orbitals. In systems with multiple electrons, an overall multielectron wavefunction describes all of their properties at once. Such wavefunctions are generated through the linear addition of single-electron wavefunctions to generate an initial guess, which is repeatedly modified until its associated energy is minimized. Thousands of guesses are often required until a satisfactory solution is found, so such calculations are performed by powerful computers. Importantly, the solutions for atoms with multiple electrons give properties such as diameter and electronegativity which closely mirror experimental data and the patterns found in the periodic table . The solutions for molecules, such as methane , provide exact representations of their electronic structure which are unobtainable by experimental methods. [ citation needed ] Instead of four discrete σ-bonds from carbon to each hydrogen atom, theory predicts a set of four bonding molecular orbitals which are delocalized across the entire molecule. Similarly, the true electronic structure of 1,3-butadiene shows delocalized π-bonding molecular orbitals stretching through the entire molecule rather than two isolated double bonds as predicted by a simple Lewis structure . [ citation needed ] A complete electronic structure offers great predictive power for organic transformations and dynamics, especially in cases concerning aromatic molecules , extended π systems , bonds between metal ions and organic molecules , molecules containing nonstandard heteroatoms like selenium and boron , and the conformational dynamics of large molecules such as proteins , wherein the many approximations in chemical formalisms make structure and reactivity prediction impossible. An example of how electronic structure determination is a useful tool for the physical organic chemist is the metal-catalyzed dearomatization of benzene . Chromium tricarbonyl is highly electrophilic due to the withdrawal of electron density from filled chromium d-orbitals into antibonding CO orbitals, and is able to covalently bond to the face of a benzene molecule through delocalized molecular orbitals . The CO ligands inductively draw electron density from benzene through the chromium atom, and dramatically activate benzene to nucleophilic attack. Nucleophiles are then able to react to make cyclohexadienes, which can be used in further transformations such as Diels-Alder cycloadditions .
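The delocalized π system of 1,3-butadiene mentioned above can be reproduced with even the simplest quantum model. The sketch below is a Hückel-level calculation (not a full electronic-structure method): diagonalizing the connectivity matrix of the four π carbons gives orbital energies E = α + xβ, and the occupied orbitals have coefficients on every carbon rather than on isolated C=C pairs.

```python
# Hückel π-MO sketch for 1,3-butadiene (energies in units of β relative to α).
import numpy as np

# Adjacency (connectivity) matrix of the four-carbon π system.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

x, C = np.linalg.eigh(A)                  # Hückel energies are E = α + x·β, with β < 0
for xi, c in zip(x[::-1], C.T[::-1]):     # list most bonding (largest x) first
    print(f"E = α {xi:+.3f}·β   coefficients: {np.round(c, 2)}")
# The two orbitals with the largest x hold the four π electrons; their coefficients
# are nonzero on all four carbons, i.e. the π bonding is delocalized rather than
# localized in two isolated double bonds.
```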
[ 19 ] Quantum chemistry can also provide insight into the mechanism of an organic transformation without the collection of any experimental data. Because wavefunctions provide the total energy of a given molecular state, guessed molecular geometries can be optimized to give relaxed molecular structures very similar to those found through experimental methods. [ 20 ] [ page needed ] Reaction coordinates can then be simulated, and transition state structures solved. Solving a complete energy surface for a given reaction is therefore possible, and such calculations have been applied to many problems in organic chemistry where kinetic data is unavailable or difficult to acquire. [ 1 ] [ page needed ] Physical organic chemistry often entails the identification of molecular structure, dynamics, and the concentration of reactants in the course of a reaction. The interaction of molecules with light can afford a wealth of data about such properties through nondestructive spectroscopic experiments , with light absorbed when the energy of a photon matches the difference in energy between two states in a molecule and emitted when an excited state in a molecule collapses to a lower energy state. Spectroscopic techniques are broadly classified by the type of excitation being probed, such as vibrational , rotational , electronic , nuclear magnetic resonance (NMR), and electron paramagnetic resonance spectroscopy. In addition to spectroscopic data, structure determination is often aided by complementary data collected from X-Ray diffraction and mass spectrometric experiments. [ 21 ] [ page needed ] One of the most powerful tools in physical organic chemistry is NMR spectroscopy . An external magnetic field applied to a paramagnetic nucleus generates two discrete states, with positive and negative spin values diverging in energy ; the difference in energy can then be probed by determining the frequency of light needed to excite a change in spin state for a given magnetic field. Nuclei that are not indistinguishable in a given molecule absorb at different frequencies, and the integrated peak area in an NMR spectrum is proportional to the number of nuclei responding to that frequency. [ 22 ] It is possible to quantify the relative concentration of different organic molecules simply by integration peaks in the spectrum, and many kinetic experiments can be easily and quickly performed by following the progress of a reaction within one NMR sample. Proton NMR is often used by the synthetic organic chemist because protons associated with certain functional groups give characteristic absorption energies, but NMR spectroscopy can also be performed on isotopes of nitrogen , carbon , fluorine , phosphorus , boron , and a host of other elements . In addition to simple absorption experiments, it is also possible to determine the rate of fast atom exchange reactions through suppression exchange measurements, interatomic distances through multidimensional nuclear Overhauser effect experiments, and through-bond spin-spin coupling through homonuclear correlation spectroscopy . [ 23 ] In addition to the spin excitation properties of nuclei, it is also possible to study the properties of organic radicals through the same fundamental technique. Unpaired electrons also have a net spin , and an external magnetic field allows for the extraction of similar information through electron paramagnetic resonance (EPR) spectroscopy. 
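As a small illustration of the quantitative use of NMR described above, the sketch below converts peak integrals into mole fractions by normalizing each integral by the number of equivalent nuclei it represents. The peak labels, integrals, and proton counts are hypothetical.

```python
# Relative concentrations from NMR peak integrals (sketch).
# Integral / (number of equivalent nuclei) is proportional to concentration.
# All numbers below are hypothetical placeholders.
integrals = {"product CH3 singlet": 4.5, "starting material CH2 quartet": 2.0}
protons   = {"product CH3 singlet": 3,   "starting material CH2 quartet": 2}

moles = {peak: integrals[peak] / protons[peak] for peak in integrals}
total = sum(moles.values())
for species, n in moles.items():
    print(f"{species}: {100 * n / total:.1f} mol%")
```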
[ 1 ] [ page needed ] Vibrational spectroscopy , or infrared (IR) spectroscopy, allows for the identification of functional groups and, due to its low expense and robustness, is often used in teaching labs and the real-time monitoring of reaction progress in difficult to reach environments (high pressure, high temperature, gas phase, phase boundaries ). Molecular vibrations are quantized in an analogous manner to electronic wavefunctions, with integer increases in frequency leading to higher energy states . The difference in energy between vibrational states is nearly constant, often falling in the energy range corresponding to infrared photons, because at normal temperatures molecular vibrations closely resemble harmonic oscillators . It allows for the crude identification of functional groups in organic molecules , but spectra are complicated by vibrational coupling between nearby functional groups in complex molecules. Therefore, its utility in structure determination is usually limited to simple molecules. Further complicating matters is that some vibrations do not induce a change in the molecular dipole moment and will not be observable with standard IR absorption spectroscopy. These can instead be probed through Raman spectroscopy , but this technique requires a more elaborate apparatus and is less commonly performed. However, as Raman spectroscopy relies on light scattering it can be performed on microscopic samples such as the surface of a heterogeneous catalyst , a phase boundary , or on a one microliter (μL) subsample within a larger liquid volume. [ 21 ] [ page needed ] The applications of vibrational spectroscopy are often used by astronomers to study the composition of molecular gas clouds , extrasolar planetary atmospheres , and planetary surfaces . Electronic excitation spectroscopy , or ultraviolet-visible (UV-vis) spectroscopy, is performed in the visible and ultraviolet regions of the electromagnetic spectrum and is useful for probing the difference in energy between the highest energy occupied (HOMO) and lowest energy unoccupied (LUMO) molecular orbitals . This information is useful to physical organic chemists in the design of organic photochemical systems and dyes , as absorption of different wavelengths of visible light give organic molecules color. A detailed understanding of an electronic structure is therefore helpful in explaining electronic excitations, and through careful control of molecular structure it is possible to tune the HOMO-LUMO gap to give desired colors and excited state properties. [ 24 ] Mass spectrometry is a technique which allows for the measurement of molecular mass and offers complementary data to spectroscopic techniques for structural identification. In a typical experiment a gas phase sample of an organic material is ionized and the resulting ionic species are accelerated by an applied electric field into a magnetic field . The deflection imparted by the magnetic field, often combined with the time it takes for the molecule to reach a detector, is then used to calculate the mass of the molecule. Often in the course of sample ionization large molecules break apart, and the resulting data show a parent mass and a number of smaller fragment masses; such fragmentation can give rich insight into the sequence of proteins and nucleic acid polymers. 
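A minimal sketch of the mass calculation just described, assuming a time-of-flight arrangement: an ion accelerated through a potential V acquires kinetic energy zeV = mv²/2, so the flight time t over a drift length L gives m = 2zeV(t/L)². The instrument parameters and flight time below are hypothetical.

```python
# Time-of-flight mass estimate (sketch): m = 2·z·e·V·(t/L)^2.
# Accelerating potential, drift length and flight time are hypothetical values.
e = 1.602e-19      # elementary charge, C
amu = 1.6605e-27   # atomic mass unit, kg

V = 20_000.0       # accelerating potential, V
L = 1.5            # drift length, m
t = 35.0e-6        # measured flight time, s
z = 1              # charge state

m = 2 * z * e * V * (t / L) ** 2
print(f"m ≈ {m / amu:.0f} Da")
```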
In addition to the mass of a molecule and its fragments, the distribution of isotopic variant masses can also be determined and the qualitative presence of certain elements identified due to their characteristic natural isotope distribution . The ratio of fragment mass population to the parent ion population can be compared against a library of empirical fragmentation data and matched to a known molecular structure. [ 25 ] Combined gas chromatography and mass spectrometry is used to qualitatively identify molecules and quantitatively measure concentration with great precision and accuracy, and is widely used to test for small quantities of biomolecules and illicit narcotics in blood samples. For synthetic organic chemists it is a useful tool for the characterization of new compounds and reaction products. Unlike spectroscopic methods, X-ray crystallography always allows for unambiguous structure determination and provides precise bond angles and lengths totally unavailable through spectroscopy. It is often used in physical organic chemistry to provide an absolute molecular configuration and is an important tool in improving the synthesis of a pure enantiomeric substance. It is also the only way to identify the position and bonding of elements that lack an NMR active nucleus such as oxygen . Indeed, before x-ray structural determination methods were made available in the early 20th century all organic structures were entirely conjectural: tetrahedral carbon , for example, was only confirmed by the crystal structure of diamond , [ 26 ] and the delocalized structure of benzene was confirmed by the crystal structure of hexamethylbenzene . [ 27 ] While crystallography provides organic chemists with highly satisfying data, it is not an everyday technique in organic chemistry because a perfect single crystal of a target compound must be grown. Only complex molecules, for which NMR data cannot be unambiguously interpreted, require this technique. In the example below, the structure of the host–guest complex would have been quite difficult to solve without a single crystal structure: there are no protons on the fullerene , and with no covalent bonds between the two halves of the organic complex spectroscopy alone was unable to prove the hypothesized structure. [ citation needed ]
https://en.wikipedia.org/wiki/Physical_organic_chemistry
A phenomenon ( pl. : phenomena ), sometimes spelled phaenomenon , is an observable event . [ 1 ] The term came into its modern philosophical usage through Immanuel Kant , who contrasted it with the noumenon , which cannot be directly observed. Kant was heavily influenced by Gottfried Wilhelm Leibniz in this part of his philosophy, in which phenomenon and noumenon serve as interrelated technical terms. Far predating this, the ancient Greek Pyrrhonist philosopher Sextus Empiricus also used phenomenon and noumenon as interrelated technical terms. In popular usage, a phenomenon often refers to an extraordinary, unusual or notable event. According to the Dictionary of Visual Discourse : [ 2 ] In ordinary language 'phenomenon/phenomena' refer to any occurrence worthy of note and investigation, typically an untoward or unusual event, person or fact that is of special significance or otherwise notable. In modern philosophical use, the term phenomena means things as they are experienced through the senses and processed by the mind as distinct from things in and of themselves ( noumena ). In his inaugural dissertation , titled On the Form and Principles of the Sensible and Intelligible World , Immanuel Kant (1770) theorizes that the human mind is restricted to the logical world and thus can only interpret and understand occurrences according to their physical appearances. He wrote that humans could infer only as much as their senses allowed, but not experience the actual object itself. [ 3 ] Thus, the term phenomenon refers to any incident deserving of inquiry and investigation, especially processes and events which are particularly unusual or of distinctive importance. [ 2 ] In scientific usage, a phenomenon is any event that is observable , including the use of instrumentation to observe, record, or compile data. Especially in physics , the study of a phenomenon may be described as measurements related to matter , energy , or time , such as Isaac Newton 's observations of the Moon's orbit and of gravity ; or Galileo Galilei 's observations of the motion of a pendulum . [ 4 ] In natural sciences , a phenomenon is an observable happening or event. Often, this term is used without considering the causes of a particular event. Example of a physical phenomenon is an observable phenomenon of the lunar orbit or the phenomenon of oscillations of a pendulum. [ 4 ] A mechanical phenomenon is a physical phenomenon associated with the equilibrium or motion of objects. [ 5 ] Some examples are Newton's cradle , engines , and double pendulums . Group phenomena concern the behavior of a particular group of individual entities, usually organisms and most especially people. The behavior of individuals often changes in a group setting in various ways, and a group may have its own behaviors not possible for an individual because of the herd mentality . Social phenomena apply especially to organisms and people in that subjective states are implicit in the term. Attitudes and events particular to a group may have effects beyond the group, and either be adapted by the larger society, or seen as aberrant, being punished or shunned.
https://en.wikipedia.org/wiki/Physical_phenomena
A physical plant , building plant , mechanical plant or industrial plant (and where context is given, often just plant ) refers to the technical infrastructure used in operation and maintenance of a given facility. The operation of these technical facilities and services, or the department of an organization which does so, is called "plant operations" or facility management . Industrial plant should not be confused with "manufacturing plant" in the sense of "a factory ". The design and equipment of nuclear power plants have, for the most part, remained stagnant over the last 30 years. [ 1 ] There are three types of reactor cooling mechanisms: light water reactors , liquid metal reactors , and high-temperature gas-cooled reactors . [ 2 ] While, for the most part, equipment remains the same, there have been some minimal modifications to existing reactors improving safety and efficiency. [ 3 ] There have also been significant design changes for all these reactors. However, they remain theoretical and unimplemented. [ 4 ] Nuclear power plant equipment can be separated into two categories: primary systems and balance-of-plant systems. [ 5 ] Primary systems are equipment involved in the production and safety of nuclear power . [ 6 ] The reactor specifically has equipment such as reactor vessels usually surrounding the core for protection, and the reactor core which holds fuel rods . It also includes reactor cooling equipment consisting of liquid cooling loops and circulating coolant . These loops are usually separate systems each having at least one pump. [ 7 ] Other equipment includes steam generators and pressurizers that ensure pressure in the plant is adjusted as needed. [ 8 ] Containment equipment encompasses the physical structure built around the reactor to protect the surroundings from reactor failure. [ 9 ] Lastly, primary systems also include emergency core cooling equipment and reactor protection equipment. [ 10 ] Balance-of-plant systems are equipment used commonly across power plants in the production and distribution of power. [ 11 ] They utilize turbines , generators , condensers , feedwater equipment, auxiliary equipment, fire protection equipment, emergency power supply equipment and used fuel storage . [ 12 ] In broadcast engineering , the term transmitter plant refers to the part of the physical plant associated with the transmitter and its controls and inputs, the studio/transmitter link (if the radio studio is off-site), [ 13 ] the radio antenna and radomes , feedline and desiccation / nitrogen system, broadcast tower and building , tower lighting, generator , and air conditioning. These are often monitored by an automatic transmission system , which reports conditions via telemetry ( transmitter/studio link ). [ citation needed ] Economic constraints such as capital and operating expenditure lead to Passive Optical Networks as the primary fiber optic model used to for connecting users to the fiber optic plant. [ 14 ] A central office hub utilities transmission equipment, allowing it to send signals to between one and 32 users per line. [ 14 ] The main fiber backbone of a PON network is called an optical line terminal . [ 15 ] The operational requirements, such as maintenance, equipment sharing efficiency, sharing of the actual fiber and potential need for future expansion, all determine which specific variant of PON is used. [ 14 ] A Fiber Optic Splitter is equipment used when multiple users must be connected to the same backbone of fiber. 
[ 14 ] EPON is a variant of PON, which can hold 704 connections in one line. [ 15 ] Fibre networks based on a PON backbone have several options in connecting individuals to their network, such as fibre to the “curb, building, or home”. [ 16 ] This equipment utilises different wavelengths to send and receive data simultaneously and without interference [ 15 ] Base stations are a key component of mobile telecommunications infrastructure. They connect the end user to the main network. [ 17 ] They have physical barriers protecting transition equipment and are placed on masts or on the roofs/sides of buildings. Where it is located is determined by the local radio frequency coverage that is required. [ 18 ] These base stations utilize different kinds of antennas, either on buildings or on landscapes, to transmit signals back and forth [ 19 ] Directional antennas are used to direct signals in different direction, whereas line-of-sight radio-communication antennas, allow for communication in-between base stations. [ 19 ] Base stations are of three types: macro-, micro- and pico-cell sub-stations. [ 18 ] Macro cells are the most widely used base station, utilizing omnidirectional or radio-communication dishes. Micro cells are more specialized; these expand and provide additional coverage in areas where macro cells cannot. [ 20 ] They are typically placed on streetlights, usually not requiring radio-communication dishes. This is because they are physically interconnected via fiber-optic cables. [ 17 ] Pico cell stations are further specific, providing additional coverage only within a building when the coverage is poor. They will usually be placed on a roof or a wall in each building. [ 17 ] Desalination plants are responsible for removing salt from water sources so that it becomes usable for human consumption. [ 21 ] Reverse osmosis , multi-stage flash and multi-effect distillation , are three main types of equipment and processes used that differentiate desalination plants. [ 21 ] Thermal technologies such as MSF and MED are the most used in the Middle East, as they have low access to fresh water supply yet have access to excess energy. [ 21 ] Reverse osmosis plants use “Semi-Permeable Membrane Polymers”, that allow for water to pass through unabated while blocking molecules not suitable for drinking. [ 22 ] Reverse Osmosis plants typically use intake pipes, which allow for water to be abstracted at its source. This water is then taken to pre-treatment centers, where particles in the water are removed with chemicals added to prevent water damage. HR- pumps and booster pumps are used to provide pressure and pump the water at different heights of the facility, which is then transferred to a reverse osmosis module. This equipment, depending on the specifications, effectively filters out between 98 and 99.5% of salt from the water. Waste that is separated through these pre-treatment and reverse osmosis modules is taken to an energy recovery module, and any further excess is pumped back out through an outfall pipe. Control equipment is used to monitor this process and ensure it continues to run smoothly. When the water is separated, it is then delivered to a household via a distribution network for consumption. [ 23 ] Pre-treatment systems have intake screening equipment such as forebays and screens . [ 24 ] Intake equipment can vary in design; open ocean intakes are either placed onshore or off the shore. 
Offshore intakes transfer water using concrete channels into screening chambers, from which it is transferred directly to pre-treatment centers using intake pumps, where chemicals are added. The water is then separated from dissolved and suspended solids using a flotation device before being pumped through a semi-permeable membrane. [ 25 ] Electrodialysis competes with reverse osmosis systems and has been used industrially since the 1960s. [ 26 ] It uses cathodes and anodes at multiple stages to filter out ionic compounds into a concentrated form, leaving more pure and safe drinking water. This technology does have a higher cost of energy, so unlike reverse osmosis it is mainly used for brackish water , which has a lower salt content than seawater . [ 27 ] Thermal distillation equipment is commonly used in the Middle East; similarly to reverse osmosis, it uses water abstraction and pre-treatment equipment, although in MSF different chemicals such as antiscalants and anti-corrosives are added. Heating equipment is used at different stages at different pressure levels until the water reaches a brine heater. The brine heater is what provides steam at these different stages to change the boiling point of the water. [ 28 ] Conventional water treatment plants are used to extract, purify and then distribute water from fresh water sources. Water treatment plants require a large network of equipment to retrieve, store and transfer water to a plant for treatment. Water from underground sources is typically extracted via wells to be transported to a plant. [ 29 ] Typical well equipment includes pipes, pumps, and shelters. [ 30 ] If this underground water source is distant from the treatment plant, then aqueducts are commonly used to transport it. [ 31 ] Much of this transport equipment, such as aqueducts, pipes , and tunnels, utilizes open-channel flow to ensure delivery of the water. [ 32 ] This takes advantage of geography and gravity to allow the water to flow naturally from one place to another without the need for additional pumps. Flow measurement equipment is used to monitor the flow and confirm that it remains consistent and problem-free. [ 33 ] Watersheds are areas into which surface water naturally flows and where it is usually stored after collection. [ 34 ] For storm water runoff , natural bodies of water as well as filtration systems are used to store and transfer water. Non-stormwater runoffs use equipment such as septic tanks to treat water onsite, or sewer systems where the water is collected and transferred to a water treatment plant. [ 35 ] Once water arrives at a plant, it undergoes a pre-treatment process where it is passed through screens, such as passive screens or bar screens, to stop certain kinds of debris from entering equipment further down the facility that could damage it. [ 36 ] After that, a mix of chemicals is added using either a dry chemical feeder or solution metering pumps . To prevent the water from being unusable or damaging equipment, these chemicals are measured using an electromechanical chemical feed device to ensure the correct levels of chemicals in the water are maintained. [ 37 ] Corrosion-resistant pipe materials such as PVC , aluminum and stainless steel are used to transfer water safely due to increases in acidity from pre-treatment. [ 38 ] Coagulation is usually the next step, in which salts such as ferric sulfate are used to destabilize organic matter in a mixing tank.
Variable-speed paddle mixers are used to identify the best mix of salts to use for a specific body of water being treated. [ 39 ] Flocculation basins use temperature to condense unsafe particles together. [ 40 ] Setting tanks are then used to perform sedimentation , which removes certain solids using gravity so that they accumulate at the bottom of the tank. Rectangular and center feed basins are used to remove the sediment that is taken to sludge processing centers. Filtration then separates the larger materials that remain in the water source using pressure filtration, diatomaceous earth filtration, and direct filtration. [ 41 ] Water is then disinfected where it is then either stored or distributed for use. [ 42 ] Stakeholders have different responsibilities for the maintenance of equipment in a water treatment plant. [ 43 ] In terms of the distribution equipment to the end user, it is mainly the plant owners who are responsible for the maintenance of this equipment. An engineers role is more focused on maintaining the equipment used to treat water. Public regulators are responsible for monitoring water supply quality and ensuring it is safe to drink. [ 44 ] These stakeholders have active responsibility for these processes and equipment. The manufacturer's primary responsibility is off site, providing quality assurance of equipment function prior to use. [ 45 ] An HVAC plant usually includes air conditioning (both heating and cooling systems and ventilation) and other mechanical systems. It often also includes the maintenance of other systems, such as plumbing and lighting. The facility itself may be an office building, a school campus, military base, apartment complex, or the like. HVAC systems can be used to transport heat towards specific areas within a given facility or building. [ 46 ] Heat pumps are used to push heat in a certain direction. Specific heat pumps used vary, potentially including, solar thermal and ground source pumps. Other common components are finned tube heat exchanger and fans; however, these are limited and can lead to heat loss. [ 46 ] HVAC ventilation systems primarily remove air-borne particles through forced circulation. [ 47 ]
https://en.wikipedia.org/wiki/Physical_plant
A physical property is any property of a physical system that is measurable . [ 1 ] The changes in the physical properties of a system can be used to describe its changes between momentary states. A quantifiable physical property is called physical quantity . Measurable physical quantities are often referred to as observables . Some physical properties are qualitative , such as shininess , brittleness , etc.; some general qualitative properties admit more specific related quantitative properties, such as in opacity , hardness , ductility , viscosity , etc. Physical properties are often characterized as intensive and extensive properties . An intensive property does not depend on the size or extent of the system, nor on the amount of matter in the object, while an extensive property shows an additive relationship. These classifications are in general only valid in cases when smaller subdivisions of the sample do not interact in some physical or chemical process when combined. Properties may also be classified with respect to the directionality of their nature. For example, isotropic properties do not change with the direction of observation, and anisotropic properties do have spatial variance. It may be difficult to determine whether a given property is a material property or not. Color , for example, can be seen and measured; however, what one perceives as color is really an interpretation of the reflective properties of a surface and the light used to illuminate it. In this sense, many ostensibly physical properties are called supervenient . A supervenient property is one which is actual, but is secondary to some underlying reality. This is similar to the way in which objects are supervenient on atomic structure. A cup might have the physical properties of mass, shape, color, temperature, etc., but these properties are supervenient on the underlying atomic structure, which may in turn be supervenient on an underlying quantum structure. Physical properties are contrasted with chemical properties which determine the way a material behaves in a chemical reaction . The physical properties of an object that are traditionally defined by classical mechanics are often called mechanical properties. Other broad categories, commonly cited, are electrical properties, optical properties, thermal properties, etc. Physical properties include: [ 2 ]
https://en.wikipedia.org/wiki/Physical_property
A physical quantity (or simply quantity ) [ 1 ] [ a ] is a property of a material or system that can be quantified by measurement . A physical quantity can be expressed as a value , which is the algebraic multiplication of a numerical value and a unit of measurement . For example, the physical quantity mass , symbol m , can be quantified as m = n kg, where n is the numerical value and kg is the unit symbol (for kilogram ). Quantities that are vectors have, besides numerical value and unit, direction or orientation in space. Following ISO 80000-1 , [ 1 ] any value or magnitude of a physical quantity is expressed as a comparison to a unit of that quantity. The value of a physical quantity Z is expressed as the product of a numerical value { Z } (a pure number) and a unit [ Z ]: Z = { Z } × [ Z ]. For example, let Z be "2 metres"; then { Z } = 2 is the numerical value and [ Z ] = metre is the unit. Conversely, the numerical value expressed in an arbitrary unit can be obtained as { Z } = Z / [ Z ]. The multiplication sign is usually left out, just as it is left out between variables in the scientific notation of formulas. The convention used to express quantities is referred to as quantity calculus . In formulas, the unit [ Z ] can be treated as if it were a specific magnitude of a kind of physical dimension : see Dimensional analysis for more on this treatment.

International recommendations for the use of symbols for quantities are set out in ISO/IEC 80000 , the IUPAP red book and the IUPAC green book . For example, the recommended symbol for the physical quantity "mass" is m , and the recommended symbol for the quantity "electric charge" is Q . Physical quantities are normally typeset in italics. Purely numerical quantities, even those denoted by letters, are usually printed in roman (upright) type, though sometimes in italics. Symbols for elementary functions (circular trigonometric, hyperbolic, logarithmic etc.), changes in a quantity like Δ in Δ y , or operators like d in d x , are also recommended to be printed in roman type.

A scalar is a physical quantity that has magnitude but no direction. Symbols for physical quantities are usually chosen to be a single letter of the Latin or Greek alphabet , and are printed in italic type. Vectors are physical quantities that possess both magnitude and direction and whose operations obey the axioms of a vector space . Symbols for physical quantities that are vectors are in bold type, underlined or with an arrow above. For example, if u is the speed of a particle, then the straightforward notations for its velocity are u , u , or u → . Scalar and vector quantities are the simplest tensor quantities , which are tensors that can be used to describe more general physical properties. For example, the Cauchy stress tensor possesses magnitude, direction, and orientation qualities.

The notion of dimension of a physical quantity was introduced by Joseph Fourier in 1822. [ 2 ] By convention, physical quantities are organized in a dimensional system built upon base quantities, each of which is regarded as having its own dimension. There is often a choice of unit, though SI units are usually used in scientific contexts due to their ease of use, international familiarity and prescription. For example, a quantity of mass might be represented by the symbol m , and could be expressed in the units kilograms (kg), pounds (lb), or daltons (Da).
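A minimal sketch of quantity calculus in code, using the mass example above: the value of a quantity is a numerical value times a unit, so changing the unit merely rescales the numerical value by the ratio of the two units. The Quantity class and conversion table are illustrative, not a standard library interface.

```python
# Quantity-calculus sketch: Z = {Z}·[Z], so {Z} in a new unit is {Z}·[Z_old]/[Z_new].
# The unit table stores each unit's size in SI; no dimensional compatibility check
# is performed here (see the note on dimensional homogeneity that follows).
UNIT_IN_SI = {"m": 1.0, "cm": 0.01, "kg": 1.0, "lb": 0.45359237, "Da": 1.66053907e-27}

class Quantity:
    def __init__(self, numerical_value, unit):
        self.numerical_value = numerical_value
        self.unit = unit

    def to(self, new_unit):
        factor = UNIT_IN_SI[self.unit] / UNIT_IN_SI[new_unit]
        return Quantity(self.numerical_value * factor, new_unit)

    def __repr__(self):
        return f"{self.numerical_value:g} {self.unit}"

Z = Quantity(2, "m")           # Z = {Z}[Z] with {Z} = 2 and [Z] = metre
m = Quantity(70, "kg")
print(Z, "->", Z.to("cm"))     # 2 m -> 200 cm
print(m, "->", m.to("lb"))     # 70 kg -> ~154.3 lb
```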
Dimensional homogeneity is not necessarily sufficient for quantities to be comparable; [ 1 ] for example, both kinematic viscosity and thermal diffusivity have dimension of square length per time (in units of m 2 /s ). Quantities of the same kind share extra commonalities beyond their dimension and units allowing their comparison; for example, not all dimensionless quantities are of the same kind. [ 1 ] A systems of quantities relates physical quantities, and due to this dependence, a limited number of quantities can serve as a basis in terms of which the dimensions of all the remaining quantities of the system can be defined. A set of mutually independent quantities may be chosen by convention to act as such a set, and are called base quantities. The seven base quantities of the International System of Quantities (ISQ) and their corresponding SI units and dimensions are listed in the following table. [ 3 ] : 136 Other conventions may have a different number of base units (e.g. the CGS and MKS systems of units). The angular quantities, plane angle and solid angle , are defined as derived dimensionless quantities in the SI. For some relations, their units radian and steradian can be written explicitly to emphasize the fact that the quantity involves plane or solid angles. [ 3 ] : 137 Derived quantities are those whose definitions are based on other physical quantities (base quantities). Important applied base units for space and time are below. Area and volume are thus, of course, derived from the length, but included for completeness as they occur frequently in many derived quantities, in particular densities. Important and convenient derived quantities such as densities, fluxes , flows , currents are associated with many quantities. Sometimes different terms such as current density and flux density , rate , frequency and current , are used interchangeably in the same context; sometimes they are used uniquely. To clarify these effective template-derived quantities, we use q to stand for any quantity within some scope of context (not necessarily base quantities) and present in the table below some of the most commonly used symbols where applicable, their definitions, usage, SI units and SI dimensions – where [ q ] denotes the dimension of q . For time derivatives, specific, molar, and flux densities of quantities, there is no one symbol; nomenclature depends on the subject, though time derivatives can be generally written using overdot notation. For generality we use q m , q n , and F respectively. No symbol is necessarily required for the gradient of a scalar field, since only the nabla/del operator ∇ or grad needs to be written. For spatial density, current, current density and flux, the notations are common from one context to another, differing only by a change in subscripts. For current density, t ^ {\displaystyle \mathbf {\hat {t}} } is a unit vector in the direction of flow, i.e. tangent to a flowline. Notice the dot product with the unit normal for a surface, since the amount of current passing through the surface is reduced when the current is not normal to the area. Only the current passing perpendicular to the surface contributes to the current passing through the surface, no current passes in the (tangential) plane of the surface. The calculus notations below can be used synonymously. 
If X is an n -variable function X ≡ X ( x1 , x2 , ⋯, xn ), then the differential n -space volume element is dⁿx ≡ dVn ≡ dx1 dx2 ⋯ dxn . There is no common symbol for n -space density; here ρn is used (the space may be a length, area, volume or higher-dimensional region). A spectral quantity is recovered from its spectral density by integration over wavelength or frequency: q = ∫ qλ dλ or q = ∫ qν dν . In transport mechanics and nuclear/particle physics, a quantity is obtained from its flux F by q = ∭ F dA dt , while the flux of a vector field F through a surface S is ΦF = ∬S F ⋅ dA . For a k -vector quantity q , the associated moment about a point is m = r ∧ q .
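Returning to dimensional homogeneity and base quantities, the sketch below represents the dimension of a quantity as a tuple of exponents over the seven ISQ base quantities. This makes it easy to check whether two quantities are dimensionally compatible, and it shows that kinematic viscosity and thermal diffusivity share the dimension L²T⁻¹ even though they are not quantities of the same kind. The function and variable names are illustrative.

```python
# Dimensional bookkeeping sketch: a dimension as a tuple of exponents over the
# seven ISQ base quantities. Two quantities can only be added or compared if
# these exponent tuples match (dimensional homogeneity), although, as noted
# above, matching dimensions alone do not make them the same kind of quantity.
BASE = ("time", "length", "mass", "current", "temperature", "amount", "luminous_intensity")

def dim(**exponents):
    return tuple(exponents.get(b, 0) for b in BASE)

velocity            = dim(length=1, time=-1)
kinematic_viscosity = dim(length=2, time=-1)
thermal_diffusivity = dim(length=2, time=-1)
force               = dim(mass=1, length=1, time=-2)

print(kinematic_viscosity == thermal_diffusivity)  # True: both are L^2 T^-1
print(velocity == force)                           # False: dimensionally incompatible
```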
https://en.wikipedia.org/wiki/Physical_quantity
A physical system is a collection of physical objects under study. [ 1 ] The collection differs from a set : all the objects must coexist and have some physical relationship. [ 2 ] In other words, it is a portion of the physical universe chosen for analysis. Everything outside the system is known as the environment , which is ignored except for its effects on the system. The split between system and environment is the analyst's choice, generally made to simplify the analysis. For example, the water in a lake, the water in half of a lake, or an individual molecule of water in the lake can each be considered a physical system. An isolated system is one that has negligible interaction with its environment. Often a system in this sense is chosen to correspond to the more usual meaning of system , such as a particular machine. In the study of quantum coherence , the "system" may refer to the microscopic properties of an object (e.g. the mean of a pendulum bob), while the relevant "environment" may be the internal degrees of freedom , described classically by the pendulum's thermal vibrations. Because no quantum system is completely isolated from its surroundings, [ 3 ] it is important to develop a theoretical framework for treating these interactions in order to obtain an accurate understanding of quantum systems . In control theory , a physical system being controlled (a "controlled system") is called a " plant ".
https://en.wikipedia.org/wiki/Physical_system
Physical vapor deposition ( PVD ), sometimes called physical vapor transport ( PVT ), describes a variety of vacuum deposition methods which can be used to produce thin films and coatings on substrates including metals, ceramics, glass, and polymers. PVD is characterized by a process in which the material transitions from a condensed phase to a vapor phase and then back to a thin film condensed phase. The most common PVD processes are sputtering and evaporation . PVD is used in the manufacturing of items which require thin films for optical, mechanical, electrical, acoustic or chemical functions. Examples include semiconductor devices such as thin-film solar cells , [ 1 ] microelectromechanical devices such as thin-film bulk acoustic resonators, aluminized PET film for food packaging and balloons , [ 2 ] and titanium nitride coated cutting tools for metalworking. Besides PVD tools for fabrication, special smaller tools used mainly for scientific purposes have been developed. [ 3 ] The source material is unavoidably also deposited on most other surfaces interior to the vacuum chamber, including the fixturing used to hold the parts. This is called overshoot. Various thin film characterization techniques can be used to measure the physical properties of PVD coatings. PVD can be used to make anisotropic glasses of low molecular weight for organic semiconductors . [ 10 ] The parameters needed to allow the formation of this type of glass are molecular mobility and an anisotropic structure at the free surface of the glass. [ 10 ] The configuration of the molecules is important: they need to settle into a lower-energy state before newly arriving molecules bury the material during deposition. As molecules are added, the structure equilibrates, gains mass, and bulks out, giving it more kinetic stability. [ 10 ] The packing of molecules deposited by PVD is face-on (that is, not at the long tail end), which allows further overlap of pi orbitals and thereby also increases the stability of the added molecules and their bonds. The orientation of these added materials depends mainly on the temperature at which molecules are deposited or extracted. [ 10 ] The equilibration of the molecules is what provides the glass with its anisotropic characteristics. The anisotropy of these glasses is valuable as it allows a higher charge carrier mobility. [ 10 ] This anisotropic packing of glass is valuable due to its versatility and the fact that glass provides added benefits beyond crystals, such as homogeneity and flexibility of composition. By varying the composition and duration of the process, a range of colors can be produced by PVD on stainless steel. The resulting colored stainless steel product can appear as brass, bronze, and other metals or alloys. This PVD-colored stainless steel can be used as exterior cladding for buildings and structures, such as the Vessel sculpture in New York City and The Bund in Shanghai. It is also used for interior hardware, paneling, and fixtures, and is even used on some consumer electronics, like the Space Gray and Gold finishes of the iPhone and Apple Watch. [ citation needed ] PVD is used to enhance the wear resistance of steel cutting tools ' surfaces and decrease the risk of adhesion and sticking between tools and a workpiece. This includes tools used in metalworking or plastic injection molding .
[ 11 ] : 2 The coating is typically a thin ceramic layer, less than 4 μm thick, with very high hardness and low friction. The workpiece itself must be hard enough to support the coating dimensionally and to avoid embrittlement. PVD can be combined with a plasma nitriding treatment of the steel to increase the load-bearing capacity of the coating. [ 11 ] : 2 Chromium nitride (CrN), titanium nitride (TiN), and titanium carbonitride (TiCN) may be used as PVD coatings for plastic molding dies. [ 11 ] : 5 PVD coatings are generally used to improve hardness, increase wear resistance, and prevent oxidation. They can also be used for aesthetic purposes. Thus, such coatings are used in a wide range of applications.
https://en.wikipedia.org/wiki/Physical_vapor_deposition
Physics-informed neural networks ( PINNs ), [ 1 ] also referred to as Theory-Trained Neural Networks ( TTNs ), [ 2 ] are a type of universal function approximator that can embed knowledge of the physical laws governing a given data set, laws that can be described by partial differential equations (PDEs), into the learning process. Low data availability for some biological and engineering problems limits the robustness of conventional machine learning models used for these applications. [ 1 ] The prior knowledge of general physical laws acts in the training of neural networks (NNs) as a regularization agent that limits the space of admissible solutions, increasing the generalizability of the function approximation. In this way, embedding prior information into a neural network enhances the information content of the available data, helping the learning algorithm capture the right solution and generalize well even with few training examples. Most of the physical laws that govern the dynamics of a system can be described by partial differential equations. For example, the Navier–Stokes equations [ 3 ] are a set of partial differential equations derived from the conservation laws (i.e., conservation of mass , momentum , and energy ) that govern fluid mechanics . The solution of the Navier–Stokes equations with appropriate initial and boundary conditions allows the quantification of flow dynamics in a precisely defined geometry. However, these equations cannot be solved exactly and therefore numerical methods must be used (such as finite differences , finite elements and finite volumes ). In this setting, these governing equations must be solved while accounting for prior assumptions, linearization, and adequate time and space discretization. Recently, solving the governing partial differential equations of physical phenomena using deep learning has emerged as a new field of scientific machine learning (SciML), leveraging the universal approximation theorem [ 4 ] and the high expressivity of neural networks. In general, deep neural networks can approximate any high-dimensional function given that sufficient training data are supplied. [ 5 ] However, such networks do not consider the physical characteristics underlying the problem, and the level of approximation accuracy they provide is still heavily dependent on careful specification of the problem geometry as well as the initial and boundary conditions. Without this preliminary information, the solution is not unique and may lose physical correctness. On the other hand, physics-informed neural networks (PINNs) leverage governing physical equations in neural network training. Namely, PINNs are designed to be trained to satisfy the given training data as well as the imposed governing equations. In this fashion, a neural network can be guided with training data that do not necessarily need to be large and complete. [ 5 ] Potentially, an accurate solution of partial differential equations can be found without knowing the boundary conditions. [ 6 ] Therefore, with some knowledge about the physical characteristics of the problem and some form of training data (even sparse and incomplete), PINNs may be used to find an optimal solution with high fidelity. PINNs allow for addressing a wide range of problems in computational science and represent a pioneering technology leading to the development of new classes of numerical solvers for PDEs.
PINNs can be thought of as a meshfree alternative to traditional approaches (e.g., CFD for fluid dynamics), and as new data-driven approaches for model inversion and system identification. [ 7 ] Notably, a trained PINN can be used to predict values on simulation grids of different resolutions without needing to be retrained. [ 8 ] In addition, PINNs exploit automatic differentiation (AD) [ 9 ] to compute the derivatives required by the partial differential equations, a class of differentiation techniques widely used to differentiate neural networks and assessed to be superior to numerical or symbolic differentiation. A general nonlinear partial differential equation can be written as $u_t + N[u;\lambda] = 0,\ x \in \Omega,\ t \in [0,T]$, where $u(t,x)$ denotes the solution, $N[\cdot;\lambda]$ is a nonlinear operator parameterized by $\lambda$, and $\Omega$ is a subset of $\mathbb{R}^D$. This general form of governing equation covers a wide range of problems in mathematical physics, such as conservation laws, diffusion processes, advection–diffusion systems, and kinetic equations. Given noisy measurements of a generic dynamic system described by the equation above, PINNs can be designed to solve two classes of problems: the data-driven solution of a PDE and the data-driven discovery of a PDE. The data-driven solution of a PDE [ 1 ] computes the hidden state $u(t,x)$ of the system given boundary data and/or measurements $z$ and fixed model parameters $\lambda$; one solves $u_t + N[u] = 0,\ x \in \Omega,\ t \in [0,T]$. The residual $f(t,x)$ is defined as $f := u_t + N[u]$, and $u(t,x)$ is approximated by a deep neural network, which can be differentiated using automatic differentiation. The parameters of $u(t,x)$ and $f(t,x)$ can then be learned by minimizing the loss function $L_{tot} = L_u + L_f$, where $L_u = \Vert u - z \Vert_\Gamma$ is the error between the PINN $u(t,x)$ and the boundary conditions and measured data on the set of points $\Gamma$ where they are defined, and $L_f = \Vert f \Vert_\Gamma$ is the mean-squared error of the residual function. This second term encourages the PINN to learn the structural information expressed by the partial differential equation during the training process. This approach has been used to build computationally efficient physics-informed surrogate models with applications in the forecasting of physical processes, model predictive control, multi-physics and multi-scale modeling, and simulation. [ 10 ] It has been shown to converge to the solution of the PDE. [ 11 ]
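A minimal code sketch of this loss may clarify the construction. The example below assumes an illustrative heat-type operator N[u] = -ν u_xx, a small fully connected network mapping (t, x) to u, and hypothetical names for the data and collocation tensors; it follows the general recipe above rather than any specific reference implementation.

```python
import torch
import torch.nn as nn

# Sketch of the PINN loss L_tot = L_u + L_f for u_t + N[u] = 0,
# with the illustrative choice N[u] = -nu * u_xx (heat-type equation).
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))           # maps (t, x) -> u

def pde_residual(t, x, nu=0.01):
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    # Automatic differentiation supplies the derivatives appearing in the PDE.
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - nu * u_xx                      # f(t, x) := u_t + N[u]

def pinn_loss(t_data, x_data, u_data, t_coll, x_coll):
    # L_u: misfit against boundary/measurement data on the point set Gamma.
    u_pred = net(torch.cat([t_data, x_data], dim=1))
    loss_u = torch.mean((u_pred - u_data) ** 2)
    # L_f: mean-squared PDE residual at collocation points.
    loss_f = torch.mean(pde_residual(t_coll, x_coll) ** 2)
    return loss_u + loss_f                      # L_tot = L_u + L_f
```

For the data-driven discovery problem described next, the same construction applies with the operator parameter promoted to a trainable quantity, e.g. `lam = torch.nn.Parameter(torch.tensor(0.1))`, and optimized jointly with the network weights.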
Given noisy and incomplete measurements $z$ of the state of the system, the data-driven discovery of a PDE [ 7 ] consists of computing the unknown state $u(t,x)$ and learning the model parameters $\lambda$ that best describe the observed data; it reads $u_t + N[u;\lambda] = 0,\ x \in \Omega,\ t \in [0,T]$. By defining $f(t,x)$ as $f := u_t + N[u;\lambda]$ and approximating $u(t,x)$ by a deep neural network, $f(t,x)$ becomes a PINN, which can be differentiated using automatic differentiation. The parameters of $u(t,x)$ and $f(t,x)$, together with the parameter $\lambda$ of the differential operator, can then be learned by minimizing the loss function $L_{tot} = L_u + L_f$, where $L_u = \Vert u - z \Vert_\Gamma$, with $u$ and $z$ the state solution and the measurements at the sparse locations $\Gamma$, respectively, and $L_f = \Vert f \Vert_\Gamma$ is the residual term. This second term requires the structured information represented by the partial differential equations to be satisfied in the training process. This strategy allows dynamic models described by nonlinear PDEs to be discovered, assembling computationally efficient and fully differentiable surrogate models that may find application in predictive forecasting, control, and data assimilation . [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] A single PINN is often unable to approximate PDEs with strong non-linearity or sharp gradients, which commonly occur in practical fluid flow problems. Piece-wise approximation is an old practice in numerical approximation; by exploiting it, extremely lightweight PINNs can be used to solve PDEs on a much larger number of discrete subdomains, which increases accuracy substantially and decreases the computational load as well. [ 17 ] [ 18 ] DPINN (distributed physics-informed neural networks) and DPIELM (distributed physics-informed extreme learning machines) are generalizable space-time domain discretizations for better approximation. [ 17 ] DPIELM is an extremely fast and lightweight approximator with competitive accuracy. Scaling of the domain, applied on top of this, has an additional beneficial effect. [ 18 ] Another school of thought is discretization for parallel computation, to leverage the available computational resources. XPINNs [ 19 ] is a generalized space-time domain decomposition approach for physics-informed neural networks (PINNs) to solve nonlinear partial differential equations on arbitrary complex-geometry domains. XPINN further pushes the boundaries of both PINNs and conservative PINNs (cPINNs), [ 20 ] a spatial domain decomposition approach in the PINN framework tailored to conservation laws. Compared to PINN, the XPINN method has larger representation and parallelization capacity due to the inherent deployment of multiple neural networks in smaller subdomains. Unlike cPINN, XPINN can be extended to any type of PDE. Moreover, the domain can be decomposed in any arbitrary way (in space and time), which is not possible in cPINN.
Thus, XPINN offers both space and time parallelization, thereby reducing the training cost more effectively. XPINN is particularly effective for large-scale problems (involving large data sets) as well as for high-dimensional problems where a single-network PINN is not adequate. Rigorous bounds on the errors resulting from the approximation of nonlinear PDEs (the incompressible Navier–Stokes equations) with PINNs and XPINNs have been proved. [ 15 ] However, work on DPINN questions the use of residual (flux) matching at the domain interfaces, as it hardly seems to improve the optimization. [ 18 ] In the PINN framework, initial and boundary conditions are not analytically satisfied, so they need to be included in the loss function of the network to be learned simultaneously with the unknown functions of the differential equation (DE). Having competing objectives during the network's training can lead to unbalanced gradients when using gradient-based techniques, which causes PINNs to often struggle to accurately learn the underlying DE solution. This drawback is overcome by using functional interpolation techniques such as the constrained expression of the Theory of Functional Connections (TFC) in the Deep-TFC [ 21 ] framework, which reduces the solution search space of constrained problems to the subspace of neural networks that analytically satisfies the constraints. [ 22 ] A further improvement of the PINN and functional interpolation approach is given by the Extreme Theory of Functional Connections (X-TFC) framework, where a single-layer neural network and the extreme learning machine training algorithm are employed. [ 23 ] X-TFC improves the accuracy and performance of regular PINNs, and its robustness and reliability have been demonstrated for stiff problems, optimal control, aerospace, and rarefied gas dynamics applications. [ 24 ] [ 25 ] [ 26 ] Regular PINNs are only able to obtain the solution of a forward or inverse problem on a single geometry. This means that for any new geometry (computational domain), one must retrain a PINN. This limitation of regular PINNs imposes high computational costs, specifically for a comprehensive investigation of geometric parameters in industrial designs. Physics-informed PointNet (PIPN) [ 27 ] is fundamentally the result of combining PINN's loss function with PointNet. [ 28 ] Instead of using a simple fully connected neural network, PIPN uses PointNet as the core of its neural network. PointNet was primarily designed by the research group of Leonidas J. Guibas for deep learning of 3D object classification and segmentation; in PIPN it extracts the geometric features of the input computational domains. Thus, PIPN is able to solve governing equations on multiple computational domains (rather than only a single domain) with irregular geometries, simultaneously. The effectiveness of PIPN has been shown for incompressible flow , heat transfer and linear elasticity . [ 27 ] [ 29 ] Physics-informed neural networks (PINNs) have proven particularly effective in solving inverse problems within differential equations, [ 30 ] demonstrating their applicability across science, engineering, and economics. They have proven useful for solving inverse problems in a variety of fields, including nano-optics, [ 31 ] topology optimization/characterization, [ 32 ] multiphase flow in porous media, [ 33 ] [ 34 ] and high-speed fluid flow. [ 35 ] [ 13 ] PINNs have demonstrated flexibility when dealing with noisy and uncertain observation datasets.
They have also demonstrated clear advantages in the inverse calculation of parameters for multi-fidelity datasets, meaning datasets with different quality, quantity, and types of observations. Uncertainties in the calculations can be evaluated using ensemble-based or Bayesian-based calculations. [ 36 ] [ 37 ] PINNs can also be used in connection with symbolic regression to discover a mathematical expression together with the discovery of parameters and functions. One example of such an application is the study of chemical ageing of cellulose insulation material; [ 38 ] in this example, PINNs are used to first discover a parameter for a set of ordinary differential equations (ODEs) and later a function solution, which is then used to find a better-fitting expression using symbolic regression with a combination of operators. An ensemble of physics-informed neural networks has been applied to solving plane elasticity problems. Surrogate networks are used for the unknown functions, namely the components of the strain and stress tensors and the unknown displacement field. The residual network provides the residuals of the partial differential equations (PDEs) and of the boundary conditions. The computational approach is based on principles of artificial intelligence. [ 39 ] This approach can be extended to nonlinear elasticity problems, where the constitutive equations are nonlinear. PINNs can also be used for Kirchhoff plate bending problems with transverse distributed loads and for contact models with elastic Winkler foundations. [ 40 ] The deep backward stochastic differential equation (deep BSDE) method is a numerical method that combines deep learning with backward stochastic differential equations (BSDEs) to solve high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks , deep BSDE addresses the computational challenges faced by traditional numerical methods like finite difference methods or Monte Carlo simulations, which struggle with the curse of dimensionality. Deep BSDE methods use neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden. Additionally, integrating physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws into the neural network architecture, ensuring that solutions adhere to the governing stochastic differential equations, resulting in more accurate and reliable solutions. [ 41 ] One extension or adaptation of PINNs is the class of biologically-informed neural networks (BINNs). BINNs introduce two key adaptations to the typical PINN framework: (i) the mechanistic terms of the governing PDE are replaced by neural networks, and (ii) the loss function $L_{tot}$ is modified to include $L_{constr}$, a term used to incorporate domain-specific knowledge that helps enforce biological applicability. For (i), this adaptation has the advantage of relaxing the need to specify the governing differential equation a priori, either explicitly or by using a library of candidate terms. Additionally, this approach circumvents the potential issue of misspecifying regularization terms in stricter theory-informed cases.
[ 42 ] [ 43 ] A natural example of BINNs can be found in cell dynamics, where the cell density $u(x,t)$ is governed by a reaction–diffusion equation with diffusion and growth functions $D(u)$ and $G(u)$, respectively: $u_t = \nabla \cdot (D(u) \nabla u) + G(u)u,\ x \in \Omega,\ t \in [0,T]$. In this case, a component of $L_{constr}$ could be $\Vert D \Vert_\Gamma$ for $D < D_{min}$ or $D > D_{max}$, which penalizes values of $D$ that fall outside a biologically relevant diffusion range defined by $D_{min} \leq D \leq D_{max}$. Furthermore, the BINN architecture, when utilizing multilayer perceptrons (MLPs), functions as follows: an MLP is used to construct $u_{MLP}(x,t)$ from model inputs $(x,t)$, serving as a surrogate model for the cell density $u(x,t)$. This surrogate is then fed into two additional MLPs, $D_{MLP}(u_{MLP})$ and $G_{MLP}(u_{MLP})$, which model the diffusion and growth functions. Automatic differentiation can then be applied to compute the necessary derivatives of $u_{MLP}$, $D_{MLP}$ and $G_{MLP}$ to form the governing reaction–diffusion equation. [ 42 ] Note that since $u_{MLP}$ is a surrogate for the cell density, it may contain errors, particularly in regions where the PDE is not fully satisfied. Therefore, the reaction–diffusion equation may be solved numerically, for instance using a method-of-lines approach. A code sketch of this BINN construction is given below, after the discussion of PINN limitations. Translation and discontinuous behavior are hard to approximate using PINNs. [ 18 ] They fail when solving differential equations with even slight advective dominance, and the resulting asymptotic behaviour causes the method to fail. Such PDEs can be solved by scaling variables. [ 18 ] This difficulty in training PINNs on advection-dominated PDEs can be explained by the Kolmogorov n–width of the solution. [ 44 ] They also fail to solve systems of dynamical equations and hence have not been successful in solving chaotic equations. [ 45 ] One of the reasons behind the failure of regular PINNs is the soft constraining of Dirichlet and Neumann boundary conditions, which poses a multi-objective optimization problem that requires manually weighting the loss terms. [ 18 ] More generally, posing the solution of a PDE as an optimization problem brings with it all the problems faced in the world of optimization, the major one being getting stuck in local optima. [ 18 ] [ 46 ]
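As noted above, a rough sketch of the BINN construction for the cell-dynamics example follows. All module names, layer sizes, and constraint bounds are illustrative assumptions and are not taken from the cited works; the residual is written for one spatial dimension.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, width=32):
    return nn.Sequential(nn.Linear(in_dim, width), nn.Tanh(),
                         nn.Linear(width, width), nn.Tanh(),
                         nn.Linear(width, out_dim))

u_net = mlp(2, 1)   # surrogate u_MLP(x, t) for the cell density
D_net = mlp(1, 1)   # diffusion function D_MLP(u)
G_net = mlp(1, 1)   # growth function    G_MLP(u)

def binn_residual(x, t):
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = u_net(torch.cat([x, t], dim=1))
    D, G = D_net(u), G_net(u)
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    flux_x = torch.autograd.grad(D * u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - flux_x - G * u       # u_t - d/dx(D(u) u_x) - G(u) u

def constraint_penalty(u, D_min=1e-4, D_max=1.0):
    # Illustrative L_constr term: penalize D(u) outside a plausible diffusion range.
    D = D_net(u)
    return torch.mean(torch.relu(D_min - D) + torch.relu(D - D_max))
```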
https://en.wikipedia.org/wiki/Physics-informed_neural_networks
The Physics ( Greek : Φυσικὴ ἀκρόασις Phusike akroasis ; Latin : Physica , or Naturales Auscultationes , possibly meaning " Lectures on nature ") is a named text, written in ancient Greek, collated from a collection of surviving manuscripts known as the Corpus Aristotelicum , attributed to the 4th-century BC philosopher Aristotle . It is a collection of treatises or lessons that deals with the most general (philosophical) principles of natural or moving things, both living and non-living, rather than physical theories (in the modern sense) or investigations of the particular contents of the universe. The chief purpose of the work is to discover the principles and causes of (and not merely to describe) change, or movement, or motion (κίνησις kinesis ), especially that of natural wholes (mostly living things, but also inanimate wholes like the cosmos ). In the conventional Andronicean ordering of Aristotle's works, it stands at the head of, as well as being foundational to, the long series of physical, cosmological and biological treatises, whose ancient Greek title, τὰ φυσικά, means "the [writings] on nature" or " natural philosophy ". The Physics is composed of eight books, which are further divided into chapters. This system is of ancient origin, now obscure. In modern languages, books are referenced with Roman numerals, standing for ancient Greek capital letters (the Greeks represented numbers with letters, e.g. A for 1). Chapters are identified by Arabic numerals, but the use of the English word "chapter" is strictly conventional. Ancient "chapters" (capita) are generally very short, often less than a page. Additionally, the Bekker numbers give the page and column (a or b) used in the Prussian Academy of Sciences' edition of Aristotle's works, instigated and managed by Bekker himself. These are evident in the 1831 2-volume edition. Bekker's line numbers may be given. These are often given, but unless the edition is the Academy's, they do not match any line counts. Book I introduces Aristotle's approach to nature, which is to be based on principles, causes, and elements. Before offering his particular views, he engages previous theories, such as those offered by Melissus and Parmenides. Aristotle's own view comes out in Ch. 7 where he identifies three principles: substances, opposites, and privation. Chapters 3 and 4 are among the most difficult in all of Aristotle's works and involve subtle refutations of the thought of Parmenides, Melissus and Anaxagoras. In chapter 5, he continues his review of his predecessors, particularly how many first principles there are. Chapter 6 narrows down the number of principles to two or three. He presents his own account of the subject in chapter 7, where he first introduces the word matter (Greek: hyle ) to designate fundamental essence ( ousia ). He defines matter in chapter 9: "For my definition of matter is just this—the primary substratum of each thing, from which it comes to be without qualification, and which persists in the result." Matter in Aristotle's thought is, however, defined in terms of sensible reality; for example, a horse eats grass: the horse changes the grass into itself; the grass as such does not persist in the horse, but some aspect of it – its matter – does. Matter is not specifically described, but consists of whatever is apart from quality or quantity and that of which something may be predicated. Matter in this understanding does not exist independently (i.e. as a substance ), but exists interdependently (i.e.
as a "principle") with form and only insofar as it underlies change. [ 1 ] Matter and form are analogical terms. Book II identifies "nature" ( physis ) as "a source or cause of being moved and of being at rest in that to which it belongs primarily" (1.192b21). Thus, those entities are natural which are capable of starting to move, e.g. growing, acquiring qualities, displacing themselves, and finally being born and dying. Aristotle contrasts natural things with the artificial: artificial things can move also, but they move according to what they are made of, not according to what they are. For example, if a wooden bed were buried and somehow sprouted as a tree, it would be according to what it is made of, not what it is. Aristotle contrasts two senses of nature: nature as matter and nature as form or definition. By "nature", Aristotle means the natures of particular things and would perhaps be better translated "a nature." In Book II, however, his appeal to "nature" as a source of activities is more typically to the genera of natural kinds (the secondary substance ). But, contra Plato , Aristotle attempts to resolve a philosophical quandary that was well understood in the fourth century. [ 2 ] The Eudoxian planetary model sufficed for the wandering stars , but no deduction of terrestrial substance would be forthcoming based solely on the mechanical principles of necessity, (ascribed by Aristotle to material causation in chapter 9). In the Enlightenment , centuries before modern science made good on atomist intuitions , a nominal allegiance to mechanistic materialism gained popularity despite harboring Newton's action at distance , and comprising the native habitat of teleological arguments : Machines or artifacts composed of parts lacking any intrinsic relationship to each other with their order imposed from without. Thus, the source of an apparent thing's activities is not the whole itself, but its parts. While Aristotle asserts that the matter (and parts) are a necessary cause of things – the material cause – he says that nature is primarily the essence or formal cause (1.193b6), that is, the information, the whole species itself. The necessary in nature, then, is plainly what we call by the name of matter, and the changes in it. Both causes must be stated by the physicist, but especially the end; for that is the cause of the matter, not vice versa; and the end is 'that for the sake of which', and the beginning starts from the definition or essence… [ 3 ] In chapter 3, Aristotle presents his theory of the four causes (material, efficient, formal, and final [ 4 ] ). Material cause explains what something is made of (for example, the wood of a house), formal cause explains the form which a thing follows to become that thing (the plans of an architect to build a house), efficient cause is the actual source of the change (the physical building of the house), and final cause is the intended purpose of the change (the final product of the house and its purpose as a shelter and home). Of particular importance is the final cause or purpose ( telos ). It is a common mistake to conceive of the four causes as additive or alternative forces pushing or pulling; in reality, all four are needed to explain (7.198a22-25). What we typically mean by cause in the modern scientific idiom is only a narrow part of what Aristotle means by efficient cause. [ 5 ] He contrasts purpose with the way in which "nature" does not work, chance (or luck), discussed in chapters 4, 5, and 6. 
(Chance working in the actions of humans is tuche and in unreasoning agents automaton .) Something happens by chance when all the lines of causality converge without that convergence being purposefully chosen, and produce a result similar to the teleologically caused one. In chapters 7 through 9, Aristotle returns to the discussion of nature. With the enrichment of the preceding four chapters, he concludes that nature acts for an end, and he discusses the way that necessity is present in natural things. For Aristotle, the motion of natural things is determined from within them, while in the modern empirical sciences, motion is determined from without (more properly speaking: there is nothing to have an inside). In order to understand "nature" as defined in the previous book, one must understand the terms of the definition. To understand motion, book III begins with the definition of change based on Aristotle's notions of potentiality and actuality . [ 6 ] Change, he says, is the actualization of a thing's ability insofar as it is able. [ 7 ] The rest of the book (chapters 4-8) discusses the infinite ( apeiron , the unlimited). He distinguishes between the infinite by addition and the infinite by division, and between the actually infinite and potentially infinite. He argues against the actually infinite in any form, including infinite bodies, substances, and voids. Aristotle here says the only type of infinity that exists is the potentially infinite. Aristotle characterizes this as that which serves as "the matter for the completion of a magnitude and is potentially (but not actually) the completed whole" (207a22-23). The infinite, lacking any form, is thereby unknowable. Aristotle writes, "it is not what has nothing outside it that is infinite, but what always has something outside it" (6.206b33-207a1-2). Book IV discusses the preconditions of motion: place ( topos , chapters 1-5), void ( kenon , chapters 6-9), and time ( khronos , chapters 10-14). The book starts by distinguishing the various ways a thing can "be in" another. He likens place to an immobile container or vessel: "the innermost motionless boundary of what contains" is the primary place of a body (4.212a20). Unlike space, which is a volume co-existent with a body, place is a boundary or surface. He teaches that, contrary to the Atomists and others, a void is not only unnecessary, but leads to contradictions, e.g., making locomotion impossible. Time is a constant attribute of movements and, Aristotle thinks, does not exist on its own but is relative to the motions of things. Tony Roark describes Aristotle's view of time as follows: Aristotle defines time as "a number of motion with respect to the before and after" ( Phys. 219b1–2), by which he intends to denote motion's susceptibility to division into undetached parts of arbitrary length, a property that it possesses both by virtue of its intrinsic nature and also by virtue of the capacities and activities of percipient souls. Motion is intrinsically indeterminate, but perceptually determinable, with respect to its length. Acts of perception function as determiners; the result is determinate units of kinetic length, which is precisely what a temporal unit is. [ 8 ] Books V and VI deal with how motion occurs. Book V classifies four species of movement, depending on where the opposites are located. Movement categories include quantity (e.g. 
a change in dimensions, from great to small), quality (as for colors: from pale to dark), place (local movements generally go from up downwards and vice versa), or, more controversially, substance. In fact, substances do not have opposites, so it is inappropriate to say that something properly becomes, from not-man, man: generation and corruption are not kinesis in the full sense. Book VI discusses how a changing thing can reach the opposite state, if it has to pass through infinite intermediate stages. It investigates by rational and logical arguments the notions of continuity and division , establishing that change—and, consequently, time and place—are not divisible into indivisible parts; they are not mathematically discrete but continuous, that is, infinitely divisible (in other words, that you cannot build up a continuum out of discrete or indivisible points or moments). Among other things, this implies that there can be no definite (indivisible) moment when a motion begins. This discussion, together with that of speed and the different behavior of the four different species of motion, eventually helps Aristotle answer the famous paradoxes of Zeno , which purport to show the absurdity of motion's existence. Book VII briefly deals with the relationship of the moved to his mover, which Aristotle describes in substantial divergence with Plato ' s theory of the soul as capable of setting itself in motion ( Laws book X, Phaedrus , Phaedo ). Everything which moves is moved by another. He then tries to correlate the species of motion and their speeds, with the local change (locomotion, phorà ) as the most fundamental to which the others can be reduced. Book VII.1-3 also exist in an alternative version, not included in the Bekker edition . Book VIII (which occupies almost a fourth of the entire Physics , and probably constituted originally an independent course of lessons) discusses two main topics, though with a wide deployment of arguments: the time limits of the universe , and the existence of a Prime Mover — eternal, indivisible, without parts and without magnitude. Isn't the universe eternal, has it had a beginning, will it ever end? Aristotle's response, as a Greek, could hardly be affirmative, never having been told of a creatio ex nihilo, but he also has philosophical reasons for denying that motion had not always existed, on the grounds of the theory presented in the earlier books of the Physics . Eternity of motion is also confirmed by the existence of a substance which is different from all the others in lacking matter; being pure form, it is also in an eternal actuality, not being imperfect in any respect; hence needing not to move. This is demonstrated by describing the celestial bodies thus: the first things to be moved must undergo an infinite, single and continuous movement, that is, circular. This is not caused by any contact but (integrating the view contained in the Metaphysics , bk. XII ) by love and aspiration. The works of Aristotle are typically influential to the development of Western science and philosophy . [ 9 ] The citations below are not given as any sort of final modern judgement on the interpretation and significance of Aristotle, but are only the notable views of some moderns. Martin Heidegger writes: The Physics is a lecture in which he seeks to determine beings that arise on their own, τὰ φύσει ὄντα, with regard to their being. 
Aristotelian "physics" is different from what we mean today by this word, not only to the extent that it belongs to antiquity whereas the modern physical sciences belong to modernity, rather above all it is different by virtue of the fact that Aristotle's "physics" is philosophy, whereas modern physics is a positive science that presupposes a philosophy.... This book determines the warp and woof of the whole of Western thinking, even at that place where it, as modern thinking, appears to think at odds with ancient thinking. But opposition is invariably comprised of a decisive, and often even perilous, dependence. Without Aristotle's Physics there would have been no Galileo . [ 10 ] Bertrand Russell says of Physics and On the Heavens (which he believed was a continuation of Physics ) that they were: ...extremely influential, and dominated science until the time of Galileo ... The historian of philosophy, accordingly, must study them, in spite of the fact that hardly a sentence in either can be accepted in the light of modern science. [ 11 ] Italian theoretical physicist Carlo Rovelli considers Aristotle's physics as a correct and non-intuitive special case of Newtonian physics for the motion of matter in fluid after it has reached terminal velocity (steady state). His theory disregards the initial phase of acceleration, which is too short to be observed by the naked eye. Galileo 's inclined plane experiment bypasses the issue, as it slows down acceleration enough to allow observing the initial phase of acceleration by the naked eye. The five elements explain forms of observed motions. Ether explains circular motion in the sky, earth and water explains downward motion, and fire and air explains upward motion. To explain downward motion, instead of postulating one element, he proposed two, because wood moves up in water but down in air, while earth moves down in both water and air. The complex interaction between the 4 elements could explain most of the rising and falling motions of objects with different densities. The velocity of falling objects is equal to C ( W ρ ) n {\displaystyle C\left({\frac {W}{\rho }}\right)^{n}} , where W {\displaystyle W} is the weight of the object, ρ {\displaystyle \rho } is the density of the surrounding fluid (such as air, fire, or water), n > 0 {\displaystyle n>0} is a constant, and C {\displaystyle C} is a constant depending on the shape of the object. This is correct for the terminal velocity of falling objects in fluid in a constant gravitational field, in the case where most of the fluid resistance is drag force , ∝ ρ v 2 {\displaystyle \propto \rho v^{2}} . In this case, the terminal velocity is C ( W ρ ) 1 / 2 {\displaystyle C\left({\frac {W}{\rho }}\right)^{1/2}} [ 12 ] A recension is a selection of a specific text for publication. The manuscripts on a given work attributed to Aristotle offer textual variants. One recension makes a selection of one continuous text, but typically gives notes stating the alternative sections of text. Determining which text is to be presented as "original" is a detailed scholarly investigation. The recension is often known by its scholarly editor's name. In reverse chronological order: A commentary differs from a note in being a distinct work analyzing the language and subsumed concepts of some other work classically notable. A note appears within the annotated work on the same page or in a separate list. 
Commentaries are typically arranged by lemmas, or quotes from the notable work, followed by an analysis of the author of the commentary. The commentaries on every work of Aristotle are a vast and mainly unpublished topic. They extend continuously from the death of the philosopher, representing the entire history of Graeco-Roman philosophy. There are thousands of commentators and commentaries known wholly or more typically in fragments of manuscripts. The latter especially occupy the vaults of institutions formerly responsible for copying them, such as monasteries. The process of publishing them is slow and ongoing. Below is a brief representative bibliography of published commentaries on Aristotle's Physics available on or through the Internet. Like the topic itself, they are perforce multi-cultural, but English has been favored, as well as the original languages, ancient Greek and Latin.
https://en.wikipedia.org/wiki/Physics_(Aristotle)
Physics and Chemistry of Glasses: European Journal of Glass Science and Technology Part B is a bimonthly peer-reviewed scientific journal published by the Society of Glass Technology . It was established in 2006, from the merger of the Society of Glass Technology's journal Physics and Chemistry of Glasses [ 1 ] and the Deutsche Glastechnische Gesellschaft's journal Glass Science and Technology . [ 2 ] The journal is abstracted and indexed in the Science Citation Index , Current Contents /Physical, Chemical & Earth Sciences, Current Contents/Engineering, Computing & Technology, [ 3 ] and Scopus . [ 4 ] According to the Journal Citation Reports , the journal has a 2015 impact factor of 0.630. [ 5 ]
https://en.wikipedia.org/wiki/Physics_and_Chemistry_of_Glasses
Physics beyond the Standard Model ( BSM ) refers to the theoretical developments needed to explain the deficiencies of the Standard Model , such as the inability to explain the fundamental parameters of the standard model, the strong CP problem , neutrino oscillations , matter–antimatter asymmetry , and the nature of dark matter and dark energy . [ 1 ] Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with general relativity , and one or both theories break down under certain conditions, such as spacetime singularities like the Big Bang and black hole event horizons . Theories that lie beyond the Standard Model include various extensions of the standard model through supersymmetry , such as the Minimal Supersymmetric Standard Model (MSSM) and Next-to-Minimal Supersymmetric Standard Model (NMSSM), and entirely novel explanations, such as string theory , M-theory , and extra dimensions . As these theories tend to reproduce the entirety of current phenomena, the question of which theory is the right one, or at least the "best step" towards a Theory of Everything , can only be settled via experiments, and is one of the most active areas of research in both theoretical and experimental physics . [ 2 ] Despite being the most successful theory of particle physics to date, the Standard Model is not perfect. [ 3 ] A large share of the published output of theoretical physicists consists of proposals for various forms of "Beyond the Standard Model" new physics that would modify the Standard Model in ways subtle enough to be consistent with existing data, yet address its imperfections materially enough to predict non-Standard Model outcomes of new experiments that can be proposed. The Standard Model is inherently an incomplete theory, and there are fundamental physical phenomena in nature that it does not adequately explain. No experimental result is accepted as definitively contradicting the Standard Model at the 5 σ level, [ 8 ] widely considered to be the threshold of a discovery in particle physics. Because every experiment contains some degree of statistical and systematic uncertainty, and the theoretical predictions themselves are also almost never calculated exactly and are subject to uncertainties in measurements of the fundamental constants of the Standard Model (some of which are tiny and others of which are substantial), it is to be expected that some of the hundreds of experimental tests of the Standard Model will deviate from it to some extent, even if there were no new physics to be discovered. At any given moment there are several experimental results standing that significantly differ from a Standard Model-based prediction. In the past, many of these discrepancies have been found to be statistical flukes or experimental errors that vanish as more data has been collected, or when the same experiments were conducted more carefully. On the other hand, any physics beyond the Standard Model would necessarily first appear in experiments as a statistically significant difference between an experiment and the theoretical prediction. The task is to determine which is the case. In each case, physicists seek to determine if a result is merely a statistical fluke or experimental error on the one hand, or a sign of new physics on the other.
More statistically significant results cannot be mere statistical flukes but can still result from experimental error or inaccurate estimates of experimental precision. Frequently, experiments are tailored to be more sensitive to experimental results that would distinguish the Standard Model from theoretical alternatives. Some of the most notable examples include the following: Observation at particle colliders of all of the fundamental particles predicted by the Standard Model has been confirmed. The Higgs boson is predicted by the Standard Model's explanation of the Higgs mechanism , which describes how the weak SU(2) gauge symmetry is broken and how fundamental particles obtain mass; it was the last particle predicted by the Standard Model to be observed. On July 4, 2012, CERN scientists using the Large Hadron Collider announced the discovery of a particle consistent with the Higgs boson, with a mass of about 126 GeV/c². A Higgs boson was confirmed to exist on March 14, 2013, although efforts to confirm that it has all of the properties predicted by the Standard Model are ongoing. [ 18 ] A few hadrons (i.e. composite particles made of quarks ) whose existence is predicted by the Standard Model, which can be produced only at very high energies and only very rarely, have not yet been definitively observed, and " glueballs " [ 19 ] (i.e. composite particles made of gluons ) have also not yet been definitively observed. Some very rare particle decays predicted by the Standard Model have also not yet been definitively observed because insufficient data are available to make a statistically significant observation. It is unclear if empirical relationships such as the Koide formula for the charged lepton masses represent any underlying physics; according to Koide, the rule he discovered "may be an accidental coincidence". [ 25 ] Some features of the standard model are added in an ad hoc way. These are not problems per se (i.e. the theory works fine with the ad hoc insertions), but they imply a lack of understanding. These contrived features have motivated theorists to look for more fundamental theories with fewer parameters. Research from experimental data on the cosmological constant , LIGO noise , and pulsar timing suggests it is very unlikely that there are any new particles with masses much higher than those which can be found in the standard model or at the Large Hadron Collider . [ 29 ] [ 30 ] [ 31 ] However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics at TeV energies. [ 29 ] The standard model has three gauge symmetries : the colour SU(3) , the weak isospin SU(2) , and the weak hypercharge U(1) symmetry, corresponding to the three fundamental forces. Due to renormalization, the coupling constants of each of these symmetries vary with the energy at which they are measured. Around 10¹⁶ GeV these couplings become approximately equal. This has led to speculation that above this energy the three gauge symmetries of the standard model are unified in one single gauge symmetry with a simple gauge group, and just one coupling constant. Below this energy the symmetry is spontaneously broken to the standard model symmetries. [ 32 ] Popular choices for the unifying group are the special unitary group in five dimensions SU(5) and the special orthogonal group in ten dimensions SO(10) .
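A rough one-loop sketch illustrates the statement about the couplings approaching one another. The beta coefficients below are the standard one-loop Standard Model values in the GUT normalization for hypercharge; the starting values at the Z mass are rounded, and the whole example is illustrative rather than a precision calculation.

```python
import numpy as np

# One-loop running of the inverse gauge couplings 1/alpha_i(mu):
#   1/alpha_i(mu) = 1/alpha_i(M_Z) - b_i/(2*pi) * ln(mu/M_Z)
b = np.array([41 / 10, -19 / 6, -7.0])         # one-loop SM beta coefficients
inv_alpha_mz = np.array([59.0, 29.6, 8.45])    # rounded 1/alpha_1,2,3 at M_Z
M_Z = 91.19                                    # GeV

for mu in (1e3, 1e9, 1e13, 1e15, 1e17):
    inv_alpha = inv_alpha_mz - b / (2 * np.pi) * np.log(mu / M_Z)
    print(f"mu = {mu:.0e} GeV -> 1/alpha =", np.round(inv_alpha, 1))
# The three values approach one another (without meeting exactly) near 10^14-10^16 GeV.
```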
[ 33 ] Theories that unify the standard model symmetries in this way are called Grand Unified Theories (or GUTs), and the energy scale at which the unified symmetry is broken is called the GUT scale. Generically, grand unified theories predict the creation of magnetic monopoles in the early universe, [ 34 ] and instability of the proton . [ 35 ] Neither of these have been observed, and this absence of observation puts limits on the possible GUTs. Supersymmetry extends the Standard Model by adding another class of symmetries to the Lagrangian . These symmetries exchange fermionic particles with bosonic ones. Such a symmetry predicts the existence of supersymmetric particles , abbreviated as sparticles , which include the sleptons , squarks , neutralinos and charginos . Each particle in the Standard Model would have a superpartner whose spin differs by 1/2 from the ordinary particle. Due to the breaking of supersymmetry , the sparticles are much heavier than their ordinary counterparts; they are so heavy that existing particle colliders may not be powerful enough to produce them. In the standard model, neutrinos cannot spontaneously change flavor . Measurements however indicated that neutrinos do spontaneously change flavor, in what is called neutrino oscillations . Neutrino oscillations are usually explained using massive neutrinos. In the standard model, neutrinos have exactly zero mass, as the standard model only contains left-handed neutrinos. With no suitable right-handed partner, it is impossible to add a renormalizable mass term to the standard model. [ 36 ] These measurements only give the mass differences between the different flavours. The best constraint on the absolute mass of the neutrinos comes from precision measurements of tritium decay, providing an upper limit 2 eV, which makes them at least five orders of magnitude lighter than the other particles in the standard model. [ 37 ] This necessitates an extension of the standard model, which not only needs to explain how neutrinos get their mass, but also why the mass is so small. [ 38 ] One approach to add masses to the neutrinos, the so-called seesaw mechanism , is to add right-handed neutrinos and have these couple to left-handed neutrinos with a Dirac mass term. The right-handed neutrinos have to be sterile , meaning that they do not participate in any of the standard model interactions. Because they have no charges, the right-handed neutrinos can act as their own anti-particles, and have a Majorana mass term. Like the other Dirac masses in the standard model, the neutrino Dirac mass is expected to be generated through the Higgs mechanism, and is therefore unpredictable. The standard model fermion masses differ by many orders of magnitude; the Dirac neutrino mass has at least the same uncertainty. On the other hand, the Majorana mass for the right-handed neutrinos does not arise from the Higgs mechanism, and is therefore expected to be tied to some energy scale of new physics beyond the standard model, for example the Planck scale. [ 39 ] Therefore, any process involving right-handed neutrinos will be suppressed at low energies. The correction due to these suppressed processes effectively gives the left-handed neutrinos a mass that is inversely proportional to the right-handed Majorana mass, a mechanism known as the see-saw. [ 40 ] The presence of heavy right-handed neutrinos thereby explains both the small mass of the left-handed neutrinos and the absence of the right-handed neutrinos in observations. 
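A worked version of this inverse proportionality may help. Under the usual simplifying assumption of a single generation, with a Higgs-generated Dirac mass $m_D$ and a much larger Majorana mass $M_R$ for the right-handed neutrino, diagonalizing the neutrino mass matrix gives one light and one heavy eigenvalue:

$M_\nu = \begin{pmatrix} 0 & m_D \\ m_D & M_R \end{pmatrix}, \qquad m_{\text{light}} \simeq \frac{m_D^{2}}{M_R}, \qquad m_{\text{heavy}} \simeq M_R \qquad (M_R \gg m_D).$

For example, taking $m_D$ of the order of the electroweak scale ($\sim 10^{2}$ GeV) and $M_R$ near a GUT-like scale ($\sim 10^{14}$–$10^{15}$ GeV) gives $m_{\text{light}}$ in the sub-eV range, consistent with the smallness of the observed neutrino masses; the numbers here are purely illustrative.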
However, due to the uncertainty in the Dirac neutrino masses, the right-handed neutrino masses can lie anywhere. For example, they could be as light as keV and be dark matter , [ 41 ] they can have a mass in the LHC energy range [ 42 ] [ 43 ] and lead to observable lepton number violation, [ 44 ] or they can be near the GUT scale, linking the right-handed neutrinos to the possibility of a grand unified theory. [ 45 ] [ 46 ] The mass terms mix neutrinos of different generations. This mixing is parameterized by the PMNS matrix , which is the neutrino analogue of the CKM quark mixing matrix . Unlike the quark mixing, which is almost minimal, the mixing of the neutrinos appears to be almost maximal. This has led to various speculations of symmetries between the various generations that could explain the mixing patterns. [ 47 ] The mixing matrix could also contain several complex phases that break CP invariance, although there has been no experimental probe of these. These phases could potentially create a surplus of leptons over anti-leptons in the early universe, a process known as leptogenesis . This asymmetry could then at a later stage be converted in an excess of baryons over anti-baryons, and explain the matter-antimatter asymmetry in the universe. [ 33 ] The light neutrinos are disfavored as an explanation for the observation of dark matter, based on considerations of large-scale structure formation in the early universe. Simulations of structure formation show that they are too hot – that is, their kinetic energy is large compared to their mass – while formation of structures similar to the galaxies in our universe requires cold dark matter . The simulations show that neutrinos can at best explain a few percent of the missing mass in dark matter. However, the heavy, sterile, right-handed neutrinos are a possible candidate for a dark matter WIMP . [ 48 ] There are however other explanations for neutrino oscillations which do not necessarily require neutrinos to have masses, such as Lorentz-violating neutrino oscillations . Several preon models have been proposed to address the unsolved problem concerning the fact that there are three generations of quarks and leptons. Preon models generally postulate some additional new particles which are further postulated to be able to combine to form the quarks and leptons of the standard model. One of the earliest preon models was the Rishon model . [ 49 ] [ 50 ] [ 51 ] To date, no preon model is widely accepted or fully verified. Theoretical physics continues to strive toward a theory of everything, a theory that fully explains and links together all known physical phenomena, and predicts the outcome of any experiment that could be carried out in principle. In practical terms the immediate goal in this regard is to develop a theory which would unify the Standard Model with General Relativity in a theory of quantum gravity . Additional features, such as overcoming conceptual flaws in either theory or accurate prediction of particle masses, would be desired. The challenges in putting together such a theory are not just conceptual - they include the experimental aspects of the very high energies needed to probe exotic realms. Several notable attempts in this direction are supersymmetry , loop quantum gravity , and String theory . 
Theories of quantum gravity such as loop quantum gravity and others are thought by some to be promising candidates for the mathematical unification of quantum field theory and general relativity, requiring less drastic changes to existing theories. [ 52 ] However, recent work places stringent limits on the putative effects of quantum gravity on the speed of light, and disfavours some current models of quantum gravity. [ 53 ] Extensions, revisions, replacements, and reorganizations of the Standard Model exist in an attempt to correct for these and other issues. String theory is one such reinvention, and many theoretical physicists think that such theories are the next theoretical step toward a true Theory of Everything. [ 52 ] Among the numerous variants of string theory, M-theory, whose mathematical existence was first proposed at a string theory conference in 1995 by Edward Witten, is believed by many to be a proper "ToE" candidate, notably by physicists Brian Greene and Stephen Hawking. Though a full mathematical description is not yet known, solutions to the theory exist for specific cases. [ 54 ] Recent work has also proposed alternative string models, some of which lack the various harder-to-test features of M-theory (e.g. the existence of Calabi–Yau manifolds, many extra dimensions, etc.), including works by well-published physicists such as Lisa Randall. [ 55 ] [ 56 ]
https://en.wikipedia.org/wiki/Physics_beyond_the_Standard_Model
Physics of life is a branch of physics that studies the fundamental principles governing living systems. It applies methods from mechanics, thermodynamics, statistical physics, and information theory to biological phenomena ranging from molecular assemblies to ecosystems. [ 1 ] The field seeks to understand how complex behaviors of life arise from interactions among physical components under conditions far from equilibrium. Biological physics has gained wider recognition as a distinct and essential area within physics research. [ 2 ] [ 3 ] [ 4 ] The physics of life investigates how the familiar laws of physics apply to living matter, and how living systems sometimes require new physical principles for their understanding. [ 4 ] Rather than viewing biology as an exception, researchers treat biological phenomena as fertile ground for discovering general laws of non-equilibrium matter and information processing. Biological physics has grown substantially and now constitutes one of the largest divisions of the American Physical Society (APS). [ 2 ] It bridges traditional disciplines and introduces concepts such as stochasticity, phase transitions, and self-organization into the study of life. A 2022 decadal survey by the National Academies of Sciences, Engineering, and Medicine outlined five central questions guiding research in the physics of life. [ 1 ] Living systems face physical challenges in energy conversion, locomotion, environmental sensing, and mechanical stability. These problems involve fields such as fluid dynamics, elasticity, and non-equilibrium thermodynamics. Biological systems encode and transmit information through mechanisms like gene regulatory networks, synaptic signaling, and chemical communication. Frameworks such as information theory and Bayesian inference are used to model biological information processing. Macroscopic properties of tissues, organisms, and ecosystems arise from local physical interactions between molecules, cells, and structures. Research areas include phase separation in cells, collective behavior in animal groups, and morphogenesis. Scaling relationships often describe the emergent properties of biological systems. For example, diffusion-limited transport obeys the characteristic scaling law R ∝ √t, where R is the characteristic distance traveled over time t, consistent with Brownian motion. Similarly, phase separation kinetics in living systems are modeled by the Cahn–Hilliard equation, ∂ϕ/∂t = ∇⋅( M ∇μ ), where ϕ is the order parameter, μ is the chemical potential, and M is the mobility. These mathematical models help quantify intracellular compartmentalization, tissue patterning, and early embryonic development. Through processes such as evolution, developmental plasticity, and adaptive learning, biological systems explore vast spaces of possible forms and behaviors, optimizing their functions under diverse constraints. Insights from biological physics have driven advances in biomechanics, neural engineering, synthetic biology, and regenerative medicine. Experimental tools include optical tweezers, cryo-electron microscopy, and single-molecule tracking. Theoretical approaches combine statistical mechanics, continuum mechanics, machine learning, and non-equilibrium physics.
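As a quick illustration of the diffusive scaling law mentioned above, the short sketch below simulates unbiased one-dimensional random walks and checks that the root-mean-square displacement grows like √t; the step size, walker count, and times are arbitrary illustrative choices:

```python
import math
import random

# Minimal check of diffusive scaling: for an unbiased random walk the
# root-mean-square displacement grows as the square root of time.
random.seed(0)
n_walkers = 2000

for t in (100, 400, 1600):
    # each walker takes t unit steps of +1 or -1
    mean_sq = sum(
        sum(random.choice((-1, 1)) for _ in range(t)) ** 2
        for _ in range(n_walkers)
    ) / n_walkers
    print(f"t = {t:5d}   RMS displacement = {math.sqrt(mean_sq):6.1f}   sqrt(t) = {math.sqrt(t):5.1f}")
```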
Concepts such as phase transitions, self-organization, and stochastic fluctuations, traditionally studied in inanimate systems, have become central to understanding biological systems. [ 4 ] Biological physics has evolved from an interdisciplinary curiosity into a central part of modern physics. [ 2 ] Researchers emphasize that studying life offers opportunities to discover new organizing principles of matter and information. [ 4 ] The National Academies report and commentary from the American Physical Society call for expanded funding, interdisciplinary training, and infrastructure to accelerate progress. [ 5 ]
https://en.wikipedia.org/wiki/Physics_of_Life
Physics of failure is a technique under the practice of reliability design that leverages the knowledge and understanding of the processes and mechanisms that induce failure to predict reliability and improve product performance. Other definitions of Physics of Failure have also been used. The concept of Physics of Failure, also known as Reliability Physics, involves the use of degradation algorithms that describe how physical, chemical, mechanical, thermal, or electrical mechanisms evolve over time and eventually induce failure. While the concept of Physics of Failure is common in many structural fields, [ 2 ] the specific branding evolved from an attempt to better predict the reliability of early generation electronic parts and systems. Within the electronics industry, the major driver for the implementation of Physics of Failure was the poor performance of military weapon systems during World War II. [ 3 ] During the subsequent decade, the United States Department of Defense funded an extensive amount of effort to improve the reliability of electronics in particular, [ 4 ] with the initial efforts focused on after-the-fact or statistical methodology. [ 5 ] Unfortunately, the rapid evolution of electronics, with new designs, new materials, and new manufacturing processes, tended to quickly negate approaches and predictions derived from older technology. In addition, the statistical approach tended to lead to expensive and time-consuming testing. The need for different approaches led to the birth of Physics of Failure at the Rome Air Development Center (RADC). [ 6 ] Under the auspices of the RADC, the first Physics of Failure in Electronics Symposium was held in September 1962. [ 7 ] The goal of the program was to relate the fundamental physical and chemical behavior of materials to reliability parameters. [ 8 ] The initial focus of physics of failure techniques tended to be limited to degradation mechanisms in integrated circuits. This was primarily because the rapid evolution of the technology created a need to capture and predict performance several generations ahead of existing product. One of the first major successes under predictive physics of failure was a formula [ 9 ] developed by James Black of Motorola to describe the behavior of electromigration. Electromigration occurs when collisions of electrons cause metal atoms in a conductor to dislodge and move downstream of current flow (at a rate proportional to the current density). Black used this knowledge, in combination with experimental findings, to describe the time to failure due to electromigration as MTTF = A J −n exp( E a / kT ), where A is a constant based on the cross-sectional area of the interconnect, J is the current density, E a is the activation energy (e.g. 0.7 eV for grain boundary diffusion in aluminum), k is the Boltzmann constant, T is the temperature and n is a scaling factor (usually set to 2 according to Black). Physics of failure is typically designed to predict wearout, or an increasing failure rate, but this initial success by Black focused on predicting behavior during operational life, or a constant failure rate. This is because electromigration in traces can be designed out by following design rules, while electromigration at vias is primarily an interfacial effect, which tends to be defect- or process-driven.
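A small sketch evaluating Black's relation as reconstructed above, MTTF = A·J^(-n)·exp(Ea/kT); the prefactor, current density, and temperatures below are hypothetical values chosen only to show the temperature dependence, not data for any real process:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def black_mttf(A, J, Ea_eV, T_kelvin, n=2.0):
    """Black's equation: MTTF = A * J**(-n) * exp(Ea / (k*T)).

    A is an empirical prefactor tied to interconnect geometry and process,
    J is the current density, Ea the activation energy in eV (~0.7 eV for
    grain-boundary diffusion in aluminum), and n the current-density exponent.
    """
    return A * J ** (-n) * math.exp(Ea_eV / (K_BOLTZMANN_EV * T_kelvin))

# Hypothetical comparison: the same interconnect at two junction temperatures.
for T in (358.0, 398.0):  # about 85 C and 125 C
    print(f"T = {T:.0f} K   relative MTTF = {black_mttf(1.0, 1e6, 0.7, T):.2e}")
```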
Leveraging this success, additional physics-of-failure based algorithms have been derived for the three other major degradation mechanisms ( time dependent dielectric breakdown [TDDB], hot carrier injection [HCI], and negative bias temperature instability [NBTI]) in modern integrated circuits (equations shown below). More recent work has attempted to aggregate these discrete algorithms into a system-level prediction. [ 10 ] TDDB : τ = τ 0 ( T ) exp[ G ( T )/ ε ox ] [ 11 ] where τ 0 ( T ) = 5.4 × 10 −7 exp(− E a / kT ), G ( T ) = 120 + 5.8/ kT , and ε ox is the permittivity. HCI : λ HCI = A 3 exp(− β / V D ) exp(− E a / kT ) [ 12 ] where λ HCI is the failure rate of HCI, A 3 is an empirical fitting parameter, β is an empirical fitting parameter, V D is the drain voltage, E a is the activation energy of HCI, typically −0.2 to −0.1 eV, k is the Boltzmann constant, and T is absolute temperature. NBTI : λ = A ε ox m V T μ p exp(− E a / kT ) [ 13 ] where A is determined empirically by normalizing the above equation, m = 2.9, V T is the thermal voltage, μ p is the surface mobility constant, E a is the activation energy of NBTI, k is the Boltzmann constant, and T is the absolute temperature. The resources and successes with integrated circuits, and a review of some of the drivers of field failures, subsequently motivated the reliability physics community to initiate physics of failure investigations into package-level degradation mechanisms. An extensive amount of work was performed to develop algorithms that could accurately predict the reliability of interconnects. Specific interconnects of interest resided at the 1st level (wire bonds, solder bumps, die attach), 2nd level (solder joints), and 3rd level (plated through holes). Just as the integrated circuit community had four major successes with physics of failure at the die level, the component packaging community saw four major successes arise from its work in the 1970s and 1980s. These were Peck : [ 14 ] predicts time to failure of wire bond / bond pad connections when exposed to elevated temperature / humidity, where A is a constant, RH is the relative humidity, f ( V ) is a voltage function (often cited as voltage squared), E a is the activation energy, k B is the Boltzmann constant, and T is absolute temperature. Engelmaier : [ 15 ] predicts time to failure of solder joints exposed to temperature cycling, where ε f is a fatigue ductility coefficient, c is a time and temperature dependent constant, F is an empirical constant, L D is the distance from the neutral point, α is the coefficient of thermal expansion, Δ T is the change in temperature, and h is solder joint thickness. Steinberg : [ 16 ] predicts time to failure of solder joints exposed to vibration, where Z is maximum displacement, PSD is the power spectral density ( g²/Hz), f n is the natural frequency of the CCA, Q is transmissibility (assumed to be the square root of the natural frequency), Z c is the critical displacement (20 million cycles to failure), B is the length of the PCB edge parallel to the component located at the center of the board, c is a component packaging constant, h is PCB thickness, r is a relative position factor, and L is component length.
IPC-TR-579 : [ 17 ] predicts time to failure of plated through holes exposed to temperature cycling, where a is the coefficient of thermal expansion (CTE), T is temperature, E is the elastic modulus, h is board thickness, d is hole diameter, t is plating thickness, the E and Cu subscripts label the corresponding board and copper properties, respectively, S u is the ultimate tensile strength and D f the ductility of the plated copper, and De is the strain range. Each of the equations above uses a combination of knowledge of the degradation mechanisms and test experience to develop first-order equations that allow the design or reliability engineer to predict time-to-failure behavior based on information about the design architecture, materials, and environment. More recent work in the area of physics of failure has focused on predicting the time to failure of new materials (e.g., lead-free solder, [ 18 ] [ 19 ] high-K dielectric [ 20 ] ), software programs, [ 21 ] using the algorithms for prognostic purposes, [ 22 ] and integrating physics of failure predictions into system-level reliability calculations. [ 23 ] There are some limitations with the use of physics of failure in design assessments and reliability prediction. The first is that physics of failure algorithms typically assume a 'perfect design'. Attempting to understand the influence of defects can be challenging and often leads to Physics of Failure (PoF) predictions limited to end-of-life behavior (as opposed to infant mortality or useful operating life). In addition, some companies have so many use environments (for example, personal computers) that performing a PoF assessment for each potential combination of temperature / vibration / humidity / power cycling / etc. would be onerous and potentially of limited value.
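As a further illustration of how these die-level expressions are used, here is a minimal sketch evaluating the hot carrier injection rate equation quoted earlier, λ_HCI = A3·exp(−β/V_D)·exp(−Ea/kT); the fitting parameters are purely hypothetical, since in practice they are determined empirically for each process:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def hci_failure_rate(A3, beta, V_drain, Ea_eV, T_kelvin):
    """HCI failure rate: lambda = A3 * exp(-beta / V_D) * exp(-Ea / (k*T)).

    The activation energy of HCI is negative (roughly -0.2 to -0.1 eV),
    so the predicted failure rate increases as temperature drops.
    """
    return A3 * math.exp(-beta / V_drain) * math.exp(-Ea_eV / (K_BOLTZMANN_EV * T_kelvin))

# Purely hypothetical fitting parameters, for illustration only.
for T in (300.0, 350.0):
    lam = hci_failure_rate(A3=1e-3, beta=30.0, V_drain=1.2, Ea_eV=-0.15, T_kelvin=T)
    print(f"T = {T:.0f} K   lambda_HCI = {lam:.2e} (arbitrary units)")
```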
https://en.wikipedia.org/wiki/Physics_of_failure
Physics of financial markets is a non-orthodox economics discipline that studies financial markets as physical systems. It seeks to understand the nature of financial processes and phenomena by employing the scientific method and avoiding beliefs, unverifiable assumptions and immeasurable notions, which are not uncommon in economic disciplines. Physics of financial markets addresses issues such as theory of price formation, price dynamics, [ 1 ] [ 2 ] market ergodicity, [ 3 ] [ 4 ] [ 5 ] collective phenomena, market self-action, and market instabilities. Physics of financial markets should not be confused with mathematical finance, which is concerned only with descriptive mathematical modeling of financial instruments, without seeking to understand the nature of the underlying processes.
https://en.wikipedia.org/wiki/Physics_of_financial_markets
From the viewpoint of physics ( dynamics , to be exact), a firearm, as with most weapons, is a system for delivering maximum destructive energy to the target with minimum delivery of energy to the shooter. [ citation needed ] The momentum delivered to the target, however, cannot be any more than that (due to recoil) on the shooter. This is due to conservation of momentum , which dictates that the momentum imparted to the bullet is equal and opposite to that imparted to the gun-shooter system. [ failed verification ] From a thermodynamic point of view, a firearm is a special type of piston engine , or in general heat engine , where the bullet has the function of a piston. The energy conversion efficiency of a firearm strongly depends on its construction, especially on its caliber and barrel length. For illustration, the energy balance of a typical small firearm firing .300 Hawk ammunition [ 1 ] is comparable with that of a typical piston engine. Higher efficiency can be achieved in longer barrel firearms because they have a better volume ratio. However, the efficiency gain is less than the volume ratio would suggest, because the expansion is not truly adiabatic and the burnt gas cools faster because of heat exchange with the barrel. Large firearms (such as cannons) achieve smaller barrel-heating loss because they have a better volume-to-surface ratio. A large barrel diameter is also helpful, because the friction induced by sealing is lower relative to the accelerating force: at a given pressure the accelerating force is proportional to the square of the barrel diameter, while the sealing requirement is proportional only to its perimeter. According to Newtonian mechanics, if the gun and shooter are at rest initially, the force on the bullet will be equal to that on the gun-shooter. This is due to Newton's third law of motion (for every action, there is an equal and opposite reaction). Consider a system where the gun and shooter have a combined mass m g and the bullet has a mass m b . When the gun is fired, the two masses move away from one another with velocities v g and v b respectively. The law of conservation of momentum states that the magnitudes of their momenta must be equal, and as momentum is a vector quantity their directions are opposite: m g v g = m b v b . In technical mathematical terms, the derivative of momentum with respect to time is force, which implies the force on the bullet will equal the force on the gun, and the momentum of the bullet or shooter can be obtained by integrating the force-time function of the bullet or shooter. This is mathematically written as ∫ F b d t = m b v b and ∫ F g d t = m g v g , with F b = − F g , where g , b , t , m , v , F represent the gun, bullet, time, mass, velocity and force respectively. Hollywood depictions of firearm victims being thrown through plate-glass windows are inaccurate. Were this the case, the shooter would also be thrown backwards, experiencing an even greater change in momentum in the opposite direction. Gunshot victims frequently fall or collapse when shot; this is less a result of the momentum of the bullet pushing them over than of physical damage or psychological effects, perhaps combined with being off balance. This is not the case if the victim is hit by heavier projectiles such as a 20 mm cannon shell, where the momentum effects can be enormous; this is why very few such weapons can be fired without being mounted on a weapons platform or involve a recoilless system (e.g. a recoilless rifle ).
Example: A .44 Remington Magnum with a 240-grain (0.016 kg) jacketed bullet is fired at 1,180 feet per second (360 m/s) [ 2 ] at a 170-pound (77 kg) target. What velocity is imparted to the target (assume the bullet remains embedded in the target and thus practically loses all its velocity)? Let m b and v b stand for the mass and velocity of the bullet, the latter just before hitting the target, and let m t and v t stand for the mass and velocity of the target after being hit. Conservation of momentum requires m b v b = ( m t + m b ) v t . Solving for the target's velocity gives v t = m b v b / ( m t + m b ) ≈ (0.016 kg × 360 m/s) / 77 kg ≈ 0.07 m/s. This shows the target, with its great mass, barely moves at all. This is despite ignoring drag forces, which would in reality cause the bullet to lose energy and momentum in flight. From Eq. 1 we can write for the velocity of the gun/shooter: V = mv / M . This shows that despite the high velocity of the bullet, the small bullet-mass to shooter-mass ratio results in a low recoil velocity ( V ), although the force and momentum are equal. However, the smaller mass of the bullet, compared to that of the gun-shooter system, allows significantly more kinetic energy to be imparted to the bullet than to the shooter. The kinetic energies of the two systems are ½ MV 2 for the gun-shooter system and ½ mv 2 for the bullet. The energy imparted to the shooter can then be written as ½ MV 2 = ½ M ( mv / M ) 2 = ( m / M )(½ mv 2 ). For the ratio of these energies we have ½ MV 2 / (½ mv 2 ) = m / M . The ratio of the kinetic energies is the same as the ratio of the masses (and is independent of velocity). Since the mass of the bullet is much less than that of the shooter, there is more kinetic energy transferred to the bullet than to the shooter. Once discharged from the weapon, the bullet's energy decays throughout its flight, until the remainder is dissipated by colliding with a target (e.g. deforming the bullet and target). When the bullet strikes, its high velocity and small frontal cross-section mean that it will exert highly focused stresses in any object it hits. This usually results in it penetrating any softer material, such as flesh. The energy is then dissipated along the wound channel formed by the passage of the bullet. See terminal ballistics for a fuller discussion of these effects. Bulletproof vests work by dissipating the bullet's energy in another way; the vest's material, usually an aramid ( Kevlar or Twaron ), works by presenting a series of material layers which catch the bullet and spread its imparted force over a larger area, hopefully bringing it to a stop before it can penetrate into the body behind the vest. While the vest can prevent a bullet from penetrating, the wearer will still be affected by the momentum of the bullet, which can produce contusions .
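The short sketch below reproduces the worked example above: momentum conservation gives the target and recoil velocities, and the mass ratio fixes the kinetic-energy split. The gun mass of roughly 4.5 kg is an assumed illustrative value that is not given in the text:

```python
# Worked .44 Magnum example: momentum conservation and the kinetic-energy split.
m_bullet = 0.016          # kg (240 grain)
v_bullet = 360.0          # m/s
m_target = 77.0           # kg (170 lb); the bullet is assumed to embed in the target
m_shooter = 77.0 + 4.5    # kg; shooter plus an assumed ~4.5 kg gun (illustrative)

# Target velocity after a perfectly inelastic impact: m_b * v_b = (m_t + m_b) * v_t
v_target = m_bullet * v_bullet / (m_target + m_bullet)

# Recoil velocity of the gun-shooter system: V = m * v / M
v_recoil = m_bullet * v_bullet / m_shooter

# Kinetic-energy ratio E_shooter / E_bullet equals the mass ratio m_b / M
ke_bullet = 0.5 * m_bullet * v_bullet ** 2
ke_shooter = 0.5 * m_shooter * v_recoil ** 2

print(f"target velocity ~ {v_target:.3f} m/s")
print(f"recoil velocity ~ {v_recoil:.3f} m/s")
print(f"energy ratio    ~ {ke_shooter / ke_bullet:.5f} (= m_b / M = {m_bullet / m_shooter:.5f})")
```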
https://en.wikipedia.org/wiki/Physics_of_firearms
The physics of skiing refers to the analysis of the forces acting on a person while skiing . The motion of a skier is determined by the physical principles of the conservation of energy and the frictional forces acting on the body. For example, in downhill skiing, as the skier is accelerated down the hill by the force of gravity, their gravitational potential energy is converted to kinetic energy , the energy of motion. In the ideal case, all of the potential energy would be converted into kinetic energy; in reality, some of the energy is lost to heat due to friction . One type of friction acting on the skier is the kinetic friction between the skis and snow. The force of friction acts in the direction opposite to the direction of motion, resulting in a lower velocity and hence less kinetic energy. The kinetic friction can be reduced by applying wax to the bottom of the skis, which reduces the coefficient of friction . Different types of wax are manufactured for different temperature ranges because the snow quality changes depending on the current weather conditions and the thermal history of the snow. The shape and construction material of a ski can also greatly impact the forces acting on a skier. [ 1 ] Skis designed for use in powder conditions are very different from skis designed for use on groomed trails. These design differences can be attributed to differences in snow quality . An illustration of how snow quality can differ follows. In an area which experiences temperature fluctuations around 0 °C, the freezing temperature of water, both rain and snowfall are possible. Wet snow or wet ground can freeze into a slippery sheet of ice. In an area which consistently experiences temperatures below 0 °C, snowfall leads to accumulation of snow on the ground. When fresh, this snow is fluffy and powder-like. This type of snow has a lot of air space. Over time, this snow will become more compact, and the lower layers of snow will become more dense than the top layer. Skiers can use this type of information to improve their skiing experience by choosing the appropriate skis and wax, or by choosing to stay home. Search and rescue teams and backcountry users rely on an understanding of snow to navigate the dangers present in the outdoors. [ 2 ] The second type of frictional force acting on a skier is drag . This is typically referred to as " air resistance ". The drag force is proportional to the cross-sectional area of the body (e.g. the skier), the square of its velocity, and the density of the fluid (e.g. air) through which the body travels. To go faster, a skier can try to reduce the cross-sectional area of their body. Downhill skiers can adopt more aerodynamic positions such as tucking. Alpine ski racers wear skin-tight race suits. The general area of physics which addresses these forces is known as fluid dynamics .
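A minimal sketch of the quadratic drag relation described above, F = ½·ρ·C_d·A·v²; the drag coefficient and frontal areas below are assumed round values for illustration, not measured data:

```python
# Quadratic aerodynamic drag on a skier: F = 0.5 * rho * Cd * A * v**2.
# The drag coefficient and frontal areas are assumed illustrative values.
RHO_AIR = 1.2  # kg/m^3, near sea level

def drag_force(v, frontal_area, cd=1.0):
    return 0.5 * RHO_AIR * cd * frontal_area * v ** 2

v = 25.0  # m/s, roughly 90 km/h
upright = drag_force(v, frontal_area=0.7)  # assumed upright frontal area, m^2
tucked = drag_force(v, frontal_area=0.4)   # assumed tucked frontal area, m^2

# Drag scales linearly with frontal area and with the square of speed.
print(f"upright: {upright:.0f} N, tucked: {tucked:.0f} N")
```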
https://en.wikipedia.org/wiki/Physics_of_skiing
Physiological agonism describes the action of a substance which ultimately produces the same effects in the body as another substance—as if they were both agonists at the same receptor —without actually binding to the same receptor. Physiological antagonism describes the behavior of a substance that produces effects counteracting those of another substance (a result similar to that produced by an antagonist blocking the action of an agonist at the same receptor) using a mechanism that does not involve binding to the same receptor.
https://en.wikipedia.org/wiki/Physiological_agonism_and_antagonism
Physiological condition [ 1 ] or, more often, "physiological conditions" is a term used in biology , biochemistry , and medicine . It refers to conditions of the external or internal milieu that may occur in nature for that organism or cell system, in contrast to artificial laboratory conditions. A temperature range of 20–40 degrees Celsius , atmospheric pressure of 1 atm, pH of 6–8, glucose concentration of 1–20 mM, atmospheric oxygen concentration, Earth gravity and electromagnetism are examples of physiological conditions for most earth organisms.
https://en.wikipedia.org/wiki/Physiological_condition
In muscle physiology , physiological cross-sectional area (PCSA) is the area of the cross section of a muscle perpendicular to its fibers, generally at its largest point. It is typically used to describe the contraction properties of pennate muscles . [ 1 ] It is not the same as the anatomical cross-sectional area (ACSA), which is the area of the cross section of a muscle perpendicular to its longitudinal axis. In a non-pennate muscle the fibers are parallel to the longitudinal axis, and therefore PCSA and ACSA coincide. One advantage of pennate muscles is that more muscle fibers can be packed in parallel, thus allowing the muscle to produce more force, although the fiber angle to the direction of action means that the maximum force in that direction is somewhat less than the maximum force in the fiber direction. [ 2 ] [ 3 ] The muscle cross-sectional area (blue line in figure 1, also known as anatomical cross-section area, or ACSA) does not accurately represent the number of muscle fibers in the muscle. A better estimate is provided by the total area of the cross-sections perpendicular to the muscle fibers (green lines in figure 1). This measure is known as the physiological cross-sectional area (PCSA), and is commonly calculated and defined by the following formula, developed in 1975 by Alexander and Vernon: [ 4 ] [ 5 ] [ 6 ] PCSA = m / ( ρ × ℓ ), where m is the muscle mass, ℓ is the fiber length, and ρ is the density of the muscle (approximately 1.06 g/cm³ for mammalian muscle). PCSA increases with pennation angle, and with muscle length. In a pennate muscle, PCSA is always larger than ACSA. In a non-pennate muscle, it coincides with ACSA. The total force exerted by the fibers in their oblique direction is proportional to PCSA. If the specific tension of the muscle fibers is known (force exerted by the fibers per unit of PCSA), it can be computed as follows: [ 7 ] Total force = specific tension × PCSA. However, only a component of that force can be used to pull the tendon in the desired direction. This component, which is the true muscle force (also called tendon force [ 6 ] ), is exerted along the direction of action of the muscle: [ 6 ] Muscle force = Total force × cosΦ. The other component, orthogonal to the direction of action of the muscle (Orthogonal force = Total force × sinΦ), is not exerted on the tendon, but simply squeezes the muscle, by pulling its aponeuroses toward each other. Notice that, although it is practically convenient to compute PCSA based on volume or mass and fiber length, PCSA (and therefore the total fiber force, which is proportional to PCSA) is not proportional to muscle mass or fiber length alone. Namely, the maximum ( tetanic ) force of a muscle fiber simply depends on its thickness (cross-section area) and type . It by no means depends on its mass or length alone. For instance, when muscle mass increases due to physical development during childhood, this may be due only to an increase in the length of the muscle fibers, with no change in fiber thickness (PCSA) or fiber type. In this case, an increase in mass does not produce an increase in force. Sometimes, the increase in mass is associated with an increase in thickness. Only in this case will it have some effect on fiber force, but this effect will be proportional to the increase in thickness, not to the increase in mass. For instance, in some stages of physical development, the increase in mass may be due to both an increase in PCSA and an increase in fiber length. Even in this case, muscle force does not increase as much as muscle mass does, because the mass increase is partly produced by a variation in fiber length, and fiber length has no effect on muscle force.
In 1982 a different definition of PCSA, herein denoted PCSA 2 to facilitate comparison with the previous definition, was introduced by Sacks & Roy: [ 7 ] PCSA 2 = m cosΦ / ( ρ × ℓ ) = PCSA × cosΦ. The comparison shows that in a pennate muscle, since cosΦ is always smaller than 1, PCSA 2 is always smaller than PCSA. Hence, it cannot be described as the total area of the cross-sections perpendicular to the muscle fibers (green lines in figure 1). It can be interpreted two ways: This implies that, in a muscle such as that in figure 1A, PCSA 2 coincides with ACSA. The disadvantage of this definition is its more complex interpretation; its advantage is that muscle force can be computed more directly: Muscle force = PCSA 2 × specific tension. Currently, some authors keep using the original definition of PCSA, [ 5 ] [ 6 ] probably because of its intuitively appealing geometrical interpretation (figure 1).
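A small sketch implementing the two PCSA definitions and the force decomposition discussed above; the muscle mass, fiber length, pennation angle, and specific tension are hypothetical illustrative values:

```python
import math

MUSCLE_DENSITY = 1.06e3  # kg/m^3, approximate density of mammalian muscle

def pcsa(mass, fiber_length, rho=MUSCLE_DENSITY):
    """Alexander & Vernon (1975): PCSA = m / (rho * fiber length)."""
    return mass / (rho * fiber_length)

def pcsa2(mass, fiber_length, pennation_deg, rho=MUSCLE_DENSITY):
    """Sacks & Roy (1982): PCSA2 = PCSA * cos(pennation angle), always <= PCSA."""
    return pcsa(mass, fiber_length, rho) * math.cos(math.radians(pennation_deg))

# Hypothetical inputs: 0.25 kg muscle, 6 cm fibers, 20 degree pennation angle.
mass, fiber_len, phi = 0.25, 0.06, 20.0
specific_tension = 3.0e5  # N/m^2, assumed force per unit PCSA

total_fiber_force = specific_tension * pcsa(mass, fiber_len)
tendon_force = total_fiber_force * math.cos(math.radians(phi))      # along the line of action
orthogonal_force = total_fiber_force * math.sin(math.radians(phi))  # squeezes the aponeuroses

print(f"PCSA  = {pcsa(mass, fiber_len) * 1e4:.1f} cm^2")
print(f"PCSA2 = {pcsa2(mass, fiber_len, phi) * 1e4:.1f} cm^2")
print(f"tendon force = {tendon_force:.0f} N, orthogonal force = {orthogonal_force:.0f} N")
```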
https://en.wikipedia.org/wiki/Physiological_cross-sectional_area
Physiological functional capacity ( PFC ) is the ability to perform the physical tasks of daily life and the ease with which these tasks can be performed. PFC declines at some point with advancing age even in healthy adults, resulting in a reduced capacity to perform certain physical tasks. This can eventually result in increased incidence of functional disability , increased use of health care services, loss of independence , and reduced quality of life . [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Physiological_functional_capacity
The term physiological prematurity (also described as altriciality ) refers to the fact that, compared to most animals, humans are born in a premature biological state. Although sensory organs and skeletal and muscular systems are largely developed prenatally, human babies at the time of their birth are completely helpless and dependent on intensive care. This is in contrast to the maturity at birth found in other higher mammals (e.g. elephants, horses). According to the Swiss biologist Adolf Portmann , it is a characteristic feature of humans that, due to their early birth, many developmental processes do not happen in isolation but embedded in a sociocultural environment. Due to their complete dependence, humans are particularly receptive to social interactions and to their environment, which, according to Portmann, is a precondition for cultural and intellectual learning. Popularly called the fourth trimester of pregnancy, using a label coined by US pediatrician Harvey Karp in his book The Happiest Baby on the Block , the relative underdevelopment of human newborns compared to other primates is believed to be due to evolutionary pressures related to walking upright . [ 1 ]
https://en.wikipedia.org/wiki/Physiological_prematurity
Physiological relevance is a scientific concept that refers to the applicability or significance of a particular experimental finding or biological observation in the context of normal bodily functions. This concept is often used in biomedical research , where scientists strive to design experiments that not only yield statistically significant results but also have direct implications for understanding human health and disease. Physiological relevance is a critical factor in biomedical research because it helps to bridge the gap between basic science and clinical application. [ 2 ] For example, a study on the effects of a new drug on cancer cells in a lab dish might show promising results. However, these findings would only be considered physiologically relevant if the drug also demonstrated efficacy in animal models or clinical trials, where the complex interplay of various bodily systems and processes is taken into account. [ 3 ] [ 4 ] A classic example of physiological relevance is the discovery of insulin . In the early 20th century, scientists found that injecting diabetic dogs with extracts from the pancreas of healthy dogs could normalize their blood sugar levels. This finding was not only statistically significant but also physiologically relevant, as it led to the development of insulin therapy for diabetes in humans. In tissue engineering , physiological relevance means that living tissue constructs in vitro are morphologically and functionally similar to native tissue. [ 5 ] Bioengineering approaches are used to modify the mechanical properties of scaffolds and to functionalize materials with growth factors or gene therapeutics. [ 6 ] [ 7 ] One of the main challenges in ensuring physiological relevance is the inherent complexity of biological systems. Many factors can influence the outcome of an experiment, from the genetic makeup of the test subjects to the specific conditions under which the experiment is conducted. Furthermore, what is physiologically relevant in one species may not be in another, making it difficult to extrapolate findings from animal models to humans. Another challenge is that physiological relevance is not always easy to quantify. Unlike statistical significance, which can be calculated using well-established mathematical formulas, physiological relevance often requires a more subjective, holistic assessment of the data. A limited number of quantitative models have been applied to improve the physiological relevance of biological systems. [ 6 ] [ 8 ]
https://en.wikipedia.org/wiki/Physiological_relevance
Physiologically based pharmacokinetic (PBPK) modeling is a mathematical modeling technique for predicting the absorption, distribution, metabolism and excretion (ADME) of synthetic or natural chemical substances in humans and other animal species. PBPK modeling is used in pharmaceutical research and drug development, and in health risk assessment for cosmetics or general chemicals. PBPK models strive to be mechanistic by mathematically transcribing anatomical, physiological, physical, and chemical descriptions of the phenomena involved in the complex ADME processes. A large degree of residual simplification and empiricism is still present in those models, but they have an extended domain of applicability compared to that of classical, empirical-function-based pharmacokinetic models. PBPK models may have purely predictive uses, but other uses, such as statistical inference, have been made possible by the development of Bayesian statistical tools able to deal with complex models. [ 1 ] That is true for both toxicity risk assessment and therapeutic drug development. PBPK models try to rely a priori on the anatomical and physiological structure of the body, and to a certain extent, on biochemistry. They are usually multi-compartment models , with compartments corresponding to predefined organs or tissues, with interconnections corresponding to blood or lymph flows (more rarely to diffusions). A system of differential equations for the concentration or quantity of substance in each compartment can be written, and its parameters represent blood flows, pulmonary ventilation rate, organ volumes, etc., for which information is available in scientific publications. Indeed, the description they make of the body is simplified, and a balance needs to be struck between complexity and simplicity. Besides the advantage of allowing the use of a priori information about parameter values, these models also facilitate inter-species transpositions or extrapolation from one mode of administration to another ( e.g. , inhalation to oral). An example of a 7-compartment PBPK model, suitable to describe the fate of many solvents in the mammalian body, is given in the Figure on the right. The first pharmacokinetic model described in the scientific literature [ 2 ] was in fact a PBPK model. It led, however, to computations intractable at that time. The focus then shifted to simpler models, [ 3 ] for which analytical solutions could be obtained (such solutions were sums of exponential terms, which led to further simplifications). The availability of computers and numerical integration algorithms marked a renewed interest in physiological models in the early 1970s. [ 4 ] [ 5 ] For substances with complex kinetics, or when inter-species extrapolations were required, simple models were insufficient and research continued on physiological models. [ 6 ] [ 7 ] [ 8 ] By 2010, hundreds of scientific publications had described and used PBPK models, and at least two private companies had based their business on their expertise in this area. The model equations follow the principles of mass transport, fluid dynamics, and biochemistry in order to simulate the fate of a substance in the body. [ 9 ] Compartments are usually defined by grouping organs or tissues with similar blood perfusion rate and lipid content ( i.e. organs for which chemicals' concentration vs. time profiles will be similar). Ports of entry (lung, skin, intestinal tract...), ports of exit (kidney, liver...)
and target organs for therapeutic effect or toxicity are often left separate. Bone can be excluded from the model if the substance of interest does not distribute to it. Connections between compartments follow physiology ( e.g. , blood flow exiting the gut goes to the liver, etc. ) Drug distribution into a tissue can be rate-limited by either perfusion or permeability. [ 10 ] [ 11 ] Perfusion-rate-limited kinetics apply when the tissue membranes present no barrier to diffusion. Blood flow, assuming that the drug is transported mainly by blood, as is often the case, is then the limiting factor to distribution in the various cells of the body. That is usually true for small lipophilic drugs. Under perfusion limitation, the instantaneous rate of entry for the quantity of drug in a compartment is simply equal to the (blood) volumetric flow rate through the organ times the incoming blood concentration. In that case, for a generic compartment i , the differential equation for the quantity Q i of substance, which defines the rate of change in this quantity, is: {\displaystyle {dQ_{i} \over dt}=F_{i}(C_{art}-{{Q_{i}} \over {P_{i}V_{i}}})} where F i is blood flow (noted Q in the Figure above), C art the incoming arterial blood concentration, P i the tissue over blood partition coefficient and V i the volume of compartment i . A complete set of differential equations for the 7-compartment model shown above could therefore be given by the following table: The above equations include only transport terms and do not account for inputs or outputs. Those can be modeled with specific terms, as in the following. Modeling inputs is necessary to come up with a meaningful description of a chemical's pharmacokinetics. The following examples show how to write the corresponding equations. When dealing with an oral bolus dose ( e.g. ingestion of a tablet), first-order absorption is a very common assumption. In that case the gut equation is augmented with an input term, with an absorption rate constant K a : {\displaystyle {dQ_{g} \over dt}=F_{g}(C_{art}-{{Q_{g}} \over {P_{g}V_{g}}})+K_{a}Q_{ing}} That requires defining an equation for the quantity ingested and present in the gut lumen: {\displaystyle {dQ_{ing} \over dt}=-K_{a}Q_{ing}} In the absence of a gut compartment, input can be made directly into the liver. However, in that case local metabolism in the gut may not be correctly described. The case of approximately continuous absorption ( e.g. via drinking water) can be modeled by a zero-order absorption rate (here R ing in units of mass over time): {\displaystyle {dQ_{g} \over dt}=F_{g}(C_{art}-{{Q_{g}} \over {P_{g}V_{g}}})+R_{ing}} More sophisticated gut absorption models can be used. In those models, additional compartments describe the various sections of the gut lumen and tissue. Intestinal pH, transit times and the presence of active transporters can be taken into account. [ 12 ] The absorption of a chemical deposited on skin can also be modeled using first order terms. It is best in that case to separate the skin from the other tissues, to further differentiate exposed skin and non-exposed skin, and to differentiate viable skin (dermis and epidermis) from the stratum corneum (the actual skin upper layer exposed). This is the approach taken in [Bois F., Diaz Ochoa J.G.
Gajewska M., Kovarich S., Mauch K., Paini A., Péry A., Sala Benito J.V., Teng S., Worth A., in press, Multiscale modelling approaches for assessing cosmetic ingredients safety, Toxicology. doi: 10.1016/j.tox.2016.05.026]. Unexposed stratum corneum simply exchanges with the underlying viable skin by diffusion: {\displaystyle {dQ_{{sc}_{u}} \over dt}=K_{p}\times S_{s}\times (1-f_{S_{e}})\times ({Q_{s_{u}} \over {P_{sc}V_{{sc}_{u}}}}-C_{{sc}_{u}})} where K p is the partition coefficient, S s is the total skin surface area, f S e the fraction of skin surface area exposed, ... For the viable skin unexposed: {\displaystyle {dQ_{s_{u}} \over dt}=F_{s}(1-f_{S_{e}})(C_{art}-{{Q_{s_{u}}} \over {P_{s}V_{s_{u}}}})-{dQ_{{sc}_{u}} \over dt}} For the skin stratum corneum exposed: {\displaystyle {dQ_{{sc}_{e}} \over dt}=K_{p}\times S_{s}\times f_{S_{e}}\times ({Q_{s_{e}} \over {P_{sc}V_{{sc}_{e}}}}-C_{{sc}_{e}})} For the viable skin exposed: {\displaystyle {dQ_{s_{e}} \over dt}=F_{s}f_{S_{e}}(C_{art}-{{Q_{s_{e}}} \over {P_{s}V_{s_{e}}}})-{dQ_{{sc}_{e}} \over dt}} The equations for the unexposed and exposed viable skin feed from arterial blood and return to venous blood. More complex diffusion models have been published [reference to add]. Intravenous injection is a common clinical route of administration. (to be completed) Inhalation occurs through the lung and is hardly dissociable from exhalation. (to be completed) There are several ways metabolism can be modeled. For some models, a linear excretion rate is preferred. This can be accomplished with a simple differential equation. Otherwise a Michaelis-Menten equation of the form rate = V max C / ( K m + C ) is generally appropriate for a more accurate result. PBPK models are compartmental models like many others, but they have a few advantages over so-called "classical" pharmacokinetic models, which are less grounded in physiology. PBPK models can first be used to abstract and eventually reconcile disparate data (from physicochemical or biochemical experiments, in vitro or in vivo pharmacological or toxicological experiments, etc. ) They also give access to internal body concentrations of chemicals or their metabolites, and in particular at the site of their effects, be it therapeutic or toxic. Finally they also help interpolation and extrapolation of knowledge between: Some of these extrapolations are "parametric" : only changes in input or parameter values are needed to achieve the extrapolation (this is usually the case for dose and time extrapolations). Others are "nonparametric" in the sense that a change in the model structure itself is needed ( e.g. , when extrapolating to a pregnant female, equations for the foetus should be added). Owing to the mechanistic basis of PBPK models, another potential use of PBPK modeling is hypothesis testing. For example, if a drug compound showed lower-than-expected oral bioavailability, various model structures (i.e., hypotheses) and parameter values can be evaluated to determine which models and/or parameters provide the best fit to the observed data.
If the hypothesis that metabolism in the intestines was responsible for the low bioavailability yielded the best fit, then the PBPK modeling results support this hypothesis over the other hypotheses evaluated. As such, PBPK modeling can be used, inter alia , to evaluate the involvement of carrier-mediated transport, clearance saturation, enterohepatic recirculation of the parent compound, extra-hepatic/extra-gut elimination; higher in vivo solubility than predicted in vitro ; drug-induced gastric emptying delays; gut loss and regional variation in gut absorption. [ 15 ] Each type of modeling technique has its strengths and limitations. PBPK modeling is no exception. One limitation is the potential for a large number of parameters, some of which may be correlated. This can lead to issues of parameter identifiability and redundancy. However, it is possible (and commonly done) to model explicitly the correlations between parameters (for example, the non-linear relationships between age, body mass, organ volumes and blood flows). After numerical values are assigned to each PBPK model parameter, specialized or general computer software is typically used to numerically integrate a set of ordinary differential equations like those described above, in order to calculate the numerical value of each compartment at specified values of time (see Software). However, if such equations involve only linear functions of each compartmental value, or under limiting conditions (e.g., when input values remain very small) that guarantee such linearity is closely approximated, such equations may be solved analytically to yield explicit equations (or, under those limiting conditions, very accurate approximations) for the time-weighted average (TWA) value of each compartment as a function of the TWA value of each specified input (see, e.g., [ 16 ] [ 17 ] ). PBPK models can, on one hand, rely on chemical property prediction models ( QSAR models or predictive chemistry models). For example, QSAR models can be used to estimate partition coefficients. They also extend into, but are not destined to supplant, systems biology models of metabolic pathways. They are also parallel to physiome models, but do not aim at modelling physiological functions beyond fluid circulation in detail. In fact the above four types of models can reinforce each other when integrated. [ 18 ]
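A minimal numerical sketch of the perfusion-limited compartment equation and first-order oral absorption described above, integrated with simple Euler steps for a hypothetical two-tissue body; all flows, volumes, partition coefficients and the dose are assumed illustrative values, not parameters from any published model:

```python
# Toy perfusion-limited PBPK sketch: dQ_i/dt = F_i * (C_art - Q_i / (P_i * V_i)),
# plus first-order absorption from the gut lumen into the gut tissue.
# All parameter values are hypothetical and for illustration only.

dt = 0.01                                # h, Euler time step
F = {"gut": 60.0, "tissue": 120.0}       # blood flows, L/h
V = {"gut": 1.0, "tissue": 30.0}         # compartment volumes, L
P = {"gut": 1.0, "tissue": 4.0}          # tissue:blood partition coefficients
V_blood, k_a = 5.0, 1.2                  # blood volume (L), absorption rate constant (1/h)

Q = {"gut": 0.0, "tissue": 0.0}          # amounts in tissues (mg)
Q_lumen, Q_blood = 100.0, 0.0            # oral dose in the gut lumen, amount in blood (mg)

for _ in range(int(24 / dt)):            # simulate 24 h with no elimination
    C_art = Q_blood / V_blood
    absorbed = k_a * Q_lumen * dt        # dQ_ing/dt = -k_a * Q_ing
    Q_lumen -= absorbed

    dQ_blood = 0.0
    for organ in Q:
        uptake = F[organ] * (C_art - Q[organ] / (P[organ] * V[organ])) * dt
        Q[organ] += uptake
        dQ_blood -= uptake
    Q["gut"] += absorbed                 # absorbed chemical enters the gut tissue
    Q_blood += dQ_blood

print(f"after 24 h: lumen = {Q_lumen:.2f} mg, gut = {Q['gut']:.2f} mg, "
      f"tissue = {Q['tissue']:.2f} mg, blood = {Q_blood:.2f} mg")
```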
https://en.wikipedia.org/wiki/Physiologically_based_pharmacokinetic_modelling
Physiology ( / ˌ f ɪ z i ˈ ɒ l ə dʒ i / ; from Ancient Greek φύσις ( phúsis ) ' nature, origin ' and -λογία ( -logía ) ' study of ' ) [ 1 ] is the scientific study of functions and mechanisms in a living system . [ 2 ] [ 3 ] As a subdiscipline of biology , physiology focuses on how organisms , organ systems , individual organs , cells , and biomolecules carry out chemical and physical functions in a living system. [ 4 ] According to the classes of organisms , the field can be divided into medical physiology , animal physiology , plant physiology , cell physiology , and comparative physiology . [ 4 ] Central to physiological functioning are biophysical and biochemical processes, homeostatic control mechanisms, and communication between cells. [ 5 ] Physiological state is the condition of normal function. In contrast, pathological state refers to abnormal conditions , including human diseases . The Nobel Prize in Physiology or Medicine is awarded by the Royal Swedish Academy of Sciences for exceptional scientific achievements in physiology related to the field of medicine . Because physiology focuses on the functions and mechanisms of living organisms at all levels, from the molecular and cellular level to the level of whole organisms and populations, its foundations span a range of key disciplines. There are many ways to categorize the subdisciplines of physiology. [ 6 ] Although there are differences between animal , plant , and microbial cells, the basic physiological functions of cells can be divided into the processes of cell division , cell signaling , cell growth , and cell metabolism . [ citation needed ] Plant physiology is a subdiscipline of botany concerned with the functioning of plants. Closely related fields include plant morphology , plant ecology , phytochemistry , cell biology , genetics , biophysics , and molecular biology . Fundamental processes of plant physiology include photosynthesis , respiration , plant nutrition , tropisms , nastic movements , photoperiodism , photomorphogenesis , circadian rhythms , seed germination , dormancy , and stomata function and transpiration . Absorption of water by roots, production of food in the leaves, and growth of shoots towards light are examples of plant physiology. [ 7 ] Human physiology is the study of how the human body's systems and functions work together to maintain a stable internal environment. It includes the study of the nervous, endocrine, cardiovascular, respiratory, digestive, and urinary systems, as well as cellular and exercise physiology. Understanding human physiology is essential for diagnosing and treating health conditions and promoting overall wellbeing. It seeks to understand the mechanisms that work to keep the human body alive and functioning, [ 4 ] through scientific enquiry into the nature of the mechanical, physical, and biochemical functions of humans, their organs, and the cells of which they are composed. The principal level of focus of physiology is at the level of organs and systems within systems. The endocrine and nervous systems play major roles in the reception and transmission of signals that integrate function in animals. Homeostasis is a major aspect of such interactions within plants as well as animals. Integration, the biological basis of the study of physiology, refers to the overlap of many functions of the systems of the human body, as well as its accompanied form. It is achieved through communication that occurs in a variety of ways, both electrical and chemical.
[ 8 ] Changes in physiology can impact the mental functions of individuals. Examples of this would be the effects of certain medications or toxic levels of substances. [ 9 ] Change in behavior as a result of these substances is often used to assess the health of individuals. [ 10 ] [ 11 ] Much of the foundation of knowledge in human physiology was provided by animal experimentation . Due to the frequent connection between form and function, physiology and anatomy are intrinsically linked and are studied in tandem as part of a medical curriculum. [ 12 ] Involving evolutionary physiology and environmental physiology , comparative physiology considers the diversity of functional characteristics across organisms. [ 13 ] The study of human physiology as a medical field originates in classical Greece , at the time of Hippocrates (late 5th century BC). [ 14 ] Outside of the Western tradition, early forms of physiology or anatomy can be reconstructed as having been present at around the same time in China , [ 15 ] India [ 16 ] and elsewhere. Hippocrates incorporated the theory of humorism , which consisted of four basic substances: earth, water, air and fire. Each substance is known for having a corresponding humor: black bile, phlegm, blood, and yellow bile, respectively. Hippocrates also noted some emotional connections to the four humors, on which Galen would later expand. The critical thinking of Aristotle and his emphasis on the relationship between structure and function marked the beginning of physiology in Ancient Greece . Like Hippocrates , Aristotle took to the humoral theory of disease, which also consisted of four primary qualities in life: hot, cold, wet and dry. [ 17 ] Galen ( c. 130 –200 AD) was the first to use experiments to probe the functions of the body. Unlike Hippocrates, Galen argued that humoral imbalances can be located in specific organs as well as in the body as a whole. [ 18 ] His modification of this theory better equipped doctors to make more precise diagnoses. Galen also played off Hippocrates' idea that emotions were tied to the humors, and added the notion of temperaments: sanguine corresponds with blood; phlegmatic is tied to phlegm; choleric is connected to yellow bile; and melancholic corresponds with black bile. Galen also saw the human body as consisting of three connected systems: the brain and nerves, which are responsible for thoughts and sensations; the heart and arteries, which give life; and the liver and veins, which can be attributed to nutrition and growth. [ 18 ] Galen was also the founder of experimental physiology. [ 19 ] For the next 1,400 years, Galenic physiology was a powerful and influential tool in medicine . [ 18 ] Jean Fernel (1497–1558), a French physician, introduced the term "physiology". [ 20 ] Galen, Ibn al-Nafis , Michael Servetus , Realdo Colombo , Amato Lusitano and William Harvey are credited as making important discoveries in the circulation of the blood . [ 21 ] Santorio Santorio in the 1610s was the first to use a device to measure the pulse rate (the pulsilogium ), and a thermoscope to measure temperature. [ 22 ] In 1791 Luigi Galvani described the role of electricity in the nerves of dissected frogs. In 1811, César Julien Jean Legallois studied respiration in animal dissection and lesions and found the center of respiration in the medulla oblongata .
In the same year, Charles Bell finished work on what would later become known as the Bell–Magendie law , which compared functional differences between dorsal and ventral roots of the spinal cord . In 1824, François Magendie described the sensory roots and produced the first evidence of the cerebellum's role in equilibration, completing the Bell–Magendie law. In the 1820s, the French physiologist Henri Milne-Edwards introduced the notion of physiological division of labor, which made it possible to "compare and study living things as if they were machines created by the industry of man." Inspired by the work of Adam Smith , Milne-Edwards wrote that the "body of all living beings, whether animal or plant, resembles a factory ... where the organs, comparable to workers, work incessantly to produce the phenomena that constitute the life of the individual." In more differentiated organisms, the functional labor could be apportioned between different instruments or systems (which he called appareils ). [ 23 ] In 1858, Joseph Lister studied the cause of blood coagulation and the inflammation that resulted after injuries and surgical wounds. He later discovered and implemented antiseptics in the operating room, and as a result decreased the death rate from surgery by a substantial amount. [ 24 ] The Physiological Society was founded in London in 1876 as a dining club. [ 25 ] The American Physiological Society (APS) is a nonprofit organization that was founded in 1887. The Society is "devoted to fostering education, scientific research, and dissemination of information in the physiological sciences." [ 26 ] In 1891, Ivan Pavlov performed research on "conditional responses" that involved dogs' saliva production in response to a bell and visual stimuli. [ 24 ] In the 19th century, physiological knowledge began to accumulate at a rapid rate, in particular with the 1838 appearance of the cell theory of Matthias Schleiden and Theodor Schwann . [ 27 ] It radically stated that organisms are made up of units called cells. Claude Bernard 's (1813–1878) further discoveries ultimately led to his concept of milieu interieur (internal environment), [ 28 ] [ 29 ] which would later be taken up and championed as " homeostasis " by American physiologist Walter B. Cannon in 1929. By homeostasis, Cannon meant "the maintenance of steady states in the body and the physiological processes through which they are regulated." [ 30 ] In other words, it is the body's ability to regulate its internal environment. William Beaumont was the first American to utilize the practical application of physiology. Nineteenth-century physiologists such as Michael Foster , Max Verworn , and Alfred Binet , based on Haeckel 's ideas, elaborated what came to be called "general physiology", a unified science of life based on cell actions, [ 23 ] later renamed in the 20th century as cell biology . [ 31 ] In the 20th century, biologists became interested in how organisms other than human beings function, eventually spawning the fields of comparative physiology and ecophysiology . [ 32 ] Major figures in these fields include Knut Schmidt-Nielsen and George Bartholomew . Most recently, evolutionary physiology has become a distinct subdiscipline. [ 33 ] In 1920, August Krogh won the Nobel Prize for discovering how, in capillaries, blood flow is regulated. [ 24 ] In 1954, Andrew Huxley and Hugh Huxley, alongside their research team, discovered the sliding filaments in skeletal muscle , known today as the sliding filament theory.
[ 24 ] Recently, there have been intense debates about the vitality of physiology as a discipline (Is it dead or alive?). [ 34 ] [ 35 ] If physiology is perhaps less visible nowadays than during the golden age of the 19th century, [ 36 ] it is in large part because the field has given birth to some of the most active domains of today's biological sciences, such as neuroscience , endocrinology , and immunology . [ 37 ] Furthermore, physiology is still often seen as an integrative discipline, which can bring together data from many different domains into a coherent framework. [ 35 ] [ 38 ] [ 39 ] Initially, women were largely excluded from official involvement in any physiological society. The American Physiological Society , for example, was founded in 1887 and included only men in its ranks. [ 40 ] In 1902, the American Physiological Society elected Ida Hyde as the first female member of the society. [ 41 ] Hyde, a representative of the American Association of University Women , a global non-profit organization that advances equity for women and girls in education, [ 42 ] attempted to promote gender equality in every aspect of science and medicine. Soon thereafter, in 1913, J.S. Haldane proposed that women be allowed to formally join The Physiological Society , which had been founded in 1876. [ 43 ] On 3 July 1915, six women were officially admitted: Florence Buchanan , Winifred Cullis , Ruth Skelton , Sarah C. M. Sowton , Constance Leetham Terry , and Enid M. Tribe . [ 44 ] The centenary of the election of women was celebrated in 2015 with the publication of the book "Women Physiologists: Centenary Celebrations And Beyond For The Physiological Society" ( ISBN 978-0-9933410-0-7 ). Physiology is studied across many groups of organisms, giving rise to subfields such as human physiology, animal physiology, plant physiology, fungal physiology, protistan physiology, algal physiology and bacterial physiology.
https://en.wikipedia.org/wiki/Physiology
The physiology of marathons is typically associated with high demands on a marathon runner's cardiovascular system and their locomotor system . The marathon was conceived centuries ago and has recently been gaining popularity among many populations around the world. The 42.195 km (26.2 mile) distance is a physical challenge that entails distinct features of an individual's energy metabolism . Marathon runners finish at different times because of individual physiological characteristics. The interaction between different energy systems captures the essence of why certain physiological characteristics of marathon runners exist. Differences in the efficiency of certain physiological features help explain the variety of finishing times among elite marathon runners who otherwise share many physiological characteristics. Aside from large aerobic capacities and other biochemical mechanisms, external factors such as the environment and the proper nourishment of a marathon runner help explain why marathon performance varies even when a runner possesses ideal physiological characteristics. The first marathon was perhaps a 25 mile run by Pheidippides , a Greek soldier who ran to Athens from the town of Marathon, Greece to deliver news of a battle victory over the Persians in 490 B.C. According to this account, he dropped dead of exhaustion shortly after arriving in Athens . [ 1 ] Thousands of years later, marathon running became part of world sports, starting with the inaugural marathon at the 1896 modern Olympic Games . After around 40 years of varying distances, the 42.195 kilometer (26.2 mile) trek became standard. The number of marathons in the United States has grown more than 45-fold over this period. [ 2 ] With this increase in popularity, scientists have a broad basis for analyzing the physiological characteristics of marathon runners and the factors that influence these traits, including the kind of factors that led to Pheidippides's death. The high physical and biochemical demands of marathon running, and the variation across finishing times, make for an intricate field of study that entangles multiple facets of human capacities. Humans metabolize food to transfer potential energy from food to adenosine triphosphate (ATP). This molecule provides the human body's instantly accessible form of energy for all functions of cells within the body. [ 3 ] During exercise, the human body places a high demand on ATP production to supply itself with enough energy to support all the corresponding changes in the body at work. The three energy systems involved in exercise are the phosphogenic, anaerobic glycolytic and aerobic energy pathways. [ 4 ] Although these three pathways act simultaneously, one specific pathway is prioritized over the others depending on the type of exercise an individual is performing. This differential prioritization is based on the duration and intensity of the particular exercise. Variable use of these energy pathways is central to the mechanisms that support long, sustained exercise such as running a marathon. The phosphogenic (ATP-PC) anaerobic energy pathway restores ATP after its breakdown via creatine phosphate stored in skeletal muscle . This pathway is anaerobic because it does not require oxygen to synthesize or use ATP. ATP restoration from this pathway lasts for only approximately the first 30 seconds of exercise. [ 3 ] This rapid rate of ATP production is essential at the onset of exercise. The amount of creatine phosphate and ATP stored in the muscle is small; it is readily available and, for these two reasons, used quickly.
Weight lifting or running sprints are examples of exercises that use this energy pathway. The anaerobic glycolytic energy pathway is the main source of energy from the end of the first 30 seconds of an exercise until about 3 minutes into that exercise. The first 30 seconds of exercise are most heavily reliant on the phosphogenic pathway for energy production. Through glycolysis , the breakdown of carbohydrates from blood glucose or muscle glycogen stores yields ATP for the body without the need for oxygen. [ 4 ] This energy pathway is often thought of as the transitional pathway between the phosphogenic energy pathway and the aerobic energy pathway because of the points during exercise at which it begins and ends. A 300-800 meter run is an example of an exercise that uses this pathway, as it is typically of higher intensity than endurance exercise and is sustained for only 30–180 seconds, depending on training. The aerobic energy pathway is the third and slowest ATP-producing pathway and is oxygen dependent. This energy pathway typically supplies the bulk of the body's energy during exercise, from about three minutes after the onset of exercise until the end of exercise or the onset of fatigue. The body uses this energy pathway for lower intensity exercise that lasts longer than three minutes, which corresponds to the rate at which the body produces ATP using oxygen. [ 3 ] This energy system is essential to endurance athletes such as marathon runners, triathletes and cross-country skiers. The aerobic energy pathway is able to produce the largest amount of ATP of these three systems. This is largely because of this energy system's ability to convert fats , carbohydrates , and protein into a state that can enter the mitochondria , the site of aerobic ATP production. [ 5 ] Marathon runners possess above-average aerobic capacities, oftentimes up to 50% larger than those of normally active individuals. [ 6 ] Aerobic capacity, or VO 2Max , is an individual's ability to maximally take up and consume oxygen in all bodily tissue during exhaustive exercise. [ 7 ] Aerobic capacity serves as a good measure of exercise intensity as it is the upper limit of one's physical performance. An individual cannot perform any exercise at 100% VO 2Max for extended periods of time. [ 7 ] The marathon is generally run at about 70-90% of VO 2Max , and the fractional use of one's aerobic capacity serves as a key component of marathon performance. [ 6 ] The physiological mechanisms underlying aerobic capacity, or VO 2Max , are the transportation and distribution of oxygen in the blood and the use of this oxygen within muscle cells. [ 7 ] VO 2Max is one of the most salient indicators of endurance exercise performance. The VO 2Max of an elite runner at maximal exercise is almost two times the value of a fit or trained adult at maximal exercise. [ 8 ] Marathon runners demonstrate physiological characteristics that enable them to deal with the high demands of a 26.2 mile (42.195 km) run. The primary components of an individual's VO 2Max are the properties of aerobic capacity that influence the fractional use (%VO 2Max ) of this ability to take up and consume oxygen during exhaustive exercise. The transportation of large amounts of blood to and from the lungs to reach all bodily tissues depends on a high cardiac output and sufficient levels of total body hemoglobin . Hemoglobin is the oxygen-carrying protein within red blood cells that transports oxygen from the lungs to other bodily tissues via the circulatory system .
[ 9 ] For effective transportation of oxygen in blood during a marathon, the distribution of blood must be efficient. The mechanism that allows for this distribution of oxygen to the muscle cells is muscle blood flow. [ 10 ] A 20-fold increase in local blood flow within skeletal muscle is necessary for endurance athletes, like marathon runners, to meet their muscles' oxygen demands at maximal exercise, which are up to 50 times greater than at rest. [ 10 ] Upon successful transportation and distribution of oxygen in the blood, the extraction and use of this oxygen within skeletal muscle are what give rise to a marathoner's increased aerobic capacity and to the overall improvement of an individual's marathon performance. Extraction of oxygen from the blood is performed by myoglobin within the skeletal muscle cells, which accepts and stores oxygen. [ 9 ] These components of aerobic capacity help define the maximal uptake and consumption of oxygen in bodily tissues during exhaustive exercise. Marathon runners often present enlarged dimensions of the heart and decreased resting heart rates that enable them to achieve greater aerobic capacities. [ 7 ] [ 11 ] Although these morphological and functional changes in a marathon runner's heart aid in maximizing their aerobic capacity, these factors are also what set the limit for an individual to maximally take up and consume oxygen in their bodily tissues during endurance exercise. Increased dimensions of the heart enable an individual to achieve a greater stroke volume . A concomitant decrease in stroke volume occurs with the initial increase in heart rate at the onset of exercise. [ 6 ] The highest heart rate an individual can achieve is limited and decreases with age (Estimated Maximum Heart Rate = 220 - age in years). [ 12 ] Despite an increase in cardiac dimensions, a marathoner's aerobic capacity is confined to this capped and ever decreasing heart rate . An athlete's aerobic capacity cannot increase indefinitely because, at their maximum heart rate, the heart can only pump a specific volume of blood. [ 12 ] [ 7 ] An individual running a marathon experiences a redistribution of blood toward the skeletal muscles. This distribution of blood maximizes oxygen extraction by the skeletal muscles to aerobically produce as much ATP as is needed to meet demand. To achieve this, blood volume increases. [ 7 ] The initial increase in blood volume during marathon running can later lead to decreased blood volume as a result of increased core body temperature, pH changes in skeletal muscles, and the increased dehydration associated with cooling during such exercise. The oxygen affinity of the blood is affected by blood plasma volume and an overall decrease in blood volume. Dehydration , temperature and pH differences between the lungs and the muscle capillaries can limit one's ability to fractionally use their aerobic capacity (%VO 2Max ). [ 7 ] [ 13 ] Other limitations affecting a marathon runner's VO 2Max include pulmonary diffusion , mitochondrial enzyme activity, and capillary density. These features of a marathon runner can be enlarged compared to those of an untrained individual but have upper limits determined by the body. Improved mitochondrial enzyme activity and increased capillary density likely accommodate more aerobically produced ATP. These increases only occur to a certain point and help to determine peak aerobic capacity.
[ 7 ] Especially in fit individuals, pulmonary diffusion correlates strongly with VO 2Max and can become limiting, as a large cardiac output can prevent hemoglobin from being efficiently saturated with oxygen. [ 7 ] [ 14 ] This insufficient oxygen saturation, often seen in well-trained athletes such as marathoners, can be attributed to the shorter transit time that results when larger amounts of blood are pumped per unit time. Not all inspired air and its components make it into the pulmonary system, due to the human body's anatomical dead space , which, in the context of exercise, represents wasted oxygen. [ 15 ] Despite being one of the most salient predictors of marathon performance, a large VO 2Max is only one of the factors that may affect marathon performance. A marathoner's running economy is their submaximal requirement for oxygen at specific speeds. This concept of running economy helps explain different marathon times for runners with similar aerobic capacities. [ 11 ] The steady-state oxygen consumption used to define running economy reflects the energy cost of running at submaximal speeds. This is often measured by the volume of oxygen consumed, either in liters or milliliters , per kilogram of body weight per minute (L/kg/min or mL/kg/min). [ 6 ] Discrepancies in the winning times of marathon runners with almost identical VO 2Max and %VO 2Max values can be explained by different levels of oxygen consumption per minute at the same speeds. This explains, for example, why Jim McDonagh ran faster winning marathon performances than Ted Corbitt. Corbitt's greater submaximal oxygen requirement (3.3 L of oxygen per minute for Corbitt vs. 3.0 L of oxygen per minute for McDonagh) corresponds to a greater level of energy expenditure while running at the same speed. [ 6 ] Running economy (efficiency) is an important factor in elite marathon performance, since energy expenditure is only weakly correlated with increases in a runner's mean velocity. [ 6 ] Disparities in running economy thus help determine differences in marathon performance, and the efficiency of elite runners is reflected in the marginal differences in their total energy expenditure when running at velocities greater than those of recreational athletes. A marathon runner's velocity at lactate threshold is strongly correlated to their performance. Lactate threshold or anaerobic threshold is considered a good indicator of the body's ability to efficiently process and transfer chemical energy into mechanical energy . [ 7 ] A marathon is considered an aerobic-dominant exercise, but the higher intensities associated with elite performance use a larger percentage of anaerobic energy. The lactate threshold is the crossover point between predominantly aerobic energy usage and anaerobic energy usage. This crossover is associated with the anaerobic energy system's inability to efficiently produce energy, leading to the buildup of blood lactate often associated with muscle fatigue . [ 16 ] In endurance-trained athletes, the increase in blood lactate concentration appears at about 75%-90% VO 2Max , which directly corresponds to the fraction of VO 2Max at which marathons are run. With such a high intensity sustained for over two hours, a marathon runner's performance requires more energy production than mitochondrial activity alone can supply. This causes a higher anaerobic to aerobic energy ratio during a marathon.
[ 7 ] [ 16 ] The higher the velocity and fractional use of aerobic capacity an individual has at their lactate threshold, the better their overall performance. Uncertainty exists about how lactate threshold affects endurance performance. The accumulation of blood lactate has been attributed to potential skeletal muscle hypoxemia, but lactate also contributes to the production of more glucose that can be used as energy. [ 11 ] [ 7 ] Because no single set of physiological mechanisms has been established for how blood lactate accumulation affects the exercising individual, the lactate threshold is assigned a correlative rather than a causal role in marathon performance. [ 17 ] To sustain high intensity running, a marathon runner must have sufficient glycogen stores. Glycogen can be found in the skeletal muscles or liver . With low levels of glycogen stores at the onset of the marathon, premature depletion of these stores can reduce performance or even prevent completion of the race. [ 6 ] [ 7 ] ATP production via aerobic pathways can further be limited by glycogen depletion. Free fatty acids serve as a sparing mechanism for glycogen stores. The artificial elevation of these fatty acids, along with endurance training, demonstrates a marathon runner's ability to sustain higher intensities for longer periods of time. The prolonged sustenance of running intensity is attributed to a high turnover rate of fatty acids that allows the runner to preserve glycogen stores later into the race. [ 11 ] Some suggest that ingesting monosaccharides at low concentrations during the race could delay glycogen depletion. This lower concentration, as opposed to a high concentration of monosaccharides, is proposed as a means of maintaining more efficient gastric emptying and faster intestinal uptake of this energy source. [ 11 ] Carbohydrates may be the most efficient source of energy for ATP. Pasta parties and the consumption of carbohydrates in the days leading up to a marathon are common practices among marathon runners at all levels. [ 6 ] [ 18 ] Maintaining internal core body temperature is crucial to a marathon runner's performance and health. An inability to reduce rising core body temperature can lead to hyperthermia . To reduce body heat, the body must remove metabolically produced heat by sweating (also known as evaporative cooling). Heat dissipation by sweat evaporation can lead to significant bodily water loss. [ 11 ] A marathon runner can lose water amounting to about 8% of body weight. [ 6 ] Fluid replacement is limited, but can help keep internal temperatures cooler. Fluid replacement is physiologically challenging during exercise of this intensity due to the inefficient emptying of the stomach. Partial fluid replacement can help prevent a marathon runner's body from overheating, but it is not enough to keep pace with the loss of fluid via sweat evaporation. Environmental factors such as air resistance , rain , terrain, and heat affect a marathon runner's ability to perform at their full physiological capacity. Air resistance or wind and the marathon course terrain (hilly or flat) are both factors. [ 11 ] [ 7 ] Rain can affect performance by adding weight to the runner's attire. Temperature, in particular heat, is the strongest environmental impediment to marathon performance. [ 19 ] An increase in air temperature affects all runners in the same way. This negative effect of increased temperature on race performance is associated with marathon runners' hospitalizations and exercise-induced hyperthermia .
There are other environmental factors less directly associated with marathon performance such as the pollutants in the air and even prize money associated with a specific marathon itself. [ 19 ]
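To make the quantities discussed above more concrete, the following minimal Python sketch ties together the rule-of-thumb maximum heart rate estimate (220 minus age), the roughly 70-90% of VO 2Max at which marathons are run, and the running economy comparison between McDonagh (about 3.0 L of oxygen per minute) and Corbitt (about 3.3 L per minute) at the same speed. The age and VO 2Max values below are assumed purely for illustration; only the heart rate formula and the two oxygen-consumption figures come from the text above.

def estimated_max_heart_rate(age_years):
    """Rule-of-thumb estimate quoted above: 220 minus age in years."""
    return 220 - age_years

def oxygen_uptake_at_fraction(vo2max_ml_kg_min, fraction):
    """Oxygen uptake (mL/kg/min) when running at a given fraction of VO2Max."""
    return vo2max_ml_kg_min * fraction

if __name__ == "__main__":
    age = 30          # assumed example age
    vo2max = 70.0     # assumed elite-level VO2Max in mL/kg/min
    print(f"Estimated maximum heart rate: {estimated_max_heart_rate(age)} beats/min")
    # Marathons are typically run at about 70-90% of VO2Max.
    for fraction in (0.70, 0.90):
        uptake = oxygen_uptake_at_fraction(vo2max, fraction)
        print(f"At {fraction:.0%} of VO2Max: {uptake:.1f} mL/kg/min")
    # Running economy comparison from the text: at the same speed,
    # McDonagh used about 3.0 L O2/min versus 3.3 L O2/min for Corbitt.
    mcdonagh, corbitt = 3.0, 3.3
    saving = (corbitt - mcdonagh) / corbitt
    print(f"McDonagh uses about {saving:.0%} less oxygen at the same pace")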
https://en.wikipedia.org/wiki/Physiology_of_marathons
The physiome of an individual's or species ' physiological state is the description of its functional behavior. The physiome describes the physiological dynamics of the normal intact organism and is built upon information and structure ( genome , proteome , and morphome ). The term comes from "physio-" (nature) and "-ome" (as a whole). The study of the physiome is called physiomics . The concept of a physiome project was presented to the International Union of Physiological Sciences (IUPS) by its Commission on Bioengineering in Physiology in 1993. A workshop on designing the Physiome Project was held in 1997. At its world congress in 2001, the IUPS designated the project as a major focus for the next decade. [ 1 ] The project is led by the Physiome Commission of the IUPS. [ 2 ] Several other research initiatives are also related to the physiome.
https://en.wikipedia.org/wiki/Physiome
The physiotope is the total abiotic matrix of habitat present within any certain ecotope . It refers to the landform, the rocks and the soils, the climate and the hydrology, and the geologic processes which marshalled all these resources together in a certain way and in this time and place. [ 1 ] Specifically, the physiotope denotes spatially explicit functional landscape units that can stratify landscapes into distinct units resulting from geological, morphological and soil processes. [ 2 ] In contrast to ecotopes, the physiotope does not include any definition of vegetation cover. [ 3 ] As such, resources used in mapping physiotopes strictly pertain to those implicated in the development and evolution of abiotic components of ecosystems. [ 4 ] Physiotopes can be utilized in mapping landscapes to help study the relation between abiotic and biotic parts of nature (e.g., how the soil composition, geomorphology, etc. of one area can impact how biotic elements grow) in both land [ 5 ] and aquatic ecosystems. [ 6 ] They can also be used for analyzing land-use development in relation to geography for insights into policy implications. [ 7 ]
https://en.wikipedia.org/wiki/Physiotope
Physisorption , also called physical adsorption , is a process in which the electronic structure of the atom or molecule is barely perturbed upon adsorption . [ 1 ] [ 2 ] [ 3 ] The fundamental interacting force of physisorption is the Van der Waals force . Even though the interaction energy is very weak (~10–100 meV), physisorption plays an important role in nature. For instance, the van der Waals attraction between surfaces and the foot-hairs of geckos (see Synthetic setae ) provides their remarkable ability to climb up vertical walls. [ 4 ] Van der Waals forces originate from the interactions between induced, permanent or transient electric dipoles. In comparison with chemisorption , in which the electronic structure of the bonding atoms or molecules is changed and covalent or ionic bonds form, physisorption does not result in changes to the chemical bonding structure. In practice, the categorisation of a particular adsorption as physisorption or chemisorption depends principally on the binding energy of the adsorbate to the substrate, with physisorption being far weaker on a per-atom basis than any type of connection involving a chemical bond. To give a simple illustration of physisorption, we can first consider an adsorbed hydrogen atom in front of a perfect conductor, as shown in Fig. 1. A nucleus with positive charge is located at R = (0, 0, Z ), and the position coordinate of its electron, r = ( x , y , z ), is given with respect to the nucleus. The adsorption process can be viewed as the interaction between this hydrogen atom and its image charges (of both the nucleus and the electron) in the conductor. The resulting total electrostatic energy is a sum of attraction and repulsion terms. The first term is the attractive interaction between the nucleus and its image charge, and the second term is the attractive interaction between the electron and its image charge. The repulsive interaction is given by the third and fourth terms, arising from the interaction between the nucleus and the image electron and from the interaction between the electron and the image nucleus, respectively. Expanding this interaction energy as a Taylor series in powers of | r | / | R |, one finds from the first non-vanishing term that the physisorption potential depends on the distance Z between the adsorbed atom and the surface as Z^-3, in contrast with the r^-6 dependence of the molecular van der Waals potential, where r is the distance between two dipoles . The van der Waals binding energy can be analyzed by another simple physical picture: modeling the motion of an electron around its nucleus as a three-dimensional simple harmonic oscillator with potential energy V_a = (1/2) m_e ω^2 ( x^2 + y^2 + z^2 ), where m_e and ω are the mass and vibrational frequency of the electron, respectively. As this atom approaches the surface of a metal and adsorbs, this potential energy V_a is modified by the image charges through additional potential terms which are quadratic in the displacements. If one assumes that the electron is in the ground state, then the van der Waals binding energy is essentially the change of the zero-point energy of this modified oscillator; that expression again shows the Z^-3 dependence of the van der Waals interaction. Furthermore, by introducing the atomic polarizability , the van der Waals potential can be written in the simplified form V(Z) ≈ −C_v / Z^3, where C_v is the van der Waals constant, which is related to the atomic polarizability.
Also, by expressing the fourth-order correction in the Taylor expansion above as ( a C_v Z_0 ) / Z^4, where a is some constant, we can define Z_0 as the position of the dynamical image plane and obtain V(Z) ≈ −C_v / ( Z − Z_0 )^3. The origin of Z_0 comes from the spilling of the electron wavefunction out of the surface. As a result, the position of the image plane representing the reference for the space coordinate is different from the substrate surface itself and is modified by Z_0 . Table 1 shows the jellium model calculation of the van der Waals constant C_v and the dynamical image plane Z_0 for rare gas atoms on various metal surfaces. The increase of C_v from He to Xe for all metal substrates is caused by the larger atomic polarizability of the heavier rare gas atoms. The position of the dynamical image plane decreases with increasing dielectric function and is typically on the order of 0.2 Å. Even though the van der Waals interaction is attractive, as the adsorbed atom moves closer to the surface the wavefunction of the electron starts to overlap with that of the surface atoms. Furthermore, the energy of the system will increase due to the orthogonality of the wavefunctions of the approaching atom and the surface atoms. This Pauli exclusion and repulsion is particularly strong for atoms with closed valence shells, which dominate the surface interaction. As a result, the minimum energy of physisorption must be found from the balance between the long-range van der Waals attraction and the short-range Pauli repulsion . For instance, by separating the total interaction of physisorption into two contributions, a short-range term described by Hartree–Fock theory and a long-range van der Waals attraction, the equilibrium position of physisorption for rare gases adsorbed on a jellium substrate can be determined. [ 5 ] Fig. 2 shows the physisorption potential energy of He adsorbed on Ag, Cu, and Au substrates, which are described by the jellium model with different densities of the smeared-out positive background charge. It can be seen that the weak van der Waals interaction leads to shallow attractive energy wells (<10 meV). One of the experimental methods for exploring the physisorption potential energy is the scattering process, for instance, inert gas atoms scattered from metal surfaces. Certain specific features of the interaction potential between the scattered atoms and the surface can be extracted by analyzing the experimentally determined angular distribution and cross sections of the scattered particles. Since 1980, two theories have been developed to explain adsorption and obtain equations that work: the chi hypothesis (a quantum mechanical derivation) and excess surface work (ESW). [ 6 ] Both these theories yield the same equation for flat surfaces: θ = ( χ − χ_c ) U( χ − χ_c ), where U is the unit step function. The definitions of the other symbols are as follows: θ := n_ads / n_m and χ := −ln( −ln( P / P_vap ) ), where "ads" stands for "adsorbed", "m" stands for "monolayer equivalence" and "vap" refers to the vapor pressure of the liquid adsorptive at the same temperature as the solid sample ("ads" and "vap" follow the latest IUPAC convention, but "m" has no IUPAC equivalent notation).
The unit step function creates the definition of the molar energy of adsorption for the first adsorbed molecule through χ_c =: −ln( −E_a / RT ). The plot of n_ads versus χ is referred to as the chi plot. For flat surfaces, the slope of the chi plot yields the surface area. Empirically, this plot was noticed to be a very good fit to the isotherm by Polanyi [ 7 ] [ 8 ] [ 9 ] and also by deBoer and Zwikker, [ 10 ] but was not pursued, due to criticism in the former case by Einstein and in the latter case by Brunauer. This flat-surface equation may be used as a "standard curve" in the normal tradition of comparison curves, with the exception that the early portion of the porous sample's plot of n_ads versus χ acts as a self-standard. Ultramicroporous, microporous and mesoporous conditions may be analyzed using this technique. Standard deviations for full isotherm fits, including porous samples, are typically less than 2%. A typical fit to good data on a homogeneous non-porous surface is shown in figure 3. The data are by Payne, Sing and Turk [ 11 ] and were used to create the α-s standard curve. Unlike the BET equation, which at best can be fit over the range 0.05 to 0.35 of P / P_vap , the chi-plot fit spans the full isotherm.
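As a minimal illustration of the flat-surface chi-plot equations quoted above, the short Python sketch below applies the transform χ = −ln(−ln(P/P_vap)) and the step-function relation θ = (χ − χ_c)U(χ − χ_c) to a set of made-up relative pressures. The χ_c and monolayer-equivalence values are assumed for illustration only and are not taken from the sources cited here.

import numpy as np

def chi(p_over_pvap):
    """Chi transform of the relative pressure: chi = -ln(-ln(P/Pvap))."""
    return -np.log(-np.log(p_over_pvap))

def theta(p_over_pvap, chi_c):
    """theta = (chi - chi_c) * U(chi - chi_c), with U the unit step function."""
    excess = chi(p_over_pvap) - chi_c
    return np.where(excess > 0.0, excess, 0.0)

if __name__ == "__main__":
    p = np.linspace(0.05, 0.95, 10)   # assumed relative pressures P/Pvap
    chi_c = -1.0                      # assumed threshold value chi_c
    n_m = 2.5                         # assumed monolayer equivalence, mmol/g
    n_ads = n_m * theta(p, chi_c)     # predicted amount adsorbed on a flat surface
    for p_i, n_i in zip(p, n_ads):
        print(f"P/Pvap = {p_i:.2f}  ->  n_ads = {n_i:.2f} mmol/g")

For a flat surface, a linear fit of n_ads against χ in the region above χ_c would then give the slope from which, as stated above, the surface area is obtained.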
https://en.wikipedia.org/wiki/Physisorption
A phytase ( myo -inositol hexakisphosphate phosphohydrolase) is any type of phosphatase enzyme that catalyzes the hydrolysis of phytic acid (myo-inositol hexakisphosphate) – an indigestible, organic form of phosphorus that is found in many plant tissues, especially in grains and oil seeds – and releases a usable form of inorganic phosphorus. [ 1 ] While phytases have been found to occur in animals, plants, fungi and bacteria, phytases have been most commonly detected and characterized from fungi. [ 2 ] The first plant phytase was found in 1907 in rice bran [ 3 ] [ 4 ] and the first animal phytase in 1908 (in calf 's liver and blood). [ 4 ] [ 5 ] The first attempt at commercializing phytases to enhance animal feed nutrition began in 1962, when International Minerals & Chemicals (IMC) studied over 2000 microorganisms to find the most suitable ones for phytase production. This project was launched in part due to concerns that mineable sources of inorganic phosphorus, which IMC was supplying to the feed industry at the time, would eventually run out (see peak phosphorus ). The Aspergillus ( ficuum ) niger fungal strain NRRL 3135 (ATCC 66876) was identified as a promising candidate [ 6 ] as it was able to produce large amounts of extracellular phytases. [ 7 ] However, the organism's efficiency was not sufficient for commercialization, so the project ended in 1968 as a failure. [ 6 ] Still, the identification of A. niger led in 1984 to a new attempt with A. niger mutants made with the then relatively recently invented recombinant DNA technology. This USDA-funded project was initiated by Dr. Rudy Wodzinski, who had formerly participated in IMC's project. [ 6 ] This 1984 project led in 1991 to the first partially cloned phytase gene, phyA (from A. niger NRRL 3135), [ 6 ] [ 8 ] and later, in 1993, to the cloning of the full gene and its overexpression in A. niger . [ 6 ] [ 9 ] In 1991 BASF began to sell the first commercial phytase, produced in A. niger under the trademark Natuphos, which was used to increase the nutrient content of animal feed . [ 6 ] In 1999 Escherichia coli bacterial phytases were identified as being more effective than A. niger fungal phytases. [ 6 ] [ 10 ] [ 11 ] Subsequently, this led to the use in animal feed of this new generation of bacterial phytases, which were superior to fungal phytases in many aspects. [ 6 ] Four distinct structural classes of phytase have been characterized in the literature: histidine acid phosphatases (HAPS), beta-propeller phytases (BPPs), purple acid phosphatases (PAPs), [ 2 ] and most recently, protein tyrosine phosphatase -like phytases (PTP-like phytases). [ 12 ] Most of the known phytases belong to a class of enzyme called histidine acid phosphatases (HAPs). HAPs have been isolated from filamentous fungi, bacteria, yeast, and plants. [ 1 ] All members of this class of phytase share a common active site sequence motif (Arg-His-Gly-X-Arg-X-Pro) and have a two-step mechanism that hydrolyzes phytic acid (as well as some other phosphoesters). [ 2 ] The phytase from the fungus Aspergillus niger is a HAP and is well known for its high specific activity and its commercially marketed role as an animal feed additive to increase the bioavailability of phosphate from phytic acid in the grain-based diets of poultry and swine. [ 13 ] HAPs have also been overexpressed in several transgenic plants as a potential alternative method of phytase production for the animal feed industry [ 14 ] and very recently, the HAP phytase gene from E. 
coli has been successfully expressed in a transgenic pig. [ 15 ] β-propeller phytases make up a recently discovered class of phytase. The first examples of this class of enzyme were cloned from Bacillus species, [ 2 ] but numerous microorganisms have since been identified as producing β-propeller phytases. The three-dimensional structure of β-propeller phytase is similar to a propeller with six blades. Current research suggests that β-propeller phytases are the major phytate-degrading enzymes in water and soil, and may play a major role in phytate-phosphorus cycling. [ 16 ] A phytase has recently been isolated from the cotyledons of germinating soybeans that has the active site motif of a purple acid phosphatase (PAP). This class of metalloenzyme has been well studied and searches of genomic databases reveal PAP-like sequences in plants, mammals, fungi, and bacteria. However, only the PAP from soybeans has been found to have any significant phytase activity. The three-dimensional structure, active-site sequence motif and proposed mechanism of catalysis have been determined for PAPs. [ citation needed ] Only a few of the known phytases belong to a superfamily of enzymes called protein tyrosine phosphatases (PTPs). PTP-like phytases, a relatively newly discovered class of phytase, have been isolated from bacteria that normally inhabit the gut of ruminant animals. [ 17 ] All characterized PTP-like phytases share an active site sequence motif (His-Cys-(X)5-Arg), a two-step, acid-base mechanism of dephosphorylation, and activity towards phosphorylated tyrosine residues, characteristics that are common to all PTP superfamily enzymes. [ 18 ] [ 19 ] Like many PTP superfamily enzymes, the exact biological substrates and roles of bacterial PTP-like phytases have not yet been clearly identified. The characterized PTP-like phytases from ruminal bacteria share sequence and structural homology with the mammalian PTP-like phosphoinositide/-inositol phosphatase PTEN, [ 12 ] and significant sequence homology to the PTP domain of a type III-secreted virulence protein from Pseudomonas syringae (HopPtoD2). [ 20 ] Most phytases show a broad substrate specificity, having the ability to hydrolyze many phosphorylated compounds that are not structurally similar to phytic acid, such as ADP , ATP , phenyl phosphate, fructose 1,6-bisphosphate , glucose 6-phosphate , glycerophosphate and 3-phosphoglycerate . Only a few phytases have been described as highly specific for phytic acid, such as phytases from Bacillus sp. , Aspergillus sp. , E. coli [ 21 ] and those phytases belonging to the class of PTP-like phytases. [ 18 ] Phytic acid has six phosphate groups that may be released by phytases at different rates and in different orders. Phytases hydrolyze phosphates from phytic acid in a stepwise manner, yielding products that again become substrates for further hydrolysis. Most phytases are able to cleave five of the six phosphate groups from phytic acid. Phytases have been grouped based on the first phosphate position of phytic acid that is hydrolyzed. The Enzyme Nomenclature Committee of the International Union of Biochemistry recognizes three types of phytases based on the position of the first phosphate hydrolyzed: 3-phytase ( EC 3.1.3.8 ), 4-phytase ( EC 3.1.3.26 ), and 5-phytase ( EC 3.1.3.72 ). To date, most of the known phytases are 3-phytases or 4-phytases; [ 21 ] only a HAP purified from lily pollen [ 22 ] and a PTP-like phytase from Selenomonas ruminantium subsp. 
lactilytica [ 20 ] have been determined to be 5-phytases . Phytic acid and its metabolites have several important roles in seeds and grains; most notably, phytic acid functions as a phosphorus store, an energy store, a source of cations and a source of myo-inositol (a cell wall precursor). Phytic acid is the principal storage form of phosphorus in plant seeds and the major source of phosphorus in the grain-based diets used in intensive livestock operations. The organic phosphate found in phytic acid is largely unavailable to the animals that consume it, but the inorganic phosphate that phytases release can be easily absorbed. Ruminant animals can use phytic acid as a source of phosphorus because the bacteria that inhabit their gut are well characterized producers of many types of phytases. However, monogastric animals do not carry bacteria that produce phytase, and thus these animals cannot use phytic acid as a major source of phosphorus, so it is excreted in the feces. [ 23 ] However, humans, especially vegetarians and vegans whose gut microbiomes have adapted to their diet, can harbor gut microbes that produce phytases capable of breaking down phytic acid. [ 24 ] Phytic acid and its metabolites have several other important roles in eukaryotic physiological processes. As such, phytases, which hydrolyze phytic acid and its metabolites, also have important roles. Phytic acid and its metabolites have been implicated in DNA repair, clathrin -coated vesicular recycling, control of neurotransmission and cell proliferation. [ 25 ] [ 26 ] [ 27 ] The exact roles of phytases in the regulation of phytic acid and its metabolites, and the resulting role in the physiological processes described above, are still largely unknown and the subject of much research. Phytase has been reported to cause hypersensitivity pneumonitis in a human exposed while adding the enzyme to cattle feed. [ 28 ] [ 29 ] Phytase is produced by bacteria found in the gut of ruminant animals (cattle, sheep), making it possible for them to use the phytic acid found in grains as a source of phosphorus. [ 30 ] Non-ruminants ( monogastric animals) like human beings, dogs, pigs, birds, etc. do not produce phytase. Research in the field of animal nutrition has put forth the idea of supplementing feed with phytase so as to make available to the animal phytate-bound nutrients like calcium , phosphorus , minerals , carbohydrates , amino acids and proteins . [ 31 ] In Canada, a genetically modified pig called Enviropig, which has the capability to produce phytase primarily through its salivary glands, was developed and approved for limited production. [ 32 ] [ 33 ] Phytase is used as an animal feed supplement – often in poultry and swine – to enhance the nutritive value of plant material by liberation of inorganic phosphate from phytic acid (myo-inositol hexakisphosphate). Phytase can be purified from transgenic microbes and has been produced recently in transgenic canola , alfalfa and rice plants. [ 34 ] Phytase has also been produced from cultivated Rhizopus oligosporus .
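The classification just described, by the position of the first phosphate hydrolyzed on phytic acid, can be summarized as a small lookup table. The Python sketch below is purely illustrative; the EC numbers are those quoted in the text, while the function and variable names are invented for the example.

PHYTASE_TYPES = {
    3: ("3-phytase", "EC 3.1.3.8"),
    4: ("4-phytase", "EC 3.1.3.26"),
    5: ("5-phytase", "EC 3.1.3.72"),
}

def classify_phytase(first_position_hydrolyzed):
    """Return the name and EC number of a phytase, keyed by the position of the
    first phosphate it removes from phytic acid (myo-inositol hexakisphosphate)."""
    try:
        name, ec = PHYTASE_TYPES[first_position_hydrolyzed]
    except KeyError:
        raise ValueError("recognized phytases first hydrolyze position 3, 4 or 5")
    return f"{name} ({ec})"

print(classify_phytase(5))   # -> "5-phytase (EC 3.1.3.72)"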
https://en.wikipedia.org/wiki/Phytase
Phytobenthos ( /faɪtoʊˈbɛnθɒs/ ; from Greek φυτόν ( phyton , meaning "plants") and βένθος ( benthos , meaning "depths")) are autotrophic organisms found attached to bottom surfaces of aquatic environments, such as rocks, sediments, or even other organisms. [ 1 ] [ 2 ] [ 3 ] This photosynthetic community includes single-celled or filamentous cyanobacteria , microalgae , and macrophytes . [ 4 ] [ 5 ] Phytobenthos are highly diverse, and can be found in freshwater and marine environments, as well as transitional water systems. [ 6 ] [ 7 ] However, their distribution and availability still depend on the factors and stressors that exist in the environment. [ 8 ] Because phytobenthos are autotrophs, they need to be able to subsist where it is still possible to perform photosynthesis. [ 1 ] Similar to phytoplankton, phytobenthos contribute to the aquatic food web for grazers and heterotrophic bacteria, and researchers have also been studying their health as an indicator for water quality and environmental integrity of aquatic ecosystems. [ 5 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] Phytobenthos are subcategorized into microphytobenthos and macrophytobenthos. Microphytobenthos such as diatoms can be as small as 0.2 μm in diameter, and macrophytobenthos such as kelps can be tens of meters long. [ 4 ] [ 14 ] To establish themselves on surfaces, phytobenthos usually stabilize themselves onto substrates through the use of various polysaccharides, glycoproteins, and even lipids that make up the extracellular polymeric substance, of which 40-90% of the carbons are derived from carbohydrates. [ 15 ] Some species of phytobenthos such as Ostreobium and diatoms such as the Synedra acus Kütznig have been observed to live in a free-living state. [ 16 ] [ 17 ] Benthic diatoms have been found to be useful indicator species for determining the state of the aquatic environment as many study models have demonstrated association between the type of diatom communities that are present and the stability and the size of the sediments. [ 18 ] Non-diatom phytobenthos such as the cyanobacteria Nostoc spp. and Phormidium spp. have also been used as biological indicators. [ 19 ] Phytobenthos consist of both eukaryotic and prokaryotic communities, which can be identified by using microscopy or by performing gene sequencing with 16S rRNA (for prokaryotes) and 18S rRNA (for eukaryotes). [ 20 ] The eukaryotic communities of phytobenthos include microalgae such as Chlorophyceae , Bacillariophyceae , Cryptophyceae , and Chrysophyceae . [ 21 ] In the marine environment, some additional representative populations include Rhodophyta , which has been reportedly found in intertidal regions. [ 22 ] The prokaryotic communities of phytobenthos are composed primarily of filamentous cyanobacteria, some of which have been identified to be capable of producing hepatotoxins. [ 23 ] Depending on the type of substrates to which the phytobenthos is attached, they would be considered as epilithic (growing on rocks and other manmade, artificial substances), epipelic (growing on silt), episammic (growing on sand), epiphytic (growing on other plants), or epizoic (growing on animals). [ 24 ] Epizoic phytobenthos such as Ostreobium and Symbiodinium have also been found to grow on skeletons or within corals to which they have established symbiotic relationships by exchanging nutrients. [ 25 ] The phytobenthos' habitats can range from freshwater systems such as rivers and lakes to coastal regions. 
In the marine environment, phytobenthos can be found as far from the shore as the subtidal zones, where they are consistently submerged in water. [ 26 ] Their productivity does not extend beyond the outer boundary of the littoral zones , the region in which sunlight can still penetrate to the bottom. [ 27 ] With increasing depth, there is a decline in algal cover due in part to light availability. [ 1 ] [ 3 ] In addition to depth, turbidity can restrict the extent of light availability, which would also impact the extent of phytobenthic growth. [ 1 ] [ 28 ] However, phytobenthos such as Ostreobium have demonstrated the capability to adapt to low-light conditions and grow in areas as deep as 200 meters. [ 3 ] [ 25 ] Some diatoms also demonstrate mobility and rise to the surface during the earlier part of the year. [ 24 ] Other physical and chemical conditions that also determine phytobenthos distributions include flow, acidity, nutrients, temperature, and the community's composition. [ 1 ] Water flow can determine the types and distributions of phytobenthos, especially in stream communities where the water is constantly moving. [ 1 ] Rivers with steadier flow provide a stable environment that can promote the growth of phytobenthos communities. [ 29 ] Phytobenthos form biofilms with other microbial populations, including heterotrophic bacteria, which can also produce extracellular polymeric substances that help establish the biofilm. [ 20 ] Within these diverse communities, phytobenthos sustain the heterotrophs and mixotrophs in more ways than by serving as food themselves. [ 20 ] As primary producers, phytobenthos fix organic matter, and the extracellular polymeric substances they produce to attach themselves to surfaces can also be utilized by bacteria as another potential carbon source. [ 15 ] The presence of consumers is not the only biotic factor driving changes to the phytobenthos composition of the community. Photosynthetic populations that prove competitive can also change the benthic community's makeup. [ 30 ] The diatom D. geminata can proliferate quickly and is readily adaptive to changes in the aquatic environment. [ 30 ] Researchers have assigned trophic values or indicators based on the Periphyton Index of Trophic status (PIT) to phytobenthos as another means to determine the ecological status of water bodies. [ 10 ] [ 11 ] Researchers have also taken into consideration the water chemistry, the richness of the community, and biomass in their studies. [ 10 ] Depending on the site of study, researchers also account for the activities of the phytobenthos when calculating primary productivity. [ 31 ]
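The substrate-based vocabulary introduced earlier in this article (epilithic, epipelic, episammic, epiphytic, epizoic) amounts to a simple lookup, sketched below in Python for illustration only; the substrate labels used as keys are informal paraphrases of the definitions above rather than a standard controlled vocabulary.

SUBSTRATE_TERMS = {
    "rock or artificial surface": "epilithic",
    "silt": "epipelic",
    "sand": "episammic",
    "plant": "epiphytic",
    "animal": "epizoic",
}

def phytobenthos_term(substrate):
    """Return the descriptive term for phytobenthos growing on a given substrate."""
    try:
        return SUBSTRATE_TERMS[substrate]
    except KeyError:
        raise ValueError(f"no term listed here for substrate: {substrate!r}")

print(phytobenthos_term("sand"))   # -> "episammic"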
https://en.wikipedia.org/wiki/Phytobenthos
Phytochemistry is the study of phytochemicals , which are chemicals derived from plants . Phytochemists strive to describe the structures of the large number of secondary metabolites found in plants, the functions of these compounds in human and plant biology, and the biosynthesis of these compounds. Plants synthesize phytochemicals for many reasons, including to protect themselves against insect attacks and plant diseases . The compounds found in plants are of many kinds, but most can be grouped into four major biosynthetic classes: alkaloids , phenylpropanoids , polyketides , and terpenoids . Phytochemistry can be considered a subfield of botany or chemistry . Activities can be led in botanical gardens or in the wild with the aid of ethnobotany . Phytochemical studies directed toward human (i.e. drug discovery) use may fall under the discipline of pharmacognosy , whereas phytochemical studies focused on the ecological functions and evolution of phytochemicals likely fall under the discipline of chemical ecology . Phytochemistry also has relevance to the field of plant physiology . Techniques commonly used in the field of phytochemistry are extraction , isolation, and structural elucidation ( MS ,1D and 2D NMR) of natural products , as well as various chromatography techniques (MPLC, HPLC , and LC-MS). Many plants produce chemical compounds for defence against herbivores . The major classes of pharmacologically active phytochemicals are described below, with examples of medicinal plants that contain them. [ 1 ] Human settlements are often surrounded by weeds containing phytochemicals, such as nettle , dandelion and chickweed . [ 2 ] [ 3 ] Many phytochemicals, including curcumin , epigallocatechin gallate , genistein , and resveratrol are pan-assay interference compounds and are not useful in drug discovery . [ 4 ] [ 5 ] Alkaloids are bitter-tasting chemicals, widespread in nature, and often toxic. There are several classes with different modes of action as drugs, both recreational and pharmaceutical. Medicines of different classes include atropine , scopolamine , and hyoscyamine (all from nightshade ), [ 6 ] the traditional medicine berberine (from plants such as Berberis and Mahonia ), caffeine ( Coffea ), cocaine ( Coca ), ephedrine ( Ephedra ), morphine ( opium poppy ), nicotine ( tobacco ), reserpine ( Rauvolfia serpentina ), quinidine and quinine ( Cinchona ), vincamine ( Vinca minor ), and vincristine ( Catharanthus roseus ). [ 7 ] Anthraquinone glycosides are found in senna , [ 9 ] rhubarb , and Aloe . [ 10 ] The cardiac glycosides are phytochemicals from plants including foxglove and lily of the valley . They include digoxin and digitoxin which act as diuretics . [ 11 ] Polyphenols of several classes are widespread in plants, including anthocyanins , phytoestrogens , and tannins . [ 13 ] Polyphenols are secondary metabolites produced by almost every part of plants, including fruits, flowers, leaves and bark. [ 13 ] Terpenes and terpenoids of many kinds are found in resinous plants such as the conifers . They are aromatic and serve to repel herbivores. Their scent makes them useful in essential oils , whether for perfumes such as rose and lavender , or for aromatherapy . [ 14 ] [ 15 ] Some have had medicinal uses: thymol is an antiseptic and was once used as a vermifuge (anti-worm medicine). [ 16 ] [ 17 ] Contrary to bacteria and fungi, most plant metabolic pathways are not grouped into biosynthetic gene clusters , but instead are scattered as individual genes. 
Some exceptions have been discovered: steroidal glycoalkaloids in Solanum , polyketides in Pooideae , benzoxazinoids in Zea mays , triterpenes in Avena sativa , Cucurbitaceae , Arabidopsis , and momilactone diterpenes in Oryza sativa . [ 18 ]
https://en.wikipedia.org/wiki/Phytochemistry
A phytochorion , in phytogeography , is a geographic area with a relatively uniform composition of plant species. Adjacent phytochoria do not usually have a sharp boundary, but rather a soft one, a transitional area in which many species from both regions overlap. The region of overlap is called a vegetation tension zone . In traditional schemes, areas in phytogeography are classified hierarchically, according to the presence of endemic families, genera or species, e.g., in floral (or floristic , phytogeographic ) zones and regions , [ 1 ] or also in kingdoms , regions and provinces , [ 2 ] sometimes including the categories empire and domain . However, some authors prefer not to rank areas, referring to them simply as "areas", "regions" (in a non hierarchical sense) or "phytochoria". [ 3 ] Systems used to classify vegetation can be divided in two major groups: those that use physiognomic-environmental parameters and characteristics and those that are based on floristic (i.e. shared genera and species) relationships. [ 4 ] Phytochoria are defined by their plant taxonomic composition, while other schemes of regionalization (e.g., vegetation type , physiognomy , plant formations, biomes ) may variably take in account, depending on the author, the apparent characteristics of a community (the dominant life-form ), environment characteristics , the fauna associated, anthropic factors or political - conservationist issues. [ 5 ] Several systems of classifying geographic areas where plants grow have been devised. Most systems are organized hierarchically, with the largest units subdivided into smaller geographic areas, which are made up of smaller floristic communities, and so on. Phytochoria are defined as areas possessing a large number of endemic taxa . Floristic kingdoms are characterized by a high degree of family endemism, floristic regions by a high degree of generic endemism, and floristic provinces by a high degree of species endemism. Systems of phytochoria have both significant similarities and differences with zoogeographic provinces , which follow the composition of mammal families , and with biogeographical provinces or terrestrial ecoregions , which take into account both plant and animal species. The term "phytochorion" (Werger & van Gils, 1976) [ 6 ] is especially associated with the classifications according to the methodology of Josias Braun-Blanquet , which is tied to the presence or absence of particular species, [ 7 ] mainly in Africa. [ 8 ] Taxonomic databases tend to be organized in ways which approximate floristic provinces, but which are more closely aligned to political boundaries, for example according to the World Geographical Scheme for Recording Plant Distributions . In the late 19th century, Adolf Engler (1844-1930) was the first to make a world map with the limits of distribution of floras, with four major floral regions (realms). [ 9 ] [ 10 ] His Syllabus der Pflanzenfamilien , from the third edition (1903) onwards, also included a sketch of the division of the earth into floral regions. [ 11 ] Other important early works on floristics includes Augustin de Candolle (1820), [ 12 ] Schouw (1823), [ 13 ] Alphonse de Candolle (1855), [ 14 ] Drude (1890), [ 1 ] Diels (1908), [ 15 ] and Rikli (1913). [ 16 ] Botanist Ronald Good (1947) identified six floristic kingdoms ( Boreal or Holarctic, Neotropical , Paleotropical , South African , Australian, and Antarctic ), the largest natural units he determined for flowering plants. 
Good's six kingdoms are subdivided into smaller units, called regions and provinces. The Paleotropical kingdom is divided into three subkingdoms, which are each subdivided into floristic regions. Each of the other five kingdoms is subdivided directly into regions. There are a total of 37 floristic regions. Almost all regions are further subdivided into floristic provinces. [ 17 ] Armen Takhtajan (1978, 1986), in a widely used scheme that builds on Good's work, identified thirty-five floristic regions, each of which is subdivided into floristic provinces, of which there are 152 in all. [ 18 ] [ 19 ] [ 20 ] [ 21 ] Critiquing previous attempts for not using phylogenetic relationships in the construction of their regions, Liu et al. incorporated distribution data alongside phylogenetic relationships to configure their realms. This led to the classification of eight realms organized into two super-realms, each composed of a number of sub-realms. [ 24 ] Differences from Takhtajan's floristic kingdoms mainly involve emphasizing the uniqueness of certain realms that he had treated as subdivisions within kingdoms. Two examples are the splitting of kingdoms into two separate realms, as happened to the Paleotropical and Antarctic kingdoms, on the reasoning that their parts have been separated from each other for long enough to constitute different phylogenetic trajectories. The merging of the Cape floristic kingdom into the African realm was based on the low endemism of higher taxonomic ranks, which could also be found outside the Cape region in the rest of Africa. The final major change is the separation of the Saharo-Arabian realm from the Holarctic kingdom, though the authors admit that the northern boundary is not clear, with flora from the Holarctic being found within this area. After the publication of these regions, Dr. Hong Qian criticized Liu et al. for the inclusion of nonnative distributions in their analyses. [ 25 ] In response, the group cleaned their data to remove nonnative ranges and reassessed their regions. They suggest that the previous inclusion of exotic species did not significantly affect their mapping, finding that the cleaned data revealed the same floristic realms. [ 26 ]
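As a compact restatement of the rank-to-endemism rule described earlier in this article (kingdoms marked by endemic families, regions by endemic genera, provinces by endemic species), the following Python sketch is purely illustrative and does not encode any particular author's system.

ENDEMISM_BY_RANK = {
    "kingdom": "endemic families",
    "region": "endemic genera",
    "province": "endemic species",
}

def defining_endemism(rank):
    """Return the level of endemism that characterizes a floristic rank."""
    return ENDEMISM_BY_RANK.get(rank.lower(), "rank not used in this simple scheme")

for rank in ("kingdom", "region", "province"):
    print(f"A floristic {rank} is characterized by a high degree of {defining_endemism(rank)}.")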
https://en.wikipedia.org/wiki/Phytochorion
In oceanography , phytodetritus is the organic particulate matter resulting from phytoplankton and other organic material in surface waters falling to the seabed. This process takes place almost continuously as a " marine snow " of descending particles, falling at the rate of about 100 to 150 m (328 to 492 ft) per day. [ 1 ] Under certain conditions, phytoplankton may aggregate and fall rapidly through the water column to arrive little changed on the seabed. These fluxes sometimes occur seasonally or periodically, are sometimes associated with algal blooms and may constitute the greater part of descending organic matter. If the amount is greater than the benthic detritivores can process, the phytodetritus forms a fluffy layer on the surface of the sediment. It accumulates in many shallow and deep water locations throughout the world. [ 2 ] Phytodetritus varies in colour and appearance and may be greenish, brown or grey, flocculent or gelatinous. It includes the microscopic remains of diatoms , dinoflagellates , dictyochales , coccolithophores , foraminiferans , phaeodareans , tintinnids , crustacean eggs and moults, protozoan faecal pellets, picoplankton and other planktonic matter embedded in a membranous gelatinous matrix. One of the most important genera of forams is Globigerina ; vast areas of the ocean floor are covered with " Globigerina ooze", so named by Murray and Renard in 1873, [ 3 ] dominated by the shells of planktonic forms. Larger materials may also be present including large animal remains such as carcases , large fragments of plant and faecal matter. [ 1 ]
https://en.wikipedia.org/wiki/Phytodetritus
Phytoecdysteroids are plant-derived ecdysteroids . Phytoecdysteroids are a class of chemicals that plants synthesize for defense against phytophagous (plant-eating) insects. These compounds are mimics of hormones used by arthropods in the molting process known as ecdysis . It is presumed that these chemicals act as endocrine disruptors for insects, so that when insects eat plants containing them they may prematurely molt, lose weight, or suffer other metabolic damage and die. Chemically, phytoecdysteroids are classed as triterpenoids , the group of compounds that includes triterpene saponins , phytosterols , and phytoecdysteroids. Plants, but not animals, synthesize phytoecdysteroids from mevalonic acid in the mevalonate pathway of the plant cell, using acetyl-CoA as a precursor. Some ecdysteroids, including ecdysone and 20-hydroxyecdysone (20E), are produced by both plants and arthropods. [ 1 ] Besides those, over 250 ecdysteroid analogs have been identified so far in plants, and it has been theorized that over 1,000 possible structures might occur in nature. [ 2 ] Many more plants have the ability to "turn on" the production of phytoecdysteroids when under stress, animal attack, or other conditions. [ 3 ] The term phytoecdysteroid can also apply to ecdysteroids found in fungi , even though fungi are not plants; the more precise term mycoecdysteroid has been applied to these chemicals. [ 4 ] Some plants and fungi that produce phytoecdysteroids include Achyranthes bidentata , [ 5 ] Tinospora cordifolia , [ 6 ] Pfaffia paniculata , [ 7 ] Leuzea carthamoides , [ 8 ] Rhaponticum uniflorum , [ 9 ] Serratula coronata , [ 10 ] Cordyceps , [ citation needed ] and Asparagus . [ 11 ] It is generally believed that phytoecdysteroids exert a negative effect on pests; indeed, phytoecdysteroids sprayed onto plants have been shown to reduce infestation by nematodes and insects. [ 1 ] However, in very limited scenarios, phytoecdysteroids may end up benefiting the insect. For example, ginsenosides are able to activate the ecdysteroid receptor in fruit flies, but this activation compensates for an age-related reduction in 20E levels. [ 12 ] Phytoecdysteroids have also been reported to influence the germination of other plants, making them allelochemicals . The producing plant itself may also be affected by its ecdysteroids, mainly through an increased rate of photosynthesis. [ 1 ] They are not toxic to mammals and occur in the human diet. [ 1 ] 20-hydroxyecdysone is a drug candidate, [ 13 ] but this does not mean dietary amounts have any effect.
https://en.wikipedia.org/wiki/Phytoecdysteroid
A phytoestrogen is a plant-derived xenoestrogen (a type of estrogen produced by organisms other than humans) not generated within the endocrine system , but consumed by eating plants or manufactured foods. [ 1 ] Also called "dietary estrogens", phytoestrogens are a diverse group of naturally occurring nonsteroidal plant compounds that, because of their structural similarity to estradiol (17-β-estradiol), can produce estrogenic or antiestrogenic effects. [ 2 ] Phytoestrogens are not essential nutrients because their absence from the diet does not cause a disease, nor are they known to participate in any normal biological function. [ 2 ] Common foods containing phytoestrogens include soy protein , beans , oats , barley , rice , coffee , apples , and carrots (see the list of food sources below). The name comes from the Greek phyto ("plant") and estrogen , the hormone which gives fertility to female mammals . The word " estrus " (Greek οίστρος) means " sexual desire ", and "gene" (Greek γόνο) means "to generate". It has been hypothesized that plants use phytoestrogens as part of their natural defense against overpopulation of herbivores by controlling female fertility. [ 3 ] [ 4 ] The molecular-level similarity between estrogens and phytoestrogens allows phytoestrogens to mildly mimic estrogen and sometimes to act as its antagonist. [ 2 ] Phytoestrogens were first observed in 1926, [ 2 ] [ 5 ] but it was unknown whether they could have any effect on human or animal metabolism. In the 1940s and early 1950s, it was noticed that some pastures of subterranean clover and red clover (phytoestrogen-rich plants) had adverse effects on the fecundity of grazing sheep. [ 2 ] [ 6 ] [ 7 ] [ 8 ] Phytoestrogens mainly belong to a large group of substituted natural phenolic compounds: the coumestans , prenylflavonoids and isoflavones are three of the most estrogenically active classes. [ 1 ] The best-researched are the isoflavones, which are commonly found in soy and red clover . Lignans have also been identified as phytoestrogens, although they are not flavonoids. [ 2 ] Mycoestrogens have similar structures and effects, but are not components of plants; they are mold metabolites of Fusarium , especially common on cereal grains, [ 9 ] [ 10 ] [ 11 ] but also occurring elsewhere, e.g. on various forages. [ 12 ] Although mycoestrogens are rarely taken into account in discussions about phytoestrogens, these are the compounds that initially generated interest in the topic. [ 13 ] Phytoestrogens exert their effects primarily through binding to estrogen receptors (ER). [ 14 ] There are two variants of the estrogen receptor, alpha ( ER-α ) and beta ( ER-β ), and many phytoestrogens display somewhat higher affinity for ER-β than for ER-α. [ 14 ] A few key structural elements enable phytoestrogens to bind with high affinity to estrogen receptors and display estradiol-like effects. [ 2 ] In addition to interacting with ERs, phytoestrogens may also modulate the concentration of endogenous estrogens by binding or inactivating some enzymes, and may affect the bioavailability of sex hormones by depressing or stimulating the synthesis of sex hormone-binding globulin (SHBG). [ 8 ] Emerging evidence shows that some phytoestrogens bind to and transactivate peroxisome proliferator-activated receptors (PPARs). [ 15 ] [ 16 ] In vitro studies show activation of PPARs at concentrations above 1 μM, which is higher than the activation level of ERs.
[ 17 ] [ 18 ] At concentrations below 1 μM, activation of ERs may play a dominant role. At higher concentrations (>1 μM), both ERs and PPARs are activated. Studies have shown that ERs and PPARs influence each other and therefore induce differential effects in a dose-dependent way. The final biological effects of genistein are determined by the balance among these pleiotropic actions. [ 15 ] [ 16 ] [ 17 ] Phytoestrogens are involved in the synthesis of antifungal benzofurans and phytoalexins , such as medicarpin (common in legumes ), and sesquiterpenes , such as capsidiol in tobacco. [ 19 ] Soybeans naturally produce isoflavones and are therefore a dietary source of isoflavones. [ citation needed ] Phytoestrogens are ancient, naturally occurring substances, and as dietary phytochemicals they are considered to have coevolved with mammals. In the human diet, phytoestrogens are not the only source of exogenous estrogens. Xenoestrogens (novel, man-made) are found as food additives [ 20 ] and ingredients, and also in cosmetics, plastics, and insecticides. Environmentally, they have effects similar to those of phytoestrogens, making it difficult to clearly separate the action of these two kinds of agents in studies. [ 21 ] The consumption of plants with unusually high phytoestrogen content under drought conditions has been shown to decrease fertility in quail . [ 22 ] Parrot food as available in nature has shown only weak estrogenic activity. Studies have been conducted on screening methods for environmental estrogens present in manufactured supplementary food, with the purpose of aiding reproduction of endangered species. [ 23 ] According to one study of nine common phytoestrogens in a Western diet, foods with the highest relative phytoestrogen content were nuts and oilseeds, followed by soy products, cereals and breads, legumes , meat products, and other processed foods that may contain soy, vegetables, fruits, and alcoholic and nonalcoholic beverages. Flax seed and other oilseeds contained the highest total phytoestrogen content, followed by soybeans and tofu . [ 24 ] The highest concentrations of isoflavones are found in soybeans and soybean products, followed by legumes, whereas lignans are the primary phytoestrogens found in nuts and oilseeds (e.g. flax) and are also found in cereals, legumes, fruits and vegetables. Phytoestrogen content varies in different foods, and may vary significantly within the same group of foods (e.g. soy beverages, tofu) depending on processing mechanisms and the type of soybean used. Legumes (in particular soybeans), whole-grain cereals, and some seeds are high in phytoestrogens, and many other foods are known to contain them. [ citation needed ] The phytoestrogen content of foods is very variable; accurate estimates of intake are therefore difficult and depend on the databases used. [ 31 ] Data from the European Prospective Investigation into Cancer and Nutrition found intakes between 1 mg/d in Mediterranean countries and more than 20 mg/d in the United Kingdom . [ 32 ] The high intake in the UK is partly explained by the use of soy in the Chorleywood bread process . [ 33 ] A 2001 epidemiological study of women in the United States found that the dietary intake of phytoestrogens in healthy post-menopausal Caucasian women is less than one milligram daily.
[ 34 ] In humans, phytoestrogens are digested in the small intestine, poorly absorbed into the circulatory system, circulate in plasma, and are excreted in the urine. Their metabolic influence is different from that in grazing animals because of the differences between ruminant and monogastric digestive systems. [ 21 ] As of 2020, there is insufficient clinical evidence to determine that phytoestrogens have effects in humans. [ 35 ] It is unclear if phytoestrogens have any effect on the cause or prevention of cancer in women. [ 1 ] [ 36 ] Some epidemiological studies have suggested a protective effect against breast cancer. [ 1 ] [ 36 ] [ 37 ] Additionally, other epidemiological studies found that consumption of soy estrogens is safe for patients with breast cancer, and that it may decrease mortality and recurrence rates. [ 1 ] [ 38 ] [ 39 ] It remains unclear if phytoestrogens can minimize some of the deleterious effects of low estrogen levels ( hypoestrogenism ) resulting from oophorectomy , menopause , or other causes. [ 36 ] A Cochrane review of the use of phytoestrogens to relieve the vasomotor symptoms of menopause ( hot flashes ) stated that there was no conclusive evidence to suggest any benefit to their use, although the effects of genistein should be further investigated. [ 40 ] It is unclear if phytoestrogens have any effect on male physiology, with conflicting results about the potential effects of isoflavones originating from soy. [ 1 ] Some studies showed that isoflavone supplementation had a positive effect on sperm concentration, count, or motility , and increased ejaculate volume. [ 41 ] [ 42 ] Sperm count decline and the increasing rate of testicular cancers in the West may be linked to a higher presence of isoflavone phytoestrogens in the diet while in utero, but such a link has not been definitively proven. [ 43 ] Furthermore, while there is some evidence that phytoestrogens may affect male fertility, more recent reviews of available studies found no link [ 44 ] [ 45 ] and instead suggest that healthier diets such as the Mediterranean diet might have a positive effect on male fertility. [ 45 ] Neither isoflavones nor soy have been shown to affect male reproductive hormones in healthy individuals. [ 44 ] [ 46 ] Some studies have found that some concentrations of isoflavones may have effects on intestinal cells. At low doses, genistein acted as a weak estrogen and stimulated cell growth; at high doses, it inhibited proliferation and altered cell cycle dynamics. This biphasic response correlates with how genistein is thought to exert its effects. [ 47 ] Some reviews express the opinion that more research is needed to answer the question of what effect phytoestrogens may have on infants, [ 48 ] [ 49 ] but their authors did not find any adverse effects. Studies conclude there are no adverse effects on human growth, development, or reproduction as a result of the consumption of soy-based infant formula compared to conventional cow-milk formula. [ 50 ] [ 51 ] [ 52 ] The American Academy of Pediatrics states: "although isolated soy protein-based formulas may be used to provide nutrition for normal growth and development, there are few indications for their use in place of cow milk-based formula. These indications include (a) for infants with galactosemia and hereditary lactase deficiency (rare) and (b) in situations in which a vegetarian diet is preferred."
[ 53 ] In some countries, phytoestrogenic plants have been used for centuries in the treatment of menstrual and menopausal problems, as well as for fertility problems. [ 54 ] Plants used that have been shown to contain phytoestrogens include Pueraria mirifica [ 55 ] and its close relative kudzu , [ 56 ] Angelica , [ 57 ] fennel , [ 28 ] and anise . In a rigorous study, the use of one such source of phytoestrogen, red clover , has been shown to be safe, but ineffective in relieving menopausal symptoms [ 58 ] ( black cohosh is also used for menopausal symptoms, but does not contain phytoestrogens [ 59 ] ).
https://en.wikipedia.org/wiki/Phytoestrogen
Phytoextraction is a subprocess of phytoremediation in which plants remove dangerous elements or compounds from soil or water, most usually heavy metals , metals that have a high density and may be toxic to organisms even at relatively low concentrations. [ 1 ] The heavy metals that plants extract are toxic to the plants as well, and the plants used for phytoextraction are known hyperaccumulators that sequester extremely large amounts of heavy metals in their tissues. Phytoextraction can also be performed by plants that take up lower levels of pollutants but, due to their high growth rate and biomass production, may remove a considerable amount of contaminants from the soil. [ 2 ] Heavy metals can be a major problem for any biological organism because they may react with a number of chemicals essential to biological processes. They can also break other molecules apart into even more reactive species (such as reactive oxygen species ), which also disrupt biological processes. These reactions deplete the concentration of important molecules and also produce dangerously reactive species such as the radicals O• and OH•. Non-hyperaccumulators also absorb some concentration of heavy metals, as many heavy metals are chemically similar to other metals that are essential to the plant's life. For a plant to extract a heavy metal from water or soil, five things need to happen. In their normal states, metals cannot be taken into any organism; they must be dissolved as ions in solution to be mobile in an organism. [ 3 ] Once the metal is mobile, it can either be transported directly across the root cell wall by a specific metal transporter or carried across by a specific agent. The plant roots mediate this process by secreting compounds that capture the metal in the rhizosphere and then transport it across the cell wall; examples include phytosiderophores, organic acids , and carboxylates . [ 4 ] If the metal is chelated at this point, the plant does not need to chelate it later, and the chelator serves as a casing that conceals the metal from the rest of the plant. This is one way for a hyperaccumulator to protect itself from the toxic effects of poisonous metals. The first step of absorption is the binding of the metal to the root cell wall. [ 5 ] The metal is then transported into the root. Some plants then store the metal through chelation or sequestration. Many specific transition-metal ligands contributing to metal detoxification and transport are up-regulated in plants when metals are available in the rhizosphere. [ 6 ] At this point the metal can be alone or already sequestered by a chelating agent or other compound. To reach the xylem , the metal must then pass through the root symplasm. The systems that transport and store heavy metals are the most critical systems in a hyperaccumulator, because the heavy metals will damage the plant before they are stored. The root-to-shoot transport of heavy metals is strongly regulated by gene expression. The genes that code for metal transport systems in plants have been identified. These genes are expressed in both hyperaccumulating and non-hyperaccumulating plants. There is a large body of evidence that genes known to code for the transport systems of heavy metals are constantly over-expressed in hyperaccumulating plants when they are exposed to heavy metals. [ 7 ] This genetic evidence suggests that hyperaccumulators overdevelop their metal transport systems.
This may be to speed up the root-to-shoot process, limiting the time during which the plant's tissues are exposed to the metal before it is stored. Cadmium accumulation has been reviewed. [ 8 ] The transporters involved are known as heavy metal transporting ATPases (HMAs). [ 9 ] One of the best-documented HMAs is HMA4, which belongs to the Zn/Co/Cd/Pb HMA subclass and is localized at xylem parenchyma plasma membranes. [ 7 ] HMA4 is upregulated when plants are exposed to high levels of Cd and Zn, but it is downregulated in their non-hyperaccumulating relatives. [ 10 ] Also, when the expression of HMA4 is increased, there is a correlated increase in the expression of genes belonging to the ZIP (Zinc-regulated transporter/Iron-regulated transporter Proteins) family. This suggests that the root-to-shoot transport system acts as a driving force of hyperaccumulation by creating a metal-deficiency response in the roots. [ 11 ] Systems that transport and store heavy metals are the most critical systems in a hyperaccumulator, because heavy metals damage the plant before they are stored. In hyperaccumulators the heavy metals are often stored in the leaves. There are several theories to explain why it would be beneficial for a plant to do this. For example, the "elemental defence" hypothesis proposes that herbivores may avoid eating hyperaccumulators because of the heavy metals; so far, scientists have not been able to establish such a correlation. [ 12 ] In 2002, a study by the Department of Pharmacology at Bangabandhu Sheikh Mujib Medical University in Bangladesh used water hyacinth to remove arsenic from water. [ 13 ] The study showed that water could be completely purified of arsenic in a few hours and that the plant could then be used as animal feed, firewood, and for many other practical purposes. Since water hyacinth is invasive and grows prolifically, it is inexpensive to grow and extremely practical for this purpose.
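The repeated growth/harvest logic behind phytoextraction can be illustrated with a simple mass-balance sketch. The function and all numeric values below are hypothetical placeholders, not figures from the studies cited above; they only show how plant biomass, tissue metal concentration, and soil metal load combine to give a rough number of cropping cycles.

```python
# Hypothetical mass-balance sketch for phytoextraction planning.
# All numeric values are illustrative assumptions, not data from the article.

def harvests_needed(soil_metal_mg_per_kg: float,
                    target_mg_per_kg: float,
                    soil_mass_kg_per_ha: float,
                    biomass_kg_per_ha: float,
                    tissue_conc_mg_per_kg: float) -> float:
    """Rough number of growth/harvest cycles to reach the target soil level,
    assuming constant uptake per crop (a strong simplification)."""
    metal_to_remove = (soil_metal_mg_per_kg - target_mg_per_kg) * soil_mass_kg_per_ha
    removed_per_harvest = biomass_kg_per_ha * tissue_conc_mg_per_kg
    return metal_to_remove / removed_per_harvest

# Example with assumed numbers: 2,250,000 kg/ha of plough-layer soil,
# 10 t/ha dry biomass, 1,000 mg/kg metal in harvested tissue.
print(harvests_needed(100.0, 50.0, 2_250_000, 10_000, 1_000))  # ≈ 11 cycles
```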
https://en.wikipedia.org/wiki/Phytoextraction_process
Phytogeography (from Greek φυτόν, phytón = "plant" and γεωγραφία, geographía = "geography", here also in the sense of distribution), or botanical geography, is the branch of biogeography that is concerned with the geographic distribution of plant species and their influence on the earth's surface. [ 1 ] Phytogeography is concerned with all aspects of plant distribution, from the controls on the distribution of individual species ranges (at both large and small scales, see species distribution ) to the factors that govern the composition of entire communities and floras . Geobotany , by contrast, focuses on the influence of geographic space on plants. [ 2 ] Phytogeography is part of a more general science known as biogeography . [ 3 ] Phytogeographers are concerned with patterns and processes in plant distribution. Most of the major questions and kinds of approaches taken to answer such questions are held in common between phyto- and zoogeographers. Phytogeography in the wider sense (or geobotany, in German literature) encompasses four fields, according to the aspect emphasized: environment, flora ( taxa ), vegetation ( plant community ), and origin. [ 4 ] [ 5 ] [ 6 ] [ 7 ] Phytogeography is often divided into two main branches: ecological phytogeography and historical phytogeography . The former investigates the role of present-day biotic and abiotic interactions in influencing plant distributions; the latter is concerned with the historical reconstruction of the origin, dispersal, and extinction of taxa. [ 8 ] The basic data elements of phytogeography are occurrence records (presence or absence of a species) within operational geographic units such as political units or geographical coordinates. These data are often used to construct phytogeographic provinces ( floristic provinces ) and elements. The questions and approaches in phytogeography are largely shared with zoogeography , except that zoogeography is concerned with animal rather than plant distribution. The term phytogeography itself suggests a broad meaning. How the term is actually applied by practicing scientists is apparent in the way periodicals use it. The American Journal of Botany , a monthly primary research journal, frequently publishes a section titled "Systematics, Phytogeography, and Evolution." Topics covered in the American Journal of Botany 's "Systematics and Phytogeography" section include phylogeography , distribution of genetic variation, historical biogeography, and general plant species distribution patterns. Biodiversity patterns are not heavily covered. A flora is the group of all plant species in a specific period of time or area, in which each species is independent of the others in abundance and relationships. A flora can be assembled into floristic elements, which are based on common features. A genetic element groups species that share similar genetic information, i.e. a common evolutionary origin; a migration element shares a common route of access into a habitat; a historical element is united by certain past events; and an ecological element is grouped on the basis of similar environmental factors. A population is the collection of all interacting individuals of a given species in an area. An area is the entire territory in which a species, an element, or an entire flora can occur. Areography studies the description of such areas; chorology studies their development.
The local distribution of a species within its area as a whole, such as that of a swamp shrub, is the topography of that area. Areas are an important factor in forming a picture of how species interactions result in their geography. The nature of an area's margins, its continuity, and its general shape and size relative to other areas make the study of areas crucial in identifying these kinds of information. For example, a relict area is an area surviving from an earlier and more extensive occurrence. Mutually exclusive plants are called vicarious (areas containing such plants are also called vicarious). The earth's surface is divided into floristic regions, each region associated with a distinctive flora. [ 9 ] Phytogeography has a long history. One of the subject's earliest proponents was the Prussian naturalist Alexander von Humboldt , who is often referred to as the "father of phytogeography". Von Humboldt advocated a quantitative approach to phytogeography that has characterized modern plant geography. Gross patterns of the distribution of plants became apparent early in the study of plant geography. For example, Alfred Russel Wallace , co-discoverer of the principle of natural selection, discussed the latitudinal gradients in species diversity , a pattern observed in other organisms as well. Much research effort in plant geography has since been devoted to understanding this pattern and describing it in more detail. In 1890, the United States Congress passed an act that appropriated funds to send expeditions to discover the geographic distributions of plants (and animals) in the United States. The first of these was the Death Valley Expedition , including Frederick Vernon Coville , Frederick Funston , Clinton Hart Merriam , and others. [ 10 ] Research in plant geography has also been directed at understanding the patterns of adaptation of species to the environment, chiefly by describing geographical patterns of trait–environment relationships. These patterns, termed ecogeographical rules when applied to plants, represent another area of phytogeography. Floristics is the study of the flora of some territory or area. Traditional phytogeography concerns itself largely with floristics and floristic classification. China has been a focus for botanists because of its rich biota; it holds the record for the earliest known angiosperm megafossil. [ 11 ]
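The occurrence records described above (presence or absence of species in operational geographic units) are commonly analysed as a simple binary matrix. The sketch below is illustrative only; the species names, units, and the choice of a Jaccard similarity are assumptions for demonstration, not part of the article.

```python
# Illustrative sketch: a presence/absence record over operational geographic
# units, and a Jaccard similarity between two units' floras. All names and
# data are hypothetical.
units = {
    "unit_A": {"Quercus robur", "Fagus sylvatica", "Pinus sylvestris"},
    "unit_B": {"Quercus robur", "Pinus sylvestris", "Betula pendula"},
}

def jaccard(flora1: set, flora2: set) -> float:
    """Shared species / total species; one common basis for grouping
    geographic units into floristic provinces."""
    return len(flora1 & flora2) / len(flora1 | flora2)

print(round(jaccard(units["unit_A"], units["unit_B"]), 2))  # 0.5
```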
https://en.wikipedia.org/wiki/Phytogeography
Phytogeomorphology is the study of how terrain features affect plant growth. [ 1 ] It was the subject of a treatise by Howard and Mitchell in 1985, who were considering the temporal and spatial variability of growth and varieties found in forests, but recognized that their work also had application to farming , and to the (at that time) relatively new science of precision agriculture . The premise of Howard and Mitchell is that landforms , or features of the land's 3D topography, significantly affect how and where plants (or trees, in their case) grow. Since that time, the ability to map and classify landform shapes and features has increased greatly. The advent of GPS has made it possible to map almost any variable one might wish to measure, and awareness of the spatial variability of the environment in which plants grow has increased accordingly. The development of technologies like airborne LiDAR has enabled the measurement of landform features at sub-metre resolution, and when combined with RTK-GPS (centimetre-level accuracy) enables the creation of very accurate maps of where these features are. Comparison of these landform maps with maps of variables related to crop or plant growth shows a strong correlation (see below for examples and references for precision agriculture). While phytogeomorphology studies the relationship between plants and terrain attributes in general (see Howard et al. (1985)), it can also be applied to precision agriculture by studying the temporal and spatial variability of crop growth within farm fields. There is already a volume of work, although it does not use the term phytogeomorphology specifically, that considers farm field terrain attributes as affecting crop yield and growth. Moore et al. (1991) [ 2 ] provide an early overview of the application of terrain features to precision agriculture, but one of the earliest references to this phenomenon in farming is that of Whittaker in 1967. [ 3 ] More recent work includes a multi-year study of temporal and spatial yield stability (Kaspar et al. (2003), and references therein) [ 4 ] and a detailed study of the same on a small patch farm in Portugal (and references therein). [ 5 ] This variability can be exploited to produce higher yields and reduce the environmental impact of farming, consequently returning a higher profit to the farmer through higher overall yields and lower amounts of inputs. The emerging science of sustainable intensification of agriculture, [ 6 ] which addresses the need for higher yields from existing fields, can be served by some of the practical applications of phytogeomorphology in precision agriculture. Work in this area has been under way for some years (see Reuter et al. (2005), [ 7 ] Marques da Silva et al. (2008), and especially Moore et al. (1991)), but it is slow and sometimes tedious work that necessarily involves multiple years of data, very specialized software tools, and long compute times to produce the resulting maps. Typically, the objective of precision agriculture is to divide the farm field into distinct management zones based on yield performance at each point in the field. 'Variable rate technology' is a relatively new term in farming technology that refers to spreaders , seeders , sprayers , etc. that are able to adjust their rates of flow on the fly. The idea is to create a 'recipe map' for variable rate farm machinery to deliver the exact quantity of amendments required at each location (within each zone of the field).
The literature is divided on how to properly define management zones. [ citation needed ] In the geomorphological approach to defining management zones, it is found that topography helps at least partially determine how much yield comes from which part of the field. This is true in fields where parts of the field have permanently limiting characteristics, but not in fields where the growth potential is the same across the whole field (Blackmore et al. (2003) [ 8 ] ). It can be shown that a yield index map (showing areas of consistent over-performance and areas of consistent under-performance of yield) correlates well with a landform classification map (personal communication, Aspinall (2011) [ 9 ] ). Landforms can be classified in a number of ways, but the simplest software tool to use is LandMapR (MacMillan (2003) [ 10 ] ). An early version of the LandMapR software is available through the Opengeomorphometry project hosted under the Google Code project .
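A minimal sketch of the yield index idea mentioned above: multi-year yields for each cell of a field are normalized against the field mean for that year and then averaged, so cells that consistently over- or under-perform stand out. The array values, grid size, and the 10% thresholds are assumptions for illustration; this is not the LandMapR workflow or any specific published method.

```python
# Illustrative yield-index sketch (hypothetical numbers, not field data).
import numpy as np

# yields[year, row, col] in t/ha for a toy 2x3 "field" over 3 seasons
yields = np.array([
    [[8.0, 9.5, 7.0], [6.5, 8.2, 9.0]],
    [[7.5, 9.0, 6.8], [6.0, 7.9, 8.6]],
    [[8.3, 9.8, 7.1], [6.7, 8.4, 9.2]],
])

# Normalize each year by that year's field mean, then average across years.
index = (yields / yields.mean(axis=(1, 2), keepdims=True)).mean(axis=0)

over = index > 1.10   # cells consistently >10% above the field average
under = index < 0.90  # cells consistently >10% below the field average
print(np.round(index, 2), over, under, sep="\n")
```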
https://en.wikipedia.org/wiki/Phytogeomorphology
The phytoglobin-nitric oxide cycle is a metabolic pathway induced in plants under hypoxic conditions which involves nitric oxide (NO) and phytoglobin (Pgb). [ 1 ] It provides an alternative type of respiration to mitochondrial electron transport under the conditions of limited oxygen supply. [ 2 ] Phytoglobin in hypoxic plants acts as part of a soluble terminal nitric oxide dioxygenase system, yielding nitrate ion from the reaction of oxygenated phytoglobin with NO. Class 1 phytoglobins are induced in plants under hypoxia, bind oxygen very tightly at nanomolar concentrations, and can effectively scavenge NO at oxygen levels far below the saturation of cytochrome c oxidase . In the course of the reaction, phytoglobin is oxidized to metphytoglobin which has to be reduced for continuous operation of the cycle. [ 3 ] [ 4 ] Nitrate is reduced to nitrite by nitrate reductase , while NO is mainly formed due to anaerobic reduction of nitrite which may take place in mitochondria by complex III and complex IV in the absence of oxygen, in the side reaction of nitrate reductase, [ 5 ] or by electron transport proteins on the plasma membrane. [ 6 ] The overall reaction sequence of the cycle consumes NADH and can contribute to the maintenance of ATP level in highly hypoxic conditions. [ 7 ]
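The individual steps described above can be summarized as a reaction scheme. This is a simplified restatement of the text, with electron donors written generically as NAD(P)H or e−, not a stoichiometrically complete mechanism:

```latex
% Simplified summary of the phytoglobin-NO cycle described above
\begin{align*}
\mathrm{NO_3^-} + \mathrm{NAD(P)H} &\longrightarrow \mathrm{NO_2^-}
  && \text{(nitrate reductase)}\\
\mathrm{NO_2^-} + e^- &\longrightarrow \mathrm{NO}
  && \text{(anaerobic reduction, e.g.\ mitochondrial)}\\
\mathrm{Pgb(Fe^{2+})O_2} + \mathrm{NO} &\longrightarrow \mathrm{metPgb(Fe^{3+})} + \mathrm{NO_3^-}
  && \text{(NO dioxygenase step)}\\
\mathrm{metPgb(Fe^{3+})} + e^- &\longrightarrow \mathrm{Pgb(Fe^{2+})}
  && \text{(metphytoglobin reduction)}
\end{align*}
```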
https://en.wikipedia.org/wiki/Phytoglobin-NO_cycle
Phytoglycogen is a type of glycogen extracted from plants. It is a highly branched, water-soluble polysaccharide derived from glucose . [ 1 ] Phytoglycogen stores glucose in plants in much the same way that glycogen stores glucose in animals. It is made up of branched, flexible chains of glucose molecules that grow similarly to synthetic dendrimers . The particular structure of phytoglycogen gives it low viscosity , high water retention, and high stability in water, and allows it to stabilize bioactive compounds and form films on surfaces. Thus, this monodisperse nanoparticle can be used in many different technologies. [ 2 ]
https://en.wikipedia.org/wiki/Phytoglycogen
Phytomining , sometimes called agromining , [ 1 ] is the concept of extracting heavy metals from the soil using plants. [ 2 ] Specifically, phytomining is carried out for economic gain. [ 3 ] The approach exploits the existence of hyperaccumulators , plants that take up and concentrate certain metal ions in their tissues. The extracted ores are called bio-ores. [ 4 ] A 2021 review concluded that the commercial viability of phytomining was "limited" [ 1 ] because it is a slow and inefficient process. Phytomining was first proposed in 1983 by Rufus Chaney, a USDA agronomist. [ 5 ] He and Alan Baker, a University of Melbourne professor, first tested it in 1996. [ 5 ] They, together with Jay Scott Angle and Yin-Ming Li, filed a patent on the process in 1995, which expired in 2015. [ 6 ] Phytomining would, in principle, cause minimal environmental effects compared to conventional mining . [ 2 ] Phytomining could also remove low-grade heavy metals from mine waste. [ 4 ]
https://en.wikipedia.org/wiki/Phytomining
A phytoplankton microbiome is the community of microorganisms—mainly bacteria, but also including fungi and viruses—that live in association with phytoplankton . These microbiomes play a critical role in marine ecosystems by supporting phytoplankton health, facilitating nutrient cycling, sustaining food webs, and contributing to climate regulation. [ 1 ] Microbial partners help decompose organic matter and recycle key nutrients like nitrogen and carbon, sustaining primary production and supporting ocean productivity and phytoplankton community structure. [ 2 ] [ 1 ] Diazotrophic cyanobacteria, for example, fix atmospheric nitrogen, boosting productivity in nutrient-poor waters. [ 3 ] As primary producers , phytoplankton absorb CO₂ through photosynthesis, contributing to the biological carbon pump and long-term carbon sequestration. [ 4 ] [ 5 ] Phytoplankton–microbiome interactions are central to global biogeochemical cycles . Microbial diversity influences host physiology and ecosystem functions, while environmental factors such as temperature, nutrient levels, and ocean chemistry shape microbiome composition and function. [ 6 ] [ 7 ] [ 8 ] Chemical signaling—through quorum sensing and infochemicals —regulates microbial behavior, impacting bloom dynamics, symbiosis, and defense mechanisms. [ 9 ] [ 10 ] Viruses also affect phytoplankton populations by driving nutrient turnover and mediating carbon flow. [ 11 ] [ 12 ] Current research focuses on microbial diversity, environmental drivers, and chemical communication, all of which are crucial to understanding the phytoplankton microbiome's ecological functions. These microbial interactions shape marine ecosystem stability, food web dynamics, and global climate processes. Symbiotic relationships between phytoplankton and their microbiomes are crucial for promoting phytoplankton growth, resilience to stress, and overall ecological stability. Microorganisms, including bacteria and archaea, provide essential nutrients like nitrogen and phosphorus through nitrogen fixation and nutrient recycling. [ 6 ] In return, phytoplankton offer a stable habitat, supplying organic compounds and surface areas where microbes can attach. These interactions bolster the phytoplankton's ability to cope with environmental challenges, such as oxidative stress, pathogens, and harmful algal blooms, through the protective functions of specific microbial communities. [ 13 ] [ 14 ] Additionally, microbiomes influence marine primary production by enhancing nutrient uptake and stimulating phytoplankton growth, which in turn supports higher trophic levels in marine food webs. [ 15 ] [ 16 ] Moreover, the microbes associated with phytoplankton are key players in biogeochemical cycles: they facilitate nitrogen fixation and break down organic matter, releasing usable nitrogen and carbon that fuel phytoplankton growth and help close crucial nutrient loops. [ 8 ] [ 17 ] These dynamic interactions between phytoplankton and their microbiomes are instrumental in maintaining ocean productivity and the functioning of marine ecosystems. Microbial diversity within eukaryotes influences host physiology, nutrient cycling, and community dynamics in marine ecosystems. Higher microbiome diversity correlates with increased bacterial phenotypic and taxonomic diversity, while lower diversity leads to reduced bacterial diversity. The microbiome of phytoplankton also affects phytoplankton growth and survival rates, which in turn affect oceanic cycles.
[ 6 ] Additionally, microbial diversity affects phytoplankton biomass, with low diversity linked to higher biomass and higher diversity to lower biomass. Under stressful conditions, such as rising water temperatures, the impact of microbiome diversity on phytoplankton community structure is amplified, decreasing community diversity and abundance. Increased microbial diversity also elevates the levels of dissolved nitrogen and phosphorus in the water. [ 6 ] Phytoplankton are genetically diverse, spanning 12 taxonomic divisions, yet maintain conserved metabolic functions vital for biogeochemical stability, such as photosynthesis and nutrient cycling. [ 1 ] Cyanobacteria , an ancient group, dominate modern phytoplankton populations and contributed to the evolution of eukaryotic algae and plants through endosymbiosis. This functional redundancy has ensured phytoplankton's resilience despite environmental changes. Environmental stresses, including alterations of ocean chemistry by phytoplankton themselves, drive phylogenetic diversity and reinforce ecosystem stability. [ 1 ] Phytoplankton release dissolved organic matter (DOM), which sustains their microbiome and part of the bacterioplankton community. [ 18 ] Microbial communities in these systems are shaped by deterministic processes, enhancing host function, while free-living communities are more flexible and less diverse. Microevolution through host–microbe interactions supports ecological adaptation and niche differentiation. [ 15 ] The phycosphere refers to the region surrounding an individual phytoplankton cell, where algae, bacteria, archaea, and viruses interact. [ 2 ] [ 14 ] Among these organisms, bacteria have the most significant influence on the phycosphere. The specific conditions of the phycosphere help select particular bacterial populations, which in turn shape the microbial community. Over time, as these communities evolve, they can also enhance the fitness of the host. [ 14 ] The interactions between bacteria and phytoplankton are complex. In marine ecosystems, it remains unclear whether the genetic makeup of the host influences the bacterial community composition. However, it is known that phytoplankton species are categorized into populations based on their genetic traits, often linked to their ability to survive in specific environmental conditions. For instance, a study on T. rotula showed that the genotype of the host phytoplankton influences the microbial community surrounding it. [ 7 ] Phytoplankton interact with surrounding microbial communities through chemical signaling, exchanging compounds that influence growth, metabolism, and community composition. Microbes provide essential nutrients such as vitamins and nitrogen, while phytoplankton release organic molecules that help structure microbial populations. [ 19 ] [ 2 ] These interactions are often concentrated in the phycosphere; however, during large blooms, chemical signals can operate over broader scales due to increased cell density and compound concentration. [ 9 ] Throughout the bloom cycle—ranging from dormancy to demise—chemical signals mediate different ecological roles. In early stages, signals like DMSP attract beneficial bacteria. During peak growth, microbes exchange growth-promoting compounds, while in later stages, stress-related signals may trigger defense or cell death, contributing to bloom decline and nutrient recycling.
[ 20 ] [ 11 ] These microscale chemical exchanges shape microbial succession and impact broader ecosystem functions, including carbon and nutrient cycling. [ 21 ] Phytoplankton exhibit vast genetic diversity across 12 taxonomic divisions but share conserved metabolic pathways that support critical biogeochemical processes, including oxygenic photosynthesis and nutrient cycling. [ 1 ] Cyanobacteria, among the oldest phytoplankton, shaped evolutionary history by contributing to eukaryotic algal development through endosymbiosis. Despite significant environmental changes, phytoplankton persist due to functional redundancy and genetic adaptation. Their microbiomes are sustained by dissolved organic matter (DOM) excreted by the host, influencing both associated and free-living microbial communities. [ 18 ] Deterministic processes drive microbial diversity, shaping phytoplankton functions, while microevolution supports niche adaptation. [ 15 ] Phytoplankton significantly shape their microbiomes, with host genotype playing a crucial role in determining microbial community structure. Microbial diversity is more closely linked to host genetics than to environmental factors, with these patterns observed across ocean basins. [ 7 ] Host-driven microbiome recruitment influences both composition and function, and this specificity can persist over generations. [ 14 ] While these microbiomes contribute to host fitness, the extent of the benefits can vary by species, and long-term shifts in microbial composition can influence how hosts adapt to changing environments. Multiple environmental factors regulate phytoplankton microbiomes. These environmental variables interact with host genetics to structure microbiomes, influencing ecological processes like nutrient cycling and carbon sequestration. Phytoplankton–microbial interactions take several forms, and understanding these interactions is essential for elucidating phytoplankton population dynamics and their broader ecological roles. Chemical signaling through infochemicals is crucial for microbial interactions and the regulation of aquatic ecosystems. These signals enable both intraspecific and interspecific communication, influencing microbial community dynamics and ecological processes. Phytoplankton release allelopathic compounds that affect microbial growth, community composition, and toxin production, particularly in species like Alexandrium spp. and Prymnesium spp. [ 9 ] Quorum sensing (QS) molecules also regulate bacterial behavior, playing a significant role in microbial communication. In the phycosphere, Sulfitobacter species enhance diatom growth by secreting indole-3-acetic acid (IAA). [ 19 ] Furthermore, intraspecific signaling in microalgae fosters reproduction and genetic diversity, maintaining ecological balance and community structure. [ 23 ] The structure of microbial communities is influenced not only by species composition but also by biotic interactions involving the flow of energy and metabolites that drive trophic dynamics. [ 16 ] While traditional ecological theory emphasizes interspecific competition and abiotic constraints (e.g., temperature and nutrient availability) as the main determinants of community assembly, recent studies suggest that the success of different microbial species also depends on their ability to interact with other microbes and on the metabolic interdependencies among them. [ 16 ] At the base of ecological composition are trophic interactions and the creation of opportunities for species to establish themselves and reassemble an environment.
Environmental disturbances such as seasonal mixing, upwelling, and viral infections can disrupt established communities, allowing microbial colonizers to enter. [ 20 ] After a species has entered a system, its persistence in a disturbed environment depends on its efficiency in using available resources and its ability to engage in metabolic exchanges with surrounding microbes. [ 16 ] If unable to form metabolic linkages, a species can be subject to competitive exclusion by more efficient species, depending on environmental conditions. [ 20 ] In high-resource environments competition dominates, while in lower-resource environments the exchange of metabolites allows less dominant species to coexist. [ 16 ] These interactions form cross-feeding networks, in which one species' metabolic byproducts (e.g., amino acids, acetate, and other organic molecules) can be used as another species' substrates. [ 24 ] For example, a bacterium that produces acetate as a waste product may serve another species as a source of carbon or energy. Such interactions encourage coexistence by creating symbiotic or obligatory relationships between microbes. [ 24 ] As decomposers and chemical converters, microorganisms break down organic matter and facilitate the transformation between inorganic and organic forms of matter. Through this conversion, microbes release nutrients that can be redistributed to primary producers, helping to drive the cycling of nutrients such as nitrogen, phosphorus, and iron. [ 25 ] Nitrogen cycling, in particular, involves several microbial pathways that determine the amount of bioavailable nitrogen. These transformations regulate the ratio of dissolved inorganic nitrogen (DIN) to particulate organic nitrogen (PON) available. In the mineralization of nutrients, heterotrophic microbes break down organic matter, releasing inorganic nutrients such as ammonium and phosphate, which are subsequently converted to more oxidized forms such as nitrate and orthophosphate. In these forms, the nutrients can be taken up easily and quickly recycled as part of short-term productivity. [ 25 ] As a by-product of remineralization in the upper ocean, microbes interact with phytoplankton and sinking organic matter to help form recalcitrant dissolved organic matter (RDOM) and facilitate long-term carbon storage in the deep ocean. [ 17 ] When phytoplankton die or are consumed, their biomass forms larger aggregates that sink through the water column, contributing to the carbon flux of the biological carbon pump . As these particles descend, heterotrophic bacteria break down the organic matter through enzymatic degradation and release dissolved organic and inorganic carbon in the forms of CO 2 and HCO 3 - . [ 17 ] While trophic interactions and metabolic dependencies have roles in shaping microbial assembly, they are also regulated by viral infections. In particular, viruses that infect bacteria (bacteriophages) and phytoplankton influence the coexistence of microbial populations through host-specific lytic infections. [ 11 ] In 1961, George Evelyn Hutchinson introduced the "paradox of the plankton", which questioned how numerous species could coexist in a relatively isotropic or unstructured environment and compete without a single dominant competitor.
[ 26 ] The idea follows conventional theory, whereby in an unstructured environment competitive exclusion should occur and reduce a microbial community to a population of a single species; however, this was not seen in some settings. [ 16 ] To explain this dilemma, viral infections were invoked, as viruses help prevent any single species from dominating by infecting the most abundant hosts. When a virus infects a microbial cell, it co-opts the host's machinery to reproduce, causing the cell to burst and release its intracellular contents as dissolved organic matter. This is part of a process called the viral shunt. The released dissolved organic matter is taken up by heterotrophic bacteria to be recycled and remineralized. [ 12 ] This type of viral control is modeled by the boom-and-bust dynamics (BBeD) framework. After a phytoplankton species blooms and experiences an increase in population, it becomes more susceptible to viral infection and, once infected, is subject to an equally rapid decrease, or bust, in population. In this way, viral infections can impose top-down regulation on a dominant species by saturating the environment with host-specific viruses. This adds a layer of species-specific regulation in which potentially hyper-successful species are prevented from having successive blooms and ecological space is created for other, less dominant species to compete in the same system. [ 27 ] Phytoplankton microbial community structure and diversity are dictated by environmental conditions. [ 8 ] Environmental and anthropogenic stressors such as elevated temperatures and changes in nutrient availability drive changes in the phycosphere, which consequently impact ecosystem dynamics and biogeochemical cycling. [ 6 ] [ 13 ] Distinct microbiome compositions have been characterized in a study that performed metagenomic analysis to compare phytoplankton communities sampled from polar and non-polar regions. [ 8 ] The differences in microbiome composition were hypothesized to be due to differences in environmental conditions and stressors, such as sea ice and thermohaline mixing. [ 8 ] The same study also uncovered differences in metabolic pathways involved in nutrient acquisition and utilization, correlated with geographical location. Phytoplankton microbiomes play important roles in maintaining host health and fitness. [ 13 ] [ 28 ] The symbiotic interactions within the phycosphere impact the biological processes of both partners, host and microbiome, dictating their ecological roles. Additionally, the phytoplankton microbiome has been shown to be integral to the adaptation of the phycosphere to stressors such as toxic pollutants and parasite infections. [ 13 ] Toxic anthropogenic inputs into the ocean, such as herbicides and pesticides , have been linked to impaired function of photosynthetic pigments in phytoplankton. [ 29 ] Phytoplankton are affected more strongly by these pollutants than their microbiome counterparts, which can convert toxic compounds into growth-stimulating molecules, promoting the survival of phytoplankton in polluted environments. [ 13 ] Additionally, horizontal gene transfer of tolerance genes contributes to the survival of marine phytoplankton in the presence of toxins. [ 29 ] Extracellular polymeric substances (EPSs) produced by microbes contribute to the formation of biofilms that act as a protective layer, trapping and metabolizing toxic compounds into less harmful forms.
[ 13 ] Polar diatoms and their associated microbiomes have been observed to work in synergy to produce EPSs that disrupt ice crystal formation and reduce the freezing point of ice. [ 30 ] Since polar diatoms are often incorporated into sea ice, it is hypothesized that they use ice-binding proteins and EPSs to modulate their icy environment and maintain an aqueous brine channel system to preserve access to nutrients. [ 30 ] Environmental conditions modulate the function and diversity of phytoplankton-associated microbiomes. Ocean environments are changing as a result of human activity, causing variations in temperature, nutrient availability, and toxic pollutants. As ocean ecosystems continue to be impacted by anthropogenic disturbances, shifts in phytoplankton community composition and diversity may drive changes in global primary productivity trends. [ 31 ] While current knowledge of the impacts of climate change on phytoplankton microbiomes is fairly limited, monitoring for changes using metagenomics and metatranscriptomics is a promising approach that may provide more insight and predictions regarding the responses of phytoplankton microbiomes to global warming. [ 30 ] [ 32 ] [ 33 ] Anthropogenic stressors causing increased temperature and changes in nutrient availability contribute to and exacerbate decreasing microbiome diversity, [ 34 ] [ 8 ] which has been shown to impact nutrient cycling, [ 6 ] most notably resulting in depleted nitrogen and phosphorus. [ 6 ] As nitrogen and phosphorus are already considered common limiting nutrients in ocean environments, depletion of these nutrients can directly impact primary productivity in the region. Among other complex changes in ecosystem dynamics, ocean acidification has been shown to inhibit nitrification by microbes, while expanding oxygen minimum zones (OMZs) are associated with microbially driven denitrification. [ 35 ] [ 36 ] [ 37 ] These shifts may result in restructured nutrient cycling, which may have cascading effects on ocean ecosystems on a large scale. Changes in phytoplankton morphology—shifting towards smaller cell sizes—and community composition have also been observed in response to decreased microbiome diversity. [ 6 ] This is hypothesized to be due to host cell stress as a result of nitrogen and phosphorus limitation. [ 6 ]
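To make the boom-and-bust (BBeD) idea discussed above concrete, the sketch below integrates a minimal host–virus model of the Lotka–Volterra type: a blooming host is infected by a host-specific virus, crashes, and the virus then decays, freeing ecological space for other species. The equations and every parameter value are illustrative assumptions, not the published BBeD model.

```python
# Minimal host-virus "boom and bust" sketch (illustrative, not the BBeD model).
# dH/dt = r*H - phi*H*V        host grows, is lost to infection
# dV/dt = b*phi*H*V - m*V      viruses produced per lysis event, then decay
def simulate(r=0.6, phi=1e-9, b=50.0, m=0.4, H0=1e4, V0=1e5, dt=0.01, days=60):
    H, V, series = H0, V0, []
    for step in range(int(days / dt)):
        dH = r * H - phi * H * V
        dV = b * phi * H * V - m * V
        H, V = max(H + dH * dt, 0.0), max(V + dV * dt, 0.0)
        series.append((step * dt, H, V))
    return series

for t, H, V in simulate()[::500]:  # print every 5 simulated days
    print(f"day {t:5.1f}  host {H:12.3e}  virus {V:12.3e}")
```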
https://en.wikipedia.org/wiki/Phytoplankton_microbiome
Phytoremediation technologies use living plants to clean up soil, air and water contaminated with hazardous contaminants. [ 1 ] It is defined as "the use of green plants and the associated microorganisms, along with proper soil amendments and agronomic techniques to either contain, remove or render toxic environmental contaminants harmless". [ 2 ] The term is an amalgam of the Greek phyto (plant) and Latin remedium (restoring balance). Although attractive for its cost, phytoremediation has not been demonstrated to redress any significant environmental challenge to the extent that contaminated space has been reclaimed. Phytoremediation is proposed as a cost-effective, plant-based approach to environmental remediation that takes advantage of the ability of plants to concentrate elements and compounds from the environment and to detoxify various compounds without causing additional pollution. [ 3 ] The concentrating effect results from the ability of certain plants, called hyperaccumulators , to bioaccumulate chemicals. The remediation effect is quite different: toxic heavy metals cannot be degraded, but organic pollutants can be, and the latter are generally the major targets for phytoremediation. Several field trials have confirmed the feasibility of using plants for environmental cleanup . [ 4 ] Soil remediation is an expensive and complicated process. Traditional methods involve removal of the contaminated soil followed by treatment and return of the treated soil. [ citation needed ] Phytoremediation could in principle be a more cost-effective solution. [ 5 ] Phytoremediation may be applied to polluted soil or static water environments. This technology has been increasingly investigated and employed at sites with soils contaminated with heavy metals such as cadmium , lead , aluminum , arsenic and antimony . [ 6 ] These metals can cause oxidative stress in plants, destroy cell membrane integrity, interfere with nutrient uptake, inhibit photosynthesis and decrease plant chlorophyll . [ 7 ] Phytoremediation has been used successfully in the restoration of abandoned metal mine workings, at sites where polychlorinated biphenyls have been dumped during manufacture, and in the mitigation of ongoing coal mine discharges, reducing the impact of contaminants in soils, water, or air. [ citation needed ] Contaminants such as metals, pesticides, solvents, explosives, [ 8 ] and crude oil and its derivatives have been mitigated in phytoremediation projects worldwide. Many plants, such as mustard plants , alpine pennycress , hemp , and pigweed , have proven to be successful at hyperaccumulating contaminants at toxic waste sites. Not all plants are able to accumulate heavy metals or organic pollutants, owing to differences in plant physiology. [ 9 ] Even cultivars within the same species have varying abilities to accumulate pollutants. [ 9 ] A range of processes mediated by plants or algae have been tested for treating environmental problems. [ citation needed ] Phytoextraction (or phytoaccumulation or phytosequestration ) exploits the ability of plants or algae to remove contaminants from soil or water into harvestable plant biomass. It has also been used for the mining of metals such as copper(II) compounds. The roots take up substances from the soil or water and concentrate them above ground in the plant biomass. [ 10 ] Organisms that can take up high amounts of contaminants are called hyperaccumulators . [ 13 ] Phytoextraction can also be performed by plants (e.g.
Populus and Salix ) that take up lower levels of pollutants but, due to their high growth rate and biomass production, may remove a considerable amount of contaminants from the soil. [ 14 ] Phytoextraction has been growing rapidly in popularity worldwide for the last twenty years or so. Typically, phytoextraction is used for heavy metals or other inorganics. [ 15 ] At the time of disposal, contaminants are typically concentrated in the much smaller volume of the plant matter than in the initially contaminated soil or sediment. After harvest, a lower level of the contaminant will remain in the soil, so the growth/harvest cycle must usually be repeated through several crops to achieve a significant cleanup. After the process, the soil is remediated. [ citation needed ] Many pollutants kill plants, however, so phytoremediation is not a panacea. For example, chromium is toxic to most higher plants at concentrations above 100 μmol·kg−1 dry weight. [ 16 ] Mining of these extracted metals through phytomining is a conceivable way of recovering the material. [ 17 ] Hyperaccumulating plants are often metallophytes . Induced or assisted phytoextraction is a process in which a conditioning fluid containing a chelator or another agent is added to the soil to increase metal solubility or mobilization so that the plants can absorb the metals more easily. [ 18 ] While such additives can increase metal uptake by plants, they can also lead to large amounts of available metals in the soil beyond what the plants are able to translocate, causing potential leaching into the subsoil or groundwater. [ 18 ] Numerous plants are known to accumulate particular contaminants. Phytostabilization reduces the mobility of substances in the environment, for example by limiting the leaching of substances from the soil . [ 9 ] It focuses on the long-term stabilization and containment of the pollutant. The plant immobilizes the pollutants by binding them to soil particles, making them less available for plant or human uptake. [ citation needed ] Unlike phytoextraction, phytostabilization focuses mainly on sequestering pollutants in the soil near the roots rather than in plant tissues. Pollutants become less bioavailable, resulting in reduced exposure. The plants can also excrete a substance that produces a chemical reaction converting the heavy metal pollutant into a less toxic form. [ 10 ] Stabilization results in reduced erosion, runoff and leaching, in addition to reducing the bioavailability of the contaminant. [ 15 ] An example application of phytostabilization is using a vegetative cap to stabilize and contain mine tailings . [ 27 ] Some soil amendments decrease radiosource mobility, while at some concentrations the same amendments will increase mobility. [ 28 ] [ 29 ] Vidal et al. (2000) found the root mats of meadow grasses to be effective at immobilising radiosource materials, especially with certain combinations of other agricultural practices. [ 28 ] [ 29 ] They also found that the particular grass mix makes a significant difference. [ 28 ] [ 29 ] Phytodegradation (also called phytotransformation) uses plants or microorganisms to degrade organic pollutants in the soil or within the body of the plant. The organic compounds are broken down by enzymes that the plant roots secrete, and these molecules are then taken up by the plant and released through transpiration. [ 30 ] This process works best with organic contaminants like herbicides, trichloroethylene , and methyl tert -butyl ether .
[ 15 ] Phytotransformation results in the chemical modification of environmental substances as a direct result of plant metabolism , often resulting in their inactivation, degradation (phytodegradation), or immobilization (phytostabilization). In the case of organic pollutants, such as pesticides , explosives , solvents , industrial chemicals, and other xenobiotic substances, certain plants, such as Cannas , render these substances non-toxic by their metabolism . [ 31 ] In other cases, microorganisms living in association with plant roots may metabolize these substances in soil or water. These complex and recalcitrant compounds cannot be broken down to basic molecules (water, carbon-dioxide, etc.) by plant molecules, and, hence, the term phytotransformation represents a change in chemical structure without complete breakdown of the compound. The term "Green Liver" is used to describe phytotransformation, [ 32 ] as plants behave analogously to the human liver when dealing with these xenobiotic compounds (foreign compound/pollutant). [ 33 ] [ 34 ] After uptake of the xenobiotics, plant enzymes increase the polarity of the xenobiotics by adding functional groups such as hydroxyl groups (-OH). [ citation needed ] This is known as Phase I metabolism, similar to the way that the human liver increases the polarity of drugs and foreign compounds ( drug metabolism ). Whereas in the human liver enzymes such as cytochrome P450s are responsible for the initial reactions, in plants enzymes such as peroxidases, phenoloxidases, esterases and nitroreductases carry out the same role. [ 31 ] In the second stage of phytotransformation, known as Phase II metabolism, plant biomolecules such as glucose and amino acids are added to the polarized xenobiotic to further increase the polarity (known as conjugation). This is again similar to the processes occurring in the human liver where glucuronidation (addition of glucose molecules by the UGT class of enzymes, e.g. UGT1A1 ) and glutathione addition reactions occur on reactive centres of the xenobiotic. [ citation needed ] Phase I and II reactions serve to increase the polarity and reduce the toxicity of the compounds, although many exceptions to the rule are seen. The increased polarity also allows for easy transport of the xenobiotic along aqueous channels. [ citation needed ] In the final stage of phytotransformation (Phase III metabolism), a sequestration of the xenobiotic occurs within the plant. The xenobiotics polymerize in a lignin -like manner and develop a complex structure that is sequestered in the plant. This ensures that the xenobiotic is safely stored, and does not affect the functioning of the plant. However, preliminary studies have shown that these plants can be toxic to small animals (such as snails), and, hence, plants involved in phytotransformation may need to be maintained in a closed enclosure. [ citation needed ] Hence, the plants reduce toxicity (with exceptions) and sequester the xenobiotics in phytotransformation. Trinitrotoluene phytotransformation has been extensively researched and a transformation pathway has been proposed. [ 35 ] Phytostimulation (or rhizodegradation) is the enhancement of soil microbial activity for the degradation of organic contaminants, typically by organisms that associate with roots . [ 30 ] This process occurs within the rhizosphere , which is the layer of soil that surrounds the roots. 
[ 30 ] Plants release carbohydrates and acids that stimulate microorganism activity, which results in the biodegradation of the organic contaminants. [ 36 ] This means that the microorganisms are able to digest and break down the toxic substances into harmless forms. [ 30 ] Phytostimulation has been shown to be effective in degrading petroleum hydrocarbons, PCBs, and PAHs. [ 15 ] Phytostimulation can also involve aquatic plants supporting active populations of microbial degraders, as in the stimulation of atrazine degradation by hornwort. [ 37 ] Phytovolatilization is the removal of substances from soil or water with release into the air, sometimes as a result of phytotransformation to more volatile and/or less polluting substances. In this process, contaminants are taken up by the plant and, through transpiration, evaporate into the atmosphere. [ 30 ] Direct phytovolatilization, in which volatilization occurs at the stem and leaves of the plant, is the most studied form; indirect phytovolatilization occurs when contaminants are volatilized from the root zone. [ 38 ] Selenium (Se) and mercury (Hg) are often removed from soil through phytovolatilization. [ 9 ] Poplar trees are among the most successful plants for removing VOCs through this process due to their high transpiration rate. [ 15 ] Rhizofiltration is a process that filters water through a mass of roots to remove toxic substances or excess nutrients. The pollutants remain absorbed in or adsorbed to the roots. [ 30 ] This process is often used to clean up contaminated groundwater, either by planting directly in the contaminated site or by removing the contaminated water and providing it to plants in an off-site location. [ 30 ] In either case, plants are typically first grown in a greenhouse under precise conditions. [ 39 ] Biological hydraulic containment occurs when some plants, like poplars, draw water upwards through the soil into the roots and out through the plant, which decreases the movement of soluble contaminants downwards, deeper into the site and into the groundwater. [ 40 ] Phytodesalination uses halophytes (plants adapted to saline soil) to extract salt from the soil to improve its fertility. [ 10 ] Breeding programs and genetic engineering are powerful methods for enhancing natural phytoremediation capabilities, or for introducing new capabilities into plants. Genes for phytoremediation may originate from a micro-organism or may be transferred from one plant to another variety better adapted to the environmental conditions at the cleanup site. For example, genes encoding a nitroreductase from a bacterium were inserted into tobacco; the transgenic plants removed TNT faster and showed enhanced resistance to its toxic effects. [ 41 ] Researchers have also discovered a mechanism in plants that allows them to grow even when the pollution concentration in the soil is lethal for non-treated plants. Some natural, biodegradable compounds, such as exogenous polyamines, allow the plants to tolerate concentrations of pollutants 500 times higher than untreated plants can, and to absorb more pollutants. [ citation needed ] A plant is said to be a hyperaccumulator if it can concentrate a pollutant to at least a minimum concentration that varies according to the pollutant involved (for example: more than 1000 mg/kg of dry weight for nickel, copper, cobalt, chromium or lead; or more than 10,000 mg/kg for zinc or manganese).
[ 42 ] This capacity for accumulation is due to hypertolerance, or phytotolerance: the result of adaptive evolution of plants to hostile environments over many generations. A number of interactions may be affected by metal hyperaccumulation, including protection, interference with neighbouring plants of different species, mutualism (including mycorrhizae, pollen and seed dispersal), commensalism, and biofilm. [ 43 ] [ 44 ] [ 45 ] As plants can translocate and accumulate particular types of contaminants, plants can be used as biosensors of subsurface contamination, thereby allowing investigators to delineate contaminant plumes quickly. [ 46 ] [ 47 ] Chlorinated solvents, such as trichloroethylene, have been observed in tree trunks at concentrations related to groundwater concentrations. [ 48 ] To ease field implementation of phytoscreening, standard methods have been developed to extract a section of the tree trunk for later laboratory analysis, often by using an increment borer. [ 49 ] Phytoscreening may lead to more efficient site investigations and reduce contaminated site cleanup costs. [ citation needed ]
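As a rough illustration of the hyperaccumulator definition quoted above, the short sketch below checks a dried-tissue measurement against the per-metal thresholds given in the text. The function name and the example measurements are hypothetical, and the sketch is illustrative arithmetic only, not a method from the cited studies.

    # Thresholds follow the figures quoted above (mg per kg of dry plant tissue);
    # the example measurements are made up.
    HYPERACCUMULATION_THRESHOLDS = {
        "nickel": 1000, "copper": 1000, "cobalt": 1000, "chromium": 1000, "lead": 1000,
        "zinc": 10000, "manganese": 10000,
    }

    def is_hyperaccumulator(metal, tissue_conc_mg_per_kg):
        """Return True if a dry-weight tissue concentration exceeds the quoted threshold."""
        return tissue_conc_mg_per_kg > HYPERACCUMULATION_THRESHOLDS[metal]

    print(is_hyperaccumulator("nickel", 1350))   # True: above the 1000 mg/kg threshold
    print(is_hyperaccumulator("zinc", 1350))     # False: zinc requires more than 10,000 mg/kg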
https://en.wikipedia.org/wiki/Phytoremediation
Phytosemiotics is a branch of biosemiotics that studies the sign processing capabilities present in plants. [ 1 ] Some functions that plants perform that utilize this simple semiosis include cellular recognition, plant perception, intercellular communication, [ 1 ] and plant signal transduction. [ 2 ] Compared to the sign processing present in animals and humans, phytosemiotics occurs at the cellular level, with communication between the cells of plants acting as a means of observing their surroundings and making rudimentary decisions. [ 1 ] The term 'phytosemiotics' was introduced by the German psychologist and semiotician Martin Krampen in 1981. After participating in an experiment involving a subject living in a plant-filled greenhouse, Krampen became interested in the semiotic processing capabilities of plants. After consulting the works of Jakob von Uexküll, in particular his 'Theory of Meaning', Krampen further developed this concept and eventually wrote "Phytosemiotics", the first essay covering the topic. [ 3 ] Despite the fundamentally different biological systems that make up animals and plants, there are comparisons to be made in the ways they undergo semiosis. One possible similarity is the presence of vegetative semiotics within plants as well as animals, as intercellular communication is an important aspect of all life. [ 1 ] Another example is the ability to distinguish which aspects of a creature's immediate environment (also known as its 'umwelt') are important to its survival and which are not. [ 3 ] While plants do not have an umwelt in the traditional sense, they are able to deduce which surrounding resources are important to them and which are not. [ 3 ] An important aspect of phytosemiotics is understanding how plants undergo semiosis differently from how animals experience it. [ 3 ] One important difference is the lack of 'receptor' and 'effector' organs in plants, unlike animals, which are able to see their environment and interact with it in a more direct way than plants can. [ 3 ] While plants may not have traditional receptor or effector organs in the same way animals do, plants do use signal transduction to send external signals throughout the plant to make simple decisions. [ 2 ] Another important difference is the types of signs plants are able to process. While plants can only process indexes, animals can also process icons and humans can further process symbols. [ 3 ] The place of phytosemiotics within broader semiotic research remains controversial, as the extent to which plants actually exhibit sign processing is still debated: the range of signs plants are able to process is fairly limited compared to the semiotic capabilities of humans or animals. [ 2 ] The lack of ability to process icons, [ 2 ] due to not having receptor organs, [ 3 ] makes the semiotics of plants fundamentally different from semiosis in animals. Another challenge in legitimizing phytosemiotics as a field of study in semiotics is the blurring of lines between animal life and plant life. In researching plant biology there is a risk of ascribing human traits to plants. [ 2 ] This makes discerning the legitimacy of plant semiosis and plant communication difficult. However, if fully recognized, phytosemiotics could expand semiotic research beyond a focus on human or animal semiotics into the other kingdoms of life, including fungi and bacteria.
[ 1 ] It could also change how we view the components that make up effective sign processing and how non-human/animal life is capable of more advanced sign processing. [ 2 ]
https://en.wikipedia.org/wiki/Phytosemiotics
Phytosociology, also known as phytocoenology or simply plant sociology, is the study of groups of plant species that are usually found together. Phytosociology aims to empirically describe the vegetative environment of a given territory. A specific community of plants is considered a social unit, the product of definite conditions, present and past, and can exist only when such conditions are met. In phytosociology, such a unit is known as a phytocoenosis (or phytocoenose). A phytocoenosis is more commonly known as a plant community, and consists of the sum of all plants in a given area. It is a subset of a biocoenosis, which consists of all organisms in a given area. More strictly speaking, a phytocoenosis is a set of plants in an area that are interacting with each other through competition or other ecological processes. Coenoses are not equivalent to ecosystems, which consist of organisms and the physical environment that they interact with. A phytocoenosis has a distribution which can be mapped. Phytosociology has a system for describing and classifying these phytocoenoses in a hierarchy, known as syntaxonomy, and this system has a nomenclature. The science is most advanced in Europe, Africa and Asia. In the United States this concept was largely rejected in favour of studying environments in more individualistic terms regarding species, where specific associations of plants occur randomly because of individual preferences and responses to gradients, and there are no sharp boundaries between phytocoenoses. The term 'plant community' is usually used in the US for a habitat consisting of a number of specific plant species. It has been a successful approach within contemporary vegetation science because of its highly descriptive and predictive powers, and its usefulness in nature management issues. The term 'phytosociology' was coined in 1896 by Józef Paczoski. [ 1 ] The term 'phytocoenology' was coined by Helmut Gams in 1918. [ 1 ] [ 2 ] While the term phytocoenosis grew to be most popular in France, Switzerland, Germany and the Soviet Union, the term phytosociology remained in use in some European countries. [ 1 ] Phytosociology is a further refinement of the phytogeography introduced by Alexander von Humboldt at the very beginning of the 19th century. [ 1 ] [ 3 ] [ 4 ] [ 5 ] Phytocoenology was initially considered to be a subdiscipline of 'geobotany'. [ 2 ] In Scandinavia the concept of plant associations was popular at an early date, with contributions from Hampus von Post (1842, 1862), [ 6 ] Ragnar Hult (1881, 1898), [ 7 ] Thore Christian Elias Fries (1913) [ 8 ] and Gustaf Einar Du Rietz (1921). [ 9 ] In the Soviet Union an important botanist to apply and popularise the science was Vladimir Sukachev. [ 15 ] The science of phytosociology has hardly penetrated into the English-speaking world, where the continuum concept of community prevailed, opposed to the concept of a 'society' of plants. [ 5 ] Nonetheless it had some early adherents in the United States, notably Frederic Clements, who used the concept to characterise the vegetation of California. Largely following European ideas, he devised his own system to classify habitat types using vegetation. [ 16 ] [ 17 ] [ 18 ] Clements's most important contribution was his study of succession. His work has seen much local usage. [ 1 ] In Britain, Arthur Tansley was the first to apply phytosociological concepts to the vegetation of the kingdom, in 1911, after learning of their application elsewhere in Europe.
[ 19 ] Tansley eventually broadened the concept and thus came up with the idea of an ecosystem, combining all biotic and abiotic ecological aspects of an environment. The work of Tansley and Clements was quite divergent from the rest. Modern phytosociology largely follows the work of Józef Paczoski in Poland, Josias Braun-Blanquet in France and Gustaf Einar Du Rietz in Sweden. [ 5 ] In Europe a complete classification system has been developed to describe the vegetation types found across the continent. These are used as habitat-type classifications in the NATURA 2000 network and in Habitats Directive legislation. Each phytocoenose has been given a number, and protected areas can thus be classified according to the habitats they contain. In Europe this information is generally mapped in 2 km² blocks for conservation purposes, such as monitoring particularly endangered habitat types, predicting success of reintroductions, or estimating more specific carrying capacities. Because certain habitats are deemed more imperilled (i.e. having a higher conservation value) than others, a numerical conservation value of a specific site can be approximated. The aim of phytosociology is to achieve a sufficient empirical model of vegetation using combinations of plant species (or subspecies, i.e. taxa) that characterize discrete vegetation units. Vegetation units as understood by phytosociologists may express largely abstract vegetation concepts (e.g. the set of all hard-leaved evergreen forests of the western Mediterranean area) or actual readily recognizable vegetation types (e.g. cork-oak oceanic forests on Pleistocene dunes with dense canopy in the Iberian Peninsula). Such conceptual units are called syntaxa (singular "syntaxon") and can be set in a hierarchy system called "synsystem" or syntaxonomic system. Creating new syntaxa or adjusting the synsystem is called syntaxonomy. Before the rules were agreed upon, a number of slightly different systems of classification existed. These were known as "schools" or "traditions", and there were two main systems: the older Scandinavian school and the Zürich-Montpellier school, [ 20 ] also sometimes called the Braun-Blanquet approach. [ 21 ] The first step in phytosociology is gathering data. This is done with what is known as a relevé, a plot in which all the species are identified and their abundance, both vertically and in area, is calculated. Other data are also recorded for a relevé: the geographic location, environmental factors and vegetation structure. Boolean operators and (formerly) tables are used to sort the data. As the calculations needed are difficult and tedious to do manually, modern ecologists feed the relevé data into software programs that use algorithms to crunch the numbers. [ 21 ] The basic unit of syntaxonomy, the organisation and nomenclature of phytosociological relationships, is the "association", defined by its characteristic combination of plant taxa. Sometimes other habitat features such as the management by humans (mowing regime, for example), physiognomy and/or the stage in ecological succession may also be considered. Such an association is usually viewed as a discrete phytocoenose. Similar and neighbouring associations can be grouped in larger ecological conceptual units, with a group of plant associations called an "alliance". Similar alliances may be grouped in "orders" and orders in vegetation "classes". The setting of syntaxa in such a hierarchy makes up the syntaxonomical system.
The modern system was defined most importantly by Charles Flahault, with the work of his student Josias Braun-Blanquet being what is generally considered the final version of syntaxonomical nomenclature. Braun-Blanquet further refined and standardised the work of Flahault and many others when he worked on the phytocoenosis of the southern Cévennes. He established the modern system of classifying vegetation. [ 22 ] Braun-Blanquet's method uses the scientific name of its most characteristic species as its namesake, changing the ending of the generic epithet to "-etum" and treating the specific epithet as an adjective. Thus, a particular type of mesotrophic grassland widespread in western Europe and dominated only by the grass Arrhenatherum elatius becomes "Arrhenatheretum elatioris Br.-Bl.". To distinguish between similar plant communities dominated by the same species, other important species are included in the name, but the name is otherwise formed according to the same rules. Another type of mesotrophic pasture dominated by black knapweed (Centaurea nigra) and the grass Cynosurus cristatus, which is also widespread in western Europe, is consequently named Centaureo-Cynosuretum cristati Br.-Bl. & Tx. If the second species is characteristic but notably less dominant than the first one, its genus name may be used as the adjective, [ 23 ] for example in Pterocarpetum rhizophorosus, a type of tropical scrubland near water which has abundant Pterocarpus officinalis and significant (though not overwhelmingly prominent) red mangrove (Rhizophora mangle). Today an International Code of Phytosociological Nomenclature [ 24 ] [ 25 ] exists, in which the rules for naming syntaxa are given. Its use has increased among botanists. [ 24 ] In Anglo-American ecology, the association concept is mostly linked to the work of the mid-twentieth-century botanist Henry Gleason, who set it up as an alternative to Frederic Clements's views on the superorganismic framework. [ 26 ] The philosophical parameters of the association concept have also come under study by environmental philosophers, in particular how it values and defends the natural environment. [ 27 ] Modern phytosociologists try to include higher levels of complexity in the perception of vegetation, namely by describing whole successional units (vegetation series) or, in general, vegetation complexes. Other developments include the use of multivariate statistics for the definition of syntaxa and their interpretation. Phytosociological data contain information collected in relevés (or plots) listing each species' cover-abundance value and the measured environmental variables. These data are conveniently stored in a program like TURBOVEG, [ 28 ] allowing for editing, storage and export to other applications. Data are usually classified and sorted using TWINSPAN [ 29 ] in host programs like JUICE to create realistic species–relevé associations. Further patterns are investigated using clustering and resemblance methods, and ordination techniques available in software packages like CANOCO [ 30 ] or the R package vegan. [ 31 ]
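The classification step just described can be sketched in general-purpose code. The snippet below is only an illustrative analogue of what dedicated tools such as TWINSPAN, JUICE, CANOCO or vegan do: it groups a handful of hypothetical relevés by similarity of their species composition using an off-the-shelf hierarchical clustering routine from SciPy. It is not the TWINSPAN algorithm itself, and all data values are invented.

    # Illustrative only: group relevés by similarity of species composition.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    releve_ids = ["R1", "R2", "R3", "R4"]

    # Rows = relevés; columns = Arrhenatherum elatius, Centaurea nigra,
    # Cynosurus cristatus; values = cover-abundance scores (made up).
    cover = np.array([
        [4, 0, 1],
        [5, 0, 0],
        [0, 3, 4],
        [1, 2, 4],
    ])

    # Bray-Curtis dissimilarity is a common choice for community composition data
    distances = pdist(cover, metric="braycurtis")
    tree = linkage(distances, method="average")
    groups = fcluster(tree, t=2, criterion="maxclust")

    for rid, group in zip(releve_ids, groups):
        print(rid, "-> provisional vegetation unit", group)

In a real workflow such groups would only be a starting point; assigning them to named associations, alliances, orders and classes still requires expert comparison against the published synsystem.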
https://en.wikipedia.org/wiki/Phytosociology
Phytotechnology (from Ancient Greek φυτο (phyto) 'plant' and τεχνολογία (technología); from τέχνη (téchnē) 'art, skill, craft' and -λογία (-logía) 'study of') implements solutions to scientific and engineering problems in the form of plants. It is distinct from ecotechnology and biotechnology, as these fields encompass the use and study of ecosystems and living beings, respectively. Current study of this field has mostly been directed into contaminant removal (phytoremediation), storage (phytosequestration) and accumulation (see hyperaccumulators). Plant-based technologies have become alternatives to traditional cleanup procedures because of their low capital costs, high success rates, low maintenance requirements, end-use value, and aesthetic nature. Phytotechnology is the application of plants to engineering and science problems. Phytotechnology uses ecosystem services to provide for a specifically engineered solution to a problem. Ecosystem services, broadly defined, fall into four categories: provisioning (i.e. production of food and water), regulating (i.e. the control of climate and disease), supporting (i.e. nutrient cycles and crop pollination), and cultural (i.e. spiritual and recreational benefits). Often only one of these ecosystem services is maximized in the design of the space. For instance, a constructed wetland may attempt to maximize the cooling properties of the system to treat water from a wastewater treatment facility before introduction to a river. The designed benefit is a reduction of water temperature for the river system, while the constructed wetland itself provides habitat and food for wildlife as well as walking trails for recreation. Most phytotechnology has been focused on the abilities of plants to remove pollutants from the environment. Other technologies such as green roofs, green walls and bioswales are generally considered phytotechnology. Taking a broad view, even parks and landscaping could be viewed as phytotechnology. However, there is very little consensus over a definition of phytotechnology even within the field. The Phytotechnology Technical and Regulatory Guidance and Decision Trees, Revised defines phytotechnology as: "Phytotechnologies are a set of technologies using plants to remediate or contain contaminants in soil, groundwater, surface water, or sediments." The United Nations Environment Programme defines phytotechnology as "the application of science and engineering to study problems and provide solutions involving plants." A third definition, from the Department of Environmental Engineering Indonesia, gives it as "a technology which is based on the application of plants as solar driven and living technology for improving environmental sanitation and conservation problems." In phytotechnology the naturally existing properties of plants are used to accomplish defined outcomes with ecosystem services in a designed environment. The phytotechnologic system uses these properties, broadly the degradation/use of chemicals in the environment and the transport and storage of water, to change the output of the system. These mechanisms have evolved since the emergence of angiosperms well over 100 million years ago and have become quite effective. The diversity of plants also gives versatility to the phytotechnologic system: plants from the native environment are capable of handling many applications, while non-natives can be used for more specific projects (such as hyperaccumulators for heavy metal removal). Ancillary benefits are a factor.
Community use, education use, tax credits, habitat creation, increased sustainability and aesthetics are all benefits of phytotechnology. In many cases the cost of the system is also lower than that of traditional remediation technologies, since it avoids pumping systems, electricity costs, infrastructure and other expenses. Even if the initial investment is higher in some cases (notably green roofs), the costs over the lifetime of the project will be less. Plants will not tolerate certain conditions. Too much pollution, water, salt or other variables can kill the plants in the system. Water solubility of the pollutants affects the system. Plants also have mechanisms to halt the uptake of substances and may not remove a contaminant completely in an acceptable time frame. The length of time in which the project must be completed is another limiting factor. Many phytotechnologies take at least two years or longer to reach maturity, and some could be designed as legacy projects, with lifespans which may be 100 years or more. In more temperate climates the systems may become inactive or much less active in the winter months, and they may not be usable at all in more arctic environments. There are many physiological properties of plants which can be used in phytotechnology. The mechanisms work synergistically to achieve the goals set by a project. Phytosequestration is the ability of plants to sequester certain contaminants in the root zone. This is accomplished through several of the plant's physiological mechanisms. Phytochemicals exuded into the root zone can immobilize or precipitate the target contaminant. The transport proteins associated with the root also can irreversibly bind and stabilize target contaminants. Contaminants can also be taken up by the root and sequestered in the vacuoles in the root system. Phytohydraulics is the ability of plants to capture, transport and transpire water from the environment. This action in turn contains pollutants and controls the hydrology of the environment. This mechanism does not degrade the contaminant. Rhizodegradation is the enhancement of microbial degradation of contaminants in the rhizosphere. The presence of a contaminant in the soil will naturally provide an environment for bacteria and fungi which can use the contaminant as a source of energy. The root systems of plants, in most cases, will form a symbiotic relationship with the organisms in the soil. The oxygen and water transported by the roots allow for greater growth of beneficial soil microorganisms. This allows for greater breakdown of the contaminant and quicker remediation. This is the primary means through which organic contaminants can be remediated. Rhizofiltration is the adsorption onto plant roots, or absorption into plant roots, of contaminants that are in solution surrounding the root zone. Phytoextraction is the ability to take contaminants into the plant. The plant material is then removed and safely stored or destroyed. Phytovolatilization is the ability to take up contaminants in the transpiration stream and then transpire volatile contaminants; the contaminant is remediated by removal through the plant. Phytodegradation is the ability of plants to take up and degrade the contaminants. Contaminants are degraded through internal enzymatic activity and photosynthetic oxidation/reduction.
https://en.wikipedia.org/wiki/Phytotechnology
Phytotelma (plural phytotelmata ) is a small water-filled cavity in a terrestrial plant. The water accumulated within these plants may serve as the habitat for associated fauna and flora . A rich literature in German summarised by Thienemann (1954) [ 1 ] developed many aspects of phytotelm biology. Reviews of the subject by Kitching (1971) and Maguire (1971) [ 2 ] [ 3 ] introduced the concept of phytotelmata to English-speaking readers. A multi-authored book edited by Frank and Lounibos (1983) [ 4 ] dealt in 11 chapters with classification of phytotelmata, and with phytotelmata provided by bamboo internodes, banana leaf axils, bromeliad leaf axils , Nepenthes pitchers, Sarracenia pitchers, tree holes , and Heliconia flower bracts and leaf rolls. [ 5 ] [ 6 ] A classification of phytotelmata by Kitching (2000) [ 7 ] recognizes five principal types: bromeliad tanks, certain carnivorous plants such as pitcher plants , water-filled tree hollows , bamboo internodes, and axil water (collected at the base of leaves, petals or bracts ); it concentrated on food webs. A review by Greeney (2001) [ 8 ] identified seven forms: tree holes, leaf axils , flowers, modified leaves, fallen vegetative parts (e.g. leaves or bracts), fallen fruit husks , and stem rots . The word "phytotelma" derives from the ancient Greek roots phyto- , meaning 'plant', and telma , meaning 'pond'. Thus, the correct singular is phytotelma . The term was coined by L. Varga in 1928. [ 9 ] The correct pronunciation is "phytotēlma" and "phytotēlmata" because of the Greek origin (the stressed vowels are here written as ē ). Often the faunae associated with phytotelmata are unique: Different groups of microcrustaceans occur in phytotelmata, including ostracods ( Elpidium spp . Metacypris bromeliarum ), harpacticoid copepods ( Bryocamptus spp , Moraria arboricola, Attheyella spp. [ 10 ] ) and cyclopoid copepods ( Bryocyclops spp ., Tropocyclops jamaicensis [ 11 ] ). [ 12 ] In tropical and subtropical rainforest habitats, many species of frogs specialize on phytotelma as a readily available breeding ground, such as some microhylids [ 13 ] (in pitcher plants), poison dart frogs [ 14 ] and some tree frogs (in bromeliads). [ 15 ] [ 16 ] Many insects use them for breeding and foraging, for instance odonates , water bugs , beetles and dipterans . [ 17 ] [ 18 ] Some species also are of great practical significance; for example, immature stages of some mosquitoes , such as some Anopheles and Aedes species that are important disease vectors, develop in phytotelmata. [ 4 ] As these are such small systems, there may be great risk of nitrogenous waste eventually putrefying phytotelmata, killing their inhabitants. Potentially relevant is that tadpoles of the species Kurixalus eiffingeri have been found to avoid defecation until after metamorphosis , when they have vacated phytotelmata. This may evidence selection for social sanitation, and the discoverers surmise this may be a selective pressure for other denizens of phytotelmata as well. [ 19 ]
https://en.wikipedia.org/wiki/Phytotelma
Phytotope refers to the total habitat available for colonization by plants within a specific biotope, such as a forest, meadow, wetland, or urban green space. The term is primarily used in ecology to describe the portion of an environment that can support plant life, taking into account factors such as soil type, climate, light availability, and water conditions. The word originates from the Greek roots phyto- meaning "plant" and -topos meaning "place." In ecological studies, the phytotope is often used alongside related concepts such as zootope (habitat available to animals) and microhabitat to describe the diversity of life-supporting conditions in a particular environment. Understanding the phytotope of a region is essential for evaluating its plant biodiversity and for conservation planning. It defines the range of ecological niches available to plants and is influenced by both abiotic factors (e.g., pH, salinity, temperature, disturbance) and biotic interactions (e.g., competition, herbivory, symbiosis). Different phytotopes within the same biotope may support distinct plant communities, contributing to habitat heterogeneity and species richness. For example, within a forest biotope, the canopy, understory, forest floor, and clearings each represent distinct phytotopes with different environmental conditions and species compositions. Applications include ecological restoration (identifying the phytotopes of a degraded area can help in selecting appropriate native plant species for revegetation), landscape ecology (phytotopes are used to analyze vegetation patterns across fragmented habitats or urban ecosystems), and biodiversity assessments (phytotopes are essential units in mapping plant distributions and evaluating conservation priorities).
https://en.wikipedia.org/wiki/Phytotope
Phytotoxicity describes any adverse effects on plant growth, physiology , or metabolism caused by a chemical substance , such as high levels of fertilizers, herbicides, heavy metals, or nanoparticles . [ 1 ] General phytotoxic effects include altered plant metabolism, growth inhibition, or plant death. [ 2 ] Changes to plant metabolism and growth are the result of disrupted physiological functioning, including inhibition of photosynthesis , water and nutrient uptake, cell division , or seed germination . [ 1 ] High concentrations of mineral salts in solution within the plant growing medium can result in phytotoxicity, commonly caused by excessive application of fertilizers . [ 3 ] For example, urea is used in agriculture as a nitrogenous fertilizer. However, if too much is applied, phytotoxic effects can result from urea toxicity directly or ammonia production from hydrolysis of urea. [ 3 ] Organic fertilizers, such as compost , also have the potential to be phytotoxic if not sufficiently humified , as intermediate products of this process are harmful to plant growth. [ 4 ] Herbicides are designed and used to control unwanted plants such as agricultural weeds. However, the use of herbicides can cause phytotoxic effects on non-targeted plants through wind-blown spray drift or from the use of herbicide-contaminated material (such as straw or manure) being applied to the soil. [ 5 ] Herbicides can also cause phytotoxicity in crops if applied incorrectly, in the wrong stage of crop growth, or in excess. [ 1 ] The phytotoxic effects of herbicides are an important subject of study in the field of ecotoxicology . Heavy metals are high-density metallic compounds which are poisonous to plants at low concentrations, although toxicity depends on plant species, specific metal and its chemical form, and soil properties. [ 2 ] The most relevant heavy metals contributing to phytotoxicity in crops are silver (Ag), arsenic (As), cadmium (Cd), cobalt (Co), chromium (Cr), iron (Fe), nickel (Ni), lead (Pb), and zinc (Zn). Of these, Co, Cu, Fe, Ni, and Zn are trace elements required in small amounts for enzyme and redox reactions essential in plant development. [ 2 ] However, past a certain threshold they become toxic. The other heavy metals listed are considered toxic at any concentration and can bioaccumulate , posing a health hazard to humans if consumed. [ 6 ] Heavy metal contamination occurs from both natural and anthropogenic sources. The most notable natural source of heavy metals is rock outcroppings, although volcanic eruptions can release large amounts of toxic material. [ 2 ] Significant anthropogenic sources include mining and smelting operations and organic and inorganic fertilizer application. [ 2 ] Nanotechnology is a rapidly growing industry with many applications, including drug delivery , biomedicines , and electronics. [ 7 ] As a result, manufactured nanoparticles, with sizes less than 100 nm, are released into the environment. [ 8 ] Plant uptake and bioaccumulation of these nanoparticles can cause plant growth enhancement or phytotoxic effects, depending on plant species and nanoparticle concentration. [ 8 ]
https://en.wikipedia.org/wiki/Phytotoxicity
A phytotron is an enclosed research greenhouse used for studying interactions between plants and the environment. It was a product of the disciplines of plant physiology and botany . Phytotrons unified and extended earlier piecemeal efforts to claim total control of the whole environment. In both walk-in rooms and smaller reach-in cabinets, phytotrons produced and reproduced whole complex climates of many variables. In the first phytotrons each individual room was held at a constant unique temperature. The Australian phytotron, for example, had rooms maintaining 9°C, 12°C, 16°C, 20°C, 23°C, 26°C, 30°C, 34°C. Because some of the earliest controlled environment experiments showed that plants reacted differently in daytime temperatures and nighttime temperatures, the first experiments to observe the effect(s) of varying the daytime versus the nighttime temperature saw experimenters move their plants from higher to lower temperatures over the course of a daily, or any other variable or constant, routine. [ 1 ] This rendered the variable “temperature” experimentally controllable. Even a brute force approach that tested each successive environmental variable and every variety of plant would serve to pinpoint specific environmental conditions to maximize growth. Expecting that more knowledge would surely come from greater technology, the next generation of phytotrons expanded in technological reach, in their ranges of environmental variables, and also in the degree of control over each variable. The phytotron in Stockholm offered a humidity controlled room and a custom built computer, as well as a low temperature room that extended the temperature range down to -25°C for the study of Nordic forests. After that, phytotron technology compressed whole environments into smaller cabinets able to be set to any desired combination of environmental conditions, which are still in use today. The first phytotron was built under the direction of Frits Warmolt Went at the California Institute of Technology in 1949. It was funded by the Earhart Foundation , and was officially known as the Earhart Plant Research Laboratory. It acquired its more distinctive nickname evidently from a joking conversation between Caltech biologists James Bonner and Sam Wildman. Recalling the origin sometime in 1980s Bonner noted that: "The Earhart Plant Research Laboratory [was] called an environmentally controlled greenhouse but my first postdoctoral fellow [Sam Wildman] and I, sitting around about 1950, having coffee, decided it deserved a better or more euphonious name [...]. We decided to call it a phytotron—phytos from the Greek word for plant, and tron as in cyclotron, a big complicated machine. Went was originally enormously annoyed by this word. But Dr. Millikan took it right up saying, ‘this edifice financed by Mr. Earhart, is going to do for plant biology what the cyclotron has done for physics,’ and he christened it a phytotron." [ 2 ] Phytotrons spread around the world between 1945 and the present day to Australia, France, Hungary, the Soviet Union, England, and the United States. Moreover, they have spurred variants such as the Climatron at the Missouri Botanical Garden , the Biotron at the University of Wisconsin-Madison , the Ecotron at Imperial College London and the Brisatron at the Savannah River Ecology Laboratory .
https://en.wikipedia.org/wiki/Phytotron
In chemistry, π-effects or π-interactions are a type of non-covalent interaction that involves π systems. Just like in an electrostatic interaction where a region of negative charge interacts with a positive charge, the electron-rich π system can interact with a metal (cationic or neutral), an anion, another molecule and even another π system. [ 1 ] Non-covalent interactions involving π systems are pivotal to biological events such as protein-ligand recognition. [ 2 ] The most common types of π-interaction accordingly involve metals, cations, anions, C–H bonds and other π systems. [ 5 ] [ 6 ] [ 7 ] Anion and π-aromatic systems (typically electron-deficient) create an interaction that is associated with the repulsive forces of the structures. These repulsive forces involve electrostatic and anion-induced polarized interactions. [ 8 ] [ 9 ] This force allows the systems to be used as receptors and channels in supramolecular chemistry for applications in the medical (synthetic membranes, ion channels) and environmental fields (e.g. sensing, removal of ions from water). [ 10 ] The first X-ray crystal structure that depicted anion–π interactions was reported in 2004. [ 11 ] In addition to this being depicted in the solid state, there is also evidence that the interaction is present in solution. [ 12 ] π-effects make an important contribution to biological systems since they provide a significant amount of binding enthalpy. Neurotransmitters produce most of their biological effect by binding to the active site of a protein receptor. Cation–π interactions are important for the acetylcholine (ACh) neurotransmitter. [ 13 ] [ 14 ] The structure of acetylcholinesterase includes 14 highly conserved aromatic residues. The trimethylammonium group of ACh binds to the aromatic residue of tryptophan (Trp). The indole site provides a much more intense region of negative electrostatic potential than the benzene and phenol residues of Phe and Tyr. π systems are important building blocks in supramolecular assembly because of their versatile noncovalent interactions with various functional groups. In particular, π–π, CH–π and cation–π interactions are widely used in supramolecular assembly and recognition. The π–π interaction concerns the direct interactions between two π systems; the cation–π interaction arises from the electrostatic interaction of a cation with the face of the π system. Unlike these two interactions, the CH–π interaction arises mainly from charge transfer between the C–H orbital and the π system.
https://en.wikipedia.org/wiki/Pi-interaction
In chemistry, pi stacking (also called π–π stacking) refers to the presumptively attractive, noncovalent pi interactions between the pi bonds of aromatic rings, because of orbital overlap. [ 1 ] According to some authors, direct stacking of aromatic rings (the "sandwich interaction") is electrostatically repulsive. What is more commonly observed is either a staggered stacking (parallel displaced) or pi-teeing (perpendicular T-shaped) interaction, both of which are electrostatically attractive. [ 2 ] [ 3 ] For example, the most commonly observed arrangement between aromatic rings of amino acid residues in proteins is staggered stacking, followed by a perpendicular orientation. Sandwiched orientations are relatively rare. [ 4 ] Pi stacking is repulsive as it places carbon atoms with partial negative charges from one ring on top of other partially negatively charged carbon atoms from the second ring, and hydrogen atoms with partial positive charges on top of other hydrogen atoms that likewise carry partial positive charges. [ 2 ] In staggered stacking, one of the two aromatic rings is offset sideways so that the carbon atoms with partial negative charge in the first ring are placed above hydrogen atoms with partial positive charge in the second ring, making the electrostatic interactions attractive. Likewise, the pi-teeing interaction, in which the two rings are oriented perpendicular to each other, is electrostatically attractive, as it places partially positively charged hydrogen atoms in close proximity to partially negatively charged carbon atoms. An alternative explanation attributes the preference for staggered stacking to the balance between van der Waals interactions (attractive dispersion plus Pauli repulsion). [ 5 ] These staggered stacking and π-teeing interactions between aromatic rings are important in nucleobase stacking within DNA and RNA molecules, protein folding, template-directed synthesis, materials science, and molecular recognition. Despite the wide use of the term pi stacking in the scientific literature, there is no theoretical justification for its use. [ 2 ] The benzene dimer is the prototypical system for the study of pi stacking, and is experimentally bound by 8–12 kJ/mol (2–3 kcal/mol) in the gas phase with a separation of 4.96 Å between the centers of mass for the T-shaped dimer. The small binding energy makes the benzene dimer difficult to study experimentally, and the dimer itself is only stable at low temperatures and is prone to cluster. [ 6 ] Other evidence against pi stacking comes from X-ray crystallography. Perpendicular and offset parallel configurations can be observed in the crystal structures of many simple aromatic compounds. [ 6 ] Similar offset parallel or perpendicular geometries were observed in a survey of high-resolution X-ray protein crystal structures in the Protein Data Bank. [ 7 ] Analysis of the aromatic amino acids phenylalanine, tyrosine, histidine, and tryptophan indicates that dimers of these side chains have many possible stabilizing interactions at distances larger than the average van der Waals radii. [ 4 ] The preferred geometries of the benzene dimer have been modeled at a high level of theory with MP2-R12/A computations and very large counterpoise-corrected aug-cc-pVTZ basis sets. [ 6 ] The two most stable conformations are the parallel displaced and T-shaped, which are essentially isoenergetic. In contrast, the sandwich configuration maximizes overlap of the pi system, which destabilizes the interaction.
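To put the quoted binding energy in context, a quick back-of-the-envelope comparison with thermal energy at room temperature helps explain why the dimer is only stable at low temperatures. The script below is illustrative arithmetic using only the figures quoted above plus the gas constant; it reports no new data.

    # Compare the quoted benzene-dimer binding energy (8-12 kJ/mol) with RT.
    R = 8.314        # gas constant, J/(mol*K)
    T = 298.0        # room temperature, K
    RT_kJ = R * T / 1000.0   # about 2.5 kJ/mol

    for E_kJ in (8.0, 12.0):
        kcal = E_kJ / 4.184
        print(f"{E_kJ:.0f} kJ/mol = {kcal:.1f} kcal/mol = {E_kJ / RT_kJ:.1f} RT at 298 K")
    # Roughly 2-3 kcal/mol, i.e. only about 3-5 RT, so thermal motion readily
    # disrupts the dimer except at low temperature.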
The sandwich configuration represents an energetic saddle point, which is consistent with the relative rarity of this configuration in X-ray crystal data. [ citation needed ] The relative binding energies of these three geometric configurations of the benzene dimer can be explained by a balance of quadrupole/quadrupole and London dispersion forces. While benzene does not have a dipole moment, it has a strong quadrupole moment. [ 8 ] The local C–H dipole means that there is positive charge on the atoms in the ring and a correspondingly negative charge representing an electron cloud above and below the ring. The quadrupole moment is reversed for hexafluorobenzene due to the electronegativity of fluorine. The benzene dimer in the sandwich configuration is stabilized by London dispersion forces but destabilized by repulsive quadrupole/quadrupole interactions. By offsetting one of the benzene rings, the parallel displaced configuration reduces these repulsive interactions and is stabilized. The large polarizability of aromatic rings leads to dispersive interactions as a major contribution to stacking effects. These play a major role in the interactions of nucleobases, e.g. in DNA. [ 9 ] The T-shaped configuration enjoys favorable quadrupole/quadrupole interactions, as the positive quadrupole of one benzene ring interacts with the negative quadrupole of the other. The benzene rings are furthest apart in this configuration, so the favorable quadrupole/quadrupole interactions evidently compensate for diminished dispersion forces. The ability to fine-tune pi stacking interactions would be useful in numerous synthetic efforts. One example would be to increase the binding affinity of a small-molecule inhibitor to an enzyme pocket containing aromatic residues. The effects of heteroatoms [ 7 ] and substituents on pi stacking interactions are difficult to model and a matter of debate. An early model for the role of substituents in pi stacking interactions was proposed by Hunter and Sanders. [ 10 ] They used a simple mathematical model based on sigma and pi atomic charges, relative orientations, and van der Waals interactions to qualitatively determine that electrostatics are dominant in substituent effects. According to their model, electron-withdrawing groups reduce the negative quadrupole of the aromatic ring and thereby favor parallel displaced and sandwich conformations. In contrast, electron-donating groups increase the negative quadrupole, which may increase the interaction strength in a T-shaped configuration with the proper geometry. Based on this model, the authors proposed a set of rules governing pi stacking interactions which prevailed until more sophisticated computations were applied. [ citation needed ] Experimental evidence for the Hunter–Sanders model was provided by Siegel et al. using a series of substituted syn- and anti-1,8-di-o-tolylnaphthalenes. [ 11 ] In these compounds the aryl groups "face off" in a stacked geometry due to steric crowding, and the barrier to epimerization was measured by nuclear magnetic resonance spectroscopy. The authors reported that aryl rings with electron-withdrawing substituents had higher barriers to rotation. The interpretation of this result was that these groups reduced the electron density of the aromatic rings, allowing more favorable sandwich pi stacking interactions and thus a higher barrier. In other words, the electron-withdrawing groups resulted in "less unfavorable" electrostatic interactions in the ground state.
[ citation needed ] Hunter et al. applied a more sophisticated chemical double mutant cycle with a hydrogen-bonded "zipper" to the issue of substituent effects in pi stacking interactions. [ 12 ] This technique has been used to study a multitude of noncovalent interactions. The single mutation, in this case changing a substituent on an aromatic ring, results in secondary effects such as a change in hydrogen bond strength. The double mutation quantifies these secondary interactions, such that even a weak interaction of interest can be dissected from the array. Their results indicate that more electron-withdrawing substituents have less repulsive pi stacking interactions. Correspondingly, this trend was exactly inverted for interactions with pentafluorophenylbenzene, which has a quadrupole moment equal in magnitude but opposite in sign as that of benzene. [ 8 ] The findings provide direct evidence for the Hunter–Sanders model. However, the stacking interactions measured using the double mutant method were surprisingly small, and the authors note that the values may not be transferable to other systems. In a follow-up study, Hunter et al. verified to a first approximation that the interaction energies of the interacting aromatic rings in a double mutant cycle are dominated by electrostatic effects. [ 13 ] However, the authors note that direct interactions with the ring substituents, discussed below, also make important contributions. Indeed, the interplay of these two factors may result in the complicated substituent- and geometry-dependent behavior of pi stacking interactions. The Hunter–Sanders model has been criticized by numerous research groups offering contradictory experimental and computational evidence of pi stacking interactions that are not governed primarily by electrostatic effects. [ 14 ] The clearest experimental evidence against electrostatic substituent effects was reported by Rashkin and Waters. [ 15 ] They used meta- and para-substituted N-benzyl-2-(2-fluorophenyl)-pyridinium bromides, which stack in a parallel displaced conformation, as a model system for pi stacking interactions. In their system, a methylene linker prohibits favorable T-shaped interactions. As in previous models, the relative strength of pi stacking interactions was measured by NMR as the rate of rotation about the biaryl bond, as pi stacking interactions are disrupted in the transition state . Para-substituted rings had small rotational barriers which increased with increasingly electron-withdrawing groups, consistent with prior findings. However, meta-substituted rings had much larger barriers of rotation despite having nearly identical electron densities in the aromatic ring. The authors explain this discrepancy as direct interaction of the edge of hydrogen atoms of one ring with the electronegative substituents on the other ring. This claim is supported by chemical shift data of the proton in question. [ citation needed ] Much of the detailed analyses of the relative contributions of factors in pi stacking have been borne out by computation. Sherill and Sinnokrot reported a surprising finding using high-level theory that all substituted benzene dimers have more favorable binding interactions than a benzene dimer in the sandwich configuration. [ 16 ] Later computational work from the Sherill group revealed that the substituent effects for the sandwich configuration are additive, which points to a strong influence of dispersion forces and direct interactions between substituents. 
[ 17 ] It was noted that interactions between substituted benzenes in the T-shaped configuration were more complex. Finally, Sherill and Sinnokrot argue in their review article that any semblance of a trend based on electron-donating or -withdrawing substituents can be explained by exchange-repulsion and dispersion terms. [ 18 ] Houk and Wheeler also provide compelling computational evidence for the importance of direct interaction in pi stacking. [ 19 ] In their analysis of substituted benzene dimers in a sandwich conformation, they were able to recapitulate their findings using an exceedingly simple model where the substituted benzene, Ph–X, was replaced by H–X. Remarkably, this crude model resulted in the same trend in relative interaction energies, and correlated strongly with the values calculated for Ph–X. This finding suggests that substituent effects in the benzene dimer are due to direct interaction of the substituent with the aromatic ring, and that the pi system of the substituted benzene is not involved. This latter point is expanded upon below. In summary, it would seem that the relative contributions of electrostatics, dispersion, and direct interactions to the substituent effects seen in pi stacking interactions are highly dependent on geometry and experimental design. The lack of consensus on the matter may simply reflect the complexity of the issue. The conventional understanding of pi stacking involves quadrupole interactions between delocalized electrons in p-orbitals. In other words, aromaticity should be required for this interaction to occur. However, several groups have provided contrary evidence, calling into question whether pi stacking is a unique phenomenon or whether it extends to other neutral, closed-shell molecules. In an experiment not dissimilar from others mentioned above, Paliwal and coauthors constructed a molecular torsion balance from an aryl ester with two conformational states. [ 20 ] The folded state had a well-defined pi stacking interaction with a T-shaped geometry, whereas the unfolded state had no aryl–aryl interactions. The NMR chemical shifts of the two conformations were distinct and could be used to determine the ratio of the two states, which was interpreted as a measure of intramolecular forces. The authors report that a preference for the folded state is not unique to aryl esters. For example, the cyclohexyl ester favored the folded state more so than the phenyl ester, and the tert-butyl ester favored the folded state by a preference greater than that shown by any aryl ester. This suggests that aromaticity is not a strict requirement for favorable interaction with an aromatic ring. Other evidence for non-aromatic pi stacking interactions comes from critical studies in theoretical chemistry that explain the underlying mechanisms of the empirical observations. Grimme reported that the interaction energies of smaller dimers consisting of one or two rings are very similar for both aromatic and saturated compounds. [ 21 ] This finding is of particular relevance to biology, and suggests that the contribution of pi systems to phenomena such as stacked nucleobases may be overestimated. However, it was shown that an increased stabilizing interaction is seen for large aromatic dimers. As previously noted, this interaction energy is highly dependent on geometry. Indeed, large aromatic dimers are only stabilized relative to their saturated counterparts in a sandwich geometry, while their energies are similar in a T-shaped interaction.
A more direct approach to modeling the role of aromaticity was taken by Bloom and Wheeler. [ 22 ] The authors compared the interactions between benzene and either 2-methylnaphthalene or its non-aromatic isomer, 2-methylene-2,3-dihydronaphthalene. The latter compound provides a means of conserving the number of pi electrons while removing the effects of delocalization. Surprisingly, the interaction energies with benzene are higher for the non-aromatic compound, suggesting that pi-bond localization is favorable in pi stacking interactions. The authors also considered a homodesmotic dissection of benzene into ethylene and 1,3-butadiene and compared these interactions in a sandwich with benzene. Their calculation indicates that the interaction energy between benzene and homodesmotic benzene is higher than that of a benzene dimer in both sandwich and parallel displaced conformations, again highlighting the favorability of localized pi-bond interactions. These results strongly suggest that aromaticity is not required for pi stacking interactions in this model. Even in light of this evidence, Grimme concludes that pi stacking does indeed exist. [ 21 ] However, he cautions that smaller rings, particularly those in T-shaped conformations, do not behave significantly differently from their saturated counterparts, and that the term should be reserved for larger rings in stacked conformations, which do seem to exhibit a cooperative pi electron effect. One demonstration of stacking is found in the buckycatcher . [ 23 ] This molecular tweezer is based on two concave buckybowls with a perfect fit for one convex fullerene molecule. Complexation takes place simply by evaporating a toluene solution containing both compounds. In solution an association constant of 8600 M −1 is measured based on changes in NMR chemical shifts . [ citation needed ] Pi stacking is prevalent in protein crystal structures, and also contributes to the interactions between small molecules and proteins. As a result, pi–pi and cation–pi interactions are important factors in rational drug design. [ 24 ] One example is the FDA-approved acetylcholinesterase (AChE) inhibitor tacrine, which is used in the treatment of Alzheimer's disease . Tacrine is proposed to have a pi stacking interaction with the indole ring of Trp84, and this interaction has been exploited in the rational design of novel AChE inhibitors. [ 25 ] π systems are building blocks in supramolecular assembly because they often engage in noncovalent interactions. An example of π–π interactions in supramolecular assembly is the synthesis of catenane . The major challenge for the synthesis of catenane is to interlock molecules in a controlled fashion. Stoddart and co-workers developed a series of systems utilizing the strong π–π interactions between electron-rich benzene derivatives and electron-poor pyridinium rings. [ 26 ] [2]Catenane was synthesized by reacting bis(pyridinium) ( A ), bisparaphenylene-34-crown-10 ( B ), and 1,4-bis(bromomethyl)benzene ( C ) (Fig. 2). The π–π interaction between A and B directed the formation of an interlocked template intermediate that was further cyclized by a substitution reaction with compound C to generate the [2]catenane product. A combination of tetracyanoquinodimethane (TCNQ) and tetrathiafulvalene (TTF) forms a strong charge-transfer complex referred to as TTF-TCNQ . [ 28 ] The solid shows almost metallic electrical conductance. 
In a TTF-TCNQ crystal, TTF and TCNQ molecules are arranged independently in separate parallel-aligned stacks, and an electron transfer occurs from donor (TTF) to acceptor (TCNQ) stacks. [ 29 ] Graphite consists of stacked sheets of covalently bonded carbon. [ 30 ] [ 31 ] The individual layers are called graphene . In each layer, each carbon atom is bonded to three other atoms forming a continuous layer of sp 2 bonded carbon hexagons, like a honeycomb lattice with a bond length of 0.142 nm, and the distance between planes is 0.335 nm. [ 32 ]
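The double mutant cycle arithmetic that underlies the Hunter measurements discussed earlier in this section can be summarized generically as follows; this is a schematic sketch with hypothetical labels, not the specific cycle or compounds used in that work. If AB denotes the complex with both interacting groups present, A'B and AB' the complexes with one group mutated, and A'B' the complex with both groups mutated, double differencing cancels the secondary effects introduced by each single mutation and isolates the free energy of the single A···B contact:

\Delta\Delta G_{A \cdots B} = \left( \Delta G_{AB} - \Delta G_{A'B} \right) - \left( \Delta G_{AB'} - \Delta G_{A'B'} \right)

Because every secondary interaction perturbed by a single mutation appears once in each bracket, it cancels in the difference, which is what allows a weak aromatic stacking contact to be dissected from the much larger hydrogen-bonded framework of the zipper complexes.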
https://en.wikipedia.org/wiki/Pi-stacking
Pi (stylized as π ) [ a ] is a 1998 American conceptual psychological thriller film written and directed by Darren Aronofsky (in his feature directorial debut ). Pi was filmed on high-contrast black-and-white reversal film . The title refers to the mathematical constant pi . [ 5 ] [ 6 ] The story focuses on a mathematician with an obsession to find underlying complete order in the real world and contrasting two seemingly irreconcilable entities: the imperfect irrationality of humanity and the rigor and regularity of mathematics, specifically number theory . [ 7 ] The film explores themes of religion, mysticism, and the relationship of the universe to mathematics. The film received positive reviews and earned Aronofsky the Directing Award at the 1998 Sundance Film Festival , [ 8 ] the Independent Spirit Award for Best First Screenplay and the Gotham Open Palm Award . Unemployed number theorist Max Cohen, who lives in a drab apartment in Chinatown, Manhattan , believes everything in nature can be understood through numbers. He suffers from cluster headaches , extreme paranoia , hallucinations , and schizoid personality disorder . His only social interactions are with his mathematics mentor, Sol Robeson (now disabled from a stroke), and those who live in his building: Jenna, a little girl fascinated by his ability to perform complex calculations, and Devi, a young woman living next door who sometimes speaks with him. Max tries to program his computer, named Euclid , to make stock predictions. Euclid malfunctions, printing out a seemingly random 216-digit number, as well as a single stock pick at one-tenth its current value, then crashes. Disgusted, Max throws away the printout . The next morning, he learns that Euclid's pick was accurate but cannot find the printout. When Max mentions the number, Sol becomes unnerved and asks if it contained 216 digits, revealing that he came across the same number years ago. He urges Max to take a break from his work. Max meets Lenny Meyer, a Hasidic Jew who does mathematical research on the Torah . Lenny demonstrates some simple Gematria , the correspondence of the Hebrew alphabet to numbers, and explains that some people believe the Torah is a string of numbers forming a code sent by God. Intrigued, Max notes some of the concepts parallel other mathematical concepts, such as the Fibonacci sequence . Agents of a Wall Street firm approach Max. One of them, Marcy Dawson, offers him a classified computer chip called "Ming Mecca" in exchange for the results of his work. Using the chip, Max has Euclid analyze mathematical patterns in the Torah. Once again, Euclid displays the 216-digit number before crashing. As Max writes down the number, he realizes that he knows the pattern, undergoes an epiphany , and loses consciousness. After waking up, Max appears to become clairvoyant and visualizes the stock market patterns he sought. His headaches intensify, and he discovers a vein-like bulge protruding from his right temple. Max has a falling out with Sol after Sol urges him to quit his work. Dawson and her agents grab Max on the street and try to force him to explain the number, having found the printout Max threw away. Attempting to use the number to manipulate the stock market , the firm instead caused the market to crash . Driving by, Lenny rescues Max, but takes him to his companions at a nearby synagogue. 
They ask Max to give them the 216-digit number, believing it was meant for them to bring about the messianic age , as the number represents the unspeakable name of God . Max refuses, insisting the number has been revealed to him alone. Max flees and visits Sol, only to learn from Sol's daughter, Jenny, that Sol has died from another stroke. He finds a piece of paper with the number in Sol's study. At his own apartment, Max experiences another headache but does not take his painkillers. Believing the number and the headaches are linked, Max tries to concentrate on the number through his pain. After passing out, Max goes to the bathroom where he stares at himself in the mirror before lighting a match and burning the piece of paper with the number. Max then takes a power drill to his own head, trepanning himself in an effort to find relief. Sometime later, Jenna approaches Max in a park and asks him to do several calculations, including 748 ÷ 238 (whose quotient approximates pi ). Max smiles and says that he does not know the answer, seemingly at peace. Before production, to finance the complex visual sets and shots for the film, producer Eric Watson and director Darren Aronofsky asked every friend, relative, or acquaintance for donations of $100 each. Eventually, they accumulated about $60,000 for their production budget. [ 9 ] The film was shot on an Aaton XTR Prod camera, which shoots 16mm film; a Bolex H16 camera, which the crew broke and had to fix, was used for most of the handheld shots. [ 3 ] A Canon 16mm camera package was also used. [ 10 ] [ 11 ] Lenses were from Angenieux . The film was shot on black and white reversal film stock; Aronofsky aimed for high-contrast shots to give Pi a more "technically raw and spontaneous" look. [ 12 ] Pi was produced with a low budget, with the crew being paid $200 a day and actors being paid $75 a day. [ 3 ] To save money, various cost-cutting techniques were used, including using only the actors' clothes and thrift store purchases as costumes, and shooting all of the subway and outdoor city scenes illegally to get around paying for expensive permits. [ 13 ] [ better source needed ] To get vehicles for the film, Aronofsky says he "probably" rented a station wagon belonging to the film's consulting producer, and claims to have hailed a cab and paid the driver $100 to keep his car there for a scene that was later cut, rather than renting out an additional vehicle. [ 3 ] For the main set, which was Max Cohen's apartment, Scott Vogel secured a section of his father's warehouse in Bushwick, Brooklyn . A back room was cleared out and used as a sound stage, [ 11 ] where Max's Euclid computer was built and the majority of the film was shot. Finishing the film was more costly than shooting it. The post-production budget was $68,183, most of which went into post-production sound, film and lab work, and film editing. Throughout the filming, 53,000 feet of 16mm film was shot, amounting to about 23 hours of footage over 28 days. [ 11 ] The film was sent to be developed at Bono Labs in Arlington, Virginia , which was the only lab capable of developing black and white reversal stock. Consequently, the crew only received dailies a week after sending the footage in. Raw stock cost $5,414 and developing it cost $18,000. During post-production, most of the budget went toward the negative cut, which was matched back from an Avid cut list. Clint Mansell created the score on his own equipment, for which he was paid a deferred fee. 
The production cost was $60,927, with post-production costing an additional $68,183. Along with other expenses, including insurance, the film cost $134,815. [ 3 ] Pi features multiple references to mathematics and mathematical theories. [ b ] For instance, Max finds the golden spiral occurring everywhere, including the stock market. Max's belief that diverse systems embodying nonlinear dynamics share a unifying pattern has some similarity to results in chaos theory , which provides a means for describing certain phenomena of nonlinear systems , which might be thought of as patterns. During the climactic final scene, a pattern resembling a bifurcation diagram is apparent on Max's shattered mirror. In the film, Max periodically plays Go with his mentor, Sol. [ 14 ] This game has historically stimulated the study of mathematics [ 15 ] and features a simple set of rules that results in a complex game strategy . Each character uses the game as a model for their view of the universe: Sol says that the game is a microcosm of an extremely complex and chaotic world , while Max asserts its complexity gradually converges toward patterns that can be found. [ 14 ] [ c ] Gullette and Margolis spent many hours learning the game at the Brooklyn Go Club and received help from Dan Weiner, one of three Go consultants credited to the film. [ 14 ] Barbara Calhoun and Michael Solomon also served as game consultants. Early in the film, when Lenny speaks with Max about his work, he asks if Max is familiar with kabbalah . The numerological interpretation of the Torah and the 216-letter name of God, known as the Shem HaMephorash , are important concepts in traditional Jewish mysticism. Another religious reference comes while Max is at the market looking for that day's newspaper, when a recitation citing Quran 2:140 can be heard in the background: "Or do you say that Abraham and Ishmael and Isaac and Jacob and the Descendants were Jews or Christians? Say, 'Are you more knowing or is Allah?' And who is more unjust than one who conceals a testimony he has from Allah? And Allah is not unaware of what you do." Pi launched the film scoring career of Clint Mansell . The soundtrack was released on July 21, 1998, via Thrive Records . AllMusic rated it 4.5 stars out of 5. [ 16 ] A music video for "πr²", using an alternative mix of the title track, is available as a special feature on DVD, consisting of footage from the film intercut with stock color reels of ants, referencing one of the film's visual motifs. More than a decade after its theatrical release, the rights to the film reverted from Lionsgate (owner of Summit Entertainment and the Artisan library) back to Aronofsky, who sold it to A24 in 2023. The 8K and Atmos restoration version was released on March 14 by A24 in the IMAX format, to commemorate its 25th anniversary. [ 17 ] Produced on a budget of $134,815, [ 3 ] the film was financially successful at the box office, grossing $3,221,152 in the United States [ 4 ] despite only a limited theatrical release . It sold steadily on DVD , and was the first film ever to be sold as a download on the Internet . [ 18 ] Through the website Sightsound.com , the film was available for streaming in a pay-per-view window. [ 19 ] [ 20 ] Pi was received well by critics upon release. On Rotten Tomatoes , the film has an 88% approval rating based on 59 reviews with an average rating of 7.4/10. 
The website's critical consensus reads: "Dramatically gripping and frighteningly smart, this Lynchian thriller does wonders with its unlikely subject and shoestring budget." [ 21 ] On Metacritic, the film has a rating of 72 out of 100 based on 23 reviews, indicating "generally favorable reviews". [ 22 ] Roger Ebert gave the film three and a half out of four stars, writing: " Pi is a thriller. I am not very thrilled these days by whether the bad guys will get shot or the chase scene will end one way instead of another. You have to make a movie like that pretty skillfully before I care. But I am thrilled when a man risks his mind in the pursuit of a dangerous obsession." [ 23 ] James Berardinelli gave the film three out of four stars, writing: " Pi transports us to a world that is like yet unlike our own, and, in its mysterious familiarity, is eerie, intense, and compelling. Reality is a fragile commodity, but, because the script is well-written and the central character is strongly developed, it's not hard to suspend disbelief....It probably deserves 3.1416 stars, but since my scale doesn't support that, I'll round it off to three." [ 24 ]
https://en.wikipedia.org/wiki/Pi_(film)
Pi Day is an annual celebration of the mathematical constant π (pi) . Pi Day is observed on March 14 (the 3rd month) since 3, 1, and 4 are the first three significant figures of π , and was first celebrated in the United States. [ 2 ] [ 3 ] It was founded in 1988 by Larry Shaw , an employee of a science museum in San Francisco , the Exploratorium . Celebrations often involve eating pie or holding pi recitation competitions. In 2009, the United States House of Representatives supported the designation of Pi Day. [ 4 ] UNESCO 's 40th General Conference designated Pi Day as the International Day of Mathematics in November 2019. [ 5 ] [ 6 ] Other dates when people celebrate pi include Pi Approximation Day on July 22 (22/7 in the day/month format), another approximation of π ; and June 28 (6.28), an approximation of 2 π or 𝜏 (tau). In 1988, the earliest known official or large-scale celebration of Pi Day was organized by Larry Shaw at the San Francisco Exploratorium , [ 7 ] where Shaw worked as a physicist , [ 8 ] with staff and public marching around one of its circular spaces, then consuming fruit pies. [ 9 ] The Exploratorium continues to hold Pi Day celebrations. [ 10 ] On March 12, 2009, the U.S. House of Representatives passed a non-binding resolution ( 111 H. Res. 224 ), [ 4 ] recognizing March 14, 2009, as National Pi Day. [ 11 ] For Pi Day 2010, Google presented a Google Doodle celebrating the holiday, with the word Google laid over images of circles and pi symbols; [ 12 ] and for the 30th anniversary in 2018, it was a Dominique Ansel pie with the circumference divided by its diameter . [ 13 ] Some observed the entire month of March 2014 (3/14) as "Pi Month". [ 14 ] [ 15 ] In the year 2015, March 14 was celebrated as "Super Pi Day". [ 16 ] It had special significance, as the date is written as 3/14/15 in month/day/year format. At 9:26:53, the date and time together represented the first ten digits of π , [ 17 ] and later that second, "Pi Instant" represented all of π 's digits. [ 18 ] Pi Day has been observed in many ways, including eating pie , throwing pies and discussing the significance of the number π . [ 19 ] The first two are due to a pun based on the words "pi" and "pie" being homophones in English ( / p aɪ / ), and the coincidental circular shape of many pies. [ 1 ] [ 20 ] Many pizza and pie restaurants offer discounts, deals, and free products on Pi Day. [ 21 ] Also, some schools hold competitions as to which student can recall pi to the highest number of decimal places . [ 22 ] [ 23 ] The Massachusetts Institute of Technology has often mailed its application decision letters to prospective students for delivery on Pi Day. [ 24 ] Starting in 2012, MIT has announced it will post those decisions (privately) online on Pi Day at exactly 6:28 pm, which they have called "Tau Time", to honor the rival numbers π and 𝜏 equally. [ 25 ] [ 26 ] In 2015, the regular decisions were put online at 9:26 am, following that year's "pi minute", [ 27 ] and in 2020, regular decisions were released at 1:59 pm, making the first six digits of pi. [ 28 ] Princeton, New Jersey , hosts numerous events in a combined celebration of Pi Day and Albert Einstein 's birthday, which is also March 14. [ 29 ] Einstein lived in Princeton for more than twenty years while working at the Institute for Advanced Study . In addition to pie eating and recitation contests, there is an annual Einstein look-alike contest. 
[ 29 ] In 2024, the recreational mathematician Matt Parker and a team of hundreds of volunteers at City of London School spent six days calculating 139 correct digits of pi by hand, in what Parker claimed was "the biggest hand calculation in a century". [ 30 ] [ 31 ] On 15 August 2024, the main-belt asteroid 314159 Mattparker [ a ] was named in his honor. The citation highlights Parker's biennial "Pi Day challenges", stating that they have helped to popularize mathematics. [ 32 ] [ 33 ] Pi Day is frequently observed on March 14 (3/14 in the month/day date format), but related celebrations have been held on alternative dates. Pi Approximation Day is observed on July 22 (22/7 in the day/month date format), since the fraction 22 ⁄ 7 is a common approximation of π , which is accurate to two decimal places and dates from Archimedes . [ 34 ] In Indonesia, a country that uses the DD/MM/YYYY date format , some people celebrate Pi Day every July 22. [ 35 ] Tau Day , also known as Two-Pi Day , [ 36 ] is observed on June 28 (6/28 in the month/day format). [ 37 ] The number 𝜏 , denoted by the Greek letter tau , is the ratio of a circle's circumference to its radius ; it equals 2 π , a common multiple in mathematical formulae, and approximately equals 6.28. Some have argued that 𝜏 is the clearer and more fundamental constant and that Tau Day should be celebrated alongside or instead of Pi Day. [ 38 ] [ 39 ] [ 40 ] Celebrants of this date jokingly suggest eating "twice the pie". [ 41 ] [ 42 ] [ 43 ] Some also celebrate π on November 10, since it is the 314th day of the year (in leap years, on November 9). [ 44 ]
https://en.wikipedia.org/wiki/Pi_Day
Pi Epsilon Tau ( ΠΕΤ ) is an American honor society for petroleum engineering students. Its purpose is to maintain the standards and high ideals of the petroleum engineering profession and to build a bond between its members and the industry. The society was established in 1947 at the University of Oklahoma . Faculty member Paul S. Johnson established Pi Epsilon Tau at the University of Oklahoma in November 1947 as an honor society for petroleum engineering students. [ 1 ] [ 2 ] [ 3 ] It was officially recognized by the university on January 7, 1948. [ 1 ] The honor society's purpose is to maintain the standards and high ideals of the petroleum engineering profession and to build a bond between its members and the industry. [ 4 ] Pi Epsilon Tau's founders planned to expand it to other campuses, creating a national honor society. [ 1 ] Its Beta chapter was established at the University of Tulsa in 1948. [ 2 ] Gamma was formed at Texas Tech University in 1949. [ 2 ] Other chapters were established at colleges across the United States. [ 5 ] It is governed through a national council of five members and a national convention. [ 6 ] The emblem or key of Pi Epsilon Tau is shaped like an oil derrick , standing on the base of an isosceles triangle . [ 6 ] [ 2 ] Its flag features the emblem on top of a three-leaf clover that symbolizes Saint Patrick . [ 6 ] The society's colors are black and gold. [ 6 ] Its flower is the red rose. [ 6 ] Its pledges are called " roustabouts ". [ 2 ] Membership in Pi Epsilon Tau is open to juniors, seniors, and graduate students studying petroleum engineering based on academic achievement, leadership, and sociability. [ 7 ] [ 8 ] [ 1 ] Pi Epsilon Tau has three classes of members: active (students), honorary, and alumnus. [ 6 ] Following are the chapters of Pi Epsilon Tau, with active chapters indicated in bold and inactive chapters in italics . [ 5 ] [ 9 ]
https://en.wikipedia.org/wiki/Pi_Epsilon_Tau
A Josephson junction (JJ) is a quantum mechanical device which is made of two superconducting electrodes separated by a barrier (thin insulating tunnel barrier, normal metal, semiconductor, ferromagnet, etc.). A π Josephson junction is a Josephson junction in which the Josephson phase φ equals π in the ground state, i.e. when no external current or magnetic field is applied. The supercurrent I s through a Josephson junction is generally given by I s = I c sin( φ ), where φ is the phase difference of the superconducting wave functions of the two electrodes, i.e. the Josephson phase. [ 1 ] The critical current I c is the maximum supercurrent that can exist through the Josephson junction. In experiments, one usually passes some current through the Josephson junction, and the junction reacts by changing the Josephson phase. From the above formula it is clear that the phase φ = arcsin( I / I c ), where I is the applied (super)current. Since the phase is 2 π -periodic, i.e. φ and φ + 2 π n are physically equivalent, without loss of generality the discussion below refers to the interval 0 ≤ φ < 2 π . When no current ( I = 0) flows through the Josephson junction, e.g. when the junction is disconnected, the junction is in the ground state and the Josephson phase across it is zero ( φ = 0). The phase can also be φ = π , which likewise results in no current through the junction. It turns out that the state with φ = π is unstable and corresponds to the Josephson energy maximum, while the state φ = 0 corresponds to the Josephson energy minimum and is the ground state. In certain cases, one may obtain a Josephson junction where the critical current is negative ( I c < 0). In this case, the first Josephson relation becomes I s = −| I c | sin( φ ) = | I c | sin( φ + π ). The ground state of such a Josephson junction is φ = π and corresponds to the Josephson energy minimum, while the conventional state φ = 0 is unstable and corresponds to the Josephson energy maximum. Such a Josephson junction with φ = π in the ground state is called a π Josephson junction. π Josephson junctions have quite unusual properties. For example, if one connects (shorts) the superconducting electrodes with an inductance L (e.g. a superconducting wire ), one may expect a spontaneous supercurrent to circulate in the loop, passing through the junction and through the inductance either clockwise or counterclockwise. This supercurrent is spontaneous and belongs to the ground state of the system. The direction of its circulation is chosen at random. This supercurrent will of course induce a magnetic field which can be detected experimentally. The magnetic flux passing through the loop will have a value between 0 and half of the magnetic flux quantum , i.e. between 0 and Φ 0 /2, depending on the value of the inductance L . The possibility of creating a π Josephson junction was first discussed theoretically by Bulaevskii et al. , [ 20 ] who considered a Josephson junction with paramagnetic scattering in the barrier. Almost one decade later, the possibility of having a π Josephson junction was discussed in the context of heavy fermion p-wave superconductors. [ 21 ] Experimentally, the first π Josephson junction was a corner junction made of yttrium barium copper oxide (d-wave) and Pb (s-wave) superconductors. [ 13 ] The first unambiguous proof of a π Josephson junction with a ferromagnetic barrier was given only a decade later. 
[ 2 ] That work used a weak ferromagnet consisting of a copper-nickel alloy (Cu x Ni 1− x , with x around 0.5) and optimized it so that the Curie temperature was close to the superconducting transition temperature of the niobium leads.
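A compact way to see why the ground state moves from φ = 0 to φ = π when the critical current changes sign is to write down the Josephson coupling energy. The following is the standard textbook expression, given here as an illustrative sketch rather than as material from the references above:

U(\varphi) = \frac{\Phi_0 I_c}{2\pi}\left(1 - \cos\varphi\right), \qquad \frac{\partial U}{\partial \varphi} = \frac{\Phi_0}{2\pi} I_c \sin\varphi = \frac{\Phi_0}{2\pi} I_s

For I c > 0 the minimum of U( φ ) lies at φ = 0 and the maximum at φ = π . For I c < 0 the prefactor changes sign, so the minimum shifts to φ = π , which is exactly the π-junction ground state described above; in both cases the extrema occur where the supercurrent I s vanishes.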
https://en.wikipedia.org/wiki/Pi_Josephson_junction
Pi Notebook is the trade name of a notebook produced by Japanese stationery manufacturer KING JIM and sold by Loft , which began selling the product in June 2017. The notebook is characterized by its ruled lines, which are made up of the digits of pi (3.1415926535...). [ 1 ] Originally, the product idea had been scrapped, but when it was posted on Twitter on March 14, 2017, Pi Day , it received a string of requests for commercialization [ 1 ] [ 2 ] and was released in June of the same year as a collaborative product with Loft. [ 3 ] Pi Notebook was a project that came up at a meeting around 2016. The meeting aired on June 12, 2016, on TBS 's TV program Gacchiri Monday! [ 4 ] However, the project was abandoned before it reached the commercialization presentation stage. [ 2 ] The response was nevertheless overwhelming when KING JIM posted it on Twitter along with a photo on March 14, 2017, Pi Day . [ 5 ] As of March 17, three days after the tweet, the number of retweets exceeded 12,000. [ 2 ] King Jim commented on the post, "We are always trying to create products with a free spirit. We hope this post conveys that atmosphere." [ 1 ] Then, a representative from Loft saw the tweet and contacted the company, which decided to commercialize the product on March 14. [ 3 ] After that, King Jim's designers came up with the cover, details, and other design elements. [ 3 ] Then, on June 19 of the same year, King Jim and Loft announced that they would commercialize the product, with 2,000 copies each. [ 3 ] [ 6 ] [ 7 ] On June 23, four days after the sale was announced, the notebook was sold in advance at the Loft Net Store, Loft's e-commerce site, and at Ginza Loft, which opened on the same day, for 314 yen excluding tax. [ 8 ] The Ginza Loft sold a limited number of 314 copies [ 8 ] and displayed point-of-purchase advertising hand-drawn by King Jim's Twitter representative. [ 9 ] The Loft online store, which began selling at 11:00 on the same day, [ 10 ] sold out just 17 minutes later at 11:17, [ 11 ] and the Ginza Loft also sold out within the day of its release. [ 12 ] Later, on July 25, the product went on sale at all Loft stores nationwide and at the Loft online store for 380 yen, excluding tax. [ 13 ] Following the massive success of the Pi Notebook, on March 9, 2018, the company began selling its second product, the "Miscellaneous Ruled Notebook," in which " Japanese era names ," the " Ogura Hyakunin Isshu ," " prime numbers ," "country names," and "city names" form the ruled lines. [ 14 ] [ 15 ] Furthermore, at the "Stationery Festival 2020" event held at Loft stores in July and August 2020, the company released its third series of notebooks, in which " element symbols ," the "new Japanese era name ," and " Jōyō kanji " form the ruled lines. [ 16 ] In January 2018, when Asahi Weekly interviewed Loft about its "Loft Heisei Hit Items," the Pi Notebook was listed in the stationery category. [ 17 ] In addition, the miscellaneous ruled notebooks released as the second version sold well, with total sales exceeding 25,000 copies in the three months from release to the end of May 2018. [ 18 ] Some believe that even this once-rejected project reached commercialization because King Jim's official X account interacted daily with users and had built up a fan base. [ 19 ]
https://en.wikipedia.org/wiki/Pi_Notebook
In chemistry , pi backbonding or π backbonding is a π-bonding interaction between a filled (or half filled) orbital of a transition metal atom and a vacant orbital on an adjacent ion or molecule. [ 1 ] [ 2 ] In this type of interaction, electrons from the metal are used to bond to the ligand , which dissipates excess negative charge and stabilizes the metal. It is common in transition metals with low oxidation states that have ligands such as carbon monoxide , olefins , or phosphines . The ligands involved in π backbonding can be broken into three groups: carbonyls and nitrogen analogs, alkenes and alkynes , and phosphines . Compounds where π backbonding is prominent include Ni(CO) 4 , Zeise's salt , and molybdenum and iron dinitrogen complexes . The electrons are partially transferred from a d-orbital of the metal to anti-bonding molecular orbitals of CO (and its analogs). This electron-transfer strengthens the metal–C bond and weakens the C–O bond. The strengthening of the M–CO bond is reflected in increases of the vibrational frequencies for the M–C bond (often outside of the range for the usual IR spectrophotometers). Furthermore, the M–CO bond length is shortened. The weakening of the C–O bond is indicated by a decrease in the wavenumber of the ν CO band(s) from that for free CO (2143 cm −1 ), for example to 2060 cm −1 in Ni(CO) 4 and 1981 cm −1 in Cr(CO) 6 , and 1790 cm −1 in the anion [Fe(CO) 4 ] 2− . [ 3 ] For this reason, IR spectroscopy is an important diagnostic technique in metal–carbonyl chemistry . The article infrared spectroscopy of metal carbonyls discusses this in detail. Many ligands other than CO are strong "backbonders". Nitric oxide is an even stronger π-acceptor than CO and ν NO is a diagnostic tool in metal–nitrosyl chemistry . Isocyanides , RNC, are another class of ligands that are capable of π-backbonding. In contrast with CO, the σ-donor lone pair on the C atom of isocyanides is antibonding in nature and upon complexation the CN bond is strengthened and the ν CN increased. At the same time, π-backbonding lowers the ν CN . Depending on the balance of σ-bonding versus π-backbonding, the ν CN can either be raised (for example, upon complexation with weak π-donor metals, such as Pt(II)) or lowered (for example, upon complexation with strong π-donor metals, such as Ni(0)). [ 4 ] For the isocyanides, an additional parameter is the MC=N–C angle, which deviates from 180° in highly electron-rich systems. Other ligands have weak π-backbonding abilities, which creates a labilization effect of CO, which is described by the cis effect . As in metal–carbonyls, electrons are partially transferred from a d-orbital of the metal to antibonding molecular orbitals of the alkenes and alkynes. [ 5 ] [ 6 ] This electron transfer strengthens the metal–ligand bond and weakens the C–C bonds within the ligand. [ 7 ] In the case of metal-alkenes and alkynes, the strengthening of the M–C 2 R 4 and M–C 2 R 2 bond is reflected in bending of the C–C–R angles which assume greater sp 3 and sp 2 character, respectively. [ 8 ] [ 6 ] Thus strong π backbonding causes a metal-alkene complex to assume the character of a metallacyclopropane. [ 5 ] Alkenes and alkynes with electronegative substituents exhibit greater π backbonding. [ 6 ] Some strong π backbonding ligands are tetrafluoroethylene , tetracyanoethylene , and hexafluoro-2-butyne . Phosphines accept electron density from metal p or d orbitals into combinations of P–C σ* antibonding orbitals that have π symmetry. 
[ 9 ] When phosphines bond to electron-rich metal atoms, backbonding would be expected to lengthen P–C bonds as P–C σ* orbitals become populated by electrons. The expected lengthening of the P–C distance is often hidden by an opposing effect: as the phosphorus lone pair is donated to the metal, P(lone pair)–R(bonding pair) repulsions decrease, which acts to shorten the P–C bond. The two effects have been deconvoluted by comparing the structures of pairs of metal-phosphine complexes that differ only by one electron. [ 10 ] Oxidation of R 3 P–M complexes results in longer M–P bonds and shorter P–C bonds, consistent with π-backbonding. [ 11 ] In early work, phosphine ligands were thought to utilize 3d orbitals to form M–P pi-bonding, but it is now accepted that d-orbitals on phosphorus are not involved in bonding as they are too high in energy. [ 12 ] [ 13 ] The full IUPAC definition of back donation is as follows: A description of the bonding of π-conjugated ligands to a transition metal which involves a synergic process with donation of electrons from the filled π-orbital or lone electron pair orbital of the ligand into an empty orbital of the metal (donor–acceptor bond), together with release (back donation) of electrons from an n d orbital of the metal (which is of π-symmetry with respect to the metal–ligand axis) into the empty π*- antibonding orbital of the ligand. [ 14 ]
https://en.wikipedia.org/wiki/Pi_backbonding
In chemistry , pi bonds ( π bonds ) are covalent chemical bonds , in each of which two lobes of an orbital on one atom overlap with two lobes of an orbital on another atom, and in which this overlap occurs laterally. Each of these atomic orbitals has an electron density of zero at a shared nodal plane that passes through the two bonded nuclei . This plane also is a nodal plane for the molecular orbital of the pi bond. Pi bonds can form in double and triple bonds but do not form in single bonds in most cases. The Greek letter π in their name refers to p orbitals , since the orbital symmetry of the pi bond is the same as that of the p orbital when seen down the bond axis. One common form of this sort of bonding involves p orbitals themselves, though d orbitals also engage in pi bonding. This latter mode forms part of the basis for metal-metal multiple bonding . Pi bonds are usually weaker than sigma bonds . The C–C double bond, composed of one sigma and one pi bond, [ 1 ] has a bond energy less than twice that of a C–C single bond, indicating that the stability added by the pi bond is less than the stability of a sigma bond. From the perspective of quantum mechanics , this bond's weakness is explained by significantly less overlap between the component p-orbitals due to their parallel orientation. This is contrasted by sigma bonds which form bonding orbitals directly between the nuclei of the bonding atoms, resulting in greater overlap and a strong sigma bond. Pi bonds result from overlap of atomic orbitals that are in contact through two areas of overlap. Most orbital overlaps that do not include the s-orbital, or have different internuclear axes (for example p x + p y overlap, which does not apply to an s-orbital) are generally all pi bonds. Pi bonds are more diffuse bonds than the sigma bonds. Electrons in pi bonds are sometimes referred to as pi electrons . Molecular fragments joined by a pi bond cannot rotate about that bond without breaking the pi bond, because rotation involves destroying the parallel orientation of the constituent p orbitals. For homonuclear diatomic molecules , bonding π molecular orbitals have only the one nodal plane passing through the bonded atoms, and no nodal planes between the bonded atoms. The corresponding anti bonding , or π* ("pi-star") molecular orbital, is defined by the presence of an additional nodal plane between these two bonded atoms. A typical double bond consists of one sigma bond and one pi bond; for example, the C=C double bond in ethylene (H 2 C=CH 2 ). A typical triple bond , for example in acetylene (HC≡CH), consists of one sigma bond and two pi bonds in two mutually perpendicular planes containing the bond axis. Two pi bonds are the maximum that can exist between a given pair of atoms. Quadruple bonds are extremely rare and can be formed only between transition metal atoms, and consist of one sigma bond, two pi bonds and one delta bond . A pi bond is weaker than a sigma bond, but the combination of pi and sigma bond is stronger than either bond by itself. The enhanced strength of a multiple bond versus a single (sigma bond) is indicated in many ways, but most obviously by a contraction in bond lengths. For example, in organic chemistry, carbon–carbon bond lengths are about 154 pm in ethane , [ 2 ] [ 3 ] 134 pm in ethylene and 120 pm in acetylene. More bonds make the total bond length shorter and the bond becomes stronger. A pi bond can exist between two atoms that do not have a net sigma-bonding effect between them. 
In certain metal complexes , pi interactions between a metal atom and alkyne and alkene pi antibonding orbitals form pi-bonds. In some cases of multiple bonds between two atoms, there is no net sigma-bonding at all, only pi bonds. Examples include diiron hexacarbonyl (Fe 2 (CO) 6 ), dicarbon (C 2 ), and diborane(2) (B 2 H 2 ). In these compounds the central bond consists only of pi bonding because of a sigma antibond accompanying the sigma bond itself. These compounds have been used as computational models for analysis of pi bonding itself, revealing that in order to achieve maximum orbital overlap the bond distances are much shorter than expected. [ 4 ]
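As a rough numerical illustration of the statement above that a pi bond adds less stability than a sigma bond, one can compare typical textbook average bond enthalpies; the exact values vary between sources and are used here only as an assumed example:

E(\mathrm{C{=}C}) \approx 614\ \mathrm{kJ\,mol^{-1}} < 2 \times E(\mathrm{C{-}C}) \approx 2 \times 348 = 696\ \mathrm{kJ\,mol^{-1}}

On these assumed values, the pi contribution is roughly 614 − 348 ≈ 266 kJ/mol, noticeably weaker than the sigma bond it accompanies.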
https://en.wikipedia.org/wiki/Pi_bond
The pEDA parameter ( pi electron donor-acceptor ) is a pi-electron substituent effect scale, described also as a mesomeric or resonance effect scale. There is also a complementary scale, sEDA . The more positive the value of pEDA, the more pi-electron donating the substituent; the more negative the pEDA, the more pi-electron withdrawing the substituent. The pEDA parameter for a given substituent is calculated by means of quantum chemistry methods. The model molecule is monosubstituted benzene . First the geometry is optimized at a suitable level of theory, then a natural population analysis within the framework of Natural Bond Orbital theory is performed. The molecule has to be oriented in such a way that the aromatic benzene ring is perpendicular to the z-axis. Then, the 2p z orbital occupations of the ring carbon atoms are summed up to give the total pi-occupation. From this value the corresponding sum of pi-occupations for unsubstituted benzene (a value close to 6, in accord with the Hückel rule ) is subtracted, giving the pEDA parameter. For pi-electron donating substituents like -NH 2 , -OH or -F the pEDA parameter is positive, and for pi-electron withdrawing substituents like -NO 2 , -BH 2 or -CN the pEDA is negative. The pEDA scale was invented by Wojciech P. Oziminski and Jan Cz. Dobrowolski and the details are available in the original paper. [ 1 ] The pEDA scale correlates linearly with experimental substituent constants such as the Taft-Topsom σR parameter. [ 2 ] For easy calculation of pEDA, the AromaTcl program, written in Tcl with a graphical user interface and free of charge for academic purposes, is available. Sums of pi-electron occupations and the pEDA parameter have been tabulated for substituents of various character.
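The arithmetic behind the pEDA parameter is simple once the natural population analysis has been carried out. The short Python sketch below illustrates it, assuming the six ring-carbon 2p z occupations have already been extracted from the NBO output by some external means; the function name and the numerical occupations are illustrative placeholders, not values from the original paper.

def peda(substituted_ring_occupations, benzene_ring_occupations):
    # pEDA = sum of the six ring-carbon 2p_z occupations in the monosubstituted
    # benzene minus the same sum for unsubstituted benzene (a value close to 6).
    return sum(substituted_ring_occupations) - sum(benzene_ring_occupations)

# Hypothetical occupations (placeholders, not data from the original paper):
donor_ring = [1.07, 0.99, 1.02, 0.98, 1.02, 0.99]   # e.g. a pi-donor-substituted benzene
benzene_ring = [0.998] * 6                           # unsubstituted benzene reference

print(round(peda(donor_ring, benzene_ring), 3))      # positive value => pi-electron donor

A negative result would correspondingly indicate a pi-electron-withdrawing substituent.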
https://en.wikipedia.org/wiki/Pi_electron_donor-acceptor
Pi in the Sky was an experimental aerial art display in which airplanes spelled out pi to 1,000 decimal places in the sky over the San Francisco Bay Area . The display took place on September 12, 2012. It was then displayed again in Austin on March 13, 2014, during the SXSW festival, at which time it was said to be the largest art piece ever displayed in the state of Texas. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] The numbers, each 0.4 kilometres (0.25 mi) [ 8 ] high, were created by a group of five [ 9 ] skywriting airplanes, and appeared as a dot matrix . [ 3 ] The string of numbers was produced in a large loop 161 kilometres (100 mi) [ 8 ] in circumference, at an altitude of approximately 10,000 feet (3,000 m). [ 5 ] The aircraft used were 1979 Grumman AA-5B Tigers , small, single-engined planes provided by the company AirSign Aerial Advertising, based in Williston, FL. The numbers were produced by spraying natural, burnt-off canola oil, [ 10 ] which dissipated, causing no environmental damage. [ 11 ] The exhibition began in the skies over San Jose , then continued over Fremont , Hayward , Oakland , Berkeley , San Francisco, San Bruno , San Mateo , Redwood City , Palo Alto , and Mountain View . The planes deliberately flew over the headquarters of NASA Ames , Lawrence Livermore National Laboratory , University of California at Berkeley , Stanford University , Google , Facebook , Twitter , and Apple . [ 12 ] The 2012 display was part of the 2012 ZERO1 Biennial, was conceived by artist ISHKY, and involved a company called Stamen Design . [ 9 ] ZERO1 is an art-technology network based in San Jose. The display was intended to draw attention to their biennial showcase for art and technology. [ 13 ] The 2014 display was part of an ongoing project, directed by artist ISHKY (Ben Davis), and again involved AirSign Aerial Advertising. The 2014 display quickly gained publicity, becoming the number-two trending hashtag on Twitter during the display; within 24 hours it was shared and viewed a little over six million times. [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ]
https://en.wikipedia.org/wiki/Pi_in_the_Sky
"Pi is 3" refers to a misunderstanding among the Japanese public that, due to the revision of the Japanese Curriculum guideline in 2002, the approximate value of pi (π), which had previously been taught as 3.14, was now taught as 3 in arithmetic education. [ 1 ] [ 2 ] [ 3 ] [ 4 ] In fact, this is not true, and even after the revision the approximate value of pi was still taught as 3.14. In the Japanese Curriculum guideline published in December 1998 and implemented in 2002, regulations such as the limitation on the number of digits for decimal multiplication were changed. Although the new regulations did not change the fact that pi was to be calculated using 3.14, they added the statement that 3 may be used depending on the purpose. At that time, many people believed that elementary school students were being forced to calculate by hand with pi as 3, and regarded this as a typical example of the negative effects of a relaxed education . [ 5 ] [ 6 ] [ 7 ] This misunderstanding was not easily resolved. [ 8 ] [ 9 ] [ 10 ] In the fall of 1999, a major cram school company launched a large campaign with advertisements that read "The formula for finding the area of a circle: radius x radius x 3!" and "Calculate the quadrature of a circle with pi as approximately 3 instead of 3.14." [ 3 ] The mass media also picked up on this issue, and it wrongly became widely believed in society that "as a result of the relaxed education system, pi is now taught as 3". [ 11 ] [ 12 ] As a criticism of the decline in scholastic achievement and the relaxed education system, "Pi is 3" was widely covered in weekly [ 13 ] [ 14 ] [ 15 ] and monthly [ 2 ] [ 16 ] magazines and in mathematics-related journals. [ 1 ] [ 4 ] The Japanese Curriculum guideline started out as guidance only, but at some point it came to be considered legally binding. In addition, there was also a so-called "restrictive provision" stating that teachers had to teach without excesses or deficiencies relative to what the guidelines say. [ 17 ] This provision was carried over to the 1998 revision (implemented in 2002 for elementary schools). One interpretation of the statement that 3 may be "used depending on the purpose" is that it was meant to develop the ability to make appropriate judgments and to process information according to the situation and application. [ 8 ] As part of the so-called "relaxed education," the content of arithmetic learning in multiplication, division, and decimals was reduced, while calculators were allowed to be used from the arithmetic learning stage. On the other hand, because relaxed education reduced the time for learning but not the areas of learning, many people believed that students were forced by the guidelines to use 3 instead of 3.14 as the approximate value of pi in calculations. However, this was a misunderstanding. [ 19 ] [ 20 ] [ 21 ] In addition, the use of calculators, which had been allowed from the 5th grade under the previous teaching guidelines, was now allowed from the 4th grade, [ 22 ] [ 23 ] [ 24 ] [ 25 ] and calculations using 3.14 were possible even with a calculator. [ 22 ] In the 1998 revised Curriculum guideline, the calculation of decimals in the fifth grade of elementary school was limited to one decimal place (tenths), and this limitation led to the misunderstanding that 3 had to be used for pi. But even if decimals are limited to tenths, the value of pi used would still be 3.1, which does not define pi as 3. Around that time, in the fall of 1999, a major cram school company released the advertisements quoted above. 
They conducted a major campaign in the Tokyo metropolitan area, [ 3 ] and the media covered it extensively. [ 26 ] This led to widespread public awareness of the misunderstanding that pi was now taught as 3 because of the relaxed education system. [ 3 ] [ 10 ] [ 11 ] [ 12 ] Akito Arima , who promoted "relaxed education" as the Minister of Education at the time, repeatedly said, "I was stunned by that," and regretted "my failure to go around the country and explain it in detail." [ 27 ] In the second report of the Japanese Central Council for Education on February 23, 2003, the policy of emphasizing scholastic ability was formulated. [ 24 ] In December 2003, the Curriculum guideline was partially revised to remove the requirement that the content be taught without excesses or deficiencies, changing the guidelines into a minimum standard that allows teaching in more detail than what is written, if necessary. [ 28 ] [ 29 ] On February 15, 2008, the Japanese Ministry of Education released the new Curriculum guideline (effective in 2011 for elementary schools), the first after the complete revision of the Fundamental Law of Education , and the restrictive provision was eliminated. [ 30 ] [ 31 ] As a result of the increase in study content, students have already learned how to calculate with decimals by the time they use pi, and the section on pi now states only that "pi shall be 3.14"; the statement that 3 is used depending on the purpose has been deleted. [ 32 ] The issue of "pi is 3" was discussed in mathematics-related journals [ 1 ] [ 4 ] and various academic journals. [ 24 ] [ 33 ] This misunderstanding was not easily resolved, and there were many misunderstandings even among those involved in education. [ 8 ] In response to this situation, Masahiro Kaminaga, an associate professor in the Department of Electrical and Computer Engineering at Tohoku Gakuin University , confessed that he had been convinced that "relaxed education is a foolish reform that teaches pi as 3." And he said, "I usually said, 'Go and do your own research until you are satisfied,' but if teachers are like this, it must be a problem before educational reform." [ 34 ] Other issues with treating pi as approximately 3 were also pointed out: "pi is an irrational number, so it is neither exactly 3 nor 3.14; thus, while the former and the latter are essentially equivalent for learning the procedure, there is a clear difference in approximate accuracy," [ 35 ] "if pi is calculated as 3, the perimeter is the same for the circle and the regular hexagon inscribed in it," [ 35 ] and "for the circumference of a circle with a diameter of 10 cm, the error would be 1.4 cm." [ 36 ] The danger of pointlessly appending ".14" in terms of significant figures was also noted. [ 8 ] [ 37 ] The misunderstanding that "they teach pi as 3 in elementary school" was seen in weekly [ 13 ] [ 14 ] [ 15 ] and monthly [ 2 ] [ 16 ] magazines as well. This has resulted in a distrust of public school education. [ 3 ] In Nisio Isin 's novel Zaregoto , a scene appears in which the truncated decimal part is introduced as "the tragedy of 0.14". [ 38 ] On one TV program, five comedians presented a skit in which they used "Pi is OK at 3" as a key line. [ 39 ] The theme song of "Yutori-chan," an animation about Japan's "Yutori" generation, includes the lyrics "3.1415 pi is approximately 3." [ 40 ] The misunderstanding of teaching pi as 3 was also introduced by Akira Ikegami in a 2013 TV program. 
[ 10 ] In 2003, the sixth question on the first-stage science mathematics examination at the University of Tokyo asked candidates to "Prove that pi is greater than 3.05"; it became famous as a problem with a message opposing the perceived government stance of teaching pi as 3. [ 41 ] One way to solve the problem is to show that the perimeter of a regular dodecagon inscribed in a circle of diameter 1 is greater than 3.05. [ 42 ]
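For completeness, the dodecagon argument runs as follows; this is a standard derivation and not a transcription of any particular published solution. A regular dodecagon inscribed in a circle of diameter 1 (radius 1/2) has side length 2 · (1/2) · sin(π/12) = sin 15°, so its perimeter is

P = 12 \sin 15^{\circ} = 12 \cdot \frac{\sqrt{6}-\sqrt{2}}{4} = 3\left(\sqrt{6}-\sqrt{2}\right) \approx 3.106

Since the perimeter of an inscribed polygon is strictly shorter than the circumference π · 1 = π, it follows that π > 3.106 > 3.05.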
https://en.wikipedia.org/wiki/Pi_is_3
In 1976, the Italian chemist Giovanni Piancatelli and coworkers developed a new method to synthesize 4-hydroxycyclopentenone derivatives from 2-furylcarbinols through an acid-catalyzed rearrangement. [ 1 ] [ 2 ] This discovery occurred when Piancatelli was studying heterocyclic steroids and their reactivity in an acidic environment. As this rearrangement has continued to be studied, it has become a commonly used rearrangement in natural product synthesis because of its ability to create 4-hydroxy-5-substituted cyclopent-2-enones. [ 2 ] Piancatelli's motive for looking into this new rearrangement stemmed from the ever-present 3-oxycyclopentene motif, specifically its 5-hydroxy derivative, found in biologically active natural products. [ 3 ] The mechanism of this reaction is proposed to be a 4π electrocyclization, much like the Nazarov cyclization reaction . [ 4 ] To obtain the 2-furylcarbinols, Piancatelli subjected furfural , an inedible biomass-derived feedstock, to a Grignard reaction . [ 3 ] This is then submitted to acid-catalyzed hydrolysis to cause a molecular rearrangement and obtain the final 2-furylcarbinols. While studying the specifics of the mechanism and reaction conditions during the synthesis of the 4-hydroxycyclopentenone derivatives, Piancatelli proposed that the reaction is a thermal electrocyclic reaction of a conrotatory 4π electron system. This mechanism was suggested when studying 1 H NMR spectra, as it became apparent that the final products exclusively delivered the trans isomer . [ 2 ] In Piancatelli's proposed mechanism, the formation of the carbocation through a protonation-dehydration sequence results in the two hydroxy groups being anti, allowing the trans -4-hydroxy-5-substituted-cyclopent-2-enone to form from a 4π electrocyclization ring closure. [ 2 ] [ 3 ] [ 5 ] D'Auria proposed a possible mechanism that included zwitterionic intermediates as a way to form the cis isomer alongside the abundant trans isomer of the 2-furylcarbinol. D'Auria performed the rearrangement in boiling water without an acid catalyst. [ 2 ] Another proposed mechanism is from Yin and co-workers, who studied the rearrangement of 2-furylcarbinols with a hydroxyalkyl chain at the 5 position. Yin rationalized the mechanism by utilizing an aldol -type intramolecular addition. [ 2 ] The harshness of the reaction conditions needed for the rearrangement differed based upon the reactivity of the substrates. Piancatelli observed that more reactive substrates such as 5-methyl-2-furylcarbinols can undergo the rearrangement under much milder conditions, which helps to avoid possible side products. [ 2 ] Lewis acids were discovered to help drive the reaction to completion as long as they were present in an equimolar ratio, whereas alkyl groups on the hydroxy-bearing carbon leave the starting material more stable and cause longer reaction times and lower yields, with the formation of side products due to the increased reactivity of those carbocations . [ 2 ] An important use of the Piancatelli rearrangement that was studied by Piancatelli himself is the synthesis of prostaglandins and their derivatives. Piancatelli was able to synthesize key intermediates for the preparation of prostanoic acid starting from his 2-furylcarbinols bearing a second functional group. This study was able to demonstrate the versatility of the rearrangement sequence. 
A few of the products synthesized using the Piancatelli rearrangement include 3E,5Z -misoprostol, enisoprost, 4-fluoro-enisoprost, 2-normisoprostol, prostaglandin E 1 (PGE 1 ), ent -phytoprostane E 1 , 16- epi -phytoprostane E 1 , bimatoprost , and travoprost .
https://en.wikipedia.org/wiki/Piancatelli_rearrangement
PicSat was a French observatory nanosatellite , designed to measure the transit of Beta Pictoris b , an exoplanet which orbits the star Beta Pictoris . PicSat was designed and built by a team of scientists led by Dr. Sylvestre Lacour, astrophysicist and instrumentalist at the High Angular Resolution in Astrophysics group in the LESIA laboratory with Paris Observatory , Paris Sciences et Lettres University and the French National Centre for Scientific Research (CNRS). It was launched on 12 January 2018, and operated for more than 10 weeks before falling silent on 20 March 2018. [ 1 ] The cubesat decayed from orbit on 3 October 2023. [ 2 ] With an age of about 23 million years, Beta Pictoris is a very young star. Compared to the Sun, which is 4.5 billion years old, Beta Pictoris is about twice as large in mass and size. Beta Pictoris is relatively close to the Sun: just 63.4 light-years away, making it bright and easy to observe. This makes Beta Pictoris interesting for study as it allows astronomers to learn more about the very early stages of planet formation. In the early 1980s, a large disk of asteroids, dust, gas, and other debris was found surrounding Beta Pictoris, leftovers from the formation of the star. [ 3 ] In 2009, a giant gas planet orbiting Beta Pictoris was discovered by a team of French astronomers led by Anne-Marie Lagrange from Grenoble, France . [ 4 ] The planet, named Beta Pictoris b , is about seven times as massive as Jupiter . It orbits Beta Pictoris at a distance of around ten astronomical units : ten times the distance between the Earth and the Sun, and about the same as the distance between Saturn and the Sun. In 2016, it was predicted that Beta Pictoris b's Hill sphere or the planet itself would be passing in front of its star as seen from the Earth. [ 5 ] Detailed observation of such a transit would reveal information about the planet, such as its exact size, the composition of its atmosphere , its density, and its chemical composition. Because Beta Pictoris b is so young, this information would reveal more about the formation of giant planets and planetary systems . However, as Beta Pictoris b's orbit is not well known, the moment of transit could only be estimated roughly. The transit was predicted to occur between the summer of 2017 and the summer of 2018. A transit of the planet would have lasted only a few hours; a transit of the planet's Hill sphere would have lasted anywhere from days to months. Continuous monitoring would have been the only way to capture the phenomenon. Because long-term continuous monitoring from Earth-based observatories was unlikely to work, given Earth's atmosphere, the day-night cycle, and scheduling conflicts, only a satellite could accurately capture the transit. The purpose of PicSat was to continually observe Beta Pictoris' brightness in order to capture the change in brightness when Beta Pictoris b transited the star and partially blocked some light. PicSat, a contraction of "Beta Pictoris" and "satellite", was a CubeSat . PicSat was composed of three standard cubic units, called a "3U", each 10 x 10 x 10 cm in size. [ 6 ] PicSat was the first CubeSat to be operated by the CNRS. It was different from most CubeSat projects in that it was developed by professionals, not by students. 
The project began in 2014 when Sylvestre Lacour, astrophysicist and instrumentalist at the French CNRS at the LESIA laboratory / Paris Observatory, thought of using a CubeSat to observe Beta Pictoris b's transit. He gathered a small team, and they designed and built PicSat. PicSat was one of the few CubeSats worldwide with an astrophysical science goal and the first CubeSat in the field of exoplanetary science. The PicSat science case was defined in collaboration with Dr. Alain Lecavelier des Etangs from the Institut d'Astrophysique de Paris, who had been working on the Beta Pictoris system for many years. The PicSat project also involved a collaboration with CCERES, the "Center & Campus" space of PSL Research University, and with French Space Agency CNES experts. [ 7 ] PicSat consisted of three cubic units. The top and middle cubic units held the satellite's payload, and the bottom unit contained its onboard computer. PicSat's topmost unit contained a small telescope with a five-centimeter diameter mirror. The mirror's small size was sufficient, as Beta Pictoris is very bright. The middle unit contained two innovations: a fine-tracking system and a thin optical fiber, 3 micrometers in diameter. The fiber, whose use marked the first time an optical fiber was flown into space, received light and guided it to a sensitive photodiode that accurately measured the arrival time of each individual photon. Using a thin optical fiber kept other light sources, such as stray light from the sky and scattered light from within the optical system, from entering the photodiode, allowing for accurate measurement of Beta Pictoris' brightness. A fast-moving piezoelectric actuator kept the optical fiber trained on Beta Pictoris, since the natural wobble of the satellite in its orbit would otherwise affect the fiber's ability to accurately track and measure the star. The bottom cubic unit of PicSat contained the onboard computer, which handled the satellite's operation, ground-station communication with Earth, raw pointing of the telescope, battery operation, and other important monitoring tasks. [ 8 ] The whole satellite was covered in arrays of deployable solar panels, providing energy for all systems. PicSat's total weight was about 3.5 kilograms, and its power consumption was about 5 watts. [ 9 ] If PicSat ever detected the onset of Beta Pictoris b's transit, or the transit of its Hill sphere, a European Southern Observatory telescope would have been immediately put into action. [ 10 ] This was thanks to an accepted proposal to ESO for observing time in support of the PicSat project, led by Dr. Flavien Kiefer from the Institut d'astrophysique de Paris. Dr. Kiefer was known for his work on the detection and observation of exocomets in star systems such as Beta Pictoris. [ 11 ] The telescope was equipped with the High Accuracy Radial Velocity Planet Searcher (HARPS) instrument. [ 10 ] Together with PicSat measurements, HARPS transit data would have allowed more accurate determinations of the orbit and size of the planet, along with the chemical makeup of its atmosphere. If a comet had transited, HARPS would have been able to determine the chemical composition of the comet's atmosphere, which carries key information about the chemical composition of the star system as a whole and thus its formation and evolution. [ 12 ] PicSat was launched into a polar, low Earth orbit with an altitude of 600 km on 12 January 2018.
The launch was carried out by the Indian Space Research Organisation using a Polar Satellite Launch Vehicle on the PSLV-C40 mission. [ 13 ] The satellite was operated from the PicSat Ground Station at Paris Observatory, although it was only visible for about 30 minutes a day. Since PicSat communicated on amateur radio frequencies (in cooperation with the Réseau des Émetteurs Français), anyone with radio receiving equipment could tune in, receive information from PicSat, and upload it to a database. A large network of radio amateurs was called on to collaborate in tracking the satellite, receiving its data, and transmitting it to the ground station. Licensed radio amateurs were able to use PicSat as a transponder when it was not performing observation tasks or other communication. [ 14 ] PicSat's official website displayed the received information, as well as up-to-date light curve data for Beta Pictoris. PicSat was predicted to operate for one year. [ 15 ] It operated for approximately 10 weeks before contact was lost on 20 March 2018. [ 1 ] Attempts to reestablish contact were made. On 30 March a team at Morehead State University believed it had restored contact, but the signal received was from the TIGRISAT satellite. The mission officially concluded on 5 April. PicSat was financially supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Lithium proposal 639248), the CNRS, the ESEP Laboratory Group, the PSL Research University, Foundation MERAC, CNES, CCERES, and the Paris Observatory – LESIA. [ 16 ]
https://en.wikipedia.org/wiki/PicSat
In mathematics, specifically the study of differential equations, the Picard–Lindelöf theorem gives a set of conditions under which an initial value problem has a unique solution. It is also known as Picard's existence theorem, the Cauchy–Lipschitz theorem, or the existence and uniqueness theorem. The theorem is named after Émile Picard, Ernst Lindelöf, Rudolf Lipschitz and Augustin-Louis Cauchy. Let D ⊆ R × R^n be a closed rectangle with (t_0, y_0) ∈ int D, the interior of D. Let f : D → R^n be a function that is continuous in t and Lipschitz continuous in y (with Lipschitz constant independent of t). Then there exists some ε > 0 such that the initial value problem y′(t) = f(t, y(t)), y(t_0) = y_0, has a unique solution y(t) on the interval [t_0 − ε, t_0 + ε]. [ 1 ] [ 2 ] A standard proof relies on transforming the differential equation into an integral equation, then applying the Banach fixed-point theorem to prove the existence and uniqueness of solutions. Integrating both sides of the differential equation y′(t) = f(t, y(t)) shows that any solution to the differential equation must also satisfy the integral equation y(t) = y_0 + ∫_{t_0}^{t} f(s, y(s)) ds. Given the hypotheses that f is continuous in t and Lipschitz continuous in y, this integral operator is a contraction on a suitably restricted interval, and so the Banach fixed-point theorem proves that a solution can be obtained by fixed-point iteration of successive approximations. In this context, this fixed-point iteration method is known as Picard iteration. Set φ_0(t) = y_0 and φ_{k+1}(t) = y_0 + ∫_{t_0}^{t} f(s, φ_k(s)) ds. It follows from the Banach fixed-point theorem that the sequence of "Picard iterates" φ_k is convergent and that its limit is a solution to the original initial value problem. Since the Banach fixed-point theorem states that the fixed point is unique, the solution found through this iteration is the unique solution to the differential equation given an initial value. Let y(t) = tan(t), the solution to the equation y′(t) = 1 + y(t)² with initial condition y(t_0) = y_0 = 0, t_0 = 0. Starting with φ_0(t) = 0, we iterate φ_1(t) = t, φ_2(t) = t + t³/3, φ_3(t) = t + t³/3 + 2t⁵/15 + t⁷/63, and so on, so that φ_n(t) → y(t). Evidently, the functions are computing the Taylor series expansion of our known solution y = tan(t). Since tan has poles at ±π/2, it is not Lipschitz continuous in the neighborhood of those points, and the iteration converges only toward a local solution valid for |t| < π/2, not one valid over all of R. To understand uniqueness of solutions, contrast the following two examples of first-order ordinary differential equations for y(t). [ 3 ] Both differential equations possess a single stationary point y = 0.
First, for the homogeneous linear equation dy/dt = ay (with a < 0), a stationary solution is y(t) = 0, which is obtained for the initial condition y(0) = 0. Beginning with any other initial condition y(0) = y_0 ≠ 0, the solution y(t) = y_0 e^{at} tends toward the stationary point y = 0, but it only approaches it in the limit of infinite time, so the uniqueness of solutions over all finite times is guaranteed. By contrast, for an equation in which the stationary point can be reached after a finite time, uniqueness of solutions does not hold. Consider the homogeneous nonlinear equation dy/dt = ay^{2/3}, which has at least these two solutions corresponding to the initial condition y(0) = 0: y(t) = 0 and y(t) = (at/3)³, so the previous state of the system is not uniquely determined by its state at or after t = 0. The uniqueness theorem does not apply because the derivative of the function f(y) = y^{2/3} is not bounded in the neighborhood of y = 0; therefore f is not Lipschitz continuous, violating the hypothesis of the theorem. Let L be the Lipschitz constant of (t, y) ↦ f(t, y) with respect to y. The function f is continuous as a function of (t, y). In particular, since t ↦ f(t, y) is a continuous function of t, we have that for any point (t_0, y_0) and ε > 0 there exists δ > 0 such that |f(t, y_0) − f(t_0, y_0)| < ε/2 when |t − t_0| < δ. We have |f(t, y) − f(t_0, y_0)| ≤ |f(t, y) − f(t, y_0)| + |f(t, y_0) − f(t_0, y_0)| < ε, provided |t − t_0| < δ and |y − y_0| < ε/(2L), which shows that f is continuous at (t_0, y_0). Let a := 1/(2L) and take any b > 0 such that C_{a,b} = I_a(t_0) × B_b(y_0) is a subset of D, where I_a(t_0) = [t_0 − a, t_0 + a] and B_b(y_0) = [y_0 − b, y_0 + b]. Such a set exists because (t_0, y_0) is in the interior of D, by assumption. Let M = sup_{(t,y) ∈ C_{a,b}} ‖f(t, y)‖, which is the supremum of (the absolute values of) the slopes of the function. This supremum is attained, and is finite, because f is continuous and C_{a,b} is compact.
For a later step in the proof, we need that a < b / M , {\displaystyle a<b/M,} so if a ≥ b / M , {\displaystyle a\geq b/M,} then change a {\displaystyle a} to a := 1 2 min { 1 / L , b / M } , {\displaystyle a:={\tfrac {1}{2}}\min\{1/L,\ b/M\},} and update I a ( t 0 ) , {\displaystyle I_{a}(t_{0}),} B b ( y 0 ) , {\displaystyle B_{b}(y_{0}),} C a , b , {\displaystyle C_{a,b},} and M {\displaystyle M} accordingly (this update will be needed at most once since M {\displaystyle M} cannot increase as a result of restricting C a , b {\displaystyle C_{a,b}} ). Consider C ( I a ( t 0 ) , B b ( y 0 ) ) {\displaystyle {\mathcal {C}}(I_{a}(t_{0}),B_{b}(y_{0}))} , the function space of continuous functions I a ( t 0 ) → B b ( y 0 ) . {\displaystyle I_{a}(t_{0})\to B_{b}(y_{0}).} We will proceed by applying the Banach fixed-point theorem using the metric on C ( I a ( t 0 ) , B b ( y 0 ) ) {\displaystyle {\mathcal {C}}(I_{a}(t_{0}),B_{b}(y_{0}))} induced by the uniform norm . Namely, for each continuous function φ : I a ( t 0 ) → B b ( y 0 ) , {\displaystyle \varphi :I_{a}(t_{0})\to B_{b}(y_{0}),} the norm of φ {\displaystyle \varphi } is ‖ φ ‖ ∞ = sup t ∈ I a ‖ φ ( t ) ‖ . {\displaystyle \|\varphi \|_{\infty }=\sup _{t\in I_{a}}\|\varphi (t)\|.} The Picard operator Γ : C ( I a ( t 0 ) , B b ( y 0 ) ) → C ( I a ( t 0 ) , B b ( y 0 ) ) {\displaystyle \Gamma :{\mathcal {C}}{\big (}I_{a}(t_{0}),B_{b}(y_{0}){\big )}\to {\mathcal {C}}{\big (}I_{a}(t_{0}),B_{b}(y_{0}){\big )}} is defined for each φ ∈ C ( I a ( t 0 ) , B b ( y 0 ) ) {\displaystyle \varphi \in {\mathcal {C}}(I_{a}(t_{0}),B_{b}(y_{0}))} by Γ φ ∈ C ( I a ( t 0 ) , B b ( y 0 ) ) {\displaystyle \Gamma \varphi \in {\mathcal {C}}(I_{a}(t_{0}),B_{b}(y_{0}))} given by Γ φ ( t ) = y 0 + ∫ t 0 t f ( s , φ ( s ) ) d s ∀ t ∈ I a ( t 0 ) . {\displaystyle \Gamma \varphi (t)=y_{0}+\int _{t_{0}}^{t}f(s,\varphi (s))\,ds\quad \forall t\in I_{a}(t_{0}).} To apply the Banach fixed-point theorem, we must show that Γ {\displaystyle \Gamma } maps a complete non-empty metric space X into itself and also is a contraction mapping . We first show that Γ {\displaystyle \Gamma } takes B b ( y 0 ) {\displaystyle B_{b}(y_{0})} into itself in the space of continuous functions with the uniform norm. Here, B b ( y 0 ) {\displaystyle B_{b}(y_{0})} is a closed ball in the space of continuous (and bounded ) functions "centered" at the constant function y 0 {\displaystyle y_{0}} . Hence we need to show that ‖ φ − y 0 ‖ ∞ ≤ b {\displaystyle \|\varphi -y_{0}\|_{\infty }\leq b} implies ‖ Γ φ ( t ) − y 0 ‖ = ‖ ∫ t 0 t f ( s , φ ( s ) ) d s ‖ ≤ ∫ t 0 t ′ ‖ f ( s , φ ( s ) ) ‖ d s ≤ ∫ t 0 t ′ M d s = M | t ′ − t 0 | ≤ M a ≤ b {\displaystyle \left\|\Gamma \varphi (t)-y_{0}\right\|=\left\|\int _{t_{0}}^{t}f(s,\varphi (s))\,ds\right\|\leq \int _{t_{0}}^{t'}\left\|f(s,\varphi (s))\right\|ds\leq \int _{t_{0}}^{t'}M\,ds=M\left|t'-t_{0}\right|\leq Ma\leq b} where t ′ {\displaystyle t'} is some number in [ t 0 − a , t 0 + a ] {\displaystyle [t_{0}-a,t_{0}+a]} where the maximum is achieved. The last inequality in the chain is true since a < b / M . {\displaystyle a<b/M.} Now let us prove that Γ {\displaystyle \Gamma } is a contraction mapping as required to apply the Banach fixed-point theorem . 
In particular, we want to show that there exists 0 ≤ q < 1 , {\displaystyle 0\leq q<1,} such that ‖ Γ φ 1 − Γ φ 2 ‖ ∞ ≤ q ‖ φ 1 − φ 2 ‖ ∞ {\displaystyle \left\|\Gamma \varphi _{1}-\Gamma \varphi _{2}\right\|_{\infty }\leq q\left\|\varphi _{1}-\varphi _{2}\right\|_{\infty }} for all φ 1 , φ 2 ∈ C ( I a ( t 0 ) , B b ( y 0 ) ) . {\displaystyle \varphi _{1},\varphi _{2}\in {\mathcal {C}}(I_{a}(t_{0}),B_{b}(y_{0})).} Let q = a L {\displaystyle q=aL} and take any φ 1 , φ 2 ∈ C ( I a ( t 0 ) , B b ( y 0 ) ) . {\displaystyle \varphi _{1},\varphi _{2}\in {\mathcal {C}}(I_{a}(t_{0}),B_{b}(y_{0})).} Take t {\displaystyle t} such that Then, using the definition of Γ {\displaystyle \Gamma } , where t − t 0 ≤ a , {\displaystyle t-t_{0}\leq a,} because the domains of ϕ 1 , ϕ 2 {\displaystyle \phi _{1},\phi _{2}} are both I a ( t 0 ) × B b ( y 0 ) . {\displaystyle I_{a}(t_{0})\times B_{b}(y_{0}).} By definition, q = a L , {\displaystyle q=aL,} and a < 1 / L , {\displaystyle a<1/L,} so q < 1. {\displaystyle q<1.} Therefore, Γ {\displaystyle \Gamma } is a contraction. We have established that the Picard's operator is a contraction on the Banach spaces with the metric induced by the uniform norm. This allows us to apply the Banach fixed-point theorem to conclude that the operator has a unique fixed point. In particular, there is a unique function φ ∈ C ( I a ( t 0 ) , B b ( y 0 ) ) \varphi \in {\mathcal {C}}(I_{a}(t_{0}),B_{b}(y_{0})) such that Γ φ = φ . {\displaystyle \Gamma \varphi =\varphi .} Thus, φ {\displaystyle \varphi } is the unique solution of the initial value problem, valid on the interval I a . {\displaystyle I_{a}.} We wish to remove the dependence of the interval I a on L . To this end, there is a corollary of the Banach fixed-point theorem: if an operator T n is a contraction for some n in N , then T has a unique fixed point. Before applying this theorem to the Picard operator, recall the following: Lemma — ‖ Γ m φ 1 ( t ) − Γ m φ 2 ( t ) ‖ ≤ L m | t − t 0 | m m ! ‖ φ 1 − φ 2 ‖ {\displaystyle \left\|\Gamma ^{m}\varphi _{1}(t)-\Gamma ^{m}\varphi _{2}(t)\right\|\leq {\frac {L^{m}|t-t_{0}|^{m}}{m!}}\left\|\varphi _{1}-\varphi _{2}\right\|} for all t ∈ [ t 0 − α , t 0 + α ] {\displaystyle t\in [t_{0}-\alpha ,t_{0}+\alpha ]} Proof. Induction on m . For the base of the induction ( m = 1 ) we have already seen this, so suppose the inequality holds for m − 1 , then we have: ‖ Γ m φ 1 ( t ) − Γ m φ 2 ( t ) ‖ = ‖ Γ Γ m − 1 φ 1 ( t ) − Γ Γ m − 1 φ 2 ( t ) ‖ ≤ | ∫ t 0 t ‖ f ( s , Γ m − 1 φ 1 ( s ) ) − f ( s , Γ m − 1 φ 2 ( s ) ) ‖ d s | ≤ L | ∫ t 0 t ‖ Γ m − 1 φ 1 ( s ) − Γ m − 1 φ 2 ( s ) ‖ d s | ≤ L | ∫ t 0 t L m − 1 | s − t 0 | m − 1 ( m − 1 ) ! ‖ φ 1 − φ 2 ‖ d s | ≤ L m | t − t 0 | m m ! ‖ φ 1 − φ 2 ‖ . 
{\displaystyle {\begin{aligned}\left\|\Gamma ^{m}\varphi _{1}(t)-\Gamma ^{m}\varphi _{2}(t)\right\|&=\left\|\Gamma \Gamma ^{m-1}\varphi _{1}(t)-\Gamma \Gamma ^{m-1}\varphi _{2}(t)\right\|\\&\leq \left|\int _{t_{0}}^{t}\left\|f\left(s,\Gamma ^{m-1}\varphi _{1}(s)\right)-f\left(s,\Gamma ^{m-1}\varphi _{2}(s)\right)\right\|ds\right|\\&\leq L\left|\int _{t_{0}}^{t}\left\|\Gamma ^{m-1}\varphi _{1}(s)-\Gamma ^{m-1}\varphi _{2}(s)\right\|ds\right|\\&\leq L\left|\int _{t_{0}}^{t}{\frac {L^{m-1}|s-t_{0}|^{m-1}}{(m-1)!}}\left\|\varphi _{1}-\varphi _{2}\right\|ds\right|\\&\leq {\frac {L^{m}|t-t_{0}|^{m}}{m!}}\left\|\varphi _{1}-\varphi _{2}\right\|.\end{aligned}}} By taking a supremum over t ∈ [ t 0 − α , t 0 + α ] {\displaystyle t\in [t_{0}-\alpha ,t_{0}+\alpha ]} we see that ‖ Γ m φ 1 − Γ m φ 2 ‖ ≤ L m α m m ! ‖ φ 1 − φ 2 ‖ {\displaystyle \left\|\Gamma ^{m}\varphi _{1}-\Gamma ^{m}\varphi _{2}\right\|\leq {\frac {L^{m}\alpha ^{m}}{m!}}\left\|\varphi _{1}-\varphi _{2}\right\|} . This inequality assures that for some large m , L m α m m ! < 1 , {\displaystyle {\frac {L^{m}\alpha ^{m}}{m!}}<1,} and hence Γ m will be a contraction. So by the previous corollary Γ will have a unique fixed point. Finally, we have been able to optimize the interval of the solution by taking α = min{ a , ⁠ b / M ⁠ } . In the end, this result shows the interval of definition of the solution does not depend on the Lipschitz constant of the field, but only on the interval of definition of the field and its maximum absolute value. The Picard–Lindelöf theorem shows that the solution exists and that it is unique. The Peano existence theorem shows only existence, not uniqueness, but it assumes only that f is continuous in y , instead of Lipschitz continuous . For example, the right-hand side of the equation ⁠ dy / dt ⁠ = y ⁠ 1 / 3 ⁠ with initial condition y (0) = 0 is continuous but not Lipschitz continuous. Indeed, rather than being unique, this equation has at least three solutions: [ 4 ] Even more general is Carathéodory's existence theorem , which proves existence (in a more general sense) under weaker conditions on f . Although these conditions are only sufficient, there also exist necessary and sufficient conditions for the solution of an initial value problem to be unique, such as Okamura 's theorem. [ 5 ] The Picard–Lindelöf theorem ensures that solutions to initial value problems exist uniquely within a local interval [ t 0 − ε , t 0 + ε ] {\displaystyle [t_{0}-\varepsilon ,t_{0}+\varepsilon ]} , possibly dependent on each solution. The behavior of solutions beyond this local interval can vary depending on the properties of f and the domain over which f is defined. For instance, if f is globally Lipschitz, then the local interval of existence of each solution can be extended to the entire real line and all the solutions are defined over the entire R . If f is only locally Lipschitz, some solutions may not be defined for certain values of t , even if f is smooth. For instance, the differential equation ⁠ dy / dt ⁠ = y 2 with initial condition y (0) = 1 has the solution y ( t ) = 1/(1- t ), which is not defined at t = 1. Nevertheless, if f is a differentiable function defined on a compact submanifold of R n such that the prescribed derivative is tangent to the given submanifold, then the initial value problem has a unique solution for all time. 
More generally, in differential geometry : if f is a differentiable vector field defined over a domain which is a compact smooth manifold , then all its trajectories ( integral curves ) exist for all time. [ 6 ] [ 7 ]
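The Picard iteration described above is easy to carry out with computer algebra. The following sketch is an illustration added here, not part of the original article: it uses sympy to generate the first few iterates for the worked example y′ = 1 + y², y(0) = 0, and compares them with the Taylor series of tan(t); the helper name picard_iterates is an arbitrary choice for this example.

import sympy as sp

t, s = sp.symbols('t s')

def picard_iterates(f, y0, t0, n):
    """Return the first n Picard iterates phi_1, ..., phi_n as sympy expressions in t."""
    phi = sp.Integer(y0)                     # phi_0 is the constant function y0
    iterates = []
    for _ in range(n):
        integrand = f(s, phi.subs(t, s))     # f(s, phi_k(s))
        phi = y0 + sp.integrate(integrand, (s, t0, t))   # phi_{k+1}(t) = y0 + integral from t0 to t of f(s, phi_k(s)) ds
        iterates.append(sp.expand(phi))
    return iterates

f = lambda s, y: 1 + y**2                    # right-hand side of y' = 1 + y^2
for k, phi in enumerate(picard_iterates(f, 0, 0, 3), start=1):
    print(f"phi_{k}(t) =", phi)
# phi_3(t) = t + t**3/3 + 2*t**5/15 + t**7/63, matching tan(t) = t + t^3/3 + 2t^5/15 + ...
print(sp.series(sp.tan(t), t, 0, 6))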
https://en.wikipedia.org/wiki/Picard–Lindelöf_theorem
The Pickands–Balkema–De Haan theorem gives the asymptotic tail distribution of a random variable when its true distribution is unknown. It is often called the second theorem in extreme value theory. Unlike the first theorem (the Fisher–Tippett–Gnedenko theorem), which concerns the maximum of a sample, the Pickands–Balkema–De Haan theorem describes the values above a threshold. The theorem owes its name to mathematicians James Pickands, Guus Balkema, and Laurens de Haan. For an unknown distribution function F of a random variable X, the Pickands–Balkema–De Haan theorem describes the conditional distribution function F_u of the variable X above a certain threshold u. This is the so-called conditional excess distribution function, defined as F_u(y) = P(X − u ≤ y | X > u) = (F(u + y) − F(u)) / (1 − F(u)) for 0 ≤ y ≤ x_F − u, where x_F is either the finite or infinite right endpoint of the underlying distribution F. The function F_u describes the distribution of the excess value over a threshold u, given that the threshold is exceeded. Pickands, [ 1 ] Balkema and De Haan [ 2 ] established that for a large class of underlying distribution functions F, and large u, F_u is well approximated by the generalized Pareto distribution, in the following sense. Suppose that there exist functions a(u), b(u), with a(u) > 0, such that F_u(a(u)y + b(u)) converges to a non-degenerate distribution as u → ∞; then that limit is equal to the generalized Pareto distribution, whose distribution function is G_{k,σ}(y) = 1 − (1 + ky/σ)^{−1/k} for k ≠ 0 and G_{k,σ}(y) = 1 − e^{−y/σ} for k = 0. Here σ > 0, and y ≥ 0 when k ≥ 0 and 0 ≤ y ≤ −σ/k when k < 0. The special cases correspond to the exponential distribution (k = 0) and Pareto-type distributions (k > 0); for k < 0 the distribution has a finite right endpoint. The class of underlying distribution functions F is related to the class of distribution functions F satisfying the Fisher–Tippett–Gnedenko theorem. [ 3 ] Since a special case of the generalized Pareto distribution is a power law, the Pickands–Balkema–De Haan theorem is sometimes used to justify the use of a power law for modeling extreme events. The theorem has been extended to include a wider range of distributions. [ 4 ] [ 5 ] While the extended versions cover, for example, the normal and log-normal distributions, there still exist continuous distributions that are not covered. [ 6 ]
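In applied extreme value analysis this theorem is what justifies the peaks-over-threshold method: excesses over a high threshold are modelled with a generalized Pareto distribution. The sketch below is an illustration added here, not taken from the article; the simulated Student-t data, the 99% threshold and the variable names are assumptions chosen only for the example.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.standard_t(df=4, size=100_000)   # heavy-tailed sample standing in for data with an unknown tail

u = np.quantile(sample, 0.99)                 # high threshold
excesses = sample[sample > u] - u             # excess values X - u, given X > u

# Fit the generalized Pareto distribution to the excesses (location fixed at 0),
# as suggested by the Pickands-Balkema-De Haan theorem.
shape, loc, scale = stats.genpareto.fit(excesses, floc=0)
print(f"threshold u = {u:.3f}, shape k = {shape:.3f}, scale sigma = {scale:.3f}")

# Tail probability estimate P(X > x) for a point x above the threshold.
x = u + 2.0
p_exceed_u = np.mean(sample > u)
p_tail = p_exceed_u * stats.genpareto.sf(x - u, shape, loc=0, scale=scale)
print(f"estimated P(X > {x:.2f}) = {p_tail:.2e}")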
https://en.wikipedia.org/wiki/Pickands–Balkema–De_Haan_theorem
A Pickering emulsion, sometimes named Ramsden emulsion, is an emulsion that is stabilized by solid particles (for example colloidal silica) which adsorb onto the interface between the water and oil phases. Typically, the emulsions are either water-in-oil or oil-in-water emulsions, but other more complex systems such as water-in-water, oil-in-oil, water-in-oil-in-water, and oil-in-water-in-oil also exist. Pickering emulsions were named after S.U. Pickering, who described the phenomenon in 1907, although the effect was first recognized by Walter Ramsden in 1903. [ 1 ] [ 2 ] If oil and water are mixed and small oil droplets are formed and dispersed throughout the water (an oil-in-water emulsion), the droplets will eventually coalesce to decrease the amount of energy in the system. However, if solid particles are added to the mixture, they will bind to the interface and prevent the droplets from coalescing, making the emulsion more stable. Particle properties such as hydrophobicity, shape, and size, as well as the electrolyte concentration of the continuous phase and the volume ratio of the two phases, can have an effect on the stability of the emulsion. The particle's contact angle at the surface of the droplet is a characteristic of the hydrophobicity of the particle. If the contact angle of the particle with the interface is low, the particle will be mostly wetted by the droplet and therefore will not be likely to prevent coalescence of the droplets. Particles that are partially hydrophobic are better stabilizers because they are partially wettable by both liquids and therefore bind better to the surface of the droplets. The optimal contact angle for a stable emulsion is achieved when the particle is equally wetted by the two phases (i.e. a 90° contact angle). The stabilization energy is given by E = π r² γ_OW (1 ± cos θ_OW)², where r is the particle radius, γ_OW is the interfacial tension, and θ_OW is the contact angle of the particle with the interface. When the contact angle is approximately 90°, the energy required to stabilize the system is at its minimum. [ 3 ] Generally, the phase that preferentially wets the particle will be the continuous phase in the emulsion system. The most common type of Pickering emulsion is the oil-in-water emulsion, due to the hydrophilicity of most organic particles. One example of a Pickering-stabilized emulsion is homogenized milk. The milk protein (casein) units are adsorbed at the surface of the milk fat globules and act as surfactants. The casein replaces the milkfat globule membrane, which is damaged during homogenization. Other examples of emulsions where Pickering particles may be the stabilizing species are detergents, low-fat chocolates, mayonnaises and margarines. Pickering emulsions have gained increased attention and research interest during the last 20 years, as the use of traditional surfactants has been questioned due to environmental, health and cost issues. Synthetic nanoparticles with well-defined sizes and compositions were the primary Pickering stabilizers of interest until recently, when natural organic particles also began to attract attention. The latter are believed to have advantages such as cost-efficiency and degradability, and are derived from renewable resources. [ 4 ] Pickering emulsions find applications in enhanced oil recovery [ 5 ] or water remediation.
[ 6 ] Certain Pickering emulsions remain stable even under gastric conditions and show an extraordinary resistance against gastric lipolysis, [ 7 ] facilitating their use for controlled lipid digestion and satiation [ 8 ] or oral delivery systems. [ 9 ] Additionally, it has been demonstrated that the stability of Pickering emulsions can be improved by the use of amphiphilic "Janus particles", namely particles that have one hydrophobic and one hydrophilic side, due to the higher adsorption energy of the particles at the liquid-liquid interface. [ 10 ] This is evident when observing emulsion stabilization using polyelectrolytes. It is also possible to use latex particles for Pickering stabilization and then fuse these particles to form a permeable shell or capsule, called a colloidosome. [ 11 ] Moreover, Pickering emulsion droplets are also suitable templates for micro-encapsulation and the formation of closed, non-permeable capsules. [ 12 ] This form of encapsulation can also be applied to water-in-water emulsions (dispersions of phase-separated aqueous polymer solutions), and can also be reversible. [ 13 ] Pickering-stabilized microbubbles may have applications as ultrasound contrast agents. [ 14 ] [ 15 ]
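To give a feel for the magnitude of the stabilization energy quoted above, the following sketch evaluates it for an assumed, typical case (a 50 nm radius silica particle at an oil-water interface with γ_OW = 50 mN/m and a 90° contact angle); these values are illustrative assumptions, not data from the article.

import math

K_B = 1.380649e-23            # Boltzmann constant, J/K
T = 298.15                    # room temperature, K

def detachment_energy(radius_m, gamma_n_per_m, contact_angle_deg):
    """Energy (J) to detach a spherical particle from the interface into the phase it
    leaves most easily, i.e. the lower-energy branch of E = pi r^2 gamma (1 +/- cos theta)^2."""
    theta = math.radians(contact_angle_deg)
    return math.pi * radius_m**2 * gamma_n_per_m * (1 - abs(math.cos(theta)))**2

E = detachment_energy(radius_m=50e-9, gamma_n_per_m=0.050, contact_angle_deg=90.0)
print(f"E = {E:.2e} J = {E / (K_B * T):.0f} kT")   # roughly 1e5 kT for these assumed values

Because the result is many thousands of times the thermal energy kT, a particle that has adsorbed at the interface does not desorb thermally, which is why particle-stabilized emulsions can be so long-lived.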
https://en.wikipedia.org/wiki/Pickering_emulsion
Picoline- N -oxide describes any of three isomers with the formula CH 3 C 5 H 4 NO . All are colorless solids. Their properties and those of pyridine- N -oxide ( C 5 H 5 NO ) are similar.
https://en.wikipedia.org/wiki/Picoline-N-oxide
In the field of ordinary differential equations, the Picone identity, named after Mauro Picone, [ 1 ] is a classical result about homogeneous linear second order differential equations. Since its inception in 1910 it has been used with tremendous success in association with an almost immediate proof of the Sturm comparison theorem, a theorem whose proof took up many pages in Sturm's original memoir of 1836. It is also useful in studying the oscillation of such equations and has been generalized to other types of differential equations and difference equations. The Picone identity is used to prove the Sturm–Picone comparison theorem. Suppose that u and v are solutions of the two homogeneous linear second order differential equations in self-adjoint form (p_1(x) u′)′ + q_1(x) u = 0 and (p_2(x) v′)′ + q_2(x) v = 0. Then, for all x with v(x) ≠ 0, the following identity holds: ((u/v)(p_1 u′ v − p_2 u v′))′ = (p_1 − p_2)(u′)² + p_2 (u′ − v′ u/v)² + (q_2 − q_1) u². It can be verified by expanding the derivative and then using the two differential equations in the form (p_1 u′)′ = −q_1 u and (p_2 v′)′ = −q_2 v: ((u/v)(p_1 u′ v − p_2 u v′))′ = (u p_1 u′ − p_2 v′ u²/v)′ = u′ p_1 u′ + u (p_1 u′)′ − (p_2 v′)′ u²/v − 2 p_2 v′ u u′/v + p_2 v′ u² v′/v² = p_1 (u′)² − 2 p_2 u u′ v′/v + p_2 u² (v′)²/v² + u (p_1 u′)′ − (p_2 v′)′ u²/v = p_1 (u′)² − p_2 (u′)² + p_2 (u′)² − 2 p_2 u′ u v′/v + p_2 (u v′/v)² − u (q_1 u) + (q_2 v) u²/v = (p_1 − p_2)(u′)² + p_2 (u′ − v′ u/v)² + (q_2 − q_1) u².
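The identity lends itself to a mechanical check. The sketch below is an illustration added here, not part of the article: it verifies the identity symbolically with sympy by eliminating the second derivatives of u and v using the two self-adjoint equations.

import sympy as sp

x = sp.symbols('x')
u, v, p1, p2, q1, q2 = [sp.Function(name)(x) for name in ('u', 'v', 'p1', 'p2', 'q1', 'q2')]
up, vp = u.diff(x), v.diff(x)

# Left- and right-hand sides of the Picone identity.
lhs = ((u / v) * (p1 * up * v - p2 * u * vp)).diff(x)
rhs = (p1 - p2) * up**2 + p2 * (up - vp * u / v)**2 + (q2 - q1) * u**2

# Impose the self-adjoint equations (p1 u')' + q1 u = 0 and (p2 v')' + q2 v = 0
# by solving them for the second derivatives and substituting.
u2 = -(p1.diff(x) * up + q1 * u) / p1
v2 = -(p2.diff(x) * vp + q2 * v) / p2
difference = (lhs - rhs).subs({u.diff(x, 2): u2, v.diff(x, 2): v2})

print(sp.simplify(difference))   # 0, confirming the identity wherever v(x) != 0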
https://en.wikipedia.org/wiki/Picone_identity
The term picotechnology is a portmanteau of picometre and technology, intended to parallel the term nanotechnology. It is a hypothetical future level of technological manipulation of matter, on the scale of trillionths of a metre, or picoscale (10^−12 m). This is three orders of magnitude smaller than a nanometre (and thus most nanotechnology) and two orders of magnitude smaller than the length scales involved in most chemical transformations and measurements. Picotechnology would involve the manipulation of matter at the atomic level. A further hypothetical development, femtotechnology, would involve working with matter at the subatomic level. Picoscience is a term used by some futurists to refer to structuring of matter on a true picometre scale. Picotechnology has been described as involving the alteration of the structure and chemical properties of individual atoms, typically through the manipulation of energy states of electrons within an atom to produce metastable (or otherwise stabilized) states with unusual properties, producing some form of exotic atom. Analogous transformations known to exist in the real world are redox chemistry, which can manipulate the oxidation states of atoms; excitation of electrons to metastable excited states, as with lasers and some forms of saturable absorption; and the manipulation of the states of excited electrons in Rydberg atoms to encode information. However, none of these processes produces the types of exotic atoms described by futurists. Alternatively, picotechnology is used by some researchers in nanotechnology to refer to the fabrication of structures where atoms and devices are positioned with sub-nanometre accuracy. This is important where interaction with a single atom or molecule is desired, because of the strength of the interaction between two atoms that are very close. For example, the force between an atom in an atomic force microscope probe tip and an atom in a sample being studied varies exponentially with separation distance, and is sensitive to changes in position on the order of 50 to 100 picometres (due to Pauli exclusion at short ranges and van der Waals forces at long ranges). The Chinese science fiction novel The Three-Body Problem features a plot point in which an advanced alien civilization imbues individual protons with supercomputing powers and subsequently manipulates said protons via quantum entanglement (the fictional name for these proton-sized supercomputers is "sophons"). [ 1 ]
https://en.wikipedia.org/wiki/Picotechnology
Picramic acid, also known as 2-amino-4,6-dinitrophenol, [ 3 ] is an acid obtained by neutralizing an alcoholic solution of picric acid with ammonium hydroxide. Hydrogen sulfide is then added to the resulting solution, which turns red, yielding sulfur and red crystals. These are the ammonium salts of picramic acid, from which the free acid can be extracted using acetic acid. [ 4 ] Picramic acid is explosive and very toxic. It has a bitter taste. [ 5 ] Along with its sodium salt (sodium picramate), it is used in low concentrations in certain hair dyes, such as henna; it is considered safe for this use provided the concentration remains low. [ 6 ]
https://en.wikipedia.org/wiki/Picramic_acid
Picric acid is an organic compound with the formula (O 2 N) 3 C 6 H 2 OH. Its IUPAC name is 2,4,6-trinitrophenol ( TNP ). The name "picric" comes from Greek : πικρός ( pikros ), meaning "bitter", due to its bitter taste. It is one of the most acidic phenols . Like other strongly nitrated organic compounds, picric acid is an explosive , which is its primary use. It has also been used as medicine ( antiseptic , burn treatments) and as a dye. Picric acid was probably first mentioned in the 17th-century alchemical writings of Johann Rudolf Glauber . Initially, it was made by nitrating substances such as animal horn, silk , indigo , and natural resin , the synthesis from indigo first being performed by Peter Woulfe in 1771. [ 4 ] The German chemist Justus von Liebig had named picric acid Kohlenstickstoffsäure (rendered in French as acide carboazotique ). Picric acid was given that name by the French chemist Jean-Baptiste Dumas in 1841. [ 5 ] Its synthesis from phenol , and the correct determination of its formula, were accomplished during 1841. [ 6 ] In 1799, French chemist Jean-Joseph Welter (1763–1852) produced picric acid by treating silk with nitric acid; he found that potassium picrate could explode. [ 7 ] Not until 1830 did chemists think to use picric acid as an explosive . Before then, chemists assumed that only the salts of picric acid were explosive, not the acid itself. [ 8 ] In 1871 Hermann Sprengel proved it could be detonated [ 9 ] and afterwards most military powers used picric acid as their main high explosive material. A full synthesis was later found by Leonid Valerieovich Kozakov. Picric acid was the first strongly explosive nitrated organic compound widely considered suitable to withstand the shock of firing in conventional artillery . Nitroglycerine and nitrocellulose (guncotton) were available earlier, but shock sensitivity sometimes caused detonation in an artillery barrel at the time of firing. In 1885, based on research of Hermann Sprengel, French chemist Eugène Turpin patented the use of pressed and cast picric acid in blasting charges and artillery shells . In 1887 the French government adopted a mixture of picric acid and guncotton with the name Melinite . In 1888, Britain started manufacturing a very similar mixture in Lydd , Kent, with the name Lyddite . Japan followed with an alternative stabilization approach known as Shimose powder which, instead of attempting to stabilize the material itself, removed its contact with metal by coating the inside of the shells with layer(s) of resin and wax. [ 10 ] In 1889, a mixture of ammonium cresylate with trinitrocresol , or an ammonium salt of trinitrocresol, started to be manufactured with the name Ecrasite in Austria-Hungary . By 1894 Russia was manufacturing artillery shells filled with picric acid. Ammonium picrate (known as Dunnite or explosive D ) was used by the United States beginning in 1906. However, shells filled with picric acid become unstable if the compound reacts with the metal shell or fuze casings to form metal picrates which are more sensitive than the parent phenol. The sensitivity of picric acid was demonstrated by the Halifax Explosion . Picric acid was used in the Battle of Omdurman , [ 11 ] the Second Boer War , [ 12 ] the Russo-Japanese War , [ 13 ] and World War I . [ 14 ] Germany began filling artillery shells with trinitrotoluene (TNT) in 1902. 
Toluene was less readily available than phenol, and TNT is less powerful than picric acid, but the improved safety of munitions manufacturing and storage caused the replacement of picric acid by TNT for most military purposes between the World Wars. [ 15 ] Efforts to control the availability of phenol , the precursor to picric acid, emphasize its importance in World War I . Germans are reported to have bought US supplies of phenol and converted it to acetylsalicylic acid ( aspirin ) to keep it from the Allies. At the time, phenol was obtained from coal as a co-product of coke ovens and the manufacture of gas for gas lighting . Laclede Gas reports being asked to expand production of phenol (and toluene ) to assist the war effort. [ 16 ] Both Monsanto [ 17 ] and Dow Chemical [ 18 ] began manufacturing synthetic phenol in 1915, with Dow being the main producer. Dow describes picric acid as "the main battlefield explosive used by the French. Large amounts [of phenol] also went to Japan, where it was made into picric acid sold to the Russians." [ 19 ] Thomas Edison needed phenol to manufacture phonograph records. He responded by undertaking production of phenol at his Silver Lake, New Jersey , facility using processes developed by his chemists. [ 20 ] He built two plants with a capacity of six tons of phenol per day. Production began the first week of September, one month after hostilities began in Europe. He built two plants to produce the raw material benzene at Johnstown, Pennsylvania , and Bessemer, Alabama , replacing supplies previously from Germany. Edison manufactured aniline dyes , which had previously been supplied by the German dye trust . Other wartime products included xylene , p-phenylenediamine , shellac , and pyrophyllite . Wartime shortages made these ventures profitable. In 1915, his production capacity was fully committed by midyear. [ citation needed ] The aromatic ring of phenol is activated towards electrophilic substitution reactions, and attempted nitration of phenol, even with dilute nitric acid, results in the formation of high molecular weight tars. In order to minimize these side reactions, anhydrous phenol is sulfonated with fuming sulfuric acid , and the resulting sulfonic acid is then nitrated with concentrated nitric acid . During this reaction, nitro groups are introduced, and the sulfonic acid group is displaced. The reaction is highly exothermic , and careful temperature control is required. Synthesis routes that nitrate aspirin or salicylic acid can also be used to mitigate tar formation. Carbon dioxide is lost from the former via decarboxylation , while both acetic acid and carbon dioxide are lost from the latter. [ 21 ] Another method of picric acid synthesis is direct nitration of 2,4-dinitrophenol with nitric acid. [ 22 ] [ 23 ] It crystallizes in the orthorhombic space group Pca 2 1 with a = 9.13 Å, b = 18.69 Å, c = 9.79 Å and α = β = γ = 90°. [ 24 ] By far the greatest use of picric acid has been in ammunitions and explosives. Explosive D , also known as Dunnite, is the ammonium salt of picric acid. Dunnite is more powerful but less stable than the more common explosive TNT (which is produced in a similar process to picric acid but with toluene as the feedstock). Picramide, formed by aminating picric acid (typically beginning with Dunnite), can be further aminated to produce the very stable explosive TATB . 
It has found some use in organic chemistry for the preparation of crystalline salts of organic bases (picrates) for the purpose of identification and characterization. In metallurgy , a 4% picric acid in ethanol etch, termed "picral", has been commonly used in optical metallography to reveal prior austenite grain boundaries in ferritic steels. The hazards associated with picric acid have meant it has largely been replaced with other chemical etchants. However, it is still used to etch magnesium alloys , such as AZ31. Bouin solution is a common picric-acid–containing fixative solution used for histology specimens. [ 25 ] It improves the staining of acid dyes, but it can also result in hydrolysis of any DNA in the sample. [ 26 ] Picric acid is used in the preparation of Picrosirius red , a histological stain for collagen . [ 27 ] [ 28 ] Clinical chemistry laboratory testing utilizes picric acid for the Jaffe reaction to test for creatinine . It forms a colored complex that can be measured using spectroscopy. [ 29 ] Picric acid forms red isopurpurate with hydrogen cyanide (HCN). By photometric measurement of the resulting dye, picric acid can be used to quantify hydrogen cyanide. [ 30 ] During the early 20th century, picric acid was used to measure blood glucose levels. When glucose, picric acid and sodium carbonate are combined and heated, a characteristic red color forms. With a calibrating glucose solution, the red color can be used to measure the glucose levels added. This is known as the Lewis and Benedict method of measuring glucose. [ 31 ] Much less commonly, wet picric acid has been used as a skin dye, or temporary branding agent. [ citation needed ] It reacts with proteins in the skin to give a dark brown color that may last as long as a month. [ citation needed ] During the early 20th century, picric acid was stocked in pharmacies as an antiseptic and as a treatment for burns , malaria , herpes , and smallpox . Picric-acid–soaked gauze was commonly stocked in first aid kits from that period as a burn treatment. It was notably used for the treatment of burns suffered by victims of the Hindenburg disaster in 1937. Picric acid was used as a treatment for trench foot suffered by soldiers stationed on the Western Front during World War I . [ 32 ] Picric acid has been used for many years by fly tyers to dye mole skins and feathers a dark olive green for use as fishing lures. Its popularity has been tempered by its toxic nature. [ citation needed ] Modern safety precautions recommend storing picric acid wet, to minimize the danger of explosion. Dry picric acid is relatively sensitive to shock and friction , so laboratories that use it store it in bottles under a layer of water , rendering it safe. Glass or plastic bottles are required, as picric acid can easily form metal picrate salts that are even more sensitive and hazardous than the acid itself. Industrially, picric acid is especially hazardous because it is volatile and slowly sublimes even at room temperature. Over time, the buildup of picrates on exposed metal surfaces can constitute an explosion hazard. [ 33 ] Picric acid gauze, if found in antique first aid kits, presents a safety hazard because picric acid of that vintage (60–90 years old) will have become crystallized and unstable, [ 34 ] and may have formed metal picrates from long storage in a metal first aid case. Bomb disposal units are often called to dispose of picric acid if it has dried out. 
[ 35 ] [ 36 ] In the United States there was an effort to remove dried picric acid containers from high school laboratories during the 1980s. [ citation needed ] Munitions containing picric acid may be found in sunken warships . The buildup of metal picrates over time renders them shock-sensitive and extremely hazardous. It is recommended that shipwrecks that contain such munitions not be disturbed in any way. [ 37 ] The hazard may subside when the shells become corroded enough to admit seawater as these materials are water-soluble. [ 37 ] Currently there are various fluorescent probes to sense and detect picric acid in very minute quantities. [ 38 ]
https://en.wikipedia.org/wiki/Picric_acid
Picryl chloride is an organic compound with the formula ClC 6 H 2 (NO 2 ) 3 . It is a bright yellow solid that is highly explosive, as is typical for polynitro aromatics such as picric acid . Its detonation velocity is 7,200 m/s. The reactivity of picryl chloride is strongly influenced by the presence of three electron-withdrawing nitro groups . Consequently picryl chloride is an electrophile as illustrated by its reactivity toward sulfite to give the sulfonate : [ 2 ] Picryl chloride is also a strong electron acceptor. It forms a 1:1 charge-transfer complex with hexamethylbenzene . [ 3 ]
https://en.wikipedia.org/wiki/Picryl_chloride
Pictet's experiment is the demonstration of the reflection of heat and the apparent reflection of cold in a series of experiments [ 1 ] performed in 1790 (reported in English in 1791 in An Essay on Fire [ 2 ]) by Marc-Auguste Pictet—ten years before the discovery of infrared heating of the Earth by the Sun. [ 3 ] The apparatus for most of the experiments used two concave mirrors facing one another at a distance. An object placed at the focus of one mirror would have its heat and light reflected and focused by the mirror; an object at the focus of the counterpart mirror would do the same. Placing a hot object at one focus and a thermometer at the other would register an increase in temperature on the thermometer. This was sometimes demonstrated with the explosion of a flammable mix of gases in a blackened balloon, as described and depicted by John Tyndall in 1863. [ 4 ] After "demonstrating that radiant heat, even when it was not accompanied by any light, could be reflected and focused like light", [ 5 ] Pictet used the same apparatus to demonstrate the apparent reflection of cold [ 6 ] in a similar manner. This demonstration was important to Benjamin Thompson, Count Rumford, who argued for the existence of "frigorific rays" conveying cold. Rumford's continuation of the experiments [ 7 ] and promotion of the topic [ 8 ] caused the name to be attached to the experiment. [ 9 ] The apparent reflection of cold when a cold object is placed at one focus surprised Pictet, [ 10 ] and two scholars writing about the experiment in 1985 noted "most physicists, on seeing it demonstrated for the first time, find it surprising and even puzzling." [ 11 ] The confusion may be resolved by understanding that all objects in the system—including the thermometer—are constantly radiating heat. Pictet described this by saying "the thermometer acts the same part relatively to the snow as the bullet [heat source] in relation to the thermometer." Adding a very cold object introduces an effective heat sink, whereas a room-temperature object would not, on net, cool or warm a thermometer at the other focus. [ 12 ] There are relatively few published examples of demonstrations or recreations of the experiment. Two physicists in the University of Washington system reported on demonstrations to students and colleagues and produced directions for re-creating the experiment in 1985 [ 1 ] as part of an investigation into the role of the experiment in the history of physics. Physicists at Sofia University in Bulgaria reported on reproducing the experiment for high school students in 2017. [ 13 ]
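The resolution described above can be made quantitative with an idealized radiation balance. The sketch below is an illustration added here, not part of the article: it assumes ideal black-body exchange governed by the Stefan–Boltzmann law, and the temperatures and exchange area are arbitrary values chosen only to show the sign of the effect.

SIGMA = 5.670e-8             # Stefan-Boltzmann constant, W m^-2 K^-4
AREA = 1.0e-4                # assumed effective exchange area focused by the mirrors, m^2

def net_flow_from_thermometer(t_thermometer_k, t_object_k):
    """Net power (W) the thermometer radiates toward the object at the other focus;
    a positive value means the thermometer loses heat and its reading falls."""
    return SIGMA * AREA * (t_thermometer_k**4 - t_object_k**4)

print(net_flow_from_thermometer(293.0, 293.0))   # 0 W: a room-temperature object leaves the reading unchanged
print(net_flow_from_thermometer(293.0, 273.0))   # about 0.01 W: facing snow, the thermometer is a net emitter and cools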
https://en.wikipedia.org/wiki/Pictet's_experiment
Phenanthridine is a nitrogen heterocyclic compound with the formula C 13 H 9 N. It is a colorless solid, although impure samples can be brownish. It is a precursor to fluorescent dyes that bind DNA through intercalation; examples of such dyes are ethidium bromide and propidium iodide. Phenanthridine was discovered by Amé Pictet and H. J. Ankersmit in 1891. Structurally, the molecule is flat but otherwise unremarkable. [ 3 ] Phenanthridine is typically extracted from coal tar, an abundant resource where it is found at a level of about 0.1%. [ 4 ] Phenanthridine was prepared by Pictet and Ankersmit by pyrolysis of the condensation product of benzaldehyde and aniline. [ 5 ] In the Pictet–Hubert reaction (1899) the compound is formed in a reaction of the 2-aminobiphenyl–formaldehyde adduct (an N-acyl-o-xenylamine) with zinc chloride at elevated temperatures. [ 6 ] This traditional method proceeds in low yield (approximately 30–50%) and gives various side products. The pyrolysis method involves passing benzylideneaniline through a pumice-filled tube heated to 600–800 °C, where rearrangement and decomposition occur. The resulting pyrolysis products are collected and purified through fractional distillation to remove side products such as benzene, benzonitrile, aniline, and biphenyl. The remaining crude phenanthridine can be crystallized as a mercurochloride salt for further isolation. The second method is the Morgan–Walls reaction, which gives a 42% yield of phenanthridine after purification and involves a cyclodehydration process. This route starts with heating 2-aminobiphenyl with formic acid to give o-formamidobiphenyl. The intermediate is then treated with phosphorus oxychloride to promote cyclization. Nitrobenzene, as a high-boiling solvent, can improve the yield by allowing higher reaction temperatures. Morgan and Walls in 1931 improved the Pictet–Hubert reaction by replacing the zinc chloride with phosphorus oxychloride and using nitrobenzene as a reaction solvent. [ 7 ] For this reason, the reaction is also called the Morgan–Walls reaction. [ 8 ] The reaction is similar to the Bischler–Napieralski reaction and the Pictet–Spengler reaction. In terms of reactivity, phenanthridine resembles its more common isomer acridine. It is a weak base. It forms a methiodide. It resists common oxidants. [ 9 ] It forms adducts with metal ions. [ 10 ] Phenanthridine undergoes metabolic transformation primarily through oxidative pathways in both microbial and vertebrate systems. [ 11 ] The major metabolite is the amide phenanthridone, [ 12 ] formed primarily by cytochrome P450 enzymes. The phenanthridone metabolite is more mutagenic than the parent compound. A study that tested the metabolism of phenanthridine to phenanthridone by rat lung and liver microsomes suggests that further hydroxylation or epoxidation could enhance phenanthridone's mutagenic effects. [ 13 ] [ 14 ] [ 15 ] Phenanthridine derivatives have attracted attention from medicinal chemists. [ 15 ] The two main mechanisms of action are topoisomerase inhibition [ 16 ] and DNA intercalation. [ 17 ] [ 18 ] When functionalized, phenanthridine derivatives can exhibit strong DNA-binding affinity, enzyme inhibition and cytotoxic effects.
[ 19 ] [ 20 ] [ 21 ] Phenanthridine derivatives form the basis for DNA-binding fluorescent dyes, such as ethidium bromide and propidium iodide, which intercalate between nucleic acid base pairs. Ethidium bromide, one such derivative, is also known as a potent mutagen. The intercalation of ethidium bromide into DNA is exploited in laboratory applications for visualizing nucleic acids during gel electrophoresis, where careful consideration of the ethidium bromide concentration and the electrophoresis conditions is essential for obtaining accurate results. [ 22 ] Phenanthridine exhibits some mutagenic properties following activation with rat liver enzymes (S-9 fraction), which simulates mammalian metabolism, making it a suspected human carcinogen. [ 23 ] Phenanthridine has also been found to be genotoxic [ 24 ] and phototoxic. [ 25 ] Furthermore, phenanthridine can be metabolized to phenanthridone, which has been identified as directly mutagenic in Salmonella strain TA-98. Research suggests that phenanthridone can interact with DNA and induce mutations without requiring enzymatic activation. [ 13 ] Many hydrophenanthridines have been identified in nature. These compounds, all of which are chiral, feature one or two partially hydrogenated rings. Some examples are hamayne, norpluviine, and the crinines. [ 26 ]
https://en.wikipedia.org/wiki/Pictet–Hubert_reaction
The Pictet–Spengler reaction is a chemical reaction in which a β-arylethylamine undergoes condensation with an aldehyde or ketone followed by ring closure. The reaction was discovered in 1911 by Amé Pictet and Theodor Spengler (22 February 1886 – 18 August 1965). [ 1 ] Traditionally, an acidic catalyst in a protic solvent was employed with heating; [ 2 ] however, the reaction has been shown to work in aprotic media in superior yields and sometimes without acid catalysis. [ 3 ] The Pictet–Spengler reaction can be considered a special case of the Mannich reaction, which follows a similar reaction pathway. The driving force for this reaction is the electrophilicity of the iminium ion generated from the condensation of the aldehyde and amine under acidic conditions. This explains the need for an acid catalyst in most cases, as the imine is not electrophilic enough for ring closure but the iminium ion is capable of undergoing the reaction. The Pictet–Spengler reaction is widespread in both industry and biosynthesis. It has remained an important reaction in the fields of alkaloid and organic synthesis since its inception, where it has been employed in the development of many beta-carbolines. Natural Pictet–Spengler reactions typically employ an enzyme, such as strictosidine synthase. Pictet–Spengler products can be isolated from many products initially derived from nature, including foodstuffs such as soy sauce and ketchup. In such cases it is common to find the amino acid tryptophan and various aldoses used as the biological feedstock. Nucleophilic aromatic rings such as indole or pyrrole give products in high yields under mild conditions, while less nucleophilic aromatic rings such as a phenyl group give poorer yields or require higher temperatures and strong acid. The original Pictet–Spengler reaction was the reaction of phenethylamine and dimethoxymethane, catalysed by hydrochloric acid, forming a tetrahydroisoquinoline. The Pictet–Spengler reaction has been applied to solid-phase combinatorial chemistry with great success. [ 4 ] [ 5 ] An analogous reaction with an aryl-β-ethanol is called the oxa-Pictet–Spengler reaction. [ 6 ] The reaction mechanism occurs by initial formation of an iminium ion ( 2 ) followed by electrophilic addition at the 3-position, in accordance with the expected nucleophilicity of indoles, to give the spirocycle 3 . After migration of the best migrating group, deprotonation gives the product ( 5 ). Replacing the indole with a 3,4-dimethoxyphenyl group gives the reaction named the Pictet–Spengler tetrahydroisoquinoline synthesis. Reaction conditions are generally harsher than for the indole variant, and require refluxing conditions with strong acids like hydrochloric acid, trifluoroacetic acid or superacids. [ 7 ] [ 8 ] Instead of catalyzing the Pictet–Spengler cyclization with strong acid, one can acylate the imine, forming the intermediate N-acyliminium ion. The N-acyliminium ion is a very powerful electrophile and most aromatic ring systems will cyclize under mild conditions with good yields. [ 9 ] Tadalafil is synthesized via the N-acyliminium Pictet–Spengler reaction. [ 10 ] This reaction can also be catalyzed by AuCl 3 and AgOTf. [ 11 ] When the Pictet–Spengler reaction is performed with an aldehyde other than formaldehyde, a new chiral center is created. Several substrate- or auxiliary-controlled diastereoselective Pictet–Spengler reactions have been developed. [ 12 ] [ 13 ] Additionally, List et al.
have published a chiral Brønsted–Lowry acid that catalyzes asymmetric Pictet–Spengler reactions. [ 14 ] Tryptophans: diastereocontrolled reaction. The reaction of enantiopure tryptophan or its short-chain alkyl esters leads to 1,2,3,4-tetrahydro-β-carbolines in which a new chiral center at C-1 adopts either a cis or trans configuration towards the C-3 carboxyl group. The cis product is the kinetically controlled one, i.e. it is favoured at lower temperatures. At higher temperatures the reaction becomes reversible and usually favours racemisation. 1,3- trans -dominated products can be obtained with N b -benzylated tryptophans, which are accessible by reductive amination . The benzyl group can be removed hydrogenolytically afterwards. As a rough rule, 13 C NMR signals for C1 and C3 are downfield shifted in cis products relative to trans products (see steric compression effect ). [ 3 ] [ 15 ]
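The overall condensation-plus-cyclization bookkeeping described above (amine + carbonyl compound → ring-closed product + water) can be checked programmatically. The following is a minimal sketch using the open-source RDKit toolkit for the parent case of tryptamine and formaldehyde giving tryptoline (a 1,2,3,4-tetrahydro-β-carboline); the SMILES strings and the tolerance value are illustrative choices by the editor, not taken from the article or its references.

```python
# Minimal sketch (not from the article): check that the Pictet–Spengler
# condensation of tryptamine with formaldehyde to give tryptoline is
# consistent with loss of exactly one molecule of water.
from rdkit import Chem
from rdkit.Chem.Descriptors import ExactMolWt
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

tryptamine   = Chem.MolFromSmiles("NCCc1c[nH]c2ccccc12")      # beta-arylethylamine
formaldehyde = Chem.MolFromSmiles("C=O")                      # simplest aldehyde
tryptoline   = Chem.MolFromSmiles("C1CNCC2=C1C3=CC=CC=C3N2")  # tetrahydro-beta-carboline
water        = Chem.MolFromSmiles("O")

lhs = ExactMolWt(tryptamine) + ExactMolWt(formaldehyde)
rhs = ExactMolWt(tryptoline) + ExactMolWt(water)

print(CalcMolFormula(tryptoline))                # C11H12N2
print(f"mass balance error: {abs(lhs - rhs):.4f} Da")
assert abs(lhs - rhs) < 1e-3                     # condensation releases one H2O
```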
https://en.wikipedia.org/wiki/Pictet–Spengler_reaction
In painting , photography , graphical perspective and descriptive geometry , a picture plane is an image plane located between the "eye point" (or oculus ) and the object being viewed and is usually coextensive with the material surface of the work. It is ordinarily a vertical plane perpendicular to the sightline to the object of interest. In the technique of graphical perspective the picture plane has several features; for example, the horizon frequently features vanishing points of lines appearing parallel in the foreground. The orientation of the picture plane is always perpendicular to the axis of the viewer's sight. For example, if you are looking at a building in front of you and your sightline is entirely horizontal, then the picture plane is perpendicular to the ground and to the axis of your sight. If you are looking up or down, then the picture plane remains perpendicular to your sight but is no longer at a 90-degree angle to the ground. When this happens a third vanishing point will appear in most cases, depending on what you are seeing (or drawing). G. B. Halsted included the picture plane in his book Synthetic Projective Geometry : "To 'project' from a fixed point M (the 'projection vertex') a figure, the 'original', composed of points B, C, D etc. and straights b, c, d etc., is to construct the 'projecting straights' $\overline{MB},\ \overline{MC},\ \overline{MD},$ and the 'projecting planes' $\overline{Mb},\ \overline{Mc},\ \overline{Md}.$ Thus is obtained a new figure composed of straights and planes, all on M, and called an 'eject' of the original." "To 'cut' by a fixed plane μ (the picture-plane) a figure, the 'subject' made up of planes β, γ, δ, etc., and straights b, c, d , etc., is to construct the meets $\overline{\mu\beta},\ \overline{\mu\gamma},\ \overline{\mu\delta}$ and passes $\dot{\mu b},\ \dot{\mu c},\ \dot{\mu d}.$ Thus is obtained a new figure composed of straights and points, all on μ, and called a 'cut' of the subject. If the subject is an eject of an original, the cut of the subject is an 'image' of the original." [ 2 ] A well-known phrase has accompanied many discussions of painting during the period of modernism . [ 3 ] Coined by the influential art critic Clement Greenberg in his essay called "Modernist Painting", the phrase "integrity of the picture plane" has come to denote how the flat surface of the physical painting functions in older as opposed to more recent works. That phrase is found in the following sentence in his essay: "The Old Masters had sensed that it was necessary to preserve what is called the integrity of the picture plane: that is, to signify the enduring presence of flatness underneath and above the most vivid illusion of three-dimensional space." Greenberg seems to be referring to the way painting relates to the picture plane in both the modern period and the "Old Master" period. [ 4 ]
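A minimal numerical sketch of the geometry described above: points of a scene are projected onto a picture plane perpendicular to the sightline, here taken (as an editorial assumption, not from Halsted or Greenberg) to be the plane z = d in a camera-style coordinate system with the eye point at the origin.

```python
# Minimal sketch (editorial illustration): central projection of 3D points
# onto a picture plane perpendicular to the viewing axis at distance d.
from typing import Tuple

def project_to_picture_plane(point: Tuple[float, float, float],
                             d: float = 1.0) -> Tuple[float, float]:
    """Project a scene point (x, y, z), z > 0, from the eye point at the
    origin onto the picture plane z = d; returns picture-plane coordinates."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the picture plane")
    return (d * x / z, d * y / z)

# Points on a horizontal edge below eye level recede toward the horizon
# (projected y approaches 0) as z grows, illustrating a vanishing point.
edge = [(1.0, -1.0, z) for z in (2.0, 4.0, 8.0, 16.0)]
print([project_to_picture_plane(p, d=1.0) for p in edge])
```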
https://en.wikipedia.org/wiki/Picture_plane
In mathematics, the Pidduck polynomials s n ( x ) are polynomials introduced by Pidduck ( 1910 , 1912 ), given by the generating function described in ( Roman 1984 , 4.4.3) and ( Boas & Buck 1958 , p. 38).
https://en.wikipedia.org/wiki/Pidduck_polynomials
The Pidgeon process is a practical method for smelting magnesium . The most common method involves the raw material, dolomite, being fed into an externally heated reduction tank and then thermally reduced to metallic magnesium using 75% ferrosilicon as a reducing agent in a vacuum . [ 1 ] Overall, the steps in magnesium smelting via the Pidgeon process involve dolomite calcination , grinding and pelleting, and vacuum thermal reduction. [ 1 ] Besides the Pidgeon process, electrolysis of magnesium chloride is also used for commercial production of magnesium, especially for magnesite ores, [ 2 ] and at one point in time accounted for 75% of the world's magnesium production. [ 3 ] By 2000, producing a kilogram of magnesium by the Pidgeon process took between 17 and 20 kilowatt-hours. [ 2 ] The Pidgeon processes in Canada in the year 2000 all used sulfur hexafluoride (SF 6 ) to cover the reaction so as not to introduce stray oxygen to it. Research to replace SF 6 with boron trifluoride was underway in 2000. [ 2 ] By 2011, magnesium production had departed from Canada as a consequence of the Kyoto Protocol . [ 4 ] Wu, Han and Liu claimed that "China is the world's largest producer of primary magnesium and has a magnesium smelting industry that is mainly based on the Pidgeon process" in an era in which China had obtained an 80% market share of production of magnesium metal. [ 1 ] The general reaction that occurs in the Pidgeon process is: Si(s) + 2 MgO(s) → SiO 2 (s) + 2 Mg(g). For industrial use, ferrosilicon is used in place of pure silicon because it is cheaper and more readily available. The iron from the alloy is a spectator in the reaction. CaC 2 may also be used as an even cheaper alternative to silicon and ferrosilicon, but is disadvantageous because it decreases the magnesium yield slightly. [ 5 ] The magnesium raw material of this type of reaction is magnesium oxide , which is obtained in many ways. In all cases, the raw materials must be calcined to remove both water and carbon dioxide. Magnesium oxide can also be obtained from sea- or lake-water magnesium chloride hydrolyzed to the hydroxide; the Mg(OH) 2 is then thermally dehydrated. Another option is to use mined magnesite (MgCO 3 ) calcined to magnesium oxide. The most used raw material is mined dolomite, a mixed (Ca,Mg)CO 3 , where the calcium oxide present in the reaction zone scavenges the silica formed, releasing heat and consuming one of the products, ultimately helping push the equilibrium to the right. The two chemical steps are thus (1) dolomite calcination, MgCO 3 ·CaCO 3 (s) → MgO·CaO(s) + 2 CO 2 (g), and (2) reduction, 2 (MgO·CaO)(s) + Si(Fe)(s) → 2 Mg(g) + Ca 2 SiO 4 (s) + Fe(s). The Pidgeon process reaction is endothermic (ΔH° ≈ 183.0 kJ/mol Si). Thermodynamically, the required reaction temperatures for both MgO and calcined dolomite decrease when a vacuum is used. [ 5 ] The Chinese Pidgeon process is described here by Wu, Han and Liu. Because the reaction is endothermic, heat must be applied to initiate and sustain it; this heat requirement may be very high. To keep reaction temperatures low, the processes are operated under vacuum. A rotary kiln is typically used for dolomite calcination. The calcined dolomite is then mixed with the finely ground reducing agent, ferrosilicon, and the catalyst, fluorite . The mixed materials are pressed into spherical pellets and charged into cylindrical nickel-chromium steel retorts . A number of retorts are placed in a furnace; until charging, the pellets are kept in sealed paper bags to avoid moisture absorption, which would reduce the calcined dolomite activity and the magnesium yield.
The pellets are then placed into a reduction tank and heated to 1200 °C. The retort is evacuated to a vacuum of 13.3 Pa or better, and magnesium vapour is produced. Magnesium crystals are removed from the condensers, slag is removed as a solid, and the retort is recharged. The crude magnesium is refined with a flux , and commercial magnesium ingot is produced; the authors do not identify the name or the characteristics of the flux. [ 1 ] Typical flux composition is 49 wt% anhydrous magnesium chloride , 27 wt% potassium chloride , 20 wt% barium chloride and 4 wt% calcium fluoride . [ 6 ] [ 7 ] The Canadian variant is described here with reference to the Chinese variant. In 2000, Canada had three magnesium smelters. All three used SF 6 as a cover gas to prevent oxidation and combustion of exposed surfaces of magnesium, which is highly combustible at STP. The SF 6 cover gas had been in use at that point for over 20 years by all industries which dealt with raw magnesium. [ 2 ] Canadian industry was tasked with discovering a suitable alternative cover gas so as not to be sacrificed to Action Plan 2000 on Climate Change . [ 8 ] [ 9 ] SF 6 had been deemed to have a Global Warming Potential (GWP) factor of 23,900 times that of CO 2 . [ 9 ] By 2011, magnesium production had departed from Canada because of the Kyoto Protocol . [ 4 ] Many technologies have been developed for producing magnesium metal. These approaches can be broadly classified as electrolytic and thermic. [ 10 ] The main manifestation of the electrolytic route is the Dow process; the main application of the thermic route is the Pidgeon process. The Bolzano process merits mention because it is very similar to the Pidgeon process, except that heating is achieved through electric heating conductors and the retorts are placed vertically into large blocks. [ 5 ] [ 11 ] The Pidgeon process is less technologically complex, and because of the distillation/vapour-deposition conditions a high-purity product is easily achievable. [ 5 ] Although the Pidgeon process has many advantages, there are some environmental disadvantages as well. With the increased demand for magnesium in recent years, production through ore reduction has been emitting larger amounts of carbon dioxide and particulate matter . [ 12 ] There are also environmental impacts because creating lightweight materials in the first place requires more energy than producing the materials being replaced, typically iron or steel . Approximately 10.4 kg of coal is burned and 37 kg of carbon dioxide is released per 1 kg of magnesium obtained, compared with less than 2 kg of carbon dioxide to produce 1 kg of steel. [ 13 ] [ 14 ] [ 15 ] In China, production of magnesium using the Pidgeon process has a 60% higher global warming impact than aluminum, a competing metal mass-produced in the country as well. [ 15 ] The silicothermic reduction of dolomite was first developed by Amati in 1938 at the University of Padua . Immediately afterward, industrial production was established in Bolzano (Italy), using what is now better known as the Bolzano process . [ 16 ] A few years later, in 1939, when Canada and its allies entered WW2 , they were short on supplies that required magnesium, such as bombs, other military devices and the aluminum alloys needed for aircraft. Dr. Lloyd Montgomery Pidgeon at the National Research Council was able to create a method for extracting magnesium from dolomite in a vacuum at high temperature with ferrosilicon as the reducing agent.
At this time, the ferrosilicon method was known; however, it had yet to be commercialized. By early 1942, a successful pilot test had taken place. [ 17 ] Since then, the Pidgeon process has continued to be widely used, especially in China, the world's largest magnesium producer.
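A small back-of-the-envelope sketch of the stoichiometry and emissions figures quoted above; this is an editorial illustration using standard atomic masses and the numbers already given in the article, not an engineering calculation.

```python
# Minimal sketch: theoretical magnesium yield of the silicothermic reaction
#   Si(s) + 2 MgO(s) -> SiO2(s) + 2 Mg(g)
# and the CO2-per-kg figures quoted in the article.
M_Mg, M_Si = 24.305, 28.086            # g/mol, standard atomic masses

mg_per_kg_si = 2 * M_Mg / M_Si          # kg Mg produced per kg Si consumed (ideal)
print(f"theoretical yield: {mg_per_kg_si:.2f} kg Mg per kg Si")

co2_per_kg_mg, co2_per_kg_steel = 37.0, 2.0   # figures quoted in the article
print(f"CO2 intensity ratio Mg/steel: {co2_per_kg_mg / co2_per_kg_steel:.0f}x")
```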
https://en.wikipedia.org/wiki/Pidgeon_process
The pie rule , sometimes referred to as the swap rule , is a rule used to balance abstract strategy games where a first-move advantage has been demonstrated. After the first move is made in a game that uses the pie rule, the second player must select one of two options: either switch places with the first player, adopting that opening move as their own, or let the move stand and continue playing as the second player. Depending on the game, there may be two ways to implement switching places. The use of the pie rule was first reported in 1909 for a game in the Mancala family. [ 1 ] Among modern games, Hex uses this rule. [ 2 ] TwixT in tournament play uses a swap rule. [ 3 ] In Meridians , the first player places 2 stones on the board before the second player chooses the color. The rule can be applied to other games which are otherwise solved for one player, such as Gomoku or Tablut . [ 4 ] The rule gets its name from the divide and choose method of ensuring fairness when dividing a pie between two people: one person cuts the pie in half, then the other person chooses which half to eat. The person cutting the pie, knowing that the other person will choose the larger piece, will make as equal a division as possible. This rule acts as a normalization factor in games where there may be a first-move advantage. In games that cannot end in a draw, such as Hex, the pie rule theoretically gives the second player a win (since one of the players must have a winning strategy after the first move, and the second player can choose to be this player), but the practical result is that the first player will choose a move neither too strong nor too weak, and the second player will have to decide whether switching places is worth the first-move advantage. In Go , one player can choose the amount of komi . (These are the points given to the second player as compensation for not going first.) The other player then decides whether to accept that komi or to switch colors. This leads players to choose fair komi amounts, because if they choose a komi that is too advantageous, the other player can just choose to play White and take advantage of that high komi. [ 5 ]
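A toy sketch of the balancing logic just described: the second player swaps exactly when the opening move is judged to be worth more than half the pie. The evaluation numbers and the 0.5 threshold are editorial assumptions for illustration, not part of any particular game's rules.

```python
# Toy sketch (editorial illustration) of the pie-rule decision.
# `advantage` is the second player's estimate, in [0, 1], of how strong the
# first player's opening move is (0.5 = perfectly balanced).

def second_player_swaps(advantage: float) -> bool:
    """Swap sides iff the opening move is judged better than a fair split."""
    return advantage > 0.5

# Knowing this, the first player should pick the most balanced opening
# available, just as the pie-cutter makes the two halves as equal as possible.
def best_opening(openings: dict) -> str:
    return min(openings, key=lambda move: abs(openings[move] - 0.5))

candidate_openings = {"centre": 0.80, "edge": 0.35, "near-corner": 0.52}
move = best_opening(candidate_openings)
print(move, "-> swap?", second_player_swaps(candidate_openings[move]))
```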
https://en.wikipedia.org/wiki/Pie_rule
In mathematics, a piecewise algebraic space is a generalization of a semialgebraic set , introduced by Maxim Kontsevich and Yan Soibelman . The motivation was for the proof of Deligne's conjecture on Hochschild cohomology . Robert Hardt, Pascal Lambrechts, Victor Turchin, and Ismar Volić later developed the theory.
https://en.wikipedia.org/wiki/Piecewise_algebraic_space
In mathematics , piecewise syndeticity is a notion of largeness of subsets of the natural numbers . A set $S \subset \mathbb{N}$ is called piecewise syndetic if there exists a finite subset G of $\mathbb{N}$ such that for every finite subset F of $\mathbb{N}$ there exists an $x \in \mathbb{N}$ such that $x + F \subseteq \bigcup_{n \in G} (S - n)$, where $S - n = \{ m \in \mathbb{N} : m + n \in S \}$. Equivalently, S is piecewise syndetic if there is a constant b such that there are arbitrarily long intervals of $\mathbb{N}$ where the gaps in S are bounded by b . There are many alternative definitions of largeness that also usefully distinguish subsets of the natural numbers.
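The "bounded gaps on arbitrarily long intervals" characterization can be probed computationally on a finite truncation. The sketch below is an editorial illustration, not from the article: it checks whether, within a finite set, there is a window of a requested length whose internal gaps are all at most b; finding such windows for every length tried is evidence of, not a proof of, piecewise syndeticity.

```python
# Minimal finite check (editorial sketch) of the gap characterization of
# piecewise syndeticity: look for a long window in which consecutive
# elements of S are never more than b apart.

def has_window_with_bounded_gaps(S, length, b):
    """True if some interval of `length` consecutive integers meets S in
    points whose successive gaps (including the gaps to the window's
    endpoints) are all <= b."""
    S = sorted(S)
    top = S[-1]
    for start in range(0, top - length + 1):
        pts = [x for x in S if start <= x <= start + length]
        if not pts:
            continue
        gaps = ([pts[0] - start]
                + [q - p for p, q in zip(pts, pts[1:])]
                + [start + length - pts[-1]])
        if max(gaps) <= b:
            return True
    return False

# Example: the multiples of 3 up to 300 form a syndetic (hence piecewise
# syndetic) set; every window has gaps bounded by b = 3.
S = set(range(0, 301, 3))
print(all(has_window_with_bounded_gaps(S, L, b=3) for L in (10, 50, 100)))
```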
https://en.wikipedia.org/wiki/Piecewise_syndetic_set
A pier , in architecture , is an upright support for a structure or superstructure such as an arch or bridge . Sections of structural walls between openings (bays) can function as piers. External or free-standing walls may have piers at the ends or on corners. The simplest cross section of a pier is square or rectangular , but other shapes are also common. In medieval architecture , massive circular supports called drum piers, cruciform (cross-shaped) piers, and compound piers are common architectural elements. Columns are a similar upright support, but stand on a round base; in many contexts columns may also be called piers. In buildings with a sequence of bays between piers, each opening (window or door) between two piers is considered a single bay. Single-span bridges have abutments at each end that support the weight of the bridge and serve as retaining walls to resist lateral movement of the earthen fill of the bridge approach. [ 1 ] Multi-span bridges require piers to support the ends of spans between these abutments. In cold climates, the upstream edge of a pier may include a starling to prevent accumulation of broken ice during peak snowmelt flows. The starling has a sharpened upstream edge sometimes called a cutwater . The cutwater edge may be of concrete or masonry, but is often capped with a steel angle to resist abrasion and focus force at a single point to fracture floating pieces of ice striking the pier. In cold climates, the starling is typically sloped at an angle of about 45°, so that current pushing against the ice tends to lift the downstream edge of the ice, translating the horizontal force of the current into a vertical force against a thinner cross-section of ice, until the unsupported weight fractures the piece of ice and allows it to pass on either side of the pier. [ 2 ] In the Arc de Triomphe , Paris, the central arch and side arches are raised on four massive planar piers [ clarification needed ] . Donato Bramante 's original plan for St Peter's Basilica in Rome has richly articulated piers. Four piers support the weight of the dome at the central crossing. These piers were found to be too small to support the weight and were changed later by Michelangelo to account for the massive weight of the dome. [ 3 ] The piers of the four apses that project from each outer wall are also strong, to withstand the outward thrust of the half-domes upon them. Many niches articulate the wall-spaces of the piers. [ 3 ]
https://en.wikipedia.org/wiki/Pier_(architecture)
The Pierce Protein Assay is a method of protein quantification . It provides a quick estimation of the protein amount in a given sample. [ 1 ] The assay is separated into three main parts: preparation of the diluted albumin (BSA) standards, preparation of the bicinchoninic acid (BCA) working reagent, and quantification of proteins (using either the test tube or the microplate procedure). This method is able to detect protein concentrations from as low as 25 μg/ml up to 2000 μg/ml in a 65 μl sample, using the standard protocol. This method may be preferred for samples containing detergents or other reducing agents . This method has a fast detection speed and low protein-to-protein variability in comparison to the BCA or Coomassie (Bradford) assays. This method has a stable end point. This method has greater protein-to-protein variability than the BCA Assay .
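Quantification in such assays is done against the BSA standard curve prepared in the first step: readings of the standards are fitted and the fit is inverted for the unknowns. The sketch below is a generic linear standard-curve calculation with made-up absorbance values; it is an editorial illustration, not the vendor's protocol, and real curves are often fitted with a polynomial or four-parameter model instead.

```python
# Generic standard-curve sketch (editorial illustration, invented numbers):
# fit absorbance vs. BSA concentration, then invert the fit for an unknown.
import numpy as np

bsa_ug_per_ml = np.array([0, 125, 250, 500, 1000, 2000])       # diluted BSA standards
absorbance    = np.array([0.10, 0.16, 0.22, 0.34, 0.58, 1.05])  # hypothetical readings

slope, intercept = np.polyfit(bsa_ug_per_ml, absorbance, deg=1)

def protein_concentration(a_sample: float) -> float:
    """Estimate protein concentration (ug/mL) from a sample absorbance."""
    return (a_sample - intercept) / slope

print(f"unknown at A = 0.45 -> {protein_concentration(0.45):.0f} ug/mL")
```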
https://en.wikipedia.org/wiki/Pierce_Protein_Assay
In mathematics, Pieri's formula , named after Mario Pieri , describes the product of a Schubert cycle by a special Schubert cycle in the Schubert calculus , or the product of a Schur polynomial by a complete symmetric function. In terms of Schur functions $s_\lambda$ indexed by partitions λ, it states that $s_\mu h_r = \sum_\lambda s_\lambda$, where $h_r$ is a complete homogeneous symmetric polynomial and the sum is over all partitions λ obtained from μ by adding r elements, no two in the same column. By applying the ω involution on the ring of symmetric functions, one obtains the dual Pieri rule for multiplying an elementary symmetric polynomial with a Schur polynomial: $s_\mu e_r = \sum_\lambda s_\lambda$, where the sum is now taken over all partitions λ obtained from μ by adding r elements, no two in the same row . Pieri's formula implies Giambelli's formula . The Littlewood–Richardson rule is a generalization of Pieri's formula giving the product of any two Schur functions. Monk's formula is an analogue of Pieri's formula for flag manifolds.
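A small worked instance of the rule as stated above, for μ = (2,1) and r = 2: the partitions λ are obtained by adding two boxes to the Young diagram of μ, with no two added boxes in the same column (or, for the dual rule, in the same row).

```latex
% Worked example of Pieri's rule for \mu = (2,1), r = 2:
s_{(2,1)}\, h_2 \;=\; s_{(4,1)} + s_{(3,2)} + s_{(3,1,1)} + s_{(2,2,1)},
% and of the dual rule (no two added boxes in the same row):
\qquad
s_{(2,1)}\, e_2 \;=\; s_{(3,2)} + s_{(3,1,1)} + s_{(2,2,1)} + s_{(2,1,1,1)}.
```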
https://en.wikipedia.org/wiki/Pieri's_formula
Pierre A. Deymier is a researcher in phononics , [ 1 ] acoustic metamaterials , [ 1 ] and materials science . He is a Professor of Materials Science and Engineering and was previously department head at the University of Arizona . [ 2 ] He holds appointments with the applied mathematics graduate interdisciplinary program, [ 3 ] the BIO5 institute, and the School of Sustainable Engineered Systems at the University of Arizona . More recently, he has proposed a novel approach akin to quantum computing using the properties of phonons rather than qubits , which he has dubbed "phi-bits" or "phase-bits". [ 4 ] [ 5 ] Deymier received his engineer's degree in materials science in 1982 from the University of Montpellier in France and his Ph.D. in Materials Science & Engineering from MIT in 1985. [ 6 ] His dissertation research was focused on computational materials science. [ citation needed ] He became assistant professor of materials science & engineering at the University of Arizona in 1985. [ 6 ] His daughter, Alix Deymier, is a professor of biomedical engineering at the University of Connecticut . [ citation needed ] Deymier has published over 180 peer-reviewed publications, [ 7 ] some of which are highly cited.
https://en.wikipedia.org/wiki/Pierre_Deymier
Pierre Gabriel (1 August 1933 [ 1 ] – 24 November 2015), also known as Peter Gabriel , was a French mathematician at the University of Strasbourg (1962–1970), University of Bonn (1970–1974) and University of Zürich (1974–1998) who worked on category theory , algebraic groups , and representation theory of algebras. He was elected a correspondent member of the French Academy of Sciences in November 1986. [ 2 ] His most famous result is Gabriel's theorem, which classifies the quivers of finite representation type: a connected quiver is of finite type precisely when its underlying graph is one of the simply laced (ADE) Dynkin diagrams.
https://en.wikipedia.org/wiki/Pierre_Gabriel
Pierre Monsan (born June 25, 1948, in Prades, Pyrénées-Orientales , France ) [ 1 ] is a French biochemist and entrepreneur. He is currently Professor emeritus at the Institut national des sciences appliquées de Toulouse (INSA Toulouse, affiliated to the University of Toulouse ) and the founding director of the pre-industrial demonstrator Toulouse White Biotechnology (TWB). [ 2 ] Monsan's scientific interests include biocatalysis , biochemical and enzyme engineering . Beyond his academic work, Monsan is co-inventor on numerous patents and co-founded several industrial biotechnology companies. [ 3 ] Monsan was educated at INSA Toulouse and the University of Toulouse, where he graduated with an engineering degree ( Ingénieur diplômé ) in Biological Chemistry in 1969. [ 1 ] He was then awarded his Doctor-Engineer degree in 1971 and his PhD degree in 1977 from INSA Toulouse for research on enzyme immobilization . [ 4 ] He served as lecturer in the Department of Biochemical Engineering at INSA Toulouse from 1969 and was later promoted to Assistant Professor (1973) and then Professor (1981). [ 5 ] In 1984, Monsan took a leave from academia and co-founded BioEurope, a startup company specialized in industrial biocatalysis. There he served as CSO from 1984 to 1989, CEO from 1989 to 1993 and CSO again from 1993 to 1999 after the acquisition of the company by the Solabia Group. [ 6 ] In 1993, Monsan returned to INSA Toulouse to lead a research group focusing on the discovery, characterisation and molecular engineering of enzymes, including glucansucrases and lipases . He was also appointed Professor at Ecole des Mines-ParisTech in 1993. [ 1 ] From 1999 to 2003 he served as head of the Department of Biochemical Engineering at INSA Toulouse. In 2012, he founded the pre-industrial demonstrator "Toulouse White Biotechnology" (TWB) with a €20M grant within the framework of the Investing for the Future national program (also called the grand emprunt ) and served as its founding director until 2019. [ 7 ] [ 8 ] Monsan is presently Professor emeritus at INSA Toulouse and the CEO of Cell-Easy, a start-up specializing in the production of stem cells. [ 9 ] Research by Monsan and his collaborators has focused on biocatalysis , biochemical engineering and enzyme engineering , published in over 230 articles. [ 10 ] His fundamental research includes investigation of structure-activity relationships of enzymes (particularly glycoside hydrolases [ 11 ] [ 12 ] [ 13 ] and lipases [ 14 ] ) and enzyme discovery by functional metagenomics . [ 15 ] [ 16 ] [ 17 ] His more applied research involves biocatalysis in non-conventional ( anhydrous ) media for the synthesis of chemicals ( e.g. , chiral resolution to obtain enantiopure compounds ), [ 18 ] [ 19 ] [ 20 ] protein engineering (e.g. modification of substrate specificity , [ 21 ] [ 22 ] [ 23 ] enantioselectivity, [ 24 ] or thermostability [ 25 ] ), methods for enzyme immobilization , [ 26 ] [ 27 ] [ 28 ] and bioreactor design and development. [ 29 ] [ 30 ] [ 31 ] Monsan has been heavily involved in technology transfer throughout his career and is co-inventor of over 60 patents. [ 32 ] He has developed several industrial biocatalytic processes for the production of polysaccharides , [ 33 ] oligosaccharides [ 34 ] and amino acid derivatives.
[ 35 ] Companies he has co-founded include BioEurope (1984; biocatalytic synthesis of reagents for the food, pharma and nutrition industries; now owned by the Solabia group), [ 6 ] [ dead link ] Biotrade (1996; waste water treatment) and Genibio (1998; food additives). [ 36 ] He is, and has been, a member of the scientific advisory boards of several companies, including Danisco Venture, [ 37 ] PCAS , [ 38 ] and Deinove. [ 39 ] In 2012, Monsan founded the pre-industrial demonstrator Toulouse White Biotechnology (TWB), [ 7 ] an original institute dedicated to technology transfer through a consortium of public and industrial partners. [ 40 ] [ 41 ] TWB promotes industrial biotechnology and the biobased economy through collaborative public/private research and development projects ( e.g. , the THANAPLAST project in partnership with Carbios ) [ 42 ] and the creation of startups such as EnobraQ (development of yeasts able to metabolize CO 2 ) [ 43 ] and Pili (production of bacterial ink). [ 44 ]
https://en.wikipedia.org/wiki/Pierre_Monsan
Pierre Samuel (12 September 1921 [ 1 ] – 23 August 2009 [ 2 ] ) was a French mathematician , known for his work in commutative algebra and its applications to algebraic geometry . The two-volume work Commutative Algebra that he wrote with Oscar Zariski is a classic. Other books of his covered projective geometry and algebraic number theory . Samuel studied at the Lycée Janson-de-Sailly in Paris before attending the École Normale Supérieure where he studied for his Agrégé de mathematique. He received his Master of Arts and then a Ph.D. from Princeton University in 1947, under the supervision of Oscar Zariski, with a thesis "Ultrafilters and Compactification of Uniform Spaces". Samuel ran a Paris seminar during the 1960s, and became Professeur émérite at the Université Paris-Sud (Orsay). His lectures on unique factorization domains published by the Tata Institute of Fundamental Research played a significant role in computing the Picard group of a Zariski surface via the work of Jeffrey Lang and collaborators. The method was inspired by earlier work of Nathan Jacobson and Pierre Cartier , another outstanding member of the Bourbaki group . Nicholas Katz related this to the concept of p -curvature of a connection introduced by Alexander Grothendieck . He was a member of the Bourbaki group , and filmed some of their meetings. A French television documentary on Bourbaki broadcast some of this footage in 2000. Samuel was also active in issues of social justice , including concerns about environmental degradation (where he was influenced by Grothendieck), and arms control . [ 3 ] He died in Paris in August 2009. [ 2 ] His doctoral students include Lucien Szpiro and Daniel Lazard . In 1958 he was an invited speaker ( Relations d'équivalence en géométrie algébrique ) at the ICM in Edinburgh . In 1969 he won the Lester R. Ford Award . [ 4 ]
https://en.wikipedia.org/wiki/Pierre_Samuel
Pierre Scerri is a French telecommunications engineer and model builder, who gained fame in 1998 after his highly accurate 1:3 scale model of a Ferrari 312 PB was featured on the BBC television programme Jeremy Clarkson's Extreme Machines . He began the project in 1978, out of a desire to have a Ferrari that could run in his dining room. [ 1 ] Pierre Bardinon , owner of the Mas du Clos race track , allowed Scerri to take detailed photographs of the actual car on display at the adjacent Ferrari museum. Based on those photographs, he drafted the schematics and made the molds for all parts of the model, a process which took 15 years. In 1989, he finally completed assembly of the engine, a perfectly scaled replica of the flat-12 engine found in the 312PB. He reportedly took extra time tuning the engine so that it would sound like the full-scale original. [ 2 ] The project was finally completed in December 1992. Scerri is now working on three new models, a Ferrari 330 P4 , another Ferrari 312PB and an engine for a Ferrari 250 GTO , all in 1:3 scale.
https://en.wikipedia.org/wiki/Pierre_Scerri
Pierre de Fermat ( / f ɜːr ˈ m ɑː / ; [ 2 ] French: [pjɛʁ də fɛʁma] ; 17 August 1601 [ a ] – 12 January 1665) was a French mathematician who is given credit for early developments that led to infinitesimal calculus , including his technique of adequality . In particular, he is recognized for his discovery of an original method of finding the greatest and the smallest ordinates of curved lines, which is analogous to that of differential calculus , then unknown, and his research into number theory . He made notable contributions to analytic geometry , probability , and optics . He is best known for his Fermat's principle for light propagation and his Fermat's Last Theorem in number theory , which he described in a note at the margin of a copy of Diophantus ' Arithmetica . He was also a lawyer [ 4 ] at the parlement of Toulouse , France . Fermat was born in 1601 [ a ] in Beaumont-de-Lomagne , France—the late 15th-century mansion where Fermat was born is now a museum. He was from Gascony , where his father, Dominique Fermat, was a wealthy leather merchant and served three one-year terms as one of the four consuls of Beaumont-de-Lomagne. His mother was Claire de Long. [ 3 ] Pierre had one brother and two sisters and was almost certainly brought up in the town of his birth. [ citation needed ] He attended the University of Orléans from 1623 and received a bachelor in civil law in 1626, before moving to Bordeaux . In Bordeaux, he began his first serious mathematical researches, and in 1629 he gave a copy of his restoration of Apollonius 's De Locis Planis to one of the mathematicians there. Certainly, in Bordeaux he was in contact with Beaugrand and during this time he produced important work on maxima and minima which he gave to Étienne d'Espagnet who clearly shared mathematical interests with Fermat. There he became much influenced by the work of François Viète . [ 5 ] In 1630, he bought the office of a councilor at the Parlement de Toulouse , one of the High Courts of Judicature in France, and was sworn in by the Grand Chambre in May 1631. He held this office for the rest of his life. Fermat thereby became entitled to change his name from Pierre Fermat to Pierre de Fermat. On 1 June 1631, Fermat married Louise de Long, a fourth cousin of his mother Claire de Fermat (née de Long). The Fermats had eight children, five of whom survived to adulthood: Clément-Samuel, Jean, Claire, Catherine, and Louise. [ 6 ] [ 7 ] [ 8 ] Fluent in six languages ( French , Latin , Occitan , classical Greek , Italian and Spanish ), Fermat was praised for his written verse in several languages and his advice was eagerly sought regarding the emendation of Greek texts. He communicated most of his work in letters to friends, often with little or no proof of his theorems. In some of these letters to his friends, he explored many of the fundamental ideas of calculus before Newton or Leibniz . Fermat was a trained lawyer making mathematics more of a hobby than a profession. Nevertheless, he made important contributions to analytical geometry , probability, number theory and calculus. [ 9 ] Secrecy was common in European mathematical circles at the time. This naturally led to priority disputes with contemporaries such as Descartes and Wallis . [ 10 ] Anders Hald writes that, "The basis of Fermat's mathematics was the classical Greek treatises combined with Vieta's new algebraic methods." 
[ 11 ] Fermat's pioneering work in analytic geometry ( Methodus ad disquirendam maximam et minimam et de tangentibus linearum curvarum ) was circulated in manuscript form in 1636 (based on results achieved in 1629), [ 12 ] predating the publication of Descartes' La géométrie (1637), which exploited the work. [ 13 ] This manuscript was published posthumously in 1679 in Varia opera mathematica , as Ad Locos Planos et Solidos Isagoge ( Introduction to Plane and Solid Loci ). [ 14 ] In Methodus ad disquirendam maximam et minimam et de tangentibus linearum curvarum , Fermat developed a method ( adequality ) for determining maxima, minima, and tangents to various curves that was equivalent to differential calculus . [ 15 ] [ 16 ] In these works, Fermat obtained a technique for finding the centers of gravity of various plane and solid figures, which led to his further work in quadrature . Fermat was the first person known to have evaluated the integral of general power functions. With his method, he was able to reduce this evaluation to the sum of geometric series . [ 17 ] The resulting formula was helpful to Newton , and then Leibniz , when they independently developed the fundamental theorem of calculus . [ citation needed ] In number theory, Fermat studied Pell's equation , perfect numbers , amicable numbers and what would later become Fermat numbers . It was while researching perfect numbers that he discovered Fermat's little theorem . He invented a factorization method— Fermat's factorization method —and popularized the proof by infinite descent , which he used to prove Fermat's right triangle theorem which includes as a corollary Fermat's Last Theorem for the case n = 4. Fermat developed the two-square theorem , and the polygonal number theorem , which states that each number is a sum of three triangular numbers , four square numbers , five pentagonal numbers , and so on. Although Fermat claimed to have proven all his arithmetic theorems, few records of his proofs have survived. Many mathematicians, including Gauss , doubted several of his claims, especially given the difficulty of some of the problems and the limited mathematical methods available to Fermat. His Last Theorem was first discovered by his son in the margin in his father's copy of an edition of Diophantus , and included the statement that the margin was too small to include the proof. It seems that he had not written to Marin Mersenne about it. It was first proven in 1994, by Sir Andrew Wiles , using techniques unavailable to Fermat. [ citation needed ] Through their correspondence in 1654, Fermat and Blaise Pascal helped lay the foundation for the theory of probability. From this brief but productive collaboration on the problem of points , they are now regarded as joint founders of probability theory . [ 18 ] Fermat is credited with carrying out the first-ever rigorous probability calculation. In it, he was asked by a professional gambler why if he bet on rolling at least one six in four throws of a die he won in the long term, whereas betting on throwing at least one double-six in 24 throws of two dice resulted in his losing. Fermat showed mathematically why this was the case. [ 19 ] The first variational principle in physics was articulated by Euclid in his Catoptrica . It says that, for the path of light reflecting from a mirror, the angle of incidence equals the angle of reflection . Hero of Alexandria later showed that this path gave the shortest length and the least time. 
[ 20 ] Fermat refined and generalized this to "light travels between two given points along the path of shortest time " now known as the principle of least time . [ 21 ] For this, Fermat is recognized as a key figure in the historical development of the fundamental principle of least action in physics. The terms Fermat's principle and Fermat functional were named in recognition of this role. [ 22 ] Pierre de Fermat died on January 12, 1665, at Castres , in the present-day department of Tarn . [ 23 ] The oldest and most prestigious high school in Toulouse is named after him: the Lycée Pierre-de-Fermat . French sculptor Théophile Barrau made a marble statue named Hommage à Pierre Fermat as a tribute to Fermat, now at the Capitole de Toulouse . Together with René Descartes , Fermat was one of the two leading mathematicians of the first half of the 17th century. According to Peter L. Bernstein , in his 1996 book Against the Gods , Fermat "was a mathematician of rare power. He was an independent inventor of analytic geometry , he contributed to the early development of calculus, he did research on the weight of the earth, and he worked on light refraction and optics. In the course of what turned out to be an extended correspondence with Blaise Pascal , he made a significant contribution to the theory of probability. But Fermat's crowning achievement was in the theory of numbers." [ 24 ] Regarding Fermat's work in analysis, Isaac Newton wrote that his own early ideas about calculus came directly from "Fermat's way of drawing tangents." [ 25 ] Of Fermat's number theoretic work, the 20th-century mathematician André Weil wrote that: "what we possess of his methods for dealing with curves of genus 1 is remarkably coherent; it is still the foundation for the modern theory of such curves. It naturally falls into two parts; the first one ... may conveniently be termed a method of ascent, in contrast with the descent which is rightly regarded as Fermat's own." [ 26 ] Regarding Fermat's use of ascent, Weil continued: "The novelty consisted in the vastly extended use which Fermat made of it, giving him at least a partial equivalent of what we would obtain by the systematic use of the group theoretical properties of the rational points on a standard cubic." [ 27 ] With his gift for number relations and his ability to find proofs for many of his theorems, Fermat essentially created the modern theory of numbers. Fermat made a number of mistakes. Some mistakes were pointed out by Schinzel and Sierpinski. [ 28 ] In his letter to Carcavi, Fermat said that he had proved that the Fermat numbers are all prime. Euler pointed out that 4,294,967,297 is divisible by 641. Also, see Weil, in "Number Theory". [ 29 ]
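A worked version of the two dice bets mentioned above (a standard textbook computation, not quoted from Fermat's correspondence) shows why the first bet is favourable in the long run and the second is not:

```latex
P(\text{at least one six in 4 throws of one die})
   = 1 - \left(\tfrac{5}{6}\right)^{4} \approx 0.5177 > \tfrac12,
\qquad
P(\text{at least one double-six in 24 throws of two dice})
   = 1 - \left(\tfrac{35}{36}\right)^{24} \approx 0.4914 < \tfrac12 .
```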
https://en.wikipedia.org/wiki/Pierre_de_Fermat
Piet Bergveld ( Dutch pronunciation: [pid ˈbɛr(ə)xfɛlt] ; born 26 January 1940) is a Dutch electrical engineer. He was professor of biosensors at the University of Twente between 1983 and 2003. He is the inventor of the ion-sensitive field-effect transistor (ISFET) sensor . [ 1 ] Bergveld's work has focused on electrical engineering and biomedical technology . Bergveld was born in Oosterwolde, Friesland on 26 January 1940. [ 1 ] In 1960 he started studying electrical engineering at the Eindhoven University of Technology ; he would have preferred to study biomedical engineering, but that was not available. Between 1964 and 1965 he did a master's degree at the Philips Natuurkundig Laboratorium . [ 1 ] In the latter half of the 1960s Bergveld started working as a scientific employee at the Technische Hogeschool Twente (which later became the University of Twente). Intrigued by the prospect of discovering and measuring the origin of electrical activity in the human brain, Bergveld started working on a new technique. In 1970, he completed the development of the ion-sensitive field-effect transistor (ISFET) sensor . [ 2 ] [ 3 ] It was based on his earlier research on the MOSFET (metal–oxide–semiconductor field-effect transistor), which he realized could be adapted into a biosensor for electrochemical and biological applications. [ 3 ] [ 4 ] In 1973, he earned his PhD at Twente, with a dissertation which delved deeper into the possibilities of ISFET sensors. [ 1 ] [ 5 ] Bergveld worked at the University of Twente from 1965 until he took up emeritus status in February 2003. [ 6 ] He had been a full professor since 1983. [ 1 ] At the university he was one of the driving forces for increased biomedical technology research and one of the founding fathers of the MESA+ research institute . [ 7 ] In 1995 Bergveld was awarded the Jacob Kistemaker [ nl ] prize by minister Hans Wijers . [ 2 ] He was elected a member of the Royal Netherlands Academy of Arts and Sciences in 1997. [ 8 ] In April 2003 Bergveld was made a Knight in the Order of the Netherlands Lion . [ 7 ]
https://en.wikipedia.org/wiki/Piet_Bergveld
Pieter Baas (28 April 1944 – 29 April 2024) was a Dutch botanist and an emeritus professor of plant systematics at Leiden University . He served as director of the Rijksherbarium [ nl ] of Leiden University between 1991 and 1999. When the institute was faced with budget cuts in 1993 he managed to preserve the collection by joining it with the university collections of Wageningen and Utrecht. This led to the founding of the National Herbarium of the Netherlands in 1999. Baas subsequently became director of the institute and served until 2005. As a botanist, Baas specialised in wood anatomy , and was an honorary fellow of the International Academy of Wood Science . Most of his pioneering research work in wood anatomy was done jointly with the American botanist and wood scientist Elisabeth Wheeler . Baas was born on 28 April 1944 in the municipality of Wieringermeer . [ 1 ] He attended the MULO and later the HBS . [ 2 ] Baas grew up with a broad interest in science. At age 17, while harvesting potatoes, he saw a natterjack toad crossing a path, appreciated the beauty of nature and decided to study natural history, after earlier having contemplated studying history. [ 2 ] [ 3 ] In 1962 Baas started studying biology at Leiden University . [ 3 ] In his first year of biology Baas hated plant systematics, as he hardly knew any plants or animals. He preferred plant anatomy and physiology. [ 2 ] While studying he was offered a job at the Rijksherbarium [ nl ] , the herbarium of Leiden University, by its director Cornelis Gijsbert Gerrit Jan van Steenis . Baas rejected the offer, having no interest in working in a herbarium. [ 3 ] For his final year of study Baas wished to stay at the Royal Botanic Gardens, Kew . Van Steenis agreed to this if Baas took up a course in systematics. [ 2 ] Between 1968 and 1969 Baas studied at the Jodrell Laboratory of the Royal Botanic Gardens under Professor Charles Russell Metcalfe . [ 4 ] [ 5 ] On his return from the United Kingdom Baas approached Van Steenis and asked to be employed as a wood anatomy expert. [ 3 ] In 1969 Baas became an employee of the Rijksherbarium. [ 3 ] [ 6 ] In 1975 Baas earned his PhD in wood anatomy, with a thesis entitled Comparative anatomy of Ilex, Nemopanthus, Sphenostemon, Phelline, and Oncotheca . [ 1 ] [ 6 ] In 1987 he became professor ( Bijzonder hoogleraar [ nl ] , paid from non-university funds) of plant systematics at Leiden University. In 1989 he was chairman of the organizing committee for the first Flora Malesiana Symposium . In 1991 he became a regular professor, succeeding Cornelis Kalkman . [ 1 ] [ 3 ] [ 5 ] In 1991 Baas became scientific director of the Rijksherbarium. [ 6 ] Baas was pressured to take over the position from Cornelis Kalkman. Although content as a researcher and not very interested in directing and managing, Baas took up the position of director out of a sense of duty. [ 3 ] Two years after he started as director, the Rijksherbarium was faced with a plan of the dean of the University Faculty of Mathematics and Natural Sciences to slash the budget by half, which would have forced Baas to fire all scientific staff. [ 2 ] Baas informed Queen Beatrix of the Netherlands of the plan. Beatrix discussed the matter with the Minister of Education, Culture & Sciences, Jo Ritzen . [ 3 ] Ritzen preferred to see the pieces of the collection returned to their countries of origin. [ 2 ] A six-year struggle ensued, after which the Ministry set aside money for broad-value biological collections.
Baas called this "his finest moment". [ 3 ] The university board and the Royal Netherlands Academy of Arts and Sciences aided Baas in his wish to see the collection preserved and a special fund was established. [ 2 ] In 1999 the National Herbarium of the Netherlands was formed from the collections of the herbariums of the universities of Leiden, Utrecht University and Wageningen . [ 6 ] Ritzen subsequently denied the influence of Beatrix in the matter while Baas was convinced that Beatrix helped with the formation of the institute. [ 7 ] Baas became director of the newly formed National Herbarium. [ 6 ] During his term as director, Baas managed to improve digitalization efforts and nature conservancy projects at the institute. [ 3 ] Furthermore, a start at DNA sequencing the collection was made. The National Herbarium also joined forces with the Naturalis Biodiversity Center , the Zoological Museum Amsterdam , and the Centraalbureau voor Schimmelcultures to become a biodiversity research centre. [ 2 ] Baas retired as professor in April and as director in September 2005, and was succeeded by Erik Smets. [ 1 ] [ 2 ] [ 5 ] Until age 65, he maintained a zero-hour contract at the institute, and then returned to his research on wood anatomy. [ 3 ] As of 2013 he was still active as professor emeritus and honorary staff member at the Naturalis Biodiversity Center , the successor institute to the National Herbarium of the Netherlands. [ 4 ] Baas's principal research was in the evolution of anatomical diversity in wood and in the significance of tree biology as it relates to global environmental change. He was also interested in plant anatomy, both systematic and phylogenetic, wood culture, biodiversity, biohistory, conservation, as well as in microscopic wood identification. He studied the role of botanical gardens in education and research. [ 4 ] As of 1976 Baas was Editor-in-Chief of the International Association of Wood Anatomists Journal . [ 4 ] As an expert on wood anatomy, Baas was at times asked to be a scientific expert on police investigations regarding wooden weapons or tools. [ 8 ] In 1987 Baas became a corresponding member of the Botanical Society of America [ 9 ] and a member of the Royal Netherlands Academy of Arts and Sciences in 2000. [ 10 ] He was an elected fellow of The International Academy of Wood Science . [ 11 ] In 2003 he won the Linnean Medal of the Linnean Society of London . [ 12 ] Baas became a Knight in the Order of the Netherlands Lion in 2005. [ 13 ] Ilex baasii and Baasoxylon are named after him. [ 5 ] While in Sri Lanka in 2004, Baas survived the Indian Ocean tsunami . [ 2 ] He died on 29 April 2024, at the age of 80. [ 14 ]
https://en.wikipedia.org/wiki/Pieter_Baas
Pieter Hendrik Schoute (21 January 1846, Wormerveer – 18 April 1913, Groningen ) was a Dutch mathematician known for his work on regular polytopes and Euclidean geometry . He started his career as a civil engineer, but became a professor of mathematics at Groningen and published some thirty papers on polytopes between 1878 and his death in 1913. [ 1 ] He collaborated with Alicia Boole Stott on describing the sections of the regular 4-polytopes. [ 2 ] In 1886, he became a member of the Royal Netherlands Academy of Arts and Sciences . [ 3 ]
https://en.wikipedia.org/wiki/Pieter_Hendrik_Schoute
Piezochromism , from the Greek piezô "to squeeze, to press" and chromos "color", [ 1 ] describes the tendency of certain materials to change color with the application of pressure . This effect is closely related to a change in the electronic band gap , and can be found in plastics , semiconductors (e.g. hybrid perovskites) [ 2 ] [ 3 ] [ 4 ] and hydrocarbons. [ 5 ] One simple molecule displaying this property is 5-methyl-2-[(2-nitrophenyl)amino]-3-thiophenecarbonitrile , also known as ROY owing to its red, orange and yellow crystalline forms. Individual yellow and pale orange versions transform reversibly to red at high pressure. [ 6 ]
https://en.wikipedia.org/wiki/Piezochromism
A piezoelectric microelectromechanical system (piezoMEMS) is a miniature or microscopic device that uses piezoelectricity to generate motion and carry out its tasks. It is a microelectromechanical system that takes advantage of an electrical potential that appears under mechanical stress . PiezoMEMS can be found in a variety of applications, such as switches , inkjet printer heads , sensors, micropumps , and energy harvesters. [ 1 ] Interest in piezoMEMS technology began around the early 1990s as scientists explored alternatives to electrostatic actuation in radio frequency (RF) microelectromechanical systems (MEMS) . [ 2 ] For RF MEMS, electrostatic actuation required specialized high-voltage charge pump circuits due to small electrode gap spacing and large driving voltages. In contrast, piezoelectric actuation allowed for high sensitivity as well as low voltage and power consumption, as low as a few millivolts. [ 3 ] [ 4 ] It also had the ability to close large vertical gaps while still allowing for low microsecond operating speeds. [ 5 ] Lead zirconate titanate (PZT) , in particular, offered the most promise as a piezoelectric material because of its high piezoelectric coefficient , tunable dielectric constant , and electromechanical coupling coefficient . [ 4 ] PiezoMEMS have been applied to various different technologies from switches to sensors, and further research has led to the creation of piezoelectric thin films, which aided in the realization of highly integrated piezoMEMS devices. [ 6 ] The first reported piezoelectrically actuated RF MEMS switch was developed by scientists at the LG Electronics Institute of Technology in Seoul, South Korea in 2005. [ 3 ] The researchers designed and realized an RF MEMS switch with a piezoelectric cantilever actuator that had an operation voltage of 2.5 volts. [ 7 ] In 2017, researchers from the U.S. Army Research Laboratory (ARL) evaluated the radiation effects in the piezoelectric response of PZT thin films for the first time. They determined that PZT exhibited a degree of radiation hardness that could be further extended by using conductive oxide electrodes instead of traditional platinum electrodes. Gamma radiation tests have also shown that actuated devices such as switches, resonators , and inertial devices could benefit from the radiation tolerance of PZT, suggesting the possibility that actuators and sensors can be integrated into platforms evaluating nuclear material, reducing human exposure to radiation. [ 8 ] [ 9 ] This experiment was part of a decades-long research investment effort at ARL to improve the use of PZT thin film technology for piezoMEMS. [ 4 ] Other piezoMEMS-related work included developing a piezoelectric microphone based on PZT thin films, [ 10 ] creating new integrated surface micromachining processes for RF MEMS to incorporate thin film PZT actuators, [ 11 ] providing the first experimental demonstration of monolithically integrated piezoMEMS RF switches with contour mode filters, [ 12 ] and demonstrating the feasibility of vibrational energy harvesting using thin film PZT MEMS. [ 13 ] In their work, researchers from ARL have also increased the overall electromechanical response of PZT thin films by 15–30% by incorporating iridium oxide electrode materials. [ 8 ] There exist three primary approaches to realizing piezoMEMS devices. [ 14 ] PiezoMEMS use two principal crystal structures: the wurtzite and perovskite structures.
[ 6 ] PiezoMEMS still face many difficulties that impede their successful commercialization. For instance, successful deposition of uniform piezoelectric films still depends heavily on the use of appropriate layers for proper nucleation and film growth. As a result, extensive device-specific development efforts are needed to create a proper sensor structure. In addition, researchers continue to search for ways to reduce and control the drift and aging characteristics of thin-film piezoelectric materials. Deposition techniques that create thin films with properties approaching those of bulk materials remain in development and in need of improvement. Furthermore, the etching of most piezoelectric materials remains very slow, and their etching characteristics are difficult to control. [ 14 ]
https://en.wikipedia.org/wiki/Piezoelectric_microelectromechanical_systems
Piezoelectricity ( / ˌ p iː z oʊ -, ˌ p iː t s oʊ -, p aɪ ˌ iː z oʊ -/ , US : / p i ˌ eɪ z oʊ -, p i ˌ eɪ t s oʊ -/ [ 1 ] ) is the electric charge that accumulates in certain solid materials—such as crystals , certain ceramics , and biological matter such as bone , DNA , and various proteins —in response to applied mechanical stress . [ 2 ] [ 3 ] The piezoelectric effect results from the linear electromechanical interaction between the mechanical and electrical states in crystalline materials with no inversion symmetry . [ 4 ] The piezoelectric effect is a reversible process : materials exhibiting the piezoelectric effect also exhibit the reverse piezoelectric effect, the internal generation of a mechanical strain resulting from an applied electric field . For example, lead zirconate titanate crystals will generate measurable piezoelectricity when their static structure is deformed by about 0.1% of the original dimension. Conversely, those same crystals will change about 0.1% of their static dimension when an external electric field is applied. The inverse piezoelectric effect is used in the production of ultrasound waves . [ 5 ] French physicists Jacques and Pierre Curie discovered piezoelectricity in 1880. [ 6 ] The piezoelectric effect has been exploited in many useful applications, including the production and detection of sound, piezoelectric inkjet printing , generation of high voltage electricity, as a clock generator in electronic devices, in microbalances , to drive an ultrasonic nozzle , and in ultrafine focusing of optical assemblies. It forms the basis for scanning probe microscopes that resolve images at the scale of atoms . It is used in the pickups of some electronically amplified guitars and as triggers in most modern electronic drums . [ 7 ] [ 8 ] The piezoelectric effect also finds everyday uses, such as generating sparks to ignite gas cooking and heating devices, torches, and cigarette lighters . The word piezoelectricity means electricity resulting from pressure and latent heat . It is derived from Ancient Greek πιέζω ( piézō ) ' to squeeze or press ' and ἤλεκτρον ( ḗlektron ) ' amber ' (an ancient source of static electricity). [ 9 ] [ 10 ] The German form of the word ( Piezoelektrizität ) was coined in 1881 by the German physicist Wilhelm Gottlieb Hankel ; the English word was derived from German in 1883. [ 9 ] [ 11 ] The pyroelectric effect , by which a material generates an electric potential in response to a temperature change, was studied by Carl Linnaeus and Franz Aepinus in the mid-18th century. Drawing on this knowledge, both René Just Haüy and Antoine César Becquerel posited a relationship between mechanical stress and electric charge; however, experiments by both proved inconclusive. [ 12 ] The first demonstration of the direct piezoelectric effect was in 1880 by the brothers Pierre Curie and Jacques Curie . [ 13 ] They combined their knowledge of pyroelectricity with their understanding of the underlying crystal structures that gave rise to pyroelectricity to predict crystal behavior, and demonstrated the effect using crystals of tourmaline , quartz , topaz , cane sugar , and Rochelle salt (sodium potassium tartrate tetrahydrate). Quartz and Rochelle salt exhibited the most piezoelectricity. The Curies, however, did not predict the converse piezoelectric effect. The converse effect was mathematically deduced from fundamental thermodynamic principles by Gabriel Lippmann in 1881. 
[ 14 ] The Curies immediately confirmed the existence of the converse effect, [ 15 ] and went on to obtain quantitative proof of the complete reversibility of electro-elasto-mechanical deformations in piezoelectric crystals. For the next few decades, piezoelectricity remained something of a laboratory curiosity, though it was a vital tool in the discovery of polonium and radium by Pierre and Marie Curie in 1898. More work was done to explore and define the crystal structures that exhibited piezoelectricity. This culminated in 1910 with the publication of Woldemar Voigt 's Lehrbuch der Kristallphysik ( Textbook on Crystal Physics ), [ 16 ] which described the 20 natural crystal classes capable of piezoelectricity, and rigorously defined the piezoelectric constants using tensor analysis . The first practical application for piezoelectric devices was sonar , first developed during World War I . The superior performance of piezoelectric devices, operating at ultrasonic frequencies, superseded the earlier Fessenden oscillator . In France in 1917, Paul Langevin and his coworkers developed an ultrasonic submarine detector. [ 17 ] The detector consisted of a transducer , made of thin quartz crystals carefully glued between two steel plates, and a hydrophone to detect the returned echo . By emitting a high-frequency pulse from the transducer, and measuring the amount of time it takes to hear an echo from the sound waves bouncing off an object, one can calculate the distance to that object. Piezoelectric devices found homes in many fields. Ceramic phonograph cartridges simplified player design, were cheap and accurate, and made record players cheaper to maintain and easier to build. The development of the ultrasonic transducer allowed for easy measurement of viscosity and elasticity in fluids and solids, resulting in huge advances in materials research. Ultrasonic time-domain reflectometers (which send an ultrasonic pulse through a material and measure reflections from discontinuities) could find flaws inside cast metal and stone objects, improving structural safety. During World War II , independent research groups in the United States , USSR , and Japan discovered a new class of synthetic materials, called ferroelectrics , which exhibited piezoelectric constants many times higher than natural materials. This led to intense research to develop barium titanate and later lead zirconate titanate materials with specific properties for particular applications. One significant example of the use of piezoelectric crystals was developed by Bell Telephone Laboratories . Following World War I, Frederick R. Lack, working in radio telephony in the engineering department, developed the "AT cut" crystal, a crystal that operated through a wide range of temperatures. Lack's crystal did not need the heavy accessories previous crystal used, facilitating its use on the aircraft. This development allowed Allied air forces to engage in coordinated mass attacks through the use of aviation radio. Development of piezoelectric devices and materials in the United States was kept within the companies doing the development, mostly due to the wartime beginnings of the field, and in the interests of securing profitable patents. New materials were the first to be developed—quartz crystals were the first commercially exploited piezoelectric material, but scientists searched for higher-performance materials. 
Despite the advances in materials and the maturation of manufacturing processes, the United States market did not grow as quickly as Japan's did. Without many new applications, the growth of the United States' piezoelectric industry suffered. In contrast, Japanese manufacturers shared their information, quickly overcoming technical and manufacturing challenges and creating new markets. In Japan, a temperature-stable crystal cut was developed by Issac Koga . Japanese efforts in materials research created piezoceramic materials competitive with United States materials but free of expensive patent restrictions. Major Japanese piezoelectric developments included new designs of piezoceramic filters for radios and televisions, piezo buzzers and audio transducers that can connect directly to electronic circuits, and the piezoelectric igniter , which generates sparks for small-engine ignition systems and gas-grill lighters by compressing a ceramic disc. Ultrasonic transducers that transmit sound waves through air had existed for quite some time but first saw major commercial use in early television remote controls. These transducers are now mounted on several car models as an echolocation device, helping the driver determine the distance from the car to any objects that may be in its path. The nature of the piezoelectric effect is closely related to the occurrence of electric dipole moments in solids. The latter may either be induced for ions on crystal lattice sites with asymmetric charge surroundings (as in BaTiO 3 and PZTs ) or may directly be carried by molecular groups (as in cane sugar ). The dipole density or polarization (with units of [C·m/m 3 ], i.e. C/m 2 ) may easily be calculated for crystals by summing up the dipole moments per volume of the crystallographic unit cell . [ 18 ] As every dipole is a vector, the dipole density P is a vector field . Dipoles near each other tend to be aligned in regions called Weiss domains . The domains are usually randomly oriented, but can be aligned using the process of poling (not the same as magnetic poling ), a process by which a strong electric field is applied across the material, usually at elevated temperatures. Not all piezoelectric materials can be poled. [ 19 ] Of decisive importance for the piezoelectric effect is the change of polarization P when applying a mechanical stress . This might either be caused by a reconfiguration of the dipole-inducing surroundings or by re-orientation of molecular dipole moments under the influence of the external stress. Piezoelectricity may then manifest in a variation of the polarization strength, its direction or both, with the details depending on: 1. the orientation of P within the crystal; 2. crystal symmetry ; and 3. the applied mechanical stress. The change in P appears as a variation of surface charge density upon the crystal faces, i.e. as a variation of the electric field extending between the faces caused by a change in dipole density in the bulk. For example, a 1 cm 3 cube of quartz with 2 kN (about 450 lbf) of correctly applied force can produce a voltage of 12,500 V . [ 20 ] Piezoelectric materials also show the opposite effect, called the converse piezoelectric effect , where the application of an electrical field creates mechanical deformation in the crystal.
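As a rough plausibility check of the quartz figure just quoted, the short calculation below estimates the open-circuit voltage of a loaded quartz cube from handbook-level constants. The piezoelectric coefficient and relative permittivity used here are assumed, typical values for quartz rather than numbers taken from the text, so the result should be read only as an order-of-magnitude comparison.

```python
# Order-of-magnitude check of the quoted quartz figure.
# The material constants below are illustrative, handbook-level assumptions.
EPS_0 = 8.854e-12      # vacuum permittivity, F/m
d = 2.3e-12            # piezoelectric coefficient of quartz, C/N (assumed ~d11)
eps_r = 4.5            # relative permittivity of quartz (assumed)

force = 2_000.0        # applied force, N (2 kN)
side = 0.01            # cube edge length, m (1 cm)
area = side ** 2       # loaded face area, m^2
thickness = side       # distance between the electroded faces, m

stress = force / area                  # mechanical stress, Pa
g = d / (eps_r * EPS_0)                # piezoelectric voltage constant, V*m/N
voltage = g * stress * thickness       # open-circuit voltage, V

print(f"stress  = {stress:.2e} Pa")
print(f"voltage = {voltage:.0f} V")    # ~1.2e4 V, same order as the quoted 12,500 V
```

With these assumed constants the estimate comes out near 11-12 kV, consistent in order of magnitude with the 12,500 V figure cited above.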
Linear piezoelectricity is the combined effect of the linear electrical behaviour of the material, \( \mathbf{D} = \varepsilon\,\mathbf{E} \) (where D is the electric displacement, ε the permittivity and E the electric field), and of Hooke's law for linear elastic materials, \( \mathbf{S} = s\,\mathbf{T} \) (where S is the strain, s the compliance and T the stress). These may be combined into so-called coupled equations , of which the strain-charge form is: [ 23 ]

\[ \mathbf{S} = s^{E}\,\mathbf{T} + d^{\,t}\,\mathbf{E} \]
\[ \mathbf{D} = d\,\mathbf{T} + \varepsilon^{T}\,\mathbf{E} \]

where \( d \) is the piezoelectric tensor and the superscript t stands for its transpose. Due to the symmetry of \( d \), \( d_{ijk}^{t} = d_{kji} = d_{kij} \). In matrix form,

\[ \{S\} = [s^{E}]\,\{T\} + [d^{\,t}]\,\{E\} \]
\[ \{D\} = [d]\,\{T\} + [\varepsilon^{T}]\,\{E\} \]

where [ d ] is the matrix for the direct piezoelectric effect and [ d t ] is the matrix for the converse piezoelectric effect. The superscript E indicates a zero, or constant, electric field; the superscript T indicates a zero, or constant, stress field; and the superscript t stands for transposition of a matrix . Notice that the third-order tensor \( d \) maps vectors into symmetric matrices. There are no non-trivial rotation-invariant tensors that have this property, which is why there are no isotropic piezoelectric materials. The strain-charge form for a material of the 4mm (C 4v ) crystal class (such as a poled piezoelectric ceramic like tetragonal PZT or BaTiO 3 ), as well as of the 6mm crystal class, may also be written as (ANSI IEEE 176):

\[
\begin{bmatrix} S_1 \\ S_2 \\ S_3 \\ S_4 \\ S_5 \\ S_6 \end{bmatrix} =
\begin{bmatrix}
s_{11}^{E} & s_{12}^{E} & s_{13}^{E} & 0 & 0 & 0 \\
s_{12}^{E} & s_{11}^{E} & s_{13}^{E} & 0 & 0 & 0 \\
s_{13}^{E} & s_{13}^{E} & s_{33}^{E} & 0 & 0 & 0 \\
0 & 0 & 0 & s_{44}^{E} & 0 & 0 \\
0 & 0 & 0 & 0 & s_{44}^{E} & 0 \\
0 & 0 & 0 & 0 & 0 & 2\left(s_{11}^{E}-s_{12}^{E}\right)
\end{bmatrix}
\begin{bmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \\ T_5 \\ T_6 \end{bmatrix}
+
\begin{bmatrix}
0 & 0 & d_{31} \\
0 & 0 & d_{31} \\
0 & 0 & d_{33} \\
0 & d_{15} & 0 \\
d_{15} & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} E_1 \\ E_2 \\ E_3 \end{bmatrix}
\]

\[
\begin{bmatrix} D_1 \\ D_2 \\ D_3 \end{bmatrix} =
\begin{bmatrix}
0 & 0 & 0 & 0 & d_{15} & 0 \\
0 & 0 & 0 & d_{15} & 0 & 0 \\
d_{31} & d_{31} & d_{33} & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \\ T_5 \\ T_6 \end{bmatrix}
+
\begin{bmatrix}
\varepsilon_{11}^{T} & 0 & 0 \\
0 & \varepsilon_{11}^{T} & 0 \\
0 & 0 & \varepsilon_{33}^{T}
\end{bmatrix}
\begin{bmatrix} E_1 \\ E_2 \\ E_3 \end{bmatrix}
\]

where the first equation represents the relationship for the converse piezoelectric effect and the latter for the direct piezoelectric effect. [ 24 ] Although the above equations are the most used form in the literature, some comments about the notation are necessary. Generally, D and E are vectors , that is, Cartesian tensors of rank 1, and permittivity ε is a Cartesian tensor of rank 2. Strain and stress are, in principle, also rank-2 tensors . But conventionally, because strain and stress are symmetric tensors, their subscripts can be relabeled in the following fashion: 11 → 1; 22 → 2; 33 → 3; 23 → 4; 13 → 5; 12 → 6. (Different conventions may be used by different authors in the literature. For example, some use 12 → 4; 23 → 5; 31 → 6 instead.) That is why S and T appear to have the "vector form" of six components. Consequently, the compliance s appears to be a 6-by-6 matrix rather than a rank-4 tensor, and the piezoelectric coefficient d appears to be a 3-by-6 matrix rather than a rank-3 tensor. Such a relabeled notation is often called Voigt notation . Whether the shear strain components S 4 , S 5 , S 6 are tensor components or engineering strains is another question. In the equations above, they must be engineering strains for the 6,6 coefficient of the compliance matrix to be written as shown, i.e., 2( s E 11 − s E 12 ). Engineering shear strains are double the value of the corresponding tensor shear, such as S 6 = 2 S 12 and so on. This also means that s 66 = ⁠ 1 / G 12 ⁠ , where G 12 is the shear modulus . In total, there are four piezoelectric coefficients, d ij , e ij , g ij , and h ij , defined as follows:

\[ d_{ij} = \left(\frac{\partial D_i}{\partial T_j}\right)^{\!E} = \left(\frac{\partial S_j}{\partial E_i}\right)^{\!T} \qquad
e_{ij} = \left(\frac{\partial D_i}{\partial S_j}\right)^{\!E} = -\left(\frac{\partial T_j}{\partial E_i}\right)^{\!S} \]
\[ g_{ij} = -\left(\frac{\partial E_i}{\partial T_j}\right)^{\!D} = \left(\frac{\partial S_j}{\partial D_i}\right)^{\!T} \qquad
h_{ij} = -\left(\frac{\partial E_i}{\partial S_j}\right)^{\!D} = -\left(\frac{\partial T_j}{\partial D_i}\right)^{\!S} \]

where, in each definition, the first equality corresponds to the direct piezoelectric effect and the second to the converse piezoelectric effect. The equality between the direct piezoelectric tensor and the transpose of the converse piezoelectric tensor originates from the Maxwell relations of thermodynamics. [ 25 ] For those piezoelectric crystals for which the polarization is of the crystal-field-induced type, a formalism has been worked out that allows the calculation of piezoelectric coefficients d ij from electrostatic lattice constants or higher-order Madelung constants . [ 18 ]
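To make the strain-charge relations above concrete, the short sketch below evaluates them numerically for a poled ceramic of class 4mm in Voigt notation. All coefficient values are rounded, PZT-like numbers chosen purely for illustration; they are not taken from the text or its references.

```python
import numpy as np

# Strain-charge form for a poled ceramic of crystal class 4mm (Voigt notation):
#   S = s_E @ T + d.T @ E        (converse effect: strain from stress and field)
#   D = d @ T + eps_T @ E        (direct effect: charge density from stress and field)
# All coefficient values below are rounded, PZT-like illustrative numbers.

d33, d31, d15 = 400e-12, -170e-12, 580e-12           # C/N
d = np.array([[0,   0,   0,   0,   d15, 0],
              [0,   0,   0,   d15, 0,   0],
              [d31, d31, d33, 0,   0,   0]])          # 3x6 piezoelectric matrix

s11, s12, s13, s33, s44 = 16e-12, -5e-12, -7e-12, 19e-12, 45e-12   # 1/Pa
s_E = np.array([[s11, s12, s13, 0,   0,   0],
                [s12, s11, s13, 0,   0,   0],
                [s13, s13, s33, 0,   0,   0],
                [0,   0,   0,   s44, 0,   0],
                [0,   0,   0,   0,   s44, 0],
                [0,   0,   0,   0,   0,   2*(s11 - s12)]])          # compliance at constant E

eps0 = 8.854e-12
eps_T = np.diag([1700*eps0, 1700*eps0, 1800*eps0])   # permittivity at constant stress, F/m

T = np.array([0, 0, -1e6, 0, 0, 0])   # 1 MPa compression along the poling axis
E = np.array([0, 0, 2e5])             # 200 kV/m field along the poling axis

S = s_E @ T + d.T @ E                 # resulting strains (dimensionless)
D = d @ T + eps_T @ E                 # resulting electric displacement, C/m^2

print("S3 (axial strain):        ", S[2])
print("D3 (axial charge density):", D[2])
```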
Of the 32 crystal classes , 21 are non-centrosymmetric (not having a centre of symmetry), and of these, 20 exhibit direct piezoelectricity [ 26 ] (the 21st is the cubic class 432). Ten of these represent the polar crystal classes, [ 27 ] which show a spontaneous polarization without mechanical stress due to a non-vanishing electric dipole moment associated with their unit cell, and which exhibit pyroelectricity . If the dipole moment can be reversed by applying an external electric field, the material is said to be ferroelectric . For polar crystals, for which P ≠ 0 holds without applying a mechanical load, the piezoelectric effect manifests itself by changing the magnitude or the direction of P or both. For the nonpolar but piezoelectric crystals, on the other hand, a polarization P different from zero is only elicited by applying a mechanical load. For them the stress can be imagined to transform the material from a nonpolar crystal class ( P = 0) to a polar one, [ 18 ] having P ≠ 0. Many materials exhibit piezoelectricity. Ceramics with randomly oriented grains must be ferroelectric to exhibit piezoelectricity. [ 31 ] The occurrence of abnormal grain growth (AGG) in sintered polycrystalline piezoelectric ceramics has detrimental effects on piezoelectric performance in such systems and should be avoided: the microstructure of piezoceramics exhibiting AGG tends to consist of a few abnormally large, elongated grains in a matrix of randomly oriented finer grains. Macroscopic piezoelectricity is also possible in textured polycrystalline non-ferroelectric piezoelectric materials, such as AlN and ZnO. The families of ceramics with perovskite , tungsten - bronze , and related structures also exhibit piezoelectricity. The fabrication of lead-free piezoceramics poses multiple challenges, both from an environmental standpoint and in replicating the properties of lead-based counterparts. Removing the lead component of a piezoceramic reduces the risk of toxicity to humans, but the mining and extraction of the substitute materials can be harmful to the environment. [ 35 ] Analysis of the environmental profile of PZT versus sodium potassium niobate (NKN or KNN) shows that across the four indicators considered (primary energy consumption, toxicological footprint, eco-indicator 99, and input-output upstream greenhouse gas emissions), KNN is actually more harmful to the environment. Most of the concerns with KNN, specifically its Nb 2 O 5 component, arise in the early phases of its life cycle, before the material reaches manufacturers. Since the harmful impacts are concentrated in these early phases, some actions can be taken to minimize them: returning the land as close as possible to its original form after Nb 2 O 5 mining, for example by dam deconstruction or by replacing stockpiled usable soil, is a known mitigation for any extraction event. For minimizing air quality effects, modeling and simulation still need to be carried out to fully understand what mitigation methods are required. The extraction of lead-free piezoceramic components has not yet grown to a significant scale, but from early analysis, experts encourage caution regarding environmental effects. Fabricating lead-free piezoceramics also faces the challenge of matching the performance and stability of lead-based counterparts. In general, the main fabrication challenge is creating the "morphotropic phase boundaries (MPBs)" that provide the materials with their stable piezoelectric properties, without introducing the "polymorphic phase boundaries (PPBs)" that decrease the temperature stability of the material.
[ 36 ] New phase boundaries are created by varying additive concentrations so that the phase transition temperatures converge at room temperature. The introduction of an MPB improves the piezoelectric properties, but if a PPB is introduced, the material becomes more sensitive to temperature. Research is ongoing to control the type of phase boundary that is introduced, through phase engineering, diffusing phase transitions, domain engineering, and chemical modification. A piezoelectric potential can be created in any bulk or nanostructured semiconductor crystal lacking a centre of symmetry (non-centrosymmetric), such as the Group III – V and II – VI materials, due to polarization of ions under applied stress and strain. This property is common to both the zincblende and wurtzite crystal structures. To first order, there is only one independent piezoelectric coefficient in zincblende , called e 14 , coupled to shear components of the strain. In wurtzite , there are instead three independent piezoelectric coefficients: e 31 , e 33 and e 15 . The semiconductors in which the strongest piezoelectricity is observed are those commonly found in the wurtzite structure, i.e. GaN , InN , AlN and ZnO (see piezotronics ). Since 2006, there have also been a number of reports of strong nonlinear piezoelectric effects in polar semiconductors . [ 37 ] Such effects are generally recognized to be significant, potentially of the same order of magnitude as the first-order approximation. The piezo-response of polymers is not as high as that of ceramics; however, polymers offer properties that ceramics do not. Over the last few decades, non-toxic piezoelectric polymers have been studied and applied because of their flexibility and smaller acoustic impedance . [ 38 ] Other properties that make these materials significant include their biocompatibility , biodegradability , low cost, and low power consumption compared with other piezo-materials (ceramics, etc.). [ 39 ] Piezoelectric polymers and non-toxic polymer composites can be used given their different physical properties. Piezoelectric polymers can be classified as bulk polymers, voided charged polymers ("piezoelectrets"), and polymer composites. The piezo-response observed in bulk polymers is mostly due to their molecular structure. There are two types of bulk polymers: amorphous and semi-crystalline . Examples of semi-crystalline polymers are polyvinylidene fluoride (PVDF) and its copolymers , polyamides , and parylene-C . Non-crystalline polymers, such as polyimide and polyvinylidene chloride (PVDC), fall under amorphous bulk polymers. Voided charged polymers exhibit the piezoelectric effect due to charge induced by poling of a porous polymeric film. Under an electric field, charges form on the surfaces of the voids, forming dipoles; electric responses can then be produced by any deformation of these voids. The piezoelectric effect can also be obtained in polymer composites by integrating piezoelectric ceramic particles into a polymer film. A polymer does not have to be piezo-active to be an effective material for such a composite: [ 39 ] in this case, the material can be made up of an inert matrix with a separate piezo-active component. PVDF exhibits piezoelectricity several times greater than quartz. The piezo-response observed from PVDF is about 20–30 pC/N, roughly 5–50 times less than that of the piezoelectric ceramic lead zirconate titanate (PZT). [ 38 ] [ 39 ] The thermal stability of the piezoelectric effect of polymers in the PVDF family (i.e.
vinylidene fluoride co-polymerized with trifluoroethylene) goes up to 125 °C. Some applications of PVDF are pressure sensors, hydrophones, and shock wave sensors. [ 38 ] Due to their flexibility, piezoelectric composites have been proposed as energy harvesters and nanogenerators. In 2018, Zhu et al. reported that a piezoelectric response of about 17 pC/N could be obtained from a PDMS/PZT nanocomposite at 60% porosity. [ 40 ] Another PDMS nanocomposite was reported in 2017, in which BaTiO 3 was integrated into PDMS to make a stretchable, transparent nanogenerator for self-powered physiological monitoring. [ 41 ] In 2016, polar molecules were introduced into a polyurethane foam, in which high responses of up to 244 pC/N were reported. [ 42 ] Most materials exhibit at least weak piezoelectric responses. Trivial examples include sucrose (table sugar), DNA , and viral proteins, including those from bacteriophages . [ 43 ] [ 44 ] An actuator based on wood (cellulose) fibers has been reported. [ 39 ] d 33 responses for cellular polypropylene are around 200 pC/N. Some applications of cellular polypropylene are musical key pads, microphones, and ultrasound-based echolocation systems. [ 38 ] Recently, single amino acids such as β-glycine have also displayed high piezoelectric coefficients (178 pm V −1 ) compared with other biological materials. [ 45 ] Ionic liquids were recently identified as the first piezoelectric liquids. [ 46 ] Direct piezoelectricity of some substances, like quartz, can generate potential differences of thousands of volts. The principle of operation of a piezoelectric sensor is that a physical dimension, transformed into a force, acts on two opposing faces of the sensing element. Depending on the design of a sensor, different "modes" of loading the piezoelectric element can be used: longitudinal, transversal and shear. Detection of pressure variations in the form of sound is the most common sensor application, e.g. piezoelectric microphones (sound waves bend the piezoelectric material, creating a changing voltage) and piezoelectric pickups for acoustic-electric guitars . A piezo sensor attached to the body of an instrument is known as a contact microphone . Piezoelectric sensors are especially used with high-frequency sound in ultrasonic transducers for medical imaging and for industrial nondestructive testing (NDT). For many sensing techniques, the sensor can act as both a sensor and an actuator; the term transducer is often preferred when the device acts in this dual capacity, but most piezo devices have this property of reversibility whether or not it is used. Ultrasonic transducers, for example, can inject ultrasound waves into the body, receive the returned wave, and convert it to an electrical signal (a voltage). Most medical ultrasound transducers are piezoelectric. In addition to those mentioned above, there are many other sensor and transducer applications. As very high electric fields correspond to only tiny changes in the width of the crystal, this width can be changed with better-than- μm precision, making piezo crystals the most important tool for positioning objects with extreme accuracy, hence their use in actuators . [ 55 ] Multilayer ceramics, using layers thinner than 100 μm , allow high electric fields to be reached with voltages lower than 150 V . These ceramics are used within two kinds of actuators: direct piezo actuators and amplified piezoelectric actuators . While a direct actuator's stroke is generally below 100 μm , amplified piezo actuators can reach millimeter strokes.
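The following sketch illustrates the arithmetic behind multilayer actuators: thin layers convert a modest drive voltage into a large electric field, and the free-stroke displacements of the individual layers add up along the stack. The d33 value and the geometry are assumed, illustrative numbers, not specifications taken from the text.

```python
# Why multilayer actuators work: thin layers turn a modest drive voltage into a
# large electric field, and free-stroke displacements add layer by layer.
# d33 and the geometry are nominal, illustrative values.

d33 = 500e-12             # m/V (equivalently C/N), assumed actuator-grade ceramic
layer_thickness = 60e-6   # 60 um per layer
n_layers = 500            # stack of 500 layers (30 mm total height)
voltage = 120.0           # drive voltage, V (below the ~150 V mentioned above)

field = voltage / layer_thickness            # electric field in each layer, V/m
stroke_per_layer = d33 * voltage             # free displacement of one layer, m
total_stroke = n_layers * stroke_per_layer   # free displacement of the stack, m

print(f"field per layer : {field/1e6:.1f} MV/m")      # ~2 MV/m from only 120 V
print(f"stack stroke    : {total_stroke*1e6:.1f} um")  # ~30 um free stroke
```

With these assumed numbers the stack reaches a field of about 2 MV/m and a free stroke of roughly 30 um, consistent with the sub-100 um strokes quoted above for direct actuators.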
The piezoelectric properties of quartz are useful as a standard of frequency . Several types of piezoelectric motor exist. Aside from the stepping stick-slip motor, they all work on the same principle: driven by dual orthogonal vibration modes with a phase difference of 90°, the contact point between two surfaces vibrates in an elliptical path, producing a frictional force between the surfaces. Usually, one surface is fixed, causing the other to move. In most piezoelectric motors, the piezoelectric crystal is excited by a sine wave signal at the resonant frequency of the motor. Using the resonance effect, a much lower voltage can be used to produce a high vibration amplitude. A stick-slip motor works using the inertia of a mass and the friction of a clamp. Such motors can be very small; some are used for camera sensor displacement, allowing an anti-shake function. Different teams of researchers have been investigating ways to reduce vibrations in materials by attaching piezo elements to the material. When the material is bent by a vibration in one direction, the vibration-reduction system responds to the bend and sends electric power to the piezo element to bend it in the other direction. Future applications of this technology are expected in cars and houses to reduce noise. Further applications to flexible structures, such as shells and plates, have also been studied for nearly three decades. In a demonstration at the Material Vision Fair in Frankfurt in November 2005, a team from TU Darmstadt in Germany showed several panels that were hit with a rubber mallet; the panel with the piezo element immediately stopped swinging. Piezoelectric ceramic fiber technology is being used as an electronic damping system on some HEAD tennis rackets . [ 60 ] All piezo transducers have a fundamental resonant frequency and many harmonic frequencies. Piezo-driven drop-on-demand fluid systems are sensitive to extra vibrations in the piezo structure, which must be reduced or eliminated. One inkjet company, Howtek, Inc. , addressed this problem by replacing rigid glass inkjet nozzles with soft Tefzel inkjet nozzles. This idea popularized single-nozzle inkjets, which are now used in 3D inkjet printers that can run for years if kept clean inside and not overheated (Tefzel creeps under pressure at very high temperatures). In people with previous total fertilization failure , piezoelectric activation of oocytes together with intracytoplasmic sperm injection (ICSI) seems to improve fertilization outcomes. [ 61 ] Piezosurgery [ 62 ] is a minimally invasive technique that aims to cut a target tissue with little damage to neighboring tissues. For example, Hoigne et al. [ 63 ] used frequencies in the range 25–29 kHz, causing microvibrations of 60–210 μm. The technique can cut mineralized tissue without cutting neurovascular tissue and other soft tissue, thereby maintaining a blood-free operating area, better visibility and greater precision. [ 64 ] Recent advancements in piezoelectric materials have led to the development of multifunctional composites that integrate mechanical protection with sensing capabilities. [ 65 ] [ 66 ] One notable innovation involves the growth of Rochelle salt [ 67 ] (RS) crystals within 3D-printed, bioinspired structures. Researchers have used cuttlefish bone as a model for designing porous frameworks, which provide high stiffness and impact absorption due to their chambered microstructure.
By growing RS crystals within these structures, the resulting composite achieves enhanced piezoelectric properties while maintaining significant mechanical strength. [ 68 ] These RS-based composites demonstrate remarkable impact energy absorption and real-time sensing capabilities. Under cyclic impacts, they maintain stable piezoelectric output for thousands of cycles, with peak output voltages reaching approximately 8 V and a measured piezoelectric coefficient (d 33 ) of around 30 pC/N. [ 68 ] Such performance enables the detection of both the magnitude and location of impacts, making the material suitable for applications in wearable protective gear, including smart armor for athletes and fall detection devices for elderly individuals. A distinguishing feature of these composites is their sustainability and recyclability. Rochelle salt crystals can be dissolved and regrown within the structure, allowing damaged materials to be repaired. Recycled samples retain up to 95% of their original performance, significantly extending the material's lifespan and promoting eco-friendly usage. Applications extend beyond sports and medical devices, with potential use in aerospace, military armor, and structural health monitoring systems. These advancements highlight the evolving versatility of piezoelectric materials in modern technology. [ 68 ] In 2015, Cambridge University researchers, working in conjunction with researchers from the National Physical Laboratory and the Cambridge-based dielectric antenna company Antenova Ltd and using thin films of piezoelectric materials, found that at a certain frequency these materials become not only efficient resonators but efficient radiators as well, meaning that they can potentially be used as antennas. The researchers found that by subjecting the piezoelectric thin films to an asymmetric excitation, the symmetry of the system is similarly broken, resulting in a corresponding symmetry breaking of the electric field and the generation of electromagnetic radiation. [ 69 ] [ 70 ] Several attempts at macro-scale application of piezoelectric technology have emerged [ 71 ] [ 72 ] to harvest kinetic energy from walking pedestrians. In this case, locating high-traffic areas is critical for optimizing the energy-harvesting efficiency, and the orientation of the tile pavement also significantly affects the total amount of energy harvested. [ 73 ] A density flow evaluation is recommended to qualitatively evaluate the piezoelectric power-harvesting potential of the area under consideration, based on the number of pedestrian crossings per unit time. [ 74 ] In X. Li's study, the potential application of a commercial piezoelectric energy harvester in a central hub building at Macquarie University in Sydney, Australia, is examined and discussed. Optimization of the piezoelectric tile deployment is presented according to the frequency of pedestrian mobility, and a model is developed in which 3.1% of the total floor area, with the highest pedestrian mobility, is paved with piezoelectric tiles. The modelling results indicate that the total annual energy-harvesting potential for the proposed optimized tile pavement is estimated at 1.1 MWh/year, which would be sufficient to meet close to 0.5% of the annual energy needs of the building. [ 74 ] In Israel, a company has installed piezoelectric materials under a busy highway; the energy generated is enough to power street lights, billboards, and signs. [ citation needed ]
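For a sense of scale, the tile-harvesting estimate quoted above can be converted into an average continuous power and an implied total building demand. The short calculation below uses only the figures quoted above (1.1 MWh/year covering about 0.5% of the building's needs) plus the number of hours in a year.

```python
# Rough sense of scale for the reported tile-harvesting estimate.
annual_energy_kwh = 1100          # 1.1 MWh/year from the cited model
hours_per_year = 8760

avg_power_w = annual_energy_kwh * 1000 / hours_per_year     # average continuous power
building_annual_kwh = annual_energy_kwh / 0.005             # if that covers ~0.5% of demand

print(f"average harvested power : {avg_power_w:.0f} W")                       # ~126 W
print(f"implied building demand : {building_annual_kwh/1000:.0f} MWh/year")   # ~220 MWh/year
```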
Tire company Goodyear has plans to develop an electricity-generating tire lined inside with piezoelectric material. As the tire moves, it deforms and thus generates electricity. [ 75 ] The efficiency of a hybrid photovoltaic cell that contains piezoelectric materials can be increased simply by placing it near a source of ambient noise or vibration. The effect was demonstrated with organic cells using zinc oxide nanotubes. The electricity generated by the piezoelectric effect itself is a negligible percentage of the overall output. Sound levels as low as 75 decibels improved efficiency by up to 50%. Efficiency peaked at 10 kHz, the resonant frequency of the nanotubes. The electric field set up by the vibrating nanotubes interacts with electrons migrating from the organic polymer layer. This process decreases the likelihood of recombination, in which electrons are energized but settle back into a hole instead of migrating to the electron-accepting ZnO layer. [ 76 ] [ 77 ]
https://en.wikipedia.org/wiki/Piezoelectricity
The piezoelectrochemical transducer effect (PECT) is a coupling between the electrochemical potential and the mechanical strain in ion-insertion-based electrode materials. It is similar to the piezoelectric effect, with both exhibiting a voltage-strain coupling, although the PECT effect relies on the movement of ions within a material's microstructure rather than on charge accumulation from the polarization of electric dipole moments . Many different materials have been shown to exhibit a PECT effect, including: lithiated graphite ; [ 1 ] carbon fibers inserted with lithium , [ 2 ] [ 3 ] [ 4 ] sodium , [ 5 ] and potassium ; [ 6 ] sodiated black phosphorus ; [ 7 ] lithiated aluminium ; [ 8 ] lithium cobalt oxide ; [ 9 ] vanadium oxide nanofibers inserted with lithium and sodium ; [ 10 ] and lithiated silicon . [ 11 ] These materials all exhibit a voltage-strain coupling, whereby the material expands when it is charged with ions and contracts when it is discharged. The reverse is also true: when a mechanical strain is applied, the electrical potential changes. This has led to various proposals of applications for the PECT effect, with research focusing on actuators , strain sensors, and energy harvesters . The PECT effect was first reported by Dr. F. Lincoln Vogel in 1981 when studying how intercalation voltages could be used to provide an actuation force in graphitized carbon fibres. [ 12 ] The research used sulphate (SO 4 ) ions from sulfuric acid to intercalate into the microstructure of carbon fibers , forming graphite intercalation compounds (GICs). It was hypothesized that an axial strain of up to 2% should be possible; however, only 0.2% was observed, due to experimental limitations. [ 13 ] The effect is often explained by the theories of Larché and Cahn, [ 14 ] [ 15 ] [ 16 ] who derived mathematical formulations for the equilibrium relationships between the electric potential , chemical potential , and mechanical stress in solid materials. In summary, the theory states that solid materials under mechanical stress undergo a change in chemical potential , which in turn affects their electrical potential . [ 17 ] Since PECT materials expand and contract upon ion insertion, it is possible to use this effect for actuation . Several different materials have been proposed for this, including: carbon fibers inserted with lithium , [ 2 ] [ 3 ] [ 18 ] sodium , [ 5 ] and potassium ; [ 6 ] lithium cobalt oxide ; [ 9 ] and vanadium oxide nanofibers inserted with lithium and sodium . [ 10 ] Applications for PECT-based actuation range from microelectromechanical systems (MEMS) [ 19 ] to large morphing structures. [ 20 ] [ 21 ] Different materials exhibit different amounts of expansion/contraction, with a response that depends on the type of ion as well as the amount of charge. For example, silicon expands by more than 300% when inserted with lithium, [ 19 ] whereas graphite expands by around 13%. [ 19 ] Carbon fibres expand by up to 1% when inserted with lithium, [ 2 ] but only around 0.2% when inserted with potassium. [ 6 ] As PECT materials exhibit a change in voltage upon application of strain, it is possible to calibrate this change in voltage against the level of strain in the material. This has been proposed for applications in battery health monitoring [ 22 ] as well as structural health monitoring . [ 6 ] [ 18 ] [ 17 ] When mechanical strain is applied to a PECT material it changes the chemical potential , and therefore the electric potential, of that material. [ 14 ] [ 15 ] [ 16 ] [ 23 ]
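A minimal numerical sketch of this stress-potential coupling is given below, using the first-order relation ΔU ≈ Ω·σ_h/(zF) that is commonly associated with the Larché–Cahn framework, where Ω is the partial molar volume of the inserted ion in the host, σ_h the hydrostatic stress, z the ion's charge number, and F the Faraday constant. The value of Ω used here is an assumed, illustrative figure for lithium in graphite; neither the constant nor the resulting number is taken from the cited works.

```python
# First-order stress-potential coupling in an ion-insertion electrode:
#   delta_U ~= Omega * sigma_h / (z * F)
# Omega: partial molar volume of the inserted ion in the host (assumed value below)
# sigma_h: hydrostatic stress; z: ion charge number; F: Faraday constant

FARADAY = 96485.0        # C/mol
omega = 3.5e-6           # m^3/mol, assumed partial molar volume of Li in graphite
z = 1                    # charge number of Li+
sigma_h = 100e6          # 100 MPa hydrostatic stress

delta_U = omega * sigma_h / (z * FARADAY)
print(f"potential shift: {delta_U*1e3:.1f} mV")   # a few mV per 100 MPa with these assumptions
```

With these assumed values the shift is only a few millivolts per 100 MPa, which is why the effect is usually exploited in calibrated sensing or low-frequency harvesting rather than as a large voltage source.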
Since a difference in electric potential between two ionically connected electrodes drives a current through an external circuit, it is possible to induce a current flow between two such materials by simply applying a mechanical strain. It is therefore possible to harness and convert mechanical energy into electrical energy. A number of materials have been demonstrated to be capable of PECT-based energy harvesting, including: carbon fibers inserted with lithium ; [ 3 ] [ 24 ] [ 18 ] sodiated black phosphorus ; [ 7 ] lithiated aluminium ; [ 8 ] and lithiated silicon . [ 11 ] A structural carbon fibre composite has also been shown to be capable of harvesting energy using the PECT effect. [ 17 ] Conventional lithium-ion batteries have also been shown to be capable of PECT-based energy harvesting. [ 25 ] [ 23 ] This effect has most often been demonstrated using a two-electrode bending setup. [ 7 ] [ 8 ] [ 11 ] [ 18 ] [ 17 ] [ 24 ] PECT energy harvesting is limited by the rate of ionic diffusion and is therefore only efficient at low frequency (typically below around 1 Hz). [ 8 ] Figures of merit for comparing different PECT-based energy harvesters were formulated by Preimesberger et al. [ 26 ] The PECT effect is also present in typical ion-insertion-based battery electrodes (e.g. Li-ion). [ 25 ] [ 27 ] [ 28 ] The electrodes expand and contract when inserted with ions, which is one of the issues that leads to battery ageing and capacity loss over time. [ 29 ] The PECT effect in battery electrodes could be an issue in situations where the electrodes are mechanically stressed (e.g. in structural batteries ), causing a change in electrical potential when the stress state changes. It has been proposed that the PECT effect in Li-ion batteries could be exploited to measure battery health [ 22 ] and to harvest mechanical energy. [ 25 ]
https://en.wikipedia.org/wiki/Piezoelectrochemical_transducer_effect
Piezoluminescence is a form of luminescence created by pressure upon certain solids. This phenomenon is characterized by recombination processes involving electrons, holes and impurity ion centres. [ 1 ] Some piezoelectric crystals give off a certain amount of piezoluminescence when under pressure. Irradiated salts, such as NaCl, KCl, KBr and polycrystalline chips of LiF (TLD-100), have been found to exhibit piezoluminescent properties. [ 2 ] It has also been discovered that ferroelectric polymers exhibit piezoluminescence upon the application of stress. [ 3 ] In the folk-literature surrounding psychedelic production, DMT , 5-MeO-DMT , and LSD have been reported to exhibit piezoluminescence. As specifically noted in the book Acid Dreams , it is stated that Augustus Owsley Stanley III , one of the most prolific producers of LSD in the 1960s, observed piezoluminescence in the compound's purest form, [ 4 ] an observation confirmed by Alexander Shulgin: "A totally pure salt, when dry and when shaken in the dark, will emit small flashes of white light." [ 5 ]
https://en.wikipedia.org/wiki/Piezoluminescence
Piezomagnetism is a phenomenon observed in some antiferromagnetic and ferrimagnetic crystals. It is characterized by a linear coupling between the system's magnetic polarization and mechanical strain . In a piezomagnetic material, one may induce a spontaneous magnetic moment by applying mechanical stress , or a physical deformation by applying a magnetic field . Piezomagnetism differs from the related property of magnetostriction : in a piezomagnet, reversing the direction of the applied magnetic field reverses the sign of the strain produced. Additionally, a non-zero piezomagnetic moment can be produced by mechanical strain alone, at zero field, which is not true of magnetostriction. [ 1 ] According to the Institute of Electrical and Electronics Engineers (IEEE): "Piezomagnetism is the linear magneto-mechanical effect analogous to the linear electromechanical effect of piezoelectricity . Similarly, magnetostriction and electrostriction are analogous second-order effects. These higher-order effects can be represented as effectively first-order when variations in the system parameters are small compared with the initial values of the parameters". [ 2 ] The piezomagnetic effect is made possible by the absence of certain symmetry elements in a crystal structure ; specifically, symmetry under time reversal forbids the property. [ 3 ] The first experimental observation of piezomagnetism was made in 1960, in the fluorides of cobalt and manganese . [ 4 ] The strongest piezomagnet known is uranium dioxide , with magnetoelastic memory switching at magnetic fields near 180,000 Oe at temperatures below 30 kelvins. [ 5 ]
https://en.wikipedia.org/wiki/Piezomagnetism