Gas chromatography–mass spectrometry (GC–MS) is an analytical method that combines the features of gas-chromatography and mass spectrometry to identify different substances within a test sample. Applications of GC–MS include drug detection, fire investigation, environmental analysis, explosives investigation, food and flavor analysis, and identification of unknown samples, including that of material samples obtained from planet Mars during probe missions as early as the 1970s. GC–MS can also be used in airport security to detect substances in luggage or on human beings. Additionally, it can identify trace elements in materials that were previously thought to have disintegrated beyond identification. Like liquid chromatography–mass spectrometry, it allows analysis and detection even of tiny amounts of a substance. GC–MS has been regarded as a "gold standard" for forensic substance identification because it is used to perform a 100% specific test, which positively identifies the presence of a particular substance. A nonspecific test merely indicates that any of several in a category of substances is present. Although a nonspecific test could statistically suggest the identity of the substance, this could lead to false positive identification. However, the high temperatures (300°C) used in the GC–MS injection port (and oven) can result in thermal degradation of injected molecules, thus resulting in the measurement of degradation products instead of the actual molecule(s) of interest. == History == The first on-line coupling of gas chromatography to a mass spectrometer was reported in the late 1950s. An interest in coupling the methods had been suggested as early as December 1954, but conventional recording techniques had too poor temporal resolution. Fortunately, time-of-flight mass spectrometry developed around the same time allowed to measure spectra thousands times a second. The development of affordable and miniaturized computers has helped in the simplification of the use of this instrument, as well as allowed great improvements in the amount of time it takes to analyze a sample. In 1964, Electronic Associates, Inc. (EAI), a leading U.S. supplier of analog computers, began development of a computer controlled quadrupole mass spectrometer under the direction of Robert E. Finnigan. By 1966 Finnigan and collaborator Mike Uthe's EAI division had sold over 500 quadrupole residual gas-analyzer instruments. In 1967, Finnigan left EAI to form the Finnigan Instrument Corporation along with Roger Sant, T. Z. Chou, Michael Story, Lloyd Friedman, and William Fies. In early 1968, they delivered the first prototype quadrupole GC/MS instruments to Stanford and Purdue University. When Finnigan Instrument Corporation was acquired by Thermo Instrument Systems (later Thermo Fisher Scientific) in 1990, it was considered "the world's leading manufacturer of mass spectrometers". == Instrumentation == The GC–MS is composed of two major building blocks: the gas chromatograph and the mass spectrometer. The gas chromatograph utilizes a capillary column whose properties regarding molecule separation depend on the column's dimensions (length, diameter, film thickness) as well as the phase properties (e.g. 5% phenyl polysiloxane). The difference in the chemical properties between different molecules in a mixture and their relative affinity for the stationary phase of the column will promote separation of the molecules as the sample travels the length of the column. 
The molecules are retained by the column and then elute (come off) from the column at different times (called the retention time), and this allows the mass spectrometer downstream to capture, ionize, accelerate, deflect, and detect the ionized molecules separately. The mass spectrometer does this by breaking each molecule into ionized fragments and detecting these fragments using their mass-to-charge ratio. These two components, used together, allow a much finer degree of substance identification than either unit used separately. It is not possible to make an accurate identification of a particular molecule by gas chromatography or mass spectrometry alone. The mass spectrometry process normally requires a very pure sample while gas chromatography using a traditional detector (e.g. Flame ionization detector) cannot differentiate between multiple molecules that happen to take the same amount of time to travel through the column (i.e. have the same retention time), which results in two or more molecules that co-elute. Sometimes two different molecules can also have a similar pattern of ionized fragments in a mass spectrometer (mass spectrum). Combining the two processes reduces the possibility of error, as it is extremely unlikely that two different molecules will behave in the same way in both a gas chromatograph and a mass spectrometer. Therefore, when an identifying mass spectrum appears at a characteristic retention time in a GC–MS analysis, it typically increases certainty that the analyte of interest is in the sample. === Purge and trap GC–MS === For the analysis of volatile compounds, a purge and trap (P&T) concentrator system may be used to introduce samples. The target analytes are extracted by mixing the sample with water and purge with inert gas (e.g. Nitrogen gas) into an airtight chamber, this is known as purging or sparging. The volatile compounds move into the headspace above the water and are drawn along a pressure gradient (caused by the introduction of the purge gas) out of the chamber. The volatile compounds are drawn along a heated line onto a 'trap'. The trap is a column of adsorbent material at ambient temperature that holds the compounds by returning them to the liquid phase. The trap is then heated and the sample compounds are introduced to the GC–MS column via a volatiles interface, which is a split inlet system. P&T GC–MS is particularly suited to volatile organic compounds (VOCs) and BTEX compounds (aromatic compounds associated with petroleum). A faster alternative is the "purge-closed loop" system. In this system the inert gas is bubbled through the water until the concentrations of organic compounds in the vapor phase are at equilibrium with concentrations in the aqueous phase. The gas phase is then analysed directly. === Types of mass spectrometer detectors === The most common type of mass spectrometer (MS) associated with a gas chromatograph (GC) is the quadrupole mass spectrometer, sometimes referred to by the Hewlett-Packard (now Agilent) trade name "Mass Selective Detector" (MSD). Another relatively common detector is the ion trap mass spectrometer. Additionally one may find a magnetic sector mass spectrometer, however these particular instruments are expensive and bulky and not typically found in high-throughput service laboratories. Other detectors may be encountered such as time of flight (TOF), tandem quadrupoles (MS-MS) (see below), or in the case of an ion trap MSn where n indicates the number mass spectrometry stages. 
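The complementary use of retention time and mass spectrum described above can be illustrated with a short sketch. The following Python example is not from the article; the function names, tolerances, and reference values are hypothetical, and it simply shows one way a positive identification might require both a retention-time match and a spectral match.

```python
# Illustrative sketch (not from the source article): confirming an identification by
# requiring BOTH a retention-time match and a mass-spectral match, as described above.
# All names, tolerances, and reference values here are hypothetical.

def spectral_similarity(spectrum_a, spectrum_b):
    """Cosine (dot-product) similarity of two centroided spectra given as {m/z: intensity}."""
    mzs = set(spectrum_a) | set(spectrum_b)
    dot = sum(spectrum_a.get(mz, 0.0) * spectrum_b.get(mz, 0.0) for mz in mzs)
    norm_a = sum(i * i for i in spectrum_a.values()) ** 0.5
    norm_b = sum(i * i for i in spectrum_b.values()) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def confirm_identity(measured_rt, measured_spectrum, reference,
                     rt_tolerance_min=0.1, min_similarity=0.8):
    """Accept an identification only if retention time and spectrum both agree with the reference."""
    rt_ok = abs(measured_rt - reference["rt_min"]) <= rt_tolerance_min
    similarity = spectral_similarity(measured_spectrum, reference["spectrum"])
    return rt_ok and similarity >= min_similarity

# Hypothetical reference entry; retention time and fragment intensities are placeholders.
reference = {"rt_min": 12.3, "spectrum": {194: 100.0, 109: 55.0, 55: 30.0}}
print(confirm_identity(12.28, {194: 98.0, 109: 60.0, 55: 25.0}, reference))  # True
```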
=== GC–tandem MS === When a second phase of mass fragmentation is added, for example using a second quadrupole in a quadrupole instrument, it is called tandem MS (MS/MS). MS/MS can sometimes be used to quantitate low levels of target compounds in the presence of a high sample matrix background. The first quadrupole (Q1) is connected with a collision cell (Q2) and another quadrupole (Q3). Both quadrupoles can be used in scanning or static mode, depending on the type of MS/MS analysis being performed. Types of analysis include product ion scan, precursor ion scan, selected reaction monitoring (SRM) (sometimes referred to as multiple reaction monitoring (MRM)) and neutral loss scan. For example: When Q1 is in static mode (looking at one mass only as in SIM), and Q3 is in scanning mode, one obtains a so-called product ion spectrum (also called "daughter spectrum"). From this spectrum, one can select a prominent product ion which can be the product ion for the chosen precursor ion. The pair is called a "transition" and forms the basis for SRM. SRM is highly specific and virtually eliminates matrix background. == Ionization == After the molecules travel the length of the column, pass through the transfer line and enter into the mass spectrometer they are ionized by various methods with typically only one method being used at any given time. Once the sample is fragmented it will then be detected, usually by an electron multiplier, which essentially turns the ionized mass fragment into an electrical signal that is then detected. The ionization technique chosen is independent of using full scan or SIM. === Electron ionization === By far the most common and perhaps standard form of ionization is electron ionization (EI). The molecules enter into the MS (the source is a quadrupole or the ion trap itself in an ion trap MS) where they are bombarded with free electrons emitted from a filament, not unlike the filament one would find in a standard light bulb. The electrons bombard the molecules, causing the molecule to fragment in a characteristic and reproducible way. This "hard ionization" technique results in the creation of more fragments of low mass-to-charge ratio (m/z) and few, if any, molecules approaching the molecular mass unit. Hard ionization is considered by mass spectrometrists as the employ of molecular electron bombardment, whereas "soft ionization" is charge by molecular collision with an introduced gas. The molecular fragmentation pattern is dependent upon the electron energy applied to the system, typically 70 eV (electronvolts). The use of 70 eV facilitates comparison of generated spectra with library spectra using manufacturer-supplied software or software developed by the National Institute of Standards (NIST-USA). Spectral library searches employ matching algorithms such as Probability Based Matching and dot-product matching that are used with methods of analysis written by many method standardization agencies. Sources of libraries include NIST, Wiley, the AAFS, and instrument manufacturers. ==== Cold electron ionization ==== The "hard ionization" process of electron ionization can be softened by the cooling of the molecules before their ionization, resulting in mass spectra that are richer in information. In this method named cold electron ionization (cold-EI) the molecules exit the GC column, mixed with added helium make up gas and expand into vacuum through a specially designed supersonic nozzle, forming a supersonic molecular beam (SMB). 
Collisions with the make up gas at the expanding supersonic jet reduce the internal vibrational (and rotational) energy of the analyte molecules, hence reducing the degree of fragmentation caused by the electrons during the ionization process. Cold-EI mass spectra are characterized by an abundant molecular ion while the usual fragmentation pattern is retained, thus making cold-EI mass spectra compatible with library search identification techniques. The enhanced molecular ions increase the identification probabilities of both known and unknown compounds, amplify isomer mass spectral effects and enable the use of isotope abundance analysis for the elucidation of elemental formulas. === Chemical ionization === In chemical ionization (CI) a reagent gas, typically methane or ammonia is introduced into the mass spectrometer. Depending on the technique (positive CI or negative CI) chosen, this reagent gas will interact with the electrons and analyte and cause a 'soft' ionization of the molecule of interest. A softer ionization fragments the molecule to a lower degree than the hard ionization of EI. One of the main benefits of using chemical ionization is that a mass fragment closely corresponding to the molecular weight of the analyte of interest is produced. In positive chemical ionization (PCI) the reagent gas interacts with the target molecule, most often with a proton exchange. This produces the species in relatively high amounts. In negative chemical ionization (NCI) the reagent gas decreases the impact of the free electrons on the target analyte. This decreased energy typically leaves the fragment in great supply. == Analysis == A mass spectrometer is typically utilized in one of two ways: full scan or selective ion monitoring (SIM). The typical GC–MS instrument is capable of performing both functions either individually or concomitantly, depending on the setup of the particular instrument. The primary goal of instrument analysis is to quantify an amount of substance. This is done by comparing the relative concentrations among the atomic masses in the generated spectrum. Two kinds of analysis are possible, comparative and original. Comparative analysis essentially compares the given spectrum to a spectrum library to see if its characteristics are present for some sample in the library. This is best performed by a computer because there are a myriad of visual distortions that can take place due to variations in scale. Computers can also simultaneously correlate more data (such as the retention times identified by GC), to more accurately relate certain data. Deep learning was shown to lead to promising results in the identification of VOCs from raw GC–MS data. Another method of analysis measures the peaks in relation to one another. In this method, the tallest peak is assigned 100% of the value, and the other peaks being assigned proportionate values. All values above 3% are assigned. The total mass of the unknown compound is normally indicated by the parent peak. The value of this parent peak can be used to fit with a chemical formula containing the various elements which are believed to be in the compound. The isotope pattern in the spectrum, which is unique for elements that have many natural isotopes, can also be used to identify the various elements present. Once a chemical formula has been matched to the spectrum, the molecular structure and bonding can be identified, and must be consistent with the characteristics recorded by GC–MS. 
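The peak-normalization step just described (tallest peak set to 100%, peaks below 3% discarded) can be sketched as follows. This is an illustrative example only; the spectrum values are invented.

```python
# Illustrative sketch of the relative-abundance normalization described above: the tallest
# peak is assigned 100%, other peaks are scaled proportionately, and only peaks at or above
# 3% are kept. The example spectrum below is invented for demonstration.

def normalize_spectrum(spectrum, threshold_percent=3.0):
    """spectrum: {m/z: raw intensity}. Returns {m/z: relative abundance in %}, base peak = 100%."""
    base = max(spectrum.values())
    relative = {mz: 100.0 * intensity / base for mz, intensity in spectrum.items()}
    return {mz: ra for mz, ra in relative.items() if ra >= threshold_percent}

raw = {43: 7200, 58: 950, 71: 310, 86: 12000, 87: 780}   # hypothetical raw ion counts
print(normalize_spectrum(raw))
# m/z 86 becomes the base peak (100%); m/z 71 (about 2.6%) falls below the 3% cutoff and is dropped.
```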
Typically, this identification is done automatically by programs which come with the instrument, given a list of the elements which could be present in the sample. A "full spectrum" analysis considers all the "peaks" within a spectrum. Conversely, selective ion monitoring (SIM) only monitors selected ions associated with a specific substance. This is done on the assumption that at a given retention time, a set of ions is characteristic of a certain compound. This is a fast and efficient analysis, especially if the analyst has previous information about a sample or is only looking for a few specific substances. When the amount of information collected about the ions in a given gas chromatographic peak decreases, the sensitivity of the analysis increases. So, SIM analysis allows for a smaller quantity of a compound to be detected and measured, but the degree of certainty about the identity of that compound is reduced. === Full scan MS === When collecting data in the full scan mode, a target range of mass fragments is determined and put into the instrument's method. An example of a typical broad range of mass fragments to monitor would be m/z 50 to m/z 400. The determination of what range to use is largely dictated by what one anticipates being in the sample while being cognizant of the solvent and other possible interferences. A MS should not be set to look for mass fragments too low or else one may detect air (found as m/z 28 due to nitrogen), carbon dioxide (m/z 44) or other possible interference. Additionally if one is to use a large scan range then sensitivity of the instrument is decreased due to performing fewer scans per second since each scan will have to detect a wide range of mass fragments. Full scan is useful in determining unknown compounds in a sample. It provides more information than SIM when it comes to confirming or resolving compounds in a sample. During instrument method development it may be common to first analyze test solutions in full scan mode to determine the retention time and the mass fragment fingerprint before moving to a SIM instrument method. === Selective ion monitoring === In selective ion monitoring (SIM) certain ion fragments are entered into the instrument method and only those mass fragments are detected by the mass spectrometer. The advantages of SIM are that the detection limit is lower since the instrument is only looking at a small number of fragments (e.g. three fragments) during each scan. More scans can take place each second. Since only a few mass fragments of interest are being monitored, matrix interferences are typically lower. To additionally confirm the likelihood of a potentially positive result, it is relatively important to be sure that the ion ratios of the various mass fragments are comparable to a known reference standard. == Applications == === Environmental monitoring and cleanup === GC–MS is becoming the tool of choice for tracking organic pollutants in the environment. The cost of GC–MS equipment has decreased significantly, and the reliability has increased at the same time, which has contributed to its increased adoption in environmental studies. === Criminal forensics === GC–MS can analyze the particles from a human body in order to help link a criminal to a crime. The analysis of fire debris using GC–MS is well established, and there is even an established American Society for Testing and Materials (ASTM) standard for fire debris analysis. 
GCMS/MS is especially useful here as samples often contain very complex matrices, and results used in court need to be highly accurate. === Law enforcement === GC–MS is increasingly used for detection of illegal narcotics, and may eventually supplant drug-sniffing dogs.[1] A simple and selective GC–MS method for detecting marijuana usage was recently developed by the Robert Koch Institute in Germany. It involves identifying an acid metabolite of tetrahydrocannabinol (THC), the active ingredient in marijuana, in urine samples by employing derivatization in the sample preparation. GC–MS is also commonly used in forensic toxicology to find drugs and/or poisons in biological specimens of suspects, victims, or the deceased. In drug screening, GC–MS methods frequently utilize liquid-liquid extraction as a part of sample preparation, in which target compounds are extracted from blood plasma. === Sports anti-doping analysis === GC–MS is the main tool used in sports anti-doping laboratories to test athletes' urine samples for prohibited performance-enhancing drugs, for example anabolic steroids. === Security === A post–September 11 development, explosive detection systems have become a part of all US airports. These systems run on a host of technologies, many of them based on GC–MS. There are only three manufacturers certified by the FAA to provide these systems, one of which is Thermo Detection (formerly Thermedics), which produces the EGIS, a GC–MS-based line of explosives detectors. The other two manufacturers are Barringer Technologies, now owned by Smith's Detection Systems, and Ion Track Instruments, part of General Electric Infrastructure Security Systems. === Chemical warfare agent detection === As part of the post-September 11 drive towards increased capability in homeland security and public health preparedness, traditional GC–MS units with transmission quadrupole mass spectrometers, as well as those with cylindrical ion trap (CIT-MS) and toroidal ion trap (T-ITMS) mass spectrometers have been modified for field portability and near real-time detection of chemical warfare agents (CWA) such as sarin, soman, and VX. These complex and large GC–MS systems have been modified and configured with resistively heated low thermal mass (LTM) gas chromatographs that reduce analysis time to less than ten percent of the time required in traditional laboratory systems. Additionally, the systems are smaller, and more mobile, including units that are mounted in mobile analytical laboratories (MAL), such as those used by the United States Marine Corps Chemical and Biological Incident Response Force MAL and other similar laboratories, and systems that are hand-carried by two-person teams or individuals, much ado to the smaller mass detectors. Depending on the system, the analytes can be introduced via liquid injection, desorbed from sorbent tubes through a thermal desorption process, or with solid-phase micro extraction (SPME). === Chemical engineering === GC–MS is used for the analysis of unknown organic compound mixtures. One critical use of this technology is the use of GC–MS to determine the composition of bio-oils processed from raw biomass. GC–MS is also utilized in the identification of continuous phase component in a smart material, magnetorheological (MR) fluid. === Food, beverage and perfume analysis === Foods and beverages contain numerous aromatic compounds, some naturally present in the raw materials and some forming during processing. 
GC–MS is extensively used for the analysis of these compounds which include esters, fatty acids, alcohols, aldehydes, terpenes etc. It is also used to detect and measure contaminants from spoilage or adulteration which may be harmful and which is often controlled by governmental agencies, for example pesticides. === Astrochemistry === Several GC–MS systems have left earth. Two were brought to Mars by the Viking program. Venera 11 and 12 and Pioneer Venus analysed the atmosphere of Venus with GC–MS. The Huygens probe of the Cassini–Huygens mission landed one GC–MS on Saturn's largest moon, Titan. The MSL Curiosity rover's Sample analysis at Mars (SAM) instrument contains both a gas chromatograph and quadrupole mass spectrometer that can be used in tandem as a GC–MS. The material in the comet 67P/Churyumov–Gerasimenko was analysed by the Rosetta mission with a chiral GC–MS in 2014. === Medicine === Dozens of congenital metabolic diseases also known as inborn errors of metabolism (IEM) are now detectable by newborn screening tests, especially the testing using gas chromatography–mass spectrometry. GC–MS can determine compounds in urine even in minor concentration. These compounds are normally not present but appear in individuals suffering with metabolic disorders. This is increasingly becoming a common way to diagnose IEM for earlier diagnosis and institution of treatment eventually leading to a better outcome. It is now possible to test a newborn for over 100 genetic metabolic disorders by a urine test at birth based on GC–MS. In combination with isotopic labeling of metabolic compounds, the GC–MS is used for determining metabolic activity. Most applications are based on the use of 13C as the labeling and the measurement of 13C-12C ratios with an isotope ratio mass spectrometer (IRMS); an MS with a detector designed to measure a few select ions and return values as ratios. == See also == Capillary electrophoresis–mass spectrometry Ion-mobility spectrometry–mass spectrometry Liquid chromatography–mass spectrometry Prolate trochoidal mass spectrometer Pyrolysis–gas chromatography–mass spectrometry == References == == Bibliography == == External links == Gas+chromatography-mass+spectrometry at the U.S. National Library of Medicine Medical Subject Headings (MeSH) Golm Metabolome Database, a mass spectral reference database of plant metabolites
Wikipedia/Gas_chromatography–mass_spectrometry
In engineering, physics, and chemistry, the study of transport phenomena concerns the exchange of mass, energy, charge, momentum and angular momentum between observed and studied systems. While it draws from fields as diverse as continuum mechanics and thermodynamics, it places a heavy emphasis on the commonalities between the topics covered. Mass, momentum, and heat transport all share a very similar mathematical framework, and the parallels between them are exploited in the study of transport phenomena to draw deep mathematical connections that often provide very useful tools in the analysis of one field that are directly derived from the others. The fundamental analysis in all three subfields of mass, heat, and momentum transfer are often grounded in the simple principle that the total sum of the quantities being studied must be conserved by the system and its environment. Thus, the different phenomena that lead to transport are each considered individually with the knowledge that the sum of their contributions must equal zero. This principle is useful for calculating many relevant quantities. For example, in fluid mechanics, a common use of transport analysis is to determine the velocity profile of a fluid flowing through a rigid volume. Transport phenomena are ubiquitous throughout the engineering disciplines. Some of the most common examples of transport analysis in engineering are seen in the fields of process, chemical, biological, and mechanical engineering, but the subject is a fundamental component of the curriculum in all disciplines involved in any way with fluid mechanics, heat transfer, and mass transfer. It is now considered to be a part of the engineering discipline as much as thermodynamics, mechanics, and electromagnetism. Transport phenomena encompass all agents of physical change in the universe. Moreover, they are considered to be fundamental building blocks which developed the universe, and which are responsible for the success of all life on Earth. However, the scope here is limited to the relationship of transport phenomena to artificial engineered systems. == Overview == In physics, transport phenomena are all irreversible processes of statistical nature stemming from the random continuous motion of molecules, mostly observed in fluids. Every aspect of transport phenomena is grounded in two primary concepts : the conservation laws, and the constitutive equations. The conservation laws, which in the context of transport phenomena are formulated as continuity equations, describe how the quantity being studied must be conserved. The constitutive equations describe how the quantity in question responds to various stimuli via transport. Prominent examples include Fourier's law of heat conduction and the Navier–Stokes equations, which describe, respectively, the response of heat flux to temperature gradients and the relationship between fluid flux and the forces applied to the fluid. These equations also demonstrate the deep connection between transport phenomena and thermodynamics, a connection that explains why transport phenomena are irreversible. Almost all of these physical phenomena ultimately involve systems seeking their lowest energy state in keeping with the principle of minimum energy. As they approach this state, they tend to achieve true thermodynamic equilibrium, at which point there are no longer any driving forces in the system and transport ceases. 
The various aspects of such equilibrium are directly connected to a specific transport: heat transfer is the system's attempt to achieve thermal equilibrium with its environment, just as mass and momentum transport move the system towards chemical and mechanical equilibrium. Examples of transport processes include heat conduction (energy transfer), fluid flow (momentum transfer), molecular diffusion (mass transfer), radiation and electric charge transfer in semiconductors. Transport phenomena have wide application. For example, in solid state physics, the motion and interaction of electrons, holes and phonons are studied under "transport phenomena". Another example is in biomedical engineering, where some transport phenomena of interest are thermoregulation, perfusion, and microfluidics. In chemical engineering, transport phenomena are studied in reactor design, analysis of molecular or diffusive transport mechanisms, and metallurgy. The transport of mass, energy, and momentum can be affected by the presence of external sources: An odor dissipates more slowly (and may intensify) when the source of the odor remains present. The rate of cooling of a solid that is conducting heat depends on whether a heat source is applied. The gravitational force acting on a rain drop counteracts the resistance or drag imparted by the surrounding air. == Commonalities among phenomena == An important principle in the study of transport phenomena is analogy between phenomena. === Diffusion === There are some notable similarities in equations for momentum, energy, and mass transfer which can all be transported by diffusion, as illustrated by the following examples: Mass: the spreading and dissipation of odors in air is an example of mass diffusion. Energy: the conduction of heat in a solid material is an example of heat diffusion. Momentum: the drag experienced by a rain drop as it falls in the atmosphere is an example of momentum diffusion (the rain drop loses momentum to the surrounding air through viscous stresses and decelerates). The molecular transfer equations of Newton's law for fluid momentum, Fourier's law for heat, and Fick's law for mass are very similar. One can convert from one transport coefficient to another in order to compare all three different transport phenomena. A great deal of effort has been devoted in the literature to developing analogies among these three transport processes for turbulent transfer so as to allow prediction of one from any of the others. The Reynolds analogy assumes that the turbulent diffusivities are all equal and that the molecular diffusivities of momentum (μ/ρ) and mass (DAB) are negligible compared to the turbulent diffusivities. When liquids are present and/or drag is present, the analogy is not valid. Other analogies, such as von Karman's and Prandtl's, usually result in poor relations. The most successful and most widely used analogy is the Chilton and Colburn J-factor analogy. This analogy is based on experimental data for gases and liquids in both the laminar and turbulent regimes. Although it is based on experimental data, it can be shown to satisfy the exact solution derived from laminar flow over a flat plate. All of this information is used to predict transfer of mass. 
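The mathematical similarity noted above is commonly summarized in transport phenomena textbooks by writing the three one-dimensional flux laws in the same gradient form; the following block is such a textbook-style summary rather than material from this article.

```latex
% Standard one-dimensional gradient-flux laws (textbook forms, shown for illustration):
% momentum (Newton's law of viscosity), heat (Fourier's law), and mass (Fick's first law).
\tau_{zx} = -\nu \,\frac{\partial (\rho v_x)}{\partial z}, \qquad
q''      = -\alpha \,\frac{\partial (\rho c_p T)}{\partial z}, \qquad
J_{A}    = -D_{AB} \,\frac{\partial c_A}{\partial z}
% Here \nu, \alpha, and D_{AB} (all with units of m^2/s) are the momentum, thermal, and
% mass diffusivities, so the three laws share the same mathematical form: a flux proportional
% to the gradient of a concentration-like quantity.
```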
=== Onsager reciprocal relations === In fluid systems described in terms of temperature, matter density, and pressure, it is known that temperature differences lead to heat flows from the warmer to the colder parts of the system; similarly, pressure differences will lead to matter flow from high-pressure to low-pressure regions (a "reciprocal relation"). What is remarkable is the observation that, when both pressure and temperature vary, temperature differences at constant pressure can cause matter flow (as in convection) and pressure differences at constant temperature can cause heat flow. The heat flow per unit of pressure difference and the density (matter) flow per unit of temperature difference are equal. This equality was shown to be necessary by Lars Onsager using statistical mechanics as a consequence of the time reversibility of microscopic dynamics. The theory developed by Onsager is much more general than this example and capable of treating more than two thermodynamic forces at once. == Momentum transfer == In momentum transfer, the fluid is treated as a continuous distribution of matter. The study of momentum transfer, or fluid mechanics, can be divided into two branches: fluid statics (fluids at rest), and fluid dynamics (fluids in motion). When a fluid is flowing in the x-direction parallel to a solid surface, the fluid has x-directed momentum, and its concentration is υxρ. By random diffusion of molecules there is an exchange of molecules in the z-direction. Hence the x-directed momentum has been transferred in the z-direction from the faster- to the slower-moving layer. The equation for momentum transfer is Newton's law of viscosity, written as follows: \tau_{zx} = -\rho \nu \,\frac{\partial \upsilon_{x}}{\partial z}, where τzx is the flux of x-directed momentum in the z-direction, ν is μ/ρ, the momentum diffusivity, z is the distance of transport or diffusion, ρ is the density, and μ is the dynamic viscosity. Newton's law of viscosity is the simplest relationship between the flux of momentum and the velocity gradient. It may be useful to note that this is an unconventional use of the symbol τzx; the indices are reversed as compared with standard usage in solid mechanics, and the sign is reversed. == Mass transfer == When a system contains two or more components whose concentrations vary from point to point, there is a natural tendency for mass to be transferred, minimizing any concentration difference within the system. Mass transfer in a system is governed by Fick's first law: 'Diffusion flux from higher concentration to lower concentration is proportional to the gradient of the concentration of the substance and the diffusivity of the substance in the medium.' Mass transfer can take place due to different driving forces: mass can be transferred by the action of a pressure gradient (pressure diffusion); forced diffusion occurs because of the action of some external force; diffusion can be caused by temperature gradients (thermal diffusion); and diffusion can be caused by differences in chemical potential. This can be compared to Fick's law of diffusion, for a species A in a binary mixture consisting of A and B: J_{Ay} = -D_{AB}\,\frac{\partial C_{A}}{\partial y}, where D_{AB} is the diffusivity constant. == Heat transfer == Many important engineered systems involve heat transfer. Some examples are the heating and cooling of process streams, phase changes, distillation, etc.
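Before turning to heat transfer, the two flux laws above can be illustrated numerically. The following sketch uses invented property values and simply evaluates Newton's law of viscosity and Fick's first law.

```python
# Illustrative numerical sketch (values invented) of the two flux laws given above:
# Newton's law of viscosity for momentum and Fick's first law for mass.

# Momentum flux: tau_zx = -rho * nu * d(v_x)/dz
rho = 1000.0          # fluid density, kg/m^3 (water-like, hypothetical)
mu = 1.0e-3           # dynamic viscosity, Pa*s
nu = mu / rho         # momentum diffusivity, m^2/s
dvx_dz = 50.0         # velocity gradient, 1/s
tau_zx = -rho * nu * dvx_dz
print(f"momentum flux tau_zx = {tau_zx:.3f} Pa")    # -0.050 Pa

# Mass flux of species A in B: J_Ay = -D_AB * dC_A/dy
D_AB = 2.0e-9         # diffusivity of A in B, m^2/s (typical liquid-phase order of magnitude)
dCA_dy = -500.0       # concentration gradient, mol/m^4 (concentration falls along +y)
J_Ay = -D_AB * dCA_dy
print(f"mass flux J_Ay = {J_Ay:.2e} mol/(m^2*s)")   # 1.00e-06, directed down the gradient
```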
The basic principle is Fourier's law, which is expressed as follows for a static system: q'' = -k\,\frac{dT}{dx}. The net flux of heat through a system equals the conductivity times the rate of change of temperature with respect to position. For convective transport involving turbulent flow, complex geometries, or difficult boundary conditions, the heat transfer may be represented by a heat transfer coefficient: Q = h \cdot A \cdot \Delta T, where A is the surface area, ΔT is the temperature driving force, Q is the heat flow per unit time, and h is the heat transfer coefficient. Within heat transfer, two principal types of convection can occur: Forced convection can occur in both laminar and turbulent flow. In the situation of laminar flow in circular tubes, several dimensionless numbers are used such as Nusselt number, Reynolds number, and Prandtl number. The commonly used equation is Nu_{a} = \frac{h_{a}D}{k}. Natural or free convection is a function of Grashof and Prandtl numbers. The complexities of free convection heat transfer make it necessary to mainly use empirical relations from experimental data. Heat transfer is analyzed in packed beds, nuclear reactors and heat exchangers. == Heat and mass transfer analogy == The heat and mass analogy allows solutions for mass transfer problems to be obtained from known solutions to heat transfer problems. It arises from similar non-dimensional governing equations between heat and mass transfer. === Derivation === The non-dimensional energy equation for fluid flow in a boundary layer can simplify to the following, when heating from viscous dissipation and heat generation can be neglected: u^{*}\frac{\partial T^{*}}{\partial x^{*}} + v^{*}\frac{\partial T^{*}}{\partial y^{*}} = \frac{1}{Re_{L}Pr}\frac{\partial^{2}T^{*}}{\partial y^{*2}}, where u^{*} and v^{*} are the velocities in the x and y directions respectively normalized by the free stream velocity, x^{*} and y^{*} are the x and y coordinates non-dimensionalized by a relevant length scale, Re_{L} is the Reynolds number, Pr is the Prandtl number, and T^{*} is the non-dimensional temperature, which is defined by the local, minimum, and maximum temperatures: T^{*} = \frac{T - T_{min}}{T_{max} - T_{min}}. The non-dimensional species transport equation for fluid flow in a boundary layer can be given as the following, assuming no bulk species generation: u^{*}\frac{\partial C_{A}^{*}}{\partial x^{*}} + v^{*}\frac{\partial C_{A}^{*}}{\partial y^{*}} = \frac{1}{Re_{L}Sc}\frac{\partial^{2}C_{A}^{*}}{\partial y^{*2}}, where C_{A}^{*} is the non-dimensional concentration, and Sc is the Schmidt number. Transport of heat is driven by temperature differences, while transport of species is due to concentration differences. They differ by the relative diffusion of their transport compared to the diffusion of momentum.
For heat, the comparison is between viscous diffusivity (ν) and thermal diffusivity (α), given by the Prandtl number. Meanwhile, for mass transfer, the comparison is between viscous diffusivity (ν) and mass diffusivity (D), given by the Schmidt number. In some cases direct analytic solutions can be found from these equations for the Nusselt and Sherwood numbers. In cases where experimental results are used, one can assume these equations underlie the observed transport. At an interface, the boundary conditions for both equations are also similar. For heat transfer at an interface, the no-slip condition allows us to equate conduction with convection, thus equating Fourier's law and Newton's law of cooling: q'' = k\,\frac{dT}{dy} = h(T_{s} - T_{b}), where q'' is the heat flux, k is the thermal conductivity, h is the heat transfer coefficient, and the subscripts s and b denote the surface and bulk values respectively. For mass transfer at an interface, we can equate Fick's law with Newton's law for convection, yielding: J = D\,\frac{dC}{dy} = h_{m}(C_{m} - C_{b}), where J is the mass flux [kg/(s·m²)], D is the diffusivity of species a in fluid b, and h_{m} is the mass transfer coefficient. As we can see, q'' and J are analogous, k and D are analogous, while T and C are analogous. === Implementing the Analogy === Because the Nu and Sh equations are derived from these analogous governing equations, one can directly swap the Nu and Sh and the Pr and Sc numbers to convert these equations between mass and heat. In many situations, such as flow over a flat plate, the Nu and Sh numbers are functions of the Pr and Sc numbers to some exponent n. Therefore, one can directly calculate these numbers from one another using: \frac{Nu}{Sh} = \frac{Pr^{n}}{Sc^{n}}, where n = 1/3 can be used in most cases, which comes from the analytical solution for the Nusselt number for laminar flow over a flat plate. For best accuracy, n should be adjusted where correlations have a different exponent. We can take this further by substituting into this equation the definitions of the heat transfer coefficient, mass transfer coefficient, and Lewis number, yielding: \frac{h}{h_{m}} = \frac{k}{D\,Le^{n}} = \rho C_{p} Le^{1-n}. For fully developed turbulent flow, with n = 1/3, this becomes the Chilton–Colburn J-factor analogy. Said analogy also relates viscous forces and heat transfer, like the Reynolds analogy. === Limitations === The analogy between heat transfer and mass transfer is strictly limited to binary diffusion in dilute (ideal) solutions for which the mass transfer rates are low enough that mass transfer has no effect on the velocity field. The concentration of the diffusing species must be low enough that the chemical potential gradient is accurately represented by the concentration gradient (thus, the analogy has limited application to concentrated liquid solutions).
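As a numerical illustration of the conversion described in the implementation subsection above (and before the limitations are discussed further), the following sketch converts a Nusselt number into a Sherwood number and a mass transfer coefficient; the property values are typical textbook figures and the correlation result is invented.

```python
# Illustrative sketch (values invented) of the heat-mass analogy described above:
# converting a Nusselt-number result into a Sherwood number and a mass transfer
# coefficient using Nu/Sh = Pr^n / Sc^n with n = 1/3.

n = 1.0 / 3.0
Pr = 0.71            # Prandtl number of air (typical value)
Sc = 0.61            # Schmidt number of water vapor in air (typical value)
Nu = 40.0            # Nusselt number from a heat transfer correlation (hypothetical)
L = 0.1              # characteristic length, m
k = 0.026            # thermal conductivity of air, W/(m*K)
D_AB = 2.5e-5        # diffusivity of water vapor in air, m^2/s

Sh = Nu * (Sc / Pr) ** n        # rearranged from Nu/Sh = Pr^n / Sc^n
h = Nu * k / L                  # heat transfer coefficient from Nu = h*L/k, W/(m^2*K)
h_m = Sh * D_AB / L             # mass transfer coefficient from Sh = h_m*L/D_AB, m/s

print(f"Sh = {Sh:.1f}, h = {h:.1f} W/(m^2*K), h_m = {h_m:.4f} m/s")
```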
When the rate of mass transfer is high or the concentration of the diffusing species is not low, corrections to the low-rate heat transfer coefficient can sometimes help. Further, in multicomponent mixtures, the transport of one species is affected by the chemical potential gradients of other species. The heat and mass analogy may also break down in cases where the governing equations differ substantially. For instance, situations with substantial contributions from generation terms in the flow, such as bulk heat generation or bulk chemical reactions, may cause solutions to diverge. === Applications of the Heat-Mass Analogy === The analogy is useful for both using heat and mass transport to predict one another, or for understanding systems which experience simultaneous heat and mass transfer. For example, predicting heat transfer coefficients around turbine blades is challenging and is often done through measuring evaporating of a volatile compound and using the analogy. Many systems also experience simultaneous mass and heat transfer, and particularly common examples occur in processes with phase change, as the enthalpy of phase change often substantially influences heat transfer. Such examples include: evaporation at a water surface, transport of vapor in the air gap above a membrane distillation desalination membrane, and HVAC dehumidification equipment that combine heat transfer and selective membranes. == Applications == === Pollution === The study of transport processes is relevant for understanding the release and distribution of pollutants into the environment. In particular, accurate modeling can inform mitigation strategies. Examples include the control of surface water pollution from urban runoff, and policies intended to reduce the copper content of vehicle brake pads in the U.S. == See also == Constitutive equation Continuity equation Wave propagation Pulse Action potential Bioheat transfer == References == == External links == Transport Phenomena Archive Archived 2017-10-08 at the Wayback Machine in the Teaching Archives of the Materials Digital Library Pathway "Some Classical Transport Phenomena Problems with Solutions – Fluid Mechanics". "Some Classical Transport Phenomena Problems with Solutions – Heat Transfer". "Some Classical Transport Phenomena Problems with Solutions – Mass Transfer".
Wikipedia/Transport_phenomena_(engineering_&_physics)
A linear response function describes the input-output relationship of a signal transducer, such as a radio turning electromagnetic waves into music or a neuron turning synaptic input into a response. Because of its many applications in information theory, physics and engineering there exist alternative names for specific linear response functions such as susceptibility, impulse response or impedance; see also transfer function. The concept of a Green's function or fundamental solution of an ordinary differential equation is closely related. == Mathematical definition == Denote the input of a system by h(t) (e.g. a force), and the response of the system by x(t) (e.g. a position). Generally, the value of x(t) will depend not only on the present value of h(t), but also on past values. Approximately, x(t) is a weighted sum of the previous values of h(t′), with the weights given by the linear response function χ(t − t′): x(t) = \int_{-\infty}^{t} dt'\,\chi(t - t')\,h(t') + \cdots. The explicit term on the right-hand side is the leading order term of a Volterra expansion for the full nonlinear response. If the system in question is highly non-linear, higher order terms in the expansion, denoted by the dots, become important and the signal transducer cannot adequately be described just by its linear response function. The complex-valued Fourier transform χ̃(ω) of the linear response function is very useful as it describes the output of the system if the input is a sine wave h(t) = h_{0}\sin(\omega t) with frequency ω. The output reads x(t) = |\tilde{\chi}(\omega)|\,h_{0}\sin(\omega t + \arg\tilde{\chi}(\omega)), with amplitude gain |χ̃(ω)| and phase shift arg χ̃(ω). == Example == Consider a damped harmonic oscillator with input given by an external driving force h(t): \ddot{x}(t) + \gamma\dot{x}(t) + \omega_{0}^{2}x(t) = h(t). The complex-valued Fourier transform of the linear response function is given by \tilde{\chi}(\omega) = \frac{\tilde{x}(\omega)}{\tilde{h}(\omega)} = \frac{1}{\omega_{0}^{2} - \omega^{2} + i\gamma\omega}. The amplitude gain is given by the magnitude of the complex number χ̃(ω), and the phase shift by the arctan of the imaginary part of the function divided by the real one. From this representation, we see that for small γ the Fourier transform χ̃(ω) of the linear response function yields a pronounced maximum ("resonance") at the frequency ω ≈ ω0. The linear response function for a harmonic oscillator is mathematically identical to that of an RLC circuit.
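The oscillator example above lends itself to a short numerical sketch. The following Python code evaluates the response function χ̃(ω) = 1/(ω0² − ω² + iγω) over a range of frequencies and locates the resonance; the parameter values are invented.

```python
# Illustrative sketch of the harmonic-oscillator response function discussed above.
# Parameter values are invented for demonstration.
import numpy as np

omega0 = 2.0 * np.pi * 10.0     # natural frequency, rad/s
gamma = 0.5                     # damping rate, 1/s
h0 = 1.0                        # drive amplitude

def chi(omega):
    """Complex frequency-domain linear response function of the damped oscillator."""
    return 1.0 / (omega0**2 - omega**2 + 1j * gamma * omega)

omega = np.linspace(0.1, 2.0 * omega0, 2000)
gain = np.abs(chi(omega))       # amplitude gain |chi(omega)|
phase = np.angle(chi(omega))    # phase shift arg chi(omega)

# The steady-state output for a sinusoidal drive h(t) = h0*sin(omega*t) is
# x(t) = |chi(omega)| * h0 * sin(omega*t + arg chi(omega)).
peak = omega[np.argmax(gain)]
print(f"resonance: peak response at {peak:.2f} rad/s vs omega0 = {omega0:.2f} rad/s")
```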
The width of the maximum, Δω, typically is much smaller than ω0, so that the quality factor Q := ω0/Δω can be extremely large. == Kubo formula == The exposition of linear response theory, in the context of quantum statistics, can be found in a paper by Ryogo Kubo. This defines particularly the Kubo formula, which considers the general case that the "force" h(t) is a perturbation of the basic operator of the system, the Hamiltonian, \hat{H}_{0} \to \hat{H}_{0} - h(t')\hat{B}(t'), where \hat{B} corresponds to a measurable quantity as input, while the output x(t) is the perturbation of the thermal expectation of another measurable quantity \hat{A}(t). The Kubo formula then defines the quantum-statistical calculation of the susceptibility χ(t − t′) by a general formula involving only the mentioned operators. As a consequence of the principle of causality the complex-valued function χ̃(ω) has poles only in the lower half-plane. This leads to the Kramers–Kronig relations, which relate the real and the imaginary parts of χ̃(ω) by integration. The simplest example is once more the damped harmonic oscillator. == See also == Convolution Green–Kubo relations Fluctuation theorem Dispersion (optics) Lindbladian Semilinear response Green's function Impulse response Resolvent formalism Propagator == References == == External links == Linear Response Functions in Eva Pavarini, Erik Koch, Dieter Vollhardt, and Alexander Lichtenstein (eds.): DMFT at 25: Infinite Dimensions, Verlag des Forschungszentrum Jülich, 2014 ISBN 978-3-89336-953-9
Wikipedia/Linear_response_function
Conglomerate () is a sedimentary rock made up of rounded gravel-sized pieces of rock surrounded by finer-grained sediments (such as sand, silt, or clay). The larger fragments within conglomerate are called clasts, while the finer sediment surrounding the clasts is called the matrix. The clasts and matrix are typically cemented by calcium carbonate, iron oxide, silica, or hardened clay. Conglomerates form when rounded gravels deposited by water or glaciers become solidified and cemented by pressure over time. They can be found in sedimentary rock sequences of all ages but probably make up less than 1 percent by weight of all sedimentary rocks. They are closely related to sandstones in origin, and exhibit many of the same types of sedimentary structures, such as tabular and trough cross-bedding and graded bedding. Fanglomerates are poorly sorted, matrix-rich conglomerates that originated as debris flows on alluvial fans and likely contain the largest accumulations of gravel in the geologic record. Breccias are similar to conglomerates, but have clasts that have angular (rather than rounded) shapes. == Classification of conglomerates == Conglomerates may be named and classified by the: Amount and type of matrix present Composition of gravel-size clasts they contain Size range of gravel-size clasts present The classification method depends on the type and detail of research being conducted. A sedimentary rock composed largely of gravel is first named according to the roundness of the gravel. If the gravel clasts that comprise it are largely well-rounded to subrounded, it is a conglomerate. If the gravel clasts that comprise it are largely angular, it is a breccia. Such breccias can be called sedimentary breccias to differentiate them from other types of breccia, e.g. volcanic and fault breccias. Sedimentary rocks that contain a mixture of rounded and angular gravel clasts are sometimes called breccio-conglomerate. === Texture === Conglomerates contain at least 30% of rounded to subangular clasts larger than 2 mm (0.079 in) in diameter, e.g., granules, pebbles, cobbles, and boulders. However, conglomerates are rarely composed entirely of gravel-size clasts. Typically, the space between the gravel-size clasts is filled by a mixture composed of varying amounts of silt, sand, and clay, known as matrix. If the individual gravel clasts in a conglomerate are separated from each other by an abundance of matrix such that they are not in contact with each other and float within the matrix, it is called a paraconglomerate. Paraconglomerates are also often unstratified and can contain more matrix than gravel clasts. If the gravel clasts of a conglomerate are in contact with each other, it is called an orthoconglomerate. Unlike paraconglomerates, orthoconglomerates are typically cross-bedded and often well-cemented and lithified by either calcite, hematite, quartz, or clay. The differences between paraconglomerates and orthoconglomerates reflect differences in how they are deposited. Paraconglomerates are commonly either glacial tills or debris flow deposits. Orthoconglomerates are typically associated with aqueous currents. === Clast composition === Conglomerates are also classified according to the composition of their clasts. A conglomerate or any clastic sedimentary rock that consists of a single rock or mineral is known as either a monomict, monomictic, oligomict, or oligomictic conglomerate. 
If the conglomerate consists of two or more different types of rocks, minerals, or combination of both, it is known as either a polymict or polymictic conglomerate. If a polymictic conglomerate contains an assortment of the clasts of metastable and unstable rocks and minerals, it is called either a petromict or petromictic conglomerate. In addition, conglomerates are classified by source as indicated by the lithology of the gravel-size clasts If these clasts consist of rocks and minerals that are significantly different in lithology from the enclosing matrix and, thus, older and derived from outside the basin of deposition, the conglomerate is known as an extraformational conglomerate. If these clasts consist of rocks and minerals that are identical to or consistent with the lithology of the enclosing matrix and, thus, penecontemporaneous and derived from within the basin of deposition, the conglomerate is known as an intraformational conglomerate. Two recognized types of intraformational conglomerates are shale-pebble and flat-pebble conglomerates. A shale-pebble conglomerate is a conglomerate that is composed largely of clasts of rounded mud chips and pebbles held together by clay minerals and created by erosion within environments such as within a river channel or along a lake margin. Flat-pebble conglomerates (edgewise conglomerates) are conglomerates that consist of relatively flat clasts of lime mud created by either storms or tsunami eroding a shallow sea bottom or tidal currents eroding tidal flats along a shoreline. === Clast size === Finally, conglomerates are often differentiated and named according to the dominant clast size comprising them. In this classification, a conglomerate composed largely of granule-size clasts would be called a granule conglomerate; a conglomerate composed largely of pebble-size clasts would be called a pebble conglomerate; and a conglomerate composed largely of cobble-size clasts would be called a cobble conglomerate. == Sedimentary environments == Conglomerates are deposited in a variety of sedimentary environments. === Deepwater marine === In turbidites, the basal part of a bed is typically coarse-grained and sometimes conglomeratic. In this setting, conglomerates are normally very well sorted, well-rounded and often with a strong A-axis type imbrication of the clasts. === Shallow marine === Conglomerates are normally present at the base of sequences laid down during marine transgressions above an unconformity, and are known as basal conglomerates. They represent the position of the shoreline at a particular time and are diachronous. === Fluvial === Conglomerates deposited in fluvial environments are typically well rounded and poorly sorted. Clasts of this size are carried as bedload and only at times of high flow-rate. The maximum clast size decreases as the clasts are transported further due to attrition, so conglomerates are more characteristic of immature river systems. In the sediments deposited by mature rivers, conglomerates are generally confined to the basal part of a channel fill where they are known as pebble lags. Conglomerates deposited in a fluvial environment often have an AB-plane type imbrication. === Alluvial === Alluvial deposits form in areas of high relief and are typically coarse-grained. At mountain fronts individual alluvial fans merge to form braidplains and these two environments are associated with the thickest deposits of conglomerates. 
The bulk of conglomerates deposited in this setting are clast-supported with a strong AB-plane imbrication. Matrix-supported conglomerates, as a result of debris-flow deposition, are quite commonly associated with many alluvial fans. When such conglomerates accumulate within an alluvial fan, in rapidly eroding (e.g., desert) environments, the resulting rock unit is often called a fanglomerate. === Glacial === Glaciers carry a lot of coarse-grained material and many glacial deposits are conglomeratic. tillites, the sediments deposited directly by a glacier, are typically poorly sorted, matrix-supported conglomerates. The matrix is generally fine-grained, consisting of finely milled rock fragments. Waterlaid deposits associated with glaciers are often conglomeratic, forming structures such as eskers. == Examples == An example of conglomerate can be seen at Montserrat, near Barcelona. Here, erosion has created vertical channels that give the characteristic jagged shapes the mountain is named for (Montserrat literally means "jagged mountain"). The rock is strong enough to use as a building material, as in the Santa Maria de Montserrat Abbey. Another example, the Crestone Conglomerate, occurs in and near the town of Crestone, at the foot of the Sangre de Cristo Range in Colorado's San Luis Valley. The Crestone Conglomerate consists of poorly sorted fanglomerates that accumulated in prehistoric alluvial fans and related fluvial systems. Some of these rocks have hues of red and green. Conglomerate cliffs are found on the east coast of Scotland from Arbroath northwards along the coastlines of the former counties of Angus and Kincardineshire. Dunnottar Castle sits on a rugged promontory of conglomerate jutting into the North Sea just south of the town of Stonehaven. Copper Harbor Conglomerate is found both in the Keweenaw Peninsula and Isle Royale National Park in Lake Superior. Conglomerate may also be seen in the domed hills of Kata Tjuta, in Australia's Northern Territory or in the Buda Hills in Hungary. In the nineteenth century a thick layer of Pottsville conglomerate was recognized to underlie anthracite coal measures in Pennsylvania. === Examples on Mars === On Mars, slabs of conglomerate have been found at an outcrop named "Hottah", and have been interpreted by scientists as having formed in an ancient streambed. The gravels, which were discovered by NASA's Mars rover Curiosity, range from the size of sand particles to the size of golf balls. Analysis has shown that the pebbles were deposited by a stream that flowed at walking pace and was ankle- to hip-deep. == Metaconglomerate == Metamorphic alteration transforms conglomerate into metaconglomerate. == See also == Puddingstone Jasper conglomerate == References == == External links == Conglomerate at Cushendun, Northern Ireland Archived 2016-03-03 at the Wayback Machine
Wikipedia/Conglomerate_(geology)
The Journal of Fluid Mechanics is a peer-reviewed scientific journal in the field of fluid mechanics. It publishes original work on theoretical, computational, and experimental aspects of the subject. The journal is published by Cambridge University Press and retains a strong association with the University of Cambridge, in particular the Department of Applied Mathematics and Theoretical Physics (DAMTP). Until January 2020, volumes were published twice a month in a single-column B5 format, but the publication is now online-only with the same frequency. The journal was established in 1956 by George Batchelor, who remained the editor-in-chief for some forty years. He started out as the sole editor, but later a team of associate editors provided assistance in arranging the review of articles. Detlef Lohse is the author with the most papers published in the journal (169). == Editors == The following people have been editor (later, editor-in-chief) of the Journal of Fluid Mechanics: 1956–1996: George Batchelor (DAMTP) 1966–1983: Keith Moffatt (DAMTP) 1996–2000: David Crighton (DAMTP) 2000–2006: Tim Pedley (DAMTP) 2000–2010: Stephen H. Davis (Northwestern University) 2007–2022: Grae Worster (DAMTP) 2022–present: Colm-cille P. Caulfield (DAMTP) == See also == List of fluid mechanics journals == References == == Further reading == Batchelor, G. K. (1981). "Preoccupations of a journal editor". Journal of Fluid Mechanics. 106: 1. Bibcode:1981JFM...106....1B. doi:10.1017/S0022112081001493. S2CID 122525538. Davis, Steve; Pedley, Tim (2000). "Editorial". Journal of Fluid Mechanics. 415: 0. Bibcode:2000JFM...415S9514D. doi:10.1017/S0022112000009514. Davis, S. H.; Pedley, T. J. (2006). "Editorial: JFM at 50". Journal of Fluid Mechanics. 554: 1. Bibcode:2006JFM...554....1D. doi:10.1017/S0022112006009566. Huppert, H. E. (2006). 50 Years of Impact of JFM. == External links == Official website
Wikipedia/Journal_of_Fluid_Mechanics
In surface chemistry, disjoining pressure (symbol Πd) according to an IUPAC definition arises from an attractive interaction between two surfaces. For two flat and parallel surfaces, the value of the disjoining pressure (i.e., the force per unit area) can be calculated as the derivative of the Gibbs energy of interaction per unit area with respect to distance (in the direction normal to that of the interacting surfaces). There is also a related concept of disjoining force, which can be viewed as disjoining pressure times the surface area of the interacting surfaces. The concept of disjoining pressure was introduced by Derjaguin (1936) as the difference between the pressure in a region of a phase adjacent to a surface confining it, and the pressure in the bulk of this phase. == Description == Disjoining pressure can be expressed as: Π d = − 1 A ( ∂ G ∂ x ) T , V , A {\displaystyle \Pi _{d}=-{1 \over A}\left({\frac {\partial G}{\partial x}}\right)_{T,V,A}} where (in SI units): Πd - disjoining pressure (N/m2) A - the surface area of the interacting surfaces (m2) G - total Gibbs energy of the interaction of the two surfaces (J) x - distance (m) indices T, V and A signify that the temperature, volume, and the surface area remain constant in the derivative. Using the concept of the disjoining pressure, the pressure in a film can be viewed as: P = P 0 + Π d {\displaystyle P=P_{0}+\Pi _{d}} where: P - pressure in a film (Pa) P0 - pressure in the bulk of the same phase as that of the film (Pa) Disjoining pressure is interpreted as a sum of several interactions: dispersion forces, electrostatic forces between charged surfaces, interactions due to layers of neutral molecules adsorbed on the two surfaces, and the structural effects of the solvent. Classic theory predicts the disjoining pressure of a thin liquid film on a flat surface as follows, Π d = − A H 6 π δ 0 3 {\displaystyle \Pi _{d}=-{\frac {A_{H}}{6\pi \delta _{0}^{3}}}} where: AH - Hamaker constant (J) δ0 - liquid film thickness (m) For a solid-liquid-vapor system where the solid surface is structured, the disjoining pressure is affected by the solid surface profile, ζS, and the meniscus shape, ζL Π d ( x , ζ L ( x ) ) = ∫ d 2 ρ ∫ ζ L ( x ) − ζ S ( x + ρ ) + ∞ d z ω ( ρ , z ) {\displaystyle {\Pi _{d}(x,{\zeta _{\text{L}}(x)})}=\int d^{2}\rho {\int _{\zeta _{\text{L}}(x)-\zeta _{\text{S}}(x+\rho )}^{+\infty }}dz\omega (\rho ,z)} where: ω(ρ,z) - solid-liquid potential (J/m6) The meniscus shape can be determined by minimization of the total system free energy as follows δ W total = ∂ W total ∂ ζ L δ ζ L + ∂ W total ∂ ζ L ′ δ ζ L ′ = 0 {\displaystyle {\delta W_{\text{total}}}={{\partial W_{\text{total}}} \over {\partial {\zeta _{\text{L}}}}}\delta \zeta _{\text{L}}+{{\partial W_{\text{total}}} \over {\partial \zeta _{\text{L}}^{'}}}\delta \zeta _{\text{L}}^{'}=0} where: Wtotal - total system free energy including surface excess energy and free energy due to solid-liquid interactions (J/m2) ζL - meniscus shape (m) ζ'L - slope of meniscus shape (dimensionless) In the theory of liquid drops and films, the disjoining pressure can be shown to be related to the equilibrium liquid-solid contact angle θe through the relation cos θ e = 1 + 1 γ ∫ h 0 ∞ Π D ( h ) d h , {\displaystyle \cos \theta _{e}=1+{\frac {1}{\gamma }}\int _{h_{0}}^{\infty }\Pi _{D}(h)dh,} where γ is the liquid-vapor surface tension and h0 is the precursor film thickness. == See also == Capillary condensation Capillary pressure Hamaker constant Thin-film equation == References ==
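As a quick numerical illustration of the classic thin-film expression above, the following sketch evaluates the van der Waals disjoining pressure and the resulting film pressure P = P0 + Πd; the Hamaker constant, film thickness, and bulk pressure are illustrative assumptions, not values for any particular system.

```python
import math

def disjoining_pressure(A_H, delta0):
    """Classic van der Waals disjoining pressure of a flat thin film (Pa):
    Pi_d = -A_H / (6 * pi * delta0**3)."""
    return -A_H / (6.0 * math.pi * delta0**3)

# Illustrative (assumed) values: a typical-order Hamaker constant,
# a 10 nm film, and atmospheric bulk pressure.
A_H = 1.0e-20        # Hamaker constant, J
delta0 = 10.0e-9     # film thickness, m
P0 = 101_325.0       # bulk-phase pressure, Pa

Pi_d = disjoining_pressure(A_H, delta0)
P_film = P0 + Pi_d   # pressure in the film, P = P0 + Pi_d

print(f"Disjoining pressure: {Pi_d:.3e} Pa")
print(f"Film pressure:       {P_film:.3e} Pa")
```

With a positive Hamaker constant the disjoining pressure is negative (attractive), so the film pressure sits slightly below the bulk pressure; thinner films give much larger magnitudes because of the inverse-cube dependence on thickness.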
Wikipedia/Disjoining_force
In fluid mechanics (specifically lubrication theory), the Reynolds equation is a partial differential equation governing the pressure distribution of thin viscous fluid films. It was first derived by Osborne Reynolds in 1886. The classical Reynolds equation can be used to describe the pressure distribution in nearly any type of fluid film bearing, a bearing type in which the bounding bodies are fully separated by a thin layer of liquid or gas. == General usage == The general Reynolds equation is: ∂ ∂ x ( ρ h 3 12 μ ∂ p ∂ x ) + ∂ ∂ y ( ρ h 3 12 μ ∂ p ∂ y ) = ∂ ∂ x ( ρ h ( u a + u b ) 2 ) + ∂ ∂ y ( ρ h ( v a + v b ) 2 ) + ρ ( w a − w b ) − ρ u a ∂ h ∂ x − ρ v a ∂ h ∂ y + h ∂ ρ ∂ t {\displaystyle {\frac {\partial }{\partial x}}\left({\frac {\rho h^{3}}{12\mu }}{\frac {\partial p}{\partial x}}\right)+{\frac {\partial }{\partial y}}\left({\frac {\rho h^{3}}{12\mu }}{\frac {\partial p}{\partial y}}\right)={\frac {\partial }{\partial x}}\left({\frac {\rho h\left(u_{a}+u_{b}\right)}{2}}\right)+{\frac {\partial }{\partial y}}\left({\frac {\rho h\left(v_{a}+v_{b}\right)}{2}}\right)+\rho \left(w_{a}-w_{b}\right)-\rho u_{a}{\frac {\partial h}{\partial x}}-\rho v_{a}{\frac {\partial h}{\partial y}}+h{\frac {\partial \rho }{\partial t}}} Where: p {\displaystyle p} is fluid film pressure. x {\displaystyle x} and y {\displaystyle y} are the bearing width and length coordinates. z {\displaystyle z} is fluid film thickness coordinate. h {\displaystyle h} is fluid film thickness. μ {\displaystyle \mu } is fluid viscosity. ρ {\displaystyle \rho } is fluid density. u , v , w {\displaystyle u,v,w} are the bounding body velocities in x , y , z {\displaystyle x,y,z} respectively. a , b {\displaystyle a,b} are subscripts denoting the top and bottom bounding bodies respectively. The equation can either be used with consistent units or nondimensionalized. The Reynolds equation assumes: The fluid is Newtonian. Fluid viscous forces dominate over fluid inertia forces; this corresponds to a small Reynolds number. Fluid body forces are negligible. The variation of pressure across the fluid film is negligibly small (i.e. ∂ p ∂ z = 0 {\displaystyle {\frac {\partial p}{\partial z}}=0} ) The fluid film thickness is much less than the width and length, and thus curvature effects are negligible (i.e. h ≪ l {\displaystyle h\ll l} and h ≪ w {\displaystyle h\ll w} ). For some simple bearing geometries and boundary conditions, the Reynolds equation can be solved analytically. Often, however, the equation must be solved numerically. Frequently this involves discretizing the geometric domain and then applying a numerical technique such as the finite difference (FDM), finite volume (FVM), or finite element (FEM) method. == Derivation from Navier-Stokes == A full derivation of the Reynolds equation from the Navier-Stokes equation can be found in numerous lubrication textbooks. == Solution of Reynolds Equation == In general, the Reynolds equation has to be solved using numerical methods such as finite difference or finite element methods. In certain simplified cases, however, analytical or approximate solutions can be obtained. For the case of a rigid sphere on a flat geometry, in the steady state and with the half-Sommerfeld cavitation boundary condition, the 2-D Reynolds equation can be solved analytically. This solution was proposed by the Nobel Prize winner Pyotr Kapitsa. The half-Sommerfeld boundary condition was shown to be inaccurate, and this solution has to be used with care. In the case of the 1-D Reynolds equation, several analytical or semi-analytical solutions are available. 
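As a complement to these analytical special cases, the numerical route mentioned above (discretize the domain, then apply FDM, FVM, or FEM) can be sketched for the simplest situation: the steady, incompressible, iso-viscous 1-D equation d/dx(h³/(12μ) dp/dx) = (U/2) dh/dx for a linear slider. This is only a sketch; the geometry, sliding speed, and viscosity are assumed illustrative values, not taken from any reference.

```python
import numpy as np

# Minimal finite-difference sketch of the steady, incompressible, iso-viscous
# 1-D Reynolds equation  d/dx( h^3/(12*mu) * dp/dx ) = (U/2) * dh/dx
# for a linear slider bearing with ambient (gauge zero) pressure at both ends.
mu = 0.05                      # dynamic viscosity, Pa*s (assumed)
U = 1.0                        # sliding speed of the moving surface, m/s (assumed)
L = 0.1                        # bearing length, m (assumed)
h_in, h_out = 1.0e-4, 5.0e-5   # film thickness at inlet and outlet, m (assumed)

N = 201
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
h = np.linspace(h_in, h_out, N)        # converging linear wedge

A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0] = A[-1, -1] = 1.0              # Dirichlet p = 0 at both ends

for i in range(1, N - 1):
    # Poiseuille coefficients h^3/(12*mu) evaluated at the cell faces i +/- 1/2
    k_e = ((h[i] + h[i + 1]) / 2.0) ** 3 / (12.0 * mu)
    k_w = ((h[i] + h[i - 1]) / 2.0) ** 3 / (12.0 * mu)
    A[i, i - 1] = k_w / dx**2
    A[i, i + 1] = k_e / dx**2
    A[i, i] = -(k_e + k_w) / dx**2
    # Couette (wedge) source term on the right-hand side
    b[i] = 0.5 * U * (h[i + 1] - h[i - 1]) / (2.0 * dx)

p = np.linalg.solve(A, b)              # gauge pressure distribution, Pa
print(f"Peak film pressure {p.max():.3e} Pa at x = {x[np.argmax(p)]:.3f} m")
```

The converging wedge with a moving surface generates a positive pressure peak toward the narrow end, which is the basic load-carrying mechanism of a slider bearing; a production solver would refine the mesh, use a sparse linear solver, and add cavitation handling.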
In 1916, Martin obtained a closed-form solution for the minimum film thickness and pressure for a rigid cylinder and plane geometry. This solution is not accurate for the cases when the elastic deformation of the surfaces contributes considerably to the film thickness. In 1949, Grubin obtained an approximate solution for the so-called elasto-hydrodynamic lubrication (EHL) line contact problem, where he combined both elastic deformation and lubricant hydrodynamic flow. In this solution it was assumed that the pressure profile follows the Hertz solution. The model is therefore accurate at high loads, when the hydrodynamic pressure tends to be close to the Hertz contact pressure. == Applications == The Reynolds equation is used to model the pressure in many applications. For example: Ball bearings Air bearings Journal bearings Squeeze film dampers in aircraft gas turbines Human hip and knee joints Lubricated gear contacts == Reynolds Equation adaptations - Average Flow Model == In 1978, Patir and Cheng introduced an average flow model, which modifies the Reynolds equation to consider the effects of surface roughness on lubricated contacts. The average flow model spans the regimes of lubrication where the surfaces are close together and/or touching. The average flow model applies "flow factors" to adjust how easy it is for the lubricant to flow in the direction of sliding or perpendicular to it. They also presented terms for adjusting the contact shear calculation. In these regimes, the surface topography acts to direct the lubricant flow, which has been demonstrated to affect the lubricant pressure and thus the surface separation and contact friction. Several notable attempts have been made to take additional details of the contact into account in the simulation of fluid films in contacts. Leighton et al. presented a method for determining the flow factors needed for the average flow model from any measured surface. Harp and Salant extended the average flow model by considering the inter-asperity cavitation. Chengwei and Linqing used an analysis of the surface height probability distribution to remove one of the more complex terms from the average Reynolds equation, d h T ¯ / d h {\displaystyle d{\bar {h_{T}}}/dh} , and replace it with a flow factor referred to as the contact flow factor, ϕ h {\displaystyle \phi _{h}} . Knoll et al. calculated flow factors, taking into account the elastic deformation of the surfaces. Meng et al. also considered the elastic deformation of the contacting surfaces. The work of Patir and Cheng was a precursor to investigations of surface texturing in lubricated contacts, demonstrating how large-scale surface features generate micro-hydrodynamic lift that promotes film separation and reduces friction, but only when the contact conditions support this. The average flow model of Patir and Cheng is often coupled with the rough surface interaction model of Greenwood and Tripp for modelling the interaction of rough surfaces in loaded contacts. == References ==
Wikipedia/Reynolds_equation
Stokes flow (named after George Gabriel Stokes), also named creeping flow or creeping motion, is a type of fluid flow where advective inertial forces are small compared with viscous forces. The Reynolds number is low, i.e. R e ≪ 1 {\displaystyle \mathrm {Re} \ll 1} . This is a typical situation in flows where the fluid velocities are very slow, the viscosities are very large, or the length-scales of the flow are very small. Creeping flow was first studied to understand lubrication. In nature, this type of flow occurs in the swimming of microorganisms and sperm. In technology, it occurs in paint, MEMS devices, and in the flow of viscous polymers generally. The equations of motion for Stokes flow, called the Stokes equations, are a linearization of the Navier–Stokes equations, and thus can be solved by a number of well-known methods for linear differential equations. The primary Green's function of Stokes flow is the Stokeslet, which is associated with a singular point force embedded in a Stokes flow. From its derivatives, other fundamental solutions can be obtained. The Stokeslet was first derived by Oseen in 1927, although it was not named as such until 1953 by Hancock. The closed-form fundamental solutions for the generalized unsteady Stokes and Oseen flows associated with arbitrary time-dependent translational and rotational motions have been derived for the Newtonian and micropolar fluids. == Stokes equations == The equation of motion for Stokes flow can be obtained by linearizing the steady state Navier–Stokes equations. The inertial forces are assumed to be negligible in comparison to the viscous forces, and eliminating the inertial terms of the momentum balance in the Navier–Stokes equations reduces it to the momentum balance in the Stokes equations: ∇ ⋅ σ + f = 0 {\displaystyle {\boldsymbol {\nabla }}\cdot \sigma +\mathbf {f} ={\boldsymbol {0}}} where σ {\displaystyle \sigma } is the stress (sum of viscous and pressure stresses), and f {\displaystyle \mathbf {f} } an applied body force. The full Stokes equations also include an equation for the conservation of mass, commonly written in the form: ∂ ρ ∂ t + ∇ ⋅ ( ρ u ) = 0 {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0} where ρ {\displaystyle \rho } is the fluid density and u {\displaystyle \mathbf {u} } the fluid velocity. To obtain the equations of motion for incompressible flow, it is assumed that the density, ρ {\displaystyle \rho } , is a constant. Furthermore, occasionally one might consider the unsteady Stokes equations, in which the term ρ ∂ u ∂ t {\displaystyle \rho {\frac {\partial \mathbf {u} }{\partial t}}} is added to the left hand side of the momentum balance equation. === Properties === The Stokes equations represent a considerable simplification of the full Navier–Stokes equations, especially in the incompressible Newtonian case. They are the leading-order simplification of the full Navier–Stokes equations, valid in the distinguished limit R e → 0. {\displaystyle \mathrm {Re} \to 0.} Instantaneity A Stokes flow has no dependence on time other than through time-dependent boundary conditions. This means that, given the boundary conditions of a Stokes flow, the flow can be found without knowledge of the flow at any other time. Time-reversibility An immediate consequence of instantaneity, time-reversibility means that a time-reversed Stokes flow solves the same equations as the original Stokes flow. 
This property can sometimes be used (in conjunction with linearity and symmetry in the boundary conditions) to derive results about a flow without solving it fully. Time reversibility means that it is difficult to mix two fluids using creeping flow. While these properties are true for incompressible Newtonian Stokes flows, the non-linear and sometimes time-dependent nature of non-Newtonian fluids means that they do not hold in the more general case. ==== Stokes paradox ==== An interesting property of Stokes flow is known as the Stokes' paradox: that there can be no Stokes flow of a fluid around a disk in two dimensions; or, equivalently, the fact there is no non-trivial solution for the Stokes equations around an infinitely long cylinder. === Demonstration of time-reversibility === A Taylor–Couette system can create laminar flows in which concentric cylinders of fluid move past each other in an apparent spiral. A fluid such as corn syrup with high viscosity fills the gap between two cylinders, with colored regions of the fluid visible through the transparent outer cylinder. The cylinders are rotated relative to one another at a low speed, which together with the high viscosity of the fluid and thinness of the gap gives a low Reynolds number, so that the apparent mixing of colors is actually laminar and can then be reversed to approximately the initial state. This creates a dramatic demonstration of seemingly mixing a fluid and then unmixing it by reversing the direction of the mixer. == Incompressible flow of Newtonian fluids == In the common case of an incompressible Newtonian fluid, the Stokes equations take the (vectorized) form: μ ∇ 2 u − ∇ p + f = 0 ∇ ⋅ u = 0 {\displaystyle {\begin{aligned}\mu \nabla ^{2}\mathbf {u} -{\boldsymbol {\nabla }}p+\mathbf {f} &={\boldsymbol {0}}\\{\boldsymbol {\nabla }}\cdot \mathbf {u} &=0\end{aligned}}} where u {\displaystyle \mathbf {u} } is the velocity of the fluid, ∇ p {\displaystyle {\boldsymbol {\nabla }}p} is the gradient of the pressure, μ {\displaystyle \mu } is the dynamic viscosity, and f {\displaystyle \mathbf {f} } an applied body force. The resulting equations are linear in velocity and pressure, and therefore can take advantage of a variety of linear differential equation solvers. 
=== Cartesian coordinates === With the velocity vector expanded as u = ( u , v , w ) {\displaystyle \mathbf {u} =(u,v,w)} and similarly the body force vector f = ( f x , f y , f z ) {\displaystyle \mathbf {f} =(f_{x},f_{y},f_{z})} , we may write the vector equation explicitly, μ ( ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 + ∂ 2 u ∂ z 2 ) − ∂ p ∂ x + f x = 0 μ ( ∂ 2 v ∂ x 2 + ∂ 2 v ∂ y 2 + ∂ 2 v ∂ z 2 ) − ∂ p ∂ y + f y = 0 μ ( ∂ 2 w ∂ x 2 + ∂ 2 w ∂ y 2 + ∂ 2 w ∂ z 2 ) − ∂ p ∂ z + f z = 0 ∂ u ∂ x + ∂ v ∂ y + ∂ w ∂ z = 0 {\displaystyle {\begin{aligned}\mu \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right)-{\frac {\partial p}{\partial x}}+f_{x}&=0\\\mu \left({\frac {\partial ^{2}v}{\partial x^{2}}}+{\frac {\partial ^{2}v}{\partial y^{2}}}+{\frac {\partial ^{2}v}{\partial z^{2}}}\right)-{\frac {\partial p}{\partial y}}+f_{y}&=0\\\mu \left({\frac {\partial ^{2}w}{\partial x^{2}}}+{\frac {\partial ^{2}w}{\partial y^{2}}}+{\frac {\partial ^{2}w}{\partial z^{2}}}\right)-{\frac {\partial p}{\partial z}}+f_{z}&=0\\{\partial u \over \partial x}+{\partial v \over \partial y}+{\partial w \over \partial z}&=0\end{aligned}}} We arrive at these equations by making the assumptions that P = μ ( ∇ u + ( ∇ u ) T ) − p I {\displaystyle \mathbb {P} =\mu \left({\boldsymbol {\nabla }}\mathbf {u} +({\boldsymbol {\nabla }}\mathbf {u} )^{\mathsf {T}}\right)-p\mathbb {I} } and the density ρ {\displaystyle \rho } is a constant. === Methods of solution === ==== By stream function ==== The equation for an incompressible Newtonian Stokes flow can be solved by the stream function method in planar or in 3-D axisymmetric cases ==== By Green's function: the Stokeslet ==== The linearity of the Stokes equations in the case of an incompressible Newtonian fluid means that a Green's function, J ( r ) {\displaystyle \mathbb {J} (\mathbf {r} )} , exists. The Green's function is found by solving the Stokes equations with the forcing term replaced by a point force acting at the origin, and boundary conditions vanishing at infinity: μ ∇ 2 u − ∇ p = − F ⋅ δ ( r ) ∇ ⋅ u = 0 | u | , p → 0 as r → ∞ {\displaystyle {\begin{aligned}\mu \nabla ^{2}\mathbf {u} -{\boldsymbol {\nabla }}p&=-\mathbf {F} \cdot \mathbf {\delta } (\mathbf {r} )\\{\boldsymbol {\nabla }}\cdot \mathbf {u} &=0\\|\mathbf {u} |,p&\to 0\quad {\mbox{as}}\quad r\to \infty \end{aligned}}} where δ ( r ) {\displaystyle \mathbf {\delta } (\mathbf {r} )} is the Dirac delta function, and F ⋅ δ ( r ) {\displaystyle \mathbf {F} \cdot \delta (\mathbf {r} )} represents a point force acting at the origin. The solution for the pressure p and velocity u with |u| and p vanishing at infinity is given by u ( r ) = F ⋅ J ( r ) , p ( r ) = F ⋅ r 4 π | r | 3 {\displaystyle \mathbf {u} (\mathbf {r} )=\mathbf {F} \cdot \mathbb {J} (\mathbf {r} ),\qquad p(\mathbf {r} )={\frac {\mathbf {F} \cdot \mathbf {r} }{4\pi |\mathbf {r} |^{3}}}} where J ( r ) = 1 8 π μ ( I | r | + r r | r | 3 ) {\displaystyle \mathbb {J} (\mathbf {r} )={1 \over 8\pi \mu }\left({\frac {\mathbb {I} }{|\mathbf {r} |}}+{\frac {\mathbf {r} \mathbf {r} }{|\mathbf {r} |^{3}}}\right)} is a second-rank tensor (or more accurately tensor field) known as the Oseen tensor (after Carl Wilhelm Oseen). Here, r r is a quantity such that F ⋅ ( r r ) = ( F ⋅ r ) r {\displaystyle \mathbf {F} \cdot (\mathbf {r} \mathbf {r} )=(\mathbf {F} \cdot \mathbf {r} )\mathbf {r} } . 
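The point-force solution above can be evaluated directly. The sketch below builds the Oseen tensor and returns the velocity and pressure at a field point; the water-like viscosity and the piconewton force are assumptions chosen only for illustration.

```python
import numpy as np

MU = 1.0e-3   # dynamic viscosity, Pa*s (water-like; assumed for illustration)

def oseen_tensor(r, mu=MU):
    """Oseen tensor J(r) = (I/|r| + r r / |r|^3) / (8*pi*mu)."""
    r = np.asarray(r, dtype=float)
    d = np.linalg.norm(r)
    return (np.eye(3) / d + np.outer(r, r) / d**3) / (8.0 * np.pi * mu)

def point_force_solution(F, r, mu=MU):
    """Velocity u = F . J(r) and pressure p = F . r / (4*pi*|r|^3)
    at field point r due to a point force F at the origin."""
    F = np.asarray(F, dtype=float)
    r = np.asarray(r, dtype=float)
    u = F @ oseen_tensor(r, mu)
    p = F @ r / (4.0 * np.pi * np.linalg.norm(r) ** 3)
    return u, p

# A 1 pN force along x, evaluated 10 micrometres away in the x-y plane:
F = np.array([1.0e-12, 0.0, 0.0])
r = 1.0e-5 * np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
u, p = point_force_solution(F, r)
print("velocity (m/s):", u)
print("pressure (Pa): ", p)
```

Because the Oseen tensor is symmetric, F · J(r) and J(r) · F give the same velocity; the 1/|r| decay of the velocity (compared with the 1/|r|² decay of the pressure) is what makes hydrodynamic interactions in Stokes flow so long-ranged.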
The terms Stokeslet and point-force solution are used to describe F ⋅ J ( r ) {\displaystyle \mathbf {F} \cdot \mathbb {J} (\mathbf {r} )} . Analogous to the point charge in electrostatics, the Stokeslet is force-free everywhere except at the origin, where it contains a force of strength F {\displaystyle \mathbf {F} } . For a continuous-force distribution (density) f ( r ) {\displaystyle \mathbf {f} (\mathbf {r} )} the solution (again vanishing at infinity) can then be constructed by superposition: u ( r ) = ∫ f ( r ′ ) ⋅ J ( r − r ′ ) d r ′ , p ( r ) = ∫ f ( r ′ ) ⋅ ( r − r ′ ) 4 π | r − r ′ | 3 d r ′ {\displaystyle \mathbf {u} (\mathbf {r} )=\int \mathbf {f} \left(\mathbf {r'} \right)\cdot \mathbb {J} \left(\mathbf {r} -\mathbf {r'} \right)\mathrm {d} \mathbf {r'} ,\qquad p(\mathbf {r} )=\int {\frac {\mathbf {f} \left(\mathbf {r'} \right)\cdot \left(\mathbf {r} -\mathbf {r'} \right)}{4\pi \left|\mathbf {r} -\mathbf {r'} \right|^{3}}}\,\mathrm {d} \mathbf {r'} } This integral representation of the velocity can be viewed as a reduction in dimensionality: from the three-dimensional partial differential equation to a two-dimensional integral equation for unknown densities. ==== By Papkovich–Neuber solution ==== The Papkovich–Neuber solution represents the velocity and pressure fields of an incompressible Newtonian Stokes flow in terms of two harmonic potentials. ==== By boundary element method ==== Certain problems, such as the evolution of the shape of a bubble in a Stokes flow, are conducive to numerical solution by the boundary element method. This technique can be applied to both 2- and 3-dimensional flows. == Some geometries == === Hele-Shaw flow === Hele-Shaw flow is an example of a geometry for which inertia forces are negligible. It is defined by two parallel plates arranged very close together with the space between the plates occupied partly by fluid and partly by obstacles in the form of cylinders with generators normal to the plates. === Slender-body theory === Slender-body theory in Stokes flow is a simple approximate method of determining the irrotational flow field around bodies whose length is large compared with their width. The basis of the method is to choose a distribution of flow singularities along a line (since the body is slender) so that their irrotational flow in combination with a uniform stream approximately satisfies the zero normal velocity condition. === Spherical coordinates === Lamb's general solution arises from the fact that the pressure p {\displaystyle p} satisfies the Laplace equation, and can be expanded in a series of solid spherical harmonics in spherical coordinates. As a result, the solution to the Stokes equations can be written: u = ∑ n = − ∞ , n ≠ − 1 n = ∞ [ ( n + 3 ) r 2 ∇ p n 2 μ ( n + 1 ) ( 2 n + 3 ) − n x p n μ ( n + 1 ) ( 2 n + 3 ) ] + . . . 
∑ n = − ∞ n = ∞ [ ∇ Φ n + ∇ × ( x χ n ) ] p = ∑ n = − ∞ n = ∞ p n {\displaystyle {\begin{aligned}\mathbf {u} &=\sum _{n=-\infty ,n\neq -1}^{n=\infty }\left[{\frac {(n+3)r^{2}\nabla p_{n}}{2\mu (n+1)(2n+3)}}-{\frac {n\mathbf {x} p_{n}}{\mu (n+1)(2n+3)}}\right]+...\\\sum _{n=-\infty }^{n=\infty }[\nabla \Phi _{n}+\nabla \times (\mathbf {x} \chi _{n})]\\p&=\sum _{n=-\infty }^{n=\infty }p_{n}\end{aligned}}} where p n , Φ n , {\displaystyle p_{n},\Phi _{n},} and χ n {\displaystyle \chi _{n}} are solid spherical harmonics of order n {\displaystyle n} : p n = r n ∑ m = 0 m = n P n m ( cos ⁡ θ ) ( a m n cos ⁡ m ϕ + a ~ m n sin ⁡ m ϕ ) Φ n = r n ∑ m = 0 m = n P n m ( cos ⁡ θ ) ( b m n cos ⁡ m ϕ + b ~ m n sin ⁡ m ϕ ) χ n = r n ∑ m = 0 m = n P n m ( cos ⁡ θ ) ( c m n cos ⁡ m ϕ + c ~ m n sin ⁡ m ϕ ) {\displaystyle {\begin{aligned}p_{n}&=r^{n}\sum _{m=0}^{m=n}P_{n}^{m}(\cos \theta )(a_{mn}\cos m\phi +{\tilde {a}}_{mn}\sin m\phi )\\\Phi _{n}&=r^{n}\sum _{m=0}^{m=n}P_{n}^{m}(\cos \theta )(b_{mn}\cos m\phi +{\tilde {b}}_{mn}\sin m\phi )\\\chi _{n}&=r^{n}\sum _{m=0}^{m=n}P_{n}^{m}(\cos \theta )(c_{mn}\cos m\phi +{\tilde {c}}_{mn}\sin m\phi )\end{aligned}}} and the P n m {\displaystyle P_{n}^{m}} are the associated Legendre polynomials. The Lamb's solution can be used to describe the motion of fluid either inside or outside a sphere. For example, it can be used to describe the motion of fluid around a spherical particle with prescribed surface flow, a so-called squirmer, or to describe the flow inside a spherical drop of fluid. For interior flows, the terms with n < 0 {\displaystyle n<0} are dropped, while for exterior flows the terms with n > 0 {\displaystyle n>0} are dropped (often the convention n → − n − 1 {\displaystyle n\to -n-1} is assumed for exterior flows to avoid indexing by negative numbers). == Theorems == === Stokes solution and related Helmholtz theorem === The drag resistance to a moving sphere, also known as Stokes' solution is here summarised. Given a sphere of radius a {\displaystyle a} , travelling at velocity U {\displaystyle U} , in a Stokes fluid with dynamic viscosity μ {\displaystyle \mu } , the drag force F D {\displaystyle F_{D}} is given by: F D = 6 π μ a U {\displaystyle F_{D}=6\pi \mu aU} The Stokes solution dissipates less energy than any other solenoidal vector field with the same boundary velocities: this is known as the Helmholtz minimum dissipation theorem. === Lorentz reciprocal theorem === The Lorentz reciprocal theorem states a relationship between two Stokes flows in the same region. Consider fluid filled region V {\displaystyle V} bounded by surface S {\displaystyle S} . Let the velocity fields u {\displaystyle \mathbf {u} } and u ′ {\displaystyle \mathbf {u} '} solve the Stokes equations in the domain V {\displaystyle V} , each with corresponding stress fields σ {\displaystyle \mathbf {\sigma } } and σ ′ {\displaystyle \mathbf {\sigma } '} . Then the following equality holds: ∫ S u ⋅ ( σ ′ ⋅ n ) d S = ∫ S u ′ ⋅ ( σ ⋅ n ) d S {\displaystyle \int _{S}\mathbf {u} \cdot ({\boldsymbol {\sigma }}'\cdot \mathbf {n} )dS=\int _{S}\mathbf {u} '\cdot ({\boldsymbol {\sigma }}\cdot \mathbf {n} )dS} Where n {\displaystyle \mathbf {n} } is the unit normal on the surface S {\displaystyle S} . The Lorentz reciprocal theorem can be used to show that Stokes flow "transmits" unchanged the total force and torque from an inner closed surface to an outer enclosing surface. 
The Lorentz reciprocal theorem can also be used to relate the swimming speed of a microorganism, such as cyanobacterium, to the surface velocity which is prescribed by deformations of the body shape via cilia or flagella. The Lorentz reciprocal theorem has also been used in the context of elastohydrodynamic theory to derive the lift force exerted on a solid object moving tangent to the surface of an elastic interface at low Reynolds numbers. === Faxén's laws === Faxén's laws are direct relations that express the multipole moments in terms of the ambient flow and its derivatives. First developed by Hilding Faxén to calculate the force, F {\displaystyle \mathbf {F} } , and torque, T {\displaystyle \mathbf {T} } on a sphere, they take the following form: F = 6 π μ a ( 1 + a 2 6 ∇ 2 ) v ∞ ( x ) | x = 0 − 6 π μ a U T = 8 π μ a 3 ( Ω ∞ ( x ) − ω ) | x = 0 {\displaystyle {\begin{aligned}\mathbf {F} &=6\pi \mu a\left(1+{\frac {a^{2}}{6}}\nabla ^{2}\right)\mathbf {v} ^{\infty }(\mathbf {x} )|_{x=0}-6\pi \mu a\mathbf {U} \\\mathbf {T} &=8\pi \mu a^{3}(\mathbf {\Omega } ^{\infty }(\mathbf {x} )-\mathbf {\omega } )|_{x=0}\end{aligned}}} where μ {\displaystyle \mu } is the dynamic viscosity, a {\displaystyle a} is the particle radius, v ∞ {\displaystyle \mathbf {v} ^{\infty }} is the ambient flow, U {\displaystyle \mathbf {U} } is the speed of the particle, Ω ∞ {\displaystyle \mathbf {\Omega } ^{\infty }} is the angular velocity of the background flow, and ω {\displaystyle \mathbf {\omega } } is the angular velocity of the particle. Faxén's laws can be generalized to describe the moments of other shapes, such as ellipsoids, spheroids, and spherical drops. == See also == == References == Ockendon, H. & Ockendon J. R. (1995) Viscous Flow, Cambridge University Press. ISBN 0-521-45881-1. == External links == Video demonstration of time-reversibility of Stokes flow by UNM Physics and Astronomy
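As a closing numerical illustration of Stokes' drag law F_D = 6πμaU quoted above, the sketch below balances the drag against the buoyant weight of a small sphere to estimate its settling speed and then checks that the creeping-flow condition Re ≪ 1 actually holds; the particle and fluid properties are assumed, illustrative values.

```python
import math

# Stokes drag balanced against buoyant weight gives the terminal velocity
#   U = 2 * a**2 * (rho_p - rho_f) * g / (9 * mu)
# All material properties below are assumed, illustrative values.
mu = 1.0e-3      # fluid dynamic viscosity, Pa*s (water-like)
rho_f = 1000.0   # fluid density, kg/m^3
rho_p = 2500.0   # particle density, kg/m^3 (silica-like)
a = 5.0e-6       # sphere radius, m
g = 9.81         # gravitational acceleration, m/s^2

U = 2.0 * a**2 * (rho_p - rho_f) * g / (9.0 * mu)   # terminal velocity, m/s
F_D = 6.0 * math.pi * mu * a * U                    # Stokes drag at that speed, N
Re = rho_f * U * (2.0 * a) / mu                     # particle Reynolds number

print(f"terminal velocity: {U:.3e} m/s")
print(f"Stokes drag:       {F_D:.3e} N")
print(f"Reynolds number:   {Re:.3e}  (must be << 1 for Stokes flow)")
```

For micrometre-scale particles in water the Reynolds number comes out around 10⁻³, comfortably inside the creeping-flow regime; for larger or faster particles the check fails and the Stokes drag law no longer applies.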
Wikipedia/Oseen_tensor
In chemistry and physics, cohesion (from Latin cohaesiō 'cohesion, unity'), also called cohesive attraction or cohesive force, is the action or property of like molecules sticking together, being mutually attractive. It is an intrinsic property of a substance that is caused by the shape and structure of its molecules, which makes the distribution of surrounding electrons irregular when molecules get close to one another, creating an electrical attraction that can maintain a macroscopic structure such as a water drop. Cohesion allows for surface tension, creating a "solid-like" state upon which light-weight or low-density materials can be placed. Water, for example, is strongly cohesive as each molecule may make four hydrogen bonds to other water molecules in a tetrahedral configuration. This results in a relatively strong Coulomb force between molecules. In simple terms, the polarity (a state in which a molecule is oppositely charged on its poles) of water molecules allows them to be attracted to each other. The polarity is due to the electronegativity of the atom of oxygen: oxygen is more electronegative than the atoms of hydrogen, so the electrons they share through the covalent bonds are more often close to oxygen rather than hydrogen. These are called polar covalent bonds, covalent bonds between atoms that thus become oppositely charged. In the case of a water molecule, the hydrogen atoms carry positive charges while the oxygen atom has a negative charge. This charge polarization within the molecule allows it to align with adjacent molecules through strong intermolecular hydrogen bonding, rendering the bulk liquid cohesive. Van der Waals gases such as methane, however, have weak cohesion due only to van der Waals forces that operate by induced polarity in non-polar molecules. Cohesion, along with adhesion (attraction between unlike molecules), helps explain phenomena such as meniscus, surface tension and capillary action. Mercury in a glass flask is a good example of the effects of the ratio between cohesive and adhesive forces. Because of its high cohesion and low adhesion to the glass, mercury does not spread out to cover the bottom of the flask, and if enough is placed in the flask to cover the bottom, it exhibits a strongly convex meniscus, whereas the meniscus of water is concave. Mercury will not wet the glass, unlike water and many other liquids, and if the glass is tipped, it will 'roll' around inside. == See also == Adhesion – the attraction of molecules or compounds for other molecules of a different kind Specific heat capacity – the amount of heat needed to raise the temperature of one gram of a substance by one degree Celsius Heat of vaporization – the amount of energy needed to change one gram of a liquid substance to a gas at constant temperature Zwitterion – a molecule composed of individual functional groups which are ions, of which the most prominent examples are the amino acids Chemical polarity – a neutral, or uncharged molecule or its chemical groups having an electric dipole moment, with a negatively charged end and a positively charged end == References == == External links == The Bubble Wall (audio slideshow from the National High Magnetic Field Laboratory explaining cohesion, surface tension and hydrogen bonds) "Adhesion and Cohesion of Water" – US Geological Survey
Wikipedia/Cohesive_force
Biomechanics is the study of the structure, function and motion of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics. Biomechanics is a branch of biophysics. == Etymology == The word "biomechanics" (1899) and the related "biomechanical" (1856) come from the Ancient Greek βίος bios "life" and μηχανική, mēchanikē "mechanics", to refer to the study of the mechanical principles of living organisms, particularly their movement and structure. == Subfields == === Biofluid mechanics === Biological fluid mechanics, or biofluid mechanics, is the study of both gas and liquid fluid flows in or around biological organisms. An often studied liquid biofluid problem is that of blood flow in the human cardiovascular system. Under certain mathematical circumstances, blood flow can be modeled by the Navier–Stokes equations. In vivo whole blood is assumed to be an incompressible Newtonian fluid. However, this assumption fails when considering forward flow within arterioles. At the microscopic scale, the effects of individual red blood cells become significant, and whole blood can no longer be modeled as a continuum. When the diameter of the blood vessel is just slightly larger than the diameter of the red blood cell the Fahraeus–Lindquist effect occurs and there is a decrease in wall shear stress. However, as the diameter of the blood vessel decreases further, the red blood cells have to squeeze through the vessel and often can only pass in a single file. In this case, the inverse Fahraeus–Lindquist effect occurs and the wall shear stress increases. An example of a gaseous biofluids problem is that of human respiration. Respiratory systems in insects have been studied for bioinspiration for designing improved microfluidic devices. === Biotribology === Biotribology is the study of friction, wear and lubrication of biological systems, especially human joints such as hips and knees. In general, these processes are studied in the context of contact mechanics and tribology. Additional aspects of biotribology include analysis of subsurface damage resulting from two surfaces coming in contact during motion, i.e. rubbing against each other, such as in the evaluation of tissue-engineered cartilage. === Comparative biomechanics === Comparative biomechanics is the application of biomechanics to non-human organisms, whether used to gain greater insights into humans (as in physical anthropology) or into the functions, ecology and adaptations of the organisms themselves. Common areas of investigation are animal locomotion and feeding, as these have strong connections to the organism's fitness and impose high mechanical demands. Animal locomotion has many manifestations, including running, jumping and flying. Locomotion requires energy to overcome friction, drag, inertia, and gravity, though which factor predominates varies with environment. Comparative biomechanics overlaps strongly with many other fields, including ecology, neurobiology, developmental biology, ethology, and paleontology, to the extent of commonly publishing papers in the journals of these other fields. Comparative biomechanics is often applied in medicine (with regards to common model organisms such as mice and rats) as well as in biomimetics, which looks to nature for solutions to engineering problems. 
=== Computational biomechanics === Computational biomechanics is the application of engineering computational tools, such as the finite element method to study the mechanics of biological systems. Computational models and simulations are used to predict the relationship between parameters that are otherwise challenging to test experimentally, or used to design more relevant experiments reducing the time and costs of experiments. Mechanical modeling using finite element analysis has been used to interpret the experimental observation of plant cell growth to understand how they differentiate, for instance. In medicine, over the past decade, the finite element method has become an established alternative to in vivo surgical assessment. One of the main advantages of computational biomechanics lies in its ability to determine the endo-anatomical response of an anatomy, without being subject to ethical restrictions. This has led finite element modeling (or other discretization techniques) to the point of becoming ubiquitous in several fields of biomechanics while several projects have even adopted an open source philosophy (e.g., BioSpine). Computational biomechanics is an essential ingredient in surgical simulation, which is used for surgical planning, assistance, and training. In this case, numerical (discretization) methods are used to compute, as fast as possible, a system's response to boundary conditions such as forces, heat and mass transfer, and electrical and magnetic stimuli. === Continuum biomechanics === The mechanical analysis of biomaterials and biofluids is usually carried forth with the concepts of continuum mechanics. This assumption breaks down when the length scales of interest approach the order of the microstructural details of the material. One of the most remarkable characteristics of biomaterials is their hierarchical structure. In other words, the mechanical characteristics of these materials rely on physical phenomena occurring in multiple levels, from the molecular all the way up to the tissue and organ levels. Biomaterials are classified into two groups: hard and soft tissues. Mechanical deformation of hard tissues (like wood, shell and bone) may be analysed with the theory of linear elasticity. On the other hand, soft tissues (like skin, tendon, muscle, and cartilage) usually undergo large deformations, and thus, their analysis relies on the finite strain theory and computer simulations. The interest in continuum biomechanics is spurred by the need for realism in the development of medical simulation.: 568  === Neuromechanics === Neuromechanics uses a biomechanical approach to better understand how the brain and nervous system interact to control the body. During motor tasks, motor units activate a set of muscles to perform a specific movement, which can be modified via motor adaptation and learning. In recent years, neuromechanical experiments have been enabled by combining motion capture tools with neural recordings. === Plant biomechanics === The application of biomechanical principles to plants, plant organs and cells has developed into the subfield of plant biomechanics. Application of biomechanics for plants ranges from studying the resilience of crops to environmental stress to development and morphogenesis at cell and tissue scale, overlapping with mechanobiology. 
=== Sports biomechanics === In sports biomechanics, the laws of mechanics are applied to human movement in order to gain a greater understanding of athletic performance and to reduce sports injuries. It focuses on the application of the scientific principles of mechanical physics to understand the movements of human bodies and of sports implements such as cricket bats, hockey sticks, and javelins. Elements of mechanical engineering (e.g., strain gauges), electrical engineering (e.g., digital filtering), computer science (e.g., numerical methods), gait analysis (e.g., force platforms), and clinical neurophysiology (e.g., surface EMG) are common methods used in sports biomechanics. Biomechanics in sports can be described as the study of the body's muscular, joint, and skeletal actions while executing a given task, skill, or technique. Understanding biomechanics relating to sports skills has the greatest implications for sports performance, rehabilitation and injury prevention, and sports mastery. As noted by Dr. Michael Yessis, one could say that the best athlete is the one who executes his or her skill the best. === Vascular biomechanics === The main topic of vascular biomechanics is the description of the mechanical behaviour of vascular tissues. It is well known that cardiovascular disease is the leading cause of death worldwide. The vascular system in the human body is the main component responsible for maintaining pressure and allowing for blood flow and chemical exchanges. Studying the mechanical properties of these complex tissues improves the possibility of better understanding cardiovascular diseases and drastically improves personalized medicine. Vascular tissues are inhomogeneous with a strongly nonlinear behaviour. Generally, this study involves complex geometries with intricate load conditions and material properties. The correct description of these mechanisms is based on the study of physiology and biological interaction. Therefore, it is necessary to study wall mechanics and hemodynamics together with their interaction. It should also be noted that the vascular wall is a dynamic structure in continuous evolution. This evolution directly follows the chemical and mechanical environment in which the tissues are immersed, such as wall shear stress or biochemical signaling. === Immunomechanics === The emerging field of immunomechanics focuses on characterising the mechanical properties of immune cells and their functional relevance. The mechanics of immune cells can be characterised using various force spectroscopy approaches such as acoustic force spectroscopy and optical tweezers, and these measurements can be performed under physiological conditions (e.g. temperature). Furthermore, one can study the link between immune cell mechanics and immunometabolism and immune signalling. The term "immunomechanics" is sometimes used interchangeably with immune cell mechanobiology or cell mechanoimmunology. 
=== Other applied subfields of biomechanics include === Allometry Animal locomotion and Gait analysis Biotribology Biofluid mechanics Cardiovascular biomechanics Comparative biomechanics Computational biomechanics Ergonomy Forensic Biomechanics Human factors engineering and occupational biomechanics Injury biomechanics Implant (medicine), Orthotics and Prosthesis Kinaesthetics Kinesiology (kinetics + physiology) Musculoskeletal and orthopedic biomechanics Rehabilitation Soft body dynamics Sports biomechanics == History == === Antiquity === Aristotle, a student of Plato, can be considered the first bio-mechanic because of his work with animal anatomy. Aristotle wrote the first book on the motion of animals, De Motu Animalium, or On the Movement of Animals. He saw animals' bodies as mechanical systems and pursued questions such as the physiological difference between imagining performing an action and actual performance. In another work, On the Parts of Animals, he provided an accurate description of how the ureter uses peristalsis to carry urine from the kidneys to the bladder. With the rise of the Roman Empire, technology became more popular than philosophy and the next bio-mechanic arose. Galen (129 AD–210 AD), physician to Marcus Aurelius, wrote his famous work, On the Function of the Parts (about the human body). This would be the world's standard medical book for the next 1,400 years. === Renaissance === The next major biomechanic would not be around until the 1490s, with the studies of human anatomy and biomechanics by Leonardo da Vinci. He had a great understanding of science and mechanics and studied anatomy in a mechanics context: he analyzed muscle forces as acting along lines connecting origins and insertions, and studied joint function. These studies could be considered studies in the realm of biomechanics. Da Vinci is also known for mimicking some animal features in his machines. For example, he studied the flight of birds to find means by which humans could fly; and because horses were the principal source of mechanical power at that time, he studied their muscular systems to design machines that would better benefit from the forces applied by this animal. In 1543, Galen's work, On the Function of the Parts, was challenged by Andreas Vesalius at the age of 29. Vesalius published his own work called On the Structure of the Human Body. In this work, Vesalius corrected many errors made by Galen, which would not be globally accepted for many centuries. With the death of Copernicus came a new desire to understand and learn about the world around people and how it works. On his deathbed, Copernicus published his work, On the Revolutions of the Heavenly Spheres. This work not only revolutionized science and physics, but also the development of mechanics and later bio-mechanics. Galileo Galilei, the father of mechanics and a part-time biomechanic, was born 21 years after the death of Copernicus. Over his years of science, Galileo made many biomechanical aspects known. For example, he discovered that "animals' masses increase disproportionately to their size, and their bones must consequently also disproportionately increase in girth, adapting to loadbearing rather than mere size. The bending strength of a tubular structure such as a bone is increased relative to its weight by making it hollow and increasing its diameter. 
Marine animals can be larger than terrestrial animals because the water's buoyancy relieves their tissues of weight." Galileo Galilei was interested in the strength of bones and suggested that bones are hollow because this affords maximum strength with minimum weight. He noted that animals' bone masses increased disproportionately to their size. Consequently, bones must also increase disproportionately in girth rather than mere size. This is because the bending strength of a tubular structure (such as a bone) is much more efficient relative to its weight. Mason suggests that this insight was one of the first grasps of the principles of biological optimization. In the 17th century, Descartes suggested a philosophic system whereby all living systems, including the human body (but not the soul), are simply machines ruled by the same mechanical laws, an idea that did much to promote and sustain biomechanical study. === Industrial era === The next major bio-mechanic, Giovanni Alfonso Borelli, embraced Descartes' mechanical philosophy and studied walking, running, jumping, the flight of birds, the swimming of fish, and even the piston action of the heart within a mechanical framework. He could determine the position of the human center of gravity, calculate and measure inspired and expired air volumes, and he showed that inspiration is muscle-driven and expiration is due to tissue elasticity. Borelli was the first to understand that "the levers of the musculature system magnify motion rather than force, so that muscles must produce much larger forces than those resisting the motion". Influenced by the work of Galileo, whom he personally knew, he had an intuitive understanding of static equilibrium in various joints of the human body well before Newton published the laws of motion. His work is often considered the most important in the history of bio-mechanics because he made so many new discoveries that opened the way for the future generations to continue his work and studies. It was many years after Borelli before the field of bio-mechanics made any major leaps. After that time, more and more scientists took to learning about the human body and its functions. There are not many notable scientists from the 19th or 20th century in bio-mechanics because the field is far too vast now to attribute one thing to one person. However, the field is continuing to grow every year and continues to make advances in discovering more about the human body. Because the field became so popular, many institutions and labs have opened over the last century and people continue doing research. With the Creation of the American Society of Bio-mechanics in 1977, the field continues to grow and make many new discoveries. In the 19th century Étienne-Jules Marey used cinematography to scientifically investigate locomotion. He opened the field of modern 'motion analysis' by being the first to correlate ground reaction forces with movement. In Germany, the brothers Ernst Heinrich Weber and Wilhelm Eduard Weber hypothesized a great deal about human gait, but it was Christian Wilhelm Braune who significantly advanced the science using recent advances in engineering mechanics. During the same period, the engineering mechanics of materials began to flourish in France and Germany under the demands of the Industrial Revolution. 
This led to the rebirth of bone biomechanics when the railroad engineer Karl Culmann and the anatomist Hermann von Meyer compared the stress patterns in a human femur with those in a similarly shaped crane. Inspired by this finding, Julius Wolff proposed the famous Wolff's law of bone remodeling. == Applications == The study of biomechanics ranges from the inner workings of a cell to the movement and development of limbs, to the mechanical properties of soft tissue and bone. Some simple examples of biomechanics research include the investigation of the forces that act on limbs, the aerodynamics of bird and insect flight, the hydrodynamics of swimming in fish, and locomotion in general across all forms of life, from individual cells to whole organisms. With growing understanding of the physiological behavior of living tissues, researchers are able to advance the field of tissue engineering, as well as develop improved treatments for a wide array of pathologies including cancer. Biomechanics is also applied to studying human musculoskeletal systems. Such research utilizes force platforms to study human ground reaction forces and infrared videography to capture the trajectories of markers attached to the human body to study human 3D motion. Research also applies electromyography to study muscle activation, investigating muscle responses to external forces and perturbations. Biomechanics is widely used in the orthopedic industry to design orthopedic implants for human joints, dental parts, external fixations and other medical purposes. Biotribology is a very important part of it: it is the study of the performance and function of biomaterials used for orthopedic implants, and it plays a vital role in improving the design and producing successful biomaterials for medical and clinical purposes. One such example is tissue-engineered cartilage. The dynamic loading of joints, considered as impact, is discussed in detail by Emanuel Willert. Biomechanics is also tied to the field of engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems. Applied mechanics, most notably mechanical engineering disciplines such as continuum mechanics, mechanism analysis, structural analysis, kinematics, and dynamics, play prominent roles in the study of biomechanics. Usually, biological systems are much more complex than man-made systems. Numerical methods are hence applied in almost every biomechanical study. Research is done in an iterative process of hypothesis and verification, including several steps of modeling, computer simulation and experimental measurements. == See also == Biomechatronics Biomedical engineering Cardiovascular System Dynamics Society Evolutionary physiology Forensic biomechanics International Society of Biomechanics List of biofluid mechanics research groups Mechanics of human sexuality OpenSim (simulation toolkit) Physical oncology == References == == Further reading == == External links == Media related to Biomechanics at Wikimedia Commons Biomechanics and Movement Science Listserver (Biomch-L) Biomechanics Links A Genealogy of Biomechanics
Wikipedia/Biofluid_mechanics
The Annual Review of Materials Research is a peer-reviewed journal that publishes review articles about materials science. It has been published by the nonprofit Annual Reviews since 1971, when it was first released under the title the Annual Review of Materials Science. Four people have served as editors, with the current editor Ram Seshadri stepping into the position in 2024. It has an impact factor of 10.6 as of 2024. As of 2023, it is being published as open access, under the Subscribe to Open model. == History == The Annual Review of Materials Science was first published in 1971 by the nonprofit publisher Annual Reviews, making it their sixteenth journal. Its first editor was Robert Huggins. In 2001, its name was changed to the current form, the Annual Review of Materials Research. The name change was intended "to better reflect the broad appeal that materials research has for so many diverse groups of scientists and not simply those who identify themselves with the academic discipline of materials science." As of 2020, it was published both in print and electronically. It defines its scope as covering significant developments in the field of materials science, including methodologies for studying materials and materials phenomena. As of 2024, Journal Citation Reports gives the journal a 2023 impact factor of 10.6, ranking it forty-ninth of 438 titles in the category "Materials Science, Multidisciplinary". It is abstracted and indexed in Scopus, Science Citation Index Expanded, Civil Engineering Abstracts, INSPEC, and Academic Search, among others. == Editorial processes == The Annual Review of Materials Research is helmed by the editor or the co-editors. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee. === Editors of volumes === Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death. Robert Huggins (1971–1993) Elton N. Kaufmann (1994–2000) David R. Clarke (2001–2024) Ram Seshadri (2025-) === Current editorial committee === As of 2024, the editorial committee consists of the editor and the following members: == See also == List of materials science journals == References ==
Wikipedia/Annual_Review_of_Materials_Research
Viscosity is a measure of a fluid's rate-dependent resistance to a change in shape or to movement of its neighboring portions relative to one another. For liquids, it corresponds to the informal concept of thickness; for example, syrup has a higher viscosity than water. Viscosity is defined scientifically as a force multiplied by a time divided by an area. Thus its SI units are newton-seconds per metre squared, or pascal-seconds. Viscosity quantifies the internal frictional force between adjacent layers of fluid that are in relative motion. For instance, when a viscous fluid is forced through a tube, it flows more quickly near the tube's center line than near its walls. Experiments show that some stress (such as a pressure difference between the two ends of the tube) is needed to sustain the flow. This is because a force is required to overcome the friction between the layers of the fluid which are in relative motion. For a tube with a constant rate of flow, the strength of the compensating force is proportional to the fluid's viscosity. In general, viscosity depends on a fluid's state, such as its temperature, pressure, and rate of deformation. However, the dependence on some of these properties is negligible in certain cases. For example, the viscosity of a Newtonian fluid does not vary significantly with the rate of deformation. Zero viscosity (no resistance to shear stress) is observed only at very low temperatures in superfluids; otherwise, the second law of thermodynamics requires all fluids to have positive viscosity. A fluid that has zero viscosity (non-viscous) is called ideal or inviscid. For non-Newtonian fluids' viscosity, there are pseudoplastic, plastic, and dilatant flows that are time-independent, and there are thixotropic and rheopectic flows that are time-dependent. == Etymology == The word "viscosity" is derived from the Latin viscum ("mistletoe"). Viscum also referred to a viscous glue derived from mistletoe berries. == Definitions == === Dynamic viscosity === In materials science and engineering, there is often interest in understanding the forces or stresses involved in the deformation of a material. For instance, if the material were a simple spring, the answer would be given by Hooke's law, which says that the force experienced by a spring is proportional to the distance displaced from equilibrium. Stresses which can be attributed to the deformation of a material from some rest state are called elastic stresses. In other materials, stresses are present which can be attributed to the deformation rate over time. These are called viscous stresses. For instance, in a fluid such as water the stresses which arise from shearing the fluid do not depend on the distance the fluid has been sheared; rather, they depend on how quickly the shearing occurs. Viscosity is the material property which relates the viscous stresses in a material to the rate of change of a deformation (the strain rate). Although it applies to general flows, it is easy to visualize and define in a simple shearing flow, such as a planar Couette flow. In the Couette flow, a fluid is trapped between two infinitely large plates, one fixed and one in parallel motion at constant speed u {\displaystyle u} (see illustration to the right). If the speed of the top plate is low enough (to avoid turbulence), then in steady state the fluid particles move parallel to it, and their speed varies from 0 {\displaystyle 0} at the bottom to u {\displaystyle u} at the top. 
Each layer of fluid moves faster than the one just below it, and friction between them gives rise to a force resisting their relative motion. In particular, the fluid applies on the top plate a force in the direction opposite to its motion, and an equal but opposite force on the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed. In many fluids, the flow velocity is observed to vary linearly from zero at the bottom to u {\displaystyle u} at the top. Moreover, the magnitude of the force, F {\displaystyle F} , acting on the top plate is found to be proportional to the speed u {\displaystyle u} and the area A {\displaystyle A} of each plate, and inversely proportional to their separation y {\displaystyle y} : F = μ A u y . {\displaystyle F=\mu A{\frac {u}{y}}.} The proportionality factor is the dynamic viscosity of the fluid, often simply referred to as the viscosity. It is denoted by the Greek letter mu (μ). The dynamic viscosity has the dimensions ( m a s s / l e n g t h ) / t i m e {\displaystyle \mathrm {(mass/length)/time} } , therefore resulting in the SI units and the derived units: [ μ ] = k g m ⋅ s = N m 2 ⋅ s = P a ⋅ s = {\displaystyle [\mu ]={\frac {\rm {kg}}{\rm {m{\cdot }s}}}={\frac {\rm {N}}{\rm {m^{2}}}}{\cdot }{\rm {s}}={\rm {Pa{\cdot }s}}=} pressure multiplied by time = {\displaystyle =} energy per unit volume multiplied by time. The aforementioned ratio u / y {\displaystyle u/y} is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction parallel to the normal vector of the plates (see illustrations to the right). If the velocity does not vary linearly with y {\displaystyle y} , then the appropriate generalization is: τ = μ ∂ u ∂ y , {\displaystyle \tau =\mu {\frac {\partial u}{\partial y}},} where τ = F / A {\displaystyle \tau =F/A} , and ∂ u / ∂ y {\displaystyle \partial u/\partial y} is the local shear velocity. This expression is referred to as Newton's law of viscosity. In shearing flows with planar symmetry, it is what defines μ {\displaystyle \mu } . It is a special case of the general definition of viscosity (see below), which can be expressed in coordinate-free form. Use of the Greek letter mu ( μ {\displaystyle \mu } ) for the dynamic viscosity (sometimes also called the absolute viscosity) is common among mechanical and chemical engineers, as well as mathematicians and physicists. However, the Greek letter eta ( η {\displaystyle \eta } ) is also used by chemists, physicists, and the IUPAC. The viscosity μ {\displaystyle \mu } is sometimes also called the shear viscosity. However, at least one author discourages the use of this terminology, noting that μ {\displaystyle \mu } can appear in non-shearing flows in addition to shearing flows. === Kinematic viscosity === In fluid dynamics, it is sometimes more appropriate to work in terms of kinematic viscosity (sometimes also called the momentum diffusivity), defined as the ratio of the dynamic viscosity (μ) over the density of the fluid (ρ). 
It is usually denoted by the Greek letter nu (ν): ν = μ ρ , {\displaystyle \nu ={\frac {\mu }{\rho }},} and has the dimensions ( l e n g t h ) 2 / t i m e {\displaystyle \mathrm {(length)^{2}/time} } , therefore resulting in the SI units and the derived units: [ ν ] = m 2 s = N ⋅ m k g ⋅ s = J k g ⋅ s = {\displaystyle [\nu ]=\mathrm {\frac {m^{2}}{s}} =\mathrm {{\frac {N{\cdot }m}{kg}}{\cdot }s} =\mathrm {{\frac {J}{kg}}{\cdot }s} =} specific energy multiplied by time = {\displaystyle =} energy per unit mass multiplied by time. === General definition === In very general terms, the viscous stresses in a fluid are defined as those resulting from the relative velocity of different fluid particles. As such, the viscous stresses must depend on spatial gradients of the flow velocity. If the velocity gradients are small, then to a first approximation the viscous stresses depend only on the first derivatives of the velocity. (For Newtonian fluids, this is also a linear dependence.) In Cartesian coordinates, the general relationship can then be written as τ i j = ∑ k ∑ ℓ μ i j k ℓ ∂ v k ∂ r ℓ , {\displaystyle \tau _{ij}=\sum _{k}\sum _{\ell }\mu _{ijk\ell }{\frac {\partial v_{k}}{\partial r_{\ell }}},} where μ i j k ℓ {\displaystyle \mu _{ijk\ell }} is a viscosity tensor that maps the velocity gradient tensor ∂ v k / ∂ r ℓ {\displaystyle \partial v_{k}/\partial r_{\ell }} onto the viscous stress tensor τ i j {\displaystyle \tau _{ij}} . Since the indices in this expression can vary from 1 to 3, there are 81 "viscosity coefficients" μ i j k l {\displaystyle \mu _{ijkl}} in total. However, assuming that the viscosity rank-2 tensor is isotropic reduces these 81 coefficients to three independent parameters α {\displaystyle \alpha } , β {\displaystyle \beta } , γ {\displaystyle \gamma } : μ i j k ℓ = α δ i j δ k ℓ + β δ i k δ j ℓ + γ δ i ℓ δ j k , {\displaystyle \mu _{ijk\ell }=\alpha \delta _{ij}\delta _{k\ell }+\beta \delta _{ik}\delta _{j\ell }+\gamma \delta _{i\ell }\delta _{jk},} and furthermore, it is assumed that no viscous forces may arise when the fluid is undergoing simple rigid-body rotation, thus β = γ {\displaystyle \beta =\gamma } , leaving only two independent parameters. The most usual decomposition is in terms of the standard (scalar) viscosity μ {\displaystyle \mu } and the bulk viscosity κ {\displaystyle \kappa } such that α = κ − 2 3 μ {\displaystyle \alpha =\kappa -{\tfrac {2}{3}}\mu } and β = γ = μ {\displaystyle \beta =\gamma =\mu } . In vector notation this appears as: τ = μ [ ∇ v + ( ∇ v ) T ] − ( 2 3 μ − κ ) ( ∇ ⋅ v ) δ , {\displaystyle {\boldsymbol {\tau }}=\mu \left[\nabla \mathbf {v} +(\nabla \mathbf {v} )^{\mathrm {T} }\right]-\left({\frac {2}{3}}\mu -\kappa \right)(\nabla \cdot \mathbf {v} )\mathbf {\delta } ,} where δ {\displaystyle \mathbf {\delta } } is the unit tensor. This equation can be thought of as a generalized form of Newton's law of viscosity. The bulk viscosity (also called volume viscosity) expresses a type of internal friction that resists the shearless compression or expansion of a fluid. Knowledge of κ {\displaystyle \kappa } is frequently not necessary in fluid dynamics problems. For example, an incompressible fluid satisfies ∇ ⋅ v = 0 {\displaystyle \nabla \cdot \mathbf {v} =0} and so the term containing κ {\displaystyle \kappa } drops out. Moreover, κ {\displaystyle \kappa } is often assumed to be negligible for gases since it is 0 {\displaystyle 0} in a monatomic ideal gas. 
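To illustrate, the coordinate-free expression above can be evaluated numerically once a velocity-gradient tensor is specified. The following short Python sketch does this for an arbitrary, made-up velocity gradient; the values of μ and κ are likewise illustrative assumptions rather than data from the text.

```python
import numpy as np

def newtonian_stress(grad_v, mu, kappa):
    """Viscous stress tau = mu*(grad_v + grad_v.T) - (2*mu/3 - kappa)*(div v)*I.

    grad_v[k, l] holds the velocity gradient dv_k/dr_l (units 1/s);
    mu and kappa are the shear and bulk viscosities (Pa*s).
    """
    div_v = np.trace(grad_v)          # divergence of the velocity field
    strain = grad_v + grad_v.T        # symmetrized velocity gradient
    return mu * strain - (2.0 * mu / 3.0 - kappa) * div_v * np.eye(3)

# Illustrative velocity gradient: a simple shear plus a small uniform dilation.
grad_v = np.array([[0.1, 2.0, 0.0],
                   [0.0, 0.1, 0.0],
                   [0.0, 0.0, 0.1]])

# Water-like shear viscosity, bulk viscosity neglected (kappa = 0).
tau = newtonian_stress(grad_v, mu=1.0e-3, kappa=0.0)
print(tau)
```

For an incompressible flow the divergence term vanishes and only the symmetrized gradient contributes, which is the simplification noted in the text.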
One situation in which κ {\displaystyle \kappa } can be important is the calculation of energy loss in sound and shock waves, described by Stokes' law of sound attenuation, since these phenomena involve rapid expansions and compressions. The defining equations for viscosity are not fundamental laws of nature, so their usefulness, as well as methods for measuring or calculating the viscosity, must be established using separate means. A potential issue is that viscosity depends, in principle, on the full microscopic state of the fluid, which encompasses the positions and momenta of every particle in the system. Such highly detailed information is typically not available in realistic systems. However, under certain conditions most of this information can be shown to be negligible. In particular, for Newtonian fluids near equilibrium and far from boundaries (bulk state), the viscosity depends only on space- and time-dependent macroscopic fields (such as temperature and density) defining local equilibrium. Nevertheless, viscosity may still carry a non-negligible dependence on several system properties, such as temperature, pressure, and the amplitude and frequency of any external forcing. Therefore, precision measurements of viscosity are only defined with respect to a specific fluid state. To standardize comparisons among experiments and theoretical models, viscosity data is sometimes extrapolated to ideal limiting cases, such as the zero shear limit, or (for gases) the zero density limit. == Momentum transport == Transport theory provides an alternative interpretation of viscosity in terms of momentum transport: viscosity is the material property which characterizes momentum transport within a fluid, just as thermal conductivity characterizes heat transport, and (mass) diffusivity characterizes mass transport. This perspective is implicit in Newton's law of viscosity, τ = μ ( ∂ u / ∂ y ) {\displaystyle \tau =\mu (\partial u/\partial y)} , because the shear stress τ {\displaystyle \tau } has units equivalent to a momentum flux, i.e., momentum per unit time per unit area. Thus, τ {\displaystyle \tau } can be interpreted as specifying the flow of momentum in the y {\displaystyle y} direction from one fluid layer to the next. Per Newton's law of viscosity, this momentum flow occurs across a velocity gradient, and the magnitude of the corresponding momentum flux is determined by the viscosity. The analogy with heat and mass transfer can be made explicit. Just as heat flows from high temperature to low temperature and mass flows from high density to low density, momentum flows from high velocity to low velocity. These behaviors are all described by compact expressions, called constitutive relations, whose one-dimensional forms are given here: J = − D ∂ ρ ∂ x (Fick's law of diffusion) q = − k t ∂ T ∂ x (Fourier's law of heat conduction) τ = μ ∂ u ∂ y (Newton's law of viscosity) {\displaystyle {\begin{aligned}\mathbf {J} &=-D{\frac {\partial \rho }{\partial x}}&&{\text{(Fick's law of diffusion)}}\\[5pt]\mathbf {q} &=-k_{t}{\frac {\partial T}{\partial x}}&&{\text{(Fourier's law of heat conduction)}}\\[5pt]\tau &=\mu {\frac {\partial u}{\partial y}}&&{\text{(Newton's law of viscosity)}}\end{aligned}}} where ρ {\displaystyle \rho } is the density, J {\displaystyle \mathbf {J} } and q {\displaystyle \mathbf {q} } are the mass and heat fluxes, and D {\displaystyle D} and k t {\displaystyle k_{t}} are the mass diffusivity and thermal conductivity.
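The parallel structure of these three constitutive relations can be made concrete by evaluating them with representative numbers. In the following sketch the material properties are roughly water-like and the gradients are arbitrary illustrative assumptions, not figures taken from the text.

```python
# One-dimensional constitutive relations with illustrative, roughly water-like values.
D   = 2.0e-9    # mass diffusivity, m^2/s (assumed)
k_t = 0.6       # thermal conductivity, W/(m*K) (assumed)
mu  = 1.0e-3    # dynamic viscosity, Pa*s (assumed)

drho_dx = -5.0     # density gradient, (kg/m^3)/m
dT_dx   = -100.0   # temperature gradient, K/m
du_dy   = 500.0    # velocity gradient (shear rate), 1/s

J   = -D * drho_dx    # Fick's law: mass flux, kg/(m^2*s)
q   = -k_t * dT_dx    # Fourier's law: heat flux, W/m^2
tau = mu * du_dy      # Newton's law: momentum flux (shear stress), Pa

print(f"mass flux     J   = {J:.2e} kg/(m^2*s)")
print(f"heat flux     q   = {q:.1f} W/m^2")
print(f"momentum flux tau = {tau:.2f} Pa")
```

Each flux points down its respective gradient, which is exactly the analogy described in the preceding paragraph.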
The fact that mass, momentum, and energy (heat) transport are among the most relevant processes in continuum mechanics is not a coincidence: these are among the few physical quantities that are conserved at the microscopic level in interparticle collisions. Thus, rather than being dictated by the fast and complex microscopic interaction timescale, their dynamics occurs on macroscopic timescales, as described by the various equations of transport theory and hydrodynamics. == Newtonian and non-Newtonian fluids == Newton's law of viscosity is not a fundamental law of nature, but rather a constitutive equation (like Hooke's law, Fick's law, and Ohm's law) which serves to define the viscosity μ {\displaystyle \mu } . Its form is motivated by experiments which show that for a wide range of fluids, μ {\displaystyle \mu } is independent of strain rate. Such fluids are called Newtonian. Gases, water, and many common liquids can be considered Newtonian in ordinary conditions and contexts. However, there are many non-Newtonian fluids that significantly deviate from this behavior. For example: Shear-thickening (dilatant) liquids, whose viscosity increases with the rate of shear strain. Shear-thinning liquids, whose viscosity decreases with the rate of shear strain. Thixotropic liquids, that become less viscous over time when shaken, agitated, or otherwise stressed. Rheopectic liquids, that become more viscous over time when shaken, agitated, or otherwise stressed. Bingham plastics that behave as a solid at low stresses but flow as a viscous fluid at high stresses. Trouton's ratio is the ratio of extensional viscosity to shear viscosity. For a Newtonian fluid, the Trouton ratio is 3. Shear-thinning liquids are very commonly, but misleadingly, described as thixotropic. Viscosity may also depend on the fluid's physical state (temperature and pressure) and other, external, factors. For gases and other compressible fluids, it depends on temperature and varies very slowly with pressure. The viscosity of some fluids may depend on other factors. A magnetorheological fluid, for example, becomes thicker when subjected to a magnetic field, possibly to the point of behaving like a solid. == In solids == The viscous forces that arise during fluid flow are distinct from the elastic forces that occur in a solid in response to shear, compression, or extension stresses. While in the latter the stress is proportional to the amount of shear deformation, in a fluid it is proportional to the rate of deformation over time. For this reason, James Clerk Maxwell used the term fugitive elasticity for fluid viscosity. However, many liquids (including water) will briefly react like elastic solids when subjected to sudden stress. Conversely, many "solids" (even granite) will flow like liquids, albeit very slowly, even under arbitrarily small stress. Such materials are best described as viscoelastic—that is, possessing both elasticity (reaction to deformation) and viscosity (reaction to rate of deformation). Viscoelastic solids may exhibit both shear viscosity and bulk viscosity. The extensional viscosity is a linear combination of the shear and bulk viscosities that describes the reaction of a solid elastic material to elongation. It is widely used for characterizing polymers. In geology, earth materials that exhibit viscous deformation at least three orders of magnitude greater than their elastic deformation are sometimes called rheids. == Measurement == Viscosity is measured with various types of viscometers and rheometers. 
Close temperature control of the fluid is essential to obtain accurate measurements, particularly in materials like lubricants, whose viscosity can double with a change of only 5 °C. A rheometer is used for fluids that cannot be defined by a single value of viscosity and therefore require more parameters to be set and measured than is the case for a viscometer. For some fluids, the viscosity is constant over a wide range of shear rates (Newtonian fluids). The fluids without a constant viscosity (non-Newtonian fluids) cannot be described by a single number. Non-Newtonian fluids exhibit a variety of different correlations between shear stress and shear rate. One of the most common instruments for measuring kinematic viscosity is the glass capillary viscometer. In coating industries, viscosity may be measured with a cup in which the efflux time is measured. There are several sorts of cup—such as the Zahn cup and the Ford viscosity cup—with the usage of each type varying mainly according to the industry. Also used in coatings, a Stormer viscometer employs load-based rotation to determine viscosity. The viscosity is reported in Krebs units (KU), which are unique to Stormer viscometers. Vibrating viscometers can also be used to measure viscosity. Resonant, or vibrational viscometers work by creating shear waves within the liquid. In this method, the sensor is submerged in the fluid and is made to resonate at a specific frequency. As the surface of the sensor shears through the liquid, energy is lost due to its viscosity. This dissipated energy is then measured and converted into a viscosity reading. A higher viscosity causes a greater loss of energy. Extensional viscosity can be measured with various rheometers that apply extensional stress. Volume viscosity can be measured with an acoustic rheometer. Apparent viscosity is a calculation derived from tests performed on drilling fluid used in oil or gas well development. These calculations and tests help engineers develop and maintain the properties of the drilling fluid to the specifications required. Nanoviscosity (viscosity sensed by nanoprobes) can be measured by fluorescence correlation spectroscopy. == Units == The SI unit of dynamic viscosity is the newton-second per metre squared (N·s/m2), also frequently expressed in the equivalent forms pascal-second (Pa·s), kilogram per meter per second (kg·m−1·s−1) and poiseuille (Pl). The CGS unit is the poise (P, or g·cm−1·s−1 = 0.1 Pa·s), named after Jean Léonard Marie Poiseuille. It is commonly expressed, particularly in ASTM standards, as centipoise (cP). The centipoise is convenient because the viscosity of water at 20 °C is about 1 cP, and one centipoise is equal to the SI millipascal second (mPa·s). The SI unit of kinematic viscosity is metre squared per second (m2/s), whereas the CGS unit for kinematic viscosity is the stokes (St, or cm2·s−1 = 0.0001 m2·s−1), named after Sir George Gabriel Stokes. In U.S. usage, stoke is sometimes used as the singular form. The submultiple centistokes (cSt) is often used instead, 1 cSt = 1 mm2·s−1 = 10−6 m2·s−1. 1 cSt is 1 cP divided by 1000 kg/m^3, close to the density of water. The kinematic viscosity of water at 20 °C is about 1 cSt. The most frequently used systems of US customary, or Imperial, units are the British Gravitational (BG) and English Engineering (EE). In the BG system, dynamic viscosity has units of pound-seconds per square foot (lb·s/ft2), and in the EE system it has units of pound-force-seconds per square foot (lbf·s/ft2). 
The pound and pound-force are equivalent; the two systems differ only in how force and mass are defined. In the BG system the pound is a basic unit from which the unit of mass (the slug) is defined by Newton's second law, whereas in the EE system the units of force and mass (the pound-force and pound-mass respectively) are defined independently through the second law using the proportionality constant gc. Kinematic viscosity has units of square feet per second (ft2/s) in both the BG and EE systems. Nonstandard units include the reyn (lbf·s/in2), a British unit of dynamic viscosity. In the automotive industry the viscosity index is used to describe the change of viscosity with temperature. The reciprocal of viscosity is fluidity, usually symbolized by ϕ = 1 / μ {\displaystyle \phi =1/\mu } or F = 1 / μ {\displaystyle F=1/\mu } , depending on the convention used, measured in reciprocal poise (P−1, or cm·s·g−1), sometimes called the rhe. Fluidity is seldom used in engineering practice. At one time the petroleum industry relied on measuring kinematic viscosity by means of the Saybolt viscometer, and expressing kinematic viscosity in units of Saybolt universal seconds (SUS). Other abbreviations such as SSU (Saybolt seconds universal) or SUV (Saybolt universal viscosity) are sometimes used. Kinematic viscosity in centistokes can be converted from SUS according to the arithmetic and the reference table provided in ASTM D 2161. == Molecular origins == Momentum transport in gases is mediated by discrete molecular collisions, and in liquids by attractive forces that bind molecules close together. Because of this, the dynamic viscosities of liquids are typically much larger than those of gases. In addition, viscosity tends to increase with temperature in gases and decrease with temperature in liquids. Above the liquid-gas critical point, the liquid and gas phases are replaced by a single supercritical phase. In this regime, the mechanisms of momentum transport interpolate between liquid-like and gas-like behavior. For example, along a supercritical isobar (constant-pressure surface), the kinematic viscosity decreases at low temperature and increases at high temperature, with a minimum in between. A rough estimate for the value at the minimum is ν min = 1 4 π ℏ m e m {\displaystyle \nu _{\text{min}}={\frac {1}{4\pi }}{\frac {\hbar }{\sqrt {m_{\text{e}}m}}}} where ℏ {\displaystyle \hbar } is the Planck constant, m e {\displaystyle m_{\text{e}}} is the electron mass, and m {\displaystyle m} is the molecular mass. In general, however, the viscosity of a system depends in detail on how the molecules constituting the system interact, and there are no simple but correct formulas for it. The simplest exact expressions are the Green–Kubo relations for the linear shear viscosity or the transient time correlation function expressions derived by Evans and Morriss in 1988. Although these expressions are each exact, calculating the viscosity of a dense fluid using these relations currently requires the use of molecular dynamics computer simulations. Somewhat more progress can be made for a dilute gas, as elementary assumptions about how gas molecules move and interact lead to a basic understanding of the molecular origins of viscosity. More sophisticated treatments can be constructed by systematically coarse-graining the equations of motion of the gas molecules. An example of such a treatment is Chapman–Enskog theory, which derives expressions for the viscosity of a dilute gas from the Boltzmann equation. 
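The estimate for the kinematic-viscosity minimum quoted earlier in this section is easy to evaluate. The sketch below does so for a few common molecules; it assumes that ℏ in the formula is the reduced Planck constant, and the molecular masses (in atomic mass units) are standard values supplied here for illustration.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E  = 9.1093837e-31     # electron mass, kg
AMU  = 1.66053907e-27    # atomic mass unit, kg

def nu_min(molecular_mass_amu):
    """Rough minimum kinematic viscosity, nu_min = hbar / (4*pi*sqrt(m_e*m)), in m^2/s."""
    m = molecular_mass_amu * AMU
    return HBAR / (4.0 * math.pi * math.sqrt(M_E * m))

for name, mass_amu in [("H2O", 18.0), ("N2", 28.0), ("CO2", 44.0)]:
    print(f"{name:>3}: nu_min ~ {nu_min(mass_amu):.1e} m^2/s")
```

For these molecules the estimate comes out in the 10^-8 m^2/s range, consistent with the formula being only a rough order-of-magnitude guide.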
=== Pure gases === Viscosity in gases arises principally from the molecular diffusion that transports momentum between layers of flow. An elementary calculation for a dilute gas at temperature T {\displaystyle T} and density ρ {\displaystyle \rho } gives μ = α ρ λ 2 k B T π m , {\displaystyle \mu =\alpha \rho \lambda {\sqrt {\frac {2k_{\text{B}}T}{\pi m}}},} where k B {\displaystyle k_{\text{B}}} is the Boltzmann constant, m {\displaystyle m} the molecular mass, and α {\displaystyle \alpha } a numerical constant on the order of 1 {\displaystyle 1} . The quantity λ {\displaystyle \lambda } , the mean free path, measures the average distance a molecule travels between collisions. Even without a priori knowledge of α {\displaystyle \alpha } , this expression has nontrivial implications. In particular, since λ {\displaystyle \lambda } is typically inversely proportional to density and increases with temperature, μ {\displaystyle \mu } itself should increase with temperature and be independent of density at fixed temperature. In fact, both of these predictions persist in more sophisticated treatments, and accurately describe experimental observations. By contrast, liquid viscosity typically decreases with temperature. For rigid elastic spheres of diameter σ {\displaystyle \sigma } , λ {\displaystyle \lambda } can be computed, giving μ = α π 3 / 2 k B m T σ 2 . {\displaystyle \mu ={\frac {\alpha }{\pi ^{3/2}}}{\frac {\sqrt {k_{\text{B}}mT}}{\sigma ^{2}}}.} In this case λ {\displaystyle \lambda } is independent of temperature, so μ ∝ T 1 / 2 {\displaystyle \mu \propto T^{1/2}} . For more complicated molecular models, however, λ {\displaystyle \lambda } depends on temperature in a non-trivial way, and simple kinetic arguments as used here are inadequate. More fundamentally, the notion of a mean free path becomes imprecise for particles that interact over a finite range, which limits the usefulness of the concept for describing real-world gases. ==== Chapman–Enskog theory ==== A technique developed by Sydney Chapman and David Enskog in the early 1900s allows a more refined calculation of μ {\displaystyle \mu } . It is based on the Boltzmann equation, which provides a statistical description of a dilute gas in terms of intermolecular interactions. The technique allows accurate calculation of μ {\displaystyle \mu } for molecular models that are more realistic than rigid elastic spheres, such as those incorporating intermolecular attractions. Doing so is necessary to reproduce the correct temperature dependence of μ {\displaystyle \mu } , which experiments show increases more rapidly than the T 1 / 2 {\displaystyle T^{1/2}} trend predicted for rigid elastic spheres. Indeed, the Chapman–Enskog analysis shows that the predicted temperature dependence can be tuned by varying the parameters in various molecular models. A simple example is the Sutherland model, which describes rigid elastic spheres with weak mutual attraction. In such a case, the attractive force can be treated perturbatively, which leads to a simple expression for μ {\displaystyle \mu } : μ = 5 16 σ 2 ( k B m T π ) 1 / 2 ( 1 + S T ) − 1 , {\displaystyle \mu ={\frac {5}{16\sigma ^{2}}}\left({\frac {k_{\text{B}}mT}{\pi }}\right)^{\!\!1/2}\ \left(1+{\frac {S}{T}}\right)^{\!\!-1},} where S {\displaystyle S} is independent of temperature, being determined only by the parameters of the intermolecular attraction. 
To connect with experiment, it is convenient to rewrite as μ = μ 0 ( T T 0 ) 3 / 2 T 0 + S T + S , {\displaystyle \mu =\mu _{0}\left({\frac {T}{T_{0}}}\right)^{\!\!3/2}\ {\frac {T_{0}+S}{T+S}},} where μ 0 {\displaystyle \mu _{0}} is the viscosity at temperature T 0 {\displaystyle T_{0}} . This expression is usually named Sutherland's formula. If μ {\displaystyle \mu } is known from experiments at T = T 0 {\displaystyle T=T_{0}} and at least one other temperature, then S {\displaystyle S} can be calculated. Expressions for μ {\displaystyle \mu } obtained in this way are qualitatively accurate for a number of simple gases. Slightly more sophisticated models, such as the Lennard-Jones potential, or the more flexible Mie potential, may provide better agreement with experiments, but only at the cost of a more opaque dependence on temperature. A further advantage of these more complex interaction potentials is that they can be used to develop accurate models for a wide variety of properties using the same potential parameters. In situations where little experimental data is available, this makes it possible to obtain model parameters from fitting to properties such as pure-fluid vapour-liquid equilibria, before using the parameters thus obtained to predict the viscosities of interest with reasonable accuracy. In some systems, the assumption of spherical symmetry must be abandoned, as is the case for vapors with highly polar molecules like H2O. In these cases, the Chapman–Enskog analysis is significantly more complicated. ==== Bulk viscosity ==== In the kinetic-molecular picture, a non-zero bulk viscosity arises in gases whenever there are non-negligible relaxational timescales governing the exchange of energy between the translational energy of molecules and their internal energy, e.g. rotational and vibrational. As such, the bulk viscosity is 0 {\displaystyle 0} for a monatomic ideal gas, in which the internal energy of molecules is negligible, but is nonzero for a gas like carbon dioxide, whose molecules possess both rotational and vibrational energy. === Pure liquids === In contrast with gases, there is no simple yet accurate picture for the molecular origins of viscosity in liquids. At the simplest level of description, the relative motion of adjacent layers in a liquid is opposed primarily by attractive molecular forces acting across the layer boundary. In this picture, one (correctly) expects viscosity to decrease with increasing temperature. This is because increasing temperature increases the random thermal motion of the molecules, which makes it easier for them to overcome their attractive interactions. Building on this visualization, a simple theory can be constructed in analogy with the discrete structure of a solid: groups of molecules in a liquid are visualized as forming "cages" which surround and enclose single molecules. These cages can be occupied or unoccupied, and stronger molecular attraction corresponds to stronger cages. Due to random thermal motion, a molecule "hops" between cages at a rate which varies inversely with the strength of molecular attractions. In equilibrium these "hops" are not biased in any direction. On the other hand, in order for two adjacent layers to move relative to each other, the "hops" must be biased in the direction of the relative motion. 
The force required to sustain this directed motion can be estimated for a given shear rate, leading to μ ≈ N A h V e 3.8 T b / T (1) {\displaystyle \mu \approx {\frac {N_{\text{A}}h}{V}}e^{3.8T_{\text{b}}/T}\qquad (1)} where N A {\displaystyle N_{\text{A}}} is the Avogadro constant, h {\displaystyle h} is the Planck constant, V {\displaystyle V} is the volume of a mole of liquid, and T b {\displaystyle T_{\text{b}}} is the normal boiling point. This result has the same form as the well-known empirical relation μ = A e B / T (2) {\displaystyle \mu =Ae^{B/T}\qquad (2)} where A {\displaystyle A} and B {\displaystyle B} are constants fit from data. On the other hand, several authors express caution with respect to this model. Errors as large as 30% can be encountered using equation (1), compared with fitting equation (2) to experimental data. More fundamentally, the physical assumptions underlying equation (1) have been criticized. It has also been argued that the exponential dependence in equation (1) does not necessarily describe experimental observations more accurately than simpler, non-exponential expressions. In light of these shortcomings, the development of a less ad hoc model is a matter of practical interest. Foregoing simplicity in favor of precision, it is possible to write rigorous expressions for viscosity starting from the fundamental equations of motion for molecules. A classic example of this approach is Irving–Kirkwood theory. On the other hand, such expressions are given as averages over multiparticle correlation functions and are therefore difficult to apply in practice. In general, empirically derived expressions (based on existing viscosity measurements) appear to be the only consistently reliable means of calculating viscosity in liquids. Local atomic structure changes observed in undercooled liquids on cooling below the equilibrium melting temperature, either in terms of the radial distribution function g(r) or the structure factor S(Q), are found to be directly responsible for the liquid fragility: the deviation of the temperature dependence of the undercooled liquid's viscosity from the Arrhenius equation (2), through modification of the activation energy for viscous flow. At the same time, equilibrium liquids follow the Arrhenius equation. === Mixtures and blends === ==== Gaseous mixtures ==== The same molecular-kinetic picture of a single component gas can also be applied to a gaseous mixture. For instance, in the Chapman–Enskog approach the viscosity μ mix {\displaystyle \mu _{\text{mix}}} of a binary mixture of gases can be written in terms of the individual component viscosities μ 1 , 2 {\displaystyle \mu _{1,2}} , their respective volume fractions, and the intermolecular interactions. As for the single-component gas, the dependence of μ mix {\displaystyle \mu _{\text{mix}}} on the parameters of the intermolecular interactions enters through various collisional integrals which may not be expressible in closed form. To obtain usable expressions for μ mix {\displaystyle \mu _{\text{mix}}} which reasonably match experimental data, the collisional integrals may be computed numerically or from correlations. In some cases, the collision integrals are regarded as fitting parameters, and are fitted directly to experimental data. This is a common approach in the development of reference equations for gas-phase viscosities. An example of such a procedure is the Sutherland approach for the single-component gas, discussed above. For gas mixtures consisting of simple molecules, Revised Enskog Theory has been shown to accurately represent both the density- and temperature dependence of the viscosity over a wide range of conditions.
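Since the Sutherland approach was just cited as a template for pure-gas correlations, a brief numerical sketch of Sutherland's formula from the previous subsection may be useful. The constants used below for dry air (μ0 of about 1.716e-5 Pa·s at T0 = 273.15 K and S of about 110.4 K) are commonly quoted literature values and are an assumption here, not figures taken from the text.

```python
def sutherland_viscosity(T, mu0=1.716e-5, T0=273.15, S=110.4):
    """Sutherland's formula: mu = mu0 * (T/T0)**1.5 * (T0 + S) / (T + S).

    T is the absolute temperature in kelvins; the defaults are commonly
    quoted constants for dry air (an assumption, used here for illustration).
    Returns a dynamic viscosity in Pa*s.
    """
    return mu0 * (T / T0) ** 1.5 * (T0 + S) / (T + S)

for T in (250.0, 300.0, 400.0, 600.0):
    print(f"T = {T:6.1f} K  ->  mu = {sutherland_viscosity(T):.3e} Pa*s")
```

With these constants the formula reproduces the familiar increase of gas viscosity with temperature, somewhat faster than the T^(1/2) law obtained for rigid elastic spheres.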
==== Blends of liquids ==== As for pure liquids, the viscosity of a blend of liquids is difficult to predict from molecular principles. One method is to extend the molecular "cage" theory presented above for a pure liquid. This can be done with varying levels of sophistication. One expression resulting from such an analysis is the Lederer–Roegiers equation for a binary mixture: ln ⁡ μ blend = x 1 x 1 + α x 2 ln ⁡ μ 1 + α x 2 x 1 + α x 2 ln ⁡ μ 2 , {\displaystyle \ln \mu _{\text{blend}}={\frac {x_{1}}{x_{1}+\alpha x_{2}}}\ln \mu _{1}+{\frac {\alpha x_{2}}{x_{1}+\alpha x_{2}}}\ln \mu _{2},} where α {\displaystyle \alpha } is an empirical parameter, and x 1 , 2 {\displaystyle x_{1,2}} and μ 1 , 2 {\displaystyle \mu _{1,2}} are the respective mole fractions and viscosities of the component liquids. Since blending is an important process in the lubricating and oil industries, a variety of empirical and proprietary equations exist for predicting the viscosity of a blend. === Solutions and suspensions === ==== Aqueous solutions ==== Depending on the solute and range of concentration, an aqueous electrolyte solution can have either a larger or smaller viscosity compared with pure water at the same temperature and pressure. For instance, a 20% saline (sodium chloride) solution has viscosity over 1.5 times that of pure water, whereas a 20% potassium iodide solution has viscosity about 0.91 times that of pure water. An idealized model of dilute electrolytic solutions leads to the following prediction for the viscosity μ s {\displaystyle \mu _{s}} of a solution: μ s μ 0 = 1 + A c , {\displaystyle {\frac {\mu _{s}}{\mu _{0}}}=1+A{\sqrt {c}},} where μ 0 {\displaystyle \mu _{0}} is the viscosity of the solvent, c {\displaystyle c} is the concentration, and A {\displaystyle A} is a positive constant which depends on both solvent and solute properties. However, this expression is only valid for very dilute solutions, having c {\displaystyle c} less than 0.1 mol/L. For higher concentrations, additional terms are necessary which account for higher-order molecular correlations: μ s μ 0 = 1 + A c + B c + C c 2 , {\displaystyle {\frac {\mu _{s}}{\mu _{0}}}=1+A{\sqrt {c}}+Bc+Cc^{2},} where B {\displaystyle B} and C {\displaystyle C} are fit from data. In particular, a negative value of B {\displaystyle B} is able to account for the decrease in viscosity observed in some solutions. Estimated values of these constants are shown below for sodium chloride and potassium iodide at temperature 25 °C (mol = mole, L = liter). ==== Suspensions ==== In a suspension of solid particles (e.g. micron-size spheres suspended in oil), an effective viscosity μ eff {\displaystyle \mu _{\text{eff}}} can be defined in terms of stress and strain components which are averaged over a volume large compared with the distance between the suspended particles, but small with respect to macroscopic dimensions. Such suspensions generally exhibit non-Newtonian behavior. However, for dilute systems in steady flows, the behavior is Newtonian and expressions for μ eff {\displaystyle \mu _{\text{eff}}} can be derived directly from the particle dynamics. In a very dilute system, with volume fraction ϕ ≲ 0.02 {\displaystyle \phi \lesssim 0.02} , interactions between the suspended particles can be ignored. In such a case one can explicitly calculate the flow field around each particle independently, and combine the results to obtain μ eff {\displaystyle \mu _{\text{eff}}} . 
For spheres, this results in the Einstein's effective viscosity formula: μ eff = μ 0 ( 1 + 5 2 ϕ ) , {\displaystyle \mu _{\text{eff}}=\mu _{0}\left(1+{\frac {5}{2}}\phi \right),} where μ 0 {\displaystyle \mu _{0}} is the viscosity of the suspending liquid. The linear dependence on ϕ {\displaystyle \phi } is a consequence of neglecting interparticle interactions. For dilute systems in general, one expects μ eff {\displaystyle \mu _{\text{eff}}} to take the form μ eff = μ 0 ( 1 + B ϕ ) , {\displaystyle \mu _{\text{eff}}=\mu _{0}\left(1+B\phi \right),} where the coefficient B {\displaystyle B} may depend on the particle shape (e.g. spheres, rods, disks). Experimental determination of the precise value of B {\displaystyle B} is difficult, however: even the prediction B = 5 / 2 {\displaystyle B=5/2} for spheres has not been conclusively validated, with various experiments finding values in the range 1.5 ≲ B ≲ 5 {\displaystyle 1.5\lesssim B\lesssim 5} . This deficiency has been attributed to difficulty in controlling experimental conditions. In denser suspensions, μ eff {\displaystyle \mu _{\text{eff}}} acquires a nonlinear dependence on ϕ {\displaystyle \phi } , which indicates the importance of interparticle interactions. Various analytical and semi-empirical schemes exist for capturing this regime. At the most basic level, a term quadratic in ϕ {\displaystyle \phi } is added to μ eff {\displaystyle \mu _{\text{eff}}} : μ eff = μ 0 ( 1 + B ϕ + B 1 ϕ 2 ) , {\displaystyle \mu _{\text{eff}}=\mu _{0}\left(1+B\phi +B_{1}\phi ^{2}\right),} and the coefficient B 1 {\displaystyle B_{1}} is fit from experimental data or approximated from the microscopic theory. However, some authors advise caution in applying such simple formulas since non-Newtonian behavior appears in dense suspensions ( ϕ ≳ 0.25 {\displaystyle \phi \gtrsim 0.25} for spheres), or in suspensions of elongated or flexible particles. There is a distinction between a suspension of solid particles, described above, and an emulsion. The latter is a suspension of tiny droplets, which themselves may exhibit internal circulation. The presence of internal circulation can decrease the observed effective viscosity, and different theoretical or semi-empirical models must be used. === Amorphous materials === In the high and low temperature limits, viscous flow in amorphous materials (e.g. in glasses and melts) has the Arrhenius form: μ = A e Q / ( R T ) , {\displaystyle \mu =Ae^{Q/(RT)},} where Q is a relevant activation energy, given in terms of molecular parameters; T is temperature; R is the molar gas constant; and A is approximately a constant. The activation energy Q takes a different value depending on whether the high or low temperature limit is being considered: it changes from a high value QH at low temperatures (in the glassy state) to a low value QL at high temperatures (in the liquid state). For intermediate temperatures, Q {\displaystyle Q} varies nontrivially with temperature and the simple Arrhenius form fails. On the other hand, the two-exponential equation μ = A T exp ⁡ ( B R T ) [ 1 + C exp ⁡ ( D R T ) ] , {\displaystyle \mu =AT\exp \left({\frac {B}{RT}}\right)\left[1+C\exp \left({\frac {D}{RT}}\right)\right],} where A {\displaystyle A} , B {\displaystyle B} , C {\displaystyle C} , D {\displaystyle D} are all constants, provides a good fit to experimental data over the entire range of temperatures, while at the same time reducing to the correct Arrhenius form in the low and high temperature limits. 
This expression, also known as the Douglas-Doremus-Ojovan model, can be motivated from various theoretical models of amorphous materials at the atomic level. A two-exponential equation for the viscosity can be derived within the Dyre shoving model of supercooled liquids, where the Arrhenius energy barrier is identified with the high-frequency shear modulus times a characteristic shoving volume. Upon specifying the temperature dependence of the shear modulus via thermal expansion and via the repulsive part of the intermolecular potential, another two-exponential equation is retrieved: μ = exp ⁡ { V c C G k B T exp ⁡ [ ( 2 + λ ) α T T g ( 1 − T T g ) ] } {\displaystyle \mu =\exp {\left\{{\frac {V_{c}C_{G}}{k_{B}T}}\exp {\left[(2+\lambda )\alpha _{T}T_{g}\left(1-{\frac {T}{T_{g}}}\right)\right]}\right\}}} where C G {\displaystyle C_{G}} denotes the high-frequency shear modulus of the material evaluated at a temperature equal to the glass transition temperature T g {\displaystyle T_{g}} , V c {\displaystyle V_{c}} is the so-called shoving volume, i.e. it is the characteristic volume of the group of atoms involved in the shoving event by which an atom/molecule escapes from the cage of nearest-neighbours, typically on the order of the volume occupied by a few atoms. Furthermore, α T {\displaystyle \alpha _{T}} is the thermal expansion coefficient of the material, λ {\displaystyle \lambda } is a parameter which measures the steepness of the power-law rise of the ascending flank of the first peak of the radial distribution function, and is quantitatively related to the repulsive part of the interatomic potential. Finally, k B {\displaystyle k_{B}} denotes the Boltzmann constant. === Eddy viscosity === In the study of turbulence in fluids, a common practical strategy is to ignore the small-scale vortices (or eddies) in the motion and to calculate a large-scale motion with an effective viscosity, called the "eddy viscosity", which characterizes the transport and dissipation of energy in the smaller-scale flow (see large eddy simulation). In contrast to the viscosity of the fluid itself, which must be positive by the second law of thermodynamics, the eddy viscosity can be negative. == Prediction == Because viscosity depends continuously on temperature and pressure, it cannot be fully characterized by a finite number of experimental measurements. Predictive formulas become necessary if experimental values are not available at the temperatures and pressures of interest. This capability is important for thermophysical simulations, in which the temperature and pressure of a fluid can vary continuously with space and time. A similar situation is encountered for mixtures of pure fluids, where the viscosity depends continuously on the concentration ratios of the constituent fluids. For the simplest fluids, such as dilute monatomic gases and their mixtures, ab initio quantum mechanical computations can accurately predict viscosity in terms of fundamental atomic constants, i.e., without reference to existing viscosity measurements. For the special case of dilute helium, uncertainties in the ab initio calculated viscosity are two orders of magnitude smaller than uncertainties in experimental values. For slightly more complex fluids and mixtures at moderate densities (i.e. sub-critical densities), Revised Enskog Theory can be used to predict viscosities with some accuracy.
Revised Enskog Theory is predictive in the sense that predictions for viscosity can be obtained using parameters fitted to other, pure-fluid thermodynamic properties or transport properties, thus requiring no a priori experimental viscosity measurements. For most fluids, high-accuracy, first-principles computations are not feasible. Rather, theoretical or empirical expressions must be fit to existing viscosity measurements. If such an expression is fit to high-fidelity data over a large range of temperatures and pressures, then it is called a "reference correlation" for that fluid. Reference correlations have been published for many pure fluids; a few examples are water, carbon dioxide, ammonia, benzene, and xenon. Many of these cover temperature and pressure ranges that encompass gas, liquid, and supercritical phases. Thermophysical modeling software often relies on reference correlations for predicting viscosity at user-specified temperature and pressure. These correlations may be proprietary. Examples are REFPROP (proprietary) and CoolProp (open-source). Viscosity can also be computed using formulas that express it in terms of the statistics of individual particle trajectories. These formulas include the Green–Kubo relations for the linear shear viscosity and the transient time correlation function expressions derived by Evans and Morriss in 1988. The advantage of these expressions is that they are formally exact and valid for general systems. The disadvantage is that they require detailed knowledge of particle trajectories, available only in computationally expensive simulations such as molecular dynamics. An accurate model for interparticle interactions is also required, which may be difficult to obtain for complex molecules. == Selected substances == Observed values of viscosity vary over several orders of magnitude, even for common substances (see the order of magnitude table below). For instance, a 70% sucrose (sugar) solution has a viscosity over 400 times that of water, and 26,000 times that of air. More dramatically, pitch has been estimated to have a viscosity 230 billion times that of water. === Water === The dynamic viscosity μ {\displaystyle \mu } of water is about 0.89 mPa·s at room temperature (25 °C). As a function of temperature in kelvins, the viscosity can be estimated using the semi-empirical Vogel-Fulcher-Tammann equation: μ = A exp ⁡ ( B T − C ) {\displaystyle \mu =A\exp \left({\frac {B}{T-C}}\right)} where A = 0.02939 mPa·s, B = 507.88 K, and C = 149.3 K. Experimentally determined values of the viscosity are also given in the table below. The values at 20 °C are a useful reference: there, the dynamic viscosity is about 1 cP and the kinematic viscosity is about 1 cSt. === Air === Under standard atmospheric conditions (25 °C and pressure of 1 bar), the dynamic viscosity of air is 18.5 μPa·s, roughly 50 times smaller than the viscosity of water at the same temperature. Except at very high pressure, the viscosity of air depends mostly on the temperature. Among the many possible approximate formulas for the temperature dependence (see Temperature dependence of viscosity), one is: η air = 2.791 × 10 − 7 × T 0.7355 {\displaystyle \eta _{\text{air}}=2.791\times 10^{-7}\times T^{0.7355}} which is accurate in the range −20 °C to 400 °C. For this formula to be valid, the temperature must be given in kelvins; η air {\displaystyle \eta _{\text{air}}} then corresponds to the viscosity in Pa·s. 
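Both of the approximate correlations just given can be evaluated directly. A minimal sketch using only the constants quoted above (temperatures in kelvins):

```python
import math

def water_viscosity_mPas(T):
    """Vogel-Fulcher-Tammann fit for water quoted above; returns mu in mPa*s."""
    A, B, C = 0.02939, 507.88, 149.3   # mPa*s, K, K (constants from the text)
    return A * math.exp(B / (T - C))

def air_viscosity_Pas(T):
    """Power-law fit for air quoted above, valid roughly -20 C to 400 C; returns Pa*s."""
    return 2.791e-7 * T ** 0.7355

T = 298.15  # 25 C
print(f"water: {water_viscosity_mPas(T):.2f} mPa*s   (text quotes about 0.89 mPa*s)")
print(f"air:   {air_viscosity_Pas(T) * 1e6:.1f} uPa*s  (text quotes about 18.5 uPa*s)")
```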
=== Other common substances === === Order of magnitude estimates === The following table illustrates the range of viscosity values observed in common substances. Unless otherwise noted, a temperature of 25 °C and a pressure of 1 atmosphere are assumed. The values listed are representative estimates only, as they do not account for measurement uncertainties, variability in material definitions, or non-Newtonian behavior. == See also == == References == === Footnotes === === Citations === === Sources === == External links == Viscosity - The Feynman Lectures on Physics Fluid properties – high accuracy calculation of viscosity for frequently encountered pure liquids and gases Fluid Characteristics Chart – a table of viscosities and vapor pressures for various fluids Gas Dynamics Toolbox – calculate coefficient of viscosity for mixtures of gases Glass Viscosity Measurement – viscosity measurement, viscosity units and fixpoints, glass viscosity calculation Kinematic Viscosity – conversion between kinematic and dynamic viscosity Physical Characteristics of Water – a table of water viscosity as a function of temperature Calculation of temperature-dependent dynamic viscosities for some common components Artificial viscosity Viscosity of Air, Dynamic and Kinematic, Engineers Edge
Wikipedia/Viscous_forces
In physics, potential energy is the energy of an object or system due to the body's position relative to other objects, or the configuration of its particles. The energy is equal to the work done against any restoring forces, such as gravity or those in a spring. The term potential energy was introduced by the 19th-century Scottish engineer and physicist William Rankine, although it has links to the ancient Greek philosopher Aristotle's concept of potentiality. Common types of potential energy include gravitational potential energy, the elastic potential energy of a deformed spring, and the electric potential energy of an electric charge and an electric field. The unit for energy in the International System of Units (SI) is the joule (symbol J). Potential energy is associated with forces that act on a body in a way that the total work done by these forces on the body depends only on the initial and final positions of the body in space. These forces, whose total work is path independent, are called conservative forces. If the force acting on a body varies over space, then one has a force field; such a field is described by vectors at every point in space, which is, in turn, called a vector field. A conservative vector field can be simply expressed as the gradient of a certain scalar function, called a scalar potential. The potential energy is related to, and can be obtained from, this potential function. == Overview == There are various types of potential energy, each associated with a particular type of force. For example, the work of an elastic force is called elastic potential energy; work of the gravitational force is called gravitational potential energy; work of the Coulomb force is called electric potential energy; work of the nuclear force acting on the baryon charge is called nuclear potential energy; work of intermolecular forces is called intermolecular potential energy. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of configurations of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their configuration. Forces derivable from a potential are also called conservative forces. The work done by a conservative force is W = − Δ U , {\displaystyle W=-\Delta U,} where Δ U {\displaystyle \Delta U} is the change in the potential energy associated with the force. The negative sign provides the convention that work done against a force field increases potential energy, while work done by the force field decreases potential energy. Common notations for potential energy are PE, U, V, and Ep. Potential energy is the energy by virtue of an object's position relative to other objects. Potential energy is often associated with restoring forces such as a spring or the force of gravity. The action of stretching a spring or lifting a mass is performed by an external force that works against the force field of the potential. This work is stored in the force field, which is said to be stored as potential energy. If the external force is removed the force field acts on the body to perform the work as it moves the body back to the initial position, reducing the stretch of the spring or causing a body to fall. Consider a ball whose mass is m dropped from height h. The acceleration g of free fall is approximately constant, so the weight force of the ball mg is constant. 
The product of force and displacement gives the work done, which is equal to the gravitational potential energy, thus U g = m g h . {\displaystyle U_{\text{g}}=mgh.} The more formal definition is that potential energy is the energy difference between the energy of an object in a given position and its energy at a reference position. == History == From around 1840 scientists sought to define and understand energy and work. The term "potential energy" was coined by William Rankine, a Scottish engineer and physicist, in 1853 as part of a specific effort to develop terminology. He chose the term as part of the pair "actual" vs "potential" going back to work by Aristotle. In his 1867 discussion of the same topic, Rankine describes potential energy as 'energy of configuration' in contrast to actual energy as 'energy of activity'. Also in 1867, William Thomson introduced "kinetic energy" as the opposite of "potential energy", asserting that all actual energy took the form of (1/2)mv2. Once this hypothesis became widely accepted, the term "actual energy" gradually faded. == Work and potential energy == Potential energy is closely linked with forces. If the work done by a force on a body that moves from A to B does not depend on the path between these points (if the work is done by a conservative force), then the work of this force measured from A assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field. If the work for an applied force is independent of the path, then the work done by the force is evaluated from the start to the end of the trajectory of the point of application. This means that there is a function U(x), called a "potential", that can be evaluated at the two points xA and xB to obtain the work over any trajectory between these two points. It is tradition to define this function with a negative sign so that positive work is a reduction in the potential, that is W = ∫ C F ⋅ d x = U ( x A ) − U ( x B ) {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =U(\mathbf {x} _{\text{A}})-U(\mathbf {x} _{\text{B}})} where C is the trajectory taken from A to B. Because the work done is independent of the path taken, this expression is true for any trajectory, C, from A to B. The function U(x) is called the potential energy associated with the applied force. Examples of forces that have potential energies are gravity and spring forces. === Derivable from a potential === In this section the relationship between work and potential energy is presented in more detail. The line integral that defines work along curve C takes a special form if the force F is related to a scalar field U′(x) so that F = ∇ U ′ = ( ∂ U ′ ∂ x , ∂ U ′ ∂ y , ∂ U ′ ∂ z ) . {\displaystyle \mathbf {F} ={\nabla U'}=\left({\frac {\partial U'}{\partial x}},{\frac {\partial U'}{\partial y}},{\frac {\partial U'}{\partial z}}\right).} This means that the units of U′ must be those of energy. In this case, the work along the curve is given by W = ∫ C F ⋅ d x = ∫ C ∇ U ′ ⋅ d x , {\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {x} =\int _{C}\nabla U'\cdot d\mathbf {x} ,} which can be evaluated using the gradient theorem to obtain W = U ′ ( x B ) − U ′ ( x A ) .
{\displaystyle W=U'(\mathbf {x} _{\text{B}})-U'(\mathbf {x} _{\text{A}}).} This shows that when forces are derivable from a scalar field, the work of those forces along a curve C is computed by evaluating the scalar field at the start point A and the end point B of the curve. This means the work integral does not depend on the path between A and B and is said to be independent of the path. Potential energy U = −U′(x) is traditionally defined as the negative of this scalar field so that work by the force field decreases potential energy, that is W = U ( x A ) − U ( x B ) . {\displaystyle W=U(\mathbf {x} _{\text{A}})-U(\mathbf {x} _{\text{B}}).} In this case, the application of the del operator to the work function yields, ∇ W = − ∇ U = − ( ∂ U ∂ x , ∂ U ∂ y , ∂ U ∂ z ) = F , {\displaystyle {\nabla W}=-{\nabla U}=-\left({\frac {\partial U}{\partial x}},{\frac {\partial U}{\partial y}},{\frac {\partial U}{\partial z}}\right)=\mathbf {F} ,} and the force F is said to be "derivable from a potential". This also necessarily implies that F must be a conservative vector field. The potential U defines a force F at every point x in space, so the set of forces is called a force field. === Computing potential energy === Given a force field F(x), evaluation of the work integral using the gradient theorem can be used to find the scalar function associated with potential energy. This is done by introducing a parameterized curve γ(t) = r(t) from γ(a) = A to γ(b) = B, and computing, ∫ γ ∇ Φ ( r ) ⋅ d r = ∫ a b ∇ Φ ( r ( t ) ) ⋅ r ′ ( t ) d t , = ∫ a b d d t Φ ( r ( t ) ) d t = Φ ( r ( b ) ) − Φ ( r ( a ) ) = Φ ( x B ) − Φ ( x A ) . {\displaystyle {\begin{aligned}\int _{\gamma }\nabla \Phi (\mathbf {r} )\cdot d\mathbf {r} &=\int _{a}^{b}\nabla \Phi (\mathbf {r} (t))\cdot \mathbf {r} '(t)dt,\\&=\int _{a}^{b}{\frac {d}{dt}}\Phi (\mathbf {r} (t))dt=\Phi (\mathbf {r} (b))-\Phi (\mathbf {r} (a))=\Phi \left(\mathbf {x} _{B}\right)-\Phi \left(\mathbf {x} _{A}\right).\end{aligned}}} For the force field F, let v = dr/dt, then the gradient theorem yields, ∫ γ F ⋅ d r = ∫ a b F ⋅ v d t , = − ∫ a b d d t U ( r ( t ) ) d t = U ( x A ) − U ( x B ) . {\displaystyle {\begin{aligned}\int _{\gamma }\mathbf {F} \cdot d\mathbf {r} &=\int _{a}^{b}\mathbf {F} \cdot \mathbf {v} \,dt,\\&=-\int _{a}^{b}{\frac {d}{dt}}U(\mathbf {r} (t))\,dt=U(\mathbf {x} _{A})-U(\mathbf {x} _{B}).\end{aligned}}} The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity v of the point of application, that is P ( t ) = − ∇ U ⋅ v = F ⋅ v . {\displaystyle P(t)=-{\nabla U}\cdot \mathbf {v} =\mathbf {F} \cdot \mathbf {v} .} Examples of work that can be computed from potential functions are gravity and spring forces. == Potential energy for near-Earth gravity == For small height changes, gravitational potential energy can be computed using U g = m g h , {\displaystyle U_{\text{g}}=mgh,} where m is the mass in kilograms, g is the local gravitational field (9.8 metres per second squared on Earth), h is the height above a reference level in metres, and U is the energy in joules. In classical physics, gravity exerts a constant downward force F = (0, 0, Fz) on the center of mass of a body moving near the surface of the Earth. The work of gravity on a body moving along a trajectory r(t) = (x(t), y(t), z(t)), such as the track of a roller coaster is calculated using its velocity, v = (vx, vy, vz), to obtain W = ∫ t 1 t 2 F ⋅ v d t = ∫ t 1 t 2 F z v z d t = F z Δ z . 
{\displaystyle W=\int _{t_{1}}^{t_{2}}{\boldsymbol {F}}\cdot {\boldsymbol {v}}\,dt=\int _{t_{1}}^{t_{2}}F_{\text{z}}v_{\text{z}}\,dt=F_{\text{z}}\Delta z.} where the integral of the vertical component of velocity is the vertical distance. The work of gravity depends only on the vertical movement of the curve r(t). == Potential energy for a linear spring == A horizontal spring exerts a force F = (−kx, 0, 0) that is proportional to its deformation in the axial or x-direction. The work of this spring on a body moving along the space curve s(t) = (x(t), y(t), z(t)), is calculated using its velocity, v = (vx, vy, vz), to obtain W = ∫ 0 t F ⋅ v d t = − ∫ 0 t k x v x d t = − ∫ 0 t k x d x d t d t = ∫ x ( t 0 ) x ( t ) k x d x = 1 2 k x 2 {\displaystyle W=\int _{0}^{t}\mathbf {F} \cdot \mathbf {v} \,dt=-\int _{0}^{t}kxv_{\text{x}}\,dt=-\int _{0}^{t}kx{\frac {dx}{dt}}dt=\int _{x(t_{0})}^{x(t)}kx\,dx={\frac {1}{2}}kx^{2}} For convenience, consider contact with the spring occurs at t = 0, then the integral of the product of the distance x and the x-velocity, xvx, is x2/2. The function U ( x ) = 1 2 k x 2 , {\displaystyle U(x)={\frac {1}{2}}kx^{2},} is called the potential energy of a linear spring. Elastic potential energy is the potential energy of an elastic object (for example a bow or a catapult) that is deformed under tension or compression (or stressed in formal terminology). It arises as a consequence of a force that tries to restore the object to its original shape, which is most often the electromagnetic force between the atoms and molecules that constitute the object. If the stretch is released, the energy is transformed into kinetic energy. == Potential energy for gravitational forces between two bodies == The gravitational potential function, also known as gravitational potential energy, is: U = − G M m r , {\displaystyle U=-{\frac {GMm}{r}},} The negative sign follows the convention that work is gained from a loss of potential energy. === Derivation === The gravitational force between two bodies of mass M and m separated by a distance r is given by Newton's law of universal gravitation F = − G M m r 2 r ^ , {\displaystyle \mathbf {F} =-{\frac {GMm}{r^{2}}}\mathbf {\hat {r}} ,} where r ^ {\displaystyle \mathbf {\hat {r}} } is a vector of length 1 pointing from M to m and G is the gravitational constant. Let the mass m move at the velocity v then the work of gravity on this mass as it moves from position r(t1) to r(t2) is given by W = − ∫ r ( t 1 ) r ( t 2 ) G M m r 3 r ⋅ d r = − ∫ t 1 t 2 G M m r 3 r ⋅ v d t . {\displaystyle W=-\int _{\mathbf {r} (t_{1})}^{\mathbf {r} (t_{2})}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot d\mathbf {r} =-\int _{t_{1}}^{t_{2}}{\frac {GMm}{r^{3}}}\mathbf {r} \cdot \mathbf {v} \,dt.} The position and velocity of the mass m are given by r = r e r , v = r ˙ e r + r θ ˙ e t , {\displaystyle \mathbf {r} =r\mathbf {e} _{r},\qquad \mathbf {v} ={\dot {r}}\mathbf {e} _{\text{r}}+r{\dot {\theta }}\mathbf {e} _{\text{t}},} where er and et are the radial and tangential unit vectors directed relative to the vector from M to m. Use this to simplify the formula for work of gravity to, W = − ∫ t 1 t 2 G m M r 3 ( r e r ) ⋅ ( r ˙ e r + r θ ˙ e t ) d t = − ∫ t 1 t 2 G m M r 3 r r ˙ d t = G M m r ( t 2 ) − G M m r ( t 1 ) . 
{\displaystyle W=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}(r\mathbf {e} _{\text{r}})\cdot ({\dot {r}}\mathbf {e} _{\text{r}}+r{\dot {\theta }}\mathbf {e} _{\text{t}})\,dt=-\int _{t_{1}}^{t_{2}}{\frac {GmM}{r^{3}}}r{\dot {r}}dt={\frac {GMm}{r(t_{2})}}-{\frac {GMm}{r(t_{1})}}.} This calculation uses the fact that d d t r − 1 = − r − 2 r ˙ = − r ˙ r 2 . {\displaystyle {\frac {d}{dt}}r^{-1}=-r^{-2}{\dot {r}}=-{\frac {\dot {r}}{r^{2}}}.} == Potential energy for electrostatic forces between two bodies == The electrostatic force exerted by a charge Q on another charge q separated by a distance r is given by Coulomb's law F = 1 4 π ε 0 Q q r 2 r ^ , {\displaystyle \mathbf {F} ={\frac {1}{4\pi \varepsilon _{0}}}{\frac {Qq}{r^{2}}}\mathbf {\hat {r}} ,} where r ^ {\displaystyle \mathbf {\hat {r}} } is a vector of length 1 pointing from Q to q and ε0 is the vacuum permittivity. The work W required to move q from A to any point B in the electrostatic force field is given by the potential function U ( r ) = 1 4 π ε 0 Q q r . {\displaystyle U(r)={\frac {1}{4\pi \varepsilon _{0}}}{\frac {Qq}{r}}.} == Reference level == The potential energy is a function of the state a system is in, and is defined relative to that for a particular state. This reference state is not always a real state; it may also be a limit, such as with the distances between all bodies tending to infinity, provided that the energy involved in tending to that limit is finite, such as in the case of inverse-square law forces. Any arbitrary reference state could be used; therefore it can be chosen based on convenience. Typically the potential energy of a system depends on the relative positions of its components only, so the reference state can also be expressed in terms of relative positions. == Gravitational potential energy == Gravitational energy is the potential energy associated with gravitational force, as work is required to elevate objects against Earth's gravity. The potential energy due to elevated positions is called gravitational potential energy, and is evidenced by water in an elevated reservoir or kept behind a dam. If an object falls from one point to another point inside a gravitational field, the force of gravity will do positive work on the object, and the gravitational potential energy will decrease by the same amount. Consider a book placed on top of a table. As the book is raised from the floor to the table, some external force works against the gravitational force. If the book falls back to the floor, the "falling" energy the book receives is provided by the gravitational force. Thus, if the book falls off the table, this potential energy goes to accelerate the mass of the book and is converted into kinetic energy. When the book hits the floor this kinetic energy is converted into heat, deformation, and sound by the impact. The factors that affect an object's gravitational potential energy are its height relative to some reference point, its mass, and the strength of the gravitational field it is in. Thus, a book lying on a table has less gravitational potential energy than the same book on top of a taller cupboard and less gravitational potential energy than a heavier book lying on the same table. An object at a certain height above the Moon's surface has less gravitational potential energy than at the same height above the Earth's surface because the Moon's gravity is weaker. 
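As a quick numerical check of the path independence derived above, the following sketch (with illustrative masses and positions, not values from the article) integrates F · dr along two different paths between the same endpoints and compares the result with the closed form W = GMm/r(t2) − GMm/r(t1):

```python
import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # central mass, kg (Earth-like, illustrative)
m = 1000.0           # test mass, kg (illustrative)

def work_along_path(points):
    """Accumulate W = sum F . dr along a polyline of positions (midpoint rule)."""
    W = 0.0
    for a, b in zip(points[:-1], points[1:]):
        mid = 0.5 * (a + b)
        r = np.linalg.norm(mid)
        F = -G * M * m / r**3 * mid          # F = -GMm/r^3 * r_vec, as in the derivation above
        W += F @ (b - a)
    return W

r_A = np.array([7.0e6, 0.0, 0.0])            # start position, m
r_B = np.array([0.0, 6.6e6, 0.0])            # end position, m

t = np.linspace(0.0, 1.0, 20001)[:, None]
straight = r_A + t * (r_B - r_A)                                   # path 1: straight line
bulge = straight + np.sin(np.pi * t) * np.array([2.0e6, 2.0e6, 1.0e6])  # path 2: outward detour

closed_form = G * M * m * (1.0 / np.linalg.norm(r_B) - 1.0 / np.linalg.norm(r_A))
print(work_along_path(straight), work_along_path(bulge), closed_form)
```

Both numerically integrated values agree with the closed form to within the discretization error, illustrating that the work of gravity depends only on the endpoints.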
"Height" in the common sense of the term cannot be used for gravitational potential energy calculations when gravity is not assumed to be a constant. The following sections provide more detail. === Local approximation === The strength of a gravitational field varies with location. However, when the change of distance is small in relation to the distances from the center of the source of the gravitational field, this variation in field strength is negligible and we can assume that the force of gravity on a particular object is constant. Near the surface of the Earth, for example, we assume that the acceleration due to gravity is a constant g = 9.8 m/s2 (standard gravity). In this case, a simple expression for gravitational potential energy can be derived using the W = Fd equation for work, and the equation W F = − Δ U F . {\displaystyle W_{\text{F}}=-\Delta U_{\text{F}}.} The amount of gravitational potential energy held by an elevated object is equal to the work done against gravity in lifting it. The work done equals the force required to move it upward multiplied with the vertical distance it is moved (remember W = Fd). The upward force required while moving at a constant velocity is equal to the weight, mg, of an object, so the work done in lifting it through a height h is the product mgh. Thus, when accounting only for mass, gravity, and altitude, the equation is: U = m g h {\displaystyle U=mgh} where U is the potential energy of the object relative to its being on the Earth's surface, m is the mass of the object, g is the acceleration due to gravity, and h is the altitude of the object. Hence, the potential difference is Δ U = m g Δ h . {\displaystyle \Delta U=mg\Delta h.} === General formula === However, over large variations in distance, the approximation that g is constant is no longer valid, and we have to use calculus and the general mathematical definition of work to determine gravitational potential energy. For the computation of the potential energy, we can integrate the gravitational force, whose magnitude is given by Newton's law of gravitation, with respect to the distance r between the two bodies. Using that definition, the gravitational potential energy of a system of masses m1 and M2 at a distance r using the Newtonian constant of gravitation G is U = − G m 1 M 2 r + K , {\displaystyle U=-G{\frac {m_{1}M_{2}}{r}}+K,} where K is an arbitrary constant dependent on the choice of datum from which potential is measured. Choosing the convention that K = 0 (i.e. in relation to a point at infinity) makes calculations simpler, albeit at the cost of making U negative; for why this is physically reasonable, see below. Given this formula for U, the total potential energy of a system of n bodies is found by summing, for all n ( n − 1 ) 2 {\textstyle {\frac {n(n-1)}{2}}} pairs of two bodies, the potential energy of the system of those two bodies. Considering the system of bodies as the combined set of small particles the bodies consist of, and applying the previous on the particle level we get the negative gravitational binding energy. This potential energy is more strongly negative than the total potential energy of the system of bodies as such since it also includes the negative gravitational binding energy of each body. The potential energy of the system of bodies as such is the negative of the energy needed to separate the bodies from each other to infinity, while the gravitational binding energy is the energy needed to separate all particles from each other to infinity. 
U = − m ( G M 1 r 1 + G M 2 r 2 ) {\displaystyle U=-m\left(G{\frac {M_{1}}{r_{1}}}+G{\frac {M_{2}}{r_{2}}}\right)} therefore, U = − m ∑ G M r , {\displaystyle U=-m\sum G{\frac {M}{r}},} === Negative gravitational energy === As with all potential energies, only differences in gravitational potential energy matter for most physical purposes, and the choice of zero point is arbitrary. Given that there is no reasonable criterion for preferring one particular finite r over another, there seem to be only two reasonable choices for the distance at which U becomes zero: r = 0 {\displaystyle r=0} and r = ∞ {\displaystyle r=\infty } . The choice of U = 0 {\displaystyle U=0} at infinity may seem peculiar, and the consequence that gravitational energy is always negative may seem counterintuitive, but this choice allows gravitational potential energy values to be finite, albeit negative. The singularity at r = 0 {\displaystyle r=0} in the formula for gravitational potential energy means that the only other apparently reasonable alternative choice of convention, with U = 0 {\displaystyle U=0} for ⁠ r = 0 {\displaystyle r=0} ⁠, would result in potential energy being positive, but infinitely large for all nonzero values of r, and would make calculations involving sums or differences of potential energies beyond what is possible with the real number system. Since physicists abhor infinities in their calculations, and r is always non-zero in practice, the choice of U = 0 {\displaystyle U=0} at infinity is by far the more preferable choice, even if the idea of negative energy in a gravity well appears to be peculiar at first. The negative value for gravitational energy also has deeper implications that make it seem more reasonable in cosmological calculations where the total energy of the universe can meaningfully be considered; see inflation theory for more on this. === Uses === Gravitational potential energy has a number of practical uses, notably the generation of pumped-storage hydroelectricity. For example, in Dinorwig, Wales, there are two lakes, one at a higher elevation than the other. At times when surplus electricity is not required (and so is comparatively cheap), water is pumped up to the higher lake, thus converting the electrical energy (running the pump) to gravitational potential energy. At times of peak demand for electricity, the water flows back down through electrical generator turbines, converting the potential energy into kinetic energy and then back into electricity. The process is not completely efficient and some of the original energy from the surplus electricity is in fact lost to friction. Gravitational potential energy is also used to power clocks in which falling weights operate the mechanism. It is also used by counterweights for lifting up an elevator, crane, or sash window. Roller coasters are an entertaining way to utilize potential energy – chains are used to move a car up an incline (building up gravitational potential energy), to then have that energy converted into kinetic energy as it falls. Another practical use is utilizing gravitational potential energy to descend (perhaps coast) downhill in transportation such as the descent of an automobile, truck, railroad train, bicycle, airplane, or fluid in a pipeline. In some cases the kinetic energy obtained from the potential energy of descent may be used to start ascending the next grade such as what happens when a road is undulating and has frequent dips. 
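As a rough worked example of the pumped-storage idea above, the following sketch estimates the gravitational potential energy stored by moving a volume of water between two reservoirs; the volume, head, and round-trip efficiency are illustrative assumptions, not data for any particular plant:

```python
rho = 1000.0       # density of water, kg/m^3
g = 9.81           # gravitational acceleration, m/s^2
V = 7.0e6          # volume of water moved between the lakes, m^3 (illustrative)
h = 500.0          # mean height difference between the lakes, m (illustrative)
efficiency = 0.75  # assumed round-trip efficiency of pumping and generation

stored = rho * V * g * h            # gravitational potential energy U = mgh, in joules
recoverable = efficiency * stored
print(f"stored energy:      {stored / 3.6e12:.1f} GWh")
print(f"recoverable energy: {recoverable / 3.6e12:.1f} GWh")
```

With these made-up numbers the upper reservoir stores on the order of ten gigawatt-hours, of which only the fraction set by the round-trip efficiency is returned to the grid.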
The commercialization of stored energy (in the form of rail cars raised to higher elevations) that is then converted to electrical energy when needed by an electrical grid, is being undertaken in the United States in a system called Advanced Rail Energy Storage (ARES). == Chemical potential energy == Chemical potential energy is a form of potential energy related to the structural arrangement of atoms or molecules. This arrangement may be the result of chemical bonds within a molecule or otherwise. Chemical energy of a chemical substance can be transformed to other forms of energy by a chemical reaction. As an example, when a fuel is burned the chemical energy is converted to heat, same is the case with digestion of food metabolized in a biological organism. Green plants transform solar energy to chemical energy through the process known as photosynthesis, and electrical energy can be converted to chemical energy through electrochemical reactions. The similar term chemical potential is used to indicate the potential of a substance to undergo a change of configuration, be it in the form of a chemical reaction, spatial transport, particle exchange with a reservoir, etc. == Electric potential energy == An object can have potential energy by virtue of its electric charge and several forces related to their presence. There are two main types of this kind of potential energy: electrostatic potential energy, electrodynamic potential energy (also sometimes called magnetic potential energy). === Electrostatic potential energy === Electrostatic potential energy between two bodies in space is obtained from the force exerted by a charge Q on another charge q, which is given by F e = − 1 4 π ε 0 Q q r 2 r ^ , {\displaystyle \mathbf {F} _{e}=-{\frac {1}{4\pi \varepsilon _{0}}}{\frac {Qq}{r^{2}}}\mathbf {\hat {r}} ,} where r ^ {\displaystyle \mathbf {\hat {r}} } is a vector of length 1 pointing from Q to q and ε0 is the vacuum permittivity. If the electric charge of an object can be assumed to be at rest, then it has potential energy due to its position relative to other charged objects. The electrostatic potential energy is the energy of an electrically charged particle (at rest) in an electric field. It is defined as the work that must be done to move it from an infinite distance away to its present location, adjusted for non-electrical forces on the object. This energy will generally be non-zero if there is another electrically charged object nearby. The work W required to move q from A to any point B in the electrostatic force field is given by Δ U A B ( r ) = − ∫ A B F e ⋅ d r {\displaystyle \Delta U_{AB}({\mathbf {r} })=-\int _{A}^{B}\mathbf {F_{e}} \cdot d\mathbf {r} } typically given in J for Joules. A related quantity called electric potential (commonly denoted with a V for voltage) is equal to the electric potential energy per unit charge. === Magnetic potential energy === The energy of a magnetic moment μ {\displaystyle {\boldsymbol {\mu }}} in an externally produced magnetic B-field B has potential energy U = − μ ⋅ B . {\displaystyle U=-{\boldsymbol {\mu }}\cdot \mathbf {B} .} The magnetization M in a field is U = − 1 2 ∫ M ⋅ B d V , {\displaystyle U=-{\frac {1}{2}}\int \mathbf {M} \cdot \mathbf {B} \,dV,} where the integral can be over all space or, equivalently, where M is nonzero. Magnetic potential energy is the form of energy related not only to the distance between magnetic materials, but also to the orientation, or alignment, of those materials within the field. 
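A minimal sketch of the orientation dependence of U = −μ · B, using made-up values for the moment and the field, is shown below:

```python
import numpy as np

B = np.array([0.0, 0.0, 50e-6])   # illustrative field of 50 microtesla along z
mu_mag = 2.0                      # magnitude of the magnetic moment, A*m^2 (illustrative)

for angle_deg in (0, 45, 90, 135, 180):
    theta = np.radians(angle_deg)
    mu = mu_mag * np.array([np.sin(theta), 0.0, np.cos(theta)])   # moment tilted away from B
    U = -mu @ B                   # U = -mu . B
    print(f"angle = {angle_deg:>3} deg   U = {U:+.3e} J")
# U is lowest (most negative) when the moment is aligned with the field
# and highest when it is anti-aligned.
```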
For example, the needle of a compass has the lowest magnetic potential energy when it is aligned with the north and south poles of the Earth's magnetic field. If the needle is moved by an outside force, torque is exerted on the magnetic dipole of the needle by the Earth's magnetic field, causing it to move back into alignment. The magnetic potential energy of the needle is highest when its field is in the same direction as the Earth's magnetic field. Two magnets will have potential energy in relation to each other and the distance between them, but this also depends on their orientation. If the opposite poles are held apart, the potential energy will be higher the further they are apart and lower the closer they are. Conversely, like poles will have the highest potential energy when forced together, and the lowest when they spring apart. == Nuclear potential energy == Nuclear potential energy is the potential energy of the particles inside an atomic nucleus. The nuclear particles are bound together by the strong nuclear force. Their rest mass provides the potential energy for certain kinds of radioactive decay, such as beta decay. Nuclear particles like protons and neutrons are not destroyed in fission and fusion processes, but collections of them can have less mass than if they were individually free, in which case this mass difference can be liberated as heat and radiation in nuclear reactions. The process of hydrogen fusion occurring in the Sun is an example of this form of energy release – 600 million tonnes of hydrogen nuclei are fused into helium nuclei, with a loss of about 4 million tonnes of mass per second. This energy, now in the form of kinetic energy and gamma rays, keeps the solar core hot even as electromagnetic radiation carries electromagnetic energy into space. == Forces and potential energy == Potential energy is closely linked with forces. If the work done by a force on a body that moves from A to B does not depend on the path between these points, then the work of this force measured from A assigns a scalar value to every other point in space and defines a scalar potential field. In this case, the force can be defined as the negative of the vector gradient of the potential field. For example, gravity is a conservative force. The associated potential is the gravitational potential, often denoted by ϕ {\displaystyle \phi } or V {\displaystyle V} , corresponding to the energy per unit mass as a function of position. The gravitational potential energy of two particles of mass M and m separated by a distance r is U = − G M m r . {\displaystyle U=-{\frac {GMm}{r}}.} The gravitational potential (specific energy) of the two bodies is ϕ = − ( G M r + G m r ) = − G ( M + m ) r = − G M m μ r = U μ {\displaystyle \phi =-\left({\frac {GM}{r}}+{\frac {Gm}{r}}\right)=-{\frac {G(M+m)}{r}}=-{\frac {GMm}{\mu r}}={\frac {U}{\mu }}} where μ {\displaystyle \mu } is the reduced mass. The work done against gravity by moving an infinitesimal mass from point A with U = a {\displaystyle U=a} to point B with U = b {\displaystyle U=b} is ( b − a ) {\displaystyle (b-a)} and the work done going back the other way is ( a − b ) {\displaystyle (a-b)} so that the total work done in moving from A to B and returning to A is U A → B → A = ( b − a ) + ( a − b ) = 0. {\displaystyle U_{A\to B\to A}=(b-a)+(a-b)=0.} If the potential is redefined at A to be a + c {\displaystyle a+c} and the potential at B to be b + c {\displaystyle b+c} , where c {\displaystyle c} is a constant (i.e. 
c {\displaystyle c} can be any number, positive or negative, but it must be the same at A as it is at B) then the work done going from A to B is U A → B = ( b + c ) − ( a + c ) = b − a {\displaystyle U_{A\to B}=(b+c)-(a+c)=b-a} as before. In practical terms, this means that one can set the zero of U {\displaystyle U} and ϕ {\displaystyle \phi } anywhere one likes. One may set it to be zero at the surface of the Earth, or may find it more convenient to set zero at infinity (as in the expressions given earlier in this section). A conservative force can be expressed in the language of differential geometry as a closed form. As Euclidean space is contractible, its de Rham cohomology vanishes, so every closed form is also an exact form, and can be expressed as the gradient of a scalar field. This gives a mathematical justification of the fact that all conservative forces are gradients of a potential field. == Notes == == References == Serway, Raymond A.; Jewett, John W. (2010). Physics for Scientists and Engineers (8th ed.). Brooks/Cole cengage. ISBN 978-1-4390-4844-3. Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 0-7167-0809-4. == External links == What is potential energy?
Wikipedia/Potential_Energy
In physics, mechanics and other areas of science, shear rate is the rate at which a progressive shear strain is applied to some material, causing shearing to the material. Shear rate is a measure of how the velocity changes with distance. == Simple shear == The shear rate for a fluid flowing between two parallel plates, one moving at a constant speed and the other one stationary (Couette flow), is defined by γ ˙ = v h , {\displaystyle {\dot {\gamma }}={\frac {v}{h}},} where: γ ˙ {\displaystyle {\dot {\gamma }}} is the shear rate, measured in reciprocal seconds; v is the velocity of the moving plate, measured in meters per second; h is the distance between the two parallel plates, measured in meters. Or: γ ˙ i j = ∂ v i ∂ x j + ∂ v j ∂ x i . {\displaystyle {\dot {\gamma }}_{ij}={\frac {\partial v_{i}}{\partial x_{j}}}+{\frac {\partial v_{j}}{\partial x_{i}}}.} For the simple shear case, it is just a gradient of velocity in a flowing material. The SI unit of measurement for shear rate is s−1, expressed as "reciprocal seconds" or "inverse seconds". However, when modelling fluids in 3D, it is common to consider a scalar value for the shear rate by calculating the second invariant of the strain-rate tensor γ ˙ = 2 ε : ε {\displaystyle {\dot {\gamma }}={\sqrt {2\varepsilon :\varepsilon }}} . The shear rate at the inner wall of a Newtonian fluid flowing within a pipe is γ ˙ = 8 v d , {\displaystyle {\dot {\gamma }}={\frac {8v}{d}},} where: γ ˙ {\displaystyle {\dot {\gamma }}} is the shear rate, measured in reciprocal seconds; v is the linear fluid velocity; d is the inside diameter of the pipe. The linear fluid velocity v is related to the volumetric flow rate Q by v = Q A , {\displaystyle v={\frac {Q}{A}},} where A is the cross-sectional area of the pipe, which for an inside pipe radius of r is given by A = π r 2 , {\displaystyle A=\pi r^{2},} thus producing v = Q π r 2 . {\displaystyle v={\frac {Q}{\pi r^{2}}}.} Substituting the above into the earlier equation for the shear rate of a Newtonian fluid flowing within a pipe, and noting (in the denominator) that d = 2r: γ ˙ = 8 v d = 8 ( Q π r 2 ) 2 r , {\displaystyle {\dot {\gamma }}={\frac {8v}{d}}={\frac {8\left({\frac {Q}{\pi r^{2}}}\right)}{2r}},} which simplifies to the following equivalent form for wall shear rate in terms of volumetric flow rate Q and inner pipe radius r: γ ˙ = 4 Q π r 3 . {\displaystyle {\dot {\gamma }}={\frac {4Q}{\pi r^{3}}}.} For a Newtonian fluid wall, shear stress (τw) can be related to shear rate by τ w = γ ˙ x μ {\displaystyle \tau _{w}={\dot {\gamma }}_{x}\mu } where μ is the dynamic viscosity of the fluid. For non-Newtonian fluids, there are different constitutive laws depending on the fluid, which relates the stress tensor to the shear rate tensor. == References == == See also == Shear strain Strain rate Non-Newtonian fluid
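As a small worked example of the formulas above (all input values are illustrative), the following sketch evaluates the Couette shear rate v/h and the Newtonian pipe-wall shear rate 4Q/(πr³), together with the corresponding wall shear stress:

```python
import math

# Couette flow between parallel plates
v = 0.5          # speed of the moving plate, m/s (illustrative)
h = 1.0e-3       # plate separation, m
couette_rate = v / h                       # gamma_dot = v / h
print(f"Couette shear rate: {couette_rate:.1f} 1/s")

# Newtonian flow in a circular pipe
Q = 2.0e-6       # volumetric flow rate, m^3/s (illustrative)
r = 5.0e-3       # inner pipe radius, m
mu = 1.0e-3      # dynamic viscosity (roughly water at room temperature), Pa*s
wall_rate = 4.0 * Q / (math.pi * r**3)     # gamma_dot = 4Q / (pi r^3)
wall_stress = mu * wall_rate               # tau_w = mu * gamma_dot for a Newtonian fluid
print(f"Wall shear rate:    {wall_rate:.2f} 1/s")
print(f"Wall shear stress:  {wall_stress:.4f} Pa")
```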
Wikipedia/Shear_rate
Activation energy asymptotics (AEA), also known as large activation energy asymptotics, is an asymptotic analysis used in the combustion field utilizing the fact that the reaction rate is extremely sensitive to temperature changes due to the large activation energy of the chemical reaction. == History == The techniques were pioneered by the Russian scientists Yakov Borisovich Zel'dovich, David A. Frank-Kamenetskii and co-workers in the 30s, in their study on premixed flames and thermal explosions (Frank-Kamenetskii theory), but not popular to western scientists until the 70s. In the early 70s, due to the pioneering work of Williams B. Bush, Francis E. Fendell, Forman A. Williams, Amable Liñán and John F. Clarke, it became popular in western community and since then it was widely used to explain more complicated problems in combustion. == Method overview == In combustion processes, the reaction rate ω {\displaystyle \omega } is dependent on temperature T {\displaystyle T} in the following form (Arrhenius law), ω ( T ) ∝ e − E a / R T , {\displaystyle \omega (T)\propto \mathrm {e} ^{-E_{\rm {a}}/RT},} where E a {\displaystyle E_{\rm {a}}} is the activation energy, and R {\displaystyle R} is the universal gas constant. In general, the condition E a / R ≫ T b {\displaystyle E_{\rm {a}}/R\gg T_{b}} is satisfied, where T b {\displaystyle T_{\rm {b}}} is the burnt gas temperature. This condition forms the basis for activation energy asymptotics. Denoting T u {\displaystyle T_{\rm {u}}} for unburnt gas temperature, one can define the Zel'dovich number and heat release parameter as follows β = E a R T b T b − T u T b , q = T b − T u T u . {\displaystyle \beta ={\frac {E_{\rm {a}}}{RT_{\rm {b}}}}{\frac {T_{\rm {b}}-T_{\rm {u}}}{T_{\rm {b}}}},\quad q={\frac {T_{\rm {b}}-T_{\rm {u}}}{T_{\rm {u}}}}.} In addition, if we define a non-dimensional temperature θ = T − T u T b − T u , {\displaystyle \theta ={\frac {T-T_{\rm {u}}}{T_{\rm {b}}-T_{\rm {u}}}},} such that θ {\displaystyle \theta } approaching zero in the unburnt region and approaching unity in the burnt gas region (in other words, 0 ≤ θ ≤ 1 {\displaystyle 0\leq \theta \leq 1} ), then the ratio of reaction rate at any temperature to reaction rate at burnt gas temperature is given by ω ( T ) ω ( T b ) ∝ e − E a / R T e − E a / R T b = exp ⁡ [ − β ( 1 − θ ) 1 + q 1 + q θ ] . {\displaystyle {\frac {\omega (T)}{\omega (T_{\rm {b}})}}\propto {\frac {\mathrm {e} ^{-E_{\rm {a}}/RT}}{\mathrm {e} ^{-E_{\rm {a}}/RT_{\rm {b}}}}}=\exp \left[-\beta (1-\theta ){\frac {1+q}{1+q\theta }}\right].} Now in the limit of β → ∞ {\displaystyle \beta \rightarrow \infty } (large activation energy) with q ∼ O ( 1 ) {\displaystyle q\sim O(1)} , the reaction rate is exponentially small i.e., O ( e − β ) {\displaystyle O(e^{-\beta })} and negligible everywhere, but non-negligible when β ( 1 − θ ) ∼ O ( 1 ) {\displaystyle \beta (1-\theta )\sim O(1)} . In other words, the reaction rate is negligible everywhere, except in a small region very close to burnt gas temperature, where 1 − θ ∼ O ( 1 / β ) {\displaystyle 1-\theta \sim O(1/\beta )} . Thus, in solving the conservation equations, one identifies two different regimes, at leading order, Outer convective-diffusive zone Inner reactive-diffusive layer where in the convective-diffusive zone, reaction term will be neglected and in the thin reactive-diffusive layer, convective terms can be neglected and the solutions in these two regions are stitched together by matching slopes using method of matched asymptotic expansions. 
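A short numerical illustration of the ratio ω(T)/ω(T_b) derived above, using illustrative values β = 10 and q = 5, shows how sharply the reaction rate is concentrated near the burnt-gas temperature:

```python
import numpy as np

beta, q = 10.0, 5.0    # illustrative Zel'dovich number and heat release parameter

theta = np.array([0.0, 0.25, 0.5, 0.75, 0.9, 0.99, 1.0])
ratio = np.exp(-beta * (1.0 - theta) * (1.0 + q) / (1.0 + q * theta))
for th, r in zip(theta, ratio):
    print(f"theta = {th:4.2f}   omega(T)/omega(T_b) ~ {r:.3e}")
# The ratio drops rapidly away from theta = 1 and only becomes O(1) once
# 1 - theta is of order 1/beta, i.e. inside the thin reactive-diffusive layer.
```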
The above-mentioned two regimes hold only at leading order, since the next-order corrections may involve all three transport mechanisms. == See also == Zeldovich–Frank-Kamenetskii equation Burke–Schumann limit == References ==
Wikipedia/Activation_energy_asymptotics
Computer simulation is the running of a mathematical model on a computer, the model being designed to represent the behaviour of, or the outcome of, a real-world or physical system. The reliability of some mathematical models can be determined by comparing their results to the real-world outcomes they aim to predict. Computer simulations have become a useful tool for the mathematical modeling of many natural systems in physics (computational physics), astrophysics, climatology, chemistry, biology and manufacturing, as well as human systems in economics, psychology, social science, health care and engineering. Simulation of a system is represented as the running of the system's model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions. Computer simulations are realized by running computer programs that can be either small, running almost instantly on small devices, or large-scale programs that run for hours or days on network-based groups of computers. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling. In 1997, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program. Other examples include a 1-billion-atom model of material deformation; a 2.64-million-atom model of the complex protein-producing organelle of all living organisms, the ribosome, in 2005; a complete simulation of the life cycle of Mycoplasma genitalium in 2012; and the Blue Brain project at EPFL (Switzerland), begun in May 2005 to create the first computer simulation of the entire human brain, right down to the molecular level. Because of the computational cost of simulation, computer experiments are used to perform inference such as uncertainty quantification. == Simulation versus model == A model consists of the equations used to capture the behavior of a system. By contrast, computer simulation is the actual running of the program that perform algorithms which solve those equations, often in an approximate manner. Simulation, therefore, is the process of running a model. Thus one would not "build a simulation"; instead, one would "build a model (or a simulator)", and then either "run the model" or equivalently "run a simulation". == History == Computer simulation developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed form analytic solutions are not possible. There are many types of computer simulations; their common feature is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states of the model would be prohibitive or impossible. == Data preparation == The external data requirements of simulations and models vary widely. For some, the input might be just a few numbers (for example, simulation of a waveform of AC electricity on a wire), while others might require terabytes of information (such as weather and climate models). 
Input sources also vary widely: Sensors and other physical devices connected to the model; Control surfaces used to direct the progress of the simulation in some way; Current or historical data entered by hand; Values extracted as a by-product from other processes; Values output for the purpose by other simulations, models, or processes. Lastly, the time at which data is available varies: "invariant" data is often built into the model code, either because the value is truly invariant (e.g., the value of π) or because the designers consider the value to be invariant for all cases of interest; data can be entered into the simulation when it starts up, for example by reading one or more files, or by reading data from a preprocessor; data can be provided during the simulation run, for example by a sensor network. Because of this variety, and because diverse simulation systems have many common elements, there are a large number of specialized simulation languages. The best-known may be Simula. There are now many others. Systems that accept data from external sources must be very careful in knowing what they are receiving. While it is easy for computers to read in values from text or binary files, what is much harder is knowing what the accuracy (compared to measurement resolution and precision) of the values are. Often they are expressed as "error bars", a minimum and maximum deviation from the value range within which the true value (is expected to) lie. Because digital computer mathematics is not perfect, rounding and truncation errors multiply this error, so it is useful to perform an "error analysis" to confirm that values output by the simulation will still be usefully accurate. == Types == Models used for computer simulations can be classified according to several independent pairs of attributes, including: Stochastic or deterministic (and as a special case of deterministic, chaotic) – see external links below for examples of stochastic vs. deterministic simulations Steady-state or dynamic Continuous or discrete (and as an important special case of discrete, discrete event or DE models) Dynamic system simulation, e.g. electric systems, hydraulic systems or multi-body mechanical systems (described primarily by DAE:s) or dynamics simulation of field problems, e.g. CFD of FEM simulations (described by PDE:s). Local or distributed. Another way of categorizing models is to look at the underlying data structures. For time-stepped simulations, there are two main classes: Simulations which store their data in regular grids and require only next-neighbor access are called stencil codes. Many CFD applications belong to this category. If the underlying graph is not a regular grid, the model may belong to the meshfree method class. For steady-state simulations, equations define the relationships between elements of the modeled system and attempt to find a state in which the system is in equilibrium. Such models are often used in simulating physical systems, as a simpler modeling case before dynamic simulation is attempted. Dynamic simulations attempt to capture changes in a system in response to (usually changing) input signals. Stochastic models use random number generators to model chance or random events; A discrete event simulation (DES) manages events in time. Most computer, logic-test and fault-tree simulations are of this type. In this type of simulation, the simulator maintains a queue of events sorted by the simulated time they should occur. 
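A minimal sketch of such an event queue, written here as a toy Python example rather than code from any particular simulation package:

```python
import heapq

# A tiny discrete event simulation: a priority queue of (time, event) pairs,
# processed in simulated-time order. Handlers may schedule further events.
events = []                                   # the event queue

def schedule(time, name):
    heapq.heappush(events, (time, name))

def handle(time, name):
    print(f"t = {time:5.1f}  {name}")
    if name == "customer_arrives":
        schedule(time + 3.0, "service_done")  # each arrival schedules its own departure

# seed the simulation with a few arrivals
for t in (0.0, 1.5, 4.0):
    schedule(t, "customer_arrives")

while events:                                 # main loop: pop the earliest event, process it
    time, name = heapq.heappop(events)
    handle(time, name)
```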
The simulator reads the queue and triggers new events as each event is processed. It is not important to execute the simulation in real time. It is often more important to be able to access the data produced by the simulation and to discover logic defects in the design or the sequence of events. A continuous dynamic simulation performs numerical solution of differential-algebraic equations or differential equations (either partial or ordinary). Periodically, the simulation program solves all the equations and uses the numbers to change the state and output of the simulation. Applications include flight simulators, construction and management simulation games, chemical process modeling, and simulations of electrical circuits. Originally, these kinds of simulations were actually implemented on analog computers, where the differential equations could be represented directly by various electrical components such as op-amps. By the late 1980s, however, most "analog" simulations were run on conventional digital computers that emulate the behavior of an analog computer. A special type of discrete simulation that does not rely on a model with an underlying equation, but can nonetheless be represented formally, is agent-based simulation. In agent-based simulation, the individual entities (such as molecules, cells, trees or consumers) in the model are represented directly (rather than by their density or concentration) and possess an internal state and set of behaviors or rules that determine how the agent's state is updated from one time-step to the next. Distributed models run on a network of interconnected computers, possibly through the Internet. Simulations dispersed across multiple host computers like this are often referred to as "distributed simulations". There are several standards for distributed simulation, including Aggregate Level Simulation Protocol (ALSP), Distributed Interactive Simulation (DIS), the High Level Architecture (simulation) (HLA) and the Test and Training Enabling Architecture (TENA). == Visualization == Formerly, the output data from a computer simulation was sometimes presented in a table or a matrix showing how data were affected by numerous changes in the simulation parameters. The use of the matrix format was related to traditional use of the matrix concept in mathematical models. However, psychologists and others noted that humans could quickly perceive trends by looking at graphs or even moving-images or motion-pictures generated from the data, as displayed by computer-generated-imagery (CGI) animation. Although observers could not necessarily read out numbers or quote math formulas, from observing a moving weather chart they might be able to predict events (and "see that rain was headed their way") much faster than by scanning tables of rain-cloud coordinates. Such intense graphical displays, which transcended the world of numbers and formulae, sometimes also led to output that lacked a coordinate grid or omitted timestamps, as if straying too far from numeric data displays. Today, weather forecasting models tend to balance the view of moving rain/snow clouds against a map that uses numeric coordinates and numeric timestamps of events. Similarly, CGI computer simulations of CAT scans can simulate how a tumor might shrink or change during an extended period of medical treatment, presenting the passage of time as a spinning view of the visible human head, as the tumor changes. 
Other applications of CGI computer simulations are being developed to graphically display large amounts of data, in motion, as changes occur during a simulation run. == In science == Generic examples of types of computer simulations in science, which are derived from an underlying mathematical description: a numerical simulation of differential equations that cannot be solved analytically, theories that involve continuous systems such as phenomena in physical cosmology, fluid dynamics (e.g., climate models, roadway noise models, roadway air dispersion models), continuum mechanics and chemical kinetics fall into this category. a stochastic simulation, typically used for discrete systems where events occur probabilistically and which cannot be described directly with differential equations (this is a discrete simulation in the above sense). Phenomena in this category include genetic drift, biochemical or gene regulatory networks with small numbers of molecules. (see also: Monte Carlo method). multiparticle simulation of the response of nanomaterials at multiple scales to an applied force for the purpose of modeling their thermoelastic and thermodynamic properties. Techniques used for such simulations are Molecular dynamics, Molecular mechanics, Monte Carlo method, and Multiscale Green's function. Specific examples of computer simulations include: statistical simulations based upon an agglomeration of a large number of input profiles, such as the forecasting of equilibrium temperature of receiving waters, allowing the gamut of meteorological data to be input for a specific locale. This technique was developed for thermal pollution forecasting. agent based simulation has been used effectively in ecology, where it is often called "individual based modeling" and is used in situations for which individual variability in the agents cannot be neglected, such as population dynamics of salmon and trout (most purely mathematical models assume all trout behave identically). time stepped dynamic model. In hydrology there are several such hydrology transport models such as the SWMM and DSSAM Models developed by the U.S. Environmental Protection Agency for river water quality forecasting. computer simulations have also been used to formally model theories of human cognition and performance, e.g., ACT-R. computer simulation using molecular modeling for drug discovery. computer simulation to model viral infection in mammalian cells. computer simulation for studying the selective sensitivity of bonds by mechanochemistry during grinding of organic molecules. Computational fluid dynamics simulations are used to simulate the behaviour of flowing air, water and other fluids. One-, two- and three-dimensional models are used. A one-dimensional model might simulate the effects of water hammer in a pipe. A two-dimensional model might be used to simulate the drag forces on the cross-section of an aeroplane wing. A three-dimensional simulation might estimate the heating and cooling requirements of a large building. An understanding of statistical thermodynamic molecular theory is fundamental to the appreciation of molecular solutions. Development of the Potential Distribution Theorem (PDT) allows this complex subject to be simplified to down-to-earth presentations of molecular theory. Notable, and sometimes controversial, computer simulations used in science include: Donella Meadows' World3 used in the Limits to Growth, James Lovelock's Daisyworld and Thomas Ray's Tierra. 
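As a toy illustration of the stochastic, event-by-event simulations listed above (systems with small numbers of molecules, for example), the following Gillespie-style sketch simulates a single decay reaction; the rate constant and initial molecule count are arbitrary:

```python
import random

# Stochastic simulation of a single decay reaction A -> 0 (illustrative only).
random.seed(42)          # fixed seed so the run is reproducible

k = 0.1                  # decay rate constant per molecule, 1/s
n = 100                  # initial number of molecules
t = 0.0
history = [(t, n)]

while n > 0:
    rate = k * n                              # total propensity of the decay reaction
    t += random.expovariate(rate)             # exponentially distributed time to the next event
    n -= 1
    history.append((t, n))

print(f"all molecules decayed after {t:.1f} s of simulated time")
print(history[:5])       # first few (time, count) pairs
```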
In social sciences, computer simulation is an integral component of the five angles of analysis fostered by the data percolation methodology, which also includes qualitative and quantitative methods, reviews of the literature (including scholarly), and interviews with experts, and which forms an extension of data triangulation. Of course, similar to any other scientific method, replication is an important part of computational modeling == In practical contexts == Computer simulations are used in a wide variety of practical contexts, such as: analysis of air pollutant dispersion using atmospheric dispersion modeling As a possible humane alternative to live animal testing in respect to animal rights. design of complex systems such as aircraft and also logistics systems. design of noise barriers to effect roadway noise mitigation modeling of application performance flight simulators to train pilots weather forecasting forecasting of risk simulation of electrical circuits Power system simulation simulation of other computers is emulation. forecasting of prices on financial markets (for example Adaptive Modeler) behavior of structures (such as buildings and industrial parts) under stress and other conditions design of industrial processes, such as chemical processing plants strategic management and organizational studies reservoir simulation for the petroleum engineering to model the subsurface reservoir process engineering simulation tools. robot simulators for the design of robots and robot control algorithms urban simulation models that simulate dynamic patterns of urban development and responses to urban land use and transportation policies. traffic engineering to plan or redesign parts of the street network from single junctions over cities to a national highway network to transportation system planning, design and operations. See a more detailed article on Simulation in Transportation. modeling car crashes to test safety mechanisms in new vehicle models. crop-soil systems in agriculture, via dedicated software frameworks (e.g. BioMA, OMS3, APSIM) The reliability and the trust people put in computer simulations depends on the validity of the simulation model, therefore verification and validation are of crucial importance in the development of computer simulations. Another important aspect of computer simulations is that of reproducibility of the results, meaning that a simulation model should not provide a different answer for each execution. Although this might seem obvious, this is a special point of attention in stochastic simulations, where random numbers should actually be semi-random numbers. An exception to reproducibility are human-in-the-loop simulations such as flight simulations and computer games. Here a human is part of the simulation and thus influences the outcome in a way that is hard, if not impossible, to reproduce exactly. Vehicle manufacturers make use of computer simulation to test safety features in new designs. By building a copy of the car in a physics simulation environment, they can save the hundreds of thousands of dollars that would otherwise be required to build and test a unique prototype. Engineers can step through the simulation milliseconds at a time to determine the exact stresses being put upon each section of the prototype. Computer graphics can be used to display the results of a computer simulation. Animations can be used to experience a simulation in real-time, e.g., in training simulations. 
In some cases animations may also be useful in faster than real-time or even slower than real-time modes. For example, faster than real-time animations can be useful in visualizing the buildup of queues in the simulation of humans evacuating a building. Furthermore, simulation results are often aggregated into static images using various ways of scientific visualization. In debugging, simulating a program execution under test (rather than executing natively) can detect far more errors than the hardware itself can detect and, at the same time, log useful debugging information such as instruction trace, memory alterations and instruction counts. This technique can also detect buffer overflow and similar "hard to detect" errors as well as produce performance information and tuning data. == Pitfalls == Although sometimes ignored in computer simulations, it is very important to perform a sensitivity analysis to ensure that the accuracy of the results is properly understood. For example, the probabilistic risk analysis of factors determining the success of an oilfield exploration program involves combining samples from a variety of statistical distributions using the Monte Carlo method. If, for instance, one of the key parameters (e.g., the net ratio of oil-bearing strata) is known to only one significant figure, then the result of the simulation might not be more precise than one significant figure, although it might (misleadingly) be presented as having four significant figures. == See also == == References == == Further reading == Young, Joseph and Findley, Michael. 2014. "Computational Modeling to Study Conflicts and Terrorism." Routledge Handbook of Research Methods in Military Studies edited by Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan. pp. 249–260. New York: Routledge, R. Frigg and S. Hartmann, Models in Science. Entry in the Stanford Encyclopedia of Philosophy. E. Winsberg Simulation in Science. Entry in the Stanford Encyclopedia of Philosophy. S. Hartmann, The World as a Process: Simulations in the Natural and Social Sciences, in: R. Hegselmann et al. (eds.), Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View, Theory and Decision Library. Dordrecht: Kluwer 1996, 77–100. E. Winsberg, Science in the Age of Computer Simulation. Chicago: University of Chicago Press, 2010. P. Humphreys, Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford: Oxford University Press, 2004. James J. Nutaro (2011). Building Software for Simulation: Theory and Algorithms, with Applications in C++. John Wiley & Sons. ISBN 978-1-118-09945-2. Desa, W. L. H. M., Kamaruddin, S., & Nawawi, M. K. M. (2012). Modeling of Aircraft Composite Parts Using Simulation. Advanced Material Research, 591–593, 557–560. == External links == Guide to the Computer Simulation Oral History Archive 2003-2018
Wikipedia/Numerical_model
The Computational Infrastructure for Geodynamics (CIG) is a community-driven organization that advances Earth science by developing and disseminating software for geophysics and related fields. It is a National Science Foundation-sponsored collaborative effort to improve geodynamic modelling and develop, support, and disseminate open-source software for the geodynamics research and higher education communities. CIG is located at the University of California, Davis, and is a member-governed consortium with 62 US institutional members and 15 international affiliates. == History == CIG was established in 2005 in response to the need for coordinated development and dissemination of software for geodynamics applications. Founded with an NSF cooperative agreement to Caltech, in 2010, CIG moved to UC Davis under a new cooperative agreement from NSF. == Software == CIG hosts open source software in a wide range of disciplines and topic areas, such as geodynamics, computational science, seismology, mantle convection, long-term tectonics, and short-term crustal dynamics. == Software Attribution for Geoscience Applications (SAGA) == CIG started the SAGA project with an NSF EAGER award from the SBE Office of Multidisciplinary Activities for "Development of Software Citation Methodology for Open Source Computational Science". == References == Morozov, Igor; Reilkoff, Brian; Chubak, Glenn (2006). "A generalized web service model for geophysical data processing and modeling". Computers & Geosciences. 32 (9): 1403. Bibcode:2006CG.....32.1403M. doi:10.1016/j.cageo.2005.12.010. Gurnis, M (2005). "Sculpting Earth from Inside Out". Scientific American. 284 (3): 40–7. Bibcode:2001SciAm.284c..40G. doi:10.1038/scientificamerican0301-40. PMID 11234505. Zhang, H; Liu, M; Shi, Y; Yuen, D; Yan, Z; Liang, G (2007). "Toward an automated parallel computing environment for geosciences". Physics of the Earth and Planetary Interiors. 163 (1–4): 2–22. Bibcode:2007PEPI..163....2Z. CiteSeerX 10.1.1.531.9474. doi:10.1016/j.pepi.2007.05.008. == External links == Official website
Wikipedia/Computational_Infrastructure_for_Geodynamics
In continuum mechanics, a branch of mathematics, the Burnett equations are a set of higher-order continuum equations for non-equilibrium flows and the transition regimes where the Navier–Stokes equations do not perform well. They were derived by the English mathematician D. Burnett. == Series expansion == === Series expansion approach === The series expansion technique used to derive the Burnett equations involves expanding the distribution function f {\displaystyle f} in the Boltzmann equation as a power series in the Knudsen number K n {\displaystyle Kn} : f ( r , c , t ) = f ( 0 ) ( c | n , u , T ) [ 1 + K n ϕ ( 1 ) ( c | n , u , T ) + K n 2 ϕ ( 2 ) ( c | n , u , T ) + ⋯ ] {\displaystyle f(r,c,t)=f^{(0)}(c|n,u,T)\left[1+K_{n}\phi ^{(1)}(c|n,u,T)+K_{n}^{2}\phi ^{(2)}(c|n,u,T)+\cdots \right]} Here, f ( 0 ) ( c | n , u , T ) {\displaystyle f^{(0)}(c|n,u,T)} represents the Maxwell-Boltzmann equilibrium distribution function, dependent on the number density n {\displaystyle n} , macroscopic velocity u {\displaystyle u} , and temperature T {\displaystyle T} . The terms ϕ ( 1 ) , ϕ ( 2 ) , {\displaystyle \phi ^{(1)},\phi ^{(2)},} etc., are higher-order corrections that account for non-equilibrium effects, with each subsequent term incorporating higher powers of the Knudsen number K n {\displaystyle Kn} . === Derivation === The first-order term f ( 1 ) {\displaystyle f^{(1)}} in the expansion gives the Navier-Stokes equations, which include terms for viscosity and thermal conductivity. To obtain the Burnett equations, one must retain terms up to second order, corresponding to ϕ ( 2 ) {\displaystyle \phi ^{(2)}} . The Burnett equations include additional second-order derivatives of velocity, temperature, and density, representing more subtle effects of non-equilibrium gas dynamics. The Burnett equations can be expressed as: u t + ( u ⋅ ∇ ) u + ∇ p = ∇ ⋅ ( ν ∇ u ) + higher-order terms {\displaystyle \mathbf {u} _{t}+(\mathbf {u} \cdot \nabla )\mathbf {u} +\nabla p=\nabla \cdot (\nu \nabla \mathbf {u} )+{\text{higher-order terms}}} Here, the "higher-order terms" involve second-order gradients of velocity and temperature, which are absent in the Navier-Stokes equations. These terms become significant in situations with high Knudsen numbers, where the assumptions of the Navier-Stokes framework break down. == Extensions == The Onsager-Burnett Equations, commonly referred to as OBurnett, which form a superset of the Navier-Stokes equations and are second-order accurate for Knudsen number. == Derivation == Starting with the Boltzmann equation ∂ f ∂ t + c k ∂ f x k + F k ∂ f c k = J ( f , f 1 ) {\displaystyle {\frac {\partial {f}}{\partial {t}}}+c_{k}\partial {f}{x_{k}}+F_{k}\partial {f}{c_{k}}=J(f,f_{1})} == See also == Fluid dynamics Lars Onsager Non-dimensionalization and scaling of the Navier–Stokes equations Stokes equations Chapman–Enskog theory Navier-Stokes equations == References == == Further reading == García-Colín, L.S.; Velasco, R.M.; Uribe, F.J. (August 2008). "Beyond the Navier–Stokes equations: Burnett hydrodynamics". Physics Reports. 465 (4): 149–189. Bibcode:2008PhR...465..149G. doi:10.1016/j.physrep.2008.04.010.
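Since the Burnett expansion is organized in powers of the Knudsen number, a rough orientation sketch can indicate when such higher-order equations become relevant; the regime boundaries quoted in the comments are commonly used approximations, not values from this article:

```python
def knudsen_number(mean_free_path, length_scale):
    """Kn = lambda / L, the expansion parameter used in the series above."""
    return mean_free_path / length_scale

def regime(kn):
    # Commonly quoted, approximate boundaries between modelling regimes.
    if kn < 0.01:
        return "continuum (Navier-Stokes adequate)"
    if kn < 0.1:
        return "slip flow (Navier-Stokes with slip boundary conditions)"
    if kn < 10.0:
        return "transition regime (Burnett-type or kinetic models)"
    return "free molecular flow"

# Air at ambient conditions has a mean free path of very roughly 7e-8 m.
for L in (1.0, 1e-4, 1e-6, 1e-8):        # characteristic lengths in metres
    kn = knudsen_number(7e-8, L)
    print(f"L = {L:8.0e} m   Kn = {kn:8.2e}   {regime(kn)}")
```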
Wikipedia/Burnett_equations
The Navier–Stokes equations (nav-YAY STOHKS) are partial differential equations which describe the motion of viscous fluid substances. They were named after French engineer and physicist Claude-Louis Navier and the Irish physicist and mathematician George Gabriel Stokes. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes). The Navier–Stokes equations mathematically express momentum balance for Newtonian fluids and make use of conservation of mass. They are sometimes accompanied by an equation of state relating pressure, temperature and density. They arise from applying Isaac Newton's second law to fluid motion, together with the assumption that the stress in the fluid is the sum of a diffusing viscous term (proportional to the gradient of velocity) and a pressure term—hence describing viscous flow. The difference between them and the closely related Euler equations is that Navier–Stokes equations take viscosity into account while the Euler equations model only inviscid flow. As a result, the Navier–Stokes equations are a parabolic equation and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. they are never completely integrable). The Navier–Stokes equations are useful because they describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe and air flow around a wing. The Navier–Stokes equations, in their full and simplified forms, help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of pollution, and many other problems. Coupled with Maxwell's equations, they can be used to model and study magnetohydrodynamics. The Navier–Stokes equations are also of great interest in a purely mathematical sense. Despite their wide range of practical uses, it has not yet been proven whether smooth solutions always exist in three dimensions—i.e., whether they are infinitely differentiable (or even just bounded) at all points in the domain. This is called the Navier–Stokes existence and smoothness problem. The Clay Mathematics Institute has called this one of the seven most important open problems in mathematics and has offered a US$1 million prize for a solution or a counterexample. == Flow velocity == The solution of the equations is a flow velocity. It is a vector field—to every point in a fluid, at any moment in a time interval, it gives a vector whose direction and magnitude are those of the velocity of the fluid at that point in space and at that moment in time. It is usually studied in three spatial dimensions and one time dimension, although two (spatial) dimensional and steady-state cases are often used as models, and higher-dimensional analogues are studied in both pure and applied mathematics. Once the velocity field is calculated, other quantities of interest such as pressure or temperature may be found using dynamical equations and relations. This is different from what one normally sees in classical mechanics, where solutions are typically trajectories of position of a particle or deflection of a continuum. Studying velocity instead of position makes more sense for a fluid, although for visualization purposes one can compute various trajectories. In particular, the streamlines of a vector field, interpreted as flow velocity, are the paths along which a massless fluid particle would travel.
These paths are the integral curves whose derivative at each point is equal to the vector field, and they can represent visually the behavior of the vector field at a point in time. == General continuum equations == The Navier–Stokes momentum equation can be derived as a particular form of the Cauchy momentum equation, whose general convective form is: D u D t = 1 ρ ∇ ⋅ σ + f . {\displaystyle {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}={\frac {1}{\rho }}\nabla \cdot {\boldsymbol {\sigma }}+\mathbf {f} .} By setting the Cauchy stress tensor σ {\textstyle {\boldsymbol {\sigma }}} to be the sum of a viscosity term τ {\textstyle {\boldsymbol {\tau }}} (the deviatoric stress) and a pressure term − p I {\textstyle -p\mathbf {I} } (volumetric stress), we arrive at: where D D t {\textstyle {\frac {\mathrm {D} }{\mathrm {D} t}}} is the material derivative, defined as ∂ ∂ t + u ⋅ ∇ {\textstyle {\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla } , ρ {\textstyle \rho } is the (mass) density, u {\textstyle \mathbf {u} } is the flow velocity, ∇ ⋅ {\textstyle \nabla \cdot \,} is the divergence, p {\textstyle p} is the pressure, t {\textstyle t} is time, τ {\textstyle {\boldsymbol {\tau }}} is the deviatoric stress tensor, which has order 2, a {\textstyle \mathbf {a} } represents body accelerations acting on the continuum, for example gravity, inertial accelerations, electrostatic accelerations, and so on. In this form, it is apparent that in the assumption of an inviscid fluid – no deviatoric stress – Cauchy equations reduce to the Euler equations. Assuming conservation of mass, with the known properties of divergence and gradient we can use the mass continuity equation, which represents the mass per unit volume of a homogenous fluid with respect to space and time (i.e., material derivative D D t {\displaystyle {\frac {\mathbf {D} }{\mathbf {Dt} }}} ) of any finite volume (V) to represent the change of velocity in fluid media: D m D t = ∭ V ( D ρ D t + ρ ( ∇ ⋅ u ) ) d V D ρ D t + ρ ( ∇ ⋅ u ) = ∂ ρ ∂ t + ( ∇ ρ ) ⋅ u + ρ ( ∇ ⋅ u ) = ∂ ρ ∂ t + ∇ ⋅ ( ρ u ) = 0 {\displaystyle {\begin{aligned}&{\frac {\mathbf {D} m}{\mathbf {Dt} }}=\iiint \limits _{V}\left({\frac {\mathbf {D} \rho }{\mathbf {Dt} +\rho (\nabla \cdot \mathbf {u} )}}\right)\,dV\\[5pt]&{\frac {\mathbf {D} \rho }{\mathbf {Dt} }}+\rho (\nabla \cdot \mathbf {u} )={\frac {\partial \rho }{\partial t}}+(\nabla \rho )\cdot \mathbf {u} +\rho (\nabla \cdot \mathbf {u} )={\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0\end{aligned}}} where D m D t {\textstyle {\frac {\mathrm {D} m}{\mathrm {D} t}}} is the material derivative of mass per unit volume (density, ρ {\displaystyle \rho } ), ∭ V ( F ( x 1 , x 2 , x 3 , t ) ) d V {\textstyle \iiint \limits _{V}(F(x_{1},x_{2},x_{3},t))\,dV} is the mathematical operation for the integration throughout the volume (V), ∂ ∂ t {\textstyle {\frac {\partial }{\partial t}}} is the partial derivative mathematical operator, ∇ ⋅ u {\textstyle \nabla \cdot \mathbf {u} \,} is the divergence of the flow velocity ( u {\displaystyle \mathbf {u} } ), which is a scalar field, Note 1 ∇ ρ {\textstyle \nabla \rho \,} is the gradient of density ( ρ {\displaystyle \rho } ), which is the vector derivative of a scalar field, Note 1 Note 1 – Refer to the mathematical operator del represented by the nabla ( ∇ {\displaystyle \nabla } ) symbol. to arrive at the conservation form of the equations of motion. 
This is often written: where ⊗ {\textstyle \otimes } is the outer product of the flow velocity ( u {\displaystyle \mathbf {u} } ): u ⊗ u = u u T {\displaystyle \mathbf {u} \otimes \mathbf {u} =\mathbf {u} \mathbf {u} ^{\mathrm {T} }} The left side of the equation describes acceleration, and may be composed of time-dependent and convective components (also the effects of non-inertial coordinates if present). The right side of the equation is in effect a summation of hydrostatic effects, the divergence of deviatoric stress and body forces (such as gravity). All non-relativistic balance equations, such as the Navier–Stokes equations, can be derived by beginning with the Cauchy equations and specifying the stress tensor through a constitutive relation. By expressing the deviatoric (shear) stress tensor in terms of viscosity and the fluid velocity gradient, and assuming constant viscosity, the above Cauchy equations will lead to the Navier–Stokes equations below. === Convective acceleration === A significant feature of the Cauchy equation and consequently all other continuum equations (including Euler and Navier–Stokes) is the presence of convective acceleration: the effect of acceleration of a flow with respect to space. While individual fluid particles indeed experience time-dependent acceleration, the convective acceleration of the flow field is a spatial effect, one example being fluid speeding up in a nozzle. == Compressible flow == Remark: here, the deviatoric stress tensor is denoted τ {\textstyle {\boldsymbol {\tau }}} as it was in the general continuum equations and in the incompressible flow section. The compressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor: the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient ∇ u {\textstyle \nabla \mathbf {u} } , or more simply the rate-of-strain tensor: ε ( ∇ u ) ≡ 1 2 ∇ u + 1 2 ( ∇ u ) T {\textstyle {\boldsymbol {\varepsilon }}\left(\nabla \mathbf {u} \right)\equiv {\frac {1}{2}}\nabla \mathbf {u} +{\frac {1}{2}}\left(\nabla \mathbf {u} \right)^{T}} the deviatoric stress is linear in this variable: σ ( ε ) = − p I + C : ε {\textstyle {\boldsymbol {\sigma }}({\boldsymbol {\varepsilon }})=-p\mathbf {I} +\mathbf {C} :{\boldsymbol {\varepsilon }}} , where p {\textstyle p} is independent on the strain rate tensor, C {\textstyle \mathbf {C} } is the fourth-order tensor representing the constant of proportionality, called the viscosity or elasticity tensor, and : is the double-dot product. the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently C {\textstyle \mathbf {C} } is an isotropic tensor; furthermore, since the deviatoric stress tensor is symmetric, by Helmholtz decomposition it can be expressed in terms of two scalar Lamé parameters, the second viscosity λ {\textstyle \lambda } and the dynamic viscosity μ {\textstyle \mu } , as it is usual in linear elasticity: where I {\textstyle \mathbf {I} } is the identity tensor, and tr ⁡ ( ε ) {\textstyle \operatorname {tr} ({\boldsymbol {\varepsilon }})} is the trace of the rate-of-strain tensor. So this decomposition can be explicitly defined as: σ = − p I + λ ( ∇ ⋅ u ) I + μ ( ∇ u + ( ∇ u ) T ) . 
{\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\lambda (\nabla \cdot \mathbf {u} )\mathbf {I} +\mu \left(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }\right).} Since the trace of the rate-of-strain tensor in three dimensions is the divergence (i.e. rate of expansion) of the flow: tr ⁡ ( ε ) = ∇ ⋅ u . {\displaystyle \operatorname {tr} ({\boldsymbol {\varepsilon }})=\nabla \cdot \mathbf {u} .} Given this relation, and since the trace of the identity tensor in three dimensions is three: tr ⁡ ( I ) = 3. {\displaystyle \operatorname {tr} ({\boldsymbol {I}})=3.} the trace of the stress tensor in three dimensions becomes: tr ⁡ ( σ ) = − 3 p + ( 3 λ + 2 μ ) ∇ ⋅ u . {\displaystyle \operatorname {tr} ({\boldsymbol {\sigma }})=-3p+(3\lambda +2\mu )\nabla \cdot \mathbf {u} .} So by alternatively decomposing the stress tensor into isotropic and deviatoric parts, as usual in fluid dynamics: σ = − [ p − ( λ + 2 3 μ ) ( ∇ ⋅ u ) ] I + μ ( ∇ u + ( ∇ u ) T − 2 3 ( ∇ ⋅ u ) I ) {\displaystyle {\boldsymbol {\sigma }}=-\left[p-\left(\lambda +{\tfrac {2}{3}}\mu \right)\left(\nabla \cdot \mathbf {u} \right)\right]\mathbf {I} +\mu \left(\nabla \mathbf {u} +\left(\nabla \mathbf {u} \right)^{\mathrm {T} }-{\tfrac {2}{3}}\left(\nabla \cdot \mathbf {u} \right)\mathbf {I} \right)} Introducing the bulk viscosity ζ {\textstyle \zeta } , ζ ≡ λ + 2 3 μ , {\displaystyle \zeta \equiv \lambda +{\tfrac {2}{3}}\mu ,} we arrive to the linear constitutive equation in the form usually employed in thermal hydraulics: which can also be arranged in the other usual form: σ = − p I + μ ( ∇ u + ( ∇ u ) T ) + ( ζ − 2 3 μ ) ( ∇ ⋅ u ) I . {\displaystyle {\boldsymbol {\sigma }}=-p\mathbf {I} +\mu \left(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }\right)+\left(\zeta -{\frac {2}{3}}\mu \right)(\nabla \cdot \mathbf {u} )\mathbf {I} .} Note that in the compressible case the pressure is no more proportional to the isotropic stress term, since there is the additional bulk viscosity term: p = − 1 3 tr ⁡ ( σ ) + ζ ( ∇ ⋅ u ) {\displaystyle p=-{\frac {1}{3}}\operatorname {tr} ({\boldsymbol {\sigma }})+\zeta (\nabla \cdot \mathbf {u} )} and the deviatoric stress tensor σ ′ {\displaystyle {\boldsymbol {\sigma }}'} is still coincident with the shear stress tensor τ {\displaystyle {\boldsymbol {\tau }}} (i.e. the deviatoric stress in a Newtonian fluid has no normal stress components), and it has a compressibility term in addition to the incompressible case, which is proportional to the shear viscosity: σ ′ = τ = μ [ ∇ u + ( ∇ u ) T − 2 3 ( ∇ ⋅ u ) I ] {\displaystyle {\boldsymbol {\sigma }}'={\boldsymbol {\tau }}=\mu \left[\nabla \mathbf {u} +(\nabla \mathbf {u} )^{\mathrm {T} }-{\tfrac {2}{3}}(\nabla \cdot \mathbf {u} )\mathbf {I} \right]} Both bulk viscosity ζ {\textstyle \zeta } and dynamic viscosity μ {\textstyle \mu } need not be constant – in general, they depend on two thermodynamics variables if the fluid contains a single chemical species, say for example, pressure and temperature. Any equation that makes explicit one of these transport coefficient in the conservation variables is called an equation of state. 
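To make the constitutive relation above concrete, the following is a minimal numerical sketch (an illustration added here, not part of the derivation; the viscosities, pressure and velocity gradient are made-up values) that assembles the Newtonian stress tensor with NumPy and checks the trace identity derived above, namely that the mechanical pressure − tr(σ)/3 equals p − ζ(∇ ⋅ u):

import numpy as np

# Illustrative values only (not from the article): dynamic viscosity, bulk
# viscosity, thermodynamic pressure, and an arbitrary velocity-gradient matrix.
mu, zeta, p = 1.8e-5, 1.0e-5, 101_325.0
grad_u = np.array([[ 1.0, 0.3, -0.2],
                   [-0.4, 2.0,  0.1],
                   [ 0.5, 0.0, -0.7]])     # entries are d u_i / d x_j

I = np.eye(3)
div_u = np.trace(grad_u)                   # div u = tr(grad u)
# sigma = -p I + mu (grad u + grad u^T) + (zeta - 2/3 mu)(div u) I
sigma = -p*I + mu*(grad_u + grad_u.T) + (zeta - 2.0/3.0*mu)*div_u*I

mech_p = -np.trace(sigma)/3.0              # mechanical pressure
print(np.isclose(mech_p, p - zeta*div_u))  # True: matches the relation above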
Substituting this constitutive relation into the Cauchy momentum equation gives the most general form of the Navier–Stokes equations; in index notation, the same equation can also be written out component by component. The corresponding equation in conservation form can be obtained by considering that, given the mass continuity equation, the left side is equivalent to: ρ D u D t = ∂ ∂ t ( ρ u ) + ∇ ⋅ ( ρ u ⊗ u ) {\displaystyle \rho {\frac {\mathrm {D} \mathbf {u} }{\mathrm {D} t}}={\frac {\partial }{\partial t}}(\rho \mathbf {u} )+\nabla \cdot (\rho \mathbf {u} \otimes \mathbf {u} )} which finally gives the conservation form. Apart from its dependence on pressure and temperature, the second viscosity coefficient also depends on the process; that is to say, the second viscosity coefficient is not just a material property. For example, in the case of a sound wave with a definite frequency that alternately compresses and expands a fluid element, the second viscosity coefficient depends on the frequency of the wave. This dependence is called dispersion. In some cases, the second viscosity ζ {\textstyle \zeta } can be assumed to be constant, in which case the effect of the volume viscosity ζ {\textstyle \zeta } is that the mechanical pressure is not equivalent to the thermodynamic pressure, as demonstrated below: using the identity ∇ ⋅ ( ∇ ⋅ u ) I = ∇ ( ∇ ⋅ u ) , {\displaystyle \nabla \cdot (\nabla \cdot \mathbf {u} )\mathbf {I} =\nabla (\nabla \cdot \mathbf {u} ),} the mechanical pressure becomes p ¯ ≡ p − ζ ∇ ⋅ u . {\displaystyle {\bar {p}}\equiv p-\zeta \,\nabla \cdot \mathbf {u} .} However, this difference is usually neglected (that is, whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, where the second viscosity coefficient becomes important) by explicitly assuming ζ = 0 {\textstyle \zeta =0} . The assumption of setting ζ = 0 {\textstyle \zeta =0} is called the Stokes hypothesis. The validity of the Stokes hypothesis can be demonstrated for monatomic gases both experimentally and from the kinetic theory; for other gases and liquids, the Stokes hypothesis is generally incorrect. With the Stokes hypothesis, the Navier–Stokes equations simplify accordingly. If the dynamic μ and bulk ζ {\displaystyle \zeta } viscosities are assumed to be uniform in space, the equations in convective form can be simplified further. By computing the divergence of the stress tensor, since the divergence of the tensor ∇ u {\textstyle \nabla \mathbf {u} } is ∇ 2 u {\textstyle \nabla ^{2}\mathbf {u} } and the divergence of the tensor ( ∇ u ) T {\textstyle \left(\nabla \mathbf {u} \right)^{\mathrm {T} }} is ∇ ( ∇ ⋅ u ) {\textstyle \nabla \left(\nabla \cdot \mathbf {u} \right)} , one finally arrives at the compressible Navier–Stokes momentum equation, where D D t {\textstyle {\frac {\mathrm {D} }{\mathrm {D} t}}} is the material derivative, ν = μ ρ {\displaystyle \nu ={\frac {\mu }{\rho }}} is the shear kinematic viscosity and ξ = ζ ρ {\displaystyle \xi ={\frac {\zeta }{\rho }}} is the bulk kinematic viscosity. In the conservation form of the Navier–Stokes momentum equation, only the left-hand side changes. By bringing the operator acting on the flow velocity to the left side, an equivalent form is obtained. The convective acceleration term can also be written as u ⋅ ∇ u = ( ∇ × u ) × u + 1 2 ∇ u 2 , {\displaystyle \mathbf {u} \cdot \nabla \mathbf {u} =(\nabla \times \mathbf {u} )\times \mathbf {u} +{\tfrac {1}{2}}\nabla \mathbf {u} ^{2},} where the vector ( ∇ × u ) × u {\textstyle (\nabla \times \mathbf {u} )\times \mathbf {u} } is known as the Lamb vector.
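The Lamb-vector identity just quoted holds for any smooth velocity field and can be checked symbolically. The short sketch below is an added illustration (the test field is an arbitrary choice, not taken from this article) verifying u ⋅ ∇u = (∇ × u) × u + ½ ∇|u|² with SymPy:

import sympy as sp

# Verify the Lamb-vector identity on an arbitrary smooth test field.
x, y, z = sp.symbols('x y z')
u = sp.Matrix([y*z, sp.sin(x)*z, x*y**2])          # made-up velocity field

def grad(f):                                       # gradient of a scalar
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def curl(F):                                       # curl of a 3-component field
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

conv = sp.Matrix([sum(u[j]*sp.diff(u[i], v) for j, v in enumerate((x, y, z)))
                  for i in range(3)])              # (u . grad) u
lamb = curl(u).cross(u) + grad(u.dot(u))/2         # (curl u) x u + 1/2 grad |u|^2
print(sp.simplify(conv - lamb))                    # zero vector, confirming the identity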
For the special case of an incompressible flow, the pressure constrains the flow so that the volume of fluid elements is constant: isochoric flow resulting in a solenoidal velocity field with ∇ ⋅ u = 0 {\textstyle \nabla \cdot \mathbf {u} =0} . == Incompressible flow == The incompressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor: the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient ∇ u {\textstyle \nabla \mathbf {u} } . the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently τ {\textstyle {\boldsymbol {\tau }}} is an isotropic tensor; furthermore, since the deviatoric stress tensor can be expressed in terms of the dynamic viscosity μ {\textstyle \mu } : τ = 2 μ ε {\displaystyle {\boldsymbol {\tau }}=2\mu {\boldsymbol {\varepsilon }},} where ε = 1 2 ( ∇ u + ∇ u T ) {\displaystyle {\boldsymbol {\varepsilon }}={\tfrac {1}{2}}\left(\mathbf {\nabla u} +\mathbf {\nabla u} ^{\mathrm {T} }\right)} is the rate-of-strain tensor. So this decomposition can be made explicit as: τ = μ ( ∇ u + ∇ u T ) . {\displaystyle {\boldsymbol {\tau }}=\mu \left(\nabla \mathbf {u} +\nabla \mathbf {u} ^{\mathrm {T} }\right).} This constitutive equation is also called the Newtonian law of viscosity. Dynamic viscosity μ need not be constant – in incompressible flows it can depend on density and on pressure. Any equation that makes explicit one of these transport coefficients in the conservative variables is called an equation of state. The divergence of the deviatoric stress in the case of uniform viscosity is given by: ∇ ⋅ τ = 2 μ ∇ ⋅ ε = μ ∇ ⋅ ( ∇ u + ∇ u T ) = μ ∇ 2 u {\displaystyle \nabla \cdot {\boldsymbol {\tau }}=2\mu \nabla \cdot {\boldsymbol {\varepsilon }}=\mu \nabla \cdot \left(\nabla \mathbf {u} +\nabla \mathbf {u} ^{\mathrm {T} }\right)=\mu \,\nabla ^{2}\mathbf {u} } because ∇ ⋅ u = 0 {\textstyle \nabla \cdot \mathbf {u} =0} for an incompressible fluid. Incompressibility rules out density and pressure waves like sound or shock waves, so this simplification is not useful if these phenomena are of interest. The incompressible flow assumption typically holds well for all fluids at low Mach numbers (say up to about Mach 0.3), such as for modelling air winds at normal temperatures. The incompressible Navier–Stokes equations are best visualized by dividing through by the density, where ν = μ ρ {\textstyle \nu ={\frac {\mu }{\rho }}} is called the kinematic viscosity. By isolating the fluid velocity, one can also restate the equation accordingly. If the density is constant throughout the fluid domain, or, in other words, if all fluid elements have the same density, ρ {\textstyle \rho } , then we have an equation in which p / ρ {\textstyle p/\rho } is called the unit pressure head. In incompressible flows, the pressure field satisfies the Poisson equation, ∇ 2 p = − ρ ∂ u i ∂ x k ∂ u k ∂ x i = − ρ ∂ 2 u i u k ∂ x k ∂ x i , {\displaystyle \nabla ^{2}p=-\rho {\frac {\partial u_{i}}{\partial x_{k}}}{\frac {\partial u_{k}}{\partial x_{i}}}=-\rho {\frac {\partial ^{2}u_{i}u_{k}}{\partial x_{k}\,\partial x_{i}}},} which is obtained by taking the divergence of the momentum equations. It is well worth observing the meaning of each term (compare to the Cauchy momentum equation): ∂ u ∂ t ⏟ Variation + ( u ⋅ ∇ ) u ⏟ Convective acceleration ⏞ Inertia (per volume) = − ∇ w ⏟ Internal source + ν ∇ 2 u ⏟ Diffusion ⏞ Divergence of stress + g ⏟ External source .
{\displaystyle \overbrace {{\vphantom {\frac {}{}}}\underbrace {\frac {\partial \mathbf {u} }{\partial t}} _{\text{Variation}}+\underbrace {{\vphantom {\frac {}{}}}(\mathbf {u} \cdot \nabla )\mathbf {u} } _{\begin{smallmatrix}{\text{Convective}}\\{\text{acceleration}}\end{smallmatrix}}} ^{\text{Inertia (per volume)}}=\overbrace {{\vphantom {\frac {\partial }{\partial }}}\underbrace {{\vphantom {\frac {}{}}}-\nabla w} _{\begin{smallmatrix}{\text{Internal}}\\{\text{source}}\end{smallmatrix}}+\underbrace {{\vphantom {\frac {}{}}}\nu \nabla ^{2}\mathbf {u} } _{\text{Diffusion}}} ^{\text{Divergence of stress}}+\underbrace {{\vphantom {\frac {}{}}}\mathbf {g} } _{\begin{smallmatrix}{\text{External}}\\{\text{source}}\end{smallmatrix}}.} The higher-order term, namely the shear stress divergence ∇ ⋅ τ {\textstyle \nabla \cdot {\boldsymbol {\tau }}} , has simply reduced to the vector Laplacian term μ ∇ 2 u {\textstyle \mu \nabla ^{2}\mathbf {u} } . This Laplacian term can be interpreted as the difference between the velocity at a point and the mean velocity in a small surrounding volume. This implies that – for a Newtonian fluid – viscosity operates as a diffusion of momentum, in much the same way as the heat conduction. In fact neglecting the convection term, incompressible Navier–Stokes equations lead to a vector diffusion equation (namely Stokes equations), but in general the convection term is present, so incompressible Navier–Stokes equations belong to the class of convection–diffusion equations. In the usual case of an external field being a conservative field: g = − ∇ φ {\displaystyle \mathbf {g} =-\nabla \varphi } by defining the hydraulic head: h ≡ w + φ {\displaystyle h\equiv w+\varphi } one can finally condense the whole source in one term, arriving to the incompressible Navier–Stokes equation with conservative external field: ∂ u ∂ t + ( u ⋅ ∇ ) u − ν ∇ 2 u = − ∇ h . {\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+(\mathbf {u} \cdot \nabla )\mathbf {u} -\nu \,\nabla ^{2}\mathbf {u} =-\nabla h.} The incompressible Navier–Stokes equations with uniform density and viscosity and conservative external field is the fundamental equation of hydraulics. The domain for these equations is commonly a 3 or fewer dimensional Euclidean space, for which an orthogonal coordinate reference frame is usually set to explicit the system of scalar partial differential equations to be solved. In 3-dimensional orthogonal coordinate systems are 3: Cartesian, cylindrical, and spherical. Expressing the Navier–Stokes vector equation in Cartesian coordinates is quite straightforward and not much influenced by the number of dimensions of the euclidean space employed, and this is the case also for the first-order terms (like the variation and convection ones) also in non-cartesian orthogonal coordinate systems. But for the higher order terms (the two coming from the divergence of the deviatoric stress that distinguish Navier–Stokes equations from Euler equations) some tensor calculus is required for deducing an expression in non-cartesian orthogonal coordinate systems. A special case of the fundamental equation of hydraulics is the Bernoulli's equation. 
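The diffusion interpretation given earlier in this section can be illustrated with a minimal one-dimensional sketch (an added example; the grid spacing, viscosity, time step and initial profile are all assumed for illustration): dropping convection, pressure and external forces leaves ∂u/∂t = ν ∂²u/∂y², which is stepped below with an explicit finite difference exactly as one would integrate a heat-conduction equation:

import numpy as np

# Momentum diffusion only: du/dt = nu * d2u/dy2 on a 1D grid (all values assumed).
nu, dy, dt, steps = 1.0e-3, 0.01, 2.0e-2, 500     # dt < dy**2/(2*nu) for stability
y = np.arange(0.0, 1.0 + dy, dy)
u = np.where(y < 0.5, 1.0, 0.0)                   # initial velocity jump (made up)

for _ in range(steps):
    lap = (u[2:] - 2.0*u[1:-1] + u[:-2])/dy**2    # discrete d2u/dy2
    u[1:-1] += dt*nu*lap                          # endpoints held at fixed values

print(u.round(2))                                 # the jump has smeared out diffusively

The initial velocity discontinuity spreads in exactly the way a temperature jump would spread under heat conduction, which is the sense in which viscosity acts as a diffusion of momentum.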
The incompressible Navier–Stokes equation is composite, the sum of two orthogonal equations, ∂ u ∂ t = Π S ( − ( u ⋅ ∇ ) u + ν ∇ 2 u ) + f S ρ − 1 ∇ p = Π I ( − ( u ⋅ ∇ ) u + ν ∇ 2 u ) + f I {\displaystyle {\begin{aligned}{\frac {\partial \mathbf {u} }{\partial t}}&=\Pi ^{S}\left(-(\mathbf {u} \cdot \nabla )\mathbf {u} +\nu \,\nabla ^{2}\mathbf {u} \right)+\mathbf {f} ^{S}\\\rho ^{-1}\,\nabla p&=\Pi ^{I}\left(-(\mathbf {u} \cdot \nabla )\mathbf {u} +\nu \,\nabla ^{2}\mathbf {u} \right)+\mathbf {f} ^{I}\end{aligned}}} where Π S {\textstyle \Pi ^{S}} and Π I {\textstyle \Pi ^{I}} are solenoidal and irrotational projection operators satisfying Π S + Π I = 1 {\textstyle \Pi ^{S}+\Pi ^{I}=1} , and f S {\textstyle \mathbf {f} ^{S}} and f I {\textstyle \mathbf {f} ^{I}} are the non-conservative and conservative parts of the body force. This result follows from the Helmholtz theorem (also known as the fundamental theorem of vector calculus). The first equation is a pressureless governing equation for the velocity, while the second equation for the pressure is a functional of the velocity and is related to the pressure Poisson equation. The explicit functional form of the projection operator in 3D is found from the Helmholtz Theorem: Π S F ( r ) = 1 4 π ∇ × ∫ ∇ ′ × F ( r ′ ) | r − r ′ | d V ′ , Π I = 1 − Π S {\displaystyle \Pi ^{S}\,\mathbf {F} (\mathbf {r} )={\frac {1}{4\pi }}\nabla \times \int {\frac {\nabla ^{\prime }\times \mathbf {F} (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,\mathrm {d} V',\quad \Pi ^{I}=1-\Pi ^{S}} with a similar structure in 2D. Thus the governing equation is an integro-differential equation similar to Coulomb's and Biot–Savart's law, not convenient for numerical computation. An equivalent weak or variational form of the equation, proved to produce the same velocity solution as the Navier–Stokes equation, is given by, ( w , ∂ u ∂ t ) = − ( w , ( u ⋅ ∇ ) u ) − ν ( ∇ w : ∇ u ) + ( w , f S ) {\displaystyle \left(\mathbf {w} ,{\frac {\partial \mathbf {u} }{\partial t}}\right)=-{\bigl (}\mathbf {w} ,\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} {\bigr )}-\nu \left(\nabla \mathbf {w} :\nabla \mathbf {u} \right)+\left(\mathbf {w} ,\mathbf {f} ^{S}\right)} for divergence-free test functions w {\textstyle \mathbf {w} } satisfying appropriate boundary conditions. Here, the projections are accomplished by the orthogonality of the solenoidal and irrotational function spaces. The discrete form of this is eminently suited to finite element computation of divergence-free flow, as we shall see in the next section. There, one will be able to address the question, "How does one specify pressure-driven (Poiseuille) problems with a pressureless governing equation?". The absence of pressure forces from the governing velocity equation demonstrates that the equation is not a dynamic one, but rather a kinematic equation where the divergence-free condition serves the role of a conservation equation. This would seem to refute the frequent statements that the incompressible pressure enforces the divergence-free condition. 
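The solenoidal projection Π S {\textstyle \Pi ^{S}} described above can also be carried out numerically. The sketch below is an added illustration that uses a spectral route on a doubly periodic grid (an assumption made here for convenience; the explicit form quoted above uses the Biot–Savart-type integral instead), removing the irrotational part of a made-up 2D field by solving ∇²φ = ∇ ⋅ u in Fourier space and subtracting ∇φ:

import numpy as np

# Spectral Helmholtz projection of a 2D periodic velocity field (illustrative only).
n, L = 64, 2.0*np.pi
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
u = np.cos(X)*np.sin(Y) + np.sin(X)               # made-up field with a divergent part
v = -np.sin(X)*np.cos(Y) + np.cos(Y)

k = 2.0*np.pi*np.fft.fftfreq(n, d=L/n)            # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing='ij')
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                                    # avoid dividing by zero at the mean mode

uh, vh = np.fft.fft2(u), np.fft.fft2(v)
div_h = 1j*KX*uh + 1j*KY*vh                       # transform of div(u)
phi_h = -div_h/K2                                 # solve laplacian(phi) = div(u)
u_sol = np.fft.ifft2(uh - 1j*KX*phi_h).real       # subtract the gradient (irrotational) part
v_sol = np.fft.ifft2(vh - 1j*KY*phi_h).real

div_check = np.fft.ifft2(1j*KX*np.fft.fft2(u_sol) + 1j*KY*np.fft.fft2(v_sol)).real
print(np.abs(div_check).max())                    # ~1e-15: the projected field is divergence-free

In the decomposition above, it is the right-hand side −(u ⋅ ∇)u + ν∇²u that gets projected in this way, with the irrotational remainder balanced by the pressure gradient.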
=== Weak form of the incompressible Navier–Stokes equations === ==== Strong form ==== Consider the incompressible Navier–Stokes equations for a Newtonian fluid of constant density ρ {\textstyle \rho } in a domain Ω ⊂ R d ( d = 2 , 3 ) {\displaystyle \Omega \subset \mathbb {R} ^{d}\quad (d=2,3)} with boundary ∂ Ω = Γ D ∪ Γ N , {\displaystyle \partial \Omega =\Gamma _{D}\cup \Gamma _{N},} being Γ D {\textstyle \Gamma _{D}} and Γ N {\textstyle \Gamma _{N}} portions of the boundary where respectively a Dirichlet and a Neumann boundary condition is applied ( Γ D ∩ Γ N = ∅ {\textstyle \Gamma _{D}\cap \Gamma _{N}=\emptyset } ): { ρ ∂ u ∂ t + ρ ( u ⋅ ∇ ) u − ∇ ⋅ σ ( u , p ) = f in Ω × ( 0 , T ) ∇ ⋅ u = 0 in Ω × ( 0 , T ) u = g on Γ D × ( 0 , T ) σ ( u , p ) n ^ = h on Γ N × ( 0 , T ) u ( 0 ) = u 0 in Ω × { 0 } {\displaystyle {\begin{cases}\rho {\dfrac {\partial \mathbf {u} }{\partial t}}+\rho (\mathbf {u} \cdot \nabla )\mathbf {u} -\nabla \cdot {\boldsymbol {\sigma }}(\mathbf {u} ,p)=\mathbf {f} &{\text{ in }}\Omega \times (0,T)\\\nabla \cdot \mathbf {u} =0&{\text{ in }}\Omega \times (0,T)\\\mathbf {u} =\mathbf {g} &{\text{ on }}\Gamma _{D}\times (0,T)\\{\boldsymbol {\sigma }}(\mathbf {u} ,p){\hat {\mathbf {n} }}=\mathbf {h} &{\text{ on }}\Gamma _{N}\times (0,T)\\\mathbf {u} (0)=\mathbf {u} _{0}&{\text{ in }}\Omega \times \{0\}\end{cases}}} u {\textstyle \mathbf {u} } is the fluid velocity, p {\textstyle p} the fluid pressure, f {\textstyle \mathbf {f} } a given forcing term, n ^ {\displaystyle {\hat {\mathbf {n} }}} the outward directed unit normal vector to Γ N {\textstyle \Gamma _{N}} , and σ ( u , p ) {\textstyle {\boldsymbol {\sigma }}(\mathbf {u} ,p)} the viscous stress tensor defined as: σ ( u , p ) = − p I + 2 μ ε ( u ) . {\displaystyle {\boldsymbol {\sigma }}(\mathbf {u} ,p)=-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} ).} Let μ {\textstyle \mu } be the dynamic viscosity of the fluid, I {\textstyle \mathbf {I} } the second-order identity tensor and ε ( u ) {\textstyle {\boldsymbol {\varepsilon }}(\mathbf {u} )} the strain-rate tensor defined as: ε ( u ) = 1 2 ( ( ∇ u ) + ( ∇ u ) T ) . {\displaystyle {\boldsymbol {\varepsilon }}(\mathbf {u} )={\frac {1}{2}}\left(\left(\nabla \mathbf {u} \right)+\left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right).} The functions g {\textstyle \mathbf {g} } and h {\textstyle \mathbf {h} } are given Dirichlet and Neumann boundary data, while u 0 {\textstyle \mathbf {u} _{0}} is the initial condition. The first equation is the momentum balance equation, while the second represents the mass conservation, namely the continuity equation. Assuming constant dynamic viscosity, using the vectorial identity ∇ ⋅ ( ∇ f ) T = ∇ ( ∇ ⋅ f ) {\displaystyle \nabla \cdot \left(\nabla \mathbf {f} \right)^{\mathrm {T} }=\nabla (\nabla \cdot \mathbf {f} )} and exploiting mass conservation, the divergence of the total stress tensor in the momentum equation can also be expressed as: ∇ ⋅ σ ( u , p ) = ∇ ⋅ ( − p I + 2 μ ε ( u ) ) = − ∇ p + 2 μ ∇ ⋅ ε ( u ) = − ∇ p + 2 μ ∇ ⋅ [ 1 2 ( ( ∇ u ) + ( ∇ u ) T ) ] = − ∇ p + μ ( Δ u + ∇ ⋅ ( ∇ u ) T ) = − ∇ p + μ ( Δ u + ∇ ( ∇ ⋅ u ) ⏟ = 0 ) = − ∇ p + μ Δ u . 
{\displaystyle {\begin{aligned}\nabla \cdot {\boldsymbol {\sigma }}(\mathbf {u} ,p)&=\nabla \cdot \left(-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} )\right)\\&=-\nabla p+2\mu \nabla \cdot {\boldsymbol {\varepsilon }}(\mathbf {u} )\\&=-\nabla p+2\mu \nabla \cdot \left[{\tfrac {1}{2}}\left(\left(\nabla \mathbf {u} \right)+\left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right)\right]\\&=-\nabla p+\mu \left(\Delta \mathbf {u} +\nabla \cdot \left(\nabla \mathbf {u} \right)^{\mathrm {T} }\right)\\&=-\nabla p+\mu {\bigl (}\Delta \mathbf {u} +\nabla \underbrace {(\nabla \cdot \mathbf {u} )} _{=0}{\bigr )}=-\nabla p+\mu \,\Delta \mathbf {u} .\end{aligned}}} Moreover, note that the Neumann boundary conditions can be rearranged as: σ ( u , p ) n ^ = ( − p I + 2 μ ε ( u ) ) n ^ = − p n ^ + μ ∂ u ∂ n ^ . {\displaystyle {\boldsymbol {\sigma }}(\mathbf {u} ,p){\hat {\mathbf {n} }}=\left(-p\mathbf {I} +2\mu {\boldsymbol {\varepsilon }}(\mathbf {u} )\right){\hat {\mathbf {n} }}=-p{\hat {\mathbf {n} }}+\mu {\frac {\partial {\boldsymbol {u}}}{\partial {\hat {\mathbf {n} }}}}.} ==== Weak form ==== In order to find the weak form of the Navier–Stokes equations, firstly, consider the momentum equation ρ ∂ u ∂ t − μ Δ u + ρ ( u ⋅ ∇ ) u + ∇ p = f {\displaystyle \rho {\frac {\partial \mathbf {u} }{\partial t}}-\mu \Delta \mathbf {u} +\rho (\mathbf {u} \cdot \nabla )\mathbf {u} +\nabla p=\mathbf {f} } multiply it for a test function v {\textstyle \mathbf {v} } , defined in a suitable space V {\textstyle V} , and integrate both members with respect to the domain Ω {\textstyle \Omega } : ∫ Ω ρ ∂ u ∂ t ⋅ v − ∫ Ω μ Δ u ⋅ v + ∫ Ω ρ ( u ⋅ ∇ ) u ⋅ v + ∫ Ω ∇ p ⋅ v = ∫ Ω f ⋅ v {\displaystyle \int \limits _{\Omega }\rho {\frac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} -\int \limits _{\Omega }\mu \Delta \mathbf {u} \cdot \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} +\int \limits _{\Omega }\nabla p\cdot \mathbf {v} =\int \limits _{\Omega }\mathbf {f} \cdot \mathbf {v} } Counter-integrating by parts the diffusive and the pressure terms and by using the Gauss' theorem: − ∫ Ω μ Δ u ⋅ v = ∫ Ω μ ∇ u ⋅ ∇ v − ∫ ∂ Ω μ ∂ u ∂ n ^ ⋅ v ∫ Ω ∇ p ⋅ v = − ∫ Ω p ∇ ⋅ v + ∫ ∂ Ω p v ⋅ n ^ {\displaystyle {\begin{aligned}-\int \limits _{\Omega }\mu \Delta \mathbf {u} \cdot \mathbf {v} &=\int _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} -\int \limits _{\partial \Omega }\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}\cdot \mathbf {v} \\\int \limits _{\Omega }\nabla p\cdot \mathbf {v} &=-\int \limits _{\Omega }p\nabla \cdot \mathbf {v} +\int \limits _{\partial \Omega }p\mathbf {v} \cdot {\hat {\mathbf {n} }}\end{aligned}}} Using these relations, one gets: ∫ Ω ρ ∂ u ∂ t ⋅ v + ∫ Ω μ ∇ u ⋅ ∇ v + ∫ Ω ρ ( u ⋅ ∇ ) u ⋅ v − ∫ Ω p ∇ ⋅ v = ∫ Ω f ⋅ v + ∫ ∂ Ω ( μ ∂ u ∂ n ^ − p n ^ ) ⋅ v ∀ v ∈ V . 
{\displaystyle \int \limits _{\Omega }\rho {\dfrac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} +\int \limits _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} -\int \limits _{\Omega }p\nabla \cdot \mathbf {v} =\int \limits _{\Omega }\mathbf {f} \cdot \mathbf {v} +\int \limits _{\partial \Omega }\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} \quad \forall \mathbf {v} \in V.} In the same fashion, the continuity equation is multiplied for a test function q belonging to a space Q {\textstyle Q} and integrated in the domain Ω {\textstyle \Omega } : ∫ Ω q ∇ ⋅ u = 0. ∀ q ∈ Q . {\displaystyle \int \limits _{\Omega }q\nabla \cdot \mathbf {u} =0.\quad \forall q\in Q.} The space functions are chosen as follows: V = [ H 0 1 ( Ω ) ] d = { v ∈ [ H 1 ( Ω ) ] d : v = 0 on Γ D } , Q = L 2 ( Ω ) {\displaystyle {\begin{aligned}V=\left[H_{0}^{1}(\Omega )\right]^{d}&=\left\{\mathbf {v} \in \left[H^{1}(\Omega )\right]^{d}:\quad \mathbf {v} =\mathbf {0} {\text{ on }}\Gamma _{D}\right\},\\Q&=L^{2}(\Omega )\end{aligned}}} Considering that the test function v vanishes on the Dirichlet boundary and considering the Neumann condition, the integral on the boundary can be rearranged as: ∫ ∂ Ω ( μ ∂ u ∂ n ^ − p n ^ ) ⋅ v = ∫ Γ D ( μ ∂ u ∂ n ^ − p n ^ ) ⋅ v ⏟ v = 0 on Γ D + ∫ Γ N ∫ Γ N ( μ ∂ u ∂ n ^ − p n ^ ) ⏟ = h on Γ N ⋅ v = ∫ Γ N h ⋅ v . {\displaystyle \int \limits _{\partial \Omega }\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} =\underbrace {\int \limits _{\Gamma _{D}}\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)\cdot \mathbf {v} } _{\mathbf {v} =\mathbf {0} {\text{ on }}\Gamma _{D}\ }+\int \limits _{\Gamma _{N}}\underbrace {{\vphantom {\int \limits _{\Gamma _{N}}}}\left(\mu {\frac {\partial \mathbf {u} }{\partial {\hat {\mathbf {n} }}}}-p{\hat {\mathbf {n} }}\right)} _{=\mathbf {h} {\text{ on }}\Gamma _{N}}\cdot \mathbf {v} =\int \limits _{\Gamma _{N}}\mathbf {h} \cdot \mathbf {v} .} Having this in mind, the weak formulation of the Navier–Stokes equations is expressed as: find u ∈ L 2 ( R + [ H 1 ( Ω ) ] d ) ∩ C 0 ( R + [ L 2 ( Ω ) ] d ) such that: { ∫ Ω ρ ∂ u ∂ t ⋅ v + ∫ Ω μ ∇ u ⋅ ∇ v + ∫ Ω ρ ( u ⋅ ∇ ) u ⋅ v − ∫ Ω p ∇ ⋅ v = ∫ Ω f ⋅ v + ∫ Γ N h ⋅ v ∀ v ∈ V , ∫ Ω q ∇ ⋅ u = 0 ∀ q ∈ Q . 
{\displaystyle {\begin{aligned}&{\text{find }}\mathbf {u} \in L^{2}\left(\mathbb {R} ^{+}\;\left[H^{1}(\Omega )\right]^{d}\right)\cap C^{0}\left(\mathbb {R} ^{+}\;\left[L^{2}(\Omega )\right]^{d}\right){\text{ such that: }}\\[5pt]&\quad {\begin{cases}\displaystyle \int \limits _{\Omega }\rho {\dfrac {\partial \mathbf {u} }{\partial t}}\cdot \mathbf {v} +\int \limits _{\Omega }\mu \nabla \mathbf {u} \cdot \nabla \mathbf {v} +\int \limits _{\Omega }\rho (\mathbf {u} \cdot \nabla )\mathbf {u} \cdot \mathbf {v} -\int \limits _{\Omega }p\nabla \cdot \mathbf {v} =\int \limits _{\Omega }\mathbf {f} \cdot \mathbf {v} +\int \limits _{\Gamma _{N}}\mathbf {h} \cdot \mathbf {v} \quad \forall \mathbf {v} \in V,\\\displaystyle \int \limits _{\Omega }q\nabla \cdot \mathbf {u} =0\quad \forall q\in Q.\end{cases}}\end{aligned}}} === Discrete velocity === With partitioning of the problem domain and defining basis functions on the partitioned domain, the discrete form of the governing equation is ( w i , ∂ u j ∂ t ) = − ( w i , ( u ⋅ ∇ ) u j ) − ν ( ∇ w i : ∇ u j ) + ( w i , f S ) . {\displaystyle \left(\mathbf {w} _{i},{\frac {\partial \mathbf {u} _{j}}{\partial t}}\right)=-{\bigl (}\mathbf {w} _{i},\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} _{j}{\bigr )}-\nu \left(\nabla \mathbf {w} _{i}:\nabla \mathbf {u} _{j}\right)+\left(\mathbf {w} _{i},\mathbf {f} ^{S}\right).} It is desirable to choose basis functions that reflect the essential feature of incompressible flow – the elements must be divergence-free. While the velocity is the variable of interest, the existence of the stream function or vector potential is necessary by the Helmholtz theorem. Further, to determine fluid flow in the absence of a pressure gradient, one can specify the difference of stream function values across a 2D channel, or the line integral of the tangential component of the vector potential around the channel in 3D, the flow being given by Stokes' theorem. Discussion will be restricted to 2D in the following. We further restrict discussion to continuous Hermite finite elements which have at least first-derivative degrees-of-freedom. With this, one can draw a large number of candidate triangular and rectangular elements from the plate-bending literature. These elements have derivatives as components of the gradient. In 2D, the gradient and curl of a scalar are clearly orthogonal, given by the expressions, ∇ φ = ( ∂ φ ∂ x , ∂ φ ∂ y ) T , ∇ × φ = ( ∂ φ ∂ y , − ∂ φ ∂ x ) T . {\displaystyle {\begin{aligned}\nabla \varphi &=\left({\frac {\partial \varphi }{\partial x}},\,{\frac {\partial \varphi }{\partial y}}\right)^{\mathrm {T} },\\[5pt]\nabla \times \varphi &=\left({\frac {\partial \varphi }{\partial y}},\,-{\frac {\partial \varphi }{\partial x}}\right)^{\mathrm {T} }.\end{aligned}}} Adopting continuous plate-bending elements, interchanging the derivative degrees-of-freedom and changing the sign of the appropriate one gives many families of stream function elements. Taking the curl of the scalar stream function elements gives divergence-free velocity elements. The requirement that the stream function elements be continuous assures that the normal component of the velocity is continuous across element interfaces, all that is necessary for vanishing divergence on these interfaces. Boundary conditions are simple to apply. The stream function is constant on no-flow surfaces, with no-slip velocity conditions on surfaces. Stream function differences across open channels determine the flow. 
No boundary conditions are necessary on open boundaries, though consistent values may be used with some problems. These are all Dirichlet conditions. The algebraic equations to be solved are simple to set up, but of course are non-linear, requiring iteration of the linearized equations. Similar considerations apply to three-dimensions, but extension from 2D is not immediate because of the vector nature of the potential, and there exists no simple relation between the gradient and the curl as was the case in 2D. === Pressure recovery === Recovering pressure from the velocity field is easy. The discrete weak equation for the pressure gradient is, ( g i , ∇ p ) = − ( g i , ( u ⋅ ∇ ) u j ) − ν ( ∇ g i : ∇ u j ) + ( g i , f I ) {\displaystyle (\mathbf {g} _{i},\nabla p)=-\left(\mathbf {g} _{i},\left(\mathbf {u} \cdot \nabla \right)\mathbf {u} _{j}\right)-\nu \left(\nabla \mathbf {g} _{i}:\nabla \mathbf {u} _{j}\right)+\left(\mathbf {g} _{i},\mathbf {f} ^{I}\right)} where the test/weight functions are irrotational. Any conforming scalar finite element may be used. However, the pressure gradient field may also be of interest. In this case, one can use scalar Hermite elements for the pressure. For the test/weight functions g i {\textstyle \mathbf {g} _{i}} one would choose the irrotational vector elements obtained from the gradient of the pressure element. == Non-inertial frame of reference == The rotating frame of reference introduces some interesting pseudo-forces into the equations through the material derivative term. Consider a stationary inertial frame of reference K {\textstyle K} , and a non-inertial frame of reference K ′ {\textstyle K'} , which is translating with velocity U ( t ) {\textstyle \mathbf {U} (t)} and rotating with angular velocity Ω ( t ) {\textstyle \Omega (t)} with respect to the stationary frame. The Navier–Stokes equation observed from the non-inertial frame then becomes Here x {\textstyle \mathbf {x} } and u {\textstyle \mathbf {u} } are measured in the non-inertial frame. The first term in the parenthesis represents Coriolis acceleration, the second term is due to centrifugal acceleration, the third is due to the linear acceleration of K ′ {\textstyle K'} with respect to K {\textstyle K} and the fourth term is due to the angular acceleration of K ′ {\textstyle K'} with respect to K {\textstyle K} . == Other equations == The Navier–Stokes equations are strictly a statement of the balance of momentum. To fully describe fluid flow, more information is needed, how much depending on the assumptions made. This additional information may include boundary data (no-slip, capillary surface, etc.), conservation of mass, balance of energy, and/or an equation of state. === Continuity equation for incompressible fluid === Regardless of the flow assumptions, a statement of the conservation of mass is generally necessary. 
This is achieved through the mass continuity equation, as discussed above in the "General continuum equations" section of this article, as follows: D m D t = ∭ V ( D ρ D t + ρ ( ∇ ⋅ u ) ) d V D ρ D t + ρ ( ∇ ⋅ u ) = ∂ ρ ∂ t + ( ∇ ρ ) ⋅ u + ρ ( ∇ ⋅ u ) = ∂ ρ ∂ t + ∇ ⋅ ( ρ u ) = 0 {\displaystyle {\begin{aligned}{\frac {\mathbf {D} m}{\mathbf {Dt} }}&={\iiint \limits _{V}}({{\frac {\mathbf {D} \rho }{\mathbf {Dt} }}+\rho (\nabla \cdot \mathbf {u} )})dV\\{\frac {\mathbf {D} \rho }{\mathbf {Dt} }}+\rho (\nabla \cdot {\mathbf {u} })&={\frac {\partial \rho }{\partial t}}+({\nabla \rho })\cdot {\mathbf {u} }+{\rho }(\nabla \cdot \mathbf {u} )={\frac {\partial \rho }{\partial t}}+\nabla \cdot ({\rho \mathbf {u} })=0\end{aligned}}} A fluid medium for which the density ( ρ {\displaystyle \rho } ) is constant is called incompressible. Therefore, the rate of change of density ( ρ {\displaystyle \rho } ) with respect to time ( ∂ ρ ∂ t ) {\displaystyle ({\frac {\partial \rho }{\partial t}})} and the gradient of density ( ∇ ρ ) {\displaystyle (\nabla \rho )} are equal to zero ( 0 ) {\displaystyle (0)} . In this case the general equation of continuity, ∂ ρ ∂ t + ∇ ⋅ ( ρ u ) = 0 {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot ({\rho \mathbf {u} })=0} , reduces to: ρ ( ∇ ⋅ u ) = 0 {\displaystyle \rho (\nabla {\cdot }{\mathbf {u} })=0} . Furthermore, since the density ( ρ {\displaystyle \rho } ) is a non-zero constant ( ρ ≠ 0 ) {\displaystyle (\rho \neq 0)} , the equation can be divided through by it. Therefore, the continuity equation for an incompressible fluid reduces further to: ( ∇ ⋅ u ) = 0 {\displaystyle (\nabla {\cdot {\mathbf {u} }})=0} This relationship, ( ∇ ⋅ u ) = 0 {\textstyle (\nabla {\cdot {\mathbf {u} }})=0} , identifies that the divergence of the flow velocity vector ( u {\displaystyle \mathbf {u} } ) is equal to zero ( 0 ) {\displaystyle (0)} , which means that for an incompressible fluid the flow velocity field is a solenoidal vector field or a divergence-free vector field. Note that this relationship can be combined with the identity for the vector Laplace operator ( ∇ 2 u = ∇ ( ∇ ⋅ u ) − ∇ × ( ∇ × u ) ) {\displaystyle (\nabla ^{2}\mathbf {u} =\nabla (\nabla \cdot \mathbf {u} )-\nabla \times (\nabla \times \mathbf {u} ))} and with the vorticity ( ω → = ∇ × u ) {\displaystyle ({\vec {\omega }}=\nabla \times \mathbf {u} )} , which for an incompressible fluid gives: ∇ 2 u = − ( ∇ × ( ∇ × u ) ) = − ( ∇ × ω → ) {\displaystyle \nabla ^{2}\mathbf {u} =-(\nabla \times (\nabla \times \mathbf {u} ))=-(\nabla \times {\vec {\omega }})} == Stream function for incompressible 2D fluid == Taking the curl of the incompressible Navier–Stokes equation results in the elimination of pressure. This is especially easy to see if 2D Cartesian flow is assumed (like in the degenerate 3D case with u z = 0 {\textstyle u_{z}=0} and no dependence of anything on z {\textstyle z} ), where the equations reduce to: ρ ( ∂ u x ∂ t + u x ∂ u x ∂ x + u y ∂ u x ∂ y ) = − ∂ p ∂ x + μ ( ∂ 2 u x ∂ x 2 + ∂ 2 u x ∂ y 2 ) + ρ g x ρ ( ∂ u y ∂ t + u x ∂ u y ∂ x + u y ∂ u y ∂ y ) = − ∂ p ∂ y + μ ( ∂ 2 u y ∂ x 2 + ∂ 2 u y ∂ y 2 ) + ρ g y .
{\displaystyle {\begin{aligned}\rho \left({\frac {\partial u_{x}}{\partial t}}+u_{x}{\frac {\partial u_{x}}{\partial x}}+u_{y}{\frac {\partial u_{x}}{\partial y}}\right)&=-{\frac {\partial p}{\partial x}}+\mu \left({\frac {\partial ^{2}u_{x}}{\partial x^{2}}}+{\frac {\partial ^{2}u_{x}}{\partial y^{2}}}\right)+\rho g_{x}\\\rho \left({\frac {\partial u_{y}}{\partial t}}+u_{x}{\frac {\partial u_{y}}{\partial x}}+u_{y}{\frac {\partial u_{y}}{\partial y}}\right)&=-{\frac {\partial p}{\partial y}}+\mu \left({\frac {\partial ^{2}u_{y}}{\partial x^{2}}}+{\frac {\partial ^{2}u_{y}}{\partial y^{2}}}\right)+\rho g_{y}.\end{aligned}}} Differentiating the first with respect to y {\textstyle y} , the second with respect to x {\textstyle x} and subtracting the resulting equations will eliminate pressure and any conservative force. For incompressible flow, defining the stream function ψ {\textstyle \psi } through u x = ∂ ψ ∂ y ; u y = − ∂ ψ ∂ x {\displaystyle u_{x}={\frac {\partial \psi }{\partial y}};\quad u_{y}=-{\frac {\partial \psi }{\partial x}}} results in mass continuity being unconditionally satisfied (given the stream function is continuous), and then incompressible Newtonian 2D momentum and mass conservation condense into one equation: ∂ ∂ t ( ∇ 2 ψ ) + ∂ ψ ∂ y ∂ ∂ x ( ∇ 2 ψ ) − ∂ ψ ∂ x ∂ ∂ y ( ∇ 2 ψ ) = ν ∇ 4 ψ {\displaystyle {\frac {\partial }{\partial t}}\left(\nabla ^{2}\psi \right)+{\frac {\partial \psi }{\partial y}}{\frac {\partial }{\partial x}}\left(\nabla ^{2}\psi \right)-{\frac {\partial \psi }{\partial x}}{\frac {\partial }{\partial y}}\left(\nabla ^{2}\psi \right)=\nu \nabla ^{4}\psi } where ∇ 4 {\textstyle \nabla ^{4}} is the 2D biharmonic operator and ν {\textstyle \nu } is the kinematic viscosity, ν = μ ρ {\textstyle \nu ={\frac {\mu }{\rho }}} . We can also express this compactly using the Jacobian determinant: ∂ ∂ t ( ∇ 2 ψ ) + ∂ ( ψ , ∇ 2 ψ ) ∂ ( y , x ) = ν ∇ 4 ψ . {\displaystyle {\frac {\partial }{\partial t}}\left(\nabla ^{2}\psi \right)+{\frac {\partial \left(\psi ,\nabla ^{2}\psi \right)}{\partial (y,x)}}=\nu \nabla ^{4}\psi .} This single equation together with appropriate boundary conditions describes 2D fluid flow, taking only kinematic viscosity as a parameter. Note that the equation for creeping flow results when the left side is assumed zero. In axisymmetric flow another stream function formulation, called the Stokes stream function, can be used to describe the velocity components of an incompressible flow with one scalar function. The incompressible Navier–Stokes equation is a differential algebraic equation, having the inconvenient feature that there is no explicit mechanism for advancing the pressure in time. Consequently, much effort has been expended to eliminate the pressure from all or part of the computational process. The stream function formulation eliminates the pressure but only in two dimensions and at the expense of introducing higher derivatives and elimination of the velocity, which is the primary variable of interest. == Properties == === Nonlinearity === The Navier–Stokes equations are nonlinear partial differential equations in the general case and so remain in almost every real situation. In some cases, such as one-dimensional flow and Stokes flow (or creeping flow), the equations can be simplified to linear equations. The nonlinearity makes most problems difficult or impossible to solve and is the main contributor to the turbulence that the equations model. 
The nonlinearity is due to convective acceleration, which is an acceleration associated with the change in velocity over position. Hence, any convective flow, whether turbulent or not, will involve nonlinearity. An example of convective but laminar (nonturbulent) flow would be the passage of a viscous fluid (for example, oil) through a small converging nozzle. Such flows, whether exactly solvable or not, can often be thoroughly studied and understood. === Turbulence === Turbulence is the time-dependent chaotic behaviour seen in many fluid flows. It is generally believed that it is due to the inertia of the fluid as a whole: the culmination of time-dependent and convective acceleration; hence flows where inertial effects are small tend to be laminar (the Reynolds number quantifies how much the flow is affected by inertia). It is believed, though not known with certainty, that the Navier–Stokes equations describe turbulence properly. The numerical solution of the Navier–Stokes equations for turbulent flow is extremely difficult, and due to the significantly different mixing-length scales that are involved in turbulent flow, the stable solution of this requires such a fine mesh resolution that the computational time becomes significantly infeasible for calculation or direct numerical simulation. Attempts to solve turbulent flow using a laminar solver typically result in a time-unsteady solution, which fails to converge appropriately. To counter this, time-averaged equations such as the Reynolds-averaged Navier–Stokes equations (RANS), supplemented with turbulence models, are used in practical computational fluid dynamics (CFD) applications when modeling turbulent flows. Some models include the Spalart–Allmaras, k–ω, k–ε, and SST models, which add a variety of additional equations to bring closure to the RANS equations. Large eddy simulation (LES) can also be used to solve these equations numerically. This approach is computationally more expensive—in time and in computer memory—than RANS, but produces better results because it explicitly resolves the larger turbulent scales. === Applicability === Together with supplemental equations (for example, conservation of mass) and well-formulated boundary conditions, the Navier–Stokes equations seem to model fluid motion accurately; even turbulent flows seem (on average) to agree with real world observations. The Navier–Stokes equations assume that the fluid being studied is a continuum (it is infinitely divisible and not composed of particles such as atoms or molecules), and is not moving at relativistic velocities. At very small scales or under extreme conditions, real fluids made out of discrete molecules will produce results different from the continuous fluids modeled by the Navier–Stokes equations. For example, capillarity of internal layers in fluids appears for flow with high gradients. For large Knudsen number of the problem, the Boltzmann equation may be a suitable replacement. Failing that, one may have to resort to molecular dynamics or various hybrid methods. Another limitation is simply the complicated nature of the equations. Time-tested formulations exist for common fluid families, but the application of the Navier–Stokes equations to less common families tends to result in very complicated formulations and often to open research problems. For this reason, these equations are usually written for Newtonian fluids where the viscosity model is linear; truly general models for the flow of other kinds of fluids (such as blood) do not exist. 
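As a rough, back-of-the-envelope illustration of the two dimensionless numbers invoked above – the Reynolds number, which gauges how strongly inertia dominates viscosity, and the Knudsen number, which gauges whether the continuum assumption itself is reasonable – the values below are illustrative assumptions rather than data from this article:

# Illustrative order-of-magnitude estimates (all inputs are assumed values).
nu_air = 1.5e-5            # kinematic viscosity of air near room temperature, m^2/s (approx.)
mfp_air = 7.0e-8           # mean free path of air at sea level, m (approx.)

def reynolds(U, L, nu):    # Re = U L / nu
    return U*L/nu

def knudsen(mfp, L):       # Kn = mean free path / characteristic length
    return mfp/L

print(reynolds(U=70.0, L=2.0, nu=nu_air))       # wing-chord scale: ~1e7, inertia-dominated
print(reynolds(U=1.0e-3, L=1.0e-5, nu=nu_air))  # microchannel scale: ~1e-3, creeping flow
print(knudsen(mfp_air, 2.0))                    # ~3e-8: continuum description is fine
print(knudsen(mfp_air, 1.0e-7))                 # ~0.7: Navier-Stokes no longer applicable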
== Application to specific problems == The Navier–Stokes equations, even when written explicitly for specific fluids, are rather generic in nature and their proper application to specific problems can be very diverse. This is partly because there is an enormous variety of problems that may be modeled, ranging from as simple as the distribution of static pressure to as complicated as multiphase flow driven by surface tension. Generally, application to specific problems begins with some flow assumptions and initial/boundary condition formulation, this may be followed by scale analysis to further simplify the problem. === Parallel flow === Assume steady, parallel, one-dimensional, non-convective pressure-driven flow between parallel plates, the resulting scaled (dimensionless) boundary value problem is: d 2 u d y 2 = − 1 ; u ( 0 ) = u ( 1 ) = 0. {\displaystyle {\frac {\mathrm {d} ^{2}u}{\mathrm {d} y^{2}}}=-1;\quad u(0)=u(1)=0.} The boundary condition is the no slip condition. This problem is easily solved for the flow field: u ( y ) = y − y 2 2 . {\displaystyle u(y)={\frac {y-y^{2}}{2}}.} From this point onward, more quantities of interest can be easily obtained, such as viscous drag force or net flow rate. === Radial flow === Difficulties may arise when the problem becomes slightly more complicated. A seemingly modest twist on the parallel flow above would be the radial flow between parallel plates; this involves convection and thus non-linearity. The velocity field may be represented by a function f(z) that must satisfy: d 2 f d z 2 + R f 2 = − 1 ; f ( − 1 ) = f ( 1 ) = 0. {\displaystyle {\frac {\mathrm {d} ^{2}f}{\mathrm {d} z^{2}}}+Rf^{2}=-1;\quad f(-1)=f(1)=0.} This ordinary differential equation is what is obtained when the Navier–Stokes equations are written and the flow assumptions applied (additionally, the pressure gradient is solved for). The nonlinear term makes this a very difficult problem to solve analytically (a lengthy implicit solution may be found which involves elliptic integrals and roots of cubic polynomials). Issues with the actual existence of solutions arise for R > 1.41 {\textstyle R>1.41} (approximately; this is not √2), the parameter R {\textstyle R} being the Reynolds number with appropriately chosen scales. This is an example of flow assumptions losing their applicability, and an example of the difficulty in "high" Reynolds number flows. === Convection === A type of natural convection that can be described by the Navier–Stokes equation is the Rayleigh–Bénard convection. It is one of the most commonly studied convection phenomena because of its analytical and experimental accessibility. == Exact solutions of the Navier–Stokes equations == Some exact solutions to the Navier–Stokes equations exist. Examples of degenerate cases—with the non-linear terms in the Navier–Stokes equations equal to zero—are Poiseuille flow, Couette flow and the oscillatory Stokes boundary layer. But also, more interesting examples, solutions to the full non-linear equations, exist, such as Jeffery–Hamel flow, Von Kármán swirling flow, stagnation point flow, Landau–Squire jet, and Taylor–Green vortex. Time-dependent self-similar solutions of the three-dimensional non-compressible Navier–Stokes equations in Cartesian coordinate can be given with the help of the Kummer's functions with quadratic arguments. 
For the compressible Navier–Stokes equations the time-dependent self-similar solutions are however the Whittaker functions again with quadratic arguments when the polytropic equation of state is used as a closing condition. Note that the existence of these exact solutions does not imply they are stable: turbulence may develop at higher Reynolds numbers. Under additional assumptions, the component parts can be separated. === A three-dimensional steady-state vortex solution === A steady-state example with no singularities comes from considering the flow along the lines of a Hopf fibration. Let r {\textstyle r} be a constant radius of the inner coil. One set of solutions is given by: ρ ( x , y , z ) = 3 B r 2 + x 2 + y 2 + z 2 p ( x , y , z ) = − A 2 B ( r 2 + x 2 + y 2 + z 2 ) 3 u ( x , y , z ) = A ( r 2 + x 2 + y 2 + z 2 ) 2 ( 2 ( − r y + x z ) 2 ( r x + y z ) r 2 − x 2 − y 2 + z 2 ) g = 0 μ = 0 {\displaystyle {\begin{aligned}\rho (x,y,z)&={\frac {3B}{r^{2}+x^{2}+y^{2}+z^{2}}}\\p(x,y,z)&={\frac {-A^{2}B}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{3}}}\\\mathbf {u} (x,y,z)&={\frac {A}{\left(r^{2}+x^{2}+y^{2}+z^{2}\right)^{2}}}{\begin{pmatrix}2(-ry+xz)\\2(rx+yz)\\r^{2}-x^{2}-y^{2}+z^{2}\end{pmatrix}}\\g&=0\\\mu &=0\end{aligned}}} for arbitrary constants A {\textstyle A} and B {\textstyle B} . This is a solution in a non-viscous gas (compressible fluid) whose density, velocities and pressure goes to zero far from the origin. (Note this is not a solution to the Clay Millennium problem because that refers to incompressible fluids where ρ {\textstyle \rho } is a constant, and neither does it deal with the uniqueness of the Navier–Stokes equations with respect to any turbulence properties.) It is also worth pointing out that the components of the velocity vector are exactly those from the Pythagorean quadruple parametrization. Other choices of density and pressure are possible with the same velocity field: === Viscous three-dimensional periodic solutions === Two examples of periodic fully-three-dimensional viscous solutions are described in. These solutions are defined on a three-dimensional torus T 3 = [ 0 , L ] 3 {\displaystyle \mathbb {T} ^{3}=[0,L]^{3}} and are characterized by positive and negative helicity respectively. 
The solution with positive helicity is given by: u x = 4 2 3 3 U 0 [ sin ⁡ ( k x − π / 3 ) cos ⁡ ( k y + π / 3 ) sin ⁡ ( k z + π / 2 ) − cos ⁡ ( k z − π / 3 ) sin ⁡ ( k x + π / 3 ) sin ⁡ ( k y + π / 2 ) ] e − 3 ν k 2 t u y = 4 2 3 3 U 0 [ sin ⁡ ( k y − π / 3 ) cos ⁡ ( k z + π / 3 ) sin ⁡ ( k x + π / 2 ) − cos ⁡ ( k x − π / 3 ) sin ⁡ ( k y + π / 3 ) sin ⁡ ( k z + π / 2 ) ] e − 3 ν k 2 t u z = 4 2 3 3 U 0 [ sin ⁡ ( k z − π / 3 ) cos ⁡ ( k x + π / 3 ) sin ⁡ ( k y + π / 2 ) − cos ⁡ ( k y − π / 3 ) sin ⁡ ( k z + π / 3 ) sin ⁡ ( k x + π / 2 ) ] e − 3 ν k 2 t {\displaystyle {\begin{aligned}u_{x}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(kx-\pi /3)\cos(ky+\pi /3)\sin(kz+\pi /2)-\cos(kz-\pi /3)\sin(kx+\pi /3)\sin(ky+\pi /2)\,\right]e^{-3\nu k^{2}t}\\u_{y}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(ky-\pi /3)\cos(kz+\pi /3)\sin(kx+\pi /2)-\cos(kx-\pi /3)\sin(ky+\pi /3)\sin(kz+\pi /2)\,\right]e^{-3\nu k^{2}t}\\u_{z}&={\frac {4{\sqrt {2}}}{3{\sqrt {3}}}}\,U_{0}\left[\,\sin(kz-\pi /3)\cos(kx+\pi /3)\sin(ky+\pi /2)-\cos(ky-\pi /3)\sin(kz+\pi /3)\sin(kx+\pi /2)\,\right]e^{-3\nu k^{2}t}\end{aligned}}} where k = 2 π / L {\displaystyle k=2\pi /L} is the wave number and the velocity components are normalized so that the average kinetic energy per unit of mass is U 0 2 / 2 {\displaystyle U_{0}^{2}/2} at t = 0 {\displaystyle t=0} . The pressure field is obtained from the velocity field as p = p 0 − ρ 0 ‖ u ‖ 2 / 2 {\displaystyle p=p_{0}-\rho _{0}\|{\boldsymbol {u}}\|^{2}/2} (where p 0 {\displaystyle p_{0}} and ρ 0 {\displaystyle \rho _{0}} are reference values for the pressure and density fields respectively). Since both the solutions belong to the class of Beltrami flow, the vorticity field is parallel to the velocity and, for the case with positive helicity, is given by ω = 3 k u {\displaystyle \omega ={\sqrt {3}}\,k\,{\boldsymbol {u}}} . These solutions can be regarded as a generalization in three dimensions of the classic two-dimensional Taylor-Green Taylor–Green vortex. == Wyld diagrams == Wyld diagrams are bookkeeping graphs that correspond to the Navier–Stokes equations via a perturbation expansion of the fundamental continuum mechanics. Similar to the Feynman diagrams in quantum field theory, these diagrams are an extension of Mstislav Keldysh's technique for nonequilibrium processes in fluid dynamics. In other words, these diagrams assign graphs to the (often) turbulent phenomena in turbulent fluids by allowing correlated and interacting fluid particles to obey stochastic processes associated to pseudo-random functions in probability distributions. == Representations in 3D == Note that the formulas in this section make use of the single-line notation for partial derivatives, where, e.g. ∂ x u {\textstyle \partial _{x}u} means the partial derivative of u {\textstyle u} with respect to x {\textstyle x} , and ∂ y 2 f θ {\textstyle \partial _{y}^{2}f_{\theta }} means the second-order partial derivative of f θ {\textstyle f_{\theta }} with respect to y {\textstyle y} . A 2022 paper provides a less costly, dynamical and recurrent solution of the Navier-Stokes equation for 3D turbulent fluid flows. On suitably short time scales, the dynamics of turbulence is deterministic. 
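As a sanity check on the positive-helicity solution quoted in the "Viscous three-dimensional periodic solutions" subsection above, the added sketch below (grid size and U 0 {\displaystyle U_{0}} are arbitrary choices) samples the field at t = 0 and confirms numerically that it is divergence-free and that its mean kinetic energy per unit mass is U 0 2 / 2 {\displaystyle U_{0}^{2}/2} , as stated there:

import numpy as np

# Sample the positive-helicity periodic field at t = 0 on a periodic grid.
L, n, U0 = 2.0*np.pi, 48, 1.0
k = 2.0*np.pi/L
x = np.linspace(0.0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
A = 4.0*np.sqrt(2.0)/(3.0*np.sqrt(3.0))

def comp(a, b, c):   # the pattern shared by the three components (cyclic in x, y, z)
    return A*U0*(np.sin(k*a - np.pi/3)*np.cos(k*b + np.pi/3)*np.sin(k*c + np.pi/2)
                 - np.cos(k*c - np.pi/3)*np.sin(k*a + np.pi/3)*np.sin(k*b + np.pi/2))

ux, uy, uz = comp(X, Y, Z), comp(Y, Z, X), comp(Z, X, Y)

d = L/n
def ddx(f, axis):    # periodic central difference
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis))/(2.0*d)

div = ddx(ux, 0) + ddx(uy, 1) + ddx(uz, 2)
print(np.abs(div).max())                      # small; only finite-difference error remains
print(np.mean(ux**2 + uy**2 + uz**2)/2.0)     # approximately U0**2/2 = 0.5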
=== Cartesian coordinates === From the general form of the Navier–Stokes, with the velocity vector expanded as u = ( u x , u y , u z ) {\textstyle \mathbf {u} =(u_{x},u_{y},u_{z})} , sometimes respectively named u {\textstyle u} , v {\textstyle v} , w {\textstyle w} , we may write the vector equation explicitly, x : ρ ( ∂ t u x + u x ∂ x u x + u y ∂ y u x + u z ∂ z u x ) = − ∂ x p + μ ( ∂ x 2 u x + ∂ y 2 u x + ∂ z 2 u x ) + 1 3 μ ∂ x ( ∂ x u x + ∂ y u y + ∂ z u z ) + ρ g x {\displaystyle {\begin{aligned}x:\ &\rho \left({\partial _{t}u_{x}}+u_{x}\,{\partial _{x}u_{x}}+u_{y}\,{\partial _{y}u_{x}}+u_{z}\,{\partial _{z}u_{x}}\right)\\&\quad =-\partial _{x}p+\mu \left({\partial _{x}^{2}u_{x}}+{\partial _{y}^{2}u_{x}}+{\partial _{z}^{2}u_{x}}\right)+{\frac {1}{3}}\mu \ \partial _{x}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{x}\\\end{aligned}}} y : ρ ( ∂ t u y + u x ∂ x u y + u y ∂ y u y + u z ∂ z u y ) = − ∂ y p + μ ( ∂ x 2 u y + ∂ y 2 u y + ∂ z 2 u y ) + 1 3 μ ∂ y ( ∂ x u x + ∂ y u y + ∂ z u z ) + ρ g y {\displaystyle {\begin{aligned}y:\ &\rho \left({\partial _{t}u_{y}}+u_{x}{\partial _{x}u_{y}}+u_{y}{\partial _{y}u_{y}}+u_{z}{\partial _{z}u_{y}}\right)\\&\quad =-{\partial _{y}p}+\mu \left({\partial _{x}^{2}u_{y}}+{\partial _{y}^{2}u_{y}}+{\partial _{z}^{2}u_{y}}\right)+{\frac {1}{3}}\mu \ \partial _{y}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{y}\\\end{aligned}}} z : ρ ( ∂ t u z + u x ∂ x u z + u y ∂ y u z + u z ∂ z u z ) = − ∂ z p + μ ( ∂ x 2 u z + ∂ y 2 u z + ∂ z 2 u z ) + 1 3 μ ∂ z ( ∂ x u x + ∂ y u y + ∂ z u z ) + ρ g z . {\displaystyle {\begin{aligned}z:\ &\rho \left({\partial _{t}u_{z}}+u_{x}{\partial _{x}u_{z}}+u_{y}{\partial _{y}u_{z}}+u_{z}{\partial _{z}u_{z}}\right)\\&\quad =-{\partial _{z}p}+\mu \left({\partial _{x}^{2}u_{z}}+{\partial _{y}^{2}u_{z}}+{\partial _{z}^{2}u_{z}}\right)+{\frac {1}{3}}\mu \ \partial _{z}\left({\partial _{x}u_{x}}+{\partial _{y}u_{y}}+{\partial _{z}u_{z}}\right)+\rho g_{z}.\end{aligned}}} Note that gravity has been accounted for as a body force, and the values of g x {\textstyle g_{x}} , g y {\textstyle g_{y}} , g z {\textstyle g_{z}} will depend on the orientation of gravity with respect to the chosen set of coordinates. The continuity equation reads: ∂ t ρ + ∂ x ( ρ u x ) + ∂ y ( ρ u y ) + ∂ z ( ρ u z ) = 0. {\displaystyle \partial _{t}\rho +\partial _{x}(\rho u_{x})+\partial _{y}(\rho u_{y})+\partial _{z}(\rho u_{z})=0.} When the flow is incompressible, ρ {\textstyle \rho } does not change for any fluid particle, and its material derivative vanishes: D ρ D t = 0 {\textstyle {\frac {\mathrm {D} \rho }{\mathrm {D} t}}=0} . The continuity equation is reduced to: ∂ x u x + ∂ y u y + ∂ z u z = 0. {\displaystyle \partial _{x}u_{x}+\partial _{y}u_{y}+\partial _{z}u_{z}=0.} Thus, for the incompressible version of the Navier–Stokes equation the second part of the viscous terms fall away (see Incompressible flow). This system of four equations comprises the most commonly used and studied form. Though comparatively more compact than other representations, this is still a nonlinear system of partial differential equations for which solutions are difficult to obtain. 
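To show how the explicit Cartesian component equations above are typically evaluated in practice, here is a minimal finite-difference sketch (the periodic grid, the sample velocity field and the constants are all illustrative assumptions) that assembles the x-momentum convective and viscous terms and the continuity residual:

import numpy as np

# Evaluate the x-momentum terms of the incompressible Cartesian equations on a
# periodic grid; all fields and constants below are made up for illustration.
n, L = 32, 2.0*np.pi
d = L/n
x = np.arange(n)*d
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
ux = np.sin(X)*np.cos(Y)                      # Taylor-Green-like sample field
uy = -np.cos(X)*np.sin(Y)
uz = np.zeros_like(ux)

def ddx(f, axis):                             # periodic central difference
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis))/(2.0*d)

def lap(f):                                   # periodic 3D Laplacian
    return sum(np.roll(f, -1, a) - 2.0*f + np.roll(f, 1, a) for a in range(3))/d**2

conv_x = ux*ddx(ux, 0) + uy*ddx(ux, 1) + uz*ddx(ux, 2)   # (u . grad) u_x
visc_x = lap(ux)                                          # Laplacian of u_x
div_u  = ddx(ux, 0) + ddx(uy, 1) + ddx(uz, 2)             # continuity residual

rho, mu, dpdx, gx = 1.0, 1.0e-3, 0.0, 0.0                 # assumed constants
rhs_x = -conv_x + (-dpdx + mu*visc_x)/rho + gx            # d u_x / dt for incompressible flow
print(np.abs(div_u).max())                                # ~0: the sample field is divergence-free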
=== Cylindrical coordinates === A change of variables on the Cartesian equations will yield the following momentum equations for r {\textstyle r} , ϕ {\textstyle \phi } , and z {\textstyle z} r : ρ ( ∂ t u r + u r ∂ r u r + u φ r ∂ φ u r + u z ∂ z u r − u φ 2 r ) = − ∂ r p + μ ( 1 r ∂ r ( r ∂ r u r ) + 1 r 2 ∂ φ 2 u r + ∂ z 2 u r − u r r 2 − 2 r 2 ∂ φ u φ ) + 1 3 μ ∂ r ( 1 r ∂ r ( r u r ) + 1 r ∂ φ u φ + ∂ z u z ) + ρ g r {\displaystyle {\begin{aligned}r:\ &\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{r}}+u_{z}{\partial _{z}u_{r}}-{\frac {u_{\varphi }^{2}}{r}}\right)\\&\quad =-{\partial _{r}p}\\&\qquad +\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{r}}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{r}}+{\partial _{z}^{2}u_{r}}-{\frac {u_{r}}{r^{2}}}-{\frac {2}{r^{2}}}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{r}\left({\frac {1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{r}\\[8px]\end{aligned}}} φ : ρ ( ∂ t u φ + u r ∂ r u φ + u φ r ∂ φ u φ + u z ∂ z u φ + u r u φ r ) = − 1 r ∂ φ p + μ ( 1 r ∂ r ( r ∂ r u φ ) + 1 r 2 ∂ φ 2 u φ + ∂ z 2 u φ − u φ r 2 + 2 r 2 ∂ φ u r ) + 1 3 μ 1 r ∂ φ ( 1 r ∂ r ( r u r ) + 1 r ∂ φ u φ + ∂ z u z ) + ρ g φ {\displaystyle {\begin{aligned}\varphi :\ &\rho \left({\partial _{t}u_{\varphi }}+u_{r}{\partial _{r}u_{\varphi }}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{\varphi }}+u_{z}{\partial _{z}u_{\varphi }}+{\frac {u_{r}u_{\varphi }}{r}}\right)\\&\quad =-{\frac {1}{r}}{\partial _{\varphi }p}\\&\qquad +\mu \left({\frac {1}{r}}\ \partial _{r}\left(r{\partial _{r}u_{\varphi }}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{\varphi }}+{\partial _{z}^{2}u_{\varphi }}-{\frac {u_{\varphi }}{r^{2}}}+{\frac {2}{r^{2}}}{\partial _{\varphi }u_{r}}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r}}\partial _{\varphi }\left({\frac {1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{\varphi }\\[8px]\end{aligned}}} z : ρ ( ∂ t u z + u r ∂ r u z + u φ r ∂ φ u z + u z ∂ z u z ) = − ∂ z p + μ ( 1 r ∂ r ( r ∂ r u z ) + 1 r 2 ∂ φ 2 u z + ∂ z 2 u z ) + 1 3 μ ∂ z ( 1 r ∂ r ( r u r ) + 1 r ∂ φ u φ + ∂ z u z ) + ρ g z . {\displaystyle {\begin{aligned}z:\ &\rho \left({\partial _{t}u_{z}}+u_{r}{\partial _{r}u_{z}}+{\frac {u_{\varphi }}{r}}{\partial _{\varphi }u_{z}}+u_{z}{\partial _{z}u_{z}}\right)\\&\quad =-{\partial _{z}p}\\&\qquad +\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{z}}\right)+{\frac {1}{r^{2}}}{\partial _{\varphi }^{2}u_{z}}+{\partial _{z}^{2}u_{z}}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{z}\left({\frac {1}{r}}{\partial _{r}\left(ru_{r}\right)}+{\frac {1}{r}}{\partial _{\varphi }u_{\varphi }}+{\partial _{z}u_{z}}\right)\\&\qquad +\rho g_{z}.\end{aligned}}} The gravity components will generally not be constants, however for most applications either the coordinates are chosen so that the gravity components are constant or else it is assumed that gravity is counteracted by a pressure field (for example, flow in horizontal pipe is treated normally without gravity and without a vertical pressure gradient). The continuity equation is: ∂ t ρ + 1 r ∂ r ( ρ r u r ) + 1 r ∂ φ ( ρ u φ ) + ∂ z ( ρ u z ) = 0. 
{\displaystyle {\partial _{t}\rho }+{\frac {1}{r}}\partial _{r}\left(\rho ru_{r}\right)+{\frac {1}{r}}{\partial _{\varphi }\left(\rho u_{\varphi }\right)}+{\partial _{z}\left(\rho u_{z}\right)}=0.} This cylindrical representation of the incompressible Navier–Stokes equations is the second most commonly seen (the first being Cartesian above). Cylindrical coordinates are chosen to take advantage of symmetry, so that a velocity component can disappear. A very common case is axisymmetric flow with the assumption of no tangential velocity ( u ϕ = 0 {\textstyle u_{\phi }=0} ), and the remaining quantities are independent of ϕ {\textstyle \phi } : ρ ( ∂ t u r + u r ∂ r u r + u z ∂ z u r ) = − ∂ r p + μ ( 1 r ∂ r ( r ∂ r u r ) + ∂ z 2 u r − u r r 2 ) + ρ g r ρ ( ∂ t u z + u r ∂ r u z + u z ∂ z u z ) = − ∂ z p + μ ( 1 r ∂ r ( r ∂ r u z ) + ∂ z 2 u z ) + ρ g z 1 r ∂ r ( r u r ) + ∂ z u z = 0. {\displaystyle {\begin{aligned}\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+u_{z}{\partial _{z}u_{r}}\right)&=-{\partial _{r}p}+\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{r}}\right)+{\partial _{z}^{2}u_{r}}-{\frac {u_{r}}{r^{2}}}\right)+\rho g_{r}\\\rho \left({\partial _{t}u_{z}}+u_{r}{\partial _{r}u_{z}}+u_{z}{\partial _{z}u_{z}}\right)&=-{\partial _{z}p}+\mu \left({\frac {1}{r}}\partial _{r}\left(r{\partial _{r}u_{z}}\right)+{\partial _{z}^{2}u_{z}}\right)+\rho g_{z}\\{\frac {1}{r}}\partial _{r}\left(ru_{r}\right)+{\partial _{z}u_{z}}&=0.\end{aligned}}} === Spherical coordinates === In spherical coordinates, the r {\textstyle r} , ϕ {\textstyle \phi } , and θ {\textstyle \theta } momentum equations are (note the convention used: θ {\textstyle \theta } is polar angle, or colatitude, 0 ≤ θ ≤ π {\textstyle 0\leq \theta \leq \pi } ): r : ρ ( ∂ t u r + u r ∂ r u r + u φ r sin ⁡ θ ∂ φ u r + u θ r ∂ θ u r − u φ 2 + u θ 2 r ) = − ∂ r p + μ ( 1 r 2 ∂ r ( r 2 ∂ r u r ) + 1 r 2 sin 2 ⁡ θ ∂ φ 2 u r + 1 r 2 sin ⁡ θ ∂ θ ( sin ⁡ θ ∂ θ u r ) − 2 u r + ∂ θ u θ + u θ cot ⁡ θ r 2 − 2 r 2 sin ⁡ θ ∂ φ u φ ) + 1 3 μ ∂ r ( 1 r 2 ∂ r ( r 2 u r ) + 1 r sin ⁡ θ ∂ θ ( u θ sin ⁡ θ ) + 1 r sin ⁡ θ ∂ φ u φ ) + ρ g r {\displaystyle {\begin{aligned}r:\ &\rho \left({\partial _{t}u_{r}}+u_{r}{\partial _{r}u_{r}}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{r}}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{r}}-{\frac {u_{\varphi }^{2}+u_{\theta }^{2}}{r}}\right)\\&\quad =-{\partial _{r}p}\\&\qquad +\mu \left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{r}}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{r}}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{r}}\right)-2{\frac {u_{r}+{\partial _{\theta }u_{\theta }}+u_{\theta }\cot \theta }{r^{2}}}-{\frac {2}{r^{2}\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +{\frac {1}{3}}\mu \partial _{r}\left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{r}\\[8px]\end{aligned}}} φ : ρ ( ∂ t u φ + u r ∂ r u φ + u φ r sin ⁡ θ ∂ φ u φ + u θ r ∂ θ u φ + u r u φ + u φ u θ cot ⁡ θ r ) = − 1 r sin ⁡ θ ∂ φ p + μ ( 1 r 2 ∂ r ( r 2 ∂ r u φ ) + 1 r 2 sin 2 ⁡ θ ∂ φ 2 u φ + 1 r 2 sin ⁡ θ ∂ θ ( sin ⁡ θ ∂ θ u φ ) + 2 sin ⁡ θ ∂ φ u r + 2 cos ⁡ θ ∂ φ u θ − u φ r 2 sin 2 ⁡ θ ) + 1 3 μ 1 r sin ⁡ θ ∂ φ ( 1 r 2 ∂ r ( r 2 u r ) + 1 r sin ⁡ θ ∂ θ ( u θ sin ⁡ θ ) + 1 r sin ⁡ θ ∂ φ u φ ) + ρ g φ {\displaystyle 
{\begin{aligned}\varphi :\ &\rho \left({\partial _{t}u_{\varphi }}+u_{r}{\partial _{r}u_{\varphi }}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{\varphi }}+{\frac {u_{r}u_{\varphi }+u_{\varphi }u_{\theta }\cot \theta }{r}}\right)\\&\quad =-{\frac {1}{r\sin \theta }}{\partial _{\varphi }p}\\&\qquad +\mu \left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{\varphi }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{\varphi }}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{\varphi }}\right)+{\frac {2\sin \theta {\partial _{\varphi }u_{r}}+2\cos \theta {\partial _{\varphi }u_{\theta }}-u_{\varphi }}{r^{2}\sin ^{2}\theta }}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r\sin \theta }}\partial _{\varphi }\left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{\varphi }\\[8px]\end{aligned}}} θ : ρ ( ∂ t u θ + u r ∂ r u θ + u φ r sin ⁡ θ ∂ φ u θ + u θ r ∂ θ u θ + u r u θ − u φ 2 cot ⁡ θ r ) = − 1 r ∂ θ p + μ ( 1 r 2 ∂ r ( r 2 ∂ r u θ ) + 1 r 2 sin 2 ⁡ θ ∂ φ 2 u θ + 1 r 2 sin ⁡ θ ∂ θ ( sin ⁡ θ ∂ θ u θ ) + 2 r 2 ∂ θ u r − u θ + 2 cos ⁡ θ ∂ φ u φ r 2 sin 2 ⁡ θ ) + 1 3 μ 1 r ∂ θ ( 1 r 2 ∂ r ( r 2 u r ) + 1 r sin ⁡ θ ∂ θ ( u θ sin ⁡ θ ) + 1 r sin ⁡ θ ∂ φ u φ ) + ρ g θ . {\displaystyle {\begin{aligned}\theta :\ &\rho \left({\partial _{t}u_{\theta }}+u_{r}{\partial _{r}u_{\theta }}+{\frac {u_{\varphi }}{r\sin \theta }}{\partial _{\varphi }u_{\theta }}+{\frac {u_{\theta }}{r}}{\partial _{\theta }u_{\theta }}+{\frac {u_{r}u_{\theta }-u_{\varphi }^{2}\cot \theta }{r}}\right)\\&\quad =-{\frac {1}{r}}{\partial _{\theta }p}\\&\qquad +\mu \left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}{\partial _{r}u_{\theta }}\right)+{\frac {1}{r^{2}\sin ^{2}\theta }}{\partial _{\varphi }^{2}u_{\theta }}+{\frac {1}{r^{2}\sin \theta }}\partial _{\theta }\left(\sin \theta {\partial _{\theta }u_{\theta }}\right)+{\frac {2}{r^{2}}}{\partial _{\theta }u_{r}}-{\frac {u_{\theta }+2\cos \theta {\partial _{\varphi }u_{\varphi }}}{r^{2}\sin ^{2}\theta }}\right)\\&\qquad +{\frac {1}{3}}\mu {\frac {1}{r}}\partial _{\theta }\left({\frac {1}{r^{2}}}\partial _{r}\left(r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(u_{\theta }\sin \theta \right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }u_{\varphi }}\right)\\&\qquad +\rho g_{\theta }.\end{aligned}}} Mass continuity will read: ∂ t ρ + 1 r 2 ∂ r ( ρ r 2 u r ) + 1 r sin ⁡ θ ∂ φ ( ρ u φ ) + 1 r sin ⁡ θ ∂ θ ( sin ⁡ θ ρ u θ ) = 0. {\displaystyle {\partial _{t}\rho }+{\frac {1}{r^{2}}}\partial _{r}\left(\rho r^{2}u_{r}\right)+{\frac {1}{r\sin \theta }}{\partial _{\varphi }(\rho u_{\varphi })}+{\frac {1}{r\sin \theta }}\partial _{\theta }\left(\sin \theta \rho u_{\theta }\right)=0.} These equations could be (slightly) compacted by, for example, factoring 1 r 2 {\textstyle {\frac {1}{r^{2}}}} from the viscous terms. However, doing so would undesirably alter the structure of the Laplacian and other quantities. == See also == == Citations == == General references == Acheson, D. J. (1990), Elementary Fluid Dynamics, Oxford Applied Mathematics and Computing Science Series, Oxford University Press, ISBN 978-0-19-859679-0 Batchelor, G. K. (1967), An Introduction to Fluid Dynamics, Cambridge University Press, ISBN 978-0-521-66396-0 Currie, I. G. 
(1974), Fundamental Mechanics of Fluids, McGraw-Hill, ISBN 978-0-07-015000-3 Girault, V. and Raviart, P.A. (1986), Finite Element Methods for Navier–Stokes Equations: Theory and Algorithms, Springer Series in Computational Mathematics, Springer-Verlag Landau, L. D.; Lifshitz, E. M. (1987), Fluid Mechanics, Course of Theoretical Physics Volume 6 (2nd revised ed.), Pergamon Press, ISBN 978-0-08-033932-0, OCLC 15017127 Polyanin, A. D.; Kutepov, A. M.; Vyazmin, A. V.; Kazenin, D. A. (2002), Hydrodynamics, Mass and Heat Transfer in Chemical Engineering, Taylor & Francis, London, ISBN 978-0-415-27237-7 Rhyming, Inge L. (1991), Dynamique des fluides, Presses polytechniques et universitaires romandes Smits, Alexander J. (2014), A Physical Introduction to Fluid Mechanics, Wiley, ISBN 0-471-25349-9 Temam, Roger (1984), Navier–Stokes Equations: Theory and Numerical Analysis, AMS Chelsea Publishing, ISBN 978-0-8218-2737-6 Milne-Thomson, L.M. (1962), Theoretical Hydrodynamics, Macmillan & Co Ltd. Tartar, L. (2006), An Introduction to Navier–Stokes Equation and Oceanography, Springer, ISBN 3-540-35743-2 Birkhoff, Garrett (1960), Hydrodynamics, Princeton University Press Campos, D. (ed.) (2017), Handbook on Navier–Stokes Equations: Theory and Applied Analysis, Nova Science Publishers, ISBN 978-1-53610-292-5 Doering, C.R. and Gibbon, J.D. (1995), Applied Analysis of the Navier–Stokes Equations, Cambridge University Press, ISBN 0-521-44557-1 Basset, A.B. (1888), Hydrodynamics, Volume I and II, Cambridge: Deighton, Bell and Co Fox, R.W., McDonald, A.T. and Pritchard, P.J. (2004), Introduction to Fluid Mechanics, John Wiley and Sons, ISBN 0-471-2023-2 Foias, C., Manley, O., Rosa, R. and Temam, R. (2004), Navier–Stokes Equations and Turbulence, Cambridge University Press, ISBN 0-521-36032-3 Lions, P.-L. (1998), Mathematical Topics in Fluid Mechanics, Volume 1 and 2, Clarendon Press, ISBN 0-19-851488-3 Deville, M.O. and Gatski, T.B. (2012), Mathematical Modeling for Complex Fluids and Flows, Springer, ISBN 978-3-642-25294-5 Kochin, N.E., Kibel, I.A. and Roze, N.V. (1964), Theoretical Hydromechanics, John Wiley & Sons, Ltd. Lamb, H. (1879), Hydrodynamics, Cambridge University Press White, Frank M. (2006), Viscous Fluid Flow, McGraw-Hill, ISBN 978-0-07-124493-0 == External links == Simplified derivation of the Navier–Stokes equations Three-dimensional unsteady form of the Navier–Stokes equations Glenn Research Center, NASA
Wikipedia/Navier–Stokes_equation
A non-contact force is a force which acts on an object without coming physically in contact with it. The most familiar non-contact force is gravity, which confers weight. In contrast, a contact force is a force which acts on an object by physically coming into contact with it. All four known fundamental interactions are non-contact forces: Gravity, the force of attraction that exists among all bodies that have mass. The gravitational force exerted on each body by the other is proportional to the product of the two masses divided by the square of the distance between them (see the short numerical sketch below). Electromagnetism is the force that causes the interaction between electrically charged particles; the areas in which this happens are called electromagnetic fields. Examples of this force include electricity, magnetism, radio waves, microwaves, infrared, visible light, X-rays and gamma rays. Electromagnetism mediates all chemical, biological, electrical and electronic processes. Strong nuclear force: Unlike gravity and electromagnetism, the strong nuclear force is a short-range force that acts between fundamental particles within a nucleus. It is charge independent and acts equally between a proton and a proton, a neutron and a neutron, and a proton and a neutron. The strong nuclear force is the strongest force in nature; however, its range is small (acting only over distances of the order of 10⁻¹⁵ m). The strong nuclear force mediates both nuclear fission and fusion reactions. Weak nuclear force: The weak nuclear force mediates the β decay of a neutron, in which the neutron decays into a proton and in the process emits a β particle and an uncharged particle called a neutrino. As a result of mediating the β decay process, the weak nuclear force plays a key role in supernovas. Both the strong and weak forces form an important part of quantum mechanics. == See also == Tension Body force Surface force Action at a distance == References ==
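The inverse-square dependence described for gravity above is Newton's law of universal gravitation, F = G m1 m2 / r². A minimal sketch follows; the Earth and Moon values are rounded, assumed inputs for illustration, not figures from the article.

```python
# Minimal sketch of the inverse-square law stated above:
# F = G * m1 * m2 / r**2 (Newton's law of universal gravitation).
# The Earth/Moon values are rounded, illustrative inputs.
G = 6.674e-11        # gravitational constant, N m^2 kg^-2
m_earth = 5.97e24    # kg, approximate
m_moon = 7.35e22     # kg, approximate
r = 3.84e8           # mean Earth-Moon distance in m, approximate

force = G * m_earth * m_moon / r**2
print(f"Earth-Moon gravitational force ~ {force:.2e} N")   # about 2e20 N
```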
Wikipedia/Non-contact_force
Chemistry is the scientific study of the properties and behavior of matter. It is a physical science within the natural sciences that studies the chemical elements that make up matter and compounds made of atoms, molecules and ions: their composition, structure, properties, behavior and the changes they undergo during reactions with other substances. Chemistry also addresses the nature of chemical bonds in chemical compounds. In the scope of its subject, chemistry occupies an intermediate position between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. For example, chemistry explains aspects of plant growth (botany), the formation of igneous rocks (geology), how atmospheric ozone is formed and how environmental pollutants are degraded (ecology), the properties of the soil on the Moon (cosmochemistry), how medications work (pharmacology), and how to collect DNA evidence at a crime scene (forensics). Chemistry has existed under various names since ancient times. It has evolved, and now chemistry encompasses various areas of specialisation, or subdisciplines, that continue to increase in number and interrelate to create further interdisciplinary fields of study. The applications of various fields of chemistry are used frequently for economic purposes in the chemical industry. == Etymology == The word chemistry comes from a modification during the Renaissance of the word alchemy, which referred to an earlier set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy, mysticism, and medicine. Alchemy is often associated with the quest to turn lead or other base metals into gold, though alchemists were also interested in many of the questions of modern chemistry. The modern word alchemy in turn is derived from the Arabic word al-kīmīā (الكیمیاء). This may have Egyptian origins since al-kīmīā is derived from the Ancient Greek χημία, which is in turn derived from the word Kemet, which is the ancient name of Egypt in the Egyptian language. Alternately, al-kīmīā may derive from χημεία 'cast together'. == Modern principles == The current model of atomic structure is the quantum mechanical model. Traditional chemistry starts with the study of elementary particles, atoms, molecules, substances, metals, crystals and other aggregates of matter. Matter can be studied in solid, liquid, gas and plasma states, in isolation or in combination. The interactions, reactions and transformations that are studied in chemistry are usually the result of interactions between atoms, leading to rearrangements of the chemical bonds which hold atoms together. Such behaviors are studied in a chemistry laboratory. The chemistry laboratory stereotypically uses various forms of laboratory glassware. However glassware is not central to chemistry, and a great deal of experimental (as well as applied/industrial) chemistry is done without it. A chemical reaction is a transformation of some substances into one or more different substances. The basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. It can be symbolically depicted through a chemical equation, which usually involves atoms as subjects. The number of atoms on the left and the right in the equation for a chemical transformation is equal. 
(When the number of atoms on either side is unequal, the transformation is referred to as a nuclear reaction or radioactive decay.) The type of chemical reactions a substance may undergo and the energy changes that may accompany it are constrained by certain basic rules, known as chemical laws. Energy and entropy considerations are invariably important in almost all chemical studies. Chemical substances are classified in terms of their structure, phase, as well as their chemical compositions. They can be analyzed using the tools of chemical analysis, e.g. spectroscopy and chromatography. Scientists engaged in chemical research are known as chemists. Most chemists specialize in one or more sub-disciplines. Several concepts are essential for the study of chemistry; some of them are: === Matter === In chemistry, matter is defined as anything that has rest mass and volume (it takes up space) and is made up of particles. The particles that make up matter have rest mass as well – not all particles have rest mass, such as the photon. Matter can be a pure chemical substance or a mixture of substances. ==== Atom ==== The atom is the basic unit of chemistry. It consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. The nucleus is made up of positively charged protons and uncharged neutrons (together called nucleons), while the electron cloud consists of negatively charged electrons which orbit the nucleus. In a neutral atom, the negatively charged electrons balance out the positive charge of the protons. The nucleus is dense; the mass of a nucleon is approximately 1,836 times that of an electron, yet the radius of an atom is about 10,000 times that of its nucleus. The atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state(s), coordination number, and preferred types of bonds to form (e.g., metallic, ionic, covalent). ==== Element ==== A chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol Z. The mass number is the sum of the number of protons and neutrons in a nucleus. Although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number; atoms of an element which have different mass numbers are known as isotopes. For example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13. The standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. The periodic table is arranged in groups, or columns, and periods, or rows. The periodic table is useful in identifying periodic trends. ==== Compound ==== A compound is a pure chemical substance composed of more than one element. The properties of a compound bear little similarity to those of its elements. The standard nomenclature of compounds is set by the International Union of Pure and Applied Chemistry (IUPAC). Organic compounds are named according to the organic nomenclature system. The names for inorganic compounds are created according to the inorganic nomenclature system. When a compound has more than one component, then they are divided into two classes, the electropositive and the electronegative components. 
In addition the Chemical Abstracts Service (CAS) has devised a method to index chemical substances. In this scheme each chemical substance is identifiable by a number known as its CAS registry number. ==== Molecule ==== A molecule is the smallest indivisible portion of a pure chemical substance that has its unique set of chemical properties, that is, its potential to undergo a certain set of chemical reactions with other substances. However, this definition only works well for substances that are composed of molecules, which is not true of many substances (see below). Molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs. Thus, molecules exist as electrically neutral units, unlike ions. When this rule is broken, giving the "molecule" a charge, the result is sometimes named a molecular ion or a polyatomic ion. However, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well-separated form, such as a directed beam in a vacuum in a mass spectrometer. Charged polyatomic collections residing in solids (for example, common sulfate or nitrate ions) are generally not considered "molecules" in chemistry. Some molecules contain one or more unpaired electrons, creating radicals. Most radicals are comparatively reactive, but some, such as nitric oxide (NO) can be stable. The "inert" or noble gas elements (helium, neon, argon, krypton, xenon and radon) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. Identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. However, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the Earth are chemical compounds without molecules. These other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. Instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. Examples of such substances are mineral salts (such as table salt), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite. One of the main characteristics of a molecule is its geometry often called its structure. While the structure of diatomic, triatomic or tetra-atomic molecules may be trivial, (linear, angular pyramidal etc.) the structure of polyatomic molecules, that are constituted of more than six atoms (of several elements) can be crucial for its chemical nature. ==== Substance and mixture ==== A chemical substance is a kind of matter with a definite composition and set of properties. A collection of substances is called a mixture. Examples of mixtures are air and alloys. ==== Mole and amount of substance ==== The mole is a unit of measurement that denotes an amount of substance (also called chemical amount). One mole is defined to contain exactly 6.02214076×1023 particles (atoms, molecules, ions, or electrons), where the number of particles per mole is known as the Avogadro constant. 
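Because the mole is defined through the Avogadro constant quoted above, converting between mass, amount of substance and particle count is a short calculation. The sketch below assumes a rounded molar mass for water (about 18.015 g/mol) as an illustrative input; it is not a value taken from the article.

```python
# Minimal sketch: mass -> amount of substance -> particle count for water,
# using the Avogadro constant quoted above. The molar mass of water
# (~18.015 g/mol) is a rounded, assumed input.
N_A = 6.02214076e23        # particles per mole (exact by definition)
molar_mass_h2o = 18.015    # g/mol, approximate

mass_g = 36.0                       # grams of water in the sample
moles = mass_g / molar_mass_h2o     # amount of substance in mol
molecules = moles * N_A             # number of H2O molecules

print(f"{mass_g} g of water is {moles:.3f} mol, i.e. {molecules:.3e} molecules")
```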
Molar concentration is the amount of a particular substance per volume of solution, and is commonly reported in mol/dm3. === Phase === In addition to the specific chemical properties that distinguish different chemical classifications, chemicals can exist in several phases. For the most part, the chemical classifications are independent of these bulk phase classifications; however, some more exotic phases are incompatible with certain chemical properties. A phase is a set of states of a chemical system that have similar bulk structural properties, over a range of conditions, such as pressure or temperature. Physical properties, such as density and refractive index tend to fall within values characteristic of the phase. The phase of matter is defined by the phase transition, which is when energy put into or taken out of the system goes into rearranging the structure of the system, instead of changing the bulk conditions. Sometimes the distinction between phases can be continuous instead of having a discrete boundary; in this case the matter is considered to be in a supercritical state. When three states meet based on the conditions, it is known as a triple point and since this is invariant, it is a convenient way to define a set of conditions. The most familiar examples of phases are solids, liquids, and gases. Many substances exhibit multiple solid phases. For example, there are three phases of solid iron (alpha, gamma, and delta) that vary based on temperature and pressure. A principal difference between solid phases is the crystal structure, or arrangement, of the atoms. Another phase commonly encountered in the study of chemistry is the aqueous phase, which is the state of substances dissolved in aqueous solution (that is, in water). Less familiar phases include plasmas, Bose–Einstein condensates and fermionic condensates and the paramagnetic and ferromagnetic phases of magnetic materials. While most familiar phases deal with three-dimensional systems, it is also possible to define analogs in two-dimensional systems, which has received attention for its relevance to systems in biology. === Bonding === Atoms sticking together in molecules or crystals are said to be bonded with one another. A chemical bond may be visualized as the multipole balance between the positive charges in the nuclei and the negative charges oscillating about them. More than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom. The chemical bond can be a covalent bond, an ionic bond, a hydrogen bond or just because of Van der Waals force. Each of these kinds of bonds is ascribed to some potential. These potentials create the interactions which hold atoms together in molecules or crystals. In many simple compounds, valence bond theory, the Valence Shell Electron Pair Repulsion model (VSEPR), and the concept of oxidation number can be used to explain molecular structure and composition. An ionic bond is formed when a metal loses one or more of its electrons, becoming a positively charged cation, and the electrons are then gained by the non-metal atom, becoming a negatively charged anion. The two oppositely charged ions attract one another, and the ionic bond is the electrostatic force of attraction between them. For example, sodium (Na), a metal, loses one electron to become an Na+ cation while chlorine (Cl), a non-metal, gains this electron to become Cl−. 
The ions are held together due to electrostatic attraction, and that compound sodium chloride (NaCl), or common table salt, is formed. In a covalent bond, one or more pairs of valence electrons are shared by two atoms: the resulting electrically neutral group of bonded atoms is termed a molecule. Atoms will share valence electrons in such a way as to create a noble gas electron configuration (eight electrons in their outermost shell) for each atom. Atoms that tend to combine in such a way that they each have eight electrons in their valence shell are said to follow the octet rule. However, some elements like hydrogen and lithium need only two electrons in their outermost shell to attain this stable configuration; these atoms are said to follow the duet rule, and in this way they are reaching the electron configuration of the noble gas helium, which has two electrons in its outer shell. Similarly, theories from classical physics can be used to predict many ionic structures. With more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used. === Energy === In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants. A reaction is said to be exergonic if the final state is lower on the energy scale than the initial state; in the case of endergonic reactions the situation is the reverse. A reaction is said to be exothermic if the reaction releases heat to the surroundings; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. Chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at given temperature T) is related to the activation energy E, by the Boltzmann's population factor e − E / k T {\displaystyle e^{-E/kT}} – that is the probability of a molecule to have energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. A related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. A reaction is feasible only if the total change in the Gibbs free energy is negative, Δ G ≤ 0 {\displaystyle \Delta G\leq 0\,} ; if it is equal to zero the chemical reaction is said to be at equilibrium. There exist only limited possible states of energy for electrons, atoms and molecules. These are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. The atoms/molecules in a higher energy state are said to be excited. 
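The Boltzmann population factor e^(−E/kT) mentioned earlier in this section makes reaction rates depend very strongly on temperature. The sketch below evaluates that factor at a few temperatures, assuming an illustrative activation energy of 50 kJ/mol converted to a per-molecule energy with the Avogadro constant; the numbers are not from the text.

```python
# Minimal sketch of the Boltzmann population factor e^(-E/kT) discussed above.
# The activation energy (50 kJ/mol) is an assumed, illustrative value,
# converted to a per-molecule energy with the Avogadro constant.
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
N_A = 6.02214076e23    # Avogadro constant, 1/mol
E = 50e3 / N_A         # activation energy per molecule, J

def boltzmann_factor(temperature_k):
    """Fraction exp(-E / kT) of molecules with energy of at least E."""
    return math.exp(-E / (k_B * temperature_k))

for T in (300.0, 310.0, 350.0):
    print(f"T = {T:5.1f} K -> e^(-E/kT) = {boltzmann_factor(T):.3e}")
# Near room temperature, a ~10 K rise roughly doubles the factor for this E,
# which is the strong temperature dependence captured by the Arrhenius equation.
```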
The molecules/atoms of substance in an excited energy state are often much more reactive; that is, more amenable to chemical reactions. The phase of a substance is invariably determined by its energy and the energy of its surroundings. When the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water (H2O); a liquid at room temperature because its molecules are bound by hydrogen bonds. Whereas hydrogen sulfide (H2S) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole–dipole interactions. The transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. However, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. Thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. For example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy. The existence of characteristic energy levels for different chemical substances is useful for their identification by the analysis of spectral lines. Different kinds of spectra are often used in chemical spectroscopy, e.g. IR, microwave, NMR, ESR, etc. Spectroscopy is also used to identify the composition of remote objects – like stars and distant galaxies – by analyzing their radiation spectra. The term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. === Reaction === When a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. A chemical reaction is therefore a concept related to the "reaction" of a substance when it comes in close contact with another, whether as a mixture or a solution; exposure to some form of energy, or both. It results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels—often laboratory glassware. Chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. Chemical reactions usually involve the making or breaking of chemical bonds. Oxidation, reduction, dissociation, acid–base neutralization and molecular rearrangement are some examples of common chemical reactions. A chemical reaction can be symbolically depicted through a chemical equation. While in a non-nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. The sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. A chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. 
Many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. Reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. Many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. Several empirical rules, like the Woodward–Hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. According to the IUPAC gold book, a chemical reaction is "a process that results in the interconversion of chemical species." Accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. An additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. Such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities (i.e. 'microscopic chemical events'). === Ions and salts === An ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. When an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. When an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. Cations and anions can form a crystalline lattice of neutral salts, such as the Na+ and Cl− ions forming sodium chloride, or NaCl. Examples of polyatomic ions that do not split up during acid–base reactions are hydroxide (OH−) and phosphate (PO43−). Plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. === Acidity and basicity === A substance can often be classified as an acid or a base. There are several different theories which explain acid–base behavior. The simplest is Arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. According to Brønsted–Lowry acid–base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction; by extension, a base is the substance which receives that hydrogen ion. A third common theory is Lewis acid–base theory, which is based on the formation of new chemical bonds. Lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. There are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept. Acid strength is commonly measured by two methods. One measurement, based on the Arrhenius definition of acidity, is pH, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. Thus, solutions that have a low pH have a high hydronium ion concentration and can be said to be more acidic. The other measurement, based on the Brønsted–Lowry definition, is the acid dissociation constant (Ka), which measures the relative ability of a substance to act as an acid under the Brønsted–Lowry definition of an acid. That is, substances with a higher Ka are more likely to donate hydrogen ions in chemical reactions than those with lower Ka values. 
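Both acidity measures described above are negative base-10 logarithms, so a short worked example makes the scales concrete. The sketch below computes pH from an assumed hydronium-ion concentration and pKa from an assumed Ka (roughly that of acetic acid); the numerical inputs are illustrative, not taken from the article.

```python
# Minimal sketch of the two acidity measures described above:
# pH = -log10([H3O+]) and pKa = -log10(Ka). The concentrations and the Ka
# value (roughly that of acetic acid) are assumed, illustrative inputs.
import math

def pH(hydronium_molar):
    """pH from the hydronium-ion concentration in mol/L."""
    return -math.log10(hydronium_molar)

def pKa(Ka):
    """pKa from the acid dissociation constant."""
    return -math.log10(Ka)

print(f"pH of 1.0e-3 M H3O+ : {pH(1.0e-3):.2f}")    # 3.00, acidic
print(f"pH of 1.0e-7 M H3O+ : {pH(1.0e-7):.2f}")    # 7.00, neutral at 25 C
print(f"pKa for Ka = 1.8e-5 : {pKa(1.8e-5):.2f}")   # about 4.74
```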
=== Redox === Redox (reduction-oxidation) reactions include all chemical reactions in which atoms have their oxidation state changed by either gaining electrons (reduction) or losing electrons (oxidation). Substances that have the ability to oxidize other substances are said to be oxidative and are known as oxidizing agents, oxidants or oxidizers. An oxidant removes electrons from another substance. Similarly, substances that have the ability to reduce other substances are said to be reductive and are known as reducing agents, reductants, or reducers. A reductant transfers electrons to another substance and is thus oxidized itself. And because it "donates" electrons it is also called an electron donor. Oxidation and reduction properly refer to a change in oxidation number—the actual transfer of electrons may never occur. Thus, oxidation is better defined as an increase in oxidation number, and reduction as a decrease in oxidation number. === Equilibrium === Although the concept of equilibrium is widely used across sciences, in the context of chemistry, it arises whenever a number of different states of the chemical composition are possible, as for example, in a mixture of several chemical compounds that can react with one another, or when a substance can be present in more than one kind of phase. A system of chemical substances at equilibrium, even though having an unchanging composition, is most often not static; molecules of the substances continue to react with one another thus giving rise to a dynamic equilibrium. Thus the concept describes the state in which the parameters such as chemical composition remain unchanged over time. === Chemical laws === Chemical reactions are governed by certain laws, which have become fundamental concepts in chemistry. Some of them are: == History == The history of chemistry spans a period from the ancient past to the present. Since several millennia BC, civilizations were using technologies that would eventually form the basis of the various branches of chemistry. Examples include extracting metals from ores, making pottery and glazes, fermenting beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into soap, making glass, and making alloys like bronze. Chemistry was preceded by its protoscience, alchemy, which operated a non-scientific approach to understanding the constituents of matter and their interactions. Despite being unsuccessful in explaining the nature of matter and its transformations, alchemists set the stage for modern chemistry by performing experiments and recording the results. Robert Boyle, although skeptical of elements and convinced of alchemy, played a key part in elevating the "sacred art" as an independent, fundamental and philosophical discipline in his work The Sceptical Chymist (1661). While both alchemy and chemistry are concerned with matter and its transformations, the crucial difference was given by the scientific method that chemists employed in their work. Chemistry, as a body of knowledge distinct from alchemy, became an established science with the work of Antoine Lavoisier, who developed a law of conservation of mass that demanded careful measurement and quantitative observations of chemical phenomena. The history of chemistry afterwards is intertwined with the history of thermodynamics, especially through the work of Willard Gibbs. === Definition === The definition of chemistry has changed over time, as new discoveries and theories add to the functionality of the science. 
The term "chymistry", in the view of noted scientist Robert Boyle in 1661, meant the subject of the material principles of mixed bodies. In 1663, the chemist Christopher Glaser described "chymistry" as a scientific art, by which one learns to dissolve bodies, and draw from them the different substances on their composition, and how to unite them again, and exalt them to a higher perfection. The 1730 definition of the word "chemistry", as used by Georg Ernst Stahl, meant the art of resolving mixed, compound, or aggregate bodies into their principles; and of composing such bodies from those principles. In 1837, Jean-Baptiste Dumas considered the word "chemistry" to refer to the science concerned with the laws and effects of molecular forces. This definition further evolved until, in 1947, it came to mean the science of substances: their structure, their properties, and the reactions that change them into other substances—a characterization accepted by Linus Pauling. More recently, in 1998, Professor Raymond Chang broadened the definition of "chemistry" to mean the study of matter and the changes it undergoes. === Background === Early civilizations, such as the Egyptians, Babylonians, and Indians, amassed practical knowledge concerning the arts of metallurgy, pottery and dyes, but did not develop a systematic theory. A basic chemical hypothesis first emerged in Classical Greece with the theory of four elements as propounded definitively by Aristotle stating that fire, air, earth and water were the fundamental elements from which everything is formed as a combination. Greek atomism dates back to 440 BC, arising in works by philosophers such as Democritus and Epicurus. In 50 BCE, the Roman philosopher Lucretius expanded upon the theory in his poem De rerum natura (On The Nature of Things). Unlike modern concepts of science, Greek atomism was purely philosophical in nature, with little concern for empirical observations and no concern for chemical experiments. An early form of the idea of conservation of mass is the notion that "Nothing comes from nothing" in Ancient Greek philosophy, which can be found in Empedocles (approx. 4th century BC): "For it is impossible for anything to come to be from what is not, and it cannot be brought about or heard of that what is should be utterly destroyed." and Epicurus (3rd century BC), who, describing the nature of the Universe, wrote that "the totality of things was always such as it is now, and always will be". In the Hellenistic world the art of alchemy first proliferated, mingling magic and occultism into the study of natural substances with the ultimate goal of transmuting elements into gold and discovering the elixir of eternal life. Work, particularly the development of distillation, continued in the early Byzantine period with the most famous practitioner being the 4th century Greek-Egyptian Zosimos of Panopolis. Alchemy continued to be developed and practised throughout the Arab world after the Muslim conquests, and from there, and from the Byzantine remnants, diffused into medieval and Renaissance Europe through Latin translations. The Arabic works attributed to Jabir ibn Hayyan introduced a systematic classification of chemical substances, and provided instructions for deriving an inorganic compound (sal ammoniac or ammonium chloride) from organic substances (such as plants, blood, and hair) by chemical means. 
Some Arabic Jabirian works (e.g., the "Book of Mercy", and the "Book of Seventy") were later translated into Latin under the Latinized name "Geber", and in 13th-century Europe an anonymous writer, usually referred to as pseudo-Geber, started to produce alchemical and metallurgical writings under this name. Later influential Muslim philosophers, such as Abū al-Rayhān al-Bīrūnī and Avicenna, disputed the theories of alchemy, particularly the theory of the transmutation of metals. Improvements in the refining of ores and their extraction to smelt metals were a widely used source of information for early chemists in the 16th century, among them Georg Agricola (1494–1555), who published his major work De re metallica in 1556. His work, describing highly developed and complex processes of mining metal ores and metal extraction, was the pinnacle of metallurgy during that time. His approach removed all mysticism associated with the subject, creating the practical base upon which others could and would build. The work describes the many kinds of furnaces used to smelt ore, and stimulated interest in minerals and their composition. Agricola has been described as the "father of metallurgy" and the founder of geology as a scientific discipline. Under the influence of the new empirical methods propounded by Sir Francis Bacon and others, a group of chemists at Oxford, Robert Boyle, Robert Hooke and John Mayow, began to reshape the old alchemical traditions into a scientific discipline. Boyle in particular questioned some commonly held chemical theories and argued for chemical practitioners to be more "philosophical" and less commercially focused in The Sceptical Chymist. He formulated Boyle's law, rejected the classical "four elements" and proposed a mechanistic alternative of atoms and chemical reactions that could be subject to rigorous experiment. In the following decades, many important discoveries were made, such as the nature of 'air', which was discovered to be composed of many different gases. The Scottish chemist Joseph Black and the Flemish Jan Baptist van Helmont discovered carbon dioxide, or what Black called 'fixed air', in 1754; Henry Cavendish discovered hydrogen and elucidated its properties, and Joseph Priestley and, independently, Carl Wilhelm Scheele isolated pure oxygen. The theory of phlogiston (a substance at the root of all combustion) was propounded by the German Georg Ernst Stahl in the early 18th century and was only overturned by the end of the century by the French chemist Antoine Lavoisier, the chemical analogue of Newton in physics. Lavoisier did more than any other to establish the new science on proper theoretical footing, by elucidating the principle of conservation of mass and developing a new system of chemical nomenclature used to this day. The English scientist John Dalton proposed the modern theory of atoms: that all substances are composed of indivisible 'atoms' of matter and that different atoms have varying atomic weights. The development of the electrochemical theory of chemical combinations occurred in the early 19th century as the result of the work of two scientists in particular, Jöns Jacob Berzelius and Humphry Davy, made possible by the prior invention of the voltaic pile by Alessandro Volta. Davy discovered nine new elements, including the alkali metals, by extracting them from their oxides with electric current.
British William Prout first proposed ordering all the elements by their atomic weight as all atoms had a weight that was an exact multiple of the atomic weight of hydrogen. J.A.R. Newlands devised an early table of elements, which was then developed into the modern periodic table of elements in the 1860s by Dmitri Mendeleev and independently by several other scientists including Julius Lothar Meyer. The inert gases, later called the noble gases were discovered by William Ramsay in collaboration with Lord Rayleigh at the end of the century, thereby filling in the basic structure of the table. Organic chemistry was developed by Justus von Liebig and others, following Friedrich Wöhler's synthesis of urea. Other crucial 19th century advances were; an understanding of valence bonding (Edward Frankland in 1852) and the application of thermodynamics to chemistry (J. W. Gibbs and Svante Arrhenius in the 1870s). At the turn of the twentieth century the theoretical underpinnings of chemistry were finally understood due to a series of remarkable discoveries that succeeded in probing and discovering the very nature of the internal structure of atoms. In 1897, J.J. Thomson of the University of Cambridge discovered the electron and soon after the French scientist Becquerel as well as the couple Pierre and Marie Curie investigated the phenomenon of radioactivity. In a series of pioneering scattering experiments Ernest Rutherford at the University of Manchester discovered the internal structure of the atom and the existence of the proton, classified and explained the different types of radioactivity and successfully transmuted the first element by bombarding nitrogen with alpha particles. His work on atomic structure was improved on by his students, the Danish physicist Niels Bohr, the Englishman Henry Moseley and the German Otto Hahn, who went on to father the emerging nuclear chemistry and discovered nuclear fission. The electronic theory of chemical bonds and molecular orbitals was developed by the American scientists Linus Pauling and Gilbert N. Lewis. The year 2011 was declared by the United Nations as the International Year of Chemistry. It was an initiative of the International Union of Pure and Applied Chemistry, and of the United Nations Educational, Scientific, and Cultural Organization and involves chemical societies, academics, and institutions worldwide and relied on individual initiatives to organize local and regional activities. == Practice == In the practice of chemistry, pure chemistry is the study of the fundamental principles of chemistry, while applied chemistry applies that knowledge to develop technology and solve real-world problems. === Subdisciplines === Chemistry is typically divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry. Analytical chemistry is the analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry. Biochemistry is the study of the chemicals, chemical reactions and interactions that take place at a molecular level in living organisms. Biochemistry is highly interdisciplinary, covering medicinal chemistry, neurochemistry, molecular biology, forensics, plant science and genetics. 
Inorganic chemistry is the study of the properties and reactions of inorganic compounds, such as metals and minerals. The distinction between organic and inorganic disciplines is not absolute and there is much overlap, most importantly in the sub-discipline of organometallic chemistry. Materials chemistry is the preparation, characterization, and understanding of solid state components or devices with a useful current or future function. The field is a new breadth of study in graduate programs, and it integrates elements from all classical areas of chemistry like organic chemistry, inorganic chemistry, and crystallography with a focus on fundamental issues that are unique to materials. Primary systems of study include the chemistry of condensed phases (solids, liquids, polymers) and interfaces between different phases. Neurochemistry is the study of neurochemicals; including transmitters, peptides, proteins, lipids, sugars, and nucleic acids; their interactions, and the roles they play in forming, maintaining, and modifying the nervous system. Nuclear chemistry is the study of how subatomic particles come together and make nuclei. Modern transmutation is a large component of nuclear chemistry, and the table of nuclides is an important result and tool for this field. In addition to medical applications, nuclear chemistry encompasses nuclear engineering which explores the topic of using nuclear power sources for generating energy. Organic chemistry is the study of the structure, properties, composition, mechanisms, and reactions of organic compounds. An organic compound is defined as any compound based on a carbon skeleton. Organic compounds can be classified, organized and understood in reactions by their functional groups, unit atoms or molecules that show characteristic chemical properties in a compound. Physical chemistry is the study of the physical and fundamental basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, statistical mechanics, spectroscopy, and more recently, astrochemistry. Physical chemistry has large overlap with molecular physics. Physical chemistry involves the use of infinitesimal calculus in deriving equations. It is usually associated with quantum chemistry and theoretical chemistry. Physical chemistry is a distinct discipline from chemical physics, but again, there is very strong overlap. Theoretical chemistry is the study of chemistry via fundamental theoretical reasoning (usually within mathematics or physics). In particular the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has large overlap with (theoretical and experimental) condensed matter physics and molecular physics. Other subdivisions include electrochemistry, femtochemistry, flavor chemistry, flow chemistry, immunohistochemistry, hydrogenation chemistry, mathematical chemistry, molecular mechanics, natural product chemistry, organometallic chemistry, petrochemistry, photochemistry, physical organic chemistry, polymer chemistry, radiochemistry, sonochemistry, supramolecular chemistry, synthetic chemistry, and many others. 
=== Interdisciplinary === Interdisciplinary fields include agrochemistry, astrochemistry (and cosmochemistry), atmospheric chemistry, chemical engineering, chemical biology, chemo-informatics, environmental chemistry, geochemistry, green chemistry, immunochemistry, marine chemistry, materials science, mechanochemistry, medicinal chemistry, molecular biology, nanotechnology, oenology, pharmacology, phytochemistry, solid-state chemistry, surface science, thermochemistry, and many others. === Industry === The chemical industry represents an important economic activity worldwide. The global top 50 chemical producers in 2013 had sales of US$980.5 billion with a profit margin of 10.3%. === Professional societies === == See also == == References == == Bibliography == == Further reading == Popular reading Atkins, P. W. Galileo's Finger (Oxford University Press) ISBN 0-19-860941-8 Atkins, P. W. Atkins' Molecules (Cambridge University Press) ISBN 0-521-82397-8 Kean, Sam. The Disappearing Spoon – and Other True Tales from the Periodic Table (Black Swan) London, England, 2010 ISBN 978-0-552-77750-6 Levi, Primo The Periodic Table (Penguin Books) [1975] translated from the Italian by Raymond Rosenthal (1984) ISBN 978-0-14-139944-7 Stwertka, A. A Guide to the Elements (Oxford University Press) ISBN 0-19-515027-9 "Dictionary of the History of Ideas". Archived from the original on 10 March 2008. "Chemistry" . Encyclopædia Britannica. Vol. 6 (11th ed.). 1911. pp. 33–76. Introductory undergraduate textbooks Atkins, P.W., Overton, T., Rourke, J., Weller, M. and Armstrong, F. Shriver and Atkins Inorganic Chemistry (4th ed.) 2006 (Oxford University Press) ISBN 0-19-926463-5 Chang, Raymond. Chemistry 6th ed. Boston, Massachusetts: James M. Smith, 1998. ISBN 0-07-115221-0 Clayden, Jonathan; Greeves, Nick; Warren, Stuart; Wothers, Peter (2001). Organic Chemistry (1st ed.). Oxford University Press. ISBN 978-0-19-850346-0. Voet and Voet. Biochemistry (Wiley) ISBN 0-471-58651-X Advanced undergraduate-level or graduate textbooks Atkins, P. W. Physical Chemistry (Oxford University Press) ISBN 0-19-879285-9 Atkins, P. W. et al. Molecular Quantum Mechanics (Oxford University Press) McWeeny, R. Coulson's Valence (Oxford Science Publications) ISBN 0-19-855144-4 Pauling, L. The Nature of the chemical bond (Cornell University Press) ISBN 0-8014-0333-2 Pauling, L., and Wilson, E. B. Introduction to Quantum Mechanics with Applications to Chemistry (Dover Publications) ISBN 0-486-64871-0 Smart and Moore. Solid State Chemistry: An Introduction (Chapman and Hall) ISBN 0-412-40040-5 Stephenson, G. Mathematical Methods for Science Students (Longman) ISBN 0-582-44416-0 == External links == General Chemistry principles, patterns and applications.
Wikipedia/chemistry
Chemical Science is a weekly peer-reviewed scientific journal covering all aspects of chemistry. It is the flagship journal of the Royal Society of Chemistry. It was established in July 2010 and is published by the Royal Society of Chemistry; before 2018, it was published monthly. It won the Best New Journal 2011 award from the Association of Learned and Professional Society Publishers. The editor-in-chief is Andrew Ian Cooper (University of Liverpool). In January 2015, the journal moved to an open access publishing model. It has since become a diamond open access journal, with no charges to readers or authors. == Abstracting and indexing == The journal is abstracted and indexed in: Science Citation Index Expanded Current Contents/Physical, Chemical & Earth Sciences Chemical Abstracts Service Directory of Open Access Journals According to the Journal Citation Reports, the journal has a 2023 impact factor of 7.6. == See also == Chemical Communications Chemical Society Reviews == References == == External links == Official website
Wikipedia/Chemical_Science_(journal)
Quantum mechanics is the fundamental physical theory that describes the behavior of matter and of light; its unusual characteristics typically occur at and below the scale of atoms.: 1.1  It is the foundation of all quantum physics, which includes quantum chemistry, quantum field theory, quantum technology, and quantum information science. Quantum mechanics can describe many systems that classical physics cannot. Classical physics can describe many aspects of nature at an ordinary (macroscopic and (optical) microscopic) scale, but is not sufficient for describing them at very small submicroscopic (atomic and subatomic) scales. Classical mechanics can be derived from quantum mechanics as an approximation that is valid at ordinary scales. Quantum systems have bound states that are quantized to discrete values of energy, momentum, angular momentum, and other quantities, in contrast to classical systems where these quantities can be measured continuously. Measurements of quantum systems show characteristics of both particles and waves (wave–particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (the uncertainty principle). Quantum mechanics arose gradually from theories to explain observations that could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and the correspondence between energy and frequency in Albert Einstein's 1905 paper, which explained the photoelectric effect. These early attempts to understand microscopic phenomena, now known as the "old quantum theory", led to the full development of quantum mechanics in the mid-1920s by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born, Paul Dirac and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical entity called the wave function provides information, in the form of probability amplitudes, about what measurements of a particle's energy, momentum, and other physical properties may yield. == Overview and fundamental concepts == Quantum mechanics allows the calculation of properties and behaviour of physical systems. It is typically applied to microscopic systems: molecules, atoms and subatomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms, but its application to human beings raises philosophical problems, such as Wigner's friend, and its application to the universe as a whole remains speculative. Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy. For example, the refinement of quantum mechanics for the interaction of light and matter, known as quantum electrodynamics (QED), has been shown to agree with experiment to within 1 part in 1012 when predicting the magnetic properties of an electron. A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. This is known as the Born rule, named after physicist Max Born. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. 
Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an experiment is performed to measure it. This is the best the theory can do; it cannot say for certain where the electron will be found. The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.: 67–87  One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between measurable quantities. The most famous form of this uncertainty principle says that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also at the same time for a measurement of its momentum.: 427–435  Another consequence of the mathematical rules of quantum mechanics is the phenomenon of quantum interference, which is often illustrated with the double-slit experiment. In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate.: 102–111 : 1.1–1.8  The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave).: 109  However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. This behavior is known as wave–particle duality. In addition to light, electrons, atoms, and molecules are all found to exhibit the same dual behavior when fired towards a double slit. Another non-classical phenomenon predicted by quantum mechanics is quantum tunnelling: a particle that goes up against a potential barrier can cross it, even if its kinetic energy is smaller than the maximum of the potential. In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enabling radioactive decay, nuclear fusion in stars, and applications such as scanning tunnelling microscopy, tunnel diode and tunnel field-effect transistor. When quantum systems interact, the result can be the creation of quantum entanglement: their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement "...the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought". Quantum entanglement enables quantum computing and is part of quantum communication protocols, such as quantum key distribution and superdense coding. Contrary to popular misconception, entanglement does not allow sending signals faster than light, as demonstrated by the no-communication theorem. 
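The loss of an independent description for the parts of an entangled whole can be made concrete in a small numerical sketch: for a maximally entangled pair of qubits, tracing out one qubit leaves the other described only by a maximally mixed density matrix, not by any state vector. The singlet state and the NumPy code below are illustrative choices, not a worked example from the text (reduced density matrices are discussed further below).

```python
import numpy as np

# Two-qubit singlet state (|01> - |10>)/sqrt(2), a maximally entangled state
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())              # density matrix of the pair

# Reduced density matrix of the first qubit: trace out the second qubit
rho4 = rho.reshape(2, 2, 2, 2)               # indices (a, b, a', b')
rho_A = sum(rho4[:, b, :, b] for b in range(2))

print(rho_A.real)                            # identity/2: maximally mixed
# No single-qubit state vector reproduces these statistics, and the joint
# state cannot be written as a product of states of the two qubits.
```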
Another possibility opened by entanglement is testing for "hidden variables", hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory provides. A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. Many Bell tests have been performed and they have shown results incompatible with the constraints imposed by local hidden variables. It is not possible to present these concepts in more than a superficial way without introducing the mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but also linear algebra, differential equations, group theory, and other more advanced subjects. Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples. == Mathematical formulation == In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector ψ {\displaystyle \psi } belonging to a (separable) complex Hilbert space H {\displaystyle {\mathcal {H}}} . This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys ⟨ ψ , ψ ⟩ = 1 {\displaystyle \langle \psi ,\psi \rangle =1} , and it is well-defined up to a complex number of modulus 1 (the global phase), that is, ψ {\displaystyle \psi } and e i α ψ {\displaystyle e^{i\alpha }\psi } represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions L 2 ( C ) {\displaystyle L^{2}(\mathbb {C} )} , while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors C 2 {\displaystyle \mathbb {C} ^{2}} with the usual inner product. Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A quantum state can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue λ {\displaystyle \lambda } is non-degenerate and the probability is given by | ⟨ λ → , ψ ⟩ | 2 {\displaystyle |\langle {\vec {\lambda }},\psi \rangle |^{2}} , where λ → {\displaystyle {\vec {\lambda }}} is its associated unit-length eigenvector. More generally, the eigenvalue is degenerate and the probability is given by ⟨ ψ , P λ ψ ⟩ {\displaystyle \langle \psi ,P_{\lambda }\psi \rangle } , where P λ {\displaystyle P_{\lambda }} is the projector onto its associated eigenspace. 
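The finite-dimensional form of the Born rule just stated is easy to check numerically: for a non-degenerate eigenvalue, the amplitude form |⟨λ, ψ⟩|² and the projector form ⟨ψ, P_λψ⟩ give the same probability, and the probabilities over all eigenvalues sum to one. A minimal sketch, with an arbitrary qubit state and the Pauli-X observable as illustrative choices:

```python
import numpy as np

psi = np.array([3, 4j], dtype=complex) / 5.0            # a normalized qubit state
X = np.array([[0, 1], [1, 0]], dtype=complex)            # a Hermitian observable

eigvals, eigvecs = np.linalg.eigh(X)
for lam, v in zip(eigvals, eigvecs.T):                    # columns are eigenvectors
    p_amp = abs(np.vdot(v, psi)) ** 2                     # |<lambda, psi>|^2
    P = np.outer(v, v.conj())                             # projector onto eigenspace
    p_proj = np.real(np.vdot(psi, P @ psi))               # <psi, P_lambda psi>
    print(lam, p_amp, p_proj)                             # the two forms agree

# The printed probabilities (0.5 and 0.5 here) sum to 1, as normalization requires.
```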
In the continuous case, these formulas give instead the probability density. After the measurement, if result λ {\displaystyle \lambda } was obtained, the quantum state is postulated to collapse to λ → {\displaystyle {\vec {\lambda }}} , in the non-degenerate case, or to P λ ψ / ⟨ ψ , P λ ψ ⟩ {\textstyle P_{\lambda }\psi {\big /}\!{\sqrt {\langle \psi ,P_{\lambda }\psi \rangle }}} , in the general case. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the many-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled so that the original quantum system ceases to exist as an independent entity (see Measurement in quantum mechanics). === Time evolution of a quantum state === The time evolution of a quantum state is described by the Schrödinger equation: i ℏ ∂ ∂ t ψ ( t ) = H ψ ( t ) . {\displaystyle i\hbar {\frac {\partial }{\partial t}}\psi (t)=H\psi (t).} Here H {\displaystyle H} denotes the Hamiltonian, the observable corresponding to the total energy of the system, and ℏ {\displaystyle \hbar } is the reduced Planck constant. The constant i ℏ {\displaystyle i\hbar } is introduced so that the Hamiltonian is reduced to the classical Hamiltonian in cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called the correspondence principle. The solution of this differential equation is given by ψ ( t ) = e − i H t / ℏ ψ ( 0 ) . {\displaystyle \psi (t)=e^{-iHt/\hbar }\psi (0).} The operator U ( t ) = e − i H t / ℏ {\displaystyle U(t)=e^{-iHt/\hbar }} is known as the time-evolution operator, and has the crucial property that it is unitary. This time evolution is deterministic in the sense that – given an initial quantum state ψ ( 0 ) {\displaystyle \psi (0)} – it makes a definite prediction of what the quantum state ψ ( t ) {\displaystyle \psi (t)} will be at any later time. Some wave functions produce probability distributions that are independent of time, such as eigenstates of the Hamiltonian.: 133–137  Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as an s orbital (Fig. 1). Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom. Even the helium atom – which contains just two electrons – has defied all attempts at a fully analytic treatment, admitting no solution in closed form. 
However, there are techniques for finding approximate solutions. One method, called perturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weak potential energy.: 793  Another approximation method applies to systems for which quantum mechanics produces only small deviations from classical behavior. These deviations can then be computed based on the classical motion.: 849  === Uncertainty principle === One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum. Both position and momentum are observables, meaning that they are represented by Hermitian operators. The position operator X ^ {\displaystyle {\hat {X}}} and momentum operator P ^ {\displaystyle {\hat {P}}} do not commute, but rather satisfy the canonical commutation relation: [ X ^ , P ^ ] = i ℏ . {\displaystyle [{\hat {X}},{\hat {P}}]=i\hbar .} Given a quantum state, the Born rule lets us compute expectation values for both X {\displaystyle X} and P {\displaystyle P} , and moreover for powers of them. Defining the uncertainty for an observable by a standard deviation, we have σ X = ⟨ X 2 ⟩ − ⟨ X ⟩ 2 , {\displaystyle \sigma _{X}={\textstyle {\sqrt {\left\langle X^{2}\right\rangle -\left\langle X\right\rangle ^{2}}}},} and likewise for the momentum: σ P = ⟨ P 2 ⟩ − ⟨ P ⟩ 2 . {\displaystyle \sigma _{P}={\sqrt {\left\langle P^{2}\right\rangle -\left\langle P\right\rangle ^{2}}}.} The uncertainty principle states that σ X σ P ≥ ℏ 2 . {\displaystyle \sigma _{X}\sigma _{P}\geq {\frac {\hbar }{2}}.} Either standard deviation can in principle be made arbitrarily small, but not both simultaneously. This inequality generalizes to arbitrary pairs of self-adjoint operators A {\displaystyle A} and B {\displaystyle B} . The commutator of these two operators is [ A , B ] = A B − B A , {\displaystyle [A,B]=AB-BA,} and this provides the lower bound on the product of standard deviations: σ A σ B ≥ 1 2 | ⟨ [ A , B ] ⟩ | . {\displaystyle \sigma _{A}\sigma _{B}\geq {\tfrac {1}{2}}\left|{\bigl \langle }[A,B]{\bigr \rangle }\right|.} Another consequence of the canonical commutation relation is that the position and momentum operators are Fourier transforms of each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that dependence in momentum is the Fourier transform of the dependence in position means that the momentum operator is equivalent (up to an i / ℏ {\displaystyle i/\hbar } factor) to taking the derivative according to the position, since in Fourier analysis differentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space, the momentum p i {\displaystyle p_{i}} is replaced by − i ℏ ∂ ∂ x {\displaystyle -i\hbar {\frac {\partial }{\partial x}}} , and in particular in the non-relativistic Schrödinger equation in position space the momentum-squared term is replaced with a Laplacian times − ℏ 2 {\displaystyle -\hbar ^{2}} . === Composite systems and entanglement === When two different quantum systems are considered together, the Hilbert space of the combined system is the tensor product of the Hilbert spaces of the two components. 
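The general bound σ_A σ_B ≥ ½|⟨[A, B]⟩| quoted above for pairs of self-adjoint operators can be verified directly in a small Hilbert space. The sketch below uses two spin-1/2 observables and an arbitrary illustrative state; it is a numerical check of the inequality, not a derivation.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

psi = np.array([1, 0.5 + 0.5j], dtype=complex)            # arbitrary spin-1/2 state
psi /= np.linalg.norm(psi)

def expect(op):
    return np.vdot(psi, op @ psi)

def spread(op):
    return np.sqrt(np.real(expect(op @ op)) - np.real(expect(op)) ** 2)

lhs = spread(sx) * spread(sy)
comm = sx @ sy - sy @ sx                                   # [sx, sy] = 2i * sz
rhs = 0.5 * abs(expect(comm))
print(lhs, rhs, lhs >= rhs)                                # the inequality holds
```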
For example, let A and B be two quantum systems, with Hilbert spaces H A {\displaystyle {\mathcal {H}}_{A}} and H B {\displaystyle {\mathcal {H}}_{B}} , respectively. The Hilbert space of the composite system is then H A B = H A ⊗ H B . {\displaystyle {\mathcal {H}}_{AB}={\mathcal {H}}_{A}\otimes {\mathcal {H}}_{B}.} If the state for the first system is the vector ψ A {\displaystyle \psi _{A}} and the state for the second system is ψ B {\displaystyle \psi _{B}} , then the state of the composite system is ψ A ⊗ ψ B . {\displaystyle \psi _{A}\otimes \psi _{B}.} Not all states in the joint Hilbert space H A B {\displaystyle {\mathcal {H}}_{AB}} can be written in this form, however, because the superposition principle implies that linear combinations of these "separable" or "product states" are also valid. For example, if ψ A {\displaystyle \psi _{A}} and ϕ A {\displaystyle \phi _{A}} are both possible states for system A {\displaystyle A} , and likewise ψ B {\displaystyle \psi _{B}} and ϕ B {\displaystyle \phi _{B}} are both possible states for system B {\displaystyle B} , then 1 2 ( ψ A ⊗ ψ B + ϕ A ⊗ ϕ B ) {\displaystyle {\tfrac {1}{\sqrt {2}}}\left(\psi _{A}\otimes \psi _{B}+\phi _{A}\otimes \phi _{B}\right)} is a valid joint state that is not separable. States that are not separable are called entangled. If the state for a composite system is entangled, it is impossible to describe either component system A or system B by a state vector. One can instead define reduced density matrices that describe the statistics that can be obtained by making measurements on either component system alone. This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system. Just as density matrices specify the state of a subsystem of a larger system, analogously, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory. As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic. === Equivalence between formulations === There are many mathematically equivalent formulations of quantum mechanics. One of the oldest and most common is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger). An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics. === Symmetries and conservation laws === The Hamiltonian H {\displaystyle H} is known as the generator of time evolution, since it defines a unitary time-evolution operator U ( t ) = e − i H t / ℏ {\displaystyle U(t)=e^{-iHt/\hbar }} for each value of t {\displaystyle t} . 
From this relation between U ( t ) {\displaystyle U(t)} and H {\displaystyle H} , it follows that any observable A {\displaystyle A} that commutes with H {\displaystyle H} will be conserved: its expectation value will not change over time.: 471  This statement generalizes, as mathematically, any Hermitian operator A {\displaystyle A} can generate a family of unitary operators parameterized by a variable t {\displaystyle t} . Under the evolution generated by A {\displaystyle A} , any observable B {\displaystyle B} that commutes with A {\displaystyle A} will be conserved. Moreover, if B {\displaystyle B} is conserved by evolution under A {\displaystyle A} , then A {\displaystyle A} is conserved under the evolution generated by B {\displaystyle B} . This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law. == Examples == === Free particle === The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy: H = 1 2 m P 2 = − ℏ 2 2 m d 2 d x 2 . {\displaystyle H={\frac {1}{2m}}P^{2}=-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}.} The general solution of the Schrödinger equation is given by ψ ( x , t ) = 1 2 π ∫ − ∞ ∞ ψ ^ ( k , 0 ) e i ( k x − ℏ k 2 2 m t ) d k , {\displaystyle \psi (x,t)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }{\hat {\psi }}(k,0)e^{i(kx-{\frac {\hbar k^{2}}{2m}}t)}\mathrm {d} k,} which is a superposition of all possible plane waves e i ( k x − ℏ k 2 2 m t ) {\displaystyle e^{i(kx-{\frac {\hbar k^{2}}{2m}}t)}} , which are eigenstates of the momentum operator with momentum p = ℏ k {\displaystyle p=\hbar k} . The coefficients of the superposition are ψ ^ ( k , 0 ) {\displaystyle {\hat {\psi }}(k,0)} , which is the Fourier transform of the initial quantum state ψ ( x , 0 ) {\displaystyle \psi (x,0)} . It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states. Instead, we can consider a Gaussian wave packet: ψ ( x , 0 ) = 1 π a 4 e − x 2 2 a {\displaystyle \psi (x,0)={\frac {1}{\sqrt[{4}]{\pi a}}}e^{-{\frac {x^{2}}{2a}}}} which has Fourier transform, and therefore momentum distribution ψ ^ ( k , 0 ) = a π 4 e − a k 2 2 . {\displaystyle {\hat {\psi }}(k,0)={\sqrt[{4}]{\frac {a}{\pi }}}e^{-{\frac {ak^{2}}{2}}}.} We see that as we make a {\displaystyle a} smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making a {\displaystyle a} larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle. As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant. === Particle in a box === The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. 
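The trade-off just described for the Gaussian wave packet can be checked numerically: with the conventions above, the position spread is √(a/2) and the momentum spread is ℏ/√(2a), so their product stays at ℏ/2 for every choice of a. A minimal sketch on a discretized grid, with ℏ set to 1 and the grid ranges and the value of a chosen purely for illustration:

```python
import numpy as np

hbar = 1.0
a = 2.0                                                    # illustrative width parameter

x = np.linspace(-40, 40, 4001)
dx = x[1] - x[0]
psi = (np.pi * a) ** -0.25 * np.exp(-x ** 2 / (2 * a))     # Gaussian packet at t = 0
sigma_x = np.sqrt(np.sum(x ** 2 * np.abs(psi) ** 2) * dx)  # mean position is zero

k = np.linspace(-20, 20, 4001)
dk = k[1] - k[0]
psihat = (a / np.pi) ** 0.25 * np.exp(-a * k ** 2 / 2)     # its Fourier transform
sigma_p = hbar * np.sqrt(np.sum(k ** 2 * np.abs(psihat) ** 2) * dk)  # p = hbar * k

print(sigma_x, sigma_p, sigma_x * sigma_p)                 # product is ~ hbar / 2
```

Repeating the calculation with a larger or smaller a shows the two spreads moving in opposite directions while their product is unchanged, which is the uncertainty trade-off described above.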
The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region.: 77–78  For the one-dimensional case in the x {\displaystyle x} direction, the time-independent Schrödinger equation may be written − ℏ 2 2 m d 2 ψ d x 2 = E ψ . {\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}\psi }{dx^{2}}}=E\psi .} With the differential operator defined by p ^ x = − i ℏ d d x {\displaystyle {\hat {p}}_{x}=-i\hbar {\frac {d}{dx}}} the previous equation is evocative of the classic kinetic energy analogue, 1 2 m p ^ x 2 = E , {\displaystyle {\frac {1}{2m}}{\hat {p}}_{x}^{2}=E,} with state ψ {\displaystyle \psi } in this case having energy E {\displaystyle E} coincident with the kinetic energy of the particle. The general solutions of the Schrödinger equation for the particle in a box are ψ ( x ) = A e i k x + B e − i k x E = ℏ 2 k 2 2 m {\displaystyle \psi (x)=Ae^{ikx}+Be^{-ikx}\qquad \qquad E={\frac {\hbar ^{2}k^{2}}{2m}}} or, from Euler's formula, ψ ( x ) = C sin ⁡ ( k x ) + D cos ⁡ ( k x ) . {\displaystyle \psi (x)=C\sin(kx)+D\cos(kx).\!} The infinite potential walls of the box determine the values of C , D , {\displaystyle C,D,} and k {\displaystyle k} at x = 0 {\displaystyle x=0} and x = L {\displaystyle x=L} where ψ {\displaystyle \psi } must be zero. Thus, at x = 0 {\displaystyle x=0} , ψ ( 0 ) = 0 = C sin ⁡ ( 0 ) + D cos ⁡ ( 0 ) = D {\displaystyle \psi (0)=0=C\sin(0)+D\cos(0)=D} and D = 0 {\displaystyle D=0} . At x = L {\displaystyle x=L} , ψ ( L ) = 0 = C sin ⁡ ( k L ) , {\displaystyle \psi (L)=0=C\sin(kL),} in which C {\displaystyle C} cannot be zero as this would conflict with the postulate that ψ {\displaystyle \psi } has norm 1. Therefore, since sin ⁡ ( k L ) = 0 {\displaystyle \sin(kL)=0} , k L {\displaystyle kL} must be an integer multiple of π {\displaystyle \pi } , k = n π L n = 1 , 2 , 3 , … . {\displaystyle k={\frac {n\pi }{L}}\qquad \qquad n=1,2,3,\ldots .} This constraint on k {\displaystyle k} implies a constraint on the energy levels, yielding E n = ℏ 2 π 2 n 2 2 m L 2 = n 2 h 2 8 m L 2 . {\displaystyle E_{n}={\frac {\hbar ^{2}\pi ^{2}n^{2}}{2mL^{2}}}={\frac {n^{2}h^{2}}{8mL^{2}}}.} A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy. === Harmonic oscillator === As in the classical case, the potential for the quantum harmonic oscillator is given by: 234  V ( x ) = 1 2 m ω 2 x 2 . {\displaystyle V(x)={\frac {1}{2}}m\omega ^{2}x^{2}.} This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by ψ n ( x ) = 1 2 n n ! 
⋅ ( m ω π ℏ ) 1 / 4 ⋅ e − m ω x 2 2 ℏ ⋅ H n ( m ω ℏ x ) , {\displaystyle \psi _{n}(x)={\sqrt {\frac {1}{2^{n}\,n!}}}\cdot \left({\frac {m\omega }{\pi \hbar }}\right)^{1/4}\cdot e^{-{\frac {m\omega x^{2}}{2\hbar }}}\cdot H_{n}\left({\sqrt {\frac {m\omega }{\hbar }}}x\right),\qquad } n = 0 , 1 , 2 , … . {\displaystyle n=0,1,2,\ldots .} where Hn are the Hermite polynomials H n ( x ) = ( − 1 ) n e x 2 d n d x n ( e − x 2 ) , {\displaystyle H_{n}(x)=(-1)^{n}e^{x^{2}}{\frac {d^{n}}{dx^{n}}}\left(e^{-x^{2}}\right),} and the corresponding energy levels are E n = ℏ ω ( n + 1 2 ) . {\displaystyle E_{n}=\hbar \omega \left(n+{1 \over 2}\right).} This is another example illustrating the discretization of energy for bound states. === Mach–Zehnder interferometer === The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement. We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector ψ ∈ C 2 {\displaystyle \psi \in \mathbb {C} ^{2}} that is a superposition of the "lower" path ψ l = ( 1 0 ) {\displaystyle \psi _{l}={\begin{pmatrix}1\\0\end{pmatrix}}} and the "upper" path ψ u = ( 0 1 ) {\displaystyle \psi _{u}={\begin{pmatrix}0\\1\end{pmatrix}}} , that is, ψ = α ψ l + β ψ u {\displaystyle \psi =\alpha \psi _{l}+\beta \psi _{u}} for complex α , β {\displaystyle \alpha ,\beta } . In order to respect the postulate that ⟨ ψ , ψ ⟩ = 1 {\displaystyle \langle \psi ,\psi \rangle =1} we require that | α | 2 + | β | 2 = 1 {\displaystyle |\alpha |^{2}+|\beta |^{2}=1} . Both beam splitters are modelled as the unitary matrix B = 1 2 ( 1 i i 1 ) {\displaystyle B={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1&i\\i&1\end{pmatrix}}} , which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of 1 / 2 {\displaystyle 1/{\sqrt {2}}} , or be reflected to the other path with a probability amplitude of i / 2 {\displaystyle i/{\sqrt {2}}} . The phase shifter on the upper arm is modelled as the unitary matrix P = ( 1 0 0 e i Δ Φ ) {\displaystyle P={\begin{pmatrix}1&0\\0&e^{i\Delta \Phi }\end{pmatrix}}} , which means that if the photon is on the "upper" path it will gain a relative phase of Δ Φ {\displaystyle \Delta \Phi } , and it will stay unchanged if it is in the lower path. 
A photon that enters the interferometer from the left will then be acted upon with a beam splitter B {\displaystyle B} , a phase shifter P {\displaystyle P} , and another beam splitter B {\displaystyle B} , and so end up in the state B P B ψ l = i e i Δ Φ / 2 ( − sin ⁡ ( Δ Φ / 2 ) cos ⁡ ( Δ Φ / 2 ) ) , {\displaystyle BPB\psi _{l}=ie^{i\Delta \Phi /2}{\begin{pmatrix}-\sin(\Delta \Phi /2)\\\cos(\Delta \Phi /2)\end{pmatrix}},} and the probabilities that it will be detected at the right or at the top are given respectively by p ( u ) = | ⟨ ψ u , B P B ψ l ⟩ | 2 = cos 2 ⁡ Δ Φ 2 , {\displaystyle p(u)=|\langle \psi _{u},BPB\psi _{l}\rangle |^{2}=\cos ^{2}{\frac {\Delta \Phi }{2}},} p ( l ) = | ⟨ ψ l , B P B ψ l ⟩ | 2 = sin 2 ⁡ Δ Φ 2 . {\displaystyle p(l)=|\langle \psi _{l},BPB\psi _{l}\rangle |^{2}=\sin ^{2}{\frac {\Delta \Phi }{2}}.} One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities. It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases, there will be no interference between the paths anymore, and the probabilities are given by p ( u ) = p ( l ) = 1 / 2 {\displaystyle p(u)=p(l)=1/2} , independently of the phase Δ Φ {\displaystyle \Delta \Phi } . From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths. == Applications == Quantum mechanics has had enormous success in explaining many of the features of our universe, with regard to small-scale and discrete quantities and interactions which cannot be explained by classical methods. Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics. In many aspects, modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA. == Relation to other scientific theories == === Classical mechanics === The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers. 
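Returning to the Mach–Zehnder analysis, the output probabilities p(u) = cos²(ΔΦ/2) and p(l) = sin²(ΔΦ/2) derived above follow from multiplying the two-by-two matrices together, which is easy to confirm numerically. A minimal sketch with an arbitrary illustrative phase shift:

```python
import numpy as np

dphi = 0.7                                                      # illustrative phase shift
B = np.array([[1, 1j], [1j, 1]], dtype=complex) / np.sqrt(2)    # beam splitter
P = np.array([[1, 0], [0, np.exp(1j * dphi)]], dtype=complex)   # phase shifter
psi_l = np.array([1, 0], dtype=complex)                         # photon on the "lower" path

out = B @ P @ B @ psi_l                                         # pass through B, P, then B
p_l, p_u = np.abs(out) ** 2                                     # detection probabilities
print(p_u, np.cos(dphi / 2) ** 2)                               # agree: upper detector
print(p_l, np.sin(dphi / 2) ** 2)                               # agree: lower detector
```

Replacing the first B with the identity matrix, which models removing the first beam splitter, makes both probabilities 1/2 regardless of dphi, mirroring the loss of interference discussed above.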
One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization.: 299  When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.: 234  Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.: 353  Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement becomes simply classical correlations.: 687–730  Quantum coherence is not typically evident at macroscopic scales, though at temperatures approaching absolute zero quantum behavior may manifest macroscopically. Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics. === Special relativity and electrodynamics === Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. Quantum electrodynamics is, along with general relativity, one of the most accurate physical theories ever devised. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical − e 2 / ( 4 π ϵ 0 r ) {\displaystyle \textstyle -e^{2}/(4\pi \epsilon _{_{0}}r)} Coulomb potential.: 285  Likewise, in a Stern–Gerlach experiment, a charged particle is modeled as a quantum system, while the background magnetic field is described classically.: 26  This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles. 
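For the semi-classical hydrogen model mentioned above, solving the Schrödinger equation with the Coulomb potential −e²/(4πε₀r) yields the bound-state energies E_n = −m_e e⁴/(8 ε₀² h² n²), roughly −13.6 eV/n². The short check below plugs in standard constant values; it is a numerical illustration of that textbook result, not part of the original text:

```python
# Hydrogen bound-state energies from the Coulomb potential (SI units)
m_e  = 9.1093837e-31        # electron mass, kg
e    = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
h    = 6.62607015e-34       # Planck constant, J*s

def hydrogen_energy_eV(n):
    """E_n = -m_e e^4 / (8 eps0^2 h^2 n^2), converted from joules to eV."""
    return -m_e * e**4 / (8 * eps0**2 * h**2 * n**2) / e

for n in (1, 2, 3):
    print(n, hydrogen_energy_eV(n))    # about -13.6, -3.4, -1.5 eV
```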
Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg. === Relation to general relativity === Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon. One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force. Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric "woven" of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The characteristic length scale of a spin foam is the Planck length, approximately 1.616×10−35 m, and so lengths shorter than the Planck length are not physically meaningful in LQG. == Philosophical implications == Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics." According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics." The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the "Copenhagen interpretation". 
According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of "causality". Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations were adopted by Nobel laureates in quantum physics, including Bohr, Heisenberg, Schrödinger, Feynman, and Zeilinger as well as 21st-century researchers in quantum foundations. Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. Einstein's long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox. In 1964, John Bell showed that EPR's principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distance systems, now known as Bell inequalities, that can be violated by entangled particles. Since then several experiments have been performed to obtain these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism. Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem. Everett's many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This is a consequence of removing the axiom of the collapse of the wave packet. All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Several attempts have been made to make sense of this and derive the Born rule, with no consensus on whether they have been successful. 
Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas, and QBism was developed some years later. == History == Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803 English polymath Thomas Young described the famous double-slit experiment. This experiment played a major role in the general acceptance of the wave theory of light. During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics. While the early conception of atoms from Greek philosophy had been that they were indivisible units – the word "atom" deriving from the Greek for 'uncuttable' – the 19th century saw the formulation of hypotheses about subatomic structure. One important discovery in that regard was Michael Faraday's 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure. Julius Plücker, Johann Wilhelm Hittorf and Eugen Goldstein carried on and improved upon Faraday's work, leading to the identification of cathode rays, which J. J. Thomson found to consist of subatomic particles that would be called electrons. The black-body radiation problem was discovered by Gustav Kirchhoff in 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation. The word quantum derives from the Latin, meaning "how great" or "how much". According to Planck, quantities of energy could be thought of as divided into "elements" whose size (E) would be proportional to their frequency (ν): E = h ν {\displaystyle E=h\nu \ } , where h is the Planck constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. Niels Bohr then developed Planck's ideas about radiation into a model of the hydrogen atom that successfully predicted the spectral lines of hydrogen. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency. In his paper "On the Quantum Theory of Radiation", Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. 
Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation, which became the basis of the laser. This phase is known as the old quantum theory. Never complete or self-consistent, the old quantum theory was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein and Peter Debye's work on the specific heat of solids, Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, and Arnold Sommerfeld's extension of the Bohr model to include special-relativistic effects. In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics. Born introduced the probabilistic interpretation of Schrödinger's wave function in July 1926. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927. By 1930, quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and John von Neumann with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors and superfluids. == See also == == Explanatory notes == == References == == Further reading == == External links == Introduction to Quantum Theory at Quantiki. Quantum Physics Made Relatively Simple: three video lectures by Hans Bethe. Course material Quantum Cook Book and PHYS 201: Fundamentals of Physics II by Ramamurti Shankar, Yale OpenCourseware. Modern Physics: With waves, thermodynamics, and optics – an online textbook. MIT OpenCourseWare: Chemistry and Physics. See 8.04, 8.05 and 8.06. ⁠5+1/2⁠ Examples in Quantum Mechanics. Philosophy Ismael, Jenann. "Quantum Mechanics". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Zalta, Edward N. (ed.). "Philosophical Issues in Quantum Theory". Stanford Encyclopedia of Philosophy.
Wikipedia/Quantum_mechanical_model
Chemistry is the scientific study of the properties and behavior of matter. It is a physical science within the natural sciences that studies the chemical elements that make up matter and compounds made of atoms, molecules and ions: their composition, structure, properties, behavior and the changes they undergo during reactions with other substances. Chemistry also addresses the nature of chemical bonds in chemical compounds. In the scope of its subject, chemistry occupies an intermediate position between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. For example, chemistry explains aspects of plant growth (botany), the formation of igneous rocks (geology), how atmospheric ozone is formed and how environmental pollutants are degraded (ecology), the properties of the soil on the Moon (cosmochemistry), how medications work (pharmacology), and how to collect DNA evidence at a crime scene (forensics). Chemistry has existed under various names since ancient times. It has evolved, and now chemistry encompasses various areas of specialisation, or subdisciplines, that continue to increase in number and interrelate to create further interdisciplinary fields of study. The applications of various fields of chemistry are used frequently for economic purposes in the chemical industry. == Etymology == The word chemistry comes from a modification during the Renaissance of the word alchemy, which referred to an earlier set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy, mysticism, and medicine. Alchemy is often associated with the quest to turn lead or other base metals into gold, though alchemists were also interested in many of the questions of modern chemistry. The modern word alchemy in turn is derived from the Arabic word al-kīmīā (الكیمیاء). This may have Egyptian origins since al-kīmīā is derived from the Ancient Greek χημία, which is in turn derived from the word Kemet, which is the ancient name of Egypt in the Egyptian language. Alternately, al-kīmīā may derive from χημεία 'cast together'. == Modern principles == The current model of atomic structure is the quantum mechanical model. Traditional chemistry starts with the study of elementary particles, atoms, molecules, substances, metals, crystals and other aggregates of matter. Matter can be studied in solid, liquid, gas and plasma states, in isolation or in combination. The interactions, reactions and transformations that are studied in chemistry are usually the result of interactions between atoms, leading to rearrangements of the chemical bonds which hold atoms together. Such behaviors are studied in a chemistry laboratory. The chemistry laboratory stereotypically uses various forms of laboratory glassware. However glassware is not central to chemistry, and a great deal of experimental (as well as applied/industrial) chemistry is done without it. A chemical reaction is a transformation of some substances into one or more different substances. The basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. It can be symbolically depicted through a chemical equation, which usually involves atoms as subjects. The number of atoms on the left and the right in the equation for a chemical transformation is equal. 
(When the number of atoms on either side is unequal, the transformation is referred to as a nuclear reaction or radioactive decay.) The type of chemical reactions a substance may undergo and the energy changes that may accompany it are constrained by certain basic rules, known as chemical laws. Energy and entropy considerations are invariably important in almost all chemical studies. Chemical substances are classified in terms of their structure, phase, as well as their chemical compositions. They can be analyzed using the tools of chemical analysis, e.g. spectroscopy and chromatography. Scientists engaged in chemical research are known as chemists. Most chemists specialize in one or more sub-disciplines. Several concepts are essential for the study of chemistry; some of them are: === Matter === In chemistry, matter is defined as anything that has rest mass and volume (it takes up space) and is made up of particles. The particles that make up matter have rest mass as well – not all particles have rest mass, such as the photon. Matter can be a pure chemical substance or a mixture of substances. ==== Atom ==== The atom is the basic unit of chemistry. It consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. The nucleus is made up of positively charged protons and uncharged neutrons (together called nucleons), while the electron cloud consists of negatively charged electrons which orbit the nucleus. In a neutral atom, the negatively charged electrons balance out the positive charge of the protons. The nucleus is dense; the mass of a nucleon is approximately 1,836 times that of an electron, yet the radius of an atom is about 10,000 times that of its nucleus. The atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state(s), coordination number, and preferred types of bonds to form (e.g., metallic, ionic, covalent). ==== Element ==== A chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol Z. The mass number is the sum of the number of protons and neutrons in a nucleus. Although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number; atoms of an element which have different mass numbers are known as isotopes. For example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13. The standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. The periodic table is arranged in groups, or columns, and periods, or rows. The periodic table is useful in identifying periodic trends. ==== Compound ==== A compound is a pure chemical substance composed of more than one element. The properties of a compound bear little similarity to those of its elements. The standard nomenclature of compounds is set by the International Union of Pure and Applied Chemistry (IUPAC). Organic compounds are named according to the organic nomenclature system. The names for inorganic compounds are created according to the inorganic nomenclature system. When a compound has more than one component, then they are divided into two classes, the electropositive and the electronegative components. 
In addition, the Chemical Abstracts Service (CAS) has devised a method to index chemical substances. In this scheme each chemical substance is identifiable by a number known as its CAS registry number. ==== Molecule ==== A molecule is the smallest indivisible portion of a pure chemical substance that has its unique set of chemical properties, that is, its potential to undergo a certain set of chemical reactions with other substances. However, this definition only works well for substances that are composed of molecules, which is not true of many substances (see below). Molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs. Thus, molecules exist as electrically neutral units, unlike ions. When this rule is broken, giving the "molecule" a charge, the result is sometimes named a molecular ion or a polyatomic ion. However, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well-separated form, such as a directed beam in a vacuum in a mass spectrometer. Charged polyatomic collections residing in solids (for example, common sulfate or nitrate ions) are generally not considered "molecules" in chemistry. Some molecules contain one or more unpaired electrons, creating radicals. Most radicals are comparatively reactive, but some, such as nitric oxide (NO), can be stable. The "inert" or noble gas elements (helium, neon, argon, krypton, xenon and radon) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. Identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. However, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the Earth are chemical compounds without molecules. These other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. Instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. Examples of such substances are mineral salts (such as table salt), covalent network solids such as graphite and diamond (both forms of carbon), metals, and familiar silica and silicate minerals such as quartz and granite. One of the main characteristics of a molecule is its geometry, often called its structure. While the structure of diatomic, triatomic or tetra-atomic molecules may be trivial (linear, angular, pyramidal, etc.), the structure of polyatomic molecules, constituted of more than six atoms (of several elements), can be crucial for their chemical nature. ==== Substance and mixture ==== A chemical substance is a kind of matter with a definite composition and set of properties. A collection of substances is called a mixture. Examples of mixtures are air and alloys. ==== Mole and amount of substance ==== The mole is a unit of measurement that denotes an amount of substance (also called chemical amount). One mole is defined to contain exactly 6.02214076×10²³ particles (atoms, molecules, ions, or electrons), where the number of particles per mole is known as the Avogadro constant.
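As a simple numerical sketch of the relationship between mass, amount of substance and number of particles (the choice of water and its approximate molar mass of 18.015 g/mol are illustrative assumptions, not values from the text):

```python
# Avogadro constant: particles per mole (exact by the 2019 SI definition)
AVOGADRO = 6.02214076e23

molar_mass_water = 18.015   # g/mol, approximate molar mass of water (illustrative)
sample_mass = 36.03         # g of water, chosen for the example

amount = sample_mass / molar_mass_water   # amount of substance in mol
particles = amount * AVOGADRO             # number of water molecules

print(f"{amount:.3f} mol")                # ~2.000 mol
print(f"{particles:.3e} molecules")       # ~1.204e+24 molecules
```

Dividing such an amount in moles by the solution volume gives the molar concentration discussed next.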
Molar concentration is the amount of a particular substance per volume of solution, and is commonly reported in mol/dm³. === Phase === In addition to the specific chemical properties that distinguish different chemical classifications, chemicals can exist in several phases. For the most part, the chemical classifications are independent of these bulk phase classifications; however, some more exotic phases are incompatible with certain chemical properties. A phase is a set of states of a chemical system that have similar bulk structural properties, over a range of conditions, such as pressure or temperature. Physical properties, such as density and refractive index, tend to fall within values characteristic of the phase. The phase of matter is defined by the phase transition, which is when energy put into or taken out of the system goes into rearranging the structure of the system, instead of changing the bulk conditions. Sometimes the distinction between phases can be continuous instead of having a discrete boundary; in this case the matter is considered to be in a supercritical state. When three states meet based on the conditions, it is known as a triple point and since this is invariant, it is a convenient way to define a set of conditions. The most familiar examples of phases are solids, liquids, and gases. Many substances exhibit multiple solid phases. For example, there are three phases of solid iron (alpha, gamma, and delta) that vary based on temperature and pressure. A principal difference between solid phases is the crystal structure, or arrangement, of the atoms. Another phase commonly encountered in the study of chemistry is the aqueous phase, which is the state of substances dissolved in aqueous solution (that is, in water). Less familiar phases include plasmas, Bose–Einstein condensates and fermionic condensates and the paramagnetic and ferromagnetic phases of magnetic materials. While most familiar phases deal with three-dimensional systems, it is also possible to define analogs in two-dimensional systems, which has received attention for its relevance to systems in biology. === Bonding === Atoms sticking together in molecules or crystals are said to be bonded with one another. A chemical bond may be visualized as the multipole balance between the positive charges in the nuclei and the negative charges oscillating about them. More than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom. A chemical bond can be a covalent bond, an ionic bond, a hydrogen bond, or simply the result of van der Waals forces. Each of these kinds of bonds is ascribed to some potential. These potentials create the interactions which hold atoms together in molecules or crystals. In many simple compounds, valence bond theory, the Valence Shell Electron Pair Repulsion model (VSEPR), and the concept of oxidation number can be used to explain molecular structure and composition. An ionic bond is formed when a metal loses one or more of its electrons, becoming a positively charged cation, and the electrons are then gained by the non-metal atom, becoming a negatively charged anion. The two oppositely charged ions attract one another, and the ionic bond is the electrostatic force of attraction between them. For example, sodium (Na), a metal, loses one electron to become an Na+ cation while chlorine (Cl), a non-metal, gains this electron to become Cl−.
The ions are held together due to electrostatic attraction, and the compound sodium chloride (NaCl), or common table salt, is formed. In a covalent bond, one or more pairs of valence electrons are shared by two atoms: the resulting electrically neutral group of bonded atoms is termed a molecule. Atoms will share valence electrons in such a way as to create a noble gas electron configuration (eight electrons in their outermost shell) for each atom. Atoms that tend to combine in such a way that they each have eight electrons in their valence shell are said to follow the octet rule. However, some elements like hydrogen and lithium need only two electrons in their outermost shell to attain this stable configuration; these atoms are said to follow the duet rule, and in this way they reach the electron configuration of the noble gas helium, which has two electrons in its outer shell. Similarly, theories from classical physics can be used to predict many ionic structures. With more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used. === Energy === In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants. A reaction is said to be exergonic if the final state is lower on the energy scale than the initial state; in the case of endergonic reactions the situation is the reverse. A reaction is said to be exothermic if the reaction releases heat to the surroundings; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. Chemical reactions are not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e^(−E/kT), that is, the probability of a molecule having energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. A related concept, free energy, which also incorporates entropy considerations, is a very useful means in chemical thermodynamics for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction. A reaction is feasible only if the total change in the Gibbs free energy is negative, ΔG ≤ 0; if it is equal to zero, the chemical reaction is said to be at equilibrium. There exist only limited possible states of energy for electrons, atoms and molecules. These are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. The atoms/molecules in a higher energy state are said to be excited.
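As a minimal numerical sketch of the Boltzmann population factor e^(−E/kT) described above (the activation energy and temperatures below are illustrative assumptions, not values taken from the text):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant in J/K
E_a = 8.0e-20        # illustrative activation energy per molecule in J (~48 kJ/mol)

def boltzmann_factor(E, T):
    """Fraction of molecules with energy >= E at absolute temperature T (kelvin)."""
    return math.exp(-E / (k_B * T))

for T in (300.0, 350.0):
    print(f"T = {T:.0f} K: factor = {boltzmann_factor(E_a, T):.2e}")
# T = 300 K: factor ~ 4e-09
# T = 350 K: factor ~ 6e-08
```

Raising the temperature from 300 K to 350 K increases the factor by more than an order of magnitude, which is the exponential sensitivity of reaction rates to temperature captured by the Arrhenius equation.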
The molecules/atoms of substance in an excited energy state are often much more reactive; that is, more amenable to chemical reactions. The phase of a substance is invariably determined by its energy and the energy of its surroundings. When the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase such as a liquid or solid, as is the case with water (H₂O), a liquid at room temperature because its molecules are bound by hydrogen bonds. Hydrogen sulfide (H₂S), by contrast, is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole–dipole interactions. The transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. However, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than the photons involved in electronic energy transfer. Thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. For example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy. The existence of characteristic energy levels for different chemical substances is useful for their identification by the analysis of spectral lines. Different kinds of spectra are often used in chemical spectroscopy, e.g. IR, microwave, NMR, ESR, etc. Spectroscopy is also used to identify the composition of remote objects – like stars and distant galaxies – by analyzing their radiation spectra. The term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. === Reaction === When a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. A chemical reaction is therefore a concept related to the "reaction" of a substance when it comes in close contact with another (whether as a mixture or a solution), is exposed to some form of energy, or both. It results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels, often laboratory glassware. Chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules, or the rearrangement of atoms within or across molecules. Chemical reactions usually involve the making or breaking of chemical bonds. Oxidation, reduction, dissociation, acid–base neutralization and molecular rearrangement are some examples of common chemical reactions. A chemical reaction can be symbolically depicted through a chemical equation. While in a non-nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles, viz. protons and neutrons. The sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. A chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed.
Many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. Reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. Many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. Several empirical rules, like the Woodward–Hoffmann rules, often come in handy while proposing a mechanism for a chemical reaction. According to the IUPAC gold book, a chemical reaction is "a process that results in the interconversion of chemical species." Accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. An additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. Such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities (i.e. 'microscopic chemical events'). === Ions and salts === An ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. When an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. When an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. Cations and anions can form a crystalline lattice of neutral salts, such as the Na+ and Cl− ions forming sodium chloride, or NaCl. Examples of polyatomic ions that do not split up during acid–base reactions are hydroxide (OH−) and phosphate (PO₄³⁻). Plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. === Acidity and basicity === A substance can often be classified as an acid or a base. There are several different theories which explain acid–base behavior. The simplest is Arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. According to Brønsted–Lowry acid–base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction; by extension, a base is the substance which receives that hydrogen ion. A third common theory is Lewis acid–base theory, which is based on the formation of new chemical bonds. Lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. There are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept. Acid strength is commonly measured by two methods. One measurement, based on the Arrhenius definition of acidity, is pH, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. Thus, solutions that have a low pH have a high hydronium ion concentration and can be said to be more acidic. The other measurement, based on the Brønsted–Lowry definition, is the acid dissociation constant (Ka), which measures the relative ability of a substance to act as an acid under the Brønsted–Lowry definition of an acid. That is, substances with a higher Ka are more likely to donate hydrogen ions in chemical reactions than those with lower Ka values.
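As a brief worked example of the pH scale just described (the hydronium ion concentrations used here are illustrative assumptions):

```python
import math

def pH(hydronium_concentration):
    """pH as the negative base-10 logarithm of the hydronium ion concentration (mol/dm3)."""
    return -math.log10(hydronium_concentration)

print(pH(1.0e-3))    # 3.0  -> acidic (high hydronium ion concentration)
print(pH(1.0e-7))    # 7.0  -> neutral water at about 25 degrees Celsius
print(pH(1.0e-11))   # 11.0 -> basic (low hydronium ion concentration)
```

Each unit decrease in pH corresponds to a tenfold increase in hydronium ion concentration.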
=== Redox === Redox (reduction-oxidation) reactions include all chemical reactions in which atoms have their oxidation state changed by either gaining electrons (reduction) or losing electrons (oxidation). Substances that have the ability to oxidize other substances are said to be oxidative and are known as oxidizing agents, oxidants or oxidizers. An oxidant removes electrons from another substance. Similarly, substances that have the ability to reduce other substances are said to be reductive and are known as reducing agents, reductants, or reducers. A reductant transfers electrons to another substance and is thus oxidized itself. And because it "donates" electrons it is also called an electron donor. Oxidation and reduction properly refer to a change in oxidation number—the actual transfer of electrons may never occur. Thus, oxidation is better defined as an increase in oxidation number, and reduction as a decrease in oxidation number. === Equilibrium === Although the concept of equilibrium is widely used across sciences, in the context of chemistry, it arises whenever a number of different states of the chemical composition are possible, as for example, in a mixture of several chemical compounds that can react with one another, or when a substance can be present in more than one kind of phase. A system of chemical substances at equilibrium, even though having an unchanging composition, is most often not static; molecules of the substances continue to react with one another thus giving rise to a dynamic equilibrium. Thus the concept describes the state in which the parameters such as chemical composition remain unchanged over time. === Chemical laws === Chemical reactions are governed by certain laws, which have become fundamental concepts in chemistry. Some of them are: == History == The history of chemistry spans a period from the ancient past to the present. Since several millennia BC, civilizations were using technologies that would eventually form the basis of the various branches of chemistry. Examples include extracting metals from ores, making pottery and glazes, fermenting beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into soap, making glass, and making alloys like bronze. Chemistry was preceded by its protoscience, alchemy, which operated a non-scientific approach to understanding the constituents of matter and their interactions. Despite being unsuccessful in explaining the nature of matter and its transformations, alchemists set the stage for modern chemistry by performing experiments and recording the results. Robert Boyle, although skeptical of elements and convinced of alchemy, played a key part in elevating the "sacred art" as an independent, fundamental and philosophical discipline in his work The Sceptical Chymist (1661). While both alchemy and chemistry are concerned with matter and its transformations, the crucial difference was given by the scientific method that chemists employed in their work. Chemistry, as a body of knowledge distinct from alchemy, became an established science with the work of Antoine Lavoisier, who developed a law of conservation of mass that demanded careful measurement and quantitative observations of chemical phenomena. The history of chemistry afterwards is intertwined with the history of thermodynamics, especially through the work of Willard Gibbs. === Definition === The definition of chemistry has changed over time, as new discoveries and theories add to the functionality of the science. 
The term "chymistry", in the view of noted scientist Robert Boyle in 1661, meant the subject of the material principles of mixed bodies. In 1663, the chemist Christopher Glaser described "chymistry" as a scientific art, by which one learns to dissolve bodies, and draw from them the different substances on their composition, and how to unite them again, and exalt them to a higher perfection. The 1730 definition of the word "chemistry", as used by Georg Ernst Stahl, meant the art of resolving mixed, compound, or aggregate bodies into their principles; and of composing such bodies from those principles. In 1837, Jean-Baptiste Dumas considered the word "chemistry" to refer to the science concerned with the laws and effects of molecular forces. This definition further evolved until, in 1947, it came to mean the science of substances: their structure, their properties, and the reactions that change them into other substances—a characterization accepted by Linus Pauling. More recently, in 1998, Professor Raymond Chang broadened the definition of "chemistry" to mean the study of matter and the changes it undergoes. === Background === Early civilizations, such as the Egyptians, Babylonians, and Indians, amassed practical knowledge concerning the arts of metallurgy, pottery and dyes, but did not develop a systematic theory. A basic chemical hypothesis first emerged in Classical Greece with the theory of four elements as propounded definitively by Aristotle stating that fire, air, earth and water were the fundamental elements from which everything is formed as a combination. Greek atomism dates back to 440 BC, arising in works by philosophers such as Democritus and Epicurus. In 50 BCE, the Roman philosopher Lucretius expanded upon the theory in his poem De rerum natura (On The Nature of Things). Unlike modern concepts of science, Greek atomism was purely philosophical in nature, with little concern for empirical observations and no concern for chemical experiments. An early form of the idea of conservation of mass is the notion that "Nothing comes from nothing" in Ancient Greek philosophy, which can be found in Empedocles (approx. 4th century BC): "For it is impossible for anything to come to be from what is not, and it cannot be brought about or heard of that what is should be utterly destroyed." and Epicurus (3rd century BC), who, describing the nature of the Universe, wrote that "the totality of things was always such as it is now, and always will be". In the Hellenistic world the art of alchemy first proliferated, mingling magic and occultism into the study of natural substances with the ultimate goal of transmuting elements into gold and discovering the elixir of eternal life. Work, particularly the development of distillation, continued in the early Byzantine period with the most famous practitioner being the 4th century Greek-Egyptian Zosimos of Panopolis. Alchemy continued to be developed and practised throughout the Arab world after the Muslim conquests, and from there, and from the Byzantine remnants, diffused into medieval and Renaissance Europe through Latin translations. The Arabic works attributed to Jabir ibn Hayyan introduced a systematic classification of chemical substances, and provided instructions for deriving an inorganic compound (sal ammoniac or ammonium chloride) from organic substances (such as plants, blood, and hair) by chemical means. 
Some Arabic Jabirian works (e.g., the "Book of Mercy", and the "Book of Seventy") were later translated into Latin under the Latinized name "Geber", and in 13th-century Europe an anonymous writer, usually referred to as pseudo-Geber, started to produce alchemical and metallurgical writings under this name. Later influential Muslim philosophers, such as Abū al-Rayhān al-Bīrūnī and Avicenna, disputed the theories of alchemy, particularly the theory of the transmutation of metals. Improvements in the refining of ores and their extraction to smelt metals were a widely used source of information for early chemists in the 16th century, among them Georg Agricola (1494–1555), who published his major work De re metallica in 1556. His work, describing highly developed and complex processes of mining metal ores and metal extraction, was the pinnacle of metallurgy during that time. His approach removed all mysticism associated with the subject, creating the practical base upon which others could and would build. The work describes the many kinds of furnaces used to smelt ore, and stimulated interest in minerals and their composition. Agricola has been described as the "father of metallurgy" and the founder of geology as a scientific discipline. Under the influence of the new empirical methods propounded by Sir Francis Bacon and others, a group of chemists at Oxford, Robert Boyle, Robert Hooke and John Mayow, began to reshape the old alchemical traditions into a scientific discipline. Boyle in particular questioned some commonly held chemical theories and argued for chemical practitioners to be more "philosophical" and less commercially focused in The Sceptical Chymist. He formulated Boyle's law, rejected the classical "four elements" and proposed a mechanistic alternative of atoms and chemical reactions that could be subject to rigorous experiment. In the following decades, many important discoveries were made, such as the nature of 'air', which was discovered to be composed of many different gases. The Scottish chemist Joseph Black and the Flemish Jan Baptist van Helmont discovered carbon dioxide, or what Black called 'fixed air', in 1754; Henry Cavendish discovered hydrogen and elucidated its properties, and Joseph Priestley and, independently, Carl Wilhelm Scheele isolated pure oxygen. The theory of phlogiston (a substance at the root of all combustion) was propounded by the German Georg Ernst Stahl in the early 18th century and was only overturned by the end of the century by the French chemist Antoine Lavoisier, the chemical analogue of Newton in physics. Lavoisier did more than any other to establish the new science on proper theoretical footing, by elucidating the principle of conservation of mass and developing a new system of chemical nomenclature used to this day. The English scientist John Dalton proposed the modern theory of atoms: that all substances are composed of indivisible 'atoms' of matter and that different atoms have varying atomic weights. The development of the electrochemical theory of chemical combinations occurred in the early 19th century as the result of the work of two scientists in particular, Jöns Jacob Berzelius and Humphry Davy, made possible by the prior invention of the voltaic pile by Alessandro Volta. Davy discovered nine new elements, including the alkali metals, by extracting them from their oxides with electric current.
The British chemist William Prout first proposed ordering all the elements by their atomic weight, as all atoms had a weight that was an exact multiple of the atomic weight of hydrogen. J.A.R. Newlands devised an early table of elements, which was then developed into the modern periodic table of elements in the 1860s by Dmitri Mendeleev and independently by several other scientists including Julius Lothar Meyer. The inert gases, later called the noble gases, were discovered by William Ramsay in collaboration with Lord Rayleigh at the end of the century, thereby filling in the basic structure of the table. Organic chemistry was developed by Justus von Liebig and others, following Friedrich Wöhler's synthesis of urea. Other crucial 19th-century advances were an understanding of valence bonding (Edward Frankland in 1852) and the application of thermodynamics to chemistry (J. W. Gibbs and Svante Arrhenius in the 1870s). At the turn of the twentieth century, the theoretical underpinnings of chemistry were finally understood due to a series of remarkable discoveries that succeeded in probing and discovering the very nature of the internal structure of atoms. In 1897, J.J. Thomson of the University of Cambridge discovered the electron, and soon after the French scientist Becquerel as well as the couple Pierre and Marie Curie investigated the phenomenon of radioactivity. In a series of pioneering scattering experiments, Ernest Rutherford at the University of Manchester discovered the internal structure of the atom and the existence of the proton, classified and explained the different types of radioactivity and successfully transmuted the first element by bombarding nitrogen with alpha particles. His work on atomic structure was improved on by his students, the Danish physicist Niels Bohr, the Englishman Henry Moseley and the German Otto Hahn, who went on to father the emerging nuclear chemistry and discovered nuclear fission. The electronic theory of chemical bonds and molecular orbitals was developed by the American scientists Linus Pauling and Gilbert N. Lewis. The year 2011 was declared by the United Nations as the International Year of Chemistry. It was an initiative of the International Union of Pure and Applied Chemistry and of the United Nations Educational, Scientific, and Cultural Organization, and it involved chemical societies, academics, and institutions worldwide, relying on individual initiatives to organize local and regional activities. == Practice == In the practice of chemistry, pure chemistry is the study of the fundamental principles of chemistry, while applied chemistry applies that knowledge to develop technology and solve real-world problems. === Subdisciplines === Chemistry is typically divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry. Analytical chemistry is the analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry. Biochemistry is the study of the chemicals, chemical reactions and interactions that take place at a molecular level in living organisms. Biochemistry is highly interdisciplinary, covering medicinal chemistry, neurochemistry, molecular biology, forensics, plant science and genetics.
Inorganic chemistry is the study of the properties and reactions of inorganic compounds, such as metals and minerals. The distinction between organic and inorganic disciplines is not absolute and there is much overlap, most importantly in the sub-discipline of organometallic chemistry. Materials chemistry is the preparation, characterization, and understanding of solid state components or devices with a useful current or future function. The field is a new breadth of study in graduate programs, and it integrates elements from all classical areas of chemistry like organic chemistry, inorganic chemistry, and crystallography with a focus on fundamental issues that are unique to materials. Primary systems of study include the chemistry of condensed phases (solids, liquids, polymers) and interfaces between different phases. Neurochemistry is the study of neurochemicals; including transmitters, peptides, proteins, lipids, sugars, and nucleic acids; their interactions, and the roles they play in forming, maintaining, and modifying the nervous system. Nuclear chemistry is the study of how subatomic particles come together and make nuclei. Modern transmutation is a large component of nuclear chemistry, and the table of nuclides is an important result and tool for this field. In addition to medical applications, nuclear chemistry encompasses nuclear engineering which explores the topic of using nuclear power sources for generating energy. Organic chemistry is the study of the structure, properties, composition, mechanisms, and reactions of organic compounds. An organic compound is defined as any compound based on a carbon skeleton. Organic compounds can be classified, organized and understood in reactions by their functional groups, unit atoms or molecules that show characteristic chemical properties in a compound. Physical chemistry is the study of the physical and fundamental basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, statistical mechanics, spectroscopy, and more recently, astrochemistry. Physical chemistry has large overlap with molecular physics. Physical chemistry involves the use of infinitesimal calculus in deriving equations. It is usually associated with quantum chemistry and theoretical chemistry. Physical chemistry is a distinct discipline from chemical physics, but again, there is very strong overlap. Theoretical chemistry is the study of chemistry via fundamental theoretical reasoning (usually within mathematics or physics). In particular the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has large overlap with (theoretical and experimental) condensed matter physics and molecular physics. Other subdivisions include electrochemistry, femtochemistry, flavor chemistry, flow chemistry, immunohistochemistry, hydrogenation chemistry, mathematical chemistry, molecular mechanics, natural product chemistry, organometallic chemistry, petrochemistry, photochemistry, physical organic chemistry, polymer chemistry, radiochemistry, sonochemistry, supramolecular chemistry, synthetic chemistry, and many others. 
=== Interdisciplinary === Interdisciplinary fields include agrochemistry, astrochemistry (and cosmochemistry), atmospheric chemistry, chemical engineering, chemical biology, chemo-informatics, environmental chemistry, geochemistry, green chemistry, immunochemistry, marine chemistry, materials science, mechanochemistry, medicinal chemistry, molecular biology, nanotechnology, oenology, pharmacology, phytochemistry, solid-state chemistry, surface science, thermochemistry, and many others. === Industry === The chemical industry represents an important economic activity worldwide. The global top 50 chemical producers in 2013 had sales of US$980.5 billion with a profit margin of 10.3%. === Professional societies === == See also == == References == == Bibliography == == Further reading == Popular reading Atkins, P. W. Galileo's Finger (Oxford University Press) ISBN 0-19-860941-8 Atkins, P. W. Atkins' Molecules (Cambridge University Press) ISBN 0-521-82397-8 Kean, Sam. The Disappearing Spoon – and Other True Tales from the Periodic Table (Black Swan) London, England, 2010 ISBN 978-0-552-77750-6 Levi, Primo The Periodic Table (Penguin Books) [1975] translated from the Italian by Raymond Rosenthal (1984) ISBN 978-0-14-139944-7 Stwertka, A. A Guide to the Elements (Oxford University Press) ISBN 0-19-515027-9 "Dictionary of the History of Ideas". Archived from the original on 10 March 2008. "Chemistry" . Encyclopædia Britannica. Vol. 6 (11th ed.). 1911. pp. 33–76. Introductory undergraduate textbooks Atkins, P.W., Overton, T., Rourke, J., Weller, M. and Armstrong, F. Shriver and Atkins Inorganic Chemistry (4th ed.) 2006 (Oxford University Press) ISBN 0-19-926463-5 Chang, Raymond. Chemistry 6th ed. Boston, Massachusetts: James M. Smith, 1998. ISBN 0-07-115221-0 Clayden, Jonathan; Greeves, Nick; Warren, Stuart; Wothers, Peter (2001). Organic Chemistry (1st ed.). Oxford University Press. ISBN 978-0-19-850346-0. Voet and Voet. Biochemistry (Wiley) ISBN 0-471-58651-X Advanced undergraduate-level or graduate textbooks Atkins, P. W. Physical Chemistry (Oxford University Press) ISBN 0-19-879285-9 Atkins, P. W. et al. Molecular Quantum Mechanics (Oxford University Press) McWeeny, R. Coulson's Valence (Oxford Science Publications) ISBN 0-19-855144-4 Pauling, L. The Nature of the chemical bond (Cornell University Press) ISBN 0-8014-0333-2 Pauling, L., and Wilson, E. B. Introduction to Quantum Mechanics with Applications to Chemistry (Dover Publications) ISBN 0-486-64871-0 Smart and Moore. Solid State Chemistry: An Introduction (Chapman and Hall) ISBN 0-412-40040-5 Stephenson, G. Mathematical Methods for Science Students (Longman) ISBN 0-582-44416-0 == External links == General Chemistry principles, patterns and applications.
Wikipedia/Chemical_science
A network solid or covalent network solid (also called atomic crystalline solids or giant covalent structures) is a chemical compound (or element) in which the atoms are bonded by covalent bonds in a continuous network extending throughout the material. In a network solid there are no individual molecules, and the entire crystal or amorphous solid may be considered a macromolecule. Formulas for network solids, like those for ionic compounds, are simple ratios of the component atoms represented by a formula unit. Examples of network solids include diamond, with a continuous network of carbon atoms, and silicon dioxide or quartz, with a continuous three-dimensional network of SiO₂ units. Graphite and the mica group of silicate minerals structurally consist of continuous two-dimensional sheets covalently bonded within the layer, with other bond types holding the layers together. Disordered network solids are termed glasses. These are typically formed on rapid cooling of melts so that little time is left for atomic ordering to occur. == Properties == Hardness: Very hard, due to the strong covalent bonds throughout the lattice (deformation can be easier, however, in directions that do not require the breaking of any covalent bonds, as with flexing or sliding of sheets in graphite or mica). Melting point: High, since melting means breaking covalent bonds (rather than merely overcoming weaker intermolecular forces). Solid-phase electrical conductivity: Variable, depending on the nature of the bonding: network solids in which all electrons are used for sigma bonds (e.g. diamond, quartz) are poor conductors, as there are no delocalized electrons. However, network solids with delocalized pi bonds (e.g. graphite) or dopants can exhibit metal-like conductivity. Liquid-phase electrical conductivity: Low, as the macromolecule consists of neutral atoms, meaning that melting does not free up any new charge carriers (as it would for an ionic compound). Solubility: Generally insoluble in any solvent due to the difficulty of solvating such a large molecule. == Examples == Examples include boron nitride (BN), diamond (carbon, C), quartz (SiO₂), rhenium diboride (ReB₂), silicon carbide (moissanite or carborundum, SiC), silicon (Si), germanium (Ge), aluminium nitride (AlN), and the α-tin allotrope (gray tin, Sn). == See also == Molecular solid == References ==
Wikipedia/Network_solids
Botany, also called plant science, is the branch of natural science and biology studying plants, especially their anatomy, taxonomy, and ecology. A botanist or plant scientist is a scientist who specialises in this field. "Plant" and "botany" may be defined more narrowly to include only land plants and their study, which is also known as phytology. Phytologists or botanists (in the strict sense) study approximately 410,000 species of land plants, including some 391,000 species of vascular plants (of which approximately 369,000 are flowering plants) and approximately 20,000 bryophytes. Botany originated as prehistoric herbalism to identify and later cultivate plants that were edible, poisonous, and medicinal, making it one of the first endeavours of human investigation. Medieval physic gardens, often attached to monasteries, contained plants possibly having medicinal benefit. They were forerunners of the first botanical gardens attached to universities, founded from the 1540s onwards. One of the earliest was the Padua botanical garden. These gardens facilitated the academic study of plants. Efforts to catalogue and describe their collections were the beginnings of plant taxonomy and led in 1753 to the binomial system of nomenclature of Carl Linnaeus that remains in use to this day for the naming of all biological species. In the 19th and 20th centuries, new techniques were developed for the study of plants, including methods of optical microscopy and live cell imaging, electron microscopy, analysis of chromosome number, plant chemistry and the structure and function of enzymes and other proteins. In the last two decades of the 20th century, botanists exploited the techniques of molecular genetic analysis, including genomics and proteomics and DNA sequences to classify plants more accurately. Modern botany is a broad subject with contributions and insights from most other areas of science and technology. Research topics include the study of plant structure, growth and differentiation, reproduction, biochemistry and primary metabolism, chemical products, development, diseases, evolutionary relationships, systematics, and plant taxonomy. Dominant themes in 21st-century plant science are molecular genetics and epigenetics, which study the mechanisms and control of gene expression during differentiation of plant cells and tissues. Botanical research has diverse applications in providing staple foods, materials such as timber, oil, rubber, fibre and drugs, in modern horticulture, agriculture and forestry, plant propagation, breeding and genetic modification, in the synthesis of chemicals and raw materials for construction and energy production, in environmental management, and the maintenance of biodiversity. == Etymology == The term "botany" comes from the Ancient Greek word botanē (βοτάνη) meaning "pasture", "herbs" "grass", or "fodder"; Botanē is in turn derived from boskein (Greek: βόσκειν), "to feed" or "to graze". Traditionally, botany has also included the study of fungi and algae by mycologists and phycologists respectively, with the study of these three groups of organisms remaining within the sphere of interest of the International Botanical Congress. == History == === Early botany === Botany originated as herbalism, the study and use of plants for their possible medicinal properties. The early recorded history of botany includes many ancient writings and plant classifications. 
Examples of early botanical works have been found in ancient texts from India dating back to before 1100 BCE, Ancient Egypt, in archaic Avestan writings, and in works from China purportedly from before 221 BCE. Modern botany traces its roots back to Ancient Greece specifically to Theophrastus (c. 371–287 BCE), a student of Aristotle who invented and described many of its principles and is widely regarded in the scientific community as the "Father of Botany". His major works, Enquiry into Plants and On the Causes of Plants, constitute the most important contributions to botanical science until the Middle Ages, almost seventeen centuries later. Another work from Ancient Greece that made an early impact on botany is De materia medica, a five-volume encyclopedia about preliminary herbal medicine written in the middle of the first century by Greek physician and pharmacologist Pedanius Dioscorides. De materia medica was widely read for more than 1,500 years. Important contributions from the medieval Muslim world include Ibn Wahshiyya's Nabatean Agriculture, Abū Ḥanīfa Dīnawarī's (828–896) the Book of Plants, and Ibn Bassal's The Classification of Soils. In the early 13th century, Abu al-Abbas al-Nabati, and Ibn al-Baitar (d. 1248) wrote on botany in a systematic and scientific manner. In the mid-16th century, botanical gardens were founded in a number of Italian universities. The Padua botanical garden in 1545 is usually considered to be the first which is still in its original location. These gardens continued the practical value of earlier "physic gardens", often associated with monasteries, in which plants were cultivated for suspected medicinal uses. They supported the growth of botany as an academic subject. Lectures were given about the plants grown in the gardens. Botanical gardens came much later to northern Europe; the first in England was the University of Oxford Botanic Garden in 1621. German physician Leonhart Fuchs (1501–1566) was one of "the three German fathers of botany", along with theologian Otto Brunfels (1489–1534) and physician Hieronymus Bock (1498–1554) (also called Hieronymus Tragus). Fuchs and Brunfels broke away from the tradition of copying earlier works to make original observations of their own. Bock created his own system of plant classification. Physician Valerius Cordus (1515–1544) authored a botanically and pharmacologically important herbal Historia Plantarum in 1544 and a pharmacopoeia of lasting importance, the Dispensatorium in 1546. Naturalist Conrad von Gesner (1516–1565) and herbalist John Gerard (1545 – c. 1611) published herbals covering the supposed medicinal uses of plants. Naturalist Ulisse Aldrovandi (1522–1605) was considered the father of natural history, which included the study of plants. In 1665, using an early microscope, Polymath Robert Hooke discovered cells (a term he coined) in cork, and a short time later in living plant tissue. === Early modern botany === During the 18th century, systems of plant identification were developed comparable to dichotomous keys, where unidentified plants are placed into taxonomic groups (e.g. family, genus and species) by making a series of choices between pairs of characters. The choice and sequence of the characters may be artificial in keys designed purely for identification (diagnostic keys) or more closely related to the natural or phyletic order of the taxa in synoptic keys. 
By the 18th century, new plants for study were arriving in Europe in increasing numbers from newly discovered countries and the European colonies worldwide. In 1753, Carl Linnaeus published his Species Plantarum, a hierarchical classification of plant species that remains the reference point for modern botanical nomenclature. This established a standardised binomial or two-part naming scheme where the first name represented the genus and the second identified the species within the genus. For the purposes of identification, Linnaeus's Systema Sexuale classified plants into 24 groups according to the number of their male sexual organs. The 24th group, Cryptogamia, included all plants with concealed reproductive parts, mosses, liverworts, ferns, algae and fungi. Increasing knowledge of plant anatomy, morphology and life cycles led to the realisation that there were more natural affinities between plants than the artificial sexual system of Linnaeus. Adanson (1763), de Jussieu (1789), and Candolle (1819) all proposed various alternative natural systems of classification that grouped plants using a wider range of shared characters and were widely followed. The Candollean system reflected his ideas of the progression of morphological complexity and the later Bentham & Hooker system, which was influential until the mid-19th century, was influenced by Candolle's approach. Darwin's publication of the Origin of Species in 1859 and his concept of common descent required modifications to the Candollean system to reflect evolutionary relationships as distinct from mere morphological similarity. In the 19th century botany was a socially acceptable hobby for upper-class women. These women would collect and paint flowers and plants from around the world with scientific accuracy. The paintings were used to record many species that could not be transported or maintained in other environments. Marianne North illustrated over 900 species in extreme detail with watercolor and oil paintings. Her work and many other women's botany work was the beginning of popularizing botany to a wider audience. Botany was greatly stimulated by the appearance of the first "modern" textbook, Matthias Schleiden's Grundzüge der Wissenschaftlichen Botanik, published in English in 1849 as Principles of Scientific Botany. Schleiden was a microscopist and an early plant anatomist who co-founded the cell theory with Theodor Schwann and Rudolf Virchow and was among the first to grasp the significance of the cell nucleus that had been described by Robert Brown in 1831. In 1855, Adolf Fick formulated Fick's laws that enabled the calculation of the rates of molecular diffusion in biological systems. === Late modern botany === Building upon the gene-chromosome theory of heredity that originated with Gregor Mendel (1822–1884), August Weismann (1834–1914) proved that inheritance only takes place through gametes. No other cells can pass on inherited characters. The work of Katherine Esau (1898–1997) on plant anatomy is still a major foundation of modern botany. Her books Plant Anatomy and Anatomy of Seed Plants have been key plant structural biology texts for more than half a century. The discipline of plant ecology was pioneered in the late 19th century by botanists such as Eugenius Warming, who produced the hypothesis that plants form communities, and his mentor and successor Christen C. Raunkiær whose system for describing plant life forms is still in use today. 
The concept that the composition of plant communities such as temperate broadleaf forest changes by a process of ecological succession was developed by Henry Chandler Cowles, Arthur Tansley and Frederic Clements. Clements is credited with the idea of climax vegetation as the most complex vegetation that an environment can support and Tansley introduced the concept of ecosystems to biology. Building on the extensive earlier work of Alphonse de Candolle, Nikolai Vavilov (1887–1943) produced accounts of the biogeography, centres of origin, and evolutionary history of economic plants. Particularly since the mid-1960s there have been advances in understanding of the physics of plant physiological processes such as transpiration (the transport of water within plant tissues), the temperature dependence of rates of water evaporation from the leaf surface and the molecular diffusion of water vapour and carbon dioxide through stomatal apertures. These developments, coupled with new methods for measuring the size of stomatal apertures, and the rate of photosynthesis have enabled precise description of the rates of gas exchange between plants and the atmosphere. Innovations in statistical analysis by Ronald Fisher, Frank Yates and others at Rothamsted Experimental Station facilitated rational experimental design and data analysis in botanical research. The discovery and identification of the auxin plant hormones by Kenneth V. Thimann in 1948 enabled regulation of plant growth by externally applied chemicals. Frederick Campion Steward pioneered techniques of micropropagation and plant tissue culture controlled by plant hormones. The synthetic auxin 2,4-dichlorophenoxyacetic acid or 2,4-D was one of the first commercial synthetic herbicides. 20th century developments in plant biochemistry have been driven by modern techniques of organic chemical analysis, such as spectroscopy, chromatography and electrophoresis. With the rise of the related molecular-scale biological approaches of molecular biology, genomics, proteomics and metabolomics, the relationship between the plant genome and most aspects of the biochemistry, physiology, morphology and behaviour of plants can be subjected to detailed experimental analysis. The concept originally stated by Gottlieb Haberlandt in 1902 that all plant cells are totipotent and can be grown in vitro ultimately enabled the use of genetic engineering experimentally to knock out a gene or genes responsible for a specific trait, or to add genes such as GFP that report when a gene of interest is being expressed. These technologies enable the biotechnological use of whole plants or plant cell cultures grown in bioreactors to synthesise pesticides, antibiotics or other pharmaceuticals, as well as the practical application of genetically modified crops designed for traits such as improved yield. Modern morphology recognises a continuum between the major morphological categories of root, stem (caulome), leaf (phyllome) and trichome. Furthermore, it emphasises structural dynamics. Modern systematics aims to reflect and discover phylogenetic relationships between plants. Modern molecular phylogenetics largely ignores morphological characters, relying on DNA sequences as data. Molecular analysis of DNA sequences from most families of flowering plants enabled the Angiosperm Phylogeny Group to publish in 1998 a phylogeny of flowering plants, answering many of the questions about relationships among angiosperm families and species. 
The theoretical possibility of a practical method for identification of plant species and commercial varieties by DNA barcoding is the subject of active current research. == Branches of botany == Botany is divided along several axes. Some subfields of botany relate to particular groups of organisms. Divisions related to the broader historical sense of botany include bacteriology, mycology (or fungology), and phycology – respectively, the study of bacteria, fungi, and algae – with lichenology as a subfield of mycology. The narrower sense of botany as the study of embryophytes (land plants) is called phytology. Bryology is the study of mosses (and in the broader sense also liverworts and hornworts). Pteridology (or filicology) is the study of ferns and allied plants. A number of other taxa of ranks varying from family to subgenus have terms for their study, including agrostology (or graminology) for the study of grasses, synantherology for the study of composites, and batology for the study of brambles. Study can also be divided by guild rather than clade or grade. For example, dendrology is the study of woody plants. Many divisions of biology have botanical subfields. These are commonly denoted by prefixing the word plant (e.g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics), or prefixing or substituting the prefix phyto- (e.g. phytochemistry, phytogeography). The study of fossil plants is called palaeobotany. Other fields are denoted by adding or substituting the word botany (e.g. systematic botany). Phytosociology is a subfield of plant ecology that classifies and studies communities of plants. The intersection of fields from the above pair of categories gives rise to fields such as bryogeography, the study of the distribution of mosses. Different parts of plants also give rise to their own subfields, including xylology, carpology (or fructology), and palynology, these being the study of wood, fruit and pollen/spores respectively. Botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology, and phytopharmacology. == Scope and importance == The study of plants is vital because they underpin almost all animal life on Earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. Plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. As a by-product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. In addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. Plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. Historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. 
Botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. At each of these levels, a botanist may be concerned with the classification (taxonomy), phylogeny and evolution, structure (anatomy and morphology), or function (physiology) of plant life. The strictest definition of "plant" includes only the "land plants" or embryophytes, which include seed plants (gymnosperms, including the pines, and flowering plants) and the free-sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. Embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. They have life cycles with alternating haploid and diploid phases. The sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. Other groups of organisms that were previously studied by botanists include bacteria (now studied in bacteriology), fungi (mycology) – including lichen-forming fungi (lichenology), non-chlorophyte algae (phycology), and viruses (virology). However, attention is still given to these groups by botanists, and fungi (including lichens) and photosynthetic protists are usually covered in introductory botany courses. Palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. Cyanobacteria, the first oxygen-releasing photosynthetic organisms on Earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. The new photosynthetic plants (along with their algal relatives) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen-free, reducing atmosphere to one in which free oxygen has been abundant for more than 2 billion years. Among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life's basic ingredients: energy, carbon, oxygen, nitrogen and water, and ways that our plant stewardship can help address the global environmental issues of resource management, conservation, human food security, biologically invasive organisms, carbon sequestration, climate change, and sustainability. === Human nutrition === Virtually all staple foods come either directly from primary production by plants, or indirectly from animals that eat them. Plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. This is what ecologists call the first trophic level. The modern forms of the major staple foods, such as teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics. Botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity's ability to feed the world and provide food security for future generations. 
Botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. Ethnobotany is the study of the relationships between plants and people. When applied to the investigation of historical plant–people relationships ethnobotany may be referred to as archaeobotany or palaeoethnobotany. Some of the earliest plant-people relationships arose between the indigenous people of Canada in identifying edible plants from inedible plants. This relationship the indigenous people had with plants was recorded by ethnobotanists. == Plant biochemistry == Plant biochemistry is the study of the chemical processes used by plants. Some of these processes are used in their primary metabolism like the photosynthetic Calvin cycle and crassulacean acid metabolism. Others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. Plants and various other groups of photosynthetic eukaryotes collectively known as "algae" have unique organelles known as chloroplasts. Chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. Chloroplasts and cyanobacteria contain the blue-green pigment chlorophyll a. Chlorophyll a (as well as its plant and green algal-specific cousin chlorophyll b) absorbs light in the blue-violet and orange/red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. The energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy-rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen (O2) as a by-product. The light energy captured by chlorophyll a is initially in the form of electrons (and later a proton gradient) that is used to make molecules of ATP and NADPH which temporarily store and transport energy. Their energy is used in the light-independent reactions of the Calvin cycle by the enzyme rubisco to produce molecules of the 3-carbon sugar glyceraldehyde 3-phosphate (G3P). Glyceraldehyde 3-phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. Some of the glucose is converted to starch which is stored in the chloroplast. Starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family Asteraceae. Some of the glucose is converted to sucrose (common table sugar) for export to the rest of the plant. Unlike in animals (which lack chloroplasts), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. The fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. Plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. 
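The light-dependent reactions and the Calvin cycle described above are often summarised by the overall equation for oxygenic photosynthesis, a simplification that hides the many intermediate steps but makes the stoichiometry clear:

\[ 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\text{light}}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \]

Cellular respiration runs this stoichiometry in reverse, releasing the chemical energy stored in the sugar.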
Vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. Lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. Sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. It is widely regarded as a marker for the start of land plant evolution during the Ordovician period. The concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the Ordovician and Silurian periods. Many monocots like maize and the pineapple and some dicots like the Asteraceae have since independently evolved pathways like Crassulacean acid metabolism and the C4 carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration in the more common C3 carbon fixation pathway. These biochemical strategies are unique to land plants. === Medicine and materials === Phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. Some of these compounds are toxins such as the alkaloid coniine from hemlock. Others, such as the essential oils peppermint oil and lemon oil, are useful for their aroma, as flavourings and spices (e.g., capsaicin), and in medicine as pharmaceuticals as in opium from opium poppies. Many medicinal and recreational drugs, such as tetrahydrocannabinol (active ingredient in cannabis), caffeine, morphine and nicotine come directly from plants. Others are simple derivatives of botanical natural products. For example, the pain killer aspirin is the acetyl ester of salicylic acid, originally isolated from the bark of willow trees, and a wide range of opiate painkillers like heroin are obtained by chemical modification of morphine obtained from the opium poppy. Popular stimulants come from plants, such as caffeine from coffee, tea and chocolate, and nicotine from tobacco. Most alcoholic beverages come from fermentation of carbohydrate-rich plant products such as barley (beer), rice (sake) and grapes (wine). Native Americans have used various plants as ways of treating illness or disease for thousands of years. This knowledge of plants has been recorded by ethnobotanists and has in turn been used by pharmaceutical companies as a route to drug discovery. Plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine, yellow weld and blue woad used together to produce Lincoln green, indoxyl, source of the blue dye indigo traditionally used to dye denim and the artist's pigments gamboge and rose madder. Sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. Charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal-smelting fuel, as a filter material and adsorbent and as an artist's material and is one of the three ingredients of gunpowder. 
Cellulose, the world's most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. Products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. Sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. Sweetgrass was used by Native Americans to ward off biting insects such as mosquitoes. These insect-repelling properties of sweetgrass were later attributed, in findings reported by the American Chemical Society, to the molecules phytol and coumarin. == Plant ecology == Plant ecology is the science of the functional relationships between plants and their habitats – the environments where they complete their life cycles. Plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. Some ecologists even rely on empirical data from indigenous people, gathered by ethnobotanists. Such information can reveal a great deal about how the land was thousands of years ago and how it has changed over that time. The goals of plant ecology are to understand the causes of plant distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. Plants depend on certain edaphic (soil) and climatic factors in their environment but can modify these factors too. For example, they can change their environment's albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. Plants compete with other organisms in their ecosystem for resources. They interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. Regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest. Herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. Other organisms form mutually beneficial relationships with plants. For example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds. === Plants, climate and environmental change === Plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. For example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. Palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago, allows the reconstruction of past climates. Estimates of atmospheric CO2 concentrations since the Palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. Ozone depletion can expose plants to higher levels of ultraviolet radiation-B (UV-B), resulting in lower growth rates. 
Moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. == Genetics == Inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. Gregor Mendel discovered the genetic laws of inheritance by studying inherited traits such as shape in Pisum sativum (peas). What Mendel learned from studying plants has had far-reaching benefits outside of botany. Similarly, "jumping genes" were discovered by Barbara McClintock while she was studying maize. Nevertheless, there are some distinctive genetic differences between plants and other organisms. Species boundaries in plants may be weaker than in animals, and cross species hybrids are often possible. A familiar example is peppermint, Mentha × piperita, a sterile hybrid between Mentha aquatica and spearmint, Mentha spicata. The many cultivated varieties of wheat are the result of multiple inter- and intra-specific crosses between wild species and their hybrids. Angiosperms with monoecious flowers often have self-incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. This is one of several methods used by plants to promote outcrossing. In many land plants the male and female gametes are produced by separate individuals. These species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes. Charles Darwin in his 1878 book The Effects of Cross and Self-Fertilization in the Vegetable Kingdom at the start of chapter XII noted "The first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross-fertilisation is beneficial and self-fertilisation often injurious, at least with the plants on which I experimented." An important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. This beneficial effect is also known as hybrid vigor or heterosis. Once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. Unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. The formation of stem tubers in potato is one example. Particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. This is one of several types of apomixis that occur in plants. Apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. Most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. This can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid (endopolyploidy), or during gamete formation. 
An allopolyploid plant may result from a hybridisation event between two different species. Both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross-breed successfully with the parent population because there is a mismatch in chromosome numbers. These plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. Some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. Durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. The commercial banana is an example of a sterile, seedless triploid hybrid. Common dandelion is a triploid that produces viable seeds by apomictic seed. As in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non-Mendelian. Chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. === Molecular genetics === A considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the Thale cress, Arabidopsis thaliana, a weedy species in the mustard family (Brassicaceae). The genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of DNA, forming one of the smallest genomes among flowering plants. Arabidopsis was the first plant to have its genome sequenced, in 2000. The sequencing of some other relatively small genomes, of rice (Oryza sativa) and Brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally. Model plants such as Arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. Ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. Corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in C4 plants. The single celled green alga Chlamydomonas reinhardtii, while not an embryophyte itself, contains a green-pigmented chloroplast related to that of land plants, making it useful for study. A red alga Cyanidioschyzon merolae has also been used to study some basic chloroplast functions. Spinach, peas, soybeans and a moss Physcomitrella patens are commonly used to study plant cell biology. Agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus-inducing Ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. Schell and Van Montagu (1977) hypothesised that the Ti plasmid could be a natural vector for introducing the Nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. Today, genetic modification of the Ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops. === Epigenetics === Epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying DNA sequence but cause the organism's genes to behave (or "express themselves") differently. 
One example of epigenetic change is the marking of the genes by DNA methylation which determines whether they will be expressed or not. Gene expression can also be controlled by repressor proteins that attach to silencer regions of the DNA and prevent that region of the DNA code from being expressed. Epigenetic marks may be added or removed from the DNA during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. Epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell's life. Some epigenetic changes have been shown to be heritable, while others are reset in the germ cells. Epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. A single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. The process results from the epigenetic activation of some genes and inhibition of others. Unlike animals, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. Exceptions include highly lignified cells, the sclerenchyma and xylem which are dead at maturity, and the phloem sieve tubes which lack nuclei. While plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate. Epigenetic changes can lead to paramutations, which do not follow the Mendelian heritage rules. These epigenetic marks are carried from one generation to the next, with one allele inducing a change on the other. == Plant evolution == The chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria, (commonly but incorrectly known as "blue-green algae") and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident. The algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. There are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. The algal division Charophyta, sister to the green algal division Chlorophyta, is considered to contain the ancestor of true plants. The Charophyte class Charophyceae and the land plant sub-kingdom Embryophyta together form the monophyletic group or clade Streptophytina. Nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. They include mosses, liverworts and hornworts. Pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free-living gametophytes evolved during the Silurian period and diversified into several lineages during the late Silurian and early Devonian. Representatives of the lycopods have survived to the present day. 
By the end of the Devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved "megaspory" – their spores were of two distinct sizes, larger megaspores and smaller microspores. Their reduced gametophytes developed from megaspores retained within the spore-producing organs (megasporangia) of the sporophyte, a condition known as endospory. Seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers (integuments). The young sporophyte develops within the seed, which on germination splits to release it. The earliest known seed plants date from the latest Devonian Famennian stage. Following the evolution of the seed habit, seed plants diversified, giving rise to a number of now-extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. Gymnosperms produce "naked seeds" not fully enclosed in an ovary; modern representatives include conifers, cycads, Ginkgo, and Gnetales. Angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. Ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. == Plant physiology == Plant physiology encompasses all the internal chemical and physical activities of plants associated with life. Chemicals obtained from the air, soil and water form the basis of all plant metabolism. The energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. Photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. Heterotrophs including all animals, all fungi, all completely parasitic plants, and non-photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. Respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis. Molecules are moved within plants by transport processes that operate at a variety of spatial scales. Subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. Minerals and water are transported from roots to other parts of the plant in the transpiration stream. Diffusion, osmosis, and active transport and mass flow are all different ways transport can occur. Examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. In vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. Most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. Sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones are transported by a variety of processes. === Plant hormones === Plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. Tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of Mimosa pudica, the insect traps of Venus flytrap and bladderworts, and the pollinia of orchids. 
The hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. Darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded "It is hardly an exaggeration to say that the tip of the radicle . . acts like the brain of one of the lower animals . . directing the several movements". About the same time, the role of auxins (from the Greek auxein, to grow) in control of plant growth was first outlined by the Dutch scientist Frits Went. The first known auxin, indole-3-acetic acid (IAA), which promotes cell growth, was only isolated from plants about 50 years later. This compound mediates the tropic responses of shoots and roots towards light and gravity. The finding in 1939 that plant callus could be maintained in culture containing IAA, followed by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones, were key steps in the development of plant biotechnology and genetic modification. Cytokinins are a class of plant hormones named for their control of cell division (especially cytokinesis). The natural cytokinin zeatin was discovered in corn, Zea mays, and is a derivative of the purine adenine. Zeatin is produced in roots and transported to shoots in the xylem where it promotes cell division, bud development, and the greening of chloroplasts. The gibberellins, such as gibberellic acid, are diterpenes synthesised from acetyl CoA via the mevalonate pathway. They are involved in the promotion of germination and dormancy-breaking in seeds, in regulation of plant height by controlling stem elongation and the control of flowering. Abscisic acid (ABA) occurs in all land plants except liverworts, and is synthesised from carotenoids in the chloroplasts and other plastids. It inhibits cell division, promotes seed maturation and dormancy, and promotes stomatal closure. It was so named because it was originally thought to control abscission. Ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. It is now known to be the hormone that stimulates or regulates fruit ripening and abscission, and it, or the synthetic growth regulator ethephon, which is rapidly metabolised to produce ethylene, is used on an industrial scale to promote ripening of cotton, pineapples and other climacteric crops. Another class of phytohormones is the jasmonates, first isolated from the oil of Jasminum grandiflorum, which regulate wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack. In addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. This can result in adaptive changes in a process known as photomorphogenesis. Phytochromes are photoreceptors in a plant that are sensitive to red and far-red light. == Plant anatomy and morphology == Plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. All plants are multicellular eukaryotes, their DNA stored in nuclei. 
The characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. Other plastids contain storage products such as starch (amyloplasts) or lipids (elaioplasts). Uniquely, streptophyte cells and those of the green algal order Trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. The bodies of vascular plants including clubmosses, ferns and seed plants (gymnosperms and angiosperms) generally have aerial and subterranean subsystems. The shoots consist of stems bearing green photosynthesising leaves and reproductive structures. The underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. Non-vascular plants, the liverworts, hornworts and mosses do not produce ground-penetrating vascular roots and most of the plant participates in photosynthesis. The sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. The root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. Cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. Stolons and tubers are examples of shoots that can grow roots. Roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. In the event that one of the systems is lost, the other can often regrow it. In fact it is possible to grow an entire plant from a single leaf, as is the case with plants in Streptocarpus sect. Saintpaulia, or even a single cell – which can dedifferentiate into a callus (a mass of unspecialised cells) that can grow into a new plant. In vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. Roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. Stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. Leaves gather sunlight and carry out photosynthesis. Large, flat, flexible, green leaves are called foliage leaves. Gymnosperms, such as conifers, cycads, Ginkgo, and gnetophytes are seed-producing plants with open seeds. Angiosperms are seed-producing plants that produce flowers and have enclosed seeds. Woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues: wood (secondary xylem) and bark (secondary phloem and cork). All gymnosperms and many angiosperms are woody plants. Some plants reproduce sexually, some asexually, and some via both means. Although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. Furthermore, structures can be seen as processes, that is, process combinations. 
== Systematic botany == Systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. It involves, or is related to, biological classification, scientific taxonomy and phylogenetics. Biological classification is the method by which botanists group organisms into categories such as genera or species. Biological classification is a form of scientific taxonomy. Modern taxonomy is rooted in the work of Carl Linnaeus, who grouped species according to shared physical characteristics. These groupings have since been revised to align better with the Darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. While scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses DNA sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. The dominant classification system is called Linnaean taxonomy. It includes ranks and binomial nomenclature. The nomenclature of botanical organisms is codified in the International Code of Nomenclature for algae, fungi, and plants (ICN) and administered by the International Botanical Congress. Kingdom Plantae belongs to Domain Eukaryota and is broken down recursively until each species is separately classified. The order is: Kingdom; Phylum (or Division); Class; Order; Family; Genus (plural genera); Species. The scientific name of a plant represents its genus and its species within the genus, resulting in a single worldwide name for each organism. For example, the tiger lily is Lilium columbianum. Lilium is the genus, and columbianum the specific epithet. The combination is the name of the species. When writing the scientific name of an organism, it is proper to capitalise the first letter in the genus and put all of the specific epithet in lowercase. Additionally, the entire term is ordinarily italicised (or underlined when italics are not available). The evolutionary relationships and heredity of a group of organisms is called its phylogeny. Phylogenetic studies attempt to discover phylogenies. The basic approach is to use similarities based on shared inheritance to determine relationships. As an example, species of Pereskia are trees or bushes with prominent leaves. They do not obviously resemble a typical leafless cactus such as an Echinocactus. However, both Pereskia and Echinocactus have spines produced from areoles (highly specialised pad-like structures) suggesting that the two genera are indeed related. Judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. Some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the two groups are not closely related. The cladistic method takes a systematic approach to characters, distinguishing between those that carry no information about shared evolutionary history – such as those evolved separately in different groups (homoplasies) or those left over from ancestors (plesiomorphies) – and derived characters, which have been passed down from innovations in a shared ancestor (apomorphies). 
Only derived characters, such as the spine-producing areoles of cacti, provide evidence for descent from a common ancestor. The results of cladistic analyses are expressed as cladograms: tree-like diagrams showing the pattern of evolutionary branching and descent. From the 1990s onwards, the predominant approach to constructing phylogenies for living plants has been molecular phylogenetics, which uses molecular characters, particularly DNA sequences, rather than morphological characters like the presence or absence of spines and areoles. The difference is that the genetic code itself is used to decide evolutionary relationships, instead of being used indirectly via the characters it gives rise to. Clive Stace describes this as having "direct access to the genetic basis of evolution." As a simple example, prior to the use of genetic evidence, fungi were thought either to be plants or to be more closely related to plants than animals. Genetic evidence suggests that, among multicelled organisms, fungi are more closely related to animals than to plants. In 1998, the Angiosperm Phylogeny Group published a phylogeny for flowering plants based on an analysis of DNA sequences from most families of flowering plants. As a result of this work, many questions, such as which families represent the earliest branches of angiosperms, have now been answered. Investigating how plant species are related to each other allows botanists to better understand the process of evolution in plants. Despite the study of model plants and increasing use of DNA evidence, there is ongoing work and discussion among taxonomists about how best to classify plants into various taxa. Technological developments such as computers and electron microscopes have greatly increased the level of detail studied and speed at which data can be analysed. == Symbols == A few symbols are in current use in botany. A number of others are obsolete; for example, Linnaeus used planetary symbols ⟨♂⟩ (Mars) for biennial plants, ⟨♃⟩ (Jupiter) for herbaceous perennials and ⟨♄⟩ (Saturn) for woody perennials, based on the planets' orbital periods of 2, 12 and 30 years; and Willdenow used ⟨♄⟩ (Saturn) for neuter in addition to ⟨☿⟩ (Mercury) for hermaphroditic.
Wikipedia/Plant_science
Materials science is an interdisciplinary field concerned with researching and discovering materials. Materials engineering is an engineering field concerned with finding uses for materials in other fields and industries. The intellectual origins of materials science stem from the Age of Enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study. Materials scientists emphasize understanding how the history of a material (processing) influences its structure, and thus the material's properties and performance. The understanding of processing–structure–properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy. Materials science is also an important part of forensic engineering and failure analysis – investigating materials, products, structures or components that fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents. == History == The material of choice of a given era is often a defining point. Phases such as Stone Age, Bronze Age, Iron Age, and Steel Age are historic, if arbitrary, examples. Originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science were products of the Space Race; the understanding and engineering of the metallic alloys, and silica and carbon materials, used in building space vehicles enabled the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials. Before the 1960s (and in some cases decades after), many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting the 19th and early 20th-century emphasis on metals and ceramics. The growth of materials science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s, "to expand the national program of basic research and training in the materials sciences." In comparison with mechanical engineering, the nascent materials science field focused on addressing materials from the macro-level and on the approach that materials are designed on the basis of knowledge of behavior at the microscopic level. 
Due to the expanded knowledge of the link between atomic and molecular processes as well as the overall properties of materials, the design of materials came to be based on specific desired properties. The materials science field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, biomaterials, and nanomaterials, generally classified into three distinct groups: ceramics, metals, and polymers. A prominent change in materials science during recent decades has been the active use of computer simulations to find new materials, predict properties and understand phenomena. == Fundamentals == A material is defined as a substance (most often a solid, but other condensed phases can be included) that is intended to be used for certain applications. There are a myriad of materials around us. New and advanced materials that are being developed include nanomaterials, biomaterials, and energy materials, to name a few. The basis of materials science is studying the interplay between the structure of materials, the processing methods to make that material, and the resulting material properties. The complex combination of these produces the performance of a material in a specific application. Many features across many length scales impact material performance, from the constituent chemical elements to its microstructure and macroscopic features arising from processing. Together with the laws of thermodynamics and kinetics, materials scientists aim to understand and improve materials. === Structure === Structure is one of the most important components of the field of materials science. The very definition of the field holds that it is concerned with the investigation of "the relationships that exist between the structures and properties of materials". Materials science examines the structure of materials from the atomic scale, all the way up to the macro scale. Characterization is the way materials scientists examine the structure of a material. This involves methods such as diffraction with X-rays, electrons or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy, chromatography, thermal analysis, electron microscope analysis, etc. Structure is studied at the following levels. ==== Atomic structure ==== Atomic structure deals with the atoms of the materials, and how they are arranged to give rise to molecules, crystals, etc. Many of the electrical, magnetic and chemical properties of materials arise from this level of structure. The length scales involved are in angstroms (Å). The chemical bonding and atomic arrangement (crystallography) are fundamental to studying the properties and behavior of any material. ===== Bonding ===== To obtain a full understanding of the material structure and how it relates to its properties, the materials scientist must study how the different atoms, ions and molecules are arranged and bonded to each other. This involves the study and use of quantum chemistry or quantum physics. Solid-state physics, solid-state chemistry and physical chemistry are also involved in the study of bonding and structure. ===== Crystallography ===== Crystallography is the science that examines the arrangement of atoms in crystalline solids. Crystallography is a useful tool for materials scientists. 
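The diffraction methods listed under characterization probe this arrangement of atoms through the Bragg condition, which relates the spacing d between lattice planes to the wavelength λ of the radiation and the angles θ at which constructive interference is observed:

\[ n\lambda = 2d\sin\theta , \qquad n = 1, 2, 3, \ldots \]

Measuring the angles of the diffracted beams therefore allows the lattice spacings, and ultimately the crystal structure, to be worked out.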
One of the fundamental concepts regarding the crystal structure of a material includes the unit cell, which is the smallest unit of a crystal lattice (space lattice) that repeats to make up the macroscopic crystal structure. Most common structural materials have parallelepiped or hexagonal lattice types. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically, because the natural shapes of crystals reflect the atomic structure. Further, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Examples of crystal defects include point defects such as vacancies and self-interstitials, linear defects such as edge and screw dislocations, and larger planar and three-dimensional defects. Mostly, materials do not occur as a single crystal, but in polycrystalline form, as an aggregate of small crystals or grains with different orientations. Because of this, the powder diffraction method, which uses diffraction patterns of polycrystalline samples with a large number of crystals, plays an important role in structural determination. Most materials have a crystalline structure, but some important materials do not exhibit regular crystal structure. Polymers display varying degrees of crystallinity, and many are completely non-crystalline. Glass, some ceramics, and many natural materials are amorphous, not possessing any long-range order in their atomic arrangements. The study of polymers combines elements of chemical and statistical thermodynamics to give thermodynamic and mechanical descriptions of physical properties. ==== Nanostructure ==== Materials whose atoms and molecules form constituents at the nanoscale (i.e., they form nanostructures) are called nanomaterials. Nanomaterials are the subject of intense research in the materials science community due to the unique properties that they exhibit. Nanostructure deals with objects and structures that are in the 1–100 nm range. In many materials, atoms or molecules agglomerate to form objects at the nanoscale. This causes many interesting electrical, magnetic, optical, and mechanical properties. In describing nanostructures, it is necessary to differentiate between the number of dimensions on the nanoscale. Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm. Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater. Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) often are used synonymously although UFP can reach into the micrometre range. The term 'nanostructure' is often used when referring to magnetic technology. Nanoscale structure in biology is often called ultrastructure. ==== Microstructure ==== Microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25× magnification. It deals with objects from 100 nm to a few cm. 
The microstructure of a material (which can be broadly classified into metallic, polymeric, ceramic and composite) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behavior, wear resistance, and so on. Most of the traditional materials (such as metals and ceramics) are microstructured. The manufacture of a perfect crystal of a material is physically impossible. For example, any crystalline material will contain defects such as precipitates, grain boundaries (Hall–Petch relationship), vacancies, interstitial atoms or substitutional atoms. The microstructure of materials reveals these larger defects, and advances in simulation have allowed an increased understanding of how defects can be used to enhance material properties. ==== Macrostructure ==== Macrostructure is the appearance of a material on the scale of millimeters to meters; it is the structure of the material as seen with the naked eye. === Properties === Materials exhibit myriad properties, including mechanical properties (see Strength of materials), chemical properties (see Chemistry), electrical properties (see Electricity), thermal properties (see Thermodynamics), optical properties (see Optics and Photonics), and magnetic properties (see Magnetism). The properties of a material determine its usability and hence its engineering application. === Processing === Synthesis and processing involve the creation of a material with the desired micro- or nanostructure. A material cannot be used in industry if no economically viable production method for it has been developed. Therefore, developing processing methods for materials that are reasonably effective and cost-efficient is vital to the field of materials science. Different materials require different processing or synthesis methods. For example, the processing of metals has historically defined eras such as the Bronze Age and Iron Age and is studied under the branch of materials science named physical metallurgy. Chemical and physical methods are also used to synthesize other materials such as polymers, ceramics, semiconductors, and thin films. As of the early 21st century, new methods are being developed to synthesize nanomaterials such as graphene. === Thermodynamics === Thermodynamics is concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints common to all materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. The behavior of these microscopic particles is described by, and the laws of thermodynamics are derived from, statistical mechanics. The study of thermodynamics is fundamental to materials science. It forms the foundation to treat general phenomena in materials science and engineering, including chemical reactions, magnetism, polarizability, and elasticity. It explains fundamental tools such as phase diagrams and concepts such as phase equilibrium. === Kinetics === Chemical kinetics is the study of the rates at which systems that are out of equilibrium change under the influence of various forces. 
When applied to materials science, it deals with how a material changes with time (moves from non-equilibrium to equilibrium state) due to application of a certain field. It details the rate of various processes evolving in materials including shape, size, composition and structure. Diffusion is important in the study of kinetics as this is the most common mechanism by which materials undergo change. Kinetics is essential in processing of materials because, among other things, it details how the microstructure changes with application of heat. == Research == Materials science is a highly active area of research. Together with materials science departments, physics, chemistry, and many engineering departments are involved in materials research. Materials research covers a broad range of topics; the following non-exhaustive list highlights a few important research areas. === Nanomaterials === Nanomaterials describe, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 1000 nanometers (10⁻⁹ meter), but is usually 1–100 nm. Nanomaterials research takes a materials science based approach to nanotechnology, using advances in materials metrology and synthesis, which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, or mechanical properties. The field of nanomaterials is loosely organized, like the traditional field of chemistry, into organic (carbon-based) nanomaterials, such as fullerenes, and inorganic nanomaterials based on other elements, such as silicon. Examples of nanomaterials include fullerenes, carbon nanotubes, nanocrystals, etc. === Biomaterials === A biomaterial is any matter, surface, or construct that interacts with biological systems. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science. Biomaterials can be derived either from nature or synthesized in a laboratory using a variety of chemical approaches involving metallic components, polymers, bioceramics, or composite materials. They are often intended or adapted for medical applications, such as biomedical devices which perform, augment, or replace a natural function. Such functions may be benign, like being used for a heart valve, or may be bioactive with a more interactive functionality such as hydroxylapatite-coated hip implants. Biomaterials are also used every day in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, which permits the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft or xenograft used as an organ transplant material. === Electronic, optical, and magnetic === Semiconductors, metals, and ceramics are used today to form highly complex systems, such as integrated electronic circuits, optoelectronic devices, and magnetic and optical mass storage media. These materials form the basis of our modern computing world, and hence research into these materials is of vital importance. Semiconductors are a traditional example of these types of materials. They are materials that have properties that are intermediate between conductors and insulators. Their electrical conductivities are very sensitive to the concentration of impurities, which allows the use of doping to achieve desirable electronic properties. Hence, semiconductors form the basis of the traditional computer. 
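That sensitivity to impurities can be made quantitative. In the simplest picture, the conductivity of a semiconductor is

\[ \sigma = q\,(n\mu_n + p\mu_p) \]

where q is the elementary charge, n and p are the electron and hole concentrations, and μn and μp their mobilities; because doping changes n and p by many orders of magnitude, even trace impurity additions produce very large changes in conductivity.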
This field also includes new areas of research such as superconducting materials, spintronics, metamaterials, etc. The study of these materials involves knowledge of materials science and solid-state physics or condensed matter physics. === Computational materials science === With continuing increases in computing power, simulating the behavior of materials has become possible. This enables materials scientists to understand behavior and mechanisms, design new materials, and explain properties formerly poorly understood. Efforts surrounding integrated computational materials engineering are now focusing on combining computational methods with experiments to drastically reduce the time and effort to optimize materials properties for a given application. This involves simulating materials at all length scales, using methods such as density functional theory, molecular dynamics, Monte Carlo, dislocation dynamics, phase field, finite element, and many more. == Industry == Radical materials advances can drive the creation of new products or even new industries, but stable industries also employ materials scientists to make incremental improvements and troubleshoot issues with currently used materials. Industrial applications of materials science include materials design, cost-benefit tradeoffs in industrial production of materials, processing methods (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition, sintering, glassblowing, etc.), and analytic methods (characterization methods such as electron microscopy, X-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford backscattering, neutron diffraction, small-angle X-ray scattering (SAXS), etc.). Besides material characterization, the material scientist or engineer also deals with extracting materials and converting them into useful forms. Thus ingot casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. Often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. For example, steels are classified based on 1/10 and 1/100 weight percentages of the carbon and other alloying elements they contain. Thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced. Solid materials are generally grouped into three basic classifications: ceramics, metals, and polymers. This broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. An item that is often made from each of these materials types is the beverage container. The material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. Ceramic (glass) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. Metal (aluminum alloy) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. However, the cans are opaque, expensive to produce, and are easily dented and punctured. 
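Stepping back to the simulation methods listed under computational materials science above, the sketch below shows one of the simplest of them, a Metropolis Monte Carlo simulation, applied to a toy two-dimensional Ising model of a magnetic material. It is an illustrative example of the method only; the lattice size, temperature, and number of sweeps are arbitrary choices and not from the article.

```python
# Toy Metropolis Monte Carlo simulation of a 2D Ising model, as a minimal
# illustration of the "Monte Carlo" entry in the list of simulation methods.
# Lattice size, temperature and sweep count are arbitrary demo values.
import math
import random

L = 16              # lattice is L x L spins
T = 2.0             # temperature in units of J/k_B (arbitrary)
SWEEPS = 200        # Monte Carlo sweeps (small, for a quick demo)

spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def flip_energy_change(i: int, j: int) -> float:
    """Energy change if the spin at (i, j) is flipped (nearest neighbours, J = 1)."""
    s = spins[i][j]
    neighbours = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    return 2.0 * s * neighbours

for _ in range(SWEEPS):
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = flip_energy_change(i, j)
        # Metropolis acceptance rule.
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]

magnetization = abs(sum(sum(row) for row in spins)) / (L * L)
print(f"|magnetization| per spin after {SWEEPS} sweeps at T = {T}: {magnetization:.3f}")
```

Production materials simulations differ mainly in scale and in the physical model (interatomic potentials, electronic structure, and so on), but the sampling loop above is the core of the Monte Carlo approach.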
Polymers (polyethylene plastic) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass. === Ceramics and glasses === Another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. Many ceramics and glasses exhibit covalent or ionic-covalent bonding with SiO2 (silica) as a fundamental building block. Ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. The vast majority of commercial glasses contain a metal oxide fused with silica. At the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. Windowpanes and eyeglasses are important examples. Fibers of glass are also used for long-range telecommunication and optical transmission. Scratch resistant Corning Gorilla Glass is a well-known example of the application of materials science to drastically improve the properties of common components. Engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. Alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. Hot pressing provides higher density material. Chemical vapor deposition can place a film of a ceramic on another material. Cermets are ceramic particles containing some metals. The wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties. Ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. This process involves the strategic addition of second-phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. This approach enhances fracture toughness, paving the way for the creation of advanced, high-performance ceramics in various industries. === Composites === Another application of materials science in industry is making composite materials. These are structured materials composed of two or more macroscopic phases. Applications range from structural elements such as steel-reinforced concrete, to the thermal insulating tiles, which play a key and integral role in NASA's Space Shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is reinforced Carbon-Carbon (RCC), the light gray material, which withstands re-entry temperatures up to 1,510 °C (2,750 °F) and protects the Space Shuttle's wing leading edges and nose cap. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured-pyrolized to convert the furfuryl alcohol to carbon. To provide oxidation resistance for reusability, the outer layers of the RCC are converted to silicon carbide. Other examples can be seen in the "plastic" casings of television sets, cell-phones and so on. 
These plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for added strength, bulk, or electrostatic dispersion. These additions may be termed reinforcing fibers, or dispersants, depending on their purpose. === Polymers === Polymers are chemical compounds made up of a large number of identical components linked together like chains. Polymers are the raw materials (the resins) used to make what are commonly called plastics and rubber. Plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Plastics in former and in current widespread use include polyethylene, polypropylene, polyvinyl chloride (PVC), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. Rubbers include natural rubber, styrene-butadiene rubber, chloroprene, and butadiene rubber. Plastics are generally classified as commodity, specialty and engineering plastics. Polyvinyl chloride (PVC) is widely used, inexpensive, and annual production quantities are large. It lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. Its fabrication and processing are simple and well-established. The versatility of PVC is due to the wide range of plasticisers and other additives that it accepts. The term "additives" in polymer science refers to the chemicals and compounds added to the polymer base to modify its material properties. Polycarbonate would be normally considered an engineering plastic (other examples include PEEK, ABS). Such plastics are valued for their superior strengths and other special material properties. They are usually not used for disposable applications, unlike commodity plastics. Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical conductivity, electro-fluorescence, high thermal stability, etc. The dividing lines between the various types of plastics is not based on material but rather on their properties and applications. For example, polyethylene (PE) is a cheap, low friction polymer commonly used to make disposable bags for shopping and trash, and is considered a commodity plastic, whereas medium-density polyethylene (MDPE) is used for underground gas and water pipes, and another variety called ultra-high-molecular-weight polyethylene (UHMWPE) is an engineering plastic which is used extensively as the glide rails for industrial equipment and the low-friction socket in implanted hip joints. === Metal alloys === The alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the largest proportion of metals today both by quantity and commercial value. Iron alloyed with various proportions of carbon gives low, mid and high carbon steels. An iron-carbon alloy is only considered steel if the carbon level is between 0.01% and 2.00% by weight. For steels, the hardness and tensile strength of the steel is related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. Heat treatment processes such as quenching and tempering can significantly change these properties, however. In contrast, certain metal alloys exhibit unique properties where their size and density remain unchanged across a range of temperatures. 
Cast iron is defined as an iron–carbon alloy with more than 2.00%, but less than 6.67% carbon. Stainless steel is defined as a regular steel alloy with greater than 10% by weight alloying content of chromium. Nickel and molybdenum are typically also added in stainless steels. Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known for a long time (since the Bronze Age), while the alloys of the other three metals have been relatively recently developed. Due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. The alloys of aluminium, titanium and magnesium are also known and valued for their high strength to weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. These materials are ideal for situations where high strength to weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications. === Semiconductors === A semiconductor is a material that has a resistivity between a conductor and insulator. Modern day electronics run on semiconductors, and the industry had an estimated US$530 billion market in 2021. Its electronic properties can be greatly altered through intentionally introducing impurities in a process referred to as doping. Semiconductor materials are used to build diodes, transistors, light-emitting diodes (LEDs), and analog and digital electric circuits, among their many uses. Semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. Semiconductor devices are manufactured both as single discrete devices and as integrated circuits (ICs), which consist of a number—from a few to millions—of devices manufactured and interconnected on a single semiconductor substrate. Of all the semiconductors in use today, silicon makes up the largest portion both by quantity and commercial value. Monocrystalline silicon is used to produce wafers used in the semiconductor and electronics industry. Gallium arsenide (GaAs) is the second most popular semiconductor used. Due to its higher electron mobility and saturation velocity compared to silicon, it is a material of choice for high-speed electronics applications. These superior properties are compelling reasons to use GaAs circuitry in mobile phones, satellite communications, microwave point-to-point links and higher frequency radar systems. Other semiconductor materials include germanium, silicon carbide, and gallium nitride and have various applications. == Relation with other fields == Materials science evolved, starting from the 1950s because it was recognized that to create, discover and design new materials, one had to approach it in a unified manner. Thus, materials science and engineering emerged in many ways: renaming and/or combining existing metallurgy and ceramics engineering departments; splitting from existing solid state physics research (itself growing into condensed matter physics); pulling in relatively new polymer engineering and polymer science; recombining from the previous, as well as chemistry, chemical engineering, mechanical engineering, and electrical engineering; and more. The field of materials science and engineering is important both from a scientific perspective, as well as for applications field. 
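Pulling together the composition thresholds quoted in the preceding paragraphs on metal alloys, the sketch below applies them as a simple classification rule. The thresholds (0.01–2.00 wt% carbon for steel, more than 2.00 and less than 6.67 wt% carbon for cast iron, and more than 10 wt% chromium for stainless steel) come from the text; the example compositions and the helper itself are illustrative.

```python
# Minimal sketch of the iron-alloy classification rules quoted in the text:
# steel: 0.01-2.00 wt% C; cast iron: >2.00 and <6.67 wt% C;
# stainless steel: a steel with >10 wt% Cr. Example inputs are invented.

def classify_iron_alloy(carbon_wt_pct: float, chromium_wt_pct: float = 0.0) -> str:
    if 0.01 <= carbon_wt_pct <= 2.00:
        return "stainless steel" if chromium_wt_pct > 10.0 else "steel"
    if 2.00 < carbon_wt_pct < 6.67:
        return "cast iron"
    return "outside the steel/cast-iron composition ranges given in the text"

if __name__ == "__main__":
    print(classify_iron_alloy(0.4))          # steel
    print(classify_iron_alloy(0.08, 18.0))   # stainless steel
    print(classify_iron_alloy(3.5))          # cast iron
```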
Materials are of the utmost importance for engineers (and for other applied fields), because the use of appropriate materials is crucial when designing systems. As a result, materials science is an increasingly important part of an engineer's education. Materials physics is the use of physics to describe the physical properties of materials. It is a synthesis of physical sciences such as chemistry, solid mechanics, solid state physics, and materials science. Materials physics is considered a subset of condensed matter physics and applies fundamental condensed matter concepts to complex multiphase media, including materials of technological interest. Current fields that materials physicists work in include electronic, optical, and magnetic materials, novel materials and structures, quantum phenomena in materials, nonequilibrium physics, and soft condensed matter physics. New experimental and computational tools are constantly improving how materials systems are modeled and studied, and are themselves active areas in which materials physicists work. The field is inherently interdisciplinary, and materials scientists and engineers must be aware of, and make use of, the methods of the physicist, chemist and engineer. Conversely, fields such as life sciences and archaeology can inspire the development of new materials and processes, in bioinspired and paleoinspired approaches. Thus, there remain close relationships with these fields. Likewise, many physicists, chemists and engineers find themselves working in materials science due to the significant overlaps between the fields. == Subdisciplines == The main branches of materials science stem from the four main classes of materials: ceramics, metals, polymers and composites.
Ceramic engineering
Metallurgy
Polymer science and engineering
Composite engineering
There are additionally broadly applicable, materials-independent endeavors:
Materials characterization (spectroscopy, microscopy, diffraction)
Computational materials science
Materials informatics and selection
There are also relatively broad focuses across materials on specific phenomena and techniques:
Crystallography
Surface science
Tribology
Microelectronics
== Related or interdisciplinary fields ==
Condensed matter physics, solid-state physics and solid-state chemistry
Nanotechnology
Mineralogy
Supramolecular chemistry
Biomaterials science
== Professional societies ==
American Ceramic Society
ASM International
Association for Iron and Steel Technology
Materials Research Society
The Minerals, Metals & Materials Society
Nitrate is a polyatomic ion with the chemical formula NO−3. Salts containing this ion are called nitrates. Nitrates are common components of fertilizers and explosives. Almost all inorganic nitrates are soluble in water. An example of an insoluble nitrate is bismuth oxynitrate. == Chemical structure == The nitrate anion is the conjugate base of nitric acid, consisting of one central nitrogen atom surrounded by three identically bonded oxygen atoms in a trigonal planar arrangement. The nitrate ion carries a formal charge of −1. This charge can be viewed as a combination of formal charges in which each of the three oxygens carries an average charge of −2⁄3, whereas the nitrogen carries a +1 charge, all of these adding up to the −1 formal charge of the polyatomic nitrate ion. This arrangement is commonly used as an example of resonance. Like the isoelectronic carbonate ion, the nitrate ion can be represented by three equivalent resonance structures. == Chemical and biochemical properties == In the NO−3 anion, the oxidation state of the central nitrogen atom is V (+5). This corresponds to the highest possible oxidation number of nitrogen. Nitrate is a potentially powerful oxidizer, as evidenced by its explosive behaviour at high temperature when it is detonated in ammonium nitrate (NH4NO3) or black powder, ignited by the shock wave of a primary explosive. In contrast to red fuming nitric acid (HNO3/N2O4) or concentrated nitric acid (HNO3), nitrate in aqueous solution at neutral or high pH is only a weak oxidizing agent in redox reactions in which the reductant does not produce hydrogen ions (such as mercury going to calomel). However, it is still a strong oxidizer when the reductant does produce hydrogen ions, such as in the oxidation of hydrogen itself. Nitrate is stable in the absence of microorganisms or reductants such as organic matter. In fact, nitrogen gas is thermodynamically stable in the presence of 1 atm of oxygen only in very acidic conditions, and otherwise should combine with the oxygen to form nitrate. This is shown by subtracting the two oxidation reactions:
N2 + 6 H2O → 2 NO−3 + 12 H+ + 10 e−
{\displaystyle E_{0}=1.246-0.0709{\text{ pH }}+{\frac {0.0591}{10}}\log {\frac {(NO_{3}^{-})^{2}}{P_{N_{2}}}}}
2 H2O → O2 + 4 H+ + 4 e−
{\displaystyle E_{0}=1.228-0.0591{\text{ pH }}+{\frac {0.0591}{4}}\log P_{O_{2}}}
giving:
2 N2 + 5 O2 + 2 H2O → 4 NO−3 + 4 H+
{\displaystyle 0=0.018-0.0118{\text{ pH }}+{\frac {0.0591}{10}}\log {\frac {(NO_{3}^{-})^{2}}{P_{N_{2}}}}-{\frac {0.0591}{4}}\log P_{O_{2}}}
Dividing by 0.0118 and rearranging gives the equilibrium relation:
{\displaystyle \log {\frac {(NO_{3}^{-})}{P_{N_{2}}^{1/2}P_{O_{2}}^{5/4}}}={\text{ pH }}-1.5}
However, in reality, nitrogen, oxygen, and water do not combine directly to form nitrate. Rather, a reductant such as hydrogen reacts with nitrogen to produce "fixed nitrogen" such as ammonia, which is then oxidized, eventually becoming nitrate. Nitrate does not accumulate to high levels in nature because it reacts with reductants in the process called denitrification (see Nitrogen cycle). Nitrate is used as a powerful terminal electron acceptor by denitrifying bacteria to deliver the energy they need to thrive.
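As a quick numerical check of the equilibrium relation just derived, the sketch below evaluates the nitrate activity that would be in equilibrium with nitrogen and oxygen at a few pH values. The partial pressures approximate air and the pH values are illustrative choices, not data from the article.

```python
# Numerical check of the nitrate/nitrogen/oxygen equilibrium relation derived above:
#   log10( a_NO3- / (P_N2**0.5 * P_O2**1.25) ) = pH - 1.5
# Partial pressures below approximate air (0.78 atm N2, 0.21 atm O2); pH values are illustrative.
import math

P_N2 = 0.78   # atm
P_O2 = 0.21   # atm

def equilibrium_nitrate_activity(ph: float) -> float:
    """Nitrate activity in equilibrium with N2 and O2 at the given pH."""
    log_ratio = ph - 1.5
    return 10 ** log_ratio * math.sqrt(P_N2) * P_O2 ** 1.25

for ph in (0.0, 1.5, 4.0, 7.0):
    a = equilibrium_nitrate_activity(ph)
    print(f"pH {ph:>3}: equilibrium nitrate activity ~ {a:.2e}")
```

At any pH above roughly 1.5 the equilibrium nitrate activity exceeds 1, which is the quantitative content of the statement above that nitrogen gas is thermodynamically stable against oxidation to nitrate only under very acidic conditions.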
Under anaerobic conditions, nitrate is the strongest electron acceptor used by prokaryote microorganisms (bacteria and archaea) to respire. The redox couple NO−3/N2 is at the top of the redox scale for anaerobic respiration, just below the oxygen couple (O2/H2O), but above the couples Mn(IV)/Mn(II), Fe(III)/Fe(II), SO2−4/HS−, CO2/CH4. In natural waters, inevitably contaminated by microorganisms, nitrate is a quite unstable and labile dissolved chemical species because it is metabolised by denitrifying bacteria. Water samples for nitrate/nitrite analyses need to be kept at 4 °C in a refrigerated room and analysed as quickly as possible to limit the loss of nitrate. In the first step of the denitrification process, dissolved nitrate (NO−3) is catalytically reduced into nitrite (NO−2) by the enzymatic activity of bacteria. In aqueous solution, dissolved nitrite, N(III), is a more powerful oxidizer than nitrate, N(V), because it has to accept fewer electrons and its reduction is less kinetically hindered than that of nitrate. During the biological denitrification process, further nitrite reduction also gives rise to another powerful oxidizing agent: nitric oxide (NO). NO can bind to myoglobin, accentuating its red coloration. NO is an important biological signaling molecule and intervenes in the vasodilation process. Still, it can also produce free radicals in biological tissues, accelerating their degradation and aging. The reactive oxygen species (ROS) generated by NO contribute to oxidative stress, a condition involved in vascular dysfunction and atherogenesis. == Detection in chemical analysis == The nitrate anion is commonly analysed in water by ion chromatography (IC) along with other anions also present in the solution. The main advantage of IC is its ease and the simultaneous analysis of all the anions present in the aqueous sample. Since the emergence of IC instruments in the 1980s, this separation technique, coupled with many detectors, has become commonplace in the chemical analysis laboratory and is the preferred and most widely used method for nitrate and nitrite analyses. Previously, nitrate determination relied on spectrophotometric and colorimetric measurements after a specific reagent is added to the solution to reveal a characteristic color (often red, because the dye absorbs in the blue region of the visible spectrum). Because of interferences with the brown color of dissolved organic matter (DOM: humic and fulvic acids) often present in soil pore water, artefacts can easily affect the absorbance values. In cases of weak interference, a blank measurement with only the naturally brown-colored water sample can be sufficient to subtract the undesired background from the measured sample absorbance. If the DOM brown color is too intense, the water samples must be pretreated, and inorganic nitrogen species must be separated before measurement. Meanwhile, for clear water samples, colorimetric instruments retain the advantage of being less expensive and sometimes portable, making them an affordable option for fast routine controls or field measurements. Colorimetric methods for the specific detection of nitrate (NO−3) often rely on its conversion to nitrite (NO−2) followed by nitrite-specific tests. The reduction of nitrate to nitrite can be effected by a copper-cadmium alloy, metallic zinc, or hydrazine. The most popular of these assays is the Griess test, whereby nitrite is converted to a deep red azo dye suited for UV–vis spectrophotometry analysis.
The method exploits the reactivity of nitrous acid (HNO2) derived from the acidification of nitrite. Nitrous acid selectively reacts with aromatic amines to give diazonium salts, which in turn couple with a second reagent to give the azo dye. The detection limit is 0.02 to 2 μM. Such methods have been highly adapted to biological samples and soil samples. In the dimethylphenol method, 1 mL of concentrated sulfuric acid (H2SO4) is added to 200 μL of the solution being tested for nitrate. Under strongly acidic conditions, nitrate ions react with 2,6-dimethylphenol, forming a yellow compound, 4-nitro-2,6-dimethylphenol. This occurs through electrophilic aromatic substitution where the intermediate nitronium (+NO2) ions attack the aromatic ring of dimethylphenol. The resulting product (ortho- or para-nitro-dimethylphenol) is analyzed using UV-vis spectrophotometry at 345 nm according to the Lambert-Beer law. Another colorimetric method based on the chromotropic acid (dihydroxynaphthalene-disulfonic acid) was also developed by West and Lyles in 1960 for the direct spectrophotometric determination of nitrate anions. If formic acid is added to a mixture of brucine (an alkaloid related to strychnine) and potassium nitrate (KNO3), its color instantly turns red. This reaction has been used for the direct colorimetric detection of nitrates. For direct online chemical analysis using a flow-through system, the water sample is introduced by a peristaltic pump in a flow injection analyzer, and the nitrate or resulting nitrite-containing effluent is then combined with a reagent for its colorimetric detection. == Occurrence and production == Nitrate salts are found naturally on earth in arid environments as large deposits, particularly of nitratine, a major source of sodium nitrate. Nitrates are produced by a number of species of nitrifying bacteria in the natural environment using ammonia or urea as a source of nitrogen and source of free energy. Nitrate compounds for gunpowder were historically produced, in the absence of mineral nitrate sources, by means of various fermentation processes using urine and dung. Lightning strikes in earth's nitrogen- and oxygen-rich atmosphere produce a mixture of oxides of nitrogen, which form nitrous ions and nitrate ions, which are washed from the atmosphere by rain or in occult deposition. Nitrates are produced industrially from nitric acid. == Uses == === Agriculture === Nitrate is a chemical compound that serves as a primary form of nitrogen for many plants. This essential nutrient is used by plants to synthesize proteins, nucleic acids, and other vital organic molecules. The transformation of atmospheric nitrogen into nitrate is facilitated by certain bacteria and lightning in the nitrogen cycle, which exemplifies nature's ability to convert a relatively inert molecule into a form that is crucial for biological productivity. Nitrates are used as fertilizers in agriculture because of their high solubility and biodegradability. The main nitrate fertilizers are ammonium, sodium, potassium, calcium, and magnesium salts. Several billion kilograms are produced annually for this purpose. The significance of nitrate extends beyond its role as a nutrient since it acts as a signaling molecule in plants, regulating processes such as root growth, flowering, and leaf development. While nitrate is beneficial for agriculture since it enhances soil fertility and crop yields, its excessive use can lead to nutrient runoff, water pollution, and the proliferation of aquatic dead zones. 
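All of the colorimetric determinations described above end in the same quantification step: the absorbance measured at the method's wavelength (for example, 345 nm for the dimethylphenol method) is converted to a concentration through a Beer-Lambert calibration line. The sketch below shows only that final step; the calibration points, the sample absorbance, and the helper names are invented placeholders, not constants of any of the methods.

```python
# Minimal sketch of the Beer-Lambert quantification step shared by the colorimetric
# nitrate methods described above. The calibration points are invented placeholders;
# a real analysis would use standards prepared and measured alongside the samples.

calibration = [  # (nitrate concentration in mg/L, measured absorbance) - hypothetical standards
    (0.0, 0.002), (2.0, 0.101), (5.0, 0.248), (10.0, 0.495),
]

# Least-squares fit of A = slope * c + intercept (Beer-Lambert is linear in concentration).
n = len(calibration)
sum_c = sum(c for c, _ in calibration)
sum_a = sum(a for _, a in calibration)
sum_cc = sum(c * c for c, _ in calibration)
sum_ca = sum(c * a for c, a in calibration)
slope = (n * sum_ca - sum_c * sum_a) / (n * sum_cc - sum_c ** 2)
intercept = (sum_a - slope * sum_c) / n

def absorbance_to_concentration(absorbance: float) -> float:
    """Convert a blank-corrected absorbance into a nitrate concentration (mg/L)."""
    return (absorbance - intercept) / slope

sample_absorbance = 0.310  # placeholder reading for an unknown sample
print(f"slope = {slope:.4f} AU per mg/L, intercept = {intercept:.4f} AU")
print(f"sample nitrate ~ {absorbance_to_concentration(sample_absorbance):.2f} mg/L")
```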
Therefore, sustainable agricultural practices that balance productivity with environmental stewardship are necessary. Nitrate's importance in ecosystems is evident since it supports the growth and development of plants, contributing to biodiversity and ecological balance. === Firearms === Nitrates are used as oxidizing agents, most notably in explosives, where the rapid oxidation of carbon compounds liberates large volumes of gases (see gunpowder as an example). === Industrial === Sodium nitrate is used to remove air bubbles from molten glass and some ceramics. Mixtures of molten salts are used to harden the surface of some metals. === Photographic film === Nitrate was also used as a film stock through nitrocellulose. Due to its high combustibility, film studios switched to cellulose acetate safety film in 1950. === Medicinal and pharmaceutical use === In the medical field, nitrate-derived organic esters, such as glyceryl trinitrate, isosorbide dinitrate, and isosorbide mononitrate, are used in the prophylaxis and management of acute coronary syndrome, myocardial infarction, and acute pulmonary oedema. This class of drug, to which amyl nitrite also belongs, is known as nitrovasodilators. == Toxicity and safety == The two areas of concern about the toxicity of nitrate are the following: nitrate reduced by the microbial activity of nitrate-reducing bacteria is the precursor of nitrite in water and in the lower gastrointestinal tract, and nitrite is a precursor to carcinogenic nitrosamines; and, via the formation of nitrite, nitrate is implicated in methemoglobinemia, a disorder of hemoglobin in red blood cells that especially affects infants and toddlers. === Methemoglobinemia === One of the most common causes of methemoglobinemia in infants is the ingestion of nitrates and nitrites through well water or foods. In fact, nitrates (NO−3), often present at excessive concentrations in drinking water, are only the precursor chemical species of nitrites (NO−2), the real culprits of methemoglobinemia. Nitrites produced by the microbial reduction of nitrate (directly in the drinking water, or in the infant's digestive system after ingestion) are more powerful oxidizers than nitrates and are the chemical agents actually responsible for the oxidation of Fe2+ into Fe3+ in the tetrapyrrole heme of hemoglobin. Indeed, nitrate anions are too weak oxidizers in aqueous solution to be able to directly, or at least sufficiently rapidly, oxidize Fe2+ into Fe3+, because of kinetic limitations. Infants younger than 4 months are at greater risk given that they drink more water per unit of body weight, they have a lower NADH-cytochrome b5 reductase activity, and they have a higher level of fetal hemoglobin, which converts more easily to methemoglobin. Additionally, infants are at an increased risk after an episode of gastroenteritis due to the production of nitrites by bacteria. However, causes other than nitrates can also affect infants and pregnant women. Indeed, blue baby syndrome can also be caused by a number of other factors such as cyanotic heart disease, a congenital heart defect resulting in low levels of oxygen in the blood, or by gastric upset, such as diarrheal infection, protein intolerance, heavy metal toxicity, etc. === Drinking water standards === Through the Safe Drinking Water Act, the United States Environmental Protection Agency has set a maximum contaminant level of 10 mg/L or 10 ppm of nitrate in drinking water.
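To put the 10 mg/L limit into perspective for the infant-risk discussion above, the sketch below computes the nitrate dose an infant would receive from formula prepared with water at exactly that limit. The body weight and daily water volume are illustrative assumptions chosen for the example, not values from the article or from any regulation.

```python
# Illustrative dose arithmetic for the 10 mg/L (10 ppm) nitrate drinking-water limit
# discussed above. The infant body weight and daily formula-water intake are
# assumptions made for this example only.

MCL_MG_PER_L = 10.0            # maximum contaminant level quoted in the text, mg/L
infant_weight_kg = 5.0         # assumed body weight
water_intake_l_per_day = 0.7   # assumed daily water used to prepare formula

daily_dose_mg = MCL_MG_PER_L * water_intake_l_per_day
dose_per_kg = daily_dose_mg / infant_weight_kg

print(f"Water at the {MCL_MG_PER_L:.0f} mg/L limit: {daily_dose_mg:.1f} mg nitrate per day")
print(f"~ {dose_per_kg:.2f} mg per kg body weight per day for a {infant_weight_kg:.0f} kg infant")
```

Expressing the dose per kilogram of body weight makes it easy to compare against intake guidelines such as the acceptable daily intake quoted in the next paragraph.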
An acceptable daily intake (ADI) for nitrate ions was established in the range of 0–3.7 mg per kilogram of body weight per day by the Joint FAO/WHO Expert Committee on Food Additives (JECFA). === Aquatic toxicity === In freshwater or estuarine systems close to land, nitrate can reach concentrations that are lethal to fish. While nitrate is much less toxic than ammonia, levels over 30 ppm of nitrate can inhibit growth, impair the immune system and cause stress in some aquatic species. Nitrate toxicity remains a subject of debate. In most cases of excess nitrate concentrations in aquatic systems, the primary sources are wastewater discharges, as well as surface runoff from agricultural or landscaped areas that have received excess nitrate fertilizer. The resulting eutrophication and algal blooms result in anoxia and dead zones. As a consequence, because nitrate forms a component of total dissolved solids, it is widely used as an indicator of water quality. == Human impacts on ecosystems through nitrate deposition == Nitrate deposition into ecosystems has markedly increased due to anthropogenic activities, notably from the widespread application of nitrogen-rich fertilizers in agriculture and the emissions from fossil fuel combustion. Annually, about 195 million metric tons of synthetic nitrogen fertilizers are used worldwide, with nitrates constituting a significant portion of this amount. In regions with intensive agriculture, such as parts of the U.S., China, and India, the use of nitrogen fertilizers can exceed 200 kilograms per hectare. The impact of increased nitrate deposition extends beyond plant communities to affect soil microbial populations. The change in soil chemistry and nutrient dynamics can disrupt the natural processes of nitrogen fixation, nitrification, and denitrification, leading to altered microbial community structures and functions. This disruption can further impact nutrient cycling and overall ecosystem health. == Dietary nitrate == A source of nitrate in the human diet arises from the consumption of leafy green foods, such as spinach and arugula. NO−3 can be present in beetroot juice. Drinking water also represents a primary source of nitrate intake. Nitrate ingestion rapidly increases the plasma nitrate concentration by a factor of 2 to 3, and this elevated nitrate concentration can be maintained for more than 2 weeks. Increased plasma nitrate enhances the production of nitric oxide, NO. Nitric oxide is a physiological signaling molecule which intervenes in, among other things, regulation of muscle blood flow and mitochondrial respiration. === Cured meats === Nitrite (NO−2) consumption is primarily determined by the amount of processed meats eaten, and the concentration of nitrates (NO−3) added to these meats (bacon, sausages…) for their curing. Although nitrites are the nitrogen species chiefly used in meat curing, nitrates are used as well and can be transformed into nitrite by microorganisms, or in the digestion process, starting with their dissolution in saliva and their contact with the microbiota of the mouth. Nitrites lead to the formation of carcinogenic nitrosamines. The production of nitrosamines may be inhibited by the use of the antioxidants vitamin C and the alpha-tocopherol form of vitamin E during curing. Many meat processors claim their meats (e.g. bacon) are "uncured" – which is a marketing claim with no factual basis: there is no such thing as "uncured" bacon (as that would be, essentially, raw sliced pork belly).
"Uncured" meat is in fact actually cured with nitrites with virtually no distinction in process – the only difference being the USDA labeling requirement between nitrite of vegetable origin (such as from celery) vs. "synthetic" sodium nitrite. An analogy would be purified "sea salt" vs. sodium chloride – both being exactly the same chemical with the only essential difference being the origin. Anti-hypertensive diets, such as the DASH diet, typically contain high levels of nitrates, which are first reduced to nitrite in the saliva, as detected in saliva testing, prior to forming nitric oxide (NO). == Domestic animal feed == Symptoms of nitrate poisoning in domestic animals include increased heart rate and respiration; in advanced cases blood and tissue may turn a blue or brown color. Feed can be tested for nitrate; treatment consists of supplementing or substituting existing supplies with lower nitrate material. Safe levels of nitrate for various types of livestock are as follows: The values above are on a dry (moisture-free) basis. == Salts and covalent derivatives == Nitrate formation with elements of the periodic table: == See also == Ammonium Eutrophication f-ratio in oceanography Frost diagram Nitrification Nitratine Nitrite, the anion NO−2 Nitrogen oxide Nitrogen trioxide, the neutral radical NO3 Peroxynitrate, OONO–2 Sodium nitrate == References == == External links == ATSDR – Case Studies in Environmental Medicine – Nitrate/Nitrite Toxicity (archive)
DNA paternity testing uses DNA profiles to determine whether an individual is the biological parent of another individual. Paternity testing can be essential when the rights and duties of the father are at issue and a child's paternity is in doubt. Tests can also determine the likelihood of someone being a biological grandparent. Though genetic testing is the most reliable standard, older methods also exist, including ABO blood group typing, analysis of various other proteins and enzymes, or the use of human leukocyte antigens. The current paternity testing techniques are polymerase chain reaction (PCR) and restriction fragment length polymorphism (RFLP). Paternity testing can now also be performed from a blood draw while the woman is still pregnant. DNA testing is currently the most advanced and accurate technology to determine parentage. In a DNA paternity test, the result (called the 'probability of parentage') is 0% when the alleged parent is not biologically related to the child, and the probability of parentage is typically 99.99% when the alleged parent is biologically related to the child. However, while almost all individuals have a single and distinct set of genes, rare individuals, known as "chimeras", have at least two different sets of genes. This can lead to complications during DNA analysis, such as false negative results if their reproductive tissue has a different genetic makeup from the tissue sampled for the test. == Paternity or maternity testing for child or adult == The DNA test is conducted by collecting buccal (cheek) cells found on the inside of a person's cheek using a buccal or cheek swab. These swabs have handles made of wood or plastic with a cotton or synthetic tip. The collector rubs the inside of a person's cheek to collect as many buccal cells as possible, which are then sent to a laboratory for testing. Samples from both the alleged father or mother and the child are required for the test. == Prenatal paternity testing for unborn child == === Invasive prenatal paternity testing === It is possible to determine who the biological father of the fetus is while the woman is still pregnant through a procedure known as chorionic villus sampling (CVS) or amniocentesis. Chorionic villus sampling retrieves placental tissue, either through the cervix (transcervical) or through the abdominal wall (transabdominal). Amniocentesis involves collecting amniotic fluid by inserting a needle through the pregnant mother's abdominal wall. Both procedures are highly accurate because they obtain samples directly from the fetus. However, there is a small risk of miscarriage associated with them, which could result in the loss of the pregnancy. Both CVS and amniocentesis require the pregnant woman to consult a maternal-fetal medicine specialist who will perform the procedure. === Non-invasive prenatal paternity testing === Recent advances in genetic testing have led to the ability to identify the biological father while the woman is still pregnant. A small quantity of cell-free fetal DNA (cffDNA) is present in the mother's blood during pregnancy. This allows for accurate paternity testing during pregnancy from a blood draw without any risk of miscarriage. Research indicates that cffDNA can first be detected as early as seven weeks into the pregnancy, and its quantity increases as the pregnancy continues. == DNA profiling == The DNA of an individual is identical in all somatic (non-reproductive) cells.
During sexual reproduction, the DNA from both parents combines to create a unique genetic makeup in a new cell. As a result, an individual's genetic material is derived equally from each parent. This genetic material is referred to as the nuclear genome because it is located in the nucleus of a cell. Autosomal DNA testing allows for a comparison between the child's DNA, the mother's DNA, and the alleged father's DNA. By examining the genetic contribution from the mother, researchers can determine possible genotypes for the actual father. Specific sequences are examined to see if they were copied verbatim from one individual's genome; if so, then the genetic material of one individual could have been derived from that of the other (i.e. one is the parent of the other). If the alleged father cannot be excluded as the true father, then statistical analysis can be performed to assess how likely it is that the alleged father is the true father compared to a random man. In addition to nuclear DNA, mitochondria contain their own genetic material known as mitochondrial DNA. This mitochondrial DNA is inherited solely from the mother and is passed down without any mixing. As a result, establishing a relationship through the comparison of the mitochondrial genome is generally easier than doing so with the nuclear genome. However, testing the mitochondrial DNA can only confirm whether two individuals share a maternal ancestry; it cannot be used to determine paternity. Therefore, its application is somewhat limited. In testing the paternity of a male child, the Y chromosome can be used for comparison, as it is inherited directly from father to son. Like mitochondrial DNA, the Y chromosome is passed down through the paternal line. This means that two brothers share the same Y chromosome inherited from their father. Therefore, if one brother is the alleged father, his biological brother could also be the father based solely on Y chromosomal data. This holds true for any male relative of the suspected father along the paternal line. For this reason, autosomal DNA testing would provide a more accurate method for determining paternity. In the US, the AABB has established regulations for DNA paternity and family relationship testing, although AABB accreditation is not mandatory. DNA test results can be considered legally admissible if the collection and processing adhere to a proper chain of custody. Similarly, in Canada, the Standards Council of Canada (SCC) has regulations on DNA paternity and relationship testing; accreditation is recommended but not required. The Paternity Testing Commission of the International Society for Forensic Genetics is responsible for creating biostatistical recommendations in accordance with the ISO/IEC 17025 standards. Biostatistical evaluations of paternity should be based on the likelihood ratio principle, resulting in the Paternity Index (PI). These recommendations offer guidance on the formulation of genetic hypotheses, on the calculations needed to produce valid PIs, and on specific issues related to population genetics. == History == Parental testing has evolved significantly since the 1920s. The earliest method was blood typing, relying on the inheritance of blood types discovered in 1901. In blood typing, the blood types of the child and the alleged parents are compared to assess the possibility of a parental linkage. For instance, two type O parents can only have type O children, while type B parents can have type B or O offspring.
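The inheritance rules behind those blood-typing statements can be spelled out in a few lines. The sketch below is a self-contained illustration of standard ABO genetics (it is not part of any testing standard): it enumerates the child phenotypes compatible with a given pair of parental phenotypes, which is exactly the exclusion logic early paternity testing relied on.

```python
# Minimal sketch of the ABO exclusion logic described above. Each blood-type
# phenotype corresponds to one or more genotypes built from the A, B and O alleles
# (A and B dominant over O); a child's possible types follow from one allele drawn
# from each parent. This is textbook Mendelian genetics, not a lab protocol.
from itertools import product

PHENOTYPE_TO_GENOTYPES = {
    "O":  [("O", "O")],
    "A":  [("A", "A"), ("A", "O")],
    "B":  [("B", "B"), ("B", "O")],
    "AB": [("A", "B")],
}

def genotype_to_phenotype(genotype: tuple) -> str:
    alleles = set(genotype)
    if alleles == {"A", "B"}:
        return "AB"
    if "A" in alleles:
        return "A"
    if "B" in alleles:
        return "B"
    return "O"

def possible_child_types(parent1: str, parent2: str) -> set:
    """All child phenotypes compatible with the two parental phenotypes."""
    types = set()
    for g1, g2 in product(PHENOTYPE_TO_GENOTYPES[parent1], PHENOTYPE_TO_GENOTYPES[parent2]):
        for allele1, allele2 in product(g1, g2):
            types.add(genotype_to_phenotype((allele1, allele2)))
    return types

print(possible_child_types("O", "O"))   # {'O'}
print(possible_child_types("B", "B"))   # {'B', 'O'}
print(possible_child_types("A", "B"))   # {'A', 'B', 'AB', 'O'}
```

Running it reproduces the two examples given above: two type O parents can only have type O children, and two type B parents can have type B or type O children.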
However, this method was limited, excluding only about 30% of potential parents based solely on blood type. In the 1930s, serological testing improved the process by examining proteins in the blood, with an exclusion rate of around 40%. The 1960s brought Human Leukocyte Antigen (HLA) typing, which compared genetic markers in white blood cells, achieving about 80% accuracy but struggling to differentiate between close relatives. The 1970s saw advancements with the discovery of restriction enzymes, leading to Restriction Fragment Length Polymorphism (RFLP) testing in the 1980s, which offered high accuracy. By the 1990s, Polymerase Chain Reaction (PCR) became the standard, providing faster, simpler, and more accurate results with exclusion rates of 99.99% or higher, revolutionizing parental testing in both legal and familial matters. == Legal evidence == A DNA parentage test that adheres to a strict chain of custody can produce legally admissible results used for various purposes, including child support, inheritance, social welfare benefits, immigration, and adoption. To meet the chain-of-custody legal requirements, all tested individuals must be properly identified, and their specimens must be collected by an independent third party who is not related to any of the tested parties and has no interest in the test's outcome. The quantum of evidence needed is clear and convincing evidence, meaning that it is more substantial than in an ordinary civil case but less than the "beyond a reasonable doubt" standard needed for a criminal conviction. In recent years, immigration authorities in multiple countries, including the United States, United Kingdom, Canada, Australia, and France, may accept DNA parentage test results from immigration petitioners and beneficiaries in a family-based immigration case when primary documents that prove biological relationships are missing or inadequate. In the U.S., it is the responsibility of immigration applicants to arrange and cover the cost of DNA testing. U.S. immigration authorities mandate that any DNA test performed must be conducted by a laboratory accredited by the AABB (formerly the American Association of Blood Banks). Similarly, in Canada, the laboratory must be certified by the Standards Council of Canada. Although paternity tests are more prevalent than maternity tests, there are situations where the biological mother of the child is uncertain. Examples include cases in which an adopted child seeks to reunite with their biological mother, potential hospital mix-ups, and in vitro fertilization scenarios where an unrelated embryo may have been implanted in the mother. Other factors, such as new laws regarding reproductive technologies involving donated eggs and sperm or surrogate mothers, can also complicate the determination of legal motherhood. For instance, in Canada, the federal Assisted Human Reproduction Act allows for the use of hired surrogate mothers, meaning that the legal mother may be the egg donor rather than the woman who gave birth. Similar laws exist in the United Kingdom and Australia. In Brazil in 2019, two male identical twins were ordered to both pay maintenance for a child fathered by one of them because the father could not be identified with DNA. == Legal issues == === Australia === Peace-of-mind parentage tests are readily available online.
However, for a parentage test (whether paternity or maternity) to be admissible in legal matters—such as changing a birth certificate, proceeding with Family Law Court cases, applying for visas or citizenship, or making child support claims—it must comply with the Family Law Regulations 1984 (Cth). Additionally, the laboratory that processes the samples must be accredited by the National Association of Testing Authorities (NATA). === Canada === Personal paternity-testing kits are available for use. In Canada, the Standards Council regulates paternity testing, ensuring that laboratories are ISO 17025 approved. Only a limited number of laboratories possess this approval, making it advisable to have tests conducted at these accredited facilities. Additionally, courts can order paternity tests during divorce proceedings. === China === In China, paternity testing is legally available for fathers who suspect that a child may not be theirs. Chinese law also mandates a paternity test for any child born outside the one-child policy in order for the child to be eligible for a Hukou, which is a family registration record. Additionally, family ties established by adoption can only be confirmed through a paternity test. Each year, a significant number of Chinese citizens seek paternity testing, leading to the emergence of many unlicensed and illegal testing centers being set up. === France === DNA paternity testing is conducted only at the discretion of a judge during judicial proceedings aimed at either establishing or contesting paternity, or for the purposes of obtaining or denying child support. Non-consensual private DNA paternity testing is illegal, even if carried out through laboratories in other countries. Violation of this law is punishable by up to one year in prison and a fine of €15,000. The French Council of State has described the purpose of this law as upholding the "French regime of filiation" and preserving "the peace of families". === Germany === Under the Gene Diagnostics Act of 2009, secret paternity testing is prohibited. Any paternity test must be conducted by a licensed physician or an expert with a university degree in science and specialized education in parentage testing. Additionally, the laboratory performing the genetic testing must be accredited according to ISO/IEC 17025. Full informed consent from both parents is required for testing. Prenatal paternity testing is also prohibited, except in cases of sexual abuse and rape. If genetic testing is performed without the other parent's consent, the offender may face a fine of €5,000. Furthermore, due to an amendment to civil law section 1598a in 2005, a man who contests paternity no longer automatically loses his legal rights and obligations regarding the child. === Israel === A paternity test that holds legal standing must be ordered by a family court. Although parents can access "peace of mind" parental tests from overseas laboratories, family courts are not obliged to accept these tests as evidence. Additionally, it is illegal to collect genetic material for a paternity test from a minor over 16 years of age without the minor's consent. Family courts have the authority to order paternity tests even against the father's wishes in cases involving divorce, child support, and other matters like determining heirs or settling population registry questions. A man who wishes to prove that he is not the father of a child registered as his is entitled to a paternity test, regardless of the mother and guardian's objections. 
Paternity tests are not conducted if there is a belief that it could lead to the mother's death. Until 2007, such tests were also not ordered when there was a possibility that the child of a married woman could have been fathered by a man other than her husband, which would designate the child as a mamzer under Jewish law. === Philippines === DNA paternity testing for personal knowledge is legal, and home test kits can be obtained by mail from representatives of AABB- and ISO-certified laboratories. However, DNA paternity testing intended for official purposes, such as child support (sustento) and inheritance disputes, must adhere to the Rule on DNA Evidence A.M. No. 06-11-5-SC, which was issued by the Philippine Supreme Court on October 15, 2007. In some cases, courts may order these tests when proof of paternity is needed. === Spain === In Spain, peace-of-mind paternity tests are a "big business," partly due to the French ban on paternity testing, with many genetic testing companies being based in Spain. === United Kingdom === In the United Kingdom, there were previously no restrictions on paternity tests until the Human Tissue Act 2004 came into effect in September 2006. Section 45 of this Act states that it is an offense to possess any human bodily material without appropriate consent if the intent is to analyze its DNA. Legally recognized fathers are allowed access to paternity-testing services under these new regulations, provided that the DNA being tested is their own. Courts may sometimes order tests when proof of paternity is necessary. In the UK, the Ministry of Justice accredits organizations that are authorized to conduct these tests. The Department of Health produced a voluntary code of practice on genetic paternity testing in 2001, which is currently under review. Responsibility for this code has been transferred to the Human Tissue Authority. In the 2018 case of Anderson V Spencer, the Court of Appeal allowed DNA samples obtained from a deceased person to be used for paternity testing for the first time. === United States === In the United States, paternity testing is entirely legal, and fathers may test their children without the consent or knowledge of the mother. Paternity testing take-home kits are readily available for purchase, though their results are not admissible in court and are for personal knowledge only. Only a court-ordered paternity test may be used as evidence in court proceedings. If parental testing is being submitted for legal purposes, including immigration, testing must be ordered through a lab that has AABB accreditation for relationship DNA testing. The legal implications of a parentage result test vary by state and according to whether the putative parents are unmarried or married. If a parentage test does not meet forensic standards for the state in question, a court-ordered test may be required for the results of the test to be admissible for legal purposes. For unmarried parents, if a parent is currently receiving child support or custody, but DNA testing later proves that the man is not the father, support automatically stops. However, in many states, this testing must be performed during a narrow window of time if a voluntary acknowledgment of parentage form has already been signed by the putative father; otherwise, the results of the test may be disregarded by law, and in many cases, a man may be required to pay child support, though the child is biologically unrelated. 
In a few states, if the mother is receiving the support, then that alleged father has the right to file a lawsuit to get back any money that he lost from paying support. As of 2011, in most states, unwed parents confronted with a voluntary acknowledgment of parentage form are informed of the possibility and right to request a DNA paternity test. If testing is refused by the mother, the father may not be required to sign the birth certificate or the voluntary acknowledgement of parentage form for the child. For wedded putative parents, the husband of the mother is presumed to be the father of the child. But, in most states, this presumption can be overturned by the application of a forensic paternity test; in many states, the time for overturning this presumption may be limited to the first few years of the child's life. == Reverse paternity testing == Reverse paternity determination is the ability to establish the biological father of a person when the father himself is not available for testing. The test uses STR alleles from the mother and her child, and from other children and brothers of the alleged father, to deduce the father's likely genetic constitution on the basis of the laws of inheritance, building up an approximate reconstruction of his profile. This allows a comparison to be made when a direct sample of the father's DNA is unavailable. An episode of Solved shows this test being used to determine whether a blood sample matched the victim of a kidnapping. == See also ==
Paternity fraud
Mosaicism and chimerism, rare genetic conditions that can result in false negative results on DNA-based tests
Non-paternity event
Lauren Lake's Paternity Court, a television series that debuted in fall 2013
Genetic: Heritability
List of Mendelian traits in humans
== External links == UK paternity testing regulations per the Human Tissue Authority (archived September 23, 2018, at the Wayback Machine)
Wikipedia/DNA_paternity_testing
Graphology is the analysis of handwriting in an attempt to determine the writer's personality traits. Its methods and conclusions are not supported by scientific evidence, and as such it is considered to be a pseudoscience. Graphology has been controversial for more than a century. Although proponents point to positive testimonials as anecdotal evidence of its utility for personality evaluation, these claims have not been supported by scientific studies. It has been rated as among the most discredited methods of psychological analysis by a survey of mental health professionals. == Etymology == The word "graphology" derives from the Greek γραφή (grapho-; 'writing') and λόγος (logos; 'theory'). == History == In 1991, Jean-Charles Gille-Maisani stated that Juan Huarte de San Juan's 1575 Examen de ingenios para las ciencias was the first book on handwriting analysis. In American graphology, Camillo Baldi's Trattato come da una lettera missiva si conoscano la natura e qualità dello scrittore from 1622 is considered to be the first book. Around 1830, Jean-Hippolyte Michon became interested in handwriting analysis. He published his findings shortly after founding Société Graphologique in 1871. The most prominent of his disciples was Jules Crépieux-Jamin, who rapidly published a series of books that were soon translated into other languages. Starting from Michon's integrative approach, Crépieux-Jamin founded a holistic approach to graphology. Alfred Binet was convinced to conduct research into graphology from 1893 to 1907. He called it "the science of the future" despite rejection of his results by graphologists. French psychiatrist Joseph Rogues De Fursac combined graphology and psychiatry in the 1905 book Les ecrits et les dessins dans les maladies mentales et nerveuses. After World War I, interest in graphology continued to spread in Europe and the United States. In Germany during the 1920s, Ludwig Klages founded the journal Zeitschrift für Menschenkunde (Journal for the Study of Mankind), in which he published his findings. His major contribution to the field can be found in Handschrift und Charakter. Thea Stein Lewinson and J. Zubin modified Klages's ideas, based upon their experience working for the U.S. government, publishing their method in 1942. In 1929, Milton Bunker founded The American Grapho Analysis Society, which taught graphoanalysis. This organization and its system split the American graphology world in two. Students had to choose between graphoanalysis or holistic graphology. While hard data is lacking, anecdotal accounts indicate that 10% of the members of the International Graphoanalysis Society (IGAS) were expelled between 1970 and 1980. Regarding a proposed correlation between biological sex and handwriting style, a paper published by James Hartley in 1989 concluded that there was some evidence in support of this hypothesis. Rowan Bayne, a British psychologist who has written several studies on graphology, summarized his view of the appeal of graphology: "[I]t's very seductive because at a very crude level someone who is neat and well behaved tends to have neat handwriting", adding that the practice is "useless... absolutely hopeless". The British Psychological Society ranks graphology alongside astrology, giving them both "zero validity". Graphology was also dismissed as a pseudoscience by the skeptic James Randi in 1991.
In his May 21, 2013 Skeptoid podcast episode titled "All About Graphology", scientific skeptic author Brian Dunning reports: In his book The Write Stuff, Barry Beyerstein summarized the work of Geoffrey Dean, who performed probably the most extensive literature survey of graphology ever done. Dean did a meta-analysis on some 200 studies: Dean showed that graphologists have unequivocally failed to demonstrate any validity or reliability of their art for predicting work performance, aptitudes, or personality. Graphology thus fails according to the standards which a genuine psychological test must pass before it can ethically be released for use on the public. Dean found that no particular school of graphology fared better than any other. In fact, no graphologist of any kind was able to show reliably better performance than untrained amateurs making guesses from the same materials. In the vast majority of studies, neither group exceeded chance expectancy. Dunning concludes: Other divining techniques like iridology, phrenology, palmistry, and astrology also have differing schools of thought, require years of training, offer expensive certifications, and fail just as soundly when put to a scientific controlled test. Handwriting analysis does have its plausible-sounding separation from those other techniques though, and that's the whole "handwriting is brainwriting" idea — traits from the brain will be manifested in the way that it controls the muscles of the hand. Unfortunately, this is just as unscientific as the others. No amount of sciencey sounding language can make up for a technique failing when put to a scientifically controlled test. == Use by employers == Although graphology had some support in the scientific community before the mid-twentieth century, more recent research rejects the validity of graphology as a tool to assess personality and job performance. Today it is considered a pseudoscience. Many studies have been conducted to assess its effectiveness in predicting personality and job performance. Recent studies testing the validity of using handwriting for predicting personality traits and job performance have been consistently negative. Measures of job performance appear similarly unrelated to the handwriting metrics of graphologists. Professional graphologists using handwriting analysis were just as ineffective as lay people at predicting performance in a 1989 study. A broad literature screen by King and Koehler confirmed dozens of studies showing that the geometric aspects of graphology (slant, slope, etc.) are essentially worthless as predictors of job performance. === Additional specific objections === The Barnum effect (the tendency to interpret vague statements as specifically meaningful) and the Dr. Fox effect (the tendency for supposed experts to be validated based on likeability rather than actual skill) make it difficult to validate methods of personality testing. These phenomena describe the observation that individuals will give high accuracy ratings to descriptions of their personality that supposedly are tailored specifically for them, but are in fact vague and general enough to apply to a wide range of people. See, for example, Tallent (1958). Non-individualized graphological reports give credence to this criticism. Effect size: Dean's (1992) primary argument against the use of graphology is that the effect size is too small.
Regardless of the validity of handwriting analysis, the research results imply that it is not applicable for any specific individual, but may be applicable to a group. Vagueness: Some important principles of graphology are vague enough to allow significant room for a graphologist to skew interpretations to suit a subject or preconceived conclusion. For example, one of the main concepts in the theory of Ludwig Klages is form-niveau (or form-level): the overall level of originality, beauty, harmony, style, etc. of a person's handwriting—a quality that, according to Klages, can be perceived but not measured. According to this theory, the same sign has a positive or negative meaning depending on the subject's overall character and personality as revealed by the form-niveau. In practice, this can lead the graphologist to interpret signs positively or negatively depending on whether the subject has high or low social status. == Systems == Integrative graphology focuses on the strokes and their purported relation to personality. Graphoanalysis was the most influential system in the United States between 1929 and 2000. Holistic graphology is based on form, movement, and use of space. It uses psychograms to analyze handwriting. Four academic institutions offer an accredited degree in handwriting analysis: the University of Urbino, Italy: MA (Graphology); Instituto Superior Emerson, Buenos Aires, Argentina: BA (Graphology); Centro de Estudios Superiores (CES), Buenos Aires, Argentina: BA (Graphology); and the Autonomous University of Barcelona, Barcelona, Spain: MA (Graphology). == Vocabulary == Every system of handwriting analysis has its own vocabulary. Even though two or more systems may share the same words, the meanings of those words may be different. The technical meaning of a word used by a handwriting analyst and its common meaning are often not congruent. Resentment, for example, in common usage, means annoyance. In graphoanalysis, the term indicates a fear of imposition. == Legal considerations == === Hungary === A report by the Hungarian Parliamentary Commissioner for Data Protection and Freedom of Information says that handwriting analysis without informed consent is a privacy violation. === United States === ==== Employment law ==== A 2001 advisory opinion letter from the U.S. Equal Employment Opportunity Commission responded to a question regarding "whether it is legal to use an analysis of an applicant's handwriting as an employment screening tool. You also ask whether it is legal to ask the applicant's age and use of medications to allow for variants in his/her handwriting." The letter advised that in this circumstance, it was illegal under the Americans with Disabilities Act of 1990 (ADA) to ask a job applicant whether he or she is taking any medications, and also advised that asking an applicant for his or her age "allegedly to allow for variants in analyzing his/her handwriting" was not a per se violation of the Age Discrimination in Employment Act of 1967 (ADEA), but could be significant evidence of age discrimination. The letter also said that there was no judicial guidance on "whether a policy of excluding applicants based upon their handwriting has an adverse impact on a protected group" under the ADA, ADEA, or Title VII of the Civil Rights Act of 1964. == Applications == === Gender and handwriting === A 1991 review of the then-current literature concluded that respondents were able to identify the gender of a writer from handwriting between 57 and 78% of the time.
However, most of these studies, as well as subsequent ones, are based on small samples that were collected non-randomly. A much larger and more recent survey of over 3,000 participants found a classification accuracy of only 54%. As statistical discrimination below 0.7 is generally considered unacceptable, this indicates that most results are rather inaccurate, and that the variation observed in results is likely due to sampling technique and bias. The reason for this bias varies; hypotheses are that biology contributes through average differences in fine motor skills between males and females, and that differences arise from culture and gender bias. === Employment profiling === A company takes a writing sample provided by an applicant and does a personality profile, supposedly matching the congruence of the applicant with the ideal psychological profile of employees in the position. The applicant can also game this system by asking someone else to produce the writing sample on their behalf. A graphological report is meant to be used in conjunction with other tools, such as comprehensive background checks and practical demonstrations or records of work skills. Graphology supporters state that it can complement but not replace traditional hiring tools. Research in employment suitability has ranged from complete failure to guarded success. The most substantial reason for not using handwriting analysis in the employment process is the absence of evidence of a direct link between handwriting analysis and various measures of job performance. The use of graphology in the hiring process has been criticized on ethical and legal grounds in the United States. === Psychological analysis === Graphology has been used clinically by counselors and psychotherapists. When it is used, it is generally used alongside other projective personality assessment tools, and not in isolation. It is often used within individual psychotherapy, marital counseling, or vocational counseling. === Marital compatibility === In its simplest form, only sexual expression and sexual response are examined. At its most complex, every aspect of an individual is examined for how it affects the other individual(s) within the relationship. The theory is that after knowing and understanding how each individual in the relationship differs from every other individual in the relationship, the resulting marriage will be more enduring. In a comparative analysis, the responses of each partner are measured and compared. === Medical diagnosis === Medical graphology is probably the most controversial branch of handwriting analysis. Strictly speaking, such research is not graphology as described throughout this article but an examination of factors pertaining to motor control. Research studies have been conducted in which a detailed examination of handwriting factors, particularly timing, fluidity, and consistency of size, form, speed, and pressure, is considered in the process of evaluating patients and their response to pharmacological therapeutic agents. The study of these phenomena is a by-product of researchers investigating motor control processes and the interaction of nervous, anatomical, and biomechanical systems of the body. The Vanguard Code of Ethical Practice, amongst others, prohibits medical diagnosis by those not licensed to do diagnosis in the state in which they practice.
=== Graphotherapy === Graphotherapy is the pseudoscience of changing a person's handwriting with the goal of changing features of his or her personality, or "handwriting analysis in reverse." It originated in France during the 1930s, spreading to the United States in the late 1950s. The purported therapy consists of a series of exercises similar to those taught in basic calligraphy courses, sometimes in conjunction with music or positive self-talk. == See also == Literomancy Numerology Physiognomy === Graphologists === Max Pulver Robert Saudek Rafael Schermann Léopold Szondi Sheila Lowe === Related fields === Asemic writing List of topics characterized as pseudoscience Palaeography Graphonomics Doodle == References == == Further reading == Bangerter A, König CJ, Blatti S, Salvisberg A (2009). "How Widespread is Graphology in Personnel Selection Practice? A case study of a job market myth" (PDF). International Journal of Selection and Assessment. 17 (2): 219–30. doi:10.1111/j.1468-2389.2009.00464.x. S2CID 55481603. Berger J (2002). "Handwriting Analysis and Graphology". In Shermer M (ed.). The Skeptic Encyclopedia of Pseudoscience. ABC-CLIO. pp. 116–20. ISBN 978-1-57607-653-8. == External links == Skeptic's Dictionary entry on graphology BBC article about graphology How Graphology Fools People
Wikipedia/Graphology
In information science, profiling refers to the process of construction and application of user profiles generated by computerized data analysis. This is the use of algorithms or other mathematical techniques that allow the discovery of patterns or correlations in large quantities of data, aggregated in databases. When these patterns or correlations are used to identify or represent people, they can be called profiles. Other than a discussion of profiling technologies or population profiling, the notion of profiling in this sense is not just about the construction of profiles, but also concerns the application of group profiles to individuals, e.g., in the cases of credit scoring, price discrimination, or identification of security risks (Hildebrandt & Gutwirth 2008) (Elmer 2004). Profiling is being used in fraud prevention, ambient intelligence, consumer analytics, and surveillance. Statistical methods of profiling include Knowledge Discovery in Databases (KDD). == The profiling process == The technical process of profiling can be separated into several steps: Preliminary grounding: The profiling process starts with a specification of the applicable problem domain and the identification of the goals of analysis. Data collection: The target dataset or database for analysis is formed by selecting the relevant data in the light of existing domain knowledge and data understanding. Data preparation: The data are preprocessed to remove noise and reduce complexity, for example by eliminating attributes. Data mining: The data are analysed with the algorithm or heuristics developed to suit the data, model and goals. Interpretation: The mined patterns are evaluated on their relevance and validity by specialists and/or professionals in the application domain (e.g. excluding spurious correlations). Application: The constructed profiles are applied, e.g. to categories of persons, to test and fine-tune the algorithms. Institutional decision: The institution decides what actions or policies to apply to groups or individuals whose data match a relevant profile. Data collection, preparation and mining all belong to the phase in which the profile is under construction. However, profiling also refers to the application of profiles, meaning the usage of profiles for the identification or categorization of groups or individual persons. As can be seen in step six (application), the process is circular. There is a feedback loop between the construction and the application of profiles. The interpretation of profiles can lead to the iterative – possibly real-time – fine-tuning of specific previous steps in the profiling process. The application of profiles to people whose data were not used to construct the profile is based on data matching, which provides new data that allows for further adjustments. The process of profiling is both dynamic and adaptive. A good illustration of the dynamic and adaptive nature of profiling is the Cross-Industry Standard Process for Data Mining (CRISP-DM). == Types of profiling practices == In order to clarify the nature of profiling technologies, some crucial distinctions have to be made between different types of profiling practices, apart from the distinction between the construction and the application of profiles. The main distinctions are those between bottom-up and top-down profiling (or supervised and unsupervised learning), and between individual and group profiles.
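The construction-and-application loop described under the profiling process above can be sketched in a few lines of Python. This is a purely illustrative toy, not drawn from the article: the attributes, records, and the choice of a tiny k-means routine as the mining step are hypothetical stand-ins for the far richer techniques used in practice.

# Minimal sketch of the profiling loop: mine group profiles from aggregated
# records (construction), then apply the closest profile to a new individual
# (application). All field names and numbers are hypothetical.
import math
import random

# Data collection / preparation: small, pre-cleaned records (age, monthly spend).
records = [(23, 210.0), (25, 180.0), (31, 240.0),
           (52, 640.0), (58, 720.0), (61, 690.0)]

def kmeans(points, k=2, iterations=20, seed=0):
    """Very small k-means: unsupervised ("bottom-up") pattern discovery."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

# Data mining: the centroids act as crude group profiles.
profiles = kmeans(records)

# Application: a person whose data match a profile is treated as a group member,
# even though the profile was built from other people's data (indirect profiling).
new_person = (29, 200.0)
assigned = min(profiles, key=lambda c: math.dist(new_person, c))
print("group profiles:", profiles)
print("new person assigned to profile:", assigned)

In a real deployment the interpretation and institutional-decision steps would sit between the mining and the print statements; the sketch only illustrates how the same data structures serve both construction and application.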
=== Supervised and unsupervised learning === Profiles can be classified according to the way they have been generated (Fayyad, Piatetsky-Shapiro & Smyth 1996) (Zarsky 2002–3). On the one hand, profiles can be generated by testing a hypothesized correlation. This is called top-down profiling or supervised learning. This is similar to the methodology of traditional scientific research in that it starts with a hypothesis and consists of testing its validity. The result of this type of profiling is the verification or refutation of the hypothesis. One could also speak of deductive profiling. On the other hand, profiles can be generated by exploring a database, using the data mining process to detect patterns in the database that were not previously hypothesized. In a way, this is a matter of generating hypotheses: finding correlations one did not expect or even think of. Once the patterns have been mined, they will enter the loop – described above – and will be tested with the use of new data. This is called unsupervised learning. Two things are important with regard to this distinction. First, unsupervised learning algorithms seem to allow the construction of a new type of knowledge, not based on hypotheses developed by a researcher and not based on causal or motivational relations but exclusively based on stochastic correlations. Second, unsupervised learning algorithms thus seem to allow for an inductive type of knowledge construction that does not require theoretical justification or causal explanation (Custers 2004). Some authors claim that if the application of profiles based on computerized stochastic pattern recognition 'works', i.e. allows for reliable predictions of future behaviours, the theoretical or causal explanation of these patterns does not matter anymore (Anderson 2008). However, the idea that 'blind' algorithms provide reliable information does not imply that the information is neutral. In the process of collecting and aggregating data into a database (the first three steps of the process of profile construction), translations are made from real-life events to machine-readable data. These data are then prepared and cleansed to allow for initial computability. Potential bias will have to be located at these points, as well as in the choice of algorithms that are developed. It is not possible to mine a database for all possible linear and non-linear correlations, meaning that the mathematical techniques developed to search for patterns will determine which patterns can be found. In the case of machine profiling, potential bias is not informed by common-sense prejudice or what psychologists call stereotyping, but by the computer techniques employed in the initial steps of the process. These techniques are mostly invisible to those to whom profiles are applied (because their data match the relevant group profiles). === Individual and group profiles === Profiles must also be classified according to the kind of subject they refer to. This subject can either be an individual or a group of people. When a profile is constructed with the data of a single person, this is called individual profiling (Jaquet-Chiffelle 2008). This kind of profiling is used to discover the particular characteristics of a certain individual, to enable unique identification or the provision of personalized services.
However, personalized servicing is most often also based on group profiling, which allows categorisation of a person as a certain type of person, based on the fact that her profile matches a profile that has been constructed on the basis of massive amounts of data about massive numbers of other people. A group profile can refer to the result of data mining in data sets that refer to an existing community that considers itself as such, like a religious group, a tennis club, a university, a political party etc. In that case it can describe previously unknown patterns of behaviour or other characteristics of such a group (community). A group profile can also refer to a category of people that do not form a community, but are found to share previously unknown patterns of behaviour or other characteristics (Custers 2004). In that case the group profile describes specific behaviours or other characteristics of a category of people, like for instance women with blue eyes and red hair, or adults with relatively short arms and legs. These categories may be found to correlate with health risks, earning capacity, mortality rates, credit risks, etc. If an individual profile is applied to the individual that it was mined from, then that is direct individual profiling. If a group profile is applied to an individual whose data match the profile, then that is indirect individual profiling, because the profile was generated using data of other people. Similarly, if a group profile is applied to the group that it was mined from, then that is direct group profiling (Jaquet-Chiffelle 2008). However, in as far as the application of a group profile to a group implies the application of the group profile to individual members of the group, it makes sense to speak of indirect group profiling, especially if the group profile is non-distributive. === Distributive and non-distributive profiling === Group profiles can also be divided in terms of their distributive character (Vedder 1999). A group profile is distributive when its properties apply equally to all the members of its group: all bachelors are unmarried, or all persons with a specific gene have an 80% chance of contracting a specific disease. A profile is non-distributive when the profile does not necessarily apply to all the members of the group: the group of persons with a specific postal code have an average earning capacity of XX, or the category of persons with blue eyes has an average chance of 37% of contracting a specific disease. Note that in this case the chance that an individual will have a particular earning capacity or contract the specific disease will depend on other factors, e.g. sex, age, background of parents, previous health, education. It should be obvious that, apart from tautological profiles like that of bachelors, most group profiles generated by means of computer techniques are non-distributive. This has far-reaching implications for the accuracy of indirect individual profiling based on data matching with non-distributive group profiles. Quite apart from the fact that the application of accurate profiles may be unfair or cause undue stigmatisation, most group profiles will not be accurate. == Applications == In the financial sector, institutions use profiling technologies for fraud prevention and credit scoring. Banks want to minimize the risks in giving credit to their customers. On the basis of extensive group profiling, customers are assigned a scoring value that indicates their creditworthiness.
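The distributive/non-distributive distinction, and the kind of scoring just described, can be made concrete with a minimal sketch. The group, its risk values and the threshold below are invented for illustration only; real scoring models are far more elaborate.

# Illustrative sketch of distributive vs. non-distributive group profiles.
group = {"alice": 0.82, "bob": 0.15, "carol": 0.40, "dave": 0.31}  # hypothetical credit-risk scores

def is_distributive(values, predicate):
    """A profile property is distributive only if it holds for every member."""
    return all(predicate(v) for v in values)

avg_risk = sum(group.values()) / len(group)
profile = f"members of this group have an average risk of {avg_risk:.0%}"

print(profile)
print("distributive? ->", is_distributive(group.values(), lambda v: v >= avg_risk))
# Applying this (non-distributive) group profile to Bob individually would be
# indirect individual profiling: the 42% group average says little about his 15% risk.

The check returns False, which is exactly the point made above: a score derived from the group need not describe any particular member, so decisions based on it can misjudge individuals.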
Financial institutions like banks and insurance companies also use group profiling to detect fraud or money-laundering. Databases with transactions are searched with algorithms to find behaviors that deviate from the standard, indicating potentially suspicious transactions. In the context of employment, profiles can be of use for tracking employees by monitoring their online behavior, for detecting fraud by them, and for deploying human resources by pooling and ranking their skills (Leopold & Meints 2008). Profiling can also be used to support people at work and in learning, by intervening in the design of adaptive hypermedia systems that personalize interaction. For instance, this can be useful for supporting the management of attention (Nabeth 2008). In forensic science, the possibility exists of linking different databases of cases and suspects and mining these for common patterns. This could be used for solving existing cases or for the purpose of establishing risk profiles of potential suspects (Geradts & Sommer 2008) (Harcourt 2006). === Consumer profiling === Consumer profiling is a form of customer analytics, where customer data is used to make decisions on product promotion, the pricing of products, as well as personalized advertising. When the aim is to find the most profitable customer segment, consumer analytics draws on demographic data, data on consumer behavior, data on the products purchased, payment method, and surveys to establish consumer profiles. To establish predictive models on the basis of existing databases, the Knowledge Discovery in Databases (KDD) statistical method is used. KDD groups similar customer data to predict future consumer behavior. Other methods of predicting consumer behaviour are correlation and pattern recognition. Consumer profiles describe customers based on a set of attributes, and typically consumers are grouped according to income, living standard, age and location. Consumer profiles may also include behavioural attributes that assess a customer's motivation in the buyer decision process. Well-known examples of consumer profiles are Experian's Mosaic geodemographic classification of households, CACI's Acorn, and Acxiom's Personicx. === Ambient intelligence === In a built environment with ambient intelligence, everyday objects have built-in sensors and embedded systems that allow objects to recognise and respond to the presence and needs of individuals. Ambient intelligence relies on automated profiling and human–computer interaction designs. Sensors monitor an individual's actions and behaviours, thereby generating, collecting, analysing, processing and storing personal data. Early examples of consumer electronics with ambient intelligence include mobile apps, augmented reality and location-based services. == Risks and issues == Profiling technologies have raised a host of ethical, legal and other issues including privacy, equality, due process, security and liability. Numerous authors have warned against the affordances of a new technological infrastructure that could emerge on the basis of semi-autonomic profiling technologies (Lessig 2006) (Solove 2004) (Schwartz 2000). Privacy is one of the principal issues raised. Profiling technologies make possible a far-reaching monitoring of an individual's behaviour and preferences. Profiles may reveal personal or private information about individuals that they might not even be aware of themselves (Hildebrandt & Gutwirth 2008).
Profiling technologies are by their very nature discriminatory tools. They allow unparalleled kinds of social sorting and segmentation which could have unfair effects. The people that are profiled may have to pay higher prices, they could miss out on important offers or opportunities, and they may run increased risks because catering to their needs is less profitable (Lyon 2003). In most cases they will not be aware of this, since profiling practices are mostly invisible and the profiles themselves are often protected by intellectual property or trade secret. This poses a threat to the equality and solidarity of citizens. On a larger scale, it might cause the segmentation of society. One of the problems underlying potential violations of privacy and non-discrimination is that the process of profiling is more often than not invisible to those that are being profiled. This creates difficulties in that it becomes hard, if not impossible, to contest the application of a particular group profile. This disturbs principles of due process: if a person has no access to information on the basis of which benefits are withheld from them or certain risks are attributed to them, they cannot contest the way they are being treated (Steinbock 2005). Profiles can be used against people when they end up in the hands of people who are not entitled to access or use the information. An important issue related to these breaches of security is identity theft. When the application of profiles causes harm, it has to be determined who is to be held accountable for that harm: the software programmer, the profiling service provider, or the profiled user? This issue of liability is especially complex when the application of profiles and the decisions based on them have themselves become automated, as in autonomic computing or ambient intelligence. == See also == == References == Notes and other references
Wikipedia/Profiling_(information_science)
Forensic entomology has three sub-fields: urban, stored product and medico-criminal entomologies. This article focuses on medico-criminal entomology and how DNA recovered from various blood-feeding insects is analyzed. Forensic entomology can be an important tool for law enforcement. With the magnitude of information that can be gathered, investigators can more accurately determine time of death, location, how long a body has been in a specific area, whether it has been moved, and other important factors. == Blood meal extraction == To extract a blood meal from the abdomen of an insect in order to isolate and analyze DNA, the insect must first be killed by placing it in 96% ethanol. The killed insect can be stored at -20 °C until analysis. When it is time for analysis, the DNA must then be extracted by dissecting the posterior end of the abdomen and collecting 25 mg of tissue. The cut in the abdomen should be made with a razor blade as close to the posterior as possible to avoid the stomach. Using a DNA extraction kit, the DNA is extracted from the tissue. If the DNA is mixed with samples from more than one individual, it is separated using a species-specific primer. Once extracted and isolated, the DNA sample goes through a polymerase chain reaction (PCR), in which it is amplified and identified. PCR works by analyzing species-specific mitochondrial DNA. PCR is currently the most commonly used method of species identification. This is because it is very sensitive, requiring only a small amount of biological material, and it can also utilize material that is not particularly fresh. The sample can be frozen and stored while still remaining usable for later PCR. DNA requires one hour to reach the abdomen of an insect, so DNA can be amplified one to forty-four hours after an insect feeds. Some research suggests that the source of a blood meal can be determined up to two months post-feeding. To amplify DNA, it must first be denatured by exposing it to a temperature of 95 °C for one minute, followed by thirty cycles of thirty-second 95 °C exposures. The denatured DNA is then mixed with a specific primer. The amplified product is run on a 2% agarose gel, stained, and viewed with UV fluorescence. The DNA is identified by looking for genome-specific repetitive elements and by comparing it with known examples. == Haematophagous insects of forensic importance == Humans are constantly fed on by haematophagous (blood-feeding) insects. The ingested blood can be recovered and used to identify the person from whom it was taken. Bite marks and reactions to bites can be used to place a person in an area where those insects are found. === Order Diptera === The following among the flies (Diptera) have been utilized: Mosquitoes, Family Culicidae Due to erratic feeding habits, mosquitoes can provide valuable DNA evidence. Multiplex PCR enables reliable identification of bitten individuals from just one mosquito, even a few days after it has taken a blood meal. The insects would need to be collected as soon as possible due to the insect's high mobility, digestion of consumed blood (degradation of DNA) and repeated feeding, although dead specimens are also a potentially valuable source of evidentiary DNA. Research is centered on the mosquito due to its widespread presence and affinity for feeding on humans.
Biting midges, Family Ceratopogonidae Tsetse flies, Family Glossinidae Sheep keds, Family Hippoboscidae Stable and horn flies, Family Muscidae Sand flies, Family Psychodidae, Subfamily Phlebotominae Snipe flies, Family Rhagionidae Black flies, Family Simuliidae Horse flies, Family Tabanidae === Order Siphonaptera === Listed here are fleas commonly encountered by humans that could potentially be used for DNA identification. Sticktight and chigoe flea, Family Hectopsyllidae (formerly Tungidae) Cat flea (Ctenocephalides felis) Northern rat flea (Nosopsyllus fasciatus) Human flea (Pulex irritans) Oriental rat flea (Xenopsylla cheopis) === Order Hemiptera === Bedbug (Cimex lectularius) Cimex lectularius is an obligate parasite of humans. Testing a sample of a residence's bed bug population and screening for bites could reveal possible recent visitors to the structure, as bed bugs have been observed to feed approximately once a week in temperate conditions. A recent re-emergence of bedbug populations in North America, as well as growing interest in the field of forensics, may prove bedbugs to be useful investigative tools. Recent studies have revealed that human DNA can be recovered from bed bugs for up to 60 days after feeding, thus demonstrating the potential use of this insect in forensic entomology. Assassin bugs, Family Reduviidae === Order Phthiraptera === Lice can be indicators of contact with another person. Many species closely associated with humans can be easily transferred between individuals. DNA identification of multiple individuals using blood meals from body and head lice has been demonstrated in laboratory settings. ==== Suborder Anoplura ==== Head louse (Pediculus humanus capitis) Body louse (Pediculus humanus humanus) Pubic louse (Pthirus pubis) === Other Arthropods === ==== Order Ixodida ==== Due to the low probability of a tick detaching and falling to the ground at the scene of the crime, these may not be highly useful regardless of the large amount of blood and lymph they ingest. However, should an engorged tick be found in an area of interest, it would likely contain sufficient genetic material for identification. == Analysis of collected DNA == DNA identification of species can be a useful tool in forensic entomology. Although it does not replace conventional visual identification of species, it can be used to differentiate between two species of very similar or identical physical and behavioral characteristics. A thorough identification of the species through conventional methods is needed before an attempt at DNA analysis. This DNA can be obtained from practically any part of the insect, including the body, leg, setae, antennae, etc. There are about one million species described in the world and many more that have still not been identified. A project termed "the barcode of life" was launched by Dr. Paul D. N. Hebert, in which he identified a gene that is used in cellular respiration by all species but differs in sequence in every species. This difference in sequence can help entomologists easily identify two similar species. DNA sequencing is basically done in three steps: polymerase chain reaction (PCR), followed by a sequencing reaction, then gel electrophoresis. PCR amplifies a short, workable region of the much longer chromosomal DNA. These amplified pieces are used as templates in the sequencing reaction to create a set of fragments. These fragments differ in length from each other by one base, which is helpful in identification.
Those sets of fragments are then separated by gel electrophoresis. This process uses electricity to separate DNA fragments by size as they move through a gel matrix. In the presence of an electric current, the negatively charged DNA migrates toward the positive pole. The smaller DNA fragments move through the gel pores more easily and faster than larger molecules. At the bottom of the gel the fragments pass through a laser beam that causes each fragment to emit a distinct color according to its terminal base. == References ==
Wikipedia/Use_of_DNA_in_forensic_entomology
Network forensics is a sub-branch of digital forensics relating to the monitoring and analysis of computer network traffic for the purposes of information gathering, legal evidence, or intrusion detection. Unlike other areas of digital forensics, network investigations deal with volatile and dynamic information. Network traffic is transmitted and then lost, so network forensics is often a pro-active investigation. Network forensics generally has two uses. The first, relating to security, involves monitoring a network for anomalous traffic and identifying intrusions. An attacker might be able to erase all log files on a compromised host; network-based evidence might therefore be the only evidence available for forensic analysis. The second form relates to law enforcement. In this case analysis of captured network traffic can include tasks such as reassembling transferred files, searching for keywords and parsing human communication such as emails or chat sessions. Two systems are commonly used to collect network data: a brute-force "catch it as you can" approach and a more intelligent "stop, look, listen" method. == Overview == Network forensics is a comparatively new field of forensic science. The growing popularity of the Internet in homes means that computing has become network-centric and data is now available outside of disk-based digital evidence. Network forensics can be performed as a standalone investigation or alongside a computer forensics analysis (where it is often used to reveal links between digital devices or reconstruct how a crime was committed). Marcus Ranum is credited with defining network forensics as "the capture, recording, and analysis of network events in order to discover the source of security attacks or other problem incidents". Compared to computer forensics, where evidence is usually preserved on disk, network data is more volatile and unpredictable. Investigators often only have material to examine if packet filters, firewalls, and intrusion detection systems were set up to anticipate breaches of security. Systems used to collect network data for forensic use usually come in two forms: "Catch-it-as-you-can" – This is where all packets passing through a certain traffic point are captured and written to storage with analysis being done subsequently in batch mode. This approach requires large amounts of storage. "Stop, look and listen" – This is where each packet is analyzed in a rudimentary way in memory and only certain information saved for future analysis. This approach requires a faster processor to keep up with incoming traffic. == Types == === Ethernet === Capturing all data on this layer allows the user to filter for different events. With these tools, website pages, email attachments, and other network traffic can be reconstructed only if they are transmitted or received unencrypted. An advantage of collecting this data is that it is directly connected to a host. If, for example, the IP address or the MAC address of a host at a certain time is known, all data sent to or from this IP or MAC address can be filtered. To establish the connection between IP and MAC address, it is useful to take a closer look at auxiliary network protocols. The Address Resolution Protocol (ARP) tables list the MAC addresses with the corresponding IP addresses. To collect data on this layer, the network interface card (NIC) of a host can be put into "promiscuous mode". In so doing, all traffic will be passed to the CPU, not only the traffic meant for the host.
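As a rough illustration of this kind of Ethernet-layer collection, the following hedged Python sketch uses the Scapy library, assuming it is installed, that the process has capture privileges, and that an interface named eth0 exists; the addresses and the output filename are example values, not taken from the article.

# "Catch-it-as-you-can" style capture filtered to a host of interest.
from scapy.all import sniff, Ether, IP, wrpcap
from scapy.config import conf

conf.sniff_promisc = True          # pass all frames up, not just those addressed to us
SUSPECT_IP = "203.0.113.7"         # example IP of the host under investigation
SUSPECT_MAC = "00:11:22:33:44:55"  # example MAC, e.g. taken from an ARP table

captured = []

def keep_if_relevant(pkt):
    """Store frames whose MAC or IP matches the host under investigation."""
    if Ether in pkt and SUSPECT_MAC in (pkt[Ether].src, pkt[Ether].dst):
        captured.append(pkt)
    elif IP in pkt and SUSPECT_IP in (pkt[IP].src, pkt[IP].dst):
        captured.append(pkt)

# Capture 1000 frames, then write the retained evidence to a pcap for later analysis.
sniff(iface="eth0", prn=keep_if_relevant, store=False, count=1000)
wrpcap("suspect_traffic.pcap", captured)

Writing the filtered frames to a pcap file mirrors the batch-mode analysis described above: collection happens live, while reconstruction and keyword searching can be done later on the stored capture.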
However, if an intruder or attacker is aware that his connection might be eavesdropped, he might use encryption to secure his connection. It is almost impossible nowadays to break encryption but the fact that a suspect's connection to another host is encrypted all the time might indicate that the other host is an accomplice of the suspect. === TCP/IP === On the network layer the Internet Protocol (IP) is responsible for directing the packets generated by TCP through the network (e.g., the Internet) by adding source and destination information which can be interpreted by routers all over the network. Cellular digital packet networks, like GPRS, use similar protocols like IP, so the methods described for IP work with them as well. For the correct routing, every intermediate router must have a routing table to know where to send the packet next. These routing tables are one of the best sources of information if investigating a digital crime and trying to track down an attacker. To do this, it is necessary to follow the packets of the attacker, reverse the sending route and find the computer the packet came from (i.e., the attacker). === Encrypted traffic analytics === Given the proliferation of TLS encryption on the internet, as of April 2021 it is estimated that half of all malware uses TLS to evade detection. Encrypted traffic analysis inspects traffic to identify encrypted traffic coming from malware and other threats by detecting suspicious combinations of TLS characteristics, usually to uncommon networks or servers. Another approach to encrypted traffic analysis uses a generated database of fingerprints, although these techniques have been criticized as being easily bypassed by hackers and inaccurate. === Internet === The internet can be a rich source of digital evidence including web browsing, email, newsgroup, synchronous chat and peer-to-peer traffic. For example, web server logs can be used to show when (or if) a suspect accessed information related to criminal activity. Email accounts can often contain useful evidence; but email headers are easily faked and, so, network forensics may be used to prove the exact origin of incriminating material. Network forensics can also be used in order to find out who is using a particular computer by extracting user account information from the network traffic. == Wireless forensics == Wireless forensics is a sub-discipline of network forensics. The main goal of wireless forensics is to provide the methodology and tools required to collect and analyze (wireless) network traffic that can be presented as valid digital evidence in a court of law. The evidence collected can correspond to plain data or, with the broad usage of Voice-over-IP (VoIP) technologies, especially over wireless, can include voice conversations. Analysis of wireless network traffic is similar to that on wired networks, however there may be the added consideration of wireless security measures. == References == == External links == Overview of network forensic tools and datasets (2021) Forensics Wiki (2010)
Wikipedia/Network_forensics
A retrospective diagnosis (also retrodiagnosis or posthumous diagnosis) is the practice of identifying an illness after the death of the patient (sometimes a historical figure) using modern knowledge, methods and disease classifications. Alternatively, it can be the more general attempt to give a modern name to an ancient and ill-defined scourge or plague. == Historical research == Retrospective diagnosis is practised by medical historians, general historians and the media with varying degrees of scholarship. At its worst it may become "little more than a game, with ill-defined rules and little academic credibility". The process often requires "translating between linguistic and conceptual worlds separated by several centuries", and assumes our modern disease concepts and categories are privileged. Crude attempts at retrospective diagnosis fail to be sensitive to historical context, may treat historical and religious records as scientific evidence, or ascribe pathology to behaviours that require none. Darin Hayton, a historian of science at Haverford College, claims that retrodiagnosing famous individuals with autism in the media is pointless, as historical accounts often contain incomplete information. The understanding of the history of illness can benefit from modern science. For example, knowledge of the insect vectors of malaria and yellow fever can be used to explain the changes in extent of those diseases caused by drainage or urbanisation in historical times. The practice of retrospective diagnosis has been applied in parody, where characters from fiction are "diagnosed"; e.g., authors have speculated that Squirrel Nutkin may have had Tourette syndrome and that Tiny Tim could have had distal renal tubular acidosis (type I). == Postmortem diagnosis == Post-mortem diagnosis is considered both a research tool and a quality-control practice, and it allows the performance of clinical case definitions to be evaluated. The term retrospective diagnosis is also sometimes used by a clinical pathologist to describe a medical diagnosis in a person made some time after the original illness has resolved or after death. In such cases, analysis of a physical specimen may yield a confident medical diagnosis. The search for the origin of AIDS has involved posthumous diagnosis of AIDS in people who died decades before the disease was first identified. Another example is where analysis of preserved umbilical cord tissue enables the diagnosis of congenital cytomegalovirus infection in a patient who had later developed a central nervous system disorder. == Examples == Did Abraham, Moses, Jesus, Saint Paul or Muhammad have psychotic spectrum psychological symptoms? Did Tutankhamun have Klippel–Feil syndrome? Did Alfred the Great have Crohn's disease? Did botulism cause the religious visions experienced by Julian of Norwich? Was the English sweat caused by hantavirus? Was the Black Death due to bubonic plague? Was "the great pox" syphilis or several venereal diseases? Did King George III of the United Kingdom exhibit the classic symptoms of porphyria? Were the conditions blamed on witches at the Salem witch trials caused by ergotism? Did Napoleon die from stomach cancer, or was he poisoned with arsenic? Could Franklin D. Roosevelt's paralytic illness have been Guillain–Barré syndrome rather than poliomyelitis? Did Abraham Lincoln have Marfan syndrome? Did Karl Marx have hidradenitis suppurativa? Could Burke and Wills have died of thiaminase poisoning? Did René Descartes have exploding head syndrome?
== Retrospective diagnoses of autism == There have been many published speculative retrospective diagnoses of autism of historical figures. English scientist Henry Cavendish is believed by some to have been autistic. George Wilson, a notable chemist and physician, wrote a book about Cavendish entitled The Life of the Honourable Henry Cavendish (1851), which provides a detailed description that indicates Cavendish may have exhibited many classic signs of autism. The practice of retrospectively diagnosing autism is controversial. Professor Fred Volkmar of Yale University is not convinced; he claims that "There is unfortunately a sort of cottage industry of finding that everyone has Asperger's." == See also == Charles Darwin's illness List of people with epilepsy (includes notes on retrospective diagnosis and misdiagnosis of historical figures) Mental health of Jesus Paleopathology Samuel Johnson's health == References == == Further reading == Mackowiak, Philip A. (2007). Post-Mortem: Solving History's Great Medical Mysteries. The American College of Physicians. ISBN 978-1-930513-89-1. Historical Clinicopathological Conference
Wikipedia/Retrospective_diagnosis
Medicine, Science and the Law is a quarterly peer-reviewed medical journal covering forensic medicine and science. It was established in 1960 and was originally published by Sweet & Maxwell; it is now published by SAGE Publications. It has been the official journal of the British Academy of Forensic Sciences since the journal and Academy were both established. The editor-in-chief is Peter Vanezis (Barts and The London School of Medicine and Dentistry). According to the Journal Citation Reports, the journal has a 2016 impact factor of 0.689, ranking it 90th out of 147 journals in the category "Law" and 12th out of 15 journals in the category "Medicine, Legal". == References == == External links == Official website
Wikipedia/Medicine,_Science_and_the_Law
A controlled substance is generally a drug or chemical whose manufacture, possession and use are regulated by a government, such as illicitly used drugs or prescription medications that are designated by law. Some treaties, notably the Single Convention on Narcotic Drugs, the Convention on Psychotropic Substances, and the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances, provide internationally agreed-upon "schedules" of controlled substances, which have been incorporated into national laws; however, national laws usually significantly expand on these international conventions. Some precursor chemicals used for the production of illegal drugs are also controlled substances in many countries, even though they may lack the pharmacological effects of the drugs themselves. Substances are classified according to schedules and consist primarily of potentially psychoactive substances and anabolic steroids. The controlled substances do not include many prescription items such as antibiotics. == Laws and enforcement == In the United States, the Drug Enforcement Administration is the federal government agency responsible for suppressing illegal drug use and distribution by enforcing the Controlled Substances Act, which regulates both the drugs themselves and certain precursors. Some U.S. states have additional restrictions for substances which might or might not be regulated by the federal government. During the Obama Administration, the federal government also voluntarily suspended enforcement of federal laws restricting marijuana where people were operating in compliance with state law. Some states in the U.S. have statutes against health care providers self-prescribing and/or administering substances listed in the Controlled Substance Act schedules. This does not forbid licensed providers from self-prescribing medications not on the schedules. The term Controlled Drug (CD) is used in the United Kingdom for substances governed by the Misuse of Drugs Act 1971. Other national drug prohibition laws include the Controlled Drugs and Substances Act (Canada) and the Misuse of Drugs Act 1975 (New Zealand), among many others. Within Europe, controlled substance laws are legislated at the national level rather than by the EU itself, with significant variation between countries in which and how chemicals are classified as controlled. Only drug precursor laws are legislated for at the European level. == Use in research == A common misunderstanding amongst researchers is that most national laws allow the use of small amounts of a controlled substance for non-clinical / non-in vivo research without licences. A typical use case might be having a few milligrams or microlitres of a controlled substance within larger chemical collections (often tens of thousands of chemicals) for in vitro screening. Researchers often believe that there is some form of “research exemption” for such small amounts. This incorrect view may be further reinforced by R&D chemical suppliers, who often state, and ask scientists to confirm, that anything bought is for research use only. A further misconception is that controlled substance laws simply list a few hundred substances (e.g. MDMA, fentanyl, amphetamine, etc.) and that compliance can be achieved by checking a CAS number, chemical name or similar identifier. However, the reality is that most countries enact “generic statement” or “chemical space” laws, which aim to control all chemicals similar to the “named” substance.
These either provide detailed descriptions similar to Markush structures, or simply state that analogues are also controlled. In addition, control of most named substances is extended to control of all of their ethers, esters, salts and stereoisomers. Due to this complexity in legislation, the identification of controlled chemicals in research is often carried out computationally, either by in-house systems maintained by a company's sample logistics department or by the use of commercial software solutions (a toy illustration of such screening is sketched below). Automated systems are often required as many research operations have chemical collections running into tens of thousands of molecules at the 1–5 mg scale, which are likely to include controlled substances, especially within medicinal chemistry research. These may not have been controlled when created, but may subsequently have been declared controlled. === Known research exemptions === Switzerland: Has limited exemptions for some Directory E substances, but which substances are covered and what the exemption allows depends on the substance; for example, compounds similar to fentanyl allow for “Von der Kontrolle ausgenommen ist die industrielle und die wissenschaftliche Verwendung. Der private Gebrauch ist nicht von der Kontrolle ausgenommen”, or “Excluded from the control is the industrial and scientific use. Private use is not exempt from the control.” The exemption wording for cyclohexylphenols is “Cyclohexylphenole sind von der Kontrolle nach den Kapiteln 5 und 6 der Verordnung über die Betäubungsmittelkontrolle vom 25. Mai 2011 ausgenommen, wenn sie von Unternehmen mit einer Betriebsbewilligung für den Umgang mit kontrollierten Substanzen des Verzeichnisses e industriell eingesetzt werden. Für Substanzmengen bis zu 100 g benötigen diese Unternehmen keine Ein- oder Ausfuhrbewilligung.” or “Cyclohexylphenols are exempted from the control under Chapters 5 and 6 of the Narcotics Control Ordinance of 25 May 2011 if they are used industrially by undertakings holding an operating licence for the handling of controlled substances in Inventory e. For substance quantities of up to 100 g, these companies do not require an import or export licence.” In addition, import or export authorization is not required in the case of controlled substances for analytical purposes in concentrations up to 1 mg/ml (Art. 23, Abs. 2b, BetmKV). Further qualifications apply, e.g. yearly limits as well as individual shipment limits. United Kingdom: There are no specific research exemptions in the Misuse of Drugs Act 1971. However, the associated Misuse of Drugs Regulations 2001 does exempt products containing less than 1 mg of a controlled substance (1 µg for lysergide and derivatives) so long as a number of requirements are met, including that it cannot be recovered by readily applicable means, does not pose a risk to human health and is not meant for administration to a human or animal. Although this does at first seem to allow research use, in most circumstances the sample, by definition, is “recoverable”: in order to prepare it for use the sample is ‘recovered’ into an assay buffer or solvent such as DMSO or water. In 2017 the Home Office also confirmed that the 1 mg limit applies to the total of all preparations across the entire container in the case of sample microtitre plates. Given this, most companies and researchers choose not to rely on this exemption.
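As a purely illustrative sketch of the automated "chemical space" screening mentioned above, the following Python fragment uses the RDKit cheminformatics toolkit (assumed to be installed) to flag library members containing a generic substructure. The SMARTS pattern, compound names and SMILES strings are invented stand-ins and do not correspond to any actual legal definition of a controlled class.

# Toy substructure screen of a small compound library.
from rdkit import Chem

# Hypothetical generic pattern (a bare phenethylamine-like core).
GENERIC_PATTERN = Chem.MolFromSmarts("c1ccccc1CCN")

library = {
    "compound_A": "CC(N)Cc1ccccc1",              # amphetamine-like structure, should flag
    "compound_B": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",  # caffeine, should not flag
}

def flag_for_review(smiles: str) -> bool:
    """Return True if the molecule contains the generic core and needs compliance review."""
    mol = Chem.MolFromSmiles(smiles)
    return mol is not None and mol.HasSubstructMatch(GENERIC_PATTERN)

for name, smiles in library.items():
    status = "flag for compliance review" if flag_for_review(smiles) else "clear"
    print(f"{name}: {status}")

Production systems encode the Markush-style descriptions, salt/ester/stereoisomer extensions and per-country differences from the legislation itself; the point of the sketch is only that screening is a substructure problem, not a lookup of named substances.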
According to Home Office licensing, "University research departments generally do not require licences to possess and supply drugs in schedule 2 drug, schedule 3 drug, schedule 4 drug part I, part II and schedule 5, but they do require licences to produce any of those drugs and to produce, possess and/or supply drugs in schedule 1". United States of America: In the US no general research exemptions are known to exist, at least at the federal level under the Controlled Substances Act. Germany: The Gesetz über den Verkehr mit Betäubungsmitteln (Betäubungsmittelgesetz, BtMG; Law on the Traffic in Narcotic Drugs, Narcotics Act) has a partial exemption that might apply to certain research areas. For each schedule the act exempts preparations of the substances listed in that annex which, without being applied to or in the human or animal body, serve exclusively diagnostic or analytical purposes and whose content of any one narcotic does not exceed 0.001 per cent, or in which the substances are isotope-modified, or which are specifically excluded: "die Zubereitungen der in dieser Anlage aufgeführten Stoffe, wenn sie nicht a) ohne am oder im menschlichen oder tierischen Körper angewendet zu werden, ausschließlich diagnostischen oder analytischen Zwecken dienen und ihr Gehalt an einem oder mehreren Betäubungsmitteln jeweils 0,001 vom Hundert nicht übersteigt oder die Stoffe in den Zubereitungen isotopenmodifiziert oder b) besonders ausgenommen sind". The exact percentage varies for each schedule. It is also not clear whether the 0.001% limit allows the remainder to be an assay solvent or medium, or whether a licence is needed for the undiluted solid, e.g. 1 mg of sample before it is diluted. == References == == External links == Laws of New York State, §3306: Schedules of controlled substances. (under PBH Public Health, Article 33, Title I) Texas State Controlled Substance Act (Health and Safety Code, Title 6, Chapter 481)
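As a rough illustration of the computational screening of compound collections described in the research-use section above, the sketch below flags molecules that either match a generic substructure pattern or are structurally similar to a named substance. It assumes the open-source RDKit toolkit; the SMILES strings, SMARTS pattern and similarity threshold are hypothetical placeholders, not definitions drawn from any actual legislation or commercial compliance product.

```python
# Minimal sketch of a "generic statement" screen over a compound collection.
# Assumes the open-source RDKit toolkit; the pattern, structures and threshold
# below are hypothetical placeholders, not legal definitions.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Hypothetical "named substance" and a hypothetical generic (Markush-like) pattern.
NAMED_SUBSTANCE = Chem.MolFromSmiles("CCc1ccccc1")             # placeholder structure
GENERIC_PATTERN = Chem.MolFromSmarts("c1ccccc1CC[N;!$(N=*)]")  # placeholder SMARTS

named_fp = AllChem.GetMorganFingerprintAsBitVect(NAMED_SUBSTANCE, 2, nBits=2048)

def flag_for_review(smiles: str, sim_threshold: float = 0.7) -> bool:
    """Return True if the compound matches the generic pattern or is
    structurally similar to the named substance and so needs expert review."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False  # unparsable entries would be handled elsewhere
    if mol.HasSubstructMatch(GENERIC_PATTERN):
        return True
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return DataStructs.TanimotoSimilarity(named_fp, fp) >= sim_threshold

collection = ["CCc1ccccc1", "CCO", "NCCc1ccccc1"]  # toy screening collection
flagged = [s for s in collection if flag_for_review(s)]
print(flagged)
```

In practice a flagged compound would be referred to a legal or regulatory specialist rather than classified automatically, since the statutory "chemical space" definitions are rarely reducible to a single pattern.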
Wikipedia/Controlled_substance
Crime science is the study of crime in order to find ways to prevent it. It is distinguished from criminology in that it is focused on how crime is committed and how to reduce it, rather than on who committed it. It is multidisciplinary, notably recruiting scientific methodology rather than relying on social theory. == Definition == Crime science involves the application of scientific methodologies to prevent or reduce social disorder and find better ways to prevent, detect, and solve crimes. Crime science studies crime-related events and how those events arise, or can be prevented, by attempting to understand the temptations and opportunities which provoke or allow offending, and which affect someone's choice to offend on a particular occasion, rather than assuming the problem is simply about bad people versus good people. It is an empirical approach, often involving observational studies, quasi-experiments and randomised controlled trials, that seeks to identify patterns of offending behaviour and the factors that influence criminal offending and crime. It is a multi-disciplinary approach that involves practitioners from many fields, including policing, geography, urban development, mathematics, statistics, industrial design, construction engineering, the physical sciences, the medical sciences, economics, computer science, psychology, sociology, criminology, forensics, law, and public management. == History == Crime science was conceived by the British broadcaster Nick Ross in the late 1990s (with encouragement from the then Commissioner of the Metropolitan Police, Sir John Stevens, and Professor Ken Pease) out of concern that traditional criminology and orthodox political discourse were doing little to influence the ebb and flow of crime (e.g. Ross: Police Foundation Lecture, London, 11 July 2000 (jointly with Sir John Stevens); Parliamentary and Scientific Committee, 22 March 2001; Barlow Lecture, UCL, 6 April 2005). Ross described crime science as "examining the chain of events that leads to crime in order to cut the weakest link" (Royal Institution Lecture, 9 May 2002). == Jill Dando Institute of Crime Science == The first incarnation of crime science was the founding, also by Ross, of the Jill Dando Institute of Crime Science (JDI) at University College London in 2001. In order to reflect its broad disciplinary base, and its departure from the sociological (and often politicised) brand of criminology, the Institute was established in the Engineering Sciences Faculty, with growing ties to the physical sciences such as physics and chemistry, while also drawing on the fields of statistics, environmental design, psychology, forensics, policing, economics and geography. The JDI grew rapidly and spawned a new Department of Security and Crime Science, which itself developed into one of the largest departments of its type in the world. It has established itself as a world leader in crime mapping and in training crime analysts (civilian crime profilers who work for the police), and its Centre for the Forensic Sciences has been influential in debunking bad science in criminal detection. It established the world's first secure data lab for security and crime pattern analysis and appointed the world's first Professor of Future Crime, whose role is to horizon-scan to foresee and forestall tomorrow's crime challenges. The JDI also developed a Security Science Doctoral Research Training Centre (UCL SECReT), which was Europe's largest centre for doctoral training in security and crime science.
== Design Against Crime Research Centre == Another branch of crime science has grown from its combination with design science. At the Central Saint Martins College of Arts and Design a research centre was founded with the focus of studying how design could be used as a tool against crime: the Design Against Crime Research Centre. A number of practical theft-aware design practices have emerged there. Examples include chairs with a hanger that allows people to keep their bags within reach at all times, and foldable bicycles that can serve as their own lock by wrapping around static poles in the environment. == International Crime Science Network == An international Crime Science Network was formed in 2003, with support from the EPSRC. Since then the term crime science has been variously interpreted, sometimes with a different emphasis from Ross's original description published in 1999, and often favouring situational crime prevention (redesigning products, services and policies to remove opportunities, temptations and provocations and make detection more certain) rather than other forms of intervention. However, a common feature is a focus on delivering immediate reductions in crime. New crime science departments have been established at Waikato, Cincinnati, Philadelphia and elsewhere. == Growth of the Crime Science Field == The concept of crime science appears to be taking root more broadly with: The establishment of crime science departments at the University of Waikato in New Zealand, the University of Cincinnati in the US, and elsewhere. Crime science courses at several institutions, including Northumbria University in the UK, the University of Twente in the Netherlands, and Temple University, Philadelphia, in the US. A Crime Science Unit at DSTL, the research division of the UK Ministry of Defence. The term crime science increasingly being adopted by situational and experimental criminologists in the US and Australia. An annual Crime Science Network gathering in London which draws police and academics from across the world. A Springer open-access interdisciplinary journal devoted to crime science. Crime science increasingly being cited in criminology textbooks and journal papers (sometimes claimed as a new branch of criminology, and sometimes reviled as anti-criminology). A move in traditional criminology towards the aims originally set out by Ross in his concern for a more evidence-based, scientific approach to crime reduction. Crime science featuring in several learned journals in other disciplines (such as a special issue of the European Journal of Applied Mathematics devoted to "crime modelling"). == See also == Broken windows theory Parable of the broken window Crime prevention through environmental design Crime statistics Crime scene investigation Forensic science Evidence-based policing Intelligence-led policing Jill Dando Jill Dando Institute Community policing Peelian principles and Policing by consent Police science Predictive policing Preventive police Proactive policing Problem-oriented policing Recidivism == References == == Bibliography == Junger, Marianne; Laycock, Gloria; Hartel, Pieter; Ratcliffe, Jerry (11 June 2012). "Crime science: editorial statement". Crime Science. 1 (1): 1.1 – 1.3. doi:10.1186/2193-7680-1-1. ISSN 2193-7680. Hartel, Pieter H.; Junger, Marianne; Wieringa, Roelf J. (October 2010). "Cyber-crime Science = Crime Science + Information Security". CTIT Technical Report Series (Technical Report TR-CTIT-10-34).
Enschede.: Centre for Telematics and Information Technology (CTIT), University of Twente. ISSN 1381-3625. Retrieved 21 January 2021. Pease, Ken (22 February 2010). "Crime science". In Shoham, Shlomo Giora; Knepper, Paul; Kett, Martin (eds.). International Handbook of Criminology (1 ed.). Boca Raton: Routledge. pp. 3–23. doi:10.1201/9781420085525. ISBN 978-0-429-25000-2. Retrieved 21 January 2021. Clarke, Ronald V. (31 March 2011). "Crime Science". In McLaughlin, Eugene; Newburn, Tim (eds.). The SAGE Handbook of Criminological Theory (Print, Online). SAGE Publications Ltd. pp. 271–283. doi:10.4135/9781446200926.n15. ISBN 978-1-4129-2038-4. Retrieved 22 January 2021. Guerette, Rob T.; Bowers, Kate J. (November 2009). "Assessing the Extent of Crime Displacement and Diffusion of Benefits: A Review of Situational Crime Prevention Evaluations*". Criminology. 47 (4): 1331–1368. doi:10.1111/j.1745-9125.2009.00177.x. ISSN 1745-9125. Retrieved 21 January 2021. Willison, Robert; Siponen, Mikko (1 September 2009). "Overcoming the insider: reducing employee computer crime through Situational Crime Prevention". Communications of the ACM. 52 (9): 133–137. doi:10.1145/1562164.1562198. ISSN 0001-0782. S2CID 2987733. Retrieved 21 January 2021. Cox, Karen (1 July 2008). "The application of crime science to the prevention of medication errors". British Journal of Nursing. 17 (14): 924–927. doi:10.12968/bjon.2008.17.14.30662. ISSN 0966-0461. PMID 18935846. Retrieved 21 January 2021. Tilley, Nick; Laycock, Gloria (2007). "From Crime Prevention to Crime Science". In Farrell, Graham; Bowers, Kate J.; Johnson, Shane D.; Townsley, Mike (eds.). Imagination for crime prevention : essays in honour of Ken Pease (Hardcover, Paperback). Monsey, New York: Criminal Justice Press. ISBN 978-1-881798-71-2. Retrieved 22 January 2021. Laycock, Gloria (2005). "Chapter 1: Defining Crime Science.". In Smith, Melissa J.; Tilley, Nick (eds.). Crime science: new approaches to preventing and detecting crime. Crime Science Series (1 ed.). Uffculme, UK: Willan Publishing. pp. 3–24. ISBN 1-843-92090-5. Clarke, Ronald V. (1997). "Part One: Introduction". In Clarke, Ronald V. (ed.). Situational Crime Prevention Successful Case Studies (PDF) (2 ed.). Guilderland, New York: Harrow and Heston. ISBN 0-911577-38-6. Archived from the original (PDF) on 26 June 2010. Retrieved 21 January 2021. == External links == Jill Dando Institute of Crime Science, U.K. Center for Problem-Oriented Policing, U.S.A. International Centre for the Prevention of Crime, CA Security Science Doctoral Research Training Centre New Zealand Institute for Security and Crime Science, N.Z. Cyber-crime Science
Wikipedia/Crime_science
Fractography is the study of the fracture surfaces of materials. Fractographic methods are routinely used to determine the cause of failure in engineering structures, especially in product failure and the practice of forensic engineering or failure analysis. In materials science research, fractography is used to develop and evaluate theoretical models of crack growth behavior. One of the aims of fractographic examination is to determine the cause of failure by studying the characteristics of a fractured surface. Different types of crack growth (e.g. fatigue, stress corrosion cracking, hydrogen embrittlement) produce characteristic features on the surface, which can be used to help identify the failure mode. The overall pattern of cracking can be more important than a single crack, however, especially in the case of brittle materials like ceramics and glasses. == Usage == Fractography is a widely used technique in forensic engineering, forensic materials engineering and fracture mechanics to understand the causes of failures and also to verify theoretical failure predictions against real-life failures. It is of use in forensic science for analysing broken products which have been used as weapons, such as broken bottles. Thus a defendant might claim that a bottle was faulty and broke accidentally when it impacted a victim of an assault. Fractography could show the allegation to be false and that considerable force was needed to smash the bottle before the broken end was used as a weapon to deliberately attack the victim. Bullet holes in glass windscreens or windows can also indicate the direction of impact and the energy of the projectile. In these cases, the overall pattern of cracking is vital to reconstructing the sequence of events, rather than the specific characteristics of a single crack. Fractography can determine whether the cause of a train derailment was a faulty rail, or whether the wing of a plane had fatigue cracks before a crash. Fractography is also used in materials research, since fracture properties can correlate with other properties and with the structure of materials. == Feature identification == === Origin === An important aim of fractography is to establish and examine the origin of cracking, as examination at the origin may reveal the cause of crack initiation. Initial fractographic examination is commonly carried out on a macro scale utilising low-power optical microscopy and oblique lighting techniques to identify the extent of cracking, possible modes and likely origins. Optical microscopy or macrophotography are often enough to pinpoint the nature of the failure and the causes of crack initiation and growth if the loading pattern is known. Common features that may cause crack initiation are inclusions, voids or empty holes in the material, contamination, and stress concentrations. === Fatigue crack growth === The image of a broken crankshaft shows the component failed from a surface defect near the bulb at lower centre. The semi-circular marks near the origin indicate a crack growing up into the bulk material by a process known as fatigue. The crankshaft also shows hachures, which are the lines on fracture surfaces that can be traced back to the origin of the fracture. Some modes of crack growth can leave characteristic marks on the surface that identify the mode of crack growth and origin on a macro scale, e.g. beachmarks or striations on fatigue cracks. == Microscopy == Microscopes can be used to determine the initiation point and the mechanism that caused crack growth.
The information can be obtained from images of the fracture surface, known as fractographs, and used in constructing diagrams. A schematic fracture surface map can be used to isolate and identify the features on the surface which show how the product failed. Such a map can be a valuable way of presenting information, showing clearly how a crack was initiated and how it grew with time. === USB Microscopy === USB microscopes are especially useful for examining fracture surface features since they are small enough to be hand-held. A variety of camera sizes and resolutions are available commercially at low cost. The camera cable plugs into the computer via a USB connector, and most such devices provide illumination at the camera by LED lights. === Scanning electron microscopy === In many cases, fractography requires examination at a finer scale, which is usually carried out in a scanning electron microscope or SEM. The resolution is much higher than that of the optical microscope, although samples are examined in a partial vacuum and colour is absent. Improved SEMs now allow examination at near-atmospheric pressures, allowing examination of sensitive materials such as those of biological origin. The SEM is especially useful when combined with energy-dispersive X-ray spectroscopy or EDX, which can be performed in the microscope, so very small areas of the sample can be analysed for their elemental composition. == Example == === Breast implant === A cusp is formed where brittle cracks meet, as shown in the picture of a failed catheter (Cp). The cusp was formed by brittle failure of the silicone rubber catheter on a breast implant. The origin of the cracks is at the shoulder at the left-hand side. Identifying such features allows a fracture surface map to be made of the surface being studied. The implant failed because of overload, all the imposed loads being concentrated at the connection between the catheter and the bag holding salt solution. As a result, the patient reported loss of fluid from the implant, and it was extracted surgically and replaced. In the case of the failed breast implant catheter, the crack path was very simple, but the cause more subtle. Further scanning electron microscopy showed numerous microcracks between the bag and the catheter, indicating that the adhesive bond between the two components had failed prematurely, perhaps through faulty manufacture. The material of construction of both bag and catheter, silicone rubber, is a physically weak elastomer, and product design must allow for the low tear or shear strength of the material. === Maritime Patrol Aircraft === A non-critical crack occurred in the fastener hole of a lower wing plank. The plank was made from a 3.2 mm thick AA7075-T6 aluminium alloy. The time at which the crack was detected, together with the aircraft's counting g-meter, allowed investigators to determine the loads the aircraft had experienced in service. The cracks examined in an SEM showed evidence and patterns of fatigue. The cyclic loading and fatigue appeared to have grown progressively worse, with some cracks being large and others small in length and width, indicating occasional loads stronger than 2 g. The g-meter showed that the aircraft had flown 2,500 flights, with the acceleration occasionally exceeding 2 g, more than the maximum advertised by the manufacturer. The conclusion was that fatigue and cracks should be inspected regularly on old or frequently used aircraft.
The study also found novel ways to use quantitative fractography on aircraft, comparing the load history (in this case from the g-meter) with records of the alloy experiencing fatigue in a laboratory setting under different pressures, cycles, and temperatures. The study used the database of cracks to create a model that predicts forces and crack progression. == See also == Conchoidal fracture Fatigue (material) Failure analysis Forensic engineering Forensic materials engineering Fracture Forensic polymer engineering Forensic science == References == Lewis, Peter Rhys, Reynolds, K, and Gagg, C, Forensic Materials Engineering: Case studies, CRC Press (2004). Mills, Kathleen, Fractography, American Society of Metals (ASM) handbook, volume 12 (1991). N.T. Goldsmith, R.J.H. Wanhill, L. Molent, Quantitative fractography of fatigue and an illustrative case study, Engineering Failure Analysis, volume 96 (February 2019), pages 426–435.
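As a rough sketch of the kind of crack-growth modelling that quantitative fractography draws on, the example below integrates the Paris law, da/dN = C(ΔK)^m, for a small edge crack under constant-amplitude loading. The material constants, stress range and geometry factor are assumed illustrative values, not data from the aircraft study above.

```python
# Minimal sketch of fatigue crack growth via the Paris law, da/dN = C * (dK)^m.
# C, m, the stress range and the geometry factor Y are assumed placeholder
# values for illustration, not data from the study cited above.
import math

C = 1e-11             # Paris coefficient, m/cycle per (MPa*sqrt(m))^m (assumed)
m = 3.0               # Paris exponent (assumed)
Y = 1.12              # geometry factor for an edge crack, taken as constant (assumed)
stress_range = 100.0  # MPa, constant-amplitude loading (assumed)

def cycles_to_grow(a0: float, a_crit: float, max_cycles: int = 10_000_000) -> int:
    """Integrate crack length cycle by cycle until it reaches a_crit (metres).
    Returns the number of cycles consumed, or max_cycles if never reached."""
    a = a0
    for n in range(max_cycles):
        dK = Y * stress_range * math.sqrt(math.pi * a)  # stress intensity range
        a += C * dK ** m                                # Paris-law increment per cycle
        if a >= a_crit:
            return n + 1
    return max_cycles

# Example: grow a 0.5 mm crack to 5 mm under the assumed loading.
print(cycles_to_grow(a0=0.0005, a_crit=0.005))
```

In a real investigation the constants would come from laboratory fatigue data for the specific alloy, and the predicted growth would be compared against striation spacing and the recorded load history.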
Wikipedia/Fractography
Science News (SN) is an American monthly magazine devoted to articles about new scientific and technical developments, typically gleaned from recent scientific and technical journals. The periodical has been described as having a scope across "all sciences" and as having "up to date" coverage. == History == Science News has been published since 1922 by the Society for Science & the Public, a non-profit organization founded by E. W. Scripps in 1920. American chemist Edwin Slosson served as the publication's first editor. From 1922 to 1966, it was called Science News Letter. The title was changed to Science News with the March 12, 1966, issue (vol. 89, no. 11). Tom Siegfried was the editor from 2007 to 2012. In 2012, Siegfried stepped down, and Eva Emerson became the Editor in Chief of the magazine. In 2017, Eva Emerson stepped down to become the editor of a new digital magazine, Annual Reviews. On February 1, 2018, Nancy Shute became the Editor in Chief of the magazine. In April 2008, the magazine changed from a weekly format to a biweekly format, and the website was redeployed. The April 12 issue (Vol.173 #15) was the last weekly issue. The first biweekly issue (Vol.173 #16) was dated May 10 and featured a new design. The 4-week break between the last weekly issue and the first biweekly issue was explained in the Letter from the Publisher (p. 227) in the April 12 issue. In January 2025, the magazine began publishing on a monthly basis, with significant changes to style, format, and physical aspects. == Departments == The articles of the magazine are placed under "News": Life Matter and Energy Atom and Cosmos Body and Brain Earth Genes and Cells The articles featured on the magazine's cover are placed under "Features". The departments that remain constant from issue to issue are: Editor's Note—A column written by Eva Emerson, the magazine's editor-in-chief, that usually highlights the current issue's prime topics. Notebook—A page that includes several sections: Say What?—A definition and description of a scientific term. 50 Years Ago—An excerpt from an older issue of the magazine. Mystery Solved—An explanation of the science underlying everyday life. SN Online—Excerpts from articles published online. How Bizarre...—An odd or interesting fact that may not be well known to the magazine's audience. Reviews and Previews—A discussion of upcoming and recently released books, movies and services. Feedback—Letters from readers commenting on the recent Science News articles. Comment—An interview with a researcher. == See also == Institute for Nonprofit News (member) == References == == External links == Science News, the magazine's website Science News Magazine Bookshop, appears to be an outlet for books reviewed in the magazine
Wikipedia/Science_News
Science & Justice is a peer-reviewed scientific journal of forensics published by Elsevier on behalf of the Forensic Science Society and the International Society for Forensic Genetics. The journal was established in 1960 as the Journal of the Forensic Science Society and obtained its current name in 1995. One notable article was an analysis of the assassination of John F. Kennedy, which disputed the conclusion of the 1982 United States National Academy of Sciences report that the House Select Committee on Assassinations finding of a fourth shot in acoustical evidence was incorrect. A later article re-analyzed the acoustic synchronization evidence, rebutting this argument as well as correcting errors in the 1982 report, while supporting its finding that the sounds alleged to be gunshots occurred about a minute after the assassination. Follow-up Science & Justice articles have been published, too. == References == == External links == Official website Forensic Science Society International Society for Forensic Genetics
Wikipedia/Science_&_Justice
A polygraph, often incorrectly referred to as a lie detector test, is a pseudoscientific device or procedure that measures and records several physiological indicators such as blood pressure, pulse, respiration, and skin conductivity while a person is asked and answers a series of questions. The belief underpinning the use of the polygraph is that deceptive answers will produce physiological responses that can be differentiated from those associated with non-deceptive answers; however, there are no specific physiological reactions associated with lying, making it difficult to identify factors that separate those who are lying from those who are telling the truth. In some countries, polygraphs are used as an interrogation tool with criminal suspects or candidates for sensitive public or private sector employment. Some United States law enforcement and federal government agencies, as well as many police departments, use polygraph examinations to interrogate suspects and screen new employees. Within the US federal government, a polygraph examination is also referred to as a psychophysiological detection of deception examination. Assessments of polygraphy by scientific and government bodies generally suggest that polygraphs are highly inaccurate, may easily be defeated by countermeasures, and are an imperfect or invalid means of assessing truthfulness. A comprehensive 2003 review by the National Academy of Sciences of existing research concluded that there was "little basis for the expectation that a polygraph test could have extremely high accuracy." The American Psychological Association states that "most psychologists agree that there is little evidence that polygraph tests can accurately detect lies." For this reason, the use of polygraphs to detect lies is considered a form of pseudoscience, or junk science. == Testing procedure == The examiner typically begins polygraph test sessions with a pre-test interview to gain some preliminary information which will later be used to develop diagnostic questions. Then the tester will explain how the polygraph is supposed to work, emphasizing that it can detect lies and that it is important to answer truthfully. Then a "stim test" is often conducted: the subject is asked to deliberately lie and then the tester reports that he was able to detect this lie. Guilty subjects are likely to become more anxious when they are reminded of the test's validity. However, there are risks of innocent subjects being equally or more anxious than the guilty. Then the actual test starts. Some of the questions asked are "irrelevant" ("Is your name Fred?"), others are "diagnostic" questions, and the remainder are the "relevant questions" that the tester is really interested in. The different types of questions alternate. The test is passed if the physiological responses to the diagnostic questions are larger than those during the relevant questions. Criticisms have been given regarding the validity of the administration of the Control Question Technique. The CQT may be vulnerable to being conducted in an interrogation-like fashion. This kind of interrogation style would elicit a nervous response from innocent and guilty suspects alike. There are several other ways of administering the questions. An alternative is the Guilty Knowledge Test (GKT), or the Concealed Information Test, which is used in Japan. The administration of this test is given to prevent potential errors that may arise from the questioning style. 
The test is usually conducted by a tester with no knowledge of the crime or circumstances in question. The administrator tests the participant on their knowledge of the crime that would not be known to an innocent person. For example: "Was the crime committed with a .45 or a 9 mm?" The questions are in multiple choice and the participant is rated on how they react to the correct answer. If they react strongly to the guilty information, then proponents of the test believe that it is likely that they know facts relevant to the case. This administration is considered more valid by supporters of the test because it contains many safeguards to avoid the risk of the administrator influencing the results. == Effectiveness == Assessments of polygraphy by scientific and government bodies generally suggest that polygraphs are inaccurate, may be defeated by countermeasures, and are an imperfect or invalid means of assessing truthfulness. Despite claims that polygraph tests are between 80% and 90% accurate by advocates, the National Research Council has found no evidence of effectiveness. In particular, studies have indicated that the relevant–irrelevant questioning technique is not ideal, as many innocent subjects exert a heightened physiological reaction to the crime-relevant questions. The American Psychological Association states "Most psychologists agree that there is little evidence that polygraph tests can accurately detect lies." In 2002, a review by the National Research Council found that, in populations "untrained in countermeasures, specific-incident polygraph tests can discriminate lying from truth telling at rates well above chance, though well below perfection". The review also warns against generalization from these findings to justify the use of polygraphs—"polygraph accuracy for screening purposes is almost certainly lower than what can be achieved by specific-incident polygraph tests in the field"—and notes some examinees may be able to take countermeasures to produce deceptive results. In the 1998 US Supreme Court case United States v. Scheffer, the majority stated that "There is simply no consensus that polygraph evidence is reliable [...] Unlike other expert witnesses who testify about factual matters outside the jurors' knowledge, such as the analysis of fingerprints, ballistics, or DNA found at a crime scene, a polygraph expert can supply the jury only with another opinion." The Supreme Court summarized their findings by stating that the use of polygraph was "little better than could be obtained by the toss of a coin." In 2005, the 11th Circuit Court of Appeals stated that "polygraphy did not enjoy general acceptance from the scientific community". In 2001, William Iacono, Professor of Psychology and Neuroscience at the University of Minnesota, concluded: Although the CQT [Control Question Test] may be useful as an investigative aid and tool to induce confessions, it does not pass muster as a scientifically credible test. CQT theory is based on naive, implausible assumptions indicating (a) that it is biased against innocent individuals and (b) that it can be beaten simply by artificially augmenting responses to control questions. Although it is not possible to adequately assess the error rate of the CQT, both of these conclusions are supported by published research findings in the best social science journals (Honts et al., 1994; Horvath, 1977; Kleinmuntz & Szucko, 1984; Patrick & Iacono, 1991). 
Although defense attorneys often attempt to have the results of friendly CQTs admitted as evidence in court, there is no evidence supporting their validity and ample reason to doubt it. Members of scientific organizations who have the requisite background to evaluate the CQT are overwhelmingly skeptical of the claims made by polygraph proponents. Polygraphs measure arousal, which can be affected by anxiety, anxiety disorders such as post-traumatic stress disorder (PTSD), nervousness, fear, confusion, hypoglycemia, psychosis, depression, substance-induced states (nicotine, stimulants), substance-withdrawal states (alcohol withdrawal) or other emotions; polygraphs do not measure "lies". A polygraph cannot differentiate between anxiety caused by dishonesty and anxiety caused by something else. Since the polygraph does not measure lying, the Silent Talker Lie Detector inventors expected that adding a camera to film microexpressions would improve the accuracy of the evaluators. This did not happen in practice, according to an article in The Intercept. === US Congress Office of Technology Assessment === In 1983, the US Congress Office of Technology Assessment published a review of the technology and found that there was only limited scientific evidence for establishing the validity of polygraph testing. Even where the evidence seems to indicate that polygraph testing detects deceptive subjects better than chance, significant error rates are possible, and examiner and examinee differences and the use of countermeasures may further affect validity. === National Academy of Sciences === In 2003, the National Academy of Sciences (NAS) issued a report entitled "The Polygraph and Lie Detection". The NAS found that "overall, the evidence is scanty and scientifically weak", concluding that 57 of the approximately 80 research studies that the American Polygraph Association relied on to reach their conclusions were significantly flawed. These studies did show that specific-incident polygraph testing, in a person untrained in counter-measures, could discern the truth at "a level greater than chance, yet short of perfection". However, due to several flaws, the levels of accuracy shown in these studies "are almost certainly higher than actual polygraph accuracy of specific-incident testing in the field". By adding a camera, the Silent Talker Lie Detector attempted to give more data to the evaluator by providing information about microexpressions. However, adding the Silent Talker camera did not improve lie detection and was very expensive and cumbersome to include, according to an article in The Intercept. When polygraphs are used as a screening tool (in national security matters and for law enforcement agencies, for example), the level of accuracy drops to the point that "Its accuracy in distinguishing actual or potential security violators from innocent test takers is insufficient to justify reliance on its use in employee security screening in federal agencies." The NAS concluded that the polygraph "may have some utility" but that there is "little basis for the expectation that a polygraph test could have extremely high accuracy". The NAS conclusions paralleled those of the earlier United States Congress Office of Technology Assessment report "Scientific Validity of Polygraph Testing: A Research Review and Evaluation".
Similarly, a report to Congress by the Moynihan Commission on Government Secrecy concluded that "The few Government-sponsored scientific research reports on polygraph validity (as opposed to its utility), especially those focusing on the screening of applicants for employment, indicate that the polygraph is neither scientifically valid nor especially effective beyond its ability to generate admissions". Despite the NAS finding of a "high rate of false positives," failures to expose individuals such as Aldrich Ames and Larry Wu-Tai Chin, and other inabilities to show a scientific justification for the use of the polygraph, it continues to be employed. == Countermeasures == Several proposed countermeasures designed to pass polygraph tests have been described. There are two major types of countermeasures: "general state" (intending to alter the physiological or psychological state of the subject during the test), and "specific point" (intending to alter the physiological or psychological state of the subject at specific periods during the examination, either to increase or decrease responses during critical examination periods). General state: asked how he passed the polygraph test, Central Intelligence Agency officer turned KGB mole Aldrich Ames explained that he sought advice from his Soviet handler and received the simple instruction to: "Get a good night's sleep, and rest, and go into the test rested and relaxed. Be nice to the polygraph examiner, develop a rapport, and be cooperative and try to maintain your calm". Additionally, Ames explained, "There's no special magic... Confidence is what does it. Confidence and a friendly relationship with the examiner... rapport, where you smile and you make him think that you like him". Specific point: other suggestions for countermeasures include for the subject to mentally record the control and relevant questions as the examiner reviews them before the interrogation begins. During the interrogation the subject is supposed to carefully control their breathing while answering the relevant questions, and to try to artificially increase their heart rate during the control questions, for example by thinking of something scary or exciting, or by pricking themselves with a pointed object concealed somewhere on the body. In this way the results will not show a significant reaction to any of the relevant questions. == Use == Law enforcement agencies and intelligence agencies in the United States are by far the biggest users of polygraph technology. In the United States alone most federal law enforcement agencies either employ their own polygraph examiners or use the services of examiners employed in other agencies. In 1978 Richard Helms, the eighth Director of Central Intelligence, stated: We discovered there were some Eastern Europeans who could defeat the polygraph at any time. Americans are not very good at it, because we are raised to tell the truth and when we lie it is easy to tell we are lying. But we find a lot of Europeans and Asiatics can handle that polygraph without a blip, and you know they are lying and you have evidence that they are lying. Susan McCarthy of Salon said in 2000 that "The polygraph is an American phenomenon, with limited use in a few countries, such as Canada, Israel and Japan." === Armenia === In Armenia, government administered polygraphs are legal, at least for use in national security investigations. 
The National Security Service (NSS), Armenia's primary intelligence service, requires polygraph examinations of all new applicants. === Australia === Polygraph evidence became inadmissible in New South Wales courts under the Lie Detectors Act 1983. Under the same act, it is also illegal to use polygraphs for the purpose of granting employment, insurance, financial accommodation, and several other purposes for which polygraphs may be used in other jurisdictions. === Canada === In Canada, in the 1987 decision of R v Béland, the Supreme Court of Canada rejected the use of polygraph results as evidence in court, finding that they were inadmissible. The polygraph is still used as a tool in the investigation of criminal acts and sometimes employed in the screening of employees for government organizations. In the province of Ontario, the Employment Standards Act, 2000 prohibits employers from asking or requiring employees to undergo a polygraph test. Police services are permitted to use polygraph tests as part of an investigation if the person consents. === Europe === In a majority of European jurisdictions, polygraphs are generally considered to be unreliable for gathering evidence, and are usually not used by local law enforcement agencies. Polygraph testing is widely seen in Europe as violating the right to remain silent. In England and Wales a polygraph test can be taken, but the results cannot be used in a court of law to prove a case. However, the Offender Management Act 2007 put in place an option to use polygraph tests to monitor serious sex offenders on parole in England and Wales; these tests became compulsory in 2014 for high-risk sexual offenders currently on parole in England and Wales. The Supreme Court of Poland declared on January 29, 2015, that the use of the polygraph in the interrogation of suspects is forbidden by the Polish Code of Criminal Procedure. Its use might be allowed, though, if the suspect has already been accused of a crime and consents to the use of a polygraph. Even then, the polygraph can never be used as a substitute for actual evidence. As of 2017, the justice ministries and supreme courts of both the Netherlands and Germany had rejected the use of polygraphs. According to the 2017 book Psychology and Law: Bridging the Gap by psychologists David Canter and Rita Žukauskienė, Belgium was the European country with the most prevalent use of polygraph testing by police, with about 300 polygraphs carried out each year in the course of police investigations. The results are not considered viable evidence in bench trials, but have been used in jury trials. In Lithuania, "polygraphs have been in use since 1992", with law enforcement utilizing the Event Knowledge Test (a "modification" of the Concealed Information Test) in criminal investigations. === India === In 2008, an Indian court adopted the Brain Electrical Oscillation Signature Profiling test as evidence to convict a woman who was accused of murdering her fiancé. It was the first time that the result of such a test was used as evidence in court. On May 5, 2010, the Supreme Court of India declared the use of narcoanalysis, brain mapping and polygraph tests on suspects to be illegal and against the constitution if conducted forcibly without consent. Article 20(3) of the Indian Constitution states: "No person accused of any offence shall be compelled to be a witness against himself." Polygraph tests are still legal if the defendant requests one.
=== Israel === The Supreme Court of Israel, in Civil Appeal 551/89 (Menora Insurance v. Jacob Sdovnik), ruled that the polygraph has not been recognized as a reliable device. In other decisions, polygraph results were ruled inadmissible in criminal trials. Polygraph results are only admissible in civil trials if the person being tested agrees to it in advance. === Philippines === The results of polygraph tests are inadmissible in court in the Philippines. The National Bureau of Investigation, however, uses polygraphs in aid of investigation. === United States === In 2018, Wired magazine reported that an estimated 2.5 million polygraph tests were given each year in the United States, with the majority administered to paramedics, police officers, firefighters, and state troopers. The average cost to administer the test is more than $700 and is part of a $2 billion industry. In 2007, polygraph testimony was admitted by stipulation in 19 states, and was subject to the discretion of the trial judge in federal court. The use of polygraph in court testimony remains controversial, although it is used extensively in post-conviction supervision, particularly of sex offenders. In Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993), the old Frye standard was lifted and all forensic evidence, including polygraph, had to meet the new Daubert standard in which "underlying reasoning or methodology is scientifically valid and properly can be applied to the facts at issue." While polygraph tests are commonly used in police investigations in the US, no defendant or witness can be forced to undergo the test unless they are under the supervision of the courts. In United States v. Scheffer (1998), the US Supreme Court left it up to individual jurisdictions whether polygraph results could be admitted as evidence in court cases. Nevertheless, it is used extensively by prosecutors, defense attorneys, and law enforcement agencies. In the states of Rhode Island, Massachusetts, Maryland, New Jersey, Oregon, Delaware and Iowa it is illegal for any employer to order a polygraph either as conditions to gain employment, or if an employee has been suspected of wrongdoing. The Employee Polygraph Protection Act of 1988 (EPPA) generally prevents employers from using lie detector tests, either for pre-employment screening or during the course of employment, with certain exemptions. As of 2013, about 70,000 job applicants are polygraphed by the federal government on an annual basis. In the United States, the State of New Mexico admits polygraph testing in front of juries under certain circumstances. In 2010 the NSA produced a video explaining its polygraph process. The video, ten minutes long, is titled "The Truth About the Polygraph" and was posted to the website of the Defense Security Service. Jeff Stein of The Washington Post said that the video portrays "various applicants, or actors playing them—it’s not clear—describing everything bad they had heard about the test, the implication being that none of it is true." AntiPolygraph.org argues that the NSA-produced video omits some information about the polygraph process; it produced a video responding to the NSA video. George Maschke, the founder of the website, accused the NSA polygraph video of being "Orwellian". The polygraph was invented in 1921 by John Augustus Larson, a medical student at the University of California, Berkeley and a police officer of the Berkeley Police Department in Berkeley, California. 
The polygraph was on the Encyclopædia Britannica 2003 list of greatest inventions, described as inventions that "have had profound effects on human life for better or worse." In 2013, the US federal government had begun indicting individuals who stated that they were teaching methods on how to defeat a polygraph test. During one of those investigations, upwards of 30 federal agencies were involved in investigations of almost 5000 people who had various degrees of contact with those being prosecuted or who had purchased books or DVDs on the topic of beating polygraph tests. == Security clearances == In 1995, Harold James Nicholson, a Central Intelligence Agency (CIA) employee later convicted of spying for Russia, had undergone his periodic five-year reinvestigation, in which he showed a strong probability of deception on questions regarding relationships with a foreign intelligence unit. This polygraph test later led to an investigation which resulted in his eventual arrest and conviction. In most cases, however, polygraphs are more of a tool to "scare straight" those who would consider espionage. Jonathan Pollard was advised by his Israeli handlers that he was to resign his job from American intelligence if he was ever told he was subject to a polygraph test. Likewise, John Anthony Walker was advised by his handlers not to engage in espionage until he had been promoted to the highest position for which a polygraph test was not required, to refuse promotion to higher positions for which polygraph tests were required, and to retire when promotion was mandated. In 1983, CIA employee Edward Lee Howard was dismissed when, during a polygraph screening, he truthfully answered a series of questions admitting to minor crimes such as petty theft and drug abuse. In retaliation for his perceived unjust punishment for minor offenses, he later sold his knowledge of CIA operations to the Soviet Union. Polygraph tests may not deter espionage. From 1945 to the present, at least six Americans have committed espionage while successfully passing polygraph tests. Notable cases of two men who created a false negative result with the polygraphs were Larry Wu-Tai Chin, who spied for China, and Aldrich Ames, who was given two polygraph examinations while with the CIA, the first in 1986 and the second in 1991, while spying for the Soviet Union/Russia. The CIA reported that he passed both examinations after experiencing initial indications of deception. According to a Senate investigation, an FBI review of the first examination concluded that the indications of deception were never resolved. Ana Belen Montes, a Cuban spy, passed a counterintelligence scope polygraph test administered by the US Defense Intelligence Agency (DIA) in 1994. Despite these errors, in August 2008, the DIA announced that it would subject each of its 5,700 prospective and current employees to polygraph testing at least once annually. This expansion of polygraph screening at DIA occurred while DIA polygraph managers ignored documented technical problems discovered in the Lafayette computerized polygraph system. The DIA uses computerized Lafayette polygraph systems for routine counterintelligence testing. The impact of the technical flaws within the Lafayette system on the analysis of recorded physiology and on the final polygraph test evaluation is currently unknown. 
In 2012, a McClatchy investigation found that the National Reconnaissance Office was possibly breaching ethical and legal boundaries by encouraging its polygraph examiners to extract personal and private information from US Department of Defense personnel during polygraph tests that purported to be limited in scope to counterintelligence matters. Allegations of abusive polygraph practices were brought forward by former NRO polygraph examiners. == Alternative tests == Most polygraph researchers have focused more on the exam's predictive value on a subject's guilt. However, there have been no empirical theories established to explain how a polygraph measures deception. A 2010 study indicated that functional magnetic resonance imaging (fMRI) may benefit in explaining the psychological correlations of polygraph exams. It could also explain which parts of the brain are active when subjects use artificial memories. Most brain activity occurs in both sides of the prefrontal cortex, which is linked to response inhibition. This indicates that deception may involve inhibition of truthful responses. Some researchers believe that reaction time (RT) based tests may replace polygraphs in concealed information detection. RT based tests differ from polygraphs in stimulus presentation duration and can be conducted without physiological recording as subject response time is measured via computer. However, researchers have found limitations to these tests as subjects voluntarily control their reaction time, deception can still occur within the response deadline, and the test itself lacks physiological recording. == History == Earlier societies utilized elaborate methods of lie detection which mainly involved torture. For instance, in the Middle Ages, boiling water was used to detect liars, as it was believed honest men would withstand it better than liars. Early devices for lie detection include an 1895 invention of Cesare Lombroso used to measure changes in blood pressure for police cases, a 1904 device by Vittorio Benussi used to measure breathing, the Mackenzie-Lewis Polygraph first developed by James Mackenzie in 1906 and an abandoned project by American William Moulton Marston which used blood pressure to examine German prisoners of war (POWs). Marston said he found a strong positive correlation between systolic blood pressure and lying. Marston wrote a second paper on the concept in 1915, when finishing his undergraduate studies. He entered Harvard Law School and graduated in 1918, re-publishing his earlier work in 1917. Marston's main inspiration for the device was his wife, Elizabeth Holloway Marston. "According to Marston’s son, it was his mother Elizabeth, Marston's wife, who suggested to him that 'When she got mad or excited, her blood pressure seemed to climb'" (Lamb, 2001). Although Elizabeth is not listed as Marston’s collaborator in his early work, Lamb, Matte (1996), and others refer directly and indirectly to Elizabeth's work on her husband's deception research. She also appears in a picture taken in his polygraph laboratory in the 1920s (reproduced in Marston, 1938). Despite his predecessors' contributions, Marston styled himself the "father of the polygraph". (Today he is often equally or more noted as the creator of the comic book character Wonder Woman and her Lasso of Truth, which can force people to tell the truth.) Marston remained the device's primary advocate, lobbying for its use in the courts. 
In 1938 he published a book, The Lie Detector Test, wherein he documented the theory and use of the device. In 1938 he appeared in advertising by the Gillette company claiming that the polygraph showed Gillette razors were better than the competition. A device recording both blood pressure and breathing was invented in 1921 by John Augustus Larson of the University of California and first applied in law enforcement work by the Berkeley Police Department under its nationally renowned police chief August Vollmer. Further work on this device was done by Leonarde Keeler. As Larson's protege, Keeler updated the device by making it portable and added the galvanic skin response to it in 1939. His device was then purchased by the FBI, and served as the prototype of the modern polygraph. Several devices similar to Keeler's polygraph version included the Berkeley Psychograph, a blood pressure-pulse-respiration recorder developed by C. D. Lee in 1936 and the Darrow Behavior Research Photopolygraph, which was developed and intended solely for behavior research experiments. A device which recorded muscular activity accompanying changes in blood pressure was developed in 1945 by John E. Reid, who claimed that greater accuracy could be obtained by making these recordings simultaneously with standard blood pressure-pulse-respiration recordings. == Society and culture == === Portrayals in media === Lie detection has a long history in mythology and fairy tales; the polygraph has allowed modern fiction to use a device more easily seen as scientific and plausible. Notable instances of polygraph usage include uses in crime and espionage themed television shows and some daytime television talk shows, cartoons and films. Numerous TV shows have been called Lie Detector or featured the device. The first Lie Detector TV show aired in the 1950s, created and hosted by Ralph Andrews. In the 1960s Andrews produced a series of specials hosted by Melvin Belli. In the 1970s the show was hosted by Jack Anderson. In early 1983 Columbia Pictures Television put on a syndicated series hosted by F. Lee Bailey. In 1998 TV producer Mark Phillips with his Mark Phillips Philms & Telephision put Lie Detector back on the air on the FOX Network—on that program Ed Gelb with host Marcia Clark questioned Mark Fuhrman about the allegation that he "planted the bloody glove". In 2005 Phillips produced Lie Detector as a series for PAX/ION; some of the guests included Paula Jones, Reverend Paul Crouch accuser Lonny Ford, Ben Rowling, Jeff Gannon and Swift Boat Vet, Steve Garner. In the UK, shows such as The Jeremy Kyle Show used polygraph tests extensively. The show was ultimately canceled when a participant committed suicide shortly after being polygraphed. The guest was slated by Kyle on the show for failing the polygraph, but no other evidence has come forward to prove any guilt. Producers later admitted in the inquiry that they were unsure on how accurate the tests performed were. In the Fox game show The Moment of Truth, contestants are privately asked personal questions a few days before the show while hooked to a polygraph. On the show they asked the same questions in front of a studio audience and members of their family. In order to advance in the game they must give a "truthful" answer as determined by the previous polygraph exam. Daytime talk shows, such as Maury Povich and Steve Wilkos, have used polygraphs to supposedly detect deception in interview subjects on their programs that pertain to cheating, child abuse, and theft. 
In episode 93 of the US science show MythBusters, the hosts attempted to fool the polygraph by using pain when answering truthfully, in order to test the notion that polygraphs interpret truthful and non-truthful answers as the same. They also attempted to fool the polygraph by thinking pleasant thoughts when lying and thinking stressful thoughts when telling the truth, to try to confuse the machine. However, neither technique was successful for a number of reasons. Michael Martin correctly identified each guilty and innocent subject. Martin suggested that when conducted properly, polygraphs are correct 98% of the time, but no scientific evidence has been offered for this. The history of the polygraph is the subject of the documentary film The Lie Detector, which first aired on American Experience on January 3, 2023. === Hand-held lie detector for US military === A hand-held lie detector is being deployed by the US Department of Defense according to a report in 2008 by investigative reporter Bill Dedman of NBC News. The Preliminary Credibility Assessment Screening System, or PCASS, captures less physiological information than a polygraph, and uses an algorithm, not the judgment of a polygraph examiner, to render a decision whether it believes the person is being deceptive or not. The device was first used in Afghanistan by US Army troops. The Department of Defense ordered its use be limited to non-US persons, in overseas locations only. === Notable cases === Polygraphy has been faulted for failing to trap known spies such as double-agent Aldrich Ames, who passed two polygraph tests while spying for the Soviet Union. Ames failed several tests while at the CIA that were never acted on. Other spies who passed the polygraph include Karl Koecher, Ana Montes, and Leandro Aragoncillo. CIA spy Harold James Nicholson failed his polygraph examinations, which aroused suspicions that led to his eventual arrest. Polygraph examination and background checks failed to detect Nada Nadim Prouty, who was not a spy but was convicted for improperly obtaining US citizenship and using it to obtain a restricted position at the FBI. The polygraph also failed to catch Gary Ridgway, the "Green River Killer". Another suspect allegedly failed a given lie detector test, whereas Ridgway passed. Ridgway passed a polygraph in 1984; he confessed almost 20 years later when confronted with DNA evidence. Conversely, innocent people have been known to fail polygraph tests. In Wichita, Kansas in 1986, Bill Wegerle was suspected of murdering his wife Vicki Wegerle because he failed two polygraph tests (one administered by the police, the other conducted by an expert that Wegerle had hired), although he was neither arrested nor convicted of her death. In March 2004, evidence surfaced connecting her death to the serial killer known as BTK, and in 2005 DNA evidence from the Wegerle murder confirmed that BTK was Dennis Rader, exonerating Wegerle. Prolonged polygraph examinations are sometimes used as a tool by which confessions are extracted from a defendant, as in the case of Richard Miller, who was persuaded to confess largely by polygraph results combined with appeals from a religious leader. In the Watts family murders, Christopher Watts failed one such polygraph test and subsequently confessed to murdering his wife. In the 2002 disappearance of seven-year-old Danielle van Dam of San Diego, police suspected neighbor David Westerfield; he became the prime suspect when he allegedly failed a polygraph test. 
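Purely as a numerical illustration of the relevant/control comparison described in the testing-procedure section above, and not as an endorsement of the method's validity, the toy sketch below labels a chart by comparing mean responses. All values and the decision margin are invented for illustration.

```python
# Toy numerical sketch of the comparison described in the testing-procedure
# section: responses to control/diagnostic questions are compared with
# responses to relevant questions. All numbers are made up; this is not a
# validated scoring method and says nothing about actual deception.
from statistics import mean

control_responses  = [6.1, 5.8, 6.4]   # arbitrary arousal scores, control questions
relevant_responses = [5.2, 5.5, 5.0]   # arbitrary arousal scores, relevant questions

def chart_outcome(control, relevant, margin=0.5):
    """Label the chart 'pass' if control responses exceed relevant ones by a
    margin, 'fail' if the reverse holds, otherwise 'inconclusive'."""
    diff = mean(control) - mean(relevant)
    if diff > margin:
        return "pass"
    if diff < -margin:
        return "fail"
    return "inconclusive"

print(chart_outcome(control_responses, relevant_responses))  # -> "pass" for these toy values
```

The arbitrariness of the scores and of the margin is, in effect, the point made throughout this article: the measured arousal has no specific link to deception, so any such comparison inherits that weakness.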
== See also == Bogus pipeline Cleve Backster Doug Williams (polygraph critic) Ecological fallacy Ronald Pelton Voice stress analysis P300 (neuroscience)#Applications == References == == Further reading == Aftergood, Steven (2000). "Essays on Science and Society: Polygraph Testing and the DOE National Laboratories". Science. 290 (5493): 939–940. doi:10.1126/science.290.5493.939. PMID 17749189. S2CID 153185280. Alder, Ken (2007). The Lie Detectors. New York: Free Press. ISBN 978-0-7432-5988-0. Bunn, Geoffrey C. The Truth Machine: A Social History of the Lie Detector (Johns Hopkins University Press; 2012) 256 pages Blinkhorn, S. (1988) "Lie Detection as a psychometric procedure" In "The Polygraph Test" (Gale, A. ed. 1988) 29–39. Cumming, Alfred (Specialist in Intelligence and National Security). "Polygraph Use by the Department of Energy: Issues for Congress." (Archive) Congressional Research Service. February 9, 2009. Harris, Mark (October 1, 2018). "The Lie Generator: Inside the Black Mirror World of Polygraph Job Screenings". Wired. Jones, Ishmael (2008). The Human Factor: Inside the CIA's Dysfunctional Intelligence Culture. New York: Encounter Books. ISBN 978-1-59403-382-7. Lykken, David (1998). A Tremor in the Blood. New York: Plenum Trade. ISBN 978-0-306-45782-1. Maschke, G.W. & Scalabrini, G.J. (2018) The Lie Behind the Lie Detector. 5th ed. Available on-line at Learn How to Pass (or Beat) a Polygraph Test. McCarthy, Susan. "Passing the polygraph." Salon. March 2, 2000. Roese, N. J.; Jamieson, D. W. (1993). "Twenty years of bogus pipeline research: A critical review and meta-analysis". Psychological Bulletin. 114 (2): 363–375. doi:10.1037/0033-2909.114.2.363. Sullivan, John (2007). Gatekeeper. Potomac Books Inc. ISBN 978-1-59797-045-7. Taylor, Marisa (Tish Wells contributed). "Feds expand polygraph screening, often seeking intimate facts." McClatchy. December 6, 2012. Woodrow, Michael J. "The Truth about the Psychophysiological Detection of Deception Examination 3rd Edition" Lulu Press. New York ISBN 978-1-105-89546-3 == External links == AntiPolygraph.org Archived 2019-09-05 at the Wayback Machine, a website critical of polygraph The Polygraph Museum Historical photographs and descriptions of polygraph instruments. The North American Polygraph and Psychophysiology: Disinterested, Uninterested, and Interested Perspectives by John J. Furedy, International Journal of Psychophysiology, Spring/Summer 1996 Trial By Ordeal? Polygraph Testing In Australia "Thought Wave Lie Detector Measures Current in Nerves" Popular Mechanics, July 1937 Mikkelson, David (July 11, 2011). "Next case on the Legal Colander". Snopes.
Wikipedia/Polygraph_test
Forensic psychotherapy is the application of psychological knowledge to the treatment of offender-patients who commit violent acts against themselves or others. This form of treatment allows a therapist to better understand the offender and their mental state. It gives the individual providing treatment the opportunity to examine whether the offender's criminal behavior was a conscious act, what exactly their association with violent behavior is, and what motives could have driven them. The discipline of forensic psychotherapy is one that requires the involvement of individuals other than simply the therapist and patient. A therapist may collaborate with other professionals, such as physicians, social workers, nurses and other psychologists, in order to best serve the offenders' needs. Treatment success depends on many factors, but ensuring that a systemic approach is taken and that everyone involved in the treatment process is well informed and supportive has typically proven most effective. In addition to group work, forensic psychotherapy may also involve therapeutic communities, individual interaction with victims as well as offenders, and family work. In order for this specialized therapy to be as effective as possible, it demands the compliance of not only the patient and therapist, but of the rest of society as well. The main focus of forensic psychotherapy is not to condone the acts of the offender, but to obtain a psychodynamic understanding of the offender in order to attempt to provide them with an effective form of treatment to help them take responsibility for any crimes committed and to prevent the perpetration of crimes by the offender in the future. Guidelines have been set to ensure proficiency in the field of forensic psychology. == Controversy Regarding Treatment == It has been difficult to demonstrate a clear link between psychological interventions and a successful reduction in offending. No intervention has brought about the complete eradication of crime in patients treated using this practice. At times this difficulty has contributed to a profound pessimism about the effectiveness of any form of treatment. This began in the United States of America, but pessimism regarding the effectiveness of treatment soon spread to the United Kingdom. This was said to have adversely affected the provision of rehabilitative treatments. The development of cognitive behavioral therapy made it possible to demonstrate an effect upon some attitudes and offending behaviors. Measuring these behaviors in controlled research studies led to the introduction of structured treatment programs in prisons across Canada, the United States, the United Kingdom and, more recently, mainland Europe. For a period of time, this brought benefits in the provision of resources, particularly in prison settings. However, there has been serious conflict as professionals compete for limited resources and one model claims superiority over another. It has remained difficult to establish with great certainty which methods, if any, are effective over a significant period of time. However, psychodynamic forensic psychotherapy has been shown to have some successful impact, as have therapeutic communities. == Role of Forensic Psychologists == Forensic psychology addresses both the criminal and civil sides of the justice system, while simultaneously encompassing the clinical and experimental aspects of psychology.
Forensic psychologists can receive training as either clinical psychologists or experimental psychologists, and will generally have one primary role in terms of employment. A large portion of forensic psychologists are treatment providers, who evaluate and provide some sort of psychological treatment or intervention. However, many individuals in this field engage in other roles that are more related to their specific interests and/or training. These secondary roles are often tied to the criminal justice system; for example, forensic psychologists will often serve as expert witnesses, called upon to testify in court about a topic in which they have specialized knowledge. Forensic psychologists often assume the role of evaluators, typically being asked to evaluate a criminal defendant's mental state. This is done to determine factors such as whether the defendant is competent to stand trial, whether the defendant poses a future risk, and what the defendant's mental state was at the time of the alleged offense. Much of the time, once an assessment has been made by the forensic psychologist, they are then asked to testify in court as an expert witness about their findings. Areas of concern include potential risk and confidentiality. == Settings == There are many different settings in which a forensic psychologist may work. An individual who has specialized knowledge of mental health, as well as the legal system, is a vital asset for the courts and the criminal justice system. Many forensic psychologists spend a significant portion of their time working in legal settings, but there are many other settings in which they can train and work. Police departments, research centers, hospitals, medical examiners' offices and universities also frequently employ forensic psychologists. In community settings, patients are managed by community forensic teams. == Forensic Psychotherapy == The aim of forensic psychotherapy is not only to understand the crime an individual has committed, but to understand the person as a whole within his/her environment. Forensic psychotherapy may involve group work, individual work, work with victims, and work with families, as well as within therapeutic communities. Working from the premise that the offender has a complex internal world which may be characterized by punitive and unreliable internal representations of paternal and other figures, psychotherapy can shed light on the unconscious impulses, conflicts, and primitive defense mechanisms involved in his or her destructive actions and "acting out". It helps to clarify the triggers for violent acts and the timing of those acts. Forensic psychotherapy aims to help the offender understand why they committed the act and take responsibility for it, with the aim of preventing future crimes. The intimacy and profound experience of therapy may enable an offender to reframe and restructure these harsh images, which tend to blunt sensitivities and, when projected out onto others, act as a rationale or driving force for criminal acting out. The patient may develop self-awareness and an awareness of the nature of their deeds, and ultimately be able to live a more adjusted life. The effectiveness of psychodynamic psychotherapy, as with other psychological therapies, is limited as far as behavioral change for antisocial personality or psychopathic offenders is concerned.
These two types of offenders comprise the primary diagnostic group found in forensic psychotherapy work. The evidence that is emerging suggests that a range and variety of treatments may be most helpful for such offenders. Treatment of high-risk offenders poses particular problems of perverse transference and countertransference, which can undermine and confound effective treatment, so such treatment is usually expected to be conducted by experienced practitioners who are well supported and supervised. == Controversy Regarding The Application of Forensic Psychotherapy == There have also been some controversies regarding the application of forensic psychotherapy within courts and its use during or after trials. Among clinicians there is a perception that judges see the analysis of unconscious motivations as simply diminishing the guilt of the offender and as a means of working around legal systems. Because of the scenarios in which forensic psychotherapy is utilized, the patient-offender will often face punishment for their crimes just as they are prepared to undergo proper treatment. Successful forensic psychotherapy can itself bring this about, leaving many patients without full treatment. Another controversy that affects the development of forensic psychotherapy is the public's perception of an offender, especially those involved in serious crimes such as pedophilia. It is common for trial outcomes to be viewed as black and white: the victim is good and the perpetrator is bad. Because of this preconception, any treatment for the perpetrator is generally looked down upon. It is not uncommon for people who were once victims to later become perpetrators. Another important fact to consider is that in the vast majority of child abuse cases the primary offender is a member of the victim's family or a close friend of the family. When forensic psychotherapists attempt to determine what causes in an offender's past could have led to their motivation for perpetrating a crime, they may be met with public scrutiny for appearing to sympathize with the offender. == Guidelines for Forensic Psychotherapy == In 1931, a group of individuals who established the Association for the Scientific Treatment of Delinquency and Crime developed forensic therapy at the Portman Clinic in London. Determined to influence and enhance the understanding of this method of treatment, practitioners formed the International Association for Forensic Psychotherapy (IAFP) in June 1991, and it is still active today. The main function of this association is to advance public understanding of forensic psychotherapy, including the forms of treatment and risk factors associated with this practice. This helps to promote the health not only of offenders, but of victims as well. The American Academy of Forensic Psychology and the American Psychology-Law Society published the Specialty Guidelines for Forensic Psychologists in 1991. The guidelines provide direction to forensic psychologists in identifying competent practice, practicing responsibly, establishing relationships with the parties involved and identifying issues. The APA also created guidelines in the 1990s for new forensic psychologists. In 1994 the Guidelines for Child Custody Evaluations in Divorce Proceedings were adopted by the APA Council of Representatives to promote proficiency. In 1998 the Guidelines for Psychological Evaluations in Child Protection Matters were adopted by the APA Council of Representatives as well.
Certification is done statewide and nationwide to ensure competence. More classes are being offered in forensic psychology, and more opportunities are available at the graduate and postgraduate levels. == References == == Further reading == Welldon, Estela (1993). "Forensic Psychotherapy and Group Analysis". Group Analysis. 26 (4): 487–502. doi:10.1177/0533316493264009. S2CID 144418101. == External links == International Association for Forensic Psychotherapy
Wikipedia/Forensic_psychotherapy
Recreational drug use is the use of one or more psychoactive drugs to induce an altered state of consciousness, either for pleasure or for some other casual purpose or pastime. When a psychoactive drug enters the user's body, it induces an intoxicating effect. Recreational drugs are commonly divided into three categories: depressants (drugs that induce a feeling of relaxation and calmness), stimulants (drugs that induce a sense of energy and alertness), and hallucinogens (drugs that induce perceptual distortions such as hallucination). In popular practice, recreational drug use is generally tolerated as a social behaviour, rather than perceived as the medical condition of self-medication. However, drug use and drug addiction are severely stigmatized everywhere in the world. Many people also use prescribed and controlled depressants such as opioids, opiates, and benzodiazepines. What controlled substances are considered generally unlawful to possess varies by country, but usually includes cannabis, cocaine, opioids, MDMA, amphetamine, methamphetamine, psychedelics, benzodiazepines, and barbiturates. As of 2015, it is estimated that about 5% of people worldwide aged 15 to 65 (158 million to 351 million) had used controlled drugs at least once. Common recreational drugs include caffeine, commonly found in coffee, tea, soft drinks, and chocolate; alcohol, commonly found in beer, wine, cocktails, and distilled spirits; nicotine, commonly found in tobacco, tobacco-based products, and electronic cigarettes; cannabis and hashish (with legality of possession varying inter/intra-nationally); and the controlled substances listed as controlled drugs in the Single Convention on Narcotic Drugs (1961) and the Convention on Psychotropic Substances (1971) of the United Nations (UN). Since the early 2000s, the European Union (EU) has developed several comprehensive and multidisciplinary strategies as part of its drug policy in order to prevent the diffusion of recreational drug use and abuse among the European population and raise public awareness on the adverse effects of drugs among all member states of the European Union, as well as conjoined efforts with European law enforcement agencies, such as Europol and EMCDDA, in order to counter organized crime and illegal drug trade in Europe. == Reasons for use == Many researchers have explored the etiology of recreational drug use. Some of the most common theories are: genetics, personality type, psychological problems, self-medication, sex, age, depression, curiosity, boredom, rebelliousness, a sense of belonging to a group, family and attachment issues, history of trauma, failure at school or work, socioeconomic stressors, peer pressure, juvenile delinquency, availability, historical factors, or socio-cultural influences. There has been no consensus on a single cause. Instead, experts tend to apply the biopsychosocial model. Any number of factors may influence an individual's drug use, as they are not mutually exclusive. Regardless of genetics, mental health, or traumatic experiences, social factors play a large role in the exposure to and availability of certain types of drugs and patterns of use. According to addiction researcher Martin A. Plant, some people go through a period of self-redefinition before initiating recreational drug use. They tend to view using drugs as part of a general lifestyle that involves belonging to a subculture that they associate with heightened status and the challenging of social norms. 
Plant states: "From the user's point of view there are many positive reasons to become part of the milieu of drug taking. The reasons for drug use appear to have as much to do with needs for friendship, pleasure and status as they do with unhappiness or poverty. Becoming a drug taker, to many people, is a positive affirmation rather than a negative experience". === Evolution === Anthropological research has suggested that humans "may have evolved to counter-exploit plant neurotoxins". The ability to use botanical chemicals to serve the function of endogenous neurotransmitters may have improved survival rates, conferring an evolutionary advantage. A typically restrictive prehistoric diet may have emphasized the apparent benefit of consuming psychoactive drugs, which had themselves evolved to imitate neurotransmitters. Chemical–ecological adaptations and the genetics of hepatic enzymes, particularly cytochrome P450, have led researchers to propose that "humans have shared a co-evolutionary relationship with psychotropic plant substances that is millions of years old." == Health risks == The severity of impact and type of risks that come with recreational drug use vary widely with the drug in question and the amount being used. There are many factors in the environment and within the user that interact with each drug differently. Alcohol is sometimes considered one of the most dangerous recreational drugs. Alcoholic drinks, tobacco products and other nicotine-based products (e.g., electronic cigarettes), and cannabis are regarded by various medical professionals as the most common and widespread gateway drugs. In the United States, Australia, and New Zealand, the general onset of drinking alcohol, tobacco smoking, cannabis smoking, and consumption of multiple drugs most frequently occurs during adolescence and in middle school and secondary school settings. Some scientific studies in the early 21st century found that a low to moderate level of alcohol consumption, particularly of red wine, might have substantial health benefits such as decreased risk of cardiovascular diseases, stroke, and cognitive decline. This claim has been disputed, specifically by British researcher David Nutt, professor of neuropsychopharmacology at the Imperial College London, who stated that studies showing benefits for "moderate" alcohol consumption in "some middle-aged men" lacked controls for the variable of what the subjects were drinking beforehand. Experts in the United Kingdom have suggested that some psychoactive drugs that may be causing less harm to fewer users (although they are also used less frequently in the first place) are cannabis, psilocybin mushrooms, LSD, and MDMA; however, these drugs have risks and side effects of their own. === Drug harmfulness === Drug harmfulness is defined as the degree to which a psychoactive drug has the potential to cause harm to the user and is measured in several ways, such as by addictiveness and the potential for physical harm. More objectively harmful drugs may be colloquially referred to as "hard drugs", and less harmful drugs as "soft drugs". The term "soft drug" is considered controversial by critics as it may imply the false belief that soft drugs cause lesser or insignificant harm. === Responsible use === Responsible drug use advocates that users should not take drugs at the same time as activities such as driving, swimming, operating machinery, or other activities that are unsafe without a sober state. 
Responsible drug use is emphasized as a primary prevention technique in harm-reduction drug policies. Harm-reduction policies were popularized in the late 1980s, although they began in the 1970s counter-culture, through cartoons explaining responsible drug use and the consequences of irresponsible drug use to users. Another issue is that the illegality of drugs causes social and economic consequences for users: the drugs may be "cut" with adulterants and the purity varies widely, making overdoses more likely. Legalization of drug production and distribution could reduce these and other dangers of illegal drug use. == Prevention == In efforts to curtail recreational drug use, governments worldwide introduced several laws prohibiting the possession of almost all varieties of recreational drugs during the 20th century. The "War on Drugs" promoted by the United States, however, is now facing increasing criticism. Evidence is insufficient to tell if behavioral interventions help prevent recreational drug use in children. One in four adolescents has used an illegal drug, and only one in ten of the adolescents who need addiction treatment receives some type of care. School-based programs are the most commonly used method for drug use education; however, the success rates of these intervention programs are highly dependent on the commitment of participants and are limited in general. == Demographics == === Australia === Alcohol is the most widely used recreational drug in Australia. 86.2% of Australians aged 12 years and over have consumed alcohol at least once in their lifetime, compared to 34.8% of Australians aged 12 years and over who have used cannabis at least once in their lifetime. === United States === From the mid-19th century to the 1930s, American physicians prescribed Cannabis sativa for various medical conditions. In the 1960s, the counterculture movement introduced the use of psychoactive drugs, including cannabis. Young adults and college students reported recreational use of cannabis, among other drugs, at a prevalence of 20–25%, while the cultural mindset toward use was open and curious. In 1969, the FBI reported that between the years 1966 and 1968, the number of arrests for marijuana possession, which had been outlawed throughout the United States under the Marijuana Tax Act of 1937, had increased by 98%. Despite acknowledgement that drug use was growing rapidly among America's youth during the late 1960s, surveys have suggested that only about 4% of the American population had ever smoked marijuana by 1969. By 1972, however, that figure had increased to 12%, and it doubled again by 1977. The Controlled Substances Act of 1970 classified marijuana along with heroin and LSD as a Schedule I drug, i.e., having the relatively highest abuse potential and no accepted medical use. Most marijuana at that time came from Mexico, but in 1975 the Mexican government agreed to eradicate the crop by spraying it with the herbicide paraquat, raising fears of toxic side effects. Colombia then became the main supplier. The "zero tolerance" climate of the Reagan and Bush administrations (1981–1993) resulted in passage of strict laws and mandatory sentences for possession of marijuana. The "War on Drugs" thus brought with it a shift from reliance on imported supplies to domestic cultivation, particularly in Hawaii and California.
Beginning in 1982, the Drug Enforcement Administration turned increased attention to marijuana farms in the United States, and there was a shift to the indoor growing of plants specially developed for small size and high yield. After over a decade of decreasing use, marijuana smoking began an upward trend once more in the early 1990s, especially among teenagers, but by the end of the decade this upswing had leveled off well below former peaks of use. == Society and culture == Many movements and organizations are advocating for or against the liberalization of the use of recreational drugs, most notably regarding the legalization of marijuana and cannabinoids for medical and/or recreational use. Subcultures have emerged among users of recreational drugs, as well as alternative lifestyles and social movements among those who abstain from them, such as teetotalism and "straight edge". Since the early 2000s, medical professionals have acknowledged and addressed the problem of the increasing consumption of alcoholic drinks and club drugs (such as MDMA, cocaine, rohypnol, GHB, ketamine, PCP, LSD, and methamphetamine) associated with rave culture among adolescents and young adults in the Western world. Studies have shown that adolescents are more likely than young adults to use multiple drugs, and the consumption of club drugs is highly associated with the presence of criminal behaviors and recent alcohol abuse or dependence. The prevalence of recreational drugs in human societies is widely reflected in fiction, entertainment, and the arts, subject to prevailing laws and social conventions. For instance, in the music industry, the musical genres hip hop, hardcore rap, and trap, alongside their derivative subgenres and subcultures, are most notorious for having continuously celebrated and promoted drug trafficking, gangster lifestyle, and consumption of alcohol and other drugs since their inception in the United States during the late 1980s–early 1990s. In video games, for example, drugs are portrayed in a variety of ways: including power-ups (cocaine gum replenishes stamina in Red Dead Redemption 2), obstacles to be avoided (such as the Fuzzies in Super Mario World 2: Yoshi's Island that distort the player's view when accidentally consumed), items to be bought and sold for in-game currency (coke dealing is a big part of Scarface: The World Is Yours). In the Fallout video game franchise, drugs ("chems" in the game) can fill the role of any above mentioned. Drug trafficking, gang rivalries, and their related criminal underworld also play a big part in the Grand Theft Auto video game franchise. == Common recreational drugs == The following substances are commonly used recreationally: Alcohol: Most drinking alcohol is ethanol, CH3CH2OH. Drinking alcohol creates intoxication, relaxation and lowered inhibitions. It is produced by the fermentation of sugars by yeasts to create wine, beer, and distilled liquor (e.g., vodka, rum, gin, etc.). In most areas of the world, it is legal for those over a certain age (18 in most countries). It is an IARC Group 1 carcinogen and a teratogen. Alcohol withdrawal can be life-threatening. Amphetamines: Used recreationally to provide alertness and a sense of energy. Prescribed for ADHD, narcolepsy, depression, and weight loss. A potent central nervous system stimulant, in the 1940s and 50s methamphetamine was used by Axis and Allied troops in World War II, and, later on, other armies, and by Japanese factory workers. 
It increases muscle strength and fatigue resistance and improves reaction time. Methamphetamine use can be neurotoxic, which means it damages dopamine neurons. As a result of this brain damage, chronic use can lead to post-acute withdrawal syndrome. Caffeine: Often found in coffee, black tea, energy drinks, some soft drinks (e.g., Coca-Cola, Pepsi, and Mountain Dew, among others), and chocolate. It is the world's most widely consumed psychoactive drug, but has only mild dependence liability for long-term users. Cannabis: Its common forms include marijuana and hashish, which are smoked, vaporized or eaten. It contains at least 85 cannabinoids. The primary psychoactive component is THC, which mimics the neurotransmitter anandamide, named after the Sanskrit ananda, "joy, bliss, delight". When cannabis is eaten, THC is metabolized into 11-OH-THC, which is the primary psychoactive compound of edible forms of cannabis. THC and 11-OH-THC are partial agonists at CB1 and CB2 receptors of the endocannabinoid system. Cocaine: It is available as a white powder, which is insufflated ("sniffed" into the nostrils) or converted into a solution with water and injected. A popular derivative, crack cocaine, is typically smoked. When cocaine is transformed into its freebase form, crack, the vapour may be inhaled directly. This is thought to increase bioavailability, but has also been found to be toxic, due to the production of methylecgonidine during pyrolysis. MDMA: Commonly known as ecstasy, it is a common club drug in the rave scene. Ketamine: An anesthetic used legally by paramedics and doctors in emergency situations for its dissociative and analgesic qualities and illegally in the club drug scene. Lean: A liquid drug mixture made by mixing cough syrup, sweets, soft drinks and codeine. It originated in the 1990s in Houston. Since then its use has grown, and it is often consumed at parties and in the trap music scene. Users typically feel drowsy after consuming it. LSD: A popular ergoline derivative, first synthesized in 1938 by Albert Hofmann; however, he did not notice its psychedelic effects until 1943. It is a serotonergic psychedelic (partial agonist at serotonin receptors, particularly the 5-HT2A subtypes) like psilocin, mescaline and DMT. LSD is unusual in that it is also a partial agonist of dopamine and norepinephrine receptors, particularly the D2R subtypes. LSD (d-lysergic acid diethylamide) is a molecule of the lysergamide family, a subclass of the tryptamine family. In the 1950s, it was used in psychological therapy, and, covertly, by the CIA in Project MKULTRA, in which the drug was administered to unwitting US and Canadian citizens. It played a central role in the 1960s 'counter-culture', and was banned in October 1968 under US President Lyndon B. Johnson. Nitrous oxide: Legally used by dentists as an anxiolytic and anaesthetic, it is also used recreationally by users who obtain it from whipped cream canisters (whippets or whip-its) (see inhalant), as it causes perceptual effects, a "high" and, at higher doses, hallucinations. Opiates and opioids: Available by prescription for pain relief. Commonly used opioids include oxycodone, hydrocodone, codeine, fentanyl, heroin, methadone, and morphine. Opioids have a high potential for addiction and have the ability to induce severe physical withdrawal symptoms upon cessation of frequent use. Heroin can be smoked, insufflated, or turned into a solution with water and injected.
Percocet is a prescription opioid containing oxycodone and acetaminophen. Psilocybin mushrooms: This hallucinogenic drug was an important drug in the psychedelic scene. Until 1963, when it was chemically analysed by Albert Hofmann, it was completely unknown to modern science that Psilocybe semilanceata ("Liberty Cap", common throughout Europe) contains psilocybin, a hallucinogen previously identified only in species native to Mexico, Asia, and North America. Tobacco: Nicotiana tabacum. Nicotine is the key drug contained in tobacco leaves, which are either smoked, chewed or snuffed. It contains nicotine, which crosses the blood–brain barrier in 10–20 seconds. It mimics the action of the neurotransmitter acetylcholine at nicotinic acetylcholine receptors in the brain and the neuromuscular junction. The neuronal forms of the receptor are present both post-synaptically (involved in classical neurotransmission) and pre-synaptically, where they can influence the release of multiple neurotransmitters. Tranquilizers: barbiturates, benzodiazepines (e.g. alprazolam, diazepam, etc.)(commonly prescribed for anxiety disorders; known to cause dementia and post acute withdrawal syndrome) "Bath salts": slang term that generally refers to substituted cathinones such as Mephedrone and Methylenedioxypyrovalerone (MDPV), but not always DMT – primary ingredient in ayahuasca, can also be smoked (inhalation causes a brief effect lasting usually 5 to 15 minutes). Peyote: This hallucinogen contains mescaline, native to southwestern Texas and Mexico. Echinopsis pachanoi is a faster growing cactus containing mescaline. It is one of the few narcotics legally available in the United States for religious purposes by the Native American Church. Salvia divinorum: This hallucinogenic Mexican herb in the mint family; not considered recreational, most likely due to the nature of the hallucinations (legal in some jurisdictions) Synthetic cannabis: "Spice", "K2", JWH-018, AM-2201 Quaaludes: A popular club drug in the 1970s. No longer prescribed or manufactured in many countries but remains popular in South Africa. == Routes of administration == Drugs are often associated with a particular route of administration. Many drugs can be consumed in more than one way. For example, marijuana can be swallowed like food or smoked, and cocaine can be "sniffed" in the nostrils, injected, or, with various modifications, smoked. inhalation: all intoxicative inhalants (see below) that are gases or solvent vapours that are inhaled through the trachea, as the name suggests insufflation: also known as "snorting", or "sniffing", this method involves the user placing a powder in the nostrils and breathing in through the nose, so that the drug is absorbed by the mucous membranes. Drugs that are "snorted", or "sniffed", include powdered amphetamines, cocaine, heroin, ketamine, MDMA, and snuff tobacco. Subcutaneous injection (see also the article Skin popping): injection of drug into the third lowest layer of skin. Intramuscular injection: injection of drug into a muscle. intravenous injection (see also the article Drug injection): the user injects a solution of water and the drug into a vein, or less commonly, into the tissue. Drugs that are injected include morphine and heroin, less commonly other opioids. Stimulants like cocaine or methamphetamine may also be injected. In rare cases, users inject other drugs. 
oral intake: caffeine, ethanol, cannabis edibles, psilocybin mushrooms, coca tea, poppy tea, laudanum, GHB, ecstasy pills with MDMA or various other substances (mainly stimulants and psychedelics), prescription and over-the-counter drugs (ADHD and narcolepsy medications, benzodiazepines, anxiolytics, sedatives, cough suppressants, morphine, codeine, opioids and others) sublingual: substances diffuse into the blood through tissues under the tongue. Many psychoactive drugs can be or have been specifically designed for sublingual administration, including barbiturates, benzodiazepines, opioid analgesics with poor gastrointestinal bioavailability, LSD blotters, coca leaves, some hallucinogens. This route of administration is activated when chewing some forms of smokeless tobacco (e.g. dipping tobacco, snus). intrarectal ("plugging"): administering into the rectum, most water-soluble drugs can be used this way. smoking (see also the section below): tobacco, cannabis, opium, crystal meth, phencyclidine, crack cocaine, and heroin (diamorphine as freebase) known as chasing the dragon. transdermal patches with prescription drugs: e.g. methylphenidate (Daytrana) and fentanyl. Many drugs are taken through various routes. Intravenous route is the most efficient, but also one of the most dangerous. Nasal, rectal, inhalation and smoking are safer. The oral route is one of the safest and most comfortable, but of little bioavailability. == Types == === Depressants === Depressants are psychoactive drugs that temporarily diminish the function or activity of a specific part of the body or mind. Colloquially, depressants are known as "downers", and users generally take them to feel more relaxed and less tense. Examples of these kinds of effects may include anxiolysis, sedation, and hypotension. Depressants are widely used throughout the world as prescription medicines and as illicit substances. When these are used, effects may include anxiolysis (reduction of anxiety), analgesia (pain relief), sedation, somnolence, cognitive/memory impairment, dissociation, muscle relaxation, lowered blood pressure/heart rate, respiratory depression, anesthesia, and anticonvulsant effects. Depressants exert their effects through a number of different pharmacological mechanisms, the most prominent of which include potentiation of GABA or opioid activity, and inhibition of adrenergic, histamine or acetylcholine activity. Some are also capable of inducing feelings of euphoria. The most widely used depressant by far is alcohol (i.e. ethanol). Stimulants or "uppers", such as amphetamines or cocaine, which increase mental or physical function, have an opposite effect to depressants. Depressants, in particular alcohol, can precipitate psychosis. A 2019 systematic review and meta-analysis by Murrie et al. found that the rate of transition from opioid, alcohol and sedative induced psychosis to schizophrenia was 12%, 10% and 9% respectively. ==== Antihistamines ==== Antihistamines (or "histamine antagonists") inhibit the release or action of histamine. "Antihistamine" can be used to describe any histamine antagonist, but the term is usually reserved for the classical antihistamines that act upon the H1 histamine receptor. Antihistamines are used as treatment for allergies. Allergies are caused by an excessive response of the body to allergens, such as the pollen released by grasses and trees. An allergic reaction causes release of histamine by the body. 
Other uses of antihistamines are to help with normal symptoms of insect stings even if there is no allergic reaction. Their recreational appeal exists mainly due to their anticholinergic properties, that induce anxiolysis and, in some cases such as diphenhydramine, chlorpheniramine, and orphenadrine, a characteristic euphoria at moderate doses. High dosages taken to induce recreational drug effects may lead to overdoses. Antihistamines are also consumed in combination with alcohol, particularly by youth who find it hard to obtain alcohol. The combination of the two drugs can cause intoxication with lower alcohol doses. Hallucinations and possibly delirium resembling the effects of Datura stramonium can result if the drug is taken in much higher than therapeutic doses. Antihistamines are widely available over the counter at drug stores (without a prescription), in the form of allergy medication and some cough medicines. They are sometimes used in combination with other substances such as alcohol. The most common unsupervised use of antihistamines in terms of volume and percentage of the total is perhaps in parallel to the medicinal use of some antihistamines to extend and intensify the effects of opioids and depressants. The most commonly used are hydroxyzine, mainly to extend a supply of other drugs, as in medical use, and the above-mentioned ethanolamine and alkylamine-class first-generation antihistamines, which are – once again as in the 1950s – the subject of medical research into their anti-depressant properties. For all of the above reasons, the use of medicinal scopolamine for recreational uses is also observed. ==== Analgesics ==== Analgesics (also known as "painkillers") are used to relieve pain (achieve analgesia). The word analgesic derives from Greek "αν-" (an-, "without") and "άλγος" (álgos, "pain"). Analgesic drugs act in various ways on the peripheral and central nervous systems; they include paracetamol (also known in the US as acetaminophen), the nonsteroidal anti-inflammatory drugs (NSAIDs) such as the salicylates (e.g. aspirin), and opioid drugs such as hydrocodone, codeine, heroin and oxycodone. Some further examples of the brand name prescription opiates and opioid analgesics that may be used recreationally include Vicodin, Lortab, Norco (hydrocodone), Avinza, Kapanol (morphine), Opana, Paramorphan (oxymorphone), Dilaudid, Palladone (hydromorphone), and OxyContin (oxycodone). ==== Tranquilizers ==== The following are examples of tranquilizers (GABAergics): Barbiturates Benzodiazepines Ethanol (drinking alcohol; ethyl alcohol) Nonbenzodiazepines Others carisoprodol (Soma) chloral hydrate diethyl ether ethchlorvynol (Placidyl; "jelly-bellies") gamma-butyrolactone (GBL, a prodrug to GHB) gamma-hydroxybutyrate (GHB; G; Xyrem; "Liquid Ecstasy", "Fantasy") glutethimide (Doriden) kava (from Piper methysticum; contains kavalactones) ketamine, a phencyclidine (PCP) analog meprobamate (Miltown) methaqualone (Sopor, Mandrax; "Quaaludes") phenibut propofol (Diprivan), a general anesthetic theanine (found in Camellia sinensis, the tea plant) valerian (from Valeriana officinalis) === Stimulants === Stimulants, also known as "psychostimulants", induce euphoria with improvements in mental and physical function, such as enhanced alertness, wakefulness, and locomotion. Stimulants are also occasionally called "uppers". Depressants or "downers", which decrease mental or physical function, are in stark contrast to stimulants and are considered to be their functional opposites. 
Stimulants enhance the activity of the central and peripheral nervous systems. Common effects may include increased alertness, awareness, wakefulness, endurance, productivity, and motivation, arousal, locomotion, heart rate, and blood pressure, and a diminished desire for food and sleep. Use of stimulants may cause the body to significantly reduce its production of endogenous compounds that fulfill similar functions. Once the effect of the ingested stimulant has worn off the user may feel depressed, lethargic, confused, and dysphoric. This is colloquially termed a "crash" and may promote reuse of the stimulant. Amphetamines are a significant cause of drug-induced psychosis. Importantly, a 2019 meta-analysis found that 22% of people with amphetamine-induced psychosis transition to a later diagnosis of schizophrenia. Examples of stimulants include: Sympathomimetics (catecholaminergics)—e.g. amphetamine, methamphetamine, cocaine, methylphenidate, ephedrine, pseudoephedrine Entactogens (serotonergics, primarily phenethylamines)—e.g. MDMA (which is also an amphetamine) Eugeroics, e.g. modafinil Others arecoline (found in Areca catechu) caffeine (found in Coffea spp.) nicotine (found in Nicotiana spp.) rauwolscine (found in Rauvolfia serpentina) yohimbine (Procomil; a tryptamine alkaloid found in Pausinystalia johimbe) === Euphoriants === Alcohol: "Euphoria, the feeling of well-being, has been reported during the early (10–15 min) phase of alcohol consumption" (e.g., beer, wine or spirits) Cannabis: Tetrahydrocannabinol, the main psychoactive ingredient in this plant, can have sedative and euphoric properties. Catnip: Catnip contains a sedative known as nepetalactone that activates opioid receptors. In cats it elicits sniffing, licking, chewing, head shaking, rolling, and rubbing which are indicators of pleasure. In humans, however, catnip does not act as a euphoriant. Stimulants: "Psychomotor stimulants produce locomotor activity (the subject becomes hyperactive), euphoria, (often expressed by excessive talking and garrulous behaviour), and anorexia. The amphetamines are the best known drugs in this category..." MDMA: The "euphoriant drugs such as MDMA ('ecstasy') and MDEA ('eve')" are popular among young adults. MDMA "users experience short-term feelings of euphoria, rushes of energy and increased tactility" as well as interpersonal connectedness. Opium: This "drug derived from the unripe seed-pods of the opium poppy…produces drowsiness and euphoria and reduces pain. Morphine and codeine are opium derivatives." Opioids have led to many deaths in the United States, particularly by causing respiratory depression. === Hallucinogens === Hallucinogens can be divided into three broad categories: psychedelics, dissociatives, and deliriants. They can cause subjective changes in perception, thought, emotion and consciousness. Unlike other psychoactive drugs such as stimulants and opioids, hallucinogens do not merely amplify familiar states of mind but also induce experiences that differ from those of ordinary consciousness, often compared to non-ordinary forms of consciousness such as trance, meditation, conversion experiences, and dreams. Psychedelics, dissociatives, and deliriants have a long worldwide history of use within medicinal and religious traditions. They are used in shamanic forms of ritual healing and divination, in initiation rites, and in the religious rituals of syncretistic movements such as União do Vegetal, Santo Daime, Temple of the True Inner Light, and the Native American Church. 
When used in religious practice, psychedelic drugs, as well as other substances like tobacco, are referred to as entheogens. Hallucinogen-induced psychosis occurs when psychosis persists despite no longer being intoxicated with the drug. It is estimated that 26% of people with hallucinogen-induced psychosis will transition to a diagnosis of schizophrenia. This percentage is less than the psychosis transition rate for cannabis (34%) but higher than that of amphetamines (22%). Starting in the mid-20th century, psychedelic drugs have been the object of extensive attention in the Western world. They have been and are being explored as potential therapeutic agents in treating depression, post-traumatic stress disorder, obsessive–compulsive disorder, alcoholism, and opioid addiction. Yet the most popular, and at the same time most stigmatized, use of psychedelics in Western culture has been associated with the search for direct religious experience, enhanced creativity, personal development, and "mind expansion". The use of psychedelic drugs was a major element of the 1960s counterculture, where it became associated with various social movements and a general atmosphere of rebellion and strife between generations. Deliriants atropine (alkaloid found in plants of the family Solanaceae, including datura, deadly nightshade, henbane and mandrake) dimenhydrinate (Dramamine, an antihistamine) diphenhydramine (Benadryl, Unisom, Nytol) hyoscyamine (alkaloid also found in the Solanaceae) hyoscine hydrobromide (another Solanaceae alkaloid) myristicin (found in Myristica fragrans ("Nutmeg")) ibotenic acid (found in Amanita muscaria ("Fly Agaric"); prodrug to muscimol) muscimol (also found in Amanita muscaria, a GABAergic) Dissociatives dextromethorphan (DXM; Robitussin, Delsym, etc.; "Dex", "Robo", "Cough Syrup", "DXM") "Triple C's, Coricidin, Skittles" refer to a potentially fatal formulation containing both dextromethorphan and chlorpheniramine. ketamine (K; Ketalar, Ketaset, Ketanest; "Ket", "Kit Kat", "Special-K", "Vitamin K", "Jet Fuel", "Horse Tranquilizer") methoxetamine (Mex, Mket, Mexi) phencyclidine (PCP; Sernyl; "Angel Dust", "Rocket Fuel", "Sherm", "Killer Weed", "Super Grass") nitrous oxide (N2O; "NOS", "Laughing Gas", "Whippets", "Balloons") Psychedelics Phenethylamines 2C-B ("Nexus", "Venus", "Eros", "Bees") 2C-E ("Eternity", "Hummingbird") 2C-I ("Infinity") 2C-T-2 ("Rosy") 2C-T-7 ("Blue Mystic", "Lucky 7") DOB DOC DOI DOM ("Serenity, Tranquility, and Peace" ("STP")) MDMA ("Ecstasy", "E", "Molly", "Mandy", "MD", "Crystal Love") mescaline (found in peyote and Trichocereus macrogonus (Peruvian torch, San Pedro cactus)) Tryptamines (including ergolines and lysergamides) 5-MeO-DiPT ("Foxy", "Foxy Methoxy") 5-MeO-DMT (found in various plants like chacruna, jurema, vilca, and yopo) alpha-methyltryptamine (αMT; Indopan; "Spirals") bufotenin (secreted by Bufo alvarius, also found in various Amanita mushrooms) N,N-dimethyltryptamine (N,N-DMT; DMT; "Dimitri", "Disneyland", "Spice"; found in large amounts in Psychotria and in D. cabrerana) lysergic acid amide (LSA; ergine; found in morning glory and Hawaiian baby woodrose seeds) lysergic acid diethylamide (LSD; L; Delysid; "Acid", "Sid". 
"Cid", "Lucy", "Sidney", "Blotters", "Droppers", "Sugar Cubes") O-Acetylpsilocin (believed to be a prodrug of psilocin) psilocin (found in psilocybin mushrooms) psilocybin (also found in psilocybin mushrooms; prodrug to psilocin) ibogaine (found in Tabernanthe iboga ("Iboga")) Atypicals salvinorin A (found in Salvia divinorum, a trans-neoclerodane diterpenoid ("Diviner's Sage", "Lady Salvia", "Salvinorin")) tetrahydrocannabinol (found in cannabis) === Inhalants === Inhalants are gases, aerosols, or solvents that are breathed in and absorbed through the lungs. While some "inhalant" drugs are used for medical purposes, as in the case of nitrous oxide, a dental anesthetic, inhalants are used as recreational drugs for their intoxicating effect. Most inhalant drugs that are used non-medically are ingredients in household or industrial chemical products that are not intended to be concentrated and inhaled, including organic solvents (found in cleaning products, fast-drying glues, and nail polish removers), fuels (gasoline (petrol) and kerosene), and propellant gases such as Freon and compressed hydrofluorocarbons that are used in aerosol cans such as hairspray, whipped cream, and non-stick cooking spray. A small number of recreational inhalant drugs are pharmaceutical products that are used illicitly, such as anesthetics (ether and nitrous oxide) and volatile anti-angina drugs (alkyl nitrites, more commonly known as "poppers"). The most serious inhalant abuse occurs among children and teens who "[...] live on the streets completely without family ties". Inhalant users inhale vapor or aerosol propellant gases using plastic bags held over the mouth or by breathing from a solvent-soaked rag or an open container. The effects of inhalants range from an alcohol-like intoxication and intense euphoria to vivid hallucinations, depending on the substance and the dosage. Some inhalant users are injured due to the harmful effects of the solvents or gases, or due to other chemicals used in the products inhaled. As with any recreational drug, users can be injured due to dangerous behavior while they are intoxicated, such as driving under the influence. Computer cleaning dusters are dangerous to inhale, because the gases expand and cool rapidly upon being sprayed. In many cases, users have died from hypoxia (lack of oxygen), pneumonia, cardiac failure or arrest, or aspiration of vomit. 
Examples include: Chloroform Ethyl chloride Diethyl ether Ethane and ethylene Laughing gas (nitrous oxide) Poppers (alkyl nitrites) Solvents and propellants (including propane, butane, freon, gasoline, kerosene, toluene) along with the fumes of glues containing them == List of drugs which can be smoked == Plants: black tar heroin cannabis datura and other Solanaceae (formerly smoked to treat asthma) opium salvia divinorum tobacco possibly other plants (see the section below) Substances (also not necessarily psychoactive plants smoked within them): 5-MeO-DMT Bufotenine crack cocaine dimethyltryptamine (DMT) DiPT methamphetamine Methaqualone phencyclidine (PCP) synthetic cannabinoids (see also: synthetic cannabis) many others, including some prescription drugs == List of psychoactive plants, fungi, and animals == Minimally psychoactive plants which contain mainly caffeine and theobromine: cocoa coffee guarana (caffeine in guarana is sometimes called guaranine) kola tea (caffeine in tea is sometimes called theine) – also contains theanine yerba mate (caffeine in yerba mate is sometimes called mateine) Most known psychoactive plants: cannabis: cannabinoids coca: cocaine kava: kavalactones khat: cathine and cathinone nutmeg: myristicin and elemicin opium poppy: morphine, codeine, and other opiates salvia divinorum: salvinorin A tobacco: nicotine and beta-carboline alkaloids Solanaceae plants—contain atropine, hyoscyamine, and scopolamine: datura deadly nightshade Atropa belladonna henbane mandrake (mandragora) other Solanaceae Cacti with mescaline: Peyote Trichocereus macrogonus, the Peruvian torch cactus, and in particular its variety T. macrogonus var. pachanoi, the San Pedro cactus Other plants: Areca catechu (see: betel and paan)—arecoline Ayahuasca (for DMT) Calea zacatechichi damiana ephedra: ephedrine kratom: mitragynine, mitraphylline, 7-hydroxymitragynine, raubasine, and corynanthine Morning glory and Hawaiian Baby Woodrose – lysergic acid amide (LSA, ergine) Rauvolfia serpentina: rauwolscine Silene capensis Tabernanthe iboga ("Iboga")—ibogaine valerian: valerian (the chemical with the same name) various plants like chacruna, jurema, vilca, and yopo – 5-MeO-DMT yohimbe (Pausinystalia johimbe): yohimbine and corynanthine many others Fungi: various Amanita mushrooms: muscimol Amanita muscaria: ibotenic acid and muscimol Claviceps purpurea and other Clavicipitaceae: ergotamine (not psychoactive itself but used in synthesis of LSD) psilocybin mushrooms: psilocybin and psilocin Psychoactive animals: hallucinogenic fish psychoactive toads: Bufo alvarius (Colorado River toad or Sonoran Desert toad) contains bufotenin (5-MeO-DMT) == See also == == References == == Further reading == Martin, Christopher S.; Chung, Tammy; Langenbucher, James W. (2017). "Part 1: Defining and Characterizing the Nature and Extent of Substance Use Disorders – Historical and Cultural Perspectives on Substance Use and Substance Use Disorders". In Sher, Kenneth J. (ed.). The Oxford Handbook of Substance Use and Substance Use Disorders: Volume 1. Oxford Library of Psychology. Oxford and New York: Oxford University Press. pp. 27–59. doi:10.1093/oxfordhb/9780199381678.013.001. ISBN 9780199381678. LCCN 2016020729. Anthony, James; Barondess, David A.; Radovanovic, Mirjana; Lopez-Quintero, Catalina (2017). "Part 1: Psychiatric Comorbidity – Polydrug Use: Research Topics and Issues". In Sher, Kenneth J. (ed.). The Oxford Handbook of Substance Use and Substance Use Disorders: Volume 2. Oxford Library of Psychology. 
Oxford and New York: Oxford University Press. pp. 27–59. doi:10.1093/oxfordhb/9780199381708.013.006. ISBN 9780199381708. LCCN 2016020729. Hernández-Serrano, Olga; Gras, Maria E.; Font-Mayolas, Sílvia; Sullman, Mark J. M. (2016). "Part VI: Dual and Polydrug Abuse – Chapter 83: Types of Polydrug Usage". In Preedy, Victor R. (ed.). Neuropathology of Drug Addictions and Substance Misuse, Volume 3: General Processes and Mechanisms, Prescription Medications, Caffeine and Areca, Polydrug Misuse, Emerging Addictions and Non-Drug Addictions. Cambridge, Massachusetts: Academic Press, imprint of Elsevier. pp. 839–849. doi:10.1016/B978-0-12-800634-4.00083-4. ISBN 978-0-12-800634-4. == External links == "The Science of Drug Use: A Resource for the Justice Sector". www.drugabuse.gov. North Bethesda, Maryland: National Institute on Drug Abuse. 26 May 2020. Archived from the original on 6 September 2023. Retrieved 21 March 2024. School-Based Drug Abuse Prevention: Promising and Successful Programs (PDF). Ottawa, Ontario: Public Safety Canada. 31 January 2018. ISBN 978-1-100-12181-9. Archived (PDF) from the original on 19 May 2021. Retrieved 21 March 2024. Sacco, L. N.; Finklea, K. (3 May 2016). "Synthetic Drugs: Overview and Issues for Congress" (PDF). Washington, D.C.: Congressional Research Service. Archived (PDF) from the original on 8 December 2021. Retrieved 21 March 2024.
Wikipedia/Hard_and_soft_drugs
Epigenetics in forensic science is the application of epigenetics to solving crimes. Forensic science has been using DNA as evidence since 1984; however, the DNA sequence alone does not give information about changes in the individual since birth and is not useful for distinguishing identical siblings. The focus of epigenetics in the forensic field is on non-heritable changes such as aging and diseases. Epigenetics involves changes to DNA that do not affect the sequence but instead affect the activity of the DNA, such as the level of transcription of a particular gene. These changes can be passed down transgenerationally through the germline or arise after birth from environmental factors. In humans and other mammals, CpG dinucleotides are the main sequence context in which methylation develops, and because of this most studies try to find unique methylation sites at these positions. A few methylation sites have been determined to result from environmental influences such as age, lifestyle, or certain diseases. == DNA methylation == DNA methylation is a common epigenetic mark being studied as potential evidence in forensic science. Unlike DNA itself, realistic DNA methylation patterns are less likely to be planted at crime scenes. Current methods to fabricate DNA usually exclude important methylation marks found in biological tissues, making methylation a way to confirm the identity of an individual when evidence is being assessed. Many different tissues can be used to analyze methylation. == Sample preservation == The effect of cryopreservation on epigenetic marks in tissues is a new area of study. The primary focus of this research is on oocytes and sperm for the purpose of assisted reproductive technology; however, it can be useful in forensics for the preservation of evidence. Methylation can be analyzed in fresh tissue that is cryopreserved within 24 hours of death, and it can then be analyzed in this tissue for up to one year. If the tissue is formalin-fixed or putrefied, methylation analysis is much more difficult. == Aging == Although blood is the primary sample used in studies, most tissues consistently show that methylation increases early in life and slowly decreases, globally, throughout late adulthood. This process is referred to as epigenetic drift. The epigenetic clock refers to methylation sites that are highly associated with aging. These sites change consistently across individuals and can therefore be used as age markers for an individual. Some models have been developed to predict age from specific samples, such as saliva and buccal epithelial cells, blood, or semen, while others have been made to age any tissue. In 2011, three significant, hypermethylated CpG sites related to aging across all samples were found in the KCNQ1DN, NPTX2, and GRIA2 genes. The age estimates for over 700 samples had a mean absolute deviation from chronological age (MAD) of 11.4 years. Two years later, almost 8,000 samples were used in an elastic net regularized regression to create a new age-predictive model. This resulted in 353 CpG sites being chosen for the age prediction, and the model had a MAD of 3.6 years.
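At its core, this kind of age model is a penalized regression over CpG methylation fractions (beta values), evaluated by the mean absolute deviation (MAD) between predicted and chronological age. The following is a minimal illustrative sketch of that workflow using synthetic data and scikit-learn's ElasticNetCV; the sample sizes, CpG counts, and penalty settings are placeholder assumptions, not the published 353-CpG model.

```python
# Illustrative sketch (synthetic data): elastic-net age prediction from CpG
# methylation beta values, evaluated by mean absolute deviation (MAD).
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_cpgs = 800, 2000          # placeholder dimensions, not a real array
age = rng.uniform(5, 90, n_samples)    # chronological ages in years

# Simulate beta values in [0, 1]; a small subset of CpGs drifts with age.
betas = rng.uniform(0.05, 0.95, (n_samples, n_cpgs))
informative = rng.choice(n_cpgs, 50, replace=False)
betas[:, informative] += (age[:, None] / 90.0) * 0.3
betas = np.clip(betas, 0, 1)

X_train, X_test, y_train, y_test = train_test_split(
    betas, age, test_size=0.25, random_state=0)

# Elastic net with cross-validated penalties selects a sparse panel of CpGs.
model = ElasticNetCV(l1_ratio=[0.5, 0.9, 1.0], cv=5, max_iter=5000)
model.fit(X_train, y_train)

pred = model.predict(X_test)
mad = np.mean(np.abs(pred - y_test))   # mean absolute deviation from chronological age
n_selected = np.sum(model.coef_ != 0)  # CpG sites retained by the model
print(f"selected CpGs: {n_selected}, MAD: {mad:.1f} years")
```

Published clocks follow the same outline but are trained on array-measured CpGs from many tissues and validated on independent cohorts.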
Homocysteine levels peak in the evening and are at their lowest overnight, while DNA methylation follows an inverse pattern. Other studies with rats found that expression of DNMT3B and other methylation enzymes oscillates with, and may be regulated by, the circadian clock. Another methylation-associated factor, MECP2, is phosphorylated by the suprachiasmatic nucleus in response to light signaling. In a group of subjects who died from a variety of causes, there was partial methylation at the PER2, PER3, CRY1, and TIM promoters, which are important genes in controlling the circadian clock. The methylation of CRY1 varied within an individual's tissues and between two individuals; however, the difference between individuals may have been due to methamphetamine exposure. === Teeth === An age model using dentin from teeth is currently being studied. Over 300 genes have been found that take part in odontogenesis, and several of them affect the epigenome. For example, JMJD3 is a histone demethylase that modifies the methylation of homeobox and bone morphogenetic proteins. More studies are being done to differentiate genetic, epigenetic, and environmental effects on methylation in teeth so that aging algorithms become more accurate. Previously, measuring differences between sets of teeth was done with calipers, but 2D and 3D imaging has become more available and allows for more accurate measurements. New programs are being developed to analyze these images of teeth. Monozygotic twin studies reveal that 8–29% of the differences between twins' teeth are attributable to the environment. Several studies of monozygotic twins have shown that when they have a tooth defect, such as congenitally missing or supernumerary teeth, the twins can share the same number or position of the defective tooth, but sometimes not both of these factors. === Twin identification === Monozygotic twins provide information on epigenetic differences that are not from genetic factors. Epigenetic markers differ the most in monozygotic twins who spend time apart or have very different medical histories. As twins age, their methylation and acetylation of histones H3 and H4 increasingly vary. These marks are specific to the environmental differences between the twins and not to changes in methylation from general aging. The rate of disease discordance between monozygotic twins is usually over 50%, including for heritable diseases. This does not correlate with the disease prevalence rate. There are more phenotypic methylation differences in twins discordant for bipolar disorder, schizophrenia, or systemic lupus erythematosus than in unrelated cases. There is no difference between twins discordant for rheumatoid arthritis or dermatomyositis. A limitation of the current studies on twin disease discordance is the lack of a baseline epigenetic profile of the twins before they develop the disease. Such a baseline would be used to distinguish the environmental changes between the twins and to narrow down the methylation sites related to the disease. Several studies are obtaining newborn epigenetic profiles for long-term research. == References ==
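A rough illustration of the age-prediction approach described above (penalized regression over CpG methylation values, evaluated by mean absolute deviation from chronological age) is sketched below. This is not the published 353-CpG model: the cohort size, CpG panel, and effect sizes are all simulated for illustration, and scikit-learn is assumed to be available.

```python
# Minimal sketch of methylation-based age prediction with an elastic net.
# All data are simulated; this does not reproduce any published clock.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)

n_samples, n_cpgs = 200, 1000          # hypothetical cohort and CpG panel sizes
ages = rng.uniform(5, 90, n_samples)   # chronological ages in years

# Simulate beta values (0..1) where a handful of CpGs drift slowly with age.
betas = rng.uniform(0, 1, (n_samples, n_cpgs))
age_linked = rng.choice(n_cpgs, 20, replace=False)
betas[:, age_linked] += 0.004 * ages[:, None]   # gradual age-related gain
betas = np.clip(betas, 0, 1)

# The elastic net retains a sparse subset of CpGs, analogous to a CpG clock.
model = ElasticNetCV(l1_ratio=0.5, cv=5, max_iter=5000, random_state=0)
model.fit(betas, ages)

# In-sample MAD (optimistic; real studies report it on held-out samples).
predicted = model.predict(betas)
mad = np.mean(np.abs(predicted - ages))
print(f"CpGs retained: {np.count_nonzero(model.coef_)}, MAD: {mad:.1f} years")
```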
Wikipedia/Epigenetics_in_forensic_science
DNA phenotyping is the process of predicting an organism's phenotype using only genetic information collected from genotyping or DNA sequencing. This term, also known as molecular photofitting, is primarily used to refer to the prediction of a person's physical appearance and/or biogeographic ancestry for forensic purposes. DNA phenotyping uses many of the same scientific methods as those being used for genetically informed personalized medicine, in which drug responsiveness (pharmacogenomics) and medical outcomes are predicted from a patient's genetic information. Significant genetic variants associated with a particular trait are discovered using a genome-wide association study (GWAS) approach, in which hundreds of thousands or millions of single-nucleotide polymorphisms (SNPs) are tested for their association with each trait of interest. Predictive modeling is then used to build a mathematical model for making trait predictions about new subjects. == Predicted phenotypes == Human phenotypes are predicted from DNA using direct or indirect methods. With direct methods, genetic variants mechanistically linked with variable expression of the relevant phenotypes are measured and used with appropriate statistical methodologies to infer trait value. With indirect methods, variants associated with genetic component(s) of ancestry that correlate with the phenotype of interest, such as ancestry informative markers, are measured and used with appropriate statistical methodologies to infer trait value. The direct method is generally preferable, but depending on the genetic architecture of the phenotype it is not always possible. Biogeographic ancestry determination methods have been highly developed within the genetics community, as ancestry determination is a key GWAS quality control step. These approaches typically use genome-wide human genetic clustering and/or principal component analysis to compare new subjects to curated individuals with known ancestry, such as those of the International HapMap Project or the 1000 Genomes Project. Another approach is to assay ancestry informative markers (AIMs), SNPs that vary in frequency between the major human populations. As early as 2004, evidence was compiled showing that the bulk of phenotypic variation in human iris color could be attributed to polymorphisms in the OCA2 gene. This paper, and the work it cited, laid the foundation for the inference of human iris color from DNA, first carried out at a basic level by DNAPrint Genomics. Beginning in 2009, academic groups developed and reported on more accurate predictive models for eye color and, more recently, hair color in the European population. More recently, companies such as Parabon NanoLabs and Identitas have begun offering forensic DNA phenotyping services for U.S. and international law enforcement. However, the science behind the commercial services offered by Parabon NanoLabs has been criticized because it has not been subjected to scrutiny in peer-reviewed scientific publications. It has been suggested that it is not known "whether their ability to estimate a face's appearance is better than chance, or if it's an approximation based on what we know about ancestry". DNA phenotyping is often referred to as a "biologic witness", a play on the term eyewitness. Just as an eyewitness may describe the appearance of a person of interest, the DNA left at a crime scene can be used to discover the physical appearance of the person who left it. 
This allows DNA phenotyping to be used as an investigative tool to help guide the police when searching for suspects. DNA phenotyping can be particularly helpful in cold cases, where there may not be a current lead. However, it is not a method used to help incarcerate suspects, as more traditional forensic measures are better suited for this. == Pigmentation prediction == One online tool available to the public and law enforcement is the HIrisPlex-S Webtool. This system uses SNPs that are linked to human pigmentation to predict an individual's phenotype. Using the multiplex assay described in three separate papers, the genotype for 41 different SNPs, which are linked to hair, eye and skin color in humans, can be generated. The genotype can then be entered into the HIrisPlex-S Webtool to generate the most probable phenotype of an individual based on their genetic information. This tool originally started as the IrisPlex System, consisting of six SNPs linked to eye color (rs12913832, rs1800407, rs12896399, rs16891982, rs1393350 and rs12203592). The addition of 18 SNPs linked to both hair and eye color led to the updated HIrisPlex System (rs312262906, rs11547464, rs885479, rs1805008, rs1805005, rs1805006, rs1805007, rs1805009, rs201326893, rs2228479, rs1110400, rs28777, rs12821256, rs4959270, rs1042602, rs2402130, rs2378249 and rs683). Another assay was developed using 17 SNPs involved in skin pigmentation to create the current HIrisPlex-S System (rs3114908, rs1800414, rs10756819, rs2238289, rs17128291, rs6497292, rs1129038, rs1667394, rs1126809, rs1470608, rs1426654, rs6119471, rs1545397, rs6059655, rs12441727, rs3212355 and rs8051733). The predictions for eye pigmentation are Blue, Intermediate and Brown. There are two categories for hair pigmentation: color (Blond, Brown, Red and Black) and shade (light and dark). The predictions for skin pigmentation are Very Pale, Pale, Intermediate, Dark and Dark to Black. Unlike eye and hair predictions, where only the highest probability is used to make a prediction, the top two highest probabilities for skin color are used to account for tanning ability and other variations. == Genes responsible for facial features == In 2018, researchers identified 15 loci containing genes associated with human facial features. == Differences from DNA profiling == Traditional DNA profiling, sometimes referred to as DNA fingerprinting, uses DNA as a biometric identifier. Like an iris scan or fingerprint, a DNA profile can uniquely identify an individual with very high accuracy. For forensic purposes, this means that investigators must have already identified and obtained DNA from a potentially matching individual. DNA phenotyping is used when investigators need to narrow the pool of possible individuals or identify unknown remains by learning about the person's ancestry and appearance. When the suspected individual is identified, traditional DNA profiling can be used to prove a match, provided there is a reference sample that can be used for comparison. == Published DNA phenotyping composites == On 9 January 2015, the fourth anniversary of the murders of Candra Alston and her three-year-old daughter Malaysia Boykin, police in Columbia, South Carolina, issued a press release containing what is thought to be the first composite image in forensic history to be published entirely on the basis of a DNA sample. 
The image, produced by Parabon NanoLabs with the company's Snapshot DNA Phenotyping System, consists of a digital mesh of predicted face morphology overlaid with textures representing predicted eye color, hair color and skin color. Kenneth Canzater Jr. was charged with the murders in 2017. On 30 June 2015, NBC Nightly News featured a DNA phenotyping composite, also produced by Parabon, of a suspect in the 1988 murder of April Tinsley near Fort Wayne, Indiana. The television segment also included a composite of national news correspondent Kate Snow, which was produced using DNA extracted from the rim of a water bottle that the network submitted to Parabon for a blinded test of the company's Snapshot™ DNA Phenotyping Service. Snow's identity and her use of the bottle were revealed only after the composite had been produced. In 2018 John D. Miller was charged with the murder. Sheriff Tony Mancuso of the Calcasieu Parish Sheriff's Office in Lake Charles, Louisiana, held a press conference on 1 September 2015 to announce the release of a Parabon Snapshot composite for a suspect in the 2009 murder of Sierra Bouzigard in Moss Bluff, Louisiana. The investigation had previously focused on a group of Hispanic males with whom Bouzigard was last seen. Snapshot analysis indicates the suspect is predominantly European, with fair skin, green or possibly blue eyes and brown or black hair. Sheriff Mancuso told the media, “This totally redirects our whole investigation and will move this case in a new direction.” Blake A. Russell was charged with the murder in 2017. Florida police chiefs from Miami Beach, Miami, Coral Gables and Miami-Dade jointly released a Snapshot composite of the “Serial Creeper” on 10 September 2015. For more than a year, the perpetrator has been spying on and sexually terrorizing women, and police believe he is connected to at least 15 crimes, possibly as many as 40. In a Miami Beach attack on 18 August 2015, which was first reported to the public on 23 September 2015, the perpetrator spoke in Spanish and told his victim he was from Cuba. Consistent with this claim, Snapshot had previously determined that the subject is Latino, with European, Native American, and African ancestry, an admixture most similar to that found in Latino individuals from the Caribbean and Northern South America. On 2 February 2016, the Anne Arundel County Maryland Police Department released what is believed to be the first published composite created by combining DNA phenotyping and forensic facial reconstruction from a victim's skull. The victim's body which had suffered severe upper body trauma was found on 23 April 1985 in a metal trash container at the construction site of the Marley Station Mall in Glen Burnie, MD. Police initially estimated the homicide occurred approximately five months before the body was discovered. Later the date of death was changed to about 1963. Thom Shaw, an IAI-certified forensic artist at Parabon NanoLabs, performed the physical facial reconstruction and the digital adaptation of a Snapshot composite to reflect details gleaned from the victim's facial morphology. In 2019, with the help of Parabon and genetic genealogy, the body was identified as Roger Kelso, born in Fort Wayne, Indiana in 1943. The murderer was not identified. 
Police in Tacoma, Washington, disclosed Parabon Snapshot reports to the public on 6 April 2016 for two male suspects believed to be individually responsible for the deaths of Michella Welch (age 12) and Jennifer Bastian (age 13), both abducted from Tacoma's North End area in 1986, just four months apart. Investigators long believed one person committed both crimes because of their many similarities. However, 2016 DNA testing proved two individuals were separately involved. Snapshot descriptions of the two killers were released to aid the public in generating new leads for the investigations. In 2018 Gary Charles Hartman and Robert D. Washburn were charged with the murders of the two girls. In 2019 Washington State passed a law called "Jennifer and Michella's law" named after the two murdered girls. This law allowed police to take DNA samples from people convicted of indecent exposure and from dead sex offenders. Also on 6 April 2016, police in Athens Ohio released a Snapshot composite of an active sexual predator linked to at least three attacks, the most recent in December 2015 near Ohio University. On 15 April 2016, the Hallandale Beach Florida Police Department released a Snapshot composite of a suspect believed to be responsible for the murders of Toronto residents David “Donny” Pichosky and Rochelle Wise. It was the first time a Snapshot composite of a female was released to the public. On 21 April 2016, police in Windsor, Canada, released a Snapshot composite of the suspect responsible for the abduction and murder of Ljubica Topic in 1971. It was the first public release of a Snapshot composite outside of the United States and, at the time, the oldest case to which the technology had been applied. On 11 May, the Loudoun County Sheriff's Office in Virginia released a Snapshot composite of a suspect responsible for abducting and sexually assaulting a 9-year-old girl in 1987. On 16 May 2016, eve of the third anniversary of veteran John “Jack” Fay's murder, the Warwick Rhode Island Police Department released a Snapshot composite produced using DNA taken from a hammer found near the crime scene. Police hoped the composite would generate fresh leads in a case that may have involved multiple assailants. On 3 May 2017 Idaho Falls, Idaho Police released a DNA phenotype composite sketch from DNA found at the murder scene of Angie Dodge on 13 June 1996. Police hoped the widespread distribution of the composite sketch would generate new leads into the suspect. Excerpt from Idaho Falls Police Department Press release: "The crime scene and evidence collected at the scene, including the collection and extraction of one major and two minor DNA profiles, indicates that there was more than one individual involved in the death of Angie Dodge. With current technologies, the major profile collected is the only viable DNA sample that can be used to make an identification." Christopher Tapp was released in 2017 after spending 20 years in jail for taking part in the rape and murder of Angie Dodge although his DNA did not match DNA at the crime scene. In May 2019 Brian Leigh Dripps confessed to the murder of Dodge after Idaho Falls, Idaho Police charged Dripps. Dripps DNA matched DNA left at the crime scene. Parabon Nanolabs had helped investigate this case using DNA genetic genealogy and GEDmatch. == See also == DNA Phenotype Genotyping Genome-wide association study Single-nucleotide polymorphisms Predictive modeling == References == == External links == Parabon NanoLabs Identitas
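The IrisPlex-style systems described above combine genotypes at a small panel of SNPs in a multinomial logistic model and report the most probable pigmentation class. The sketch below shows the general shape of such a prediction for eye color. The SNP identifiers are those listed in the article, but every coefficient and the example genotype are invented for illustration and do not reproduce the published IrisPlex parameters.

```python
# Toy multinomial-logistic sketch of IrisPlex-style eye-color prediction.
# All weights below are invented; they are not the published model.
import math

IRISPLEX_SNPS = ["rs12913832", "rs1800407", "rs12896399",
                 "rs16891982", "rs1393350", "rs12203592"]

# Hypothetical per-allele weights for the "blue" and "intermediate" classes
# (brown serves as the reference class, as in a standard multinomial logit).
WEIGHTS = {
    "blue":         {"intercept": 1.0, "rs12913832": 2.5, "rs1800407": 0.6,
                     "rs12896399": 0.3, "rs16891982": -0.4, "rs1393350": 0.5,
                     "rs12203592": 0.2},
    "intermediate": {"intercept": 0.2, "rs12913832": 1.0, "rs1800407": 0.4,
                     "rs12896399": 0.1, "rs16891982": -0.2, "rs1393350": 0.3,
                     "rs12203592": 0.1},
}

def predict_eye_color(genotype):
    """genotype: dict mapping SNP id -> minor-allele count (0, 1 or 2)."""
    scores = {"brown": 0.0}   # reference class score
    for color, w in WEIGHTS.items():
        scores[color] = w["intercept"] + sum(
            w[snp] * genotype.get(snp, 0) for snp in IRISPLEX_SNPS)
    total = sum(math.exp(s) for s in scores.values())
    probs = {c: math.exp(s) / total for c, s in scores.items()}
    return max(probs, key=probs.get), probs

# Example: a genotype carrying two copies of the rs12913832 minor allele.
call, probabilities = predict_eye_color({"rs12913832": 2, "rs16891982": 1})
print(call, {c: round(p, 2) for c, p in probabilities.items()})
```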
Wikipedia/DNA_phenotyping
The prohibition of drugs through sumptuary legislation or religious law is a common means of attempting to prevent the recreational use of certain intoxicating substances. An area has a prohibition of drugs when its government uses the force of law to punish the use or possession of drugs which have been classified as controlled. A government may simultaneously have systems in place to regulate both controlled and non-controlled drugs. Regulation controls the manufacture, distribution, marketing, sale, and use of certain drugs, for instance through a prescription system. For example, in some states, the possession or sale of amphetamines is a crime unless a patient has a physician's prescription for the drug; having a prescription authorizes a pharmacy to sell and a patient to use a drug that would otherwise be prohibited. Although prohibition mostly concerns psychoactive drugs (which affect mental processes such as perception, cognition, and mood), prohibition can also apply to non-psychoactive drugs, such as anabolic steroids. Many governments do not criminalize the possession of a limited quantity of certain drugs for personal use, while still prohibiting their sale or manufacture, or possession in large quantities. Some laws (or judicial practice) set a specific quantity of a particular drug, possession above which is considered ipso jure to be evidence of trafficking or sale of the drug. Some Islamic countries prohibit the use of alcohol (see list of countries with alcohol prohibition). Many governments levy a tax on alcohol and tobacco products, and restrict alcohol and tobacco from being sold or gifted to a minor. Other common restrictions include bans on outdoor drinking and indoor smoking. In the early 20th century, many countries had alcohol prohibition. These include the United States (1920–1933), Finland (1919–1932), Norway (1916–1927), Canada (1901–1948), Iceland (1915–1922) and the Russian Empire/USSR (1914–1925). In fact, the first international treaty to control a psychoactive substance, adopted in 1890, concerned alcoholic beverages (the Brussels Conference). The first treaty on opium only arrived two decades later, in 1912. == Definitions == Drugs, in the context of prohibition, are any of a number of psychoactive substances whose use a government or religious body seeks to control. What constitutes a drug varies by century and belief system. What constitutes a psychoactive substance, by contrast, is relatively well understood by modern science. Examples range from caffeine, found in coffee, tea, and chocolate, and nicotine, found in tobacco products, to botanical extracts such as morphine and heroin and synthetic compounds such as MDMA and fentanyl. Almost without exception, these substances also have a medical use, in which case they are called pharmaceutical drugs or just pharmaceuticals. The use of medicine to save or extend life or to alleviate suffering is uncontroversial in most cultures. Prohibition applies to certain conditions of possession or use. Recreational use refers to the use of substances primarily for their psychoactive effect outside of a clinical situation or doctor's care. In the twenty-first century, caffeine has pharmaceutical uses. Caffeine is used to treat bronchopulmonary dysplasia. In most cultures, caffeine in the form of coffee or tea is unregulated. Over 2.25 billion cups of coffee are consumed in the world every day. Some religions, including the Church of Jesus Christ of Latter-day Saints, prohibit coffee. They believe that it is both physically and spiritually unhealthy to consume coffee. 
A government's interest to control a drug may be based on its negative effects on its users, or it may simply have a revenue interest. The British parliament prohibited the possession of untaxed tea with the imposition of the Tea Act of 1773. In this case, as in many others, it is not a substance that is prohibited, but the conditions under which it is possessed or consumed. Those conditions include matters of intent, which makes the enforcement of laws difficult. In Colorado possession of "blenders, bowls, containers, spoons, and mixing devices" is illegal if there was intent to use them with drugs. Many drugs, beyond their pharmaceutical and recreational uses, have industrial uses. Nitrous oxide, or laughing gas is a dental anesthetic, also used to prepare whipped cream, fuel rocket engines, and enhance the performance of race cars. Ethanol, or drinking alcohol, is also used as a fuel, industrial solvent and disinfectant. == History == The cultivation, use, and trade of psychoactive and other drugs has occurred since ancient times. Concurrently, authorities have often restricted drug possession and trade for a variety of political and religious reasons. In the 20th century, the United States led a major renewed surge in drug prohibition called the "War on Drugs". === Early drug laws === The prohibition on alcohol under Islamic Sharia law, which is usually attributed to passages in the Qur'an, dates back to the early seventh century. Although Islamic law is often interpreted as prohibiting all intoxicants (not only alcohol), the ancient practice of hashish smoking has continued throughout the history of Islam, against varying degrees of resistance. A major campaign against hashish-eating Sufis were conducted in Egypt in the 11th and 12th centuries resulting among other things in the burning of fields of cannabis. Though the prohibition of illegal drugs was established under Sharia law, particularly against the use of hashish as a recreational drug, classical jurists of medieval Islamic jurisprudence accepted the use of hashish for medicinal and therapeutic purposes, and agreed that its "medical use, even if it leads to mental derangement, should remain exempt [from punishment]". In the 14th century, the Islamic scholar Az-Zarkashi spoke of "the permissibility of its use for medical purposes if it is established that it is beneficial". In the Ottoman Empire, Murad IV attempted to prohibit coffee drinking to Muslims as haraam, arguing that it was an intoxicant, but this ruling was overturned soon after he died in 1640. The introduction of coffee in Europe from Muslim Turkey prompted calls for it to be banned as the devil's work, although Pope Clement VIII sanctioned its use in 1600, declaring that it was "so delicious that it would be a pity to let the infidels have exclusive use of it". Bach's Coffee Cantata, from the 1730s, presents a vigorous debate between a girl and her father over her desire to consume coffee. The early association between coffeehouses and seditious political activities in England led to the banning of such establishments in the mid-17th century. A number of Asian rulers had similarly enacted early prohibitions, many of which were later forcefully overturned by Western colonial powers during the 18th and 19th centuries. In 1360, for example, King Ramathibodi I, of Ayutthaya Kingdom (now Thailand), prohibited opium consumption and trade. The prohibition lasted nearly 500 years until 1851 when King Rama IV allowed Chinese migrants to consume opium. 
The Konbaung Dynasty prohibited all intoxicants and stimulants during the reign of King Bodawpaya (1781–1819). After Burma became a British colony, the restrictions on opium were abolished and the colonial government established monopolies selling Indian-produced opium. In late Qing China, opium imported by foreign traders, such as those employed by Jardine Matheson and the East India Company, was consumed by all social classes in Southern China. Between 1821 and 1837, imports of the drug increased fivefold. The wealth drain and widespread social problems that resulted from this consumption prompted the Chinese government to attempt to end the trade. This effort was initially successful, with Lin Zexu ordering the destruction of opium at Humen in June 1839. However, the opium traders lobbied the British government to declare war on China, resulting in the First Opium War. The Qing government was defeated and the war ended with the Treaty of Nanking, which legalized opium trading in Chinese law. === First modern drug regulations === The first modern law in Europe for the regulation of drugs was the Pharmacy Act 1868 in the United Kingdom. There had been previous moves to establish the medical and pharmaceutical professions as separate, self-regulating bodies, but the General Medical Council, established in 1863, unsuccessfully attempted to assert control over drug distribution. The act set controls on the distribution of poisons and drugs. Poisons could only be sold if the purchaser was known to the seller or to an intermediary known to both, and drugs, including opium and all preparations of opium or of poppies, had to be sold in containers with the seller's name and address. Despite the reservation of opium to professional control, general sales did continue to a limited extent, with mixtures containing less than 1 percent opium being unregulated. After the legislation passed, the death rate caused by opium immediately fell from 6.4 per million population in 1868 to 4.5 in 1869. Deaths among children under five dropped from 20.5 per million population between 1863 and 1867 to 12.7 per million in 1871 and further declined to between 6 and 7 per million in the 1880s. In the United States, the first drug law was passed in San Francisco in 1875, banning the smoking of opium in opium dens. The reason cited was that "many women and young girls, as well as young men of a respectable family, were being induced to visit the Chinese opium-smoking dens, where they were ruined morally and otherwise." This was followed by other laws throughout the country, and by federal laws that barred Chinese people from trafficking in opium. Though the laws affected the use and distribution of opium by Chinese immigrants, no action was taken against the producers of such products as laudanum, a tincture of opium and alcohol, commonly taken as a panacea by white Americans. The distinction between its use by white Americans and Chinese immigrants was thus a form of racial discrimination, as it was based on the form in which it was ingested: Chinese immigrants tended to smoke it, while it was often included in various kinds of generally liquid medicines often (but not exclusively) used by Americans of European descent. The laws targeted opium smoking, but not other methods of ingestion. Britain passed the All-India Opium Act of 1878, which limited recreational opium sales to registered Indian opium-eaters and Chinese opium-smokers and prohibited its sale to emigrant workers from British Burma. 
Following the passage of a regional law in 1895, Australia's Aboriginals Protection and Restriction of the Sale of Opium Act 1897 addressed opium addiction among Aborigines, though it soon became a general vehicle for depriving them of basic rights by administrative regulation. Opium sale was prohibited to the general population in 1905, and smoking and possession were prohibited in 1908. Despite these laws, the late 19th century saw an increase in opiate consumption. This was due to the prescribing and dispensing of legal opiates by physicians and pharmacists to relieve menstruation pain. It is estimated that between 150,000 and 200,000 opiate addicts lived in the United States at the time, and a majority of these addicts were women. === Changing attitudes and the drug prohibition campaign === Foreign traders, including those employed by Jardine Matheson and the East India Company, smuggled opium into China in order to balance high trade deficits. Chinese attempts to outlaw the trade led to the First Opium War and the subsequent legalization of the trade at the Treaty of Nanking. Attitudes towards the opium trade were initially ambivalent, but in 1874 the Society for the Suppression of the Opium Trade was formed in England by Quakers led by the Rev. Frederick Storrs-Turner. By the 1890s, increasingly strident campaigns were waged by Protestant missionaries in China for its abolition. The first such society was established at the 1890 Shanghai Missionary Conference, where British and American representatives, including John Glasgow Kerr, Arthur E. Moule, Arthur Gostick Shorrock and Griffith John, agreed to establish the Permanent Committee for the Promotion of Anti-Opium Societies. Due to increasing pressure in the British parliament, the Liberal government under William Ewart Gladstone approved the appointment of a Royal Commission on Opium to India in 1893. The commission was tasked with ascertaining the impact of Indian opium exports to the Far East, and to advise whether the trade should be banned and opium consumption itself banned in India. After an extended inquiry, the Royal Commission rejected the claims made by the anti-opium campaigners regarding the supposed societal harm caused by the trade and the issue was finalized for another 15 years. The missionary organizations were outraged over the Royal Commission on Opium's conclusions and set up the Anti-Opium League in China; the league gathered data from every Western-trained medical doctor in China and published Opinions of Over 100 Physicians on the Use of Opium in China. This was the first anti-drug campaign to be based on scientific principles, and it had a tremendous impact on the state of educated opinion in the West. In England, the home director of the China Inland Mission, Benjamin Broomhall, was an active opponent of the opium trade, writing two books to promote the banning of opium smoking: The Truth about Opium Smoking and The Chinese Opium Smoker. In 1888, Broomhall formed and became secretary of the Christian Union for the Severance of the British Empire with the Opium Traffic and editor of its periodical, National Righteousness. He lobbied the British parliament to ban the opium trade. Broomhall and James Laidlaw Maxwell appealed to the London Missionary Conference of 1888 and the Edinburgh Missionary Conference of 1910 to condemn the continuation of the trade. 
As Broomhall lay dying, an article from The Times was read to him with the welcome news that an international agreement had been signed ensuring the end of the opium trade within two years. In 1906, a motion to 'declare the opium trade "morally indefensible" and remove Government support for it', initially unsuccessfully proposed by Arthur Pease in 1891, was put before the House of Commons. This time the motion passed. The Qing government banned opium soon afterward. These changing attitudes led to the founding of the International Opium Commission in 1909. An International Opium Convention was signed by 13 nations at The Hague on January 23, 1912, during the First International Opium Conference. This was the first international drug control treaty and it was registered in the League of Nations Treaty Series on January 23, 1922. The Convention provided that "The contracting Powers shall use their best endeavors to control or to cause to be controlled, all person manufacturing, importing, selling, distributing, and exporting morphine, cocaine, and their respective salts, as well as the buildings in which these persons carry such an industry or trade." The treaty became international law in 1919 when it was incorporated into the Treaty of Versailles. The role of the commission was passed to the League of Nations, and all signatory nations agreed to prohibit the import, sale, distribution, export, and use of all narcotic drugs, except for medical and scientific purposes. === Prohibition === In the UK the Defence of the Realm Act 1914, passed at the onset of the First World War, gave the government wide-ranging powers to requisition the property and to criminalize specific activities. A moral panic was whipped up by the press in 1916 over the alleged sale of drugs to the troops of the British Indian Army. With the temporary powers of DORA, the Army Council quickly banned the sale of all psychoactive drugs to troops, unless required for medical reasons. However, shifts in the public attitude towards drugs—they were beginning to be associated with prostitution, vice and immorality—led the government to pass further unprecedented laws, banning and criminalising the possession and dispensation of all narcotics, including opium and cocaine. After the war, this legislation was maintained and strengthened with the passing of the Dangerous Drugs Act 1920 (10 & 11 Geo. 5. c. 46). Home Office control was extended to include raw opium, morphine, cocaine, ecogonine and heroin. Hardening of Canadian attitudes toward Chinese-Canadian opium users and fear of a spread of the drug into the white population led to the effective criminalization of opium for nonmedical use in Canada between 1908 and the mid-1920s. The Mao Zedong government nearly eradicated both consumption and production of opium during the 1950s using social control and isolation. Ten million addicts were forced into compulsory treatment, dealers were executed, and opium-producing regions were planted with new crops. Remaining opium production shifted south of the Chinese border into the Golden Triangle region. The remnant opium trade primarily served Southeast Asia, but spread to American soldiers during the Vietnam War, with 20 percent of soldiers regarding themselves as addicted during the peak of the epidemic in 1971. In 2003, China was estimated to have four million regular drug users and one million registered drug addicts. In the US, the Harrison Act was passed in 1914, and required sellers of opiates and cocaine to get a license. 
While originally intended to regulate the trade, it soon became a prohibitive law, eventually becoming legal precedent that any prescription for a narcotic given by a physician or pharmacist – even in the course of medical treatment for addiction – constituted conspiracy to violate the Harrison Act. In 1919, the Supreme Court ruled in Doremus that the Harrison Act was constitutional and in Webb that physicians could not prescribe narcotics solely for maintenance. In Jin Fuey Moy v. United States, the court upheld that it was a violation of the Harrison Act even if a physician provided prescription of a narcotic for an addict, and thus subject to criminal prosecution. This is also true of the later Marijuana Tax Act in 1937. Soon, however, licensing bodies did not issue licenses, effectively banning the drugs. The American judicial system did not initially accept drug prohibition. Prosecutors argued that possessing drugs was a tax violation, as no legal licenses to sell drugs were in existence; hence, a person possessing drugs must have purchased them from an unlicensed source. After some wrangling, this was accepted as federal jurisdiction under the interstate commerce clause of the U.S. Constitution. ==== Alcohol prohibition ==== The prohibition of alcohol commenced in Finland in 1919 and in the United States in 1920. Because alcohol was the most popular recreational drug in these countries, reactions to its prohibition were far more negative than to the prohibition of other drugs, which were commonly associated with ethnic minorities, prostitution, and vice. Public pressure led to the repeal of alcohol prohibition in Finland in 1932, and in the United States in 1933. Residents of many provinces of Canada also experienced alcohol prohibition for similar periods in the first half of the 20th century. In Sweden, a referendum in 1922 decided against an alcohol prohibition law (with 51% of the votes against and 49% for prohibition), but starting in 1914 (nationwide from 1917) and until 1955 Sweden employed an alcohol rationing system with personal liquor ration books ("motbok"). === War on Drugs === In response to rising drug use among young people and the counterculture movement, government efforts to enforce prohibition were strengthened in many countries from the 1960s onward. Support at an international level for the prohibition of psychoactive drug use became a consistent feature of United States policy during both Republican and Democratic administrations, to such an extent that US support for foreign governments has often been contingent on their adherence to US drug policy. Major milestones in this campaign include the introduction of the Single Convention on Narcotic Drugs in 1961, the Convention on Psychotropic Substances in 1971 and the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances in 1988. A few developing countries where consumption of the prohibited substances has enjoyed longstanding cultural support, long resisted such outside pressure to pass legislation adhering to these conventions. Nepal only did so in 1976. In 1972, United States President Richard Nixon announced the commencement of the so-called "War on Drugs". Later, President Reagan added the position of drug czar to the President's Executive Office. 
In 1973, New York introduced mandatory minimum sentences of 15 years to life imprisonment for possession of more than 113 grams (4 oz) of a so-called hard drug; these statutes were called the Rockefeller drug laws after New York Governor and later Vice President Nelson Rockefeller. Similar laws were introduced across the United States. California's broader 'three strikes and you're out' policy, adopted in 1994, was the first mandatory sentencing policy to gain widespread publicity and was subsequently adopted in most United States jurisdictions. This policy mandates life imprisonment for a third criminal conviction of any felony offense. A similar 'three strikes' policy was introduced to the United Kingdom by the Conservative government in 1997. This legislation enacted a mandatory minimum sentence of seven years for those convicted for a third time of a drug trafficking offense involving a class A drug. === Calls for legalization, relegalization or decriminalization === The terms relegalization, legalization, legal regulation, and decriminalization are used with very different meanings by different authors, which can be confusing when the intended meaning is not specified. Here are some variants: Sales of one or more drugs (e.g., marijuana) for personal use become legal, at least if sold in a certain way. Sales of extracts containing a specific substance become legal if sold in a certain way, for example on prescription. Use or possession of small amounts for personal use does not lead to incarceration if it is the only crime, but it is still illegal; the court or the prosecutor can impose a fine. (In that sense, Sweden both legalized and supported drug prohibition simultaneously.) Use or possession of small amounts for personal use does not lead to incarceration. The case is not treated in an ordinary court, but by a commission that may recommend treatment or sanctions including fines. (In that sense, Portugal both legalized and supported drug prohibition.) There are efforts around the world to promote the relegalization and decriminalization of drugs. These policies are often supported by proponents of liberalism and libertarianism on the grounds of individual freedom, as well as by leftists who believe prohibition to be a method of suppression of the working class by the ruling class. Prohibition of drugs is supported by proponents of conservatism as well as by various NGOs. A number of NGOs are aligned in support of drug prohibition as members of the World Federation Against Drugs. In the WFAD constitution, the "Declaration of the World Forum Against Drugs" (2008) advocates for "no other goal than a drug-free world", states that a balanced policy of drug abuse prevention, education, treatment, law enforcement, research, and supply reduction provides the most effective platform to reduce drug abuse and its associated harms, and calls on governments to consider demand reduction as one of their first priorities. It supports the UN drug conventions, the inclusion of cannabis as one of the "hard drugs", and the use of criminal sanctions "when appropriate" to deter drug use. It opposes legalization in any form, and harm reduction in general. According to some critics, drug prohibition is responsible for enriching "organised criminal networks", and the hypothesis that the prohibition of drugs generates violence is consistent with research covering long time series and cross-country comparisons. 
In the United Kingdom, where the principal piece of drug prohibition legislation is the Misuse of Drugs Act 1971, criticism includes: Drug classification: making a hash of it?, Fifth Report of Session 2005–06, House of Commons Science and Technology Committee, which said that the present system of drug classification is based on historical assumptions, not scientific assessment Development of a rational scale to assess the harm of drugs of potential misuse, David Nutt, Leslie A. King, William Saulsbury, Colin Blakemore, The Lancet, 24 March 2007, said the act is "not fit for purpose" and "the exclusion of alcohol and tobacco from the Misuse of Drugs Act is, from a scientific perspective, arbitrary" The Drug Equality Alliance (DEA) argue that the Government is administering the Act arbitrarily, contrary to its purpose, contrary to the original wishes of Parliament and therefore illegally. They are currently assisting and supporting several legal challenges to this alleged maladministration. In February 2008 the then-president of Honduras, Manuel Zelaya, called on the world to legalize drugs, in order, he said, to prevent the majority of violent murders occurring in Honduras. Honduras is used by cocaine smugglers as a transiting point between Colombia and the US. Honduras, with a population of 7 million, suffers an average of 8–10 murders a day, with an estimated 70% being a result of this international drug trade. The same problem is occurring in Guatemala, El Salvador, Costa Rica and Mexico, according to Zelaya. In January 2012 Colombian President Juan Manuel Santos made a plea to the United States and Europe to start a global debate about legalizing drugs. This call was echoed by the Guatemalan President Otto Pérez Molina, who announced his desire to legalize drugs, saying "What I have done is put the issue back on the table." In a report dealing with HIV in June 2014, the World Health Organization (WHO) of the UN called for the decriminalization of drugs particularly including injected ones. This conclusion put WHO at odds with broader long-standing UN policy favoring criminalization. Eight states of the United States (Alaska, California, Colorado, Maine, Massachusetts, Nevada, Oregon, and Washington), as well as the District of Columbia, have legalized the sale of marijuana for personal recreational use as of 2017, although recreational use remains illegal under U.S. federal law. The conflict between state and federal law is, as of 2018, unresolved. Since Uruguay in 2014 and Canada in 2018 legalized cannabis, the debate has known a new turn internationally. On March 14th, 2025, the United Nations Commission on Narcotic Drugs decided to create a panel of independent experts to rethink the global drug control regime. == Drug prohibition laws == The following individual drugs, listed under their respective family groups (e.g., barbiturates, benzodiazepines, opiates), are the most frequently sought after by drug users and as such are prohibited or otherwise heavily regulated for use in many countries: Among the barbiturates, pentobarbital (Nembutal), secobarbital (Seconal), and amobarbital (Amytal) Among the benzodiazepines, temazepam (Restoril; Normison; Euhypnos), flunitrazepam (Rohypnol; Hypnor; Flunipam), and alprazolam (Xanax) Cannabis products, e.g., marijuana, hashish, and hashish oil Among the dissociatives, phencyclidine (PCP), and ketamine are the most sought after. 
hallucinogens such as LSD, mescaline, peyote, and psilocybin Empathogen-entactogen drugs like MDMA ("ecstasy") Among the narcotics, it is opiates such as morphine and codeine, and opioids such as diacetylmorphine (Heroin), hydrocodone (Vicodin; Hycodan), oxycodone (Percocet; Oxycontin), hydromorphone (Dilaudid), and oxymorphone (Opana). Sedatives such as GHB and methaqualone (Quaalude) Stimulants such as cocaine, amphetamine (Adderall), dextroamphetamine (Dexedrine), methamphetamine (Desoxyn), methcathinone, and methylphenidate (Ritalin) The regulation of the above drugs varies in many countries. Alcohol possession and consumption by adults is today widely banned only in Islamic countries and certain states of India. Although alcohol prohibition was eventually repealed in the countries that enacted it, there are, for example, still parts of the United States that do not allow alcohol sales, though alcohol possession may be legal (see dry counties). New Zealand has banned the importation of chewing tobacco as part of the Smoke-free Environments Act 1990. In some parts of the world, provisions are made for the use of traditional sacraments like ayahuasca, iboga, and peyote. In Gabon, iboga (tabernanthe iboga) has been declared a national treasure and is used in rites of the Bwiti religion. The active ingredient, ibogaine, is proposed as a treatment of opioid withdrawal and various substance use disorders. In countries where alcohol and tobacco are legal, certain measures are frequently undertaken to discourage use of these drugs. For example, packages of alcohol and tobacco sometimes communicate warnings directed towards the consumer, communicating the potential risks of partaking in the use of the substance. These drugs also frequently have special sin taxes associated with the purchase thereof, in order to recoup the losses associated with public funding for the health problems the use causes in long-term users. Restrictions on advertising also exist in many countries, and often a state holds a monopoly on manufacture, distribution, marketing, and/or the sale of these drugs. === List of principal drug prohibition laws by jurisdiction (non-exhaustive) === Australia: Standard for the Uniform Scheduling of Medicines and Poisons Bangladesh: Narcotics Substances Control Act, 2018 Belize: Misuse of Drugs Act (Belize) Canada: Controlled Drugs and Substances Act Estonia: Narcotic Drugs and Psychotropic Substances Act (Estonia) Germany: Narcotic Drugs Act India: Narcotic Drugs and Psychotropic Substances Act (India) Netherlands: Opium Law New Zealand: Misuse of Drugs Act 1975 Pakistan: Control of Narcotic Substances Act 1997 Philippines: Comprehensive Dangerous Drugs Act of 2002 Poland: Drug Abuse Prevention Act 2005 Portugal: Decree-Law 15/93 Ireland: Misuse of Drugs Act (Ireland) South Africa: Drugs and Drug Trafficking Act 1992 Singapore: Misuse of Drugs Act (Singapore) Sweden: Lag om kontroll av narkotika (SFS 1992:860) Thailand: Psychotropic Substances Act (Thailand) and Narcotics Act United Kingdom: Misuse of Drugs Act 1971 and Drugs Act 2005 United States: Controlled Substances Act International: Single Convention on Narcotic Drugs === Legal dilemmas === The sentencing statutes in the United States Code that cover controlled substances are complicated. 
For example, a first-time offender convicted in a single proceeding for selling marijuana three times, and found to have carried a gun on him all three times (even if it was not used), is subject to a minimum sentence of 55 years in federal prison. In Hallucinations: Behavior, Experience, and Theory (1975), senior US government researchers Louis Jolyon West and Ronald K. Siegel explain how drug prohibition can be used for selective social control: The role of drugs in the exercise of political control is also coming under increasing discussion. Control can be through prohibition or supply. The total or even partial prohibition of drugs gives the government considerable leverage for other types of control. An example would be the selective application of drug laws ... against selected components of the population such as members of certain minority groups or political organizations. Linguist Noam Chomsky argues that drug laws are currently, and have historically been, used by the state to oppress sections of society it opposes: Very commonly substances are criminalized because they're associated with what's called the dangerous classes, poor people, or working people. So for example in England in the 19th century, there was a period when gin was criminalized and whiskey wasn't, because gin is what poor people drink. === Legal highs and prohibition === In 2013 the European Monitoring Centre for Drugs and Drug Addiction reported that there were 280 new legal drugs, known as "legal highs", available in Europe. One of the best known, mephedrone, was banned in the United Kingdom in 2010. On November 24, 2010, the U.S. Drug Enforcement Administration announced it would use emergency powers to ban many synthetic cannabinoids within a month. An estimated 73 new psychoactive synthetic drugs appeared on the UK market in 2012. The response of the Home Office has been to create a temporary class drug order which bans the manufacture, import, and supply (but not the possession) of named substances. === Corruption === In certain countries, there is concern that campaigns against drugs and organized crime are a cover for corrupt officials tied to drug trafficking themselves. In the United States, Federal Bureau of Narcotics chief Harry Anslinger's opponents accused him of taking bribes from the Mafia to enact prohibition and create a black market for alcohol. More recently in the Philippines, one death squad hitman told author Niko Vorobyov that he was being paid by military officers to eliminate those drug dealers who failed to pay a 'tax'. Under President Rodrigo Duterte, the Philippines has waged a bloody war against drugs that may have resulted in up to 29,000 extrajudicial killings. Social control involving cannabis has several aspects to consider: not only how legislative leaders vote on cannabis, but also the federal regulations and taxation that contribute to social control. For instance, according to a report from U.S. Customs and Border Protection, American industry, although barred from the main uses of marijuana, still relied on related products such as hemp seeds and oils, which led to the previously discussed Marijuana Tax Act. The act's provisions required importers to register, pay an annual tax of $24, and receive an official stamp. Stamps were then affixed to each original order form and recorded by the state revenue collector. 
A customs collector was then to maintain custody of imported marijuana at entry ports until the required documents were received, reviewed and approved. Shipments were subject to searches, seizures and forfeitures if any provisions of the law were not met. Violations would result in fines of no more than $2,000 or potential imprisonment for up to 5 years. This often created opportunities for corruption and for stolen imports that later fed smuggling, at times involving state officials and well-connected elites. == Penalties == === United States === Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of "soft drugs", such as cannabis, illegal, though some local governments have laws contradicting federal laws. In the U.S., the War on Drugs is thought to be contributing to a prison overcrowding problem. In 1996, 59.6% of prisoners were incarcerated for drug-related offenses. The U.S. population grew by about 25% from 1980 to 2000. In that same 20-year period, the U.S. prison population tripled, making the U.S. the world leader in both percentage and absolute number of citizens incarcerated. The United States has 5% of the world's population, but 25% of the prisoners. About 90% of United States prisoners are incarcerated in state jails. In 2016, about 572,000 (over 44%) of the 1.3 million people in these state jails were serving time for drug offenses; 728,000 were incarcerated for violent offenses. Data from the Federal Bureau of Prisons online statistics page state that 45.9% of federal prisoners were incarcerated for drug offenses as of December 2021. === European Union === In 2004, the Council of the European Union adopted a framework decision harmonizing the minimum penal provisions for illicit drug-related activities. In particular, article 2(9) stipulates that activities may be exempt from the minimum provisions "when it is committed by its perpetrators exclusively for their own personal consumption as defined by national law." This was done, in particular, to accommodate more liberal national systems such as the Dutch coffee shops (see below) or the Spanish Cannabis Social Clubs. ==== The Netherlands ==== In the Netherlands, cannabis and other "soft" drugs are decriminalised in small quantities. The Dutch government treats the problem as more of a public health issue than a criminal issue. Contrary to popular belief, cannabis is still technically illegal. Coffee shops that sell cannabis to people 18 or above are tolerated, and pay taxes like any other business for their cannabis and hashish sales, although distribution is a grey area that the authorities would rather not go into as it is not decriminalised. Many "coffee shops" are found in Amsterdam and cater mainly to the large tourist trade; the local consumption rate is far lower than in the US. 
The administrative bodies responsible for enforcing the drug policies include the Ministry of Health, Welfare and Sport, the Ministry of Justice, the Ministry of the Interior and Kingdom Relations, and the Ministry of Finance. Local authorities also shape local policy, within the national framework. Compared to other countries, Dutch drug consumption is around the European average, at six per cent regular use (twenty-one per cent at some point in life), and considerably lower than in the Anglo-Saxon countries, headed by the United States with eight per cent recurring use (thirty-four per cent at some point in life). === Australia === A Nielsen poll in 2012 found that only 27% of voters favoured decriminalisation. Australia has steep penalties for growing and using drugs, even for personal use, with Western Australia having the toughest laws. There is an associated anti-drug culture amongst a significant number of Australians. Law enforcement targets drugs, particularly in the party scene. In 2012, crime statistics in Victoria revealed that police were increasingly arresting users rather than dealers, and the Liberal government banned the sale of bongs that year. === Indonesia === Indonesia carries a maximum penalty of death for drug dealing, and a maximum of 15 years in prison for drug use. In 2004, Australian citizen Schapelle Corby was convicted of smuggling 4.4 kilograms of cannabis into Bali, a crime that carried a maximum penalty of death. Her trial ended in a guilty verdict and a sentence of 20 years' imprisonment. Corby claimed to be an unwitting drug mule. Australian citizens known as the "Bali Nine" were caught smuggling heroin. Two of the nine, Andrew Chan and Myuran Sukumaran, were executed on April 29, 2015, along with six other foreign nationals. In August 2005, Australian model Michelle Leslie was arrested with two ecstasy pills. She pleaded guilty to possession and in November 2005 was sentenced to three months' imprisonment, which she was deemed to have already served; she was released from prison immediately upon her admission of guilt on the charge of possession. At the 1961 Single Convention on Narcotic Drugs, Indonesia, along with India, Turkey, Pakistan and some South American countries, opposed the criminalisation of drugs. === Republic of China (Taiwan) === Taiwan carries a maximum penalty of death for drug trafficking, while tobacco and wine are classified as legal recreational drugs. The Department of Health is in charge of drug prohibition. == Cost == In 2020, the direct cost of drug prohibition to United States taxpayers was estimated at over $40 billion annually. Prohibition can increase organized crime, government corruption, and mass incarceration via the trade in illegal drugs, while racial and gender disparities in enforcement are evident. Although drug prohibition is often portrayed by proponents as a measure to improve public health, evidence is lacking. In 2016, the Johns Hopkins–Lancet Commission concluded that the "harms of prohibition far outweigh the benefits", citing increased risk of overdoses and HIV infection and detrimental effects on the social determinants of health. Some proponents argue that drug prohibition's effect on suppressing usage rates (although the magnitude of this effect is unknown) outweighs the negative effects of prohibition. Alternative approaches to prohibition include drug legalization, drug decriminalization, and government monopoly. 
== See also == Alcohol law Arguments for and against drug prohibition Chasing the Scream Drug liberalization Demand reduction Drug policy of the Soviet Union Harm reduction List of anti-cannabis organizations Medellín Cartel Mexican drug war Puerto Rican drug war Prohibitionism Tobacco control War on Drugs US specific: Allegations of CIA drug trafficking School district drug policies Drug Free America Foundation Drug Policy Alliance DrugWarRant Gary Webb Marijuana Policy Project National Organization for the Reform of Marijuana Laws Students for Sensible Drug Policy Woman's Christian Temperance Union == References == == Further reading == == External links == Making Contact: The Mission to End Prohibition. Radio piece featuring LEAP founder and former narcotics officer Jack Cole, and Drug Policy Alliance founder Ethan Nadelmann EMCDDA – Decriminalisation in Europe? Recent developments in legal approaches to drug use Archived January 12, 2007, at the Wayback Machine. 10 Downing Street's Strategy Unit Drugs Report War on drugs Archived April 30, 2011, at the Wayback Machine Part I: Winners, documentary (50 min) explaining 'War on Drugs' by Tegenlicht of VPRO Dutch television. After short introduction in Dutch (1 min), English spoken. Broadband internet needed. War on drugs Archived April 30, 2011, at the Wayback Machine Part II: Losers, documentary (50 min) showing downside of the 'War on Drugs' by Tegenlicht of VPRO Dutch television. After short introduction in Dutch (1 min), English spoken. Broadband internet needed. After the War on Drugs: Options for Control (Report) The Drug War as a Socialist Enterprise by Milton Friedman Free from the Nightmare of Prohibition Archived February 23, 2006, at the Wayback Machine by Harry Browne Prohibition news page – Alcohol and Drugs History Society Drugs and conservatives should go together
Wikipedia/Illicit_drugs
The Journal of Forensic Sciences (JFS) is a bimonthly peer-reviewed scientific journal and the official publication of the American Academy of Forensic Sciences, published by Wiley-Blackwell. It covers all aspects of forensic science. The mission of the JFS is to advance forensic science research, education and practice by publishing peer-reviewed manuscripts of the highest quality, with the aim of strengthening the scientific foundation of forensic science in legal and regulatory communities around the world. == Abstracting and indexing == The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.832. == References == == External links == Official website
Wikipedia/Journal_of_Forensic_Sciences
Wildlife forensic science is forensic science applied to legal issues involving wildlife. Wildlife forensic science also deals with the conservation and identification of rare species and is a useful tool for non-invasive studies. Its methods can be used to determine the relatedness of animals in an area, allowing investigators to identify rare and endangered species that are candidates for genetic rescue. Techniques such as single-strand conformational polymorphism (SSCP) gel electrophoresis, microscopy, DNA barcoding, mitochondrial microsatellite analysis, and DNA and isotope analysis can identify species and individual animals in most cases, provided suitable samples have been obtained. Unlike human identification, animal identification requires determining the animal's family, genus, species, and sex in order to individualize it, typically through DNA-based analyses. == History == Wildlife forensic science stems from the various issues that arise in dealing with wildlife crime. Wildlife crime includes actions such as wildlife trafficking, poaching, wildlife cruelty, and habitat destruction. Wildlife crime can be essentially anything that threatens the existence of a species, whether animal, plant, bacterium, or fungus. Wildlife crime also creates a variety of problems beyond the wildlife itself, including threats to ecological stability, economies, public health, and criminal justice. Among forms of international crime, wildlife crime is the third-largest illegal trade behind drugs and firearms, and potentially generates $20 billion per year. The two main ways to prevent wildlife crime are legislation, which helps protect and identify species, and the application of wildlife forensic science, which examines the biological aspects of the crime and provides supporting evidence for wildlife protection laws. Wildlife forensic science has helped support the creation of various wildlife protection acts, such as the Endangered Species Act and the Lacey Act, among others. === Endangered Species Act === The Endangered Species Act (ESA) was developed in the United States and signed by President Richard M. Nixon on December 28, 1973. The overarching goal of the ESA was to conserve and protect both wildlife and their habitats across the globe. The act aimed to prevent extinction and encourage the recovery of organisms, and also included protection for the ecosystems in which wildlife resides. The act developed from wildlife being threatened in the United States, and it also encouraged the creation of various treaties with countries across the globe in order to protect species, including migratory bird treaties with Canada, Mexico, and Japan and the creation of a convention for nature protection and wildlife preservation. === Lacey Act === The Lacey Act was enacted in 1900 and has since gone through various expansions over the years to protect wildlife and enforce wildlife laws. The act was initially aimed at preventing hunters from killing game illegally and then transporting it across state lines. The Lacey Act has gone through various amendments, including in 2008 and 2009, which expanded the act's reach; these updates extended protection to timber and timber products. The Lacey Act now focuses on the illegal trafficking of plants as well as wildlife.
Importing, exporting, transportation, and the sale and purchase of species are all now covered by the modern Lacey Act. === Migratory Bird Treaty Act === The Migratory Bird Treaty Act was enacted in 1918 and implemented treaties with Canada (1916), Mexico (1936), Japan (1972), and Russia (1976) to protect migratory bird populations. A migratory bird is protected if it meets any one of the following three criteria: it occurs naturally in the United States or is listed among the birds covered by the international treaties with Canada, Mexico, Japan, and Russia; it results from a taxonomic split of a species that was previously on the list and meets the first criterion; or new evidence shows that the bird once occurred naturally in the United States. === Marine Mammal Protection Act === The Marine Mammal Protection Act protects a variety of mammals that rely on the ocean as a main resource for survival. This includes mammals that live entirely in the ocean, such as whales, manatees, and dolphins, and animals that depend on the ocean but also spend time on land, such as walruses and polar bears. The MMPA protects against any form of collection or killing in U.S. waters or by U.S. citizens. == Threats == === Wildlife trafficking === Wildlife trafficking includes the trading of live animals, or of parts of wild animals, that are taken from the wild and sold. Wildlife trade generates a large amount of revenue each year, amounting to billions of dollars. Various animals are trafficked for the pet trade, such as birds, reptiles, and corals. Animal parts that are commonly traded include bushmeat, animal horns for medicinal and ornamental purposes, and materials used to make fashion products. A prime example of a trafficked animal is the pangolin, which is often trafficked for its scales. Animals across all taxa are affected by wildlife trafficking. Wildlife forensic science comes into play throughout wildlife trafficking, as scientists use DNA to determine information about the species being trafficked. === Poaching === Poaching is a complex and global problem. Part of what makes poaching such a complex issue is that it complicates conservation and is also linked to poverty. Thousands of species face poaching, including animals such as African elephants and rhinoceroses. Products from poaching can include ivory, animal skins, bones, and bushmeat. These items may be sold as they are or turned into leather, traditional medicines, ornaments, and more. In addition, poaching is not just a wildlife problem; poaching of ornamental plants occurs as well. === Animal cruelty === After the 2020 production of Tiger King: Murder, Mayhem and Madness, people are more aware of roadside zoos. These zoos have brought to light possible flaws in legislation and the limited protection of endangered wildlife. Beyond the legislative implications of operations such as roadside zoos, however, various aspects of wildlife crime lead to animal cruelty and animal abuse. These situations often result in animals being killed, which lends itself to wildlife forensic science as a means of exploring the aftermath of such events. Wildlife forensics can assist in determining what species of animals may have been in a location, as well as determining what may have happened to wildlife killed through cruelty.
This can lead to convictions in cases against people operating roadside zoos and in cases of general wildlife cruelty. In addition, investigations of animal cruelty can lead to new legislation to protect wildlife. === Habitat destruction === Habitat destruction is an additional threat that faces wildlife. It includes habitat fragmentation, the introduction of invasive species, and outright destruction of habitat. To help combat habitat destruction, genetic sequencing and the classification of morphological structure play key roles in protecting an area. Wildlife forensic science has been used to sequence animals such as pangolins, and plants such as orchids, in order to identify the species living in areas that are being destroyed and to help provide evidence and support for the protection of those areas. Naming species is a key issue in being able to conserve an area, and wildlife forensics can assist in this via genetic analysis. == Types of evidence == In order to understand wildlife crime, and in order to carry out wildlife forensic science, evidence is needed. Various kinds of evidence are used to understand the crime and the species affected. Evidence can take a variety of forms, such as the entire organism, living or dead; pieces of the animal (fur, feathers, bones, and organs); or products created from the organism. These products may include jewelry, processed meats, clothing, ornaments, and medicines. == Techniques == === Single-strand conformational polymorphism gel electrophoresis === SSCP is a simple and sensitive technique used to identify mutations and to genotype animals. It is based on the fact that single-stranded DNA has a defined conformation: an altered conformation due to a single base change in the sequence can cause single-stranded DNA to migrate differently under non-denaturing electrophoresis conditions, so wild-type and mutant DNA samples display different band patterns. There are four steps to this method: polymerase chain reaction (PCR) amplification of the DNA sequence of interest; denaturation of the double-stranded PCR products; cooling of the denatured (single-stranded) DNA to maximize self-annealing; and detection of mobility differences between the single-stranded DNAs by electrophoresis under non-denaturing conditions. === DNA and isotope analysis === DNA analysis is used to help determine the species of an animal. Nucleotide sequencing is the key method, followed by comparison of the sequenced DNA fragments with reference DNA sequences from different species. The similarity, or sequence homology, between the unknown and reference sequences makes it possible to ascertain the species of origin. This technique is also used to determine the relatedness of individuals of a rare species and to check for signs of inbreeding depression in the target species, to see whether it is a candidate for genetic rescue. Isotope analysis is used in the same vein to determine the composition of the habitat in which the animal resides. === Mitochondrial microsatellite analysis === Mitochondrial microsatellite analysis methods are often performed to individualize the remains of an animal and to determine whether a species is endangered or whether an animal was hunted out of season. Mitochondrial DNA reference profiles can easily be obtained from public databases such as the International Nucleotide Sequence Database Collaboration (INSDC), the European Molecular Biology Laboratory (EMBL), and the Barcode of Life Data System (BOLD or BOLDSystems).
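Reference comparison of this kind is normally done with alignment tools (for example, BLAST-style searches against BOLD or GenBank records). Purely as a rough, hedged illustration of the underlying idea, the Python sketch below scores a query sequence against a few reference sequences by k-mer overlap and reports the closest one; the sequences, species names, and k-mer size are invented for the example and are not drawn from any real database.

```python
def kmers(seq, k=8):
    """Return the set of overlapping k-mers in an uppercased DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(query, reference, k=8):
    """Jaccard similarity between the k-mer sets of two sequences."""
    a, b = kmers(query, k), kmers(reference, k)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical reference "barcode" fragments; real work would query BOLD or GenBank.
references = {
    "Manis javanica (Sunda pangolin)": "ATGCTCATCCGAATCTTCAAACACGGAACCTTAGCCGCT",
    "Panthera tigris (tiger)":         "ATGACCAACATTCGAAAATCACACCCACTAATAAAAATT",
}

# Hypothetical sequence recovered from seized material.
query = "ATGCTCATCCGAATCTTCAAACACGGAACCTTAGCCGCT"

scores = {name: similarity(query, seq) for name, seq in references.items()}
best = max(scores, key=scores.get)
print(f"Closest reference: {best} (similarity {scores[best]:.2f})")
```

In real casework the comparison is statistical and alignment-based, and the result is interpreted against validated reference libraries rather than a raw similarity score.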
Mitochondrial DNA is used due to its high copy number and the presence of differences in mutation rates among closely related species. The cytochrome c oxidase subunit 1 (CO1) region (also known as the DNA barcode region) mutates at a lower rate and is used for higher-level taxonomic classification, whereas the control region and cytochrome b, which mutate at a higher rate, are used to distinguish more closely related taxa. === DNA barcoding === DNA barcoding is often used in wildlife forensic science cases to identify an unknown species found at a crime scene. Blood, hair, bone, and other genetic materials are first collected at the scene, and DNA is extracted from the samples. The DNA is then quantified and amplified by PCR, after which it is sequenced. Lastly, the sequenced DNA is compared to a DNA database for a possible identification of the unknown species. This technique is often used in poaching cases, animal abuse cases, and the killing of endangered animals. === Microscopy === In this technique, microscopes are used to examine genetic material down to the level of a single cell; it is used to look at recombination and to look for mutations in genes, and it has helped identify many deleterious alleles. === Ballistics === The science of wound ballistics is beginning to gain attention in wildlife forensics as a method for determining what type of firearm may have killed an animal. The focus is specifically on wound ballistics, that is, on the wound damage to the body of the organism. Wounds can be traced back to specific types of bullets and firearms, and may be useful in tracing crime back to particular parties or organizations. === Fingerprinting === Fingerprinting is a technique that is particularly common in human criminal casework and has over time begun to migrate to the wildlife forensics world. Fingerprinting can pick up a variety of marks; beyond fingerprints themselves, it can pick up impressions of most body parts. These marks can be found on most surfaces and can be either patent or latent. Patent fingerprints are visible to the naked eye and can be collected by photography. Latent prints can be collected by various methods, including powders, fuming, and chemical, optical, and instrumental techniques. In wildlife forensic science, fingerprinting has been used to lift latent marks off pangolin scales, and studies have also recovered fingermarks on raptor feathers using magnetic and fluorescent powders. New successes are also being reported in pulling fingerprints off eggshells, ivory, teeth, bone, leathers, and antlers. === Forensic entomology === Forensic entomology is the use of insects to determine information about criminal cases. Forensic entomology is commonly accepted in legal cases and is particularly helpful in determining time of death for both human and wildlife crimes. Among the key insects in these studies are blow flies, which deposit eggs on bodies; the time of hatching of the eggs can be important for determining time of death. An example of forensic entomology occurred in Manitoba, Canada, where two young black bears were found disemboweled. Officers collected blow fly eggs from the deceased cubs, and the data were used to determine time of death. These data were used in the conviction and proved valuable in a wildlife crime context.
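Entomological time-of-death estimates of this kind are commonly based on thermal accumulation: an insect species needs a roughly fixed number of accumulated degree hours (temperature above a developmental base, summed over time) to reach a given stage. The sketch below is a simplified, hypothetical illustration of that calculation only; the base temperature and degree-hour requirement are invented placeholders, whereas real casework uses validated, species-specific development data and scene temperature records.

```python
def minimum_pmi_hours(hourly_temps_c, base_temp_c, required_adh):
    """
    Walk backwards through hourly temperatures (most recent first) and
    accumulate degree hours above the developmental base temperature.
    Returns the number of hours needed to reach the stage's requirement,
    i.e. a rough minimum post-mortem interval, or None if the records
    do not cover enough time.
    """
    accumulated = 0.0
    for hours_elapsed, temp in enumerate(hourly_temps_c, start=1):
        accumulated += max(temp - base_temp_c, 0.0)
        if accumulated >= required_adh:
            return hours_elapsed
    return None

# Hypothetical inputs: 72 hours of scene temperatures, newest first.
temps = [18.0, 17.5, 16.0] * 24
BASE_TEMP = 10.0        # placeholder developmental threshold (deg C)
REQUIRED_ADH = 400.0    # placeholder degree hours to reach the observed stage

print(minimum_pmi_hours(temps, BASE_TEMP, REQUIRED_ADH), "hours (at least)")
```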
=== Forensic pathology === Forensic pathology developed out of the veterinary profession; it began as a way to study disease in domestic animals and eventually migrated to wildlife. In a forensic sense, wildlife pathology has been used to investigate the cause and circumstances of deaths of threatened species. Forensic pathology helps provide baseline data and basic samples, such as blood or feces. It also includes full biopsies, which can help analyze tissue and organ changes that may have led to the death of an animal. One of the most common examinations in forensic pathology is the necropsy, but pre-mortem examinations occur as well. == Laboratories and organizations == Various laboratories and organizations have formed in order to develop and perform wildlife forensic science. The Society for Wildlife Forensic Science and the Scientific Working Group for Wildlife Forensic Sciences were both formed in 2011. In addition, the Wildlife Forensic and Conservation Genetics Cell was formed by merging wildlife forensic and conservation genetics laboratories, in order to support the creation and enforcement of the Wildlife Protection Act. In addition to various laboratories, there are several organizations that also aid wildlife forensic science. The World Wildlife Fund helps provide education about wildlife crime and wildlife forensic science. CITES, TRAFFIC, and the IUCN also support wildlife forensic science and use the data from it to support the conservation of wildlife. Finally, Interpol, an organization that handles international crime, provides specific support for wildlife forensic science and its use in solving wildlife crimes. == Scope == While animals and plants are the victims in the crimes of illegal wildlife trade and animal abuse, society is also affected when those crimes are used to fund illegal drugs, weapons and terrorism. Links between human trafficking, public corruption and illegal fishing have also been reported. The continued development and integration of wildlife forensic science as a field will be critical for successful management of the many significant social and conservation issues related to the illegal wildlife trade and wildlife law enforcement. == See also == Marine forensics == References == == Further reading == == External links == INTERPOL Wildlife Crime Working Group National Fish and Wildlife Forensics Laboratory NOAA Marine Forensics Laboratory Society for Wildlife Forensic Science (SWFS) Article on SWGWILD (the Scientific Working Group for Wildlife Forensics) Italian National Reference Centre for Veterinary Forensic Medicine (CeMedForVet)
Wikipedia/Wildlife_forensic_science
DNA profiling (also called DNA fingerprinting and genetic fingerprinting) is the process of determining an individual's deoxyribonucleic acid (DNA) characteristics. DNA analysis intended to identify a species, rather than an individual, is called DNA barcoding. DNA profiling is a forensic technique in criminal investigations, comparing criminal suspects' profiles to DNA evidence so as to assess the likelihood of their involvement in the crime. It is also used in paternity testing, to establish immigration eligibility, and in genealogical and medical research. DNA profiling has also been used in the study of animal and plant populations in the fields of zoology, botany, and agriculture. == Background == Starting in the mid-1970s, scientific advances allowed the use of DNA as a material for the identification of an individual. The first patent covering the direct use of DNA variation for forensics (US5593832A) was filed by Jeffrey Glassberg in 1983, based upon work he had done while at Rockefeller University in the United States in 1981. British geneticist Sir Alec Jeffreys independently developed a process for DNA profiling in 1984 while working in the Department of Genetics at the University of Leicester. Jeffreys discovered that a DNA examiner could establish patterns in unknown DNA. These patterns were a part of inherited traits that could be used to advance the field of relationship analysis. These discoveries led to the first use of DNA profiling in a criminal case. The process, developed by Jeffreys in conjunction with Peter Gill and Dave Werrett of the Forensic Science Service (FSS), was first used forensically in solving the rapes and murders of two teenagers in Narborough, Leicestershire, in 1983 and 1986. In the murder inquiry, led by Detective David Baker, DNA contained within blood samples obtained voluntarily from around 5,000 local men who willingly assisted Leicestershire Constabulary with the investigation resulted in the exoneration of Richard Buckland, an initial suspect who had confessed to one of the crimes, and the subsequent conviction of Colin Pitchfork on January 2, 1988. Pitchfork, a local bakery employee, had coerced his coworker Ian Kelly to stand in for him when providing a blood sample; Kelly then used a forged passport to impersonate Pitchfork. Another coworker reported the deception to the police. Pitchfork was arrested, and his blood was sent to Jeffreys' lab for processing and profile development. Pitchfork's profile matched that of DNA left by the murderer, which confirmed Pitchfork's presence at both crime scenes; he pleaded guilty to both murders. Some years later, the chemical company Imperial Chemical Industries (ICI) introduced the first commercially available DNA profiling kit. Despite being a relatively recent field, DNA profiling has had a significant global influence on both the criminal justice system and society. Although 99.9% of human DNA sequences are the same in every person, enough of the DNA is different that it is possible to distinguish one individual from another, unless they are monozygotic (identical) twins. DNA profiling uses repetitive sequences that are highly variable, called variable number tandem repeats (VNTRs), in particular short tandem repeats (STRs), also known as microsatellites, and minisatellites. VNTR loci are similar between closely related individuals, but are so variable that unrelated individuals are unlikely to have the same VNTRs.
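To make the locus-by-locus idea concrete, the sketch below represents an STR profile as the pair of repeat counts observed at each locus and checks whether an evidence profile matches a reference profile at every locus. The locus names are real CODIS markers, but the allele calls are invented for illustration, and real casework comparisons involve quality checks and statistical weighting rather than a simple equality test.

```python
def profile(calls):
    """A profile: locus name -> unordered pair of allele repeat counts."""
    return {locus: tuple(sorted(alleles)) for locus, alleles in calls.items()}

# Hypothetical allele calls for illustration only.
evidence = profile({"D3S1358": (15, 17), "vWA": (16, 16), "FGA": (21, 24)})
suspect  = profile({"D3S1358": (17, 15), "vWA": (16, 16), "FGA": (21, 24)})

def full_match(a, b):
    """True if both profiles cover the same loci with identical genotypes."""
    return a.keys() == b.keys() and all(a[locus] == b[locus] for locus in a)

print("Full match" if full_match(evidence, suspect) else "No match")
```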
Before VNTRs and STRs, people like Jeffreys used a process called restriction fragment length polymorphism (RFLP). This process regularly used large portions of DNA to analyze the differences between two DNA samples. RFLP was among the first technologies used in DNA profiling and analysis. However, as technology has evolved, newer technologies, such as STR analysis, emerged and took the place of older ones like RFLP. The admissibility of DNA evidence in courts was disputed in the United States in the 1980s and 1990s, but has since become more universally accepted due to improved techniques. == Profiling processes == === DNA extraction === When a sample such as blood or saliva is obtained, the DNA is only a small part of what is present in the sample. Before the DNA can be analyzed, it must be extracted from the cells and purified. There are many ways this can be accomplished, but all methods follow the same basic procedure. The cell and nuclear membranes need to be broken up to allow the DNA to be free in solution. Once the DNA is free, it can be separated from all other cellular components. After the DNA has been separated in solution, the remaining cellular debris can then be removed from the solution and discarded, leaving only DNA. The most common methods of DNA extraction include organic extraction (also called phenol–chloroform extraction), Chelex extraction, and solid-phase extraction. Differential extraction is a modified version of extraction in which DNA from two different types of cells can be separated from each other before being purified from the solution. Each method of extraction works well in the laboratory, but analysts typically select their preferred method based on factors such as the cost, the time involved, the quantity of DNA yielded, and the quality of DNA yielded. === RFLP analysis === RFLP stands for restriction fragment length polymorphism and, in terms of DNA analysis, describes a DNA testing method which utilizes restriction enzymes to "cut" the DNA at short and specific sequences throughout the sample. To start off processing in the laboratory, the sample has to first go through an extraction protocol, which may vary depending on the sample type or laboratory SOPs (Standard Operating Procedures). Once the DNA has been "extracted" from the cells within the sample and separated away from extraneous cellular materials and any nucleases that would degrade the DNA, the sample can then be introduced to the desired restriction enzymes to be cut up into discernable fragments. Following the enzyme digestion, a Southern blot is performed. The Southern blot is a size-based separation method performed on a gel and visualized with either radioactive or chemiluminescent probes. RFLP could be conducted with single-locus or multi-locus probes (probes that target either one location or multiple locations on the DNA). Incorporating multi-locus probes allowed for higher discrimination power in the analysis; however, completing this process could take several days to a week for one sample because of the large amount of time required by each step needed to visualize the probes. === Polymerase chain reaction (PCR) analysis === This technique was developed in 1983 by Kary Mullis and is now a common and important technique used in medical and biological research labs for a variety of applications. PCR, or polymerase chain reaction, is a widely used molecular biology technique for amplifying a specific DNA sequence.
Amplification is achieved by a series of three steps: (1) denaturation, in which the DNA is heated to 95 °C to dissociate the hydrogen bonds between the complementary base pairs of the double-stranded DNA; (2) annealing, during which the reaction is cooled to 50-65 °C, enabling the primers to attach to a specific location on the single-stranded template DNA by way of hydrogen bonding; and (3) extension, usually carried out at 72 °C with a thermostable DNA polymerase such as Taq polymerase, which adds nucleotides in the 5'-3' direction and synthesizes the complementary strand of the DNA template. === STR analysis === The system of DNA profiling used today is based on polymerase chain reaction (PCR) and uses simple repeated sequences, known as short tandem repeats (STRs). From country to country, different STR-based DNA-profiling systems are in use. In North America, systems that amplify the CODIS 20 core loci are almost universal, whereas in the United Kingdom the DNA-17 loci system is in use, and Australia uses 18 core markers. The true power of STR analysis is in its statistical power of discrimination. Because the 20 loci that are currently used for discrimination in CODIS are independently assorted (having a certain number of repeats at one locus does not change the likelihood of having any number of repeats at any other locus), the product rule for probabilities can be applied. This means that, if someone has the DNA type of ABC, where the three loci are independent, then the probability of that individual having that DNA type is the probability of having type A times the probability of having type B times the probability of having type C. This has resulted in the ability to generate match probabilities of 1 in a quintillion (1×10^18) or more. However, DNA database searches have shown false DNA profile matches to be much more frequent than expected. === Y-chromosome analysis === Due to their paternal inheritance, Y-haplotypes provide information about the genetic ancestry of the male population. To investigate this population history, and to provide estimates for haplotype frequencies in criminal casework, the Y haplotype reference database (YHRD) was created in 2000 as an online resource. It currently comprises more than 300,000 minimal (8-locus) haplotypes from world-wide populations. === Mitochondrial analysis === mtDNA can be obtained from material such as hair shafts and old bones and teeth. == Issues with forensic DNA samples == When people think of DNA analysis, they often think about television shows like NCIS or CSI, which portray DNA samples coming into a lab and being instantly analyzed, with a picture of the suspect pulled up within minutes. However, the reality is quite different, and perfect DNA samples are often not collected from the scene of a crime. Homicide victims are frequently left exposed to harsh conditions before they are found, and objects that are used to commit crimes have often been handled by more than one person. The two most prevalent issues that forensic scientists encounter when analyzing DNA samples are degraded samples and DNA mixtures. === Degraded DNA === Before modern PCR methods existed, it was almost impossible to analyze degraded DNA samples.
Methods like restriction fragment length polymorphism (RFLP), which was the first technique used for DNA analysis in forensic science, required high molecular weight DNA in the sample in order to get reliable data. High molecular weight DNA, however, is lacking in degraded samples, as the DNA is too fragmented to carry out RFLP accurately. It was only when polymerase chain reaction techniques were invented that analysis of degraded DNA samples could be carried out. Multiplex PCR in particular made it possible to isolate and amplify the small fragments of DNA that are still left in degraded samples. When multiplex PCR methods are compared to the older methods like RFLP, a vast difference can be seen. Multiplex PCR can theoretically amplify less than 1 ng of DNA, whereas RFLP required at least 100 ng of DNA in order to carry out an analysis. === Low-template DNA === Low-template DNA situations can arise when there is less than about 0.1 ng of DNA in a sample. This can lead to more stochastic effects (random events), such as allelic dropout or allelic drop-in, which can alter the interpretation of a DNA profile. These stochastic effects can lead to the unequal amplification of the two alleles that come from a heterozygous individual. It is especially important to take low-template DNA into account when dealing with a DNA mixture, because one (or more) of the contributors to the mixture is likely to have contributed less than the optimal amount of DNA for the PCR reaction to work properly. Therefore, stochastic thresholds are developed for DNA profile interpretation. The stochastic threshold is the minimum peak height (RFU value), seen in an electropherogram, below which allelic dropout may have occurred. If the peak height value is above this threshold, then it is reasonable to assume that allelic dropout has not occurred. For example, if only one peak is seen for a particular locus in the electropherogram but its peak height is above the stochastic threshold, then it can reasonably be assumed that this individual is homozygous at that locus and is not missing a heterozygous partner allele that would otherwise have dropped out due to low-template DNA. Allelic dropout can occur when there is low-template DNA because there is so little DNA to start with that, at a given locus, a contributor who is a true heterozygote may have one allele fail to amplify, and so that allele is lost. Allelic drop-in can also occur when there is low-template DNA because the stutter peak can sometimes be amplified. Stutter is an artifact of PCR. During the PCR reaction, DNA polymerase adds nucleotides off the primer, but the whole process is very dynamic, meaning that the polymerase is constantly binding, dissociating, and rebinding. Sometimes the polymerase rejoins at the short tandem repeat ahead of it, leading to a product that is one repeat shorter than the template. If the polymerase happens to bind to such a stutter product and amplifies it into many copies, the stutter product can appear in the electropherogram as a spurious allele, leading to allelic drop-in. ==== MiniSTR analysis ==== In instances in which DNA samples are degraded, such as after intense fires or when all that remains are bone fragments, standard STR testing on those samples can be inadequate. When standard STR testing is done on highly degraded samples, the larger STR loci often drop out, and only partial DNA profiles are obtained.
Partial DNA profiles can be a powerful tool, but the probability of a random match is larger than if a full profile was obtained. One method that has been developed to analyse degraded DNA samples is to use miniSTR technology. In this approach, primers are specially designed to bind closer to the STR region. In normal STR testing, the primers bind to longer sequences that contain the STR region within the segment. MiniSTR analysis, however, targets only the STR location, which results in a DNA product that is much smaller. By placing the primers closer to the actual STR regions, there is a higher chance that successful amplification of this region will occur. Successful amplification of those STR regions can now occur, and more complete DNA profiles can be obtained. The observation that smaller PCR products give a higher success rate with highly degraded samples was first reported in 1995, when miniSTR technology was used to identify victims of the Waco fire. === DNA mixtures === Mixtures are another common issue faced by forensic scientists when they are analyzing unknown or questionable DNA samples. A mixture is defined as a DNA sample that contains two or more individual contributors. That can often occur when a DNA sample is swabbed from an item that has been handled by more than one person or when a sample contains both the victim's and the assailant's DNA. The presence of more than one individual in a DNA sample can make it challenging to detect individual profiles, and interpretation of mixtures should be performed only by highly trained individuals. Mixtures that contain two or three individuals can be interpreted, though with difficulty; mixtures that contain four or more individuals are generally too convoluted to yield individual profiles. One common scenario in which a mixture is often obtained is in the case of sexual assault. A sample may be collected that contains material from the victim, the victim's consensual sexual partners, and the perpetrator(s). Mixtures can generally be sorted into three categories: Type A, Type B, and Type C. Type A mixtures have alleles with similar peak heights all around, so the contributors cannot be distinguished from each other. Type B mixtures can be deconvoluted by comparing peak-height ratios to determine which alleles were donated together. Type C mixtures cannot be safely interpreted with current technology because the samples were affected by DNA degradation or because too small a quantity of DNA was present. When looking at an electropherogram, it is possible to determine the number of contributors in less complex mixtures based on the number of peaks observed at each locus. In comparison to a single-source profile, which will have only one or two peaks at each locus, a mixture shows three or more peaks at two or more loci. If there are three peaks at only a single locus, then it is possible to have a single contributor who is tri-allelic at that locus. Two-person mixtures will have between two and four peaks at each locus, and three-person mixtures will have between three and six peaks at each locus. Mixtures become increasingly difficult to deconvolute as the number of contributors increases. As detection methods in DNA profiling advance, forensic scientists are seeing more DNA samples that contain mixtures, as even the smallest contributor can now be detected by modern tests.
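A very common first step when examining a suspected mixture is the maximum allele count described above: because each contributor can add at most two alleles per locus, the locus showing the most distinct alleles sets a floor on the number of contributors. The sketch below applies that rule to hypothetical allele counts; it deliberately ignores complications such as stutter, allelic dropout and shared alleles, all of which make real estimates less certain.

```python
import math

def minimum_contributors(alleles_per_locus):
    """Lower bound on contributors: ceil(max observed alleles at any locus / 2)."""
    return math.ceil(max(alleles_per_locus.values()) / 2)

# Hypothetical distinct-allele counts observed at each locus in a mixed sample.
observed = {"D3S1358": 4, "vWA": 3, "FGA": 5, "D8S1179": 4}

print(minimum_contributors(observed))  # -> 3 (at least three contributors)
```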
The ease with which forensic scientists can interpret DNA mixtures largely depends on the ratio of DNA present from each individual, the genotype combinations, and the total amount of DNA amplified. The DNA ratio is often the most important aspect to look at in determining whether a mixture can be interpreted. For example, if a DNA sample had two contributors, it would be easy to interpret individual profiles if the ratio of DNA contributed by one person was much higher than that of the second person. When a sample has three or more contributors, it becomes extremely difficult to determine individual profiles. Fortunately, advancements in probabilistic genotyping may make that sort of determination possible in the future. Probabilistic genotyping uses complex computer software to run through thousands of mathematical computations to produce statistical likelihoods of individual genotypes found in a mixture. DNA profiling in plants: Plant DNA profiling (fingerprinting) is a method for identifying cultivars that uses molecular marker techniques. This method is gaining attention due to Trade-Related Intellectual Property Rights (TRIPS) and the Convention on Biological Diversity (CBD). Advantages of plant DNA profiling: identification, authentication, specific distinction, detection of adulteration, and identification of phytoconstituents are all possible with DNA fingerprinting in medicinal plants. DNA-based markers are critical for these applications and are shaping the future of scientific study in pharmacognosy. Profiling also helps determine whether traits (such as seed size and leaf color) are likely to improve the offspring. == DNA databases == An early application of a DNA database was the compilation of a Mitochondrial DNA Concordance, prepared by Kevin W. P. Miller and John L. Dawson at the University of Cambridge from 1996 to 1999 from data collected as part of Miller's PhD thesis. There are now several DNA databases in existence around the world. Some are private, but most of the largest databases are government-controlled. The United States maintains the largest DNA database, with the Combined DNA Index System (CODIS) holding over 13 million records as of May 2018. The United Kingdom maintains the National DNA Database (NDNAD), which is of similar size, despite the UK's smaller population. The size of this database, and its rate of growth, are giving concern to civil liberties groups in the UK, where police have wide-ranging powers to take samples and retain them even in the event of acquittal. The Conservative–Liberal Democrat coalition partially addressed these concerns with part 1 of the Protection of Freedoms Act 2012, under which DNA samples must be deleted if suspects are acquitted or not charged, except in relation to certain (mostly serious or sexual) offenses. Public discourse around the introduction of advanced forensic techniques (such as genetic genealogy using public genealogy databases and DNA phenotyping approaches) has been limited, disjointed, unfocused, and raises issues of privacy and consent that may warrant the establishment of additional legal protections. The USA PATRIOT Act provides a means for the U.S. government to obtain DNA samples from suspected terrorists. DNA information from crimes is collected and deposited into the CODIS database, which is maintained by the FBI.
CODIS enables law enforcement officials to test DNA samples from crimes for matches within the database, providing a means of finding specific biological profiles associated with collected DNA evidence. When a match is made from a national DNA databank to link a crime scene to an offender who has provided a DNA sample to a database, that link is often referred to as a cold hit. A cold hit is of value in referring the police agency to a specific suspect but is of less evidential value than a DNA match made from outside the DNA databank. FBI agents cannot legally store DNA of a person not convicted of a crime. DNA collected from a suspect not later convicted must be disposed of and not entered into the database. In 1998, a man residing in the UK was arrested on an accusation of burglary. His DNA was taken and tested, and he was later released. Nine months later, this man's DNA was accidentally and illegally entered in the DNA database. New DNA profiles are automatically compared to DNA found in cold cases and, in this instance, the man was found to be a match to DNA found in a rape and assault case one year earlier. The government then prosecuted him for these crimes. During the trial, a request was made to remove the DNA match from the evidence because it had been illegally entered into the database, and the request was granted. The DNA of the perpetrator, collected from victims of rape, can be stored for years until a match is found. In 2014, to address this problem, Congress extended a bill that helps states deal with "a backlog" of evidence. DNA profiling databases in plants: PIDS (Plant International DNA-fingerprinting System) is an open-source, web-based free software system for plant DNA fingerprinting. It manages large amounts of microsatellite DNA fingerprint data, supports genetic studies, and automates collection, storage and maintenance while decreasing human error and increasing efficiency. The system may be tailored to specific laboratory needs, making it a valuable tool for plant breeders, forensic science, and human fingerprint recognition. It keeps track of experiments, standardizes data and promotes inter-database communication. It also helps with the regulation of variety quality, the preservation of variety rights and the use of molecular markers in breeding by providing location statistics, merging, comparison and genetic analysis functions. == Considerations in evaluating DNA evidence == When using RFLP, the theoretical risk of a coincidental match is 1 in 100 billion (100,000,000,000), although the practical risk is actually 1 in 1,000 because monozygotic twins are 0.2% of the human population. Moreover, the rate of laboratory error is almost certainly higher than that, and actual laboratory procedures often do not reflect the theory under which the coincidence probabilities were computed. For example, coincidence probabilities may be calculated based on the probabilities that markers in two samples have bands in precisely the same location, but a laboratory worker may conclude that similar but not precisely identical band patterns result from identical genetic samples with some imperfection in the agarose gel. However, in that case, the laboratory worker increases the coincidence risk by expanding the criteria for declaring a match. Studies conducted in the 2000s quoted relatively high error rates, which may be cause for concern.
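As a simplified illustration of how such theoretical coincidence (random match) probabilities are assembled, the sketch below applies the product rule described in the STR analysis section: the genotype frequency at each locus is estimated from allele frequencies under Hardy-Weinberg assumptions (p squared for homozygotes, 2pq for heterozygotes) and the per-locus frequencies are multiplied. The allele frequencies shown are invented placeholders; real calculations use published population databases and corrections (for example, for population substructure) that are omitted here.

```python
def genotype_frequency(freqs, a1, a2):
    """Hardy-Weinberg genotype frequency: p*p if homozygous, 2*p*q if heterozygous."""
    p, q = freqs[a1], freqs[a2]
    return p * p if a1 == a2 else 2 * p * q

def random_match_probability(profile, allele_freqs):
    """Product rule: multiply genotype frequencies across independent loci."""
    rmp = 1.0
    for locus, (a1, a2) in profile.items():
        rmp *= genotype_frequency(allele_freqs[locus], a1, a2)
    return rmp

# Hypothetical allele frequencies and a three-locus profile for illustration.
allele_freqs = {
    "D3S1358": {15: 0.25, 17: 0.20},
    "vWA":     {16: 0.22},
    "FGA":     {21: 0.18, 24: 0.14},
}
profile = {"D3S1358": (15, 17), "vWA": (16, 16), "FGA": (21, 24)}

rmp = random_match_probability(profile, allele_freqs)
print(f"Random match probability: 1 in {1 / rmp:,.0f}")
```

With the full set of 20 CODIS loci rather than three, multiplying many small per-locus frequencies is what drives the extremely small match probabilities quoted for complete profiles.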
In the early days of genetic fingerprinting, the necessary population data to compute a match probability accurately was sometimes unavailable. Between 1992 and 1996, arbitrarily low ceilings were controversially placed on match probabilities used in RFLP analysis, rather than the higher theoretically computed ones. === Evidence of genetic relationship === It is possible to use DNA profiling as evidence of genetic relationship, although such evidence varies in strength from weak to positive. Testing that shows no relationship is absolutely certain. Further, while almost all individuals have a single and distinct set of genes, ultra-rare individuals, known as "chimeras", have at least two different sets of genes. There have been two cases of DNA profiling that falsely suggested that a mother was unrelated to her children. == Fake DNA evidence == The functional analysis of genes and their coding sequences (open reading frames [ORFs]) typically requires that each ORF be expressed, the encoded protein purified, antibodies produced, phenotypes examined, intracellular localization determined, and interactions with other proteins sought. In a study conducted by the life science company Nucleix and published in the journal Forensic Science International, scientists found that an in vitro synthesized sample of DNA matching any desired genetic profile can be constructed using standard molecular biology techniques without obtaining any actual tissue from that person. == DNA evidence in criminal trials == === Familial DNA searching === Familial DNA searching (sometimes referred to as "familial DNA" or "familial DNA database searching") is the practice of creating new investigative leads in cases where DNA evidence found at the scene of a crime (forensic profile) strongly resembles that of an existing DNA profile (offender profile) in a state DNA database but there is not an exact match. After all other leads have been exhausted, investigators may use specially developed software to compare the forensic profile to all profiles taken from a state's DNA database to generate a list of those offenders already in the database who are most likely to be a very close relative of the individual whose DNA is in the forensic profile. Familial DNA database searching was first used in an investigation leading to the conviction of Jeffrey Gafoor of the murder of Lynette White in the United Kingdom on 4 July 2003. DNA evidence was matched to Gafoor's nephew, who, at 14 years old, had not yet been born at the time of the murder in 1988. It was used again in 2004 to find a man who threw a brick from a motorway bridge and hit a lorry driver, killing him. DNA found on the brick matched that found at the scene of a car theft earlier in the day, but there were no good matches on the national DNA database. A wider search found a partial match to an individual; on being questioned, this man revealed he had a brother, Craig Harman, who lived very close to the original crime scene. Harman voluntarily submitted a DNA sample, and confessed when it matched the sample from the brick. As of 2011, familial DNA database searching is not conducted on a national level in the United States, where states determine how and when to conduct familial searches. The first familial DNA search with a subsequent conviction in the United States was conducted in Denver, Colorado, in 2008, using software developed under the leadership of Denver District Attorney Mitch Morrissey and Denver Police Department Crime Lab Director Gregg LaBerge.
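The specialised familial-search software referred to above ranks candidates with kinship likelihood ratios and, often, confirmatory Y-STR testing. Purely as a toy illustration of the intuition (relatives tend to share more alleles than unrelated people), the sketch below scores database profiles by how many alleles they share with a crime-scene profile at each locus; the profiles and identifiers are invented, and a raw sharing count like this is far too crude for real investigative use.

```python
def shared_alleles(profile_a, profile_b):
    """Count alleles shared per locus (0-2), summed over the loci both profiles cover."""
    total = 0
    for locus in profile_a.keys() & profile_b.keys():
        a, b = list(profile_a[locus]), list(profile_b[locus])
        for allele in a:
            if allele in b:
                b.remove(allele)   # each database allele can only be matched once
                total += 1
    return total

# Hypothetical crime-scene profile and database entries.
scene = {"D3S1358": (15, 17), "vWA": (16, 18), "FGA": (21, 24)}
database = {
    "offender_001": {"D3S1358": (15, 16), "vWA": (16, 18), "FGA": (21, 22)},
    "offender_002": {"D3S1358": (14, 18), "vWA": (17, 19), "FGA": (20, 25)},
}

ranked = sorted(database, key=lambda name: shared_alleles(scene, database[name]), reverse=True)
for name in ranked:
    print(name, shared_alleles(scene, database[name]))
```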
California was the first state to implement a policy for familial searching under then-Attorney General Jerry Brown, who later became Governor. In his role as consultant to the Familial Search Working Group of the California Department of Justice, former Alameda County Prosecutor Rock Harmon is widely considered to have been the catalyst in the adoption of familial search technology in California. The technique was used to catch the Los Angeles serial killer known as the "Grim Sleeper" in 2010. It was not a witness or informant that tipped off law enforcement to the identity of the "Grim Sleeper" serial killer, who had eluded police for more than two decades, but DNA from the suspect's own son. The suspect's son had been arrested and convicted in a felony weapons charge and swabbed for DNA the year before. When his DNA was entered into the database of convicted felons, detectives were alerted to a partial match to evidence found at the "Grim Sleeper" crime scenes. David Franklin Jr., also known as the Grim Sleeper, was charged with ten counts of murder and one count of attempted murder. More recently, familial DNA led to the arrest of 21-year-old Elvis Garcia on charges of sexual assault and false imprisonment of a woman in Santa Cruz in 2008. In March 2011 Virginia Governor Bob McDonnell announced that Virginia would begin using familial DNA searches. At a press conference in Virginia on 7 March 2011, regarding the East Coast Rapist, Prince William County prosecutor Paul Ebert and Fairfax County Police Detective John Kelly said the case would have been solved years ago if Virginia had used familial DNA searching. Aaron Thomas, the suspected East Coast Rapist, was arrested in connection with the rape of 17 women from Virginia to Rhode Island, but familial DNA was not used in the case. Critics of familial DNA database searches argue that the technique is an invasion of an individual's 4th Amendment rights. Privacy advocates are petitioning for DNA database restrictions, arguing that the only fair way to search for possible DNA matches to relatives of offenders or arrestees would be to have a population-wide DNA database. Some scholars have pointed out that the privacy concerns surrounding familial searching are similar in some respects to other police search techniques, and most have concluded that the practice is constitutional. The Ninth Circuit Court of Appeals in United States v. Pool (vacated as moot) suggested that this practice is somewhat analogous to a witness looking at a photograph of one person and stating that it looked like the perpetrator, which leads law enforcement to show the witness photos of similar looking individuals, one of whom is identified as the perpetrator. Critics also state that racial profiling could occur on account of familial DNA testing. In the United States, the conviction rates of racial minorities are much higher than that of the overall population. It is unclear whether this is due to discrimination from police officers and the courts, as opposed to a simple higher rate of offence among minorities. Arrest-based databases, which are found in the majority of the United States, lead to an even greater level of racial discrimination. An arrest, as opposed to conviction, relies much more heavily on police discretion. For instance, investigators with Denver District Attorney's Office successfully identified a suspect in a property theft case using a familial DNA search. 
In this example, the suspect's blood left at the scene of the crime strongly resembled that of a current Colorado Department of Corrections prisoner. === Partial matches === Partial DNA matches are the result of moderate-stringency CODIS searches that produce a potential match that shares at least one allele at every locus. Partial matching does not involve the use of familial search software, such as those used in the United Kingdom and the United States, or additional Y-STR analysis, and therefore often misses sibling relationships. Partial matching has been used to identify suspects in several cases in both countries and has also been used as a tool to exonerate the falsely accused. Darryl Hunt was wrongly convicted in connection with the rape and murder of a young woman in 1984 in North Carolina. === Surreptitious DNA collecting === Police forces may collect DNA samples without a suspect's knowledge, and use it as evidence. The legality of the practice has been questioned in Australia. In the United States, where it has been accepted, courts often rule that there is no expectation of privacy and cite California v. Greenwood (1988), in which the Supreme Court held that the Fourth Amendment does not prohibit the warrantless search and seizure of garbage left for collection outside the curtilage of a home. Critics of this practice underline that this analogy ignores that "most people have no idea that they risk surrendering their genetic identity to the police by, for instance, failing to destroy a used coffee cup. Moreover, even if they do realize it, there is no way to avoid abandoning one's DNA in public." The United States Supreme Court ruled in Maryland v. King (2013) that DNA sampling of prisoners arrested for serious crimes is constitutional. In the United Kingdom, the Human Tissue Act 2004 prohibits private individuals from covertly collecting biological samples (hair, fingernails, etc.) for DNA analysis but exempts medical and criminal investigations from the prohibition. === England and Wales === Evidence from an expert who has compared DNA samples must be accompanied by evidence as to the sources of the samples and the procedures for obtaining the DNA profiles. The judge must ensure that the jury understands the significance of DNA matches and mismatches in the profiles. The judge must also ensure that the jury does not confuse the match probability (the probability that a person chosen at random has a DNA profile matching the sample from the scene) with the probability that a person with matching DNA committed the crime. In the 1996 case of R v. Doheny, it was held that juries should weigh up conflicting and corroborative evidence using their own common sense and not by using mathematical formulae, such as Bayes' theorem, so as to avoid "confusion, misunderstanding and misjudgment". ==== Presentation and evaluation of evidence of partial or incomplete DNA profiles ==== In R v Bates, Moore-Bick LJ said: We can see no reason why partial profile DNA evidence should not be admissible provided that the jury are made aware of its inherent limitations and are given a sufficient explanation to enable them to evaluate it. There may be cases where the match probability in relation to all the samples tested is so great that the judge would consider its probative value to be minimal and decide to exclude the evidence in the exercise of his discretion, but this gives rise to no new question of principle and can be left for decision on a case by case basis.
However, the fact that there exists in the case of all partial profile evidence the possibility that a "missing" allele might exculpate the accused altogether does not provide sufficient grounds for rejecting such evidence. In many cases there is a possibility (at least in theory) that evidence that would assist the accused and perhaps even exculpate him altogether exists, but that does not provide grounds for excluding relevant evidence that is available and otherwise admissible, though it does make it important to ensure that the jury are given sufficient information to enable them to evaluate that evidence properly. === DNA testing in the United States === There are state laws on DNA profiling in all 50 states of the United States. Detailed information on database laws in each state can be found at the National Conference of State Legislatures website. === Development of artificial DNA === In August 2009, scientists in Israel raised serious doubts concerning the use of DNA by law enforcement as the ultimate method of identification. In a paper published in the journal Forensic Science International: Genetics, the Israeli researchers demonstrated that it is possible to manufacture DNA in a laboratory, thus falsifying DNA evidence. The scientists fabricated saliva and blood samples, which originally contained DNA from a person other than the supposed donor of the blood and saliva. The researchers also showed that, using a DNA database, it is possible to take information from a profile and manufacture DNA to match it, and that this can be done without access to any actual DNA from the person whose DNA they are duplicating. The synthetic DNA oligos required for the procedure are common in molecular laboratories. The New York Times quoted the lead author, Daniel Frumkin, saying, "You can just engineer a crime scene ... any biology undergraduate could perform this". Frumkin perfected a test that can differentiate real DNA samples from fake ones. His test detects epigenetic modifications, in particular, DNA methylation. Seventy percent of the DNA in any human genome is methylated, meaning it contains methyl group modifications within a CpG dinucleotide context. Methylation at the promoter region is associated with gene silencing. The synthetic DNA lacks this epigenetic modification, which allows the test to distinguish manufactured DNA from genuine DNA. It is unknown how many police departments, if any, currently use the test. No police lab has publicly announced that it is using the new test to verify DNA results. Researchers at the University of Tokyo integrated an artificial DNA replication scheme with a rebuilt gene expression system and micro-compartmentalization utilizing cell-free materials alone for the first time. Multiple cycles of serial dilution were performed on a system contained in microscale water-in-oil droplets. Overall, the study's artificial genomic DNA, which kept copying itself using self-encoded proteins and improved its own sequence over time, is a good starting point for making more complex artificial cells. By adding the genes needed for transcription and translation to artificial genomic DNA, it may be possible in the future to make artificial cells that can grow on their own when fed small molecules like amino acids and nucleotides. Making useful products such as drugs and food in these artificial cells could be more stable and easier to control than using living organisms.
On July 7, 2008, the American Chemical Society reported that Japanese chemists had created the world's first DNA molecule made almost entirely of synthetic components. A nanoparticle-based artificial transcription factor for gene regulation: NanoScript is a nanoparticle-based artificial transcription factor designed to replicate the structure and function of transcription factors (TFs). NanoScript was created by attaching functional peptides and small molecules that imitate the various TF domains, referred to as synthetic transcription factors, onto gold nanoparticles. NanoScript was shown to localize to the nucleus and to activate transcription of a reporter plasmid more than 15-fold. Moreover, NanoScript can transcribe targeted genes on endogenous DNA in a nonviral manner. Three different fluorophores—red, green, and blue—were carefully fixed on the DNA rod surface to provide spatial information and create a nanoscale barcode. Epifluorescence and total internal reflection fluorescence microscopy reliably deciphered spatial information between fluorophores. By moving the three fluorophores on the DNA rod, this nanoscale barcode created 216 fluorescence patterns. === Cases === In 1986, Richard Buckland was exonerated, despite having admitted to the rape and murder of a teenager near Leicester, the city where DNA profiling was first developed. This was the first use of DNA fingerprinting in a criminal investigation, and the first to prove a suspect's innocence. The following year Colin Pitchfork was identified as the perpetrator of the same murder, in addition to another, using the same techniques that had cleared Buckland. In 1987, genetic fingerprinting was used in a US criminal court for the first time in the trial of a man accused of unlawful intercourse with a mentally disabled 14-year-old female who gave birth to a baby. In 1987, Florida rapist Tommie Lee Andrews was the first person in the United States to be convicted as a result of DNA evidence, for raping a woman during a burglary; he was convicted on 6 November 1987, and sentenced to 22 years in prison. In 1990, the violent murder of a young student in Brno was the first criminal case in Czechoslovakia solved by DNA evidence, with the murderer sentenced to 23 years in prison. In 1992, DNA from a palo verde tree was used to convict Mark Alan Bogan of murder. DNA from seed pods of a tree at the crime scene was found to match that of seed pods found in Bogan's truck. This was the first instance of plant DNA admitted in a criminal case. In 1994, the claim that Anna Anderson was Grand Duchess Anastasia Nikolaevna of Russia was tested after her death using samples of her tissue that had been stored at a Charlottesville hospital following a medical procedure. The tissue was tested using DNA fingerprinting, and showed that she bore no relation to the Romanovs. In 1994, Earl Washington, Jr., of Virginia had his death sentence commuted to life imprisonment a week before his scheduled execution date based on DNA evidence. He received a full pardon in 2000 based on more advanced testing. In 1999, Raymond Easton, a disabled man from Swindon, England, was arrested and detained for seven hours in connection with a burglary. He was released due to an inaccurate DNA match. His DNA had been retained on file after an unrelated domestic incident some time previously. In 2000 Frank Lee Smith was proved innocent by DNA profiling of the murder of an eight-year-old girl after spending 14 years on death row in Florida, USA.
However, he had died of cancer just before his innocence was proven. In view of this, the Florida state governor ordered that in future any death row inmate claiming innocence should have DNA testing. In May 2000 Gordon Graham murdered Paul Gault at his home in Lisburn, Northern Ireland. Graham was convicted of the murder when his DNA was found on a sports bag left in the house as part of an elaborate ploy to suggest the murder occurred after a burglary had gone wrong. Graham was having an affair with the victim's wife at the time of the murder. It was the first time Low Copy Number DNA was used in Northern Ireland. In 2001, Wayne Butler was convicted of the murder of Celia Douty. It was the first murder in Australia to be solved using DNA profiling. In 2002, the body of James Hanratty, hanged in 1962 for the "A6 murder", was exhumed and DNA samples from the body and members of his family were analysed. The results convinced Court of Appeal judges that Hanratty's guilt, which had been strenuously disputed by campaigners, was proved "beyond doubt". Paul Foot and some other campaigners continued to believe in Hanratty's innocence and argued that the DNA evidence could have been contaminated, noting that the small DNA samples from items of clothing, kept in a police laboratory for over 40 years "in conditions that do not satisfy modern evidential standards", had had to be subjected to very new amplification techniques in order to yield any genetic profile. However, no DNA other than Hanratty's was found on the evidence tested, contrary to what would have been expected had the evidence indeed been contaminated. In August 2002, Annalisa Vicentini was shot dead in Tuscany. Bartender Peter Hamkin, 23, was arrested in Merseyside in March 2003 on an extradition warrant heard at Bow Street Magistrates' Court in London to establish whether he should be taken to Italy to face a murder charge. DNA "proved" he shot her, but he was cleared on other evidence. In 2003, Welshman Jeffrey Gafoor was convicted of the 1988 murder of Lynette White, when crime scene evidence collected 12 years earlier was re-examined using STR techniques, resulting in a match with his nephew. In June 2003, because of new DNA evidence, Dennis Halstead, John Kogut and John Restivo won a re-trial on their murder convictions; their convictions were struck down and they were released. In 2004, DNA testing shed new light on the mysterious 1912 disappearance of Bobby Dunbar, a four-year-old boy who vanished during a fishing trip. He was allegedly found alive eight months later in the custody of William Cantwell Walters, but another woman claimed that the boy was her son, Bruce Anderson, whom she had entrusted to Walters' custody. The courts disbelieved her claim and convicted Walters of the kidnapping. The boy was raised and known as Bobby Dunbar throughout the rest of his life. However, DNA tests on Dunbar's son and nephew revealed the two were not related, thus establishing that the boy found in 1912 was not Bobby Dunbar, whose real fate remains unknown. In 2005, Gary Leiterman was convicted of the 1969 murder of Jane Mixer, a law student at the University of Michigan, after DNA found on Mixer's pantyhose was matched to Leiterman. DNA in a drop of blood on Mixer's hand was matched to John Ruelas, who was only four years old in 1969 and was never successfully connected to the case in any other way.
Leiterman's defense unsuccessfully argued that the unexplained match of the blood spot to Ruelas pointed to cross-contamination and raised doubts about the reliability of the lab's identification of Leiterman. In November 2008, Anthony Curcio was arrested for masterminding one of the most elaborately planned armored car heists in history. DNA evidence linked Curcio to the crime. In March 2009, Sean Hodgson—convicted of the 1979 killing of Teresa De Simone, 22, in her car in Southampton—was released after tests proved DNA from the scene was not his. It was later matched to DNA retrieved from the exhumed body of David Lace. Lace had previously confessed to the crime but was not believed by the detectives. He served time in prison for other crimes committed at the same time as the murder and then committed suicide in 1988. In 2012, a case of babies having been switched decades earlier was discovered by accident. After undertaking DNA testing for other purposes, Alice Collins Plebuch was advised that her ancestry appeared to include a significant Ashkenazi Jewish component, despite a belief in her family that they were of predominantly Irish descent. Profiling of Plebuch's genome suggested that it included distinct and unexpected components associated with Ashkenazi, Middle Eastern, and Eastern European populations. This led Plebuch to conduct an extensive investigation, after which she concluded that her father had been switched (possibly accidentally) with another baby soon after birth. Plebuch was also able to identify the biological ancestors of her father. In 2016 Anthea Ring, abandoned as a baby, was able to use a DNA sample and a DNA matching database to discover her deceased mother's identity and roots in County Mayo, Ireland. A recently developed forensic test was subsequently used to capture DNA from saliva left on old stamps and envelopes by her suspected father, uncovered through painstaking genealogy research. The DNA in the first three samples was too degraded to use. However, on the fourth, more than enough DNA was found. The test, which has a degree of accuracy acceptable in UK courts, proved that a man named Patrick Coyne was her biological father. In 2018 the Buckskin Girl (a body found in 1981 in Ohio) was identified as Marcia King from Arkansas using DNA genealogical techniques. In 2018 Joseph James DeAngelo was arrested as the main suspect for the Golden State Killer using DNA and genealogy techniques. In 2018, William Earl Talbott II was arrested as a suspect for the 1987 murders of Jay Cook and Tanya Van Cuylenborg with the assistance of genealogical DNA testing. The same genetic genealogist who helped in this case also helped police with 18 other arrests in 2018. In 2018, with the use of the Next Generation Identification System's enhanced biometric capabilities, the FBI matched the fingerprint of a suspect named Timothy David Nelson and arrested him 20 years after the alleged sexual assault. == DNA evidence as evidence to prove rights of succession to British titles == DNA testing has been used to establish the right of succession to British titles. Cases: Baron Moynihan Pringle baronets == See also == == References == == Further reading == == External links == McKie R (24 May 2009). "Eureka moment that led to the discovery of DNA fingerprinting". The Observer. London.
Forensic Science, Statistics, and the Law – Blog that tracks scientific and legal developments pertinent to forensic DNA profiling Create a DNA Fingerprint – PBS.org In silico simulation of Molecular Biology Techniques – A place to learn typing techniques by simulating them National DNA Databases in the EU The Innocence Record Archived 13 September 2019 at the Wayback Machine, Winston & Strawn LLP/The Innocence Project Making Sense of DNA Backlogs, 2012: Myths vs. Reality United States Department of Justice "Making Sense of Forensic Genetics". Sense about Science. 25 January 2017. Retrieved 19 April 2020.
Wikipedia/DNA_profiling
In chemical kinetics, the overall rate of a reaction is often approximately determined by the slowest step, known as the rate-determining step (RDS or RD-step or r/d step) or rate-limiting step. For a given reaction mechanism, the prediction of the corresponding rate equation (for comparison with the experimental rate law) is often simplified by using this approximation of the rate-determining step. In principle, the time evolution of the reactant and product concentrations can be determined from the set of simultaneous rate equations for the individual steps of the mechanism, one for each step. However, the analytical solution of these differential equations is not always easy, and in some cases numerical integration may even be required. The hypothesis of a single rate-determining step can greatly simplify the mathematics. In the simplest case the initial step is the slowest, and the overall rate is just the rate of the first step. Also, the rate equations for mechanisms with a single rate-determining step are usually in a simple mathematical form, whose relation to the mechanism and choice of rate-determining step is clear. The correct rate-determining step can be identified by predicting the rate law for each possible choice and comparing the different predictions with the experimental law, as for the example of NO2 and CO below. The concept of the rate-determining step is very important to the optimization and understanding of many chemical processes such as catalysis and combustion. == Example reaction: NO2 + CO == As an example, consider the gas-phase reaction NO2 + CO → NO + CO2. If this reaction occurred in a single step, its reaction rate (r) would be proportional to the rate of collisions between NO2 and CO molecules: r = k[NO2][CO], where k is the reaction rate constant, and square brackets indicate a molar concentration. Another typical example is the Zel'dovich mechanism. === First step rate-determining === In fact, however, the observed reaction rate is second-order in NO2 and zero-order in CO, with rate equation r = k[NO2]2. This suggests that the rate is determined by a step in which two NO2 molecules react, with the CO molecule entering at another, faster, step. A possible mechanism in two elementary steps that explains the rate equation is: NO2 + NO2 → NO + NO3 (slow step, rate-determining) NO3 + CO → NO2 + CO2 (fast step) In this mechanism the reactive intermediate species NO3 is formed in the first step with rate r1 and reacts with CO in the second step with rate r2. However, NO3 can also react with NO if the first step occurs in the reverse direction (NO + NO3 → 2 NO2) with rate r−1, where the minus sign indicates the rate of a reverse reaction. The concentration of a reactive intermediate such as [NO3] remains low and almost constant. It may therefore be estimated by the steady-state approximation, which specifies that the rate at which it is formed equals the (total) rate at which it is consumed. In this example NO3 is formed in one step and reacts in two, so that d [ NO 3 ] d t = r 1 − r 2 − r − 1 ≈ 0. {\displaystyle {\frac {d{\ce {[NO3]}}}{dt}}=r_{1}-r_{2}-r_{-1}\approx 0.} The statement that the first step is the slow step actually means that the first step in the reverse direction is slower than the second step in the forward direction, so that almost all NO3 is consumed by reaction with CO and not with NO. That is, r−1 ≪ r2, so that r1 − r2 ≈ 0. But the overall rate of reaction is the rate of formation of final product (here CO2), so that r = r2 ≈ r1. 
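A quick way to check an argument of this kind numerically is to integrate the rate equations of the proposed mechanism and see which concentrations the computed rate actually responds to. The sketch below is illustrative only: the function names are arbitrary, the rate constants are hypothetical values chosen so that the reverse of the first step is much slower than the second step (as assumed above), and a crude forward-Euler integration stands in for a proper stiff solver.

def step_rates(c, k1, km1, k2):
    # c = concentrations of NO2, CO, NO, NO3, CO2 (mol/L)
    no2, co, no, no3, co2 = c
    r1 = k1 * no2 * no2     # NO2 + NO2 -> NO + NO3 (slow, rate-determining)
    rm1 = km1 * no * no3    # NO + NO3 -> 2 NO2 (reverse of the first step)
    r2 = k2 * no3 * co      # NO3 + CO -> NO2 + CO2 (fast)
    return [-2 * r1 + 2 * rm1 + r2,   # d[NO2]/dt
            -r2,                      # d[CO]/dt
            r1 - rm1,                 # d[NO]/dt
            r1 - rm1 - r2,            # d[NO3]/dt
            r2]                       # d[CO2]/dt

def mean_rate(no2_0, co_0, k1=1e-3, km1=1.0, k2=1e3, dt=1e-4, steps=20000):
    # crude forward-Euler integration; returns the average rate of CO2 formation
    c = [no2_0, co_0, 0.0, 0.0, 0.0]
    for _ in range(steps):
        c = [x + dt * dx for x, dx in zip(c, step_rates(c, k1, km1, k2))]
    return c[4] / (dt * steps)

print(mean_rate(0.10, 0.10))   # reference conditions
print(mean_rate(0.10, 0.20))   # doubling [CO] leaves the rate essentially unchanged
print(mean_rate(0.20, 0.10))   # doubling [NO2] roughly quadruples the rate

With these hypothetical constants the computed rate tracks k1[NO2]2 and is insensitive to [CO], reproducing the experimental rate law.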
That is, the overall rate is determined by the rate of the first step, and (almost) all molecules that react at the first step continue to the fast second step. === Pre-equilibrium: if the second step were rate-determining === The other possible case would be that the second step is slow and rate-determining, meaning that it is slower than the first step in the reverse direction: r2 ≪ r−1. In this hypothesis, r1 − r−1 ≈ 0, so that the first step is (almost) at equilibrium. The overall rate is determined by the second step: r = r2 ≪ r1, as very few molecules that react at the first step continue to the second step, which is much slower. Such a situation in which an intermediate (here NO3) forms an equilibrium with reactants prior to the rate-determining step is described as a pre-equilibrium For the reaction of NO2 and CO, this hypothesis can be rejected, since it implies a rate equation that disagrees with experiment. NO2 + NO2 → NO + NO3 (fast step) NO3 + CO → NO2 + CO2 (slow step, rate-determining) If the first step were at equilibrium, then its equilibrium constant expression permits calculation of the concentration of the intermediate NO3 in terms of more stable (and more easily measured) reactant and product species: K 1 = [ NO ] [ NO 3 ] [ NO 2 ] 2 , {\displaystyle K_{1}={\frac {{\ce {[NO][NO3]}}}{{\ce {[NO2]^2}}}},} [ NO 3 ] = K 1 [ NO 2 ] 2 [ NO ] . {\displaystyle [{\ce {NO3}}]=K_{1}{\frac {{\ce {[NO2]^2}}}{{\ce {[NO]}}}}.} The overall reaction rate would then be r = r 2 = k 2 [ NO 3 ] [ CO ] = k 2 K 1 [ NO 2 ] 2 [ CO ] [ NO ] , {\displaystyle r=r_{2}=k_{2}{\ce {[NO3][CO]}}=k_{2}K_{1}{\frac {{\ce {[NO2]^2 [CO]}}}{{\ce {[NO]}}}},} which disagrees with the experimental rate law given above, and so disproves the hypothesis that the second step is rate-determining for this reaction. However, some other reactions are believed to involve rapid pre-equilibria prior to the rate-determining step, as shown below. == Nucleophilic substitution == Another example is the unimolecular nucleophilic substitution (SN1) reaction in organic chemistry, where it is the first, rate-determining step that is unimolecular. A specific case is the basic hydrolysis of tert-butyl bromide (t-C4H9Br) by aqueous sodium hydroxide. The mechanism has two steps (where R denotes the tert-butyl radical t-C4H9): Formation of a carbocation R−Br → R+ + Br−. Nucleophilic attack by hydroxide ion R+ + OH− → ROH. This reaction is found to be first-order with r = k[R−Br], which indicates that the first step is slow and determines the rate. The second step with OH− is much faster, so the overall rate is independent of the concentration of OH−. In contrast, the alkaline hydrolysis of methyl bromide (CH3Br) is a bimolecular nucleophilic substitution (SN2) reaction in a single bimolecular step. Its rate law is second-order: r = k[R−Br][OH−]. == Composition of the transition state == A useful rule in the determination of mechanism is that the concentration factors in the rate law indicate the composition and charge of the activated complex or transition state. For the NO2–CO reaction above, the rate depends on [NO2]2, so that the activated complex has composition N2O4, with 2 NO2 entering the reaction before the transition state, and CO reacting after the transition state. A multistep example is the reaction between oxalic acid and chlorine in aqueous solution: H2C2O4 + Cl2 → 2 CO2 + 2 H+ + 2 Cl−. 
The observed rate law is v = k [ Cl 2 ] [ H 2 C 2 O 4 ] [ H + ] 2 [ Cl − ] , {\displaystyle v=k{\frac {{\ce {[Cl2][H2C2O4]}}}{[{\ce {H+}}]^{2}[{\ce {Cl^-}}]}},} which implies an activated complex in which the reactants lose 2H+ + Cl− before the rate-determining step. The formula of the activated complex is Cl2 + H2C2O4 − 2 H+ − Cl− + x H2O, or C2O4Cl(H2O)–x (an unknown number of water molecules are added because the possible dependence of the reaction rate on H2O was not studied, since the data were obtained in water solvent at a large and essentially unvarying concentration). One possible mechanism in which the preliminary steps are assumed to be rapid pre-equilibria occurring prior to the transition state is Cl2 + H2O ⇌ HOCl + Cl− + H+ H2C2O4 ⇌ H+ + HC2O−4 HOCl + HC2O−4 → H2O + Cl− + 2 CO2 == Reaction coordinate diagram == In a multistep reaction, the rate-determining step does not necessarily correspond to the highest Gibbs energy on the reaction coordinate diagram. If there is a reaction intermediate whose energy is lower than the initial reactants, then the activation energy needed to pass through any subsequent transition state depends on the Gibbs energy of that state relative to the lower-energy intermediate. The rate-determining step is then the step with the largest Gibbs energy difference relative either to the starting material or to any previous intermediate on the diagram. Also, for reaction steps that are not first-order, concentration terms must be considered in choosing the rate-determining step. == Chain reactions == Not all reactions have a single rate-determining step. In particular, the rate of a chain reaction is usually not controlled by any single step. == Diffusion control == In the previous examples the rate determining step was one of the sequential chemical reactions leading to a product. The rate-determining step can also be the transport of reactants to where they can interact and form the product. This case is referred to as diffusion control and, in general, occurs when the formation of product from the activated complex is very rapid and thus the provision of the supply of reactants is rate-determining. == See also == Product-determining step Rate-limiting step (biochemistry) == References == Zumdahl, Steven S. (2005). Chemical Principles (5th ed.). Houghton Mifflin. pp. 727–8. ISBN 0618372067.
Wikipedia/Rate-determining_step
The Van 't Hoff equation relates the change in the equilibrium constant, Keq, of a chemical reaction to the change in temperature, T, given the standard enthalpy change, ΔrH⊖, for the process. The subscript r {\displaystyle r} means "reaction" and the superscript ⊖ {\displaystyle \ominus } means "standard". It was proposed by Dutch chemist Jacobus Henricus van 't Hoff in 1884 in his book Études de Dynamique chimique (Studies in Dynamic Chemistry). The Van 't Hoff equation has been widely utilized to explore the changes in state functions in a thermodynamic system. The Van 't Hoff plot, which is derived from this equation, is especially effective in estimating the change in enthalpy and entropy of a chemical reaction. == Equation == === Summary and uses === The standard pressure, P 0 {\displaystyle P^{0}} , is used to define the reference state for the Van 't Hoff equation, which is d ln ⁡ K e q d T = Δ r H ⊖ R T 2 , {\displaystyle {\frac {d\ln K_{\mathrm {eq} }}{dT}}={\frac {\Delta _{r}H^{\ominus }}{RT^{2}}},} where ln denotes the natural logarithm, K e q {\displaystyle K_{eq}} is the thermodynamic equilibrium constant, and R is the ideal gas constant. This equation is exact at any one temperature and all pressures, derived from the requirement that the Gibbs free energy of reaction be stationary in a state of chemical equilibrium. In practice, the equation is often integrated between two temperatures under the assumption that the standard reaction enthalpy Δ r H ⊖ {\displaystyle \Delta _{r}H^{\ominus }} is constant (and furthermore, this is also often assumed to be equal to its value at standard temperature). Since in reality Δ r H ⊖ {\displaystyle \Delta _{r}H^{\ominus }} and the standard reaction entropy Δ r S ⊖ {\displaystyle \Delta _{r}S^{\ominus }} do vary with temperature for most processes, the integrated equation is only approximate. Approximations are also made in practice to the activity coefficients within the equilibrium constant. A major use of the integrated equation is to estimate a new equilibrium constant at a new absolute temperature assuming a constant standard enthalpy change over the temperature range. To obtain the integrated equation, it is convenient to first rewrite the Van 't Hoff equation as d ln ⁡ K e q d 1 T = − Δ r H ⊖ R . {\displaystyle {\frac {d\ln K_{\mathrm {eq} }}{d{\frac {1}{T}}}}=-{\frac {\Delta _{r}H^{\ominus }}{R}}.} The definite integral between temperatures T1 and T2 is then ln ⁡ K 2 K 1 = Δ r H ⊖ R ( 1 T 1 − 1 T 2 ) . {\displaystyle \ln {\frac {K_{2}}{K_{1}}}={\frac {\Delta _{r}H^{\ominus }}{R}}\left({\frac {1}{T_{1}}}-{\frac {1}{T_{2}}}\right).} In this equation K1 is the equilibrium constant at absolute temperature T1, and K2 is the equilibrium constant at absolute temperature T2. === Development from thermodynamics === Combining the well-known formula for the Gibbs free energy of reaction Δ r G ⊖ = Δ r H ⊖ − T Δ r S ⊖ , {\displaystyle \Delta _{r}G^{\ominus }=\Delta _{r}H^{\ominus }-T\Delta _{r}S^{\ominus },} where Δ r S ⊖ {\displaystyle \Delta _{r}S^{\ominus }} is the standard entropy change of reaction, with the Gibbs free energy isotherm equation: Δ r G ⊖ = − R T ln ⁡ K e q , {\displaystyle \Delta _{r}G^{\ominus }=-RT\ln K_{\mathrm {eq} },} we obtain ln ⁡ K e q = − Δ r H ⊖ R T + Δ r S ⊖ R . {\displaystyle \ln K_{\mathrm {eq} }=-{\frac {\Delta _{r}H^{\ominus }}{RT}}+{\frac {\Delta _{r}S^{\ominus }}{R}}.} Differentiation of this expression with respect to the variable T while assuming that both Δ r H ⊖ {\displaystyle \Delta _{r}H^{\ominus }} and Δ r S ⊖ {\displaystyle \Delta _{r}S^{\ominus }} are independent of T yields the Van 't Hoff equation. These assumptions are expected to break down somewhat for large temperature variations.
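As a rough numerical illustration of the integrated form given under "Summary and uses" above (a minimal sketch; the reaction, temperatures, and enthalpy value are hypothetical, and the standard reaction enthalpy is assumed constant over the interval):

import math

R = 8.314  # gas constant, J/(mol K)

def k2_from_k1(k1, t1, t2, delta_r_h):
    # integrated Van 't Hoff equation: ln(K2/K1) = (ΔrH°/R)(1/T1 - 1/T2)
    return k1 * math.exp((delta_r_h / R) * (1.0 / t1 - 1.0 / t2))

# Hypothetical exothermic reaction: K = 50 at 298.15 K, ΔrH° = -40 kJ/mol.
print(k2_from_k1(50.0, 298.15, 323.15, -40.0e3))   # ≈ 14

For this assumed exothermic process the equilibrium constant decreases as the temperature is raised, exactly as the sign of the slope in the equation predicts.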
Provided that Δ r H ⊖ {\displaystyle \Delta _{r}H^{\ominus }} and Δ r S ⊖ {\displaystyle \Delta _{r}S^{\ominus }} are constant, the expression for ln Keq obtained above is a linear function of 1/T and hence is known as the linear form of the Van 't Hoff equation. Therefore, when the range in temperature is small enough that the standard reaction enthalpy and reaction entropy are essentially constant, a plot of the natural logarithm of the equilibrium constant versus the reciprocal temperature gives a straight line. The slope of the line may be multiplied by minus the gas constant R (that is, ΔrH⊖ = −R × slope) to obtain the standard enthalpy change of the reaction, and the intercept may be multiplied by R to obtain the standard entropy change. === Van 't Hoff isotherm === The Van 't Hoff isotherm can be used to determine the temperature dependence of the Gibbs free energy of reaction for non-standard state reactions at a constant temperature: ( d G d ξ ) T , p = Δ r G = Δ r G ⊖ + R T ln ⁡ Q r , {\displaystyle \left({\frac {dG}{d\xi }}\right)_{T,p}=\Delta _{\mathrm {r} }G=\Delta _{\mathrm {r} }G^{\ominus }+RT\ln Q_{\mathrm {r} },} where Δ r G {\displaystyle \Delta _{\mathrm {r} }G} is the Gibbs free energy of reaction under non-standard states at temperature T {\displaystyle T} , Δ r G ⊖ {\displaystyle \Delta _{r}G^{\ominus }} is the Gibbs free energy for the reaction at ( T , P 0 ) {\displaystyle (T,P^{0})} , ξ {\displaystyle \xi } is the extent of reaction, and Qr is the thermodynamic reaction quotient. Since Δ r G ⊖ = − R T ln ⁡ K e q {\displaystyle \Delta _{r}G^{\ominus }=-RT\ln K_{eq}} , the temperature dependence of both terms can be described by Van 't Hoff equations as a function of T. This finds applications in the field of electrochemistry, particularly in the study of the temperature dependence of voltaic cells. The isotherm can also be used at fixed temperature to describe the Law of Mass Action. When a reaction is at equilibrium, Qr = Keq and Δ r G = 0 {\displaystyle \Delta _{\mathrm {r} }G=0} . Otherwise, the Van 't Hoff isotherm predicts the direction that the system must shift in order to achieve equilibrium; when ΔrG < 0, the reaction moves in the forward direction, whereas when ΔrG > 0, the reaction moves in the backward direction. See Chemical equilibrium. == Van 't Hoff plot == For a reversible reaction, the equilibrium constant can be measured at a variety of temperatures. These data can be plotted on a graph with ln Keq on the y-axis and 1/T on the x-axis. The data should have a linear relationship, the equation for which can be found by fitting the data using the linear form of the Van 't Hoff equation ln ⁡ K e q = − Δ r H ⊖ R T + Δ r S ⊖ R . {\displaystyle \ln K_{\mathrm {eq} }=-{\frac {\Delta _{r}H^{\ominus }}{RT}}+{\frac {\Delta _{r}S^{\ominus }}{R}}.} This graph is called the "Van 't Hoff plot" and is widely used to estimate the enthalpy and entropy of a chemical reaction. From this plot, −⁠ΔrH/R⁠ is the slope, and ⁠ΔrS/R⁠ is the intercept of the linear fit. By measuring the equilibrium constant, Keq, at different temperatures, the Van 't Hoff plot can be used to assess a reaction when temperature changes. Knowing the slope and intercept from the Van 't Hoff plot, the enthalpy and entropy of a reaction can be easily obtained using Δ r H = − R × slope , Δ r S = R × intercept .
{\displaystyle {\begin{aligned}\Delta _{r}H&=-R\times {\text{slope}},\\\Delta _{r}S&=R\times {\text{intercept}}.\end{aligned}}} The Van 't Hoff plot can be used to quickly determine the enthalpy of a chemical reaction both qualitatively and quantitatively. This change in enthalpy can be positive or negative, leading to two major forms of the Van 't Hoff plot. === Endothermic reactions === For an endothermic reaction, heat is absorbed, making the net enthalpy change positive. Thus, according to the definition of the slope: slope = − Δ r H R , {\displaystyle {\text{slope}}=-{\frac {\Delta _{r}H}{R}},} When the reaction is endothermic, ΔrH > 0 (and the gas constant R > 0), so slope = − Δ r H R < 0. {\displaystyle {\text{slope}}=-{\frac {\Delta _{r}H}{R}}<0.} Thus, for an endothermic reaction, the Van 't Hoff plot should always have a negative slope. === Exothermic reactions === For an exothermic reaction, heat is released, making the net enthalpy change negative. Thus, according to the definition of the slope: slope = − Δ r H R , {\displaystyle {\text{slope}}=-{\frac {\Delta _{r}H}{R}},} For an exothermic reaction ΔrH < 0, so slope = − Δ r H R > 0. {\displaystyle {\text{slope}}=-{\frac {\Delta _{r}H}{R}}>0.} Thus, for an exothermic reaction, the Van 't Hoff plot should always have a positive slope. === Error propagation === At first glance, using the fact that ΔrG⊖ = −RT ln K = ΔrH⊖ − TΔrS⊖ it would appear that two measurements of K would suffice to be able to obtain an accurate value of ΔrH⊖: Δ r H ⊖ = R ln ⁡ K 1 − ln ⁡ K 2 1 T 2 − 1 T 1 , {\displaystyle \Delta _{r}H^{\ominus }=R{\frac {\ln K_{1}-\ln K_{2}}{{\frac {1}{T_{2}}}-{\frac {1}{T_{1}}}}},} where K1 and K2 are the equilibrium constant values obtained at temperatures T1 and T2 respectively. However, the precision of ΔrH⊖ values obtained in this way is highly dependent on the precision of the measured equilibrium constant values. The use of error propagation shows that the error in ΔrH⊖ will be about 76 kJ/mol times the experimental uncertainty in (ln K1 − ln K2), or about 110 kJ/mol times the uncertainty in the ln K values. Similar considerations apply to the entropy of reaction obtained from ΔrS⊖ = ⁠1/T⁠(ΔH⊖ + RT ln K). Notably, when equilibrium constants are measured at three or more temperatures, values of ΔrH⊖ and ΔrS⊖ are often obtained by straight-line fitting. The expectation is that the error will be reduced by this procedure, although the assumption that the enthalpy and entropy of reaction are constant may or may not prove to be correct. If there is significant temperature dependence in either or both quantities, it should manifest itself in nonlinear behavior in the Van 't Hoff plot; however, more than three data points would presumably be needed in order to observe this. == Applications of the Van 't Hoff plot == === Van 't Hoff analysis === In biological research, the Van 't Hoff plot is also called Van 't Hoff analysis. It is most effective in determining the favored product in a reaction. It may obtain results different from direct calorimetry such as differential scanning calorimetry or isothermal titration calorimetry due to various effects other than experimental error. Assume two products B and C form in a reaction: a A + d D → b B, a A + d D → c C. In this case, Keq can be defined as ratio of B to C rather than the equilibrium constant. When ⁠B/C⁠ > 1, B is the favored product, and the data on the Van 't Hoff plot will be in the positive region. 
When ⁠B/C⁠ < 1, C is the favored product, and the data on the Van 't Hoff plot will be in the negative region. Using this information, a Van 't Hoff analysis can help determine the most suitable temperature for a favored product. In 2010, a Van 't Hoff analysis was used to determine whether water preferentially forms a hydrogen bond with the C-terminus or the N-terminus of the amino acid proline. The equilibrium constant for each reaction was found at a variety of temperatures, and a Van 't Hoff plot was created. This analysis showed that enthalpically, the water preferred to hydrogen bond to the C-terminus, but entropically it was more favorable to hydrogen bond with the N-terminus. Specifically, they found that C-terminus hydrogen bonding was favored by 4.2–6.4 kJ/mol. The N-terminus hydrogen bonding was favored by 31–43 J/(K mol). This data alone could not conclude which site water will preferentially hydrogen-bond to, so additional experiments were used. It was determined that at lower temperatures, the enthalpically favored species, the water hydrogen-bonded to the C-terminus, was preferred. At higher temperatures, the entropically favored species, the water hydrogen-bonded to the N-terminus, was preferred. === Mechanistic studies === A chemical reaction may undergo different reaction mechanisms at different temperatures. In this case, a Van 't Hoff plot with two or more linear fits may be exploited. Each linear fit has a different slope and intercept, which indicates different changes in enthalpy and entropy for each distinct mechanisms. The Van 't Hoff plot can be used to find the enthalpy and entropy change for each mechanism and the favored mechanism under different temperatures. Δ r H 1 = − R × slope 1 , Δ r S 1 = R × intercept 1 ; Δ r H 2 = − R × slope 2 , Δ r S 2 = R × intercept 2 . {\displaystyle {\begin{aligned}\Delta _{r}H_{1}&=-R\times {\text{slope}}_{1},&\Delta _{r}S_{1}&=R\times {\text{intercept}}_{1};\\[5pt]\Delta _{r}H_{2}&=-R\times {\text{slope}}_{2},&\Delta _{r}S_{2}&=R\times {\text{intercept}}_{2}.\end{aligned}}} In the example figure, the reaction undergoes mechanism 1 at high temperature and mechanism 2 at low temperature. === Temperature dependence === If the enthalpy and entropy are roughly constant as temperature varies over a certain range, then the Van 't Hoff plot is approximately linear when plotted over that range. However, in some cases the enthalpy and entropy do change dramatically with temperature. A first-order approximation is to assume that the two different reaction products have different heat capacities. Incorporating this assumption yields an additional term ⁠c/T2⁠ in the expression for the equilibrium constant as a function of temperature. A polynomial fit can then be used to analyze data that exhibits a non-constant standard enthalpy of reaction: ln ⁡ K e q = a + b T + c T 2 , {\displaystyle \ln K_{\mathrm {eq} }=a+{\frac {b}{T}}+{\frac {c}{T^{2}}},} where Δ r H = − R ( b + 2 c T ) , Δ r S = R ( a − c T 2 ) . {\displaystyle {\begin{aligned}\Delta _{r}H&=-R\left(b+{\frac {2c}{T}}\right),\\\Delta _{r}S&=R\left(a-{\frac {c}{T^{2}}}\right).\end{aligned}}} Thus, the enthalpy and entropy of a reaction can still be determined at specific temperatures even when a temperature dependence exists. 
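In the simpler constant-enthalpy case, the fitting step behind a Van 't Hoff plot is ordinary linear regression of ln Keq against 1/T. The sketch below is illustrative only: the "measured" equilibrium constants are synthetic values generated to be roughly consistent with ΔrH ≈ −60 kJ/mol and ΔrS ≈ −120 J/(K·mol), so the fit should recover approximately those numbers.

import math

R = 8.314  # J/(mol K)

# Hypothetical equilibrium constants measured at several temperatures.
temps_K = [290.0, 300.0, 310.0, 320.0, 330.0]
k_eq = [3.5e4, 1.5e4, 7.0e3, 3.4e3, 1.7e3]

x = [1.0 / t for t in temps_K]       # 1/T
y = [math.log(k) for k in k_eq]      # ln K

# ordinary least-squares fit of y = slope*x + intercept
n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n
slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
         / sum((xi - x_bar) ** 2 for xi in x))
intercept = y_bar - slope * x_bar

delta_r_h = -R * slope        # J/mol
delta_r_s = R * intercept     # J/(K mol)
print(delta_r_h / 1000.0, delta_r_s)   # ≈ -60 kJ/mol and ≈ -120 J/(K mol)

In practice the same fit is usually accompanied by an estimate of the uncertainty in the slope, since (as discussed under error propagation above) small errors in ln K translate into large errors in the derived enthalpy.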
=== Surfactant self-assembly === The Van 't Hoff relation is particularly useful for the determination of the micellization enthalpy ΔH⊖m of surfactants from the temperature dependence of the critical micelle concentration (CMC): d d T ln ⁡ C M C = Δ H m ⊖ R T 2 . {\displaystyle {\frac {d}{dT}}\ln \mathrm {CMC} ={\frac {\Delta H_{\mathrm {m} }^{\ominus }}{RT^{2}}}.} However, the relation loses its validity when the aggregation number is also temperature-dependent, and the following relation should be used instead: R T 2 ( ∂ ∂ T ln ⁡ C M C ) P = − Δ r H m ⊖ ( N ) + T ( ∂ ∂ N ( G N + 1 − G N ) ) T , P ( ∂ N ∂ T ) P , {\displaystyle RT^{2}\left({\frac {\partial }{\partial T}}\ln \mathrm {CMC} \right)_{P}=-\Delta _{r}H_{\mathrm {m} }^{\ominus }(N)+T\left({\frac {\partial }{\partial N}}\left(G_{N+1}-G_{N}\right)\right)_{T,P}\left({\frac {\partial N}{\partial T}}\right)_{P},} with GN + 1 and GN being the free energies of the surfactant in a micelle with aggregation number N + 1 and N respectively. This effect is particularly relevant for nonionic ethoxylated surfactants or polyoxypropylene–polyoxyethylene block copolymers (Poloxamers, Pluronics, Synperonics). The extended equation can be exploited for the extraction of aggregation numbers of self-assembled micelles from differential scanning calorimetric thermograms. == See also == Clausius–Clapeyron relation Van 't Hoff factor (i) Gibbs–Helmholtz equation Solubility equilibrium Arrhenius equation == References ==
Wikipedia/Van_'t_Hoff_equation
In chemical kinetics, a reaction rate constant or reaction rate coefficient (⁠ k {\displaystyle k} ⁠) is a proportionality constant which quantifies the rate and direction of a chemical reaction by relating it to the concentration of reactants. For a reaction between reactants A and B to form a product C, a A + b B → c C, where A and B are reactants, C is a product, and a, b, and c are stoichiometric coefficients, the reaction rate is often found to have the form: r = k [ A ] m [ B ] n {\displaystyle r=k[\mathrm {A} ]^{m}[\mathrm {B} ]^{n}} Here ⁠ k {\displaystyle k} ⁠ is the reaction rate constant that depends on temperature, and [A] and [B] are the molar concentrations of substances A and B in moles per unit volume of solution, assuming the reaction is taking place throughout the volume of the solution. (For a reaction taking place at a boundary, one would use moles of A or B per unit area instead.) The exponents m and n are called partial orders of reaction and are not generally equal to the stoichiometric coefficients a and b. Instead they depend on the reaction mechanism and can be determined experimentally. The sum of m and n, that is, (m + n), is called the overall order of reaction. == Elementary steps == For an elementary step, there is a relationship between stoichiometry and rate law, as determined by the law of mass action. Almost all elementary steps are either unimolecular or bimolecular. For a unimolecular step the reaction rate is described by r = k 1 [ A ] {\displaystyle r=k_{1}[\mathrm {A} ]} , where k 1 {\displaystyle k_{1}} is a unimolecular rate constant. Since a reaction requires a change in molecular geometry, unimolecular rate constants cannot be larger than the frequency of a molecular vibration. Thus, in general, a unimolecular rate constant has an upper limit of k1 ≤ ~10¹³ s−1. For a bimolecular step the reaction rate is described by r = k 2 [ A ] [ B ] {\displaystyle r=k_{2}[\mathrm {A} ][\mathrm {B} ]} , where k 2 {\displaystyle k_{2}} is a bimolecular rate constant. Bimolecular rate constants have an upper limit that is determined by how frequently molecules can collide, and the fastest such processes are limited by diffusion. Thus, in general, a bimolecular rate constant has an upper limit of k2 ≤ ~10¹⁰ M−1s−1. For a termolecular step the reaction rate is described by r = k 3 [ A ] [ B ] [ C ] {\displaystyle r=k_{3}[\mathrm {A} ][\mathrm {B} ][\mathrm {C} ]} , where k 3 {\displaystyle k_{3}} is a termolecular rate constant. There are few examples of elementary steps that are termolecular or higher order, due to the low probability of three or more molecules colliding in their reactive conformations and in the right orientation relative to each other to reach a particular transition state. There are, however, some termolecular examples in the gas phase. Most involve the recombination of two atoms or small radicals or molecules in the presence of an inert third body which carries off excess energy, such as O + O2 + N2 → O3 + N2. One well-established example is the termolecular step 2 I + H2 → 2 HI in the hydrogen-iodine reaction. In cases where a termolecular step might plausibly be proposed, one of the reactants is generally present in high concentration (e.g., as a solvent or diluent gas). == Relationship to other parameters == For a first-order reaction (including a unimolecular one-step process), there is a direct relationship between the unimolecular rate constant and the half-life of the reaction: t 1 / 2 = ln ⁡ 2 k {\textstyle t_{1/2}={\frac {\ln 2}{k}}} .
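As a small numerical illustration of the definitions above (a minimal sketch; the rate constant and concentration values are arbitrary example numbers, not data from any particular reaction):

import math

# Second-order rate law r = k2 [A][B] with hypothetical values.
k2 = 0.30                     # L mol^-1 s^-1
rate = k2 * 0.050 * 0.020     # [A] = 0.050 M, [B] = 0.020 M
print(rate)                   # 3.0e-4 mol L^-1 s^-1

# First-order (unimolecular) decay: half-life follows directly from the rate constant.
k1 = 1.0e-4                   # s^-1
print(math.log(2) / k1 / 3600.0)   # ≈ 1.9 h, the "approximately 2 hours" rule of thumb quoted below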
Transition state theory gives a relationship between the rate constant k ( T ) {\displaystyle k(T)} and the Gibbs free energy of activation Δ G ‡ = Δ H ‡ − T Δ S ‡ {\displaystyle {\Delta G^{\ddagger }=\Delta H^{\ddagger }-T\Delta S^{\ddagger }}} , a quantity that can be regarded as the free energy change needed to reach the transition state. In particular, this energy barrier incorporates both enthalpic ( Δ H ‡ {\displaystyle \Delta H^{\ddagger }} ) and entropic ( Δ S ‡ {\displaystyle \Delta S^{\ddagger }} ) changes that need to be achieved for the reaction to take place: The result from transition state theory is k ( T ) = k B T h e − Δ G ‡ / R T {\textstyle k(T)={\frac {k_{\mathrm {B} }T}{h}}e^{-\Delta G^{\ddagger }/RT}} , where h is the Planck constant and R the molar gas constant. As useful rules of thumb, a first-order reaction with a rate constant of 10−4 s−1 will have a half-life (t1/2) of approximately 2 hours. For a one-step process taking place at room temperature, the corresponding Gibbs free energy of activation (ΔG‡) is approximately 23 kcal/mol. == Dependence on temperature == The Arrhenius equation is an elementary treatment that gives the quantitative basis of the relationship between the activation energy and the reaction rate at which a reaction proceeds. The rate constant as a function of thermodynamic temperature is then given by: k ( T ) = A e − E a / R T {\displaystyle k(T)=Ae^{-E_{\mathrm {a} }/RT}} The reaction rate is given by: r = A e − E a / R T [ A ] m [ B ] n , {\displaystyle r=Ae^{-E_{\mathrm {a} }/RT}[\mathrm {A} ]^{m}[\mathrm {B} ]^{n},} where Ea is the activation energy, and R is the gas constant, and m and n are experimentally determined partial orders in [A] and [B], respectively. Since at temperature T the molecules have energies according to a Boltzmann distribution, one can expect the proportion of collisions with energy greater than Ea to vary with e−Ea⁄RT. The constant of proportionality A is the pre-exponential factor, or frequency factor (not to be confused here with the reactant A) takes into consideration the frequency at which reactant molecules are colliding and the likelihood that a collision leads to a successful reaction. Here, A has the same dimensions as an (m + n)-order rate constant (see Units below). Another popular model that is derived using more sophisticated statistical mechanical considerations is the Eyring equation from transition state theory: k ( T ) = κ k B T h ( c ⊖ ) 1 − M e − Δ G ‡ / R T = ( κ k B T h ( c ⊖ ) 1 − M ) e Δ S ‡ / R e − Δ H ‡ / R T , {\displaystyle k(T)=\kappa {\frac {k_{\mathrm {B} }T}{h}}(c^{\ominus })^{1-M}e^{-\Delta G^{\ddagger }/RT}=\left(\kappa {\frac {k_{\mathrm {B} }T}{h}}(c^{\ominus })^{1-M}\right)e^{\Delta S^{\ddagger }/R}e^{-\Delta H^{\ddagger }/RT},} where ΔG‡ is the free energy of activation, a parameter that incorporates both the enthalpy and entropy change needed to reach the transition state. The temperature dependence of ΔG‡ is used to compute these parameters, the enthalpy of activation ΔH‡ and the entropy of activation ΔS‡, based on the defining formula ΔG‡ = ΔH‡ − TΔS‡. In effect, the free energy of activation takes into account both the activation energy and the likelihood of successful collision, while the factor kBT/h gives the frequency of molecular collision. The factor (c⊖)1-M ensures the dimensional correctness of the rate constant when the transition state in question is bimolecular or higher. 
Here, c⊖ is the standard concentration, generally chosen based on the unit of concentration used (usually c⊖ = 1 mol L−1 = 1 M), and M is the molecularity of the transition state. Lastly, κ, usually set to unity, is known as the transmission coefficient, a parameter which essentially serves as a "fudge factor" for transition state theory. The biggest difference between the two theories is that Arrhenius theory attempts to model the reaction (single- or multi-step) as a whole, while transition state theory models the individual elementary steps involved. Thus, they are not directly comparable, unless the reaction in question involves only a single elementary step. Finally, in the past, collision theory, in which reactants are viewed as hard spheres with a particular cross-section, provided yet another common way to rationalize and model the temperature dependence of the rate constant, although this approach has gradually fallen into disuse. The equation for the rate constant is similar in functional form to both the Arrhenius and Eyring equations: k ( T ) = P Z e − Δ E / R T , {\displaystyle k(T)=PZe^{-\Delta E/RT},} where P is the steric (or probability) factor and Z is the collision frequency, and ΔE is energy input required to overcome the activation barrier. Of note, Z ∝ T 1 / 2 {\displaystyle Z\propto T^{1/2}} , making the temperature dependence of k different from both the Arrhenius and Eyring models. === Comparison of models === All three theories model the temperature dependence of k using an equation of the form k ( T ) = C T α e − Δ E / R T {\displaystyle k(T)=CT^{\alpha }e^{-\Delta E/RT}} for some constant C, where α = 0, 1⁄2, and 1 give Arrhenius theory, collision theory, and transition state theory, respectively, although the imprecise notion of ΔE, the energy needed to overcome the activation barrier, has a slightly different meaning in each theory. In practice, experimental data does not generally allow a determination to be made as to which is "correct" in terms of best fit. Hence, all three are conceptual frameworks that make numerous assumptions, both realistic and unrealistic, in their derivations. As a result, they are capable of providing different insights into a system. == Units == The units of the rate constant depend on the overall order of reaction. If concentration is measured in units of mol·L−1 (sometimes abbreviated as M), then For order (m + n), the rate constant has units of mol1−(m+n)·L(m+n)−1·s−1 (or M1−(m+n)·s−1) For order zero, the rate constant has units of mol·L−1·s−1 (or M·s−1) For order one, the rate constant has units of s−1 For order two, the rate constant has units of L·mol−1·s−1 (or M−1·s−1) For order three, the rate constant has units of L2·mol−2·s−1 (or M−2·s−1) For order four, the rate constant has units of L3·mol−3·s−1 (or M−3·s−1) == Plasma and gases == Calculation of rate constants of the processes of generation and relaxation of electronically and vibrationally excited particles are of significant importance. It is used, for example, in the computer simulation of processes in plasma chemistry or microelectronics. First-principle based models should be used for such calculation. It can be done with the help of computer simulation software. == Rate constant calculations == Rate constant can be calculated for elementary reactions by molecular dynamics simulations. One possible approach is to calculate the mean residence time of the molecule in the reactant state. 
Although this is feasible for small systems with short residence times, this approach is not widely applicable as reactions are often rare events on molecular scale. One simple approach to overcome this problem is Divided Saddle Theory. Such other methods as the Bennett Chandler procedure, and Milestoning have also been developed for rate constant calculations. == Divided saddle theory == The theory is based on the assumption that the reaction can be described by a reaction coordinate, and that we can apply Boltzmann distribution at least in the reactant state. A new, especially reactive segment of the reactant, called the saddle domain, is introduced, and the rate constant is factored: k = k S D ⋅ α R S S D {\displaystyle k=k_{\mathrm {SD} }\cdot \alpha _{\mathrm {RS} }^{\mathrm {SD} }} where αSDRS is the conversion factor between the reactant state and saddle domain, while kSD is the rate constant from the saddle domain. The first can be simply calculated from the free energy surface, the latter is easily accessible from short molecular dynamics simulations == See also == Reaction rate Equilibrium constant Molecularity == References ==
Wikipedia/Rate_constant
Accelerated aging is testing that uses aggravated conditions of heat, humidity, oxygen, sunlight, vibration, etc. to speed up the normal aging processes of items. It is used to help determine the long-term effects of expected levels of stress within a shorter time, usually in a laboratory by controlled standard test methods. It is used to estimate the useful lifespan of a product or its shelf life when actual lifespan data is unavailable. This occurs with products that have not existed long enough to have gone through their useful lifespan: for example, a new type of car engine or a new polymer for replacement joints. Physical testing or chemical testing is carried out by subjecting the product to representative levels of stress for long time periods, unusually high levels of stress used to accelerate the effects of natural aging, or levels of stress that intentionally force failures (for further analysis). Mechanical parts are run at very high speed, far in excess of what they would receive in normal usage. Polymers are often kept at elevated temperatures, in order to accelerate chemical breakdown. Environmental chambers are often used. Also, the device or material under test can be exposed to rapid (but controlled) changes in temperature, humidity, pressure, strain, etc. For example, cycles of heat and cold can simulate the effect of day and night for a few hours or minutes. == Techniques and methods == Accelerated aging employs a variety of controlled methods to replicate and speed up the effects of natural aging. These methods vary depending on the type of product, material, or environmental condition being simulated. Below are the most commonly used techniques: === Environmental Stress testing === ==== Temperature cycling ==== Samples are exposed to repeated cycles of extreme heat and cold, mimicking daily or seasonal temperature fluctuations. For example, in the automotive industry, components like engines and braking systems are tested using temperature cycling to simulate real-world conditions such as hot desert climates during the day and freezing temperatures at night. In electronics, printed circuit boards (PCBs) are subjected to rapid temperature shifts to evaluate solder joint reliability and material resilience. ==== Thermal shock ==== Thermal shock refers to the rapid exposure of materials or components to extreme temperature differences over a very short period. Unlike temperature cycling, which involves gradual changes between high and low temperatures, thermal shock imposes abrupt transitions that can lead to immediate stresses within a material. This method is often used to evaluate a product's resistance to cracking, warping, or other forms of failure caused by sudden thermal gradients. For example, glass or ceramic components in aerospace applications are subjected to thermal shock tests to ensure durability under high-speed atmospheric reentry conditions. Thermal shock chambers are specialized devices that facilitate rapid temperature transitions to simulate extreme environmental conditions. These chambers typically consist of two or three zones with distinct temperature settings—high, low, and sometimes ambient. A product carrier basket automatically transports the test specimens between these zones, ensuring swift temperature changes. BGA components are particularly susceptible to failures induced by thermal shock due to the mechanical stresses exerted on solder joints during rapid temperature changes. 
Research has shown that thermal shock can lead to the initiation and propagation of cracks within these solder joints, compromising the integrity and reliability of electronic assemblies. The reliability of PCB assemblies often hinges on the durability of their solder joints. In harsh environments characterized by significant temperature variations, these joints are prone to crack formation and eventual fracture, underscoring the importance of rigorous thermal shock testing in the design and assessment of electronic components. Beyond electronics, thermal shock testing is employed across various industries to assess the durability of materials and products subjected to rapid temperature changes. For example, in the automotive sector, components such as engine parts and safety equipment are tested to ensure they can withstand the thermal stresses encountered during operation. Similarly, "the aerospace industry uses environmental test chambers to test parts like avionics, satellite equipment, and airplane parts. These parts need to be able to handle extreme temperatures during launch, re-entry, and in space. ==== Humidity testing ==== Humidity testing involves subjecting materials or products to high levels of moisture or fluctuating humidity conditions to simulate exposure to tropical, coastal, or industrial environments. This method is used to evaluate the effects of moisture on material degradation, corrosion, swelling, and overall performance. For example, electronic devices undergo humidity testing to ensure their enclosures and seals can prevent moisture ingress, while construction materials such as wood or adhesives are tested to evaluate resistance to warping or delamination. Humidity testing is often conducted in combination with elevated temperatures to accelerate the effects of moisture exposure, particularly for materials like polymers, metals, and composites. ==== UV exposure ==== UV testing is a component of aging tests designed to simulate the long-term effects of ultraviolet (UV) radiation exposure on materials, products, and coatings. UV radiation, a component of sunlight, is one of the primary contributors to material degradation over time. UV testing helps assess the durability and performance of materials under prolonged exposure to UV light, providing insights into their expected lifespan and identifying potential vulnerabilities. Purpose and Applications: The primary purpose of UV testing is to evaluate the resistance of materials to photodegradation, including fading, discoloration, cracking, embrittlement, or loss of mechanical properties. Common applications of UV testing include: Plastics and Polymers: Assessing the weatherability of polymers used in outdoor products. Coatings and Paints: Ensuring the durability of protective and decorative coatings exposed to sunlight. Textiles: Evaluating the fade resistance of fabrics and dyes. Testing Methods: Accelerated UV Testing: This approach uses specialized equipment, such as xenon arc or fluorescent UV lamps, to simulate UV radiation in a controlled environment. Common standards include ASTM G154 (fluorescent UV lamps) and ASTM G155 (xenon arc lamps). ==== Oxygen and pollutant exposure ==== Samples are exposed to controlled concentrations of oxygen or atmospheric pollutants (e.g., ozone or sulfur dioxide) to simulate oxidative degradation or corrosion. 
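For the combined temperature-humidity exposures described above, the acceleration achieved by the chamber conditions relative to service conditions is often estimated with an empirical model such as Peck's temperature-humidity relationship. The sketch below only illustrates that calculation: the function name is arbitrary, and the activation energy and humidity exponent are assumed placeholder values that must be fitted to the material and failure mechanism of interest rather than taken as universal constants.

import math

K_B = 8.617e-5   # Boltzmann constant, eV/K

def peck_acceleration_factor(t_use_c, rh_use, t_test_c, rh_test,
                             ea_ev=0.7, n=2.7):
    # Peck-type model: AF = (RH_test/RH_use)^n * exp[(Ea/k)(1/T_use - 1/T_test)]
    # ea_ev and n are material-dependent assumptions, not universal constants.
    t_use = t_use_c + 273.15
    t_test = t_test_c + 273.15
    humidity_term = (rh_test / rh_use) ** n
    temperature_term = math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_test))
    return humidity_term * temperature_term

# Example: an 85 °C / 85 % RH chamber versus assumed 30 °C / 60 % RH service conditions.
print(peck_acceleration_factor(30.0, 60.0, 85.0, 85.0))   # ≈ 160 under these assumed parameters

Under these assumed parameters one chamber hour corresponds to roughly 160 hours (about a week) of service exposure; in practice the model parameters are derived from test-to-field correlation rather than assumed.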
==== Salt spray test ==== Salt spray test, also known as salt fog testing, is a widely utilized method in environmental stress testing to assess the corrosion resistance of materials and surface coatings. By exposing specimens to a controlled saline environment, this accelerated aging test simulates the corrosive effects of marine and coastal conditions, providing valuable insights into a material's durability and longevity. ==== Dust testing ==== Dust testing is used to evaluate the resilience and performance of devices and systems exposed to particulate contaminants. In gas distribution networks, dust contamination can originate from various sources, notably black powder (sulfide-induced black dust). Hydrogen sulfide (H₂S) present in natural gas can react with metals, particularly copper, forming metal sulfides. Over time, these sulfides can flake off, creating fine black dust that poses risks to gas appliances and meters. Gas meters are susceptible to contamination from these types of dust. The ingress of metal dust can lead to mechanical wear (Abrasive particles can erode moving parts, compromising measurement accuracy), blockages (Accumulation of dust can obstruct internal pathways, affecting gas flow and meter functionality) and corrosion (Chemical interactions between dust particles and meter components can accelerate degradation). === Mechanical stress testing === Mechanical stress testing evaluates the durability of materials and components under repeated mechanical loads, simulating real-world conditions that may cause degradation over time. These tests help identify potential failures due to fatigue, wear, or structural weaknesses. ==== High-speed operation ==== High-speed operation tests assess how a material or device withstands prolonged exposure to rapid movement or mechanical cycling. This is commonly used in industries such as aerospace, automotive, and manufacturing, where components experience frequent high-speed motion. The test may include accelerated wear simulations, friction analysis, and thermal effects caused by rapid motion. ==== Vibration testing ==== Vibration testing simulates mechanical oscillations that a component may encounter during its lifespan. This method helps determine resistance to structural fatigue and mechanical resonance, which can lead to failure. Testing is performed using controlled vibration frequencies and amplitudes, often conducted with electrodynamic or hydraulic shakers. Industries such as electronics, transportation, and construction rely on vibration testing to improve product reliability and safety. === Chemical stability testing === Chemical stability testing evaluates the long-term resistance of materials and products to chemical degradation. This form of testing is crucial for determining how substances react to environmental factors such as heat, humidity, oxidation, and exposure to aggressive chemicals. It is widely used in industries such as pharmaceuticals, polymers, and coatings to ensure product reliability and safety. ==== Thermal aging ==== Thermal aging tests assess the effects of prolonged exposure to elevated temperatures on a material's chemical and physical properties. High temperatures can accelerate oxidation, polymer degradation, and phase transitions, leading to reduced mechanical strength and altered performance characteristics. These tests are commonly applied in the evaluation of plastics, rubber, lubricants, and electronic components. 
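For thermal aging driven by a single thermally activated mechanism, the equivalent service time represented by an oven exposure is commonly estimated with an Arrhenius acceleration factor, analogous to the humidity example earlier. This too is a hedged sketch: the activation energy is an assumed, material-specific input, and real degradation often involves more than one mechanism, so the numbers below are illustrative only.

import math

R = 8.314   # gas constant, J/(mol K)

def arrhenius_acceleration_factor(t_use_c, t_test_c, ea=80.0e3):
    # Ratio of degradation rates at the test and use temperatures for an
    # assumed activation energy ea (J/mol).
    t_use = t_use_c + 273.15
    t_test = t_test_c + 273.15
    return math.exp((ea / R) * (1.0 / t_use - 1.0 / t_test))

# Example: oven aging at 85 °C versus an assumed 25 °C service temperature.
af = arrhenius_acceleration_factor(25.0, 85.0)
print(af)              # ≈ 2e2 with these assumed values
print(8760.0 / af)     # ≈ 40 oven hours estimated to represent one year (8760 h) of service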
==== Chemical exposure ==== Chemical exposure testing examines the effects of contact with reactive substances, such as acids, bases, solvents, and oxidizing agents, on material stability. These tests help predict corrosion, swelling, discoloration, and structural degradation that may occur over time. Industries such as pharmaceuticals, aerospace, and construction use chemical exposure testing to assess product durability under real-world conditions. === Combined stress testing === Combined stress testing evaluates the aging behavior of materials, components, and products under multiple simultaneous stress factors. Unlike single-factor aging tests that examine specific conditions (e.g., heat, humidity, mechanical load, or chemical exposure), combined stress testing aims to replicate real-world conditions where multiple degradation mechanisms act together. This type of testing is essential for assessing long-term reliability, identifying failure modes, and improving product durability across industries such as aerospace, automotive, electronics, and pharmaceuticals. ==== Importance of combined stress testing ==== Many products are subjected to multiple environmental and mechanical stressors during their operational life. For instance, an electronic device in an automotive setting may experience high temperatures, humidity, vibrations, and cyclic mechanical loads simultaneously. Traditional single-variable aging tests may not accurately predict product lifespan when multiple factors interact in complex ways. By using combined stress testing, researchers and engineers can better understand synergetic degradation effects and develop more resilient materials and designs. ==== Synergistic effects in combined stress testing ==== One of the main challenges in combined stress testing is the presence of synergistic effects, where the impact of multiple stresses acting together is greater than the sum of their individual effects. For example: Heat and humidity – When combined, they can accelerate hydrolytic degradation in polymers more significantly than either factor alone. Mechanical stress and corrosion – Repeated loading can cause microcracks that allow corrosive agents to penetrate deeper, leading to faster material failure. Thermal cycling and electrical load – In electronics, repeated temperature changes combined with high current loads can cause solder joint fatigue, increasing the risk of electrical failures. ==== Challenges and future developments ==== Despite its advantages, combined stress testing presents challenges, such as: Increased complexity – Simulating multiple stress factors requires advanced test setups and longer evaluation times. Difficulties in failure attribution – Identifying which stressor is primarily responsible for a given failure mode can be challenging. High costs – The need for specialized equipment and extended test durations increases testing expenses. Emerging techniques, such as machine learning-based predictive modeling and multi-physics simulation, are being explored to optimize combined stress testing. These approaches allow researchers to better predict product performance under complex conditions while reducing the need for extensive physical testing. === Validation of results === The validation of aging test results is essential to ensure that the data obtained accurately represents the long-term performance and durability of a material, component, or product. 
Aging tests are designed to simulate real-world conditions over an accelerated timeframe, but validation is necessary to confirm that these simulations provide meaningful and reliable predictions. Validation involves statistical analysis, replication of results, correlation with field data, and compliance with industry standards. ==== Statistical analysis and reproducibility ==== Aging test results must undergo rigorous statistical analysis to determine their reliability and significance. Key statistical methods used in validation include standard deviation analysis, confidence interval estimation, and regression modeling to establish trends over time. Reproducibility is also critical; results must be consistent when tests are repeated under identical conditions. Inter-laboratory studies and round-robin testing are often conducted to ensure that independent research teams can replicate findings with minimal variance. ==== Correlation with real-world performance ==== To validate aging test results, researchers compare experimental outcomes with actual field data collected from long-term use of the product in its intended environment. This correlation helps assess whether the accelerated test conditions accurately simulate real degradation mechanisms. For example, in automotive materials testing, exposure to controlled ultraviolet (UV) light and humidity in a lab must reflect the wear observed in outdoor vehicle surfaces over years of service. If discrepancies arise, test conditions may require adjustment to better replicate environmental stressors. ==== Compliance with industry standards ==== Many industries follow established guidelines and standards to validate aging test results. Regulatory agencies and international organizations provide testing protocols to ensure uniformity and comparability across different studies. Examples include: ASTM International (ASTM) – Provides standardized methods for accelerated aging tests in various materials, including polymers, metals, and coatings. For instance, ASTM F1980 is a standard guide for accelerated aging of sterile barrier systems for medical devices. International Organization for Standardization (ISO) – Defines guidelines for environmental testing, including aging simulations for aerospace, automotive, and medical products. For example, ISO 11607-1 specifies requirements for packaging materials intended to maintain the sterility of medical devices. United States Pharmacopeia (USP) – Establishes criteria for drug stability testing to determine shelf life under varying storage conditions. USP General Chapter <1150> provides guidance on pharmaceutical stability. ==== Uncertainty and limitations in validation ==== Despite rigorous validation efforts, uncertainties exist in aging test results due to inherent limitations in accelerated testing methodologies. Factors contributing to uncertainty include: Extrapolation errors – Predicting long-term performance based on short-term accelerated tests may introduce errors if degradation mechanisms do not scale linearly. Environmental variability – Real-world conditions can be unpredictable, making it difficult to replicate exact field conditions in a laboratory setting. Material inconsistencies – Variability in raw materials, manufacturing processes, or usage conditions can affect long-term performance in ways that are not fully captured by controlled tests.
To address these uncertainties, sensitivity analysis and conservative safety margins are often applied in engineering designs and product specifications. == Applications == Accelerated aging is widely used across various industries to assess product longevity, reliability, and performance under simulated conditions. By exposing materials and components to intensified stressors, these tests help predict real-world degradation mechanisms within a reduced timeframe. Applications of accelerated aging include pharmaceuticals, medical devices, electronics, automotive materials, aerospace components, and consumer goods. === Pharmaceuticals and medical devices === In the pharmaceutical and medical industries, accelerated aging is critical for determining the shelf life and stability of drugs, vaccines, and sterile medical devices. Stability testing follows guidelines such as those outlined in the International Council for Harmonisation (ICH) Q1A(R2), which establishes protocols for subjecting pharmaceuticals to elevated temperature and humidity conditions. Medical device packaging validation, often performed according to ASTM F1980, ensures that sterile barrier integrity remains intact over time. === Electronics and semiconductors === Electronics undergo accelerated aging to evaluate the long-term reliability of circuit boards, semiconductors, and connectors. Tests such as Highly Accelerated Life Testing (HALT) and Highly Accelerated Stress Screening (HASS) are commonly used to identify early failures due to thermal cycling, mechanical vibrations, and electrical load. Standards like IEC 60068-2 provide guidelines for environmental testing of electronic devices. === Automotive industry === In the automotive sector, accelerated aging is used to test polymers, coatings, adhesives, and structural materials for resistance to heat, UV exposure, humidity, and mechanical stress. Xenon arc and QUV weathering tests (ISO 4892-2) simulate prolonged sun exposure to predict material degradation. Additionally, corrosion testing such as SAE J2334 replicates environmental conditions, including salt spray and humidity, to evaluate metal durability in vehicle components. === Aerospace and defense === === Consumer products and packaging === === Library and archival preservation science === Accelerated aging is also used in library and archival preservation science. In this context, a material, usually paper, is subjected to extreme conditions in an effort to speed up the natural aging process. Usually, the extreme conditions consist of elevated temperature, but tests making use of concentrated pollutants or intense light also exist. These tests may be used for several purposes. To predict the long-term effects of particular conservation treatments. In such a test, treated and untreated papers are both subjected to a single set of fixed, standardized conditions. The two are then compared in an effort to determine whether the treatment has a positive or negative effect on the lifespan of the paper. To study the basic processes of paper decay. In such a test, the purpose is not to predict a particular outcome for a specific type of paper, but rather to gain a greater understanding of the chemical mechanisms of decay. To predict the lifespan of a particular type of paper. In such a test, paper samples are generally subjected to several elevated temperatures and a constant level of relative humidity equivalent to the relative humidity in which they would be stored. 
The researcher then measures a relevant quality of the samples, such as folding endurance, at each temperature. This allows the researcher to determine how many days at each temperature it takes for a particular level of degradation to be reached. From the data collected, the researcher extrapolates the rate at which the samples might decay at lower temperatures, such as those at which the paper would be stored under normal conditions. In theory, this allows the researcher to predict the lifespan of the paper. This test is based on the Arrhenius equation. This type of test is, however, a subject of frequent criticism. There is no single recommended set of conditions at which these tests should be performed. In fact, temperatures from 22 to 160 degrees Celsius, relative humidities from 1% to 100%, and test durations from one hour to 180 days have all been used. ISO 5630-3 recommends accelerated aging at 80 degrees Celsius and 65% relative humidity when using a fixed set of conditions. Besides variations in the conditions to which the papers are subjected, there are also multiple ways in which the test can be set up. For instance, rather than simply placing single sheets in a climate-controlled chamber, the Library of Congress recommends sealing samples in an air-tight glass tube and aging the papers in stacks, which more closely resembles the way in which they are likely to age under normal circumstances than aging them as single sheets. ==== Limitations and criticisms ==== Accelerated aging techniques, particularly those using the Arrhenius equation, have frequently been criticized in recent decades. While some researchers claim that the Arrhenius equation can be used to quantitatively predict the lifespan of tested papers, other researchers disagree. Many argue that this method cannot predict an exact lifespan for the tested papers, but that it can be used to rank papers by permanence. A few researchers claim that even such rankings can be deceptive, and that these types of accelerated aging tests can only be used to determine whether a particular treatment or paper quality has a positive or negative effect on the paper's permanence. There are several reasons for this skepticism. One argument is that entirely different chemical processes take place at higher temperatures than at lower temperatures, which means the accelerated aging process and the natural aging process are not parallel. Another is that paper is a "complex system" and the Arrhenius equation is only applicable to elementary reactions. Other researchers criticize the ways in which deterioration is measured during these experiments. Some point out that there is no standard point at which a paper is considered unusable for library and archival purposes. Others claim that the degree of correlation between macroscopic, mechanical properties of paper and molecular, chemical deterioration has not been convincingly proven. Reservations have also been documented about the utility of this method for assessing corrosion performance in the automotive industry. In an effort to improve the quality of accelerated aging tests, some researchers have begun comparing materials which have undergone accelerated aging to materials which have undergone natural aging. The Library of Congress, for instance, began a long-term experiment in 2000 to compare artificially aged materials to materials allowed to undergo natural aging for a hundred years.
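The multi-temperature extrapolation described above can be illustrated numerically. The sketch below is a minimal example that fits ln(rate) against 1/T (the Arrhenius form) and extrapolates to a storage temperature; the degradation times and temperatures are hypothetical, and, given the criticisms discussed in this section, the result should be read as an illustration of the arithmetic rather than a validated lifespan prediction.

```python
import math

# Hypothetical data: days for folding endurance to fall to a chosen threshold
# at several oven temperatures (illustrative numbers only).
data = [(90.0, 12.0), (80.0, 30.0), (70.0, 80.0)]  # (temperature in C, days to threshold)

# Arrhenius form: ln(rate) = ln(A) - Ea/(R*T), with rate taken as 1/days.
xs = [1.0 / (t_c + 273.15) for t_c, _ in data]
ys = [math.log(1.0 / days) for _, days in data]

# Ordinary least-squares fit of y = a + b*x.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

# Extrapolate to an assumed storage temperature of 20 C.
t_store = 20.0 + 273.15
rate_store = math.exp(a + b / t_store)
print(f"Apparent activation energy: {-b * 8.314 / 1000:.0f} kJ/mol")
print(f"Extrapolated time to threshold at 20 C: {1.0 / rate_store / 365.25:.0f} years")
```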
==== History ==== The technique of artificially accelerating the deterioration of paper through heat was known by 1899, when it was described by W. Herzberg. Accelerated aging was further refined during the 1920s, with tests using sunlight and elevated temperatures being used to rank the permanence of various papers in the United States and Sweden. In 1929, a frequently used method in which 72 hours at 100 degrees Celsius is considered equivalent to 18–25 years of natural aging was established by R. H. Rasch. In the 1950s, researchers began to question the validity of accelerated aging tests which relied on dry heat and a single temperature, pointing out that relative humidity affects the chemical processes which produce paper degradation and that the reactions which cause degradation have different activation energies. This led researchers like Baer and Lindström to advocate accelerated aging techniques using the Arrhenius equation and a realistic relative humidity. == See also == Arrhenius equation Environmental stress screening Environmental chamber Highly Accelerated Life Test Planned obsolescence == References == == External links == Medical Plastics and Biomaterials Magazine
Wikipedia/Accelerated_aging
In chemical kinetics, the entropy of activation of a reaction is one of the two parameters (along with the enthalpy of activation) that are typically obtained from the temperature dependence of a reaction rate constant, when these data are analyzed using the Eyring equation of the transition state theory. The standard entropy of activation is symbolized ΔS‡ and equals the change in entropy when the reactants change from their initial state to the activated complex or transition state (Δ = change, S = entropy, ‡ = activation). == Importance == Entropy of activation determines the preexponential factor A of the Arrhenius equation for temperature dependence of reaction rates. The relationship depends on the molecularity of the reaction: for reactions in solution and unimolecular gas reactions $A = (e k_{\mathrm{B}} T/h)\exp(\Delta S^{\ddagger}/R)$, while for bimolecular gas reactions $A = (e^{2} k_{\mathrm{B}} T/h)(R'T/p)\exp(\Delta S^{\ddagger}/R)$. In these equations $e$ is the base of natural logarithms, $h$ is the Planck constant, $k_{\mathrm{B}}$ is the Boltzmann constant and $T$ the absolute temperature. $R'$ is the ideal gas constant; the factor $R'T/p$ is needed because of the pressure dependence of the reaction rate, with $R' = 8.3145\times 10^{-2}$ (bar·L)/(mol·K). The value of ΔS‡ provides clues about the molecularity of the rate determining step in a reaction, i.e. the number of molecules that enter this step. Positive values suggest that entropy increases upon achieving the transition state, which often indicates a dissociative mechanism in which the activated complex is loosely bound and about to dissociate. Negative values for ΔS‡ indicate that entropy decreases on forming the transition state, which often indicates an associative mechanism in which two reaction partners form a single activated complex. == Derivation == The entropy of activation can be obtained from the Eyring equation, which has the form $$k = \frac{\kappa k_{\mathrm{B}} T}{h}\, e^{\Delta S^{\ddagger}/R}\, e^{-\Delta H^{\ddagger}/RT}$$ where $k$ is the reaction rate constant, $T$ the absolute temperature, $\Delta H^{\ddagger}$ the enthalpy of activation, $R$ the gas constant, $\kappa$ the transmission coefficient, $k_{\mathrm{B}}$ the Boltzmann constant ($= R/N_{\mathrm{A}}$, with $N_{\mathrm{A}}$ the Avogadro constant), $h$ the Planck constant, and $\Delta S^{\ddagger}$ the entropy of activation. This equation can be rearranged to the form $$\ln\frac{k}{T} = \frac{-\Delta H^{\ddagger}}{R}\cdot\frac{1}{T} + \ln\frac{\kappa k_{\mathrm{B}}}{h} + \frac{\Delta S^{\ddagger}}{R}.$$ The plot of $\ln(k/T)$ versus $1/T$ gives a straight line with slope $-\Delta H^{\ddagger}/R$, from which the enthalpy of activation can be derived, and with intercept $\ln(\kappa k_{\mathrm{B}}/h) + \Delta S^{\ddagger}/R$, from which the entropy of activation is derived. == References ==
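The linearized Eyring plot described above translates directly into a short calculation. The following is a minimal sketch that fits ln(k/T) against 1/T and reads ΔH‡ from the slope and ΔS‡ from the intercept; the rate constants are hypothetical and the transmission coefficient is assumed to be κ = 1.

```python
import math

# Physical constants
R = 8.314462        # J/(mol*K)
KB = 1.380649e-23   # J/K
H = 6.62607015e-34  # J*s

# Hypothetical first-order rate constants (s^-1) at several temperatures (K).
data = [(290.0, 1.2e-3), (300.0, 3.4e-3), (310.0, 9.0e-3), (320.0, 2.2e-2)]

xs = [1.0 / T for T, _ in data]
ys = [math.log(k / T) for T, k in data]

# Least-squares fit of ln(k/T) = intercept + slope * (1/T), assuming kappa = 1.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

dH = -slope * R                          # slope = -dH‡/R
dS = (intercept - math.log(KB / H)) * R  # intercept = ln(kB/h) + dS‡/R when kappa = 1

print(f"Enthalpy of activation  ~ {dH / 1000:.1f} kJ/mol")
print(f"Entropy of activation   ~ {dS:.1f} J/(mol*K)")
```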
Wikipedia/Entropy_of_activation
In chemistry, the rate equation (also known as the rate law or empirical differential rate equation) is an empirical differential mathematical expression for the reaction rate of a given reaction in terms of concentrations of chemical species and constant parameters (normally rate coefficients and partial orders of reaction) only. For many reactions, the initial rate is given by a power law such as $v_0 = k[\mathrm{A}]^x[\mathrm{B}]^y$, where $[\mathrm{A}]$ and $[\mathrm{B}]$ are the molar concentrations of the species A and B, usually in moles per liter (molarity, M). The exponents $x$ and $y$ are the partial orders of reaction for A and B, respectively, and the overall reaction order is the sum of the exponents. These are often positive integers, but they may also be zero, fractional, or negative. The order of reaction is a number which quantifies the degree to which the rate of a chemical reaction depends on concentrations of the reactants. In other words, the order of reaction is the exponent to which the concentration of a particular reactant is raised. The constant $k$ is the reaction rate constant or rate coefficient and, at very few places, velocity constant or specific rate of reaction. Its value may depend on conditions such as temperature, ionic strength, surface area of an adsorbent, or light irradiation. If the reaction goes to completion, the rate equation for the reaction rate $v = k[\mathrm{A}]^x[\mathrm{B}]^y$ applies throughout the course of the reaction. Elementary (single-step) reactions and reaction steps have reaction orders equal to the stoichiometric coefficients for each reactant. The overall reaction order, i.e. the sum of stoichiometric coefficients of reactants, is always equal to the molecularity of the elementary reaction. However, complex (multi-step) reactions may or may not have reaction orders equal to their stoichiometric coefficients. This implies that the order and the rate equation of a given reaction cannot be reliably deduced from the stoichiometry and must be determined experimentally, since an unknown reaction mechanism could be either elementary or complex. When the experimental rate equation has been determined, it is often of use for deduction of the reaction mechanism. The rate equation of a reaction with an assumed multi-step mechanism can often be derived theoretically using quasi-steady state assumptions from the underlying elementary reactions, and compared with the experimental rate equation as a test of the assumed mechanism. The equation may involve a fractional order, and may depend on the concentration of an intermediate species. A reaction can also have an undefined reaction order with respect to a reactant if the rate is not simply proportional to some power of the concentration of that reactant; for example, one cannot talk about reaction order in the rate equation for a bimolecular reaction between adsorbed molecules: $$v_0 = k\,\frac{K_1 K_2 C_A C_B}{(1 + K_1 C_A + K_2 C_B)^2}.$$ == Definition == Consider a typical chemical reaction in which two reactants A and B combine to form a product C: A + 2B → 3C.
This can also be written $-\mathrm{A} - 2\mathrm{B} + 3\mathrm{C} = 0$. The prefactors −1, −2 and 3 (with negative signs for reactants because they are consumed) are known as stoichiometric coefficients. One molecule of A combines with two of B to form 3 of C, so if we use the symbol [X] for the molar concentration of chemical X, $$-\frac{d[\mathrm{A}]}{dt} = -\frac{1}{2}\frac{d[\mathrm{B}]}{dt} = \frac{1}{3}\frac{d[\mathrm{C}]}{dt}.$$ If the reaction takes place in a closed system at constant temperature and volume, without a build-up of reaction intermediates, the reaction rate $v$ is defined as $$v = \frac{1}{\nu_i}\frac{d[\mathrm{X}_i]}{dt},$$ where $\nu_i$ is the stoichiometric coefficient for chemical $\mathrm{X}_i$, with a negative sign for a reactant. The initial reaction rate $v_0 = v_{t=0}$ has some functional dependence on the concentrations of the reactants, $v_0 = f([\mathrm{A}], [\mathrm{B}], \ldots)$, and this dependence is known as the rate equation or rate law. This law generally cannot be deduced from the chemical equation and must be determined by experiment. == Power laws == A common form for the rate equation is a power law: $$v_0 = k[\mathrm{A}]^x[\mathrm{B}]^y \cdots$$ The constant $k$ is called the rate constant. The exponents, which can be fractional, are called partial orders of reaction and their sum is the overall order of reaction. In a dilute solution, an elementary reaction (one having a single step with a single transition state) is empirically found to obey the law of mass action. This predicts that the rate depends only on the concentrations of the reactants, raised to the powers of their stoichiometric coefficients. The differential rate equation for an elementary reaction using mathematical product notation is $$-\frac{d}{dt}[\text{Reactants}] = k \prod_i [\text{Reactants}_i],$$ where $-\frac{d}{dt}[\text{Reactants}]$ is the rate of change of reactant concentration with respect to time, $k$ is the rate constant of the reaction, and $\prod_i [\text{Reactants}_i]$ represents the concentrations of the reactants, raised to the powers of their stoichiometric coefficients and multiplied together. === Determination of reaction order === ==== Method of initial rates ==== The natural logarithm of the power-law rate equation is $$\ln v_0 = \ln k + x\ln[\mathrm{A}] + y\ln[\mathrm{B}] + \cdots$$ This can be used to estimate the order of reaction of each reactant. For example, the initial rate can be measured in a series of experiments at different initial concentrations of reactant A with all other concentrations $[\mathrm{B}], [\mathrm{C}], \ldots$ kept constant, so that $$\ln v_0 = x\ln[\mathrm{A}] + \text{constant}.$$
The slope of a graph of $\ln v_0$ as a function of $\ln[\mathrm{A}]$ then corresponds to the order $x$ with respect to reactant A. However, this method is not always reliable because measurement of the initial rate requires accurate determination of small changes in concentration in short times (compared to the reaction half-life) and is sensitive to errors, and the rate equation will not be completely determined if the rate also depends on substances not present at the beginning of the reaction, such as intermediates or products. ==== Integral method ==== The tentative rate equation determined by the method of initial rates is therefore normally verified by comparing the concentrations measured over a longer time (several half-lives) with the integrated form of the rate equation; this assumes that the reaction goes to completion. For example, the integrated rate law for a first-order reaction is $$\ln[\mathrm{A}] = -kt + \ln[\mathrm{A}]_0,$$ where $[\mathrm{A}]$ is the concentration at time $t$ and $[\mathrm{A}]_0$ is the initial concentration at zero time. The first-order rate law is confirmed if $\ln[\mathrm{A}]$ is in fact a linear function of time. In this case the rate constant $k$ is equal to the slope with sign reversed. ==== Method of flooding ==== The partial order with respect to a given reactant can be evaluated by the method of flooding (or of isolation) of Ostwald. In this method, the concentration of one reactant is measured with all other reactants in large excess so that their concentration remains essentially constant. For a reaction a·A + b·B → c·C with rate law $v_0 = k[\mathrm{A}]^x[\mathrm{B}]^y$, the partial order $x$ with respect to A is determined using a large excess of B. In this case $$v_0 = k'[\mathrm{A}]^x \quad \text{with} \quad k' = k[\mathrm{B}]^y,$$ and $x$ may be determined by the integral method. The order $y$ with respect to B under the same conditions (with B in excess) is determined by a series of similar experiments with a range of initial concentration $[\mathrm{B}]_0$ so that the variation of $k'$ can be measured. === Zero order === For zero-order reactions, the reaction rate is independent of the concentration of a reactant, so that changing its concentration has no effect on the rate of the reaction. Thus, the concentration changes linearly with time. The rate law for a zero order reaction is $$-\frac{d[\mathrm{A}]}{dt} = k[\mathrm{A}]^0 = k.$$ The unit of $k$ is mol dm⁻³ s⁻¹. This may occur when there is a bottleneck which limits the number of reactant molecules that can react at the same time, for example if the reaction requires contact with an enzyme or a catalytic surface.
Many enzyme-catalyzed reactions are zero order, provided that the reactant concentration is much greater than the enzyme concentration which controls the rate, so that the enzyme is saturated. For example, the biological oxidation of ethanol to acetaldehyde by the enzyme liver alcohol dehydrogenase (LADH) is zero order in ethanol. Similarly, reactions with heterogeneous catalysis can be zero order if the catalytic surface is saturated. For example, the decomposition of phosphine (PH3) on a hot tungsten surface at high pressure is zero order in phosphine, which decomposes at a constant rate. In homogeneous catalysis zero order behavior can come about from reversible inhibition. For example, ring-opening metathesis polymerization using third-generation Grubbs catalyst exhibits zero order behavior in catalyst due to the reversible inhibition that occurs between pyridine and the ruthenium center. === First order === A first order reaction depends on the concentration of only one reactant (a unimolecular reaction). Other reactants can be present, but their concentration has no effect on the rate. The rate law for a first order reaction is − d [ A ] d t = k [ A ] , {\displaystyle -{\frac {d[{\ce {A}}]}{dt}}=k[{\ce {A}}],} The unit of k is s−1. Although not affecting the above math, the majority of first order reactions proceed via intermolecular collisions. Such collisions, which contribute the energy to the reactant, are necessarily second order. However according to the Lindemann mechanism the reaction consists of two steps: the bimolecular collision which is second order and the reaction of the energized molecule which is unimolecular and first order. The rate of the overall reaction depends on the slowest step, so the overall reaction will be first order when the reaction of the energized reactant is slower than the collision step. The half-life is independent of the starting concentration and is given by t 1 / 2 = ln ⁡ ( 2 ) k {\textstyle t_{1/2}={\frac {\ln {(2)}}{k}}} . The mean lifetime is τ = 1/k. Examples of such reactions are: 2 N 2 O 5 ⟶ 4 NO 2 + O 2 {\displaystyle {\ce {2N2O5 -> 4NO2 + O2}}} [ CoCl ( NH 3 ) 5 ] 2 + + H 2 O ⟶ [ Co ( H 2 O ) ( NH 3 ) 5 ] 3 + + Cl − {\displaystyle {\ce {[CoCl(NH3)5]^2+ + H2O -> [Co(H2O)(NH3)5]^3+ + Cl-}}} H 2 O 2 ⟶ H 2 O + 1 2 O 2 {\displaystyle {\ce {H2O2 -> H2O + 1/2O2}}} In organic chemistry, the class of SN1 (nucleophilic substitution unimolecular) reactions consists of first-order reactions. For example, in the reaction of aryldiazonium ions with nucleophiles in aqueous solution, ArN+2 + X− → ArX + N2, the rate equation is v 0 = k [ ArN 2 + ] , {\displaystyle v_{0}=k[{\ce {ArN2+}}],} where Ar indicates an aryl group. === Second order === A reaction is said to be second order when the overall order is two. The rate of a second-order reaction may be proportional to one concentration squared, v 0 = k [ A ] 2 , {\displaystyle v_{0}=k[{\ce {A}}]^{2},} or (more commonly) to the product of two concentrations, v 0 = k [ A ] [ B ] . {\displaystyle v_{0}=k[{\ce {A}}][{\ce {B}}].} As an example of the first type, the reaction NO2 + CO → NO + CO2 is second-order in the reactant NO2 and zero order in the reactant CO. The observed rate is given by v 0 = k [ NO 2 ] 2 , {\displaystyle v_{0}=k[{\ce {NO2}}]^{2},} and is independent of the concentration of CO. For the rate proportional to a single concentration squared, the time dependence of the concentration is given by 1 [ A ] = 1 [ A ] 0 + k t . 
{\displaystyle {\frac {1}{{\ce {[A]}}}}={\frac {1}{{\ce {[A]0}}}}+kt.} The unit of k is mol−1 dm3 s−1. The time dependence for a rate proportional to two unequal concentrations is [ A ] [ B ] = [ A ] 0 [ B ] 0 e ( [ A ] 0 − [ B ] 0 ) k t ; {\displaystyle {\frac {{\ce {[A]}}}{{\ce {[B]}}}}={\frac {{\ce {[A]0}}}{{\ce {[B]0}}}}e^{\left({\ce {[A]0}}-{\ce {[B]0}}\right)kt};} if the concentrations are equal, they satisfy the previous equation. The second type includes nucleophilic addition-elimination reactions, such as the alkaline hydrolysis of ethyl acetate: CH 3 COOC 2 H 5 + OH − ⟶ CH 3 COO − + C 2 H 5 OH {\displaystyle {\ce {CH3COOC2H5 + OH- -> CH3COO- + C2H5OH}}} This reaction is first-order in each reactant and second-order overall: v 0 = k [ CH 3 COOC 2 H 5 ] [ OH − ] {\displaystyle v_{0}=k[{\ce {CH3COOC2H5}}][{\ce {OH-}}]} If the same hydrolysis reaction is catalyzed by imidazole, the rate equation becomes v 0 = k [ imidazole ] [ CH 3 COOC 2 H 5 ] . {\displaystyle v_{0}=k[{\text{imidazole}}][{\ce {CH3COOC2H5}}].} The rate is first-order in one reactant (ethyl acetate), and also first-order in imidazole, which as a catalyst does not appear in the overall chemical equation. Another well-known class of second-order reactions are the SN2 (bimolecular nucleophilic substitution) reactions, such as the reaction of n-butyl bromide with sodium iodide in acetone: CH 3 CH 2 CH 2 CH 2 Br + NaI ⟶ CH 3 CH 2 CH 2 CH 2 I + NaBr ↓ {\displaystyle {\ce {CH3CH2CH2CH2Br + NaI -> CH3CH2CH2CH2I + NaBr(v)}}} This same compound can be made to undergo a bimolecular (E2) elimination reaction, another common type of second-order reaction, if the sodium iodide and acetone are replaced with sodium tert-butoxide as the salt and tert-butanol as the solvent: CH 3 CH 2 CH 2 CH 2 Br + NaO t − Bu ⟶ CH 3 CH 2 CH = CH 2 + NaBr + HO t − Bu {\displaystyle {\ce {{CH3CH2CH2CH2Br}+NaO{\mathit {t}}-Bu->{CH3CH2CH=CH2}+{NaBr}+HO{\mathit {t}}-Bu}}} === Pseudo-first order === If the concentration of a reactant remains constant (because it is a catalyst, or because it is in great excess with respect to the other reactants), its concentration can be included in the rate constant, leading to a pseudo–first-order (or occasionally pseudo–second-order) rate equation. For a typical second-order reaction with rate equation v 0 = k [ A ] [ B ] , {\displaystyle v_{0}=k[{\ce {A}}][{\ce {B}}],} if the concentration of reactant B is constant then v 0 = k [ A ] [ B ] = k ′ [ A ] , {\displaystyle v_{0}=k[{\ce {A}}][{\ce {B}}]=k'[{\ce {A}}],} where the pseudo–first-order rate constant k ′ = k [ B ] . {\displaystyle k'=k[{\ce {B}}].} The second-order rate equation has been reduced to a pseudo–first-order rate equation, which makes the treatment to obtain an integrated rate equation much easier. One way to obtain a pseudo-first order reaction is to use a large excess of one reactant (say, [B]≫[A]) so that, as the reaction progresses, only a small fraction of the reactant in excess (B) is consumed, and its concentration can be considered to stay constant. For example, the hydrolysis of esters by dilute mineral acids follows pseudo-first order kinetics, where the concentration of water is constant because it is present in large excess: CH 3 COOCH 3 + H 2 O ⟶ CH 3 COOH + CH 3 OH {\displaystyle {\ce {CH3COOCH3 + H2O -> CH3COOH + CH3OH}}} The hydrolysis of sucrose (C12H22O11) in acid solution is often cited as a first-order reaction with rate v 0 = k [ C 12 H 22 O 11 ] . 
{\displaystyle v_{0}=k[{\ce {C12H22O11}}].} The true rate equation is third-order, v 0 = k [ C 12 H 22 O 11 ] [ H + ] [ H 2 O ] ; {\displaystyle v_{0}=k[{\ce {C12H22O11}}][{\ce {H+}}][{\ce {H2O}}];} however, the concentrations of both the catalyst H+ and the solvent H2O are normally constant, so that the reaction is pseudo–first-order. === Summary for reaction orders 0, 1, 2, and n === Elementary reaction steps with order 3 (called ternary reactions) are rare and unlikely to occur. However, overall reactions composed of several elementary steps can, of course, be of any (including non-integer) order. Here ⁠ M {\displaystyle {\rm {M}}} ⁠ stands for concentration in molarity (mol · L−1), ⁠ t {\displaystyle t} ⁠ for time, and ⁠ k {\displaystyle k} ⁠ for the reaction rate constant. The half-life of a first-order reaction is often expressed as t1/2 = 0.693/k (as ln(2)≈0.693). === Fractional order === In fractional order reactions, the order is a non-integer, which often indicates a chemical chain reaction or other complex reaction mechanism. For example, the pyrolysis of acetaldehyde (CH3CHO) into methane and carbon monoxide proceeds with an order of 1.5 with respect to acetaldehyde: v 0 = k [ CH 3 CHO ] 3 / 2 . {\displaystyle v_{0}=k[{\ce {CH3CHO}}]^{3/2}.} The decomposition of phosgene (COCl2) to carbon monoxide and chlorine has order 1 with respect to phosgene itself and order 0.5 with respect to chlorine: v 0 = k [ COCl 2 ] [ Cl 2 ] 1 / 2 . {\displaystyle v_{0}=k{\ce {[COCl2] [Cl2]}}^{1/2}.} The order of a chain reaction can be rationalized using the steady state approximation for the concentration of reactive intermediates such as free radicals. For the pyrolysis of acetaldehyde, the Rice-Herzfeld mechanism is Initiation CH 3 CHO ⟶ ⋅ CH 3 + ⋅ CHO {\displaystyle {\ce {CH3CHO -> .CH3 + .CHO}}} Propagation ⋅ CH 3 + CH 3 CHO ⟶ CH 3 CO ⋅ + CH 4 {\displaystyle {\ce {.CH3 + CH3CHO -> CH3CO. + CH4}}} CH 3 CO ⋅ ⟶ ⋅ CH 3 + CO {\displaystyle {\ce {CH3CO. -> .CH3 + CO}}} Termination 2 ⋅ CH 3 ⟶ C 2 H 6 {\displaystyle {\ce {2 .CH3 -> C2H6}}} where • denotes a free radical. To simplify the theory, the reactions of the *CHO to form a second *CH3 are ignored. In the steady state, the rates of formation and destruction of methyl radicals are equal, so that d [ ⋅ CH 3 ] d t = k i [ CH 3 CHO ] − k t [ ⋅ CH 3 ] 2 = 0 , {\displaystyle {\frac {d[{\ce {.CH3}}]}{dt}}=k_{i}[{\ce {CH3CHO}}]-k_{t}[{\ce {.CH3}}]^{2}=0,} so that the concentration of methyl radical satisfies [ ⋅ CH 3 ] ∝ [ CH 3 CHO ] 1 2 ⋅ {\displaystyle {\ce {[.CH3]\quad \propto \quad [CH3CHO]^{1/2}.}}} The reaction rate equals the rate of the propagation steps which form the main reaction products CH4 and CO: v 0 = d [ CH 4 ] d t | 0 = k p [ ⋅ CH 3 ] [ CH 3 CHO ] ∝ [ CH 3 CHO ] 3 2 {\displaystyle v_{0}={\frac {d[{\ce {CH4}}]}{dt}}|_{0}=k_{p}{\ce {[.CH3][CH3CHO]}}\quad \propto \quad {\ce {[CH3CHO]^{3/2}}}} in agreement with the experimental order of 3/2. == Complex laws == === Mixed order === More complex rate laws have been described as being mixed order if they approximate to the laws for more than one order at different concentrations of the chemical species involved. For example, a rate law of the form v 0 = k 1 [ A ] + k 2 [ A ] 2 {\displaystyle v_{0}=k_{1}[A]+k_{2}[A]^{2}} represents concurrent first order and second order reactions (or more often concurrent pseudo-first order and second order) reactions, and can be described as mixed first and second order. 
For sufficiently large values of [A] such a reaction will approximate second order kinetics, but for smaller [A] the kinetics will approximate first order (or pseudo-first order). As the reaction progresses, the reaction can change from second order to first order as reactant is consumed. Another type of mixed-order rate law has a denominator of two or more terms, often because the identity of the rate-determining step depends on the values of the concentrations. An example is the oxidation of an alcohol to a ketone by hexacyanoferrate (III) ion [Fe(CN)63−] with ruthenate (VI) ion (RuO42−) as catalyst. For this reaction, the rate of disappearance of hexacyanoferrate (III) is v 0 = [ Fe ( CN ) 6 ] 2 − k α + k β [ Fe ( CN ) 6 ] 2 − {\displaystyle v_{0}={\frac {{\ce {[Fe(CN)6]^2-}}}{k_{\alpha }+k_{\beta }{\ce {[Fe(CN)6]^2-}}}}} This is zero-order with respect to hexacyanoferrate (III) at the onset of the reaction (when its concentration is high and the ruthenium catalyst is quickly regenerated), but changes to first-order when its concentration decreases and the regeneration of catalyst becomes rate-determining. Notable mechanisms with mixed-order rate laws with two-term denominators include: Michaelis–Menten kinetics for enzyme-catalysis: first-order in substrate (second-order overall) at low substrate concentrations, zero order in substrate (first-order overall) at higher substrate concentrations; and the Lindemann mechanism for unimolecular reactions: second-order at low pressures, first-order at high pressures. === Negative order === A reaction rate can have a negative partial order with respect to a substance. For example, the conversion of ozone (O3) to oxygen follows the rate equation v 0 = k [ O 3 ] 2 [ O 2 ] − 1 {\displaystyle v_{0}=k{\ce {[O_3]^2}}{\ce {[O_2]^{-1}}}} in an excess of oxygen. This corresponds to second order in ozone and order (−1) with respect to oxygen. When a partial order is negative, the overall order is usually considered as undefined. In the above example, for instance, the reaction is not described as first order even though the sum of the partial orders is 2 + ( − 1 ) = 1 {\displaystyle 2+(-1)=1} , because the rate equation is more complex than that of a simple first-order reaction. == Opposed reactions == A pair of forward and reverse reactions may occur simultaneously with comparable speeds. For example, A and B react into products P and Q and vice versa (a, b, p, and q are the stoichiometric coefficients): a A + b B ↽ − − ⇀ p P + q Q {\displaystyle {\ce {{{\mathit {a}}A}+{{\mathit {b}}B}<=>{{\mathit {p}}P}+{{\mathit {q}}Q}}}} The reaction rate expression for the above reactions (assuming each one is elementary) can be written as: v = k 1 [ A ] a [ B ] b − k − 1 [ P ] p [ Q ] q {\displaystyle v=k_{1}[{\ce {A}}]^{a}[{\ce {B}}]^{b}-k_{-1}[{\ce {P}}]^{p}[{\ce {Q}}]^{q}} where: k1 is the rate coefficient for the reaction that consumes A and B; k−1 is the rate coefficient for the backwards reaction, which consumes P and Q and produces A and B. 
The constants k1 and k−1 are related to the equilibrium coefficient for the reaction (K) by the following relationship (set v=0 in balance): k 1 [ A ] a [ B ] b = k − 1 [ P ] p [ Q ] q K = [ P ] p [ Q ] q [ A ] a [ B ] b = k 1 k − 1 {\displaystyle {\begin{aligned}&k_{1}[{\ce {A}}]^{a}[{\ce {B}}]^{b}=k_{-1}[{\ce {P}}]^{p}[{\ce {Q}}]^{q}\\[8pt]&K={\frac {[{\ce {P}}]^{p}[{\ce {Q}}]^{q}}{[{\ce {A}}]^{a}[{\ce {B}}]^{b}}}={\frac {k_{1}}{k_{-1}}}\end{aligned}}} === Simple example === In a simple equilibrium between two species: A ↽ − − ⇀ P {\displaystyle {\ce {A <=> P}}} where the reaction starts with an initial concentration of reactant A, [ A ] 0 {\displaystyle {\ce {[A]0}}} , and an initial concentration of 0 for product P at time t=0. Then the equilibrium constant K is expressed as: K = d e f k 1 k − 1 = [ P ] e [ A ] e {\displaystyle K\ {\stackrel {\mathrm {def} }{=}}\ {\frac {k_{1}}{k_{-1}}}={\frac {\left[{\ce {P}}\right]_{e}}{\left[{\ce {A}}\right]_{e}}}} where [ A ] e {\displaystyle [{\ce {A}}]_{e}} and [ P ] e {\displaystyle [{\ce {P}}]_{e}} are the concentrations of A and P at equilibrium, respectively. The concentration of A at time t, [ A ] t {\displaystyle [{\ce {A}}]_{t}} , is related to the concentration of P at time t, [ P ] t {\displaystyle [{\ce {P}}]_{t}} , by the equilibrium reaction equation: [ A ] t = [ A ] 0 − [ P ] t {\displaystyle {\ce {[A]_{\mathit {t}}=[A]0-[P]_{\mathit {t}}}}} The term [ P ] 0 {\displaystyle {\ce {[P]0}}} is not present because, in this simple example, the initial concentration of P is 0. This applies even when time t is at infinity; i.e., equilibrium has been reached: [ A ] e = [ A ] 0 − [ P ] e {\displaystyle {\ce {[A]_{\mathit {e}}=[A]0-[P]_{\mathit {e}}}}} then it follows, by the definition of K, that [ P ] e = k 1 k 1 + k − 1 [ A ] 0 {\displaystyle [{\ce {P}}]_{e}={\frac {k_{1}}{k_{1}+k_{-1}}}{\ce {[A]0}}} and, therefore, [ A ] e = [ A ] 0 − [ P ] e = k − 1 k 1 + k − 1 [ A ] 0 {\displaystyle \ [{\ce {A}}]_{e}={\ce {[A]0}}-[{\ce {P}}]_{e}={\frac {k_{-1}}{k_{1}+k_{-1}}}{\ce {[A]0}}} These equations allow us to uncouple the system of differential equations, and allow us to solve for the concentration of A alone. The reaction equation was given previously as: v = k 1 [ A ] a [ B ] b − k − 1 [ P ] p [ Q ] q {\displaystyle v=k_{1}[{\ce {A}}]^{a}[{\ce {B}}]^{b}-k_{-1}[{\ce {P}}]^{p}[{\ce {Q}}]^{q}} For A ↽ − − ⇀ P {\displaystyle {\ce {A <=> P}}} this is simply − d [ A ] d t = k 1 [ A ] t − k − 1 [ P ] t {\displaystyle -{\frac {d[{\ce {A}}]}{dt}}=k_{1}[{\ce {A}}]_{t}-k_{-1}[{\ce {P}}]_{t}} The derivative is negative because this is the rate of the reaction going from A to P, and therefore the concentration of A is decreasing. To simplify notation, let x be [ A ] t {\displaystyle [{\ce {A}}]_{t}} , the concentration of A at time t. Let x e {\displaystyle x_{e}} be the concentration of A at equilibrium. 
Then: − d [ A ] d t = k 1 [ A ] t − k − 1 [ P ] t − d x d t = k 1 x − k − 1 [ P ] t = k 1 x − k − 1 ( [ A ] 0 − x ) = ( k 1 + k − 1 ) x − k − 1 [ A ] 0 {\displaystyle {\begin{aligned}-{\frac {d[{\ce {A}}]}{dt}}&={k_{1}[{\ce {A}}]_{t}}-{k_{-1}[{\ce {P}}]_{t}}\\[8pt]-{\frac {dx}{dt}}&={k_{1}x}-{k_{-1}[{\ce {P}}]_{t}}\\[8pt]&={k_{1}x}-{k_{-1}({\ce {[A]0}}-x)}\\[8pt]&={(k_{1}+k_{-1})x}-{k_{-1}{\ce {[A]0}}}\end{aligned}}} Since: k 1 + k − 1 = k − 1 [ A ] 0 x e {\displaystyle k_{1}+k_{-1}=k_{-1}{\frac {{\ce {[A]0}}}{x_{e}}}} the reaction rate becomes: d x d t = k − 1 [ A ] 0 x e ( x e − x ) {\displaystyle {\frac {dx}{dt}}={\frac {k_{-1}{\ce {[A]0}}}{x_{e}}}(x_{e}-x)} which results in: ln ⁡ ( [ A ] 0 − [ A ] e [ A ] t − [ A ] e ) = ( k 1 + k − 1 ) t {\displaystyle \ln \left({\frac {{\ce {[A]0}}-[{\ce {A}}]_{e}}{[{\ce {A}}]_{t}-[{\ce {A}}]_{e}}}\right)=(k_{1}+k_{-1})t} . A plot of the negative natural logarithm of the concentration of A in time minus the concentration at equilibrium versus time t gives a straight line with slope k1 + k−1. By measurement of [A]e and [P]e the values of K and the two reaction rate constants will be known. === Generalization of simple example === If the concentration at the time t = 0 is different from above, the simplifications above are invalid, and a system of differential equations must be solved. However, this system can also be solved exactly to yield the following generalized expressions: [ A ] = [ A ] 0 1 k 1 + k − 1 ( k − 1 + k 1 e − ( k 1 + k − 1 ) t ) + [ P ] 0 k − 1 k 1 + k − 1 ( 1 − e − ( k 1 + k − 1 ) t ) [ P ] = [ A ] 0 k 1 k 1 + k − 1 ( 1 − e − ( k 1 + k − 1 ) t ) + [ P ] 0 1 k 1 + k − 1 ( k 1 + k − 1 e − ( k 1 + k − 1 ) t ) {\displaystyle {\begin{aligned}&\left[{\ce {A}}\right]={\ce {[A]0}}{\frac {1}{k_{1}+k_{-1}}}\left(k_{-1}+k_{1}e^{-\left(k_{1}+k_{-1}\right)t}\right)+{\ce {[P]0}}{\frac {k_{-1}}{k_{1}+k_{-1}}}\left(1-e^{-\left(k_{1}+k_{-1}\right)t}\right)\\[8pt]&\left[{\ce {P}}\right]={\ce {[A]0}}{\frac {k_{1}}{k_{1}+k_{-1}}}\left(1-e^{-\left(k_{1}+k_{-1}\right)t}\right)+{\ce {[P]0}}{\frac {1}{k_{1}+k_{-1}}}\left(k_{1}+k_{-1}e^{-\left(k_{1}+k_{-1}\right)t}\right)\end{aligned}}} When the equilibrium constant is close to unity and the reaction rates very fast for instance in conformational analysis of molecules, other methods are required for the determination of rate constants for instance by complete lineshape analysis in NMR spectroscopy. == Consecutive reactions == If the rate constants for the following reaction are k 1 {\displaystyle k_{1}} and k 2 {\displaystyle k_{2}} ; A ⟶ B ⟶ C {\displaystyle {\ce {A -> B -> C}}} , then the rate equation is: For reactant A: d [ A ] d t = − k 1 [ A ] {\displaystyle {\frac {d[{\ce {A}}]}{dt}}=-k_{1}[{\ce {A}}]} For reactant B: d [ B ] d t = k 1 [ A ] − k 2 [ B ] {\displaystyle {\frac {d[{\ce {B}}]}{dt}}=k_{1}[{\ce {A}}]-k_{2}[{\ce {B}}]} For product C: d [ C ] d t = k 2 [ B ] {\displaystyle {\frac {d[{\ce {C}}]}{dt}}=k_{2}[{\ce {B}}]} With the individual concentrations scaled by the total population of reactants to become probabilities, linear systems of differential equations such as these can be formulated as a master equation. 
The differential equations can be solved analytically and the integrated rate equations are [ A ] = [ A ] 0 e − k 1 t {\displaystyle [{\ce {A}}]={\ce {[A]0}}e^{-k_{1}t}} [ B ] = { [ A ] 0 k 1 k 2 − k 1 ( e − k 1 t − e − k 2 t ) + [ B ] 0 e − k 2 t k 1 ≠ k 2 [ A ] 0 k 1 t e − k 1 t + [ B ] 0 e − k 1 t otherwise {\displaystyle \left[{\ce {B}}\right]={\begin{cases}{\ce {[A]0}}{\frac {k_{1}}{k_{2}-k_{1}}}\left(e^{-k_{1}t}-e^{-k_{2}t}\right)+{\ce {[B]0}}e^{-k_{2}t}&k_{1}\neq k_{2}\\{\ce {[A]0}}k_{1}te^{-k_{1}t}+{\ce {[B]0}}e^{-k_{1}t}&{\text{otherwise}}\\\end{cases}}} [ C ] = { [ A ] 0 ( 1 + k 1 e − k 2 t − k 2 e − k 1 t k 2 − k 1 ) + [ B ] 0 ( 1 − e − k 2 t ) + [ C ] 0 k 1 ≠ k 2 [ A ] 0 ( 1 − e − k 1 t − k 1 t e − k 1 t ) + [ B ] 0 ( 1 − e − k 1 t ) + [ C ] 0 otherwise {\displaystyle \left[{\ce {C}}\right]={\begin{cases}{\ce {[A]0}}\left(1+{\frac {k_{1}e^{-k_{2}t}-k_{2}e^{-k_{1}t}}{k_{2}-k_{1}}}\right)+{\ce {[B]0}}\left(1-e^{-k_{2}t}\right)+{\ce {[C]0}}&k_{1}\neq k_{2}\\{\ce {[A]0}}\left(1-e^{-k_{1}t}-k_{1}te^{-k_{1}t}\right)+{\ce {[B]0}}\left(1-e^{-k_{1}t}\right)+{\ce {[C]0}}&{\text{otherwise}}\\\end{cases}}} The steady state approximation leads to very similar results in an easier way. == Parallel or competitive reactions == When a substance reacts simultaneously to give two different products, a parallel or competitive reaction is said to take place. === Two first order reactions === A ⟶ B {\displaystyle {\ce {A -> B}}} and A ⟶ C {\displaystyle {\ce {A -> C}}} , with constants k 1 {\displaystyle k_{1}} and k 2 {\displaystyle k_{2}} and rate equations − d [ A ] d t = ( k 1 + k 2 ) [ A ] {\displaystyle -{\frac {d[{\ce {A}}]}{dt}}=(k_{1}+k_{2})[{\ce {A}}]} ; d [ B ] d t = k 1 [ A ] {\displaystyle {\frac {d[{\ce {B}}]}{dt}}=k_{1}[{\ce {A}}]} and d [ C ] d t = k 2 [ A ] {\displaystyle {\frac {d[{\ce {C}}]}{dt}}=k_{2}[{\ce {A}}]} The integrated rate equations are then [ A ] = [ A ] 0 e − ( k 1 + k 2 ) t {\displaystyle [{\ce {A}}]={\ce {[A]0}}e^{-(k_{1}+k_{2})t}} ; [ B ] = k 1 k 1 + k 2 [ A ] 0 ( 1 − e − ( k 1 + k 2 ) t ) {\displaystyle [{\ce {B}}]={\frac {k_{1}}{k_{1}+k_{2}}}{\ce {[A]0}}\left(1-e^{-(k_{1}+k_{2})t}\right)} and [ C ] = k 2 k 1 + k 2 [ A ] 0 ( 1 − e − ( k 1 + k 2 ) t ) {\displaystyle [{\ce {C}}]={\frac {k_{2}}{k_{1}+k_{2}}}{\ce {[A]0}}\left(1-e^{-(k_{1}+k_{2})t}\right)} . One important relationship in this case is [ B ] [ C ] = k 1 k 2 {\displaystyle {\frac {{\ce {[B]}}}{{\ce {[C]}}}}={\frac {k_{1}}{k_{2}}}} === One first order and one second order reaction === This can be the case when studying a bimolecular reaction and a simultaneous hydrolysis (which can be treated as pseudo order one) takes place: the hydrolysis complicates the study of the reaction kinetics, because some reactant is being "spent" in a parallel reaction. For example, A reacts with R to give our product C, but meanwhile the hydrolysis reaction takes away an amount of A to give B, a byproduct: A + H 2 O ⟶ B {\displaystyle {\ce {A + H2O -> B}}} and A + R ⟶ C {\displaystyle {\ce {A + R -> C}}} . The rate equations are: d [ B ] d t = k 1 [ A ] [ H 2 O ] = k 1 ′ [ A ] {\displaystyle {\frac {d[{\ce {B}}]}{dt}}=k_{1}{\ce {[A][H2O]}}=k_{1}'[{\ce {A}}]} and d [ C ] d t = k 2 [ A ] [ R ] {\displaystyle {\frac {d[{\ce {C}}]}{dt}}=k_{2}{\ce {[A][R]}}} , where k 1 ′ {\displaystyle k_{1}'} is the pseudo first order constant. 
The integrated rate equation for the main product [C] is [ C ] = [ R ] 0 [ 1 − e − k 2 k 1 ′ [ A ] 0 ( 1 − e − k 1 ′ t ) ] {\displaystyle {\ce {[C]=[R]0}}\left[1-e^{-{\frac {k_{2}}{k_{1}'}}{\ce {[A]0}}\left(1-e^{-k_{1}'t}\right)}\right]} , which is equivalent to ln ⁡ [ R ] 0 [ R ] 0 − [ C ] = k 2 [ A ] 0 k 1 ′ ( 1 − e − k 1 ′ t ) {\displaystyle \ln {\frac {{\ce {[R]0}}}{{\ce {[R]0-[C]}}}}={\frac {k_{2}{\ce {[A]0}}}{k_{1}'}}\left(1-e^{-k_{1}'t}\right)} . Concentration of B is related to that of C through [ B ] = − k 1 ′ k 2 ln ⁡ ( 1 − [ C ] [ R ] 0 ) {\displaystyle [{\ce {B}}]=-{\frac {k_{1}'}{k_{2}}}\ln \left(1-{\frac {\ce {[C]}}{\ce {[R]0}}}\right)} The integrated equations were analytically obtained but during the process it was assumed that [ A ] 0 − [ C ] ≈ [ A ] 0 {\displaystyle {\ce {[A]0}}-{\ce {[C]}}\approx {\ce {[A]0}}} . Therefore, previous equation for [C] can only be used for low concentrations of [C] compared to [A]0 == Stoichiometric reaction networks == The most general description of a chemical reaction network considers a number N {\displaystyle N} of distinct chemical species reacting via R {\displaystyle R} reactions. The chemical equation of the j {\displaystyle j} -th reaction can then be written in the generic form r 1 j X 1 + r 2 j X 2 + ⋯ + r N j X N → k j p 1 j X 1 + p 2 j X 2 + ⋯ + p N j X N , {\displaystyle r_{1j}{\ce {X}}_{1}+r_{2j}{\ce {X}}_{2}+\cdots +r_{Nj}{\ce {X}}_{N}{\ce {->[k_{j}]}}\ p_{1j}{\ce {X}}_{1}+\ p_{2j}{\ce {X}}_{2}+\cdots +p_{Nj}{\ce {X}}_{N},} which is often written in the equivalent form ∑ i = 1 N r i j X i → k j ∑ i = 1 N p i j X i . {\displaystyle \sum _{i=1}^{N}r_{ij}{\ce {X}}_{i}{\ce {->[k_{j}]}}\sum _{i=1}^{N}\ p_{ij}{\ce {X}}_{i}.} Here j {\displaystyle j} is the reaction index running from 1 to R {\displaystyle R} , X i {\displaystyle {\ce {X}}_{i}} denotes the i {\displaystyle i} -th chemical species, k j {\displaystyle k_{j}} is the rate constant of the j {\displaystyle j} -th reaction and r i j {\displaystyle r_{ij}} and p i j {\displaystyle p_{ij}} are the stoichiometric coefficients of reactants and products, respectively. The rate of such a reaction can be inferred by the law of mass action f j ( [ X ] ) = k j ∏ z = 1 N [ X z ] r z j {\displaystyle f_{j}([\mathbf {X} ])=k_{j}\prod _{z=1}^{N}[{\ce {X}}_{z}]^{r_{zj}}} which denotes the flux of molecules per unit time and unit volume. Here ( [ X ] ) = ( [ X 1 ] , [ X 2 ] , … , [ X N ] ) {\displaystyle {\ce {([\mathbf {X} ])=([X1],[X2],\ldots ,[X_{\mathit {N}}])}}} is the vector of concentrations. This definition includes the elementary reactions: zero order reactions for which r z j = 0 {\displaystyle r_{zj}=0} for all z {\displaystyle z} , first order reactions for which r z j = 1 {\displaystyle r_{zj}=1} for a single z {\displaystyle z} , second order reactions for which r z j = 1 {\displaystyle r_{zj}=1} for exactly two z {\displaystyle z} ; that is, a bimolecular reaction, or r z j = 2 {\displaystyle r_{zj}=2} for a single z {\displaystyle z} ; that is, a dimerization reaction. Each of these is discussed in detail below. One can define the stoichiometric matrix N i j = p i j − r i j , {\displaystyle N_{ij}=p_{ij}-r_{ij},} denoting the net extent of molecules of i {\displaystyle i} in reaction j {\displaystyle j} . The reaction rate equations can then be written in the general form d [ X i ] d t = ∑ j = 1 R N i j f j ( [ X ] ) . 
{\displaystyle {\frac {d[{\ce {X}}_{i}]}{dt}}=\sum _{j=1}^{R}N_{ij}f_{j}([\mathbf {X} ]).} This is the product of the stoichiometric matrix and the vector of reaction rate functions. Particular simple solutions exist in equilibrium, d [ X i ] d t = 0 {\displaystyle {\frac {d[{\ce {X}}_{i}]}{dt}}=0} , for systems composed of merely reversible reactions. In this case, the rate of the forward and backward reactions are equal, a principle called detailed balance. Detailed balance is a property of the stoichiometric matrix N i j {\displaystyle N_{ij}} alone and does not depend on the particular form of the rate functions f j {\displaystyle f_{j}} . All other cases where detailed balance is violated are commonly studied by flux balance analysis, which has been developed to understand metabolic pathways. == General dynamics of unimolecular conversion == For a general unimolecular reaction involving interconversion of N {\displaystyle N} different species, whose concentrations at time t {\displaystyle t} are denoted by X 1 ( t ) {\displaystyle X_{1}(t)} through X N ( t ) {\displaystyle X_{N}(t)} , an analytic form for the time-evolution of the species can be found. Let the rate constant of conversion from species X i {\displaystyle X_{i}} to species X j {\displaystyle X_{j}} be denoted as k i j {\displaystyle k_{ij}} , and construct a rate-constant matrix K {\displaystyle K} whose entries are the k i j {\displaystyle k_{ij}} . Also, let X ( t ) = ( X 1 ( t ) , X 2 ( t ) , … , X N ( t ) ) T {\displaystyle X(t)=(X_{1}(t),X_{2}(t),\ldots ,X_{N}(t))^{T}} be the vector of concentrations as a function of time. Let J = ( 1 , 1 , 1 , … , 1 ) T {\displaystyle J=(1,1,1,\ldots ,1)^{T}} be the vector of ones. Let I {\displaystyle I} be the N × N {\displaystyle N\times N} identity matrix. Let diag {\displaystyle \operatorname {diag} } be the function that takes a vector and constructs a diagonal matrix whose on-diagonal entries are those of the vector. Let L − 1 {\displaystyle {\mathcal {L}}^{-1}} be the inverse Laplace transform from s {\displaystyle s} to t {\displaystyle t} . Then the time-evolved state X ( t ) {\displaystyle X(t)} is given by X ( t ) = L − 1 [ ( s I + diag ⁡ ( K J ) − K T ) − 1 X ( 0 ) ] , {\displaystyle X(t)={\mathcal {L}}^{-1}[(sI+\operatorname {diag} (KJ)-K^{T})^{-1}X(0)],} thus providing the relation between the initial conditions of the system and its state at time t {\displaystyle t} . == See also == Michaelis–Menten kinetics Molecularity Petersen matrix Reaction–diffusion system Reactions on surfaces: rate equations for reactions where at least one of the reactants adsorbs onto a surface Reaction progress kinetic analysis Reaction rate Reaction rate constant Steady state approximation Gillespie algorithm Balance equation Belousov–Zhabotinsky reaction Lotka–Volterra equations Chemical kinetics == References == === Books cited === == External links == Chemical kinetics, reaction rate, and order (needs flash player) Reaction kinetics, examples of important rate laws (lecture with audio). Rates of Reaction
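The stoichiometric-network formulation summarized above lends itself to direct numerical illustration. The sketch below assumes mass-action rates $f_j = k_j \prod_z [\mathrm{X}_z]^{r_{zj}}$ and integrates $d[\mathbf{X}]/dt = N f([\mathbf{X}])$ with a simple forward-Euler step; the example network (A + B ⇌ C), its rate constants, and the initial concentrations are all illustrative assumptions.

```python
# Minimal sketch of d[X]/dt = N f([X]) with mass-action kinetics for A + B <=> C.
species = ["A", "B", "C"]
reactant_orders = [  # r_zj: order of each species as a reactant in each reaction
    [1, 0],  # A: reactant in the forward reaction only
    [1, 0],  # B: reactant in the forward reaction only
    [0, 1],  # C: reactant in the reverse reaction only
]
stoich = [  # N_ij = p_ij - r_ij (net change of species i in reaction j)
    [-1, +1],
    [-1, +1],
    [+1, -1],
]
k = [2.0, 0.5]          # forward and reverse rate constants (illustrative)
conc = [1.0, 0.8, 0.0]  # initial concentrations (illustrative)

dt, steps = 1e-3, 5000
for _ in range(steps):
    # mass-action flux of each reaction
    flux = []
    for j in range(len(k)):
        f = k[j]
        for z in range(len(species)):
            f *= conc[z] ** reactant_orders[z][j]
        flux.append(f)
    # forward-Euler update of each concentration
    conc = [conc[i] + dt * sum(stoich[i][j] * flux[j] for j in range(len(k)))
            for i in range(len(species))]

print({s: round(c, 4) for s, c in zip(species, conc)})
```

With these numbers the concentrations approach the equilibrium ratio [C]/([A][B]) = k1/k−1, consistent with the detailed-balance condition discussed above.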
Wikipedia/Rate_equation
Diffusion-controlled (or diffusion-limited) reactions are reactions in which the reaction rate is equal to the rate of transport of the reactants through the reaction medium (usually a solution). The process of chemical reaction can be considered as involving the diffusion of reactants until they encounter each other in the right stoichiometry and form an activated complex which can form the product species. The observed rate of chemical reactions is, generally speaking, the rate of the slowest or "rate determining" step. In diffusion controlled reactions the formation of products from the activated complex is much faster than the diffusion of reactants and thus the rate is governed by collision frequency. Diffusion control is rare in the gas phase, where rates of diffusion of molecules are generally very high. Diffusion control is more likely in solution where diffusion of reactants is slower due to the greater number of collisions with solvent molecules. Reactions where the activated complex forms easily and the products form rapidly are most likely to be limited by diffusion control. Examples are those involving catalysis and enzymatic reactions. Heterogeneous reactions where reactants are in different phases are also candidates for diffusion control. One classical test for diffusion control of a heterogeneous reaction is to observe whether the rate of reaction is affected by stirring or agitation; if so then the reaction is almost certainly diffusion controlled under those conditions. == Derivation == The following derivation is adapted from Foundations of Chemical Kinetics. This derivation assumes the reaction A + B → C {\displaystyle A+B\rightarrow C} . Consider a sphere of radius R A {\displaystyle R_{A}} , centered at a spherical molecule A, with reactant B flowing in and out of it. A reaction is considered to occur if molecules A and B touch, that is, when the distance between the two molecules is R A B {\displaystyle R_{AB}} apart. If we assume a local steady state, then the rate at which B reaches R A B {\displaystyle R_{AB}} is the limiting factor and balances the reaction. Therefore, the steady state condition becomes 1. k [ B ] = − 4 π r 2 J B {\displaystyle k[B]=-4\pi r^{2}J_{B}} where J B {\displaystyle J_{B}} is the flux of B, as given by Fick's law of diffusion, 2. J B = − D A B ( d B ( r ) d r + [ B ] k B T d U d r ) {\displaystyle J_{B}=-D_{AB}({\frac {dB(r)}{dr}}+{\frac {[B]}{k_{B}T}}{\frac {dU}{dr}})} , where D A B {\displaystyle D_{AB}} is the diffusion coefficient and can be obtained by the Stokes-Einstein equation, and the second term is the gradient of the chemical potential with respect to position. Note that [B] refers to the average concentration of B in the solution, while [B](r) is the "local concentration" of B at position r. Inserting 2 into 1 results in 3. k [ B ] = 4 π r 2 D A B ( d B ( r ) d r + [ B ] ( r ) k B T d U d r ) {\displaystyle k[B]=4\pi r^{2}D_{AB}({\frac {dB(r)}{dr}}+{\frac {[B](r)}{k_{B}T}}{\frac {dU}{dr}})} . It is convenient at this point to use the identity exp ⁡ ( − U ( r ) / k B T ) ⋅ d d r ( [ B ] ( r ) exp ⁡ ( U ( r ) / k B T ) = ( d B ( r ) d r + [ B ] ( r ) k B T d U d r ) {\displaystyle \exp(-U(r)/k_{B}T)\cdot {\frac {d}{dr}}([B](r)\exp(U(r)/k_{B}T)=({\frac {dB(r)}{dr}}+{\frac {[B](r)}{k_{B}T}}{\frac {dU}{dr}})} allowing us to rewrite 3 as 4. 
k [ B ] = 4 π r 2 D A B exp ⁡ ( − U ( r ) / k B T ) ⋅ d d r ( [ B ] ( r ) exp ⁡ ( U ( r ) / k B T ) {\displaystyle k[B]=4\pi r^{2}D_{AB}\exp(-U(r)/k_{B}T)\cdot {\frac {d}{dr}}([B](r)\exp(U(r)/k_{B}T)} . Rearranging 4 allows us to write 5. k [ B ] exp ⁡ ( U ( r ) / k B T ) 4 π r 2 D A B = d d r ( [ B ] ( r ) exp ⁡ ( U ( r ) / k B T ) {\displaystyle {\frac {k[B]\exp(U(r)/k_{B}T)}{4\pi r^{2}D_{AB}}}={\frac {d}{dr}}([B](r)\exp(U(r)/k_{B}T)} Using the boundary conditions that [ B ] ( r ) → [ B ] {\displaystyle [B](r)\rightarrow [B]} , ie the local concentration of B approaches that of the solution at large distances, and consequently U ( r ) → 0 {\displaystyle U(r)\rightarrow 0} , as r → ∞ {\displaystyle r\rightarrow \infty } , we can solve 5 by separation of variables, we get 6. ∫ R A B ∞ d r k [ B ] exp ⁡ ( U ( r ) / k B T ) 4 π r 2 D A B = ∫ R A B ∞ d ( [ B ] ( r ) exp ⁡ ( U ( r ) / k B T ) {\displaystyle \int _{R_{AB}}^{\infty }dr{\frac {k[B]\exp(U(r)/k_{B}T)}{4\pi r^{2}D_{AB}}}=\int _{R_{AB}}^{\infty }d([B](r)\exp(U(r)/k_{B}T)} or 7. k [ B ] 4 π D A B β = [ B ] − [ B ] ( R A B ) exp ⁡ ( U ( R A B ) / k B T ) {\displaystyle {\frac {k[B]}{4\pi D_{AB}\beta }}=[B]-[B](R_{AB})\exp(U(R_{AB})/k_{B}T)} (where : β − 1 = ∫ R A B ∞ 1 r 2 exp ⁡ ( U ( r ) k B T d r ) {\displaystyle \beta ^{-1}=\int _{R_{AB}}^{\infty }{\frac {1}{r^{2}}}\exp({\frac {U(r)}{k_{B}T}}dr)} ) For the reaction between A and B, there is an inherent reaction constant k r {\displaystyle k_{r}} , so [ B ] ( R A B ) = k [ B ] / k r {\displaystyle [B](R_{AB})=k[B]/k_{r}} . Substituting this into 7 and rearranging yields 8. k = 4 π D A B β k r k r + 4 π D A B β exp ⁡ ( U ( R A B ) k B T ) {\displaystyle k={\frac {4\pi D_{AB}\beta k_{r}}{k_{r}+4\pi D_{AB}\beta \exp({\frac {U(R_{AB})}{k_{B}T}})}}} === Limiting conditions === ==== Very fast intrinsic reaction ==== Suppose k r {\displaystyle k_{r}} is very large compared to the diffusion process, so A and B react immediately. This is the classic diffusion limited reaction, and the corresponding diffusion limited rate constant, can be obtained from 8 as k D = 4 π D A B β {\displaystyle k_{D}=4\pi D_{AB}\beta } . 8 can then be re-written as the "diffusion influenced rate constant" as 9. k = k D k r k r + k D exp ⁡ ( U ( R A B ) k B T ) {\displaystyle k={\frac {k_{D}k_{r}}{k_{r}+k_{D}\exp({\frac {U(R_{AB})}{k_{B}T}})}}} ==== Weak intermolecular forces ==== If the forces that bind A and B together are weak, ie U ( r ) ≈ 0 {\displaystyle U(r)\approx 0} for all r except very small r, β − 1 ≈ 1 R A B {\displaystyle \beta ^{-1}\approx {\frac {1}{R_{AB}}}} . The reaction rate 9 simplifies even further to 10. k = k D k r k r + k D {\displaystyle k={\frac {k_{D}k_{r}}{k_{r}+k_{D}}}} This equation is true for a very large proportion of industrially relevant reactions in solution. === Viscosity dependence === The Stokes-Einstein equation describes a frictional force on a sphere of diameter R A {\displaystyle R_{A}} as D A = k B T 3 π R A η {\displaystyle D_{A}={\frac {k_{B}T}{3\pi R_{A}\eta }}} where η {\displaystyle \eta } is the viscosity of the solution. Inserting this into 9 gives an estimate for k D {\displaystyle k_{D}} as 8 R T 3 η {\displaystyle {\frac {8RT}{3\eta }}} , where R is the gas constant, and η {\displaystyle \eta } is given in centipoise. For the following molecules, an estimate for k D {\displaystyle k_{D}} is given: == See also == Diffusion limited enzyme == References ==
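As a rough numerical sketch of the limiting expressions above, the snippet below evaluates the diffusion-limited estimate k_D ≈ 8RT/(3η) for a water-like viscosity and combines it with a hypothetical intrinsic rate constant k_r through equation 10; the viscosity and k_r values are assumptions chosen only for illustration.

```python
# Minimal sketch of the diffusion-limited estimate k_D ~ 8RT/(3*eta)
# and the combined "diffusion influenced" rate constant of equation 10.
# The viscosity and the intrinsic rate constant below are assumed values.

R = 8.314          # gas constant, J mol^-1 K^-1
T = 298.0          # temperature, K
eta = 8.9e-4       # viscosity of water near 25 C, Pa s (assumed)

k_D_si = 8 * R * T / (3 * eta)   # m^3 mol^-1 s^-1
k_D = k_D_si * 1e3               # convert to L mol^-1 s^-1 (M^-1 s^-1)

k_r = 1.0e12       # hypothetical intrinsic (activation-controlled) rate constant, M^-1 s^-1

# Equation 10: overall rate constant when intermolecular forces are weak
k = k_D * k_r / (k_r + k_D)

print(f"k_D ~ {k_D:.2e} M^-1 s^-1")   # roughly 7e9 M^-1 s^-1
print(f"k   ~ {k:.2e} M^-1 s^-1")     # close to k_D because k_r >> k_D here
```

When k_r is instead much smaller than k_D, the same expression collapses to k ≈ k_r, i.e. the reaction is activation-controlled rather than diffusion-controlled.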
Wikipedia/Diffusion-controlled_reaction
In theoretical chemistry, an energy profile is a theoretical representation of a chemical reaction or process as a single energetic pathway as the reactants are transformed into products. This pathway runs along the reaction coordinate, which is a parametric curve that follows the pathway of the reaction and indicates its progress; thus, energy profiles are also called reaction coordinate diagrams. They are derived from the corresponding potential energy surface (PES), which is used in computational chemistry to model chemical reactions by relating the energy of a molecule(s) to its structure (within the Born–Oppenheimer approximation). Qualitatively, the reaction coordinate diagrams (one-dimensional energy surfaces) have numerous applications. Chemists use reaction coordinate diagrams as both an analytical and pedagogical aid for rationalizing and illustrating kinetic and thermodynamic events. The purpose of energy profiles and surfaces is to provide a qualitative representation of how potential energy varies with molecular motion for a given reaction or process. == Potential energy surfaces == In simplest terms, a potential energy surface or PES is a mathematical or graphical representation of the relation between energy of a molecule and its geometry. The methods for describing the potential energy are broken down into a classical mechanics interpretation (molecular mechanics) and a quantum mechanical interpretation. In the quantum mechanical interpretation an exact expression for energy can be obtained for any molecule derived from quantum principles (although an infinite basis set may be required) but ab initio calculations/methods will often use approximations to reduce computational cost. Molecular mechanics is empirically based and potential energy is described as a function of component terms that correspond to individual potential functions such as torsion, stretches, bends, Van der Waals energies, electrostatics and cross terms. Each component potential function is fit to experimental data or properties predicted by ab initio calculations. Molecular mechanics is useful in predicting equilibrium geometries and transition states as well as relative conformational stability. As a reaction occurs the atoms of the molecules involved will generally undergo some change in spatial orientation through internal motion as well as its electronic environment. Distortions in the geometric parameters result in a deviation from the equilibrium geometry (local energy minima). These changes in geometry of a molecule or interactions between molecules are dynamic processes which call for understanding all the forces operating within the system. Since these forces can be mathematically derived as first derivative of potential energy with respect to a displacement, it makes sense to map the potential energy E of the system as a function of geometric parameters q1, q2, q3 and so on. The potential energy at given values of the geometric parameters (q1, q2, ..., qn) is represented as a hyper-surface (when n > 2) or a surface (when n ≤ 2). Mathematically, it can be written as E = f ( q 1 , q 2 , … , q n ) {\displaystyle E=f(q_{1},q_{2},\dots ,q_{n})} For the quantum mechanical interpretation, a PES is typically defined within the Born–Oppenheimer approximation (in order to distinguish between nuclear and electronic motion and energy) which states that the nuclei are stationary relative to the electrons. 
In other words, the approximation allows the kinetic energy of the nuclei (or movement of the nuclei) to be neglected; the nuclear repulsion is therefore a constant value (as between static point charges) and is only considered when calculating the total energy of the system. The electronic energy is then taken to depend parametrically on the nuclear coordinates, meaning a new electronic energy (Ee) must be calculated for each corresponding atomic configuration. PES is an important concept in computational chemistry and greatly aids in geometry and transition state optimization. === Degrees of freedom === An n-atom system is defined by 3n coordinates: (x, y, z) for each atom. These 3n degrees of freedom can be broken down to include 3 overall translational and 3 overall rotational degrees of freedom for a non-linear system (or 2 rotational degrees of freedom for a linear system). However, overall translational or rotational degrees do not affect the potential energy of the system, which only depends on its internal coordinates. Thus an n-atom system will be defined by 3n – 6 (non-linear) or 3n – 5 (linear) coordinates. These internal coordinates may be represented by simple stretch, bend, torsion coordinates, or symmetry-adapted linear combinations, or redundant coordinates, or normal mode coordinates, etc. For a system described by n internal coordinates, a separate potential energy function can be written with respect to each of these coordinates by holding the other n – 1 parameters at constant, defined values, allowing the potential energy contribution from a particular molecular motion (or interaction) to be monitored. Consider a diatomic molecule AB, which can be macroscopically visualized as two balls (which depict the two atoms A and B) connected through a spring which depicts the bond. As this spring (or bond) is stretched or compressed, the potential energy of the ball-spring system (AB molecule) changes, and this can be mapped on a 2-dimensional plot as a function of the distance between A and B, i.e. the bond length. The concept can be expanded to a tri-atomic molecule such as water, where we have two O−H bonds and the H−O−H bond angle as variables on which the potential energy of a water molecule will depend. We can safely assume the two O−H bonds to be equal. Thus, a PES can be drawn mapping the potential energy E of a water molecule as a function of two geometric parameters, q1 = O–H bond length and q2 = H–O–H bond angle. The lowest point on such a PES will define the equilibrium structure of a water molecule. The same concept is applied to organic compounds like ethane, butane etc. to define their lowest-energy and most stable conformations. === Characterizing a PES === The most important points on a PES are the stationary points, where the surface is flat, i.e. parallel to a horizontal line corresponding to one geometric parameter, a plane corresponding to two such parameters, or even a hyper-plane corresponding to more than two geometric parameters. The energy values corresponding to the transition states and the ground state of the reactants and products can be found using the potential energy function by calculating the function's critical points or the stationary points. Stationary points occur when the first partial derivative of the energy with respect to each geometric parameter is equal to zero.
∂ E ∂ q 1 = ∂ E ∂ q 2 = ⋯ = ∂ E ∂ q n = 0 {\displaystyle {\frac {\partial E}{\partial q_{1}}}={\frac {\partial E}{\partial q_{2}}}=\dots ={\frac {\partial E}{\partial q_{n}}}=0} Using analytical derivatives of the derived expression for energy, E = f ( q 1 , q 2 , … , q n ) , {\displaystyle E=f(q_{1},q_{2},\dots ,q_{n}),} one can find and characterize a stationary point as a minimum, a maximum or a saddle point. The ground states are represented by local energy minima and the transition states by saddle points. Minima represent stable or quasi-stable species, i.e. reactants and products with a finite lifetime. Mathematically, a minimum point is given as ∂ E ∂ q 1 = 0 {\displaystyle {\frac {\partial E}{\partial q_{1}}}=0} ∂ 2 E ∂ q 1 2 > 0 {\displaystyle {\frac {\partial ^{2}E}{\partial q_{1}^{2}}}>0} A point may be a local minimum, when it is lower in energy only than its immediate surroundings, or a global minimum, which is the lowest-energy point on the entire potential energy surface. A saddle point represents a maximum along only one direction (that of the reaction coordinate) and is a minimum along all other directions. In other words, a saddle point represents a transition state along the reaction coordinate. Mathematically, a saddle point occurs when ∂ 2 E ∂ q 2 > 0 {\displaystyle {\frac {\partial ^{2}E}{\partial q^{2}}}>0} for all q except along the reaction coordinate and ∂ 2 E ∂ q 1 2 < 0 {\displaystyle {\frac {\partial ^{2}E}{\partial q_{1}^{2}}}<0} along the reaction coordinate. == Reaction coordinate diagrams == The intrinsic reaction coordinate (IRC), derived from the potential energy surface, is a parametric curve that connects two energy minima in the direction that traverses the minimum energy barrier (or shallowest ascent) passing through one or more saddle point(s). However, in reality, if a reacting species attains enough energy it may deviate from the IRC to some extent. The energy values (points on the hypersurface) along the reaction coordinate result in a 1-D energy surface (a line) and, when plotted against the reaction coordinate (energy vs reaction coordinate), give what is called a reaction coordinate diagram (or energy profile). Another way of visualizing an energy profile is as a cross section of the hypersurface, or surface, along the reaction coordinate. Figure 5 shows an example of a cross section, represented by the plane, taken along the reaction coordinate; the potential energy is represented as a function or composite of two geometric variables to form a 2-D energy surface. In principle, the potential energy function can depend on N variables, but since an accurate visual representation of a function of 3 or more variables cannot be produced (excluding level hypersurfaces) a 2-D surface has been shown. The points on the surface that intersect the plane are then projected onto the reaction coordinate diagram (shown on the right) to produce a 1-D slice of the surface along the IRC. The reaction coordinate is described by its parameters, which are frequently given as a composite of several geometric parameters, and can change direction as the reaction progresses so long as the smallest energy barrier (or activation energy (Ea)) is traversed. The saddle point represents the highest energy point lying on the reaction coordinate connecting the reactant and product; this is known as the transition state. A reaction coordinate diagram may also have one or more transient intermediates, which are shown as high-energy wells connected via transition state peaks.
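The gradient and Hessian criteria given above for minima and saddle points can be checked numerically on a model surface. The sketch below uses an invented two-parameter function (not a real molecular PES) and classifies stationary points from the eigenvalues of a finite-difference Hessian: all positive for a minimum, exactly one negative for a first-order saddle point, i.e. a transition state.

```python
import numpy as np

def energy(q):
    """Invented model 'PES' in two geometric parameters q = (q1, q2)."""
    q1, q2 = q
    return (q1**2 - 1.0)**2 + q2**2   # minima at (+/-1, 0), saddle at (0, 0)

def gradient(q, h=1e-5):
    """Central finite-difference gradient dE/dq_i."""
    g = np.zeros(2)
    for i in range(2):
        dq = np.zeros(2); dq[i] = h
        g[i] = (energy(q + dq) - energy(q - dq)) / (2 * h)
    return g

def hessian(q, h=1e-4):
    """Central finite-difference Hessian d2E/dq_i dq_j."""
    H = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            dqi = np.zeros(2); dqi[i] = h
            dqj = np.zeros(2); dqj[j] = h
            H[i, j] = (energy(q + dqi + dqj) - energy(q + dqi - dqj)
                       - energy(q - dqi + dqj) + energy(q - dqi - dqj)) / (4 * h**2)
    return H

for label, q in [("candidate minimum", np.array([1.0, 0.0])),
                 ("candidate saddle", np.array([0.0, 0.0]))]:
    g = gradient(q)
    eigvals = np.linalg.eigvalsh(hessian(q))
    if np.all(eigvals > 0):
        kind = "minimum"
    elif np.sum(eigvals < 0) == 1:
        kind = "first-order saddle (transition state)"
    else:
        kind = "other stationary point"
    print(f"{label} at q={q}: |grad|={np.linalg.norm(g):.2e}, "
          f"Hessian eigenvalues={eigvals} -> {kind}")
```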
Any chemical structure that lasts longer than the time for typical bond vibrations (10−13–10−14 s) can be considered an intermediate. A reaction involving more than one elementary step has one or more intermediates being formed which, in turn, means there is more than one energy barrier to overcome. In other words, there is more than one transition state lying on the reaction pathway. As it is intuitive that pushing over an energy barrier or passing through a transition state peak would entail the highest energy, it becomes clear that it would be the slowest step in a reaction pathway. However, when more than one such barrier is to be crossed, it becomes important to recognize the highest barrier, which will determine the rate of the reaction. This step of the reaction, whose rate determines the overall rate of reaction, is known as the rate-determining step or rate-limiting step. The height of an energy barrier is always measured relative to the energy of the reactant or starting material. Different possibilities have been shown in figure 6. Reaction coordinate diagrams also give information about the equilibrium between a reactant or a product and an intermediate. If the barrier energy for going from intermediate to product is much higher than the one for the reactant-to-intermediate transition, it can be safely concluded that a complete equilibrium is established between the reactant and intermediate. However, if the two energy barriers for the reactant-to-intermediate and intermediate-to-product transformations are nearly equal, then no complete equilibrium is established and the steady state approximation is invoked to derive the kinetic rate expressions for such a reaction. === Drawing a reaction coordinate diagram === Although a reaction coordinate diagram is essentially derived from a potential energy surface, it is not always feasible to draw one from a PES. A chemist draws a reaction coordinate diagram for a reaction based on the knowledge of the free energy or enthalpy change associated with the transformation, which helps to place the reactant and product into perspective and to indicate whether any intermediate is formed or not. One guideline for drawing diagrams for complex reactions is the principle of least motion, which says that a favored reaction proceeding from a reactant to an intermediate, or from one intermediate to another or to a product, is one which has the least change in nuclear position or electronic configuration. Thus, it can be said that reactions involving dramatic changes in the position of nuclei actually occur through a series of simple chemical reactions. The Hammond postulate is another tool which assists in drawing the energy of a transition state relative to a reactant, an intermediate or a product. It states that the transition state resembles the reactant, intermediate or product that it is closest in energy to, as long as the energy difference between the transition state and the adjacent structure is not too large. This postulate helps to accurately predict the shape of a reaction coordinate diagram and also gives an insight into the molecular structure at the transition state. === Kinetic and thermodynamic considerations === A chemical reaction can be defined by two important parameters: the Gibbs free energy associated with a chemical transformation and the rate of such a transformation. These parameters are independent of each other.
While the free energy change describes the stability of products relative to reactants, the rate of any reaction is defined by the energy of the transition state relative to the starting material. Depending on these parameters, a reaction can be favorable or unfavorable, fast or slow and reversible or irreversible, as shown in figure 8. A favorable reaction is one in which the change in free energy ∆G° is negative (exergonic); in other words, the free energy of the product, G°product, is less than the free energy of the starting materials, G°reactant. ∆G° > 0 (endergonic) corresponds to an unfavorable reaction. The ∆G° can be written as a function of the change in enthalpy (∆H°) and the change in entropy (∆S°) as ∆G° = ∆H° – T∆S°. Practically, enthalpies, not free energies, are used to determine whether a reaction is favorable or unfavorable, because ∆H° is easier to measure and T∆S° is usually too small to be of any significance (for T < 100 °C). A reaction with ∆H° < 0 is called an exothermic reaction, while one with ∆H° > 0 is endothermic. The relative stability of reactant and product does not define the feasibility of any reaction all by itself. For any reaction to proceed, the starting material must have enough energy to cross over an energy barrier. This energy barrier is known as the activation energy (∆G‡), and the rate of reaction is dependent on the height of this barrier. A low energy barrier corresponds to a fast reaction and a high energy barrier corresponds to a slow reaction. A reaction is in equilibrium when the rate of the forward reaction is equal to the rate of the reverse reaction. Such a reaction is said to be reversible. If the starting material and product(s) are in equilibrium then their relative abundance is decided by the difference in free energy between them. In principle, all elementary steps are reversible, but in many cases the equilibrium lies so much towards the product side that the starting material is effectively no longer observable or present in sufficient concentration to have an effect on reactivity. Practically speaking, the reaction is considered to be irreversible. While most reversible processes will have a reasonably small K of 10^3 or less, this is not a hard and fast rule, and a number of chemical processes require reversibility of even very favorable reactions. For instance, the reaction of a carboxylic acid with amines to form a salt takes place with K of 10^5–10^6, and at ordinary temperatures, this process is regarded as irreversible. Yet, with sufficient heating, the reverse reaction takes place to allow formation of the tetrahedral intermediate and, ultimately, amide and water. (For an extreme example requiring reversibility of a step with K > 10^11, see demethylation.) A reaction can also be rendered irreversible if a subsequent, faster step takes place to consume the initial product(s), or a gas is evolved in an open system. Thus, there is no value of K that serves as a "dividing line" between reversible and irreversible processes. Instead, reversibility depends on timescale, temperature, the reaction conditions, and the overall energy landscape. When a reactant can form two different products depending on the reaction conditions, it becomes important to choose the right conditions to favor the desired product. If a reaction is carried out at a relatively low temperature, then the product formed is the one lying across the smaller energy barrier. This is called kinetic control, and the ratio of the products formed depends on the relative energy barriers leading to the products.
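A short numerical sketch of kinetic versus thermodynamic control follows; the activation and reaction free energies are invented values. Under kinetic control the product ratio is governed by the difference in barrier heights, while under thermodynamic control it is governed by the difference in product free energies, in both cases through a Boltzmann-type factor exp(−ΔΔG/RT).

```python
import math

R = 8.314e-3   # gas constant, kJ mol^-1 K^-1
T = 298.0      # temperature, K

# Hypothetical reactant -> product A or product B (all values are assumptions)
dG_act_A, dG_act_B = 60.0, 65.0    # activation free energies, kJ/mol
dG_A,     dG_B     = -10.0, -25.0  # reaction free energies, kJ/mol

# Kinetic control: ratio of rates, i.e. of exp(-dG_act/RT) (Eyring prefactors cancel)
kinetic_ratio = math.exp(-(dG_act_A - dG_act_B) / (R * T))

# Thermodynamic control: ratio of equilibrium populations, exp(-dG/RT)
thermo_ratio = math.exp(-(dG_A - dG_B) / (R * T))

print(f"[A]/[B] under kinetic control       ~ {kinetic_ratio:.1f}")  # A favoured (lower barrier)
print(f"[A]/[B] under thermodynamic control ~ {thermo_ratio:.1e}")   # B favoured (more stable)
```

In this invented example the product formed over the lower barrier dominates under kinetic control even though it is the less stable one, while the more stable product dominates once the products can equilibrate.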
Relative stabilities of the products do not matter. However, at higher temperatures the molecules have enough energy to cross over both energy barriers leading to the products. In such a case, the product ratio is determined solely by the energies of the products, and the energies of the barriers do not matter. This is known as thermodynamic control, and it can only be achieved when the products can inter-convert and equilibrate under the reaction conditions. A reaction coordinate diagram can also be used to qualitatively illustrate kinetic and thermodynamic control in a reaction. == Applications == Following are a few examples of how to interpret reaction coordinate diagrams and use them in analyzing reactions. Solvent Effect: In general, if the transition state for the rate determining step corresponds to a more charged species relative to the starting material then increasing the polarity of the solvent will increase the rate of the reaction, since a more polar solvent will be more effective at stabilizing the transition state (ΔG‡ would decrease). If the transition state structure corresponds to a less charged species then increasing the solvent's polarity would decrease the reaction rate, since a more polar solvent would be more effective at stabilizing the starting material (ΔG° would decrease, which in turn increases ΔG‡). SN1 vs SN2 The SN1 and SN2 mechanisms are used as an example to demonstrate how solvent effects can be indicated in reaction coordinate diagrams. SN1: Figure 10 shows the rate determining step for an SN1 mechanism, formation of the carbocation intermediate, and the corresponding reaction coordinate diagram. For an SN1 mechanism the transition state structure shows partial charge development relative to the neutral ground state structure. Therefore, increasing the solvent polarity, for example from hexanes (shown as blue) to ether (shown in red), would increase the rate of the reaction. As shown in figure 9, the starting material has approximately the same stability in both solvents (therefore ΔΔG° = ΔG°polar – ΔG°non-polar is small) and the transition state is stabilized more in ether, meaning ΔΔG‡ = ΔG‡polar – ΔG‡non-polar is large. SN2: For an SN2 mechanism a strongly basic nucleophile (i.e. a charged nucleophile) is favorable. In figure 11 below the rate determining step for the Williamson ether synthesis is shown. The starting material is methyl chloride and an ethoxide ion, which has a localized negative charge, meaning it is more stable in polar solvents. The figure shows a transition state structure as the methyl chloride undergoes nucleophilic attack. In the transition state structure the charge is distributed between the Cl and the O atoms, and the more polar solvent is less effective at stabilizing the transition state structure relative to the starting materials. In other words, the energy difference between the polar and non-polar solvent is greater for the ground state (for the starting material) than for the transition state. Catalysts: There are two types of catalysts, positive and negative. Positive catalysts increase the reaction rate and negative catalysts (or inhibitors) slow down a reaction and possibly cause the reaction not to occur at all. The purpose of a catalyst is to alter the activation energy. Figure 12 illustrates the purpose of a catalyst in that only the activation energy is changed and not the relative thermodynamic stabilities, shown in the figure as ΔH, of the products and reactants.
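A quick numerical check of the point made for figure 12 can be sketched with the Arrhenius expression k = A·exp(−Ea/RT), using hypothetical barrier heights and a common pre-exponential factor for the forward and reverse steps: the catalyst lowers both barriers by the same amount, so both rate constants grow by the same factor and their ratio, the equilibrium constant, is unchanged.

```python
import math

R, T = 8.314e-3, 298.0   # kJ mol^-1 K^-1, K
A = 1.0e13               # assumed common pre-exponential factor, s^-1
dH_rxn = -20.0           # reaction enthalpy (products below reactants), kJ/mol (hypothetical)
Ea_uncat = 90.0          # uncatalysed forward barrier, kJ/mol (hypothetical)
Ea_cat = 60.0            # catalysed forward barrier, kJ/mol (hypothetical)

def rates(Ea_forward):
    kf = A * math.exp(-Ea_forward / (R * T))
    # reverse barrier = forward barrier minus the reaction enthalpy
    kr = A * math.exp(-(Ea_forward - dH_rxn) / (R * T))
    return kf, kr

for label, Ea in [("uncatalysed", Ea_uncat), ("catalysed", Ea_cat)]:
    kf, kr = rates(Ea)
    print(f"{label}: k_forward = {kf:.2e} s^-1, K_eq = k_f/k_r = {kf/kr:.2e}")

# Both cases give the same K_eq (= exp(-dH_rxn/(R*T)) with a common prefactor),
# but the catalysed forward rate is larger by exp((Ea_uncat - Ea_cat)/(R*T)).
```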
This means that a catalyst will not alter the equilibrium concentrations of the products and reactants but will only allow the reaction to reach equilibrium faster. Figure 13 shows the catalyzed pathway occurring in multiple steps, which is a more realistic depiction of a catalyzed process. The new catalyzed pathway can occur through the same mechanism as the uncatalyzed reaction or through an alternate mechanism. An enzyme is a biological catalyst that increases the rate of many vital biochemical reactions. Figure 13 shows a common way to illustrate the effect of an enzyme on a given biochemical reaction. == See also == Gibbs free energy Enthalpy Entropy Computational chemistry Molecular mechanics Born–Oppenheimer approximation == References ==
Wikipedia/Energy_profile_(chemistry)
Reaction dynamics is a field within physical chemistry that studies why chemical reactions occur, how to predict their behavior, and how to control them. It is closely related to chemical kinetics, but is concerned with individual chemical events on atomic length scales and over very brief time periods. It considers state-to-state kinetics between reactant and product molecules in specific quantum states, and how energy is distributed between translational, vibrational, rotational, and electronic modes. Experimental methods of reaction dynamics probe the chemical physics associated with molecular collisions. They include crossed molecular beam and infrared chemiluminescence experiments, both recognized by the 1986 Nobel Prize in Chemistry awarded to Dudley Herschbach, Yuan T. Lee, and John C. Polanyi "for their contributions concerning the dynamics of chemical elementary processes". In the crossed beam method used by Herschbach and Lee, narrow beams of reactant molecules in selected quantum states are allowed to react in order to determine the reaction probability as a function of such variables as the translational, vibrational and rotational energy of the reactant molecules and their angle of approach. In contrast, the method of Polanyi measures the vibrational energy of the products by detecting the infrared chemiluminescence emitted by vibrationally excited molecules, in some cases for reactants in defined energy states. Spectroscopic observation of reaction dynamics on the shortest time scales is known as femtochemistry, since the typical times studied are of the order of 1 femtosecond = 10−15 s. This subject was recognized by the award of the 1999 Nobel Prize in Chemistry to Ahmed Zewail. In addition, theoretical studies of reaction dynamics involve calculating the potential energy surface for a reaction as a function of nuclear positions, and then calculating the trajectory of a point on this surface representing the state of the system. A correction can be applied to include the effect of quantum tunnelling through the activation energy barrier, especially for the movement of hydrogen atoms. == References == == Further reading == Steinfeld J.I., Francisco J.S. and Hase W.L. Chemical Kinetics and Dynamics (2nd ed., Prentice-Hall 1999) chaps. 6–13 ISBN 0-13-737123-3
Wikipedia/Reaction_dynamics
The Eyring equation (occasionally also known as Eyring–Polanyi equation) is an equation used in chemical kinetics to describe changes in the rate of a chemical reaction against temperature. It was developed almost simultaneously in 1935 by Henry Eyring, Meredith Gwynne Evans and Michael Polanyi. The equation follows from the transition state theory, also known as activated-complex theory. If one assumes a constant enthalpy of activation and constant entropy of activation, the Eyring equation is similar to the empirical Arrhenius equation, despite the Arrhenius equation being empirical and the Eyring equation based on statistical mechanical justification. == General form == The general form of the Eyring–Polanyi equation somewhat resembles the Arrhenius equation: k = κ k B T h e − Δ G ‡ R T {\displaystyle \ k={\frac {\kappa k_{\mathrm {B} }T}{h}}e^{-{\frac {\Delta G^{\ddagger }}{RT}}}} where k {\displaystyle k} is the rate constant, Δ G ‡ {\displaystyle \Delta G^{\ddagger }} is the Gibbs energy of activation, κ {\displaystyle \kappa } is the transmission coefficient, k B {\displaystyle k_{\mathrm {B} }} is the Boltzmann constant, T {\displaystyle T} is the temperature, and h {\displaystyle h} is the Planck constant. The transmission coefficient κ {\displaystyle \kappa } is often assumed to be equal to one as it reflects what fraction of the flux through the transition state proceeds to the product without recrossing the transition state. So, a transmission coefficient equal to one means that the fundamental no-recrossing assumption of transition state theory holds perfectly. However, κ {\displaystyle \kappa } is typically not one because (i) the reaction coordinate chosen for the process at hand is usually not perfect and (ii) many barrier-crossing processes are somewhat or even strongly diffusive in nature. For example, the transmission coefficient of methane hopping in a gas hydrate from one site to an adjacent empty site is between 0.25 and 0.5. Typically, reactive flux correlation function (RFCF) simulations are performed in order to explicitly calculate κ {\displaystyle \kappa } from the resulting plateau in the RFCF. This approach is also referred to as the Bennett-Chandler approach, which yields a dynamical correction to the standard transition state theory-based rate constant. It can be rewritten as: k = κ k B T h e Δ S ‡ R e − Δ H ‡ R T {\displaystyle k={\frac {\kappa k_{\mathrm {B} }T}{h}}e^{\frac {\Delta S^{\ddagger }}{R}}e^{-{\frac {\Delta H^{\ddagger }}{RT}}}} One can put this equation in the following form: ln ⁡ k T = − Δ H ‡ R ⋅ 1 T + ln ⁡ κ k B h + Δ S ‡ R {\displaystyle \ln {\frac {k}{T}}={\frac {-\Delta H^{\ddagger }}{R}}\cdot {\frac {1}{T}}+\ln {\frac {\kappa k_{\mathrm {B} }}{h}}+{\frac {\Delta S^{\ddagger }}{R}}} where: k {\displaystyle k} = reaction rate constant T {\displaystyle T} = absolute temperature Δ H ‡ {\displaystyle \Delta H^{\ddagger }} = enthalpy of activation R {\displaystyle R} = gas constant κ {\displaystyle \kappa } = transmission coefficient k B {\displaystyle k_{\mathrm {B} }} = Boltzmann constant = R/NA, NA = Avogadro constant h {\displaystyle h} = Planck constant Δ S ‡ {\displaystyle \Delta S^{\ddagger }} = entropy of activation If one assumes constant enthalpy of activation, constant entropy of activation, and constant transmission coefficient, this equation can be used as follows: A certain chemical reaction is performed at different temperatures and the reaction rate is determined. 
The plot of ln ⁡ ( k / T ) {\displaystyle \ln(k/T)} versus 1 / T {\displaystyle 1/T} gives a straight line with slope − Δ H ‡ / R {\displaystyle -\Delta H^{\ddagger }/R} from which the enthalpy of activation can be derived and with intercept ln ⁡ ( κ k B / h ) + Δ S ‡ / R {\displaystyle \ln(\kappa k_{\mathrm {B} }/h)+\Delta S^{\ddagger }/R} from which the entropy of activation is derived. == Accuracy == Transition state theory requires a value of the transmission coefficient, called κ {\displaystyle \kappa } in that theory. This value is often taken to be unity (i.e., the species passing through the transition state A B ‡ {\displaystyle AB^{\ddagger }} always proceed directly to products AB and never revert to reactants A and B). To avoid specifying a value of κ {\displaystyle \kappa } , the rate constant can be compared to the value of the rate constant at some fixed reference temperature (i.e., k ( T ) / k ( T R e f ) {\displaystyle \ k(T)/k(T_{\rm {Ref}})} ) which eliminates the κ {\displaystyle \kappa } factor in the resulting expression if one assumes that the transmission coefficient is independent of temperature. == Error propagation formulas == Error propagation formulas for Δ H ‡ {\displaystyle \Delta H^{\ddagger }} and Δ S ‡ {\displaystyle \Delta S^{\ddagger }} have been published. == Notes == == References == Evans, M.G.; Polanyi M. (1935). "Some applications of the transition state method to the calculation of reaction velocities, especially in solution". Trans. Faraday Soc. 31: 875–894. doi:10.1039/tf9353100875. Eyring, H. (1935). "The Activated Complex in Chemical Reactions". J. Chem. Phys. 3 (2): 107–115. Bibcode:1935JChPh...3..107E. doi:10.1063/1.1749604. Eyring, H.; Polanyi, M. (2013-11-01). "On Simple Gas Reactions". Zeitschrift für Physikalische Chemie. 227 (11): 1221–1246. doi:10.1524/zpch.2013.9023. ISSN 2196-7156. S2CID 119992451. Laidler, K.J.; King M.C. (1983). "The development of Transition-State Theory". J. Phys. Chem. 87 (15): 2657–2664. doi:10.1021/j100238a002. Polanyi, J.C. (1987). "Some concepts in reaction dynamics". Science. 236 (4802): 680–690. Bibcode:1987Sci...236..680P. doi:10.1126/science.236.4802.680. PMID 17748308. S2CID 19914017. Chapman, S. and Cowling, T.G. (1991). "The Mathematical Theory of Non-uniform Gases: An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases" (3rd Edition). Cambridge University Press, ISBN 9780521408448 == External links == Eyring equation at the University of Regensburg (archived from the original) Online-tool to calculate the reaction rate from an energy barrier (in kJ/mol) using the Eyring equation
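As an illustration of that graphical procedure, the sketch below generates synthetic rate constants from assumed activation parameters (ΔH‡ = 50 kJ/mol, ΔS‡ = −30 J mol−1 K−1, κ = 1, all invented) and recovers them from the slope and intercept of ln(k/T) versus 1/T; with real data the same linear fit is applied to measured rate constants.

```python
import numpy as np

kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J s
R  = 8.314           # gas constant, J mol^-1 K^-1

# Assumed "true" activation parameters used to generate synthetic data
dH_act = 50.0e3      # enthalpy of activation, J/mol
dS_act = -30.0       # entropy of activation, J mol^-1 K^-1

T = np.linspace(280.0, 340.0, 7)  # temperatures, K
# Eyring equation with transmission coefficient kappa = 1
k = (kB * T / h) * np.exp(dS_act / R) * np.exp(-dH_act / (R * T))

# Linearised form: ln(k/T) = -(dH/R) * (1/T) + [ln(kB/h) + dS/R]
slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)

dH_fit = -slope * R                        # J/mol
dS_fit = (intercept - np.log(kB / h)) * R  # J mol^-1 K^-1

print(f"recovered enthalpy of activation = {dH_fit/1e3:.1f} kJ/mol")
print(f"recovered entropy of activation  = {dS_fit:.1f} J/(mol K)")
```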
Wikipedia/Eyring_equation
Polarography is a type of voltammetry where the working electrode is a dropping mercury electrode (DME) or a static mercury drop electrode (SMDE), which are useful for their wide cathodic ranges and renewable surfaces. It was invented in 1922 by Czechoslovak chemist Jaroslav Heyrovský, for which he won the Nobel prize in 1959. The main advantages of mercury as electrode material are as follows: 1) a large voltage window: ca. from +0.2 V to -1.8 V vs reversible hydrogen electrode (RHE). Hg electrode is particularly well-suited for studying electroreduction reactions. 2) very reproducible electrode surface, since mercury is liquid. 3) very easy cleaning of the electrode surface by making a new drop of mercury from a large Hg pool connected by a glass capillary. Polarography played a major role as an experimental tool in the advancement of both Analytical Chemistry and Electrochemistry until the 1990s (see figure below), when it was supplanted by other methods that did not require the use of mercury. == Principle of operation == Polarography is an electrochemical voltammetric technique that employs (dropping or static) mercury drop as a working electrode. In its most simple form polarography can be used to determine concentrations of electroactive species in liquids by measuring their mass-transport limiting currents. In such an experiment the potential of the working mercury drop electrode is linearly changed in time, and the electrode current is recorded at a certain time just before the mercury drop dislodges from a glass capillary from where the stream of mercury emerges. A plot of the current vs. potential in a polarography experiment shows the current oscillations corresponding to the drops of Hg falling from the capillary. If the maximum currents of each drop were connected, a sigmoidal shape would result. The limiting current (the plateau on the sigmoid), is called the diffusion-limited current because diffusion is the principal contribution to the flux of the electroactive material at this point of the Hg drop life. More advanced varieties of polarography (see below) produce peaks (which allow for a better resolution of different chemical species) rather than the waves of classical polarography, and improve the detection limits, which in some cases can be as low as 10^-9 M. == Limitations == There are limitations in particular for the classical polarography experiment for quantitative analytical measurements. Because the current is continuously measured during the growth of the Hg drop, there is a substantial contribution from capacitive current. As the Hg flows from the capillary end, there is initially a large increase in the surface area. As a consequence, the initial current is dominated by capacitive effects as charging of the rapidly increasing interface occurs. Toward the end of the drop life, there is little change in the surface area which diminishes the contribution of capacitance changes to the total current. At the same time, any redox process which occurs will result in faradaic current that decays approximately as the square root of time (due to the increasing dimensions of the Nernst diffusion layer). The exponential decay of the capacitive current is much more rapid than the decay of the faradaic current; hence, the faradaic current is proportionally larger at the end of the drop life. Unfortunately, this process is complicated by the continuously changing potential that is applied to the working electrode (the Hg drop) throughout the experiment. 
Because the potential changes during the drop lifetime (assuming typical experimental parameters of a 2 mV/s scan rate and a 4 s drop time, the potential can change by 8 mV from the beginning to the end of the drop), the charging of the interface (capacitive current) has a continuous contribution to the total current, even at the end of the drop when the surface area is not rapidly changing. As such, the typical signal to noise ratio of a polarographic experiment allows detection limits of only approximately 10−5 or 10−6 M. == Improvements == Dramatically better discrimination against the capacitive current can be obtained using the tast and pulse polarographic techniques. These have been developed with the introduction of analogue and digital electronic potentiostats. The first major improvement was obtained by measuring the current only at the end of each drop lifetime (tast polarography). An even greater enhancement was the introduction of differential pulse polarography. Here, the current is measured before the beginning and before the end of short potential pulses. The latter are superimposed on the linear potential-time-function of the voltammetric scan. Typical amplitudes of these pulses range between 10 and 50 mV, whereas pulse duration is 20 to 50 ms. The difference between both current values is the analytical signal. This technique results in a 100 to 1000-fold improvement of the detection limit, because the capacitive component is effectively subtracted. == Qualitative information == Qualitative information can also be determined from the half-wave potential of the polarogram (the current vs. potential plot in a polarographic experiment). The value of the half-wave potential is related to the standard potential for the redox reaction being studied. This technique and especially the differential pulse anodic stripping voltammetry (DPASV) method can be used for environmental analysis, and especially for marine study for the characterisation of organic matter and metals interactions. == Quantitative information == The Ilkovic equation is a relation used in polarography relating the diffusion current (Id) and the concentration of the depolarizer (c), which is the substance reduced or oxidized at the dropping mercury electrode. The Ilkovic equation has the form I d = k n D 1 / 2 m r 2 / 3 t 1 / 6 c {\displaystyle I_{\text{d}}=knD^{1/2}m_{r}^{2/3}t^{1/6}c} where: k is a constant which includes π and the density of mercury, and with the Faraday constant F has been evaluated at 708 for maximal current and 607 for average current D is the diffusion coefficient of the depolarizer in the medium (cm2/s) n is the number of electrons exchanged in the electrode reaction, m is the mass flow rate of Hg through the capillary (mg/s) t is the drop lifetime in seconds, c is depolarizer concentration in mol/cm3. The equation is named after the scientist who derived it, the Slovak chemist Dionýz Ilkovič (1907–1980). == See also == Electroanalytical method Hanging mercury drop electrode == References ==
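As a small numerical sketch of the Ilkovic equation above, the snippet below simply evaluates the product k·n·D^1/2·m^2/3·t^1/6·c for hypothetical inputs. The constants 708 and 607 presuppose a consistent unit convention for D, m, t and c, so the absolute value returned is only meaningful once that convention is fixed; the point of the example is the functional form and, in particular, the proportionality between the diffusion current and the depolarizer concentration that underlies quantitative polarography.

```python
def ilkovic_current(n, D, m, t, c, k=708.0):
    """Ilkovic equation: I_d = k * n * D**0.5 * m**(2/3) * t**(1/6) * c.

    k = 708 corresponds to the maximal current and k = 607 to the average
    current; the inputs must be supplied in the unit convention matching k.
    """
    return k * n * D**0.5 * m**(2.0 / 3.0) * t**(1.0 / 6.0) * c

# Hypothetical inputs, chosen only to illustrate the scaling:
I_max = ilkovic_current(n=2,        # two-electron reduction
                        D=7.0e-6,   # diffusion coefficient of the depolarizer, cm^2/s
                        m=1.5,      # mercury mass flow rate, mg/s
                        t=4.0,      # drop lifetime, s
                        c=1.0e-3)   # depolarizer concentration (per the chosen convention)
I_avg = ilkovic_current(2, 7.0e-6, 1.5, 4.0, 1.0e-3, k=607.0)
print("maximal current:", I_max, " average current:", I_avg)

# Doubling the concentration doubles the diffusion current -- the basis of
# quantitative polarographic analysis.
assert abs(ilkovic_current(2, 7.0e-6, 1.5, 4.0, 2.0e-3) - 2 * I_max) < 1e-9
```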
Wikipedia/Polarography
Science Illustrated is a multilingual popular science magazine published by the Swedish publisher Bonnier Publications International A/S. == History and profile == Science Illustrated was launched simultaneously in Denmark, Norway and Sweden in 1984. The Finnish version was started in Helsinki, Finland in 1986. The Norwegian version is based in Oslo. According to official websites, the magazine – with a total circulation of 370,000 copies – is the biggest in the Nordic countries with a focus on nature, technology, medicine and culture. === Editions === == See also == List of Norwegian magazines List of magazines in Denmark == References == == External links == Website of Australian Science Illustrated
Wikipedia/Science_Illustrated
In physics, the energy–momentum relation, or relativistic dispersion relation, is the relativistic equation relating total energy (which is also called relativistic energy) to invariant mass (which is also called rest mass) and momentum. It is the extension of mass–energy equivalence for bodies or systems with non-zero momentum. It can be formulated as: E 2 = ( p c ) 2 + ( m 0 c 2 ) 2 {\displaystyle E^{2}=(pc)^{2}+\left(m_{0}c^{2}\right)^{2}} (1) This equation holds for a body or system, such as one or more particles, with total energy E, invariant mass m0, and momentum of magnitude p; the constant c is the speed of light. It assumes the special relativity case of flat spacetime and that the particles are free. Total energy is the sum of rest energy E 0 = m 0 c 2 {\displaystyle E_{0}=m_{0}c^{2}} and relativistic kinetic energy: E K = E − E 0 = ( p c ) 2 + ( m 0 c 2 ) 2 − m 0 c 2 {\displaystyle E_{K}=E-E_{0}={\sqrt {(pc)^{2}+\left(m_{0}c^{2}\right)^{2}}}-m_{0}c^{2}} Invariant mass is mass measured in a center-of-momentum frame. For bodies or systems with zero momentum, it simplifies to the mass–energy equation E 0 = m 0 c 2 {\displaystyle E_{0}=m_{0}c^{2}} , where total energy in this case is equal to rest energy. The Dirac sea model, which was used to predict the existence of antimatter, is closely related to the energy–momentum relation. == Connection to E = mc2 == The energy–momentum relation is consistent with the familiar mass–energy relation in both its interpretations: E = mc2 relates total energy E to the (total) relativistic mass m (alternatively denoted mrel or mtot), while E0 = m0c2 relates rest energy E0 to (invariant) rest mass m0. Unlike either of those equations, the energy–momentum equation (1) relates the total energy to the rest mass m0. All three equations hold true simultaneously. == Special cases == If the body is a massless particle (m0 = 0), then (1) reduces to E = pc. For photons, this is the relation, discovered in 19th century classical electromagnetism, between radiant momentum (causing radiation pressure) and radiant energy. If the body's speed v is much less than c, then (1) reduces to E = (1/2)m0v2 + m0c2; that is, the body's total energy is simply its classical kinetic energy ((1/2)m0v2) plus its rest energy. If the body is at rest (v = 0), i.e. in its center-of-momentum frame (p = 0), we have E = E0 and m = m0; thus the energy–momentum relation and both forms of the mass–energy relation (mentioned above) all become the same. A more general form of relation (1) holds for general relativity. The invariant mass (or rest mass) is an invariant for all frames of reference (hence the name), not just in inertial frames in flat spacetime, but also in accelerated frames traveling through curved spacetime (see below). However, the total energy of the particle E and its relativistic momentum p are frame-dependent; relative motion between two frames causes the observers in those frames to measure different values of the particle's energy and momentum; one frame measures E and p, while the other frame measures E′ and p′, where E′ ≠ E and p′ ≠ p, unless there is no relative motion between observers, in which case each observer measures the same energy and momenta. Although we still have, in flat spacetime: E ′ 2 − ( p ′ c ) 2 = ( m 0 c 2 ) 2 . {\displaystyle {E'}^{2}-\left(p'c\right)^{2}=\left(m_{0}c^{2}\right)^{2}\,.} The quantities E, p, E′, p′ are all related by a Lorentz transformation. The relation allows one to sidestep Lorentz transformations when determining only the magnitudes of the energy and momenta by equating the relations in the different frames.
Again in flat spacetime, this translates to; E 2 − ( p c ) 2 = E ′ 2 − ( p ′ c ) 2 = ( m 0 c 2 ) 2 . {\displaystyle {E}^{2}-\left(pc\right)^{2}={E'}^{2}-\left(p'c\right)^{2}=\left(m_{0}c^{2}\right)^{2}\,.} Since m0 does not change from frame to frame, the energy–momentum relation is used in relativistic mechanics and particle physics calculations, as energy and momentum are given in a particle's rest frame (that is, E′ and p′ as an observer moving with the particle would conclude to be) and measured in the lab frame (i.e. E and p as determined by particle physicists in a lab, and not moving with the particles). In relativistic quantum mechanics, it is the basis for constructing relativistic wave equations, since if the relativistic wave equation describing the particle is consistent with this equation – it is consistent with relativistic mechanics, and is Lorentz invariant. In relativistic quantum field theory, it is applicable to all particles and fields. == Origins and derivation of the equation == The energy–momentum relation goes back to Max Planck's article published in 1906. It was used by Walter Gordon in 1926 and then by Paul Dirac in 1928 under the form E = c 2 p 2 + ( m 0 c 2 ) 2 + V {\textstyle E={\sqrt {c^{2}p^{2}+(m_{0}c^{2})^{2}}}+V} , where V is the amount of potential energy. The equation can be derived in a number of ways, two of the simplest include: From the relativistic dynamics of a massive particle, By evaluating the norm of the four-momentum of the system. This method applies to both massive and massless particles, and can be extended to multi-particle systems with relatively little effort (see § Many-particle systems below). === Heuristic approach for massive particles === For a massive object moving at three-velocity u = (ux, uy, uz) with magnitude |u| = u in the lab frame: E = γ ( u ) m 0 c 2 {\displaystyle E=\gamma _{(\mathbf {u} )}m_{0}c^{2}} is the total energy of the moving object in the lab frame, p = γ ( u ) m 0 u {\displaystyle \mathbf {p} =\gamma _{(\mathbf {u} )}m_{0}\mathbf {u} } is the three dimensional relativistic momentum of the object in the lab frame with magnitude |p| = p. The relativistic energy E and momentum p include the Lorentz factor defined by: γ ( u ) = 1 1 − u ⋅ u c 2 = 1 1 − ( u c ) 2 {\displaystyle \gamma _{(\mathbf {u} )}={\frac {1}{\sqrt {1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}}}}={\frac {1}{\sqrt {1-\left({\frac {u}{c}}\right)^{2}}}}} Some authors use relativistic mass defined by: m = γ ( u ) m 0 {\displaystyle m=\gamma _{(\mathbf {u} )}m_{0}} although rest mass m0 has a more fundamental significance, and will be used primarily over relativistic mass m in this article. Squaring the 3-momentum gives: p 2 = p ⋅ p = m 0 2 u ⋅ u 1 − u ⋅ u c 2 = m 0 2 u 2 1 − ( u c ) 2 {\displaystyle p^{2}=\mathbf {p} \cdot \mathbf {p} ={\frac {m_{0}^{2}\mathbf {u} \cdot \mathbf {u} }{1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}}}={\frac {m_{0}^{2}u^{2}}{1-\left({\frac {u}{c}}\right)^{2}}}} then solving for u2 and substituting into the Lorentz factor one obtains its alternative form in terms of 3-momentum and mass, rather than 3-velocity: γ = 1 + ( p m 0 c ) 2 {\displaystyle \gamma ={\sqrt {1+\left({\frac {p}{m_{0}c}}\right)^{2}}}} Inserting this form of the Lorentz factor into the energy equation gives: E = m 0 c 2 1 + ( p m 0 c ) 2 {\displaystyle E=m_{0}c^{2}{\sqrt {1+\left({\frac {p}{m_{0}c}}\right)^{2}}}} followed by more rearrangement it yields (1). 
The elimination of the Lorentz factor also eliminates implicit velocity dependence of the particle in (1), as well as any inferences to the "relativistic mass" of a massive particle. This approach is not general as massless particles are not considered. Naively setting m0 = 0 would mean that E = 0 and p = 0 and no energy–momentum relation could be derived, which is not correct. === Norm of the four-momentum === ==== Special relativity ==== In Minkowski space, energy (divided by c) and momentum are two components of a Minkowski four-vector, namely the four-momentum; P = ( E c , p ) , {\displaystyle \mathbf {P} =\left({\frac {E}{c}},\mathbf {p} \right)\,,} (these are the contravariant components). The Minkowski inner product ⟨ , ⟩ of this vector with itself gives the square of the norm of this vector, it is proportional to the square of the rest mass m of the body: ⟨ P , P ⟩ = | P | 2 = ( m 0 c ) 2 , {\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle =|\mathbf {P} |^{2}=\left(m_{0}c\right)^{2}\,,} a Lorentz invariant quantity, and therefore independent of the frame of reference. Using the Minkowski metric η with metric signature (− + + +), the inner product is ⟨ P , P ⟩ = | P | 2 = − ( m 0 c ) 2 , {\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle =|\mathbf {P} |^{2}=-\left(m_{0}c\right)^{2}\,,} and ⟨ P , P ⟩ {\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle } = P α η α β P β {\displaystyle =P^{\alpha }\eta _{\alpha \beta }P^{\beta }} = ( E c p x p y p z ) ( − 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ) ( E c p x p y p z ) {\displaystyle ={\begin{pmatrix}{\frac {E}{c}}&p_{x}&p_{y}&p_{z}\end{pmatrix}}{\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\\\end{pmatrix}}{\begin{pmatrix}{\frac {E}{c}}\\p_{x}\\p_{y}\\p_{z}\end{pmatrix}}} = − ( E c ) 2 + p 2 , {\displaystyle =-\left({\frac {E}{c}}\right)^{2}+p^{2}\,,} so − ( m 0 c ) 2 = − ( E c ) 2 + p 2 {\displaystyle -\left(m_{0}c\right)^{2}=-\left({\frac {E}{c}}\right)^{2}+p^{2}} or, in natural units where c = 1, | P | 2 + ( m 0 ) 2 = 0. {\displaystyle |\mathbf {P} |^{2}+(m_{0})^{2}=0.} ==== General relativity ==== In general relativity, the 4-momentum is a four-vector defined in a local coordinate frame, although by definition the inner product is similar to that of special relativity, ⟨ P , P ⟩ = | P | 2 = ( m 0 c ) 2 , {\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle =|\mathbf {P} |^{2}=\left(m_{0}c\right)^{2}\,,} in which the Minkowski metric η is replaced by the metric tensor field g: ⟨ P , P ⟩ = | P | 2 = P α g α β P β , {\displaystyle \left\langle \mathbf {P} ,\mathbf {P} \right\rangle =|\mathbf {P} |^{2}=P^{\alpha }g_{\alpha \beta }P^{\beta }\,,} solved from the Einstein field equations. Then: P α g α β P β = ( m 0 c ) 2 . {\displaystyle P^{\alpha }g_{\alpha \beta }P^{\beta }=\left(m_{0}c\right)^{2}\,.} == Units of energy, mass and momentum == In natural units where c = 1, the energy–momentum equation reduces to E 2 = p 2 + m 0 2 . {\displaystyle E^{2}=p^{2}+m_{0}^{2}\,.} In particle physics, energy is typically given in units of electron volts (eV), momentum in units of eV·c−1, and mass in units of eV·c−2. In electromagnetism, and because of relativistic invariance, it is useful to have the electric field E and the magnetic field B in the same unit (Gauss), using the cgs (Gaussian) system of units, where energy is given in units of erg, mass in grams (g), and momentum in g·cm·s−1. 
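In these particle-physics units the relation is easy to evaluate, and its frame invariance can be checked directly. The sketch below takes an electron-like rest mass of 0.511 MeV/c2 and an assumed momentum, computes E from the energy–momentum relation, and verifies that E2 − (pc)2 is unchanged under a Lorentz boost along the momentum direction (all numbers are illustrative).

```python
import math

# Work in units where c = 1; energies, momenta and masses in MeV.
m0 = 0.511   # rest mass (electron-like, assumed value), MeV
p  = 1.000   # momentum magnitude in the lab frame, MeV

E = math.sqrt(p**2 + m0**2)   # energy-momentum relation
print(f"E = {E:.4f} MeV")     # ~1.123 MeV

# Boost along the momentum direction with velocity beta (any |beta| < 1 works)
beta = 0.6
gamma = 1.0 / math.sqrt(1.0 - beta**2)
E_prime = gamma * (E - beta * p)   # Lorentz transformation of (E, p)
p_prime = gamma * (p - beta * E)

inv_lab   = E**2 - p**2
inv_boost = E_prime**2 - p_prime**2
print(f"E^2 - (pc)^2 in lab frame     = {inv_lab:.6f} MeV^2")
print(f"E^2 - (pc)^2 in boosted frame = {inv_boost:.6f} MeV^2")
print(f"m0^2                          = {m0**2:.6f} MeV^2")
```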
Energy may also in theory be expressed in units of grams, though in practice it requires a large amount of energy to be equivalent to masses in this range. For example, the first atomic bomb liberated about 1 gram of heat, and the largest thermonuclear bombs have generated a kilogram or more of heat. Energies of thermonuclear bombs are usually given in tens of kilotons and megatons referring to the energy liberated by exploding that amount of trinitrotoluene (TNT). == Special cases == === Centre-of-momentum frame (one particle) === For a body in its rest frame, the momentum is zero, so the equation simplifies to E 0 = m 0 c 2 , {\displaystyle E_{0}=m_{0}c^{2}\,,} where m0 is the rest mass of the body. === Massless particles === If the object is massless, as is the case for a photon, then the equation reduces to E = p c . {\displaystyle E=pc\,.} This is a useful simplification. It can be rewritten in other ways using the de Broglie relations: E = h c λ = ℏ c k . {\displaystyle E={\frac {hc}{\lambda }}=\hbar ck\,.} if the wavelength λ or wavenumber k are given. === Correspondence principle === Rewriting the relation for massive particles as: E = m 0 c 2 1 + ( p m 0 c ) 2 , {\displaystyle E=m_{0}c^{2}{\sqrt {1+\left({\frac {p}{m_{0}c}}\right)^{2}}}\,,} and expanding into power series by the binomial theorem (or a Taylor series): E = m 0 c 2 [ 1 + 1 2 ( p m 0 c ) 2 − 1 8 ( p m 0 c ) 4 + ⋯ ] , {\displaystyle E=m_{0}c^{2}\left[1+{\frac {1}{2}}\left({\frac {p}{m_{0}c}}\right)^{2}-{\frac {1}{8}}\left({\frac {p}{m_{0}c}}\right)^{4}+\cdots \right]\,,} in the limit that u ≪ c, we have γ(u) ≈ 1 so the momentum has the classical form p ≈ m0u, then to first order in (⁠p/m0c⁠)2 (i.e. retain the term (⁠p/m0c⁠)2n for n = 1 and neglect all terms for n ≥ 2) we have E ≈ m 0 c 2 [ 1 + 1 2 ( m 0 u m 0 c ) 2 ] , {\displaystyle E\approx m_{0}c^{2}\left[1+{\frac {1}{2}}\left({\frac {m_{0}u}{m_{0}c}}\right)^{2}\right]\,,} or E ≈ m 0 c 2 + 1 2 m 0 u 2 , {\displaystyle E\approx m_{0}c^{2}+{\frac {1}{2}}m_{0}u^{2}\,,} where the second term is the classical kinetic energy, and the first is the rest energy of the particle. This approximation is not valid for massless particles, since the expansion required the division of momentum by mass. Incidentally, there are no massless particles in classical mechanics. == Many-particle systems == === Addition of four momenta === In the case of many particles with relativistic momenta pn and energy En, where n = 1, 2, ... (up to the total number of particles) simply labels the particles, as measured in a particular frame, the four-momenta in this frame can be added; ∑ n P n = ∑ n ( E n c , p n ) = ( ∑ n E n c , ∑ n p n ) , {\displaystyle \sum _{n}\mathbf {P} _{n}=\sum _{n}\left({\frac {E_{n}}{c}},\mathbf {p} _{n}\right)=\left(\sum _{n}{\frac {E_{n}}{c}},\sum _{n}\mathbf {p} _{n}\right)\,,} and then take the norm; to obtain the relation for a many particle system: | ( ∑ n P n ) | 2 = ( ∑ n E n c ) 2 − ( ∑ n p n ) 2 = ( M 0 c ) 2 , {\displaystyle \left|\left(\sum _{n}\mathbf {P} _{n}\right)\right|^{2}=\left(\sum _{n}{\frac {E_{n}}{c}}\right)^{2}-\left(\sum _{n}\mathbf {p} _{n}\right)^{2}=\left(M_{0}c\right)^{2}\,,} where M0 is the invariant mass of the whole system, and is not equal to the sum of the rest masses of the particles unless all particles are at rest (see Mass in special relativity § The mass of composite systems for more detail). 
Substituting and rearranging gives the generalization of (1); The energies and momenta in the equation are all frame-dependent, while M0 is frame-independent. === Center-of-momentum frame === In the center-of-momentum frame (COM frame), by definition we have: ∑ n p n = 0 , {\displaystyle \sum _{n}\mathbf {p} _{n}={\boldsymbol {0}}\,,} with the implication from (2) that the invariant mass is also the centre of momentum (COM) mass–energy, aside from the c2 factor: ( ∑ n E n ) 2 = ( M 0 c 2 ) 2 ⇒ ∑ n E C O M n = E C O M = M 0 c 2 , {\displaystyle \left(\sum _{n}E_{n}\right)^{2}=\left(M_{0}c^{2}\right)^{2}\Rightarrow \sum _{n}E_{\mathrm {COM} \,n}=E_{\mathrm {COM} }=M_{0}c^{2}\,,} and this is true for all frames since M0 is frame-independent. The energies ECOM n are those in the COM frame, not the lab frame. However, many familiar bound systems have the lab frame as COM frame, since the system itself is not in motion and so the momenta all cancel to zero. An example would be a simple object (where vibrational momenta of atoms cancel) or a container of gas where the container is at rest. In such systems, all the energies of the system are measured as mass. For example, the heat in an object on a scale, or the total of kinetic energies in a container of gas on the scale, all are measured by the scale as the mass of the system. === Rest masses and the invariant mass === Either the energies or momenta of the particles, as measured in some frame, can be eliminated using the energy momentum relation for each particle: E n 2 − ( p n c ) 2 = ( m n c 2 ) 2 , {\displaystyle E_{n}^{2}-\left(\mathbf {p} _{n}c\right)^{2}=\left(m_{n}c^{2}\right)^{2}\,,} allowing M0 to be expressed in terms of the energies and rest masses, or momenta and rest masses. In a particular frame, the squares of sums can be rewritten as sums of squares (and products): ( ∑ n E n ) 2 = ( ∑ n E n ) ( ∑ k E k ) = ∑ n , k E n E k = 2 ∑ n < k E n E k + ∑ n E n 2 , {\displaystyle \left(\sum _{n}E_{n}\right)^{2}=\left(\sum _{n}E_{n}\right)\left(\sum _{k}E_{k}\right)=\sum _{n,k}E_{n}E_{k}=2\sum _{n<k}E_{n}E_{k}+\sum _{n}E_{n}^{2}\,,} ( ∑ n p n ) 2 = ( ∑ n p n ) ⋅ ( ∑ k p k ) = ∑ n , k p n ⋅ p k = 2 ∑ n < k p n ⋅ p k + ∑ n p n 2 , {\displaystyle \left(\sum _{n}\mathbf {p} _{n}\right)^{2}=\left(\sum _{n}\mathbf {p} _{n}\right)\cdot \left(\sum _{k}\mathbf {p} _{k}\right)=\sum _{n,k}\mathbf {p} _{n}\cdot \mathbf {p} _{k}=2\sum _{n<k}\mathbf {p} _{n}\cdot \mathbf {p} _{k}+\sum _{n}\mathbf {p} _{n}^{2}\,,} so substituting the sums, we can introduce their rest masses mn in (2): ∑ n ( m n c 2 ) 2 + 2 ∑ n < k ( E n E k − c 2 p n ⋅ p k ) = ( M 0 c 2 ) 2 . 
{\displaystyle \sum _{n}\left(m_{n}c^{2}\right)^{2}+2\sum _{n<k}\left(E_{n}E_{k}-c^{2}\mathbf {p} _{n}\cdot \mathbf {p} _{k}\right)=\left(M_{0}c^{2}\right)^{2}\,.} The energies can be eliminated by: E n = ( p n c ) 2 + ( m n c 2 ) 2 , E k = ( p k c ) 2 + ( m k c 2 ) 2 , {\displaystyle E_{n}={\sqrt {\left(\mathbf {p} _{n}c\right)^{2}+\left(m_{n}c^{2}\right)^{2}}}\,,\quad E_{k}={\sqrt {\left(\mathbf {p} _{k}c\right)^{2}+\left(m_{k}c^{2}\right)^{2}}}\,,} similarly the momenta can be eliminated by: p n ⋅ p k = | p n | | p k | cos ⁡ θ n k , | p n | = 1 c E n 2 − ( m n c 2 ) 2 , | p k | = 1 c E k 2 − ( m k c 2 ) 2 , {\displaystyle \mathbf {p} _{n}\cdot \mathbf {p} _{k}=\left|\mathbf {p} _{n}\right|\left|\mathbf {p} _{k}\right|\cos \theta _{nk}\,,\quad |\mathbf {p} _{n}|={\frac {1}{c}}{\sqrt {E_{n}^{2}-\left(m_{n}c^{2}\right)^{2}}}\,,\quad |\mathbf {p} _{k}|={\frac {1}{c}}{\sqrt {E_{k}^{2}-\left(m_{k}c^{2}\right)^{2}}}\,,} where θnk is the angle between the momentum vectors pn and pk. Rearranging: ( M 0 c 2 ) 2 − ∑ n ( m n c 2 ) 2 = 2 ∑ n < k ( E n E k − c 2 p n ⋅ p k ) . {\displaystyle \left(M_{0}c^{2}\right)^{2}-\sum _{n}\left(m_{n}c^{2}\right)^{2}=2\sum _{n<k}\left(E_{n}E_{k}-c^{2}\mathbf {p} _{n}\cdot \mathbf {p} _{k}\right)\,.} Since the invariant mass of the system and the rest masses of each particle are frame-independent, the right hand side is also an invariant (even though the energies and momenta are all measured in a particular frame). == Matter waves == Using the de Broglie relations for energy and momentum for matter waves, E = ℏ ω , p = ℏ k , {\displaystyle E=\hbar \omega \,,\quad \mathbf {p} =\hbar \mathbf {k} \,,} where ω is the angular frequency and k is the wavevector with magnitude |k| = k, equal to the wave number, the energy–momentum relation can be expressed in terms of wave quantities: ( ℏ ω ) 2 = ( c ℏ k ) 2 + ( m 0 c 2 ) 2 , {\displaystyle \left(\hbar \omega \right)^{2}=\left(c\hbar k\right)^{2}+\left(m_{0}c^{2}\right)^{2}\,,} and tidying up by dividing by (ħc)2 throughout: This can also be derived from the magnitude of the four-wavevector K = ( ω c , k ) , {\displaystyle \mathbf {K} =\left({\frac {\omega }{c}},\mathbf {k} \right)\,,} in a similar way to the four-momentum above. Since the reduced Planck constant ħ and the speed of light c both appear and clutter this equation, this is where natural units are especially helpful. Normalizing them so that ħ = c = 1, we have: ω 2 = k 2 + m 0 2 . {\displaystyle \omega ^{2}=k^{2}+m_{0}^{2}\,.} == Tachyon and exotic matter == The velocity of a bradyon with the relativistic energy–momentum relation E 2 = p 2 c 2 + m 0 2 c 4 . {\displaystyle E^{2}=p^{2}c^{2}+m_{0}^{2}c^{4}\,.} can never exceed c. On the contrary, it is always greater than c for a tachyon whose energy–momentum equation is E 2 = p 2 c 2 − m 0 2 c 4 . {\displaystyle E^{2}=p^{2}c^{2}-m_{0}^{2}c^{4}\,.} By contrast, the hypothetical exotic matter has a negative mass and the energy–momentum equation is E 2 = − p 2 c 2 + m 0 2 c 4 . {\displaystyle E^{2}=-p^{2}c^{2}+m_{0}^{2}c^{4}\,.} == See also == Mass–energy equivalence Four-momentum Mass in special relativity == References == A. Halpern (1988). 3000 Solved Problems in Physics, Schaum Series. McGraw-Hill. pp. 704–705. ISBN 978-0-07-025734-4. G. Woan (2010). The Cambridge Handbook of Physics Formulas. Cambridge University Press. p. 65. ISBN 978-0-521-57507-2. C.B. Parker (1994). McGraw-Hill Encyclopaedia of Physics (2nd ed.). McGraw-Hill. pp. 1192, 1193. ISBN 0-07-051400-3. R.G. Lerner; G.L. Trigg (1991). 
Encyclopaedia of Physics (2nd ed.). VHC Publishers. p. 1052. ISBN 0-89573-752-3.
Wikipedia/Relativistic_energy
In chemical kinetics, a reaction rate constant or reaction rate coefficient (k) is a proportionality constant which quantifies the rate and direction of a chemical reaction by relating it with the concentration of reactants. For a reaction between reactants A and B to form a product C, a A + b B → c C, where A and B are reactants, C is a product, and a, b, and c are stoichiometric coefficients, the reaction rate is often found to have the form: {\displaystyle r=k[\mathrm {A} ]^{m}[\mathrm {B} ]^{n}} Here k is the reaction rate constant that depends on temperature, and [A] and [B] are the molar concentrations of substances A and B in moles per unit volume of solution, assuming the reaction is taking place throughout the volume of the solution. (For a reaction taking place at a boundary, one would use moles of A or B per unit area instead.) The exponents m and n are called partial orders of reaction and are not generally equal to the stoichiometric coefficients a and b. Instead they depend on the reaction mechanism and can be determined experimentally. The sum of m and n, that is (m + n), is called the overall order of reaction. == Elementary steps == For an elementary step, there is a relationship between stoichiometry and rate law, as determined by the law of mass action. Almost all elementary steps are either unimolecular or bimolecular. For a unimolecular step the reaction rate is described by {\displaystyle r=k_{1}[\mathrm {A} ]}, where {\displaystyle k_{1}} is a unimolecular rate constant. Since a reaction requires a change in molecular geometry, unimolecular rate constants cannot be larger than the frequency of a molecular vibration. Thus, in general, a unimolecular rate constant has an upper limit of k1 ≤ ~10¹³ s⁻¹. For a bimolecular step the reaction rate is described by {\displaystyle r=k_{2}[\mathrm {A} ][\mathrm {B} ]}, where {\displaystyle k_{2}} is a bimolecular rate constant. Bimolecular rate constants have an upper limit that is determined by how frequently molecules can collide, and the fastest such processes are limited by diffusion. Thus, in general, a bimolecular rate constant has an upper limit of k2 ≤ ~10¹⁰ M⁻¹s⁻¹. For a termolecular step the reaction rate is described by {\displaystyle r=k_{3}[\mathrm {A} ][\mathrm {B} ][\mathrm {C} ]}, where {\displaystyle k_{3}} is a termolecular rate constant. There are few examples of elementary steps that are termolecular or higher order, due to the low probability of three or more molecules colliding in their reactive conformations and in the right orientation relative to each other to reach a particular transition state. There are, however, some termolecular examples in the gas phase. Most involve the recombination of two atoms or small radicals or molecules in the presence of an inert third body which carries off excess energy, such as O + O2 + N2 → O3 + N2. One well-established example is the termolecular step 2 I + H2 → 2 HI in the hydrogen-iodine reaction. In cases where a termolecular step might plausibly be proposed, one of the reactants is generally present in high concentration (e.g., as a solvent or diluent gas). == Relationship to other parameters == For a first-order reaction (including a unimolecular one-step process), there is a direct relationship between the unimolecular rate constant and the half-life of the reaction: {\textstyle t_{1/2}={\frac {\ln 2}{k}}}.
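As a minimal worked example of the rate law and of this half-life relation (with a made-up first-order rate constant and concentrations, not values from any cited study), the following Python sketch evaluates r = k[A]^m[B]^n and t1/2 = ln 2 / k.

```python
import math

def rate(k, conc_A, conc_B, m, n):
    """Rate law r = k [A]^m [B]^n, with concentrations in mol/L."""
    return k * conc_A**m * conc_B**n

def half_life_first_order(k):
    """Half-life of a first-order reaction, t_1/2 = ln(2) / k, for k in s^-1."""
    return math.log(2) / k

# Hypothetical first-order reaction in A (m = 1, n = 0) with k = 1e-3 s^-1
k = 1.0e-3
print(rate(k, conc_A=0.10, conc_B=1.0, m=1, n=0))   # 1e-4 mol L^-1 s^-1
print(half_life_first_order(k))                      # about 693 s
```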
Transition state theory gives a relationship between the rate constant {\displaystyle k(T)} and the Gibbs free energy of activation {\displaystyle \Delta G^{\ddagger }=\Delta H^{\ddagger }-T\Delta S^{\ddagger }}, a quantity that can be regarded as the free energy change needed to reach the transition state. In particular, this energy barrier incorporates both enthalpic ({\displaystyle \Delta H^{\ddagger }}) and entropic ({\displaystyle \Delta S^{\ddagger }}) changes that need to be achieved for the reaction to take place. The result from transition state theory is {\textstyle k(T)={\frac {k_{\mathrm {B} }T}{h}}e^{-\Delta G^{\ddagger }/RT}}, where h is the Planck constant and R the molar gas constant. As useful rules of thumb, a first-order reaction with a rate constant of 10⁻⁴ s⁻¹ will have a half-life (t1/2) of approximately 2 hours. For a one-step process taking place at room temperature, the corresponding Gibbs free energy of activation (ΔG‡) is approximately 23 kcal/mol. == Dependence on temperature == The Arrhenius equation is an elementary treatment that gives the quantitative basis of the relationship between the activation energy and the reaction rate at which a reaction proceeds. The rate constant as a function of thermodynamic temperature is then given by: {\displaystyle k(T)=Ae^{-E_{\mathrm {a} }/RT}} The reaction rate is given by: {\displaystyle r=Ae^{-E_{\mathrm {a} }/RT}[\mathrm {A} ]^{m}[\mathrm {B} ]^{n},} where Ea is the activation energy, R is the gas constant, and m and n are experimentally determined partial orders in [A] and [B], respectively. Since at temperature T the molecules have energies according to a Boltzmann distribution, one can expect the proportion of collisions with energy greater than Ea to vary with {\displaystyle e^{-E_{\mathrm {a} }/RT}}. The constant of proportionality A is the pre-exponential factor, or frequency factor (not to be confused here with the reactant A); it takes into consideration the frequency at which reactant molecules are colliding and the likelihood that a collision leads to a successful reaction. Here, A has the same dimensions as an (m + n)-order rate constant (see Units below). Another popular model that is derived using more sophisticated statistical mechanical considerations is the Eyring equation from transition state theory: {\displaystyle k(T)=\kappa {\frac {k_{\mathrm {B} }T}{h}}(c^{\ominus })^{1-M}e^{-\Delta G^{\ddagger }/RT}=\left(\kappa {\frac {k_{\mathrm {B} }T}{h}}(c^{\ominus })^{1-M}\right)e^{\Delta S^{\ddagger }/R}e^{-\Delta H^{\ddagger }/RT},} where ΔG‡ is the free energy of activation, a parameter that incorporates both the enthalpy and entropy change needed to reach the transition state. The temperature dependence of ΔG‡ is used to compute these parameters, the enthalpy of activation ΔH‡ and the entropy of activation ΔS‡, based on the defining formula ΔG‡ = ΔH‡ − TΔS‡. In effect, the free energy of activation takes into account both the activation energy and the likelihood of successful collision, while the factor kBT/h gives the frequency of molecular collision. The factor (c⊖)^(1−M) ensures the dimensional correctness of the rate constant when the transition state in question is bimolecular or higher.
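Both rules of thumb quoted above can be checked numerically. The sketch below is a minimal illustration that assumes the simple transition-state expression with κ = 1 and M = 1, so the (c⊖)^(1−M) factor is unity; the physical constants are standard values, and nothing here is specific to a particular reaction.

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
h   = 6.62607015e-34  # Planck constant, J*s
R   = 8.314462618     # molar gas constant, J/(mol*K)

def half_life(k):
    """t_1/2 = ln(2) / k for a first-order rate constant k in s^-1."""
    return math.log(2) / k

def gibbs_activation(k, T):
    """Invert k = (k_B*T/h) * exp(-dG/(R*T)) for dG in J/mol (kappa = 1, M = 1)."""
    return R * T * math.log(k_B * T / (h * k))

k = 1.0e-4    # s^-1, the rate constant used in the rule of thumb
T = 298.15    # K, room temperature

print(half_life(k) / 3600)             # roughly 1.9 hours
print(gibbs_activation(k, T) / 4184)   # roughly 23 kcal/mol (1 kcal = 4184 J)
```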
In the Eyring equation, c⊖ is the standard concentration, generally chosen based on the unit of concentration used (usually c⊖ = 1 mol·L⁻¹ = 1 M), and M is the molecularity of the transition state. Lastly, κ, usually set to unity, is known as the transmission coefficient, a parameter which essentially serves as a "fudge factor" for transition state theory. The biggest difference between the two theories is that Arrhenius theory attempts to model the reaction (single- or multi-step) as a whole, while transition state theory models the individual elementary steps involved. Thus, they are not directly comparable, unless the reaction in question involves only a single elementary step. Finally, in the past, collision theory, in which reactants are viewed as hard spheres with a particular cross-section, provided yet another common way to rationalize and model the temperature dependence of the rate constant, although this approach has gradually fallen into disuse. The equation for the rate constant is similar in functional form to both the Arrhenius and Eyring equations: {\displaystyle k(T)=PZe^{-\Delta E/RT},} where P is the steric (or probability) factor, Z is the collision frequency, and ΔE is the energy input required to overcome the activation barrier. Of note, {\displaystyle Z\propto T^{1/2}}, making the temperature dependence of k different from both the Arrhenius and Eyring models. === Comparison of models === All three theories model the temperature dependence of k using an equation of the form {\displaystyle k(T)=CT^{\alpha }e^{-\Delta E/RT}} for some constant C, where α = 0, 1⁄2, and 1 give Arrhenius theory, collision theory, and transition state theory, respectively, although the imprecise notion of ΔE, the energy needed to overcome the activation barrier, has a slightly different meaning in each theory. In practice, experimental data does not generally allow a determination to be made as to which is "correct" in terms of best fit. Nevertheless, all three are conceptual frameworks that make numerous assumptions, both realistic and unrealistic, in their derivations. As a result, they are capable of providing different insights into a system. == Units == The units of the rate constant depend on the overall order of reaction. If concentration is measured in units of mol·L⁻¹ (sometimes abbreviated as M), then: For order (m + n), the rate constant has units of mol^(1−(m+n))·L^((m+n)−1)·s⁻¹ (or M^(1−(m+n))·s⁻¹). For order zero, the rate constant has units of mol·L⁻¹·s⁻¹ (or M·s⁻¹). For order one, the rate constant has units of s⁻¹. For order two, the rate constant has units of L·mol⁻¹·s⁻¹ (or M⁻¹·s⁻¹). For order three, the rate constant has units of L²·mol⁻²·s⁻¹ (or M⁻²·s⁻¹). For order four, the rate constant has units of L³·mol⁻³·s⁻¹ (or M⁻³·s⁻¹). == Plasma and gases == Calculation of the rate constants of the processes of generation and relaxation of electronically and vibrationally excited particles is of significant importance. Such rate constants are used, for example, in the computer simulation of processes in plasma chemistry or microelectronics. First-principles-based models should be used for such calculations; this can be done with the help of computer simulation software. == Rate constant calculations == Rate constants can be calculated for elementary reactions by molecular dynamics simulations. One possible approach is to calculate the mean residence time of the molecule in the reactant state.
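A toy sketch of this residence-time bookkeeping follows; the listed times are arbitrary stand-ins for what molecular dynamics trajectories would actually provide, so it illustrates only the averaging step, not a real simulation.

```python
# Schematic estimate of a first-order rate constant from residence times.
# In a real calculation these times would come from molecular dynamics
# trajectories; here they are arbitrary illustrative values, in seconds.
residence_times = [2.1e-9, 3.4e-9, 1.7e-9, 4.0e-9, 2.8e-9]

mean_tau = sum(residence_times) / len(residence_times)
k_estimate = 1.0 / mean_tau   # for a first-order process, k ~ 1 / <residence time>

print(f"mean residence time: {mean_tau:.2e} s, estimated k: {k_estimate:.2e} s^-1")
```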
Although this is feasible for small systems with short residence times, this approach is not widely applicable, as reactions are often rare events on the molecular scale. One simple approach to overcome this problem is Divided Saddle Theory. Other methods, such as the Bennett-Chandler procedure and milestoning, have also been developed for rate constant calculations. == Divided saddle theory == The theory is based on the assumption that the reaction can be described by a reaction coordinate, and that the Boltzmann distribution can be applied at least in the reactant state. A new, especially reactive segment of the reactant, called the saddle domain, is introduced, and the rate constant is factored: {\displaystyle k=k_{\mathrm {SD} }\cdot \alpha _{\mathrm {RS} }^{\mathrm {SD} }} where α_RS^SD is the conversion factor between the reactant state and the saddle domain, while kSD is the rate constant from the saddle domain. The former can be calculated simply from the free energy surface; the latter is easily accessible from short molecular dynamics simulations. == See also == Reaction rate Equilibrium constant Molecularity == References ==
Wikipedia/Reaction_rate_constant
Thin-layer chromatography (TLC) is a chromatography technique that separates components in non-volatile mixtures. It is performed on a TLC plate made up of a non-reactive solid coated with a thin layer of adsorbent material. This is called the stationary phase. The sample is deposited on the plate, which is eluted with a solvent or solvent mixture known as the mobile phase (or eluent). This solvent then moves up the plate via capillary action. As with all chromatography, some compounds are more attracted to the mobile phase, while others are more attracted to the stationary phase. Therefore, different compounds move up the TLC plate at different speeds and become separated. To visualize colourless compounds, the plate is viewed under UV light or is stained. Testing different stationary and mobile phases is often necessary to obtain well-defined and separated spots. TLC is quick, simple, and gives high sensitivity for a relatively low cost. It can monitor reaction progress, identify compounds in a mixture, determine purity, or purify small amounts of compound. == Procedure == The process for TLC is similar to paper chromatography but provides faster runs, better separations, and the choice between different stationary phases. Plates can be labelled before or after the chromatography process with a pencil or other implement that will not interfere with the process. There are four main stages to running a thin-layer chromatography plate: Plate preparation: Using a capillary tube, a small amount of a concentrated solution of the sample is deposited near the bottom edge of a TLC plate. The solvent is allowed to completely evaporate before the next step. A vacuum chamber may be necessary for non-volatile solvents. To make sure there is sufficient compound to obtain a visible result, the spotting procedure can be repeated. Depending on the application, multiple different samples may be placed in a row the same distance from the bottom edge; each sample will move up the plate in its own "lane." Development chamber preparation: The development solvent or solvent mixture is placed into a transparent container (separation/development chamber) to a depth of less than 1 centimetre. A strip of filter paper (aka "wick") is also placed along the container wall. This filter paper should touch the solvent and almost reach the top of the container. The container is covered with a lid and the solvent vapors are allowed to saturate the atmosphere of the container. Failure to do so results in poor separation and non-reproducible results. Development: The TLC plate is placed in the container such that the sample spot(s) are not submerged into the mobile phase. The container is covered to prevent solvent evaporation. The solvent migrates up the plate by capillary action, meets the sample mixture, and carries it up the plate (elutes the sample). The plate is removed from the container before the solvent reaches the top of the plate; otherwise, the results will be misleading. The solvent front, the highest mark the solvent has travelled along the plate, is marked. Visualization: The solvent evaporates from the plate. Visualization methods include UV light, staining, and many more. == Separation process and principle == The separation of compounds is due to the differences in their attraction to the stationary phase and because of differences in solubility in the solvent. As a result, the compounds and the mobile phase compete for binding sites on the stationary phase. 
Different compounds in the sample mixture travel at different rates due to the differences in their partition coefficients. Different solvents, or different solvent mixtures, gives different separation. The retardation factor (Rf), or retention factor, quantifies the results. It is the distance traveled by a given substance divided by the distance traveled by the mobile phase. In normal-phase TLC, the stationary phase is polar. Silica gel is very common in normal-phase TLC. More polar compounds in a sample mixture interact more strongly with the polar stationary phase. As a result, more-polar compounds move less (resulting in smaller Rf) while less-polar compounds move higher up the plate (higher Rf). A more-polar mobile phase also binds more strongly to the plate, competing more with the compound for binding sites; a more-polar mobile phase also dissolves polar compounds more. As such, all compounds on the TLC plate move higher up the plate in polar solvent mixtures. "Strong" solvents move compounds higher up the plate, whereas "weak" solvents move them less. If the stationary phase is non-polar, like C18-functionalized silica plates, it is called reverse-phase TLC. In this case, non-polar compounds move less and polar compounds move more. The solvent mixture will also be much more polar than in normal-phase TLC. === Solvent choice === An eluotropic series, which orders solvents by how much they move compounds, can help in selecting a mobile phase. Solvents are also divided into solvent selectivity groups. Using solvents with different elution strengths or different selectivity groups can often give very different results. While single-solvent mobile phases can sometimes give good separation, some cases may require solvent mixtures. In normal-phase TLC, the most common solvent mixtures include ethyl acetate/hexanes (EtOAc/Hex) for less-polar compounds and methanol/dichloromethane (MeOH/DCM) for more polar compounds. Different solvent mixtures and solvent ratios can help give better separation. In reverse-phase TLC, solvent mixtures are typically water with a less-polar solvent: Typical choices are water with tetrahydrofuran (THF), acetonitrile (ACN), or methanol. == Analysis == As the chemicals being separated may be colourless, several methods exist to visualise the spots: Placing the plate under blacklight (366 nm light) makes fluorescent compounds glow TLC plates containing a small amount of fluorescent compound (usually manganese-activated zinc silicate) in the adsorbent layer allow for visualisation of some compounds under UV-C light (254 nm). The adsorbent layer will fluoresce light-green, while spots containing compounds that absorb UV-C light will not. Placing the plate in a container filled with iodine vapours temporarily stains the spots. They typically become a yellow or brown colour. The TLC plate can either be dipped in or sprayed with a stain and sometimes heated depending on the stain used. Many stains exist for a large range of chemical moieties but some examples include: Potassium permanganate (no heating, for oxidisable groups) Ninhydrin (heating, amines and amino-acids) Acidic vanillin (heating, general reagent) Phosphomolybdic acid (no heating, general reagent) In the case of lipids, the chromatogram may be transferred to a polyvinylidene fluoride membrane and then subjected to further analysis, for example, mass spectrometry. This technique is known as far-eastern blot. 
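The retardation factor defined above is straightforward to compute; the short Python sketch below uses hypothetical spot and solvent-front distances purely for illustration.

```python
def retardation_factor(spot_distance_cm, solvent_front_cm):
    """Rf = distance travelled by the compound / distance travelled by the mobile phase."""
    return spot_distance_cm / solvent_front_cm

# Hypothetical measurements from a developed plate (distances from the origin line)
solvent_front = 6.0                              # cm
spots = {"compound 1": 1.5, "compound 2": 4.2}   # cm

for name, distance in spots.items():
    print(f"{name}: Rf = {retardation_factor(distance, solvent_front):.2f}")
# compound 1: Rf = 0.25 (more strongly retained); compound 2: Rf = 0.70
```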
== Plate production == TLC plates are usually commercially available, with standard particle size ranges to improve reproducibility. They are prepared by mixing the adsorbent, such as silica gel, with a small amount of inert binder like calcium sulfate (gypsum) and water. This mixture is spread as a thick slurry on an unreactive carrier sheet, usually glass, thick aluminum foil, or plastic. The resultant plate is dried and activated by heating in an oven for thirty minutes at 110 °C. The thickness of the absorbent layer is typically around 0.1–0.25 mm for analytical purposes and around 0.5–2.0 mm for preparative TLC. Other adsorbent coatings include aluminium oxide (alumina), or cellulose. == Applications == === Reaction monitoring and characterization === TLC is a useful tool for reaction monitoring. For this, the plate normally contains a spot of starting material, a spot from the reaction mixture, and a co-spot (or cross-spot) containing both. The analysis will show if the starting material disappeared and if any new products appeared. This provides a quick and easy way to estimate how far a reaction has proceeded. In one study, TLC has been applied in the screening of organic reactions. The researchers react an alcohol and a catalyst directly in the co-spot of a TLC plate before developing it. This provides quick and easy small-scale testing of different reagents. Compound characterization with TLC is also possible and is similar to reaction monitoring. However, rather than spotting with starting material and reaction mixture, it is with an unknown and a known compound. They may be the same compound if both spots have the same Rf and look the same under the chosen visualization method. However, co-elution complicates both reaction monitoring and characterization. This is because different compounds will move to the same spot on the plate. In such cases, different solvent mixtures may provide better separation. === Purity and purification === TLC helps show the purity of a sample. A pure sample should only contain one spot by TLC. TLC is also useful for small-scale purification. Because the separated compounds will be on different areas of the plate, a scientist can scrape off the stationary phase particles containing the desired compound and dissolve them into an appropriate solvent. Once all the compound dissolves in the solvent, they filter out the silica particles, then evaporate the solvent to isolate the product. Big preparative TLC plates with thick silica gel coatings can separate more than 100 mg of material. For larger-scale purification and isolation, TLC is useful to quickly test solvent mixtures before running flash column chromatography on a large batch of impure material. A compound elutes from a column when the amount of solvent collected is equal to 1/Rf. The eluent from flash column chromatography gets collected across several containers (for example, test tubes) called fractions. TLC helps show which fractions contain impurities and which contain pure compound. Furthermore, two-dimensional TLC can help check if a compound is stable on a particular stationary phase. This test requires two runs on a square-shaped TLC plate. The plate is rotated by 90º before the second run. If the target compound appears on the diagonal of the square, it is stable on the chosen stationary phase. Otherwise, it is decomposing on the plate. If this is the case, an alternative stationary phase may prevent this decomposition. 
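Reading the 1/Rf rule above as an estimate of the number of column volumes of solvent needed to elute a compound (a common interpretation, taken here as an assumption), a small sketch shows why low-Rf solvent systems make for long column runs; the column volume and Rf values are hypothetical.

```python
def elution_volume_estimate(rf, column_volume_mL):
    """Rough solvent volume needed to elute a compound, taking 1/Rf as the
    number of column volumes (an approximate rule of thumb)."""
    return column_volume_mL / rf

column_volume = 50.0   # mL, a hypothetical silica column
for rf in (0.2, 0.35, 0.6):
    print(f"Rf {rf:.2f}: roughly {elution_volume_estimate(rf, column_volume):.0f} mL to elute")
# Lower-Rf (more strongly retained) compounds need proportionally more solvent,
# which is why TLC is used to tune the solvent mixture before running a column.
```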
TLC is also an analytical method for the direct separation of enantiomers and the control of enantiomeric purity, e.g. active pharmaceutical ingredients (APIs) that are chiral. == See also == Column chromatography HPTLC Radial chromatography Chiral thin-layer chromatography == References == == Bibliography ==
Wikipedia/Thin_layer_chromatography
The prohibition of drugs through sumptuary legislation or religious law is a common means of attempting to prevent the recreational use of certain intoxicating substances. An area has a prohibition of drugs when its government uses the force of law to punish the use or possession of drugs which have been classified as controlled. A government may simultaneously have systems in place to regulate both controlled and non controlled drugs. Regulation controls the manufacture, distribution, marketing, sale, and use of certain drugs, for instance through a prescription system. For example, in some states, the possession or sale of amphetamines is a crime unless a patient has a physician's prescription for the drug; having a prescription authorizes a pharmacy to sell and a patient to use a drug that would otherwise be prohibited. Although prohibition mostly concerns psychoactive drugs (which affect mental processes such as perception, cognition, and mood), prohibition can also apply to non-psychoactive drugs, such as anabolic steroids. Many governments do not criminalize the possession of a limited quantity of certain drugs for personal use, while still prohibiting their sale or manufacture, or possession in large quantities. Some laws (or judicial practice) set a specific volume of a particular drug, above which is considered ipso jure to be evidence of trafficking or sale of the drug. Some Islamic countries prohibit the use of alcohol (see list of countries with alcohol prohibition). Many governments levy a tax on alcohol and tobacco products, and restrict alcohol and tobacco from being sold or gifted to a minor. Other common restrictions include bans on outdoor drinking and indoor smoking. In the early 20th century, many countries had alcohol prohibition. These include the United States (1920–1933), Finland (1919–1932), Norway (1916–1927), Canada (1901–1948), Iceland (1915–1922) and the Russian Empire/USSR (1914–1925). In fact, the first international treaty to control a psychoactive substance adopted in 1890 actually concerned alcoholic beverages (Brussels Conference). The first treaty on opium only arrived two decades later, in 1912. == Definitions == Drugs, in the context of prohibition, are any of a number of psychoactive substances whose use a government or religious body seeks to control. What constitutes a drug varies by century and belief system. What is a psychoactive substance is relatively well known to modern science. Examples include a range from caffeine found in coffee, tea, and chocolate, nicotine in tobacco products; botanical extracts morphine and heroin, and synthetic compounds MDMA and fentanyl. Almost without exception, these substances also have a medical use, in which case they are called pharmaceutical drugs or just pharmaceuticals. The use of medicine to save or extend life or to alleviate suffering is uncontroversial in most cultures. Prohibition applies to certain conditions of possession or use. Recreational use refers to the use of substances primarily for their psychoactive effect outside of a clinical situation or doctor's care. In the twenty-first century, caffeine has pharmaceutical uses. Caffeine is used to treat bronchopulmonary dysplasia. In most cultures, caffeine in the form of coffee or tea is unregulated. Over 2.25 billion cups of coffee are consumed in the world every day. Some religions, including the Church of Jesus Christ of Latter-day Saints, prohibit coffee. They believe that it is both physically and spiritually unhealthy to consume coffee. 
A government's interest to control a drug may be based on its negative effects on its users, or it may simply have a revenue interest. The British parliament prohibited the possession of untaxed tea with the imposition of the Tea Act of 1773. In this case, as in many others, it is not a substance that is prohibited, but the conditions under which it is possessed or consumed. Those conditions include matters of intent, which makes the enforcement of laws difficult. In Colorado possession of "blenders, bowls, containers, spoons, and mixing devices" is illegal if there was intent to use them with drugs. Many drugs, beyond their pharmaceutical and recreational uses, have industrial uses. Nitrous oxide, or laughing gas is a dental anesthetic, also used to prepare whipped cream, fuel rocket engines, and enhance the performance of race cars. Ethanol, or drinking alcohol, is also used as a fuel, industrial solvent and disinfectant. == History == The cultivation, use, and trade of psychoactive and other drugs has occurred since ancient times. Concurrently, authorities have often restricted drug possession and trade for a variety of political and religious reasons. In the 20th century, the United States led a major renewed surge in drug prohibition called the "War on Drugs". === Early drug laws === The prohibition on alcohol under Islamic Sharia law, which is usually attributed to passages in the Qur'an, dates back to the early seventh century. Although Islamic law is often interpreted as prohibiting all intoxicants (not only alcohol), the ancient practice of hashish smoking has continued throughout the history of Islam, against varying degrees of resistance. A major campaign against hashish-eating Sufis were conducted in Egypt in the 11th and 12th centuries resulting among other things in the burning of fields of cannabis. Though the prohibition of illegal drugs was established under Sharia law, particularly against the use of hashish as a recreational drug, classical jurists of medieval Islamic jurisprudence accepted the use of hashish for medicinal and therapeutic purposes, and agreed that its "medical use, even if it leads to mental derangement, should remain exempt [from punishment]". In the 14th century, the Islamic scholar Az-Zarkashi spoke of "the permissibility of its use for medical purposes if it is established that it is beneficial". In the Ottoman Empire, Murad IV attempted to prohibit coffee drinking to Muslims as haraam, arguing that it was an intoxicant, but this ruling was overturned soon after he died in 1640. The introduction of coffee in Europe from Muslim Turkey prompted calls for it to be banned as the devil's work, although Pope Clement VIII sanctioned its use in 1600, declaring that it was "so delicious that it would be a pity to let the infidels have exclusive use of it". Bach's Coffee Cantata, from the 1730s, presents a vigorous debate between a girl and her father over her desire to consume coffee. The early association between coffeehouses and seditious political activities in England led to the banning of such establishments in the mid-17th century. A number of Asian rulers had similarly enacted early prohibitions, many of which were later forcefully overturned by Western colonial powers during the 18th and 19th centuries. In 1360, for example, King Ramathibodi I, of Ayutthaya Kingdom (now Thailand), prohibited opium consumption and trade. The prohibition lasted nearly 500 years until 1851 when King Rama IV allowed Chinese migrants to consume opium. 
The Konbaung Dynasty prohibited all intoxicants and stimulants during the reign of King Bodawpaya (1781–1819). After Burma became a British colony, the restrictions on opium were abolished and the colonial government established monopolies selling Indian-produced opium. In late Qing China, opium imported by foreign traders, such as those employed by Jardine Matheson and the East India Company, was consumed by all social classes in Southern China. Between 1821 and 1837, imports of the drug increased fivefold. The wealth drain and widespread social problems that resulted from this consumption prompted the Chinese government to attempt to end the trade. This effort was initially successful, with Lin Zexu ordering the destruction of opium at Humen in June 1839. However, the opium traders lobbied the British government to declare war on China, resulting in the First Opium War. The Qing government was defeated and the war ended with the Treaty of Nanking, which legalized opium trading in Chinese law === First modern drug regulations === The first modern law in Europe for the regulating of drugs was the Pharmacy Act 1868 in the United Kingdom. There had been previous moves to establish the medical and pharmaceutical professions as separate, self-regulating bodies, but the General Medical Council, established in 1863, unsuccessfully attempted to assert control over drug distribution. The act set controls on the distribution of poisons and drugs. Poisons could only be sold if the purchaser was known to the seller or to an intermediary known to both, and drugs, including opium and all preparations of opium or of poppies, had to be sold in containers with the seller's name and address. Despite the reservation of opium to professional control, general sales did continue to a limited extent, with mixtures with less than 1 percent opium being unregulated. After the legislation passed, the death rate caused by opium immediately fell from 6.4 per million population in 1868 to 4.5 in 1869. Deaths among children under five dropped from 20.5 per million population between 1863 and 1867 to 12.7 per million in 1871 and further declined to between 6 and 7 per million in the 1880s. In the United States, the first drug law was passed in San Francisco in 1875, banning the smoking of opium in opium dens. The reason cited was "many women and young girls, as well as young men of a respectable family, were being induced to visit the Chinese opium-smoking dens, where they were ruined morally and otherwise." This was followed by other laws throughout the country, and federal laws that barred Chinese people from trafficking in opium. Though the laws affected the use and distribution of opium by Chinese immigrants, no action was taken against the producers of such products as laudanum, a tincture of opium and alcohol, commonly taken as a panacea by white Americans. The distinction between its use by white Americans and Chinese immigrants was thus a form of racial discrimination as it was based on the form in which it was ingested: Chinese immigrants tended to smoke it, while it was often included in various kinds of generally liquid medicines often (but not exclusively) used by Americans of European descent. The laws targeted opium smoking, but not other methods of ingestion. Britain passed the All-India Opium Act of 1878, which limited recreational opium sales to registered Indian opium-eaters and Chinese opium-smokers and prohibiting its sale to emigrant workers from British Burma. 
Following the passage of a regional law in 1895, Australia's Aboriginals Protection and Restriction of the Sale of Opium Act 1897 addressed opium addiction among Aborigines, though it soon became a general vehicle for depriving them of basic rights by administrative regulation. Opium sale was prohibited to the general population in 1905, and smoking and possession were prohibited in 1908. Despite these laws, the late 19th century saw an increase in opiate consumption. This was due to the prescribing and dispensing of legal opiates by physicians and pharmacists to relieve menstruation pain. It is estimated that between 150,000 and 200,000 opiate addicts lived in the United States at the time, and a majority of these addicts were women. === Changing attitudes and the drug prohibition campaign === Foreign traders, including those employed by Jardine Matheson and the East India Company, smuggled opium into China in order to balance high trade deficits. Chinese attempts to outlaw the trade led to the First Opium War and the subsequent legalization of the trade at the Treaty of Nanking. Attitudes towards the opium trade were initially ambivalent, but in 1874 the Society for the Suppression of the Opium Trade was formed in England by Quakers led by the Rev. Frederick Storrs-Turner. By the 1890s, increasingly strident campaigns were waged by Protestant missionaries in China for its abolition. The first such society was established at the 1890 Shanghai Missionary Conference, where British and American representatives, including John Glasgow Kerr, Arthur E. Moule, Arthur Gostick Shorrock and Griffith John, agreed to establish the Permanent Committee for the Promotion of Anti-Opium Societies. Due to increasing pressure in the British parliament, the Liberal government under William Ewart Gladstone approved the appointment of a Royal Commission on Opium to India in 1893. The commission was tasked with ascertaining the impact of Indian opium exports to the Far East, and to advise whether the trade should be banned and opium consumption itself banned in India. After an extended inquiry, the Royal Commission rejected the claims made by the anti-opium campaigners regarding the supposed societal harm caused by the trade and the issue was finalized for another 15 years. The missionary organizations were outraged over the Royal Commission on Opium's conclusions and set up the Anti-Opium League in China; the league gathered data from every Western-trained medical doctor in China and published Opinions of Over 100 Physicians on the Use of Opium in China. This was the first anti-drug campaign to be based on scientific principles, and it had a tremendous impact on the state of educated opinion in the West. In England, the home director of the China Inland Mission, Benjamin Broomhall, was an active opponent of the opium trade, writing two books to promote the banning of opium smoking: The Truth about Opium Smoking and The Chinese Opium Smoker. In 1888, Broomhall formed and became secretary of the Christian Union for the Severance of the British Empire with the Opium Traffic and editor of its periodical, National Righteousness. He lobbied the British parliament to ban the opium trade. Broomhall and James Laidlaw Maxwell appealed to the London Missionary Conference of 1888 and the Edinburgh Missionary Conference of 1910 to condemn the continuation of the trade. 
As Broomhall lay dying, an article from The Times was read to him with the welcome news that an international agreement had been signed ensuring the end of the opium trade within two years. In 1906, a motion to 'declare the opium trade "morally indefensible" and remove Government support for it', initially unsuccessfully proposed by Arthur Pease in 1891, was put before the House of Commons. This time the motion passed. The Qing government banned opium soon afterward. These changing attitudes led to the founding of the International Opium Commission in 1909. An International Opium Convention was signed by 13 nations at The Hague on January 23, 1912, during the First International Opium Conference. This was the first international drug control treaty and it was registered in the League of Nations Treaty Series on January 23, 1922. The Convention provided that "The contracting Powers shall use their best endeavors to control or to cause to be controlled, all person manufacturing, importing, selling, distributing, and exporting morphine, cocaine, and their respective salts, as well as the buildings in which these persons carry such an industry or trade." The treaty became international law in 1919 when it was incorporated into the Treaty of Versailles. The role of the commission was passed to the League of Nations, and all signatory nations agreed to prohibit the import, sale, distribution, export, and use of all narcotic drugs, except for medical and scientific purposes. === Prohibition === In the UK the Defence of the Realm Act 1914, passed at the onset of the First World War, gave the government wide-ranging powers to requisition the property and to criminalize specific activities. A moral panic was whipped up by the press in 1916 over the alleged sale of drugs to the troops of the British Indian Army. With the temporary powers of DORA, the Army Council quickly banned the sale of all psychoactive drugs to troops, unless required for medical reasons. However, shifts in the public attitude towards drugs—they were beginning to be associated with prostitution, vice and immorality—led the government to pass further unprecedented laws, banning and criminalising the possession and dispensation of all narcotics, including opium and cocaine. After the war, this legislation was maintained and strengthened with the passing of the Dangerous Drugs Act 1920 (10 & 11 Geo. 5. c. 46). Home Office control was extended to include raw opium, morphine, cocaine, ecogonine and heroin. Hardening of Canadian attitudes toward Chinese-Canadian opium users and fear of a spread of the drug into the white population led to the effective criminalization of opium for nonmedical use in Canada between 1908 and the mid-1920s. The Mao Zedong government nearly eradicated both consumption and production of opium during the 1950s using social control and isolation. Ten million addicts were forced into compulsory treatment, dealers were executed, and opium-producing regions were planted with new crops. Remaining opium production shifted south of the Chinese border into the Golden Triangle region. The remnant opium trade primarily served Southeast Asia, but spread to American soldiers during the Vietnam War, with 20 percent of soldiers regarding themselves as addicted during the peak of the epidemic in 1971. In 2003, China was estimated to have four million regular drug users and one million registered drug addicts. In the US, the Harrison Act was passed in 1914, and required sellers of opiates and cocaine to get a license. 
While originally intended to regulate the trade, it soon became a prohibitive law, eventually becoming legal precedent that any prescription for a narcotic given by a physician or pharmacist – even in the course of medical treatment for addiction – constituted conspiracy to violate the Harrison Act. In 1919, the Supreme Court ruled in Doremus that the Harrison Act was constitutional and in Webb that physicians could not prescribe narcotics solely for maintenance. In Jin Fuey Moy v. United States, the court upheld that it was a violation of the Harrison Act even if a physician provided prescription of a narcotic for an addict, and thus subject to criminal prosecution. This is also true of the later Marijuana Tax Act in 1937. Soon, however, licensing bodies did not issue licenses, effectively banning the drugs. The American judicial system did not initially accept drug prohibition. Prosecutors argued that possessing drugs was a tax violation, as no legal licenses to sell drugs were in existence; hence, a person possessing drugs must have purchased them from an unlicensed source. After some wrangling, this was accepted as federal jurisdiction under the interstate commerce clause of the U.S. Constitution. ==== Alcohol prohibition ==== The prohibition of alcohol commenced in Finland in 1919 and in the United States in 1920. Because alcohol was the most popular recreational drug in these countries, reactions to its prohibition were far more negative than to the prohibition of other drugs, which were commonly associated with ethnic minorities, prostitution, and vice. Public pressure led to the repeal of alcohol prohibition in Finland in 1932, and in the United States in 1933. Residents of many provinces of Canada also experienced alcohol prohibition for similar periods in the first half of the 20th century. In Sweden, a referendum in 1922 decided against an alcohol prohibition law (with 51% of the votes against and 49% for prohibition), but starting in 1914 (nationwide from 1917) and until 1955 Sweden employed an alcohol rationing system with personal liquor ration books ("motbok"). === War on Drugs === In response to rising drug use among young people and the counterculture movement, government efforts to enforce prohibition were strengthened in many countries from the 1960s onward. Support at an international level for the prohibition of psychoactive drug use became a consistent feature of United States policy during both Republican and Democratic administrations, to such an extent that US support for foreign governments has often been contingent on their adherence to US drug policy. Major milestones in this campaign include the introduction of the Single Convention on Narcotic Drugs in 1961, the Convention on Psychotropic Substances in 1971 and the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances in 1988. A few developing countries where consumption of the prohibited substances has enjoyed longstanding cultural support, long resisted such outside pressure to pass legislation adhering to these conventions. Nepal only did so in 1976. In 1972, United States President Richard Nixon announced the commencement of the so-called "War on Drugs". Later, President Reagan added the position of drug czar to the President's Executive Office. 
In 1973, New York introduced mandatory minimum sentences of 15 years to life imprisonment for possession of more than 113 grams (4 oz) of a so-called hard drug, called the Rockefeller drug laws after New York Governor and later Vice President Nelson Rockefeller. Similar laws were introduced across the United States. California's broader 'three strikes and you're out' policy adopted in 1994 was the first mandatory sentencing policy to gain widespread publicity and was subsequently adopted in most United States jurisdictions. This policy mandates life imprisonment for a third criminal conviction of any felony offense. A similar 'three strikes' policy was introduced to the United Kingdom by the Conservative government in 1997. This legislation enacted a mandatory minimum sentence of seven years for those convicted for a third time of a drug trafficking offense involving a class A drug. === Calls for legalization, relegalization or decriminalization === The terms relegalization, legalization, legal regulations, or decriminalization are used with very different meanings by different authors, something that can be confusing when the claims are not specified. Here are some variants: Sales of one or more drugs (e.g., marijuana) for personal use become legal, at least if sold in a certain way. Sales of an extracts with a specific substance become legal sold in a certain way, for example on prescription. Use or possession of small amounts for personal use do not lead to incarceration if it is the only crime, but it is still illegal; the court or the prosecutor can impose a fine. (In that sense, Sweden both legalized and supported drug prohibition simultaneously.) Use or possession of small amounts for personal use do not lead to incarceration. The case is not treated in an ordinary court, but by a commission that may recommend treatment or sanctions including fines. (In that sense, Portugal both legalized and supported drug prohibitions). There are efforts around the world to promote the relegalization and decriminalization of drugs. These policies are often supported by proponents of liberalism and libertarianism on the grounds of individual freedom, as well as by leftists who believe prohibition to be a method of suppression of the working class by the ruling class. Prohibition of drugs is supported by proponents of conservatism as well various NGOs. A number of NGOs are aligned in support of drug prohibition as members of the World Federation Against Drugs. In the WFAD constitution, the "Declaration of the World Forum Against Drugs" (2008) advocates for "no other goal than a drug-free world", and states that a balanced policy of drug abuse prevention, education, treatment, law enforcement, research, and supply reduction provides the most effective platform to reduce drug abuse and its associated harms and calls on governments to consider demand reduction as one of their first priorities. It supports the UN drug conventions, the inclusion of cannabis as one of the "hard drugs", and the use of criminal sanctions "when appropriate" to deter drug use. It opposes legalization in any form, and harm reduction in general. According to some critics, drug prohibition is responsible for enriching "organised criminal networks" while the hypothesis that the prohibition of drugs generates violence is consistent with research done over long time-series and cross-country facts. 
In the United Kingdom, where the principal piece of drug prohibition legislation is the Misuse of Drugs Act 1971, criticism includes: Drug classification: making a hash of it?, Fifth Report of Session 2005–06, House of Commons Science and Technology Committee, which said that the present system of drug classification is based on historical assumptions, not scientific assessment Development of a rational scale to assess the harm of drugs of potential misuse, David Nutt, Leslie A. King, William Saulsbury, Colin Blakemore, The Lancet, 24 March 2007, said the act is "not fit for purpose" and "the exclusion of alcohol and tobacco from the Misuse of Drugs Act is, from a scientific perspective, arbitrary" The Drug Equality Alliance (DEA) argue that the Government is administering the Act arbitrarily, contrary to its purpose, contrary to the original wishes of Parliament and therefore illegally. They are currently assisting and supporting several legal challenges to this alleged maladministration. In February 2008 the then-president of Honduras, Manuel Zelaya, called on the world to legalize drugs, in order, he said, to prevent the majority of violent murders occurring in Honduras. Honduras is used by cocaine smugglers as a transiting point between Colombia and the US. Honduras, with a population of 7 million, suffers an average of 8–10 murders a day, with an estimated 70% being a result of this international drug trade. The same problem is occurring in Guatemala, El Salvador, Costa Rica and Mexico, according to Zelaya. In January 2012 Colombian President Juan Manuel Santos made a plea to the United States and Europe to start a global debate about legalizing drugs. This call was echoed by the Guatemalan President Otto Pérez Molina, who announced his desire to legalize drugs, saying "What I have done is put the issue back on the table." In a report dealing with HIV in June 2014, the World Health Organization (WHO) of the UN called for the decriminalization of drugs particularly including injected ones. This conclusion put WHO at odds with broader long-standing UN policy favoring criminalization. Eight states of the United States (Alaska, California, Colorado, Maine, Massachusetts, Nevada, Oregon, and Washington), as well as the District of Columbia, have legalized the sale of marijuana for personal recreational use as of 2017, although recreational use remains illegal under U.S. federal law. The conflict between state and federal law is, as of 2018, unresolved. Since Uruguay in 2014 and Canada in 2018 legalized cannabis, the debate has known a new turn internationally. On March 14th, 2025, the United Nations Commission on Narcotic Drugs decided to create a panel of independent experts to rethink the global drug control regime. == Drug prohibition laws == The following individual drugs, listed under their respective family groups (e.g., barbiturates, benzodiazepines, opiates), are the most frequently sought after by drug users and as such are prohibited or otherwise heavily regulated for use in many countries: Among the barbiturates, pentobarbital (Nembutal), secobarbital (Seconal), and amobarbital (Amytal) Among the benzodiazepines, temazepam (Restoril; Normison; Euhypnos), flunitrazepam (Rohypnol; Hypnor; Flunipam), and alprazolam (Xanax) Cannabis products, e.g., marijuana, hashish, and hashish oil Among the dissociatives, phencyclidine (PCP), and ketamine are the most sought after. 
hallucinogens such as LSD, mescaline, peyote, and psilocybin Empathogen-entactogen drugs like MDMA ("ecstasy") Among the narcotics, it is opiates such as morphine and codeine, and opioids such as diacetylmorphine (Heroin), hydrocodone (Vicodin; Hycodan), oxycodone (Percocet; Oxycontin), hydromorphone (Dilaudid), and oxymorphone (Opana). Sedatives such as GHB and methaqualone (Quaalude) Stimulants such as cocaine, amphetamine (Adderall), dextroamphetamine (Dexedrine), methamphetamine (Desoxyn), methcathinone, and methylphenidate (Ritalin) The regulation of the above drugs varies in many countries. Alcohol possession and consumption by adults is today widely banned only in Islamic countries and certain states of India. Although alcohol prohibition was eventually repealed in the countries that enacted it, there are, for example, still parts of the United States that do not allow alcohol sales, though alcohol possession may be legal (see dry counties). New Zealand has banned the importation of chewing tobacco as part of the Smoke-free Environments Act 1990. In some parts of the world, provisions are made for the use of traditional sacraments like ayahuasca, iboga, and peyote. In Gabon, iboga (tabernanthe iboga) has been declared a national treasure and is used in rites of the Bwiti religion. The active ingredient, ibogaine, is proposed as a treatment of opioid withdrawal and various substance use disorders. In countries where alcohol and tobacco are legal, certain measures are frequently undertaken to discourage use of these drugs. For example, packages of alcohol and tobacco sometimes communicate warnings directed towards the consumer, communicating the potential risks of partaking in the use of the substance. These drugs also frequently have special sin taxes associated with the purchase thereof, in order to recoup the losses associated with public funding for the health problems the use causes in long-term users. Restrictions on advertising also exist in many countries, and often a state holds a monopoly on manufacture, distribution, marketing, and/or the sale of these drugs. === List of principal drug prohibition laws by jurisdiction (non-exhaustive) === Australia: Standard for the Uniform Scheduling of Medicines and Poisons Bangladesh: Narcotics Substances Control Act, 2018 Belize: Misuse of Drugs Act (Belize) Canada: Controlled Drugs and Substances Act Estonia: Narcotic Drugs and Psychotropic Substances Act (Estonia) Germany: Narcotic Drugs Act India: Narcotic Drugs and Psychotropic Substances Act (India) Netherlands: Opium Law New Zealand: Misuse of Drugs Act 1975 Pakistan: Control of Narcotic Substances Act 1997 Philippines: Comprehensive Dangerous Drugs Act of 2002 Poland: Drug Abuse Prevention Act 2005 Portugal: Decree-Law 15/93 Ireland: Misuse of Drugs Act (Ireland) South Africa: Drugs and Drug Trafficking Act 1992 Singapore: Misuse of Drugs Act (Singapore) Sweden: Lag om kontroll av narkotika (SFS 1992:860) Thailand: Psychotropic Substances Act (Thailand) and Narcotics Act United Kingdom: Misuse of Drugs Act 1971 and Drugs Act 2005 United States: Controlled Substances Act International: Single Convention on Narcotic Drugs === Legal dilemmas === The sentencing statutes in the United States Code that cover controlled substances are complicated. 
For example, a first-time offender convicted in a single proceeding for selling marijuana three times, and found to have carried a gun on him all three times (even if it were not used), is subject to a minimum sentence of 55 years in federal prison. In Hallucinations: Behavior, Experience, and Theory (1975), senior US government researchers Louis Jolyon West and Ronald K. Siegel explain how drug prohibition can be used for selective social control: The role of drugs in the exercise of political control is also coming under increasing discussion. Control can be through prohibition or supply. The total or even partial prohibition of drugs gives the government considerable leverage for other types of control. An example would be the selective application of drug laws ... against selected components of the population such as members of certain minority groups or political organizations. Linguist Noam Chomsky argues that drug laws are currently, and have historically been, used by the state to oppress sections of society it opposes: Very commonly substances are criminalized because they're associated with what's called the dangerous classes, poor people, or working people. So for example in England in the 19th century, there was a period when gin was criminalized and whiskey wasn't, because gin is what poor people drink. === Legal highs and prohibition === In 2013 the European Monitoring Centre for Drugs and Drug Addiction reported that there were 280 new legal drugs, known as "legal highs", available in Europe. One of the best known, mephedrone, was banned in the United Kingdom in 2010. On November 24, 2010, the U.S. Drug Enforcement Administration announced it would use emergency powers to ban many synthetic cannabinoids within a month. An estimated 73 new psychoactive synthetic drugs appeared on the UK market in 2012. The response of the Home Office has been to create a temporary class drug order which bans the manufacture, import, and supply (but not the possession) of named substances. === Corruption === In certain countries, there is concern that campaigns against drugs and organized crime are a cover for corrupt officials tied to drug trafficking themselves. In the United States, Federal Bureau of Narcotics chief Harry Anslinger's opponents accused him of taking bribes from the Mafia to enact prohibition and create a black market for alcohol. More recently in the Philippines, one death squad hitman told author Niko Vorobyov that he was being paid by military officers to eliminate those drug dealers who failed to pay a 'tax'. Under President Rodrigo Duterte, the Philippines has waged a bloody war against drugs that may have resulted in up to 29,000 extrajudicial killings. When it comes to social control through cannabis regulation, there are several aspects to consider: not only the legislative leaders and the way they vote on cannabis, but also the federal regulations and taxation that contribute to social control. For instance, according to a report by U.S. Customs and Border Protection, American industry continued to use related products such as hemp seeds and oils even though the principal uses of marijuana were banned, which led to the previously discussed Marijuana Tax Act. The act's provisions required importers to register, pay an annual tax of $24, and receive an official stamp. Stamps for the products were then affixed to each original order form and recorded by the state revenue collector.
Then, a customs collector was to maintain custody of imported marijuana at entry ports until the required documents were received, reviewed and approved. Shipments were subject to searches, seizures and forfeitures if any provisions of the law were not met. Violations would result in fines of no more than $2000 or potential imprisonment for up to 5 years. This often created opportunities for corruption and for the theft of imports that would later be smuggled, sometimes by state officials and tight-knit elites. == Penalties == === United States === Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of "soft drugs", such as cannabis, illegal, though some local governments have laws contradicting federal laws. In the U.S., the War on Drugs is thought to be contributing to a prison overcrowding problem. In 1996, 59.6% of prisoners were drug-related offenders. The U.S. population grew by about 25% from 1980 to 2000. In that same 20-year period, the U.S. prison population tripled, making the U.S. the world leader in both the percentage and the absolute number of citizens incarcerated. The United States has 5% of the world's population, but 25% of the world's prisoners. About 90% of United States prisoners are incarcerated in state jails. In 2016, about 572,000 (over 44%) of the 1.3 million people in these state jails were serving time for drug offenses, while 728,000 were incarcerated for violent offenses. The Federal Bureau of Prisons' online statistics page states that 45.9% of prisoners were incarcerated for drug offenses as of December 2021. === European Union === In 2004, the Council of the European Union adopted a framework decision harmonizing the minimum penal provisions for illicit drug-related activities. In particular, article 2(9) stipulates that activities may be exempt from the minimum provisions "when it is committed by its perpetrators exclusively for their own personal consumption as defined by national law." This exemption was made, in particular, to accommodate more liberal national systems such as the Dutch coffee shops (see below) or the Spanish Cannabis Social Clubs. ==== The Netherlands ==== In the Netherlands, cannabis and other "soft" drugs are decriminalised in small quantities. The Dutch government treats the problem as more of a public health issue than a criminal issue. Contrary to popular belief, cannabis is still technically illegal. Coffee shops that sell cannabis to people 18 or above are tolerated, and pay taxes like any other business for their cannabis and hashish sales, although distribution is a grey area that the authorities would rather not go into, as it is not decriminalised. Many "coffee shops" are found in Amsterdam and cater mainly to the large tourist trade; the local consumption rate is far lower than in the US.
The administrative bodies responsible for enforcing the drug policies include the Ministry of Health, Welfare and Sport, the Ministry of Justice, the Ministry of the Interior and Kingdom Relations, and the Ministry of Finance. Local authorities also shape local policy, within the national framework. When compared to other countries, Dutch drug consumption falls at the European average, with six per cent regular use (twenty-one per cent at some point in life), and is considerably lower than in the Anglo-Saxon countries headed by the United States, with eight per cent recurring use (thirty-four per cent at some point in life). === Australia === A Nielsen poll in 2012 found that only 27% of voters favoured decriminalisation. Australia has steep penalties for growing and using drugs, even for personal use, with Western Australia having the toughest laws. There is an associated anti-drug culture amongst a significant number of Australians. Law enforcement targets drugs, particularly in the party scene. In 2012, crime statistics in Victoria revealed that police were increasingly arresting users rather than dealers, and the Liberal government banned the sale of bongs that year. === Indonesia === Indonesia carries a maximum penalty of death for drug dealing, and a maximum of 15 years in prison for drug use. In 2004, Australian citizen Schapelle Corby was convicted of smuggling 4.4 kilograms of cannabis into Bali, a crime that carried a maximum penalty of death. Her trial ended in a guilty verdict and a sentence of 20 years' imprisonment. Corby claimed to be an unwitting drug mule. Australian citizens known as the "Bali Nine" were caught smuggling heroin. Two of the nine, Andrew Chan and Myuran Sukumaran, were executed on April 29, 2015, along with six other foreign nationals. In August 2005, Australian model Michelle Leslie was arrested with two ecstasy pills. She pleaded guilty to possession and in November 2005 was sentenced to 3 months' imprisonment, which she was deemed to have already served, and was released from prison immediately upon her admission of guilt on the charge of possession. At the 1961 Single Convention on Narcotic Drugs, Indonesia, along with India, Turkey, Pakistan and some South American countries, opposed the criminalisation of drugs. === Republic of China (Taiwan) === Taiwan carries a maximum penalty of death for drug trafficking, while tobacco smoking and wine are classified as legal recreational drugs. The Department of Health is in charge of drug prohibition. == Cost == In 2020, the direct cost of drug prohibition to United States taxpayers was estimated at over $40 billion annually. Prohibition can increase organized crime, government corruption, and mass incarceration via the trade in illegal drugs, while racial and gender disparities in enforcement are evident. Although drug prohibition is often portrayed by proponents as a measure to improve public health, evidence is lacking. In 2016, the Johns Hopkins–Lancet Commission concluded that the "harms of prohibition far outweigh the benefits", citing increased risk of overdoses and HIV infection and detrimental effects on the social determinants of health. Some proponents argue that drug prohibition's effect on suppressing usage rates (although the magnitude of this effect is unknown) outweighs the negative effects of prohibition. Alternative approaches to prohibition include drug legalization, drug decriminalization, and government monopoly.
== See also == Alcohol law Arguments for and against drug prohibition Chasing the Scream Drug liberalization Demand reduction Drug policy of the Soviet Union Harm reduction List of anti-cannabis organizations Medellín Cartel Mexican drug war Puerto Rican drug war Prohibitionism Tobacco control War on Drugs US specific: Allegations of CIA drug trafficking School district drug policies Drug Free America Foundation Drug Policy Alliance DrugWarRant Gary Webb Marijuana Policy Project National Organization for the Reform of Marijuana Laws Students for Sensible Drug Policy Woman's Christian Temperance Union == References == == Further reading == == External links == Making Contact: The Mission to End Prohibition. Radio piece featuring LEAP founder and former narcotics officer Jack Cole, and Drug Policy Alliance founder Ethan Nadelmann EMCDDA – Decriminalisation in Europe? Recent developments in legal approaches to drug use Archived January 12, 2007, at the Wayback Machine. 10 Downing Street's Strategy Unit Drugs Report War on drugs Archived April 30, 2011, at the Wayback Machine Part I: Winners, documentary (50 min) explaining 'War on Drugs' by Tegenlicht of VPRO Dutch television. After short introduction in Dutch (1 min), English spoken. Broadband internet needed. War on drugs Archived April 30, 2011, at the Wayback Machine Part II: Losers, documentary (50 min) showing downside of the 'War on Drugs' by Tegenlicht of VPRO Dutch television. After short introduction in Dutch (1 min), English spoken. Broadband internet needed. After the War on Drugs: Options for Control (Report) The Drug War as a Socialist Enterprise by Milton Friedman Free from the Nightmare of Prohibition Archived February 23, 2006, at the Wayback Machine by Harry Browne Prohibition news page – Alcohol and Drugs History Society Drugs and conservatives should go together
Wikipedia/Prohibition_of_drugs
Liquid chromatography–mass spectrometry (LC–MS) is an analytical chemistry technique that combines the physical separation capabilities of liquid chromatography (or HPLC) with the mass analysis capabilities of mass spectrometry (MS). Coupled chromatography – MS systems are popular in chemical analysis because the individual capabilities of each technique are enhanced synergistically. While liquid chromatography separates mixtures with multiple components, mass spectrometry provides spectral information that may help to identify (or confirm the suspected identity of) each separated component. MS is not only sensitive, but provides selective detection, relieving the need for complete chromatographic separation. LC–MS is also appropriate for metabolomics because of its good coverage of a wide range of chemicals. This tandem technique can be used to analyze biochemical, organic, and inorganic compounds commonly found in complex samples of environmental and biological origin. Therefore, LC–MS may be applied in a wide range of sectors including biotechnology, environment monitoring, food processing, and pharmaceutical, agrochemical, and cosmetic industries. Since the early 2000s, LC–MS (or more specifically LC–MS/MS) has also begun to be used in clinical applications. In addition to the liquid chromatography and mass spectrometry devices, an LC–MS system contains an interface that efficiently transfers the separated components from the LC column into the MS ion source. The interface is necessary because the LC and MS devices are fundamentally incompatible. While the mobile phase in a LC system is a pressurized liquid, the MS analyzers commonly operate under high vacuum. Thus, it is not possible to directly pump the eluate from the LC column into the MS source. Overall, the interface is a mechanically simple part of the LC–MS system that transfers the maximum amount of analyte, removes a significant portion of the mobile phase used in LC and preserves the chemical identity of the chromatography products (chemically inert). As a requirement, the interface should not interfere with the ionizing efficiency and vacuum conditions of the MS system. Nowadays, most extensively applied LC–MS interfaces are based on atmospheric pressure ionization (API) strategies like electrospray ionization (ESI), atmospheric-pressure chemical ionization (APCI), and atmospheric pressure photoionization (APPI). These interfaces became available in the 1990s after a two decade long research and development process. == History == The coupling of chromatography with MS is a well developed chemical analysis strategy dating back from the 1950s. Gas chromatography (GC)–MS was originally introduced in 1952, when A. T. James and A. J. P. Martin were trying to develop tandem separation – mass analysis techniques. In GC, the analytes are eluted from the separation column as a gas and the connection with electron ionization (EI) or chemical ionization (CI) ion sources in the MS system was a technically simpler challenge. Because of this, the development of GC-MS systems was faster than LC–MS and such systems were first commercialized in the 1970s. The development of LC–MS systems took longer than GC-MS and was directly related to the development of proper interfaces. Victor Talrose and his collaborators in Russia started the development of LC–MS in the late 1960s, when they first used capillaries to connect an LC column to an EI source. 
A similar strategy was investigated by McLafferty and collaborators in 1973 who coupled the LC column to a CI source, which allowed a higher liquid flow into the source. This was the first and most obvious way of coupling LC with MS, and was known as the capillary inlet interface. This pioneer interface for LC–MS had the same analysis capabilities of GC-MS and was limited to rather volatile analytes and non-polar compounds with low molecular mass (below 400 Da). In the capillary inlet interface, the evaporation of the mobile phase inside the capillary was one of the main issues. Within the first years of development of LC–MS, on-line and off-line alternatives were proposed as coupling alternatives. In general, off-line coupling involved fraction collection, evaporation of solvent, and transfer of analytes to the MS using probes. Off-line analyte treatment process was time-consuming and there was an inherent risk of sample contamination. Rapidly, it was realized that the analysis of complex mixtures would require the development of a fully automated on-line coupling solution in LC–MS. The key to the success and widespread adoption of LC–MS as a routine analytical tool lies in the interface and ion source between the liquid-based LC and the vacuum-base MS. The following interfaces were stepping-stones on the way to the modern atmospheric-pressure ionization interfaces, and are described for historical interest. === Moving-belt interface === The moving-belt interface (MBI) was developed by McFadden et al. in 1977 and commercialized by Finnigan. This interface consisted of an endless moving belt onto which the LC column effluent was deposited in a band. On the belt, the solvent was evaporated by gently heating and efficiently exhausting the solvent vapours under reduced pressure in two vacuum chambers. After the liquid phase was removed, the belt passed over a heater which flash desorbed the analytes into the MS ion source. One of the significant advantages of the MBI was its compatibility with a wide range of chromatographic conditions. MBI was successfully used for LC–MS applications between 1978 and 1990 because it allowed coupling of LC to MS devices using EI, CI, and fast-atom bombardment (FAB) ion sources. The most common MS systems connected by MBI interfaces to LC columns were magnetic sector and quadrupole instruments. MBI interfaces for LC–MS allowed MS to be widely applied in the analysis of drugs, pesticides, steroids, alkaloids, and polycyclic aromatic hydrocarbons. This interface is no longer used because of its mechanical complexity and the difficulties associated with belt renewal (or cleaning) as well as its inability to handle very labile biomolecules. === Direct liquid-introduction interface === The direct liquid-introduction (DLI) interface was developed in 1980. This interface was intended to solve the problem of evaporation of liquid inside the capillary inlet interface. In DLI, a small portion of the LC flow was forced through a small aperture or diaphragm (typically 10 μm in diameter) to form a liquid jet composed of small droplets that were subsequently dried in a desolvation chamber. The analytes were ionized using a solvent-assisted chemical ionization source, where the LC solvents acted as reagent gases. To use this interface, it was necessary to split the flow coming out of the LC column because only a small portion of the effluent (10 to 50 μl/min out of 1 ml/min) could be introduced into the source without raising the vacuum pressure of the MS system too high. 
Alternatively, Henion at Cornell University had success using micro-bore LC methods so that the entire (low) flow of the LC could be used. One of the main operational problems of the DLI interface was the frequent clogging of the diaphragm orifices. The DLI interface was used between 1982 and 1985 for the analysis of pesticides, corticosteroids, metabolites in horse urine, erythromycin, and vitamin B12. However, this interface was replaced by the thermospray interface, which removed the flow rate limitations and the issues with the clogging diaphragms. A related device was the particle beam interface (PBI), developed by Willoughby and Browner in 1984. Particle beam interfaces took over the wide applications of the MBI for LC–MS in 1988. The PBI operated by using a helium gas nebulizer to spray the eluant into the vacuum, drying the droplets and pumping away the solvent vapour (using a jet separator) while the stream of monodisperse dried particles containing the analyte entered the source. Drying the droplets outside of the source volume, and using a jet separator to pump away the solvent vapour, allowed the particles to enter and be vapourized in a low-pressure EI source. As with the MBI, the ability to generate library-searchable EI spectra was a distinct advantage for many applications. Commercialized by Hewlett Packard, and later by VG and Extrel, the PBI enjoyed moderate success, but it has been largely supplanted by atmospheric pressure interfaces such as electrospray and APCI, which provide a broader range of compound coverage and applications. === Thermospray interface === The thermospray (TSP) interface was developed in 1980 by Marvin Vestal and co-workers at the University of Houston. It was commercialized by Vestec and several of the major mass spectrometer manufacturers. The interface resulted from a long-term research project intended to find an LC–MS interface capable of handling high flow rates (1 ml/min) and avoiding the flow split of the DLI interfaces. The TSP interface was composed of a heated probe, a desolvation chamber, and an ion focusing skimmer. The LC effluent passed through the heated probe and emerged as a jet of vapor and small droplets flowing into the desolvation chamber at low pressure. The interface was initially operated with a filament or discharge as the source of ions (thereby acting as a CI source for vapourized analyte), but it was soon discovered that ions were also observed when the filament or discharge was off. This could be attributed either to direct emission of ions from the liquid droplets as they evaporated, in a process related to electrospray ionization or ion evaporation, or to chemical ionization of vapourized analyte molecules by buffer ions (such as ammonium acetate). The fact that multiply-charged ions were observed from some larger analytes suggests that direct analyte ion emission was occurring under at least some conditions. The interface was able to handle up to 2 ml/min of eluate from the LC column and would efficiently introduce it into the MS vacuum system. TSP was also more suitable for LC–MS applications involving reversed-phase liquid chromatography (RP-LC). With time, the mechanical complexity of TSP was simplified, and this interface became popular as the first ideal LC–MS interface for pharmaceutical applications comprising the analysis of drugs, metabolites, conjugates, nucleosides, peptides, natural products, and pesticides.
The introduction of TSP marked a significant improvement for LC–MS systems and was the most widely applied interface until the beginning of the 1990s, when it began to be replaced by interfaces involving atmospheric pressure ionization (API). === FAB based interfaces === The first fast atom bombardment (FAB) and continuous flow-FAB (CF-FAB) interfaces were developed in 1985 and 1986 respectively. Both interfaces were similar, but they differed in that the first used a porous frit probe as the connecting channel, while CF-FAB used a probe tip. Of these, the CF-FAB was more successful as an LC–MS interface and was useful for analyzing non-volatile and thermally labile compounds. In these interfaces, the LC effluent passed through the frit or CF-FAB channels to form a uniform liquid film at the tip. There, the liquid was bombarded with ion beams or high-energy atoms (fast atoms). For stable operation, the FAB based interfaces were able to handle liquid flow rates of only 1–15 μl/min and were also restricted to microbore and capillary columns. In order to be used in FAB MS ionization sources, the analytes of interest had to be mixed with a matrix (e.g., glycerol) that could be added before or after the separation in the LC column. FAB based interfaces were extensively used to characterize peptides, but lost applicability with the advent of electrospray based interfaces in 1988. == Liquid chromatography == Liquid chromatography is a method of physical separation in which the components of a liquid mixture are distributed between two immiscible phases, i.e., stationary and mobile. The practice of LC can be divided into five categories, i.e., adsorption chromatography, partition chromatography, ion-exchange chromatography, size-exclusion chromatography, and affinity chromatography. Among these, the most widely used variant is the reverse-phase (RP) mode of the partition chromatography technique, which makes use of a nonpolar (hydrophobic) stationary phase and a polar mobile phase. In common applications, the mobile phase is a mixture of water and other polar solvents (e.g., methanol, isopropanol, and acetonitrile), and the stationary matrix is prepared by attaching long-chain alkyl groups (e.g., n-octadecyl or C18) to the external and internal surfaces of irregularly or spherically shaped 5 μm diameter porous silica particles. In HPLC, typically 20 μl of the sample of interest are injected into the mobile phase stream delivered by a high-pressure pump. The mobile phase containing the analytes permeates through the stationary phase bed in a definite direction. The components of the mixture are separated depending on their chemical affinity for the mobile and stationary phases. The separation results from repeated sorption and desorption steps occurring as the liquid interacts with the stationary bed. The liquid solvent (mobile phase) is delivered under high pressure (up to 400 bar or 5800 psi) into a packed column containing the stationary phase. The high pressure is necessary to achieve a constant flow rate for reproducible chromatography experiments. Depending on the partitioning between the mobile and stationary phases, the components of the sample will flow out of the column at different times. The column is the most important component of the LC system and is designed to withstand the high pressure of the liquid. Conventional LC columns are 100–300 mm long with an outer diameter of 6.4 mm (1/4 inch) and an internal diameter of 3.0–4.6 mm.
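These dimensions translate directly into the volumes and hold-up times a method has to work with. As a rough, illustrative calculation (not from the source), the sketch below estimates the column volume, hold-up (dead) volume, and dead time for a conventional 150 mm × 4.6 mm column run at 1 ml/min. The helper function is written from scratch for illustration, and the total porosity of about 0.65 and the flow rate are assumed example values, not figures given in the text.

```python
import math

def column_dead_time(length_mm: float, internal_diameter_mm: float,
                     flow_ml_per_min: float, total_porosity: float = 0.65):
    """Estimate column volume, hold-up volume, and dead time (t0).

    total_porosity (~0.6-0.7 for packed silica) is an assumed typical value;
    the true figure depends on the packing material.
    """
    radius_cm = (internal_diameter_mm / 10.0) / 2.0
    length_cm = length_mm / 10.0
    column_volume_ml = math.pi * radius_cm ** 2 * length_cm   # empty-tube volume (1 cm^3 = 1 ml)
    holdup_volume_ml = total_porosity * column_volume_ml      # volume accessible to the mobile phase
    dead_time_min = holdup_volume_ml / flow_ml_per_min        # elution time of an unretained solute
    return column_volume_ml, holdup_volume_ml, dead_time_min

if __name__ == "__main__":
    vc, v0, t0 = column_dead_time(length_mm=150, internal_diameter_mm=4.6,
                                  flow_ml_per_min=1.0)
    print(f"column volume ~ {vc:.2f} ml, hold-up volume ~ {v0:.2f} ml, t0 ~ {t0:.2f} min")
```

Under these assumptions an unretained compound elutes after roughly a minute and a half; the shorter and narrower columns described next reduce the hold-up volume, and therefore the analysis time, proportionally.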
For applications involving LC–MS, the length of chromatography columns can be shorter (30–50 mm) with 3–5 μm diameter packing particles. In addition to the conventional model, other LC columns are the narrow bore, microbore, microcapillary, and nano-LC models. These columns have smaller internal diameters, allow for a more efficient separation, and handle liquid flows under 1 ml/min (the conventional flow-rate). In order to improve separation efficiency and peak resolution, ultra performance liquid chromatography (UHPLC) can be used instead of HPLC. This LC variant uses columns packed with smaller silica particles (~1.7 μm diameter) and requires higher operating pressures in the range of 310000 to 775000 torr (6000 to 15000 psi, 400 to 1034 bar). == Mass spectrometry == Mass spectrometry (MS) is an analytical technique that measures the mass-to-charge ratio (m/z) of charged particles (ions). Although there are many different kinds of mass spectrometers, all of them make use of electric or magnetic fields to manipulate the motion of ions produced from an analyte of interest and determine their m/z. The basic components of a mass spectrometer are the ion source, the mass analyzer, the detector, and the data and vacuum systems. The ion source is where the components of a sample introduced in a MS system are ionized by means of electron beams, photon beams (UV lights), laser beams or corona discharge. In the case of electrospray ionization, the ion source moves ions that exist in liquid solution into the gas phase. The ion source converts and fragments the neutral sample molecules into gas-phase ions that are sent to the mass analyzer. While the mass analyzer applies the electric and magnetic fields to sort the ions by their masses, the detector measures and amplifies the ion current to calculate the abundances of each mass-resolved ion. In order to generate a mass spectrum that a human eye can easily recognize, the data system records, processes, stores, and displays data in a computer. The mass spectrum can be used to determine the mass of the analytes, their elemental and isotopic composition, or to elucidate the chemical structure of the sample. MS is an experiment that must take place in gas phase and under vacuum (1.33 * 10−2 to 1.33 * 10−6 pascal). Therefore, the development of devices facilitating the transition from samples at higher pressure and in condensed phase (solid or liquid) into a vacuum system has been essential to develop MS as a potent tool for identification and quantification of organic compounds like peptides. MS is now in very common use in analytical laboratories that study physical, chemical, or biological properties of a great variety of compounds. Among the many different kinds of mass analyzers, the ones that find application in LC–MS systems are the quadrupole, time-of-flight (TOF), ion traps, and hybrid quadrupole-TOF (QTOF) analyzers. == Interfaces == The interface between a liquid phase technique (HPLC) with a continuously flowing eluate, and a gas phase technique carried out in a vacuum was difficult for a long time. The advent of electrospray ionization changed this. Currently, the most common LC–MS interfaces are electrospray ionization (ESI), atmospheric pressure chemical ionization (APCI), and atmospheric pressure photo-ionization (APPI). These are newer MS ion sources that facilitate the transition from a high pressure environment (HPLC) to high vacuum conditions needed at the MS analyzer. 
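The size of the pressure gap these ion sources have to bridge is easy to underestimate. The short conversion below is not from the source and uses only standard unit definitions; it compares atmospheric pressure with the MS operating vacuum quoted above (about 1.33 × 10−2 to 1.33 × 10−6 pascal).

```python
PA_PER_TORR = 133.322   # standard conversion factor
PA_PER_ATM = 101325.0   # standard atmosphere in pascal

# MS operating vacuum range quoted in the text, in pascal
ms_vacuum_pa = (1.33e-2, 1.33e-6)

for p in ms_vacuum_pa:
    torr = p / PA_PER_TORR
    ratio = PA_PER_ATM / p
    print(f"{p:.2e} Pa = {torr:.1e} torr  ->  about {ratio:.0e} times below atmospheric pressure")
```

The atmospheric pressure ion sources therefore have to move analyte ions across roughly seven to eleven orders of magnitude in pressure while discarding nearly all of the mobile phase.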
Although these interfaces are described individually, they can also be commercially available as dual ESI/APCI, ESI/APPI, or APCI/APPI ion sources. Various deposition and drying techniques were used in the past (e.g., moving belts), but the most common of these was the off-line MALDI deposition. A new approach still under development, called the direct-EI LC–MS interface, couples a nano-HPLC system with an electron ionization-equipped mass spectrometer. === Electrospray ionization (ESI) === The ESI interface for LC–MS systems was developed by Fenn and collaborators in 1988. This ion source/interface can be used for the analysis of moderately polar and even very polar molecules (e.g., metabolites, xenobiotics, peptides, nucleotides, polysaccharides). The liquid eluate coming out of the LC column is directed into a metal capillary kept at 3 to 5 kV and is nebulized by a high-velocity coaxial flow of gas at the tip of the capillary, creating a fine spray of charged droplets in front of the entrance to the vacuum chamber. To avoid contamination of the vacuum system by buffers and salts, this capillary is usually located perpendicular to the inlet of the MS system, in some cases with a counter-current of dry nitrogen in front of the entrance through which ions are directed by the electric field. In some sources, rapid droplet evaporation, and thus maximum ion emission, is achieved by mixing an additional stream of hot gas with the spray plume in front of the vacuum entrance. In other sources, the droplets are drawn through a heated capillary tube as they enter the vacuum, promoting droplet evaporation and ion emission. These methods of increasing droplet evaporation now allow liquid flow rates of 1–2 mL/min to be used while still achieving efficient ionisation and high sensitivity. Thus, while 1–3 mm microbore columns and lower flow rates of 50–200 μl/min were commonly considered necessary for optimum operation, this limitation is no longer as important, and the higher column capacity of larger-bore columns can now be advantageously employed with ESI LC–MS systems. Positively and negatively charged ions can be created by switching polarities, and it is possible to acquire alternate positive and negative mode spectra rapidly within the same LC run. While most large molecules (greater than MW 1500–2000) produce multiply charged ions in the ESI source, the majority of smaller molecules produce singly charged ions. === Atmospheric pressure chemical ionization (APCI) === The development of the APCI interface for LC–MS started with Horning and collaborators in 1973. However, its commercial application was introduced at the beginning of the 1990s, after Henion and collaborators improved the LC–APCI–MS interface in 1986. The APCI ion source/interface can be used to analyze small, neutral, relatively non-polar, and thermally stable molecules (e.g., steroids, lipids, and fat-soluble vitamins). These compounds are not well ionized using ESI. In addition, APCI can also handle mobile phase streams containing buffering agents. The liquid from the LC system is pumped through a capillary and is nebulized at the tip, where a corona discharge takes place. First, the ionizing gas surrounding the interface and the mobile phase solvent are subject to chemical ionization at the ion source. Later, these ions react with the analyte and transfer their charge. The sample ions then pass through small orifice skimmers and ion-focusing lenses.
Once inside the high vacuum region, the ions are subject to mass analysis. This interface can be operated in positive and negative charge modes, and singly-charged ions are mainly produced. The APCI ion source can also handle flow rates between 500 and 2000 μl/min and can be directly connected to conventional 4.6 mm ID columns. === Atmospheric pressure photoionization (APPI) === The APPI interface for LC–MS was developed simultaneously by Bruins and Syage in 2000. APPI is another LC–MS ion source/interface for the analysis of neutral compounds that cannot be ionized using ESI. This interface is similar to the APCI ion source, but instead of a corona discharge, the ionization occurs by using photons coming from a discharge lamp. In the direct-APPI mode, singly charged analyte molecular ions are formed by absorption of a photon and ejection of an electron. In the dopant-APPI mode, an easily ionizable compound (dopant) is added to the mobile phase or the nebulizing gas to promote a charge-exchange reaction between the dopant molecular ion and the analyte. The ionized sample is later transferred to the mass analyzer at high vacuum as it passes through small orifice skimmers. == Applications == The coupling of MS with LC systems is attractive because liquid chromatography can separate delicate and complex natural mixtures whose chemical composition needs to be well established (e.g., biological fluids, environmental samples, and drugs). Further, LC–MS has applications in volatile explosive residue analysis. Nowadays, LC–MS has become one of the most widely used chemical analysis techniques because more than 85% of natural chemical compounds are polar and thermally labile, and GC-MS cannot process such samples. As an example, HPLC–MS is regarded as the leading analytical technique for proteomics and pharmaceutical laboratories. Other important applications of LC–MS include the analysis of food, pesticides, and plant phenols. === Pharmacokinetics === LC–MS is widely used in the field of bioanalysis and is especially involved in pharmacokinetic studies of pharmaceuticals. Pharmacokinetic studies are needed to determine how quickly a drug will be cleared from the body organs and the hepatic blood flow. MS analyzers are useful in these studies because of their shorter analysis time and higher sensitivity and specificity compared to the UV detectors commonly attached to HPLC systems. One major advantage is the use of tandem MS–MS, where the detector may be programmed to select certain ions to fragment. The measured quantity is the sum of the molecule fragments chosen by the operator. As long as there are no interferences or ion suppression in LC–MS, the LC separation can be quite quick. === Proteomics/metabolomics === LC–MS is used in proteomics as a method to detect and identify the components of a complex mixture. The bottom-up proteomics LC–MS approach generally involves protease digestion and denaturation, using trypsin as a protease, urea to denature the tertiary structure, and iodoacetamide to modify the cysteine residues. After digestion, LC–MS is used for peptide mass fingerprinting, or LC–MS/MS (tandem MS) is used to derive the sequences of individual peptides. LC–MS/MS is most commonly used for proteomic analysis of complex samples, where peptide masses may overlap even with high-resolution mass spectrometry. Samples of complex biological origin (e.g., human serum) may be analyzed in modern LC–MS/MS systems, which can identify over 1000 proteins.
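To make the peptide mass fingerprinting idea concrete, the sketch below performs a naive in-silico tryptic digest (cleaving after K or R except when the next residue is P) and computes the monoisotopic mass of each peptide together with the m/z values expected for the 1+ to 3+ protonated species typical of ESI. The sequence is an arbitrary example, the residue masses are standard monoisotopic values, no modifications are considered, and the helper functions are written from scratch for illustration; this is a sketch, not a replacement for a proteomics search engine.

```python
# Standard monoisotopic residue masses (Da); water and proton masses are standard values.
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
    "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406, "N": 114.04293,
    "D": 115.02694, "Q": 128.05858, "K": 128.09496, "E": 129.04259, "M": 131.04049,
    "H": 137.05891, "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER, PROTON = 18.010565, 1.007276

def tryptic_peptides(sequence: str):
    """Naive trypsin digest: cleave after K or R unless the next residue is P."""
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        if aa in "KR" and (i + 1 == len(sequence) or sequence[i + 1] != "P"):
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])
    return peptides

def monoisotopic_mass(peptide: str) -> float:
    return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER

def mz(mass: float, charge: int) -> float:
    return (mass + charge * PROTON) / charge

if __name__ == "__main__":
    demo = "MKWVTFISLLFLFSSAYSRGVFRR"   # arbitrary example sequence
    for pep in tryptic_peptides(demo):
        m = monoisotopic_mass(pep)
        states = ", ".join(f"{z}+: {mz(m, z):.3f}" for z in (1, 2, 3))
        print(f"{pep:<20} M = {m:9.4f} Da   m/z  {states}")
```

A real workflow compares such calculated masses (or fragment spectra) against the measured values in order to assign protein identities.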
Such a high level of protein identification is possible, however, only after separating the sample by means of an SDS-PAGE gel or HPLC-SCX. Recently, LC–MS/MS has been applied to the search for peptide biomarkers. Examples are the recent discovery and validation of peptide biomarkers for four major bacterial respiratory tract pathogens (Staphylococcus aureus, Moraxella catarrhalis, Haemophilus influenzae and Streptococcus pneumoniae) and the SARS-CoV-2 virus. LC–MS has emerged as one of the most commonly used techniques in global metabolite profiling of biological tissue (e.g., blood plasma, serum, urine). LC–MS is also used for the analysis of natural products and the profiling of secondary metabolites in plants. In this regard, MS-based systems are useful for acquiring more detailed information about the wide spectrum of compounds present in complex biological samples. LC–nuclear magnetic resonance (NMR) is also used in plant metabolomics, but this technique can only detect and quantify the most abundant metabolites. LC–MS has been useful in advancing the field of plant metabolomics, which aims to study the plant system at the molecular level, providing an unbiased characterization of the plant metabolome in response to its environment. The first application of LC–MS in plant metabolomics was the detection of a wide range of highly polar metabolites, oligosaccharides, amino acids, amino sugars, and sugar nucleotides from Cucurbita maxima phloem tissues. Another example of LC–MS in plant metabolomics is the efficient separation and identification of glucose, sucrose, raffinose, stachyose, and verbascose from leaf extracts of Arabidopsis thaliana. === Drug development === LC–MS is frequently used in drug development because it allows quick molecular weight confirmation and structure identification. These features speed up the process of generating, testing, and validating a discovery starting from a vast array of products with potential application. LC–MS applications for drug development are highly automated methods used for peptide mapping, glycoprotein mapping, lipidomics, natural product dereplication, bioaffinity screening, in vivo drug screening, metabolic stability screening, metabolite identification, impurity identification, quantitative bioanalysis, and quality control. == See also == Gas chromatography–mass spectrometry Capillary electrophoresis–mass spectrometry Ion-mobility spectrometry–mass spectrometry == References == == Further reading ==
Wikipedia/Liquid_chromatography-mass_spectrometry
Partition chromatography theory and practice was introduced through the work and publications of Archer Martin and Richard Laurence Millington Synge during the 1940s. They would later receive the 1952 Nobel Prize in Chemistry "for their invention of partition chromatography". == Synopsis == The process of separating mixtures of chemical compounds by passing them through a column that contains a solid stationary phase that was eluted with a mobile phase (column chromatography) was well known at that time. Chromatographic separation was considered to occur by an adsorption process whereby compounds adhered to a solid media and were washed off the column with a solvent, mixture of solvents, or solvent gradient. In contrast, Martin and Synge developed and described a chromatographic separation process whereby compounds were partitioned between two liquid phases similar to the separatory funnel liquid-liquid separation dynamic. This was an important departure, both in theory and under equilibrium conditions. Martin and Synge initially attempted to devise a method of performing a sequential liquid-liquid extraction with serially connected glass vessels that functioned as separatory funnels. The seminal article presenting their early studies described a rather complicated instrument that allowed partitioning of amino acids between water and chloroform phases. The process was termed "counter-current liquid-liquid extraction." Martin and Synge described the theory of this technique in reference to continuous fractional distillation described by Randall and Longtin. This approach was deemed too cumbersome, so they developed a method of absorbing water onto silica gel as the stationary phase and using a solvent, such as chloroform, as the mobile phase. This work was published in 1941 as "a new form of chromatogram employing two liquid phases." The article describes both the theory in terms of the partition coefficient of a compound, and the application of the process to the separation of amino acids on a water-impregnated silica column eluted with a water:chloroform:n-butanol solvent mixture. == Impact on separation methodology == The previously described work of Martin and Synge impacted the development of the previously known column chromatography and inspired new forms of chromatography such as countercurrent distribution, paper chromatography, and gas-liquid chromatography which is more commonly known as gas chromatography. The modification of silica gel stationary phase led to many creative ways of modifying stationary phases in order to influence the separation characteristics. The most notable modification was the chemical bonding of alkane functional groups to silica gel to produce reversed-phase media. The original problem that Martin and Synge encountered with devising an instrument that would employ two free-flowing liquid phases was solved by Lyman C. Craig in 1944, and commercial counter-current distribution instruments were used for many important discoveries. The introduction of paper chromatography was an important analytical technique which gave rise to thin-layer chromatography. Finally, gas-liquid chromatography, a fundamental technique in modern analytical chemistry, was described by Martin with coauthors A. T. James and G. Howard Smith in 1952. == References ==
Wikipedia/Partition_chromatography
A chromatography detector is a device that detects and quantifies separated compounds as they elute from the chromatographic column. These detectors are integral to various chromatographic techniques, such as gas chromatography, liquid chromatography, high-performance liquid chromatography, and supercritical fluid chromatography, among others. The main function of a chromatography detector is to translate the physical or chemical properties of the analyte molecules into a measurable signal, typically an electrical signal, that can be displayed as a function of time in a graphical presentation called a chromatogram. Chromatograms can provide valuable information about the composition and concentration of the components in the sample. Detectors operate based on specific principles, including optical, electrochemical, thermal conductivity, fluorescence, mass spectrometry, and more. Each type of detector has its unique capabilities and is suitable for specific applications, depending on the nature of the analytes and the sensitivity and selectivity required for the analysis. There are two general types of detectors: destructive and non-destructive. Destructive detectors perform a continuous transformation of the column effluent (burning, evaporation or mixing with reagents) with subsequent measurement of some physical property of the resulting material (plasma, aerosol or reaction mixture). Non-destructive detectors directly measure some property of the column eluent (for example, ultraviolet absorption) and thus afford greater analyte recovery. == Destructive detectors == In liquid chromatography: Charged aerosol detector: an electrically charged aerosol is used for the detection of non-UV-absorbing chargeable molecules, especially saccharides and lipids. Evaporative light scattering detector: the volatile mobile phase is evaporated away from non-volatile solutes for universal detection; used for saccharides, lipids and other non-UV-absorbing molecules. In gas chromatography: Flame ionization detector: uses an ionizing flame to detect most hydrocarbon molecules. Flame photometric detector: uses an atomizing flame to obtain light emitted from specific elements in order to detect and quantify them. Nitrogen–phosphorus detector: a thermionic detector with photometric detection, sensitive specifically to nitrogen- and phosphorus-containing hydrocarbons. Atomic-emission detector: a hyphenation of gas chromatography with an atomic emission spectrophotometer for the detection of elements. In all types of chromatography: Mass spectrometer: in fact a hyphenation of the separative instrument with a mass spectrometry instrument, giving information on the molecular or atomic weight of the solute. Advanced mass spectrometry technologies also provide information on solute structure and even chemical properties. The hyphenation of ultra-high-performance chromatography with high-resolution mass spectrometers revolutionized entire new scientific fields of research and application, such as toxicology, proteomics, lipidomics, genomics, metabolomics and metabonomics. == Non-destructive detectors == Non-destructive detectors in liquid chromatography: Ultraviolet light detectors, fixed or variable wavelength, including diode array detectors: the ultraviolet light absorption of the effluent is continuously measured at single or multiple wavelengths; these are by far the most popular detectors for liquid chromatography. Fluorescence detector:
irradiates the effluent with light of a set wavelength and measures the fluorescence of the effluent at one or more wavelengths. Refractive index detector: continuously measures the refractive index of the effluent; it has the lowest sensitivity of all detectors and is often used in size exclusion chromatography for polymer analysis. Radio flow detector: measures the radioactivity of the effluent; this detector can be destructive if a scintillation cocktail is continuously added to the effluent. Chiral detector: continuously measures the optical angle of rotation of the effluent; used only when chiral compounds are being analyzed. Conductivity monitor: continuously measures the conductivity of the effluent; used only when conductive eluents (water or alcohols) are used. Non-destructive detectors in gas chromatography: Thermal conductivity detector: measures the thermal conductivity of the eluent. Electron capture detector: the most sensitive detector known; allows for the detection of organic molecules containing halogens, nitro groups, etc. Photoionization detector: measures the increase in conductivity achieved by ionizing the effluent gas with ultraviolet radiation. Olfactometric detector: assesses the odor activity of the eluent using human assessors. Electronic nose detector: a device that mimics the human nose and is emerging as a modern, advanced version of olfactometric detection. == References ==
Wikipedia/Chromatography_detector
Ion chromatography (or ion-exchange chromatography) is a form of chromatography that separates ions and ionizable polar molecules based on their affinity to the ion exchanger. It works on almost any kind of charged molecule—including small inorganic anions, large proteins, small nucleotides, and amino acids. However, ion chromatography must be done under conditions that are one pH unit away from the isoelectric point of a protein. The two types of ion chromatography are anion-exchange and cation-exchange. Cation-exchange chromatography is used when the molecule of interest is positively charged. The molecule is positively charged because the pH for chromatography is less than the pI (also known as pH(I)). In this type of chromatography, the stationary phase is negatively charged, and positively charged molecules are loaded to be attracted to it. Anion-exchange chromatography is used when the stationary phase is positively charged and negatively charged molecules (meaning that the pH for chromatography is greater than the pI) are loaded to be attracted to it. It is often used in protein purification, water analysis, and quality control. Water-soluble and charged molecules such as proteins, amino acids, and peptides bind to oppositely charged moieties by forming ionic bonds with the insoluble stationary phase. The equilibrated stationary phase consists of an ionizable functional group where the targeted molecules of a mixture to be separated and quantified can bind while passing through the column—a cationic stationary phase is used to separate anions and an anionic stationary phase is used to separate cations. Cation exchange chromatography is used when the desired molecules to be separated are cations, and anion exchange chromatography is used to separate anions. The bound molecules can then be eluted and collected using an eluant which contains anions and cations, by running a higher concentration of ions through the column or by changing the pH of the column. One of the primary advantages of ion chromatography is that only one interaction is involved in the separation, as opposed to other separation techniques; therefore, ion chromatography may have higher matrix tolerance. Another advantage of ion exchange is the predictability of elution patterns (based on the presence of the ionizable group). For example, when cation exchange chromatography is used, certain cations will elute out first and others later. A local charge balance is always maintained. However, there are also disadvantages involved when performing ion-exchange chromatography, such as the constant evolution of the technique, which leads to inconsistency from column to column. A major limitation of this purification technique is that it is limited to ionizable groups. == History == Ion chromatography has advanced through the accumulation of knowledge over the course of many years. Starting from 1947, Spedding and Powell used displacement ion-exchange chromatography for the separation of the rare earths. Additionally, they showed the ion-exchange separation of 14N and 15N isotopes in ammonia. At the start of the 1950s, Kraus and Nelson demonstrated the use of many analytical methods for metal ions dependent on the separation of their chloride, fluoride, nitrate or sulfate complexes by anion chromatography. Automatic in-line detection was progressively introduced from 1960 to 1980, as well as novel chromatographic methods for metal ion separations. A groundbreaking method by Small, Stevens and Bauman at Dow Chemical Co.
led to the creation of modern ion chromatography. Anions and cations could now be separated efficiently by a system of suppressed conductivity detection. In 1979, a method for anion chromatography with non-suppressed conductivity detection was introduced by Gjerde et al. It was followed in 1980 by a similar method for cation chromatography. As a result, a period of intense competition began within the IC market, with supporters of both suppressed and non-suppressed conductivity detection. This competition led to the fast growth of new formats and the fast evolution of IC. A challenge that needs to be overcome in the future development of IC is the preparation of highly efficient monolithic ion-exchange columns, and overcoming this challenge would be of great importance to the development of IC. The boom in ion-exchange chromatography primarily began between 1935 and 1950, during World War II, and it was through the "Manhattan Project" that its applications were significantly extended. Ion chromatography was originally introduced by two English researchers, the agriculturalist Sir Thompson and the chemist J. T. Way. The work of Thompson and Way involved the action of water-soluble fertilizer salts, ammonium sulfate and potassium chloride. These salts could not easily be extracted from the ground by rain. They used ion-exchange methods to treat clays with the salts, resulting in the extraction of ammonia in addition to the release of calcium. It was in the fifties and sixties that theoretical models were developed for IC for further understanding, and it was not until the seventies that continuous detectors were utilized, paving the way for the development from low-pressure to high-performance chromatography. Not until 1975 was "ion chromatography" established as a name for the technique; it was thereafter used as a name for marketing purposes. Today IC is important for investigating aqueous systems, such as drinking water. It is a popular method for analyzing anionic elements or complexes that help solve environmentally relevant problems. It also has great uses in the semiconductor industry. Because of the abundance of separating columns, elution systems, and detectors available, chromatography has developed into the main method for ion analysis. When this technique was initially developed, it was primarily used for water treatment. Since 1935, ion-exchange chromatography has rapidly developed into one of the most heavily used techniques, with its principles often being applied to the majority of fields of chemistry, including distillation, adsorption, and filtration. == Principle == Ion-exchange chromatography separates molecules based on their respective charged groups. Ion-exchange chromatography retains analyte molecules on the column based on coulombic (ionic) interactions. The ion exchange chromatography matrix consists of positively and negatively charged ions. Essentially, molecules undergo electrostatic interactions with opposite charges on the stationary phase matrix. The stationary phase consists of an immobile matrix that contains charged ionizable functional groups or ligands. The stationary phase surface displays ionic functional groups (R-X) that interact with analyte ions of opposite charge. To achieve electroneutrality, these immobilized charges couple with exchangeable counterions in the solution. Ionizable molecules that are to be purified compete with these exchangeable counterions for binding to the immobilized charges on the stationary phase.
These ionizable molecules are retained or eluted based on their charge. Molecules that do not bind, or that bind only weakly, to the stationary phase are the first to be washed away. Altered conditions are needed for the elution of the molecules that do bind to the stationary phase. The concentration of the exchangeable counterions, which compete with the molecules for binding, can be increased, or the pH can be changed to affect the ionic charge of the eluent or the solute. A change in pH affects the charge on the particular molecules and, therefore, alters their binding. When the net charge of the solute molecules is reduced, they start to elute. In this way, such adjustments can be used to release the proteins of interest. Additionally, the concentration of counterions can be gradually varied to affect the retention of the ionized molecules and thus separate them. This type of elution is called gradient elution. Alternatively, step elution can be used, in which the concentration of counterions is varied in steps. This type of chromatography is further subdivided into cation exchange chromatography and anion-exchange chromatography. Positively charged molecules bind to cation exchange resins, while negatively charged molecules bind to anion exchange resins. An ionic compound consisting of the cationic species M+ and the anionic species B− can be retained by the stationary phase. Cation exchange chromatography retains positively charged cations because the stationary phase displays a negatively charged functional group: R-X⁻C⁺ + M⁺B⁻ ⇌ R-X⁻M⁺ + C⁺ + B⁻. Anion exchange chromatography retains anions using a positively charged functional group: R-X⁺A⁻ + M⁺B⁻ ⇌ R-X⁺B⁻ + M⁺ + A⁻. Note that the ionic strength of either C+ or A− in the mobile phase can be adjusted to shift the equilibrium position, and thus the retention time. The ion chromatogram shows a typical chromatogram obtained with an anion exchange column. == Procedure == Before ion-exchange chromatography can be initiated, the column must be equilibrated. The stationary phase must be equilibrated to requirements that depend on the experiment at hand. Once equilibrated, the charged groups of the stationary phase will be associated with exchangeable counterions of opposite charge, such as Cl− or Na+. Next, a buffer should be chosen in which the desired protein can bind. After equilibration, the column needs to be washed. The washing phase helps elute all impurities that do not bind to the matrix, while the protein of interest remains bound. The sample buffer needs to have the same pH as the buffer used for equilibration to help bind the desired proteins. Uncharged proteins are eluted from the column at a speed similar to that of the buffer flowing through the column, with no retention. Once the sample has been loaded onto the column, and the column has been washed with the buffer to elute all non-desired proteins, elution is carried out under specific conditions to elute the desired proteins that are bound to the matrix. Bound proteins are eluted by utilizing a gradient of linearly increasing salt concentration.
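As a simple illustration of such a gradient (not taken from the source), the sketch below generates a linear salt-gradient program from a starting and a final counterion concentration over a chosen number of column volumes; the function name, the concentrations, and the gradient length are all arbitrary example choices.

```python
def linear_salt_gradient(c_start_mM: float, c_end_mM: float,
                         gradient_cv: float, steps: int = 10):
    """Return (column volumes delivered, salt concentration) points for a linear gradient.

    c_start_mM / c_end_mM : counterion concentration at the start and end of the gradient (mM)
    gradient_cv           : gradient length in column volumes
    """
    points = []
    for i in range(steps + 1):
        cv = gradient_cv * i / steps
        conc = c_start_mM + (c_end_mM - c_start_mM) * (cv / gradient_cv)
        points.append((cv, conc))
    return points

if __name__ == "__main__":
    # Example: 0 -> 500 mM salt over 20 column volumes (illustrative values only)
    for cv, conc in linear_salt_gradient(0.0, 500.0, 20.0, steps=5):
        print(f"{cv:5.1f} CV  ->  {conc:6.1f} mM salt")
```

Proteins detach from the exchanger at progressively higher points on such a ramp as their net charge increases, which is the behaviour described next.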
As the ionic strength of the buffer increases, the salt ions compete with the bound proteins for the charged groups on the surface of the medium, causing the desired proteins to be eluted from the column. Proteins with a low net charge are eluted first as the salt concentration, and hence the ionic strength, increases. Proteins with a high net charge require a higher ionic strength before they elute from the column. It is possible to perform ion-exchange chromatography in bulk, on thin layers of medium such as glass or plastic plates coated with a layer of the desired stationary phase, or in chromatography columns. Thin-layer chromatography and column chromatography share the same governing principle: there is constant and frequent exchange of molecules as the mobile phase travels along the stationary phase. It is not imperative to add the sample in minute volumes, as the predetermined conditions for the exchange column have been chosen so that there will be strong interaction between the mobile and stationary phases. Furthermore, the mechanism of the elution process causes a compartmentalization of the differing molecules based on their respective chemical characteristics. This phenomenon is due to an increase in salt concentration at or near the top of the column, which displaces the molecules at that position, while molecules bound lower are released at a later point when the higher salt concentration reaches that area. These principles are the reasons that ion-exchange chromatography is an excellent candidate for the initial chromatography steps in a complex purification procedure, as it can quickly yield small volumes of target molecules from a much larger starting volume. Comparatively simple devices are often used to apply an increasing gradient of counterions to a chromatography column. Counterions such as copper(II) are most often chosen for effectively separating peptides and amino acids through complex formation. A simple device can be used to create a salt gradient: elution buffer is continuously drawn from a reservoir chamber into a stirred mixing chamber, thereby altering its buffer concentration. The buffer placed into the reservoir chamber is usually of high initial concentration, whereas the buffer placed into the stirred chamber is usually of low concentration. As the high-concentration buffer from the reservoir is mixed in and drawn into the column, the buffer concentration in the stirred chamber gradually increases. Altering the shapes of the stirred chamber and of the limit buffer allows for the production of concave, linear, or convex gradients of counterion. A multitude of different media are used for the stationary phase. Among the most common immobilized charged groups used are trimethylaminoethyl (TAM), triethylaminoethyl (TEAE), diethyl-2-hydroxypropylaminoethyl (QAE), aminoethyl (AE), diethylaminoethyl (DEAE), sulpho (S), sulphomethyl (SM), sulphopropyl (SP), carboxy (C), and carboxymethyl (CM). Successful packing of the column is an important aspect of ion chromatography. The stability and efficiency of a finished column depend on the packing method, the solvent used, and factors that affect the mechanical properties of the column. In contrast to early, inefficient dry-packing methods, wet slurry packing, in which particles suspended in an appropriate solvent are delivered into a column under pressure, shows significant improvement.
Three different approaches can be employed in performing wet slurry packing: the balanced-density method (the solvent's density is about that of the porous silica particles), the high-viscosity method (a solvent of high viscosity is used), and the low-viscosity slurry method (performed with low-viscosity solvents). Polystyrene is used as a medium for ion exchange. It is made from the polymerization of styrene with the use of divinylbenzene and benzoyl peroxide. Such exchangers form hydrophobic interactions with proteins, which can be irreversible. Due to this property, polystyrene ion exchangers are not suitable for protein separation. They are used instead for the separation of small molecules, as in amino acid separation and the removal of salt from water. Polystyrene ion exchangers with large pores can be used for the separation of proteins but must be coated with a hydrophilic substance. Cellulose-based media can be used for the separation of large molecules, as they contain large pores. Protein binding in these media is high, and they have low hydrophobic character. DEAE is an anion-exchange matrix that is produced from a positive side group of diethylaminoethyl bound to cellulose or Sephadex. Agarose gel-based media contain large pores as well, but their substitution ability is lower in comparison to dextrans. The ability of a medium to swell in liquid is based on the cross-linking of these substances and on the pH and ion concentrations of the buffers used. Incorporation of high temperature and pressure allows a significant increase in the efficiency of ion chromatography, along with a decrease in analysis time. Temperature has an influence on selectivity due to its effects on retention properties. The retention factor (k = (t_Rg − t_Mg)/(t_Mg − t_ext)) increases with temperature for small ions, and the opposite trend is observed for larger ions. Despite ion selectivity in different media, further research is being done to perform ion-exchange chromatography through the range of 40–175 °C. An appropriate solvent can be chosen based on observations of how column particles behave in a solvent. Using an optical microscope, one can easily distinguish a desirable dispersed state of slurry from aggregated particles. == Weak and strong ion exchangers == A "strong" ion exchanger will not lose the charge on its matrix once the column is equilibrated, so a wide range of pH buffers can be used. "Weak" ion exchangers have a range of pH values in which they will maintain their charge. If the pH of the buffer used for a weak ion-exchange column goes out of the capacity range of the matrix, the column will lose its charge distribution and the molecule of interest may be lost. Despite the smaller pH range of weak ion exchangers, they are often used in preference to strong ion exchangers because of their greater specificity. In some experiments, the retention times of weak ion exchangers are just long enough to obtain the desired data at high specificity. Resins (often termed 'beads') of ion-exchange columns may include functional groups such as weak/strong acids and weak/strong bases. There are also special columns that have resins with amphoteric functional groups that can exchange both cations and anions. Some examples of functional groups of strong ion-exchange resins are quaternary ammonium cation (Q), which is an anion exchanger, and sulfonic acid (S, -SO2OH), which is a cation exchanger. These types of exchangers can maintain their charge density over a pH range of 0–14.
Examples of functional groups of weak ion-exchange resins include diethylaminoethyl (DEAE, -C2H4N(C2H5)2), which is an anion exchanger, and carboxymethyl (CM, -CH2-COOH), which is a cation exchanger. These two types of exchangers can maintain the charge density of their columns over a pH range of 5–9. In ion chromatography, the interaction of the solute ions and the stationary phase, based on their charges, determines which ions will bind and to what degree. When the stationary phase features positive groups which attract anions, it is called an anion exchanger; when there are negative groups on the stationary phase, cations are attracted and it is a cation exchanger. The attraction between the ions and the stationary phase also depends on the resin, the organic particles used as ion exchangers. Each resin features a relative selectivity which varies based on the solute ions present, which compete to bind to the resin groups on the stationary phase. The selectivity coefficient, the equivalent of an equilibrium constant, is determined from the ratio of the concentrations of each ion in the resin and in solution. The general trend, however, is that ion exchangers prefer binding to the ion with a higher charge, smaller hydrated radius, and higher polarizability, that is, the ease with which the electron cloud of an ion is distorted by other charges. Despite this selectivity, an excess of an ion with lower selectivity introduced to the column will cause the less-favoured ion to bind more to the stationary phase, as the selectivity coefficient describes an equilibrium that allows fluctuations in the binding reaction taking place during ion-exchange chromatography. == Typical technique == A sample is introduced, either manually or with an autosampler, into a sample loop of known volume. A buffered aqueous solution known as the mobile phase carries the sample from the loop onto a column that contains some form of stationary phase material. This is typically a resin or gel matrix consisting of agarose or cellulose beads with covalently bonded charged functional groups. Equilibration of the stationary phase is needed in order to obtain the desired charge on the column. If the column is not properly equilibrated, the desired molecule may not bind strongly to the column. The target analytes (anions or cations) are retained on the stationary phase but can be eluted by increasing the concentration of a similarly charged species that displaces the analyte ions from the stationary phase. For example, in cation-exchange chromatography, the positively charged analyte can be displaced by adding positively charged sodium ions. The analytes of interest must then be detected by some means, typically by conductivity or UV/visible light absorbance. Controlling an IC system usually requires a chromatography data system (CDS). In addition to IC systems, some of these CDSs can also control gas chromatography (GC) and HPLC. == Membrane exchange chromatography == A type of ion-exchange chromatography, membrane exchange is a relatively new method of purification designed to overcome limitations of using columns packed with beads. Membrane chromatographic devices are cheap to mass-produce and disposable, unlike other chromatography devices that require maintenance and time to revalidate. There are three types of membrane absorbers that are typically used when separating substances: flat sheet, hollow fibre, and radial flow.
The most common absorber, and the best suited for membrane chromatography, is the stack of multiple flat sheets, because it has more absorbent volume. It can be used to overcome mass-transfer limitations and pressure drop, making it especially advantageous for isolating and purifying viruses, plasmid DNA, and other large macromolecules. The column is packed with microporous membranes whose internal pores contain adsorptive moieties that can bind the target protein. Adsorptive membranes are available in a variety of geometries and chemistries, which allows them to be used for purification as well as fractionation, concentration, and clarification with an efficiency around ten-fold that of beads. Membranes can be prepared through isolation of the membrane itself, where membranes are cut into squares and immobilized. A more recent method involves the use of live cells attached to a support membrane, used for the identification and clarification of signaling molecules. == Separating proteins == Ion-exchange chromatography can be used to separate proteins because they contain charged functional groups. The ions of interest (in this case charged proteins) are exchanged for other ions (usually H+) on a charged solid support. The solutes are most commonly in a liquid phase, which tends to be water. Take, for example, proteins in water, which form the liquid phase that is passed through a column. The column is commonly known as the solid phase, since it is filled with porous synthetic particles that carry a particular charge. These porous particles, also referred to as beads, may be aminated (containing amino groups) or may carry metal ions in order to have a charge. The column can be prepared using porous polymers; for macromolecules with a mass over 100,000 Da, the optimum size of the porous particle is about 1 μm. This is because slow diffusion of the solutes within the pores then does not restrict the separation quality. The beads containing positively charged groups, which attract the negatively charged proteins, are commonly referred to as anion-exchange resins. The amino acids that have negatively charged side chains at pH 7 (the pH of water) are glutamate and aspartate. The beads that are negatively charged are called cation-exchange resins, as positively charged proteins will be attracted to them. The amino acids that have positively charged side chains at pH 7 are lysine, histidine and arginine. The isoelectric point is the pH at which a compound (in this case a protein) has no net charge. A protein's isoelectric point, or pI, can be determined using the pKa values of the side chains: if the positive (amino) charges cancel out the negative (carboxyl) charges, the protein is at its pI. Using buffers instead of water for proteins that do not have a charge at pH 7 is a good idea, as it enables the manipulation of pH to alter the ionic interactions between the proteins and the beads. Weakly acidic or basic side chains are able to carry a charge if the pH is high or low enough, respectively. Separation can be achieved based on the natural isoelectric point of the protein. Alternatively, a peptide tag can be genetically added to the protein to give the protein an isoelectric point away from most natural proteins (e.g., 6 arginines for binding to a cation-exchange resin or 6 glutamates for binding to an anion-exchange resin such as DEAE-Sepharose). Elution by increasing the ionic strength of the mobile phase is more subtle.
It works because ions from the mobile phase interact with the immobilized ions on the stationary phase, thus "shielding" the stationary phase from the protein and letting the protein elute. Elution from ion-exchange columns can be sensitive to changes of a single charge (chromatofocusing). Ion-exchange chromatography is also useful in the isolation of specific multimeric protein assemblies, allowing purification of specific complexes according to both the number and the position of charged peptide tags. === Gibbs–Donnan effect === In ion-exchange chromatography, the Gibbs–Donnan effect is observed when the pH of the applied buffer and the ion exchanger differ, even by up to one pH unit. For example, in anion-exchange columns, the ion exchangers repel protons, so the pH of the buffer near the column is higher than in the rest of the solvent. As a result, an experimenter has to be careful that the protein(s) of interest is stable and properly charged at the "actual" pH. This effect comes as a result of two similarly charged particles, one from the resin and one from the solution, failing to distribute properly between the two sides; there is a selective uptake of one ion over another. For example, in a sulphonated polystyrene resin, a cation-exchange resin, the chloride ion of a hydrochloric acid buffer should equilibrate into the resin. However, since the concentration of sulphonic acid in the resin is high, the hydrogen of HCl has little tendency to enter the column. This, combined with the need for electroneutrality, leads to a minimal amount of hydrogen and chloride entering the resin. == Uses == === Clinical utility === One use of ion chromatography can be seen in argentation chromatography. Usually, silver and compounds containing acetylenic and ethylenic bonds have very weak interactions. This phenomenon has been widely tested on olefin compounds. The complexes that olefins form with silver ions are weak, being based on the overlapping of pi, sigma, and d orbitals and available electrons, and therefore cause no real changes to the double bond. This behavior was exploited to separate lipids, mainly fatty acids, from mixtures into fractions with differing numbers of double bonds using silver ions. The ion resins were impregnated with silver ions and then exposed to various acids (silicic acid) to elute fatty acids of different characteristics. Detection limits as low as 1 μM can be obtained for alkali metal ions. Ion chromatography may also be used for the measurement of HbA1c and porphyrins, and for water purification. Ion-exchange resins (IER) have been widely used, especially in medicine, due to their high capacity and the uncomplicated separation process. One such use is the employment of ion-exchange resins in kidney dialysis, a method used to separate blood elements using a cellulose-membraned artificial kidney. Another clinical application of ion chromatography is the rapid anion-exchange chromatography technique used to separate creatine kinase (CK) isoenzymes from human serum and tissue sourced from autopsy material (mostly CK-rich tissues such as cardiac muscle and brain). These isoenzymes include MM, MB, and BB, which all carry out the same function despite having different amino acid sequences. The function of these isoenzymes is to convert creatine, using ATP, into phosphocreatine, releasing ADP.
Mini columns were filled with DEAE-Sephadex A-50 and eluted with Tris buffer containing sodium chloride at various concentrations (each concentration chosen advantageously to manipulate elution). Human tissue extracts were applied to the columns for separation. All fractions were analyzed for total CK activity, and it was found that each source of CK isoenzymes had a characteristic set of isoenzymes. First CK-MM was eluted, then CK-MB, followed by CK-BB. Therefore, the isoenzymes found in each sample could be used to identify the source, as they were tissue specific. Using the information from these results, correlations could be made between the diagnosis of patients and the CK isoenzymes found in the most abundant activity. From the findings, about 35 of the 71 patients studied had suffered a heart attack (myocardial infarction) and also contained abundant amounts of the CK-MM and CK-MB isoenzymes. The findings further showed that many other diagnoses, including renal failure, cerebrovascular disease, and pulmonary disease, were associated with only the CK-MM isoenzyme and no other isoenzyme. The results from this study indicate correlations between various diseases and the CK isoenzymes found, which confirmed previous test results obtained using various techniques. Studies of CK-MB found in heart attack victims have expanded since this study and application of ion chromatography. === Industrial applications === Since 1975 ion chromatography has been widely used in many branches of industry. The main advantages are reliability, very good accuracy and precision, high selectivity, high speed, high separation efficiency, and low cost of consumables. The most significant developments related to ion chromatography are new sample preparation methods; improvements in the speed and selectivity of analyte separation; lower limits of detection and quantification; an extended scope of applications; the development of new standard methods; miniaturization; and the extension of analysis to new groups of substances. Ion chromatography allows for quantitative testing of electrolytes and proprietary additives in electroplating baths. It is an advance over qualitative hull cell testing and less accurate UV testing. Ions, catalysts, brighteners and accelerators can be measured. Ion-exchange chromatography has gradually become a widely known, universal technique for the detection of both anionic and cationic species. Applications for such purposes have been developed, or are under development, for a variety of fields of interest, and in particular the pharmaceutical industry. The usage of ion-exchange chromatography in pharmaceuticals has increased in recent years, and in 2006 a chapter on ion-exchange chromatography was officially added to the United States Pharmacopeia–National Formulary (USP-NF). Furthermore, in the 2009 release of the USP-NF, the United States Pharmacopeia made several ion chromatography analyses available using two techniques: conductivity detection and pulsed amperometric detection. The majority of these applications are primarily used for measuring and analyzing residual limits in pharmaceuticals, including limits for oxalate, iodide, sulfate, sulfamate, and phosphate, as well as various electrolytes including potassium and sodium. In total, the 2009 edition of the USP-NF officially released twenty-eight methods of detection for the analysis of active compounds, or components of active compounds, using either conductivity detection or pulsed amperometric detection.
=== Drug development === There has been growing interest in the application of IC in the analysis of pharmaceutical drugs. IC is used in different aspects of product development and quality control testing. For example, IC is used to improve the stability and solubility properties of active pharmaceutical drug molecules, as well as in detection systems that have a higher tolerance for organic solvents. IC has been used for the determination of analytes as part of a dissolution test. For instance, calcium dissolution tests have shown that other ions present in the medium can be well resolved among themselves and also from the calcium ion. Therefore, IC has been employed for drugs in the form of tablets and capsules in order to determine the amount of drug dissolved over time. IC is also widely used for the detection and quantification of excipients, or inactive ingredients, used in pharmaceutical formulations. Sugars and sugar alcohols in such formulations have been detected by IC because these polar groups are resolved on an ion-exchange column. IC methodology has also been established for the analysis of impurities in drug substances and products. Impurities, or any components that are not part of the drug chemical entity, are evaluated, and they give insight into the maximum and minimum amounts of drug that should be administered to a patient per day. == See also == Anion-exchange chromatography Chromatofocusing High performance liquid chromatography Isoelectric point == References == == Bibliography == Small, Hamish (1989). Ion chromatography. New York: Plenum Press. ISBN 978-0-306-43290-3. Weiss, Tatjana; Weiss, Joachim (2005). Handbook of Ion Chromatography. Weinheim: Wiley-VCH. ISBN 978-3-527-28701-7. Gjerde, Douglas T.; Fritz, James S. (2000). Ion Chromatography. Weinheim: Wiley-VCH. ISBN 978-3-527-29914-0. Jackson, Peter; Haddad, Paul R. (1990). Ion chromatography: principles and applications. Amsterdam: Elsevier. ISBN 978-0-444-88232-5. Mercer, Donald W (1974). "Separation of tissue and serum creatine kinase isoenzymes by ion-exchange column chromatography". Clinical Chemistry. 20 (1): 36–40. doi:10.1093/clinchem/20.1.36. PMID 4809470. Morris, L. J. (1966). "Separations of lipids by silver ion chromatography". Journal of Lipid Research. 7 (6): 717–732. doi:10.1016/S0022-2275(20)38948-3. PMID 5339485. Ghosh, Raja (2002). "Protein separation using membrane chromatography: opportunities and challenges". Journal of Chromatography A. 952 (1): 13–27. doi:10.1016/s0021-9673(02)00057-2. PMID 12064524. == External links == Media related to Ion chromatography at Wikimedia Commons
Wikipedia/Ion-exchange_chromatography
Micellar liquid chromatography (MLC) is a form of reversed-phase liquid chromatography that uses aqueous micellar solutions as the mobile phase. == Theory == The use of micelles in high-performance liquid chromatography was first introduced by Armstrong and Henry in 1980. The technique is used mainly to enhance the retention and selectivity of various solutes that would otherwise be inseparable or poorly resolved. Micellar liquid chromatography (MLC) has been used in a variety of applications including separation of mixtures of charged and neutral solutes, direct injection of serum and other physiological fluids, analysis of pharmaceutical compounds, separation of enantiomers, analysis of inorganic and organometallic compounds, and a host of others. One of the main drawbacks of the technique is the reduced efficiency that is caused by the micelles. Despite the sometimes poor efficiency, MLC is a better choice than ion-exchange LC or ion-pairing LC for the separation of charged molecules and of mixtures of charged and neutral species. Some of the aspects which will be discussed are the theoretical aspects of MLC, the use of models in predicting the retention characteristics of MLC, the effect of micelles on efficiency and selectivity, and general applications of MLC. Reversed-phase high-performance liquid chromatography (RP-HPLC) involves a non-polar stationary phase, often a hydrocarbon chain, and a polar mobile or liquid phase. The mobile phase generally consists of an aqueous portion with an organic addition, such as methanol or acetonitrile. When a solution of analytes is injected into the system, the components begin to partition out of the mobile phase and interact with the stationary phase. Each component interacts with the stationary phase in a different manner depending upon its polarity and hydrophobicity. In reversed-phase HPLC, the solute with the greatest polarity interacts less with the stationary phase and spends more time in the mobile phase. As the polarity of the components decreases, the time spent in the column increases. Thus, a separation of components is achieved based on polarity. The addition of micelles to the mobile phase introduces a third phase into which the solutes may partition. == Micelles == Micelles are composed of surfactant, or detergent, monomers with a hydrophobic moiety, or tail, on one end, and a hydrophilic moiety, or head group, on the other. The polar head group may be anionic, cationic, zwitterionic, or non-ionic. When the concentration of a surfactant in solution reaches its critical micelle concentration (CMC), it forms micelles, which are aggregates of the monomers. The CMC is different for each surfactant, as is the number of monomers which make up the micelle, termed the aggregation number (AN). Common detergents used to form micelles differ widely in their CMC and AN values. Many of the characteristics of micelles differ from those of bulk solvents. For example, micelles are, by nature, spatially heterogeneous, with a hydrocarbon, nearly anhydrous core and a highly solvated, polar head group. They have a high surface-to-volume ratio due to their small size and generally spherical shape. Their surrounding environment (pH, ionic strength, buffer ion, presence of a co-solvent, and temperature) has an influence on their size, shape, critical micelle concentration, aggregation number and other properties. Another important property of micelles is the Krafft point, the temperature at which the solubility of the surfactant is equal to its CMC.
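Because micelles only exist above the CMC, the amount of surfactant actually present as micelles (the quantity CM used later in the retention models) is the total surfactant concentration minus the CMC. The minimal sketch below applies that relation; the CMC figures are approximate literature values quoted for illustration, and the prepared concentrations are arbitrary examples.

```python
# Minimal sketch relating total surfactant concentration to micellized surfactant.
# The CMC values are approximate literature figures for water at 25 C; the relation
# C_M = C_total - CMC (above the CMC) is the one used in the MLC retention models.

CMC_mM = {"SDS": 8.2, "CTAB": 0.9}   # approximate critical micelle concentrations, mM

def micellar_concentration(surfactant, total_mM):
    """Concentration of surfactant present as micelles (mM); zero below the CMC."""
    return max(0.0, total_mM - CMC_mM[surfactant])

for surfactant, total in [("SDS", 50.0), ("SDS", 5.0), ("CTAB", 20.0)]:
    cm = micellar_concentration(surfactant, total)
    state = "micelles present" if cm > 0 else "below CMC, no micelles"
    print(f"{surfactant} at {total} mM: C_M = {cm:.1f} mM ({state})")
```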
For HPLC applications involving micelles, it is best to choose a surfactant with a low Krafft point and a low CMC. A high CMC would require a high concentration of surfactant, which would increase the viscosity of the mobile phase, an undesirable condition. Additionally, the Krafft point should be well below room temperature to avoid having to apply heat to the mobile phase. To avoid potential interference with absorption detectors, a surfactant should also have a small molar absorptivity at the chosen wavelength of analysis. Light scattering should not be a concern due to the small size, a few nanometers, of the micelle. The effect of organic additives on micellar properties is another important consideration. A small amount of organic solvent is often added to the mobile phase to help improve efficiency and to improve the separation of compounds. Care needs to be taken when determining how much organic solvent to add. Too high a concentration of the organic may cause the micelle to disperse, as it relies on hydrophobic effects for its formation. The maximum concentration of organic solvent depends on the organic solvent itself and on the micelle. This information is generally not known precisely, but a generally accepted practice is to keep the volume percentage of organic solvent below 15–20%. == Research == Fischer and Jandera studied the effect of changing the concentration of methanol on the CMC values of three commonly used surfactants. Two cationic surfactants, hexadecyltrimethylammonium bromide (CTAB) and N-(α-carbethoxypentadecyl)trimethylammonium bromide (Septonex), and one anionic surfactant, sodium dodecyl sulphate (SDS), were chosen for the experiment. Generally speaking, the CMC increased as the concentration of methanol increased. It was concluded that the distribution of the surfactant between the bulk mobile phase and the micellar phase shifts toward the bulk as the methanol concentration increases. For CTAB, the rise in CMC is greatest from 0–10% methanol, and is nearly constant from 10–20%. Above 20% methanol, the micelles disaggregate and do not exist. For SDS, the CMC values remain unaffected below 10% methanol, but begin to increase as the methanol concentration is further increased. Disaggregation occurs above 30% methanol. Finally, for Septonex, only a slight increase in CMC is observed up to 20%, with disaggregation occurring above 25%. As stated above, the mobile phase in MLC consists of micelles in an aqueous solvent, usually with a small amount of organic modifier added to complete the mobile phase. A typical reversed-phase alkyl-bonded stationary phase is used. The first discussion of the thermodynamics involved in the retention mechanism was published by Armstrong and Nome in 1981. In MLC, there are three partition coefficients which must be taken into account. The solute will partition between the water and the stationary phase (KSW), the water and the micelles (KMW), and the micelles and the stationary phase (KSM). Armstrong and Nome derived an equation describing the partition coefficients in terms of the retention factor, formerly called the capacity factor, k′. In HPLC, the capacity factor represents the molar ratio of the solute in the stationary phase to that in the mobile phase. The capacity factor is easily measured based on the retention times of the compound and of an unretained compound. The equation, as rewritten by Guermouche et al.,
is presented here:

1/k′ = [n(KMW − 1)/(f·KSW)]·CM + 1/(f·KSW)

where:
k′ is the capacity factor of the solute
KSW is the partition coefficient of the solute between the stationary phase and the water
KMW is the partition coefficient of the solute between the micelles and the water
f is the phase volume ratio (stationary phase volume/mobile phase volume)
n is the molar volume of the surfactant
CM is the concentration of micelles in the mobile phase (total surfactant concentration − critical micelle concentration)

A plot of 1/k′ versus CM gives a straight line in which KSW can be calculated from the intercept and KMW can be obtained from the ratio of the slope to the intercept. Finally, KSM can be obtained from the ratio of the other two partition coefficients: KSM = KSW/KMW. KMW is independent of any effects from the stationary phase, assuming the same micellar mobile phase. The validity of the retention mechanism proposed by Armstrong and Nome has been repeatedly confirmed experimentally. However, some variations and alternative theories have also been proposed. Jandera and Fischer developed equations to describe the dependence of retention behavior on the change in micellar concentration. They found that the retention of most compounds tested decreased with increasing concentrations of micelles. From this, it can be surmised that the compounds associate with the micelles, as they spend less time associated with the stationary phase. Foley proposed a retention model similar to that of Armstrong and Nome, based on a general model for secondary chemical equilibria in liquid chromatography. While this model was developed in earlier work and could be used for any secondary chemical equilibrium, such as acid-base equilibria and ion-pairing, Foley further refined the model for MLC. When an equilibrant (X), in this case the surfactant, is added to the mobile phase, a secondary equilibrium is created in which an analyte exists both as free analyte (A) and complexed with the equilibrant (AX). The two forms are retained by the stationary phase to different extents, thus allowing the retention to be varied by adjusting the concentration of equilibrant (micelles). The resulting equation, solved for the capacity factor in terms of partition coefficients, is much the same as that of Armstrong and Nome:

1/k′ = (KSM/k′S)·[M] + 1/k′S

where:
k′ is the observed capacity factor of the solute (complexed and free)
k′S is the capacity factor of the free solute
KSM is the partition coefficient of the solute between the stationary phase and the micelle
[M] may be either the concentration of surfactant or the concentration of micelles

Foley used the above equation to determine the solute-micelle association constants and free-solute retention factors for a variety of solutes with different surfactants and stationary phases. From these data, it is possible to predict the type and optimum concentration of surfactant needed for a given solute or solutes. Foley has not been the only researcher interested in determining solute-micelle association constants. A review article by Marina and Garcia, with 53 references, discusses the usefulness of obtaining solute-micelle association constants. The association constants for two solutes can be used to help understand the retention mechanism. The separation factor of two solutes, α, can be expressed as KSM1/KSM2.
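To show how the first relation above is used in practice, the sketch below fits 1/k′ against CM for a set of hypothetical measurements and back-calculates the partition coefficients. The retention factors, micelle concentrations, molar volume n, and phase ratio f are all invented for illustration and are not values from the studies cited.

```python
# Minimal sketch: extracting KSW and KMW from the Armstrong-Nome relation
# 1/k' = [n*(KMW - 1)/(f*KSW)]*CM + 1/(f*KSW).
# The data and the values of n and f below are hypothetical.

import numpy as np

CM = np.array([0.02, 0.04, 0.06, 0.08, 0.10])   # micellized surfactant, mol/L
k_prime = np.array([9.1, 5.3, 3.7, 2.9, 2.4])   # hypothetical retention factors
n = 0.25    # molar volume of surfactant, L/mol (assumed)
f = 0.3     # phase volume ratio (assumed)

slope, intercept = np.polyfit(CM, 1.0 / k_prime, 1)   # straight line: 1/k' vs CM

KSW = 1.0 / (f * intercept)          # intercept = 1/(f*KSW)
KMW = 1.0 + slope * f * KSW / n      # slope = n*(KMW - 1)/(f*KSW)
KSM = KSW / KMW                      # ratio of the other two coefficients

print(f"KSW = {KSW:.1f}, KMW = {KMW:.1f}, KSM = {KSM:.3f}")
```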
If the experimental separation factor α coincides with the ratio of the two solute-micelle partition coefficients, it can be assumed that retention occurs through a direct transfer from the micellar phase to the stationary phase. In addition, calculation of α allows the separation selectivity to be predicted before the analysis is performed, provided the two coefficients are known. The desire to predict retention behavior and selectivity has led to the development of several mathematical models. Changes in pH, surfactant concentration, and concentration of organic modifier play a significant role in determining the chromatographic separation. Often one or more of these parameters need to be optimized to achieve the desired separation, yet the optimum parameters must take all three variables into account simultaneously. The review by Garcia-Alvarez-Coque et al. mentioned several successful models for varying scenarios, a few of which will be mentioned here. The classic models by Armstrong and Nome and by Foley are used to describe the general cases. Foley's model applies to many cases and has been experimentally verified for ionic, neutral, polar and nonpolar solutes; anionic, cationic, and non-ionic surfactants; and C8, C18, and cyano stationary phases. The model begins to deviate for very strongly and very weakly retained solutes. Strongly retained solutes may become irreversibly bound to the stationary phase, whereas weakly retained solutes may elute in the column void volume. Other models, proposed by Arunyanart and Cline-Love and by Rodgers and Khaledi, describe the effect of pH on the retention of weak acids and bases. These authors derived equations relating pH and micellar concentration to retention. As the pH varies, sigmoidal behavior is observed for the retention of acidic and basic species. This model has been shown to accurately predict retention behavior. Still other models predict behavior in hybrid micellar systems using equations or model behavior based on controlled experimentation. Additionally, models accounting for the simultaneous effect of pH, micelle concentration, and organic modifier concentration have been suggested. These models allow further enhancement of the optimization of the separation of weak acids and bases. One research group, Rukhadze et al., derived a first-order linear relationship describing the influence of micelle concentration, organic modifier concentration, and pH on the selectivity and resolution of seven barbiturates. The researchers discovered that a second-order mathematical equation fit the data more precisely. The derivations and experimental details are beyond the scope of this discussion. The model was successful in predicting the experimental conditions necessary to achieve a separation of compounds which are traditionally difficult to resolve. Jandera, Fischer, and Effenberger approached the modeling problem in yet another way. The model used was based on lipophilicity and polarity indices of solutes. The lipophilicity index relates a given solute to a hypothetical number of carbon atoms in an alkyl chain. It is based on, and depends on, a calibration series determined experimentally. The lipophilicity index should be independent of the stationary phase and of the organic modifier concentration. The polarity index is a measure of the polarity of the solute-solvent interactions. It depends strongly on the organic solvent and somewhat on the polar groups present in the stationary phase. Twenty-three compounds were analyzed with varying mobile phases and compared to the lipophilicity and polarity indices.
The results showed that the model could be applied to MLC, but better predictive behavior was found with concentrations of surfactant below the CMC (sub-micellar). A final type of model based on the molecular properties of a solute is a branch of quantitative structure-activity relationships (QSAR). QSAR studies attempt to correlate the biological activity of drugs, or a class of drugs, with structure. The normally accepted means of uptake for a drug, or its metabolite, is through partitioning into lipid bilayers. The descriptor most often used in QSAR to determine the hydrophobicity of a compound is the octanol-water partition coefficient, log P. MLC provides an attractive and practical alternative to QSAR. When micelles are added to a mobile phase, many similarities exist between the micellar mobile phase/stationary phase system and the biological membrane/water interface. In MLC, the stationary phase becomes modified by the adsorption of surfactant monomers, which are structurally similar to the membranous hydrocarbon chains in the biological model. Additionally, the hydrophilic/hydrophobic interactions of the micelles are similar to those in the polar regions of a membrane. Thus, the development of quantitative retention-activity relationships (QRAR) has become widespread. Escuder-Gilabert et al. tested three different QRAR retention models on ionic compounds. Several classes of compounds were tested, including catecholamines, local anesthetics, diuretics, and amino acids. The best model relating log K and log P was found to be one in which the total molar charge of a compound at a given pH is included as a variable. This model proved to give fairly accurate predictions of log P, with R > 0.9. Other studies have been performed which develop predictive QRAR models for tricyclic antidepressants and barbiturates. == Efficiency == The main limitation in the use of MLC is the reduction in efficiency (peak broadening) that is observed when purely aqueous micellar mobile phases are used. Several explanations for the poor efficiency have been put forward. Poor wetting of the stationary phase by the micellar aqueous mobile phase, slow mass transfer between the micelles and the stationary phase, and poor mass transfer within the stationary phase have all been postulated as possible causes. To enhance efficiency, the most common approaches have been the addition of small amounts of isopropyl alcohol and an increase in temperature. A review by Berthod studied the combined theories presented above and applied the Knox equation to independently determine the cause of the reduced efficiency. The Knox equation is commonly used in HPLC to describe the different contributions to the overall band broadening of a solute. The Knox equation is expressed as:

h = A·n^(1/3) + B/n + C·n

where:
h is the reduced plate height (plate height/stationary phase particle diameter)
n is the reduced mobile phase linear velocity (velocity × stationary phase particle diameter/solute diffusion coefficient in the mobile phase)
A, B, and C are constants related to solute flow anisotropy (eddy diffusion), molecular longitudinal diffusion, and mass transfer properties, respectively.

Berthod's use of the Knox equation to experimentally determine which of the proposed theories was most correct led him to the following conclusions. The flow anisotropy in micellar mobile phases seems to be much greater than in traditional hydro-organic mobile phases of similar viscosity.
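As a purely numerical illustration of the Knox equation above, the following sketch evaluates the reduced plate height over a range of reduced velocities and locates the minimum, i.e. the most efficient flow rate. The A, B, and C coefficients are invented for illustration and are not values from Berthod's work.

```python
# Minimal numerical sketch of the Knox equation h = A*n**(1/3) + B/n + C*n.
# The coefficient values are hypothetical, chosen only to show how the reduced
# plate height passes through a minimum as the reduced velocity changes.

A, B, C = 1.0, 2.0, 0.05   # assumed eddy-diffusion, longitudinal-diffusion, mass-transfer terms

def reduced_plate_height(n):
    """Reduced plate height for reduced mobile phase linear velocity n."""
    return A * n ** (1.0 / 3.0) + B / n + C * n

velocities = [0.5 + 0.1 * i for i in range(100)]        # reduced velocities 0.5 to 10.4
best_n = min(velocities, key=reduced_plate_height)      # velocity giving the lowest h

print(f"minimum reduced plate height ~ {reduced_plate_height(best_n):.2f} at n ~ {best_n:.1f}")
# A larger A or C term (as reported for micellar mobile phases) raises the whole
# curve, i.e. lowers the efficiency at every flow rate.
```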
This greater anisotropy is likely due to the partial clogging of the stationary phase pores by adsorbed surfactant molecules. Raising the column temperature served to decrease both the viscosity of the mobile phase and the amount of adsorbed surfactant. Both results reduce the A term and the amount of eddy diffusion, and thereby increase efficiency. The increase in the B term, related to longitudinal diffusion, is associated with the decrease in the solute diffusion coefficient in the mobile phase, DM, due to the presence of the micelles, and with an increase in the capacity factor, k′. Again, this is related to surfactant adsorption on the stationary phase causing a dramatic decrease in the solute diffusion coefficient in the stationary phase, DS. Again, an increase in temperature, now coupled with the addition of alcohol to the mobile phase, drastically decreases the amount of adsorbed surfactant. In turn, both actions reduce the C term caused by slow mass transfer from the stationary phase to the mobile phase. Further optimization of efficiency can be gained by reducing the flow rate to one closely matched to the optimum derived from the Knox equation. Overall, the three proposed theories all seem to contribute to the poor efficiency observed, which can be partially countered by the addition of organic modifiers, particularly alcohols, and by increasing the column temperature. == Applications == Despite the reduced efficiency versus reversed-phase HPLC, hundreds of applications have been reported using MLC. One of the most advantageous is the ability to directly inject physiological fluids. Micelles have the ability to solubilize proteins, which enables MLC to be used for analyzing untreated biological fluids such as plasma, serum, and urine. Martinez et al. found MLC to be highly useful in analyzing a class of drugs called β-antagonists (so-called beta-blockers) in urine samples. The main advantage of the use of MLC with this type of sample is the great saving of time in sample preparation. Alternative methods of analysis, including reversed-phase HPLC, require lengthy extraction and sample work-up procedures before analysis can begin. With MLC, direct injection is often possible, with retention times of less than 15 minutes for the separation of up to nine β-antagonists. Another application compared reversed-phase HPLC with MLC for the analysis of desferrioxamine in serum. Desferrioxamine (DFO) is a commonly used drug for the removal of excess iron in patients with chronically or acutely elevated iron levels. The analysis of DFO along with its chelated complexes, Fe(III)-DFO and Al(III)-DFO, has proven difficult at best in previous attempts. This study found that direct injection of the serum was possible with MLC, versus the ultrafiltration step necessary in HPLC. The analysis proved to have difficulties with the separation of the chelated DFO complexes and with the sensitivity levels for DFO itself when MLC was applied. The researchers found that, in this case, reversed-phase HPLC was a better, more sensitive technique despite the time savings of direct injection. Analysis of pharmaceuticals by MLC is also gaining popularity. The selectivity and peak shape of MLC are much enhanced compared with commonly used ion-pair chromatography. MLC mimics, yet enhances, the selectivity offered by ion-pairing reagents for the separation of active ingredients in pharmaceutical drugs. For basic drugs, MLC reduces the excessive peak tailing frequently observed in ion-pairing.
Hydrophilic drugs, which are often unretained in conventional HPLC, are retained in MLC due to solubilization into the micelles. Drugs commonly found in cold medications, such as acetaminophen, L-ascorbic acid, phenylpropanolamine HCl, tipepidine hibenzate, and chlorpheniramine maleate, have been successfully separated with good peak shape using MLC. Additional basic drugs, including many narcotics such as codeine and morphine, have also been successfully separated using MLC. Another novel application of MLC involves the separation and analysis of inorganic compounds, mostly simple ions. This is a relatively new area for MLC, but it has seen some promising results. MLC has been observed to provide better selectivity for inorganic ions than ion-exchange or ion-pairing chromatography. While this application is still in the early stages of development, the possibility exists for novel, much-enhanced separations of inorganic species. Since the technique was first reported in 1980, micellar liquid chromatography has been used in hundreds of applications. This micelle-controlled technique provides unique opportunities for solving complicated separation problems. Despite the poor efficiency of MLC, it has been successfully used in many applications. The future use of MLC appears to be extremely advantageous in the areas of physiological fluids, pharmaceuticals, and even inorganic ions. The technique has proven superior to ion-pairing and ion-exchange for many applications. As new approaches are developed to combat the poor efficiency of MLC, its application is sure to spread and gain more acceptance. == References ==
Wikipedia/Micellar_liquid_chromatography
Anion-exchange chromatography is a process that separates substances based on their charges using an ion-exchange resin containing positively charged groups, such as diethylaminoethyl (DEAE) groups. In solution, the resin is coated with positively charged counter-ions (cations). Anion-exchange resins will bind to negatively charged molecules, displacing the counter-ion. Anion-exchange chromatography is commonly used to purify proteins, amino acids, sugars/carbohydrates and other acidic substances with a negative charge at higher pH levels. The tightness of the binding between the substance and the resin is based on the strength of the negative charge of the substance. == General technique for protein purification == A slurry of resin, such as DEAE-Sephadex, is poured into the column. The matrix used is insoluble, with covalently attached charged groups. These charged groups are referred to as exchangers, i.e. cation and anion exchangers. After it settles, the column is pre-equilibrated in buffer before the protein mixture is applied. DEAE-Sephadex is a positively charged resin that has electrostatic interactions with negatively charged molecules, making them elute later than the positively charged molecules in the sample of interest. This is a separation technique widely used to isolate specific proteins or enzymes from the body. Unbound proteins are collected in the flow-through and/or in subsequent buffer washes. Proteins that bind to the positively charged resin are retained and can be eluted in one of two ways. First, the salt concentration in the elution buffer can be gradually increased; the negative ions in the salt solution (e.g. Cl−) compete with the protein in binding to the resin. Second, the pH of the solution can be gradually decreased, which results in a more positive charge on the protein, releasing it from the resin. Both of these techniques displace the negatively charged protein, which is then eluted with the buffer into test-tube fractions. The separation of proteins depends on differences in total charge. The composition of ionizable side-chain groups determines the total charge of a protein at a particular pH. At the isoelectric point (pI), the total charge on the protein is 0 and it will not bind to the matrix. If the pH is above the pI, the protein will have a negative charge and bind to the matrix in an anion-exchange column. The stability of the protein at pH values above or below the pI determines whether an anion-exchange column or a cation-exchange column should be used. If it is stable at pH values below the pI, a cation-exchange column can be used. If it is stable at pH values above the pI, an anion-exchange column can be used. == References ==
Wikipedia/Anion-exchange_chromatography
In chemical analysis, chromatography is a laboratory technique for the separation of a mixture into its components. The mixture is dissolved in a fluid solvent (gas or liquid) called the mobile phase, which carries it through a system (a column, a capillary tube, a plate, or a sheet) on which a material called the stationary phase is fixed. Because the different constituents of the mixture tend to have different affinities for the stationary phase and are retained for different lengths of time depending on their interactions with its surface sites, the constituents travel at different apparent velocities in the mobile fluid, causing them to separate. The separation is based on the differential partitioning between the mobile and the stationary phases. Subtle differences in a compound's partition coefficient result in differential retention on the stationary phase and thus affect the separation. Chromatography may be preparative or analytical. The purpose of preparative chromatography is to separate the components of a mixture for later use, and it is thus a form of purification. This process is associated with higher costs due to its mode of production. Analytical chromatography is normally done with smaller amounts of material and is for establishing the presence or measuring the relative proportions of analytes in a mixture. The two types are not mutually exclusive. == Etymology and pronunciation == Chromatography is derived from Greek χρῶμα chrōma, which means "color", and γράφειν gráphein, which means "to write". The combination of these two terms was directly inherited from the invention of the technique first used to separate biological pigments. == History == The method was developed by the botanist Mikhail Tsvet in 1901–1905 at the universities of Kazan and Warsaw. He developed the technique and coined the term chromatography in the first decade of the 20th century, primarily for the separation of plant pigments such as chlorophyll, carotenes, and xanthophylls. Since these components separate in bands of different colors (green, orange, and yellow, respectively), they directly inspired the name of the technique. New types of chromatography developed during the 1930s and 1940s made the technique useful for many separation processes. Chromatography developed substantially as a result of the work of Archer John Porter Martin and Richard Laurence Millington Synge during the 1940s and 1950s, for which they won the 1952 Nobel Prize in Chemistry. They established the principles and basic techniques of partition chromatography, and their work encouraged the rapid development of several chromatographic methods: paper chromatography, gas chromatography, and what would become known as high-performance liquid chromatography. Since then, the technology has advanced rapidly. Researchers found that the main principles of Tsvet's chromatography could be applied in many different ways, resulting in the different varieties of chromatography described below. Advances are continually improving the technical performance of chromatography, allowing the separation of increasingly similar molecules. == Terms == Analyte – the substance to be separated during chromatography. It is also normally what is needed from the mixture. Analytical chromatography – the use of chromatography to determine the existence and possibly also the concentration of analyte(s) in a sample. Bonded phase – a stationary phase that is covalently bonded to the support particles or to the inside wall of the column tubing.
Chromatogram – the visual output of the chromatograph. In the case of an optimal separation, different peaks or patterns on the chromatogram correspond to different components of the separated mixture. Plotted on the x-axis is the retention time, and plotted on the y-axis is a signal (for example obtained by a spectrophotometer, mass spectrometer or a variety of other detectors) corresponding to the response created by the analytes exiting the system. In the case of an optimal system, the signal is proportional to the concentration of the specific analyte separated. Chromatograph – an instrument that enables a sophisticated separation, e.g. gas chromatographic or liquid chromatographic separation. Chromatography – a physical method of separation that distributes the components to be separated between two phases, one stationary (the stationary phase), the other (the mobile phase) moving in a definite direction. Eluent (sometimes spelled eluant) – the solvent or solvent mixture used in elution chromatography; it is synonymous with mobile phase. Eluate – the mixture of solute (see Eluite) and solvent (see Eluent) exiting the column. Effluent – the stream flowing out of a chromatographic column. In practice, it is used synonymously with eluate, but the term more precisely refers to the stream independent of the separation taking place. Eluite – a more precise term for solute or analyte. It is a sample component leaving the chromatographic column. Eluotropic series – a list of solvents ranked according to their eluting power. Immobilized phase – a stationary phase that is immobilized on the support particles, or on the inner wall of the column tubing. Mobile phase – the phase that moves in a definite direction. It may be a liquid (LC and capillary electrochromatography, CEC), a gas (GC), or a supercritical fluid (supercritical-fluid chromatography, SFC). The mobile phase consists of the sample being separated/analyzed and the solvent that moves the sample through the column. In the case of HPLC the mobile phase consists of a non-polar solvent(s) such as hexane in normal phase, or a polar solvent such as methanol in reverse-phase chromatography, and the sample being separated. The mobile phase moves through the chromatography column (the stationary phase), where the sample interacts with the stationary phase and is separated. Preparative chromatography – the use of chromatography to purify sufficient quantities of a substance for further use, rather than analysis. Retention time – the characteristic time it takes for a particular analyte to pass through the system (from the column inlet to the detector) under set conditions; a small illustration follows this list. See also: Kovats' retention index Sample – the matter analyzed in chromatography. It may consist of a single component or it may be a mixture of components. When the sample is treated in the course of an analysis, the phase or the phases containing the analytes of interest is/are referred to as the sample, whereas everything not of interest separated from the sample before or in the course of the analysis is referred to as waste. Solute – the sample components in partition chromatography. Solvent – any substance capable of solubilizing another substance, and especially the liquid mobile phase in liquid chromatography. Stationary phase – the substance fixed in place for the chromatography procedure. Examples include the silica layer in thin-layer chromatography. Detector – the instrument used for qualitative and quantitative detection of analytes after separation.
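As a small illustration of the terms chromatogram and retention time defined above, the sketch below builds a synthetic signal-versus-time trace and reads off the retention time of its largest peak. The peak positions, widths, and heights are arbitrary numbers used only for the illustration.

```python
# Minimal sketch tying together the terms above: a chromatogram is signal versus
# time, and the retention time of an analyte is where its peak maximum appears.
# The trace below is synthetic (two Gaussian peaks), purely for illustration.

import math

def gaussian(t, center, width, height):
    return height * math.exp(-((t - center) ** 2) / (2 * width ** 2))

times = [i * 0.01 for i in range(1000)]                                  # 0 to 10 minutes
signal = [gaussian(t, 2.5, 0.1, 1.0) + gaussian(t, 6.0, 0.15, 0.6) for t in times]

# Retention time of the largest peak: the time of the global signal maximum.
t_r = times[signal.index(max(signal))]
print(f"retention time of the major analyte: {t_r:.2f} min")
```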
Chromatography is based on the concept of the partition coefficient. Any solute partitions between two immiscible solvents. When one solvent is made immobile (by adsorption on a solid support matrix) and another mobile, the result is the most common application of chromatography. If the matrix support, or stationary phase, is polar (e.g. cellulose, silica, etc.) it is forward-phase (normal-phase) chromatography. Otherwise the technique is known as reversed phase, where a non-polar stationary phase (e.g., a non-polar derivative such as C-18) is used. == Techniques by chromatographic bed shape == === Column chromatography === Column chromatography is a separation technique in which the stationary bed is within a tube. The particles of the solid stationary phase or the support coated with a liquid stationary phase may fill the whole inside volume of the tube (packed column) or be concentrated on or along the inside tube wall, leaving an open, unrestricted path for the mobile phase in the middle part of the tube (open tubular column). Differences in rates of movement through the medium translate into different retention times for the components of the sample. In 1978, W. Clark Still introduced a modified version of column chromatography called flash column chromatography (flash). The technique is very similar to traditional column chromatography, except that the solvent is driven through the column by applying positive pressure. This allowed most separations to be performed in less than 20 minutes, with improved separations compared to the old method. Modern flash chromatography systems are sold as pre-packed plastic cartridges, and the solvent is pumped through the cartridge. Systems may also be linked with detectors and fraction collectors, providing automation. The introduction of gradient pumps resulted in quicker separations and less solvent usage. In expanded bed adsorption, a fluidized bed is used, rather than a solid phase made by a packed bed. This allows omission of initial clearing steps such as centrifugation and filtration for culture broths or slurries of broken cells. Phosphocellulose chromatography utilizes the binding affinity of many DNA-binding proteins for phosphocellulose. The stronger a protein's interaction with DNA, the higher the salt concentration needed to elute that protein. === Planar chromatography === Planar chromatography is a separation technique in which the stationary phase is present as or on a plane. The plane can be a paper, serving as such or impregnated with a substance as the stationary bed (paper chromatography), or a layer of solid particles spread on a support such as a glass plate (thin-layer chromatography). Different compounds in the sample mixture travel different distances according to how strongly they interact with the stationary phase as compared to the mobile phase. The specific retention factor (Rf) of each chemical can be used to aid in the identification of an unknown substance. ==== Paper chromatography ==== Paper chromatography is a technique that involves placing a small dot or line of sample solution onto a strip of chromatography paper. The paper is placed in a container with a shallow layer of solvent and sealed. As the solvent rises through the paper, it meets the sample mixture, which starts to travel up the paper with the solvent. This paper is made of cellulose, a polar substance, and the compounds within the mixture travel further if they are less polar. More polar substances bond with the cellulose paper more quickly, and therefore do not travel as far.
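As a small worked example of the retention factor (Rf) used in planar chromatography, the sketch below computes Rf values from hypothetical spot and solvent-front distances; the distances are invented for illustration only.

```python
# Minimal sketch of the retention factor (Rf) in planar chromatography:
# Rf = distance travelled by the compound / distance travelled by the solvent front.
# The distances below are hypothetical measurements.

def retention_factor(spot_distance_cm, solvent_front_cm):
    """Rf is dimensionless and always between 0 and 1."""
    return spot_distance_cm / solvent_front_cm

solvent_front = 8.0                                                  # cm travelled by the solvent front
spots = {"less polar compound": 6.0, "more polar compound": 2.4}     # cm travelled by each spot

for name, distance in spots.items():
    print(f"{name}: Rf = {retention_factor(distance, solvent_front):.2f}")
```

On a polar cellulose paper, the less polar compound travels further with the solvent and therefore shows the higher Rf, consistent with the behaviour described above.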
==== Thin-layer chromatography (TLC) ==== Thin-layer chromatography (TLC) is a widely employed laboratory technique used to separate different biochemicals on the basis of their relative attractions to the stationary and mobile phases. It is similar to paper chromatography. However, instead of using a stationary phase of paper, it involves a stationary phase of a thin layer of adsorbent like silica gel, alumina, or cellulose on a flat, inert substrate. TLC is very versatile; multiple samples can be separated simultaneously on the same layer, making it very useful for screening applications such as testing drug levels and water purity. The possibility of cross-contamination is low since each separation is performed on a new layer. Compared to paper, it has the advantage of faster runs, better separations, better quantitative analysis, and the choice between different adsorbents. For even better resolution and faster separation that utilizes less solvent, high-performance TLC can be used. An older popular use had been to differentiate chromosomes by observing distance in gel (separation was a separate step). == Displacement chromatography == The basic principle of displacement chromatography is: A molecule with a high affinity for the chromatography matrix (the displacer) competes effectively for binding sites, and thus displaces all molecules with lesser affinities. There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired for maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentrations. == Techniques by physical state of mobile phase == === Gas chromatography === Gas chromatography (GC), also sometimes known as gas-liquid chromatography (GLC), is a separation technique in which the mobile phase is a gas. Gas chromatographic separation is always carried out in a column, which is typically "packed" or "capillary". Packed columns are the routine workhorses of gas chromatography, being cheaper and easier to use and often giving adequate performance. Capillary columns generally give far superior resolution and although more expensive are becoming widely used, especially for complex mixtures. Further, capillary columns can be split into three classes: porous layer open tubular (PLOT), wall-coated open tubular (WCOT) and support-coated open tubular (SCOT) columns.
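The degree of separation between two Gaussian elution peaks, mentioned above in the context of baseline separation, is usually quantified by a resolution value Rs. The sketch below uses the standard textbook expression Rs = 2(tR2 − tR1)/(wb1 + wb2), which is not given in the text itself, and hypothetical retention times and baseline peak widths.

```python
def resolution(t_r1, t_r2, w_b1, w_b2):
    """Resolution Rs between two Gaussian-like peaks.

    t_r1, t_r2 -- retention times of the two peaks (t_r2 > t_r1)
    w_b1, w_b2 -- baseline peak widths (about four standard deviations for a Gaussian peak)
    Rs >= 1.5 is conventionally taken to indicate baseline separation.
    """
    return 2.0 * (t_r2 - t_r1) / (w_b1 + w_b2)

print(resolution(4.0, 4.6, 0.30, 0.34))  # ~1.9, i.e. baseline separated
```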
PLOT columns are unique in that the stationary phase is adsorbed to the column walls, while WCOT columns have a stationary phase that is chemically bonded to the walls. SCOT columns are a combination of the two types: they have support particles adhered to the column walls, but those particles have liquid phase chemically bonded onto them. Both types of column are made from non-adsorbent and chemically inert materials. Stainless steel and glass are the usual materials for packed columns and quartz or fused silica for capillary columns. Gas chromatography is based on a partition equilibrium of analyte between a solid or viscous liquid stationary phase (often a liquid silicone-based material) and a mobile gas (most often helium). The stationary phase is adhered to the inside of a small-diameter (commonly 0.18–0.53 mm inside diameter) glass or fused-silica tube (a capillary column) or a solid matrix inside a larger metal tube (a packed column). It is widely used in analytical chemistry; though the high temperatures used in GC make it unsuitable for high molecular weight biopolymers or proteins (heat denatures them), frequently encountered in biochemistry, it is well suited for use in the petrochemical, environmental monitoring and remediation, and industrial chemical fields. It is also used extensively in chemistry research. === Liquid chromatography === Liquid chromatography (LC) is a separation technique in which the mobile phase is a liquid. It can be carried out either in a column or a plane. Present day liquid chromatography that generally utilizes very small packing particles and a relatively high pressure is referred to as high-performance liquid chromatography. In HPLC the sample is forced by a liquid at high pressure (the mobile phase) through a column that is packed with a stationary phase composed of irregularly or spherically shaped particles, a porous monolithic layer, or a porous membrane. Monoliths are "sponge-like chromatographic media" and are made up of a continuous block of organic or inorganic material. HPLC is historically divided into two different sub-classes based on the polarity of the mobile and stationary phases. Methods in which the stationary phase is more polar than the mobile phase (e.g., toluene as the mobile phase, silica as the stationary phase) are termed normal phase liquid chromatography (NPLC) and the opposite (e.g., water-methanol mixture as the mobile phase and C18 (octadecylsilyl) as the stationary phase) is termed reversed phase liquid chromatography (RPLC). === Supercritical fluid chromatography === Supercritical fluid chromatography is a separation technique in which the mobile phase is a fluid above and relatively close to its critical temperature and pressure. == Techniques by separation mechanism == === Affinity chromatography === Affinity chromatography is based on selective non-covalent interaction between an analyte and specific molecules. It is very specific, but not very robust. It is often used in biochemistry in the purification of proteins bound to tags. These fusion proteins are labeled with compounds such as His-tags, biotin or antigens, which bind to the stationary phase specifically. After purification, these tags are usually removed and the pure protein is obtained. Affinity chromatography often utilizes a biomolecule's affinity for the cations of a metal (Zn, Cu, Fe, etc.). Columns are often manually prepared and could be designed specifically for the proteins of interest.
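The partition-equilibrium picture described above can be made quantitative with two standard chromatographic relations that are not stated in the text: the retention factor k = Kc/β, where Kc is the partition coefficient and β the column phase ratio (mobile-phase volume over stationary-phase volume), and the retention time tR = tM(1 + k), where tM is the hold-up time of an unretained compound. The column parameters in the sketch below are hypothetical.

```python
def retention_time(K_c, beta, t_m):
    """Retention time predicted from a partition equilibrium.

    K_c  -- partition (distribution) coefficient, c_stationary / c_mobile
    beta -- column phase ratio, V_mobile / V_stationary
    t_m  -- hold-up (dead) time of an unretained compound
    Uses the standard relations k = K_c / beta and t_R = t_m * (1 + k).
    """
    k = K_c / beta
    return t_m * (1.0 + k)

# Hypothetical capillary GC column: phase ratio 250, unretained peak at 1.2 min
print(retention_time(K_c=1000.0, beta=250.0, t_m=1.2))  # 6.0 min
```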
Traditional affinity columns are used as a preparative step to flush out unwanted biomolecules, or as a primary step in analyzing a protein with unknown physical properties. However, liquid chromatography techniques that do utilize affinity chromatography properties exist. Immobilized metal affinity chromatography (IMAC) is useful to separate the aforementioned molecules based on the relative affinity for the metal. Often these columns can be loaded with different metals to create a column with a targeted affinity. === Ion exchange chromatography === Ion exchange chromatography (usually referred to as ion chromatography) uses an ion exchange mechanism to separate analytes based on their respective charges. It is usually performed in columns but can also be useful in planar mode. Ion exchange chromatography uses a charged stationary phase to separate charged compounds including anions, cations, amino acids, peptides, and proteins. In conventional methods the stationary phase is an ion-exchange resin that carries charged functional groups that interact with oppositely charged groups of the compound to retain. There are two types of ion exchange chromatography: cation-exchange and anion-exchange. In cation-exchange chromatography the stationary phase has negative charge and the exchangeable ion is a cation, whereas in anion-exchange chromatography the stationary phase has positive charge and the exchangeable ion is an anion. Ion exchange chromatography is commonly used to purify proteins using FPLC. === Size-exclusion chromatography === Size-exclusion chromatography (SEC) is also known as gel permeation chromatography (GPC) or gel filtration chromatography and separates molecules according to their size (or more accurately according to their hydrodynamic diameter or hydrodynamic volume). Smaller molecules are able to enter the pores of the media and, therefore, such molecules are trapped and removed from the flow of the mobile phase. The average residence time in the pores depends upon the effective size of the analyte molecules. However, molecules that are larger than the average pore size of the packing are excluded and thus suffer essentially no retention; such species are the first to be eluted. It is generally a low-resolution chromatography technique and thus it is often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary structure and quaternary structure of purified proteins, especially since it can be carried out under native solution conditions. === Expanded bed adsorption chromatographic separation === An expanded bed chromatographic adsorption (EBA) column for a biochemical separation process comprises a pressure equalization liquid distributor with a self-cleaning function below a porous blocking sieve plate at the bottom of the expanded bed, and an upper nozzle assembly with a backflush cleaning function at the top of the expanded bed; a better distribution of the feedstock liquor added into the expanded bed ensures that the fluid passing through the expanded bed layer displays a state of piston flow. Such an expanded bed chromatographic separation column has the advantage of increasing the separation efficiency of the expanded bed. Expanded-bed adsorption (EBA) chromatography is a convenient and effective technique for the capture of proteins directly from unclarified crude sample.
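The size-exclusion mechanism described above is commonly quantified, although the text does not do so, by a distribution coefficient Kav = (Ve − V0)/(Vt − V0): fully excluded molecules have Kav near 0 and elute first, while small, fully included molecules have Kav near 1 and elute last. The column volumes in the sketch below are hypothetical.

```python
def sec_partition_coefficient(v_e, v_0, v_t):
    """K_av = (V_e - V_0) / (V_t - V_0) for size-exclusion chromatography.

    v_e -- elution volume of the analyte
    v_0 -- void volume (elution volume of a fully excluded, very large molecule)
    v_t -- total bed volume (approached by a very small, fully included molecule)
    """
    return (v_e - v_0) / (v_t - v_0)

# Hypothetical column: V0 = 8 mL, Vt = 24 mL, protein elutes at 14 mL
print(sec_partition_coefficient(14.0, 8.0, 24.0))  # 0.375
```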
In EBA chromatography, the settled bed is first expanded by upward flow of equilibration buffer. The crude feed, which is a mixture of soluble proteins, contaminants, cells, and cell debris, is then passed upward through the expanded bed. Target proteins are captured on the adsorbent, while particulates and contaminants pass through. A change to elution buffer while maintaining upward flow results in desorption of the target protein in expanded-bed mode. Alternatively, if the flow is reversed, the adsorbed particles will quickly settle and the proteins can be desorbed by an elution buffer. The mode used for elution (expanded-bed versus settled-bed) depends on the characteristics of the feed. After elution, the adsorbent is cleaned with a predefined cleaning-in-place (CIP) solution, with cleaning followed by either column regeneration (for further use) or storage. == Special techniques == === Reversed-phase chromatography === Reversed-phase chromatography (RPC) is any liquid chromatography procedure in which the mobile phase is significantly more polar than the stationary phase. It is so named because in normal-phase liquid chromatography, the mobile phase is significantly less polar than the stationary phase. Hydrophobic molecules in the mobile phase tend to adsorb to the relatively hydrophobic stationary phase. Hydrophilic molecules in the mobile phase will tend to elute first. Separating columns typically comprise a C8 or C18 carbon-chain bonded to a silica particle substrate. === Hydrophobic interaction chromatography === Hydrophobic interaction chromatography (HIC) is a purification and analytical technique that separates analytes, such as proteins, based on hydrophobic interactions between that analyte and the chromatographic matrix. It can provide a non-denaturing orthogonal approach to reversed phase separation, preserving native structures and potentially protein activity. In hydrophobic interaction chromatography, the matrix material is lightly substituted with hydrophobic groups. These groups include methyl, ethyl, propyl, butyl, octyl, and phenyl groups. At high salt concentrations, non-polar sidechains on the surface of proteins "interact" with the hydrophobic groups; that is, both types of groups are excluded by the polar solvent (hydrophobic effects are augmented by increased ionic strength). Thus, the sample is applied to the column in a buffer which is highly polar, which drives an association of hydrophobic patches on the analyte with the stationary phase. The eluent is typically an aqueous buffer with decreasing salt concentrations, increasing concentrations of detergent (which disrupts hydrophobic interactions), or changes in pH. Of critical importance is the type of salt used, with more kosmotropic salts as defined by the Hofmeister series providing the most water structuring around the molecule and resulting hydrophobic pressure. Ammonium sulfate is frequently used for this purpose. The addition of organic solvents or other less polar constituents may assist in improving resolution. In general, hydrophobic interaction chromatography (HIC) is advantageous if the sample is sensitive to pH change or to the harsh solvents typically used in other types of chromatography, but not to high salt concentrations. Commonly, it is the amount of salt in the buffer which is varied. In 2012, Müller and Franzreb described the effects of temperature on HIC using bovine serum albumin (BSA) with four different types of hydrophobic resin.
The study altered temperature so as to affect the binding affinity of BSA onto the matrix. It was concluded that cycling temperature from 40 to 10 degrees Celsius would not be adequate to effectively wash all BSA from the matrix, but could be very effective if the column were only used a few times. Using temperature to effect the change allows labs to cut costs on buying salt and saves money. If high salt concentrations along with temperature fluctuations are to be avoided, one can use a more hydrophobic competing species to displace the sample and elute it. This so-called salt independent method of HIC showed a direct isolation of human immunoglobulin G (IgG) from serum with satisfactory yield and used β-cyclodextrin as a competitor to displace IgG from the matrix. This largely opens up the possibility of using HIC with samples which are salt sensitive, since high salt concentrations precipitate proteins. === Hydrodynamic chromatography === Hydrodynamic chromatography (HDC) is derived from the observed phenomenon that large droplets move faster than small ones. In a column, this happens because the center of mass of larger droplets is prevented from being as close to the sides of the column as smaller droplets because of their larger overall size. Larger droplets will elute first from the middle of the column while smaller droplets stick to the sides of the column and elute last. This form of chromatography is useful for separating analytes by molar mass (or molecular mass), size, shape, and structure when used in conjunction with light scattering detectors, viscometers, and refractometers. The two main types of HDC are open tube and packed column. Open tube offers rapid separation times for small particles, whereas packed column HDC can increase resolution and is better suited for particles with an average molecular mass larger than 10^5 daltons. HDC differs from other types of chromatography because the separation only takes place in the interstitial volume, which is the volume surrounding and in between particles in a packed column. HDC shares the same order of elution as size exclusion chromatography (SEC) but the two processes still vary in many ways. In a study comparing the two types of separation, Isenberg, Brewer, Côté, and Striegel use both methods for polysaccharide characterization and conclude that HDC coupled with multiangle light scattering (MALS) achieves a more accurate molar mass distribution, when compared to off-line MALS, than SEC, and in significantly less time. This is largely due to SEC being a more destructive technique because of the pores in the column degrading the analyte during separation, which tends to impact the mass distribution. However, the main disadvantage of HDC is low resolution of analyte peaks, which makes SEC a more viable option when used with chemicals that are not easily degradable and where rapid elution is not important. HDC plays an especially important role in the field of microfluidics. The first successful apparatus for an HDC-on-a-chip system was proposed by Chmela et al. in 2002. Their design was able to achieve separations using an 80 mm long channel on the timescale of 3 minutes for particles with diameters ranging from 26 to 110 nm, but the authors expressed a need to improve the retention and dispersion parameters. In a 2010 publication by Jellema, Markesteijn, Westerweel, and Verpoorte, implementing HDC with a recirculating bidirectional flow resulted in high resolution, size based separation with only a 3 mm long channel.
Having such a short channel and high resolution was viewed as especially impressive considering that previous studies used channels that were 80 mm in length. For a biological application, in 2007, Huh et al. proposed a microfluidic sorting device based on HDC and gravity, which was useful for preventing potentially dangerous particles with diameter larger than 6 microns from entering the bloodstream when injecting contrast agents in ultrasounds. This study also made advances for environmental sustainability in microfluidics due to the lack of outside electronics driving the flow, which came as an advantage of using a gravity based device. === Two-dimensional chromatography === In some cases, the selectivity provided by the use of one column can be insufficient to provide resolution of analytes in complex samples. Two-dimensional chromatography aims to increase the resolution of these peaks by using a second column with different physico-chemical (chemical classification) properties. Since the mechanism of retention on this new solid support is different from the first dimensional separation, it can be possible to separate compounds by two-dimensional chromatography that are indistinguishable by one-dimensional chromatography. Furthermore, the separation on the second dimension occurs faster than the first dimension. An example of a two-dimensional separation is where the sample is spotted at one corner of a square plate, developed, air-dried, then rotated by 90° and usually redeveloped in a second solvent system. Two-dimensional chromatography can be applied to GC or LC separations. The heart-cutting approach selects a specific region of interest on the first dimension for separation, and the comprehensive approach uses all analytes in the second-dimension separation. === Simulated moving-bed chromatography === The simulated moving bed (SMB) technique is a variant of high performance liquid chromatography; it is used to separate particles and/or chemical compounds that would be difficult or impossible to resolve otherwise. This increased separation is brought about by a valve-and-column arrangement that is used to lengthen the stationary phase indefinitely. In the moving bed technique of preparative chromatography the feed entry and the analyte recovery are simultaneous and continuous, but because of practical difficulties with a continuously moving bed, the simulated moving bed technique was proposed. In the simulated moving bed technique, instead of moving the bed, the sample inlet and the analyte exit positions are moved continuously, giving the impression of a moving bed. True moving bed chromatography (TMBC) is only a theoretical concept. Its simulation, SMBC, is achieved by the use of a multiplicity of columns in series and a complex valve arrangement. This valve arrangement provides for sample and solvent feed and analyte and waste takeoff at appropriate locations of any column, whereby the sample entry point is switched at regular intervals in one direction and the solvent entry point in the opposite direction, whilst the analyte and waste takeoff positions are changed appropriately as well. === Pyrolysis gas chromatography === Pyrolysis–gas chromatography–mass spectrometry is a method of chemical analysis in which the sample is heated to decomposition to produce smaller molecules that are separated by gas chromatography and detected using mass spectrometry. Pyrolysis is the thermal decomposition of materials in an inert atmosphere or a vacuum.
The sample is put into direct contact with a platinum wire, or placed in a quartz sample tube, and rapidly heated to 600–1000 °C. Depending on the application even higher temperatures are used. Three different heating techniques are used in actual pyrolyzers: isothermal furnace, inductive heating (Curie point filament), and resistive heating using platinum filaments. Large molecules cleave at their weakest points and produce smaller, more volatile fragments. These fragments can be separated by gas chromatography. Pyrolysis GC chromatograms are typically complex because a wide range of different decomposition products is formed. The data can either be used as fingerprints to prove material identity, or the GC/MS data is used to identify individual fragments to obtain structural information. To increase the volatility of polar fragments, various methylating reagents can be added to a sample before pyrolysis. Besides the usage of dedicated pyrolyzers, pyrolysis GC of solid and liquid samples can be performed directly inside Programmable Temperature Vaporizer (PTV) injectors that provide quick heating (up to 30 °C/s) and high maximum temperatures of 600–650 °C. This is sufficient for some pyrolysis applications. The main advantage is that no dedicated instrument has to be purchased and pyrolysis can be performed as part of routine GC analysis. In this case, quartz GC inlet liners have to be used. Quantitative data can be acquired, and good results for derivatization inside the PTV injector have been published as well. === Fast protein liquid chromatography === Fast protein liquid chromatography (FPLC) is a form of liquid chromatography that is often used to analyze or purify mixtures of proteins. As in other forms of chromatography, separation is possible because the different components of a mixture have different affinities for two materials, a moving fluid (the "mobile phase") and a porous solid (the stationary phase). In FPLC the mobile phase is an aqueous solution, or "buffer". The buffer flow rate is controlled by a positive-displacement pump and is normally kept constant, while the composition of the buffer can be varied by drawing fluids in different proportions from two or more external reservoirs. The stationary phase is a resin composed of beads, usually of cross-linked agarose, packed into a cylindrical glass or plastic column. FPLC resins are available in a wide range of bead sizes and surface ligands depending on the application. === Countercurrent chromatography === Countercurrent chromatography (CCC) is a type of liquid-liquid chromatography, where both the stationary and mobile phases are liquids and the liquid stationary phase is held stagnant by a strong centrifugal force. ==== Hydrodynamic countercurrent chromatography (CCC) ==== The operating principle of a CCC instrument requires a column consisting of an open tube coiled around a bobbin. The bobbin is rotated in a double-axis gyratory motion (a cardioid), which causes a variable gravity (G) field to act on the column during each rotation. This motion causes the column to see one partitioning step per revolution, and components of the sample separate in the column due to their partitioning coefficient between the two immiscible liquid phases used. There are many types of CCC available today. These include HSCCC (High Speed CCC) and HPCCC (High Performance CCC). HPCCC is the latest and best-performing version of the instrumentation available currently.
==== Centrifugal partition chromatography (CPC) ==== In the CPC (centrifugal partition chromatography or hydrostatic countercurrent chromatography) instrument, the column consists of a series of cells interconnected by ducts attached to a rotor. This rotor rotates on its central axis creating the centrifugal field necessary to hold the stationary phase in place. The separation process in CPC is governed solely by the partitioning of solutes between the stationary and mobile phases, a mechanism that can be easily described using the partition coefficients (KD) of solutes. CPC instruments are commercially available for laboratory, pilot, and industrial-scale separations with different sizes of columns ranging from some 10 milliliters to 10 liters in volume. === Periodic counter-current chromatography === In contrast to countercurrent chromatography (see above), periodic counter-current chromatography (PCC) uses a solid stationary phase and only a liquid mobile phase. It is thus much more similar to conventional affinity chromatography than to countercurrent chromatography. PCC uses multiple columns, which during the loading phase are connected in line. This mode allows the first column in the series to be overloaded without losing product: product that breaks through this column before the resin is fully saturated is captured on the subsequent column(s). In the next step the columns are disconnected from one another. The first column is washed and eluted, while the other column(s) are still being loaded. Once the (initially) first column is re-equilibrated, it is re-introduced to the loading stream, but as the last column. The process then continues in a cyclic fashion. === Chiral chromatography === Chiral chromatography involves the separation of stereoisomers. In the case of enantiomers, these have no chemical or physical differences apart from being three-dimensional mirror images. To enable chiral separations to take place, either the mobile phase or the stationary phase must themselves be made chiral, giving differing affinities between the analytes. Chiral chromatography HPLC columns (with a chiral stationary phase) in both normal and reversed phase are commercially available. Conventional chromatography is incapable of separating racemic mixtures of enantiomers. However, in some cases nonracemic mixtures of enantiomers may be separated unexpectedly by conventional liquid chromatography (e.g. HPLC without chiral mobile phase or stationary phase). === Aqueous normal-phase chromatography === Aqueous normal-phase (ANP) chromatography is characterized by the elution behavior of classical normal phase mode (i.e. where the mobile phase is significantly less polar than the stationary phase) in which water is one of the mobile phase solvent system components. It is distinguished from hydrophilic interaction liquid chromatography (HILIC) in that the retention mechanism is due to adsorption rather than partitioning. == Applications == Chromatography is used in many fields including the pharmaceutical industry, the food and beverage industry, the chemical industry, forensic science, environmental analysis, and hospitals. == See also == == References == == External links == IUPAC Nomenclature for Chromatography Overlapping Peaks Program – Learning by Simulations Chromatography Videos – MIT OCW – Digital Lab Techniques Manual Chromatography Equations Calculators – MicroSolv Technology Corporation
Wikipedia/Liquid_chromatography
Electrokinetic phenomena are a family of several different effects that occur in heterogeneous fluids, or in porous bodies filled with fluid, or in a fast flow over a flat surface. The term heterogeneous here means a fluid containing particles. Particles can be solid, liquid or gas bubbles with sizes on the scale of a micrometer or nanometer. There is a common source of all these effects—the so-called interfacial 'double layer' of charges. Influence of an external force on the diffuse layer generates tangential motion of a fluid with respect to an adjacent charged surface. This force might be electric, pressure gradient, concentration gradient, or gravity. In addition, the moving phase might be either continuous fluid or dispersed phase. == Family == Various combinations of the driving force and moving phase determine various electrokinetic effects. According to J.Lyklema, the complete family of electrokinetic phenomena includes: electrophoresis, as motion of charged particles under influence of electric field; electro-osmosis, as motion of liquid in porous body under influence of electric field; diffusiophoresis, as motion of particles under influence of a chemical potential gradient; capillary osmosis, as motion of liquid in porous body under influence of the chemical potential gradient; sedimentation potential, as electric field generated by sedimenting colloid particles; streaming potential/current, as either electric potential or current generated by fluid moving through porous body, or relative to flat surface; colloid vibration current, as electric current generated by particles moving in fluid under influence of ultrasound; electric sonic amplitude, as ultrasound generated by colloidal particles in oscillating electric field. == Further reading == There are detailed descriptions of electrokinetic phenomena in many books on interface and colloid science. == See also == Isotachophoresis Onsager reciprocal relations Surface charge Cationization of cotton == References ==
Wikipedia/Electrokinetic_phenomena
In computational chemistry and computational physics, the embedded atom model, embedded-atom method or EAM, is an approximation describing the energy between atoms and is a type of interatomic potential. The energy is a function of a sum of functions of the separation between an atom and its neighbors. In the original model, by Murray Daw and Mike Baskes, the latter functions represent the electron density. The EAM is related to the second moment approximation to tight binding theory, also known as the Finnis-Sinclair model. These models are particularly appropriate for metallic systems. Embedded-atom methods are widely used in molecular dynamics simulations. == Model simulation == In a simulation, the potential energy of an atom $i$ is given by
$$E_i = F_\alpha\left(\sum_{j\neq i}\rho_\beta(r_{ij})\right) + \frac{1}{2}\sum_{j\neq i}\phi_{\alpha\beta}(r_{ij}),$$
where $r_{ij}$ is the distance between atoms $i$ and $j$, $\phi_{\alpha\beta}$ is a pair-wise potential function, $\rho_\beta$ is the contribution to the electron charge density from atom $j$ of type $\beta$ at the location of atom $i$, and $F$ is an embedding function that represents the energy required to place atom $i$ of type $\alpha$ into the electron cloud. Since the electron cloud density is a summation over many atoms, usually limited by a cutoff radius, the EAM potential is a multibody potential. For a single element system of atoms, three scalar functions must be specified: the embedding function, a pair-wise interaction, and an electron cloud contribution function. For a binary alloy, the EAM potential requires seven functions: three pair-wise interactions (A-A, A-B, B-B), two embedding functions, and two electron cloud contribution functions. Generally these functions are provided in a tabularized format and interpolated by cubic splines. == See also == Interatomic potential Lennard-Jones potential Bond order potential Force field (chemistry) == References ==
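The EAM energy expression above can be illustrated with a minimal sketch. The functional forms F, rho and phi below are invented purely for illustration (they are not a fitted EAM potential for any real metal), and the cutoff radius and atom positions are arbitrary; only the structure of the sum matches the equation in the text.

```python
import numpy as np

def eam_energy(positions, F, rho, phi, cutoff=5.0):
    """Total energy of a single-element EAM system.

    For each atom i: E_i = F(sum_{j != i} rho(r_ij)) + 0.5 * sum_{j != i} phi(r_ij).
    F, rho and phi are user-supplied callables (here toy functions, not a real potential).
    """
    n = len(positions)
    total = 0.0
    for i in range(n):
        density = 0.0   # summed electron-density contributions at atom i
        pair = 0.0      # summed pair-wise interactions of atom i
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                density += rho(r)
                pair += phi(r)
        total += F(density) + 0.5 * pair
    return total

# Toy functional forms (illustrative only)
F = lambda d: -np.sqrt(d)                        # embedding function
rho = lambda r: np.exp(-r)                       # electron-density contribution
phi = lambda r: 0.5 * np.exp(-2.0 * (r - 2.0))   # pair repulsion

atoms = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
print(eam_energy(atoms, F, rho, phi))
```

In a production setting the three functions would instead come from tabulated EAM files and be interpolated, e.g. with cubic splines, as the text notes.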
Wikipedia/Embedded_atom_model
Ab initio quantum chemistry methods are a class of computational chemistry techniques based on quantum chemistry that aim to solve the electronic Schrödinger equation. Ab initio means "from first principles" or "from the beginning", meaning using only physical constants and the positions and number of electrons in the system as input. This ab initio approach contrasts with other computational methods that rely on empirical parameters or approximations. By solving this fundamental equation, ab initio methods seek to accurately predict various chemical properties, including electron densities, energies, and molecular structures. The ability to run these calculations has enabled theoretical chemists to solve a range of problems and their importance is highlighted by the awarding of the 1998 Nobel prize to John Pople and Walter Kohn. The term ab initio was first used in quantum chemistry by Robert Parr and coworkers, including David Craig, in a semiempirical study on the excited states of benzene. The background is described by Parr. == Accuracy and scaling == Ab initio electronic structure methods aim to calculate the many-electron function which is the solution of the non-relativistic electronic Schrödinger equation (in the Born–Oppenheimer approximation). The many-electron function is generally a linear combination of many simpler electron functions with the dominant function being the Hartree-Fock function. Each of these simple functions is then approximated using only one-electron functions. The one-electron functions are then expanded as a linear combination of a finite set of basis functions. This approach has the advantage that it can be made to converge to the exact solution, when the basis set tends toward the limit of a complete set and where all possible configurations are included (called "Full CI"). However this convergence to the limit is computationally very demanding and most calculations are far from the limit. Nevertheless, important conclusions have been made from these more limited calculations. One needs to consider the computational cost of ab initio methods when determining whether they are appropriate for the problem at hand. When compared to much less accurate approaches, such as molecular mechanics, ab initio methods often take larger amounts of computer time, memory, and disk space, though, with modern advances in computer science and technology such considerations are becoming less of an issue. The Hartree-Fock (HF) method scales nominally as N^4 (N being a relative measure of the system size, not the number of basis functions) – e.g., if one doubles the number of electrons and the number of basis functions (double the system size), the calculation will take 16 (2^4) times as long per iteration. However, in practice it can scale closer to N^3 as the program can identify zero and extremely small integrals and neglect them. Correlated calculations scale less favorably, though their accuracy is usually greater, which is the trade-off one needs to consider. One popular method is Møller–Plesset perturbation theory (MP). To second order (MP2), MP scales as N^4. To third order (MP3) MP scales as N^6. To fourth order (MP4) MP scales as N^7. Another method, coupled cluster with singles and doubles (CCSD), scales as N^6 and extensions, CCSD(T) and CR-CC(2,3), scale as N^6 with one noniterative step which scales as N^7.
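The scaling arithmetic above (doubling the system size makes an N^4 method roughly 2^4 = 16 times more expensive) can be written out as a tiny sketch. The exponents used below are the nominal scalings quoted in the text; the printed factors are simple powers, not timings from any real calculation.

```python
def relative_cost(scaling_power, size_factor):
    """Cost multiplier when the system grows by size_factor, assuming cost ~ N**scaling_power."""
    return size_factor ** scaling_power

for method, power in [("HF (N^4)", 4), ("MP3 / CCSD (N^6)", 6), ("MP4 / (T) step (N^7)", 7)]:
    print(f"{method}: doubling the system size costs ~{relative_cost(power, 2):g}x more")
# HF: ~16x, N^6 methods: ~64x, N^7 methods: ~128x
```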
Hybrid Density functional theory (DFT) methods using functionals which include Hartree–Fock exchange scale in a similar manner to Hartree–Fock but with a larger proportionality term and are thus more expensive than an equivalent Hartree–Fock calculation. Local DFT methods that do not include Hartree–Fock exchange can scale better than Hartree–Fock. === Linear scaling approaches === The problem of computational expense can be alleviated through simplification schemes. In the density fitting scheme, the four-index integrals used to describe the interaction between electron pairs are reduced to simpler two- or three-index integrals, by treating the charge densities they contain in a simplified way. This reduces the scaling with respect to basis set size. Methods employing this scheme are denoted by the prefix "df-", for example the density fitting MP2 is df-MP2 (many authors use lower-case to prevent confusion with DFT). In the local approximation, the molecular orbitals are first localized by a unitary rotation in the orbital space (which leaves the reference wave function invariant, i.e., not an approximation) and subsequently interactions of distant pairs of localized orbitals are neglected in the correlation calculation. This sharply reduces the scaling with molecular size, a major problem in the treatment of biologically-sized molecules. Methods employing this scheme are denoted by the prefix "L", e.g. LMP2. Both schemes can be employed together, as in the df-LMP2 and df-LCCSD(T0) methods. In fact, df-LMP2 calculations are faster than df-Hartree–Fock calculations and thus are feasible in nearly all situations in which also DFT is. == Classes of methods == The most popular classes of ab initio electronic structure methods: === Hartree–Fock methods === Hartree–Fock (HF) Restricted open-shell Hartree–Fock (ROHF) Unrestricted Hartree–Fock (UHF) === Post-Hartree–Fock methods === Møller–Plesset perturbation theory (MPn) Configuration interaction (CI) Coupled cluster (CC) Quadratic configuration interaction (QCI) Quantum chemistry composite methods Sign learning kink-based (SiLK) quantum Monte Carlo === Multi-reference methods === Multi-configurational self-consistent field (MCSCF including CASSCF and RASSCF) Multi-reference configuration interaction (MRCI) n-electron valence state perturbation theory (NEVPT) Complete active space perturbation theory (CASPTn) State universal multi-reference coupled-cluster theory (SUMR-CC) == Methods in detail == === Hartree–Fock and post-Hartree–Fock methods === The simplest type of ab initio electronic structure calculation is the Hartree–Fock (HF) scheme, in which the instantaneous Coulombic electron-electron repulsion is not specifically taken into account. Only its average effect (mean field) is included in the calculation. This is a variational procedure; therefore, the obtained approximate energies, expressed in terms of the system's wave function, are always equal to or greater than the exact energy, and tend to a limiting value called the Hartree–Fock limit as the size of the basis is increased. Many types of calculations begin with a Hartree–Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. Møller–Plesset perturbation theory (MPn) and coupled cluster theory (CC) are examples of these post-Hartree–Fock methods. 
In some cases, particularly for bond breaking processes, the Hartree–Fock method is inadequate and this single-determinant reference function is not a good basis for post-Hartree–Fock methods. It is then necessary to start with a wave function that includes more than one determinant, such as multi-configurational self-consistent field (MCSCF), and methods have been developed that use these multi-determinant references for improvements. However, if one uses coupled cluster methods such as CCSDT, CCSDt, CR-CC(2,3), or CC(t;3) then single-bond breaking using the single determinant HF reference is feasible. For an accurate description of double bond breaking, methods such as CCSDTQ, CCSDTq, CCSDtq, CR-CC(2,4), or CC(tq;3,4) also make use of the single determinant HF reference, and do not require one to use multi-reference methods. Example: Is the bonding situation in disilyne Si2H2 the same as in acetylene (C2H2)? A series of ab initio studies of Si2H2 is an example of how ab initio computational chemistry can predict new structures that are subsequently confirmed by experiment. They go back over 20 years, and most of the main conclusions were reached by 1995. The methods used were mostly post-Hartree–Fock, particularly configuration interaction (CI) and coupled cluster (CC). Initially the question was whether disilyne, Si2H2, had the same structure as ethyne (acetylene), C2H2. In early studies, by Binkley and Lischka and Kohler, it became clear that linear Si2H2 was a transition structure between two equivalent trans-bent structures and that the ground state was predicted to be a four-membered ring bent in a 'butterfly' structure with hydrogen atoms bridged between the two silicon atoms. Interest then moved to look at whether structures equivalent to vinylidene (Si=SiH2) existed. This structure is predicted to be a local minimum, i.e. an isomer of Si2H2, lying higher in energy than the ground state but below the energy of the trans-bent isomer. Then a new isomer with an unusual structure was predicted by Brenda Colegrove in Henry F. Schaefer III's group. It requires post-Hartree–Fock methods to obtain a local minimum for this structure. It does not exist on the Hartree–Fock energy hypersurface. The new isomer is a planar structure with one bridging hydrogen atom and one terminal hydrogen atom, cis to the bridging atom. Its energy is above the ground state but below that of the other isomers. Similar results were later obtained for Ge2H2. Al2H2 and Ga2H2 have exactly the same isomers, in spite of having two electrons fewer than the Group 14 molecules. The only difference is that the four-membered ring ground state is planar and not bent. The cis-mono-bridged and vinylidene-like isomers are present. Experimental work on these molecules is not easy, but matrix isolation spectroscopy of the products of the reaction of hydrogen atoms and silicon and aluminium surfaces has found the ground state ring structures and the cis-mono-bridged structures for Si2H2 and Al2H2. Theoretical predictions of the vibrational frequencies were crucial in understanding the experimental observations of the spectra of a mixture of compounds. This may appear to be an obscure area of chemistry, but the differences between carbon and silicon chemistry are always a lively question, as are the differences between group 13 and group 14 (mainly the B and C differences). The silicon and germanium compounds were the subject of a Journal of Chemical Education article.
=== Valence bond methods === Valence bond (VB) methods are generally ab initio, although some semi-empirical versions have been proposed. Current VB approaches are: Generalized valence bond (GVB) and Modern valence bond theory (MVBT). === Quantum Monte Carlo methods === A method that avoids making the variational overestimation of HF in the first place is Quantum Monte Carlo (QMC), in its variational, diffusion, and Green's function forms. These methods work with an explicitly correlated wave function and evaluate integrals numerically using a Monte Carlo integration. Such calculations can be very time-consuming. The accuracy of QMC depends strongly on the initial guess of many-body wave-functions and the form of the many-body wave-function. One simple choice is the Slater-Jastrow wave-function, in which the local correlations are treated with the Jastrow factor. Sign Learning Kink-based (SiLK) Quantum Monte Carlo: The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is based on Feynman's path integral formulation of quantum mechanics, and can reduce the minus sign problem when calculating energies in atomic and molecular systems. == See also == Density functional theory Car–Parrinello molecular dynamics Quantum chemistry computer programs – see columns for Hartree–Fock and post-Hartree–Fock methods == References ==
Wikipedia/Ab_initio_quantum_chemistry_methods
Mean-field particle methods are a broad class of interacting type Monte Carlo algorithms for simulating from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability measures can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states. A natural way to simulate these sophisticated nonlinear Markov processes is to sample a large number of copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and Markov chain Monte Carlo methods, these mean-field particle techniques rely on sequential interacting samples. The terminology mean-field reflects the fact that each of the samples (a.k.a. particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes. In other words, starting with a chaotic configuration based on independent copies of the initial state of the nonlinear Markov chain model, the chaos propagates at any time horizon as the size of the system tends to infinity; that is, finite blocks of particles reduce to independent copies of the nonlinear Markov process. This result is called the propagation of chaos property. The terminology "propagation of chaos" originated with the work of Mark Kac in 1976 on a colliding mean-field kinetic gas model. == History == The theory of mean-field interacting particle models had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics. The mathematical foundations of these classes of models were developed from the mid-1980s to the mid-1990s by several mathematicians, including Werner Braun, Klaus Hepp, Karl Oelschläger, Gérard Ben Arous and Marc Brunaud, Donald Dawson, Jean Vaillancourt and Jürgen Gärtner, Christian Léonard, Sylvie Méléard, Sylvie Roelly, Alain-Sol Sznitman and Hiroshi Tanaka for diffusion type models; F. Alberto Grünbaum, Tokuzo Shiga, Hiroshi Tanaka, Sylvie Méléard and Carl Graham for general classes of interacting jump-diffusion processes. We also quote an earlier pioneering article by Theodore E. Harris and Herman Kahn, published in 1951, using mean-field but heuristic-like genetic methods for estimating particle transmission energies. Mean-field genetic type particle methods are also used as heuristic natural search algorithms (a.k.a. metaheuristics) in evolutionary computing. The origins of these mean-field computational techniques can be traced to 1950 and 1954 with the work of Alan Turing on genetic type mutation-selection learning machines and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey. The Australian geneticist Alex Fraser also published in 1957 a series of papers on the genetic type simulation of artificial selection of organisms. Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods, can also be interpreted as a mean-field particle approximation of Feynman-Kac path integrals.
The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer, who developed in 1948 a mean field particle interpretation of neutron-chain reactions, but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984. In molecular chemistry, the use of genetic heuristic-like particle methods (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth. The first pioneering articles on the applications of these heuristic-like particle methods in nonlinear filtering problems were the independent studies of Neil Gordon, David Salmond and Adrian Smith (bootstrap filter), Genshiro Kitagawa (Monte Carlo filter), and the one by Himilcon Carvalho, Pierre Del Moral, André Monin and Gérard Salut published in the 1990s. The term interacting "particle filters" was first coined in 1996 by Del Moral. Particle filters were also developed in signal processing in 1989-1992 by P. Del Moral, J.C. Noyer, G. Rigal, and G. Salut in the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on RADAR/SONAR and GPS signal processing problems. The foundations and the first rigorous analysis of the convergence of genetic type models and mean field Feynman-Kac particle methods are due to Pierre Del Moral in 1996. Branching type particle methods with varying population sizes were also developed at the end of the 1990s by Dan Crisan, Jessica Gaines and Terry Lyons, and by Dan Crisan, Pierre Del Moral and Terry Lyons. The first uniform convergence results with respect to the time parameter for mean field particle models were developed at the end of the 1990s by Pierre Del Moral and Alice Guionnet for interacting jump type processes, and by Florent Malrieu for nonlinear diffusion type processes. New classes of mean field particle simulation techniques for Feynman-Kac path-integration problems include genealogical tree based models, backward particle models, adaptive mean field particle models, island type particle models, and particle Markov chain Monte Carlo methods. == Applications == In physics, and more particularly in statistical mechanics, these nonlinear evolution equations are often used to describe the statistical behavior of microscopic interacting particles in a fluid or in some condensed matter. In this context, the random evolution of a virtual fluid or a gas particle is represented by McKean-Vlasov diffusion processes, reaction–diffusion systems, or Boltzmann type collision processes. As its name indicates, the mean field particle model represents the collective behavior of microscopic particles weakly interacting with their occupation measures. The macroscopic behavior of these many-body particle systems is encapsulated in the limiting model obtained when the size of the population tends to infinity. Boltzmann equations represent the macroscopic evolution of colliding particles in rarefied gases, while McKean Vlasov diffusions represent the macroscopic behavior of fluid particles and granular gases.
In computational physics and more specifically in quantum mechanics, the ground state energies of quantum systems are associated with the top of the spectrum of Schrödinger's operators. The Schrödinger equation is the quantum mechanics version of Newton's second law of motion of classical mechanics (the mass times the acceleration is the sum of the forces). This equation represents the wave function (a.k.a. the quantum state) evolution of some physical system, including molecular, atomic, or subatomic systems, as well as macroscopic systems like the universe. The solution of the imaginary time Schrödinger equation (a.k.a. the heat equation) is given by a Feynman-Kac distribution associated with a free evolution Markov process (often represented by Brownian motions) in the set of electronic or macromolecular configurations and some potential energy function. The long time behavior of these nonlinear semigroups is related to top eigenvalues and ground state energies of Schrödinger's operators. The genetic type mean field interpretations of these Feynman-Kac models are termed Resampled Monte Carlo or Diffusion Monte Carlo methods. These branching type evolutionary algorithms are based on mutation and selection transitions. During the mutation transition, the walkers evolve randomly and independently in a potential energy landscape on particle configurations. The mean field selection process (a.k.a. quantum teleportation, population reconfiguration, resampled transition) is associated with a fitness function that reflects the particle absorption in an energy well. Configurations with low relative energy are more likely to duplicate. In molecular chemistry and statistical physics, mean field particle methods are also used to sample Boltzmann-Gibbs measures associated with some cooling schedule, and to compute their normalizing constants (a.k.a. free energies, or partition functions). In computational biology, and more specifically in population genetics, spatial branching processes with competitive selection and migration mechanisms can also be represented by mean field genetic type population dynamics models. The first moments of the occupation measures of a spatial branching process are given by Feynman-Kac distribution flows. The mean field genetic type approximation of these flows offers a fixed population size interpretation of these branching processes. Extinction probabilities can be interpreted as absorption probabilities of some Markov process evolving in some absorbing environment. These absorption models are represented by Feynman-Kac models. The long time behavior of these processes conditioned on non-extinction can be expressed in an equivalent way by quasi-invariant measures, Yaglom limits, or invariant measures of nonlinear normalized Feynman-Kac flows. In computer science, and more particularly in artificial intelligence, these mean field type genetic algorithms are used as random search heuristics that mimic the process of evolution to generate useful solutions to complex optimization problems. These stochastic search algorithms belong to the class of evolutionary models. The idea is to propagate a population of feasible candidate solutions using mutation and selection mechanisms. The mean field interaction between the individuals is encapsulated in the selection and the cross-over mechanisms.
In mean field games and multi-agent interacting systems theories, mean field particle processes are used to represent the collective behavior of complex systems with interacting individuals. In this context, the mean field interaction is encapsulated in the decision process of interacting agents. The limiting model as the number of agents tends to infinity is sometimes called the continuum model of agents. In information theory, and more specifically in statistical machine learning and signal processing, mean field particle methods are used to sample sequentially from the conditional distributions of some random process with respect to a sequence of observations or a cascade of rare events. In discrete time nonlinear filtering problems, the conditional distributions of the random states of a signal given partial and noisy observations satisfy a nonlinear updating-prediction evolution equation. The updating step is given by Bayes' rule, and the prediction step is a Chapman-Kolmogorov transport equation. The mean field particle interpretation of these nonlinear filtering equations is a genetic type selection-mutation particle algorithm. During the mutation step, the particles evolve independently of one another according to the Markov transitions of the signal. During the selection stage, particles with small relative likelihood values are killed, while the ones with high relative values are multiplied. These mean field particle techniques are also used to solve multiple-object tracking problems, and more specifically to estimate association measures. The continuous time versions of these particle models are mean field Moran type particle interpretations of the robust optimal filter evolution equations or the Kushner-Stratonovich stochastic partial differential equation. These genetic type mean field particle algorithms, also termed Particle Filters and Sequential Monte Carlo methods, are extensively and routinely used in operations research and statistical inference. The term "particle filters" was first coined in 1996 by Del Moral, and the term "sequential Monte Carlo" by Liu and Chen in 1998. Subset simulation and Monte Carlo splitting techniques are particular instances of genetic particle schemes and Feynman-Kac particle models equipped with Markov chain Monte Carlo mutation transitions. == Illustrations of the mean field simulation method == === Countable state space models === To motivate the mean field simulation algorithm we start with S a finite or countable state space and let P(S) denote the set of all probability measures on S. Consider a sequence of probability distributions $(\eta_0, \eta_1, \cdots)$ on S satisfying an evolution equation
$$\eta_{n+1} = \Phi(\eta_n) \qquad (1)$$
for some, possibly nonlinear, mapping $\Phi : P(S) \to P(S)$. These distributions are given by vectors $\eta_n = (\eta_n(x))_{x \in S}$ that satisfy
$$0 \leqslant \eta_n(x) \leqslant 1, \qquad \sum_{x \in S} \eta_n(x) = 1.$$
Therefore, $\Phi$ is a mapping from the $(s-1)$-unit simplex into itself, where s stands for the cardinality of the set S. When s is too large, solving equation (1) is intractable or computationally very costly. One natural way to approximate these evolution equations is to reduce sequentially the state space using a mean field particle model.
One of the simplest mean field simulation schemes is defined by the Markov chain ξ n ( N ) = ( ξ n ( N , 1 ) , ⋯ , ξ n ( N , N ) ) {\displaystyle \xi _{n}^{(N)}=\left(\xi _{n}^{(N,1)},\cdots ,\xi _{n}^{(N,N)}\right)} on the product space S N {\displaystyle S^{N}} , starting with N independent random variables with probability distribution η 0 {\displaystyle \eta _{0}} and elementary transitions P ( ξ n + 1 ( N , 1 ) = y 1 , ⋯ , ξ n + 1 ( N , N ) = y N | ξ n ( N ) ) = ∏ i = 1 N Φ ( η n N ) ( y i ) , {\displaystyle \mathbf {P} \left(\left.\xi _{n+1}^{(N,1)}=y^{1},\cdots ,\xi _{n+1}^{(N,N)}=y^{N}\right|\xi _{n}^{(N)}\right)=\prod _{i=1}^{N}\Phi \left(\eta _{n}^{N}\right)\left(y^{i}\right),} with the empirical measure η n N = 1 N ∑ j = 1 N 1 ξ n ( N , j ) {\displaystyle \eta _{n}^{N}={\frac {1}{N}}\sum _{j=1}^{N}1_{\xi _{n}^{(N,j)}}} where 1 x {\displaystyle 1_{x}} is the indicator function of the state x. In other words, given ξ n ( N ) {\displaystyle \xi _{n}^{(N)}} the samples ξ n + 1 ( N ) {\displaystyle \xi _{n+1}^{(N)}} are independent random variables with probability distribution Φ ( η n N ) {\displaystyle \Phi \left(\eta _{n}^{N}\right)} . The rationale behind this mean field simulation technique is the following: We expect that when η n N {\displaystyle \eta _{n}^{N}} is a good approximation of η n {\displaystyle \eta _{n}} , then Φ ( η n N ) {\displaystyle \Phi \left(\eta _{n}^{N}\right)} is an approximation of Φ ( η n ) = η n + 1 {\displaystyle \Phi \left(\eta _{n}\right)=\eta _{n+1}} . Thus, since η n + 1 N {\displaystyle \eta _{n+1}^{N}} is the empirical measure of N conditionally independent random variables with common probability distribution Φ ( η n N ) {\displaystyle \Phi \left(\eta _{n}^{N}\right)} , we expect η n + 1 N {\displaystyle \eta _{n+1}^{N}} to be a good approximation of η n + 1 {\displaystyle \eta _{n+1}} . Another strategy is to find a collection K η n = ( K η n ( x , y ) ) x , y ∈ S {\displaystyle K_{\eta _{n}}=\left(K_{\eta _{n}}(x,y)\right)_{x,y\in S}} of stochastic matrices indexed by η n ∈ P ( S ) {\displaystyle \eta _{n}\in P(S)} such that the compatibility equation (2) holds: Φ ( η n ) = η n K η n . {\displaystyle \Phi (\eta _{n})=\eta _{n}K_{\eta _{n}}.} This formula allows us to interpret the sequence ( η 0 , η 1 , ⋯ ) {\displaystyle (\eta _{0},\eta _{1},\cdots )} as the probability distributions of the random states ( X ¯ 0 , X ¯ 1 , ⋯ ) {\displaystyle \left({\overline {X}}_{0},{\overline {X}}_{1},\cdots \right)} of the nonlinear Markov chain model with elementary transitions P ( X ¯ n + 1 = y | X ¯ n = x ) = K η n ( x , y ) , Law ( X ¯ n ) = η n . {\displaystyle \mathbf {P} \left(\left.{\overline {X}}_{n+1}=y\right|{\overline {X}}_{n}=x\right)=K_{\eta _{n}}(x,y),\qquad {\text{Law}}({\overline {X}}_{n})=\eta _{n}.} A collection of Markov transitions K η n {\displaystyle K_{\eta _{n}}} satisfying the equation (2) is called a McKean interpretation of the sequence of measures η n {\displaystyle \eta _{n}} . 
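The scheme above can be written out in a few lines of code. The following is a minimal sketch, not part of the original presentation: the state space, the nonlinear map Φ, and all numerical values are illustrative placeholders, and the exact recursion (1) is iterated alongside the particle system only because the toy state space is small enough to make it tractable.

```python
# A minimal sketch of the simple mean field simulation scheme described above:
# at every step, the N particles are redrawn i.i.d. from Phi applied to the
# current empirical measure.  The map `phi` is an arbitrary illustrative choice
# (a reweighting followed by a fixed stochastic matrix), not the article's.
import numpy as np

rng = np.random.default_rng(0)
s = 5          # cardinality of the finite state space S = {0, ..., s-1}
N = 10_000     # number of particles

G = np.linspace(0.5, 1.0, s)     # illustrative weights used inside phi
M = np.zeros((s, s))             # a lazy random walk as an illustrative stochastic matrix
for x in range(s):
    M[x, x] = 0.5
    M[x, max(x - 1, 0)] += 0.25
    M[x, min(x + 1, s - 1)] += 0.25

def phi(eta):
    """A nonlinear map Phi: P(S) -> P(S) (illustrative placeholder)."""
    reweighted = eta * G / np.dot(eta, G)
    return reweighted @ M

def empirical_measure(particles):
    return np.bincount(particles, minlength=s) / len(particles)

eta_exact = np.full(s, 1.0 / s)                 # eta_0
particles = rng.choice(s, size=N, p=eta_exact)  # N i.i.d. samples from eta_0

for n in range(10):
    eta_N = empirical_measure(particles)
    # Given the current configuration, draw N i.i.d. states from Phi(eta_n^N).
    particles = rng.choice(s, size=N, p=phi(eta_N))
    # Exact recursion (1), tractable here only because s is tiny.
    eta_exact = phi(eta_exact)

print("particle estimate of eta_n:", np.round(empirical_measure(particles), 3))
print("exact eta_n               :", np.round(eta_exact, 3))
```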
The mean field particle interpretation of (2) is now defined by the Markov chain ξ n ( N ) = ( ξ n ( N , 1 ) , ⋯ , ξ n ( N , N ) ) {\displaystyle \xi _{n}^{(N)}=\left(\xi _{n}^{(N,1)},\cdots ,\xi _{n}^{(N,N)}\right)} on the product space S N {\displaystyle S^{N}} , starting with N independent random copies of X 0 {\displaystyle X_{0}} and elementary transitions P ( ξ n + 1 ( N , 1 ) = y 1 , ⋯ , ξ n + 1 ( N , N ) = y N | ξ n ( N ) ) = ∏ i = 1 N K n + 1 , η n N ( ξ n ( N , i ) , y i ) , {\displaystyle \mathbf {P} \left(\left.\xi _{n+1}^{(N,1)}=y^{1},\cdots ,\xi _{n+1}^{(N,N)}=y^{N}\right|\xi _{n}^{(N)}\right)=\prod _{i=1}^{N}K_{n+1,\eta _{n}^{N}}\left(\xi _{n}^{(N,i)},y^{i}\right),} with the empirical measure η n N = 1 N ∑ j = 1 N 1 ξ n ( N , j ) {\displaystyle \eta _{n}^{N}={\frac {1}{N}}\sum _{j=1}^{N}1_{\xi _{n}^{(N,j)}}} Under some weak regularity conditions on the mapping Φ {\displaystyle \Phi } for any function f : S → R {\displaystyle f:S\to \mathbf {R} } , we have the almost sure convergence 1 N ∑ j = 1 N f ( ξ n ( N , j ) ) → N ↑ ∞ E ( f ( X ¯ n ) ) = ∑ x ∈ S η n ( x ) f ( x ) {\displaystyle {\frac {1}{N}}\sum _{j=1}^{N}f\left(\xi _{n}^{(N,j)}\right)\to _{N\uparrow \infty }E\left(f({\overline {X}}_{n})\right)=\sum _{x\in S}\eta _{n}(x)f(x)} These nonlinear Markov processes and their mean field particle interpretation can be extended to time non homogeneous models on general measurable state spaces. === Feynman-Kac models === To illustrate the abstract models presented above, we consider a stochastic matrix M = ( M ( x , y ) ) x , y ∈ S {\displaystyle M=(M(x,y))_{x,y\in S}} and some function G : S → ( 0 , 1 ) {\displaystyle G:S\to (0,1)} . We associate with these two objects the mapping { Φ : P ( S ) → P ( S ) ( η n ( x ) ) x ∈ S ↦ ( Φ ( η n ) ( y ) ) y ∈ S Φ ( η n ) ( y ) = ∑ x ∈ S Ψ G ( η n ) ( x ) M ( x , y ) {\displaystyle {\begin{cases}\Phi :P(S)\to P(S)\\(\eta _{n}(x))_{x\in S}\mapsto \left(\Phi (\eta _{n})(y)\right)_{y\in S}\end{cases}}\qquad \Phi (\eta _{n})(y)=\sum _{x\in S}\Psi _{G}(\eta _{n})(x)M(x,y)} and the Boltzmann-Gibbs measures Ψ G ( η n ) ( x ) {\displaystyle \Psi _{G}(\eta _{n})(x)} defined by Ψ G ( η n ) ( x ) = η n ( x ) G ( x ) ∑ z ∈ S η n ( z ) G ( z ) . {\displaystyle \Psi _{G}(\eta _{n})(x)={\frac {\eta _{n}(x)G(x)}{\sum _{z\in S}\eta _{n}(z)G(z)}}.} We denote by K η n = ( K η n ( x , y ) ) x , y ∈ S {\displaystyle K_{\eta _{n}}=\left(K_{\eta _{n}}(x,y)\right)_{x,y\in S}} the collection of stochastic matrices indexed by η n ∈ P ( S ) {\displaystyle \eta _{n}\in P(S)} given by K η n ( x , y ) = ϵ G ( x ) M ( x , y ) + ( 1 − ϵ G ( x ) ) Φ ( η n ) ( y ) {\displaystyle K_{\eta _{n}}(x,y)=\epsilon G(x)M(x,y)+(1-\epsilon G(x))\Phi (\eta _{n})(y)} for some parameter ϵ ∈ [ 0 , 1 ] {\displaystyle \epsilon \in [0,1]} . It is readily checked that the equation (2) is satisfied. In addition, we can also show (cf. for instance) that the solution of (1) is given by the Feynman-Kac formula η n ( x ) = E ( 1 x ( X n ) ∏ p = 0 n − 1 G ( X p ) ) E ( ∏ p = 0 n − 1 G ( X p ) ) , {\displaystyle \eta _{n}(x)={\frac {E\left(1_{x}(X_{n})\prod _{p=0}^{n-1}G(X_{p})\right)}{E\left(\prod _{p=0}^{n-1}G(X_{p})\right)}},} with a Markov chain X n {\displaystyle X_{n}} with initial distribution η 0 {\displaystyle \eta _{0}} and Markov transition M. 
For any function f : S → R {\displaystyle f:S\to \mathbf {R} } we have η n ( f ) := ∑ x ∈ S η n ( x ) f ( x ) = E ( f ( X n ) ∏ p = 0 n − 1 G ( X p ) ) E ( ∏ p = 0 n − 1 G ( X p ) ) {\displaystyle \eta _{n}(f):=\sum _{x\in S}\eta _{n}(x)f(x)={\frac {E\left(f(X_{n})\prod _{p=0}^{n-1}G(X_{p})\right)}{E\left(\prod _{p=0}^{n-1}G(X_{p})\right)}}} If G ( x ) = 1 {\displaystyle G(x)=1} is the unit function and ϵ = 1 {\displaystyle \epsilon =1} , then we have K η n ( x , y ) = M ( x , y ) = P ( X n + 1 = y | X n = x ) , η n ( x ) = E ( 1 x ( X n ) ) = P ( X n = x ) . {\displaystyle K_{\eta _{n}}(x,y)=M(x,y)=\mathbf {P} \left(\left.X_{n+1}=y\right|X_{n}=x\right),\qquad \eta _{n}(x)=E\left(1_{x}(X_{n})\right)=\mathbf {P} (X_{n}=x).} And the equation (2) reduces to the Chapman-Kolmogorov equation η n + 1 ( y ) = ∑ x ∈ S η n ( x ) M ( x , y ) ⇔ P ( X n + 1 = y ) = ∑ x ∈ S P ( X n + 1 = y | X n = x ) P ( X n = x ) {\displaystyle \eta _{n+1}(y)=\sum _{x\in S}\eta _{n}(x)M(x,y)\qquad \Leftrightarrow \qquad \mathbf {P} \left(X_{n+1}=y\right)=\sum _{x\in S}\mathbf {P} (X_{n+1}=y|X_{n}=x)\mathbf {P} \left(X_{n}=x\right)} The mean field particle interpretation of this Feynman-Kac model is defined by sampling sequentially N conditionally independent random variables ξ n + 1 ( N , i ) {\displaystyle \xi _{n+1}^{(N,i)}} with probability distribution K n + 1 , η n N ( ξ n ( N , i ) , y ) = ϵ G ( ξ n ( N , i ) ) M ( ξ n ( N , i ) , y ) + ( 1 − ϵ G ( ξ n ( N , i ) ) ) ∑ j = 1 N G ( ξ n ( N , j ) ) ∑ k = 1 N G ( ξ n ( N , k ) ) M ( ξ n ( N , j ) , y ) {\displaystyle K_{n+1,\eta _{n}^{N}}\left(\xi _{n}^{(N,i)},y\right)=\epsilon G\left(\xi _{n}^{(N,i)}\right)M\left(\xi _{n}^{(N,i)},y\right)+\left(1-\epsilon G\left(\xi _{n}^{(N,i)}\right)\right)\sum _{j=1}^{N}{\frac {G\left(\xi _{n}^{(N,j)}\right)}{\sum _{k=1}^{N}G\left(\xi _{n}^{(N,k)}\right)}}M\left(\xi _{n}^{(N,j)},y\right)} In other words, with a probability ϵ G ( ξ n ( N , i ) ) {\displaystyle \epsilon G\left(\xi _{n}^{(N,i)}\right)} the particle ξ n ( N , i ) {\displaystyle \xi _{n}^{(N,i)}} evolves to a new state ξ n + 1 ( N , i ) = y {\displaystyle \xi _{n+1}^{(N,i)}=y} randomly chosen with the probability distribution M ( ξ n ( N , i ) , y ) {\displaystyle M\left(\xi _{n}^{(N,i)},y\right)} ; otherwise, ξ n ( N , i ) {\displaystyle \xi _{n}^{(N,i)}} jumps to a new location ξ n ( N , j ) {\displaystyle \xi _{n}^{(N,j)}} randomly chosen with a probability proportional to G ( ξ n ( N , j ) ) {\displaystyle G\left(\xi _{n}^{(N,j)}\right)} and evolves to a new state ξ n + 1 ( N , i ) = y {\displaystyle \xi _{n+1}^{(N,i)}=y} randomly chosen with the probability distribution M ( ξ n ( N , j ) , y ) . {\displaystyle M\left(\xi _{n}^{(N,j)},y\right).} If G ( x ) = 1 {\displaystyle G(x)=1} is the unit function and ϵ = 1 {\displaystyle \epsilon =1} , the interaction between the particle vanishes and the particle model reduces to a sequence of independent copies of the Markov chain X n {\displaystyle X_{n}} . When ϵ = 0 {\displaystyle \epsilon =0} the mean field particle model described above reduces to a simple mutation-selection genetic algorithm with fitness function G and mutation transition M. These nonlinear Markov chain models and their mean field particle interpretation can be extended to time non homogeneous models on general measurable state spaces (including transition states, path spaces and random excursion spaces) and continuous time models. 
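As an illustration, here is a minimal sketch of the mean field particle approximation of the Feynman-Kac model above in the ε = 0 case, i.e. the simple mutation-selection genetic algorithm mentioned in the text. The fitness function G, the transition matrix M and all numerical values are hypothetical placeholders; the exact recursion is iterated only for comparison, which is possible here because the state space is tiny.

```python
# A minimal sketch of the mean field particle (mutation-selection) approximation
# of the Feynman-Kac model above, in the epsilon = 0 case discussed in the text.
# G, M and all numerical values are hypothetical placeholders; the exact
# recursion eta_{n+1} = Psi_G(eta_n) M is iterated only for comparison.
import numpy as np

rng = np.random.default_rng(1)
s, N, steps = 4, 50_000, 8

G = np.array([0.2, 0.5, 0.8, 0.9])        # potential G : S -> (0, 1), hypothetical
M = np.array([[0.6, 0.4, 0.0, 0.0],       # Markov transition matrix, hypothetical
              [0.3, 0.4, 0.3, 0.0],
              [0.0, 0.3, 0.4, 0.3],
              [0.0, 0.0, 0.4, 0.6]])
f = np.arange(s, dtype=float)             # test function f(x) = x

eta = np.full(s, 1.0 / s)                 # exact eta_0
xi = rng.choice(s, size=N, p=eta)         # particles distributed as eta_0

for n in range(steps):
    # Selection: resample every particle proportionally to its fitness G (epsilon = 0).
    w = G[xi] / G[xi].sum()
    xi = xi[rng.choice(N, size=N, p=w)]
    # Mutation: each particle moves independently according to M (inverse-CDF sampling).
    u = rng.random(N)
    xi = (np.cumsum(M[xi], axis=1) < u[:, None]).sum(axis=1)
    # Exact Feynman-Kac recursion, for comparison.
    eta = (eta * G / np.dot(eta, G)) @ M

print("particle estimate of eta_n(f):", f[xi].mean())
print("exact eta_n(f)               :", np.dot(eta, f))
```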
=== Gaussian nonlinear state space models === We consider a sequence of real valued random variables ( X ¯ 0 , X ¯ 1 , ⋯ ) {\displaystyle \left({\overline {X}}_{0},{\overline {X}}_{1},\cdots \right)} defined sequentially by the equations with a collection W n {\displaystyle W_{n}} of independent standard Gaussian random variables, a positive parameter σ, some functions a , b , c : R → R , {\displaystyle a,b,c:\mathbf {R} \to \mathbf {R} ,} and some standard Gaussian initial random state X ¯ 0 {\displaystyle {\overline {X}}_{0}} . We let η n {\displaystyle \eta _{n}} be the probability distribution of the random state X ¯ n {\displaystyle {\overline {X}}_{n}} ; that is, for any bounded measurable function f, we have E ( f ( X ¯ n ) ) = ∫ R f ( x ) η n ( d x ) , {\displaystyle E\left(f({\overline {X}}_{n})\right)=\int _{\mathbf {R} }f(x)\eta _{n}(dx),} with P ( X ¯ n ∈ d x ) = η n ( d x ) {\displaystyle \mathbf {P} \left({\overline {X}}_{n}\in dx\right)=\eta _{n}(dx)} The integral is the Lebesgue integral, and dx stands for an infinitesimal neighborhood of the state x. The Markov transition of the chain is given for any bounded measurable functions f by the formula E ( f ( X ¯ n + 1 ) | X ¯ n = x ) = ∫ R K η n ( x , d y ) f ( y ) , {\displaystyle E\left(\left.f\left({\overline {X}}_{n+1}\right)\right|{\overline {X}}_{n}=x\right)=\int _{\mathbf {R} }K_{\eta _{n}}(x,dy)f(y),} with K η n ( x , d y ) = P ( X ¯ n + 1 ∈ d y | X ¯ n = x ) = 1 2 π σ exp ⁡ { − 1 2 σ 2 ( y − [ b ( x ) ∫ R a ( z ) η n ( d z ) + c ( x ) ] ) 2 } d y {\displaystyle K_{\eta _{n}}(x,dy)=\mathbf {P} \left(\left.{\overline {X}}_{n+1}\in dy\right|{\overline {X}}_{n}=x\right)={\frac {1}{{\sqrt {2\pi }}\sigma }}\exp {\left\{-{\frac {1}{2\sigma ^{2}}}\left(y-\left[b(x)\int _{\mathbf {R} }a(z)\eta _{n}(dz)+c(x)\right]\right)^{2}\right\}}dy} Using the tower property of conditional expectations we prove that the probability distributions η n {\displaystyle \eta _{n}} satisfy the nonlinear equation ∫ R η n + 1 ( d y ) f ( y ) = ∫ R [ ∫ R η n ( d x ) K η n ( x , d y ) ] f ( y ) {\displaystyle \int _{\mathbf {R} }\eta _{n+1}(dy)f(y)=\int _{\mathbf {R} }\left[\int _{\mathbf {R} }\eta _{n}(dx)K_{\eta _{n}}(x,dy)\right]f(y)} for any bounded measurable functions f. 
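As a sketch of how this model can be simulated, note that the Gaussian transition kernel displayed above has mean b(x)·∫a(z)η_n(dz) + c(x) and variance σ², which corresponds to a recursion of the form X̄_{n+1} = E[a(X̄_n)] b(X̄_n) + c(X̄_n) + σ W_n; in the particle approximation, the expectation E[a(X̄_n)] is replaced by the empirical mean over the N particles (this is the mean field particle interpretation made precise below). The functions a, b, c and all numerical values in the sketch are illustrative assumptions, not taken from the article.

```python
# A minimal sketch of a mean field particle simulation of the Gaussian nonlinear
# state space model above.  The recursion below is the one implied by the
# transition kernel K_{eta_n}(x, dy) displayed above (mean b(x) E[a(X_n)] + c(x),
# variance sigma^2); in the particle system E[a(X_n)] is replaced by the
# empirical mean over the N particles.  a, b, c and all values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
N, steps, sigma = 100_000, 25, 0.3

a = np.tanh                        # bounded Lipschitz functions, in line with the
b = lambda x: 0.5 * np.cos(x)      # regularity conditions quoted for convergence
c = lambda x: 0.2 * x

xi = rng.standard_normal(N)        # N independent copies of a standard Gaussian X_0
for n in range(steps):
    mean_a = a(xi).mean()                       # empirical version of E[a(X_n)]
    xi = mean_a * b(xi) + c(xi) + sigma * rng.standard_normal(N)

# Empirical estimates of eta_n(f) = E[f(X_n)] for f(x) = x and f(x) = x^2.
print("E[X_n]   ~", xi.mean())
print("E[X_n^2] ~", (xi ** 2).mean())
```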
This equation is sometimes written in the more synthetic form η n + 1 = Φ ( η n ) = η n K η n ⇔ η n + 1 ( d y ) = ( η n K η n ) ( d y ) = ∫ x ∈ R η n ( d x ) K η n ( x , d y ) {\displaystyle \eta _{n+1}=\Phi \left(\eta _{n}\right)=\eta _{n}K_{\eta _{n}}\quad \Leftrightarrow \quad \eta _{n+1}(dy)=\left(\eta _{n}K_{\eta _{n}}\right)(dy)=\int _{x\in \mathbf {R} }\eta _{n}(dx)K_{\eta _{n}}(x,dy)} The mean field particle interpretation of this model is defined by the Markov chain ξ n ( N ) = ( ξ n ( N , 1 ) , ⋯ , ξ n ( N , N ) ) {\displaystyle \xi _{n}^{(N)}=\left(\xi _{n}^{(N,1)},\cdots ,\xi _{n}^{(N,N)}\right)} on the product space R N {\displaystyle \mathbf {R} ^{N}} by ξ n + 1 ( N , i ) = ( 1 N ∑ j = 1 N a ( ξ n ( N , i ) ) ) b ( ξ n ( N , i ) ) + c ( ξ n ( N , i ) ) + σ W n i 1 ⩽ i ⩽ N {\displaystyle \xi _{n+1}^{(N,i)}=\left({\frac {1}{N}}\sum _{j=1}^{N}a\left(\xi _{n}^{(N,i)}\right)\right)b\left(\xi _{n}^{(N,i)}\right)+c\left(\xi _{n}^{(N,i)}\right)+\sigma W_{n}^{i}\qquad 1\leqslant i\leqslant N} where ξ 0 ( N ) = ( ξ 0 ( N , 1 ) , ⋯ , ξ 0 ( N , N ) ) , ( W n 1 , ⋯ , W n N ) {\displaystyle \xi _{0}^{(N)}=\left(\xi _{0}^{(N,1)},\cdots ,\xi _{0}^{(N,N)}\right),\qquad \left(W_{n}^{1},\cdots ,W_{n}^{N}\right)} stand for N independent copies of X ¯ 0 {\displaystyle {\overline {X}}_{0}} and W n ; n ⩾ 1 , {\displaystyle W_{n};n\geqslant 1,} respectively. For regular models (for instance for bounded Lipschitz functions a, b, c) we have the almost sure convergence 1 N ∑ j = 1 N f ( ξ n ( N , i ) ) = ∫ R f ( y ) η n N ( d y ) → N ↑ ∞ E ( f ( X ¯ n ) ) = ∫ R f ( y ) η n ( d y ) , {\displaystyle {\frac {1}{N}}\sum _{j=1}^{N}f\left(\xi _{n}^{(N,i)}\right)=\int _{\mathbf {R} }f(y)\eta _{n}^{N}(dy)\to _{N\uparrow \infty }E\left(f({\overline {X}}_{n})\right)=\int _{\mathbf {R} }f(y)\eta _{n}(dy),} with the empirical measure η n N = 1 N ∑ j = 1 N δ ξ n ( N , i ) {\displaystyle \eta _{n}^{N}={\frac {1}{N}}\sum _{j=1}^{N}\delta _{\xi _{n}^{(N,i)}}} for any bounded measurable functions f (cf. for instance ). In the above display, δ x {\displaystyle \delta _{x}} stands for the Dirac measure at the state x. === Continuous time mean field models === We consider a standard Brownian motion W ¯ t n {\displaystyle {\overline {W}}_{t_{n}}} (a.k.a. Wiener Process) evaluated on a time mesh sequence t 0 = 0 < t 1 < ⋯ < t n < ⋯ {\displaystyle t_{0}=0<t_{1}<\cdots <t_{n}<\cdots } with a given time step t n − t n − 1 = h {\displaystyle t_{n}-t_{n-1}=h} . We choose c ( x ) = x {\displaystyle c(x)=x} in equation (1), we replace b ( x ) {\displaystyle b(x)} and σ by b ( x ) × h {\displaystyle b(x)\times h} and σ × h {\displaystyle \sigma \times {\sqrt {h}}} , and we write X ¯ t n {\displaystyle {\overline {X}}_{t_{n}}} instead of X ¯ n {\displaystyle {\overline {X}}_{n}} the values of the random states evaluated at the time step t n . 
{\displaystyle t_{n}.} Recalling that ( W ¯ t n + 1 − W ¯ t n ) {\displaystyle \left({\overline {W}}_{t_{n+1}}-{\overline {W}}_{t_{n}}\right)} are independent centered Gaussian random variables with variance t n − t n − 1 = h , {\displaystyle t_{n}-t_{n-1}=h,} the resulting equation can be rewritten in the following form When h → 0, the above equation converge to the nonlinear diffusion process d X ¯ t = E ( a ( X ¯ t ) ) b ( X ¯ t ) d t + σ d W ¯ t {\displaystyle d{\overline {X}}_{t}=E\left(a\left({\overline {X}}_{t}\right)\right)b({\overline {X}}_{t})dt+\sigma d{\overline {W}}_{t}} The mean field continuous time model associated with these nonlinear diffusions is the (interacting) diffusion process ξ t ( N ) = ( ξ t ( N , i ) ) 1 ⩽ i ⩽ N {\displaystyle \xi _{t}^{(N)}=\left(\xi _{t}^{(N,i)}\right)_{1\leqslant i\leqslant N}} on the product space R N {\displaystyle \mathbf {R} ^{N}} defined by d ξ t ( N , i ) = ( 1 N ∑ j = 1 N a ( ξ t ( N , i ) ) ) b ( ξ t ( N , i ) ) + σ d W ¯ t i 1 ⩽ i ⩽ N {\displaystyle d\xi _{t}^{(N,i)}=\left({\frac {1}{N}}\sum _{j=1}^{N}a\left(\xi _{t}^{(N,i)}\right)\right)b\left(\xi _{t}^{(N,i)}\right)+\sigma d{\overline {W}}_{t}^{i}\qquad 1\leqslant i\leqslant N} where ξ 0 ( N ) = ( ξ 0 ( N , 1 ) , ⋯ , ξ 0 ( N , N ) ) , ( W ¯ t 1 , ⋯ , W ¯ t N ) {\displaystyle \xi _{0}^{(N)}=\left(\xi _{0}^{(N,1)},\cdots ,\xi _{0}^{(N,N)}\right),\qquad \left({\overline {W}}_{t}^{1},\cdots ,{\overline {W}}_{t}^{N}\right)} are N independent copies of X ¯ 0 {\displaystyle {\overline {X}}_{0}} and W ¯ t . {\displaystyle {\overline {W}}_{t}.} For regular models (for instance for bounded Lipschitz functions a, b) we have the almost sure convergence 1 N ∑ j = 1 N f ( ξ t ( N , i ) ) = ∫ R f ( y ) η t N ( d y ) → N ↑ ∞ E ( f ( X ¯ t ) ) = ∫ R f ( y ) η t ( d y ) {\displaystyle {\frac {1}{N}}\sum _{j=1}^{N}f\left(\xi _{t}^{(N,i)}\right)=\int _{\mathbf {R} }f(y)\eta _{t}^{N}(dy)\to _{N\uparrow \infty }E\left(f({\overline {X}}_{t})\right)=\int _{\mathbf {R} }f(y)\eta _{t}(dy)} , with η t = Law ( X ¯ t ) , {\displaystyle \eta _{t}={\text{Law}}\left({\overline {X}}_{t}\right),} and the empirical measure η t N = 1 N ∑ j = 1 N δ ξ t ( N , i ) {\displaystyle \eta _{t}^{N}={\frac {1}{N}}\sum _{j=1}^{N}\delta _{\xi _{t}^{(N,i)}}} for any bounded measurable functions f (cf. for instance.). These nonlinear Markov processes and their mean field particle interpretation can be extended to interacting jump-diffusion processes == References == == External links ==
Wikipedia/Mean-field_particle_methods
In computational chemistry, a water model is used to simulate water clusters, liquid water, and aqueous solutions with explicit solvent, and to calculate their thermodynamic properties, often using molecular dynamics or Monte Carlo methods. The models describe intermolecular forces between water molecules and are determined from quantum mechanics, molecular mechanics, experimental results, and combinations thereof. To imitate the specific nature of the intermolecular forces, many types of models have been developed. In general, these can be classified by the following three characteristics: (i) the number of interaction points or sites, (ii) whether the model is rigid or flexible, and (iii) whether the model includes polarization effects. An alternative to the explicit water models is to use an implicit solvation model, also termed a continuum model. Examples of this type of model include the COSMO solvation model, the polarizable continuum model (PCM) and hybrid solvation models. == Simple water models == The rigid models are considered the simplest water models and rely on non-bonded interactions. In these models, bonding interactions are implicitly treated by holonomic constraints. The electrostatic interaction is modeled using Coulomb's law, and the dispersion and repulsion forces using the Lennard-Jones potential. The potential for models such as TIP3P (transferable intermolecular potential with 3 points) and TIP4P is represented by E a b = ∑ i on a ∑ j on b k C q i q j r i j + A r OO 12 − B r OO 6 , {\displaystyle E_{ab}=\sum _{i}^{{\text{on }}a}\sum _{j}^{{\text{on }}b}{\frac {k_{C}q_{i}q_{j}}{r_{ij}}}+{\frac {A}{r_{\text{OO}}^{12}}}-{\frac {B}{r_{\text{OO}}^{6}}},} where kC, the electrostatic constant, has a value of 332.1 Å·kcal/(mol·e²) in the units commonly used in molecular modeling; qi and qj are the partial charges relative to the charge of the electron; rij is the distance between two atoms or charged sites; and A and B are the Lennard-Jones parameters. The charged sites may be on the atoms or on dummy sites (such as lone pairs). In most water models, the Lennard-Jones term applies only to the interaction between the oxygen atoms. The figure below shows the general shape of the 3- to 6-site water models. The exact geometric parameters (the OH distance and the HOH angle) vary depending on the model. == 2-site == A 2-site model of water based on the familiar three-site SPC model (see below) has been shown to predict the dielectric properties of water using site-renormalized molecular fluid theory. == 3-site == Three-site models have three interaction points corresponding to the three atoms of the water molecule. Each site has a point charge, and the site corresponding to the oxygen atom also has the Lennard-Jones parameters. Since 3-site models achieve a high computational efficiency, they are widely used for many applications of molecular dynamics simulations. Most of the models use a rigid geometry matching that of actual water molecules. An exception is the SPC model, which assumes an ideal tetrahedral shape (HOH angle of 109.47°) instead of the observed angle of 104.5°. The table below lists the parameters for some 3-site models. 
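As a concrete illustration of the pairwise potential above, the following is a minimal sketch that evaluates E_ab for two rigid 3-site water molecules. The partial charges, the Lennard-Jones parameters A and B, and the coordinates are hypothetical placeholders, not the parameters of any particular model such as TIP3P or SPC.

```python
# A minimal sketch evaluating the pairwise energy E_ab above for two rigid
# 3-site waters: Coulomb terms over all 3 x 3 site pairs plus one Lennard-Jones
# term between the oxygens.  Charges, A, B and coordinates are hypothetical
# placeholders, not the parameters of TIP3P, SPC or any other specific model.
import numpy as np

K_C = 332.1                       # electrostatic constant, A*kcal/(mol*e^2), as quoted above
A, B = 6.0e5, 6.0e2               # hypothetical Lennard-Jones parameters
Q = np.array([-0.8, 0.4, 0.4])    # hypothetical partial charges for O, H, H (units of e)

def pair_energy(mol_a, mol_b):
    """mol_a, mol_b: (3, 3) arrays of O, H, H coordinates in Angstrom."""
    e_coulomb = 0.0
    for i in range(3):
        for j in range(3):
            r_ij = np.linalg.norm(mol_a[i] - mol_b[j])
            e_coulomb += K_C * Q[i] * Q[j] / r_ij
    r_oo = np.linalg.norm(mol_a[0] - mol_b[0])      # Lennard-Jones acts on O-O only
    return e_coulomb + A / r_oo**12 - B / r_oo**6

# Two schematic water geometries roughly 2.8 Angstrom apart (not optimized).
water1 = np.array([[0.00, 0.00, 0.0], [0.96, 0.00, 0.0], [-0.24, 0.93, 0.0]])
water2 = np.array([[2.80, 0.00, 0.0], [3.76, 0.00, 0.0], [2.56, 0.93, 0.0]])
print("pair interaction energy (kcal/mol):", round(pair_energy(water1, water2), 2))
```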
The SPC/E model adds an average polarization correction to the potential energy function: E pol = 1 2 ∑ i ( μ − μ 0 ) 2 α i , {\displaystyle E_{\text{pol}}={\frac {1}{2}}\sum _{i}{\frac {(\mu -\mu ^{0})^{2}}{\alpha _{i}}},} where μ is the electric dipole moment of the effectively polarized water molecule (2.35 D for the SPC/E model), μ0 is the dipole moment of an isolated water molecule (1.85 D from experiment), and αi is an isotropic polarizability constant, with a value of 1.608×10−40 F·m2. Since the charges in the model are constant, this correction just results in adding 1.25 kcal/mol (5.22 kJ/mol) to the total energy. The SPC/E model results in a better density and diffusion constant than the SPC model. The TIP3P model implemented in the CHARMM force field is a slightly modified version of the original. The difference lies in the Lennard-Jones parameters: unlike TIP3P, the CHARMM version of the model places Lennard-Jones parameters on the hydrogen atoms too, in addition to the one on oxygen. The charges are not modified. The three-site TIP3P model has better performance in calculating specific heats. === Flexible SPC water model === The flexible simple point-charge water model (or flexible SPC water model) is a re-parametrization of the three-site SPC water model. The SPC model is rigid, whilst the flexible SPC model is flexible. In the model of Toukan and Rahman, the O–H stretching is made anharmonic, and thus the dynamical behavior is well described. This is one of the most accurate three-center water models that does not take polarization into account. In molecular dynamics simulations it gives the correct density and dielectric permittivity of water. Flexible SPC is implemented in the programs MDynaMix and Abalone. === Other models === Ferguson (flexible SPC), CVFF (flexible), MG (flexible and dissociative), KKY potential (flexible model), BLXL (smear charged potential). == 4-site == The four-site models have four interaction points, obtained by adding one dummy atom near the oxygen along the bisector of the HOH angle of the three-site models (labeled M in the figure). The dummy atom carries only a negative charge. This model improves the electrostatic distribution around the water molecule. The first model to use this approach was the Bernal–Fowler model published in 1933, which may also be the earliest water model. However, the BF model reproduces the bulk properties of water, such as the density and heat of vaporization, poorly, and is thus of historical interest only. This is a consequence of the parameterization method; newer models, developed after modern computers became available, were parameterized by running Metropolis Monte Carlo or molecular dynamics simulations and adjusting the parameters until the bulk properties are reproduced well enough. The TIP4P model, first published in 1983, is widely implemented in computational chemistry software packages and often used for the simulation of biomolecular systems. There have been subsequent reparameterizations of the TIP4P model for specific uses: the TIP4P-Ew model, for use with Ewald summation methods; the TIP4P/Ice, for simulation of solid water ice; TIP4P/2005, a general parameterization for simulating the entire phase diagram of condensed water; and TIP4PQ/2005, a similar model but designed to accurately describe the properties of solid and liquid water when quantum effects are included in the simulation. Most of the four-site water models use an OH distance and HOH angle which match those of the free water molecule. 
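To make the 4-site construction concrete, the following is a minimal sketch of how the dummy site M can be placed on the bisector of the HOH angle at a fixed distance from the oxygen. The 0.15 Å O-M distance and the coordinates are illustrative assumptions, not the parameters of a specific model.

```python
# A minimal sketch of the 4-site geometry: the dummy site M is placed on the
# bisector of the H-O-H angle at a fixed distance from the oxygen.  The 0.15 A
# O-M distance and the coordinates are illustrative assumptions only.
import numpy as np

def m_site(o, h1, h2, d_om=0.15):
    """Return the dummy-site position for one water molecule (coordinates in Angstrom)."""
    bisector = (h1 - o) + (h2 - o)          # points from O along the H-O-H bisector
    bisector /= np.linalg.norm(bisector)
    return o + d_om * bisector

o  = np.array([0.0000, 0.0000, 0.0])
h1 = np.array([0.9572, 0.0000, 0.0])
h2 = np.array([-0.2400, 0.9266, 0.0])       # roughly a 104.5 degree H-O-H angle
print("M site position:", np.round(m_site(o, h1, h2), 4))
```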
One exception is the OPC model, in which no geometry constraints are imposed other than the fundamental C2v molecular symmetry of the water molecule. Instead, the point charges and their positions are optimized to best describe the electrostatics of the water molecule. OPC reproduces a comprehensive set of bulk properties more accurately than several of the commonly used rigid n-site water models. The OPC model is implemented within the AMBER force field. Others: q-TIP4P/F (flexible), TIP4P/2005f (flexible). == 5-site == The 5-site models place the negative charge on dummy atoms (labelled L) representing the lone pairs of the oxygen atom, with a tetrahedral-like geometry. An early model of this type was the BNS model of Ben-Naim and Stillinger, proposed in 1971, soon succeeded by the ST2 model of Stillinger and Rahman in 1974. Mainly due to their higher computational cost, five-site models were not developed much until 2000, when the TIP5P model of Mahoney and Jorgensen was published. When compared with earlier models, the TIP5P model results in improvements in the geometry for the water dimer, a more "tetrahedral" water structure that better reproduces the experimental radial distribution functions from neutron diffraction, and the temperature of maximal density of water. The TIP5P-E model is a reparameterization of TIP5P for use with Ewald sums. Note, however, that the BNS and ST2 models do not use Coulomb's law directly for the electrostatic terms, but a modified version that is scaled down at short distances by multiplying it by the switching function S(r): S ( r i j ) = { 0 if r i j ≤ R L , ( r i j − R L ) 2 ( 3 R U − R L − 2 r i j ) ( R U − R L ) 3 if R L ≤ r i j ≤ R U , 1 if R U ≤ r i j . {\displaystyle S(r_{ij})={\begin{cases}0&{\text{if }}r_{ij}\leq R_{\text{L}},\\{\frac {(r_{ij}-R_{L})^{2}(3R_{\text{U}}-R_{\text{L}}-2r_{ij})}{(R_{\text{U}}-R_{\text{L}})^{3}}}&{\text{if }}R_{\text{L}}\leq r_{ij}\leq R_{\text{U}},\\1&{\text{if }}R_{\text{U}}\leq r_{ij}.\end{cases}}} Thus, the RL and RU parameters only apply to BNS and ST2. == 6-site == Originally designed to study water/ice systems, a 6-site model that combines all the sites of the 4- and 5-site models was developed by Nada and van der Eerden. Since it had a very high melting temperature when employed under periodic electrostatic conditions (Ewald summation), a modified version, optimized by using the Ewald method for estimating the Coulomb interaction, was published later. == Other == The effect of the explicit solvent model on solute behavior in biomolecular simulations has also been extensively studied. It was shown that explicit water models affected the specific solvation and dynamics of unfolded peptides, while the conformational behavior and flexibility of folded peptides remained intact. MB model. A more abstract model resembling the Mercedes-Benz logo that reproduces some features of water in two-dimensional systems. It is not used as such for simulations of "real" (i.e., three-dimensional) systems, but it is useful for qualitative studies and for educational purposes. Coarse-grained models. One- and two-site models of water have also been developed. In coarse-grain models, each site can represent several water molecules. Many-body models. Water models built using training-set configurations solved quantum mechanically, which then use machine learning protocols to extract potential-energy surfaces. 
These potential-energy surfaces are fed into MD simulations for an unprecedented degree of accuracy in computing physical properties of condensed phase systems. Another classification of many body models is based on the expansion of the underlying electrostatics, e.g., the SCME (Single Center Multipole Expansion) model. == Computational cost == The computational cost of a water simulation increases with the number of interaction sites in the water model. The CPU time is approximately proportional to the number of interatomic distances that need to be computed. For the 3-site model, 9 distances are required for each pair of water molecules (every atom of one molecule against every atom of the other molecule, or 3 × 3). For the 4-site model, 10 distances are required (every charged site with every charged site, plus the O–O interaction, or 3 × 3 + 1). For the 5-site model, 17 distances are required (4 × 4 + 1). Finally, for the 6-site model, 26 distances are required (5 × 5 + 1). When using rigid water models in molecular dynamics, there is an additional cost associated with keeping the structure constrained, using constraint algorithms (although with bond lengths constrained it is often possible to increase the time step). == See also == Water (properties) Water (data page) Water dimer Force field (chemistry) Comparison of force field implementations Molecular mechanics Molecular modelling Comparison of software for molecular mechanics modeling Solvent models == References ==
Wikipedia/Flexible_SPC_water_model
SCP-ISM, or screened Coulomb potentials implicit solvent model, is a continuum approximation of solvent effects for use in computer simulations of biological macromolecules, such as proteins and nucleic acids, usually within the framework of molecular dynamics. It is based on the classic theory of polar liquids, as developed by Peter Debye and corrected by Lars Onsager to incorporate reaction field effects. The model can be combined with quantum chemical calculations to formally derive a continuum model of solvent effects suitable for computer simulations of small and large molecular systems. The model is included in the CHARMM molecular mechanics code. == References == == External links == An essay on SCP-ISM CHARMM website
Wikipedia/Screened_Coulomb_potentials_implicit_solvent_model
In solid-state physics, the tight-binding model (or TB model) is an approach to the calculation of electronic band structure using an approximate set of wave functions based upon superposition of wave functions for isolated atoms located at each atomic site. The method is closely related to the LCAO method (linear combination of atomic orbitals method) used in chemistry. Tight-binding models are applied to a wide variety of solids. The model gives good qualitative results in many cases and can be combined with other models that give better results where the tight-binding model fails. Though the tight-binding model is a one-electron model, the model also provides a basis for more advanced calculations like the calculation of surface states and application to various kinds of many-body problem and quasiparticle calculations. == Introduction == The name "tight binding" of this electronic band structure model suggests that this quantum mechanical model describes the properties of tightly bound electrons in solids. The electrons in this model should be tightly bound to the atom to which they belong and they should have limited interaction with states and potentials on surrounding atoms of the solid. As a result, the wave function of the electron will be rather similar to the atomic orbital of the free atom to which it belongs. The energy of the electron will also be rather close to the ionization energy of the electron in the free atom or ion because the interaction with potentials and states on neighboring atoms is limited. Though the mathematical formulation of the one-particle tight-binding Hamiltonian may look complicated at first glance, the model is not complicated at all and can be understood intuitively quite easily. There are only three kinds of matrix elements that play a significant role in the theory. Two of those three kinds of elements should be close to zero and can often be neglected. The most important elements in the model are the interatomic matrix elements, which would simply be called the bond energies by a chemist. In general there are a number of atomic energy levels and atomic orbitals involved in the model. This can lead to complicated band structures because the orbitals belong to different point-group representations. The reciprocal lattice and the Brillouin zone often belong to a different space group than the crystal of the solid. High-symmetry points in the Brillouin zone belong to different point-group representations. When simple systems like the lattices of elements or simple compounds are studied it is often not very difficult to calculate eigenstates in high-symmetry points analytically. So the tight-binding model can provide nice examples for those who want to learn more about group theory. The tight-binding model has a long history and has been applied in many ways and with many different purposes and different outcomes. The model doesn't stand on its own. Parts of the model can be filled in or extended by other kinds of calculations and models like the nearly-free electron model. The model itself, or parts of it, can serve as the basis for other calculations. In the study of conductive polymers, organic semiconductors and molecular electronics, for example, tight-binding-like models are applied in which the role of the atoms in the original concept is replaced by the molecular orbitals of conjugated systems and where the interatomic matrix elements are replaced by inter- or intramolecular hopping and tunneling parameters. 
These conductors nearly all have very anisotropic properties and sometimes are almost perfectly one-dimensional. == Historical background == By 1928, the idea of a molecular orbital had been advanced by Robert Mulliken, who was influenced considerably by the work of Friedrich Hund. The LCAO method for approximating molecular orbitals was introduced in 1928 by B. N. Finklestein and G. E. Horowitz, while the LCAO method for solids was developed by Felix Bloch, as part of his doctoral dissertation in 1928, concurrently with and independent of the LCAO-MO approach. A much simpler interpolation scheme for approximating the electronic band structure, especially for the d-bands of transition metals, is the parameterized tight-binding method conceived in 1954 by John Clarke Slater and George Fred Koster, sometimes referred to as the SK tight-binding method. With the SK tight-binding method, electronic band structure calculations on a solid need not be carried out with full rigor as in the original Bloch's theorem but, rather, first-principles calculations are carried out only at high-symmetry points and the band structure is interpolated over the remainder of the Brillouin zone between these points. In this approach, interactions between different atomic sites are considered as perturbations. There exist several kinds of interactions we must consider. The crystal Hamiltonian is only approximately a sum of atomic Hamiltonians located at different sites, and atomic wave functions overlap adjacent atomic sites in the crystal, and so are not accurate representations of the exact wave function. There are further explanations in the next section with some mathematical expressions. In recent research on strongly correlated materials, the tight binding approach is a basic approximation because highly localized electrons, like 3d transition metal electrons, sometimes display strongly correlated behavior. In this case, the role of the electron-electron interaction must be considered using a many-body physics description. The tight-binding model is typically used for calculations of electronic band structure and band gaps in the static regime. However, in combination with other methods such as the random phase approximation (RPA) model, the dynamic response of systems may also be studied. In 2019, Bannwarth et al. introduced the GFN2-xTB method, primarily for the calculation of structures and non-covalent interaction energies. == Mathematical formulation == We introduce the atomic orbitals φ m ( r ) {\displaystyle \varphi _{m}(\mathbf {r} )} , which are eigenfunctions of the Hamiltonian H a t {\displaystyle H_{\rm {at}}} of a single isolated atom. When the atom is placed in a crystal, this atomic wave function overlaps adjacent atomic sites, and so is not a true eigenfunction of the crystal Hamiltonian. The overlap is less when electrons are tightly bound, which is the source of the descriptor "tight-binding". 
Any corrections to the atomic potential Δ U {\displaystyle \Delta U} required to obtain the true Hamiltonian H {\displaystyle H} of the system, are assumed small: H ( r ) = H a t ( r ) + ∑ R n ≠ 0 V ( r − R n ) = H a t ( r ) + Δ U ( r ) , {\displaystyle H(\mathbf {r} )=H_{\mathrm {at} }(\mathbf {r} )+\sum _{\mathbf {R} _{n}\neq \mathbf {0} }V(\mathbf {r} -\mathbf {R} _{n})=H_{\mathrm {at} }(\mathbf {r} )+\Delta U(\mathbf {r} )\ ,} where V ( r − R n ) {\displaystyle V(\mathbf {r} -\mathbf {R} _{n})} denotes the atomic potential of one atom located at site R n {\displaystyle \mathbf {R} _{n}} in the crystal lattice. A solution ψ m {\displaystyle \psi _{m}} to the time-independent single electron Schrödinger equation is then approximated as a linear combination of atomic orbitals φ m ( r − R n ) {\displaystyle \varphi _{m}(\mathbf {r-R_{n}} )} : ψ m ( r ) = ∑ R n b m ( R n ) φ m ( r − R n ) {\displaystyle \psi _{m}(\mathbf {r} )=\sum _{\mathbf {R} _{n}}b_{m}(\mathbf {R} _{n})\ \varphi _{m}(\mathbf {r} -\mathbf {R} _{n})} , where m {\displaystyle m} refers to the m-th atomic energy level. === Translational symmetry and normalization === The Bloch theorem states that the wave function in a crystal can change under translation only by a phase factor: ψ ( r + R ℓ ) = e i k ⋅ R ℓ ψ ( r ) , {\displaystyle \psi (\mathbf {r+R_{\ell }} )=e^{i\mathbf {k\cdot R_{\ell }} }\psi (\mathbf {r} )\ ,} where k {\displaystyle \mathbf {k} } is the wave vector of the wave function. Consequently, the coefficients satisfy ∑ R n b m ( R n ) φ m ( r − R n + R ℓ ) = e i k ⋅ R ℓ ∑ R n b m ( R n ) φ m ( r − R n ) . {\displaystyle \sum _{\mathbf {R} _{n}}b_{m}(\mathbf {R} _{n})\ \varphi _{m}(\mathbf {r} -\mathbf {R} _{n}+\mathbf {R} _{\ell })=e^{i\mathbf {k} \cdot \mathbf {R} _{\ell }}\sum _{\mathbf {R} _{n}}b_{m}(\mathbf {R} _{n})\ \varphi _{m}(\mathbf {r} -\mathbf {R} _{n})\ .} By substituting R p = R n − R ℓ {\displaystyle \mathbf {R} _{p}=\mathbf {R} _{n}-\mathbf {R_{\ell }} } , we find b m ( R p + R ℓ ) = e i k ⋅ R ℓ b m ( R p ) , {\displaystyle b_{m}(\mathbf {R} _{p}+\mathbf {R} _{\ell })=e^{i\mathbf {k\cdot R_{\ell }} }b_{m}(\mathbf {R} _{p})\ ,} (where in RHS we have replaced the dummy index R n {\displaystyle \mathbf {R} _{n}} with R p {\displaystyle \mathbf {R} _{p}} ) or b m ( R ℓ ) = e i k ⋅ R ℓ b m ( 0 ) . 
{\displaystyle b_{m}(\mathbf {R} _{\ell })=e^{i\mathbf {k} \cdot \mathbf {R} _{\ell }}b_{m}(\mathbf {0} )\ .} Normalizing the wave function to unity: ∫ d 3 r ψ m ∗ ( r ) ψ m ( r ) = 1 {\displaystyle \int d^{3}r\ \psi _{m}^{*}(\mathbf {r} )\psi _{m}(\mathbf {r} )=1} = ∑ R n b m ∗ ( R n ) ∑ R ℓ b m ( R ℓ ) ∫ d 3 r φ m ∗ ( r − R n ) φ m ( r − R ℓ ) {\displaystyle =\sum _{\mathbf {R} _{n}}b_{m}^{*}(\mathbf {R} _{n})\sum _{\mathbf {R_{\ell }} }b_{m}(\mathbf {R_{\ell }} )\int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})\varphi _{m}(\mathbf {r} -\mathbf {R} _{\ell })} = b m ∗ ( 0 ) b m ( 0 ) ∑ R n e − i k ⋅ R n ∑ R ℓ e i k ⋅ R ℓ ∫ d 3 r φ m ∗ ( r − R n ) φ m ( r − R ℓ ) {\displaystyle =b_{m}^{*}(0)b_{m}(0)\sum _{\mathbf {R} _{n}}e^{-i\mathbf {k\cdot R_{n}} }\sum _{\mathbf {R_{\ell }} }e^{i\mathbf {k\cdot R_{\ell }} }\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})\varphi _{m}(\mathbf {r} -\mathbf {R} _{\ell })} = N b m ∗ ( 0 ) b m ( 0 ) ∑ R p e − i k ⋅ R p ∫ d 3 r φ m ∗ ( r − R p ) φ m ( r ) {\displaystyle =Nb_{m}^{*}(0)b_{m}(0)\sum _{\mathbf {R} _{p}}e^{-i\mathbf {k} \cdot \mathbf {R} _{p}}\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{p})\varphi _{m}(\mathbf {r} )\ } = N b m ∗ ( 0 ) b m ( 0 ) ∑ R p e i k ⋅ R p ∫ d 3 r φ m ∗ ( r ) φ m ( r − R p ) , {\displaystyle =Nb_{m}^{*}(0)b_{m}(0)\sum _{\mathbf {R} _{p}}e^{i\mathbf {k} \cdot \mathbf {R} _{p}}\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} )\varphi _{m}(\mathbf {r} -\mathbf {R} _{p})\ ,} so the normalization sets b m ( 0 ) {\displaystyle b_{m}(0)} as b m ∗ ( 0 ) b m ( 0 ) = 1 N ⋅ 1 1 + ∑ R p ≠ 0 e i k ⋅ R p α m ( R p ) , {\displaystyle b_{m}^{*}(0)b_{m}(0)={\frac {1}{N}}\ \cdot \ {\frac {1}{1+\sum _{\mathbf {R} _{p}\neq 0}e^{i\mathbf {k} \cdot \mathbf {R} _{p}}\alpha _{m}(\mathbf {R} _{p})}}\ ,} where α m ( R p ) {\displaystyle {\alpha _{m}(\mathbf {R} _{p})}} are the atomic overlap integrals, which frequently are neglected resulting in b m ( 0 ) ≈ 1 N , {\displaystyle b_{m}(0)\approx {\frac {1}{\sqrt {N}}}\ ,} and ψ m ( r ) ≈ 1 N ∑ R n e i k ⋅ R n φ m ( r − R n ) . 
{\displaystyle \psi _{m}(\mathbf {r} )\approx {\frac {1}{\sqrt {N}}}\sum _{\mathbf {R} _{n}}e^{i\mathbf {k} \cdot \mathbf {R} _{n}}\ \varphi _{m}(\mathbf {r} -\mathbf {R} _{n})\ .} === The tight binding Hamiltonian === Using the tight binding form for the wave function, and assuming only the m-th atomic energy level is important for the m-th energy band, the Bloch energies ε m {\displaystyle \varepsilon _{m}} are of the form ε m = ∫ d 3 r ψ m ∗ ( r ) H ( r ) ψ m ( r ) {\displaystyle \varepsilon _{m}=\int d^{3}r\ \psi _{m}^{*}(\mathbf {r} )H(\mathbf {r} )\psi _{m}(\mathbf {r} )} = ∑ R n b m ∗ ( R n ) ∫ d 3 r φ m ∗ ( r − R n ) H ( r ) ψ m ( r ) {\displaystyle =\sum _{\mathbf {R} _{n}}b_{m}^{*}(\mathbf {R} _{n})\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})H(\mathbf {r} )\psi _{m}(\mathbf {r} )} = ∑ R n b m ∗ ( R n ) ∫ d 3 r φ m ∗ ( r − R n ) H a t ( r ) ψ m ( r ) + ∑ R n b m ∗ ( R n ) ∫ d 3 r φ m ∗ ( r − R n ) Δ U ( r ) ψ m ( r ) {\displaystyle =\sum _{\mathbf {R} _{n}}b_{m}^{*}(\mathbf {R} _{n})\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})H_{\mathrm {at} }(\mathbf {r} )\psi _{m}(\mathbf {r} )+\sum _{\mathbf {R} _{n}}b_{m}^{*}(\mathbf {R} _{n})\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})\Delta U(\mathbf {r} )\psi _{m}(\mathbf {r} )} = ∑ R n , R l b m ∗ ( R n ) b m ( R l ) ∫ d 3 r φ m ∗ ( r − R n ) H a t ( r ) φ m ( r − R l ) + b m ∗ ( 0 ) ∑ R n e − i k ⋅ R n ∫ d 3 r φ m ∗ ( r − R n ) Δ U ( r ) ψ m ( r ) {\displaystyle =\sum _{\mathbf {R} _{n},\mathbf {R} _{l}}b_{m}^{*}(\mathbf {R} _{n})b_{m}(\mathbf {R} _{l})\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})H_{\mathrm {at} }(\mathbf {r} )\varphi _{m}(\mathbf {r} -\mathbf {R} _{l})+b_{m}^{*}(0)\sum _{\mathbf {R} _{n}}e^{-i\mathbf {k} \cdot \mathbf {R} _{n}}\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})\Delta U(\mathbf {r} )\psi _{m}(\mathbf {r} )} = b m ∗ ( 0 ) b m ( 0 ) N ∫ d 3 r φ m ∗ ( r ) H a t ( r ) φ m ( r ) + b m ∗ ( 0 ) ∑ R n e − i k ⋅ R n ∫ d 3 r φ m ∗ ( r − R n ) Δ U ( r ) ψ m ( r ) {\displaystyle =b_{m}^{*}(\mathbf {0} )b_{m}(\mathbf {0} )\ N\int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} )H_{\mathrm {at} }(\mathbf {r} )\varphi _{m}(\mathbf {r} )+b_{m}^{*}(0)\sum _{\mathbf {R} _{n}}e^{-i\mathbf {k} \cdot \mathbf {R} _{n}}\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})\Delta U(\mathbf {r} )\psi _{m}(\mathbf {r} )} ≈ E m + b m ∗ ( 0 ) ∑ R n e − i k ⋅ R n ∫ d 3 r φ m ∗ ( r − R n ) Δ U ( r ) ψ m ( r ) . {\displaystyle \approx E_{m}+b_{m}^{*}(0)\sum _{\mathbf {R} _{n}}e^{-i\mathbf {k} \cdot \mathbf {R} _{n}}\ \int d^{3}r\ \varphi _{m}^{*}(\mathbf {r} -\mathbf {R} _{n})\Delta U(\mathbf {r} )\psi _{m}(\mathbf {r} )\ .} Here in the last step it was assumed that the overlap integral is zero and thus b m ∗ ( 0 ) b m ( 0 ) = 1 N {\displaystyle b_{m}^{*}(\mathbf {0} )b_{m}(\mathbf {0} )={\frac {1}{N}}} . 
The energy then becomes ε m ( k ) = E m − N | b m ( 0 ) | 2 ( β m + ∑ R n ≠ 0 ∑ l γ m , l ( R n ) e i k ⋅ R n ) , {\displaystyle \varepsilon _{m}(\mathbf {k} )=E_{m}-N\ |b_{m}(0)|^{2}\left(\beta _{m}+\sum _{\mathbf {R} _{n}\neq 0}\sum _{l}\gamma _{m,l}(\mathbf {R} _{n})e^{i\mathbf {k} \cdot \mathbf {R} _{n}}\right)\ ,} = E m − β m + ∑ R n ≠ 0 ∑ l e i k ⋅ R n γ m , l ( R n ) 1 + ∑ R n ≠ 0 ∑ l e i k ⋅ R n α m , l ( R n ) , {\displaystyle =E_{m}-\ {\frac {\beta _{m}+\sum _{\mathbf {R} _{n}\neq 0}\sum _{l}e^{i\mathbf {k} \cdot \mathbf {R} _{n}}\gamma _{m,l}(\mathbf {R} _{n})}{\ \ 1+\sum _{\mathbf {R} _{n}\neq 0}\sum _{l}e^{i\mathbf {k} \cdot \mathbf {R} _{n}}\alpha _{m,l}(\mathbf {R} _{n})}}\ ,} where Em is the energy of the m-th atomic level, and α m , l {\displaystyle \alpha _{m,l}} , β m {\displaystyle \beta _{m}} and γ m , l {\displaystyle \gamma _{m,l}} are the tight binding matrix elements discussed below. === The tight binding matrix elements === The elements β m = − ∫ φ m ∗ ( r ) Δ U ( r ) φ m ( r ) d 3 r , {\displaystyle \beta _{m}=-\int {\varphi _{m}^{*}(\mathbf {r} )\Delta U(\mathbf {r} )\varphi _{m}(\mathbf {r} )\,d^{3}r}{\text{,}}} are the atomic energy shift due to the potential on neighboring atoms. This term is relatively small in most cases. If it is large it means that potentials on neighboring atoms have a large influence on the energy of the central atom. The next class of terms γ m , l ( R n ) = − ∫ φ m ∗ ( r ) Δ U ( r ) φ l ( r − R n ) d 3 r , {\displaystyle \gamma _{m,l}(\mathbf {R} _{n})=-\int {\varphi _{m}^{*}(\mathbf {r} )\Delta U(\mathbf {r} )\varphi _{l}(\mathbf {r} -\mathbf {R} _{n})\,d^{3}r}{\text{,}}} is the interatomic matrix element between the atomic orbitals m and l on adjacent atoms. It is also called the bond energy or two center integral and it is the dominant term in the tight binding model. The last class of terms α m , l ( R n ) = ∫ φ m ∗ ( r ) φ l ( r − R n ) d 3 r , {\displaystyle \alpha _{m,l}(\mathbf {R} _{n})=\int {\varphi _{m}^{*}(\mathbf {r} )\varphi _{l}(\mathbf {r} -\mathbf {R} _{n})\,d^{3}r}{\text{,}}} denote the overlap integrals between the atomic orbitals m and l on adjacent atoms. These, too, are typically small; if not, then Pauli repulsion has a non-negligible influence on the energy of the central atom. == Evaluation of the matrix elements == As mentioned before the values of the β m {\displaystyle \beta _{m}} -matrix elements are not so large in comparison with the ionization energy because the potentials of neighboring atoms on the central atom are limited. If β m {\displaystyle \beta _{m}} is not relatively small it means that the potential of the neighboring atom on the central atom is not small either. In that case it is an indication that the tight binding model is not a very good model for the description of the band structure for some reason. The interatomic distances can be too small or the charges on the atoms or ions in the lattice is wrong for example. The interatomic matrix elements γ m , l {\displaystyle \gamma _{m,l}} can be calculated directly if the atomic wave functions and the potentials are known in detail. Most often this is not the case. There are numerous ways to get parameters for these matrix elements. Parameters can be obtained from chemical bond energy data. Energies and eigenstates on some high symmetry points in the Brillouin zone can be evaluated and values integrals in the matrix elements can be matched with band structure data from other sources. 
The interatomic overlap matrix elements α m , l {\displaystyle \alpha _{m,l}} should be rather small or neglectable. If they are large it is again an indication that the tight binding model is of limited value for some purposes. Large overlap is an indication for too short interatomic distance for example. In metals and transition metals the broad s-band or sp-band can be fitted better to an existing band structure calculation by the introduction of next-nearest-neighbor matrix elements and overlap integrals but fits like that don't yield a very useful model for the electronic wave function of a metal. Broad bands in dense materials are better described by a nearly free electron model. The tight binding model works particularly well in cases where the band width is small and the electrons are strongly localized, like in the case of d-bands and f-bands. The model also gives good results in the case of open crystal structures, like diamond or silicon, where the number of neighbors is small. The model can easily be combined with a nearly free electron model in a hybrid NFE-TB model. == Connection to Wannier functions == Bloch functions describe the electronic states in a periodic crystal lattice. Bloch functions can be represented as a Fourier series ψ m ( k , r ) = 1 N ∑ n a m ( R n , r ) e i k ⋅ R n , {\displaystyle \psi _{m}(\mathbf {k} ,\mathbf {r} )={\frac {1}{\sqrt {N}}}\sum _{n}{a_{m}(\mathbf {R} _{n},\mathbf {r} )}e^{i\mathbf {k} \cdot \mathbf {R} _{n}}\ ,} where R n {\displaystyle \mathbf {R} _{n}} denotes an atomic site in a periodic crystal lattice, k {\displaystyle \mathbf {k} } is the wave vector of the Bloch's function, r {\displaystyle \mathbf {r} } is the electron position, m {\displaystyle m} is the band index, and the sum is over all N {\displaystyle N} atomic sites. The Bloch's function is an exact eigensolution for the wave function of an electron in a periodic crystal potential corresponding to an energy E m ( k ) {\displaystyle E_{m}(\mathbf {k} )} , and is spread over the entire crystal volume. Using the Fourier transform analysis, a spatially localized wave function for the m-th energy band can be constructed from multiple Bloch's functions: a m ( R n , r ) = 1 N ∑ k e − i k ⋅ R n ψ m ( k , r ) = 1 N ∑ k e i k ⋅ ( r − R n ) u m ( k , r ) . {\displaystyle a_{m}(\mathbf {R} _{n},\mathbf {r} )={\frac {1}{\sqrt {N}}}\sum _{\mathbf {k} }{e^{-i\mathbf {k} \cdot \mathbf {R} _{n}}\psi _{m}(\mathbf {k} ,\mathbf {r} )}={\frac {1}{\sqrt {N}}}\sum _{\mathbf {k} }{e^{i\mathbf {k} \cdot (\mathbf {r} -\mathbf {R} _{n})}u_{m}(\mathbf {k} ,\mathbf {r} )}.} These real space wave functions a m ( R n , r ) {\displaystyle {a_{m}(\mathbf {R} _{n},\mathbf {r} )}} are called Wannier functions, and are fairly closely localized to the atomic site R n {\displaystyle \mathbf {R} _{n}} . Of course, if we have exact Wannier functions, the exact Bloch functions can be derived using the inverse Fourier transform. However it is not easy to calculate directly either Bloch functions or Wannier functions. An approximate approach is necessary in the calculation of electronic structures of solids. If we consider the extreme case of isolated atoms, the Wannier function would become an isolated atomic orbital. That limit suggests the choice of an atomic wave function as an approximate form for the Wannier function, the so-called tight binding approximation. == Second quantization == Modern explanations of electronic structure like t-J model and Hubbard model are based on tight binding model. 
Tight binding can be understood by working under a second quantization formalism. Using the atomic orbital as a basis state, the second quantization Hamiltonian operator in the tight binding framework can be written as: H = − t ∑ ⟨ i , j ⟩ , σ ( c i , σ † c j , σ + h . c . ) {\displaystyle H=-t\sum _{\langle i,j\rangle ,\sigma }(c_{i,\sigma }^{\dagger }c_{j,\sigma }^{}+h.c.)} , c i σ † , c j σ {\displaystyle c_{i\sigma }^{\dagger },c_{j\sigma }} - creation and annihilation operators σ {\displaystyle \displaystyle \sigma } - spin polarization t {\displaystyle \displaystyle t} - hopping integral ⟨ i , j ⟩ {\displaystyle \displaystyle \langle i,j\rangle } - nearest neighbor index h . c . {\displaystyle \displaystyle h.c.} - the hermitian conjugate of the other term(s) Here, hopping integral t {\displaystyle \displaystyle t} corresponds to the transfer integral γ {\displaystyle \displaystyle \gamma } in tight binding model. Considering extreme cases of t → 0 {\displaystyle t\rightarrow 0} , it is impossible for an electron to hop into neighboring sites. This case is the isolated atomic system. If the hopping term is turned on ( t > 0 {\displaystyle \displaystyle t>0} ) electrons can stay in both sites lowering their kinetic energy. In the strongly correlated electron system, it is necessary to consider the electron-electron interaction. This term can be written in H e e = 1 2 ∑ n , m , σ ⟨ n 1 m 1 , n 2 m 2 | e 2 | r 1 − r 2 | | n 3 m 3 , n 4 m 4 ⟩ c n 1 m 1 σ 1 † c n 2 m 2 σ 2 † c n 4 m 4 σ 2 c n 3 m 3 σ 1 {\displaystyle \displaystyle H_{ee}={\frac {1}{2}}\sum _{n,m,\sigma }\langle n_{1}m_{1},n_{2}m_{2}|{\frac {e^{2}}{|r_{1}-r_{2}|}}|n_{3}m_{3},n_{4}m_{4}\rangle c_{n_{1}m_{1}\sigma _{1}}^{\dagger }c_{n_{2}m_{2}\sigma _{2}}^{\dagger }c_{n_{4}m_{4}\sigma _{2}}c_{n_{3}m_{3}\sigma _{1}}} This interaction Hamiltonian includes direct Coulomb interaction energy and exchange interaction energy between electrons. There are several novel physics induced from this electron-electron interaction energy, such as metal-insulator transitions (MIT), high-temperature superconductivity, and several quantum phase transitions. == Example: one-dimensional s-band == Here the tight binding model is illustrated with a s-band model for a string of atoms with a single s-orbital in a straight line with spacing a and σ bonds between atomic sites. To find approximate eigenstates of the Hamiltonian, we can use a linear combination of the atomic orbitals | k ⟩ = 1 N ∑ n = 1 N e i n k a | n ⟩ {\displaystyle |k\rangle ={\frac {1}{\sqrt {N}}}\sum _{n=1}^{N}e^{inka}|n\rangle } where N = total number of sites and k {\displaystyle k} is a real parameter with − π a ≦ k ≦ π a {\displaystyle -{\frac {\pi }{a}}\leqq k\leqq {\frac {\pi }{a}}} . (This wave function is normalized to unity by the leading factor 1/√N provided overlap of atomic wave functions is ignored.) Assuming only nearest neighbor overlap, the only non-zero matrix elements of the Hamiltonian can be expressed as ⟨ n | H | n ⟩ = E 0 = E i − U . {\displaystyle \langle n|H|n\rangle =E_{0}=E_{i}-U\ .} ⟨ n ± 1 | H | n ⟩ = − Δ {\displaystyle \langle n\pm 1|H|n\rangle =-\Delta \ } ⟨ n | n ⟩ = 1 ; {\displaystyle \langle n|n\rangle =1\ ;} ⟨ n ± 1 | n ⟩ = S . {\displaystyle \langle n\pm 1|n\rangle =S\ .} The energy Ei is the ionization energy corresponding to the chosen atomic orbital and U is the energy shift of the orbital as a result of the potential of neighboring atoms. 
The ⟨ n ± 1 | H | n ⟩ = − Δ {\displaystyle \langle n\pm 1|H|n\rangle =-\Delta } elements, which are the Slater and Koster interatomic matrix elements, are the bond energies E i , j {\displaystyle E_{i,j}} . In this one dimensional s-band model we only have σ {\displaystyle \sigma } -bonds between the s-orbitals with bond energy E s , s = V s s σ {\displaystyle E_{s,s}=V_{ss\sigma }} . The overlap between states on neighboring atoms is S. We can derive the energy of the state | k ⟩ {\displaystyle |k\rangle } using the above equation: H | k ⟩ = 1 N ∑ n e i n k a H | n ⟩ {\displaystyle H|k\rangle ={\frac {1}{\sqrt {N}}}\sum _{n}e^{inka}H|n\rangle } ⟨ k | H | k ⟩ = 1 N ∑ n , m e i ( n − m ) k a ⟨ m | H | n ⟩ {\displaystyle \langle k|H|k\rangle ={\frac {1}{N}}\sum _{n,\ m}e^{i(n-m)ka}\langle m|H|n\rangle } = 1 N ∑ n ⟨ n | H | n ⟩ + 1 N ∑ n ⟨ n − 1 | H | n ⟩ e + i k a + 1 N ∑ n ⟨ n + 1 | H | n ⟩ e − i k a {\displaystyle ={\frac {1}{N}}\sum _{n}\langle n|H|n\rangle +{\frac {1}{N}}\sum _{n}\langle n-1|H|n\rangle e^{+ika}+{\frac {1}{N}}\sum _{n}\langle n+1|H|n\rangle e^{-ika}} = E 0 − 2 Δ cos ⁡ ( k a ) , {\displaystyle =E_{0}-2\Delta \,\cos(ka)\ ,} where, for example, 1 N ∑ n ⟨ n | H | n ⟩ = E 0 1 N ∑ n 1 = E 0 , {\displaystyle {\frac {1}{N}}\sum _{n}\langle n|H|n\rangle =E_{0}{\frac {1}{N}}\sum _{n}1=E_{0}\ ,} and 1 N ∑ n ⟨ n − 1 | H | n ⟩ e + i k a = − Δ e i k a 1 N ∑ n 1 = − Δ e i k a . {\displaystyle {\frac {1}{N}}\sum _{n}\langle n-1|H|n\rangle e^{+ika}=-\Delta e^{ika}{\frac {1}{N}}\sum _{n}1=-\Delta e^{ika}\ .} 1 N ∑ n ⟨ n − 1 | n ⟩ e + i k a = S e i k a 1 N ∑ n 1 = S e i k a . {\displaystyle {\frac {1}{N}}\sum _{n}\langle n-1|n\rangle e^{+ika}=Se^{ika}{\frac {1}{N}}\sum _{n}1=Se^{ika}\ .} Thus the energy of this state | k ⟩ {\displaystyle |k\rangle } can be represented in the familiar form of the energy dispersion: E ( k ) = E 0 − 2 Δ cos ⁡ ( k a ) 1 + 2 S cos ⁡ ( k a ) {\displaystyle E(k)={\frac {E_{0}-2\Delta \,\cos(ka)}{1+2S\,\cos(ka)}}} . For k = 0 {\displaystyle k=0} the energy is E = ( E 0 − 2 Δ ) / ( 1 + 2 S ) {\displaystyle E=(E_{0}-2\Delta )/(1+2S)} and the state consists of a sum of all atomic orbitals. This state can be viewed as a chain of bonding orbitals. For k = π / ( 2 a ) {\displaystyle k=\pi /(2a)} the energy is E = E 0 {\displaystyle E=E_{0}} and the state consists of a sum of atomic orbitals which are a factor e i π / 2 {\displaystyle e^{i\pi /2}} out of phase. This state can be viewed as a chain of non-bonding orbitals. Finally for k = π / a {\displaystyle k=\pi /a} the energy is E = ( E 0 + 2 Δ ) / ( 1 − 2 S ) {\displaystyle E=(E_{0}+2\Delta )/(1-2S)} and the state consists of an alternating sum of atomic orbitals. This state can be viewed as a chain of anti-bonding orbitals. This example is readily extended to three dimensions, for example, to a body-centered cubic or face-centered cubic lattice by introducing the nearest neighbor vector locations in place of simply n a. Likewise, the method can be extended to multiple bands using multiple different atomic orbitals at each site. The general formulation above shows how these extensions can be accomplished. == Table of interatomic matrix elements == In 1954 J.C. Slater and G.F. Koster published, mainly for the calculation of transition metal d-bands, a table of interatomic matrix elements E i , j ( r → n , n ′ ) = ⟨ n , i | H | n ′ , j ⟩ {\displaystyle E_{i,j}({\vec {\mathbf {r} }}_{n,n'})=\langle n,i|H|n',j\rangle } which can also be derived from the cubic harmonic orbitals straightforwardly. 
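Before turning to the table of interatomic matrix elements listed below, here is a short numerical check of the one-dimensional s-band dispersion derived above. The on-site energy, bond energy, overlap and lattice spacing used in the sketch are hypothetical values chosen only for illustration.

```python
# A short numerical check of the one-dimensional s-band dispersion derived above,
#   E(k) = (E0 - 2*Delta*cos(ka)) / (1 + 2*S*cos(ka)),
# with hypothetical parameter values.  It prints the limiting cases quoted in the
# text: (E0 - 2*Delta)/(1 + 2*S) at k = 0, E0 at k = pi/(2a), and
# (E0 + 2*Delta)/(1 - 2*S) at k = pi/a.
import numpy as np

E0, Delta, S, a = -5.0, 1.0, 0.1, 1.0   # on-site energy, bond energy, overlap, spacing (illustrative)

def E(k):
    return (E0 - 2 * Delta * np.cos(k * a)) / (1 + 2 * S * np.cos(k * a))

for label, k in [("k = 0      ", 0.0), ("k = pi/(2a)", np.pi / (2 * a)), ("k = pi/a   ", np.pi / a)]:
    print(label, " E =", round(float(E(k)), 4))

k_grid = np.linspace(-np.pi / a, np.pi / a, 201)   # sample the whole Brillouin zone
band = E(k_grid)
print("band width  :", round(float(band.max() - band.min()), 4))
```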
The table expresses the matrix elements as functions of LCAO two-centre bond integrals between two cubic harmonic orbitals, i and j, on adjacent atoms. The bond integrals are for example the V s s σ {\displaystyle V_{ss\sigma }} , V p p π {\displaystyle V_{pp\pi }} and V d d δ {\displaystyle V_{dd\delta }} for sigma, pi and delta bonds (Notice that these integrals should also depend on the distance between the atoms, i.e. are a function of ( l , m , n ) {\displaystyle (l,m,n)} , even though it is not explicitly stated every time.). The interatomic vector is expressed as r → n , n ′ = ( r x , r y , r z ) = d ( l , m , n ) {\displaystyle {\vec {\mathbf {r} }}_{n,n'}=(r_{x},r_{y},r_{z})=d(l,m,n)} where d is the distance between the atoms and l, m and n are the direction cosines to the neighboring atom. E s , s = V s s σ {\displaystyle E_{s,s}=V_{ss\sigma }} E s , x = l V s p σ {\displaystyle E_{s,x}=lV_{sp\sigma }} E x , x = l 2 V p p σ + ( 1 − l 2 ) V p p π {\displaystyle E_{x,x}=l^{2}V_{pp\sigma }+(1-l^{2})V_{pp\pi }} E x , y = l m V p p σ − l m V p p π {\displaystyle E_{x,y}=lmV_{pp\sigma }-lmV_{pp\pi }} E x , z = l n V p p σ − l n V p p π {\displaystyle E_{x,z}=lnV_{pp\sigma }-lnV_{pp\pi }} E s , x y = 3 l m V s d σ {\displaystyle E_{s,xy}={\sqrt {3}}lmV_{sd\sigma }} E s , x 2 − y 2 = 3 2 ( l 2 − m 2 ) V s d σ {\displaystyle E_{s,x^{2}-y^{2}}={\frac {\sqrt {3}}{2}}(l^{2}-m^{2})V_{sd\sigma }} E s , 3 z 2 − r 2 = [ n 2 − ( l 2 + m 2 ) / 2 ] V s d σ {\displaystyle E_{s,3z^{2}-r^{2}}=[n^{2}-(l^{2}+m^{2})/2]V_{sd\sigma }} E x , x y = 3 l 2 m V p d σ + m ( 1 − 2 l 2 ) V p d π {\displaystyle E_{x,xy}={\sqrt {3}}l^{2}mV_{pd\sigma }+m(1-2l^{2})V_{pd\pi }} E x , y z = 3 l m n V p d σ − 2 l m n V p d π {\displaystyle E_{x,yz}={\sqrt {3}}lmnV_{pd\sigma }-2lmnV_{pd\pi }} E x , z x = 3 l 2 n V p d σ + n ( 1 − 2 l 2 ) V p d π {\displaystyle E_{x,zx}={\sqrt {3}}l^{2}nV_{pd\sigma }+n(1-2l^{2})V_{pd\pi }} E x , x 2 − y 2 = 3 2 l ( l 2 − m 2 ) V p d σ + l ( 1 − l 2 + m 2 ) V p d π {\displaystyle E_{x,x^{2}-y^{2}}={\frac {\sqrt {3}}{2}}l(l^{2}-m^{2})V_{pd\sigma }+l(1-l^{2}+m^{2})V_{pd\pi }} E y , x 2 − y 2 = 3 2 m ( l 2 − m 2 ) V p d σ − m ( 1 + l 2 − m 2 ) V p d π {\displaystyle E_{y,x^{2}-y^{2}}={\frac {\sqrt {3}}{2}}m(l^{2}-m^{2})V_{pd\sigma }-m(1+l^{2}-m^{2})V_{pd\pi }} E z , x 2 − y 2 = 3 2 n ( l 2 − m 2 ) V p d σ − n ( l 2 − m 2 ) V p d π {\displaystyle E_{z,x^{2}-y^{2}}={\frac {\sqrt {3}}{2}}n(l^{2}-m^{2})V_{pd\sigma }-n(l^{2}-m^{2})V_{pd\pi }} E x , 3 z 2 − r 2 = l [ n 2 − ( l 2 + m 2 ) / 2 ] V p d σ − 3 l n 2 V p d π {\displaystyle E_{x,3z^{2}-r^{2}}=l[n^{2}-(l^{2}+m^{2})/2]V_{pd\sigma }-{\sqrt {3}}ln^{2}V_{pd\pi }} E y , 3 z 2 − r 2 = m [ n 2 − ( l 2 + m 2 ) / 2 ] V p d σ − 3 m n 2 V p d π {\displaystyle E_{y,3z^{2}-r^{2}}=m[n^{2}-(l^{2}+m^{2})/2]V_{pd\sigma }-{\sqrt {3}}mn^{2}V_{pd\pi }} E z , 3 z 2 − r 2 = n [ n 2 − ( l 2 + m 2 ) / 2 ] V p d σ + 3 n ( l 2 + m 2 ) V p d π {\displaystyle E_{z,3z^{2}-r^{2}}=n[n^{2}-(l^{2}+m^{2})/2]V_{pd\sigma }+{\sqrt {3}}n(l^{2}+m^{2})V_{pd\pi }} E x y , x y = 3 l 2 m 2 V d d σ + ( l 2 + m 2 − 4 l 2 m 2 ) V d d π + ( n 2 + l 2 m 2 ) V d d δ {\displaystyle E_{xy,xy}=3l^{2}m^{2}V_{dd\sigma }+(l^{2}+m^{2}-4l^{2}m^{2})V_{dd\pi }+(n^{2}+l^{2}m^{2})V_{dd\delta }} E x y , y z = 3 l m 2 n V d d σ + l n ( 1 − 4 m 2 ) V d d π + l n ( m 2 − 1 ) V d d δ {\displaystyle E_{xy,yz}=3lm^{2}nV_{dd\sigma }+ln(1-4m^{2})V_{dd\pi }+ln(m^{2}-1)V_{dd\delta }} E x y , z x = 3 l 2 m n V d d σ + m n ( 1 − 4 l 2 ) V d d π + m n ( l 2 − 1 ) V d d δ {\displaystyle E_{xy,zx}=3l^{2}mnV_{dd\sigma 
}+mn(1-4l^{2})V_{dd\pi }+mn(l^{2}-1)V_{dd\delta }} E x y , x 2 − y 2 = 3 2 l m ( l 2 − m 2 ) V d d σ + 2 l m ( m 2 − l 2 ) V d d π + [ l m ( l 2 − m 2 ) / 2 ] V d d δ {\displaystyle E_{xy,x^{2}-y^{2}}={\frac {3}{2}}lm(l^{2}-m^{2})V_{dd\sigma }+2lm(m^{2}-l^{2})V_{dd\pi }+[lm(l^{2}-m^{2})/2]V_{dd\delta }} E y z , x 2 − y 2 = 3 2 m n ( l 2 − m 2 ) V d d σ − m n [ 1 + 2 ( l 2 − m 2 ) ] V d d π + m n [ 1 + ( l 2 − m 2 ) / 2 ] V d d δ {\displaystyle E_{yz,x^{2}-y^{2}}={\frac {3}{2}}mn(l^{2}-m^{2})V_{dd\sigma }-mn[1+2(l^{2}-m^{2})]V_{dd\pi }+mn[1+(l^{2}-m^{2})/2]V_{dd\delta }} E z x , x 2 − y 2 = 3 2 n l ( l 2 − m 2 ) V d d σ + n l [ 1 − 2 ( l 2 − m 2 ) ] V d d π − n l [ 1 − ( l 2 − m 2 ) / 2 ] V d d δ {\displaystyle E_{zx,x^{2}-y^{2}}={\frac {3}{2}}nl(l^{2}-m^{2})V_{dd\sigma }+nl[1-2(l^{2}-m^{2})]V_{dd\pi }-nl[1-(l^{2}-m^{2})/2]V_{dd\delta }} E x y , 3 z 2 − r 2 = 3 [ l m ( n 2 − ( l 2 + m 2 ) / 2 ) V d d σ − 2 l m n 2 V d d π + [ l m ( 1 + n 2 ) / 2 ] V d d δ ] {\displaystyle E_{xy,3z^{2}-r^{2}}={\sqrt {3}}\left[lm(n^{2}-(l^{2}+m^{2})/2)V_{dd\sigma }-2lmn^{2}V_{dd\pi }+[lm(1+n^{2})/2]V_{dd\delta }\right]} E y z , 3 z 2 − r 2 = 3 [ m n ( n 2 − ( l 2 + m 2 ) / 2 ) V d d σ + m n ( l 2 + m 2 − n 2 ) V d d π − [ m n ( l 2 + m 2 ) / 2 ] V d d δ ] {\displaystyle E_{yz,3z^{2}-r^{2}}={\sqrt {3}}\left[mn(n^{2}-(l^{2}+m^{2})/2)V_{dd\sigma }+mn(l^{2}+m^{2}-n^{2})V_{dd\pi }-[mn(l^{2}+m^{2})/2]V_{dd\delta }\right]} E z x , 3 z 2 − r 2 = 3 [ l n ( n 2 − ( l 2 + m 2 ) / 2 ) V d d σ + l n ( l 2 + m 2 − n 2 ) V d d π − [ l n ( l 2 + m 2 ) / 2 ] V d d δ ] {\displaystyle E_{zx,3z^{2}-r^{2}}={\sqrt {3}}\left[ln(n^{2}-(l^{2}+m^{2})/2)V_{dd\sigma }+ln(l^{2}+m^{2}-n^{2})V_{dd\pi }-[ln(l^{2}+m^{2})/2]V_{dd\delta }\right]} E x 2 − y 2 , x 2 − y 2 = 3 4 ( l 2 − m 2 ) 2 V d d σ + [ l 2 + m 2 − ( l 2 − m 2 ) 2 ] V d d π + [ n 2 + ( l 2 − m 2 ) 2 / 4 ] V d d δ {\displaystyle E_{x^{2}-y^{2},x^{2}-y^{2}}={\frac {3}{4}}(l^{2}-m^{2})^{2}V_{dd\sigma }+[l^{2}+m^{2}-(l^{2}-m^{2})^{2}]V_{dd\pi }+[n^{2}+(l^{2}-m^{2})^{2}/4]V_{dd\delta }} E x 2 − y 2 , 3 z 2 − r 2 = 3 [ ( l 2 − m 2 ) [ n 2 − ( l 2 + m 2 ) / 2 ] V d d σ / 2 + n 2 ( m 2 − l 2 ) V d d π + [ ( 1 + n 2 ) ( l 2 − m 2 ) / 4 ] V d d δ ] {\displaystyle E_{x^{2}-y^{2},3z^{2}-r^{2}}={\sqrt {3}}\left[(l^{2}-m^{2})[n^{2}-(l^{2}+m^{2})/2]V_{dd\sigma }/2+n^{2}(m^{2}-l^{2})V_{dd\pi }+[(1+n^{2})(l^{2}-m^{2})/4]V_{dd\delta }\right]} E 3 z 2 − r 2 , 3 z 2 − r 2 = [ n 2 − ( l 2 + m 2 ) / 2 ] 2 V d d σ + 3 n 2 ( l 2 + m 2 ) V d d π + 3 4 ( l 2 + m 2 ) 2 V d d δ {\displaystyle E_{3z^{2}-r^{2},3z^{2}-r^{2}}=[n^{2}-(l^{2}+m^{2})/2]^{2}V_{dd\sigma }+3n^{2}(l^{2}+m^{2})V_{dd\pi }+{\frac {3}{4}}(l^{2}+m^{2})^{2}V_{dd\delta }} Not all interatomic matrix elements are listed explicitly. Matrix elements that are not listed in this table can be constructed by permutation of indices and cosine directions of other matrix elements in the table. Note that swapping orbital indices is the same as a spatial inversion. According to the parity properties of spherical harmonics, Y M L ( − r ) = ( − 1 ) l Y M L ( r ) {\displaystyle Y_{M}^{L}(-\mathbf {r} )=(-1)^{l}Y_{M}^{L}(\mathbf {r} )} . The bond integrals are proportional to the integral of the product of two real spherical harmonics; the real spherical harmonics (e.g. the p x , p y , p z , d x y , ⋯ {\displaystyle p_{x},p_{y},p_{z},d_{xy},\cdots } functions) have the same parity properties as the complex spherical harmonics. Then the bond integrals transform under inversion (i.e. 
swapping orbitals) as V L ′ L M = ( − 1 ) L + L ′ V L L ′ M {\displaystyle V_{L'LM}=(-1)^{L+L'}V_{LL'M}} , with L , L ′ , M {\displaystyle L,~L',~M} the angular momenta and magnetic quantum number. For example, E x , s = − l V s p σ = − E s , x {\displaystyle E_{x,s}=-lV_{sp\sigma }=-E_{s,x}} and E y , x = E x , y {\displaystyle E_{y,x}=E_{x,y}} . == See also == == References == N. W. Ashcroft and N. D. Mermin, Solid State Physics (Thomson Learning, Toronto, 1976). Stephen Blundell Magnetism in Condensed Matter(Oxford, 2001). S.Maekawa et al. Physics of Transition Metal Oxides (Springer-Verlag Berlin Heidelberg, 2004). John Singleton Band Theory and Electronic Properties of Solids (Oxford, 2001). == Further reading == Walter Ashley Harrison (1989). Electronic Structure and the Properties of Solids. Dover Publications. ISBN 0-486-66021-4. N. W. Ashcroft and N. D. Mermin (1976). Solid State Physics. Toronto: Thomson Learning. Davies, John H. (1998). The physics of low-dimensional semiconductors: An introduction. Cambridge, United Kingdom: Cambridge University Press. ISBN 0-521-48491-X. Goringe, C M; Bowler, D R; Hernández, E (1997). "Tight-binding modelling of materials". Reports on Progress in Physics. 60 (12): 1447–1512. Bibcode:1997RPPh...60.1447G. doi:10.1088/0034-4885/60/12/001. S2CID 250846071. Slater, J. C.; Koster, G. F. (1954). "Simplified LCAO Method for the Periodic Potential Problem". Physical Review. 94 (6): 1498–1524. Bibcode:1954PhRv...94.1498S. doi:10.1103/PhysRev.94.1498. == External links == Crystal-field Theory, Tight-binding Method, and Jahn-Teller Effect in E. Pavarini, E. Koch, F. Anders, and M. Jarrell (eds.): Correlated Electrons: From Models to Materials, Jülich 2012, ISBN 978-3-89336-796-2 Tight-Binding Studio: A Technical Software Package to Find the Parameters of Tight-Binding Hamiltonian
Wikipedia/Tight_binding_(physics)
In the field of computational chemistry, energy minimization (also called energy optimization, geometry minimization, or geometry optimization) is the process of finding an arrangement in space of a collection of atoms where, according to some computational model of chemical bonding, the net inter-atomic force on each atom is acceptably close to zero and the position on the potential energy surface (PES) is a stationary point (described later). The collection of atoms might be a single molecule, an ion, a condensed phase, a transition state or even a collection of any of these. The computational model of chemical bonding might, for example, be quantum mechanics. As an example, when optimizing the geometry of a water molecule, one aims to obtain the hydrogen-oxygen bond lengths and the hydrogen-oxygen-hydrogen bond angle which minimize the forces that would otherwise be pulling atoms together or pushing them apart. The motivation for performing a geometry optimization is the physical significance of the obtained structure: optimized structures often correspond to a substance as it is found in nature and the geometry of such a structure can be used in a variety of experimental and theoretical investigations in the fields of chemical structure, thermodynamics, chemical kinetics, spectroscopy and others. Typically, but not always, the process seeks to find the geometry of a particular arrangement of the atoms that represents a local or global energy minimum. Instead of searching for global energy minimum, it might be desirable to optimize to a transition state, that is, a saddle point on the potential energy surface. Additionally, certain coordinates (such as a chemical bond length) might be fixed during the optimization. == Molecular geometry and mathematical interpretation == The geometry of a set of atoms can be described by a vector of the atoms' positions. This could be the set of the Cartesian coordinates of the atoms or, when considering molecules, might be so called internal coordinates formed from a set of bond lengths, bond angles and dihedral angles. Given a set of atoms and a vector, r, describing the atoms' positions, one can introduce the concept of the energy as a function of the positions, E(r). Geometry optimization is then a mathematical optimization problem, in which it is desired to find the value of r for which E(r) is at a local minimum, that is, the derivative of the energy with respect to the position of the atoms, ∂E/∂r, is the zero vector and the second derivative matrix of the system, ( ∂ 2 E ∂ r i ∂ r j ) i j {\displaystyle {\begin{pmatrix}{\frac {\partial ^{2}E}{\partial r_{i}\,\partial r_{j}}}\end{pmatrix}}_{ij}} , also known as the Hessian matrix, which describes the curvature of the PES at r, has all positive eigenvalues (is positive definite). A special case of a geometry optimization is a search for the geometry of a transition state; this is discussed below. The computational model that provides an approximate E(r) could be based on quantum mechanics (using either density functional theory or semi-empirical methods), force fields, or a combination of those in case of QM/MM. 
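A minimal sketch of such an optimization is given below. It replaces the electronic-structure or force-field model with a toy Lennard-Jones pair potential and uses plain steepest descent with a fixed step, so it illustrates the idea rather than a production algorithm; the parameter values and convergence threshold are assumptions chosen only for demonstration. The iterative procedure it follows is spelled out immediately below.

```python
import numpy as np

def lj_energy_and_forces(r, epsilon=0.25, sigma=3.4):
    """Lennard-Jones energy and forces for a set of atoms (positions in angstroms)."""
    n = len(r)
    energy = 0.0
    forces = np.zeros_like(r)
    for i in range(n):
        for j in range(i + 1, n):
            d = r[i] - r[j]
            dist = np.linalg.norm(d)
            sr6 = (sigma / dist) ** 6
            energy += 4 * epsilon * (sr6 ** 2 - sr6)
            # force on atom i is -dE/dr_i, directed along the pair vector
            f = 24 * epsilon * (2 * sr6 ** 2 - sr6) / dist ** 2 * d
            forces[i] += f
            forces[j] -= f
    return energy, forces

# Initial guess: a slightly stretched dimer
positions = np.array([[0.0, 0.0, 0.0], [4.5, 0.0, 0.0]])

step = 0.05        # crude fixed step size (a real optimizer adapts this)
fmax_tol = 1e-4    # convergence threshold on the largest force component

for it in range(500):
    energy, forces = lj_energy_and_forces(positions)
    if np.abs(forces).max() < fmax_tol:   # force below threshold: finished
        break
    positions += step * forces            # move atoms downhill along -dE/dr

print("iterations:", it, "energy:", energy)
print("optimized bond length:", np.linalg.norm(positions[0] - positions[1]))
# The minimum should lie near r = 2**(1/6) * sigma ≈ 3.82 Å for these parameters
```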
Using such a computational model and an initial guess (or ansatz) of the correct geometry, an iterative optimization procedure is followed, for example:

calculate the force on each atom (that is, −∂E/∂r)
if the force is less than some threshold, finish
otherwise, move the atoms by some computed step ∆r that is predicted to reduce the force
repeat from the start

== Practical aspects of optimization ==

As described above, some method such as quantum mechanics can be used to calculate the energy, E(r), the gradient of the PES, that is, the derivative of the energy with respect to the position of the atoms, ∂E/∂r, and the second derivative matrix of the system, ∂²E/∂ri∂rj, also known as the Hessian matrix, which describes the curvature of the PES at r. An optimization algorithm can use some or all of E(r), ∂E/∂r and ∂²E/∂ri∂rj to try to minimize the forces, and this could in theory be any method such as gradient descent, conjugate gradient or Newton's method, but in practice, algorithms which use knowledge of the PES curvature, that is, the Hessian matrix, are found to be superior. For most systems of practical interest, however, it may be prohibitively expensive to compute the second derivative matrix, and it is instead estimated from successive values of the gradient, as is typical in a quasi-Newton optimization. The choice of the coordinate system can be crucial for performing a successful optimization. Cartesian coordinates, for example, are redundant since a non-linear molecule with N atoms has 3N–6 vibrational degrees of freedom whereas the set of Cartesian coordinates has 3N dimensions. Additionally, Cartesian coordinates are highly correlated, that is, the Hessian matrix has many non-diagonal terms that are not close to zero. This can lead to numerical problems in the optimization because, for example, it is difficult to obtain a good approximation to the Hessian matrix and calculating it precisely is too computationally expensive. However, when the energy is expressed with standard force fields, computationally efficient methods have been developed that can derive the Hessian matrix analytically in Cartesian coordinates while keeping the computational complexity of the same order as that of gradient computations. Internal coordinates tend to be less correlated but are more difficult to set up, and it can be difficult to describe some systems, such as ones with symmetry or large condensed phases. Many modern computational chemistry software packages contain procedures for the automatic generation of reasonable coordinate systems for optimization.

=== Degree of freedom restriction ===

Some degrees of freedom can be eliminated from an optimization; for example, positions of atoms or bond lengths and angles can be given fixed values. Sometimes these are referred to as frozen degrees of freedom. Consider, as an example, a geometry optimization of the atoms in a carbon nanotube in the presence of an external electrostatic field, in which the atoms at one end have their positions frozen. Their interactions with the other atoms in the system are still calculated, but the atoms' positions are prevented from changing during the optimization.

== Transition state optimization ==

Transition state structures can be determined by searching for saddle points on the PES of the chemical species of interest.
A first-order saddle point is a position on the PES corresponding to a minimum in all directions except one; a second-order saddle point is a minimum in all directions except two, and so on. Defined mathematically, an nth order saddle point is characterized by the following: ∂E/∂r = 0 and the Hessian matrix, ∂∂E/∂ri∂rj, has exactly n negative eigenvalues. Algorithms to locate transition state geometries fall into two main categories: local methods and semi-global methods. Local methods are suitable when the starting point for the optimization is very close to the true transition state (very close will be defined shortly) and semi-global methods find application when it is sought to locate the transition state with very little a priori knowledge of its geometry. Some methods, such as the Dimer method (see below), fall into both categories. === Local searches === A so-called local optimization requires an initial guess of the transition state that is very close to the true transition state. Very close typically means that the initial guess must have a corresponding Hessian matrix with one negative eigenvalue, or, the negative eigenvalue corresponding to the reaction coordinate must be greater in magnitude than the other negative eigenvalues. Further, the eigenvector with the most negative eigenvalue must correspond to the reaction coordinate, that is, it must represent the geometric transformation relating to the process whose transition state is sought. Given the above pre-requisites, a local optimization algorithm can then move "uphill" along the eigenvector with the most negative eigenvalue and "downhill" along all other degrees of freedom, using something similar to a quasi-Newton method. === Dimer method === The dimer method can be used to find possible transition states without knowledge of the final structure or to refine a good guess of a transition structure. The “dimer” is formed by two images very close to each other on the PES. The method works by moving the dimer uphill from the starting position whilst rotating the dimer to find the direction of lowest curvature (ultimately negative). === Activation Relaxation Technique (ART) === The Activation Relaxation Technique (ART) is also an open-ended method to find new transition states or to refine known saddle points on the PES. The method follows the direction of lowest negative curvature (computed using the Lanczos algorithm) on the PES to reach the saddle point, relaxing in the perpendicular hyperplane between each "jump" (activation) in this direction. === Chain-of-state methods === Chain-of-state methods can be used to find the approximate geometry of the transition state based on the geometries of the reactant and product. The generated approximate geometry can then serve as a starting point for refinement via a local search, which was described above. Chain-of-state methods use a series of vectors, that is points on the PES, connecting the reactant and product of the reaction of interest, rreactant and rproduct, thus discretizing the reaction pathway. Very commonly, these points are referred to as beads due to an analogy of a set of beads connected by strings or springs, which connect the reactant and products. 
The series of beads is often initially created by interpolating between rreactant and rproduct, for example, for a series of N + 1 beads, bead i might be given by

{\displaystyle \mathbf {r} _{i}={\frac {i}{N}}\mathbf {r} _{\mathrm {product} }+\left(1-{\frac {i}{N}}\right)\mathbf {r} _{\mathrm {reactant} }}

where i ∈ 0, 1, ..., N. Each of the beads ri has an energy, E(ri), and forces, −∂E/∂ri, and these are treated with a constrained optimization process that seeks to obtain as accurate a representation of the reaction pathway as possible. For this to be achieved, spacing constraints must be applied so that each bead ri does not simply get optimized to the reactant and product geometry. Often this constraint is achieved by projecting out components of the force on each bead ri, or alternatively of the movement of each bead during optimization, that are tangential to the reaction path. For example, if for convenience it is defined that gi = ∂E/∂ri, then the energy gradient at each bead minus the component of the energy gradient that is tangential to the reaction pathway is given by

{\displaystyle \mathbf {g} _{i}^{\perp }=\mathbf {g} _{i}-\mathbf {\tau } _{i}(\mathbf {\tau } _{i}\cdot \mathbf {g} _{i})=\left(I-\mathbf {\tau } _{i}\mathbf {\tau } _{i}^{T}\right)\mathbf {g} _{i}}

where I is the identity matrix and τi is a unit vector representing the reaction path tangent at ri. By projecting out components of the energy gradient or the optimization step that are parallel to the reaction path, an optimization algorithm significantly reduces the tendency of each of the beads to be optimized directly to a minimum.

==== Synchronous transit ====

The simplest chain-of-state method is the linear synchronous transit (LST) method. It operates by taking interpolated points between the reactant and product geometries and choosing the one with the highest energy for subsequent refinement via a local search. The quadratic synchronous transit (QST) method extends LST by allowing a parabolic reaction path, with optimization of the highest energy point orthogonally to the parabola.

==== Nudged elastic band ====

In the nudged elastic band (NEB) method, the beads along the reaction pathway have simulated spring forces in addition to the chemical forces, −∂E/∂ri, to cause the optimizer to maintain the spacing constraint. Specifically, the force fi on each point i is given by

{\displaystyle \mathbf {f} _{i}=\mathbf {f} _{i}^{\parallel }-\mathbf {g} _{i}^{\perp }}

where

{\displaystyle \mathbf {f} _{i}^{\parallel }=k\left[\left(\left(\mathbf {r} _{i+1}-\mathbf {r} _{i}\right)-\left(\mathbf {r} _{i}-\mathbf {r} _{i-1}\right)\right)\cdot \tau _{i}\right]\tau _{i}}

is the spring force parallel to the pathway at each point ri (k is a spring constant and τi, as before, is a unit vector representing the reaction path tangent at ri). In a traditional implementation, the point with the highest energy is used for subsequent refinement in a local search. There are many variations on the NEB method, such as the climbing image NEB, in which the point with the highest energy is pushed upwards during the optimization procedure so as to (hopefully) give a geometry which is even closer to that of the transition state. There have also been extensions to include Gaussian process regression for reducing the number of evaluations.
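The projection and spring-force expressions above translate directly into code. The sketch below assumes the beads and the PES gradients at each bead are supplied as NumPy arrays and uses a simple central-difference estimate of the path tangent; real NEB implementations use more careful tangent definitions and treat the climbing image separately.

```python
import numpy as np

def neb_forces(beads, grad, k_spring=1.0):
    """Nudged-elastic-band forces for the interior beads.

    beads : (N+1, D) array of geometries along the path
    grad  : (N+1, D) array of PES gradients dE/dr at each bead
    Returns an (N+1, D) array of forces; the end points (reactant and
    product) are left fixed, so their rows stay zero.
    """
    forces = np.zeros_like(beads)
    for i in range(1, len(beads) - 1):
        # Path tangent at bead i (simple central-difference estimate)
        tau = beads[i + 1] - beads[i - 1]
        tau /= np.linalg.norm(tau)

        # Component of the true (chemical) gradient perpendicular to the path
        g_perp = grad[i] - tau * np.dot(tau, grad[i])

        # Spring force acting only along the tangent, keeps beads evenly spaced
        f_spring = k_spring * np.dot(
            (beads[i + 1] - beads[i]) - (beads[i] - beads[i - 1]), tau) * tau

        # f_i = f_i_parallel - g_i_perp, as in the expressions above
        forces[i] = f_spring - g_perp
    return forces
```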
For systems with a non-Euclidean configuration space, such as magnetic systems, the method is modified to the geodesic nudged elastic band approach.

==== String method ====

The string method uses splines connecting the points, ri, to measure and enforce distance constraints between the points and to calculate the tangent at each point. In each step of an optimization procedure, the points might be moved according to the force acting on them perpendicular to the path, and then, if the equidistance constraint between the points is no longer satisfied, the points can be redistributed, using the spline representation of the path to generate new vectors with the required spacing. Variations on the string method include the growing string method, in which the guess of the pathway is grown in from the end points (that is, the reactant and the product) as the optimization progresses.

== Comparison with other techniques ==

Geometry optimization is fundamentally different from a molecular dynamics simulation. The latter simulates the motion of molecules with respect to time, subject to temperature, chemical forces, initial velocities, Brownian motion of a solvent, and so on, via the application of Newton's laws of motion. This means that the computed trajectories of the atoms have some physical meaning. Geometry optimization, by contrast, does not produce a "trajectory" with any physical meaning – it is concerned with minimization of the forces acting on each atom in a collection of atoms, and the pathway via which it achieves this lacks meaning. Different optimization algorithms could give the same result for the minimum energy structure, but arrive at it via a different pathway.

== See also ==

Constraint composite graph
Graph cuts in computer vision – apparatus for solving computer vision problems that can be formulated in terms of energy minimization
Energy principles in structural mechanics

== References ==

== External links ==

Numerical Recipes in Fortran 77

== Additional references ==

Payne et al., "Iterative minimization techniques for ab initio total-energy calculations: Molecular dynamics and conjugate gradients", Reviews of Modern Physics 64 (4), pp. 1045–1097 (1992)
Stich et al., "Conjugate gradient minimization of the energy functional: A new method for electronic structure calculation", Physical Review B 39 (8), pp. 4997–5004 (1989)
Chadi, "Energy-minimization approach to the atomic geometry of semiconductor surfaces", Physical Review Letters 41 (15), pp. 1062–1065 (1978)
Wikipedia/Energy_minimization
Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies. The simplest calculations can be performed by hand, but inevitably computers are required to perform molecular modelling of any reasonably sized system. The common feature of molecular modelling methods is the atomistic level description of the molecular systems. This may include treating atoms as the smallest individual unit (a molecular mechanics approach), or explicitly modelling protons and neutrons with their quarks, anti-quarks and gluons, and electrons with their photons (a quantum chemistry approach).

== Molecular mechanics ==

Molecular mechanics is one aspect of molecular modelling, as it involves the use of classical mechanics (Newtonian mechanics) to describe the physical basis behind the models. Molecular models typically describe atoms (nucleus and electrons collectively) as point charges with an associated mass. The interactions between neighbouring atoms are described by spring-like interactions (representing chemical bonds) and van der Waals forces. The Lennard-Jones potential is commonly used to describe the latter. The electrostatic interactions are computed based on Coulomb's law. Atoms are assigned coordinates in Cartesian space or in internal coordinates, and can also be assigned velocities in dynamical simulations. The atomic velocities are related to the temperature of the system, a macroscopic quantity. The collective mathematical expression is termed a potential function and is related to the system internal energy (U), a thermodynamic quantity equal to the sum of potential and kinetic energies. Methods which minimize the potential energy are termed energy minimization methods (e.g., steepest descent and conjugate gradient), while methods that model the behaviour of the system with propagation of time are termed molecular dynamics.

{\displaystyle E=E_{\text{bonds}}+E_{\text{angle}}+E_{\text{dihedral}}+E_{\text{non-bonded}}\,}

{\displaystyle E_{\text{non-bonded}}=E_{\text{electrostatic}}+E_{\text{van der Waals}}\,}

This function, referred to as a potential function, computes the molecular potential energy as a sum of energy terms that describe the deviation of bond lengths, bond angles and torsion angles away from equilibrium values, plus terms for non-bonded pairs of atoms describing van der Waals and electrostatic interactions. The set of parameters consisting of equilibrium bond lengths, bond angles, partial charge values, force constants and van der Waals parameters are collectively termed a force field. Different implementations of molecular mechanics use different mathematical expressions and different parameters for the potential function. The common force fields in use today have been developed by using chemical theory, experimental reference data, and high-level quantum calculations. The method termed energy minimization is used to find positions of zero gradient for all atoms, in other words, a local energy minimum. Lower energy states are more stable and are commonly investigated because of their role in chemical and biological processes.
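As a rough illustration of how such a potential function is assembled, the sketch below evaluates harmonic bond and angle terms for a single water-like molecule and defines a combined Lennard-Jones plus Coulomb term for non-bonded pairs. The parameter values are assumptions chosen only to resemble typical force-field magnitudes, not the parameters of any published force field.

```python
import numpy as np

def bond_energy(r1, r2, r0, k_b):
    """Harmonic bond-stretch term, k_b * (r - r0)^2, with lengths in Å."""
    return k_b * (np.linalg.norm(r1 - r2) - r0) ** 2

def angle_energy(r1, r2, r3, theta0, k_a):
    """Harmonic angle-bend term about the central atom at r2 (theta0 in radians)."""
    v1, v2 = r1 - r2, r3 - r2
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    return k_a * (theta - theta0) ** 2

def nonbonded_energy(r1, r2, q1, q2, epsilon, sigma):
    """Lennard-Jones plus Coulomb term for one non-bonded pair (kcal/mol, e, Å)."""
    r = np.linalg.norm(r1 - r2)
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6 ** 2 - sr6) + 332.06 * q1 * q2 / r  # 332.06 ≈ e²/(4πε0) in kcal·Å/mol

# A water-like molecule with assumed, illustrative parameters
O  = np.array([0.00, 0.00, 0.0])
H1 = np.array([0.96, 0.00, 0.0])
H2 = np.array([-0.24, 0.93, 0.0])

E_intra = (bond_energy(O, H1, 0.96, 450.0)
           + bond_energy(O, H2, 0.96, 450.0)
           + angle_energy(H1, O, H2, np.deg2rad(104.5), 55.0))
print("intramolecular energy (kcal/mol):", round(E_intra, 3))
# The non-bonded term would be summed over pairs of atoms in different molecules
# (or separated by more than three bonds), e.g. nonbonded_energy(O, O_other, -0.8, -0.8, 0.15, 3.15)
```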
A molecular dynamics simulation, on the other hand, computes the behaviour of a system as a function of time. It involves solving Newton's laws of motion, principally the second law, F = m a {\displaystyle \mathbf {F} =m\mathbf {a} } . Integration of Newton's laws of motion, using different integration algorithms, leads to atomic trajectories in space and time. The force on an atom is defined as the negative gradient of the potential energy function. The energy minimization method is useful to obtain a static picture for comparing between states of similar systems, while molecular dynamics provides information about the dynamic processes with the intrinsic inclusion of temperature effects. == Variables == Molecules can be modelled either in vacuum, or in the presence of a solvent such as water. Simulations of systems in vacuum are referred to as gas-phase simulations, while those that include the presence of solvent molecules are referred to as explicit solvent simulations. In another type of simulation, the effect of solvent is estimated using an empirical mathematical expression; these are termed implicit solvation simulations. === Coordinate representations === Most force fields are distance-dependent, making the most convenient expression for these Cartesian coordinates. Yet the comparatively rigid nature of bonds which occur between specific atoms, and in essence, defines what is meant by the designation molecule, make an internal coordinate system the most logical representation. In some fields the IC representation (bond length, angle between bonds, and twist angle of the bond as shown in the figure) is termed the Z-matrix or torsion angle representation. Unfortunately, continuous motions in Cartesian space often require discontinuous angular branches in internal coordinates, making it relatively hard to work with force fields in the internal coordinate representation, and conversely a simple displacement of an atom in Cartesian space may not be a straight line trajectory due to the prohibitions of the interconnected bonds. Thus, it is very common for computational optimizing programs to flip back and forth between representations during their iterations. This can dominate the calculation time of the potential itself and in long chain molecules introduce cumulative numerical inaccuracy. While all conversion algorithms produce mathematically identical results, they differ in speed and numerical accuracy. Currently, the fastest and most accurate torsion to Cartesian conversion is the Natural Extension Reference Frame (NERF) method. == Applications == Molecular modelling methods are used routinely to investigate the structure, dynamics, surface properties, and thermodynamics of inorganic, biological, and polymeric systems. A large number of molecular models of force field are today readily available in databases. The types of biological activity that have been investigated using molecular modelling include protein folding, enzyme catalysis, protein stability, conformational changes associated with biomolecular function, and molecular recognition of proteins, DNA, and membrane complexes. == See also == == References == == Further reading ==
Wikipedia/Molecular_modeling
Homology modeling, also known as comparative modeling of proteins, refers to constructing an atomic-resolution model of the "target" protein from its amino acid sequence and an experimental three-dimensional structure of a related homologous protein (the "template"). Homology modeling relies on the identification of one or more known protein structures likely to resemble the structure of the query sequence, and on the production of a sequence alignment that maps residues in the query sequence to residues in the template sequence. Protein structures are more conserved than protein sequences amongst homologues, but sequences falling below 20% sequence identity can have very different structures. Evolutionarily related proteins have similar sequences, and naturally occurring homologous proteins have similar protein structures. It has been shown that three-dimensional protein structure is evolutionarily more conserved than would be expected on the basis of sequence conservation alone. The sequence alignment and template structure are then used to produce a structural model of the target. Because protein structures are more conserved than DNA sequences, detectable levels of sequence similarity usually imply significant structural similarity.

The quality of the homology model is dependent on the quality of the sequence alignment and template structure. The approach can be complicated by the presence of alignment gaps (commonly called indels) that indicate a structural region present in the target but not in the template, and by structure gaps in the template that arise from poor resolution in the experimental procedure (usually X-ray crystallography) used to solve the structure. Model quality declines with decreasing sequence identity; a typical model has ~1–2 Å root mean square deviation between the matched Cα atoms at 70% sequence identity but only 2–4 Å agreement at 25% sequence identity. However, the errors are significantly higher in the loop regions, where the amino acid sequences of the target and template proteins may be completely different. Regions of the model that were constructed without a template, usually by loop modeling, are generally much less accurate than the rest of the model. Errors in side chain packing and position also increase with decreasing identity, and variations in these packing configurations have been suggested as a major reason for poor model quality at low identity. Taken together, these various atomic-position errors are significant and impede the use of homology models for purposes that require atomic-resolution data, such as drug design and protein–protein interaction predictions; even the quaternary structure of a protein may be difficult to predict from homology models of its subunit(s). Nevertheless, homology models can be useful in reaching qualitative conclusions about the biochemistry of the query sequence, especially in formulating hypotheses about why certain residues are conserved, which may in turn lead to experiments to test those hypotheses. For example, the spatial arrangement of conserved residues may suggest whether a particular residue is conserved to stabilize the folding, to participate in binding some small molecule, or to foster association with another protein or nucleic acid.
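The Cα root-mean-square deviation quoted above is normally computed after optimally superposing the model onto the experimental structure. A minimal sketch using the Kabsch algorithm is shown below; the two coordinate sets are made-up stand-ins for matched Cα positions.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal rigid superposition."""
    # Centre both structures on their centroids
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    # Kabsch: optimal rotation from the SVD of the covariance matrix P^T Q
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))        # correct for a possible reflection
    R = V @ np.diag([1.0, 1.0, d]) @ Wt
    diff = P @ R - Q
    return np.sqrt((diff ** 2).sum() / len(P))

# Hypothetical matched C-alpha coordinates of a model and an experimental structure (Å)
model = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.5, 1.2, 0.0], [11.0, 2.0, 1.0]])
exper = np.array([[0.1, -0.2, 0.0], [3.9, 0.1, 0.2], [7.3, 1.0, -0.1], [11.2, 2.3, 0.8]])
print("C-alpha RMSD (Å):", round(kabsch_rmsd(model, exper), 2))
```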
Homology modeling can produce high-quality structural models when the target and template are closely related, which has inspired the formation of a structural genomics consortium dedicated to the production of representative experimental structures for all classes of protein folds. The chief inaccuracies in homology modeling, which worsen with lower sequence identity, derive from errors in the initial sequence alignment and from improper template selection. Like other methods of structure prediction, current practice in homology modeling is assessed in a biennial large-scale experiment known as the Critical Assessment of Techniques for Protein Structure Prediction, or Critical Assessment of Structure Prediction (CASP). == Motive == The method of homology modeling is based on the observation that protein tertiary structure is better conserved than amino acid sequence. Thus, even proteins that have diverged appreciably in sequence but still share detectable similarity will also share common structural properties, particularly the overall fold. Because it is difficult and time-consuming to obtain experimental structures from methods such as X-ray crystallography and protein NMR for every protein of interest, homology modeling can provide useful structural models for generating hypotheses about a protein's function and directing further experimental work. There are exceptions to the general rule that proteins sharing significant sequence identity will share a fold. For example, a judiciously chosen set of mutations of less than 50% of a protein can cause the protein to adopt a completely different fold. However, such a massive structural rearrangement is unlikely to occur in evolution, especially since the protein is usually under the constraint that it must fold properly and carry out its function in the cell. Consequently, the roughly folded structure of a protein (its "topology") is conserved longer than its amino-acid sequence and much longer than the corresponding DNA sequence; in other words, two proteins may share a similar fold even if their evolutionary relationship is so distant that it cannot be discerned reliably. For comparison, the function of a protein is conserved much less than the protein sequence, since relatively few changes in amino-acid sequence are required to take on a related function. == Steps in model production == The homology modeling procedure can be broken down into four sequential steps: template selection, target-template alignment, model construction, and model assessment. The first two steps are often essentially performed together, as the most common methods of identifying templates rely on the production of sequence alignments; however, these alignments may not be of sufficient quality because database search techniques prioritize speed over alignment quality. These processes can be performed iteratively to improve the quality of the final model, although quality assessments that are not dependent on the true target structure are still under development. Optimizing the speed and accuracy of these steps for use in large-scale automated structure prediction is a key component of structural genomics initiatives, partly because the resulting volume of data will be too large to process manually and partly because the goal of structural genomics requires providing models of reasonable quality to researchers who are not themselves structure prediction experts. 
== Template selection and sequence alignment == The critical first step in homology modeling is the identification of the best template structure, if indeed any are available. The simplest method of template identification relies on serial pairwise sequence alignments aided by database search techniques such as FASTA and BLAST. More sensitive methods based on multiple sequence alignment – of which PSI-BLAST is the most common example – iteratively update their position-specific scoring matrix to successively identify more distantly related homologs. This family of methods has been shown to produce a larger number of potential templates and to identify better templates for sequences that have only distant relationships to any solved structure. Protein threading, also known as fold recognition or 3D-1D alignment, can also be used as a search technique for identifying templates to be used in traditional homology modeling methods. Recent CASP experiments indicate that some protein threading methods such as RaptorX are more sensitive than purely sequence(profile)-based methods when only distantly-related templates are available for the proteins under prediction. When performing a BLAST search, a reliable first approach is to identify hits with a sufficiently low E-value, which are considered sufficiently close in evolution to make a reliable homology model. Other factors may tip the balance in marginal cases; for example, the template may have a function similar to that of the query sequence, or it may belong to a homologous operon. However, a template with a poor E-value should generally not be chosen, even if it is the only one available, since it may well have a wrong structure, leading to the production of a misguided model. A better approach is to submit the primary sequence to fold-recognition servers or, better still, consensus meta-servers which improve upon individual fold-recognition servers by identifying similarities (consensus) among independent predictions. Often several candidate template structures are identified by these approaches. Although some methods can generate hybrid models with better accuracy from multiple templates, most methods rely on a single template. Therefore, choosing the best template from among the candidates is a key step, and can affect the final accuracy of the structure significantly. This choice is guided by several factors, such as the similarity of the query and template sequences, of their functions, and of the predicted query and observed template secondary structures. Perhaps most importantly, the coverage of the aligned regions: the fraction of the query sequence structure that can be predicted from the template, and the plausibility of the resulting model. Thus, sometimes several homology models are produced for a single query sequence, with the most likely candidate chosen only in the final step. It is possible to use the sequence alignment generated by the database search technique as the basis for the subsequent model production; however, more sophisticated approaches have also been explored. One proposal generates an ensemble of stochastically defined pairwise alignments between the target sequence and a single identified template as a means of exploring "alignment space" in regions of sequence with low local similarity. 
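The E-value-based template filtering described above can be sketched in a few lines. The hit records, cutoff and ranking score below are illustrative assumptions; a real pipeline would parse actual BLAST output and also weigh template resolution, function and predicted secondary structure.

```python
# Hypothetical BLAST-style hit records: (template id, E-value, % identity, query coverage)
hits = [
    ("1abcA", 3e-45, 62.0, 0.95),
    ("2xyzB", 1e-12, 31.0, 0.88),
    ("3pqrC", 0.8,   24.0, 0.40),   # marginal hit with a poor E-value
]

EVALUE_CUTOFF = 1e-3   # assumed threshold; suitable values depend on the database searched

# Keep only hits close enough in evolution to support a homology model
candidates = [h for h in hits if h[1] <= EVALUE_CUTOFF]

# Rank the remaining templates by a simple combined score of identity and coverage
candidates.sort(key=lambda h: h[2] * h[3], reverse=True)

for pdb_id, evalue, identity, coverage in candidates:
    print(f"{pdb_id}: E={evalue:.1e}, identity={identity:.0f}%, coverage={coverage:.0%}")
```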
"Profile-profile" alignments that first generate a sequence profile of the target and systematically compare it to the sequence profiles of solved structures; the coarse-graining inherent in the profile construction is thought to reduce noise introduced by sequence drift in nonessential regions of the sequence. == Model generation == Given a template and an alignment, the information contained therein must be used to generate a three-dimensional structural model of the target, represented as a set of Cartesian coordinates for each atom in the protein. Three major classes of model generation methods have been proposed. === Fragment assembly === The original method of homology modeling relied on the assembly of a complete model from conserved structural fragments identified in closely related solved structures. For example, a modeling study of serine proteases in mammals identified a sharp distinction between "core" structural regions conserved in all experimental structures in the class, and variable regions typically located in the loops where the majority of the sequence differences were localized. Thus unsolved proteins could be modeled by first constructing the conserved core and then substituting variable regions from other proteins in the set of solved structures. Current implementations of this method differ mainly in the way they deal with regions that are not conserved or that lack a template. The variable regions are often constructed with the help of a protein fragment library. === Segment matching === The segment-matching method divides the target into a series of short segments, each of which is matched to its own template fitted from the Protein Data Bank. Thus, sequence alignment is done over segments rather than over the entire protein. Selection of the template for each segment is based on sequence similarity, comparisons of alpha carbon coordinates, and predicted steric conflicts arising from the van der Waals radii of the divergent atoms between target and template. === Satisfaction of spatial restraints === The most common current homology modeling method takes its inspiration from calculations required to construct a three-dimensional structure from data generated by NMR spectroscopy. One or more target-template alignments are used to construct a set of geometrical criteria that are then converted to probability density functions for each restraint. Restraints applied to the main protein internal coordinates – protein backbone distances and dihedral angles – serve as the basis for a global optimization procedure that originally used conjugate gradient energy minimization to iteratively refine the positions of all heavy atoms in the protein. This method had been dramatically expanded to apply specifically to loop modeling, which can be extremely difficult due to the high flexibility of loops in proteins in aqueous solution. A more recent expansion applies the spatial-restraint model to electron density maps derived from cryoelectron microscopy studies, which provide low-resolution information that is not usually itself sufficient to generate atomic-resolution structural models. To address the problem of inaccuracies in initial target-template sequence alignment, an iterative procedure has also been introduced to refine the alignment on the basis of the initial structural fit. The most commonly used software in spatial restraint-based modeling is MODELLER and a database called ModBase has been established for reliable models generated with it. 
== Loop modeling == Regions of the target sequence that are not aligned to a template are modeled by loop modeling; they are the most susceptible to major modeling errors and occur with higher frequency when the target and template have low sequence identity. The coordinates of unmatched sections determined by loop modeling programs are generally much less accurate than those obtained from simply copying the coordinates of a known structure, particularly if the loop is longer than 10 residues. The first two sidechain dihedral angles (χ1 and χ2) can usually be estimated within 30° for an accurate backbone structure; however, the later dihedral angles found in longer side chains such as lysine and arginine are notoriously difficult to predict. Moreover, small errors in χ1 (and, to a lesser extent, in χ2) can cause relatively large errors in the positions of the atoms at the terminus of side chain; such atoms often have a functional importance, particularly when located near the active site. == Model assessment == A large number of methods have been developed for selecting a native-like structure from a set of models. Scoring functions have been based on both molecular mechanics energy functions (Lazaridis and Karplus 1999; Petrey and Honig 2000; Feig and Brooks 2002; Felts et al. 2002; Lee and Duan 2004), statistical potentials (Sippl 1995; Melo and Feytmans 1998; Samudrala and Moult 1998; Rojnuckarin and Subramaniam 1999; Lu and Skolnick 2001; Wallqvist et al. 2002; Zhou and Zhou 2002), residue environments (Luthy et al. 1992; Eisenberg et al. 1997; Park et al. 1997; Summa et al. 2005), local side-chain and backbone interactions (Fang and Shortle 2005), orientation-dependent properties (Buchete et al. 2004a,b; Hamelryck 2005), packing estimates (Berglund et al. 2004), solvation energy (Petrey and Honig 2000; McConkey et al. 2003; Wallner and Elofsson 2003; Berglund et al. 2004), hydrogen bonding (Kortemme et al. 2003), and geometric properties (Colovos and Yeates 1993; Kleywegt 2000; Lovell et al. 2003; Mihalek et al. 2003). A number of methods combine different potentials into a global score, usually using a linear combination of terms (Kortemme et al. 2003; Tosatto 2005), or with the help of machine learning techniques, such as neural networks (Wallner and Elofsson 2003) and support vector machines (SVM) (Eramian et al. 2006). Comparisons of different global model quality assessment programs can be found in recent papers by Pettitt et al. (2005), Tosatto (2005), and Eramian et al. (2006). Less work has been reported on the local quality assessment of models. Local scores are important in the context of modeling because they can give an estimate of the reliability of different regions of a predicted structure. This information can be used in turn to determine which regions should be refined, which should be considered for modeling by multiple templates, and which should be predicted ab initio. Information on local model quality could also be used to reduce the combinatorial problem when considering alternative alignments; for example, by scoring different local models separately, fewer models would have to be built (assuming that the interactions between the separate regions are negligible or can be estimated separately). One of the most widely used local scoring methods is Verify3D (Luthy et al. 1992; Eisenberg et al. 1997), which combines secondary structure, solvent accessibility, and polarity of residue environments. 
ProsaII (Sippl 1993), which is based on a combination of a pairwise statistical potential and a solvation term, is also applied extensively in model evaluation. Other methods include the Errat program (Colovos and Yeates 1993), which considers distributions of nonbonded atoms according to atom type and distance, and the energy strain method (Maiorov and Abagyan 1998), which uses differences from average residue energies in different environments to indicate which parts of a protein structure might be problematic. Melo and Feytmans (1998) use an atomic pairwise potential and a surface-based solvation potential (both knowledge-based) to evaluate protein structures. Apart from the energy strain method, which is a semiempirical approach based on the ECEPP3 force field (Nemethy et al. 1992), all of the local methods listed above are based on statistical potentials. A conceptually distinct approach is the ProQres method, which was very recently introduced by Wallner and Elofsson (2006). ProQres is based on a neural network that combines structural features to distinguish correct from incorrect regions. ProQres was shown to outperform earlier methodologies based on statistical approaches (Verify3D, ProsaII, and Errat). The data presented in Wallner and Elofsson's study suggests that their machine-learning approach based on structural features is indeed superior to statistics-based methods. However, the knowledge-based methods examined in their work, Verify3D (Luthy et al. 1992; Eisenberg et al. 1997), Prosa (Sippl 1993), and Errat (Colovos and Yeates 1993), are not based on newer statistical potentials. == Benchmarking == Several large-scale benchmarking efforts have been made to assess the relative quality of various current homology modeling methods. Critical Assessment of Structure Prediction (CASP) is a community-wide prediction experiment that runs every two years during the summer months and challenges prediction teams to submit structural models for a number of sequences whose structures have recently been solved experimentally but have not yet been published. Its partner Critical Assessment of Fully Automated Structure Prediction (CAFASP) has run in parallel with CASP but evaluates only models produced via fully automated servers. Continuously running experiments that do not have prediction 'seasons' focus mainly on benchmarking publicly available webservers. LiveBench and EVA run continuously to assess participating servers' performance in prediction of imminently released structures from the PDB. CASP and CAFASP serve mainly as evaluations of the state of the art in modeling, while the continuous assessments seek to evaluate the model quality that would be obtained by a non-expert user employing publicly available tools. == Accuracy == The accuracy of the structures generated by homology modeling is highly dependent on the sequence identity between target and template. Above 50% sequence identity, models tend to be reliable, with only minor errors in side chain packing and rotameric state, and an overall RMSD between the modeled and the experimental structure falling around 1 Å. This error is comparable to the typical resolution of a structure solved by NMR. In the 30–50% identity range, errors can be more severe and are often located in loops. Below 30% identity, serious errors occur, sometimes resulting in the basic fold being mis-predicted. 
This low-identity region is often referred to as the "twilight zone" within which homology modeling is extremely difficult, and to which it is possibly less suited than fold recognition methods. At high sequence identities, the primary source of error in homology modeling derives from the choice of the template or templates on which the model is based, while lower identities exhibit serious errors in sequence alignment that inhibit the production of high-quality models. It has been suggested that the major impediment to quality model production is inadequacies in sequence alignment, since "optimal" structural alignments between two proteins of known structure can be used as input to current modeling methods to produce quite accurate reproductions of the original experimental structure. Attempts have been made to improve the accuracy of homology models built with existing methods by subjecting them to molecular dynamics simulation in an effort to improve their RMSD to the experimental structure. However, current force field parameterizations may not be sufficiently accurate for this task, since homology models used as starting structures for molecular dynamics tend to produce slightly worse structures. Slight improvements have been observed in cases where significant restraints were used during the simulation. == Sources of error == The two most common and large-scale sources of error in homology modeling are poor template selection and inaccuracies in target-template sequence alignment. Controlling for these two factors by using a structural alignment, or a sequence alignment produced on the basis of comparing two solved structures, dramatically reduces the errors in final models; these "gold standard" alignments can be used as input to current modeling methods to produce quite accurate reproductions of the original experimental structure. Results from the most recent CASP experiment suggest that "consensus" methods collecting the results of multiple fold recognition and multiple alignment searches increase the likelihood of identifying the correct template; similarly, the use of multiple templates in the model-building step may be worse than the use of the single correct template but better than the use of a single suboptimal one. Alignment errors may be minimized by the use of a multiple alignment even if only one template is used, and by the iterative refinement of local regions of low similarity. A lesser source of model errors are errors in the template structure. The PDBREPORT Archived 2007-05-31 at the Wayback Machine database lists several million, mostly very small but occasionally dramatic, errors in experimental (template) structures that have been deposited in the PDB. Serious local errors can arise in homology models where an insertion or deletion mutation or a gap in a solved structure result in a region of target sequence for which there is no corresponding template. This problem can be minimized by the use of multiple templates, but the method is complicated by the templates' differing local structures around the gap and by the likelihood that a missing region in one experimental structure is also missing in other structures of the same protein family. Missing regions are most common in loops where high local flexibility increases the difficulty of resolving the region by structure-determination methods. Although some guidance is provided even with a single template by the positioning of the ends of the missing region, the longer the gap, the more difficult it is to model. 
Loops of up to about 9 residues can be modeled with moderate accuracy in some cases if the local alignment is correct. Larger regions are often modeled individually using ab initio structure prediction techniques, although this approach has met with only isolated success. The rotameric states of side chains and their internal packing arrangement also present difficulties in homology modeling, even in targets for which the backbone structure is relatively easy to predict. This is partly due to the fact that many side chains in crystal structures are not in their "optimal" rotameric state as a result of energetic factors in the hydrophobic core and in the packing of the individual molecules in a protein crystal. One method of addressing this problem requires searching a rotameric library to identify locally low-energy combinations of packing states. It has been suggested that a major reason that homology modeling so difficult when target-template sequence identity lies below 30% is that such proteins have broadly similar folds but widely divergent side chain packing arrangements. == Utility == Uses of the structural models include protein–protein interaction prediction, protein–protein docking, molecular docking, and functional annotation of genes identified in an organism's genome. Even low-accuracy homology models can be useful for these purposes, because their inaccuracies tend to be located in the loops on the protein surface, which are normally more variable even between closely related proteins. The functional regions of the protein, especially its active site, tend to be more highly conserved and thus more accurately modeled. Homology models can also be used to identify subtle differences between related proteins that have not all been solved structurally. For example, the method was used to identify cation binding sites on the Na+/K+ ATPase and to propose hypotheses about different ATPases' binding affinity. Used in conjunction with molecular dynamics simulations, homology models can also generate hypotheses about the kinetics and dynamics of a protein, as in studies of the ion selectivity of a potassium channel. Large-scale automated modeling of all identified protein-coding regions in a genome has been attempted for the yeast Saccharomyces cerevisiae, resulting in nearly 1000 quality models for proteins whose structures had not yet been determined at the time of the study, and identifying novel relationships between 236 yeast proteins and other previously solved structures. == See also == Protein structure prediction Protein structure prediction software Protein threading Molecular replacement == References ==
Wikipedia/Homology_model
The Born–Mayer equation is an equation that is used to calculate the lattice energy of a crystalline ionic compound. It is a refinement of the Born–Landé equation by using an improved repulsion term. E = − N A M z + z − e 2 4 π ϵ 0 r 0 ( 1 − ρ r 0 ) {\displaystyle E=-{\frac {N_{A}Mz^{+}z^{-}e^{2}}{4\pi \epsilon _{0}r_{0}}}\left(1-{\frac {\rho }{r_{0}}}\right)} where: NA = Avogadro constant; M = Madelung constant, relating to the geometry of the crystal; z+ = charge number of cation z− = charge number of anion e = elementary charge, 1.6022×10−19 C ε0 = permittivity of free space 4πε0 = 1.112×10−10 C2/(J·m) r0 = distance to closest ion ρ = a constant dependent on the compressibility of the crystal; 30 pm works well for all alkali metal halides == See also == Born–Landé equation Kapustinskii equation == References ==
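As a worked example, the equation can be evaluated for rock-salt NaCl. The Madelung constant (about 1.7476) and nearest-neighbour distance (about 282 pm) used in the short Python sketch below are commonly quoted values and are assumptions of this illustration; the constants are those given above.

# Constants as given above
N_A = 6.02214e23          # Avogadro constant, 1/mol
e = 1.6022e-19            # elementary charge, C
four_pi_eps0 = 1.112e-10  # 4*pi*eps0, C^2/(J*m)

def born_mayer_energy(M, z_plus, z_minus, r0, rho=30e-12):
    # Lattice energy per mole (J/mol) from the Born–Mayer equation; r0 and rho in metres
    return -(N_A * M * z_plus * z_minus * e**2) / (four_pi_eps0 * r0) * (1.0 - rho / r0)

# Rock-salt NaCl: Madelung constant ~1.7476, nearest-neighbour distance ~282 pm (assumed values)
E = born_mayer_energy(M=1.7476, z_plus=1, z_minus=1, r0=282e-12)
print(f"NaCl lattice energy: {E / 1000:.0f} kJ/mol")   # roughly -770 kJ/mol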
Wikipedia/Born–Mayer_equation
In computational biology, de novo protein structure prediction refers to an algorithmic process by which protein tertiary structure is predicted from its amino acid primary sequence. The problem itself has occupied leading scientists for decades while still remaining unsolved. According to Science, the problem remains one of the top 125 outstanding issues in modern science. At present, some of the most successful methods have a reasonable probability of predicting the folds of small, single-domain proteins within 1.5 angstroms over the entire structure. De novo methods, a term first coined by William DeGrado, tend to require vast computational resources, and have thus only been carried out for relatively small proteins. De novo protein structure modeling is distinguished from Template-based modeling (TBM) by the fact that no solved homologue to the protein of interest is used, making efforts to predict protein structure from amino acid sequence exceedingly difficult. Prediction of protein structure de novo for larger proteins will require better algorithms and larger computational resources such as those afforded by either powerful supercomputers (such as Blue Gene or MDGRAPE-3) or distributed computing projects (such as Folding@home, Rosetta@home, the Human Proteome Folding Project, or Nutritious Rice for the World). Although computational barriers are vast, the potential benefits of structural genomics (by predicted or experimental methods) to fields such as medicine and drug design make de novo structure prediction an active research field. == Background == Currently, the gap between known protein sequences and confirmed protein structures is immense. At the beginning of 2008, only about 1% of the sequences listed in the UniProtKB database corresponded to structures in the Protein Data Bank (PDB), leaving a gap between sequence and structure of approximately five million. Experimental techniques for determining tertiary structure have faced serious bottlenecks in their ability to determine structures for particular proteins. For example, whereas X-ray crystallography has been successful in crystallizing approximately 80,000 cytosolic proteins, it has been far less successful in crystallizing membrane proteins – approximately 280. In light of experimental limitations, devising efficient computer programs to close the gap between known sequence and structure is believed to be the only feasible option. De novo protein structure prediction methods attempt to predict tertiary structures from sequences based on general principles that govern protein folding energetics and/or statistical tendencies of conformational features that native structures acquire, without the use of explicit templates. Research into de novo structure prediction has been primarily focused into three areas: alternate lower-resolution representations of proteins, accurate energy functions, and efficient sampling methods. A general paradigm for de novo prediction involves sampling conformation space, guided by scoring functions and other sequence-dependent biases such that a large set of candidate (“decoy") structures are generated. Native-like conformations are then selected from these decoys using scoring functions as well as conformer clustering. High-resolution refinement is sometimes used as a final step to fine-tune native-like structures. There are two major classes of scoring functions. Physics-based functions are based on mathematical models describing aspects of the known physics of molecular interaction. 
Knowledge-based functions are formed with statistical models capturing aspects of the properties of native protein conformations. == Amino acid sequence determines tertiary protein structure == Several lines of evidence have been presented in favor of the notion that primary protein sequence contains all the information required for overall three-dimensional protein structure, making the idea of de novo protein prediction possible. First, proteins with different functions usually have different amino acid sequences. Second, several different human diseases, such as Duchenne muscular dystrophy, can be linked to loss of protein function resulting from a change in just a single amino acid in the primary sequence. Third, proteins with similar functions across many different species often have similar amino acid sequences. Ubiquitin, for example, is a protein involved in regulating the degradation of other proteins; its amino acid sequence is nearly identical in species as far separated as Drosophila melanogaster and Homo sapiens. Fourth, by thought experiment, one can deduce that protein folding must not be a completely random process and that information necessary for folding must be encoded within the primary structure. For example, if each of the 100 amino acid residues within a small polypeptide could take up 10 different conformations on average, the polypeptide would have 10^100 possible conformations. If one possible conformation were tested every 10^-13 seconds, it would take about 10^77 years to sample all possible conformations. However, proteins are properly folded within the body on short timescales all the time, meaning that the process cannot be random and, thus, can potentially be modeled. One of the strongest lines of evidence for the supposition that all the relevant information needed to encode protein tertiary structure is found in the primary sequence was demonstrated in the 1950s by Christian Anfinsen. In a classic experiment, he showed that ribonuclease A could be entirely denatured by being submerged in a solution of urea (to disrupt stabilizing hydrophobic bonds) in the presence of a reducing agent (to cleave stabilizing disulfide bonds). Upon removal of the protein from this environment, the denatured and functionless ribonuclease protein spontaneously refolded and regained function, demonstrating that protein tertiary structure is encoded in the primary amino acid sequence. Had the protein reformed randomly, over one hundred different combinations of four disulfide bonds could have formed. However, in the majority of cases proteins will require the presence of molecular chaperones within the cell for proper folding. The overall shape of a protein may be encoded in its amino acid structure, but its folding may depend on chaperones to assist in folding. == Successful de novo modeling requirements == De novo conformation predictors usually function by producing candidate conformations (decoys) and then choosing amongst them based on their thermodynamic stability and energy state. 
Most successful predictors will have the following three factors in common: 1) An accurate energy function that corresponds the most thermodynamically stable state to the native structure of a protein 2) An efficient search method capable of quickly identifying low-energy states through conformational search 3) The ability to select native-like models from a collection of decoy structures De novo programs will search three dimensional space and, in the process, produce candidate protein conformations. As a protein approaches its correctly folded, native state, entropy and free energy will decrease. Using this information, de novo predictors can discriminate amongst decoys. Specifically, de novo programs will select possible conformations with lower free energies – which are more likely to be correct than those structures with higher free energies. As stated by David A. Baker in regards to how his de novo Rosetta predictor works, “during folding, each local segment of the chain flickers between a different subset of local conformations…folding to the native structure occurs when the conformations adopted by the local segments and their relative orientations allow…low energy features of native protein structures. In the Rosetta algorithm…the program then searches for the combination of these local conformations that has the lowest overall energy.” However, some de novo methods work by first enumerating through the entire conformational space using a simplified representation of a protein structure, and then select the ones that are most likely to be native-like. An example of this approach is one based on representing protein folds using tetrahedral lattices and building all atoms models on top of all possible conformations obtained using the tetrahedral representation. This approach was used successfully at CASP3 to predict a protein fold whose topology had not been observed before by Michael Levitt's team. By developing the QUARK program, Xu and Zhang showed that ab initio structure of some proteins can be successfully constructed through a knowledge-based force field . == Prediction strategies == If a protein of known tertiary structure shares at least 30% of its sequence with a potential homolog of undetermined structure, comparative methods that overlay the putative unknown structure with the known can be utilized to predict the likely structure of the unknown. However, below this threshold three other classes of strategy are used to determine possible structure from an initial model: ab initio protein prediction, fold recognition, and threading. Ab Initio Methods: In ab initio methods, an initial effort to elucidate secondary structures (alpha helix, beta sheet, beta turn, etc.) from primary structure is made by utilization of physicochemical parameters and neural net algorithms. From that point, algorithms predict tertiary folding. One drawback to this strategy is that it is not yet capable of incorporating the locations and orientation of amino acid side chains. Fold Prediction: In fold recognition strategies, a prediction of secondary structure is first made and then compared to either a library of known protein folds, such as CATH or SCOP, or what is known as a "periodic table" of possible secondary structure forms. A confidence score is then assigned to likely matches. Threading: In threading strategies, the fold recognition technique is expanded further. 
In this process, empirically based energy functions for the interaction of residue pairs are used to place the unknown protein onto a putative backbone as a best fit, accommodating gaps where appropriate. The best interactions are then accentuated in order to discriminate amongst potential decoys and to predict the most likely conformation. The goal of both fold and threading strategies is to ascertain whether a fold in an unknown protein is similar to a domain in a known one deposited in a database, such as the Protein Data Bank (PDB). This is in contrast to de novo (ab initio) methods where structure is determined using a physics-based approach in lieu of comparing folds in the protein to structures in a database. == Limitations of de novo prediction methods == A major limitation of de novo protein prediction methods is the extraordinary amount of computer time required to successfully solve for the native conformation of a protein. Distributed methods, such as Rosetta@home, have attempted to ameliorate this by recruiting individuals who then volunteer idle home computer time in order to process data. Even these methods face challenges, however. For example, a distributed method was utilized by a team of researchers at the University of Washington and the Howard Hughes Medical Institute to predict the tertiary structure of the protein T0283 from its amino acid sequence. In a blind test comparing the accuracy of this distributed technique with the experimentally confirmed structure deposited within the Protein Data Bank (PDB), the predictor produced excellent agreement with the deposited structure. However, the time and number of computers required for this feat were enormous – almost two years and approximately 70,000 home computers, respectively. One method proposed to overcome such limitations involves the use of Markov models (see Markov chain Monte Carlo). One possibility is that such models could be constructed in order to assist with free energy computation and protein structure prediction, perhaps by refining computational simulations. Another way of circumventing the computational power limitations is using coarse-grained modeling. Coarse-grained protein models allow for de novo structure prediction of small proteins, or large protein fragments, in a short computational time. === Structure prediction of de novo proteins === Another limitation of protein structure prediction software concerns a specific class of proteins, namely de novo proteins. Structure prediction software such as AlphaFold relies on co-evolutionary data derived from multiple sequence alignment (MSA) and homologous protein sequences to predict structures of proteins. However, by definition, de novo proteins lack homologous sequences, as they are evolutionarily new. Thus, structure prediction software which relies on such homology can be expected to perform poorly in predicting structures of de novo proteins. To improve the accuracy of structure prediction for de novo proteins, new software tools have been developed. One such tool, ESMFold, is a newly developed large language model (LLM) for the prediction of protein structures based solely on their amino acid sequences. It can predict the 3D structure of a protein at atomic-level resolution from a single amino acid sequence as input. 
== Critical assessment of protein structure prediction == “Progress for all variants of computational protein structure prediction methods is assessed in the biannual, community wide Critical Assessment of Protein Structure Prediction (CASP) experiments. In the CASP experiments, research groups are invited to apply their prediction methods to amino acid sequences for which the native structure is not known but to be determined and to be published soon. Even though the number of amino acid sequences provided by the CASP experiments is small, these competitions provide a good measure to benchmark methods and progress in the field in an arguably unbiased manner.” == Notes == Samudrala, R, Xia, Y, Huang, E.S., Levitt, M. Ab initio prediction of protein structure using a combined hierarchical approach. (1999). Proteins Suppl 3: 194-198. Bradley, P.; Malmstrom, L.; Qian, B.; Schonbrun, J.; Chivian, D.; Kim, D. E.; Meiler, J.; Misura, K. M.; Baker, D. (2005). "Free modeling with Rosetta in CASP6". Proteins. 61 (Suppl 7): 128–34. doi:10.1002/prot.20729. PMID 16187354. S2CID 36366681. Bonneau; Baker, D (2001). "Ab Initio Protein Structure Prediction: Progress and Prospects". Annu. Rev. Biophys. Biomol. Struct. 30: 173–89. doi:10.1146/annurev.biophys.30.1.173. PMID 11340057. J. Skolnick, Y. Zhang and A. Kolinski. Ab Initio modeling. Structural genomics and high throughput structural biology. M. Sundsrom, M. Norin and A. Edwards, eds. 2006: 137-162. J Lee, S Wu, Y Zhang. Ab initio protein structure prediction. From Protein Structure to Function with Bioinformatics, Chapter 1, Edited by D. J. Rigden, (Springer-London, 2009), P. 1-26. == See also == Protein structure prediction Protein structure prediction software Protein design == References == == External links == CASP Folding@Home Archived 2012-09-08 at the Wayback Machine HPF project Foldit Archived 2011-04-04 at the Wayback Machine UniProtKB Protein Data Bank (PDB) Expert Protein Analysis System - links to protein prediction tools
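The decoy-selection step described earlier (scoring functions combined with conformer clustering) can be illustrated schematically. The following Python sketch is a toy procedure run on synthetic data; it is not the algorithm of any particular program, and the thresholds and names are illustrative. It keeps a low-energy fraction of decoys and then returns the decoy with the most structural neighbours, a crude stand-in for the centre of the largest cluster.

import numpy as np

def select_decoy(energies, rmsd, energy_cut=0.25, cluster_radius=2.0):
    # Toy selection: restrict to the lowest-energy fraction, then pick the decoy
    # with the most neighbours within cluster_radius (a crude cluster centre)
    energies = np.asarray(energies)
    n_keep = max(1, int(len(energies) * energy_cut))
    keep = np.argsort(energies)[:n_keep]
    sub = rmsd[np.ix_(keep, keep)]
    neighbours = (sub < cluster_radius).sum(axis=1)
    return keep[np.argmax(neighbours)]

# Synthetic example: 200 decoys with random energies and random pairwise RMSDs
rng = np.random.default_rng(1)
energies = rng.normal(loc=-50.0, scale=5.0, size=200)
d = rng.uniform(0.5, 10.0, size=(200, 200))
rmsd = np.triu(d, 1) + np.triu(d, 1).T            # symmetric matrix with zero diagonal
best = select_decoy(energies, rmsd)
print("selected decoy index:", best, "energy:", round(float(energies[best]), 2))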
Wikipedia/De_novo_protein_structure_prediction
In mathematics, numerical analysis, and numerical partial differential equations, domain decomposition methods solve a boundary value problem by splitting it into smaller boundary value problems on subdomains and iterating to coordinate the solution between adjacent subdomains. A coarse problem with one or few unknowns per subdomain is used to further coordinate the solution between the subdomains globally. The problems on the subdomains are independent, which makes domain decomposition methods suitable for parallel computing. Domain decomposition methods are typically used as preconditioners for Krylov space iterative methods, such as the conjugate gradient method, GMRES, and LOBPCG. In overlapping domain decomposition methods, the subdomains overlap by more than the interface. Overlapping domain decomposition methods include the Schwarz alternating method and the additive Schwarz method. Many domain decomposition methods can be written and analyzed as a special case of the abstract additive Schwarz method. In non-overlapping methods, the subdomains intersect only on their interface. In primal methods, such as Balancing domain decomposition and BDDC, the continuity of the solution across the subdomain interface is enforced by representing the value of the solution on all neighboring subdomains by the same unknown. In dual methods, such as FETI, the continuity of the solution across the subdomain interface is enforced by Lagrange multipliers. The FETI-DP method is a hybrid between a dual and a primal method. Non-overlapping domain decomposition methods are also called iterative substructuring methods. Mortar methods are discretization methods for partial differential equations, which use separate discretization on nonoverlapping subdomains. The meshes on the subdomains do not match on the interface, and the equality of the solution is enforced by Lagrange multipliers, judiciously chosen to preserve the accuracy of the solution. In the engineering practice in the finite element method, continuity of solutions between non-matching subdomains is implemented by multiple-point constraints. Finite element simulations of moderate size models require solving linear systems with millions of unknowns. Several hours per time step is an average sequential run time; therefore, parallel computing is a necessity. Domain decomposition methods embody large potential for parallelization of the finite element methods, and serve as a basis for distributed, parallel computations. == Example 1: 1D Linear BVP == u ″ ( x ) − u ( x ) = 0 {\displaystyle u''(x)-u(x)=0} u ( 0 ) = 0 , u ( 1 ) = 1 {\displaystyle u(0)=0,u(1)=1} The exact solution is: u ( x ) = e x − e − x e 1 − e − 1 {\displaystyle u(x)={\frac {e^{x}-e^{-x}}{e^{1}-e^{-1}}}} Subdivide the domain into two subdomains, one from [ 0 , 1 2 ] {\displaystyle \left[0,{\frac {1}{2}}\right]} and another from [ 1 2 , 1 ] {\displaystyle \left[{\frac {1}{2}},1\right]} . In the left subdomain define the interpolating function v 1 ( x ) {\displaystyle v_{1}(x)} and in the right define v 2 ( x ) {\displaystyle v_{2}(x)} . 
At the interface between these two subdomains the following interface conditions shall be imposed: v 1 ( 1 2 ) = v 2 ( 1 2 ) {\displaystyle v_{1}\left({\frac {1}{2}}\right)=v_{2}\left({\frac {1}{2}}\right)} v 1 ′ ( 1 2 ) = v 2 ′ ( 1 2 ) {\displaystyle v_{1}'\left({\frac {1}{2}}\right)=v_{2}'\left({\frac {1}{2}}\right)} Let the interpolating functions be defined as: v 1 ( x ) = ∑ n = 0 N u n T n ( y 1 ( x ) ) {\displaystyle v_{1}(x)=\sum _{n=0}^{N}u_{n}T_{n}(y_{1}(x))} v 2 ( x ) = ∑ n = 0 N u n + N T n ( y 2 ( x ) ) {\displaystyle v_{2}(x)=\sum _{n=0}^{N}u_{n+N}T_{n}(y_{2}(x))} y 1 ( x ) = 4 x − 1 {\displaystyle y_{1}(x)=4x-1} y 2 ( x ) = 4 x − 3 {\displaystyle y_{2}(x)=4x-3} Where T n ( y ) {\displaystyle T_{n}(y)} is the nth cardinal function of the chebyshev polynomials of the first kind with input argument y. If N=4 then the following approximation is obtained by this scheme: u 1 = 0.06236 {\displaystyle u_{1}=0.06236} u 2 = 0.21495 {\displaystyle u_{2}=0.21495} u 3 = 0.37428 {\displaystyle u_{3}=0.37428} u 4 = 0.44341 {\displaystyle u_{4}=0.44341} u 5 = 0.51492 {\displaystyle u_{5}=0.51492} u 6 = 0.69972 {\displaystyle u_{6}=0.69972} u 7 = 0.90645 {\displaystyle u_{7}=0.90645} This was obtained with the following MATLAB code. == Related Books == Barry Smith, Petter Bjørstad, and William Gropp: Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations, Cambridge Univ. Press, ISBN 0-521-49589-X (1996). == See also == Multigrid method == External links == The official Domain Decomposition Methods page "Domain Decomposition - Numerical Simulations page". Archived from the original on 2021-01-26.
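As an illustration of Example 1, the following is a minimal Python sketch of the same two-subdomain Chebyshev collocation scheme. It assumes a Trefethen-style Chebyshev differentiation matrix, uses the maps y1 = 4x − 1 and y2 = 4x − 3 described above (so d/dx = 4 d/dy), and imposes the two boundary conditions together with continuity of the value and the first derivative at x = 1/2; all variable names are illustrative.

import numpy as np

def cheb(N):
    # Chebyshev points y_j = cos(j*pi/N) and differentiation matrix on [-1, 1];
    # nodes run from y = 1 down to y = -1 (Trefethen-style construction)
    y = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dY = y[:, None] - y[None, :]
    D = np.outer(c, 1.0 / c) / (dY + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, y

N = 4
D, y = cheb(N)
m = N + 1                                    # nodes per subdomain
L = 16.0 * (D @ D) - np.eye(m)               # u'' - u = 0 becomes 16 D^2 v - v = 0 on each subdomain

A = np.zeros((2 * m, 2 * m))
rhs = np.zeros(2 * m)
A[1:N, :m] = L[1:N, :]                       # interior collocation equations, left subdomain
A[m + 1:m + N, m:] = L[1:N, :]               # interior collocation equations, right subdomain
A[0, N] = 1.0; rhs[0] = 0.0                  # u(0) = 0  (left node at y1 = -1)
A[N, m] = 1.0; rhs[N] = 1.0                  # u(1) = 1  (right node at y2 = +1)
A[m, 0] = 1.0; A[m, m + N] = -1.0            # continuity of v at x = 1/2
A[m + N, :m] = 4.0 * D[0, :]                 # continuity of v' at x = 1/2
A[m + N, m:] -= 4.0 * D[N, :]

sol = np.linalg.solve(A, rhs)
x_left, x_right = (y + 1.0) / 4.0, (y + 3.0) / 4.0
exact = lambda x: np.sinh(x) / np.sinh(1.0)
print(np.c_[x_left, sol[:m], exact(x_left)])     # nodal values on the left subdomain vs exact
print(np.c_[x_right, sol[m:], exact(x_right)])   # nodal values on the right subdomain vs exact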
Wikipedia/Domain_decomposition_method
When examining a system computationally one may be interested in knowing how the free energy changes as a function of some inter- or intramolecular coordinate (such as the distance between two atoms or a torsional angle). The free energy surface along the chosen coordinate is referred to as the potential of mean force (PMF). If the system of interest is in a solvent, then the PMF also incorporates the solvent effects. == General description == The PMF can be obtained in Monte Carlo or molecular dynamics simulations to examine how a system's energy changes as a function of some specific reaction coordinate parameter. For example, it may examine how the system's energy changes as a function of the distance between two residues, or as a protein is pulled through a lipid bilayer. It can be a geometrical coordinate or a more general energetic (solvent) coordinate. Often PMF simulations are used in conjunction with umbrella sampling, because typically the PMF simulation will fail to adequately sample the system space as it proceeds. == Mathematical description == The Potential of Mean Force of a system with N particles is by construction the potential that gives the average force over all the configurations of all the n+1...N particles acting on a particle j at any fixed configuration keeping fixed a set of particles 1...n − ∇ j w ( n ) = ∫ e − β V ( − ∇ j V ) d q n + 1 … d q N ∫ e − β V d q n + 1 … d q N , j = 1 , 2 , … , n {\displaystyle -\nabla _{j}w^{(n)}\,=\,{\frac {\int e^{-\beta V}(-\nabla _{j}V)dq_{n+1}\dots dq_{N}}{\int e^{-\beta V}dq_{n+1}\dots dq_{N}}},~j=1,2,\dots ,n} Above, − ∇ j w ( n ) {\displaystyle -\nabla _{j}w^{(n)}} is the averaged force, i.e. "mean force" on particle j. And w ( n ) {\displaystyle w^{(n)}} is the so-called potential of mean force. For n = 2 {\displaystyle n=2} , w ( 2 ) ( r ) {\displaystyle w^{(2)}(r)} is the average work needed to bring the two particles from infinite separation to a distance r {\displaystyle r} . It is also related to the radial distribution function of the system, g ( r ) {\displaystyle g(r)} , by: g ( r ) = e − β w ( 2 ) ( r ) {\displaystyle g(r)=e^{-\beta w^{(2)}(r)}} == Application == The potential of mean force w ( 2 ) {\displaystyle w^{(2)}} is usually applied in the Boltzmann inversion method as a first guess for the effective pair interaction potential that ought to reproduce the correct radial distribution function in a mesoscopic simulation. Lemkul et al. have used steered molecular dynamics simulations to calculate the potential of mean force to assess the stability of Alzheimer's amyloid protofibrils. Gosai et al. have also used umbrella sampling simulations to show that potential of mean force decreases between thrombin and its aptamer (a protein-ligand complex) under the effect of electrical fields. == See also == Statistical potential Free energy perturbation Potential energy surface == References == == Further reading == McQuarrie, D. A. Statistical Mechanics. Chandler, D. (1987). Introduction to Modern Statistical Mechanics. Oxford University Press. == External links == Potential of Mean force
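Given a radial distribution function g(r) from a simulation, the relation g(r) = exp(−βw(r)) can be inverted directly to obtain the PMF along the pair separation. The short Python sketch below does this for an illustrative synthetic g(r); the peak shape, distances, and temperature are assumptions of the example, not data from any particular system.

import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K

def pmf_from_rdf(g_r, T=300.0):
    # Potential of mean force w(r) = -k_B T ln g(r) from a radial distribution function
    g_r = np.asarray(g_r, dtype=float)
    w = np.full_like(g_r, np.inf)            # w is undefined (infinite) where g(r) = 0
    nonzero = g_r > 0
    w[nonzero] = -k_B * T * np.log(g_r[nonzero])
    return w

# Toy RDF with a single first-solvation-shell peak (illustrative numbers only)
r = np.linspace(0.25, 1.0, 76)                       # nm
g = 1.0 + 1.5 * np.exp(-((r - 0.35) / 0.05) ** 2)
w = pmf_from_rdf(g)
print("depth of the PMF minimum: %.2f kJ/mol" % (w.min() * 6.022e23 / 1000))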
Wikipedia/Potential_of_mean_force
Abalone is a general purpose molecular dynamics and molecular graphics program for simulations of biomolecules under periodic boundary conditions in explicit (flexible SPC water model) or implicit water models. It is mainly designed to simulate protein folding and DNA-ligand complexes with the AMBER force field. == Key features == 3D molecular graphics Automatic Force Field generator for bioelements: H, C, N, O Building and editing chemical structures Library of building blocks Force fields: Assisted Model Building with Energy Refinement (AMBER) 94, 96, 99SB, 03; Optimized Potentials for Liquid Simulations (OPLS) Geometry optimizing Molecular dynamics with multiple time step integrator Hybrid Monte Carlo Replica exchange Interface with quantum chemistry - ORCA, NWChem, Firefly (PC GAMESS), CP2K GPU accelerated molecular modeling == See also == == References == == External links == Official website Benchmarking
Wikipedia/Abalone_(molecular_mechanics)
Molecular modeling on GPU is the technique of using a graphics processing unit (GPU) for molecular simulations. In 2007, Nvidia introduced video cards that could be used not only to show graphics but also for scientific calculations. These cards include many arithmetic units (as of 2016, up to 3,584 in Tesla P100) working in parallel. Long before this event, the computational power of video cards was purely used to accelerate graphics calculations. The new features of these cards made it possible to develop parallel programs in a high-level application programming interface (API) named CUDA. This technology substantially simplified programming by enabling programs to be written in C/C++. More recently, OpenCL allows cross-platform GPU acceleration. Quantum chemistry calculations and molecular mechanics simulations (molecular modeling in terms of classical mechanics) are among beneficial applications of this technology. The video cards can accelerate the calculations tens of times, so a PC with such a card has the power similar to that of a cluster of workstations based on common processors. == GPU accelerated molecular modelling software == === Programs === Abalone – Molecular Dynamics (Benchmark) ACEMD on GPUs since 2009 Benchmark AMBER on GPUs version Ascalaph on GPUs version – Ascalaph Liquid GPU AutoDock – Molecular docking BigDFT Ab initio program based on wavelet BrianQC Quantum chemistry (HF and DFT) and molecular mechanics Blaze ligand-based virtual screening CHARMM – Molecular dynamics [1] CP2K Ab initio molecular dynamics Desmond (software) on GPUs, workstations, and clusters Firefly (formerly PC GAMESS) FastROCS GOMC – GPU Optimized Monte Carlo simulation engine GPIUTMD – Graphical processors for Many-Particle Dynamics GPU4PySCF – GPU accelerated plugin package for PySCF GPUMD - A light weight general-purpose molecular dynamics code GROMACS on GPUs HALMD – Highly Accelerated Large-scale MD package HOOMD-blue Archived 2011-11-11 at the Wayback Machine – Highly Optimized Object-oriented Many-particle Dynamics—Blue Edition LAMMPS on GPUs version – lammps for accelerators LIO DFT-Based GPU optimized code - [2] Octopus has support for OpenCL. oxDNA – DNA and RNA coarse-grained simulations on GPUs PWmat – Plane-Wave Density Functional Theory simulations RUMD - Roskilde University Molecular Dynamics TeraChem – Quantum chemistry and ab initio Molecular Dynamics TINKER on GPUs. VMD & NAMD on GPUs versions YASARA runs MD simulations on all GPUs using OpenCL. === API === BrianQC – has an open C level API for quantum chemistry simulations on GPUs, provides GPU-accelerated version of Q-Chem and PSI OpenMM – an API for accelerating molecular dynamics on GPUs, v1.0 provides GPU-accelerated version of GROMACS mdcore – an open-source platform-independent library for molecular dynamics simulations on modern shared-memory parallel architectures. === Distributed computing projects === GPUGRID distributed supercomputing infrastructure Folding@home distributed computing project Exscalate4Cov large-scale virtual screening experiment == See also == == References == == External links == More links for classical and quantum сhemistry on GPUs
Wikipedia/Molecular_modeling_on_GPUs
Semi-empirical quantum chemistry methods are based on the Hartree–Fock formalism, but make many approximations and obtain some parameters from empirical data. They are very important in computational chemistry for treating large molecules where the full Hartree–Fock method without the approximations is too expensive. The use of empirical parameters appears to allow some inclusion of electron correlation effects into the methods. Within the framework of Hartree–Fock calculations, some pieces of information (such as two-electron integrals) are sometimes approximated or completely omitted. In order to correct for this loss, semi-empirical methods are parametrized, that is their results are fitted by a set of parameters, normally in such a way as to produce results that best agree with experimental data, but sometimes to agree with ab initio results. == Type of simplifications used == Semi-empirical methods follow what are often called empirical methods where the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel. For all valence electron systems, the extended Hückel method was proposed by Roald Hoffmann. Semi-empirical calculations are much faster than their ab initio counterparts, mostly due to the use of the zero differential overlap approximation. Their results, however, can be very wrong if the molecule being computed is not similar enough to the molecules in the database used to parametrize the method. == Preferred application domains == === Methods restricted to π-electrons === These methods exist for the calculation of electronically excited states of polyenes, both cyclic and linear. These methods, such as the Pariser–Parr–Pople method (PPP), can provide good estimates of the π-electronic excited states, when parameterized well. For many years, the PPP method outperformed ab initio excited state calculations. === Methods restricted to all valence electrons. === These methods can be grouped into several groups: Methods such as CNDO/2, INDO and NDDO that were introduced by John Pople. The implementations aimed to fit, not experiment, but ab initio minimum basis set results. These methods are now rarely used but the methodology is often the basis of later methods. Methods that are in the MOPAC, AMPAC, SPARTAN and/or CP2K computer programs originally from the group of Michael Dewar. These are MINDO, MNDO, AM1, PM3, PM6, PM7 and SAM1. Here the objective is to use parameters to fit experimental heats of formation, dipole moments, ionization potentials, and geometries. This is by far the largest group of semiempirical methods. Methods whose primary aim is to calculate excited states and hence predict electronic spectra. These include ZINDO and SINDO. The OMx (x=1,2,3) methods can also be viewed as belonging to this class, although they are also suitable for ground-state applications; in particular, the combination of OM2 and MRCI is an important tool for excited state molecular dynamics. Tight-binding methods, e.g. a large family of methods known as DFTB, are sometimes classified as semiempirical methods as well. More recent examples include the semiempirical quantum mechanical methods GFNn-xTB (n=0,1,2), which are particularly suited for the geometry, vibrational frequencies, and non-covalent interactions of large molecules. 
The NOTCH method includes many new, physically-motivated terms compared to the NDDO family of methods, is much less empirical than the other semi-empirical methods (almost all of its parameters are determined non-empirically), provides robust accuracy for bonds between uncommon element combinations, and is applicable to ground and excited states. == See also == List of quantum chemistry and solid-state physics software == References ==
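The Hückel method mentioned above reduces to diagonalizing a simple connectivity matrix over the π-type orbitals. The following Python sketch does this for a linear polyene; the choice α = 0 and β = −1 simply expresses the orbital energies in units of α and β, and the molecule (1,3-butadiene) is an illustrative example.

import numpy as np

def huckel_energies(n_atoms, alpha=0.0, beta=-1.0, cyclic=False):
    # Hückel pi-orbital energies for a chain (or ring) of n_atoms carbon 2p orbitals
    H = np.zeros((n_atoms, n_atoms))
    np.fill_diagonal(H, alpha)
    for i in range(n_atoms - 1):
        H[i, i + 1] = H[i + 1, i] = beta
    if cyclic:
        H[0, -1] = H[-1, 0] = beta
    return np.sort(np.linalg.eigvalsh(H))

# 1,3-butadiene: with alpha = 0 and beta = -1, the bonding MOs alpha + 1.618*beta and
# alpha + 0.618*beta appear here as -1.618 and -0.618
print(huckel_energies(4))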
Wikipedia/Semi-empirical_quantum_chemistry_methods
Car–Parrinello molecular dynamics or CPMD refers to either a method used in molecular dynamics (also known as the Car–Parrinello method) or the computational chemistry software package used to implement this method. The CPMD method is one of the major methods for calculating ab-initio molecular dynamics (ab-initio MD or AIMD). Ab initio molecular dynamics (ab initio MD) is a computational method that uses first principles, or fundamental laws of nature, to simulate the motion of atoms in a system. It is a type of molecular dynamics (MD) simulation that does not rely on empirical potentials or force fields to describe the interactions between atoms, but rather calculates these interactions directly from the electronic structure of the system using quantum mechanics. In an ab initio MD simulation, the total energy of the system is calculated at each time step using density functional theory (DFT) or another method of quantum chemistry. The forces acting on each atom are then determined from the gradient of the energy with respect to the atomic coordinates, and the equations of motion are solved to predict the trajectory of the atoms. AIMD permits chemical bond breaking and forming events to occur and accounts for electronic polarization effect. Therefore, Ab initio MD simulations can be used to study a wide range of phenomena, including the structural, thermodynamic, and dynamic properties of materials and chemical reactions. They are particularly useful for systems that are not well described by empirical potentials or force fields, such as systems with strong electronic correlation or systems with many degrees of freedom. However, ab initio MD simulations are computationally demanding and require significant computational resources. The CPMD method is related to the more common Born–Oppenheimer molecular dynamics (BOMD) method in that the quantum mechanical effect of the electrons is included in the calculation of energy and forces for the classical motion of the nuclei. CPMD and BOMD are different types of AIMD. However, whereas BOMD treats the electronic structure problem within the time-independent Schrödinger equation, CPMD explicitly includes the electrons as active degrees of freedom, via (fictitious) dynamical variables. The software is a parallelized plane wave / pseudopotential implementation of density functional theory, particularly designed for ab initio molecular dynamics. == Car–Parrinello method == The Car–Parrinello method is a type of molecular dynamics, usually employing periodic boundary conditions, planewave basis sets, and density functional theory, proposed by Roberto Car and Michele Parrinello in 1985 while working at SISSA, who were subsequently awarded the Dirac Medal by ICTP in 2009. In contrast to Born–Oppenheimer molecular dynamics wherein the nuclear (ions) degree of freedom are propagated using ionic forces which are calculated at each iteration by approximately solving the electronic problem with conventional matrix diagonalization methods, the Car–Parrinello method explicitly introduces the electronic degrees of freedom as (fictitious) dynamical variables, writing an extended Lagrangian for the system which leads to a system of coupled equations of motion for both ions and electrons. 
In this way, an explicit electronic minimization at each time step, as done in Born–Oppenheimer MD, is not needed: after an initial standard electronic minimization, the fictitious dynamics of the electrons keeps them on the electronic ground state corresponding to each new ionic configuration visited along the dynamics, thus yielding accurate ionic forces. In order to maintain this adiabaticity condition, it is necessary that the fictitious mass of the electrons is chosen small enough to avoid a significant energy transfer from the ionic to the electronic degrees of freedom. This small fictitious mass in turn requires that the equations of motion are integrated using a smaller time step than the one (1–10 fs) commonly used in Born–Oppenheimer molecular dynamics. Currently, the CPMD method can be applied to systems that consist of a few tens or hundreds of atoms and access timescales on the order of tens of picoseconds. == General approach == In CPMD the core electrons are usually described by a pseudopotential and the wavefunction of the valence electrons are approximated by a plane wave basis set. The ground state electronic density (for fixed nuclei) is calculated self-consistently, usually using the density functional theory method. Kohn-Sham equations are often used to calculate the electronic structure, where electronic orbitals are expanded in a plane-wave basis set. Then, using that density, forces on the nuclei can be computed, to update the trajectories (using, e.g. the Verlet integration algorithm). In addition, however, the coefficients used to obtain the electronic orbital functions can be treated as a set of extra spatial dimensions, and trajectories for the orbitals can be calculated in this context. == Fictitious dynamics == CPMD is an approximation of the Born–Oppenheimer MD (BOMD) method. In BOMD, the electrons' wave function must be minimized via matrix diagonalization at every step in the trajectory. CPMD uses fictitious dynamics to keep the electrons close to the ground state, preventing the need for a costly self-consistent iterative minimization at each time step. The fictitious dynamics relies on the use of a fictitious electron mass (usually in the range of 400 – 800 a.u.) to ensure that there is very little energy transfer from nuclei to electrons, i.e. to ensure adiabaticity. Any increase in the fictitious electron mass resulting in energy transfer would cause the system to leave the ground-state BOMD surface. === Lagrangian === L = 1 2 ( ∑ I n u c l e i M I R ˙ I 2 + μ ∑ i o r b i t a l s ∫ d r | ψ ˙ i ( r , t ) | 2 ) − E [ { ψ i } , { R I } ] + ∑ i j Λ i j ( ∫ d r ψ i ψ j − δ i j ) , {\displaystyle {\mathcal {L}}={\frac {1}{2}}\left(\sum _{I}^{\mathrm {nuclei} }\ M_{I}{\dot {\mathbf {R} }}_{I}^{2}+\mu \sum _{i}^{\mathrm {orbitals} }\int d\mathbf {r} \ |{\dot {\psi }}_{i}(\mathbf {r} ,t)|^{2}\right)-E\left[\{\psi _{i}\},\{\mathbf {R} _{I}\}\right]+\sum _{ij}\Lambda _{ij}\left(\int d\mathbf {r} \ \psi _{i}\psi _{j}-\delta _{ij}\right),} where μ {\displaystyle \mu } is the fictitious mass parameter; E[{ψi},{RI}] is the Kohn–Sham energy density functional, which outputs energy values when given Kohn–Sham orbitals and nuclear positions. === Orthogonality constraint === ∫ d r ψ i ∗ ( r , t ) ψ j ( r , t ) = δ i j , {\displaystyle \int d\mathbf {r} \ \psi _{i}^{*}(\mathbf {r} ,t)\psi _{j}(\mathbf {r} ,t)=\delta _{ij},} where δij is the Kronecker delta. 
=== Equations of motion === The equations of motion are obtained by finding the stationary point of the Lagrangian under variations of ψi and RI, with the orthogonality constraint. M I R ¨ I = − ∇ I E [ { ψ i } , { R I } ] {\displaystyle M_{I}{\ddot {\mathbf {R} }}_{I}=-\nabla _{I}\,E\left[\{\psi _{i}\},\{\mathbf {R} _{I}\}\right]} μ ψ ¨ i ( r , t ) = − δ E δ ψ i ∗ ( r , t ) + ∑ j Λ i j ψ j ( r , t ) , {\displaystyle \mu {\ddot {\psi }}_{i}(\mathbf {r} ,t)=-{\frac {\delta E}{\delta \psi _{i}^{*}(\mathbf {r} ,t)}}+\sum _{j}\Lambda _{ij}\psi _{j}(\mathbf {r} ,t),} where Λij is a Lagrangian multiplier matrix to comply with the orthonormality constraint. === Born–Oppenheimer limit === In the formal limit where μ → 0, the equations of motion approach Born–Oppenheimer molecular dynamics. == Software packages == There are a number of software packages available for performing AIMD simulations. Some of the most widely used packages include: CP2K: an open-source software package for AIMD. Quantum Espresso: an open-source package for performing DFT calculations. It includes a module for AIMD. VASP: a commercial software package for performing DFT calculations. It includes a module for AIMD. Gaussian: a commercial software package that can perform AIMD. NWChem: an open-source software package for AIMD. LAMMPS: an open-source software package for performing classical and ab initio MD simulations. SIESTA: an open-source software package for AIMD. == Application == Studying the behavior of water near a hydrophobic graphene sheet. Investigating the structure and dynamics of liquid water at ambient temperature. Solving the heat transfer problems (heat conduction and thermal radiation) between Si/Ge superlattices. Probing the proton transfer along 1D water chains inside carbon nanotubes. Evaluating the critical point of aluminum. Predicting the amorphous phase of the phase-change memory material GeSbTe. Studying the combustion process of lignite-water systems. Computing and analyzing the IR spectra in terms of H-bond interactions. == See also == Computational physics Density functional theory Computational chemistry Molecular dynamics Quantum chemistry Ab initio quantum chemistry methods Quantum chemistry computer programs List of software for molecular mechanics modeling List of quantum chemistry and solid-state physics software CP2K == References == == External links == Car-Parrinello Molecular Dynamics about [CP2K Open Source Molecular Dynamics ]
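The classical nuclear propagation common to these schemes can be sketched with a velocity Verlet loop in which, for brevity, the electronic-structure call is replaced by a toy harmonic "bond" potential. This is a schematic of the nuclear update used in ab initio MD in general, not of the Car–Parrinello fictitious electron dynamics itself; the force constant, masses, and time step are arbitrary illustrative numbers.

import numpy as np

def toy_forces(positions):
    # Stand-in for the electronic-structure call: two atoms bound by a harmonic bond
    # (in BOMD/CPMD this step would be a DFT energy and gradient evaluation)
    k, r0 = 500.0, 1.0
    d = positions[1] - positions[0]
    r = np.linalg.norm(d)
    f = -k * (r - r0) * d / r                 # force on atom 1; atom 0 feels the opposite
    return np.array([-f, f])

def velocity_verlet(positions, velocities, masses, dt, n_steps):
    # Minimal velocity Verlet loop for the classical nuclear degrees of freedom
    forces = toy_forces(positions)
    for _ in range(n_steps):
        velocities += 0.5 * dt * forces / masses[:, None]
        positions += dt * velocities
        forces = toy_forces(positions)        # new forces at the updated geometry
        velocities += 0.5 * dt * forces / masses[:, None]
    return positions, velocities

pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
vel = np.zeros_like(pos)
masses = np.array([1.0, 1.0])
pos, vel = velocity_verlet(pos, vel, masses, dt=1.0e-3, n_steps=1000)
print(pos[1] - pos[0])                        # the bond oscillates around its equilibrium length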
Wikipedia/Car–Parrinello_method
The reaction field method is used in molecular simulations to simulate the effect of long-range dipole-dipole interactions for simulations with periodic boundary conditions. Around each molecule there is a 'cavity' or sphere within which the Coulomb interactions are treated explicitly. Outside of this cavity the medium is assumed to have a uniform dielectric constant. The molecule induces polarization in this media which in turn creates a reaction field, sometimes called the Onsager reaction field. Although Onsager's name is often attached to the technique, because he considered such a geometry in his theory of the dielectric constant, the method was first introduced by Barker and Watts in 1973. The effective pairwise potential becomes: U A B = q A q B [ 1 r A B + ( ε R F − 1 ) r A B 2 ( 2 ε R F + 1 ) r c 3 ] {\displaystyle U_{AB}=q_{A}q_{B}\left[{\frac {1}{r_{AB}}}+{\frac {(\varepsilon _{RF}-1)r_{AB}^{2}}{(2\varepsilon _{RF}+1)r_{c}^{3}}}\right]} where r c {\displaystyle r_{c}} is the cut-off radius. The reaction field in the center of the cavity is given by : E R F = 2 ( ε R F − 1 ) 2 ε R F + 1 M → r c 3 {\displaystyle E_{RF}={\frac {2(\varepsilon _{RF}-1)}{2\varepsilon _{RF}+1}}{\frac {\vec {M}}{r_{c}^{3}}}} where M → = ∑ μ i {\displaystyle {\vec {M}}=\sum \mu _{i}} is the total dipole moment of all the molecules in the cavity. The contribution to the potential energy of the molecule i {\displaystyle i} at the center of the cavity is − 1 / 2 μ i ⋅ E R F {\displaystyle -1/2\mu _{i}\cdot E_{RF}} and the torque on molecule i {\displaystyle i} is simply μ i × E R F {\displaystyle \mu _{i}\times E_{RF}} . When a molecule enters or leaves the sphere defined by the cut-off radius, there is a discontinuous jump in energy. When all of these jumps in energy are summed, they do not exactly cancel, leading to poor energy conservation, a deficiency found whenever a spherical cut-off is used. The situation can be improved by tapering the potential energy function to zero near the cut-off radius. Beyond a certain radius r t {\displaystyle r_{t}} the potential is multiplied by a tapering function f ( r ) {\displaystyle f(r)} . A simple choice is linear tapering with r t = .95 r c {\displaystyle r_{t}=.95r_{c}} , although better results may be found with more sophisticated tapering functions. Another potential difficulty of the reaction field method is that the dielectric constant must be known a priori. However, it turns out that in most cases dynamical properties are fairly insensitive to the choice of ε R F {\displaystyle \varepsilon _{RF}} . It can be put in by hand, or calculated approximately using any of a number of well-known relations between the dipole fluctuations inside the simulation box and the macroscopic dielectric constant. Another possible modification is to take into account the finite time required for the reaction field to respond to changes in the cavity. This "delayed reaction field method" was investigated by van Gunsteren, Berendsen and Rullmann in 1978. It was found to give better results—this makes sense, as without taking into account the delay, the reaction field is overestimated. However, the delayed method has additional difficulties with energy conservation and thus is not suitable for simulating an NVE ensemble. == Comparison with other techniques == The reaction field method is an alternative to the popular technique of Ewald summation. Today, Ewald summation is the usual technique of choice, but for many quantities of interest both techniques yield equivalent results. 
For example, in Monte Carlo simulations of liquid crystals, (using both the hard spherocylinder and Gay-Berne models) the results from the reaction field method and Ewald summation are consistent. However, the reaction field presents a considerable reduction in the computer time required. The reaction field should be applied carefully, and becomes complicated or impossible to implement for non-isotropic systems, such as systems dominated by large biomolecules or systems with liquid-vapour or liquid-solid coexistence. In section 5.5.5 of his book, Allen compares the reaction field with other methods, focusing on the simulation of the Stockmayer system (the simplest model for a dipolar fluid, such as water). The work of Adams, et al. (1979) showed that the reaction field produces results with thermodynamic quantities (volume, pressure and temperature) which are in good agreement with other methods, although pressure was slightly higher with the reaction field method compared to the Ewald-Kornfeld method (1.69 vs 1.52). The results show that macroscopic thermodynamic properties do not depend heavily on how long-range forces are treated. Similarly, single particle correlation functions do not depend heavily on the method employed. Several other results also show that the dielectric constant ϵ {\displaystyle \epsilon } can be well estimated with either the reaction field or a lattice summation technique. == References == == Further reading == Neumann, M.; Steinhauser, O. (1980). "The influence of boundary conditions used in machine simulations on the structure of polar systems". Molecular Physics. 39: 437–454. doi:10.1080/00268978000100361. Neumann, Martin; Steinhauser, Othmar; Pawley, G. Stuart (1984). "Consistent calculation of the static and frequency-dependent dielectric constant in computer simulations". Molecular Physics. 52: 97–113. doi:10.1080/00268978400101081. Baumketner, Andrij (2009). "Removing systematic errors in interionic potentials of mean force computed in molecular simulations using reaction-field-based electrostatics". The Journal of Chemical Physics. 130: 104106. Bibcode:2009JChPh.130j4106B. doi:10.1063/1.3081138. PMC 2671211. PMID 19292522. Reaction Field method
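The effective pairwise potential and the linear tapering discussed above can be written compactly. The Python sketch below works in reduced units (the Coulomb prefactor is absorbed into the charges, as in the formula given above), and the dielectric constant, cut-off, and taper onset are illustrative values.

def reaction_field_energy(qA, qB, r, eps_rf=78.0, r_c=1.2, taper_start=0.95):
    # Pairwise reaction-field energy with a simple linear taper between
    # taper_start*r_c and the cut-off r_c; zero beyond the cut-off
    if r >= r_c:
        return 0.0
    crf = (eps_rf - 1.0) / ((2.0 * eps_rf + 1.0) * r_c ** 3)
    u = qA * qB * (1.0 / r + crf * r ** 2)
    r_t = taper_start * r_c
    if r > r_t:
        u *= (r_c - r) / (r_c - r_t)          # taper smoothly to zero at the cut-off
    return u

# Example: opposite unit charges, water-like eps_RF = 78, cut-off 1.2 length units
for r in (0.3, 0.8, 1.15, 1.3):
    print(r, reaction_field_energy(+1.0, -1.0, r))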
Wikipedia/Reaction_field_method
Coarse-grained modeling, coarse-grained models, aim at simulating the behaviour of complex systems using their coarse-grained (simplified) representation. Coarse-grained models are widely used for molecular modeling of biomolecules at various granularity levels. A wide range of coarse-grained models have been proposed. They are usually dedicated to computational modeling of specific molecules: proteins, nucleic acids, lipid membranes, carbohydrates or water. In these models, molecules are represented not by individual atoms, but by "pseudo-atoms" approximating groups of atoms, such as whole amino acid residue. By decreasing the degrees of freedom much longer simulation times can be studied at the expense of molecular detail. Coarse-grained models have found practical applications in molecular dynamics simulations. Another case of interest is the simplification of a given discrete-state system, as very often descriptions of the same system at different levels of detail are possible. An example is given by the chemomechanical dynamics of a molecular machine, such as Kinesin. The coarse-grained modeling originates from work by Michael Levitt and Ariel Warshel in 1970s. Coarse-grained models are presently often used as components of multiscale modeling protocols in combination with reconstruction tools (from coarse-grained to atomistic representation) and atomistic resolution models. Atomistic resolution models alone are presently not efficient enough to handle large system sizes and simulation timescales. Coarse graining and fine graining in statistical mechanics addresses the subject of entropy S {\displaystyle S} , and thus the second law of thermodynamics. One has to realise that the concept of temperature T {\displaystyle T} cannot be attributed to an arbitrarily microscopic particle since this does not radiate thermally like a macroscopic or "black body". However, one can attribute a nonzero entropy S {\displaystyle S} to an object with as few as two states like a "bit" (and nothing else). The entropies of the two cases are called thermal entropy and von Neumann entropy respectively. They are also distinguished by the terms coarse grained and fine grained respectively. This latter distinction is related to the aspect spelled out above and is elaborated on below. The Liouville theorem (sometimes also called Liouville equation) d d t ( Δ q Δ p ) = 0 {\displaystyle {\frac {d}{dt}}(\Delta q\Delta p)=0} states that a phase space volume Γ {\displaystyle \Gamma } (spanned by q {\displaystyle q} and p {\displaystyle p} , here in one spatial dimension) remains constant in the course of time, no matter where the point q , p {\displaystyle q,p} contained in Δ q Δ p {\displaystyle \Delta q\Delta p} moves. This is a consideration in classical mechanics. In order to relate this view to macroscopic physics one surrounds each point q , p {\displaystyle q,p} e.g. with a sphere of some fixed volume - a procedure called coarse graining which lumps together points or states of similar behaviour. The trajectory of this sphere in phase space then covers also other points and hence its volume in phase space grows. The entropy S {\displaystyle S} associated with this consideration, whether zero or not, is called coarse grained entropy or thermal entropy. A large number of such systems, i.e. the one under consideration together with many copies, is called an ensemble. 
If these systems do not interact with each other or anything else, and each has the same energy E {\displaystyle E} , the ensemble is called a microcanonical ensemble. Each replica system appears with the same probability, and temperature does not enter. Now suppose we define a probability density ρ ( q i , p i , t ) {\displaystyle \rho (q_{i},p_{i},t)} describing the motion of the point q i , p i {\displaystyle q_{i},p_{i}} with phase space element Δ q i Δ p i {\displaystyle \Delta q_{i}\Delta p_{i}} . In the case of equilibrium or steady motion the equation of continuity implies that the probability density ρ {\displaystyle \rho } is independent of time t {\displaystyle t} . We take ρ i = ρ ( q i , p i ) {\displaystyle \rho _{i}=\rho (q_{i},p_{i})} as nonzero only inside the phase space volume V Γ {\displaystyle V_{\Gamma }} . One then defines the entropy S {\displaystyle S} by the relation S = − Σ i ρ i ln ⁡ ρ i , {\displaystyle S=-\Sigma _{i}\rho _{i}\ln \rho _{i},\;\;} where Σ i ρ i = 1. {\displaystyle \;\;\Sigma _{i}\rho _{i}=1.} Then,by maximisation for a given energy E {\displaystyle E} , i.e. linking δ S = 0 {\displaystyle \delta S=0} with δ {\displaystyle \delta } of the other sum equal to zero via a Lagrange multiplier λ {\displaystyle \lambda } , one obtains (as in the case of a lattice of spins or with a bit at each lattice point) V Γ = e ( λ + 1 ) = 1 ρ {\displaystyle V_{\Gamma }=e^{(\lambda +1)}={\frac {1}{\rho }}} {\displaystyle \;\;\;} and S = ln ⁡ V Γ {\displaystyle S=\ln V_{\Gamma }} , the volume of Γ {\displaystyle \Gamma } being proportional to the exponential of S. This is again a consideration in classical mechanics. In quantum mechanics the phase space becomes a space of states, and the probability density ρ {\displaystyle \rho } an operator with a subspace of states Γ {\displaystyle \Gamma } of dimension or number of states N Γ {\displaystyle N_{\Gamma }} specified by a projection operator P Γ {\displaystyle P_{\Gamma }} . Then the entropy S {\displaystyle S} is (obtained as above) S = − T r ρ ln ⁡ ρ = ln ⁡ N Γ , {\displaystyle S=-Tr\rho \ln \rho =\ln N_{\Gamma },} and is described as fine grained or von Neumann entropy. If N Γ = 1 {\displaystyle N_{\Gamma }=1} , the entropy vanishes and the system is said to be in a pure state. Here the exponential of S is proportional to the number of states. The microcanonical ensemble is again a large number of noninteracting copies of the given system and S {\displaystyle S} , energy E {\displaystyle E} etc. become ensemble averages. Now consider interaction of a given system with another one - or in ensemble terminology - the given system and the large number of replicas all immersed in a big one called a heat bath characterised by ρ {\displaystyle \rho } . Since the systems interact only via the heat bath, the individual systems of the ensemble can have different energies E i , E j , . . . {\displaystyle E_{i},E_{j},...} depending on which energy state E i , E j , . . . {\displaystyle E_{i},E_{j},...} they are in. This interaction is described as entanglement and the ensemble as canonical ensemble (the macrocanonical ensemble permits also exchange of particles). The interaction of the ensemble elements via the heat bath leads to temperature T {\displaystyle T} , as we now show. 
Considering two elements with energies E i , E j {\displaystyle E_{i},E_{j}} , the probability of finding these in the heat bath is proportional to ρ ( E i ) ρ ( E j ) {\displaystyle \rho (E_{i})\rho (E_{j})} , and this is proportional to ρ ( E i + E j ) {\displaystyle \rho (E_{i}+E_{j})} if we consider the binary system as a system in the same heat bath defined by the function ρ {\displaystyle \rho } . It follows that ρ ( E ) ∝ e − μ E {\displaystyle \rho (E)\propto e^{-\mu E}} (the only way to satisfy the proportionality), where μ {\displaystyle \mu } is a constant. Normalisation then implies ρ ( E i ) = e − μ E i Σ j e − μ E j , Σ i ρ ( E i ) = 1. {\displaystyle \rho (E_{i})={\frac {e^{-\mu E_{i}}}{\Sigma _{j}e^{-\mu E_{j}}}},\Sigma _{i}\rho (E_{i})=1.} Then in terms of ensemble averages S ¯ = − ln ⁡ ρ ¯ {\displaystyle {\overline {S}}=-{\overline {\ln \rho }}} , and μ ≡ 1 T , k B = 1 , {\displaystyle \mu \equiv {\frac {1}{T}},\;k_{B}=1,} or by comparison with the second law of thermodynamics. S ¯ {\displaystyle {\overline {S}}} is now the entanglement entropy or fine grained von Neumann entropy. This is zero if the system is in a pure state, and is nonzero when in a mixed (entangled) state. Above we considered a system immersed in another huge one called heat bath with the possibility of allowing heat exchange between them. Frequently one considers a different situation, i.e. two systems A and B with a small hole in the partition between them. Suppose B is originally empty but A contains an explosive device which fills A instantaneously with photons. Originally A and B have energies E A {\displaystyle E_{A}} and E B {\displaystyle E_{B}} respectively, and there is no interaction. Hence originally both are in pure quantum states and have zero fine grained entropies. Immediately after explosion A is filled with photons, the energy still being E A {\displaystyle E_{A}} and that of B also E B {\displaystyle E_{B}} (no photon has yet escaped). Since A is filled with photons, these obey a Planck distribution law and hence the coarse grained thermal entropy of A is nonzero (recall: lots of configurations of the photons in A, lots of states with one maximal), although the fine grained quantum mechanical entropy is still zero (same energy state), as also that of B. Now allow photons to leak slowly (i.e. with no disturbance of the equilibrium) from A to B. With fewer photons in A, its coarse grained entropy diminishes but that of B increases. This entanglement of A and B implies they are now quantum mechanically in mixed states, and so their fine grained entropies are no longer zero. Finally when all photons are in B, the coarse grained entropy of A as well as its fine grained entropy vanish and A is again in a pure state but with new energy. On the other hand B now has an increased thermal entropy, but since the entanglement is over it is quantum mechanically again in a pure state, its ground state, and that has zero fine grained von Neumann entropy. Consider B: In the course of the entanglement with A its fine grained or entanglement entropy started and ended in pure states (thus with zero entropies). Its coarse grained entropy, however, rose from zero to its final nonzero value. Roughly half way through the procedure the entanglement entropy of B reaches a maximum and then decreases to zero at the end. The classical coarse grained thermal entropy of the second law of thermodynamics is not the same as the (mostly smaller) quantum mechanical fine grained entropy. 
The difference between the coarse grained thermal entropy and the fine grained entanglement entropy is called information. As may be deduced from the foregoing arguments, this difference is roughly zero before the entanglement entropy (which is the same for A and B) attains its maximum. An example of coarse graining is provided by Brownian motion. == Software packages == Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) Extensible Simulation Package for Research on Soft Matter (ESPResSo) == References ==
Wikipedia/Coarse-grained_modeling
Mixed quantum-classical (MQC) dynamics is a class of computational theoretical chemistry methods tailored to simulate non-adiabatic (NA) processes in molecular and supramolecular chemistry. Such methods are characterized by: Propagation of nuclear dynamics through classical trajectories; Propagation of the electrons (or fast particles) through quantum methods; A feedback algorithm between the electronic and nuclear subsystems to recover nonadiabatic information. == Use of NA-MQC dynamics == In the Born-Oppenheimer approximation, the ensemble of electrons of a molecule or supramolecular system can have several discrete states. The potential energy of each of these electronic states depends on the position of the nuclei, forming multidimensional surfaces. Under usual conditions (room temperature, for instance), the molecular system is in the ground electronic state (the electronic state of lowest energy). In this stationary situation, nuclei and electrons are in equilibrium, and the molecule naturally vibrates near harmonically due to the zero-point energy. Particle collisions and photons with wavelengths in the range from visible to X-ray can promote the electrons to electronically excited states. Such events create a non-equilibrium between nuclei and electrons, which leads to an ultrafast response (picosecond scale) of the molecular system. During the ultrafast evolution, the nuclei may reach geometric configurations where the electronic states mix, allowing the system to transfer to another state spontaneously. These state transfers are nonadiabatic phenomena. Nonadiabatic dynamics is the field of computational chemistry that simulates such ultrafast nonadiabatic response. In principle, the problem can be exactly addressed by solving the time-dependent Schrödinger equation (TDSE) for all particles (nuclei and electrons). Methods like the multiconfigurational time-dependent Hartree (MCTDH) method have been developed to perform such a task. Nevertheless, they are limited to small systems with two dozen degrees of freedom due to the enormous difficulties of developing multidimensional potential energy surfaces and the costs of the numerical integration of the quantum equations. NA-MQC dynamics methods have been developed to reduce the burden of these simulations by profiting from the fact that the nuclear dynamics is near classical. Treating the nuclei classically allows simulating the molecular system in full dimensionality. The impact of the underlying assumptions depends on each particular NA-MQC method. Most NA-MQC dynamics methods have been developed to simulate internal conversion (IC), the nonadiabatic transfer between states of the same spin multiplicity. The methods have been extended, however, to deal with other types of processes like intersystem crossing (ISC; transfer between states of different multiplicities) and field-induced transfers. NA-MQC dynamics has often been used in theoretical investigations of photochemistry and femtochemistry, especially when time-resolved processes are relevant. == List of NA-MQC dynamics methods == NA-MQC dynamics is a general class of methods developed since the 1970s.
It encompasses: Trajectory surface hopping (TSH; FSSH for fewest switches surface hopping); Mean-field Ehrenfest dynamics (MFE); Coherent Switching with Decay of Mixing (CSDM; MFE with Non-Markovian decoherence and stochastic pointer state switch); Multiple spawning (AIMS for ab initio multiple spawning; FMS for full multiple spawning); Coupled-Trajectory Mixed Quantum-Classical Algorithm (CT-MQC); Mixed quantum−classical Liouville equation (QCLE); Mapping approach; Nonadiabatic Bohmian dynamics (NABDY); Multiple cloning; (AIMC for ab initio multiple cloning) Global Flux Surface Hopping (GFSH); Decoherence Induced Surface Hopping (DISH) == Integration of NA-MQC dynamics == === Classical trajectories === The classical trajectories can be integrated with conventional methods, as the Verlet algorithm. Such integration requires the forces acting on the nuclei. They are proportional to the gradient of the potential energy of the electronic states and can be efficiently computed with diverse electronic structure methods for excited states, like the multireference configuration interaction (MRCI) or the linear-response time-dependent density functional theory (TDDFT). In NA-MQC methods like FSSH or MFE, the trajectories are independent of each other. In such a case, they can be separately integrated and only grouped afterward for the statistical analysis of the results. In methods like CT-MQC or diverse TSH variants, the trajectories are coupled and must be integrated simultaneously. === Electronic subsystem === In NA-MQC dynamics, the electrons are usually treated by a local approximation of the TDSE, i.e., they depend only on the electronic forces and couplings at the instantaneous position of the nuclei. === Nonadiabatic algorithms === There are three basic algorithms to recover nonadiabatic information in NA-MQC methods: Spawning - new trajectories are created at regions of large nonadiabatic coupling. Hopping - trajectories are propagated on a single potential energy surface (PES), but they are allowed to change surface near regions of large nonadiabatic couplings. Averaging - trajectories are propagated on a weighted average of potential energy surfaces. The weights are determined by the amount of nonadiabatic mixing. == Relation to other nonadiabatic methods == NA-MQC dynamics are approximated methods to solve the time-dependent Schrödinger equation for a molecular system. Methods like TSH, in particular in the fewest switches surface hopping (FSSH) formulation, do not have an exact limit. Other methods like MS or CT-MQC can in principle deliver the exact non-relativistic solution. In the case of multiple spawning, it is hierarchically connected to MCTDH, while CT-MQC is connected to the exact factorization method. == Drawbacks in NA-MQC dynamics == The most common approach in NA-MQC dynamics is to compute the electronic properties on-the-fly, i.e., at each timestep of the trajectory integration. Such an approach has the advantage of not requiring pre-computed multidimensional potential energy surfaces. Nevertheless, the costs associated with the on-the-fly approach are significantly high, leading to a systematic level downgrade of the simulations. This downgrade has been shown to lead to qualitatively wrong results. The local approximation implied by the classical trajectories in NA-MQC dynamics also leads to failing in the description of non-local quantum effects, as tunneling and quantum interference. Some methods like MFE and FSSH are also affected by decoherence errors. 
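As an illustration of the classical-trajectory integration described above, the sketch below shows a single velocity Verlet step in Python (a generic example written for this overview, not code from any of the listed packages; the force function is a placeholder for the negative gradient of the active potential energy surface, which in practice comes from an electronic structure calculation).

import numpy as np

def velocity_verlet_step(x, v, masses, force, dt):
    # x, v: nuclear positions and velocities; force(x) stands in for -dV/dx
    # on the currently active potential energy surface.
    a = force(x) / masses                  # acceleration at time t
    x_new = x + v * dt + 0.5 * a * dt**2   # positions at t + dt
    a_new = force(x_new) / masses          # acceleration at t + dt
    v_new = v + 0.5 * (a + a_new) * dt     # velocities at t + dt
    return x_new, v_new

# Placeholder harmonic surface (not a real excited-state surface)
masses = np.array([1.0, 1.0])
force = lambda x: -x
x, v = np.array([1.0, -0.5]), np.zeros(2)
for _ in range(1000):
    x, v = velocity_verlet_step(x, v, masses, force, dt=0.01)

In surface hopping the same integrator is used, with the force switched to a different surface whenever a hop is accepted.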
New algorithms have been developed to include tunneling and decoherence effects. Global quantum effects can also be considered by applying quantum forces between trajectories. == Software for NA-MQC dynamics == Survey of NA-MQC dynamics implementations in public software. == References ==
Wikipedia/Mixed_quantum-classical_dynamics
Multiscale Green's function (MSGF) is a generalized and extended version of the classical Green's function (GF) technique for solving mathematical equations. The main application of the MSGF technique is in modeling of nanomaterials. These materials are very small – of the size of few nanometers. Mathematical modeling of nanomaterials requires special techniques and is now recognized to be an independent branch of science. A mathematical model is needed to calculate the displacements of atoms in a crystal in response to an applied static or time dependent force in order to study the mechanical and physical properties of nanomaterials. One specific requirement of a model for nanomaterials is that the model needs to be multiscale and provide seamless linking of different length scales. Green's function (GF) was originally formulated by the British mathematical physicist George Green in the year 1828 as a general technique for solution of operator equations. It has been extensively used in mathematical physics over the last almost two hundred years and applied to a variety of fields. Reviews of some applications of GFs such as for many body theory and Laplace equation are available in the Wikipedia. The GF based techniques are used for modeling of various physical processes in materials such as phonons, Electronic band structure and elastostatics. == Application of the MSGF method for modeling nanomaterials == The MSGF method is a relatively new GF technique for mathematical modeling of nanomaterials. Mathematical models are used for calculating the response of materials to an applied force in order to simulate their mechanical properties. The MSGF technique links different length scales in modeling of nanomaterials. Nanomaterials are of atomistic dimensions and need to be modeled at the length scales of nanometers. For example, a silicon nanowire, whose width is about five nanometers, contains just 10 – 12 atoms across its width. Another example is graphene and many new two-dimensional (2D) solids. These new materials are ultimate in thinness because they are just one or two atoms thick. Multiscale modeling is needed for such materials because their properties are determined by the discreteness of their atomistic arrangements as well as their overall dimensions. The MSGF method is multiscale in the sense that it links the response of materials to an applied force at atomistic scales to their response at the macroscopic scales. The response of materials at the macroscopic scales is calculated by using the continuum model of solids. In the continuum model, the discrete atomistic structure of solids is averaged out into a continuum. Properties of nanomaterials are sensitive to their atomistic structure as well as their overall dimensions. They are also sensitive to the macroscopic structure of the host material in which they are embedded. The MSGF method is used to model such composite systems. The MSGF method is also used for analyzing behavior of crystals containing lattice defects such as vacancies, interstitials, or foreign atoms. Study of these lattice defects is of interest as they play a role in materials technology. Presence of a defect in a lattice displaces the host atoms from their original position or the lattice gets distorted. This is shown in Fig 1 for a 1D lattice as an example. Atomistic scale modeling is needed to calculate this distortion near the defect, whereas the continuum model is used to calculate the distortion far away from the defect. 
The MSGF links these two scales seamlessly. == MSGF for nanomaterials == The MSGF model of nanomaterials accounts for multiparticles as well as multiscales in materials. It is an extension of the lattice statics Green’s function (LSGF) method that was originally formulated at the Atomic Energy Research Establishment Harwell in the U.K. in 1973. It is also referred to as the Harwell approach or the Tewary method in the literature. The LSGF method complements the molecular dynamics (MD) method for modeling multiparticle systems. The LSGF method is based upon the use of the Born von Karman (BvK) model and can be applied to different lattice structures and defects. The MSGF method is an extended version of the LSGF method and has been applied to many nanomaterials and 2D materials. At the atomistic scales, a crystal or a crystalline solid is represented by a collection of interacting atoms located at discrete sites on a geometric lattice. A perfect crystal consists of a regular and periodic geometric lattice. The perfect lattice has translation symmetry, which means that all the unit cells are identical. In a perfect periodic lattice, which is assumed to be infinite, all atoms are identical. At equilibrium each atom is assumed to be located at its lattice site. The force at any atom due to other atoms just cancels out, so the net force at each atom is zero. These conditions break down in a distorted lattice in which atoms get displaced from their positions of equilibrium. The lattice distortion may be caused by an externally applied force. The lattice can also be distorted by introducing a defect in the lattice or displacing an atom that disturbs the equilibrium configuration and induces a force on the lattice sites. This is shown in Fig. 1. The objective of the mathematical model is to calculate the resulting values of the atomic displacements. The GF in the MSGF method is calculated by minimizing the total energy of the lattice. The potential energy of the lattice is written as a Taylor series in the atomic displacements, truncated at the harmonic approximation, as follows: {\displaystyle W=-\sum _{L,a}f_{a}(L)u_{a}(L)+{\frac {1}{2}}\sum _{L,a}\sum _{L',b}K_{ab}(L,L')u_{a}(L)u_{b}(L')\qquad \qquad (1)} where L and L′ label the atoms, a and b denote the Cartesian coordinates, u denotes the atomic displacement, and −f and K are the first and second coefficients in the Taylor series. They are defined by {\displaystyle f_{a}(L)=-{\frac {\partial W}{\partial u_{a}(L)}},\qquad \qquad (2)} and {\displaystyle K_{ab}(L,L')={\frac {\partial ^{2}W}{\partial u_{a}(L)\,\partial u_{b}(L')}},\qquad \qquad (3)} where the derivatives are evaluated at zero displacements. The negative sign is introduced in the definition of f for convenience. Thus f(L) is a 3D vector that denotes the force at the atom L. Its three Cartesian components are denoted by fa(L) where a = x, y, or z. Similarly K(L,L′) is a 3x3 matrix, which is called the force-constant matrix between the atoms at L and L′. Its 9 elements are denoted by Kab(L,L′) for a, b = x, y, or z. At equilibrium, the energy W is minimum. Accordingly, the first derivative of W with respect to each u must be zero. This gives the following relation from Eq. (1):
{\displaystyle \sum _{L',b}K_{ab}(L,L')u_{b}(L')=f_{a}(L).\qquad \qquad (4)} It can be shown by direct substitution that the solution of Eq. (4) can be written as {\displaystyle u_{a}(L)=\sum _{L',b}G_{ab}(L,L')f_{b}(L')\qquad \qquad (5)} where G is defined by the following inversion relation {\displaystyle \sum _{L'',b''}K_{ab''}(L,L'')G_{b''b}(L'',L')=\delta (a,b)\delta (L,L').\qquad \qquad (6)} In Eq. (6), δ(m,n) is the discrete delta function of two discrete variables m and n. Similar to the case of the Dirac delta function for continuous variables, it is defined to be 1 if m = n and 0 otherwise. Equations (4)–(6) can be written in matrix notation as follows: {\displaystyle {\begin{aligned}Ku&=f&&(7)\\u&=Gf&&(8)\\{\text{and }}G&=K^{-1}&&(9)\end{aligned}}} The matrices K and G in the above equations are 3N × 3N square matrices and u and f are 3N-dimensional column vectors, where N is the total number of atoms in the lattice. The matrix G is the multiparticle GF and is referred to as the lattice statics Green’s function (LSGF). If G is known, the atomic displacements for all atoms can be calculated by using Eq. (8). One of the main objectives of modeling is the calculation of the atomistic displacements u caused by an applied force f. The displacements, in principle, are given by Eq. (8). However, this involves inversion of the matrix K, which is 3N x 3N. For any calculation of practical interest N is of the order of 10,000, and preferably a million for more realistic simulations. Inversion of such a large matrix is computationally expensive, and special techniques are needed for the calculation of the u’s. For regular periodic lattices, LSGF is one such technique. It consists of calculating G in terms of its Fourier transform, and is similar to the calculation of the phonon GF. The LSGF method has now been generalized to include the multiscale effects in the MSGF method. The MSGF method is capable of linking length scales seamlessly. This property has been used in developing a hybrid MSGF method that combines the GF and MD methods and has been applied to simulating less symmetric nanoinclusions such as quantum dots in semiconductors. For a perfect lattice without defects, the MSGF links the atomistic scales in the LSGF directly to the macroscopic scales through the continuum model. A perfect lattice has full translation symmetry, so all the atoms are equivalent. In this case any atom can be chosen as the origin and G(L,L') can be expressed in terms of a single index (L'-L) defined as {\displaystyle G(L,L')=G(0,L-L')=G(L'-L)\qquad \qquad (10)} The asymptotic limit of G(L), which satisfies Eq. (10), for large values of R(L) is given by {\displaystyle \lim _{R(L)\to \infty }[G(0,L)]\rightarrow G_{c}(x)+O(1/x^{4})\qquad \qquad (11)} where x = R(L) is the position vector of the atom L, and Gc(x) is the continuum Green's function (CGF), which is defined in terms of the elastic constants and is used in modeling conventional bulk materials at macroscales. In Eq. (11), O(1/xn) is the standard mathematical notation for a term of order 1/xn and higher. The magnitude of Gc(x) is O(1/x2).
The LSGF G(0,L) in this equation reduces smoothly and automatically to the CGF for large enough x as terms O(1/x4) become gradually small and negligible. This ensures the seamless linkage of the atomistic length scale to the macroscopic continuum scale. Equations (8) and (9) along with the limiting relation given by Eq. (11), form the basic equations for the MSGF. Equation (9) gives the LSGF, which is valid at the atomistic scales and Eq. (11) relates it to the CGF, which is valid at the macro continuum scales. This equation also shows that the LSGF reduces seamlessly to the CGF. == MSGF method for calculating the effect of defects and discontinuities in nanomaterials == If a lattice contains defects, its translation symmetry is broken. Consequently, it is not possible to express G in terms of a single distance variable R(L). Hence Eq. (10) is not valid anymore and the correspondence between the LSGF and the CGF, needed for their seamless linking breaks down. In such cases the MSGF links the lattice and the continuum scales by using the following procedure: If p denotes the change in the matrix K, caused by the defect(s), the force constant matrix K* for the defective lattice is written as K ∗ = K − p ( 12 ) {\displaystyle K^{*}=K-p\qquad \qquad (12)} As in the case for the perfect lattice in Eq. (9), the corresponding defect GF is defined as the inverse of the full K* matrix. Use of Eq. (12), then leads to the following Dyson’s equation for the defect LSGF: G ∗ = G + G p G ∗ ( 13 ) {\displaystyle G^{*}=G+GpG^{*}\qquad \qquad (13)} The MSGF method consists of solving Eq. (13) for G* by using the matrix partitioning technique or double Fourier transform. Once G* is known, the displacement vector is given by the following GF equation similar to Eq. (8): u= G* f (14) Equation (14) gives the desired solution, that is, the atomic displacements or the lattice distortion caused by the force f. However, it does not show the linkage of the lattice and the continuum multiple scales, because Eqs. (10) and (11) are not valid for the defect LSGF G*. The linkage between the lattice and the continuum model in case of lattice with defects is achieved by using an exact transformation described below. Using Eq.(13), Eq. (14) can be written in the following exactly equivalent form: u = Gf + G p G* f . (15) Use of Eq. (14) again on the right hand side of Eq. (15) gives, u = G f* (16) where f* = f + p u. (17) Note that Eq. (17) defines an effective force f* such that Eqs. (14) and (16) are exactly equivalent. Equation (16) expresses the atomic displacements u in terms of G, the perfect LSGF even for lattices with defects. The effect of the defects is included exactly in f*. The LSGF G is independent of f or f* and reduces to the CGF asymptotically and smoothly as given in Eq. (11). The effective force f* can be determined in a separate calculation by using an independent method if needed, and the lattice statics or the continuum model can be used for G. This is the basis of a hybrid model that combines MSGF and MD for simulating a germanium quantum dot in a silicon lattice. Equation (16) is the master equation of the MSGF method. It is truly multiscale. All the discrete atomistic contributions are included in f*. 
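As a concrete illustration of the perfect-lattice relations u = Gf and G = K⁻¹ that enter the master equation (16), the following Python sketch (a toy example constructed here, not taken from the MSGF literature) builds the force-constant matrix of a short one-dimensional chain of atoms joined by unit springs with fixed ends, inverts it to obtain the lattice statics Green's function, and computes the displacements produced by a unit point force.

import numpy as np

# Toy illustration of Eqs. (7)-(9): K u = f, G = K^{-1}, u = G f, for a
# 1D chain of N atoms joined by unit springs with fixed ends.
# (Real MSGF calculations use 3N x 3N force-constant matrices derived
#  from an interatomic potential model.)
N = 10
k = 1.0
K = np.zeros((N, N))
for i in range(N):
    K[i, i] = 2.0 * k                      # each atom is held by two springs
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k     # nearest-neighbour coupling

G = np.linalg.inv(K)                       # lattice statics Green's function, Eq. (9)
f = np.zeros(N)
f[N // 2] = 1.0                            # unit point force on the central atom
u = G @ f                                  # resulting displacements, Eq. (8)
print(np.round(u, 3))

The displacement is largest at the loaded atom and decreases towards the fixed ends, the discrete analogue of the continuum response described by the CGF.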
The Green's function G can be calculated independently, and it can be fully atomistic for nanomaterials or partly or fully continuum for macroscales, to account for surfaces and interfaces in material systems as needed. == GF as Physical Characteristic of Solids == Tewary, Quardokus and DelRio have suggested that the Green's function is not just a mathematical artefact, but a physical characteristic of the solid, which can be measured by using scanning probe microscopy. Their suggestion is based upon the fact that any process of measurement on a system requires quantifying the response of the system to an external probe, and the Green’s function gives the total response of the system to an applied probe. For example, if we want to measure the characteristics of a spring, we fix it at one end and apply a force f or f* at the other end. A measurement of the stretching u of the spring will then give the value of the Green’s function of the spring by using Eq. 14 or 16. In this example, the applied force is the probe and the stretching of the spring is its response to the probe. If the measured values of the Green’s functions of a solid are available, they will accurately characterize the response of the solid for engineering applications. For this reason, the Green’s function is also called the response function. == Causal Green's Function Molecular Dynamics == An important application of the MSGF method is in modeling temporal (time-dependent) processes in solids, especially in nanomaterials. This is needed in diverse applications like testing and characterization of materials, propagation of waves and heat in nanomaterials, and modeling of radiation damage in semiconductors. These processes need to be simulated over a wide range of times from femtoseconds to nanoseconds or even microseconds, which is a challenging multiscale problem for nanomaterials. It was shown by Tewary that the use of causal Green’s functions in molecular dynamics can significantly accelerate its temporal convergence. The new method, called CGFMD (Causal Green's Function Molecular Dynamics), is the temporal equivalent of the MSGF and is based upon the use of causal or retarded Green's functions. It has been applied to simulate the propagation of ripples in graphene, where it has been shown that the CGFMD can model time scales over 6 to 9 orders of magnitude at the atomistic level. At least in some idealized cases, such as the propagation of ripples in graphene, the CGFMD can bridge the time scales from femtoseconds to microseconds. The CGFMD has been refined and further developed in papers by Coluci, Dantas and Tewary. == See also == Microscale and macroscale models == References ==
Wikipedia/Multiscale_Green's_function
The net electrostatic force acting on a charged particle with index i {\displaystyle i} contained within a collection of particles is given as: F ( r ) = ∑ j ≠ i F ( r ) r ^ ; F ( r ) = q i q j 4 π ε 0 r 2 {\displaystyle \mathbf {F} (\mathbf {r} )=\sum _{j\neq i}F(r)\mathbf {\hat {r}} \,\,\,;\,\,F(r)={\frac {q_{i}q_{j}}{4\pi \varepsilon _{0}r^{2}}}} where r {\displaystyle \mathbf {r} } is the spatial coordinate, j {\displaystyle j} is a particle index, r {\displaystyle r} is the separation distance between particles i {\displaystyle i} and j {\displaystyle j} , r ^ {\displaystyle \mathbf {\hat {r}} } is the unit vector from particle j {\displaystyle j} to particle i {\displaystyle i} , F ( r ) {\displaystyle F(r)} is the force magnitude, and q i {\displaystyle q_{i}} and q j {\displaystyle q_{j}} are the charges of particles i {\displaystyle i} and j {\displaystyle j} , respectively. With the electrostatic force being proportional to r − 2 {\displaystyle r^{-2}} , individual particle-particle interactions are long-range in nature, presenting a challenging computational problem in the simulation of particulate systems. To determine the net forces acting on particles, the Ewald or Lekner summation methods are generally employed. One alternative and usually computationally faster technique based on the notion that interactions over large distances (e.g. > 1 nm) are insignificant to the net forces acting in certain systems is the method of spherical truncation. The equations for basic truncation are: F C U T ( r ) = { q i q j 4 π ε 0 r 2 for r ≤ r c 0 for r > r c . {\displaystyle \displaystyle F_{CUT}(r)={\begin{cases}{\frac {q_{i}q_{j}}{4\pi \varepsilon _{0}r^{2}}}&{\text{for }}r\leq r_{c}\\0&{\text{for }}r>r_{c}.\end{cases}}} where r c {\displaystyle r_{c}} is the cutoff distance. Simply applying this cutoff method introduces a discontinuity in the force at r c {\displaystyle r_{c}} that results in particles experiencing sudden impulses when other particles cross the boundary of their respective interaction spheres. In the particular case of electrostatic forces, as the force magnitude is large at the boundary, this unphysical feature can compromise simulation accuracy. A way to correct this problem is to shift the force to zero at r c {\displaystyle r_{c}} , thus removing the discontinuity. This can be accomplished with a variety of functions, but the most simple/computationally efficient approach is to simply subtract the value of the electrostatic force magnitude at the cutoff distance as such: F S F ( r ) = { q i q j 4 π ε 0 r 2 − q i q j 4 π ε 0 r c 2 for r ≤ r c 0 for r > r c . {\displaystyle \displaystyle F_{SF}(r)={\begin{cases}{\frac {q_{i}q_{j}}{4\pi \varepsilon _{0}r^{2}}}-{\frac {q_{i}q_{j}}{4\pi \varepsilon _{0}r_{c}^{2}}}&{\text{for }}r\leq r_{c}\\0&{\text{for }}r>r_{c}.\end{cases}}} As mentioned before, the shifted force (SF) method is generally suited for systems that do not have net electrostatic interactions that are long-range in nature. This is the case for condensed systems that show electric-field screening effects. Note that anisotropic systems (e.g. interfaces) may not be accurately simulated with the SF method, although an adaption of the SF method for interfaces has been recently suggested. Additionally, note that certain system properties (e.g. energy-dependent observables) will be more greatly influenced by the use of the SF method than others. 
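The truncated and shifted-force expressions above translate directly into code. The following Python sketch (an illustrative example in reduced units with 4πε0 = 1; the function names are ours) evaluates the pairwise force magnitude under both schemes, showing that the SF form goes continuously to zero at the cutoff while the plain cutoff does not.

import numpy as np

def coulomb_cut(qi, qj, r, rc):
    # Plain spherical truncation: discontinuous at r = rc.
    return np.where(r <= rc, qi * qj / r**2, 0.0)

def coulomb_sf(qi, qj, r, rc):
    # Shifted force: the value at the cutoff is subtracted, so F -> 0 at r = rc.
    return np.where(r <= rc, qi * qj / r**2 - qi * qj / rc**2, 0.0)

r = np.linspace(0.5, 1.5, 11)
print(coulomb_cut(1.0, -1.0, r, rc=1.0))
print(coulomb_sf(1.0, -1.0, r, rc=1.0))    # tends smoothly to zero at the cutoff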
It is not safe to assume, without reasonable argument, that the SF method can be used to accurately determine a certain property for a given system. If the accuracy of the SF method needs to be tested, this may be done by testing for convergence (i.e. showing that simulation results do not significantly change with increasing cutoff) or by comparing with results obtained through other electrostatics techniques (such as Ewald summation) that are known to perform well. As a rough rule of thumb, results obtained with the SF method tend to be sufficiently accurate when the cutoff is at least five times larger than the distance of the near-neighbor interactions. With the SF method, a discontinuity is still present in the derivative of the force, and for ionic liquids it may be preferable to further alter the force equation so as to remove that discontinuity. == References ==
Wikipedia/Shifted_force_method
Protein structure is the three-dimensional arrangement of atoms in an amino acid-chain molecule. Proteins are polymers – specifically polypeptides – formed from sequences of amino acids, which are the monomers of the polymer. A single amino acid monomer may also be called a residue, which indicates a repeating unit of a polymer. Proteins form by amino acids undergoing condensation reactions, in which the amino acids lose one water molecule per reaction in order to attach to one another with a peptide bond. By convention, a chain under 30 amino acids is often identified as a peptide, rather than a protein. To be able to perform their biological function, proteins fold into one or more specific spatial conformations driven by a number of non-covalent interactions, such as hydrogen bonding, ionic interactions, Van der Waals forces, and hydrophobic packing. To understand the functions of proteins at a molecular level, it is often necessary to determine their three-dimensional structure. This is the topic of the scientific field of structural biology, which employs techniques such as X-ray crystallography, NMR spectroscopy, cryo-electron microscopy (cryo-EM) and dual polarisation interferometry, to determine the structure of proteins. Protein structures range in size from tens to several thousand amino acids. By physical size, proteins are classified as nanoparticles, between 1–100 nm. Very large protein complexes can be formed from protein subunits. For example, many thousands of actin molecules assemble into a microfilament. A protein usually undergoes reversible structural changes in performing its biological function. The alternative structures of the same protein are referred to as different conformations, and transitions between them are called conformational changes. == Levels of protein structure == There are four distinct levels of protein structure. === Primary structure === The primary structure of a protein refers to the sequence of amino acids in the polypeptide chain. The primary structure is held together by peptide bonds that are made during the process of protein biosynthesis. The two ends of the polypeptide chain are referred to as the carboxyl terminus (C-terminus) and the amino terminus (N-terminus) based on the nature of the free group on each extremity. Counting of residues always starts at the N-terminal end (NH2-group), which is the end where the amino group is not involved in a peptide bond. The primary structure of a protein is determined by the gene corresponding to the protein. A specific sequence of nucleotides in DNA is transcribed into mRNA, which is read by the ribosome in a process called translation. The sequence of amino acids in insulin was discovered by Frederick Sanger, establishing that proteins have defining amino acid sequences. The sequence of a protein is unique to that protein, and defines the structure and function of the protein. The sequence of a protein can be determined by methods such as Edman degradation or tandem mass spectrometry. Often, however, it is read directly from the sequence of the gene using the genetic code. It is strictly recommended to use the words "amino acid residues" when discussing proteins because when a peptide bond is formed, a water molecule is lost, and therefore proteins are made up of amino acid residues. Post-translational modifications such as phosphorylations and glycosylations are usually also considered a part of the primary structure, and cannot be read from the gene. 
For example, insulin is composed of 51 amino acids in 2 chains. One chain has 30 amino acids, and the other has 21 amino acids. === Secondary structure === Secondary structure refers to highly regular local sub-structures on the actual polypeptide backbone chain. Two main types of secondary structure, the α-helix and the β-strand or β-sheets, were suggested in 1951 by Linus Pauling. These secondary structures are defined by patterns of hydrogen bonds between the main-chain peptide groups. They have a regular geometry, being constrained to specific values of the dihedral angles ψ and φ on the Ramachandran plot. Both the α-helix and the β-sheet represent a way of saturating all the hydrogen bond donors and acceptors in the peptide backbone. Some parts of the protein are ordered but do not form any regular structures. They should not be confused with random coil, an unfolded polypeptide chain lacking any fixed three-dimensional structure. Several sequential secondary structures may form a "supersecondary unit". === Tertiary structure === Tertiary structure refers to the three-dimensional structure created by a single protein molecule (a single polypeptide chain). It may include one or several domains. The α-helices and β-pleated-sheets are folded into a compact globular structure. The folding is driven by the non-specific hydrophobic interactions, the burial of hydrophobic residues from water, but the structure is stable only when the parts of a protein domain are locked into place by specific tertiary interactions, such as salt bridges, hydrogen bonds, and the tight packing of side chains and disulfide bonds. The disulfide bonds are extremely rare in cytosolic proteins, since the cytosol (intracellular fluid) is generally a reducing environment. === Quaternary structure === Quaternary structure is the three-dimensional structure consisting of the aggregation of two or more individual polypeptide chains (subunits) that operate as a single functional unit (multimer). The resulting multimer is stabilized by the same non-covalent interactions and disulfide bonds as in tertiary structure. There are many possible quaternary structure organisations. Complexes of two or more polypeptides (i.e. multiple subunits) are called multimers. Specifically it would be called a dimer if it contains two subunits, a trimer if it contains three subunits, a tetramer if it contains four subunits, a pentamer if it contains five subunits, and so forth. The subunits are frequently related to one another by symmetry operations, such as a 2-fold axis in a dimer. Multimers made up of identical subunits are referred to with a prefix of "homo-" and those made up of different subunits are referred to with a prefix of "hetero-", for example, a heterotetramer, such as the two alpha and two beta chains of hemoglobin. === Homomers === An assemblage of multiple copies of a particular polypeptide chain can be described as a homomer, multimer or oligomer. Bertolini et al. in 2021 presented evidence that homomer formation may be driven by interaction between nascent polypeptide chains as they are translated from mRNA by nearby adjacent ribosomes. Hundreds of proteins have been identified as being assembled into homomers in human cells. The process of assembly is often initiated by the interaction of the N-terminal region of polypeptide chains. Evidence, based on intragenic complementation, that numerous gene products form homomers (multimers) in a variety of organisms was reviewed in 1965.
== Domains, motifs, and folds in protein structure == Proteins are frequently described as consisting of several structural units. These units include domains, motifs, and folds. Despite the fact that there are about 100,000 different proteins expressed in eukaryotic systems, there are many fewer different domains, structural motifs and folds. === Structural domain === A structural domain is an element of the protein's overall structure that is self-stabilizing and often folds independently of the rest of the protein chain. Many domains are not unique to the protein products of one gene or one gene family but instead appear in a variety of proteins. Domains often are named and singled out because they figure prominently in the biological function of the protein they belong to; for example, the "calcium-binding domain of calmodulin". Because they are independently stable, domains can be "swapped" by genetic engineering between one protein and another to make chimera proteins. A conservative combination of several domains that occur in different proteins, such as protein tyrosine phosphatase domain and C2 domain pair, was called "a superdomain" that may evolve as a single unit. === Structural and sequence motifs === The structural and sequence motifs refer to short segments of protein three-dimensional structure or amino acid sequence that were found in a large number of different proteins === Supersecondary structure === Tertiary protein structures can have multiple secondary elements on the same polypeptide chain. The supersecondary structure refers to a specific combination of secondary structure elements, such as β-α-β units or a helix-turn-helix motif. Some of them may be also referred to as structural motifs. === Protein fold === A protein fold refers to the general protein architecture, like a helix bundle, β-barrel, Rossmann fold or different "folds" provided in the Structural Classification of Proteins database. A related concept is protein topology. == Protein dynamics and conformational ensembles == Proteins are not static objects, but rather populate ensembles of conformational states. Transitions between these states typically occur on nanoscales, and have been linked to functionally relevant phenomena such as allosteric signaling and enzyme catalysis. Protein dynamics and conformational changes allow proteins to function as nanoscale biological machines within cells, often in the form of multi-protein complexes. Examples include motor proteins, such as myosin, which is responsible for muscle contraction, kinesin, which moves cargo inside cells away from the nucleus along microtubules, and dynein, which moves cargo inside cells towards the nucleus and produces the axonemal beating of motile cilia and flagella. "[I]n effect, the [motile cilium] is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines...Flexible linkers allow the mobile protein domains connected by them to recruit their binding partners and induce long-range allostery via protein domain dynamics. " Proteins are often thought of as relatively stable tertiary structures that experience conformational changes after being affected by interactions with other proteins or as a part of enzymatic activity. However, proteins may have varying degrees of stability, and some of the less stable variants are intrinsically disordered proteins. These proteins exist and function in a relatively 'disordered' state lacking a stable tertiary structure. 
As a result, they are difficult to describe by a single fixed tertiary structure. Conformational ensembles have been devised as a way to provide a more accurate and 'dynamic' representation of the conformational state of intrinsically disordered proteins. Protein ensemble files are a representation of a protein that can be considered to have a flexible structure. Creating these files requires determining which of the various theoretically possible protein conformations actually exist. One approach is to apply computational algorithms to the protein data in order to try to determine the most likely set of conformations for an ensemble file. There are multiple methods for preparing data for the Protein Ensemble Database that fall into two general methodologies – pool and molecular dynamics (MD) approaches (diagrammed in the figure). The pool-based approach uses the protein's amino acid sequence to create a massive pool of random conformations. This pool is then subjected to more computational processing that creates a set of theoretical parameters for each conformation based on the structure. Conformational subsets from this pool whose average theoretical parameters closely match known experimental data for this protein are selected. The alternative molecular dynamics approach takes multiple random conformations at a time and subjects all of them to experimental data. Here the experimental data serve as limitations placed on the conformations (e.g. known distances between atoms). Only conformations that manage to remain within the limits set by the experimental data are accepted. This approach often applies large amounts of experimental data to the conformations, which is a very computationally demanding task. Conformational ensembles have been generated for a number of highly dynamic and partially unfolded proteins, such as Sic1/Cdc4, p15 PAF, MKK7, Beta-synuclein and P27. == Protein folding == As it is translated, a polypeptide exits the ribosome mostly as a random coil and folds into its native state. The final structure of the protein chain is generally assumed to be determined by its amino acid sequence (Anfinsen's dogma). == Protein stability == Thermodynamic stability of proteins represents the free energy difference between the folded and unfolded protein states. This free energy difference is very sensitive to temperature, hence a change in temperature may result in unfolding or denaturation. Protein denaturation may result in loss of function and loss of the native state. The free energy of stabilization of soluble globular proteins typically does not exceed 50 kJ/mol. Taking into consideration the large number of hydrogen bonds that take place for the stabilization of secondary structures, and the stabilization of the inner core through hydrophobic interactions, the free energy of stabilization emerges as a small difference between large numbers. == Protein structure determination == Around 90% of the protein structures available in the Protein Data Bank have been determined by X-ray crystallography. This method allows one to measure the three-dimensional (3-D) density distribution of electrons in the protein, in the crystallized state, and thereby infer the 3-D coordinates of all the atoms to a certain resolution. Roughly 7% of the known protein structures have been obtained by nuclear magnetic resonance (NMR) techniques. For larger protein complexes, cryo-electron microscopy can determine protein structures.
The resolution is typically lower than that of X-ray crystallography or NMR, but the maximum resolution is steadily increasing. This technique is still particularly valuable for very large protein complexes such as virus coat proteins and amyloid fibers. General secondary structure composition can be determined via circular dichroism. Vibrational spectroscopy can also be used to characterize the conformation of peptides, polypeptides, and proteins. Two-dimensional infrared spectroscopy has become a valuable method to investigate the structures of flexible peptides and proteins that cannot be studied with other methods. A more qualitative picture of protein structure is often obtained by proteolysis, which is also useful to screen for more crystallizable protein samples. Novel implementations of this approach, including fast parallel proteolysis (FASTpp), can probe the structured fraction and its stability without the need for purification. Once a protein's structure has been experimentally determined, further detailed studies can be done computationally, using molecular dynamics simulations of that structure. == Protein structure databases == A protein structure database is a database that is modeled around the various experimentally determined protein structures. The aim of most protein structure databases is to organize and annotate the protein structures, providing the biological community access to the experimental data in a useful way. Data included in protein structure databases often includes 3D coordinates as well as experimental information, such as unit cell dimensions and angles for structures determined by X-ray crystallography. Though most instances, in this case either proteins or specific structure determinations of a protein, also contain sequence information and some databases even provide means for performing sequence-based queries, the primary attribute of a structure database is structural information, whereas sequence databases focus on sequence information, and contain no structural information for the majority of entries. Protein structure databases are critical for many efforts in computational biology such as structure-based drug design, both in developing the computational methods used and in providing a large experimental dataset used by some methods to provide insights about the function of a protein. == Structural classifications of proteins == Protein structures can be grouped based on their structural similarity, topological class or a common evolutionary origin. The Structural Classification of Proteins database and CATH database provide two different structural classifications of proteins. When the structural similarity is large the two proteins have possibly diverged from a common ancestor, and shared structure between proteins is considered evidence of homology. Structure similarity can then be used to group proteins together into protein superfamilies. If shared structure is significant but the fraction shared is small, the fragment shared may be the consequence of a more dramatic evolutionary event such as horizontal gene transfer, and joining proteins sharing these fragments into protein superfamilies is no longer justified. Topology of a protein can be used to classify proteins as well. Knot theory and circuit topology are two topology frameworks developed for classification of protein folds based on chain crossing and intrachain contacts respectively.
== Computational prediction of protein structure == The generation of a protein sequence is much easier than the determination of a protein structure. However, the structure of a protein gives much more insight into the function of the protein than its sequence. Therefore, a number of methods for the computational prediction of protein structure from its sequence have been developed. Ab initio prediction methods use just the sequence of the protein. Threading and homology modeling methods can build a 3-D model for a protein of unknown structure from experimental structures of evolutionarily related proteins, called a protein family. == See also == Biomolecular structure Gene structure Nucleic acid structure PCRPi-DB Ribbon diagram 3D schematic representation of proteins == References == == Further reading == 50 Years of Protein Structure Determination Timeline - National Institute of General Medical Sciences at NIH == External links == Media related to Protein structures at Wikimedia Commons Protein Structure at drugdesign.org
Wikipedia/Protein_structure
Smart Materials and Structures is a monthly peer-reviewed scientific journal covering technical advances in smart materials, systems and structures, including intelligent systems, sensing and actuation, adaptive structures, and active control. The initial editors-in-chief starting in 1992 were Vijay K. Varadan (Pennsylvania State University), Gareth J. Knowles (Grumman Corporation), and Richard O. Claus (Virginia Tech); in 2008 Ephrahim Garcia (Cornell University) took over as editor-in-chief, serving until 2014. Christopher S. Lynch (University of California, Los Angeles) assumed the position of editor-in-chief in 2015 and was succeeded in 2023 by Alper Erturk (Georgia Institute of Technology), who serves as the current editor-in-chief. == Abstracting and indexing == The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2023 impact factor of 3.7. == References == == External links == Official website
Wikipedia/Smart_Materials_and_Structures
Journal of Physics: Condensed Matter is a weekly peer-reviewed scientific journal established in 1989 and published by IOP Publishing. The journal covers all areas of condensed matter physics, including soft matter and nanostructures. The editor-in-chief is Gianfranco Pacchioni (University of Milano-Bicocca). The journal was formed by the merger of Journal of Physics C: Solid State Physics and Journal of Physics F: Metal Physics in 1989. == Abstracting and indexing == The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2023 impact factor of 2.3. == References == == External links == Official website
Wikipedia/Journal_of_Physics:_Condensed_Matter
Physics-Uspekhi is a peer-reviewed scientific journal. It is an English translation of the Russian journal of physics, Uspekhi Fizicheskikh Nauk (Russian: Успехи физических наук, Advances in Physical Sciences), which was established in 1918. The journal publishes long review papers which are intended to generalize and summarize previously published results, making them easier to use and to understand. The journal covers all topics of modern physics. The English version has existed since 1958, first under the name Soviet Physics Uspekhi and after 1993 as Physics-Uspekhi. The year 2008 marked the journal's 90th anniversary, which was observed with a jubilee retrospective. The founder of the journal, Eduard Shpolsky, was editor-in-chief from 1918 until his death in 1975. Vitaly Ginzburg, connected with the journal since before World War II, was appointed editor-in-chief in 1998. In his 2006 Nobel autobiography, Ginzburg called it "a good and useful journal" and credited its "maintenance of the highest level" to long-term editorial manager M. S. Aksentyeva. == Abstracting and indexing == The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.361. == References == == External links == Official website (in English) Official website (in Russian)
Wikipedia/Soviet_Physics_Uspekhi
Fourier transform infrared spectroscopy (FTIR) is a technique used to obtain an infrared spectrum of absorption or emission of a solid, liquid, or gas. An FTIR spectrometer simultaneously collects high-resolution spectral data over a wide spectral range. This confers a significant advantage over a dispersive spectrometer, which measures intensity over a narrow range of wavelengths at a time. The term Fourier transform infrared spectroscopy originates from the fact that a Fourier transform (a mathematical process) is required to convert the raw data into the actual spectrum. == Conceptual introduction == The goal of absorption spectroscopy techniques (FTIR, ultraviolet-visible ("UV-vis") spectroscopy, etc.) is to measure how much light a sample absorbs at each wavelength. The most straightforward way to do this, the "dispersive spectroscopy" technique, is to shine a monochromatic light beam at a sample, measure how much of the light is absorbed, and repeat for each different wavelength. (This is how some UV–vis spectrometers work, for example.) Fourier transform spectroscopy is a less intuitive way to obtain the same information. Rather than shining a monochromatic beam of light (a beam composed of only a single wavelength) at the sample, this technique shines a beam containing many frequencies of light at once and measures how much of that beam is absorbed by the sample. Next, the beam is modified to contain a different combination of frequencies, giving a second data point. This process is rapidly repeated many times over a short time span. Afterwards, a computer takes all this data and works backward to infer what the absorption is at each wavelength. The beam described above is generated by starting with a broadband light source—one containing the full spectrum of wavelengths to be measured. The light shines into a Michelson interferometer—a certain configuration of mirrors, one of which is moved by a motor. As this mirror moves, each wavelength of light in the beam is periodically blocked, transmitted, blocked, transmitted, by the interferometer, due to wave interference. Different wavelengths are modulated at different rates, so that at each moment or mirror position the beam coming out of the interferometer has a different spectrum. As mentioned, computer processing is required to turn the raw data (light absorption for each mirror position) into the desired result (light absorption for each wavelength). The processing required turns out to be a common algorithm called the Fourier transform. The Fourier transform converts one domain (in this case displacement of the mirror in cm) into its inverse domain (wavenumbers in cm−1). The raw data is called an "interferogram". == History == The first low-cost spectrophotometer capable of recording an infrared spectrum was the Perkin-Elmer Infracord produced in 1957. This instrument covered the wavelength range from 2.5 μm to 15 μm (wavenumber range 4,000 cm−1 to 660 cm−1). The lower wavelength limit was chosen to encompass the highest known vibration frequency due to a fundamental molecular vibration. The upper limit was imposed by the fact that the dispersing element was a prism made from a single crystal of rock-salt (sodium chloride), which becomes opaque at wavelengths longer than about 15 μm; this spectral region became known as the rock-salt region. Later instruments used potassium bromide prisms to extend the range to 25 μm (400 cm−1) and caesium iodide 50 μm (200 cm−1). 
The region beyond 50 μm (200 cm−1) became known as the far-infrared region; at very long wavelengths it merges into the microwave region. Measurements in the far infrared needed the development of accurately ruled diffraction gratings to replace the prisms as dispersing elements, since salt crystals are opaque in this region. More sensitive detectors than the bolometer were required because of the low energy of the radiation. One such was the Golay detector. An additional issue is the need to exclude atmospheric water vapour because water vapour has an intense pure rotational spectrum in this region. Far-infrared spectrophotometers were cumbersome, slow and expensive. The advantages of the Michelson interferometer were well-known, but considerable technical difficulties had to be overcome before a commercial instrument could be built. Also an electronic computer was needed to perform the required Fourier transform, and this only became practicable with the advent of minicomputers, such as the PDP-8, which became available in 1965. Digilab pioneered the world's first commercial FTIR spectrometer (Model FTS-14) in 1969. Digilab FTIRs are now a part of Agilent Technologies's molecular product line after Agilent acquired spectroscopy business from Varian. == Michelson interferometer == In a Michelson interferometer adapted for FTIR, light from the polychromatic infrared source, approximately a black-body radiator, is collimated and directed to a beam splitter. Ideally 50% of the light is refracted towards the fixed mirror and 50% is transmitted towards the moving mirror. Light is reflected from the two mirrors back to the beam splitter and some fraction of the original light passes into the sample compartment. There, the light is focused on the sample. On leaving the sample compartment the light is refocused on to the detector. The difference in optical path length between the two arms to the interferometer is known as the retardation or optical path difference (OPD). An interferogram is obtained by varying the OPD and recording the signal from the detector for various values of the OPD. The form of the interferogram when no sample is present depends on factors such as the variation of source intensity and splitter efficiency with wavelength. This results in a maximum at zero OPD, when there is constructive interference at all wavelengths, followed by series of "wiggles". The position of zero OPD is determined accurately by finding the point of maximum intensity in the interferogram. When a sample is present the background interferogram is modulated by the presence of absorption bands in the sample. Commercial spectrometers use Michelson interferometers with a variety of scanning mechanisms to generate the path difference. Common to all these arrangements is the need to ensure that the two beams recombine exactly as the system scans. The simplest systems have a plane mirror that moves linearly to vary the path of one beam. In this arrangement the moving mirror must not tilt or wobble as this would affect how the beams overlap as they recombine. Some systems incorporate a compensating mechanism that automatically adjusts the orientation of one mirror to maintain the alignment. Arrangements that avoid this problem include using cube corner reflectors instead of plane mirrors as these have the property of returning any incident beam in a parallel direction regardless of orientation. Systems where the path difference is generated by a rotary movement have proved very successful. 
One common system incorporates a pair of parallel mirrors in one beam that can be rotated to vary the path without displacing the returning beam. Another is the double pendulum design where the path in one arm of the interferometer increases as the path in the other decreases. A quite different approach involves moving a wedge of an IR-transparent material such as KBr into one of the beams. Increasing the thickness of KBr in the beam increases the optical path because the refractive index is higher than that of air. One limitation of this approach is that the variation of refractive index over the wavelength range limits the accuracy of the wavelength calibration. == Measuring and processing the interferogram == The interferogram has to be measured from zero path difference to a maximum length that depends on the resolution required. In practice the scan can be on either side of zero, resulting in a double-sided interferogram. Mechanical design limitations may mean that for the highest resolution the scan runs to the maximum OPD on one side of zero only. The interferogram is converted to a spectrum by Fourier transformation. This requires it to be stored in digital form as a series of values at equal intervals of the path difference between the two beams. To measure the path difference a laser beam is sent through the interferometer, generating a sinusoidal signal where the separation between successive maxima is equal to the wavelength of the laser (typically a 633 nm HeNe laser is used). This can trigger an analog-to-digital converter to measure the IR signal each time the laser signal passes through zero. Alternatively, the laser and IR signals can be measured synchronously at smaller intervals, with the IR signal at points corresponding to the laser signal zero crossings being determined by interpolation. This approach allows the use of analog-to-digital converters that are more accurate and precise than converters that can be triggered, resulting in lower noise. The result of Fourier transformation is a spectrum of the signal at a series of discrete wavelengths. The range of wavelengths that can be used in the calculation is limited by the separation of the data points in the interferogram. The shortest wavelength that can be recognized is twice the separation between these data points. For example, with one point per wavelength of a HeNe reference laser at 0.633 μm (15,800 cm−1) the shortest wavelength would be 1.266 μm (7,900 cm−1). Because of aliasing, any energy at shorter wavelengths would be interpreted as coming from longer wavelengths and so has to be minimized optically or electronically. The spectral resolution, i.e. the separation between wavelengths that can be distinguished, is determined by the maximum OPD. The wavelengths used in calculating the Fourier transform are such that an exact number of wavelengths fit into the length of the interferogram from zero to the maximum OPD as this makes their contributions orthogonal. This results in a spectrum with points separated by equal frequency intervals. For a maximum path difference d, adjacent wavelengths λ1 and λ2 will have n and (n+1) cycles, respectively, in the interferogram, so that d = nλ1 = (n+1)λ2. The corresponding frequencies (wavenumbers) are ν1 = n/d and ν2 = (n+1)/d, and their separation ν2 − ν1 = 1/d is the inverse of the maximum OPD. For example, a maximum OPD of 2 cm results in a separation of 0.5 cm−1. This is the spectral resolution in the sense that the value at one point is independent of the values at adjacent points. 
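As a rough numerical illustration of these sampling and resolution relationships, the following Python snippet simulates a single-sided interferogram sampled once per fringe of a 633 nm reference laser over a 2 cm maximum OPD and recovers the spectrum with a fast Fourier transform. This sketch is not part of the original article; the two synthetic spectral lines, their intensities, and the variable names are arbitrary illustrative choices.

```python
# Minimal sketch: synthesize an idealized interferogram for a two-line spectrum,
# sample it once per reference-laser fringe, and recover the spectrum by FFT.
import numpy as np

laser_wavelength_cm = 633e-7        # 633 nm HeNe reference laser, expressed in cm
dx = laser_wavelength_cm            # one sample per laser wavelength of OPD
max_opd_cm = 2.0                    # maximum optical path difference (cm)

n = int(max_opd_cm / dx)
opd = np.arange(n) * dx             # single-sided interferogram: OPD from 0 to ~2 cm

# Synthetic spectrum: two lines (wavenumbers in cm^-1) with arbitrary intensities
lines_cm1 = np.array([1600.0, 1650.0])
intensities = np.array([1.0, 0.5])

# Idealized interferogram: a sum of cosines, one per spectral line
interferogram = (intensities[None, :] *
                 np.cos(2 * np.pi * lines_cm1[None, :] * opd[:, None])).sum(axis=1)

spectrum = np.abs(np.fft.rfft(interferogram))      # magnitude spectrum
wavenumbers = np.fft.rfftfreq(n, d=dx)             # wavenumber axis in cm^-1

print(f"point separation   ~ {1.0 / (n * dx):.2f} cm^-1")   # ~0.5 cm^-1 for 2 cm OPD
print(f"folding wavenumber ~ {1.0 / (2 * dx):.0f} cm^-1")   # ~7900 cm^-1 for 633 nm sampling
print("recovered peaks near:", np.sort(wavenumbers[spectrum.argsort()[-2:]]))
```

With these settings the printed values match the figures quoted above: a point separation of about 0.5 cm−1 for a 2 cm maximum OPD, and a folding (Nyquist) wavenumber of about 7,900 cm−1 for sampling once per laser wavelength.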
Most instruments can be operated at different resolutions by choosing different OPDs. Instruments for routine analyses typically have a best resolution of around 0.5 cm−1, while spectrometers have been built with resolutions as high as 0.001 cm−1, corresponding to a maximum OPD of 10 m. The point in the interferogram corresponding to zero path difference has to be identified, commonly by assuming it is where the maximum signal occurs. This so-called centerburst is not always symmetrical in real-world spectrometers, so a phase correction may have to be calculated. The interferogram signal decays as the path difference increases, the rate of decay being inversely related to the width of features in the spectrum. If the OPD is not large enough to allow the interferogram signal to decay to a negligible level, there will be unwanted oscillations or sidelobes associated with the features in the resulting spectrum. To reduce these sidelobes the interferogram is usually multiplied by a function that approaches zero at the maximum OPD. This so-called apodization reduces the amplitude of any sidelobes and also the noise level at the expense of some reduction in resolution. For rapid calculation the number of points in the interferogram has to equal a power of two. A string of zeroes may be added to the measured interferogram to achieve this. More zeroes may be added in a process called zero filling to improve the appearance of the final spectrum, although there is no improvement in resolution. Alternatively, interpolation after the Fourier transform gives a similar result. == Advantages == There are three principal advantages of an FT spectrometer compared to a scanning (dispersive) spectrometer. The first is the multiplex or Fellgett's advantage (named after Peter Fellgett). This arises from the fact that information from all wavelengths is collected simultaneously. It results in a higher signal-to-noise ratio for a given scan-time for observations limited by a fixed detector noise contribution (typically in the thermal infrared spectral region where a photodetector is limited by generation-recombination noise). For a spectrum with m resolution elements, this increase is equal to the square root of m. Alternatively, it allows a shorter scan-time for a given resolution. In practice multiple scans are often averaged, increasing the signal-to-noise ratio by the square root of the number of scans. The second is the throughput or Jacquinot's advantage (named after Pierre Jacquinot). This results from the fact that in a dispersive instrument, the monochromator has entrance and exit slits which restrict the amount of light that passes through it. The interferometer throughput is determined only by the diameter of the collimated beam coming from the source. Although no slits are needed, FTIR spectrometers do require an aperture to restrict the convergence of the collimated beam in the interferometer. This is because convergent rays are modulated at different frequencies as the path difference is varied. Such an aperture is called a Jacquinot stop. For a given resolution and wavelength this circular aperture allows more light through than a slit, resulting in a higher signal-to-noise ratio. The third is the wavelength accuracy or Connes' advantage (named after Janine Connes). The wavelength scale is calibrated by a laser beam of known wavelength that passes through the interferometer. This is much more stable and accurate than in dispersive instruments, where the scale depends on the mechanical movement of diffraction gratings. 
In practice, the accuracy is limited by the divergence of the beam in the interferometer, which depends on the resolution. Another minor advantage is less sensitivity to stray light, that is, radiation of one wavelength appearing at another wavelength in the spectrum. In dispersive instruments, this is the result of imperfections in the diffraction gratings and accidental reflections. In FT instruments there is no direct equivalent as the apparent wavelength is determined by the modulation frequency in the interferometer. === Resolution === The interferogram belongs in the length dimension. The Fourier transform (FT) inverts the dimension, so the FT of the interferogram belongs in the reciprocal length dimension ([L−1]), that is, the dimension of wavenumber. The spectral resolution in cm−1 is equal to the reciprocal of the maximal OPD in cm. Thus a 4 cm−1 resolution will be obtained if the maximal OPD is 0.25 cm; this is typical of the cheaper FTIR instruments. Much higher resolution can be obtained by increasing the maximal OPD. This is not easy, as the moving mirror must travel in a near-perfect straight line. The use of corner-cube mirrors in place of the flat mirrors is helpful, as an outgoing ray from a corner-cube mirror is parallel to the incoming ray, regardless of the orientation of the mirror about axes perpendicular to the axis of the light beam. A spectrometer with 0.001 cm−1 resolution is now available commercially. The throughput advantage is important for high-resolution FTIR, as the monochromator in a dispersive instrument with the same resolution would have very narrow entrance and exit slits. In 1966 Janine Connes measured the temperature of the atmosphere of Venus by recording the vibration-rotation spectrum of Venusian CO2 at 0.1 cm−1 resolution. Michelson himself attempted to resolve the hydrogen Hα emission band into its two components using his interferometer. == Motivation == FTIR is a method of measuring infrared absorption and emission spectra. For a discussion of why people measure infrared absorption and emission spectra, i.e. why and how substances absorb and emit infrared light, see the article: Infrared spectroscopy. == Components == === IR sources === FTIR spectrometers are mostly used for measurements in the mid and near IR regions. For the mid-IR region, 2−25 μm (5,000–400 cm−1), the most common source is a silicon carbide (SiC) element heated to about 1,200 K (930 °C; 1,700 °F) (Globar). The output is similar to a blackbody. Shorter wavelengths of the near-IR, 1−2.5 μm (10,000–4,000 cm−1), require a higher temperature source, typically a tungsten-halogen lamp. The long wavelength output of these is limited to about 5 μm (2,000 cm−1) by the absorption of the quartz envelope. For the far-IR, especially at wavelengths beyond 50 μm (200 cm−1), a mercury discharge lamp gives higher output than a thermal source. === Detectors === Mid-IR spectrometers commonly use pyroelectric detectors that respond to changes in temperature as the intensity of IR radiation falling on them varies. The sensitive elements in these detectors are either deuterated triglycine sulfate (DTGS) or lithium tantalate (LiTaO3). These detectors operate at ambient temperatures and provide adequate sensitivity for most routine applications. To achieve the best sensitivity the time for a scan is typically a few seconds. Cooled photoelectric detectors are employed for situations requiring higher sensitivity or faster response. 
Liquid-nitrogen-cooled mercury cadmium telluride (MCT) detectors are the most widely used in the mid-IR. With these detectors an interferogram can be measured in as little as 10 milliseconds. Uncooled indium gallium arsenide photodiodes or DTGS are the usual choices in near-IR systems. Very sensitive liquid-helium-cooled silicon or germanium bolometers are used in the far-IR where both sources and beamsplitters are inefficient. === Beam splitter === An ideal beam-splitter transmits and reflects 50% of the incident radiation. However, as any material has a limited range of optical transmittance, several beam-splitters may be used interchangeably to cover a wide spectral range. In a simple Michelson interferometer, one beam passes twice through the beamsplitter but the other passes through only once. To correct for this, an additional compensator plate of equal thickness is incorporated. For the mid-IR region, the beamsplitter is usually made of KBr with a germanium-based coating that makes it semi-reflective. KBr absorbs strongly at wavelengths beyond 25 μm (400 cm−1), so CsI or KRS-5 are sometimes used to extend the range to about 50 μm (200 cm−1). ZnSe is an alternative where water vapour can be a problem, but is limited to about 20 μm (500 cm−1). CaF2 is the usual material for the near-IR, being both harder and less sensitive to moisture than KBr, but cannot be used beyond about 8 μm (1,200 cm−1). Far-IR beamsplitters are mostly based on polymer films, and cover a limited wavelength range. === Attenuated total reflectance === Attenuated total reflectance (ATR) is an accessory for the FTIR spectrophotometer that measures the surface properties of solid or thin-film samples rather than their bulk properties. Generally, ATR has a penetration depth of around 1 or 2 micrometers, depending on sample conditions. === Fourier transform === The interferogram in practice consists of a set of intensities measured for discrete values of OPD. The difference between successive OPD values is constant. Thus, a discrete Fourier transform is needed. The fast Fourier transform (FFT) algorithm is used. == Spectral range == === Far-infrared === The first FTIR spectrometers were developed for the far-infrared range. The reason for this has to do with the mechanical tolerance needed for good optical performance, which is related to the wavelength of the light being used. For the relatively long wavelengths of the far infrared, ~10 μm tolerances are adequate, whereas for the rock-salt region tolerances have to be better than 1 μm. A typical instrument was the cube interferometer developed at the NPL and marketed by Grubb Parsons. It used a stepper motor to drive the moving mirror, recording the detector response after each step was completed. === Mid-infrared === With the advent of cheap microcomputers it became possible to have a computer dedicated to controlling the spectrometer, collecting the data, doing the Fourier transform and presenting the spectrum. This provided the impetus for the development of FTIR spectrometers for the rock-salt region. The problems of manufacturing ultra-high precision optical and mechanical components had to be solved. A wide range of instruments are now available commercially. Although instrument design has become more sophisticated, the basic principles remain the same. 
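The discrete processing steps described earlier (apodization, zero filling, and the fast Fourier transform) can be collected into a short routine. The following Python sketch is illustrative rather than taken from the article: the function name and arguments are invented, the triangular apodization function is only one common choice, and taking the magnitude of the FFT stands in for a proper phase correction.

```python
# Illustrative sketch of interferogram processing: apodize, zero-fill, FFT.
import numpy as np

def interferogram_to_spectrum(interferogram, dx_cm):
    """Convert a single-sided interferogram sampled every dx_cm of OPD
    into a magnitude spectrum on a wavenumber (cm^-1) axis."""
    n = interferogram.size

    # Triangular apodization: weights fall linearly to zero at the maximum OPD,
    # suppressing sidelobes at the cost of a small loss of resolution.
    apodized = interferogram * (1.0 - np.arange(n) / n)

    # Zero filling: pad to the next power of two; this speeds up the FFT and
    # interpolates the spectrum but adds no real resolution.
    n_fft = 1 << (n - 1).bit_length()
    padded = np.zeros(n_fft)
    padded[:n] = apodized

    spectrum = np.abs(np.fft.rfft(padded))           # magnitude in place of phase correction
    wavenumbers = np.fft.rfftfreq(n_fft, d=dx_cm)    # wavenumber axis in cm^-1
    return wavenumbers, spectrum
```

Applied to a simulated interferogram such as the one in the earlier sketch (with dx_cm set to the reference-laser wavelength in cm), the routine returns the same peak positions with slightly broadened, lower-sidelobe line shapes, as expected from apodization.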
Nowadays, the moving mirror of the interferometer moves at a constant velocity, and sampling of the interferogram is triggered by finding zero-crossings in the fringes of a secondary interferometer lit by a helium–neon laser. In modern FTIR systems the constant mirror velocity is not strictly required, as long as the laser fringes and the original interferogram are recorded simultaneously at a higher sampling rate and then re-interpolated on a constant grid, as pioneered by James W. Brault. This confers very high wavenumber accuracy on the resulting infrared spectrum and avoids wavenumber calibration errors. === Near-infrared === The near-infrared region spans the wavelength range between the rock-salt region and the start of the visible region at about 750 nm. Overtones of fundamental vibrations can be observed in this region. It is used mainly in industrial applications such as process control and chemical imaging. == Applications == FTIR can be used in all applications where a dispersive spectrometer was used in the past (see external links). In addition, the improved sensitivity and speed have opened up new areas of application. Spectra can be measured in situations where very little energy reaches the detector. Fourier transform infrared spectroscopy is used in research fields such as geology, chemistry, materials science, botany and biology. === Nano and biological materials === FTIR is also used to investigate various nanomaterials and proteins in hydrophobic membrane environments. Studies show the ability of FTIR to directly determine the polarity at a given site along the backbone of a transmembrane protein. The bonding features of various organic and inorganic nanomaterials can be characterized, and their quantitative analysis carried out, with the help of FTIR. === Microscopy and imaging === An infrared microscope allows samples to be observed and spectra measured from regions as small as 5 microns across. Images can be generated by combining a microscope with linear or 2-D array detectors. The spatial resolution can approach 5 microns with tens of thousands of pixels. The images contain a spectrum for each pixel and can be viewed as maps showing the intensity at any wavelength or combination of wavelengths. This allows the distribution of different chemical species within the sample to be seen. This technique has been applied in various biological applications including the analysis of tissue sections as an alternative to conventional histopathology, the examination of the homogeneity of pharmaceutical tablets, and the differentiation of morphologically similar pollen grains. === Nanoscale and spectroscopy below the diffraction limit === The spatial resolution of FTIR can be further improved below the micrometer scale by integrating it into a scanning near-field optical microscopy platform. The corresponding technique is called nano-FTIR and allows broadband spectroscopy to be performed on materials in ultra-small quantities (single viruses and protein complexes) with 10 to 20 nm spatial resolution. === FTIR as detector in chromatography === The speed of FTIR allows spectra to be obtained from compounds as they are separated by a gas chromatograph. However, this technique is little used compared to GC-MS (gas chromatography–mass spectrometry), which is more sensitive. The GC-IR method is particularly useful for identifying isomers, which by their nature have identical masses. Liquid chromatography fractions are more difficult because of the solvent present. 
One notable exception is the measurement of chain branching as a function of molecular size in polyethylene using gel permeation chromatography, which is possible using chlorinated solvents that have no absorption in the spectral region in question. === TG-IR (thermogravimetric analysis-infrared spectrometry) === Measuring the gas evolved as a material is heated allows qualitative identification of the species to complement the purely quantitative information provided by measuring the weight loss. === Water content determination in plastics and composites === FTIR analysis is used to determine water content in fairly thin plastic and composite parts, more commonly in the laboratory setting. Such FTIR methods have long been used for plastics and were extended to composite materials in 2018, when the method was introduced by Krauklis, Gagani and Echtermeyer. The method uses the maximum of the absorbance band at about 5,200 cm−1, which correlates with the true water content in the material. == See also == Discrete Fourier transform – Function in discrete mathematics, for computing periodicity in evenly spaced data Fourier transform – Mathematical transform that expresses a function of time as a function of frequency Fourier transform spectroscopy – Spectroscopy based on time- or space-domain data Least-squares spectral analysis – Periodicity computation method, for computing periodicity in unevenly spaced data == References == == External links == Infracord spectrometer photograph The Grubb-Parsons-NPL cube interferometer Spectroscopy, part 2 by Dudley Williams, page 81 Infrared materials Properties of many salt crystals and useful links. University FTIR lab example from the University of Bristol
Wikipedia/Fourier-transform_infrared_spectroscopy