Inductively coupled plasma mass spectrometry ( ICP-MS ) is a type of mass spectrometry that uses an inductively coupled plasma to ionize the sample. It atomizes the sample and creates atomic and small polyatomic ions , which are then detected. It is known and used for its ability to detect metals and several non-metals in liquid samples at very low concentrations. It can detect different isotopes of the same element, which makes it a versatile tool in isotopic labeling .
Compared to atomic absorption spectroscopy , ICP-MS has greater speed, precision, and sensitivity. However, compared with other types of mass spectrometry, such as thermal ionization mass spectrometry (TIMS) and glow discharge mass spectrometry (GD-MS), ICP-MS introduces many interfering species: argon from the plasma, component gases of air that leak through the cone orifices, and contamination from glassware and the cones.
An inductively coupled plasma is a plasma that is energized ( ionized ) by inductively heating the gas with an electromagnetic coil , and contains a sufficient concentration of ions and electrons to make the gas electrically conductive . Not all of the gas needs to be ionized for the gas to have the characteristics of a plasma; as little as 1% ionization creates a plasma. [ 1 ] The plasmas used in spectrochemical analysis are essentially electrically neutral, with each positive charge on an ion balanced by a free electron. In these plasmas the positive ions are almost all singly charged and there are few negative ions, so there are nearly equal numbers of ions and electrons in each unit volume of plasma.
ICPs have two operating modes, a capacitive (E) mode with low plasma density and an inductive (H) mode with high plasma density, and a transition from E to H heating mode occurs as external inputs are increased. [ 2 ] ICP-MS instruments are operated in the H mode.
What makes inductively coupled plasma mass spectrometry (ICP-MS) unique among forms of inorganic mass spectrometry is its ability to sample the analyte continuously, without interruption. Other forms of inorganic mass spectrometry, such as glow discharge mass spectrometry (GDMS) and thermal ionization mass spectrometry (TIMS), require a two-stage process: samples are inserted into a vacuum chamber, the chamber is sealed and pumped down, and only then is the sample energized to send ions into the mass analyzer. In ICP-MS, the sample to be analyzed sits at atmospheric pressure. Through differential pumping (multiple vacuum stages separated by small apertures), the ions created in the argon plasma are transmitted, with the aid of various electrostatic focusing techniques, through the mass analyzer to the detector(s) and counted. Not only does this radically increase sample throughput (samples per unit time), it also makes possible what is called "time-resolved acquisition". Hyphenated techniques such as liquid chromatography ICP-MS (LC-ICP-MS), laser ablation ICP-MS (LA-ICP-MS) and flow injection ICP-MS (FIA-ICP-MS) have benefited from this capability, and it has stimulated the development of new research tools in geochemistry, forensic chemistry, biochemistry and oceanography. In addition, increases in sample throughput from dozens of samples a day to hundreds of samples a day have revolutionized environmental analysis and reduced its cost. Fundamentally, all of this is possible because, while the sample resides at atmospheric pressure, the analyzer and detector operate at roughly one ten-millionth of that pressure during normal operation.
An inductively coupled plasma (ICP) for spectrometry is sustained in a torch that consists of three concentric tubes, usually made of quartz , although the inner tube (injector) can be sapphire if hydrofluoric acid is being used. The end of this torch is placed inside an induction coil supplied with a radio-frequency electric current. A flow of argon gas (usually 13 to 18 liters per minute) is introduced between the two outermost tubes of the torch and an electric spark is applied for a short time to introduce free electrons into the gas stream. These electrons interact with the radio-frequency magnetic field of the induction coil and are accelerated first in one direction, then the other, as the field changes at high frequency (usually 27.12 million cycles per second). The accelerated electrons collide with argon atoms, and sometimes a collision causes an argon atom to part with one of its electrons. The released electron is in turn accelerated by the rapidly changing magnetic field. The process continues until the rate of release of new electrons in collisions is balanced by the rate of recombination of electrons with argon ions (atoms that have lost an electron). This produces a ‘fireball’ that consists mostly of argon atoms with a rather small fraction of free electrons and argon ions. The temperature of the plasma is very high, of the order of 10,000 K. The plasma also produces ultraviolet light, so for safety should not be viewed directly.
The ICP can be retained in the quartz torch because the flow of gas between the two outermost tubes keeps the plasma away from the walls of the torch. A second flow of argon (around 1 liter per minute) is usually introduced between the central tube and the intermediate tube to keep the plasma away from the end of the central tube. A third flow (again usually around 1 liter per minute) of gas is introduced into the central tube of the torch. This gas flow passes through the centre of the plasma, where it forms a channel that is cooler than the surrounding plasma but still much hotter than a chemical flame. Samples to be analyzed are introduced into this central channel, usually as a mist of liquid formed by passing the liquid sample into a nebulizer.
To maximise plasma temperature (and hence ionisation efficiency) and stability, the sample should be introduced through the central tube with as little liquid (solvent load) as possible, and with consistent droplet sizes. A nebuliser can be used for liquid samples, followed by a spray chamber to remove larger droplets, or a desolvating nebuliser can be used to evaporate most of the solvent before it reaches the torch. Solid samples can also be introduced using laser ablation. The sample enters the central channel of the ICP, evaporates, molecules break apart, and then the constituent atoms ionise. At the temperatures prevailing in the plasma a significant proportion of the atoms of many chemical elements are ionized, each atom losing its most loosely bound electron to form a singly charged ion. The plasma temperature is selected to maximise ionisation efficiency for elements with a high first ionisation energy, while minimising second ionisation (double charging) for elements that have a low second ionisation energy.
For coupling to mass spectrometry , the ions from the plasma are extracted through a series of cones into a mass spectrometer, usually a quadrupole . The ions are separated on the basis of their mass-to-charge ratio and a detector receives an ion signal proportional to the concentration.
The concentration of a sample can be determined through calibration with certified reference material such as single- or multi-element reference standards. ICP-MS also lends itself to quantitative determination through isotope dilution , a single-point method based on an isotopically enriched standard. To improve reproducibility and compensate for errors caused by sensitivity drift, an internal standard can be added.
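For illustration, the short sketch below implements the basic two-isotope dilution equation in its simplest single-point form; the copper spike composition, spike amount and measured blend ratio are hypothetical values chosen only to show the shape of the calculation, and real work would also apply mass-bias and blank corrections.

```python
def isotope_dilution_moles(R_blend, n_spike, sample_ab_a, sample_ab_b,
                           spike_ab_a, spike_ab_b):
    """Moles of analyte element in the sample, from the measured isotope
    ratio R_blend = (isotope a)/(isotope b) of the spiked sample.
    ab_* are fractional abundances of isotopes a and b in the natural
    sample and in the isotopically enriched spike."""
    return n_spike * (spike_ab_a - R_blend * spike_ab_b) / \
           (R_blend * sample_ab_b - sample_ab_a)

# Hypothetical example: copper (a = 63Cu, b = 65Cu), natural abundances
# 0.6915 / 0.3085, spiked with 1 nmol of a spike enriched to 95% 65Cu,
# and a measured 63/65 ratio of 1.00 in the blend.
n_sample = isotope_dilution_moles(R_blend=1.00, n_spike=1.0e-9,
                                  sample_ab_a=0.6915, sample_ab_b=0.3085,
                                  spike_ab_a=0.05, spike_ab_b=0.95)
print(f"analyte in sample: {n_sample:.2e} mol")   # ~2.4e-9 mol
```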
Other mass analyzers coupled to ICP systems include double focusing magnetic-electrostatic sector systems with both single and multiple collector, as well as time of flight systems (both axial and orthogonal accelerators have been used).
One of the largest volume uses for ICP-MS is in the medical and forensic field, specifically, toxicology. [ citation needed ] A physician may order a metal assay for a number of reasons, such as suspicion of heavy metal poisoning, metabolic concerns, and even hepatological issues. Depending on the specific parameters unique to each patient's diagnostic plan, samples collected for analysis can range from whole blood, urine, plasma, serum, to even packed red blood cells. Another primary use for this instrument lies in the environmental field. Such applications include water testing for municipalities or private individuals all the way to soil, water and other material analysis for industrial purposes. [ 3 ]
In recent years, industrial and biological monitoring has presented another major need for metal analysis via ICP-MS. Individuals working in factories where exposure to metals is likely and unavoidable, such as a battery factory, are required by their employer to have their blood or urine analyzed for metal toxicity on a regular basis. This monitoring has become a mandatory practice implemented by the U.S. Occupational Safety and Health Administration , in an effort to protect workers from their work environment and ensure proper rotation of work duties (i.e. rotating employees from a high exposure position to a low exposure position).
ICP-MS is also used widely in the geochemistry field for radiometric dating, in which it is used to analyze the relative abundance of different isotopes, in particular uranium and lead. ICP-MS is more suitable for this application than the previously used thermal ionization mass spectrometry , as species with high ionization energy, such as osmium and tungsten , can be ionized easily. For high-precision ratio work, multiple collector instruments are normally used to reduce the effect of noise on the calculated ratios.
In the field of flow cytometry , a new technique uses ICP-MS to replace the traditional fluorochromes . Briefly, instead of labelling antibodies (or other biological probes) with fluorochromes, each antibody is labelled with a distinct combination of lanthanides . When the sample of interest is analysed by ICP-MS in a specialised flow cytometer, each antibody can be identified and quantitated by virtue of a distinct ICP "footprint". In theory, hundreds of different biological probes can thus be analysed in an individual cell, at a rate of ca. 1,000 cells per second. Because elements are easily distinguished in ICP-MS, the problem of compensation in multiplex flow cytometry is effectively eliminated.
Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) is a powerful technique for the elemental analysis of a wide variety of materials encountered in forensic casework. LA-ICP-MS has already been applied successfully in forensics to metals, glasses, soils, car paints, bones and teeth, printing inks, fingerprints and paper, as well as to trace-element analysis more generally. Among these, forensic glass analysis stands out as an application for which the technique has great utility, as it can provide highly discriminating elemental data.
Hit-and-run collisions, burglaries, assaults, drive-by shootings and bombings can all produce glass fragments that may serve as evidence of association under glass-transfer conditions. LA-ICP-MS is considered one of the best techniques for the analysis of glass because of its short sample preparation and analysis time and the small sample size required, less than 250 nanograms. In addition, there is no need for the complex procedures and hazardous materials used to digest samples for solution analysis. The technique can detect major, minor and trace elements with a high level of precision and accuracy. A set of physical and optical properties, including color, thickness, density and refractive index (RI), is used to characterize glass samples and, if necessary, elemental analysis can be conducted in order to enhance the value of an association.
In the pharmaceutical industry, ICP-MS is used for detecting inorganic impurities in pharmaceuticals and their ingredients. New and reduced maximum permitted exposure levels of heavy metals from dietary supplements, introduced in USP ( United States Pharmacopeia ) chapters 〈232〉 Elemental Impurities—Limits [ 4 ] and 〈233〉 Elemental Impurities—Procedures, [ 5 ] will increase the need for ICP-MS technology where, previously, other analytical methods have been sufficient. [ 6 ]
Cosmetics, such as lipstick, recovered from a crime scene may provide valuable forensic information. Lipstick smears left on cigarette butts, glassware, clothing, bedding, napkins, paper, etc. may be valuable evidence. Lipstick recovered from clothing or skin may also indicate physical contact between individuals. Forensic analysis of recovered lipstick smear evidence can provide valuable information on the recent activities of a victim or suspect. Trace elemental analysis of lipstick smears could be used to complement existing visual comparative procedures to determine the lipstick brand and color.
Single Particle Inductively Coupled Plasma Mass Spectrometry (SP ICP-MS) was designed for particle suspensions in 2000 by Claude Degueldre. He first tested this new methodology at the Forel Institute of the University of Geneva and presented this new analytical approach at the 'Colloid 2002' symposium during the spring 2002 meeting of the EMRS, and in the proceedings in 2003. [ 7 ] This study presents the theory of SP ICP-MS and the results of tests carried out on clay particles (montmorillonite) as well as other suspensions of colloids. The method was then tested on thorium dioxide nanoparticles by Degueldre & Favarger (2004), [ 8 ] zirconium dioxide by Degueldre et al. (2004) [ 9 ] and gold nanoparticles, which are used as a substrate in nanopharmacy, published by Degueldre et al. (2006). [ 10 ] Subsequently, the study of uranium dioxide nano- and micro-particles gave rise to a detailed publication (Degueldre et al., 2006). [ 11 ] Since 2010, interest in SP ICP-MS has exploded.
Previous forensic techniques employed for the organic analysis of lipsticks by compositional comparison include thin layer chromatography (TLC), gas chromatography (GC), and high-performance liquid chromatography (HPLC). These methods provide useful information regarding the identification of lipsticks. However, they all require long sample preparation times and destroy the sample. Nondestructive techniques for the forensic analysis of lipstick smears include UV fluorescence observation combined with purge-and-trap gas chromatography, microspectrophotometry and scanning electron microscopy-energy dispersive spectroscopy (SEM-EDS), and Raman spectroscopy. [ 12 ]
A growing trend in the world of elemental analysis involves speciation , the determination of the oxidation state of certain metals such as chromium and arsenic . Because the toxicity of these elements varies with oxidation state, new regulations from food authorities require speciation of some elements. One of the primary techniques to achieve this is to separate the chemical species with high-performance liquid chromatography (HPLC) or field flow fractionation (FFF) and then measure the concentrations with ICP-MS.
There is an increasing trend of using ICP-MS as a tool in speciation analysis, which normally involves a front-end chromatographic separation and an element-selective detector , such as AAS or ICP-MS. For example, ICP-MS may be combined with size exclusion chromatography and preparative native PAGE for identifying and quantifying metalloproteins in biofluids. The phosphorylation status of proteins can also be analyzed.
In 2007, a new type of protein tagging reagent called metal-coded affinity tags (MeCAT) was introduced to label proteins quantitatively with metals, especially lanthanides. [ 13 ] MeCAT labelling allows relative and absolute quantification of all kinds of proteins and other biomolecules such as peptides. MeCAT comprises a site-specific biomolecule-tagging group and at least one strong chelating group that binds metals. MeCAT-labelled proteins can be quantified accurately by ICP-MS down to low-attomole amounts of analyte, which is at least two to three orders of magnitude more sensitive than other mass spectrometry based quantification methods. By introducing several MeCAT labels to a biomolecule and further optimizing LC-ICP-MS, detection limits in the zeptomole range are within the realm of possibility. By using different lanthanides, MeCAT multiplexing can be used for the pharmacokinetics of proteins and peptides or the analysis of the differential expression of proteins ( proteomics ), e.g. in biological fluids. Dissolvable SDS-PAGE (DPAGE), two-dimensional gel electrophoresis or chromatography is used for the separation of MeCAT-labelled proteins. Flow-injection ICP-MS analysis of protein bands or spots from DPAGE SDS-PAGE gels can be performed easily by dissolving the DPAGE gel after electrophoresis and staining of the gel. MeCAT-labelled proteins are identified and relatively quantified at the peptide level by MALDI-MS or ESI-MS.
ICP-MS allows determination of elements with atomic masses in the range 7 to 250 ( Li to U ), and sometimes higher. Some masses cannot be measured, such as 40 Da, because of the abundance of argon in the sample. Other interference regions include mass 80 (due to the argon dimer) and mass 56 (due to ArO), the latter of which greatly hinders Fe detection unless the instrument is fitted with a reaction chamber. Such interferences can be reduced by using a high-resolution ICP-MS (HR-ICP-MS), which uses two or more slits to constrict the beam and distinguish between nearby peaks. This comes at the cost of sensitivity: for example, distinguishing iron from argon-based interferences requires a resolving power of about 10,000, which may reduce the iron sensitivity by around 99%. Interfering species can alternatively be removed with a collision chamber , which eliminates them by chemical reaction or physical collision with a gas.
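As a rough illustration of what such resolution requirements mean in practice, the sketch below estimates the resolving power m/Δm needed to separate the argon dimer interference at mass 80 from ⁸⁰Se. The nuclide masses are approximate literature values entered by hand, and the simple m/Δm figure ignores peak-shape details.

```python
# Approximate atomic masses (u) of the interfering and analyte species
m_ar   = 39.9624      # 40Ar
m_se80 = 79.9165      # 80Se
m_ar2  = 2 * m_ar     # 40Ar40Ar dimer, ~79.925 u

delta_m = abs(m_ar2 - m_se80)          # mass difference to be resolved
resolving_power = m_se80 / delta_m     # required m / delta-m
print(f"mass difference: {delta_m:.4f} u")
print(f"required m/dm:   {resolving_power:.0f}")   # roughly 10,000
```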
A single collector ICP-MS may use a multiplier in pulse counting mode to amplify very low signals, an attenuation grid or a multiplier in analogue mode to detect medium signals, and a Faraday cup/bucket to detect larger signals. A multi-collector ICP-MS may have more than one of any of these, typically Faraday buckets which are more cost-effective than other collectors. With this combination, a dynamic range of 12 orders of magnitude, from 1 part per quadrillion (ppq) to 100 parts per million (ppm) is possible.
ICP-MS is a common method for the determination of cadmium in biological samples. [ 14 ]
Unlike atomic absorption spectroscopy , which can only measure a single element at a time, ICP-MS has the capability to scan for all elements simultaneously. This allows rapid sample processing. A simultaneous ICP-MS that can record the entire analytical spectrum from lithium to uranium in every analysis won the Silver Award at the 2010 Pittcon Editors' Awards . An ICP-MS may use multiple scan modes, each one striking a different balance between speed and precision. Using the magnet alone to scan is slow, due to hysteresis, but precise. Electrostatic plates can be used in addition to the magnet to increase the speed, and this, combined with multiple collectors, can allow a scan of every element from lithium-6 to uranium oxide (mass 256) in less than a quarter of a second. For low detection limits, interfering species and high precision, the counting time can increase substantially. The rapid scanning, large dynamic range and large mass range make ICP-MS ideally suited to measuring multiple unknown concentrations and isotope ratios in samples that have had minimal preparation (an advantage over TIMS). The analysis of seawater, urine, and digested whole-rock samples are examples of industry applications. These properties also lend themselves well to laser-ablated rock samples, where the scanning rate is fast enough to enable a real-time plot of any number of isotopes. This also allows easy spatial mapping of mineral grains.
In terms of input and output , an ICP-MS instrument consumes prepared sample material and translates it into mass-spectral data. The analytical procedure takes some time, after which the instrument can be switched to the next sample. A series of such sample measurements requires the plasma to remain ignited, and a number of technical parameters must stay stable so that the results can be interpreted with acceptable accuracy and precision. Maintaining the plasma requires a constant supply of carrier gas (usually pure argon) and increased power consumption of the instrument. When these additional running costs are not considered justified, the plasma and most of the auxiliary systems can be turned off. In such a standby mode, only the pumps keep working to maintain the proper vacuum in the mass spectrometer.
The constituents of ICP-MS instrument are designed to allow for reproducible and/or stable operation.
The first step in analysis is the introduction of the sample. This has been achieved in ICP-MS through a variety of means.
The most common method is the use of analytical nebulizers . A nebulizer converts liquids into an aerosol, which can then be swept into the plasma to create the ions. Nebulizers work best with simple liquid samples (i.e. solutions), although there have been instances of their use with more complex materials like a slurry . Many varieties of nebulizers have been coupled to ICP-MS, including pneumatic, cross-flow, Babington, ultrasonic, and desolvating types. The aerosol generated is often treated to limit it to only the smallest droplets, commonly by means of a Peltier-cooled double-pass or cyclonic spray chamber. Use of autosamplers makes this easier and faster, especially for routine work and large numbers of samples. A desolvating nebulizer (DSN) may also be used; this uses a long heated capillary, coated with a fluoropolymer membrane, to remove most of the solvent and reduce the load on the plasma. Matrix-removal introduction systems are sometimes used for samples, such as seawater, in which the species of interest are present at trace levels and surrounded by much more abundant contaminants.
Laser ablation is another method. Though less common in the past, it has become popular as a means of sample introduction, thanks to increased ICP-MS scanning speeds. In this method, a pulsed UV laser is focused on the sample and creates a plume of ablated material, which can be swept into the plasma. This allows geochemists to spatially map the isotope composition in cross-sections of rock samples, a tool which is lost if the rock is digested and introduced as a liquid sample. Lasers for this task are built to have highly controllable power outputs and uniform radial power distributions, to produce craters which are flat bottomed and of a chosen diameter and depth.
For both laser ablation and desolvating nebulizers, a small flow of nitrogen may also be introduced into the argon flow. Nitrogen, being a diatomic molecule, has vibrational modes that monatomic argon lacks and is therefore more efficient at receiving energy from the RF coil around the torch.
Other methods of sample introduction are also utilized. Electrothermal vaporization (ETV) and in torch vaporization (ITV) use hot surfaces (graphite or metal, generally) to vaporize samples for introduction. These can use very small amounts of liquids, solids, or slurries. Other methods like vapor generation are also known.
The plasma used in an ICP-MS is made by partially ionizing argon gas (Ar → Ar + + e − ). The energy required for this reaction is obtained by passing a radio-frequency alternating electric current through a load coil that surrounds the plasma torch, through which the argon gas flows.
After the sample is injected, the plasma's extreme temperature causes the sample to separate into individual atoms (atomization). Next, the plasma ionizes these atoms (M → M + + e − ) so that they can be detected by the mass spectrometer.
An inductively coupled plasma (ICP) for spectrometry is sustained in a torch that consists of three concentric tubes, usually made of quartz. The two major designs are the Fassel and Greenfield torches. [ 15 ] The end of this torch is placed inside an induction coil supplied with a radio-frequency electric current. A flow of argon gas (usually 14 to 18 liters per minute) is introduced between the two outermost tubes of the torch and an electrical spark is applied for a short time to introduce free electrons into the gas stream. These electrons interact with the radio-frequency magnetic field of the induction coil and are accelerated first in one direction, then the other, as the field changes at high frequency (usually 27.12 MHz or 40 MHz ). The accelerated electrons collide with argon atoms, and sometimes a collision causes an argon atom to part with one of its electrons. The released electron is in turn accelerated by the rapidly changing magnetic field. The process continues until the rate of release of new electrons in collisions is balanced by the rate of recombination of electrons with argon ions (atoms that have lost an electron). This produces a ‘fireball’ that consists mostly of argon atoms with a rather small fraction of free electrons and argon ions.
Making the plasma from argon, instead of other gases, has several advantages. First, argon is abundant (in the atmosphere, as a result of the radioactive decay of potassium ) and therefore cheaper than other noble gases . Argon also has a higher first ionization potential than all other elements except He , F , and Ne . Because of this high ionization energy, the reaction (Ar + + e − → Ar) is more energetically favorable than the reaction (M + + e − → M). This ensures that the sample remains ionized (as M + ) so that the mass spectrometer can detect it.
Argon for use with the ICP-MS can be purchased either as a refrigerated liquid or as a gas. Whichever form is purchased, it should have a guaranteed purity of at least 99.9% argon. It is important to determine which type of argon is best suited to the specific situation. Liquid argon is typically cheaper and can be stored in greater quantity than the gas form, which is more expensive and takes up more tank space. If the instrument is used only infrequently, buying argon as a gas will be most appropriate, as it is more than sufficient for shorter run times and gas in a cylinder remains stable for long periods, whereas liquid argon suffers losses to the environment through tank venting when stored over extended time frames. If, however, the ICP-MS is used routinely and runs for eight or more hours a day, several days a week, liquid argon will be the more suitable choice. If multiple ICP-MS instruments run for long periods, it will most likely be beneficial for the laboratory to install a bulk or micro-bulk argon tank maintained by a gas supply company, eliminating the need to change tanks frequently and minimizing both the argon left over in each used tank and the downtime for tank changeover.
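A quick budget calculation, using the flow rates quoted earlier in this article and an assumed eight-hour working day, shows why heavily used instruments favour liquid or bulk argon. The cylinder capacity below is a typical assumed value, not a specification.

```python
# Argon flows taken from the figures quoted above (L/min)
plasma_gas  = 15.0   # cool/plasma gas; the article quotes roughly 13-18 L/min
auxiliary   = 1.0    # intermediate/auxiliary gas
nebulizer   = 1.0    # sample carrier gas

hours_per_day = 8.0
litres_per_day = (plasma_gas + auxiliary + nebulizer) * 60 * hours_per_day
print(f"argon consumed: ~{litres_per_day:,.0f} L per {hours_per_day:.0f} h day")

# An assumed ~9,000 L high-pressure gas cylinder would therefore last only
# about one working day, while a liquid-argon dewar or bulk tank holds far more.
cylinder_capacity = 9000.0
print(f"cylinders per day: ~{litres_per_day / cylinder_capacity:.1f}")
```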
Helium can be used either in place of, or mixed with, argon for plasma generation. [ 16 ] [ 17 ] Helium's higher first ionisation energy allows greater ionisation and therefore higher sensitivity for hard-to-ionise elements. The use of pure helium also avoids argon-based interferences such as ArO. [ 18 ] However, many of the interferences can be mitigated by use of a collision cell , and the greater cost of helium has prevented its use in commercial ICP-MS. [ citation needed ]
The carrier gas is sent through the central channel and into the very hot plasma, where it is exposed to the radio-frequency field and becomes part of the plasma itself. The high temperature of the plasma is sufficient to cause a very large portion of the sample to form ions. This fraction of ionization can approach 100% for some elements (e.g. sodium), but it depends on the element's ionization potential. A fraction of the formed ions passes through a ~1 mm hole (sampler cone) and then a ~0.4 mm hole (skimmer cone); the purpose of these cones is to allow the mass spectrometer to maintain the vacuum it requires.
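The dependence of this ionization fraction on the first ionization energy can be sketched with the Saha equation. The example below assumes representative values (T ≈ 7500 K, electron density ≈ 10²¹ m⁻³, statistical-weight ratio crudely taken as 1) rather than measured instrument parameters, so the percentages are only indicative.

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
m_e = 9.1093837e-31    # electron mass, kg
h   = 6.62607015e-34   # Planck constant, J*s
eV  = 1.602176634e-19  # joules per electron-volt

def saha_ion_fraction(first_ie_eV, T, n_e, g_ratio=1.0):
    """Fraction of atoms singly ionized, n+/(n+ + n0), from the Saha
    equation at temperature T (K) and electron density n_e (m^-3).
    g_ratio stands in for 2*g_ion/g_atom and is crudely set to 1."""
    kT = k_B * T
    prefactor = (2.0 * math.pi * m_e * kT / h**2) ** 1.5
    ion_to_neutral = g_ratio * prefactor * math.exp(-first_ie_eV * eV / kT) / n_e
    return ion_to_neutral / (1.0 + ion_to_neutral)

# Assumed, representative ICP conditions rather than measured ones
T, n_e = 7500.0, 1.0e21
for element, ie in [("Na", 5.14), ("Fe", 7.90), ("As", 9.79), ("F", 17.42)]:
    frac = saha_ion_fraction(ie, T, n_e)
    print(f"{element:2s}  first IE {ie:5.2f} eV  ->  ~{100 * frac:6.2f}% ionized")
```

With these assumptions an easily ionized element such as sodium comes out essentially fully ionized, while fluorine, with its very high first ionization energy, remains almost entirely neutral, which is consistent with ICP-MS performing poorly for the halogens.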
The vacuum is created and maintained by a series of pumps. The first stage is usually based on a roughing pump, most commonly a standard rotary vane pump. This removes most of the gas and typically reaches a pressure of around 133 Pa. Later stages have their vacuum generated by more powerful vacuum systems, most often turbomolecular pumps. Older instruments may have used oil diffusion pumps for high vacuum regions.
Before mass separation, a beam of positive ions has to be extracted from the plasma and focused into the mass analyzer. It is important to separate the ions from UV photons, energetic neutrals and any solid particles that may have been carried into the instrument from the ICP. Traditionally, ICP-MS instruments have used transmitting ion lens arrangements for this purpose. Examples include the Einzel lens, the Barrel lens, Agilent's Omega Lens [ 19 ] and Perkin-Elmer's Shadow Stop. [ 20 ] Another approach is to use ion guides (quadrupoles, hexapoles, or octopoles) to guide the ions into the mass analyzer along a path away from the trajectory of photons or neutral particles. Yet another approach, patented by Varian and used in Analytik Jena ICP-MS instruments, [ 21 ] is 90-degree reflecting parabolic "Ion Mirror" optics, which are claimed to provide more efficient ion transport into the mass analyzer, resulting in better sensitivity and reduced background. The Analytik Jena ICP-MS PQMS has been claimed to be the most sensitive instrument on the market. [ 22 ] [ 23 ] [ 24 ] [ failed verification ]
A sector ICP-MS commonly has four sections: an extraction/acceleration region, steering lenses, an electrostatic sector and a magnetic sector. The first region takes ions from the plasma and accelerates them using a high voltage. The second may use a combination of parallel plates, rings, quadrupoles, hexapoles and octopoles to steer, shape and focus the beam so that the resulting peaks are symmetrical, flat-topped and have high transmission. The electrostatic sector may be before or after the magnetic sector, depending on the particular instrument, and reduces the spread in kinetic energy caused by the plasma. This spread is particularly large for ICP-MS, being larger than in glow discharge sources and much larger than in TIMS. The geometry of the instrument is chosen so that the combined focal point of the electrostatic and magnetic sectors lies at the collector, an arrangement known as double focusing (or double focussing).
If the mass of interest has a low sensitivity and is just below a much larger peak, the low mass tail from this larger peak can intrude onto the mass of interest. A Retardation Filter might be used to reduce this tail. This sits near the collector, and applies a voltage equal but opposite to the accelerating voltage; any ions that have lost energy while flying around the instrument will be decelerated to rest by the filter.
The collision/reaction cell is used to remove interfering ions through ion/neutral reactions. [ 25 ] Collision/reaction cells are known under several names. The dynamic reaction cell is located before the quadrupole in the ICP-MS device. [ 26 ] [ 27 ] [ 28 ] [ 29 ] The chamber has a quadrupole and can be filled with reaction (or collision) gases ( ammonia , methane , oxygen or hydrogen ), with one gas type at a time or a mixture of two of them, which reacts with the introduced sample, eliminating some of the interference.
The integrated Collisional Reaction Cell (iCRC) used by Analytik Jena ICP-MS is a mini collision cell installed in front of the parabolic ion-mirror optics that removes interfering ions by injecting a collisional gas (He), a reactive gas (H 2 ), or a mixture of the two, directly into the plasma as it flows through the skimmer cone and/or the sampler cone. [ 30 ] [ 31 ] The iCRC removes interfering ions using a collisional kinetic energy discrimination (KED) phenomenon [ citation needed ] and chemical reactions with interfering ions, similarly to traditional, larger collision cells.
As with any piece of instrumentation or equipment, there are many aspects of maintenance that need to be encompassed by daily, weekly and annual procedures. The frequency of maintenance is typically determined by the sample volume and cumulative run time that the instrument is subjected to.
One of the first things that should be carried out before the calibration of the ICP-MS is a sensitivity check and optimization. This ensures that the operator is aware of any possible issues with the instrument and if so, may address them before beginning a calibration. Typical indicators of sensitivity are Rhodium levels, Cerium/Oxide ratios and DI water blanks. One common standard practice is to measure a standard tuning solution provided by the ICP manufacturer every time the plasma torch is started. Then the instrument is auto-calibrated for optimum sensitivity and the operator obtains a report providing certain parameters such as sensitivity, mass resolution and estimated amount of oxidized species and double-positive charged species.
One of the most frequent forms of routine maintenance is replacing the sample and waste tubing on the peristaltic pump, as these tubes can wear fairly quickly, resulting in holes and clogs in the sample line and, in turn, skewed results. Other parts that need regular cleaning and/or replacement include sample tips, nebulizer tips, sample cones, skimmer cones, injector tubes, torches and lenses. It may also be necessary to change the oil in the interface roughing pump as well as the vacuum backing pump, depending on the workload put on the instrument.
For most clinical methods using ICP-MS, there is a relatively simple and quick sample prep process. The main component to the sample is an internal standard, which also serves as the diluent. This internal standard consists primarily of deionized water , with nitric or hydrochloric acid and indium and/or gallium . The addition of volatile acids allows for the sample to decompose into its gaseous components in the plasma which minimizes the ability for concentrated salts and solvent loads to clog the cones and contaminate the instrument. [ 32 ] Depending on the sample type, usually 5 mL of the internal standard is added to a test tube along with 10–500 microliters of sample. This mixture is then vortexed for several seconds or until mixed well and then loaded onto the autosampler tray. For other applications that may involve very viscous samples or samples that have particulate matter, a process known as sample digestion may have to be carried out before it can be pipetted and analyzed. This adds an extra first step to the above process and therefore makes the sample prep more lengthy. | https://en.wikipedia.org/wiki/Inductively_coupled_plasma_mass_spectrometry |
Indulin AA-86 is the trade name (held by Ingevity) [ 1 ] for a proprietary formula used for an asphalt emulsifying agent . As such, it does not have a given CAS number . Its composition is only provided subject to a nondisclosure agreement . [ 2 ] The company reports that it is a fatty amine derivative, an amber viscous liquid, pH 9 to 11 at a 15% w/w concentration, reactive with acids and oxidizing agents, with a relative density of 0.89, a boiling point greater than 180 °C and a closed-cup flash point of 126 °C. It is not volatile, but is identified as a hazard for inhalation, eye or skin contact and must be used with adequate ventilation. The compound is stable and hazardous decomposition products should not be produced during normal use, but in a fire it can produce carbon dioxide , carbon monoxide and nitrogen oxides , so firefighters are advised to wear self-contained breathing apparatus. [ 3 ] State regulatory disclosures indicate it contains ethyl acrylate . According to the US EPA, "the hydrochloric salt of this product is only acceptable for use in the production of asphalt emulsions, and the emulsions may only be used in asphalt paving applications." [ 4 ] Standard usage involves partial neutralization of basic indulin with hydrochloric acid to form a salt , for a 1.0:1.1 ratio of indulin to its salt. [ 5 ]
The compound is notable for a backflow of up to 24 gallons of the material, possibly in a mixture with hydrochloric acid , into the city water supply of Corpus Christi, Texas , [ 6 ] leading to a temporary ban (December 14, 2016) on use of tap water throughout the city of 320,000 residents. The ban remained in place in 85% of the city for more than two days, leading to school closures and emergency deliveries of bottled water, [ 7 ] [ 8 ] after which restrictions were tailored (December 17) to smaller portions of the city. City officials posted a warning to residents that "Boiling, freezing, filtering, adding chlorine or other disinfectants or letting the water stand will not make the water safe." The material originated from a plant leased to Ergon Asphalt and Emulsions on property adjacent to one of the two Valero refineries in the city's large refinery complex. [ 9 ] [ 10 ] A "white, sudsy liquid" was reported to the city at taps in the company's administration building on December 1 and then, after city workers had flushed the pipe, on December 7, and finally, after a third flush, reported again by Valero workers at the building on December 12. [ 11 ] A Valero spokesman described the contamination as "a localized backflow issue from third party operations in the area of Valero's asphalt terminal" and said that the company did not believe the city water had been impacted. [ 12 ] It was reported December 17 that city officials were investigating four cases of skin and intestinal issues that were consistent with possible symptoms of exposure, [ 13 ] but these claims were dismissed by Mayor Dan McQueen as "rumors", and twelve "reports of possibly related symptoms from prohibited water use" were described as "unconfirmed" by the EPA. [ 14 ] [ 5 ] The ban was lifted December 18 after 28 samples of city water failed to find Indulin AA-86 contamination. [ 15 ]
The solubility of the compound is thought to be relatively low. A blog for Hydroviv , a water filter manufacturer, suggested that the presence of hydrochloric acid might hint at the nature of the backflow: "Indulin AA-86 is prepared in a 0.3% solution to form an emulsion. Therefore, 24 gallons of Indulin AA-86 would be diluted with water into 8,000 gallons, a volume that is a standard storage/mixing tank size in the industry." The diluted emulsion would be more capable of mixing with the city water supply during a backflow. [ 16 ] A statement by Ergon said that it purchases its water via Valero, its landlord at the site, and that a soap solution, consisting of 98% water and 2% indulin AA-86, would have backflowed through this separate supply line. [ 17 ] | https://en.wikipedia.org/wiki/Indulin_AA-86 |
Industrial & Engineering Chemistry Research is a peer-reviewed scientific journal published by the American Chemical Society covering all aspects of chemical engineering . The editor-in-chief is Michael Baldea ( University of Texas at Austin ).
The journal was established in 1909 as the Journal of Industrial & Engineering Chemistry . It was renamed in 1930 as Industrial & Engineering Chemistry before obtaining its current title in 1970. From 1911 to 1916 it was edited by Milton C. Whitaker . From 1921 to 1942 it was edited by Dr. Harrison E. Howe . [ 1 ] From 1962 to 1986, Industrial & Engineering Chemistry Fundamentals was edited by Robert L. Pigford. From 1986 to 2013 the journal was edited by Donald R. Paul, and from 2014 to 2023 by Phillip E. Savage.
The journal I&EC Product Research and Development was established in 1962. It was renamed Product R&D in 1969 and renamed again in 1978 as Industrial & Engineering Chemistry Product Research and Development . In 1986, it and the journals Industrial & Engineering Chemistry Fundamentals and Industrial & Engineering Chemistry Process Design and Development , both also established in 1962, were combined into Industrial & Engineering Chemistry Research . [ 2 ]
The journal is abstracted and indexed in:
According to the Journal Citation Reports , the journal has a 2022 impact factor of 4.2. [ 3 ]
| https://en.wikipedia.org/wiki/Industrial_&_Engineering_Chemistry_Research |
There are 31 Industrial Assessment Centers in the United States as of June 2021. These centers are located at universities across the US, and are funded by the United States Department of Energy (DOE) to spread ideas relating to industrial energy conservation .
The centers conduct research into energy conservation techniques for industrial applications. This is accomplished by performing energy audits or assessments at manufacturers near the particular center. The IAC program has achieved over $890 million of implemented and $2.6 billion of recommended energy cost savings since its inception. [ 1 ]
Industrial Assessment Centers (formerly called the Energy Analysis and Diagnostic Center (EADC) program) were created by the Department of Commerce in 1976 and later moved to the DOE. The IAC program is administered through the Advanced Manufacturing Office [ 2 ] under the Office of Energy Efficiency and Renewable Energy . The Centers were created to help small and medium-sized manufacturing facilities cut back on unnecessary costs from inefficient energy use, ineffective production procedures, excess waste production, and other production-related problems. [ 3 ] According to instructions from DOE, currently the centers are only required to focus on reducing wasted energy and increasing energy efficiency. While this remains the primary focus of the assessments, waste reduction and productivity improvements are still commonly recommended.
In addition to providing technical support to small to mid-sized manufacturers through energy assessments, the IAC program offers several other important benefits. Apart from the routine energy audits which cover a broad scope of industrial settings and subsystems, the IACs provide technical material and workshops promoting energy efficiency.
IAC Database: Rutgers University maintains a large database of energy efficiency projects in the industrial sector. The database contains recommendations from every audit completed by an IAC dating back to 1980. As of June 2021, the IAC program had finished 19,470 assessments and made over 146,500 recommendations. [ 4 ] This database is free and open to the public.
IAC Alumni: The IAC program helps train the next generation of energy efficiency engineers. Hundreds of students participate in the program each year, [ 5 ] and over 56% of those students pursue careers in energy or energy efficiency. [ 6 ]
| https://en.wikipedia.org/wiki/Industrial_Assessment_Center |
Industrial Ethernet ( IE ) is the use of Ethernet in an industrial environment with protocols that provide determinism and real-time control. [ 1 ] Protocols for industrial Ethernet include EtherCAT , EtherNet/IP , PROFINET , POWERLINK , SERCOS III , CC-Link IE , and Modbus TCP . [ 1 ] [ 2 ] Many industrial Ethernet protocols use a modified media access control (MAC) layer to provide low latency and determinism. [ 1 ] Some microprocessors provide industrial Ethernet support.
Industrial Ethernet can also refer to the use of standard Ethernet protocols with rugged connectors and extended temperature switches in an industrial environment, for automation or process control . Components used in plant process areas must be designed to work in harsh environments of temperature extremes, humidity, and vibration that exceed the ranges for information technology equipment intended for installation in controlled environments. The use of fiber-optic Ethernet variants reduces the problems of electrical noise and provides electrical isolation.
Some industrial networks emphasized deterministic delivery of transmitted data, whereas Ethernet used collision detection which made transport time for individual data packets difficult to estimate with increasing network traffic. Typically, industrial uses of Ethernet employ full-duplex standards and other methods so that collisions do not unacceptably influence transmission times.
Industrial use requires consideration of the environment in which the equipment must operate. Factory equipment must tolerate a wider range of temperature, vibration, physical contamination and electrical noise than equipment installed in dedicated information-technology wiring closets . Since critical process control may rely on an Ethernet link, the economic cost of interruptions may be high and high availability is therefore an essential criterion. Industrial Ethernet networks must interoperate with both current and legacy systems, and must provide predictable performance and maintainability. In addition to physical compatibility and low-level transport protocols, a practical industrial Ethernet system must also provide interoperability of higher levels of the OSI model . An industrial network must provide security both from intrusions from outside the plant, and inadvertent or unauthorized use within the plant. [ 3 ]
When an industrial network must connect to an office network or external networks, a firewall system can be inserted to control exchange of data between the networks. This network separation preserves the performance and reliability of the industrial network.
Industrial environments are often much harsher than office environments, with exposure to oil sprays, water sprays, and physical vibration, so industrial Ethernet often requires a more rugged and watertight connector on one or both ends of the Cat 5 or Cat 6 cable , such as M12 or M8 connectors, rather than the 8P8C connectors commonly used in homes and businesses. [ 4 ] [ 5 ]
Programmable logic controllers (PLCs) communicate using one of several possible open or proprietary protocols, such as EtherNet/IP , EtherCAT , Modbus , Sinec H1 , Profibus , CANopen , DeviceNet or FOUNDATION Fieldbus . Using standard Ethernet as the underlying transport makes these systems more interoperable .
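As an illustration of how simple the application layer of one of these protocols can be on top of standard Ethernet and TCP/IP, the sketch below reads a block of Modbus TCP holding registers using only Python's standard library. The PLC address and register map are hypothetical, and a production client would add retries, full exception handling and support for the other function codes.

```python
import socket
import struct

def _recv_exact(sock, n):
    """Read exactly n bytes from a TCP socket (recv may return fewer)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed before full Modbus response")
        buf += chunk
    return buf

def read_holding_registers(host, start_addr, count, unit_id=1, port=502):
    """Minimal Modbus TCP read of `count` holding registers (function 0x03)."""
    pdu = struct.pack(">BHH", 0x03, start_addr, count)          # func, addr, qty
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)    # MBAP header
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(mbap + pdu)
        trans, proto, length, unit = struct.unpack(">HHHB", _recv_exact(sock, 7))
        body = _recv_exact(sock, length - 1)                    # func code + data
        if body[0] != 0x03:                                     # exception response
            raise IOError(f"Modbus exception code {body[-1]:#04x}")
        byte_count = body[1]
        return list(struct.unpack(f">{byte_count // 2}H", body[2:2 + byte_count]))

# Hypothetical usage against a PLC at 192.0.2.10 exposing ten registers:
# values = read_holding_registers("192.0.2.10", start_addr=0, count=10)
```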
Some of the advantages over other types of industrial network include:
Difficulties of using industrial Ethernet include: | https://en.wikipedia.org/wiki/Industrial_Ethernet |
Industrial Green Chemistry World (IGCW), previously known as Industrial Green Chemistry Workshop, [ 1 ] is an industrial convention which focuses on expanding, implementing and commercializing green chemistry and green engineering based technologies and products in the chemical industry . [ 2 ] The first event was held in Powai , Mumbai , in 2009. [ 3 ] It is held biennially, and the most recent edition was held on 6–8 November 2023. [ 2 ]
The event is mainly divided into four sub-events - the Symposium , the Expo , the Awards and the Seminars . The IGCW Symposium is a platform where expert speakers from the academia and the industry deliver presentations on green sustainable innovations and achievements. The Expo is a platform for organizations from the chemical and pharmaceutical industry to exhibit their latest green chemistry innovations. The IGCW award recognizes outstanding research and initiatives in green chemistry and engineering to promote innovation in cleaner, cheaper, smarter chemistry developments that have been or can be utilized by the industry to achieve pollution prevention goals. [ 4 ]
IGCW is a biennial event dedicated to the cause of implementing and commercializing green chemistry and engineering on a large scale. The event is organized by Green ChemisTree Foundation and previously in collaboration with Newreka Green Synth Technologies Pvt. Ltd. [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] The first event, which was held in 2009, addressed the need of the Indian chemical industry's future direction with global trends in sustainability, besides exploring opportunities for leveraging industrial green chemistry models for business differentiation and competitiveness. [ 10 ] Over 300 participants from the chemical industry; 65% from chemical companies, 15% from academic and research institutes , 13% students and 7% officials from governments, their associates and societal bodies made up the attendees at the event. [ 11 ]
Industrial Green Chemistry World (IGCW) has a special focus primarily over the four most chemistry-intensive sectors: | https://en.wikipedia.org/wiki/Industrial_Green_Chemistry_World |
The Industry IoT Consortium (IIC) (previously the Industrial Internet Consortium ) is an open-member organization and a program of the Object Management Group (OMG). Founded by AT&T , Cisco , General Electric , IBM , and Intel in March 2014, with the stated goal "to deliver transformative business value to industry, organizations, and society by accelerating the adoption of a trustworthy internet of things". [ 1 ]
As of February 12, 2024, the IIC contains 224 member organizations. [ 2 ] The current executive director of the IIC is William Hoffman, and the current chief technical officer is Chuck Byers. [ 3 ]
The Industry IoT Consortium (IIC) was founded on March 27, 2014 by AT&T, Cisco, General Electric, IBM, and Intel. Though its parent company is the Object Management Group, the IIC is not a standards organization. [ 4 ] Rather, the consortium was formed with the stated goal to bring together industry professionals to promote the development and adoption of Industrial Internet technologies. [ 5 ]
Specifically, IIC members are concerned with "delivering transformative business value to industry, organizations, and society by accelerating the adoption of a trustworthy internet of things". [ 6 ] The IIC Technology Working Group ratified an Industrial Internet reference architecture on June 17, 2015, which defines functional areas, technologies, and standards for IIC members, including sensors, data analytics, and business applications. [ 7 ]
The development of testbeds to demonstrate the real-world implementation of Industrial Internet solutions is one of the goals of the IIC. [ 8 ] As of February 2024, the Consortium has publicly announced 27 testbeds. [ 8 ]
The goal of the Track and Trace testbed is to manage handheld power tools in manufacturing and maintenance environments. This "management" involves efficiently tracking and tracing the usage of these tools to ensure their proper use, prevent their misuse and collect data on their usage and status.
The tools in Track and Trace determine their own precise location and use it to work out the force and work needed to complete an exacting task. In addition, if a tool recognizes that it is being misused, it will promptly power down to avoid accident or injury. Over the two-year project, the testbed participants fine-tuned localization of the tools to 30 centimeters, with the goal of eventually reaching an accuracy of five centimeters; near the start of the project, the accuracy was approximately one meter. These features of Track and Trace were created with the goal of contributing to the safety and quality of the goods produced, as well as increasing productivity in manufacturing.
Over the two-year project, four Industrial Internet Consortium members lent their expertise to the testbed. Bosch supplied the necessary software; Cisco took care of the precision location identification feature; National Instruments interconnected the power tools; and Tech Mahindra was responsible for the application programming. [ 9 ]
Many industries have assets that are critical to their business processes. Availability and efficiency of these assets directly impact service and business. Using predictive analytics , the Asset Efficiency Testbed aims to collect real-time asset information efficiently and accurately and run analytics to make the right decisions in terms of operations, maintenance, overhaul and asset replacement. Infosys , a member of the Industrial Internet Consortium, is leading this project, with contribution from Consortium members Bosch , General Electric , IBM , Intel , National Instruments , and PTC .
Asset Efficiency is a vertical testbed, making it possible for the testbed to be applied to multiple solutions. The testbed will launch in two phases. In the first phase, the testbed will be created for a moving solution, in this case, aircraft landing gear. The focus of this phase will be on the creation of stack and the integration of technologies. In the second phase, the testbed will address fixed assets , like chillers, with the goals of finalizing the architecture and opening up the interfaces.
The Asset Efficiency Testbed monitors, controls and optimizes assets holistically, taking into consideration operational, energy, maintenance, service, and information efficiency, and enhances their performance and utilization. [ 10 ]
Many emerging industrial IoT applications require coordinated, real-time analytics at the "edge", using algorithms that require a scale of computation and data volume/velocity previously seen only in the data center . Frequently, the networks connecting these machines do not provide sufficient capability, bandwidth, reliability, or cost structure to enable analytics-based control or coordination algorithms to run in a separate location from the machines.
Industrial Internet Consortium members Hewlett-Packard and Real-Time Innovation have joined on the Edge Intelligence Testbed. The primary objective of the Edge Intelligence Testbed is to significantly accelerate the development of edge architectures and algorithms by removing the barriers that many developers face, such as access to a wide variety of advanced compute hardware and software configurable to directly resemble state-of-the-art edge systems at very low cost to the tester/developer. [ 11 ]
The Factory Operations Visibility & Intelligence (FOVI) Testbed makes it possible to simulate a factory environment in order to visualize results that can then be used to determine how the process can be optimized. The work on FOVI stems from two separate Operations Visibility and Intelligence applications in two factories in Japan: one for notebook computers and another for network appliances. Both use cases have a lot in common with respect to processing data, analytics, and visualization technologies. Ideally they should use a common software foundation while their future evolution requires a more open architecture .
Work on the testbed will be led by Industrial Internet Consortium member Fujitsu Limited with Industrial Internet Consortium founding member, Cisco , collaborating on the in-factory testbed edge infrastructure. [ 12 ]
The High-Speed Network Infrastructure testbed will introduce high-speed fiber optic lines to support Industrial Internet initiatives. The network will transfer data at 100 gigabits per second to support seamless machine-to-machine communications and data transfer across connected control systems, big infrastructure products, and manufacturing plants.
The 100 gigabit capability extends to the wireless edge, allowing the testbed leaders to provide more data and analytical results to mobile users through advanced communication techniques. Industrial Internet Consortium founder, General Electric , is leading efforts by installing the networking lines at its Global Research Center. Cisco - also a founder of the Consortium - contributed its expertise to the project by providing the infrastructure needed to give the network its national reach. Industrial Internet Consortium members Accenture and Bayshore Networks are currently demonstrating the application of the High-Speed Network Infrastructure for power generation. [ 13 ]
The Industrial Digital Thread (IDT) testbed drives efficiency, speed, and flexibility through digitization and automation of manufacturing processes and procedures. Beginning at design, the seamless digital integration of design systems into manufacturing, leveraging the model-based enterprise , helps to enable virtual manufacturing before even one physical part is created. Sensor enabled automation, manufacturing processes, procedures, and machine data will enable optimization in operations and supply chain. Once the manufacturing process is complete, the digital 'birth certificate' (as built-signature) can then be compared to the as-designed engineering intention. This provides the opportunity for powerful big data analytics to enable service teams and field engineers to have better awareness, insights, and practical actions to improve the servicing and maintenance of critical assets.
The Industrial Digital Thread is a complex and comprehensive concept and it will be implemented in multiple phases. Phase 1 focuses on assembling the software stack, establishing the architecture and connectivity, and addressing one use case around premature wear. Throughout Phase 1, the testbed will be run by IIC members General Electric and Infosys . In subsequent phases, this testbed will be able to support multiple use cases in design, manufacturing, services and supply-chain optimization. At this time, additional members will be invited to join. [ 14 ]
The goal of the International Future Industrial Internet Testbed (INFINITE) is to develop software-defined infrastructures to drive the growth of Industrial Internet products and services. INFINITE uses Big Data and Software-Defined Networking not only to create completely virtual domains, but also to run multiple virtual domains securely over one physical network, making it well suited to mission-critical systems. INFINITE also makes it possible to connect to these virtual domains through mobile networks.
Industrial Internet Consortium member, EMC Corporation , is leading the INFINITE testbed. Also contributing their expertise to this project is Industrial Internet Consortium member Cork Institute of Technology , as well as Vodafone , the Irish Government Networks, Asavie and Cork Internet Exchange.
The testbed will unfold in two phases in Ireland. In Phase One, three geographically dispersed data centers will be interconnected into a reconfigured EMC network. In Phase Two, INFINITE will be applied to a use case called "Bluelight". Bluelight will allow ambulances to securely connect to a hospital's system and relay information while en route, so hospital staff are prepared to take over the care of the patient once the ambulance arrives.
The INFINITE testbed is open to any Industrial Internet Consortium member as well as interested nonmember companies that have a concept for an IoT-enabled solution requiring mobile communication and a dynamic configuration environment. [ 15 ]
The Condition Monitoring and Predictive Maintenance Testbed (CM/PM) will demonstrate the value and benefits of continuously monitoring industrial equipment to detect early signs of performance degradation or failure. CM/PM will also use modern analytical technologies to allow organizations to not only detect problems but proactively recommend actions for operations and maintenance personnel to correct the problem.
Condition Monitoring (CM) is the use of sensors in equipment to gather data and enable users to centrally monitor the data in real time. Predictive Maintenance (PM) applies analytical models and rules against the data to proactively predict an impending issue and then deliver recommendations to operations, maintenance and IT departments to address it. These capabilities enable new ways to monitor the operation of equipment - such as turbines and generators - and of processes, and to adopt proactive maintenance and repair procedures rather than fixed, schedule-based procedures. This can save money on maintenance and repair and avoid the cost and lost productivity of downtime caused by equipment failures. Furthermore, combining sensor data from multiple pieces of equipment and/or multiple processes can provide deeper insight into the overall impact of faulty or sub-optimal equipment, allowing organizations to identify and resolve problems before they affect operations and to improve the quality and efficiency of industrial processes.
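As an illustration only (not drawn from the CM/PM testbed itself), a minimal condition-monitoring rule can be written as a rolling-statistics check on a sensor stream; the sensor, window size and 3-sigma limit below are assumptions chosen for this sketch rather than details of the actual testbed.

    from collections import deque
    from statistics import mean, stdev

    def make_monitor(window=50, sigma_limit=3.0, min_baseline=5):
        """Return a checker that flags readings drifting far outside recent history."""
        history = deque(maxlen=window)

        def check(reading):
            alarm = False
            if len(history) >= min_baseline:
                mu, sd = mean(history), stdev(history)
                if sd > 0 and abs(reading - mu) > sigma_limit * sd:
                    alarm = True  # would be handed to operations/maintenance as a work item
            history.append(reading)
            return alarm

        return check

    # Hypothetical vibration readings from a turbine bearing; the last one should alarm.
    check_vibration = make_monitor()
    for value in [0.31, 0.30, 0.33, 0.29, 0.32, 0.95]:
        if check_vibration(value):
            print("early warning: vibration outside the expected band:", value)

A production system would of course use richer models (trend, seasonality, physics-based features), but the structure - stream in, compare against learned behaviour, raise a recommendation - is the same.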
Through this testbed, the testbed leaders IBM and National Instruments will explore the application of a variety of analytics technologies for condition monitoring and predictive maintenance . The testbed application will initially be deployed to a power plant facility where performance and progress will be reported on, additional energy equipment will be added, and new models will be developed. It will then be expanded to adjacent, as yet to be determined, industries. [ 16 ]
Smart Airline Baggage Management Testbed
The Smart Airline Baggage Management testbed , part of a broader aviation ecosystem vision, is aimed at reducing the instances of delayed, damaged and lost bags, thereby lowering the airlines' economic risk exposure; increasing the ability to track and report on baggage, including location and weight changes, to prevent theft and loss; and improving customer satisfaction through better communication, including offering new value-added services to frequent flyers.
The testbed is also aimed at helping airlines address the new baggage handling requirements set out by IATA in Resolution 753 requiring airlines to implement more comprehensive acquisition and delivery solutions for baggage tracking and handling by June 2018. This target is also outlined in the broader IATA 2015 White Paper titled "Simplifying the Business."
As of September 2021, the IIC has six working groups: Technology, Security, Liaison, Marketing, Industry and Digital Transformation. The last two reflect the drive to enable technology end users to deploy technology in their businesses and transform them digitally. (The Industry Working Group used to be called the Testbed Working Group, but now includes test drives and challenges, and groups focused on specific verticals. The Digital Transformation Working Group used to be named Business Strategy and Solution Lifecycle, but has now broadened its remit.) Each working group has a number of subgroups to address specific challenges. Each IIC member company can assign company representatives to these groups. [ 17 ] | https://en.wikipedia.org/wiki/Industrial_Internet_Consortium
An industrial PC is a computer intended for industrial purposes ( production of goods and services ), with a form factor between a nettop and a server rack . Industrial PCs have higher dependability and precision standards, and are generally more expensive than consumer electronics . They often use complex instruction sets , such as x86 , where reduced instruction sets such as ARM would otherwise be used.
IBM released the 5531 Industrial Computer in 1984, [ 1 ] arguably the first "industrial PC". The IBM 7531, an industrial version of the IBM PC AT, was released on May 21, 1985. [ 2 ] Industrial Computer Source first offered the 6531 Industrial Computer [ 3 ] in 1985. This was a proprietary 4U rackmount industrial computer based on a clone IBM PC motherboard.
Industrial PCs are primarily used for process control and/or data acquisition. In some cases, an industrial PC is simply used as a front-end to another control computer in a distributed processing environment. Software can be custom written for a particular application, or an off-the-shelf package such as TwinCAT , Wonderware, Labtech Notebook or LabVIEW can be used to provide a base level of programming.
Analog Devices obtained exclusive OEM sales rights for the European industrial market and offered the MACSYM 120, which combined the IBM 5531 with MACBASIC, a multitasking BASIC running on C/CPM from Digital Research. Analog and digital I/O cards plugged into the PC and/or an extension rack made the MAC 120 one of the most powerful and easy-to-use controllers for plant applications at that time.
An application may simply require the I/O such as the serial port offered by the motherboard. In other cases, expansion cards are installed to provide analog and digital I/O, specific machine interface, expanded communications ports, and so forth, as required by the application.
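As a sketch of the data-acquisition role described above, the following assumes a hypothetical instrument that reports readings as "NAME=value" text lines over a serial port; the port name, baud rate and message format are invented for the example, and the third-party pyserial package is used for port access.

    import serial  # pyserial: pip install pyserial

    PORT, BAUD = "/dev/ttyS0", 9600  # assumed wiring; real values depend on the hardware

    with serial.Serial(PORT, BAUD, timeout=1.0) as link:
        for _ in range(10):
            raw = link.readline().decode("ascii", errors="ignore").strip()
            if raw:                      # e.g. "TEMP=72.4" in the assumed protocol
                name, _, value = raw.partition("=")
                print(name, float(value))

In a real installation the same loop would typically log to a historian or feed a control computer rather than print to the console.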
Industrial PCs offer different features than consumer PCs in terms of reliability, compatibility, expansion options and long-term supply.
Industrial PCs are typically characterized by being manufactured in lower volumes than home or office PCs. A common category of industrial PC is the 19-inch rackmount form factor. Industrial PCs typically cost considerably more than comparable office-style computers with similar performance. Single-board computers and backplanes are used primarily in industrial PC systems. However, the majority of industrial PCs are manufactured with COTS motherboards.
A subset of industrial PCs is the panel PC, where a display, typically an LCD, is incorporated into the same enclosure as the motherboard and other electronics. These are typically panel mounted and often incorporate touch screens for user interaction. They are offered in low-cost versions with no environmental sealing, in heavier-duty models sealed to IP67 standards so that the front panel is waterproof, and in explosion-proof models for installation in hazardous environments.
Virtually all industrial PCs share an underlying design philosophy of providing a controlled environment for the installed electronics to survive the rigors of the plant floor. The electronic components themselves may be selected for their ability to withstand higher and lower operating temperatures than typical commercial components. | https://en.wikipedia.org/wiki/Industrial_PC |
Industrial Union Department v. American Petroleum Institute (also known as the Benzene Case ), 448 U.S. 607 (1980), was a case decided by the Supreme Court of the United States . [ 1 ] This case represented a challenge to the OSHA practice of regulating carcinogens by setting the exposure limit "at the lowest technologically feasible level that will not impair the viability of the industries regulated." OSHA selected that standard because it believed that (1) it could not determine a safe exposure level and that (2) the authorizing statute did not require it to quantify such a level. [ 2 ] The AFL Industrial Union Department served as the petitioner; the American Petroleum Institute was the respondent. A plurality on the Court, led by Justice Stevens, wrote that the authorizing statute did indeed require OSHA to demonstrate a significant risk of harm (albeit not with mathematical certainty) in order to justify setting a particular exposure level.
Perhaps more important than the specific holding of the case, the Court noted in dicta that if the government's interpretation of the authorizing statute had been correct, it might violate the nondelegation doctrine . This line of reasoning may represent the "high-water mark" of recent attempts to revive the doctrine. [ 3 ]
The Occupational Safety and Health Act of 1970 delegated broad authority to the Secretary of Labor to promulgate standards to ensure safe and healthful working conditions for the Nation's workers (the Occupational Safety and Health Administration (OSHA) being the agency responsible for carrying out this authority). According to Section 3(8), standards created by the Secretary must be “reasonably necessary or appropriate to provide safe or healthful employment and places of employment.” Section 6(b)(5) of the statute sets the principle for creating the safety regulations, directing the Secretary to “set the standard which most adequately assures, to the extent feasible , on the basis of the best available evidence, that no employee will suffer material impairment of health or functional capacity…”. [ 4 ] [ 2 ] At issue in the case is the Secretary's interpretation of "extent feasible" to mean that if a material is unsafe, he must “set an exposure limit at the lowest technologically feasible level that will not impair the viability of the industries regulated.”
The Court held that the Secretary applied the Act inappropriately. To comply with the statute, the Secretary must (1) determine that a substance poses a significant health risk at a particular exposure threshold and (2) decide whether to issue the most protective standard or a standard that weighs costs and benefits. Here, the Secretary failed to first determine that the chemical benzene posed a significant health risk when workers were exposed at 1 part per million; the data only suggested the chemical was unsafe at 10 parts per million. Thus, the Secretary had failed the first step of interpreting the statute, that is, finding that the substance posed a risk at that level. [ 2 ]
In its reasoning, the Court noted it would be unreasonable to conclude that Congress intended to give the Secretary “unprecedented power over American industry.” Such a delegation of power would likely be unconstitutional. The Court also cited the legislative history of the Act, which suggested that Congress meant to address major workplace hazards, not hazards with low statistical likelihoods. [ 2 ]
In a famous concurrence, Justice Rehnquist argued that section 6(b)(5) of the statute, which set forth the "extent feasible" principle, should be struck down on the basis of the non-delegation doctrine. The non-delegation doctrine, which has been recognized by the Supreme Court since the era of Chief Justice Marshall , holds that Congress cannot delegate law-making authority to other branches of government. Rehnquist offered three rationales for the application of the non-delegation doctrine. First, it ensures that Congress, rather than the agencies, makes social policy; delegation should be used only when the policy question is highly technical or the ground to be covered too large. Second, an agency exercising delegated authority requires an “intelligible principle” to guide its discretion, which was lacking in this case. Third, the intelligible principle must provide judges with a measuring stick for judicial review. [ 2 ]
Some scholars [ who? ] have said that the interpretation of the statute ignored a foundational principle of statutory interpretation, generalia specialibus non derogant ("the general does not derogate from the specific"). Generally, specific language governs general language. In this case, the court read the more general provision of Section 3(8) as governing the specific process specified in Section 6(b)(5).
The case also marks the current state of affairs for the non-delegation doctrine. [ 5 ] When the court is faced with a provision that appears to be an impermissible delegation of the authority, it will use tools of statutory interpretation to try to narrow the delegation of power. | https://en.wikipedia.org/wiki/Industrial_Union_Department_v._American_Petroleum_Institute |
The Industrial Union of Chemicals, Glass and Ceramics ( German : Industriegewerkschaft Chemie, Glas und Keramik , IG CGK) was a trade union representing workers in various industries in East Germany .
The union was founded by the Free German Trade Union Federation in 1946, initially as the Industrial Union of Chemicals, Paper, Stone and Earth . It initially had 230,464 members. In 1947, its name was changed to the Industrial Union of Chemicals, Paper and Ceramics , and then in 1950 it was shortened to the Industrial Union of Chemicals . [ 1 ]
The remit of the union also changed over the years. In 1955, its members in the building materials sector were transferred to the Industrial Union of Construction and Wood , and in 1956 various members moved to the Industrial Union of the Local Economy, although they returned in 1958. The biggest changes came in 1957, when the union's headquarters moved from Berlin to Halle , and its members in textile manufacturing and forestry were transferred to other unions. [ 1 ]
Internationally, the union affiliated to the Trade Unions International of Chemical, Oil and Allied Workers . The union became involved in sports associations , their names starting with "SV Chemie". [ 1 ]
The membership of the union continued to change until 1972, when it also adopted its final name, the "Industrial Union of Chemicals, Glass and Ceramics". In addition to these areas, it also represented workers in the paper and petroleum industries, and in waste disposal. [ 1 ]
By 1989, the union had 531,301 members. It became independent in April 1990. It began working closely with the Chemical, Paper and Ceramic Union , and gradually merged into it, completing the process in June 1991. [ 1 ] | https://en.wikipedia.org/wiki/Industrial_Union_of_Chemicals,_Glass_and_Ceramics |
Industrial agitators are machines used to stir or mix fluids in process industries such as the chemical , food , pharmaceutical and cosmetic industries . [ 1 ] Their uses include:
Several different kinds of industrial agitators exist:
The choice of the agitator depends on the phase that needs to be mixed (one or several phases): liquids only, liquid and solid, liquid and gas or liquid with solids and gas.
Depending on the type of phase and the viscosity of the bulk, the agitator may be called a mixer, kneader, dough mixer, amongst others. Agitators used in liquids can be placed on the top of the tank in a vertical position, horizontally on the side of the tank, or less commonly, on the bottom of the tank.
The agitation is achieved by movement of the heterogeneous mass (liquid-solid phase). In mechanical agitators, this is the result of the rotation of an impeller. The bulk can be composed of different substances, and the aim of the operation is to blend it or to improve the efficiency of a reaction through better contact between reactive products. Agitation may also be used to increase heat transfer or to maintain particles in suspension.
The agitation of liquid is made by one or several agitation impellers .
Depending on its shape, the impeller can generate a predominantly pumping (circulating flow) action or a shearing, turbulence-generating action. Both phenomena consume energy.
Propellers (marine or hydrofoil ) produce flow that enters and leaves in the axial direction, preferably downward; they are characterized by good pumping flow, low energy consumption, and low shear and turbulence. An impeller is a rotor that produces a sucking force and forms part of a pump.
Turbines (flat-blade or pitched-blade), whose inlet flow is axial and outlet flow is radial, provide shearing and turbulence and need approximately 20 times more energy than propellers of the same diameter running at the same rotation speed.
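The size of this energy difference is usually expressed through the impeller's dimensionless power number. As a standard engineering relation (not specific to any particular agitator design), the shaft power drawn in fully turbulent flow is commonly estimated as

    P = N_p \, \rho \, N^3 \, D^5

where N_p is the impeller power number, \rho the fluid density, N the rotation speed (in revolutions per second) and D the impeller diameter. Turbines have considerably larger power numbers than propellers, which is consistent with the energy difference noted above; the exact factor depends on geometry and flow regime.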
An agitator is composed of a drive device (motor, gear reducer, belts, etc.), a shaft-guiding system (a lantern fitted with bearings), a shaft and impellers .
If the operating conditions involve high pressure or high temperature, the agitator must be equipped with a sealing system to keep the inside of the tank tight where the shaft passes through it.
If the shaft is long (> 10m), it can be guided by a bearing located in the bottom of the tank (bottom bearing). | https://en.wikipedia.org/wiki/Industrial_agitator |
Industrial and production engineering ( IPE ) is an interdisciplinary engineering discipline that includes manufacturing technology, engineering sciences, management science , and optimization of complex processes , systems , or organizations . It is concerned with the understanding and application of engineering procedures in manufacturing processes and production methods. [ 1 ] Industrial engineering dates back to the industrial revolution, initiated in the 1700s, and was shaped by figures such as Adam Smith , Eli Whitney , F.W. Taylor , Frank and Lillian Gilbreth , Henry Gantt , and Henry Ford . After the 1970s, industrial and production engineering developed worldwide and began to make wide use of automation and robotics. Industrial and production engineering includes three areas: mechanical engineering (from which production engineering derives), industrial engineering , and management science .
The objective is to improve efficiency, drive up effectiveness of manufacturing, quality control, and to reduce cost while making their products more attractive and marketable. Industrial engineering is concerned with the development, improvement, and implementation of integrated systems of people, money, knowledge, information, equipment, energy, materials, as well as analysis and synthesis. The principles of IPE include mathematical, physical and social sciences and methods of engineering design to specify, predict, and evaluate the results to be obtained from the systems or processes currently in place or being developed. [ 2 ] The target of production engineering is to complete the production process in the smoothest, most-judicious and most-economic way. Production engineering also overlaps substantially with manufacturing engineering and industrial engineering . [ 3 ] The concept of production engineering is interchangeable with manufacturing engineering.
As for education, undergraduates normally start by taking courses such as physics, mathematics (calculus, linear analysis, differential equations), computer science, and chemistry. In the later years of their undergraduate careers, they take more major-specific courses such as production and inventory scheduling, process management , CAD/CAM manufacturing, and ergonomics . In some parts of the world, universities offer a combined bachelor's degree in industrial and production engineering, while most universities in the U.S. offer the two degrees separately. Career paths open to industrial and production engineers include plant engineering , manufacturing engineering , quality engineering , process engineering , industrial management, project management , and roles in manufacturing , production and distribution. Across these career paths, most industrial and production engineers earn a starting salary of at least $50,000.
The roots of the industrial engineering profession date back to the Industrial Revolution . The technologies that helped mechanize traditional manual operations in the textile industry including the Flying shuttle , the Spinning jenny , and perhaps most importantly the Steam engine generated Economies of scale that made Mass production in centralized locations attractive for the first time. The concept of the production system had its genesis in the factories created by these innovations. [ 4 ]
Adam Smith's concepts of division of labour and the "invisible hand" of capitalism introduced in his treatise " The Wealth of Nations " motivated many of the technological innovators of the Industrial Revolution to establish and implement factory systems. The efforts of James Watt and Matthew Boulton led to the first integrated machine manufacturing facility in the world, including the implementation of concepts such as cost control systems to reduce waste and increase productivity and the institution of skills training for craftsmen. [ 4 ]
Charles Babbage became associated with industrial engineering because of the concepts he introduced in his book "On the Economy of Machinery and Manufactures", which he wrote as a result of his visits to factories in England and the United States in the early 1800s. The book covers subjects such as the time required to perform a specific task, the effects of subdividing tasks into smaller and less detailed elements, and the advantages to be gained from repetitive tasks. [ 4 ]
Eli Whitney and Simeon North proved the feasibility of the notion of interchangeable parts in the manufacture of muskets and pistols for the US Government. Under this system, individual parts were mass-produced to tolerances to enable their use in any finished product. The result was a significant reduction in the need for skill from specialized workers, which eventually led to the industrial environment to be studied later. [ 4 ]
From 1960 to 1975, with the development of decision support systems for supply such as material requirements planning (MRP), greater emphasis could be placed on the timing issues (inventory, production, compounding, transportation, etc.) of industrial organization. Israeli scientist Dr. Jacob Rubinovitz installed the CMMS program developed in IAI and Control-Data (Israel) in 1976 in South Africa and worldwide. [ 5 ]
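As a minimal illustration of the timing logic behind MRP (a simplified sketch, not any particular commercial package), the snippet below nets gross requirements against projected inventory period by period and plans an order whenever availability would go negative; the item data and lot size are invented for the example.

    def mrp_net(gross, on_hand, scheduled_receipts, lot_size=50):
        """Plan order receipts per period for one item (simplified MRP netting)."""
        planned = []
        available = on_hand
        for period, demand in enumerate(gross):
            available += scheduled_receipts.get(period, 0)
            shortfall = demand - available
            order = 0
            if shortfall > 0:
                order = -(-shortfall // lot_size) * lot_size  # round up to the lot size
            available += order - demand
            planned.append(order)
        return planned

    # Hypothetical demand over six periods with one scheduled receipt in period 1
    print(mrp_net(gross=[20, 0, 60, 10, 80, 0], on_hand=30, scheduled_receipts={1: 40}))
    # -> [0, 0, 50, 0, 50, 0]

Real MRP additionally offsets each planned order by the item's lead time and explodes it through the bill of materials to generate requirements for components.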
In the seventies, with the penetration of Japanese management theories such as Kaizen and Kanban , Japan realized very high levels of quality and productivity. These theories improved issues of quality, delivery time, and flexibility. Companies in the west realized the great impact of Kaizen and started implementing their own Continuous improvement programs. [ 5 ]
In the nineties, following the global industry globalization process, the emphasis was on supply chain management, and customer-oriented business process design. Theory of constraints developed by an Israeli scientist Eliyahu M. Goldratt (1985) is also a significant milestone in the field. [ 5 ]
Modern manufacturing engineering studies include all intermediate processes required for the production and integration of a product's components.
Some industries, such as semiconductor and steel manufacturers use the term "fabrication" for these processes.
Automation is used in different processes of manufacturing such as machining and welding. Automated manufacturing refers to the application of automation to produce goods in a factory. The main advantages of automated manufacturing for the manufacturing process are realized with effective implementation of automation and include: higher consistency and quality, reduction of lead times, simplification of production, reduced handling, improved work flow, and improved worker morale. [ 6 ]
Robotics is the application of mechatronics and automation to create robots, which are often used in manufacturing to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any shape and size, but all are preprogrammed and interact physically with the world. To create a robot, an engineer typically employs kinematics (to determine the robot's range of motion) and mechanics (to determine the stresses within the robot). Robots are used extensively in manufacturing engineering. [ 7 ]
Robots allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform economically, and to ensure better quality. Many companies employ assembly lines of robots, and some factories are so robotized that they can run by themselves. Outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. Robots are also sold for various residential applications. [ 7 ]
Industrial engineering is the branch of engineering that involves figuring out how to make or do things better. Industrial engineers are concerned with reducing production costs, increasing efficiency, improving the quality of products and services, ensuring worker health and safety, protecting the environment and complying with government regulations. [ 8 ]
The various fields and topics that industrial engineers are involved with include:
Examples of where industrial engineering might be used include flow process charting, process mapping, designing an assembly workstation, strategizing for various operational logistics, consulting as an efficiency expert, developing a new financial algorithm or loan system for a bank, streamlining operation and emergency room location or usage in a hospital, planning complex distribution schemes for materials or products (referred to as supply-chain management ), and shortening lines (or queues ) at a bank, hospital, or a theme park. [ 26 ]
Modern industrial engineers typically use predetermined motion time system , computer simulation (especially discrete event simulation ), along with extensive mathematical tools for modeling, such as mathematical optimization and queueing theory , and computational methods for system analysis, evaluation, and optimization. Industrial engineers also use the tools of data science and machine learning in their work owing to the strong relatedness of these disciplines with the field and the similar technical background required of industrial engineers (including a strong foundation in probability theory , linear algebra , and statistics , as well as having coding skills). [ 5 ]
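For instance, the single-server M/M/1 queueing model that an industrial engineer might apply to the bank or hospital lines mentioned above reduces to a few closed-form expressions; the arrival and service rates below are illustrative numbers, not data from any real system.

    def mm1_metrics(arrival_rate, service_rate):
        """Steady-state M/M/1 metrics (valid only when arrival_rate < service_rate)."""
        rho = arrival_rate / service_rate            # server utilization
        return {
            "utilization": rho,
            "avg_in_system": rho / (1 - rho),                           # L
            "avg_time_in_system": 1 / (service_rate - arrival_rate),    # W
            "avg_wait_in_queue": rho / (service_rate - arrival_rate),   # Wq
        }

    # Hypothetical teller serving 12 customers/hour with 9 arrivals/hour
    print(mm1_metrics(arrival_rate=9, service_rate=12))
    # utilization 0.75, about 3 customers in the system, 0.25 h (15 min) average queue wait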
Manufacturing Engineering is based on core industrial engineering and mechanical engineering skills, adding important elements from mechatronics, commerce, economics and business management. [ 27 ] This field also deals with the integration of different facilities and systems for producing quality products (with optimal expenditure) by applying the principles of physics and the results of manufacturing systems studies, [ 28 ] such as the following:
Manufacturing engineers develop and create physical artifacts, production processes, and technology. It is a very broad area which includes the design and development of products. Manufacturing engineering is considered to be a sub-discipline of industrial engineering / systems engineering and has very strong overlaps with mechanical engineering . Manufacturing engineers' success or failure directly impacts the advancement of technology and the spread of innovation. This field of manufacturing engineering emerged from tool and die discipline in the early 20th century. It expanded greatly from the 1960s when industrialized countries introduced factories with:
1. Numerical control machine tools and automated systems of production. [ 29 ]
2. Advanced statistical methods of quality control : These factories were pioneered by the American electrical engineer William Edwards Deming , who was initially ignored by his home country. The same methods of quality control later turned Japanese factories into world leaders in cost-effectiveness and production quality.
3. Industrial robots on the factory floor, introduced in the late 1970s: These computer-controlled welding arms and grippers could perform simple tasks such as attaching a car door quickly and flawlessly 24 hours a day. This cut costs and improved production speed. [ 30 ]
In the United States the undergraduate degree earned is the Bachelor of Science (B.S.) or Bachelor of Science and Engineering (B.S.E.) in Industrial Engineering (IE). Variations of the title include Industrial & Operations Engineering (IOE), and Industrial & Systems Engineering (ISE). The typical curriculum includes a broad math and science foundation spanning chemistry , physics , mechanics (i.e., statics, kinematics, and dynamics), materials science, computer science, electronics/circuits, engineering design , and the standard range of engineering mathematics (i.e. calculus , linear algebra , differential equations , statistics ). For any engineering undergraduate program to be accredited, regardless of concentration, it must cover a largely similar span of such foundational work – which also overlaps heavily with the content tested on one or more engineering licensure exams in most jurisdictions.
The coursework specific to IE entails specialized courses in areas such as optimization , applied probability , stochastic modeling, design of experiments , statistical process control , simulation , manufacturing engineering , ergonomics / safety engineering , and engineering economics . Industrial engineering elective courses typically cover more specialized topics in areas such as manufacturing , supply chains and logistics , analytics and machine learning , production systems , human factors and industrial design , and service systems . [ 31 ] [ 32 ] [ 33 ] [ 34 ] [ 35 ]
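As one small, concrete instance of the statistical process control topic listed above, a Shewhart-style individuals chart flags measurements outside the center line plus or minus three sigma; the measurement values below are invented, and the limits use the overall sample standard deviation rather than the moving-range estimate a production implementation would more likely use.

    from statistics import mean, stdev

    def control_limits(samples, k=3.0):
        """Center line and +/- k-sigma limits for an individuals control chart."""
        center, spread = mean(samples), stdev(samples)
        return center - k * spread, center, center + k * spread

    measurements = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 9.7]
    lcl, cl, ucl = control_limits(measurements)
    flagged = [x for x in measurements if not lcl <= x <= ucl]
    print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}  out-of-control points: {flagged}")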
Certain business schools may offer programs with some overlapping relevance to IE, but the engineering programs are distinguished by a much more intensely quantitative focus, required engineering science electives, and the core math and science courses required of all engineering programs.
The usual graduate degree earned is the Master of Science (MS) or Master of Science and Engineering (MSE) in Industrial Engineering or various alternative related concentration titles. Typical MS curricula may cover:
Manufacturing engineers possess an associate's or bachelor's degree in engineering with a major in manufacturing engineering. The length of study for such a degree is usually two to five years followed by five more years of professional practice to qualify as a professional engineer. Working as a manufacturing engineering technologist involves a more applications-oriented qualification path.
Academic degrees for manufacturing engineers are usually the Associate or Bachelor of Engineering, [BE] or [BEng], and the Associate or Bachelor of Science, [BS] or [BSc]. For manufacturing technologists the required degrees are Associate or Bachelor of Technology [B.TECH] or Associate or Bachelor of Applied Science [BASc] in Manufacturing, depending upon the university. Master's degrees in engineering manufacturing include Master of Engineering [ME] or [MEng] in Manufacturing, Master of Science [M.Sc] in Manufacturing Management, Master of Science [M.Sc] in Industrial and Production Management, and Master of Science [M.Sc] as well as Master of Engineering [ME] in Design, which is a subdiscipline of manufacturing. Doctoral [PhD] or [DEng] level courses in manufacturing are also available depending on the university.
The undergraduate degree curriculum generally includes courses in physics, mathematics, computer science, project management, and specific topics in mechanical and manufacturing engineering. Initially such topics cover most, if not all, of the subdisciplines of manufacturing engineering. Students then choose to specialize in one or more sub disciplines towards the end of their degree work.
Specific to industrial engineers, courses cover ergonomics, scheduling, inventory management, forecasting, product development, and, in general, topics that focus on optimization. Most colleges break down the large sections of industrial engineering into healthcare, ergonomics, product development, or consulting sectors. This allows the student to get a good grasp of each of the sub-sectors and decide which area they are most interested in pursuing as a career.
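One standard result from such inventory coursework is the economic order quantity (EOQ), the lot size that balances ordering and holding costs; the demand and cost figures below are made up for illustration.

    from math import sqrt

    def eoq(annual_demand, order_cost, holding_cost_per_unit_year):
        """Economic order quantity: lot size minimizing ordering plus holding cost."""
        return sqrt(2 * annual_demand * order_cost / holding_cost_per_unit_year)

    # Hypothetical item: 12,000 units/year, $80 per order, $3 per unit-year to hold
    print(round(eoq(12000, 80, 3)))  # -> 800 units per order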
The foundational curriculum for a bachelor's degree in manufacturing engineering or production engineering includes the syllabus outlined below. This syllabus is closely related to industrial engineering and mechanical engineering, but differs by placing more emphasis on manufacturing science or production science. It includes the following:
A degree in manufacturing engineering typically differs from one in mechanical engineering by only a few specialized classes. A mechanical engineering degree focuses more on the product design process and on complex products, which requires more mathematical expertise.
A Professional Engineer , PE, is a licensed engineer who is permitted to offer professional services to the public. Professional Engineers may prepare, sign, seal, and submit engineering plans to the public. Before a candidate can become a professional engineer, they will need to receive a bachelor's degree from an ABET recognized university in the US, take and pass the Fundamentals of Engineering exam to become an "engineer-in-training", and work four years under the supervision of a professional engineer. After those tasks are complete the candidate will be able to take the PE exam. Upon receiving a passing score on the test, the candidate will receive their PE License . [ 36 ]
The SME (society) administers qualifications specifically for the manufacturing industry. These are not degree level qualifications and are not recognized at the professional engineering level. The SME offers two certifications for Manufacturing engineers: Certified Manufacturing Technologist Certificate (CMfgT) and Certified Manufacturing Engineer (CMfgE).
Qualified candidates for the Certified Manufacturing Technologist Certificate (CMfgT) must pass a three-hour, 130-question multiple-choice exam. The exam covers math, manufacturing processes, manufacturing management, automation, and related subjects. A score of 60% or higher must be achieved to pass the exam. Additionally, a candidate must have at least four years of combined education and manufacturing-related work experience. The CMfgT certification must be renewed every three years in order to stay certified. [ 37 ]
Certified Manufacturing Engineer (CMfgE) is an engineering qualification administered by the Society of Manufacturing Engineers, Dearborn, Michigan, USA. Candidates qualifying for a Certified Manufacturing Engineer credential must pass a four-hour, 180-question multiple-choice exam which covers more in-depth topics than the CMfgT exam. A score of 60% or higher must be achieved to pass the exam. CMfgE candidates must also have eight years of combined education and manufacturing-related work experience, with a minimum of four years of work experience. The CMfgE certification must be renewed every three years in order to stay certified. [ 38 ]
The human factors area specializes in exploring how systems fit the people who must operate them, determining the roles of people with the systems, and selecting those people who can best fit particular roles within these systems. Students who focus on Human Factors will be able to work with a multidisciplinary team of faculty with strengths in understanding cognitive behavior as it relates to automation, air and ground transportation, medical studies, and space exploration.
The production systems area develops new solutions in areas such as engineering design, supply chain management (e.g. supply chain system design, error recovery , large scale systems), manufacturing (e.g. system design, planning and scheduling), and medicine (e.g. disease diagnosis, discovery of medical knowledge ). Students who focus on production systems will be able to work on topics related to computational intelligence theories for applications in industry, healthcare, and service organizations.
The objective of the reliability systems area is to provide students with advanced data analysis and decision making techniques that will improve quality and reliability of complex systems. Students who focus on system reliability and uncertainty will be able to work on areas related to contemporary reliability systems including integration of quality and reliability, simultaneous life cycle design for manufacturing systems, decision theory in quality and reliability engineering, condition-based maintenance and degradation modeling, discrete event simulation and decision analysis.
The Wind Power Management Program aims at meeting the emerging needs for graduating professionals involved in design, operations, and management of wind farms deployed in massive numbers all over the country. The graduates will be able to fully understand the system and management issues of wind farms and their interactions with alternative and conventional power generation systems. [ 39 ]
A flexible manufacturing system (FMS) is a manufacturing system in which there is some amount of flexibility that allows the system to react to changes, whether predicted or unpredicted. This flexibility is generally considered to fall into two categories, both of which have numerous subcategories. The first category, machine flexibility, covers the system's ability to be changed to produce new product types and the ability to change the order of operations executed on a part. The second category, called routing flexibility, consists of the ability to use multiple machines to perform the same operation on a part, as well as the system's ability to absorb large-scale changes, such as in volume, capacity, or capability.
Most FMS systems comprise three main systems. The work machines, which are often automated CNC machines, are connected by a material handling system to optimize parts flow, and to a central control computer, which controls material movements and machine flow. The main advantages of an FMS is its high flexibility in managing manufacturing resources like time and effort in order to manufacture a new product. The best application of an FMS is found in the production of small sets of products from a mass production.
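A toy dispatching rule makes the routing-flexibility idea concrete: each operation is sent to the least-loaded machine capable of performing it. The machine names, capabilities and processing times below are assumptions for the sketch, not data from a real cell.

    def route_jobs(jobs, capabilities):
        """Assign each (job, operation, minutes) to the least-loaded capable machine."""
        load = {m: 0 for m in capabilities}
        plan = []
        for job, op, minutes in jobs:
            candidates = [m for m, ops in capabilities.items() if op in ops]
            machine = min(candidates, key=lambda m: load[m])
            load[machine] += minutes
            plan.append((job, op, machine))
        return plan, load

    capabilities = {"CNC-1": {"mill", "drill"}, "CNC-2": {"mill"}, "CNC-3": {"drill", "turn"}}
    jobs = [("A", "mill", 30), ("B", "mill", 20), ("C", "drill", 15), ("D", "turn", 25)]
    plan, load = route_jobs(jobs, capabilities)
    print(plan)   # each operation routed to a capable, lightly loaded machine
    print(load)   # resulting minutes of work per machine

The routing flexibility shows up in the fact that milling and drilling each have more than one machine able to absorb the work if another machine is busy or down.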
Computer-integrated manufacturing (CIM) in engineering is a method of manufacturing in which the entire production process is controlled by computer. Traditionally separated process methods are joined through a computer by CIM. This integration allows the processes to exchange information and to initiate actions. Through this integration, manufacturing can be faster and less error-prone, although the main advantage is the ability to create automated manufacturing processes. Typically CIM relies on closed-loop control processes based on real-time input from sensors. It is also known as flexible design and manufacturing.
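A minimal sketch of the closed-loop idea follows, with a hypothetical temperature sensor and heater and a plain proportional controller; a real CIM installation would use dedicated control hardware, industrial protocols and more sophisticated control laws.

    def proportional_step(setpoint, measurement, output, gain=0.5,
                          out_min=0.0, out_max=100.0):
        """One closed-loop update: nudge the actuator output toward the setpoint."""
        error = setpoint - measurement
        return max(out_min, min(out_max, output + gain * error))

    # Hypothetical oven: drive heater power (%) from successive temperature readings (C)
    heater = 20.0
    for temperature in [150.0, 165.0, 172.0, 178.0, 181.0]:
        heater = proportional_step(setpoint=180.0, measurement=temperature, output=heater)
        print(f"measured {temperature:5.1f} C -> heater {heater:5.1f} %")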
Friction stir welding was discovered in 1991 by The Welding Institute (TWI). This innovative steady state (non-fusion) welding technique joins previously un-weldable materials, including several aluminum alloys . It may play an important role in the future construction of airplanes, potentially replacing rivets. Current uses of this technology to date include: welding the seams of the aluminum main space shuttle external tank, the Orion Crew Vehicle test article, Boeing Delta II and Delta IV Expendable Launch Vehicles and the SpaceX Falcon 1 rocket; armor plating for amphibious assault ships; and welding the wings and fuselage panels of the new Eclipse 500 aircraft from Eclipse Aviation, among an increasingly growing range of uses.
The total number of engineers employed in the US in 2015 was roughly 1.6 million. Of these, 272,470 were industrial engineers (16.92%), the third most popular engineering specialty. [ 40 ] The median salaries by experience level are $62,000 with 0–5 years experience, $75,000 with 5–10 years experience, and $81,000 with 10–20 years experience. [ 41 ] The average starting salaries were $55,067 with a bachelor's degree, $77,364 with a master's degree, and $100,759 with a doctorate degree. This places industrial engineering at 7th of 15 among engineering bachelor's degrees, 3rd of 10 among master's degrees, and 2nd of 7 among doctorate degrees in average annual salary. [ 42 ] The median annual income of industrial engineers in the U.S. workforce is $83,470. [ 43 ]
Manufacturing engineering is just one facet of the engineering industry. Manufacturing engineers enjoy improving the production process from start to finish. They have the ability to keep the whole production process in mind as they focus on a particular portion of the process. Successful students in manufacturing engineering degree programs are inspired by the notion of starting with a natural resource, such as a block of wood, and ending with a usable, valuable product, such as a desk, produced efficiently and economically.
Manufacturing engineers are closely connected with engineering and industrial design efforts. Examples of major companies that employ manufacturing engineers in the United States include General Motors Corporation , Ford Motor Company, Chrysler , Boeing , Gates Corporation and Pfizer . Examples in Europe include Airbus , Daimler , BMW , Fiat , Navistar International , and Michelin Tyre. [ 44 ]
Industries where industrial and production engineers are generally employed include:
Many manufacturing companies, especially those in industrialized nations, have begun to incorporate computer-aided engineering (CAE) programs, such as SolidWorks and AutoCAD , into their existing design and analysis processes, including 2D and 3D solid modeling computer-aided design (CAD). This method has many benefits, including easier and more exhaustive visualization of products, the ability to create virtual assemblies of parts, and ease of use in designing mating interfaces and tolerances.
SolidWorks is an example of a CAD modeling computer program developed by Dassault Systèmes . SolidWorks is an industry standard for drafting designs and specifications for physical objects and has been used by more than 165,000 companies as of 2013. [ 45 ]
AutoCAD is an example of a CAD modeling computer program developed by Autodesk . AutoCAD is also widely used for CAD modeling and CAE. [ 46 ]
Other CAE programs commonly used by product manufacturers include product life cycle management (PLM) tools and analysis tools used to perform complex simulations. Analysis tools may be used to predict product response to expected loads, including fatigue life and manufacturability. These tools include finite element analysis (FEA), computational fluid dynamics (CFD), and computer-aided manufacturing (CAM). Using CAE programs, a mechanical design team can quickly and cheaply iterate the design process to develop a product that better meets cost, performance, and other constraints. There is no need to create a physical prototype until the design nears completion, allowing hundreds or thousands of designs to be evaluated, instead of relatively few. In addition, CAE analysis programs can model complicated physical phenomena which cannot be solved by hand, such as viscoelasticity , complex contact between mating parts, or non- Newtonian flows .
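To give a flavor of what such an analysis tool does internally, the sketch below assembles the stiffness matrix for a one-dimensional bar divided into equal two-node elements and solves for the nodal displacements under an axial end load; the material properties and load are illustrative values, not taken from any real design.

    import numpy as np

    def bar_displacements(E, A, L, n_elems, end_load):
        """1D axial-bar FEA: fixed at node 0, point load at the free end."""
        n_nodes = n_elems + 1
        k = E * A / (L / n_elems)                     # stiffness of one element
        K = np.zeros((n_nodes, n_nodes))
        for e in range(n_elems):                      # assemble the global matrix
            K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
        F = np.zeros(n_nodes)
        F[-1] = end_load
        u = np.zeros(n_nodes)
        u[1:] = np.linalg.solve(K[1:, 1:], F[1:])     # impose u = 0 at the fixed node
        return u

    # Hypothetical steel bar: E = 200 GPa, A = 1e-4 m^2, L = 1 m, 10 kN end load
    print(bar_displacements(E=200e9, A=1e-4, L=1.0, n_elems=4, end_load=1e4))
    # tip displacement ~ 5e-4 m, matching the hand calculation P*L/(E*A)

Commercial FEA packages perform the same assembly-and-solve process in two or three dimensions with far richer element libraries, material models and boundary conditions.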
Just as manufacturing engineering is linked with other disciplines, such as mechatronics, multidisciplinary design optimization (MDO) is also being used with other CAE programs to automate and improve the iterative design process. [ 47 ] MDO tools wrap around existing CAE processes by automating the process of trial and error method used by classical engineers. MDO uses a computer based algorithm that will iteratively seek better alternatives from an initial guess within given constants. MDO uses this procedure to determine the best design outcome and lists various options as well. [ 47 ]
Classical mechanics attempts to use Newton's basic laws of motion to describe how a body will react when it is subjected to a force. [ 49 ] However, modern mechanics also includes the rather recent quantum theory . Subdisciplines of mechanics include:
Classical Mechanics:
Quantum:
If the engineering project were to design a vehicle, statics might be employed to design the frame of the vehicle in order to evaluate where the stresses will be most intense. Dynamics might be used when designing the car's engine to evaluate the forces in the pistons and cams as the engine cycles. Mechanics of materials might be used to choose appropriate materials for the manufacture of the frame and engine. Fluid mechanics might be used to design a ventilation system for the vehicle or to design the intake system for the engine.
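As a small worked instance of the statics step, consider a simply supported frame member of span L carrying a point load P at a distance a from the left support (the numbers below are illustrative). Force and moment equilibrium give the support reactions

    R_1 + R_2 = P, \qquad R_2 = \frac{P a}{L}, \qquad R_1 = P\left(1 - \frac{a}{L}\right)

With P = 4 kN, L = 2 m and a = 0.5 m, the reactions are R_2 = 1 kN and R_1 = 3 kN; the designer would then size the member so that the resulting bending and shear stresses stay within the material's allowable limits.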
Drafting or technical drawing is the means by which manufacturers create instructions for manufacturing parts. A technical drawing can be a computer model or hand-drawn schematic showing all the dimensions necessary to manufacture a part, as well as assembly notes, a list of required materials, and other pertinent information. A skilled worker who creates technical drawings may be referred to as a drafter or draftsman . Drafting has historically been a two-dimensional process, but computer-aided design (CAD) programs now allow the designer to create in three dimensions. Instructions for manufacturing a part must be fed to the necessary machinery, either manually, through programmed instructions, or through the use of a computer-aided manufacturing (CAM) or combined CAD/CAM program. Programs such as SolidWorks and AutoCAD [ 46 ] are examples of programs used to draft new parts and products under development.
Optionally, an engineer may also manually manufacture a part using the technical drawings, but this is becoming an increasing rarity with the advent of computer numerically controlled (CNC) manufacturing. Engineers primarily manufacture parts manually in the areas of applied spray coatings, finishes, and other processes that cannot economically or practically be done by a machine.
Drafting is used in nearly every sub discipline of mechanical and manufacturing engineering, and by many other branches of engineering and architecture. Three-dimensional models created using CAD software are also commonly used in finite element analysis (FEA) and computational fluid dynamics (CFD).
Metal fabrication is the building of metal structures by cutting, bending, and assembling processes. Technologies such as electron beam melting, laser engineered net shaping, and direct metal laser sintering have made the production of metal structures much less difficult compared with conventional metal fabrication methods. [ 55 ] These help to alleviate various issues when the idealized CAD structures do not align with the actual fabricated structure.
Machine tools employ many types of tools that do the cutting or shaping of materials. Machine tools usually include many components consisting of motors, levers, arms, pulleys, and other basic simple systems to create a complex system that can build various things. All of these components must work correctly in order to stay on schedule and remain on task. Machine tools aim to efficiently and effectively produce good parts at a quick pace with a small amount of error. [ 56 ]
Computer-integrated manufacturing (CIM) is the manufacturing approach of using computers to control the entire production process. [ 57 ] Computer-integrated manufacturing is used in the automotive, aviation, space, and shipbuilding industries. [ citation needed ] Computer-integrated manufacturing allows data, gathered through various sensing mechanisms, to be observed during manufacturing. This type of manufacturing has computers controlling and observing every part of the process, which gives CIM a unique advantage over other manufacturing processes.
Mechatronics is an engineering discipline that deals with the convergence of electrical, mechanical and manufacturing systems. [ 58 ] Examples include automated manufacturing systems, heating, ventilation and air-conditioning systems, and various aircraft and automobile subsystems. [ 58 ] A mechatronic system typically includes a mechanical skeleton, motors, controllers, sensors, actuators, and digital hardware. [ 58 ] Mechatronics is greatly used in various applications of industrial processes and in automation.
The term mechatronics is typically used to refer to macroscopic systems, but futurists have predicted the emergence of very small electromechanical devices. Already such small devices, known as Microelectromechanical systems (MEMS), are used in automobiles to initiate the deployment of airbags, in digital projectors to create sharper images, and in inkjet printers to create nozzles for high-definition printing. In future it is hoped that such devices will be used in tiny implantable medical devices and to improve optical communication.
Textile engineering courses deal with the application of scientific and engineering principles to the design and control of all aspects of fiber, textile, and apparel processes, products, and machinery. These include natural and man-made materials, interaction of materials with machines, safety and health, energy conservation, and waste and pollution control. Additionally, students are given experience in plant design and layout, machine and wet process design and improvement, and designing and creating textile products. Throughout the textile engineering curriculum, students take classes from other engineering and disciplines including: mechanical, chemical, materials and industrial engineering. [ 59 ]
Advanced composite materials (engineering) (ACMs) are also known as advanced polymer matrix composites. These are generally characterized or determined by unusually high strength fibres with unusually high stiffness, or modulus of elasticity characteristics, compared to other materials, while bound together by weaker matrices. Advanced composite materials have broad, proven applications, in the aircraft, aerospace, and sports equipment sectors. Even more specifically ACMs are very attractive for aircraft and aerospace structural parts. Manufacturing ACMs is a multibillion-dollar industry worldwide. Composite products range from skateboards to components of the space shuttle. The industry can be generally divided into two basic segments, industrial composites and advanced composites.
| https://en.wikipedia.org/wiki/Industrial_and_production_engineering
Nanotechnology is impacting the field of consumer goods : several products that incorporate nanomaterials are already in a variety of items, many of which people do not even realize contain nanoparticles , and these products have novel functions ranging from easy-to-clean to scratch-resistant . For example, car bumpers are made lighter, clothing is more stain repellant , sunscreen is more radiation resistant, synthetic bones are stronger, cell phone screens are lighter weight, glass packaging for drinks leads to a longer shelf-life, and balls for various sports are made more durable. [ 1 ] Using nanotech, modern textiles are expected in the mid-term to become "smart" through embedded "wearable electronics"; such novel products also have promising potential, especially in the field of cosmetics , and nanotechnology has numerous potential applications in heavy industry . Nanotechnology is predicted to be a main driver of technology and business in this century and holds the promise of higher performance materials, intelligent systems and new production methods with significant impact for all aspects of society.
A complex set of engineering and scientific challenges in the food and bioprocessing industry for manufacturing high quality and safe food through efficient and sustainable means can be addressed through nanotechnology. Bacteria identification and food quality monitoring using biosensors ; intelligent, active, and smart food packaging systems; and nanoencapsulation of bioactive food compounds are a few examples of emerging applications of nanotechnology for the food industry. [ 2 ] Nanotechnology can be applied in the production, processing, safety and packaging of food. A nanocomposite coating process could improve food packaging by placing anti-microbial agents directly on the surface of the coated film. Nanocomposites could increase or decrease gas permeability of different fillers as needed for different products. They can also improve the mechanical and heat-resistance properties and lower the oxygen transmission rate. Research is being performed to apply nanotechnology to the detection of chemical and biological substances for sensing biochemical changes in foods. [ citation needed ]
New foods are among the nanotechnology-created consumer products coming onto the market at the rate of 3 to 4 per week, according to the Project on Emerging Nanotechnologies (PEN), based on an inventory it has drawn up of 609 known or claimed nano-products. On PEN's list are three foods—a brand of canola cooking oil called Canola Active Oil, a tea called Nanotea and a chocolate diet shake called Nanoceuticals Slim Shake Chocolate. According to company information posted on PEN's Web site, the canola oil, by Shemen Industries of Israel, contains an additive called "nanodrops" designed to carry vitamins, minerals and phytochemicals through the digestive system and urea. [ 3 ] The shake, according to U.S. manufacturer RBC Life Sciences Inc., uses cocoa infused "NanoClusters" to enhance the taste and health benefits of cocoa without the need for extra sugar . [ 4 ]
The most prominent application of nanotechnology in the household is self-cleaning or " easy-to-clean " surfaces on ceramics or glasses. Nanoceramic particles have improved the smoothness and heat resistance of common household equipment such as the flat iron . [ citation needed ]
The first sunglasses using protective and anti-reflective ultrathin polymer coatings are on the market. For optics, nanotechnology also offers scratch resistant surface coatings based on nanocomposites. Nano-optics could allow for an increase in precision of pupil repair and other types of laser eye surgery. [ citation needed ]
The use of engineered nanofibers already makes clothes water- and stain-repellent or wrinkle-free. Textiles with a nanotechnological finish can be washed less frequently and at lower temperatures. Nanotechnology has been used to integrate tiny carbon particles membrane and guarantee full-surface protection from electrostatic charges for the wearer. Many other applications have been developed by research institutions such as the Textiles Nanotechnology Laboratory at Cornell University , and the UK's Dstl and its spin out company P2i . [ citation needed ]
Nanotechnology may also play a role in sports such as soccer , football , [ 5 ] and baseball . [ 6 ] Materials for new athletic shoes may be made in order to make the shoe lighter (and the athlete faster). [ 7 ] Baseball bats already on the market are made with carbon nanotubes that reinforce the resin, which is said to improve its performance by making it lighter. [ 6 ] Other items such as sport towels, yoga mats, exercise mats are on the market and used by players in the National Football League , which use antimicrobial nanotechnology to prevent illnesses caused by bacteria such as Methicillin-resistant Staphylococcus aureus (commonly known as MRSA). [ 5 ]
Lighter and stronger materials will be of immense use to aircraft manufacturers, leading to increased performance. Spacecraft will also benefit, where weight is a major factor. Nanotechnology might thus help to reduce the size of equipment and thereby decrease fuel-consumption required to get it airborne. Hang gliders may be able to halve their weight while increasing their strength and toughness through the use of nanotech materials. Nanotech is lowering the mass of supercapacitors that will increasingly be used to give power to assistive electrical motors for launching hang gliders off flatland to thermal-chasing altitudes. [ citation needed ]
Much like aerospace, lighter and stronger materials would be useful for creating vehicles that are both faster and safer. Combustion engines might also benefit from parts that are more hard-wearing and more heat-resistant. [ citation needed ]
Nanotechnology can improve the military's ability to detect biological agents. By using nanotechnology, the military would be able to create sensor systems that could detect biological agents. [ 8 ] The sensor systems are already well developed and will be one of the first forms of nanotechnology that the military will start to use. [ 9 ]
Nanoparticles can be injected into the material on soldiers’ uniforms to not only make the material more durable, but also to protect soldiers from many different dangers such as high temperatures, impacts and chemicals. [ 8 ] The nanoparticles in the material protect soldiers from these dangers by grouping together when something strikes the armor and stiffening the area of impact. This stiffness helps lessen the impact of whatever hit the armor, whether it was extreme heat or a blunt force. By reducing the force of the impact, the nanoparticles protect the soldier wearing the uniform from any injury the impact could have caused.
Another way nanotechnology can improve soldiers’ uniforms is by creating a better form of camouflage. Mobile pigment nanoparticles injected into the material can produce a better form of camouflage. [ 10 ] These mobile pigment particles would be able to change the color of the uniforms depending upon the area that the soldiers are in. There is still much research being done on this self-changing camouflage.
Nanotechnology can improve thermal camouflage . Thermal camouflage helps protect soldiers from people who are using night vision technology. Surfaces of many different military items can be designed so that their interaction with electromagnetic radiation lowers the infrared signature of the object that the surface is on. [ 10 ] The surfaces of soldiers’ uniforms and of military vehicles are examples of surfaces that can be designed in this way. Lowering the infrared signature of both the soldiers and the military vehicles they are using provides better protection from infrared-guided weapons and infrared surveillance sensors.
There is a way to use nanoparticles to create coated polymer threads that can be woven into soldiers’ uniforms. [ 11 ] These polymer threads could be used as a form of communication between the soldiers. The system of threads in the uniforms could be set to different light wavelengths, eliminating the ability for anyone else to listen in. [ 11 ] This would lower the risk of having anything intercepted by unwanted listeners.
A medical surveillance system for soldiers to wear can be made using nanotechnology. This system would be able to watch over their health and stress levels. The systems would be able to react to medical situations by releasing drugs or compressing wounds as necessary. [ 10 ] This means that if the system detected an injury that was bleeding, it would be able to compress around the wound until further medical treatment could be received. The system would also be able to release drugs into the soldier's body for health reasons, such as pain killers for an injury. The system would be able to inform the medics at base of the soldier's health status at all times that the soldier is wearing the system. The energy needed to communicate this information back to base would be produced through the soldier's body movements. [ 10 ]
Nanoweapon is the name given to military technology currently under development which seeks to exploit the power of nanotechnology in the modern battlefield . [ 12 ] [ 13 ] [ 14 ]
Chemical catalysis benefits especially from nanoparticles, due to their extremely large surface-to-volume ratio . The application potential of nanoparticles in catalysis ranges from fuel cells to catalytic converters and photocatalytic devices. Catalysis is also important for the production of chemicals, for example using nanoparticles with a distinct chemical surrounding ( ligands ) or with specific optical properties . [ citation needed ]
Platinum nanoparticles are being considered in the next generation of automotive catalytic converters because the very high surface area of nanoparticles could reduce the amount of platinum required. [ 18 ] However, some concerns have been raised due to experiments demonstrating that they will spontaneously combust if methane is mixed with the ambient air. [ 19 ] Ongoing research at the Centre National de la Recherche Scientifique (CNRS) in France may resolve their true usefulness for catalytic applications. [ 20 ] Nanofiltration may come to be an important application, although future research must be careful to investigate possible toxicity. [ 21 ]
Nanotechnology has the potential to make construction faster, cheaper, safer, and more varied. Automation of nanotechnology construction can allow for the creation of structures, from advanced homes to massive skyscrapers, much more quickly and at much lower cost. In the near future, nanotechnology could be used to sense cracks in the foundations of architecture and to send nanobots to repair them. [ 22 ] [ 23 ]
Nanotechnology is an active research area that encompasses a number of disciplines such as electronics, bio-mechanics and coatings. These disciplines assist in the areas of civil engineering and construction materials. [ 22 ] If nanotechnology is implemented in the construction of homes and infrastructure, such structures will be stronger. If buildings are stronger, then fewer of them will require reconstruction and less waste will be produced.
Nanotechnology in construction involves using nanoparticles such as alumina and silica. Manufacturers are also investigating the methods of producing nano-cement. If cement with nano-size particles can be manufactured and processed, it will open up a large number of opportunities in the fields of ceramics, high strength composites and electronic applications. [ 22 ]
Nanomaterials still have a high cost relative to conventional materials, meaning that they are not likely to feature in high-volume building materials. The day when this technology slashes the consumption of structural steel has not yet been contemplated. [ 24 ]
Much analysis of concrete is being done at the nano-level in order to understand its structure. Such analysis uses various techniques developed for study at that scale, such as Atomic Force Microscopy (AFM), Scanning Electron Microscopy (SEM) and Focused Ion Beam (FIB). This has come about as a side benefit of the development of these instruments to study the nanoscale in general, but the understanding of the structure and behavior of concrete at the fundamental level is an important and very appropriate use of nanotechnology. One of the fundamental aspects of nanotechnology is its interdisciplinary nature, and there has already been cross-over research between the mechanical modeling of bones for medical engineering and that of concrete, which has enabled the study of chloride diffusion in concrete (which causes corrosion of reinforcement). Concrete is, after all, a macro-material strongly influenced by its nano-properties, and understanding it at this new level is yielding new avenues for improving its strength, durability and monitoring, as outlined in the following paragraphs.
Silica (SiO2) is present in conventional concrete as part of the normal mix. However, one of the advances made by the study of concrete at the nanoscale is that particle packing in concrete can be improved by using nano-silica, which leads to a densifying of the micro- and nanostructure, resulting in improved mechanical properties. Nano-silica addition to cement-based materials can also control the degradation of the fundamental C-S-H (calcium-silicate-hydrate) reaction of concrete caused by calcium leaching in water, as well as block water penetration, and therefore lead to improvements in durability. Related to improved particle packing, high-energy milling of ordinary Portland cement (OPC) clinker and standard sand produces a greater particle size diminution with respect to conventional OPC and, as a result, the compressive strength of the refined material is also 3 to 6 times higher (at different ages). [ 23 ]
Steel is a widely available material that has a major role in the construction industry. The use of nanotechnology in steel helps to improve the physical properties of steel. Fatigue, or the structural failure of steel, is due to cyclic loading. Current steel designs are based on the reduction in the allowable stress, service life or regular inspection regime. This has a significant impact on the life-cycle costs of structures and limits the effective use of resources. Stress risers are responsible for initiating cracks from which fatigue failure results. The addition of copper nanoparticles reduces the surface un-evenness of steel, which then limits the number of stress risers and hence fatigue cracking. Advancements in this technology through the use of nanoparticles would lead to increased safety, less need for regular inspection, and more efficient materials free from fatigue issues for construction. [ 22 ]
Steel cables can be strengthened using carbon nanotubes. Stronger cables reduce the costs and period of construction, especially in suspension bridges, as the cables are run from end to end of the span. [ 22 ]
The use of vanadium and molybdenum nanoparticles improves the delayed fracture problems associated with high strength bolts. This reduces the effects of hydrogen embrittlement and improves steel micro-structure by reducing the effects of the inter-granular cementite phase. [ 22 ]
Welds and the Heat Affected Zone (HAZ) adjacent to welds can be brittle and fail without warning when subjected to sudden dynamic loading. The addition of nanoparticles such as magnesium and calcium makes the HAZ grains finer in plate steel. This nanoparticle addition leads to an increase in weld strength. The increase in strength results in a smaller resource requirement because less material is required in order to keep stresses within allowable limits. [ 22 ]
Nanotechnology represents a major opportunity for the wood industry to develop new products, substantially reduce processing costs, and open new markets for biobased materials.
Wood is also composed of nanotubes or “nanofibrils”; namely, lignocellulosic (woody tissue) elements which are twice as strong as steel. Harvesting these nanofibrils would lead to a new paradigm in sustainable construction as both the production and use would be part of a renewable cycle. Some developers have speculated that building functionality onto lignocellulosic surfaces at the nanoscale could open new opportunities for such things as self-sterilizing surfaces, internal self-repair, and electronic lignocellulosic devices. These non-obtrusive active or passive nanoscale sensors would provide feedback on product performance and environmental conditions during service by monitoring structural loads, temperatures, moisture content, decay fungi, heat losses or gains, and loss of conditioned air. Currently, however, research in these areas appears limited.
Due to its natural origins, wood is leading the way in cross-disciplinary research and modelling techniques. BASF have developed a highly water repellent coating based on the actions of the lotus leaf as a result of the incorporation of silica and alumina nanoparticles and hydrophobic polymers. Mechanical studies of bones have been adapted to model wood, for instance in the drying process. [ 23 ]
Research is being carried out on the application of nanotechnology to glass, another important material in construction. Titanium dioxide (TiO 2 ) nanoparticles are used to coat glazing because they have sterilizing and anti-fouling properties. The particles catalyze powerful reactions that break down organic pollutants, volatile organic compounds and bacterial membranes. TiO 2 is hydrophilic (attracted to water), so it attracts rain drops that then wash off the dirt particles. The introduction of nanotechnology in the glass industry thus gives glass a self-cleaning property. [ 22 ]
Fire-protective glass is another application of nanotechnology. This is achieved by using a clear intumescent layer sandwiched between glass panels (an interlayer) formed of silica nanoparticles (SiO 2 ), which turns into a rigid and opaque fire shield when heated. Most glass in construction is on the exterior surface of buildings, so the light and heat entering the building through the glass must be controlled. Nanotechnology can provide better solutions for blocking light and heat coming through windows. [ 22 ]
Coatings are an important area in construction: they are used extensively on walls, doors, and windows. A coating should provide a protective layer bound to the base material to produce a surface with the desired protective or functional properties. Nanotechnology is being applied to paints to obtain coatings with self-healing capabilities, through a process of "self-assembly", and with corrosion protection under insulation. Because these coatings are hydrophobic, they repel water from the metal pipe and can also protect the metal from salt water attack. [ 22 ]
Nanoparticle based systems can provide better adhesion and transparency. The TiO 2 coating captures and breaks down organic and inorganic air pollutants by a photocatalytic process, which leads to putting roads to good environmental use. [ 22 ]
Fire resistance of steel structures is often provided by a coating produced by a spray-on-cementitious process. The nano-cement has the potential to create a new paradigm in this area of application because the resulting material can be used as a tough, durable, high temperature coating. It provides a good method of increasing fire resistance and this is a cheaper option than conventional insulation. [ 22 ]
In building construction, nanomaterials are widely used, from self-cleaning windows to flexible solar panels to Wi-Fi-blocking paint. Self-healing concrete, materials that block ultraviolet and infrared radiation, smog-eating coatings, and light-emitting walls and ceilings are among the newer nanomaterials in construction. Nanotechnology holds promise for making the "smart home" a reality: nanotech-enabled sensors can monitor temperature, humidity, and airborne toxins, which requires improved nanotech-based batteries. Building components will be intelligent and interactive, and because the sensors use wireless components, they can collect a wide range of data. [ 22 ]
If nanosensors and nanomaterials become an everyday part of the buildings, as with smart homes , what are the consequences of these materials on human beings? [ 22 ] | https://en.wikipedia.org/wiki/Industrial_applications_of_nanotechnology |
Industrial architecture is the design and construction of buildings facilitating the needs of the industrial sector . The architecture revolving around the industrial world uses a variety of building designs and styles to consider the safe flow, distribution and production of goods and labor. [ 1 ] Such buildings rose in importance with the Industrial Revolution , starting in Britain , and were some of the pioneering structures of modern architecture . [ 2 ] Many of the architectural buildings revolving around the industry allowed for processing, manufacturing, distribution, and the storage of goods and resources. Architects also have to consider safety measures and workflow to ensure smooth operation within the work environment located in the building. [ 1 ]
Industrial architects specialize in designing and planning of industrial buildings or infrastructure. They integrate different processes, machinery, equipment and industrial building code requirements into functional industrial buildings. They follow quality standards to ensure that industrial building are safely built for production or human use. Industrial architects are responsible for the design and planning of the following: markets, warehouses, factories, processing plants, power plants, commercial facilities, etc. [ 3 ]
Britain played an important role in the Industrial Revolution , which stimulated the expansion of trade and distribution of goods amongst Europe and the Atlantic Ocean. The technological advances from Europe were later spread to the United States in the late 1700s. Samuel Slater fled to the United States and later opened a textile mill in Rhode Island; shortly after that the cotton gin was invented by Eli Whitney . [ 4 ]
Some of the first industrial buildings were built in Britain in the 1700s during the First Industrial Revolution , which later inspired industrial architecture throughout the world. The First Industrial Revolution lasted from the mid-1700s to the mid-1800s and was followed by the Second Industrial Revolution , which mainly focused on the use of new materials and the production of goods. [ 1 ]
The earliest industrial buildings were built at a relatively domestic scale, for instance workshops for local craftsmen. [ 2 ]
This time period saw the transformation of the British economy. The population in England had increased to 16 million people by around 1841, with the majority moving to Northern Europe. Factories had been built and factory production had become dominant, though production was not yet on a large scale. [ 2 ]
The birth of all industrial architecture stemmed from England, and the continuing expansion of the architecture was a product of the Industrial Revolution. [ 5 ] The usage and production of iron and steel became more prominent since they were used as the foundation for industrial buildings. Steel is a durable material and was also used in other parts of the industry such as infrastructure , but it was difficult to make because it required high temperatures to melt the metal. [ 5 ]
Britain saw an increase in production during this time period. Railways played an important role in the transportation and distribution of resources throughout Europe and the United States. Industrial buildings were built at a larger scale to accommodate large machinery used in food production, such as flour mills and breweries . With the implementation of the Planning Act of 1909 , the industry had a significant impact on the siting and layout of industrial facilities as it continued to progress throughout the years. [ 2 ]
As architecture became modernized throughout the years, the more traditional industrial sites throughout Europe and the United States continued to decrease. For instance, coal is a raw material that was heavily used throughout the industrial revolution, so there were coal mines. Buildings continued to increase in size to accommodate mass production. The overall design of modern-day buildings is sleeker and more spacious. [ 2 ]
The early 20th century saw multi-story factories influenced by high land costs and the need for vertical movement of goods. However, later designs, such as the one-story factories of the World War II era, became more prevalent due to their flexibility, ease of construction, and suitability for assembly lines. These designs also focused on the well-being of workers, with features like natural light, air, and better working conditions to boost productivity. [ 6 ]
Modern industrial architecture integrates smart technology, adaptable designs, and sustainable materials. Abandoned industrial spaces are frequently transformed into residential, commercial, or mixed-use developments, supporting urban revitalization. This design style, characterized by open layouts, exposed utilities, and eco-friendly materials, is popular in both urban and suburban settings, highlighting green living and historic charm. Repurposed structures play a key role in urban renewal, revitalizing neglected areas into thriving hubs for housing, businesses, and cultural activities. [ 7 ]
The future of industrial architecture is influenced by technological advancements such as automation, robotics, and integration of smart systems, which enhance efficiency, productivity, and safety. As manufacturing evolves, industrial buildings will continue to adapt, with a focus on sustainability and collaborative work environments. [ 8 ]
Industrial buildings are typically characterized by large, open spaces, high ceilings, and minimal ornamentation, utilizing durable materials like concrete, brick, metal, and glass. The design prioritizes practicality, with elements like exposed structural components and raw materials. Functional principles include adaptability for changing production needs, efficient circulation, zoning for different tasks, and proper ventilation. [ 8 ] | https://en.wikipedia.org/wiki/Industrial_architecture |
The first time a catalyst was used in the industry was in 1746 by J. Roebuck in the manufacture of lead chamber sulfuric acid . Since then catalysts have been in use in a large portion of the chemical industry. In the start only pure components were used as catalysts, but after the year 1900 multicomponent catalysts were studied and are now commonly used in the industry. [ 1 ] [ 2 ]
In the chemical industry and industrial research, catalysis plays an important role. Different catalysts are in constant development to fulfil economic, political and environmental demands. [ 3 ] When using a catalyst, it is possible to replace a polluting chemical reaction with a more environmentally friendly alternative. Today, and in the future, this can be vital for the chemical industry. In addition, it is important for a company or researcher to pay attention to market developments. If a company's catalyst is not continually improved, another company can make progress in research on that particular catalyst and gain market share. For a company, a new and improved catalyst can be a huge advantage for a competitive manufacturing cost. It is extremely expensive for a company to shut down the plant because of an error in the catalyst, so the correct selection of a catalyst, or a new improvement, can be key to industrial success.
To achieve the best understanding and development of a catalyst, it is important that different specialized fields work together. These fields can be: organic chemistry, analytical chemistry, inorganic chemistry, chemical engineering and surface chemistry. The economics must also be taken into account. One of the issues that must be considered is whether the company should spend money on doing the catalyst research itself or buy the technology from someone else. As the analytical tools are becoming more advanced, the catalysts used in the industry are improving. One example of an improvement can be to develop a catalyst with a longer lifetime than the previous version. Some of the advantages an improved catalyst gives, and that affect people's lives, are cheaper and more effective fuel, new drugs and medications, and new polymers.
Some of the large chemical processes that use catalysis today are the production of methanol and ammonia. Both methanol and ammonia synthesis take advantage of the water-gas shift reaction and heterogeneous catalysis , while other chemical industries use homogeneous catalysis . If the catalyst exists in the same phase as the reactants it is said to be homogeneous; otherwise it is heterogeneous.
The water gas shift reaction was first used industrially at the beginning of the 20th century. Today the WGS reaction is used primarily to produce hydrogen that can be used for further production of methanol and ammonia. [ 4 ]
The reaction refers to carbon monoxide (CO) that reacts with water (H 2 O) to form carbon dioxide (CO 2 ) and hydrogen (H 2 ). The reaction is exothermic with ΔH = -41.1 kJ/mol and has an adiabatic temperature rise of 8–10 °C per percent CO converted to CO 2 and H 2 .
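Written out, the water-gas shift (WGS) reaction is
\[ \mathrm{CO + H_2O \rightleftharpoons CO_2 + H_2}, \qquad \Delta H = -41.1\ \mathrm{kJ/mol}. \]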
The most common catalysts used in the water-gas shift reaction are the high temperature shift (HTS) catalyst and the low temperature shift (LTS) catalyst. The HTS catalyst consists of iron oxide stabilized by chromium oxide, while the LTS catalyst is based on copper. The main purpose of the LTS catalyst is to reduce CO content in the reformate which is especially important in the ammonia production for high yield of H 2 . Both catalysts are necessary for thermal stability, since using the LTS reactor alone increases exit-stream temperatures to unacceptable levels.
The equilibrium constant for the reaction is given as:
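In terms of the partial pressures of the species involved, this is the standard mass-action expression
\[ K_p \;=\; \frac{p_{\mathrm{CO_2}}\, p_{\mathrm{H_2}}}{p_{\mathrm{CO}}\, p_{\mathrm{H_2O}}}. \]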
Low temperatures will therefore shift the reaction to the right, and more products will be produced. The equilibrium constant is extremely dependent on the reaction temperature; for example, Kp is equal to 228 at 200 °C but only 11.8 at 400 °C. [ 2 ] The WGS reaction can be performed both homogeneously and heterogeneously, but only the heterogeneous method is used commercially.
The first step in the WGS reaction is the high temperature shift, which is carried out at temperatures between 320 °C and 450 °C. As mentioned before, the catalyst is a composition of iron oxide, Fe 2 O 3 (90-95%), and chromium oxides, Cr 2 O 3 (5-10%), which have an ideal activity and selectivity at these temperatures. When preparing this catalyst, one of the most important steps is washing to remove sulfate that can turn into hydrogen sulfide and poison the LTS catalyst later in the process. Chromium is added to the catalyst to stabilize the catalyst activity over time and to delay sintering of the iron oxide. Sintering will decrease the active catalyst area, so by decreasing the sintering rate the lifetime of the catalyst will be extended. The catalyst is usually used in pellet form, and the size plays an important role. Large pellets will be strong, but the reaction rate will be limited.
In the end, the dominant phase in the catalyst consists of Cr 3+ in α-Fe 2 O 3 , but the catalyst is still not active. To be active, α-Fe 2 O 3 must be reduced to Fe and CrO 3 must be reduced to Cr in the presence of H 2 . This usually happens in the reactor start-up phase, and because the reduction reactions are exothermic, the reduction should happen under controlled circumstances. The lifetime of the iron-chrome catalyst is approximately 3–5 years, depending on how the catalyst is handled.
Even though a great deal of research has been done on the mechanism of the HTS catalyst, there is no final agreement on the kinetics/mechanism. Research has narrowed it down to two possible mechanisms: a regenerative redox mechanism and an adsorptive (associative) mechanism.
The redox mechanism is given below:
First a CO molecule reduces a surface oxygen center, yielding CO 2 and a vacant surface site:
The vacant site is then reoxidized by water, and the oxide center is regenerated:
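Schematically, writing O (s) for a lattice oxygen of the oxide surface and □ for an oxygen vacancy (this surface notation is illustrative and not taken from the original source), the two redox steps are
\[ \mathrm{CO + O_{(s)} \longrightarrow CO_2 + \square} \]
\[ \mathrm{H_2O + \square \longrightarrow H_2 + O_{(s)}} \]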
The adsorptive mechanism assumes that a formate species is produced when an adsorbed CO molecule reacts with a surface hydroxyl group:
The formate then decomposes in the presence of steam:
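A schematic version of these two steps, with (ads) marking adsorbed surface species (the exact elementary steps in the source may differ; this is an illustrative reconstruction), is
\[ \mathrm{CO_{(ads)} + OH_{(ads)} \longrightarrow HCOO_{(ads)}} \]
\[ \mathrm{HCOO_{(ads)} + H_2O \longrightarrow CO_2 + H_2 + OH_{(ads)}} \]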
The low temperature process is the second stage in the process, and is designed to take advantage of the higher hydrogen equilibrium at low temperatures. The reaction is carried out between 200 °C and 250 °C, and the most commonly used catalyst is based on copper. While the HTS reactor uses an iron-chrome based catalyst, the copper catalyst is more active at lower temperatures, thereby yielding a lower equilibrium concentration of CO and a higher equilibrium concentration of H 2 . The disadvantage of a copper catalyst is that it is very sensitive to sulfide poisoning; the future use of, for example, a cobalt-molybdenum catalyst could solve this problem. The catalyst mainly used in the industry today is a copper - zinc - alumina (Cu/ZnO/Al 2 O 3 ) based catalyst.
The LTS catalyst also has to be activated by reduction before it can be used. The reduction reaction CuO + H 2 → Cu + H 2 O is highly exothermic and should be conducted in dry gas for an optimal result.
As for the HTS catalyst mechanism, two similar reaction mechanisms are suggested. The first mechanism that was proposed for the LTS reaction was a redox mechanism, but later evidence showed that the reaction can proceed via associated intermediates. The different intermediates that have been suggested are HOCO , HCO and HCOO. As of 2009, [ 5 ] a total of three mechanisms have been proposed for the water-gas shift reaction over Cu(111), given below.
Intermediate mechanism (usually called associative mechanism): An intermediate is first formed and then decomposes into the final products:
Associative mechanism: CO 2 produced from the reaction of CO with OH without the formation of an intermediate:
Redox mechanism: Water dissociation that yields surface oxygen atoms which react with CO to produce CO 2 :
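Schematic forms of these three pathways, using an asterisk ( * ) to mark an adsorbed surface species (the notation and elementary steps below are illustrative reconstructions, not reproduced from the source), are:
Intermediate (carboxyl) mechanism:
\[ \mathrm{CO^{*} + OH^{*} \longrightarrow HOCO^{*}}, \qquad \mathrm{HOCO^{*} \longrightarrow CO_2 + H^{*}} \]
Associative mechanism without a stable intermediate:
\[ \mathrm{CO^{*} + OH^{*} \longrightarrow CO_2 + H^{*}} \]
Redox mechanism:
\[ \mathrm{H_2O^{*} \longrightarrow OH^{*} + H^{*}}, \qquad \mathrm{OH^{*} \longrightarrow O^{*} + H^{*}}, \qquad \mathrm{CO^{*} + O^{*} \longrightarrow CO_2}, \qquad \mathrm{2\,H^{*} \longrightarrow H_2} \]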
It is not certain that just one of these mechanisms controls the reaction; it is possible that several of them are active. Q.-L. Tang et al. have suggested that the most favorable mechanism is the intermediate mechanism (with HOCO as intermediate), followed by the redox mechanism, with water dissociation as the rate-determining step. [ 5 ]
For both the HTS catalyst and the LTS catalyst, the redox mechanism is the oldest theory and most published articles support it, but as technology has developed, the adsorptive mechanism has attracted more interest. One reason the literature does not agree on a single mechanism may be that experiments are carried out under different assumptions.
CO must be produced for the WGS reaction to take place. This can be done in different ways from a variety of carbon sources such as: [ 6 ]
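Two typical examples, consistent with the remark below that both reactions are strongly endothermic (the enthalpies are standard textbook values, added here for illustration), are the steam reforming of natural gas and the gasification of coal:
\[ \mathrm{CH_4 + H_2O \longrightarrow CO + 3\,H_2}, \qquad \Delta H^\circ \approx +206\ \mathrm{kJ/mol} \]
\[ \mathrm{C + H_2O \longrightarrow CO + H_2}, \qquad \Delta H^\circ \approx +131\ \mathrm{kJ/mol} \]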
Both of the reactions shown above are highly endothermic and can be coupled to an exothermic partial oxidation. The resulting mixture of CO and H 2 is known as syngas .
When dealing with a catalyst and CO, it is common to assume that the intermediate CO-Metal is formed before the intermediate reacts further into the products. When designing a catalyst this is important to remember. The strength of interaction between the CO molecule and the metal should be strong enough to provide a sufficient concentration of the intermediate, but not so strong that the reaction will not continue.
CO is a common molecule to use in a catalytic reaction, and when it interacts with a metal surface it is actually the molecular orbitals of CO that interact with the d-band of the metal surface. In a molecular orbital (MO) diagram, CO can act as a σ-donor via the lone pair of electrons on C, and as a π-acceptor ligand in transition metal complexes. When a CO molecule is adsorbed on a metal surface, the d-band of the metal will interact with the molecular orbitals of CO. It is possible to look at a simplified picture and only consider the LUMO (2π*) and HOMO (5σ) of CO. The overall effect of the σ-donation and the π-back donation is that a strong bond between C and the metal is formed and, in addition, the bond between C and O is weakened. The latter effect is due to charge depletion of the CO 5σ bonding orbital and charge increase in the CO 2π* antibonding orbital. [ 7 ]
When looking at chemical surfaces, many researchers seem to agree that the surface of Cu/Al 2 O 3 /ZnO is most similar to the Cu(111) surface. [ 8 ] Since copper is the main catalyst and the active phase in the LTS catalyst, many experiments have been done with copper. Experiments on Cu(110) and Cu(111), compared in an Arrhenius plot derived from the reaction rates, show that Cu(110) has a faster reaction rate and a lower activation energy . This can be due to the fact that Cu(111) is more closely packed than Cu(110).
Production of methanol is an important industry today and methanol is one of the largest volume carbonylation products. The process uses syngas as feedstock and for that reason the water gas shift reaction is important for this synthesis. The most important reaction based on methanol is the decomposition of methanol to yield carbon monoxide and hydrogen. Methanol is therefore an important raw material for production of CO and H 2 that can be used in generation of fuel. [ 9 ]
BASF was the first company (in 1923) to produce methanol on a large scale, then using a sulfur-resistant ZnO/Cr 2 O 3 catalyst. The feed gas was produced by gasification of coal. Today the synthesis gas is usually manufactured via steam reforming of natural gas. The most effective catalysts for methanol synthesis are Cu, Ni, Pd and Pt, while the most common metals used for support are Al and Si. In 1966 ICI ( Imperial Chemical Industries ) developed a process that is still in use today. The process is a low-pressure process that uses a Cu/ZnO/Al 2 O 3 catalyst where copper is the active material. This is actually the same catalyst as the low-temperature shift catalyst used in the WGS reaction. The reactions described below are carried out at 250 °C and 5-10 MPa:
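These are presumably the two standard methanol synthesis reactions, the hydrogenation of CO and of CO 2 (the enthalpies are standard textbook values, added here for illustration):
\[ \mathrm{CO + 2\,H_2 \longrightarrow CH_3OH}, \qquad \Delta H^\circ \approx -91\ \mathrm{kJ/mol} \]
\[ \mathrm{CO_2 + 3\,H_2 \longrightarrow CH_3OH + H_2O}, \qquad \Delta H^\circ \approx -49\ \mathrm{kJ/mol} \]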
Both of these reactions are exothermic and proceed with volume contraction. Maximum yield of methanol is therefore obtained at low temperatures and high pressure and with use of a catalyst that has a high activity at these conditions. A catalyst with sufficiently high activity at low temperatures still does not exist, and this is one of the main reasons that companies keep doing research and catalyst development. [ 10 ]
A reaction mechanism for methanol synthesis has been suggested by Chinchen et al. : [ 11 ]
Today there are four different ways to catalytically produce hydrogen from methanol, and all of the reactions can be carried out using a transition metal catalyst (Cu, Pd):
The reaction is given as:
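For methanol steam reforming, the overall reaction (with a standard textbook enthalpy added for illustration) is
\[ \mathrm{CH_3OH + H_2O \longrightarrow CO_2 + 3\,H_2}, \qquad \Delta H^\circ \approx +50\ \mathrm{kJ/mol}. \]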
Steam reforming is a good source for the production of hydrogen, but the reaction is endothermic . The reaction can be carried out over a copper-based catalyst, but the reaction mechanism is dependent on the catalyst. For a copper-based catalyst, two different reaction mechanisms have been proposed: a decomposition-water-gas shift sequence, and a mechanism that proceeds via methanol dehydrogenation to methyl formate. The first mechanism, methanol decomposition followed by the WGS reaction, has been proposed for Cu/ZnO/Al 2 O 3 :
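Written schematically, this sequence is methanol decomposition followed by the water-gas shift (only the overall steps are sketched here; the detailed surface chemistry on Cu/ZnO/Al 2 O 3 is more involved):
\[ \mathrm{CH_3OH \longrightarrow CO + 2\,H_2} \]
\[ \mathrm{CO + H_2O \longrightarrow CO_2 + H_2} \]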
The mechanism for the methyl formate route can depend on the composition of the catalyst. The following mechanism has been proposed over Cu/ZnO/Al 2 O 3 :
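A commonly cited form of this route proceeds via methyl formate and formic acid (the exact elementary steps proposed for this catalyst may differ; the sequence below is an illustrative reconstruction):
\[ \mathrm{2\,CH_3OH \longrightarrow HCOOCH_3 + 2\,H_2} \]
\[ \mathrm{HCOOCH_3 + H_2O \longrightarrow HCOOH + CH_3OH} \]
\[ \mathrm{HCOOH \longrightarrow CO_2 + H_2} \]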
When methanol is almost completely converted, CO is produced as a secondary product via the reverse water-gas shift reaction.
The second way to produce hydrogen from methanol is by methanol decomposition:
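The reaction, with a standard textbook enthalpy added for illustration, is
\[ \mathrm{CH_3OH \longrightarrow CO + 2\,H_2}, \qquad \Delta H^\circ \approx +91\ \mathrm{kJ/mol}. \]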
As the enthalpy shows, the reaction is endothermic, and this can be taken further advantage of in the industry. This reaction is the opposite of the methanol synthesis from syngas, and the most effective catalysts seem to be Cu, Ni, Pd and Pt, as mentioned before. Often, a Cu/ZnO-based catalyst is used at temperatures between 200 and 300 °C, but by-products such as dimethyl ether, methyl formate, methane and water are common. The reaction mechanism is not fully understood, and two possible mechanisms have been proposed (2002): one producing CO 2 and H 2 by decomposition of formate intermediates, and the other producing CO and H 2 via a methyl formate intermediate.
Partial oxidation is a third way for producing hydrogen from methanol. The reaction is given below, and is often carried out with air or oxygen as oxidant :
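With oxygen as the oxidant, the overall partial oxidation reaction (standard enthalpy added for illustration) is
\[ \mathrm{CH_3OH + \tfrac{1}{2}\,O_2 \longrightarrow CO_2 + 2\,H_2}, \qquad \Delta H^\circ \approx -192\ \mathrm{kJ/mol}. \]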
The reaction is exothermic and has, under favorable conditions, a higher reaction rate than steam reforming. The catalyst used is often Cu (Cu/ZnO) or Pd and they differ in qualities such as by-product formation, product distribution and the effect of oxygen partial pressure.
Combined reforming is a combination of partial oxidation and steam reforming and is the last reaction that is used for hydrogen production. The general equation is given below:
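A balanced general form consistent with the description of s and p below (this particular way of writing it is an assumption, with s + p = 1) is
\[ \mathrm{CH_3OH} + s\,\mathrm{H_2O} + \tfrac{p}{2}\,\mathrm{O_2} \longrightarrow \mathrm{CO_2} + (2+s)\,\mathrm{H_2}. \]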
s and p are the stoichiometric coefficients for steam reforming and partial oxidation, respectively.
The reaction can be either endothermic or exothermic, depending on the conditions, and it combines the advantages of steam reforming and partial oxidation.
Ammonia synthesis was discovered by Fritz Haber using iron catalysts. The ammonia synthesis advanced between 1909 and 1913, and two important concepts were developed: the benefits of a promoter and the poisoning effect (see catalysis for more details). [ 12 ]
Ammonia production was one of the first commercial processes that required the production of hydrogen, and the cheapest and best way to obtain hydrogen was via the water-gas shift reaction. The Haber–Bosch process is the most common process used in the ammonia industry.
A lot of research has been done on the catalyst used in the ammonia process, but the main catalyst that is used today is not that dissimilar to the one that was first developed. The catalyst the industry uses is a promoted iron catalyst, where the promoters can be K 2 O ( potassium oxide ), Al 2 O 3 ( aluminium oxide ) and CaO ( calcium oxide ), and the basic catalytic material is iron. Fixed-bed reactors are most commonly used for the synthesis catalyst.
The main ammonia reaction is given below:
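This is the familiar Haber–Bosch equilibrium (the enthalpy is a standard textbook value, added here for illustration):
\[ \mathrm{N_2 + 3\,H_2 \rightleftharpoons 2\,NH_3}, \qquad \Delta H^\circ \approx -92\ \mathrm{kJ/mol}. \]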
The produced ammonia can be used further in production of nitric acid via the Ostwald process . | https://en.wikipedia.org/wiki/Industrial_catalysts |
Industrial computed tomography ( CT ) scanning is any computer-aided tomographic process, usually X-ray computed tomography , that uses irradiation to produce three-dimensional internal and external representations of a scanned object. Industrial CT scanning has been used in many areas of industry for internal inspection of components. Some of the key uses for industrial CT scanning have been flaw detection, failure analysis, metrology, assembly analysis and reverse engineering applications. [ 1 ] [ 2 ] Just as in medical imaging , industrial imaging includes both nontomographic radiography ( industrial radiography ) and computed tomographic radiography (computed tomography).
Line beam scanning is the traditional process of industrial CT scanning. [ 3 ] X-rays are produced and the beam is collimated to create a line. The X-ray line beam is then translated across the part and data is collected by the detector. The data is then reconstructed to create a 3-D volume rendering of the part.
In cone beam scanning , the part to be scanned is placed on a rotary table. [ 3 ] As the part rotates, the cone of X-rays produce a large number of 2D images that are collected by the detector. The 2D images are then processed to create a 3D volume rendering of the external and internal geometries of the part.
Industrial CT scanning technology was introduced in 1972 with the invention of the CT scanner for medical imaging by Godfrey Hounsfield . The invention earned him a Nobel Prize in medicine, which he shared with Allan McLeod Cormack . [ 4 ] [ 5 ] Many advances in CT scanning have allowed for its use in the industrial field for metrology in addition to the visual inspection primarily used in the medical field (medical CT scan ).
Various inspection uses and techniques include part-to-CAD comparisons, part-to-part comparisons, assembly and defect analysis, void analysis, wall thickness analysis, and generation of CAD data. The CAD data can be used for reverse engineering , geometric dimensioning and tolerance analysis, and production part approval. [ 6 ]
One of the most recognized forms of analysis using CT is for assembly, or visual analysis. CT scanning provides views inside components in their functioning position, without disassembly. Some software programs for industrial CT scanning allow for measurements to be taken from the CT dataset volume rendering. These measurements are useful for determining the clearances between assembled parts or the dimension of an individual feature.
Traditionally, determining defects, voids and cracks within an object would require destructive testing. CT scanning can detect internal features and flaws displaying this information in 3D without destroying the part. Industrial CT scanning (3D X-ray) is used to detect flaws inside a part such as porosity, [ 7 ] an inclusion, or a crack. [ 8 ] It has been also used to detect the origin and propagation of damages in concrete. [ 9 ]
Metal casting and moulded plastic components are typically prone to porosity because of cooling processes, transitions between thick and thin walls, and material properties. Void analysis can be used to locate, measure, and analyze voids inside plastic or metal components.
Traditionally, without destructive testing, full metrology has only been performed on the exterior dimensions of components, such as with a coordinate-measuring machine (CMM) or with a vision system to map exterior surfaces. Internal inspection methods would require using a 2D X-ray of the component or the use of destructive testing. Industrial CT scanning allows for full non-destructive metrology. With unlimited geometrical complexity, 3D printing allows complex internal features to be created with no impact on cost; such features are not accessible using a traditional CMM. The first 3D-printed artefact optimised for the characterisation of form using computed tomography (CT) has been produced. [ 10 ]
Image-based finite element method converts the 3D image data from X-ray computed tomography directly into meshes for finite element analysis . Benefits of this method include modelling complex geometries (e.g. composite materials) or accurately modelling "as manufactured" components at the micro-scale. [ 11 ]
The industrial computed tomography market is forecast to reach a size of USD 773.45 million to USD 1,116.5 million between 2029 and 2030. Regional trends show that strong market growth is expected, particularly in the Asia-Pacific region, but also in North America and Europe, due to strict safety regulations and preventive maintenance of industrial equipment. [ 12 ] [ 13 ] Growth is being driven primarily by the ongoing development of CT devices and services that enable precise and non-destructive testing of components. Innovations such as the use of artificial intelligence for automated fault analyses and the development of mobile CT systems are expanding the possibilities. [ 14 ]
Computed Tomography (CT) has become an increasingly valuable tool in forensic science, particularly in conducting virtual autopsies. [ 15 ] [ 16 ] Unlike traditional autopsies, which require invasive procedures, CT scans allow for non-invasive internal examinations of the body, producing detailed 3D images of bones, organs, and soft tissues. [ 17 ] This technology is especially useful for detecting fractures, foreign objects (such as bullets or shrapnel), gas embolisms, and signs of trauma that may not be immediately visible externally. [ 18 ] CT scans can preserve forensic evidence more effectively and are particularly beneficial in cases involving mass disasters, decomposition, or cultural and religious objections to dissection. [ 15 ] Furthermore, digital imaging from CT can be stored and reviewed multiple times, aiding both legal investigations and educational purposes. [ 17 ] Overall, CT has enhanced the accuracy, efficiency, and accessibility of post-mortem examinations in forensic contexts.
| https://en.wikipedia.org/wiki/Industrial_computed_tomography
An Industrial Dashboard is a graphical display of manufacturing information generated by software. Much like the dashboard in a car, an Industrial Dashboard shows data collected from a multitude of sensors displayed as one quick overview of the general operating situation. An Industrial Dashboard typically involves the use of JavaScript, HTML5, and PHP.
In the simplest form, an Industrial Dashboard may show just one metric from a manufacturing process. This might start with a count of product produced from a machine. A more complex approach to Industrial Dashboarding would be a series of "drill down" click points - starting with a dashboard screen showing a summary of production for the whole plant. Various points on that screen would be clickable to drill down into more and more dashboard screens until reaching a dashboard of very detailed data on a single machine or Machine Operator Efficiency of a single employee. [ 1 ]
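As a minimal illustration of the drill-down idea, the sketch below uses a hypothetical data model and function names (nothing here is taken from a particular dashboard product): per-machine sensor records are aggregated into a plant-level summary, and a single machine can then be inspected in detail.

```typescript
// Hypothetical record for one machine, as it might arrive from plant-floor sensors.
interface MachineMetric {
  machineId: string;      // e.g. "CNC-07"
  operator: string;       // employee currently running the machine
  partsProduced: number;  // good parts counted by the machine sensor
  partsScrapped: number;  // rejected parts
  runtimeMinutes: number; // minutes the machine was actually running
}

// Top-level dashboard view: one summary for the whole plant.
function plantSummary(metrics: MachineMetric[]): { totalParts: number; totalScrap: number } {
  return metrics.reduce(
    (acc, m) => ({
      totalParts: acc.totalParts + m.partsProduced,
      totalScrap: acc.totalScrap + m.partsScrapped,
    }),
    { totalParts: 0, totalScrap: 0 }
  );
}

// Drill-down view: detailed data for a single machine, including a simple
// operator-efficiency figure (parts per runtime hour).
function machineDetail(metrics: MachineMetric[], machineId: string) {
  const m = metrics.find((x) => x.machineId === machineId);
  if (!m) return undefined;
  const partsPerHour = m.runtimeMinutes > 0 ? (m.partsProduced / m.runtimeMinutes) * 60 : 0;
  return { ...m, partsPerHour };
}

// Example data and usage.
const sample: MachineMetric[] = [
  { machineId: "CNC-07", operator: "A. Smith", partsProduced: 412, partsScrapped: 6, runtimeMinutes: 430 },
  { machineId: "CNC-12", operator: "B. Jones", partsProduced: 388, partsScrapped: 11, runtimeMinutes: 415 },
];

console.log(plantSummary(sample));            // plant-level overview screen
console.log(machineDetail(sample, "CNC-07")); // drill-down screen for one machine
```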
There are several hardware technology approaches to retrieving data from the machinery. The science of interfacing industrial machines is widely referred to as Industry 4.0 or IIoT (Industrial Internet of Things). Some industry standards, such as MTConnect , are emerging in an attempt to make CNC machine tools publish production data in a uniform format to web servers. [ 2 ] | https://en.wikipedia.org/wiki/Industrial_dashboard
Industrial data processing is a branch of applied computer science that covers the area of design and programming of computerized systems which are not computers as such — often referred to as embedded systems ( PLCs , automated systems , intelligent instruments, etc.). The products concerned contain at least one microprocessor or microcontroller , as well as couplers (for I/O ).
Another current definition of industrial data processing is that it concerns those computer programs whose variables in some way represent physical quantities; for example the temperature and pressure of a tank, the position of a robot arm, etc.
| https://en.wikipedia.org/wiki/Industrial_data_processing
Throughout the 1850s, the Principality of Wallachia underwent an industrial revolution which yielded, among others, the first oil refinery in the world. Six years after the first Wallachian industrial establishment was completed (1853), the country united with the Principality of Moldavia to form Romania .
The first industrial establishment based on mechanized work and steam power was introduced in 1853, in the form of the Assan Steam Mill. The mill also carried out oil pressing and brandy distilling. Situated on the outskirts of Bucharest , it was founded by George Assan , using modern machinery from Vienna . Assan ran several pharmacies and wine shops which enabled him to purchase the machinery and build the mill. After Assan's death in 1866, the mill was taken over by his wife, Alexandrina. The facility changed hands again in 1884, when it was taken over by the Assans' sons. The mill remained active into the 20th century. [ 1 ] [ 2 ] [ 3 ]
The most notable accomplishment of Wallachia's industrial revolution was the building of the world's first industrial oil refinery in 1856-1857. It was built by Teodor Mehedinţeanu . Situated at Râfov , near Ploiești , the refinery had a processing capacity of 7.5 tons per day. Two more important firsts resulted from this achievement. Wallachia (and by extension, Romania) became the first country in the world with an officially recorded crude oil production, 275 tons. Also in 1857, Bucharest became the first city in the world to use lamp oil for public illumination. The next two countries to record oil production were the United States and Italy in 1860. [ 4 ] [ 5 ] [ 6 ] [ 7 ]
Food industries developed at Brăila starting from the 1830s. [ 8 ]
In 1855, a brick factory which employed the use of machinery was founded in Bucharest. It was a military asset , being run by the Cavalry commander Serdar Filipescu . [ 9 ] [ 10 ] | https://en.wikipedia.org/wiki/Industrial_development_in_the_Principality_of_Wallachia |
Industrial dryers are used to efficiently process large quantities of bulk materials that need reduced moisture levels. Depending on the amount and the makeup of material needing to be dried, industrial dryers come in many different models constructed specifically for the type and quantity of material to be processed. The most common types of industrial dryers are fluidized bed dryers, rotary dryers , rolling bed dryers , conduction dryers, convection dryers, pharmaceutical dryers, suspension/paste dryers, toroidal bed or TORBED dryers and dispersion dryers. Various factors are considered in determining the correct type of dryer for any given application, including the material to be dried, drying process requirements, production requirements, final product quality requirements and available facility space. [ 1 ] [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Industrial_dryer |
Industrial engineering (IE) is concerned with the design, improvement and installation of integrated systems of people, materials, information, equipment and energy. It draws upon specialized knowledge and skill in the mathematical, physical, and social sciences together with the principles and methods of engineering analysis and design, to specify, predict, and evaluate the results to be obtained from such systems. [ 1 ] Industrial engineering is a branch of engineering that focuses on optimizing complex processes, systems, and organizations by improving efficiency, productivity, and quality. It combines principles from engineering, mathematics, and business to design, analyze, and manage systems that involve people, materials, information, equipment, and energy. Industrial engineers aim to reduce waste, streamline operations, and enhance overall performance across various industries, including manufacturing, healthcare, logistics, and service sectors.
Industrial engineers are employed in numerous industries, such as automobile manufacturing, aerospace, healthcare, forestry, finance, leisure, and education. [ 2 ] Industrial engineering combines the physical and social sciences together with engineering principles to improve processes and systems. [ 3 ]
Several industrial engineering principles are followed to ensure the effective flow of systems, processes, and operations. Industrial engineers work to improve quality and productivity while simultaneously cutting waste. [ 3 ] They use principles such as lean manufacturing, six sigma, information systems, process capability, and more.
These principles allow the creation of new systems , processes or situations for the useful coordination of labor , materials and machines . [ 4 ] [ 5 ] Depending on the subspecialties involved, industrial engineering may also overlap with operations research , systems engineering , manufacturing engineering , production engineering , supply chain engineering , management science , engineering management , financial engineering , ergonomics or human factors engineering , safety engineering , logistics engineering , quality engineering or other related capabilities or fields.
The origins of industrial engineering are generally traced back to the Industrial Revolution with the rise of factory systems and mass production. The fundamental concepts began to emerge through ideas like Adam Smith's division of labor and the implementation of interchangeable parts by Eli Whitney. [ 6 ] The term "industrial engineer" is credited to James Gunn who proposed the need for such an engineer focused on production and cost analysis in 1901. However, Frederick Taylor is widely credited as the "father of industrial engineering" for his focus on scientific management, emphasizing time studies and standardized work methods, with his principles being published in 1911. Notably, Taylor established the first department dedicated to industrial engineering work, called "Elementary Rate Fixing," in 1885 with the goal of process improvement and productivity increase. [ 7 ] Frank and Lillian Gilbreth further contributed significantly with their development of motion studies and therbligs for analyzing manual labor in the early 20th century. The early focus of the field was heavily on improving efficiency and productivity within manufacturing environments, driven in part by the call for cost reduction by engineering professionals, as highlighted by the first president of ASME in 1880. [ 8 ] The formalization of the discipline continued with the founding of the American Institute of Industrial Engineering (AIIE) in 1948. In more recent years, industrial engineering has expanded beyond manufacturing to include areas like healthcare, project management, and supply chain optimization. [ 9 ]
The origins of systems engineering as a recognized discipline can be traced back to World War II, where its principles began to emerge to manage the complexities of new war technologies. Although systems thinking predates this period, the analysis of the RAF Fighter Command C2 System during the Battle of Britain (even though the term wasn't yet invented) is considered an early example of high-caliber systems engineering. The first known public use of the term "systems engineering" occurred in March 1950 by Mervin J. Kelly of Bell Telephone Laboratories, who described it as crucial for defining new systems and guiding the application of research in creating new services.

The first published paper specifically on the subject appeared in 1956 by Kenneth Schlager, who noted the growing importance of systems engineering due to increasing technological complexity and the formation of dedicated systems engineering groups. In 1957, E.W. Engstrom further elaborated on the concept, emphasizing the determination of objectives and the thorough consideration of all influencing factors as requirements for successful systems engineering. That same year also saw the publication of the first textbook on the subject, "Systems Engineering: An Introduction to the Design of Large-Scale Systems" by Goode and Machol. Early practices of systems engineering were generally informal, transdisciplinary, and deeply rooted in the application domain.

Following these initial mentions and publications, the field saw further development in the 1960s and 1970s, with figures like Arthur Hall defining traits of a systems engineer and viewing it as a comprehensive process. Despite its informal nature, systems engineering played a vital role in major achievements like the 1969 Apollo moon landing. A significant step towards formalization occurred in July 1969 with the introduction of the first formal systems engineering process, Military Standard (MIL-STD)-499: System Engineering Management, by the U.S. Air Force. This standard aimed to provide guidance for managing the systems engineering process and was later extended and updated.

The need for formally trained systems engineers led to the formation of the National Council on Systems Engineering (NCOSE) in the late 1980s, which evolved into the International Council on Systems Engineering (INCOSE). INCOSE further contributed to the formalization of the field through publications like its journal "Systems Engineering" starting in 1994 and the first edition of the "Systems Engineering Handbook" in 1997. Additionally, organizations like NASA published their own systems engineering handbooks. In the 21st century, international standardization became a key aspect, with the International Standards Organization (ISO) publishing its first standard defining systems engineering application and management in 2005, further solidifying its standing as a formal discipline. [ 10 ]
Frederick Taylor (1856–1915) is generally credited as the father of the industrial engineering discipline. He earned a degree in mechanical engineering from Stevens Institute of Technology and earned several patents from his inventions. Taylor is the author of many well-known works, including a book, The Principles of Scientific Management , which became a classic of management literature. It is considered one of the most influential management books of the 20th century. [ 11 ] The book laid out three goals: to illustrate how the country loses through inefficiency, to show that the solution to inefficiency is systematic management, and to show that the best management rests on defined laws, rules, and principles that can be applied to all kinds of human activity. Taylor is remembered for developing the stopwatch time study. [ 6 ] Taylor's findings set the foundation for industrial engineering.
Frank Gilbreth (1868-1924), along with his wife Lillian Gilbreth (1878-1972), also had a significant influence on the development of Industrial Engineering. Their work is housed at Purdue University . In 1907, Frank Gilbreth met Frederick Taylor , and he learned tremendously from Taylor’s work. [ 12 ] Frank and Lillian created 18 kinds of elemental motions that make up a set of fundamental motions required for a worker to perform a manual operation or task. They named the elements therbligs , which are used in the study of motion in the workplace. [ 13 ] These developments were the beginning of a much broader field known as human factors or ergonomics.
Through the efforts of Hugo Diemer , the first course on industrial engineering was offered as an elective at Pennsylvania State University in 1908. [ 14 ] The first doctoral degree in industrial engineering was awarded in 1933 by Cornell University . [ 15 ]
Henry Gantt (1861–1919) immersed himself in the growing movement of Taylorism. Gantt is best known for creating a management tool, the Gantt chart . Gantt charts display dependencies pictorially, which allows project managers to keep everything organized. They are studied in colleges and used by project managers around the world. In addition to creating the Gantt chart, Gantt made many other significant contributions to scientific management. He cared about worker incentives and the impact businesses had on society. Today, the American Society of Mechanical Engineers awards the Gantt Medal for “distinguished achievement in management and for service to the community.” [ 16 ]
Henry Ford (1863–1947) further revolutionized factory production with the first installation of a moving assembly line . This innovation reduced the time it took to build a car from more than 12 hours to one hour and 33 minutes. [ 17 ] This continuous-flow production method introduced a new way of manufacturing automobiles. Ford is also known for transforming the workweek schedule: he cut the typical six-day workweek to five days and doubled the daily pay, thus creating the now-standard 40-hour workweek. [ 18 ]
Total quality management (TQM) emerged in the 1940s and gained momentum after World War II. The term was coined to describe the Japanese-style management approach to quality improvement. Total quality management can be described as a management system for a customer-focused organization that engages all employees in the continual improvement of the organization. Joseph Juran is credited with being a pioneer of TQM for teaching the concepts of quality control and managerial breakthrough. [ 19 ]
The American Institute of Industrial Engineers was formed in 1948. The early work by F. W. Taylor and the Gilbreths was documented in papers presented to the American Society of Mechanical Engineers as interest grew from merely improving machine performance to improving the performance of the overall manufacturing process, most notably beginning with the presentation by Henry R. Towne (1844–1924) of his paper The Engineer as An Economist (1886). [ 20 ]
From 1960 to 1975, with the development of decision support systems for supply management, such as material requirements planning (MRP), the emphasis shifted to the timing of industrial operations (inventory, production, compounding, transportation, etc.). In 1976, the Israeli scientist Jacob Rubinovitz installed the CMMS program, developed at IAI and Control-Data (Israel), in South Africa and then worldwide.
In the 1970s, with the penetration of Japanese management theories such as Kaizen and Kanban , Japan realized very high levels of quality and productivity. These theories improved issues of quality, delivery time, and flexibility. Companies in the West recognized the great impact of Kaizen and started implementing their own continuous improvement programs. W. Edwards Deming made significant contributions to the minimization of variance, starting in the 1950s and continuing to the end of his life.
In the 1990s, following the globalization of industry, the emphasis shifted to supply chain management and customer-oriented business process design. The theory of constraints , developed by the Israeli scientist Eliyahu M. Goldratt (1985), is also a significant milestone in the field.
In recent years (late 2000s to 2025), the traditional skills of industrial engineering, such as system optimization, process improvement, and efficiency management, remain essential. However, these foundational abilities are increasingly complemented by a deeper understanding of emerging technologies, such as artificial intelligence , machine learning, and IoT ( Internet of Things ). Proficiency in data analytics has become crucial, as it allows engineers to harness big data and derive insights that inform decision-making and innovation. Additionally, knowledge in fields such as cybersecurity, software development, and sustainable practices is becoming integral to the industrial engineering scope. [ 21 ]
Beyond 2025, professionals across various industries will need to stay abreast of these advancements. The ongoing evolution of industrial engineering is expected to open new career pathways and reshape existing roles, and companies and individuals that adapt proactively to these changes will be best positioned to harness the full potential of the field. [ 21 ]
While originally applied to manufacturing , the use of industrial in industrial engineering can be somewhat misleading, since it has grown to encompass any methodical or quantitative approach to optimizing how a process, system, or organization operates. In fact, the industrial in industrial engineering means the industry in its broadest sense. [ 22 ] People have changed the term industrial to broader terms such as industrial and manufacturing engineering , industrial and systems engineering , industrial engineering and operations research , or industrial engineering and management .
There are numerous sub-disciplines associated with industrial engineering, including those in the following non-exhaustive list. While some industrial engineers focus exclusively on one of these sub-disciplines, many deal with a combination of them. The first 14 of these sub-disciplines come from the IISE Body of Knowledge. [ 1 ] These are considered knowledge areas, and many of them overlap in content.
Industrial engineering students take courses in work analysis and design, process design, human factors, facilities planning and layout, engineering economic analysis, production planning and control, systems engineering, computer utilization and simulation, operations research, quality control, automation, robotics, and productivity engineering. [ 23 ]
Various universities around the world offer industrial engineering degrees. The Edwardson School of Industrial Engineering at Purdue University, the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Institute of Technology, and the Department of Industrial and Operations Engineering at the University of Michigan are among the named industrial engineering departments in the United States. Other universities with industrial engineering programs include Virginia Tech, Texas A&M, Northwestern University, the University of Wisconsin–Madison, the University of Southern California, and NC State University.
Attending an accredited program is important because ABET accreditation ensures that graduates have met the educational requirements necessary to enter the profession. [ 24 ] This quality of education is recognized internationally and prepares students for successful careers.
Internationally, industrial engineering degrees accredited within any member country of the Washington Accord enjoy equal accreditation within all other signatory countries, thus allowing engineers from one country to practice engineering professionally in any other.
Universities offer degrees at the bachelor, master, and doctoral levels.
In the United States, the undergraduate degree earned is either a bachelor of science (BS) or a bachelor of science and engineering (BSE) in industrial engineering (IE). In South Africa , the undergraduate degree is a bachelor of engineering (BEng). Variations of the title include Industrial & Operations Engineering (IOE), and Industrial & Systems Engineering (ISE or ISyE).
The typical curriculum includes a broad math and science foundation spanning chemistry , physics , mechanics (i.e., statics, kinematics, and dynamics), materials science , computer science , electronics/circuits, engineering design , and the standard range of engineering mathematics (i.e., calculus , linear algebra , differential equations , statistics ). For any engineering undergraduate program to be accredited, regardless of concentration, it must cover a largely similar span of such foundational work, which also overlaps heavily with the content tested on one or more engineering licensure exams in most jurisdictions.
The coursework specific to IE entails specialized courses in areas such as optimization , applied probability , stochastic modeling, design of experiments , statistical process control , simulation , manufacturing engineering , ergonomics / safety engineering , and engineering economics . Industrial engineering elective courses typically cover more specialized topics in areas such as manufacturing , supply chains and logistics , analytics and machine learning , production systems , human factors and industrial design , and service systems . [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ]
Certain business schools may offer programs with some overlapping relevance to IE, but the engineering programs are distinguished by a much more intensely quantitative focus, required engineering science electives, and the core math and science courses required of all engineering programs.
The usual graduate degree earned is the master of science (MS), master of science and engineering (MSE) or master of engineering (MEng) in industrial engineering or various alternative related concentration titles.
Typical MS curricula may cover: | https://en.wikipedia.org/wiki/Industrial_engineering |
Industrial enzymes are enzymes that are commercially used in a variety of industries such as pharmaceuticals , chemical production, biofuels , food and beverage, and consumer products. Due to advancements in recent years, biocatalysis through isolated enzymes is considered more economical than the use of whole cells. Enzymes may be used as a unit operation within a process to generate a desired product, or may be the product of interest. Industrial biological catalysis through enzymes has experienced rapid growth in recent years due to their ability to operate at mild conditions and their exceptional chiral and positional specificity, properties that traditional chemical processes lack. [ 1 ] Isolated enzymes are typically used in hydrolytic and isomerization reactions. Whole cells are typically used when a reaction requires a co-factor . Although co-factors may be generated in vitro , it is typically more cost-effective to use metabolically active cells. [ 1 ]
Despite their excellent catalytic capabilities, enzymes and their properties must be improved prior to industrial implementation in many cases. Some aspects of enzymes that must be improved prior to implementation are stability, activity, inhibition by reaction products, and selectivity towards non-natural substrates. This may be accomplished through immobilization of enzymes on a solid material, such as a porous support. [ 2 ] Immobilization of enzymes greatly simplifies the recovery process, enhances process control, and reduces operational costs. Many immobilization techniques exist, such as adsorption, covalent binding, affinity, and entrapment. [ 3 ] Ideal immobilization processes should not use highly toxic reagents in the immobilization technique to ensure stability of the enzymes. [ 4 ] After immobilization is complete, the enzymes are introduced into a reaction vessel for biocatalysis.
Enzyme adsorption onto carriers functions based on chemical and physical phenomena such as van der Waals forces , ionic interactions , and hydrogen bonding . These forces are weak, and as a result, do not affect the structure of the enzyme. A wide variety of enzyme carriers may be used. Selection of a carrier is dependent upon the surface area, particle size, pore structure, and type of functional group. [ 5 ]
Many binding chemistries may be used to adhere an enzyme to a surface, with varying degrees of success. The most successful covalent binding techniques include binding via glutaraldehyde to amino groups and via N-hydroxysuccinimide esters . These immobilization techniques occur at ambient temperatures in mild conditions, which have limited potential to modify the structure and function of the enzyme. [ 6 ]
Immobilization using affinity relies on the specificity of an enzyme to couple an affinity ligand to an enzyme to form a covalently bound enzyme-ligand complex. The complex is introduced into a support matrix for which the ligand has high binding affinity, and the enzyme is immobilized through ligand-support interactions. [ 3 ]
Immobilization using entrapment relies on trapping enzymes within gels or fibers, using non-covalent interactions. Characteristics that define a successful entrapping material include high surface area, uniform pore distribution, tunable pore size, and high adsorption capacity. [ 3 ]
Enzymes typically constitute a significant operational cost for industrial processes, and in many cases, must be recovered and reused to ensure economic feasibility of a process. Although some biocatalytic processes operate using organic solvents, the majority of processes occur in aqueous environments, improving the ease of separation. [ 1 ] Most biocatalytic processes occur in batch, differentiating them from conventional chemical processes. As a result, typical bioprocesses employ a separation technique after bioconversion. In this case, product accumulation may cause inhibition of enzyme activity. Ongoing research is performed to develop in situ separation techniques, where product is removed from the batch during the conversion process. Enzyme separation may be accomplished through solid-liquid extraction techniques such as centrifugation or filtration, and the product-containing solution is fed downstream for product separation. [ 1 ]
To industrialize an enzyme, the following upstream and downstream enzyme production processes are considered:
Upstream processes are those that contribute to the generation of the enzyme.
An enzyme must be selected based upon the desired reaction. The selected enzyme defines the required operational properties, such as pH, temperature, activity, and substrate affinity. [ 12 ]
The choice of a source of enzymes is an important step in the production of enzymes. It is common to examine the role of enzymes in nature and how they relate to the desired industrial process. Enzymes are most commonly sourced through bacteria, fungi, and yeast. Once the source of the enzyme is selected, genetic modifications may be performed to increase the expression of the gene responsible for producing the enzyme. [ 12 ]
Process development is typically performed after genetic modification of the source organism, and involves the modification of the culture medium and growth conditions. In many cases, process development aims to reduce mRNA hydrolysis and proteolysis . [ 12 ]
Scaling up of enzyme production requires optimization of the fermentation process. Most enzymes are produced under aerobic conditions, and as a result, require constant oxygen input, impacting fermenter design. Due to variations in the distribution of dissolved oxygen, as well as temperature, pH, and nutrients, the transport phenomena associated with these parameters must be considered. The highest possible productivity of the fermenter is achieved at maximum transport capacity of the fermenter. [ 12 ] [ 13 ]
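The oxygen transport capacity referred to above is commonly characterized by the volumetric mass-transfer coefficient kLa. The relation below is a standard textbook expression, included here only as an illustration rather than as part of any specific process design; C* is the dissolved-oxygen concentration in equilibrium with the gas phase and CL is the actual dissolved-oxygen concentration in the broth.

    % Oxygen transfer rate from sparged gas into the fermentation broth
    \mathrm{OTR} = k_L a \left( C^{*} - C_L \right)
    % k_L a : volumetric oxygen mass-transfer coefficient (per hour)
    % C^{*} : dissolved-oxygen saturation concentration
    % C_L   : actual dissolved-oxygen concentration in the broth

At the maximum transport capacity of the fermenter, the oxygen uptake rate of the growing culture is limited by this transfer rate.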
Downstream processes are those that contribute to separation or purification of enzymes.
The procedures for enzyme recovery depend on the source organism, and whether enzymes are intracellular or extracellular. Typically, intracellular enzymes require cell lysis and separation of complex biochemical mixtures. Extracellular enzymes are released into the culture medium, and are much simpler to separate. Enzymes must maintain their native conformation to ensure their catalytic capability. Since enzymes are very sensitive to pH, temperature, and ionic strength of the medium, mild isolation conditions must be used. [ 12 ]
Depending on the intended use of the enzyme, different levels of purity are required. For example, enzymes used for diagnostic purposes must be separated to a higher purity than bulk industrial enzymes to prevent catalytic activity that provides erroneous results. Enzymes used for therapeutic purposes typically require the most rigorous separation. Most commonly, a combination of chromatography steps is employed for separation. [ 12 ]
The purified enzymes are either sold in pure form to other industries or added to consumer goods. | https://en.wikipedia.org/wiki/Industrial_enzymes |
Industrial fans and blowers are machines whose primary function is to provide and accommodate a large flow of air or gas to various parts of a building or other structures. This is achieved by rotating a number of blades, connected to a hub and shaft, and driven by a motor or turbine . The flow rates of these mechanical fans range from approximately 200 cubic feet per minute (5.7 m³/min) to 2,000,000 cubic feet per minute (57,000 m³/min). A blower is another name for a fan that operates where the resistance to the flow is primarily on the downstream side of the fan.
There are many uses for the continuous flow of air or gas that industrial fans generate, including combustion , ventilation , aeration , particulate transport , exhaust, cooling, air-cleaning , and drying , to name a few. The industries served include electrical power production , pollution control , metal manufacturing and processing , cement production , mining , petrochemical , food processing , cryogenics , and clean rooms .
Most industrial fans may be categorized into one of two general types: centrifugal fans and axial fans.
The centrifugal design uses the centrifugal force generated by a rotating disk, with blades mounted at right angles to the disk, to impart movement to the air or gas and increase its pressure. The assembly of the hub, disk and blades is known as the fan wheel, and often includes other components with aerodynamic or structural functions. The centrifugal fan wheel is typically contained within a scroll-shaped fan housing, resembling the shell of the nautilus sea creature with a central hole. The air or gas inside the spinning fan is thrown off the outside of the wheel, to an outlet at the housing's largest diameter. This simultaneously draws more air or gas into the wheel through the central hole. [ 1 ] Inlet and outlet ducting are often attached to the fan's housing, to supply and/or exhaust the air or gas to the industry's requirements.
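The pressure rise that a centrifugal wheel can impart is often introduced through Euler's turbomachinery relation. The expression below is a standard textbook idealization shown only as an illustration: it assumes no swirl at the wheel inlet and neglects all losses, and the symbols are defined in the comments rather than taken from any particular fan standard.

    % Ideal (loss-free) total pressure rise across a centrifugal fan wheel,
    % assuming the gas enters the wheel with no tangential velocity component.
    \Delta p_{\mathrm{ideal}} = \rho \, U_2 \, C_{u2}
    % \rho    : gas density
    % U_2     : blade tip speed at the wheel outlet
    % C_{u2}  : tangential component of the absolute gas velocity leaving the wheel

In a real fan the delivered pressure is lower because of friction, leakage, and flow separation, which is one reason the fan types described below are characterized by their typical efficiencies.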
There are many varieties of centrifugal fans, which may have fan wheels that range from less than 3 cm to over 16 feet (4.9 m) in diameter.
The axial design uses axial forces to achieve the movement of the air or gas, spinning a central hub with blades extending radially from its outer diameter. The fluid is moved parallel to the fan wheel's shaft, or axis of rotation. The axial fan is often contained within a short section of cylindrical ductwork, to which inlet and outlet ducting can be connected.
Axial fan types have fan wheels with diameters that usually range from less than a foot (0.3 meters) to over 30 feet (9.1 m), although axial cooling tower fan wheels may exceed 82 feet (25 m) in diameter.
In general, axial fans are used where the principal requirement is for a large volume of flow, and the centrifugal design where both flow and higher pressures are required. Axial fans provide huge airflow at low pressures. They draw air parallel to the axis and force it straight out.
There are several paths to determining a fan design for an application.
For industries where the application requirements do not vary greatly and applicable fan designs have diameters of around 4 feet (1.2 meters) or less, a standard or pre-engineered design might be selected.
When the application involves more complex specifications or a larger fan, then a design based on an existing model configuration will often satisfy the requirements. Many model configurations already cover the range of current industry processes. An appropriate model from the fan company's catalogue is selected, and the company's engineers apply design rules to calculate the dimensions and select options and material for the desired performance, strength and operating environment.
Some applications require a dedicated, custom configuration for a fan design to satisfy all specifications.
All industrial fan designs must be accurately engineered to meet performance specifications while maintaining structural integrity. For each application, there are specific flow and pressure requirements. Depending on the application, the fan may be subject to high rotating speeds, an operating environment with corrosive chemicals or abrasive air streams, and extreme temperatures. Larger fans and higher speeds produce greater forces on the rotating structures; for safety and reliability, the design must eliminate excessive stresses and excitable resonant frequencies. Computer modeling programs for computational fluid dynamics (CFD) and finite element analysis (FEA) are often employed in the design process, in addition to laboratory scale model testing. Even after the fan is built the verification might continue, using fan performance testing for flow and pressure, strain gage testing for stresses and tests to record the fan's resonant frequencies.
Fan types and their subtypes are industry standard, recognized by all major fan producers. [ 2 ]
Any of these fan subtypes can be built with long-lasting erosion-resistant liners.
Airfoil ( Air foil ) – Used for a wide range of applications in many industries, fans with hollow, airfoil-profiled blades are designed for use in airstreams where high efficiency and quiet operation are required. They are used extensively for continuous service at ambient and elevated temperatures in forced and induced draft applications in the metals, chemical, power generation, paper, rock products, glass, resource recovery , incineration and other industries throughout the world.
Backward curve – These fans have efficiencies nearly as high as the airfoil design. An advantage is that their single-thickness, curved plate blades prevent the possibility of dust particle buildup inside the blade, as may occur with perforated airfoil blades. The robust design allows high tip-speed operation, and therefore this fan is often used in high-pressure applications.
Backward inclined – These fans have simple flat blades, backwardly inclined to match the velocity pattern of the air passing through the fan wheel for high-efficiency operation. These fans are typically used in high-volume, relatively low-pressure, clean air applications.
Radial blade – The flat blades of this type are arranged in a radial pattern. These rugged fans offer high pressure capability with average efficiency. They are often fitted with erosion-resistant liners to extend rotor life. The housing design is compact to minimize the floor space requirement.
Radial tipped – These fans have wheels that are backwardly curved overall, but in a way slightly different from backward curved fans: whereas the blades of a backward curved wheel curve outward along their whole length, radial-tip blades curve inward and become radial at their tips (hence the name "radial tip"), while remaining in a backwardly curved configuration overall. Their curvature can also be thought of as radial at the tips but gradually sloping toward the direction of rotation. This rugged design is used in high-volume flow rate applications when the pressure requirement is rather high and erosion resistance is necessary. It offers medium efficiencies. A common application is the dirty side of a baghouse or precipitator. The design is more compact than airfoil, backward curved or backward inclined fans.
Paddle-wheel – This is an open impeller design without shrouds. Although the efficiency is not high, this fan is well suited for applications with extremely high dust loading. It can be offered with field-replaceable blade liners from ceramic tiles or tungsten carbide . This fan may also be used in high-temperature applications.
Forward-curve – This "squirrel cage" impeller generates the highest volume flow rate (for a given tip speed) of all the centrifugal fans. Therefore, it often has the advantage of offering the smallest physical package available for a given application. This type of fan is commonly used in high-temperature furnaces. However, these fans can only be used for conveying air with low dust loading, both because they are the most sensitive to particle build-up and because of the large number of blades that forward-curve wheels require.
Industrial exhausters – This is a relatively inexpensive, medium-duty, steeply inclined flat-bladed fan for exhausting gases, conveying chips, etc.
Pre-engineered fans (PE) – A series of fans of varying blade shapes that are usually available in only standard sizes. Because they are pre-engineered these fans may be available with relatively short delivery times. Often, pre-engineered rotors with various blade shapes may be installed into a common housing. These are often available in a wide range of volume and pressure requirements to meet the needs of many applications.
Pressure blowers – These are high-pressure, low-volume blowers used in combustion air applications in furnaces or to provide “blow-off” air for clearing and/or drying applications.
Surgeless blowers – These high-pressure, low-volume blowers have a reduced tendency for “surging” (periodic variation of flow rate) even at severely reduced fan speeds. This allows extreme turndown (low-flow) without significant pulsation.
Mechanical vapor recovery blowers – These specially designed centrifugal fans increase the temperature and pressure of saturated steam in a closed-loop system.
Acid gas blowers – These heavily constructed blowers are suitable for inlet pressures from full vacuum to 100 psig. Materials are selected for corrosion resistance to the gases and particulate handled.
Specialty process gas blowers - These blowers are for high pressure petrochemical processes.
High-temperature axial fans – These are high-volume fans designed to operate against low flow resistance in industrial convection furnaces. They may be of either single-direction or bi-directional designs. Extremely rugged, they are most often used in high-temperature furnace applications (up to 1,800 °F).
Tube axial fans – These are axial fan units with fan wheels located in cylindrical tubes, without inlet or outlet dampers.
Vaneaxial fans – These axial flow fans have a higher pressure capability due to the presence of static vanes.
Variable pitch axial fans – The blades on these axial fans are manually adjustable to permit the blade angle to be changed. This allows operation over a much wider range of volume/pressure relationships. The blades are adjusted periodically to optimize efficiency by matching the blade pitch to the varying conditions for the application. These fans are often used in mining applications.
Variable pitch on-the-fly axial fans – These are similar to “Variable Pitch Axial Fans” except they include an internal mechanism that allows the blade pitch to be adjusted while the fan rotor is in motion. These versatile fans offer high-efficiency operation at many different points of operation. This instantaneous blade adjustment capability is an advantage that is possible with axial fans only.
Cooling fans - (also referred to as "cooling tower fans") - These are axial fans, typically with large diameters, for low pressures and large volumes of airflow. Applications are in wet mechanical cooling towers, air-cooled steam condensers, air-cooled heat exchangers, radiators, or similar air-cooled applications.
Mixed-flow fans - The gas flow patterns these fans produce resemble a combination of axial and centrifugal patterns, although the fan wheels often appear similar to centrifugal wheels. There are various types of mixed-flow fans, including gas-tight high-pressure fans and blowers.
Jet fans are used for daily ventilation requirements and for smoke extraction in case of fire (250 °C / 120 min).
These industrial fans have symmetrical impeller blades and are 100% reversible, with low-noise IP55 motors and insulation class H (smoke-extraction version). Applications include basement ventilation, tunnel ventilation, and similar installations.
There are several means of controlling the flow rate of a fan, e.g., temporarily reducing the air or gas flow rate; these can be applied to both centrifugal and axial fans.
Speed Variation - All of the fan types described above can be used in conjunction with a variable speed driver. This might be an adjustable frequency AC controller, a DC motor and drive, a steam turbine driver, or a hydraulic variable speed drive unit ("fluid drive"). Flow control by means of variable speed is typically smoother and more efficient than by means of damper control. Significant power savings (with reduced cost of operation) are possible if variable speed fan drives are used for applications that require reduced flow operation for a significant portion of the system operating life.
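The energy advantage of speed control can be illustrated with the fan affinity laws, under which flow varies with speed, pressure with the square of speed, and shaft power with the cube of speed. The Python sketch below is illustrative only; the fan figures (100,000 m³/h, 2.0 kPa, 75 kW) are assumed for the example and do not describe any particular fan.

    # Illustrative sketch of the fan affinity laws for a fixed fan geometry.
    # All figures are assumed for the example.

    def affinity_scaled(flow, pressure, power, speed_ratio):
        """Scale fan performance from one rotational speed to another.

        flow     scales with speed            (Q2 = Q1 * r)
        pressure scales with speed squared    (P2 = P1 * r**2)
        power    scales with speed cubed      (W2 = W1 * r**3)
        """
        return (flow * speed_ratio,
                pressure * speed_ratio ** 2,
                power * speed_ratio ** 3)

    # Example: a fan delivering 100,000 m^3/h at 2.0 kPa while absorbing 75 kW,
    # slowed to 80 % speed by a variable speed drive.
    q2, p2, w2 = affinity_scaled(100_000, 2.0, 75.0, 0.8)
    print(f"flow {q2:,.0f} m^3/h, pressure {p2:.2f} kPa, power {w2:.1f} kW")
    # -> roughly 80,000 m^3/h, 1.28 kPa and 38.4 kW: about half the shaft power
    #    for 80 % of the flow, which is why speed control saves energy compared
    #    with throttling the same fan by dampers.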
Industrial Dampers - These devices also allow fan volumetric flow control during operation, by means of panels so as to direct gas flow or restrict the inlet or outlet areas.
There is a variety of dampers available:
Louvered inlet box dampers, radial inlet dampers, variable inlet vane (VIV) dampers, vortex dampers, and discharge dampers. | https://en.wikipedia.org/wiki/Industrial_fan |
Industrial fermentation is the intentional use of fermentation in manufacturing processes. In addition to the mass production of fermented foods and drinks , industrial fermentation has widespread applications in chemical industry . Commodity chemicals , such as acetic acid , citric acid , and ethanol are made by fermentation. [ 1 ] Moreover, nearly all commercially produced industrial enzymes , such as lipase , invertase and rennet , are made by fermentation with genetically modified microbes . In some cases, production of biomass itself is the objective, as is the case for single-cell proteins , baker's yeast , and starter cultures for lactic acid bacteria used in cheesemaking .
In general, fermentations can be divided into four types: [ 2 ]
These types are not necessarily disjoint from each other, but provide a framework for understanding the differences in approach. The organisms used are typically microorganisms , particularly bacteria , algae , and fungi , such as yeasts and molds , but industrial fermentation may also involve cell cultures from plants and animals, such as CHO cells and insect cells . Special considerations are required for the specific organisms used in the fermentation, such as the dissolved oxygen level, nutrient levels, and temperature . The rate of fermentation depends on the concentration of microorganisms, cells, cellular components, and enzymes, as well as on temperature, pH [ 3 ] and, for aerobic fermentation , the level of oxygen. [ 4 ] Product recovery frequently involves the concentration of the dilute solution .
In most industrial fermentations, the organisms or eukaryotic cells are submerged in a liquid medium; in others, such as the fermentation of cocoa beans , coffee cherries, and miso , fermentation takes place on the moist surface of the medium. [ 5 ] [ 6 ]
There are also industrial considerations related to the fermentation process. For instance, to avoid biological process contamination, the fermentation medium, air, and equipment are sterilized. Foam control can be achieved by either mechanical foam destruction or chemical anti-foaming agents . Several other factors must be measured and controlled such as pressure , temperature , agitator shaft power, and viscosity . An important element for industrial fermentations is scale up. This is the conversion of a laboratory procedure to an industrial process . It is well established in the field of industrial microbiology that what works well at the laboratory scale may work poorly or not at all when first attempted at large scale. It is generally not possible to take fermentation conditions that have worked in the laboratory and blindly apply them to industrial scale equipment. Although many parameters have been tested for use as scale up criteria, there is no general formula because of the variation in fermentation processes. The most important methods are the maintenance of constant power consumption per unit of broth and the maintenance of constant volumetric transfer rate. [ 3 ]
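As an illustration of the first criterion mentioned above (constant power input per unit of broth), the short Python sketch below estimates the stirrer speed needed at production scale under geometric similarity. The correlation P ∝ Np·ρ·N³·D⁵ is a textbook estimate for turbulent stirred tanks, and all numbers are assumptions made for the example, not values from any specific process.

    # Minimal sketch of one common scale-up criterion: keeping the power input
    # per unit of broth (P/V) constant from laboratory to production scale.
    # For a turbulent stirred tank, ungassed impeller power is often estimated
    # as P = Np * rho * N**3 * D**5 (Np = power number, N = stirrer speed in
    # rev/s, D = impeller diameter in m).  All figures below are assumed.

    def stirrer_speed_for_constant_PV(N_lab, D_lab, D_plant):
        """Production-scale stirrer speed that keeps P/V constant, assuming
        geometric similarity (vessel volume scales with D**3)."""
        # P/V ~ N**3 * D**2, so N_plant = N_lab * (D_lab / D_plant) ** (2/3)
        return N_lab * (D_lab / D_plant) ** (2.0 / 3.0)

    # Example: a 0.2 m lab impeller at 10 rev/s scaled to a 2 m plant impeller.
    N_plant = stirrer_speed_for_constant_PV(10.0, 0.2, 2.0)
    print(f"plant stirrer speed ~ {N_plant:.2f} rev/s")   # ~ 2.15 rev/s

The same exercise with a different criterion (for example constant impeller tip speed or constant volumetric transfer rate) gives a different plant-scale speed, which is one reason there is no general scale-up formula.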
Fermentation begins once the growth medium is inoculated with the organism of interest. Growth of the inoculum does not occur immediately. This is the period of adaptation, called the lag phase. [ 7 ] Following the lag phase, the rate of growth of the organism steadily increases, for a certain period—this period is the log or exponential phase. [ 7 ]
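During the log phase the biomass increases at a constant specific growth rate; a standard textbook description of this behaviour, included here only as an illustration, is

    X(t) = X_0 \, e^{\mu t}, \qquad t_d = \frac{\ln 2}{\mu}

where X_0 is the biomass concentration at the start of the phase, \mu is the specific growth rate, and t_d is the corresponding doubling time.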
After a phase of exponential growth, the rate of growth slows down, due to the continuously falling concentrations of nutrients and/or a continuously increasing (accumulating) concentrations of toxic substances. This phase, where the increase of the rate of growth is checked, is the deceleration phase. After the deceleration phase, growth ceases and the culture enters a stationary phase or a steady state. The biomass remains constant, except when certain accumulated chemicals in the culture chemically break down the cells in a process called chemolysis . Unless other microorganisms contaminate the culture, the chemical constitution remains unchanged. If all of the nutrients in the medium are consumed, or if the concentration of toxins is too great, the cells may become senescent and begin to die off. The total amount of biomass may not decrease, but the number of viable organisms will decrease. [ citation needed ]
The microbes or eukaryotic cells used for fermentation grow in (or on) specially designed growth medium which supplies the nutrients required by the organisms or cells. A variety of media exist, but invariably contain a carbon source, a nitrogen source, water, salts, and micronutrients . In the production of wine, the medium is grape must. In the production of bio-ethanol, the medium may consist mostly of whatever inexpensive carbon source is available. [ citation needed ]
Carbon sources are typically sugars or other carbohydrates, although in the case of substrate transformations (such as the production of vinegar) the carbon source may be an alcohol or something else altogether. For large scale fermentations, such as those used for the production of ethanol, inexpensive sources of carbohydrates, such as molasses , corn steep liquor , [ 8 ] sugar cane juice, or sugar beet juice are used to minimize costs. More sensitive fermentations may instead use purified glucose , sucrose , glycerol or other sugars, which reduces variation and helps ensure the purity of the final product. Organisms meant to produce enzymes such as beta galactosidase , invertase or other amylases may be fed starch to select for organisms that express the enzymes in large quantity. [ citation needed ]
Fixed nitrogen sources are required for most organisms to synthesize proteins , nucleic acids and other cellular components. Depending on the enzyme capabilities of the organism, nitrogen may be provided as bulk protein, such as soy meal; as pre-digested polypeptides, such as peptone or tryptone ; or as ammonia or nitrate salts. Cost is also an important factor in the choice of a nitrogen source. Phosphorus is needed for production of phospholipids in cellular membranes and for the production of nucleic acids . The amount of phosphate which must be added depends upon the composition of the broth and the needs of the organism, as well as the objective of the fermentation. For instance, some cultures will not produce secondary metabolites in the presence of phosphate. [ 9 ]
Growth factors and trace nutrients are included in the fermentation broth for organisms incapable of producing all of the vitamins they require. Yeast extract is a common source of micronutrients and vitamins for fermentation media. Inorganic nutrients, including trace elements such as iron, zinc, copper, manganese, molybdenum, and cobalt are typically present in unrefined carbon and nitrogen sources, but may have to be added when purified carbon and nitrogen sources are used. Fermentations which produce large amounts of gas (or which require the addition of gas) will tend to form a layer of foam, since fermentation broth typically contains a variety of foam-reinforcing proteins, peptides or starches. To prevent this foam from occurring or accumulating, antifoaming agents may be added. Mineral buffering salts, such as carbonates and phosphates, may be used to stabilize pH near optimum. When metal ions are present in high concentrations, use of a chelating agent may be necessary. [ citation needed ]
Developing an optimal medium for fermentation is a key concept in efficient optimization. One-factor-at-a-time (OFAT) is the preferred approach researchers use for designing a medium composition. This method involves changing only one factor at a time while keeping the other concentrations constant. The method can be separated into several sub-groups. One is removal experiments, in which the components of the medium are removed one at a time and their effects on the medium are observed. Supplementation experiments involve evaluating the effects of nitrogen and carbon supplements on production. The final type is the replacement experiment, which involves replacing the nitrogen and carbon sources that show an enhancement effect on the intended production. Overall, OFAT has a major advantage over other optimization methods in its simplicity. [ 10 ]
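The OFAT procedure can be summarized in a few lines of code. The sketch below is purely illustrative: the medium components, their test levels, and the measure_yield function (a toy stand-in for actually running a culture and assaying the product) are all assumptions made for the example.

    # Illustrative one-factor-at-a-time (OFAT) medium optimization.
    # Component names, levels, and the yield model are assumed for the example.

    baseline = {"glucose_g_per_L": 20, "peptone_g_per_L": 5, "KH2PO4_g_per_L": 1}
    test_levels = {
        "glucose_g_per_L": [10, 20, 40],
        "peptone_g_per_L": [2.5, 5, 10],
        "KH2PO4_g_per_L": [0.5, 1, 2],
    }

    def measure_yield(medium):
        """Toy stand-in for running a culture and assaying product titre;
        a real fermentation experiment would replace this function."""
        return (medium["glucose_g_per_L"] * 0.8
                + medium["peptone_g_per_L"] * 2.0
                - abs(medium["KH2PO4_g_per_L"] - 1.0) * 5.0)

    best = dict(baseline)
    for factor, levels in test_levels.items():        # vary one factor at a time
        results = {}
        for level in levels:
            medium = dict(best, **{factor: level})    # all other factors held fixed
            results[level] = measure_yield(medium)
        best[factor] = max(results, key=results.get)  # keep the best level found
    print(best)

Because each factor is examined in isolation, interactions between components are missed, which is the usual argument for statistical designs of experiments over OFAT.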
Microbial cells or biomass are sometimes the intended product of a fermentation. Examples include single cell protein , baker's yeast , lactobacillus , E. coli , and others. In the case of single-cell protein, algae is grown in large open ponds which allow photosynthesis to occur. [ 11 ] If the biomass is to be used for inoculation of other fermentations, care must be taken to prevent mutations from occurring.
Metabolites can be divided into two groups: those produced during the growth phase of the organism, called primary metabolites and those produced during the stationary phase, called secondary metabolites . Some examples of primary metabolites are ethanol , citric acid , glutamic acid , lysine , vitamins and polysaccharides . Some examples of secondary metabolites are penicillin , cyclosporin A , gibberellin , and lovastatin . [ 9 ]
Primary metabolites are compounds made during the ordinary metabolism of the organism during the growth phase. A common example is ethanol or lactic acid, produced during glycolysis . Citric acid is produced by some strains of Aspergillus niger as part of the citric acid cycle to acidify their environment and prevent competitors from taking over. Glutamate is produced by some Micrococcus species, [ 12 ] and some Corynebacterium species produce lysine, threonine, tryptophan and other amino acids. All of these compounds are produced during the normal "business" of the cell and released into the environment. There is therefore no need to rupture the cells for product recovery.
Secondary metabolites are compounds made in the stationary phase; penicillin, for instance, prevents the growth of bacteria which could compete with Penicillium molds for resources. Some bacteria, such as Lactobacillus species, are able to produce bacteriocins which prevent the growth of bacterial competitors as well. These compounds are of obvious value to humans wishing to prevent the growth of bacteria, either as antibiotics or as antiseptics (such as gramicidin S ). Fungicides , such as griseofulvin are also produced as secondary metabolites. [ 9 ] Typically secondary metabolites are not produced in the presence of glucose or other carbon sources which would encourage growth, [ 9 ] and like primary metabolites are released into the surrounding medium without rupture of the cell membrane.
In the early days of the biotechnology industry, most biopharmaceutical products were made in E. coli ; by 2004 more biopharmaceuticals were manufactured in eukaryotic cells, such as CHO cells , than in microbes, but used similar bioreactor systems. [ 6 ] Insect cell culture systems came into use in the 2000s as well. [ 13 ]
Of primary interest among the intracellular components are microbial enzymes : catalase , amylase , protease , pectinase , cellulase , hemicellulase , lipase , lactase , streptokinase and many others. [ 14 ] Recombinant proteins , such as insulin , hepatitis B vaccine , interferon , granulocyte colony-stimulating factor , streptokinase and others are also made this way. [ 6 ] The largest difference between this process and the others is that the cells must be ruptured (lysed) at the end of fermentation, and the environment must be manipulated to maximize the amount of the product. Furthermore, the product (typically a protein) must be separated from all of the other cellular proteins in the lysate to be purified.
Substrate transformation involves the transformation of a specific compound into another, such as in the case of phenylacetylcarbinol , and steroid biotransformation , or the transformation of a raw material into a finished product, in the case of food fermentations and sewage treatment.
In the history of food , ancient fermented food processes, such as making bread , wine , cheese , curds , idli , dosa , among others can be dated to more than seven thousand years ago . [ 15 ] They were developed long before humanity had any knowledge of the existence of the microorganisms involved. Some foods such as Marmite are the byproduct of the fermentation process, in this case in the production of beer .
Fermentation is the main source [ citation needed ] of ethanol in the production of ethanol fuel . Common crops such as sugar cane , potato , cassava , and maize are fermented by yeast to produce ethanol which is further processed to become fuel.
In the process of sewage treatment , sewage is digested by enzymes secreted by bacteria. Solid organic matter is broken down into harmless, soluble substances and carbon dioxide. The resulting liquids are disinfected to remove pathogens before being discharged into rivers or the sea, or they can be used as liquid fertilizers. Digested solids, also known as sludge, are dried and used as fertilizer. Gaseous byproducts such as methane can be utilized as biogas to fuel electrical generators . One advantage of bacterial digestion is that it reduces the bulk and odor of sewage, thus reducing the space needed for dumping. The main disadvantage of bacterial digestion in sewage disposal is that it is a very slow process.
A wide variety of agroindustrial waste products can be fermented to use as food for animals, especially ruminants. Fungi have been employed to break down cellulosic wastes to increase protein content and improve in vitro digestibility. [ 16 ]
Precision fermentation is an approach to manufacturing specific functional products that aims to minimise the production of unwanted by-products through the application of synthetic biology , particularly by generating synthetic "cell factories" with engineered genomes and metabolic pathways optimised to produce the desired compounds as efficiently as possible with the available resources. [ 17 ] Precision fermentation of genetically modified microorganisms may be used to manufacture proteins needed for cell culture media, [ 18 ] providing for serum -free cell culture media in the manufacturing process of cultured meat . [ 19 ] A 2021 publication showed that photovoltaic-driven microbial protein production could use 10 times less land for an equivalent amount of protein compared to soybean cultivation. [ 20 ] Some food regulatory agencies, such as the FDA, do not require the labeling of precision fermented foods as GMO since they are produced by, but do not contain, the genetically engineered organisms. [ 21 ] [ self-published source? ] It is unclear how regulation will be handled in EU markets, with some startups such as Formo and Those Vegan Cowboys forming the Food Fermentation Europe (FFE) alliance together with other alt-protein startups to seek regulatory approval. [ 22 ] | https://en.wikipedia.org/wiki/Industrial_fermentation |
Industrial gases are the gaseous materials that are manufactured for use in industry . The principal gases provided are nitrogen , oxygen , carbon dioxide , argon , hydrogen , helium and acetylene , although many other gases and mixtures are also available in gas cylinders. The industry producing these gases is also known as industrial gas , which is seen as also encompassing the supply of equipment and technology to produce and use the gases. [ 1 ] Their production is a part of the wider chemical industry (where industrial gases are often seen as " specialty chemicals ").
Industrial gases are used in a wide range of industries, which include oil and gas , petrochemicals , chemicals , power , mining , steelmaking , metals , environmental protection , medicine , pharmaceuticals , biotechnology , food , water , fertilizers , nuclear power , electronics and aerospace . Industrial gas is sold to other industrial enterprises; typically comprising large orders to corporate industrial clients, covering a size range from building a process facility or pipeline down to cylinder gas supply.
Some trade scale business is done, typically through tied local agents who are supplied wholesale . This business covers the sale or hire of gas cylinders and associated equipment to tradesmen and occasionally the general public. This includes products such as balloon helium , dispensing gases for beer kegs , welding gases and welding equipment, LPG and medical oxygen .
Retail sales of small scale gas supply are not confined to just the industrial gas companies or their agents. A wide variety of hand-carried small gas containers, which may be called cylinders, bottles, cartridges, capsules or canisters are available to supply LPG, butane, propane, carbon dioxide or nitrous oxide. Examples are whipped-cream chargers , powerlets , campingaz and sodastream .
The first gas from the natural environment used by humans was almost certainly air when it was discovered that blowing on or fanning a fire made it burn brighter. Humans also used the warm gases from a fire to smoke foods and steam from boiling water to cook foods.
Carbon dioxide has been known from ancient times as the byproduct of fermentation , particularly for beverages , which was first documented dating from 7000 to 6600 B.C. in Jiahu , China . [ 2 ] Natural gas was used by the Chinese in about 500 B.C. when they discovered the potential to transport gas seeping from the ground in crude pipelines of bamboo to where it was used to boil sea water. [ 3 ] Sulfur dioxide was used by the Romans in winemaking as it had been discovered that burning candles made of sulfur [ 4 ] inside empty wine vessels would keep them fresh and prevent them gaining a vinegar smell. [ 5 ]
Early understanding consisted of empirical evidence and the protoscience of alchemy ; however with the advent of scientific method [ 6 ] and the science of chemistry , these gases became positively identified and understood.
The history of chemistry tells us that a number of gases were identified and either discovered or first made in relatively pure form during the Industrial Revolution of the 18th and 19th centuries by notable chemists in their laboratories . The timeline of attributed discovery for various gases are carbon dioxide (1754), [ 7 ] hydrogen (1766), [ 8 ] [ 9 ] nitrogen (1772), [ 8 ] nitrous oxide (1772), [ 10 ] oxygen (1773), [ 8 ] [ 11 ] [ 12 ] ammonia (1774), [ 13 ] chlorine (1774), [ 8 ] methane (1776), [ 14 ] hydrogen sulfide (1777), [ 15 ] carbon monoxide (1800), [ 16 ] hydrogen chloride (1810), [ 17 ] acetylene (1836), [ 18 ] helium (1868) [ 8 ] [ 19 ] fluorine (1886), [ 8 ] argon (1894), [ 8 ] krypton, neon and xenon (1898) [ 8 ] and radon (1899). [ 8 ]
Carbon dioxide, hydrogen, nitrous oxide, oxygen, ammonia, chlorine, sulfur dioxide and manufactured fuel gas were already being used during the 19th century, and mainly had uses in food , refrigeration , medicine , and for fuel and gas lighting . [ 20 ] For example, carbonated water was being made from 1772 and commercially from 1783, chlorine was first used to bleach textiles in 1785 [ 21 ] and nitrous oxide was first used for dental anaesthesia in 1844. [ 10 ] At this time gases were often generated for immediate use by chemical reactions . A notable example of a generator is Kipp's apparatus , which was invented in 1844 [ 22 ] and could be used to generate gases such as hydrogen, hydrogen sulfide , chlorine, acetylene and carbon dioxide by simple gas evolution reactions . Acetylene was manufactured commercially from 1893, and acetylene generators were used from about 1898 to produce gas for gas cooking and gas lighting ; however, electricity took over as more practical for lighting, and once LPG was produced commercially from 1912, the use of acetylene for cooking declined. [ 20 ]
Once gases had been discovered and produced in modest quantities, the process of industrialisation spurred on innovation and the invention of technology to produce larger quantities of these gases. Notable developments in the industrial production of gases include the electrolysis of water to produce hydrogen (in 1869) and oxygen (from 1888), the Brin process for oxygen production, invented in 1884, the chloralkali process to produce chlorine in 1892 and the Haber process to produce ammonia in 1908. [ 23 ]
The development of uses in refrigeration also enabled advances in air conditioning and the liquefaction of gases. Carbon dioxide was first liquefied in 1823. The first vapor-compression refrigeration cycle, using ether, was invented by Jacob Perkins in 1834; a similar cycle using ammonia was invented in 1873 and another with sulfur dioxide in 1876. [ 20 ] Liquid oxygen and liquid nitrogen were both first made in 1883; liquid hydrogen was first made in 1898 and liquid helium in 1908. LPG was first made in 1910. A patent for LNG was filed in 1914, with the first commercial production in 1917. [ 24 ]
Although no one event marks the beginning of the industrial gas industry, many would take it to be the 1880s with the construction of the first high pressure gas cylinders . [ 20 ] Initially cylinders were mostly used for carbon dioxide in carbonation or dispensing of beverages.
In 1895 refrigeration compression cycles were further developed to enable the liquefaction of air , [ 25 ] most notably by Carl von Linde [ 26 ] allowing larger quantities of oxygen production and in 1896 the discovery that large quantities of acetylene could be dissolved in acetone and rendered nonexplosive allowed the safe bottling of acetylene. [ 27 ]
A particularly important use was the development of welding and metal cutting done with oxygen and acetylene from the early 1900s.
As production processes for other gases were developed many more gases came to be sold in cylinders without the need for a gas generator .
Air separation plants refine air in a separation process and so allow the bulk production of nitrogen and argon in addition to oxygen - these three are often also produced as cryogenic liquid . To achieve the required low distillation temperatures, an Air Separation Unit (ASU) uses a refrigeration cycle that operates by means of the Joule–Thomson effect .
In addition to the main air gases, air separation is also the only practical source for production of the rare noble gases neon , krypton and xenon .
Cryogenic technologies also allow the liquefaction of natural gas , hydrogen and helium . In natural-gas processing , cryogenic technologies are used to remove nitrogen from natural gas in a Nitrogen Rejection Unit ; a process that can also be used to produce helium from natural gas where natural gas fields contain sufficient helium to make this economic. The larger industrial gas companies have often invested in extensive patent libraries in all fields of their business, but particularly in cryogenics.
The other principal production technology in the industry is Reforming. Steam reforming is a chemical process used to convert natural gas and steam into a syngas containing hydrogen and carbon monoxide with carbon dioxide as a byproduct . Partial oxidation and autothermal reforming are similar processes but these also require oxygen from an ASU. Synthesis gas is often a precursor to the chemical synthesis of ammonia or methanol . The carbon dioxide produced is an acid gas and is most commonly removed by amine treating . This separated carbon dioxide can potentially be sequestrated to a carbon capture reservoir or used for Enhanced oil recovery .
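The overall chemistry of steam reforming is well established; the two reactions below (steam methane reforming followed by the water–gas shift) are shown only as a summary of the process described above, not as a description of any particular plant.

    \mathrm{CH_4 + H_2O \;\rightleftharpoons\; CO + 3\,H_2} \qquad \text{(steam reforming, strongly endothermic)}
    \mathrm{CO + H_2O \;\rightleftharpoons\; CO_2 + H_2} \qquad \text{(water--gas shift)}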
Air separation and hydrogen reforming technologies are the cornerstone of the industrial gases industry and also form part of the technologies required for many fuel gasification (including IGCC ), cogeneration and Fischer-Tropsch gas-to-liquids schemes. Hydrogen can be produced by many methods and may be an almost carbon-neutral alternative fuel if produced by water electrolysis (assuming the electricity is generated by nuclear power or another low-carbon-footprint power plant, rather than by reforming natural gas, which is by far the dominant method). One example of displacing the use of hydrocarbons is Orkney; [ 28 ] see hydrogen economy for more information on hydrogen's uses. Liquid hydrogen was used by NASA as a rocket fuel for the Space Shuttle .
Simpler gas separation technologies, such as membranes or molecular sieves used in pressure swing adsorption or vacuum swing adsorption are also used to produce low purity air gases in nitrogen generators and oxygen plants . Other examples producing smaller amounts of gas are chemical oxygen generators or oxygen concentrators .
In addition to the major gases produced by air separation and syngas reforming, the industry provides many other gases. Some gases are simply byproducts from other industries and others are sometimes bought from other larger chemical producers, refined and repackaged; although a few have their own production processes. Examples are hydrogen chloride produced by burning hydrogen in chlorine, nitrous oxide produced by thermal decomposition of ammonium nitrate when gently heated, electrolysis for the production of fluorine, chlorine and hydrogen, and electrical corona discharge to produce ozone from air or oxygen.
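For illustration, two of the production routes mentioned above correspond to the following overall reactions (standard chemistry, not specific to any producer's process):

    \mathrm{H_2 + Cl_2 \;\longrightarrow\; 2\,HCl} \qquad \text{(hydrogen burned in chlorine)}
    \mathrm{NH_4NO_3 \;\xrightarrow{\ \Delta\ }\; N_2O + 2\,H_2O} \qquad \text{(gentle thermal decomposition of ammonium nitrate)}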
Related services and technology can be supplied such as vacuum , which is often provided in hospital gas systems ; purified compressed air ; or refrigeration . Another unusual system is the inert gas generator . Some industrial gas companies may also supply related chemicals , particularly liquids such as bromine , hydrogen fluoride and ethylene oxide .
Most materials that are gaseous at ambient temperature and pressure are supplied as compressed gas. A gas compressor is used to compress the gas into storage pressure vessels (such as gas canisters , gas cylinders or tube trailers ) through piping systems. Gas cylinders are by far the most common gas storage [ 29 ] and large numbers are produced at a "cylinder fill" facility.
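A rough sense of how much gas a filled cylinder holds can be obtained from the ideal gas law. The Python sketch below uses assumed figures (a 50-litre water-capacity cylinder filled to 200 bar at 20 °C) and ignores real-gas compressibility, so the result is only an order-of-magnitude estimate.

    # Rough estimate of the gas content of a compressed-gas cylinder using the
    # ideal gas law n = PV / (RT).  All figures are assumed; real gases deviate
    # noticeably from ideal behaviour at 200 bar.
    R = 8.314          # J/(mol*K)
    P_fill = 200e5     # Pa (200 bar fill pressure)
    V_cyl = 0.050      # m^3 (50 litre water capacity)
    T = 293.15         # K (20 degC)

    n = P_fill * V_cyl / (R * T)         # moles of gas stored
    m_N2 = n * 0.028                     # kg, if the gas is nitrogen (28 g/mol)
    V_free = n * R * T / 101_325         # m^3 released at 1 atm, same temperature
    print(f"{n:.0f} mol, about {m_N2:.1f} kg of N2, about {V_free:.1f} m^3 of free gas")
    # -> roughly 410 mol, ~11.5 kg of nitrogen, ~10 m^3 at atmospheric pressure

The same calculation explains why a single 50-litre cylinder can release roughly two hundred times its water capacity in gas at atmospheric pressure.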
However, not all industrial gases are supplied in the gaseous phase . A few gases are vapors that can be liquefied at ambient temperature under pressure alone, so they can also be supplied as a liquid in an appropriate container. This phase change also makes these gases useful as ambient refrigerants and the most significant industrial gases with this property are ammonia (R717), propane (R290), butane (R600), and sulfur dioxide (R764). Chlorine also has this property but is too toxic, corrosive and reactive to ever have been used as a refrigerant. Some other gases exhibit this phase change if the ambient temperature is low enough; this includes ethylene (R1150), carbon dioxide (R744), ethane (R170), nitrous oxide (R744A), and sulfur hexafluoride ; however, these can only be liquefied under pressure if kept below their critical temperatures which are 9 °C for C₂H₄; 31 °C for CO₂; 32 °C for C₂H₆; 36 °C for N₂O; 45 °C for SF₆. [ 30 ] All of these substances are also provided as a gas (not a vapor) at the 200 bar pressure in a gas cylinder because that pressure is above their critical pressure . [ 30 ]
Permanent gases (those with a critical temperature below ambient) can only be supplied as liquid if they are also cooled. All gases can potentially be used as a refrigerant around the temperatures at which they are liquid; for example nitrogen (R728) and methane (R50) are used as refrigerant at cryogenic temperatures. [ 25 ]
Exceptionally, carbon dioxide can be produced as a cold solid known as dry ice , which sublimes as it warms in ambient conditions; the properties of carbon dioxide are such that it cannot be liquid at a pressure below its triple point of 5.1 bar. [ 30 ]
Acetylene is also supplied differently. Since it is so unstable and explosive, this is supplied as a gas dissolved in acetone within a packing mass in a cylinder. Acetylene is also the only other common industrial gas that sublimes at atmospheric pressure. [ 30 ]
The major industrial gases can be produced in bulk and delivered to customers by pipeline , but can also be packaged and transported.
Most gases are sold in gas cylinders and some sold as liquid in appropriate containers (e.g. Dewars ) or as bulk liquid delivered by truck. The industry originally supplied gases in cylinders to avoid the need for local gas generation; but for large customers such as steelworks or oil refineries , a large gas production plant may be built nearby (typically called an "on-site" facility) to avoid using large numbers of cylinders manifolded together . Alternatively, an industrial gas company may supply the plant and equipment to produce the gas rather than the gas itself. An industrial gas company may also offer to act as plant operator under an operations and maintenance contract for a gases facility for a customer, since it usually has the experience of running such facilities for the production or handling of gases for itself.
Some materials are dangerous to use as a gas; for example, fluorine is highly reactive and industrial chemistry requiring fluorine often uses hydrogen fluoride (or hydrofluoric acid ) instead. Another approach to overcoming gas reactivity is to generate the gas as and when required, which is done, for example, with ozone .
The delivery options are therefore local gas generation, pipelines , bulk transport ( truck , rail , ship ), and packaged gases in gas cylinders or other containers. [ 1 ]
Bulk liquid gases are often transferred to end user storage tanks . Gas cylinders (and liquid gas containing vessels) are often used by end users for their own small scale distribution systems. Toxic or flammable gas cylinders are often stored by end users in gas cabinets for protection from external fire or from any leak.
Despite attempts at standardization to facilitate the safety of users and first responders, no universal coding exists for cylinders of industrial gases, so several color coding standards are in use. In most developed countries, notably the member states of the European Union and the United Kingdom, EN 1089-3 is used, with cylinders of liquefied petroleum gas being an exception.
In the United States, no official regulation of color coding for gas cylinders exists and none is enforced. [ 31 ]
Industrial gases are a group of materials that are specifically manufactured for use in industry and are also gaseous at ambient temperature and pressure. They are chemicals which can be an elemental gas or a chemical compound that is either organic or inorganic , and tend to be low molecular weight molecules. They could also be a mixture of individual gases. They have value as a chemical, whether as a feedstock , in process enhancement, as a useful end product, or for a particular use, as opposed to having value as a "simple" fuel .
The term “industrial gases” [ 32 ] is sometimes narrowly defined as just the major gases sold, which are: nitrogen, oxygen, carbon dioxide, argon, hydrogen, acetylene and helium. [ 33 ] Many names are given to gases outside of this main list by the different industrial gas companies, but generally the gases fall into the categories "specialty gases", “ medical gases ”, “ fuel gases ” or “ refrigerant gases ”. However gases can also be known by their uses or industries that they serve, hence "welding gases" or " breathing gases ", etc.; or by their source, as in "air gases"; or by their mode of supply as in "packaged gases". The major gases might also be termed "bulk gases" or "tonnage gases".
In principle any gas or gas mixture sold by the "industrial gases industry" probably has some industrial use and might be termed an "industrial gas". In practice, "industrial gases" are likely to be a pure compound or a mixture of precise chemical composition , packaged or in small quantities, but with high purity or tailored to a specific use (e.g. oxyacetylene ).
The more significant gases are listed in "The Gases" below.
There are cases when a gas is not usually termed an "industrial gas"; principally where the gas is processed for later use of its energy rather than manufactured for use as a chemical substance or preparation.
The oil and gas industry is seen as distinct. So, whilst it is true that natural gas is a "gas" used in "industry" (often as a fuel, sometimes as a feedstock) and in this generic sense is an "industrial gas", the term is not generally used by industrial enterprises for hydrocarbons produced by the petroleum industry directly from natural resources or in an oil refinery . Materials such as LPG and LNG are complex mixtures without a precise chemical composition, and their composition often changes whilst stored.
The petrochemical industry is also seen as distinct. So petrochemicals (chemicals derived from petroleum ) such as ethylene are also generally not described as "industrial gases".
Sometimes the chemical industry is thought of as distinct from industrial gases; so materials such as ammonia and chlorine might be considered " chemicals " (especially if supplied as a liquid) instead of or sometimes as well as "industrial gases".
Small scale gas supply of hand-carried containers is sometimes not considered to be industrial gas as the use is considered personal rather than industrial; and suppliers are not always gas specialists.
These demarcations are based on perceived boundaries of these industries (although in practice there is some overlap), and an exact scientific definition is difficult. To illustrate "overlap" between industries:
Manufactured fuel gas (such as town gas ) would historically have been considered an industrial gas. Syngas is often considered to be a petrochemical; although its production is a core industrial gases technology. Similarly, projects harnessing Landfill gas or biogas , Waste-to-energy schemes, as well as Hydrogen Production all exhibit overlapping technologies.
Helium is an industrial gas, even though its source is from natural gas processing .
Any gas is likely to be considered an industrial gas if it is put in a gas cylinder (except perhaps if it is used as a fuel).
Propane would be considered an industrial gas when used as a refrigerant, but not when used as a refrigerant in LNG production, even though this is an overlapping technology.
The known chemical elements which are, or can be obtained from, natural resources (without transmutation ) and which are gaseous are hydrogen, nitrogen, oxygen, fluorine and chlorine, plus the noble gases; they are collectively referred to by chemists as the "elemental gases". [ 34 ] These elements are all primordial apart from the noble gas radon, a trace radioisotope that occurs naturally only because all of its isotopes are radiogenic nuclides produced by radioactive decay . These elements are all nonmetals .
( Synthetic elements have no relevance to the industrial gas industry; however for scientific completeness, note that it has been suggested, but not scientifically proven, that metallic elements 112 ( Copernicium ) and 114 ( Flerovium ) are gases. [ 35 ] )
The elements which are stable two atom homonuclear molecules at standard temperature and pressure (STP), are hydrogen (H 2 ), nitrogen (N 2 ) and oxygen (O 2 ), plus the halogens fluorine (F 2 ) and chlorine (Cl 2 ). The noble gases are all monatomic .
In the industrial gases industry the term "elemental gases" (or sometimes less accurately "molecular gases") is used to distinguish these gases from molecules that are also chemical compounds .
Radon is chemically stable, but it is radioactive and does not have a stable isotope . Its most stable isotope, 222 Rn, has a half-life of 3.8 days. Its uses are due to its radioactivity rather than its chemistry, and it requires specialist handling outside of industrial gas industry norms. It can, however, be produced as a by-product of processing uraniferous ores. Radon is a trace naturally occurring radioactive material (NORM) encountered in the air processed in an ASU.
Chlorine is the only elemental gas that is technically a vapor since STP is below its critical temperature ; whilst bromine and mercury are liquid at STP, and so their vapor exists in equilibrium with their liquid at STP.
This list shows the other most common gases sold by industrial gas companies. [ 1 ]
There are many gas mixtures possible.
This list shows the most important liquefied gases: [ 1 ]
The uses of industrial gases are diverse.
The following is a small list of areas of use: | https://en.wikipedia.org/wiki/Industrial_gas |
The industrial internet of things ( IIoT ) refers to interconnected sensors, instruments, and other devices networked together with computers' industrial applications, including manufacturing and energy management. This connectivity allows for data collection, exchange, and analysis, potentially facilitating improvements in productivity and efficiency as well as other economic benefits. [ 1 ] [ 2 ] The IIoT is an evolution of a distributed control system (DCS) that allows for a higher degree of automation by using cloud computing to refine and optimize the process controls.
The IIoT is enabled by technologies such as cybersecurity , cloud computing , edge computing , mobile technologies , machine-to-machine , 3D printing , advanced robotics , big data , Internet of things , RFID technology, and cognitive computing . [ 3 ] [ 4 ] Five of the most important ones are described below:
IIoT systems are usually conceived as a layered modular architecture of digital technology. [ 15 ] The device layer refers to the physical components: CPS, sensors or machines. The network layer consists of physical network buses, cloud computing and communication protocols that aggregate and transport the data to the service layer , which consists of applications that manipulate and combine data into information that can be displayed on the driver dashboard. The top-most stratum of the stack is the content layer or the user interface. [ 16 ]
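This layered model can be illustrated with a short sketch. The layer names follow the description above; the sensor values, aggregation step and dashboard output are purely hypothetical and do not correspond to any particular IIoT platform.

```python
# Minimal sketch of the layered IIoT architecture described above.
# All names and values are illustrative, not a real vendor API.
import statistics

def device_layer():
    """Device layer: physical sensors/CPS producing raw readings."""
    return [{"sensor": "temp-01", "value": 72.4}, {"sensor": "temp-02", "value": 71.9}]

def network_layer(readings):
    """Network layer: aggregate and transport readings (here, just batch them)."""
    return {"plant": "line-3", "readings": readings}

def service_layer(payload):
    """Service layer: turn raw data into information (e.g., an average)."""
    values = [r["value"] for r in payload["readings"]]
    return {"plant": payload["plant"], "avg_temp": statistics.mean(values)}

def content_layer(info):
    """Content layer / user interface: present the information to an operator."""
    print(f"{info['plant']}: average temperature {info['avg_temp']:.1f}")

content_layer(service_layer(network_layer(device_layer())))
```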
The history of the IIoT begins with the invention of the programmable logic controller (PLC) by Richard E. Morley in 1968, which was used by General Motors in their automatic transmission manufacturing division. [ 17 ] These PLCs allowed for fine control of individual elements in the manufacturing chain. In 1975, Honeywell and Yokogawa introduced the world's first DCSs, the TDC 2000 and the CENTUM system, respectively. [ 18 ] [ 19 ] These DCSs were the next step in allowing flexible process control throughout a plant, with the added benefit of backup redundancies by distributing control across the entire system, eliminating a singular point of failure in a central control room.
With the introduction of Ethernet in 1980, people began to explore the concept of a network of smart devices as early as 1982, when a modified Coke machine at Carnegie Mellon University became the first Internet-connected appliance, [ 20 ] able to report its inventory and whether newly loaded drinks were cold. [ 21 ] As early as in 1994, greater industrial applications were envisioned, as Reza Raji described the concept in IEEE Spectrum as "[moving] small packets of data to a large set of nodes, so as to integrate and automate everything from home appliances to entire factories". [ 22 ]
The concept of the Internet of things first became popular in 1999, through the Auto-ID Center at MIT and related market-analysis publications. [ 23 ] Radio-frequency identification ( RFID ) was seen by Kevin Ashton (one of the founders of the original Auto-ID Center) as a prerequisite for the Internet of things at that point. [ 24 ] If all objects and people in daily life were equipped with identifiers, computers could manage and inventory them. [ 25 ] [ 26 ] [ 27 ] Besides using RFID, the tagging of things may be achieved through such technologies as near field communication , barcodes , QR codes and digital watermarking . [ 28 ] [ 29 ]
The current conception of the IIoT arose after the emergence of cloud technology in 2002, which allows for the storage of data to examine for historical trends, and the development of the OPC Unified Architecture protocol in 2006, which enabled secure, remote communications between devices, programs, and data sources without the need for human intervention or interfaces.
One of the first consequences of implementing the industrial internet of things (by equipping objects with minuscule identifying devices or machine-readable identifiers) would be to create instant and ceaseless inventory control. [ 30 ] [ 31 ] Another benefit of implementing an IIoT system is the ability to create a digital twin of the system. Using this digital twin allows for further optimization of the system by allowing for experimentation with new data from the cloud without having to halt production or sacrifice safety, as the new processes can be refined virtually until they are ready to be implemented. A digital twin can also serve as a training ground for new employees who won't have to worry about real impacts on the live system. [ 32 ]
IoT frameworks help support the interaction between "things" and allow for more complex structures like distributed computing and the development of distributed applications .
The term industrial internet of things is often encountered in the manufacturing industries, referring to the industrial subset of the IoT. Potential benefits of the industrial internet of things include improved productivity, analytics and the transformation of the workplace. [ 40 ] The growth potential from implementing IIoT is predicted to generate $15 trillion of global GDP by 2030. [ 40 ] [ 41 ]
While connectivity and data acquisition are imperative for IIoT, they are not the end goals, but rather the foundation and path to something bigger. Of all the technologies, predictive maintenance is an "easier" application, as it is applicable to existing assets and management systems. Intelligent maintenance systems can reduce unexpected downtime and increase productivity, which is projected to save up to 12% over scheduled repairs, reduce overall maintenance costs by up to 30%, and eliminate up to 70% of breakdowns, according to some studies. [ 40 ] [ 42 ] Cyber-physical systems (CPS) are the core technology of industrial big data and will be an interface between humans and the cyber world.
Integration of sensing and actuation systems connected to the Internet can optimize energy consumption as a whole. [ 43 ] It is expected that IoT devices will be integrated into all forms of energy consuming devices (switches, power outlets, bulbs, televisions, etc.) and be able to communicate with the utility supply company in order to effectively balance power generation and energy usage. [ 44 ] Besides home based energy management, the IIoT is especially relevant to the Smart Grid since it provides systems to gather and act on energy and power-related information in an automated fashion with the goal to improve the efficiency, reliability, economics, and sustainability of the production and distribution of electricity. [ 44 ] Using advanced metering infrastructure (AMI) devices connected to the Internet backbone, electric utilities can not only collect data from end-user connections, but also manage other distribution automation devices like transformers and reclosers. [ 43 ]
As of 2016, other real-world applications include incorporating smart LEDs to direct shoppers to empty parking spaces or highlight shifting traffic patterns, using sensors on water purifiers to alert managers via computer or smartphone when to replace parts, attaching RFID tags to safety gear to track personnel and ensure their safety, embedding computers into power tools to record and track the torque level of individual tightenings, and collecting data from multiple systems to enable the simulation of new processes. [ 41 ]
Using IIoT in car manufacturing implies the digitalization of all elements of production. Software, machines, and humans are interconnected, enabling suppliers and manufacturers to rapidly respond to changing standards. [ 45 ] IIoT enables efficient and cost-effective production by moving data from the customers to the company's systems, and then to individual sections of the production process. With IIoT, new tools and functionalities can be included in the manufacturing process. For example, 3D printers simplify the way of shaping pressing tools by printing the shape directly from steel granulate. [ 46 ] These tools enable new possibilities for designing (with high precision). Customization of vehicles is also enabled by IIoT due to the modularity and connectivity of this technology. [ 45 ] While in the past they worked separately, IIoT now enables humans and robots to cooperate. [ 46 ] Robots take on heavy and repetitive activities, so the manufacturing cycles are quicker and the vehicle comes to the market more rapidly. Factories can quickly identify potential maintenance issues before they lead to downtime, and many of them are moving to 24-hour production, due to higher security and efficiency. [ 45 ] The majority of automotive manufacturing companies have production plants in different countries, where different components of the same vehicle are built. IIoT makes it possible to connect these production plants to each other, creating the possibility to move within facilities. Big data can be visually monitored, which enables companies to respond faster to fluctuations in production and demand.
With IIoT support, large amounts of raw data can be stored and sent by the drilling gear and research stations for cloud storage and analysis. [ 47 ] With IIoT technologies, the oil and gas industry has the capability to connect machines, devices, sensors, and people through interconnectivity, which can help companies better address fluctuations in demand and pricing, address cybersecurity, and minimize environmental impact. [ 48 ]
Across the supply chain, IIoT can improve the maintenance process, the overall safety, and connectivity. [ 49 ] Drones can be used to detect possible oil and gas leaks at an early stage and at locations that are difficult to reach (e.g. offshore). They can also be used to identify weak spots in complex networks of pipelines with built-in thermal imaging systems. Increased connectivity (data integration and communication) can help companies with adjusting the production levels based on real-time data of inventory, storage, distribution pace, and forecasted demand. For example, a Deloitte report states that by implementing an IIoT solution integrating data from multiple internal and external sources (such as work management system, control center, pipeline attributes, risk scores, inline inspection findings, planned assessments, and leak history), thousands of miles of pipes can be monitored in real-time. This allows monitoring of pipeline threats, improving risk management, and providing situational awareness. [ 50 ]
Benefits also apply to specific processes of the oil and gas industry. [ 49 ] The exploration process of oil and gas can be done more precisely with 4D models built by seismic imaging. These models map fluctuations in oil reserves and gas levels, they strive to point out the exact quantity of resources needed, and they forecast the lifespan of wells. The application of smart sensors and automated drillers gives companies the opportunity to monitor and produce more efficiently. Further, the storing process can also be improved with the implementation of IIoT by collecting and analyzing real-time data to monitor inventory levels and temperature control. IIoT can enhance the transportation process of oil and gas by implementing smart sensors and thermal detectors to give real-time geolocation data and monitor the products for safety reasons. These smart sensors can monitor the refinery processes, and enhance safety. The demand for products can be forecasted more precisely and automatically be communicated to the refineries and production plants to adjust production levels.
In the agriculture industry, IIoT helps farmers to make decisions about when to harvest. Sensors collect data about soil and weather conditions and propose schedules for fertilizing and irrigating. [ 51 ] Some livestock farms implant microchips into animals. This allows the farmers not only to trace their animals, but also pull up information about the lineage, weight, or health. [ 52 ]
The integration of IIoT data in the photovoltaic (PV) industry can significantly enhance the efficiency, reliability, and performance of solar power systems. [ 53 ] IIoT with AI data can be utilized for real-time monitoring, performance optimization, fault detection, and diagnostics. [ 54 ]
As the IIoT expands, new security concerns arise with it. Every new device or component that connects to the IIoT [ 55 ] can become a potential liability. Gartner estimates that by 2020, more than 25% of recognized attacks on enterprises will involve IoT-connected systems, despite accounting for less than 10% of IT security budgets. [ 56 ] Existing cybersecurity measures are vastly inferior for Internet-connected devices compared to their traditional computer counterparts, [ 57 ] which can allow for them to be hijacked for DDoS -based attacks by botnets like Mirai . Another possibility is the infection of Internet-connected industrial controllers, like in the case of Stuxnet , without the need for physical access to the system to spread the worm. [ 58 ]
Additionally, IIoT-enabled devices can allow for more “traditional” forms of cybercrime, as in the case of the 2013 Target data breach, where information was stolen after hackers gained access to Target's networks via credentials stolen from a third party HVAC vendor. [ 59 ] The pharmaceutical manufacturing industry has been slow to adopt IIoT advances because of security concerns such as these. [ 60 ] One of the difficulties in providing security solutions in IIoT applications is the fragmented nature of the hardware. [ 61 ] Consequently, security architectures are turning towards designs that are software-based or device-agnostic. [ 62 ]
Hardware-based approaches, like the use of data diodes , are often used when connecting critical infrastructure. [ 63 ] | https://en.wikipedia.org/wiki/Industrial_internet_of_things |
Industrial porcelain enamel (also known as glass lining , glass-lined steel , or glass fused to steel ) is the use of porcelain enamel (also known as vitreous enamel) for industrial, rather than artistic, applications. Porcelain enamel, a thin layer of ceramic or glass applied to a substrate of metal, [ 1 ] is used to protect surfaces from chemical attack and physical damage, modify the structural characteristics of the substrate, and improve the appearance of the product.
Enamel has been used for art and decoration since the period of Ancient Egypt , and for industry since the Industrial Revolution . [ 1 ] It is most commonly used in the production of cookware , home appliances, bathroom fixtures, water heaters, and scientific laboratory equipment. [ 2 ]
The most important characteristic of porcelain enamel, from an industrial perspective, is its resistance to corrosion . [ 3 ] Mild steel is used in almost every industry and a huge array of products; porcelain enamel is a very economic way of protecting this, and other chemically vulnerable materials, from corrosion. It can also produce very smooth, glossy finishes in a wide array of colours; these colours will not fade on exposure to UV light, as paint will. Being a fired ceramic, porcelain enamel is also highly heat-resistant; this allows it to be used in high-temperature applications where an organic anti-corrosion coating or galvanization may be impractical or even dangerous ( see Metal fume fever ). [ 3 ]
Porcelain enamel also sees less frequent employment of some of its other properties; examples are its abrasion resistance, where it may perform better than many metals; its resistance to organic solvents , to which it is entirely impervious; its resistance to thermal shock , where it can resist rapid cooling from temperatures of 500°C and higher; and its longevity. [ 3 ]
Porcelain enamel is used most often in the manufacture of products that will be expected to come under regular chemical attack or high heat such as cookware, burners, and laboratory equipment . It is used in the production of many household goods and appliances, especially those used in the kitchen or bathroom area: pots, pans, cooktops, appliances, sinks, toilets, bathtubs, even walls, counters, and other surfaces. [ 4 ]
Porcelain enamel is also used architecturally as a coating for wall panels. It may be used externally to provide weather resistance and desirable appearance, or internally to provide wear resistance; for example, on escalator side panels and tunnel walls. In recent years, agricultural silos have also been constructed with porcelain enamelled steel plates to protect the interior from corrosion and the exterior from weathering; this may indicate a future trend of coating all outdoor mild steel products in a weather-resistant porcelain enamel. [ 4 ]
The application of industrial porcelain enamel can be a complicated process involving many different and very technical steps. All enamelling processes involve the mixture and preparation of frit , the unfired enamel mixture; the preparation of the substrate; the application and firing; and then finishing processes. Most modern applications also involve two layers of enamel: a ground-coat to bond to the substrate and a cover-coat to provide the desired external properties.
Because frits frequently must be mixed at higher temperatures than the firing requires, most modern industrial enamellers do not mix their own frits completely; frit is most often purchased from dedicated frit producers in standard compositions and then any special ingredients added before application and firing. [ 5 ]
For ground coats, the composition of a frit for any given application is determined primarily by the metal used as the substrate: different varieties of steel, and different metals such as aluminium and copper , require different frit compositions to bond to them. For cover coats, the frit is composed to bind to the ground-coat and produce the desired external properties. [ 6 ] Frit is normally prepared by mixing the ingredients and then milling the mixture into a powder. The ingredients, most often metal oxides and minerals such as quartz (or silica sand ), soda ash , borax , and cobalt oxide , are acquired in particulate form; the precise chemical composition and amount of each ingredient must be carefully measured and regulated. [ 7 ] Once prepared, this powdered mixture is then smelted and stirred to promote even distribution of materials; most frits are smelted at temperatures between 1150 and 1300°C . After smelting, the frit is again milled into a powder, most often by ball mill grinding. [ 8 ]
For wet application of enamel, a slurry of frit suspended in water must be created. To remain in suspension, frits must be milled to an extremely fine particle size, or mixed with a suspension agent such as clay or electrolytes . [ 9 ]
The metal to be used as a substrate is primarily determined by the application to which the product will be put, independent of any enamel considerations. Most commonly used are steels of various compositions, but also used are aluminium and copper . [ 10 ]
Before the application of enamel, the surface of the substrate must be prepared with a number of processes. The most important is the cleaning of the substrate surface: all remnants of chemicals, rust , oils, and other contaminants must be completely removed. To facilitate this, processes frequently performed on substrates are degreasing , pickling (which can also etch the surface and provide anchoring points for the enamel), alkaline neutralization , and rinsing. [ 11 ]
Enamel may be applied to the substrate via many different methods. These methods are most often delineated into either wet or dry applications, determined by whether the enamel is applied as a dry powder or a liquid slurry suspension. [ citation needed ]
The simplest method of dry application, especially for cast-iron substrates, is to heat the substrate and roll it in powdered frit. The frit particles melt on contact with the hot substrate and adhere to its surface. This method requires a high level of operator skill and concentration to achieve an even coating, and due to its inconstant nature is not often used in industrial applications. [ 12 ]
The most common method of dry application used in industry today is electrostatic deposition . Before application, the dry frit must be encapsulated in an organic silane ; this allows the frit to hold an electrical charge during application. An electrostatic gun fires the dry frit powder onto the electrically earthed metal substrate; electrical forces bind the charged powder to the substrate and it adheres. [ 13 ]
The simplest method of wet application is to dip the substrate in a bath of liquid slurry; complete immersion coats all available surfaces of the substrate. Dipping is not often used in industry, however, because many preliminary trial dippings are required before the thickness of the coat can be predicted reliably enough for the desired application. [ 14 ]
A form of dipping suitable for modern industrial application is flow coating. Rather than dip the product in a bath of slurry, slurry is flowed over the surface of the substrate to be coated. This method allows for much more economical use of slurry and time; it is capable of allowing very rapid production runs. [ 15 ]
Wet enamel may also be sprayed onto the product using specialized spray guns. Liquid slurry is fed into the nozzle of a spray gun, and compressed air atomizes the slurry and ejects it from the nozzle of the gun in a controlled jet. [ 15 ]
Firing, where coated substrates are passed through a furnace to experience long periods of stable high temperatures, converts the adhering particles of frit into a continuous glass layer. The effectiveness of the process is highly dependent on the time, the temperature, and the quality or thickness of the coating on the substrate. Most frits for industrial applications are fired for as little as 20 minutes, but frits for very heavy-duty industrial applications may take double this time. Porcelain enamel coatings on aluminium substrates may be fired at temperatures as low as 530°C, but most steel substrates require temperatures in excess of 800°C. [ 16 ]
Porcelain enamel has been applied to jewelry metals such as gold , silver , and copper since antiquity for the purposes of decoration. It was not until the Industrial Revolution that ferrous metals first became the subject of porcelain enamelling processes; these first attempts were met with limited success. A reliably successful technique was not developed until the middle of the 19th century, with the development of a method for enamelling cast-iron cooking pots in Germany . [ 1 ] It was not long before this method of enamelling became outdated with the development of new ferrous substrates, and most modern research into porcelain enamelling is concerned with creating an acceptable bond between enamels and new metal substrates. [ 17 ]
The production of porcelain enamelled products on an industrial scale first began in Germany in 1840. [ 18 ] The method used was very primitive compared to modern methods: the product was heated to a very high temperature and dusted with enamel, then immediately fired. This frequently resulted in poor adhesion or a spotty coat; two coats were always required to achieve a continuous, corrosion-resistant surface. [ 18 ] It could only be applied to cast- and wrought-iron , and only used for relatively simple products like pots and pans.
The ability to apply porcelain enamel to sheet steels was not developed until 1900, [ 19 ] with the discovery that making minor changes to the composition of the enamel, such as including cobalt oxides as minor components, could drastically improve its adhesion ability to carbon steels. Concurrent with this development was the first use of wet-slurry enamel application; this allowed porcelain enamel to be applied to much more complex shapes by dipping the shape into the liquid enamel slurry.
Up until the 1930s, all enamel applications required two coats of enamel: an undercoat to adhere to the substrate which was always blue (due in part to the presence of cobalt oxides), and a top coat of the desired colour (most often white). It was not until 1930 that the use of zero carbon steel (steel with less than 0.005% carbon content) as a substrate was linked to allowing lighter-colored enamels to adhere directly to the substrate. [ 20 ]
| https://en.wikipedia.org/wiki/Industrial_porcelain_enamel |
Industrial process control (IPC) or simply process control is a system used in modern manufacturing which uses the principles of control theory and physical industrial control systems to monitor, control and optimize continuous industrial production processes using control algorithms. This ensures that the industrial machines run smoothly and safely in factories and efficiently use energy to transform raw materials into high-quality finished products with reliable consistency while reducing energy waste and economic costs , something which could not be achieved purely by human manual control. [ 1 ]
In IPC, control theory provides the theoretical framework to understand system dynamics, predict outcomes and design control strategies to ensure predetermined objectives, utilizing concepts like feedback loops, stability analysis and controller design. On the other hand, the physical apparatus of IPC, based on automation technologies, consists of several components. Firstly, a network of sensors continuously measure various process variables (such as temperature, pressure, etc.) and product quality variables. A programmable logic controller (PLC, for smaller, less complex processes) or a distributed control system (DCS, for large-scale or geographically dispersed processes) analyzes the sensor data transmitted to it, compares it to predefined setpoints using a set of instructions or a mathematical model called the control algorithm and then, in case of any deviation from these setpoints (e.g., temperature exceeding setpoint), makes quick corrective adjustments through actuators such as valves (e.g. a cooling valve for temperature control), motors or heaters to guide the process back to the desired operational range. This creates a continuous closed-loop cycle of measurement, comparison, control action, and re-evaluation which guarantees that the process remains within established parameters. The HMI (Human-Machine Interface) acts as the "control panel" for the IPC system, where a small number of human operators can monitor the process and make informed decisions regarding adjustments. [ 1 ] IPCs can range from controlling the temperature and level of a single process vessel (a controlled-environment tank for mixing, separating, reacting, or storing materials in industrial processes) to a complete chemical processing plant with several thousand control feedback loops.
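The closed-loop cycle of measurement, comparison, control action and re-evaluation described above can be sketched in a few lines. The process model, gain and setpoint below are hypothetical; in practice an equivalent scan cycle runs inside a PLC or DCS against live I/O.

```python
# Illustrative closed-loop temperature control cycle (hypothetical numbers).
setpoint = 80.0          # desired temperature, degrees C
temperature = 95.0       # measured process variable
valve = 0.0              # cooling-valve opening, 0..1

for scan in range(10):                                  # each iteration = one controller scan
    error = temperature - setpoint                      # compare measurement to setpoint
    valve = min(1.0, max(0.0, valve + 0.02 * error))    # corrective action via the actuator
    temperature += 0.5 - 2.0 * valve                    # very crude process response to cooling
    print(f"scan {scan}: T={temperature:.1f} C, valve={valve:.2f}")
```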
IPC provides several critical benefits to manufacturing companies. By maintaining a tight control over key process variables, it helps reduce energy use, minimize waste and shorten downtime for peak efficiency and reduced costs. It ensures consistent and improved product quality with little variability, which satisfies the customers and strengthens the company's reputation. It improves safety by detecting and alerting human operators about potential issues early, thus preventing accidents, equipment failures, process disruptions and costly downtime. Analyzing trends and behaviors in the vast amounts of data collected real-time helps engineers identify areas of improvement, refine control strategies and continuously enhance production efficiency using a data-driven approach. [ 1 ]
IPC is used across a wide range of industries where precise control is important. [ 2 ] The applications can range from controlling the temperature and level of a single process vessel, to a complete chemical processing plant with several thousand control loops. In automotive manufacturing, IPC ensures consistent quality by meticulously controlling processes like welding and painting. Mining operations are optimized with IPC monitoring ore crushing and adjusting conveyor belt speeds for maximum output. Dredging benefits from precise control of suction pressure, dredging depth and sediment discharge rate by IPC, ensuring efficient and sustainable practices. Pulp and paper production leverages IPC to regulate chemical processes (e.g., pH and bleach concentration) and automate paper machine operations to control paper sheet moisture content and drying temperature for consistent quality. In chemical plants, it ensures the safe and efficient production of chemicals by controlling temperature, pressure and reaction rates. Oil refineries use it to smoothly convert crude oil into gasoline and other petroleum products. In power plants, it helps maintain stable operating conditions necessary for a continuous electricity supply. In food and beverage production, it helps ensure consistent texture, safety and quality. Pharmaceutical companies rely on it to produce life-saving drugs safely and effectively. The development of large industrial process control systems has been instrumental in enabling the design of large high volume and complex processes, which could not be otherwise economically or safely operated. [ 3 ]
Historical milestones in the development of industrial process control began in ancient civilizations, where water level control devices were used to regulate water flow for irrigation and water clocks. During the Industrial Revolution in the 18th century, there was a growing need for precise control over boiler pressure in steam engines. In the 1930s, pneumatic and electronic controllers, such as PID (Proportional-Integral-Derivative) controllers, were breakthrough innovations that laid the groundwork for modern control theory. The late 20th century saw the rise of programmable logic controllers (PLCs) and distributed control systems (DCS), while the advent of microprocessors further revolutionized IPC by enabling more complex control algorithms.
Early process control breakthroughs came most frequently in the form of water control devices. Ktesibios of Alexandria is credited for inventing float valves to regulate water level of water clocks in the 3rd century BC. In the 1st century AD, Heron of Alexandria invented a water valve similar to the fill valve used in modern toilets. [ 4 ]
Later process control inventions involved basic physics principles. In 1620, Cornelis Drebbel invented a bimetallic thermostat for controlling the temperature in a furnace. In 1681, Denis Papin discovered that the pressure inside a vessel could be regulated by placing weights on top of the vessel lid. [ 4 ] In 1745, Edmund Lee created the fantail to improve windmill efficiency; a fantail was a smaller windmill placed at 90° to the larger fans to keep the face of the windmill pointed directly into the oncoming wind.
With the dawn of the Industrial Revolution in the 1760s, process control inventions aimed to replace human operators with mechanized processes. In 1784, Oliver Evans created a water-powered flourmill which operated using buckets and screw conveyors. Henry Ford applied the same theory in 1910 when the assembly line was created to decrease human intervention in the automobile production process. [ 4 ]
For continuously variable process control it was not until 1922 that a formal control law for what we now call PID control or three-term control was first developed using theoretical analysis, by Russian American engineer Nicolas Minorsky . [ 5 ] Minorsky was researching and designing automatic ship steering for the US Navy and based his analysis on observations of a helmsman . He noted the helmsman steered the ship based not only on the current course error, but also on past error, as well as the current rate of change; [ 6 ] this was then given a mathematical treatment by Minorsky. [ 7 ] His goal was stability, not general control, which simplified the problem significantly. While proportional control provided stability against small disturbances, it was insufficient for dealing with a steady disturbance, notably a stiff gale (due to steady-state error ), which required adding the integral term. Finally, the derivative term was added to improve stability and control.
Process control of large industrial plants has evolved through many stages. Initially, control would be from panels local to the process plant. However this required a large manpower resource to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-staffed central control room. Effectively this was the centralization of all the localized panels, with the advantages of lower manning levels and easier overview of the process. Often the controllers were behind the control room panels, and all automatic and manual control outputs were transmitted back to plant. However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware, and continual operator movement within the control room was required to view different parts of the process.
With the coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. [ 8 ] These could be distributed around the plant, and communicate with the graphic display in the control room or rooms. The distributed control system (DCS) was born.
The introduction of DCSs allowed easy interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high level overviews of plant status and production levels.
The accompanying diagram is a general model which shows functional manufacturing levels in a large process using processor and computer-based control.
Referring to the diagram:
Level 0 contains the field devices such as flow and temperature sensors (process value readings - PV), and final control elements (FCE), such as control valves .
Level 1 contains the industrialized Input/Output (I/O) modules and their associated distributed electronic processors.
Level 2 contains the supervisory computers, which collate information from processor nodes on the system, and provide the operator control screens.
Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and monitoring targets.
Level 4 is the production scheduling level.
To determine the fundamental model for any process, the inputs and outputs of the system are defined differently than for other chemical processes. [ 9 ] The balance equations are defined by the control inputs and outputs rather than the material inputs. The control model is a set of equations used to predict the behavior of a system and can help determine what the response to change will be. The state variable (x) is a measurable variable that is a good indicator of the state of the system, such as temperature (energy balance), volume (mass balance) or concentration (component balance). The input variable (u) is a specified variable and commonly includes flow rates.
The entering and exiting flows are both considered control inputs. A control input can be classified as a manipulated, disturbance, or unmonitored variable. Parameters (p) are usually physical limitations and are fixed for the system, such as the vessel volume or the viscosity of the material. The output (y) is the metric used to determine the behavior of the system. A control output can be classified as measured, unmeasured, or unmonitored.
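One way to make the x, u, p and y notation above concrete is a single mass balance on a tank: the state variable is the liquid volume, the input variables are the inlet and outlet flow rates, the parameter is the fixed vessel capacity, and the output is the measured level. The numbers in the sketch below are hypothetical.

```python
# dV/dt = F_in - F_out : a one-state control model (all values hypothetical).
def simulate(V0, F_in, F_out, area, V_max, dt=1.0, steps=5):
    V = V0                                            # state variable x: liquid volume (m^3)
    for t in range(steps):
        dVdt = F_in - F_out                           # input variables u: flow rates (m^3/s)
        V = min(V_max, max(0.0, V + dVdt * dt))       # parameter p: vessel capacity limit
        level = V / area                              # output variable y: measured level (m)
        print(f"t={t+1}s  V={V:.2f} m^3  level={level:.2f} m")

simulate(V0=2.0, F_in=0.30, F_out=0.25, area=1.5, V_max=5.0)
```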
Processes can be characterized as batch, continuous, or hybrid. [ 10 ] Batch applications require that specific quantities of raw materials be combined in specific ways for a particular duration to produce an intermediate or end result. One example is the production of adhesives and glues, which normally require the mixing of raw materials in a heated vessel for a period of time to form a quantity of end product. Other important examples are the production of food, beverages and medicine. Batch processes are generally used to produce a relatively low to intermediate quantity of product per year (a few pounds to millions of pounds).
A continuous physical system is represented through variables that are smooth and uninterrupted in time. The control of the water temperature in a heating jacket is an example of continuous process control. Some important continuous processes are the production of fuels, chemicals and plastics. Continuous processes in manufacturing are used to produce very large quantities of product per year (millions to billions of pounds). Such controls use feedback, as in the PID controller ; a PID controller includes proportional, integral, and derivative controller functions.
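A minimal discrete-time PID controller corresponding to the proportional, integral and derivative terms mentioned above might look like the following sketch. The gains and the toy heater process are hypothetical, and real implementations add features such as anti-windup, output limits and derivative filtering.

```python
# Minimal discrete PID controller (illustrative gains, toy first-order process).
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt                      # integral term accumulates past error
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=50.0)
temperature = 20.0
for step in range(20):
    heater = max(0.0, pid.update(temperature, dt=1.0))          # controller output drives a heater
    temperature += 0.1 * heater - 0.05 * (temperature - 20.0)   # toy process response
print(f"final temperature: {temperature:.1f}")
```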
Applications having elements of batch and continuous process control are often called hybrid applications.
The fundamental building block of any industrial control system is the control loop , which controls just one process variable. An example is shown in the accompanying diagram, where the flow rate in a pipe is controlled by a PID controller , assisted by what is effectively a cascaded loop in the form of a valve servo-controller to ensure correct valve positioning.
Some large systems may have several hundreds or thousands of control loops. In complex processes the loops are interactive, so that the operation of one loop may affect the operation of another. The system diagram for representing control loops is a Piping and instrumentation diagram .
Commonly used control systems include programmable logic controllers (PLC), distributed control systems (DCS) and SCADA .
A further example is shown. If a control valve were used to hold level in a tank, the level controller would compare the equivalent reading of a level sensor to the level setpoint and determine whether more or less valve opening was necessary to keep the level constant. A cascaded flow controller could then calculate the change in the valve position.
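The tank-level example above is a typical cascade arrangement: the outer level controller computes a flow setpoint, and an inner flow controller moves the valve to achieve that flow. The sketch below uses simple proportional corrections and hypothetical numbers purely to show the structure.

```python
# Cascade control sketch: the outer level loop sets the inner flow loop's setpoint.
level_setpoint = 2.0      # desired tank level (m), hypothetical
level = 1.6               # measured level (m)
flow = 0.0                # measured inlet flow (m^3/min)
valve = 0.3               # valve position, 0..1

for scan in range(5):
    # Outer (primary) loop: level error -> flow setpoint
    flow_setpoint = 0.5 + 1.5 * (level_setpoint - level)
    # Inner (secondary) loop: flow error -> valve position
    valve = min(1.0, max(0.0, valve + 0.4 * (flow_setpoint - flow)))
    # Toy process: flow follows the valve, level follows the flow imbalance
    flow += 0.6 * (valve * 1.2 - flow)
    level += 0.1 * (flow - 0.5)
    print(f"scan {scan}: level={level:.2f} m, flow={flow:.2f}, valve={valve:.2f}")
```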
The economic nature of many products manufactured in batch and continuous processes requires highly efficient operation due to thin margins. The competing factor in process control is that products must meet certain specifications in order to be satisfactory. These specifications can come in two forms: a minimum and maximum for a property of the material or product, or a range within which the property must be. [ 11 ] All loops are susceptible to disturbances and therefore a buffer must be used on process setpoints to ensure disturbances do not cause the material or product to go out of specification. This buffer comes at an economic cost (i.e. additional processing, maintaining elevated or depressed process conditions, etc.).
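The buffer described above (and the variance-narrowing argument in the next paragraph) can be illustrated numerically: if a quality property must stay above a lower specification limit, the setpoint is typically held some multiple of the process standard deviation above that limit, so tighter control allows the setpoint to be shifted closer to the limit. All values below are hypothetical.

```python
# Hypothetical illustration: setpoint buffer sized from process variability.
lower_spec = 95.0        # minimum acceptable purity (%)
k = 3.0                  # number of standard deviations kept as a buffer

for sigma in (1.0, 0.5, 0.25):               # tighter control -> smaller sigma
    setpoint = lower_spec + k * sigma        # buffered operating target
    print(f"sigma={sigma:.2f} -> setpoint={setpoint:.2f} % purity")
# The target can be shifted from 98.0 down to 95.75 as control improves,
# reducing over-purification cost while still meeting the specification.
```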
Process efficiency can be enhanced by reducing the margins necessary to ensure product specifications are met. [ 11 ] This can be done by improving the control of the process to minimize the effect of disturbances on the process. The efficiency is improved in a two step method of narrowing the variance and shifting the target. [ 11 ] Margins can be narrowed through various process upgrades (i.e. equipment upgrades, enhanced control methods, etc.). Once margins are narrowed, an economic analysis can be done on the process to determine how the set point target is to be shifted. Less conservative process set points lead to increased economic efficiency. [ 11 ] Effective process control strategies increase the competitive advantage of manufacturers who employ them. | https://en.wikipedia.org/wiki/Industrial_process_control |
Industrial process imaging , or industrial process tomography or process tomography , refers to methods used to form an image of a cross-section of a vessel or pipe in a chemical engineering, mineral processing, petroleum extraction or refining plant. [ 1 ] [ 2 ] Process imaging is used for the development of process equipment such as filters, separators and conveyors, as well as monitoring of production plant including flow rate measurement. As well as conventional tomographic methods widely used in medicine, such as X-ray computed tomography , magnetic resonance imaging , gamma ray tomography and ultrasound tomography , new and emerging methods such as electrical capacitance tomography , magnetic induction tomography and electrical resistivity tomography (similar to medical electrical impedance tomography ) are also used.
Although such techniques are not in widespread deployment in industrial plant, there is an active research community, including a Virtual Center for Industrial Process Tomography, [ 3 ] and a regular World Congress on Industrial Process Tomography, now organized by a learned society for this area, the International Society for Industrial Process Tomography. [ 4 ]
A number of applications of tomography of process equipment were described in the 1970s, using ionising radiation from X-ray or isotope sources, but routine use was limited by the high cost involved and safety constraints. Radiation-based methods used long exposure times, which meant that dynamic measurements of the real-time behaviour of process systems were not feasible. The use of electrical methods to image industrial processes was pioneered by Maurice Beck at UMIST in the mid-1980s. [ 5 ] | https://en.wikipedia.org/wiki/Industrial_process_imaging |
An industrial robot is a robot system used for manufacturing . Industrial robots are automated, programmable and capable of movement on three or more axes. [ 1 ]
Typical applications of robots include welding , painting, assembly, disassembly , [ 2 ] pick and place for printed circuit boards , packaging and labeling , palletizing , product inspection, and testing; all accomplished with high endurance, speed, and precision. They can assist in material handling .
In 2023, an estimated 4,281,585 industrial robots were in operation worldwide, according to the International Federation of Robotics (IFR) . [ 3 ] [ 4 ]
There are six types of industrial robots. [ 5 ]
Articulated robots [ 5 ] are the most common industrial robots. [ 6 ] They look like a human arm , which is why they are also called robotic arm or manipulator arm . [ 7 ] Their articulations with several degrees of freedom allow the articulated arms a wide range of movements.
An autonomous robot is a robot that acts without recourse to human control. The first autonomous robots were known as Elmer and Elsie , which were constructed in the late 1940s by W. Grey Walter . They were the first robots in history that were programmed to "think" the way biological brains do and were meant to have free will. [ 8 ] Elmer and Elsie were often labeled as tortoises because of how they were shaped and the manner in which they moved. They were capable of phototaxis , which is movement that occurs in response to a light stimulus. [ 9 ]
Cartesian robots, [ 5 ] also called rectilinear, gantry robots, or x-y-z robots, [ 6 ] have three prismatic joints for the movement of the tool and three rotary joints for its orientation in space.
To be able to move and orient the end effector in all directions, such a robot needs 6 axes (or degrees of freedom). In a 2-dimensional environment, three axes are sufficient: two for displacement and one for orientation. [ 10 ]
The cylindrical coordinate robots [ 5 ] are characterized by their rotary joint at the base and at least one prismatic joint connecting its links. [ 6 ] They can move vertically and horizontally by sliding. The compact effector design allows the robot to reach tight work-spaces without any loss of speed. [ 6 ]
Spherical coordinate robots only have rotary joints. [ 5 ] They are one of the first robots to have been used in industrial applications. [ 6 ] They are commonly used for machine tending in die-casting, plastic injection and extrusion, and for welding. [ 6 ]
SCARA [ 5 ] is an acronym for Selective Compliance Assembly Robot Arm. [ 11 ] SCARA robots are recognized by their two parallel joints which provide movement in the X-Y plane. [ 5 ] Rotating shafts are positioned vertically at the effector. SCARA robots are used for jobs that require precise lateral movements. They are ideal for assembly applications. [ 6 ]
Delta robots [ 5 ] are also referred to as parallel link robots. [ 6 ] They consist of parallel links connected to a common base. Delta robots are particularly useful for direct control tasks and high maneuvering operations (such as quick pick-and-place tasks). Delta robots take advantage of four bar or parallelogram linkage systems.
Furthermore, industrial robots can have a serial or parallel architecture.
Serial architectures, also known as serial manipulators, are very common industrial robots; they are designed as a series of links connected by motor-actuated joints that extend from a base to an end-effector. SCARA and Stanford manipulators are typical examples of this category.
A parallel manipulator is designed so that each chain is usually short, simple and can thus be rigid against unwanted movement, compared to a serial manipulator . Errors in one chain's positioning are averaged in conjunction with the others, rather than being cumulative. Each actuator must still move within its own degree of freedom , as for a serial robot; however in the parallel robot the off-axis flexibility of a joint is also constrained by the effect of the other chains. It is this closed-loop stiffness that makes the overall parallel manipulator stiff relative to its components, unlike the serial chain that becomes progressively less rigid with more components.
A full parallel manipulator can move an object with up to 6 degrees of freedom (DoF), determined by 3 translation (3T) and 3 rotation (3R) coordinates for full 3T3R mobility. However, when a manipulation task requires fewer than 6 DoF, the use of lower mobility manipulators, with fewer than 6 DoF, may bring advantages in terms of simpler architecture, easier control, faster motion and lower cost. For example, the 3 DoF Delta robot has lower 3T mobility and has proven to be very successful for rapid pick-and-place translational positioning applications. The workspace of lower mobility manipulators may be decomposed into 'motion' and 'constraint' subspaces. For example, the 3 position coordinates constitute the motion subspace of the 3 DoF Delta robot and the 3 orientation coordinates are in the constraint subspace. The motion subspace of lower mobility manipulators may be further decomposed into independent (desired) and dependent (concomitant) subspaces, the latter consisting of 'concomitant' or 'parasitic' motion, which is undesired motion of the manipulator. [ 12 ] The debilitating effects of concomitant motion should be mitigated or eliminated in the successful design of lower mobility manipulators. For example, the Delta robot does not have parasitic motion since its end effector does not rotate.
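The degree-of-freedom counts discussed above can be estimated with the Kutzbach-Grübler criterion, M = 6(N - 1 - J) + sum of f_i, where N is the number of links including the base, J the number of joints and f_i each joint's freedoms. The sketch below applies it to a serial arm with six revolute joints; note that for parallel mechanisms with special geometry, such as the Delta robot's parallelogram chains, the raw formula must be applied with care and can miscount.

```python
# Kutzbach-Grübler mobility estimate for spatial mechanisms (illustrative only).
def mobility(num_links, joint_freedoms):
    """num_links includes the fixed base; joint_freedoms lists f_i for each joint."""
    J = len(joint_freedoms)
    return 6 * (num_links - 1 - J) + sum(joint_freedoms)

# Serial 6R arm: base plus 6 moving links, six 1-DoF revolute joints.
print(mobility(num_links=7, joint_freedoms=[1] * 6))   # -> 6 degrees of freedom
```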
Robots exhibit varying degrees of autonomy .
Some robots are programmed to faithfully carry out specific actions over and over again (repetitive actions) without variation and with a high degree of accuracy. These actions are determined by programmed routines that specify the direction, acceleration, velocity, deceleration, and distance of a series of coordinated motions.
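Such programmed routines typically reduce to a motion profile per axis; a common choice is the trapezoidal velocity profile, in which the joint accelerates to a cruise velocity, holds it, and decelerates over the commanded distance. The sketch below computes such a profile from hypothetical limits, falling back to a triangular profile when the move is too short to reach cruise speed.

```python
import math

def trapezoidal_profile(distance, v_max, accel):
    """Return (t_accel, t_cruise, t_decel) for a symmetric trapezoidal move."""
    d_ramp = v_max ** 2 / (2 * accel)          # distance covered while ramping up (or down)
    if 2 * d_ramp >= distance:                 # move too short: triangular profile
        v_peak = math.sqrt(distance * accel)   # peak velocity actually reached
        t_ramp = v_peak / accel
        return t_ramp, 0.0, t_ramp
    t_ramp = v_max / accel
    t_cruise = (distance - 2 * d_ramp) / v_max
    return t_ramp, t_cruise, t_ramp

print(trapezoidal_profile(distance=0.5, v_max=1.0, accel=2.0))   # metres, m/s, m/s^2
```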
Other robots are much more flexible as to the orientation of the object on which they are operating or even the task that has to be performed on the object itself, which the robot may even need to identify. For example, for more precise guidance, robots often contain machine vision sub-systems acting as their visual sensors, linked to powerful computers or controllers. [ 13 ] Artificial intelligence is becoming an increasingly important factor in the modern industrial robot.
The earliest known industrial robot, conforming to the ISO definition, was completed by "Bill" Griffith P. Taylor in 1937 and published in Meccano Magazine , March 1938. [ 14 ] [ 15 ] The crane-like device was built almost entirely using Meccano parts, and powered by a single electric motor. Five axes of movement were possible, including grab and grab rotation . Automation was achieved using punched paper tape to energise solenoids, which would facilitate the movement of the crane's control levers. The robot could stack wooden blocks in pre-programmed patterns. The number of motor revolutions required for each desired movement was first plotted on graph paper. This information was then transferred to the paper tape, which was also driven by the robot's single motor. Chris Shute built a complete replica of the robot in 1997.
George Devol applied for the first robotics patents in 1954 (granted in 1961). The first company to produce a robot was Unimation , founded by Devol and Joseph F. Engelberger in 1956. Unimation robots were also called programmable transfer machines since their main use at first was to transfer objects from one point to another, less than a dozen feet or so apart. They used hydraulic actuators and were programmed in joint coordinates , i.e. the angles of the various joints were stored during a teaching phase and replayed in operation. They were accurate to within 1/10,000 of an inch [ 16 ] (note: accuracy is not an appropriate measure for robots, which are usually evaluated in terms of repeatability - see below). Unimation later licensed its technology to Kawasaki Heavy Industries and GKN , manufacturing Unimates in Japan and England respectively. For some time, Unimation's only competitor was Cincinnati Milacron Inc. of Ohio . This changed radically in the late 1970s when several big Japanese conglomerates began producing similar industrial robots.
In 1969 Victor Scheinman at Stanford University invented the Stanford arm , an all-electric, 6-axis articulated robot designed to permit an arm solution . This allowed it to accurately follow arbitrary paths in space and widened the potential use of the robot to more sophisticated applications such as assembly and welding. Scheinman then designed a second arm for the MIT AI Lab, called the "MIT arm." Scheinman, after receiving a fellowship from Unimation to develop his designs, sold those designs to Unimation, which further developed them with support from General Motors and later marketed the result as the Programmable Universal Machine for Assembly (PUMA).
Industrial robotics took off quite quickly in Europe, with both ABB Robotics and KUKA Robotics bringing robots to the market in 1973. ABB Robotics (formerly ASEA) introduced the IRB 6, one of the world's first commercially available all-electric, microprocessor-controlled robots. The first two IRB 6 robots were sold to Magnusson in Sweden for grinding and polishing pipe bends and were installed in production in January 1974. Also in 1973 KUKA Robotics built its first robot, known as FAMULUS , [ 17 ] [ 18 ] also one of the first articulated robots to have six electromechanically driven axes.
Interest in robotics increased in the late 1970s and many US companies entered the field, including large firms like General Electric , and General Motors (which formed joint venture FANUC Robotics with FANUC LTD of Japan). U.S. startup companies included Automatix and Adept Technology , Inc. At the height of the robot boom in 1984, Unimation was acquired by Westinghouse Electric Corporation for 107 million U.S. dollars. Westinghouse sold Unimation to Stäubli Faverges SCA of France in 1988, which is still making articulated robots for general industrial and cleanroom applications and even bought the robotic division of Bosch in late 2004.
Only a few non-Japanese companies ultimately managed to survive in this market, the major ones being: Adept Technology , Stäubli , the Swedish - Swiss company ABB Asea Brown Boveri , the German company KUKA Robotics and the Italian company Comau .
Accuracy and repeatability are different measures. Repeatability is usually the most important criterion for a robot and is similar to the concept of 'precision' in measurement—see accuracy and precision . ISO 9283 [ 19 ] sets out a method whereby both accuracy and repeatability can be measured. Typically a robot is sent to a taught position a number of times and the error is measured at each return to the position after visiting 4 other positions. Repeatability is then quantified using the standard deviation of those samples in all three dimensions. A typical robot can, of course, make a positional error exceeding that, and that could be a problem for the process. Moreover, the repeatability is different in different parts of the working envelope and also changes with speed and payload. ISO 9283 specifies that accuracy and repeatability should be measured at maximum speed and at maximum payload. However, this results in pessimistic values, as the robot can be much more accurate and repeatable at light loads and speeds.
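Because repeatability is defined from the scatter of repeated returns to a taught pose, it can be estimated directly from measured positions. The sketch below follows an ISO 9283-style characterisation (mean deviation from the barycentre of the attained positions plus three standard deviations); the position data are made-up illustration values, not measurements from any particular robot.

    import numpy as np

    # Minimal sketch of positional repeatability from repeated returns to a
    # taught pose: distance of each attained position from their barycentre,
    # summarised as mean deviation plus three standard deviations.
    def repeatability(positions):
        p = np.asarray(positions, dtype=float)        # N x 3 attained positions (mm)
        barycentre = p.mean(axis=0)                   # mean attained position
        l = np.linalg.norm(p - barycentre, axis=1)    # deviation of each attempt
        return l.mean() + 3 * l.std(ddof=1)

    attained = [[100.02, 50.01, 24.99],
                [ 99.98, 50.03, 25.02],
                [100.01, 49.97, 25.00],
                [ 99.99, 50.00, 25.01]]
    print(f"repeatability: {repeatability(attained):.3f} mm")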
Repeatability in an industrial process is also subject to the accuracy of the end effector, for example a gripper, and even to the design of the 'fingers' that match the gripper to the object being grasped. For example, if a robot picks a screw by its head, the screw could be at a random angle. A subsequent attempt to insert the screw into a hole could easily fail. These and similar scenarios can be improved with 'lead-ins' e.g. by making the entrance to the hole tapered.
The setup or programming of motions and sequences for an industrial robot is typically taught by linking the robot controller to a laptop , desktop computer or (internal or Internet) network .
A robot and a collection of machines or peripherals is referred to as a workcell , or cell. A typical cell might contain a parts feeder, a molding machine and a robot. The various machines are 'integrated' and controlled by a single computer or PLC . How the robot interacts with other machines in the cell must be programmed, both with regard to their positions in the cell and synchronizing with them.
Software: The computer is installed with corresponding interface software. The use of a computer greatly simplifies the programming process. Specialized robot software is run either in the robot controller or in the computer or both depending on the system design.
There are two basic entities that need to be taught (or programmed): positional data and procedure. For example, in a task to move a screw from a feeder to a hole the positions of the feeder and the hole must first be taught or programmed. Secondly the procedure to get the screw from the feeder to the hole must be programmed along with any I/O involved, for example a signal to indicate when the screw is in the feeder ready to be picked up. The purpose of the robot software is to facilitate both these programming tasks.
Teaching the robot positions may be achieved in a number of ways:
Positional commands: The robot can be directed to the required position using a GUI or text-based commands in which the required X-Y-Z position may be specified and edited.
Teach pendant: Robot positions can be taught via a teach pendant. This is a handheld control and programming unit. The common features of such units are the ability to manually send the robot to a desired position, or "inch" or "jog" to adjust a position. They also have a means to change the speed since a low speed is usually required for careful positioning, or while test-running through a new or modified routine. A large emergency stop button is usually included as well. Typically once the robot has been programmed there is no more use for the teach pendant. All teach pendants are equipped with a 3-position deadman switch . In the manual mode, it allows the robot to move only when it is in the middle position (partially pressed). If it is fully pressed in or completely released, the robot stops. This principle of operation allows natural reflexes to be used to increase safety.
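The enabling behaviour of the 3-position switch can be stated very compactly in code. The sketch below is an assumed, simplified model of that logic for illustration only, not any manufacturer's implementation.

    # Minimal sketch (assumed logic) of a 3-position enabling ("deadman")
    # switch: motion is permitted only in the centre position; fully
    # released or fully pressed both stop the robot.
    def motion_enabled(switch_position):
        # switch_position: "released", "centre" or "fully_pressed"
        return switch_position == "centre"

    for pos in ("released", "centre", "fully_pressed"):
        print(pos, "->", "motion allowed" if motion_enabled(pos) else "stop")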
Lead-by-the-nose: this is a technique offered by many robot manufacturers. In this method, one user holds the robot's manipulator, while another person enters a command which de-energizes the robot, causing it to go limp. The user then moves the robot by hand to the required positions and/or along a required path while the software logs these positions into memory. The program can later run the robot to these positions or along the taught path. This technique is popular for tasks such as paint spraying .
Offline programming is where the entire cell, the robot and all the machines or instruments in the workspace are mapped graphically. The robot can then be moved on screen and the process simulated. A robotics simulator is used to create embedded applications for a robot, without depending on the physical operation of the robot arm and end effector. The advantage of robotics simulation is that it saves time in the design of robotics applications. It can also increase the level of safety associated with robotic equipment since various "what if" scenarios can be tried and tested before the system is activated. [ 8 ] Robot simulation software provides a platform to teach, test, run, and debug programs that have been written in a variety of programming languages.
Robot simulation tools allow for robotics programs to be conveniently written and debugged off-line with the final version of the program tested on an actual robot. The ability to preview the behavior of a robotic system in a virtual world allows for a variety of mechanisms, devices, configurations and controllers to be tried and tested before being applied to a "real world" system. Robotics simulators have the ability to provide real-time computing of the simulated motion of an industrial robot using both geometric modeling and kinematics modeling.
Manufacturer-independent robot programming tools are a relatively new but flexible way to program robot applications. Using a visual programming language , the programming is done via drag and drop of predefined templates/building blocks. They often combine the execution of simulations to evaluate feasibility with offline programming. If the system is able to compile and upload native robot code to the robot controller, the user no longer has to learn each manufacturer's proprietary language . Therefore, this approach can be an important step towards standardizing programming methods.
In addition, machine operators often use user interface devices, typically touchscreen units, which serve as the operator control panel. The operator can switch from program to program, make adjustments within a program and also operate a host of peripheral devices that may be integrated within the same robotic system. These include end effectors , feeders that supply components to the robot, conveyor belts , emergency stop controls, machine vision systems, safety interlock systems, barcode printers and an almost infinite array of other industrial devices which are accessed and controlled via the operator control panel.
The teach pendant or PC is usually disconnected after programming and the robot then runs on the program that has been installed in its controller . However a computer is often used to 'supervise' the robot and any peripherals, or to provide additional storage for access to numerous complex paths and routines.
The most essential robot peripheral is the end effector , or end-of-arm-tooling (EOAT). Common examples of end effectors include welding devices (such as MIG-welding guns, spot-welders, etc.), spray guns and also grinding and deburring devices (such as pneumatic disk or belt grinders, burrs, etc.), and grippers (devices that can grasp an object, usually electromechanical or pneumatic ). Other common means of picking up objects is by vacuum or magnets . End effectors are frequently highly complex, made to match the handled product and often capable of picking up an array of products at one time. They may utilize various sensors to aid the robot system in locating, handling, and positioning products.
For a given robot the only parameters necessary to completely locate the end effector (gripper, welding torch, etc.) of the robot are the angles of each of the joints or displacements of the linear axes (or combinations of the two for robot formats such as SCARA). However, there are many different ways to define the points. The most common and most convenient way of defining a point is to specify a Cartesian coordinate for it, i.e. the position of the 'end effector' in mm in the X, Y and Z directions relative to the robot's origin. In addition, depending on the types of joints a particular robot may have, the orientation of the end effector in yaw, pitch, and roll and the location of the tool point relative to the robot's faceplate must also be specified. For a jointed arm these coordinates must be converted to joint angles by the robot controller and such conversions are known as Cartesian Transformations which may need to be performed iteratively or recursively for a multiple axis robot. The mathematics of the relationship between joint angles and actual spatial coordinates is called kinematics. See robot control
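As a hedged illustration of such a conversion, the sketch below solves the inverse kinematics of an idealised two-link planar arm in closed form; the link lengths, the target point and the elbow-down branch chosen are assumptions for illustration only, and production controllers for six-axis arms solve a considerably more involved version of the same problem, often iteratively.

    import math

    # Minimal sketch of Cartesian-to-joint-angle conversion for an idealised
    # two-link planar arm with link lengths l1 and l2.
    def two_link_ik(x, y, l1, l2):
        c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
        if abs(c2) > 1:
            raise ValueError("target is outside the reachable workspace")
        theta2 = math.acos(c2)                      # elbow-down solution
        k1 = l1 + l2 * math.cos(theta2)
        k2 = l2 * math.sin(theta2)
        theta1 = math.atan2(y, x) - math.atan2(k2, k1)
        return theta1, theta2

    t1, t2 = two_link_ik(300.0, 200.0, 250.0, 250.0)   # target and links in mm
    print(math.degrees(t1), math.degrees(t2))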
Positioning by Cartesian coordinates may be done by entering the coordinates into the system or by using a teach pendant which moves the robot in X-Y-Z directions. It is much easier for a human operator to visualize motions up/down, left/right, etc. than to move each joint one at a time. When the desired position is reached it is then defined in some way particular to the robot software in use, e.g. P1 - P5 below.
Most articulated robots perform by storing a series of positions in memory, and moving to them at various times in their programming sequence. For example, a robot which is moving items from one place (bin A) to another (bin B) might have a simple 'pick and place' program similar to the following:
Define points P1–P5:
Define program:
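The point and instruction listings themselves are not reproduced here. As a purely illustrative sketch (the point names, their meanings and the motion steps are assumptions, not any particular listing), such a routine might look like the following:

    # Hypothetical pick-and-place sketch; names and steps are assumed for
    # illustration only.
    points = {
        "P1": "safe position above the work area",
        "P2": "a few centimetres above bin A",
        "P3": "gripping position in bin A",
        "P4": "a few centimetres above bin B",
        "P5": "release position in bin B",
    }

    program = [
        ("move", "P1"), ("move", "P2"), ("move", "P3"), ("close_gripper", None),
        ("move", "P2"), ("move", "P4"), ("move", "P5"), ("open_gripper", None),
        ("move", "P4"), ("move", "P1"),
    ]

    for instruction, target in program:
        print(instruction if target is None else f"{instruction} {target}")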
For examples of how this would look in popular robot languages see industrial robot programming .
The American National Standard for Industrial Robots and Robot Systems — Safety Requirements (ANSI/RIA R15.06-1999) defines a singularity as "a condition caused by the collinear alignment of two or more robot axes resulting in unpredictable robot motion and velocities." It is most common in robot arms that utilize a "triple-roll wrist". This is a wrist about which the three axes of the wrist, controlling yaw, pitch, and roll, all pass through a common point. An example of a wrist singularity is when the path through which the robot is traveling causes the first and third axes of the robot's wrist (i.e. robot's axes 4 and 6) to line up. The second wrist axis then attempts to spin 180° in zero time to maintain the orientation of the end effector. Another common term for this singularity is a "wrist flip". The result of a singularity can be quite dramatic and can have adverse effects on the robot arm, the end effector, and the process. Some industrial robot manufacturers have attempted to side-step the situation by slightly altering the robot's path to prevent this condition. Another method is to slow the robot's travel speed, thus reducing the speed required for the wrist to make the transition. The ANSI/RIA has mandated that robot manufacturers shall make the user aware of singularities if they occur while the system is being manually manipulated.
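As a hedged illustration, the sketch below flags the approach to a wrist singularity by checking how close joint 5 is to zero, the configuration in which axes 4 and 6 line up for a triple-roll wrist; the joint convention and the 5-degree threshold are assumptions, not values taken from any standard or controller.

    import math

    # Minimal sketch (assumed convention): axes 4 and 6 of a triple-roll
    # wrist become collinear when joint 5 is at or near zero, so a simple
    # guard can flag the approach to a wrist singularity and trigger a
    # slower or altered path.
    def near_wrist_singularity(joint5_angle_rad, threshold_deg=5.0):
        return abs(math.sin(joint5_angle_rad)) < math.sin(math.radians(threshold_deg))

    for q5 in (30.0, 4.0, 0.5):
        print(f"joint 5 = {q5:>4} deg ->",
              "near singularity" if near_wrist_singularity(math.radians(q5)) else "ok")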
A second type of singularity in wrist-partitioned vertically articulated six-axis robots occurs when the wrist center lies on a cylinder that is centered about axis 1 and with radius equal to the distance between axes 1 and 4. This is called a shoulder singularity. Some robot manufacturers also mention alignment singularities, where axes 1 and 6 become coincident. This is simply a sub-case of shoulder singularities. When the robot passes close to a shoulder singularity, joint 1 spins very fast.
The third and last type of singularity in wrist-partitioned vertically articulated six-axis robots occurs when the wrist's center lies in the same plane as axes 2 and 3.
Singularities are closely related to the phenomenon of gimbal lock , which has a similar root cause of axes becoming lined up.
According to the International Federation of Robotics (IFR) study World Robotics 2024 , there were about 4,281,585 operational industrial robots by the end of 2023. [ 3 ] [ 4 ] For the year 2018, the IFR estimated the worldwide sales of industrial robots at US$16.5 billion. Including the cost of software, peripherals and systems engineering, the annual turnover for robot systems was estimated to be US$48.0 billion in 2018. [ 20 ]
China is the largest industrial robot market, [ 21 ] : 256 with 154,032 units sold in 2018. [ 20 ] China had the largest operational stock of industrial robots, with 649,447 at the end of 2018. [ 22 ] United States industrial robot-makers shipped 35,880 robots to factories in the US in 2018, 7% more than in 2017. [ 23 ]
The biggest customer of industrial robots is the automotive industry with a 30% market share, followed by the electrical/electronics industry with 25%, the metal and machinery industry with 10%, the rubber and plastics industry with 5%, and the food industry with 5%. [ 20 ] In the textiles, apparel and leather industry, 1,580 units are operational. [ 24 ]
Estimated worldwide annual supply of industrial robots (in units): [ 3 ] [ 4 ] [ 25 ]
The International Federation of Robotics has predicted a worldwide increase in adoption of industrial robots, estimating 1.7 million new robot installations in factories worldwide by 2020 [IFR 2017]. Rapid advances in automation technologies (e.g. fixed robots, collaborative and mobile robots, and exoskeletons) have the potential to improve work conditions but also to introduce workplace hazards in manufacturing workplaces. [ 26 ] [3] Despite the lack of occupational surveillance data on injuries associated specifically with robots, researchers from the US National Institute for Occupational Safety and Health (NIOSH) identified 61 robot-related deaths between 1992 and 2015 using keyword searches of the Bureau of Labor Statistics (BLS) Census of Fatal Occupational Injuries research database (see info from Center for Occupational Robotics Research ). Using data from the Bureau of Labor Statistics, NIOSH and its state partners have investigated 4 robot-related fatalities under the Fatality Assessment and Control Evaluation Program . In addition the Occupational Safety and Health Administration (OSHA) has investigated dozens of robot-related deaths and injuries, which can be reviewed at the OSHA Accident Search page . Injuries and fatalities could increase over time because of the increasing introduction of collaborative and co-existing robots, powered exoskeletons, and autonomous vehicles into the work environment.
Safety standards are being developed by the Robotic Industries Association (RIA) in conjunction with the American National Standards Institute (ANSI). [4] On October 5, 2017, OSHA, NIOSH and RIA signed an alliance to work together to enhance technical expertise, identify and help address potential workplace hazards associated with traditional industrial robots and the emerging technology of human-robot collaboration installations and systems, and help identify needed research to reduce workplace hazards. On October 16 NIOSH launched the Center for Occupational Robotics Research to "provide scientific leadership to guide the development and use of occupational robots that enhance worker safety, health, and wellbeing." So far, the research needs identified by NIOSH and its partners include: tracking and preventing injuries and fatalities, intervention and dissemination strategies to promote safe machine control and maintenance procedures, and on translating effective evidence-based interventions into workplace practice. | https://en.wikipedia.org/wiki/Industrial_robot |
Industrial separation processes are technical procedures which are used in industry to separate a product from impurities or other products. The original mixture may either be a natural resource (like ore , oil or sugar cane) or the product of a chemical reaction (like a drug or an organic solvent ).
Separation processes are of great economic importance as they account for 40–90% of capital and operating costs in industry. Separation processes for mixtures include, among others, washing, extraction, pressing, drying, clarification, evaporation, crystallization and filtration. Often several separation processes are performed in succession. Separation operations have several different functions: [ 1 ]
A heterogeneous mixture (e.g. liquid and solid) can be separated by mechanical separation processes like filtration or centrifugation. Homogeneous mixtures can be separated by molecular separation processes; these are either equilibrium -based or rate -controlled. Equilibrium-based processes operate by the formation of two immiscible phases with different compositions at equilibrium; an example is distillation (in distillation the vapor has a different composition from the liquid). Rate-controlled processes are based on different transport rates of compounds through a medium; examples are adsorption, ion exchange and crystallization.
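The equilibrium-based idea behind distillation can be illustrated numerically. The sketch below computes the vapour composition in equilibrium with a binary liquid using a constant relative volatility; the value of alpha is an assumed, roughly benzene-toluene-like number used only for illustration.

    # Minimal sketch of why distillation separates: for an ideal binary
    # mixture with constant relative volatility alpha, the vapour in
    # equilibrium with a liquid of light-component mole fraction x is
    # enriched according to y = alpha*x / (1 + (alpha - 1)*x).
    def equilibrium_vapour_fraction(x, alpha=2.5):   # alpha is an assumed value
        return alpha * x / (1 + (alpha - 1) * x)

    for x in (0.1, 0.3, 0.5):
        print(f"liquid x = {x:.1f} -> vapour y = {equilibrium_vapour_fraction(x):.2f}")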
Separation of a mixture into two phases can be done by an energy separating agent , a mass separating agent , a barrier or external fields. Energy-separating agents are used to create a second phase (immiscible with and of different composition from the first phase); they are the most common technique used in industry. For example, the addition of heat (the separating agent) to a liquid (the first phase) leads to the formation of vapor (the second phase). Mass-separating agents are other chemicals. They selectively dissolve or absorb one of the products; they are either a liquid (for sorption, extractive distillation or extraction) or a solid (for adsorption or ion exchange). The use of a barrier which restricts the movement of one compound but not of the other (semipermeable membranes) is less common; external fields are used only in special applications. [ 1 ] | https://en.wikipedia.org/wiki/Industrial_separation_processes
An industrial shredder is a machine used to break down materials for various applications such as recycling, volume reduction, and product destruction. Industrial shredders come in many different sizes and design variations based on the particle size needed for the final shredded product.
The main categories of designs used today are as follows: low speed, high torque shear type shredders of single, dual, triple and quad shaft design, [ 1 ] single shaft grinders of single or dual shaft design, granulators, knife hogs, raspers, maulers, flails, crackermills, and refining mills. Industrial shredder components include a rotor, counter blades, housing, motor, transmission system, power system and electrical control system.
Some examples of materials that are commonly shredded are: tires , metals , construction and demolition debris, wood , plastics , leathers, papers and garbage , such as commercial and mixed waste. The industrial shredder is commonly used to process materials into different sizes for separation or to reduce the cost of transport. Waste materials such as municipal solid waste , radioactive waste , medical waste, and hazardous waste are shredded in treatment and disposal systems.
Because the hardness of materials differs, the blades on shredders are also slightly different.
An industrial shredder is any shredder that can be used in an industrial application (rather than a consumer application). They can be equipped with different types of cutting systems: horizontal shaft design, vertical shaft design, single-shaft, two-shaft, three-shaft and four-shaft cutting systems. These shredders may be slow-speed or high-speed, and their classification as industrial shredders is not restricted by speed or horsepower. Small, low-cost portable shredders have been developed; these are often suitable for personal use as well as for small scale industry.
The largest scrap metal shredder in the world, rated at 10,000 hp, was designed by the Schnitzer Steel group of Portland, Oregon in 1980. The 9,200 hp (6,860 kW) Lynxs at the Sims Metal Management plant at the mouth of the River Usk in Newport, Wales has access by road, rail and sea. It can process 450 cars per hour. [ 2 ] [ 3 ] [ 4 ]
| https://en.wikipedia.org/wiki/Industrial_shredder
Industrial waste is the waste produced by industrial activity which includes any material that is rendered useless during a manufacturing process such as that of factories , mills, and mining operations. Types of industrial waste include dirt and gravel , masonry and concrete , scrap metal, oil, solvents , chemicals, scrap lumber, even vegetable matter from restaurants. Industrial waste may be solid, semi-solid or liquid in form. It may be hazardous waste (some types of which are toxic ) or non-hazardous waste. Industrial waste may pollute the nearby soil or adjacent water bodies, and can contaminate groundwater, lakes, streams, rivers or coastal waters. [ 1 ] Industrial waste is often mixed into municipal waste , making accurate assessments difficult. An estimate for the US goes as high as 7.6 billion tons of industrial waste produced annually, as of 2017. [ 2 ] [ better source needed ] Most countries have enacted legislation to deal with the problem of industrial waste, but strictness and compliance regimes vary. Enforcement is always an issue.
Hazardous waste, chemical waste , industrial solid waste and municipal solid waste are classifications of wastes used by governments in different countries. Sewage treatment plants can treat some industrial wastes, i.e. those consisting of conventional pollutants such as biochemical oxygen demand (BOD). Industrial wastes containing toxic pollutants or high concentrations of other pollutants (such as ammonia ) require specialized treatment systems. ( See Industrial wastewater treatment ). [ 3 ]
Industrial wastes can be classified on the basis of their characteristics:
Many factories and most power plants are located near bodies of water to obtain large amounts of water for manufacturing processes or for equipment cooling . [ 4 ] In the US, electric power plants are the largest water users. Other industries using large amounts of water are pulp and paper mills , chemical plants , iron and steel mills , petroleum refineries , food processing plants and aluminum smelters . [ 5 ]
Many less-developed countries that are becoming industrialized do not yet have the resources or technology to dispose of their wastes with minimal impacts on the environment. [ 6 ] Both untreated and partially treated wastewater are commonly fed back into a nearby body of water. Metals, chemicals and sewage released into bodies of water directly affect marine ecosystems and the health of those who depend on the waters as food or drinking water sources. Toxins from the wastewater can kill off marine life or cause varying degrees of illness to those who consume these marine animals, depending on the contaminant. Metals and chemicals released into bodies of water affect the marine ecosystems . [ 7 ]
Wastewater containing nutrients (nitrates and phosphates) often causes eutrophication which can kill off existing life in water bodies. A Thailand study focusing on water pollution origins found that the highest concentrations of water contamination in the U-tapao river had a direct correlation to industrial wastewater discharges. [ 8 ]
Thermal pollution —discharges of water at elevated temperature after being used for cooling—can also lead to polluted water. Elevated water temperatures decrease oxygen levels, which can kill fish and alter food chain composition, reduce species biodiversity , and foster invasion by new thermophilic species. [ 9 ]
Solid waste, often called municipal solid waste , typically refers to material that is not hazardous. This category includes trash, rubbish and refuse; and may include materials such as construction debris and yard waste. Hazardous waste typically has specific definitions, due to the more careful and complex handling required of such wastes. Under US law, waste may be classified as hazardous based on certain characteristics: ignitability , reactivity , corrosivity and toxicity . Some types of hazardous waste are specifically listed in regulations. [ 10 ] [ 11 ]
One of the most devastating effects of industrial waste is water pollution. For many industrial processes, water is used which comes in contact with harmful chemicals. These chemicals may include organic compounds (such as solvents), metals, nutrients or radioactive material. If the wastewater is discharged without treatment, groundwater and surface water bodies—lakes, streams, rivers and coastal waters—can become polluted, with serious impacts on human health and the environment. Drinking water sources and irrigation water used for farming may be affected. The pollutants may degrade or destroy habitat for animals and plants. In coastal areas, fish and other aquatic life can be contaminated by untreated waste; beaches and other recreational areas can be damaged or closed. [ 12 ] : 273–309 [ 13 ]
Hungary's first waste prevention program was their 2014-2020 national waste management plan. Their current program (2021-2027) is financed by European Union and international grants, domestic co-financing, product charges, and landfill taxes. [ 14 ]
In Thailand the roles in municipal solid waste (MSW) management and industrial waste management are organized by the Royal Thai Government, which is organized as central (national) government, regional government, and local government. Each government is responsible for different tasks. The central government is responsible for stimulating regulation, policies, and standards. The regional governments are responsible for coordinating the central and local governments. The local governments are responsible for waste management in their governed area. [ 15 ] However, the local governments do not dispose of the waste by themselves but instead hire private companies that have been granted the right from the Pollution Control Department (PCD) in Thailand. [ 16 ] The main companies are Bangpoo Industrial Waste Management Center, [ 17 ] General Environmental Conservation Public Company Limited (GENCO), [ 18 ] SGS Thailand, [ 19 ] Waste Management Siam LTD (WMS), [ 20 ] and Better World Green Public Company Limited (BWG). [ 21 ] These companies are responsible for the waste they have received from their customers before releasing it to the environment or burying it.
The 1976 Resource Conservation and Recovery Act (RCRA) provides for federal regulation of industrial, household, and manufacturing solid and hazardous wastes in the United States. [ 11 ] [ 22 ] RCRA aims to conserve natural resources and energy, protect human health, eliminate or reduce waste, and to clean up waste when needed. [ 22 ] RCRA first began as an amendment to the Solid Waste Disposal Act of 1965 , and in 1984, Congress passed the Hazardous and Solid Waste Amendments (HSWA) which strengthened RCRA by: [ 23 ]
Furthermore, the EPA uses Superfund to find sites of contamination, identify the parties responsible, and in the occurrences where said parties are not known or able to, the program funds cleanups. [ 28 ] Superfund also works on figuring out and applying final remedies for cleanups. The Superfund process is to: 1) collect necessary information (known as the Remedial Investigation (RI) phase); 2) assess alternatives to deal with any potential risks to the environmental and human health (known as the Feasibility Study (FS) stage); 3) determine the most suitable remedies that could lower the risks to more adequate levels. [ 28 ] Some sites are so contaminated because of past waste disposals that it takes decades to clean them up, or bring the contamination down to acceptable levels, thus requiring long-term management over those sites. Hence, sometimes figuring out a final remedy is not possible, and so, the EPA has developed the Adaptive Management plan. [ 28 ]
The EPA has issued national regulations regarding the handling, treatment and disposal of wastes. EPA has authorized individual state environmental agencies to implement and enforce the RCRA regulations through approved waste management programs. [ 29 ]
State compliance is monitored by EPA inspections. In the case that waste management guideline standards are not met, action against the site [ which? ] will be taken. Compliance errors may be corrected by enforced cleanup directly by the site responsible for the waste or by a third party hired by that site. [ 29 ] Prior to the enactment of the Clean Water Act (1972) and RCRA, open dumping or releasing wastewater into nearby bodies of water were common waste disposal methods. [ 30 ] The negative effects on human health and environmental health led to the need for such regulations. The RCRA framework provides specified subsections defining nonhazardous and hazardous waste materials and how each should be properly managed and disposed of. Guidelines for the disposal of nonhazardous solid waste includes the banning of open dumping. Hazardous waste is monitored in a " cradle to grave " fashion; each step in the process of waste generation, transport and disposal is tracked. The EPA now [ when? ] manages 2.96 million tons of solid, hazardous and industrial waste. Since establishment, the RCRA program has undergone reforms as inefficiencies arise and as waste management processes evolve. [ 29 ]
The 1972 Clean Water Act is a broad legislative mandate to protect surface waters (rivers, lakes and coastal water bodies). [ 31 ] A 1948 law had authorized research and development of voluntary water standards, and had provided limited financing for state and local government efforts. The 1972 law prohibited, for the first time, uncontrolled discharges of industrial waste, as well as municipal sewage, into waters of the United States. EPA was required to develop national standards for industrial facilities and standards for municipal sewage treatment plants. States were required to develop water quality standards for individual water bodies. Enforcement is mainly delegated to state agencies. Major amendments to the law were passed in 1977 and 1987. [ 32 ] | https://en.wikipedia.org/wiki/Industrial_waste |
Industrial wastewater treatment describes the processes used for treating wastewater that is produced by industries as an undesirable by-product. After treatment, the treated industrial wastewater (or effluent) may be reused or released to a sanitary sewer or to a surface water in the environment. Some industrial facilities generate wastewater that can be treated in sewage treatment plants . Most industrial processes, such as petroleum refineries , chemical and petrochemical plants have their own specialized facilities to treat their wastewaters so that the pollutant concentrations in the treated wastewater comply with the regulations regarding disposal of wastewaters into sewers or into rivers, lakes or oceans . [ 1 ] : 1412 This applies to industries that generate wastewater with high concentrations of organic matter (e.g. oil and grease), toxic pollutants (e.g. heavy metals, volatile organic compounds ) or nutrients such as ammonia . [ 2 ] : 180 Some industries install a pre-treatment system to remove some pollutants (e.g., toxic compounds), and then discharge the partially treated wastewater to the municipal sewer system. [ 3 ] : 60
Most industries produce some wastewater . Recent trends have been to minimize such production or to recycle treated wastewater within the production process. Some industries have been successful at redesigning their manufacturing processes to reduce or eliminate pollutants. [ 4 ] Sources of industrial wastewater include battery manufacturing, chemical manufacturing, electric power plants, food industry , iron and steel industry, metal working, mines and quarries, nuclear industry, oil and gas extraction , petroleum refining and petrochemicals , pharmaceutical manufacturing, pulp and paper industry , smelters, textile mills , industrial oil contamination , water treatment and wood preserving . Treatment processes include brine treatment, solids removal (e.g. chemical precipitation, filtration), oils and grease removal, removal of biodegradable organics, removal of other organics, removal of acids and alkalis, and removal of toxic materials.
Industrial facilities may generate the following industrial wastewater flows: [ citation needed ]
Industrial wastewater could add the following pollutants to receiving water bodies if the wastewater is not treated and managed properly:
The specific pollutants generated and the resultant effluent concentrations can vary widely among the industrial sectors. [ citation needed ]
Battery manufacturers specialize in fabricating small devices for electronics and portable equipment (e.g., power tools), or larger, high-powered units for cars, trucks and other motorized vehicles. Pollutants generated at manufacturing plants include cadmium, chromium, cobalt, copper, cyanide, iron, lead, manganese, mercury, nickel, silver, zinc, oil and grease. [ 13 ]
A centralized waste treatment (CWT) facility processes liquid or solid industrial wastes generated by off-site manufacturing facilities. A manufacturer may send its wastes to a CWT plant, rather than perform treatment on site, due to constraints such as limited land availability, difficulty in designing and operating an on-site system, or limitations imposed by environmental regulations and permits. A manufacturer may determine that using a CWT is more cost-effective than treating the waste itself; this is often the case where the manufacturer is a small business. [ 14 ]
CWT plants often receive wastes from a wide variety of manufacturers, including chemical plants, metal fabrication and finishing; and used oil and petroleum products from various manufacturing sectors. The wastes may be classified as hazardous , have high pollutant concentrations or otherwise be difficult to treat. In 2000 the U.S. Environmental Protection Agency published wastewater regulations for CWT facilities in the US. [ 15 ]
The specific pollutants discharged by organic chemical manufacturers vary widely from plant to plant, depending on the types of products manufactured, such as bulk organic chemicals, resins, pesticides, plastics, or synthetic fibers. Some of the organic compounds that may be discharged are benzene , chloroform , naphthalene , phenols , toluene and vinyl chloride . Biochemical oxygen demand (BOD), which is a gross measurement of a range of organic pollutants, may be used to gauge the effectiveness of a biological wastewater treatment system, and is used as a regulatory parameter in some discharge permits. Metal pollutant discharges may include chromium , copper , lead , nickel and zinc . [ 16 ]
The inorganic chemicals sector covers a wide variety of products and processes, although an individual plant may produce a narrow range of products and pollutants. Products include aluminum compounds; calcium carbide and calcium chloride; hydrofluoric acid; potassium compounds; borax; chrome and fluorine-based compounds; cadmium and zinc-based compounds. The pollutants discharged vary by product sector and individual plant, and may include arsenic, chlorine, cyanide, fluoride; and heavy metals such as chromium, copper, iron, lead, mercury, nickel and zinc. [ 17 ]
Fossil-fuel power stations , particularly coal -fired plants, are a major source of industrial wastewater. Many of these plants discharge wastewater with significant levels of metals such as lead , mercury , cadmium and chromium , as well as arsenic , selenium and nitrogen compounds ( nitrates and nitrites ). Wastewater streams include flue-gas desulfurization , fly ash , bottom ash and flue gas mercury control. Plants with air pollution controls such as wet scrubbers typically transfer the captured pollutants to the wastewater stream. [ 18 ]
Ash ponds , a type of surface impoundment, are a widely used treatment technology at coal-fired plants. These ponds use gravity to settle out large particulates (measured as total suspended solids ) from power plant wastewater. This technology does not treat dissolved pollutants. Power stations use additional technologies to control pollutants, depending on the particular wastestream in the plant. These include dry ash handling, closed-loop ash recycling, chemical precipitation , biological treatment (such as an activated sludge process), membrane systems, and evaporation-crystallization systems. [ 18 ] Technological advancements in ion-exchange membranes and electrodialysis systems have enabled high efficiency treatment of flue-gas desulfurization wastewater to meet recent EPA discharge limits. [ 19 ] The treatment approach is similar for other highly scaling industrial wastewaters. [ citation needed ]
Wastewater generated from agricultural and food processing operations has distinctive characteristics that set it apart from common municipal wastewater managed by public or private sewage treatment plants throughout the world: it is biodegradable and non-toxic, but has high Biological Oxygen Demand (BOD) and suspended solids (SS). [ 20 ] The constituents of food and agriculture wastewater are often complex to predict, due to the differences in BOD and pH in effluents from vegetable, fruit, and meat products and due to the seasonal nature of food processing and post-harvesting. [ citation needed ]
Processing of food from raw materials requires large volumes of high grade water. Vegetable washing generates water with high loads of particulate matter and some dissolved organic matter . It may also contain surfactants and pesticides.
Aquaculture facilities (fish farms) often discharge large amounts of nitrogen and phosphorus, as well as suspended solids. Some facilities use drugs and pesticides, which may be present in the wastewater. [ 21 ]
Dairy processing plants generate conventional pollutants (BOD, SS). [ 22 ]
Animal slaughter and processing produces organic waste from body fluids, such as blood , and gut contents. Pollutants generated include BOD, SS, coliform bacteria , oil and grease, organic nitrogen and ammonia . [ 23 ]
Processing food for sale produces wastes generated from cooking which are often rich in plant organic material and may also contain salt , flavourings , colouring material and acids or alkali . Large quantities of fats, oil and grease ("FOG") may also be present, which in sufficient concentrations can clog sewer lines. Some municipalities require restaurants and food processing businesses to use grease interceptors and regulate the disposal of FOG in the sewer system. [ 24 ]
Food processing activities such as plant cleaning, material conveying, bottling, and product washing create wastewater. Many food processing facilities require on-site treatment before operational wastewater can be land applied or discharged to a waterway or a sewer system. High suspended solids levels of organic particles increase BOD and can result in significant sewer surcharge fees. Sedimentation, wedge wire screening, or rotating belt filtration (microscreening) are commonly used methods to reduce suspended organic solids loading prior to discharge. [ citation needed ]
Glass manufacturing wastes vary with the type of glass manufactured, which includes fiberglass , plate glass , rolled glass , and glass containers, among others. The wastewater discharged by glass plants may include ammonia, BOD, chemical oxygen demand (COD), fluoride , lead, oil, phenol, and/or phosphorus. The discharges may also be highly acidic (low pH) or alkaline (high pH). [ 25 ]
The production of iron from its ores involves powerful reduction reactions in blast furnaces. Cooling waters are inevitably contaminated with products especially ammonia and cyanide . Production of coke from coal in coking plants also requires water cooling and the use of water in by-products separation. Contamination of waste streams includes gasification products such as benzene , naphthalene , anthracene , cyanide, ammonia, phenols , cresols together with a range of more complex organic compounds known collectively as polycyclic aromatic hydrocarbons (PAH). [ 26 ]
The conversion of iron or steel into sheet, wire or rods requires hot and cold mechanical transformation stages frequently employing water as a lubricant and coolant. Contaminants include hydraulic oils , tallow and particulate solids. Final treatment of iron and steel products before onward sale into manufacturing includes pickling in strong mineral acid to remove rust and prepare the surface for tin or chromium plating or for other surface treatments such as galvanisation or painting . The two acids commonly used are hydrochloric acid and sulfuric acid . Wastewater include acidic rinse waters together with waste acid. Although many plants operate acid recovery plants (particularly those using hydrochloric acid), where the mineral acid is boiled away from the iron salts, there remains a large volume of highly acid ferrous sulfate or ferrous chloride to be disposed of. Many steel industry wastewaters are contaminated by hydraulic oil, also known as soluble oil. [ citation needed ]
Many industries perform work on metal feedstocks (e.g. sheet metal, ingots ) as they fabricate their final products. The industries include automobile, truck and aircraft manufacturing; tools and hardware manufacturing; electronic equipment and office machines; ships and boats; appliances and other household products; and stationary industrial equipment (e.g. compressors, pumps, boilers). Typical processes conducted at these plants include grinding , machining , coating and painting, chemical etching and milling , solvent degreasing , electroplating and anodizing . Wastewater generated from these industries may contain heavy metals (common heavy metal pollutants from these industries include cadmium, chromium, copper, lead, nickel, silver and zinc), cyanide and various chemical solvents, oil, and grease. [ 27 ] [ 28 ]
The principal waste-waters associated with mines and quarries are slurries of rock particles in water. These arise from rainfall washing exposed surfaces and haul roads and also from rock washing and grading processes. Volumes of water can be very high, especially rainfall related arisings on large sites. [ 29 ] Some specialized separation operations, such as coal washing to separate coal from native rock using density gradients , can produce wastewater contaminated by fine particulate haematite and surfactants . Oils and hydraulic oils are also common contaminants. [ 30 ]
Wastewater from metal mines and ore recovery plants is inevitably contaminated by the minerals present in the native rock formations. Following crushing and extraction of the desirable materials, undesirable materials may enter the wastewater stream. For metal mines, this can include unwanted metals such as zinc and other materials such as arsenic . Extraction of high value metals such as gold and silver may generate slimes containing very fine particles where physical removal of contaminants becomes particularly difficult. [ 31 ]
Additionally, the geologic formations that harbour economically valuable metals such as copper and gold very often consist of sulphide-type ores. The processing entails grinding the rock into fine particles and then extracting the desired metal(s), with the leftover rock being known as tailings. These tailings contain a combination of not only undesirable leftover metals, but also sulphide components which eventually form sulphuric acid upon the exposure to air and water that inevitably occurs when the tailings are disposed of in large impoundments. The resulting acid mine drainage , which is often rich in heavy metals (because acids dissolve metals), is one of the many environmental impacts of mining . [ 31 ]
The waste production from the nuclear and radio-chemicals industry is dealt with as Radioactive waste . [ citation needed ]
Researchers have looked at the bioaccumulation of strontium by Scenedesmus spinosus ( algae ) in simulated wastewater. The study claims a highly selective biosorption capacity for strontium by S. spinosus, suggesting that it may be appropriate for use with nuclear wastewater. [ 32 ]
Oil and gas well operations generate produced water , which may contain oils, toxic metals (e.g. arsenic , cadmium , chromium , mercury, lead), salts, organic chemicals and solids. Some produced water contains traces of naturally occurring radioactive material . Offshore oil and gas platforms also generate deck drainage, domestic waste and sanitary waste. During the drilling process, well sites typically discharge drill cuttings and drilling mud (drilling fluid). [ 33 ]
Pollutants discharged at petroleum refineries and petrochemical plants include conventional pollutants (BOD, oil and grease, suspended solids ), ammonia, chromium, phenols and sulfides. [ 34 ]
Pharmaceutical plants typically generate a variety of process wastewaters, including solvents, spent acid and caustic solutions, water from chemical reactions, product wash water, condensed steam, blowdown from air pollution control scrubbers, and equipment washwater. Non-process wastewaters typically include cooling water and site runoff. Pollutants generated by the industry include acetone , ammonia, benzene, BOD, chloroform, cyanide, ethanol , ethyl acetate , isopropanol , methylene chloride , methanol , phenol and toluene. Treatment technologies used include advanced biological treatment (e.g. activated sludge with nitrification), multimedia filtration , cyanide destruction (e.g. hydrolysis ), steam stripping and wastewater recycling. [ 35 ]
Effluent from the pulp and paper industry is generally high in suspended solids and BOD. Plants that bleach wood pulp for paper making may generate chloroform , dioxins (including 2,3,7,8-TCDD ), furans , phenols, and chemical oxygen demand (COD). [ 36 ] Stand-alone paper mills using imported pulp may only require simple primary treatment, such as sedimentation or dissolved air flotation . Increased BOD or COD loadings, as well as organic pollutants, may require biological treatment such as activated sludge or upflow anaerobic sludge blanket reactors . For mills with high inorganic loadings like salt, tertiary treatments may be required, either general membrane treatments like ultrafiltration or reverse osmosis or treatments to remove specific contaminants, such as nutrients.
The pollutants discharged by nonferrous smelters vary with the base metal ore. Bauxite smelters generate phenols [ 37 ] : 131 but typically use settling basins and evaporation to manage these wastes, with no need to routinely discharge wastewater. [ 37 ] : 395 Aluminum smelters typically discharge fluoride , benzo(a)pyrene , antimony and nickel , as well as aluminum. Copper smelters typically generate cadmium , lead, zinc, arsenic and nickel, in addition to copper, in their wastewater. Lead smelters discharge lead and zinc. Nickel and cobalt smelters discharge ammonia and copper in addition to the base metals. Zinc smelters discharge arsenic, cadmium, copper, lead, selenium and zinc. [ 38 ]
Typical treatment processes used in the industry are chemical precipitation, sedimentation and filtration. [ 37 ] : 145
Textile mills , including carpet manufacturers, generate wastewater from a wide variety of processes, including cleaning and finishing, yarn manufacturing and fabric finishing (such as bleaching , dyeing , resin treatment, waterproofing and retardant flameproofing ). Pollutants generated by textile mills include BOD, SS, oil and grease, sulfide, phenols and chromium. [ 39 ] Insecticide residues in fleeces are a particular problem in treating waters generated in wool processing. Animal fats may be present in the wastewater, which if not contaminated, can be recovered for the production of tallow or further rendering. [ citation needed ]
Textile dyeing plants generate wastewater that contain synthetic (e.g., reactive dyes, acid dyes, basic dyes, disperse dyes, vat dyes, sulphur dyes, mordant dyes, direct dyes, ingrain dyes, solvent dyes, pigment dyes) [ 40 ] and natural dyestuff, gum thickener (guar) and various wetting agents, pH buffers and dye retardants or accelerators. Following treatment with polymer-based flocculants and settling agents, typical monitoring parameters include BOD, COD, color (ADMI), sulfide, oil and grease, phenol, TSS and heavy metals (chromium, zinc , lead, copper).
Industrial applications where oil enters the wastewater stream may include vehicle wash bays, workshops, fuel storage depots, transport hubs and power generation. Often the wastewater is discharged into local sewer or trade waste systems and must meet local environmental specifications. Typical contaminants can include solvents, detergents, grit, lubricants and hydrocarbons.
Many industries have a need to treat water to obtain very high quality water for their processes. This might include pure chemical synthesis or boiler feed water. Also, some water treatment processes produce organic and mineral sludges from filtration and sedimentation which require treatment. Ion exchange using natural or synthetic resins removes calcium , magnesium and carbonate ions from water, typically replacing them with sodium , chloride , hydroxyl and/or other ions. Regeneration of ion-exchange columns with strong acids and alkalis produces a wastewater rich in hardness ions which are readily precipitated out, especially when in admixture with other wastewater constituents.
Wood preserving plants generate conventional and toxic pollutants, including arsenic, COD, copper, chromium, abnormally high or low pH, phenols, suspended solids, oil and grease. [ 41 ]
The various types of contamination of wastewater require a variety of strategies to remove the contamination. [ 1 ] Most industrial processes, such as petroleum refineries , chemical and petrochemical plants have onsite facilities to treat their wastewaters so that the pollutant concentrations in the treated wastewater comply with the regulations regarding disposal of wastewaters into sewers or into rivers, lakes or oceans. [ 1 ] : 1412 Constructed wetlands are being used in an increasing number of cases as they provide high-quality and productive on-site treatment. Other industrial processes that produce large volumes of wastewater, such as paper and pulp production, have created environmental concern, leading to the development of processes to recycle water within plants before it has to be cleaned and disposed of. [ 42 ]
An industrial wastewater treatment plant may include one or more of the following rather than the conventional treatment sequence of sewage treatment plants:
Brine treatment involves removing dissolved salt ions from the waste stream. Although similarities to seawater or brackish water desalination exist, industrial brine treatment may contain unique combinations of dissolved ions, such as hardness ions or other metals, necessitating specific processes and equipment.
Brine treatment systems are typically optimized to either reduce the volume of the final discharge for more economic disposal (as disposal costs are often based on volume) or maximize the recovery of fresh water or salts. Brine treatment systems may also be optimized to reduce electricity consumption, chemical usage, or physical footprint.
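The volume-reduction trade-off can be illustrated with a simple salt mass balance. In the sketch below, the feed and maximum concentrate salinities are assumed illustrative values, and the calculation idealises the separation by sending all of the salt to the concentrate.

    # Minimal sketch of the salt mass balance behind volume reduction: if a
    # brine concentrator takes feed at concentration c_feed and can tolerate
    # a maximum concentrate salinity c_max, the fraction of the feed that can
    # be recovered as fresh water (ideal separation assumed) is
    # 1 - c_feed / c_max.
    def water_recovery(c_feed_g_per_l, c_max_g_per_l):
        return 1.0 - c_feed_g_per_l / c_max_g_per_l

    # Assumed illustrative numbers: 35 g/L feed concentrated to 250 g/L.
    print(f"recoverable water fraction: {water_recovery(35.0, 250.0):.0%}")  # ~86%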
Brine treatment is commonly encountered when treating cooling tower blowdown, produced water from steam-assisted gravity drainage (SAGD), produced water from natural gas extraction such as coal seam gas , frac flowback water, acid mine or acid rock drainage , reverse osmosis reject, chlor-alkali wastewater, pulp and paper mill effluent, and waste streams from food and beverage processing.
Brine treatment technologies may include: membrane filtration processes, such as reverse osmosis ; ion-exchange processes such as electrodialysis or weak acid cation exchange ; or evaporation processes, such as brine concentrators and crystallizers employing mechanical vapour recompression and steam. Due to increasingly stringent discharge standards, there has been an emergence of the use of advanced oxidation processes for the treatment of brine. Some notable examples such as Fenton's oxidation [ 45 ] [ 46 ] and ozonation [ 47 ] have been employed for degradation of recalcitrant compounds in brine from industrial plants.
Reverse osmosis may not be viable for brine treatment, due to the potential for fouling caused by hardness salts or organic contaminants, or damage to the reverse osmosis membranes from hydrocarbons .
Evaporation processes are the most widespread for brine treatment as they enable the highest degree of concentration, as high as solid salt. They also produce the highest purity effluent, even distillate-quality. Evaporation processes are also more tolerant of organics, hydrocarbons, or hardness salts. However, energy consumption is high and corrosion may be an issue as the prime mover is concentrated salt water. As a result, evaporation systems typically employ titanium or duplex stainless steel materials.
Brine management examines the broader context of brine treatment and may include consideration of government policy and regulations, corporate sustainability , environmental impact, recycling, handling and transport, containment, centralized compared to on-site treatment, avoidance and reduction, technologies, and economics. Brine management shares some issues with leachate management and more general waste management . In recent years, brine management has received greater attention due to the global push for zero liquid discharge (ZLD)/minimal liquid discharge (MLD). [ 48 ] In ZLD/MLD techniques, a closed water cycle is used to minimize water discharges from a system for water reuse . This concept has been gaining traction in recent years, due to increased water discharges and recent advancements in membrane technology. There have also been greater efforts to increase the recovery of materials from brines, especially from mining, geothermal wastewater or desalination brines. [ 49 ] [ 50 ] [ 51 ] [ 52 ] [ 53 ] [ 54 ] Various studies demonstrate the viability of extracting valuable materials such as sodium bicarbonate, sodium chloride and valuable metals (like rubidium, cesium and lithium). The concept of ZLD/MLD encompasses the downstream management of wastewater brines, to reduce discharges and also derive valuable products from them.
Most solids can be removed using simple sedimentation techniques, with the solids recovered as slurry or sludge. Very fine solids and solids with densities close to that of water pose special problems. In such cases, filtration or ultrafiltration may be required, although flocculation may be used, with alum salts or the addition of polyelectrolytes . Wastewater from industrial food processing often requires on-site treatment before it can be discharged to prevent or reduce sewer surcharge fees. The type of industry and specific operational practices determine what types of wastewater are generated and what type of treatment is required. Reducing solids such as waste product, organic materials, and sand is often a goal of industrial wastewater treatment. Some common ways to reduce solids include primary sedimentation (clarification), dissolved air flotation (DAF), belt filtration (microscreening), and drum screening.
The effective removal of oils and grease depends on the characteristics of the oil in terms of its suspension state and droplet size, which will in turn affect the choice of separator technology. Oil in industrial wastewater may be free light oil; heavy oil, which tends to sink; or emulsified oil, often referred to as soluble oil. Emulsified or soluble oils will typically require "cracking" to free the oil from its emulsion. In most cases this is achieved by lowering the pH of the water matrix.
Most separator technologies will have an optimum range of oil droplet sizes that can be effectively treated. Each separator technology will have its own performance curve outlining optimum performance based on oil droplet size. The most common separators are gravity tanks or pits, API oil-water separators or plate packs, chemical treatment via dissolved air flotation, centrifuges, media filters and hydrocyclones.
Analyzing the oily water to determine droplet size can be performed with a video particle analyser.
Hydrocyclone separators operate on the process whereby wastewater enters the cyclone chamber and is spun under extreme centrifugal forces, more than 1000 times the force of gravity. This force causes the water and oil droplets (or solid particles) to separate. The separated material is discharged from one end of the cyclone, while treated water is discharged through the opposite end for further treatment, filtration or discharge. Hydrocyclones can also be utilised in a variety of contexts, from solid-liquid separation to oil-water separation. [ 56 ] [ 57 ] [ 58 ] [ 59 ]
Biodegradable organic material of plant or animal origin is usually possible to treat using extended conventional sewage treatment processes such as activated sludge or trickling filter . [ 1 ] [ 60 ] Problems can arise if the wastewater is excessively diluted with washing water or is highly concentrated such as undiluted blood or milk. The presence of cleaning agents, disinfectants, pesticides, or antibiotics can have detrimental impacts on treatment processes. [ citation needed ]
The activated sludge process is a type of biological wastewater treatment process for treating sewage or industrial wastewaters using aeration and a biological floc composed of bacteria and protozoa . It is one of several biological wastewater treatment alternatives in secondary treatment , which deals with the removal of biodegradable organic matter and suspended solids. It uses air (or oxygen ) and microorganisms to biologically oxidize organic pollutants, producing a waste sludge (or floc ) containing the oxidized material.
A trickling filter consists of a bed of rocks , gravel , slag , peat moss , or plastic media over which wastewater flows downward and contacts a layer (or film) of microbial slime covering the bed media. Aerobic conditions are maintained by forced air flowing through the bed or by natural convection of air. The process involves adsorption of organic compounds in the wastewater by the microbial slime layer, diffusion of air into the slime layer to provide the oxygen required for the biochemical oxidation of the organic compounds. The end products include carbon dioxide gas, water and other products of the oxidation. As the slime layer thickens, it becomes difficult for the air to penetrate the layer and an inner anaerobic layer is formed. [ citation needed ]
Synthetic organic materials including solvents, paints, pharmaceuticals, pesticides, products from coke production and so forth can be very difficult to treat. Treatment methods are often specific to the material being treated. Methods include advanced oxidation processing , distillation , adsorption, ozonation, vitrification , incineration , chemical immobilisation or landfill disposal. Some materials such as some detergents may be capable of biological degradation and in such cases, a modified form of wastewater treatment can be used.
Acids and alkalis can usually be neutralised under controlled conditions. Neutralisation frequently produces a precipitate that will require treatment as a solid residue that may also be toxic. In some cases, gases may be evolved requiring treatment for the gas stream. Some other forms of treatment are usually required following neutralisation.
Waste streams rich in hardness ions as from de-ionisation processes can readily lose the hardness ions in a buildup of precipitated calcium and magnesium salts. This precipitation process can cause severe furring of pipes and can, in extreme cases, cause the blockage of disposal pipes. A 1-metre diameter industrial marine discharge pipe serving a major chemicals complex was blocked by such salts in the 1970s. Treatment is by concentration of de-ionisation waste waters and disposal to landfill or by careful pH management of the released wastewater.
Toxic materials, including many organic materials, metals (such as zinc, silver, cadmium , thallium , etc.), acids, alkalis, and non-metallic elements (such as arsenic or selenium ), are generally resistant to biological processes unless very dilute. Metals can often be precipitated out by changing the pH or by treatment with other chemicals. Many, however, are resistant to treatment or mitigation and may require concentration followed by landfilling or recycling. Dissolved organics can be incinerated within the wastewater by the advanced oxidation process.
Molecular encapsulation is a technology that has the potential to provide a system for the recyclable removal of lead and other ions from polluted sources. Nano-, micro- and milli-capsules, with sizes in the ranges 10 nm–1 μm, 1 μm–1 mm and >1 mm, respectively, are particles that have an active reagent (core) surrounded by a carrier (shell). There are three types of capsule under investigation: alginate -based capsules, carbon nanotubes , and polymer swelling capsules. These capsules provide a possible means for the remediation of contaminated water. [ 61 ]
To remove heat from wastewater generated by power plants or manufacturing plants , and thus to reduce thermal pollution , the following technologies are used:
Some facilities such as oil and gas wells may be permitted to pump their wastewater underground through injection wells . However, wastewater injection has been linked to induced seismicity . [ 63 ]
Economies of scale may favor a situation where industrial wastewater (with pre-treatment or without treatment) is discharged to the sewer and then treated at a large municipal sewage treatment plant. Typically, trade waste charges are applied in that case. Or it might be more economical to have full treatment of industrial wastewater on the same site where it is generated and then discharging this treated industrial wastewater to a suitable surface water body. This effectively reduces wastewater treatment charges collected by municipal sewage treatment plants by pre-treating wastewaters to reduce concentrations of pollutants measured to determine user fees. [ 64 ] : 300–302
Industrial wastewater plants may also reduce raw water costs by converting selected wastewaters to reclaimed water used for different purposes.
The international community has defined the treatment of industrial wastewater as an important part of sustainable development by including it in Sustainable Development Goal 6 . Target 6.3 of this goal is to "By 2030, improve water quality by reducing pollution , eliminating dumping and minimizing release of hazardous chemicals and materials, halving the proportion of untreated wastewater and substantially increasing recycling and safe reuse globally". [ 65 ] One of the indicators for this target is the "proportion of domestic and industrial wastewater flows safely treated". [ 66 ] | https://en.wikipedia.org/wiki/Industrial_wastewater_treatment |
Industrialised building system (IBS) is a term used in Malaysia for a technique of construction whereby components are manufactured in a controlled environment, either on site or off site, and then placed and assembled into construction works. [ 1 ] Worldwide, IBS is also known as Pre-fabricated /Pre-fab Construction, Modern Method of Construction (MMC) and Off-site Construction. CIDB Malaysia, through CIDB IBS SDN BHD, promotes the use of IBS to increase productivity and quality at construction sites [ 2 ] [ 3 ] [ 4 ] through various promotion programmes, training and incentives. The content of IBS (IBS Score) is determined based on the Construction Industry Standard 18 (CIS 18: 2010), either manually, via a web application or with a fully automated CAD -based IBS Score calculator. One example of its use is the Forest City project. | https://en.wikipedia.org/wiki/Industrialised_building_system |
The UCSF Industry Documents Library (IDL) is a digital archive of internal tobacco, drug, food, chemical and fossil fuel corporate documents, acquired largely through litigation, which illustrate industry efforts to influence policies and regulations meant to protect public health . Created and maintained by the UCSF Library , the mission of the UCSF Industry Documents Library is to "identify, collect, curate, preserve, and make freely accessible internal documents created by industries and their partners which have an impact on public health, for the benefit and use of researchers, clinicians, educators, students, policymakers, media, and the general public at UCSF and internationally". [ 1 ]
The IDL includes the following archives: | https://en.wikipedia.org/wiki/Industry_Documents_Library |
The Industry Foundation Classes ( IFC ) is a CAD data exchange data schema intended for the description of architectural, building and construction industry data (ABCII). The IFC file format is based on the ISO 10303-21 standard, and the definitions of ABCII are documented using the underlying EXPRESS data modelling language. [ 1 ]
It is a platform-neutral, open data schema specification that is not controlled by a single vendor or group of vendors. It is an object-based data schema with a data model developed by buildingSMART (formerly the International Alliance for Interoperability, IAI) to facilitate interoperability in the architecture , engineering and construction (AEC) industry, and is a commonly used collaboration format in Building information modeling (BIM) based projects. The IFC model specification is open and available. [ 2 ] It is registered by ISO and is an official International Standard ISO 16739-1:2024.
Because of its focus on interoperability, the Danish government in 2010 made the use of IFC format(s) compulsory for publicly aided building projects. [ 3 ] In 2017 the Finnish state-owned facility management company Senate Properties started to demand the use of IFC-compatible software and BIM in all their projects. [ 4 ] The Norwegian government, health and defense client organisations also require the use of IFC BIM in all projects, and many municipalities, private clients, contractors and designers have integrated IFC BIM in their business. [ citation needed ] The popularity of the IFC data schema in construction has continued to grow, primarily for the purpose of exchanging geometry.
The IFC initiative began in 1994, when Autodesk formed an industry consortium to advise the company on the development of a set of C++ classes that could support integrated application development. Twelve US companies joined the consortium. These companies included AT&T, HOK Architects, Honeywell, Carrier, Tishman and Butler Manufacturing. [ 5 ] Initially named the Industry Alliance for Interoperability, the Alliance opened membership to all interested parties in September, 1995 and changed its name in 1997 to the International Alliance for Interoperability. The new Alliance was reconstituted as a non-profit industry-led organization, with the goal of publishing the Industry Foundation Class (IFC) as a neutral AEC product model responding to the AEC building lifecycle. A further name change occurred in 2005, and the IFC specification is now developed and maintained by buildingSMART .
The following IFC Specification versions are available [ 6 ]
IFC defines multiple file formats that may be used, supporting various encodings of the same underlying data. [ 8 ]
IFC-SPF is in ASCII format which, while human-readable, suffers from common ASCII file issues, in that file-sizes are bloated, files must be read sequentially from start to finish, mid-file extraction is not possible, files are slow to parse, and definitions are non-hierarchical. [ 9 ] In addition to ifcXML and ifcZIP, modern data formats include RDF/XML or Turtle (using the ifcOWL ontology), ifcJSON ( JavaScript Object Notation , broadly available) and ifcHDF5 ( Hierarchical Data Format v5, binary). [ 9 ] In 2020, buildingSmart had two JSON projects underway: ifcJSON v4 (a direct mapping from EXPRESS-based IFC v4) and ifcJSON v5, plus a research project experimenting with turning IFC into a binary format. [ 9 ]
IFC defines an EXPRESS based entity-relationship model consisting of several hundred entities organized into an object-based inheritance hierarchy. Examples of entities include building elements such as IfcWall, geometry such as IfcExtrudedAreaSolid, and basic constructs such as IfcCartesianPoint. [ 10 ]
At the most abstract level, IFC divides all entities into rooted and non-rooted entities. Rooted entities derive from IfcRoot and have a concept of identity (having a GUID ), along with attributes for name, description, and revision control. Non-rooted entities do not have identity and instances only exist if referenced from a rooted instance directly or indirectly. IfcRoot is subdivided into three abstract concepts: object definitions, relationships, and property sets:
IfcObjectDefinition is split into object occurrences and object types. IfcObject captures object occurrences such as a product installation having serial number and physical placement. IfcTypeObject captures type definitions (or templates) such as a product type having a particular model number and common shape. Occurrences and types are further subdivided into six fundamental concepts: actors ("who"), controls ("why"), groups ("what"), products ("where"), processes ("when"), and resources ("how").
IfcRelationship captures relationships among objects. There are five fundamental relationship types: composition, assignment, connectivity, association, and definition.
IfcPropertyDefinition captures dynamically extensible property sets. A property set contains one or more properties which may be a single value (e.g. string, number, unit measurement), a bounded value (having minimum and maximum), an enumeration, a list of values, a table of values, or a data structure. While IFC defines several hundred property sets for specific types, custom property sets may be defined by application vendors or end users.
IfcProduct is the base class for all physical objects and is subdivided into spatial elements, physical elements, structural analysis items, and other concepts. Products may have associated materials, shape representations, and placement in space. Spatial elements include IfcSite, IfcBuilding, IfcBuildingStorey, and IfcSpace. Physical building elements include IfcWall, IfcBeam, IfcDoor, IfcWindow, IfcStair, etc. Distribution elements ( HVAC , electrical , plumbing ) have a concept of ports where elements may have specific connections for various services, and connected together using cables, pipes, or ducts to form a system. Various connectivity relationships are used for building elements such as walls having openings filled by doors or windows.
Materials may be defined for products as a whole, or as layers, profiles, or constituents for specified parts.
Representations may be defined for explicit 3D shape, and optionally as parametric constraints. Each representation is identified by IfcShapeRepresentation with a well-known name.
Placement may indicate position, vertical angle, and horizontal angle.
Quantities may be defined for take-off purposes such as Gross Area, Gross Volume, Gross Weight, Net Weight, etc. IFC defines various quantities specific to each element type and the method of calculation according to geometry and relationships.
IfcProcess is the base class for processes and is subdivided into tasks, events, and procedures. Processes may have durations and be scheduled to occur at specific time periods. Processes may be sequenced such that a successor task may start after a predecessor task finishes, following the Critical Path Method . Processes may be nested into sub-processes for summary roll-up. Processes may be assigned to products indicating the output produced by the work performed.
IfcResource is the base class for resources and is subdivided into materials, labor, equipment, subcontracts, crews, and more. Resources may have various costs and calendars of availability. Resources may be nested into sub-resources for granular allocation. Resources may be assigned to processes indicating tasks performed on behalf of a resource.
IfcProject encapsulates an overall project and indicates the project name, description, default units, currency, coordinate system, and other contextual information. A valid IFC file must always include exactly one IfcProject instance, from which all other objects relate directly or indirectly. A project may include multiple buildings, multiple participants, and/or multiple phases according to the particular use.
In addition to project-specific information, an IfcProject may also reference external projects from which shared definitions may be imported such as product types. Each external project is encapsulated using IfcProjectLibrary [IFC2x4] along with IfcRelAssociatesLibrary and IfcLibraryInformation to identify the particular revision of the imported project library.
Projects support revision control where any IfcRoot-based entity has a unique identifier and may be marked as added, modified, deleted, or having no change. Such capability allows multiple IFC files to be merged deterministically, ensuring data integrity without human intervention. | https://en.wikipedia.org/wiki/Industry_Foundation_Classes |
An inelastic collision , in contrast to an elastic collision , is a collision in which kinetic energy is not conserved due to the action of internal friction .
In collisions of macroscopic bodies, some kinetic energy is turned into vibrational energy of the atoms , causing a heating effect, and the bodies are deformed.
The molecules of a gas or liquid rarely experience perfectly elastic collisions because kinetic energy is exchanged between the molecules' translational motion and their internal degrees of freedom with each collision. At any one instant, half the collisions are – to a varying extent – inelastic (the pair possesses less kinetic energy after the collision than before), and half could be described as “super-elastic” (possessing more kinetic energy after the collision than before). Averaged across an entire sample, molecular collisions are elastic. [ 1 ]
Although inelastic collisions do not conserve kinetic energy, they do obey conservation of momentum . [ 2 ] Simple ballistic pendulum problems obey the conservation of kinetic energy only when the block swings to its largest angle.
In nuclear physics , an inelastic collision is one in which the incoming particle causes the nucleus it strikes to become excited or to break up. Deep inelastic scattering is a method of probing the structure of subatomic particles in much the same way as Rutherford probed the inside of the atom (see Rutherford scattering ). Such experiments were performed on protons in the late 1960s using high-energy electrons at the Stanford Linear Accelerator (SLAC). As in Rutherford scattering, deep inelastic scattering of electrons by proton targets revealed that most of the incident electrons interact very little and pass straight through, with only a small number bouncing back. This indicates that the charge in the proton is concentrated in small lumps, reminiscent of Rutherford's discovery that the positive charge in an atom is concentrated at the nucleus. However, in the case of the proton, the evidence suggested three distinct concentrations of charge ( quarks ) and not one.
The formula for the velocities after a one-dimensional collision is: {\displaystyle {\begin{aligned}v_{a}&={\frac {C_{R}m_{b}(u_{b}-u_{a})+m_{a}u_{a}+m_{b}u_{b}}{m_{a}+m_{b}}}\\v_{b}&={\frac {C_{R}m_{a}(u_{a}-u_{b})+m_{a}u_{a}+m_{b}u_{b}}{m_{a}+m_{b}}}\end{aligned}}}
where
In a center of momentum frame the formulas reduce to:
{\displaystyle {\begin{aligned}v_{a}&=-C_{R}u_{a}\\v_{b}&=-C_{R}u_{b}\end{aligned}}}
For two- and three-dimensional collisions the velocities in these formulas are the components perpendicular to the tangent line/plane at the point of contact.
If assuming the objects are not rotating before or after the collision, the normal impulse is:
{\displaystyle J_{n}={\frac {m_{a}m_{b}}{m_{a}+m_{b}}}(1+C_{R})({\vec {u_{b}}}-{\vec {u_{a}}})\cdot {\vec {n}}}
where {\displaystyle {\vec {n}}} is the normal vector.
Assuming no friction, this gives the velocity updates:
{\displaystyle {\begin{aligned}\Delta {\vec {v_{a}}}&={\frac {J_{n}}{m_{a}}}{\vec {n}}\\\Delta {\vec {v_{b}}}&=-{\frac {J_{n}}{m_{b}}}{\vec {n}}\end{aligned}}}
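As a quick numerical illustration of the one-dimensional formulas above, the following Python sketch computes the post-collision velocities from the masses, initial velocities and coefficient of restitution; the numbers in the example call are illustrative only and are not taken from the article.

```python
def collision_1d(m_a, m_b, u_a, u_b, c_r):
    """Post-collision velocities for a 1-D collision with coefficient of
    restitution c_r (c_r = 1: elastic, c_r = 0: perfectly inelastic)."""
    v_a = (c_r * m_b * (u_b - u_a) + m_a * u_a + m_b * u_b) / (m_a + m_b)
    v_b = (c_r * m_a * (u_a - u_b) + m_a * u_a + m_b * u_b) / (m_a + m_b)
    return v_a, v_b

# Illustrative numbers: a 2 kg body at 3 m/s hits a 1 kg body at rest, c_r = 0.5.
v_a, v_b = collision_1d(2.0, 1.0, 3.0, 0.0, 0.5)
# Momentum is conserved regardless of c_r:
assert abs(2.0 * 3.0 + 1.0 * 0.0 - (2.0 * v_a + 1.0 * v_b)) < 1e-12
print(v_a, v_b)  # 1.5 and 3.0 for these inputs
```

Setting c_r = 1 reproduces an elastic collision, while c_r = 0 gives the perfectly inelastic case discussed below.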
A perfectly inelastic collision occurs when the maximum amount of kinetic energy
of a system is lost. In a perfectly inelastic collision, i.e., a zero coefficient of restitution , the colliding particles stick together. In such a collision, kinetic energy is lost by bonding the two bodies together. This bonding energy usually results in a maximum kinetic energy loss of the system. It is necessary to consider conservation of momentum: (Note: In the sliding block example above, momentum of the two body system is only conserved if the surface has zero friction. With friction, momentum of the two bodies is transferred to the surface that the two bodies are sliding upon. Similarly, if there is air resistance, the momentum of the bodies can be transferred to the air.) The equation below holds true for the two-body (Body A, Body B) system collision in the example above. In this example, momentum of the system is conserved because there is no friction between the sliding bodies and the surface. {\displaystyle m_{a}u_{a}+m_{b}u_{b}=\left(m_{a}+m_{b}\right)v} where v is the final velocity, which is hence given by {\displaystyle v={\frac {m_{a}u_{a}+m_{b}u_{b}}{m_{a}+m_{b}}}} The reduction of total kinetic energy is equal to the total kinetic energy before the collision in a center of momentum frame with respect to the system of two particles, because in such a frame the kinetic energy after the collision is zero. In this frame most of the kinetic energy before the collision is that of the particle with the smaller mass. In another frame, in addition to the reduction of kinetic energy there may be a transfer of kinetic energy from one particle to the other; the fact that this depends on the frame shows how relative this is. The change in kinetic energy is hence: {\displaystyle \Delta KE={1 \over 2}\mu u_{\rm {rel}}^{2}={\frac {1}{2}}{\frac {m_{a}m_{b}}{m_{a}+m_{b}}}|u_{a}-u_{b}|^{2}}
where μ is the reduced mass and u rel is the relative velocity of the bodies before collision. With time reversed we have the situation of two objects pushed away from each other, e.g. shooting a projectile , or a rocket applying thrust (compare the derivation of the Tsiolkovsky rocket equation ).
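A minimal Python check of the perfectly inelastic formulas above; the masses and velocities are arbitrary illustrative values, and the assertion verifies that the reduced-mass expression for the kinetic-energy loss matches a direct energy balance.

```python
def perfectly_inelastic(m_a, m_b, u_a, u_b):
    """Common final velocity and kinetic-energy loss when two bodies stick together."""
    v = (m_a * u_a + m_b * u_b) / (m_a + m_b)
    mu = m_a * m_b / (m_a + m_b)          # reduced mass
    dke = 0.5 * mu * (u_a - u_b) ** 2     # kinetic energy converted to heat, deformation, etc.
    return v, dke

# Illustrative check against a direct kinetic-energy balance.
m_a, m_b, u_a, u_b = 2.0, 1.0, 3.0, 0.0
v, dke = perfectly_inelastic(m_a, m_b, u_a, u_b)
ke_before = 0.5 * m_a * u_a ** 2 + 0.5 * m_b * u_b ** 2
ke_after = 0.5 * (m_a + m_b) * v ** 2
assert abs((ke_before - ke_after) - dke) < 1e-12
```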
Partially inelastic collisions are the most common form of collisions in the real world. In this type of collision, the objects involved in the collisions do not stick, but some kinetic energy is still lost. Friction, sound and heat are some ways the kinetic energy can be lost through partial inelastic collisions. | https://en.wikipedia.org/wiki/Inelastic_collision |
The inelastic mean free path ( IMFP ) is an index of how far an electron on average travels through a solid before losing energy.
If a monochromatic , primary beam of electrons is incident on a solid surface, the majority of incident electrons lose their energy because they interact strongly with matter , leading to plasmon excitation, electron-hole pair formation, and vibrational excitation. [ 2 ] The intensity of the primary electrons, I 0 , is damped as a function of the distance, d , into the solid. The intensity decay can be expressed as follows:
where I ( d ) is the intensity after the primary electron beam has traveled through the solid to a distance d . The parameter λ( E ) , termed the inelastic mean free path (IMFP), is defined as the distance an electron beam can travel before its intensity decays to 1/ e of its initial value. (Note that this equation is closely related to the Beer–Lambert law .)
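A minimal Python sketch of the attenuation law above; the IMFP value of 2 nm is an arbitrary illustrative number, not a tabulated value for any particular material.

```python
import math

def transmitted_fraction(d_nm: float, imfp_nm: float) -> float:
    """Fraction I(d)/I0 of primary electrons that have not scattered
    inelastically after travelling a distance d through the solid."""
    return math.exp(-d_nm / imfp_nm)

# With an assumed IMFP of 2 nm, the intensity drops to 1/e (~37%) after 2 nm
# and to roughly 8% after 5 nm.
for d in (1.0, 2.0, 5.0):
    print(d, transmitted_fraction(d, imfp_nm=2.0))
```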
The inelastic mean free path of electrons can roughly be described by a universal curve that is the same for all materials. [ 1 ] [ 3 ]
The knowledge of the IMFP is indispensable for several electron spectroscopy and microscopy measurements. [ 4 ]
Following, [ 5 ] the IMFP is employed to calculate the effective attenuation length (EAL), the mean escape depth (MED) and the information depth (ID). Besides, one can utilize the IMFP to make matrix corrections for the relative sensitivity factor in quantitative surface analysis. Moreover, the IMFP is an important parameter in Monte Carlo simulations of photoelectron transport in matter.
Calculations of the IMFP are mostly based on the algorithm (full Penn algorithm, FPA) developed by Penn, [ 6 ] experimental optical constants or calculated optical data (for compounds). [ 5 ] The FPA considers an inelastic scattering event and the dependence of the energy-loss function (ELF) on momentum transfer, which describes the probability for inelastic scattering as a function of momentum transfer. [ 5 ]
To measure the IMFP, one well known method is elastic-peak electron spectroscopy (EPES). [ 5 ] [ 7 ] This method measures the intensity of elastically backscattered electrons with a certain energy from a sample material in a certain direction. Applying a similar technique to materials whose IMFP is known, the measurements are compared with the results from the Monte Carlo simulations under the same conditions. Thus, one obtains the IMFP of a certain material in a certain energy spectrum. EPES measurements show a root-mean-square (RMS) difference between 12% and 17% from the theoretical expected values. [ 5 ] Calculated and experimental results show higher agreement for higher energies. [ 5 ]
For electron energies in the range 30 keV – 1 MeV, IMFP can be directly measured by electron energy loss spectroscopy inside a transmission electron microscope , provided the sample thickness is known. Such measurements reveal that IMFP in elemental solids is not a smooth, but an oscillatory function of the atomic number . [ 8 ]
For energies below 100 eV, IMFP can be evaluated in high-energy secondary electron yield (SEY) experiments. [ 9 ] Therefore, the SEY for an arbitrary incident energy between 0.1 keV-10 keV is analyzed. According to these experiments, a Monte Carlo model can be used to simulate the SEYs and determine the IMFP below 100 eV.
Using the dielectric formalism, [ 4 ] the inverse IMFP {\displaystyle \lambda ^{-1}} can be calculated by solving the following integral:
with the minimum (maximum) energy loss {\displaystyle \omega _{\mathrm {min} }} ( {\displaystyle \omega _{\mathrm {max} }} ), the dielectric function {\displaystyle \epsilon } , the energy loss function (ELF) {\displaystyle \mathrm {Im} ({\frac {-1}{\epsilon (k,\omega )}})} and the smallest and largest momentum transfer {\displaystyle k_{\pm }={\sqrt {2E}}\pm {\sqrt {2(E-\omega )}}} . In general, solving this integral is quite challenging and only applies for energies above 100 eV. Thus, (semi)empirical formulas were introduced to determine the IMFP.
A first approach is to calculate the IMFP by an approximate form of the relativistic Bethe equation for inelastic scattering of electrons in matter. [ 5 ] [ 10 ] Equation 2 holds for energies between 50 eV and 200 keV:
with
and
and the electron energy {\displaystyle E} in eV above the Fermi level (conductors) or above the bottom of the conduction band (non-conductors). {\displaystyle m_{e}} is the electron mass, {\displaystyle c} the vacuum velocity of light, {\displaystyle N_{\nu }} is the number of valence electrons per atom or molecule, {\displaystyle \rho } describes the density (in {\displaystyle \mathrm {\frac {g}{cm^{3}}} } ), {\displaystyle M} is the atomic or molecular weight and {\displaystyle \beta } , {\displaystyle \gamma } , {\displaystyle C} and {\displaystyle D} are parameters determined in the following. Equation 2 calculates the IMFP and its dependence on the electron energy in condensed matter.
Equation 2 was further developed [ 5 ] [ 11 ] to find the relations for the parameters {\displaystyle \beta } , {\displaystyle \gamma } , {\displaystyle C} and {\displaystyle D} for energies between 50 eV and 2 keV:
Here, the bandgap energy {\displaystyle E_{g}} is given in eV. Equations 2 and 3 are also known as the TPP-2M equations and are in general applicable for energies between 50 eV and 200 keV. Apart from a few materials (diamond, graphite, Cs, cubic-BN and hexagonal BN) that do not follow these equations (due to deviations in {\displaystyle \beta } ), the TPP-2M equations show precise agreement with the measurements.
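The display for Equation 2 is not reproduced above, so the sketch below assumes the TPP-2M functional form as it is usually quoted in the literature, λ = E / (E_p²[β ln(γE) − C/E + D/E²]), with λ conventionally in ångströms when the parameters are given in their usual units; both this form and the function interface are assumptions, and the parameters β, γ, C and D must be supplied from the Equation 3 relations for the material of interest.

```python
import math

def imfp_tpp2m(E_eV, Ep_eV, beta, gamma, C, D):
    """IMFP from the assumed TPP-2M functional form
    lambda = E / (Ep^2 * (beta*ln(gamma*E) - C/E + D/E^2)).

    E_eV  : electron energy above the Fermi level, in eV
    Ep_eV : free-electron plasmon energy of the material, in eV
    beta, gamma, C, D : material-dependent parameters from Equation 3
    Returns lambda in the length unit implied by the parameters
    (conventionally angstroms).
    """
    return E_eV / (Ep_eV ** 2 * (beta * math.log(gamma * E_eV) - C / E_eV + D / E_eV ** 2))
```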
Another approach based on Equation 2 to determine the IMFP is the S1 formula. [ 5 ] [ 12 ] This formula can be applied for energies between 100 eV and 10 keV:
with the atomic number {\displaystyle Z} (average atomic number for a compound), {\displaystyle W=0.06H} or {\displaystyle W=0.02E_{g}} ( {\displaystyle H} is the heat of formation of a compound in eV per atom) and the average atomic spacing {\displaystyle a} :
with the Avogadro constant {\displaystyle N_{A}} and the stoichiometric coefficients {\displaystyle g} and {\displaystyle h} describing binary compounds {\displaystyle G_{g}H_{h}} . In this case, the atomic number becomes
with the atomic numbers {\displaystyle Z_{g}} and {\displaystyle Z_{h}} of the two constituents. This S1 formula shows higher agreement with measurements compared to Equation 2 . [ 5 ]
Calculating the IMFP with either the TPP-2M formula or the S1 formula requires knowledge of different parameters. [ 5 ] Applying the TPP-2M formula, one needs to know {\displaystyle M} , {\displaystyle \rho } and {\displaystyle N_{\nu }} for conducting materials (and also {\displaystyle E_{g}} for non-conductors). Employing the S1 formula, knowledge of the atomic number {\displaystyle Z} (average atomic number for compounds), {\displaystyle M} and {\displaystyle \rho } is required for conductors. If non-conducting materials are considered, one also needs to know either {\displaystyle E_{g}} or {\displaystyle H} .
An analytical formula for calculating the IMFP down to 50 eV was proposed in 2021. [ 4 ] To this end, an exponential term was added to an analytical formula, already derived from Equation 1 , that was applicable for energies down to 500 eV:
For relativistic electrons it holds:
with the electron velocity {\displaystyle v} , {\displaystyle v^{2}=c^{2}\tau (\tau +2)/(\tau +1)^{2}} and {\displaystyle \tau =E/c^{2}} . {\displaystyle c} denotes the velocity of light. {\displaystyle \lambda } and {\displaystyle a_{0}} are given in nanometers. The constants in Equations 4 and 5 are defined as follows:
IMFP data can be collected from the National Institute of Standards and Technology (NIST) Electron Inelastic-Mean-Free-Path Database [ 13 ] or the NIST Database for the Simulation of Electron Spectra for Surface Analysis (SESSA). [ 14 ] The data contains IMFPs determined by EPES for energies below 2 keV. Otherwise, IMFPs can be determined from the TPP-2M or the S1 formula. [ 5 ] | https://en.wikipedia.org/wiki/Inelastic_mean_free_path |
In chemistry , nuclear physics , and particle physics , inelastic scattering is a process in which the internal states of a particle or a system of particles change after a collision. Often, this means the kinetic energy of the incident particle is not conserved (in contrast to elastic scattering ). Additionally, relativistic collisions which involve a transition from one type of particle to another are referred to as inelastic even if the outgoing particles have the same kinetic energy as the incoming ones. [ 1 ] Processes which are governed by elastic collisions at a microscopic level will appear to be inelastic if a macroscopic observer only has access to a subset of the degrees of freedom. In Compton scattering for instance, the two particles in the collision transfer energy causing a loss of energy in the measured particle. [ 2 ]
When an electron is the incident particle, the probability of inelastic scattering, depending on the energy of the incident electron, is usually smaller than that of elastic scattering. Thus in the case of gas electron diffraction (GED), reflection high-energy electron diffraction (RHEED), and transmission electron diffraction, because the energy of the incident electron is high, the contribution of inelastic electron scattering can be ignored. Deep inelastic scattering of electrons from protons provided the first direct evidence for the existence of quarks .
When a photon is the incident particle, there is an inelastic scattering process called Raman scattering . In this scattering process, the incident photon interacts with matter (gas, liquid, and solid) and the frequency of the photon is shifted towards red or blue. A red shift can be observed when part of the energy of the photon is transferred to the interacting matter, where it adds to its internal energy in a process called Stokes Raman scattering. The blue shift can be observed when internal energy of the matter is transferred to the photon; this process is called anti-Stokes Raman scattering.
Inelastic scattering is seen in the interaction between an electron and a photon. When a high-energy photon collides with a free electron (more precisely, weakly bound since a free electron cannot participate in inelastic scattering with a photon) and transfers energy, the process is called Compton scattering. Furthermore, when an electron with relativistic energy collides with an infrared or visible photon, the electron gives energy to the photon. This process is called inverse Compton scattering .
Neutrons undergo many types of scattering, including both elastic and inelastic scattering. Whether elastic or inelastic scatter occurs is dependent on the speed of the neutron, whether fast or thermal , or somewhere in between. It is also dependent on the nucleus it strikes and its neutron cross section . In inelastic scattering, the neutron interacts with the nucleus and the kinetic energy of the system is changed. This often activates the nucleus, putting it into an excited, unstable, short-lived energy state which causes it to quickly emit some kind of radiation to bring it back down to a stable or ground state. Alpha, beta, gamma, and protons may be emitted. Particles scattered in this type of nuclear reaction may cause the nucleus to recoil in the other direction.
Inelastic scattering is common in molecular collisions. Any collision which leads to a chemical reaction will be inelastic, but the term inelastic scattering is reserved for those collisions which do not result in reactions. [ 3 ] There is a transfer of energy between the translational mode (kinetic energy) and rotational and vibrational modes.
If the transferred energy is small compared to the incident energy of the scattered particle, one speaks of quasielastic scattering . | https://en.wikipedia.org/wiki/Inelastic_scattering |
Inequalities are very important in the study of information theory . There are a number of different contexts in which these inequalities appear.
Consider a tuple {\displaystyle X_{1},X_{2},\dots ,X_{n}} of {\displaystyle n} finitely (or at most countably) supported random variables on the same probability space . There are {\displaystyle 2^{n}} subsets, for which ( joint ) entropies can be computed. For example, when n = 2, we may consider the entropies {\displaystyle H(X_{1}),} {\displaystyle H(X_{2}),} and {\displaystyle H(X_{1},X_{2})} . They satisfy the following inequalities (which together characterize the range of the marginal and joint entropies of two random variables):
In fact, these can all be expressed as special cases of a single inequality involving the conditional mutual information , namely
where A {\displaystyle A} , B {\displaystyle B} , and C {\displaystyle C} each denote the joint distribution of some arbitrary (possibly empty) subset of our collection of random variables. Inequalities that can be derived as linear combinations of this are known as Shannon-type inequalities.
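The basic Shannon-type inequality above can be checked numerically. The following Python sketch draws a random joint distribution of three binary variables and verifies that the conditional mutual information I(X1; X2 | X3), written in terms of joint entropies, is non-negative; the distribution is illustrative only.

```python
import itertools
import math
import random

def entropy(p):
    """Shannon entropy (in bits) of a distribution given as {outcome: probability}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def marginal(joint, keep):
    """Marginalize a joint distribution over (x1, x2, x3) onto the given indices."""
    out = {}
    for outcome, q in joint.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + q
    return out

# A random joint distribution of three binary variables (illustrative only).
random.seed(0)
weights = {o: random.random() for o in itertools.product((0, 1), repeat=3)}
total = sum(weights.values())
joint = {o: w / total for o, w in weights.items()}

# I(X1; X2 | X3) = H(X1,X3) + H(X2,X3) - H(X1,X2,X3) - H(X3) >= 0
i_cond = (entropy(marginal(joint, (0, 2))) + entropy(marginal(joint, (1, 2)))
          - entropy(joint) - entropy(marginal(joint, (2,))))
assert i_cond >= -1e-12   # the Shannon-type inequality holds
print(i_cond)
```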
For larger n {\displaystyle n} there are further restrictions on possible values of entropy.
To make this precise, a vector {\displaystyle h} in {\displaystyle \mathbb {R} ^{2^{n}}} indexed by subsets of {\displaystyle \{1,\dots ,n\}} is said to be entropic if there is a joint, discrete distribution of n random variables {\displaystyle X_{1},\dots ,X_{n}} such that {\displaystyle h_{I}=H(X_{i}\colon i\in I)} is their joint entropy , for each subset {\displaystyle I} .
The set of entropic vectors is denoted {\displaystyle \Gamma _{n}^{*}} , following the notation of Yeung. [ 1 ] It is neither closed nor convex for {\displaystyle n\geq 3} , but its topological closure {\displaystyle {\overline {\Gamma _{n}^{*}}}} is known to be convex and hence it can be characterized by the (infinitely many) linear inequalities satisfied by all entropic vectors, called entropic inequalities .
The set of all vectors that satisfy Shannon-type inequalities (but not necessarily other entropic inequalities) contains Γ n ∗ ¯ {\displaystyle {\overline {\Gamma _{n}^{*}}}} .
This containment is strict for n ≥ 4 {\displaystyle n\geq 4} and further inequalities are known as non-Shannon type inequalities.
Zhang and Yeung reported the first non-Shannon-type inequality, [ 2 ] often referred to as the Zhang-Yeung inequality.
Matus [ 3 ] proved that no finite set of inequalities can characterize (by linear combinations) all entropic inequalities. In other words, the region Γ n ∗ ¯ {\displaystyle {\overline {\Gamma _{n}^{*}}}} is not a polytope .
A great many important inequalities in information theory are actually lower bounds for the Kullback–Leibler divergence . Even the Shannon-type inequalities can be considered part of this category, since the interaction information can be expressed as the Kullback–Leibler divergence of the joint distribution with respect to the product of the marginals, and thus these inequalities can be seen as a special case of Gibbs' inequality .
On the other hand, it seems to be much more difficult to derive useful upper bounds for the Kullback–Leibler divergence. This is because the Kullback–Leibler divergence D KL ( P || Q ) depends very sensitively on events that are very rare in the reference distribution Q . D KL ( P || Q ) increases without bound as an event of finite non-zero probability in the distribution P becomes exceedingly rare in the reference distribution Q , and in fact D KL ( P || Q ) is not even defined if an event of non-zero probability in P has zero probability in Q . (Hence the requirement that P be absolutely continuous with respect to Q .)
This fundamental inequality states that the Kullback–Leibler divergence is non-negative.
Another inequality concerning the Kullback–Leibler divergence is known as Kullback's inequality . [ 4 ] If P and Q are probability distributions on the real line with P absolutely continuous with respect to Q, and whose first moments exist, then
where Ψ Q ∗ {\displaystyle \Psi _{Q}^{*}} is the large deviations rate function , i.e. the convex conjugate of the cumulant -generating function, of Q , and μ 1 ′ ( P ) {\displaystyle \mu '_{1}(P)} is the first moment of P .
The Cramér–Rao bound is a corollary of this result.
Pinsker's inequality relates Kullback–Leibler divergence and total variation distance . It states that if P , Q are two probability distributions , then
where
is the Kullback–Leibler divergence in nats and
is the total variation distance.
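Since the display of the inequality is not reproduced above, the sketch below uses the standard statement of Pinsker's inequality, δ(P, Q) ≤ √(D_KL(P‖Q)/2) with the divergence in nats, and checks it numerically for two small illustrative distributions.

```python
import math

def kl_divergence_nats(p, q):
    """D_KL(P || Q) in nats for discrete distributions given as lists of probabilities."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def total_variation(p, q):
    """Total variation distance, i.e. half the L1 distance for discrete distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Illustrative distributions on three outcomes.
P = [0.5, 0.3, 0.2]
Q = [0.4, 0.4, 0.2]

# Pinsker's inequality: delta(P, Q) <= sqrt(D_KL(P || Q) / 2)
assert total_variation(P, Q) <= math.sqrt(kl_divergence_nats(P, Q) / 2) + 1e-12
```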
In 1957, [ 5 ] Hirschman showed that for a (reasonably well-behaved) function {\displaystyle f:\mathbb {R} \rightarrow \mathbb {C} } such that {\displaystyle \int _{-\infty }^{\infty }|f(x)|^{2}\,dx=1,} and its Fourier transform {\displaystyle g(y)=\int _{-\infty }^{\infty }f(x)e^{-2\pi ixy}\,dx,} the sum of the differential entropies of {\displaystyle |f|^{2}} and {\displaystyle |g|^{2}} is non-negative, i.e.
Hirschman conjectured, and it was later proved, [ 6 ] that a sharper bound of log ( e / 2 ) , {\displaystyle \log(e/2),} which is attained in the case of a Gaussian distribution , could replace the right-hand side of this inequality. This is especially significant since it implies, and is stronger than, Weyl's formulation of Heisenberg's uncertainty principle .
Given discrete random variables X {\displaystyle X} , Y {\displaystyle Y} , and Y ′ {\displaystyle Y'} , such that X {\displaystyle X} takes values only in the interval [−1, 1] and Y ′ {\displaystyle Y'} is determined by Y {\displaystyle Y} (such that H ( Y ′ | Y ) = 0 {\displaystyle H(Y'|Y)=0} ), we have [ 7 ] [ 8 ]
relating the conditional expectation to the conditional mutual information . This is a simple consequence of Pinsker's inequality . (Note: the correction factor log 2 inside the radical arises because we are measuring the conditional mutual information in bits rather than nats .)
Several machine based proof checker algorithms are now available. Proof checker algorithms typically verify the inequalities as either true or false. More advanced proof checker algorithms can produce proof or counterexamples. [ 9 ] ITIP is a Matlab based proof checker for all Shannon type Inequalities. Xitip is an open source, faster version of the same algorithm implemented in C with a graphical front end. Xitip also has a built in language parsing feature which support a broader range of random variable descriptions as input. AITIP and oXitip are cloud based implementations for validating the Shannon type inequalities. oXitip uses GLPK optimizer and has a C++ backend based on Xitip with a web based user interface. AITIP uses Gurobi solver for optimization and a mix of python and C++ in the backend implementation. It can also provide the canonical break down of the inequalities in terms of basic Information measures. [ 9 ] Quantum information-theoretic inequalities can be checked by the contraction map proof method. [ 10 ] | https://en.wikipedia.org/wiki/Inequalities_in_information_theory |
In mathematics , an inequality is a relation which makes a non-equal comparison between two numbers or other mathematical expressions. [ 1 ] It is used most often to compare two numbers on the number line by their size. The main types of inequality are less than and greater than (denoted by < and > , respectively the less-than and greater-than signs).
There are several different notations used to represent different kinds of inequalities:
In either case, a is not equal to b . These relations are known as strict inequalities , [ 1 ] meaning that a is strictly less than or strictly greater than b . Equality is excluded.
In contrast to strict inequalities, there are two types of inequality relations that are not strict:
In the 17th and 18th centuries, personal notations or typewriting signs were used to signal inequalities. [ 2 ] For example, In 1670, John Wallis used a single horizontal bar above rather than below the < and >.
Later in 1734, ≦ and ≧, known as "less than (greater-than) over equal to" or "less than (greater than) or equal to with double horizontal bars", first appeared in Pierre Bouguer 's work . [ 3 ] After that, mathematicians simplified Bouguer's symbol to "less than (greater than) or equal to with one horizontal bar" (≤), or "less than (greater than) or slanted equal to" (⩽).
The relation not greater than can also be represented by a ≯ b , {\displaystyle a\ngtr b,} the symbol for "greater than" bisected by a slash, "not". The same is true for not less than , a ≮ b . {\displaystyle a\nless b.}
The notation a ≠ b means that a is not equal to b ; this inequation sometimes is considered a form of strict inequality. [ 4 ] It does not say that one is greater than the other; it does not even require a and b to be member of an ordered set .
In engineering sciences, less formal use of the notation is to state that one quantity is "much greater" than another, [ 5 ] normally by several orders of magnitude .
This implies that the lesser value can be neglected with little effect on the accuracy of an approximation (such as the case of ultrarelativistic limit in physics).
In all of the cases above, any two symbols mirroring each other are symmetrical; a < b and b > a are equivalent, etc.
Inequalities are governed by the following properties . All of these properties also hold if all of the non-strict inequalities (≤ and ≥) are replaced by their corresponding strict inequalities (< and >) and — in the case of applying a function — monotonic functions are limited to strictly monotonic functions .
The relations ≤ and ≥ are each other's converse , meaning that for any real numbers a and b :
The transitive property of inequality states that for any real numbers a , b , c : [ 8 ]
If either of the premises is a strict inequality, then the conclusion is a strict inequality:
A common constant c may be added to or subtracted from both sides of an inequality. [ 4 ] So, for any real numbers a , b , c :
In other words, the inequality relation is preserved under addition (or subtraction) and the real numbers are an ordered group under addition.
The properties that deal with multiplication and division state that for any real numbers, a , b and non-zero c :
In other words, the inequality relation is preserved under multiplication and division with positive constant, but is reversed when a negative constant is involved. More generally, this applies for an ordered field . For more information, see § Ordered fields .
The property for the additive inverse states that for any real numbers a and b :
If both numbers are positive, then the inequality relation between the multiplicative inverses is opposite of that between the original numbers. More specifically, for any non-zero real numbers a and b that are both positive (or both negative ):
All of the cases for the signs of a and b can also be written in chained notation , as follows:
Any monotonically increasing function , by its definition, [ 9 ] may be applied to both sides of an inequality without breaking the inequality relation (provided that both expressions are in the domain of that function). However, applying a monotonically decreasing function to both sides of an inequality means the inequality relation would be reversed. The rules for the additive inverse, and the multiplicative inverse for positive numbers, are both examples of applying a monotonically decreasing function.
If the inequality is strict ( a < b , a > b ) and the function is strictly monotonic, then the inequality remains strict. If only one of these conditions is strict, then the resultant inequality is non-strict. In fact, the rules for additive and multiplicative inverses are both examples of applying a strictly monotonically decreasing function.
A few examples of this rule are:
A (non-strict) partial order is a binary relation ≤ over a set P which is reflexive , antisymmetric , and transitive . [ 10 ] That is, for all a , b , and c in P , it must satisfy the three following clauses:
A set with a partial order is called a partially ordered set . [ 11 ] Those are the very basic axioms that every kind of order has to satisfy.
A strict partial order is a relation < that satisfies
where ≮ means that < does not hold.
Some types of partial orders are specified by adding further axioms, such as:
If ( F , +, ×) is a field and ≤ is a total order on F , then ( F , +, ×, ≤) is called an ordered field if and only if:
Both ( Q , + , × , ≤ ) {\displaystyle (\mathbb {Q} ,+,\times ,\leq )} and ( R , + , × , ≤ ) {\displaystyle (\mathbb {R} ,+,\times ,\leq )} are ordered fields , but ≤ cannot be defined in order to make ( C , + , × , ≤ ) {\displaystyle (\mathbb {C} ,+,\times ,\leq )} an ordered field , [ 12 ] because −1 is the square of i and would therefore be positive.
Besides being an ordered field, R also has the Least-upper-bound property . In fact, R can be defined as the only ordered field with that quality. [ 13 ]
The notation a < b < c stands for " a < b and b < c ", from which, by the transitivity property above, it also follows that a < c . By the above laws, one can add or subtract the same number to all three terms, or multiply or divide all three terms by same nonzero number and reverse all inequalities if that number is negative. Hence, for example, a < b + e < c is equivalent to a − e < b < c − e .
This notation can be generalized to any number of terms: for instance, a 1 ≤ a 2 ≤ ... ≤ a n means that a i ≤ a i +1 for i = 1, 2, ..., n − 1. By transitivity, this condition is equivalent to a i ≤ a j for any 1 ≤ i ≤ j ≤ n .
When solving inequalities using chained notation, it is possible and sometimes necessary to evaluate the terms independently. For instance, to solve the inequality 4 x < 2 x + 1 ≤ 3 x + 2, it is not possible to isolate x in any one part of the inequality through addition or subtraction. Instead, the inequalities must be solved independently, yielding x < 1 / 2 and x ≥ −1 respectively, which can be combined into the final solution −1 ≤ x < 1 / 2 .
Occasionally, chained notation is used with inequalities in different directions, in which case the meaning is the logical conjunction of the inequalities between adjacent terms. For example, the defining condition of a zigzag poset is written as a 1 < a 2 > a 3 < a 4 > a 5 < a 6 > ... . Mixed chained notation is used more often with compatible relations, like <, =, ≤. For instance, a < b = c ≤ d means that a < b , b = c , and c ≤ d . This notation exists in a few programming languages such as Python . In contrast, in programming languages that provide an ordering on the type of comparison results, such as C , even homogeneous chains may have a completely different meaning. [ 14 ]
An inequality is said to be sharp if it cannot be relaxed and still be valid in general. Formally, a universally quantified inequality φ is called sharp if, for every valid universally quantified inequality ψ , if ψ ⇒ φ holds, then ψ ⇔ φ also holds. For instance, the inequality ∀ a ∈ R . a 2 ≥ 0 is sharp, whereas the inequality ∀ a ∈ R . a 2 ≥ −1 is not sharp. [ citation needed ]
There are many inequalities between means. For example, for any positive numbers a 1 , a 2 , ..., a n we have
where they represent the following means of the sequence:
The Cauchy–Schwarz inequality states that for all vectors u and v of an inner product space it is true that {\displaystyle |\langle \mathbf {u} ,\mathbf {v} \rangle |^{2}\leq \langle \mathbf {u} ,\mathbf {u} \rangle \cdot \langle \mathbf {v} ,\mathbf {v} \rangle ,} where {\displaystyle \langle \cdot ,\cdot \rangle } is the inner product . Examples of inner products include the real and complex dot product . In Euclidean space R^n with the standard inner product, the Cauchy–Schwarz inequality is {\displaystyle {\biggl (}\sum _{i=1}^{n}u_{i}v_{i}{\biggr )}^{2}\leq {\biggl (}\sum _{i=1}^{n}u_{i}^{2}{\biggr )}{\biggl (}\sum _{i=1}^{n}v_{i}^{2}{\biggr )}.}
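A quick numeric sanity check of the Euclidean form of the Cauchy–Schwarz inequality with random vectors (illustrative only):

```python
import random

random.seed(1)
u = [random.uniform(-1, 1) for _ in range(5)]
v = [random.uniform(-1, 1) for _ in range(5)]

def dot(a, b):
    """Standard dot product of two real vectors."""
    return sum(x * y for x, y in zip(a, b))

# |<u, v>|^2 <= <u, u> * <v, v>
assert dot(u, v) ** 2 <= dot(u, u) * dot(v, v) + 1e-12
```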
A power inequality is an inequality containing terms of the form a b , where a and b are real positive numbers or variable expressions. They often appear in mathematical olympiads exercises.
Examples:
Mathematicians often use inequalities to bound quantities for which exact formulas cannot be computed easily. Some inequalities are used so often that they have names:
The set of complex numbers C {\displaystyle \mathbb {C} } with its operations of addition and multiplication is a field , but it is impossible to define any relation ≤ so that ( C , + , × , ≤ ) {\displaystyle (\mathbb {C} ,+,\times ,\leq )} becomes an ordered field . To make ( C , + , × , ≤ ) {\displaystyle (\mathbb {C} ,+,\times ,\leq )} an ordered field , it would have to satisfy the following two properties:
Because ≤ is a total order , for any number a , either 0 ≤ a or a ≤ 0 (in which case the first property above implies that 0 ≤ − a ). In either case 0 ≤ a² ; this means that i² > 0 and 1² > 0 ; so −1 > 0 and 1 > 0 , which means (−1 + 1) > 0; contradiction.
However, an operation ≤ can be defined so as to satisfy only the first property (namely, "if a ≤ b , then a + c ≤ b + c "). Sometimes the lexicographical order definition is used:
It can easily be proven that for this definition a ≤ b implies a + c ≤ b + c .
Systems of linear inequalities can be simplified by Fourier–Motzkin elimination . [ 17 ]
The cylindrical algebraic decomposition is an algorithm that allows testing whether a system of polynomial equations and inequalities has solutions, and, if solutions exist, describing them. The complexity of this algorithm is doubly exponential in the number of variables. It is an active research domain to design algorithms that are more efficient in specific cases. | https://en.wikipedia.org/wiki/Inequality_(mathematics) |
In mathematics , an inequation is a statement that either an inequality (relations "greater than" and "less than", < and >) or a relation " not equal to " (≠) holds between two values. [ 1 ] [ 2 ] It is usually written in the form of a pair of expressions denoting the values in question, with a relational sign between the two sides , indicating the specific inequality relation. Some examples of inequations are:
In some cases, the term "inequation" has a more restricted definition, reserved only for statements whose inequality relation is "not equal to" (or "distinct"). [ 2 ] [ 3 ]
A shorthand notation is used for the conjunction of several inequations involving common expressions, by chaining them together. For example, the chain
is shorthand for
which also implies that 0 < b {\displaystyle 0<b} and a < 1 {\displaystyle a<1} .
In rare cases, chains without such implications about distant terms are used.
For example {\displaystyle i\neq 0\neq j} is shorthand for {\displaystyle i\neq 0~~\mathrm {and} ~~0\neq j} , which does not imply {\displaystyle i\neq j.} [ citation needed ] Similarly, {\displaystyle a<b>c} is shorthand for {\displaystyle a<b~~\mathrm {and} ~~b>c} , which does not imply any order of {\displaystyle a} and {\displaystyle c} . [ 4 ]
Similar to equation solving , inequation solving means finding what values (numbers, functions, sets, etc.) fulfill a condition stated in the form of an inequation or a conjunction of several inequations. These expressions contain one or more unknowns , which are free variables for which values are sought that cause the condition to be fulfilled. To be precise, what is sought are often not necessarily actual values, but, more in general, expressions. A solution of the inequation is an assignment of expressions to the unknowns that satisfies the inequation(s); in other words, expressions such that, when they are substituted for the unknowns, make the inequations true propositions.
Often, an additional objective expression (i.e., an optimization equation) is given that is to be minimized or maximized by an optimal solution. [ 5 ]
For example,
is a conjunction of inequations, partly written as chains (where ∧ can be read as "and"); the set of its solutions is shown in blue in the picture (the red, green, and orange line corresponding to the 1st, 2nd, and 3rd conjunct, respectively). For a larger example, see Linear programming#Example .
Computer support in solving inequations is described in constraint programming ; in particular, the simplex algorithm finds optimal solutions of linear inequations. [ 6 ] The programming language Prolog III also supports algorithms for solving particular classes of inequalities (and other relations) as a basic language feature. For more, see constraint logic programming .
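As a concrete illustration of computer support for linear inequations, the following sketch hands a small, made-up linear program to SciPy's general-purpose linprog solver; the objective and constraints are purely illustrative.

```python
# Maximize x + 2y subject to  x + y <= 4,  x <= 3,  y <= 2,  x >= 0,  y >= 0.
# linprog minimizes, so we minimize -(x + 2y).
from scipy.optimize import linprog

c = [-1, -2]                      # objective coefficients (negated for maximization)
A_ub = [[1, 1], [1, 0], [0, 1]]   # left-hand sides of the "<=" inequations
b_ub = [4, 3, 2]                  # right-hand sides
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)                      # optimal point, here (2, 2)
```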
Usually because of the properties of certain functions (like square roots), some inequations are equivalent to a combination of multiple others. For example, the inequation √ f ( x ) < g ( x ) is logically equivalent to the following three inequations combined: | https://en.wikipedia.org/wiki/Inequation |
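The three combined inequations are not reproduced in this extract; assuming real-valued f and g, the standard equivalence usually given is:

```latex
\sqrt{f(x)} < g(x)
\quad\Longleftrightarrow\quad
f(x) \ge 0 \;\land\; g(x) > 0 \;\land\; f(x) < \bigl(g(x)\bigr)^{2}
```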
Inequity aversion ( IA ) is the preference for fairness and resistance to incidental inequalities. [ 1 ] The social sciences that study inequity aversion include sociology , economics , psychology , anthropology , and ethology . Researchers on inequity aversion aim to explain behaviors that are driven not purely by self-interest but also by fairness considerations.
In some literature, the term inequality aversion has been used in place of inequity aversion. [ 2 ] [ 3 ] Discourses in social studies argue that "inequality" pertains to gaps in the distribution of resources, while "inequity" pertains to fundamental and institutional unfairness. [ 4 ] Therefore, the choice between using inequity or inequality aversion may depend on the specific context.
Inequity aversion research on humans mostly occurs in the discipline of economics though it is also studied in sociology .
Research on inequity aversion began in 1978 when studies suggested that humans are sensitive to inequities in their favor as well as those against them, and that some people attempt overcompensation when they feel "guilty" or unhappy to have received an undeserved reward. [ 5 ]
A more recent definition of inequity aversion (resistance to inequitable outcomes) was developed in 1999 by Fehr and Schmidt. [ 1 ] They postulated that people make decisions so as to minimize inequity in outcomes. Specifically, consider a setting with individuals {1,2,..., n } who receive pecuniary outcomes x i . Then the utility to person i would be given by
where α parametrizes the distaste of person i for disadvantageous inequality in the first nonstandard term, and β parametrizes the distaste of person i for advantageous inequality in the final term. The results suggested that a small fraction of selfish players may lead a fair-minded majority to act selfishly in some scenarios, while a minority of fair-minded players may also lead selfish players to cooperate in games with punishment. In addition, an inequity-averse mindset may affect market outcomes even under very strong competition.
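The utility formula itself is not reproduced in this extract. The sketch below implements the commonly cited Fehr–Schmidt (1999) form as a rough guide; the payoff vector and the parameter values are invented for illustration.

```python
# Fehr-Schmidt (1999) inequity-averse utility for player i (illustrative sketch).
def fehr_schmidt_utility(x, i, alpha, beta):
    n = len(x)
    disadvantageous = sum(max(x[j] - x[i], 0) for j in range(n) if j != i)
    advantageous    = sum(max(x[i] - x[j], 0) for j in range(n) if j != i)
    return x[i] - alpha * disadvantageous / (n - 1) - beta * advantageous / (n - 1)

# A 2-player split of 10: player 0 keeps 8, player 1 gets 2.
payoffs = [8, 2]
print(fehr_schmidt_utility(payoffs, 0, alpha=2.0, beta=0.6))  # 8 - 0.6*6 = 4.4
print(fehr_schmidt_utility(payoffs, 1, alpha=2.0, beta=0.6))  # 2 - 2.0*6 = -10.0
```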
Gary E. Bolton and Axel Ockenfels provided a more general model called ERC (equity, reciprocity, and competition) in 2000. [ 2 ] The model built on the premise that not only pecuniary but also relative payoff can motivate behaviors. In this model, all payoffs are monetary and nonnegative and players aim to maximize the expected value of a motivation function. The motivation function of individual i among n players is given by v_i = v_i(y_i, σ_i), where σ_i = σ_i(y_i, c, n) = y_i / c if c > 0 and 1/n if c = 0 is i's relative share of the payoff, and c = Σ_{j=1}^{n} y_j is the total pecuniary payout. The results showed that the behaviors in various games, including unknown pie-size games, best-shot games, Bertrand and Cournot games, guessing games etc., can in fact be deduced from ultimatum and dictator games.
Fehr and Schmidt showed that disadvantageous inequity aversion manifests itself in humans as the "willingness to sacrifice potential gain to block another individual from receiving a superior reward". They argue that this apparently self-destructive response is essential in creating an environment in which bilateral bargaining can thrive. Without inequity aversion's rejection of injustice, stable cooperation would be harder to maintain (for instance, there would be more opportunities for successful free riders ). [ 6 ]
James H. Fowler and his colleagues also argue that inequity aversion is essential for cooperation in multilateral settings. [ 7 ] In particular, they show that subjects in random income games (closely related to public goods games ) are willing to spend their own money to reduce the income of wealthier group members and increase the income of poorer group members even when there is no cooperation at stake. [ 8 ] Thus, individuals who free ride on the contributions of fellow group members are likely to be punished because they earn more, creating a decentralized incentive for the maintenance of cooperation.
Inequity aversion is broadly consistent with observations of behavior in three standard economics experiments :
In 2005, John List modified these experiments slightly to determine if something in the construction of the experiments was prompting specific behaviors. When given a choice to steal money from the other player, even a single dollar, the observed altruism all but disappeared. In another experiment, the two players were given a sum of money and the choice to give or take any amount from the other player. In this experiment, only 10% of the participants gave the other person any money at all, and fully 40% of the players opted to take all of the other player's money.
The last such experiment was identical to the former, where 40% were turned into a gang of robbers, with one catch: the two players were forced to earn the money by stuffing envelopes. In this last experiment, more than two thirds of the players neither took nor gave a cent, while just over 20% still took some of the other player's money.
In 2011, Ert, Erev and Roth [ 9 ] ran a model prediction competition on two datasets, each of which included 120 two-player games. In each game player 1 decides whether to "opt out" and determine the payoffs for both players, or to "opt in" and let player 2 decide about the payoff allocation by choosing between actions "left" or "right". The payoffs were randomly selected, so the dataset included games like the Ultimatum, Dictator, and Trust, as well as other games. The results suggested that inequity aversion could be described as one of many strategies that people might use in such games.
Other research in experimental economics addresses risk aversion in decision making [ 10 ] and the comparison of inequality measures to subjective judgments on perceived inequalities. [ 11 ]
Surveys of employee opinions within firms have shown modern labor economists that inequity aversion is very important to them. Employees compare not only relative salaries but also relative performance against that of co-workers. Where these comparisons lead to guilt or envy, inequity aversion may lower employee morale. According to Bewley (1999), the main reason that managers create formal pay structures is so that the inter-employee comparison is seen to be "fair", which they considered "key" for morale and job performance . [ 12 ]
It is natural to think of inequity aversion leading to greater solidarity within the labor pool, to the benefit of the average employee. However, a 2008 paper by Pedro Rey-Biel shows that this assumption can be subverted, and that an employer can use inequity aversion to get higher performance for less pay than would be possible otherwise. [ 13 ] This is done by moving away from formal pay structures and using off- equilibrium bonus payments as incentives for extra performance. He shows that the optimal contract for inequity aversion employees is less generous at the optimal production level than contracts for "standard agents" (who don't have inequity aversion) in an otherwise identical two-employee model.
In 2005 Avner Shaked distributed a "pamphlet" entitled "The Rhetoric of Inequity Aversion" that attacked the inequity aversion papers of Fehr & Schmidt. [ 14 ] In 2010, Shaked published an extended version of the criticism together with Ken Binmore in the Journal of Economic Behavior and Organization (the same issue also contains a reply by Fehr and Schmidt and a rejoinder by Binmore and Shaked). [ 15 ] [ 16 ] [ 17 ] A problem with inequity aversion models is that they have free parameters; standard theory is simply a special case of the inequity aversion model. Hence, by construction, inequity aversion must always be at least as good as standard theory when the inequity aversion parameters can be chosen after seeing the data. Binmore and Shaked also point out that Fehr and Schmidt (1999) pick a distribution of alpha and beta without conducting a formal estimation. The perfect correlation between the alpha and beta parameters in Fehr and Schmidt (1999) is an assumption made in the appendix of their paper that is not justified by the data that they provide.
More recently, several papers have estimated Fehr-Schmidt inequity aversion parameters using estimation techniques such as maximum likelihood . The results are mixed. Some authors have found beta larger than alpha, which contradicts a central assumption made by Fehr and Schmidt (1999). [ 18 ] Other authors have found that inequity aversion with Fehr and Schmidt's (1999) distribution of alphas and betas explains data of contract-theoretic experiments not better than standard theory; they also estimate average values of alpha that are much smaller than suggested by Fehr and Schmidt (1999). [ 19 ] Moreover, Levitt and List (2007) have pointed out that laboratory experiments tend to exaggerate the importance of pro-social behaviors because the subjects in the laboratory know that they are being monitored. [ 20 ]
An alternative [ 11 ] to the concept of a general inequity aversion is the assumption that the degree and the structure of inequality could lead either to acceptance or to aversion of inequality.
Fehr and Schmidt proposed that additional research on the inequity aversion should emphasize explicitly formalizing the role of intentions and conducting more thorough testing of the theory against alternative hypotheses. [ 21 ]
Bolton and Ockenfels recommended that the ERC model would benefit from dynamic theory support and additional research in order to effectively explain more complex games and games that occur over longer time spans. [ 2 ] A more advanced definition of social preference and a more formal quantitative model would also be worth investigating.
An experiment on capuchin monkeys ( Brosnan, S and de Waal, F ) showed that the subjects would prefer receiving nothing to receiving a reward awarded inequitably in favor of a second monkey, and appeared to target their anger at the researchers responsible for the inequitable distribution of food. [ 22 ] Anthropologists suggest that this research indicates a biological and evolutionary sense of social "fair play" in primates , though others believe that this is learned behavior or explained by other mechanisms. [ citation needed ] There is also evidence for inequity aversion in chimpanzees [ 23 ] (though see a recent study questioning this interpretation [ 24 ] ). The latest study shows that chimpanzees play the Ultimatum Game in the same way as children, preferring equitable outcomes. The authors claim that we now are near the point of no difference between humans and apes with regard to a sense of fairness. [ 25 ] Recent studies suggest that animals in the canidae family also recognize a basic level of fairness, stemming from living in cooperative societies. [ 26 ] Animal cognition studies in other biological orders have not found similar importance on relative "equity" and "justice" as opposed to absolute utility .
Fehr and Schmidt's model may partially explain the widespread opposition to economic inequality in democracies , but a distinction should be drawn between inequity aversion's "guilt" and egalitarianism 's " compassion ", which does not necessarily imply injustice .
Inequity aversion should not be confused with the arguments against the consequences of inequality. For example, the pro- publicly funded health care slogan "Hospitals for the poor become poor hospitals" directly objects to a predicted decline in medical care, not the health-care apartheid that is supposed to cause it. The argument that average medical outcomes improve with reduction in healthcare inequality (at the same total spending) is separate from the case for public healthcare on the grounds of inequity aversion. | https://en.wikipedia.org/wiki/Inequity_aversion |
The inert-pair effect is the tendency of the two electrons in the outermost atomic s -orbital to remain unshared in compounds of post-transition metals . The term inert-pair effect is often used in relation to the increasing stability of oxidation states that are two less than the group valency for the heavier elements of groups 13 , 14 , 15 and 16 . The term "inert pair" was first proposed by Nevil Sidgwick in 1927. [ 1 ] The name suggests that the outermost s electron pairs are more tightly bound to the nucleus in these atoms, and therefore more difficult to ionize or share.
For example, the p-block elements of the 4th, 5th and 6th period come after d-block elements, but the electrons present in the intervening d- (and f-) orbitals do not effectively shield the s-electrons of the valence shell. As a result, the inert pair of n s electrons remains more tightly held by the nucleus and hence participates less in bond formation.
Consider as an example thallium (Tl) in group 13 . The +1 oxidation state of Tl is the most stable, while Tl 3+ compounds are comparatively rare. The stability of the +1 oxidation state increases in the following sequence: [ 2 ]
The same trend in stability is noted in groups 14 , 15 and 16 . The heaviest members of each group, i.e. lead , bismuth and polonium are comparatively stable in oxidation states +2, +3, and +4 respectively.
The lower oxidation state in each of the elements in question has two valence electrons in s orbitals. A partial explanation is that the valence electrons in an s orbital are more tightly bound and are of lower energy than electrons in p orbitals and therefore less likely to be involved in bonding. [ 3 ] If the total ionization energies (IE) (see below) of the two electrons in s orbitals (the 2nd + 3rd ionization energies) are examined, it can be seen that there is an expected decrease from B to Al associated with increased atomic size, but the values for Ga, In and Tl are higher than expected.
The high ionization energy (IE) (2nd + 3rd) of gallium is explained by d-block contraction , and the higher IE (2nd + 3rd) of thallium relative to indium has been explained by relativistic effects . [ 4 ] The higher value for thallium compared to indium is partly attributable to the influence of the lanthanide contraction and the ensuing poor shielding from the nuclear charge by the intervening filled 4f and 5d subshells. [ 5 ]
An important consideration is that compounds in the lower oxidation state are ionic, whereas the compounds in the higher oxidation state tend to be covalent. Therefore, covalency effects must be taken into account. An alternative explanation of the inert pair effect by Drago in 1958 attributed the effect to low M−X bond enthalpies for the heavy p-block elements and the fact that it requires less energy to oxidize an element to a low oxidation state than to a higher oxidation state. [ 6 ] This energy has to be supplied by ionic or covalent bonds, so if bonding to a particular element is weak, the high oxidation state may be inaccessible. Further work involving relativistic effects confirms this. [ 7 ]
In the case of groups 13 to 15 the inert-pair effect has been further attributed to "the decrease in bond energy with the increase in size from Al to Tl so that the energy required to involve the s electron in bonding is not compensated by the energy released in forming the two additional bonds". [ 2 ] That said, the authors note that several factors are at play, including relativistic effects in the case of gold, and that "a quantitative rationalisation of all the data has not been achieved". [ 2 ]
The chemical inertness of the s electrons in the lower oxidation state is not always related to steric inertness (where steric inertness means that the presence of the s-electron lone pair has little or no influence on the geometry of the molecule or crystal). A simple example of steric activity is SnCl 2 , which is bent in accordance with VSEPR theory . Some examples where the lone pair appears to be inactive are bismuth(III) iodide , BiI 3 , and the BiI 3− 6 anion. In both of these the central Bi atom is octahedrally coordinated with little or no distortion, in contravention to VSEPR theory. [ 8 ] The steric activity of the lone pair has long been assumed to be due to the orbital having some p character, i.e. the orbital is not spherically symmetric. [ 2 ] More recent theoretical work shows that this is not always necessarily the case. For example, the litharge structure of PbO contrasts to the more symmetric and simpler rock-salt structure of PbS , and this has been explained in terms of Pb II –anion interactions in PbO leading to an asymmetry in electron density. Similar interactions do not occur in PbS. [ 9 ] Another example are some thallium(I) salts where the asymmetry has been ascribed to s electrons on Tl interacting with antibonding orbitals. [ 10 ] | https://en.wikipedia.org/wiki/Inert-pair_effect |
An inert gas is a gas that does not readily undergo chemical reactions with other chemical substances and therefore does not readily form chemical compounds . Though inert gases have a variety of applications, they are generally used to prevent unwanted chemical reactions with the oxygen ( oxidation ) and moisture ( hydrolysis ) in the air from degrading a sample. Generally, all noble gases except oganesson ( helium , neon , argon , krypton , xenon , and radon ), nitrogen , and carbon dioxide are considered inert gases. The term inert gas is context-dependent because several of the inert gases, including nitrogen and carbon dioxide, can be made to react under certain conditions. [ 1 ] [ 2 ]
Purified argon and nitrogen gases are the most commonly used inert gases due to their high natural abundance (78.3% N 2 , 1% Ar in air) [ 3 ] and low relative cost.
Unlike noble gases , an inert gas is not necessarily elemental and is often a compound gas. Like the noble gases, the tendency for non-reactivity is due to the valence , the outermost electron shell , being complete in all the inert gases. [ 4 ] This is a tendency, not a rule, as all noble gases and other "inert" gases can react to form compounds under some conditions.
The inert gases are obtained by fractional distillation of air , with the exception of helium, which is separated from a few natural gas sources rich in this element, [ 5 ] through cryogenic distillation or membrane separation. [ 6 ] For specialized applications, purified inert gas may be produced by specialized on-site generators. These are often used by chemical tankers and product carriers (smaller vessels). Benchtop specialized generators are also available for laboratories.
Because of the non-reactive properties of inert gases, they are often useful to prevent undesirable chemical reactions from taking place. Food is packed in an inert gas to remove oxygen gas. This prevents bacteria from growing. [ 7 ] It also prevents chemical oxidation by oxygen in normal air. An example is the rancidification (caused by oxidation) of edible oils. In food packaging , inert gases are used as a passive preservative, in contrast to active preservatives like sodium benzoate (an antimicrobial ) or BHT (an antioxidant ).
Historical documents may also be stored under inert gas to avoid degradation. For example, the original documents of the U.S. Constitution are stored under humidified argon. Helium was previously used, but it was less suitable because it diffuses out of the case more quickly than argon. [ 8 ]
Inert gases are often used in the chemical industry. In a chemical manufacturing plant, reactions can be conducted under inert gas to minimize fire hazards or unwanted reactions. In such plants and in oil refineries, transfer lines and vessels can be purged with inert gas as a fire and explosion prevention measure. At the bench scale, chemists perform experiments on air-sensitive compounds using air-free techniques developed to handle them under inert gas. Helium, neon, argon, krypton, xenon, and radon are inert gases.
Inert gas is produced on board crude oil carriers (above 8,000 tonnes from Jan 1, 2016) by burning kerosene in a dedicated inert gas generator . The inert gas system is used to prevent the atmosphere in cargo tanks or bunkers from coming into the explosive range. [ 9 ] Inert gases keep the oxygen content of the tank atmosphere below 5% (on crude carriers, less for product carriers and gas tankers), thus making any air/hydrocarbon gas mixture in the tank too rich (too high a fuel to oxygen ratio) to ignite. Inert gases are most important during discharging and during the ballast voyage when more hydrocarbon vapor is likely to be present in the tank atmosphere. Inert gas can also be used to purge the tank of the volatile atmosphere in preparation for gas freeing - replacing the atmosphere with breathable air - or vice versa.
The flue gas system uses the boiler exhaust as its source, so it is important that the fuel/air ratio in the boiler burners is properly regulated to ensure that high-quality inert gases are produced. Too much air would result in an oxygen content exceeding 5%, and too much fuel oil would result in the carryover of dangerous hydrocarbon gas.
The flue gas is cleaned and cooled by the scrubber tower. Various safety devices prevent overpressure, the return of hydrocarbon gas to the engine room, or having a supply of IG with too high oxygen content.
Gas tankers and product carriers cannot rely on flue gas systems (because they require IG with O 2 content of 1% or less) and so use inert gas generators instead. The inert gas generator consists of a combustion chamber and scrubber unit supplied by fans and a refrigeration unit which cools the gas. A drier in series with the system removes moisture from the gas before it is supplied to the deck. Cargo tanks on gas carriers are not inerted, but the whole space around them is.
Inert gas is produced on board commercial and military aircraft in order to passivate fuel tanks. On hot days, fuel vapour in fuel tanks may otherwise form a flammable or explosive mixture which if oxidized, could have catastrophic consequences. Conventionally, Air Separation Modules (ASMs) have been used to generate inert gas. ASMs contain selectively permeable membranes. They are fed compressed air that is extracted from a compressor stage of a gas turbine engine. The pressure drives the separation of oxygen from the air due to the increased permeability of oxygen through the ASMs in comparison to nitrogen. For fuel tank passivation, it is not necessary to remove all oxygen, but rather enough to stay below the lean flammability limit and the lean explosion limit. In contrast to the oxygen concentration of 21% in air, 10% to 12% in the ullage of a passivated fuel tank is common over the course of a flight.
In gas tungsten arc welding (GTAW), inert gases are used to shield the tungsten from contamination. The shielding gas also protects the molten metal created by the arc from the reactive gases in air, which can cause porosity in the solidified weld puddle. Inert gases are also used in gas metal arc welding (GMAW) for welding non-ferrous metals. [ 10 ] Some gases which are not usually considered inert, but which behave like inert gases in all the circumstances likely to be encountered in a given use, can often be substituted for an inert gas. This is useful when an appropriate pseudo-inert gas can be found that is inexpensive and common. For example, carbon dioxide is sometimes used in gas mixtures for GMAW because it is not reactive to the weld pool created by arc welding, although it is reactive to the arc. Adding more carbon dioxide to the inert gas, such as argon, increases penetration. The amount of carbon dioxide is often determined by the kind of transfer used in GMAW. The most common in industrial applications is spray arc transfer, and the most commonly used gas mixture for spray arc transfer is 90% argon and 10% carbon dioxide. In non-industrial applications, short circuit transfer is most common, particularly in the US, where a gas mixture made up of 75% argon and 25% carbon dioxide (referred to as C25) is most often used. Outside the US, a mixture of 80% argon and 20% carbon dioxide is often used.
In underwater diving an inert gas is a component of the breathing mixture which is not metabolically active and serves to dilute the gas mixture. The inert gas may have effects on the diver, but these are thought to be mostly physical effects, such as tissue damage caused by bubbles in decompression sickness . The most common inert gas used in breathing gas for commercial diving is helium . | https://en.wikipedia.org/wiki/Inert_gas |
Inert gas asphyxiation is a form of asphyxiation which results from breathing a physiologically inert gas in the absence of oxygen , or a low amount of oxygen (hypoxia) , [ 1 ] rather than atmospheric air (which is composed largely of nitrogen and oxygen). Examples of physiologically inert gases, which have caused accidental or deliberate death by this mechanism, are argon , helium and nitrogen . [ citation needed ] The term "physiologically inert" is used to indicate a gas which has no toxic or anesthetic properties and does not act upon the heart or hemoglobin. Instead, the gas acts as a simple diluent to reduce the oxygen concentration in inspired gas and blood to dangerously low levels, thereby eventually depriving cells in the body of oxygen. [ 2 ]
According to the U.S. Chemical Safety and Hazard Investigation Board , in humans, "breathing an oxygen deficient atmosphere can have serious and immediate effects, including unconsciousness after only one or two breaths. The exposed person has no warning and cannot sense that the oxygen level is too low." In the US, at least 80 people died from accidental nitrogen asphyxiation between 1992 and 2002. [ 3 ] Hazards with inert gases and the risks of asphyxiation are well-established. [ 4 ]
An occasional cause of accidental death in humans, inert gas asphyxia has been used as a suicide method. Inert gas asphyxia has been advocated by proponents of euthanasia , using a gas-retaining plastic hood device colloquially referred to as a suicide bag .
Nitrogen asphyxiation has been approved in some places as a method of capital punishment . In the world's first instance of its use, on January 25, 2024, Alabama executed convicted murderer Kenneth Eugene Smith via this method. It was used once again in the execution of Alan Eugene Miller on September 26, 2024, the execution of Carey Dale Grayson on November 21, 2024, the execution of Demetrius Terrence Frazier on February 6, 2025, and the execution of Jessie Hoffman Jr. on March 18, 2025. [ 5 ]
Alternatively, the term hypoxia has been used, but this term is flawed given that hypoxia does not necessarily imply death. On the other hand, asphyxiation is technically incorrect given that respiration continues and the carbon dioxide metabolically produced from the oxygen inhaled prior to inert gas asphyxiation can be exhaled without restriction, which can prevent acidosis and the strong urge to breathe caused by hypercapnia . [ 6 ]
When humans breathe in an asphyxiant gas or any other physiologically inert gas, they exhale carbon dioxide without re-supplying oxygen. Physiologically inert gases (those that have no toxic effect, but merely dilute oxygen) are generally free of odor and taste. Accordingly, the human subject detects little abnormal sensation as the oxygen level falls. This leads to asphyxiation (death from lack of oxygen) without the painful and traumatic feeling of suffocation (the hypercapnic alarm response , which in humans arises mostly from carbon dioxide levels rising), or the side effects of poisoning. In scuba diving rebreather accidents, a slow decrease in oxygen breathing gas content can produce variable or no sensation. [ 7 ] By contrast, suddenly breathing pure inert gas causes oxygen levels in the blood to fall precipitously, and may lead to unconsciousness in only a few breaths, with no symptoms at all. [ 3 ]
Some animals are better equipped than humans to detect hypoxia, and these species are less comfortable in low-oxygen environments that result from inert gas exposure, though more averse to CO 2 exposure. [ 8 ]
A typical human breathes between 12 and 20 times per minute at a rate influenced primarily by carbon dioxide concentration, and thus pH , in the blood. With each breath, a volume of about 0.6 litres is exchanged from an active lung volume of about three litres. The normal composition of the Earth's atmosphere is about 78% nitrogen, 21% oxygen, and 1% argon, carbon dioxide, and other gases. After just two or three breaths of nitrogen, the oxygen concentration in the lungs would be low enough for some oxygen already in the bloodstream to exchange back to the lungs and be eliminated by exhalation.
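As a crude numerical illustration of the dilution described above, the following toy model simply mixes pure nitrogen into a fixed lung volume on each breath; it ignores metabolic oxygen uptake and blood–gas exchange, so it only shows the downward trend of the oxygen fraction, not the full physiology. The volumes are the rounded figures from the paragraph above.

```python
# Toy model: each breath replaces ~0.6 L of a ~3 L lung volume with pure nitrogen.
lung_volume, tidal_volume = 3.0, 0.6
o2_fraction = 0.21                      # starting at the atmospheric oxygen fraction

for breath in range(1, 4):
    o2_fraction *= (1 - tidal_volume / lung_volume)   # fresh gas contains no oxygen
    print(f"after breath {breath}: {o2_fraction:.1%} O2 in the lung gas")
# after breath 1: 16.8% ... after breath 3: 10.8% -- well below the ~21% in air
```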
Unconsciousness in cases of accidental asphyxia can occur within one minute. Loss of consciousness results from critical hypoxia , when arterial oxygen saturation is less than 60%. [ 9 ] "At oxygen concentrations [in air] of 4 to 6%, there is loss of consciousness in 40 seconds and death within a few minutes". [ 10 ] At an altitude over 43,000 ft (13,000 m), where the ambient oxygen concentration is equivalent to a concentration of 3.6% at sea level, an average individual can perform flying duties efficiently for only 9 to 12 seconds without oxygen supplementation. [ 9 ] The US Air Force trains air crews to recognize their subjective signs of approaching hypoxia. Some individuals experience headache, dizziness, fatigue, nausea and euphoria, and some become unconscious without warning. [ 9 ]
Loss of consciousness may be accompanied by convulsions [ 9 ] and is followed by cyanosis and cardiac arrest. In a 1963 study by the RAF Institute of Aviation Medicine , [ 11 ] subjects were asked to hyperventilate in a nitrogen atmosphere. Among the results:
When the duration of over-ventilation with nitrogen was greater than 8–10 sec the subject reported a transient dimming of vision. In the experiments in which nitrogen breathing was carried out for 15–16 sec the subject experienced some general clouding of consciousness and impairment of vision. Vision was frequently lost in these experiments for a short period. In the few experiments in which nitrogen was breathed for 17–20 sec unconsciousness supervened and was accompanied on most occasions by a generalized convulsion. The duration of the interval between the start of over-ventilation with nitrogen and the onset of symptoms was 12–14 sec.
The study did not report how much discomfort the subjects felt. [ 11 ]
Controlled atmosphere killing ( CAK ) or controlled atmosphere stunning ( CAS ) is a method for slaughtering or stunning animals such as swine , poultry , [ 12 ] or cane toads by placing the animals in a container in which the atmosphere lacks oxygen and consists of an asphyxiant gas (one or more of argon, nitrogen or carbon dioxide), causing the animals to lose consciousness. Argon and nitrogen are important components of a gassing process which seem to cause no pain, and for this reason many consider some types of controlled atmosphere killing more humane than other methods of killing. [ 13 ] [ 14 ] Most animals are stunned by carbon dioxide. [ 15 ] [ 16 ]
If carbon dioxide is used, controlled atmosphere killing is not the same as inert gas asphyxia, because carbon dioxide at high concentrations (above 5%) is not biologically inert, but rather is toxic and also produces initial distress in some animal species. [ 17 ] The addition of toxic carbon dioxide to hypoxic atmospheres used in slaughter without animal distress is a complex and highly species-specific matter, which also depends on the concentration of carbon dioxide. [ 18 ] [ 19 ] [ 20 ]
Diving animals such as rats and minks and burrowing animals are sensitive to low-oxygen atmospheres and will avoid them. For this reason, the use of inert gas (hypoxic) atmospheres (without CO 2 ) for euthanasia is also species-specific. [ 21 ]
Accidental nitrogen asphyxiation is a possible hazard where large quantities of nitrogen are used. It causes several deaths per year in the United States, [ 22 ] which is asserted to be more than from any other industrial gas. In one accident in 1981, shortly before the launch of the first Space Shuttle mission , five technicians lost consciousness and two of them died after they entered the aft compartment of the orbiter. Nitrogen had been used to flush oxygen from the compartment as a precaution against fire. They were not wearing air packs because of a last-minute change in safety procedures. [ 23 ]
During a pool party in Mexico in 2013, eight party-goers were rendered unconscious and one 21-year-old male went into a coma after liquid nitrogen was poured into the pool. [ 24 ] [ 25 ]
Occasional deaths are reported from recreational inhalation of helium, but these are very rarely from direct inhalation from small balloons. The inhalation from larger helium balloons has been reportedly fatal. [ 26 ] A fatal fall from a tree occurred after the inhalation of helium from a toy balloon, which caused the person to become either unconscious or lightheaded. [ 27 ]
In 2015, a technician at a health spa was asphyxiated while conducting unsupervised cryotherapy using nitrogen. [ 28 ] [ 29 ]
In 2021, six people died of asphyxiation and 11 more were hospitalized following a liquid nitrogen leak at a poultry plant in Gainesville, Georgia . [ 30 ] [ 31 ]
Use of inert gas for suicide was first proposed by a Canadian, Dr Bruce Dunn. [ 32 ] Dunn commented that "...the acquisition of a compressed gas cylinder, an appropriate pressure reducing regulator, and suitable administration equipment... [was] not inaccessible to a determined individual, but relatively difficult for a member of the public to acquire casually or quickly". [ 33 ] Dunn collaborated with other researchers, notably the Canadian campaigner John Hofsess , who in 1997 formed the group "NuTech" with Derek Humphry and Philip Nitschke. [ 34 ] Two years later, NuTech had streamlined Dunn's work by using readily-available party balloon cylinders of helium. [ 35 ]
The method of suicide based on self-administration of helium in a bag, a colloquial name being the "exit bag" or suicide bag, has been referenced by some medical euthanasia advocacy groups. [ 36 ] Originally, such bags were used with helium, and 30 deaths were reported with use of them from 2001 to 2005, and another 79 from 2005 to 2009. This suggested to one set of reviewers that the popularity of the technique was increasing, as also did the increase in helium suicides in Sweden during the latter half of the same decade. [ 37 ]
After attempts were made by authorities to control helium sales in Australia, a new method was introduced that instead uses nitrogen. [ 38 ] Nitrogen became the main gas promoted by euthanasia advocates, such as Philip Nitschke , who founded a company called Max Dog Brewing in order to import canisters of nitrogen into Australia. [ 39 ] Nitschke stated that the gas cylinders can be used for both brewing and, if required, to end life at a later stage in a "peaceful, reliable [and] totally legal" manner. [ 40 ] Nitschke said that nitrogen is "undetectable even by autopsy, which was important to some people". [ 41 ]
Nitschke produced a 3D printed pod, " Sarco ", that fills with nitrogen at the push of a button, claiming to cause its user to become unconscious within a minute and then die of oxygen deprivation. [ 42 ] [ 43 ]
Execution by nitrogen asphyxiation was discussed briefly in print as a theoretical method of capital punishment in a 1995 National Review article. [ 44 ] The idea was then proposed by Lawrence J. Gist II, an attorney at law, under the title, International Humanitarian Hypoxia Project. [ 45 ]
In a televised documentary in 2007, the British political commentator and former MP Michael Portillo examined execution techniques in use around the world and found them unsatisfactory; his conclusion was that nitrogen asphyxiation would be the best method. [ 46 ]
In April 2015, Governor Mary Fallin of Oklahoma signed a bill allowing nitrogen asphyxiation as an alternative execution method. [ 47 ] [ 48 ] Three years later, in March 2018, Oklahoma announced that, due to the difficulty in procuring lethal injection drugs, nitrogen gas would be used to carry out executions. [ 49 ] [ 50 ] After making "good progress" in designing a nitrogen execution protocol, but not actually carrying out any executions, Oklahoma announced in February 2020 it had found a new reliable source of lethal injection drugs, but would continue working on nitrogen execution as a contingency method. [ 51 ]
In March 2018, Alabama became the third state (after Oklahoma and Mississippi ), to authorize the use of nitrogen asphyxiation as a method of execution. [ 52 ]
In August 2023, the Alabama Department of Corrections released its protocol for nitrogen hypoxia executions, designating Kenneth Eugene Smith , convicted of murder for hire in 1996, as the first death row inmate to undergo this method. [ 53 ] [ 54 ] [ 55 ] On November 1, the Supreme Court of Alabama authorized the execution to go ahead using the nitrogen hypoxia protocol. [ 55 ] On 25 January 2024, he became the first person to be executed by nitrogen hypoxia in the world. [ 56 ] Though the State Attorney General said afterward that Smith's execution showed that nitrogen hypoxia was an "effective and humane method of execution", [ 57 ] several people watching the execution reported that Smith "thrashed violently on the gurney" [ 56 ] for several minutes, with his death reportedly occurring 10 minutes after the nitrogen was administered to the chamber. [ 58 ] [ 59 ] The United Nations High Commissioner for Human Rights condemned the use. [ 60 ]
On September 26, 2024, Alan Eugene Miller became the second convicted man put to death by nitrogen gas in Alabama, followed by Carey Dale Grayson and Demetrius Terrence Frazier on November 21, 2024, and February 6, 2025, respectively.
On March 5, 2024, Louisiana Governor Jeff Landry signed a law allowing executions to be carried out via nitrogen gas. [ 61 ] A year after Louisiana approved the method, convicted rapist-killer Jessie Hoffman Jr. became the first inmate executed by nitrogen hypoxia in Louisiana on March 18, 2025, making Louisiana the second state to carry out nitrogen gas executions, while at the same time putting an end to the state of Louisiana's 15-year pause on executions. [ 62 ]
After Smith's execution, several other states became open to the possibility of legally carrying out nitrogen gas executions. Lawmakers from Ohio , where a moratorium is in effect since the state's last execution in 2018 , were considering legalizing nitrogen gas as a new method of execution aside from lethal injection. [ 63 ] [ 64 ] [ 65 ]
In March 2025, the Arkansas Legislature voted for a bill that authorizes the use of nitrogen asphyxiation as a method of execution. [ 66 ] [ 67 ] [ 68 ] Governor Sarah Huckabee Sanders signed the bill into law on March 18, 2025. [ 69 ]
As of 2025, Alabama, Arkansas, Oklahoma, Mississippi and Louisiana are the only states that authorize the use of nitrogen asphyxiation as a method of execution.
In the case Bucklew v. Precythe in 2019, the U.S. Supreme Court ruled that a Missouri death row inmate with cavernous hemangioma , a rare disorder that causes swelling of blood-filled cavities, could not avoid death by lethal injection and choose inert gas asphyxiation using nitrogen, since it had never been used in any execution in the world. [ 70 ]
As of March 2025, five people were executed by nitrogen hypoxia, four in Alabama and one in Louisiana . | https://en.wikipedia.org/wiki/Inert_gas_asphyxiation |
Inert gas generator (IGG) refers to machinery on board marine product tankers . Inert gas generators consist distinctively of a gas producer and a scrubbing system. [ 1 ] [ 2 ]
Diesel is burned using atmospheric air in a combustion chamber and the exhaust gas is collected; the resulting exhaust gas contains less than 5% oxygen , thereby creating " inert gas ", which consists mainly of nitrogen and partly of carbon dioxide . The hot, dirty gas is then passed through a scrubbing tower which cleans and cools it using seawater. This gas is then delivered to cargo tanks to prevent explosion of flammable cargo . [ 1 ]
This generator is sometimes confused with flue gas systems , which draw inert gas from the boiler systems of the ship. Flue gas systems do not have a burner but only "clean" and measure the air before delivering it to the cargo hold. [ 1 ] | https://en.wikipedia.org/wiki/Inert_gas_generator |
In chemistry, an inert salt is a salt used to adjust the ionic strength of a solution . This is usually done in equilibrium or kinetic studies in order to reduce relative changes in the ionic strength of a solution. The real goal is to reduce changes in the activity coefficients of ionic species which allows the definition of conditional equilibrium or rate constants.
Any salt will affect the ionic strength; inert salts have the additional property that both the cations and the anions of the salt do not (and should not) interfere in any way with the molecules that are investigated. They are supposed to only influence the ionic strength.
Typical inert salts that are used include: NaClO 4 , NaCl , KNO 3 , NaNO 3 , triflates (e.g. NaOSO 2 CF 3 [ 1 ] ).
Inert salts are never perfectly inert and their use will always interfere with the process under investigation, although the influence may be negligible. [ 2 ] | https://en.wikipedia.org/wiki/Inert_salt |
Inertia is the natural tendency of objects in motion to stay in motion and objects at rest to stay at rest, unless a force causes the velocity to change. It is one of the fundamental principles in classical physics , and described by Isaac Newton in his first law of motion (also known as The Principle of Inertia). [ 1 ] It is one of the primary manifestations of mass , one of the core quantitative properties of physical systems . [ 2 ] Newton writes: [ 3 ] [ 4 ] [ 5 ] [ 6 ]
LAW I. Every object perseveres in its state of rest, or of uniform motion in a right line, except insofar as it is compelled to change that state by forces impressed thereon.
In his 1687 work Philosophiæ Naturalis Principia Mathematica , Newton defined inertia as a property:
DEFINITION III. The vis insita , or innate force of matter, is a power of resisting by which every body, as much as in it lies, endeavours to persevere in its present state, whether it be of rest or of moving uniformly forward in a right line. [ 8 ]
Professor John H. Lienhard points to the Mozi – a Chinese text from the Warring States period (475–221 BCE) – as having given the first description of inertia. [ 9 ] Before the European Renaissance , the prevailing theory of motion in western philosophy was that of Aristotle (384–322 BCE). On the surface of the Earth, the inertia property of physical objects is often masked by gravity and the effects of friction and air resistance , both of which tend to decrease the speed of moving objects (commonly to the point of rest). This misled the philosopher Aristotle into believing that objects would move only as long as force was applied to them. [ 10 ] [ 11 ] Aristotle said that all moving objects (on Earth) eventually come to rest unless an external power (force) continued to move them. [ 12 ] Aristotle explained the continued motion of projectiles, after being separated from their projector, as an (itself unexplained) action of the surrounding medium continuing to move the projectile. [ 13 ]
Despite its general acceptance, Aristotle's concept of motion [ 14 ] was disputed on several occasions by notable philosophers over nearly two millennia . For example, Lucretius (following, presumably, Epicurus ) stated that the "default state" of the matter was motion, not stasis (stagnation). [ 15 ] In the 6th century, John Philoponus criticized the inconsistency between Aristotle's discussion of projectiles, where the medium keeps projectiles going, and his discussion of the void, where the medium would hinder a body's motion. Philoponus proposed that motion was not maintained by the action of a surrounding medium, but by some property imparted to the object when it was set in motion. Although this was not the modern concept of inertia, for there was still the need for a power to keep a body in motion, it proved a fundamental step in that direction. [ 16 ] [ 17 ] This view was strongly opposed by Averroes and by many scholastic philosophers who supported Aristotle. However, this view did not go unchallenged in the Islamic world , where Philoponus had several supporters who further developed his ideas.
In the 11th century, Persian polymath Ibn Sina (Avicenna) claimed that a projectile in a vacuum would not stop unless acted upon. [ 18 ]
In the 14th century, Jean Buridan rejected the notion that a motion-generating property, which he named impetus , dissipated spontaneously. Buridan's position was that a moving object would be arrested by the resistance of the air and the weight of the body which would oppose its impetus. [ 19 ] Buridan also maintained that impetus increased with speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. Despite the obvious similarities to more modern ideas of inertia, Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also believed that impetus could be not only linear but also circular in nature, causing objects (such as celestial bodies) to move in a circle. Buridan's theory was followed up by his pupil Albert of Saxony (1316–1390) and the Oxford Calculators , who performed various experiments which further undermined the Aristotelian model. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of illustrating the laws of motion with graphs.
Shortly before Galileo's theory of inertia, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone:
[Any] portion of corporeal matter which moves by itself when an impetus has been impressed on it by any external motive force has a natural tendency to move on a rectilinear, not a curved, path. [ 20 ]
Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion.
According to science historian Charles Coulston Gillispie , inertia "entered science as a physical consequence of Descartes ' geometrization of space-matter, combined with the immutability of God." [ 21 ] The first physicist to completely break away from the Aristotelian model of motion was Isaac Beeckman in 1614. [ 22 ]
The term "inertia" was first introduced by Johannes Kepler in his Epitome Astronomiae Copernicanae [ 23 ] (published in three parts from 1617 to 1621). However, the meaning of Kepler's term, which he derived from the Latin word for "idleness" or "laziness", was not quite the same as its modern interpretation. Kepler defined inertia only in terms of resistance to movement, once again based on the axiomatic assumption that rest was a natural state which did not need explanation. It was not until the later work of Galileo and Newton unified rest and motion in one principle that the term "inertia" could be applied to those concepts as it is today. [ 24 ]
The principle of inertia, as formulated by Aristotle for "motions in a void", [ 25 ] includes that a mundane object tends to resist a change in motion. The Aristotelian division of motion into mundane and celestial became increasingly problematic in the face of the conclusions of Nicolaus Copernicus in the 16th century, who argued that the Earth is never at rest, but is actually in constant motion around the Sun. [ 26 ]
Galileo , in his further development of the Copernican model , recognized these problems with the then-accepted nature of motion and, at least partially, as a result, included a restatement of Aristotle's description of motion in a void as a basic physical principle:
A body moving on a level surface will continue in the same direction at a constant speed unless disturbed.
Galileo writes that "all external impediments removed, a heavy body on a spherical surface concentric with the earth will maintain itself in that state in which it has been; if placed in a movement towards the west (for example), it will maintain itself in that movement." [ 27 ] This notion, which is termed "circular inertia" or "horizontal circular inertia" by historians of science, is a precursor to, but is distinct from, Newton's notion of rectilinear inertia. [ 28 ] [ 29 ] For Galileo, a motion is " horizontal " if it does not carry the moving body towards or away from the center of the Earth, and for him, "a ship, for instance, having once received some impetus through the tranquil sea, would move continually around our globe without ever stopping." [ 30 ] [ 31 ] It is also worth noting that Galileo later (in 1632) concluded that based on this initial premise of inertia, it is impossible to tell the difference between a moving object and a stationary one without some outside reference to compare it against. [ 32 ] This observation ultimately came to be the basis for Albert Einstein to develop the theory of special relativity .
Concepts of inertia in Galileo's writings would later come to be refined, modified, and codified by Isaac Newton as the first of his laws of motion (first published in Newton's work, Philosophiæ Naturalis Principia Mathematica , in 1687):
Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon. [ 33 ]
Despite having defined the concept in his laws of motion, Newton did not actually use the term "inertia". In fact, he originally viewed the respective phenomena as being caused by "innate forces" inherent in matter which resist any acceleration. Given this perspective, and borrowing from Kepler, Newton conceived of "inertia" as "the innate force possessed by an object which resists changes in motion", thus defining "inertia" to mean the cause of the phenomenon, rather than the phenomenon itself.
However, Newton's original ideas of "innate resistive force" were ultimately problematic for a variety of reasons, and thus most physicists no longer think in these terms. As no alternate mechanism has been readily accepted, and it is now generally accepted that there may not be one that we can know, the term "inertia" has come to mean simply the phenomenon itself, rather than any inherent mechanism. Thus, ultimately, "inertia" in modern classical physics has come to be a name for the same phenomenon as described by Newton's first law of motion, and the two concepts are now considered to be equivalent.
Albert Einstein 's theory of special relativity , as proposed in his 1905 paper entitled " On the Electrodynamics of Moving Bodies ", was built on the understanding of inertial reference frames developed by Galileo, Huygens and Newton. While this revolutionary theory did significantly change the meaning of many Newtonian concepts such as mass , energy , and distance , Einstein's concept of inertia remained at first unchanged from Newton's original meaning. However, this resulted in a limitation inherent in special relativity: the principle of relativity could only apply to inertial reference frames. To address this limitation, Einstein developed his general theory of relativity ("The Foundation of the General Theory of Relativity", 1916), which provided a theory including noninertial (accelerated) reference frames. [ 34 ]
In general relativity, the concept of inertial motion got a broader meaning. Taking into account general relativity, inertial motion is any movement of a body that is not affected by forces of electrical, magnetic, or other origin, but that is only under the influence of gravitational masses. [ 35 ] [ 36 ] Physically speaking, this happens to be exactly what a properly functioning three-axis accelerometer is indicating when it does not detect any proper acceleration .
The term inertia comes from the Latin word iners , meaning idle or sluggish. [ 37 ]
A quantity related to inertia is rotational inertia (→ moment of inertia ), the property that a rotating rigid body maintains its state of uniform rotational motion. Its angular momentum remains unchanged unless an external torque is applied; this is called conservation of angular momentum. Rotational inertia is often considered in relation to a rigid body. For example, a gyroscope uses the property that it resists any change in the axis of rotation. | https://en.wikipedia.org/wiki/Inertia |
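A short numerical illustration of the conservation of angular momentum mentioned above; the moment-of-inertia values are invented (roughly the picture of a skater pulling in her arms):

```python
# Conservation of angular momentum: L = I * omega stays constant with no external torque.
I1, omega1 = 4.0, 2.0          # initial moment of inertia (kg m^2) and angular velocity (rad/s)
L = I1 * omega1                # angular momentum, 8.0 kg m^2/s

I2 = 1.0                       # arms pulled in: moment of inertia drops
omega2 = L / I2                # angular velocity must rise to keep L constant
print(omega2)                  # 8.0 rad/s -- four times faster
```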
In mathematics, especially in differential and algebraic geometry, an inertia stack of a groupoid X is a stack that parametrizes the automorphism groups on X and the transitions between them. It is commonly denoted as Λ X and is defined using inertia groupoids as charts. The notion often appears in particular as an inertia orbifold .
Let U = ( U 1 ⇉ U 0 ) be a groupoid. Then the inertia groupoid Λ U is a groupoid (= a category whose morphisms are all invertible) where
For example, if U is a fundamental groupoid , then Λ U keeps track of the changes of base points.
| https://en.wikipedia.org/wiki/Inertia_stack |
An inertia wheel pendulum is a pendulum with an inertia wheel attached. It can be used as a pedagogical problem in control theory . This type of pendulum is often confused with the gyroscopic effect, which has a completely different physical nature.
| https://en.wikipedia.org/wiki/Inertia_wheel_pendulum |
An inertial audio effects controller is an electronic device that senses changes in acceleration, angular velocity and/or a magnetic field, [ 1 ] and relays those changes to an effects controller. Transmitting the sensed data can be done via wired or wireless methods. To be of use the effects controller must be connected to an effect unit so that an effect can be modulated, or connected to a MIDI controller or musical keyboard . The Wah-Wah effect is a classic example of effect modulation.
An inertial audio effects controller can be compared with a traditional expression pedal to explain its function. An inertial effects controller uses an inertial sensor to detect user-directed changes, whereas a traditional expression pedal uses an electrically resistive element to detect changes. Each approach has advantages and disadvantages. The main advantages of inertial control versus a traditional foot pedal are an increased range of dynamic motion, remote control, finer modulation precision, and software-enabled features such as motion-triggered ADSR envelopes and bi-directional motion control. The main disadvantages are the requirement for a power source and a more complicated setup. [ 2 ]
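As a sketch of the kind of mapping such a controller performs (turning a sensed tilt into the 0–127 sweep an expression pedal would normally provide), the following is illustrative only; the axis choice, playing range, and scaling are assumptions, not any particular product's behaviour.

```python
import math

# Map a tilt angle derived from accelerometer readings to a MIDI-style control value (0-127),
# the same range an expression pedal typically sweeps for effects such as wah-wah.
def tilt_to_control_value(ax, ay, az):
    pitch = math.atan2(ax, math.sqrt(ay * ay + az * az))   # tilt about one axis, in radians
    pitch_deg = math.degrees(pitch)
    clamped = max(-45.0, min(45.0, pitch_deg))             # assume a +/-45 degree playing range
    return round((clamped + 45.0) / 90.0 * 127)

print(tilt_to_control_value(0.0, 0.0, 1.0))   # held flat: 64 (mid sweep)
print(tilt_to_control_value(0.7, 0.0, 0.7))   # tilted ~45 degrees: 127 (full sweep)
```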
Due to their functional similarity with traditional expression pedals, they have been given the informal name, 'Expression box'.
Conceivably, any or all of the inertial sensors ( accelerometer , gyroscope , magnetometer ) could be used for effect modulation. However, currently the only commercially available products use acceleration sensing only [ 3 ] [ 4 ] or acceleration combined with angular velocity, [ 5 ] as sensed by a gyroscope.
Inertial control of an audio device, whether wired or wireless, is a relatively recent and growing trend. Technology advances have reduced the price and size, and improved the usability and performance, of the core components. [ 6 ] [ 7 ] Specifically, the core components are an inertial device called a Micro-Electro-Mechanical System ( MEMS ), a microcontroller , and, for wireless systems, a radio frequency transmitter /receiver. | https://en.wikipedia.org/wiki/Inertial_audio_effects_controller |
An inertial balance is a device that allows the measurement of inertial mass (as opposed to gravitational mass for a regular balance) and that can be operated in a microgravity environment where weight is negligible (e.g. on the International Space Station ). The principle of operation is based on a vibrating spring-mass system: the frequency of vibration depends on the unknown mass, being higher for lower mass. The object to be measured is placed in the inertial balance, and a manual initial displacement of the spring mechanism starts the oscillation. The time needed to complete a given number of cycles is measured. Knowing the characteristic spring constant and damping coefficient of the spring system, the mass of the object can be computed according to the harmonic oscillator model . Alternatively, a calibration of the device with known masses can be performed, so that the spring constant and any appreciable damping will implicitly be accounted for, and need not be separately known or estimated.
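As a rough illustration (not drawn from the article's own data-analysis material), an ideal, undamped spring-mass oscillator has period T = 2π√(m/k), so a measured period gives the unknown mass as m = kT²/(4π²). A minimal Python sketch, assuming the spring constant k is known and using purely hypothetical numbers:

```python
import math

def mass_from_oscillation(spring_constant_n_per_m, total_time_s, n_cycles):
    """Estimate inertial mass from an inertial-balance measurement.

    Assumes an ideal, undamped harmonic oscillator, so the period is
    T = 2*pi*sqrt(m/k), which rearranges to m = k * T**2 / (4 * pi**2).
    """
    period = total_time_s / n_cycles          # average duration of one cycle
    return spring_constant_n_per_m * period**2 / (4 * math.pi**2)

# Example: a 250 N/m spring system completing 20 cycles in 8.9 s
print(mass_from_oscillation(250.0, 8.9, 20))   # ~1.25 kg
```

In practice the calibration with known masses mentioned above would replace the assumed spring constant.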
| https://en.wikipedia.org/wiki/Inertial_balance |
In classical physics and special relativity , an inertial frame of reference (also called an inertial space or a Galilean reference frame ) is a frame of reference in which objects exhibit inertia : they remain at rest or in uniform motion relative to the frame until acted upon by external forces. In such a frame, the laws of nature can be observed without the need to correct for acceleration.
All frames of reference with zero acceleration are in a state of constant rectilinear motion (straight-line motion) with respect to one another. In such a frame, an object with zero net force acting on it is perceived to move with a constant velocity , or, equivalently, Newton's first law of motion holds. Such frames are known as inertial. Some physicists, like Isaac Newton , originally thought that one of these frames was absolute, namely the one approximated by the fixed stars . However, this is not required for the definition, and it is now known that those stars are in fact moving relative to one another.
According to the principle of special relativity , all physical laws look the same in all inertial reference frames, and no inertial frame is privileged over another. Measurements of objects in one inertial frame can be converted to measurements in another by a simple transformation — the Galilean transformation in Newtonian physics or the Lorentz transformation (combined with a translation) in special relativity ; these approximately match when the relative speed of the frames is low, but differ as it approaches the speed of light .
By contrast, a non-inertial reference frame is accelerating. In such a frame, the interactions between physical objects vary depending on the acceleration of that frame with respect to an inertial frame. Viewed from the perspective of classical mechanics and special relativity , the usual physical forces caused by the interaction of objects have to be supplemented by fictitious forces caused by inertia . [ 1 ] [ 2 ] Viewed from the perspective of general relativity theory , the fictitious (i.e. inertial) forces are attributed to geodesic motion in spacetime .
Due to Earth's rotation , its surface is not an inertial frame of reference. The Coriolis effect can deflect certain forms of motion as seen from Earth , and the centrifugal force will reduce the effective gravity at the equator . Nevertheless, for many applications the Earth is an adequate approximation of an inertial reference frame.
The motion of a body can only be described relative to something else—other bodies, observers, or a set of spacetime coordinates. These are called frames of reference . According to the first postulate of special relativity , all physical laws take their simplest form in an inertial frame, and there exist multiple inertial frames interrelated by uniform translation : [ 3 ]
Special principle of relativity: If a system of coordinates K is chosen so that, in relation to it, physical laws hold good in their simplest form, the same laws hold good in relation to any other system of coordinates K' moving in uniform translation relatively to K.
This simplicity manifests itself in that inertial frames have self-contained physics without the need for external causes, while physics in non-inertial frames has external causes. [ 4 ] The principle of simplicity can be used within Newtonian physics as well as in special relativity: [ 5 ] [ 6 ]
The laws of Newtonian mechanics do not always hold in their simplest form...If, for instance, an observer is placed on a disc rotating relative to the earth, he/she will sense a 'force' pushing him/her toward the periphery of the disc, which is not caused by any interaction with other bodies. Here, the acceleration is not the consequence of the usual force, but of the so-called inertial force. Newton's laws hold in their simplest form only in a family of reference frames, called inertial frames. This fact represents the essence of the Galilean principle of relativity: The laws of mechanics have the same form in all inertial frames.
However, this definition of inertial frames is understood to apply in the Newtonian realm and ignores relativistic effects.
In practical terms, the equivalence of inertial reference frames means that scientists within a box moving with a constant absolute velocity cannot determine this velocity by any experiment. Otherwise, the differences would set up an absolute standard reference frame. [ 7 ] [ 8 ] According to this definition, supplemented with the constancy of the speed of light, inertial frames of reference transform among themselves according to the Poincaré group of symmetry transformations, of which the Lorentz transformations are a subgroup. [ 9 ] In Newtonian mechanics, inertial frames of reference are related by the Galilean group of symmetries.
Newton posited an absolute space considered well-approximated by a frame of reference stationary relative to the fixed stars . An inertial frame was then one in uniform translation relative to absolute space. However, some "relativists", [ 10 ] even at the time of Newton, felt that absolute space was a defect of the formulation, and should be replaced.
The expression inertial frame of reference ( German : Inertialsystem ) was coined by Ludwig Lange in 1885, to replace Newton's definitions of "absolute space and time" with a more operational definition : [ 11 ] [ 12 ]
A reference frame in which a mass point thrown from the same point in three different (non co-planar) directions follows rectilinear paths each time it is thrown, is called an inertial frame. [ 13 ]
The inadequacy of the notion of "absolute space" in Newtonian mechanics is spelled out by Blagojevich: [ 14 ]
The utility of operational definitions was carried much further in the special theory of relativity. [ 15 ] Some historical background including Lange's definition is provided by DiSalle, who says in summary: [ 16 ]
The original question, "relative to what frame of reference do the laws of motion hold?" is revealed to be wrongly posed. The laws of motion essentially determine a class of reference frames, and (in principle) a procedure for constructing them.
Classical theories that use the Galilean transformation postulate the equivalence of all inertial reference frames. The Galilean transformation transforms coordinates from one inertial reference frame, $\mathbf{s}$, to another, $\mathbf{s}'$, by simple addition or subtraction of coordinates:

$$\mathbf{r}' = \mathbf{r} - \mathbf{r}_0 - \mathbf{v}\,t, \qquad t' = t - t_0,$$
where $\mathbf{r}_0$ and $t_0$ represent shifts in the origin of space and time, and $\mathbf{v}$ is the relative velocity of the two inertial reference frames. Under Galilean transformations, the time $t_2 - t_1$ between two events is the same for all reference frames, and the distance between two simultaneous events (or, equivalently, the length of any object, $|\mathbf{r}_2 - \mathbf{r}_1|$) is also the same.
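A minimal numerical sketch of this transformation, assuming the form written above; the function name and sample values are illustrative, and numpy is used for the vector arithmetic:

```python
import numpy as np

def galilean(r, t, v, r0=np.zeros(3), t0=0.0):
    """Transform an event (r, t) from frame s to frame s' moving at velocity v,
    with optional shifts r0, t0 of the spatial and temporal origins."""
    return r - r0 - v * t, t - t0

# Two simultaneous events one metre apart, viewed from a frame moving at 10 m/s along x
v = np.array([10.0, 0.0, 0.0])
rA, tA = galilean(np.array([0.0, 0.0, 0.0]), 2.0, v)
rB, tB = galilean(np.array([1.0, 0.0, 0.0]), 2.0, v)
print(np.linalg.norm(rB - rA))   # 1.0 -- distance between simultaneous events is preserved
print(tB - tA)                   # 0.0 -- time intervals are preserved
```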
Within the realm of Newtonian mechanics, an inertial frame of reference, or inertial reference frame, is one in which Newton's first law of motion is valid. [ 17 ] However, the principle of special relativity generalizes the notion of an inertial frame to include all physical laws, not simply Newton's first law.
Newton viewed the first law as valid in any reference frame that is in uniform motion (neither rotating nor accelerating) relative to absolute space ; as a practical matter, "absolute space" was considered to be the fixed stars . [ 18 ] [ 19 ] In the theory of relativity the notion of absolute space or a privileged frame is abandoned, and an inertial frame in the field of classical mechanics is defined as: [ 20 ] [ 21 ]
An inertial frame of reference is one in which the motion of a particle not subject to forces is in a straight line at constant speed.
Hence, with respect to an inertial frame, an object or body accelerates only when a physical force is applied, and (following Newton's first law of motion ), in the absence of a net force, a body at rest will remain at rest and a body in motion will continue to move uniformly—that is, in a straight line and at constant speed . Newtonian inertial frames transform among each other according to the Galilean group of symmetries .
If this rule is interpreted as saying that straight-line motion is an indication of zero net force, the rule does not identify inertial reference frames because straight-line motion can be observed in a variety of frames. If the rule is interpreted as defining an inertial frame, then being able to determine when zero net force is applied is crucial. The problem was summarized by Einstein: [ 22 ]
The weakness of the principle of inertia lies in this, that it involves an argument in a circle: a mass moves without acceleration if it is sufficiently far from other bodies; we know that it is sufficiently far from other bodies only by the fact that it moves without acceleration.
There are several approaches to this issue. One approach is to argue that all real forces drop off with distance from their sources in a known manner, so it is only needed that a body is far enough away from all sources to ensure that no force is present. [ 23 ] A possible issue with this approach is the historically long-lived view that the distant universe might affect matters ( Mach's principle ). Another approach is to identify all real sources for real forces and account for them. A possible issue with this approach is the possibility of missing something, or accounting inappropriately for their influence, perhaps, again, due to Mach's principle and an incomplete understanding of the universe. A third approach is to look at the way the forces transform when shifting reference frames. Fictitious forces, those that arise due to the acceleration of a frame, disappear in inertial frames and have complicated rules of transformation in general cases. Based on the universality of physical law and the request for frames where the laws are most simply expressed, inertial frames are distinguished by the absence of such fictitious forces.
Newton enunciated a principle of relativity himself in one of his corollaries to the laws of motion: [ 24 ] [ 25 ]
The motions of bodies included in a given space are the same among themselves, whether that space is at rest or moves uniformly forward in a straight line.
This principle differs from the special principle in two ways: first, it is restricted to mechanics, and second, it makes no mention of simplicity. It shares with the special principle the invariance of the form of the description among mutually translating reference frames. [ 26 ] The role of fictitious forces in classifying reference frames is pursued further below.
Einstein's theory of special relativity , like Newtonian mechanics, postulates the equivalence of all inertial reference frames. However, because special relativity postulates that the speed of light in free space is invariant , the transformation between inertial frames is the Lorentz transformation , not the Galilean transformation which is used in Newtonian mechanics.
The invariance of the speed of light leads to counter-intuitive phenomena, such as time dilation , length contraction , and the relativity of simultaneity . The predictions of special relativity have been extensively verified experimentally. [ 27 ] The Lorentz transformation reduces to the Galilean transformation as the speed of light approaches infinity or as the relative velocity between frames approaches zero. [ 28 ]
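To illustrate the limiting behaviour just described, here is a small sketch, not taken from the article, comparing the one-dimensional Lorentz transformation x′ = γ(x − vt), t′ = γ(t − vx/c²) with its Galilean counterpart at an everyday speed:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz(x, t, v):
    """One-dimensional Lorentz boost with relative speed v along x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / C**2)

def galilean(x, t, v):
    """Galilean counterpart: absolute time, simple velocity subtraction."""
    return x - v * t, t

# An event 1 km away after 1 s, seen from a frame moving at 30 m/s (roughly highway speed)
x, t, v = 1000.0, 1.0, 30.0
print(lorentz(x, t, v))    # (~970.000000000005, ~0.9999999996...)
print(galilean(x, t, v))   # (970.0, 1.0) -- nearly identical at everyday speeds
```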
Consider a situation common in everyday life. Two cars travel along a road, both moving at constant velocities. At some particular moment, they are separated by 200 meters. The car in front is traveling at 22 meters per second and the car behind is traveling at 30 meters per second. If we want to find out how long it will take the second car to catch up with the first, there are three obvious "frames of reference" that we could choose. [ 29 ]
First, we could observe the two cars from the side of the road. We define our "frame of reference" S as follows. We stand on the side of the road and start a stop-clock at the exact moment that the second car passes us, which happens to be when they are a distance d = 200 m apart. Since neither of the cars is accelerating, we can determine their positions by the following formulas, where $x_1(t)$ is the position in meters of car one after time $t$ in seconds and $x_2(t)$ is the position of car two after time $t$:

$$x_1(t) = d + v_1 t = 200 + 22\,t, \qquad x_2(t) = v_2 t = 30\,t.$$

Notice that these formulas predict that at t = 0 s the first car is 200 m down the road and the second car is right beside us, as expected. We want to find the time at which $x_1 = x_2$. Therefore, we set $x_1 = x_2$ and solve for $t$:

$$200 + 22\,t = 30\,t \quad\Rightarrow\quad 8\,t = 200 \quad\Rightarrow\quad t = 25\ \text{s}.$$
Alternatively, we could choose a frame of reference S′ situated in the first car. In this case, the first car is stationary and the second car is approaching from behind at a speed of $v_2 - v_1 = 8$ m/s. To catch up to the first car, it will take a time of $d/(v_2 - v_1) = 200/8$ s, that is, 25 seconds, as before. Note how much easier the problem becomes by choosing a suitable frame of reference. The third possible frame of reference would be attached to the second car. That example resembles the case just discussed, except the second car is stationary and the first car moves backward towards it at 8 m/s.
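A short sketch of the arithmetic above, confirming that the roadside frame and the frame of the first car give the same catch-up time:

```python
d, v1, v2 = 200.0, 22.0, 30.0   # separation (m) and speeds (m/s) from the example

# Frame S (roadside): x1(t) = d + v1*t and x2(t) = v2*t; set them equal and solve for t
t_road = d / (v2 - v1)

# Frame S' (first car): the first car is at rest and the second approaches at v2 - v1
t_car = d / (v2 - v1)

print(t_road, t_car)   # 25.0 25.0 -- the same 25 seconds in both frames
```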
It would have been possible to choose a rotating, accelerating frame of reference, moving in a complicated manner, but this would have served to complicate the problem unnecessarily. One can convert measurements made in one coordinate system to another. For example, suppose that your watch is running five minutes fast compared to the local standard time. If you know that this is the case, when somebody asks you what time it is, you can deduct five minutes from the time displayed on your watch to obtain the correct time. The measurements that an observer makes about a system depend therefore on the observer's frame of reference (you might say that the bus arrived at 5 past three, when in fact it arrived at three).
For a simple example involving only the orientation of two observers, consider two people standing, facing each other on either side of a north-south street. A car drives past them heading south. For the person facing east, the car was moving to the right. However, for the person facing west, the car was moving to the left. This discrepancy is because the two people used two different frames of reference from which to investigate this system.
For a more complex example involving observers in relative motion, consider Alfred, who is standing on the side of a road watching a car drive past him from left to right. In his frame of reference, Alfred defines the spot where he is standing as the origin, the road as the x -axis, and the direction in front of him as the positive y -axis. To him, the car moves along the x axis with some velocity v in the positive x -direction. Alfred's frame of reference is considered an inertial frame because he is not accelerating, ignoring effects such as Earth's rotation and gravity.
Now consider Betsy, the person driving the car. Betsy, in choosing her frame of reference, defines her location as the origin, the direction to her right as the positive x -axis, and the direction in front of her as the positive y -axis. In this frame of reference, it is Betsy who is stationary and the world around her that is moving – for instance, as she drives past Alfred, she observes him moving with velocity v in the negative y -direction. If she is driving north, then north is the positive y -direction; if she turns east, east becomes the positive y -direction.
Finally, as an example of non-inertial observers, assume Candace is accelerating her car. As she passes by him, Alfred measures her acceleration and finds it to be a in the negative x -direction. Assuming Candace's acceleration is constant, what acceleration does Betsy measure? If Betsy's velocity v is constant, she is in an inertial frame of reference, and she will find the acceleration to be the same as Alfred in her frame of reference, a in the negative y -direction. However, if she is accelerating at rate A in the negative y -direction (in other words, slowing down), she will find Candace's acceleration to be a′ = a − A in the negative y -direction—a smaller value than Alfred has measured. Similarly, if she is accelerating at rate A in the positive y -direction (speeding up), she will observe Candace's acceleration as a′ = a + A in the negative y -direction—a larger value than Alfred's measurement.
Here the relation between inertial and non-inertial observational frames of reference is considered. The basic difference between these frames is the need in non-inertial frames for fictitious forces, as described below.
General relativity is based upon the principle of equivalence: [ 30 ] [ 31 ]
There is no experiment observers can perform to distinguish whether an acceleration arises because of a gravitational force or because their reference frame is accelerating.
This idea was introduced in Einstein's 1907 article "Principle of Relativity and Gravitation" and later developed in 1911. [ 32 ] Support for this principle is found in the Eötvös experiment , which determines whether the ratio of inertial to gravitational mass is the same for all bodies, regardless of size or composition. To date no difference has been found to a few parts in 10¹¹. [ 33 ] For some discussion of the subtleties of the Eötvös experiment, such as the local mass distribution around the experimental site (including a quip about the mass of Eötvös himself), see Franklin. [ 34 ]
Einstein's general theory modifies the distinction between nominally "inertial" and "non-inertial" effects by replacing special relativity's "flat" Minkowski Space with a metric that produces non-zero curvature. In general relativity, the principle of inertia is replaced with the principle of geodesic motion , whereby objects move in a way dictated by the curvature of spacetime. As a consequence of this curvature, it is not a given in general relativity that inertial objects moving at a particular rate with respect to each other will continue to do so. This phenomenon of geodesic deviation means that inertial frames of reference do not exist globally as they do in Newtonian mechanics and special relativity.
However, the general theory reduces to the special theory over sufficiently small regions of spacetime , where curvature effects become less important and the earlier inertial frame arguments can come back into play. [ 35 ] [ 36 ] Consequently, modern special relativity is now sometimes described as only a "local theory". [ 37 ] "Local" can encompass, for example, the entire Milky Way galaxy : The astronomer Karl Schwarzschild observed the motion of pairs of stars orbiting each other. He found that the two orbits of the stars of such a system lie in a plane, and the perihelion of the orbits of the two stars remains pointing in the same direction with respect to the Solar System . Schwarzschild pointed out that that was invariably seen: the direction of the angular momentum of all observed double star systems remains fixed with respect to the direction of the angular momentum of the Solar System. These observations allowed him to conclude that inertial frames inside the galaxy do not rotate with respect to one another, and that the space of the Milky Way is approximately Galilean or Minkowskian. [ 38 ]
In an inertial frame, Newton's first law , the law of inertia , is satisfied: Any free motion has a constant magnitude and direction. [ 39 ] Newton's second law for a particle takes the form:

$$\mathbf{F} = m\,\mathbf{a},$$
with F the net force (a vector ), m the mass of a particle and a the acceleration of the particle (also a vector) which would be measured by an observer at rest in the frame. The force F is the vector sum of all "real" forces on the particle, such as contact forces , electromagnetic, gravitational, and nuclear forces.
In contrast, Newton's second law in a rotating frame of reference (a non-inertial frame of reference ), rotating at angular rate Ω about an axis, takes the form:

$$\mathbf{F}' = m\,\mathbf{a}_B,$$

which looks the same as in an inertial frame, but now the force F ′ is the resultant of not only F , but also additional terms (the paragraph following this equation presents the main points without detailed mathematics):

$$\mathbf{F}' = \mathbf{F} - 2m\,\boldsymbol{\Omega}\times\mathbf{v}_B - m\,\boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{x}_B) - m\,\frac{d\boldsymbol{\Omega}}{dt}\times\mathbf{x}_B,$$

where the angular rotation of the frame is expressed by the vector Ω pointing in the direction of the axis of rotation, and with magnitude equal to the angular rate of rotation Ω , symbol × denotes the vector cross product , vector x B locates the body and vector v B is the velocity of the body according to a rotating observer (different from the velocity seen by the inertial observer).
The extra terms in the force F ′ are the "fictitious" forces for this frame, whose causes are external to the system in the frame. The first extra term is the Coriolis force , the second the centrifugal force , and the third the Euler force . These terms all have these properties: they vanish when Ω = 0; that is, they are zero for an inertial frame (which, of course, does not rotate); they take on a different magnitude and direction in every rotating frame, depending upon its particular value of Ω ; they are ubiquitous in the rotating frame (affect every particle, regardless of circumstance); and they have no apparent source in identifiable physical sources, in particular, matter . Also, fictitious forces do not drop off with distance (unlike, for example, nuclear forces or electrical forces ). For example, the centrifugal force that appears to emanate from the axis of rotation in a rotating frame increases with distance from the axis.
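A minimal numerical sketch of these three fictitious-force terms in their standard form; the mass, rotation rate, position, and velocity below are arbitrary illustrations, not values from the article:

```python
import numpy as np

def fictitious_forces(m, omega, domega_dt, x_b, v_b):
    """Fictitious forces on a particle of mass m in a frame rotating with angular
    velocity omega (and angular acceleration domega_dt), where x_b and v_b are the
    particle's position and velocity as measured in the rotating frame."""
    coriolis    = -2.0 * m * np.cross(omega, v_b)
    centrifugal = -m * np.cross(omega, np.cross(omega, x_b))
    euler       = -m * np.cross(domega_dt, x_b)
    return coriolis, centrifugal, euler

# A 1 kg mass, 2 m from the axis, moving at 1 m/s in a frame rotating at 0.5 rad/s about z
omega = np.array([0.0, 0.0, 0.5])
cor, cen, eul = fictitious_forces(1.0, omega, np.zeros(3),
                                  np.array([2.0, 0.0, 0.0]),
                                  np.array([0.0, 1.0, 0.0]))
print(cor)  # [1.  0.  0.] N
print(cen)  # [0.5 0.  0.] N -- points away from the axis and grows with distance from it
print(eul)  # [0.  0.  0.] N -- zero because the rotation rate is constant
```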
All observers agree on the real forces, F ; only non-inertial observers need fictitious forces. The laws of physics in the inertial frame are simpler because unnecessary forces are not present.
In Newton's time the fixed stars were invoked as a reference frame, supposedly at rest relative to absolute space . In reference frames that were either at rest with respect to the fixed stars or in uniform translation relative to these stars, Newton's laws of motion were supposed to hold. In contrast, in frames accelerating with respect to the fixed stars, an important case being frames rotating relative to the fixed stars, the laws of motion did not hold in their simplest form, but had to be supplemented by the addition of fictitious forces , for example, the Coriolis force and the centrifugal force . Two experiments were devised by Newton to demonstrate how these forces could be discovered, thereby revealing to an observer that they were not in an inertial frame: the example of the tension in the cord linking two spheres rotating about their center of gravity, and the example of the curvature of the surface of water in a rotating bucket . In both cases, application of Newton's second law would not work for the rotating observer without invoking centrifugal and Coriolis forces to account for their observations (tension in the case of the spheres; parabolic water surface in the case of the rotating bucket).
As now known, the fixed stars are not fixed. Those that reside in the Milky Way turn with the galaxy, exhibiting proper motions . Those that are outside our galaxy (such as nebulae once mistaken to be stars) participate in their own motion as well, partly due to expansion of the universe , and partly due to peculiar velocities . [ 40 ] For instance, the Andromeda Galaxy is on a collision course with the Milky Way at a speed of 117 km/s. [ 41 ] The concept of inertial frames of reference is no longer tied to either the fixed stars or to absolute space. Rather, the identification of an inertial frame is based on the simplicity of the laws of physics in the frame.
The laws of nature take a simpler form in inertial frames of reference because in these frames one did not have to introduce inertial forces when writing down Newton's law of motion. [ 42 ]
In practice, using a frame of reference based upon the fixed stars as though it were an inertial frame of reference introduces little discrepancy. For example, the centrifugal acceleration of the Earth because of its rotation about the Sun is about thirty million times greater than that of the Sun about the galactic center. [ 43 ]
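A rough back-of-the-envelope check of that ratio, using a = v²/r for circular motion; the orbital speeds and radii below are approximate textbook values, not figures taken from the cited source:

```python
# Approximate values: Earth's orbit about the Sun and the Sun's orbit about the galactic centre
v_earth, r_earth = 2.98e4, 1.496e11     # m/s, m  (~29.8 km/s at 1 AU)
v_sun,   r_sun   = 2.3e5,  2.6e20       # m/s, m  (~230 km/s at roughly 8.4 kpc)

a_earth = v_earth**2 / r_earth          # ~5.9e-3 m/s^2
a_sun   = v_sun**2   / r_sun            # ~2.0e-10 m/s^2

print(a_earth, a_sun, a_earth / a_sun)  # ratio of roughly 3e7, i.e. about thirty million
```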
To illustrate further, consider the question: "Does the Universe rotate?" An answer might explain the shape of the Milky Way galaxy using the laws of physics, [ 44 ] although other observations might be more definitive; that is, provide larger discrepancies or less measurement uncertainty , like the anisotropy of the microwave background radiation or Big Bang nucleosynthesis . [ 45 ] [ 46 ] The flatness of the Milky Way depends on its rate of rotation in an inertial frame of reference. If its apparent rate of rotation is attributed entirely to rotation in an inertial frame, a different "flatness" is predicted than if it is supposed that part of this rotation is actually due to rotation of the universe and should not be included in the rotation of the galaxy itself. Based upon the laws of physics, a model is set up in which one parameter is the rate of rotation of the Universe. If the laws of physics agree more accurately with observations in a model with rotation than without it, we are inclined to select the best-fit value for rotation, subject to all other pertinent experimental observations. If no value of the rotation parameter is successful and theory is not within observational error, a modification of physical law is considered, for example, dark matter is invoked to explain the galactic rotation curve . So far, observations show any rotation of the universe is very slow, no faster than once every 6 × 10¹³ years (10⁻¹³ rad/yr), [ 47 ] and debate persists over whether there is any rotation. However, if rotation were found, interpretation of observations in a frame tied to the universe would have to be corrected for the fictitious forces inherent in such rotation in classical physics and special relativity, or interpreted as the curvature of spacetime and the motion of matter along the geodesics in general relativity. [ 48 ]
When quantum effects are important, there are additional conceptual complications that arise in quantum reference frames .
An accelerated frame of reference is often delineated as being the "primed" frame, and all variables that are dependent on that frame are notated with primes, e.g. x′ , y′ , a′ .
The vector from the origin of an inertial reference frame to the origin of an accelerated reference frame is commonly notated as R . Given a point of interest that exists in both frames, the vector from the inertial origin to the point is called r , and the vector from the accelerated origin to the point is called r′ .
From the geometry of the situation,

$$\mathbf{r} = \mathbf{R} + \mathbf{r}'.$$

Taking the first and second derivatives of this with respect to time,

$$\mathbf{v} = \mathbf{V} + \mathbf{v}', \qquad \mathbf{a} = \mathbf{A} + \mathbf{a}',$$

where V and A are the velocity and acceleration of the accelerated system with respect to the inertial system and v and a are the velocity and acceleration of the point of interest with respect to the inertial frame.
These equations allow transformations between the two coordinate systems; for example, Newton's second law can be written as

$$\mathbf{F} = m\,\mathbf{a} = m\,\mathbf{A} + m\,\mathbf{a}'.$$
When there is accelerated motion due to a force being exerted, inertia manifests itself. If an electric car designed to recharge its battery system when decelerating is switched to braking, the batteries are recharged, illustrating the physical strength of the manifestation of inertia. However, the manifestation of inertia does not prevent acceleration (or deceleration), for the manifestation of inertia occurs in response to change in velocity due to a force. Seen from the perspective of a rotating frame of reference, the manifestation of inertia appears to exert a force (either in the centrifugal direction, or in a direction orthogonal to an object's motion, the Coriolis effect ).
A common sort of accelerated reference frame is a frame that is both rotating and translating (an example is a frame of reference attached to a CD which is playing while the player is carried).
This arrangement leads to the equation (see Fictitious force for a derivation):

$$\mathbf{a} = \mathbf{a}' + \frac{d\boldsymbol{\omega}}{dt}\times\mathbf{r}' + 2\,\boldsymbol{\omega}\times\mathbf{v}' + \boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{r}') + \mathbf{A},$$

or, to solve for the acceleration in the accelerated frame,

$$\mathbf{a}' = \mathbf{a} - \frac{d\boldsymbol{\omega}}{dt}\times\mathbf{r}' - 2\,\boldsymbol{\omega}\times\mathbf{v}' - \boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{r}') - \mathbf{A}.$$

Multiplying through by the mass m gives the apparent force in the accelerated frame, $\mathbf{F}' = m\,\mathbf{a}'$:

$$\mathbf{F}' = \mathbf{F}_{\text{physical}} + \mathbf{F}'_{\text{Euler}} + \mathbf{F}'_{\text{Coriolis}} + \mathbf{F}'_{\text{centrifugal}} - m\,\mathbf{A},$$

where $\boldsymbol{\omega}$ is the angular velocity of the rotating frame and the fictitious forces are

$$\mathbf{F}'_{\text{Euler}} = -m\,\frac{d\boldsymbol{\omega}}{dt}\times\mathbf{r}', \qquad \mathbf{F}'_{\text{Coriolis}} = -2m\,\boldsymbol{\omega}\times\mathbf{v}', \qquad \mathbf{F}'_{\text{centrifugal}} = -m\,\boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{r}').$$
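A minimal numerical sketch of this transformation, assuming the standard rotating-and-translating-frame result reproduced above; numpy is used for the cross products and the turntable values are illustrative only:

```python
import numpy as np

def accel_in_accelerated_frame(a, A, omega, domega_dt, r_p, v_p):
    """Acceleration a' observed in a frame that translates with acceleration A and
    rotates with angular velocity omega (angular acceleration domega_dt), given the
    inertial-frame acceleration a and the position r' and velocity v' of the point
    as measured in the accelerated frame."""
    return (a - A
            - np.cross(domega_dt, r_p)
            - 2.0 * np.cross(omega, v_p)
            - np.cross(omega, np.cross(omega, r_p)))

# A point glued to a turntable rotating at 1 rad/s, 0.5 m from the axis: in the
# inertial frame it has the real centripetal acceleration a = -omega^2 * r', and in
# the rotating frame it should appear at rest (a' = 0).
omega = np.array([0.0, 0.0, 1.0])
r_p = np.array([0.5, 0.0, 0.0])
a = -(1.0**2) * r_p
a_prime = accel_in_accelerated_frame(a, np.zeros(3), omega, np.zeros(3), r_p, np.zeros(3))
print(a_prime)   # [0. 0. 0.] -- at rest in the rotating frame, as expected
```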
Inertial and non-inertial reference frames can be distinguished by the absence or presence of fictitious forces . [ 1 ] [ 2 ]
The effect of this being in the noninertial frame is to require the observer to introduce a fictitious force into his calculations…
The presence of fictitious forces indicates that the physical laws are not the simplest laws available; in terms of the special principle of relativity , a frame where fictitious forces are present is not an inertial frame: [ 49 ]
The equations of motion in a non-inertial system differ from the equations in an inertial system by additional terms called inertial forces. This allows us to detect experimentally the non-inertial nature of a system.
Bodies in non-inertial reference frames are subject to so-called fictitious forces (pseudo-forces); that is, forces that result from the acceleration of the reference frame itself and not from any physical force acting on the body. Examples of fictitious forces are the centrifugal force and the Coriolis force in rotating reference frames .
To apply the Newtonian definition of an inertial frame, the separation between "fictitious" forces and "real" forces must be made clear.
For example, consider a stationary object in an inertial frame. Being at rest, no net force is applied. But in a frame rotating about a fixed axis, the object appears to move in a circle, and is subject to centripetal force. How can it be decided that the rotating frame is a non-inertial frame? There are two approaches to this resolution: one approach is to look for the origin of the fictitious forces (the Coriolis force and the centrifugal force). It will be found there are no sources for these forces, no associated force carriers , no originating bodies. [ 50 ] A second approach is to look at a variety of frames of reference. For any inertial frame, the Coriolis force and the centrifugal force disappear, so application of the principle of special relativity would identify these frames where the forces disappear as sharing the same and the simplest physical laws, and hence rule that the rotating frame is not an inertial frame.
Newton examined this problem himself using rotating spheres. He pointed out that if the spheres are not rotating, the tension in the tying string is measured as zero in every frame of reference. [ 51 ] If the spheres only appear to rotate (that is, we are watching stationary spheres from a rotating frame), the zero tension in the string is accounted for by observing that the centripetal force is supplied by the centrifugal and Coriolis forces in combination, so no tension is needed. If the spheres really are rotating, the tension observed is exactly the centripetal force required by the circular motion. Thus, measurement of the tension in the string identifies the inertial frame: it is the one where the tension in the string provides exactly the centripetal force demanded by the motion as it is observed in that frame, and not a different value. That is, the inertial frame is the one where the fictitious forces vanish.
For linear acceleration , Newton expressed the idea of undetectability of straight-line accelerations held in common: [ 25 ]
If bodies, any how moved among themselves, are urged in the direction of parallel lines by equal accelerative forces, they will continue to move among themselves, after the same manner as if they had been urged by no such forces.
This principle generalizes the notion of an inertial frame. For example, an observer confined in a free-falling lift will assert that he himself is a valid inertial frame, even if he is accelerating under gravity, so long as he has no knowledge about anything outside the lift. So, strictly speaking, an inertial frame is a relative concept. With this in mind, inertial frames can collectively be defined as a set of frames which are stationary or moving at constant velocity with respect to each other, so that a single inertial frame is defined as an element of this set.
For these ideas to apply, everything observed in the frame has to be subject to a base-line, common acceleration shared by the frame itself. That situation would apply, for example, to the elevator example, where all objects are subject to the same gravitational acceleration, and the elevator itself accelerates at the same rate.
Inertial navigation systems use a cluster of gyroscopes and accelerometers to determine accelerations relative to inertial space. After a gyroscope is spun up in a particular orientation in inertial space, the law of conservation of angular momentum requires that it retain that orientation as long as no external forces are applied to it. [ 52 ] : 59 Three orthogonal gyroscopes establish an inertial reference frame, and the accelerometers measure acceleration relative to that frame. The accelerations, along with a clock, can then be used to calculate the change in position. Thus, inertial navigation is a form of dead reckoning that requires no external input, and therefore cannot be jammed by any external or internal signal source. [ 53 ]
A gyrocompass , employed for navigation of seagoing vessels, finds true north (geographic north). It does so not by sensing the Earth's magnetic field, but by using inertial space as its reference. [ 54 ] The outer casing of the gyrocompass device is held in such a way that it remains aligned with the local plumb line. When the gyroscope wheel inside the gyrocompass device is spun up, the way the gyroscope wheel is suspended causes the gyroscope wheel to gradually align its spinning axis with the Earth's axis. Alignment with the Earth's axis is the only direction for which the gyroscope's spinning axis can be stationary with respect to the Earth and not be required to change direction with respect to inertial space. After being spun up, a gyrocompass can reach the direction of alignment with the Earth's axis in as little as a quarter of an hour. [ 55 ] | https://en.wikipedia.org/wiki/Inertial_frame_of_reference |
In mathematics, inertial manifolds are concerned with the long term behavior of the solutions of dissipative dynamical systems . Inertial manifolds are finite-dimensional, smooth, invariant manifolds that contain the global attractor and attract all solutions exponentially quickly. Since an inertial manifold is finite-dimensional even if the original system is infinite-dimensional, and because most of the dynamics for the system takes place on the inertial manifold, studying the dynamics on an inertial manifold produces a considerable simplification in the study of the dynamics of the original system. [ 1 ]
In many physical applications, inertial manifolds express an interaction law between the small and large wavelength structures. Some say that the small wavelengths are enslaved by the large (e.g. synergetics ). Inertial manifolds may also appear as slow manifolds common in meteorology, or as the center manifold in any bifurcation . Computationally, numerical schemes for partial differential equations seek to capture the long term dynamics and so such numerical schemes form an approximate inertial manifold.
Consider the dynamical system in just two variables $p(t)$ and $q(t)$ and with parameter $a$: [ 2 ]
Hence the long term behavior of the original two dimensional dynamical system is given by the 'simpler' one dimensional dynamics on the inertial manifold $\mathcal{M}$, namely $\frac{dp}{dt} = ap - \frac{1}{1+2a}\,p^3$.
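A minimal sketch integrating this reduced one-dimensional model with a forward Euler scheme; the parameter value, step size, and initial condition are illustrative choices, not values from the article:

```python
def simulate_inertial_system(a=-0.1, p0=1.0, dt=0.01, steps=5000):
    """Integrate dp/dt = a*p - p**3 / (1 + 2*a), the dynamics on the inertial
    manifold of the two-variable example, by forward Euler."""
    p = p0
    for _ in range(steps):
        p += dt * (a * p - p**3 / (1.0 + 2.0 * a))
    return p

print(simulate_inertial_system())          # decays toward 0 for a < 0
print(simulate_inertial_system(a=0.1))     # approaches the equilibrium sqrt(a*(1 + 2*a)) ~ 0.346
```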
Let $u(t)$ denote a solution of a dynamical system. The solution $u(t)$ may be an evolving vector in $H = \mathbb{R}^n$ or may be an evolving function in an infinite-dimensional Banach space $H$.
In many cases of interest the evolution of $u(t)$ is determined as the solution of a differential equation in $H$, say $du/dt = F(u(t))$ with initial value $u(0) = u_0$.
In any case, we assume the solution of the dynamical system can be written in terms of a semigroup operator, or state transition matrix , $S : H \to H$ such that $u(t) = S(t)u_0$ for all times $t \geq 0$ and all initial values $u_0$.
In some situations we might consider only discrete values of time as in the dynamics of a map.
An inertial manifold [ 1 ] for a dynamical semigroup $S(t)$ is a smooth manifold $\mathcal{M}$ such that $\mathcal{M}$ is finite-dimensional, is positively invariant (that is, $S(t)\mathcal{M} \subseteq \mathcal{M}$ for all $t \geq 0$), and attracts all solutions exponentially quickly.
The restriction of the differential equation $du/dt = F(u)$ to the inertial manifold $\mathcal{M}$ is therefore a well defined finite-dimensional system called the inertial system . [ 1 ] Subtly, there is a difference between a manifold being attractive, and solutions on the manifold being attractive.
Nonetheless, under appropriate conditions the inertial system possesses so-called asymptotic completeness : [ 3 ] that is, every solution of the differential equation has a companion solution lying in $\mathcal{M}$ and producing the same behavior for large time; in mathematics, for all $u_0$ there exists $v_0 \in \mathcal{M}$ and possibly a time shift $\tau \geq 0$ such that $\operatorname{dist}(S(t)u_0, S(t+\tau)v_0) \to 0$ as $t \to \infty$.
Researchers in the 2000s generalized such inertial manifolds to time dependent (nonautonomous) and/or stochastic dynamical systems (e.g. [ 4 ] [ 5 ] )
Existence results that have been proved address inertial manifolds that are expressible as a graph. [ 1 ] The governing differential equation is rewritten more specifically in the form $du/dt + Au + f(u) = 0$ for unbounded self-adjoint closed operator $A$ with domain $D(A) \subset H$, and nonlinear operator $f : D(A) \to H$.
Typically, elementary spectral theory gives an orthonormal basis of $H$ consisting of eigenvectors $v_j$: $Av_j = \lambda_j v_j$, $j = 1, 2, \ldots$, for ordered eigenvalues $0 < \lambda_1 \leq \lambda_2 \leq \cdots$.
For some given number $m$ of modes, $P$ denotes the projection of $H$ onto the space spanned by $v_1, \ldots, v_m$, and $Q = I - P$ denotes the orthogonal projection onto the space spanned by $v_{m+1}, v_{m+2}, \ldots$.
We look for an inertial manifold expressed as the graph $\Phi : PH \to QH$.
For this graph to exist, the most restrictive requirement is the spectral gap condition [ 1 ] $\lambda_{m+1} - \lambda_m \geq c\,(\sqrt{\lambda_{m+1}} + \sqrt{\lambda_m})$, where the constant $c$ depends upon the system.
This spectral gap condition requires that the spectrum of $A$ contain sufficiently large gaps for existence to be guaranteed.
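A small sketch that tests the spectral gap condition for a given ordered list of eigenvalues; the eigenvalues and the constant c below are illustrative only, since in practice c depends on the nonlinearity of the particular system:

```python
import math

def spectral_gaps(eigenvalues, c):
    """Return the mode numbers m for which the spectral gap condition
    lambda_{m+1} - lambda_m >= c*(sqrt(lambda_{m+1}) + sqrt(lambda_m)) holds."""
    ok = []
    for m in range(1, len(eigenvalues)):
        lam_m, lam_next = eigenvalues[m - 1], eigenvalues[m]
        if lam_next - lam_m >= c * (math.sqrt(lam_next) + math.sqrt(lam_m)):
            ok.append(m)
    return ok

# Eigenvalues of -d^2/dx^2 on (0, pi) with Dirichlet conditions: lambda_j = j^2
eigs = [j * j for j in range(1, 21)]
print(spectral_gaps(eigs, c=1.0))
# every m from 1 to 19 qualifies (with equality), since (m+1)^2 - m^2 = (m+1) + m
```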
Several methods are proposed to construct approximations to inertial manifolds, [ 1 ] including the so-called intrinsic low-dimensional manifolds . [ 6 ] [ 7 ] The most popular way to approximate follows from the existence of a graph.
Define the $m$ slow variables $p(t) = Pu(t)$, and the 'infinite' fast variables $q(t) = Qu(t)$.
Then project the differential equation $du/dt + Au + f(u) = 0$ onto both $PH$ and $QH$ to obtain the coupled system $dp/dt + Ap + Pf(p+q) = 0$ and $dq/dt + Aq + Qf(p+q) = 0$.
For trajectories on the graph of an inertial manifold $\mathcal{M}$, the fast variable $q(t) = \Phi(p(t))$. Differentiating and using the coupled system form gives the differential equation for the graph:

$$\frac{\partial\Phi}{\partial p}\,\big[Ap + Pf(p + \Phi(p))\big] = A\Phi(p) + Qf(p + \Phi(p)).$$
This differential equation is typically solved approximately in an asymptotic expansion in 'small' $p$ to give an invariant manifold model, [ 8 ] or a nonlinear Galerkin method, [ 9 ] both of which use a global basis, whereas the so-called holistic discretisation uses a local basis. [ 10 ] Such approaches to approximation of inertial manifolds are very closely related to approximating center manifolds for which a web service exists to construct approximations for systems input by a user. [ 11 ] | https://en.wikipedia.org/wiki/Inertial_manifold |
An inertial measurement unit ( IMU ) is an electronic device that measures and reports a body's specific force , angular rate, and sometimes the orientation of the body, using a combination of accelerometers , gyroscopes , and sometimes magnetometers . When the magnetometer is included, IMUs are referred to as IMMUs. [ 1 ]
IMUs are typically used to maneuver modern vehicles including motorcycles, missiles, aircraft (an attitude and heading reference system ), including uncrewed aerial vehicles (UAVs), among many others, and spacecraft , including satellites and landers . Recent developments allow for the production of IMU-enabled GPS devices. An IMU allows a GPS receiver to work when GPS-signals are unavailable, such as in tunnels, inside buildings, or when electronic interference is present. [ 2 ]
IMUs are used in VR headsets and smartphones , and also in motion tracked game controllers like the Wii Remote .
An inertial measurement unit works by detecting linear acceleration using one or more accelerometers and rotational rate using one or more gyroscopes . [ 3 ] Some also include a magnetometer which is commonly used as a heading reference. Some IMUs, like Adafruit's 9-DOF IMU, include additional sensors, such as a temperature sensor. [ 4 ] Typical configurations contain one accelerometer, gyro, and magnetometer per axis for each of the three principal axes: pitch, roll and yaw .
IMUs are often incorporated into Inertial Navigation Systems , which utilize the raw IMU measurements to calculate attitude, angular rates, linear velocity, and position relative to a global reference frame. The IMU-equipped INS forms the backbone for the navigation and control of many commercial and military vehicles, such as crewed aircraft, missiles, ships, submarines, and satellites. IMUs are also essential components in the guidance and control of uncrewed systems such as UAVs , UGVs , and UUVs . Simpler versions of INSs termed Attitude and Heading Reference Systems utilize IMUs to calculate vehicle attitude with heading relative to magnetic north. The data collected from the IMU's sensors allows a computer to track the craft's position, using a method known as dead reckoning . This data is usually presented as Euler angles representing the rotations about the three primary axes, or as a quaternion .
In land vehicles, an IMU can be integrated into GPS based automotive navigation systems or vehicle tracking systems , giving the system a dead reckoning capability and the ability to gather as much accurate data as possible about the vehicle's current speed, turn rate, heading, inclination and acceleration, in combination with the vehicle's wheel speed sensor output and, if available, reverse gear signal, for purposes such as better traffic collision analysis.
Besides navigational purposes, IMUs serve as orientation sensors in many consumer products. Almost all smartphones and tablets contain IMUs as orientation sensors. Fitness trackers and other wearables may also include IMUs to measure motion, such as running. IMUs also have the ability to determine developmental levels of individuals when in motion by identifying specificity and sensitivity of specific parameters associated with running. Some gaming systems such as the remote controls for the Nintendo Wii use IMUs to measure motion. Low-cost IMUs have enabled the proliferation of the consumer drone industry. They are also frequently used for sports technology (technique training), [ 5 ] and animation applications. They are a competing technology for use in motion capture technology. [ 6 ] An IMU is at the heart of the balancing technology used in the Segway Personal Transporter .
In a navigation system, the data reported by the IMU is fed into a processor which calculates attitude, velocity and position. [ 7 ] A typical implementation, referred to as a strapdown inertial system, integrates the angular rate from the gyroscope to calculate the angular position. This is fused with the gravity vector measured by the accelerometers in a Kalman filter to estimate attitude. The attitude estimate is used to transform acceleration measurements into an inertial reference frame (hence the term inertial navigation) where they are integrated once to get linear velocity, and twice to get linear position. [ 8 ] [ 9 ] [ 10 ]
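A highly simplified sketch of one such strapdown update step. A real implementation would propagate attitude with quaternions or direction-cosine matrices, re-orthonormalize, and fuse measurements in a Kalman filter; the small-angle update, gravity model, and sample values here are assumptions for illustration only:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def strapdown_step(R, v, p, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """One dead-reckoning step: propagate the body-to-inertial attitude R with the
    gyro rate, rotate the measured specific force into the inertial frame, add
    gravity back, and integrate once for velocity and again for position."""
    R = R @ (np.eye(3) + skew(gyro) * dt)   # small-angle attitude update
    a_inertial = R @ accel + g              # accelerometers measure specific force a - g
    v = v + a_inertial * dt
    p = p + v * dt
    return R, v, p

# Level, non-rotating IMU accelerating forward at 1 m/s^2 for 10 s
R, v, p = np.eye(3), np.zeros(3), np.zeros(3)
for _ in range(1000):
    R, v, p = strapdown_step(R, v, p, gyro=np.zeros(3),
                             accel=np.array([1.0, 0.0, 9.81]), dt=0.01)
print(v)   # ~[10, 0, 0] m/s
print(p)   # ~[50, 0, 0] m, i.e. about 0.5 * 1 * 10**2
```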
For example, if an IMU installed in an aeroplane moving along a certain direction vector were to measure the plane's acceleration as 5 m/s² for 1 second, then after that 1 second the guidance computer would deduce that the plane must be traveling at 5 m/s and must be 2.5 m from its initial position (assuming v₀ = 0 and known starting position coordinates x₀, y₀, z₀). If combined with a mechanical paper map or a digital map archive (systems whose output is generally known as a moving map display, since the guidance system position output is often taken as the reference point, resulting in a moving map), the guidance system could use this method to show a pilot where the plane is located geographically at a given moment, as with a GPS navigation system, but without the need to communicate with or receive communication from any outside components such as satellites or land radio transponders. External sources are still used to correct drift errors, however, and since the position update frequency allowed by inertial navigation systems can be high, the vehicle's motion on the map display can be perceived as smooth. This method of navigation is called dead reckoning .
One of the earliest units was designed and built by Ford Instrument Company for the USAF to help aircraft navigate in flight without any input from outside the aircraft. Called the Ground-Position Indicator , once the pilot entered the aircraft's longitude and latitude at takeoff, the unit would show the pilot the longitude and latitude of the aircraft in relation to the ground. [ 11 ]
Positional tracking systems like GPS [ 12 ] can be used to continually correct drift errors (an application of the Kalman filter ).
A major disadvantage of using IMUs for navigation is that they typically suffer from accumulated error. Because the guidance system is continually integrating acceleration with respect to time to calculate velocity and position (see dead reckoning ), any measurement errors, however small, are accumulated over time. This leads to 'drift': an ever-increasing difference between where the system thinks it is located and the actual location. Due to integration, a constant error in acceleration results in a linear error growth in velocity and a quadratic error growth in position. A constant error in attitude rate (gyro) results in a quadratic error growth in velocity and a cubic error growth in position. [ 13 ]
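A short sketch of how constant sensor biases propagate through the dead-reckoning integrations, consistent with the growth rates just stated; the closed-form expressions assume perfect initial conditions and a small-angle approximation, and the bias values are arbitrary examples:

```python
def position_error_from_accel_bias(b_a, t):
    """Constant accelerometer bias b_a (m/s^2): linear velocity error b_a*t,
    quadratic position error 0.5*b_a*t**2."""
    return 0.5 * b_a * t**2

def position_error_from_gyro_bias(b_g, g, t):
    """Constant gyro bias b_g (rad/s) tilts the attitude estimate linearly in time,
    which mis-resolves gravity g into a horizontal acceleration error ~ g*b_g*t,
    giving a quadratic velocity error and a cubic position error g*b_g*t**3/6."""
    return g * b_g * t**3 / 6.0

t = 600.0                                             # ten minutes
print(position_error_from_accel_bias(1e-3, t))        # 180 m from a ~0.1 mg accelerometer bias
print(position_error_from_gyro_bias(1e-5, 9.81, t))   # ~3.5 km from a ~2 deg/h gyro bias
```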
A very wide variety of IMUs exists, [ 14 ] depending on application types, with performance ranging:
To get a rough idea, this means that, for a single, uncorrected accelerometer, the cheapest (at 100 mg) loses its ability to give 50-meter accuracy after around 10 seconds, while the best accelerometer (at 10 μg) loses its 50-meter accuracy after around 17 minutes. [ 15 ]
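A quick check of those figures, treating the quoted accelerometer error as a constant bias so that the position error grows as ½bt²; g ≈ 9.81 m/s² is assumed to convert from g-units:

```python
import math

def time_to_position_error(bias_in_g, error_m=50.0, g=9.81):
    """Time for a constant accelerometer bias to accumulate a given position error,
    from error = 0.5 * bias * t**2."""
    return math.sqrt(2.0 * error_m / (bias_in_g * g))

print(time_to_position_error(100e-3))         # ~10 s for a 100 mg accelerometer
print(time_to_position_error(10e-6) / 60.0)   # ~17 minutes for a 10 microgram-g accelerometer
```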
The accuracy of the inertial sensors inside a modern inertial measurement unit (IMU) has a more complex impact on the performance of an inertial navigation system (INS). [ 16 ]
Gyroscope and accelerometer sensor behavior is often represented by a model based on the following errors, assuming they have the proper measurement range and bandwidth: [ 17 ]
All these errors depend on various physical phenomena specific to each sensor technology. Depending on the targeted applications, and to be able to make the proper sensor choice, it is very important to consider the needs regarding stability, repeatability, and environment sensitivity (mainly thermal and mechanical environments), over both the short and long term.
The performance targeted by an application is, most of the time, better than a sensor's absolute performance. However, sensor behavior is repeatable over time, with greater or lesser accuracy, and therefore can be assessed and compensated for to enhance its performance.
This real-time performance enhancement is based on models of both the sensors and the IMU. The complexity of these models is chosen according to the needed performance and the type of application considered. The ability to define these models is part of sensor and IMU manufacturers' know-how.
Sensor and IMU models are computed in factories through a dedicated calibration sequence using multi-axis turntables and climatic chambers. They can either be computed for each individual product or be generic for the whole production. Calibration will typically improve a sensor's raw performance by at least two decades, i.e. two orders of magnitude.
High performance IMUs, or IMUs designed to operate under harsh conditions, are very often suspended by shock absorbers. These shock absorbers are required to master three effects:
Suspended IMUs can offer very high performance, even when submitted to harsh environments. However, to reach such performance, it is necessary to compensate for three main resulting behaviors:
Decreasing these errors tends to push IMU designers to increase processing frequencies, which becomes easier using recent digital technologies. However, developing algorithms able to cancel these errors requires deep knowledge of inertial sensing and close familiarity with sensor/IMU design.
On the other hand, while suspension can enable an increase in IMU performance, it has the side effect of increasing size and mass.
A wireless IMU is known as a WIMU. [ 18 ] [ 19 ] [ 20 ] [ 21 ]
Hemispherical resonator gyroscope – Type of gyroscope
PIGA accelerometer – Pendulous Integrating Gyroscopic Accelerometer, an inertial guidance instrument
Schuler tuning – Inertial navigation design principle
Vibrating structure gyroscope – Inexpensive gyroscope based on vibration
Intelsat 708 – Chinese rocket failure, cause of failure later determined to be the failure of the Inertial Measurement Unit | https://en.wikipedia.org/wiki/Inertial_measurement_unit |
An inertial navigation system ( INS ; also inertial guidance system , inertial instrument ) is a navigation device that uses motion sensors ( accelerometers ), rotation sensors ( gyroscopes ) and a computer to continuously calculate by dead reckoning the position, the orientation, and the velocity (direction and speed of movement) of a moving object without the need for external references. [ 1 ] Often the inertial sensors are supplemented by a barometric altimeter and sometimes by magnetic sensors ( magnetometers ) and/or speed measuring devices. INSs are used on mobile robots [ 2 ] [ 3 ] and on vehicles such as ships , aircraft , submarines , guided missiles , and spacecraft . [ 4 ] Older INS systems generally used an inertial platform as their mounting point to the vehicle and the terms are sometimes considered synonymous.
Inertial navigation is a self-contained navigation technique in which measurements provided by accelerometers and gyroscopes are used to track the position and orientation of an object relative to a known starting point, orientation and velocity. Inertial measurement units (IMUs) typically contain three orthogonal rate-gyroscopes and three orthogonal accelerometers, measuring angular velocity and linear acceleration respectively. By processing signals from these devices it is possible to track the position and orientation of a device.
An inertial navigation system includes at least a computer and a platform or module containing accelerometers , gyroscopes , or other motion-sensing devices. The INS is initially provided with its position and velocity from another source (a human operator, a GPS satellite receiver, etc.) accompanied with the initial orientation and thereafter computes its own updated position and velocity by integrating information received from the motion sensors. The advantage of an INS is that it requires no external references in order to determine its position, orientation, or velocity once it has been initialized.
An INS can detect a change in its geographic position (a move east or north, for example), a change in its velocity (speed and direction of movement) and a change in its orientation (rotation about an axis). It does this by measuring the linear acceleration and angular velocity applied to the system. Since it requires no external reference (after initialization), it is immune to jamming and deception.
Gyroscopes measure the angular displacement of the sensor frame with respect to the inertial reference frame . By using the original orientation of the system in the inertial reference frame as the initial condition and integrating the angular displacement, the system's current orientation is known at all times. This can be thought of as the ability of a blindfolded passenger in a car to feel the car turn left and right or tilt up and down as the car ascends or descends hills. Based on this information alone, the passenger knows what direction the car is facing, but not how fast or slow it is moving, or whether it is sliding sideways.
Accelerometers measure the linear acceleration of the moving vehicle in the sensor or body frame, but in directions that can only be measured relative to the moving system (since the accelerometers are fixed to the system and rotate with the system, but are not aware of their own orientation). This can be thought of as the ability of a blindfolded passenger in a car to feel themself pressed back into their seat as the vehicle accelerates forward or pulled forward as it slows down; and feel themself pressed down into their seat as the vehicle accelerates up a hill or rise up out of their seat as the car passes over the crest of a hill and begins to descend. Based on this information alone, they know how the vehicle is accelerating relative to itself; that is, whether it is accelerating forward, backward, left, right, up (toward the car's ceiling), or down (toward the car's floor), measured relative to the car, but not the direction relative to the Earth, since they did not know what direction the car was facing relative to the Earth when they felt the accelerations.
However, by tracking both the current angular velocity of the system and the current linear acceleration of the system measured relative to the moving system, it is possible to determine the linear acceleration of the system in the inertial reference frame. Performing integration on the inertial accelerations (using the original velocity as the initial conditions) using the correct kinematic equations yields the inertial velocities of the system and integration again (using the original position as the initial condition) yields the inertial position. In our example, if the blindfolded passenger knew how the car was pointed and what its velocity was before they were blindfolded, and if they are able to keep track of both how the car has turned and how it has accelerated and decelerated since, then they can accurately know the current orientation, position, and velocity of the car at any time.
Inertial navigation is used in a wide range of applications including the navigation of aircraft, tactical and strategic missiles, spacecraft, submarines and ships. It is also embedded in some mobile phones for purposes of mobile phone location and tracking. [ 5 ] [ 6 ] Recent advances in the construction of microelectromechanical systems (MEMS) have made it possible to manufacture small and light inertial navigation systems. These advances have widened the range of possible applications to include areas such as human and animal motion capture .
Inertial navigation systems are used in many different moving objects. However, their cost and complexity place constraints on the environments in which they are practical for use.
To promote the effective use of inertial technology, a technical working group for inertial sensors was established in Germany as early as 1965 to bring together the users, manufacturers and researchers of inertial sensors. This working group has developed continuously and is today known as the DGON ISA (Inertial Sensors and Applications) Symposium, the leading conference on inertial technologies for more than 60 years. The DGON / IEEE ISA symposium, with about 200 international attendees, is held annually in October in Germany. The publications of all DGON ISA conferences over more than 60 years are accessible.
All inertial navigation systems suffer from integration drift: small errors in the measurement of acceleration and angular velocity are integrated into progressively larger errors in velocity, which are compounded into still greater errors in position. [ 7 ] [ 8 ] Since the new position is calculated from the previous calculated position and the measured acceleration and angular velocity, these errors accumulate roughly proportionally to the time since the initial position was input. Even the best accelerometers, with a standard error of 10 micro-g, would accumulate a 50-meter (164-ft) error within 17 minutes. [ 9 ] Therefore, the position must be periodically corrected by input from some other type of navigation system.
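The quoted figure can be sanity-checked with the usual constant-bias error model: a fixed accelerometer bias b, integrated twice, grows into a position error of ½·b·t². A minimal sketch (Python, using the 10 micro-g and 17 minute values from the text; everything else is idealized):

```python
# Sanity check of the quoted drift figure: a constant accelerometer bias,
# integrated twice, produces a position error of 0.5 * bias * t^2.
g = 9.81                 # m/s^2
bias = 10e-6 * g         # 10 micro-g expressed in m/s^2
t = 17 * 60              # 17 minutes in seconds

position_error = 0.5 * bias * t ** 2
print(f"{position_error:.1f} m after {t/60:.0f} min")  # ~51 m, matching the ~50 m quoted
```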
Accordingly, inertial navigation is usually used to supplement other navigation systems, providing a higher degree of accuracy than is possible with the use of any single system. For example, if, in terrestrial use, the inertially tracked velocity is intermittently updated to zero by stopping, the position will remain precise for a much longer time, a so-called zero velocity update . In aerospace particularly, other measurement systems are used to determine INS inaccuracies, e.g. the Honeywell LaseRefV inertial navigation system uses GPS and air data computer outputs to maintain required navigation performance . The navigation error rises as the sensitivity of the sensors used decreases. Currently, devices combining different sensors are being developed, e.g. the attitude and heading reference system . Because the navigation error is mainly influenced by the numerical integration of angular rates and accelerations, the pressure reference system was developed to use only one numerical integration of the angular rate measurements.
Estimation theory in general, and Kalman filtering in particular, [ 10 ] provide a theoretical framework for combining information from various sensors. One of the most common alternative sensors is a satellite navigation radio such as GPS , which can be used for all kinds of vehicles with direct sky visibility. Indoor applications can use pedometers , distance measurement equipment, or other kinds of position sensors . By properly combining the information from an INS and other systems ( GPS ), the errors in position and velocity remain bounded rather than growing without limit. Furthermore, INS can be used as a short-term fallback while GPS signals are unavailable, for example when a vehicle passes through a tunnel.
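As an illustration of the sensor-fusion idea, the sketch below implements a one-dimensional, loosely coupled INS/GPS Kalman filter: the INS acceleration drives the high-rate predict step, and occasional GPS position fixes drive the update step. All rates, noise values and the motion profile are assumptions chosen for the example, not values from the text.

```python
import numpy as np

# Minimal 1-D loosely coupled INS/GPS fusion sketch (illustrative only).
dt = 0.01                                  # INS step (100 Hz), assumed
F = np.array([[1, dt], [0, 1]])            # state transition for [pos, vel]
B = np.array([[0.5 * dt**2], [dt]])        # how acceleration enters the state
H = np.array([[1.0, 0.0]])                 # GPS observes position only
Q = np.diag([1e-4, 1e-3])                  # process noise (INS drift), assumed
R = np.array([[4.0]])                      # GPS variance, ~2 m sigma, assumed

x = np.zeros((2, 1))                       # initial [position, velocity]
P = np.eye(2)

def predict(x, P, accel):
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, gps_pos):
    y = np.array([[gps_pos]]) - H @ x       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Example: constant 0.2 m/s^2 acceleration, GPS fix once per second.
rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 0.0
for step in range(500):
    accel_meas = 0.2 + rng.normal(0, 0.02)          # noisy INS acceleration
    true_vel += 0.2 * dt
    true_pos += true_vel * dt
    x, P = predict(x, P, accel_meas)
    if step % 100 == 0:                             # 1 Hz GPS
        x, P = update(x, P, true_pos + rng.normal(0, 2.0))
print(f"estimated position {x[0,0]:.2f} m, true {true_pos:.2f} m")
```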
In 2011, GPS jamming at the civilian level became a governmental concern. [ 11 ] The relative ease with which these systems can be jammed has motivated the military to reduce navigation dependence on GPS technology. [ 12 ] Because inertial navigation sensors, unlike GPS, do not depend on radio signals, they cannot be jammed. [ 13 ] In 2012, the U.S. Army Research Laboratory reported a method to merge measurements from 10 pairs of MEMS gyroscopes and accelerometers (plus occasional GPS), reducing the positional error for a projectile by two thirds. The algorithm can correct for systemic biases in individual sensors, using both GPS and a heuristic based on the gun-firing acceleration force. If one sensor consistently overestimates or underestimates distance, the system can adjust the corrupted sensor's contributions to the final calculation. [ 14 ]
Inertial navigation systems were originally developed for rockets . American rocketry pioneer Robert Goddard experimented with rudimentary gyroscopic systems. Goddard's systems were of great interest to contemporary German pioneers including Wernher von Braun . The systems entered more widespread use with the advent of spacecraft , guided missiles , and commercial airliners .
Early German World War II V2 guidance systems combined two gyroscopes and a lateral accelerometer with a simple analog computer to adjust the azimuth for the rocket in flight. Analog computer signals were used to drive four graphite rudders in the rocket exhaust for flight control. The GN&C (Guidance, Navigation, and Control) system for the V2 provided many innovations as an integrated platform with closed loop guidance. At the end of the war von Braun engineered the surrender of 500 of his top rocket scientists, along with plans and test vehicles, to the Americans. They arrived at Fort Bliss, Texas in 1945 under the provisions of Operation Paperclip and were subsequently moved to Huntsville, Alabama , in 1950 [ 15 ] where they worked for U.S. Army rocket research programs.
In the early 1950s, the US government wanted to insulate itself against over-dependency on the German team for military applications, including the development of a fully domestic missile guidance program. The MIT Instrumentation Laboratory (later to become the Charles Stark Draper Laboratory , Inc.) was chosen by the Air Force Western Development Division to provide a self-contained guidance system backup to Convair in San Diego for the new Atlas intercontinental ballistic missile [ 16 ] [ 17 ] [ 18 ] [ 19 ] (construction and testing were completed by the Arma Division of American Bosch Arma). The technical monitor for the MIT task was engineer Jim Fletcher, who later served as NASA Administrator. The Atlas guidance system was to be a combination of an on-board autonomous system and a ground-based tracking and command system. The self-contained system finally prevailed in ballistic missile applications for obvious reasons. In space exploration, a mixture of the two remains.
In the summer of 1952, Dr. Richard Battin and Dr. J. Halcombe "Hal" Laning, Jr. , researched computational based solutions to guidance and undertook the initial analytical work on the Atlas inertial guidance in 1954. Other key figures at Convair were Charlie Bossart, the Chief Engineer, and Walter Schweidetzky, head of the guidance group. Schweidetzky had worked with von Braun at Peenemünde during World War II.
The initial Delta guidance system assessed the difference in position from a reference trajectory. A velocity to be gained (VGO) calculation is made to correct the current trajectory with the objective of driving VGO to zero. The mathematics of this approach were fundamentally valid, but the approach was dropped because of the challenges of accurate inertial guidance and limited analog computing power. The challenges faced by the Delta efforts were overcome by the Q system (see Q-guidance ) of guidance. The Q system's revolution was to bind the challenges of missile guidance (and the associated equations of motion) in the matrix Q. The Q matrix represents the partial derivatives of the velocity with respect to the position vector. A key feature of this approach allowed the components of the vector cross product (v × dv/dt) to be used as the basic autopilot rate signals—a technique that became known as cross-product steering . The Q-system was presented at the first Technical Symposium on Ballistic Missiles held at the Ramo-Wooldridge Corporation in Los Angeles on 21 and 22 June 1956. The Q system was classified information through the 1960s. Derivations of this guidance are used for today's missiles.
In February 1961 NASA awarded MIT a contract for preliminary design study of a guidance and navigation system for the Apollo program . MIT and the Delco Electronics Div. of General Motors Corp. were awarded the joint contract for design and production of the Apollo Guidance and Navigation systems for the Command Module and the Lunar Module. Delco produced the IMUs ( Inertial Measurement Units ) for these systems, Kollsman Instrument Corp. produced the Optical Systems, and the Apollo Guidance Computer was built by Raytheon under subcontract. [ 20 ] [ 21 ]
For the Space Shuttle , open loop guidance was used to guide the Shuttle from lift-off until Solid Rocket Booster (SRB) separation. After SRB separation the primary Space Shuttle guidance is named PEG (Powered Explicit Guidance). PEG takes into account both the Q system and the predictor-corrector attributes of the original "Delta" System (PEG Guidance). Although many updates to the Shuttle's navigation system had taken place over the last 30 years (e.g. GPS in the OI-22 build), the guidance core of the Shuttle GN&C system had evolved little. Within a crewed system, a human interface to the guidance system is needed. As astronauts are the customers for the system, many new teams were formed that touch GN&C, since it is a primary interface for "flying" the vehicle.
One example of a popular INS for commercial aircraft was the Delco Carousel , which provided partial automation of navigation in the days before complete flight management systems became commonplace. The Carousel allowed pilots to enter 9 waypoints at a time and then guided the aircraft from one waypoint to the next using an INS to determine aircraft position and velocity. Boeing Corporation subcontracted the Delco Electronics Div. of General Motors to design and build the first production Carousel systems for the early models (-100, -200 and -300) of the 747 aircraft. The 747 utilized three Carousel systems operating in concert for reliability purposes. The Carousel system and derivatives thereof were subsequently adopted for use in many other commercial and military aircraft. The USAF C-141 was the first military aircraft to utilize the Carousel in a dual system configuration, followed by the C-5A which utilized the triple INS configuration, similar to the 747. The KC-135A fleet was fitted with a single Carousel IV-E system that could operate as a stand-alone INS or be aided by the AN/APN-81 or AN/APN-218 Doppler radar . Some special-mission variants of the C-135 were fitted with dual Carousel IV-E INSs. ARINC Characteristic 704 defines the INS used in commercial air transport.
INSs contain Inertial Measurement Units (IMUs) which have angular and linear accelerometers (for changes in position); some IMUs include a gyroscopic element (for maintaining an absolute angular reference).
Angular accelerometers measure how the vehicle is rotating in space. Generally, there is at least one sensor for each of the three axes: pitch (nose up and down), yaw (nose left and right) and roll (clockwise or counter-clockwise from the cockpit).
Linear accelerometers measure non-gravitational accelerations [ 22 ] of the vehicle. Since it can move in three axes (up and down, left and right, forward and back), there is a linear accelerometer for each axis.
A computer continually calculates the vehicle's current position. First, for each of the six degrees of freedom (x,y,z and θ x , θ y and θ z ), it integrates over time the sensed acceleration, together with an estimate of gravity, to calculate the current velocity. Then it integrates the velocity to calculate the current position.
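A minimal sketch of that double integration for one translational time step is shown below; it assumes the attitude matrix C_nb is already known (maintained by the gyroscope processing) and ignores the Earth-rate, transport-rate and Coriolis corrections that a real mechanization would include.

```python
import numpy as np

# Minimal strapdown position update for one time step (illustrative).
# Body-frame specific force is rotated into the navigation frame with the
# current attitude matrix C_nb, gravity is added back, and the result is
# integrated once for velocity and again for position.
def step(pos, vel, C_nb, f_body, dt, g=np.array([0.0, 0.0, 9.81])):
    a_nav = C_nb @ f_body + g        # NED convention: gravity points "down" (+z)
    vel = vel + a_nav * dt
    pos = pos + vel * dt
    return pos, vel

# Example: level attitude, 1 m/s^2 forward specific force for 10 s at 100 Hz.
pos = np.zeros(3); vel = np.zeros(3); C_nb = np.eye(3)
f_body = np.array([1.0, 0.0, -9.81])   # accelerometer also senses the reaction to gravity
for _ in range(1000):
    pos, vel = step(pos, vel, C_nb, f_body, 0.01)
print(pos)   # roughly [50, 0, 0]: x = 0.5 * 1 * 10^2
```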
Inertial guidance is difficult without computers. The desire to use inertial guidance in the Minuteman missile and Project Apollo drove early attempts to miniaturize computers.
Inertial guidance systems are now usually combined with satellite navigation systems through a digital filtering system. The inertial system provides short term data, while the satellite system corrects accumulated errors of the inertial system.
An inertial guidance system that will operate near the surface of the earth must incorporate Schuler tuning so that its platform will continue pointing towards the center of the Earth as a vehicle moves from place to place.
Some systems place the linear accelerometers on a gimballed gyrostabilized platform. The gimbals are a set of three rings, each with a pair of bearings initially at right angles. They let the platform twist about any rotational axis (or, rather, they let the platform keep the same orientation while the vehicle rotates around it). There are two gyroscopes (usually) on the platform.
Two gyroscopes are used to cancel gyroscopic precession , the tendency of a gyroscope to twist at right angles to an input torque. By mounting a pair of gyroscopes (of the same rotational inertia and spinning at the same speed in opposite directions) at right angles the precessions are cancelled and the platform will resist twisting. [ citation needed ]
This system allows a vehicle's roll, pitch and yaw angles to be measured directly at the bearings of the gimbals. Relatively simple electronic circuits can be used to add up the linear accelerations, because the directions of the linear accelerometers do not change.
The big disadvantage of this scheme is that it uses many expensive precision mechanical parts. It also has moving parts that can wear out or jam and is vulnerable to gimbal lock . The primary guidance system of the Apollo spacecraft used a three-axis gyrostabilized platform, feeding data to the Apollo Guidance Computer . Maneuvers had to be carefully planned to avoid gimbal lock.
Gimbal lock constrains maneuvering and it would be beneficial to eliminate the slip rings and bearings of the gimbals. Therefore, some systems use fluid bearings or a flotation chamber to mount a gyrostabilized platform. These systems can have very high precisions (e.g., Advanced Inertial Reference Sphere ). Like all gyrostabilized platforms, this system runs well with relatively slow, low-power computers.
The fluid bearings are pads with holes through which pressurized inert gas (such as helium) or oil presses against the spherical shell of the platform. The fluid bearings are very slippery and the spherical platform can turn freely. There are usually four bearing pads, mounted in a tetrahedral arrangement to support the platform.
In premium systems, the angular sensors are usually specialized transformer coils made in a strip on a flexible printed circuit board . Several coil strips are mounted on great circles around the spherical shell of the gyrostabilized platform. Electronics outside the platform uses similar strip-shaped transformers to read the varying magnetic fields produced by the transformers wrapped around the spherical platform. Whenever a magnetic field changes shape, or moves, it will cut the wires of the coils on the external transformer strips. The cutting generates an electric current in the external strip-shaped coils and electronics can measure that current to derive angles.
Cheap systems sometimes use bar codes to sense orientations and use solar cells or a single transformer to power the platform. Some small missiles have powered the platform with light from a window or optic fibers to the motor. A research topic is to suspend the platform with pressure from exhaust gases. Data is returned to the outside world via the transformers, or sometimes LEDs communicating with external photodiodes .
Lightweight digital computers permit the system to eliminate the gimbals, creating strapdown systems, so called because their sensors are simply strapped to the vehicle. This reduces the cost, eliminates gimbal lock , removes the need for some calibrations and increases the reliability by eliminating some of the moving parts. Angular rate sensors called rate gyros measure the angular velocity of the vehicle.
A strapdown system needs a dynamic measurement range several hundred times that required by a gimballed system. That is, it must integrate the vehicle's attitude changes in pitch, roll and yaw, as well as gross movements. Gimballed systems could usually do well with update rates of 50–60 Hz. However, strapdown systems normally update about 2000 Hz. The higher rate is needed to let the navigation system integrate the angular rate into an attitude accurately.
The data updating algorithms ( direction cosines or quaternions ) involved are too complex to be accurately performed except by digital electronics. However, digital computers are now so inexpensive and fast that rate gyro systems can now be practically used and mass-produced. The Apollo lunar module used a strapdown system in its backup Abort Guidance System (AGS).
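As an illustration of the quaternion variant of that update, the sketch below propagates an attitude quaternion from body angular rates using simple first-order integration and re-normalization; a production strapdown algorithm would typically add higher-order coning corrections.

```python
import numpy as np

# Sketch of a single quaternion attitude-update step from rate-gyro output,
# the kind of computation a strapdown system repeats at ~2000 Hz.
def quat_update(q, omega_body, dt):
    """q = [w, x, y, z]; omega_body = angular rate (rad/s) in body axes."""
    wx, wy, wz = omega_body
    # Quaternion derivative: q_dot = 0.5 * Omega(omega) * q
    Omega = np.array([[0, -wx, -wy, -wz],
                      [wx,  0,  wz, -wy],
                      [wy, -wz,  0,  wx],
                      [wz,  wy, -wx,  0]])
    q = q + 0.5 * Omega @ q * dt
    return q / np.linalg.norm(q)          # re-normalise to suppress numerical drift

# Example: 90 deg/s yaw for one second at a 2000 Hz update rate.
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(2000):
    q = quat_update(q, np.array([0.0, 0.0, np.radians(90.0)]), 1 / 2000)
print(q)   # approx [cos 45°, 0, 0, sin 45°] = [0.7071, 0, 0, 0.7071]
```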
Strapdown systems are nowadays commonly used in commercial and military applications (aircraft, ships, ROVs , missiles , etc.). State-of-the-art strapdown systems are based upon ring laser gyroscopes , fibre optic gyroscopes or hemispherical resonator gyroscopes . They use digital electronics and advanced digital filtering techniques such as the Kalman filter .
The orientation of a gyroscope system can sometimes also be inferred simply from its position history (e.g., GPS). This is, in particular, the case with planes and cars, where the velocity vector usually implies the orientation of the vehicle body.
For example, Honeywell 's Align in Motion [ 23 ] is an initialization process where the initialization occurs while the aircraft is moving, in the air or on the ground. This is accomplished using GPS and an inertial reasonableness test, thereby allowing commercial data integrity requirements to be met. This process has been FAA certified to recover pure INS performance equivalent to stationary alignment procedures for civilian flight times up to 18 hours.
It avoids the need for gyroscope batteries on aircraft.
Less-expensive navigation systems, intended for use in automobiles, may use a vibrating structure gyroscope to detect changes in heading and the odometer pickup to measure distance covered along the vehicle's track. This type of system is much less accurate than a higher-end INS, but it is adequate for the typical automobile application where GPS is the primary navigation system and dead reckoning is only needed to fill gaps in GPS coverage when buildings or terrain block the satellite signals.
If a standing wave is induced in a hemispheric resonant structure and then the resonant structure is rotated, the spherical harmonic standing wave rotates through an angle different from the quartz resonator structure due to the Coriolis force. The movement of the outer case with respect to the standing wave pattern is proportional to the total rotation angle and can be sensed by appropriate electronics. The system resonators are machined from fused quartz due to its excellent mechanical properties. The electrodes that drive and sense the standing waves are deposited directly onto separate quartz structures that surround the resonator. These gyros can operate in either a whole angle mode (which gives them nearly unlimited rate capability) or a force rebalance mode that holds the standing wave in a fixed orientation with respect to the gyro housing (which gives them much better accuracy).
This system has almost no moving parts and is very accurate. However it is still relatively expensive due to the cost of the precision ground and polished hollow quartz hemispheres. Northrop Grumman currently manufactures IMUs ( inertial measurement units ) for spacecraft that use HRGs. These IMUs have demonstrated extremely high reliability since their initial use in 1996. [ 24 ] Safran manufactures large numbers of HRG based inertial navigation systems dedicated to a wide range of applications. [ 25 ]
These products include "tuning fork gyros". Here, the gyro is designed as an electronically driven tuning fork, often fabricated out of a single piece of quartz or silicon. Such gyros operate in accordance with the dynamic theory that when an angle rate is applied to a translating body, a Coriolis force is generated.
This system is usually integrated on a silicon chip. It has two mass-balanced quartz tuning forks, arranged "handle-to-handle" so forces cancel. Aluminum electrodes evaporated onto the forks and the underlying chip both drive and sense the motion. The system is both manufacturable and inexpensive. Since quartz is dimensionally stable, the system can be accurate.
As the forks are twisted about the axis of the handle, the vibration of the tines tends to continue in the same plane of motion. This motion has to be resisted by electrostatic forces from the electrodes under the tines. By measuring the difference in capacitance between the two tines of a fork, the system can determine the rate of angular motion.
Current state-of-the-art non-military technology (as of 2005) can build small solid-state sensors that can measure human body movements. These devices have no moving parts and weigh about 50 grams (2 ounces).
Solid-state devices using the same physical principles are used for image stabilization in small cameras or camcorders. These can be extremely small, around 5 millimetres (0.20 inches) and are built with microelectromechanical systems (MEMS) technologies. [ 26 ]
Sensors based on magnetohydrodynamic principles can be used to measure angular velocities.
MEMS gyroscopes typically rely on the Coriolis effect to measure angular velocity. It consists of a resonating proof mass mounted in silicon. The gyroscope is, unlike an accelerometer, an active sensor. The proof mass is pushed back and forth by driving combs. A rotation of the gyroscope generates a Coriolis force that is acting on the mass which results in a motion in a different direction. The motion in this direction is measured by electrodes and represents the rate of turn. [ 27 ]
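The magnitude of the effect being sensed can be illustrated with the Coriolis relation a_c = −2 Ω × v; the numbers below (proof mass, drive velocity, rotation rate) are illustrative assumptions only.

```python
import numpy as np

# Back-of-envelope Coriolis force on a MEMS proof mass (illustrative numbers).
# The mass is driven along x; a rotation about z produces a Coriolis
# acceleration along y, which is what the sense electrodes pick up.
m = 1e-9                                   # proof mass in kg (assumed, ~1 microgram)
v_drive = np.array([0.1, 0.0, 0.0])        # drive velocity amplitude, m/s (assumed)
omega = np.array([0.0, 0.0, np.radians(100.0)])   # 100 deg/s rotation about z

a_coriolis = -2.0 * np.cross(omega, v_drive)      # a_c = -2 * Omega x v
print(m * a_coriolis)                      # force vector, ~[0, -3.5e-10, 0] N
```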
A ring laser gyro (RLG) splits a beam of laser light into two beams in opposite directions through narrow tunnels in a closed circular optical path around the perimeter of a triangular block of temperature-stable Cervit glass with reflecting mirrors placed in each corner. When the gyro is rotating at some angular rate, the distance traveled by each beam will differ—the shorter path being opposite to the rotation. The phase shift between the two beams can be measured by an interferometer and is proportional to the rate of rotation ( Sagnac effect ).
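To get a feel for the size of the Sagnac signal, the sketch below evaluates the single-loop phase shift Δφ = 8πAΩ/(λc) for an assumed 0.01 m² enclosed area rotating at the Earth rate; the values are illustrative, not taken from any particular instrument.

```python
import math

# Order-of-magnitude Sagnac phase shift for a small ring (illustrative values).
# For a single loop of enclosed area A rotating at rate Omega:
#     delta_phi = 8 * pi * A * Omega / (lambda * c)
A = 0.01                                # enclosed area in m^2 (assumed)
Omega = math.radians(15.0) / 3600.0     # Earth rotation rate, ~15 deg/hour
lam = 632.8e-9                          # HeNe laser wavelength in m
c = 2.998e8                             # speed of light, m/s

delta_phi = 8 * math.pi * A * Omega / (lam * c)
print(f"{delta_phi:.2e} rad")           # ~1e-7 rad: the raw signal is tiny
```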
In practice, at low rotation rates the output frequency can drop to zero as the result of backscattering causing the beams to synchronise and lock together. This is known as a lock-in , or laser-lock . The result is that there is no change in the interference pattern and therefore no measurement change.
To unlock the counter-rotating light beams, laser gyros either have independent light paths for the two directions (usually in fiber optic gyros), or the laser gyro is mounted on a piezo-electric dither motor that rapidly vibrates the laser ring back and forth about its input axis through the lock-in region to decouple the light waves.
The dither-motor (shaker) approach is the more accurate of the two, because both light beams use exactly the same path. Thus laser gyros retain moving parts, but they do not move as far.
A more recent variation on the optical gyroscope, the fiber optic gyroscope (FOG), uses an external laser and two beams going opposite directions (counter-propagating) in long spools (several kilometers) of fiber optic filament, with the phase difference of the two beams compared after their travel through the spools of fiber.
The basic mechanism, monochromatic laser light travelling in opposite paths and the Sagnac effect , is the same in a FOG and a RLG, but the engineering details are substantially different in the FOG compared to earlier laser gyros.
Precise winding of the fiber-optic coil is required to ensure the paths taken by the light in opposite directions are as similar as possible. The FOG requires more complex calibrations than a ring laser gyro, making the development and manufacture of FOGs more technically challenging than that of an RLG. However, FOGs do not suffer from laser lock at low speeds and do not need to contain any moving parts, increasing the maximum potential accuracy and lifespan of a FOG over an equivalent RLG.
The basic, open-loop accelerometer consists of a mass attached to a spring. The mass is constrained to move only in line with the spring. Acceleration causes deflection of the mass and the offset distance is measured. The acceleration is derived from the values of deflection distance, mass and the spring constant. The system must also be damped to avoid oscillation. A closed-loop accelerometer achieves higher performance by using a feedback loop to cancel the deflection, thus keeping the mass nearly stationary. Whenever the mass deflects, the feedback loop causes an electric coil to apply an equally negative force on the mass, canceling the motion. Acceleration is derived from the amount of negative force applied. Because the mass barely moves, the effects of non-linearities of the spring and damping system are greatly reduced. In addition, this accelerometer provides for increased bandwidth beyond the natural frequency of the sensing element.
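For the open-loop case the readout reduces to Hooke's law: the inferred acceleration is a = kx/m. A tiny sketch with assumed, illustrative values:

```python
# Open-loop accelerometer readout (idealised): acceleration is inferred from
# the measured deflection of a spring-mounted proof mass, a = k * x / m.
k = 50.0          # spring constant, N/m (assumed)
m = 0.005         # proof mass, kg (assumed)
x = 2.0e-6        # measured deflection, m (assumed)

a = k * x / m
print(f"{a:.3f} m/s^2 = {a / 9.81 * 1000:.3f} milli-g")
```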
Both types of accelerometers have been manufactured as integrated micro-machinery on silicon chips.
DARPA 's Microsystems Technology Office (MTO) department is working on a Micro-PNT (Micro-Technology for Positioning, Navigation and Timing) program to design Timing & Inertial Measurement Unit (TIMU) chips that do absolute position tracking on a single chip without GPS-aided navigation. [ 28 ] [ 29 ] [ 30 ]
Micro-PNT adds a highly accurate master timing clock [ 31 ] integrated into an IMU (Inertial Measurement Unit) chip, making it a Timing & Inertial Measurement Unit chip. A TIMU chip integrates 3-axis gyroscope, 3-axis accelerometer and 3-axis magnetometer together with a highly accurate master timing clock, so that it can simultaneously measure the motion tracked and combine that with timing from the synchronized clock. [ 28 ] [ 29 ]
In one form, the navigational system of equations acquires linear and angular measurements from the inertial and body frame, respectively and calculates the final attitude and position in the NED frame of reference.
Here f is the specific force, ω is the angular rate, a is the acceleration, R is the position, Ṙ and V are the velocity, Ω is the angular velocity of the Earth, g is the acceleration due to gravity, and Φ, λ and h are the NED location parameters (latitude, longitude and height). Super- and subscripts E, I and B denote variables in the Earth-centered, inertial or body reference frame, respectively, and C is a transformation between reference frames. [ citation needed ] | https://en.wikipedia.org/wiki/Inertial_navigation_system
An inertial platform , also known as a gyroscopic platform or stabilized platform , is a system using gyroscopes to maintain a platform in a fixed orientation in space despite the movement of the vehicle that it is attached to. Such platforms can be used to stabilize gunsights in tanks and anti-aircraft artillery on ships, and they served as the basis for older mechanically based inertial navigation systems (see Inertial measurement unit ). | https://en.wikipedia.org/wiki/Inertial_platform
An inertial reference unit (IRU) is a type of inertial sensor which uses gyroscopes (electromechanical, ring laser gyro or MEMS ) and accelerometers (electromechanical or MEMS ) to determine a moving aircraft ’s or spacecraft ’s change in rotational attitude (angular orientation relative to some reference frame) and translational position (typically latitude , longitude and altitude ) over a period of time. In other words, an IRU allows a device, whether airborne or submarine, to travel from one point to another without reference to external information.
Another name often used interchangeably with IRU is Inertial Measurement Unit . The two basic classes of IRUs/IMUs are "gimballed" and "strapdown". The older, larger gimballed systems have become less prevalent over the years as the performance of newer, smaller strapdown systems has improved greatly via the use of solid-state sensors and advanced real-time computer algorithms. Gimballed systems are still used in some high-precision applications where strapdown performance may not be as good. | https://en.wikipedia.org/wiki/Inertial_reference_unit
In fire and explosion prevention engineering, inerting refers to the introduction of an inert (non-combustible) gas into a closed system (e.g. a container or a process vessel) to make a flammable atmosphere oxygen deficient and non-ignitable. [ 1 ] [ 2 ]
Inerting relies on the principle that a combustible (or flammable) gas is able to undergo combustion (explode) only if mixed with air in the right proportions. The flammability limits of the gas define those proportions, i.e. the ignitable range. In combustion engineering terms, the admission of inert gas can be said to dilute the oxygen below the limiting oxygen concentration .
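A common idealization of this dilution is the well-mixed purging model, in which the oxygen fraction decays exponentially with the number of vessel volumes of inert gas admitted. The sketch below uses that model with an assumed LOC of 10%; real procedures depend on the fuel-specific LOC and applicable standards, and the perfect-mixing assumption is only an approximation.

```python
import math

# Idealised "dilution purging" estimate of how much inert gas is needed to
# bring a well-mixed vessel below a limiting oxygen concentration (LOC).
# With perfect mixing at constant volume, the oxygen fraction decays as
#     C = C0 * exp(-n),  n = vessel volumes of inert gas admitted.
c0 = 0.209          # oxygen fraction of air
loc = 0.10          # example LOC (assumed; the real value depends on the fuel)

n_volumes = math.log(c0 / loc)
print(f"{n_volumes:.2f} vessel volumes of inert gas")   # ~0.74 volumes
```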
Inerting differs from purging . Purging, by definition, ensures that an ignitable mixture never forms . Inerting makes an ignitable mixture safe by introduction of an inert gas.
Because the mixture is by definition ignitable before inerting commences, it is imperative that the inerting procedure does not introduce a potential source of ignition, or an explosion will occur.
NFPA 77 states [ 2 ] that carbon dioxide from high-pressure cylinders or fire extinguishers should never be used to inert a container or vessel. The release of carbon dioxide may generate static electricity with enough energy to ignite the mixture, resulting in an explosion. [ 3 ] The release of CO 2 for fire fighting purposes has led to several accidental explosions of which the 1954 Bitburg explosion may be the most devastating.
Other unsafe processes that may generate static electricity include pneumatic transport of solids, a release of pressurized gas with solids, industrial vacuum cleaners, and spray painting operations. [ 4 ]
The term inerting is often loosely used for any application involving an inert gas , not conforming with the technical definitions in NFPA standards. For example, marine tankers carrying low-flash products like crude oil , naphtha , or gasoline have inerting systems on board. During the voyage, the vapor pressure of these liquids is so high, that the atmosphere above the liquid (the headspace) is too rich to burn, the atmosphere is unignitable. This may change during unloading. When a certain volume of liquid is drawn from a tank, a similar volume of air will enter the tank's headspace, potentially creating an ignitable atmosphere.
The inerting systems use an inert gas generator to supply inert make-up gas instead of air. This procedure is often referred to as inerting . Technically, the procedure ensures that the atmosphere in the tank's headspace remains unignitable. The gas mixture in the headspace is not inert per se; it is merely unignitable. Because of its content of flammable vapors, it will burn if mixed with air. Only if enough inert gas is supplied as part of a purge-out-of-service procedure will it be unable to burn when mixed with air.
An inerting system decreases the probability of combustion of flammable materials stored in a confined space. The most common such system is a fuel tank containing a combustible liquid, such as gasoline , diesel fuel , aviation fuel , jet fuel , or rocket propellant . After being fully filled, and during use, there is a space above the fuel, called the ullage , that contains evaporated fuel mixed with air, which contains the oxygen necessary for combustion. Under the right conditions this mixture can ignite. An inerting system replaces the air with a gas that cannot support combustion, such as nitrogen . [ 1 ] [ 2 ]
Three elements are required to initiate and sustain combustion in the ullage: an ignition source (heat), fuel, and oxygen. Combustion may be prevented by reducing any one of these three elements. In many cases there is no ignition source, e.g. storage tanks . If the presence of an ignition source can not be prevented, as is the case with most tanks that feed fuel to internal combustion engines, then the tank may be made non-ignitable by progressively adding an inert gas to the ullage as the fuel is consumed. At present carbon dioxide or nitrogen are used almost exclusively, although some systems use nitrogen-enriched air, or steam. Using these inert gases reduces the oxygen concentration of the ullage to below the combustion threshold.
Oil tankers fill the empty space above the oil cargo with inert gas to prevent fire or explosion of hydrocarbon vapors. Oil vapors cannot burn in air with less than 11% oxygen content. The inert gas may be supplied by cooling and scrubbing the flue gas produced by the ship's boilers. Where diesel engines are used, the exhaust gas may contain too much oxygen so fuel-burning inert gas generators may be installed. One-way valves are installed in process piping to the tanker spaces to prevent volatile hydrocarbon vapors or mist from entering other equipment. [ 3 ] Inert gas systems have been required on oil tankers since the SOLAS regulations of 1974. The International Maritime Organization (IMO) publishes technical standard IMO-860 describing the requirements for inert gas systems. Other types of cargo such as bulk chemicals may also be carried in inerted tanks, but the inerting gas must be compatible with the chemicals used.
Fuel tanks for combat aircraft have long been inerted, as well as being self-sealing , but those for military cargo aircraft and civilian transport category aircraft usually were not. Early applications using nitrogen were on the Handley Page Halifax III and VIII , Short Stirling , and Avro Lincoln B.II , which incorporated inerting systems from around 1944. [ 4 ] [ 5 ] [ 6 ]
Cleve Kimmel first proposed an inerting system to passenger airlines in the early 1960s. [ 7 ] His proposed system for passenger aircraft would have used nitrogen. However, the US Federal Aviation Administration (FAA) did not mandate installation of an inerting system at that time. Early versions of Kimmel's system weighed 2,000 pounds. The FAA focused on keeping ignition sources out of the fuel tanks.
The FAA did not formally propose lightweight inerting systems for commercial jets until the 1996 explosion of TWA Flight 800 , a Boeing 747, caused by the ignition of fuel-air vapours in the center wing fuel tank. This tank is normally used only on very long flights, and little fuel was present in the tank at the time of the explosion. A small amount of fuel in a tank is more dangerous than a large amount, since it takes less heat to raise the temperature of the remaining fuel. This causes the ullage fuel-to-air ratio to increase and exceed the lower flammability limit. A small amount of fuel in the tank leaves pumps on the floor of the tank exposed to the air-fuel mixture, and an electric pump is a potential ignition source. The explosion of a Thai Airways International Boeing 737 in 2001 and a Philippine Airlines 737 in 1990 also occurred in tanks that had a small amount of residual fuel. These three explosions occurred on warm days, in the center wing tank (CWT) that is within the contours of the fuselage. These fuel tanks are located in the vicinity of external equipment that inadvertently heats the fuel tanks. The National Transportation Safety Board's (NTSB) final report on the crash of the TWA 747 concluded "The fuel air vapor in the ullage of the TWA flight 800 CWT was flammable at the time of the accident". NTSB identified "Elimination of Explosive Mixture in Fuel tanks in Transport Category Aircraft" as Number 1 item on its Most Wanted List in 1997. [ citation needed ]
After the TWA Flight 800 crash, a 2001 report by an FAA committee stated that U.S. airlines would have to spend US$35 billion to retrofit their existing aircraft fleets with inerting systems that might prevent such explosions. However, another FAA group developed a nitrogen-enriched air (NEA) based inerting system prototype that operated on compressed air supplied by the aircraft's propulsive engines. Also, the FAA determined that the fuel tank could be rendered inert by reducing the ullage oxygen concentration to 12% rather than the previously accepted threshold of 9 to 10%. Boeing commenced testing a derivative system of their own, performing successful test flights in 2003 with several Boeing 747 aircraft.
The new, simplified inerting system based on membrane gas separation technology was originally suggested to the FAA through public comment. It uses a hollow fiber membrane material to separate supplied air into nitrogen-enriched air (NEA) and oxygen-enriched air (OEA). [ 8 ] This technology is extensively used for generating oxygen-enriched air for medical purposes. The membrane preferentially allows oxygen molecules to permeate through the fiber walls, so the air retained on the feed side becomes the nitrogen-enriched stream used for inerting.
Unlike the inerting systems on military aircraft, this inerting system runs continuously to reduce fuel vapor flammability whenever the aircraft's engines are running. The goal is to reduce oxygen content within the fuel tank to 12%, lower than normal atmospheric oxygen content of 21%, but higher than that of inerted military aircraft fuel tanks, which have a target of 9% oxygen. Inerting in military aircraft is typically accomplished by ventilating fuel-vapor laden ullage gas out of the tank and into the atmosphere.
After what it said was seven years of investigation, the FAA proposed a rule in November 2005, in response to an NTSB recommendation, which would require airlines to "reduce the flammability levels of fuel tank vapors on the ground and in the air". This was a shift from the previous 40 years of policy in which the FAA focused only on reducing possible sources of ignition of fuel tank vapors.
The FAA issued the final rule on 21 July 2008. The rule amends regulations applicable to the design of new airplanes (14CFR§25.981), and introduces new regulations for continued safety (14CFR§26.31–39), Operating Requirements for Domestic Operations (14CFR§121.1117) and Operating Requirements for Foreign Air Carriers (14CFR§129.117). The regulations apply to airplanes certificated after 1 January 1958 of passenger capacity of 30 or more or payload capacity of greater than 7500 pounds. The regulations are performance based and do not require the implementation of a particular method.
The proposed rule would affect all future fixed-wing aircraft designs (passenger capacity greater than 30), and require a retrofit of more than 3,200 Airbus and Boeing aircraft with center wing fuel tanks, over nine years. The FAA had initially planned to also order installation on cargo aircraft, but this was removed from the order by the Bush administration. Additionally, regional jets and smaller commuter planes would not be subject to the rule, because the FAA does not consider them at high risk for a fuel-tank explosion.
The FAA estimated the cost of the program at US$808 million over the next 49 years, including US$313 million to retrofit the existing fleet. It compared this cost to an estimated US$1.2 billion "cost to society" from a large airliner exploding in mid-air. The proposed rule came at a time when nearly half of the U.S. airlines' capacity was on carriers that were in bankruptcy. [ 9 ]
The order affects aircraft whose air conditioning units have a possibility of heating up what can be considered a normally empty center wing fuel tank. Some Airbus A320 and Boeing 747 aircraft are slated for "early action". Regarding new aircraft designs, the Airbus A380 does not have a center wing fuel tank and is therefore exempt, and the Boeing 787 has a fuel tank safety system that already complies with the proposed rule. The FAA has stated that there have been four fuel tank explosions in the previous 16 years—two on the ground, and two in the air—and that based on this statistic and on the FAA's estimate that one such explosion would happen every 60 million hours of flight time, about 9 such explosions will probably occur in the next 50 years. The inerting systems will probably prevent 8 of those 9 probable explosions, the FAA said.
Before the inerting system rule was proposed, Boeing stated that it would install its own inerting system on airliners it manufactures beginning in 2005. Airbus had argued that its planes' electrical wiring made the inerting system an unnecessary expense.
As of 2009, the FAA had a pending rule to increase the standards of on board inerting systems again. New technologies are being developed by others to provide fuel tank inerting.
Another method in current use to inert fuel tanks is an ullage system. The FAA has decided that the added weight of an ullage system makes it impractical for implementation in the aviation field. [ 12 ] Some U.S. military aircraft still use nitrogen based foam inerting systems, and some companies will ship containers of fuel with an ullage system across rail transportation routes. | https://en.wikipedia.org/wiki/Inerting_system |
An inexact differential or imperfect differential is a differential whose integral is path dependent. It is most often used in thermodynamics to express changes in path dependent quantities such as heat and work, but is defined more generally within mathematics as a type of differential form . In contrast, an integral of an exact differential is always path independent since the integral acts to invert the differential operator. Consequently, a quantity with an inexact differential cannot be expressed as a function of only the variables within the differential. I.e., its value cannot be inferred just by looking at the initial and final states of a given system. [ 1 ] Inexact differentials are primarily used in calculations involving heat and work because they are path functions , not state functions .
An inexact differential δu is a differential for which the integral over two paths with the same end points is different. Specifically, there exist integrable paths \gamma_1, \gamma_2 : [0,1] \to \mathbb{R} such that \gamma_1(0) = \gamma_2(0), \gamma_1(1) = \gamma_2(1) and \int_{\gamma_1} \delta u \neq \int_{\gamma_2} \delta u. In this case, we denote the integrals as \Delta u|_{\gamma_1} and \Delta u|_{\gamma_2} respectively, to make explicit the path dependence of the change of the quantity u.
More generally, an inexact differential δu is a differential form which is not an exact differential , i.e., for all functions f, \mathrm{d}f \neq \delta u.
The fundamental theorem of calculus for line integrals requires path independence in order to express the values of a given vector field in terms of the partial derivatives of another function that is the multivariate analogue of the antiderivative. This is because there can be no unique representation of an antiderivative for inexact differentials since their variation is inconsistent along different paths. This stipulation of path independence is an unnecessary addendum to the fundamental theorem of calculus because in one-dimensional calculus there is only one path in between two points defined by a function.
Instead of the differential symbol d , the symbol δ is used, a convention which originated in the 19th century work of German mathematician Carl Gottfried Neumann , [ 2 ] indicating that Q (heat) and W (work) are path-dependent, while U (internal energy) is not.
Within statistical mechanics, inexact differentials are often denoted with a bar through the differential operator, đ . [ 3 ] In LaTeX the command "\rlap{\textrm{d}}{\bar{\phantom{w}}}" is an approximation or simply "\dj" for a dyet character, which needs the T1 encoding . [ citation needed ]
Within mathematics, inexact differentials are usually referred to more generally as differential forms, which are often written simply as ω. [ 4 ]
When you walk from a point A to a point B along a line AB (without changing direction), your net displacement and total distance covered are both equal to the length of that line, AB. If you then return to point A (again without changing direction), your net displacement is zero while your total distance covered is 2AB. This example captures the essential idea behind the inexact differential in one dimension. Note that if we allowed ourselves to change direction, we could take a step forward and then backward at any point on the way from A to B and, in so doing, increase the overall distance covered to an arbitrarily large number while keeping the net displacement constant.
Reworking the above with differentials, and taking the line \overline{AB} to lie along the x-axis, the net distance differential is \mathrm{d}f = \mathrm{d}x, an exact differential with antiderivative x. On the other hand, the total distance differential is |\mathrm{d}x|, which does not have an antiderivative. The path taken is \gamma : [0,1] \to \overline{AB}, where there exists a time t \in (0,1) such that \gamma is strictly increasing before t and strictly decreasing afterward. Then \mathrm{d}x is positive before t and negative afterward, yielding the integrals \Delta f = \int_{\gamma} \mathrm{d}x = 0 and \Delta g|_{\gamma} = \int_{\gamma} |\mathrm{d}x| = \int_{A}^{B} \mathrm{d}x + \int_{B}^{A} (-\mathrm{d}x) = 2\int_{A}^{B} \mathrm{d}x = 2AB, exactly the results we expected from the verbal argument before.
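A quick numeric illustration of this path dependence (taking AB as a unit length and sampling the out-and-back path):

```python
import numpy as np

# Numeric check of the walk example: integrate dx and |dx| along a path that
# goes from A = 0 out to B = 1 and back to A.
t = np.linspace(0.0, 1.0, 2001)
x = 1.0 - np.abs(2.0 * t - 1.0)       # rises 0 -> 1 on [0, 1/2], falls back on [1/2, 1]
dx = np.diff(x)

net_displacement = np.sum(dx)          # integral of dx  (exact differential)
total_distance = np.sum(np.abs(dx))    # integral of |dx| (inexact: path dependent)
print(net_displacement, total_distance)   # ~0.0 and ~2.0, i.e. 0 and 2*AB
```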
Inexact differentials show up explicitly in the first law of thermodynamics , \mathrm{d}U = \delta Q - \delta W, where U is the energy, δQ is the differential change in heat and δW is the differential change in work. Based on the constants of the thermodynamic system, we are able to parameterize the average energy in several different ways. For example, in the first stage of the Carnot cycle a gas is heated by a reservoir, giving an isothermal expansion of that gas; some differential amount of heat \delta Q = T\,\mathrm{d}S enters the gas. During the second stage, the gas is allowed to expand freely, outputting some differential amount of work \delta W = P\,\mathrm{d}V. The third stage is similar to the first, except that the heat is lost by contact with a cold reservoir, while the fourth stage is like the second except that work is done on the system by the surroundings to compress the gas. Because the overall changes in heat and work are different over different parts of the cycle, there is a nonzero net change in the heat and work, indicating that the differentials δQ and δW must be inexact differentials.
Internal energy U is a state function , meaning its change can be inferred just by comparing two different states of the system (independently of its transition path), which we can therefore indicate with U 1 and U 2 .
Since we can go from state U 1 to state U 2 either by providing heat Q = U 2 − U 1 or work W = U 2 − U 1 , such a change of state does not uniquely identify the amount of work W done to the system or heat Q transferred, but only the change in internal energy Δ U .
A fire requires heat, fuel, and an oxidizing agent. The energy required to overcome the activation energy barrier for combustion is transferred as heat into the system, resulting in changes to the system's internal energy. In a process, the energy input to start a fire may comprise both work and heat, such as when one rubs tinder (work) and experiences friction (heat) to start a fire. The ensuing combustion is highly exothermic, which releases heat. The overall change in internal energy does not reveal the mode of energy transfer and quantifies only the net work and heat. The difference between initial and final states of the system's internal energy does not account for the extent of the energy interactions transpired. Therefore, internal energy is a state function (i.e. exact differential), while heat and work are path functions (i.e. inexact differentials) because integration must account for the path taken.
It is sometimes possible to convert an inexact differential into an exact one by means of an integrating factor .
The most common example of this in thermodynamics is the definition of entropy : \mathrm{d}S = \frac{\delta Q_{\text{rev}}}{T}. In this case, δQ is an inexact differential, because its effect on the state of the system can be compensated by δW .
However, when divided by the absolute temperature and when the exchange occurs at reversible conditions (therefore the rev subscript), it produces an exact differential: the entropy S is also a state function.
Consider the inexact differential form \delta u = 2y\,\mathrm{d}x + x\,\mathrm{d}y. Its inexactness can be seen by integrating it from the origin to the point (1,1) along two different paths. If we first increase y and then increase x, that corresponds to first integrating over y and then over x. Integrating over y first contributes \int_0^1 x\,\mathrm{d}y\,|_{x=0} = 0, and then integrating over x contributes \int_0^1 2y\,\mathrm{d}x\,|_{y=1} = 2. Thus, along the first path we get a value of 2. However, along the second path (first increasing x, then y) we get a value of \int_0^1 2y\,\mathrm{d}x\,|_{y=0} + \int_0^1 x\,\mathrm{d}y\,|_{x=1} = 1. We can make δu an exact differential by multiplying it by x, yielding x\,\delta u = 2xy\,\mathrm{d}x + x^2\,\mathrm{d}y = \mathrm{d}(x^2 y). And so x\,δu is an exact differential. | https://en.wikipedia.org/wiki/Inexact_differential
An inexact differential equation is a differential equation of the form M(x,y)\,\mathrm{d}x + N(x,y)\,\mathrm{d}y = 0
satisfying the condition \frac{\partial M}{\partial y} \neq \frac{\partial N}{\partial x}
Leonhard Euler invented the integrating factor in 1739 to solve these equations. [ 1 ]
To solve an inexact differential equation, it may be transformed into an exact differential equation by finding an integrating factor μ. [ 2 ] Multiplying the original equation by the integrating factor gives: \mu M\,\mathrm{d}x + \mu N\,\mathrm{d}y = 0
For this equation to be exact, μ must satisfy the condition: \frac{\partial (\mu M)}{\partial y} = \frac{\partial (\mu N)}{\partial x}
Expanding this condition gives: M\,\frac{\partial \mu}{\partial y} + \mu\,\frac{\partial M}{\partial y} = N\,\frac{\partial \mu}{\partial x} + \mu\,\frac{\partial N}{\partial x}
Since this is a partial differential equation , it is generally difficult to solve. However, in some cases where μ depends only on x or only on y , the problem reduces to a separable first-order linear differential equation . The solutions for such cases are: \mu(x) = \exp\!\left(\int \frac{M_y - N_x}{N}\,\mathrm{d}x\right)
or \mu(y) = \exp\!\left(\int \frac{N_x - M_y}{M}\,\mathrm{d}y\right) | https://en.wikipedia.org/wiki/Inexact_differential_equation
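As a quick symbolic check of the μ(x) formula, one can apply it to the form 2y dx + x dy = 0 from the example above; a minimal sympy sketch:

```python
import sympy as sp

# Check of the mu(x) formula on the form M dx + N dy = 0 with M = 2y, N = x.
x, y = sp.symbols('x y')
M, N = 2*y, x

assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) != 0   # inexact: M_y != N_x

mu = sp.simplify(sp.exp(sp.integrate((sp.diff(M, y) - sp.diff(N, x)) / N, x)))
print(mu)                                                # x

# With the integrating factor applied, the form becomes exact:
assert sp.simplify(sp.diff(mu*M, y) - sp.diff(mu*N, x)) == 0
print(sp.simplify(mu*M), sp.simplify(mu*N))              # 2*x*y and x**2, i.e. d(x**2 * y)
```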
In animals , infanticide involves the intentional killing of young offspring by a mature animal of the same species . [ 2 ] Animal infanticide is studied in zoology , specifically in the field of ethology . Ovicide is the analogous destruction of eggs . The practice has been observed in many species throughout the animal kingdom, especially primates ( primate infanticide ) but including microscopic rotifers , insects , fish , amphibians , birds and mammals . [ 3 ] Infanticide can be practiced by both males and females . [ 4 ]
Infanticide caused by sexual conflict has the general theme of the killer (often male) becoming the new sexual partner of the victim's parent, which would otherwise be unavailable. [ 5 ] This represents a gain in fitness by the killer, and a loss in fitness by the parents of the offspring killed. This is a type of evolutionary struggle between the two sexes , in which the victim sex may have counter-adaptations that reduce the success of this practice. [ 5 ] It may also occur for other reasons, such as the struggle for food between females. In this case individuals may even kill closely related offspring.
Filial infanticide occurs when a parent kills its own offspring. This sometimes involves consumption of the young themselves, which is termed filial cannibalism . The behavior is widespread in fishes, and is seen in terrestrial animals as well. Human infanticide has been recorded in almost every culture. A unique aspect of human infanticide is sex-selective infanticide .
Infanticide only came to be seen as a significant occurrence in nature quite recently. At the time it was first seriously treated by Yukimaru Sugiyama , [ 6 ] infanticide was attributed to stress causing factors like overcrowding and captivity, and was considered pathological and maladaptive. Classical ethology held that conspecifics (members of the same species) rarely killed each other. [ 7 ] By the 1980s it had gained much greater acceptance. Possible reasons it was not treated as a prevalent natural phenomenon include its abhorrence to people, the popular group and species selectionist notions of the time (the idea that individuals behave for the good of the group or species; compare with gene-centered view of evolution ), and the fact that it is very difficult to observe in the field. [ 8 ]
This form of infanticide represents a struggle between the sexes, where one sex exploits the other, much to the latter's disadvantage. It is usually the male who benefits from this behavior, though in cases where males play similar roles to females in parental care the victim and perpetrator may be reversed (see Bateman's principle for discussion of this asymmetry).
Hanuman langurs (or gray langurs) are Old World monkeys found in India . They are a social animal, living in groups that consist of a single dominant male and multiple females. The dominant male has a reproductive monopoly within the group, which causes sub-ordinate males to have a much lower fitness value in comparison. [ 9 ] To gain the opportunity to reproduce, sub-ordinate males try to take over the dominant role within a group, usually resulting in an aggressive struggle with the existing dominant male. [ 10 ] If successful in overthrowing the previous male, unrelated infants of the females are then killed. [ 11 ] This infanticidal period is limited to the window just after the group is taken over. Cannibalism, however, has not been observed in this species.
Infanticide not only reduces intraspecific competition between the incumbent's offspring and those of other males but also increases the parental investment afforded to their own young, and allows females to become fertile faster. [ 12 ] This is because females of this species, as well as many other mammals, do not ovulate during lactation . It then becomes easier to understand how infanticide evolved. If a male kills a female's young, she stops lactating and is able to become pregnant again. [ 12 ] Because of this, the newly dominant male is able to reproduce at a faster rate than without the act of infanticide. [ 10 ] As males are in a constant struggle to protect their group, those that express infanticidal behavior will contribute a larger portion to future gene pools (see natural selection ).
Similar behavior is also seen in male lions , among other species, who also kill young cubs, thereby enabling them to impregnate the females. Unlike langurs, male lions live in small groups, which cooperate to take control of a pride from an existing group. [ 1 ] They will attempt to kill any cubs that are roughly nine months old or younger, though as in other species, the female will attempt to defend her cubs viciously. Males have, on average, only a two-year window in which to pass on their genes , and lionesses only give birth once every two years, so the selective pressure on them to conform to this behavior is strong. In fact it is estimated that a quarter of cubs dying in the first year of life are victims of infanticide. [ 1 ]
Male mice show great variation in behavior over time. After fertilizing a female, they become aggressive towards mouse pups for three weeks, killing any they come across. After this period, however, their behavior changes dramatically, and they become paternal, caring for their own offspring. This lasts for almost two months, but afterwards they become infanticidal once more. It is no coincidence here that the female gestation period is three weeks as well, or that it takes roughly two months for pups to become fully weaned and leave their nest. The proximate mechanism that allows for the correct timing of these periods involves circadian rhythms (see chronobiology ), each day and night cycle affecting the mouse's internal neural physiology, and disturbances in the duration of these cycles result in different periods of time between behaviors. [ 13 ] The adaptive value of this behavior switching is twofold: infanticide removes competitors for when the mouse does have offspring, and allows the female victims to be impregnated earlier than if they continued to care for their young, as mentioned above.
Gerbils , on the other hand, no longer commit infanticide once they have paired with a female, but actively kill and eat other offspring when young. The females of this species behave much like male mice, hunting down other litters except when rearing their own. [ 14 ]
Prospective infanticide is a subset of sexual competition infanticide in which young born after the arrival of the new male are killed. This is less common than infanticide of existing young, but can still increase fitness in cases where the offspring could not possibly have been fathered by the new mate, i.e. one gestation or fertility period. This is known to occur in lions and langurs, and has also been observed in other species such as house wrens . [ 15 ] In birds, however, the situation is more complex, as female eggs are fertilized one at a time, with a 24-hour delay between each. Males may destroy clutches laid 12 days or more after their arrival, though their investment of around 60 days of parental care is large, so a high level of parental certainty is needed. [ 15 ]
Females are also known to display infanticidal behavior. This may appear unexpected, as the conditions described above do not apply. Males are not always an unlimited resource though—in some species, males provide parental care to their offspring, and females may compete indirectly with others by killing their offspring, freeing up the limiting resource that the males represent. This has been documented in research by Stephen Emlen and Natalie Demong on wattled jacanas ( Jacana jacana ), a tropical wading bird . [ 16 ] In the wattled jacana, it is exclusively the male sex that broods , while females defend their territory . In this experiment Demong and Emlen found that removing females from a territory resulted in nearby females attacking the chicks of the male in most cases, evicting them from their nest. The males then fertilized the offending females and cared for their young. [ 17 ] Emlen describes how he "shot a female one night, and ... by first light a new female was already on the turf. I saw terrible things—pecking and picking up and throwing down chicks until they were dead. Within hours she was soliciting the male, and he was mounting her the same day. The next night I shot the other female, then came out the next morning and saw the whole thing again." [ 18 ]
Infanticide is also seen in giant water bugs . [ 19 ] Lethocerus deyrollei is a large and nocturnal predatory insect found in still waters near vegetation . In this species the males take care of masses of eggs by keeping them hydrated with water from their bodies. Without a male caring for the eggs like this, they become desiccated and will not hatch. In this species, males are a scarce resource that females must sometimes compete for. Those that cannot find a free male often stab the eggs of a brooding one. As in the above case, males then fertilize this female and care for her eggs. Noritaka Ichikawa has found that males only moisten their eggs during the first 90 seconds or so, after which all of the moisture on their bodies has evaporated. However, they guard the egg masses for as long as several hours at a time, when they could be hunting prey. They do not seem to prevent further evaporation by staying guard, as males that only guarded the nest for short periods were seen to have similar hatching rates in a controlled experiment where there were no females present. It seems rather that males are more successful in avoiding infanticidal females when they are out of the water with their eggs, which might well explain the ultimate cause of this behavior. [ 19 ]
Female rats will eat the kits of strange females as a source of nutrition and to take over the nest for their own litter. [ 20 ]
Black-tailed prairie dogs are colonially living, harem-polygynous squirrels found mainly in the United States . Their living arrangement involves one male living with four or so females in a territory defended by all individuals, and underground nesting. Black-tails only have one litter per year, and are in estrus for only a single day around the beginning of spring.
A seven-year natural experiment by John Hoogland and others from Princeton University revealed that infanticide is widespread in this species, including infanticide from invading males and immigrant females, as well as occasional cannibalism of an individual's own offspring. [ 3 ] The surprising finding of the study was that by far the most common type of infanticide involved the killing of close kin's offspring. This seems illogical, as kin selection favors behaviors that promote the well-being of closely related individuals. It was postulated that this form of infanticide is more successful than trying to kill young in nearby groups, as the whole group must be bypassed in this case, while within a group only the mother need be evaded. Marauding behavior is evidently adaptive, as infanticidal females had more and healthier young than others, and were heavier themselves as well. This behavior appears to reduce competition with other females for food, and future competition among offspring.
Similar behavior has been reported in the meerkat ( Suricata suricatta ), including cases of females killing their mother's, sister's, and daughter's offspring. Infanticidal raids from neighboring groups also occurred. [ 21 ]
Bottlenose dolphins have been reported to kill their young through impact injuries. [ 22 ] Dominant male langurs tend to kill the existing young upon taking control of a harem. [ 23 ] There have been sightings of infanticide in the leopard population. [ 24 ] The males of the Stegodyphus lineatus species of spider have been known to exhibit infanticide as a way to encourage females to mate again. There is at least one documented case of infanticide among Asian elephants at Dong Yai Wildlife Sanctuary, with the researchers describing it as likely normal behavior among aggressive musth elephants. [ 25 ]
In mammals, male infanticide is most often observed in non-seasonal breeders. [ 26 ] There is less fitness advantage for a conspecific to carry out infanticide if the interbirth period of the mother will not be decreased and the female will not return to estrus. In Felidae , birthing periods can happen anytime during the year, as long as there is not an unweaned offspring of that female. This is a contributor to the frequency of infanticide in carnivorous felids. [ 27 ] [ 26 ] Some species of seasonal breeders have been observed to commit infanticide. Cases in the snub-nosed monkey , a seasonal breeding primate, have shown that infanticide does lessen the interbirth period of the females and can allow them to breed with the next breeding group. [ 28 ] Other cases of seasonal breeding species in which the infanticidal characteristic is observed have been explained as a way of preserving the mother's resources and energy, in turn increasing the reproductive success of upcoming breeding periods. [ 29 ]
While it may be beneficial for some species to behave this way, infanticide is not without risks to the perpetrator. A harem-polygynous male that has already expended energy, and perhaps sustained serious wounds, in a fight with another male must then face attacks from females who vigorously defend their offspring, which may be telling and carry a risk of infection . It is also energetically costly to pursue a mother's young, which may try to escape.
Costs of the behavior described in prairie dogs include the risk to an individual of losing their own young while killing another's, not to mention the fact that they are killing their own relatives. In a species where infanticide is common, perpetrators may well be victims themselves in the future, such that they come out no better off; but as long as an infanticidal individual gains in reproductive output by its behavior, it will tend to become common. Further costs of the behavior in general may be induced by counter-strategies evolved in the other sex, as described below.
Taking a broader view of the black-tailed prairie dog situation, infanticide can be seen as a cost of social living . [ 3 ] If each female were to have her own private nest away from others, she would be much less likely to have her infants killed when absent. This, and other costs such as increased spread of parasites , must be made up for by other benefits, such as group territory defense and increased awareness of predators.
An avian example, published in Nature , is that of acorn woodpeckers . Females nest together, possibly because those nesting alone have their eggs constantly destroyed by rivals. Even so, eggs are consistently removed at first by nest partners themselves, until the entire group lays on the same day. They then cooperate and incubate the eggs as a group, but by this time a significant proportion of their eggs have been lost because of this ovicidal behavior. [ 30 ]
Because this form of infanticide reduces the fitness of killed individuals' parents, animals have evolved a range of counter-strategies against this behavior. These may be divided into two very different classes: those that tend to prevent infanticide, and those that minimize losses.
Some females abort or resorb their own young while they are still in development after a new male takes over; this is known as the Bruce effect . [ 31 ] This may prevent their young from being killed after birth, saving the mother wasted time and energy. However, this strategy also benefits the new male. In mice this can occur by the proximate mechanism of the female smelling the odor of the new male's urine . [ 32 ]
Infanticide in burying beetles may have led to male parental care. [ 33 ] In this species males often cooperate with the female in preparing a piece of carrion, which is buried with the eggs and eaten by the larvae when they hatch. Males may also guard the site alongside the female. It is apparent from experiments that this behavior does not provide their young with any better nourishment, nor is it of any use in defending against predators. However, other burying beetles may try to take their nesting space. When this occurs, a male-female pair is over twice as successful in nest defense, preventing the ovicide of their offspring.
Female langurs may leave the group with their young alongside the outgoing male, and others may develop a false estrous and allow the male to copulate, deceiving him into thinking she is actually sexually receptive. [ 34 ] Females may also have sexual liaisons with other males. This promiscuous behavior is adaptive, because males will not know whether it is their own offspring they are killing or not, and may be more reluctant or invest less effort in infanticide attempts. [ 35 ] Lionesses cooperatively guard against scouting males, and a pair were seen to violently attack a male after it killed one of their young. [ 36 ] Resistance to infanticide is also costly, though: for instance, a female may sustain serious injuries in defending her young. At times it is simply more advantageous to submit than to fight. [ 37 ]
Infanticide, the destruction of offspring, is characteristic of many species and has posed so great a threat that there have been observable changes in the behavior of mothers; more specifically, these changes exist as preventive measures.
A common behavioral mechanism by which females reduce the risk of infanticide of future offspring is paternity confusion or dilution. In theory, a female that mates with multiple males spreads the assumption of paternity widely across those males, and therefore makes them less likely to kill or attack offspring that could potentially carry their genes. This theory operates under the assumption that the males keep a memory of past mates, driven by the desire to perpetuate their own genes. [ 38 ] In the Japanese macaque ( Macaca fuscata ), female mating with multiple males, or dilution of paternity, was found to inhibit male-to-infant aggression: males were roughly eight times less likely to attack infants of females with which they had previously mated. [ 39 ]
Multi-male mating, or MMM, is recorded as a measure to prevent infanticide in species where the young are altricial , or heavily dependent, and where there is a high turnover rate for dominant males, which leads to infanticide of the previous dominant male's young. Examples include, but are not limited to: white-footed mice, hamsters, lions, langurs, baboons, and macaques. [ 35 ] Along with mating with multiple males, mating by females throughout the entirety of a reproductive cycle also serves to inhibit the chance of infanticide. This theory assumes that males use information on past matings to make decisions on committing infanticide, and that females subsequently manipulate that knowledge. Females which are able to appear sexually active or receptive at all stages of their cycle, even during pregnancy with another male's offspring, can confuse the males into believing that the subsequent children are theirs. [ 38 ] This "pseudo-estrus" theory applies to females within species that do not exhibit obvious clues to each stage of their cycle, such as langurs, rhesus macaques, and gelada baboons. [ 38 ]
An alternative to paternity confusion as a method of infanticide prevention is paternity concentration. This is the behavior of females concentrating paternity in one specific dominant male as a means of protection from infanticide at the hands of less-dominant males. [ 35 ] This particularly applies to species in which a male has a very long tenure as the dominant male and faces little instability in this hierarchy . Females choose these dominant males as the best available form of protection, and therefore mate exclusively with this male. This is especially common within small rodents. [ 35 ]
An additional behavioral strategy to prevent infanticide by males may be aggressive protection of the nest along with female presence. This strategy is commonly used in species such as European rabbits . [ 40 ] [ 41 ] Aggressive protection of the nest in an effort to reduce infanticide is also observed in the Black Rock Skink ( Egernia saxatilis ), which lives in small families in which adults defend their territories against conspecifics. The small "nuclear families" live in the same permanent shelter and the parents protect their infants from infanticidal conspecifics in this way. Adults attack unrelated juveniles but not their own offspring. The presence of a parent significantly reduces the rate of infanticide because conspecific adults ignore juveniles when a parent is present, likely because another adult is more threatening to the aggressive lizard. Therefore, a juvenile living within its parents' own territory will experience far fewer attacks from conspecific adults. [ 42 ] [ 43 ]
Filial infanticide occurs when a parent kills its own offspring. Both male and female parents have been observed to do this, as well as sterile worker castes in some eusocial animals. Filial infanticide is also observed as a form of brood reduction in some bird species, such as the white stork . [ 45 ] This may be due to a lack of siblicide in this species. [ 46 ]
Maternal infanticide occurs when newborn offspring are killed by their mother . This is sometimes seen in pigs , [ 47 ] a behavior known as savaging , which affects up to 5% of gilts . Similar behavior has been observed in various animals such as rabbits , [ 48 ] hamsters , [ 49 ] burying beetles , [ 50 ] mice [ 51 ] and humans.
Paternal infanticide —where fathers eat their own offspring—may also occur. When young bass hatch from the spawn , the father guards the area, circling around them and keeping them together, as well as providing protection from would-be predators. After a few days, most of the fish will swim away. At this point the male's behavior changes: instead of defending the stragglers, he treats them as any other small prey, and eats them. [ 52 ]
Honey bees may become infected with a bacterial disease called foul brood , which attacks the developing bee larva while still living in the cell. Some hives, however, have evolved a behavioral adaptation that resists this disease: the worker bees selectively kill the infected individuals by removing them from their cells and tossing them out of the hive, preventing it from spreading. The genetics of this behavior are quite complex. Experiments by Rothenbuhler showed that the 'hygienic' behavior of the queen was lost by crossing with a non-hygienic drone. This means that the trait must be recessive , only being expressed when both alleles contain the gene for hygienic behavior. Furthermore, the behavior is dependent on two separate loci. A backcross produced a mixed result. The hives of some offspring were hygienic, while others were not. There was also a third type of hive where workers removed the wax cap of the infected cells, but did nothing more. What was not apparent was the presence of a fourth group who threw diseased larvae out of the hive, but did not have the uncapping gene. This was suspected by Rothenbuhler, however, who manually removed the caps and found that some hives proceeded to clear out infected cells. [ 53 ] [ 54 ]
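Under the standard two-locus interpretation of these results (the labels used here are illustrative, not Rothenbuhler's own notation), hygienic colonies are homozygous recessive at both an "uncapping" locus and a "removing" locus. A backcross of the heterozygous F1 to the hygienic line would therefore be expected to yield roughly equal numbers, about one quarter each, of fully hygienic colonies, colonies that only uncap, colonies that only remove larvae once the caps are taken off manually, and fully non-hygienic colonies, which matches the four groups described above.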
Family structure is the most important risk factor in child abuse and infanticide. Children who live with both their natural (biological) parents are at low risk for abuse. The risk increases greatly when children live with step-parents or with a single parent. Children living without either parent (foster children) are 10 times more likely to be abused than children who live with both biological parents.
Children who live with a single parent that has a live-in partner are at the highest risk: they are 20 times more likely to be victims of child abuse than children living with both biological parents. [ 55 ]
Infanticide is a subject that some humans may find discomforting. Cornell University ethologist Glenn Hausfater states that "infanticide has not received much study because it's a repulsive subject [...] Many people regard it as reprehensible to even think about it." Research into infanticide in animals is in part motivated by the desire to understand human behaviors, such as child abuse . Hausfater explains that researchers are "trying to see if there's any connection between animal infanticide and child abuse, neglect and killing by humans [...] We just don't know yet what the connections are." [ 56 ]
Infanticide has been, and still is, practiced by some human cultures, groups, or individuals. In many past societies, certain forms of infanticide were considered permissible, whereas in most modern societies the practice is considered immoral and criminal . In the Western world it still takes place, usually because of the parent's mental illness or violent behavior , and in some poor countries it occurs as a form of population control — sometimes with tacit societal acceptance. Female infanticide, a form of sex-selective infanticide , is more common than the killing of male offspring, especially in cultures where male children are more desirable.
Amongst some hunter-gatherer communities, infanticide would sometimes be extended into child cannibalism . This is documented in many regions, but particularly amongst pre-colonial Aboriginal Australian tribes. Infants and young children would often be killed, roasted, and eaten by their mother and sometimes also fed to siblings, usually during times of famine. In non-filial cases, when a child was "well-fed" and in the absence of its mother, sometimes a man or the whole community would kill and consume the child. [ 57 ] [ 58 ] | https://en.wikipedia.org/wiki/Infanticide_(zoology)
Infanticide in non-human primates occurs when an individual kills its own or another individual's dependent young. Five hypotheses have been proposed to explain infanticide in non-human primates : exploitation , resource competition , parental manipulation, sexual selection , and social pathology . [ 1 ]
Infanticide in non-human primates occurs as a result of exploitation when the individuals performing the infanticide directly benefit from consumption or use of their victim. [ 1 ] The victim can become a resource: food ( cannibalism ), a protective buffer against aggression, or a prop to obtain maternal experience.
The form of exploitation in non-human primates most attributable to adult females is when non-lactating females take an infant from its mother ( allomothering ) and forcibly retain it until starvation. This behavior is known as the "aunting to death" phenomenon; these non-lactating female primates gain mothering-like experience, yet lack the resources to feed the infant. [ 1 ] This behaviour has been seen in captive bonobos , but not wild ones. It is not clear if it is a natural bonobo trait or the result of living in captivity. [ 2 ] Male orangutans have not been directly observed practicing infanticide as a reproductive strategy, but one recorded case exists of a male abducting an infant, which nearly resulted in the infant dying from dehydration. Additionally, a possible case of infanticide has been inferred, in which a mother orangutan had lost an infant and received a serious injury on her foot shortly after a new male had been introduced nearby. Although not directly observed, it is inferred that this male attacked the female and killed her infant. [ 3 ]
Resource competition results when there are too few resources in a particular area to support the existing population. In primates, resource competition is a prime motivator for infanticide. Infanticide motivated by resource competition can occur both outside of and within familial groups. Dominant, high-ranking female chimpanzees have been shown to aggress more often towards a lower-ranking female and her infant due to resource competition. [ 4 ] Primates from outside of familial groups might infiltrate areas and kill infants from other groups to eliminate competition for resources. When resources are limited, infants are easier to eliminate from the competition pool than other group members because they are the most defenseless and thus become targets of infanticide. Primate infanticide motivated by resource competition can also involve cannibalizing the infant as a source of nutrition. [ 1 ]
Resource competition is also a primary motivator in inter-species infanticide, or the killing of infants from one species by another species. Through eliminating infants of another species in the same environment, the probability that the aggressor and their own infants will obtain more resources increases. This behavior has been an observed consequence of multiple primate inter-species conflicts. In these cases, instances of direct aggression toward inter-specific infants in addition to infanticide have also been observed. In these instances of direct aggression, the aggressor was the previous target of intra-species aggression directed towards them. Therefore, the direct aggression and infanticide carried out by these aggressors could be attributed to re-directed aggression. [ 5 ]
Maternal infanticide, the killing of dependent young by the mother, is rare in non-human primates and has been reported only a handful of times. Maternal infanticide has been reported once in brown-mantled tamarins, Saguinus fuscicollis , once in black-fronted titis, Callicebus nigrifrons , and four times in mustached tamarins, Saguinus mystax . [ 6 ] It is proposed that maternal infanticide occurs when the mother assesses the probability for infant survival based on previous infant deaths. [ 6 ] If it is unlikely that the infant will survive, infanticide may occur. This may allow the mother to invest more in her current offspring or future offspring, leading to a greater net reproductive fitness in the mother. [ 1 ]
In the instances of maternal infanticide in tamarins, there were multiple breeding females. [ 6 ] The parental manipulation hypothesis proposes that maternal infanticide occurs more frequently when the group has a poor capacity to raise offspring, multiple breeding females, birth intervals shorter than three months, and low infant survival probability. [ 6 ]
Maternal infanticide differs from other varieties of infanticide in that the resource competition and sexual selection hypotheses (see other sections) must be rejected. [ 6 ] Resource competition and sexual selection are ruled out because it is the mother that is performing the infanticide, not another female.
In one case of maternal infanticide in wild black-fronted titi monkeys ( Callicebus nigrifrons ), the observed deceased infant was clinically healthy with no signs of health abnormalities. Therefore, infanticide did not appear to occur due to low viability of the infant. [ 7 ] Additionally, neither overcrowding nor feeding competition was a factor in the infanticide. In this case, there were no clear functions of the infanticide; the reason for infanticide in black-fronted titi monkeys is currently unknown.
Infanticide increases a male's reproductive success when he takes over a new troop of females. This behavior has been observed in langurs who live in single male breeding groups. [ 8 ] The females whose infants were killed exhibited estrous behavior and copulated with the new leader. These effects result from acceleration of the termination of lactational amenorrhea . [ 9 ] This provides an advantage to the male because the female will more quickly copulate with him and raise his young rather than the young from the previous mate; his fitness increases through use of infanticide. Infanticide in one-male breeding units has also been observed in red-tailed monkeys [ 10 ] and blue monkeys . [ 11 ] In addition to single male breeding groups, sexually selected infanticide often occurs in multi-male, multi-female breeding groups including the red howler and the mantled howler . [ 12 ] Adult Japanese macaque males were eight times more likely to attack infants when females had not mated with the male himself. [ 13 ]
Infanticide by females other than the mother has been observed in wild groups of common marmosets ( Callithrix jacchus ). [ 14 ] Most cases of such behavior have been attributed to the resource competition hypothesis, in which females can gain more access to resources for themselves and for their young by killing unrelated infants. Although commonly used in the context of food or shelter, the resource competition model can be applied to other limited resources, such as breeding opportunities or access to helpers. Most callitrichids have restrictive breeding patterns, which would be compatible with the model, but this infanticide behavior has only been documented in wild groups of common marmosets and not in wild groups of other callitrichid species. The higher frequency in common marmosets may be due to a variety of social, reproductive, and ecological characteristics - including higher likelihood for overlapping pregnancies and births (due to short intervals between births), habitat saturation, and lower costs of infant care compared to other callitrichids - that increase the chance of two breeding females inhabiting the same group, leading to more intense competition. In most observed cases in common marmosets, the socially dominant breeding females killed the infants of a subordinate female, allowing them to maintain their dominance. [ 14 ]
Paternal infanticide is rarely observed in non-human primates. In an extensive study of wild Japanese macaques which tracked instances of infanticide, DNA analysis revealed that males would not attack their own offspring or offspring of a female with whom they had mated. Further, females in the study were found to be motivated to form social bonds with males in order to protect their infants from infanticide. [ 13 ]
In mammals, interaction between the sexes is usually limited to the female's estrus or to copulation. However, in non-human primates, these male-female bonds persist past estrus. Social relationships between males and females in primates are hypothesized to serve as protection against male infanticide. [ 15 ] Year-round association serves to lower the probability of infanticide by other males. [ 16 ] In addition, many primates live in multi-female groups, and it has been proposed that these females live together to reduce the risk of infanticide through paternity confusion or concealed ovulation . [ 17 ] However, complex interactions can arise when females have different social rankings and when resource availability is threatened. Most often, dominant females opportunistically kill the young of a less dominant female when competition arises. [ 4 ]
Many primate species have developed counter adaptations to reduce the likelihood of infanticide. These strategies include physical defense, paternity confusion, reproduction suppression, and accelerated development.
The most immediate and obvious form of protection against infanticide is physical defense, wherein mothers either directly prevent aggressive acts toward their offspring or recruit other individuals for assistance. Female primates have been observed to actively defend territory from potentially infanticidal females, as seen in chimpanzees . [ 18 ] To recruit non-parental assistance in defense, female chacma baboons utilize "friendships" with males, wherein the male forms a bond with the infant until weaning, which may serve to protect their offspring from aggression by higher ranking males or females. [ 19 ]
To protect their young from paternal infanticide, mothers in many primate species will form socially monogamous pairs. In these pairs, the males will mate with other females but live exclusively with one female as a socially monogamous pair. Forming this socially monogamous pair causes the males to form parental relationships and social bonds with the female's offspring. These bonds motivate males to defend their offspring against infanticide from unrelated individuals and to never commit infanticide against their own offspring. [ 20 ] This form of social monogamy has been observed in gibbons , siamangs , baboons , and macaques . [ 21 ] [ 18 ]
One study demonstrated that for gorillas , living in harem-style groups reduces a female's risk of infanticide more than if she mated with multiple males. [ 22 ] A female gorilla benefits more from protection by the silverback male, despite the fact that mating with only one male increases paternity certainty and thus increases the number of males in the population that would benefit reproductively from infanticide. However, it is likely that antipredation is also a closely linked motivation to the formation of gorilla social units.
Females utilize paternity confusion to reduce the likelihood that a male they have mated with will kill their offspring. There are several ways this is accomplished, including concealed ovulation . Female catarrhine primates such as hanuman langurs have evolved an extended estrous state with variable ovulation in order to conceal the paternity of the fertilization. [ 23 ] Another important situation in which paternity confusion can arise is when females mate with multiple males; this includes mating patterns such as polyandry and promiscuity in multi-male multi-female groups. [ 24 ] In a strategy similar to promiscuous mating, female primates are proceptive during the first and second trimesters of pregnancy in order to increase paternity confusion over their offspring. [ 25 ] Finally, in multi-male multi-female groups, female synchrony, in which females are all fertile at the same time, can prohibit the dominant male from monopolizing all of the females. This also allows sneak copulations in which non-dominant males sire offspring. [ 26 ] Female synchrony also serves to reduce risk of female infanticide by forcing potentially infanticidal females to focus on provisioning their own infants rather than acting aggressively. But there is some evidence to suggest that female synchrony serves to increase competition pressures and thus aggression in females. [ 27 ]
Females may also avoid the costs of continued reproductive investment when infanticide is likely. One such occurrence is known as the Bruce Effect, in which female primates may abort the pregnancy when presented with a new male. This has been observed in wild geladas , where a majority of females abort pregnancies following the displacement of a dominant male. [ 28 ] Feticide is a related but distinct phenomenon by which physical or psychological trauma mediated by male behavior results in fetal loss. For example, in baboons at Amboseli , rates of fetal loss increase following the immigration of aggressive males. [ 29 ]
In some social systems, lower-ranking primate females may delay reproduction to avoid infanticide by dominant females, as seen in common marmosets . In one instance, the dominant marmoset female killed the offspring of a subordinate female. This phenomenon of reproduction suppression is also well observed in tamarins . [ 30 ]
In order to reduce the amount of time that infants are particularly vulnerable to infanticide, females have been shown to wean infants earlier when risk of infanticide is high. [ 31 ] For example, female white-headed leaf monkeys were observed to wean their infants significantly more quickly during male takeovers as compared to socially stable periods. [ 31 ] Females with infants too young to be weaned left with the old males and returned after their offspring had fully weaned, again after a significantly shorter weaning period than during stable times. [ 31 ] | https://en.wikipedia.org/wiki/Infanticide_in_primates |
Infectious tolerance is a term referring to a phenomenon where a tolerance-inducing state is transferred from one cell population to another. It can be induced in many ways; although it is often artificially induced, it is a natural in vivo process. [ 1 ] A number of studies deal with the development of strategies utilizing this phenomenon in transplantation immunology. The goal is to achieve long-term tolerance of the transplant through short-term therapy. [ 2 ]
The term "infectious tolerance" was originally used by Gershon and Kondo in 1970 [ 3 ] for suppression of naive lymphocyte populations by cells with regulatory function and for the ability to transfer a state of unresponsiveness from one animal to another. [ 4 ] Gershon and Kondo discovered that T cells can not only amplify but also diminish immune responses. [ 5 ] The T cell population causing this down-regulation was called suppressor T cells and was intensively studied for the following years (nowadays they are called regulatory T cells and are again a very attractive for research). [ 6 ] These and other research in the 1970s showed greater complexity of immune regulation, unfortunately these experiments were largely disregarded, as methodological difficulties prevented clear evidence. Later developed new tolerogenic strategies have provided strong evidence to re-evaluate the phenomenon of T cell mediated suppression, in particular the use of non-depleting anti-CD4 monoclonal antibodies, demonstrating that neither thymus nor clonal deletion is necessary to induce tolerance. [ 7 ] In 1989 was successfully induced classical transplantation tolerance to skin grafts in adult mice using antibodies blocking T cell coreceptors in CD4+ populations. [ 8 ] Later was shown that the effect of monoclonal antibodies is formation of regulatory T lymphocytes. [ 9 ] It has been shown that transfer of tolerance to other recipients can be made without further manipulation and that this tolerance transfer depends only on CD4+ T-lymphocytes. [ 10 ] Because second-generation tolerance arises in the absence of any monoclonal antibodies to CD4 or CD8, it probably represents a natural response of the immune system, which, once initiated, becomes self-sustaining. This ensures the long duration of once induced tolerance, for as long as the donor antigens are present. [ 11 ]
During a tolerant state, potential effector cells remain but are tightly regulated by induced antigen-specific CD4+ regulatory T cells (iTregs). Many subsets of iTregs play a part in this process, but CD4 + CD25 + FoxP3 + Tregs play a key role, because they have the ability to convert conventional T cells into iTregs directly, by secretion of the suppressive cytokines TGF-β , IL-10 or IL-35 , or indirectly, via dendritic cells (DCs) . [ 12 ] Production of IL-10 induces the formation of another population of regulatory T cells called Tr1. Tr1 cells depend on IL-10 and TGF-β , as Tregs do, but differ from them by lacking expression of Foxp3. [ 13 ] High IL-10 production is characteristic of Tr1 cells themselves, and they also produce TGF-β . [ 14 ] In the presence of IL-10, tolerogenic DCs can also be induced from monocytes; their production of IL-10 is likewise important for Tr1 formation. [ 15 ] These interactions lead to the production of enzymes such as IDO (indoleamine 2,3-dioxygenase) that catabolize essential amino acids. This microenvironment lacking essential amino acids, together with other signals, results in mTOR ( mammalian target of rapamycin ) inhibition which, particularly in synergy with TGF-β , directs the induction of new FoxP3 ( forkhead box protein 3 ) expressing Tregs. [ 16 ]
Inferences are steps in logical reasoning , moving from premises to logical consequences ; etymologically, the word infer means to "carry forward". Inference is traditionally divided into deduction and induction , a distinction that in Europe dates at least to Aristotle (300s BCE). Deduction is inference deriving logical conclusions from premises known or assumed to be true , with the laws of valid inference being studied in logic . Induction is inference from particular evidence to a universal conclusion. A third type of inference is sometimes distinguished, notably by Charles Sanders Peirce , contradistinguishing abduction from induction.
Various fields study how inference is done in practice. Human inference (i.e. how humans draw conclusions) is traditionally studied within the fields of logic, argumentation studies, and cognitive psychology ; artificial intelligence researchers develop automated inference systems to emulate human inference. Statistical inference uses mathematics to draw conclusions in the presence of uncertainty. This generalizes deterministic reasoning, with the absence of uncertainty as a special case. Statistical inference uses quantitative or qualitative ( categorical ) data which may be subject to random variations.
The process by which a conclusion is inferred from multiple observations is called inductive reasoning . The conclusion may be correct or incorrect, or correct to within a certain degree of accuracy, or correct in certain situations. Conclusions inferred from multiple observations may be tested by additional observations.
This definition is disputable due to its lack of clarity (compare the Oxford English Dictionary: "induction ... 3. Logic the inference of a general law from particular instances."). The definition given thus applies only when the "conclusion" is general.
Two possible definitions of "inference" are:
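In broad terms, these are: the act or process of deriving a conclusion from premises or evidence; and the conclusion itself that is reached by such a process.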
Ancient Greek philosophers defined a number of syllogisms , correct three-part inferences, which can be used as building blocks for more complex reasoning. We begin with a famous example:
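All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.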
The reader can check that the premises and conclusion are true, but logic is concerned with inference: does the truth of the conclusion follow from that of the premises?
The validity of an inference depends on the form of the inference. That is, the word "valid" does not refer to the truth of the premises or the conclusion, but rather to the form of the inference. An inference can be valid even if the parts are false, and can be invalid even if some parts are true. But a valid form with true premises will always have a true conclusion.
For example, consider the form of the following symbolic argument:
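All A are B. All C are A. Therefore, all C are B. (This is the pattern instantiated by the Socrates example above.)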
If the premises are true, then the conclusion is necessarily true, too.
Now we turn to an invalid form.
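All A are B. All C are B. Therefore, all C are A.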
To show that this form is invalid, we demonstrate how it can lead from true premises to a false conclusion.
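For instance: all apples are fruit (true); all bananas are fruit (true); therefore, all bananas are apples (false).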
A valid argument with a false premise may lead to a false conclusion (this and the following examples do not follow the Greek syllogism):
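For instance: if an animal is a bird, then it can fly (false premise); a penguin is a bird (true premise); therefore, a penguin can fly (false conclusion).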
When a valid argument is used to derive a false conclusion from a false premise, the inference is valid because it follows the form of a correct inference.
A valid argument can also be used to derive a true conclusion from a false premise:
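For instance: if an animal breathes air, then it is a mammal (false premise); dolphins breathe air (true premise); therefore, dolphins are mammals (true conclusion).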
In this case we have one false premise and one true premise where a true conclusion has been inferred.
Evidence: It is the early 1950s and you are an American stationed in the Soviet Union . You read in the Moscow newspaper that a soccer team from a small city in Siberia starts winning game after game. The team even defeats the Moscow team.
Inference: The small city in Siberia is not a small city anymore. The Soviets are working on their own nuclear or high-value secret weapons program.
Knowns: The Soviet Union is a command economy : people and material are told where to go and what to do. The small city was remote and historically had never distinguished itself; its soccer season was typically short because of the weather.
Explanation: In a command economy , people and material are moved where they are needed. Large cities might field good teams due to the greater availability of high quality players; and teams that can practice longer (possibly due to sunnier weather and better facilities) can reasonably be expected to be better. In addition, you put your best and brightest in places where they can do the most good—such as on high-value weapons programs. It is an anomaly for a small city to field such a good team. The anomaly indirectly described a condition by which the observer inferred a new meaningful pattern—that the small city was no longer small. Why would you put a large city of your best and brightest in the middle of nowhere? To hide them, of course.
An incorrect inference is known as a fallacy . Philosophers who study informal logic have compiled large lists of them, and cognitive psychologists have documented many biases in human reasoning that favor incorrect reasoning.
AI systems first provided automated logical inference and these were once extremely popular research topics, leading to industrial applications in the form of expert systems and later business rule engines . More recent work on automated theorem proving has had a stronger basis in formal logic.
An inference system's job is to extend a knowledge base automatically. The knowledge base (KB) is a set of propositions that represent what the system knows about the world. Several techniques can be used by that system to extend KB by means of valid inferences. An additional requirement is that the conclusions the system arrives at are relevant to its task.
Additionally, the term 'inference' has also been applied to the process of generating predictions from trained neural networks . In this context, an 'inference engine' refers to the system or hardware performing these operations. This type of inference is widely used in applications ranging from image recognition to natural language processing .
Prolog (for "Programming in Logic") is a programming language based on a subset of predicate calculus . Its main job is to check whether a certain proposition can be inferred from a KB (knowledge base) using an algorithm called backward chaining .
Let us return to our Socrates syllogism . We enter into our Knowledge Base the following piece of code:
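mortal(X) :- man(X).
man(socrates).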
( Here :- can be read as "if". Generally, if P → Q (if P then Q) then in Prolog we would code Q :- P (Q if P).) This states that all men are mortal and that Socrates is a man. Now we can ask the Prolog system about Socrates:
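?- mortal(socrates).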
(where ?- signifies a query: Can mortal(socrates). be deduced from the KB using the rules)
gives the answer "Yes".
On the other hand, asking the Prolog system the following:
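?- mortal(plato).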
gives the answer "No".
This is because Prolog does not know anything about Plato , and hence defaults to any property about Plato being false (the so-called closed world assumption ). Finally
?- mortal(X). (Is anything mortal?) would result in "Yes" (and in some implementations: "Yes": X=socrates). Prolog can be used for vastly more complicated inference tasks; see the corresponding article for further examples.
Recently, automatic reasoners have found a new field of application in the semantic web . Because it is based upon description logic , knowledge expressed using one variant of OWL can be logically processed, i.e., inferences can be made upon it.
Philosophers and scientists who follow the Bayesian framework for inference use the mathematical rules of probability to find this best explanation. The Bayesian view has a number of desirable features—one of them is that it embeds deductive (certain) logic as a subset (this prompts some writers to call Bayesian probability "probability logic", following E. T. Jaynes ).
Bayesians identify probabilities with degrees of beliefs, with certainly true propositions having probability 1, and certainly false propositions having probability 0. To say that "it's going to rain tomorrow" has a 0.9 probability is to say that you consider the possibility of rain tomorrow as extremely likely.
Through the rules of probability, the probability of a conclusion and of alternatives can be calculated. The best explanation is most often identified with the most probable (see Bayesian decision theory ). A central rule of Bayesian inference is Bayes' theorem . [ 1 ]
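In its simplest form, Bayes' theorem states that P(H | E) = P(E | H) · P(H) / P(E), where H is a hypothesis and E is the observed evidence: the probability of the hypothesis given the evidence is proportional to how well the hypothesis predicts the evidence, weighted by the hypothesis's prior probability.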
A relation of inference is monotonic if the addition of premises does not undermine previously reached conclusions; otherwise the relation is non-monotonic .
Deductive inference is monotonic: if a conclusion is reached on the basis of a certain set of premises, then that conclusion still holds if more premises are added.
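In symbols: if a set of premises Γ entails a conclusion φ (written Γ ⊢ φ), monotonicity guarantees that Γ ∪ {ψ} ⊢ φ for any additional premise ψ.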
By contrast, everyday reasoning is mostly non-monotonic because it involves risk: we jump to conclusions from deductively insufficient premises.
We know when it is worthwhile or even necessary (e.g. in medical diagnosis) to take the risk. Yet we are also aware that such inference is defeasible—that new information may undermine old conclusions. Various kinds of defeasible but remarkably successful inference have traditionally captured the attention of philosophers (theories of induction, Peirce's theory of abduction , inference to the best explanation, etc.). More recently logicians have begun to approach the phenomenon from a formal point of view. The result is a large body of theories at the interface of philosophy, logic and artificial intelligence.
Inductive inference:
Abductive inference:
Psychological investigations about human reasoning: | https://en.wikipedia.org/wiki/Inference |
Inferential role semantics (also conceptual role semantics , functional role semantics , procedural semantics , semantic inferentialism ) is an approach to the theory of meaning that identifies the meaning of an expression with its relationship to other expressions (typically its inferential relations with other expressions), in contradistinction to denotationalism , according to which denotations are the primary sort of meaning. [ 1 ]
Georg Wilhelm Friedrich Hegel is considered an early proponent of what is now called inferentialism. [ 2 ] [ 3 ] He believed that the ground for the axioms and the foundation for the validity of the inferences are the right consequences and that the axioms do not explain the consequence. [ 3 ]
In its current form, inferential role semantics originated in the work of Wilfrid Sellars .
Contemporary proponents of semantic inferentialism include Robert Brandom , [ 4 ] [ 5 ] Gilbert Harman , [ 6 ] Paul Horwich , Ned Block , [ 7 ] and Luca Incurvati . [ 8 ]
Jerry Fodor coined the term "inferential role semantics" in order to criticise it as a holistic (i.e. essentially non-compositional) approach to the theory of meaning. Inferential role semantics is sometimes contrasted to truth-conditional semantics .
Semantic inferentialism is related to logical expressivism [ 9 ] and semantic anti-realism . [ 10 ] The approach also bears a resemblance to accounts of proof-theoretic semantics in the semantics of logic , which associate meaning with the reasoning process.
| https://en.wikipedia.org/wiki/Inferential_role_semantics
Inferior: How Science Got Women Wrong and the New Research That's Rewriting the Story is a 2017 book by science journalist Angela Saini . The book discusses the effect of sexism on scientific research, and how that sexism influences social beliefs. [ 1 ] [ 2 ]
Inferior was launched in June 2017 at the Royal Academy of Engineering . [ 3 ] The book was published by Beacon Press in the United States and Fourth Estate Books in the United Kingdom. [ 4 ]
According to journalist Chantal Da Silva of The Independent , Angela Saini "paints a disturbing picture of just how deeply sexist notions have been woven into the fabric of scientific research" and concluded that her work "presents the rest of the scientific community with an important challenge: to acknowledge and correct a deep-rooted bias – and to help rewrite the role of women in the story of human evolution". [ 1 ]
Science journalist Nicola Davis writing for The Guardian stated that Saini "discovers that many of society’s traditional beliefs about women are built on shaky ground" and that "Saini’s scrutiny of the stereotype of men as hunters, leaving women to tend hearth and home, is eye-opening". [ 2 ]
Journalist Anjana Vaswani in the Ahmedabad Mirror wrote that Saini "exposes Charles Darwin 's prejudices and how his views on a woman's place in society tinted, or rather tainted, his theories." [ 5 ]
In a review for Chemistry World , journalist Jennifer Newton wrote that "Saini’s narrative is sharp, engaging and admirably tempered" and that "I cannot recommend it highly enough". [ 6 ]
A month after its release, Inferior was recommended by Scientific American . [ 7 ] It was a finalist in the Goodreads Choice Awards for "Best Science and Technology" in 2017 but ultimately lost to Astrophysics for People in a Hurry . [ 8 ] Inferior was chosen as the Physics World "Book of the Year" for 2017 by the editor Tushna Commissariat who called it "[i]ntrepid, detailed [and] upbeat". [ 9 ]
Egyptologist Julien Delhez, writing for the journal Evolution, Mind and Behaviour in 2019, criticized Inferior for being "imprecise", "hazy", stating that "[w]hile researchers often benefit from listening to those who disagree with them, innuendos and vague claims such as these will certainly not help". He also wrote that the book creates confusion that could potentially "seriously deteriorate the dialogue between the public and the scientific community", unless "evolutionary psychologists, personality researchers, and intelligence researchers take the time to respond to such critics [i.e. Saini]". [ 10 ]
Psychologist Felipe Carvalho Novaes in the Portuguese journal Revista Psicologia Organizações e Trabalho, wrote that the book was well-written, but that it suffers from excessive biases and several contradictions. [ 11 ] Novaes also recommended reading other books, such as The Sexual Paradox , so the reader could get different perspectives on the subject. [ 11 ]
After the release of Inferior , Angela Saini was invited to speak at universities and schools around the country, in what became a "scientific feminist book tour". [ 12 ] [ 13 ] [ 14 ] [ 15 ] | https://en.wikipedia.org/wiki/Inferior_(book) |
In the Solar System , a planet is said to be inferior or interior with respect to another planet if its orbit lies inside the other planet's orbit around the Sun . [ 1 ] In this situation, the latter planet is said to be superior to the former. In the reference frame of the Earth , where the terms were originally used, the inferior planets are Mercury and Venus , while the superior planets are Mars , Jupiter , Saturn , Uranus and Neptune . Dwarf planets like Ceres or Pluto and most asteroids are 'superior' in the sense that they almost all orbit outside the orbit of Earth. [ 2 ]
These terms were originally used in the geocentric cosmology of Claudius Ptolemy to differentiate as inferior those planets ( Mercury and Venus ) whose epicycle remained co-linear with the Earth and Sun, and as superior those planets ( Mars , Jupiter , and Saturn ) that did not. [ 3 ] [ 4 ]
In the 16th century, the terms were modified by Copernicus , who rejected Ptolemy's geocentric model, to distinguish the size of a planet 's orbit in relation to the Earth 's. [ 5 ]
When Earth is stated or assumed to be the reference point:
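An inferior (or interior) planet is one whose orbit lies inside Earth's orbit around the Sun: Mercury and Venus. A superior planet is one whose orbit lies outside Earth's orbit: Mars, Jupiter, Saturn, Uranus and Neptune.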
The terms are sometimes used more generally; for example, Earth is an inferior planet relative to Mars.
Interior planet now seems to be the preferred term for astronomers. Inferior/interior and superior are different from the terms inner planet and outer planet , which designate those planets that lie inside the asteroid belt and those that lie outside it, respectively. Inferior planet is also different from minor planet or dwarf planet . Superior planet is also different from gas giant .
| https://en.wikipedia.org/wiki/Inferior_and_superior_planets
Infestation is the state of being invaded or overrun by pests or parasites . [ 1 ] It can also refer to the actual organisms living on or within a host . [ 2 ]
In general, the term "infestation" refers to parasitic diseases caused by animals such as arthropods (i.e. mites , ticks , and lice ) and worms , but excluding conditions caused by protozoa , fungi , bacteria , and viruses , [ 3 ] which are called infections .
Infestations can be classified as either external or internal with regards to the parasites' location in relation to the host.
External or ectoparasitic infestation is a condition in which organisms live primarily on the surface of the host (though porocephaliasis can penetrate viscerally) and includes those involving mites , ticks , head lice and bed bugs . [ 4 ]
An internal (or endoparasitic ) infestation is a condition in which organisms live within the host and includes those involving worms (though swimmer's itch stays near the surface).
Sometimes, the term "infestation" is reserved for external ectoparasitic infestations [ 5 ] while the term infection refers to internal endoparasitic conditions. [ 6 ] | https://en.wikipedia.org/wiki/Infestation |
In urban planning , infill , or in-fill, is the rededication of land in an urban environment , usually open space, to new construction . [ 1 ] Infill also applies, within an urban polity , to construction on any undeveloped land that is not on the urban margin. The slightly broader term " land recycling " is sometimes used instead. Infill has been promoted as an economical use of existing infrastructure and a remedy for urban sprawl . [ 2 ] Detractors view increased urban density as overloading urban services, including increased traffic congestion and pollution , and decreasing urban green-space. [ 3 ] [ 4 ] Many also dislike it for social and historical reasons, partly due to its unproven effects and its similarity to gentrification. [ 5 ]
In the urban planning and development industries, infill has been defined as the use of land within a built-up area for further construction, especially as part of a community redevelopment or growth management program or as part of smart growth . [ 6 ] [ 7 ]
It focuses on the reuse and repositioning of obsolete or underutilized buildings and sites. [ 8 ]
Urban infill projects can also be considered as a means of sustainable land development close to a city's urban core .
Redevelopment or land recycling are broad terms which include redevelopment of previously developed land. Infill development more specifically describes buildings that are constructed on vacant or underused property or between existing buildings. [ 9 ] Terms describing types of redevelopment that do not involve using vacant land should not be confused with infill development. Infill development is commonly misunderstood to be gentrification, which is a different form of redevelopment. [ 5 ]
Infill development is sometimes a part of gentrification, thus providing a source of confusion which may explain social opposition to infill development. [ 5 ]
Gentrification is a term that is challenging to define because it manifests differently by location, and describes a process of gradual change in the identity of a neighborhood. [ 10 ] Because gentrification represents a gradual change, scholars have struggled to draw a hard line between ordinary, natural changes in a neighborhood and special, unnatural ones based in larger socio-economic and political structures. [ 10 ]
While the exact definition of gentrification varies by scholar, most can agree that gentrification redevelops a lower income neighborhood in a way that attracts higher income residents, or caters to their increasing presence. [ 11 ] Peter Moskowitz, the author of How to Kill a City, has more specifically put gentrification into context by describing it as a process permitted by "decades of racist housing policy" and perpetuated through a "political system focused more on the creation and expansion of business opportunities than the well-being of its citizens." [ 11 ] Gentrification is most common in urban neighborhoods, although it has also been studied in suburban and rural areas. [ 10 ]
A defining feature of gentrification is the effect it has on residents. Specifically, gentrification results in the physical displacement of lower class residents by middle or upper class residents. [ 5 ] The mechanism by which this displacement most traditionally occurs is through rental increases and increases in property values. [ 11 ] As gentrifiers start moving into a neighborhood, developers make upgrades to the neighborhood that are catered to them. The initial influx of middle class gentry occurs due to the affordability of the neighborhood combined with attractive developments that have already been made in the neighborhood. [ 11 ] In order to accommodate these new residents, local governments will change zoning codes and give out subsidies to encourage the development of new living spaces. [ 12 ] Rental increases are then justified by the new capital and demand for housing coming into an area. [ 10 ] Through increased rents for existing shops and rental units, long time residents and shopkeepers are forced to move, making way for more new development. [ 12 ]
The major difference between gentrification and infill development is that infill development does not always involve physical displacement whereas gentrification does. [ 5 ] This is because infill development describes any development on unused or blighted land. When successful, infill development creates stable, mixed income communities. [ 5 ] Gentrification is more strongly associated with the development of higher-end shopping centers, apartment complexes, and industrial sites. These structures are developed on used land, with the goal of attracting higher income residents to maximize the capital of a certain area. The mixed income communities seen during gentrification are inherently transitional (based on how gentrification is defined), whereas the mixed-income communities caused by infill development are ideally stable. [ 5 ]
Despite their differences, similarities between gentrification and infill development are apparent. Infill development can involve the development of the same high-end residential and non-residential structures seen with gentrification (i.e. malls, grocery stores, industrial sites, and apartment complexes) and it often brings middle and upper-class residents into the neighborhoods being developed. [ 5 ]
The similarities, and subsequent confusion, between gentrification and infill housing can be identified in John A. Powell’s broader scholarship on regional solutions to urban sprawl and concentrated poverty. This is particularly clear in his article titled Race, poverty, and urban sprawl: Access to opportunities through regional strategies . [ 5 ] In this work, he argues that urban civil rights advocates must focus on regional solutions to urban sprawl and concentrated poverty. [ 5 ] To make his point, Powell focuses on infill development, explaining that one of the major challenges to it is the lack of advocacy that it receives locally from urban civil rights advocates and community members. [ 5 ] He notes that the concern within these groups is that infill development will bring in middle and upper-class residents and cause the eventual displacement of low-income residents. [ 5 ] The fact that infill development "is mistakenly perceived as a gentrification process that will displace inner city residents from their existing neighborhoods," demonstrates that there exists confusion between the definitions of the terms. [ 5 ]
Powell also acknowledges that there is historical merit to these concerns, citing how during the 1960s infill development proved to favor white residents over minorities and how white-flight to the suburbs occurred throughout the mid-to-late twentieth century. [ 5 ] Many opponents to infill development are "inner-city residents of color." [ 5 ] They often view "return by whites to the city as an effort to retake the city" that they had previously left. This alludes to the fear of cultural displacement, which has most often been associated with gentrification, [ 13 ] but can also apply to infill development. Cultural displacement describes the “changes in the aspects of a neighborhood that have provided long-time residents with a sense of belonging and allowed residents to live their lives in familiar ways.” [ 14 ] Due to white flight throughout the mid-to-late 20th century, minorities began to constitute the dominant group in inner-city communities. In the decades following, they developed distinct cultural identities and power within these communities. Powell suggests that it is unsurprising that they would not want to risk relinquishing this sense of belonging to an influx of upper class white people, especially considering the historical tensions leading up to white flight in urban areas across the country throughout the mid to late 20th century. [ 5 ]
Despite these concerns, Powell claims that, depending on the city, the benefits of infill development may outweigh the risks that such groups are concerned about. For example, poor cities with high levels of vacant land (such as Detroit) have much to gain through infill development. [ 5 ] He also addresses the concern that minority groups will lose power in these communities by explaining how "cities like Detroit and Cleveland are far from being at risk of political domination by whites." [ 5 ]
The ways that Powell believes infill development could help poor cities like Detroit and Cleveland are through the increase in middle class residents and the new buildings that are constructed in the neighborhoods. These new buildings are an attractive alternative to blight, so they can have the benefit of improving property values for lower-class homeowners. [ 5 ] While increased property values can sometimes force non-homeowners to relocate, Powell suggests that in poor cities there are enough options for relocation that the displacement often remains "intra-jurisdictional." [ 5 ] Another benefit of infill development is the raising of the tax base, which brings more revenue into the city and improves the city’s ability to serve its residents. [ 5 ] Infill development's ability to eradicate old industrial sites and city-wide blight also can improve the quality of life for residents and spark much-needed outside investment in cities. [ 5 ]
Considering the confusion between gentrification and infill development, a major obstacle for advocates of infill development is to educate community members on the differences between infill development and gentrification. [ 13 ] Doing so requires explaining that infill projects use vacant land and do not displace lower income residents, but instead benefit them in the creation of stable, mixed-income communities. [ 5 ] Addressing the issue of cultural displacement is also paramount, as infill development still has the potential to shift the cultural identity of a neighborhood even if there is no physical displacement associated with it. [ 13 ]
Although urban infill is an appealing tool for community redevelopment and growth management, it is often far more costly for developers to develop land within the city than it is to develop on the periphery, in suburban greenfield land . [ 15 ] Costs for developers include acquiring land, removing existing structures, [ 16 ] and testing for and cleaning up any environmental contamination. [ 15 ]
Scholars have argued that infill development is more financially feasible when it occurs on a large plot of land of several acres. [ 16 ] Large-scale development benefits from what economists call economies of scale and reduces the surrounding negative influences of neighborhood blight, crime, or poor schools. [ 16 ] However, large scale infill development is often difficult in a blighted neighborhood for several reasons, such as the difficulties in acquiring land and in gaining community support.
Amassing land is one challenge that infill development poses, but greenfield development does not. Neighborhoods that are targets for infill often have parcels of blighted land scattered among places of residence. Developers must be persistent to amass land parcel by parcel and often find resistance from landowners in the target area. [ 16 ] One way to approach that problem is for city management to use eminent domain to claim land. However, that is often unpopular with city management and neighborhood residents. Developers must also deal with regulatory barriers, visit numerous government offices for permitting, interact with a city management that is frequently unwilling to use eminent domain to remove current residents, and generally engage in public-private partnerships with local government. [ 16 ]
Developers also meet with high social goal barriers in which the local officials and residents are not interested in the same type of development. Although citizen involvement has been found to facilitate the development of brownfield land , residents in blighted neighborhoods often want to convert vacant lots to parks or recreational facilities, but external actors seek to build apartment complexes, commercial shopping centers, or industrial sites. [ 4 ] [ 17 ]
Suburban infill is the development of land in existing suburban areas that was left vacant during the development of the suburb. It is one of the tenets of New Urbanism and smart growth , trends that urge densification to reduce the need for automobiles , encourage walking , and ultimately save energy. [ 18 ] In New Urbanism, an exception to infill is the practice of urban agriculture in which land in the urban or suburban area is retained to grow food for local consumption.
Infill housing is the insertion of additional housing units into an already-approved subdivision or neighborhood. They can be provided as additional units built on the same lot, by dividing existing homes into multiple units, or by creating new residential lots by further subdivision or lot line adjustments. Units may also be built on vacant lots.
Infill residential development does not require the subdivision of greenfield land , natural areas, or prime agricultural land, but it usually reduces green space. In some cases of residential infill, existing infrastructure may need expansion to provide enough utilities and other services: increased electrical and water usage, additional sewage, increased traffic control, and increased fire damage potential.
As with other new construction, structures built as infill may clash architecturally with older, existing buildings. | https://en.wikipedia.org/wiki/Infill |
The infill wall is the supported wall that closes the perimeter of a building constructed with a three-dimensional framework structure (generally made of steel or reinforced concrete ). The structural frame ensures the bearing function, whereas the infill wall serves to separate inner and outer space, filling up the boxes of the outer frames. The only static function of the infill wall is to bear its own weight. The infill wall is an external, vertical, opaque type of closure. With respect to other categories of wall, the infill wall differs from the partition, which separates two interior spaces and is likewise non-load-bearing, and from the load-bearing wall. The latter performs the same hygrothermal and acoustic functions as the infill wall, but performs static functions too.
The use of masonry infill walls, and to some extent veneer walls, especially in reinforced concrete frame structures, is common in many countries. The use of masonry infill walls offers an economical and durable solution: they are easy to build, attractive for architecture and offer a very efficient cost-performance ratio.
Today, masonry enclosures and partition walls are mainly made of clay units, but also aggregate concrete units (dense and lightweight aggregate) and autoclaved aerated concrete units are used. More recently, industry is also trying to introduce wood concrete blocks. Partition walls, made with both vertically and horizontally perforated clay blocks, represent two-thirds of the corresponding market.
Masonry enclosure wall systems must meet some structural and non-structural requirements. [ 1 ]
The requirements relating structural stability are currently defined and regulated by Eurocode 6 for load bearing masonry structures and by Eurocode 8 for seismic safety. These codes impose requirements for masonry walls, particularly non-collapse (in-plane/out of plane) and damage limitation, providing methods of calculation to ensure these two requirements.
Some of the non-structural requirements are: fire safety , thermal comfort , acoustic comfort, durability and water leakage .
Fire safety is one of the requirements often imposed on enclosure walls. However, as the traditionally used materials (blocks, bricks and mortar) are not combustible, it is relatively easy to achieve the requirements relating to the limitation of spread of fire, thermal insulation and structural strength, which in severe cases must be guaranteed for 180 minutes.
Thermal comfort is a requirement with which the enclosure walls must comply, and it has a direct influence on the construction of the walls. Thermal regulations demand increasingly high values of thermal resistance from the walls. To meet these demands, new products and building systems are developed to provide the thermal resistance requested by the regulations. It is likely that in the near future traditional construction solutions with double leaf walls (with new, more thermally efficient bricks and blocks) will be adapted, and that there will be increased use of exterior thermal insulation systems (ETICS) together with single leaf walls. The use of insulation systems applied from the inside will also increase. The development of new enclosure wall systems should, apart from trying to improve requisites relating to structural stability in case of earthquake, improve the thermal resistance of the solution.
To ensure durability and waterproofing, the most important thing is to avoid errors in design and construction, which lead to the appearance of (structural and non-structural) pathologies. Some requisites that the walls must have in order to avoid pathologies are: adequate expansion joints, correct support of the walls, correction of thermal bridges, appropriate connection between masonry leaves, correct implementation of the space between leaves, and proper placement of thermal insulation. The proper use of paints, protection against moisture and the correct preparation and application of traditional plasters, among others, are also important factors.
When there is the perimeter contact between the masonry infill walls and the frame, in ordinary situations of adherent robust infill walls, the effect of stiffness increase (and also dissipation) influences the building response. In the case of infill walls built disconnected from the structure (not in adherence with the frame elements), it is likely that infill walls act as an additional mass applied to the structure only, and should not have other significant effects. In general, in the most frequent case of perimeter contact between the masonry panels and the beams and columns of the RC structure, the infill panels interact with the structure, regardless of the lateral resistance capacity of the structure, and act like structural elements, overtaking lateral loads until they are badly damaged or destroyed. In this case, the most important effects of the structure-infill interaction are:
The main problems in the local interaction between frame and infill are the formation of short beam, short column effect in the structural elements. The zones in which supplementary shear forces can occur, acting locally on the extremities of the beams and columns, should be dimensioned and transversally reinforced in order to overtake safely these forces.
Single-leaf wall: a wall without a cavity or continuous vertical joint in its plane. [ 2 ]
Cavity wall: a wall consisting of two parallel single-leaf walls, effectively tied together with wall ties or bed joint reinforcement. The space between the leaves is left as a continuous cavity or filled or partially filled with non-loadbearing thermal insulating material. A wall consisting of two leaves separated by a cavity, where one of the leaves is not contributing to the strength or stiffness of the other (possibly loadbearing) leaf, is to be regarded as a veneer wall.
Veneer wall: a wall used as a facing but not bonded or contributing to the strength of the backing wall or framed structure. | https://en.wikipedia.org/wiki/Infill_wall |
Infiltration is the process by which water on the ground surface enters the soil . It is commonly used in both hydrology and soil sciences . The infiltration capacity is defined as the maximum rate of infiltration. It is most often measured in meters per day but can also be measured in other units of distance over time if necessary. [ 1 ] The infiltration capacity decreases as the soil moisture content of the soil's surface layers increases. If the precipitation rate exceeds the infiltration rate, runoff will usually occur unless there is some physical barrier.
Infiltrometers , permeameters and rainfall simulators are all devices that can be used to measure infiltration rates. [ 2 ]
Infiltration is caused by multiple factors, including gravity, capillary forces, adsorption, and osmosis. Many soil characteristics can also play a role in determining the rate at which infiltration occurs.
Precipitation can impact infiltration in many ways. The amount, type, and duration of precipitation all have an impact. Rainfall leads to faster infiltration rates than any other precipitation event, such as snow or sleet. In terms of amount, the more precipitation that occurs, the more infiltration will occur until the ground reaches saturation, at which point the infiltration capacity is reached. The duration of rainfall impacts the infiltration capacity as well. Initially, when the precipitation event first starts, infiltration occurs rapidly as the soil is unsaturated, but as time continues the infiltration rate slows as the soil becomes more saturated. This relationship between rainfall and infiltration capacity also determines how much runoff will occur. If rainfall occurs at a rate faster than the infiltration capacity, runoff will occur.
The porosity of soils is critical in determining the infiltration capacity. Soils that have smaller pore sizes, such as clay, have lower infiltration capacity and slower infiltration rates than soils that have large pore sizes, such as sands. One exception to this rule is when the clay is present in dry conditions. In this case, the soil can develop large cracks which lead to higher infiltration capacity. [ 3 ]
Soil compaction also impacts infiltration capacity. Compaction of soils results in decreased porosity within the soils, which decreases infiltration capacity. [ 4 ]
Hydrophobic soils can develop after wildfires have happened, which can greatly diminish or completely prevent infiltration from occurring.
Soil that is already saturated has no capacity to hold more water; therefore the infiltration capacity has been reached and the rate cannot increase past this point. This leads to much more surface runoff. When soil is partially saturated, infiltration can occur at a moderate rate, and fully unsaturated soils have the highest infiltration capacity.
Organic materials in the soil (including plants and animals) all increase the infiltration capacity. Vegetation contains roots that extend into the soil which create cracks and fissures in the soil, allowing for more rapid infiltration and increased capacity. Vegetation can also reduce the surface compaction of the soil which again allows for increased infiltration. When no vegetation is present infiltration rates can be very low, which can lead to excessive runoff and increased erosion levels. [ 3 ] Similarly to vegetation, animals that burrow in the soil also create cracks in the soil structure.
If the land is covered by impermeable surfaces, such as pavement, infiltration cannot occur as the water cannot infiltrate through an impermeable surface. This relationship also leads to increased runoff. Areas that are impermeable often have storm drains that drain directly into water bodies, which means no infiltration occurs. [ 5 ]
Vegetative cover of the land also impacts the infiltration capacity. Vegetative cover can lead to more interception of precipitation, which decreases the intensity of the rainfall reaching the soil and leads to less runoff. Increased abundance of vegetation also leads to higher levels of evapotranspiration, which can decrease the infiltration rate. [ 5 ] Debris from vegetation such as leaf cover can also increase the infiltration rate by protecting the soils from intense precipitation events.
In semi-arid savannas and grasslands, the infiltration rate of a particular soil depends on the percentage of the ground covered by litter, and the basal cover of perennial grass tufts. On sandy loam soils, the infiltration rate under a litter cover can be nine times higher than on bare surfaces. The low rate of infiltration in bare areas is due mostly to the presence of a soil crust or surface seal. Infiltration through the base of a tuft is rapid and the tufts funnel water toward their own roots. [ 6 ]
When the slope of the land is higher runoff occurs more readily which leads to lower infiltration rates. [ 5 ]
The process of infiltration can continue only if there is room available for additional water at the soil surface. The available volume for additional water in the soil depends on the porosity of the soil [ 7 ] and the rate at which previously infiltrated water can move away from the surface through the soil. The maximum rate at which water can enter soil in a given condition is the infiltration capacity. If the arrival of the water at the soil surface is less than the infiltration capacity, it is sometimes analyzed using hydrology transport models , mathematical models that consider infiltration, runoff, and channel flow to predict river flow rates and stream water quality .
Robert E. Horton [ 8 ] suggested that infiltration capacity rapidly declines during the early part of a storm and then tends towards an approximately constant value after a couple of hours for the remainder of the event. Previously infiltrated water fills the available storage spaces and reduces the capillary forces drawing water into the pores. Clay particles in the soil may swell as they become wet and thereby reduce the size of the pores. In areas where the ground is not protected by a layer of forest litter, raindrops can detach soil particles from the surface and wash fine particles into surface pores where they can impede the infiltration process.
Wastewater collection systems consist of a set of lines, junctions, and lift stations to convey sewage to a wastewater treatment plant. When these lines are compromised by rupture, cracking, or tree root invasion , infiltration/inflow of stormwater often occurs. This circumstance can lead to a sanitary sewer overflow , or discharge of untreated sewage into the environment.
Infiltration is a component of the general mass balance hydrologic budget. There are several ways to estimate the volume and water infiltration rate into the soil. The rigorous standard that fully couples groundwater to surface water through a non-homogeneous soil is the numerical solution of Richards' equation . A newer method that allows 1-D groundwater and surface water coupling in homogeneous soil layers and that is related to the Richards equation is the Finite water-content vadose zone flow method solution of the Soil Moisture Velocity Equation . In the case of uniform initial soil water content and deep, well-drained soil, some excellent approximate methods exist to solve the infiltration flux for a single rainfall event. Among these are the Green and Ampt (1911) [ 9 ] method, Parlange et al. (1982). [ 10 ] Beyond these methods, there are a host of empirical methods such as SCS method, Horton's method, etc., that are little more than curve fitting exercises.
The general hydrologic budget, with all of its components, can be solved for infiltration F . Given all the other variables, with infiltration as the only unknown, simple algebra solves the infiltration question.
where
The only note on this method is that one must be careful about which variables to use and which to omit, for double counting can easily occur. An easy example of double counting is when the evaporation, E , and the transpiration, T , are placed in the equation as well as the evapotranspiration, ET : ET includes T as well as a portion of E . Interception also needs to be accounted for, not just raw precipitation.
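As a rough illustration of this bookkeeping, the sketch below closes a simplified water balance for infiltration. The set of budget terms, their names, and the numbers used are assumptions chosen for illustration only; they are not the article's exact formulation.

```python
# Minimal sketch: solving a simplified hydrologic budget for infiltration F.
# The budget terms and their names here are illustrative assumptions, not the
# article's exact formulation; real budgets may include additional components
# (e.g. separate E and T, boundary inflows and outflows).

def infiltration_from_budget(precip, runoff, evapotranspiration,
                             interception, storage_change):
    """Return infiltration F (same units as the inputs, e.g. mm over the
    period), assuming F is the only unknown in the water balance."""
    return precip - runoff - evapotranspiration - interception - storage_change

# Example: 50 mm of rain, 12 mm runoff, 8 mm ET, 5 mm interception,
# 10 mm increase in surface storage -> 15 mm infiltrated.
print(infiltration_from_budget(50, 12, 8, 5, 10))  # 15
```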
The standard rigorous approach for calculating infiltration into soils is Richards' equation , which is a partial differential equation with very nonlinear coefficients. The Richards equation is computationally expensive, not guaranteed to converge, and sometimes has difficulty with mass conservation. [ 11 ]
This method approximates Richards' (1931) partial differential equation by de-emphasizing soil water diffusion. This was established by comparing the solution of the advection-like term of the Soil Moisture Velocity Equation [ 12 ] against exact analytical solutions of infiltration using special forms of the soil constitutive relations. Results showed that this approximation does not affect the calculated infiltration flux because the diffusive flux is small, and that the finite water-content vadose zone flow method is a valid solution of the equation. [ 13 ] The method is a set of three ordinary differential equations , is guaranteed to converge and to conserve mass. It requires the assumption that the flow occurs in the vertical direction only (1-dimensional) and that the soil is uniform within layers.
The method is named after its developers, Green and Ampt. The Green-Ampt [ 14 ] method of infiltration estimation accounts for many variables that other methods, such as Darcy's law, do not. It is a function of the soil suction head, porosity, hydraulic conductivity, and time.
where
Once integrated, one can easily choose to solve for either volume of infiltration or instantaneous infiltration rate:
Using this model one can find the volume easily by solving for F ( t ) {\displaystyle F(t)} . However, the variable being solved for appears in the equation itself, so the equation must be solved iteratively, updating the estimate until successive values converge to within zero or another appropriate tolerance. A good first guess for F {\displaystyle F} is the larger value of either K t {\displaystyle Kt} or 2 ψ Δ θ K t {\displaystyle {\sqrt {2\psi \,\Delta \theta Kt}}} . These values can be obtained by solving the model with the logarithm replaced by its Taylor expansion around one, to the zeroth and second order respectively. The only note on using this formula is that one must assume that h 0 {\displaystyle h_{0}} , the water head or the depth of ponded water above the surface, is negligible. Using the infiltration volume from this equation one may then substitute F {\displaystyle F} into the corresponding infiltration rate equation below to find the instantaneous infiltration rate at the time t {\displaystyle t} at which F {\displaystyle F} was measured.
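A minimal sketch of this iterative solution is given below. It assumes the standard Green-Ampt relations for cumulative infiltration and instantaneous rate (consistent with the first-guess discussion above, but not reproduced from the article), and the parameter values are purely illustrative.

```python
import math

# Minimal sketch of the iterative Green-Ampt solve described above, assuming
# the standard Green-Ampt relations (not reproduced from the article):
#   F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta))   (cumulative infiltration)
#   f = K*(psi*dtheta/F + 1)                      (instantaneous rate)
# Parameter values below are illustrative, not taken from the article.

def green_ampt(K, psi, dtheta, t, tol=1e-8, max_iter=100):
    """Return (F, f): cumulative infiltration depth and instantaneous rate."""
    F = max(K * t, math.sqrt(2 * psi * dtheta * K * t))  # first guess, as in the text
    for _ in range(max_iter):
        F_new = K * t + psi * dtheta * math.log(1 + F / (psi * dtheta))
        if abs(F_new - F) < tol:
            F = F_new
            break
        F = F_new
    f = K * (psi * dtheta / F + 1)
    return F, f

# Example with made-up values: K = 1 cm/h, psi = 10 cm, dtheta = 0.3, t = 2 h.
F, f = green_ampt(K=1.0, psi=10.0, dtheta=0.3, t=2.0)
print(round(F, 3), round(f, 3))
```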
Named after the same Robert E. Horton mentioned above, Horton's equation [ 14 ] is another viable option when measuring ground infiltration rates or volumes. It is an empirical formula which says that infiltration starts at an initial rate f 0 {\displaystyle f_{0}} and decreases exponentially with time t {\displaystyle t} . After some time, when the soil saturation level reaches a certain value, the rate of infiltration levels off at the rate f c {\displaystyle f_{c}} .
Where
The other method of using Horton's equation is as below. It can be used to find the total volume of infiltration, F , after time t .
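A minimal sketch of Horton's approach, assuming its commonly cited rate and cumulative-volume forms (the article's own formulas are not shown above), follows; the values of f_0, f_c and the decay constant k are illustrative assumptions.

```python
import math

# Minimal sketch of Horton's equation, assuming its commonly cited forms
# (not reproduced from the article):
#   f(t) = f_c + (f_0 - f_c) * exp(-k*t)              (infiltration rate)
#   F(t) = f_c*t + (f_0 - f_c) * (1 - exp(-k*t)) / k  (cumulative volume)
# f_0, f_c and the decay constant k below are illustrative values.

def horton_rate(f0, fc, k, t):
    return fc + (f0 - fc) * math.exp(-k * t)

def horton_volume(f0, fc, k, t):
    return fc * t + (f0 - fc) * (1 - math.exp(-k * t)) / k

f0, fc, k = 6.0, 1.0, 0.5   # e.g. cm/h, cm/h, 1/h (illustrative)
for t in (0.0, 1.0, 4.0, 12.0):
    print(t, round(horton_rate(f0, fc, k, t), 3), round(horton_volume(f0, fc, k, t), 3))
```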
Named after its founder, the Kostiakov equation [ 15 ] is an empirical equation that assumes that the intake rate declines over time according to a power function.
Where a {\displaystyle a} and k {\displaystyle k} are empirical parameters.
The major limitation of this expression is its reliance on the zero final intake rate. In most cases, the infiltration rate instead approaches a finite steady value, which in some cases may occur after short periods of time. The Kostiakov-Lewis variant, also known as the "Modified Kostiakov" equation, corrects this by adding a steady intake term to the original equation. [ 16 ]
In integrated form, the cumulative volume is expressed as:
Where
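A minimal sketch of the Kostiakov and Kostiakov-Lewis forms is given below, assuming the commonly used power-function expressions for cumulative infiltration; the parameter names and values are illustrative assumptions rather than the article's notation.

```python
# Minimal sketch of the Kostiakov and Modified Kostiakov (Kostiakov-Lewis)
# equations, assuming their commonly used forms (not reproduced from the
# article): cumulative infiltration F = k*t**a, and, for the modified
# version, F = k*t**a + f_ss*t, where f_ss is the steady intake term.
# All parameter values are illustrative assumptions.

def kostiakov_volume(k, a, t):
    return k * t ** a

def modified_kostiakov_volume(k, a, f_ss, t):
    return k * t ** a + f_ss * t

k, a, f_ss = 2.0, 0.6, 0.3   # empirical parameters (illustrative)
for t in (0.5, 1.0, 4.0):
    print(t, round(kostiakov_volume(k, a, t), 3),
          round(modified_kostiakov_volume(k, a, f_ss, t), 3))
```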
This method of estimating infiltration uses a simplified version of Darcy's law . [ 14 ] Many would argue that this method is too simple and should not be used. It is similar to the Green and Ampt (1911) solution mentioned previously, but it is missing the cumulative infiltration depth and is therefore incomplete, because it assumes that the infiltration gradient occurs over some arbitrary length L {\displaystyle L} . In this model the ponded water is assumed to be equal to h 0 {\displaystyle h_{0}} and the head of dry soil that exists below the depth of the wetting front soil suction head is assumed to be equal to − ψ − L {\displaystyle -\psi -L} .
where
or | https://en.wikipedia.org/wiki/Infiltration_(hydrology) |
Infiltration and inflow ( I/I or I&I ) is the process of groundwater , or water from sources other than domestic wastewater, entering sanitary sewers . I/I causes dilution in sanitary sewers, which decreases the efficiency of treatment, and may cause sewage volumes to exceed design capacity. Although inflow is technically different from infiltration, it may be difficult to determine which is causing dilution problems in inaccessible sewers. The United States Environmental Protection Agency considers infiltration and inflow to be combined contributions from both. [ 1 ] [ 2 ]
Early combined sewers used surface runoff to dilute waste from toilets and carry it away from urban areas into natural waterways. Sewage treatment can remove some pollutants from toilet waste, but treatment of diluted flow from combined sewers produces larger volumes of treated sewage with similar pollutant concentrations. Modern sanitary sewers are designed to transport domestic and industrial wastewater directly to treatment facilities without dilution. [ 3 ]
Groundwater entering sanitary sewers through defective pipe joints and broken pipes is called infiltration . [ 4 ] Pipes may leak because of careless installation; they may also be damaged after installation by differential ground movement, heavy vehicle traffic on roadways above the sewer, careless construction practices in nearby trenches, or degradation of the sewer pipe materials. In general, volume of leakage will increase over time. Damaged and broken sewer cleanouts are a major cause of infiltration into municipal sewer systems. [ 5 ]
Infiltration will occur where local groundwater elevation is higher than the sewer pipe. Gravel bedding materials in sewer pipe trenches act as a French drain . Groundwater flows parallel to the sewer until it reaches the area of damaged pipe. In areas of low groundwater, sewage may exfiltrate into groundwater from a leaking sewer. [ 6 ]
Water entering sanitary sewers from inappropriate connections is called inflow . [ 4 ] Typical sources include sump pumps , roof drains, cellar drains, and yard drains where urban features prevent surface runoff, and storm drains are not conveniently accessible or identifiable. Inflow tends to peak during precipitation events, and causes greater flow variation than infiltration. Peak flows caused by inflow may generate a foul flush of accumulated biofilm and sanitary solids scoured from the dry weather wetted perimeter of oversized sewers during peak flow turbulence . [ 8 ] Sources of inflow can sometimes be identified by smoke testing . Smoke is blown into the sewer during dry weather while observers watch for smoke emerging from yards, cellars, or roof gutters. [ 9 ]
Dilution of sewage directly increases costs of pumping and chlorination , ozonation, or ultraviolet disinfection . Physical treatment structures including screens and pumps must be enlarged to handle the peak flow. Primary clarifiers must also be enlarged to treat average flows, although primary treatment of peak flows may be accomplished in detention basins . Biological secondary treatment is effective only while the concentration of soluble and colloidal pollutants (typically measured as biochemical oxygen demand or BOD) remains high enough to sustain a population of microorganisms digesting those pollutants. In U.S. federal regulations, secondary treatment is expected to remove 85 percent of soluble and colloidal organic pollutants from sewage containing 200 mg/L BOD. [ 10 ] BOD removal by conventional biological secondary treatment becomes less effective with dilution and practically ceases as BOD concentrations entering the treatment facility are diluted below about 20 mg/L. Unremoved organics are potentially converted to disinfection by-products by chemical disinfection prior to discharge.
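The effect of dilution on treatability can be illustrated with a simple flow-weighted mass balance. Apart from the 200 mg/L and 20 mg/L concentrations cited above, the flow figures and the helper function below are assumptions for illustration.

```python
# Simple mass-balance illustration of how infiltration/inflow dilutes influent
# BOD. Flow figures are illustrative assumptions; the 200 mg/L and 20 mg/L
# values echo the concentrations discussed above.

def diluted_bod(sewage_flow, sewage_bod, ii_flow, ii_bod=0.0):
    """Flow-weighted BOD concentration of the combined stream."""
    total_flow = sewage_flow + ii_flow
    return (sewage_flow * sewage_bod + ii_flow * ii_bod) / total_flow

# 1.0 unit of sanitary flow at 200 mg/L BOD plus 9.0 units of clean I/I water:
print(round(diluted_bod(1.0, 200.0, 9.0), 1))  # 20.0 mg/L, near the limit of
                                               # effective biological treatment
```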
High rates of infiltration and inflow may make the sanitary sewer incapable of carrying sewage from the design service area. Sewage may back up into the lowest homes during wet weather, or street manholes may overflow. [ 9 ]
Smoke test results may not correlate well with flow volumes, although they can identify potential problem locations. Where sewage flow is expected to be relatively uniform, significance of infiltration and inflow may be estimated by comparison of sewage flow at the same point during wet and dry weather or at two sequential points within the sewer system. Small areas with large flow differences can be identified if the sewer system provides adequate measuring locations. It may be necessary to replace a section of sewer line if flow differences cannot be corrected by removing identified connections. [ 9 ] | https://en.wikipedia.org/wiki/Infiltration_and_inflow |
An infiltration basin (or recharge basin ) is a form of engineered sump [ 1 ] or percolation pond [ 2 ] that is used to manage stormwater runoff , prevent flooding and downstream erosion , and improve water quality in an adjacent river , stream , lake or bay . It is essentially a shallow artificial pond that is designed to infiltrate stormwater through permeable soils into the groundwater aquifer . Infiltration basins do not release water except by infiltration, evaporation or emergency overflow during flood conditions. [ 3 ] [ 4 ] [ 5 ]
It is distinguished from a detention basin , sometimes called a dry pond , which is designed to discharge to a downstream water body (although it may incidentally infiltrate some of its volume to groundwater); and from a retention basin , which is designed to include a permanent pool of water.
Infiltration basins must be carefully designed so that stormwater infiltrates the soil on a given site at a rate that will not cause flooding. They may be less effective in areas with:
At some sites infiltration basins have worked effectively where the installation also includes an extended detention basin as a pretreatment stage, to remove sediment. [ 7 ] The basins may fail where they cannot be frequently maintained, and their use is discouraged in some areas of the United States . For example, they are not recommended for use in the U.S. state of Georgia , which has many areas with high clay soil content, unless soil on the particular site is modified ("engineered soil") during construction, to improve the infiltration characteristics. [ 8 ] | https://en.wikipedia.org/wiki/Infiltration_basin |
An infiltrometer is a device used to measure the rate of water infiltration into soil or other porous media. [ 1 ] Commonly used infiltrometers are single-ring and double-ring infiltrometers, disc permeameters , and falling head infiltrometers.
A single-ring infiltrometer involves driving a ring into the soil and supplying water in the ring either at constant head or falling head condition. Constant head refers to a condition where the amount of water in the ring is always held constant. Because infiltration capacity is the maximum infiltration rate, and runoff results if the infiltration rate exceeds the infiltration capacity, maintaining a constant head means the rate of water supplied corresponds to the infiltration capacity. The water is supplied with a Mariotte's bottle . Falling head refers to a condition where water is supplied in the ring and is allowed to drop with time. The operator records how much water goes into the soil for a given time period. The rate at which water goes into the soil is related to the soil's hydraulic conductivity .
A double ring infiltrometer requires two rings: an inner and outer ring. The purpose is to create a one-dimensional flow of water from the inner ring, as the analysis of data is simplified. If water is flowing in one dimension at steady state, and a unit gradient is present in the underlying soil, the infiltration rate is approximately equal to the saturated hydraulic conductivity.
An inner ring is driven into the ground, and a second bigger ring around that to help control the flow of water through the first ring. Water is supplied either with a constant or falling head condition, and the operator records how much water infiltrates from the inner ring into the soil over a given time period. The ASTM standard method [ 2 ] specifies inner and outer rings of 30 and 60 cm diameters, respectively.
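As a simple illustration of how constant-head readings are reduced to an infiltration rate, the sketch below converts a volume of water supplied to the inner ring over a time interval into a rate. The 30 cm inner-ring diameter follows the ASTM dimensions mentioned above, while the volume and time readings are illustrative assumptions.

```python
import math

# Minimal sketch: converting the volume of water supplied to a ring
# infiltrometer's inner ring into an infiltration rate. The 30 cm inner-ring
# diameter follows the ASTM dimensions mentioned above; the volume and time
# readings are illustrative assumptions.

def infiltration_rate(volume_cm3, minutes, ring_diameter_cm=30.0):
    """Return infiltration rate in cm/h for a constant-head reading."""
    area_cm2 = math.pi * (ring_diameter_cm / 2) ** 2
    depth_cm = volume_cm3 / area_cm2          # equivalent ponded depth infiltrated
    return depth_cm / (minutes / 60.0)        # cm per hour

# Example: 1,500 cm^3 infiltrated from the inner ring in 30 minutes.
print(round(infiltration_rate(1500.0, 30.0), 2))  # ~4.24 cm/h
```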
There are several challenges related to the use of ring infiltrometers: | https://en.wikipedia.org/wiki/Infiltrometer |
TriCore is a 32-bit microcontroller architecture from Infineon . It unites the elements of a RISC processor core, a microcontroller and a DSP in one chip package .
In 1999, Infineon launched the first generation of AUDO (Automotive unified processor) which is based on what the company describes as a 32-bit "unified RISC / MCU / DSP microcontroller core ", called TriCore, which as of 2011 is on its fourth generation, called AUDO MAX (version 1.6).
TriCore is a heterogeneous, asymmetric dual core architecture with a peripheral control processor that enables user modes and core system protection.
Infineon's AUDO families [ 1 ] target gasoline and diesel engine control units (ECUs), applications in hybrid and electric vehicles as well as transmission, active and passive safety and chassis applications. It also targets industrial applications, e.g. optimized motor control applications and signal processing.
Different models offer different combinations of memories, peripheral sets, frequencies, temperatures and packaging. Infineon also offers software claimed to help manufacturers meet SIL/ASIL [ 2 ] safety standards. All members of the AUDO family are binary-compatible and share the same development tools. An AUTOSAR library that enables existing code to be integrated is also available.
Infineon's portfolio includes microcontrollers with additional hardware features as well as SafeTcore safety software and a watchdog IC. [ 3 ]
AUDO families cover safety applications including active suspension and driver assistant systems and also EPS and chassis domain control. Some features of the product portfolio are memory protection, redundant peripherals, MemCheck units with integrated CRCs, ECC on memories, integrated test and debug functionality and FlexRay .
| https://en.wikipedia.org/wiki/Infineon_TriCore |
InfiniBand ( IB ) is a computer networking communications standard used in high-performance computing that features very high throughput and very low latency . It is used for data interconnect both among and within computers. InfiniBand is also used as either a direct or switched interconnect between servers and storage systems, as well as an interconnect between storage systems. It is designed to be scalable and uses a switched fabric network topology .
Between 2014 and June 2016, [ 1 ] it was the most commonly used interconnect in the TOP500 list of supercomputers.
Mellanox (acquired by Nvidia ) manufactures InfiniBand host bus adapters and network switches , which are used by large computer system and database vendors in their product lines. [ 2 ]
As a computer cluster interconnect, IB competes with Ethernet , Fibre Channel , and Intel Omni-Path . The technology is promoted by the InfiniBand Trade Association .
InfiniBand originated in 1999 from the merger of two competing designs: Future I/O and Next Generation I/O (NGIO). NGIO was led by Intel , with a specification released in 1998, [ 3 ] and joined by Sun Microsystems and Dell .
Future I/O was backed by Compaq , IBM , and Hewlett-Packard . [ 4 ] This led to the formation of the InfiniBand Trade Association (IBTA), which included both sets of hardware vendors as well as software vendors such as Microsoft .
At the time it was thought some of the more powerful computers were approaching the interconnect bottleneck of the PCI bus, in spite of upgrades like PCI-X . [ 5 ] Version 1.0 of the InfiniBand Architecture Specification was released in 2000. Initially the IBTA vision for IB was simultaneously a replacement for PCI in I/O, Ethernet in the machine room , cluster interconnect and Fibre Channel . IBTA also envisaged decomposing server hardware on an IB fabric .
Mellanox had been founded in 1999 to develop NGIO technology, but by 2001 shipped an InfiniBand product line called InfiniBridge at 10 Gbit/second speeds. [ 6 ] Following the burst of the dot-com bubble there was hesitation in the industry to invest in such a far-reaching technology jump. [ 7 ] By 2002, Intel announced that instead of shipping IB integrated circuits ("chips"), it would focus on developing PCI Express , and Microsoft discontinued IB development in favor of extending Ethernet. Sun Microsystems and Hitachi continued to support IB. [ 8 ]
In 2003, the System X supercomputer built at Virginia Tech used InfiniBand in what was estimated to be the third largest computer in the world at the time. [ 9 ] The OpenIB Alliance (later renamed OpenFabrics Alliance) was founded in 2004 to develop an open set of software for the Linux kernel. By February, 2005, the support was accepted into the 2.6.11 Linux kernel. [ 10 ] [ 11 ] In November 2005 storage devices finally were released using InfiniBand from vendors such as Engenio. [ 12 ] Cisco, desiring to keep technology superior to Ethernet off the market, adopted a "buy to kill" strategy. Cisco successfully killed InfiniBand switching companies such as Topspin via acquisition. [ 13 ] [ citation needed ]
Of the top 500 supercomputers in 2009, Gigabit Ethernet was the internal interconnect technology in 259 installations, compared with 181 using InfiniBand. [ 14 ] In 2010, market leaders Mellanox and Voltaire merged, leaving just one other IB vendor, QLogic , primarily a Fibre Channel vendor. [ 15 ] At the 2011 International Supercomputing Conference , links running at about 56 gigabits per second (known as FDR, see below), were announced and demonstrated by connecting booths in the trade show. [ 16 ] In 2012, Intel acquired QLogic's InfiniBand technology, leaving only one independent supplier. [ 17 ]
By 2014, InfiniBand was the most popular internal connection technology for supercomputers, although within two years, 10 Gigabit Ethernet started displacing it. [ 1 ]
In 2016, it was reported that Oracle Corporation (an investor in Mellanox) might engineer its own InfiniBand hardware. [ 2 ]
In 2019 Nvidia acquired Mellanox, the last independent supplier of InfiniBand products. [ 18 ]
Specifications are published by the InfiniBand trade association.
Original names for speeds were single-data rate (SDR), double-data rate (DDR) and quad-data rate (QDR) as given below. [ 12 ] Subsequently, other three-letter acronyms were added for even higher data rates. [ 19 ]
Each link is duplex. Links can be aggregated: most systems use a 4 link/lane connector (QSFP). HDR often makes use of 2x links (aka HDR100, a 100 Gb link using 2 lanes of HDR, while still using a QSFP connector). 8x links are called for with NDR switch ports using OSFP (Octal Small Form Factor Pluggable) connectors.
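The arithmetic behind lane aggregation can be sketched as follows. The per-lane signaling rates and encoding efficiencies used here are the figures commonly cited for the older speed grades; they are assumptions for illustration and are not taken from this article.

```python
# Illustration of InfiniBand link aggregation arithmetic. The per-lane raw
# signaling rates and encoding efficiencies below are commonly cited figures
# for the older speed grades; they are assumptions here, not taken from the
# article's tables.

lane_specs = {
    # name: (raw Gbit/s per lane, encoding efficiency)
    "SDR": (2.5, 8 / 10),       # 8b/10b encoding
    "DDR": (5.0, 8 / 10),
    "QDR": (10.0, 8 / 10),
    "FDR": (14.0625, 64 / 66),  # 64b/66b encoding
    "EDR": (25.78125, 64 / 66),
}

def link_throughput(name, lanes=4):
    """Effective Gbit/s for an aggregated link of the given number of lanes."""
    raw, eff = lane_specs[name]
    return raw * eff * lanes

for name in lane_specs:
    print(name, "4x ~", round(link_throughput(name), 2), "Gbit/s effective")
# e.g. QDR 4x ~ 32 Gbit/s effective, EDR 4x ~ 100 Gbit/s effective.
```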
InfiniBand provides remote direct memory access (RDMA) capabilities for low CPU overhead.
InfiniBand uses a switched fabric topology, as opposed to early shared medium Ethernet . All transmissions begin or end at a channel adapter. Each processor contains a host channel adapter (HCA) and each peripheral has a target channel adapter (TCA). These adapters can also exchange information for security or quality of service (QoS).
InfiniBand transmits data in packets of up to 4 KB that are taken together to form a message. A message can be:
In addition to a board form factor connection, it can use both active and passive copper (up to 10 meters) and optical fiber cable (up to 10 km). [ 31 ] QSFP connectors are used.
The InfiniBand Association also specified the CXP connector system for speeds up to 120 Gbit/s over copper, active optical cables, and optical transceivers using parallel multi-mode fiber cables with 24-fiber MPO connectors. [ citation needed ]
Mellanox operating system support is available for Solaris , FreeBSD , [ 32 ] [ 33 ] Red Hat Enterprise Linux , SUSE Linux Enterprise Server (SLES), Windows , HP-UX , VMware ESX , [ 34 ] and AIX . [ 35 ]
InfiniBand has no specific standard application programming interface (API). The standard only lists a set of verbs such as ibv_open_device or ibv_post_send , which are abstract representations of functions or methods that must exist. The syntax of these functions is left to the vendors. Sometimes for reference this is called the verbs API. The de facto standard software is developed by OpenFabrics Alliance and called the Open Fabrics Enterprise Distribution (OFED). It is released under a choice of two licenses, GPL2 or BSD license, for Linux and FreeBSD, and as Mellanox OFED for Windows (product names: WinOF / WinOF-2; attributed as host controller driver for matching specific ConnectX 3 to 5 devices) [ 36 ] under a BSD license.
It has been adopted by most of the InfiniBand vendors, for Linux , FreeBSD , and Microsoft Windows . IBM refers to a software library called libibverbs , for its AIX operating system, as well as "AIX InfiniBand verbs". [ 37 ] The Linux kernel support was integrated in 2005 into the kernel version 2.6.11. [ 38 ]
Ethernet over InfiniBand, abbreviated to EoIB, is an Ethernet implementation over the InfiniBand protocol and connector technology.
EoIB enables multiple Ethernet bandwidths varying on the InfiniBand (IB) version. [ 39 ] Ethernet's implementation of the Internet Protocol Suite , usually referred to as TCP/IP, is different in some details compared to the direct InfiniBand protocol in IP over IB (IPoIB). | https://en.wikipedia.org/wiki/InfiniBand |
In mathematics, infinitary combinatorics , or combinatorial set theory , is an extension of ideas in combinatorics to infinite sets .
Some of the things studied include continuous graphs and trees , extensions of Ramsey's theorem , and Martin's axiom .
Recent developments concern combinatorics of the continuum [ 1 ] and combinatorics on successors of singular cardinals. [ 2 ]
Write κ , λ {\displaystyle \kappa ,\lambda } for ordinals, m {\displaystyle m} for a cardinal number (finite or infinite) and n {\displaystyle n} for a natural number. Erdős & Rado (1956) introduced the notation κ → ( λ ) m n {\displaystyle \kappa \rightarrow (\lambda )_{m}^{n}}
as a shorthand way of saying that every partition of the set [ κ ] n {\displaystyle [\kappa ]^{n}} of n {\displaystyle n} -element subsets of κ {\displaystyle \kappa } into m {\displaystyle m} pieces has a homogeneous set of order type λ {\displaystyle \lambda } . A homogeneous set is in this case a subset of κ {\displaystyle \kappa } such that every n {\displaystyle n} -element subset is in the same element of the partition. When m {\displaystyle m} is 2 it is often omitted. Such statements are known as partition relations.
Assuming the axiom of choice , there are no ordinals κ {\displaystyle \kappa } with κ → ( ω ) ω {\displaystyle \kappa \rightarrow (\omega )^{\omega }} , so n {\displaystyle n} is usually taken to be finite. An extension where n {\displaystyle n} is almost allowed to be infinite is the notation
which is a shorthand way of saying that every partition of the set of finite subsets of κ {\displaystyle \kappa } into m {\displaystyle m} pieces has a subset of order type λ {\displaystyle \lambda } such that for any finite n {\displaystyle n} , all subsets of size n {\displaystyle n} are in the same element of the partition. When m {\displaystyle m} is 2 it is often omitted.
Another variation is the notation
which is a shorthand way of saying that every coloring of the set [ κ ] n {\displaystyle [\kappa ]^{n}} of n {\displaystyle n} -element subsets of κ {\displaystyle \kappa } with 2 colors has a subset of order type λ {\displaystyle \lambda } such that all elements of [ λ ] n {\displaystyle [\lambda ]^{n}} have the first color, or a subset of order type μ {\displaystyle \mu } such that all elements of [ μ ] n {\displaystyle [\mu ]^{n}} have the second color.
Some properties of this include: (in what follows κ {\displaystyle \kappa } is a cardinal)
In choiceless universes, partition properties with infinite exponents may hold, and some of them are obtained as consequences of the axiom of determinacy (AD). For example, Donald A. Martin proved that AD implies
Wacław Sierpiński showed that the Ramsey theorem does not extend to sets of size ℵ 1 {\displaystyle \aleph _{1}} by showing that 2 ℵ 0 ↛ ( ℵ 1 ) 2 2 {\displaystyle 2^{\aleph _{0}}\nrightarrow (\aleph _{1})_{2}^{2}} . That is, Sierpiński constructed a coloring of pairs of real numbers into two colors such that for every uncountable subset of real numbers X {\displaystyle X} , [ X ] 2 {\displaystyle [X]^{2}} takes both colors. Taking any set of real numbers of size ℵ 1 {\displaystyle \aleph _{1}} and applying the coloring of Sierpiński to it, we get that ℵ 1 ↛ ( ℵ 1 ) 2 2 {\displaystyle \aleph _{1}\not \rightarrow (\aleph _{1})_{2}^{2}} . Colorings such as this are known as strong colorings [ 3 ] and studied in set theory. Erdős, Hajnal & Rado (1965) introduced a similar notation as above for this.
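A sketch of the standard construction behind such a coloring (given for illustration; not necessarily Sierpiński's original presentation):

```latex
% Illustrative sketch of a strong coloring witnessing
% 2^{\aleph_0} \nrightarrow (\aleph_1)^2_2 (not quoted from the article).
% Fix a well-ordering \prec of the reals and, for x < y in the usual order, set
\[
  c(\{x,y\}) =
  \begin{cases}
    0 & \text{if } x \prec y,\\
    1 & \text{if } y \prec x.
  \end{cases}
\]
% A set of size \aleph_1 homogeneous in color 0 would be an \omega_1-sequence
% of reals increasing in the usual order (for color 1, decreasing). Either is
% impossible, because successive terms would enclose uncountably many pairwise
% distinct rationals, so no homogeneous set of order type \omega_1 exists.
```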
Write κ , λ {\displaystyle \kappa ,\lambda } for ordinals, m {\displaystyle m} for a cardinal number (finite or infinite) and n {\displaystyle n} for a natural number. Then κ ↛ [ λ ] m n {\displaystyle \kappa \nrightarrow [\lambda ]_{m}^{n}}
is a shorthand way of saying that there exists a coloring of the set [ κ ] n {\displaystyle [\kappa ]^{n}} of n {\displaystyle n} -element subsets of κ {\displaystyle \kappa } into m {\displaystyle m} pieces such that every set of order type λ {\displaystyle \lambda } is a rainbow set. A rainbow set is in this case a subset A {\displaystyle A} of κ {\displaystyle \kappa } such that [ A ] n {\displaystyle [A]^{n}} takes all m {\displaystyle m} colors. When m {\displaystyle m} is 2 it is often omitted. Such statements are known as negative square bracket partition relations.
Another variation is the notation
which is a shorthand way of saying that there exists a coloring of the set [ κ ] 2 {\displaystyle [\kappa ]^{2}} of 2-element subsets of κ {\displaystyle \kappa } with m {\displaystyle m} colors such that for every subset A {\displaystyle A} of order type λ {\displaystyle \lambda } and every subset B {\displaystyle B} of order type μ {\displaystyle \mu } , the set A × B {\displaystyle A\times B} takes all m {\displaystyle m} colors.
Some properties of this include: (in what follows κ {\displaystyle \kappa } is a cardinal)
Several large cardinal properties can be defined using this notation. In particular: | https://en.wikipedia.org/wiki/Infinitary_combinatorics |
An infinitary logic is a logic that allows infinitely long statements and/or infinitely long proofs . [ 1 ] The concept was introduced by Zermelo in the 1930s. [ 2 ]
Some infinitary logics may have different properties from those of standard first-order logic . In particular, infinitary logics may fail to be compact or complete . Notions of compactness and completeness that are equivalent in finitary logic sometimes are not so in infinitary logics. Therefore for infinitary logics, notions of strong compactness and strong completeness are defined. This article addresses Hilbert-type infinitary logics, as these have been extensively studied and constitute the most straightforward extensions of finitary logic. These are not, however, the only infinitary logics that have been formulated or studied.
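A standard illustration of how compactness can fail in an infinitary logic, given here as an example rather than drawn from this article, uses the language of arithmetic with an added constant symbol:

```latex
% Standard example (for illustration) of the failure of compactness in
% L_{\omega_1,\omega}. Add a constant symbol c to the language of arithmetic
% and let \underline{n} denote the n-th numeral.
\[
  T \;=\; \Bigl\{\, \bigvee_{n<\omega} c = \underline{n} \,\Bigr\}
        \;\cup\; \bigl\{\, c \neq \underline{n} \;:\; n < \omega \,\bigr\}.
\]
% Every finite subset of T has a model (interpret c as any numeral not yet
% excluded), but T itself has none, since the infinitary disjunction forces c
% to equal some numeral. Hence this logic is not compact.
```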
Considering whether a certain infinitary logic named Ω-logic is complete promises to throw light on the continuum hypothesis . [ 3 ]
As a language with infinitely long formulae is being presented, it is not possible to write such formulae down explicitly. To get around this problem a number of notational conveniences, which, strictly speaking, are not part of the formal language, are used. ⋯ {\displaystyle \cdots } is used to point out an expression that is infinitely long. Where it is unclear, the length of the sequence is noted afterwards. Where this notation becomes ambiguous or confusing, suffixes such as ⋁ γ < δ A γ {\displaystyle \bigvee _{\gamma <\delta }{A_{\gamma }}} are used to indicate an infinite disjunction over a set of formulae of cardinality δ {\displaystyle \delta } . The same notation may be applied to quantifiers, for example ∀ γ < δ V γ : {\displaystyle \forall _{\gamma <\delta }{V_{\gamma }:}} . This is meant to represent an infinite sequence of quantifiers: a quantifier for each V γ {\displaystyle V_{\gamma }} where γ < δ {\displaystyle \gamma <\delta } .
Such suffixes and ⋯ {\displaystyle \cdots } are notational conveniences only; they are not part of the formal infinitary languages.
The axiom of choice is assumed (as is often done when discussing infinitary logic) as this is necessary to have sensible distributivity laws.
A first-order infinitary language L κ , λ {\displaystyle L_{\kappa ,\lambda }} , κ {\displaystyle \kappa } regular , λ = 0 {\displaystyle \lambda =0} or ω ≤ λ ≤ κ {\displaystyle \omega \leq \lambda \leq \kappa } , has the same set of symbols as a finitary logic and may use all the rules for formation of formulae of a finitary logic together with some additional ones: [ 4 ]
The language may also have function, relation, and predicate symbols of finite arity. [ 5 ] Karp also defined languages L κ λ o π {\displaystyle L_{\kappa \,\lambda o\pi }} with π ≤ κ {\displaystyle \pi \leq \kappa } an infinite cardinal and some more complicated restrictions on o {\displaystyle \mathrm {o} } that allow for function and predicate symbols of infinite arity, with o {\displaystyle \mathrm {o} } controlling the maximum arity of a function symbol and π {\displaystyle \pi } controlling predicate symbols. [ 6 ]
The concepts of free and bound variables apply in the same manner to infinite formulae. Just as in finitary logic, a formula all of whose variables are bound is referred to as a sentence .
A theory T {\displaystyle T} in infinitary language L α , β {\displaystyle L_{\alpha ,\beta }} is a set of sentences in the logic. A proof in infinitary logic from a theory T {\displaystyle T} is a (possibly infinite) sequence of statements that obeys the following conditions: Each statement is either a logical axiom, an element of T {\displaystyle T} , or is deduced from previous statements using a rule of inference. As before, all rules of inference in finitary logic can be used, together with an additional one: given a set of statements A = { A γ ∣ γ < δ < α } {\displaystyle A=\{A_{\gamma }\mid \gamma <\delta <\alpha \}} that have occurred previously in the proof, the statement ⋀ γ < δ A γ {\displaystyle \bigwedge _{\gamma <\delta }{A_{\gamma }}} may be inferred.
If β < α {\displaystyle \beta <\alpha } , forming universal closures may not always be possible, however extra constant symbols may be added for each variable with the resulting satisfiability relation remaining the same. [ 8 ] To avoid this, some authors use a different definition of the language L α , β {\displaystyle L_{\alpha ,\beta }} forbidding formulas from having more than β {\displaystyle \beta } free variables. [ 9 ]
The logical axiom schemata specific to infinitary logic are presented below. Global schemata variables: δ {\displaystyle \delta } and γ {\displaystyle \gamma } such that 0 < δ < α {\displaystyle 0<\delta <\alpha } .
The last two axiom schemata require the axiom of choice because certain sets must be well orderable . The last axiom schema is strictly speaking unnecessary, as Chang's distributivity laws imply it; [ 10 ] however, it is included as a natural way to allow weakenings of the logic.
A theory is any set of sentences. The truth of statements in models is defined by recursion and agrees with the definition for finitary logic where both are defined. Given a theory T , a sentence is said to be valid for the theory T if it is true in all models of T .
A logic in the language L α , β {\displaystyle L_{\alpha ,\beta }} is complete if for every sentence S valid in every model there exists a proof of S . It is strongly complete if for any theory T for every sentence S valid in T there is a proof of S from T . An infinitary logic can be complete without being strongly complete.
A cardinal κ ≠ ω {\displaystyle \kappa \neq \omega } is weakly compact when for every theory T in L κ , κ {\displaystyle L_{\kappa ,\kappa }} containing at most κ {\displaystyle \kappa } many formulas, if every S ⊆ {\displaystyle \subseteq } T of cardinality less than κ {\displaystyle \kappa } has a model, then T has a model. A cardinal κ ≠ ω {\displaystyle \kappa \neq \omega } is strongly compact when for every theory T in L κ , κ {\displaystyle L_{\kappa ,\kappa }} , without restriction on size, if every S ⊆ {\displaystyle \subseteq } T of cardinality less than κ {\displaystyle \kappa } has a model, then T has a model.
In the language of set theory the following statement expresses foundation : ∀ γ < ω V γ : ¬ ⋀ γ < ω V γ + 1 ∈ V γ . {\displaystyle \forall _{\gamma <\omega }{V_{\gamma }:}\neg \bigwedge _{\gamma <\omega }{V_{\gamma +1}\in V_{\gamma }}.}
Unlike the axiom of foundation, this statement admits no non-standard interpretations. The concept of well-foundedness can only be expressed in a logic that allows infinitely many quantifiers in an individual statement. As a consequence many theories, including Peano arithmetic , which cannot be properly axiomatised in finitary logic, can be in a suitable infinitary logic. Other examples include the theories of non-archimedean fields and torsion-free groups . [ citation needed ] These three theories can be defined without the use of infinite quantification; only infinite junctions [ 11 ] are needed.
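As an illustration, the following standard example (only a sketch, using the notational conventions above) shows how a single countable disjunction suffices to pin down the standard model of arithmetic.

```latex
% In L_{\omega_1,\omega}: "every element is a numeral", i.e. is reached from 0
% by finitely many applications of the successor function S.
\[
  \forall x \, \bigvee_{n<\omega} \, x = \underbrace{S(\cdots S}_{n}(0)\cdots)
\]
% Adding this countable disjunction to the usual finitary Peano axioms excludes
% non-standard elements, so the resulting theory is categorical: every model is
% isomorphic to the standard natural numbers.  No infinite string of
% quantifiers is needed, only an infinite "junction".
```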
Truth predicates for countable languages are definable in L ω 1 , ω {\displaystyle {\mathcal {L}}_{\omega _{1},\omega }} . [ 12 ]
Two infinitary logics stand out in their completeness. These are the logics of L ω , ω {\displaystyle L_{\omega ,\omega }} and L ω 1 , ω {\displaystyle L_{\omega _{1},\omega }} . The former is standard finitary first-order logic and the latter is an infinitary logic that only allows statements of countable size.
The logic of L ω , ω {\displaystyle L_{\omega ,\omega }} is also strongly complete, compact and strongly compact.
The logic of L ω 1 , ω {\displaystyle L_{\omega _{1},\omega }} fails to be compact, but it is complete (under the axioms given above). Moreover, it satisfies a variant of the Craig interpolation property.
If the logic of L α , α {\displaystyle L_{\alpha ,\alpha }} is strongly complete (under the axioms given above) then α {\displaystyle \alpha } is strongly compact (because proofs in these logics cannot use α {\displaystyle \alpha } or more of the given axioms). | https://en.wikipedia.org/wiki/Infinitary_logic |
In mathematics, an infinite-dimensional Lebesgue measure is a measure defined on infinite-dimensional normed vector spaces , such as Banach spaces , which resembles the Lebesgue measure used in finite-dimensional spaces.
However, the traditional Lebesgue measure cannot be straightforwardly extended to all infinite-dimensional spaces due to a key limitation: any translation-invariant Borel measure on an infinite-dimensional separable Banach space must be either infinite for all sets or zero for all sets. Despite this, certain forms of infinite-dimensional Lebesgue-like measures can exist in specific contexts. These include non-separable spaces like the Hilbert cube , or scenarios where some typical properties of finite-dimensional Lebesgue measures are modified or omitted.
The Lebesgue measure λ {\displaystyle \lambda } on the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} is locally finite , strictly positive , and translation-invariant . That is: every point of R n {\displaystyle \mathbb {R} ^{n}} has an open neighbourhood of finite measure; every non-empty open subset of R n {\displaystyle \mathbb {R} ^{n}} has positive measure; and λ ( A + h ) = λ ( A ) {\displaystyle \lambda (A+h)=\lambda (A)} for every Borel set A ⊆ R n {\displaystyle A\subseteq \mathbb {R} ^{n}} and every h ∈ R n {\displaystyle h\in \mathbb {R} ^{n}} .
Motivated by their geometrical significance, constructing measures satisfying the above set properties for infinite-dimensional spaces such as the L p {\displaystyle L^{p}} spaces or path spaces is still an open and active area of research.
Let X {\displaystyle X} be an infinite-dimensional, separable Banach space. Then, the only locally finite and translation invariant Borel measure μ {\displaystyle \mu } on X {\displaystyle X} is a trivial measure . Equivalently, there is no locally finite, strictly positive, and translation invariant measure on X {\displaystyle X} . [ 1 ]
More generally: on a Polish group G {\displaystyle G} that is not locally compact , there cannot exist a σ-finite and left-invariant Borel measure. [ 1 ]
This theorem implies that on an infinite dimensional separable Banach space (which cannot be locally compact ) a measure that perfectly matches the properties of a finite dimensional Lebesgue measure does not exist.
Let X {\displaystyle X} be an infinite-dimensional, separable Banach space equipped with a locally finite translation-invariant measure μ {\displaystyle \mu } . To prove that μ {\displaystyle \mu } is the trivial measure, it is sufficient and necessary to show that μ ( X ) = 0. {\displaystyle \mu (X)=0.}
Like every separable metric space , X {\displaystyle X} is a Lindelöf space , which means that every open cover of X {\displaystyle X} has a countable subcover. It is, therefore, enough to show that there exists some open cover of X {\displaystyle X} by null sets because by choosing a countable subcover, the σ-subadditivity of μ {\displaystyle \mu } will imply that μ ( X ) = 0. {\displaystyle \mu (X)=0.}
Using local finiteness of the measure μ {\displaystyle \mu } , suppose that for some r > 0 , {\displaystyle r>0,} the open ball B ( r ) {\displaystyle B(r)} of radius r {\displaystyle r} has a finite μ {\displaystyle \mu } -measure. Since X {\displaystyle X} is infinite-dimensional, by Riesz's lemma there is an infinite sequence of pairwise disjoint open balls B n ( r / 4 ) , {\displaystyle B_{n}(r/4),} n ∈ N {\displaystyle n\in \mathbb {N} } , of radius r / 4 , {\displaystyle r/4,} with all the smaller balls B n ( r / 4 ) {\displaystyle B_{n}(r/4)} contained within B ( r ) . {\displaystyle B(r).} By translation invariance, all of these smaller balls have the same μ {\displaystyle \mu } -measure, and since the infinite sum of these equal μ {\displaystyle \mu } -measures is at most the finite measure of B ( r ) , {\displaystyle B(r),} each of the smaller balls must have μ {\displaystyle \mu } -measure zero.
By translation invariance, every open ball of radius r / 4 {\displaystyle r/4} in X {\displaystyle X} has zero μ {\displaystyle \mu } -measure, and taking the cover of X {\displaystyle X} consisting of all such balls completes the proof that μ ( X ) = 0 {\displaystyle \mu (X)=0} .
Here are some examples of infinite-dimensional Lebesgue measures that can exist if the conditions of the above theorem are relaxed.
One example, which works even on a separable Banach space, is the abstract Wiener space construction, similar to a product of Gaussian measures (these measures are not translation invariant). Another approach is to consider a Lebesgue measure of finite-dimensional subspaces within the larger space and look at prevalent and shy sets . [ 2 ]
The Hilbert cube carries the product Lebesgue measure [ 3 ] and the compact topological group given by the Tychonoff product of an infinite number of copies of the circle group is infinite-dimensional and carries a Haar measure that is translation-invariant. These two spaces can be mapped onto each other in a measure-preserving way by unwrapping the circles into intervals. The infinite product of the additive real numbers has the analogous product Haar measure, which is precisely the infinite-dimensional analog of the Lebesgue measure. [ citation needed ] | https://en.wikipedia.org/wiki/Infinite-dimensional_Lebesgue_measure |
An infinite-dimensional vector function is a function whose values lie in an infinite-dimensional topological vector space , such as a Hilbert space or a Banach space .
Such functions are applied in most sciences including physics .
Set f k ( t ) = t / k 2 {\displaystyle f_{k}(t)=t/k^{2}} for every positive integer k {\displaystyle k} and every real number t . {\displaystyle t.} Then the function f {\displaystyle f} defined by the formula f ( t ) = ( f 1 ( t ) , f 2 ( t ) , f 3 ( t ) , … ) , {\displaystyle f(t)=(f_{1}(t),f_{2}(t),f_{3}(t),\ldots )\,,} takes values that lie in the infinite-dimensional vector space X {\displaystyle X} (or R N {\displaystyle \mathbb {R} ^{\mathbb {N} }} ) of real-valued sequences . For example, f ( 2 ) = ( 2 , 2 4 , 2 9 , 2 16 , 2 25 , … ) . {\displaystyle f(2)=\left(2,{\frac {2}{4}},{\frac {2}{9}},{\frac {2}{16}},{\frac {2}{25}},\ldots \right).}
As a number of different topologies can be defined on the space X , {\displaystyle X,} to talk about the derivative of f , {\displaystyle f,} it is first necessary to specify a topology on X {\displaystyle X} or the concept of a limit in X . {\displaystyle X.}
Moreover, for any set A , {\displaystyle A,} there exist infinite-dimensional vector spaces having the (Hamel) dimension of the cardinality of A {\displaystyle A} (for example, the space of functions A → K {\displaystyle A\to K} with finitely-many nonzero elements, where K {\displaystyle K} is the desired field of scalars). Furthermore, the argument t {\displaystyle t} could lie in any set instead of the set of real numbers.
Most theorems on integration and differentiation of scalar functions can be generalized to vector-valued functions, often using essentially the same proofs . Perhaps the most important exception is that absolutely continuous functions need not equal the integrals of their (a.e.) derivatives (unless, for example, X {\displaystyle X} is a Hilbert space); see Radon–Nikodym theorem
A curve is a continuous map of the unit interval (or more generally, of a non−degenerate closed interval of real numbers) into a topological space . An arc is a curve that is also a topological embedding . A curve valued in a Hausdorff space is an arc if and only if it is injective .
If f : [ 0 , 1 ] → X , {\displaystyle f:[0,1]\to X,} where X {\displaystyle X} is a Banach space or another topological vector space then the derivative of f {\displaystyle f} can be defined in the usual way: f ′ ( t ) = lim h → 0 f ( t + h ) − f ( t ) h . {\displaystyle f'(t)=\lim _{h\to 0}{\frac {f(t+h)-f(t)}{h}}.}
If f {\displaystyle f} is a function of real numbers with values in a Hilbert space X , {\displaystyle X,} then the derivative of f {\displaystyle f} at a point t {\displaystyle t} can be defined as in the finite-dimensional case: f ′ ( t ) = lim h → 0 f ( t + h ) − f ( t ) h . {\displaystyle f'(t)=\lim _{h\to 0}{\frac {f(t+h)-f(t)}{h}}.} Most results of the finite-dimensional case also hold in the infinite-dimensional case, with some modifications. Differentiation can also be defined for functions of several variables (for example, t ∈ R n {\displaystyle t\in R^{n}} or even t ∈ Y , {\displaystyle t\in Y,} where Y {\displaystyle Y} is an infinite-dimensional vector space).
If X {\displaystyle X} is a Hilbert space then any derivative (and any other limit) can be computed componentwise: if f = ( f 1 , f 2 , f 3 , … ) {\displaystyle f=(f_{1},f_{2},f_{3},\ldots )} (that is, f = f 1 e 1 + f 2 e 2 + f 3 e 3 + ⋯ , {\displaystyle f=f_{1}e_{1}+f_{2}e_{2}+f_{3}e_{3}+\cdots ,} where e 1 , e 2 , e 3 , … {\displaystyle e_{1},e_{2},e_{3},\ldots } is an orthonormal basis of the space X {\displaystyle X} ), and f ′ ( t ) {\displaystyle f'(t)} exists, then f ′ ( t ) = ( f 1 ′ ( t ) , f 2 ′ ( t ) , f 3 ′ ( t ) , … ) . {\displaystyle f'(t)=(f_{1}'(t),f_{2}'(t),f_{3}'(t),\ldots ).} However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the actual topology of the Hilbert space.
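A small numerical sketch of componentwise differentiation, using the sequence-valued example f(t) = (t/k²) introduced above; the truncation length and finite-difference step are arbitrary choices made only for display.

```python
# f(t) = (t/1^2, t/2^2, t/3^2, ...): each component f_k is differentiable with
# f_k'(t) = 1/k^2, so the componentwise derivative of f is (1, 1/4, 1/9, ...).
def f(t, n_components=5):
    return [t / k**2 for k in range(1, n_components + 1)]

def componentwise_derivative(func, t, h=1e-6, n_components=5):
    """Central finite-difference approximation, taken component by component."""
    plus = func(t + h, n_components)
    minus = func(t - h, n_components)
    return [(p - m) / (2 * h) for p, m in zip(plus, minus)]

print(componentwise_derivative(f, 2.0))  # approximately [1.0, 0.25, 0.111..., 0.0625, 0.04]
print([1 / k**2 for k in range(1, 6)])   # exact componentwise values
# Caveat from the text: componentwise existence alone does not guarantee that
# the derivative exists in the norm of the Hilbert space.
```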
Most of the above hold for other topological vector spaces X {\displaystyle X} too. However, not as many classical results hold in the Banach space setting; for example, an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, most Banach spaces have no orthonormal bases.
If [ a , b ] {\displaystyle [a,b]} is an interval contained in the domain of a curve f {\displaystyle f} that is valued in a topological vector space then the vector f ( b ) − f ( a ) {\displaystyle f(b)-f(a)} is called the chord of f {\displaystyle f} determined by [ a , b ] {\displaystyle [a,b]} . [ 1 ] If [ c , d ] {\displaystyle [c,d]} is another interval in its domain then the two chords are said to be non−overlapping chords if [ a , b ] {\displaystyle [a,b]} and [ c , d ] {\displaystyle [c,d]} have at most one end−point in common. [ 1 ] Intuitively, two non−overlapping chords of a curve valued in an inner product space are orthogonal vectors if the curve makes a right angle turn somewhere along its path between its starting point and its ending point.
If every pair of non−overlapping chords is orthogonal, then such a right turn happens at every point of the curve; such a curve cannot be differentiable at any point. [ 1 ] A crinkled arc is an injective continuous curve with the property that any two non−overlapping chords are orthogonal vectors.
An example of a crinkled arc in the Hilbert L 2 {\displaystyle L^{2}} space L 2 ( 0 , 1 ) {\displaystyle L^{2}(0,1)} is: [ 2 ] f : [ 0 , 1 ] → L 2 ( 0 , 1 ) t ↦ 1 [ 0 , t ] {\displaystyle {\begin{alignedat}{4}f:\;&&[0,1]&&\;\to \;&L^{2}(0,1)\\[0.3ex]&&t&&\;\mapsto \;&\mathbb {1} _{[0,t]}\\\end{alignedat}}} where 1 [ 0 , t ] : ( 0 , 1 ) → { 0 , 1 } {\displaystyle \mathbb {1} _{[0,\,t]}:(0,1)\to \{0,1\}} is the indicator function defined by x ↦ { 1 if x ∈ [ 0 , t ] 0 otherwise {\displaystyle x\;\mapsto \;{\begin{cases}1&{\text{ if }}x\in [0,t]\\0&{\text{ otherwise }}\end{cases}}} A crinkled arc can be found in every infinite−dimensional Hilbert space because any such space contains a closed vector subspace that is isomorphic to L 2 ( 0 , 1 ) . {\displaystyle L^{2}(0,1).} [ 2 ] A crinkled arc f : [ 0 , 1 ] → X {\displaystyle f:[0,1]\to X} is said to be normalized if f ( 0 ) = 0 , {\displaystyle f(0)=0,} ‖ f ( 1 ) ‖ = 1 , {\displaystyle \|f(1)\|=1,} and the span of its image f ( [ 0 , 1 ] ) {\displaystyle f([0,1])} is a dense subset of X . {\displaystyle X.} [ 2 ]
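A small computational sanity check of this example (the specific interval endpoints below are arbitrary choices): the chord determined by [a, b] is the indicator function of (a, b], so the inner product of two chords is just the length of the overlap of their intervals.

```python
# For the crinkled arc f(t) = indicator of [0, t] in L^2(0,1), the chord
# determined by [a, b] is f(b) - f(a), i.e. the indicator of (a, b].  The inner
# product of two such chords equals the length of the overlap of the intervals,
# so chords over non-overlapping intervals are orthogonal.
def chord_inner_product(a, b, c, d):
    """<f(b) - f(a), f(d) - f(c)> in L^2(0,1) for f(t) = indicator of [0, t]."""
    return max(0.0, min(b, d) - max(a, c))

print(chord_inner_product(0.0, 0.3, 0.3, 0.7))    # 0.0  -> non-overlapping chords are orthogonal
print(chord_inner_product(0.0, 0.5, 0.25, 0.75))  # 0.25 -> overlapping chords are not
```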
Proposition [ 2 ] — Given any two normalized crinkled arcs in a Hilbert space, each is unitarily equivalent to a reparameterization of the other.
If h : [ 0 , 1 ] → [ 0 , 1 ] {\displaystyle h:[0,1]\to [0,1]} is an increasing homeomorphism then f ∘ h {\displaystyle f\circ h} is called a reparameterization of the curve f : [ 0 , 1 ] → X . {\displaystyle f:[0,1]\to X.} [ 1 ] Two curves f {\displaystyle f} and g {\displaystyle g} in an inner product space X {\displaystyle X} are unitarily equivalent if there exists a unitary operator L : X → X {\displaystyle L:X\to X} (which is an isometric linear bijection ) such that g = L ∘ f {\displaystyle g=L\circ f} (or equivalently, f = L − 1 ∘ g {\displaystyle f=L^{-1}\circ g} ).
The measurability of f {\displaystyle f} can be defined by a number of ways, most important of which are Bochner measurability and weak measurability .
The most important integrals of f {\displaystyle f} are called Bochner integral (when X {\displaystyle X} is a Banach space) and Pettis integral (when X {\displaystyle X} is a topological vector space). Both these integrals commute with linear functionals . Also L p {\displaystyle L^{p}} spaces have been defined for such functions. | https://en.wikipedia.org/wiki/Infinite-dimensional_vector_function |
Infinite Energy: The Magazine of New Energy Technology, [ 1 ] more commonly referred to simply as Infinite Energy , is a bi-monthly magazine published in New Hampshire that details theories and experiments concerning alternative energy , new science and new physics. The phrase "new energy" in the subtitle is a euphemism for perpetual motion . [ 2 ] The magazine was founded by the late Eugene Mallove , who was its editor-in-chief, [ 3 ] [ 4 ] and is owned by the non-profit New Energy Foundation . [ 5 ] It was established in 1994 as Cold Fusion magazine [ 6 ] and changed its name in March 1995. [ 7 ]
Topics of interest include "new hydrogen physics," also called cold fusion ; vacuum energy, or zero point energy ; and so-called "environmental energy" which they define as the attempt to violate the Second Law of Thermodynamics , [ 8 ] for example with a perpetual motion machine . This is done in pursuit of the founder's commitment to "unearthing new sources of energy and new paradigms in science." [ 5 ] The magazine has also published articles and book reviews that are critical of the Big Bang theory that describes the origin of the universe .
The magazine had a print run of 3,000 and was available on U.S. newsstands. Issues ranged in size from 48 to 100 pages.
Infinite Energy was founded by Dr. Eugene Mallove, a former chief science writer at the Massachusetts Institute of Technology (MIT), in response to what he and other proponents viewed as the premature dismissal of cold fusion by the mainstream scientific community. [ 9 ] The magazine emerged in the aftermath of the 1989 cold fusion controversy , when chemists Martin Fleischmann and Stanley Pons announced they had achieved nuclear fusion at room temperature—an extraordinary claim that drew global attention but was ultimately rejected by most physicists due to irreproducible results and methodological flaws. [ 2 ] [ 9 ]
Mallove, disillusioned by what he perceived as scientific misconduct and suppression of promising research, resigned from MIT and became one of the most vocal defenders of cold fusion , or what became known in later years as low-energy nuclear reactions (LENR). [ 10 ] [ 9 ] He launched Infinite Energy to serve as a platform for the continued exploration of LENR, alternative energy technologies, and unconventional scientific ideas that struggled to find a place in mainstream journals. [ 10 ] [ 2 ] [ 9 ]
Backed by the non-profit New Energy Foundation, [ 11 ] the magazine was published from Concord, New Hampshire , [ 12 ] and quickly became a hub for the cold fusion community, featuring articles, experimental reports, interviews, and editorials advocating for open inquiry and challenging the boundaries of accepted science. [ 10 ] Over the years, Infinite Energy also covered topics such as zero-point energy, over-unity devices, and breakthrough propulsion concepts, appealing to a niche readership interested in revolutionary, albeit controversial, scientific developments. [ 13 ]
Despite widespread skepticism from the broader scientific establishment, Infinite Energy persisted for decades, buoyed by a dedicated community of researchers and enthusiasts. [ 10 ] [ 4 ] The magazine’s existence reflects the enduring appeal of cold fusion and the broader tension between scientific orthodoxy and fringe innovation. [ 14 ]
In the 2000s, the editorship was taken over by György Egely; more recently, Bill Zebuhr wrote the editorials. Issue 167 (March–June 2024) is the most recent issue published.
Charles Platt , writing for Wired , described the magazine as "a wild grab bag of eye-popping assertions and evangelistic rants against the establishment", [ 14 ] though conceding that "at the same time, buried among the far-fetched claims were rigorous reports from credentialed scientists". [ 14 ] [ 10 ]
| https://en.wikipedia.org/wiki/Infinite_Energy_(magazine) |
The infinite alleles model is a mathematical model for calculating genetic mutations . The Japanese geneticist Motoo Kimura and American geneticist James F. Crow (1964) introduced the infinite alleles model , an attempt to determine for a finite diploid population what proportion of loci would be homozygous . This was, in part, motivated by assertions by other geneticists that more than 50 percent of Drosophila loci were heterozygous , a claim they initially doubted. In order to answer this question they assumed first, that there were a large enough number of alleles so that any mutation would lead to a different allele (that is the probability of back mutation to the original allele would be low enough to be negligible); and second, that the mutations would result in a number of different outcomes from neutral to deleterious .
They determined that in the neutral case, the probability that an individual would be homozygous, F , was: F = 1 4 N e u + 1 {\displaystyle F={\frac {1}{4N_{e}u+1}}} ,
where u is the mutation rate, and N e is the effective population size . The effective number of alleles n maintained in a population is defined as the inverse of the homozygosity, that is n = 1 F = 4 N e u + 1 , {\displaystyle n={\frac {1}{F}}=4N_{e}u+1,}
which is a lower bound for the actual number of alleles in the population.
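A short numerical illustration of these formulas; the effective population size and mutation rate below are made-up values chosen only for the example.

```python
# Neutral infinite alleles model (Kimura & Crow): homozygosity F = 1/(4*Ne*u + 1),
# effective number of alleles n = 1/F = 4*Ne*u + 1.
def homozygosity(effective_size, mutation_rate):
    return 1.0 / (4.0 * effective_size * mutation_rate + 1.0)

def effective_number_of_alleles(effective_size, mutation_rate):
    return 1.0 / homozygosity(effective_size, mutation_rate)

Ne, u = 10_000, 1e-5   # hypothetical effective population size and mutation rate
print(homozygosity(Ne, u))                 # ~0.714: most individuals homozygous
print(effective_number_of_alleles(Ne, u))  # 1.4 alleles effectively maintained
```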
If the effective population is large, then a large number of alleles can be maintained. However, this result only holds for the neutral case, and is not necessarily true for the case when some alleles are subject to selection , i.e. more or less fit than others, for example when the fittest genotype is a heterozygote (a situation often referred to as overdominance or heterosis ).
In the case of overdominance, because Mendel's second law (the law of segregation) necessarily results in the production of homozygotes (which are, by definition in this case, less fit), the population will always harbor a number of less fit individuals, which leads to a decrease in the average fitness of the population. This is sometimes referred to as genetic load ; in this case it is a special kind of load known as segregational load . Crow and Kimura showed that, under equilibrium conditions and for a given strength of selection ( s ), there would be an upper limit to the number of fitter alleles (polymorphisms) that a population could harbor at a particular locus. Beyond this number of alleles, the selective advantage of the presence of those alleles in heterozygous genotypes would be cancelled out by the continual generation of less fit homozygous genotypes.
These results became important in the formation of the neutral theory , because neutral (or nearly neutral) alleles create no such segregational load, and allow for the accumulation of a great deal of polymorphism. When Richard Lewontin and J. Hubby published their groundbreaking results in 1966 which showed high levels of genetic variation in Drosophila via protein electrophoresis , the theoretical results from the infinite alleles model were used by Kimura and others to support the idea that this variation would have to be neutral (or result in excess segregational load). | https://en.wikipedia.org/wiki/Infinite_alleles_model |
In mathematics , a group is said to have the infinite conjugacy class property , or to be an ICC group , if the conjugacy class of every group element but the identity is infinite . [ 1 ] : 907
The von Neumann group algebra of a group is a factor if and only if the group has the infinite conjugacy class property. It will then be, provided the group is nontrivial, of type II 1 , i.e. it will possess a unique, faithful, tracial state. [ 2 ]
Examples of ICC groups are the group of permutations of an infinite set that leave all but a finite subset of elements fixed, [ 1 ] : 908 and free groups on two generators. [ 1 ] : 908
In abelian groups , every conjugacy class consists of only one element, so ICC groups are, in a way, as far from being abelian as possible.
| https://en.wikipedia.org/wiki/Infinite_conjugacy_class_property |
In mathematics , a total order or linear order is a partial order in which any two elements are comparable. That is, a total order is a binary relation ≤ {\displaystyle \leq } on some set X {\displaystyle X} , which satisfies the following for all a , b {\displaystyle a,b} and c {\displaystyle c} in X {\displaystyle X} : 1. a ≤ a {\displaystyle a\leq a} ( reflexive ); 2. if a ≤ b {\displaystyle a\leq b} and b ≤ c {\displaystyle b\leq c} then a ≤ c {\displaystyle a\leq c} ( transitive ); 3. if a ≤ b {\displaystyle a\leq b} and b ≤ a {\displaystyle b\leq a} then a = b {\displaystyle a=b} ( antisymmetric ); 4. a ≤ b {\displaystyle a\leq b} or b ≤ a {\displaystyle b\leq a} ( strongly connected ).
Requirements 1. to 3. just make up the definition of a partial order.
Reflexivity (1.) already follows from strong connectedness (4.), but is required explicitly by many authors nevertheless, to indicate the kinship to partial orders. [ 1 ] Total orders are sometimes also called simple , [ 2 ] connex , [ 3 ] or full orders . [ 4 ]
A set equipped with a total order is a totally ordered set ; [ 5 ] the terms simply ordered set , [ 2 ] linearly ordered set , [ 3 ] [ 5 ] toset [ 6 ] and loset [ 7 ] [ 8 ] are also used. The term chain is sometimes defined as a synonym of totally ordered set , [ 5 ] but generally refers to a totally ordered subset of a given partially ordered set.
An extension of a given partial order to a total order is called a linear extension of that partial order.
For delimitation purposes, a total order as defined above is sometimes called non-strict order.
For each (non-strict) total order ≤ {\displaystyle \leq } there is an associated relation < {\displaystyle <} , called the strict total order associated with ≤ {\displaystyle \leq } that can be defined in two equivalent ways: a < b {\displaystyle a<b} if a ≤ b {\displaystyle a\leq b} and a ≠ b {\displaystyle a\neq b} ; or a < b {\displaystyle a<b} if not b ≤ a {\displaystyle b\leq a} (that is, < {\displaystyle <} is the complement of the converse of ≤ {\displaystyle \leq } ).
Conversely, the reflexive closure of a strict total order < {\displaystyle <} is a (non-strict) total order.
Thus, a strict total order on a set X {\displaystyle X} is a strict partial order on X {\displaystyle X} in which any two distinct elements are comparable. That is, a strict total order is a binary relation < {\displaystyle <} on some set X {\displaystyle X} , which satisfies the following for all a , b {\displaystyle a,b} and c {\displaystyle c} in X {\displaystyle X} : it is irreflexive (not a < a {\displaystyle a<a} ), asymmetric (if a < b {\displaystyle a<b} then not b < a {\displaystyle b<a} ), transitive (if a < b {\displaystyle a<b} and b < c {\displaystyle b<c} then a < c {\displaystyle a<c} ), and connected (if a ≠ b {\displaystyle a\neq b} then a < b {\displaystyle a<b} or b < a {\displaystyle b<a} ).
Asymmetry follows from transitivity and irreflexivity; [ 9 ] moreover, irreflexivity follows from asymmetry. [ 10 ]
The term chain is sometimes defined as a synonym for a totally ordered set, but it is generally used for referring to a subset of a partially ordered set that is totally ordered for the induced order. [ 1 ] [ 12 ] Typically, the partially ordered set is a set of subsets of a given set that is ordered by inclusion, and the term is used for stating properties of the set of the chains. This high number of nested levels of sets explains the usefulness of the term.
A common example of the use of chain for referring to totally ordered subsets is Zorn's lemma which asserts that, if every chain in a partially ordered set X has an upper bound in X , then X contains at least one maximal element. [ 13 ] Zorn's lemma is commonly used with X being a set of subsets; in this case, the upper bound is obtained by proving that the union of the elements of a chain in X is in X . This is the way that is generally used to prove that a vector space has Hamel bases and that a ring has maximal ideals .
In some contexts, the chains that are considered are order isomorphic to the natural numbers with their usual order or its opposite order . In this case, a chain can be identified with a monotone sequence , and is called an ascending chain or a descending chain , depending whether the sequence is increasing or decreasing. [ 14 ]
A partially ordered set has the descending chain condition if every descending chain eventually stabilizes. [ 15 ] For example, an order is well founded if it has the descending chain condition. Similarly, the ascending chain condition means that every ascending chain eventually stabilizes. For example, a Noetherian ring is a ring whose ideals satisfy the ascending chain condition.
In other contexts, only chains that are finite sets are considered. In this case, one talks of a finite chain , often shortened as a chain . In this case, the length of a chain is the number of inequalities (or set inclusions) between consecutive elements of the chain; that is, the number minus one of elements in the chain. [ 16 ] Thus a singleton set is a chain of length zero, and an ordered pair is a chain of length one. The dimension of a space is often defined or characterized as the maximal length of chains of subspaces. For example, the dimension of a vector space is the maximal length of chains of linear subspaces , and the Krull dimension of a commutative ring is the maximal length of chains of prime ideals .
"Chain" may also be used for some totally ordered subsets of structures that are not partially ordered sets. An example is given by regular chains of polynomials. Another example is the use of "chain" as a synonym for a walk in a graph .
One may define a totally ordered set as a particular kind of lattice , namely one in which we have { a ∨ b , a ∧ b } = { a , b } {\displaystyle \{a\vee b,a\wedge b\}=\{a,b\}} for all a , b {\displaystyle a,b} .
We then write a ≤ b if and only if a = a ∧ b {\displaystyle a=a\wedge b} . Hence a totally ordered set is a distributive lattice .
A simple counting argument will verify that any non-empty finite totally ordered set (and hence any non-empty subset thereof) has a least element. Thus every finite total order is in fact a well order . Either by direct proof or by observing that every well order is order isomorphic to an ordinal one may show that every finite total order is order isomorphic to an initial segment of the natural numbers ordered by <. In other words, a total order on a set with k elements induces a bijection with the first k natural numbers. Hence it is common to index finite total orders or well orders with order type ω by natural numbers in a fashion which respects the ordering (either starting with zero or with one).
Totally ordered sets form a full subcategory of the category of partially ordered sets , with the morphisms being maps which respect the orders, i.e. maps f such that if a ≤ b then f ( a ) ≤ f ( b ).
A bijective map between two totally ordered sets that respects the two orders is an isomorphism in this category.
For any totally ordered set X we can define the open intervals ( a , b ) = { x : a < x and x < b }, ( −∞ , b ) = { x : x < b }, ( a , ∞ ) = { x : a < x } and ( −∞ , ∞ ) = X .
We can use these open intervals to define a topology on any ordered set, the order topology .
When more than one order is being used on a set one talks about the order topology induced by a particular order. For instance if N is the natural numbers, < is less than and > greater than we might refer to the order topology on N induced by < and the order topology on N induced by > (in this case they happen to be identical but will not in general).
The order topology induced by a total order may be shown to be hereditarily normal .
A totally ordered set is said to be complete if every nonempty subset that has an upper bound , has a least upper bound . For example, the set of real numbers R is complete but the set of rational numbers Q is not. In other words, the various concepts of completeness (not to be confused with being "total") do not carry over to restrictions . For example, over the real numbers a property of the relation ≤ is that every non-empty subset S of R with an upper bound in R has a least upper bound (also called supremum) in R . However, for the rational numbers this supremum is not necessarily rational, so the same property does not hold on the restriction of the relation ≤ to the rational numbers.
There are a number of results relating properties of the order topology to the completeness of X:
A totally ordered set (with its order topology) which is a complete lattice is compact . Examples are the closed intervals of real numbers, e.g. the unit interval [0,1], and the affinely extended real number system (extended real number line). There are order-preserving homeomorphisms between these examples.
For any two disjoint total orders ( A 1 , ≤ 1 ) {\displaystyle (A_{1},\leq _{1})} and ( A 2 , ≤ 2 ) {\displaystyle (A_{2},\leq _{2})} , there is a natural order ≤ + {\displaystyle \leq _{+}} on the set A 1 ∪ A 2 {\displaystyle A_{1}\cup A_{2}} , which is called the sum of the two orders or sometimes just A 1 + A 2 {\displaystyle A_{1}+A_{2}} : for x , y ∈ A 1 ∪ A 2 {\displaystyle x,y\in A_{1}\cup A_{2}} , x ≤ + y {\displaystyle x\leq _{+}y} holds if and only if x , y ∈ A 1 {\displaystyle x,y\in A_{1}} and x ≤ 1 y {\displaystyle x\leq _{1}y} , or x , y ∈ A 2 {\displaystyle x,y\in A_{2}} and x ≤ 2 y {\displaystyle x\leq _{2}y} , or x ∈ A 1 {\displaystyle x\in A_{1}} and y ∈ A 2 {\displaystyle y\in A_{2}} .
Intuitively, this means that the elements of the second set are added on top of the elements of the first set.
More generally, if ( I , ≤ ) {\displaystyle (I,\leq )} is a totally ordered index set, and for each i ∈ I {\displaystyle i\in I} the structure ( A i , ≤ i ) {\displaystyle (A_{i},\leq _{i})} is a linear order, where the sets A i {\displaystyle A_{i}} are pairwise disjoint, then the natural total order on ⋃ i A i {\displaystyle \bigcup _{i}A_{i}} is defined by x ≤ y {\displaystyle x\leq y} if and only if x , y ∈ A i {\displaystyle x,y\in A_{i}} and x ≤ i y {\displaystyle x\leq _{i}y} for some i ∈ I {\displaystyle i\in I} , or x ∈ A i {\displaystyle x\in A_{i}} and y ∈ A j {\displaystyle y\in A_{j}} for some i < j {\displaystyle i<j} in I {\displaystyle I} .
The first-order theory of total orders is decidable , i.e. there is an algorithm for deciding which first-order statements hold for all total orders. Using interpretability in S2S , the monadic second-order theory of countable total orders is also decidable. [ 17 ]
There are several ways to take two totally ordered sets and extend them to an order on the Cartesian product , though the resulting order may only be partial . Here are three of these possible orders, listed such that each order is stronger than the next: the lexicographical order , the product order , and the reflexive closure of the direct product of the corresponding strict total orders (under which ( a , b ) ≤ ( c , d ) {\displaystyle (a,b)\leq (c,d)} if and only if ( a , b ) = ( c , d ) {\displaystyle (a,b)=(c,d)} , or both a < c {\displaystyle a<c} and b < d {\displaystyle b<d} ).
Each of these orders extends the next in the sense that if we have x ≤ y in the product order, this relation also holds in the lexicographic order, and so on. All three can similarly be defined for the Cartesian product of more than two sets.
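A brief illustration of the difference between the first two orders, using Python's built-in tuple comparison (which happens to implement the lexicographic order) alongside a hand-written product-order test; the sample pairs are arbitrary.

```python
# Lexicographic order: Python's built-in tuple comparison compares components
# left to right, so <= on tuples is exactly the lexicographic order.
def lex_leq(x, y):
    return x <= y

# Product order: every component must satisfy <=.
def product_leq(x, y):
    return all(a <= b for a, b in zip(x, y))

a, b = (1, 5), (2, 3)
print(lex_leq(a, b))       # True  (decided by 1 < 2)
print(product_leq(a, b))   # False (5 <= 3 fails), so this pair is incomparable

# Whenever the product order holds, the lexicographic order holds as well.
c, d = (1, 3), (2, 5)
print(product_leq(c, d), lex_leq(c, d))  # True True
```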
Applied to the vector space R n , each of these make it an ordered vector space .
See also examples of partially ordered sets .
A real function of n real variables defined on a subset of R n defines a strict weak order and a corresponding total preorder on that subset.
All definitions tacitly require the homogeneous relation R {\displaystyle R} be transitive : for all a , b , c , {\displaystyle a,b,c,} if a R b {\displaystyle aRb} and b R c {\displaystyle bRc} then a R c . {\displaystyle aRc.} A term's definition may require additional properties that are not listed in this table.
A binary relation that is antisymmetric, transitive, and reflexive (but not necessarily total) is a partial order .
A group with a compatible total order is a totally ordered group .
There are only a few nontrivial structures that are (interdefinable as) reducts of a total order. Forgetting the orientation results in a betweenness relation . Forgetting the location of the ends results in a cyclic order . Forgetting both data results in a separation relation , which uses point-pair separation to distinguish, on a circle, the two intervals determined by a point-pair. [ 18 ]
Infinite divisibility arises in different ways in philosophy , physics , economics , order theory (a branch of mathematics), and probability theory (also a branch of mathematics). One may speak of infinite divisibility, or the lack thereof, of matter , space , time , money , or abstract mathematical objects such as the continuum .
The origin of the idea in the Western tradition can be traced to the 5th century BCE starting with the Ancient Greek pre-Socratic philosopher Democritus and his teacher Leucippus , who theorized matter's divisibility beyond what can be perceived by the senses until ultimately ending at an indivisible atom. The Indian philosopher, Maharshi Kanada also proposed an atomistic theory, however there is ambiguity around when this philosopher lived, ranging from sometime between the 6th century to 2nd century BCE. Around 500 BC, he postulated that if we go on dividing matter ( padarth ), we shall get smaller and smaller particles. Ultimately, a time will come when we shall come across the smallest particles beyond which further division will not be possible. He named these particles Parmanu . Another Indian philosopher, Pakudha Katyayama , elaborated this doctrine and said that these particles normally exist in a combined form which gives us various forms of matter. [ 1 ] [ 2 ] Atomism is explored in Plato 's dialogue Timaeus . Aristotle proves that both length and time are infinitely divisible, refuting atomism. [ 3 ] Andrew Pyle gives a lucid account of infinite divisibility in the first few pages of his Atomism and its Critics . There he shows how infinite divisibility involves the idea that there is some extended item , such as an apple, which can be divided infinitely many times, where one never divides down to point, or to atoms of any sort. Many philosophers [ who? ] claim that infinite divisibility involves either a collection of an infinite number of items (since there are infinite divisions, there must be an infinite collection of objects), or (more rarely), point-sized items , or both. Pyle states that the mathematics of infinitely divisible extensions involve neither of these — that there are infinite divisions, but only finite collections of objects and they never are divided down to point extension-less items.
In Zeno's arrow paradox , Zeno questioned how an arrow can move if at one moment it is here and motionless and at a later moment be somewhere else and motionless.
Zeno's reasoning, however, is fallacious, when he says that if everything when it occupies an equal space is at rest, and if that which is in locomotion is always occupying such a space at any moment, the flying arrow is therefore motionless. This is false, for time is not composed of indivisible moments any more than any other magnitude is composed of indivisibles. [ 4 ]
In reference to Zeno's paradox of the arrow in flight, Alfred North Whitehead writes that "an infinite number of acts of becoming may take place in a finite time if each subsequent act is smaller in a convergent series": [ 5 ]
The argument, so far as it is valid, elicits a contradiction from the two premises: (i) that in a becoming something ( res vera ) becomes, and (ii) that every act of becoming is divisible into earlier and later sections which are themselves acts of becoming. Consider, for example, an act of becoming during one second. The act is divisible into two acts, one during the earlier half of the second, the other during the later half of the second. Thus that which becomes during the whole second presupposes that which becomes during the first half-second. Analogously, that which becomes during the first half-second presupposes that which becomes during the first quarter-second, and so on indefinitely. Thus if we consider the process of becoming up to the beginning of the second in question, and ask what then becomes, no answer can be given. For, whatever creature we indicate presupposes an earlier creature which became after the beginning of the second and antecedently to the indicated creature. Therefore there is nothing which becomes, so as to effect a transition into the second in question. [ 5 ]
Until the discovery of quantum mechanics , no distinction was made between the question of whether matter is infinitely divisible and the question of whether matter can be cut into smaller parts ad infinitum .
As a result, the Greek word átomos ( ἄτομος ), which literally means "uncuttable", is usually translated as "indivisible". Whereas the modern atom is indeed divisible, it actually is uncuttable: there is no partition of space such that its parts correspond to material parts of the atom. In other words, the quantum-mechanical description of matter no longer conforms to the cookie cutter paradigm. [ 6 ] This casts fresh light on the ancient conundrum of the divisibility of matter. The multiplicity of a material object—the number of its parts—depends on the existence, not of delimiting surfaces, but of internal spatial relations (relative positions between parts), and these lack determinate values. According to the Standard Model of particle physics, the particles that make up an atom— quarks and electrons —are point particles : they do not take up space. What makes an atom nevertheless take up space is not any spatially extended "stuff" that "occupies space", and that might be cut into smaller and smaller pieces, but the indeterminacy of its internal spatial relations.
Physical space is often regarded as infinitely divisible: it is thought that any region in space, no matter how small, could be further split. Time is similarly considered as infinitely divisible.
However, according to the best currently accepted theory in physics, the Standard Model , there is a distance (called the Planck length , 1.616229(38)×10 −35 metres, named after one of the fathers of Quantum Theory, Max Planck ) and therefore a time interval (the amount of time which light takes to traverse that distance in a vacuum, 5.39116(13) × 10 −44 seconds, known as the Planck time ) at which the Standard Model is expected to break down – effectively making this the smallest physical scale about which meaningful statements can be currently made. To predict the physical behaviour of space-time and fundamental particles at smaller distances requires a new theory of Quantum Gravity , which unifies the hitherto incompatible theories of Quantum Mechanics and General Relativity. [ citation needed ]
One dollar , or one euro , is divided into 100 cents; one can only pay in increments of a cent. It is quite commonplace for prices of some commodities such as gasoline to be in increments of a tenth of a cent per gallon or per litre. If gasoline costs $3.979 per gallon and one buys 10 gallons, then the "extra" 9/10 of a cent comes to ten times that: an "extra" 9 cents, so the cent in that case gets paid. Money is infinitely divisible in the sense that it is based upon the real number system. However, modern day coins are not divisible (in the past some coins were weighed with each transaction, and were considered divisible with no particular limit in mind). There is a point of precision in each transaction that is useless because such small amounts of money are insignificant to humans. The more the price is multiplied the more the precision could matter. For example, when buying a million shares of stock, the buyer and seller might be interested in a tenth of a cent price difference, but it's only a choice. Everything else in business measurement and choice is similarly divisible to the degree that the parties are interested. For example, financial reports may be reported annually, quarterly, or monthly. Some business managers run cash-flow reports more than once per day.
Although time may be infinitely divisible, data on securities prices are reported at discrete times. For example, if one looks at records of stock prices in the 1920s, one may find the prices at the end of each day, but perhaps not at three-hundredths of a second after 12:47 PM. A new method, however, theoretically, could report at double the rate, which would not prevent further increases of velocity of reporting. Perhaps paradoxically, technical mathematics applied to financial markets is often simpler if infinitely divisible time is used as an approximation. Even in those cases, a precision is chosen with which to work, and measurements are rounded to that approximation. In terms of human interaction, money and time are divisible, but only to the point where further division is not of value, which point cannot be determined exactly.
To say that the field of rational numbers is infinitely divisible (i.e. order theoretically dense ) means that between any two rational numbers there is another rational number. By contrast, the ring of integers is not infinitely divisible.
Infinite divisibility does not imply gaplessness: the rationals do not enjoy the least upper bound property . That means that if one were to partition the rationals into two non-empty sets A and B where A contains all rationals less than some irrational number ( π , say) and B all rationals greater than it, then A has no largest member and B has no smallest member. The field of real numbers , by contrast, is both infinitely divisible and gapless. Any linearly ordered set that is infinitely divisible and gapless, and has more than one member, is uncountably infinite . For a proof, see Cantor's first uncountability proof . Infinite divisibility alone implies infiniteness but not uncountability, as the rational numbers exemplify.
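A tiny computational illustration of density, using Python's exact rational arithmetic; the two endpoints below are arbitrary: between any two distinct rationals, their average is another rational strictly between them.

```python
from fractions import Fraction

# Density of the rationals: between any two distinct rationals a < b there is
# another rational, for example their average.
def between(a: Fraction, b: Fraction) -> Fraction:
    return (a + b) / 2

a, b = Fraction(1, 3), Fraction(2, 5)
m = between(a, b)
print(a < m < b, m)   # True 11/30
```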
To say that a probability distribution F on the real line is infinitely divisible means that if X is any random variable whose distribution is F , then for every positive integer n there exist n independent identically distributed random variables X 1 , ..., X n whose sum is equal in distribution to X (those n other random variables do not usually have the same probability distribution as X ).
The Poisson distribution , the stuttering Poisson distribution, [ citation needed ] the negative binomial distribution , and the Gamma distribution are examples of infinitely divisible distributions — as are the normal distribution , Cauchy distribution and all other members of the stable distribution family. The skew-normal distribution is an example of a non-infinitely divisible distribution. (See Domínguez-Molina and Rocha-Arteaga (2007).)
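A small simulation sketch of the definition, using the Poisson case (the sample size, rate, and choice of n are arbitrary, and numpy is assumed to be available): a Poisson(λ) variable has the same distribution as the sum of n independent Poisson(λ/n) variables.

```python
import numpy as np

# Infinite divisibility of the Poisson distribution: a Poisson(lam) variable is
# equal in distribution to the sum of n i.i.d. Poisson(lam/n) variables.
rng = np.random.default_rng(0)
lam, n, size = 3.0, 7, 200_000

direct = rng.poisson(lam, size)                        # X ~ Poisson(lam)
summed = rng.poisson(lam / n, (n, size)).sum(axis=0)   # X_1 + ... + X_n

# The two samples should have (approximately) matching moments and frequencies.
print(direct.mean(), summed.mean())   # both close to 3.0
print(direct.var(), summed.var())     # both close to 3.0
print(np.bincount(direct, minlength=8)[:8] / size)
print(np.bincount(summed, minlength=8)[:8] / size)
```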
Every infinitely divisible probability distribution corresponds in a natural way to a Lévy process , i.e., a stochastic process { X t : t ≥ 0 } with stationary independent increments ( stationary means that for s < t , the probability distribution of X t − X s depends only on t − s ; independent increments means that that difference is independent of the corresponding difference on any interval not overlapping with [ s , t ], and similarly for any finite number of intervals).
This concept of infinite divisibility of probability distributions was introduced in 1929 by Bruno de Finetti . | https://en.wikipedia.org/wiki/Infinite_divisibility |
In mathematics , an infinite expression is an expression in which some operators take an infinite number of arguments , or in which the nesting of the operators continues to an infinite depth. [ 1 ] A generic concept for infinite expression can lead to ill-defined or self-inconsistent constructions (much like a set of all sets ), but there are several instances of infinite expressions that are well-defined.
Examples of well-defined infinite expressions are [ 2 ] infinite sums (whether written with summation notation or as infinite series ), infinite products , infinite nested radicals , continued fractions , and infinite power towers .
In infinitary logic , one can use infinite conjunctions and infinite disjunctions .
Even for well-defined infinite expressions, the value of the infinite expression may be ambiguous or not well-defined; for instance, there are multiple summation rules available for assigning values to series, and the same series may have different values according to different summation rules if the series is not absolutely convergent . | https://en.wikipedia.org/wiki/Infinite_expression |
In topology, a branch of mathematics, given a topological monoid X up to homotopy (in a nice way), an infinite loop space machine produces a group completion of X together with infinite loop space structure. For example, one can take X to be the classifying space of a symmetric monoidal category S ; that is, X = B S {\displaystyle X=BS} . Then the machine produces the group completion B S → K ( S ) {\displaystyle BS\to K(S)} . The space K ( S ) {\displaystyle K(S)} may be described by the K-theory spectrum of S .
In 1977 Robert Thomason proved the equivalence of all infinite loop space machines [ 1 ] (he was just 25 years old at the time). He published this result the following year in a joint paper with John Peter May.
| https://en.wikipedia.org/wiki/Infinite_loop_space_machine |
In algebraic geometry , an infinitely near point of an algebraic surface S is a point on a surface obtained from S by repeatedly blowing up points. Infinitely near points of algebraic surfaces were introduced by Max Noether ( 1876 ). [ 1 ]
There are some other meanings of "infinitely near point". Infinitely near points can also be defined for higher-dimensional varieties: there are several inequivalent ways to do this, depending on what one is allowed to blow up. Weil gave a definition of infinitely near points of smooth varieties, [ 2 ] though these are not the same as infinitely near points in algebraic geometry.
In the line of hyperreal numbers , an extension of the real number line, two points are called infinitely near if their difference is infinitesimal .
When blowing up is applied to a point P on a surface S , the new surface S * contains a whole curve C where P used to be. The points of C have the geometric interpretation as the tangent directions at P to S . They can be called infinitely near to P as a way of visualizing them on S , rather than on S *. More generally, this construction can be iterated by blowing up a point on the new curve C , and so on.
An infinitely near point (of order n ) P n on a surface S 0 is given by a sequence of points P 0 , P 1 ,..., P n on surfaces S 0 , S 1 ,..., S n such that S i is given by blowing up S i –1 at the point P i –1 and P i is a point of the surface S i with image P i –1 .
In particular the points of the surface S are the infinitely near points on S of order 0.
Infinitely near points correspond to 1-dimensional valuations of the function field of S with 0-dimensional center, and in particular correspond to some of the points of the Zariski–Riemann surface . (The 1-dimensional valuations with 1-dimensional center correspond to irreducible curves of S .) It is also possible to iterate the construction infinitely often, producing an infinite sequence P 0 , P 1 ,... of infinitely near points. These infinite sequences correspond to the 0-dimensional valuations of the function field of the surface, which correspond to the "0-dimensional" points of the Zariski–Riemann surface .
If C and D are distinct irreducible curves on a smooth surface S intersecting at a point p , then the multiplicity of their intersection at p is given by ∑ x m x ( C ) m x ( D ) , {\displaystyle \sum _{x}m_{x}(C)m_{x}(D),} where the sum runs over p and the infinitely near points of p lying on both C and D , and
where m x ( C ) is the multiplicity of C at x . In general this is larger than m p ( C ) m p ( D ) if C and D have a common tangent line at x so that they also intersect at infinitely near points of order greater than 0, for example if C is the line y = 0 and D is the parabola y = x 2 and p = (0,0).
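A short worked instance of the example just mentioned may help (this is a sketch; the blow-up chart and coordinates below are the usual ones, supplied here only for illustration).

```latex
% Worked instance: C is the line y = 0, D is the parabola y = x^2, p = (0,0).
% Both curves are smooth at p, so m_p(C) = m_p(D) = 1, and they share the
% tangent line y = 0 there.  Blow up p; in the chart with coordinates (x, y'),
% where y = x y', the strict transforms are
%   C' : y' = 0        and        D' : y' = x.
% They meet the exceptional curve at the same infinitely near point
% q = (0, 0) of order 1, with m_q(C) = m_q(D) = 1, and have distinct tangents
% there, so no further infinitely near points are shared.  Hence
\[
  (C \cdot D)_p \;=\; m_p(C)\,m_p(D) + m_q(C)\,m_q(D) \;=\; 1\cdot 1 + 1\cdot 1 \;=\; 2,
\]
% which matches the direct computation: substituting y = 0 into y = x^2 gives
% x^2 = 0, a double root at x = 0.
```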
The genus of C is given by g ( C ) = g ( N ) = 1 2 ( C ⋅ C + C ⋅ K ) + 1 − ∑ x m x ( m x − 1 ) 2 , {\displaystyle g(C)=g(N)={\tfrac {1}{2}}(C\cdot C+C\cdot K)+1-\sum _{x}{\frac {m_{x}(m_{x}-1)}{2}},} with K the canonical divisor of the surface S , and
where N is the normalization of C and m x is the multiplicity of the infinitely near point x on C . | https://en.wikipedia.org/wiki/Infinitely_near_point |
In mathematics , an infinitesimal number is a non-zero quantity that is closer to 0 than any non-zero real number is. The word infinitesimal comes from a 17th-century Modern Latin coinage infinitesimus , which originally referred to the " infinity - eth " item in a sequence .
Infinitesimals do not exist in the standard real number system, but they do exist in other number systems, such as the surreal number system and the hyperreal number system , which can be thought of as the real numbers augmented with both infinitesimal and infinite quantities; the augmentations are the reciprocals of one another.
Infinitesimal numbers were introduced in the development of calculus , in which the derivative was first conceived as a ratio of two infinitesimal quantities. This definition was not rigorously formalized . As calculus developed further, infinitesimals were replaced by limits , which can be calculated using the standard real numbers.
In the 3rd century BC Archimedes used what eventually came to be known as the method of indivisibles in his work The Method of Mechanical Theorems to find areas of regions and volumes of solids. [ 1 ] In his formal published treatises, Archimedes solved the same problem using the method of exhaustion .
Infinitesimals regained popularity in the 20th century with Abraham Robinson 's development of nonstandard analysis and the hyperreal numbers , which, after centuries of controversy, showed that a formal treatment of infinitesimal calculus was possible. Following this, mathematicians developed the surreal numbers, a related formalization of infinite and infinitesimal numbers that includes both the hyperreal numbers and the ordinal numbers and forms the largest ordered field .
Vladimir Arnold wrote in 1990:
Nowadays, when teaching analysis, it is not very popular to talk about infinitesimal quantities. Consequently, present-day students are not fully in command of this language. Nevertheless, it is still necessary to have command of it. [ 2 ]
The crucial insight [ whose? ] for making infinitesimals feasible mathematical entities was that they could still retain certain properties such as angle or slope , even if these entities were infinitely small. [ 3 ]
Infinitesimals are a basic ingredient in calculus as developed by Leibniz , including the law of continuity and the transcendental law of homogeneity . In common speech, an infinitesimal object is an object that is smaller than any feasible measurement, but not zero in size—or, so small that it cannot be distinguished from zero by any available means. Hence, when used as an adjective in mathematics, infinitesimal means infinitely small, smaller than any standard real number. Infinitesimals are often compared to other infinitesimals of similar size, as in examining the derivative of a function. An infinite number of infinitesimals are summed to calculate an integral .
The modern concept of infinitesimals was introduced around 1670 by either Nicolaus Mercator or Gottfried Wilhelm Leibniz . [ 4 ] The 15th century saw the work of Nicholas of Cusa , further developed in the 17th century by Johannes Kepler , in particular, the calculation of the area of a circle by representing the latter as an infinite-sided polygon. Simon Stevin 's work on the decimal representation of all numbers in the 16th century prepared the ground for the real continuum. Bonaventura Cavalieri 's method of indivisibles led to an extension of the results of the classical authors. The method of indivisibles related to geometrical figures as being composed of entities of codimension 1. [ clarification needed ] John Wallis 's infinitesimals differed from indivisibles in that he would decompose geometrical figures into infinitely thin building blocks of the same dimension as the figure, preparing the ground for general methods of the integral calculus. He exploited an infinitesimal denoted 1/∞ in area calculations.
The use of infinitesimals by Leibniz relied upon heuristic principles, such as the law of continuity: what succeeds for the finite numbers succeeds also for the infinite numbers and vice versa; and the transcendental law of homogeneity that specifies procedures for replacing expressions involving unassignable quantities, by expressions involving only assignable ones. The 18th century saw routine use of infinitesimals by mathematicians such as Leonhard Euler and Joseph-Louis Lagrange . Augustin-Louis Cauchy exploited infinitesimals both in defining continuity in his Cours d'Analyse , and in defining an early form of a Dirac delta function . As Cantor and Dedekind were developing more abstract versions of Stevin's continuum, Paul du Bois-Reymond wrote a series of papers on infinitesimal-enriched continua based on growth rates of functions. Du Bois-Reymond's work inspired both Émile Borel and Thoralf Skolem . Borel explicitly linked du Bois-Reymond's work to Cauchy's work on rates of growth of infinitesimals. Skolem developed the first non-standard models of arithmetic in 1934. A mathematical implementation of both the law of continuity and infinitesimals was achieved by Abraham Robinson in 1961, who developed nonstandard analysis based on earlier work by Edwin Hewitt in 1948 and Jerzy Łoś in 1955. The hyperreals implement an infinitesimal-enriched continuum and the transfer principle implements Leibniz's law of continuity. The standard part function implements Fermat's adequality .
The notion of infinitely small quantities was discussed by the Eleatic School . The Greek mathematician Archimedes (c. 287 BC – c. 212 BC), in The Method of Mechanical Theorems , was the first to propose a logically rigorous definition of infinitesimals. [ 5 ] His Archimedean property defines a number x as infinite if it satisfies the conditions | x | > 1, | x | > 1 + 1, | x | > 1 + 1 + 1, ..., and infinitesimal if x ≠ 0 and a similar set of conditions holds for x and the reciprocals of the positive integers. A number system is said to be Archimedean if it contains no infinite or infinitesimal members.
The English mathematician John Wallis introduced the expression 1/∞ in his 1655 book Treatise on the Conic Sections . The symbol, which denotes the reciprocal, or inverse, of ∞ , is the symbolic representation of the mathematical concept of an infinitesimal. In his Treatise on the Conic Sections , Wallis also discusses the concept of a relationship between the symbolic representation of infinitesimal 1/∞ that he introduced and the concept of infinity for which he introduced the symbol ∞. The concept suggests a thought experiment of adding an infinite number of parallelograms of infinitesimal width to form a finite area. This concept was the predecessor to the modern method of integration used in integral calculus . The conceptual origins of the concept of the infinitesimal 1/∞ can be traced as far back as the Greek philosopher Zeno of Elea , whose Zeno's dichotomy paradox was the first mathematical concept to consider the relationship between a finite interval and an interval approaching that of an infinitesimal-sized interval.
Infinitesimals were the subject of political and religious controversies in 17th century Europe, including a ban on infinitesimals issued by clerics in Rome in 1632. [ 6 ]
Prior to the invention of calculus, mathematicians were able to calculate tangent lines using Pierre de Fermat 's method of adequality and René Descartes ' method of normals . There is debate among scholars as to whether the method was infinitesimal or algebraic in nature. When Newton and Leibniz invented the calculus , they made use of infinitesimals, Newton's fluxions and Leibniz's differentials . The use of infinitesimals was attacked as incorrect by Bishop Berkeley in his work The Analyst . [ 7 ] Mathematicians, scientists, and engineers continued to use infinitesimals to produce correct results. In the second half of the nineteenth century, the calculus was reformulated by Augustin-Louis Cauchy , Bernard Bolzano , Karl Weierstrass , Cantor , Dedekind , and others using the (ε, δ)-definition of limit and set theory .
While the followers of Cantor, Dedekind, and Weierstrass sought to rid analysis of infinitesimals, and their philosophical allies like Bertrand Russell and Rudolf Carnap declared that infinitesimals are pseudoconcepts , Hermann Cohen and his Marburg school of neo-Kantianism sought to develop a working logic of infinitesimals. [ 8 ] The mathematical study of systems containing infinitesimals continued through the work of Levi-Civita , Giuseppe Veronese , Paul du Bois-Reymond , and others, throughout the late nineteenth and the twentieth centuries, as documented by Philip Ehrlich (2006). In the 20th century, it was found that infinitesimals could serve as a basis for calculus and analysis (see hyperreal numbers ).
In extending the real numbers to include infinite and infinitesimal quantities, one typically wishes to be as conservative as possible by not changing any of their elementary properties. This guarantees that as many familiar results as possible are still available. Typically, elementary means that there is no quantification over sets , but only over elements. This limitation allows statements of the form "for any number x..." For example, the axiom that states "for any number x , x + 0 = x " would still apply. The same is true for quantification over several numbers, e.g., "for any numbers x and y , xy = yx ." However, statements of the form "for any set S of numbers ..." may not carry over. Logic with this limitation on quantification is referred to as first-order logic .
The resulting extended number system cannot agree with the reals on all properties that can be expressed by quantification over sets, because the goal is to construct a non-Archimedean system, and the Archimedean principle can be expressed by quantification over sets. One can conservatively extend any theory including reals, including set theory, to include infinitesimals, just by adding a countably infinite list of axioms that assert that a number is smaller than 1/2, 1/3, 1/4, and so on. Similarly, the completeness property cannot be expected to carry over, because the reals are the unique complete ordered field up to isomorphism.
We can distinguish three levels at which a non-Archimedean number system could have first-order properties compatible with those of the reals: (1) an ordered field obeys all the usual axioms of the real number system that can be stated in first-order logic; (2) a real closed field has all the first-order properties of the real number system, regardless of whether they are usually taken as axiomatic, for statements involving the basic ordered-field relations +, ×, and ≤; (3) the system has all the first-order properties of the real number system for statements involving any relations, regardless of whether those relations can be expressed using +, ×, and ≤.
Systems in category 1, at the weak end of the spectrum, are relatively easy to construct but do not allow a full treatment of classical analysis using infinitesimals in the spirit of Newton and Leibniz. For example, the transcendental functions are defined in terms of infinite limiting processes, and therefore there is typically no way to define them in first-order logic. Increasing the analytic strength of the system by passing to categories 2 and 3, we find that the flavor of the treatment tends to become less constructive, and it becomes more difficult to say anything concrete about the hierarchical structure of infinities and infinitesimals.
An example from category 1 above is the field of Laurent series with a finite number of negative-power terms. For example, the Laurent series consisting only of the constant term 1 is identified with the real number 1, and the series with only the linear term x is thought of as the simplest infinitesimal, from which the other infinitesimals are constructed. Dictionary ordering is used, which is equivalent to considering higher powers of x as negligible compared to lower powers. David O. Tall [ 9 ] refers to this system as the super-reals, not to be confused with the superreal number system of Dales and Woodin. Since a Taylor series evaluated with a Laurent series as its argument is still a Laurent series, the system can be used to do calculus on transcendental functions if they are analytic. These infinitesimals have different first-order properties than the reals because, for example, the basic infinitesimal x does not have a square root.
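The dictionary ordering described above is easy to experiment with numerically. The following Python sketch is purely illustrative (the exponent-to-coefficient dictionary representation and the helper names are assumptions, not Tall's construction): truncated expansions in the basic infinitesimal x are compared by their leading term, so that x is positive yet below every positive real.

```python
# A minimal sketch (not Tall's actual construction): finite expansions
# sum_k c_k * x**k stored as {exponent: coefficient}, ordered lexicographically
# so that higher powers of x are negligible against lower ones.

def leading(series):
    """Return (lowest exponent with nonzero coefficient, its coefficient)."""
    for k in sorted(series):
        if series[k] != 0:
            return k, series[k]
    return None, 0  # the zero series

def is_positive(series):
    """A series is positive iff its leading coefficient is positive."""
    _, c = leading(series)
    return c > 0

def less_than(a, b):
    """Dictionary order: a < b iff b - a is positive."""
    exps = set(a) | set(b)
    diff = {k: b.get(k, 0) - a.get(k, 0) for k in exps}
    return is_positive(diff)

eps  = {1: 1}          # the basic infinitesimal x
eps2 = {2: 1}          # x**2, negligible compared with x

# x is positive yet smaller than every positive real 1/n:
print(all(less_than(eps, {0: 1.0 / n}) for n in range(1, 1000)))  # True
print(less_than(eps2, eps))                                       # True
```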
The Levi-Civita field is similar to the Laurent series, but is algebraically closed. For example, the basic infinitesimal x has a square root. This field is rich enough to allow a significant amount of analysis to be done, but its elements can still be represented on a computer in the same sense that real numbers can be represented in floating-point. [ 10 ]
The field of transseries is larger than the Levi-Civita field. [ 11 ] An example of a transseries is:
where for purposes of ordering x is considered infinite.
Conway's surreal numbers fall into category 2, except that the surreal numbers form a proper class and not a set. [ 12 ] They are a system designed to be as rich as possible in different sizes of numbers, but not necessarily for convenience in doing analysis, in the sense that every ordered field is a subfield of the surreal numbers. [ 13 ] There is a natural extension of the exponential function to the surreal numbers. [ 14 ] : ch. 10
The most widespread technique for handling infinitesimals is the hyperreals, developed by Abraham Robinson in the 1960s. They fall into category 3 above, having been designed that way so all of classical analysis can be carried over from the reals. This property of being able to carry over all relations in a natural way is known as the transfer principle , proved by Jerzy Łoś in 1955. For example, the transcendental function sin has a natural counterpart *sin that takes a hyperreal input and gives a hyperreal output, and similarly the set of natural numbers N {\displaystyle \mathbb {N} } has a natural counterpart ∗ N {\displaystyle ^{*}\mathbb {N} } , which contains both finite and infinite integers. A proposition such as ∀ n ∈ N , sin n π = 0 {\displaystyle \forall n\in \mathbb {N} ,\sin n\pi =0} carries over to the hyperreals as ∀ n ∈ ∗ N , ∗ sin n π = 0 {\displaystyle \forall n\in {}^{*}\mathbb {N} ,{}^{*}\!\!\sin n\pi =0} .
The superreal number system of Dales and Woodin is a generalization of the hyperreals. It is different from the super-real system defined by David Tall .
In linear algebra , the dual numbers extend the reals by adjoining one infinitesimal, the new element ε with the property ε 2 = 0 (that is, ε is nilpotent ). Every dual number has the form z = a + b ε with a and b being uniquely determined real numbers.
One application of dual numbers is automatic differentiation . This application can be generalized to polynomials in n variables, using the exterior algebra of an n-dimensional vector space.
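A minimal Python sketch of forward-mode automatic differentiation with dual numbers follows; the class name Dual and the helper deriv are illustrative choices, not any standard library API.

```python
# Dual numbers a + b*eps with eps**2 = 0: carrying the eps-coefficient through
# a computation yields the derivative automatically.
import math

class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b          # value and infinitesimal coefficient

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a1 + b1 eps)(a2 + b2 eps) = a1 a2 + (a1 b2 + a2 b1) eps,  eps**2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.a), math.cos(x.a) * x.b) if isinstance(x, Dual) else math.sin(x)

def deriv(f, x0):
    """Evaluate f at x0 + eps and read off the eps-coefficient."""
    return f(Dual(x0, 1.0)).b

# d/dx [x**2 * sin(x)] at x = 1.3, compared with the hand-computed derivative
f = lambda x: x * x * sin(x)
print(deriv(f, 1.3), 2 * 1.3 * math.sin(1.3) + 1.3 ** 2 * math.cos(1.3))
```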
Synthetic differential geometry or smooth infinitesimal analysis have roots in category theory . This approach departs from the classical logic used in conventional mathematics by denying the general applicability of the law of excluded middle – i.e., not ( a ≠ b ) does not have to mean a = b . A nilsquare or nilpotent infinitesimal can then be defined. This is a number x where x 2 = 0 is true, but x = 0 need not be true at the same time. Since the background logic is intuitionistic logic , it is not immediately clear how to classify this system with regard to classes 1, 2, and 3. Intuitionistic analogues of these classes would have to be developed first.
Cauchy used an infinitesimal α {\displaystyle \alpha } to write down a unit impulse, infinitely tall and narrow Dirac-type delta function δ α {\displaystyle \delta _{\alpha }} satisfying ∫ F ( x ) δ α ( x ) = F ( 0 ) {\displaystyle \int F(x)\delta _{\alpha }(x)=F(0)} in a number of articles in 1827, see Laugwitz (1989). Cauchy defined an infinitesimal in 1821 (Cours d'Analyse) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot 's terminology.
Modern set-theoretic approaches allow one to define infinitesimals via the ultrapower construction, where a null sequence becomes an infinitesimal in the sense of an equivalence class modulo a relation defined in terms of a suitable ultrafilter . The article by Yamashita (2007) contains bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals .
The method of constructing infinitesimals of the kind used in nonstandard analysis depends on the model and which collection of axioms are used. We consider here systems where infinitesimals can be shown to exist.
In 1936 Maltsev proved the compactness theorem . This theorem is fundamental for the existence of infinitesimals as it proves that it is possible to formalise them. A consequence of this theorem is that if there is a number system in which it is true that for any positive integer n there is a positive number x such that 0 < x < 1/ n , then there exists an extension of that number system in which it is true that there exists a positive number x such that for any positive integer n we have 0 < x < 1/ n . The possibility to switch "for any" and "there exists" is crucial. The first statement is true in the real numbers as given in ZFC set theory : for any positive integer n it is possible to find a real number between 1/ n and zero, but this real number depends on n . Here, one chooses n first, then one finds the corresponding x . In the second expression, the statement says that there is an x (at least one), chosen first, which is between 0 and 1/ n for any n . In this case x is infinitesimal. This is not true in the real numbers ( R ) given by ZFC. Nonetheless, the theorem proves that there is a model (a number system) in which this is true. The question is: what is this model? What are its properties? Is there only one such model?
There are in fact many ways to construct such a one-dimensional linearly ordered set of numbers, but fundamentally, there are two different approaches: (1) extend the number system so that it contains more numbers than the real numbers; (2) extend the axioms (or extend the language) so that the distinction between infinitesimals and non-infinitesimals can be made within the real numbers themselves.
In 1960, Abraham Robinson provided an answer following the first approach. The extended set is called the hyperreals and contains numbers less in absolute value than any positive real number. The method may be considered relatively complex but it does prove that infinitesimals exist in the universe of ZFC set theory. The real numbers are called standard numbers and the new non-real hyperreals are called nonstandard .
In 1977 Edward Nelson provided an answer following the second approach. The extended axioms are IST, which stands either for Internal set theory or for the initials of the three extra axioms: Idealization, Standardization, Transfer. In this system, we consider that the language is extended in such a way that we can express facts about infinitesimals. The real numbers are either standard or nonstandard. An infinitesimal is a nonstandard real number that is less, in absolute value, than any positive standard real number.
In 2006 Karel Hrbacek developed an extension of Nelson's approach in which the real numbers are stratified in (infinitely) many levels; i.e., in the coarsest level, there are no infinitesimals nor unlimited numbers. Infinitesimals are at a finer level and there are also infinitesimals with respect to this new level and so on.
Calculus textbooks based on infinitesimals include the classic Calculus Made Easy by Silvanus P. Thompson (bearing the motto "What one fool can do another can" [ 15 ] ) and the German text Mathematik für Mittlere Technische Fachschulen der Maschinenindustrie by R. Neuendorff. [ 16 ]
Pioneering works based on Abraham Robinson 's infinitesimals include texts by Stroyan (dating from 1972) and Howard Jerome Keisler ( Elementary Calculus: An Infinitesimal Approach ). Students easily relate to the intuitive notion of an infinitesimal difference 1 − "0.999...", where "0.999..." differs from its standard meaning as the real number 1, and is reinterpreted as an infinite terminating extended decimal that is strictly less than 1. [ 17 ] [ 18 ]
Another elementary calculus text that uses the theory of infinitesimals as developed by Robinson is Infinitesimal Calculus by Henle and Kleinberg, originally published in 1979. [ 19 ] The authors introduce the language of first-order logic, and demonstrate the construction of a first order model of the hyperreal numbers. The text provides an introduction to the basics of integral and differential calculus in one dimension, including sequences and series of functions. In an Appendix, they also treat the extension of their model to the hyperhyper reals, and demonstrate some applications for the extended model.
An elementary calculus text based on smooth infinitesimal analysis is Bell, John L. (2008). A Primer of Infinitesimal Analysis, 2nd Edition. Cambridge University Press. ISBN 9780521887182.
A more recent calculus text utilizing infinitesimals is Dawson, C. Bryan (2022), Calculus Set Free: Infinitesimals to the Rescue, Oxford University Press. ISBN 9780192895608.
In a related but somewhat different sense, which evolved from the original definition of "infinitesimal" as an infinitely small quantity, the term has also been used to refer to a function tending to zero. More precisely, Loomis and Sternberg's Advanced Calculus defines the function class of infinitesimals, I {\displaystyle {\mathfrak {I}}} , as a subset of functions f : V → W {\displaystyle f:V\to W} between normed vector spaces by
I ( V , W ) = { f : V → W | f ( 0 ) = 0 , ( ∀ ϵ > 0 ) ( ∃ δ > 0 ) ∍ | | ξ | | < δ ⟹ | | f ( ξ ) | | < ϵ } {\displaystyle {\mathfrak {I}}(V,W)=\{f:V\to W\ |\ f(0)=0,(\forall \epsilon >0)(\exists \delta >0)\ \backepsilon \ ||\xi ||<\delta \implies ||f(\xi )||<\epsilon \}} ,
as well as two related classes O , o {\displaystyle {\mathfrak {O}},{\mathfrak {o}}} (see Big-O notation ) by
O ( V , W ) = { f : V → W | f ( 0 ) = 0 , ( ∃ r > 0 , c > 0 ) ∍ | | ξ | | < r ⟹ | | f ( ξ ) | | ≤ c | | ξ | | } {\displaystyle {\mathfrak {O}}(V,W)=\{f:V\to W\ |\ f(0)=0,\ (\exists r>0,c>0)\ \backepsilon \ ||\xi ||<r\implies ||f(\xi )||\leq c||\xi ||\}} , and
o ( V , W ) = { f : V → W | f ( 0 ) = 0 , lim | | ξ | | → 0 | | f ( ξ ) | | / | | ξ | | = 0 } {\displaystyle {\mathfrak {o}}(V,W)=\{f:V\to W\ |\ f(0)=0,\ \lim _{||\xi ||\to 0}||f(\xi )||/||\xi ||=0\}} . [ 20 ]
The set inclusions o ( V , W ) ⊊ O ( V , W ) ⊊ I ( V , W ) {\displaystyle {\mathfrak {o}}(V,W)\subsetneq {\mathfrak {O}}(V,W)\subsetneq {\mathfrak {I}}(V,W)} generally hold. That the inclusions are proper is demonstrated by the real-valued functions of a real variable f : x ↦ | x | 1 / 2 {\displaystyle f:x\mapsto |x|^{1/2}} , g : x ↦ x {\displaystyle g:x\mapsto x} , and h : x ↦ x 2 {\displaystyle h:x\mapsto x^{2}} :
f , g , h ∈ I ( R , R ) , g , h ∈ O ( R , R ) , h ∈ o ( R , R ) {\displaystyle f,g,h\in {\mathfrak {I}}(\mathbb {R} ,\mathbb {R} ),\ g,h\in {\mathfrak {O}}(\mathbb {R} ,\mathbb {R} ),\ h\in {\mathfrak {o}}(\mathbb {R} ,\mathbb {R} )} but f , g ∉ o ( R , R ) {\displaystyle f,g\notin {\mathfrak {o}}(\mathbb {R} ,\mathbb {R} )} and f ∉ O ( R , R ) {\displaystyle f\notin {\mathfrak {O}}(\mathbb {R} ,\mathbb {R} )} .
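These proper inclusions can be illustrated numerically; the short Python check below (illustrative values only) tracks the ratio |f(ξ)|/|ξ| for the three functions as ξ shrinks.

```python
# For f(x)=|x|**0.5, g(x)=x, h(x)=x**2: the ratio for f blows up (f not in O),
# the ratio for g stays bounded but does not vanish (g in O but not o), and
# the ratio for h tends to 0 (h in o).
for xi in (1e-2, 1e-4, 1e-6, 1e-8):
    f, g, h = abs(xi) ** 0.5, xi, xi ** 2
    print(f"xi={xi:.0e}  |f|/|xi|={f / abs(xi):9.1e}  "
          f"|g|/|xi|={abs(g) / abs(xi):.1f}  |h|/|xi|={h / abs(xi):.1e}")
```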
As an application of these definitions, a mapping F : V → W {\displaystyle F:V\to W} between normed vector spaces is defined to be differentiable at α ∈ V {\displaystyle \alpha \in V} if there is a T ∈ H o m ( V , W ) {\displaystyle T\in \mathrm {Hom} (V,W)} [i.e., a bounded linear map V → W {\displaystyle V\to W} ] such that
[ F ( α + ξ ) − F ( α ) ] − T ( ξ ) ∈ o ( V , W ) {\displaystyle [F(\alpha +\xi )-F(\alpha )]-T(\xi )\in {\mathfrak {o}}(V,W)}
in a neighborhood of α {\displaystyle \alpha } . If such a map exists, it is unique; this map is called the differential and is denoted by d F α {\displaystyle dF_{\alpha }} , [ 21 ] coinciding with the traditional notation for the classical (though logically flawed) notion of a differential as an infinitely small "piece" of F . This definition represents a generalization of the usual definition of differentiability for vector-valued functions of (open subsets of) Euclidean spaces.
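The defining property of the differential can be checked numerically for a concrete map; in the Python sketch below the map F(x, y) = (xy, x² + y) and the point α are arbitrary illustrative choices, and T is its Jacobian at that point.

```python
# The remainder F(alpha + xi) - F(alpha) - T(xi) should shrink faster than
# ||xi||, i.e. lie in the class o(V, W).
import numpy as np

def F(v):
    x, y = v
    return np.array([x * y, x ** 2 + y])

alpha = np.array([1.0, 2.0])
T = np.array([[alpha[1], alpha[0]],      # Jacobian of F at alpha
              [2 * alpha[0], 1.0]])

direction = np.array([0.6, -0.8])        # fixed unit direction for xi
for t in (1e-1, 1e-2, 1e-3, 1e-4):
    xi = t * direction
    remainder = F(alpha + xi) - F(alpha) - T @ xi
    print(t, np.linalg.norm(remainder) / np.linalg.norm(xi))  # tends to 0
```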
Let ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},\mathbb {P} )} be a probability space and let n ∈ N {\displaystyle n\in \mathbb {N} } . An array { X n , k : Ω → R ∣ 1 ≤ k ≤ k n } {\displaystyle \{X_{n,k}:\Omega \to \mathbb {R} \mid 1\leq k\leq k_{n}\}} of random variables is called infinitesimal if for every ϵ > 0 {\displaystyle \epsilon >0} , we have: [ 22 ] max 1 ≤ k ≤ k n P ( | X n , k | ≥ ϵ ) → 0 as n → ∞ {\displaystyle \max _{1\leq k\leq k_{n}}\mathbb {P} (|X_{n,k}|\geq \epsilon )\to 0\ {\text{as}}\ n\to \infty }
The notion of infinitesimal array is essential in some central limit theorems and it is easily seen by monotonicity of the expectation operator that any array satisfying Lindeberg's condition is infinitesimal, thus playing an important role in Lindeberg's Central Limit Theorem (a generalization of the central limit theorem ). | https://en.wikipedia.org/wiki/Infinitesimal |
An infinitesimal rotation matrix or differential rotation matrix is a matrix representing an infinitely small rotation .
While a rotation matrix is an orthogonal matrix R T = R − 1 {\displaystyle R^{\mathsf {T}}=R^{-1}} representing an element of S O ( n ) {\displaystyle SO(n)} (the special orthogonal group ), the differential of a rotation is a skew-symmetric matrix A T = − A {\displaystyle A^{\mathsf {T}}=-A} in the tangent space s o ( n ) {\displaystyle {\mathfrak {so}}(n)} (the special orthogonal Lie algebra ), which is not itself a rotation matrix.
An infinitesimal rotation matrix has the form I + d θ A {\displaystyle I+d\theta \,A} ,
where I {\displaystyle I} is the identity matrix, d θ {\displaystyle d\theta } is vanishingly small, and A ∈ s o ( n ) . {\displaystyle A\in {\mathfrak {so}}(n).}
For example, if A = L x , {\displaystyle A=L_{x},} representing an infinitesimal three-dimensional rotation about the x -axis, a basis element of s o ( 3 ) , {\displaystyle {\mathfrak {so}}(3),} then
and
The computation rules for infinitesimal rotation matrices are the usual ones except that infinitesimals of second order are dropped. With these rules, these matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals. [ 1 ] It turns out that the order in which infinitesimal rotations are applied is irrelevant .
An infinitesimal rotation matrix differs from the identity matrix by a skew-symmetric matrix whose entries are the infinitesimal rotation angles d ϕ x ( t ) , d ϕ y ( t ) , d ϕ z ( t ) {\displaystyle d\phi _{x}(t),d\phi _{y}(t),d\phi _{z}(t)} .
The shape of the matrix is as follows: A = ( 1 − d ϕ z ( t ) d ϕ y ( t ) d ϕ z ( t ) 1 − d ϕ x ( t ) − d ϕ y ( t ) d ϕ x ( t ) 1 ) {\displaystyle A={\begin{pmatrix}1&-d\phi _{z}(t)&d\phi _{y}(t)\\d\phi _{z}(t)&1&-d\phi _{x}(t)\\-d\phi _{y}(t)&d\phi _{x}(t)&1\\\end{pmatrix}}}
Associated to an infinitesimal rotation matrix A {\displaystyle A} is an infinitesimal rotation tensor d Φ ( t ) = A − I {\displaystyle d\Phi (t)=A-I} :
d Φ ( t ) = ( 0 − d ϕ z ( t ) d ϕ y ( t ) d ϕ z ( t ) 0 − d ϕ x ( t ) − d ϕ y ( t ) d ϕ x ( t ) 0 ) {\displaystyle d\Phi (t)={\begin{pmatrix}0&-d\phi _{z}(t)&d\phi _{y}(t)\\d\phi _{z}(t)&0&-d\phi _{x}(t)\\-d\phi _{y}(t)&d\phi _{x}(t)&0\\\end{pmatrix}}}
Dividing it by the time difference yields the angular velocity tensor : Ω = d Φ ( t ) d t = ( 0 − ω z ω y ω z 0 − ω x − ω y ω x 0 ) {\displaystyle \Omega ={\frac {d\Phi (t)}{dt}}={\begin{pmatrix}0&-\omega _{z}&\omega _{y}\\\omega _{z}&0&-\omega _{x}\\-\omega _{y}&\omega _{x}&0\end{pmatrix}}} where ω i = d ϕ i ( t ) / d t {\displaystyle \omega _{i}=d\phi _{i}(t)/dt} .
These matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals. [ 2 ] To understand what this means, consider
First, test the orthogonality condition, Q T Q = I . The product is
differing from an identity matrix by second-order infinitesimals, discarded here. So, to first order, an infinitesimal rotation matrix is an orthogonal matrix.
Next, examine the square of the matrix,
Again discarding second-order effects, note that the angle simply doubles. This hints at the most essential difference in behavior, which we can exhibit with the assistance of a second infinitesimal rotation,
Compare the products dA x dA y to dA y dA x ,
Since d θ d ϕ {\displaystyle d\theta \,d\phi } is second-order, we discard it: thus, to first order, multiplication of infinitesimal rotation matrices is commutative . In fact,
again to first order. In other words, the order in which infinitesimal rotations are applied is irrelevant .
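The first-order commutativity can be seen numerically; in the Python sketch below the small angles are illustrative values and the two matrices are the standard first-order forms of rotations about the x- and y-axes.

```python
# The commutator of two infinitesimal rotation matrices is of second order,
# so to first order they commute.
import numpy as np

def dA_x(t):  # first-order rotation about the x-axis, second order dropped
    return np.array([[1, 0, 0], [0, 1, -t], [0, t, 1]], dtype=float)

def dA_y(t):  # first-order rotation about the y-axis
    return np.array([[1, 0, t], [0, 1, 0], [-t, 0, 1]], dtype=float)

dtheta, dphi = 1e-4, 2e-4
comm = dA_x(dtheta) @ dA_y(dphi) - dA_y(dphi) @ dA_x(dtheta)
print(np.max(np.abs(comm)))          # ~ dtheta*dphi = 2e-8, second order
print(dtheta * dphi)
```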
This useful fact makes, for example, derivation of rigid body rotation relatively simple. But one must always be careful to distinguish (the first-order treatment of) these infinitesimal rotation matrices from both finite rotation matrices and from Lie algebra elements. When contrasting the behavior of finite rotation matrices in the Baker–Campbell–Hausdorff formula above with that of infinitesimal rotation matrices, where all the commutator terms will be second-order infinitesimals, one finds a bona fide vector space. Technically, this dismissal of any second-order terms amounts to Group contraction .
Suppose we specify an axis of rotation by a unit vector [ x , y , z ], and suppose we have an infinitely small rotation of angle Δ θ about that vector. Expanding the rotation matrix as an infinite series and keeping only the first-order terms, the rotation matrix Δ R is represented as: Δ R = I + A Δ θ = [ 1 0 0 0 1 0 0 0 1 ] + [ 0 − z y z 0 − x − y x 0 ] Δ θ {\displaystyle \Delta R=I+A\,\Delta \theta ={\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}+{\begin{bmatrix}0&-z&y\\z&0&-x\\-y&x&0\end{bmatrix}}\,\Delta \theta }
A finite rotation through angle θ about this axis may be seen as a succession of small rotations about the same axis. Approximating Δ θ as θ / N , where N is a large number, a rotation of θ about the axis may be represented as: R = ( I + A θ N ) N ≈ e A θ {\displaystyle R=\left(I+{\frac {A\theta }{N}}\right)^{N}\approx e^{A\theta }}
It can be seen that Euler's theorem essentially states that all rotations may be represented in this form. The product Aθ is the "generator" of the particular rotation, being the vector ( x , y , z ) associated with the matrix A . This shows that the rotation matrix and the axis-angle format are related by the exponential function.
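The limit of many small rotations can be checked numerically; the Python sketch below uses the z-axis generator as an illustrative choice and compares (I + Aθ/N)^N with the exact rotation.

```python
# N successive small rotations about the z-axis approach the exact rotation
# exp(A*theta); the error shrinks roughly like 1/N.
import numpy as np

A = np.array([[0.0, -1.0, 0.0],        # generator for rotation about z
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])
theta = 0.7
R_exact = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])

for N in (10, 100, 10000):
    R_approx = np.linalg.matrix_power(np.eye(3) + A * theta / N, N)
    print(N, np.max(np.abs(R_approx - R_exact)))
```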
One can derive a simple expression for the generator G . One starts with an arbitrary plane [ 3 ] defined by a pair of perpendicular unit vectors a and b . In this plane one can choose an arbitrary vector x with perpendicular y . One then solves for y in terms of x and substituting into an expression for a rotation in a plane yields the rotation matrix R , which includes the generator G = ba T − ab T .
To include vectors outside the plane in the rotation one needs to modify the above expression for R by including two projection operators that partition the space. This modified rotation matrix can be rewritten as an exponential function .
Analysis is often easier in terms of these generators, rather than the full rotation matrix. Analysis in terms of the generators is known as the Lie algebra of the rotation group.
Connecting the Lie algebra to the Lie group is the exponential map , which is defined using the standard matrix exponential series for e A . [ 4 ] For any skew-symmetric matrix A , exp( A ) is always a rotation matrix. [ a ]
An important practical example is the 3 × 3 case. In rotation group SO(3) , it is shown that one can identify every A ∈ so (3) with an Euler vector ω = θ u , where u = ( x , y , z ) is a unit magnitude vector.
By the properties of the identification so (3) ≅ R 3 , u is in the null space of A . Thus, u is left invariant by exp( A ) and is hence a rotation axis.
Using Rodrigues' rotation formula on matrix form with θ = θ ⁄ 2 + θ ⁄ 2 , together with standard double angle formulae one obtains,
This is the matrix for a rotation around axis u by the angle θ in half-angle form. For full detail, see exponential map SO(3) .
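A short numerical sketch follows, comparing Rodrigues' closed form with a truncated exponential series for the same generator; the axis and angle are illustrative values.

```python
# Rodrigues' formula R = I + sin(theta) K + (1 - cos(theta)) K**2, with K the
# unit skew-symmetric matrix of the axis u, versus a truncated series for
# exp(theta*K); the axis u is left fixed by the rotation.
import numpy as np

u = np.array([1.0, 2.0, 2.0]) / 3.0               # unit axis
K = np.array([[0, -u[2], u[1]],
              [u[2], 0, -u[0]],
              [-u[1], u[0], 0]])
theta = 0.9

R_rodrigues = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

A, term, R_series = theta * K, np.eye(3), np.eye(3)   # exp(A) = sum A**n / n!
for n in range(1, 30):
    term = term @ A / n
    R_series = R_series + term

print(np.max(np.abs(R_rodrigues - R_series)))      # ~ machine precision
print(R_rodrigues @ u)                              # the axis u is unchanged
```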
Notice that for infinitesimal angles second-order terms can be ignored, leaving exp( A ) = I + A .
Skew-symmetric matrices over the field of real numbers form the tangent space to the real orthogonal group O ( n ) {\displaystyle O(n)} at the identity matrix; formally, the special orthogonal Lie algebra . In this sense, then, skew-symmetric matrices can be thought of as infinitesimal rotations .
Another way of saying this is that the space of skew-symmetric matrices forms the Lie algebra o ( n ) {\displaystyle o(n)} of the Lie group O ( n ) . {\displaystyle O(n).} The Lie bracket on this space is given by the commutator : [ A 1 , A 2 ] = A 1 A 2 − A 2 A 1 . {\displaystyle [A_{1},A_{2}]=A_{1}A_{2}-A_{2}A_{1}.}
It is easy to check that the commutator of two skew-symmetric matrices is again skew-symmetric: [ A 1 , A 2 ] T = A 2 T A 1 T − A 1 T A 2 T = A 2 A 1 − A 1 A 2 = − [ A 1 , A 2 ] . {\displaystyle [A_{1},A_{2}]^{\mathsf {T}}=A_{2}^{\mathsf {T}}A_{1}^{\mathsf {T}}-A_{1}^{\mathsf {T}}A_{2}^{\mathsf {T}}=A_{2}A_{1}-A_{1}A_{2}=-[A_{1},A_{2}].}
The matrix exponential of a skew-symmetric matrix A {\displaystyle A} is then an orthogonal matrix R {\displaystyle R} : R = exp ( A ) = ∑ n = 0 ∞ A n n ! . {\displaystyle R=\exp(A)=\sum _{n=0}^{\infty }{\frac {A^{n}}{n!}}.}
The image of the exponential map of a Lie algebra always lies in the connected component of the Lie group that contains the identity element. In the case of the Lie group O ( n ) , {\displaystyle O(n),} this connected component is the special orthogonal group S O ( n ) , {\displaystyle SO(n),} consisting of all orthogonal matrices with determinant 1. So R = exp ( A ) {\displaystyle R=\exp(A)} will have determinant +1. Moreover, since the exponential map of a connected compact Lie group is always surjective, it turns out that every orthogonal matrix with unit determinant can be written as the exponential of some skew-symmetric matrix.
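This can be spot-checked numerically; the Python sketch below draws an arbitrary skew-symmetric matrix in dimension 4 and uses SciPy's matrix exponential purely for convenience.

```python
# exp(A) of a skew-symmetric A is orthogonal with determinant +1.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M - M.T                              # skew-symmetric: A.T == -A
R = expm(A)

print(np.allclose(R.T @ R, np.eye(4)))   # True: R is orthogonal
print(np.linalg.det(R))                   # ~ 1.0
```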
In the particular important case of dimension n = 2 , {\displaystyle n=2,} the exponential representation for an orthogonal matrix reduces to the well-known polar form of a complex number of unit modulus. Indeed, if n = 2 , {\displaystyle n=2,} a special orthogonal matrix has the form [ a − b b a ] , {\displaystyle {\begin{bmatrix}a&-b\\b&a\end{bmatrix}},}
with a 2 + b 2 = 1 {\displaystyle a^{2}+b^{2}=1} . Therefore, putting a = cos θ {\displaystyle a=\cos \theta } and b = sin θ , {\displaystyle b=\sin \theta ,} it can be written R = [ cos θ − sin θ sin θ cos θ ] = exp ( θ [ 0 − 1 1 0 ] ) , {\displaystyle R={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{bmatrix}}=\exp \left(\theta {\begin{bmatrix}0&-1\\1&0\end{bmatrix}}\right),}
which corresponds exactly to the polar form cos θ + i sin θ = e i θ {\displaystyle \cos \theta +i\sin \theta =e^{i\theta }} of a complex number of unit modulus.
In 3 dimensions, the matrix exponential is Rodrigues' rotation formula in matrix notation , and when expressed via the Euler-Rodrigues formula , the algebra of its four parameters gives rise to quaternions .
The exponential representation of an orthogonal matrix of order n {\displaystyle n} can also be obtained starting from the fact that in dimension n {\displaystyle n} any special orthogonal matrix R {\displaystyle R} can be written as R = Q S Q T , {\displaystyle R=QSQ^{\textsf {T}},} where Q {\displaystyle Q} is orthogonal and S is a block diagonal matrix with ⌊ n / 2 ⌋ {\textstyle \lfloor n/2\rfloor } blocks of order 2, plus one of order 1 if n {\displaystyle n} is odd; since each single block of order 2 is also an orthogonal matrix, it admits an exponential form. Correspondingly, the matrix S writes as exponential of a skew-symmetric block matrix Σ {\displaystyle \Sigma } of the form above, S = exp ( Σ ) , {\displaystyle S=\exp(\Sigma ),} so that R = Q exp ( Σ ) Q T = exp ( Q Σ Q T ) , {\displaystyle R=Q\exp(\Sigma )Q^{\textsf {T}}=\exp(Q\Sigma Q^{\textsf {T}}),} exponential of the skew-symmetric matrix Q Σ Q T . {\displaystyle Q\Sigma Q^{\textsf {T}}.} Conversely, the surjectivity of the exponential map, together with the above-mentioned block-diagonalization for skew-symmetric matrices, implies the block-diagonalization for orthogonal matrices. | https://en.wikipedia.org/wiki/Infinitesimal_rotation_matrix |
In continuum mechanics , the infinitesimal strain theory is a mathematical approach to the description of the deformation of a solid body in which the displacements of the material particles are assumed to be much smaller (indeed, infinitesimally smaller) than any relevant dimension of the body; so that its geometry and the constitutive properties of the material (such as density and stiffness ) at each point of space can be assumed to be unchanged by the deformation.
With this assumption, the equations of continuum mechanics are considerably simplified. This approach may also be called small deformation theory , small displacement theory , or small displacement-gradient theory . It is contrasted with the finite strain theory where the opposite assumption is made.
The infinitesimal strain theory is commonly adopted in civil and mechanical engineering for the stress analysis of structures built from relatively stiff elastic materials like concrete and steel , since a common goal in the design of such structures is to minimize their deformation under typical loads . However, this approximation demands caution in the case of thin flexible bodies, such as rods, plates, and shells which are susceptible to significant rotations, thus making the results unreliable. [ 1 ]
For infinitesimal deformations of a continuum body , in which the displacement gradient tensor (2nd order tensor) is small compared to unity, i.e. ‖ ∇ u ‖ ≪ 1 {\displaystyle \|\nabla \mathbf {u} \|\ll 1} ,
it is possible to perform a geometric linearization of any one of the finite strain tensors used in finite strain theory, e.g. the Lagrangian finite strain tensor E {\displaystyle \mathbf {E} } , and the Eulerian finite strain tensor e {\displaystyle \mathbf {e} } . In such a linearization, the non-linear or second-order terms of the finite strain tensor are neglected. Thus we have
E = 1 2 ( ∇ X u + ( ∇ X u ) T + ( ∇ X u ) T ∇ X u ) ≈ 1 2 ( ∇ X u + ( ∇ X u ) T ) {\displaystyle \mathbf {E} ={\frac {1}{2}}\left(\nabla _{\mathbf {X} }\mathbf {u} +(\nabla _{\mathbf {X} }\mathbf {u} )^{T}+(\nabla _{\mathbf {X} }\mathbf {u} )^{T}\nabla _{\mathbf {X} }\mathbf {u} \right)\approx {\frac {1}{2}}\left(\nabla _{\mathbf {X} }\mathbf {u} +(\nabla _{\mathbf {X} }\mathbf {u} )^{T}\right)} or E K L = 1 2 ( ∂ U K ∂ X L + ∂ U L ∂ X K + ∂ U M ∂ X K ∂ U M ∂ X L ) ≈ 1 2 ( ∂ U K ∂ X L + ∂ U L ∂ X K ) {\displaystyle E_{KL}={\frac {1}{2}}\left({\frac {\partial U_{K}}{\partial X_{L}}}+{\frac {\partial U_{L}}{\partial X_{K}}}+{\frac {\partial U_{M}}{\partial X_{K}}}{\frac {\partial U_{M}}{\partial X_{L}}}\right)\approx {\frac {1}{2}}\left({\frac {\partial U_{K}}{\partial X_{L}}}+{\frac {\partial U_{L}}{\partial X_{K}}}\right)} and e = 1 2 ( ∇ x u + ( ∇ x u ) T − ∇ x u ( ∇ x u ) T ) ≈ 1 2 ( ∇ x u + ( ∇ x u ) T ) {\displaystyle \mathbf {e} ={\frac {1}{2}}\left(\nabla _{\mathbf {x} }\mathbf {u} +(\nabla _{\mathbf {x} }\mathbf {u} )^{T}-\nabla _{\mathbf {x} }\mathbf {u} (\nabla _{\mathbf {x} }\mathbf {u} )^{T}\right)\approx {\frac {1}{2}}\left(\nabla _{\mathbf {x} }\mathbf {u} +(\nabla _{\mathbf {x} }\mathbf {u} )^{T}\right)} or e r s = 1 2 ( ∂ u r ∂ x s + ∂ u s ∂ x r − ∂ u k ∂ x r ∂ u k ∂ x s ) ≈ 1 2 ( ∂ u r ∂ x s + ∂ u s ∂ x r ) {\displaystyle e_{rs}={\frac {1}{2}}\left({\frac {\partial u_{r}}{\partial x_{s}}}+{\frac {\partial u_{s}}{\partial x_{r}}}-{\frac {\partial u_{k}}{\partial x_{r}}}{\frac {\partial u_{k}}{\partial x_{s}}}\right)\approx {\frac {1}{2}}\left({\frac {\partial u_{r}}{\partial x_{s}}}+{\frac {\partial u_{s}}{\partial x_{r}}}\right)}
This linearization implies that the Lagrangian description and the Eulerian description are approximately the same as there is little difference in the material and spatial coordinates of a given material point in the continuum. Therefore, the material displacement gradient tensor components and the spatial displacement gradient tensor components are approximately equal. Thus we have E ≈ e ≈ ε = 1 2 ( ( ∇ u ) T + ∇ u ) {\displaystyle \mathbf {E} \approx \mathbf {e} \approx {\boldsymbol {\varepsilon }}={\frac {1}{2}}\left((\nabla \mathbf {u} )^{T}+\nabla \mathbf {u} \right)} or E K L ≈ e r s ≈ ε i j = 1 2 ( u i , j + u j , i ) {\displaystyle E_{KL}\approx e_{rs}\approx \varepsilon _{ij}={\frac {1}{2}}\left(u_{i,j}+u_{j,i}\right)} where ε i j {\displaystyle \varepsilon _{ij}} are the components of the infinitesimal strain tensor ε {\displaystyle {\boldsymbol {\varepsilon }}} , also called Cauchy's strain tensor , linear strain tensor , or small strain tensor .
ε i j = 1 2 ( u i , j + u j , i ) = [ ε 11 ε 12 ε 13 ε 21 ε 22 ε 23 ε 31 ε 32 ε 33 ] = [ ∂ u 1 ∂ x 1 1 2 ( ∂ u 1 ∂ x 2 + ∂ u 2 ∂ x 1 ) 1 2 ( ∂ u 1 ∂ x 3 + ∂ u 3 ∂ x 1 ) 1 2 ( ∂ u 2 ∂ x 1 + ∂ u 1 ∂ x 2 ) ∂ u 2 ∂ x 2 1 2 ( ∂ u 2 ∂ x 3 + ∂ u 3 ∂ x 2 ) 1 2 ( ∂ u 3 ∂ x 1 + ∂ u 1 ∂ x 3 ) 1 2 ( ∂ u 3 ∂ x 2 + ∂ u 2 ∂ x 3 ) ∂ u 3 ∂ x 3 ] {\displaystyle {\begin{aligned}\varepsilon _{ij}&={\frac {1}{2}}\left(u_{i,j}+u_{j,i}\right)\\&={\begin{bmatrix}\varepsilon _{11}&\varepsilon _{12}&\varepsilon _{13}\\\varepsilon _{21}&\varepsilon _{22}&\varepsilon _{23}\\\varepsilon _{31}&\varepsilon _{32}&\varepsilon _{33}\\\end{bmatrix}}\\&={\begin{bmatrix}{\frac {\partial u_{1}}{\partial x_{1}}}&{\frac {1}{2}}\left({\frac {\partial u_{1}}{\partial x_{2}}}+{\frac {\partial u_{2}}{\partial x_{1}}}\right)&{\frac {1}{2}}\left({\frac {\partial u_{1}}{\partial x_{3}}}+{\frac {\partial u_{3}}{\partial x_{1}}}\right)\\{\frac {1}{2}}\left({\frac {\partial u_{2}}{\partial x_{1}}}+{\frac {\partial u_{1}}{\partial x_{2}}}\right)&{\frac {\partial u_{2}}{\partial x_{2}}}&{\frac {1}{2}}\left({\frac {\partial u_{2}}{\partial x_{3}}}+{\frac {\partial u_{3}}{\partial x_{2}}}\right)\\{\frac {1}{2}}\left({\frac {\partial u_{3}}{\partial x_{1}}}+{\frac {\partial u_{1}}{\partial x_{3}}}\right)&{\frac {1}{2}}\left({\frac {\partial u_{3}}{\partial x_{2}}}+{\frac {\partial u_{2}}{\partial x_{3}}}\right)&{\frac {\partial u_{3}}{\partial x_{3}}}\\\end{bmatrix}}\end{aligned}}} or using different notation: [ ε x x ε x y ε x z ε y x ε y y ε y z ε z x ε z y ε z z ] = [ ∂ u x ∂ x 1 2 ( ∂ u x ∂ y + ∂ u y ∂ x ) 1 2 ( ∂ u x ∂ z + ∂ u z ∂ x ) 1 2 ( ∂ u y ∂ x + ∂ u x ∂ y ) ∂ u y ∂ y 1 2 ( ∂ u y ∂ z + ∂ u z ∂ y ) 1 2 ( ∂ u z ∂ x + ∂ u x ∂ z ) 1 2 ( ∂ u z ∂ y + ∂ u y ∂ z ) ∂ u z ∂ z ] {\displaystyle {\begin{bmatrix}\varepsilon _{xx}&\varepsilon _{xy}&\varepsilon _{xz}\\\varepsilon _{yx}&\varepsilon _{yy}&\varepsilon _{yz}\\\varepsilon _{zx}&\varepsilon _{zy}&\varepsilon _{zz}\\\end{bmatrix}}={\begin{bmatrix}{\frac {\partial u_{x}}{\partial x}}&{\frac {1}{2}}\left({\frac {\partial u_{x}}{\partial y}}+{\frac {\partial u_{y}}{\partial x}}\right)&{\frac {1}{2}}\left({\frac {\partial u_{x}}{\partial z}}+{\frac {\partial u_{z}}{\partial x}}\right)\\{\frac {1}{2}}\left({\frac {\partial u_{y}}{\partial x}}+{\frac {\partial u_{x}}{\partial y}}\right)&{\frac {\partial u_{y}}{\partial y}}&{\frac {1}{2}}\left({\frac {\partial u_{y}}{\partial z}}+{\frac {\partial u_{z}}{\partial y}}\right)\\{\frac {1}{2}}\left({\frac {\partial u_{z}}{\partial x}}+{\frac {\partial u_{x}}{\partial z}}\right)&{\frac {1}{2}}\left({\frac {\partial u_{z}}{\partial y}}+{\frac {\partial u_{y}}{\partial z}}\right)&{\frac {\partial u_{z}}{\partial z}}\\\end{bmatrix}}}
Furthermore, since the deformation gradient can be expressed as F = ∇ u + I {\displaystyle {\boldsymbol {F}}={\boldsymbol {\nabla }}\mathbf {u} +{\boldsymbol {I}}} where I {\displaystyle {\boldsymbol {I}}} is the second-order identity tensor, we have ε = 1 2 ( F T + F ) − I {\displaystyle {\boldsymbol {\varepsilon }}={\frac {1}{2}}\left({\boldsymbol {F}}^{T}+{\boldsymbol {F}}\right)-{\boldsymbol {I}}}
Also, from the general expression for the Lagrangian and Eulerian finite strain tensors we have E ( m ) = 1 2 m ( U 2 m − I ) = 1 2 m [ ( F T F ) m − I ] ≈ 1 2 m [ { ∇ u + ( ∇ u ) T + I } m − I ] ≈ ε e ( m ) = 1 2 m ( V 2 m − I ) = 1 2 m [ ( F F T ) m − I ] ≈ ε {\displaystyle {\begin{aligned}\mathbf {E} _{(m)}&={\frac {1}{2m}}(\mathbf {U} ^{2m}-{\boldsymbol {I}})={\frac {1}{2m}}[({\boldsymbol {F}}^{T}{\boldsymbol {F}})^{m}-{\boldsymbol {I}}]\approx {\frac {1}{2m}}[\{{\boldsymbol {\nabla }}\mathbf {u} +({\boldsymbol {\nabla }}\mathbf {u} )^{T}+{\boldsymbol {I}}\}^{m}-{\boldsymbol {I}}]\approx {\boldsymbol {\varepsilon }}\\\mathbf {e} _{(m)}&={\frac {1}{2m}}(\mathbf {V} ^{2m}-{\boldsymbol {I}})={\frac {1}{2m}}[({\boldsymbol {F}}{\boldsymbol {F}}^{T})^{m}-{\boldsymbol {I}}]\approx {\boldsymbol {\varepsilon }}\end{aligned}}}
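A short numerical sketch of this linearization (illustrative displacement gradient, NumPy for convenience) shows the infinitesimal strain agreeing with the Lagrangian finite strain up to second-order terms.

```python
# For a small displacement gradient grad_u, the infinitesimal strain (the
# symmetric part of grad_u) matches E = (F^T F - I)/2 up to ||grad_u||**2.
import numpy as np

grad_u = 1e-3 * np.array([[1.0, 2.0, 0.0],
                          [0.5, -1.0, 1.0],
                          [0.0, 0.3, 2.0]])      # ||grad_u|| << 1

eps = 0.5 * (grad_u + grad_u.T)                  # infinitesimal strain tensor
F = np.eye(3) + grad_u                           # deformation gradient
E = 0.5 * (F.T @ F - np.eye(3))                  # Lagrangian finite strain

print(np.max(np.abs(E - eps)))                   # ~ ||grad_u||**2, negligible
```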
Consider a two-dimensional deformation of an infinitesimal rectangular material element with dimensions d x {\displaystyle dx} by d y {\displaystyle dy} (Figure 1), which after deformation, takes the form of a rhombus. From the geometry of Figure 1 we have
a b ¯ = ( d x + ∂ u x ∂ x d x ) 2 + ( ∂ u y ∂ x d x ) 2 = d x 1 + 2 ∂ u x ∂ x + ( ∂ u x ∂ x ) 2 + ( ∂ u y ∂ x ) 2 {\displaystyle {\begin{aligned}{\overline {ab}}&={\sqrt {\left(dx+{\frac {\partial u_{x}}{\partial x}}dx\right)^{2}+\left({\frac {\partial u_{y}}{\partial x}}dx\right)^{2}}}\\&=dx{\sqrt {1+2{\frac {\partial u_{x}}{\partial x}}+\left({\frac {\partial u_{x}}{\partial x}}\right)^{2}+\left({\frac {\partial u_{y}}{\partial x}}\right)^{2}}}\\\end{aligned}}}
For very small displacement gradients, i.e., ‖ ∇ u ‖ ≪ 1 {\displaystyle \|\nabla \mathbf {u} \|\ll 1} , we have a b ¯ ≈ d x + ∂ u x ∂ x d x {\displaystyle {\overline {ab}}\approx dx+{\frac {\partial u_{x}}{\partial x}}dx}
The normal strain in the x {\displaystyle x} -direction of the rectangular element is defined by ε x = a b ¯ − A B ¯ A B ¯ {\displaystyle \varepsilon _{x}={\frac {{\overline {ab}}-{\overline {AB}}}{\overline {AB}}}} and knowing that A B ¯ = d x {\displaystyle {\overline {AB}}=dx} , we have ε x = ∂ u x ∂ x {\displaystyle \varepsilon _{x}={\frac {\partial u_{x}}{\partial x}}}
Similarly, the normal strain in the y {\displaystyle y} -direction, and z {\displaystyle z} -direction, becomes ε y = ∂ u y ∂ y , ε z = ∂ u z ∂ z {\displaystyle \varepsilon _{y}={\frac {\partial u_{y}}{\partial y}}\quad ,\qquad \varepsilon _{z}={\frac {\partial u_{z}}{\partial z}}}
The engineering shear strain , or the change in angle between two originally orthogonal material lines, in this case line A C ¯ {\displaystyle {\overline {AC}}} and A B ¯ {\displaystyle {\overline {AB}}} , is defined as γ x y = α + β {\displaystyle \gamma _{xy}=\alpha +\beta }
From the geometry of Figure 1 we have tan α = ∂ u y ∂ x d x d x + ∂ u x ∂ x d x = ∂ u y ∂ x 1 + ∂ u x ∂ x , tan β = ∂ u x ∂ y d y d y + ∂ u y ∂ y d y = ∂ u x ∂ y 1 + ∂ u y ∂ y {\displaystyle \tan \alpha ={\frac {{\dfrac {\partial u_{y}}{\partial x}}dx}{dx+{\dfrac {\partial u_{x}}{\partial x}}dx}}={\frac {\dfrac {\partial u_{y}}{\partial x}}{1+{\dfrac {\partial u_{x}}{\partial x}}}}\quad ,\qquad \tan \beta ={\frac {{\dfrac {\partial u_{x}}{\partial y}}dy}{dy+{\dfrac {\partial u_{y}}{\partial y}}dy}}={\frac {\dfrac {\partial u_{x}}{\partial y}}{1+{\dfrac {\partial u_{y}}{\partial y}}}}}
For small rotations, i.e., α {\displaystyle \alpha } and β {\displaystyle \beta } are ≪ 1 {\displaystyle \ll 1} we have tan α ≈ α , tan β ≈ β {\displaystyle \tan \alpha \approx \alpha \quad ,\qquad \tan \beta \approx \beta } and, again, for small displacement gradients, we have α = ∂ u y ∂ x , β = ∂ u x ∂ y {\displaystyle \alpha ={\frac {\partial u_{y}}{\partial x}}\quad ,\qquad \beta ={\frac {\partial u_{x}}{\partial y}}} thus γ x y = α + β = ∂ u y ∂ x + ∂ u x ∂ y {\displaystyle \gamma _{xy}=\alpha +\beta ={\frac {\partial u_{y}}{\partial x}}+{\frac {\partial u_{x}}{\partial y}}} By interchanging x {\displaystyle x} and y {\displaystyle y} and u x {\displaystyle u_{x}} and u y {\displaystyle u_{y}} , it can be shown that γ x y = γ y x {\displaystyle \gamma _{xy}=\gamma _{yx}} .
Similarly, for the y {\displaystyle y} - z {\displaystyle z} and x {\displaystyle x} - z {\displaystyle z} planes, we have γ y z = γ z y = ∂ u y ∂ z + ∂ u z ∂ y , γ z x = γ x z = ∂ u z ∂ x + ∂ u x ∂ z {\displaystyle \gamma _{yz}=\gamma _{zy}={\frac {\partial u_{y}}{\partial z}}+{\frac {\partial u_{z}}{\partial y}}\quad ,\qquad \gamma _{zx}=\gamma _{xz}={\frac {\partial u_{z}}{\partial x}}+{\frac {\partial u_{x}}{\partial z}}}
It can be seen that the tensorial shear strain components of the infinitesimal strain tensor can then be expressed using the engineering strain definition, γ {\displaystyle \gamma } , as [ ε x x ε x y ε x z ε y x ε y y ε y z ε z x ε z y ε z z ] = [ ε x x γ x y / 2 γ x z / 2 γ y x / 2 ε y y γ y z / 2 γ z x / 2 γ z y / 2 ε z z ] {\displaystyle {\begin{bmatrix}\varepsilon _{xx}&\varepsilon _{xy}&\varepsilon _{xz}\\\varepsilon _{yx}&\varepsilon _{yy}&\varepsilon _{yz}\\\varepsilon _{zx}&\varepsilon _{zy}&\varepsilon _{zz}\\\end{bmatrix}}={\begin{bmatrix}\varepsilon _{xx}&\gamma _{xy}/2&\gamma _{xz}/2\\\gamma _{yx}/2&\varepsilon _{yy}&\gamma _{yz}/2\\\gamma _{zx}/2&\gamma _{zy}/2&\varepsilon _{zz}\\\end{bmatrix}}}
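The relation between the engineering and tensorial shear strains can be illustrated with a simple-shear displacement field; the field u_x = k·y, u_y = 0 below is an illustrative choice.

```python
# For simple shear the engineering shear strain gamma_xy equals twice the
# tensorial component eps_xy.
import numpy as np

k = 1e-3
grad_u = np.array([[0.0, k],        # du_x/dx, du_x/dy
                   [0.0, 0.0]])     # du_y/dx, du_y/dy
eps = 0.5 * (grad_u + grad_u.T)

gamma_xy = grad_u[0, 1] + grad_u[1, 0]    # engineering shear strain
print(gamma_xy, 2 * eps[0, 1])            # equal: gamma_xy = 2*eps_xy
```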
From finite strain theory we have d x 2 − d X 2 = d X ⋅ 2 E ⋅ d X or ( d x ) 2 − ( d X ) 2 = 2 E K L d X K d X L {\displaystyle d\mathbf {x} ^{2}-d\mathbf {X} ^{2}=d\mathbf {X} \cdot 2\mathbf {E} \cdot d\mathbf {X} \quad {\text{or}}\quad (dx)^{2}-(dX)^{2}=2E_{KL}\,dX_{K}\,dX_{L}}
For infinitesimal strains then we have d x 2 − d X 2 = d X ⋅ 2 ε ⋅ d X or ( d x ) 2 − ( d X ) 2 = 2 ε K L d X K d X L {\displaystyle d\mathbf {x} ^{2}-d\mathbf {X} ^{2}=d\mathbf {X} \cdot 2\mathbf {\boldsymbol {\varepsilon }} \cdot d\mathbf {X} \quad {\text{or}}\quad (dx)^{2}-(dX)^{2}=2\varepsilon _{KL}\,dX_{K}\,dX_{L}}
Dividing by ( d X ) 2 {\displaystyle (dX)^{2}} we have d x − d X d X d x + d X d X = 2 ε i j d X i d X d X j d X {\displaystyle {\frac {dx-dX}{dX}}{\frac {dx+dX}{dX}}=2\varepsilon _{ij}{\frac {dX_{i}}{dX}}{\frac {dX_{j}}{dX}}}
For small deformations we assume that d x ≈ d X {\displaystyle dx\approx dX} , thus the second factor on the left-hand side becomes: d x + d X d X ≈ 2 {\displaystyle {\frac {dx+dX}{dX}}\approx 2} .
Then we have d x − d X d X = ε i j N i N j = N ⋅ ε ⋅ N {\displaystyle {\frac {dx-dX}{dX}}=\varepsilon _{ij}N_{i}N_{j}=\mathbf {N} \cdot {\boldsymbol {\varepsilon }}\cdot \mathbf {N} } where N i = d X i d X {\displaystyle N_{i}={\frac {dX_{i}}{dX}}} , is the unit vector in the direction of d X {\displaystyle d\mathbf {X} } , and the left-hand-side expression is the normal strain e ( N ) {\displaystyle e_{(\mathbf {N} )}} in the direction of N {\displaystyle \mathbf {N} } . For the particular case of N {\displaystyle \mathbf {N} } in the X 1 {\displaystyle X_{1}} direction, i.e., N = I 1 {\displaystyle \mathbf {N} =\mathbf {I} _{1}} , we have e ( I 1 ) = I 1 ⋅ ε ⋅ I 1 = ε 11 . {\displaystyle e_{(\mathbf {I} _{1})}=\mathbf {I} _{1}\cdot {\boldsymbol {\varepsilon }}\cdot \mathbf {I} _{1}=\varepsilon _{11}.}
Similarly, for N = I 2 {\displaystyle \mathbf {N} =\mathbf {I} _{2}} and N = I 3 {\displaystyle \mathbf {N} =\mathbf {I} _{3}} we can find the normal strains ε 22 {\displaystyle \varepsilon _{22}} and ε 33 {\displaystyle \varepsilon _{33}} , respectively. Therefore, the diagonal elements of the infinitesimal strain tensor are the normal strains in the coordinate directions.
If we choose an orthonormal coordinate system ( e 1 , e 2 , e 3 {\displaystyle \mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3}} ) we can write the tensor in terms of components with respect to those base vectors as ε = ∑ i = 1 3 ∑ j = 1 3 ε i j e i ⊗ e j {\displaystyle {\boldsymbol {\varepsilon }}=\sum _{i=1}^{3}\sum _{j=1}^{3}\varepsilon _{ij}\mathbf {e} _{i}\otimes \mathbf {e} _{j}} In matrix form, ε _ _ = [ ε 11 ε 12 ε 13 ε 12 ε 22 ε 23 ε 13 ε 23 ε 33 ] {\displaystyle {\underline {\underline {\boldsymbol {\varepsilon }}}}={\begin{bmatrix}\varepsilon _{11}&\varepsilon _{12}&\varepsilon _{13}\\\varepsilon _{12}&\varepsilon _{22}&\varepsilon _{23}\\\varepsilon _{13}&\varepsilon _{23}&\varepsilon _{33}\end{bmatrix}}} We can easily choose to use another orthonormal coordinate system ( e ^ 1 , e ^ 2 , e ^ 3 {\displaystyle {\hat {\mathbf {e} }}_{1},{\hat {\mathbf {e} }}_{2},{\hat {\mathbf {e} }}_{3}} ) instead. In that case the components of the tensor are different, say ε = ∑ i = 1 3 ∑ j = 1 3 ε ^ i j e ^ i ⊗ e ^ j ⟹ ε ^ _ _ = [ ε ^ 11 ε ^ 12 ε ^ 13 ε ^ 12 ε ^ 22 ε ^ 23 ε ^ 13 ε ^ 23 ε ^ 33 ] {\displaystyle {\boldsymbol {\varepsilon }}=\sum _{i=1}^{3}\sum _{j=1}^{3}{\hat {\varepsilon }}_{ij}{\hat {\mathbf {e} }}_{i}\otimes {\hat {\mathbf {e} }}_{j}\quad \implies \quad {\underline {\underline {\hat {\boldsymbol {\varepsilon }}}}}={\begin{bmatrix}{\hat {\varepsilon }}_{11}&{\hat {\varepsilon }}_{12}&{\hat {\varepsilon }}_{13}\\{\hat {\varepsilon }}_{12}&{\hat {\varepsilon }}_{22}&{\hat {\varepsilon }}_{23}\\{\hat {\varepsilon }}_{13}&{\hat {\varepsilon }}_{23}&{\hat {\varepsilon }}_{33}\end{bmatrix}}} The components of the strain in the two coordinate systems are related by ε ^ i j = ℓ i p ℓ j q ε p q {\displaystyle {\hat {\varepsilon }}_{ij}=\ell _{ip}~\ell _{jq}~\varepsilon _{pq}} where the Einstein summation convention for repeated indices has been used and ℓ i j = e ^ i ⋅ e j {\displaystyle \ell _{ij}={\hat {\mathbf {e} }}_{i}\cdot {\mathbf {e} }_{j}} . In matrix form ε ^ _ _ = L _ _ ε _ _ L _ _ T {\displaystyle {\underline {\underline {\hat {\boldsymbol {\varepsilon }}}}}={\underline {\underline {\mathbf {L} }}}~{\underline {\underline {\boldsymbol {\varepsilon }}}}~{\underline {\underline {\mathbf {L} }}}^{T}} or [ ε ^ 11 ε ^ 12 ε ^ 13 ε ^ 21 ε ^ 22 ε ^ 23 ε ^ 31 ε ^ 32 ε ^ 33 ] = [ ℓ 11 ℓ 12 ℓ 13 ℓ 21 ℓ 22 ℓ 23 ℓ 31 ℓ 32 ℓ 33 ] [ ε 11 ε 12 ε 13 ε 21 ε 22 ε 23 ε 31 ε 32 ε 33 ] [ ℓ 11 ℓ 12 ℓ 13 ℓ 21 ℓ 22 ℓ 23 ℓ 31 ℓ 32 ℓ 33 ] T {\displaystyle {\begin{bmatrix}{\hat {\varepsilon }}_{11}&{\hat {\varepsilon }}_{12}&{\hat {\varepsilon }}_{13}\\{\hat {\varepsilon }}_{21}&{\hat {\varepsilon }}_{22}&{\hat {\varepsilon }}_{23}\\{\hat {\varepsilon }}_{31}&{\hat {\varepsilon }}_{32}&{\hat {\varepsilon }}_{33}\end{bmatrix}}={\begin{bmatrix}\ell _{11}&\ell _{12}&\ell _{13}\\\ell _{21}&\ell _{22}&\ell _{23}\\\ell _{31}&\ell _{32}&\ell _{33}\end{bmatrix}}{\begin{bmatrix}\varepsilon _{11}&\varepsilon _{12}&\varepsilon _{13}\\\varepsilon _{21}&\varepsilon _{22}&\varepsilon _{23}\\\varepsilon _{31}&\varepsilon _{32}&\varepsilon _{33}\end{bmatrix}}{\begin{bmatrix}\ell _{11}&\ell _{12}&\ell _{13}\\\ell _{21}&\ell _{22}&\ell _{23}\\\ell _{31}&\ell _{32}&\ell _{33}\end{bmatrix}}^{T}}
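The transformation rule can be verified numerically; in the Python sketch below the strain components and the basis rotation are illustrative values.

```python
# Rotating the basis by L (rows are the new base vectors) transforms the
# components as eps_hat = L eps L^T; the strain invariants are unchanged.
import numpy as np

eps = np.array([[ 2.0, 0.5, 0.0],
                [ 0.5, 1.0, 0.3],
                [ 0.0, 0.3, -1.0]]) * 1e-3

t = 0.4                                          # rotate the basis about z
L = np.array([[ np.cos(t), np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [ 0.0,       0.0,       1.0]])

eps_hat = L @ eps @ L.T
print(np.trace(eps), np.trace(eps_hat))              # first invariant unchanged
print(np.linalg.det(eps), np.linalg.det(eps_hat))    # third invariant unchanged
```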
Certain operations on the strain tensor give the same result without regard to which orthonormal coordinate system is used to represent the components of strain. The results of these operations are called strain invariants . The most commonly used strain invariants are I 1 = t r ( ε ) I 2 = 1 2 { [ t r ( ε ) ] 2 − t r ( ε 2 ) } I 3 = det ( ε ) {\displaystyle {\begin{aligned}I_{1}&=\mathrm {tr} ({\boldsymbol {\varepsilon }})\\I_{2}&={\tfrac {1}{2}}\{[\mathrm {tr} ({\boldsymbol {\varepsilon }})]^{2}-\mathrm {tr} ({\boldsymbol {\varepsilon }}^{2})\}\\I_{3}&=\det({\boldsymbol {\varepsilon }})\end{aligned}}} In terms of components I 1 = ε 11 + ε 22 + ε 33 I 2 = ε 11 ε 22 + ε 22 ε 33 + ε 33 ε 11 − ε 12 2 − ε 23 2 − ε 31 2 I 3 = ε 11 ( ε 22 ε 33 − ε 23 2 ) − ε 12 ( ε 21 ε 33 − ε 23 ε 31 ) + ε 13 ( ε 21 ε 32 − ε 22 ε 31 ) {\displaystyle {\begin{aligned}I_{1}&=\varepsilon _{11}+\varepsilon _{22}+\varepsilon _{33}\\I_{2}&=\varepsilon _{11}\varepsilon _{22}+\varepsilon _{22}\varepsilon _{33}+\varepsilon _{33}\varepsilon _{11}-\varepsilon _{12}^{2}-\varepsilon _{23}^{2}-\varepsilon _{31}^{2}\\I_{3}&=\varepsilon _{11}(\varepsilon _{22}\varepsilon _{33}-\varepsilon _{23}^{2})-\varepsilon _{12}(\varepsilon _{21}\varepsilon _{33}-\varepsilon _{23}\varepsilon _{31})+\varepsilon _{13}(\varepsilon _{21}\varepsilon _{32}-\varepsilon _{22}\varepsilon _{31})\end{aligned}}}
It can be shown that it is possible to find a coordinate system ( n 1 , n 2 , n 3 {\displaystyle \mathbf {n} _{1},\mathbf {n} _{2},\mathbf {n} _{3}} ) in which the components of the strain tensor are ε _ _ = [ ε 1 0 0 0 ε 2 0 0 0 ε 3 ] ⟹ ε = ε 1 n 1 ⊗ n 1 + ε 2 n 2 ⊗ n 2 + ε 3 n 3 ⊗ n 3 {\displaystyle {\underline {\underline {\boldsymbol {\varepsilon }}}}={\begin{bmatrix}\varepsilon _{1}&0&0\\0&\varepsilon _{2}&0\\0&0&\varepsilon _{3}\end{bmatrix}}\quad \implies \quad {\boldsymbol {\varepsilon }}=\varepsilon _{1}\mathbf {n} _{1}\otimes \mathbf {n} _{1}+\varepsilon _{2}\mathbf {n} _{2}\otimes \mathbf {n} _{2}+\varepsilon _{3}\mathbf {n} _{3}\otimes \mathbf {n} _{3}} The components of the strain tensor in the ( n 1 , n 2 , n 3 {\displaystyle \mathbf {n} _{1},\mathbf {n} _{2},\mathbf {n} _{3}} ) coordinate system are called the principal strains and the directions n i {\displaystyle \mathbf {n} _{i}} are called the directions of principal strain. Since there are no shear strain components in this coordinate system, the principal strains represent the maximum and minimum stretches of an elemental volume.
If we are given the components of the strain tensor in an arbitrary orthonormal coordinate system, we can find the principal strains using an eigenvalue decomposition determined by solving the system of equations ( ε _ _ − ε i I _ _ ) n i = 0 _ {\displaystyle ({\underline {\underline {\boldsymbol {\varepsilon }}}}-\varepsilon _{i}~{\underline {\underline {\mathbf {I} }}})~\mathbf {n} _{i}={\underline {\mathbf {0} }}} This system of equations is equivalent to finding the vector n i {\displaystyle \mathbf {n} _{i}} along which the strain tensor becomes a pure stretch with no shear component.
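Because the strain tensor is symmetric, this eigenvalue problem is solved directly by a symmetric eigendecomposition; the component values below are illustrative.

```python
# The principal strains are the eigenvalues of the strain tensor and the
# principal directions are its eigenvectors; in the principal frame the
# tensor is diagonal (no shear components).
import numpy as np

eps = np.array([[ 2.0, 0.5, 0.0],
                [ 0.5, 1.0, 0.3],
                [ 0.0, 0.3, -1.0]]) * 1e-3

principal_strains, principal_dirs = np.linalg.eigh(eps)   # columns are n_i
print(principal_strains)
print(np.round(principal_dirs.T @ eps @ principal_dirs, 12))
```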
The volumetric strain , also called bulk strain , is the relative variation of the volume, as arising from dilation or compression ; it is the first strain invariant or trace of the tensor: δ = Δ V V 0 = I 1 = ε 11 + ε 22 + ε 33 {\displaystyle \delta ={\frac {\Delta V}{V_{0}}}=I_{1}=\varepsilon _{11}+\varepsilon _{22}+\varepsilon _{33}} Actually, if we consider a cube with an edge length a , it is a quasi-cube after the deformation (the variations of the angles do not change the volume) with the dimensions a ⋅ ( 1 + ε 11 ) × a ⋅ ( 1 + ε 22 ) × a ⋅ ( 1 + ε 33 ) {\displaystyle a\cdot (1+\varepsilon _{11})\times a\cdot (1+\varepsilon _{22})\times a\cdot (1+\varepsilon _{33})} and V 0 = a 3 , thus Δ V V 0 = ( 1 + ε 11 + ε 22 + ε 33 + ε 11 ⋅ ε 22 + ε 11 ⋅ ε 33 + ε 22 ⋅ ε 33 + ε 11 ⋅ ε 22 ⋅ ε 33 ) ⋅ a 3 − a 3 a 3 {\displaystyle {\frac {\Delta V}{V_{0}}}={\frac {\left(1+\varepsilon _{11}+\varepsilon _{22}+\varepsilon _{33}+\varepsilon _{11}\cdot \varepsilon _{22}+\varepsilon _{11}\cdot \varepsilon _{33}+\varepsilon _{22}\cdot \varepsilon _{33}+\varepsilon _{11}\cdot \varepsilon _{22}\cdot \varepsilon _{33}\right)\cdot a^{3}-a^{3}}{a^{3}}}} as we consider small deformations, 1 ≫ ε i i ≫ ε i i ⋅ ε j j ≫ ε 11 ⋅ ε 22 ⋅ ε 33 {\displaystyle 1\gg \varepsilon _{ii}\gg \varepsilon _{ii}\cdot \varepsilon _{jj}\gg \varepsilon _{11}\cdot \varepsilon _{22}\cdot \varepsilon _{33}} therefore the formula.
In case of pure shear, we can see that there is no change of the volume.
The infinitesimal strain tensor ε i j {\displaystyle \varepsilon _{ij}} , similarly to the Cauchy stress tensor , can be expressed as the sum of two other tensors:
ε i j = ε i j ′ + ε M δ i j {\displaystyle \varepsilon _{ij}=\varepsilon '_{ij}+\varepsilon _{M}\delta _{ij}} where ε M {\displaystyle \varepsilon _{M}} is the mean strain given by ε M = ε k k 3 = ε 11 + ε 22 + ε 33 3 = 1 3 I 1 e {\displaystyle \varepsilon _{M}={\frac {\varepsilon _{kk}}{3}}={\frac {\varepsilon _{11}+\varepsilon _{22}+\varepsilon _{33}}{3}}={\tfrac {1}{3}}I_{1}^{e}}
The deviatoric strain tensor can be obtained by subtracting the mean strain tensor from the infinitesimal strain tensor: ε i j ′ = ε i j − ε k k 3 δ i j [ ε 11 ′ ε 12 ′ ε 13 ′ ε 21 ′ ε 22 ′ ε 23 ′ ε 31 ′ ε 32 ′ ε 33 ′ ] = [ ε 11 ε 12 ε 13 ε 21 ε 22 ε 23 ε 31 ε 32 ε 33 ] − [ ε M 0 0 0 ε M 0 0 0 ε M ] = [ ε 11 − ε M ε 12 ε 13 ε 21 ε 22 − ε M ε 23 ε 31 ε 32 ε 33 − ε M ] {\displaystyle {\begin{aligned}\ \varepsilon '_{ij}&=\varepsilon _{ij}-{\frac {\varepsilon _{kk}}{3}}\delta _{ij}\\{\begin{bmatrix}\varepsilon '_{11}&\varepsilon '_{12}&\varepsilon '_{13}\\\varepsilon '_{21}&\varepsilon '_{22}&\varepsilon '_{23}\\\varepsilon '_{31}&\varepsilon '_{32}&\varepsilon '_{33}\\\end{bmatrix}}&={\begin{bmatrix}\varepsilon _{11}&\varepsilon _{12}&\varepsilon _{13}\\\varepsilon _{21}&\varepsilon _{22}&\varepsilon _{23}\\\varepsilon _{31}&\varepsilon _{32}&\varepsilon _{33}\\\end{bmatrix}}-{\begin{bmatrix}\varepsilon _{M}&0&0\\0&\varepsilon _{M}&0\\0&0&\varepsilon _{M}\\\end{bmatrix}}\\&={\begin{bmatrix}\varepsilon _{11}-\varepsilon _{M}&\varepsilon _{12}&\varepsilon _{13}\\\varepsilon _{21}&\varepsilon _{22}-\varepsilon _{M}&\varepsilon _{23}\\\varepsilon _{31}&\varepsilon _{32}&\varepsilon _{33}-\varepsilon _{M}\\\end{bmatrix}}\\\end{aligned}}}
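A short numerical sketch of this decomposition, using the same illustrative strain tensor as above:

```python
# Mean (volumetric) part plus a trace-free deviatoric part.
import numpy as np

eps = np.array([[ 2.0, 0.5, 0.0],
                [ 0.5, 1.0, 0.3],
                [ 0.0, 0.3, -1.0]]) * 1e-3

eps_mean = np.trace(eps) / 3.0
eps_dev = eps - eps_mean * np.eye(3)

print(eps_mean)                                            # mean strain
print(np.trace(eps_dev))                                   # ~ 0: no volume change
print(np.allclose(eps, eps_dev + eps_mean * np.eye(3)))    # True
```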
Let ( n 1 , n 2 , n 3 {\displaystyle \mathbf {n} _{1},\mathbf {n} _{2},\mathbf {n} _{3}} ) be the directions of the three principal strains. An octahedral plane is one whose normal makes equal angles with the three principal directions. The engineering shear strain on an octahedral plane is called the octahedral shear strain and is given by γ o c t = 2 3 ( ε 1 − ε 2 ) 2 + ( ε 2 − ε 3 ) 2 + ( ε 3 − ε 1 ) 2 {\displaystyle \gamma _{\mathrm {oct} }={\tfrac {2}{3}}{\sqrt {(\varepsilon _{1}-\varepsilon _{2})^{2}+(\varepsilon _{2}-\varepsilon _{3})^{2}+(\varepsilon _{3}-\varepsilon _{1})^{2}}}} where ε 1 , ε 2 , ε 3 {\displaystyle \varepsilon _{1},\varepsilon _{2},\varepsilon _{3}} are the principal strains.
The normal strain on an octahedral plane is given by ε o c t = 1 3 ( ε 1 + ε 2 + ε 3 ) {\displaystyle \varepsilon _{\mathrm {oct} }={\tfrac {1}{3}}(\varepsilon _{1}+\varepsilon _{2}+\varepsilon _{3})} .
A scalar quantity called the equivalent strain , or the von Mises equivalent strain, is often used to describe the state of strain in solids. Several definitions of equivalent strain can be found in the literature. A definition that is commonly used in the literature on plasticity is ε e q = 2 3 ε d e v : ε d e v = 2 3 ε i j d e v ε i j d e v ; ε d e v = ε − 1 3 t r ( ε ) I {\displaystyle \varepsilon _{\mathrm {eq} }={\sqrt {{\tfrac {2}{3}}{\boldsymbol {\varepsilon }}^{\mathrm {dev} }:{\boldsymbol {\varepsilon }}^{\mathrm {dev} }}}={\sqrt {{\tfrac {2}{3}}\varepsilon _{ij}^{\mathrm {dev} }\varepsilon _{ij}^{\mathrm {dev} }}}~;~~{\boldsymbol {\varepsilon }}^{\mathrm {dev} }={\boldsymbol {\varepsilon }}-{\tfrac {1}{3}}\mathrm {tr} ({\boldsymbol {\varepsilon }})~{\boldsymbol {I}}} This quantity is work conjugate to the equivalent stress defined as σ e q = 3 2 σ d e v : σ d e v {\displaystyle \sigma _{\mathrm {eq} }={\sqrt {{\tfrac {3}{2}}{\boldsymbol {\sigma }}^{\mathrm {dev} }:{\boldsymbol {\sigma }}^{\mathrm {dev} }}}}
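Both scalar measures can be computed from the same illustrative strain tensor used above; the sketch below follows the octahedral and von Mises definitions given in this section.

```python
# Octahedral shear strain from the principal strains, and the von Mises
# equivalent strain from the deviatoric tensor.
import numpy as np

eps = np.array([[ 2.0, 0.5, 0.0],
                [ 0.5, 1.0, 0.3],
                [ 0.0, 0.3, -1.0]]) * 1e-3

e1, e2, e3 = np.linalg.eigvalsh(eps)                     # principal strains
gamma_oct = (2.0 / 3.0) * np.sqrt((e1 - e2) ** 2 + (e2 - e3) ** 2 + (e3 - e1) ** 2)

eps_dev = eps - (np.trace(eps) / 3.0) * np.eye(3)
eps_eq = np.sqrt((2.0 / 3.0) * np.sum(eps_dev * eps_dev))  # sqrt(2/3 eps_dev : eps_dev)

print(gamma_oct, eps_eq)
```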
For prescribed strain components ε i j {\displaystyle \varepsilon _{ij}} the strain tensor equation u i , j + u j , i = 2 ε i j {\displaystyle u_{i,j}+u_{j,i}=2\varepsilon _{ij}} represents a system of six differential equations for the determination of three displacement components u i {\displaystyle u_{i}} , giving an over-determined system. Thus, a solution does not generally exist for an arbitrary choice of strain components. Therefore, some restrictions, named compatibility equations , are imposed upon the strain components. With the addition of the three compatibility equations the number of independent equations is reduced to three, matching the number of unknown displacement components. These constraints on the strain tensor were discovered by Saint-Venant , and are called the " Saint Venant compatibility equations ".
The compatibility functions serve to assure a single-valued continuous displacement function u i {\displaystyle u_{i}} . If the elastic medium is visualised as a set of infinitesimal cubes in the unstrained state, after the medium is strained, an arbitrary strain tensor may not yield a situation in which the distorted cubes still fit together without overlapping.
In index notation, the compatibility equations are expressed as ε i j , k m + ε k m , i j − ε i k , j m − ε j m , i k = 0 {\displaystyle \varepsilon _{ij,km}+\varepsilon _{km,ij}-\varepsilon _{ik,jm}-\varepsilon _{jm,ik}=0}
In engineering notation, writing the normal strains as {\displaystyle \epsilon _{x},\epsilon _{y},\epsilon _{z}} and the engineering shear strains as {\displaystyle \gamma _{xy}=2\varepsilon _{12}} (and similarly for the other pairs), the compatibility equations take the form {\displaystyle {\frac {\partial ^{2}\epsilon _{x}}{\partial y^{2}}}+{\frac {\partial ^{2}\epsilon _{y}}{\partial x^{2}}}={\frac {\partial ^{2}\gamma _{xy}}{\partial x\,\partial y}}} and {\displaystyle 2{\frac {\partial ^{2}\epsilon _{x}}{\partial y\,\partial z}}={\frac {\partial }{\partial x}}\left(-{\frac {\partial \gamma _{yz}}{\partial x}}+{\frac {\partial \gamma _{zx}}{\partial y}}+{\frac {\partial \gamma _{xy}}{\partial z}}\right),} together with the four analogous equations obtained by cyclic permutation of x, y and z.
In real engineering components, stress (and strain) are 3-D tensors but in prismatic structures such as a long metal billet, the length of the structure is much greater than the other two dimensions. The strains associated with length, i.e., the normal strain ε 33 {\displaystyle \varepsilon _{33}} and the shear strains ε 13 {\displaystyle \varepsilon _{13}} and ε 23 {\displaystyle \varepsilon _{23}} (if the length is the 3-direction) are constrained by nearby material and are small compared to the cross-sectional strains . Plane strain is then an acceptable approximation. The strain tensor for plane strain is written as: ε _ _ = [ ε 11 ε 12 0 ε 21 ε 22 0 0 0 0 ] {\displaystyle {\underline {\underline {\boldsymbol {\varepsilon }}}}={\begin{bmatrix}\varepsilon _{11}&\varepsilon _{12}&0\\\varepsilon _{21}&\varepsilon _{22}&0\\0&0&0\end{bmatrix}}} in which the double underline indicates a second order tensor . This strain state is called plane strain . The corresponding stress tensor is: σ _ _ = [ σ 11 σ 12 0 σ 21 σ 22 0 0 0 σ 33 ] {\displaystyle {\underline {\underline {\boldsymbol {\sigma }}}}={\begin{bmatrix}\sigma _{11}&\sigma _{12}&0\\\sigma _{21}&\sigma _{22}&0\\0&0&\sigma _{33}\end{bmatrix}}} in which the non-zero σ 33 {\displaystyle \sigma _{33}} is needed to maintain the constraint ϵ 33 = 0 {\displaystyle \epsilon _{33}=0} . This stress term can be temporarily removed from the analysis to leave only the in-plane terms, effectively reducing the 3-D problem to a much simpler 2-D problem.
Antiplane strain is another special state of strain that can occur in a body, for instance in a region close to a screw dislocation . The strain tensor for antiplane strain is given by ε _ _ = [ 0 0 ε 13 0 0 ε 23 ε 13 ε 23 0 ] {\displaystyle {\underline {\underline {\boldsymbol {\varepsilon }}}}={\begin{bmatrix}0&0&\varepsilon _{13}\\0&0&\varepsilon _{23}\\\varepsilon _{13}&\varepsilon _{23}&0\end{bmatrix}}}
The infinitesimal strain tensor is defined as ε = 1 2 [ ∇ u + ( ∇ u ) T ] {\displaystyle {\boldsymbol {\varepsilon }}={\frac {1}{2}}[{\boldsymbol {\nabla }}\mathbf {u} +({\boldsymbol {\nabla }}\mathbf {u} )^{T}]} Therefore the displacement gradient can be expressed as ∇ u = ε + W {\displaystyle {\boldsymbol {\nabla }}\mathbf {u} ={\boldsymbol {\varepsilon }}+{\boldsymbol {W}}} where W := 1 2 [ ∇ u − ( ∇ u ) T ] {\displaystyle {\boldsymbol {W}}:={\frac {1}{2}}[{\boldsymbol {\nabla }}\mathbf {u} -({\boldsymbol {\nabla }}\mathbf {u} )^{T}]} The quantity W {\displaystyle {\boldsymbol {W}}} is the infinitesimal rotation tensor or infinitesimal angular displacement tensor (related to the infinitesimal rotation matrix ). This tensor is skew symmetric . For infinitesimal deformations the scalar components of W {\displaystyle {\boldsymbol {W}}} satisfy the condition | W i j | ≪ 1 {\displaystyle |W_{ij}|\ll 1} . Note that the displacement gradient is small only if both the strain tensor and the rotation tensor are infinitesimal.
A skew symmetric second-order tensor has three independent scalar components. These three components are used to define an axial vector , w {\displaystyle \mathbf {w} } , as follows W i j = − ϵ i j k w k ; w i = − 1 2 ϵ i j k W j k {\displaystyle W_{ij}=-\epsilon _{ijk}~w_{k}~;~~w_{i}=-{\tfrac {1}{2}}~\epsilon _{ijk}~W_{jk}} where ϵ i j k {\displaystyle \epsilon _{ijk}} is the permutation symbol . In matrix form W _ _ = [ 0 − w 3 w 2 w 3 0 − w 1 − w 2 w 1 0 ] ; w _ = [ w 1 w 2 w 3 ] {\displaystyle {\underline {\underline {\boldsymbol {W}}}}={\begin{bmatrix}0&-w_{3}&w_{2}\\w_{3}&0&-w_{1}\\-w_{2}&w_{1}&0\end{bmatrix}}~;~~{\underline {\mathbf {w} }}={\begin{bmatrix}w_{1}\\w_{2}\\w_{3}\end{bmatrix}}} The axial vector is also called the infinitesimal rotation vector . The rotation vector is related to the displacement gradient by the relation w = 1 2 ∇ × u {\displaystyle \mathbf {w} ={\tfrac {1}{2}}~{\boldsymbol {\nabla }}\times \mathbf {u} } In index notation w i = 1 2 ϵ i j k u k , j {\displaystyle w_{i}={\tfrac {1}{2}}~\epsilon _{ijk}~u_{k,j}} If ‖ W ‖ ≪ 1 {\displaystyle \lVert {\boldsymbol {W}}\rVert \ll 1} and ε = 0 {\displaystyle {\boldsymbol {\varepsilon }}={\boldsymbol {0}}} then the material undergoes an approximate rigid body rotation of magnitude | w | {\displaystyle |\mathbf {w} |} around the vector w {\displaystyle \mathbf {w} } .
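A brief sketch of this decomposition (an illustration added here, assuming NumPy and an arbitrary small displacement gradient) splits the gradient into its symmetric and skew-symmetric parts and reads off the axial rotation vector from the components of W:

```python
import numpy as np

# Arbitrary small displacement gradient (illustrative values only)
grad_u = np.array([[ 0.001,  0.004, -0.002],
                   [ 0.000, -0.001,  0.003],
                   [ 0.002,  0.001,  0.002]])

eps = 0.5 * (grad_u + grad_u.T)   # infinitesimal strain tensor (symmetric part)
W   = 0.5 * (grad_u - grad_u.T)   # infinitesimal rotation tensor (skew part)

# Axial vector: w1 = W_32, w2 = W_13, w3 = W_21 (0-based indices below)
w = np.array([W[2, 1], W[0, 2], W[1, 0]])

print("strain tensor:\n", eps)
print("rotation tensor:\n", W)
print("infinitesimal rotation vector:", w)
```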
Given a continuous, single-valued displacement field u {\displaystyle \mathbf {u} } and the corresponding infinitesimal strain tensor ε {\displaystyle {\boldsymbol {\varepsilon }}} , we have (see Tensor derivative (continuum mechanics) ) ∇ × ε = e i j k ε l j , i e k ⊗ e l = 1 2 e i j k [ u l , j i + u j , l i ] e k ⊗ e l {\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {\varepsilon }}=e_{ijk}~\varepsilon _{lj,i}~\mathbf {e} _{k}\otimes \mathbf {e} _{l}={\tfrac {1}{2}}~e_{ijk}~[u_{l,ji}+u_{j,li}]~\mathbf {e} _{k}\otimes \mathbf {e} _{l}} Since a change in the order of differentiation does not change the result, u l , j i = u l , i j {\displaystyle u_{l,ji}=u_{l,ij}} . Therefore e i j k u l , j i = ( e 12 k + e 21 k ) u l , 12 + ( e 13 k + e 31 k ) u l , 13 + ( e 23 k + e 32 k ) u l , 32 = 0 {\displaystyle e_{ijk}u_{l,ji}=(e_{12k}+e_{21k})u_{l,12}+(e_{13k}+e_{31k})u_{l,13}+(e_{23k}+e_{32k})u_{l,32}=0} Also 1 2 e i j k u j , l i = ( 1 2 e i j k u j , i ) , l = ( 1 2 e k i j u j , i ) , l = w k , l {\displaystyle {\tfrac {1}{2}}~e_{ijk}~u_{j,li}=\left({\tfrac {1}{2}}~e_{ijk}~u_{j,i}\right)_{,l}=\left({\tfrac {1}{2}}~e_{kij}~u_{j,i}\right)_{,l}=w_{k,l}} Hence ∇ × ε = w k , l e k ⊗ e l = ∇ w {\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {\varepsilon }}=w_{k,l}~\mathbf {e} _{k}\otimes \mathbf {e} _{l}={\boldsymbol {\nabla }}\mathbf {w} }
From an important identity regarding the curl of a tensor we know that for a continuous, single-valued displacement field u {\displaystyle \mathbf {u} } , ∇ × ( ∇ u ) = 0 . {\displaystyle {\boldsymbol {\nabla }}\times ({\boldsymbol {\nabla }}\mathbf {u} )={\boldsymbol {0}}.} Since ∇ u = ε + W {\displaystyle {\boldsymbol {\nabla }}\mathbf {u} ={\boldsymbol {\varepsilon }}+{\boldsymbol {W}}} we have ∇ × W = − ∇ × ε = − ∇ w . {\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {W}}=-{\boldsymbol {\nabla }}\times {\boldsymbol {\varepsilon }}=-{\boldsymbol {\nabla }}\mathbf {w} .}
In cylindrical polar coordinates ( r , θ , z {\displaystyle r,\theta ,z} ), the displacement vector can be written as u = u r e r + u θ e θ + u z e z {\displaystyle \mathbf {u} =u_{r}~\mathbf {e} _{r}+u_{\theta }~\mathbf {e} _{\theta }+u_{z}~\mathbf {e} _{z}} The components of the strain tensor in a cylindrical coordinate system are given by: [ 2 ] ε r r = ∂ u r ∂ r ε θ θ = 1 r ( ∂ u θ ∂ θ + u r ) ε z z = ∂ u z ∂ z ε r θ = 1 2 ( 1 r ∂ u r ∂ θ + ∂ u θ ∂ r − u θ r ) ε θ z = 1 2 ( ∂ u θ ∂ z + 1 r ∂ u z ∂ θ ) ε z r = 1 2 ( ∂ u r ∂ z + ∂ u z ∂ r ) {\displaystyle {\begin{aligned}\varepsilon _{rr}&={\cfrac {\partial u_{r}}{\partial r}}\\\varepsilon _{\theta \theta }&={\cfrac {1}{r}}\left({\cfrac {\partial u_{\theta }}{\partial \theta }}+u_{r}\right)\\\varepsilon _{zz}&={\cfrac {\partial u_{z}}{\partial z}}\\\varepsilon _{r\theta }&={\cfrac {1}{2}}\left({\cfrac {1}{r}}{\cfrac {\partial u_{r}}{\partial \theta }}+{\cfrac {\partial u_{\theta }}{\partial r}}-{\cfrac {u_{\theta }}{r}}\right)\\\varepsilon _{\theta z}&={\cfrac {1}{2}}\left({\cfrac {\partial u_{\theta }}{\partial z}}+{\cfrac {1}{r}}{\cfrac {\partial u_{z}}{\partial \theta }}\right)\\\varepsilon _{zr}&={\cfrac {1}{2}}\left({\cfrac {\partial u_{r}}{\partial z}}+{\cfrac {\partial u_{z}}{\partial r}}\right)\end{aligned}}}
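As a small symbolic sanity check of these expressions (assuming SymPy; the displacement field is an arbitrary choice made for this illustration), a uniform radial expansion u_r = a r with u_θ = u_z = 0 should give ε_rr = ε_θθ = a and vanishing shear strains:

```python
import sympy as sp

r, theta, z, a = sp.symbols('r theta z a', positive=True)

# Illustrative displacement field: uniform radial expansion
u_r, u_t, u_z = a * r, sp.Integer(0), sp.Integer(0)

eps_rr = sp.diff(u_r, r)
eps_tt = (sp.diff(u_t, theta) + u_r) / r
eps_zz = sp.diff(u_z, z)
eps_rt = sp.Rational(1, 2) * (sp.diff(u_r, theta) / r + sp.diff(u_t, r) - u_t / r)

print(sp.simplify(eps_rr), sp.simplify(eps_tt), sp.simplify(eps_zz), sp.simplify(eps_rt))
# expected output: a a 0 0
```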
In spherical coordinates ( r , θ , ϕ {\displaystyle r,\theta ,\phi } ), the displacement vector can be written as u = u r e r + u θ e θ + u ϕ e ϕ {\displaystyle \mathbf {u} =u_{r}~\mathbf {e} _{r}+u_{\theta }~\mathbf {e} _{\theta }+u_{\phi }~\mathbf {e} _{\phi }} The components of the strain tensor in a spherical coordinate system are given by [ 2 ] ε r r = ∂ u r ∂ r ε θ θ = 1 r ( ∂ u θ ∂ θ + u r ) ε ϕ ϕ = 1 r sin θ ( ∂ u ϕ ∂ ϕ + u r sin θ + u θ cos θ ) ε r θ = 1 2 ( 1 r ∂ u r ∂ θ + ∂ u θ ∂ r − u θ r ) ε θ ϕ = 1 2 r ( 1 sin θ ∂ u θ ∂ ϕ + ∂ u ϕ ∂ θ − u ϕ cot θ ) ε ϕ r = 1 2 ( 1 r sin θ ∂ u r ∂ ϕ + ∂ u ϕ ∂ r − u ϕ r ) {\displaystyle {\begin{aligned}\varepsilon _{rr}&={\cfrac {\partial u_{r}}{\partial r}}\\\varepsilon _{\theta \theta }&={\cfrac {1}{r}}\left({\cfrac {\partial u_{\theta }}{\partial \theta }}+u_{r}\right)\\\varepsilon _{\phi \phi }&={\cfrac {1}{r\sin \theta }}\left({\cfrac {\partial u_{\phi }}{\partial \phi }}+u_{r}\sin \theta +u_{\theta }\cos \theta \right)\\\varepsilon _{r\theta }&={\cfrac {1}{2}}\left({\cfrac {1}{r}}{\cfrac {\partial u_{r}}{\partial \theta }}+{\cfrac {\partial u_{\theta }}{\partial r}}-{\cfrac {u_{\theta }}{r}}\right)\\\varepsilon _{\theta \phi }&={\cfrac {1}{2r}}\left({\cfrac {1}{\sin \theta }}{\cfrac {\partial u_{\theta }}{\partial \phi }}+{\cfrac {\partial u_{\phi }}{\partial \theta }}-u_{\phi }\cot \theta \right)\\\varepsilon _{\phi r}&={\cfrac {1}{2}}\left({\cfrac {1}{r\sin \theta }}{\cfrac {\partial u_{r}}{\partial \phi }}+{\cfrac {\partial u_{\phi }}{\partial r}}-{\cfrac {u_{\phi }}{r}}\right)\end{aligned}}} | https://en.wikipedia.org/wiki/Infinitesimal_strain_theory |
In mathematics , an infinitesimal transformation is a limiting form of small transformation . For example one may talk about an infinitesimal rotation of a rigid body , in three-dimensional space. This is conventionally represented by a 3×3 skew-symmetric matrix A . It is not the matrix of an actual rotation in space; but for small real values of a parameter ε the transformation {\displaystyle T=I+\varepsilon A} is a small rotation, up to quantities of order ε².
A comprehensive theory of infinitesimal transformations was first given by Sophus Lie . This was at the heart of his work, on what are now called Lie groups and their accompanying Lie algebras ; and the identification of their role in geometry and especially the theory of differential equations . The properties of an abstract Lie algebra are exactly those definitive of infinitesimal transformations, just as the axioms of group theory embody symmetry . The term "Lie algebra" was introduced in 1934 by Hermann Weyl , for what had until then been known as the algebra of infinitesimal transformations of a Lie group.
For example, in the case of infinitesimal rotations, the Lie algebra structure is that provided by the cross product , once a skew-symmetric matrix has been identified with a 3- vector . This amounts to choosing an axis vector for the rotations; the defining Jacobi identity is a well-known property of cross products.
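A short numerical sketch (added here for illustration; it assumes NumPy and SciPy and uses arbitrary vectors) shows both facts: the commutator of two skew-symmetric matrices corresponds to the cross product of their axis vectors, and exponentiating a skew-symmetric matrix produces an actual rotation matrix:

```python
import numpy as np
from scipy.linalg import expm

def hat(v):
    """Map a 3-vector to the corresponding 3x3 skew-symmetric matrix."""
    return np.array([[ 0.0,  -v[2],  v[1]],
                     [ v[2],  0.0,  -v[0]],
                     [-v[1],  v[0],  0.0]])

a = np.array([0.3, -0.1, 0.7])
b = np.array([0.2,  0.5, -0.4])

# Commutator of skew matrices corresponds to the cross product of axis vectors
comm = hat(a) @ hat(b) - hat(b) @ hat(a)
print(np.allclose(comm, hat(np.cross(a, b))))          # True

# Exponentiating a skew-symmetric matrix yields an orthogonal matrix with det 1
R = expm(hat(a))
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```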
The earliest example of an infinitesimal transformation that may have been recognised as such was in Euler's theorem on homogeneous functions . Here it is stated that a function F of n variables x 1 , ..., x n that is homogeneous of degree r , satisfies {\displaystyle \Theta F=rF} with {\displaystyle \Theta =x_{1}{\frac {\partial }{\partial x_{1}}}+\cdots +x_{n}{\frac {\partial }{\partial x_{n}}},} the Theta operator . That is, from the property {\displaystyle F(\lambda x_{1},\dots ,\lambda x_{n})=\lambda ^{r}F(x_{1},\dots ,x_{n})}
it is possible to differentiate with respect to λ and then set λ equal to 1. This then becomes a necessary condition on a smooth function F to have the homogeneity property; it is also sufficient (by using Schwartz distributions one can reduce the mathematical analysis considerations here). This setting is typical, in that there is a one-parameter group of scalings operating; and the information is coded in an infinitesimal transformation that is a first-order differential operator .
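The theorem is easy to check symbolically. The sketch below (an illustration added here, assuming SymPy; the particular function is an arbitrary homogeneous polynomial of degree 3) verifies both ΘF = rF and the derivation by differentiating with respect to λ at λ = 1:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', positive=True)

# An arbitrary function homogeneous of degree r = 3
F = x**3 + 5 * x * y**2

# Theta operator applied to F
theta_F = x * sp.diff(F, x) + y * sp.diff(F, y)
print(sp.simplify(theta_F - 3 * F))                       # 0, so Theta F = 3 F

# Differentiate F(lambda*x, lambda*y) = lambda^3 F(x, y) with respect to lambda at 1
lhs = sp.diff(F.subs({x: lam * x, y: lam * y}), lam).subs(lam, 1)
print(sp.simplify(lhs - 3 * F))                           # 0
```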
The operator equation {\displaystyle e^{tD}f(x)=f(x+t),} where {\displaystyle D={\frac {d}{dx}},}
is an operator version of Taylor's theorem — and is therefore only valid under caveats about f being an analytic function . Concentrating on the operator part, it shows that D is an infinitesimal transformation, generating translations of the real line via the exponential . In Lie's theory, this is generalised a long way. Any connected Lie group can be built up by means of its infinitesimal generators (a basis for the Lie algebra of the group); with explicit if not always useful information given in the Baker–Campbell–Hausdorff formula . | https://en.wikipedia.org/wiki/Infinitesimal_transformation |
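To see the translation property concretely, the sketch below (added for illustration; plain Python, with f = sin chosen arbitrarily because its derivatives are known in closed form) sums the truncated series Σ tⁿ/n! · f⁽ⁿ⁾(x) and compares it with f(x + t):

```python
import math

def shift_by_series(nth_deriv, x, t, terms=20):
    """Approximate exp(t*D) f at x, i.e. f(x + t), by a truncated Taylor series.
    nth_deriv(n, x) must return the n-th derivative of f evaluated at x."""
    return sum(t**n / math.factorial(n) * nth_deriv(n, x) for n in range(terms))

# For f(x) = sin(x), the n-th derivative is sin(x + n*pi/2)
sin_deriv = lambda n, x: math.sin(x + n * math.pi / 2)

x0, t = 0.5, 1.2
print(shift_by_series(sin_deriv, x0, t))   # approximately sin(1.7)
print(math.sin(x0 + t))
```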
In philosophy and theology, infinity is explored in articles under headings such as the Absolute , God , and Zeno's paradoxes .
In Greek philosophy , for example in Anaximander , 'the Boundless' is the origin of all that is. He took the beginning or first principle to be an endless, unlimited primordial mass (ἄπειρον, apeiron ). The Jain metaphysics and mathematics were the first to define and delineate different "types" of infinities. [ 1 ] The work of the mathematician Georg Cantor first placed infinity into a coherent mathematical framework. Keenly aware of his departure from traditional wisdom, Cantor also presented a comprehensive historical and philosophical discussion of infinity. [ 2 ] In Christian theology, for example in the work of Duns Scotus , the infinite nature of God invokes a sense of being without constraint, rather than a sense of being unlimited in quantity.
Anaximander was an early thinker who engaged with the idea of infinity, considering it a foundational and primitive basis of reality. [ 3 ] Anaximander was the first in the Greek philosophical tradition to propose that the universe is infinite. [ 4 ]
Anaxagoras (500–428 BCE) believed that the matter in the universe has an innate capacity for infinite division. [ 5 ]
A group of thinkers of ancient Greece (later identified as the Atomists ) similarly considered matter to be made of an infinite number of structures, a view reached by imagining dividing or separating matter from itself an infinite number of times. [ 6 ]
Aristotle (384–322 BCE) is credited with founding a line of thought whose influence on subsequent thinking spanned more than a millennium, through his rejection of the idea of actual infinity . [ 7 ]
In Book 3 of his work entitled Physics , Aristotle deals with the concept of infinity in terms of his notion of actuality and of potentiality . [ 8 ] [ 9 ] [ 10 ]
... It is always possible to think of a larger number: for the number of times a magnitude can be bisected is infinite. Hence the infinite is potential, never actual; the number of parts that can be taken always surpasses any assigned number.
This is often called potential infinity; however, there are two ideas mixed up with this. One is that it is always possible to find a number of things that surpasses any given number, even if there are not actually such things. The other is that we may quantify over infinite sets without restriction. For example, ∀ n ∈ Z ( ∃ m ∈ Z [ m > n ∧ P ( m ) ] ) {\displaystyle \forall n\in \mathbb {Z} (\exists m\in \mathbb {Z} [m>n\wedge P(m)])} , which reads, "for any integer n, there exists an integer m > n such that P(m)". The second view is found in a clearer form by medieval writers such as William of Ockham :
Sed omne continuum est actualiter existens. Igitur quaelibet pars sua est vere existens in rerum natura. Sed partes continui sunt infinitae quia non tot quin plures, igitur partes infinitae sunt actualiter existentes. But every continuum is actually existent. Therefore any of its parts is really existent in nature. But the parts of the continuum are infinite because there are not so many that there are not more, and therefore the infinite parts are actually existent.
The parts are actually there, in some sense. However, in this view, no infinite magnitude can have a number, for whatever number we can imagine, there is always a larger one: "There are not so many (in number) that there are no more."
Aristotle's views on the continuum foreshadow some topological aspects of modern mathematical theories of the continuum. Aristotle's emphasis on the connectedness of the continuum may have inspired, in different ways, modern philosophers and mathematicians such as Charles Sanders Peirce, Cantor, and L. E. J. Brouwer. [ 11 ] [ 12 ]
Among the scholastics, Aquinas also argued against the idea that infinity could be in any sense complete or a totality.
Aristotle deals with infinity in the context of the prime mover , in Book 7 of the same work, the reasoning of which was later studied and commented on by Simplicius . [ 13 ]
Plotinus considered infinity in the 3rd century AD. [ 3 ]
Simplicius, [ 14 ] alive circa 490 to 560 AD, [ 15 ] thought the concept "Mind" was infinite. [ 14 ]
Augustine thought infinity to be "incomprehensible for the human mind". [ 14 ]
The Jain upanga āgama Surya Prajnapti (c. 400 BC) classifies all numbers into three sets: enumerable, innumerable, and infinite. Each of these was further subdivided into three orders: enumerable (lowest, intermediate, and highest), innumerable (nearly innumerable, truly innumerable, and innumerably innumerable), and infinite (nearly infinite, truly infinite, and infinitely infinite).
The Jains were the first to discard the idea that all infinities were the same or equal. They recognized different types of infinities: infinite in length (one dimension ), infinite in area (two dimensions), infinite in volume (three dimensions), and infinite perpetually (infinite number of dimensions).
According to Singh (1987), Joseph (2000) and Agrawal (2000), the highest enumerable number N of the Jains corresponds to the modern concept of aleph-null ℵ 0 {\displaystyle \aleph _{0}} (the cardinal number of the infinite set of integers 1, 2, ...), the smallest cardinal transfinite number . The Jains also defined a whole system of infinite cardinal numbers, of which the highest enumerable number N is the smallest.
In the Jaina work on the theory of sets , two basic types of infinite numbers are distinguished. On both physical and ontological grounds, a distinction was made between asaṃkhyāta ("countless, innumerable") and ananta ("endless, unlimited"), between rigidly bounded and loosely bounded infinities.
Galileo Galilei (February 15, 1564 – January 8, 1642 [ 16 ] ) discussed the example of comparing the square numbers {1, 4, 9, 16, ...} with the natural numbers {1, 2, 3, 4, ...}, observing that although the squares form only a part of the whole numbers, every number has exactly one square and every square exactly one root, so the two collections can be paired off one to one.
It appeared by this reasoning as though a "set" (Galileo did not use the terminology) which is naturally smaller than the "set" of which it is a part (since it does not contain all the members) is in some sense the same "size". Galileo found no way around this problem:
So far as I see we can only infer that the totality of all numbers is infinite, that the number of squares is infinite, and that the number of their roots is infinite; neither is the number of squares less than the totality of all numbers, nor the latter greater than the former; and finally the attributes "equal," "greater," and "less," are not applicable to infinite, but only to finite, quantities.
The idea that size can be measured by one-to-one correspondence is today known as Hume's principle , although Hume, like Galileo, believed the principle could not be applied to the infinite. The same concept, applied by Georg Cantor , is used in relation to infinite sets.
Famously, the ultra-empiricist Hobbes (April 5, 1588 – December 4, 1679 [ 17 ] ) tried to defend the idea of a potential infinity in light of the discovery, by Evangelista Torricelli , of a figure ( Gabriel's Horn ) whose surface area is infinite, but whose volume is finite. This motivation came too late, however, as curves having infinite length yet bounding finite areas had been known much earlier.
Locke (August 29, 1632 – October 28, 1704 [ 18 ] ) in common with most of the empiricist philosophers, also believed that we can have no proper idea of the infinite. They believed all our ideas were derived from sense data or "impressions," and since all sensory impressions are inherently finite, so too are our thoughts and ideas. Our idea of infinity is merely negative or privative.
Whatever positive ideas we have in our minds of any space, duration, or number, let them be never so great, they are still finite; but when we suppose an inexhaustible remainder, from which we remove all bounds, and wherein we allow the mind an endless progression of thought, without ever completing the idea, there we have our idea of infinity... yet when we would frame in our minds the idea of an infinite space or duration, that idea is very obscure and confused, because it is made up of two parts very different, if not inconsistent. For let a man frame in his mind an idea of any space or number, as great as he will, it is plain the mind rests and terminates in that idea; which is contrary to the idea of infinity, which consists in a supposed endless progression.
He considered that in considerations on the subject of eternity, which he classified as an infinity, humans are likely to make mistakes. [ 19 ]
Modern discussion of the infinite is now regarded as part of set theory and mathematics. Contemporary philosophers of mathematics engage with the topic of infinity and generally acknowledge its role in mathematical practice. Although set theory is now widely accepted, this was not always so. Influenced in part by L. E. J. Brouwer and by verificationism , Wittgenstein (April 26, 1889 – April 29, 1951 [ 20 ] ) made an impassioned attack upon axiomatic set theory , and upon the idea of the actual infinite, during his "middle period". [ 21 ]
Does the relation m = 2 n {\displaystyle m=2n} correlate the class of all numbers with one of its subclasses? No. It correlates any arbitrary number with another, and in that way we arrive at infinitely many pairs of classes, of which one is correlated with the other, but which are never related as class and subclass. Neither is this infinite process itself in some sense or other such a pair of classes... In the superstition that m = 2 n {\displaystyle m=2n} correlates a class with its subclass, we merely have yet another case of ambiguous grammar.
Unlike the traditional empiricists, he thought that the infinite was in some way given to sense experience .
... I can see in space the possibility of any finite experience... we recognize [the] essential infinity of space in its smallest part." "[Time] is infinite in the same sense as the three-dimensional space of sight and movement is infinite, even if in fact I can only see as far as the walls of my room.
... what is infinite about endlessness is only the endlessness itself.
The philosopher Emmanuel Levinas (January 12, 1906 – December 25, 1995 [ 22 ] ) uses infinity to designate that which cannot be defined or reduced to knowledge or power. In Levinas' magnum opus Totality and Infinity he says:
...infinity is produced in the relationship of the same with the other, and how the particular and the personal, which are unsurpassable, as it were magnetize the very field in which the production of infinity is enacted...
The idea of infinity is not an incidental notion forged by a subjectivity to reflect the case of an entity encountering on the outside nothing that limits it, overflowing every limit, and thereby infinite. The production of the infinite entity is inseparable from the idea of infinity, for it is precisely in the disproportion between the idea of infinity and the infinity of which it is the idea that this exceeding of limits is produced. The idea of infinity is the mode of being, the infinition, of infinity... All knowing qua intentionality already presupposes the idea of infinity, which is preeminently non-adequation.
Levinas also wrote a work entitled Philosophy and the Idea of Infinity , which was published during 1957. [ 23 ] | https://en.wikipedia.org/wiki/Infinity_(philosophy) |
In mathematics , infinity plus one is a concept which has a well-defined formal meaning in some number systems; it may refer to the successor ordinal ω + 1 in ordinal arithmetic , or to corresponding quantities in the hyperreal and surreal number systems. | https://en.wikipedia.org/wiki/Infinity_plus_one
Infix notation is the notation commonly used in arithmetical and logical formulae and statements. It is characterized by the placement of operators between operands —"infixed operators"—such as the plus sign in 2 + 2 .
Binary relations are often denoted by an infix symbol such as set membership a ∈ A when the set A has a for an element. In geometry , perpendicular lines a and b are denoted a ⊥ b , {\displaystyle a\perp b\ ,} and in projective geometry two points b and c are in perspective when b ⩞ c {\displaystyle b\ \doublebarwedge \ c} while they are connected by a projectivity when b ⊼ c . {\displaystyle b\ \barwedge \ c.}
Infix notation is more difficult for computers to parse than prefix notation (e.g. + 2 2) or postfix notation (e.g. 2 2 +). However, many programming languages use it because of its familiarity; it is also the notation normally used in ordinary arithmetic, e.g. 5 × 6. [ 1 ]
Infix notation may also be distinguished from function notation, where the name of a function suggests a particular operation, and its arguments are the operands. An example of such a function notation would be S(1, 3) in which the function S denotes addition ("sum"): S (1, 3) = 1 + 3 = 4 .
In infix notation, unlike in prefix or postfix notations, parentheses surrounding groups of operands and operators are necessary to indicate the intended order in which operations are to be performed. In the absence of parentheses, certain precedence rules determine the order of operations . | https://en.wikipedia.org/wiki/Infix_notation |
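One standard way to handle precedence and parentheses when parsing infix expressions is Dijkstra's shunting-yard algorithm, which converts infix to postfix form. The sketch below (added for illustration, in Python; it assumes the expression is already split into tokens and supports only the four basic left-associative operators) is a minimal version of that idea:

```python
def infix_to_postfix(tokens):
    """Convert a tokenized infix expression to postfix (reverse Polish) notation
    using the shunting-yard algorithm; handles +, -, *, / and parentheses."""
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    output, ops = [], []
    for tok in tokens:
        if tok in prec:
            # Pop operators of greater or equal precedence (left-associative)
            while ops and ops[-1] in prec and prec[ops[-1]] >= prec[tok]:
                output.append(ops.pop())
            ops.append(tok)
        elif tok == '(':
            ops.append(tok)
        elif tok == ')':
            while ops and ops[-1] != '(':
                output.append(ops.pop())
            ops.pop()  # discard the matching '('
        else:
            output.append(tok)  # operand
    while ops:
        output.append(ops.pop())
    return output

print(infix_to_postfix(['2', '+', '2']))                      # ['2', '2', '+']
print(infix_to_postfix(['(', '1', '+', '3', ')', '*', '5']))  # ['1', '3', '+', '5', '*']
```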
Inflammation (from Latin : inflammatio ) is part of the biological response of body tissues to harmful stimuli, such as pathogens , damaged cells, or irritants . [ 1 ] The five cardinal signs are heat, pain, redness, swelling, and loss of function (Latin calor , dolor , rubor , tumor , and functio laesa ).
Inflammation is a generic response, and therefore is considered a mechanism of innate immunity , whereas adaptive immunity is specific to each pathogen. [ 2 ]
Inflammation is a protective response involving immune cells , blood vessels , and molecular mediators. The function of inflammation is to eliminate the initial cause of cell injury, clear out damaged cells and tissues, and initiate tissue repair. Too little inflammation could lead to progressive tissue destruction by the harmful stimulus (e.g. bacteria) and compromise the survival of the organism. However inflammation can also have negative effects. [ 3 ] Too much inflammation, in the form of chronic inflammation, is associated with various diseases, such as hay fever , periodontal disease , atherosclerosis , and osteoarthritis .
Inflammation can be classified as acute or chronic . Acute inflammation is the initial response of the body to harmful stimuli, and is achieved by the increased movement of plasma and leukocytes (in particular granulocytes ) from the blood into the injured tissues. A series of biochemical events propagates and matures the inflammatory response, involving the local vascular system , the immune system , and various cells in the injured tissue. Prolonged inflammation, known as chronic inflammation , leads to a progressive shift in the type of cells present at the site of inflammation, such as mononuclear cells , and involves simultaneous destruction and healing of the tissue.
Inflammation has also been classified as Type 1 and Type 2 based on the type of cytokines and helper T cells (Th1 and Th2) involved. [ 4 ]
The earliest known reference for the term inflammation is around the early 15th century. The word root comes from Old French inflammation around the 14th century, which then comes from Latin inflammatio or inflammationem . Literally, the term relates to the word "flame", as the property of being "set on fire" or "to burn". [ 5 ]
The term inflammation is not a synonym for infection . Infection describes the interaction between the action of microbial invasion and the reaction of the body's inflammatory response—the two components are considered together in discussion of infection, and the word is used to imply a microbial invasive cause for the observed inflammatory reaction. Inflammation , on the other hand, describes just the body's immunovascular response, regardless of cause. But, because the two are often correlated , words ending in the suffix -itis (which means inflammation) are sometimes informally described as referring to infection: for example, the word urethritis strictly means only "urethral inflammation", but clinical health care providers usually discuss urethritis as a urethral infection because urethral microbial invasion is the most common cause of urethritis. However, the inflammation–infection distinction is crucial in situations in pathology and medical diagnosis that involve inflammation that is not driven by microbial invasion, such as cases of atherosclerosis , trauma , ischemia , and autoimmune diseases (including type III hypersensitivity ).
Causes of inflammation include biological factors such as pathogens and damaged cells, chemical factors such as irritants and toxic compounds, [ 6 ] and psychological factors such as stress.
Acute inflammation is a short-term process, usually appearing within a few minutes or hours and beginning to cease upon the removal of the injurious stimulus. [ 9 ] It involves the coordinated, local mobilization of various immune, endocrine and neurological mediators of acute inflammation. In a normal healthy response, the process becomes activated, clears the pathogen, begins repair and then ceases. [ 10 ]
Acute inflammation occurs immediately upon injury, lasting only a few days. [ 11 ] Cytokines and chemokines promote the migration of neutrophils and macrophages to the site of inflammation. [ 11 ] Pathogens, allergens, toxins, burns, and frostbite are some of the typical causes of acute inflammation. [ 11 ] Toll-like receptors (TLRs) recognize microbial pathogens. [ 11 ] Acute inflammation can be a defensive mechanism to protect tissues against injury. [ 11 ] Inflammation lasting 2–6 weeks is designated subacute inflammation. [ 11 ] [ 12 ]
Inflammation is characterized by five cardinal signs , [ 15 ] [ 16 ] the traditional names of which come from Latin: calor (heat), dolor (pain), rubor (redness), tumor (swelling), and functio laesa (loss of function).
The first four (classical signs) were described by Celsus ( c. 30 BC –38 AD). [ 18 ]
Pain is due to the release of chemicals such as bradykinin and histamine that stimulate nerve endings. [ 15 ] Acute inflammation of the lung (usually in response to pneumonia ) does not cause pain unless the inflammation involves the parietal pleura , which does have pain-sensitive nerve endings . [ 15 ] Heat and redness are due to increased blood flow at body core temperature to the inflamed site. Swelling is caused by accumulation of fluid.
The fifth sign, loss of function , is believed to have been added later by Galen , [ 19 ] Thomas Sydenham [ 20 ] or Rudolf Virchow . [ 9 ] [ 15 ] [ 16 ] Examples of loss of function include pain that inhibits mobility, severe swelling that prevents movement, having a worse sense of smell during a cold, or having difficulty breathing when bronchitis is present. [ 21 ] [ 22 ] Loss of function has multiple causes. [ 15 ]
The process of acute inflammation is initiated by resident immune cells already present in the involved tissue, mainly resident macrophages , dendritic cells , histiocytes , Kupffer cells and mast cells . These cells possess surface receptors known as pattern recognition receptors (PRRs), which recognize (i.e., bind) two subclasses of molecules: pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs). PAMPs are compounds that are associated with various pathogens , but which are distinguishable from host molecules. DAMPs are compounds that are associated with host-related injury and cell damage.
At the onset of an infection, burn, or other injuries, these cells undergo activation (one of the PRRs recognize a PAMP or DAMP) and release inflammatory mediators responsible for the clinical signs of inflammation. Vasodilation and its resulting increased blood flow causes the redness ( rubor ) and increased heat ( calor ). Increased permeability of the blood vessels results in an exudation (leakage) of plasma proteins and fluid into the tissue ( edema ), which manifests itself as swelling ( tumor ). Some of the released mediators such as bradykinin increase the sensitivity to pain ( hyperalgesia , dolor ). The mediator molecules also alter the blood vessels to permit the migration of leukocytes, mainly neutrophils and macrophages , to flow out of the blood vessels (extravasation) and into the tissue. The neutrophils migrate along a chemotactic gradient created by the local cells to reach the site of injury. [ 9 ] The loss of function ( functio laesa ) is probably the result of a neurological reflex in response to pain.
In addition to cell-derived mediators, several acellular biochemical cascade systems—consisting of preformed plasma proteins—act in parallel to initiate and propagate the inflammatory response. These include the complement system activated by bacteria and the coagulation and fibrinolysis systems activated by necrosis (e.g., burn, trauma). [ 9 ]
Acute inflammation may be regarded as the first line of defense against injury. Acute inflammatory response requires constant stimulation to be sustained. Inflammatory mediators are short-lived and are quickly degraded in the tissue. Hence, acute inflammation begins to cease once the stimulus has been removed. [ 9 ]
Chronic inflammation is inflammation that lasts for months or years. [ 12 ] Macrophages, lymphocytes , and plasma cells predominate in chronic inflammation, in contrast to the neutrophils that predominate in acute inflammation. [ 12 ] Diabetes , cardiovascular disease , allergies , and chronic obstructive pulmonary disease are examples of diseases mediated by chronic inflammation. [ 12 ] Obesity , smoking, stress and insufficient diet are some of the factors that promote chronic inflammation. [ 12 ]
Common signs and symptoms that develop during chronic inflammation include body pain, chronic fatigue and insomnia, depression, anxiety and other mood disorders, gastrointestinal complications such as constipation, diarrhea and acid reflux, weight gain or weight loss, and frequent infections. [ 12 ]
As defined, acute inflammation is an immunovascular response to inflammatory stimuli, which can include infection or trauma. [ 24 ] [ 25 ] This means acute inflammation can be broadly divided into a vascular phase that occurs first, followed by a cellular phase involving immune cells (more specifically myeloid granulocytes in the acute setting). [ 24 ] The vascular component of acute inflammation involves the movement of plasma fluid , containing important proteins such as fibrin and immunoglobulins ( antibodies ), into inflamed tissue.
Upon contact with PAMPs, tissue macrophages and mastocytes release vasoactive amines such as histamine and serotonin , as well as eicosanoids such as prostaglandin E2 and leukotriene B4 to remodel the local vasculature. [ 26 ] Macrophages and endothelial cells release nitric oxide . [ 27 ] These mediators vasodilate and permeabilize the blood vessels , which results in the net distribution of blood plasma from the vessel into the tissue space. The increased collection of fluid into the tissue causes it to swell ( edema ). [ 26 ] This exuded tissue fluid contains various antimicrobial mediators from the plasma such as complement , lysozyme , antibodies , which can immediately deal damage to microbes, and opsonise the microbes in preparation for the cellular phase. If the inflammatory stimulus is a lacerating wound, exuded platelets , coagulants , plasmin and kinins can clot the wounded area using vitamin K-dependent mechanisms [ 28 ] and provide haemostasis in the first instance. These clotting mediators also provide a structural staging framework at the inflammatory tissue site in the form of a fibrin lattice – as would construction scaffolding at a construction site – for the purpose of aiding phagocytic debridement and wound repair later on. Some of the exuded tissue fluid is also funneled by lymphatics to the regional lymph nodes, flushing bacteria along to start the recognition and attack phase of the adaptive immune system .
Acute inflammation is characterized by marked vascular changes, including vasodilation , increased permeability and increased blood flow, which are induced by the actions of various inflammatory mediators. [ 26 ] Vasodilation occurs first at the arteriole level, progressing to the capillary level, and brings about a net increase in the amount of blood present, causing the redness and heat of inflammation. Increased permeability of the vessels results in the movement of plasma into the tissues, with resultant stasis due to the increase in the concentration of the cells within blood – a condition characterized by enlarged vessels packed with cells. Stasis allows leukocytes to marginate (move) along the endothelium , a process critical to their recruitment into the tissues. Normal flowing blood prevents this, as the shearing force along the periphery of the vessels moves cells in the blood into the middle of the vessel.
The cellular component involves leukocytes , which normally reside in blood and must move into the inflamed tissue via extravasation to aid in inflammation. [ 24 ] Some act as phagocytes , ingesting bacteria, viruses, and cellular debris. Others release enzymatic granules that damage pathogenic invaders. Leukocytes also release inflammatory mediators that develop and maintain the inflammatory response. In general, acute inflammation is mediated by granulocytes , whereas chronic inflammation is mediated by mononuclear cells such as monocytes and lymphocytes .
Various leukocytes , particularly neutrophils, are critically involved in the initiation and maintenance of inflammation. These cells must be able to move to the site of injury from their usual location in the blood, therefore mechanisms exist to recruit and direct leukocytes to the appropriate place. The process of leukocyte movement from the blood to the tissues through the blood vessels is known as extravasation and can be broadly divided into a number of steps: margination and rolling along the vessel wall, firm adhesion to the endothelium , transmigration across the endothelium (diapedesis), and migration within the tissue toward the inflammatory stimulus ( chemotaxis ).
Extravasated neutrophils in the cellular phase come into contact with microbes at the inflamed tissue. Phagocytes express cell-surface endocytic pattern recognition receptors (PRRs) that have affinity and efficacy against non-specific microbe-associated molecular patterns (PAMPs). Most PAMPs that bind to endocytic PRRs and initiate phagocytosis are cell wall components, including complex carbohydrates such as mannans and β- glucans , lipopolysaccharides (LPS), peptidoglycans , and surface proteins. Endocytic PRRs on phagocytes reflect these molecular patterns, with C-type lectin receptors binding to mannans and β-glucans, and scavenger receptors binding to LPS.
Upon endocytic PRR binding, actin - myosin cytoskeletal rearrangement adjacent to the plasma membrane occurs in a way that endocytoses the plasma membrane containing the PRR-PAMP complex, and the microbe. Phosphatidylinositol and Vps34 - Vps15 - Beclin1 signalling pathways have been implicated to traffic the endocytosed phagosome to intracellular lysosomes , where fusion of the phagosome and the lysosome produces a phagolysosome. The reactive oxygen species , superoxides and hypochlorite bleach within the phagolysosomes then kill microbes inside the phagocyte.
Phagocytic efficacy can be enhanced by opsonization . Plasma derived complement C3b and antibodies that exude into the inflamed tissue during the vascular phase bind to and coat the microbial antigens. As well as endocytic PRRs, phagocytes also express opsonin receptors Fc receptor and complement receptor 1 (CR1), which bind to antibodies and C3b, respectively. The co-stimulation of endocytic PRR and opsonin receptor increases the efficacy of the phagocytic process, enhancing the lysosomal elimination of the infective agent.
Specific patterns of acute and chronic inflammation are seen during particular situations that arise in the body, such as when inflammation occurs on an epithelial surface, or pyogenic bacteria are involved.
Inflammatory abnormalities are a large group of disorders that underlie a vast variety of human diseases. The immune system is often involved with inflammatory disorders, as demonstrated in both allergic reactions and some myopathies , with many immune system disorders resulting in abnormal inflammation. Non-immune diseases with causal origins in inflammatory processes include cancer, atherosclerosis , and ischemic heart disease . [ 9 ]
Disorders associated with inflammation are discussed below, and include atherosclerosis, depression, allergic reactions, inflammatory myopathies, leukocyte defects, cancer, and the chronic inflammation that accompanies HIV infection.
Atherosclerosis, formerly considered a lipid storage disorder, is now understood as a chronic inflammatory condition involving the arterial walls. [ 33 ] Research has established a fundamental role for inflammation in mediating all stages of atherosclerosis from initiation through progression and, ultimately, the thrombotic complications from it. [ 33 ] These new findings reveal links between traditional risk factors like cholesterol levels and the underlying mechanisms of atherogenesis .
Clinical studies have shown that this emerging biology of inflammation in atherosclerosis applies directly to people. [ 33 ] For instance, elevation in markers of inflammation predicts outcomes of people with acute coronary syndromes , independently of myocardial damage. In addition, low-grade chronic inflammation, as indicated by levels of the inflammatory marker C-reactive protein , prospectively defines risk of atherosclerotic complications, thus adding to prognostic information provided by traditional risk factors, such as LDL levels. [ 34 ] [ 33 ]
Moreover, certain treatments that reduce coronary risk also limit inflammation. Notably, lipid-lowering medications such as statins have shown anti-inflammatory effects, which may contribute to their efficacy beyond just lowering LDL levels. [ 35 ] This emerging understanding of inflammation's role in atherosclerosis has had significant clinical implications, influencing both risk stratification and therapeutic strategies.
Recent developments in the treatment of atherosclerosis have focused on addressing inflammation directly. New anti-inflammatory drugs, such as monoclonal antibodies targeting IL-1β, have been studied in large clinical trials, showing promising results in reducing cardiovascular events. [ 36 ] These drugs offer a potential new avenue for treatment, particularly for patients who do not respond adequately to statins. However, concerns about long-term safety and cost remain significant barriers to widespread adoption.
Inflammatory processes can be triggered by negative cognition or their consequences, such as stress, violence, or deprivation. Negative cognition may therefore contribute to inflammation, which in turn can lead to depression. A 2019 meta-analysis found that chronic inflammation is associated with a 30% increased risk of developing major depressive disorder , supporting the link between inflammation and mental health . [ 37 ]
An allergic reaction, formally known as type 1 hypersensitivity , is the result of an inappropriate immune response triggering inflammation, vasodilation, and nerve irritation. A common example is hay fever , which is caused by a hypersensitive response by mast cells to allergens . Pre-sensitised mast cells respond by degranulating , releasing vasoactive chemicals such as histamine. These chemicals propagate an excessive inflammatory response characterised by blood vessel dilation, production of pro-inflammatory molecules, cytokine release, and recruitment of leukocytes. [ 9 ] Severe inflammatory response may mature into a systemic response known as anaphylaxis .
Inflammatory myopathies are caused by the immune system inappropriately attacking components of muscle, leading to signs of muscle inflammation. They may occur in conjunction with other immune disorders, such as systemic sclerosis , and include dermatomyositis , polymyositis , and inclusion body myositis . [ 9 ]
Due to the central role of leukocytes in the development and propagation of inflammation, defects in leukocyte functionality often result in a decreased capacity for inflammatory defense with subsequent vulnerability to infection. [ 9 ] Dysfunctional leukocytes may be unable to correctly bind to blood vessels due to surface receptor mutations, digest bacteria ( Chédiak–Higashi syndrome ), or produce microbicides ( chronic granulomatous disease ). In addition, diseases affecting the bone marrow may result in abnormal or few leukocytes.
Certain drugs or exogenous chemical compounds are known to affect inflammation. Vitamin A deficiency, for example, causes an increase in inflammatory responses, [ 38 ] and anti-inflammatory drugs work specifically by inhibiting the enzymes that produce inflammatory eicosanoids . Additionally, certain illicit drugs such as cocaine and ecstasy may exert some of their detrimental effects by activating transcription factors intimately involved with inflammation (e.g. NF-κB ). [ 39 ] [ 40 ]
Inflammation orchestrates the microenvironment around tumours, contributing to proliferation, survival and migration. [ 41 ] Cancer cells use selectins , chemokines and their receptors for invasion, migration and metastasis. [ 42 ] On the other hand, many cells of the immune system contribute to cancer immunology , suppressing cancer. [ 43 ] Molecular intersection between receptors of steroid hormones, which have important effects on cellular development, and transcription factors that play key roles in inflammation, such as NF-κB , may mediate some of the most critical effects of inflammatory stimuli on cancer cells. [ 44 ] This capacity of a mediator of inflammation to influence the effects of steroid hormones in cells is very likely to affect carcinogenesis. On the other hand, due to the modular nature of many steroid hormone receptors, this interaction may offer ways to interfere with cancer progression, through targeting of a specific protein domain in a specific cell type. Such an approach may limit side effects that are unrelated to the tumor of interest, and may help preserve vital homeostatic functions and developmental processes in the organism.
There is some evidence from 2009 to suggest that cancer-related inflammation (CRI) may lead to accumulation of random genetic alterations in cancer cells. [ 45 ]
In 1863, Rudolf Virchow hypothesized that the origin of cancer was at sites of chronic inflammation. [ 42 ] [ 46 ] As of 2012, chronic inflammation was estimated to contribute to approximately 15% to 25% of human cancers. [ 46 ] [ 47 ]
An inflammatory mediator is a messenger that acts on blood vessels and/or cells to promote an inflammatory response. [ 48 ] Inflammatory mediators that contribute to neoplasia include prostaglandins , inflammatory cytokines such as IL-1β , TNF-α , IL-6 and IL-15 and chemokines such as IL-8 and GRO-alpha . [ 49 ] [ 46 ] These inflammatory mediators, and others, orchestrate an environment that fosters proliferation and survival. [ 42 ] [ 49 ]
Inflammation also causes DNA damages due to the induction of reactive oxygen species (ROS) by various intracellular inflammatory mediators. [ 42 ] [ 49 ] [ 46 ] In addition, leukocytes and other phagocytic cells attracted to the site of inflammation induce DNA damages in proliferating cells through their generation of ROS and reactive nitrogen species (RNS). ROS and RNS are normally produced by these cells to fight infection. [ 42 ] ROS, alone, cause more than 20 types of DNA damage. [ 50 ] Oxidative DNA damages cause both mutations [ 51 ] and epigenetic alterations. [ 52 ] [ 46 ] [ 53 ] RNS also cause mutagenic DNA damages. [ 54 ]
A normal cell may undergo carcinogenesis to become a cancer cell if it is frequently subjected to DNA damage during long periods of chronic inflammation. DNA damages may cause genetic mutations due to inaccurate repair . In addition, mistakes in the DNA repair process may cause epigenetic alterations. [ 46 ] [ 49 ] [ 53 ] Mutations and epigenetic alterations that are replicated and provide a selective advantage during somatic cell proliferation may be carcinogenic.
Genome-wide analyses of human cancer tissues reveal that a single typical cancer cell may possess roughly 100 mutations in coding regions , 10–20 of which are "driver mutations" that contribute to cancer development. [ 46 ] However, chronic inflammation also causes epigenetic changes such as DNA methylations , that are often more common than mutations. Typically, several hundreds to thousands of genes are methylated in a cancer cell (see DNA methylation in cancer ). Sites of oxidative damage in chromatin can recruit complexes that contain DNA methyltransferases (DNMTs), a histone deacetylase ( SIRT1 ), and a histone methyltransferase (EZH2) , and thus induce DNA methylation. [ 46 ] [ 55 ] [ 56 ] DNA methylation of a CpG island in a promoter region may cause silencing of its downstream gene (see CpG site and regulation of transcription in cancer ). DNA repair genes, in particular, are frequently inactivated by methylation in various cancers (see hypermethylation of DNA repair genes in cancer ). A 2018 report [ 57 ] evaluated the relative importance of mutations and epigenetic alterations in progression to two different types of cancer. This report showed that epigenetic alterations were much more important than mutations in generating gastric cancers (associated with inflammation). [ 58 ] However, mutations and epigenetic alterations were of roughly equal importance in generating esophageal squamous cell cancers (associated with tobacco chemicals and acetaldehyde , a product of alcohol metabolism).
It has long been recognized that infection with HIV is characterized not only by development of profound immunodeficiency but also by sustained inflammation and immune activation. [ 59 ] [ 60 ] [ 61 ] A substantial body of evidence implicates chronic inflammation as a critical driver of immune dysfunction, premature appearance of aging-related diseases, and immune deficiency. [ 59 ] [ 62 ] Many now regard HIV infection not only as an evolving virus-induced immunodeficiency, but also as chronic inflammatory disease. [ 63 ] Even after the introduction of effective antiretroviral therapy (ART) and effective suppression of viremia in HIV-infected individuals, chronic inflammation persists. Animal studies also support the relationship between immune activation and progressive cellular immune deficiency: SIV sm infection of its natural nonhuman primate hosts, the sooty mangabey , causes high-level viral replication but limited evidence of disease. [ 64 ] [ 65 ] This lack of pathogenicity is accompanied by a lack of inflammation, immune activation and cellular proliferation. In sharp contrast, experimental SIV sm infection of rhesus macaque produces immune activation and AIDS-like disease with many parallels to human HIV infection. [ 66 ]
Delineating how CD4 T cells are depleted and how chronic inflammation and immune activation are induced lies at the heart of understanding HIV pathogenesis—one of the top priorities for HIV research by the Office of AIDS Research, National Institutes of Health . Recent studies demonstrated that caspase-1 -mediated pyroptosis , a highly inflammatory form of programmed cell death, drives CD4 T-cell depletion and inflammation by HIV. [ 67 ] [ 68 ] [ 69 ] These are the two signature events that propel HIV disease progression to AIDS . Pyroptosis appears to create a pathogenic vicious cycle in which dying CD4 T cells and other immune cells (including macrophages and neutrophils) release inflammatory signals that recruit more cells into the infected lymphoid tissues to die. The feed-forward nature of this inflammatory response produces chronic inflammation and tissue injury. [ 70 ] Identifying pyroptosis as the predominant mechanism that causes CD4 T-cell depletion and chronic inflammation, provides novel therapeutic opportunities, namely caspase-1 which controls the pyroptotic pathway. In this regard, pyroptosis of CD4 T cells and secretion of pro-inflammatory cytokines such as IL-1β and IL-18 can be blocked in HIV-infected human lymphoid tissues by addition of the caspase-1 inhibitor VX-765, [ 67 ] which has already proven to be safe and well tolerated in phase II human clinical trials. [ 71 ] These findings could propel development of an entirely new class of "anti-AIDS" therapies that act by targeting the host rather than the virus. Such agents would almost certainly be used in combination with ART. By promoting "tolerance" of the virus instead of suppressing its replication, VX-765 or related drugs may mimic the evolutionary solutions occurring in multiple monkey hosts (e.g. the sooty mangabey) infected with species-specific lentiviruses that have led to a lack of disease, no decline in CD4 T-cell counts, and no chronic inflammation.
The inflammatory response must be actively terminated when no longer needed to prevent unnecessary "bystander" damage to tissues. [ 9 ] Failure to do so results in chronic inflammation, and cellular destruction. Resolution of inflammation occurs by different mechanisms in different tissues.
Several mechanisms serve to terminate inflammation. [ 9 ] [ 72 ]
Acute inflammation normally resolves by mechanisms that have remained somewhat elusive. Emerging evidence now suggests that an active, coordinated program of resolution initiates in the first few hours after an inflammatory response begins. After entering tissues, granulocytes promote the switch of arachidonic acid –derived prostaglandins and leukotrienes to lipoxins, which initiate the termination sequence. Neutrophil recruitment thus ceases and programmed death by apoptosis is engaged. These events coincide with the biosynthesis, from omega-3 polyunsaturated fatty acids , of resolvins and protectins , which critically shorten the period of neutrophil infiltration by initiating apoptosis. As a consequence, apoptotic neutrophils undergo phagocytosis by macrophages , leading to neutrophil clearance and release of anti-inflammatory and reparative cytokines such as transforming growth factor-β1. The anti-inflammatory program ends with the departure of macrophages through the lymphatics . [ 83 ]
There is evidence for a link between inflammation and depression . [ 84 ] Inflammatory processes can be triggered by negative cognitions or their consequences, such as stress, violence, or deprivation. Thus, negative cognitions can cause inflammation that can, in turn, lead to depression. [ 85 ] [ 86 ] In addition, there is increasing evidence that inflammation can cause depression because of the increase of cytokines, setting the brain into a "sickness mode". [ 87 ]
Classical symptoms of being physically sick, such as lethargy, show a large overlap in behaviors that characterize depression. Levels of cytokines tend to increase sharply during the depressive episodes of people with bipolar disorder and drop off during remission. [ 88 ] Furthermore, it has been shown in clinical trials that anti-inflammatory medicines taken in addition to antidepressants not only significantly improves symptoms but also increases the proportion of subjects positively responding to treatment. [ 89 ] Inflammations that lead to serious depression could be caused by common infections such as those caused by a virus, bacteria or even parasites. [ 90 ]
There is evidence for a link between inflammation and delirium based on the results of a recent longitudinal study investigating CRP in COVID-19 patients. [ 91 ]
An infectious organism can escape the confines of the immediate tissue via the circulatory system or lymphatic system , where it may spread to other parts of the body. If an organism is not contained by the actions of acute inflammation, it may gain access to the lymphatic system via nearby lymph vessels . An infection of the lymph vessels is known as lymphangitis , and infection of a lymph node is known as lymphadenitis . When lymph nodes cannot destroy all pathogens, the infection spreads further. A pathogen can gain access to the bloodstream through lymphatic drainage into the circulatory system.
When inflammation overwhelms the host, systemic inflammatory response syndrome is diagnosed. When it is due to infection, the term sepsis is applied, with the term bacteremia applied specifically to bacterial sepsis and viremia to viral sepsis. Vasodilation and organ dysfunction are serious problems associated with widespread infection that may lead to septic shock and death. [ 92 ]
Inflammation also is characterized by high systemic levels of acute-phase proteins . In acute inflammation, these proteins prove beneficial; however, in chronic inflammation, they can contribute to amyloidosis . [ 9 ] These proteins include C-reactive protein , serum amyloid A , and serum amyloid P , which cause a range of systemic effects. [ 9 ]
Inflammation often affects the numbers of leukocytes present in the body.
With the discovery of interleukins (IL), the concept of systemic inflammation developed. Although the processes involved are identical to tissue inflammation, systemic inflammation is not confined to a particular tissue but involves the endothelium and other organ systems.
Chronic inflammation is widely observed in obesity . [ 93 ] [ 94 ] Obese people commonly have elevated levels of many markers of inflammation. [ 95 ] [ 96 ]
Low-grade chronic inflammation is characterized by a two- to threefold increase in the systemic concentrations of cytokines such as TNF-α, IL-6, and CRP. [ 99 ] Waist circumference correlates significantly with systemic inflammatory response. [ 100 ]
Loss of white adipose tissue reduces levels of inflammation markers. [ 93 ] As of 2017 the association of systemic inflammation with insulin resistance and type 2 diabetes , and with atherosclerosis was under preliminary research, although rigorous clinical trials had not been conducted to confirm such relationships. [ 101 ]
C-reactive protein (CRP) is generated at a higher level in obese people, and may increase the risk for cardiovascular diseases . [ 102 ]
The outcome in a particular circumstance is determined by the tissue in which the injury has occurred and the injurious agent that is causing it. The possible outcomes of inflammation are complete resolution, healing by fibrosis (scarring), abscess formation, and progression to chronic inflammation. [ 9 ]
Inflammation is usually indicated by adding the suffix " -itis " to the name of the affected organ or tissue, as in appendicitis (inflammation of the appendix) or tonsillitis (inflammation of the tonsils). However, some conditions, such as asthma and pneumonia , do not follow this convention. More examples are available at List of types of inflammation . | https://en.wikipedia.org/wiki/Inflammation