Indium(III) sulfide (Indium sesquisulfide, Indium sulfide (2:3), Indium (3+) sulfide) is the inorganic compound with the formula In 2 S 3 . It has a "rotten egg" odor characteristic of sulfur compounds, and produces hydrogen sulfide gas when reacted with mineral acids. [ 2 ] Three different structures (" polymorphs ") are known: yellow, α-In 2 S 3 has a defect cubic structure, red β-In 2 S 3 has a defect spinel , tetragonal, structure, and γ-In 2 S 3 has a layered structure. The red, β, form is considered to be the most stable form at room temperature, although the yellow form may be present depending on the method of production. In 2 S 3 is attacked by acids and by sulfide. It is slightly soluble in Na 2 S. [ 3 ] Indium sulfide was the first indium compound ever described, being reported in 1863. [ 4 ] Reich and Richter determined the existence of indium as a new element from the sulfide precipitate. In 2 S 3 features tetrahedral In(III) centers linked to four sulfido ligands. α-In 2 S 3 has a defect cubic structure. The polymorph undergoes a phase transition at 420 °C and converts to the spinel structure of β-In 2 S 3 . Another phase transition at 740 °C produces the layered γ-In 2 S 3 polymorph. [ 5 ] β-In 2 S 3 has a defect spinel structure. The sulfide anions are closely packed in layers, with octahedrally-coordinated In(III) cations present within the layers, and tetrahedrally-coordinated In(III) cations between them. A portion of the tetrahedral interstices are vacant, which leads to the defects in the spinel. [ 6 ] β-In 2 S 3 has two subtypes. In the T-In 2 S 3 subtype, the tetragonally-coordinated vacancies are in an ordered arrangement, whereas the vacancies in C-In 2 S 3 are disordered. The disordered subtype of β-In 2 S 3 shows activity for photocatalytic H 2 production with a noble metal cocatalyst, but the ordered subtype does not. [ 7 ] β-In 2 S 3 is an N-type semiconductor with an optical band gap of 2.1 eV. It has been proposed to replace the hazardous cadmium sulfide , CdS, as a buffer layer in solar cells, [ 8 ] and as an additional semiconductor to increase the performance of TiO 2 -based photovoltaics . [ 7 ] The unstable γ-In 2 S 3 polymorph has a layered structure. Indium sulfide is usually prepared by direct combination of the elements. Production from volatile complexes of indium and sulfur, for example dithiocarbamates (e.g. Et 2 In III S 2 CNEt 2 ), has been explored for vapor deposition techniques. [ 9 ] Thin films of the beta complex can be grown by chemical spray pyrolysis . Solutions of In(III) salts and organic sulfur compounds (often thiourea ) are sprayed onto preheated glass plates, where the chemicals react to form thin films of indium sulfide. [ 10 ] Changing the temperature at which the chemicals are deposited and the In:S ratio can affect the optical band gap of the film. [ 11 ] Single-walled indium sulfide nanotubes can be formed in the laboratory, by the use of two solvents (one in which the compound dissolves poorly and one in which it dissolves well). There is partial replacement of the sulfido ligands with O 2− , and the compound forms thin nanocoils, which self-assemble into arrays of nanotubes with diameters on the order of 10 nm, and walls approximately 0.6 nm thick. The process mimics protein crystallization . [ 12 ] The β-In 2 S 3 polymorph, in powdered form, can irritate eyes, skin and respiratory organs. It is toxic if swallowed, but can be handled safely under conventional laboratory conditions. 
It should be handled with gloves, and care should be taken to keep from inhaling the compound, and to keep it from contact with the eyes. [ 13 ] There is considerable interest in using In 2 S 3 to replace the semiconductor CdS (cadmium sulfide) in photoelectronic devices. β-In 2 S 3 has a tunable band gap, which makes it attractive for photovoltaic applications, [ 11 ] and it shows promise when used in conjunction with TiO 2 in solar panels, indicating that it could replace CdS in that application as well. [ 7 ] Cadmium sulfide is toxic and must be deposited with a chemical bath , [ 14 ] but indium(III) sulfide shows few adverse biological effects and can be deposited as a thin film through less hazardous methods. [ 11 ] [ 14 ] Thin films β-In 2 S 3 can be grown with varying band gaps, which make them widely applicable as photovoltaic semiconductors, especially in heterojunction solar cells . [ 11 ] Plates coated with beta-In 2 S 3 nanoparticles can be used efficiently for PEC (photoelectrochemical) water splitting. [ 15 ] A preparation of indium sulfide made with the radioactive 113 In can be used as a lung scanning agent for medical imaging . [ 16 ] It is taken up well by lung tissues, but does not accumulate there. In 2 S 3 nanoparticles luminesce in the visible spectrum. Preparing In 2 S 3 nanoparticles in the presence of other heavy metal ions creates highly efficient blue, green, and red phosphors , which can be used in projectors and instrument displays. [ 17 ]
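The optical band gap quoted above (2.1 eV for β-In 2 S 3) maps onto an absorption-edge wavelength through E = hc/λ. The short Python sketch below illustrates the conversion; the 2.1 eV figure is taken from the text, the other gap values are arbitrary examples, and the constants are standard physical constants.

```python
# Convert an optical band gap in eV to the corresponding absorption-edge
# wavelength, illustrating why a ~2.1 eV gap (beta-In2S3) absorbs visible light.

PLANCK_EV_S = 4.135667696e-15   # Planck constant, eV*s
SPEED_OF_LIGHT = 2.998e8        # speed of light, m/s

def bandgap_to_wavelength_nm(e_gap_ev: float) -> float:
    """Return the absorption-edge wavelength (nm) for a band gap in eV."""
    wavelength_m = PLANCK_EV_S * SPEED_OF_LIGHT / e_gap_ev
    return wavelength_m * 1e9

if __name__ == "__main__":
    for gap in (2.1, 2.3, 2.5):   # 2.1 eV from the text; the others are hypothetical
        print(f"{gap:.1f} eV  ->  {bandgap_to_wavelength_nm(gap):.0f} nm")
```

With these values a 2.1 eV gap corresponds to roughly 590 nm, i.e. absorption of green and shorter wavelengths, which is consistent with the red colour reported for the β polymorph.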
https://en.wikipedia.org/wiki/In2S3
Indium(III) selenide is a compound of indium and selenium . It has potential for use in photovoltaic devices and has been the subject of extensive research. The two most common phases, α and β, have a layered structure, while γ has a "defect wurtzite" structure. In all, five polymorphs are known: α, β, γ, δ, κ. [ 1 ] The α-β phase transition is accompanied by a change in electrical conductivity. [ 2 ] The band gap of γ-In 2 Se 3 is approximately 1.9 eV. The method of production influences the polymorph generated. For example, thin films of pure γ-In 2 Se 3 have been produced from trimethylindium (InMe 3 ) and hydrogen selenide via MOCVD techniques. [ 3 ] A conventional route entails heating the elements in a sealed tube: 2 In + 3 Se → In 2 Se 3 . [ 4 ] Reference: Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. ISBN 978-0-08-037941-8.
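As a rough illustration of the sealed-tube route from the elements (2 In + 3 Se → In 2 Se 3), the sketch below computes the reagent masses needed for a given amount of product. The molar masses are standard values; the 10 g batch size is an arbitrary example, not a figure from the source.

```python
# Reagent masses for the direct synthesis 2 In + 3 Se -> In2Se3.
# Batch size is an arbitrary example; molar masses are standard values (g/mol).

M_IN = 114.818   # indium
M_SE = 78.971    # selenium
M_IN2SE3 = 2 * M_IN + 3 * M_SE

def reagent_masses(target_g: float) -> tuple[float, float]:
    """Return (mass of In, mass of Se) in grams for `target_g` of In2Se3."""
    moles_product = target_g / M_IN2SE3
    return 2 * moles_product * M_IN, 3 * moles_product * M_SE

if __name__ == "__main__":
    m_in, m_se = reagent_masses(10.0)
    print(f"10.0 g In2Se3 requires {m_in:.2f} g In and {m_se:.2f} g Se")
```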
https://en.wikipedia.org/wiki/In2Se3
Indium(III) telluride ( In 2 Te 3 ) is an inorganic compound . A black solid, it is sometimes described as an intermetallic compound because it has properties that are both metal-like and salt-like. It is a semiconductor that has attracted occasional interest for its thermoelectric and photovoltaic applications, although no applications have been implemented commercially. [ 2 ] A conventional route entails heating the elements in a sealed tube: 2 In + 3 Te → In 2 Te 3 . [ 3 ] Indium(III) telluride reacts with strong acids to produce hydrogen telluride .
https://en.wikipedia.org/wiki/In2Te3
Indium(III) bromide , (indium tribromide), InBr 3 , is a chemical compound of indium and bromine . It is a Lewis acid and has been used in organic synthesis. [ 2 ] It has the same crystal structure as aluminium trichloride , with 6 coordinate indium atoms. [ 3 ] When molten it is dimeric, In 2 Br 6 , and predominantly dimeric in the gas phase. The dimer has bridging bromine atoms with a structure similar to dimeric aluminium trichloride Al 2 Cl 6 . [ 3 ] It is formed by the reaction of indium and bromine. [ 4 ] InBr 3 forms complexes with ligands , L, InBr 3 L, InBr 3 L 2 , InBr 3 L 3 . [ 3 ] Reaction with indium metal forms lower valent indium bromides, InBr 2 , In 4 Br 7 , In 2 Br 3 , In 5 Br 7 , In 7 Br 9 , indium(I) bromide . [ 5 ] [ 6 ] [ 7 ] [ 8 ] In refluxing xylene solution InBr 3 and In metal react to form InBr 2 . [ 9 ]
https://en.wikipedia.org/wiki/InBr3
Indium(III) chloride is the chemical compound with the formula In Cl 3 which forms a tetrahydrate. This salt is a white, flaky solid with applications in organic synthesis as a Lewis acid . It is also the most available soluble derivative of indium. [ 2 ] This is one of three known indium chlorides . Being a relatively electropositive metal, indium reacts quickly with chlorine to give the trichloride. Indium trichloride is very soluble and deliquescent. [ 3 ] A synthesis has been reported using an electrochemical cell in a mixed methanol - benzene solution. [ 4 ] Like AlCl 3 and TlCl 3 , InCl 3 crystallizes as a layered structure consisting of a close-packed chloride arrangement containing layers of octahedrally coordinated In(III) centers, [ 5 ] a structure akin to that seen in YCl 3 . [ 6 ] In contrast, GaCl 3 crystallizes as dimers containing Ga 2 Cl 6 . [ 6 ] Molten InCl 3 conducts electricity, [ 5 ] whereas AlCl 3 does not as it converts to the molecular dimer, Al 2 Cl 6 . [ 7 ] InCl 3 is a Lewis acid and forms complexes with donor ligands , L, InCl 3 L, InCl 3 L 2 , InCl 3 L 3 . For example, with the chloride ion it forms tetrahedral InCl 4 − , trigonal bipyramidal InCl 5 2− , and octahedral InCl 6 3− . [ 5 ] In diethyl ether solution, InCl 3 reacts with lithium hydride , LiH, to form LiInH 4 . This unstable compound decomposes below 0 °C, [ 8 ] and is reacted in situ in organic synthesis as a reducing agent [ 9 ] and to prepare tertiary amine and phosphine complexes of InH 3 . [ 10 ] Trimethylindium , InMe 3 , can be produced by reacting InCl 3 in diethyl ether solution either with the Grignard reagent Me Mg I or methyllithium , LiMe. Triethylindium can be prepared in a similar fashion but with the grignard reagent EtMgBr. [ 11 ] InCl 3 reacts with indium metal at high temperature to form the lower valent indium chlorides In 5 Cl 9 , In 2 Cl 3 and InCl. [ 5 ] Indium chloride is a Lewis acid catalyst in organic reactions such as Friedel-Crafts acylations and Diels-Alder reactions . As an example of the latter, [ 12 ] the reaction proceeds at room temperature , with 1 mole% catalyst loading in an acetonitrile -water solvent mixture. The first step is a Knoevenagel condensation between the barbituric acid and the aldehyde; the second step is a reverse electron-demand Diels-Alder reaction , which is a multicomponent reaction of N,N'-dimethyl- barbituric acid , benzaldehyde and ethyl vinyl ether . With the catalyst, the reported chemical yield is 90% and the percentage trans isomer is 70%. Without the catalyst added, the yield drops to 65% with 50% trans product.
https://en.wikipedia.org/wiki/InCl3
Indium(III) fluoride or indium trifluoride is the inorganic compound with the formula InF 3 . It is a white solid. It has a rhombohedral crystal structure very similar to that of rhodium(III) fluoride . Each In center is octahedral. It is formed by the reaction of indium(III) oxide with hydrogen fluoride or hydrofluoric acid . [ 3 ] Indium(III) fluoride is used in the synthesis of non-oxide glasses . It catalyzes the addition of trimethylsilyl cyanide (TMSCN) to aldehydes to form cyanohydrins . [ 2 ]
https://en.wikipedia.org/wiki/InF3
Indium trihydride is an inorganic compound with the chemical formula InH 3 . It has been observed in matrix isolation and laser ablation experiments. [ 2 ] [ 3 ] Gas-phase stability has been predicted. [ 4 ] The infrared spectrum was obtained in the gas phase by laser ablation of indium in the presence of hydrogen gas. [ 5 ] InH 3 is of no practical importance. A three-dimensional network polymeric structure for solid InH 3 , in which In atoms are connected by In-H-In bridging bonds, has been suggested to account for the growth of broad infrared bands when samples of InH 3 and InD 3 produced on a solid hydrogen matrix are warmed. [ 5 ] Such a structure is known for solid AlH 3 . [ 6 ] When heated above −90 °C , indium trihydride decomposes to produce indium–hydrogen alloy and elemental hydrogen . As of 2013, the only known method of synthesising indium trihydride is the autopolymerisation of indigane below −90 °C . Several compounds with In-H bonds have been reported. [ 7 ] Examples of complexes with two hydride ligands replaced by other ligands are (K + ) 3 [K((CH 3 ) 2 SiO) + 7 ][InH(CH 2 C(CH 3 ) 3 ) − 3 ] 4 [ 8 ] and HIn(−C 6 H 4 − ortho -CH 2 N(CH 3 ) 2 ) 2 . Although InH 3 is labile, adducts are known with the stoichiometry InH 3 L n ( n = 1 or 2). [ 9 ] 1:1 amine adducts are made by the reaction of Li + [InH 4 ] − (lithium tetrahydridoindate(III)) with a trialkylammonium salt. The trimethylamine complex is only stable below −30 °C or in dilute solution. The 1:1 and 1:2 complexes with tricyclohexylphosphine ( PCy 3 ) have been characterised crystallographically. The average In-H bond length is 168 pm. [ 7 ] Indium hydride is also known to form adducts with NHCs . [ 10 ]
https://en.wikipedia.org/wiki/InH3
Indium(III) iodide or indium triiodide is a chemical compound of indium and iodine with the formula InI 3 . Indium(III) iodide can be obtained by reacting indium with iodine vapor: 2 In + 3 I 2 → 2 InI 3 . [ 1 ] Indium(III) iodide can also be obtained by evaporation of a solution of indium in HI. [ 2 ] Indium(III) iodide is a pale yellow, very hygroscopic monoclinic solid (space group P2 1 /c, space group no. 14; a = 9.837 Å, b = 6.102 Å, c = 12.195 Å, β = 107.69°), [ 3 ] which melts at 210 °C to form a dark brown liquid and is highly soluble in water. Its crystals consist of dimeric molecules. [ 4 ] The yellow β form slowly converts to the red α form. [ 5 ] In the presence of water vapor, the compound reacts with oxygen at 245 °C to form indium(III) oxide iodide. [ 6 ] Distinct yellow and red forms are known. The red form undergoes a transition to the yellow form at 57 °C. The structure of the red form has not been determined by X-ray crystallography ; however, spectroscopic evidence indicates that indium may be six-coordinate. [ 7 ] The yellow form consists of In 2 I 6 dimers with four-coordinate indium centres.
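The monoclinic lattice parameters quoted above fix the unit-cell volume through the standard relation V = a·b·c·sin(β). The sketch below simply evaluates that formula for the values given in the text; it is an illustrative calculation, not a crystallographic analysis from the source.

```python
# Unit-cell volume of monoclinic InI3 from the lattice parameters in the text:
# a = 9.837 A, b = 6.102 A, c = 12.195 A, beta = 107.69 deg.
# For a monoclinic cell, V = a * b * c * sin(beta).

import math

def monoclinic_volume(a: float, b: float, c: float, beta_deg: float) -> float:
    """Return the cell volume in cubic angstroms."""
    return a * b * c * math.sin(math.radians(beta_deg))

if __name__ == "__main__":
    v = monoclinic_volume(9.837, 6.102, 12.195, 107.69)
    print(f"V = {v:.1f} A^3")   # roughly 697 cubic angstroms
```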
https://en.wikipedia.org/wiki/InI3
Indium phosphide ( InP ) is a binary semiconductor composed of indium and phosphorus . It has a face-centered cubic (" zincblende ") crystal structure , identical to that of GaAs and most of the III-V semiconductors . Indium phosphide can be prepared from the reaction of white phosphorus and indium iodide at 400 °C, [ 5 ] by direct combination of the purified elements at high temperature and pressure, or by thermal decomposition of a mixture of a trialkyl indium compound and phosphine . [ 6 ] The applications of InP fall into three main areas: it is used as the basis for optoelectronic components, [ 7 ] high-speed electronics, [ 8 ] and photovoltaics. [ 9 ] InP is used as a substrate for epitaxial optoelectronic devices based on other semiconductors, such as indium gallium arsenide . The devices include pseudomorphic heterojunction bipolar transistors that could operate at 604 GHz. [ 10 ] InP itself has a direct bandgap , making it useful for optoelectronic devices like laser diodes and photonic integrated circuits for the optical telecommunications industry, enabling wavelength-division multiplexing applications. [ 11 ] It is used in high-power and high-frequency electronics because of its superior electron velocity with respect to the more common semiconductors silicon and gallium arsenide . InP is used in lasers, sensitive photodetectors and modulators in the wavelength window typically used for telecommunications, i.e., around 1550 nm, as it is a direct-bandgap III-V compound semiconductor material. Wavelengths between about 1510 nm and 1600 nm have the lowest attenuation available on optical fibre (about 0.2 dB/km). [ 12 ] Further, O-band and C-band wavelengths supported by InP facilitate single-mode operation , reducing effects of intermodal dispersion . InP can be used in photonic integrated circuits that can generate, amplify, control and detect laser light. [ 13 ] Optical sensing applications of InP include
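To put the quoted ~0.2 dB/km fibre attenuation in context, the sketch below applies the standard decibel loss relation P_out = P_in · 10^(−α·L/10). The 0.2 dB/km value comes from the text; the link lengths are arbitrary examples, not figures from the source.

```python
# Remaining optical power after a fibre span, using the ~0.2 dB/km attenuation
# quoted for the 1510-1600 nm window. Link lengths below are arbitrary examples.

def remaining_power_fraction(alpha_db_per_km: float, length_km: float) -> float:
    """Fraction of launched power left after `length_km` of fibre."""
    return 10 ** (-alpha_db_per_km * length_km / 10)

if __name__ == "__main__":
    alpha = 0.2  # dB/km, from the text
    for length in (10, 50, 100):
        frac = remaining_power_fraction(alpha, length)
        print(f"{length:4d} km: {frac * 100:5.1f} % of launched power remains")
```

At 0.2 dB/km, roughly 63% of the launched power survives 10 km, 10% survives 50 km, and 1% survives 100 km, which is why this low-loss window is favoured for long-haul telecommunications.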
https://en.wikipedia.org/wiki/InP
inSSIDer is a Wi-Fi network scanner application for Microsoft Windows and OS X developed by MetaGeek, LLC. [ 4 ] It has received awards such as a 2008 Infoworld Bossie Award for "Best of Open Source Software in Networking", [ 5 ] but as of inSSIDer 3, it is no longer open-source. inSSIDer began as a replacement for NetStumbler , a popular Windows Wi-Fi scanner, which had not been actively developed for several years and reputedly did not work with modern 64-bit operating systems or versions of Windows higher than Windows XP . The project was inspired by Charles Putney on The Code Project .
https://en.wikipedia.org/wiki/InSSIDer
Indium(II) selenide (InSe) is an inorganic compound composed of indium and selenium. It is a III-VI layered semiconductor. The solid has a structure consisting of two-dimensional layers bonded together only by van der Waals forces . Each layer has the atoms in the order Se-In-In-Se. [ 2 ] Potential applications include field effect transistors , optoelectronics , photovoltaics , non-linear optics , strain gauges , [ 2 ] and methanol gas sensors . [ 3 ] Indium(II) selenide can be formed via a number of different methods. A method to make the bulk solid is the Bridgman/Stockbarger method, in which the elements indium and selenium are heated to over 900 °C in a sealed capsule and then slowly cooled over about a month. [ 4 ] Another method is electrodeposition from an aqueous solution of indium(I) sulfate and selenium dioxide . [ 5 ] There are three polytypes, or crystal forms: β and ε are hexagonal, with unit cells spanning two layers, while γ has a rhombohedral crystal system, with the unit cell including four layers. [ 2 ] β-Indium(II) selenide can be exfoliated into two-dimensional sheets using sticky tape. In a vacuum these form smooth layers; however, when exposed to air, the layers become corrugated because of chemisorption of air molecules. [ 6 ] Exfoliation can also take place in liquid isopropanol. [ 7 ] Indium(II) selenide is stable in ambient conditions of oxygen and water vapour, unlike many other semiconductors. [ 2 ] The properties of indium(II) selenide can be varied by altering the exact ratio of elements from 1:1, creating vacancies; an exact 1:1 ratio is hard to achieve. The resulting properties can be compensated by transition-element doping. Other elements that can be included in small concentrations are boron , [ 8 ] silver , [ 9 ] and cadmium . [ 10 ]
https://en.wikipedia.org/wiki/InSe
In natura ( Latin for "in nature") is a phrase used to describe conditions present in a non-laboratory environment, differentiating them from in vivo (experiments on live organisms in a laboratory) and ex vivo (experiments on cultivated cells isolated from multicellular organisms) conditions. [ 1 ] [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/In_natura
In ovo is Latin for "in the egg". In medical usage it refers to the growth of live virus in chicken egg embryos for vaccine development for human use, as well as an effective method for vaccination of poultry against various avian influenza viruses and coronaviruses . During the incubation period , the virus replicates in the cells that make up the chorioallantoic membrane . [ 2 ] [ 3 ] In human vaccine development, the main advantage is rapid propagation, and high yield, of viruses for vaccine production. This method is most commonly used for growth of influenza virus, in both attenuated vaccine and inactivated vaccine forms. It is recommended by the World Health Organization in managing influenza pandemics because it is high-yield and cost-effective. [ 3 ] In poultry, in ovo vaccination improves hatchability and provides efficient protection against avian influenza (AI), Newcastle disease (ND) and coronaviruses (Av-CoV). Seroconversion rates of chickens vaccinated as embryos ranged from 27% to 100% with ND vaccination and 85% to 100% for AI vaccination. The birds are protected before delivery to a commercial operation such as a farm, thus preventing the spread of avian viruses. [ 4 ] [ 5 ] In ovo vaccination is carried out by machines, which perform a number of actions to ensure good vaccination of the chick inside the egg. Benefits of in ovo vaccination include avoidance of bird stress, controlled hygienic conditions, and earlier immunity with less interference from maternal antibodies. [ 6 ] In ovo feeding is considered a potential tool to provide nutrients to the embryo as well as to modulate the performance and gut health of pre- and post-hatch chicks. [ 1 ] Depending on the purpose, in ovo injection can be classified as in ovo stimulation or in ovo feeding. [ 1 ]
https://en.wikipedia.org/wiki/In_ovo
In re Roslin Institute (Edinburgh) , 750 F.3d 1333 (Fed. Cir. 2014), [ 1 ] is a 2014 decision of the United States Court of Appeals for the Federal Circuit rejecting a patent for a cloned sheep known as "Dolly the Sheep" — the first mammal ever cloned from an adult somatic cell. [ 2 ] Dolly was cloned in 1996 by Ian Wilmut , Keith Campbell and colleagues at the Roslin Institute , part of the University of Edinburgh Scotland. [ 3 ] The cloning method Campbell and Wilmut used to create Dolly constituted a breakthrough in scientific discovery. Known as somatic cell nuclear transfer , this process involves removing the nucleus of a regular body cell and implanting that nucleus into an egg cell that has had its cell nucleus removed. A nucleus is the organelle that holds a cell's genetic material (its DNA). Campbell and Wilmut found that if the donor, somatic cell is arrested in the stage of the cell cycle where it is dormant and non-replicating (the quiescent phase) prior to nuclear transfer, the resulting fused cell will develop into an embryo . The resulting cloned animal is an exact genetic replica of the adult mammal from which the somatic cell nucleus was taken. [ 2 ] The patent application claims the cloned animal. Claim 155 is representative: The US Patent and Trademark Office ( USPTO ) rejected the claims as patent ineligible under 35 U.S.C. § 101 "because it constituted a natural phenomenon that did not possess 'markedly different characteristics than any found in nature.' " A patent on the method was allowed, but it is not involved in this case. [ 4 ] The Federal Circuit unanimously affirmed the PTO rejection of the claims in opinion by Judge Timothy Dyk joined by Judges Kimberly Ann Moore and Evan Wallach . It is "clear that naturally occurring organisms are not patentable." [ 5 ] The patent in Chakrabarty claimed a genetically engineered bacterium that was capable of breaking down various components of crude oil. The patent applicant created this non-naturally occurring bacterium by adding four plasmids to a specific strain of bacteria. The Court held that the modified bacterium was patentable because it was "new" with "markedly different characteristics from any found in nature and one having the potential for significant utility." Accordingly, discoveries that possess "markedly different characteristics from any found in nature" are eligible for patent protection, but any existing organism or newly discovered plant found in the wild is not patentable. Similarly in Association for Molecular Pathology v. Myriad Genetics, Inc. , [ 6 ] the Court held that claims on two naturally occurring, isolated genes (BRCA1 and BRCA2), which can be examined to determine whether a person is likely to develop breast cancer, were patent ineligible invalid under § 101, because the BRCA genes themselves were unpatentable products of nature. [ 7 ] It is not disputed that the donor sheep from which Dolly was cloned could not be patented, but Dolly is an exact copy of that unpatentable sheep. "Dolly's genetic identity to her donor parent renders her unpatentable." An exact copy of a preexisting animal in not patent eligible. The court added that related Supreme Court rulings "reinforce this conclusion": For example, Supreme Court decisions regarding the preemptive force of federal patent law confirm that individuals are free to copy any unpatentable article, such as a live farm animal, so long as they do not infringe a patented method of copying. Sears, Roebuck & Co. v. Stiffel Co. 
clarified that a state may not "prohibit the copying of [an] article itself or award damages for such copying" when that article is ineligible for patent protection. [ 8 ] In Sears , the question was whether the defendant, Sears Roebuck & Co., could be held liable under state law for copying a lamp design whose patent protection had expired. The Court explained that "when the patent expires the monopoly created by it expires, too, and the right to make the article—including the right to make it in precisely the shape it carried when patented—passes to the public." The Court further clarified that "[a]n unpatentable article, like an article on which the patent has expired, is in the public domain and may be made and sold by whoever chooses to do so." Roslin's claimed clones are exact genetic copies of patent ineligible subject matter. Accordingly, they are not eligible for patent protection. [ 9 ] Roslin argued that "environmental factors" lead to differences in shape, size, color, and behavior, that result from aging and the interaction of the animal with its environment. But Roslin acknowledged that any differences came about or were produced "quite independently of any effort of the patentee." As in the Funk case: "Their qualities are the work of nature. Those qualities are of course not patentable. For patents cannot issue for the discovery of the phenomena of nature." Roslin also argued that its clones are distinguishable from their original donor mammals because of differences in mitochondrial DNA , which originates from the donor egg rather than the donor nucleus. But the claims do not describe clones that have markedly different characteristics from the donor animals of which they are copies. Finally, Roslin argued that its clones are patent eligible because "they are time-delayed versions of their donor mammals, and therefore different from their original mammals," but that is always true of any copy of an original. [ 10 ] ● Professor Dan Burk finds that "the Roslin opinion is hardly a model of coherent judicial reasoning, either on its own terms or with regard to the Supreme Court's subject matter jurisprudence to that point." He insists that Dolly the cloned sheep was not something found in nature, because "genetically identical mammals are not what one finds in the wild." Rather, "mammals such as sheep propagate via sexual recombination which typically renders them not genetically identical." Furthermore, Dolly was born as an old sheep: By virtue of inheriting a mature set of somatic cell chromosomes, rather than the freshly recombined set of germ-line chromosomes that would accompany natural conception, Dolly began life with shortened telomeres . Thus, Dolly was in a genetic sense 'born old' and lived a shortened life as a result. Burk argues that the proper test of patent eligibility for such a product as Dolly the sheep is whether the claim preempts field of the described subject matter, so that "fundamental concepts and materials, on which all inventors must draw, [are] caught up in patent claims." He says that the Roslin patent would not "capture fundamental or basic science on which future invention will depend, or if it does so, there is no indication in the Roslin opinion that this informed the analysis of patentable subject matter." 
According to Burk, the patent would not be preemptive because it covers only sheep or other mammals "produced by the cloning process, a limitation that constrains the patent to the specific and novel implementation disclosed by the applicant." [ 11 ] ● Gene Quinn in IP Watchdog deplores the Roslin decision: Another nail in the coffin of innovation and a functioning patent system all because decision makers don't have enough guts to state the obvious. Being able to create something that is identical to what nature creates is an extraordinary achievement that should be celebrated, should be fostered and incentivized, and should be awarded with a patent. [ 12 ] ● Student author Miriam Swedlow argues that Roslin would not prevent the patenting of extinct animals, such as the dodo, passenger pigeon, or woolly mammoth, because the cloned animal would not be identical to a natural animal. Like the cDNA in Association for Molecular Pathology v. Myriad Genetics, Inc. , [ 6 ] large portions of the genome of an extinct animal would need to be guessed at and extrapolated from non-extinct animals, because the DNA recoverable from fossils is decayed. Therefore, the "re-created animal will not be an exact genetic copy of an animal that already exists and will have different structural characteristics than the original species." They would therefore be comparable to artificial animals such as the Oncomouse , which was patented. [ 13 ] Moreover, the "fragile nature of extinct animal DNA allows for multiple avenues of differentiation" from the original, now-extinct animal, preempting none. [ 14 ]
https://en.wikipedia.org/wiki/In_re_Roslin_Institute_(Edinburgh)
In biology and other experimental sciences, an in silico experiment is one performed on a computer or via computer simulation software. The phrase is pseudo-Latin for 'in silicon' (correct Latin : in silicio ), referring to silicon in computer chips. It was coined in 1987 as an allusion to the Latin phrases in vivo , in vitro , and in situ , which are commonly used in biology (especially systems biology ). The latter phrases refer, respectively, to experiments done in living organisms, outside living organisms, and where they are found in nature. The earliest known use of the phrase was by Christopher Langton to describe artificial life , in the announcement of a workshop on that subject at the Center for Nonlinear Studies at the Los Alamos National Laboratory in 1987. [ 1 ] [ 2 ] The expression in silico was first used to characterize biological experiments carried out entirely in a computer in 1989, in the workshop "Cellular Automata: Theory and Applications" in Los Alamos, New Mexico , by Pedro Miramontes, a mathematician from National Autonomous University of Mexico (UNAM), presenting the report " DNA and RNA Physicochemical Constraints, Cellular Automata and Molecular Evolution". The work was later presented by Miramontes as his dissertation . [ 3 ] In silico has been used in white papers written to support the creation of bacterial genome programs by the Commission of the European Community. The first referenced paper where in silico appears was written by a French team in 1991. [ 4 ] The first referenced book chapter where in silico appears was written by Hans B. Sieburg in 1990 and presented during a Summer School on Complex Systems at the Santa Fe Institute. [ 5 ] The phrase in silico originally applied only to computer simulations that modeled natural or laboratory processes (in all the natural sciences), and did not refer to calculations done by computer generically. In silico study in medicine is thought to have the potential to speed the rate of discovery while reducing the need for expensive lab work and clinical trials. One way to achieve this is by producing and screening drug candidates more effectively. In 2010, for example, using the protein docking algorithm EADock (see Protein-ligand docking ), researchers found potential inhibitors to an enzyme associated with cancer activity in silico . Fifty percent of the molecules were later shown to be active inhibitors in vitro . [ 6 ] [ 7 ] This approach differs from use of expensive high-throughput screening (HTS) robotic labs to physically test thousands of diverse compounds a day, often with an expected hit rate on the order of 1% or less, with still fewer expected to be real leads following further testing (see drug discovery ). As an example, the technique was utilized for a drug repurposing study in order to search for potential cures for COVID-19 (SARS-CoV-2). [ 8 ] Efforts have been made to establish computer models of cellular behavior. For example, in 2007 researchers developed an in silico model of tuberculosis to aid in drug discovery, with the prime benefit of its being faster than real time simulated growth rates, allowing phenomena of interest to be observed in minutes rather than months. [ 9 ] More work can be found that focus on modeling a particular cellular process such as the growth cycle of Caulobacter crescentus . [ 10 ] These efforts fall far short of an exact, fully predictive computer model of a cell's entire behavior. 
Limitations in the understanding of molecular dynamics and cell biology , as well as the absence of available computer processing power, force large simplifying assumptions that constrain the usefulness of present in silico cell models. Digital genetic sequences obtained from DNA sequencing may be stored in sequence databases , be analyzed (see Sequence analysis ), be digitally altered or be used as templates for creating new actual DNA using artificial gene synthesis . In silico computer-based modeling technologies have also been applied in:
https://en.wikipedia.org/wiki/In_silico
In silico PCR [ 1 ] refers to computational tools used to calculate theoretical polymerase chain reaction (PCR) results using a given set of primers ( probes ) to amplify DNA sequences from a sequenced genome or transcriptome . [ 2 ] [ 3 ] [ 4 ] [ 5 ] These tools are used to optimize the design of primers for target DNA or cDNA sequences. Primer optimization has two goals: efficiency and selectivity. Efficiency involves taking into account such factors as GC-content, efficiency of binding, complementarity, secondary structure , and annealing and melting point (Tm) . Primer selectivity requires that the primer pairs not fortuitously bind to random sites other than the target of interest, nor should the primer pairs bind to conserved regions of a gene family. If the selectivity is poor, a set of primers will amplify multiple products besides the target of interest. [ 6 ] The design of appropriate short or long primer pairs is only one goal of PCR product prediction. Other information provided by in silico PCR tools may include determining primer location, orientation, length of each amplicon , simulation of electrophoretic mobility, identification of open reading frames , and links to other web resources. [ 7 ] [ 8 ] [ 9 ] Many software packages are available offering differing balances of feature set, ease of use, efficiency, and cost. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] Primer-BLAST is widely used, and freely accessible from the National Center for Biotechnology Information (NCBI) website. On the other hand, FastPCR , [ 10 ] a commercial application, allows simultaneous testing of a single primer or a set of primers designed for multiplex target sequences. It performs a fast, gapless alignment to test the complementarity of the primers to the target sequences. Probable PCR products can be found for linear and circular templates using standard or inverse PCR as well as for multiplex PCR . Dicey [ 15 ] is free software that outputs in-silico PCR products from primer sets provided in a FASTA file. It is fast (through use of a genome's FM-index ) and can account for primer melting temperature and tolerated edit distances between primers and hit locations on the genome. VPCR [ 3 ] runs a dynamic simulation of multiplex PCR, allowing for an estimate of quantitative competition effects between multiple amplicons in one reaction. The UCSC Genome Browser offers isPCR , which provides graphical as well text-file output to view PCR products on more than 100 sequenced genomes. A primer may bind to many predicted sequences, but only sequences with no or few mismatches (1 or 2, depending on location and nucleotide) at the 3' end of the primer can be used for polymerase extension. The last 10-12 bases at the 3' end of a primer are sensitive to initiation of polymerase extension and general primer stability on the template binding site. The effect of a single mismatch at these last 10 bases at the 3' end of the primer depends on its position and local structure, reducing the primer binding, selectivity, and PCR efficiency. [ 7 ] [ 9 ]
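The efficiency criteria mentioned above (GC-content, melting temperature, 3'-end stability) can be illustrated with a toy primer check. The sketch below uses the simple Wallace rule (Tm ≈ 2·(A+T) + 4·(G+C)), whereas real in silico PCR tools use more sophisticated nearest-neighbour thermodynamic models; the primer and binding-site sequences are invented for illustration.

```python
# Toy illustration of primer checks that in silico PCR tools perform.
# Uses the simple Wallace rule for Tm; real tools use nearest-neighbour models.
# The primer and template sequences below are invented examples.

def gc_content(seq: str) -> float:
    """Percentage of G and C bases in the sequence."""
    seq = seq.upper()
    return 100.0 * sum(base in "GC" for base in seq) / len(seq)

def wallace_tm(seq: str) -> float:
    """Rough melting temperature: 2*(A+T) + 4*(G+C), valid only for short primers."""
    seq = seq.upper()
    at = sum(base in "AT" for base in seq)
    gc = sum(base in "GC" for base in seq)
    return 2 * at + 4 * gc

def three_prime_matches(primer: str, site: str, n: int = 10) -> int:
    """Count matches over the last `n` bases (the extension-critical 3' end)."""
    return sum(p == s for p, s in zip(primer[-n:].upper(), site[-n:].upper()))

if __name__ == "__main__":
    primer = "AGCTGGTCAACGTTAGCCAT"        # invented 20-mer
    binding_site = "AGCTGGTCAACGTTAGCCAT"  # invented perfect-match site
    print(f"GC content: {gc_content(primer):.0f} %")
    print(f"Wallace-rule Tm: {wallace_tm(primer):.0f} C")
    print(f"3'-end matches (last 10): {three_prime_matches(primer, binding_site)}/10")
```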
https://en.wikipedia.org/wiki/In_silico_PCR
An in silico clinical trial , also known as a virtual clinical trial, is an individualized computer simulation used in the development or regulatory evaluation of a medicinal product , device , or intervention. While completely simulated clinical trials are not feasible with current technology and understanding of biology, its development would be expected to have major benefits over current in vivo clinical trials, and research on it is being pursued. The term in silico indicates any use of computers in clinical trials, even if limited to management of clinical information in a database. [ 1 ] The traditional model for the development of medical treatments and devices begins with pre-clinical development . In laboratories, test-tube and other in vitro experiments establish the plausibility for the efficacy of the treatment. Then in vivo animal models , with different species, provide guidance on the efficacy and safety of the product for humans. With success in both in vitro and in vivo studies, scientist can propose that clinical trials test whether the product be made available for humans. Clinical trials are often divided into four phases. Phase 3 involves testing a large number of people. [ 2 ] When a medication fails at this stage, the financial losses can be catastrophic. [ 3 ] Predicting low-frequency side effects has been difficult, because such side effects need not become apparent until the treatment is adopted by many patients. The appearance of severe side-effects in phase three often causes development to stop, for ethical and economic reasons. [ 2 ] [ 4 ] [ 5 ] Also, in recent years many candidate drugs failed in phase 3 trials because of lack of efficacy rather than for safety reasons. [ 2 ] [ 3 ] One reason for failure is that traditional trials aim to establish efficacy and safety for most subjects, rather than for individual subjects, and so efficacy is determined by a statistic of central tendency for the trial. Traditional trials do not adapt the treatment to the covariates of subjects: Accurate computer models of a treatment and its deployment, as well as patient characteristics, are necessary precursors for the development of in silico clinical trials. [ 5 ] [ 6 ] [ 8 ] [ 9 ] In such a scenario, ‘virtual’ patients would be given a ‘virtual’ treatment, enabling observation through a computer simulation of how the candidate biomedical product performs and whether it produces the intended effect, without inducing adverse effects. Such in silico clinical trials could help to reduce, refine, and partially replace real clinical trials by: In addition, real clinical trials may indicate that a product is unsafe or ineffective, but rarely indicate why or suggest how it might be improved. As such, a product that fails during clinical trials may simply be abandoned, even if a small modification would solve the problem. This stifles innovation, decreasing the number of truly original biomedical products presented to the market every year, and at the same time increasing the cost of development. [ 12 ] Analysis through in silico clinical trials is expected to provide a better understanding of the mechanism that caused the product to fail in testing, [ 8 ] [ 13 ] and may be able to provide information that could be used to refine the product to such a degree that it could successfully complete clinical trials. In silico clinical trials would also provide significant benefits over current pre-clinical practices. 
Unlike animal models, the virtual human models can be re-used indefinitely, providing significant cost savings. Compared to trials in animals or a small sample of humans, in silico trials might more effectively predict the behaviour of the drug or device in large-scale trials, identifying side effects that were previously difficult or impossible to detect, helping to prevent unsuitable candidates from progressing to the costly phase 3 trials. [ 12 ] One relatively well-developed field of in-silico clinical trials is radiology, where the entire imaging process is digitized. [ 14 ] [ 15 ] The development has accelerated in recent years following the growth of computer capacity and more advanced simulation models, and is now at the point that virtual platforms are gaining acceptance by regulatory bodies as a complement to conventional clinical trials for new product introductions. [ 16 ] A complete framework for in-silico clinical trials in radiology needs to include the following three components: 1) A realistic patient population, which is computer simulated using software phantoms; 2) The simulated response of the imaging system; 3) Image evaluation in a systematic way by human or model observers. [ 14 ] [ 15 ] Computational phantoms for imaging in-silico trials require a high degree of realism because images will be produced and evaluated. To date, the most realistic whole-body phantoms are so-called boundary representation (BREP) phantoms, which are surface representations of segmented 3D patient data (MRI or CT). [ 17 ] The fitted surfaces allow for modelling anatomical changes or motion in addition to realistic anatomy. Existing models for generating intra-organ structures are based on mathematical modelling, patient images, or generative adversarial network (GAN) modelling of patient images. [ 16 ] [ 18 ] Models of pathologies are important for simulating clinical applications targeted on specific diseases. State-of-the-art models are based on segmented lesions with enhancements for structures above the resolution limit of the imaging system using digital pathology or physiological growth models. [ 19 ] GAN models have been used to simulate disease as well. [ 20 ] In addition to the above, models have been developed for organ and patient motion, blood flow and contrast agent perfusion. The response of the imaging system is generally simulated with Monte-Carlo or raytracing system models, benchmarked to measurements on physical phantoms. [ 21 ] [ 22 ] Medical imaging has a long history of system simulation for technology development and proprietary as well as public-domain models exist for a wide range of imaging systems. The final step of an imaging in-silico trial is evaluation and interpretation of the generated images in a systematic way. The images can be evaluated by humans in ways similar to a conventional clinical trial, but for an in-silico trial to be really effective, image interpretation as well needs to be automized. For detection and quantification tasks, so-called observer models have been thoroughly studied and validated against human observers, and a range of spatial-domain models exist in the literature. [ 23 ] Image interpretation based on deep learning and artificial intelligence (AI) is an active research field, [ 24 ] and might become a valuable aid for the radiologist to find abnormalities or to make decisions. Applying AI observers in in-silico trials is relatively straightforward as the entire image chain is digitized. 
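As a highly simplified illustration of the Monte-Carlo style of imaging-system simulation mentioned above, the sketch below samples photon path lengths through a uniform slab and compares the surviving fraction with the Beer–Lambert prediction. It is a toy model under arbitrary assumptions (monoenergetic photons, a single attenuation coefficient, no scatter), not a description of any validated in-silico trial platform.

```python
# Toy Monte Carlo transmission model: monoenergetic photons through a uniform
# slab with no scattering. The attenuation coefficient and thickness are
# arbitrary illustrative values, not parameters from any real trial platform.

import math
import random

def simulate_transmission(mu_per_cm: float, thickness_cm: float, n_photons: int) -> float:
    """Fraction of photons whose sampled free path exceeds the slab thickness."""
    survived = 0
    for _ in range(n_photons):
        path = random.expovariate(mu_per_cm)   # free path ~ Exponential(mu)
        if path > thickness_cm:
            survived += 1
    return survived / n_photons

if __name__ == "__main__":
    mu, thickness = 0.2, 5.0                    # 1/cm and cm, arbitrary values
    mc = simulate_transmission(mu, thickness, 100_000)
    analytic = math.exp(-mu * thickness)        # Beer-Lambert prediction
    print(f"Monte Carlo: {mc:.3f}   Beer-Lambert: {analytic:.3f}")
```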
This article incorporates text available under the CC BY 4.0 license.
https://en.wikipedia.org/wiki/In_silico_clinical_trials
In silico medicine (also known as " computational medicine ") is the application of in silico research to problems involving health and medicine. It is the direct use of computer simulation in the diagnosis, treatment, or prevention of a disease . More specifically, in silico medicine is characterized by modeling , simulation , and visualization of biological and medical processes in computers with the goal of simulating real biological processes in a virtual environment. [ 1 ] The term in silico was first used in 1989 at a workshop "Cellular Automata: Theory and Applications" by a mathematician from National Autonomous University of Mexico (UNAM). [ 2 ] The term in silico radiation oncology , a precursor of generic in silico medicine was coined and first introduced by G. Stamatakos in Proceedings of the IEEE in 2002. [ 3 ] The same researcher coined and introduced the more generic term in silico oncology . [ 4 ] In silico medicine is considered an extension of previous work using mathematical models of biological systems. [ 4 ] It became apparent that the techniques used to model biological systems has utility to explain and predict dynamics in the medical field. The first fields in medicine to use in silico modeling were genetics, physiology and biochemistry. The field saw a dramatic influx of data when the human genome was sequenced in the 1980s and 1990s. Concurrently the increase in available computational power allowed for modeling of complex systems that were previously impractical. [ 5 ] There are numerous reasons why in silico medicine is used. For example, in silico medical modeling can allow for early prediction of success of a compound for a medicinal purpose and elucidate potential adverse effects early in the drug discovery process. [ 6 ] In silico modeling can also provide a humane alternative to animal testing. [ 2 ] It has been purported by a company in the field, that computer-aided models will make the use of testing on living organisms obsolete. [ 7 ] The term in silico medicine is exemplified in initiatives such as the Virtual Physiological Human by the European Commission [ 8 ] and in institutes such as the VPH Institute and the INSIGNEO Institute at the University of Sheffield. The In Silico Oncology Group (ISOG) [ 9 ] at the Institute of Communication and Computer Systems, National Technical Institute of Athens (ICCS-NTUA) aims at developing clinically driven and oriented multiscale simulation models of malignant tumors (cancer) to be utilized as patient individualized decision support and treatment planning systems following completion of clinical adaptation and validation. An additional aim of the Group's research is to simulate oncological clinical trials which would otherwise be too costly or time intensive and to this direction, grid computing infrastructures have been exploited, such as the European Grid Infrastructure , to increase the performance and effectiveness of the simulations. [ 10 ] ISOG has led the development of the first technologically integrated Oncosimulator , a joint Euro-Japanese research venture. [ 11 ] In 2003, the first vaccine based solely off of genomic information was developed. The technique of developing the vaccine, termed "reverse vaccinology", used the genomic information and not the infectious bacteria itself to develop the vaccine. [ 12 ] In December 2018, the four-year PRIMAGE project was launched. 
This EU-funded Horizon 2020 project proposes a cloud-based platform to support decision making in the clinical management of malignant solid tumours, offering predictive tools to assist diagnosis, prognosis, therapies choice and treatment follow up, based on the use of novel imaging biomarkers, in-silico tumour growth simulation, advanced visualization of predictions with weighted confidence scores and machine-learning based translation of this knowledge into predictors for the most relevant, disease-specific, Clinical End Points. [ 13 ] [ 14 ] In 2020, the first CADFEM Medical Conference on the topic of in vivo, in vitro, in silico was held with the guiding theme: "The central role of in silico medicine – what it can do and what we need for its practice." [ 15 ] As modeling of human, social, behavioral, and cultural (HSBC) characteristics of patient behavior becomes more sophisticated, there is speculation that virtual patients may replace patient actors in medical school curriculum. Additionally, there are projects underway that utilize virtual cadavers, computer simulated models of human anatomy based on CT images of real people. [ 16 ]
https://en.wikipedia.org/wiki/In_silico_medicine
In situ [ a ] is a Latin phrase meaning 'in place' or 'on site', derived from in ('in') and situ ( ablative of situs , lit. ' place ' ). [ 3 ] The term typically refers to the examination or occurrence of a process within its original context, without relocation. The term is used across many disciplines to denote methods, observations, or interventions carried out in their natural or intended environment. By contrast, ex situ methods involve the removal or displacement of materials, specimens , or processes for study, preservation, or modification in a controlled setting, often at the cost of contextual integrity. The earliest known use of in situ in the English language dates back to the mid-17th century. In scientific literature , its usage increased from the late 19th century onward, initially in medicine and engineering. The natural sciences typically use in situ methods to study phenomena in their original context. In geology , field analysis of soil composition and rock formations provides direct insights into Earth's processes. Biological field research observes organisms in their natural habitats , revealing behaviors and ecological interactions that cannot be replicated in a laboratory. In chemistry and experimental physics , in situ techniques allow scientists to observe substances and reactions as they occur, capturing dynamic processes in real time. In situ methods have applications in diverse fields of applied science . In the aerospace industry , in situ inspection protocols and monitoring systems assess operational performance without disrupting functionality. Environmental science employs in situ ecosystem monitoring to collect accurate data without artificial interference. In medicine, particularly oncology , carcinoma in situ refers to early-stage cancers that remain confined to their point of origin. This classification, indicating no invasion of surrounding tissues, plays a crucial role in determining treatment plans and prognosis . Space exploration relies on in situ research methods to conduct direct observational studies and data collection on celestial bodies , avoiding the challenges of sample-return missions . In the humanities , in situ methodologies preserve contextual authenticity. Archaeology maintains the spatial relationships and environmental conditions of artifacts at excavation sites , allowing for more accurate historical interpretation. In art theory and practice, the in situ principle informs both creation and exhibition. Site-specific artworks , such as environmental sculptures or architectural installations , are designed to integrate seamlessly with their surroundings, emphasizing the relationship between artistic expression and its cultural or environmental context. The term in situ is not found in Classical Latin . Its earliest recorded use is in Late Latin during the 4th century, with the first known instance by Augustine of Hippo . The term was widely used in Medieval Latin . [ 4 ] : 1536 The term's earliest known use in the English language dates back to the mid-17th century. The Oxford English Dictionary cites the first English-language appearance of in situ in 1648 in the writings of William Molins, author of the anatomical text Myskotomia . [ 1 ] The usage of in situ in scientific literature increased from the late 19th century onward, initially in medicine and engineering, including geological surveys and petroleum extraction . 
During this period, the term described analyses conducted within the living human body or inside oil wells , among other applications. [ 4 ] : 1534 In situ entered French medical discourse by 1877 in the Journal de médecine et de chirurgie pratiques ( transl. Journal of Practical Medicine and Surgery ). [ 5 ] The compound term carcinoma in situ , referring to abnormal cells that confined to their original location without invasion of surrounding tissue , was first used in a 1932 paper by U.S. surgical pathologist Albert C. Broders . [ 6 ] [ 7 ] The concept of in situ in contemporary art emerged as a framework in the late 1960s and 1970s, referring to artworks created specifically for a particular space. [ 8 ] : 160–162 By the mid-1980s, the term was adopted in materials science , particularly in the field of heterogeneous catalysis , where a catalyst in one phase facilitates a chemical reaction in a different phase. Its usage later expanded beyond catalysis and is now applied across various disciplines within materials science. [ 4 ] : 1534 As of August 2022 [update] , the term in situ had been used in more than 910,000 scientific publications since 1874, while ex situ had appeared in over 29,000 scientific publications since 1958. [ 4 ] : 1535 In situ remains one of the most widely used and versatile Latin terms in contemporary medical discourse. [ 9 ] In astronomy , in situ measurement involves collecting data directly at or near a celestial object using spacecraft or instruments physically present at the location. [ 10 ] For example, the Parker Solar Probe conducts in situ studies of Sun's atmosphere , [ 11 ] while the Cassini–Huygens mission similarly analyzed Saturn 's magnetosphere . [ 12 ] In situ formation refers to astronomical objects that formed at their current locations without significant migration. Some theories propose that planets, such as Earth, formed in their present orbits rather than moving from elsewhere. Star clusters may form within their host galaxy, rather than being accreted from external sources. [ 13 ] [ 14 ] In cell biology , in situ techniques allow the examination of cells or tissues within their native environment, preserving their natural structure and context. These approaches contrast with techniques requiring the extraction or isolation of cellular components. One example is in situ hybridization (ISH), a technique designed to identify and localize specific nucleic acid sequences within intact cells or tissue sections. ISH employs labeled probes, which are strands of nucleic acids engineered to bind selectively to target sequences. These probes are tagged with detectable markers, such as fluorophores or radioactive isotopes , enabling visualization of the precise spatial distribution of the targeted DNA or RNA . By maintaining the structural integrity of the sample, the technique facilitates mapping of genetic material within its original cellular or tissue framework. [ 15 ] [ 16 ] In biological field research , the term in situ refers to the study of living organisms within their natural habitat . This includes collecting biological samples, conducting experiments, measuring abiotic factors, and documenting ecological or behavioral observations without relocating the subject. [ 17 ] [ 18 ] In organic chemistry , in situ refers to processes that take place within the reaction mixture without isolating intermediates . 
This approach is useful for handling unstable compounds that decompose rapidly, and enhances laboratory safety by eliminating the need to isolate potentially hazardous intermediates. In one-pot synthetic sequences, in situ work-up modifications enable multiple reaction steps to proceed within a single vessel, reducing exposure to unstable or hazardous substances, such as azide intermediates, [ b ] which may pose safety risks if isolated. [ 21 ] : 872 Another example is the Corey–Chaykovsky reagent , a sulfur ylide , is generated in situ by deprotonating sulfonium halides with a strong base . [ 22 ] [ 23 ] This approach is used because unstablized sulfur ylides are highly reactive. If isolated, the ylide could decompose or lose reactivity , making its direct generation and use in the reaction mixture more practical. [ 24 ] Analytical techniques such as nuclear magnetic resonance (NMR) spectroscopy, Raman spectroscopy , and mass spectrometry facilitate real-time monitoring of in situ reactions. These methods enable researchers to detect short-lived substances that form during a reaction, such as intermediates that might not be stable enough to isolate, and adjust conditions to improve the process—all without disturbing the reaction itself. [ 25 ] [ 26 ] [ 27 ] In electrochemistry , in situ experiments are performed under the normal operating conditions of an electrochemical cell , with the electrode maintained at a controlled potential (typically by a potentiostat ). [ 28 ] By contrast, ex situ experiments occur outside those operating conditions, usually without potential control—for example, after the electrode has been removed from the cell or left at open-circuit . Maintaining potential control in in situ measurements preserves the electrochemical environment at the electrode–electrolyte interface, ensuring that the double layer and ongoing electron-transfer reactions remain intact at a given electrode potential. [ 28 ] [ 29 ] [ 30 ] In aerospace structural health monitoring , in situ inspection involves diagnostic techniques that assess components within their operational environments, avoiding the need for disassembly or service interruptions. The nondestructive testing (NDT) methods commonly used for in situ damage detection include infrared thermography , which measures thermal emissions to identify structural anomalies but is less effective on low- emissivity materials; [ 31 ] speckle shearing interferometry ( shearography ), which analyzes surface deformation patterns but requires carefully controlled environmental conditions; [ 32 ] and ultrasonic testing , which uses sound waves to detect internal defects in composite materials but can be time-intensive for large structures. [ 33 ] Despite these individual limitations, the integration of these complementary techniques enhances overall diagnostic accuracy. [ 34 ] Another approach involves real-time monitoring using alternating current (AC) and direct current (DC) sensor arrays. These systems detect structural degradation, including matrix discontinuities, interlaminar delaminations , and fiber fractures , by analyzing variations in electrical resistance and capacitance within composite laminate structures. [ 34 ] Future space exploration and terraforming efforts may depend on in situ resource utilization , reducing reliance on Earth-based supplies. Proposed missions, such as Orion and Mars Direct , have explored this approach by leveraging locally available materials. 
The Orion space vehicle was once considered for propulsion using fuel extracted from the Moon, while Mars Direct relies on the Sabatier reaction to synthesize methane and water from atmospheric carbon dioxide and hydrogen on Mars. [ 35 ] [ 36 ] In biological engineering , in situ describes experimental treatments applied to cells or tissues while they remain intact, rather than using extracts. It also refers to assays or manipulations performed on whole tissues without disrupting their natural structure. [ 37 ] : 295–296 In biomedical engineering , in situ polymerization is used to produce protein nanogels , which serve as a versatile platform for the storage and release of therapeutic proteins . This approach has applications in cancer treatment , vaccination, diagnostics, regenerative medicine , and therapies for loss-of-function genetic diseases. [ 38 ] In construction engineering , in situ construction refers to building work carried out directly on-site using raw materials , as opposed to prefabrication , where components are manufactured off-site and assembled on-site. In situ concrete is poured at its final location, offering structural stability compared to precast construction. [ 39 ] : 117–119 In wall construction, reinforcing bars are assembled first, followed by the installation of formwork to contain the poured concrete. Once the concrete has cured, the formwork is removed, leaving the wall in place. [ 39 ] : 117 Prefabrication, by contrast, enhances efficiency by reducing on-site labor and accelerating project timelines, though it requires precise pre-planning and incurs higher manufacturing and transportation costs. [ 40 ] [ 41 ] [ 42 ] In geotechnical engineering , the term in situ describes soil in its natural, undisturbed state, [ 43 ] : 15 as opposed to fill material , which has been excavated and relocated. The differences between undisturbed soil and fill material affect how well a site can support structures, accommodate underground utilities, and manage water drainage . Proper assessment of soil conditions is necessary to prevent issues such as uneven settling, unstable foundations , and poor water infiltration . [ 44 ] [ 45 ] In computer science , in situ refers to the use of technology and user interfaces to provide continuous access to situationally relevant information across different locations and contexts. [ 46 ] [ 47 ] Examples include athletes viewing biometric data on smartwatches to improve their performance [ 48 ] or a presenter looking at tips on smart glasses to reduce their speaking rate during a speech. [ 49 ] An algorithm is said to be an in situ algorithm, or in-place algorithm , if the extra amount of memory required to execute the algorithm is O(1) , [ 50 ] that is, does not exceed a constant no matter how large the input. Typically such an algorithm operates on data objects directly in place rather than making copies of them. With big data , an in situ approach means bringing the computation to where the data is located, rather than the reverse, as in traditional RDBMS systems where data is moved to the computational space. [ 51 ] This is also known as in-situ processing . In Earth sciences , particularly in geomorphology , in situ refers to natural materials or processes occurring at their point of origin without being transported. An example is weathering , in which rocks undergo physical or chemical disintegration in place, [ 52 ] in contrast to erosion , which involves the removal and relocation of materials by agents such as wind, water, or ice.
[ 53 ] Soil formed from the weathering of underlying bedrock is an example of an in situ formation. [ 54 ] : 246 In situ measurements, such as those of soil moisture , rock stress, groundwater trends, or radiation levels, are conducted on-site to provide direct data. These measurements are often essential for validating remote sensing data, such as satellite imagery , which is widely used for large-scale environmental monitoring but may require in situ confirmation to ensure accuracy. [ 55 ] [ 56 ] [ 57 ] In oceanography , in situ observational methods involve direct measurements of oceanic conditions, typically conducted during shipboard surveys. These methods employ specialized instruments, such as the Conductivity, Temperature, and Depth (CTD) device , which records parameters such as salinity , temperature, pressure , and biogeochemical properties like oxygen saturation . [ 58 ] Historically, oceanographers used reversing thermometers , which were inverted at specific depths to trap mercury and preserve temperature readings for subsequent analysis. [ 59 ] These instruments have been largely replaced by CTD devices and expendable bathythermographs . [ 60 ] In atmospheric sciences , in situ measurements refer to observations of atmospheric properties obtained using instruments placed within the environment being studied. Aircraft, balloons, and rockets are used to carry some of these instruments, allowing for direct interaction with the air to collect data. [ 61 ] For example, radiosondes , carried aloft by weather balloons , measure atmospheric parameters such as temperature, humidity, and pressure as they ascend through the atmosphere , [ 54 ] : 396 while anemometers , typically positioned at ground level or on towers, record wind speed and direction at specific locations. [ 62 ] In contrast, remote sensing techniques, such as weather radar and satellite observations, collect atmospheric data from a distance by using electromagnetic radiation to infer properties without direct contact with the atmosphere. [ 63 ] By the mid-1980s, the term in situ was adopted in materials science , particularly in the field of heterogeneous catalysis , where a catalyst in one phase facilitates a chemical reaction in a different phase. The term later expanded beyond catalysis and is now applied across various disciplines of materials science, alongside the opposite designation ex situ . [ 4 ] : 1534 For example, in situ describes the study of a sample maintained in a steady state [ c ] condition within a controlled environment, where specific parameters such as temperature or pressure are regulated. This approach allows researchers to observe materials under conditions that replicate their functional states. Examples include a sample held at a fixed temperature inside a cryostat , an electrode material operating within an electric battery , or a specimen enclosed within a sealed container to protect it from external influences. [ 4 ] : 1532 In transmission electron microscopy (TEM) and scanning transmission electron microscopy (STEM), in situ refers to the observation of materials as they are exposed to external stimuli within the microscope, under conditions that mimic their natural environments. This enables real-time observation of material behavior at the nanoscale . 
External stimuli in in situ TEM / STEM experiments may include mechanical loading, pressure, temperature variation, electrical biasing , radiation, and environmental exposure to gases, liquids, or magnetic fields , individually or in combination. These conditions allow researchers to study atomic-level processes—such as phase transformations , chemical reactions, or mechanical deformations —thereby providing insights into material properties and behavior essential for advances in materials science. [ 64 ] [ 65 ] In medical terminology , in situ belongs to a group of two-word Latin expressions, including in vitro ('within the glass', e.g., laboratory experiments), in vivo ('within the living', e.g., experiments on living organisms ), and ex vivo ('out of the living', e.g., experiments on extracted tissues ), which facilitate communication of experimental or clinical contexts. Like abbreviations, these terms convey essential information concisely. In situ is a widely employed term in the medical field, used to describe phenomena or processes as they occur in their original location. It is applied in diverse contexts such as oncology , measurement acquisition, medical simulation , and anatomical examination. Because of its versatility across these varied applications, in situ is considered one of the most productive Latin expressions in contemporary medical discourse. [ 9 ] In oncology, in situ is commonly applied in the context of carcinoma in situ (CIS), a term describing abnormal cells confined to their original location without invasion of surrounding tissue. [ 9 ] [ 66 ] The earliest known use of the term dates back to 1932 in the writing of U.S. surgical pathologist Albert C. Broders . [ 6 ] Broders introduced both the term and the concept, and the concept of carcinoma in situ was initially controversial. [ 7 ] CIS is a critical term in early cancer diagnosis , as it signifies a non-invasive stage, allowing for more targeted interventions such as localized excision or monitoring—before potential progression to invasive cancer. [ 67 ] [ 68 ] Melanoma in situ is an early, localized form of melanoma , a type of malignant skin cancer . In this stage, the cancerous melanocytes —the pigment-producing cells that give skin its color—are confined to the epidermis , the outermost layer of the skin. The melanoma has not yet penetrated into the deeper dermal layers or metastasized to other parts of the body. [ 69 ] Beyond oncology, in situ is used in fields where maintaining natural anatomical or physiological positions is essential. [ 9 ] In orthopedic surgery , the term refers to procedures that preserve the natural alignment or position of bones or joints. For example, orthopedic plates or screws may be placed without altering the bone's original structure, as in "[the patient] was treated operatively with an in situ cannulated hip screw fixation". [ 70 ] In cardiothoracic surgery , in situ often describes techniques where blood vessels are utilized in their original anatomical position for surgical purposes. For example, the internal thoracic artery can be left attached to the subclavian artery while rerouting blood flow to bypass occluded coronary arteries and improve heart circulation. [ 71 ] [ 72 ] In organ transplantation , in situ is used to describe procedures performed within the donor's body to preserve organ viability. In situ perfusion is a technique employed during organ retrieval to restore blood flow to organs while they remain in their original location. 
This method minimizes ischemic injury and preserves organ viability for transplantation. In contrast, ex situ machine perfusion involves perfusing the organ outside the donor's body, typically after it has been removed. [ 73 ] [ 74 ] [ d ] In petroleum engineering , in situ techniques involve the application of heat or solvents to extract heavy crude oil or bitumen from reservoirs located beneath the Earth's surface . Several in situ methods exist, but those that utilize heat, particularly steam, have proven to be the most effective for oil sands extraction. The most widely used in situ technique is steam-assisted gravity drainage (SAGD). [ 75 ] This method employs two horizontal wells: the upper well injects steam to heat the bitumen, reducing its viscosity, while the lower well collects the mobilized oil for extraction. [ 76 ] SAGD has gained prominence in the Canadian province of Alberta , due to its efficiency in recovering bitumen from deep reservoirs. Approximately 80% of Alberta's oil sands deposits are located at depths that render open-pit mining impractical, making in situ techniques such as SAGD the primary method of extraction. [ 77 ] In urban planning , in situ upgrading is an approach to and method of upgrading informal settlements . [ 78 ] In archaeology , the term in situ has been used variably to describe artifacts or features discovered in a presumed original context, yet its precise definition remains contested. Scholars distinguish between a broad usage—denoting materials recovered through controlled excavation —and a stricter usage reserved for those found in undisturbed, primary depositional settings. [ 79 ] Between these poles lies a continuum of depositional scenarios, from sealed habitation floors to slope or fluvial deposits , meaning that whether an object is truly in situ depends on site-specific formation processes and the degree to which stratigraphic as well as spatial relationships can be reconstructed. [ 79 ] Recording the exact spatial coordinates , stratigraphic position, and surrounding matrix of depositional materials is necessary for understanding past human activities and historical processes. While artifacts are often removed for analysis, certain archaeological features—such as hearths , postholes , and architectural foundations —have to be thoroughly documented in place to preserve their contextual information during excavation. [ 80 ] : 121 This documentation relies on various methods, including detailed field notes, scaled technical drawings , cartographic representation, and high-resolution photographic records. Current archaeological practice incorporates advanced digital technologies, including 3D laser scanning , photogrammetry , unmanned aerial vehicles , and Geographic Information Systems (GIS), to capture complex spatial relationships. [ 81 ] Artifacts found outside their original context or ex situ , often due to natural disturbances or unrecorded excavations, have less interpretive value. However, these displaced materials can still provide clues about the spatial distribution and typological characteristics of unexcavated in situ deposits, guiding future excavation efforts. [ 82 ] [ 83 ] The Convention on the Protection of the Underwater Cultural Heritage sets mandatory guidelines for signatory states regarding the treatment of underwater shipwrecks . One of its key principles is that in situ preservation is the preferred approach. 
[ 80 ] : 558 [ † 1 ] : 13 This policy is based on the unique conditions of underwater environments, where low oxygen levels and stable temperatures help preserve artifacts over long periods. Removing artifacts from these conditions and exposing them to the atmosphere often accelerates deterioration, particularly the oxidation of iron-based materials. [ † 1 ] : 5 In mortuary archaeology , in situ documentation involves systematically recording and cataloging human remains in their original depositional positions. These remains are often embedded in complex matrices of sediment , clothing, and associated artifacts. Excavating mass graves presents additional challenges, as they may contain hundreds of individuals. Before identifying individuals or determining causes of death, archaeologists must carefully document spatial relationships and contextual details to preserve forensic and historical information. [ 84 ] The concept of in situ in contemporary art emerged as a framework in the late 1960s and 1970s, referring to artworks created specifically for a particular space. These works integrate the site's physical, historical, political, and sociological characteristics as essential elements of their composition. [ 8 ] : 160–162 This approach contrasts with autonomous artistic production, where artworks are independent of their eventual display locations. [ 85 ] Theoretical discussions, particularly in the writings and practice of French conceptual artist and sculptor Daniel Buren , have emphasized the dynamic relationship between artistic intervention and its surrounding environment. [ 8 ] : 161 The site-specific installations of Christo and Jeanne-Claude exemplify the application of in situ principles in art. Their large-scale interventions such as The Pont Neuf Wrapped (1985) and Wrapped Reichstag (1995) involved the systematic wrapping of buildings and landscape elements in fabric, temporarily transforming familiar spaces and altering public perception. The concept of in situ art further evolved with the land art movement, wherein artists such as Robert Smithson and Michael Heizer integrated their works directly into natural landscapes and created an inseparable connection between the artwork and its environment. [ 85 ] In contemporary aesthetic discourse, in situ has expanded into a broader theoretical construct, describing artistic practices that reinforce the fundamental unity between a work and its site. [ 8 ] : 160–161 In legal contexts, in situ is often used for its literal sense, meaning 'in its original place'. In Hong Kong , in-situ land exchange refers to a mechanism whereby landowners can swap their existing or expired land leases for new grants covering the same land parcel. This approach facilitates redevelopment —such as modernizing buildings or increasing land usage density—in a crowded, land-scarce environment without displacing ownership from the original location. The Hong Kong government, through the Development Bureau and Lands Department , has implemented arrangements to expedite lease modifications and land exchanges. [ 86 ] : 283–285 [ † 2 ] [ † 3 ] In public international law , the term in situ is used to distinguish between a government that exercises effective control over a state's territory and population and a government-in-exile , which operates from outside its national borders. A government in situ is the de facto governing authority, while a government-in-exile may still claim legitimacy despite lacking territorial control. 
The recognition of a government generally depends on its ability to maintain authority over its state, though exceptions exist, particularly when a government-in-exile is displaced due to unlawful foreign occupation . [ 87 ] : 115–117 [ 88 ] : 2 In linguistics , particularly in syntax , an element is described as in situ when it is pronounced in the same position where it receives its semantic interpretation . This concept is especially relevant in the analysis of wh- questions across languages. For example, in Mandarin Chinese and Kurdish , wh-elements remain in situ , producing structures analogous to "John bought what?" where the interrogative word occupies the same syntactic position as the direct object would in a declarative sentence ("John bought bread"). [ 89 ] [ 90 ] By contrast, languages like English and French typically employ wh-movement , where the interrogative element is displaced from its base position to the beginning of the clause, as in "What did John buy?" Here, the wh-word what has moved from its original post-verbal position to the sentence-initial position, leaving behind a trace or gap in the object position. This typological distinction between in situ wh-elements and moved wh-elements represents one of the fundamental parameters of variation in natural language syntax and has been extensively studied within generative grammar frameworks. In economics , in situ storage refers to the practice of retaining a product, usually a natural resource , in its original location rather than extracting and storing it elsewhere. This method avoids direct out-of-pocket costs , such as those for transportation or storage facilities, with the primary expense being the opportunity cost of delaying potential revenue . It applies to resources like oil and gas left unextracted in wells, minerals and gemstones remaining underground, and timber left standing until extraction is economically favorable. Certain agricultural products, such as hay , can be stored in situ under suitable conditions. [ 91 ] : 54 [ 92 ] : 35 In psychology , in situ typically refers to studies conducted in a natural or real-world setting, as opposed to a controlled laboratory environment. This approach allows researchers to observe and measure psychological processes as they occur, increasing ecological validity —though often at the expense of experimental control over variables. [ 93 ] : 84–85 In gastronomy , in situ refers to the art of cooking with the different resources that are available at the site of the event. Rather than the diner going to a restaurant, the restaurant comes to the diner's home. [ 94 ] In situ leaching or in situ recovery refers to the mining technique of injecting lixiviant underground to dissolve ore and bringing the pregnant leach solution to the surface for extraction. It is commonly used in uranium mining but has also been applied to copper mining. [ 95 ]
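As an illustration of the in-place (in situ) algorithm described in the computer-science passage above, the following Python sketch reverses a list by swapping elements from both ends toward the middle, using only a constant amount of extra memory (two index variables) regardless of input size. The function and variable names are illustrative only and are not taken from any cited source.

    def reverse_in_place(items):
        """Reverse a list in situ: O(1) extra memory, no copy of the data is made."""
        left, right = 0, len(items) - 1
        while left < right:
            # Swap the two ends and move the indices toward the middle.
            items[left], items[right] = items[right], items[left]
            left += 1
            right -= 1
        return items

    # Example: the list object is modified in place rather than copied.
    data = [1, 2, 3, 4, 5]
    reverse_in_place(data)
    print(data)  # [5, 4, 3, 2, 1]

By contrast, an out-of-place version such as items[::-1] allocates a second list whose size grows with the input, which is exactly what an in situ algorithm avoids.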
https://en.wikipedia.org/wiki/In_situ
Bioremediation is the process of decontaminating polluted sites through the use of either endogenous or external microorganisms . [ 1 ] In situ is a term utilized within a variety of fields meaning "on site" and refers to the location of an event. [ 2 ] Within the context of bioremediation, in situ indicates that the bioremediation has occurred at the site of contamination, without the translocation of the polluted materials. Bioremediation is used to neutralize pollutants, including hydrocarbons , chlorinated compounds, nitrates, and toxic metals, through a variety of chemical mechanisms. [ 1 ] Microorganisms used in the process of bioremediation can either be implanted or cultivated within the site through the application of fertilizers and other nutrients. Common polluted sites targeted by bioremediation are groundwater/aquifers and polluted soils. Aquatic ecosystems affected by oil spills have also shown improvement through the application of bioremediation. [ 3 ] The most notable cases are the Deepwater Horizon oil spill in 2010 [ 4 ] and the Exxon Valdez oil spill in 1989. [ 5 ] Two variations of bioremediation exist, defined by the location where the process occurs. Ex situ bioremediation occurs at a location separate from the contaminated site and involves the translocation of the contaminated material. In situ bioremediation occurs within the site of contamination. [ 1 ] In situ bioremediation can further be categorized by the metabolism occurring, aerobic and anaerobic , and by the level of human involvement. The Sun Oil pipeline spill in Ambler, Pennsylvania, spurred the first commercial usage of in situ bioremediation in 1972 to remove hydrocarbons from contaminated sites. [ 6 ] A patent was filed in 1974 by Richard Raymond, Reclamation of Hydrocarbon Contaminated Ground Waters, which provided the basis for the commercialization of in situ bioremediation. [ 6 ] Accelerated in situ bioremediation occurs when a specified microorganism is targeted for growth through the application of either nutrients or an electron donor to the contaminated site. Within aerobic metabolism, the nutrient added to the soil can be solely oxygen. Anaerobic in situ bioremediation often requires a variety of electron donors or acceptors such as benzoate and lactate. [ 7 ] Besides nutrients, microorganisms can also be introduced directly to the site within accelerated in situ bioremediation. [ 8 ] The addition of extraneous microorganisms to a site is termed bioaugmentation and is used when a particular microorganism is effective at degrading the pollutant at the site and is not found either naturally or at a high enough population to be effective. [ 7 ] Accelerated in situ bioremediation is utilized when the desired population of microorganisms within a site is not naturally present at a sufficient level to effectively degrade the pollutants. It also is used when the required nutrients within the site are either not at a concentration sufficient to support growth or are unavailable. [ 7 ] The Raymond Process is a type of accelerated in situ bioremediation that was developed by Richard Raymond and involves the introduction of nutrients and electron acceptors to a contaminated site. [ 9 ] This process is primarily used to treat polluted groundwater. In the Raymond process a loop system is created. Contaminated groundwater from downstream of the groundwater flow is pumped to the surface and infused with nutrients and an electron acceptor, often oxygen.
This treated water is then pumped back down below the water table upstream of where it was originally taken. This process introduces nutrients and electron acceptors into the site, allowing for the growth of a determined microbial population. [ 9 ] In contaminated sites where the desired microbial metabolism is aerobic, the introduction of oxygen to the site can be used to increase the population of targeted microorganisms. [ 10 ] The injection of oxygen can occur through a variety of processes. Oxygen can be injected into the subsurface through injection wells. It can also be introduced through an injection gallery. The presence of oxygen within a site is often the limiting factor when determining the time frame and efficacy of a proposed in situ bioremediation process. Ozone injected into the subsurface can also be a means of introducing oxygen into a contaminated site. [ 10 ] Despite being a strong oxidizing agent and potentially having a toxic effect on subsurface microbial populations, ozone can be an efficient means of spreading oxygen throughout a site due to its high solubility. [ 10 ] Within twenty minutes after being injected into the subsurface, fifty percent of the ozone will have decomposed to oxygen. [ 10 ] Ozone is commonly introduced to the soil in either a dissolved or gaseous state. [ 10 ] Within accelerated anaerobic in situ bioremediation, electron donors and acceptors are introduced into a contaminated site in order to increase the population of anaerobic microorganisms. [ 9 ] Monitored natural attenuation is in situ bioremediation that occurs with little to no human intervention. [ 11 ] This process relies on the natural microbial populations sustained within the contaminated sites to reduce the contaminants to a desired level over time. [ 11 ] During monitored natural attenuation, the site is monitored in order to track the progress of the bioremediation. [ 11 ] Monitored natural attenuation is used in sites where the source of contamination is no longer present, often after other more active types of in situ bioremediation have been conducted. [ 11 ] Naturally occurring within the soil are microbial populations that utilize hydrocarbons as a source of energy and carbon. [ 9 ] Upwards of twenty percent of microbial soil populations have the ability to metabolize hydrocarbons. [ 9 ] Through either accelerated bioremediation or monitored natural attenuation, these populations can be utilized to neutralize hydrocarbon pollutants within the soil. The metabolic mode of hydrocarbon remediation is primarily aerobic. [ 9 ] The end products of the remediation for hydrocarbons are carbon dioxide and water. [ 9 ] Hydrocarbons vary in ease of degradation based on their structure. Long-chain aliphatic hydrocarbons are the most effectively degraded. Short-chained, branched, and quaternary aliphatic hydrocarbons are less effectively degraded. [ 9 ] Alkene degradation is dependent on the saturation of the chain, with saturated alkenes being more readily degraded. [ 9 ] Large numbers of microbes with the ability to metabolize aromatic hydrocarbons are present within the soil. Aromatic hydrocarbons are also susceptible to being degraded through anaerobic metabolism. [ 9 ] Hydrocarbon metabolism is an important facet of in situ bioremediation due to the severity of petroleum spills around the world. The susceptibility of polynuclear aromatic hydrocarbons to degradation is related to the number of aromatic rings within the compound.
[ 9 ] Compounds with two or three rings are degraded at an effective rate, but compounds possessing four or more rings can be more resilient to bioremediation efforts. [ 9 ] Degradation of polynuclear aromatic hydrocarbons with fewer than four rings is accomplished by various aerobic microbes present in the soil. Meanwhile, for larger-molecular-sized compounds, the only metabolic mode that has been shown to be effective is cometabolism . [ 9 ] The fungal genus Phanerochaete includes species with the ability to metabolize some polynuclear aromatic hydrocarbons under anaerobic conditions, utilizing a peroxidase enzyme. [ 9 ] [ 12 ] A variety of metabolic modes capable of degrading chlorinated aliphatic compounds exist. Anaerobic reduction, oxidation of the compound, and cometabolism under aerobic conditions are the three main metabolic modes utilized by microorganisms to degrade chlorinated aliphatic compounds. [ 9 ] Organisms that can readily metabolize chlorinated aliphatic compounds are not common in the environment. [ 9 ] One- and two-carbon compounds with little chlorination are the most effectively metabolized by soil microbial populations. [ 9 ] The degradation of chlorinated aliphatic compounds is most often performed through cometabolism. [ 9 ] Chlorinated aromatic hydrocarbons are resistant to bioremediation, and many microorganisms lack the ability to degrade the compounds. Chlorinated aromatic hydrocarbons are most often degraded through a process of reductive dechlorination under anaerobic conditions. [ 9 ] Polychlorinated biphenyls (PCBs) are primarily degraded through cometabolism. Some fungi can degrade these compounds as well. Studies show an increase in PCB degradation when biphenyl is added to the site, due to the cometabolic effects that the enzymes used to degrade biphenyl have on PCBs. [ 9 ] Because in situ bioremediation takes place at the site of contamination, there is a lessened risk of cross-contamination compared with ex situ bioremediation, where the polluted material is transported to other sites. In situ bioremediation can also have lower costs and a higher rate of decontamination than ex situ bioremediation.
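The statement above that aerobic hydrocarbon remediation yields carbon dioxide and water as end products can be summarized by the general stoichiometry of complete mineralization. The equations below are a textbook generalization rather than a scheme taken from the cited sources, with hexadecane shown as a worked instance of a long-chain aliphatic hydrocarbon.

    \mathrm{C}_x\mathrm{H}_y + \left(x + \tfrac{y}{4}\right)\mathrm{O}_2 \;\longrightarrow\; x\,\mathrm{CO}_2 + \tfrac{y}{2}\,\mathrm{H}_2\mathrm{O}
    \mathrm{C_{16}H_{34} + 24.5\,O_2 \;\longrightarrow\; 16\,CO_2 + 17\,H_2O} \qquad \text{(hexadecane)}

In practice only a fraction of the carbon is fully mineralized, with the remainder incorporated into microbial biomass, so these equations give an upper bound on the oxygen demand of aerobic remediation.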
https://en.wikipedia.org/wiki/In_situ_bioremediation
In-Situ Capping (ISC) of Subaqueous Waste is a non-removal remediation technique for contaminated sediment that involves leaving the waste in place and isolating it from the environment by placing a layer of soil and/or material over the contaminated waste so as to prevent further spread of the contaminant . In-situ capping provides a viable way to remediate an area that is contaminated. It is an option when pump and treat becomes too expensive and the area surrounding the site is a low-energy system. The design of the cap and the characterization of the surrounding areas are of equal importance and drive the feasibility of the entire project. Numerous successful cases exist and more will exist in the future as the technology expands and grows more popular. [ 1 ] [ 2 ] In-situ capping uses techniques developed in chemistry , biology , geotechnical engineering , environmental engineering , and environmental geotechnical engineering . Contaminants located in sediments still pose a risk to the environment and human health. Some of the direct effects on aquatic life that can be associated with contaminated sediment include "the development of cancerous tumors in fish exposed to polycyclic aromatic hydrocarbons in sediments." [ 1 ] These high-risk sediments need to be remediated. There are usually only four options for remediation: The cap can be made up of many different things, including but not limited to sand, gravel, geotextiles , and multiple layers of these options. [ 1 ] There are many ways that a contaminant inside sediment can be introduced to the environment. These ways include but are not limited to advection , diffusion, benthic organisms' mixing and reworking of the upper layer of the contaminated sediment, and sediment re-suspension by different subaquatic forces. [ 1 ] In-situ capping (ISC) can fix all of these adverse effects with three primary functions: A fourth, although not necessary, function of an in-situ cap should be the "encouragement of habitat values." This should not be made a primary goal except in extreme circumstances. This can be achieved by altering the superficial characteristics of the cap to "encourage desirable species or discourage undesirable species." [ 3 ] The obvious advantage of using in-situ capping is that the waste will not be disturbed, and it avoids the further contamination of the surrounding area that can result from moving the contaminant during removal. However, the long-term effects of ISC have not been studied, since it is an emerging technology. [ 2 ] In-situ capping has been effective in numerous locations. For example, in several places in the interior of Japan "in-situ capping of nutrient-laden sediments with sand" has worked very well in preserving water quality by reducing "the release of nutrients (nitrogen and phosphorus)" and oxygen depletion by bottom sediments. [ 4 ] It is very important to evaluate the site and goals of a specific project to determine if ISC is the right technique to use. First, it is important to find out if ISC will satisfy all of the desired remedial objectives. To determine if ISC will satisfy the remedial objectives, it is important to look at the three primary functions previously listed for ISC. For the first function it is important to realize that "the ability of an ISC to isolate aquatic organisms from the sediment contaminants is dependent upon" whether new contaminated sediment is deposited on top of the cap.
If contaminated sediment is deposited back on top of the cap, then the cap merely separates two contaminated layers. Thus, "ISC should only be considered if source control has been implemented." Stabilization of the contaminated sediment could be a design function if the goal of remediation is to prevent negative environmental impacts due to "resuspension, transport and redeposition" of the contaminated sediments to other remote areas. Furthermore, if a remedial objective is desired, then the purpose of the ISC could be to isolate the contaminated soil from the surrounding environment, thus controlling the environment of the contaminated soil and allowing possible degradation of the contaminant. [ 1 ] On-site evaluation to see if ISC is a good remediation technique is based on several criteria: the surrounding physical environment, current and long-term hydrodynamic conditions, the geotechnical and geological conditions, hydrogeological conditions, on-site sediment characterization, and current and long-term waterway uses. [ 2 ] Many of the physical properties of the surrounding area where the cap would be placed are important. Some things to consider when constructing a cap would be "waterway dimensions, water depths, tidal patterns, ice formations, aquatic vegetation, bridge crossings and proximity of lands or marine structures". [ 1 ] It is best if the area surrounding the ISC is flat for ease of installation. The hydrodynamic conditions are of equal importance. It is best if in-situ capping projects are performed in low-energy waterways such as harbors, low-flow streams, or estuaries. [ 2 ] High-energy and high-flow environments can affect the long-term stability of the cap and cause possible erosion over time. Currents are also important. Currents vary along a water column, and placement of the ISC can be negatively affected by changing currents. It is important to take into consideration the long-term impacts of episodic events such as tidal flow on bottom current velocities. Modeling must be done to determine if placement of the in-situ cap will alter existing hydrodynamic conditions. [ 1 ] A study of the geotechnical and geological conditions must be made before the placement of the in-situ cap because of potential settling underneath the cap. If settling is predicted to be significant, the cap may have to be designed thicker than originally projected so that settling does not compromise its integrity. [ 1 ] Hydrogeological conditions are important to consider before placement. It is important to locate areas of discharge, which are areas where the groundwater flow path has an upward component. [ 5 ] This discharge can cause the in-situ cap to become displaced or cause contaminants to be transported to the surface water, thus causing decreased effectiveness of the in-situ cap. [ 1 ] Typical sediment characterization is needed before construction and design of the ISC can be implemented. These tests on the sediments include: "visual classification, natural water content/solids concentrations, plasticity indices ( Atterberg limits ), total organic carbon (TOC) content, grain size distribution, specific gravity , and Unified Soil Classification System (USCS)". [ 1 ] It is important to realize what the current waterway uses are and how they may be affected by the placement of an in-situ cap.
Some waterway uses that may be affected by the construction of an in-situ cap include but are not limited to "navigation, flood control, recreation, water supply, storm water or effluent discharge, waterfront development, and utility crossing." Since the construction of an in-situ cap may limit some of these activities due to the importance that the cap's integrity be maintained over an extended period of time, any use that may cause displacement of the cap should be limited. Furthermore, the construction of an in-situ cap will cause a drop in water depth, thus limiting the size of ships that may cross the area. These limitations on the waterway may also have social and economic impacts that must be considered. [ 1 ] It is important to know all of the regulatory standards in place for the desired location of the ISC. All ISC must comply with the requirements in the Resource Conservation and Recovery Act (RCRA) and the Toxic Substances Control Act (TSCA), although the ability of in-situ capping to meet those standards in the long term has not been adequately researched due to a lack of data. Cap design, which includes the composition and dimensions of the components, is probably the most important aspect of in-situ capping. The cap designs "must be compatible with available construction and placement techniques" along with meeting the three previously mentioned criteria. Caps are usually designed over small areas with small volumes of contaminants. The cap is usually constructed with many layers of granular media, armor stone, and geotextiles. Presently, laboratory tests and models of the various processes involved (advection, diffusion, bioturbation, consolidation, erosion), limited field experience, and monitoring data drive cap design. Since data and field experience are limited, a conservative approach is used when designing an in-situ cap. This approach uses the idea that the many different components are additive and no cap component provides a dual function, although a component may provide a dual function in actual practice; a sketch of this additive sizing appears after this paragraph. [ 1 ] The six general steps for in-situ cap design, provided by Palermo et al. are listed below: [ 1 ] Candidate materials should be identified at the beginning of the project because they typically represent the largest cost to the project. Thus, if the materials needed cost too much, the project may not be feasible at all. [ 1 ] Granular materials are used in most cases. These can include but are not limited to "quarry sand, naturally occurring sediments or soil materials". [ 1 ] Studies have shown that fine-grained materials and sandy materials can be effective in the construction of an in-situ cap. [ 6 ] Furthermore, fine-grained materials have been shown to act as better chemical barriers than sand caps. [ 7 ] Thus a fine-grained material is a better capping component than factory-washed sand. It is important to control the amount of organic material within the cap because benthic organisms tend to burrow within any unconsolidated fine-grained sediments containing organic matter. [ 1 ] Increased levels of organic matter in sands have been shown to retard the transport of hydrophobic organic contaminants through the cap and to encourage degradation of the contaminant. [ 3 ] Thus a careful balance of organics is necessary.
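A minimal sketch of the conservative, additive design philosophy described above, in which each cap component is sized for a single function and the component thicknesses are simply summed. The layer names and the numbers in the example are hypothetical placeholders for illustration only, not design values from the cited guidance.

    def total_cap_thickness(layers):
        """Sum single-function layer thicknesses (in cm), per the additive design approach."""
        return sum(layers.values())

    # Hypothetical component thicknesses, in centimeters.
    cap_layers = {
        "chemical_isolation": 30,        # granular medium sized for contaminant flux control
        "bioturbation_sacrificial": 10,  # layer assumed to be fully mixed by benthic organisms
        "consolidation_allowance": 15,   # extra thickness to offset predicted settling
        "erosion_armor": 20,             # armor stone sized for the design (e.g., 100-year) event
    }
    print(total_cap_thickness(cap_layers), "cm")  # 75 cm

Because no component is credited with a dual function, this additive budget tends to overestimate the required thickness, which is the intended conservatism when field data are scarce.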
Geomembranes can serve numerous purposes in a cap design, including "provide a bioturbation barrier; stabilize the cap; reduce contaminant flux; prevent mixing of cap materials with underlying sediments; promote uniform consolidation, and; reduce erosion of capping materials". [ 1 ] Geomembranes have been used for stabilization, along with granular media, in two ISC projects: one constructed at the Sheboygan River and one in Eitrheim Bay, Norway. [ 8 ] Although geomembranes seem to have great benefits, the problem of uplift and ballooning has arisen, and little research has gone into assessing what causes the geomembranes to lift off the surface. [ 9 ] Further research is needed to determine the overall effectiveness of geosynthetics for chemical isolation. [ 1 ] Armoring stone , which is any stone that is used to "shield" the rest of the in-situ cap, can be used for resistance to erosion and should be considered in cap design. [ 1 ] [ 3 ] The long-term ability of the cap to perform depends primarily on its ability to withstand external forces, mostly hydraulic forces. [ 3 ] There are three basic approaches that may be used to achieve long-term cap stability: Bioturbation is defined as the disturbance and mixing of sediments by benthic organisms. Many aquatic organisms live on or in the bottom sediments and can greatly increase the "migration of sediment contaminants through the direct movement of sediment particles, increasing the surface area of sediments exposed to the water column, and as a food for epibenthic or pelagic organism grazing on the benthos." The depth of bioturbation in marine environments is greater than that in freshwater environments. To prevent and reduce the impact of bioturbation on the cap, the cap should be designed with a sacrificial layer, typically only a few centimeters thick (5–10 cm). This layer is assumed to be completely mixed with the environment and should prevent benthic organisms from descending further into the in-situ cap. The thickness of the sacrificial layer should be based on a study of the local organisms and their behavior in the surrounding sediment near the area of the cap construction, since some benthic organisms have been known to burrow at depths of 1 m or more. The presence of armor stone has been known to limit colonization by deep-burrowing benthic creatures. Another method of preventing benthic organisms from destroying the integrity of the cap design is to pick a granular medium that the local benthic organisms find unattractive and are not known to readily colonize, thus limiting the chance that benthic organisms will grow on the cap. The consolidation of the in-situ cap must be considered, provided that the selected material for the cap "is fine-grained granular material ." The consolidation of the underlying material should be taken into account due to "advection of pore water upward into the cap during consolidation." [ 1 ] Erosion should be carefully considered. To determine the level of protection against erosion it is important to look at "the potential severity of the environmental impacts associated with cap erosion and potential dispersion of the sediment contaminants in an extreme event" (such as a 100-year event). An under-designed in-situ cap could be compromised by erosion, resulting in the release of contaminants. An over-designed cap would result in extremely high costs.
[ 1 ] [ 3 ] Since the construction of the cap will directly affect the ability of the in-situ cap to perform, it is important to plan carefully. It is important to note that "many contaminated sediment sites exhibit exceedingly soft sediments that can be easily disturbed, may be dislocated or destabilized by uneven placement, and may have insufficient load bearing capacity to support some cap materials." [ 3 ] There are two basic ways to construct an in-situ cap: Fredette et al. outline five steps for the development of a physical/biological monitoring program for ISC projects: [ 10 ] Thus it is important that a monitoring program be put into place at the onset of construction. A short-term monitoring program should be used to monitor the in-situ cap during construction and immediately following construction. This monitoring program should include frequent testing so that real-time data is provided to allow quick adjustments to the overall cap design. A long-term monitoring program should be established to provide data about the overall effectiveness of the cap design and to make sure the cap meets all applicable regulations and is not excessively eroded. This long-term monitoring need only be conducted on a yearly to bi-yearly basis unless a problem is discovered; then more frequent testing will be required. [ 1 ] During monitoring, it is important to schedule routine maintenance. This may include placement of material equal to the predicted amount of material removed due to erosion. [ 1 ] Although ISC is a relatively new remediation procedure, several groups have used it with great success. In Massena, New York , at the General Motors Superfund site , PCB -contaminated soils were dredged repeatedly but some areas still had high levels of contaminant (>10 ppm). These areas, covering approximately 75,000 square feet (7,000 m 2 ), were capped with a three-layer ISC composed of 6 inches of sand, 6 inches of gravel, and 6 inches of armor stone. [ 1 ] In the Manistique River , Michigan , PCB-contaminated sediments were capped with a 40 mm thick plastic liner over an area of 20,000 square feet (1,900 m 2 ) with varying depths of up to 15 ft. [ 1 ] In the Sheboygan River, Wisconsin, PCB-contaminated sediments were capped with a sand layer and an armor stone layer. This was done in shallow regions where direct placement was possible. [ 11 ] In Cold Spring, New York , in the Hudson River , sediment was contaminated with cadmium and nickel from a battery manufacturing facility. A geosynthetic clay liner (GCL) and a 12-inch covering of sandy loam were placed on top of the contaminated area. [ 2 ] In Elkton, Maryland , contaminated sediment was discovered with excess amounts of volatile organic compounds and dense non-aqueous phase liquids , resulting in severe discharge. The cap system constructed over the contaminated waste involved a geotextile working mat, a GCL, a scrim-reinforced polypropylene liner, a geotextile cushion, and a gabion mat. [ 2 ] There are four major areas of research that currently need to be assessed:
https://en.wikipedia.org/wiki/In_situ_capping_of_subaqueous_waste
In situ chemical reduction (ISCR) is a type of environmental remediation technique used for soil and/or groundwater remediation to reduce the concentrations of targeted environmental contaminants to acceptable levels. It is the mirror process of In Situ Chemical Oxidation (ISCO). ISCR is usually applied in the environment by injecting chemically reductive additives in liquid form into the contaminated area or placing a solid medium of chemical reductants in the path of a contaminant plume. [ 1 ] It can be used to remediate a variety of organic compounds, including some that are resistant to natural degradation. The in situ in ISCR is Latin for "in place", signifying that ISCR is a chemical reduction reaction that occurs at the site of the contamination. Like ISCO, it is able to decontaminate many compounds, and, in theory, ISCR could be more effective in groundwater remediation than ISCO. Chemical reduction is one half of a redox reaction, which results in the gain of electrons. One of the reactants in the reaction becomes oxidized, or loses electrons, while the other reactant becomes reduced, or gains electrons. In ISCR, reducing compounds, compounds that donate electrons to other compounds in a reaction, are used to change the contaminants into harmless compounds. Early work examined dechlorination with copper. Substrates included DDT , endrin , chloroform , and hexachlorocyclopentadiene . Aluminum and magnesium behave similarly in the laboratory. Groundwater treatment most generally focuses on the use of iron. [ 2 ] Zero-valent metals are the main reductants used in ISCR. The most common metal used is iron, in the form of ZVI (zero-valent iron), and it is also the metal longest in use. However, some studies show that zero-valent zinc (ZVZ) could be up to ten times more effective at eradicating the contaminants than ZVI. [ 3 ] Some applications of ZVMs are to clean up trichloroethylene (TCE) and hexavalent chromium (Cr(VI)). [ 4 ] ZVMs are usually implemented by a permeable reactive barrier . For example, iron that has been embedded in a swellable, organically modified silica creates a permanent soft barrier underground to capture and reduce small, organic compounds as groundwater passes through it. [ 5 ] Iron minerals can be active for dechlorination. These minerals use Fe 2+ . Particular minerals that can be used include green rust , magnetite , pyrite , and glauconite . [ 6 ] The most reactive of the iron minerals are the iron sulfides and oxides . Pyrite, an iron sulfide, is able to dechlorinate carbon tetrachloride in suspension. [ 2 ] Polysulfides are compounds that have chains of sulfur atoms. This reactant has been tested in the field in treating TCE and in comparison to EHC. The use of polysulfides is a type of abiotic reduction and works best under anaerobic conditions where iron(III) is available. The benefit of using polysulfides is that they do not produce any biological waste products; however, the reaction rates are slow and they require more time to create the DVI (divalent iron) minerals that are needed for the reduction to occur. [ 7 ] Dithionite ( S 2 O 4 2− ) can also be used as a reductant. It is usually used in addition to iron to reduce contaminants. A number of reactions take place and eventually the contaminant is removed. In the process, dithionite is consumed and the final product of all the reactions is two sulfur dioxide anions. The dithionite is not stable for a long period of time.
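To make the redox bookkeeping described above concrete, the two half-reactions for zero-valent iron acting on a generic chlorinated contaminant R–Cl can be written as follows. This is a standard textbook formulation of reductive dechlorination rather than a scheme reproduced from the cited references.

    \mathrm{Fe^{0} \;\longrightarrow\; Fe^{2+} + 2\,e^{-}} \qquad \text{(oxidation of the reductant)}
    \mathrm{R{-}Cl + H^{+} + 2\,e^{-} \;\longrightarrow\; R{-}H + Cl^{-}} \qquad \text{(reduction of the contaminant)}

The iron supplies the electrons that the chlorinated compound gains, which is why an electron-donating (reducing) medium placed in the plume's path can transform the contaminant as groundwater passes through.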
Bimetallic materials are materials that are made out of two different metals or alloys that are tightly bonded together. A good example of a bimetallic material would be a bimetallic strip, which is used in some kinds of thermometers. In ISCR, bimetallic materials are small pieces of metals that are coated lightly with a catalyst such as palladium, silver, or platinum. The catalyst drives a faster reaction and the small size of the particles allows them to effectively move into and remain in the target zone. [ 8 ] One proprietary material for ISCR is the EHC technology created by Adventus. This particular product is actually a mixture of carbon, nutrients, and zero-valent iron. The theory behind this product is that the carbon in the mixture will promote bacterial growth in the subsurface. The growing bacteria consume the oxygen present in the subsurface, a ready electron acceptor, which increases the reducing potential. The growing bacteria also ferment and produce fatty acids that act as electron donors to other bacteria and substances. Adventus uses this combination of biotic and abiotic processes to implement ISCR. EHC is injected as a "slurry" (a mixture that is 15 to 40% solids by weight, with the rest being liquid) into the substratum. [ 9 ] Another material worth mentioning is EZVI (emulsified ZVI), which is a NASA technology. EZVI is used mainly to treat halogenated hydrocarbons and DNAPLs . EZVI is nanoscale iron that is placed into a biodegradable oil emulsion . The emulsion is then injected into the substratum. [ 10 ] In ISCR, many reductive processes can take place. These include hydrogenolysis , β-elimination, hydrogenation , α-elimination, and electron transfer . The specific combination of reductive processes that actually take place in the subsurface depends on the species of contaminant that is present and also the type of reduction being used. The natural and biological processes that take place in the substratum also affect the kinds of reductive processes that are found. [ 6 ] The reactions that occur with permeable reactive barriers and ferrous iron are surface based. The surface reactions take three different forms: direct reduction, electron shunting through ferrous iron, and reduction by production and reaction of hydrogen. Pathway A represents direct electron transfer (ET) from Fe 0 to the adsorbed halocarbon (RX) at the metal/water point of contact, resulting in dechlorination and production of Fe 2+ . Pathway B shows that Fe 2+ (resulting from corrosion of Fe 0 ) may also dechlorinate RX, producing Fe 3+ . Pathway C shows that H 2 from the anaerobic corrosion of Fe 0 might react with RX if a catalyst is present. The reductive processes discussed above can be enhanced in two ways. One is by increasing the amount of usable iron in the subsurface to increase the rate of the reduction by chemical or biological means. The second method is to enhance the reducing ability of the iron by coupling it with other chemical reductants or using biological reduction with it. Using these processes, scientists combined sodium dithionite with iron to treat chromium(VI) and TCE effectively. [ 2 ] Combining bacterial action and biological processes with iron is also known to be effective. The most evident uses of biological processes are with the EZVI technology created by NASA and with the EHC product created by Adventus.
Both of these materials have iron within some biological matrix (iron is suspended in vegetable oil in EZVI and in organic carbon in EHC) and use microbial organisms to enhance the reduction zone and to create a more anaerobic environment for the reactions to take place in. The most common type of implementation of ISCR is the installation of permeable reactive barriers (PRBs), but there are instances when the reductant can be directly injected into the subsurface to treat source areas. These barriers are usually made out of zero-valent iron (ZVI) but can also be made with any other zero-valent metal. The most common way they are made is by filling a trench with ZVI, nanoscale iron, or palladium. Nanoscale iron particles can also be injected directly into the subsurface to treat plumes; they have large surface areas and, therefore, high reactivities, and can be distributed more evenly in the contamination site. Palladium's reaction rates are rapid. The main advantages of PRBs are that they can reduce a variety of contaminants and require no above-ground structure. Problems with PRBs include the possibility of hydraulic short-circuiting, even with well-constructed barriers. [ 11 ] Nanoscale iron can be injected directly into the subsurface because the particles are small enough to be distributed thoroughly. Because the particles are so small, they have a comparatively large reactive surface, providing a more effective reaction. As of now, nanoscale iron is the only material that has been used with this injection strategy, and it is probably the only material that is effective in injection. [ 12 ]
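The three surface pathways described above, namely direct electron transfer from Fe 0, electron shuttling through ferrous iron, and reaction of hydrogen produced by anaerobic corrosion, can be sketched with the simplified equations below. These follow the common textbook treatment of reductive dechlorination at iron surfaces and are not reproduced verbatim from the cited sources.

    \text{Pathway A (direct reduction):}\quad \mathrm{Fe^{0} + RX + H^{+} \;\longrightarrow\; Fe^{2+} + RH + X^{-}}
    \text{Pathway B (shuttling via ferrous iron):}\quad \mathrm{2\,Fe^{2+} + RX + H^{+} \;\longrightarrow\; 2\,Fe^{3+} + RH + X^{-}}
    \text{Pathway C (hydrogen route):}\quad \mathrm{Fe^{0} + 2\,H_2O \;\longrightarrow\; Fe^{2+} + H_2 + 2\,OH^{-}}, \qquad \mathrm{H_2 + RX \;\xrightarrow{\ \text{catalyst}\ }\; RH + HX}

In each case the halocarbon RX gains two electrons and is converted to the dehalogenated product RH, with the electrons supplied directly by the metal, by adsorbed Fe 2+, or indirectly through corrosion-generated hydrogen activated by a catalyst.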
https://en.wikipedia.org/wiki/In_situ_chemical_reduction
The in situ cyclization of proteins ( INCYPRO ) is a protein engineering technology that increases the durability of proteins and enzymes for biotechnological and biomedical applications. [ 1 ] [ 2 ] For such applications, it is essential that the proteins used maintain their structural integrity. [ 3 ] This is, however, often challenged by the conditions required for these applications, which necessitates protein engineering to stabilize the protein structure. [ 4 ] The INCYPRO technology involves the attachment of molecular clamps (crosslinks) to a protein, thereby reducing the tendency of the protein to unfold. The resulting INCYPRO-crosslinked proteins are more stable at elevated temperature and in the presence of chemical denaturants. [ 5 ] The INCYPRO technology utilizes tris-reactive molecules to crosslink three defined positions within a protein [ 1 ] or protein complex. [ 6 ] For example, INCYPRO can involve the introduction of three spatially aligned and solvent-accessible cysteines into the protein that are then reacted with a tris-electrophilic agent. The resulting crosslinked proteins or protein complexes have been shown to exhibit increased stability towards thermal and chemical stress and a lower tendency towards aggregation. [ 1 ] [ 6 ] So far, the melting temperature of proteins has been increased by up to 39 °C in a single design step. [ 6 ] An early example involved the stabilization of the transpeptidase Sortase A , which resulted in INCYPRO-stabilized variants with activity under elevated temperature and in the presence of guanidinium chloride . [ 1 ] [ 5 ] INCYPRO has also been applied to stabilize the human adaptor KIX domain utilizing different crosslinker molecules. Here, a dependency of protein stability on the hydrophilicity of the crosslink was observed. [ 2 ] In addition, a number of homo-trimeric protein complexes have been stabilized, including the Pseudomonas fluorescens esterase (PFE) and an Enoyl-CoA hydratase . [ 6 ] In these cases, enzyme conjugates with overall bicyclic topology were generated.
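A minimal sketch of the geometric idea behind choosing three crosslinking sites as described above: three solvent-accessible positions whose mutual Cα–Cα distances all fall within the span of a tris-reactive crosslinker. The coordinates, the distance threshold, and the function name below are hypothetical illustrations, not parameters taken from the INCYPRO publications.

    from itertools import combinations
    import math

    def mutual_distances_ok(coords, max_span=12.0):
        """Check that every pairwise Ca-Ca distance within a residue triple is within
        the assumed reach of a tris-reactive crosslinker (max_span, in angstroms)."""
        return all(
            math.dist(a, b) <= max_span
            for a, b in combinations(coords, 2)
        )

    # Hypothetical Ca coordinates (angstroms) of three candidate cysteine positions.
    candidate_triple = {
        "A:45": (10.1, 4.2, -3.0),
        "A:78": (14.8, 9.5, -1.2),
        "A:120": (8.9, 11.0, 2.4),
    }
    print(mutual_distances_ok(candidate_triple.values()))  # True if all pairs are within reach

In an actual design workflow the coordinates would come from an experimental structure and the threshold from the chemistry of the chosen crosslinker; the sketch only illustrates the triangle-of-distances criterion implied by crosslinking three defined positions.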
https://en.wikipedia.org/wiki/In_situ_cyclization_of_proteins
In situ hybridization (ISH) is a type of hybridization that uses a labeled complementary DNA , RNA or modified nucleic acid strand (i.e., a probe ) to localize a specific DNA or RNA sequence in a portion or section of tissue ( in situ ) or, if the tissue is small enough (e.g., plant seeds, Drosophila embryos), in the entire tissue (whole mount ISH), in cells, and in circulating tumor cells (CTCs). This is distinct from immunohistochemistry , which usually localizes proteins in tissue sections. In situ hybridization is used to reveal the location of specific nucleic acid sequences on chromosomes or in tissues, a crucial step for understanding the organization, regulation, and function of genes. The key techniques currently in use include in situ hybridization to mRNA with oligonucleotide and RNA probes (both radio-labeled and hapten-labeled), analysis with light and electron microscopes, whole mount in situ hybridization, double detection of RNAs and RNA plus protein, and fluorescent in situ hybridization to detect chromosomal sequences. DNA ISH can be used to determine the structure of chromosomes. Fluorescent DNA ISH (FISH) can, for example, be used in medical diagnostics to assess chromosomal integrity. RNA ISH (RNA in situ hybridization) is used to measure and localize RNAs (mRNAs, lncRNAs, and miRNAs) within tissue sections, cells, whole mounts, and circulating tumor cells (CTCs). In situ hybridization was invented by American biologists Mary-Lou Pardue and Joseph G. Gall . [ 1 ] [ 2 ] [ 3 ] In situ hybridization is a powerful technique for identifying specific mRNA species within individual cells in tissue sections, providing insights into physiological processes and disease pathogenesis. However, in situ hybridization requires that many steps be taken with precise optimization for each tissue examined and for each probe used. In order to preserve the target mRNA within tissues, it is often required that crosslinking fixatives (such as formaldehyde ) be used. [ 4 ] In addition, in situ hybridization on tissue sections requires that tissue slices be very thin, usually 3 μm to 7 μm in thickness. Common methods of preparing tissue sections for in situ hybridization processing include cutting specimens with a cryostat or a Compresstome tissue slicer . A cryostat takes fresh or fixed tissue and immerses it into liquid nitrogen for flash freezing. The tissue is then embedded in a freezing medium called OCT, and thin sections are cut. Obstacles include freeze artifacts in the tissue that may interfere with proper mRNA staining. The Compresstome cuts tissue into thin slices without a freezing process; free-floating sections are cut after being embedded in agarose for stability. This method avoids freezing the tissue and thus the associated freeze artifacts. The process is permanent and irreversible once it is complete. [ 5 ] For hybridization histochemistry , sample cells and tissues are usually treated to fix the target transcripts in place and to increase access of the probe. As noted above, the probe is either a labeled complementary DNA or, now most commonly, a complementary RNA ( riboprobe ). The probe hybridizes to the target sequence at elevated temperature, and then the excess probe is washed away (after prior hydrolysis using RNase in the case of unhybridized, excess RNA probe). Solution parameters such as temperature, salt, and/or detergent concentration can be manipulated to remove any non-identical interactions (i.e., only exact sequence matches will remain bound).
Then, the probe that was labeled with either radio-, fluorescent- or antigen-labeled bases (e.g., digoxigenin ) is localized and quantified in the tissue using either autoradiography , fluorescence microscopy , or immunohistochemistry , respectively. ISH can also use two or more probes, labeled with radioactive or other non-radioactive labels, to simultaneously detect two or more transcripts. An alternative technology, the branched DNA assay , can be used for RNA (mRNA, lncRNA, and miRNA ) in situ hybridization assays with single-molecule sensitivity without the use of radioactivity. This approach (e.g., ViewRNA assays) can be used to visualize up to four targets in one assay, and it uses patented probe design and bDNA signal amplification to generate sensitive and specific signals. Samples (cells, tissues, and CTCs) are fixed, then treated to allow RNA target accessibility (RNA un-masking). Target-specific probes hybridize to each target RNA. Subsequent signal amplification is predicated on specific hybridization of adjacent probes (individual oligonucleotides [oligos] that bind side by side on RNA targets). A typical target-specific probe will contain 40 oligonucleotides, resulting in 20 oligo pairs that bind side-by-side on the target for detection of mRNA and lncRNA, and 2 oligos or a single pair for miRNA detection. Signal amplification is achieved via a series of sequential hybridization steps. A pre-amplifier molecule hybridizes to each oligo pair on the target-specific RNA, then multiple amplifier molecules hybridize to each pre-amplifier. Next, multiple label probe oligonucleotides (conjugated to alkaline phosphatase or directly to fluorophores) hybridize to each amplifier molecule. A fully assembled signal amplification structure ("tree") has 400 binding sites for the label probes. When all target-specific probes bind to the target mRNA transcript, an 8,000-fold signal amplification occurs for that one transcript. Separate but compatible signal amplification systems enable the multiplex assays. The signal can be visualized using a fluorescence or brightfield microscope. The protocol takes around 2–3 days and requires some time to set up. Some companies sell robots to automate the process (e.g., CEM InsituPro). As a result, large-scale screenings have been conducted in laboratories on thousands of genes. The results can usually be accessed via websites. [ 6 ]
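The amplification arithmetic quoted above (20 oligo pairs per probe set, a 400-site "tree" per pair, 8,000-fold overall) is a simple product, as the minimal sketch below makes explicit; the default values come directly from the description above, and changing them simply rescales the result.

```python
def bdna_amplification(oligo_pairs=20, label_sites_per_tree=400):
    """Theoretical number of label-probe binding sites per transcript in a bDNA assay."""
    return oligo_pairs * label_sites_per_tree

if __name__ == "__main__":
    fold = bdna_amplification()
    print(f"Theoretical amplification per transcript: {fold:,}-fold")  # 8,000-fold
```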
https://en.wikipedia.org/wiki/In_situ_hybridization
In polymer chemistry , in situ polymerization is a preparation method that occurs "in the polymerization mixture" and is used to develop polymer nanocomposites from nanoparticles. There are numerous unstable oligomers ( molecules ) which must be synthesized in situ (i.e., in the reaction mixture, because they cannot be isolated on their own) for use in various processes. The in situ polymerization process consists of an initiation step followed by a series of polymerization steps, which results in the formation of a hybrid between polymer molecules and nanoparticles . [ 1 ] Nanoparticles are initially spread out in a liquid monomer or a precursor of relatively low molecular weight. Upon the formation of a homogeneous mixture, initiation of the polymerization reaction is carried out by addition of an adequate initiator, which is exposed to a source of heat, radiation, etc. [ 1 ] After the polymerization mechanism is completed, a nanocomposite is produced, which consists of polymer molecules bound to nanoparticles. In order to perform the in situ polymerization of precursor polymer molecules to form a polymer nanocomposite, certain conditions must be fulfilled, which include the use of low-viscosity pre-polymers (typically less than 1 Pa·s), a short period of polymerization, the use of a polymer with advantageous mechanical properties, and no formation of side products during the polymerization process. [ 1 ] There are several advantages of the in situ polymerization process, which include the use of cost-effective materials, being easy to automate, and the ability to integrate with many other heating and curing methods. Some downsides of this preparation method, however, include the limited availability of usable materials, the short time window available for the polymerization process, and the need for expensive equipment. [ 1 ] The next sections cover various examples of polymer nanocomposites produced using the in situ polymerization technique, and their real-life applications. Towards the end of the 20th century, Toyota Motor Corp devised the first commercial application of the clay-polyamide-6 nanocomposite, which was prepared via in situ polymerization. [ 2 ] Once Toyota laid the groundwork for polymer layered silicate nanocomposites, extensive research in this particular area was conducted afterwards. Clay nanocomposites can experience a significant increase in strength, thermal stability, and barrier properties upon addition of a minute portion of nanofiller into the polymer matrix. [ 3 ] A standard technique to prepare clay nanocomposites is in situ polymerization, which consists of intercalation of the monomer with the clay surface, followed by initiation by the functional group in the organic cation and then polymerization. [ 3 ] A study by Zeng and Lee investigated the role of the initiator in the in situ polymerization process of clay nanocomposites. [ 3 ] One of the major findings was that the more favorable nanocomposite product was produced with a more polar monomer and initiator. [ 3 ] In situ polymerization is an important method of preparing polymer-grafted nanotubes using carbon nanotubes . Due to their remarkable mechanical, thermal and electronic properties, including high conductivity , large surface area, and excellent thermal stability, carbon nanotubes (CNT) have been heavily studied since their discovery to develop various real-world applications.
[ 4 ] Two particular applications to which carbon nanotubes have made major contributions are strengthening composites as filler material and energy production via thermally conductive composites. [ 4 ] [ 5 ] Currently, the two principal types of carbon nanotubes are single-walled nanotubes (SWNT) and multi-walled nanotubes (MWNT). [ 4 ] In situ polymerization offers several advantages in the preparation of polymer-grafted nanotubes compared to other methods. First and foremost, it allows polymer macromolecules to attach to CNT walls. [ 4 ] Additionally, the resulting composite is miscible with most types of polymers. [ 4 ] Unlike solution or melt processing, in situ polymerization can prepare insoluble and thermally unstable polymers. [ 4 ] Lastly, in situ polymerization can achieve stronger covalent interactions between the polymer and CNTs earlier in the process. [ 4 ] Recent improvements in the in situ polymerization process have led to the production of polymer-carbon nanotube composites with enhanced mechanical properties. With regard to their energy-related applications, carbon nanotubes have been used to make electrodes , with one specific example being the CNT/PMMA composite electrode. [ 5 ] [ 6 ] In situ polymerization has been studied to streamline the construction process of such electrodes. [ 5 ] [ 6 ] Huang, Vanhaecke, and Chen found that in situ polymerization can potentially produce composites of conductive CNTs on a large scale. [ 6 ] Some aspects of in situ polymerization that can help achieve this feat are that it is cost-effective to operate, requires minimal sample, has high sensitivity, and offers many promising environmental and bioanalytical applications. [ 6 ] Proteins , DNAs , and RNAs are just a few examples of biopharmaceuticals that hold the potential to treat various disorders and diseases, ranging from cancer to infectious diseases. [ 7 ] However, due to certain undesirable properties such as poor stability, susceptibility to enzyme degradation, and insufficient capability to penetrate biological barriers, the application of such biopharmaceuticals in delivering medical treatment has been severely hindered. [ 7 ] The formation of polymer-biomacromolecule nanocomposites via in situ polymerization offers an innovative means of overcoming these obstacles and improving the overall effectiveness of biopharmaceuticals. [ 7 ] Recent studies have demonstrated how in situ polymerization can be implemented to improve the stability, bioactivity , and ability to cross biological barriers of biopharmaceuticals. [ 7 ] The two main types of nanocomposites formed by in situ polymerization are 1) biomolecule-linear polymer hybrids, which are linear or have a star-like shape, and contain covalent bonds between individual polymer chains and the biomolecular surface, and 2) biomolecule-crosslinked polymer nanocapsules, which are nanocapsules with biomacromolecules centered within the polymer shells. [ 7 ] Biomolecule-linear polymer hybrids are formed via “grafting-from” polymerization, which is an in situ approach that differs from the standard “grafting to” polymerization. [ 7 ] Whereas “grafting to” polymerization involves the straightforward attachment of pre-made polymers to the biomolecule of choice, the “grafting from” method takes place on proteins that are pre-modified with initiators. [ 7 ] Examples of “grafting from” polymerization techniques include atom transfer radical polymerization (ATRP) and reversible addition-fragmentation chain transfer (RAFT) polymerization.
[ 7 ] These methods are similar in that they both lead to narrow molecular weight distributions and can make block copolymers. [ 7 ] On the other hand, they each have distinct properties that need to be analyzed on a case-by-case basis. For example, ATRP is sensitive to oxygen whereas RAFT is insensitive to oxygen; in addition, RAFT has a much greater compatibility with monomers than ATRP. [ 7 ] Radical polymerization with crosslinkers is the other in situ polymerization method, and this process leads to the formation of biomolecule-crosslinked polymer nanocapsules. [ 7 ] This process produces nanogels/nanocapsules via a covalent or non-covalent approach. [ 7 ] In the covalent approach, the two steps are the conjugation of acryloyl groups to the protein followed by in situ free radical polymerization. [ 7 ] In the non-covalent approach, proteins are entrapped within nanocapsules. [ 7 ] Nanogels , which are microscopic hydrogel particles held together by a cross-linked polymer network, offer a desirable mode of drug delivery that has a variety of biomedical applications. In situ polymerization can be used to prepare protein nanogels that help facilitate the storage and delivery of proteins. The preparation of such nanogels via the in situ polymerization method begins with free proteins dispersed in an aqueous solution along with cross-linkers and monomers, followed by the addition of radical initiators, which leads to the polymerization of a nanogel polymer shell that encloses a protein core. Additional modification of the polymeric nanogel enables delivery to specific target cells. Three classes of in situ polymerized nanogels are 1) direct covalent conjugation via chemical modifications, 2) noncovalent encapsulation, and 3) cross-linking of preformed crosslinkable polymers. Protein nanogels have tremendous applications for cancer treatment, vaccination, diagnosis, regenerative medicine, and therapies for loss-of-function genetic diseases. In situ polymerized nanogels are capable of delivering the appropriate amount of protein to the site of treatment; certain chemical and physical factors, including pH , temperature , and redox potential, govern the protein delivery process of nanogels. [ 8 ] Urea-formaldehyde (UF) and melamine-formaldehyde (MF) encapsulation systems are other examples that utilize in situ polymerization. In this type of in situ polymerization, the chemical encapsulation technique involved is very similar to interfacial coating. The distinguishing characteristic of in situ polymerization is that no reactants are included in the core material. All polymerization occurs in the continuous phase, rather than on both sides of the interface between the continuous phase and the core material. In situ polymerization of such formaldehyde systems usually involves the emulsification of an oil phase in water. Water-soluble urea or melamine formaldehyde resin monomers are then added and allowed to disperse. The initiation step occurs when acid is added to lower the pH of the mixture. Crosslinking of the resins completes the polymerization process and results in polymer shells that encapsulate the oil droplets. [ 9 ] [ 10 ]
https://en.wikipedia.org/wiki/In_situ_polymerization
In situ thermal desorption ( ISTD ) is an intensive thermally enhanced environmental remediation technology that uses thermal conductive heating (TCH) elements to directly transfer heat to environmental media. The ISTD/TCH process can be applied at low (<100 °C), moderate (~100 °C) and higher (>100 °C) temperature levels to accomplish the remediation of a wide variety of contaminants, both above and below the water table. ISTD/TCH is the only major in situ thermal remediation (ISTR) technology capable of achieving subsurface target treatment temperatures above the boiling point of water and is effective at virtually any depth in almost any media. TCH works in tight soils, clay layers, and soils with wide heterogeneity in permeability or moisture content that are impacted by a broad range of volatile and semi-volatile organic contaminants. ISTD using TCH was developed by Shell Oil Co. in the late 1980s and grew out of research and development for enhanced oil recovery . During the mid-1990s, Shell Oil Company commercialized ISTD with an investment of over $30 million. [ citation needed ] Thermal conductive heating is the application of heat to subsurface soils through conductive heat transfer. The heat is supplied via electric or gas-powered thermal wells. Thermal wells are inserted vertically or horizontally in an array within the soil. Heat flows from the heating elements by conduction. The heating process causes contaminants to be vaporized or destroyed in place. Vaporized contaminants are collected from vapor extraction wells and containerized for removal or recycling.
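As a rough sense of the energy involved in conductive heating, the sketch below estimates the sensible heat needed to raise a block of soil to a target temperature. The soil properties and treatment volume are hypothetical, it ignores heat losses and the latent heat required to boil off pore water, and it is not a design calculation from the technology's developers.

```python
def heating_energy_kwh(volume_m3, target_c, ambient_c=15.0,
                       bulk_density_kg_m3=1700.0, heat_capacity_j_kg_k=1200.0):
    """Sensible heat (kWh) to raise a soil volume from ambient to the target temperature."""
    mass_kg = volume_m3 * bulk_density_kg_m3
    energy_j = mass_kg * heat_capacity_j_kg_k * (target_c - ambient_c)
    return energy_j / 3.6e6  # joules to kilowatt-hours

# Hypothetical treatment zone: 1,000 cubic metres heated to 100 degrees Celsius.
print(f"{heating_energy_kwh(1000.0, 100.0):,.0f} kWh")
```

Even this lower-bound figure, on the order of tens of megawatt-hours for a modest treatment zone, illustrates why ISTD is described as an intensive technology.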
https://en.wikipedia.org/wiki/In_situ_thermal_desorption
In situ or direct dosing water treatment involves mixing and dosing chemical reagents directly into the affected water body for the treatment of a number of issues, instead of pumping water for treatment through a water treatment plant. Applications of in situ water treatment include acid mine drainage (AMD) treatment, turbidity control, algal control, nutrient pollution control, metal control, disinfection, chlorination, cyanide destruction, pH control, salinity control and lowering dissolved metals. In situ water treatment is commonly used in the mining industry for a number of applications, including the treatment of acid mine drainage and turbidity . In situ treatment of turbidity is often used for controlling turbidity in stormwater collection ponds at coal mine sites and coal loading facilities, especially in Australia and Indonesia. Some municipalities with numerous small water storage reservoirs spread out over a wide geographical area use in situ treatment for the control of turbidity. Although rare, chlorine has been dosed directly into water bodies for the purpose of disinfection. There are a number of technologies available for in situ water treatment. Reagent can be mixed into a slurry in a vertical mixing tank, or similar, and dosed directly into the water body (e.g. a tailings storage facility (TSF) or pond) via a pipeline. Some systems mix the reagent in a tank and spray it into the water body using a spray cannon. Other technologies mix and dose into the water body using a mobile water-based floating system. The technology must be selected carefully based on the application and the chosen reagent. For example, for the treatment of turbidity, the flocculant must be evenly dispersed over the surface of the water body so that it forms flocs as it settles.
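For direct dosing, the reagent quantity is essentially the target dose multiplied by the volume of water being treated. The sketch below shows that arithmetic with hypothetical numbers for a flocculant dose to a stormwater pond; it is not drawn from any particular dosing system described above.

```python
def reagent_mass_kg(target_dose_mg_per_l, volume_m3):
    """Mass of reagent needed to achieve a target dose across a water body."""
    # 1 mg/L over 1 cubic metre equals 1 gram, so mg/L * m3 gives grams.
    return target_dose_mg_per_l * volume_m3 / 1000.0

# Hypothetical example: 5 mg/L flocculant dose into a 20,000 m3 stormwater pond.
print(f"{reagent_mass_kg(5.0, 20_000.0):.0f} kg of reagent")  # 100 kg
```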
https://en.wikipedia.org/wiki/In_situ_water_treatment
In vitro (meaning in glass , or in the glass ) studies are performed with microorganisms , cells , or biological molecules outside their normal biological context. Colloquially called " test-tube experiments", these studies in biology and its subdisciplines are traditionally done in labware such as test tubes, flasks, Petri dishes , and microtiter plates . Studies conducted using components of an organism that have been isolated from their usual biological surroundings permit a more detailed or more convenient analysis than can be done with whole organisms; however, results obtained from in vitro experiments may not fully or accurately predict the effects on a whole organism. In contrast to in vitro experiments, in vivo studies are those conducted in living organisms, including humans, known as clinical trials , and whole plants. [ 1 ] [ 2 ] In vitro ( Latin for "in glass"; often not italicized in English usage [ 3 ] [ 4 ] [ 5 ] ) studies are conducted using components of an organism that have been isolated from their usual biological surroundings, such as microorganisms, cells, or biological molecules. For example, microorganisms or cells can be studied in artificial culture media , and proteins can be examined in solutions . Colloquially called "test-tube experiments", these studies in biology, medicine, and their subdisciplines are traditionally done in test tubes, flasks, Petri dishes, etc. [ 6 ] [ 7 ] They now involve the full range of techniques used in molecular biology, such as the omics . [ 8 ] In contrast, studies conducted in living beings (microorganisms, animals, humans, or whole plants) are called in vivo . [ 9 ] Examples of in vitro studies include: the isolation , growth and identification of cells derived from multicellular organisms (in cell or tissue culture ); subcellular components (e.g. mitochondria or ribosomes ); cellular or subcellular extracts (e.g. wheat germ or reticulocyte extracts); purified molecules (such as proteins , DNA , or RNA ); and the commercial production of antibiotics and other pharmaceutical products. [ 10 ] [ 11 ] [ 12 ] [ 13 ] Viruses, which only replicate in living cells, are studied in the laboratory in cell or tissue culture, and many animal virologists refer to such work as being in vitro to distinguish it from in vivo work in whole animals. [ 14 ] [ 15 ] In vitro studies permit a species-specific, simpler, more convenient, and more detailed analysis than can be done with the whole organism. Just as studies in whole animals more and more replace human trials, so are in vitro studies replacing studies in whole animals. Living organisms are extremely complex functional systems that are made up of, at a minimum, many tens of thousands of genes, protein molecules, RNA molecules, small organic compounds, inorganic ions, and complexes in an environment that is spatially organized by membranes, and in the case of multicellular organisms, organ systems. [ 23 ] [ 24 ] These myriad components interact with each other and with their environment in a way that processes food, removes waste, moves components to the correct location, and is responsive to signalling molecules, other organisms, light, sound, heat, taste, touch, and balance. This complexity makes it difficult to identify the interactions between individual components and to explore their basic biological functions. In vitro work simplifies the system under study, so the investigator can focus on a small number of components. 
[ 25 ] [ 26 ] For example, the identity of proteins of the immune system (e.g. antibodies) and the mechanism by which they recognize and bind to foreign antigens would remain very obscure if not for the extensive use of in vitro work to isolate the proteins, identify the cells and genes that produce them, study the physical properties of their interaction with antigens, and identify how those interactions lead to cellular signals that activate other components of the immune system. Another advantage of in vitro methods is that human cells can be studied without "extrapolation" from an experimental animal's cellular response. [ 27 ] [ 28 ] [ 29 ] In vitro methods can be miniaturized and automated, yielding high-throughput screening methods for testing molecules in pharmacology or toxicology. [ 30 ] The primary disadvantage of in vitro experimental studies is that it may be challenging to extrapolate from the results of in vitro work back to the biology of the intact organism. Investigators doing in vitro work must be careful to avoid over-interpretation of their results, which can lead to erroneous conclusions about organismal and systems biology. [ 10 ] [ 31 ] For example, scientists developing a new antiviral drug to treat an infection with a pathogenic virus (e.g., HIV-1) may find that a candidate drug functions to prevent viral replication in an in vitro setting (typically cell culture). However, before this drug is used in the clinic, it must progress through a series of in vivo trials to determine if it is safe and effective in intact organisms (typically small animals, primates, and humans in succession). Typically, most candidate drugs that are effective in vitro prove to be ineffective in vivo because of issues associated with delivery of the drug to the affected tissues, toxicity towards essential parts of the organism that were not represented in the initial in vitro studies, or other issues. [ 32 ] A method which could help decrease animal testing is the use of in vitro batteries, where several in vitro assays are compiled to cover multiple endpoints. Within developmental neurotoxicity and reproductive toxicity, there are hopes for test batteries to become easy screening methods for prioritizing which chemicals should be risk assessed and in which order. [ 33 ] [ 34 ] [ 35 ] [ 36 ] Within ecotoxicology, in vitro test batteries are already in use for regulatory purposes and for the toxicological evaluation of chemicals. [ 37 ] In vitro tests can also be combined with in vivo testing to make an in vitro-in vivo test battery, for example for pharmaceutical testing. [ 38 ] Results obtained from in vitro experiments cannot usually be transposed, as is, to predict the reaction of an entire organism in vivo . Building a consistent and reliable extrapolation procedure from in vitro results to in vivo is therefore extremely important. Solutions include developing more complex and physiologically relevant in vitro systems, and using mathematical modelling to extrapolate in vitro results to the in vivo situation (in vitro to in vivo extrapolation, IVIVE). These two approaches are not incompatible; better in vitro systems provide better data to mathematical models. However, increasingly sophisticated in vitro experiments collect increasingly numerous, complex, and challenging data to integrate. Mathematical models, such as systems biology models, are much needed here. [ 41 ] In pharmacology, IVIVE can be used to approximate pharmacokinetics (PK) or pharmacodynamics (PD).
[ 42 ] Since the timing and intensity of effects on a given target depend on the concentration time course of candidate drug (parent molecule or metabolites) at that target site, in vivo tissue and organ sensitivities can be completely different or even inverse of those observed on cells cultured and exposed in vitro . That indicates that extrapolating effects observed in vitro needs a quantitative model of in vivo PK. Physiologically based PK ( PBPK ) models are generally accepted to be central to the extrapolations. [ 43 ] In the case of early effects or those without intercellular communications, the same cellular exposure concentration is assumed to cause the same effects, both qualitatively and quantitatively, in vitro and in vivo . In these conditions, developing a simple PD model of the dose–response relationship observed in vitro , and transposing it without changes to predict in vivo effects is not enough. [ 44 ]
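A full physiologically based PK model is beyond a short example, but the sketch below shows the simplest quantitative building block that such extrapolations rest on: a one-compartment model with first-order elimination, converting a dose into a concentration-time course at the (assumed) target site. All parameter values are hypothetical and purely illustrative; they are not taken from the cited references.

```python
import math

def one_compartment_concentration(dose_mg, volume_l, clearance_l_per_h, t_hours):
    """Plasma concentration (mg/L) after an intravenous bolus with first-order elimination."""
    k_el = clearance_l_per_h / volume_l   # elimination rate constant (1/h)
    c0 = dose_mg / volume_l               # initial concentration (mg/L)
    return c0 * math.exp(-k_el * t_hours)

# Hypothetical drug: 100 mg IV bolus, 42 L distribution volume, 6 L/h clearance.
for t in (0, 2, 6, 12):
    print(t, "h:", round(one_compartment_concentration(100, 42, 6.0, t), 3), "mg/L")
```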
https://en.wikipedia.org/wiki/In_vitro
In vitro compartmentalization ( IVC ) is an emulsion -based technology that generates cell-like compartments in vitro . These compartments are designed such that each contains no more than one gene . When the gene is transcribed and/or translated , its products ( RNAs and/or proteins ) become 'trapped' with the encoding gene inside the compartment. By coupling the genotype ( DNA ) and phenotype (RNA, protein), compartmentalization allows the selection and evolution of phenotype. The in vitro compartmentalization method was first developed by Dan Tawfik and Andrew Griffiths. [ 1 ] Based on the idea that Darwinian evolution relies on the linkage of genotype to phenotype, Tawfik and Griffiths designed aqueous compartments of water-in-oil (w/o) emulsions to mimic cellular compartments that can link genotype and phenotype . Emulsions of cell-like compartments were formed by adding in vitro transcription / translation reaction mixture to stirred mineral oil containing surfactants . The mean droplet diameter was measured to be 2.6 μm by laser diffraction. As a proof of concept, Tawfik and Griffiths designed a selection experiment using a pool of DNA sequences, including the gene encoding HaeIII DNA methyltransferase (M.HaeIII), in the presence of a 10^7-fold excess of genes encoding a different enzyme, folA. The 3′ end of each DNA sequence was purposely designed to contain a HaeIII recognition site which, in the presence of the expressed methyltransferase, would be methylated and, thus, resistant to restriction enzyme digestion. By selecting for DNA sequences that survive the endonuclease digestion, Tawfik and Griffiths found that the M.HaeIII genes were enriched by at least 1000-fold over the folA genes within the first round of selection. Water-in-oil (w/o) emulsions are created by mixing aqueous and oil phases with the help of surfactants. A typical IVC emulsion is formed by first generating an oil-surfactant mixture by stirring, and then gradually adding the aqueous phase to the oil-surfactant mixture. For stable emulsion formation, a mixture of high-HLB (hydrophile-lipophile balance) and low-HLB surfactants is needed. [ 3 ] Some combinations of surfactants used to generate the oil-surfactant mixture are mineral oil / 0.5% Tween 80 / 4.5% Span 80 / sodium deoxycholate [ 1 ] and a more heat-stable version, light mineral oil / 0.4% Tween 80 / 4.5% Span 80 / 0.05% Triton X-100 . [ 4 ] The aqueous phase containing transcription and/or translation components is slowly added to the oil surfactants, and the formation of the w/o emulsion is facilitated by homogenizing, stirring or using a hand-extruding device. The emulsion quality can be determined by light microscopy and/or dynamic light scattering techniques. The droplet population is quite heterogeneous, and greater homogenization speeds help to produce smaller droplets with a narrower size distribution. However, the homogenization speed has to be controlled, since speeds over 13,500 r.p.m. tend to result in a significant loss of enzyme activity at the level of transcription. The most widely used emulsion formulation gives droplets with a mean diameter of 2–3 μm and an average volume of ~5 femtoliters, or about 10^10 aqueous droplets per ml of emulsion. [ 5 ] The ratio of genes to droplets is designed such that, statistically, most droplets contain no more than a single gene. IVC enables the miniaturization of large-scale techniques, which can now be done on the microscale, including coupled in vitro transcription and translation (IVTT) experiments.
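The claim above that most droplets contain no more than a single gene follows from Poisson loading: if genes are distributed randomly among droplets at a mean occupancy well below one, multiply occupied droplets become rare. The sketch below makes the arithmetic explicit; the occupancy value is illustrative rather than taken from the original protocol.

```python
import math

def occupancy_probabilities(mean_genes_per_droplet):
    """Poisson probabilities of a droplet holding 0, 1, or more than 1 genes."""
    lam = mean_genes_per_droplet
    p0 = math.exp(-lam)
    p1 = lam * math.exp(-lam)
    return p0, p1, 1.0 - p0 - p1

# Illustrative loading: on average 0.3 genes per droplet.
p0, p1, p_multi = occupancy_probabilities(0.3)
print(f"empty: {p0:.1%}, single gene: {p1:.1%}, more than one gene: {p_multi:.1%}")
print(f"multi-gene fraction among occupied droplets: {p_multi / (1 - p0):.1%}")
```

At a mean occupancy of 0.3 genes per droplet, roughly three-quarters of droplets are empty, about 22% hold exactly one gene, and fewer than 4% hold more than one.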
Streamlining and integrating transcription and translation allows for fast and highly controllable experimental designs. [ 6 ] [ 7 ] [ 8 ] IVTT can be done both in bulk emulsions and in microdroplets by utilizing droplet-based microfluidics . Microdroplets, with volumes on the scale of picoliters to femtoliters, have been successfully used as single-DNA-molecule vessels. [ 9 ] [ 10 ] This droplet technology allows high-throughput analysis with many different selection pressures in a single experimental setup. [ 6 ] [ 10 ] IVTT in microdroplets is preferred when overexpression of a desired protein would be toxic to a host cell, which would minimize the utility of the cell's transcription and translation machinery. [ 11 ] IVC has used bacterial cell, wheat germ and rabbit reticulocyte (RRL) extracts for transcription and translation. It is also possible to use a reconstituted bacterial translation system such as PURE, in which the translation components are individually purified and later combined. When expressing eukaryotic or complex proteins, it is desirable to use eukaryotic translation systems such as wheat germ extract or the superior alternative, RRL extract. In order to use RRL for transcription and translation, the traditional emulsion formulation cannot be used, as it abolishes translation. Instead, a novel emulsion formulation, 4% Abil EM90 / light mineral oil, was developed and demonstrated to be functional in expressing luciferase and human telomerase . [ 12 ] Once transcription and/or translation has completed in the droplets, the emulsion is broken by successive steps that remove the mineral oil and surfactants to allow for subsequent selection. At this stage, it is crucial to have a method to 'track' each gene product back to its encoding gene as they become free-floating in a heterogeneous population of molecules. There are three major approaches to link each phenotype to its genotype. [ 13 ] The first method is to tag each DNA molecule with a biotin group and an additional coding sequence for streptavidin (STABLE display). [ 14 ] All the newly formed proteins/peptides will be in fusion with streptavidin molecules and bind to their biotinylated coding sequence. An improved version attached two biotin molecules to the ends of a DNA molecule to increase the avidity between the DNA molecule and the streptavidin-fused peptides, and used a low-GC-content synthetic streptavidin gene to increase efficiency and specificity during PCR amplification. [ 15 ] The second method is to covalently link DNA and protein. Two strategies have been demonstrated. The first is to form M.HaeIII fusion proteins. [ 16 ] Each expressed protein/polypeptide will be in fusion with the HaeIII DNA methyltransferase domain, which is able to bind covalently to DNA fragments containing the sequence 5′-GGC*-3′, where C* is 5-fluoro-2′-deoxycytidine. The second strategy is to use a monomeric mutant of the VirD2 enzyme. [ 17 ] When a protein/peptide is expressed in fusion with the Agrobacterium protein VirD2, it will bind to its DNA coding sequence, which carries a single-stranded overhang comprising VirD2 T-border recognition sequences. The third method is to link phenotype and genotype via beads. [ 18 ] The beads used are coated with streptavidin to allow for the binding of biotinylated DNA; in addition, the beads also display a cognate binding partner for the affinity tag that is expressed in fusion with the protein/peptide. Depending on the phenotype to be selected, different selection strategies are used.
Selection strategies can be divided into three major categories: selection for binding, selection for catalysis and selection for regulation. [ 19 ] The phenotype to be selected can range from RNA to peptide to protein. When selecting for binding, the most commonly evolved phenotypes are peptides/proteins that have selective affinity for a specific antibody or DNA molecule. An example is the selection of proteins that have affinity for zinc finger DNA by Sepp et al. [ 20 ] When selecting for catalytic proteins/RNAs, new variants with novel or improved enzymatic properties are usually isolated. For example, new ribozyme variants with trans-ligase activity were selected and exhibited multiple turnovers. [ 21 ] Selecting for regulation can yield inhibitors of DNA nucleases, such as protein inhibitors of the Colicin E7 DNase. [ 22 ] Compared to other in vitro display technologies, IVC has two major advantages. The first advantage is its ability to control reactions within the droplets. Hydrophobic and hydrophilic components can be delivered to each droplet in a step-wise fashion without compromising the chemical integrity of the droplet, and thus, by controlling what is added and when, the reaction in each droplet can be controlled. In addition, depending on the nature of the reaction to be carried out, the pH of each droplet can also be changed. More recently, photocaged substrates were used and their participation in a reaction was regulated by photo-activation. [ 19 ] The second advantage is that IVC allows the selection of catalytic molecules. As an example, Griffiths et al. were able to select for phosphotriesterase variants with a higher k cat by detecting product formation and measuring its amount using an anti-product antibody and flow cytometry, respectively. [ 23 ]
https://en.wikipedia.org/wiki/In_vitro_compartmentalization
In vitro fertilisation ( IVF ) is a process of fertilisation in which an egg is combined with sperm in vitro ("in glass"). The process involves monitoring and stimulating the ovulatory process , then removing an ovum or ova (egg or eggs) from the ovaries and enabling sperm to fertilise them in a culture medium in a laboratory. After a fertilised egg ( zygote ) undergoes embryo culture for 2–6 days, it is transferred by catheter into the uterus , with the intention of establishing a successful pregnancy . IVF is a type of assisted reproductive technology used to treat infertility , enable gestational surrogacy , and, in combination with pre-implantation genetic testing , avoid the transmission of abnormal genetic conditions. When a fertilised egg from egg and sperm donors implants in the uterus of a genetically unrelated surrogate, the resulting child is also genetically unrelated to the surrogate. Some countries have banned or otherwise regulated the availability of IVF treatment, giving rise to fertility tourism . Financial cost and age may also restrict the availability of IVF as a means of carrying a healthy pregnancy to term. In July 1978, Louise Brown was the first child successfully born after her mother received IVF treatment. [ 1 ] Brown was born as a result of natural-cycle IVF, where no stimulation was made. The procedure took place at Dr Kershaw's Cottage Hospital in Royton , Oldham, England. Robert Edwards , surviving member of the development team, was awarded the Nobel Prize in Physiology or Medicine in 2010. [ 2 ] [ 3 ] When assisted by egg donation and IVF, many women who have reached menopause , have infertile partners, or have idiopathic female-fertility issues, can still become pregnant. After the IVF treatment, some couples get pregnant without any fertility treatments. [ 4 ] In 2023, it was estimated that twelve million children had been born worldwide using IVF and other assisted reproduction techniques. [ 5 ] A 2019 study that evaluated the use of 10 adjuncts with IVF (screening hysteroscopy, DHEA, testosterone, GH, aspirin, heparin, antioxidants, seminal plasma and PRP) suggested that (with the exception of hysteroscopy) these adjuncts should be avoided until there is more evidence to show that they are safe and effective. [ 6 ] The Latin term in vitro , meaning "in glass", is used because early biological experiments involving cultivation of tissues outside the living organism were carried out in glass containers, such as beakers, test tubes, or Petri dishes. The modern scientific term "in vitro" refers to any biological procedure that is performed outside the organism in which it would normally have occurred, to distinguish it from an in vivo procedure (such as in vivo fertilisation ), where the tissue remains inside the living organism in which it is normally found. A colloquial term for babies conceived as the result of IVF, "test tube babies", refers to the tube-shaped containers of glass or plastic resin, called test tubes , that are commonly used in chemistry and biology labs. However, IVF is usually performed in Petri dishes , which are both wider and shallower and often used to cultivate cultures. IVF is a form of assisted reproductive technology . The first successful birth of a child after IVF treatment, Louise Brown , occurred in 1978. Louise Brown was born as a result of natural cycle IVF where no stimulation was made. The procedure took place at Dr Kershaw's Cottage Hospital (now Dr Kershaw's Hospice) in Royton , Oldham, England. Robert G. 
Edwards , the physiologist who co-developed the treatment, was awarded the Nobel Prize in Physiology or Medicine in 2010. His co-workers, Patrick Steptoe and Jean Purdy , were not eligible for consideration as the Nobel Prize is not awarded posthumously. [ 2 ] [ 3 ] The second successful birth of a 'test tube baby' occurred in India on October 3, 1978, just 67 days after Louise Brown was born. The girl, named Durga, was conceived in vitro using a method developed independently by Subhash Mukhopadhyay , a physician and researcher from Hazaribag . Mukhopadhyay had been performing experiments on his own with primitive instruments and a household refrigerator. [ 7 ] However, state authorities prevented him from presenting his work at scientific conferences, [ 8 ] and it was many years before Mukhopadhyay's contribution was acknowledged in works dealing with the subject. [ 9 ] [ better source needed ] Adriana Iliescu held the record as the oldest woman to give birth using IVF and a donor egg, when she gave birth in 2004 at the age of 66, a record surpassed in 2006. After IVF treatment, some couples are able to get pregnant without any fertility treatments. [ 4 ] In 2018, it was estimated that eight million children had been born worldwide using IVF and other assisted reproduction techniques. [ 10 ] IVF may be used to overcome female infertility when it is due to problems with the fallopian tubes , making in vivo fertilisation difficult. It can also assist in male infertility , in those cases where there is a defect in sperm quality ; in such situations intracytoplasmic sperm injection (ICSI) may be used, in which a sperm cell is injected directly into the egg cell. This is used when sperm has difficulty penetrating the egg. ICSI is also used when sperm numbers are very low. When indicated, the use of ICSI has been found to increase the success rates of IVF. According to the UK's National Institute for Health and Care Excellence (NICE) guidelines, IVF treatment is appropriate in cases of unexplained infertility for people who have not conceived after 2 years of regular unprotected sexual intercourse. [ 11 ] In people with anovulation , it may be an alternative after 7–12 attempted cycles of ovulation induction , since the latter is expensive and not as easy to control. [ 12 ] IVF success rates are the percentage of all IVF procedures that result in favourable outcomes. Depending on the type of calculation used, this outcome may represent the number of confirmed pregnancies, called the pregnancy rate , or the number of live births, called the live birth rate . Due to advances in reproductive technology, live birth rates by cycle five of IVF have increased from 76% in 2005 to 80% in 2010, despite a reduction in the number of embryos being transferred (which decreased the multiple birth rate from 25% to 8%). [ 13 ] The success rate depends on variable factors such as the age of the woman, cause of infertility, embryo status, reproductive history, and lifestyle factors. Younger candidates for IVF are more likely to get pregnant. People older than 41 are more likely to get pregnant with a donor egg. [ 14 ] People who have been previously pregnant are in many cases more successful with IVF treatments than those who have never been pregnant. [ 14 ] The live birth rate is the percentage of all IVF cycles that lead to a live birth. This rate does not include miscarriage or stillbirth ; multiple-order births, such as twins and triplets, are counted as one pregnancy.
A 2021 summary compiled by the Society for Assisted Reproductive Technology (SART), which reports the average IVF success rates in the United States per age group using non-donor eggs, gives detailed data by age group. [ 15 ] In 2006, Canadian clinics reported a live birth rate of 27%. [ 16 ] Birth rates in younger patients were slightly higher, with a success rate of 35.3% for those 21 and younger, the youngest group evaluated. Success rates for older patients were also lower and decreased with age, with 37-year-olds at 27.4% and no live births for those older than 48, the oldest group evaluated. [ 17 ] Some clinics exceeded these rates, but it is impossible to determine if that is due to superior technique or patient selection, since it is possible to artificially increase success rates by refusing to accept the most difficult patients or by steering them into oocyte donation cycles (which are compiled separately). Further, pregnancy rates can be increased by the placement of several embryos at the risk of increasing the chance of multiples. Because not every IVF cycle that is started will lead to oocyte retrieval or embryo transfer, reports of live birth rates need to specify the denominator, namely IVF cycles started, IVF retrievals, or embryo transfers. SART summarised 2008–9 success rates for US clinics for fresh embryo cycles that did not involve donor eggs and gave live birth rates by the age of the prospective mother, with a peak at 41.3% per cycle started and 47.3% per embryo transfer for patients under 35 years of age. IVF attempts in multiple cycles result in increased cumulative live birth rates. Depending on the demographic group, one study reported 45% to 53% for three attempts, and 51% to 71–80% for six attempts. [ 18 ] According to the 2021 National Summary Report compiled by the Society for Assisted Reproductive Technology (SART), the mean number of embryo transfers for patients achieving a live birth varies by age group. [ 19 ] Effective from 15 February 2021, the majority of Australian IVF clinics publish their individual success rates online via YourIVFSuccess.com.au. This site also contains a predictor tool. [ 20 ] Pregnancy rate may be defined in various ways. In the United States, SART and the Centers for Disease Control include statistics on both the positive pregnancy test rate and the clinical pregnancy rate. The 2019 summary compiled by SART gives the corresponding data for non-donor eggs (first embryo transfer) in the United States. [ 21 ] In 2006, Canadian clinics reported an average pregnancy rate of 35%. [ 16 ] A French study estimated that 66% of patients starting IVF treatment finally succeed in having a child (40% during the IVF treatment at the centre and 26% after IVF discontinuation). Achievement of having a child after IVF discontinuation was mainly due to adoption (46%) or spontaneous pregnancy (42%). [ 22 ] According to a study done by the Mayo Clinic , miscarriage rates for IVF are somewhere between 15 and 25% for those under the age of 35. [ 23 ] In naturally conceived pregnancies, the rate of miscarriage is between 10 and 20% for those under the age of 35. [ 24 ] The risk of miscarriage, regardless of the method of conception, does increase with age. [ 23 ] The main potential factors that influence pregnancy (and live birth) rates in IVF have been suggested to be maternal age , duration of infertility or subfertility, bFSH and number of oocytes, all reflecting ovarian function . [ 25 ]
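Returning to the cumulative live birth rates quoted earlier in this section, such figures are often approximated by assuming a constant, independent per-cycle success probability. The sketch below shows that arithmetic; the per-cycle rates used are hypothetical and only chosen to bracket the ranges reported above, since real per-cycle rates vary with age, diagnosis and prior outcomes.

```python
def cumulative_live_birth_rate(per_cycle_rate, n_cycles):
    """Probability of at least one live birth within n_cycles attempts."""
    return 1.0 - (1.0 - per_cycle_rate) ** n_cycles

if __name__ == "__main__":
    for rate in (0.18, 0.25):          # hypothetical per-cycle live birth rates
        for n in (3, 6):
            print(f"per-cycle {rate:.0%}, {n} cycles: "
                  f"{cumulative_live_birth_rate(rate, n):.0%} cumulative")
```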
Optimal age is 23–39 years at the time of treatment. [ 26 ] A number of biomarkers affect the pregnancy chances of IVF, as do other determinants of outcome. Aspirin is sometimes prescribed to people for the purpose of increasing the chances of conception by IVF, but as of 2016 there was no evidence to show that it is safe and effective. [ 42 ] [ 43 ] A 2013 review and meta-analysis of randomised controlled trials of acupuncture as an adjuvant therapy in IVF found no overall benefit, and concluded that an apparent benefit detected in a subset of published trials, where the control group (those not using acupuncture) experienced a lower than average rate of pregnancy, requires further study, due to the possibility of publication bias and other factors. [ 44 ] A Cochrane review came to the result that endometrial injury performed in the month prior to ovarian induction appeared to increase both the live birth rate and clinical pregnancy rate in IVF compared with no endometrial injury. There was no evidence of a difference between the groups in miscarriage, multiple pregnancy or bleeding rates. Evidence suggested that endometrial injury on the day of oocyte retrieval was associated with a lower live birth or ongoing pregnancy rate. [ 40 ] Intake of antioxidants (such as N-acetyl-cysteine , melatonin , vitamin A , vitamin C , vitamin E , folic acid , myo-inositol , zinc or selenium ) has not been associated with a significantly increased live birth rate or clinical pregnancy rate in IVF according to Cochrane reviews . [ 40 ] The review found that oral antioxidants given to men with male factor or unexplained subfertility may improve live birth rates, but more evidence is needed. [ 40 ] A Cochrane review in 2015 came to the result that no evidence had been identified regarding the effect of preconception lifestyle advice on the chance of a live birth outcome. [ 40 ] Theoretically, IVF could be performed by collecting the contents from the fallopian tubes or uterus after natural ovulation, mixing it with sperm , and reinserting the fertilised ova into the uterus. However, without additional techniques, the chances of pregnancy would be extremely small. The additional techniques that are routinely used in IVF include ovarian hyperstimulation to generate multiple eggs, ultrasound-guided transvaginal oocyte retrieval directly from the ovaries, co-incubation of eggs and sperm, as well as culture and selection of resultant embryos before embryo transfer into a uterus. Ovarian hyperstimulation is stimulation to induce the development of multiple follicles in the ovaries. It should start with response prediction based on factors such as age, antral follicle count and level of anti-Müllerian hormone . [ 45 ] The resulting prediction (e.g. poor or hyper-response to ovarian hyperstimulation) determines the protocol and dosage for ovarian hyperstimulation. [ 45 ] Ovarian hyperstimulation also includes suppression of spontaneous ovulation, for which two main methods are available: using a (usually longer) GnRH agonist protocol or a (usually shorter) GnRH antagonist protocol. [ 45 ] In a standard long GnRH agonist protocol, the day when hyperstimulation treatment is started and the expected day of later oocyte retrieval can be chosen to conform to personal choice, while in a GnRH antagonist protocol it must be adapted to the spontaneous onset of the previous menstruation.
On the other hand, the GnRH antagonist protocol has a lower risk of ovarian hyperstimulation syndrome (OHSS), which is a life-threatening complication. [ 45 ] For the ovarian hyperstimulation itself, injectable gonadotropins (usually FSH analogues) are generally used under close monitoring. Such monitoring frequently checks the estradiol level and, by means of gynecologic ultrasonography , follicular growth. Typically, approximately 10 days of injections are necessary. When stimulating ovulation after suppressing endogenous secretion, it is necessary to supply exogenous gonadotropins. The most common one is human menopausal gonadotropin (hMG), which is obtained from the urine of menopausal women. Other pharmacological preparations are FSH+LH or corifollitropin alfa. There are several methods termed natural cycle IVF. [ 46 ] IVF using no drugs for ovarian hyperstimulation was the method used for the conception of Louise Brown . This method can be successfully used when people want to avoid taking ovarian stimulating drugs with their associated side-effects. The HFEA has estimated the live birth rate to be approximately 1.3% per IVF cycle using no hyperstimulation drugs for women aged between 40 and 42. [ 48 ] Mild IVF [ 49 ] is a method in which a small dose of ovarian stimulating drugs is used for a short duration during a natural menstrual cycle , aimed at producing 2–7 eggs and creating healthy embryos. This method appears to be an advance in the field to reduce complications and side-effects for women, and it is aimed at the quality, not the quantity, of eggs and embryos. One study comparing a mild treatment (mild ovarian stimulation with GnRH antagonist co-treatment combined with single embryo transfer ) to a standard treatment (stimulation with a GnRH agonist long-protocol and transfer of two embryos) came to the result that the proportions of cumulative pregnancies that resulted in term live birth after 1 year were 43.4% with mild treatment and 44.7% with standard treatment. [ 50 ] Mild IVF can be cheaper than conventional IVF and carries a significantly reduced risk of multiple gestation and OHSS . [ 51 ] When the ovarian follicles have reached a certain degree of development, induction of final oocyte maturation is performed, generally by an injection of human chorionic gonadotropin (hCG). Commonly, this is known as the "trigger shot." [ 52 ] hCG acts as an analogue of luteinising hormone , and ovulation would occur between 38 and 40 hours after a single hCG injection, [ 53 ] but the egg retrieval is usually performed between 34 and 36 hours after hCG injection, that is, just prior to when the follicles would rupture. This allows the egg retrieval procedure to be scheduled at a time when the eggs are fully mature. The hCG injection confers a risk of ovarian hyperstimulation syndrome . Using a GnRH agonist instead of hCG eliminates most of the risk of ovarian hyperstimulation syndrome, but with a reduced delivery rate if the embryos are transferred fresh. [ 54 ] For this reason, many centers will freeze all oocytes or embryos following an agonist trigger. The eggs are retrieved from the patient using a transvaginal technique called transvaginal ultrasound aspiration, in which an ultrasound-guided needle is passed into the follicles to collect their contents. Through this needle, oocytes and follicular fluid are aspirated, and the follicular fluid is then passed to an embryologist to identify ova. It is common to remove between ten and thirty eggs.
The retrieval process, which lasts approximately 20 to 40 minutes, is performed under conscious sedation or general anesthesia to ensure patient comfort. Following optimal follicular development, the eggs are meticulously retrieved using transvaginal ultrasound guidance with the aid of a specialised ultrasound probe and a fine-needle aspiration technique. The follicular fluid, containing the retrieved eggs, is expeditiously transferred to the embryology laboratory for subsequent processing. [ 55 ] In the laboratory, for ICSI treatments, the identified eggs are stripped of surrounding cells (also known as cumulus cells ) and prepared for fertilisation . An oocyte selection may be performed prior to fertilisation to select eggs that can be fertilised, as they are required to be in metaphase II. In some cases, oocytes that are in the metaphase I stage can be kept in culture so that they can undergo sperm injection later. In the meantime, semen is prepared for fertilisation by removing inactive cells and seminal fluid in a process called sperm washing . If semen is being provided by a sperm donor , it will usually have been prepared for treatment before being frozen and quarantined, and it will be thawed ready for use. [ citation needed ] The sperm and the egg are incubated together at a ratio of about 75,000:1 in a culture medium in order for the actual fertilisation to take place. A review in 2013 came to the result that a co-incubation duration of about 1 to 4 hours results in significantly higher pregnancy rates than 16 to 24 hours. [ 56 ] In most cases, the egg will be fertilised during co-incubation and will show two pronuclei . In certain situations, such as low sperm count or motility, a single sperm may be injected directly into the egg using intracytoplasmic sperm injection (ICSI). The fertilised egg is passed to a special growth medium and left for about 48 hours until the embryo consists of six to eight cells. In gamete intrafallopian transfer , eggs are removed from the woman and placed in one of the fallopian tubes, along with the man's sperm. This allows fertilisation to take place inside the woman's body. Therefore, this variation is actually an in vivo fertilisation, not in vitro. [ 57 ] [ 58 ] The main durations of embryo culture are until the cleavage stage (day two to four after co-incubation ) or the blastocyst stage (day five or six after co-incubation ). [ 59 ] Embryo culture until the blastocyst stage confers a significant increase in live birth rate per embryo transfer , but also confers a decreased number of embryos available for transfer and embryo cryopreservation , so the cumulative clinical pregnancy rates are increased with cleavage-stage transfer. [ 40 ] Transfer on day two instead of day three after fertilisation makes no difference in the live birth rate . [ 40 ] There are significantly higher odds of preterm birth ( odds ratio 1.3) and congenital anomalies ( odds ratio 1.3) among births arising from embryos cultured until the blastocyst stage compared with the cleavage stage. [ 59 ] Laboratories have developed grading methods to judge oocyte and embryo quality. In order to optimise pregnancy rates , there is significant evidence that a morphological scoring system is the best strategy for the selection of embryos. [ 60 ] Since 2009, when the first time-lapse microscopy system for IVF was approved for clinical use, morphokinetic scoring systems have been shown to improve pregnancy rates further.
[ 61 ] However, when all different types of time-lapse embryo imaging devices, with or without morphokinetic scoring systems, are compared against conventional embryo assessment for IVF, there is insufficient evidence of a difference in live-birth, pregnancy, stillbirth or miscarriage rates to choose between them. [ 62 ] Active efforts to develop a more accurate embryo selection analysis based on artificial intelligence and deep learning are underway. Embryo Ranking Intelligent Classification Assistant (ERICA) [ 63 ] is a clear example. This deep learning software substitutes manual classifications with a ranking system based on an individual embryo's predicted genetic status in a non-invasive fashion. [ 64 ] Studies in this area are still ongoing, and current feasibility studies support its potential. [ 65 ] The number of embryos to be transferred depends on the number available, the age of the patient and other health and diagnostic factors. In countries such as Canada, the UK, Australia and New Zealand, a maximum of two embryos are transferred except in unusual circumstances. In the UK and according to HFEA regulations, a woman over 40 may have up to three embryos transferred, whereas in the US there is no legal limit on the number of embryos which may be transferred, although medical associations have provided practice guidelines. Most clinics and country regulatory bodies seek to minimise the risk of multiple pregnancy, as it is not uncommon for several embryos to implant if several are transferred. Embryos are transferred to the patient's uterus through a thin, plastic catheter, which goes through their vagina and cervix. Several embryos may be passed into the uterus to improve chances of implantation and pregnancy. [ 66 ] [ 67 ] Luteal support is the administration of medication, generally progesterone, progestins, hCG, or GnRH agonists, often accompanied by estradiol, to increase the success rate of implantation and early embryogenesis, thereby complementing and/or supporting the function of the corpus luteum. A Cochrane review found that hCG or progesterone given during the luteal phase may be associated with higher rates of live birth or ongoing pregnancy, but that the evidence is not conclusive. [ 68 ] Co-treatment with GnRH agonists appears to improve outcomes, [ 68 ] with a live birth rate risk difference (RD) of +16% (95% confidence interval +10 to +22%). [ 69 ] On the other hand, there is no evidence of overall benefit for growth hormone or aspirin as adjunctive medication in IVF. [ 40 ] There are various expansions or additional techniques that can be applied in IVF, which are usually not necessary for the IVF procedure itself, but would be virtually impossible or technically difficult to perform without concomitantly performing methods of IVF. Preimplantation genetic screening (PGS) or preimplantation genetic diagnosis (PGD) has been suggested as a way to select, during IVF, an embryo that appears to have the greatest chances for successful pregnancy. However, a systematic review and meta-analysis of existing randomised controlled trials found that there is no evidence of a beneficial effect of PGS with cleavage-stage biopsy as measured by live birth rate. [ 70 ] On the contrary, for those of advanced maternal age, PGS with cleavage-stage biopsy significantly lowers the live birth rate. [ 70 ] Technical drawbacks, such as the invasiveness of the biopsy, and non-representative samples because of mosaicism are the major underlying factors for the inefficacy of PGS.
[ 70 ] Still, as an expansion of IVF, certain groups of patients can benefit from PGS/PGD. PGS screens for numerical chromosomal abnormalities, while PGD diagnoses the specific molecular defect of the inherited disease. In both PGS and PGD, individual cells from a pre-embryo, or preferably trophectoderm cells biopsied from a blastocyst, are analysed during the IVF process. Before the transfer of a pre-embryo back to a person's uterus, one or two cells are removed from the pre-embryos (8-cell stage), or preferably from a blastocyst. These cells are then evaluated for normality. Typically, within one to two days following completion of the evaluation, only the normal pre-embryos are transferred back to the uterus. Alternatively, a blastocyst can be cryopreserved via vitrification and transferred at a later date to the uterus. In addition, PGS can significantly reduce the risk of multiple pregnancies because fewer embryos, ideally just one, are needed for implantation. Cryopreservation can be performed as oocyte cryopreservation before fertilisation, or as embryo cryopreservation after fertilisation. The Rand Consulting Group estimated there to be 400,000 frozen embryos in the United States in 2006. [ 72 ] The advantage is that patients who fail to conceive may become pregnant using such embryos without having to go through a full IVF cycle, or, if pregnancy occurred, they could return later for another pregnancy. Spare oocytes or embryos resulting from fertility treatments may be used for oocyte donation or embryo donation to another aspiring parent, and embryos may be created, frozen and stored specifically for transfer and donation by using donor eggs and sperm. Also, oocyte cryopreservation can be used for those who are likely to lose their ovarian reserve due to undergoing chemotherapy. [ 73 ] By 2017, many centres had adopted embryo cryopreservation as their primary IVF therapy, performing few or no fresh embryo transfers. The two main reasons for this have been better endometrial receptivity when embryos are transferred in cycles without exposure to ovarian stimulation, and the ability to store the embryos while awaiting the results of preimplantation genetic testing. The outcome from using cryopreserved embryos has uniformly been positive, with no increase in birth defects or developmental abnormalities. [ 74 ] The major complication of IVF is the risk of multiple births. This is directly related to the practice of transferring multiple embryos at embryo transfer. Multiple births are related to increased risk of pregnancy loss, obstetrical complications, prematurity, and neonatal morbidity with the potential for long-term damage. Strict limits on the number of embryos that may be transferred have been enacted in some countries (e.g. Britain, Belgium) to reduce the risk of high-order multiples (triplets or more), but are not universally followed or accepted. Spontaneous splitting of embryos in the uterus after transfer can occur, but this is rare and would lead to identical twins. A double-blind, randomised study followed IVF pregnancies that resulted in 73 infants, and reported that 8.7% of singleton infants and 54.2% of twins had a birth weight of less than 2,500 grams (5.5 lb). [ 78 ] There is some evidence that making a double embryo transfer during one cycle achieves a higher live birth rate than a single embryo transfer, but making two single embryo transfers in two cycles has the same live birth rate and would avoid multiple pregnancies.
[ 79 ] Certain kinds of IVF have been shown to lead to distortions in the sex ratio at birth. Intracytoplasmic sperm injection (ICSI), which was first applied in 1991, leads to slightly more female births (51.3% female). Blastocyst transfer, which was first applied in 1984, leads to significantly more male births (56.1% male). Standard IVF done at the second or third day leads to a normal sex ratio. [ citation needed ] Epigenetic modifications caused by extended culture leading to the death of more female embryos have been theorised as the reason why blastocyst transfer leads to a higher male sex ratio; however, adding retinoic acid to the culture can bring this ratio back to normal. [ 80 ] A second theory is that the male-biased sex ratio may be due to a higher rate of selection of male embryos. Male embryos develop faster in vitro, and thus may appear more viable for transfer. [ 81 ] By sperm washing, the risk that a chronic disease in the individual providing the sperm would infect the birthing parent or offspring can be brought to negligible levels. If the sperm donor has hepatitis B, the Practice Committee of the American Society for Reproductive Medicine advises that sperm washing is not necessary in IVF to prevent transmission, unless the birthing partner has not been effectively vaccinated. [ 82 ] [ 83 ] In women with hepatitis B, the risk of vertical transmission during IVF is no different from the risk in spontaneous conception. [ 83 ] However, there is not enough evidence to say that ICSI procedures are safe in women with hepatitis B in regard to vertical transmission to the offspring. [ 83 ] Regarding potential spread of HIV/AIDS, Japan's government prohibited the use of IVF procedures in which both partners are infected with HIV. Despite the fact that the ethics committees had previously allowed the Ogikubo Hospital in Tokyo to use IVF for couples with HIV, the Ministry of Health, Labour and Welfare of Japan decided to block the practice. Hideji Hanabusa, the vice president of the Ogikubo Hospital, states that together with his colleagues, he managed to develop a method through which scientists are able to remove HIV from sperm. [ 84 ] In the United States, people seeking to be an embryo recipient undergo infectious disease screening required by the Food and Drug Administration (FDA), and reproductive tests to determine the best placement location and cycle timing before the actual embryo transfer occurs. The amount of screening the embryo has already undergone is largely dependent on the genetic parents' own IVF clinic and process. The embryo recipient may elect to have their own embryologist conduct further testing. A risk of ovarian stimulation is the development of ovarian hyperstimulation syndrome, particularly if hCG is used for inducing final oocyte maturation. This results in swollen, painful ovaries. It occurs in 30% of patients. Mild cases can be treated with over-the-counter medications, and cases can resolve in the absence of pregnancy. In moderate cases, the ovaries swell and fluid accumulates in the abdominal cavity, and there may be symptoms of heartburn, gas, nausea or loss of appetite. In severe cases, patients have sudden excess abdominal pain, nausea and vomiting, and require hospitalisation.
During egg retrieval, there exists a small chance of bleeding, infection, and damage to surrounding structures such as the bowel and bladder (transvaginal ultrasound aspiration), as well as difficulty in breathing, chest infection, allergic reactions to medication, or nerve damage (laparoscopy). Ectopic pregnancy may also occur if a fertilised egg develops outside the uterus, usually in the fallopian tubes, and requires immediate destruction of the foetus. IVF does not seem to be associated with an elevated risk of cervical cancer, nor with ovarian cancer or endometrial cancer when neutralising the confounder of infertility itself. [ 85 ] Nor does it seem to impart any increased risk for breast cancer. [ 86 ] Regardless of the pregnancy result, IVF treatment is usually stressful for patients. [ 87 ] Neuroticism and the use of escapist coping strategies are associated with a higher degree of distress, while the presence of social support has a relieving effect. [ 87 ] A negative pregnancy test after IVF is associated with an increased risk for depression, but not with any increased risk of developing anxiety disorders. [ 88 ] Pregnancy test results do not seem to be a risk factor for depression or anxiety among men in relationships between two cisgender, heterosexual people. [ 88 ] Hormonal agents such as gonadotropin-releasing hormone agonists (GnRH agonists) are associated with depression. [ 89 ] Studies show that there is an increased risk of venous thrombosis or pulmonary embolism during the first trimester of IVF pregnancies. [ 90 ] Long-term studies comparing patients who did or did not receive IVF suggest no correlation with an increased risk of cardiac events; further studies are ongoing to confirm this. [ 91 ] Spontaneous pregnancy has occurred after successful and unsuccessful IVF treatments. [ 92 ] Within 2 years of delivering an infant conceived through IVF, subfertile patients had a conception rate of 18%. [ 93 ] A 2013 review found that infants resulting from IVF (with or without ICSI) have a relative risk of birth defects of 1.32 (95% confidence interval 1.24–1.42) compared to naturally conceived infants. [ 94 ] In 2008, an analysis of the data of the National Birth Defects Study in the US found that certain birth defects were significantly more common in infants conceived through IVF, notably septal heart defects, cleft lip with or without cleft palate, esophageal atresia, and anorectal atresia; the mechanism of causality is unclear. [ 95 ] However, in a population-wide cohort study of 308,974 births (with 6,163 using assisted reproductive technology and following children from birth to age five) researchers found: "The increased risk of birth defects associated with IVF was no longer significant after adjustment for parental factors." [ 96 ] Parental factors included known independent risks for birth defects such as maternal age, smoking status, etc. Multivariate correction did not remove the significance of the association of birth defects and ICSI (corrected odds ratio 1.57), although the authors speculate that underlying male infertility factors (which would be associated with the use of ICSI) may contribute to this observation; they were not able to correct for these confounders. The authors also found that a history of infertility elevated risk itself in the absence of any treatment (odds ratio 1.29), consistent with a Danish national registry study [ 97 ] and "implicates patient factors in this increased risk."
[ 96 ] The authors of the Danish national registry study speculate: "our results suggest that the reported increased prevalence of congenital malformations seen in singletons born after assisted reproductive technology is partly due to the underlying infertility or its determinants." [ 97 ] If the underlying infertility is related to abnormalities in spermatogenesis, male offspring will have a higher risk for sperm abnormalities. In some cases genetic testing may be recommended to help assess the risk of transmission of defects to progeny and to consider whether treatment is desirable. [ 99 ] IVF does not seem to confer any risks regarding cognitive development, school performance, social functioning, and behaviour. [ 100 ] Also, IVF infants are known to be as securely attached to their parents as those who were naturally conceived, and IVF adolescents are as well-adjusted as those who have been naturally conceived. [ 101 ] Limited long-term follow-up data suggest that IVF may be associated with an increased incidence of hypertension, impaired fasting glucose, increase in total body fat composition, advancement of bone age, subclinical thyroid disorder, early adulthood clinical depression and binge drinking in the offspring. [ 100 ] [ 102 ] It is not known, however, whether these potential associations are caused by the IVF procedure in itself, by adverse obstetric outcomes associated with IVF, by the genetic origin of the children or by yet unknown IVF-associated causes. [ 100 ] [ 102 ] Increases in embryo manipulation during IVF result in more deviant fetal growth curves, but birth weight does not seem to be a reliable marker of fetal stress. [ 103 ] IVF, including ICSI, is associated with an increased risk of imprinting disorders (including Prader–Willi syndrome and Angelman syndrome), with an odds ratio of 3.7 (95% confidence interval 1.4 to 9.7). [ 104 ] The IVF-associated incidence of cerebral palsy and neurodevelopmental delay is believed to be related to the confounders of prematurity and low birthweight. [ 100 ] Similarly, the IVF-associated incidence of autism and attention-deficit disorder is believed to be related to confounders of maternal and obstetric factors. [ 100 ] Overall, IVF does not cause an increased risk of childhood cancer. [ 105 ] Studies have shown a decrease in the risk of certain cancers and an increased risk of certain others, including retinoblastoma, [ 106 ] hepatoblastoma [ 105 ] and rhabdomyosarcoma. [ 105 ] In some cases, laboratory mix-ups (misidentified gametes, transfer of wrong embryos) have occurred, leading to legal action against the IVF provider and complex paternity suits. An example is the case of a woman in California who received the embryo of another couple and was notified of this mistake after the birth of her son. [ 107 ] This has led to many authorities and individual clinics implementing procedures to minimise the risk of such mix-ups. The HFEA, for example, requires clinics to use a double witnessing system, in which the identity of specimens is checked by two people at each point at which specimens are transferred. Alternatively, technological solutions are gaining favour to reduce the manpower cost of manual double witnessing and to further reduce risks, with uniquely numbered RFID tags which can be identified by readers connected to a computer. The computer tracks specimens throughout the process and alerts the embryologist if non-matching specimens are identified.
Although the use of RFID tracking has expanded in the US, [ 108 ] it is still not widely adopted. [ 109 ] Pre-implantation genetic diagnosis (PGD) is criticised for giving select demographic groups disproportionate access to a means of creating a child possessing characteristics that they consider "ideal". Many fertile couples [ 110 ] [ 111 ] now demand equal access to embryonic screening so that their child can be just as healthy as one created through IVF. Mass use of PGD, especially as a means of population control or in the presence of legal measures related to population or demographic control, can lead to intentional or unintentional demographic effects such as the skewed live-birth sex ratios seen in China following implementation of its one-child policy. While PGD was originally designed to screen for embryos carrying hereditary genetic diseases, the method has been applied to select features that are unrelated to diseases, thus raising ethical questions. Examples of such cases include the selection of embryos based on histocompatibility (HLA) for the donation of tissues to a sick family member, the diagnosis of genetic susceptibility to disease, and sex selection. [ 112 ] These examples raise ethical issues because of the morality of eugenics. The practice is frowned upon because of the possibility of eliminating unwanted traits and selecting desired traits. By using PGD, individuals are given the opportunity to create a human life by relying on science rather than natural selection, which critics regard as unethical. [ 113 ] For example, a deaf British couple, Tom and Paula Lichy, have petitioned to create a deaf baby using IVF. [ 114 ] Some medical ethicists have been very critical of this approach. Jacob M. Appel wrote that "intentionally culling out blind or deaf embryos might prevent considerable future suffering, while a policy that allowed deaf or blind parents to select for such traits intentionally would be far more troublesome." [ 115 ] Robert Winston, professor of fertility studies at Imperial College London, has called the industry "corrupt" and "greedy", stating that "one of the major problems facing us in healthcare is that IVF has become a massive commercial industry," and that "what has happened, of course, is that money is corrupting this whole technology", and accused authorities of failing to protect couples from exploitation: "The regulatory authority has done a consistently bad job. It's not prevented the exploitation of people, it's not put out very good information to couples, it's not limited the number of unscientific treatments people have access to". [ 116 ] The IVF industry has been described as a market-driven construction of health, medicine and the human body. [ 117 ] The industry has been accused of making unscientific claims, and distorting facts relating to infertility, in particular through widely exaggerated claims about how common infertility is in society, in an attempt to get as many couples as possible, as soon as possible, to try treatments (rather than trying to conceive naturally for a longer time). [ citation needed ] This risks removing infertility from its social context and reducing the experience to a simple biological malfunction, which not only can be treated through bio-medical procedures, but should be treated by them. [ 118 ] [ 119 ] All pregnancies can be risky, but the risks are greater for mothers who are older, particularly those over the age of 40.
As people get older, they are more likely to develop conditions such as gestational diabetes and pre-eclampsia. If the mother conceives over the age of 40, their offspring may be of lower birth weight and more likely to require intensive care. Because of this, the increased risk is a sufficient cause for concern. The high incidence of caesarean delivery in older patients is commonly regarded as a risk. [ 120 ] Those conceiving at 40 have a greater risk of gestational hypertension and premature birth. The offspring of older mothers thus face both the risks associated with advanced maternal age and the risks associated with being conceived through IVF. [ 121 ] Adriana Iliescu held the record for a while as the oldest woman to give birth using IVF and a donor egg, when she gave birth in 2004 at the age of 66. [ citation needed ] In September 2019, a 74-year-old woman became the oldest person ever to give birth after she delivered twins at a hospital in Guntur, Andhra Pradesh. [ 122 ] Although menopause is a natural barrier to further conception, IVF has allowed people to be pregnant in their fifties and sixties. People whose uteruses have been appropriately prepared receive embryos that originated from an egg donor. Therefore, although they do not have a genetic link with the child, they have a physical link through pregnancy and childbirth. Even after menopause, the uterus is fully capable of carrying out a pregnancy. [ 123 ] A 2009 statement from the ASRM found no persuasive evidence that children are harmed or disadvantaged solely by being raised by single parents, unmarried parents, or homosexual parents. It did not support restricting access to assisted reproductive technologies on the basis of a prospective parent's marital status or sexual orientation. [ 124 ] A 2018 study found that children's psychological well-being did not differ when raised by either same-sex parents or heterosexual parents, even finding that psychological well-being was better amongst children raised by same-sex parents. [ 125 ] Ethical concerns include reproductive rights, the welfare of offspring, nondiscrimination against unmarried and homosexual individuals, and professional autonomy. [ 124 ] A controversy in California focused on the question of whether physicians opposed to same-sex relationships should be required to perform IVF for a lesbian couple. Guadalupe T. Benitez, a lesbian medical assistant from San Diego, sued doctors Christine Brody and Douglas Fenton of the North Coast Woman's Care Medical Group after Brody told her that she had "religious-based objections to treating her and homosexuals in general to help them conceive children by artificial insemination," and Fenton refused to authorise a refill of her prescription for the fertility drug Clomid on the same grounds. [ 126 ] [ 127 ] The California Medical Association had initially sided with Brody and Fenton, but the case, North Coast Women's Care Medical Group v. Superior Court, was decided unanimously by the California State Supreme Court in favour of Benitez on 19 August 2008. [ 128 ] [ 129 ] Nadya Suleman came to international attention after having twelve embryos implanted, eight of which survived, resulting in eight newborns being added to her existing six-child family. The Medical Board of California sought to have fertility doctor Michael Kamrava, who treated Suleman, stripped of his licence.
State officials alleged that performing Suleman's procedure was evidence of unreasonable judgment, substandard care, and a lack of concern for the eight children she would conceive and the six she was already struggling to raise. On 1 June 2011 the Medical Board issued a ruling that Kamrava's medical licence be revoked effective 1 July 2011. [ 130 ] [ 131 ] [ 132 ] The research on transgender reproduction and family planning is limited. [ 133 ] A 2020 comparative study of children born to a transgender father and cisgender mother via donor sperm insemination in France showed no significant differences compared with IVF-conceived and naturally conceived children of cisgender parents. [ 134 ] Transgender men can experience challenges in pregnancy and birthing from the cis-normative structure of the medical system, [ 133 ] as well as psychological challenges such as renewed gender dysphoria. [ 135 ] The effect of continued testosterone therapy during pregnancy and breastfeeding is undetermined. [ 136 ] Ethical concerns include reproductive rights, reproductive justice, physician autonomy, and transphobia within the health care setting. [ 133 ] Alana Stewart, who was conceived using donor sperm, began an online forum for donor children called AnonymousUS in 2010. The forum welcomes the viewpoints of anyone involved in the IVF process. [ 137 ] In May 2012, a court ruled anonymous sperm and egg donation in British Columbia illegal. [ 138 ] In the UK, Sweden, Norway, Germany, Italy, New Zealand, and some Australian states, donors are not paid and cannot be anonymous. [ citation needed ] In 2000, a website called Donor Sibling Registry was created to help biological children with a common donor connect with each other. [ 139 ] [ 140 ] There may be leftover embryos or eggs from IVF procedures if the person for whom they were originally created has successfully carried one or more pregnancies to term and no longer wishes to use them. With the patient's permission, these may be donated to help others conceive by means of third party reproduction. In embryo donation, these extra embryos are given to others for transfer, with the goal of producing a successful pregnancy. Embryo recipients typically have genetic issues or poor-quality embryos or eggs of their own. The resulting child is considered the child of whoever birthed them, and not the child of the donor, the same as occurs with egg donation or sperm donation. According to The National Infertility Association, genetic parents typically donate the eggs or embryos to a fertility clinic, where they are preserved by oocyte cryopreservation or embryo cryopreservation until a carrier is found for them. The process of matching the donation with the prospective parents is conducted by the agency itself, at which time the clinic transfers ownership of the embryos to the prospective parent(s). [ 141 ] Alternatives to donating unused embryos are destroying them (or having them transferred at a time when pregnancy is very unlikely), [ 142 ] keeping them frozen indefinitely, or donating them for use in research (rendering them non-viable). [ 143 ] Individual moral views on disposing of leftover embryos may depend on personal views on the beginning of human personhood and the definition and/or value of potential future persons, and on the value that is given to fundamental research questions.
Some people believe donation of leftover embryos for research is a good alternative to discarding the embryos when patients receive proper, honest and clear information about the research project, the procedures and the scientific values. [ 144 ] During the embryo selection and transfer phases, many embryos may be discarded in favour of others. This selection may be based on criteria such as genetic disorders or sex. One of the earliest cases of special gene selection through IVF was the case of the Collins family in the 1990s, who selected the sex of their child. [ 145 ] The ethical issues remain unresolved, as no worldwide consensus exists in science, religion, and philosophy on when a human embryo should be recognised as a person. For those who believe that this is at the moment of conception, IVF becomes a moral question when multiple eggs are fertilised, begin development, and only a few are chosen for uterus transfer. [ citation needed ] If IVF were to involve the fertilisation of only a single egg, or at least only the number that will be transferred, then this would not be an issue. However, this has the chance of increasing costs dramatically as only a few eggs can be attempted at a time. As a result, the couple must decide what to do with these extra embryos. Depending on their view of the embryo's humanity or the chance the couple will want to try to have another child, the couple has multiple options for dealing with these extra embryos. Couples can choose to keep them frozen, donate them to other infertile couples, thaw them, or donate them to medical research. [ 142 ] Keeping them frozen costs money, donating them does not ensure they will survive, thawing them renders them immediately unviable, and medical research results in their termination. In the realm of medical research, the couple is not necessarily told what the embryos will be used for, and as a result, some can be used in stem cell research. In February 2024, the Alabama Supreme Court ruled in LePage v. Center for Reproductive Medicine that cryopreserved embryos were "persons" or "extrauterine children". After Dobbs v. Jackson Women's Health Organization (2022), some antiabortionists had hoped to get a judgement that fetuses and embryos were "person[s]". [ 146 ] The Catholic Church opposes all kinds of assisted reproductive technology and artificial contraception, on the grounds that they separate the procreative goal of marital sex from the goal of uniting married couples. The Catholic Church permits the use of a small number of reproductive technologies and contraceptive methods such as natural family planning, which involves charting ovulation times, and allows other forms of reproductive technologies that allow conception to take place from normative sexual intercourse, such as a fertility lubricant. Pope Benedict XVI had publicly re-emphasised the Catholic Church's opposition to in vitro fertilisation, saying that it replaces love between a husband and wife. [ 147 ] The Catechism of the Catholic Church, in accordance with the Catholic understanding of natural law, teaches that reproduction has an "inseparable connection" to the sexual union of married couples. [ 148 ] In addition, the church opposes IVF because it might result in the disposal of embryos; in Catholicism, an embryo is viewed as an individual with a soul that must be treated as a person.
[ 149 ] The Catholic Church maintains that it is not objectively evil to be infertile, and advocates adoption as an option for such couples who still wish to have children. [ 150 ] The Lutheran Council in the United States of America, organised by the Lutheran Church–Missouri Synod and parent bodies of the Evangelical Lutheran Church in America, produced an authoritative document on the issue of in-vitro fertilisation, which "unanimously concluded that IVF does not in and of itself violate the will of God as reflected in the Bible, when the wife's egg and husband's sperm are used" (LCUSA n.d.:31). [ 151 ] The Lutheran Churches approve of artificial insemination by a husband (AIH), though representatives from the Lutheran Church–Missouri Synod hold that such IVF is only unobjectionable if the sperm and egg come from husband and wife and all of the fertilised eggs are implanted in the womb of the wife. [ 151 ] With regard to artificial insemination by a donor (AID), the Evangelical Lutheran Church in America teaches that it is a "cause for moral concern", while the Lutheran Church–Missouri Synod rejects it. [ 151 ] Regarding the response to IVF by Islam, a general consensus from contemporary Sunni scholars concludes that IVF methods are immoral and prohibited. However, Gad El-Hak Ali Gad El-Hak's ART fatwa includes provisions on the matter. [ 152 ] Within the Orthodox Jewish community the concept is debated, as there is little precedent in traditional Jewish legal textual sources. Regarding laws of sexuality, religious challenges include masturbation (which may be regarded as "seed wasting" [ 149 ] ), laws related to sexual activity and menstruation ( niddah ) and the specific laws regarding intercourse. An additional major issue is that of establishing paternity and lineage. For a baby conceived naturally, the father's identity is determined by a legal presumption ( chazakah ) of legitimacy: rov bi'ot achar ha'baal – a woman's sexual relations are assumed to be with her husband. Regarding an IVF child, this assumption does not exist and as such Rabbi Eliezer Waldenberg (among others) requires an outside supervisor to positively identify the father. [ 153 ] Reform Judaism has generally approved of IVF. [ 149 ] Many women of sub-Saharan Africa choose to foster their children to infertile women. IVF enables these infertile women to have their own children, which imposes new ideals on a culture in which fostering children is seen as both natural and culturally important. Many infertile women are able to earn more respect in their society by taking care of the children of other mothers, and this may be lost if they choose to use IVF instead. As IVF is seen as unnatural, it may even hinder their societal position as opposed to making them equal with fertile women. It is also economically advantageous for infertile women to raise foster children as it gives these children greater ability to access resources that are important for their development and also aids the development of their society at large. If IVF becomes more popular without the birth rate decreasing, there could be more large family homes with fewer options for placing their newborn children. This could result in an increase of orphaned children and/or a decrease in resources for the children of large families. This would ultimately stifle the children's and the community's growth.
[ 154 ] In the US, the pineapple has emerged as a symbol of IVF users, possibly because some people thought, without scientific evidence, that eating pineapple might slightly increase the success rate for the procedure. [ 155 ] Studies have indicated that IVF mothers show greater emotional involvement with their child, and they enjoy motherhood more than mothers by natural conception. Similarly, studies have indicated that IVF fathers express more warmth and emotional involvement than fathers by adoption and natural conception and enjoy fatherhood more. Some IVF parents become overly involved with their children. [ 156 ] Research has shown that men largely view themselves as "passive contributors" [ 157 ] : 340 since they have "less physical involvement" [ 158 ] in IVF treatment. Despite this, many men feel distressed after seeing the toll of hormonal injections and ongoing physical intervention on their female partner. [ 157 ] : 344 Fertility was found to be a significant factor in a man's perception of his masculinity, driving many to keep the treatment a secret. [ 157 ] : 344 In cases where men did share that they and their partners were undergoing IVF, they reported having been teased, mainly by other men, although some viewed this as an affirmation of support and friendship. For others, this led to feeling socially isolated. [ 157 ] : 336 In comparison with females, males showed less deterioration in mental health in the years following a failed treatment. [ 159 ] However, many men did feel guilt, disappointment and inadequacy, stating that they were simply trying to provide an "emotional rock" for their partners. [ 157 ] : 336 In certain countries, including Austria, Italy, Estonia, Hungary, Spain and Israel, the male does not have the full ability to withdraw consent to storage or use of embryos once they are fertilised. In the United States, the matter has been left to the courts on a more or less ad hoc basis. If embryos are implanted and a child is born contrary to the wishes of the male, he still has the legal and financial responsibilities of a father. [ 160 ] Costs of IVF can be broken down into direct and indirect costs. Direct costs include the medical treatments themselves, including doctor consultations, medications, ultrasound scanning, laboratory tests, the actual IVF procedure, and any associated hospital charges and administrative costs. Indirect costs include the cost of addressing any complications with treatments, compensation for the gestational surrogate, patients' travel costs, and lost hours of productivity. [ 161 ] These costs can be exacerbated by the increasing age of the woman undergoing IVF treatment (particularly those over the age of 40), and by the increased costs associated with multiple births. For instance, a pregnancy with twins can cost up to three times that of a singleton pregnancy. [ 162 ] While some insurance plans cover one cycle of IVF, multiple cycles are often needed to achieve a successful outcome. [ 163 ] A study completed in Northern California found that the IVF procedure alone, when it results in a successful outcome, costs $61,377, and this can be more costly with the use of a donor egg. [ 163 ] The cost of IVF reflects the costliness of the underlying healthcare system rather than the regulatory or funding environment, [ 164 ] and ranges, on average for a standard IVF cycle and in 2006 United States dollars, from $12,500 in the United States to $4,000 in Japan.
[ 164 ] In Ireland, IVF costs around €4,000, with fertility drugs, if required, costing up to €3,000. [ 165 ] The cost per live birth is highest in the United States ($41,000 [ 164 ] ) and United Kingdom ($40,000 [ 164 ] ) and lowest in Scandinavia and Japan (both around $24,500 [ 164 ] ). The high cost of IVF is also a barrier to access for disabled individuals, who typically have lower incomes, face higher health care costs, and seek health care services more often than non-disabled individuals. [ 166 ] Navigating insurance coverage for transgender expectant parents presents a unique challenge. Insurance plans are designed to cater to a specific population, meaning that some plans can provide adequate coverage for gender-affirming care but fail to provide fertility services for transgender patients. [ 167 ] Additionally, insurance coverage is constructed around a person's legally recognised sex and not their anatomy; thus, transgender people may not get coverage for the services they need, including transgender men for fertility services. [ 167 ] In larger urban centres, studies have noted that lesbian, gay, bisexual, transgender and queer (LGBTQ+) populations are among the fastest-growing users of fertility care. [ 168 ] IVF is increasingly being used to allow lesbian and other LGBT couples to share in the reproductive process through a technique called reciprocal IVF. [ 169 ] The eggs of one partner are used to create embryos which the other partner carries through pregnancy. For gay male couples, many elect to use IVF through gestational surrogacy, where one partner's sperm is used to fertilise a donor ovum, and the resulting embryo is transplanted into a surrogate carrier's womb. [ 170 ] There are various IVF options available for same-sex couples including, but not limited to, IVF with donor sperm, IVF with a partner's oocytes, reciprocal IVF, IVF with donor eggs, and IVF with a gestational surrogate. IVF with donor sperm can be considered traditional IVF for lesbian couples, but reciprocal IVF or using a partner's oocytes are other options for lesbian couples trying to conceive that include both partners in the biological process. Using a partner's oocytes is an option for partners who are unsuccessful in conceiving with their own, and reciprocal IVF involves creating embryos from one partner's eggs and donor sperm, which are then transferred to the other partner, who will gestate. Donor IVF involves conceiving with a third party's eggs. Typically, for gay male couples hoping to use IVF, the common techniques are using IVF with donor eggs and gestational surrogates. [ 171 ] Many LGBT communities centre their support around cisgender gay, lesbian and bisexual people and neglect to include proper support for transgender people. [ 172 ] The same 2020 literature review analyses the social, emotional and physical experiences of pregnant transgender men. [ 133 ] A common obstacle faced by pregnant transgender men is the possibility of gender dysphoria. Literature shows that transgender men report uncomfortable procedures and interactions during their pregnancies, as well as feeling misgendered due to gendered terminology used by healthcare providers. Outside of the healthcare system, pregnant transgender men may experience gender dysphoria due to cultural assumptions that all pregnant people are cisgender women. [ 133 ] These people use three common approaches to navigating their pregnancy: passing as a cisgender woman, hiding their pregnancy, or being out and visibly pregnant as a transgender man.
[ 133 ] Some transgender and gender diverse patients describe their experience of seeking gynaecological and reproductive health care as isolating and discriminatory, as the strictly binary healthcare system often leads to denial of healthcare coverage or unnecessary revelation of their transgender status to their employer. [ 173 ] Many transgender people retain their original sex organs and choose to have children through biological reproduction. Advances in assisted reproductive technology and fertility preservation have broadened the options transgender people have to conceive a child using their own gametes or a donor's. Transgender men and women may opt for fertility preservation before any gender-affirming surgery, but it is not required for future biological reproduction. [ 133 ] [ 174 ] It is also recommended that fertility preservation be conducted before any hormone therapy. [ 171 ] Additionally, while fertility specialists often suggest that transgender men discontinue their testosterone hormones prior to pregnancy, research on this topic is still inconclusive. [ 175 ] [ 133 ] However, a 2019 study found that transgender male patients seeking oocyte retrieval via assisted reproductive technology (including IVF) were able to undergo treatment four months after stopping testosterone treatment, on average. [ 176 ] All patients experienced menses and normal AMH, FSH and E 2 levels and antral follicle counts after coming off testosterone, which allowed for successful oocyte retrieval. [ 176 ] Despite assumptions that long-term androgen treatment negatively impacts fertility, oocyte retrieval, an integral part of the IVF process, does not appear to be affected. Biological reproductive options available to transgender women include, but are not limited to, IVF and IUI with the trans woman's sperm and a donor's or a partner's eggs and uterus. Fertility treatment options for transgender men include, but are not limited to, IUI or IVF using their own eggs with donor sperm and/or donor eggs, and their own uterus or a different uterus, whether that is a partner's or a surrogate's. [ 177 ] People with disabilities who wish to have children are equally or more likely than the non-disabled population to experience infertility, [ 166 ] yet disabled individuals are much less likely to have access to fertility treatment such as IVF. There are many extraneous factors that hinder disabled individuals' access to IVF, such as assumptions about decision-making capacity, sexual interests and abilities, heritability of a disability, and beliefs about parenting ability. [ 178 ] [ 179 ] These same misconceptions about people with disabilities that once led health care providers to sterilise thousands of women with disabilities now lead them to provide or deny reproductive care on the basis of stereotypes concerning people with disabilities and their sexuality. [ 166 ] Not only do misconceptions about disabled individuals' parenting ability, sexuality, and health restrict and hinder access to fertility treatment such as IVF, but structural barriers, such as providers uneducated in disability healthcare and inaccessible clinics, also severely hinder disabled individuals' access to IVF. [ 166 ] In Australia, the average age of women undergoing ART treatment is 35.5 years among those using their own eggs (one in four being 40 or older) and 40.5 years among those using donated eggs. [ 180 ] While IVF is available in Australia, Australians using IVF are unable to choose their baby's gender.
[ 181 ] Ernestine Gwet Bell supervised the first Cameroonian child born by IVF in 1998. [ 182 ] In Canada, one cycle of IVF treatment can cost between $7,750 and $12,250 CAD, and medications alone can cost from $2,500 to over $7,000 CAD. [ 183 ] The funding mechanisms that influence accessibility in Canada vary by province and territory, with some provinces providing full, partial or no coverage. New Brunswick provides partial funding through its Infertility Special Assistance Fund – a one-time grant of up to $5,000. Patients may only claim up to 50% of treatment costs or $5,000 (whichever is less) incurred after April 2014. Eligible patients must be full-time New Brunswick residents with a valid Medicare card [ 184 ] and have an official medical infertility diagnosis by a physician. [ 185 ] In December 2015, the Ontario provincial government enacted the Ontario Fertility Program for patients with medical and non-medical infertility, regardless of sexual orientation, gender or family composition. Eligible patients for IVF treatment must be Ontario residents under the age of 43 who have a valid Ontario Health Insurance Plan card and have not already undergone any IVF cycles. Coverage is extensive, but not universal. Coverage extends to certain blood and urine tests, physician/nurse counselling and consultations, certain ultrasounds, up to two cycle monitorings, embryo thawing, freezing and culture, fertilisation and embryology services, single transfers of all embryos, and one surgical sperm retrieval using certain techniques only if necessary. Drugs and medications are not covered under this Program, along with psychologist or social worker counselling, storage and shipping of eggs, sperm or embryos, and the purchase of donor sperm or eggs. [ 186 ] IVF is expensive in China and not generally accessible to unmarried women. [ 187 ] In August 2022, China's National Health Authority announced that it will take steps to make assisted reproductive technology more accessible, including by guiding local governments to include such technology in its national medical system. [ 187 ] In Croatia, no egg or sperm donation takes place; however, the use of donated sperm or eggs in ART and IUI is allowed. With donated eggs, sperm or embryos, heterosexual couples and single women have legal access to IVF. Same-sex male or female couples do not have access to ART as a form of reproduction. The minimum age for males and females to access ART in Croatia is 18; there is no maximum age. Donor anonymity applies, but the child born can be given access to the donor's identity at a certain age. [ 188 ] The penetration of the IVF market in India is quite low, with only 2,800 cycles per million infertile people in the reproductive age group (20–44 years), as compared to China, which has 6,500 cycles. The key challenges are lack of awareness, affordability and accessibility. [ 189 ] Since 2018, however, India has become a destination for fertility tourism, because of lower costs than in the Western world. In December 2021, the Lok Sabha passed the Assisted Reproductive Technology (Regulation) Bill 2020, to regulate ART services including IVF centres, sperm and egg banks. [ 190 ] Israel has the highest rate of IVF in the world, with 1,657 procedures performed per million people per year. [ 191 ] Couples without children can receive funding for IVF for up to two children. The same funding is available for people without children who will raise up to two children in a single-parent home.
IVF is available for people aged 18 to 45. [ 192 ] The Israeli Health Ministry says it spends roughly $3,450 per procedure. [ citation needed ] In Sweden, one, two or three IVF treatments are government subsidised for people who are younger than 40 and have no children. The rules for how many treatments are subsidised, and the upper age limit, vary between different county councils. [ 193 ] Single people are treated, and embryo adoption is allowed. There are also private clinics that offer the treatment for a fee. [ 194 ] Availability of IVF in England is determined by Clinical Commissioning Groups (CCGs). The National Institute for Health and Care Excellence (NICE) recommends up to 3 cycles of treatment for people under 40 years old with minimal success conceiving after 2 years of unprotected sex. Cycles will not be continued for people who are older than 40 years. [ 195 ] CCGs in Essex, Bedfordshire and Somerset have reduced funding to one cycle, or none, and it is expected that reductions will become more widespread. Funding may be available in "exceptional circumstances" – for example if a male partner has a transmittable infection or one partner is affected by cancer treatment. According to the campaign group Fertility Fairness, "at the end of 2014 every CCG in England was funding at least one cycle of IVF". [ 196 ] Prices paid by the NHS in England varied from under £3,000 to more than £6,000 in 2014/15. [ 197 ] In February 2013, the cost of implementing the NICE guidelines for IVF along with other treatments for infertility was projected to be £236,000 per year per 100,000 members of the population. [ 198 ] IVF increasingly appears on NHS treatment blacklists. [ 199 ] In August 2017, five of the 208 CCGs had stopped funding IVF completely and others were considering doing so. [ 200 ] By October 2017 only 25 CCGs were delivering the three recommended NHS IVF cycles to eligible people under 40. [ 201 ] Policies could fall foul of discrimination laws if they treat same-sex couples differently from heterosexual ones. [ 202 ] In July 2019 Jackie Doyle-Price said that women were registering with surgeries further away from their own home in order to get around CCG rationing policies. [ citation needed ] The Human Fertilisation and Embryology Authority said in September 2018 that parents who are limited to one cycle of IVF, or have to fund it themselves, are more likely to choose to implant multiple embryos in the hope that it increases the chances of pregnancy. This significantly increases the chance of multiple births and the associated poor outcomes, which would increase NHS costs. The president of the Royal College of Obstetricians and Gynaecologists said that funding 3 cycles was "the most important factor in maintaining low rates of multiple pregnancies and reduce(s) associated complications". [ 203 ] In the United States, overall availability of IVF in 2005 was 2.5 IVF physicians per 100,000 population, and utilisation was 236 IVF cycles per 100,000. [ 204 ] 126 procedures are performed per million people per year. Utilisation increases markedly with availability and IVF insurance coverage, and to a significant extent also with the percentage of single persons and median income. [ 204 ] In the US, an average cycle, from egg retrieval to embryo implantation, costs $12,400, and insurance companies that do cover treatment, even partially, usually cap the number of cycles they pay for. [ 205 ] As of 2015, more than 1 million babies had been born utilising IVF technologies.
[ 36 ] In the US, as of September 2023, 21 states and the District of Columbia had passed laws for fertility insurance coverage. In 15 of those jurisdictions, some level of IVF coverage is included, and in 17, some fertility preservation services are included. Eleven states require coverage for both fertility preservation and IVF: Colorado, Connecticut, Delaware, Maryland, Maine, New Hampshire, New Jersey, New York, Rhode Island, Utah, and Washington D.C. [ 206 ] The states that have infertility coverage laws are Arkansas, California, Colorado, Connecticut, Delaware, Hawaii, Illinois, Louisiana, Maryland, Massachusetts, Montana, New Hampshire, New Jersey, New York, Ohio, Rhode Island, Texas, Utah, and West Virginia. [ 207 ] As of July 2023, New York was reportedly the only state Medicaid program to cover IVF. [ 208 ] These laws differ by state, but many require that an egg be fertilised with sperm from a spouse and that, in order to be covered, patients must show they cannot become pregnant through penile-vaginal sex. [ 207 ] These requirements are not possible for a same-sex couple to meet. [ 208 ] Many fertility clinics in the United States limit the upper age at which people are eligible for IVF to 50 or 55 years. [ 209 ] These cut-offs make it difficult for people older than fifty-five to utilise the procedure. [ 209 ] In 2003, government agencies in China banned the use of IVF by unmarried people or by couples with certain infectious diseases. [ 210 ] In India, the use of IVF as a means of sex selection ( preimplantation genetic diagnosis ) is banned under the Pre-Conception and Pre-Natal Diagnostic Techniques Act, 1994. [ 211 ] [ 212 ] [ 213 ] Sunni Muslim nations generally allow IVF between married couples when conducted with their own respective sperm and eggs, but not with donor eggs from other couples. But Iran, which is Shi'a Muslim, has a more complex scheme. Iran bans sperm donation but allows donation of both fertilised and unfertilised eggs. Fertilised eggs are donated from married couples to other married couples, while unfertilised eggs are donated in the context of mut'ah or temporary marriage to the father. [ 214 ] By 2012 Costa Rica was the only country in the world with a complete ban on IVF technology, it having been ruled unconstitutional by the nation's Supreme Court because it "violated life." [ 215 ] Costa Rica had been the only country in the western hemisphere that forbade IVF. A law project sent reluctantly by the government of President Laura Chinchilla was rejected by parliament. President Chinchilla did not publicly state her position on the question of IVF. However, given the massive influence of the Catholic Church in her government, any change in the status quo seemed very unlikely. [ 216 ] [ 217 ] In spite of Costa Rican government and strong religious opposition, the IVF ban was struck down by the Inter-American Court of Human Rights in a decision of 20 December 2012. [ 218 ] The court said that a long-standing Costa Rican guarantee of protection for every human embryo violated the reproductive freedom of infertile couples because it prohibited them from using IVF, which often involves the disposal of embryos not implanted in a woman's uterus. [ 219 ] On 10 September 2015, President Luis Guillermo Solís signed a decree legalising in-vitro fertilisation. The decree was added to the country's official gazette on 11 September. Opponents of the practice have since filed a lawsuit before the country's Constitutional Court.
[ 220 ] All major restrictions on single but infertile people using IVF were lifted in Australia in 2002 after a final appeal to the Australian High Court was rejected on procedural grounds in the Leesa Meldrum case. A Victorian federal court had ruled in 2000 that the existing ban on all single women and lesbians using IVF constituted sex discrimination. [ 221 ] Victoria's government announced changes to its IVF law in 2007 eliminating remaining restrictions on fertile single women and lesbians, leaving South Australia as the only state maintaining them. [ 222 ] Despite strong popular support (7 out of 10 adults consider IVF access a good thing [ 223 ] and 67% believe that health insurance plans should cover IVF [ 224 ] ), IVF can involve complicated legal issues and has become a contentious issue in US politics. [ 225 ] [ 226 ] Federal regulations include screening requirements and restrictions on donations, [ 227 ] but these generally do not affect heterosexually intimate partners. [ 228 ] Doctors may be required to provide treatments to unmarried or LGBTQ couples under non-discrimination laws, as for example in California. [ 129 ] The state of Tennessee proposed a bill in 2009 that would have defined donor IVF as adoption. [ 229 ] During the same session, another bill proposed barring adoption from any unmarried and cohabitating couple, and activist groups stated that passing the first bill would effectively stop unmarried women from using IVF. [ 230 ] [ 231 ] Neither of these bills passed. [ 232 ] In 2023, the Practice Committee of the American Society for Reproductive Medicine (ASRM) updated its guidelines for the definition of "infertility" to include those who need medical interventions "in order to achieve a successful pregnancy either as an individual or with a partner." [ 233 ] In many states, legal and financial decisions about provision of infertility treatments reference this "official" definition. [ 234 ] On 29 September 2024, California Governor Gavin Newsom signed SB 729, legislation which aligns with the ASRM definition of "infertility". [ 235 ] [ 236 ] In the United States, much of the opposition to the use of IVF is associated with the anti-abortion movement, evangelicals, and denominations such as the Southern Baptists. [ 237 ] Current legal opposition to IVF and other fertility treatment access has stemmed from recent court rulings regarding women's reproductive healthcare. In the 2022 Dobbs v. Jackson Women's Health Organization decision, [ 238 ] the U.S. Supreme Court overturned the 1973 Roe v. Wade [ 239 ] decision which had federally protected the right to abortion. The 2024 Alabama Supreme Court decision regarding IVF has since threatened IVF access and legality in the U.S. Frozen embryos at an IVF clinic were accidentally destroyed, resulting in a lawsuit in which the attorneys for the plaintiffs sought damages under the Wrongful Death of a Minor Act. The court ruled in favour of the plaintiffs, setting a state-level precedent that embryos and fetuses are given the same rights as minors/children, regardless of whether they are in utero or not. [ 240 ] This has created confusion over the status of unused embryos and questions surrounding when life begins. After the court's decision, numerous IVF clinics in Alabama halted IVF treatment services [ 241 ] for fear of civil and criminal liability associated with the new rights granted to embryos.
Since then, laws establishing embryonic personhood have been proposed in 13 other states, [ 242 ] creating fear of further state restrictions. This ruling raised concerns from The National Infertility Association and the American Society for Reproductive Medicine that the decision would mean Alabama 's bans on abortion prohibit IVF as well, [ 243 ] while the University of Alabama at Birmingham health system paused IVF treatments. [ 244 ] Eight days later the Alabama legislature voted to protect IVF providers and patients from criminal or civil liability. [ 245 ] [ 246 ] The Right to IVF Act , federal legislation that would have codified a right to fertility treatments and provided insurance coverage for in vitro fertilisation treatments, was twice brought to a vote in the Senate in 2024. Both times it was blocked by Senate Republicans, of whom only Lisa Murkowski and Susan Collins voted to move the bill forward. [ 247 ] [ 237 ] [ 248 ] [ 249 ] Few American courts have addressed the issue of the "property" status of a frozen embryo. This issue might arise in the context of a divorce case, in which a court would need to determine which spouse would be able to decide the disposition of the embryos. It could also arise in the context of a dispute between a sperm donor and egg donor, even if they were unmarried. In 2015, an Illinois court held that such disputes could be decided by reference to any contract between the parents-to-be. In the absence of a contract, the court would weigh the relative interests of the parties. [ 250 ] On February 18, 2025, President Donald Trump signed an executive order which, according to the White House website, "directs policy recommendations to protect IVF access and aggressively reduce out-of-pocket and health plan costs for such treatments". [ 251 ] Trump has expressed support for IVF programs in the past, aiming to reduce the cost of such procedures. [ 252 ] Some alternatives to IVF are:
https://en.wikipedia.org/wiki/In_vitro_fertilisation
In vitro models for calcification may refer to systems that have been developed in order to reproduce, as faithfully as possible, the calcification process that tissues or biomaterials undergo inside the body. The aim of these systems is to mimic the high levels of calcium and phosphate present in the blood and measure the extent of crystal deposition. Different variations can include other parameters to increase the veracity of these models, such as flow, pressure, compliance and resistance. All the systems have different limitations that have to be acknowledged regarding the operating conditions and the degree of representation. The rationale for using such systems is to partially replace in vivo animal testing, while providing far more controllable and independent parameters than an animal model. The main use of these models is to study the calcification potential of prostheses that are in direct contact with the blood. In this category we find examples such as animal tissue prostheses ( xenogeneic bioprostheses). Xenogeneic heart valves are of special importance for this area of study as they demonstrate a limited durability mainly due to the fatigue of the tissue and the calcific deposits (see Aortic valve replacement ). In vitro calcification models have been used in medical implant development to evaluate the calcification potential of the medical device or tissue. They can be considered a subfamily of the bioreactors that have been used in the field of tissue engineering for tissue culture and growth. These calcification bioreactors are designed to mimic and maintain the mechano-chemical environment that the tissue encounters in vivo with a view to generating the pathological environment that would favor calcium deposition . Parameters including medium flow , pH , temperature and supersaturation of the calcifying solution used in the bioreactor are maintained and closely monitored. Monitoring these parameters provides information about the calcification potential of the medical device or tissue. In vitro calcification models can be categorized according to the level of representation of the physiological conditions, as static culture, constant supersaturation, and dynamic models. The simplest in vitro model for calcification is the static culture method. This method uses cell culture media enriched with different ions found in the blood plasma , such as calcium and phosphate, to produce a calcification effect on the cells. [ 1 ] This model, which simulates physiological temperature and pH, has been used to study living tissues. However, a major drawback is the lack of regulation of the levels of calcium and phosphate as occurs in the human body (see Metabolism, Minerals and cofactors ). The "constant supersaturation" method, also known as "constant composition", [ 3 ] is based on the consumption and successive replacement of the ions that are deposited to form apatitic structures onto the tissue under evaluation. The strategy of this model is to reproduce the chemical environment present in the body with solutions high in calcium and phosphate concentrations. The model incorporates a bioreactor vessel, a controlling mechanism and a set of burettes that replace the ions deposited during the calcific process. The kinetics of the reaction is monitored by the measurement of pH, which is proportional to the deprotonation of the acid phosphate via hydrolysis .
[ 4 ] The pH change drives the addition of titrants into the system that replace the calcium and phosphate deposited onto the tissue and at the same time keep the ionic strength of the solution constant, usually close to the physiological level of 0.15 M. The volume of titrants added to maintain the pH is proportional to the number of crystallization sites and the degree of supersaturation of the solution. The titrant addition rate therefore determines the mass deposition of crystals onto the tissue. This model does not provide flow or mechanical stimuli to the tissue. Both flow and mechanical stimuli affect the course and sites of calcium deposition. [ 5 ] [ 6 ] Dynamic calcification models employ a mock circulation to provide the chemical conditions for calcification, while at the same time subjecting the construct to mechanical stimulation. This stimulation tries to mimic the mechanical environment encountered in vivo. These models can combine the constant supersaturation principle with pulsatile flow, which is characteristic of the human cardiovascular system. The calcification solution used in such models is similar to the one used in the constant supersaturation reactor. The concept of dynamic calcification models was first introduced after it was realized that mechanical stresses affect tissue calcification, especially in the case of heart valves. The dynamic calcification systems aim at recreating the stresses and strains that tissues experience in vivo and combining them with an environment that enhances calcification. These systems incorporate flowmeters, pressure transducers and temperature sensors to closely monitor the simulated conditions. In these models, the kinetics of calcification remains the same as in the static systems, but the introduction of mechanical stimulation may affect the sites and extent of the deposition. [ 6 ] Dynamic models can vary in terms of the means of providing the flow in the system, as well as in terms of the dynamic stimulation rate. Accelerated frequencies are employed with a view to simulating longer equivalent in vivo durations. Accelerated models can provide long-term calcification predictions, although it should be borne in mind that the mechanical and flow stresses might be extra-physiological. [ 9 ] The gold standard for calcification experiments is the in vivo model. However, it is ethically contentious, and it is difficult to control and monitor the parameters under evaluation. Furthermore, the cost of an in vivo experiment is much higher than that of in vitro models. Several models can simulate the in vivo situation with a certain degree of representation. Static cultures can be of great help for studying living tissues, but they cannot keep the levels of calcium and phosphate constant as in the human body. Constant supersaturation systems fulfill this requirement but they are not suitable for living tissues. Finally, dynamic models add mechanical stimulation not present in the other models. The dynamic models can apply physiological or extra-physiological stimulation to the device or tissue being tested (in the case of accelerated systems), but they share the same disadvantages as the constant supersaturation bioreactors.
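As a rough illustration of how constant-composition data are interpreted, the sketch below converts a recorded cumulative titrant volume into an estimate of the mass of mineral deposited on the tissue. The titrant concentration, stoichiometry and sampling times are hypothetical placeholder values for illustration only, not taken from any specific published protocol.

```python
# Minimal sketch: estimating mineral deposition from titrant consumption in a
# constant-composition (constant supersaturation) calcification experiment.
# All numerical values are illustrative assumptions, not a validated protocol.

# Approximate molar mass of hydroxyapatite, Ca10(PO4)6(OH)2, in g/mol.
M_HAP = 1004.6

# Hypothetical calcium concentration of the titrant (mol/L).
CA_TITRANT = 0.02

# Moles of Ca2+ per formula unit of hydroxyapatite.
CA_PER_HAP = 10

def deposited_mass_mg(titrant_volume_ml: float) -> float:
    """Estimate mass of hydroxyapatite (mg) deposited on the tissue, assuming
    every mole of Ca2+ added replaces a mole removed from solution by crystal
    growth (the constant-composition assumption)."""
    moles_ca = CA_TITRANT * titrant_volume_ml / 1000.0   # mol of Ca2+ added
    moles_hap = moles_ca / CA_PER_HAP                     # mol of mineral formed
    return moles_hap * M_HAP * 1000.0                     # convert g to mg

# Hypothetical cumulative titrant volumes (mL) recorded at successive time points.
for t_hours, v_ml in [(24, 0.8), (48, 1.9), (72, 3.4)]:
    print(f"{t_hours:3d} h: {v_ml:.1f} mL titrant -> "
          f"{deposited_mass_mg(v_ml):.2f} mg mineral deposited")
```

Because titrant addition tracks ion consumption one-for-one in this scheme, the slope of deposited mass versus time is a direct readout of the calcification rate of the device or tissue under test.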
https://en.wikipedia.org/wiki/In_vitro_models_for_calcification
Recombinant DNA (rDNA), or molecular cloning , is the process by which a single gene , or segment of DNA , is isolated and amplified . Recombinant DNA is also known as in vitro recombination . A cloning vector is a DNA molecule that carries foreign DNA into a host cell , where it replicates, producing many copies of itself along with the foreign DNA. There are many types of cloning vectors, such as plasmids and phages . In order to carry out recombination between the vector and the foreign DNA, both the vector and the DNA to be cloned are first cut by restriction digestion , the foreign DNA is then joined to the vector with the enzyme DNA ligase , and the recombinant DNA is finally introduced into bacterial cells by transformation . The two major sources of foreign DNA for molecular cloning are genomic DNA (gDNA) and complementary (or copy) DNA (cDNA). cDNA molecules are DNA copies of mRNA molecules, produced in vitro by the action of the enzyme reverse transcriptase . In order to obtain the cDNA for a specific gene, it is first necessary to construct a cDNA library . The DNA of interest needs to be fragmented to provide a relevant DNA segment of suitable size. Preparation of DNA fragments for cloning is achieved by means of PCR , but it may also be accomplished by restriction enzyme digestion and fractionation by gel electrophoresis . To prepare a cDNA library, the first step is to isolate the total mRNA from the cell type of interest. Then, the enzyme reverse transcriptase is used to generate cDNAs. Reverse transcriptase is an RNA-dependent DNA polymerase. It depends on the presence of a primer, usually a poly-dT oligonucleotide , to prime DNA synthesis on the mRNA templates. DNA polymerase is then used to initiate second-strand DNA synthesis on the resulting single-stranded cDNAs. After the single-stranded DNA molecules are converted into double-stranded DNA molecules by DNA polymerase, they are inserted into vectors and cloned. To do this, the cDNAs are frequently methylated with a specific methyltransferase and then joined to synthetic oligonucleotides. These synthetic oligonucleotides can be "linkers", from which overhangs are generated by digestion with the appropriate restriction enzyme. Recombinant DNA vectors function as carriers of the foreign DNA. Plasmids are small, closed-circular DNA molecules that exist separately from the chromosomes of their host. Their replication may be under stringent control (low copy number) or relaxed control (high copy number). A cluster of restriction sites, called the multiple cloning site or polylinker , gives a wide choice of restriction sites for use in the cloning step.
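The first-strand synthesis step described above can be illustrated at the sequence level with a toy function that reverse-complements an mRNA into its cDNA, primed from the poly-A tail. This is a simplified sketch only; it ignores enzyme kinetics, RNase H activity and the details of second-strand priming, and the example sequence is made up.

```python
# Toy illustration of first-strand cDNA synthesis: reverse transcriptase,
# primed on the mRNA poly-A tail by an oligo-dT primer, copies the message
# into a complementary DNA strand (reported 5'->3'). Illustrative only.

COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def first_strand_cdna(mrna_5to3: str) -> str:
    """Return the first-strand cDNA (5'->3') complementary to an mRNA."""
    if not mrna_5to3.endswith("AAAA"):
        raise ValueError("expected a poly-A tail for oligo-dT priming")
    # Complement each ribonucleotide, then reverse so the DNA reads 5'->3'.
    return "".join(COMPLEMENT[base] for base in reversed(mrna_5to3))

# Hypothetical short message with a poly-A tail.
mrna = "AUGGCCUUCAAAAAAA"
print(first_strand_cdna(mrna))  # starts with a run of T's opposite the tail
```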
https://en.wikipedia.org/wiki/In_vitro_recombination
In vitro spermatogenesis is the process of creating male gametes ( spermatozoa ) outside of the body in a culture system. The process could be useful for fertility preservation and infertility treatment, and may further develop the understanding of spermatogenesis at the cellular and molecular level. Spermatogenesis is a highly complex process and artificially rebuilding it in vitro is challenging. [ 1 ] Challenges include creating a microenvironment similar to that of the testis, supporting endocrine and paracrine signalling, and ensuring survival of the somatic and germ cells from spermatogonial stem cells (SSCs) to mature spermatozoa. [ 2 ] Different methods of culturing can be used in the process, such as isolated cell cultures , fragment cultures and 3D cultures. [ 1 ] Cell cultures can include either monocultures, where one cell population is cultured, or co-culturing systems, where several (at least two) cell lines are cultured together. [ 3 ] Cells are initially isolated by enzymatically digesting the testis tissue to separate out the different cell types for culture. [ 4 ] The process of isolating cells can lead to cell damage. [ 5 ] The main advantage of monoculture is that the effect of different influences on one specific cell population can be investigated. [ citation needed ] Co-culture allows for the interactions between cell populations to be observed and experimented on, which is seen as an advantage over the monoculture model. [ 3 ] Isolated cell culture, specifically co-culture of testis tissue, has been a useful technique for examining the influences of specific factors such as hormones or different feeder cells on the progression of spermatogenesis in vitro . [ citation needed ] For example, factors such as temperature, feeder cell influence and the role of testosterone and follicle-stimulating hormone (FSH) have all been investigated using isolated cell culture techniques. [ 3 ] Studies have concluded that different factors can influence the culture of germ cells, e.g. media, growth factors, hormones and temperature. For example, when immortalized mouse germ cells were cultured at temperatures of 35, 37 and 29°C, the cells proliferated most rapidly at the highest temperature and least rapidly at the lowest, but showed varying levels of differentiation: no differentiation was detected at the highest temperature, some was seen at 37°C, and some early spermatids appeared at 32°C. [ 3 ] The isolated cell culture technique has been successfully used for in vitro production of sperm using the mouse as an animal model. [ 6 ] Investigations of appropriate feeder cells concluded that a variety of cell types, such as Sertoli cells , Leydig cells and peritubular myoid cells , could encourage development of germ cells; the most essential are Sertoli cells, while Leydig and peritubular myoid cells both contribute to the microenvironment that encourages stem cells to remain pluripotent and self-renew in the testis. [ 7 ] In fragment cultures, the testis is removed and fragments of tissue are cultured in supplemental media containing different growth factors to induce spermatogenesis and form functional gametes. [ 2 ] The development of this culture technique has taken place mainly with the use of animal models, e.g. mouse or rat testis tissue. The advantage of using this method is that it maintains the natural spatial arrangement of the seminiferous tubules .
However, hypoxia is a recurring problem in these cultures, where the low oxygen supply hinders the development and maturation of spermatids (significantly more in adult than immature testis tissues). [ 2 ] Other challenges with this type of culture include maintaining the structure of the seminiferous tubules, which becomes more difficult in longer-term cultures as the tissue structures can flatten out, making them hard to work with. [ 7 ] To resolve some of these issues, 3D cultures can be used. In 2012, mature spermatozoa capable of fertilization were isolated from in vitro culture of immature mouse testis tissue. [ 8 ] 3D cultures use sponge models or scaffolds that resemble the elements of the extracellular matrix to achieve a more natural spatial structure of the seminiferous tubules and to better represent the tissues and the interaction between different cell types in an ex vivo experiment. Different components of the extracellular matrix such as collagen, agar and calcium alginate are commonly used to form the gel or scaffold, which can provide oxygen and nutrients. [ 3 ] To propagate 3D cultures, testicular cell cultures are embedded into the porous sponge/scaffold and allowed to colonise the structure, which can then survive for several weeks to allow spermatogonia to differentiate and mature into spermatozoa. In addition, shaking 3D cultures during the seeding process allows for an increased oxygen supply, which helps overcome the issue of hypoxia and so improves the lifespan of cells. [ 3 ] In contrast to monocultures, fragment/3D cultures are able to establish in vitro conditions that can somewhat resemble the testicular microenvironment to allow a more accurate study of testicular physiology and its associations with the in vitro development of sperm cells. [ 3 ] The ability to recapitulate spermatogenesis in vitro provides a unique opportunity to study this biological process through an often cheaper and faster method of research than in vivo work. Observation is often easier in vitro , as the targeted cells are mostly isolated and immobile. Another significant advantage of in vitro research is the ease with which environmental factors can be changed and monitored. There are also techniques which are not practical or feasible in vivo which can now be explored. [ 8 ] In vitro work is not without its own challenges. For example, one loses the natural structure provided by the in vivo tissue, and thus cell connections which could be important to the function of the tissue. [ 2 ] While rodent spermatogenesis is not identical to its human counterpart, especially due to the high evolution rate of the male reproductive tract, these techniques are a solid starting point for future human applications. [ 8 ] Various categories of infertile men may benefit from advances in these techniques, especially those with a lack of viable gamete production. These men cannot benefit, for example, from sperm extraction techniques, and currently have little to no options for producing genetic descendants. [ 9 ] Notably, males who have undergone chemo/radiotherapy prepubertally may benefit from in vitro spermatogenesis. [ 1 ] These people did not have the option to cryopreserve viable sperm before their procedure, and thus the ability to generate genetically descended sperm later in life is invaluable.
Possible methods that could be applied (to this and other groups) include induction of spermatogenesis in testis samples taken prepubertally or, if such samples are not available or viable, new methods that manipulate stem cell differentiation to produce SSCs 'from scratch' from adult stem cell samples. [ 8 ] An alternative method is to graft preserved tissue back onto adult cancer survivors; however, this comes with operational risks, as well as a risk of reintroducing malignant cells. [ 10 ] Even with this method, however, advances in in vitro spermatogenesis would allow for sample expansion and observation to better ensure the quality and quantity of graft tissue. [ 7 ] In those with healthy or preserved SSCs but without a cellular environment to support them, in vitro spermatogenesis could be used following transplant of the SSCs into healthy donor tissue. [ 7 ] Another group that could be helped by in vitro spermatogenesis are those with any form of genetic impediment to sperm production. Those with no viable SSC development are an obvious target, but also those with varying levels of spermatogenic arrest; previously their underdeveloped germ cells have been injected into oocytes, but this has a success rate of only 3% in humans. [ 7 ] Finally, in vitro spermatogenesis using animal or human cells can be used to assess the effects and toxicity of drugs before in vivo testing. [ 3 ]
https://en.wikipedia.org/wiki/In_vitro_spermatogenesis
In vitro to in vivo extrapolation (IVIVE) refers to the qualitative or quantitative transposition of experimental results or observations made in vitro to predict phenomena in vivo , i.e. in biological organisms. The problem of transposing in vitro results is particularly acute in areas such as toxicology, where animal experiments are being phased out and are increasingly being replaced by alternative tests . Results obtained from in vitro experiments often cannot be directly applied to predict biological responses of organisms to chemical exposure in vivo . Therefore, it is extremely important to build a consistent and reliable in vitro to in vivo extrapolation method. Two solutions are now commonly accepted. The two approaches can be applied simultaneously, allowing in vitro systems to provide adequate data for the development of mathematical models. To comply with the push for the development of alternative testing methods, increasingly sophisticated in vitro experiments are now collecting numerous, complex, and challenging data that can be integrated into mathematical models. IVIVE in pharmacology can be used to assess pharmacokinetics (PK) or pharmacodynamics (PD). [ citation needed ] Since biological perturbation depends on the concentration of the toxicant or candidate drug (parent molecule or metabolites) at the target site as well as the duration of exposure, in vivo tissue and organ effects can be either completely different from or similar to those observed in vitro . Therefore, extrapolation of adverse effects observed in vitro requires that they be incorporated into a quantitative in vivo PK model. It is generally accepted that physiologically based PK ( PBPK ) models, describing the absorption, distribution, metabolism, and excretion of any given chemical, are central to in vitro - in vivo extrapolations. [ 3 ] In the case of early effects or those without inter-cellular communications, it is assumed that the same cellular exposure concentration causes the same effects, both experimentally and quantitatively, in vitro and in vivo . In these conditions, it is enough to (1) develop a simple pharmacodynamic model of the dose–response relationship observed in vitro and (2) transpose it without changes to predict in vivo effects. [ 4 ] However, cells in culture do not perfectly mimic cells in a complete organism. To solve that extrapolation problem, statistical models enriched with mechanistic information are needed, or mechanistic systems biology models of the cell response can be used. Those models are characterized by a hierarchical structure, such as molecular pathways, organ function, whole-cell response, cell-to-cell communications, tissue response and inter-tissue communications. [ 5 ]
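One common quantitative IVIVE step is "reverse dosimetry": taking a concentration that produces an effect in vitro and estimating, with a simple steady-state pharmacokinetic relationship, the external dose that would produce the same concentration in plasma in vivo. The sketch below assumes a one-compartment model at steady state, C_ss = F × dose rate / CL, with hypothetical clearance and bioavailability values; a realistic PBPK model would be considerably more detailed.

```python
# Minimal reverse-dosimetry sketch: map an in vitro active concentration to an
# equivalent in vivo oral dose, assuming a one-compartment model at steady state.
#   C_ss = (F * dose_rate) / CL   =>   dose_rate = C_ss * CL / F
# All parameter values below are hypothetical placeholders.

def oral_equivalent_dose(c_invitro_uM: float,
                         mw_g_per_mol: float,
                         clearance_L_per_h_per_kg: float,
                         bioavailability: float) -> float:
    """Return the oral dose rate (mg/kg/day) predicted to give a steady-state
    plasma concentration equal to the in vitro active concentration."""
    c_mg_per_L = c_invitro_uM * mw_g_per_mol / 1000.0   # umol/L -> mg/L
    dose_rate_mg_per_kg_h = c_mg_per_L * clearance_L_per_h_per_kg / bioavailability
    return dose_rate_mg_per_kg_h * 24.0                  # per hour -> per day

# Hypothetical chemical: in vitro effect at 5 uM, MW 250 g/mol,
# clearance 0.5 L/h/kg, oral bioavailability 80 %.
print(f"{oral_equivalent_dose(5.0, 250.0, 0.5, 0.8):.1f} mg/kg/day")
```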
https://en.wikipedia.org/wiki/In_vitro_to_in_vivo_extrapolation
In vitro toxicity testing is the scientific analysis of the toxic effects of chemical substances on cultured bacteria or mammalian cells . [ 1 ] In vitro (literally 'in glass') testing methods are employed primarily to identify potentially hazardous chemicals and/or to confirm the lack of certain toxic properties in the early stages of the development of potentially useful new substances such as therapeutic drugs , agricultural chemicals and food additives. In vitro assays for xenobiotic toxicity have recently been carefully considered by key government agencies (e.g., EPA; NIEHS/NTP; FDA), to better assess human risks. There is substantial activity in using in vitro systems to advance mechanistic understanding of toxicant activities, and in the use of human cells and tissue to define human-specific toxic effects. [ 2 ] Most toxicologists believe that in vitro toxicity testing methods can be more useful and more time- and cost-effective than toxicology studies in living animals [ 3 ] (which are termed in vivo or "in life" methods). However, the extrapolation from in vitro to in vivo requires careful consideration and is an active research area. Due to regulatory constraints and ethical considerations, the quest for alternatives to animal testing has gained new momentum. In many cases the in vitro tests are better than animal tests because they can be used to develop safer products. [ 4 ] The United States Environmental Protection Agency studied 1,065 chemical and drug substances in their ToxCast program (part of the CompTox Chemicals Dashboard ) using in silico modelling and a human pluripotent stem cell -based assay to predict in vivo developmental intoxicants based on changes in cellular metabolism following chemical exposure. Major findings from the analysis of this ToxCast_STM dataset published in 2020 include: (1) 19% of 1065 chemicals yielded a prediction of developmental toxicity , (2) assay performance reached 79%–82% accuracy with high specificity (> 84%) but modest sensitivity (< 67%) when compared with in vivo animal models of human prenatal developmental toxicity, (3) sensitivity improved as more stringent weights of evidence requirements were applied to the animal studies, and (4) statistical analysis of the most potent chemical hits on specific biochemical targets in ToxCast revealed positive and negative associations with the STM response, providing insights into the mechanistic underpinnings of the targeted endpoint and its biological domain. [ 5 ] Many methods of analysis exist for assaying test substances for cytotoxicity and other cellular responses. The hemolysis assay examines the propensity of chemicals, drugs or medication , or any blood-contacting medical device or material to lyse red blood cells (erythrocytes). The lysis is easily detected due to the release of hemoglobin . [ 6 ] The MTT assay is often used to determine cell viability and has been validated for use by international organisations. The MTT assay involves two steps: introduction of the assay reagent to the chemically treated cells, followed by a solubilisation step. The colorimetric MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium) in vitro assay is an updated version of the validated MTT method; the MTS assay has the advantage of being soluble, hence no solubilisation step is required. The ATP assay has the main advantage of providing results quickly (within 15 minutes) and requires fewer sample cells.
The assay lyses the cells, and the subsequent chemical reaction between the assay reagent and the ATP content of the cells produces luminescence. The amount of luminescence is then measured by a photometer and can be translated into the number of living cells, since the luminescence is proportional to the ATP content of viable cells. Another cell viability endpoint is neutral red (NR) uptake. Neutral red, a weak cationic dye, penetrates cellular membranes by non-ionic diffusion and accumulates intracellularly in lysosomes. Viable cells take up the NR dye; damaged or dead cells do not. ELISA kits can be used to examine up- and down-regulation of proinflammatory mediators such as cytokines (IL-1, TNF-alpha) and PGE2. Measurement of these types of cellular responses can provide windows into the interaction of the test article with the test models (monolayer cell cultures, 3D tissue models, tissue explants). Broadly speaking, there are two different types of in vitro studies depending on the type of system developed to perform the experiment. The two types of systems generally used are: a) static well plate systems and b) multi-compartmental perfused systems. The static well plate or layer systems are the most traditional and simplest form of assays widely used for in vitro study. These assays are beneficial as they are simple and provide a very accessible testing environment for monitoring chemicals in the culture medium as well as in the cell. However, the disadvantage of these simple static well plate assays is that they cannot represent the cellular interactions and physiologic fluid flow conditions taking place inside the body. New testing platforms have now been developed to solve problems related to cellular interactions. These new platforms are much more complex and are based on multi-compartmental perfused systems. [ 7 ] The main objective of these systems is to reproduce in vivo mechanisms more reliably by providing a cell culture environment close to the in vivo situation. Each compartment in the system represents a specific organ of the living organism, and thus each compartment has specific characteristics and criteria. The compartments in these systems are connected by tubes and pumps through which the fluid flows, thus mimicking the blood flow in the in vivo situation. The drawback of these perfused systems is that the non-specific effects (the influence of both the biological and non-biological components of the system on the fate of the chemical under study) are greater than in the static systems. In order to reduce the effect of the non-biological components of the system, all the compartments are made of glass and the connecting tubes are made of Teflon. A number of kinetic models have been proposed to account for the non-specific binding taking place in these in vitro systems. [ 8 ] To address the biological difficulties arising from the use of different in vitro culture conditions, the traditional models used in flasks or micro-well plates have to be modified. With parallel development in micro-technologies and tissue engineering, these problems are being addressed using new tools called "micro-fluidic biochips". [ 9 ]
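For colorimetric or luminescent endpoints such as the MTT, MTS and ATP assays described above, the raw signal is typically normalised to untreated and background controls to express cell viability as a percentage. The sketch below shows that normalisation with made-up plate readings; the well layout, replicate handling and any standard-curve calibration are assumptions, not part of any particular kit's protocol.

```python
# Minimal sketch of viability normalisation for an MTT/MTS/ATP-type readout:
#   percent viability = 100 * (signal - blank) / (untreated control - blank)
# All readings below are made-up numbers for illustration.

def percent_viability(signal: float, untreated_mean: float, blank_mean: float) -> float:
    """Normalise a raw absorbance/luminescence reading to percent viability."""
    return 100.0 * (signal - blank_mean) / (untreated_mean - blank_mean)

blank_mean = 0.05        # medium-only wells (background signal)
untreated_mean = 1.20    # untreated control wells (defined as 100 % viability)

# Hypothetical readings for increasing concentrations of a test chemical.
treated = {"1 uM": 1.15, "10 uM": 0.85, "100 uM": 0.30}

for dose, reading in treated.items():
    print(f"{dose:>7}: {percent_viability(reading, untreated_mean, blank_mean):5.1f} % viable")
```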
https://en.wikipedia.org/wiki/In_vitro_toxicology
Studies that are in vivo ( Latin for "within the living"; often not italicized in English [ 1 ] [ 2 ] [ 3 ] ) are those in which the effects of various biological entities are tested on whole, living organisms or cells , usually animals , including humans , and plants, as opposed to a tissue extract or dead organism. This is not to be confused with experiments done in vitro ("within the glass"), i.e., in a laboratory environment using test tubes , Petri dishes , etc. Examples of investigations in vivo include: the pathogenesis of disease by comparing the effects of bacterial infection with the effects of purified bacterial toxins ; the development of non-antibiotics, antiviral drugs, and new drugs generally; and new surgical procedures. Consequently, animal testing and clinical trials are major elements of in vivo research. In vivo testing is often employed over in vitro because it is better suited for observing the overall effects of an experiment on a living subject. In drug discovery , for example, verification of efficacy in vivo is crucial, because in vitro assays can sometimes yield misleading results with drug candidate molecules that are irrelevant in vivo (e.g., because such molecules cannot reach their site of in vivo action, for example as a result of rapid catabolism in the liver). [ 4 ] The English microbiologist Professor Harry Smith and his colleagues in the mid-1950s found that sterile filtrates of serum from animals infected with Bacillus anthracis were lethal for other animals, whereas extracts of culture fluid from the same organism grown in vitro were not. This discovery of anthrax toxin through the use of in vivo experiments had a major impact on studies of the pathogenesis of infectious disease. The maxim in vivo veritas ("in a living thing [there is] truth") [ 5 ] is a play on in vino veritas ("in wine [there is] truth"), a well-known proverb. In microbiology , in vivo is often used to refer to experimentation done in a whole organism , rather than in live isolated cells , for example, cultured cells derived from biopsies . In this situation, the more specific term is ex vivo . Once cells are disrupted and individual parts are tested or analyzed, this is known as in vitro . [ citation needed ] According to Christopher Lipinski and Andrew Hopkins, "Whether the aim is to discover drugs or to gain knowledge of biological systems, the nature and properties of a chemical tool cannot be considered independently of the system it is to be tested in. Compounds that bind to isolated recombinant proteins are one thing; chemical tools that can perturb cell function another; and pharmacological agents that can be tolerated by a live organism and perturb its systems are yet another. If it were simple to ascertain the properties required to develop a lead discovered in vitro to one that is active in vivo , drug discovery would be as reliable as drug manufacturing." [ 6 ] Studies of in vivo behavior have examined the formulations of specific drugs and their behavior in a biorelevant (biologically relevant) medium. [ 7 ]
https://en.wikipedia.org/wiki/In_vivo
The in vivo bioreactor is a tissue engineering paradigm that uses bioreactor methodology to grow neotissue in vivo that augments or replaces malfunctioning native tissue. Tissue engineering principles are used to construct a confined, artificial bioreactor space in vivo that hosts a tissue scaffold and key biomolecules necessary for neotissue growth. Said space often requires inoculation with pluripotent or specific stem cells to encourage initial growth, and access to a blood source. A blood source allows for recruitment of stem cells from the body alongside nutrient delivery for continual growth. This delivery of cells and nutrients to the bioreactor eventually results in the formation of a neotissue product. Conceptually, the in vivo bioreactor was born out of the complications of bone grafting , a repair method for bone fracture, bone loss, necrosis, and tumor reconstruction. Traditional bone grafting strategies require fresh, autologous bone harvested from the iliac crest ; this harvest site is limited by the amount of bone that can safely be removed, as well as associated pain and morbidity. [ 1 ] Other methods include cadaveric allografts and synthetic options (often made of hydroxyapatite ) that have become available in recent years. In response to the question of limited bone sourcing, it has been posited that bone can be grown to fit a damaged region within the body through the application of tissue engineering principles. [ 2 ] Tissue engineering is a biomedical engineering discipline that combines biology, chemistry, and engineering to design neotissue (newly formed tissue) on a scaffold. [ 3 ] Tissue scaffolds are functionally analogous to the extracellular matrix found in native tissue, acting as a site upon which regenerative cellular components adsorb to encourage cellular growth . [ 4 ] This cellular growth is then artificially stimulated by additive growth factors in the environment that encourage tissue formation . The scaffold is often seeded with stem cells and growth additives to encourage a smooth transition from cells to tissues, and more recently, organs. Traditionally, this method of tissue engineering is performed in vitro , where scaffold components and environmental manipulation recreate in vivo stimuli that direct growth. Environmental manipulation includes changes in physical stimulation, pH, potential gradients, cytokine gradients, and oxygen concentration. [ 5 ] The overarching goal of in vitro tissue engineering is to create a functional tissue that is equivalent to native tissue in terms of composition, biomechanical properties, and physiological performance. [ 6 ] However, in vitro tissue engineering suffers from a limited ability to mimic in vivo conditions, often leading to inadequate tissue substitutes. Therefore, in vivo tissue engineering has been suggested as a method to circumvent the tedium of environmental manipulation and use native in vivo stimuli to direct cell growth. To achieve in vivo tissue growth, an artificial bioreactor space must be established in which cells may grow. The in vivo bioreactor depends on harnessing the reparative qualities of the body to recruit stem cells into an implanted scaffold, and on utilizing vasculature to supply all necessary growth components. Tissue engineering done in vivo is capable of recruiting local cellular populations into a bioreactor space. [ 2 ] [ 7 ] Indeed, a range of neotissue growth has been shown: bone, cartilage , fat , and muscle .
[ 7 ] [ 8 ] [ 9 ] [ 10 ] In theory, any tissue type could be grown in this manner if all necessary components (growth factors, environmental and physical cues) are met. Recruitment of stem cells requires a complex process of mobilization from their niche, [ 11 ] though research suggests that mature cells transplanted upon the bioreactor scaffold can improve stem cell recruitment. [ 12 ] [ 13 ] [ 14 ] These cells secrete growth factors that promote repair and can be co-cultured with stem cells to improve tissue formation. Scaffold materials are designed to enhance tissue formation through control of the local and surrounding environments. [ 15 ] [ 16 ] [ 17 ] Scaffolds are critical in regulating cellular growth and provide a volume in which vascularization and stem cell differentiation can occur. [ 18 ] Scaffold geometry significantly affects tissue differentiation through physical growth cues. Predicting tissue formation computationally requires theories that link physical growth cues to cell differentiation. Current models rely on mechano-regulation theory, widely shaped by Prendergast et al., for predicting cell growth. [ 19 ] Thus a quantitative analysis of the geometries and materials commonly used in tissue scaffolds is possible. Such materials include: Initially, focusing on bone growth, subcutaneous pockets were used for bone prefabrication as a simple in vivo bioreactor model. The pocket is an artificially created space between varying levels of subcutaneous fascia . The location provides regenerative cues to the bioreactor implant but does not rely on pre-existing bone tissue as a substrate. Furthermore, these bioreactors may be wrapped with muscle tissue to encourage vascularization and bone growth. Another strategy is the use of a periosteal flap wrapped around the bioreactor, or around the scaffold itself, to create an in vivo bioreactor. This strategy utilizes the guided bone regeneration treatment scheme, and is a safe method for bone prefabrication. These 'flap' methods of packing the bioreactor within fascia, or wrapping it in tissue, are effective, though somewhat random due to the non-directed vascularization these methods incur. The axial vascular bundle (AVB) strategy requires that an artery and vein be inserted into the in vivo bioreactor to transport growth factors and cells, and to remove waste. This ultimately results in extensive vascularization of the bioreactor space and a vast improvement in growth capability. This vascularization, though effective, is limited by the surface contact that it can achieve between the scaffold and the capillaries filling the bioreactor space. Thus, a combination of the flap and AVB techniques can maximize the growth rate and vascular contact of the bioreactor, as suggested by Han and Dai, by inserting a vascular bundle into a scaffold wrapped in either musculature or periosteum. [ 28 ] If inadequate pre-existing vasculature is present in the growth site due to damage or disease, an arteriovenous loop (AVL) can be used. The AVL strategy requires that a surgical connection be made between an artery and a vein to form an arteriovenous fistula , which is then placed within an in vivo bioreactor space containing a scaffold. A capillary network will form from this loop and accelerate the vascularization of new tissue. [ 29 ] Materials used in the construction of an in vivo bioreactor space vary widely depending on the type of substrate, type of tissue, and mechanical demands of said tissue being grown.
At its simplest, a bioreactor space is created between tissue layers through the use of hydrogel injections. Early models used an impermeable silicone shroud to encase a scaffold, [ 6 ] though more recent studies have begun 3D printing custom bioreactor molds to further enhance the mechanical growth properties of the bioreactors. The choice of bioreactor chamber material generally requires that it be nontoxic and medical grade; examples include "silicon, polycarbonate , and acrylic polymer". [ 27 ] Recently both Teflon and titanium have been used in the growth of bone. [ 27 ] One study utilized polymethyl methacrylate as a chamber material and 3D-printed hollow rectangular blocks. [ 30 ] Yet another study pushed the limits of the in vivo bioreactor by proving that the omentum is suitable as a bioreactor space and chamber. Specifically, highly vascularized and functional bladder tissue was grown within the omentum space. [ 31 ] An example of the implementation of the IVB approach was in the engineering of autologous bone by injecting calcium alginate in a sub-periosteal location. [ 32 ] [ 33 ] The periosteum is a membrane that covers the long bones, jawbone, ribs and the skull. This membrane contains an endogenous population of pluripotent cells called the periosteal cells, which are a type of mesenchymal stem cells (MSC) and which reside in the cambium layer, i.e., the side facing the bone. A key step in the procedure is the elevation of the periosteum without damaging the cambium surface; to ensure this, a new technique called hydraulic elevation was developed. [ 34 ] The sub-periosteal site is used because stimulation of the cambium layer using transforming growth factor–beta resulted in enhanced chondrogenesis , i.e., formation of cartilage. In development, the formation of bone can occur either via a cartilage template initially formed by the MSCs, which then becomes ossified through a process called endochondral ossification , or directly from MSC differentiation to bone via a process termed intra-membranous ossification . Upon exposure of the periosteal cells to calcium from the alginate gel, these cells become bone cells and start producing bone matrix through the intra-membranous ossification process, recapitulating all steps of bone matrix deposition. The extension of the IVB paradigm to engineering autologous hyaline cartilage was also recently demonstrated. [ 35 ] In this case, agarose is injected and this triggers local hypoxia , which then results in the differentiation of the periosteal MSCs into articular chondrocytes, i.e. cells similar to those found in the joint cartilage. Since this process occurs in a relatively short period of less than two weeks and cartilage can remodel into bone, this approach might provide some advantages in the treatment of both cartilage and bone loss. The IVB concept has yet to be realized in humans, however, and efforts to do so are currently being undertaken.
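The mechano-regulation theory cited earlier for predicting tissue differentiation inside a scaffold is often implemented as a scalar stimulus combining shear strain and interstitial fluid velocity, with threshold bands assigned to tissue phenotypes. The sketch below uses the commonly quoted form S = gamma/a + v/b; the constants, threshold bands and sampled local conditions are illustrative assumptions, and any real scaffold analysis would obtain the strain and fluid velocity from a finite-element or poroelastic simulation rather than hard-coded values.

```python
# Sketch of a Prendergast-style mechano-regulation stimulus for predicting the
# tissue phenotype favoured at a point in a scaffold. Constants and threshold
# bands are commonly quoted illustrative values, treated here as assumptions.

A_STRAIN = 0.0375   # reference octahedral shear strain (dimensionless)
B_VELOCITY = 3.0    # reference interstitial fluid velocity (um/s)

def mechano_stimulus(shear_strain: float, fluid_velocity_um_s: float) -> float:
    """Combined biophysical stimulus S = gamma/a + v/b."""
    return shear_strain / A_STRAIN + fluid_velocity_um_s / B_VELOCITY

def predicted_tissue(s: float) -> str:
    """Map the stimulus to a favoured tissue phenotype (illustrative bands)."""
    if s > 3.0:
        return "fibrous tissue"
    if s > 1.0:
        return "cartilage"
    if s > 0.01:
        return "bone"
    return "resorption"

# Hypothetical local conditions, as might be sampled from a scaffold simulation.
for gamma, v in [(0.002, 0.5), (0.03, 2.0), (0.10, 8.0)]:
    s = mechano_stimulus(gamma, v)
    print(f"strain={gamma:.3f}, v={v:.1f} um/s -> S={s:.2f} ({predicted_tissue(s)})")
```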
https://en.wikipedia.org/wiki/In_vivo_bioreactor
Ina Kersten (born 1946) [ 1 ] is a German mathematician and former president of the German Mathematical Society . Her research concerns abstract algebra including the theory of field extensions and algebraic groups . [ 2 ] She is a professor emerita at the University of Göttingen . Kersten was born in Hamburg , [ 2 ] and earned her Ph.D. at the University of Hamburg in 1977. Her dissertation, p-Algebren über semilokalen Ringen , was supervised by Ernst Witt . [ 3 ] She completed a habilitation at the University of Regensburg in 1983. [ 4 ] Kersten was president of the German Mathematical Society from 1995 to 1997, [ 2 ] which meant she was the first woman to head the society. [ 5 ] Under her leadership, the society founded the journal Documenta Mathematica . [ 6 ]
https://en.wikipedia.org/wiki/Ina_Kersten
Inanidrilus leukodermatus (synonym Phallodrilus leukodermatus Giere, 1979) is a species of annelid worm. [ 1 ] [ 2 ] It is known from poorly oxygenated intertidal and subtidal carbonate sands in Belize (Caribbean Sea) and Bermuda (Atlantic Ocean). Living specimens typically measure 20 mm (0.79 in) in length and can measure as much as 25 mm (0.98 in), but preserved specimens are only up to 11.4 mm (0.45 in). [ 2 ]
https://en.wikipedia.org/wiki/Inanidrilus_leukodermatus
Inborn errors of metabolism form a large class of genetic diseases involving congenital disorders of enzyme activities. [ 1 ] The majority are due to defects of single genes that code for enzymes that facilitate conversion of various substances ( substrates ) into others ( products ). In most of the disorders, problems arise due to accumulation of substances which are toxic or interfere with normal function, or due to the effects of reduced ability to synthesize essential compounds. Inborn errors of metabolism are often referred to as congenital metabolic diseases or inherited metabolic disorders . [ 2 ] Another term used to describe these disorders is "enzymopathies". This term was created following the study of biodynamic enzymology , a science based on the study of the enzymes and their products. Inborn errors of metabolism were first studied by the British physician Archibald Garrod (1857–1936), in 1908. He is known for work that prefigured the "one gene–one enzyme" hypothesis , based on his studies on the nature and inheritance of alkaptonuria. His seminal text, Inborn Errors of Metabolism , was published in 1923. [ 3 ] Traditionally, the inherited metabolic diseases were classified as disorders of carbohydrate metabolism, amino acid metabolism, organic acid metabolism, or lysosomal storage diseases . [ 4 ] In recent decades, hundreds of new inherited disorders of metabolism have been discovered and the categories have proliferated. Following are some of the major classes of congenital metabolic diseases, with prominent examples of each class. [ 5 ] Because of the enormous number of these diseases and the numerous systems negatively impacted, nearly every "presenting complaint" to a healthcare provider may have a congenital metabolic disease as a possible cause, especially in childhood and adolescence. The following are examples of potential manifestations affecting each of the major organ systems. [ citation needed ] Dozens of congenital metabolic diseases are now detectable by newborn screening tests, especially expanded testing using mass spectrometry. [ 6 ] Gas chromatography–mass spectrometry -based technology with an integrated analytics system has now made it possible to test a newborn for over 100 genetic metabolic disorders. Because of the multiplicity of conditions, many different diagnostic tests are used for screening. An abnormal result is often followed by a subsequent "definitive test" to confirm the suspected diagnosis. [ citation needed ] Common screening tests used in the last sixty years: [ citation needed ] Specific diagnostic tests (or focused screening for a small set of disorders): [ citation needed ] A 2015 review reported that even with all these diagnostic tests, there are cases when "biochemical testing, gene sequencing, and enzymatic testing can neither confirm nor rule out an IEM, resulting in the need to rely on the patient's clinical course". [ 7 ] A 2021 review showed that several neurometabolic disorders converge on common neurochemical mechanisms that interfere with biological mechanisms also considered central in ADHD pathophysiology and treatment. This highlights the importance of close collaboration between health services to avoid clinical overshadowing . [ 8 ] In the middle of the 20th century, the principal treatment for some of the amino acid disorders was restriction of dietary protein, and all other care was simply management of complications.
In the past twenty years, new medications, enzyme replacement, gene therapy, and organ transplantation have become available and beneficial for many previously untreatable disorders. Some of the more common or promising therapies are listed: [ citation needed ] In a study in British Columbia , the overall incidence of the inborn errors of metabolism was estimated to be 40 per 100,000 live births, or 1 in 2,500 births, [ 9 ] representing approximately 15% of single gene disorders in the population. [ 9 ] A Mexican study established an overall incidence of 3.4 per 1,000 live newborns and a carrier detection rate of 6.8 per 1,000 in newborn screening (NBS). [ 10 ]
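The two incidence figures above are reported on different scales; a one-line conversion puts them on a common "1 in X births" footing. The sketch below simply restates each rate this way; it adds no data beyond the figures already quoted.

```python
# Convert reported incidence rates to a "1 in X births" form for comparison.

def one_in_x(cases: float, per_births: float) -> float:
    """Return X such that the incidence equals 1 case in X births."""
    return per_births / cases

print(f"British Columbia study: 1 in {one_in_x(40, 100_000):,.0f} births")   # 1 in 2,500
print(f"Mexican study:          1 in {one_in_x(3.4, 1_000):,.0f} births")    # about 1 in 294
```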
https://en.wikipedia.org/wiki/Inborn_errors_of_metabolism
Inborn errors of purine–pyrimidine metabolism are a class of inborn errors of metabolism specifically affecting purine metabolism and pyrimidine metabolism . An example is Lesch–Nyhan syndrome . Urine tests may be of use in identifying some of these disorders. [ 1 ]
https://en.wikipedia.org/wiki/Inborn_errors_of_purine–pyrimidine_metabolism
Inbreeding avoidance , or the inbreeding avoidance hypothesis , is a concept in evolutionary biology that refers to the prevention of the harmful effects of inbreeding . Animals only rarely exhibit inbreeding avoidance. [ 1 ] The inbreeding avoidance hypothesis posits that certain mechanisms develop within a species, or within a given population of a species, as a result of assortative mating and natural and sexual selection , in order to prevent breeding among related individuals. Although inbreeding may impose certain evolutionary costs, inbreeding avoidance, which limits the number of potential mates for a given individual, can inflict opportunity costs. [ 2 ] Therefore, a balance exists between inbreeding and inbreeding avoidance. This balance determines whether inbreeding avoidance mechanisms develop and the specific nature of such mechanisms. [ 3 ] A 2007 study showed that inbred mice had significantly reduced survival when they were reintroduced into a natural habitat. [ 4 ] Inbreeding can result in inbreeding depression , which is the reduction of fitness of a given population due to inbreeding. Inbreeding depression occurs via the appearance of disadvantageous traits due to the pairing of deleterious recessive alleles in a mating pair's progeny . [ 5 ] When two related individuals mate, the probability of deleterious recessive alleles pairing in the resulting offspring is higher than when non-related individuals mate, because of increased homozygosity . However, inbreeding also gives opportunity for genetic purging of deleterious alleles that otherwise would continue to exist in the population and could potentially increase in frequency over time. Another possible negative effect of inbreeding is a weakened immune system due to less diverse immunity alleles. [ 6 ] A review of the genetics of inbreeding depression in wild animal and plant populations, as well as in humans, led to the conclusion that inbreeding depression and its opposite, heterosis (hybrid vigor), are predominantly caused by the presence of recessive deleterious alleles in populations. [ 7 ] Inbreeding , including self-fertilization in plants and automictic parthenogenesis ( thelytoky ) in hymenoptera , tends to lead to the harmful expression of deleterious recessive alleles (inbreeding depression). Cross-fertilization between unrelated individuals ordinarily leads to the masking of deleterious recessive alleles in progeny. [ 8 ] [ 9 ] Many studies have demonstrated that homozygous individuals are often disadvantaged with respect to heterozygous individuals. [ 10 ] For example, a study conducted on a population of South African cheetahs demonstrated that the lack of genetic variability among individuals in the population has resulted in negative consequences for individuals, such as a greater rate of juvenile mortality and spermatozoal abnormalities. [ 11 ] When heterozygotes possess a fitness advantage relative to homozygotes, a population with a large number of homozygotes will have a relatively reduced fitness, thus leading to inbreeding depression. Through these described mechanisms, the effects of inbreeding depression are often severe enough to cause the evolution of inbreeding avoidance mechanisms. [ 12 ] Inbreeding avoidance mechanisms have evolved in response to selection against inbred offspring. Inbreeding avoidance occurs in nature by at least four mechanisms: kin recognition , dispersal, extra-pair/extra-group copulations, and delayed maturation/reproductive suppression.
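The increased pairing of recessive alleles described above can be quantified with the standard population-genetics relation: with inbreeding coefficient F, the expected frequency of a homozygous recessive genotype rises from q squared to q squared plus F times p times q. The short sketch below evaluates this for a hypothetical deleterious allele frequency and for F values corresponding to first-cousin and full-sibling matings; the allele frequency is an illustrative assumption, not a measured value.

```python
# Expected frequency of a homozygous recessive genotype under inbreeding:
#   freq(aa) = q^2 + F * p * q
# where q is the deleterious allele frequency, p = 1 - q, and F is the
# inbreeding coefficient of the offspring. The allele frequency used below
# is a hypothetical example value.

def homozygote_freq(q: float, f_coeff: float) -> float:
    """Frequency of the aa genotype given allele frequency q and inbreeding F."""
    p = 1.0 - q
    return q * q + f_coeff * p * q

q = 0.01  # hypothetical frequency of a deleterious recessive allele
for label, f_coeff in [("random mating", 0.0),
                       ("first cousins", 1 / 16),
                       ("full siblings", 1 / 4)]:
    freq = homozygote_freq(q, f_coeff)
    print(f"{label:>14}: F={f_coeff:.4f}, freq(aa)={freq:.6f} "
          f"({freq / (q * q):.0f}x the outbred rate)")
```

Even at this low allele frequency, full-sibling mating raises the expected frequency of affected offspring by roughly an order of magnitude relative to random mating, which is why inbreeding depression can be strong enough to select for the avoidance mechanisms discussed below.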
[ 3 ] [ 12 ] These mechanisms are not mutually exclusive and more than one can occur in a population at a given time. Kin recognition is the mechanism by which individuals identify and avoid mating with closely related conspecifics . There have been numerous documented examples of instances in which individuals are shown to find closely related conspecifics unattractive. In one set of studies, researchers formed artificial relative and non-relative mate-pairs (artificial meaning they preferentially paired individuals to mate for the purposes of the experiments) and compared the reproductive results of the two groups. In these studies, paired relatives demonstrated reduced reproduction and higher mating reluctance when compared with non-relatives. [ 12 ] [ 13 ] [ 14 ] [ 15 ] For example, in a study by Simmons in field crickets, female crickets exhibited greater mating latency for paired siblings and half-siblings than with non-siblings. [ 13 ] In another set of studies, researchers allowed individuals to choose their mates from conspecifics that lie on a spectrum of relatedness. In this set, individuals were more likely to choose non-related over related conspecifics. [ 12 ] [ 14 ] [ 16 ] For example, in a study by Krackow et al., male wild house mice were set up in an arena with four separate openings leading to cages with bedding from conspecifics. The conspecifics exhibited a range of relatedness to the test subjects, and the males significantly preferred the bedding of non-siblings to the bedding of related females. [ 14 ] Studies have shown that kin recognition is more developed in species in which dispersal patterns facilitate frequent adult kin encounters. [ 12 ] There is a significant amount of variation in the mechanisms used for kin recognition. These mechanisms include recognition based on association or familiarity, an individual's own phenotypic cues, chemical cues, and the MHC genes . In association/familiarity mechanisms, individuals learn the phenotypic profiles of their kin and use this template for kin recognition. [ 12 ] Many species accomplish this by becoming "familiar" with their siblings, litter mates, or nestmates. These species rely on offspring being reared in close proximity to achieve kin recognition. This is called the Westermarck effect . [ 17 ] For example, Holmes and Sherman conducted a comparative study in Arctic ground squirrels and Belding's ground squirrels. They manipulated the reared groups to include both siblings and cross-fostered nestmates and found that in both species the individuals were equally aggressive toward their nestmates, regardless of kinship. [ 18 ] In certain species where social groups are highly stable, relatedness and association between infants and other individuals are usually highly correlated. [ 12 ] [ 19 ] Therefore, degree of association can be used as a meter for kin recognition. Individuals can also use their own characteristics or phenotype as a template in kin recognition. For example, in one study, Mateo and Johnston had golden hamsters reared with only non-kin then later had them differentiate between odors of related and non-related individuals without any postnatal encounters with kin. The hamsters were able to discriminate between the odors, demonstrating the use of their own phenotype for the purpose of kin recognition. [ 20 ] This study also provides an example of a species utilizing chemical cues for kin recognition. The major histocompatibility complex genes , or MHC genes, have been implicated in kin recognition. 
[ 21 ] One idea is that the MHC genes code for a specific pheromone profile for each individual, which are used to discriminate between kin and non-kin conspecifics. Several studies have demonstrated the involvement of the MHC genes in kin recognition. For example, Manning et al. conducted a study in house mice that looked at the species's behavior of communal nesting, or nursing one's own pups as well as the pups of other individuals. As Manning et al. state, kin selection theory predicts that the house mice will selectively nurse the pups of their relatives in order to maximize inclusive fitness. Manning et al. demonstrate that the house mice utilize the MHC genes in the process of discriminating between kin by preferring individuals who share the same allelic forms the MHC genes. [ 22 ] The possible use of olfaction -biased mechanisms in human kin recognition and inbreeding avoidance was examined in three different types of study. [ 23 ] The results indicated that olfaction may help mediate the development during childhood of incest avoidance (the Westermarck effect ). Experiments using in vitro fertilization in the mouse, provided evidence of sperm selection at the gametic level. [ 24 ] When sperm of sibling and non-sibling males were mixed, a fertilization bias towards the sperm of the non-sibling males was observed. The results were interpreted as egg-driven sperm selection against related sperm. Experiments were performed with the dioecious plant Silene latifolia to test whether post- pollination selection favors less related pollen donors and reduces inbreeding . [ 25 ] The results showed that in S. latifolia , and presumably in other plant systems with inbreeding depression , pollen or embryo selection after multiple-donor pollination may reduce inbreeding. Some species will adopt dispersal as a way to separate close relatives and prevent inbreeding. [ 12 ] The initial dispersal route species may take is known as natal dispersal, whereby individuals move away from the area of birth. Subsequently, species may then resort to breeding dispersal, whereby individuals move from one non-natal group to another. Nelson-Flower et al. (2012) conducted a study on southern pied babblers and found that individuals may travel farther distances from natal groups than from non-natal groups. [ 26 ] This may be attributed to the possibility of encountering kin within local ranges when dispersing. The extent to which an individual in a particular species will disperse depends on whether the benefits of dispersing can outweigh both the costs of inbreeding and the costs of dispersal. Long‐distance movements can bear mortality risks and energetic costs. [ 27 ] In many cases of dispersal, one sex shows a greater tendency to disperse from their natal area than the opposite sex. [ 28 ] The extent of bias for a particular sex is dependent on numerous factors which include, but are not limited to: mating system, social organization, inbreeding and dispersal costs, and physiological factors. [ 27 ] [ 28 ] [ 29 ] [ 30 ] When the costs and benefits of dispersal are symmetric for both males and females, then no sex-biased dispersal is expected to be observed in species. [ 27 ] Birds tend to adopt monogamous mating systems in which the males remain in their natal groups to defend familiar territories with high resource quality. [ 28 ] Females generally have high energy expenditure when producing offspring, therefore inbreeding is costly for the females in terms of offspring survival and reproductive success. 
Females will then benefit more by dispersing and choosing amongst these territorial males. In addition, according to the Oedipus hypothesis , daughters of female birds can cheat their mothers through brood parasitism , so mothers will evict their daughters from the nest, forcing them to disperse. Female dispersal is not seen only in birds; males may remain philopatric in mammals when the average adult male residency in a breeding group exceeds the average age for female maturation and conception. [ 30 ] For example, in a community of chimpanzees in Gombe National Park, males tend to remain in their natal community for the duration of their lives, while females typically move to other communities as soon as they reach maturity. [ 31 ] Male dispersal is more common in mammals with cooperative breeding and polygynous systems. Australian marsupial juvenile males have a greater tendency to disperse from their natal groups, while the females remain philopatric. [ 32 ] In Antechinus , this is because males die immediately after mating; when males disperse to mate, they therefore often encounter natal groups of females in which no males are present. Furthermore, the Oedipus hypothesis also states that fathers in polygynous systems will evict sons with the potential to cuckold them. [ 28 ] Polygynous mating systems also influence intrasexual competition between males: in cases where males can guard multiple females and exert their dominance, subordinate males are often forced to disperse to other non-natal groups. When species adopt alternative inbreeding avoidance mechanisms, they can indirectly influence whether a species will disperse. Female choice for non-natal group males, for example, then selects for male dispersal. The delayed sexual maturation of offspring in the presence of parents is another mechanism by which individuals avoid inbreeding. Delayed maturation scenarios can involve the removal of the original, opposite-sex parent, as is the case in female lions that exhibit estrus earlier following the replacement of their fathers with new males. Another form of delayed maturation involves parental presence that inhibits reproductive activity, such as in mature marmoset offspring that are reproductively suppressed in the presence of opposite-sex parents and siblings in their social groups. [ 12 ] Reproductive suppression occurs when sexually mature individuals in a group are prevented from reproducing due to behavioral or chemical stimuli from other group members that suppress breeding behavior. [ 33 ] Social cues from the surrounding environment often dictate when reproductive activity is suppressed, and such suppression usually involves interactions between same-sex adults. If the current conditions for reproduction are unfavorable, such as when presented with only inbreeding as a means to reproduce, individuals may increase their lifetime reproductive success by timing their reproductive attempts to occur during more favorable conditions. This can be achieved by individuals suppressing their reproductive activity in poor reproduction conditions. Inbreeding avoidance between philopatric offspring and their parents/siblings severely restricts breeding opportunities of subordinates living in their social groups. A study by O'Riain et al. (2000) examined meerkat social groups and factors affecting reproductive suppression in subordinate females. They found that in family groups, the absence of a dominant individual of either sex led to reproductive quiescence .
Reproductive activity only resumed when another sexually mature female obtained dominance and an unrelated male immigrated into the group. Reproduction thus required the presence of an unrelated opposite-sex partner, which acted as an appropriate stimulus for reproductively suppressed subordinates that had remained quiescent in the presence of the original dominant individual. [ 33 ] In various species, females benefit by mating with multiple males, thus producing more offspring of greater genetic diversity and, potentially, higher quality. Females that are pair-bonded to a male of poor genetic quality, as can be the case in inbreeding, are more likely to engage in extra-pair copulations in order to improve their reproductive success and the survivability of their offspring. [ 34 ] This improved quality in offspring is generated from either the intrinsic effects of good genes , or from interactions between compatible genes from the parents. In inbreeding, loss of heterozygosity contributes to the overall decreased reproductive success, but when individuals engage in extra-pair copulations, mating between genetically dissimilar individuals leads to increased heterozygosity. [ 35 ] Extra-pair copulations involve a number of costs and benefits for both male and female animals. For males, extra-pair copulation involves spending more time away from the original pairing in search of other females. This risks the original female being fertilized by other males while the original male is searching for partners, leading to a loss of paternity. The tradeoff for this cost depends entirely on whether the male is able to fertilize the other females’ eggs in the extra-pair copulation. For females, extra-pair copulations ensure egg fertilization , and provide enhanced genetic variety with compatible sperm that avoid expression of the damaging recessive genes that come with inbreeding. [ 36 ] Through extra-pair mating, females are able to maximize the genetic variability of their offspring, providing protection against environmental changes that may otherwise target the more homozygous populations that inbreeding often produces. [ 37 ] Whether a female engages in extra-pair copulations for the sake of inbreeding avoidance depends on whether the costs of extra-pair copulation outweigh the costs of inbreeding. In extra-pair copulations, both inbreeding costs and pair-bond male loss (leading to the loss of paternal care) must be weighed against the benefits of reproductive success that extra-pair copulation provides. When paternal care is absent or has little influence on offspring survivability, it is generally favorable for females to engage in extra-pair mating to increase reproductive success and avoid inbreeding. [ 34 ] Inbreeding avoidance has been studied via three main methods: (1) observing individual behavior in the presence and absence of close kin, (2) contrasting costs of avoidance with costs of tolerating close inbreeding, and (3) comparing observed and random frequencies of close inbreeding. [ 38 ] No method is perfect, giving rise to questions about the completeness and consistency of the inbreeding avoidance hypothesis. [ 38 ] [ 39 ] Although the first option, individual behavioral observation, is preferred and most widely used, there is still debate over whether it can provide definitive evidence for inbreeding avoidance. A majority of the literature on inbreeding avoidance was published at least 15 years ago, leaving room for the field to develop through contemporary experimental methods and technology.
Molecular techniques such as DNA fingerprinting have become more advanced and accessible, improving the efficiency and accuracy of measuring relatedness. [ 12 ] Studying inbreeding avoidance in carnivores has garnered increased interest due to ongoing work to explain their social behaviors. [ 40 ]
https://en.wikipedia.org/wiki/Inbreeding_avoidance
Inca technology includes devices, technologies and construction methods used by the Inca people of western South America (between the 1100s and their conquest by Spain in the 1500s), including the methods Inca engineers used to construct the cities and road network of the Inca Empire . The builders of the empire planned and built impressive waterworks in their city centers, including canals , fountains , drainage systems and expansive irrigation . The Incas' infrastructure and water supply system have been hailed as “the pinnacle of the architectural and engineering works of the Inca civilization”. [ 1 ] Major Inca centers were chosen by experts who decided the site, its apportionment, and the basic layout of the city. Many of these cities display great hydraulic engineering marvels. For example, in the city of Tipon , three irrigation canals diverted water from the Rio Pukara, about 1.35 km to the north, to Tipon's terraces. [ 2 ] Tipon also had natural springs, over which fountains were built to supply noble residents with water for non-agricultural purposes. [ 2 ] In 1450, Machu Picchu was constructed, [ 3 ] a date determined from carbon-14 test results. [ 3 ] The famous lost Inca city is an architectural remnant of a society whose understanding of civil and hydraulic engineering was advanced. Today, it is famously known for its remarkable preservation as well as the beauty of its buildings' architecture. [ 4 ] The site is located 120 km northwest of Cuzco in the Urubamba river valley, Peru. [ 4 ] Because the site sits atop a mountain at 2,560 m above sea level, the city planners had to consider its steep slopes as well as the humid and rainy climate. [ 4 ] The Inca people built this site atop a hill which was terraced (most likely for agricultural purposes). [ 4 ] In addition to terraces, Machu Picchu is composed of two additional basic architectural elements: elite residential compounds and religious structures. [ 4 ] The site is full of staircases and sculpted rock, which were also important to their architecture and engineering practices. [ 4 ] Making models out of clay before beginning to build, the city planners remained consistent with Inca architecture and laid out a city that separated the agricultural and urban areas. [ citation needed ] Before construction began the engineers had to assess the spring and whether it could provide for all of the city’s anticipated citizens. After evaluating the water supply, the civil engineers designed a 2,457-foot (749 m)-long canal to what would become the city’s center. The canal descends the mountain slope, enters the city walls, passes through the agricultural sector, then crosses the inner wall into the urban sector, where it feeds a series of fountains. The fountains are publicly accessible and partially enclosed by walls that are typically about 1.2 m high, except for the lowest fountain, which is a private fountain for the Temple of the Condor and has higher walls. At the head of each fountain, a cut stone conduit carries the water to a rectangular spout, which is shaped to create a jet of water suitable for filling an aryballos, a typical Inca clay water jug. The water collects in a stone basin in the floor of the fountain, then enters a circular drain that delivers it to the approach channel for the next fountain. The Incas built the canals on steady grades, using cut stones as the water channels.
Most citizens worked on the construction and maintenance of the canal and irrigation systems, using bronze and stone tools to complete the water-tight stone canals. The water then traveled through the channels into sixteen fountains known as the "stairway of fountains", with the first water source reserved for the Emperor . This incredible feat supplied the population of Machu Picchu, which varied between 300 and 1000 people when the emperor was present, and also helped carry irrigation water to the farming terraces. The fountains and canal system were built so well that they would, after a few minor repairs, still work today. To go along with the Incas' advanced water supply system, an equally impressive drainage system was built as well. Machu Picchu contains nearly 130 outlets in the center that moved the water out of the city through walls and other structures. The agricultural terraces are a feature of the complicated drainage system; the terraces helped avoid erosion and were built on a slope to direct excess water into channels that ran alongside the stairways. These channels carried the runoff into the main drain, avoiding the main water supply. This carefully planned drainage system shows the Incas' concern and appreciation for clean water. Water engineer Ken Wright and his archaeological team found the emperor’s bathing room complete with a separate drain that carried off his used bath water so it would never re-enter Machu Picchu’s water supply. The Inca faced many problems with living in areas with steep terrain. Two large issues were soil erosion and limited area to grow crops. [ 5 ] [ 6 ] The solution to these problems was the development of terraces, called Andenes . These terraces allowed the Inca to utilize land for farming that they never could in the past. [ 6 ] How a terrace functions and looks, and how it is geometrically aligned, all depend on the slope of the land. [ 6 ] The different layering of materials is part of what makes the terraces so successful. It starts with a base layer of large rocks, followed by a second layer of smaller rocks, then a layer of sand-like material, and finally the topsoil. A simulation of this layered construction is available online. [ 7 ] The most impressive part of the terraces was their drainage systems. Drain outlets were placed in the numerous stone retaining walls. [ 6 ] [ 8 ] The larger rocks at the base of each terrace level are what allowed the water to flow more easily through the larger spaces in between the rocks, eventually coming out at the “Main Drain”. [ 8 ] The Inca even constructed different types of drainage channels that were used for different purposes throughout the city. Studies have indicated that when terraces like the ones in the Colca Valley were being constructed, the first step was excavating into the slope, and then a subsequent infilling of the slope. [ 6 ] A retaining wall was built to hold the fill material. [ 8 ] This wall had many uses, including absorbing heat from the sun during the day and radiating it back out at night, often keeping crops from freezing in the chilling nighttime temperatures, and holding back the different layers of sediment. After the wall was built, the larger rocks would be placed on the bottom, then smaller rocks, then sand, then soil. [ 6 ] [ 8 ] Since the soil was now level, the water did not rush down the side of the mountain, which is what causes erosion.
Previously, this erosion was so powerful that it had the potential to wipe out major areas of the Inca road, as well as wash away all of the nutrients and fertile soil. [ 9 ] Since the soil never washed away, nutrients would always be added from previously grown crops year after year. [ 6 ] The Inca even grew specific crops together, to balance out the optimal amount of nutrients for all plants. For example, a planting method known as the "three sisters" incorporated the growth of corn, beans, and squash in the same terrace. [ 10 ] This was because the nitrogen fixed by the beans helped the corn grow, while the squash acted as a mulch, keeping the soil moist and repelling weeds. All food grown or killed by the Inca could be freeze-dried . Freeze-drying is still very popular today. One of the biggest benefits of freeze-drying is that it removes all of the water and moisture but retains all of the nutritional value. [ 11 ] The water in meats and vegetables is what gives them a lot of their weight. This made freeze-drying very popular for transportation and storage, because dried meats lasted twice as long as non-freeze-dried foods. [ 12 ] The Inca diet was largely vegetarian, because large wild game was often reserved for special occasions. A very common and well-known freeze-dried item was the potato, which once freeze-dried was known as Chuño . [ 12 ] Common meats to freeze-dry included llama, alpaca, duck, and guinea pig. [ 11 ] [ 12 ] Jerky ( ch'arki in Quechua ) was much easier to transport and lasted longer than undried meats. [ 12 ] These foods all had the potential to be freeze-dried. Both meats and vegetables went through a similar freezing process. They would start by laying all the different foods out on rocks, where, during the cold, dry nights at high altitude, they would freeze. [ 11 ] The next morning, a combination of the thin dry air and the heat from the sun would melt the ice and evaporate all the moisture. They would also trample over the food in the morning to get any extra moisture out. [ 11 ] The process of freeze-drying was important for transportation and storage. [ 11 ] [ 12 ] The high elevation (low atmospheric pressure) and low temperatures of the Andes mountains are what allowed them to take advantage of this process. The chronicler Inca Garcilaso de la Vega described the use of a burning mirror as part of the annual " Inti Raymi " (sun festival): "The fire for that sacrifice had to be new, given by the hand of the sun, as they said. For which they took a large bracelet, which they call Chipana (similar to others that the Incas commonly wore on the left wrist) which the high priest had; it was large, larger than the common ones, it had for a medallion a concave vessel, the shape of a half orange and brightly polished, they put it against the sun, and at a certain point where the rays that came out of the vessel hit each other, they put a bit of finely unravelled cotton (they did not know how to make tinder), which caught fire naturally in a short space of time. With this fire, thus given by the hand of the sun, the sacrifice was burned and all the meat of that day was roasted." [ 13 ] The vast size of the Inca empire made it essential that efficient and effective transportation systems were created and built to assist in the exchanging of goods, services, people, etc.
At one point, "their (the Inca) empire eventually extended across western South America from Quito in the north to Santiago in the south, making it the largest empire ever seen in the Americas and the largest in the world at that time (between c. 1400 and 1533 CE)." [ 12 ] It is known to have "extended some 3500-4000 km along the mountainous backbone of South America." [ 4 ] [ 14 ] The trails, roads, and bridges were designed not only to link the empire physically, but these structures also helped the empire to maintain communication. Rope bridges were an integral part of the Inca road system . "Five centuries ago, the Andes were strung with suspension bridges . By some estimates there were as many as 200 of them." [ 15 ] [ 16 ] As pictured to the right, these structures were used to connect two land masses, allowing for the flow of ideas, goods, people, animals, etc. across the Incan empire . "The Inca suspension bridges achieved clear spans of at least 150 feet, probably much greater. [ 17 ] This was a longer span than any European masonry bridges at the time." [ 16 ] Since the Incan people did not use wheeled vehicles, most traveled by foot and/or used animals to help in the transporting of goods. [ 14 ] [ 12 ] Although these bridges were assembled using twisted mountain grass, other vegetation, and saplings, they were dependable. [ 15 ] [ 16 ] These structures were able to both support the weight of traveling people and animals as well as withstand weather conditions over certain amounts of time. Since grass rots away over time, the bridges had to be rebuilt every year. [ 18 ] When the Inca people began building a grass suspension bridge, they would first gather natural materials of grass and other vegetation. They would then braid these elements together into rope. This contribution was made by the Inca women. [ 18 ] Vast amounts of thin-looking rope were produced. [ 17 ] The villagers would then deliver their quota of rope to the builders. [ 17 ] The rope was then divided into sections. [ 17 ] Each section consisted of an amount of thin rope being laid out together in preparation to create a thicker rope cord. [ 17 ] Once the sections are laid out, the strands of rope made earlier are twisted together tightly and evenly, producing the larger and thicker rope cord. [ 17 ] These larger ropes are then braided together to create cables, some as thick as a human torso. [ 15 ] [ 16 ] [ 17 ] Depending on the dimensions of the cable, each could weigh up to 200 pounds. [ 17 ] These cables were then delivered to the bridge site. [ 17 ] It was considered bad luck for women to be anywhere near the construction of the bridge, so the Inca men were therefore in charge of the on-site construction. [ 18 ] At the bridge site, a builder(s) would travel to the opposite landmass that they were working to connect. [ 17 ] Once they were positioned on the opposite side, one of the thin, light-weight ropes would be thrown over to them. [ 17 ] This rope would then be used to pull the main cables over the gorge. [ 17 ] Stone beams were built on either side of the gorge and were used in helping to position and secure the cables. [ 17 ] The cables were wrapped around these stone beams and tightened inch by inch to decrease any slack in the bridge. [ 17 ] Once this was finished, the riggers carefully made their way across the hanging cables, tying the foot-ropes together and connecting the handrails and the foot-ropes with the remainder of the thin grass ropes. 
[ 17 ] Not all rope bridges were exactly alike in terms of design and build. Some riggers also wove pieces of wood into the foot-ropes. Modern-day rope bridge builders in Huinchiri, Peru make offerings to Pacha Mama, otherwise known as "Mother Earth," throughout their building process to ensure that the bridge will be strong and safe. [ 18 ] [ 19 ] This may have been a practice used by the Inca people since they too were religious. If all went smoothly and if tasks were performed in a timely fashion, a bridge had the potential of being constructed in three days. [ 18 ] [ 19 ] People today continue to honor Incan traditions and expand their knowledge in the building of rope bridges . "Each June in Huinchiri, Peru, four Quechua communities on two sides of a gorge join together to build a bridge out of grass, creating a form of ancient infrastructure that dates back at least five centuries to the Inca Empire ." [ 20 ] The previous Q’eswachaka Bridge is cut down and swept away by the Apurímac River current and a new bridge is built in its place. [ 20 ] [ 21 ] This tradition links the Quechua communities of the Huinchiri, Chaupibanda, Choccayhua, and Ccollana Quehue to their past ancestors. [ 21 ] “According to our grandfathers, this bridge was built during the time of the Inkas 600 years ago, and on it they walked their llamas and alpacas carrying their produce.” - Eleuterio Ccallo Tapia [ 21 ] "A small portion of a 60-foot replica built by Quechua weavers is on view in The Great Inka Road: Engineering an Empire at the Smithsonian’s National Museum of the American Indian in Washington, DC." [ 20 ] This exhibit was on display at the museum through June 27, 2021. [ 22 ] Visitors are also encouraged to experience this exhibit online. [ 23 ] Either way, museums like the Smithsonian are working to preserve and display examples and knowledge of the Inca inspired rope bridges today. John Wilford shares in the New York Times that students at the Massachusetts Institute of Technology are learning much more than how objects are made. They are being taught to observe and test how archeology entwines with culture. [ 16 ] Wilford's article was written in 2007. [ 16 ] At this time, students involved in a course called “materials in human experience,” were busy making a 60-foot-long fiber bridge in the Peruvian style. [ 16 ] Through this project, they were introduced to the Inca people's way of thinking and building. [ 16 ] After creating their ropes and cables, they had planned to stretch the bridge across a dry basin between two campus buildings. [ 16 ] According to author Mark Cartwright, " Inca roads covered over 40,000 km (25,000 miles), principally in two main highways running north to south across the Inca Empire, which eventually spread over ancient Peru, Ecuador, Chile and Bolivia." [ 12 ] Several sources challenge Cartwright's claim in stating that the Inca roads covered either more or less area then he describes. This number is difficult to solidify since some of the pathways of the Inca still may remain unaccounted for, being that they may have been washed away or covered by natural forces. "Inca engineers were also undaunted by geographical difficulties and built roads across ravines, rivers, deserts, and mountain passes up to 5,000 meters high." [ 12 ] Many of the constructed roads are not uniform in design. [ 12 ] Most of the uncovered roads are about one to four meters wide. [ 12 ] Although this is true, some roads, such as the highway in Huanuco Pampa province, can be much larger. 
[ 12 ] As mentioned in the Pathway systems section, the Inca people mainly traveled on foot. Knowing this, the roads created were most likely built and paved for both humans and animals to walk and/or run along. Several roads were paved with stones or cobbles and some were "edged and protected with the use of small stone walls, stone markers, wooden or cane posts, or piles of stones." [ 12 ] Drainage was something that was of particular interest and importance to the Inca people. Drains and culverts were built to ensure that rainwater would effectively run off of the road's surface. [ 12 ] The drains and culverts helped in directing the accumulating water either along or under the road. [ 12 ] As mentioned in the section Pathway systems , there were several uses for the Inca roads. The most obvious way in which the Inca people used the road/trail systems was to transport goods. They did this on foot and sometimes with the help of animals (llamas and alpacas). Not only were goods transported throughout the vast empire, but so were ideas and messages. The Inca needed a system of communication, so they relied on Chasquis , otherwise known as messengers. [ 24 ] The Chasquis were chosen among the strongest and fittest young males. [ 24 ] They ran several miles per day, only to deliver messages. [ 24 ] These messengers resided in cabins called " tambos ." [ 24 ] These structures were positioned along the roads and built by the Inca people. [ 24 ] These buildings provided the Chasquis with a place to rest. [ 24 ] These places of rest could also be used to house the Inca army in a situation of rebellion or war. [ 24 ] Today, many people travel to South America to hike the Inca trail. [ 25 ] Walking and climbing the trail not only serves the purpose of allowing visitors to experience the historic pathways of the Inca people, but it allows for tourists and locals to see the Inca ruins, mountains, and exotic vegetation and animals. [ 25 ]
https://en.wikipedia.org/wiki/Inca_technology
Incertae sedis ( Latin for 'of uncertain placement') [ 2 ] or problematica is a term used for a taxonomic group whose broader relationships are unknown or undefined. [ 3 ] Alternatively, such groups are frequently referred to as "enigmatic taxa". [ 4 ] In the system of open nomenclature , uncertainty at specific taxonomic levels is indicated by incertae familiae (of uncertain family), incerti subordinis (of uncertain suborder), incerti ordinis (of uncertain order) and similar terms. [ 5 ] When formally naming a taxon, uncertainty about its taxonomic classification can be problematic. The International Code of Nomenclature for algae, fungi, and plants stipulates that "species and subdivisions of genera must be assigned to genera, and infraspecific taxa must be assigned to species, because their names are combinations", but ranks higher than the genus may be assigned incertae sedis . [ 14 ] This excerpt from a 2007 scientific paper about crustaceans of the Kuril–Kamchatka Trench and the Japan Trench describes typical circumstances in which this category is applied, in discussing: [ 15 ] ...the removal of many genera from new and existing families into a state of incertae sedis. Their reduced status was attributed largely to poor or inadequate descriptions but it was accepted that some of the vagueness in the analysis was due to insufficient character states. It is also evident that a proportion of the characters used in the analysis, or their given states for particular taxa, were inappropriate or invalid. Additional complexity, and factors that have misled earlier authorities, are intrusion by extensive homoplasies , apparent character state reversals and convergent evolution . If a formal phylogenetic analysis is conducted that does not include a certain taxon, the authors might choose to label the taxon incertae sedis instead of guessing its placement. This is particularly common when molecular phylogenies are generated, since tissue for many rare organisms is hard to obtain. It is also a common scenario when fossil taxa are included, since many fossils are defined based on partial information. For example, if the phylogeny was constructed using soft tissue and vertebrae as principal characters and the taxon in question is only known from a single tooth, it would be necessary to label it incertae sedis . [ 5 ] If conflicting results exist or if there is not a consensus among researchers as to how a taxon relates to other organisms, it may be listed as incertae sedis until the conflict is resolved. [ 5 ] The term incertae sedis refers to uncertainty about the phylogenetic position of a taxon, which may be expressed, among other ways, by using a question mark after or before a taxon name. This should be distinguished from the situation where either it is uncertain how to use a name, often because the types have been lost ( nomen dubium , species inquirenda ), or it is uncertain whether a poorly preserved specimen should be included within a given species or genus, which is often expressed using a 'cf.' (from Latin confer , compare, before a taxon name); such a convention is especially widespread in palaeontology. [ 16 ] In zoological nomenclature, " incertae sedis " is not a nomenclatural term at all per se , but is used by taxonomists in their classifications to mean "of uncertain taxonomic position". [ 2 ] In botany, a name is not validly published if it is not accepted by the author in the same publication (Article 36.1). [ 14 ]
In zoology, a name proposed conditionally may be available under certain conditions (Articles 11 and 15). [ 2 ] For uncertainties at lower levels, some authors have proposed a system of "open nomenclature", suggesting that question marks be used to denote a questionable assignment. [ 5 ] For example, if a new species was given the specific epithet album by Anton and attributed with uncertainty to Agenus , it could be denoted " Agenus ? album Anton (?Anton)"; the "(?Anton)" indicates the author that assigned the question mark. [ 5 ] So if Anton described Agenus album , and Bruno called the assignment into doubt, this could be denoted " Agenus ? album (Anton) (?Bruno)", with the parentheses around Anton because the original assignment (to Agenus ) was modified (to Agenus ?) by Bruno. [ 5 ] This practice is not included in the International Code of Zoological Nomenclature , and is used only by paleontologists. [ 5 ]
https://en.wikipedia.org/wiki/Incertae_sedis
In mathematics , an incidence matrix is a logical matrix that shows the relationship between two classes of objects, usually called an incidence relation . If the first class is X and the second is Y , the matrix has one row for each element of X and one column for each element of Y . The entry in row x and column y is 1 if x and y are related (called incident in this context) and 0 if they are not. There are variations; see below. The incidence matrix is a common graph representation in graph theory . It is different from an adjacency matrix , which encodes the relation of vertex-vertex pairs. In graph theory an undirected graph has two kinds of incidence matrices: unoriented and oriented. The unoriented incidence matrix (or simply incidence matrix ) of an undirected graph is an n × m matrix B , where n and m are the numbers of vertices and edges respectively, such that B_{i,j} = 1 if vertex v_i and edge e_j are incident, and 0 otherwise. For example, the incidence matrix of the undirected graph shown on the right is a matrix consisting of 4 rows (corresponding to the four vertices, 1–4) and 4 columns (corresponding to the four edges, e_1, e_2, e_3, e_4):

B = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{bmatrix}.

If we look at the incidence matrix, we see that the sum of each column is equal to 2. This is because each edge is incident to exactly two vertices, one at each end. The incidence matrix of a directed graph is an n × m matrix B where n and m are the number of vertices and edges respectively, such that B_{i,j} = −1 if edge e_j leaves vertex v_i, 1 if it enters vertex v_i, and 0 otherwise. (Many authors use the opposite sign convention.) The oriented incidence matrix of an undirected graph is the incidence matrix, in the sense of directed graphs, of any orientation of the graph. That is, in the column of edge e , there is one 1 in the row corresponding to one vertex of e and one −1 in the row corresponding to the other vertex of e , and all other rows have 0. The oriented incidence matrix is unique up to negation of any of the columns, since negating the entries of a column corresponds to reversing the orientation of an edge. The unoriented incidence matrix of a graph G is related to the adjacency matrix of its line graph L ( G ) by the following theorem:

A(L(G)) = B(G)^{\mathsf{T}} B(G) - 2 I_m ,

where A ( L ( G )) is the adjacency matrix of the line graph of G , B ( G ) is the incidence matrix, and I_m is the identity matrix of dimension m . The discrete Laplacian (or Kirchhoff matrix) is obtained from the oriented incidence matrix B ( G ) by the formula

Q = B(G) B(G)^{\mathsf{T}} .

The integral cycle space of a graph is equal to the null space of its oriented incidence matrix, viewed as a matrix over the integers or real or complex numbers . The binary cycle space is the null space of its oriented or unoriented incidence matrix, viewed as a matrix over the two-element field . The incidence matrix of a signed graph is a generalization of the oriented incidence matrix. It is the incidence matrix of any bidirected graph that orients the given signed graph. The column of a positive edge has a 1 in the row corresponding to one endpoint and a −1 in the row corresponding to the other endpoint, just like an edge in an ordinary (unsigned) graph. The column of a negative edge has either a 1 or a −1 in both rows. The line graph and Kirchhoff matrix properties generalize to signed graphs.
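As an informal illustration of the graph-theoretic definitions and identities above, the following Python sketch (not part of the original article; it assumes NumPy and uses the same 4-vertex, 4-edge example graph) constructs the unoriented and an oriented incidence matrix and checks the column sums, the line-graph identity, and the Kirchhoff-matrix formula.

```python
import numpy as np

vertices = [1, 2, 3, 4]
edges = [(1, 2), (1, 3), (1, 4), (3, 4)]   # e1..e4, read off the example matrix above

n, m = len(vertices), len(edges)

# Unoriented incidence matrix: B[i, j] = 1 when vertex i+1 is an endpoint of edge j
B = np.zeros((n, m), dtype=int)
for j, (u, v) in enumerate(edges):
    B[u - 1, j] = 1
    B[v - 1, j] = 1

assert (B.sum(axis=0) == 2).all()          # every column sums to 2

# Line-graph identity: A(L(G)) = B^T B - 2 I_m
A_line = B.T @ B - 2 * np.eye(m, dtype=int)

# Oriented incidence matrix: pick an arbitrary orientation, here u -> v for each edge
B_or = np.array(B)
for j, (u, v) in enumerate(edges):
    B_or[u - 1, j] = -1

# Kirchhoff (Laplacian) matrix: Q = B_or B_or^T = D - A
Q = B_or @ B_or.T
degrees = np.diag(B.sum(axis=1))
A = degrees - Q                            # recovers the ordinary adjacency matrix
print(A_line)
print(Q)
```

Note that negating any column of the oriented matrix (reversing an edge's orientation) leaves Q unchanged, in line with the uniqueness-up-to-negation remark above.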
The definitions of incidence matrix apply to graphs with loops and multiple edges . The column of an oriented incidence matrix that corresponds to a loop is all zero, unless the graph is signed and the loop is negative; then the column is all zero except for ±2 in the row of its incident vertex. A weighted graph can be represented using the weight of the edge in place of a 1. For example, the incidence matrix of the graph to the right is:

\begin{bmatrix} 2 & 1 & 5 & 0 \\ 2 & 0 & 0 & 0 \\ 0 & 1 & 0 & 6 \\ 0 & 0 & 5 & 6 \end{bmatrix}.

Because the edges of ordinary graphs can only have two vertices (one at each end), the column of an incidence matrix for graphs can only have two non-zero entries. By contrast, a hypergraph can have multiple vertices assigned to one edge; thus, a general matrix of non-negative integers describes a hypergraph. The incidence matrix of an incidence structure C is a p × q matrix B (or its transpose), where p and q are the number of points and lines respectively, such that B_{i,j} = 1 if the point p_i and line L_j are incident and 0 otherwise. In this case, the incidence matrix is also a biadjacency matrix of the Levi graph of the structure. As there is a hypergraph for every Levi graph, and vice versa , the incidence matrix of an incidence structure describes a hypergraph. An important example is a finite geometry . For instance, in a finite plane, X is the set of points and Y is the set of lines. In a finite geometry of higher dimension, X could be the set of points and Y could be the set of subspaces of dimension one less than the dimension of the entire space (hyperplanes); or, more generally, X could be the set of all subspaces of one dimension d and Y the set of all subspaces of another dimension e , with incidence defined as containment. In a similar manner, the relationship between cells whose dimensions differ by one in a polytope can be represented by an incidence matrix. [ 1 ] Another example is a block design . Here X is a finite set of "points" and Y is a class of subsets of X , called "blocks", subject to rules that depend on the type of design. The incidence matrix is an important tool in the theory of block designs. For instance, it can be used to prove Fisher's inequality , a fundamental theorem of balanced incomplete 2-designs (BIBDs), that the number of blocks is at least the number of points. [ 2 ] Considering the blocks as a system of sets, the permanent of the incidence matrix is the number of systems of distinct representatives (SDRs).
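The block-design use of incidence matrices can be made concrete with a standard textbook example that is not taken from the article: the Fano plane, a 2-(7,3,1) design with seven points and seven blocks. The short Python sketch below (assuming NumPy) builds its point-block incidence matrix, observes that Fisher's inequality b ≥ v holds with equality, and checks the well-known identity N Nᵀ = (r − λ)I + λJ, where r is the number of blocks through each point and J is the all-ones matrix.

```python
import numpy as np

points = range(1, 8)
blocks = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

# Point-block incidence matrix N: N[i, j] = 1 when point i+1 lies in block j
N = np.array([[1 if p in b else 0 for b in blocks] for p in points])

v, b = N.shape                 # v = 7 points, b = 7 blocks, so Fisher's inequality b >= v holds
r, lam = 3, 1                  # each point lies on r = 3 blocks; each pair of points on lam = 1 block

expected = (r - lam) * np.eye(v, dtype=int) + lam * np.ones((v, v), dtype=int)
assert (N @ N.T == expected).all()
print(N)
```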
https://en.wikipedia.org/wiki/Incidence_matrix
In mathematics, an incidence poset or incidence order is a type of partially ordered set that represents the incidence relation between vertices and edges of an undirected graph . The incidence poset of a graph G has an element for each vertex or edge in G ; in this poset, there is an order relation x ≤ y if and only if either x = y or x is a vertex, y is an edge, and x is an endpoint of y . As an example, a zigzag poset or fence with an odd number of elements, with alternating order relations a < b > c < d ... is an incidence poset of a path graph . Every incidence poset of a non-empty graph has height two. Its width equals the number of edges plus the number of acyclic connected components. Incidence posets have been particularly studied with respect to their order dimension , and its relation to the properties of the underlying graph. The incidence poset of a connected graph G has order dimension at most two if and only if G is a path graph, and has order dimension at most three if and only if G is planar ( Schnyder's theorem ). [ 1 ] However, graphs whose incidence posets have order dimension 4 may be dense [ 2 ] and may have unbounded chromatic number . [ 3 ] Every complete graph on n vertices, and by extension every graph on n vertices, has an incidence poset with order dimension O (log log n ). [ 4 ] If an incidence poset has high dimension then it must contain copies of the incidence posets of all small trees either as sub-orders or as the duals of sub-orders. [ 5 ]
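As a small illustration of the definition (a sketch of my own, not drawn from the article), the following Python snippet builds the incidence poset of the three-vertex path graph a–b–c and lists its order relations, which form the five-element fence described above.

```python
# Illustrative sketch: the incidence poset of the path graph a - b - c.
# Elements are the vertices plus the edges {a,b} and {b,c}; a vertex lies
# below an edge exactly when it is one of that edge's endpoints.
vertices = ["a", "b", "c"]
edges = [frozenset({"a", "b"}), frozenset({"b", "c"})]
elements = vertices + edges

def leq(x, y):
    """x <= y iff x == y, or x is a vertex that is an endpoint of edge y."""
    return x == y or (x in vertices and isinstance(y, frozenset) and x in y)

# Strict relations: each vertex sits below each of its incident edges.
strict = [(x, y) for x in vertices for y in edges if leq(x, y)]
print(strict)
# e.g. [('a', frozenset({'a', 'b'})), ('b', frozenset({'a', 'b'})),
#       ('b', frozenset({'b', 'c'})), ('c', frozenset({'b', 'c'}))]
# Read as a Hasse diagram this is the fence a < ab > b < bc > c:
# five elements (an odd number) and height two.
```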
https://en.wikipedia.org/wiki/Incidence_poset
An incinerating toilet is a type of dry toilet that burns human feces instead of flushing them away with water , as does a flush toilet . [ 1 ] The thermal energy used to incinerate the waste can be derived from electricity, fuel, oil, or liquified petroleum gas. They are relatively inefficient because of the fuel used. [ 2 ] The first commercially successful incinerating toilet was the Destroilet, patented in 1946. Destroilets were used on ships in the 1960s when laws were passed to prevent the dumping of raw sewage into American waterways. [ 3 ] In 2011, the Bill & Melinda Gates Foundation launched the "Reinvent the Toilet Challenge" to promote safer, more effective ways to treat human excreta. Several research teams have received funding to work on developing toilets based on solid waste combustion. [ 4 ] For example, a toilet under development by RTI International is based on electrochemical disinfection and solid waste combustion. [ 5 ] This technology converts feces into burnable pieces and then uses thermoelectric devices to convert the thermal energy into electrical energy. [ citation needed ] Incinerating toilets may be powered by electricity , gas , dried feces or other energy sources. [ 6 ] [ 7 ] Incinerating toilets gather excrement in an integral ashpan and then incinerate it, reducing it to pathogen -free ash. [ 8 ] Some will also incinerate "grey water" created from showers and sinks. [ citation needed ] Incinerating toilets are used only for niche applications, which include:
https://en.wikipedia.org/wiki/Incinerating_toilet
Incineration is a waste treatment process that involves the combustion of substances contained in waste materials. [ 1 ] Industrial plants for waste incineration are commonly referred to as waste-to-energy facilities. Incineration and other high-temperature waste treatment systems are described as " thermal treatment ". Incineration of waste materials converts the waste into ash , flue gas and heat. The ash is mostly formed by the inorganic constituents of the waste and may take the form of solid lumps or particulates carried by the flue gas. The flue gases must be cleaned of gaseous and particulate pollutants before they are dispersed into the atmosphere . In some cases, the heat that is generated by incineration can be used to generate electric power . Incineration with energy recovery is one of several waste-to-energy technologies such as gasification , pyrolysis and anaerobic digestion . While incineration and gasification technologies are similar in principle, the energy produced from incineration is high-temperature heat whereas combustible gas is often the main energy product from gasification. Incineration and gasification may also be implemented without energy and materials recovery. In several countries, there are still concerns from experts and local communities about the environmental effect of incinerators (see arguments against incineration ). In some countries [ which? ] , incinerators built just a few decades ago [ when? ] often did not include a materials separation to remove hazardous, bulky or recyclable materials before combustion. These facilities tended to risk the health of the plant workers and the local environment due to inadequate levels of gas cleaning and combustion process control. Most of these facilities did not generate electricity. [ citation needed ] Incinerators reduce the solid mass of the original waste by 80–85% and the volume (already compressed somewhat in garbage trucks ) by 95–96%, depending on composition and degree of recovery of materials such as metals from the ash for recycling. [ 2 ] This means that while incineration does not completely replace landfilling , it significantly reduces the necessary volume for disposal. Garbage trucks often reduce the volume of waste in a built-in compressor before delivery to the incinerator. Alternatively, at landfills, the volume of the uncompressed garbage can be reduced by approximately 70% by using a stationary steel compressor, albeit with a significant energy cost. In many countries, simpler waste compaction is a common practice for compaction at landfills. [ 3 ] Incineration has particularly strong benefits for the treatment of certain waste types in niche areas such as clinical wastes and certain hazardous wastes where pathogens and toxins can be destroyed by high temperatures. Examples include chemical multi-product plants with diverse toxic or very toxic wastewater streams, which cannot be routed to a conventional wastewater treatment plant. Waste combustion is particularly popular in countries such as Japan, Singapore and the Netherlands, where land is a scarce resource. Denmark and Sweden have been leaders by using the energy generated from incineration for more than a century, in localised combined heat and power facilities supporting district heating schemes. [ 4 ] In 2005, waste incineration produced 4.8% of the electricity consumption and 13.7% of the total domestic heat consumption in Denmark. 
[ 5 ] A number of other European countries rely heavily on incineration for handling municipal waste, in particular Luxembourg , the Netherlands, Germany, and France. [ 2 ] The first UK incinerators for waste disposal were built in Nottingham by Manlove, Alliott & Co. Ltd. in 1874 to a design patented by Alfred Fryer. They were originally known as destructors . [ 6 ] The first US incinerator was built in 1885 on Governors Island in New York, NY. [ 7 ] The first facility in Austria-Hungary was built in 1905 in Brunn . [ 8 ] An incinerator is a furnace for burning waste . Modern incinerators include pollution mitigation equipment such as flue gas cleaning. There are various types of incinerator plant design: moving grate, fixed grate, rotary-kiln, and fluidised bed. [ citation needed ] The burn pile or the burn pit is one of the simplest and earliest forms of waste disposal, essentially consisting of a mound of combustible materials piled on the open ground and set on fire, leading to pollution. Burn piles can and have spread uncontrolled fires, for example, if the wind blows burning material off the pile into surrounding combustible grasses or onto buildings. As interior structures of the pile are consumed, the pile can shift and collapse, spreading the burn area. Even in a situation of no wind, small lightweight ignited embers can lift off the pile via convection , and waft through the air into grasses or onto buildings, igniting them. [ citation needed ] Burn piles often do not result in full combustion of waste and therefore produce particulate pollution. [ citation needed ] The burn barrel is a somewhat more controlled form of private waste incineration, containing the burning material inside a metal barrel, with a metal grating over the exhaust. The barrel prevents the spread of burning material in windy conditions, and as the combustibles are reduced they can only settle down into the barrel. The exhaust grating helps to prevent the spread of burning embers. Typically steel 55-US-gallon (210 L) drums are used as burn barrels, with air vent holes cut or drilled around the base for air intake. [ 9 ] Over time, the very high heat of incineration causes the metal to oxidize and rust, and eventually the barrel itself is consumed by the heat and must be replaced. The private burning of dry cellulosic/paper products is generally clean-burning, producing no visible smoke, but plastics in the household waste can cause private burning to create a public nuisance, generating acrid odors and fumes that make eyes burn and water. A two-layered design enables secondary combustion, reducing smoke. [ 10 ] Most urban communities ban burn barrels and certain rural communities may have prohibitions on open burning, especially those home to many residents not familiar with this common rural practice. [ citation needed ] As of 2006 [update] in the United States, private rural household or farm waste incineration of small quantities was typically permitted so long as it is not a nuisance to others, does not pose a risk of fire such as in dry conditions, and the fire does not produce dense, noxious smoke. A handful of states, such as New York, Minnesota, and Wisconsin, have laws or regulations either banning or strictly regulating open burning due to health and nuisance effects. [ 11 ] People intending to burn waste may be required to contact a state agency in advance to check current fire risk and conditions, and to alert officials of the controlled fire that will occur. 
[ 12 ] The typical incineration plant for municipal solid waste is a moving grate incinerator. The moving grate enables the movement of waste through the combustion chamber to be optimized to allow a more efficient and complete combustion. A single moving grate boiler can handle up to 35 metric tons (39 short tons) of waste per hour, and can operate 8,000 hours per year with only one scheduled stop for inspection and maintenance of about one month's duration. Moving grate incinerators are sometimes referred to as municipal solid waste incinerators (MSWIs). The waste is introduced by a waste crane through the "throat" at one end of the grate, from where it moves down over the descending grate to the ash pit in the other end. Here the ash is removed through a water lock. Part of the combustion air (primary combustion air) is supplied through the grate from below. This air flow also has the purpose of cooling the grate itself. Cooling is important for the mechanical strength of the grate, and many moving grates are also water-cooled internally. Secondary combustion air is supplied into the boiler at high speed through nozzles over the grate. It facilitates complete combustion of the flue gases by introducing turbulence for better mixing and by ensuring a surplus of oxygen. In multiple/stepped hearth incinerators, the secondary combustion air is introduced in a separate chamber downstream the primary combustion chamber. According to the European Waste Incineration Directive , incineration plants must be designed to ensure that the flue gases reach a temperature of at least 850 °C (1,560 °F) for 2 seconds in order to ensure proper breakdown of toxic organic substances. In order to comply with this at all times, it is required to install backup auxiliary burners (often fueled by oil), which are fired into the boiler in case the heating value of the waste becomes too low to reach this temperature alone. The flue gases are then cooled in the superheaters , where the heat is transferred to steam, heating the steam to typically 400 °C (752 °F) at a pressure of 40 bars (580 psi ) for the electricity generation in the turbine . At this point, the flue gas has a temperature of around 200 °C (392 °F), and is passed to the flue gas cleaning system . In Scandinavia , scheduled maintenance is always performed during summer, where the demand for district heating is low. Often, incineration plants consist of several separate 'boiler lines' (boilers and flue gas treatment plants), so that waste can continue to be received at one boiler line while the others are undergoing maintenance, repair, or upgrading. The older and simpler kind of incinerator was a brick-lined cell with a fixed metal grate over a lower ash pit, with one opening in the top or side for loading and another opening in the side for removing incombustible solids called clinkers . Many small incinerators formerly found in apartment houses have now been replaced by waste compactors . [ 13 ] [ full citation needed ] The rotary-kiln incinerator [ 14 ] is used by municipalities and by large industrial plants. This design of incinerator has two chambers: a primary chamber and secondary chamber. The primary chamber in a rotary kiln incinerator consists of an inclined refractory lined cylindrical tube. The inner refractory lining serves as sacrificial layer to protect the kiln structure. This refractory layer needs to be replaced from time to time. [ 15 ] Movement of the cylinder on its axis facilitates movement of waste. 
In the primary chamber, there is conversion of solid fraction to gases, through volatilization, destructive distillation and partial combustion reactions. The secondary chamber is necessary to complete gas phase combustion reactions. The clinkers spill out at the end of the cylinder. A tall flue-gas stack, fan, or steam jet supplies the needed draft . Ash drops through the grate, but many particles are carried along with the hot gases. The particles and any combustible gases may be combusted in an "afterburner". [ 16 ] A strong airflow is forced through a sandbed. The air seeps through the sand until a point is reached where the sand particles separate to let the air through and mixing and churning occurs, thus a fluidized bed is created and fuel and waste can now be introduced. The sand with the pre-treated waste and/or fuel is kept suspended on pumped air currents and takes on a fluid-like character. The bed is thereby violently mixed and agitated, keeping small inert particles and air in a fluid-like state. This allows all of the mass of waste, fuel and sand to be fully circulated through the furnace. [ citation needed ] Furniture-factory sawdust incinerators need careful attention, as they have to handle resin powder and many flammable substances. Controlled combustion and burn-back prevention systems are essential, because suspended dust can ignite in much the same way as liquefied petroleum gas. The heat produced by an incinerator can be used to generate steam which may then be used to drive a turbine in order to produce electricity. The typical amount of net energy that can be produced per tonne of municipal waste is about 2/3 MWh of electricity and 2 MWh of district heating. [ 2 ] Thus, incinerating about 600 metric tons (660 short tons) per day of waste will produce about 400 MWh of electrical energy per day (17 MW of electrical power continuously for 24 hours) and 1200 MWh of district heating energy each day. Incineration has a number of outputs such as the ash and the emission to the atmosphere of flue gas . Before the flue gas cleaning system , if installed, the flue gases may contain particulate matter , heavy metals , dioxins , furans , sulfur dioxide , and hydrochloric acid . If plants have inadequate flue gas cleaning, these outputs may add a significant pollution component to stack emissions. In a study from 1997, Delaware Solid Waste Authority found that, for the same amount of produced energy, incineration plants emitted fewer particles, hydrocarbons and less SO 2 , HCl, CO and NO x than coal-fired power plants, but more than natural gas–fired power plants. [ 17 ] According to Germany's Ministry of the Environment , waste incinerators reduce the amount of some atmospheric pollutants by substituting power produced by coal-fired plants with power from waste-fired plants. [ 18 ] The most publicized concerns about the incineration of municipal solid wastes (MSW) involve the fear that it produces significant amounts of dioxin and furan emissions. [ 19 ] Dioxins and furans are considered by many to be serious health hazards. The EPA announced in 2012 that the safe limit for human oral consumption is 0.7 picograms Toxic Equivalence (TEQ) per kilogram bodyweight per day, [ 20 ] which works out to 17 billionths of a gram for a 150 lb person per year.
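As a rough arithmetic check (a sketch added here, not part of the original text), the following Python lines reproduce the two back-of-envelope figures quoted above: the daily energy output from incinerating 600 tonnes of waste, and the EPA dioxin limit expressed as an annual quantity for a 150 lb person.

```python
# Energy recovery: about 2/3 MWh of electricity and 2 MWh of district heat per tonne of waste
tonnes_per_day = 600
electricity_mwh_per_day = tonnes_per_day * 2 / 3     # about 400 MWh/day
heat_mwh_per_day = tonnes_per_day * 2                # about 1200 MWh/day
continuous_power_mw = electricity_mwh_per_day / 24   # about 16.7 MW, i.e. roughly 17 MW

# Dioxin limit: 0.7 pg TEQ per kg of body weight per day, for a 150 lb (about 68 kg) person
body_mass_kg = 150 * 0.4536
annual_intake_g = 0.7e-12 * body_mass_kg * 365       # about 1.7e-8 g, i.e. ~17 billionths of a gram

print(electricity_mwh_per_day, heat_mwh_per_day, continuous_power_mw, annual_intake_g)
```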
In 2005, the Ministry of the Environment of Germany, where there were 66 incinerators at that time, estimated that "...whereas in 1990 one third of all dioxin emissions in Germany came from incineration plants, for the year 2000 the figure was less than 1%. Chimneys and tiled stoves in private households alone discharge approximately 20 times more dioxin into the environment than incineration plants." [ 18 ] According to the United States Environmental Protection Agency , [ 11 ] the combustion percentages of the total dioxin and furan inventory from all known and estimated sources in the U.S. (not only incineration) for each type of incineration are as follows: 35.1% backyard barrels; 26.6% medical waste; 6.3% municipal wastewater treatment sludge ; 5.9% municipal waste combustion; 2.9% industrial wood combustion. Thus, the controlled combustion of waste accounted for 41.7% of the total dioxin inventory. In 1987, before the governmental regulations required the use of emission controls, there was a total of 8,905.1 grams (314.12 oz) Toxic Equivalence (TEQ) of dioxin emissions from US municipal waste combustors. Today, the total emissions from the plants are 83.8 grams (2.96 oz) TEQ annually, a reduction of 99%. Backyard barrel burning of household and garden wastes , still allowed in some rural areas, generates 580 grams (20 oz) of dioxins annually. Studies conducted by the US-EPA [ 21 ] demonstrated that one family using a burn barrel produced more emissions than an incineration plant disposing of 200 metric tons (220 short tons) of waste per day by 1997 and five times that by 2007 due to increased chemicals in household trash and decreased emission by municipal incinerators using better technology. [ 22 ] Most of the improvement in U.S. dioxin emissions has been for large-scale municipal waste incinerators. As of 2000, although small-scale incinerators (those with a daily capacity of less than 250 tons) processed only 9% of the total waste combusted, these produced 83% of the dioxins and furans emitted by municipal waste combustion. [ 11 ] The breakdown of dioxin requires exposure of the molecular ring to a sufficiently high temperature so as to trigger thermal breakdown of the strong molecular bonds holding it together. Small pieces of fly ash may be somewhat thick, and too brief an exposure to high temperature may only degrade dioxin on the surface of the ash. For a large volume air chamber, too brief an exposure may also result in only some of the exhaust gases reaching the full breakdown temperature. For this reason there is also a time element to the temperature exposure to ensure heating completely through the thickness of the fly ash and the volume of waste gases. There are trade-offs between increasing either the temperature or exposure time. Generally where the molecular breakdown temperature is higher, the exposure time for heating can be shorter, but excessively high temperatures can also cause wear and damage to other parts of the incineration equipment. Likewise the breakdown temperature can be lowered to some degree but then the exhaust gases would require a greater lingering period of perhaps several minutes, which would require large/long treatment chambers that take up a great deal of treatment plant space. A side effect of breaking the strong molecular bonds of dioxin is the potential for breaking the bonds of nitrogen gas ( N 2 ) and oxygen gas ( O 2 ) in the supply air. 
As the exhaust flow cools, these highly reactive detached atoms spontaneously reform bonds into reactive oxides such as NO x in the flue gas, which can result in smog formation and acid rain if they were released directly into the local environment. These reactive oxides must be further neutralized with selective catalytic reduction (SCR) or selective non-catalytic reduction (see below). The temperatures needed to break down dioxin are typically not reached when burning plastics outdoors in a burn barrel or garbage pit, causing high dioxin emissions as mentioned above. While plastic does usually burn in an open-air fire, the dioxins remain after combustion and either float off into the atmosphere, or may remain in the ash where it can be leached down into groundwater when rain falls on the ash pile. Fortunately, dioxin and furan compounds bond very strongly to solid surfaces and are not dissolved by water, so leaching processes are limited to the first few millimeters below the ash pile. The gas-phase dioxins can be substantially destroyed using catalysts, some of which can be present as part of the fabric filter bag structure. Modern municipal incinerator designs include a high-temperature zone, where the flue gas is sustained at a temperature above 850 °C (1,560 °F) for at least 2 seconds before it is cooled down. They are equipped with auxiliary heaters to ensure this at all times. These are often fueled by oil or natural gas, and are normally only active for a very small fraction of the time. Further, most modern incinerators utilize fabric filters (often with Teflon membranes to enhance collection of sub-micron particles) which can capture dioxins present in or on solid particles. For very small municipal incinerators, the required temperature for thermal breakdown of dioxin may be reached using a high-temperature electrical heating element, plus a selective catalytic reduction stage. Although dioxins and furans may be destroyed by combustion, their reformation by a process known as 'de novo synthesis' as the emission gases cool is a probable source of the dioxins measured in emission stack tests from plants that have high combustion temperatures held at long residence times. [ 11 ] As for other complete combustion processes, nearly all of the carbon content in the waste is emitted as CO 2 to the atmosphere. MSW contains approximately the same mass fraction of carbon as CO 2 itself (27%), so incineration of 1 ton of MSW produces approximately 1 ton of CO 2 . If the waste was landfilled without prior stabilization (typically via anaerobic digestion ), 1 ton of MSW would produce approximately 62 cubic metres (2,200 cu ft) methane via the anaerobic decomposition of the biodegradable part of the waste. Since the global warming potential of methane is 34 and the weight of 62 cubic meters of methane at 25 degrees Celsius is 40.7 kg, this is equivalent to 1.38 ton of CO 2 , which is more than the 1 ton of CO 2 which would have been produced by incineration. In some countries, large amounts of landfill gas are collected. Still the global warming potential of the landfill gas emitted to atmosphere is significant. In the US it was estimated that the global warming potential of the emitted landfill gas in 1999 was approximately 32% higher than the amount of CO 2 that would have been emitted by incineration. 
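The landfill-methane comparison above can be reproduced with a short calculation. The 62 m³ volume and the GWP of 34 are the figures given in the text; the methane density is an ideal-gas estimate at 25 °C.

```python
# Sketch of the landfill-methane comparison above (ideal-gas density assumed).

METHANE_MOLAR_MASS_G = 16.04   # g/mol
MOLAR_VOLUME_25C_L = 24.465    # L/mol at 25 degrees C and 1 atm (ideal gas)
GWP_METHANE = 34               # global warming potential used in the text

methane_volume_m3 = 62.0                                        # per tonne of landfilled MSW
density_kg_per_m3 = METHANE_MOLAR_MASS_G / MOLAR_VOLUME_25C_L   # ~0.656 kg/m3
methane_mass_kg = methane_volume_m3 * density_kg_per_m3         # ~40.7 kg
co2_equivalent_t = methane_mass_kg * GWP_METHANE / 1000          # ~1.38 t CO2e

print(f"{methane_mass_kg:.1f} kg CH4 -> {co2_equivalent_t:.2f} t CO2e, "
      "versus ~1 t CO2 from incinerating the same waste")
```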
[ 23 ] Since this study, the global warming potential estimate for methane has been increased from 21 to 35, which alone would increase this estimate to almost the triple GWP effect compared to incineration of the same waste. In addition, nearly all biodegradable waste has biological origin. This material has been formed by plants using atmospheric CO 2 typically within the last growing season. If these plants are regrown the CO 2 emitted from their combustion will be taken out from the atmosphere once more. [ citation needed ] Such considerations are the main reason why several countries administrate incineration of biodegradable waste as renewable energy . [ 24 ] The rest – mainly plastics and other oil and gas derived products – is generally treated as non-renewables . Different results for the CO 2 footprint of incineration can be reached with different assumptions. Local conditions (such as limited local district heating demand, no fossil fuel generated electricity to replace or high levels of aluminium in the waste stream) can decrease the CO 2 benefits of incineration. The methodology and other assumptions may also influence the results significantly. For example, the methane emissions from landfills occurring at a later date may be neglected or given less weight, or biodegradable waste may not be considered CO 2 neutral. A study by Eunomia Research and Consulting in 2008 on potential waste treatment technologies in London demonstrated that by applying several of these (according to the authors) unusual assumptions the average existing incineration plants performed poorly for CO 2 balance compared to the theoretical potential of other emerging waste treatment technologies. [ 25 ] Other gaseous emissions in the flue gas from incinerator furnaces include nitrogen oxides , sulfur dioxide , hydrochloric acid , heavy metals , and fine particles . Of the heavy metals, mercury is a major concern due to its toxicity and high volatility, as essentially all mercury in the municipal waste stream may exit in emissions if not removed by emission controls. [ 26 ] The steam content in the flue may produce visible fume from the stack, which can be perceived as a visual pollution . It may be avoided by decreasing the steam content by flue-gas condensation and reheating, or by increasing the flue gas exit temperature well above its dew point. Flue-gas condensation allows the latent heat of vaporization of the water to be recovered, subsequently increasing the thermal efficiency of the plant. [ citation needed ] The quantity of pollutants in the flue gas from incineration plants may or may not be reduced by several processes, depending on the plant. Particulate is collected by particle filtration , most often electrostatic precipitators (ESP) and/or baghouse filters . The latter are generally very efficient for collecting fine particles . In an investigation by the Ministry of the Environment of Denmark in 2006, the average particulate emissions per energy content of incinerated waste from 16 Danish incinerators were below 2.02 g/GJ (grams per energy content of the incinerated waste). Detailed measurements of fine particles with sizes below 2.5 micrometres ( PM 2.5 ) were performed on three of the incinerators: One incinerator equipped with an ESP for particle filtration emitted 5.3 g/GJ fine particles, while two incinerators equipped with baghouse filters emitted 0.002 and 0.013 g/GJ PM 2.5 . 
For ultrafine particles (PM 1.0 ), the numbers were 4.889 g/GJ PM 1.0 from the ESP plant, while emissions of 0.000 and 0.008 g/GJ PM 1.0 were measured from the plants equipped with baghouse filters. [ 27 ] [ 28 ] Acid gas scrubbers are used to remove hydrochloric acid , nitric acid , hydrofluoric acid , mercury , lead and other heavy metals . The efficiency of removal will depend on the specific equipment, the chemical composition of the waste, the design of the plant, the chemistry of reagents, and the ability of engineers to optimize these conditions, which may conflict for different pollutants. For example, mercury removal by wet scrubbers is considered coincidental and may be less than 50%. [ 26 ] Basic scrubbers remove sulfur dioxide , forming gypsum by reaction with lime . [ 29 ] Waste water from scrubbers must subsequently pass through a waste water treatment plant. [ citation needed ] Sulfur dioxide may also be removed by dry desulfurisation, by injecting limestone slurry into the flue gas before particle filtration. [ citation needed ] NO x is either reduced by catalytic reduction with ammonia in a catalytic converter ( selective catalytic reduction , SCR) or by a high-temperature reaction with ammonia in the furnace ( selective non-catalytic reduction , SNCR). Urea may be substituted for ammonia as the reducing reagent but must be supplied earlier in the process so that it can hydrolyze into ammonia. Substitution of urea can reduce costs and potential hazards associated with storage of anhydrous ammonia. [ citation needed ] Heavy metals are often adsorbed on injected activated carbon powder, which is collected by particle filtration. [ citation needed ] Incineration produces fly ash and bottom ash, just as is the case when coal is combusted. The total amount of ash produced by municipal solid waste incineration ranges from 4 to 10% by volume and 15–20% by weight of the original quantity of waste, [ 2 ] [ 30 ] and the fly ash amounts to about 10–20% of the total ash. [ 2 ] [ 30 ] The fly ash constitutes by far the greater potential health hazard because it often contains high concentrations of heavy metals such as lead, cadmium , copper and zinc, as well as small amounts of dioxins and furans. [ 31 ] The bottom ash seldom contains significant levels of heavy metals. Although some historic samples tested by the incinerator operators' group would meet the criteria for being classified as ecotoxic, the Environment Agency currently says "we have agreed" to regard incinerator bottom ash as "non-hazardous" until the testing programme is complete. [ citation needed ] Odor pollution can be a problem with old-style incinerators, but odors and dust are extremely well controlled in newer incineration plants. Newer plants receive and store the waste in an enclosed area kept under negative pressure, with the airflow routed through the boiler, which prevents unpleasant odors from escaping into the atmosphere. A study found that the strongest odor at an incineration facility in Eastern China occurred at its waste tipping port. [ 32 ] An issue that affects community relationships is the increased road traffic of waste collection vehicles transporting municipal waste to the incinerator. For this reason, most incinerators are located in industrial areas. This problem can be avoided to an extent through the transport of waste by rail from transfer stations. [ citation needed ] Scientific researchers have investigated the human health effects of pollutants produced by waste incineration.
Many studies have examined health impacts from exposure to pollutants utilizing U.S. EPA modeling guidelines. [ 33 ] [ 34 ] [ 35 ] Exposure through inhalation, ingestion, soil, and dermal contact are incorporated in these models. Research studies have also assessed exposure to pollutants through blood or urine samples of residents and workers who live near waste incinerators. [ 34 ] [ 36 ] Findings from a systematic review of previous research identified a number of symptoms and diseases related to incinerator pollution exposure. These include neoplasia, [ 34 ] respiratory issues, [ 37 ] congenital anomalies, [ 34 ] [ 37 ] [ 38 ] and infant deaths or miscarriages. [ 34 ] [ 38 ] Populations near old, inadequately maintained incinerators experience a higher degree of health issues. [ 34 ] [ 37 ] [ 38 ] Some studies also identified possible cancer risk. [ 38 ] However, difficulties in separating incinerator pollution exposure from combined industry, motor vehicle, and agriculture pollution limits these conclusions on health risks. [ 34 ] [ 36 ] [ 37 ] [ 38 ] Many communities have advocated for the improvement or removal of waste incinerator technology. Specific pollutant exposures, such as high levels of nitrogen dioxide, have been cited in community-led complaints relating to increased emergency room visits for respiratory issues. [ 39 ] [ 40 ] Potential health effects of waste incineration technology have been publicized, notably when located in communities already facing disproportionate health burdens. [ 41 ] For example Wheelabrator Baltimore in Maryland has been investigated due to increased rates of asthma in its neighboring community, which is predominantly occupied by low-income, people of color. [ 41 ] Community-led efforts have suggested a need for future research to address a lack of real-time pollution data. [ 40 ] [ 41 ] These sources have also cited a need for academic, government, and non-profit partnerships to better determine the health impacts of incineration. [ 40 ] [ 41 ] Use of incinerators for waste management is controversial. The debate over incinerators typically involves business interests (representing both waste generators and incinerator firms), government regulators, environmental activists and local citizens who must weigh the economic appeal of local industrial activity with their concerns over health and environmental risk. People and organizations professionally involved in this issue include the U.S. Environmental Protection Agency and a great many local and national air quality regulatory agencies worldwide. The history of municipal solid waste (MSW) incineration is linked intimately to the history of landfills and other waste treatment technology . The merits of incineration are inevitably judged in relation to the alternatives available. Since the 1970s, recycling and other prevention measures have changed the context for such judgements. Since the 1990s alternative waste treatment technologies have been maturing and becoming viable. Incineration is a key process in the treatment of hazardous wastes and clinical wastes. It is often imperative that medical waste be subjected to the high temperatures of incineration to destroy pathogens and toxic contamination it contains. The first incinerator in the U.S. was built in 1885 on Governors Island in New York. [ 71 ] In 1949, Robert C. Ross founded one of the first hazardous waste management companies in the U.S. 
He began Robert Ross Industrial Disposal because he saw an opportunity to meet the hazardous waste management needs of companies in northern Ohio. In 1958, the company built one of the first hazardous waste incinerators in the U.S. [ 72 ] The first full-scale, municipally operated incineration facility in the U.S. was the Arnold O. Chantland Resource Recovery Plant, built in 1975 in Ames, Iowa . The plant is still in operation and produces refuse-derived fuel that is sent to local power plants for fuel. [ 73 ] The first commercially successful incineration plant in the U.S. was built in Saugus, Massachusetts , in October 1975 by Wheelabrator Technologies , and is still in operation today. [ 30 ] Several environmental and waste management corporations transport waste, ultimately to an incinerator or cement kiln treatment center. Currently (2009), there are three main businesses that incinerate waste: Clean Harbors, WTI-Heritage, and Ross Incineration Services. Clean Harbors has acquired many of the smaller, independently run facilities, accumulating 5–7 incinerators in the process across the U.S. WTI-Heritage has one incinerator, located in the southeastern corner of Ohio across the Ohio River from West Virginia. [ citation needed ] Several old-generation incinerators have been closed; of the 186 MSW incinerators in 1990, only 89 remained by 2007, and of the 6200 medical waste incinerators in 1988, only 115 remained in 2003. [ 74 ] No new incinerators were built between 1996 and 2007. [ citation needed ] The main reasons for the lack of activity have been: There has been renewed interest in incineration and other waste-to-energy technologies in the U.S. and Canada. In the U.S., incineration was granted qualification for renewable energy production tax credits in 2004. [ 75 ] Projects to add capacity to existing plants are underway, and municipalities are once again evaluating the option of building incineration plants rather than continuing to landfill municipal wastes. However, many of these projects have faced continued political opposition in spite of renewed arguments for the greenhouse gas benefits of incineration and improved air pollution control and ash recycling. In Europe, as a result of a ban on landfilling untreated waste, [ 44 ] many incinerators have been built in the last decade, with more under construction. Recently, a number of municipal governments have begun the process of contracting for the construction and operation of incinerators. In Europe, some of the electricity generated from waste is deemed to be from a 'Renewable Energy Source' (RES) and is thus eligible for tax credits if privately operated. Also, some incinerators in Europe are equipped with waste recovery, allowing the reuse of ferrous and non-ferrous materials found in the burned waste. A prominent example is the AEB Waste Fired Power Plant, Amsterdam. [ 76 ] [ 77 ] In Sweden, about 50% of the generated waste is burned in waste-to-energy facilities, producing electricity and supplying local cities' district heating systems. [ 78 ] The importance of waste in Sweden's electricity generation scheme is reflected in the 2,700,000 tons of waste imported per year (as of 2014) to supply waste-to-energy facilities. [ 79 ] Due to increasing EU targets for municipal solid waste recycling (at least 55% by 2025, rising to 65% by 2035), [ 80 ] several traditional incineration countries are at risk of not meeting them, since at most 35% of the waste would then remain available for thermal treatment and disposal.
[ 81 ] [ 82 ] Denmark has since decided to reduce its incineration capacity by 30% by 2030. [ 83 ] Incineration of non-hazardous waste was not included as a form of green investment in the EU taxonomy for sustainable activities [ 84 ] due to concerns about harming the circularity agenda, effectively stopping future EU funding to the municipal solid waste incineration sector. The technology employed in the UK waste management industry has lagged greatly behind that of continental Europe due to the wide availability of landfills. The Landfill Directive set down by the European Union led to the Government of the United Kingdom imposing waste legislation including the landfill tax and the Landfill Allowance Trading Scheme . This legislation is designed to reduce the release of greenhouse gases produced by landfills through the use of alternative methods of waste treatment. It is the UK Government's position that incineration will play an increasingly large role in the treatment of municipal waste and supply of energy in the UK. [ citation needed ] In 2008, plans for potential incinerator locations existed for approximately 100 sites. These have been interactively mapped by UK NGOs. [ 85 ] [ 86 ] [ 87 ] [ 88 ] Under a new plan in June 2012, a DEFRA-backed grant scheme (the Farming and Forestry Improvement Scheme) was set up to encourage the use of low-capacity incinerators on agricultural sites to improve their biosecurity. [ 89 ] A permit has recently been granted [ 90 ] for what would be the UK's largest waste incinerator in the centre of the Cambridge – Milton Keynes – Oxford corridor , in Bedfordshire . Following the construction of a large incinerator at Greatmoor in Buckinghamshire , and plans to construct a further one near Bedford , [ 91 ] the Cambridge – Milton Keynes – Oxford corridor will become a major incineration hub in the UK. Emergency incineration systems exist for the urgent and biosecure disposal of animals and their by-products following a mass mortality or disease outbreak. An increase in regulation and enforcement from governments and institutions worldwide has been driven by public pressure and significant economic exposure. Contagious animal disease cost governments and industry $200 billion over the 20 years to 2012 and is responsible for over 65% of infectious disease outbreaks worldwide in the past sixty years. One-third of global meat exports (approximately 6 million tonnes) are affected by trade restrictions at any given time, and as such the focus of governments, public bodies and commercial operators is on cleaner, safer and more robust methods of animal carcass disposal to contain and control disease. Large-scale incineration systems are available from niche suppliers and are often bought by governments as a safety net in case of a contagious outbreak. Many are mobile and can be quickly deployed to locations requiring biosecure disposal. Small-scale incinerators exist for special purposes. For example, mobile small-scale incinerators are intended for the hygienically safe destruction of medical waste in developing countries . [ 92 ] Inciner8, a UK-based company, manufactures mobile incinerators such as its I8-M50 and I8-M70 models. Small incinerators can be quickly deployed to remote areas where an outbreak has occurred, to dispose of infected animals quickly and without the risk of cross-contamination.
[ citation needed ] Containerised incinerators are a unique type of incinerator that are specifically designed to function in remote locations where traditional infrastructure may not be available. These incinerators are typically built within a shipping container for easy transport and installation.
https://en.wikipedia.org/wiki/Incineration
Incinerator bottom ash ( IBA ) is a form of ash produced in incineration facilities. [ 1 ] [ 2 ] This material is discharged from the moving grate of municipal solid waste incinerators. [ 3 ] [ 2 ] Once IBA is processed by removing contaminants, it can be used as an aggregate. Following processing, the material is termed IBA aggregate or processed IBA , and it has a range of uses as an aggregate. Alternatively, if there are no local markets for the IBA, the material is disposed of in a landfill . [ 2 ] IBA is a secondary source of ferrous and non-ferrous (NFe) metals. Between 5 and 15% of the bottom ash is made up of ferrous materials, and 1–5% is NFe. Despite glass making up 10–30% of IBA, it is not systematically recovered in any processing plants. On average, 63 kg of ferrous materials are removed from a single tonne of ash. [ 1 ] Foam concrete made from IBA led to an explosion in the United Kingdom in 2009. [ 4 ] [ 3 ] It was discovered that the aluminum in the concrete can react to form pockets of hydrogen gas while the concrete sets. [ 4 ] The Health and Safety Executive updated its guidelines to state that IBA aggregates should only be poured in open air, in well-ventilated areas, and that no spark-generating tools, such as disc cutters and angle grinders , should be used during setting. [ 4 ] [ 3 ]
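A short sketch of what the metal-recovery figures above imply for a processing plant; the annual ash tonnage used here is a hypothetical example, not a figure from the article.

```python
# Metal recovery from incinerator bottom ash, using the fractions quoted above.
# The annual IBA tonnage is a hypothetical illustration.

iba_tonnes_per_year = 50_000

ferrous_fraction = (0.05, 0.15)        # 5-15% of IBA by mass
nonferrous_fraction = (0.01, 0.05)     # 1-5% of IBA by mass
ferrous_removed_kg_per_tonne = 63      # average actually removed per tonne of ash

fe_lo, fe_hi = (f * iba_tonnes_per_year for f in ferrous_fraction)
nfe_lo, nfe_hi = (f * iba_tonnes_per_year for f in nonferrous_fraction)
fe_removed_t = ferrous_removed_kg_per_tonne * iba_tonnes_per_year / 1000

print(f"Ferrous content:     {fe_lo:,.0f}-{fe_hi:,.0f} t/year")
print(f"Non-ferrous content: {nfe_lo:,.0f}-{nfe_hi:,.0f} t/year")
print(f"Typical ferrous recovery at 63 kg/t: {fe_removed_t:,.0f} t/year")
```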
https://en.wikipedia.org/wiki/Incinerator_bottom_ash
Incipient wetness impregnation ( IW or IWI ), also called capillary impregnation or dry impregnation, is a commonly used technique for the synthesis of heterogeneous catalysts . Typically, the active metal precursor is dissolved in an aqueous or organic solution. The metal-containing solution is then added to a catalyst support whose total pore volume equals the volume of solution added. Capillary action draws the solution into the pores. Solution added in excess of the support pore volume causes the solution transport to change from a capillary action process to a diffusion process, which is much slower. The catalyst can then be dried and calcined to drive off the volatile components within the solution, depositing the metal on the catalyst surface. The maximum loading is limited by the solubility of the precursor in the solution. The concentration profile of the impregnated compound depends on the mass transfer conditions within the pores during impregnation and drying.
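The planning arithmetic this description implies can be sketched as follows. The support properties, target loading, precursor and solubility values below are hypothetical illustrations (a nickel nitrate precursor on an alumina-like support), not values from the article.

```python
# Minimal sketch of the planning arithmetic for an incipient wetness impregnation.
# All numbers are hypothetical illustrations (e.g. a Ni salt on an alumina support).

support_mass_g = 10.0            # catalyst support to be impregnated
pore_volume_ml_per_g = 0.5       # support pore volume (assumed, e.g. from N2 physisorption)
target_metal_loading = 0.05      # 5 wt% metal on the finished catalyst (metal + support basis)

metal_molar_mass = 58.69         # g/mol, Ni
precursor_molar_mass = 290.79    # g/mol, Ni(NO3)2.6H2O (one metal atom per formula unit)
precursor_solubility_g_ml = 2.4  # approximate aqueous solubility, assumed

# Solution volume is matched to the total pore volume so capillary action fills the pores.
solution_volume_ml = support_mass_g * pore_volume_ml_per_g

# Metal and precursor mass needed for the target loading (oxide residue neglected).
metal_mass_g = target_metal_loading * support_mass_g / (1 - target_metal_loading)
precursor_mass_g = metal_mass_g * precursor_molar_mass / metal_molar_mass

concentration = precursor_mass_g / solution_volume_ml
print(f"Dissolve {precursor_mass_g:.2f} g precursor in {solution_volume_ml:.1f} mL of water")
print(f"Required concentration {concentration:.2f} g/mL "
      f"({'OK' if concentration <= precursor_solubility_g_ml else 'exceeds solubility'})")
```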
https://en.wikipedia.org/wiki/Incipient_wetness_impregnation
IncludeOS is a minimal, open source , unikernel operating system for cloud services and IoT , developed by Alf Walla and Andreas Åkesson. [ 1 ] [ 2 ] IncludeOS allows users to run C++ applications in the cloud without any operating system. IncludeOS runs on virtual machines like Linux KVM , and VMWare ESXi/Fusion . [ 3 ] IncludeOS applications boot in about 300 ms. On Solo5/uKVM from IBM Research , boot times as low as 10 milliseconds are possible. [ 4 ] The minimalist architecture of IncludeOS means that it does not have any virtual memory space. In turn, therefore, there is no concept of either system calls or user space . [ 3 ]
https://en.wikipedia.org/wiki/IncludeOS
In cellular biology , inclusions are diverse intracellular [ 1 ] non-living substances ( ergastic substances ) [ 2 ] that are not bound by membranes . Inclusions are stored nutrients/ deutoplasmic substances, secretory products, and pigment granules. Examples of inclusions are glycogen granules in the liver and muscle cells , lipid droplets in fat cells , pigment granules in certain cells of skin and hair , and crystals of various types. [ 3 ] Cytoplasmic inclusions are an example of a biomolecular condensate arising by liquid-solid, liquid-gel or liquid-liquid phase separation . These structures were first observed by O. F. Müller in 1786. [ 1 ] Glycogen : Glycogen is the most common form of glucose in animals and is especially abundant in cells of muscles, and liver. It appears in electron micrograph as clusters, or a rosette of beta particles that resemble ribosomes , located near the smooth endoplasmic reticulum . [ 3 ] Glycogen is an important energy source of the cell; therefore, it will be available on demand. The enzymes responsible for glycogenolysis degrade glycogen into individual molecules of glucose and can be utilized by multiple organs of the body. [ 4 ] [ 2 ] Lipids : Lipids, which are stored as triglycerides , are the common form of inclusions. They are stored not only in specialized cells ( adipocytes ) but also are located as individuals droplets in various cell types, especially hepatocytes . [ 3 ] These are fluid at body temperature and appear in living cells as refractile spherical droplets. Lipids yield more than twice as many calories per gram as do carbohydrates. On demand, they serve as a local store of energy and a potential source of short carbon chains that are used by the cell in its synthesis of membranes and other lipid containing structural components or secretory products. [ 3 ] [ 4 ] Crystals : Crystalline inclusions have long been recognized as normal constituents of certain cell types such as Sertoli cells and Leydig cells of the human testis, and are found occasionally in macrophages . [ 4 ] It is believed that these structures are crystalline forms of certain proteins, and are located everywhere in the cell, including the nucleus , mitochondria , endoplasmic reticulum , Golgi body , and free in cytoplasmic matrix. [ 3 ] [ 4 ] Pigments : The most common pigment in the body, besides hemoglobin of red blood cells, is melanin , manufactured by melanocytes of the skin and hair, pigment cells of the retina and specialized nerve cells in the substantia nigra of the brain. [ 3 ] These pigments have protective functions in the skin and aid in the sense of sight in the retina, but their function in neurons is not understood completely. Furthermore, cardiac tissue and central nervous system neurons show a yellow to brown pigment called lipofuscin , believed by some to have lysosomal activity. [ 4 ]
https://en.wikipedia.org/wiki/Inclusion_(cell)
In mineralogy , an inclusion is any material trapped inside a mineral during its formation. In gemology , it is an object enclosed within a gemstone or reaching its surface from the interior. [ 1 ] According to James Hutton 's law of inclusions, fragments included in a host rock are older than the host rock itself. [ 2 ] [ 3 ] Inclusions are usually rocks or other minerals, less often water, gas or petroleum . Liquid and vapor create fluid inclusions . In amber , insects and plants are common inclusions. The analysis of atmospheric gas bubbles as inclusions in ice cores is an important tool in the study of climate change . [ 4 ] A xenolith is a preexisting rock which has been picked up by a lava flow. Melt inclusions form when bits of melt become trapped inside crystals as they form in the melt. Inclusions are one of the most important factors when it comes to gem valuation. They diminish the clarity and value of many gemstones, such as diamonds , and increase the value of others, such as star sapphires . [ 5 ] Many colored gemstones are expected to have inclusions which do not greatly affect their values. They are categorized into three types: [ 2 ] The term "inclusion" is also used in the context of metallurgy and metals processing. [ 6 ] [ 7 ] During the melt stage of processing particles such as oxides can enter or form in the liquid metal which are subsequently trapped when the melt solidifies. The term is usually used negatively such as when the particle could act as a fatigue crack nucleator or as an area of high stress intensity. [ 8 ] [ 9 ]
https://en.wikipedia.org/wiki/Inclusion_(mineral)
In taxonomy , inclusion refers to the scope of a taxon, whether it be a trait describing rank and diversity, [ 1 ] or a process of assigning or moving a taxon to be included within or absorbed by another taxon. This could occur as two separate species being found to be a single species, or a genus being found to belong to a certain family. [ 2 ] Among the most inclusive taxonomic ranks are Domains , Kingdoms , and Phyla , while the least inclusive are Subspecies , followed by Species .
https://en.wikipedia.org/wiki/Inclusion_(taxonomy)
Inclusion bodies are aggregates of specific types of protein found in neurons , and a number of tissue cells including red blood cells , bacteria , viruses , and plants . Inclusion bodies of aggregations of multiple proteins are also found in muscle cells affected by inclusion body myositis and hereditary inclusion body myopathy . [ 1 ] Inclusion bodies in neurons may accumulate in the cytoplasm or nucleus , and are associated with many neurodegenerative diseases . [ 2 ] Inclusion bodies in neurodegenerative diseases are aggregates of misfolded proteins ( aggresomes ) and are hallmarks of many of these diseases, including Lewy bodies in dementia with Lewy bodies , and Parkinson's disease , neuroserpin inclusion bodies called Collins bodies in familial encephalopathy with neuroserpin inclusion bodies , [ 3 ] inclusion bodies in Huntington's disease , Papp–Lantos bodies in multiple system atrophy , and various inclusion bodies in frontotemporal dementia including Pick bodies . [ 4 ] Bunina bodies in motor neurons are a core feature of amyotrophic lateral sclerosis . [ 5 ] Other usual cell inclusions are often temporary inclusions of accumulated proteins, fats, secretory granules, or other insoluble components. [ 6 ] Inclusion bodies are found in bacteria as particles of aggregated protein. They have a higher density than many other cell components but are porous. [ 7 ] They typically represent sites of viral multiplication in a bacterium or a eukaryotic cell and usually consist of viral capsid proteins . Inclusion bodies contain very little host protein, ribosomal components, or DNA/RNA fragments. They often almost exclusively contain the over-expressed protein and aggregation and has been reported to be reversible. It has been suggested that inclusion bodies are dynamic structures formed by an unbalanced equilibrium between aggregated and soluble proteins of Escherichia coli . There is a growing body of information indicating that formation of inclusion bodies occurs as a result of intracellular accumulation of partially folded expressed proteins which aggregate through non-covalent hydrophobic or ionic interactions or a combination of both. [ citation needed ] Inclusion bodies have a non-unit (single) lipid membrane [ citation needed ] . Protein inclusion bodies are classically thought to contain misfolded protein . However, this has been contested, as green fluorescent protein will sometimes fluoresce in inclusion bodies, which indicates some resemblance of the native structure and researchers have recovered folded protein from inclusion bodies. [ 8 ] [ 9 ] [ 10 ] When genes from one organism are expressed in another organism the resulting protein sometimes forms inclusion bodies. This is often true when large evolutionary distances are crossed: a cDNA isolated from Eukarya for example, and expressed as a recombinant gene in a prokaryote risks the formation of the inactive aggregates of protein known as inclusion bodies. While the cDNA may properly code for a translatable mRNA , the protein that results will emerge in a foreign microenvironment. This often has fatal effects, especially if the intent of cloning is to produce a biologically active protein . For example, eukaryotic systems for carbohydrate modification and membrane transport are not found in prokaryotes . The internal microenvironment of a prokaryotic cell ( pH , osmolarity ) may differ from that of the original source of the gene . 
Mechanisms for folding a protein may also be absent, and hydrophobic residues that normally would remain buried may be exposed and available for interaction with similar exposed sites on other ectopic proteins. Processing systems for the cleavage and removal of internal peptides would also be absent in bacteria . The initial attempts to clone insulin in a bacterium suffered all of these deficits. In addition, the fine controls that may keep the concentration of a protein low will also be missing in a prokaryotic cell , and overexpression can result in filling a cell with ectopic protein that, even if it were properly folded, would precipitate by saturating its environment. [ citation needed ] Inclusion bodies are aggregates of protein associated with many neurodegenerative diseases , accumulated in the cytoplasm or nucleus of neurons . [ 2 ] Inclusion bodies of aggregations of multiple proteins are also found in muscle cells affected by inclusion body myositis and hereditary inclusion body myopathy . [ 1 ] Inclusion bodies in neurodegenerative diseases are aggregates of misfolded proteins ( aggresomes ) and are hallmarks of many of these diseases, including Lewy bodies in Lewy body dementias , and Parkinson's disease , neuroserpin inclusion bodies called Collins bodies in familial encephalopathy with neuroserpin inclusion bodies , inclusion bodies in Huntington's disease , Papp–Lantos inclusions in multiple system atrophy , and various inclusion bodies in frontotemporal dementia including Pick bodies . [ 4 ] Bunina bodies in motor neurons are a core feature of amyotrophic lateral sclerosis . [ 5 ] A red blood cell (erythrocyte) does not usually have inclusions in the cytoplasm, but they may be seen in certain blood disorders, and three kinds of red blood cell inclusion are recognized. Viral inclusion bodies in animals are classified by their location and staining behaviour: cytoplasmic eosinophilic (acidophilic), nuclear eosinophilic (acidophilic), nuclear basophilic, or both nuclear and cytoplasmic. Examples of viral inclusion bodies in plants [ 13 ] include aggregations of virus particles (like those for Cucumber mosaic virus [ 14 ] ) and aggregations of viral proteins (like the cylindrical inclusions of potyviruses [ 15 ] ). Depending on the plant and the plant virus family, these inclusions can be found in epidermal cells, mesophyll cells, and stomatal cells when plant tissue is properly stained. [ 16 ] Polyhydroxyalkanoates (PHA) are produced by bacteria as inclusion bodies. The size of PHA granules is limited in E. coli , due to its small size. [ 17 ] Inclusion bodies in bacterial cells are not as abundant intracellularly as in eukaryotic cells. Polymeric R bodies are found in the bacterial cytoplasm of some taxa, and are thought to be involved in toxin delivery. [ 18 ] Between 70% and 80% of recombinant proteins expressed in E. coli are contained in inclusion bodies (i.e., protein aggregates). [ 19 ] The purification of the expressed proteins from inclusion bodies usually requires two main steps: extraction of inclusion bodies from the bacteria, followed by the solubilisation of the purified inclusion bodies. Solubilisation of inclusion bodies often involves treatment with denaturing agents, such as urea or guanidine chloride at high concentrations, to de-aggregate the collapsed proteins. Renaturation follows the treatment with denaturing agents and often consists of dialysis and/or use of molecules that promote the refolding of denatured proteins (including chaotropic agents [ 7 ] and chaperones).
[ 20 ] Pseudo-inclusions are invaginations of the cytoplasm into the cell nuclei , which may give the appearance of intranuclear inclusions. They may appear in papillary thyroid carcinoma . [ 21 ] Inclusion body diseases differ from amyloid diseases in that inclusion bodies are necessarily intracellular aggregates of protein, whereas amyloid can be intracellular or extracellular. Amyloid also necessitates protein polymerization, whereas inclusion bodies do not. [ 22 ] Inclusion bodies are often made of denatured aggregates of inactive proteins. Although the renaturation of inclusion bodies can sometimes lead to the solubilisation and recovery of active proteins, the process is still very empirical, uncertain and of low efficiency. Several techniques have been developed over the years to prevent the formation of inclusion bodies.
https://en.wikipedia.org/wiki/Inclusion_bodies
In the mathematical field of order theory , an inclusion order is the partial order that arises as the subset -inclusion relation on some collection of objects. In a simple way, every poset P = ( X ,≤) is ( isomorphic to) an inclusion order (just as every group is isomorphic to a permutation group – see Cayley's theorem ). To see this, associate to each element x of X the set X≤x = { y ∈ X : y ≤ x }; the reflexivity and transitivity of ≤ then ensure that for all a and b in X , we have X≤a ⊆ X≤b precisely when a ≤ b. There can be sets S of cardinality less than | X | such that P is isomorphic to the inclusion order on S . The size of the smallest possible S is called the 2-dimension of P . Several important classes of poset arise as inclusion orders for some natural collections, like the Boolean lattice Q n , which is the collection of all 2^n subsets of an n -element set, the interval-containment orders , which are precisely the orders of order dimension at most two, and the dimension- n orders, which are the containment orders on collections of n -boxes anchored at the origin . Other containment orders that are interesting in their own right include the circle orders , which arise from disks in the plane, and the angle orders .
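A small computational check of the embedding just described: map each element of a poset to its down-set and confirm that the order relation corresponds exactly to subset inclusion. The example poset (divisibility on a set of integers) is our own illustration.

```python
# Sketch: every poset is isomorphic to an inclusion order via its down-sets.
# Example poset: divisibility on {1, 2, 3, 4, 6, 12} (an illustrative choice).

elements = [1, 2, 3, 4, 6, 12]

def leq(a, b):
    """The partial order: a <= b iff a divides b."""
    return b % a == 0

# Associate to each x the set of elements below it (its down-set).
down_set = {x: frozenset(y for y in elements if leq(y, x)) for x in elements}

# Verify: a <= b in the poset exactly when down_set[a] is a subset of down_set[b].
assert all(leq(a, b) == (down_set[a] <= down_set[b])
           for a in elements for b in elements)
print("Divisibility on", elements, "is isomorphic to an inclusion order.")
```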
https://en.wikipedia.org/wiki/Inclusion_order
In combinatorics , the inclusion–exclusion principle is a counting technique which generalizes the familiar method of obtaining the number of elements in the union of two finite sets ; symbolically expressed as $|A \cup B| = |A| + |B| - |A \cap B|$, where A and B are two finite sets and | S | indicates the cardinality of a set S (which may be considered as the number of elements of the set, if the set is finite ). The formula expresses the fact that the sum of the sizes of the two sets may be too large since some elements may be counted twice. The double-counted elements are those in the intersection of the two sets and the count is corrected by subtracting the size of the intersection. The inclusion–exclusion principle, being a generalization of the two-set case, is perhaps more clearly seen in the case of three sets, which for the sets A , B and C is given by $|A \cup B \cup C| = |A| + |B| + |C| - |A \cap B| - |A \cap C| - |B \cap C| + |A \cap B \cap C|$. This formula can be verified by counting how many times each region in the Venn diagram figure is included in the right-hand side of the formula. In this case, when removing the contributions of over-counted elements, the number of elements in the mutual intersection of the three sets has been subtracted too often, so must be added back in to get the correct total. Generalizing the results of these examples gives the principle of inclusion–exclusion. To find the cardinality of the union of n sets: include the cardinalities of the individual sets, exclude the cardinalities of the pairwise intersections, include the cardinalities of the triple-wise intersections, and continue alternating in this way until the cardinality of the intersection of all n sets has been included (if n is odd) or excluded (if n is even). The name comes from the idea that the principle is based on over-generous inclusion , followed by compensating exclusion . This concept is attributed to Abraham de Moivre (1718), [ 1 ] although it first appears in a paper of Daniel da Silva (1854) [ 2 ] and later in a paper by J. J. Sylvester (1883). [ 3 ] Sometimes the principle is referred to as the formula of Da Silva or Sylvester, due to these publications. The principle can be viewed as an example of the sieve method extensively used in number theory and is sometimes referred to as the sieve formula . [ 4 ] As finite probabilities are computed as counts relative to the cardinality of the probability space , the formulas for the principle of inclusion–exclusion remain valid when the cardinalities of the sets are replaced by finite probabilities. More generally, both versions of the principle can be put under the common umbrella of measure theory . In a very abstract setting, the principle of inclusion–exclusion can be expressed as the calculation of the inverse of a certain matrix. [ 5 ] This inverse has a special structure, making the principle an extremely valuable technique in combinatorics and related areas of mathematics. As Gian-Carlo Rota put it: [ 6 ] "One of the most useful principles of enumeration in discrete probability and combinatorial theory is the celebrated principle of inclusion–exclusion. When skillfully applied, this principle has yielded the solution to many a combinatorial problem." In its general formula, the principle of inclusion–exclusion states that for finite sets A 1 , ..., A n , one has the identity $|A_1 \cup \cdots \cup A_n| = \sum_{i=1}^{n} |A_i| - \sum_{1 \le i < j \le n} |A_i \cap A_j| + \sum_{1 \le i < j < k \le n} |A_i \cap A_j \cap A_k| - \cdots + (-1)^{n+1} |A_1 \cap \cdots \cap A_n|$. This can be compactly written as $\left| \bigcup_{i=1}^{n} A_i \right| = \sum_{k=1}^{n} (-1)^{k+1} \sum_{1 \le i_1 < \cdots < i_k \le n} |A_{i_1} \cap \cdots \cap A_{i_k}|$ or $\left| \bigcup_{i=1}^{n} A_i \right| = \sum_{\emptyset \ne J \subseteq \{1,\ldots,n\}} (-1)^{|J|+1} \left| \bigcap_{j \in J} A_j \right|$. In words, to count the number of elements in a finite union of finite sets, first sum the cardinalities of the individual sets, then subtract the number of elements that appear in at least two sets, then add back the number of elements that appear in at least three sets, then subtract the number of elements that appear in at least four sets, and so on. This process always ends since there can be no elements that appear in more than the number of sets in the union. 
(For example, if n = 4 , {\displaystyle n=4,} there can be no elements that appear in more than 4 {\displaystyle 4} sets; equivalently, there can be no elements that appear in at least 5 {\displaystyle 5} sets.) In applications it is common to see the principle expressed in its complementary form. That is, letting S be a finite universal set containing all of the A i and letting A i ¯ {\displaystyle {\bar {A_{i}}}} denote the complement of A i in S , by De Morgan's laws we have As another variant of the statement, let P 1 , ..., P n be a list of properties that elements of a set S may or may not have, then the principle of inclusion–exclusion provides a way to calculate the number of elements of S that have none of the properties. Just let A i be the subset of elements of S which have the property P i and use the principle in its complementary form. This variant is due to J. J. Sylvester . [ 1 ] Notice that if you take into account only the first m<n sums on the right (in the general form of the principle), then you will get an overestimate if m is odd and an underestimate if m is even. A more complex example is the following. Suppose there is a deck of n cards numbered from 1 to n . Suppose a card numbered m is in the correct position if it is the m th card in the deck. How many ways, W , can the cards be shuffled with at least 1 card being in the correct position? Begin by defining set A m , which is all of the orderings of cards with the m th card correct. Then the number of orders, W , with at least one card being in the correct position, m , is Apply the principle of inclusion–exclusion, Each value A m 1 ∩ ⋯ ∩ A m p {\displaystyle A_{m_{1}}\cap \cdots \cap A_{m_{p}}} represents the set of shuffles having at least p values m 1 , ..., m p in the correct position. Note that the number of shuffles with at least p values correct only depends on p , not on the particular values of m {\displaystyle m} . For example, the number of shuffles having the 1st, 3rd, and 17th cards in the correct position is the same as the number of shuffles having the 2nd, 5th, and 13th cards in the correct positions. It only matters that of the n cards, 3 were chosen to be in the correct position. Thus there are ( n p ) {\textstyle {n \choose p}} equal terms in the p th summation (see combination ). | A 1 ∩ ⋯ ∩ A p | {\displaystyle |A_{1}\cap \cdots \cap A_{p}|} is the number of orderings having p elements in the correct position, which is equal to the number of ways of ordering the remaining n − p elements, or ( n − p )!. Thus we finally get: A permutation where no card is in the correct position is called a derangement . Taking n ! to be the total number of permutations, the probability Q that a random shuffle produces a derangement is given by a truncation to n + 1 terms of the Taylor expansion of e −1 . Thus the probability of guessing an order for a shuffled deck of cards and being incorrect about every card is approximately e −1 or 37%. The situation that appears in the derangement example above occurs often enough to merit special attention. [ 7 ] Namely, when the size of the intersection sets appearing in the formulas for the principle of inclusion–exclusion depend only on the number of sets in the intersections and not on which sets appear. 
More formally, if the intersection has the same cardinality, say α k = | A J |, for every k -element subset J of {1, ..., n }, then Or, in the complementary form, where the universal set S has cardinality α 0 , Given a family (repeats allowed) of subsets A 1 , A 2 , ..., A n of a universal set S , the principle of inclusion–exclusion calculates the number of elements of S in none of these subsets. A generalization of this concept would calculate the number of elements of S which appear in exactly some fixed m of these sets. Let N = [ n ] = {1,2,..., n }. If we define A ∅ = S {\displaystyle A_{\emptyset }=S} , then the principle of inclusion–exclusion can be written as, using the notation of the previous section; the number of elements of S contained in none of the A i is: If I is a fixed subset of the index set N , then the number of elements which belong to A i for all i in I and for no other values is: [ 8 ] Define the sets We seek the number of elements in none of the B k which, by the principle of inclusion–exclusion (with B ∅ = A I {\displaystyle B_{\emptyset }=A_{I}} ), is The correspondence K ↔ J = I ∪ K between subsets of N \ I and subsets of N containing I is a bijection and if J and K correspond under this map then B K = A J , showing that the result is valid. In probability , for events A 1 , ..., A n in a probability space ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},\mathbb {P} )} , the inclusion–exclusion principle becomes for n = 2 for n = 3 and in general which can be written in closed form as where the last sum runs over all subsets I of the indices 1, ..., n which contain exactly k elements, and denotes the intersection of all those A i with index in I . According to the Bonferroni inequalities , the sum of the first terms in the formula is alternately an upper bound and a lower bound for the LHS . This can be used in cases where the full formula is too cumbersome. For a general measure space ( S ,Σ, μ ) and measurable subsets A 1 , ..., A n of finite measure , the above identities also hold when the probability measure P {\displaystyle \mathbb {P} } is replaced by the measure μ . If, in the probabilistic version of the inclusion–exclusion principle, the probability of the intersection A I only depends on the cardinality of I , meaning that for every k in {1, ..., n } there is an a k such that then the above formula simplifies to due to the combinatorial interpretation of the binomial coefficient ( n k ) {\textstyle {\binom {n}{k}}} . For example, if the events A i {\displaystyle A_{i}} are independent and identically distributed , then P ( A i ) = p {\displaystyle \mathbb {P} (A_{i})=p} for all i , and we have a k = p k {\displaystyle a_{k}=p^{k}} , in which case the expression above simplifies to (This result can also be derived more simply by considering the intersection of the complements of the events A i {\displaystyle A_{i}} .) An analogous simplification is possible in the case of a general measure space ( S , Σ , μ ) {\displaystyle (S,\Sigma ,\mu )} and measurable subsets A 1 , … , A n {\displaystyle A_{1},\dots ,A_{n}} of finite measure. There is another formula used in point processes . Let S {\displaystyle S} be a finite set and P {\displaystyle P} be a random subset of S {\displaystyle S} . 
Let A {\displaystyle A} be any subset of S {\displaystyle S} , then P ( P = A ) = P ( P ⊃ A ) − ∑ j 1 ∈ S ∖ A P ( P ⊃ A ∪ j 1 ) + ∑ j 1 , j 2 ∈ S ∖ A j 1 ≠ j 2 P ( P ⊃ A ∪ j 1 , j 2 ) + … + ( − 1 ) | S | − | A | P ( P ⊃ S ) = ∑ A ⊂ J ⊂ S ( − 1 ) | J | − | A | P ( P ⊃ J ) . {\displaystyle {\begin{aligned}\mathbb {P} (P=A)&=\mathbb {P} (P\supset A)-\sum _{j_{1}\in S\setminus A}\mathbb {P} (P\supset A\cup {j_{1}})\\&+\sum _{j_{1},j_{2}\in S\setminus A\ j_{1}\neq j_{2}}\mathbb {P} (P\supset A\cup {j_{1},j_{2}})+\dots \\&+(-1)^{|S|-|A|}\mathbb {P} (P\supset S)\\&=\sum _{A\subset J\subset S}(-1)^{|J|-|A|}\mathbb {P} (P\supset J).\end{aligned}}} The principle is sometimes stated in the form [ 9 ] that says that if then The combinatorial and the probabilistic version of the inclusion–exclusion principle are instances of ( 2 ). Take m _ = { 1 , 2 , … , m } {\displaystyle {\underline {m}}=\{1,2,\ldots ,m\}} , f ( m _ ) = 0 {\displaystyle f({\underline {m}})=0} , and respectively for all sets S {\displaystyle S} with S ⊊ m _ {\displaystyle S\subsetneq {\underline {m}}} . Then we obtain respectively for all sets A {\displaystyle A} with A ⊊ m _ {\displaystyle A\subsetneq {\underline {m}}} . This is because elements a {\displaystyle a} of ∩ i ∈ m _ ∖ A A i {\displaystyle \cap _{i\in {\underline {m}}\smallsetminus A}A_{i}} can be contained in other A i {\displaystyle A_{i}} ( A i {\displaystyle A_{i}} with i ∈ A {\displaystyle i\in A} ) as well, and the ∩ ∖ ∪ {\displaystyle \cap \smallsetminus \cup } -formula runs exactly through all possible extensions of the sets { A i ∣ i ∈ m _ ∖ A } {\displaystyle \{A_{i}\mid i\in {\underline {m}}\smallsetminus A\}} with other A i {\displaystyle A_{i}} , counting a {\displaystyle a} only for the set that matches the membership behavior of a {\displaystyle a} , if S {\displaystyle S} runs through all subsets of A {\displaystyle A} (as in the definition of g ( A ) {\displaystyle g(A)} ). Since f ( m _ ) = 0 {\displaystyle f({\underline {m}})=0} , we obtain from ( 2 ) with A = m _ {\displaystyle A={\underline {m}}} that and by interchanging sides, the combinatorial and the probabilistic version of the inclusion–exclusion principle follow. If one sees a number n {\displaystyle n} as a set of its prime factors, then ( 2 ) is a generalization of Möbius inversion formula for square-free natural numbers . Therefore, ( 2 ) is seen as the Möbius inversion formula for the incidence algebra of the partially ordered set of all subsets of A . For a generalization of the full version of Möbius inversion formula, ( 2 ) must be generalized to multisets . For multisets instead of sets, ( 2 ) becomes where A − S {\displaystyle A-S} is the multiset for which ( A − S ) ⊎ S = A {\displaystyle (A-S)\uplus S=A} , and Notice that μ ( A − S ) {\displaystyle \mu (A-S)} is just the ( − 1 ) | A | − | S | {\displaystyle (-1)^{|A|-|S|}} of ( 2 ) in case A − S {\displaystyle A-S} is a set. Substitute g ( S ) = ∑ T ⊆ S f ( T ) {\displaystyle g(S)=\sum _{T\subseteq S}f(T)} on the right hand side of ( 3 ). Notice that f ( A ) {\displaystyle f(A)} appears once on both sides of ( 3 ). So we must show that for all T {\displaystyle T} with T ⊊ A {\displaystyle T\subsetneq A} , the terms f ( T ) {\displaystyle f(T)} cancel out on the right hand side of ( 3 ). For that purpose, take a fixed T {\displaystyle T} such that T ⊊ A {\displaystyle T\subsetneq A} and take an arbitrary fixed a ∈ A {\displaystyle a\in A} such that a ∉ T {\displaystyle a\notin T} . 
Notice that A − S {\displaystyle A-S} must be a set for each positive or negative appearance of f ( T ) {\displaystyle f(T)} on the right hand side of ( 3 ) that is obtained by way of the multiset S {\displaystyle S} such that T ⊆ S ⊆ A {\displaystyle T\subseteq S\subseteq A} . Now each appearance of f ( T ) {\displaystyle f(T)} on the right hand side of ( 3 ) that is obtained by way of S {\displaystyle S} such that A − S {\displaystyle A-S} is a set that contains a {\displaystyle a} cancels out with the one that is obtained by way of the corresponding S {\displaystyle S} such that A − S {\displaystyle A-S} is a set that does not contain a {\displaystyle a} . This gives the desired result. The inclusion–exclusion principle is widely used and only a few of its applications can be mentioned here. A well-known application of the inclusion–exclusion principle is to the combinatorial problem of counting all derangements of a finite set. A derangement of a set A is a bijection from A into itself that has no fixed points. Via the inclusion–exclusion principle one can show that if the cardinality of A is n , then the number of derangements is [ n ! / e ] where [ x ] denotes the nearest integer to x ; a detailed proof is available here and also see the examples section above. The first occurrence of the problem of counting the number of derangements is in an early book on games of chance: Essai d'analyse sur les jeux de hazard by P. R. de Montmort (1678 – 1719) and was known as either "Montmort's problem" or by the name he gave it, " problème des rencontres ." [ 10 ] The problem is also known as the hatcheck problem. The number of derangements is also known as the subfactorial of n , written ! n . It follows that if all bijections are assigned the same probability then the probability that a random bijection is a derangement quickly approaches 1/ e as n grows. The principle of inclusion–exclusion, combined with De Morgan's law , can be used to count the cardinality of the intersection of sets as well. Let A k ¯ {\displaystyle {\overline {A_{k}}}} represent the complement of A k with respect to some universal set A such that A k ⊆ A {\displaystyle A_{k}\subseteq A} for each k . Then we have thereby turning the problem of finding an intersection into the problem of finding a union. The inclusion exclusion principle forms the basis of algorithms for a number of NP-hard graph partitioning problems, such as graph coloring . [ 11 ] A well known application of the principle is the construction of the chromatic polynomial of a graph. [ 12 ] The number of perfect matchings of a bipartite graph can be calculated using the principle. [ 13 ] Given finite sets A and B , how many surjective functions (onto functions) are there from A to B ? Without any loss of generality we may take A = {1, ..., k } and B = {1, ..., n }, since only the cardinalities of the sets matter. By using S as the set of all functions from A to B , and defining, for each i in B , the property P i as "the function misses the element i in B " ( i is not in the image of the function), the principle of inclusion–exclusion gives the number of onto functions between A and B as: [ 14 ] A permutation of the set S = {1, ..., n } where each element of S is restricted to not being in certain positions (here the permutation is considered as an ordering of the elements of S ) is called a permutation with forbidden positions . 
For example, with S = {1,2,3,4}, the permutations with the restriction that the element 1 can not be in positions 1 or 3, and the element 2 can not be in position 4 are: 2134, 2143, 3124, 4123, 2341, 2431, 3241, 3421, 4231 and 4321. By letting A i be the set of positions that the element i is not allowed to be in, and the property P i to be the property that a permutation puts element i into a position in A i , the principle of inclusion–exclusion can be used to count the number of permutations which satisfy all the restrictions. [ 15 ] In the given example, there are 12 = 2(3!) permutations with property P 1 , 6 = 3! permutations with property P 2 and no permutations have properties P 3 or P 4 as there are no restrictions for these two elements. The number of permutations satisfying the restrictions is thus: The final 4 in this computation is the number of permutations having both properties P 1 and P 2 . There are no other non-zero contributions to the formula. The Stirling numbers of the second kind , S ( n , k ) count the number of partitions of a set of n elements into k non-empty subsets (indistinguishable boxes ). An explicit formula for them can be obtained by applying the principle of inclusion–exclusion to a very closely related problem, namely, counting the number of partitions of an n -set into k non-empty but distinguishable boxes ( ordered non-empty subsets). Using the universal set consisting of all partitions of the n -set into k (possibly empty) distinguishable boxes, A 1 , A 2 , ..., A k , and the properties P i meaning that the partition has box A i empty, the principle of inclusion–exclusion gives an answer for the related result. Dividing by k ! to remove the artificial ordering gives the Stirling number of the second kind: [ 16 ] A rook polynomial is the generating function of the number of ways to place non-attacking rooks on a board B that looks like a subset of the squares of a checkerboard ; that is, no two rooks may be in the same row or column. The board B is any subset of the squares of a rectangular board with n rows and m columns; we think of it as the squares in which one is allowed to put a rook. The coefficient , r k ( B ) of x k in the rook polynomial R B ( x ) is the number of ways k rooks, none of which attacks another, can be arranged in the squares of B . For any board B , there is a complementary board B ′ {\displaystyle B'} consisting of the squares of the rectangular board that are not in B . This complementary board also has a rook polynomial R B ′ ( x ) {\displaystyle R_{B'}(x)} with coefficients r k ( B ′ ) . {\displaystyle r_{k}(B').} It is sometimes convenient to be able to calculate the highest coefficient of a rook polynomial in terms of the coefficients of the rook polynomial of the complementary board. Without loss of generality we can assume that n ≤ m , so this coefficient is r n ( B ). The number of ways to place n non-attacking rooks on the complete n × m "checkerboard" (without regard as to whether the rooks are placed in the squares of the board B ) is given by the falling factorial : Letting P i be the property that an assignment of n non-attacking rooks on the complete board has a rook in column i which is not in a square of the board B , then by the principle of inclusion–exclusion we have: [ 17 ] Euler's totient or phi function, φ ( n ) is an arithmetic function that counts the number of positive integers less than or equal to n that are relatively prime to n . 
That is, if n is a positive integer , then φ( n ) is the number of integers k in the range 1 ≤ k ≤ n which have no common factor with n other than 1. The principle of inclusion–exclusion is used to obtain a formula for φ( n ). Let S be the set {1, ..., n } and define the property P i to be that a number in S is divisible by the prime number p i , for 1 ≤ i ≤ r , where the prime factorization of Then, [ 18 ] The Dirichlet hyperbola method re-expresses a sum of a multiplicative function f ( n ) {\displaystyle f(n)} by selecting a suitable Dirichlet convolution f = g ∗ h {\displaystyle f=g\ast h} , recognizing that the sum can be recast as a sum over the lattice points in a region bounded by x ≥ 1 {\displaystyle x\geq 1} , y ≥ 1 {\displaystyle y\geq 1} , and x y ≤ n {\displaystyle xy\leq n} , splitting this region into two overlapping subregions, and finally using the inclusion–exclusion principle to conclude that In many cases where the principle could give an exact formula (in particular, counting prime numbers using the sieve of Eratosthenes ), the formula arising does not offer useful content because the number of terms in it is excessive. If each term individually can be estimated accurately, the accumulation of errors may imply that the inclusion–exclusion formula is not directly applicable. In number theory , this difficulty was addressed by Viggo Brun . After a slow start, his ideas were taken up by others, and a large variety of sieve methods developed. These for example may try to find upper bounds for the "sieved" sets, rather than an exact formula. Let A 1 , ..., A n be arbitrary sets and p 1 , ..., p n real numbers in the closed unit interval [0, 1] . Then, for every even number k in {0, ..., n }, the indicator functions satisfy the inequality: [ 19 ] Choose an element contained in the union of all sets and let A 1 , A 2 , … , A t {\displaystyle A_{1},A_{2},\dots ,A_{t}} be the individual sets containing it. (Note that t > 0.) Since the element is counted precisely once by the left-hand side of equation ( 1 ), we need to show that it is counted precisely once by the right-hand side. On the right-hand side, the only non-zero contributions occur when all the subsets in a particular term contain the chosen element, that is, all the subsets are selected from A 1 , A 2 , … , A t {\displaystyle A_{1},A_{2},\dots ,A_{t}} . The contribution is one for each of these sets (plus or minus depending on the term) and therefore is just the (signed) number of these subsets used in the term. We then have: By the binomial theorem , Using the fact that ( t 0 ) = 1 {\displaystyle {\binom {t}{0}}=1} and rearranging terms, we have and so, the chosen element is counted only once by the right-hand side of equation ( 1 ). An algebraic proof can be obtained using indicator functions (also known as characteristic functions). The indicator function of a subset S of a set X is the function If A {\displaystyle A} and B {\displaystyle B} are two subsets of X {\displaystyle X} , then Let A denote the union ⋃ i = 1 n A i {\textstyle \bigcup _{i=1}^{n}A_{i}} of the sets A 1 , ..., A n . To prove the inclusion–exclusion principle in general, we first verify the identity for indicator functions, where: The following function is identically zero because: if x is not in A , then all factors are 0−0 = 0; and otherwise, if x does belong to some A m , then the corresponding m th factor is 1−1=0. By expanding the product on the left-hand side, equation ( 4 ) follows. 
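As a minimal numeric check (not taken from the article), the indicator-function identity ( 4 ) can be verified pointwise for a small concrete family of sets; the set values chosen here are arbitrary:

    from itertools import combinations

    A = [{1, 2, 3, 4}, {3, 4, 5}, {1, 4, 6, 7}]
    union = set().union(*A)

    def indicator(S, x):
        return 1 if x in S else 0

    # identity (4): the indicator of the union equals the alternating sum of
    # the indicators of all non-empty intersections.
    for x in union | {8, 9}:                      # also test points outside the union
        rhs = sum((-1) ** (len(idx) + 1) *
                  indicator(set.intersection(*(A[i] for i in idx)), x)
                  for r in range(1, len(A) + 1)
                  for idx in combinations(range(len(A)), r))
        assert indicator(union, x) == rhs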
To prove the inclusion–exclusion principle for the cardinality of sets, sum the equation ( 4 ) over all x in the union of A 1 , ..., A n . To derive the version used in probability, take the expectation in ( 4 ). In general, integrate the equation ( 4 ) with respect to μ . In each case it is the linearity of the sum, expectation, or integral that carries the identity over. This article incorporates material from principle of inclusion–exclusion on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License .
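Finally, a hedged sketch (not part of the article) of the totient computation described earlier: the formula the article refers to for φ( n ) is the standard product n ∏ (1 − 1/ p i ) over the distinct primes p i dividing n , and the inclusion–exclusion count of integers coprime to n reproduces it. The helper names below are hypothetical:

    from itertools import combinations
    from math import gcd, prod

    def prime_factors(n):
        ps, d = [], 2
        while d * d <= n:
            if n % d == 0:
                ps.append(d)
                while n % d == 0:
                    n //= d
            d += 1
        if n > 1:
            ps.append(n)
        return ps

    def phi_by_inclusion_exclusion(n):
        # subtract multiples of each prime, add back multiples of each pair, ...
        ps = prime_factors(n)
        return sum((-1) ** r * sum(n // prod(s) for s in combinations(ps, r))
                   for r in range(len(ps) + 1))

    n = 360                                   # 2^3 * 3^2 * 5
    assert phi_by_inclusion_exclusion(n) == sum(gcd(k, n) == 1 for k in range(1, n + 1)) == 96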
https://en.wikipedia.org/wiki/Inclusion–exclusion_principle
Inclusive design is a design process in which a product, service, or environment is designed to be usable for as many people as possible, particularly groups who are traditionally excluded from being able to use an interface or navigate an environment. Its focus is on fulfilling as many user needs as possible, not just as many users as possible. [ 1 ] Historically, inclusive design has been linked to designing for people with physical disabilities, and accessibility is one of the key outcomes of inclusive design. [ 2 ] However, rather than focusing on designing for disabilities, inclusive design is a methodology that considers many aspects of human diversity that could affect a person's ability to use a product, service, or environment, such as ability, language, culture, gender, and age. [ 3 ] The Inclusive Design Research Center reframes disability as a mismatch between the needs of a user and the design of a product or system, emphasizing that disability can be experienced by any user. [ 4 ] With this framing, it becomes clear that inclusive design is not limited to interfaces or technologies, but may also be applied to the design of policies and infrastructure. Three dimensions in inclusive design methodology identified by the Inclusive Design Research Centre include: [ 5 ] Further iterations of inclusive design include product inclusion, a practice of bringing an inclusive lens throughout development and design. This term suggests looking at multiple dimensions of identity including race, age, gender and more. In the 1950s, Europe, Japan, and the United States began to move towards "barrier-free design", which sought to remove obstacles in built environments for people with physical disabilities. By the 1970s, the emergence of accessible design began to move past the idea of building solutions specifically for individuals with disabilities towards normalization and integration. In 1973, the United States passed the Rehabilitation Act , which prohibits discrimination on the basis of disability in programs conducted by federal agencies, a crucial step towards recognizing that accessible design was a condition for supporting people's civil rights. [ 6 ] In May 1974, the magazine Industrial Design published an article, "The Handicapped Majority," which argued that handicaps were not a niche concern and 'normal' users suffered from poor design of products and environments as well. [ 7 ] Clarkson and Coleman describe the emergence of inclusive design in the United Kingdom as a synthesis of existing projects and movements. [ 8 ] Coleman also published the first reference to the term in 1994 with The Case for Inclusive Design , a presentation at the 12th Triennial Congress of the International Ergonomics Association . [ 9 ] Much of this early work was inspired by an aging population and people living longer into old age, as voiced by scholars like Peter Laslett . [ 8 ] Public focus on accessibility further increased with the passage of the Americans with Disabilities Act of 1990 , which expanded the responsibility of accessible design to include both public and private entities. [ 10 ] In the 1990s, the United States followed the United Kingdom in shifting focus from universal design to inclusive design. [ 6 ] Around this time, Selwyn Goldsmith (in the UK) and Ronald 'Ron' Mace (in the US), two architects who had both survived polio and were wheelchair users, advocated for an expanded view of design for everyone.
Along with Mace, nine other authors from five organizations in the United States developed the Principles of Universal Design in 1997. [ 6 ] In 1998, the United States amended Section 508 of the Rehabilitation Act to include inclusivity requirements for the design of information and technology. In 2016, the Design for All Showcase at the White House featured a panel on inclusive design. [ 11 ] [ 7 ] The show featured clothing and personal devices either on the market or in development, modeled by disabled people. [ 12 ] Rather than treating accessible and inclusive design as a product of compliance to legal requirements, the showcase positioned disability as a source of innovation. Inclusive design is often equated to accessible or universal design , as all three concepts are related to ensuring that products are usable by all people. However, subtle distinctions make each approach noteworthy. Accessibility is oriented towards the outcome of ensuring that a product supports individual users' needs. [ 13 ] Accessible design is often based upon compliance with government- or industry-designated guidelines, such as Americans with Disabilities Act (ADA) Accessibility Standards or Web Content Accessibility Guidelines (WCAG) . As a result, it is limited in scope and often focuses on specific accommodations to ensure that people with disabilities have access to products, services, or environments. In contrast, inclusive design considers the needs of a wider range of potential users, including those with capability impairments that may not be legally recognized as disabilities. [ 14 ] Inclusive design seeks out cases of exclusion from a product or environment, regardless of the cause, and seeks to reduce that exclusion. For example, a design that aims to reduce safety risks for people suffering from age-related long-sightedness would be best characterized as an inclusive design. Inclusive design also looks beyond resolving issues of access to improving the overall user experience. As a result, accessibility is one piece of inclusive design, but not the whole picture. In general, designs created through an inclusive design process should be accessible, as the needs of people with different abilities are considered during the design process. But accessible designs aren't necessarily inclusive if they don't move beyond providing access to people of different abilities and consider the wider user experience for different types of people—particularly those who may not suffer from recognized, common cognitive, or physical disabilities. [ 15 ] Universal design is design for everyone: the term was coined by Ronald Mace in 1980, and its aim is to produce designs that all people can use fully, without the need for adaptations. Universal design originated in work on the design of built environments, though its focus has expanded to encompass digital products and services as well. [ 13 ] Universal design principles include usefulness to people with diverse abilities; intuitive use regardless of user's skill level; perceptible communication of necessary information; tolerance for error; low physical effort; and appropriate size and space for all users. [ 16 ] Many of these principles are compatible with accessible and inclusive design, but universal design typically provides a single solution for a large user base, without added accommodations. [ 15 ] Therefore, while universal design supports the widest range of users, it does not aim to address individual accessibility needs. 
Inclusive design acknowledges that it is not always possible for one product to meet every user's needs, and thus explores different solutions for different groups of people. In general, inclusive design involves engaging with users and seeking to understand their needs. Frequently, inclusive design approaches include steps such as: developing empathy for the needs and contexts of potential users; forming diverse teams; creating and testing multiple solutions; encouraging dialogue regarding a design rather than debate; and using structured processes that guide conversations toward productive outcomes. [ 17 ] Five United States organizations—including the Institute for Human Centered Design (IHCD) and Ronald Mace at North Carolina State University —developed the Principles of Universal Design in 1997. The IHCD has since shifted the language of the principles from 'universal' to 'inclusive.' [ 2 ] The Commission for Architecture and the Built Environment (CABE) is an arm of the UK Design Council, which advises the government on architecture, urban design and public space. In 2006, they created the following set of inclusive design principles : The University of Cambridge's Inclusive Design Toolkit [ 18 ] advocates incorporating inclusive design elements throughout the design process in iterative cycles of: Microsoft emphasizes the role of learning from people who represent different perspectives in their inclusive design approach. They advocate for the following steps: [ 19 ] For Adobe , the inclusive design process begins with identification of situations where people are excluded from using a product. They describe the following principles of inclusive design: [ 20 ] For Google , the inclusive design process is slightly different and is called product inclusion, and looks at 13 dimensions of identity and the intersections of those dimensions throughout the product development and design process. https://about.google/belonging/in-products/ Participatory design is rooted in the design of Scandinavian workplaces in the 1970s, and is based in the idea that those affected by a design should be consulted during the design process. [ 21 ] Designers anticipate how users will actually use a product—and rather than focusing on merely designing a useful product, the whole infrastructure is considered: the goal is to design a good environment for the product at use time. [ 21 ] This methodology treats the challenge of design as an ongoing process. Further, rather than viewing the design process in phases, such as analysis, design, construction, and implementation, the participatory design approach looks at projects in terms of a collection of users and their experiences. There are numerous examples of inclusive design that apply to interfaces and technology, consumer products, and infrastructure.
https://en.wikipedia.org/wiki/Inclusive_design
Inclusive fitness is a conceptual framework in evolutionary biology first defined by W. D. Hamilton in 1964. [ 1 ] It is primarily used to aid the understanding of how social traits are expected to evolve in structured populations . [ 2 ] It involves partitioning an individual's expected fitness returns into two distinct components: direct fitness returns - the component of a focal individual’s fitness that is independent of who it interacts with socially; indirect fitness returns - the component that is dependent on who it interacts with socially. The direct component of an individual's fitness is often called its personal fitness, while an individual’s direct and indirect fitness components taken together are often called its inclusive fitness. [ 1 ] [ 3 ] Under an inclusive fitness framework direct fitness returns are realised through the offspring a focal individual produces independent of who it interacts with, while indirect fitness returns are realised by adding up all the effects our focal individual has on the (number of) offspring produced by those it interacts with weighted by the relatedness of our focal individual to those it interacts with. [ 3 ] This can be visualised in a sexually reproducing system (assuming identity by descent ) by saying that an individual's own child, who carries one half of that individual's genes, represents one offspring equivalent. A sibling's child, who will carry one-quarter of the individual's genes, will then represent 1/2 offspring equivalent (and so on - see coefficient of relationship for further examples). Neighbour-modulated fitness is the conceptual inverse of inclusive fitness. Where inclusive fitness calculates an individual’s indirect fitness component by summing the fitness that focal individual receives through modifying the productivities of those it interacts with (its neighbours), neighbour-modulated fitness instead calculates it by summing the effects an individual’s neighbours have on that focal individual’s productivity. [ 3 ] When taken over an entire population, these two frameworks give functionally equivalent results. [ 3 ] Hamilton’s rule is a particularly important result in the fields of evolutionary ecology and behavioral ecology that follows naturally from the partitioning of fitness into direct and indirect components, as given by inclusive and neighbour-modulated fitness. It enables us to see how the average trait value of a population is expected to evolve under the assumption of small mutational steps . [ 2 ] Kin selection is a well known case whereby inclusive fitness effects can influence the evolution of social behaviours. Kin selection relies on positive relatedness (driven by identity by descent) to enable individuals who positively influence the fitness of those they interact with at a cost to their own personal fitness, to outcompete individuals employing more selfish strategies. It is thought to be one of the primary mechanisms underlying the evolution of altruistic behaviour , alongside the less prevalent reciprocity (see also reciprocal altruism ), and to be of particular importance in enabling the evolution of eusociality among other forms of group living . Inclusive fitness has also been used to explain the existence of spiteful behaviour, where individuals negatively influence the fitness of those they interact with at a cost to their own personal fitness. Inclusive fitness and neighbour-modulated fitness are both frameworks that leverage the individual as the unit of selection . 
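As a schematic illustration (the numbers here are hypothetical, not taken from the sources above), suppose an act costs the actor one of its own offspring but allows a full sibling to rear three extra offspring. Counting in the offspring-equivalent units just described (own offspring = 1, sibling's offspring = 1/2):

    \[
      \Delta W_{\text{inclusive}}
        = \underbrace{3 \times \tfrac{1}{2}}_{\text{indirect: sibling's extra offspring}}
        - \underbrace{1 \times 1}_{\text{direct: own offspring forgone}}
        = +0.5 > 0 ,
    \]

so, on this accounting, a trait producing such an act could be favoured.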
It is from this that the gene-centered view of evolution emerged: a perspective that has facilitated much of the work done into the evolution of conflict (examples include parent-offspring conflict , interlocus sexual conflict , and intragenomic conflict ). The British evolutionary biologist W. D. Hamilton showed mathematically that, because other members of a population may share one's genes, a gene can also increase its evolutionary success by indirectly promoting the reproduction and survival of other individuals who also carry that gene. This is variously called "kin theory", "kin selection theory" or "inclusive fitness theory". The most obvious category of such individuals is close genetic relatives, and where these are concerned, the application of inclusive fitness theory is often more straightforwardly treated via the narrower kin selection theory. [ 4 ] Hamilton's theory, alongside reciprocal altruism , is considered one of the two primary mechanisms for the evolution of social behaviors in natural species and a major contribution to the field of sociobiology , which holds that some behaviors can be dictated by genes, and therefore can be passed to future generations and may be selected for as the organism evolves. [ 5 ] Belding's ground squirrel provides an example; it gives an alarm call to warn its local group of the presence of a predator. By emitting the alarm, it gives its own location away, putting itself in more danger. In the process, however, the squirrel may protect its relatives within the local group (along with the rest of the group). Therefore, if the effect of the trait influencing the alarm call typically protects the other squirrels in the immediate area, it will lead to the passing on of more copies of the alarm call trait in the next generation than the squirrel could leave by reproducing on its own. In such a case natural selection will increase the trait that influences giving the alarm call, provided that a sufficient fraction of the shared genes include the gene(s) predisposing to the alarm call. [ 6 ] Synalpheus regalis , a eusocial shrimp , is an organism whose social traits meet the inclusive fitness criterion. The larger defenders protect the young juveniles in the colony from outsiders. By ensuring the young's survival, the genes will continue to be passed on to future generations. [ 7 ] Inclusive fitness is more generalized than strict kin selection , which requires that the shared genes are identical by descent . Inclusive fitness is not limited to cases where "kin" ('close genetic relatives') are involved. Hamilton's rule is most easily derived in the framework of neighbour-modulated fitness, where the fitness of a focal individual is considered to be modulated by the actions of its neighbours. This is the inverse of inclusive fitness where we consider how a focal individual modulates the fitness of its neighbours. However, taken over the entire population , these two approaches are equivalent to each other so long as fitness remains linear in trait value. [ 3 ] A simple derivation of Hamilton's rule can be gained via the Price equation as follows. If an infinite population is assumed, such that any non-selective effects can be ignored, the Price equation can be written as: Where z {\displaystyle z} represents trait value and w {\displaystyle w} represents fitness, either taken for an individual i {\displaystyle i} or averaged over the entire population. 
If fitness is linear in trait value, the fitness of an individual i can be written as w i = α − c z i + b z n , where α is the component of an individual's fitness which is independent of trait value, − c parameterizes the effect of individual i 's own phenotype on its fitness (written negative, by convention, to represent a fitness cost), z n is the average trait value of individual i 's neighbours, and b parameterizes the effect of individual i 's neighbours on its fitness (written positive, by convention, to represent a fitness benefit). Substituting into the Price equation then gives Δ z ¯ = cov ( α − c z i + b z n , z i ) / w ¯ . Since α by definition does not covary with z i , this rearranges to Δ z ¯ = ( b cov ( z n , z i ) − c cov ( z i , z i ) ) / w ¯ . Since cov ( z i , z i ) / w ¯ = var ( z i ) / w ¯ , this term must, by definition, be greater than 0. This is because variances can never be negative, and negative mean fitness is undefined (if mean fitness is 0 the population has crashed; similarly, 0 variance would imply a monomorphic population; in both cases a change in mean trait value is impossible). It can then be said that the mean trait value will increase ( Δ z ¯ > 0 ) when b cov ( z n , z i ) > c var ( z i ) , or equivalently when r b > c , giving Hamilton's rule, where relatedness ( r ) is a regression coefficient of the form cov ( z n , z i ) / cov ( z i , z i ) , or cov ( z n , z i ) / var ( z i ) . [ 8 ] Relatedness here can vary between a value of 1 (only interacting with individuals of the same trait value) and −1 (only interacting with individuals of a [most] different trait value), and will be 0 when all individuals in the population interact with equal likelihood. Fitness in practice, however, does not tend to be linear in trait value; this would imply that an increase to an infinitely large trait value would be just as valuable to fitness as a similar increase to a very small trait value. Consequently, to apply Hamilton's rule to biological systems, the conditions under which fitness can be approximated as linear in trait value must first be found. There are two main methods used to approximate fitness as being linear in trait value: performing a partial regression with respect to both the focal individual's trait value and its neighbours' average trait value, [ 9 ] or taking a first order Taylor series approximation of fitness with respect to trait value. [ 10 ] [ 2 ] Performing a partial regression requires minimal assumptions, but only provides a statistical relationship as opposed to a mechanistic one, and cannot be extrapolated beyond the dataset that it was generated from. Linearizing via a Taylor series approximation, however, provides a powerful mechanistic relationship (see also causal model ), but requires the assumption that evolution proceeds in sufficiently small mutational steps that the difference in trait value between an individual and its neighbours is close to 0 (in accordance with Fisher's geometric model ), although in practice this approximation can often still retain predictive power under larger mutational steps.
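A minimal simulation sketch (not from the cited sources) can make the algebra above concrete. It assumes the selection-only Price equation Δ z ¯ = cov ( w i , z i ) / w ¯ and the linear fitness model w i = α − c z i + b z n used in the text; all parameter values and variable names below are arbitrary:

    import random

    random.seed(1)
    N, alpha, b, c = 10_000, 5.0, 0.8, 0.3

    # Trait values and neighbour mean trait values; correlating them creates
    # positive relatedness in the regression sense used above (about r = 0.5).
    z = [random.gauss(0.0, 1.0) for _ in range(N)]
    zn = [0.5 * zi + random.gauss(0.0, 1.0) for zi in z]
    w = [alpha - c * zi + b * zni for zi, zni in zip(z, zn)]   # linear fitness model

    def mean(xs):
        return sum(xs) / len(xs)

    def cov(xs, ys):
        mx, my = mean(xs), mean(ys)
        return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

    r = cov(zn, z) / cov(z, z)            # regression definition of relatedness
    dz = cov(w, z) / mean(w)              # selection-only Price equation
    rearranged = (b * cov(zn, z) - c * cov(z, z)) / mean(w)

    assert abs(dz - rearranged) < 1e-9    # the substitution step is exact
    assert (dz > 0) == (r * b > c)        # Hamilton's rule gives the sign of the change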
As a first order approximation ( linear in trait value), Hamilton's rule can only inform about how the mean trait value in a population is expected to change ( directional selection ). It contains no information about how the variance in trait value is expected to change ( disruptive selection ). As such it cannot be considered sufficient to determine evolutionary stability , even when Hamilton's rule predicts no change in trait value. This is because disruptive selection terms, and subsequent conditions for evolutionary branching , must instead be obtained from second order approximations ( quadratic in trait value) of fitness. [ 2 ] Gardner et al. (2007) suggest that Hamilton's rule can be applied to multi-locus models, but that it should be done at the point of interpreting theory, rather than the starting point of enquiry. [ 11 ] They suggest that one should "use standard population genetics, game theory, or other methodologies to derive a condition for when the social trait of interest is favoured by selection and then use Hamilton's rule as an aid for conceptualizing this result". [ 11 ] It is now becoming increasingly popular to use adaptive dynamics approaches to gain selection conditions which are directly interpretable with respect to Hamilton's rule. [ 2 ] The concept serves to explain how natural selection can perpetuate altruism . If there is an "altruism gene" (or complex of genes) that influences an organism's behaviour to be helpful and protective of relatives and their offspring, this behaviour also increases the proportion of the altruism gene in the population, because relatives are likely to share genes with the altruist due to common descent . In formal terms, if such a complex of genes arises, Hamilton's rule (rb>c) specifies the selective criteria (in terms of cost, benefit and relatedness) for such a trait to increase in frequency in the population. Hamilton noted that inclusive fitness theory does not by itself predict that a species will necessarily evolve such altruistic behaviours, since an opportunity or context for interaction between individuals is a more primary and necessary requirement in order for any social interaction to occur in the first place. As Hamilton put it, "Altruistic or selfish acts are only possible when a suitable social object is available. In this sense behaviours are conditional from the start." [ 12 ] In other words, while inclusive fitness theory specifies a set of necessary criteria for the evolution of altruistic traits, it does not specify a sufficient condition for their evolution in any given species. More primary necessary criteria include the existence of gene complexes for altruistic traits in the gene pool, as mentioned above, and especially that "a suitable social object is available", as Hamilton noted. The American evolutionary biologist Paul W. Sherman gives a fuller discussion of Hamilton's latter point: [ 13 ] To understand any species' pattern of nepotism, two questions about individuals' behavior must be considered: (1) what is reproductively ideal?, and (2) what is socially possible? With his formulation of "inclusive fitness," Hamilton suggested a mathematical way of answering (1). Here I suggest that the answer to (2) depends on demography, particularly its spatial component, dispersal, and its temporal component, mortality. Only when ecological circumstances affecting demography consistently make it socially possible will nepotism be elaborated according to what is reproductively ideal.
For example, if dispersing is advantageous and if it usually separates relatives permanently, as in many birds, on the rare occasions when nestmates or other kin live in proximity, they will not preferentially cooperate. Similarly, nepotism will not be elaborated among relatives that have infrequently coexisted in a population's or a species' evolutionary history. If an animal's life history characteristics usually preclude the existence of certain relatives, that is if kin are usually unavailable, the rare coexistence of such kin will not occasion preferential treatment. For example, if reproductives generally die soon after zygotes are formed, as in many temperate zone insects, the unusual individual that survives to interact with its offspring is not expected to behave parentally. [ 13 ] The occurrence of sibling cannibalism in several species underlines the point that inclusive fitness theory should not be understood to simply predict that genetically related individuals will inevitably recognize and engage in positive social behaviours towards genetic relatives. [ 14 ] [ 15 ] [ 16 ] Only in species that have the appropriate traits in their gene pool, and in which individuals typically interacted with genetic relatives in the natural conditions of their evolutionary history, will social behaviour potentially be elaborated, and consideration of the evolutionarily typical demographic composition of grouping contexts of that species is thus a first step in understanding how selection pressures upon inclusive fitness have shaped the forms of its social behaviour. Richard Dawkins gives a simplified illustration: [ 17 ] "If families [genetic relatives] happen to go around in groups, this fact provides a useful rule of thumb for kin selection: 'care for any individual you often see'." [ 17 ] Evidence from a variety of species [ 18 ] [ 19 ] [ 20 ] including primates [ 21 ] and other social mammals [ 22 ] suggests that contextual cues (such as familiarity) are often significant proximate mechanisms mediating the expression of altruistic behaviour, regardless of whether the participants are always in fact genetic relatives or not. This is nevertheless evolutionarily stable since selection pressure acts on typical conditions, not on the rare occasions where actual genetic relatedness differs from that normally encountered. [ 13 ] Inclusive fitness theory thus does not imply that organisms evolve to direct altruism towards genetic relatives. Many popular treatments do however promote this interpretation, as illustrated in a review: [ 23 ] [M]any misunderstandings persist. In many cases, they result from conflating "coefficient of relatedness" and "proportion of shared genes," which is a short step from the intuitively appealing—but incorrect—interpretation that "animals tend to be altruistic toward those with whom they share a lot of genes." These misunderstandings don't just crop up occasionally; they are repeated in many writings, including undergraduate psychology textbooks—most of them in the field of social psychology, within sections describing evolutionary approaches to altruism. (Park 2007, p860) [ 23 ] Such misunderstandings of inclusive fitness' implications for the study of altruism, even amongst professional biologists utilizing the theory, are widespread, prompting prominent theorists to regularly attempt to highlight and clarify the mistakes. [ 17 ] An example of attempted clarification is West et al.
(2010): [ 24 ] In his original papers on inclusive fitness theory, Hamilton pointed out a sufficiently high relatedness to favour altruistic behaviours could accrue in two ways—kin discrimination or limited dispersal. There is a huge theoretical literature on the possible role of limited dispersal, as well as experimental evolution tests of these models. However, despite this, it is still sometimes claimed that kin selection requires kin discrimination. Furthermore, a large number of authors appear to have implicitly or explicitly assumed that kin discrimination is the only mechanism by which altruistic behaviours can be directed towards relatives... [T]here is a huge industry of papers reinventing limited dispersal as an explanation for cooperation. The mistakes in these areas seem to stem from the incorrect assumption that kin selection or indirect fitness benefits require kin discrimination (misconception 5), despite the fact that Hamilton pointed out the potential role of limited dispersal in his earliest papers on inclusive fitness theory. [ 25 ] [ 24 ] As well as interactions in reliable contexts of genetic relatedness, altruists may also have some way to recognize altruistic behaviour in unrelated individuals and be inclined to support them. As Dawkins points out in The Selfish Gene [ 26 ] and The Extended Phenotype , [ 27 ] this must be distinguished from the green-beard effect . The green-beard effect is the act of a gene (or several closely linked genes), that: The green-beard effect was originally a thought experiment by Hamilton in his publications on inclusive fitness in 1964, [ 28 ] although it hadn't yet been observed. As of today, it has been observed in few species. Its rarity is probably due to its susceptibility to 'cheating' whereby individuals can gain the trait that confers the advantage, without the altruistic behaviour. This normally would occur via the crossing over of chromosomes which happens frequently, often rendering the green-beard effect a transient state. However, Wang et al. has shown in one of the species where the effect is common (fire ants), recombination cannot occur due to a large genetic transversion, essentially forming a supergene . This, along with homozygote inviability at the green-beard loci allows for the extended maintenance of the green-beard effect. [ 29 ] Equally, cheaters may not be able to invade the green-beard population if the mechanism for preferential treatment and the phenotype are intrinsically linked. In budding yeast ( Saccharomyces cerevisiae ), the dominant allele FLO1 is responsible for flocculation (self-adherence between cells) which helps protect them against harmful substances such as ethanol. While 'cheater' yeast cells occasionally find their way into the biofilm-like substance that is formed from FLO1 expressing yeast, they cannot invade as the FLO1 expressing yeast will not bind to them in return, and thus the phenotype is intrinsically linked to the preference. [ 30 ] Early writings on inclusive fitness theory (including Hamilton 1964) used K in place of B/C. Thus Hamilton's rule was expressed as K > 1 / r {\displaystyle K>1/r} is the necessary and sufficient condition for selection for altruism. Where B is the gain to the beneficiary, C is the cost to the actor and r is the number of its own offspring equivalents the actor expects in one of the offspring of the beneficiary. r is either called the coefficient of relatedness or coefficient of relationship, depending on how it is computed. 
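For a concrete (hypothetical) illustration of this older K notation: for full siblings with r = 1/2, an act that gives the sibling B = 5 extra offspring at a cost of C = 2 of the actor's own satisfies

    \[
      K = \frac{B}{C} = \frac{5}{2} = 2.5 > \frac{1}{r} = 2 ,
    \]

which is the same statement as rB = 2.5 > C = 2 in the more familiar form of Hamilton's rule.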
The method of computing has changed over time, as has the terminology. It is not clear whether or not changes in the terminology followed changes in computation. Robert Trivers (1974) defined "parent-offspring conflict" as any case where [ 31 ] 1 < K < 2 {\displaystyle 1<K<2} i.e., K is between 1 and 2. The benefit is greater than the cost but is less than twice the cost. In this case, the parent would wish the offspring to behave as if r is 1 between siblings, although it is actually presumed to be 1/2 or closely approximated by 1/2. In other words, a parent would wish its offspring to give up ten offspring in order to raise 11 nieces and nephews. The offspring, when not manipulated by the parent, would require at least 21 nieces and nephews to justify the sacrifice of 10 of its own offspring (to the offspring, each of its own offspring is worth r = 1/2 and each niece or nephew only r = 1/4, so it needs 21 × 1/4 = 5.25 > 10 × 1/2 = 5). [ 31 ] The parent is trying to maximize its number of grandchildren, while the offspring is trying to maximize the number of its own offspring equivalents (via offspring and nieces and nephews) it produces. If the parent cannot manipulate the offspring and therefore loses in the conflict, the grandparents with the fewest grandchildren seem to be selected for. In other words, if the parent has no influence on the offspring's behaviour, grandparents with fewer grandchildren increase in frequency in the population. [ 31 ] By extension, parents with the fewest offspring will also increase in frequency. This seems to go against Ronald Fisher 's "Fundamental Theorem of Natural Selection" which states that the change in fitness over the course of a generation equals the variance in fitness at the beginning of the generation. Variance is defined as the square of a quantity (the standard deviation ) and, as a square, must always be positive (or zero). That would imply that fitness could never decrease as time passes. This goes along with the intuitive idea that lower fitness cannot be selected for. During parent-offspring conflict, the number of stranger equivalents reared per offspring equivalents reared is going down. Consideration of this phenomenon caused Orlove (1979) [ 32 ] and Grafen (2006) [ 33 ] to say that nothing is being maximized. According to Trivers, if Sigmund Freud had tried to explain intra-family conflict after Hamilton instead of before him, he would have attributed the motivation for the conflict and for the castration complex to resource allocation issues rather than to sexual jealousy. [ 31 ] Incidentally, when K = 1 or K = 2, the average number of offspring per parent stays constant as time goes by. When K < 1 or K > 2, the average number of offspring per parent increases as time goes by. The term "gene" can refer to a locus (location) on an organism's DNA—a section that codes for a particular trait. Alternative versions of the code at that location are called "alleles." If there are two alleles at a locus, one of which codes for altruism and the other for selfishness, an individual who has one of each is said to be a heterozygote at that locus. If the heterozygote uses half of its resources raising its own offspring and the other half helping its siblings raise theirs, that condition is called codominance. If there is codominance the "2" in the above argument is exactly 2. If by contrast, the altruism allele is more dominant, then the 2 in the above would be replaced by a number smaller than 2. If the selfishness allele is the more dominant, something greater than 2 would replace the 2. [ 34 ] A 2010 paper by Martin Nowak , Corina Tarnita , and E. O.
Wilson suggested that standard natural selection theory is superior to inclusive fitness theory, stating that the interactions between cost and benefit cannot be explained only in terms of relatedness. This, Nowak said, makes Hamilton's rule at worst superfluous and at best ad hoc . [ 35 ] Gardner in turn was critical of the paper, describing it as "a really terrible article", and along with other co-authors has written a reply, submitted to Nature . [ 36 ] The disagreement stems from a long history of confusion over what Hamilton's rule represents. Hamilton's rule gives the direction of mean phenotypic change ( directional selection ) so long as fitness is linear in phenotype, and the utility of Hamilton's rule is simply a reflection of when it is suitable to consider fitness as being linear in phenotype. [ 37 ] The primary (and strictest) case is when evolution proceeds in very small mutational steps. Under such circumstances Hamilton's rule then emerges as the result of taking a first order Taylor series approximation of fitness with regards to phenotype. [ 10 ] This assumption of small mutational steps (otherwise known as δ- weak selection ) is often made on the basis of Fisher's geometric model [ 37 ] and underpins much of modern evolutionary theory. In work prior to Nowak et al. (2010), various authors derived different versions of a formula for r {\displaystyle r} , all designed to preserve Hamilton's rule. [ 34 ] [ 38 ] [ 39 ] Orlove noted that if a formula for r {\displaystyle r} is defined so as to ensure that Hamilton's rule is preserved, then the approach is by definition ad hoc. However, he published an unrelated derivation of the same formula for r {\displaystyle r} – a derivation designed to preserve two statements about the rate of selection – which on its own was similarly ad hoc. Orlove argued that the existence of two unrelated derivations of the formula for r {\displaystyle r} reduces or eliminates the ad hoc nature of the formula, and of inclusive fitness theory as well. [ 32 ] The derivations were demonstrated to be unrelated by corresponding parts of the two identical formulae for r {\displaystyle r} being derived from the genotypes of different individuals. The parts that were derived from the genotypes of different individuals were terms to the right of the minus sign in the covariances in the two versions of the formula for r {\displaystyle r} . By contrast, the terms left of the minus sign in both derivations come from the same source. In populations containing only two trait values, it has since been shown that r {\displaystyle r} is in fact Sewall Wright 's coefficient of relationship . [ 8 ] Engles (1982) suggested that the c/b ratio be considered as a continuum of this behavioural trait rather than discontinuous in nature. From this approach fitness transactions can be better observed because there is more to what is happening to affect an individual's fitness than just losing and gaining. [ 40 ]
https://en.wikipedia.org/wiki/Inclusive_fitness
Inclusive fitness in humans is the application of inclusive fitness theory to human social behaviour, relationships and cooperation. Inclusive fitness theory (and the related kin selection theory) are general theories in evolutionary biology that propose a method to understand the evolution of social behaviours in organisms. While various ideas related to these theories have been influential in the study of the social behaviour of non-human organisms, their application to human behaviour has been debated. Inclusive fitness theory is broadly understood to describe a statistical criterion by which social traits can evolve to become widespread in a population of organisms. However, beyond this some scientists have interpreted the theory to make predictions about how the expression of social behavior is mediated in both humans and other animals – typically that genetic relatedness determines the expression of social behaviour. Other biologists and anthropologists maintain that beyond its statistical evolutionary relevance the theory does not necessarily imply that genetic relatedness per se determines the expression of social behavior in organisms. Instead, the expression of social behavior may be mediated by correlated conditions, such as shared location, shared rearing environment, familiarity or other contextual cues which correlate with shared genetic relatedness, thus meeting the statistical evolutionary criteria without being deterministic. While the former position still attracts controversy, the latter position has a better empirical fit with anthropological data about human kinship practices, and is accepted by cultural anthropologists. Applying evolutionary biology perspectives to humans and human society has often resulted in periods of controversy and debate, due to their apparent incompatibility with alternative perspectives about humanity. Examples of early controversies include the reactions to On the Origin of Species , and the Scopes Monkey Trial . Examples of later controversies more directly connected with inclusive fitness theory and its use in sociobiology include physical confrontations at meetings of the Sociobiology Study Group and more often intellectual arguments such as Sahlins' 1976 book The use and abuse of biology , Lewontin et al.'s 1984 Not in Our Genes , and Kitcher's 1985 Vaulting Ambition:Sociobiology and the Quest for Human Nature . Some of these later arguments were produced by other scientists, including biologists and anthropologists, against Wilson's 1975 book Sociobiology: The New Synthesis , which was influenced by (though not necessarily endorsed by) Hamilton's work on inclusive fitness theory. A key debate in applying inclusive fitness theory to humans has been between biologists and anthropologists around the extent to which human kinship relationships (considered to be a large component of human solidarity and altruistic activity and practice) are necessarily based on or influenced by genetic relationships or blood-ties ('consanguinity'). The position of most social anthropologists is summarized by Sahlins (1976), that for humans "the categories of 'near' and 'distant' [kin] vary independently of consanguinal distance and that these categories organize actual social practice" (p. 112). Biologists wishing to apply the theory to humans directly disagree, arguing that "the categories of 'near' and 'distant' do not 'vary independently of consanguinal distance', not in any society on earth." (Daly et al. 1997, p282). 
This disagreement is central because of the way the association between blood ties/genetic relationships and altruism are conceptualized by many biologists. It is frequently understood by biologists that inclusive fitness theory makes predictions about how behaviour is mediated in both humans and other animals. For example, a recent experiment conducted on humans by the evolutionary psychologist Robin Dunbar and colleagues was, as they understood it, designed "to test the prediction that altruistic behaviour is mediated by Hamilton's rule" (inclusive fitness theory) and more specifically that "If participants follow Hamilton's rule, investment (time for which the [altruistic] position was held) should increase with the recipient's relatedness to the participant. In effect, we tested whether investment flows differentially down channels of relatedness." From their results, they concluded that " human altruistic behaviour is mediated by Hamilton's rule ... humans behave in such a way as to maximize inclusive fitness: they are more willing to benefit closer relatives than more distantly related individuals. " (Madsen et al. 2007). This position continues to be rejected by social anthropologists as being incompatible with the large amount of ethnographic data on kinship and altruism that their discipline has collected over many decades, that demonstrates that in many human cultures, kinship relationships (accompanied by altruism) do not necessarily map closely onto genetic relationships. Whilst the above understanding of inclusive fitness theory as necessarily making predictions about how human kinship and altruism is mediated is common amongst evolutionary psychologists, other biologists and anthropologists have argued that it is at best a limited (and at worst a mistaken) understanding of inclusive fitness theory. These scientists argue that the theory is better understood as simply describing an evolutionary criterion for the emergence of altruistic behaviour, which is explicitly statistical in character, not as predictive of proximate or mediating mechanisms of altruistic behaviour, which may not necessarily be determined by genetic relatedness (or blood ties) per se . These alternative non-deterministic and non-reductionist understandings of inclusive fitness theory and human behavior have been argued to be compatible with anthropologists' decades of data on human kinship, and compatible with anthropologists' perspectives on human kinship. This position (e.g. nurture kinship ) has been largely accepted by social anthropologists, whilst the former position (still held by evolutionary psychologists, see above) remains rejected by social anthropologists. Inclusive fitness theory, first proposed by Bill Hamilton in the early 1960s, proposes a selective criterion for the potential evolution of social traits in organisms, where social behavior that is costly to an individual organism's survival and reproduction could nevertheless emerge under certain conditions. The key condition relates to the statistical likelihood that significant benefits of a social trait or behavior accrue to (the survival and reproduction of) other organisms who also carry the social trait. Inclusive fitness theory is a general treatment of the statistical probabilities of social traits accruing to any other organisms likely to propagate a copy of the same social trait. 
Kin selection theory treats the narrower but more straightforward case of the benefits accruing to close genetic relatives (or what biologists call 'kin') who may also carry and propagate the trait. Under conditions where the social trait sufficiently correlates (or more properly, regresses ) with other likely bearers, a net overall increase in reproduction of the social trait in future generations can result. The concept serves to explain how natural selection can perpetuate altruism. If there is an "altruism gene" (or complex of genes or heritable factors) that influence an organism's behavior in such a way that is helpful and protective of relatives and their offspring, this behavior can also increase the proportion of the altruism gene in the population, because relatives are likely to share genes with the altruist due to common descent. In formal terms, if such a complex of genes arises, Hamilton's rule (rb>c) specifies the selective criteria (in terms of relatedness (r), cost (c) benefit (b)) for such a trait to increase in frequency in the population (see Inclusive fitness for more details). Hamilton noted that inclusive fitness theory does not by itself predict that a given species will necessarily evolve such altruistic behaviors, since an opportunity or context for interaction between individuals is a more primary and necessary requirement in order for any social interaction to occur in the first place. As Hamilton put it, "Altruistic or selfish acts are only possible when a suitable social object is available. In this sense behaviours are conditional from the start." (Hamilton 1987, 420). [ 1 ] In other words, whilst inclusive fitness theory specifies a set of necessary criteria for the evolution of certain altruistic traits, it does not specify a sufficient condition for their evolution in any given species, since the typical ecology, demographics and life pattern of the species must also allow for social interactions between individuals to occur before any potential elaboration of social traits can evolve in regard to those interactions. The initial presentation of inclusive fitness theory (in the mid-1960s, see The Genetical Evolution of Social Behaviour ) focused on making the general mathematical case for the possibility of social evolution. However, since many field biologists mainly use theory as a guide to their observations and analysis of empirical phenomena, Hamilton also speculated about possible proximate behavioural mechanisms that might be observable in organisms whereby a social trait could effectively achieve this necessary statistical correlation between its likely bearers: The selective advantage which makes behaviour conditional in the right sense on the discrimination of factors which correlate with the relationship of the individual concerned is therefore obvious. It may be, for instance, that in respect of a certain social action performed towards neighbours indiscriminately, an individual is only just breaking even in terms of inclusive fitness. If he could learn to recognise those of his neighbours who really were close relatives and could devote his beneficial actions to them alone an advantage to inclusive fitness would at once appear. Thus a mutation causing such discriminatory behaviour itself benefits inclusive fitness and would be selected. 
In fact, the individual may not need to perform any discrimination so sophisticated as we suggest here; a difference in the generosity of his behaviour according to whether the situations evoking it were encountered near to, or far from, his own home might occasion an advantage of a similar kind." (Hamilton 1996 [1964], 51) [ 2 ] Hamilton here was suggesting two broad proximate mechanisms by which social traits might meet the criterion of correlation specified by the theory: Kin recognition (active discrimination) : If a social trait enables an organism to distinguish between different degrees of genetic relatedness when interacting in a mixed population, and to discriminate (positively) in performing social behaviours on the basis of detecting genetic relatedness, then the average relatedness of the recipients of altruism could be high enough to meet the criterion. In another section of the same paper (page 54) Hamilton considered whether 'supergenes' that identify copies of themselves in others might evolve to give more accurate information about genetic relatedness. He later (1987, see below) considered this to be wrong-headed and withdrew the suggestion. Viscous populations (spatial cues) : Even indiscriminate altruism may achieve the correlation in 'viscous' populations where individuals have low rates of dispersal or short distances of dispersal from their home range (their location of birth). Here, social partners are typically genealogically closely related, and so altruism can flourish even in the absence of kin recognition and kin discrimination faculties – spatial proximity and circumstantial cues provide the necessary correlation. These two alternative suggestions had important effects on how field biologists understood the theory and what they looked for in the behavior of organisms. Within a few years biologists were looking for evidence that 'kin recognition' mechanisms might occur in organisms, assuming this was a necessary prediction of inclusive fitness theory, leading to a sub-field of 'kin recognition' research. A common source of confusion around inclusive fitness theory is that Hamilton's early analysis included some inaccuracies, that, although corrected by him in later publications , are often not fully understood by other researchers who attempt to apply inclusive fitness to understanding organisms' behaviour. For example, Hamilton had initially suggested that the statistical correlation in his formulation could be understood by a correlation coefficient of genetic relatedness, but quickly accepted George Price's correction that a general regression coefficient was the more relevant metric, and together they published corrections in 1970. A related confusion is the connection between inclusive fitness and multi-level selection , which are often incorrectly assumed to be mutually exclusive theories. The regression coefficient helps to clarify this connection: Because of the way it was first explained, the approach using inclusive fitness has often been identified with 'kin selection' and presented strictly as an alternative to 'group selection'. But the foregoing discussion shows that kinship should be considered just one way of getting positive regression of genotype in the recipient, and that it is this positive regression that is vitally necessary for altruism. 
Thus the inclusive-fitness concept is more general than 'kin selection'. (Hamilton 1996 [1975], 337) [ 3 ] Hamilton also later modified his thinking about likely mediating mechanisms whereby social traits achieve the necessary correlation with genetic relatedness. Specifically he corrected his earlier speculations that an innate ability (and 'supergenes') to recognise actual genetic relatedness was a likely mediating mechanism for kin altruism: But once again, we do not expect anything describable as an innate kin recognition adaptation, used for social behaviour other than mating, for the reasons already given in the hypothetical case of the trees. (Hamilton 1987, 425) [ 1 ] The point about inbreeding avoidance is significant, since the whole genome of sexual organisms benefits from avoiding close inbreeding; there is a different selection pressure at play compared to the selection pressure on social traits (see Kin recognition for more information). It does not follow… that ability to discriminate degrees of relatedness automatically implies that kin selection is the model relevant to its origin. In fact, since even earlier than Darwin, it had been realised that most organisms tend to avoid closely inbred matings. The reasons must have to do with the function of sexuality and this is not quite yet resolved (see e.g. Bell, 1982; Shields, 1982; Hamilton, 1982); but whatever the function is, here must be another set of reasons for discriminating. Some animals clearly do use discrimination for purposes of mate selection. Japanese quail for example evidently use an early imprinting of their chick companions towards obtaining, much later, preferred degrees of consanguinity in their mates (Bateson 1983). (Hamilton 1987, 419) Since Hamilton's 1964 speculations about active discrimination mechanisms (above), other theorists such as Richard Dawkins had clarified that there would be negative selection pressure against mechanisms for genes to recognize copies of themselves in other individuals and discriminate socially between them on this basis. Dawkins used his ' Green beard ' thought experiment, where a gene for social behaviour is imagined also to cause a distinctive phenotype that can be recognised by other carriers of the gene. Due to conflicting genetic similarity in the rest of the genome, there would be selection pressure for green-beard altruistic sacrifices to be suppressed via meiotic drive . Hamilton's later clarifications often go unnoticed, and because of the long-standing assumption that kin selection requires innate powers of kin recognition, some theorists have later tried to clarify the position: "[T]he fact that animals benefit from engaging in spatially mediated behaviors is not evidence that these animals can recognize their kin, nor does it support the conclusion that spatially based differential behaviors represent a kin recognition mechanism (see also discussions by Blaustein, 1983; Waldman, 1987; Halpin 1991). In other words, from an evolutionary perspective it may well be advantageous for kin to aggregate and for individuals to behave preferentially towards nearby kin, whether or not this behaviour is the result of kin recognition per se" (Tang-Martinez 2001, 25) [ 4 ] In his original papers on inclusive fitness theory, Hamilton pointed out a sufficiently high relatedness to favour altruistic behaviours could accrue in two ways – kin discrimination or limited dispersal (Hamilton, 1964, 1971, 1972, 1975).
There is a huge theoretical literature on the possible role of limited dispersal reviewed by Platt & Bever (2009) and West et al. (2002a), as well as experimental evolution tests of these models (Diggle et al., 2007; Griffin et al., 2004; Kümmerli et al., 2009). However, despite this, it is still sometimes claimed that kin selection requires kin discrimination (Oates & Wilson, 2001; Silk, 2002). Furthermore, a large number of authors appear to have implicitly or explicitly assumed that kin discrimination is the only mechanism by which altruistic behaviours can be directed towards relatives... [T]here is a huge industry of papers reinventing limited dispersal as an explanation for cooperation. The mistakes in these areas seem to stem from the incorrect assumption that kin selection or indirect fitness benefits require kin discrimination (misconception 5), despite the fact that Hamilton pointed out the potential role of limited dispersal in his earliest papers on inclusive fitness theory (Hamilton, 1964; Hamilton, 1971; Hamilton, 1972; Hamilton, 1975). (West et al. 2010, p.243 and supplement) [ 5 ] The assumption that 'kin selection requires kin discrimination' has obscured the more parsimonious possibility that spatial-cue-based mediation of social cooperation, grounded in limited dispersal and shared developmental context, is common in the many organisms that have been studied, including social mammal species. As Hamilton pointed out, "Altruistic or selfish acts are only possible when a suitable social object is available. In this sense behaviours are conditional from the start" (Hamilton 1987, see above section). Since a reliable context of interaction between social actors is a necessary condition for social traits to emerge in the first place, such a context is necessarily present and can be leveraged by context-dependent cues to mediate social behaviours. Focus on the mediating mechanisms of limited dispersal and reliable developmental context has allowed significant progress in applying kin selection and inclusive fitness theory to a wide variety of species, including humans, [ 6 ] on the basis of cue-based mediation of social bonding and social behaviours (see below). In mammals , as well as in other species, ecological niche and demographic conditions strongly shape typical contexts of interaction between individuals, including the frequency and circumstances of interactions between genetic relatives. Although mammals exist in a wide variety of ecological conditions and varying demographic arrangements, certain contexts of interaction between genetic relatives are nevertheless reliable enough for selection to act upon. Newborn mammals are often immobile and always totally dependent (socially dependent, if you will) on their carer(s) for nursing with nutrient-rich milk and for protection. This fundamental social dependence is a fact of life for all mammals, including humans. These conditions lead to a reliable spatial context, one that has been evolutionarily typical for most mammal species, in which there is a statistical association of replica genes between a reproductive female and her infant offspring. Beyond this natal context, extended possibilities for frequent interaction between related individuals are more variable and depend on group living vs. solitary living, mating patterns, duration of pre-maturity development, dispersal patterns, and more.
For example, in group-living primates in which females remain in their natal group for their entire lives, there will be lifelong opportunities for interactions between females related through their mothers, grandmothers and so on. These conditions thus also provide a spatial context for cue-based mechanisms to mediate social behaviours. The most widespread and important mechanism for kin recognition in mammals appears to be familiarity through prior association (Bekoff, 1981; Sherman, 1980). During development, individuals learn and respond to cues from the most familiar or most commonly encountered conspecifics in their environment. Individuals respond to familiar individuals as kin and unfamiliar individuals as nonkin. (Erhart et al. 1997, 153–154) Mammalian young are born into a wide variety of social situations, ranging from being isolated from all other individuals except their mother (and possibly other siblings) to being born into large social groups. Although siblings do interact in a wide variety of species having different life histories, there are certain conditions, almost all of which have to do with the developmental environment, that will favor a biased occurrence of interactions between littermates and/or different-aged siblings. It will be argued later that it is these, and perhaps other, conditions that predispose (in a probabilistic way) siblings to interact with one another. However, if two (or more) very young unrelated individuals (assume conspecifics for simplicity) are exposed to these conditions, they too will behave like siblings. That is, although [relatedness] and [familiarity] are tightly linked in many mammals, it is [familiarity] that can override [relatedness], rather than the reverse. (Bekoff 1981, 309) In addition to the above examples, a wide variety of evidence from mammal species supports the finding that shared context and familiarity mediate social bonding, rather than genetic relatedness per se . [ 6 ] Cross-fostering studies (placing unrelated young in a shared developmental environment) strongly demonstrate that unrelated individuals bond and cooperate just as normal littermates would. The evidence therefore demonstrates that bonding and cooperation are mediated by proximity, shared context and familiarity, not by active recognition of genetic relatedness. This is problematic for those biologists who wish to claim that inclusive fitness theory predicts that social cooperation is mediated by genetic relatedness, rather than understanding the theory simply to state that social traits can evolve under conditions where there is a statistical association of genetically related organisms. The former position sees the expression of cooperative behaviour as more or less deterministically caused by genetic relatedness, whereas the latter position does not. The distinction between cooperation mediated by shared context and cooperation mediated by genetic relatedness per se has significant implications for whether inclusive fitness theory can be seen as compatible with the anthropological evidence on human social patterns. The shared-context perspective is largely compatible; the genetic-relatedness perspective is not (see below). The debate about how to interpret the implications of inclusive fitness theory for human social cooperation has paralleled some of the key misunderstandings outlined above.
Initially, evolutionary biologists interested in humans, like their colleagues studying other species (see West et al., above), wrongly assumed that in the human case 'kin selection requires kin discrimination'. In other words, many biologists assumed that strong social bonds accompanied by altruism and cooperation in human societies (long studied by the anthropological field of kinship) were necessarily built upon recognizing genetic relatedness (or 'blood ties'). This seemed to fit well with historical research in anthropology originating in the nineteenth century (see history of kinship ), which often assumed that human kinship was built upon a recognition of shared blood ties. However, independently of the emergence of inclusive fitness theory, from the 1960s onwards many anthropologists had reexamined the balance of findings in their own ethnographic data and had begun to reject the notion that human kinship is 'caused by' blood ties (see Kinship ). Anthropologists have gathered very extensive ethnographic data on human social patterns and behaviour over a century or more, from a wide spectrum of different cultural groups. These data demonstrate that many cultures do not consider 'blood ties' (in the genealogical sense) to underlie their close social relationships and kinship bonds. Instead, social bonds are often considered to be based on shared circumstances, including living together (co-residence), sleeping close together, working together, sharing food (commensality) and other forms of shared life. Comparative anthropologists have shown [ 6 ] that these aspects of shared circumstance are a significant component of what influences kinship in most human cultures, whether or not 'blood ties' are present (see Nurture kinship , below). Although blood ties (and genetic relatedness) often correlate with kinship, just as in the case of mammals (above section), evidence from human societies suggests that it is not genetic relatedness per se that is the mediating mechanism of social bonding and cooperation; rather, it is the shared context (albeit one typically consisting of genetic relatives), and the familiarity that arises from it, that mediate the social bonds. This implies that genetic relatedness is neither the determining mechanism nor a requirement for the formation of social bonds in kinship groups, or for the expression of altruism in humans, even if statistical correlations of genetic relatedness are an evolutionary criterion for the emergence of such social traits over evolutionary timescales. Understanding this distinction, between the statistical role of genetic relatedness in the evolution of social traits and its lack of a necessary determining role in the mechanisms that mediate social bonding and the expression of altruism, is key to the proper application of inclusive fitness theory to human social behaviour (as well as to that of other mammals). Compatible with biologists' emphasis on familiarity and shared context mediating social bonds, the concept of nurture kinship in the anthropological study of human social relationships highlights the extent to which such relationships are brought into being through acts of sharing, care and nurture between individuals who live in close proximity.
Additionally, the concept highlights ethnographic findings that, in a wide swath of human societies, people understand, conceptualize and symbolize their relationships predominantly in terms of giving, receiving and sharing nurture. The concept stands in contrast to earlier anthropological accounts of human kinship relations as fundamentally based on "blood ties", some other form of shared substance, or a proxy for these (as in fictive kinship ), and to the accompanying notion that people universally understand their social relationships predominantly in these terms. The nurture kinship perspective on the ontology of social ties, and on how people conceptualize them, has become stronger in the wake of David M. Schneider 's influential Critique of the Study of Kinship [ 1 ] and Holland's subsequent Social Bonding and Nurture Kinship: Compatibility between Cultural and Biological Approaches , the latter demonstrating that biological theory and evidence, as well as the ethnographic record, support the nurture perspective more strongly than the blood perspective. Both Schneider and Holland argue that the earlier blood theory of kinship derived from an unwarranted extension of symbols and values from anthropologists' own cultures (see ethnocentrism ).
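The regression formulation mentioned earlier (the Hamilton and Price correction, in which relatedness is a regression coefficient rather than a simple correlation) can be made concrete with a small numerical sketch. The Python fragment below is illustrative only: the genotype values are invented, and the decision rule applied is the usual statement of Hamilton's condition (relatedness times benefit exceeding cost, rb > c), which is assumed here as the standard formulation rather than quoted from the sources above.

```python
# Illustrative sketch only: hypothetical genotype data; Hamilton's rule (r*b > c) assumed.
def regression_relatedness(actor, recipient):
    """Regression coefficient of recipient genetic value on actor genetic value."""
    n = len(actor)
    mean_a = sum(actor) / n
    mean_r = sum(recipient) / n
    cov = sum((a - mean_a) * (r - mean_r) for a, r in zip(actor, recipient)) / n
    var_a = sum((a - mean_a) ** 2 for a in actor) / n
    return cov / var_a

# Hypothetical frequencies of an altruism allele in actors and in the
# recipients of their help (e.g. nestmates under limited dispersal).
actors = [1.0, 1.0, 0.5, 0.0, 1.0, 0.5, 0.0, 0.5]
recipients = [1.0, 0.5, 0.5, 0.0, 0.5, 0.5, 0.5, 0.0]

r = regression_relatedness(actors, recipients)
b, c = 4.0, 1.0  # hypothetical fitness benefit to recipient and cost to actor
print(f"regression relatedness r = {r:.2f}")
print("altruism favoured (r*b > c):", r * b > c)
```

With these made-up numbers the regression is positive and the condition is met; if recipients were drawn at random from the population, the coefficient would be near zero and indiscriminate altruism would not be favoured, which is the point of the correlation criterion discussed above.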
https://en.wikipedia.org/wiki/Inclusive_fitness_in_humans
In petrology and geochemistry , an incompatible element is one that is unsuitable in size and/or charge to the cation sites of the minerals in which it is included. It is defined by a partition coefficient between rock-forming minerals and melt being much smaller than 1. [ 1 ] During the fractional crystallization of magma and magma generation by the partial melting of the Earth's mantle and crust , elements that have difficulty in entering cation sites of the minerals are concentrated in the melt phase of the magma ( liquid phase ). Two groups of incompatible elements that have difficulty entering the solid phase are known by acronyms. One group includes elements having large ionic radius , such as potassium , rubidium , caesium , strontium , and barium (called LILE , or large-ion lithophile elements), and the other group includes elements of large ionic valences (or high electrical charges), such as zirconium , niobium , hafnium , rare-earth elements (REE), thorium , uranium and tantalum (called HFSE , or high-field-strength elements). [ 1 ] Another way to classify incompatible elements is by mass ( lanthanide series ): light rare-earth elements (LREE) are La , Ce , Pr , Nd , and Sm , and heavy rare-earth elements (HREE) are Eu – Lu . Rocks or magmas that are rich, or only slightly depleted, in light rare-earth elements are referred to as "fertile", and those with strong depletions in LREE are referred to as "depleted". [ 2 ]
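How strongly a small partition coefficient concentrates an element in the melt can be illustrated with a short calculation. The sketch below uses the standard batch (equilibrium) melting relation C_L/C_0 = 1/(D + F(1 - D)); this equation and the example coefficient values are not taken from the article above and are assumed here purely for illustration.

```python
# Illustrative sketch: standard batch-melting equation assumed; example values hypothetical.
def melt_enrichment(D, F):
    """Concentration in the melt relative to the source, C_L / C_0,
    for bulk partition coefficient D and melt fraction F."""
    return 1.0 / (D + F * (1.0 - D))

for element, D in [("Rb-like incompatible element", 0.01), ("Ni-like compatible element", 5.0)]:
    ratio = melt_enrichment(D, F=0.10)  # 10% partial melting
    print(f"{element}: C_L/C_0 = {ratio:.2f}")
```

For a partition coefficient far below 1 the melt is enriched roughly ninefold at 10% melting, whereas a compatible element (D > 1) stays preferentially in the residual solid, matching the qualitative description above.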
https://en.wikipedia.org/wiki/Incompatible_element
In economic theory, the field of contract theory can be subdivided into the theory of complete contracts and the theory of incomplete contracts . In contract law, an incomplete contract is one that is defective or uncertain in a material respect. A complete contract in economic theory means a contract which provides for the rights, obligations and remedies of the parties in every possible state of the world. [ 1 ] However, since the human mind is a scarce resource and cannot collect, process, and understand an infinite amount of information, economic actors are limited in their rationality (that is, limited in their ability to understand and solve complex problems) and cannot anticipate all possible contingencies. [ 2 ] [ 3 ] Furthermore, because it may simply be too expensive to write a complete contract, the parties will often opt for a "sufficiently complete" contract. [ 4 ] In short, every contract is incomplete for a variety of reasons and limitations. The incompleteness of a contract also means that the protection it provides may be inadequate. [ 5 ] Even if a contract is incomplete, its legal validity cannot be denied, and incompleteness does not mean that the contract is unenforceable. The terms and provisions of the contract still have influence and are binding on the parties. As for contractual incompleteness, the law is concerned with when and how a court should fill gaps in a contract when those gaps are too numerous or too uncertain for the contract to be enforceable, and with when a party is obliged to negotiate in order to make an incomplete contract fully complete or to achieve the desired final contract. [ 1 ] The incomplete contracting paradigm was pioneered by Sanford J. Grossman , Oliver D. Hart , and John H. Moore . In their seminal contributions, Grossman and Hart (1986), Hart and Moore (1990), and Hart (1995) argue that in practice, contracts cannot specify what is to be done in every possible contingency. [ 6 ] [ 7 ] [ 8 ] At the time of contracting, future contingencies may not even be describable. Moreover, parties cannot commit themselves never to engage in mutually beneficial renegotiation later on in their relationship. Thus, an immediate consequence of the incomplete contracting approach is the so-called hold-up problem . [ 9 ] Since at least in some states of the world the parties will renegotiate their contractual arrangements later on, they have insufficient incentives to make relationship-specific investments (since a party's investment returns will partially go to the other party in the renegotiations). Oliver Hart and his co-authors argue that the hold-up problem may be mitigated by choosing a suitable ownership structure ex ante (according to the incomplete contracting paradigm, more complex contractual arrangements are ruled out). Hence, the property rights approach to the theory of the firm can explain the pros and cons of vertical integration , thus providing a formal answer to important questions regarding the boundaries of the firm that were first raised by Ronald Coase (1937). [ 10 ] The incomplete contracting approach has been the subject of a still-ongoing discussion in contract theory. In particular, some authors such as Maskin and Tirole (1999) argue that rational parties should be able to solve the hold-up problem with complex contracts, while Hart and Moore (1999) point out that these contractual solutions do not work if renegotiation cannot be ruled out.
[ 11 ] [ 12 ] [ 13 ] Some authors have argued that the pros and cons of vertical integration can sometimes also be explained in complete contracting models. [ 14 ] The property rights approach based on incomplete contracting has been criticized by Williamson (2000) because it is focused on ex-ante investment incentives, while it neglects ex-post inefficiencies. [ 15 ] It has been pointed out by Schmitz (2006) that the property rights approach can be extended to the case of asymmetric information, which may explain ex-post inefficiencies. [ 16 ] The property rights approach has also been extended by Chiu (1998) and DeMeza and Lockwood (1998), who allow for different ways to model the renegotiations. [ 17 ] [ 18 ] In a more recent extension, Hart and Moore (2008) have argued that contracts may serve as reference points. [ 19 ] The theory of incomplete contracts has been successfully applied in various contexts, including privatization , [ 20 ] [ 21 ] international trade , [ 22 ] [ 23 ] management of research & development , [ 24 ] [ 25 ] allocation of formal and real authority, [ 26 ] advocacy, [ 27 ] and many others. The 2016 Nobel Prize in Economics was awarded to Oliver D. Hart and Bengt Holmström for their contribution to contract theory, including incomplete contracts. [ 28 ] In 1986, Grossman and Hart (1986) used incomplete contract theory in their seminal paper on the costs and benefits of vertical integration to answer the question " What is a firm and what determines its boundaries?". The Grossman-Hart theory of property rights is the first to explain [ citation needed ] in a straightforward manner why markets are so important in the context of organizational choice. The advantage of non-integrated markets is that the owners (entrepreneurs) can exercise their control, while the advantage of market transactions also stems from the power of restraint conferred by ownership. [ 29 ] The fact that economic actors are only finitely rational and cannot foresee all possible contingencies is perhaps at the heart of the problem. [ 30 ] However, as this uncertain state of nature or behavior cannot be written into an enforceable contract, when the contract is incomplete, not all uses of the asset can be specified in advance and any contract negotiated in advance must leave some discretion as to the use of the asset, with the 'owner' of the company being the party to whom residual control is allocated at the contract stage. Grossman and Hart claim that the essence of the firm lies in the decision-making power conferred by the ownership of its assets. In a world of incomplete contracts, decision-making power plays a key role in determining the incentives of owners. [ 31 ] Grossman and Hart believe that the optimal allocation or governance structure of property rights is the allocation that minimizes efficiency losses. Therefore, where Party A's investment is more important than Party B's, it is preferable to allocate title to the asset to Party A, even if this discourages Party B's investment. [ 32 ] Incomplete contractual/property rights approach gives rise to theories of ownership and vertical integration, and it also directly addresses the question of what constitutes a firm. Both Grossman and Hart consider the firm to be a collection of assets over which the owners have residual control. 
[ 3 ] In 1990, Oliver Hart and John Moore published another article, "Property Rights and the Nature of the Firm", which provided a framework for addressing when transactions should take place within the firm and when they should take place through the market. [ 33 ] The essence of the 1986 Grossman-Hart model is the optimal allocation of the constraining forces conferred by ownership, and its model of property rights concerns the allocation of assets between individuals (entrepreneurs) rather than between firms. The Hart-Moore model of 1990 extends this analysis of optimal allocation; its property rights theory clarifies the assumptions about asset allocation between firms and identifies a firm with the assets that its owners control. [ 34 ] One of Hart-Moore's key findings suggests an explanation for why firms, rather than workers, tend to own most of the non-human assets used to produce goods and services: complementary assets should be owned by one person. [ 31 ] Incomplete contracts can create scenarios that lead to inefficient investments and market failures, but incompleteness is essentially a feasibility constraint. The 'strategic ambiguity hypothesis' holds that the optimal formal contract may be deliberately incomplete: companies use strategic ambiguity to circumvent legal constraints that would invalidate these agreements, leaving the law insufficient to prevent their formation and performance. [ 4 ] Contractual terms are subject to many restrictions, and incomplete contracts are bound by them as well. Contractual terms are the specific details of an agreement, including the rights and obligations of the parties. They are broadly divided into two types, express terms and implied terms. Express terms are those included in the signed contract, or in a notice reasonably brought to the other party's attention. Implied terms include those implied by the courts and by any relevant legal provisions. [ 35 ] Courts are often willing to imply a term into a settled contract to "fill in the gaps", provided certain conditions are met. For example, the implied terms in consumer contracts under the ACL ( Australian Consumer Law ) are intended to protect the buyer, and there is an implied term in every contract for the sale of goods that the seller has title to the goods and therefore the right to sell them to the buyer. [ 37 ] Certain categories of agreement are treated as illegal or unenforceable, including criminal or tortious contracts, [ 39 ] contracts to promote corruption in public office, [ 40 ] contracts intended to avoid paying taxes, [ 41 ] and contracts to prevent or delay the administration of justice. [ 42 ] The effect of a breach of a statutory provision on the validity and enforceability of a contract depends on the wording of the regulation itself. [ 43 ] An agreement may be illegal simply because it violates a statutory prohibition. [ 44 ]
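The hold-up problem described earlier in this article can be illustrated with a stylized calculation. The sketch below is not drawn from the cited literature; the payoff numbers are hypothetical, and ex-post renegotiation is assumed, for simplicity, to split the added surplus equally between the parties.

```python
# Illustrative sketch of the hold-up problem.
# Numbers are hypothetical; ex-post renegotiation is assumed to split the
# surplus 50/50 (a simple bargaining assumption).
investment_cost = 10.0   # relationship-specific investment by party A
extra_surplus = 16.0     # value the investment adds to the joint relationship

# Efficient benchmark: invest whenever the added surplus exceeds the cost.
efficient = extra_surplus > investment_cost

# Incomplete contract: terms cannot condition on the investment, so the
# surplus is renegotiated ex post and A captures only half of it.
a_share_after_renegotiation = 0.5 * extra_surplus
a_invests = a_share_after_renegotiation > investment_cost

print("investment is efficient:   ", efficient)   # True  (16 > 10)
print("A invests under hold-up:   ", a_invests)   # False (8 < 10)
```

Because A anticipates surrendering part of the return in renegotiation, the socially efficient investment is not made, which is the underinvestment problem that a suitable ex-ante ownership structure is meant to mitigate.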
https://en.wikipedia.org/wiki/Incomplete_contracts
Incomplete lineage sorting (ILS) [ 1 ] [ 2 ] [ 3 ] (also referred to as hemiplasy, deep coalescence , retention of ancestral polymorphism , or trans-species polymorphism) is a phenomenon in evolutionary biology and population genetics that results in discordance between species and gene trees . [ 4 ] [ 5 ] By contrast, complete lineage sorting results in concordant species and gene trees. ILS occurs in the context of a gene that exists as multiple alleles in an ancestral species. If a speciation event occurs in this situation, either complete lineage sorting will occur, in which both daughter species inherit all alleles of the gene in question, or incomplete lineage sorting will occur, in which one or both daughter species inherit only a subset of the alleles present in the parental species. For example, if two alleles of a gene are present and a speciation event occurs, one of the two daughter species might inherit both alleles while the second daughter species inherits only one of the two. In this case, incomplete lineage sorting has occurred. [ 6 ] The concept of incomplete lineage sorting has some important implications for phylogenetic techniques. The persistence of polymorphisms across different speciation events can cause incomplete lineage sorting. Suppose two successive speciation events occur, in which an ancestral species gives rise first to species A, and later to species B and C. A single gene can have multiple versions ( alleles ) causing different characters to appear (polymorphisms). In the example shown in Figure 1, the gene G has two versions (alleles), G0 and G1. The ancestor of A, B and C originally had only one version of gene G, G0. At some point, a mutation occurred and the ancestral population became polymorphic, with some individuals having G0 and others G1. When species A split off, it retained only G1, while the ancestor of B and C remained polymorphic. When B and C diverged, B retained only G1 and C only G0; neither was now polymorphic in G. The tree for gene G shows A and B as sisters, whereas the species tree shows B and C as sisters. If the phylogeny of these species is based on gene G, it will not represent the actual relationships between the species. In other words, the most closely related species will not necessarily inherit the most closely related genes. This is of course a simplified example of incomplete lineage sorting, and in real research the situation is usually more complex, involving more genes and species. [ 7 ] [ 8 ] However, other mechanisms can lead to the same apparent discordance: for example, alleles can move across species boundaries via hybridization, and DNA can be transferred between species by viruses. [ 9 ] This is illustrated in Figure 2. Here the ancestor of A, B and C, and the ancestor of B and C, had only the G0 version of gene G. A mutation occurred at the divergence of B and C, and B acquired a mutated version, G1. Some time later, the arrow shows that G1 was transferred from B to A by some means (e.g. hybridization or horizontal gene transfer). Studying only the final states of G in the three species makes it appear that A and B are sisters rather than B and C, as in Figure 1, but in Figure 2 this is not caused by incomplete lineage sorting. Incomplete lineage sorting has important implications for phylogenetic research: a phylogenetic tree built from such data may not reflect the actual relationships between the species.
However, gene flow between lineages by hybridization or horizontal gene transfer may produce the same conflicting phylogenetic trees. Distinguishing these different processes may seem difficult, but much research and various statistical approaches are being developed to gain greater insight into these evolutionary dynamics. [ 10 ] One way to reduce the impact of incomplete lineage sorting is to use multiple genes when creating species or population phylogenies: the more genes used, the more reliable the phylogeny becomes. [ 8 ] Incomplete lineage sorting commonly happens in sexually reproducing species because the species cannot be traced back to a single individual or breeding pair. When populations are large (i.e. thousands of individuals), each gene harbours some diversity, and the gene tree includes pre-existing lineages. The larger the population, the longer these ancestral lineages persist. When large ancestral populations are combined with closely timed speciation events, different pieces of DNA retain conflicting affiliations. This makes it hard to determine a common ancestor or points of branching. [ 4 ] Among the primates, chimpanzees and bonobos are more closely related to each other than to any other taxa and are thus sister taxa . Still, for 1.6% of the bonobo genome, sequences are more closely related to homologues of humans than to those of chimpanzees, which is probably a result of incomplete lineage sorting. [ 4 ] A study of more than 23,000 DNA sequence alignments in the family Hominidae (great apes, including humans) showed that about 23% did not support the known sister relationship of chimpanzees and humans. [ 9 ] In human evolution, incomplete lineage sorting is used to diagram hominin lineages that may have failed to sort out at the same time that speciation occurred in prehistory. [ 11 ] With the advent of genetic testing and genome sequencing, researchers found that the genetic relationships between hominin lineages might disagree with previous understandings of their relatedness based on physical characteristics. [ 11 ] Moreover, divergence of the last common ancestor (LCA) may not necessarily occur at the same time as speciation. [ 12 ] Lineage sorting is a method that allows paleoanthropologists to explore genetic relationships and divergences that may not fit with their previous speciation models based on phylogeny alone. [ 11 ] Incomplete lineage sorting of the human family tree is an area of great interest. There are a number of unknowns when considering both the transition from archaic humans to modern humans and the divergence of the other great apes from the hominin lineage. [ 13 ] Incomplete lineage sorting means that the average divergence time between genes may differ from the divergence time between species. Models suggest that the average divergence time between the genes in the human and chimpanzee genomes is older than the split between humans and gorillas. This means that the common ancestor of humans and chimpanzees retained traces of genetic material that was present in the common ancestor of humans, chimpanzees, and gorillas. [ 12 ] However, the genetic tree differs slightly from the species, or phylogeny, tree.
[ 14 ] In the phylogenetic tree of humans, bonobos, chimpanzees, and gorillas, the results show that the separation of bonobos and chimpanzees occurred relatively close in time to the split between the bonobo-chimpanzee ancestor and humans, [ 12 ] indicating that humans and chimpanzees shared a common ancestor for several million years after their separation from gorillas. This gives rise to the phenomenon of incomplete lineage sorting. Today, researchers rely on DNA fragments to study the evolutionary relationships among humans and their relatives, in the hope that genomes from different types of humans will provide information about speciation and ancestral processes. [ 15 ] Incomplete lineage sorting is a common feature in viral phylodynamics , where the phylogeny representing transmission of a disease from one person to the next (the population-level tree) often does not correspond to the tree created from a genetic analysis, because of the population bottlenecks that are an inherent feature of viral transmission. Figure 3 illustrates how this can occur. This has relevance to the criminal transmission of HIV : in some criminal cases, a phylogenetic analysis of one or two genes from the strains of the accused and the victim has been used to infer transmission; however, the commonality of incomplete lineage sorting means that transmission cannot be inferred solely on the basis of such a basic analysis. [ 16 ] Jacques and List (2019) [ 17 ] show that the concept of incomplete lineage sorting can be applied to account for non-treelike phenomena in language evolution. Kalyan and François (2019), proponents of the method of historical glottometry , a model challenging the applicability of the tree model in historical linguistics, concur that "Historical Glottometry does not challenge the family tree model once incomplete lineage sorting has been taken into account." [ 18 ]
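The verbal argument above, that larger ancestral populations and more closely spaced speciation events make gene trees more likely to conflict with the species tree, can be illustrated with a small coalescent-style simulation. The sketch below is a simplification under standard neutral-coalescent assumptions and is not taken from the sources cited here: for a species tree ((B,C),A), the B and C gene copies either coalesce within the internal branch (giving a concordant gene tree) or all three lineages persist into the common ancestral population, where the first pair to coalesce is random and two of the three possible pairings yield a discordant tree.

```python
# Minimal sketch under standard neutral-coalescent assumptions; parameter values are hypothetical.
import math
import random

def discordance_probability(internal_branch_generations, diploid_pop_size):
    """Analytical probability that a gene tree conflicts with the species tree ((B,C),A)."""
    t = internal_branch_generations / (2.0 * diploid_pop_size)  # branch length in coalescent units
    return (2.0 / 3.0) * math.exp(-t)

def simulate_discordance(internal_branch_generations, diploid_pop_size, replicates=100_000):
    discordant = 0
    p_coalesce = 1.0 - math.exp(-internal_branch_generations / (2.0 * diploid_pop_size))
    for _ in range(replicates):
        # If B and C fail to coalesce on the internal branch, the first pair to
        # coalesce in the ancestral population is random: 2 of 3 pairings are discordant.
        if random.random() >= p_coalesce and random.random() < 2.0 / 3.0:
            discordant += 1
    return discordant / replicates

for pop_size in (10_000, 100_000):
    print(pop_size,
          round(discordance_probability(50_000, pop_size), 3),
          round(simulate_discordance(50_000, pop_size), 3))
```

With the internal branch held fixed, the larger ancestral population shows far more gene-tree discordance, matching the qualitative claim that ancestral polymorphism persists longer in big populations.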
https://en.wikipedia.org/wiki/Incomplete_lineage_sorting
In mathematics , the incompressibility method is a proof method like the probabilistic method , the counting method or the pigeonhole principle . To prove that an object in a certain class (on average) satisfies a certain property, select an object of that class that is incompressible . If it does not satisfy the property, it can be compressed by computable coding. Since it can generally be proven that almost all objects in a given class are incompressible, the argument demonstrates that almost all objects in the class have the property involved (not just the average). Selecting an incompressible object is ineffective and cannot be done by a computer program. However, a simple counting argument usually shows that almost all objects of a given class cannot be compressed by more than a few bits (that is, they are incompressible). The incompressibility method depends on an objective, fixed notion of incompressibility. Such a notion was provided by the theory of Kolmogorov complexity , named for Andrey Kolmogorov . [ 1 ] One of the first uses of the incompressibility method with Kolmogorov complexity in the theory of computation was to prove that the running time of a one-tape Turing machine is quadratic for accepting a palindromic language and that sorting algorithms require at least $n\log n$ time to sort $n$ items. [ 2 ] One of the early influential papers using the incompressibility method was published in 1980. [ 3 ] The method was applied to a number of fields, and its name was coined in a textbook. [ 4 ] According to an elegant Euclidean proof, there are infinitely many prime numbers . Bernhard Riemann demonstrated that the number of primes less than a given number is connected with the zeros of the Riemann zeta function . Jacques Hadamard and Charles Jean de la Vallée-Poussin proved in 1896 that this number of primes is asymptotic to $n/\ln n$ ; see the Prime number theorem (here $\ln$ denotes the natural logarithm and $\log$ the binary logarithm). Using the incompressibility method, G. J. Chaitin argued as follows: each $n$ can be described by its prime factorization $n=p_{1}^{n_{1}}\cdots p_{k}^{n_{k}}$ (which is unique), where $p_{1},\ldots ,p_{k}$ are the first $k$ primes, each at most $n$ , and the exponents are possibly 0. Each exponent is at most $\log n$ and can be described by $\log\log n$ bits. The description of $n$ can therefore be given in $k\log\log n$ bits, provided we know the value of $\log\log n$ (enabling one to parse the consecutive blocks of exponents); describing $\log\log n$ itself requires only $\log\log\log n$ bits. By the incompressibility of most positive integers, for each $k>0$ there is a positive integer $n$ of binary length $l\approx \log n$ which cannot be described in fewer than $l$ bits.
This yields a lower bound on $\pi(n)$ , the number of primes less than $n$ . A more sophisticated approach, attributed to Piotr Berman (with the present proof partially due to John Tromp ), describes every incompressible $n$ by $k$ and $n/p_{k}$ , where $p_{k}$ is the largest prime number dividing $n$ . Since $n$ is incompressible, the length of this description must exceed $\log n$ . To parse the first block of the description, $k$ must be given in prefix form, of length $P(k)=\log k+\log\log k+\log \varepsilon(k)$ , where $\varepsilon(k)$ is an arbitrary, small, positive function. Therefore, $\log p_{k}\leq P(k)$ . Hence, $p_{k}\leq n_{k}$ with $n_{k}=\varepsilon(k)\,k\log k$ for a special sequence of values $n_{1},n_{2},\ldots$ . This shows that the resulting bound holds for this special sequence, and a simple extension shows that it holds for every $n>0$ . Both proofs are presented in more detail in [ 4 ] . A labeled graph $G=(V,E)$ with $n$ nodes can be represented by a string $E(G)$ of $\binom{n}{2}$ bits, where each bit indicates the presence (or absence) of an edge between the pair of nodes in that position. For a graph with $K(G)\geq \binom{n}{2}$ , the degree $d$ of each vertex deviates from $n/2$ by only a small amount. To prove this by the incompressibility method: if the deviation were larger, the description of $G$ could be compressed below $K(G)$ , providing the required contradiction. This theorem is required in a more complicated proof, where the incompressibility argument is used a number of times to show that the number of unlabeled graphs on $n$ nodes is asymptotically $2^{\binom{n}{2}}/n!$ . A transitive tournament is a complete directed graph $G=(V,E)$ in which, whenever $(i,j),(j,k)\in E$ , also $(i,k)\in E$ . Consider the set of all tournaments on $n$ nodes. Since a tournament is a labeled, directed complete graph , it can be encoded by a string $E(G)$ of $\binom{n}{2}$ bits where each bit indicates the direction of the edge between the pair of nodes in that position. Using this encoding, every tournament contains a transitive subtournament on at least $v(n)$ vertices, with $v(n)$ of order $\log n$ . This was shown as the first problem. [ 6 ] It is easily solved by the incompressibility method, [ 7 ] as are the coin-weighing problem, the number of covering families and expected properties; for example, at least a fraction $1-1/n$ of all tournaments on $n$ vertices have transitive subtournaments on not more than $1+2\lceil 2\log n\rceil$ vertices, provided $n$ is large enough. If a number of events are independent (in the sense of probability theory ) of one another, the probability that none of the events occur can be easily calculated. If the events are dependent, the problem becomes difficult.
The Lovász local lemma [ 8 ] is a principle stating that if events are mostly independent of one another and have individually small probabilities, then there is a positive probability that none of them will occur. [ 9 ] It has been proven by the incompressibility method. [ 10 ] Using the incompressibility method, several versions of expanders and superconcentrator graphs were shown to exist. [ 11 ] In the Heilbronn triangle problem , one throws $n$ points into the unit square and asks for the maximum, over all possible arrangements, of the minimal area of a triangle formed by three of the points. The problem was solved for small arrangements, and much work was done on the asymptotic expression as a function of $n$ . The original conjecture of Heilbronn , made during the early 1950s, was that this area is $O(1/n^{2})$ . Paul Erdős proved that this bound is correct when $n$ is a prime number. The general problem remains unsolved, apart from the best-known lower bound $\Omega((\log n)/n^{2})$ (achievable; hence Heilbronn 's conjecture is not correct for general $n$ ) and the upper bound $\exp(c\sqrt{\log n})/n^{8/7}$ (proven by Komlós, Pintz and Szemerédi in 1982 and 1981, respectively). Using the incompressibility method, the average case was studied: it was proven that if the smallest triangle's area is too small (or too large), the arrangement can be compressed below the Kolmogorov complexity of a uniformly random arrangement (which has high Kolmogorov complexity). This proves that for the overwhelming majority of arrangements (and for the expectation), the area of the smallest triangle formed by three of $n$ points thrown uniformly at random in the unit square is $\Theta(1/n^{3})$ . In this case, the incompressibility method proves both the lower and the upper bound of the property involved. [ 12 ] The law of the iterated logarithm , the law of large numbers and the recurrence property were shown to hold using the incompressibility method [ 13 ] and Kolmogorov's zero–one law , [ 14 ] with normal numbers expressed as binary strings (in the sense of E. Borel ) and the distribution of 0s and 1s in binary strings of high Kolmogorov complexity. [ 15 ] The basic Turing machine, as conceived by Alan Turing in 1936, consists of a memory, namely a tape of potentially infinitely many cells on which a symbol can be written, and a finite control with an attached read-write head that scans a cell on the tape. At each step, the read-write head can change the symbol in the cell being scanned and move one cell left, right, or not at all, according to instructions from the finite control. Turing machines with two tape symbols may be considered for convenience, but this is not essential. In 1968, F. C. Hennie showed that such a Turing machine requires time of order $n^{2}$ to recognize the language of binary palindromes in the worst case . In 1977, W. J. Paul [ 2 ] presented an incompressibility proof which showed that time of order $n^{2}$ is required in the average case. For every integer $n$ , consider all words of that length; for convenience, consider words whose middle third consists of 0s. The accepting Turing machine ends in an accept state on the left (at the beginning of the tape).
A Turing-machine computation on a given word yields, for each location (the boundary between adjacent cells), a sequence of crossings from left to right and from right to left, each crossing occurring in a particular state of the finite control. Either every position in the middle third of a candidate word has a crossing sequence of length $\Omega(n)$ (giving a total computation time of $\Omega(n^{2})$ ), or some position has a crossing sequence of length $o(n)$ . In the latter case the word, if it is a palindrome , can be identified by that crossing sequence. If another palindrome (ending in an accepting state on the left) had the same crossing sequence, then the word consisting of the prefix (up to the position of the crossing sequence involved) of the original palindrome, concatenated with the suffix (of the remaining length) of the other palindrome, would be accepted as well. Taking a palindrome of Kolmogorov complexity $\Omega(n)$ and describing it with $o(n)$ bits then gives a contradiction. Since the overwhelming majority of binary palindromes have high Kolmogorov complexity, this gives a lower bound on the average-case running time . The result of the 1980 paper is much more difficult, and shows that Turing machines with $k+1$ work tapes are more powerful than those with $k$ work tapes in real time (here, one symbol per step). [ 3 ] In 1984, W. Maass [ 16 ] and M. Li and P. M. B. Vitanyi [ 17 ] showed that the simulation of two work tapes by one work tape of a Turing machine takes $\Theta(n^{2})$ time deterministically (optimal, solving a 30-year open problem ) and $\Omega(n^{2}/(\log n\,\log\log n))$ time nondeterministically [ 17 ] (in [ 16 ] this bound is $\Omega(n^{2}/(\log^{2}n\,\log\log n))$ ). More results concerning tapes, stacks and queues , deterministic and nondeterministic, [ 17 ] were proven with the incompressibility method. [ 4 ] Heapsort is a sorting method, invented by J. W. J. Williams and refined by R. W. Floyd , which always runs in $O(n\log n)$ time. It was long an open question whether Floyd's method is better than Williams' on average, although it is better in the worst case. Using the incompressibility method, it was shown [ 4 ] that Williams' method runs on average in $2n\log n+O(n)$ time and Floyd's method runs on average in $n\log n+O(n)$ time. The proof was suggested by Ian Munro . Shellsort , discovered by Donald Shell in 1959, is a comparison sort which divides the list to be sorted into sublists and sorts them separately; the sorted sublists are then merged, reconstituting a partially sorted list. This process is repeated a number of times (the number of passes). The difficulty in analyzing the complexity of the sorting process is that it depends on the number $n$ of keys to be sorted, on the number $p$ of passes and on the increments governing the scattering in each pass; a sublist consists of the keys that are a fixed increment apart. Although this sorting method inspired a large number of papers, only the worst case was established.
For the average running time, only the best case for a two-pass Shellsort [ 18 ] and an upper bound of $O(n^{23/15})$ [ 19 ] for a particular increment sequence for three-pass Shellsort had been established. A general lower bound on the average running time of $p$ -pass Shellsort was given in [ 20 ] ; it was the first advance on this problem in four decades. In every pass, the comparison sort moves a key a certain distance to another place (a path length). All these path lengths are coded logarithmically in their length, in the correct order (of passes and keys). This allows the reconstruction of the unsorted list from the sorted list. If the unsorted list is incompressible (or nearly so), then, since the sorted list has near-zero Kolmogorov complexity (and the path lengths together give a certain code length), the sum of the code lengths must be at least as large as the Kolmogorov complexity of the original list. The sum of the path lengths corresponds to the running time, and in this argument the running time is lower-bounded by $\Omega(pn^{1+1/p})$ . This was later improved to a stronger lower bound expressed in terms of the increment sequence (with $h_{0}=n$ ). [ 21 ] That bound implies, for example, the Jiang-Li-Vitanyi lower bound for all $p$ -pass increment sequences and improves it for particular increment sequences; the Janson-Knuth upper bound is matched by a lower bound for the increment sequence used, showing that three-pass Shellsort with this increment sequence uses $\Theta(n^{23/15})$ inversions. Another example is as follows: for natural numbers $n,r,s$ with $2\log n\leq r,s\leq n/4$ , it was shown by the incompressibility method that for every $n$ there is a Boolean $n\times n$ matrix in which every $s\times (n-r)$ submatrix has rank at least $n/2$ . According to Gödel's first incompleteness theorem , in every formal system with computably enumerable theorems (or proofs) that is strong enough to contain Peano arithmetic , there are true but unprovable statements (theorems). This can be proved by the incompressibility method: every formal system $F$ can be described finitely (say, in $f$ bits), and in such a formal system we can express $K(x)\geq |x|$ , since it contains arithmetic. Given $F$ and a natural number $n\gg f$ , we can search exhaustively for a proof that some string $y$ of length $n$ satisfies $K(y)\geq n$ . In this way we would obtain the first such string, which could then be described by about $\log n+f$ bits, so that $K(y)\leq \log n+f<n$ : a contradiction. [ 22 ] Although the probabilistic method generally shows the existence of an object with a certain property in a class, the incompressibility method tends to show that the overwhelming majority of objects in the class (the average, or the expectation) have that property. It is sometimes easy to turn a probabilistic proof into an incompressibility proof, or vice versa. In some cases, it is difficult or impossible to turn a proof by incompressibility into a probabilistic (or counting) proof. In virtually all of the cases of Turing-machine time complexity cited above, the incompressibility method solved problems which had been open for decades; no other proofs are known.
Sometimes a proof by incompressibility can be turned into a proof by counting, as happened in the case of the general lower bound on the running time of Shellsort . [ 20 ]
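The counting argument mentioned at the beginning of this article, that almost all objects are incompressible, can be made concrete. The sketch below does not compute Kolmogorov complexity (which, as noted above, is not computable); it merely counts descriptions: there are fewer than $2^{n-c}$ binary programs shorter than $n-c$ bits, so at most that many of the $2^{n}$ strings of length $n$ can be compressed by $c$ bits or more.

```python
# Minimal sketch of the counting argument behind incompressibility.
# It counts possible short descriptions; it does not compute Kolmogorov complexity.
def fraction_compressible(n, c):
    """Upper bound on the fraction of n-bit strings describable in fewer than n - c bits."""
    total_strings = 2 ** n
    short_descriptions = 2 ** (n - c) - 1   # binary programs of length 0 .. n-c-1
    return short_descriptions / total_strings

n = 1000
for c in (1, 10, 20):
    print(f"compressible by at least {c:2d} bits: fraction < {fraction_compressible(n, c):.2e}")
```

For every extra bit of compression demanded, the bound halves, so for any fixed c the overwhelming majority of strings of each length cannot be compressed by c bits; this is the sense in which an incompressible object can always be assumed to exist.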
https://en.wikipedia.org/wiki/Incompressibility_method
Incongruent melting occurs when a solid substance being partially melted does not melt uniformly, so that the chemical composition of neither the resulting liquid nor the resulting solid is the same as that of the original solid. For example, melting of orthoclase (KAlSi 3 O 8 ) produces leucite (KAlSi 2 O 6 ) in addition to a melt. The melt produced is richer in silica (SiO 2 ). The proportions of leucite and melt formed can be recombined to yield the bulk composition of the starting feldspar . Another mineral that can melt incongruently is enstatite (Mg 2 Si 2 O 6 ), which produces forsterite (Mg 2 SiO 4 ) in addition to a melt richer in SiO 2 when melting at low pressure. Enstatite melts congruently at higher pressures between 2.5 and 5.5 kilobars .
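The statement that the leucite and melt can be recombined to give the bulk composition of the starting feldspar can be checked with a simple mass balance. In the sketch below the silica-enriched melt is represented, purely for illustration, by the SiO2 component of the idealized reaction KAlSi3O8 → KAlSi2O6 + SiO2; a real melt at the peritectic has a more complex composition.

```python
# Illustrative mass balance for incongruent melting of orthoclase,
# treating the melt as the pure SiO2 component (a simplification).
ATOMIC_MASS = {"K": 39.10, "Al": 26.98, "Si": 28.09, "O": 16.00}

def molar_mass(formula):
    """Molar mass (g/mol) from a dict of element counts."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

orthoclase = molar_mass({"K": 1, "Al": 1, "Si": 3, "O": 8})   # KAlSi3O8
leucite    = molar_mass({"K": 1, "Al": 1, "Si": 2, "O": 6})   # KAlSi2O6
silica     = molar_mass({"Si": 1, "O": 2})                    # SiO2

# Per mole of orthoclase melted incongruently: 1 mol leucite + 1 mol SiO2.
print(f"leucite fraction by mass:     {leucite / orthoclase:.3f}")
print(f"melt (SiO2) fraction by mass: {silica / orthoclase:.3f}")
print(f"mass recovered:               {(leucite + silica) / orthoclase:.3f}")  # 1.000
```

The two products recombine to 100% of the starting mass, with the solid leucite carrying all the potassium and aluminium and the melt fraction carrying the excess silica.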
https://en.wikipedia.org/wiki/Incongruent_melting
Incongruent transition , in chemistry , is a mass transition between two phases which involves a change in chemical composition . This is contrasted with congruent transition , for which the composition remains the same. The transition may be that of melting , vaporization or allotropism . The concept is also often extended to related phenomena, for example, incongruent dissolution of a solid by a liquid solvent, which is often encountered in geology . The term "phase decomposition" is sometimes used to describe incongruent transition. However, it has to be kept in mind that incongruent transition is described by an equilibrium . For an example, see incongruent melting .
https://en.wikipedia.org/wiki/Incongruent_transition
In United States patent law , incredible utility is a concept according to which, in order for an invention to be patentable , it must have some credible useful function. If it does not have a credible useful function despite the assertions of the inventor, then the application for a patent can be rejected as having "incredible utility". The invention does not have to work the way the inventor thinks it works, but it must do something useful. A number of patents have been held invalid for incredible utility. A rejection based on incredible utility can be overcome by providing evidence that, "if, considered as a whole, [...] leads a person of ordinary skill in the art to conclude that the asserted utility is more likely than not true". [ 2 ]
https://en.wikipedia.org/wiki/Incredible_utility
In nonstandard analysis , a field of mathematics, the increment theorem states the following: suppose a function $y=f(x)$ is differentiable at $x$ and that $\Delta x$ is infinitesimal . Then $\Delta y=f'(x)\,\Delta x+\varepsilon \,\Delta x$ for some infinitesimal $\varepsilon$ , where $\Delta y=f(x+\Delta x)-f(x)$ . If $\Delta x\neq 0$ then we may write $\frac{\Delta y}{\Delta x}=f'(x)+\varepsilon$ , which implies that $\frac{\Delta y}{\Delta x}\approx f'(x)$ , or in other words that $\frac{\Delta y}{\Delta x}$ is infinitely close to $f'(x)$ , i.e. $f'(x)$ is the standard part of $\frac{\Delta y}{\Delta x}$ . A similar theorem exists in standard calculus. Again assume that $y=f(x)$ is differentiable, but now let $\Delta x$ be a nonzero standard real number. Then the same equation $\Delta y=f'(x)\,\Delta x+\varepsilon \,\Delta x$ holds with the same definition of $\Delta y$ , but instead of $\varepsilon$ being infinitesimal, we have $\lim_{\Delta x\to 0}\varepsilon =0$ (treating $x$ and $f$ as given, so that $\varepsilon$ is a function of $\Delta x$ alone).
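The standard-calculus version of the theorem can be checked numerically. The short sketch below uses f(x) = x², for which Δy = 2xΔx + (Δx)², so the error term ε = Δy/Δx − f′(x) equals Δx exactly and visibly tends to 0 as Δx shrinks.

```python
# Numerical illustration of the increment theorem for f(x) = x**2, f'(x) = 2x.
def f(x):
    return x * x

def f_prime(x):
    return 2 * x

x = 3.0
for dx in (0.1, 0.01, 0.001):
    dy = f(x + dx) - f(x)
    eps = dy / dx - f_prime(x)      # the error term epsilon
    print(f"dx = {dx:7.3f}   dy = {dy:.6f}   eps = {eps:.6f}")
# eps shrinks with dx, matching dy = f'(x)*dx + eps*dx with eps -> 0.
```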
https://en.wikipedia.org/wiki/Increment_theorem
Incremental encoding , also known as front compression , back compression , or front coding , is a type of delta encoding compression algorithm whereby common prefixes or suffixes and their lengths are recorded so that they need not be duplicated. This algorithm is particularly well-suited for compressing sorted data , e.g., a list of words from a dictionary (a worked sketch is shown below). The encoding used to store the common prefix length itself varies from application to application. Typical techniques are storing the value as a single byte; delta encoding , which stores only the change in the common prefix length; and various universal codes . It may be combined with other general lossless data compression techniques such as entropy encoding and dictionary coders to compress the remaining suffixes. Incremental encoding is widely used in information retrieval to compress the lexicons used in search indexes ; these list all the words found in all the documents and a pointer for each one to a list of locations. Typically, it compresses these indexes by about 40%. [ 1 ] As one example, incremental encoding is used as a starting point by the GNU locate utility, in an index of filenames and directories. The GNU locate utility further uses bigram encoding to shorten popular filepath prefixes.
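A minimal front-coding sketch follows; it is illustrative only and does not reproduce the exact on-disk format of any particular tool such as GNU locate. Each word of a sorted list is stored as the length of the prefix it shares with the previous word, followed by the differing suffix.

```python
# Minimal front-coding (incremental encoding) sketch for a sorted word list.
def common_prefix_len(a, b):
    n = 0
    while n < len(a) and n < len(b) and a[n] == b[n]:
        n += 1
    return n

def encode(words):
    """Encode sorted words as (shared_prefix_length, suffix) pairs."""
    out, prev = [], ""
    for w in words:
        k = common_prefix_len(prev, w)
        out.append((k, w[k:]))
        prev = w
    return out

def decode(pairs):
    """Rebuild the original word list from the encoded pairs."""
    words, prev = [], ""
    for k, suffix in pairs:
        w = prev[:k] + suffix
        words.append(w)
        prev = w
    return words

words = ["incense", "incentive", "inception", "incline", "include"]
encoded = encode(words)
print(encoded)                     # [(0, 'incense'), (5, 'tive'), (4, 'ption'), ...]
assert decode(encoded) == words    # lossless round trip
```

Only the non-shared suffixes are stored, which is why the technique pays off most on sorted inputs such as dictionary words or lexicons in search indexes, where consecutive entries share long prefixes.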
https://en.wikipedia.org/wiki/Incremental_encoding
Incremental launch is a method in civil engineering of building a complete bridge deck from one abutment of the bridge only, manufacturing the superstructure of the bridge by sections to the other side. In current applications, the method is highly mechanised and uses pre-stressed concrete . [ 1 ] : 1 The first bridge to have been incrementally launched appears to have been the Waldshut–Koblenz Rhine Bridge , a wrought iron lattice truss railway bridge , completed in 1859. The second incrementally launched bridge was the Rhine Bridge , a railway bridge that spanned the Upper Rhine between Kehl , Germany and Strasbourg , France, completed in 1861 and subsequently destroyed and rebuilt on several occasions. The first incrementally launched concrete bridge was the 96-metre (315 ft) span box girder bridge over the Caroní River , completed in 1964. The second incrementally launched concrete bridge was over the Inn River , Kufstein in Austria , completed in 1965. The structural engineers for both bridges were Professor Dr Fritz Leonhardt and his partner, Willi Baur. [ 1 ] : 1 The usual method of building concrete bridges is the segmental method , one span at a time. [ 2 ] The bridges are mostly of the box girder design and work with straight or constant curve shapes, with a constant radius. 15-to-30-metre (49 to 98 ft) box girder sections of the bridge deck are fabricated at one end of the bridge in factory conditions. Each section is manufactured in around one week. [ citation needed ] The first section of the launch, the launching nose, is not made of concrete, but is a stiffened steel plate girder and is around 60% of the length of a bridge span, and reduces the cantilever moment. [ 3 ] The sections of bridge deck slide over sliding bearings, which are concrete blocks covered with stainless steel and reinforced elastomeric pads .
https://en.wikipedia.org/wiki/Incremental_launch
An incubator is a device used to grow and maintain microbiological cultures or cell cultures . The incubator maintains optimal temperature , humidity and other conditions such as the CO 2 and oxygen content of the atmosphere inside. Incubators are essential for much experimental work in cell biology , microbiology and molecular biology and are used to culture both bacterial and eukaryotic cells. An incubator is made up of a chamber with a regulated temperature . Some incubators also regulate humidity , gas composition, or ventilation within that chamber. The simplest incubators are insulated boxes with an adjustable heater, typically going up to 60 to 65 °C (140 to 149 °F), though some can go slightly higher (generally to no more than 100 °C). The most commonly used temperature both for bacteria such as the frequently used E. coli as well as for mammalian cells is approximately 37 °C (99 °F), as these organisms grow well under such conditions. For other organisms used in biological experiments, such as the budding yeast Saccharomyces cerevisiae , a growth temperature of 30 °C (86 °F) is optimal. More elaborate incubators can also include the ability to lower the temperature (via refrigeration), or the ability to control humidity or CO 2 levels. This is important in the cultivation of mammalian cells, where the relative humidity is typically >80% to prevent evaporation and a slightly acidic pH is achieved by maintaining a CO 2 level of 5%. From aiding in hatching chicken eggs to enabling scientists to understand and develop vaccines for deadly viruses, the laboratory incubator has seen numerous applications over the years it has been in use. The incubator has also provided a foundation for medical advances and experimental work in cellular and molecular biology . While many technological advances have occurred since the primitive incubators first used in ancient Egypt and China, the main purpose of the incubator has remained unchanged: to create a stable, controlled environment conducive to research, study, and cultivation. The earliest incubators were invented thousands of years ago in ancient Egypt and China, where they were used to keep chicken eggs warm. [ 1 ] Use of incubators revolutionized food production, as it allowed chicks to hatch from eggs without requiring that a hen sit on them, thus freeing the hens to lay more eggs in a shorter period of time. Both early Egyptian and Chinese incubators were essentially large rooms that were heated by fires, where attendants turned the eggs at regular intervals to ensure even heat distribution. The incubator received an update in the 16th century when Jean Baptiste Porta drew on ancient Egyptian design to create a more modern egg incubator. While he eventually had to discontinue his work due to the Spanish Inquisition, Rene-Antoine Ferchault de Reaumur took up the challenge in the middle of the 17th century. [ 2 ] Reaumur warmed his incubator with a wood stove and monitored its temperature using the Reaumur thermometer, another of his inventions. In the 19th century, researchers finally began to recognize that the use of incubators could contribute to medical advancements. They began to experiment to find the ideal environment for maintaining cell culture stocks. These early incubators were simply made up of bell jars that contained a single lit candle. Cultures were placed near the flame on the underside of the jar's lid, and the entire jar was placed in a dry, heated oven. 
In the late 19th century, doctors realized another practical use for incubators: keeping premature or weak infants alive. The first infant incubator, used at a women's hospital in Paris, was heated by kerosene lamps . Fifty years later, Julius H. Hess , an American physician often considered to be the father of neonatology, designed an electric infant incubator that closely resembles the infant incubators in use today. [ 3 ] The next innovation in incubator technology came in the 1960s, when the CO 2 incubator was introduced to the market. [ 4 ] Demand came when doctors realized that they could use CO 2 incubators to identify and study pathogens found in patients' bodily fluids. To do this, a sample was harvested and placed onto a sterile dish and into the incubator. The air in the incubator was kept at 37 degrees Celsius, the same temperature as the human body, and the incubator maintained the atmospheric carbon dioxide and nitrogen levels necessary to promote cell growth. At this time, incubators also began to be used in genetic engineering . Scientists could create biologically essential proteins, such as insulin, with the use of incubators. Genetic modification could now take place on a molecular level, helping to improve the nutritional content and resistance to pestilence and disease of fruits and vegetables. Incubators serve a variety of functions in a scientific lab. Incubators generally maintain a constant temperature, however additional features are often built in. Many incubators also control humidity. Shaking incubators incorporate movement to mix cultures. Gas incubators regulate the internal gas composition. Some incubators have a means of circulating the air inside of them to ensure even distribution of temperatures. Many incubators built for laboratory use have a redundant power source, to ensure that power outages do not disrupt experiments. Incubators are made in a variety of sizes, from tabletop models, to warm rooms, which serve as incubators for large numbers of samples.
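The temperature regulation described above can be pictured with a toy on/off (hysteresis) controller. This is only an illustrative sketch: the setpoint, switching band, heating rate and heat-loss coefficient are assumed values, not the behaviour of any particular instrument.

```python
# Toy simulation of on/off (hysteresis) temperature control in an incubator.
# All physical constants are assumed for illustration.

SETPOINT = 37.0    # deg C, target culture temperature
HYSTERESIS = 0.2   # deg C, switching band around the setpoint (assumed)
AMBIENT = 22.0     # deg C, room temperature (assumed)
HEAT_RATE = 0.05   # deg C gained per second while the heater is on (assumed)
LOSS_COEFF = 0.002 # fraction of (T - ambient) lost per second (assumed)

temp = AMBIENT
heater_on = True
for second in range(3600):                 # simulate one hour
    if temp > SETPOINT + HYSTERESIS:
        heater_on = False
    elif temp < SETPOINT - HYSTERESIS:
        heater_on = True
    if heater_on:
        temp += HEAT_RATE
    temp -= LOSS_COEFF * (temp - AMBIENT)  # passive heat loss to the room
    if second % 600 == 0:
        print(f"t = {second:4d} s  T = {temp:5.2f} degC  heater {'ON' if heater_on else 'off'}")
```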
https://en.wikipedia.org/wiki/Incubator_(culture)
Incyte Corporation is an American multinational pharmaceutical company with headquarters in Wilmington, Delaware . [ 2 ] The company currently operates manufacturing and R&D locations in North America, Europe, and Asia. [ 3 ] Incyte Corporation currently develops and manufactures prescription biopharmaceutical medications in multiple therapeutic areas including oncology, inflammation, and autoimmunity. In 2014, Incyte named Hervé Hoppenot president and CEO, and in 2015 he was appointed chairman of the Board of Directors. [ 4 ] [ 5 ] Hoppenot had previously served as the president of Novartis Oncology; he had been with Novartis since 2003. [ 5 ] In September 2015, the company announced it had gained exclusive development and commercialization rights to Jiangsu Hengrui Medicine Co., Ltd's anti-PD-1 monoclonal antibody , SHR-1210 , in a deal worth more than $795 million. [ 6 ] In January 2020, Incyte signed a collaboration and license agreement for the global development and commercialization of tafasitamab with MorphoSys . [ 7 ] On March 3, 2020, the agreement received antitrust clearance and thus became effective. [ 8 ] Incyte established a European headquarters in Morges, Switzerland , in 2021. [ 9 ] Incyte Corporation currently has seven marketed and co-marketed pharmaceutical products, including Jakafi (ruxolitinib) , Pemazyre (pemigatinib) , Monjuvi (tafasitamab-cxix) , Opzelura (ruxolitinib) , Tabrecta (capmatinib) , Olumiant (baricitinib) , and Iclusig (ponatinib) . [ 10 ] [ 11 ] In 2013, Novartis acquired Incyte's c-Met inhibitor capmatinib (INC280, INCB028060), which is marketed under the brand name Tabrecta. [ 12 ] As of 2014, the company was developing baricitinib , an oral JAK1 and JAK2 inhibitor drug for rheumatoid arthritis in partnership with Eli Lilly . [ 13 ] [ 14 ] It gained EU approval in February 2017. [ 15 ] In April 2017, the US FDA issued a rejection, citing concerns about dosing and safety. [ 16 ] [ 17 ] In May 2018, baricitinib was approved in the United States for the treatment of rheumatoid arthritis under the brand name Olumiant. [ 18 ] [ 19 ] As of 2016, epacadostat , an indoleamine 2,3-dioxygenase (IDO1) inhibitor, was in development for various cancers and was in combination trials with Merck 's pembrolizumab (Keytruda) and Bristol Myers Squibb 's nivolumab (Opdivo). [ 20 ] [ 21 ] In May 2024, Incyte completed its acquisition of Escient Pharmaceuticals for $750 million. [ 22 ]
https://en.wikipedia.org/wiki/Incyte
IndExs – Index of Exsiccatae is an online biological database that plays a pivotal role in documenting more than 2,400 historical and ongoing series of exsiccatae and exsiccata-like works. Managed by the Botanische Staatssammlung München in München , IndExs serves as a comprehensive data repository for these series, acting as a directory that provides detailed titles together with information on the more than 1,300 editors, bibliographic information, exsiccatal numbers, publication timespans, ranges, information on preceding and superseding series, and publishers. Exsiccatae, organised series of biological specimens distributed among biological collections , are essential resources found in major herbaria worldwide. Open access to the general information on exsiccatae facilitates global scientific engagement and research. [ 1 ] [ 2 ] [ 3 ] The database also tells surprising stories about the shared cultural history of taxonomic collections worldwide, as well as about the persons and organisations involved in plant specimen exchange and trade projects. [ 4 ] Launched in 2001, IndExs has become indispensable to herbaria, fungaria, and digitisation initiatives, catering to their evolving scientific content needs. [ 5 ] [ 6 ] The database categorises exsiccatae (including exsiccata-like series) and the specimens they distribute, focusing on major organismic groups within botany and mycology . This categorisation is enhanced by images of exemplary specimen labels from more than 110 herbaria, cited with their Index Herbariorum acronyms. [ 7 ] [ 8 ] IndExs standardizes the naming conventions for exsiccata series. [ 9 ] IndExs leverages the DiversityExsiccatae module of the Diversity Workbench framework as its database management system. [ 10 ] Recently, the biodiversity informatics department of the Bavarian State Collections of Natural History has begun efforts to align its technical infrastructure with Wikidata concepts, indicating a move towards more integrated and semantic web -based data management practices. [ 11 ] [ 12 ] The main focus is on the disambiguation of names of more than 1,300 editors and the integration of information for 2,400 exsiccata series into the semantic web. This move involves adopting Wikidata identifiers for the precise identification of editors, contributing additional details to existing Wikidata entries about their scientific work, and creating new entries where necessary. A proposed new property, the "IndExs Exsiccata ID," aims to link directly to IndExs, allowing for the incorporation of exsiccata series information into Wikidata. This integration facilitates a linked data environment, ensuring that the significant contributions of botanical and mycological editors and their work are more accessible and accurately represented in global data repositories. [ 13 ] [ 14 ] The practical application of IndExs is highlighted by its use in the cataloguing and analysis of the bryophyte collection at the National Herbarium in Pretoria (PRE), South Africa. In this project, IndExs was used to compile the first catalogue of exsiccatae for the PRE collection and provided the necessary framework and reference material to identify and classify the 66 exsiccatae series represented in the collection. Through IndExs, the project team was able to identify and document significant exsiccatae within the PRE collection, including rare sets like Anton Rehmann 's Musci Austro-Africani and Bryophyta Antarctica exsiccata edited by Ryszard Ochyra . 
[ 15 ] [ 16 ] This case shows how IndExs facilitates the documentation and study of botanical specimens, enabling the first comprehensive catalog of exsiccatae within a major herbarium collection. IndExs, with its standardised titles, is used in a similar way to create the dataset Bound Volumes and Exsiccatae in the Botanical Collections at the Natural History Museum, London . [ 6 ] Through IndExs, researchers can access information on exsiccatae, contributing to botanical research, taxonomy, and the history of botanical exploration and exchange. [ 17 ] Similarly, IndExs has been proposed for use as a reference list in the digitisation of German herbarium specimens and related biodiversity informatics projects. [ 18 ] IndExs is acknowledged as a useful resource for explaining abbreviations and references found on handwritten and printed herbarium labels (schedae) [ 19 ] and is cited as an information resource in reviews on historical herbaria and plant collectors like Ignaz Dörfler . [ 20 ]
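The Wikidata alignment described above can be explored programmatically. The sketch below queries the public Wikidata search API (the standard `wbsearchentities` endpoint) for a property labelled "IndExs Exsiccata ID"; no property identifier is hard-coded, since the text describes the property as proposed, and the code simply prints whatever the search returns.

```python
# Look up an "IndExs Exsiccata ID" property on Wikidata by label, if one exists.
# Uses the standard MediaWiki/Wikidata search API; no property ID is assumed.
import json
import urllib.parse
import urllib.request

def search_wikidata_property(label: str):
    params = urllib.parse.urlencode({
        "action": "wbsearchentities",
        "search": label,
        "language": "en",
        "type": "property",   # restrict the search to properties
        "format": "json",
    })
    url = "https://www.wikidata.org/w/api.php?" + params
    with urllib.request.urlopen(url) as response:
        return json.load(response).get("search", [])

for hit in search_wikidata_property("IndExs Exsiccata ID"):
    print(hit["id"], "-", hit.get("label"), "-", hit.get("description", ""))
```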
https://en.wikipedia.org/wiki/IndExs_–_Index_of_Exsiccatae
Indatuximab ravtansine (BT062) is an immunomodulator and antineoplastic antibody-drug conjugate . It consists of the anti- CD138 chimerized monoclonal antibody (nBT062) [ 1 ] linked to the maytansinoid DM4. [ 1 ] It is being investigated as part of a treatment for multiple myeloma . [ 1 ] Preliminary data were reported in 2013 from an early-stage clinical trial of the drug in combination with lenalidomide and dexamethasone . [ 1 ] Follow-up data reported "encouraging efficacy" in December 2014. [ 2 ] As of December 2014, it was in clinical trials for triple negative metastatic breast cancer and metastatic urinary bladder cancer . [ 2 ] CD138 ( Syndecan-1 ) is highly overexpressed on various solid tumors and in hematological malignancies, and represents one of the most specific target antigens for identification of multiple myeloma (MM) cells. [ 1 ] The antibody part binds to CD138 on the target cells and then the DM4 kills the cell. [ citation needed ]
https://en.wikipedia.org/wiki/Indatuximab_ravtansine
In discrete calculus the indefinite sum operator (also known as the antidifference operator), denoted by ∑ x {\textstyle \sum _{x}} or Δ − 1 {\displaystyle \Delta ^{-1}} , [ 1 ] [ 2 ] is the linear operator , inverse of the forward difference operator Δ {\displaystyle \Delta } . It relates to the forward difference operator as the indefinite integral relates to the derivative . Thus More explicitly, if ∑ x f ( x ) = F ( x ) {\textstyle \sum _{x}f(x)=F(x)} , then If F ( x ) is a solution of this functional equation for a given f ( x ), then so is F ( x )+ C ( x ) for any periodic function C ( x ) with period 1. Therefore, each indefinite sum actually represents a family of functions. However, due to the Carlson's theorem , the solution equal to its Newton series expansion is unique up to an additive constant C . This unique solution can be represented by formal power series form of the antidifference operator: Δ − 1 = 1 e D − 1 {\displaystyle \Delta ^{-1}={\frac {1}{e^{D}-1}}} . Indefinite sums can be used to calculate definite sums with the formula: [ 3 ] The Laplace summation formula allows the indefinite sum to be written as the indefinite integral plus correction terms obtained from iterating the difference operator , although it was originally developed for the reverse process of writing an integral as an indefinite sum plus correction terms. As usual with indefinite sums and indefinite integrals, it is valid up to an arbitrary choice of the constant of integration . Using operator algebra avoids cluttering the formula with repeated copies of the function to be operated on: [ 4 ] ∑ x = ∫ + 1 2 − 1 12 Δ + 1 24 Δ 2 − 19 720 Δ 3 + 3 160 Δ 4 − ⋯ {\displaystyle \sum _{x}=\int {}+{\frac {1}{2}}-{\frac {1}{12}}\Delta +{\frac {1}{24}}\Delta ^{2}-{\frac {19}{720}}\Delta ^{3}+{\frac {3}{160}}\Delta ^{4}-\cdots } In this formula, for instance, the term 1 2 {\displaystyle {\tfrac {1}{2}}} represents an operator that divides the given function by two. The coefficients + 1 2 , − 1 12 , {\displaystyle +{\tfrac {1}{2}},-{\tfrac {1}{12}},} etc., appearing in this formula are the Gregory coefficients , also called Laplace numbers. The coefficient in the term Δ n − 1 {\displaystyle \Delta ^{n-1}} is [ 4 ] C n n ! = ∫ 0 1 ( x n ) d x {\displaystyle {\frac {{\mathcal {C}}_{n}}{n!}}=\int _{0}^{1}{\binom {x}{n}}\,dx} where the numerator C n {\displaystyle {\mathcal {C}}_{n}} of the left hand side is called a Cauchy number of the first kind, although this name sometimes applies to the Gregory coefficients themselves. [ 4 ] Faulhaber's formula provides that the right-hand side of the equation converges. If lim x → + ∞ f ( x ) = 0 , {\displaystyle \lim _{x\to {+\infty }}f(x)=0,} then [ 5 ] Often the constant C in indefinite sum is fixed from the following condition. Let Then the constant C is fixed from the condition or Alternatively, Ramanujan's sum can be used: or at 1 respectively [ 6 ] [ 7 ] Indefinite summation by parts: Definite summation by parts: If T {\displaystyle T} is a period of function f ( x ) {\displaystyle f(x)} then If T {\displaystyle T} is an antiperiod of function f ( x ) {\displaystyle f(x)} , that is f ( x + T ) = − f ( x ) {\displaystyle f(x+T)=-f(x)} then Some authors use the phrase "indefinite sum" to describe a sum in which the numerical value of the upper limit is not given: In this case a closed form expression F ( k ) for the sum is a solution of which is called the telescoping equation. [ 8 ] It is the inverse of the backward difference ∇ {\displaystyle \nabla } operator. 
It is related to the forward antidifference operator by the fundamental theorem of discrete calculus described earlier. Tables of indefinite sums of various functions exist; however, not every function has an indefinite sum that can be expressed in terms of elementary functions.
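The defining relation of the antidifference, Δ(∑_x f(x)) = f(x), and the resulting telescoping rule for definite sums, ∑_{k=a}^{b} f(k) = F(b+1) − F(a) for any antidifference F of f, can be checked numerically. The sketch below verifies both for the simple example f(x) = x, whose antidifference is F(x) = x(x−1)/2 up to an additive constant; the example function and test range are chosen only for illustration.

```python
# Numerical check of the antidifference relation and the telescoping formula
# for the example f(x) = x, F(x) = x*(x-1)/2.

def f(x):
    return x

def F(x):
    # An antidifference (indefinite sum) of f: Delta F(x) = F(x+1) - F(x) = x
    return x * (x - 1) // 2

def forward_difference(g, x):
    return g(x + 1) - g(x)

# 1) Delta F = f on a few sample points
for x in range(0, 10):
    assert forward_difference(F, x) == f(x)

# 2) Telescoping: sum_{k=a}^{b} f(k) = F(b+1) - F(a)
a, b = 3, 20
direct = sum(f(k) for k in range(a, b + 1))
telescoped = F(b + 1) - F(a)
print(direct, telescoped)   # both equal 207
assert direct == telescoped
```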
https://en.wikipedia.org/wiki/Indefinite_sum
Indel (insertion-deletion) is a molecular biology term for an insertion or deletion of bases in the genome of an organism. Indels ≥ 50 bases in length are classified as structural variants . [ 1 ] [ 2 ] In coding regions of the genome, unless the length of an indel is a multiple of 3, it will produce a frameshift mutation . For example, a common microindel which results in a frameshift causes Bloom syndrome in the Jewish or Japanese population. [ 3 ] Indels can be contrasted with a point mutation . An indel inserts or deletes nucleotides from a sequence, while a point mutation is a form of substitution that replaces one of the nucleotides without changing the overall number in the DNA. Indels can also be contrasted with Tandem Base Mutations (TBM), which may result from fundamentally different mechanisms. [ 4 ] A TBM is defined as a substitution at adjacent nucleotides (primarily substitutions at two adjacent nucleotides, but substitutions at three adjacent nucleotides have been observed). [ 5 ] Indels, being either insertions or deletions, can be used as genetic markers in natural populations, especially in phylogenetic studies. [ 6 ] [ 7 ] It has been shown that genomic regions with multiple indels can also be used for species-identification procedures. [ 8 ] [ 9 ] [ 10 ] An indel change of a single base pair in the coding part of an mRNA results in a frameshift during mRNA translation that could lead to an inappropriate (premature) stop codon in a different frame. Indels that are not multiples of 3 are particularly uncommon in coding regions but relatively common in non-coding regions. [ 11 ] [ 12 ] There are approximately 192-280 frameshifting indels in each person. [ 13 ] Indels are likely to represent between 16% and 25% of all sequence polymorphisms in humans. [ 14 ] In most known genomes, including humans, indel frequency tends to be markedly lower than that of single nucleotide polymorphisms (SNP) , except near highly repetitive regions, including homopolymers and microsatellites . [ 15 ] The term "indel" has been co-opted in recent years by genome scientists for use in the sense described above. This is a change from its original use and meaning, which arose from systematics . In systematics, researchers could find differences between sequences, such as from two different species. But it was impossible to infer if one species lost the sequence or the other species gained it. For example, species A has a run of 4 G nucleotides at a locus and species B has 5 G's at the same locus. If the mode of selection is unknown, one cannot tell if species A lost one G (a "deletion" event) or species B gained one G (an "insertion" event). When one cannot infer the phylogenetic direction of the sequence change, the sequence change event is referred to as an "indel". [ citation needed ] Using passenger-immunoglobulin mouse models, a study found that the most prevalent indel events are the activation-induced cytidine deaminase (AID)-dependent ±1-base pair (bp) indels, which can lead to deleterious outcomes, whereas longer in-frame indels were rare outcomes. [ 16 ]
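To make the frameshift point concrete, the sketch below translates a short made-up coding sequence before and after a single-base insertion; because the insertion length is not a multiple of 3, the reading frame shifts and a premature stop codon appears. The sequence and the minimal codon table are invented for the example.

```python
# Toy demonstration: a 1-bp insertion shifts the reading frame and can create
# a premature stop codon. Sequence and codon subset are illustrative only.

CODONS = {  # minimal subset of the standard genetic code
    "ATG": "M", "GAA": "E", "TGG": "W", "AAA": "K",
    "TAA": "*", "TGA": "*", "TAG": "*",
}

def translate(dna: str) -> str:
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODONS.get(dna[i:i + 3], "?")
        protein.append(aa)
        if aa == "*":          # stop codon ends translation
            break
    return "".join(protein)

original = "ATGGAATGGAAATAA"                 # ATG GAA TGG AAA TAA -> MEWK*
mutant = original[:3] + "T" + original[3:]   # insert one base after the start codon

print("original:", translate(original))   # MEWK*
print("indel:   ", translate(mutant))     # M*  (premature stop from frameshift)
```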
https://en.wikipedia.org/wiki/Indel
Indentation hardness tests are used in mechanical engineering to determine the hardness of a material, that is, its resistance to deformation . Several such tests exist, wherein the examined material is indented until an impression is formed; these tests can be performed on a macroscopic or microscopic scale. When testing metals, indentation hardness correlates roughly linearly with tensile strength , [ 1 ] but it is an imperfect correlation often limited to small ranges of strength and hardness for each indentation geometry. This relation permits economically important nondestructive testing of bulk metal deliveries with lightweight, even portable equipment, such as hand-held Rockwell hardness testers. Different techniques are used to quantify material characteristics at smaller scales. Measuring mechanical properties of materials, for instance of thin films , cannot be done using conventional uniaxial tensile testing. As a result, techniques testing material "hardness" by indenting a material with a very small impression have been developed to attempt to estimate these properties. Hardness measurements quantify the resistance of a material to plastic deformation. Indentation hardness tests compose the majority of processes used to determine material hardness, and can be divided into three classes: macro, micro and nanoindentation tests. [ 2 ] [ 3 ] Microindentation tests typically have forces less than 2 N (0.45 lb f ). Hardness, however, cannot be considered to be a fundamental material property. [ citation needed ] Classical hardness testing usually creates a number which can be used to provide a relative idea of material properties. [ 3 ] As such, hardness can only offer a comparative idea of the material's resistance to plastic deformation, since different hardness techniques have different scales. The equation-based definition of hardness is the pressure applied over the contact area between the indenter and the material being tested. As a result, hardness values are typically reported in units of pressure, although this is only a "true" pressure if the indenter and surface interface is perfectly flat. [ citation needed ] Instrumented indentation presses a sharp tip into the surface of a material to obtain a force-displacement curve. The results provide a great deal of information about the mechanical behavior of the material, including its hardness , elastic moduli and plastic deformation behavior. One key feature of the instrumented indentation test is that the tip needs to be controlled by force or displacement that can be measured simultaneously throughout the indentation cycle. [ 4 ] Current technology can realize accurate force control over a wide range. Therefore hardness can be characterized at many different length scales, from hard materials like ceramics to soft materials like polymers. The earliest work was finished by Bulychev, Alekhin and Shorshorov in the 1970s, who determined that the Young's modulus of a material can be determined from the slope of a force vs. displacement indentation curve as: [ 5 ] {\displaystyle E_{r}={\frac {\sqrt {\pi }}{2}}{\frac {S}{\sqrt {A_{c}}}}} , where S = d P / d h {\displaystyle S=dP/dh} is the unloading stiffness, A c {\displaystyle A_{c}} is the projected contact area, and the reduced modulus satisfies {\displaystyle {\frac {1}{E_{r}}}={\frac {1-\nu _{s}^{2}}{E_{s}}}+{\frac {1-\nu _{i}^{2}}{E_{i}}}} , where E s {\displaystyle E_{s}} and ν s {\displaystyle \nu _{s}} are the Young's modulus and Poisson's ratio of the sample, and E i {\displaystyle E_{i}} and ν i {\displaystyle \nu _{i}} are those of the indenter. Since typically E i ≫ E s {\displaystyle E_{i}\gg E_{s}} , the second term can usually be ignored. The most critical information, hardness, can be calculated as the maximum load divided by the projected contact area: H = P max / A c {\displaystyle H=P_{\max }/A_{c}} . Commonly used indentation techniques, as well as the detailed calculation for each method, are discussed as follows. 
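Before turning to the individual techniques, the relations above can be illustrated numerically: the reduced modulus follows from the unloading stiffness and contact area, the sample modulus from the reduced-modulus equation, and the hardness from the peak load over the projected contact area. All input values below are assumed for a hypothetical test and are not taken from the source.

```python
# Sketch: extracting reduced modulus and hardness from instrumented
# indentation quantities. Input numbers are assumed for illustration.
import math

P_max = 10e-3            # N, peak indentation load (assumed)
S = 2.0e5                # N/m, unloading stiffness dP/dh at peak load (assumed)
A_c = 1.0e-12            # m^2, projected contact area at peak load (assumed)

nu_s = 0.3               # Poisson's ratio of the sample (assumed)
E_i, nu_i = 1141e9, 0.07 # diamond indenter modulus (Pa) and Poisson's ratio

# Reduced modulus from the stiffness equation E_r = (sqrt(pi)/2) * S / sqrt(A_c)
E_r = math.sqrt(math.pi) * S / (2.0 * math.sqrt(A_c))

# Sample modulus from 1/E_r = (1 - nu_s^2)/E_s + (1 - nu_i^2)/E_i
E_s = (1 - nu_s**2) / (1.0 / E_r - (1 - nu_i**2) / E_i)

H = P_max / A_c          # hardness as mean contact pressure

print(f"E_r = {E_r/1e9:.1f} GPa, E_s = {E_s/1e9:.1f} GPa, H = {H/1e9:.2f} GPa")
```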
The term "macroindentation" is applied to tests with a larger test load, such as 1 kgf or more. There are various macroindentation tests, including: There is, in general, no simple relationship between the results of different hardness tests. Though there are practical conversion tables for hard steels, for example, some materials show qualitatively different behaviors under the various measurement methods. The Vickers and Brinell hardness scales correlate well over a wide range, however, with Brinell only producing overestimated values at high loads. Indentation procedures can, however, be used to extract genuine stress-strain relationships. Certain criteria need to be met if reliable results are to be obtained. These include the need to deform a relatively large volume, and hence to use large loads. The methodologies involved are often grouped under the term Indentation plastometry , which is described in a separate article. The term " microhardness " has been widely employed in the literature to describe the hardness testing of materials with low applied loads. A more precise term is "microindentation hardness testing." In microindentation hardness testing, a diamond indenter of specific geometry is impressed into the surface of the test specimen using a known applied force (commonly called a "load" or "test load") of 1 to 1000 gf . Microindentation tests typically have forces of 2 N (roughly 200 gf) and produce indentations of about 50 μm . Due to their specificity, microhardness testing can be used to observe changes in hardness on the microscopic scale. Unfortunately, it is difficult to standardize microhardness measurements; it has been found that the microhardness of almost any material is higher than its macrohardness. Additionally, microhardness values vary with load and work-hardening effects of materials. [ 3 ] The two most commonly used microhardness tests are tests that also can be applied with heavier loads as macroindentation tests: In microindentation testing, the hardness number is based on measurements made of the indent formed in the surface of the test specimen. The hardness number is based on the applied force divided by the surface area of the indent itself, giving hardness units in kgf/mm 2 . Microindentation hardness testing can be done using Vickers as well as Knoop indenters. For the Vickers test, both the diagonals are measured and the average value is used to compute the Vickers pyramid number. In the Knoop test, only the longer diagonal is measured, and the Knoop hardness is calculated based on the projected area of the indent divided by the applied force, also giving test units in kgf/mm 2 . The Vickers microindentation test is carried out in a similar manner welling to the Vickers macroindentation tests, using the same pyramid. The Knoop test uses an elongated pyramid to indent material samples. This elongated pyramid creates a shallow impression, which is beneficial for measuring the hardness of brittle materials or thin components. Both the Knoop and Vickers indenters require polishing of the surface to achieve accurate results. [ citation needed ] Scratch tests at low loads, such as the Bierbaum microcharacter test , performed with either 3 gf or 9 gf loads, preceded the development of microhardness testers using traditional indenters. In 1925, Smith and Sandland of the UK developed an indentation test that employed a square-based pyramidal indenter made from diamond. 
[ 11 ] They chose the pyramidal shape with an angle of 136° between opposite faces in order to obtain hardness numbers that would be as close as possible to Brinell hardness numbers for the specimen. The Vickers test has the great advantage of using one hardness scale to test all materials. The first reference to the Vickers indenter with low loads was made in the annual report of the National Physical Laboratory in 1932. Lips and Sack described the first Vickers tester using low loads in 1936. [ citation needed ] There is some disagreement in the literature regarding the load range applicable to microhardness testing. ASTM Specification E384, for example, states that the load range for microhardness testing is 1 to 1000 gf. For loads of 1 kgf and below, the Vickers hardness (HV) is calculated with the following equation, wherein the load ( L ) is in grams force and the mean of the two diagonals ( d ) is in millimeters: {\displaystyle HV={\frac {1.8544\,L}{1000\,d^{2}}}} For any given load, the hardness increases rapidly at low diagonal lengths, with the effect becoming more pronounced as the load decreases. Thus, at low loads, small measurement errors will produce large hardness deviations, and one should always use the highest possible load in any test. In the vertical portion of the curves, small measurement errors will likewise produce large hardness deviations. The main sources of error with indentation tests are poor technique, poor calibration of the equipment, and the strain hardening effect of the process. However, it has been experimentally determined through "strainless hardness tests" that the effect is minimal with smaller indentations. [ 12 ] Surface finish of the part and the indenter do not have an effect on the hardness measurement, as long as the indentation is large compared to the surface roughness. This proves to be useful when measuring the hardness of practical surfaces. It also is helpful when leaving a shallow indentation, because a finely etched indenter leaves a much easier-to-read indentation than a smooth indenter. [ 13 ] The indentation that is left after the indenter and load are removed is known to "recover", or spring back slightly. This effect is properly known as shallowing . For spherical indenters the indentation is known to stay symmetrical and spherical, but with a larger radius. For very hard materials the radius can be three times as large as the indenter's radius. This effect is attributed to the release of elastic stresses. Because of this effect, the diameter and depth of the indentation contain errors. The error from the change in diameter is known to be only a few percent, with the error for the depth being greater. [ 14 ] Another effect the load has on the indentation is the piling-up or sinking-in of the surrounding material. If the metal is work hardened it has a tendency to pile up and form a "crater". If the metal is annealed it will sink in around the indentation. Both of these effects add to the error of the hardness measurement. [ 15 ] When hardness, H {\displaystyle H} , is defined as the mean contact pressure (load divided by projected contact area), the yield stress, σ y {\displaystyle \sigma _{y}} , of many materials is proportional to the hardness through a constant known as the constraint factor, C: [ 16 ] {\displaystyle H=C\,\sigma _{y}} where H {\displaystyle H} is the hardness, σ y {\displaystyle \sigma _{y}} is the uniaxial yield stress, and C {\displaystyle C} is the constraint factor, typically close to 3 for metals. The hardness differs from the uni-axial compressive yield stress of the material because different compressive failure modes apply. A uni-axial test only constrains the material in one dimension, which allows the material to fail as a result of shear . 
Indentation hardness, on the other hand, is constrained in three dimensions, which prevents shear from dominating the failure. [ 16 ]
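The Vickers relation given above (load in grams force, mean diagonal in millimetres) is easy to wrap in a small helper, which also makes the load dependence of measurement error visible; the loads and diagonal readings below are invented for the example.

```python
# Vickers hardness from load (gf) and mean indent diagonal (mm):
# HV = 1.8544 * L / (1000 * d^2), giving kgf/mm^2.

def vickers_hardness(load_gf: float, diagonal_mm: float) -> float:
    return 1.8544 * load_gf / (1000.0 * diagonal_mm ** 2)

# Illustrative readings (assumed): the same material measured at two loads,
# each with a +1 micrometre error added to the diagonal measurement.
for load_gf, d_mm in [(25, 0.0152), (500, 0.0680)]:
    hv_true = vickers_hardness(load_gf, d_mm)
    hv_err = vickers_hardness(load_gf, d_mm + 0.001)   # +1 um reading error
    print(f"L = {load_gf:3d} gf: HV = {hv_true:6.1f}, "
          f"with +1 um error HV = {hv_err:6.1f} ({100*(hv_err/hv_true - 1):+.1f}%)")
```

The small-load reading shifts by more than ten percent for the same absolute measurement error, illustrating why the text recommends using the highest possible load.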
https://en.wikipedia.org/wiki/Indentation_hardness
Indentation plastometry is the idea of using an indentation-based procedure to obtain (bulk) mechanical properties (of metals) in the form of stress-strain relationships in the plastic regime (as opposed to hardness testing , which gives numbers that are only semi-quantitative indicators of the resistance to plastic deformation). Since indentation is a much easier and more convenient procedure than conventional tensile testing , with far greater potential for mapping of spatial variations, this is an attractive concept (provided that the outcome is at least approximately as reliable as those of standard uniaxial tests). Capturing of macroscopic (size-independent) properties brings in a requirement [ 1 ] [ 2 ] [ 3 ] [ 4 ] to deform a volume of material that is large enough to be representative of the bulk. This depends on the microstructure , but usually means that it must contain “many” grains and is typically of the order of hundreds of microns in linear dimensions. The indentation size effect , in which the measured hardness tends to increase as the deformed volume becomes small, is at least partly due to a failure to interrogate a representative volume. The indenter, which is normally spherical, therefore needs to have a radius in the approximate range of several hundred microns up to a mm or two. A further requirement concerns the plastic strains generated in the sample. The indentation response must be sensitive to the plasticity characteristics of the material over the strain range of interest, which normally extends up to at least several % and commonly up to several tens of %. The strains created in the sample must therefore also range up to values of this order. This typically requires that the “penetration ratio” (penetration depth over indenter radius) should be at least about 10%. Finally, depending on the hardness of the metal, this in turn requires that the facility should have a relatively high load capability – usually of the order of several kN. The simplest indentation procedures, which have been in use for many decades, involve the application of a pre-determined load (often from a dead weight), followed by measurement of the lateral size of the residual indent (or possibly its depth). However, many indentation procedures are now based on “instrumented” set-ups, in which the load is progressively ramped up and both load and penetration (displacement) are continuously monitored during indentation. A key experimental outcome is thus the load-displacement curve. Various types of equipment can be used to generate such curves. These include those designed to carry out so-called “ nanoindentation ” - for which both the load (down to the mN range) and the displacement (commonly sub-micron) are very small. However, as noted above, if the deformed volume is small, then it’s not possible to obtain “bulk” properties. Moreover, even with relatively large loads and displacements, some kind of “compliance correction” may be required, to separate the response of the sample from displacements associated with the loading system. The other main form of experimental outcome is the shape of the residual indent. As mentioned above, early types of hardness tester focused on this, in the form of (relatively crude) measurement of the “width” of the indent – commonly via simple optical microscopy. However, much richer information can be extracted by using a profilometer (optical or stylus) to obtain the full shape of the residual indent. 
With a spherical indenter (and a sample that is isotropic in the plane of the indented surface), the indent will exhibit radial symmetry and its shape can be captured in the form of a single profile (of depth against radial position). The details of this shape (for a given applied load) exhibit a high sensitivity to the stress-strain relationship of the sample. [ 5 ] [ 6 ] [ 7 ] Also, it is easier to obtain than a load-displacement curve, partly because no measurements need to be made during loading. Finally, such profilometry has potential for the detection and characterization [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] of sample anisotropy (whereas load-displacement curves carry no such information). Two main approaches have evolved for obtaining stress-strain relationships from experimental indentation outcomes (load-displacement curves or residual indent profiles). The simpler of the two involves direct “conversion” of the load-displacement curve. This is usually done [ 13 ] [ 14 ] by obtaining a series of “equivalent”, “effective” or “representative” values of the stress in the loaded part of the sample (from the applied load) and a corresponding set of values of the strain in the deformed region (from the displacement). The assumptions involved in carrying out such conversions are inevitably very crude, since (even for a spherical indenter) the fields of both stress and strain within the sample are highly complex and evolve throughout the process – the figure shows some typical plastic strain fields. Various empirical correction factors are commonly employed, with neural network “training” procedures sometimes being applied [ 15 ] [ 16 ] to sets of load-displacement data and corresponding stress-strain curves, to help evaluate them. It’s also common for loading to be periodically interrupted, and data from partial unloading procedures to be used in the conversion. However, unsurprisingly, universal conversions of this type (applied to samples with unknown stress-strain curves) tend to be unreliable [ 17 ] [ 18 ] [ 19 ] and it is now widely accepted that the procedure cannot be used with any confidence. The other main approach is a more cumbersome one, although with much greater potential for obtaining reliable results. It involves iterative numerical ( Finite element method – FEM) modelling of the indentation procedure. This is first done with a trial stress-strain relationship (in the form of an analytical expression – often termed a constitutive equation ), followed by convergence on the best fit version (set of parameter values in the equation), giving optimal agreement between experimental and modelled outcomes (load-displacement plots or residual indent profiles). This procedure fully captures the complexity of the evolving stress and strain fields during indentation. While it is based on relatively intensive modelling computations, protocols have been developed in which the convergence is automated and rapid. It has become clear that important advantages are offered by using the residual indent profile as the target outcome, rather than the load-displacement curve. These include easier measurement, greater sensitivity of the experimental outcome to the stress-strain relationship and potential for detection and characterisation of sample anisotropy – see above. The figure gives an indication of the sensitivity of the profile to the stress-strain curve of the material. 
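The iterative FEM convergence described above is, at its core, an inverse-optimisation loop: guess the parameters of a trial constitutive law, simulate the indentation, compare the predicted residual profile with the measured one, and repeat until the misfit is minimised. The sketch below shows only that outer loop, with a cheap analytic surrogate standing in for the finite element simulation; the surrogate, its parameters and the synthetic "measured" profile are all assumptions made for the illustration, and `scipy.optimize.minimize` is used as a generic optimiser.

```python
# Outer loop of an inverse ("iterative FEM") indentation analysis, demonstrated
# with a cheap analytic surrogate in place of a real finite element simulation.
import numpy as np
from scipy.optimize import minimize

radial_positions = np.linspace(0.0, 1.5, 50)   # mm, profile sample points (assumed)

def indentation_model(yield_stress, hardening_coeff):
    """Hypothetical surrogate for an FEM run: returns a residual-depth profile
    (a central depression plus pile-up decaying outwards). In a real analysis
    this function would call a finite element package."""
    depth = 0.05 * (300.0 / yield_stress)                 # softer -> deeper indent
    pileup = 0.01 * hardening_coeff * radial_positions * np.exp(-radial_positions)
    return -depth * np.exp(-(radial_positions / 0.4) ** 2) + pileup

# Synthetic "measured" profile, generated with known target parameters.
target = (350.0, 0.8)
measured_profile = indentation_model(*target)

def misfit(params):
    return float(np.sum((indentation_model(*params) - measured_profile) ** 2))

result = minimize(misfit, x0=[300.0, 0.5], method="Nelder-Mead")
print("recovered parameters:", result.x, "target:", target)
```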
The term PIP (profilometry-based indentation plastometry) thus encompasses the following features: 1) obtaining stress-strain curves characteristic of the bulk of a material (by using relatively large spherical indenters and relatively deep penetration), 2) experimental measurement of the residual indent profile, and 3) iterative FEM simulation of the indentation test, to obtain the stress-strain curve (captured in a constitutive equation) that gives the best fit between modelled and measured profiles. For tractable and user-friendly application, an integrated facility is needed, in which the procedures of indentation, profilometry and convergence on the optimal stress-strain curve are all under automated control.
https://en.wikipedia.org/wiki/Indentation_plastometry
The indentation size effect (ISE) is the observation that hardness tends to increase as the indent size decreases at small scales. [ 1 ] [ 2 ] When an indent (any small mark, but usually made with a special tool) is created during material testing, the hardness of the material is not constant. At the small scale, materials will actually be harder than at the macro-scale. For the conventional indentation size effect, the smaller the indentation, the larger the difference in hardness. The effect has been seen through nanoindentation and microindentation measurements at varying depths. Dislocations increase material hardness by increasing flow stress through dislocation blocking mechanisms. [ 3 ] [ clarification needed ] Materials contain statistically stored dislocations (SSD) which are created by homogeneous strain and are dependent upon the material and processing conditions. [ 4 ] Geometrically necessary dislocations (GND) on the other hand are formed, in addition to the dislocations statistically present, to maintain continuity within the material. These additional geometrically necessary dislocations (GND) further increase the flow stress in the material and therefore the measured hardness. Theory suggests that plastic flow is impacted by both strain and the size of the strain gradient experienced in the material. [ 5 ] [ 6 ] Smaller indents have higher strain gradients relative to the size of the plastic zone and therefore have a higher measured hardness in some materials. For practical purposes this effect means that hardness in the low micro and nano regimes cannot be directly compared if measured using different loads. However, the benefit of this effect is that it can be used to measure the effects of strain gradients on plasticity . Several new plasticity models have been developed using data from indentation size effect studies, [ 4 ] which can be applied to high strain gradient situations such as thin films. [ 7 ]
https://en.wikipedia.org/wiki/Indentation_size_effect
Independence-friendly logic ( IF logic ; proposed by Jaakko Hintikka and Gabriel Sandu [ fr ] in 1989) [ 1 ] is an extension of classical first-order logic (FOL) by means of slashed quantifiers of the form ( ∃ v / V ) {\displaystyle (\exists v/V)} and ( ∀ v / V ) {\displaystyle (\forall v/V)} , where V {\displaystyle V} is a finite set of variables. The intended reading of ( ∃ v / V ) {\displaystyle (\exists v/V)} is "there is a v {\displaystyle v} which is functionally independent from the variables in V {\displaystyle V} ". IF logic allows one to express more general patterns of dependence between variables than those which are implicit in first-order logic. This greater level of generality leads to an actual increase in expressive power; the set of IF sentences can characterize the same classes of structures as existential second-order logic ( Σ 1 1 {\displaystyle \Sigma _{1}^{1}} ). For example, it can express branching quantifier sentences, such as the formula ∃ c ∀ x ∃ y ∀ z ( ∃ w / { x , y } ) ( ( x = z ↔ y = w ) ∧ y ≠ c ) {\displaystyle \exists c\forall x\exists y\forall z(\exists w/\{x,y\})((x=z\leftrightarrow y=w)\land y\neq c)} which expresses infinity in the empty signature; this cannot be done in FOL. Therefore, first-order logic cannot, in general, express this pattern of dependency, in which y {\displaystyle y} depends only on x {\displaystyle x} and c {\displaystyle c} , and w {\displaystyle w} depends only on z {\displaystyle z} and c {\displaystyle c} . IF logic is more general than branching quantifiers , for example in that it can express dependencies that are not transitive, such as in the quantifier prefix ∀ x ∃ y ( ∃ z / { x } ) {\displaystyle \forall x\exists y(\exists z/\{x\})} , which expresses that y {\displaystyle y} depends on x {\displaystyle x} , and z {\displaystyle z} depends on y {\displaystyle y} , but z {\displaystyle z} does not depend on x {\displaystyle x} . The introduction of IF logic was partly motivated by the attempt of extending the game semantics of first-order logic to games of imperfect information . Indeed, a semantics for IF sentences can be given in terms of these kinds of games (or, alternatively, by means of a translation procedure to existential second-order logic). A semantics for open formulas cannot be given in the form of a Tarskian semantics ; [ 2 ] an adequate semantics must specify what it means for a formula to be satisfied by a set of assignments of common variable domain (a team ) rather than satisfaction by a single assignment. Such a team semantics was developed by Hodges . [ 3 ] Independence-friendly logic is translation equivalent, at the level of sentences, with a number of other logical systems based on team semantics, such as dependence logic , dependence-friendly logic, exclusion logic and independence logic; with the exception of the latter, IF logic is known to be equiexpressive to these logics also at the level of open formulas. However, IF logic differs from all the above-mentioned systems in that it lacks locality : the meaning of an open formula cannot be described just in terms of the free variables of the formula; it is instead dependent on the context in which the formula occurs. Independence-friendly logic shares a number of metalogical properties with first-order logic, but there are some differences, including lack of closure under (classical, contradictory) negation and higher complexity for deciding the validity of formulas. 
Extended IF logic addresses the closure problem, but its game-theoretical semantics is more complicated, and such logic corresponds to a larger fragment of second-order logic, a proper subset of Δ 2 1 {\displaystyle \Delta _{2}^{1}} . [ 4 ] Hintikka argued [ 5 ] that IF and extended IF logic should be used as a basis for the foundations of mathematics ; this proposal was met in some cases with skepticism. [ 6 ] A number of slightly different presentations of independence-friendly logic have appeared in the literature; here we follow Mann et al (2011). [ 7 ] For a fixed signature σ, terms and atomic formulas are defined exactly as in first-order logic with equality . Formulas of IF logic are defined as follows: The set Free ( φ ) {\displaystyle {\mbox{Free}}(\varphi )} of the free variables of an IF formula φ {\displaystyle \varphi } is defined inductively as follows: The last clause is the only one that differs from the clauses for first-order logic, the difference being that also the variables in the slash set V {\displaystyle V} are counted as free variables. An IF formula φ {\displaystyle \varphi } such that Free ( ϕ ) = ∅ {\displaystyle {\mbox{Free}}(\phi )=\emptyset } is an IF sentence . Three main approaches have been proposed for the definition of the semantics of IF logic. The first two, based respectively on games of imperfect information and on Skolemization, are mainly used in the definition of IF sentences only. The former generalizes a similar approach, for first-order logic, which was based instead on games of perfect information. The third approach, team semantics , is a compositional semantics in the spirit of Tarskian semantics. However, this semantics does not define what it means for a formula to be satisfied by an assignment (rather, by a set of assignments). The first two approaches were developed in earlier publications on if logic; [ 8 ] [ 9 ] the third one by Hodges in 1997. [ 10 ] [ 11 ] In this section, we differentiate the three approaches by writing distinct pedices, as in ⊨ G T S , ⊨ S k , ⊨ {\displaystyle \models _{GTS},\models _{Sk},\models } . Since the three approaches are fundamentally equivalent, only the symbol ⊨ {\displaystyle \models } will be used in the rest of the article. Game-Theoretical Semantics assigns truth values to IF sentences according to the properties of some 2-player games of imperfect information. For ease of presentation, it is convenient to associate games not only to sentences, but also to formulas. More precisely, one defines games G ( φ , M , s ) {\displaystyle G(\varphi ,{\mathcal {M}},s)} for each triple formed by an IF formula φ {\displaystyle \varphi } , a structure M {\displaystyle {\mathcal {M}}} , and an assignment s : U ⊇ Free ( φ ) → M {\displaystyle s:U\supseteq {\mbox{Free}}(\varphi )\rightarrow {\mathcal {M}}} . The semantic game G ( φ , M , s ) {\displaystyle G(\varphi ,{\mathcal {M}},s)} has two players, called Eloise (or Verifier) and Abelard (or Falsifier). The allowed moves in the semantic game G ( φ , M , s ) {\displaystyle G(\varphi ,{\mathcal {M}},s)} are determined by the synctactical structure of the formula under consideration. For simplicity, we first assume that φ {\displaystyle \varphi } is in negation normal form, with negations symbols occurring only in front of atomic subformulas. 
More generally, if φ {\displaystyle \varphi } is not in negation normal form, we can state, as a rule for negation, that, when a game G ( ¬ φ , M , s ) {\displaystyle G(\lnot \varphi ,{\mathcal {M}},s)} is reached, the players begin playing a dual game G ∗ ( φ , M , s ) {\displaystyle G^{*}(\varphi ,{\mathcal {M}},s)} in which the roles of Verifiers and Falsifier are switched. Informally, a sequence of moves in a game G ( φ , M , s ) {\displaystyle G(\varphi ,{\mathcal {M}},s)} is a history. At the end of each history h {\displaystyle h} , some subgame G ( ψ h , M , s h ) {\displaystyle G(\psi _{h},{\mathcal {M}},s_{h})} is played; we call s h {\displaystyle s_{h}} the assignment associated to h {\displaystyle h} , and ψ h {\displaystyle \psi _{h}} the subformula occurrence associated to h {\displaystyle h} . The player associated to h {\displaystyle h} is Eloise in case the most external logical operator in ψ h {\displaystyle \psi _{h}} is ∨ {\displaystyle \lor } or ∃ {\displaystyle \exists } , and Abelard in case it is ∧ {\displaystyle \land } or ∀ {\displaystyle \forall } . The set h {\displaystyle h} of allowed moves in a history h {\displaystyle h} is M {\displaystyle {\mathcal {M}}} if the most external operator of ψ h {\displaystyle \psi _{h}} is ∃ {\displaystyle \exists } or ∀ {\displaystyle \forall } ; it is { L , R } {\displaystyle \{L,R\}} ( L , R {\displaystyle L,R} being any two distinct objects, symbolizing 'left' and 'right') in case the most external operator of ψ h {\displaystyle \psi _{h}} is ∨ {\displaystyle \lor } or ∧ {\displaystyle \land } . Given two assignments s , t {\displaystyle s,t} of same domain, and V ⊆ d o m ( s ) {\displaystyle V\subseteq dom(s)} we write s ∼ V t {\displaystyle s\sim _{V}t} if s ( w ) = t ( w ) {\displaystyle s(w)=t(w)} on any variable w ∈ d o m ( s ) ∖ V {\displaystyle w\in dom(s)\setminus V} . Imperfect information is introduced in the games by stipulating that certain histories are indistinguishable for the associated player; indistinguishable histories are said to form an 'information set'. Intuitively, if the history h {\displaystyle h} is in the information set I {\displaystyle I} , the player associated to h {\displaystyle h} does not know whether he is in h {\displaystyle h} or in some other history of I {\displaystyle I} . Consider two histories h , h ′ {\displaystyle h,h'} such that the associated ψ h , ψ h ′ {\displaystyle \psi _{h},\psi _{h'}} are identical subformula occurrences of the form ( Q v / V ) χ {\displaystyle (Qv/V)\chi } ( Q = ∃ {\displaystyle Q=\exists } or ∀ {\displaystyle \forall } ); if furthermore s h ∼ V s h ′ {\displaystyle s_{h}\sim _{V}s_{h'}} , we write h ∼ ∃ h ′ {\displaystyle h\sim _{\exists }h'} (in case Q = ∃ {\displaystyle Q=\exists } ) or h ∼ ∀ h ′ {\displaystyle h\sim _{\forall }h'} (in case Q = ∀ {\displaystyle Q=\forall } ), in order to specify that the two histories are indistinguishable for Eloise, resp. for Abelard. We also stipulate, in general, reflexivity of this relation: if ψ = χ 1 ∨ χ 2 {\displaystyle \psi =\chi _{1}\lor \chi _{2}} , then h ∼ ∃ h ′ {\displaystyle h\sim _{\exists }h'} ; and if ψ = χ 1 ∧ χ 2 {\displaystyle \psi =\chi _{1}\land \chi _{2}} , then h ∼ ∀ h ′ {\displaystyle h\sim _{\forall }h'} . For a fixed game G ( φ , M , s ) {\displaystyle G(\varphi ,{\mathcal {M}},s)} , write H ∃ {\displaystyle H_{\exists }} for the set of histories to which Eloise is associated, and similarly H ∀ {\displaystyle H_{\forall }} for the set of histories of Abelard. 
A strategy for Eloise in the game G ( φ , M , s ) {\displaystyle G(\varphi ,{\mathcal {M}},s)} is any function that assigns, to any possible history in which it is Eloise's turn to play, a legal move; more precisely, any function σ : H ∃ → ∏ h ∈ H ∃ A ( h ) {\displaystyle \sigma :H_{\exists }\rightarrow \prod _{h\in H_{\exists }}A(h)} such that σ ( h ) ∈ A ( h ) {\displaystyle \sigma (h)\in A(h)} for every history h ∈ H ∃ {\displaystyle h\in H_{\exists }} . One can define dually the strategies of Abelard. A strategy for Eloise is uniform if, whenever h ∼ ∃ h ′ {\displaystyle h\sim _{\exists }h'} , σ ( h ) = σ ( h ′ ) {\displaystyle \sigma (h)=\sigma (h')} ; for Abelard, if h ∼ ∀ h ′ {\displaystyle h\sim _{\forall }h'} implies σ ( h ) = σ ( h ′ ) {\displaystyle \sigma (h)=\sigma (h')} . A strategy σ {\displaystyle \sigma } for Eloise is winning if Eloise wins in each terminal history that can be reached by playing according to σ {\displaystyle \sigma } . Similarly for Abelard. An IF sentence φ {\displaystyle \varphi } is true in a structure M {\displaystyle {\mathcal {M}}} ( M ⊨ G T S + φ {\displaystyle {\mathcal {M}}\models _{GTS}^{+}\varphi } ) if Eloise has a uniform winning strategy in the game G ( φ , M , ∅ ) {\displaystyle G(\varphi ,{\mathcal {M}},\emptyset )} . It is false ( M ⊨ G T S − φ {\displaystyle {\mathcal {M}}\models _{GTS}^{-}\varphi } ) if Abelard has a winning strategy. It is undetermined if neither Eloise nor Abelard has a winning strategy. The semantics of IF logic thus defined is a conservative extension of first-order semantics, in the following sense. If φ {\displaystyle \varphi } is an IF sentence with empty slash sets, associate to it the first-order formula φ ′ {\displaystyle \varphi '} which is identical to it, except in that each IF quantifier ( Q v / ∅ ) {\displaystyle (Qv/\emptyset )} is replaced by the corresponding first-order quantifier Q v {\displaystyle Qv} . Then M ⊨ G T S + φ {\displaystyle {\mathcal {M}}\models _{GTS}^{+}\varphi } iff M ⊨ φ ′ {\displaystyle {\mathcal {M}}\models \varphi '} in the Tarskian sense; and M ⊨ G T S − φ {\displaystyle {\mathcal {M}}\models _{GTS}^{-}\varphi } iff M ⊭ φ ′ {\displaystyle {\mathcal {M}}\not \models \varphi '} in the Tarskian sense. More general games can be used to assign a meaning to (possibly open) IF formulas; more exactly, it is possible to define what it means for an IF formula φ {\displaystyle \varphi } to be satisfied, on a structure M {\displaystyle {\mathcal {M}}} , by a team X {\displaystyle X} (a set of assignments of common variable domain d o m ( X ) {\displaystyle dom(X)} and codomain M {\displaystyle {\mathcal {M}}} ). The associated games G ( φ , M , X ) {\displaystyle G(\varphi ,M,X)} begin with the random choice of an assignment s ∈ X {\displaystyle s\in X} ; after this initial move, the game G ( φ , M , s ) {\displaystyle G(\varphi ,M,s)} is played. The existence of a winning strategy for Eloise defines positive satisfaction ( M , X ⊨ G T S + φ {\displaystyle M,X\models _{GTS}^{+}\varphi } ), and existence of a winning strategy for Abelard defines negative satisfaction ( M , X ⊨ G T S − φ {\displaystyle M,X\models _{GTS}^{-}\varphi } ). At this level of generality, Game-theoretical Semantics can be replaced by an algebraic approach, team semantics (defined below). A definition of truth for IF sentences can be given, alternatively, by means of a translation into existential second-order logic. The translation generalizes the Skolemization procedure of first-order logic. 
Falsity is defined by a dual procedure called Kreiselization. Given an IF formula φ {\displaystyle \varphi } , we first define its skolemization relativized to a finite set U ⊇ Free ( φ ) {\displaystyle U\supseteq {\mbox{Free}}(\varphi )} of variables. For every existential quantifier ( ∃ v / V ) {\displaystyle (\exists v/V)} occurring in φ {\displaystyle \varphi } , let f v {\displaystyle f_{v}} be a new function symbol (a "Skolem function"). We write S u b s t ( φ , v , t ) {\displaystyle Subst(\varphi ,v,t)} for the formula which is obtained substituting, in φ {\displaystyle \varphi } , all free occurrences of the variable v {\displaystyle v} with the term t {\displaystyle t} . The Skolemization of φ {\displaystyle \varphi } relative to U {\displaystyle U} , denoted Sk U ( φ ) {\displaystyle {\mbox{Sk}}_{U}(\varphi )} , is defined by the following inductive clauses: If φ {\displaystyle \varphi } is an IF sentence, its (unrelativized) Skolemization is defined as Sk ( φ ) = Sk ∅ ( φ ) {\displaystyle {\mbox{Sk}}(\varphi )={\mbox{Sk}}_{\varnothing }(\varphi )} . Given an IF formula φ {\displaystyle \varphi } , associate, to each universal quantifier ( ∀ v / V ) {\displaystyle (\forall v/V)} occurring in it, a new function symbol g v {\displaystyle g_{v}} (a "Kreisel function"). Then, the Kreiselization Kr U ( φ ) {\displaystyle {\mbox{Kr}}_{U}(\varphi )} of φ {\displaystyle \varphi } relative to a finite set of variables U ⊇ Free ( φ ) {\displaystyle U\supseteq {\mbox{Free}}(\varphi )} , is defined by the following inductive clauses: If φ {\displaystyle \varphi } is an IF sentence, its (unrelativized) Kreiselization is defined as Kr ( φ ) = Kr ∅ ( φ ) {\displaystyle {\mbox{Kr}}(\varphi )={\mbox{Kr}}_{\varnothing }(\varphi )} . Given an IF sentence φ {\displaystyle \varphi } with n {\displaystyle n} existential quantifiers, a structure M {\displaystyle {\mathcal {M}}} , and a list f → {\displaystyle {\vec {f}}} of n {\displaystyle n} functions of appropriate arities, we denote as ( M , f → ) {\displaystyle ({\mathcal {M}},{\vec {f}})} the expansion of M {\displaystyle {\mathcal {M}}} which assigns the functions f → {\displaystyle {\vec {f}}} as interpretations for the Skolem functions of φ {\displaystyle \varphi } . An IF sentence is true on a structure M {\displaystyle {\mathcal {M}}} , written M ⊨ Sk + φ {\displaystyle {\mathcal {M}}\models _{\mbox{Sk}}^{+}\varphi } , if there is a tuple f → {\displaystyle {\vec {f}}} of functions such that ( M , f → ) ⊨ Sk ( φ ) {\displaystyle ({\mathcal {M}},{\vec {f}})\models {\mbox{Sk}}(\varphi )} . Similarly, M ⊨ Sk − φ {\displaystyle {\mathcal {M}}\models _{\mbox{Sk}}^{-}\varphi } if there is a tuple f → {\displaystyle {\vec {f}}} of functions such that ( M , f → ) ⊨ Kr ( φ ) {\displaystyle ({\mathcal {M}},{\vec {f}})\models {\mbox{Kr}}(\varphi )} ; and M ⊨ Sk 0 φ {\displaystyle {\mathcal {M}}\models _{\mbox{Sk}}^{0}\varphi } iff neither of the previous conditions holds. For any IF sentence, Skolem Semantics returns the same values as Game-theoretical Semantics. [ citation needed ] By means of team semantics, it is possible to give a compositional account of the semantics of IF logic. Truth and falsity are grounded on the notion of 'satisfiability of a formula by a team'. Let M {\displaystyle {\mathcal {M}}} be a structure and let V = { v 1 , … , v n } {\displaystyle V=\{v_{1},\ldots ,v_{n}\}} be a finite set of variables. 
Then a team over M {\displaystyle {\mathcal {M}}} with domain V {\displaystyle V} is a set of assignments over M {\displaystyle {\mathcal {M}}} with domain V {\displaystyle V} , that is, a set of functions s {\displaystyle s} from V {\displaystyle V} to M {\displaystyle {\mathcal {M}}} . Duplicating and supplementing are two operations on teams which are related to the semantics of universal and existential quantification. It is customary to replace repeated applications of these two operation with more succinct notations, such as X [ M F / u v ] {\displaystyle X[{\mathcal {M}}F/uv]} for ( X [ M / u ] ) [ F / v ] {\displaystyle (X[{\mathcal {M}}/u])[F/v]} . As above, given two assignments s , t {\displaystyle s,t} with same variable domain, we write s ∼ V t {\displaystyle s\sim _{V}t} if s ( w ) = t ( w ) {\displaystyle s(w)=t(w)} for every variable w ∈ d o m ( s ) ∖ V {\displaystyle w\in dom(s)\setminus V} . Given a team X {\displaystyle X} on a structure M {\displaystyle {\mathcal {M}}} and a finite set V {\displaystyle V} of variables, we say that a function F : X → M {\displaystyle F:X\rightarrow {\mathcal {M}}} is V {\displaystyle V} -uniform if F ( s ) = F ( t ) {\displaystyle F(s)=F(t)} whenever s ∼ V t {\displaystyle s\sim _{V}t} . Team semantics is three-valued, in the sense that a formula may happen to be positively satisfied by a team on a given structure, or negatively satisfied by it, or neither. The semantics clauses for positive and negative satisfaction are defined by simultaneous induction on the synctactical structure of IF formulas. Positive satisfaction: Negative satisfaction: According to team semantics, an IF sentence φ {\displaystyle \varphi } is said to be true ( M ⊨ + φ {\displaystyle {\mathcal {M}}\models ^{+}\varphi } ) on a structure M {\displaystyle {\mathcal {M}}} if it is satisfied on M {\displaystyle {\mathcal {M}}} by the singleton team { ∅ } {\displaystyle \{\emptyset \}} , in symbols: M , { ∅ } ⊨ + φ {\displaystyle {\mathcal {M}},\{\emptyset \}\models ^{+}\varphi } . Similarly, φ {\displaystyle \varphi } is said to be false ( M ⊨ − φ {\displaystyle {\mathcal {M}}\models ^{-}\varphi } ) on M {\displaystyle {\mathcal {M}}} if M , { ∅ } ⊨ − φ {\displaystyle {\mathcal {M}},\{\emptyset \}\models ^{-}\varphi } ; it is said to be undetermined ( M ⊨ 0 φ {\displaystyle {\mathcal {M}}\models ^{0}\varphi } ) if M , { ∅ } ⊭ + φ {\displaystyle {\mathcal {M}},\{\emptyset \}\not \models ^{+}\varphi } and M , { ∅ } ⊭ − φ {\displaystyle {\mathcal {M}},\{\emptyset \}\not \models ^{-}\varphi } . For any team X {\displaystyle X} on a structure M {\displaystyle {\mathcal {M}}} , and any IF formula φ {\displaystyle \varphi } , we have: M , X ⊨ + φ {\displaystyle {\mathcal {M}},X\models ^{+}\varphi } iff M , X ⊨ G T S + φ {\displaystyle {\mathcal {M}},X\models _{GTS}^{+}\varphi } and M , X ⊨ − φ {\displaystyle {\mathcal {M}},X\models ^{-}\varphi } iff M , X ⊨ G T S − φ {\displaystyle {\mathcal {M}},X\models _{GTS}^{-}\varphi } . From this it immediately follows that, for sentences φ {\displaystyle \varphi } , M ⊨ + φ ⇔ M ⊨ G T S + φ {\displaystyle {\mathcal {M}}\models ^{+}\varphi \Leftrightarrow {\mathcal {M}}\models _{GTS}^{+}\varphi } , M ⊨ − φ ⇔ M ⊨ G T S − φ {\displaystyle {\mathcal {M}}\models ^{-}\varphi \Leftrightarrow {\mathcal {M}}\models _{GTS}^{-}\varphi } and M ⊨ 0 φ ⇔ M ⊨ G T S 0 φ {\displaystyle {\mathcal {M}}\models ^{0}\varphi \Leftrightarrow {\mathcal {M}}\models _{GTS}^{0}\varphi } . 
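The Skolemization approach above can be made concrete on small finite structures by brute force: each slashed existential quantifier contributes a Skolem function that may depend only on the universally quantified variables it is not slashed from. The sketch below checks the IF sentence ∀x (∃y/{x}) x = y, whose Skolem function for y must be a constant, and contrasts it with the ordinary first-order sentence ∀x ∃y x = y; the encoding is ad hoc and chosen only for this illustration.

```python
# Brute-force check of the Skolem semantics for the IF sentence
#     forall x (exists y / {x})  x = y
# on small finite structures. Because y is slashed on {x}, its Skolem
# function may not depend on x, so it degenerates to a constant.

def if_sentence_true(domain):
    for y_const in domain:                       # candidate constant Skolem functions
        if all(x == y_const for x in domain):    # the matrix x = y must hold for every x
            return True
    return False

def fo_sentence_true(domain):
    # Ordinary first-order  forall x exists y  x = y : here y may depend on x.
    return all(any(x == y for y in domain) for x in domain)

for n in range(1, 5):
    domain = list(range(n))
    print(f"|M| = {n}:  IF sentence {'true' if if_sentence_true(domain) else 'not true'},"
          f"  FO sentence {'true' if fo_sentence_true(domain) else 'not true'}")
```

As expected, the IF sentence comes out true only on one-element domains, while the first-order version is true on every domain, reflecting the extra expressive constraint imposed by the slash set.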
Since IF logic is, in its usual acceptation, three-valued, multiple notions of formula equivalence are of interest. Let φ , ψ {\displaystyle \varphi ,\psi } be two IF formulas. φ ⊨ + ψ {\displaystyle \varphi \models ^{+}\psi } ( φ {\displaystyle \varphi } truth entails ψ {\displaystyle \psi } ) if M , X ⊨ + φ ⇒ M , X ⊨ + ψ {\displaystyle {\mathcal {M}},X\models ^{+}\varphi \Rightarrow {\mathcal {M}},X\models ^{+}\psi } for any structure M {\displaystyle {\mathcal {M}}} and any team X {\displaystyle X} such that d o m ( X ) ⊇ Free ( φ ) ∪ Free ( ψ ) {\displaystyle dom(X)\supseteq {\mbox{Free}}(\varphi )\cup {\mbox{Free}}(\psi )} . φ ≡ + ψ {\displaystyle \varphi \equiv ^{+}\psi } ( φ {\displaystyle \varphi } is truth equivalent to ψ {\displaystyle \psi } ) if φ ⊨ + ψ {\displaystyle \varphi \models ^{+}\psi } and ψ ⊨ + φ {\displaystyle \psi \models ^{+}\varphi } . φ ⊨ − ψ {\displaystyle \varphi \models ^{-}\psi } ( φ {\displaystyle \varphi } falsity entails ψ {\displaystyle \psi } ) if M , X ⊨ − ψ ⇒ M , X ⊨ − φ {\displaystyle {\mathcal {M}},X\models ^{-}\psi \Rightarrow {\mathcal {M}},X\models ^{-}\varphi } for any structure M {\displaystyle {\mathcal {M}}} and any team X {\displaystyle X} such that d o m ( X ) ⊇ Free ( φ ) ∪ Free ( ψ ) {\displaystyle dom(X)\supseteq {\mbox{Free}}(\varphi )\cup {\mbox{Free}}(\psi )} . φ ≡ − ψ {\displaystyle \varphi \equiv ^{-}\psi } ( φ {\displaystyle \varphi } is falsity equivalent to ψ {\displaystyle \psi } ) if φ ⊨ − ψ {\displaystyle \varphi \models ^{-}\psi } and ψ ⊨ − φ {\displaystyle \psi \models ^{-}\varphi } . φ ⊨ ψ {\displaystyle \varphi \models \psi } ( φ {\displaystyle \varphi } strongly entails ψ {\displaystyle \psi } ) if φ ⊨ + ψ {\displaystyle \varphi \models ^{+}\psi } and φ ⊨ − ψ {\displaystyle \varphi \models ^{-}\psi } . φ ≡ ψ {\displaystyle \varphi \equiv \psi } ( φ {\displaystyle \varphi } is strongly equivalent to ψ {\displaystyle \psi } ) if φ ≡ + ψ {\displaystyle \varphi \equiv ^{+}\psi } and φ ≡ − ψ {\displaystyle \varphi \equiv ^{-}\psi } . The definitions above specialize for IF sentences as follows. Two IF sentences φ , ψ {\displaystyle \varphi ,\psi } are truth equivalent if they are true in the same structures; they are falsity equivalent if they are false in the same structures; they are strongly equivalent if they are both truth and falsity equivalent. Intuitively, using strong equivalence amounts to considering IF logic as 3-valued (true/undetermined/false), while truth equivalence treats IF sentences as if they were 2-valued (true/untrue). Many logical rules of IF logic can be adequately expressed only in terms of more restricted notions of equivalence, which take into account the context in which a formula might appear. For example, if U {\displaystyle U} is a finite set of variables and U ⊇ Free ( φ ) ∪ Free ( ψ ) {\displaystyle U\supseteq {\mbox{Free}}(\varphi )\cup {\mbox{Free}}(\psi )} , one can state that φ {\displaystyle \varphi } is truth equivalent to ψ {\displaystyle \psi } relative to U {\displaystyle U} ( φ ≡ U ψ {\displaystyle \varphi \equiv _{U}\psi } ) in case M , X ⊨ + ψ ⇔ M , X ⊨ + φ {\displaystyle {\mathcal {M}},X\models ^{+}\psi \Leftrightarrow {\mathcal {M}},X\models ^{+}\varphi } for any structure M {\displaystyle {\mathcal {M}}} and any team X {\displaystyle X} of domain U {\displaystyle U} .
IF sentences can be translated in a truth-preserving fashion into sentences of (functional) existential second-order logic ( Σ 1 1 {\displaystyle \Sigma _{1}^{1}} ) by means of the Skolemization procedure (see above). Vice versa, every Σ 1 1 {\displaystyle \Sigma _{1}^{1}} sentence can be translated into an IF sentence by means of a variant of the Walkoe-Enderton translation procedure for partially-ordered quantifiers ( [ 13 ] [ 14 ] ). In other words, IF logic and Σ 1 1 {\displaystyle \Sigma _{1}^{1}} are expressively equivalent at the level of sentences. This equivalence can be used to prove many of the properties that follow; they are inherited from Σ 1 1 {\displaystyle \Sigma _{1}^{1}} and in many cases similar to properties of FOL. We denote by T {\displaystyle T} a (possibly infinite) set of IF sentences. The notion of satisfiability by a team has the following properties: Since IF formulas are satisfied by teams and formulas of classical logics are satisfied by assignments, there is no obvious intertranslation between IF formulas and formulas of some classical logic system. However, there is a translation procedure [ 18 ] of IF formulas into sentences of relational Σ 1 1 {\displaystyle \Sigma _{1}^{1}} (actually, one distinct translation τ U , R {\displaystyle \tau _{U,R}} for each finite U ⊇ Free ( φ ) {\displaystyle U\supseteq {\mbox{Free}}(\varphi )} and for each choice of a predicate symbol R {\displaystyle R} of arity c a r d ( U ) {\displaystyle card(U)} ). In this kind of translation, an extra n-ary predicate symbol R {\displaystyle R} is used to represent an n-variable team X {\displaystyle X} . This is motivated by the fact that, once an ordering v 1 … v n {\displaystyle v_{1}\dots v_{n}} of the variables of d o m ( X ) {\displaystyle dom(X)} has been fixed, it is possible to associate a relation R e l v 1 … v n ( X ) = { ( s ( v 1 ) , … , s ( v n ) ) | s ∈ X } {\displaystyle Rel_{v_{1}\dots v_{n}}(X)=\{(s(v_{1}),\dots ,s(v_{n}))|s\in X\}} to the team X {\displaystyle X} . With these conventions, an IF formula is related to its translation thus: where ( M , R e l v 1 … v n ( X ) ) {\displaystyle (M,Rel_{v_{1}\dots v_{n}}(X))} is the expansion of M {\displaystyle {\mathcal {M}}} that assigns R e l v 1 … v n ( X ) {\displaystyle Rel_{v_{1}\dots v_{n}}(X)} as interpretation for the predicate R {\displaystyle R} . Through this correlation, it is possible to say that, on a structure M {\displaystyle {\mathcal {M}}} , an IF formula φ {\displaystyle \varphi } of n free variables defines a family of n-ary relations over M {\displaystyle {\mathcal {M}}} (the family of the relations R e l v 1 … v n ( X ) {\displaystyle Rel_{v_{1}\dots v_{n}}(X)} such that M , X ⊨ φ {\displaystyle {\mathcal {M}},X\models \varphi } ). In 2009, Kontinen and Väänänen [ 19 ] showed, by means of a partial inverse translation procedure, that the families of relations that are definable by IF logic are exactly those that are nonempty, downward closed and definable in relational Σ 1 1 {\displaystyle \Sigma _{1}^{1}} with an extra predicate R {\displaystyle R} (or, equivalently, nonempty and definable by a Σ 1 1 {\displaystyle \Sigma _{1}^{1}} sentence in which R {\displaystyle R} occurs only negatively). IF logic is not closed under classical negation. The boolean closure of IF logic is known as extended IF logic and it is equivalent to a proper fragment of Δ 2 1 {\displaystyle \Delta _{2}^{1}} (Figueira et al. 2011).
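As a small illustration of the encoding Rel_{v1…vn}(X) used in the relational translation above, here is a hypothetical helper in the same sketch style as before; the name team_to_relation is not standard and the team is invented for the example.

```python
def team_to_relation(team, variables):
    """Rel_{v1...vn}(X): fix an ordering v1...vn of the variables and read
    each assignment of the team X as a tuple, giving an n-ary relation."""
    return {tuple(s[v] for v in variables) for s in team}

# A team of two assignments over the variables x and y ...
X = [{"x": 0, "y": 0}, {"x": 1, "y": 1}]
# ... becomes a binary relation over the structure's domain.
print(team_to_relation(X, ["x", "y"]))   # {(0, 0), (1, 1)}
```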
Hintikka (1996, p. 196) claimed that "virtually all of classical mathematics can in principle be done in extended IF first-order logic". A number of properties of IF logic follow from logical equivalence with Σ 1 1 {\displaystyle \Sigma _{1}^{1}} and bring it closer to first-order logic , including a compactness theorem , a Löwenheim–Skolem theorem , and a Craig interpolation theorem (Väänänen, 2007, p. 86). However, Väänänen (2001) proved that the set of Gödel numbers of valid sentences of IF logic with at least one binary predicate symbol (set denoted by Val IF ) is recursively isomorphic with the corresponding set of Gödel numbers of valid (full) second-order sentences in a vocabulary that contains one binary predicate symbol (set denoted by Val 2 ). Furthermore, Väänänen showed that Val 2 is the complete Π 2 -definable set of integers, and that Val 2 is not in Σ n m {\displaystyle \Sigma _{n}^{m}} for any finite m and n . Väänänen (2007, pp. 136–139) summarizes the complexity results as follows: Feferman (2006) cites Väänänen's 2001 result to argue (contra Hintikka) that while satisfiability might be a first-order matter, the question of whether there is a winning strategy for Verifier over all structures in general "lands us squarely in full second order logic " (emphasis Feferman's). Feferman also attacked the claimed usefulness of the extended IF logic, because the sentences in Π 1 1 {\displaystyle \Pi _{1}^{1}} do not admit a game-theoretic interpretation.
https://en.wikipedia.org/wiki/Independence-friendly_logic
In mathematical logic , independence is the unprovability of some specific sentence from some specific set of other sentences. The sentences in this set are referred to as "axioms". A sentence σ is independent of a given first-order theory T if T neither proves nor refutes σ; that is, it is impossible to prove σ from T , and it is also impossible to prove from T that σ is false. Sometimes, σ is said (synonymously) to be undecidable from T . (This concept is unrelated to the idea of " decidability " as in a decision problem .) A theory T is independent if no axiom in T is provable from the remaining axioms in T . A theory for which there is an independent set of axioms is independently axiomatizable . Some authors say that σ is independent of T when T simply cannot prove σ, and do not necessarily assert by this that T cannot refute σ. These authors will sometimes say "σ is independent of and consistent with T " to indicate that T can neither prove nor refute σ. Many interesting statements in set theory are independent of Zermelo–Fraenkel set theory (ZF). The following statements in set theory are known to be independent of ZF, under the assumption that ZF is consistent: The following statements (none of which have been proved false) cannot be proved in ZFC (the Zermelo–Fraenkel set theory plus the axiom of choice) to be independent of ZFC, under the added hypothesis that ZFC is consistent. The following statements are inconsistent with the axiom of choice, and therefore with ZFC. However they are probably independent of ZF, in a corresponding sense to the above: They cannot be proved in ZF, and few working set theorists expect to find a refutation in ZF. However ZF cannot prove that they are independent of ZF, even with the added hypothesis that ZF is consistent. Since 2000, logical independence has become understood as having crucial significance in the foundations of physics. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Independence_(mathematical_logic)
In combinatorial mathematics , an independence system S {\displaystyle S} is a pair ( V , I ) {\displaystyle (V,{\mathcal {I}})} , where V {\displaystyle V} is a finite set and I {\displaystyle {\mathcal {I}}} is a collection of subsets of V {\displaystyle V} (called the independent sets or feasible sets ) with the following properties: the empty set is independent (so I {\displaystyle {\mathcal {I}}} is non-empty), and every subset of an independent set is independent (the hereditary property). Another term for an independence system is an abstract simplicial complex . HYPERGRAPHS ⊃ INDEPENDENCE-SYSTEMS = ABSTRACT-SIMPLICIAL-COMPLEXES ⊃ MATROIDS.
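The two defining properties can be checked mechanically for a finite family of sets. The following Python sketch is illustrative only (the function name and the triangle example are not from the source); it tests a family for the two properties, using the independent edge sets of a triangle (the graphic matroid on three edges) as an example.

```python
from itertools import combinations

def is_independence_system(V, I):
    """Check the two defining properties of an independence system (V, I):
    the empty set is independent, and every subset of an independent set
    is again independent (the hereditary property)."""
    family = {frozenset(s) for s in I}
    if not all(s <= frozenset(V) for s in family):
        return False                      # independent sets must be subsets of V
    if frozenset() not in family:
        return False                      # the empty set must be independent
    for s in family:
        for k in range(len(s)):
            # every k-element subset of an independent set must be independent
            if any(frozenset(c) not in family for c in combinations(s, k)):
                return False
    return True

# Example: independent sets of the graphic matroid on a triangle a-b-c
# (all edge subsets containing no cycle, i.e. every subset except all three edges).
V = {"ab", "bc", "ca"}
I = [set(), {"ab"}, {"bc"}, {"ca"}, {"ab", "bc"}, {"bc", "ca"}, {"ab", "ca"}]
print(is_independence_system(V, I))   # True
```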
https://en.wikipedia.org/wiki/Independence_system
The Independent Engineer Battalion "Codru" ( Romanian : Batalionul de geniu „Codru” ) is the engineering formation of the Moldovan National Army , based in the village of Negrești , Strășeni District . Soldiers of the battalion have been on international missions, including the Kosovo Force mission in Kosovo . The battalion was formed on 16 October 1992. [ 1 ] It was created to assist the regular army during the Transnistrian War in the early 1990s. It was the first unit of the National Army to be decorated by presidential decree with the state order "Faith of the Fatherland", class I. [ 2 ] Members of the unit deployed to Iraq in both 2003 and 2008. The Moldovan Ministry of Defense reported that in 2013, the battalion was called out 133 times and safely disposed of over 1,800 pieces of ordnance. Since January 2014, it has safely removed 192 pieces of unexploded ordnance. [ 3 ]
https://en.wikipedia.org/wiki/Independent_Engineer_Battalion_"Codru"
In condensed matter physics , the independent electron approximation is a simplification used in complex systems, consisting of many electrons , that approximates the electron–electron interaction in crystals as null . It is a requirement for both the free electron model and the nearly-free electron model , where it is used alongside Bloch's theorem . [ 1 ] In quantum mechanics , this approximation is often used to simplify a quantum many-body problem into single-particle approximations. [ 1 ] While this simplification holds for many systems, electron–electron interactions may be very important for certain properties in materials. For example, the theory covering much of superconductivity is BCS theory , in which the attraction of pairs of electrons to each other, termed " Cooper pairs ", is the mechanism behind superconductivity. One major effect of electron–electron interactions is that electrons distribute around the ions so that they screen the ions in the lattice from other electrons. [ citation needed ] For an example of the independent electron approximation's usefulness in quantum mechanics , consider an N -atom crystal with one free electron per atom (each with atomic number Z ). Neglecting spin, the Hamiltonian of the system takes the form: [ 1 ] where ℏ {\displaystyle \hbar } is the reduced Planck constant , e is the elementary charge , m e is the electron rest mass , and ∇ i {\displaystyle \nabla _{i}} is the gradient operator for electron i . The capitalized R I {\displaystyle \mathbf {R} _{I}} is the I th lattice location (the equilibrium position of the I th nucleus) and the lowercase r i {\displaystyle \mathbf {r} _{i}} is the i th electron position. The first term in parentheses is called the kinetic energy operator while the last two are simply the Coulomb interaction terms for electron–nucleus and electron–electron interactions, respectively. If the electron–electron term were negligible, the Hamiltonian could be decomposed into a set of N decoupled Hamiltonians (one for each electron), which greatly simplifies analysis. The electron–electron interaction term, however, prevents this decomposition by ensuring that the Hamiltonian for each electron will include terms for the position of every other electron in the system. [ 1 ] If the electron–electron interaction term is sufficiently small, however, the Coulomb interaction terms can be approximated by an effective potential term, which neglects electron–electron interactions. [ 1 ] This is known as the independent electron approximation . [ 1 ] Bloch's theorem relies on this approximation by setting the effective potential term to a periodic potential of the form V ( r ) {\displaystyle V(\mathbf {r} )} that satisfies V ( r + R j ) = V ( r ) {\displaystyle V(\mathbf {r} +\mathbf {R} _{j})=V(\mathbf {r} )} , where R j {\displaystyle \mathbf {R} _{j}} is any Bravais lattice vector (see Bloch's theorem ). [ 1 ] This approximation can be formalized using methods from the Hartree–Fock approximation or density functional theory . [ 1 ]
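The displayed Hamiltonian is missing from this copy of the text. A standard reconstruction consistent with the surrounding description (a kinetic term and an electron–nucleus Coulomb attraction for each electron, plus the electron–electron Coulomb repulsion, written in Gaussian units) would be:

```latex
H \;=\; \sum_{i=1}^{N}\left(-\,\frac{\hbar^{2}}{2m_{e}}\nabla_{i}^{2}
        \;-\;\sum_{I=1}^{N}\frac{Ze^{2}}{\left|\mathbf{r}_{i}-\mathbf{R}_{I}\right|}\right)
      \;+\;\frac{1}{2}\sum_{i\neq j}\frac{e^{2}}{\left|\mathbf{r}_{i}-\mathbf{r}_{j}\right|}
```

Dropping the last sum and replacing it with an effective one-body potential is exactly the independent electron approximation described above.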
https://en.wikipedia.org/wiki/Independent_electron_approximation
An independent equation is an equation in a system of simultaneous equations which cannot be derived algebraically from the other equations. [ 1 ] The concept typically arises in the context of linear equations . If it is possible to duplicate one of the equations in a system by multiplying each of the other equations by some number (potentially a different number for each equation) and summing the resulting equations, then that equation is dependent on the others. But if this is not possible, then that equation is independent of the others. If an equation is independent of the other equations in its system, then it provides information beyond that which is provided by the other equations. In contrast, if an equation is dependent on the others, then it provides no information not contained in the others collectively, and the equation can be dropped from the system without any information loss. [ 2 ] The number of independent equations in a system equals the rank of the augmented matrix of the system—the system's coefficient matrix with one additional column appended, that column being the column vector of constants. The number of independent equations in a system of consistent equations (a system that has at least one solution) can never be greater than the number of unknowns. Equivalently, if a system has more independent equations than unknowns, it is inconsistent and has no solutions. The concepts of dependence and independence of systems are partially generalized in numerical linear algebra by the condition number , which (roughly) measures how close a system of equations is to being dependent (a condition number of infinity is a dependent system, and a system of orthogonal equations is maximally independent and has a condition number close to 1.)
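As a brief illustration of counting independent equations via the rank of the augmented matrix, here is a sketch assuming NumPy; the example system is invented for the purpose.

```python
import numpy as np

# System:  x + y = 2,   2x + 2y = 4 (dependent on the first),   x - y = 0
A = np.array([[1.0,  1.0],
              [2.0,  2.0],
              [1.0, -1.0]])
b = np.array([[2.0], [4.0], [0.0]])

augmented = np.hstack([A, b])
print(np.linalg.matrix_rank(augmented))   # 2 -> only two independent equations
print(np.linalg.matrix_rank(A))           # also 2 -> the system is consistent (x = y = 1)
```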
https://en.wikipedia.org/wiki/Independent_equation
An Independent Hardware Vendor (IHV) is a company that designs, manufactures or sells hardware or peripherals compatible with operating systems . [ 1 ] Examples of Independent hardware vendors are Intel , AMD and Samsung . [ 1 ]
https://en.wikipedia.org/wiki/Independent_hardware_vendor
An independent water and power plant ( IWPP ) or an integrated water and power project is a combined facility which serves as both a desalination plant and a power plant . IWPPs are more common in the Middle East , where demand for both electricity and desalinated water is high. [ 1 ] Independent water and power producers negotiate both a feed-in power tariff and a water tariff in the same deal with the utility company, which also purchases both products. IWPPs tend to have an installed capacity of over 1 gigawatt (1,000 megawatts ) and generate power in a typical thermal power station setup. Seawater is purified by integrating MSF , MED , TVC , or RO water desalination technologies with the power plant, thus increasing overall efficiency. [ 1 ]
https://en.wikipedia.org/wiki/Independent_water_and_power_plant
In mathematics , an indeterminate or formal variable is a variable (a symbol , usually a letter) that is used purely formally in a mathematical expression , but does not stand for any value. [ 1 ] [ 2 ] [ better source needed ] In analysis , a mathematical expression such as 3 x 2 + 4 x {\displaystyle 3x^{2}+4x} is usually taken to represent a quantity whose value is a function of its variable x {\displaystyle x} , and the variable itself is taken to represent an unknown or changing quantity. Two such functional expressions are considered equal whenever their value is equal for every possible value of x {\displaystyle x} within the domain of the functions . In algebra , however, expressions of this kind are typically taken to represent objects in themselves, elements of some algebraic structure – here a polynomial , element of a polynomial ring . A polynomial can be formally defined as the sequence of its coefficients , in this case [ 0 , 4 , 3 ] {\displaystyle [0,4,3]} , and the expression 3 x 2 + 4 x {\displaystyle 3x^{2}+4x} or more explicitly 0 x 0 + 4 x 1 + 3 x 2 {\displaystyle 0x^{0}+4x^{1}+3x^{2}} is just a convenient alternative notation, with powers of the indeterminate x {\displaystyle x} used to indicate the order of the coefficients. Two such formal polynomials are considered equal whenever their coefficients are the same. Sometimes these two concepts of equality disagree. Some authors reserve the word variable to mean an unknown or changing quantity, and strictly distinguish the concepts of variable and indeterminate . Other authors indiscriminately use the name variable for both. Indeterminates occur in polynomials , rational fractions (ratios of polynomials), formal power series , and, more generally, in expressions that are viewed as independent objects. A fundamental property of an indeterminate is that it can be substituted with any mathematical expression to which the same operations apply as the operations applied to the indeterminate. Some authors of abstract algebra textbooks define an indeterminate over a ring R as an element of a larger ring that is transcendental over R . [ 3 ] [ 4 ] [ 5 ] This uncommon definition implies that every transcendental number and every nonconstant polynomial must be considered as indeterminates. A polynomial in an indeterminate X {\displaystyle X} is an expression of the form a 0 + a 1 X + a 2 X 2 + … + a n X n {\displaystyle a_{0}+a_{1}X+a_{2}X^{2}+\ldots +a_{n}X^{n}} , where the a i {\displaystyle a_{i}} are called the coefficients of the polynomial. Two such polynomials are equal only if the corresponding coefficients are equal. [ 6 ] In contrast, two polynomial functions in a variable x {\displaystyle x} may be equal or not at a particular value of x {\displaystyle x} . For example, the functions 2 + 3 x {\displaystyle 2+3x} and 5 + 2 x {\displaystyle 5+2x} are equal when x = 3 {\displaystyle x=3} and not equal otherwise. But the two polynomials 2 + 3 X {\displaystyle 2+3X} and 5 + 2 X {\displaystyle 5+2X} are unequal, since 2 does not equal 5, and 3 does not equal 2. In fact, 2 + 3 X = a + b X {\displaystyle 2+3X=a+bX} does not hold unless a = 2 {\displaystyle a=2} and b = 3 {\displaystyle b=3} . This is because X {\displaystyle X} is not, and does not designate, a number. The distinction is subtle, since a polynomial in X {\displaystyle X} can be changed to a function in x {\displaystyle x} by substitution. But the distinction is important because information may be lost when this substitution is made.
For example, when working modulo 2 , we have that 0 − 0 2 = 0 {\displaystyle 0-0^{2}=0} and 1 − 1 2 = 0 {\displaystyle 1-1^{2}=0} , so the polynomial function x − x 2 {\displaystyle x-x^{2}} is identically equal to 0 for x {\displaystyle x} having any value in the modulo-2 system. However, the polynomial X − X 2 {\displaystyle X-X^{2}} is not the zero polynomial, since the coefficients, 0, 1 and −1, respectively, are not all zero. A formal power series in an indeterminate X {\displaystyle X} is an expression of the form a 0 + a 1 X + a 2 X 2 + … {\displaystyle a_{0}+a_{1}X+a_{2}X^{2}+\ldots } , where no value is assigned to the symbol X {\displaystyle X} . [ 7 ] This is similar to the definition of a polynomial, except that an infinite number of the coefficients may be nonzero. Unlike the power series encountered in calculus, questions of convergence are irrelevant (since there is no function at play). So power series that would diverge for values of x {\displaystyle x} , such as 1 + x + 2 x 2 + 6 x 3 + … + n ! x n + … {\displaystyle 1+x+2x^{2}+6x^{3}+\ldots +n!x^{n}+\ldots \,} , are allowed. Indeterminates are useful in abstract algebra for generating mathematical structures . For example, given a field K {\displaystyle K} , the set of polynomials with coefficients in K {\displaystyle K} is the polynomial ring with polynomial addition and multiplication as operations. In particular, if two indeterminates X {\displaystyle X} and Y {\displaystyle Y} are used, then the polynomial ring K [ X , Y ] {\displaystyle K[X,Y]} also uses these operations, and convention holds that X Y = Y X {\displaystyle XY=YX} . Indeterminates may also be used to generate a free algebra over a commutative ring A {\displaystyle A} . For instance, with two indeterminates X {\displaystyle X} and Y {\displaystyle Y} , the free algebra A ⟨ X , Y ⟩ {\displaystyle A\langle X,Y\rangle } includes sums of strings in X {\displaystyle X} and Y {\displaystyle Y} , with coefficients in A {\displaystyle A} , and with the understanding that X Y {\displaystyle XY} and Y X {\displaystyle YX} are not necessarily identical (since the free algebra is by definition non-commutative).
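A short Python sketch (not from the source) of the distinction discussed above: over the integers modulo 2, the polynomial X − X² has nonzero coefficients, yet the associated polynomial function vanishes at every point.

```python
# Represent a polynomial by its coefficient list [a0, a1, a2, ...] with
# coefficients reduced mod 2; X - X^2 becomes [0, 1, 1] since -1 ≡ 1 (mod 2).
poly = [0, 1, 1]

def is_zero_polynomial(coeffs):
    """Formal equality: a polynomial is zero only if all coefficients are zero (mod 2)."""
    return all(c % 2 == 0 for c in coeffs)

def evaluate(coeffs, x):
    """The polynomial *function* obtained by substituting a value for X, mod 2."""
    return sum(c * x**k for k, c in enumerate(coeffs)) % 2

print(is_zero_polynomial(poly))             # False: X - X^2 is not the zero polynomial
print([evaluate(poly, x) for x in (0, 1)])  # [0, 0]: the function is identically zero mod 2
```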
https://en.wikipedia.org/wiki/Indeterminate_(variable)
In biology and botany , indeterminate growth is growth that is not terminated, in contrast to determinate growth that stops once a genetically predetermined structure has completely formed. Thus, a plant that grows and produces flowers and fruit until killed by frost or some other external factor is called indeterminate. For example, the term is applied to tomato varieties that grow in a rather gangly fashion, producing fruit throughout the growing season. In contrast, a determinate tomato plant grows in a more bushy shape and is most productive for a single, larger harvest, then either tapers off with minimal new growth or fruit or dies. In reference to an inflorescence (a shoot specialised for bearing flowers, and bearing no leaves other than bracts ), an indeterminate type (such as a raceme ) is one in which the first flowers to develop and open are from the buds at the base, followed progressively by buds nearer to the growing tip. The growth of the shoot is not impeded by the opening of the early flowers or development of fruits and its appearance is of growing, producing, and maturing flowers and fruit indefinitely. In practice the continued growth of the terminal end necessarily peters out sooner or later, though without producing any definite terminal flower, and in some species it may stop growing before any of the buds have opened. Not all plants produce indeterminate inflorescences however; some produce a definite terminal flower that terminates the development of new buds towards the tip of that inflorescence. In most species that produce a determinate inflorescence in this way, all of the flower buds are formed before the first ones begin to open, and all open more or less at the same time. In some species with determinate inflorescences however, the terminal flower blooms first, which stops the elongation of the main axis, but side buds develop lower down. One type of example is Dianthus ; another type is exemplified by Allium ; and yet others, by Daucus . In zoology, indeterminate growth refers to the condition where animals grow rapidly when young, and continue to grow after reaching adulthood although at a slower pace. [ 1 ] It is common in fish, amphibians, reptiles, and many molluscs. [ 2 ] The term also refers to the pattern of hair growth sometimes seen in humans and a few domestic breeds, where hair continues to grow in length until it is cut. Some mushrooms – notably Cantharellus californicus – also exhibit indeterminate growth. [ 3 ]
https://en.wikipedia.org/wiki/Indeterminate_growth
In mathematics , particularly in number theory , an indeterminate system has fewer equations than unknowns but an additional set of constraints on the unknowns, such as restrictions that the values be integers. [ 1 ] In modern times indeterminate equations are often called Diophantine equations . [ 2 ] [ 3 ] : iii An example linear indeterminate equation arises from imagining two equally rich men, one with 5 rubies, 8 sapphires, 7 pearls and 90 gold coins; the other with 7 rubies, 9 sapphires, 6 pearls and 62 gold coins; the problem is to find the prices (y, c, n) of the respective gems in gold coins. As they are equally rich: 5 y + 8 c + 7 n + 90 = 7 y + 9 c + 6 n + 62 {\displaystyle 5y+8c+7n+90=7y+9c+6n+62} Bhāskara II gave a general approach to this kind of problem by assigning a fixed integer to one (or N-2 in general) of the unknowns, e.g. n = 1 {\displaystyle n=1} , resulting in a series of possible solutions like (y, c, n)=(14, 1, 1), (13, 3, 1). [ 3 ] : 43 For given integers a , b and n , the general linear indeterminate equation is a x + b y = n {\displaystyle ax+by=n} with unknowns x and y restricted to integers. The necessary and sufficient condition for solutions is that the greatest common divisor , ( a , b ) {\displaystyle (a,b)} , divides n . [ 1 ] : 11 Early mathematicians in both India and China studied indeterminate linear equations with integer solutions. [ 4 ] Indian astronomer Aryabhata developed a recursive algorithm to solve indeterminate equations now known to be related to Euclid's algorithm . [ 5 ] The name of the Chinese remainder theorem relates to the view that indeterminate equations arose in these Asian mathematical traditions, but it is likely that ancient Greeks also worked with indeterminate equations. [ 4 ] The first major work on indeterminate equations appears in Diophantus ' Arithmetica in the 3rd century AD. Diophantus sought solutions constrained to be rational numbers , but Pierre de Fermat 's work in the 1600s focused on integer solutions and introduced the idea of characterizing all possible solutions rather than any one solution. [ 6 ] In modern times integer solutions to indeterminate equations have come to be called analysis of Diophantine equations . [ 3 ] : iii The original paper by Henry John Stephen Smith that defined the Smith normal form was written for linear indeterminate systems. [ 7 ] [ 8 ]
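A sketch (standard library only; function names are illustrative) of solving the general linear indeterminate equation ax + by = n with the extended Euclidean algorithm. It returns one integer solution when gcd(a, b) divides n; all other solutions then follow as x = x0 + (b/g)t, y = y0 − (a/g)t for integer t.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_linear_diophantine(a, b, n):
    """One integer solution of a*x + b*y = n, or None if gcd(a, b) does not divide n."""
    g, x, y = extended_gcd(a, b)
    if n % g != 0:
        return None
    return x * (n // g), y * (n // g)

print(solve_linear_diophantine(12, 8, 20))   # (5, -5): 12*5 + 8*(-5) = 20
```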
https://en.wikipedia.org/wiki/Indeterminate_system
In databases, an index is a data structure, part of the database, used by a database system to efficiently navigate access to user data . Index data are system data distinct from user data, and consist primarily of pointers . Changes in a database (by insert, delete, or modify operations) may require indexes to be updated to maintain accurate user data accesses. [ 1 ] Index locking is a technique used to maintain index integrity. A portion of an index is locked during a database transaction when this portion is being accessed by the transaction as a result of an attempt to access related user data. Additionally, special database system transactions (not user-invoked transactions) may be invoked to maintain and modify an index, as part of a system's self-maintenance activities. When a portion of an index is locked by a transaction, other transactions may be blocked from accessing this index portion (blocked from modifying, and even from reading it, depending on lock type and needed operation). The index locking protocol guarantees that the phantom read phenomenon won't occur. The index locking protocol states: [ 1 ] Specialized concurrency control techniques exist for accessing indexes. These techniques depend on the index type, and take advantage of its structure. They are typically much more effective than applying to indexes the common concurrency control methods used for user data. Notable and widely researched are specialized techniques for B-trees ( B-Tree concurrency control [ 2 ] ), which are regularly used as database indexes. Index locks are used to coordinate threads accessing indexes concurrently, and are typically shorter-lived than the common transaction locks on user data. In professional literature, they are often called latches . [ 2 ]
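As a purely conceptual sketch (not an actual database engine) of the latching idea described above, the following Python fragment guards each index node with its own short-lived lock while threads read or modify it; the class and method names are invented for illustration.

```python
import threading

class IndexNode:
    """A node of an (abstract) database index, protected by its own latch."""
    def __init__(self, keys=None):
        self.keys = sorted(keys or [])
        self.latch = threading.Lock()   # short-lived lock, held only per operation

    def contains(self, key):
        # The latch is held only for the duration of the read on this node,
        # unlike transaction locks on user data, which are held until commit.
        with self.latch:
            return key in self.keys

    def insert(self, key):
        with self.latch:
            if key not in self.keys:
                self.keys.append(key)
                self.keys.sort()

node = IndexNode([10, 20, 30])
threads = [threading.Thread(target=node.insert, args=(k,)) for k in (15, 25)]
for t in threads: t.start()
for t in threads: t.join()
print(node.contains(25), node.keys)   # True [10, 15, 20, 25, 30]
```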
https://en.wikipedia.org/wiki/Index_locking
This is an alphabetical list of articles pertaining specifically to aerospace engineering . For a broad overview of engineering, see List of engineering topics . For biographies, see List of engineers .
https://en.wikipedia.org/wiki/Index_of_aerospace_engineering_articles
Articles related to anatomy include:
https://en.wikipedia.org/wiki/Index_of_anatomy_articles
This is an alphabetical index of articles related to architecture .
https://en.wikipedia.org/wiki/Index_of_architecture_articles
Biochemistry is the study of the chemical processes in living organisms . It deals with the structure and function of cellular components such as proteins , carbohydrates , lipids , nucleic acids and other biomolecules . Articles related to biochemistry include: 2-amino-5-phosphonovalerate - 3' end - 5' end ABC-Transporter Genes - abl gene - acetic acid - acetyl CoA - acetylcholine - acetylcysteine - acid - acidic fibroblast growth factor - acrosin - actin - action potential - activation energy - active site - active transport - adenosine - adenosine diphosphate (ADP) - adenosine monophosphate (AMP) - adenosine triphosphate (ATP) - adenovirus - adrenergic receptor - adrenodoxin - aequorin - aerobic respiration - agonist - alanine - albumin - alcohol - alcoholic fermentation - alicyclic compound - aliphatic compound - alkali - allosteric site - allostery - allotrope - allotropy - alpha adrenergic receptor - alpha helix - alpha-1 adrenergic receptor - alpha-2 adrenergic receptor - alpha-beta T-cell antigen receptor - alpha-fetoprotein - alpha-globulin - alpha-macroglobulin - alpha-MSH - Ames test - amide - amine - amino - amino acid - amino acid receptor - amino acid sequence - amino acid sequence homology - aminobutyric acid - ammonia - AMPA receptor - amyloid - anabolism - anaerobic respiration - analytical chemistry - androgen receptor - angiotensin - angiotensin II - angiotensin receptor - ankyrin - annexin II - antibiotic - antibody - apoenzyme - apolipoprotein - apoptosis - aquaporin - archaea - arginine - argipressin - aromatic amine - aromatic compound - arrestin - Arrhenius equation - aryl hydrocarbon receptor - asparagine - aspartic acid - atom - atomic absorption spectroscopy - atomic mass - atomic mass unit - atomic nucleus - atomic number - atomic orbital - atomic radius - Atomic weight - ATP synthase - ATPase - atrial natriuretic factor - atrial natriuretic factor receptor - Avogadro constant - axon B cell - bacteria - bacterial conjugation - bacterial outer membrane protein - bacterial protein - bacteriorhodopsin - base (chemistry) - base pair - base sequence - basic fibroblast growth factor - Bcl-2 - bcr-abl fusion protein - benzene - benzene ring - beta-2 microglobulin - beta adrenergic receptor - beta sheet - beta-1 adrenergic receptor - beta-2 adrenergic receptor - beta-thromboglobulin - bioaccumulation - biochemistry - biodiversity - bioethics - biogenic amine receptor - bioinformatics - biological membrane - biologist - biology - biomechanics - biomedical model - biomolecule - biophysics - biopolymer - biosalinity - biotechnology - BLAST - blood proteins - boiling point - Boltzmann distribution - Boltzmann principle - bombesin - bombesin receptor - bone morphogenetic protein - bradykinin - bradykinin receptor - BRCA1 - buffer solution C-terminus - C4 photosynthesis - cadherin - calbindin - calcitonin - calcitonin gene-related peptide - calcitonin gene-related peptide receptor - calcitonin receptor - calcitriol receptor - calcium channel - calcium signaling - calcium-binding protein - calmodulin - calmodulin-binding protein - Calvin cycle - CAM photosynthesis - CAM plants - cancer - capsid - carbohydrate - carbon - carbon fixation - carboxylic acid - carcinoembryonic antigen - carrier - carrier protein - CAS registry number - casein - catabolism - catalyst - catalytic domain - CCR5 receptor - CD4 antigen - CD45 antigen - CD95 antigen - CDC28 protein kinase - cell - cell adhesion molecule - cell biology - cell cycle protein - cell membrane - cell membrane transport - cell 
nucleus - cell surface receptor - cellular respiration - cellulose - centriole - centromere - centrosome - chaperone - chelation - chemical biology - chemical bond - chemical compound - conformation - chemical element - chemical equilibrium - chemical formula - chemical nomenclature - chemical property - chemical reaction - chemical series - chemical thermodynamics - cheminformatics - chemiosmosis - chemiosmotic hypothesis - chemiosmotic potential - chemist - chemistry - chemistry basic topics - chemotroph - chemokine receptor - chemoreceptor - chiasma - chimera (protein) - chimeric protein - chirality - chloride channel - chlorophyll - chloroplast - chloroplast membrane - cholecystokinin receptor - cholesterine - cholinergic receptor - chorionic gonadotropin - chromatid - chromatin - ciclosporin - chromatography - chromosomal crossover - chromosome - chromosome walking - cilium - circular dichroism - cis face - citric acid - citric acid cycle - cladistics - cloning - coenzyme - cofactor (biochemistry) - colchicine - collagen - colloid - colony-stimulating factor - colony stimulating factor 1 receptor - colorimeter - comparative biochemistry - competitive inhibition - complement 3A - complement 5A - complement factor B - complement membrane attack complex - complement receptor - complex - computational biology - computational chemistry - computational genomics - concanavalin A - concentration - concentration gradient - consensus sequence - conserved sequence - cooperative - cooperative binding - cooperativity - cooperativity cellular respiration - corticotropin - corticotropin receptor - corticotropin-releasing hormone - corticotropin-releasing hormone receptor - cotransport metabolism - covalent bond - covalent radius - CpG island - cristae - cryptobiology - crystal structure - crystallography - cuticula - CXCR4 receptor - cyclic AMP receptor - cyclic AMP receptor protein - cyclic AMP-responsive DNA-binding protein - cyclic electron flow - cyclic nucleotide - cyclic peptide - cyclin - cyclin A - cyclin B - cyclin E - cyclin-dependent kinase - cycloleucine - cyclosporin - cyclosporine - cystatin - cysteine - cystic fibrosis transmembrane conductance regulator - cytochrome B - cytochrome C - cytochrome P-450 - cytochrome P-450 CYP1A1 - cytochrome C oxidase - cytokine receptor - cytoplasm - cytoplasmic and nuclear receptor - cytosine - cytoskeletal protein - cytoskeleton - cytosol - cytotoxic T cell dactinomycin - decarboxylation reaction - delta opioid receptor - denaturation (biochemistry) - dendrite - dendritic cell - dendritic spine - deoxyribonucleoprotein - deoxyribose - desmopressin - deuterium - developmental biology - dialysis (chemical) - diffusion - dimer - dinucleotide repeat - diploid - disaccharide - dissociation constant - disulfide bond - disulfide bridge - DNA - DNA fragmentation - DNA replication - DNA sequence - DNA topology - DNA transposable element - DNA virus - DNA-binding protein - dopamine D1 receptor - dopamine D2 receptor - dopamine receptor - double helix - Drosophila - drugs - dynorphin eIF-2 - eIF-2 kinase - electrochemical potential - electron - electron capture - electron configuration - electron microscopy - electron shell - electron transport chain - electron volt - electronegativity - electrophile - electrophoresis - electrophysiology - element - element symbol - ELISA - ELISPOT - embryo - embryonal development - emulsion - endergonic reaction - endodermis - endomembrane system - endoplasmic reticulum - endothelin receptor - endothelin-1 - energy 
decomposition cycles - energy level - enhancer - enkephalin - enthalpy - entomology - entropy - env gene product - environmental chemistry - enzyme - epidermal growth factor - epidermal growth factor receptor - epidiorite - epigenetics - epinephrine - equine gonadotropin - erbA gene - erbB gene - erbB-2 gene - erbB-2 receptor - erythropoietin - erythropoietin receptor - essential amino acid - ester - estradiol receptor - estrogen receptor - ethanol - ether - eukaryote - evolution - evolutionary biology - evolutionary developmental biology - evolutionary tree - excretion - exergonic reaction - exon - extracellular matrix protein - eye proteins fab immunoglobulin - facilitated diffusion - factor VIII - FADH - FADH2 - Fat - Fatty acid - fc immunoglobulin - fc receptor - feedback inhibition - fermentation - fetal protein - fibroblast growth factor - fibroblast growth factor receptor - fibronectin - Fick's law of diffusion - Filtration - fitness (biology) - fitness landscape - flagellum - flavin adenine dinucleotide - flavine - flavoprotein - fluid mosaic model - fms gene - Formaldehyde - fos gene - free energy - freezing point - FSH receptor - functional group - fungal protein - fungi - fusion oncogene protein G protein - G protein-coupled receptor - G3P - GABA - GABA receptor - GABA-A receptor - gag-onc fusion protein - galanin - gamete - gamma-chain immunoglobulin - gamma-delta T-cell antigen receptor - gastrin - gastrointestinal hormone receptor - gastrula - gel electrophoresis - gene - gene expression - gene pool - gene regulatory network - genetic carrier - genetic code - genetic drift - genetic engineering - genetic fingerprint - genetic recombination - genetics - genome - genomics - genotype - glial fibrillary acidic protein - globin - glucagon - glucagon receptor - glucocorticoid receptor - glucose - glutamate - glutamate receptor - glutamic acid - glutamine - glycerine - glycine - glycine receptor - glycolipid - glycolysis - glycoprotein - gonadorelin - gradient - granulocyte colony-stimulating factor - granulocyte colony-stimulating factor receptor - granulocyte-macrophage colony-stimulating factor - granulocyte-macrophage colony-stimulating factor receptor - granzyme - growth factor receptor - GTP-binding protein - GTPase hair cell - half-life - halobacteria - halotolerance - haploid - heat of fusion - heat of vaporization - heat shock protein - Hsp70 ( 70 kDa heat shock proteins ) - Hsp90 ( 90 kDa heat shock proteins ) - heavy-chain immunoglobulin - Hela cell - helminth protein - helper T cell - hemopexin - hemoglobin - herpes simplex virus protein vmw65 - heterocyclic compound - heterotroph - heterozygote - Hfr cell - Hill reaction - His tag - histamine H1 receptor - histamine H2 receptor - histamine receptor - histidine - histone - history of science and technology - HIV receptor - holoenzyme - homeobox - homeodomain protein - homology - homoserine - homozygote - homunculus - hormone - housekeeping gene - Human Genome Project - hybridization - hydrocarbon - hydrogen - hydrogen bond - hydrogenation - hydrogen-deuterium exchange - hydrolysis - hydrolytic enzyme - hydrophilic - hydrophobe - hydrophobic - hydrophobicity analysis - hydroxyl IgA - IgE receptor - IGF type 1 receptor - IGF type 2 receptor - IgG - IgM - immediate-early protein - immune cell - immune system - immunoglobulin - immunoglobulin joining region - immunoglobulin variable region - immunologic receptor - immunology - In vivo - infrared spectroscopy - inhibin - inhibitor - inhibitory gi G-protein - Inorganic 
chemistry - insect protein - Insulin - insulin receptor - insulin-like growth factor I - Integral membrane protein - intein - intercellular adhesion molecule-1 - interferon receptor - interferon type I - interferon type II - interferon-alpha - interferon-beta - interleukin receptor - interleukin-1 receptor - interleukin-2 receptor - interleukin-3 - interleukin-3 receptor - intermediate filament - intermediate filament protein - intermembrane space - Intermolecular force - International Union of Pure and Applied Chemistry (IUPAC) - interphase - intracisternal A-particle gene - Intramolecular force - intron - Inverse agonist - invertebrate peptide receptor - invertebrate photoreceptor - Ion channel - ion channel gating - Ionic bond - ionization potential - iron–sulfur protein - isoenzyme - isoleucine - Isomer - Isothermal titration calorimeter - Isotopic tracer junk DNA kainic acid receptor - kallidin - kappa opioid receptor - kappa-chain immunoglobulin - karyoplasm - karyotype - kelvin - keratin - kinase - kinesin - kinetic energy - kinetic exclusion assay - kinetics - knock-out mouse - Krebs cycle lactalbumin - lactic acid - lactic acid autotroph - lactic fermentation - lagging strand - laminin - LDL receptor - Le Chatelier's principle - lectin - leucine - leucine-2-alanine enkephalin - leukotriene B4 receptor - LH - LH receptor - LHRH receptor - life - life form - ligand - light reactions - Lineweaver-Burk diagram - lipase - lipid - lipid anchored protein - lipid bilayer - lipoprotein - liquid - list of compounds - list of gene families - locus - luminescent protein - lymphocyte homing receptor - lysine - lysis - lysis buffer - lysozyme - lytic cycle macroevolution - macromolecular system - macromolecule - macrophage colony-stimulating factor - major histocompatibility complex - Malpighi body - Malpighi layer - marine biology - maslinic acid - mass spectrometer - maturation-promoting factor - mechanoreceptor - medicine - meiosis - melting point - membrane glycoprotein - membrane protein - membrane topology - membrane transport - memory B cell - memory T cell - Mendelian inheritance - metabolic pathway - metabolism - metabotropic glutamate receptor - metalloprotein - metaphase - metazoa - methionine - micelle - Michaelis-Menten kinetics - microbe - microbiology - microevolution - microfilament - microfilament protein - microsatellite - microscope - microtiter plate - microtubule-associated protein - mineralocorticoid receptor - minisatellite - mitochondrial membrane - mitochondrion - mitogen receptor - mitosis - mitotic spindle - mixture - modern evolutionary synthesis - molar volume - mole (unit) - molecular biology - molecular chaperone - molecular dynamics - molecular engineering - molecular evolution - molecular mechanics - molecular modelling - molecular orbital - molecular phylogeny - molecular sequence data - molecule - monoamine - monoclonal antibody - monomer - monosaccharide - monosaccharide transport protein - morphogenesis - morphogenetic field - mos gene - Mössbauer spectroscopy - MRI - MSH - mu opioid receptor - mu-chain immunoglobulin - mucin - Muller's ratchet - multiresistance - muscarinic receptor - muscle - muscle protein - mutagen - mutation - myc gene - mycology - myelin basic protein - myeloma protein - myosin N -formylmethionine - N-formylmethionine leucyl-phenylalanine - N-methyl-D-aspartate receptor - N-methylaspartate - N-terminus - NADH - NADPH - NaKATPase - native state - nef gene product - neoplasm protein - Nernst equation - nerve - nerve growth factor - nerve 
growth factor receptor - nerve tissue protein - nerve tissue protein S 100 - nervous system - neurobiology - neurofilament protein - neurokinin A - neurokinin K - neurokinin-1 receptor - neurokinin-2 receptor - neuron - neuronal cell adhesion molecule - neuropeptide - neuropeptide receptor - neuropeptide Y - neuropeptide Y receptor - neuroscience - neurotensin - neurotensin receptor - neurotransmitter - neurotransmitter receptor - neutral theory of molecular evolution - neutron - neutron activation analysis - NF-kappa B - nicotinic receptor - nitrogen - nitroglycerine - Nobel Prize in Chemistry - non-competitive inhibition - nuclear lamina - nuclear localization signal - nuclear magnetic resonance - NMR - nuclear protein - nucleic acid - nucleic acid regulatory sequence - nucleic acid repetitive sequence - nucleic acid sequence homology - nucleon - nucleophile - nucleoside - nucleosome - nucleotide - nutrition octreotide - odorant receptor - olfaction - olfactory receptor neuron - oligopeptide - oncogene - oncogene protein - oncogene proteins V-abl - oncogenic retroviridae protein - open reading frame - opioid receptor - opsin - optical isomerism - organ (anatomy) - organelle - organic chemistry - organic compound - organic nomenclature - organic reaction - organism - osmosis - osteocalcin - outer hair cell - outline of biochemical techniques - ovalbumin - oxidation - oxidation number - oxidation state - oxidative decarboxylation - oxidative phosphorylation - oxygen - oxytocin - oxytocin receptor P42 MAP kinase - p53 - pancreatic polypeptide - parathyroid hormone receptor - partial pressure - passive transport - Pauling scale - PCR - peptide - peptide bond - peptide elongation factor - peptide elongation factor tu - peptide fragment - peptide initiation factor - peptide receptor - peptide termination factor - peripheral membrane protein - pesticide - pH - phage display - pharmaceutical - pharmacist - pharmacology - phenol - phenotype - phenyl group - phenylalanine - Philadelphia chromosome - phospholipid - phospholipid bilayer - phosphopeptide - phosphoprotein - phosphorus - phosphorylation - phosphoserine - phosphothreonine - phosphotyrosine - photobiology - photolysis - photophosphorylation - photoreceptor - photorespiration - photosynthesis - photosystem I - photosystem II - phototransduction - phylogenetics - phylogeny - physical chemistry - physiology - phytohaemagglutinin - pituitary hormone receptor - pituitary hormone-regulating hormone receptor - plant protein - plasma membrane - plasmid - plasmin - plasminogen - platelet glycoprotein GPIb-IX complex - platelet membrane glycoprotein - platelet-derived growth factor - platelet-derived growth factor receptor - polymer - polymerase chain reaction - polymerization - polymyxin - polymyxin B - polyomavirus transforming antigen - polypeptide - polysaccharide - porphyrin - Posttranslational modification - potassium - potassium channel - potential energy - pregnancy proteins - primary nutritional groups - primary structure - primer - prion - progesterone receptor - prokaryote - prolactin - prolactin receptor - proline - promoter - prostaglandin e receptor - prostaglandin receptor - protein - protein biosynthesis - Protein Data Bank - protein design - protein expression - protein folding - protein isoform - protein nuclear magnetic resonance spectroscopy - protein P16 - protein P34cdc2 - protein precursor - protein structure prediction - protein subunit - protein synthesis - protein targeting - protein translocation - protein-tyrosine 
kinase - protein-tyrosine-phosphatase - proteinoid - proteomics - protirelin - proto-oncogene - proto-oncogene proteins - proto-oncogene protein C-kit - proto-oncogene proteins c-abl - proto-oncogene proteins c-bcl-2 - Proto-oncogene proteins c-fos - proto-oncogene proteins c-jun - proto-oncogene proteins c-mo - proto-oncogene proteins c-myc - proto-oncogene proteins c-raf - proton - proton pump - protozoan proteins - purine - purinergic P1 receptor - purinergic P2 receptor - purinergic receptor - pyridine - pyrimidine - pyruvate - pyruvate oxidation quantum chemistry - quaternary structure radioisotope - radioisotopic labelling - Raman spectroscopy - random coil - Ras gene - Ras protein - reading frame - receptor (biochemistry) - receptor antagonist - receptor protein-tyrosine kinase - recombinant fusion protein - recombinant interferon-gamma - recombinant protein - recombination - redox - redox reaction - redox system - reflux - replication origin - replicon - repressor - repressor protein - respiration (physiology) - restriction enzyme - retinoblastoma protein - retinoic acid receptor - retinol-binding protein - retroelement - retroviridae protein - retrovirus - Reverse transcriptase - RFLP - rho factor - rhodopsin - ribonucleoprotein - ribose - ribosomal protein - ribosomal protein S6 kinase - ribosome - RNA - RNA virus - RNA-binding protein - RNA-directed DNA polymerase - rod outer segment - rough ER sarcoplasmic reticulum - satellite DNA - scientific notation - SDS-PAGE - second messenger - second messenger system - secondary structure - secretin - selectin - sensory receptor - sequence (biology) - sequence homology - sequence motif - sequencing - serine - serotonin - serotonin receptor - serpin - sexual reproduction - SH3 domain - SI - sigma factor - signal peptide - signal recognition particle - signal sequence - signal transduction - sincalide - skeleton - skin - smooth ER - sodium channel - sodium-hydrogen antiporter - soluble - solution - solvation - solvent - somatomedin - somatomedin receptor - somatostatin - somatostatin receptor - somatotropin - somatotropin receptor - somatotropin-releasing hormone - somatropin - sp1 transcription factor - spectrin - spectroscopy - src gene - src-family kinase - SSRI - starch - stem cell - stereochemistry - steroid 17alpha-monooxygenase - steroid 21-monooxygenase - steroid receptor - stimulatory gs G-protein - stoichiometry - structural biology - structural domain - Structural formula - structural motif - substance P - substrate - sugar - sulfur - supercoil - superfamily - superoxide - surface immunoglobulin - surface plasmon resonance - suspension (chemistry) - synapse - synthetic vaccine - systems biology T cell - T-cell antigen receptors - tachykinin - tachykinin receptor - talin protein - tandem repeat sequence - taste bud - TATA box - tax gene product - taxonomy - telophase - tertiary structure - tetrodotoxin - thermochemistry - thermometer - thiamin - thioredoxin - threonine - thrombin - thrombin receptor - thrombomodulin - thromboxane receptor - thylakoid - thyroid hormone receptor - thyrotropin - thyrotropin receptor - thyrotropin-releasing hormone receptor - thyroxine - timeline of biology and organic chemistry - titration - tobacco mosaic virus - topoisomerase - toxin - trans-activator - transcription factor - transcription factor AP-1 - transducin - transformation - transforming growth factor - transforming growth factor alpha - transforming growth factor beta - transforming growth factor beta receptor - transient receptor 
potential - translation (biology) - transmembrane ATPase - transmembrane helix - transmembrane protein - transmembrane receptor - transport protein - transport vesicle - triiodothyronine - trinucleotide repeat - triose - tropomyosin - troponin - tryptophan - tubulin - tumor necrosis factors - tumor necrosis factor receptor - tyrosine - tyrosine 3-monooxygenase ubiquitin - urea - urea cycle - uric acid - UV/VIS spectroscopy vaccine - vacuole - valence - valine - van der Waals force - van der Waals radius - vapor pressure - vapour pressure - vasoactive intestinal peptide - vasoactive intestinal peptide receptor - vasopressin - vasopressin receptor - venom - vertebrate photoreceptor - vesicle - vestibular system - vimentin - viral envelope protein - viral oncogene protein - viral protein - virology - virus (biology) - vitamin - vitamin D-dependent calcium-binding protein - vitellogenin - vitronectin - von Willebrand factor water Y chromosome - yeast zymology
https://en.wikipedia.org/wiki/Index_of_biochemistry_articles
This is a list of topics in biodiversity . Abiotic stress — Adaptation — Agricultural biodiversity — Agroecological restoration — All-taxa biodiversity inventory — Alpha diversity — Applied ecology — Arca-Net — ASEAN Center for Biodiversity — ASEAN Heritage Parks — Aquatic biomonitoring — Axe Lake Swamp State Nature Preserve — Bank of Natural Capital — Beta diversity — BioBlitz — Biocomplexity — Biocultural diversity — Biodiversity action plan — Biodiversity and drugs — Biodiversity and food — Biodiversity banking — Biodiversity databases (list) — Biodiversity hotspot — Biodiversity in Israel, the West Bank, and the Gaza Strip — Biodiversity Indicators Partnership — Biodiversity informatics — Biodiversity Monitoring Switzerland — Biodiversity of Borneo — Biodiversity of Cape Town — Biogeography — Bioindicator — Bioinformatics — BIOPAT - Patrons for Biodiversity — Biorisk — Biosafety Clearing-House — BioSearch — Biota by conservation status (list) — Biosurvey — BioWeb — Body size and species richness — Box corer — Bray–Curtis dissimilarity — Caribbean Initiative — Carta di Siracusa on Biodiversity — Cartagena Protocol on Biosafety — Center for Biological Diversity — Centres of Plant Diversity — Chresonym — Comisión Nacional para el Conocimiento y Uso de la Biodiversidad — Conservation Biology — Conservation Commons — Conservation ethic — Conservation in Papua New Guinea — Conservation reliant species — Conservation status — Conservation status (biota - list) — Convention on Biological Diversity — Critically Endangered — Crop diversity — Data Deficient — Deforestation — Diversitas — Diversity-function debate — Diversity index — ECNC-European Centre for Nature Conservation — Ecological economics — Ecological effects of biodiversity — Ecological goods and services — Ecological restoration — Ecology — Economics of biodiversity — Ecosystem diversity — EDGE species (list) — Effect of climate change on plant biodiversity — Eichler's rule — Endemic Bird Areas of the World: Priorities for Biodiversity Conservation — Endemic Species in Slovakia — Endemism — Enzootic — Ethnic diversity — Ewens sampling formula — Extinct in the Wild — Extinction — Felidae Conservation Fund — Flora and vegetation of Turkey — Forest farming — Functional agrobiodiversity — Gamma diversity — Gene pool — Genetic diversity — Genetic erosion — Genetic pollution — Global 200 — Global Biodiversity Information Facility — Global biodiversity — Global Crop Diversity Trust — Global warming — Green Revolution — Habitat conservation — Habitat fragmentation — Heirloom plant — Heirloom tomato — Holocene extinction event — Indicator species — Indicator value — Insect biodiversity — Intact forest landscape — Inter-American Biodiversity Information Network — Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services — Intermediate Disturbance Hypothesis — International Cooperative Biodiversity Group — International Council for Game and Wildlife Conservation — International Day for Biological Diversity — International Institute of Tropical Agriculture — International Mechanism of Scientific Expertise on Biodiversity — International Treaty on Plant Genetic Resources for Food and Agriculture — International Union for Conservation of Nature — International Year of Biodiversity — IUCN Red List — IUCN Red List vulnerable species (list) — Key Biodiversity Areas — Land use, land-use change and forestry — Langtang National Park — Latitudinal gradients in species diversity — Least Concern — List of biodiversity databases — 
List of environmental issues — List of environmental topics — Livestock Keepers' Rights — Living Planet Index — Local Biodiversity Action Plan — Man and the Biosphere Programme — Measurement of biodiversity — Measurement of biodiversity (list) — Megadiverse countries — Millennium Ecosystem Assessment — Millennium Seed Bank Project — Monoculture — Monodominance — Mutation — NaGISA — National Biodiversity Centre (Singapore) — National Biodiversity Network — National Biological Information Infrastructure — Native Vegetation Management Framework — Natural environment — Natural heritage — Natural landscape — Nature — Nature Conservation Act vulnerable flora of Queensland (list) — NatureServe — NatureServe conservation status — Near Threatened — Niche apportionment models — Not Evaluated — Nutritional biodiversity — NatureServe vulnerable species (list) — Occupancy–abundance relationship — Organic farming and biodiversity — Park Grass Experiment — Parsa National Park — Phylogenetic diversity — Plant Resources of Tropical Africa — Range condition scoring — Rank abundance curve — Rare species — Rarefaction (ecology) — Reconciliation ecology — RECOrd (Local Biological Records Centre) — Regional Red List — Relative species abundance — Renkonen similarity index — Satoyama — SAVE Foundation — Seedbank — Seedy Sunday — Shivapuri Nagarjun National Park — Soil biodiversity — Species evenness — Species richness — Subsurface Lithoautotrophic Microbial Ecosystem — Sustainability — Sustainable development — Sustainable forest management — The Economics of Ecosystems and Biodiversity — Threatened species — Unified neutral theory of biodiversity — United Nations Decade on Biodiversity — University of California, Riverside Herbarium — Vulnerable animals — Vulnerable fauna of the United States — Vulnerable flora of Queensland, Nature Conservation Act list — Vulnerable plants — Vulnerable species — Vulnerable species, IUCN Red List — Vulnerable species, NatureServe (list) — Wild Solutions — Wildlife preserve — Wooded meadow — World Conservation Monitoring Centre — World Conservation Union — World Forestry Congress — World Network of Biosphere Reserves — Yasuni National Park
https://en.wikipedia.org/wiki/Index_of_biodiversity_articles
An index of biological integrity ( IBI ), also called an index of biotic integrity , is a scientific tool typically used to identify and classify water pollution problems, although there have been some efforts to apply the idea to terrestrial environments. [ 1 ] An IBI associates anthropogenic influences on a water body with biological activity in the water body, and is formulated using data developed from biosurveys . Biological integrity refers to how "pristine" an environment is and how well it functions relative to the potential or original state of the ecosystem before human alterations were imposed. [ 2 ] Biological integrity is built on the assumption that a decline in the values of an ecosystem's functions is primarily caused by human activity or alterations. The more an environment and its original processes are altered, the less biological integrity it holds, by definition, for the community as a whole. If these processes were to change over time naturally, without human influence, the integrity of the ecosystem would remain intact. Similar to the concept of ecosystem health , the integrity of an ecosystem relies heavily on the processes that occur within it, because those processes determine which organisms can inhabit an area and the complexities of their interactions. Deciding which of the many possible states or conditions of an ecosystem is appropriate or desirable is a political or policy decision. [ 2 ] To quantitatively assess changes in the composition of biological communities, IBIs are developed to reflect ecological complexity through statistical analysis. There is no single universal IBI, and developing metrics that consistently give an accurate assessment of the monitored population requires rigorous testing to confirm their validity for a given subject. IBIs are often region-specific and require experienced professionals to provide data of sufficient quality for a score to be assessed correctly. Because communities vary naturally, as do samples collected from a larger population, identifying robust statistics with acceptable variance is an area of active and important research. An IBI can be a powerful tool for identifying systemic impacts on the health of biological systems. IBIs are increasingly used to identify impairment , and to confirm the recovery of impaired waters, in the total maximum daily load process required by the Clean Water Act in the USA. Unlike chemical testing of water samples, which gives brief snapshots of chemical concentrations, an IBI captures the integrated net impact on the structure of a biological community. While the complete absence, and particularly the sudden disappearance, of suites of indicator species can constitute powerful evidence of a specific pollutant or stress factor, IBIs generally do not resolve a specific cause of impairment. The IBI concept was formulated by James Karr in 1981. [ 3 ] [ 4 ] To date, IBIs have been developed for fish , algae , macroinvertebrates , pupal exuviae (shed skins of Chironomidae ), vascular plants , and combinations of these. Comparatively little work has been done to develop IBIs for terrestrial ecosystems. Biosurvey protocols have been published for use in different waterbody types and ecoregions. One such publication is the Rapid Bioassessment Protocols manual for streams and rivers, issued by the U.S. Environmental Protection Agency (EPA). [ 5 ] Such protocols provide a structure for developing an IBI, which may include measures such as richness of taxa (species, genera , etc.) and the proportion of pollution-tolerant or intolerant taxa. It is possible to create IBIs for use by minimally trained monitoring personnel; however, the precision obtainable is lower than that achieved by trained professionals. Safeguards to ensure robustness despite potential misidentifications or protocol variations require careful testing. Ongoing quality control by established experts is needed to maintain data integrity , and the analysis of IBI results becomes more complex. Use of trained volunteers is being pioneered by government agencies responsible for monitoring large numbers of water bodies with limited resources, such as the Minnesota Pollution Control Agency (MPCA) and the local volunteer stream monitoring programs supported by the MPCA. [ 6 ] EPA has published guidance to assist volunteer programs in formulating IBIs and related findings. [ 7 ] While IBIs from such programs are legally admissible in US courts , defending the validity of conclusions based solely on such results is unlikely to be feasible. Agreement among multiple IBIs based on data collected by established professionals can be more conclusive. A case in point is the finding that stream IBI scores indicate significant impairment, or partial ecological collapse, where more than 10 to 15 percent of the immediately surrounding watershed is impervious due to urbanization . [ 8 ] Identifying the reasons for such impairments, and possible exceptions to these trends, is a major research challenge for academics studying cumulative watershed effects and the use of low-impact development techniques to mitigate the impacts of stormwater runoff pollution.
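The multimetric scoring described above can be illustrated with a small sketch. The following Python snippet is a minimal, hypothetical example of how raw biosurvey metrics might be converted into 1/3/5 scores against reference-condition thresholds and summed into a single index value; the metric names, threshold numbers, and function names are illustrative assumptions, not taken from Karr's fish IBI or any published index.

```python
# Illustrative sketch only: a toy multimetric index in the spirit of an IBI.
# All metric names, reference thresholds, and the 1/3/5 scoring scheme below
# are hypothetical placeholders, not values from any published IBI.

def score_metric(value, low, high, reverse=False):
    """Convert a raw metric into a 1/3/5 score by comparing it with
    hypothetical reference-condition thresholds."""
    if reverse:  # metrics that indicate degradation as they increase (e.g. % tolerant taxa)
        value = -value
        low, high = -high, -low
    if value >= high:
        return 5  # comparable to reference condition
    if value >= low:
        return 3  # moderately degraded
    return 1      # severely degraded

def toy_ibi(sample):
    """Sum per-metric scores from one biosurvey sample (a dict of raw metrics)."""
    scores = [
        score_metric(sample["taxa_richness"], low=10, high=20),
        score_metric(sample["intolerant_taxa"], low=2, high=5),
        score_metric(sample["pct_tolerant"], low=20, high=50, reverse=True),
    ]
    return sum(scores)  # ranges from 3 (worst) to 15 (best) with three metrics

# Example biosurvey summary for one stream reach (made-up numbers): prints 5,
# a low score consistent with a degraded site.
print(toy_ibi({"taxa_richness": 14, "intolerant_taxa": 1, "pct_tolerant": 60}))
```

In practice the choice of metrics, the reference thresholds, and the number of scoring classes are calibrated against regional reference sites, which is part of the rigorous, region-specific testing described above.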
https://en.wikipedia.org/wiki/Index_of_biological_integrity
This is a list of articles on biophysics .
https://en.wikipedia.org/wiki/Index_of_biophysics_articles
Biotechnology is a technology based on biology , especially when used in agriculture , food science , and medicine . Of the many different definitions available, the one formulated by the UN Convention on Biological Diversity is one of the broadest: "any technological application that uses biological systems, living organisms, or derivatives thereof, to make or modify products or processes for specific use." This page provides an alphabetical list of articles and other pages (including categories, lists, etc.) about biotechnology. Adeno-associated virus -- Agricultural biotechnology -- Agrobacterium -- Affimer -- Affymetrix -- Alcoholic beverages -- Category:Alcoholic beverages -- Algal fuel -- Amgen -- Antibiotic -- Antibody drug conjugate -- Antisense RNA -- Aptamer -- Assisted reproductive technology -- Artificial selection -- Bacteriology -- Biochemical engineering -- Biochip -- Biochemistry -- Biodiesel -- Bioengineering -- Bioethics -- Biofuel -- Biogas -- Biogen Idec -- Bioindicator -- Bioinformatics -- Category:Bioinformatics -- Bioleaching -- Biological agent -- Biological warfare -- Bioluminescence -- Biomedical Engineering -- Biomimetics -- Biomining -- Bionanotechnology -- Bionics -- Biopharmacology -- Biophotonics -- Bioprocessing -- Bioprospecting -- Bioreactor -- Bioremediation -- Biostatistics -- Biostimulation -- Biosynthesis -- Biotechnology -- Category:Biotechnology -- Category:Biotechnology companies -- Category:Biotechnology products -- Bt corn -- Cancer immunotherapy -- CAR T cell -- Cell therapy -- Chemogenomics -- Chimera (genetics) -- Chinese hamster -- Chinese Hamster Ovary cell -- Chiron Corp. -- Cloning -- Compost -- Composting -- Computational biology -- Connectomics -- Convention on Biological Diversity -- Chromatography -- CRISPR -- CRISPR gene editing -- Designer baby -- Directive on the patentability of biotechnological inventions -- DNA microarray -- DNA sequencing -- Dolly -- Dwarfing -- Enzymes -- Electroporation -- Environmental biotechnology -- Ethanol -- Eugenics -- Extremophiles in biotechnology -- Fermentation -- Category:Fermented foods -- Gene knockout -- Gene therapy -- Genentech -- Genetic engineering -- Genetically modified crops -- Genetically modified food -- Genetically modified food controversies -- Genetically modified maize -- Genetically modified organisms -- Genetics -- Genomics -- Genome editing -- Genzyme -- Global Knowledge Center on Crop Biotechnology -- Glycomics -- Golden rice -- Green fluorescent protein -- Human cloning -- Human Genome Project -- Immunology -- Immunotherapy -- Immune suppression -- Industrial biotechnology -- Interactomics -- Lipidomics -- Machine learning in bioinformatics -- MedImmune -- Metabolic engineering -- Metabolomics -- Metagenomics -- Microbial Fuel Cell -- Microfluidics -- Millennium Pharmaceuticals -- Model Organism -- Molecular Biology -- Molecular machines -- Monoclonal antibodies -- Mycofiltration -- Mycoremediation -- Nanobiotechnology -- Omics -- Penicillin -- Phosphatases -- Pfizer Inc. -- Phage therapy -- Pharmacogenomics -- Pharming (genetics) -- Plant-made pharmaceuticals -- Plantibody -- Proteomics -- Recombinant DNA -- Regulation of the release of genetically modified organisms -- Reporter gene -- Selective breeding -- Serono -- Shotgun sequencing -- Stem cell -- STR multiplex systems -- Sustainability -- Sustainable development -- Synthetic biology -- Terminator technology -- Transcriptomics -- Transgenic animal -- Transgenic plants -- Transgenic plant production -- Use of biotechnology in pharmaceutical manufacturing -- Vaccine -- Vector -- Virology -- Xenotransplantation -- Zoology
https://en.wikipedia.org/wiki/Index_of_biotechnology_articles
This is an alphabetical list of articles pertaining specifically to chemical engineering . Absorption -- Adsorption -- Analytical chemistry -- Bioaccumulate -- Biochemical engineering -- Biochemistry -- Biochemistry topics list -- Bioinformatics -- Biology -- Bioprocess Engineering -- Biomolecular engineering -- Bioinformatics -- Biomedical engineering -- Bioseparation -- Biotechnology -- Bioreactor -- Biotite -- Catalysis -- Catalytic cracking -- Catalytic reforming -- Catalytic reaction engineering -- Ceramics -- Certified Chartered Chemical Engineers -- Chartered Chemical Engineers -- Chemical engineering -- Chemical kinetics -- Chemical reaction -- Chemical synthesis -- Chemical vapor deposition (CVD) -- Chemical solution deposition -- Chemistry -- Chromatographic separation -- Circulating fluidized bed -- Combustion -- Computational fluid dynamics (CFD) -- Conservation of energy -- Conservation of mass -- Conservation of momentum -- Crystallization processes -- Deal-Grove model -- Dehumidification -- Dehydrogenation -- Depressurization -- Desorption -- Desulfonation -- Desulfurization -- Diffusion -- Distillation -- Drag coefficient -- Drying -- Electrochemical engineering -- Electrodialysis -- Electrokinetic phenomena -- Electrodeposition -- Electrolysis -- Electrolytic reduction -- Electroplating -- Electrostatic precipitation -- Electrowinning -- Emulsion -- Energy -- Engineering -- Engineering economics -- Enzymatic reaction -- Filtration -- Fluid dynamics -- Flow battery -- Fuel cell -- Fuel technology -- Gasification -- Heat transfer -- History of chemical engineering -- Hydrometallurgy -- Immobilization -- Inorganic chemistry -- Ion exchange -- Kinetics (physics) -- Laboratory -- Leaching -- Mass balance -- Mass transfer -- Materials science -- Medicinal chemistry -- Microelectronics -- Microfluidics -- Microreaction technology -- Mineral processing -- Mixing -- Momentum transfer -- Nanoengineering -- Nanotechnology -- Organic chemistry -- Periodic table -- Pharmacology -- Physical chemistry -- Plastic -- Polymer -- Process control -- Process design -- Process modeling -- Process safety -- Qualitative inorganic analysis -- Quantitative analysis -- Quantum chemistry -- Quartz -- Rate equation -- Reverse osmosis -- Science -- Separation processes -- Solid-state chemistry -- Solvent extraction -- Supercritical fluids -- Thermodynamics -- Timeline of chemical element discovery -- Transport phenomena -- Ultrafiltration -- Unit operation -- Volatility -- Water and waste water treatment -- Waste minimization -- Zeolite -- Zinc -- Zinnwaldite -- Zircon -- Zirconium -- Zone melting
https://en.wikipedia.org/wiki/Index_of_chemical_engineering_articles